Major in-memory performance boosts for Oracle 12c


23 September 2013

Fresh from triumph with Oracle Team USA in the America's Cup sailing regatta, Oracle CEO Larry Ellison took to the stage to deliver three major announcements.

The first was the new in-memory capabilities of the Oracle 12c database, the second was the new M6-32 Big Memory Machine, and the third was a database-optimised backup appliance, the Oracle Database Backup Logging Recovery Appliance.

The Oracle 12c database's in-memory capabilities have been radically expanded, and their implementation hugely simplified. Users need only activate the option, specify how much memory to use and choose which tables or partitions to load.
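
As a rough illustration of how light that configuration is, the sketch below shows the kind of statements involved. It assumes the cx_Oracle Python driver, and the connection details, memory size, table and partition names are purely illustrative.

import cx_Oracle

# Connect as a privileged user (credentials and service name are illustrative).
conn = cx_Oracle.connect("admin_user", "password", "dbhost/orcl")
cur = conn.cursor()

# 1. Reserve part of the SGA for the in-memory column store
#    (takes effect after an instance restart).
cur.execute("ALTER SYSTEM SET INMEMORY_SIZE = 16G SCOPE=SPFILE")

# 2. Mark a whole table, or just one partition, for in-memory population.
cur.execute("ALTER TABLE sales INMEMORY")
cur.execute("ALTER TABLE orders MODIFY PARTITION orders_2013 INMEMORY")

conn.close()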

Analytics can now run 100 times faster or more. In one demonstration against three billion rows of data, the query ran more than 1,300 times faster. But the speed benefits were not confined to analytics: Ellison emphasised that there were gains for transaction processing too.

Ellison said that in the past, optimising for analytics could sometimes detract from transactional performance, but 12c delivers a doubling of transaction processing performance, with the benefits coming from a hybrid data format.

Traditionally, transactions run faster in a row format database, whereas analytics queries run faster in column format databases. Oracle 12c stores data in both formats simultaneously. This breakthrough in columnar technology allows performance leaps in both queries and transactions.
Ellison said there was near-zero overhead to using this new columnar technology.
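
The row-versus-column idea can be pictured with a toy example. The plain Python sketch below, using made-up order data, holds the same records once as rows and once as columns, showing why single-record changes favour the row layout while scans and aggregations favour the column layout. It is a conceptual illustration only, not Oracle's implementation, which keeps both representations transactionally consistent.

# The same three records held in row form and in column form.
# Conceptual sketch only -- not Oracle's implementation.

rows = [  # row format: each record stored together, good for transactions
    {"order_id": 1, "customer": "acme", "amount": 120.0},
    {"order_id": 2, "customer": "globex", "amount": 75.5},
    {"order_id": 3, "customer": "acme", "amount": 300.0},
]

columns = {  # column format: each attribute stored together, good for analytics
    "order_id": [1, 2, 3],
    "customer": ["acme", "globex", "acme"],
    "amount": [120.0, 75.5, 300.0],
}

# An OLTP-style update touches one complete record in the row store.
rows[1]["amount"] = 80.0

# An analytic query scans one contiguous column in the column store.
revenue_by_customer = {}
for cust, amt in zip(columns["customer"], columns["amount"]):
    revenue_by_customer[cust] = revenue_by_customer.get(cust, 0.0) + amt

# (In 12c the two representations are kept in sync automatically;
#  this toy example does not attempt that.)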

12c can scan billions of rows per second per CPU core. The scans use super-fast single instruction, multiple data (SIMD) vector instructions, and they can combine data from multiple tables to facilitate complex joins as part of these queries.
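
Loosely speaking, vectorised scanning means applying one operation to many column values at once rather than examining them one at a time. The NumPy sketch below, with randomly generated stand-in data, illustrates the shape of that idea; it is an analogy, not the SIMD machinery 12c actually uses.

import numpy as np

# Two in-memory columns of made-up data.
amounts = np.random.rand(1_000_000) * 1000
regions = np.random.randint(0, 50, amounts.size)

# Value-at-a-time scan: the predicate and the sum are applied one row at a time.
slow_total = sum(a for a, r in zip(amounts, regions) if r == 7)

# Vectorised scan: the predicate and the sum are applied to whole
# blocks of column values in single operations.
fast_total = amounts[regions == 7].sum()

assert np.isclose(slow_total, fast_total)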

On the transaction processing side, OLTP is slowed down by analytics indexes. The new system replaces them with the in-memory column store, which retains the analytics benefits while OLTP and batch processing run two to three times faster, free of the analytics index overhead. It also means administrators have less tuning to do, which makes design easier too.

With regard to the analytics performance, Ellison said "the answers are coming back faster than you can come up with the questions".

To take full advantage of these new in-memory capabilities in 12c, Ellison also announced the M6-32 Big Memory Machine. It has 32TB of RAM and SPARC M6 processors, each with 12 cores and 96 threads, with input/output in the range of terabytes per second. Oracle claims it is twice as fast and about one third the cost of an equivalent IBM machine.

There is also a clustering option that will allow the M6-32 to be attached via InfiniBand to Exadata storage cells.

The third major announcement was a new appliance optimised for backing up databases, which Ellison acknowledged was not the most imaginatively named: the Oracle Database Backup Logging Recovery Appliance. It is engineered specifically for database protection and delivers near-zero data loss, minimal impact on user performance, and a massively scalable architecture.

Ellison said that most backup software is optimised for file systems, not for databases. The Oracle appliance backs up the database once and then relies only on the differences to keep the backup image up to date. Because those updates are small compared to the overall image, the appliance need not be located close to the source of the backup, said Ellison; in fact, it could easily be provided as a cloud service. This keeps backup windows small, and point-in-time restorations are also possible.
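
The difference-based approach can be sketched in a few lines: hash the backup image in fixed-size blocks and ship only the blocks whose hashes have changed. The block size, sample data and helper names below are made up for illustration; this is the general idea, not the appliance's actual protocol.

import hashlib

BLOCK_SIZE = 8192  # illustrative block size

def block_hashes(image: bytes) -> list:
    """Hash each fixed-size block so changed blocks can be detected."""
    return [hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(image), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> dict:
    """Return only the blocks that differ from the previous image."""
    old_h = block_hashes(old)
    return {i: new[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
            for i, h in enumerate(block_hashes(new))
            if i >= len(old_h) or h != old_h[i]}

# A pretend four-block backup image, then the same image with one block changed.
baseline = bytes(BLOCK_SIZE * 4)
current = bytes(BLOCK_SIZE) + b"x" * BLOCK_SIZE + bytes(BLOCK_SIZE * 2)

delta = changed_blocks(baseline, current)  # only the changed block ships offsite
print(sorted(delta))                       # -> [1]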

Ellison went on to talk about the future of data centres, acknowledging that a leading trend is still the use of commodity hardware running virtualised environments and connected via Ethernet. While this approach keeps costs low, he said, it does not necessarily suit everything, and there will be an increasing need for pre-engineered systems for specific tasks in the data centre.

"We think that by designing the hardware and software together, you need less of them, less management of them with less floor space for them." These systems need less power, less configuration etc. Updates are easier and they are pretested and reliable.

The data centre of the future will still have commodity hardware at its core, said Ellison, but it will be accompanied by a collection of purpose-built machines for specific tasks that provide highly optimised performance, resilience and availability.
 

TechCentral Reporters
