5/23/2007

using SSD instead of platter disks

I believe that in the near future there will be a big change in storage systems: SSD will become mainstream. What would it bring to us DBAs? Here's a letter from an Oracle newsletter:

"Donald K. Burleson"

Using tiny data buffers with solid-state disks

When using SSD instead of platter disks, the Oracle architecture changes radically, and we have two options:

Option 1: Large RAM data buffers, solid-state disk files (SSD)

In this option we have higher overhead within Oracle, as it tries to manage the data buffer to reduce I/O, which is now a negligible time expense (RAM-to-RAM data transfer is very fast).

Option 2: Small RAM data buffers with solid-state disk files (SSD)

In this option we force a read from SSD into a tiny data buffer. The overhead of the repeated loads is negligible, and we have spared Oracle from having to manage a giant db_cache_size.
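As a sketch only, the small-buffer option might be configured like this; the 100M figure is a hypothetical value for illustration, not a recommendation from the newsletter:

```sql
-- Shrink the data buffer cache: with SSD-backed datafiles, repeated
-- physical reads are cheap, so a small cache may suffice.
ALTER SYSTEM SET db_cache_size = 100M SCOPE=SPFILE;
-- A SPFILE-scoped change takes effect at the next instance restart.
```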

SSD has the same super-fast access speeds, plus it backs up to disk without affecting performance (SSD has much higher I/O bandwidth than disk). Bandwidth is very important today, especially with the current plague of super-large disks. Disk bandwidth is more important than disk speed, which has been relatively "flat" for the past 30 years, while everything else (RAM, network, CPU) sees radically improved speed every year.

Today, many Oracle databases have shifted from being I/O-bound (a 32-bit RAM constraint) to CPU-bound (managing a large data buffer drives up CPU consumption). This I/O shift led Oracle to make a great change to the cost-based SQL optimizer. Traditionally, the decision trees were built from estimated I/O costs, and the default changed in 10g such that the SQL optimizer (the CBO) builds its decision-tree values based on estimated CPU costs. (This is why Oracle shops that are not 64-bit, and hence still I/O-bound, will want to change the costing back to the earlier value of "_optimizer_cost_model"=io.)
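The cost-model switch mentioned above uses the hidden parameter named in the text. A minimal sketch of the change follows; underscore parameters are undocumented, so in practice they should only be altered with Oracle Support's guidance:

```sql
-- Revert the 10g CBO from CPU-based costing to the older I/O-based
-- costing (hidden parameter; change only under support guidance).
ALTER SYSTEM SET "_optimizer_cost_model" = io SCOPE=BOTH;
```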

The only reason for having a data buffer cache is to reduce the probability of having to re-read the data block repeatedly from disk. When we have no more disk, the data buffer becomes redundant.

It would not surprise me if a future release of Oracle allowed for solid-state disks and removed the data buffer cache, but for now, the SSD block must be transferred into the data buffer to allow Oracle to manage the locks required for integrity and read consistency.

If we think of the data buffer as nothing more than a place for Oracle to set locks, then we can understand why a smaller data buffer gives faster performance. If we have duplicated RAM (the same data once on the SSD and yet again in db_cache_size), then we see higher management overhead from Oracle:

Large data buffers take more time for standard management tasks (e.g., DBWR scanning and writing dirty blocks), and in many cases smaller is better.
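One way to sanity-check whether a smaller cache would actually hurt is Oracle's own buffer cache advisory. The query below is a sketch against the standard v$db_cache_advice view (populated when db_cache_advice is enabled), which the newsletter itself does not mention:

```sql
-- Estimated physical reads at candidate cache sizes; a near-flat
-- estd_physical_read_factor across sizes suggests the cache can shrink
-- without a significant I/O penalty.
SELECT size_for_estimate, estd_physical_read_factor, estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
 ORDER BY size_for_estimate;
```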

Read more about Oracle SSD tuning as this article continues:

http://oracle-tips.c.topica.com/maafS19abwE3mbIGPSxb/