It is fairly common knowledge that technological advances in flash memory make it a viable option for speeding up database performance. As the price of flash memory continues to plummet, driven largely by its extensive use in mobile devices, it is becoming a good alternative to disk storage. Here are some interesting points:
- Flash memory has roughly one-tenth the read latency of disk and involves no seek operations. Write operations, however, require an erase cycle first and can take at least twice as long as a disk write.
- Flash memory can be used either to extend RAM or to cache disk. Database folks tend to be interested in the former, since it effectively expands memory capacity and makes more data available; systems personnel look at the latter to speed up file system access.
- A database utilizes distinct sets of tablespaces or disk allocations to manage not just data, but also redo logs, temporary storage for sorts and joins, and rollback operations. Speeding up a database is therefore not limited to faster access to the data itself; it also involves other database activities such as redo log generation. Redo is written sequentially once and read back only a few times (during recovery, or to support multi-versioning), a write-once, read-rarely pattern that flash memory is particularly well suited for.
- Though external sorting involves writes, flash memory benefits from the random nature of the merge phase. Sorting has two major steps: sequentially writing out sorted runs, then randomly reading that written data back to merge the runs. In a typical operation these steps repeat multiple times, and the random read operations, which are typically parallelized, see significant performance gains on flash.
- Any computer program is built around its data structures, and taking full advantage of flash memory requires significant changes to them. In other words, existing databases are not flash-aware enough to make effective use of it. Much of the CPU time is wasted managing locks, latches and the buffer pool. The work by Rick Cattell is probably one of the cases against using Exadata.
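The two-step sort described above (write sorted runs sequentially, then merge them with random reads) can be sketched in a few lines of Python. This is a minimal illustration, not any particular database's implementation; the run size, file handling, and helper names are all assumptions for the example:

```python
import heapq
import os
import tempfile


def _write_run(sorted_values):
    # Step 1: write one sorted run sequentially to its own file.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.writelines(f"{v}\n" for v in sorted_values)
    return path


def external_sort(values, run_size=4):
    """Sort data too large for memory: sequential run writes,
    then a merge that reads across the run files."""
    run_files, run = [], []
    for v in values:
        run.append(v)
        if len(run) == run_size:
            run_files.append(_write_run(sorted(run)))
            run = []
    if run:
        run_files.append(_write_run(sorted(run)))

    # Step 2: heapq.merge pulls the next record from whichever run
    # currently has the smallest value -- the random, interleaved
    # read pattern that flash handles far better than spinning disk.
    streams = [open(path) for path in run_files]
    try:
        return [int(line) for line in heapq.merge(*streams, key=int)]
    finally:
        for s in streams:
            s.close()
        for path in run_files:
            os.remove(path)
```

With `run_size=3`, sorting eight values produces three run files and a single merge pass over them; a real external sort would use multi-megabyte runs and possibly several merge passes.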
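The redo-log point above (written sequentially once, read back only during recovery) comes down to an append-only access pattern. The sketch below is a hypothetical, minimal illustration of that pattern, not any real database's log format:

```python
import os


class RedoLog:
    """Minimal append-only redo log sketch: records are written once,
    sequentially, and read back only when replaying for recovery --
    the write-mostly access pattern that suits flash memory."""

    def __init__(self, path):
        self.path = path

    def append(self, record):
        # Sequential append at the tail of the file; fsync forces the
        # record to stable storage before the write is acknowledged.
        with open(self.path, "a") as f:
            f.write(record + "\n")
            f.flush()
            os.fsync(f.fileno())

    def replay(self):
        # Recovery: one sequential scan over every logged record.
        with open(self.path) as f:
            return [line.rstrip("\n") for line in f]
```

Because every `append` lands at the end of the file, the device never pays for seeks or scattered writes, which is exactly why log generation is a natural early candidate for flash.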
1) Rick Cattell on data stores: http://www.odbms.org/download/RickCattell.pdf
2) Flash memory in enterprise database environment: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.141.2235&rep=rep1&type=pdf
Srinivasa Meka worked for many years on very large database implementations. His background includes Oracle 9i through 11g R2, Teradata 6 and 13.x, Netezza 3.x through 5.x, DATAllegro, Ingres/Postgres, Red Brick, and a bit of Sybase/SQL Server. His skill set includes database design for both OLTP and OLAP, data modeling, and a strong UNIX systems background. He can be reached through srini underscore nova, at yahoo dot com.