When this correspondent caught up recently with Dmitriy Setrakyan, CTO of GridGain Systems Inc., the topic of in-memory data caching was on the docket. In-memory data grids (IMDGs) are applicable anywhere performance and/or data consistency matter, he said. Today, that mostly reflects businesses' need for real-time responses on the huge volumes of data flowing into their systems. By email, Setrakyan provided these IMDG tips and trends:
In-memory data grids address speed bottlenecks
- Many people have turned to Hadoop, only to realize that while it is a good disk-based warehouse for batch-oriented processing of historical data, it is slow and awkward at processing an operational data set that changes online, in real time.
- That's exactly where in-memory data grids become important -- to provide real-time sub-second responses on operational data.
- You know you need to introduce data grids into your architecture when doing things the old way, like constantly going to disk, becomes too slow.
- You need an IMDG when keeping your whole data set locally is no longer an option and no longer scales.
- You need a system that can elastically grow with your business demands. Data grids add scalability to your system. The more grid nodes you add, the more data you can cache and the more load you can handle.
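The scaling point above can be sketched with a toy hash-partitioned cache (a simplified illustration only, not GridGain's actual implementation; all names here are hypothetical). Each key is owned by exactly one node, so adding nodes spreads both data and load:

```python
import hashlib

class ToyDataGrid:
    """Toy hash-partitioned in-memory cache: each key lives on one node."""

    def __init__(self, node_names):
        # One in-memory dictionary per grid node.
        self.nodes = {name: {} for name in node_names}

    def _owner(self, key):
        # Deterministically map a key to a node. This naive modulo scheme
        # reshuffles most keys when nodes change; real grids use consistent
        # hashing to limit rebalancing.
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        names = sorted(self.nodes)
        return names[digest % len(names)]

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

grid = ToyDataGrid(["node-1", "node-2"])
grid.put("order:42", {"total": 99.5})
print(grid.get("order:42"))  # returned from whichever node owns the key
```

The point of the sketch is the elasticity claim: total cache capacity is the sum of the nodes' memory, so "the more grid nodes you add, the more data you can cache."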
But there are caveats. One is integration with compute grids: being able to cache data in memory is usually not enough. Here's what else needs to happen:
- You have to be able to process this data fast.
- You need to load-balance processing across the grid.
- You must be sure that processing happens in a resilient fashion -- in other words, that no computation gets lost.
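The load-balancing and resilience requirements above can be sketched as a toy scheduler that fails a task over to another worker when one dies, so the computation is not lost (hypothetical names; a product such as GridGain would handle this transparently):

```python
import random

def run_resiliently(task, workers, max_attempts=3):
    """Load-balance a task across workers and fail over so no computation is lost."""
    attempted = []
    for _ in range(min(max_attempts, len(workers))):
        # Naive load balancing: pick a random worker not yet tried.
        worker = random.choice([w for w in workers if w not in attempted])
        attempted.append(worker)
        try:
            return worker(task)
        except RuntimeError:
            continue  # this worker failed; fail over to another node
    raise RuntimeError("task lost: all attempted workers failed")

def flaky_worker(task):
    # Simulates a crashed grid node.
    raise RuntimeError("node down")

def healthy_worker(task):
    return task * 2

result = run_resiliently(21, [flaky_worker, healthy_worker])
print(result)  # 42, whichever worker is tried first
```

The key design point is that failure handling lives in the scheduler, not the caller: a lost node shows up as a retry on another node, never as a lost result.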
He cites query and management capabilities as other differentiators in IMDG architectures.