Retrieving data demands both time and compute power from an application, and that demand grows when data is housed in disparate data centers, as in a cloud or grid computing environment. A distributed data cache can ease that burden. Acting as an intermediary storage layer, it holds frequently used data so that the application doesn't have to query multiple back-end databases on every request.
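The read path described above is often called the cache-aside pattern: check the intermediary layer first, and fall back to the database only on a miss. The following is a minimal sketch in Python; the class and function names (`SlowDatabase`, `CacheClient`, `get_user`) are illustrative stand-ins, not from any particular product, and in a real deployment the cache would be a shared, networked service such as Redis or Memcached rather than an in-process dictionary.

```python
import time

class SlowDatabase:
    """Stands in for a remote database with noticeable query latency."""
    def __init__(self, rows):
        self.rows = rows
        self.queries = 0  # count trips to the "database"

    def query(self, key):
        self.queries += 1
        time.sleep(0.05)  # simulate network and disk latency
        return self.rows.get(key)

class CacheClient:
    """Stands in for a shared, out-of-process cache; here a plain dict."""
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value):
        self.store[key] = value

def get_user(user_id, cache, db):
    # 1. Check the intermediary storage layer first.
    user = cache.get(user_id)
    if user is not None:
        return user
    # 2. On a miss, fall back to the database and populate the cache,
    #    so later readers (from any application) get the fast path.
    user = db.query(user_id)
    if user is not None:
        cache.set(user_id, user)
    return user

db = SlowDatabase({"u1": {"name": "Ada"}})
cache = CacheClient()
get_user("u1", cache, db)   # miss: falls through to the database
get_user("u1", cache, db)   # hit: served from the cache
print(db.queries)           # prints 1 -- the database was queried only once
```

Note that a production cache would also need an eviction and expiry policy so that stale data doesn't linger; that detail is omitted here for brevity.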
Distributed data caches are a relatively new development, but the notion of an intermediary storage layer is not. "Temporary spaces existed in the mainframe era, the client/server era and now in this Web-based era. You'd see the same thought process but applied in different technologies," said independent analyst Sandy Rogers. "Around the late nineties an industry developed around enterprise information integration (EII). Like a distributed data cache, it was a virtual approach to data integration."
These earlier intermediary storage layers, though, often had limited accessibility. “Depending on what domain users were processing in, they had to use technologies for that space,” said Rogers.
The key with a distributed data cache, however, is that it makes the data accessible to multiple entities. While earlier data caches were often coded directly into a program, widespread use of protocols and standards allows today's distributed data caches to exist independently of any one application, letting multiple programs tap into their resources. As such, the data cache is no longer a feature of individual applications, but part of the data infrastructure.