About Terracotta Documentation
This documentation covers Terracotta DSO, an advanced distributed-computing technology aimed at meeting special clustering requirements.
Standard Terracotta products, which avoid the overhead and complexity of DSO, meet the needs of almost all use cases and clustering requirements. To learn how to migrate from Terracotta DSO to standard Terracotta products, see Migrating From Terracotta DSO. For documentation on non-DSO (standard) Terracotta products, see Terracotta Documentation. Terracotta release information, such as release notes and platform compatibility, is found in Product Information.
Release: 3.6
Publish Date: November, 2011
Terracotta Distributed Data Cache
Introduction
Clustered applications with a system of record (SOR) on the backend can benefit from a distributed cache that manages certain data in memory while reducing costly application-SOR interactions. However, using a cache can add complexity to software development, integration, operation, and maintenance.
The Terracotta distributed cache, implemented with the Terracotta Integration Module tim-map-evictor, takes both established and innovative approaches to the caching model. tim-map-evictor addresses these performance and complexity issues by:
- making cached application data available in-memory across a cluster of application servers;
- offering standard methods for working with cache elements and performing cache-wide operations;
- incorporating concurrency for readers and writers;
- evicting data efficiently based on standard expiration metrics;
- obviating SOR commits for data with a limited lifetime;
- providing a lightweight implementation with a simple API that's easier to understand and code against;
- being "native" to Terracotta to eliminate integration issues;
- utilizing a flexible map implementation to adapt to more applications;
- minimizing inter-node faulting to speed data operations.
Requirements
The Terracotta distributed data cache requires JDK 1.5 or greater.
Structure and Characteristics
The Terracotta distributed cache is an interface incorporating a distributed map with standard map operations:
    public interface DistributedMap<K, V> {

      // Single item operations
      void put(K key, V value);
      V get(K key);
      V remove(K key);
      boolean containsKey(K key);

      // Multi item operations
      int size();
      void clear();
      Set<K> getKeys();

      // For managing the background evictor thread
      void start();
      void shutdown();
    }
Notable characteristics include:
- getValues() is not provided, but values can be obtained by iterating over the key set returned by getKeys() and calling get() for each key (see the sketch after this list).
- A cache-wide Time To Live (TTL) value can be set. The TTL determines the maximum amount of time an object can remain in the cache before becoming eligible for eviction, regardless of other conditions such as use.
- A cache-wide Time To Idle (TTI) value can be set. The TTI determines the maximum amount of time an object can remain idle in the cache before becoming eligible for eviction. TTI is reset each time the object is used.
- Each cache element receives an internal timestamp that is checked against the cache-wide TTL and TTI.
- The map containing the timestamps is clustered, allowing all nodes to be aware of expirations and pending evictions.
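For instance, because getValues() is not provided, reading every value means walking the key set. The following is a minimal sketch of that pattern against the DistributedMap interface above; the String type parameters and the printAllEntries method name are illustrative assumptions, not part of the API:

    // Minimal sketch: enumerate all values by iterating the clustered key set.
    // Assumes a populated DistributedMap<String, String>; the String types
    // are illustrative, not required by the interface.
    void printAllEntries(DistributedMap<String, String> map) {
        for (String key : map.getKeys()) {
            String value = map.get(key);
            // An entry may expire between getKeys() and get(), so check for null.
            if (value != null) {
                System.out.println(key + " -> " + value);
            }
        }
    }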
Usage Example
The following example builds and exercises a cache that implements the Terracotta distributed cache:
    Cache cache = new CacheBuilder()
        .setTimeToLive(10, TimeUnit.SECONDS)
        .setTimeToIdle(5, TimeUnit.SECONDS)
        .setConcurrency(16)
        .newCache();

    cache.put("Rabbit", "Carrots");
    cache.put("Dog", "Bone");
    cache.put("Owl", "Mouse");

    // wait 3 seconds
    cache.get("Rabbit");

    // wait 2 seconds - expire Dog and Owl due to TTI
    assert !cache.contains("Dog");
    assert !cache.contains("Owl");
    assert cache.contains("Rabbit");

    // wait 5 seconds - expire Rabbit due to TTL
    assert !cache.contains("Rabbit");
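In this timeline, the get("Rabbit") call at the 3-second mark resets Rabbit's idle timer, so when five idle seconds expire Dog and Owl via TTI, Rabbit survives. Rabbit is evicted only once its 10-second TTL elapses, because TTL applies regardless of use.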
Terracotta Distributed Cache in a Reference Application
The Examinator reference application uses the Terracotta distributed cache to handle pending user registrations. This type of data has a "medium-term" lifetime: it must persist long enough to give prospective registrants a chance to verify their registrations. If a registration isn't verified by the time its TTL elapses, it can be evicted from the cache. Only if the registration is verified is it written to the database.
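A minimal sketch of how such a flow might look, assuming the CacheBuilder API shown above (the token and registrationData variables, the database.save() call, and the 48-hour verification window are illustrative assumptions, not Examinator's actual code):

    // Hypothetical sketch: hold pending registrations in the clustered cache
    // until they are verified; only verified registrations reach the database.
    Cache pendingRegistrations = new CacheBuilder()
        .setTimeToLive(48 * 60 * 60, TimeUnit.SECONDS)  // assumed 48-hour window
        .newCache();

    // On sign-up: store the registration in the cache only, keyed by its token.
    pendingRegistrations.put(token, registrationData);

    // On verification: any node in the cluster can handle the request,
    // because the cache is clustered. Only now is the data persisted.
    Object registration = pendingRegistrations.get(token);
    if (registration != null) {
        database.save(registration);  // hypothetical persistence call
    }

    // If the token is never verified, the entry simply reaches its TTL and
    // is evicted; the database is never touched.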
The combination of Terracotta and the Terracotta distributed cache gives Examinator the following advantages:
- The Terracotta distributed cache's simple API makes it easy to integrate with Examinator, and easy to maintain and troubleshoot.
- Medium-term data is not written to the database unnecessarily, improving application performance.
- Terracotta persists the pending registrations so they can survive node failure.
- Terracotta clusters (shares) the pending registration data so that any node can handle validation.