About Terracotta Documentation
This documentation is about Terracotta DSO, an advanced distributed-computing technology aimed at meeting special clustering requirements.
Standard Terracotta products meet the needs of almost all use cases and clustering requirements, without the overhead and complexity of DSO. To learn how to migrate from Terracotta DSO to standard Terracotta products, see Migrating From Terracotta DSO. To find documentation on non-DSO (standard) Terracotta products, see Terracotta Documentation. Terracotta release information, such as release notes and platform compatibility, is found in Product Information.
- How DSO Clustering Works
- Platform Concepts
- Hello Clustered World
- Setup and Configuration
- Planning for a Clustered App
- Configuring Terracotta DSO
- Configuration Reference
- Using Annotations
- Cluster Events
- Data Locality Methods
- Distributed Cache
- Clustered Async Data Processing
- Tool Guides
- Developer Console
- Operations Center
- tim-get (TIM Management Tool)
- Platform Statistics Recorder
- Eclipse Plugin
- Sessions Configurator
- Clustering Spring Webapp with Sessions Configurator
- Testing, Tuning, and Deployment
- Top 5 Tuning Tips
- Testing a Clustered App
- Tuning a Clustered App
- Deployment Guide
- Operations Guide
- FAQs and Troubleshooting
- General FAQ
- DSO Technical FAQ
- Troubleshooting Guide
- Non-portable Classes
- Migrating From DSO
- Concept and Architecture Guide
- Examinator Reference Application
- Clustered Data Structures Guide
- Integrating Terracotta DSO
- Clustering Spring Framework
- Integration Modules Manual
- AspectWerkz Pattern Language
Publish Date: November 2011
The Terracotta cache evictor is an interface providing a simple distributed eviction solution for map elements. The cache evictor, implemented with the Terracotta Integration Module
tim-map-evictor, provides a number of advantages over more complex solutions:
- Simple: API is easy to understand and code against.
- Standard: Data eviction is based on standard expiration metrics.
- Lightweight: The implementation consumes minimal resources.
- Efficient: Optimized for a clustered environment to minimize faulting due to low locality of reference.
- Fail-safe: Data can be evicted even if written by a failed node or after all nodes have been restarted.
- Native: Designed for Terracotta to eliminate integration issues.
The Terracotta distributed data cache requires JDK 1.5 or greater.
Notable characteristics include:
- A cache-wide Time To Live (TTL) value can be set. The TTL determines the maximum amount of time an object can remain in the cache before becoming eligible for eviction, regardless of other conditions such as use.
- A cache-wide Time To Idle (TTI) value can be set. The TTI determines the maximum amount of time an object can remain idle in the cache before becoming eligible for eviction. TTI is reset each time the object is used.
- Each cache element receives an internal timestamp used against the cache-wide TTL and TTI.
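The interplay of the cache-wide TTL and TTI values and the per-element timestamps can be expressed as a small expiry predicate. The following is a simplified, single-node sketch of that logic, not the tim-map-evictor implementation; all names are illustrative:

```java
// Simplified sketch of TTL/TTI expiry logic as described above.
// Not the actual tim-map-evictor implementation; names are illustrative.
public class ExpiryCheck {

    /**
     * An element is eligible for eviction when it has outlived the
     * cache-wide TTL since creation, or has been idle longer than the
     * cache-wide TTI. A value of 0 disables the corresponding check.
     */
    public static boolean isExpired(long createdMillis, long lastUsedMillis,
                                    long nowMillis, long maxTTLMillis,
                                    long maxTTIMillis) {
        if (maxTTLMillis > 0 && nowMillis - createdMillis >= maxTTLMillis) {
            return true; // exceeded maximum lifetime, regardless of use
        }
        if (maxTTIMillis > 0 && nowMillis - lastUsedMillis >= maxTTIMillis) {
            return true; // idle too long; each use resets lastUsedMillis
        }
        return false;
    }
}
```

Note that a use of the element resets only the idle clock (TTI); the TTL clock runs from creation no matter how often the element is accessed.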
The Examinator reference application uses the Terracotta cache evictor to handle pending user registrations. This type of data has a "medium-term" lifetime which needs to be persisted long enough to give prospective registrants a chance to verify their registrations. If a registration isn't verified by the time TTL is reached, it can be evicted from the cache. Only if the registration is verified is it written to the database.
The combination of Terracotta and the Terracotta cache evictor gives Examinator the following advantages:
- The simple Terracotta cache evictor's API makes it easy to integrate with Examinator and to maintain and troubleshoot.
- Medium-term data is not written to the database unnecessarily, improving application performance.
- Terracotta persists the pending registrations so they can survive node failure.
- Terracotta clusters (shares) the pending registration data so that any node can handle validation.
Clustered applications with a system of record (SOR) on the backend can benefit from a distributed cache that manages certain data in memory while reducing costly application-SOR interactions. However, using a cache can introduce increased complexity to software development, integration, operation, and maintenance.
The Terracotta cache evictor includes a distributed map that can be used as a simple distributed cache. This cache uses the Terracotta cache evictor, incorporating all of its benefits. It also takes both established and innovative approaches to the caching model, solving performance and complexity issues by:
- obviating SOR commits for data with a limited lifetime;
- making cached application data available in-memory across a cluster of application servers;
- offering standard methods for working with cache elements and performing cache-wide operations;
- incorporating concurrency for readers and writers;
- utilizing a flexible map implementation to adapt to more applications;
- minimizing inter-node faulting to speed data operations.
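The first point, reducing costly application-SOR round trips, can be sketched with a minimal read-through wrapper. This is a generic, single-node illustration (using modern-Java `java.util.function`, newer than the JDK 1.5 the module requires), not the Terracotta distributed map; the SOR lookup function stands in for a costly backend call:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal read-through cache sketch: values are served from memory when
// present and fetched from the system of record (SOR) only on a miss.
// A generic illustration, not the Terracotta distributed map.
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<K, V>();
    private final Function<K, V> sorLookup; // stands in for a costly SOR call

    public ReadThroughCache(Function<K, V> sorLookup) {
        this.sorLookup = sorLookup;
    }

    public V get(K key) {
        // computeIfAbsent consults the SOR only when the key is missing,
        // so repeated reads of the same key cost one backend trip
        return cache.computeIfAbsent(key, sorLookup);
    }
}
```

In a clustered deployment the in-memory map would itself be shared across nodes, so a value fetched by one node is available to all.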
The Terracotta distributed cache is an interface incorporating a distributed map with standard map operations:
A getValues() method is not provided, but values can be obtained by iterating over the key set (Set<K>) and calling get() for each key.
A typical usage pattern for the Terracotta cache evictor is shown in the MyStuff class below. The next section contains a full list of configuration parameters available to the distributed map.
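The MyStuff listing referenced here is missing from this copy of the page. The following is a hedged reconstruction of the usage pattern the text describes: a class that wraps a map with put/get/remove operations. A plain ConcurrentHashMap stands in for the Terracotta distributed map so the sketch is runnable; in real usage the backing map would be the tim-map-evictor distributed map configured with the parameters listed below:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged reconstruction of the missing MyStuff example. In real usage the
// backing map would be the Terracotta distributed map; a ConcurrentHashMap
// stands in here so the sketch is self-contained and runnable.
public class MyStuff {
    private final Map<String, String> stuff;

    public MyStuff(Map<String, String> backingMap) {
        this.stuff = backingMap;
    }

    public void addStuff(String key, String value) {
        stuff.put(key, value); // the element receives its timestamp on insertion
    }

    public String getStuff(String key) {
        return stuff.get(key); // with the evictor, a hit resets the TTI clock
    }

    public void removeStuff(String key) {
        stuff.remove(key); // explicit removal, independent of eviction
    }
}
```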
| Parameter | Default | Description |
|-----------|---------|-------------|
| name | "Distributed Map" | A descriptive name used in log messages and evictor thread names. |
| concurrency | 16 | The concurrency level, or maximum number of concurrent threads allowed in the map implementation. |
| maxTTIMillis | 0 | Time To Idle: the maximum amount of time an item can remain in the map unused before expiring. 0 means never expire due to TTI. |
| maxTTLMillis | 0 | Time To Live: the maximum amount of time an item can remain in the map, regardless of use, before expiring. 0 means never expire due to TTL. |
| evictorSleepMillis | 30000 | The period to wait between eviction cycles. Adjust this value to suit your TTI/TTL values. |
| orphanEvictionEnabled | true | Determines whether "orphaned" values (values no longer local to any node) are evicted. |
| orphanEvictionFrequency | 4 | The number of local eviction cycles to run between orphan-eviction runs. |
| orphanBatchSize | 1000 | The number of items evicted in each orphan-eviction batch. |
| orphanBatchPauseMillis | 20 | The pause between orphan-eviction batches. |
| loggingEnabled | false | Enables basic distributed-map logging. |
| evictorLoggingEnabled | false | Enables eviction logging. |
The following is an example of a cache that implements the Terracotta distributed cache:
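The example code that belongs here is missing from this copy of the page. As a stand-in, here is a minimal single-node sketch showing the behavior such a cache provides: elements are timestamped on insertion and evicted once the TTL is exceeded. This is an illustration, not the tim-map-evictor implementation; all names are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal single-node sketch of a TTL-expiring cache, illustrating locally
// the behavior the Terracotta distributed cache provides cluster-wide.
// Not the tim-map-evictor implementation; names are illustrative.
public class ExpiringCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long createdMillis;
        Entry(V value, long createdMillis) {
            this.value = value;
            this.createdMillis = createdMillis;
        }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<K, Entry<V>>();
    private final long maxTTLMillis; // 0 means entries never expire

    public ExpiringCache(long maxTTLMillis) {
        this.maxTTLMillis = maxTTLMillis;
    }

    public void put(K key, V value) {
        // each element receives an internal timestamp, per the TTL/TTI model
        map.put(key, new Entry<V>(value, System.currentTimeMillis()));
    }

    public V get(K key) {
        return get(key, System.currentTimeMillis());
    }

    // variant taking an explicit clock reading, useful for testing
    V get(K key, long nowMillis) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        if (maxTTLMillis > 0 && nowMillis - e.createdMillis >= maxTTLMillis) {
            map.remove(key); // expired: evict lazily on access
            return null;
        }
        return e.value;
    }
}
```

The real distributed cache additionally runs a background evictor thread (see evictorSleepMillis above) rather than relying only on lazy eviction at read time.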