About Terracotta Documentation

This documentation is about Terracotta DSO, an advanced distributed-computing technology aimed at meeting special clustering requirements.

Terracotta products without the overhead and complexity of DSO meet the needs of almost all use cases and clustering requirements. To learn how to migrate from Terracotta DSO to standard Terracotta products, see Migrating From Terracotta DSO. To find documentation on non-DSO (standard) Terracotta products, see Terracotta Documentation. Terracotta release information, such as release notes and platform compatibility, is found in Product Information.

Release: 3.6
Publish Date: November, 2011

Terracotta Cache Evictor


Clustered applications with a system of record (SOR) on the backend can benefit from a distributed cache that manages certain data in memory while reducing costly application-SOR interactions. However, using a cache can introduce increased complexity to software development, integration, operation, and maintenance.

The Terracotta distributed cache, implemented with the Terracotta Integration Module tim-map-evictor, takes both established and innovative approaches to the caching model. tim-map-evictor addresses these performance and complexity issues by:

  • making cached application data available in-memory across a cluster of application servers;
  • offering standard methods for working with cache elements and performing cache-wide operations;
  • incorporating concurrency for readers and writers;
  • evicting data efficiently based on standard expiration metrics;
  • obviating SOR commits for data with a limited lifetime;
  • providing a lightweight implementation with a simple API that's easier to understand and code against;
  • being "native" to Terracotta to eliminate integration issues;
  • utilizing a flexible map implementation to adapt to more applications;
  • minimizing inter-node faulting to speed data operations.


The Terracotta distributed data cache requires JDK 1.5 or greater.

Structure and Characteristics

The Terracotta distributed cache is an interface incorporating a distributed map with standard map operations:

public interface DistributedMap<K, V> {
  // Single item operations
  void put(K key, V value);
  V get(K key);
  V remove(K key);
  boolean containsKey(K key);

  // Multi item operations
  int size();
  void clear();
  Set<K> getKeys();

  // For managing the background evictor thread
  void start();
  void shutdown();
}

Notable characteristics include:

  • getValues() is not provided, but values can be obtained by iterating over the key set returned by getKeys() and calling get() for each key.
  • A cache-wide Time To Live (TTL) value can be set. The TTL determines the maximum amount of time an object can remain in the cache before becoming eligible for eviction, regardless of other conditions such as use.
  • A cache-wide Time To Idle (TTI) value can be set. The TTI determines the maximum amount of time an object can remain idle in the cache before becoming eligible for eviction. TTI is reset each time the object is used (see the sketch after this list).
  • Each cache element receives an internal timestamp used against the cache-wide TTL and TTI.
  • Expiration and eviction are optimized for a clustered environment to minimize unnecessary faulting.
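
The interplay of the two clocks can be made concrete with a small sketch. The following is illustrative only, assuming per-element timestamps named creationTime and lastAccessTime; it is not the actual tim-map-evictor implementation:

// Illustrative only: a minimal eligibility test implied by TTL and TTI.
// creationTime, lastAccessTime, maxTTLMillis, and maxTTIMillis are assumed
// names, not tim-map-evictor internals.
boolean isExpired(long now, long creationTime, long lastAccessTime,
                  long maxTTLMillis, long maxTTIMillis) {
  boolean pastTTL = (now - creationTime) > maxTTLMillis;    // lifetime exceeded, regardless of use
  boolean pastTTI = (now - lastAccessTime) > maxTTIMillis;  // idle too long; each use resets this clock
  return pastTTL || pastTTI;
}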

Usage Example

The following example builds and exercises a Terracotta distributed cache:

import org.terracotta.modules.dmap.*;
import static org.terracotta.modules.dmap.DistributedMapBuilder.*;

DistributedMap<String, String> map = new DistributedMapBuilder()
    .setMaxTTLMillis(10 * SECOND)
    .setMaxTTIMillis(5 * SECOND)
    .newMap(); // build the configured map
map.start(); // start the background eviction thread

map.put("Rabbit", "Carrots");
map.put("Dog", "Bone");
map.put("Owl", "Mouse");
// wait 3 seconds
map.get("Rabbit"); // touching Rabbit resets its TTI clock

// wait 2 more seconds - Dog and Owl expire, idle past the 5-second TTI
assert ! map.containsKey("Dog");
assert ! map.containsKey("Owl");
assert map.containsKey("Rabbit");

// wait 5 more seconds - Rabbit expires, reaching the 10-second TTL
assert ! map.containsKey("Rabbit");

map.shutdown(); // stop the background eviction thread

Terracotta Distributed Cache in a Reference Application

The Examinator reference application uses the Terracotta distributed cache to handle pending user registrations. This type of data has a "medium-term" lifetime: it must be retained long enough to give prospective registrants a chance to verify their registrations. If a registration isn't verified by the time its TTL is reached, it can simply be evicted from the cache; only a verified registration is written to the database. The sketch below illustrates this pattern.
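
The following is a minimal sketch of that pattern, not Examinator's actual code; the Registration type, registrationDao, and token key are hypothetical stand-ins:

// Hypothetical sketch of the pending-registration pattern; Registration,
// registrationDao, and token are illustrative stand-ins, not Examinator code.
DistributedMap<String, Registration> pending = new DistributedMapBuilder()
    .setMaxTTLMillis(24 * 60 * 60 * 1000L) // e.g., allow 24 hours to verify
    .newMap();
pending.start();

// On signup: hold the registration in the clustered cache only.
pending.put(token, registration);

// On verification: write to the database only now, then drop the cache entry.
Registration r = pending.get(token);
if (r != null) {
  registrationDao.save(r);
  pending.remove(token);
}
// Unverified registrations are never written to the database; the evictor
// discards them once the TTL elapses.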

The combination of Terracotta and the Terracotta distributed cache gives Examinator the following advantages:

  • The Terracotta distributed cache's simple API makes it easy to integrate with Examinator, and easy to maintain and troubleshoot.
  • Medium-term data is not written to the database unnecessarily, improving application performance.
  • Terracotta persists the pending registrations so they can survive node failure.
  • Terracotta clusters (shares) the pending registration data so that any node can handle validation.