{div9:id=documentationcolumn}

h1. Terracotta Distributed Cache
{toc:minLevel=2|maxLevel=2}

h2. Introduction
The Terracotta Distributed Cache is an interface providing a simple distributed eviction solution for map elements. The Distributed Cache, implemented with the Terracotta Integration Module {{tim-distributed-cache}}, provides a number of advantages over more complex solutions:

* Simple -- API is easy to understand and code against.
* Distributed -- Eviction is distributed along with data to maintain coherence.
* Standard -- Data eviction is based on standard expiration metrics.
* Lightweight -- Implementation does not hog resources.
* Efficient -- Optimized for a clustered environment to minimize faulting due to low locality of reference.
* Fail-Safe -- Data can be evicted even if written by a failed node or after all nodes have been restarted.
* Self-Contained -- Implements a Map to provide an optional, ready-to-use distributed cache.
* Native -- Designed for Terracotta to eliminate integration issues.

h2. How to Implement and Configure
Under the appropriate conditions, the Terracotta Distributed Cache can be used in any Terracotta cluster. If your application can use the Distributed Cache's built-in Map implementation for a cache, you can avoid having to customize your own data structure. See [#A Simple Distributed Cache] for instructions on using the Distributed Cache with the provided Map implementation.

h3. Characteristics and Requirements
The Terracotta Distributed Cache has the following eviction parameters:

*Time to Live*
The Time to Live (TTL) value determines the maximum amount of time an object can remain in the cache before becoming eligible for eviction, regardless of other conditions such as use. 
*Time to Idle*
The Time to Idle (TTI) value determines the maximum amount of time an object can remain idle in the cache before becoming eligible for eviction. TTI is reset each time the object is used.
*Target Max In-Memory Count*
The in-memory count is the maximum number of elements allowed in a region in any one client (any one application server). If this target is exceeded, eviction occurs to bring the count within the allowed target. Using a target rather than a hard limit improves concurrency. A value of 0 means no eviction takes place (unlimited size is allowed).
*Target Max Total Count*
The total count is the maximum total number of elements allowed for a region across all clients (all application servers). If this target is exceeded, eviction occurs to bring the count within the allowed target. Using a target rather than a hard limit improves concurrency. A value of 0 means no eviction takes place (unlimited size is allowed).
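The interplay of the TTL and TTI parameters can be sketched in plain Java. The helper below is purely illustrative and is not part of {{tim-distributed-cache}}; it covers only the two time-based parameters, not the target counts. An element becomes eligible for eviction once its total age exceeds TTL or its idle time exceeds TTI, with 0 disabling either check:

```java
// Hypothetical illustration of TTL/TTI eviction eligibility;
// not part of the tim-distributed-cache API.
public class EvictionCheck {

    /**
     * @param ageMillis  time since the element was added to the cache
     * @param idleMillis time since the element was last used
     * @param ttlMillis  maximum lifetime; 0 means no TTL limit
     * @param ttiMillis  maximum idle time; 0 means no TTI limit
     * @return true if the element is eligible for eviction
     */
    public static boolean evictionEligible(long ageMillis, long idleMillis,
                                           long ttlMillis, long ttiMillis) {
        boolean ttlExpired = ttlMillis > 0 && ageMillis > ttlMillis;
        boolean ttiExpired = ttiMillis > 0 && idleMillis > ttiMillis;
        return ttlExpired || ttiExpired;
    }
}
```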


To learn how to configure these eviction parameters, see [#Usage Pattern].

{note}
The Terracotta Distributed Cache requires JDK 1.5 or greater.
{note}

h4. Using Your Own Map Implementation
If you choose not to use the provided Map implementation, you must provide your own data structure and take the following steps:

* Use a partial-loading data structure for the evictor to target (see [Clustered Data Structures Guide]).
* Write start/stop thread-management code to run the evictor.
* Include the code from ConcurrentDistributedMap that performs local eviction (see the {{tim-distributed-cache}} library).
* Implement the Evictor interface from {{tim-distributed-cache}}.
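The start/stop thread-management step above can be sketched with a scheduled executor. Note that the {{Evictor}} contract shown here is a hypothetical stand-in; the actual Evictor interface in {{tim-distributed-cache}} may declare different methods:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical evictor contract; the real interface in
// tim-distributed-cache may differ.
interface Evictor {
    void evictOnePass();
}

// Minimal start/stop management for a background evictor thread.
public class EvictorRunner {
    private final Evictor evictor;
    private final long sleepMillis;
    private ScheduledExecutorService scheduler;

    public EvictorRunner(Evictor evictor, long sleepMillis) {
        this.evictor = evictor;
        this.sleepMillis = sleepMillis;
    }

    public synchronized void start() {
        if (scheduler != null) return; // already running
        scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(
            evictor::evictOnePass, sleepMillis, sleepMillis, TimeUnit.MILLISECONDS);
    }

    public synchronized void shutdown() {
        if (scheduler != null) {
            scheduler.shutdown();
            scheduler = null;
        }
    }
}
```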

See the following sections for an example of how the Terracotta Distributed Cache is intended to function with its built-in Map implementation.

h3. Installing the TIM
To use the Terracotta Distributed Cache, you must both install {{tim-distributed-cache}} and include the evictor JAR file in your classpath.

To install the TIM, run the following command from $\{TERRACOTTA_HOME\}:

{noformat:title=UNIX/Linux}
[PROMPT] bin/tim-get.sh install tim-distributed-cache
{noformat}
{noformat:title=Microsoft Windows}
[PROMPT] bin\tim-get.bat install tim-distributed-cache
{noformat}

You should see output that appears similar to the following:

{noformat}
Installing tim-distributed-cache 1.3.0-SNAPSHOT and dependencies...
   INSTALLED: tim-distributed-cache 1.3.0-SNAPSHOT - Ok
   INSTALLED: tim-concurrent-collections 1.3.0-SNAPSHOT - Ok
{noformat}

Run the following command from $\{TERRACOTTA_HOME\} to update the Terracotta configuration file ({{tc-config.xml}} by default):

{noformat:title=UNIX/Linux}
[PROMPT] bin/tim-get.sh upgrade <path/to/Terracotta/configuration/file>
{noformat}
{noformat:title=Microsoft Windows}
[PROMPT] bin\tim-get.bat upgrade <path\to\Terracotta\configuration\file>
{noformat}

For more information about installing and updating TIMs, see the [TIM Update Center|tim-get].

h3. Locking Requirements
Terracotta automatically provides locking for read (get) and write (put) operations on the distributed map. These locks last for the duration of the get or put operation.

Mutating an object obtained from the distributed map requires a read/write lock to avoid race conditions and potential data corruption.

For example, assume a distributed map has an element <k1, v1> in it. The following operation does not require explicit locking:

{code}
myObject = getFromMyDistributedMap(k1); // Terracotta provides a lock for the duration of getFromMyDistributedMap().
{code}

Adding a new element to the map also does not require explicit locking:

{code}
putIntoMyDistributedMap(k2, v2); // Terracotta provides a lock for the duration of putIntoMyDistributedMap().
{code}

However, the following operation _requires_ a read/write lock:

{code}
myNewObject = myMutator(myObject); // myObject should be locked until it is put back into the map.
{code}

Note the following:

* To be shared across the cluster, the {{myObject}} field must be declared a Terracotta root or its class (its type) must be instrumented.
* Cluster-wide locking (Terracotta locking) must be present when {{myMutator()}} changes {{myObject}}. There are several ways to implement Terracotta locking.
* To gain visibility into the cluster and better understand how the map and object graphs are being clustered, use the Terracotta Developer Console's Object Browser.
* Use the Terracotta Developer Console's Lock Profiler to see what locks are being used and their usage patterns.
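One common way to provide such cluster-wide locking is a {{synchronized}} block held for the full mutation, which Terracotta can cluster through autolock configuration. The sketch below uses a plain {{ConcurrentHashMap}} and a {{StringBuilder}} value as hypothetical stand-ins to show the locking pattern only; it is not the distributed map itself:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the clustered map; illustrates the locking pattern only.
public class SafeMutation {
    private final Map<String, StringBuilder> map = new ConcurrentHashMap<>();

    public void putValue(String key, String value) {
        map.put(key, new StringBuilder(value));
    }

    // Mutate an object obtained from the map under a lock, so that no other
    // thread (or, under Terracotta with an autolock on this block, no other
    // node) can observe a partially mutated value.
    public void append(String key, String suffix) {
        StringBuilder value = map.get(key);
        synchronized (value) { // held until the mutation is complete
            value.append(suffix);
        }
    }

    public String getValue(String key) {
        StringBuilder value = map.get(key);
        synchronized (value) {
            return value.toString();
        }
    }
}
```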

{info:title=More Information}
For more information on locks, Terracotta roots, and instrumenting classes, see the following resources:
* The section [Resolving UnlockedSharedObjectException - Configuring Locking|Configuring Terracotta#] in Configuring Terracotta
* Sections on locks, Terracotta roots, and instrumenting classes in the [Concept and Architecture Guide] and in the [Configuration Guide and Reference].

For more information on the Terracotta Developer Console, see the [console guide|Terracotta Developer Console].
{info}


h2. A Simple Distributed Cache
Clustered applications with a system of record (SOR) on the backend can benefit from a distributed cache that manages certain data in memory while reducing costly application-SOR interactions. However, using a cache can introduce increased complexity to software development, integration, operation, and maintenance.

The Terracotta Distributed Cache includes a distributed Map that can be used as a simple distributed cache, bringing with it all of the benefits described above. It also takes both established and innovative approaches to the caching model, solving performance and complexity issues by:

* obviating SOR commits for data with a limited lifetime;
* making cached application data available in-memory across a cluster of application servers;
* offering standard methods for working with cache elements and performing cache-wide operations;
* incorporating concurrency for readers and writers;
* utilizing a flexible map implementation to adapt to more applications;
* minimizing inter-node faulting to speed data operations.

h3. Structure and Characteristics
The Terracotta Distributed Cache is an interface incorporating a distributed map with standard map operations:

{code}
public interface DistributedMap<K, V> {
  // Single item operations
  void put(K key, V value);
  V get(K key);
  V remove(K key);
  boolean containsKey(K key);

  // Multi item operations
  int size();
  void clear();
  Set<K> getKeys();

  // For managing the background evictor thread
  void start();
  void shutdown();
}
{code}

{tip}
A getValues() method is not provided, but you can iterate over the Set<K> returned by getKeys() and call get() for each key to obtain the values.
{tip}
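For example, a getValues()-style helper can be built by iterating the key set, assuming the interface shown above. The in-memory map below is a hypothetical stand-in used only to exercise the helper; it is not the clustered implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class Values {

    // Mirrors the key-set portion of the DistributedMap interface shown above.
    public interface DistributedMap<K, V> {
        void put(K key, V value);
        V get(K key);
        Set<K> getKeys();
    }

    // getValues()-style helper: iterate the key set and look up each value.
    public static <K, V> List<V> getValues(DistributedMap<K, V> map) {
        List<V> values = new ArrayList<>();
        for (K key : map.getKeys()) {
            values.add(map.get(key));
        }
        return values;
    }

    // Trivial in-memory stand-in used only to exercise the helper;
    // NOT the clustered implementation.
    public static class InMemoryMap<K, V> implements DistributedMap<K, V> {
        private final Map<K, V> backing = new HashMap<>();
        public void put(K key, V value) { backing.put(key, value); }
        public V get(K key) { return backing.get(key); }
        public Set<K> getKeys() { return new HashSet<>(backing.keySet()); }
    }
}
```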



h3. Usage Pattern
A typical usage pattern for the Terracotta Distributed Cache is shown in the MyStuff class below. The next section contains a full list of configuration parameters available to {{CacheConfigFactory}}.

{code:java}
import org.terracotta.cache.CacheConfigFactory;
import org.terracotta.cache.DistributedCache;
import static org.terracotta.cache.CacheConfigFactory.HOUR;
import static org.terracotta.cache.CacheConfigFactory.MINUTE;

public class MyStuff {

   // Mark as Terracotta root
   private DistributedCache<String, Stuff> sharedCache;

   public MyStuff() {
     if (sharedCache == null) {
       DistributedCache<String, Stuff> newCache = CacheConfigFactory.newConfig()
            .setMaxTTLMillis(6 * HOUR)         // Regardless of use, remove after 6 hours.
            .setMaxTTIMillis(30 * MINUTE)      // Remove after 30 minutes of non-use.
            .setEvictorSleepMillis(5 * MINUTE) // Perform eviction every 5 minutes.
            .newCache();

       // Set the root. If this assignment does not take effect (the root was
       // already set), shut down newCache to stop its now-useless background
       // evictor thread.
       sharedCache = newCache;
       if (sharedCache != newCache) {
         newCache.shutdown();
       }
     }
   }

   public void putStuff(String key, Stuff stuff) {
       sharedCache.put(key, stuff);
   }

   public Stuff getStuff(String key) {
       return sharedCache.get(key);
   }
}
{code}

h4. Cache Configuration Parameters
The configuration parameters that can be set through {{CacheConfigFactory}} are summarized in the following table.

||Config property || Default value || Description ||
|name | "Distributed Map" | A descriptive string used in log messages and evictor thread names. |
|concurrency | 16 | The maximum number of concurrent threads that can access the map. |
|maxTTIMillis | 0 | Time To Idle - the maximum amount of time (in milliseconds) an item can be in the map unused before expiration; 0 means never expire due to TTI. |
|maxTTLMillis | 0 | Time To Live - the maximum amount of time (in milliseconds) an item may be in the map regardless of use before expiration; 0 means never expire due to TTL. |
|evictorSleepMillis | 30000 | Wait time (in milliseconds) between eviction cycles; should be tuned to work well with TTI/TTL values. |
|orphanEvictionEnabled | true | Determines whether "orphaned" values (values no longer local to any node) are evicted. |
|orphanEvictionPeriod | 4 | Number of times to run local eviction between doing orphan eviction. |
|orphanBatchSize | 1000 | Size of each set of items evicted during orphan eviction. |
|orphanBatchPauseMillis | 20 | Rest time (in milliseconds) between each orphan batch eviction. |
|loggingEnabled | false | Basic distributed-map logging messages saved to the Terracotta logs. |

h3. Usage Example

The following example shows a cache created with the Terracotta Distributed Cache:

{code}
import org.terracotta.cache.*;
import static org.terracotta.cache.CacheConfigFactory.*;


DistributedCache<String, String> cache = CacheConfigFactory.newConfig()
    .setMaxTTLMillis(10 * SECOND)
    .setMaxTTIMillis(5 * SECOND)
    .setConcurrency(16)
    .newCache();


// start() method not needed; start is automatic.
cache.put("Rabbit", "Carrots");
cache.put("Dog", "Bone");
cache.put("Owl", "Mouse");
// wait 3 seconds
cache.get("Rabbit");


// wait 2 seconds - expire Dog and Owl due to TTI
assert ! cache.containsKey("Dog");
assert ! cache.containsKey("Owl");
assert cache.containsKey("Rabbit");


// wait 5 seconds - expire Rabbit due to TTL
assert ! cache.containsKey("Rabbit");
{code}

h2. Terracotta Distributed Cache in a Reference Application
The [Examinator reference application|orgsite:Web App Reference Implementation] uses the Terracotta Distributed Cache to handle pending user registrations. This type of data has a "medium-term" lifetime: it must be persisted long enough to give prospective registrants a chance to verify their registrations. If a registration isn't verified by the time its TTL is reached, it is evicted from the cache. Only if the registration is verified is it written to the database.

The combination of Terracotta and the Terracotta Distributed Cache gives Examinator the following advantages:

* The simple Terracotta Distributed Cache's API makes it easy to integrate with Examinator and to maintain and troubleshoot.
* Medium-term data is not written to the database unnecessarily, improving application performance.
* Terracotta persists the pending registrations so they can survive node failure.
* Terracotta clusters (shares) the pending registration data so that any node can handle validation.

{div9}