
About Terracotta Documentation

This documentation is about Terracotta DSO, an advanced distributed-computing technology aimed at meeting special clustering requirements.

Terracotta products without the overhead and complexity of DSO meet the needs of almost all use cases and clustering requirements. To learn how to migrate from Terracotta DSO to standard Terracotta products, see Migrating From Terracotta DSO. To find documentation on non-DSO (standard) Terracotta products, see Terracotta Documentation. Terracotta release information, such as release notes and platform compatibility, is found in Product Information.

Release: 3.6
Publish Date: November, 2011

Documentation Archive »

Clustered Data Structure Guide

Clustered data structures in Terracotta are functionally equivalent to their non-clustered counterparts. However, there are some performance differences between the different data structures in a clustered context that are important to understand when designing a system.

Some of the performance differences are inherent to the design of the data structure; for example, the very different synchronization and concurrency characteristics of Hashtable vs. HashMap vs. ConcurrentHashMap.
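As a concrete (non-clustered) illustration of these differences, the following sketch uses plain JDK classes; the class name MapLockingExample is hypothetical and nothing here is Terracotta-specific:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Hashtable;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class MapLockingExample {
        public static void main(String[] args) {
            // Hashtable and Collections.synchronizedMap() guard every operation
            // with a single monitor, so readers and writers serialize on one lock.
            Map<String, Integer> legacy = new Hashtable<String, Integer>();
            Map<String, Integer> syncMap =
                    Collections.synchronizedMap(new HashMap<String, Integer>());
            legacy.put("visits", 1);
            syncMap.put("visits", 1);

            // ConcurrentHashMap (in the JDKs current at this writing) partitions
            // keys into segments, so readers do not block and writers contend
            // only when they touch the same segment.
            ConcurrentMap<String, Integer> chm = new ConcurrentHashMap<String, Integer>();
            chm.putIfAbsent("visits", 0);
            chm.replace("visits", 0, 1);

            System.out.println(legacy + " " + syncMap + " " + chm);
        }
    }

The same structural difference (one monitor versus striped locks) is what Terracotta builds on when these maps are clustered.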

Other performance differences come from data structure implementations that were designed for a local context and therefore operate somewhat differently in a clustered context.

Still other performance differences are simply the result of point-in-time implementation details in the Terracotta core engine. Terracotta is a rapidly evolving product, so check back here often to see the status of these point-in-time effects.

This guide is intended to provide you with the information you need to make informed decisions about which data structures to use in which context, how to maximize their efficiency, and how to avoid certain pitfalls.

Coming Soon

A discussion of cross-JVM latency and concurrency patterns.


Partial Loading Of Clustered Data Structures

An important concept to understand before reviewing the performance characteristics of the various data structures is what we call "partial loading." Most of the data structures in the java.util and java.util.concurrent packages that are supported by Terracotta are what we call "logically managed" (for more information, see the Concept and Architecture Guide).

Logically managed objects in Terracotta are kept up-to-date across JVMs by replaying the method calls made in another JVM. This is done for performance reasons and, in the case of hash-based structures, for correctness. However, because they aren't managed by keeping track of their physical references to other objects, they do not share the same virtual heap mechanism enjoyed by physically managed objects.
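To illustrate what "replaying the method calls" means, here is an explanatory sketch only, not Terracotta's actual mechanism or API; the LogicalAction type and class name are hypothetical:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class LogicalReplaySketch {
        // A recorded logical operation, e.g. put("user:42", "Alice").
        static class LogicalAction {
            final String method;
            final Object[] args;
            LogicalAction(String method, Object... args) {
                this.method = method;
                this.args = args;
            }
        }

        public static void main(String[] args) {
            List<LogicalAction> changeLog = new ArrayList<LogicalAction>();

            // JVM A performs the mutation and records it as a logical action
            // instead of shipping its internal hash buckets across the wire.
            Map<String, String> mapOnJvmA = new HashMap<String, String>();
            mapOnJvmA.put("user:42", "Alice");
            changeLog.add(new LogicalAction("put", "user:42", "Alice"));

            // JVM B replays the call against its own instance, so its own
            // hashing decides where the entry physically lands.
            Map<String, String> mapOnJvmB = new HashMap<String, String>();
            for (LogicalAction action : changeLog) {
                if ("put".equals(action.method)) {
                    mapOnJvmB.put((String) action.args[0], (String) action.args[1]);
                }
            }
            System.out.println(mapOnJvmB); // {user:42=Alice}
        }
    }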

Without special support in the Terracotta core, logically managed data structures are loaded in their entirety into a client JVM when they are accessed. In the case of very large data structures, this can lead to a number of application issues:

  • The data structure may take a long time to load from the Terracotta server.
  • The data structure may take up too much space in the local JVM heap.
  • The data structure may exceed the available JVM heap, leading to OutOfMemoryErrors.

To solve these problems, we have implemented "partial loading" of some data structures. This means that parts of the data structure are loaded into a client JVM when they are accessed, and other parts are left unloaded until accessed. In the case of certain Map implementations, the entire key set is loaded when the Map is loaded, but the values are not. Each value is loaded automatically when it is accessed. This can significantly improve heap usage for clustered Maps.
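The access pattern this enables can be sketched as follows. Plain JDK types are shown; in a real Terracotta application the map would be declared as a clustered root in the Terracotta configuration, and the class and field names here are hypothetical:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class PartialLoadingAccessPattern {
        // In a clustered application this map would be a shared root; a plain
        // ConcurrentHashMap stands in for it here.
        private static final Map<String, byte[]> cache =
                new ConcurrentHashMap<String, byte[]>();

        public static void main(String[] args) {
            cache.put("report-2011-11", new byte[1024]);

            // Iterating the key set touches only keys; with a partially loaded
            // clustered map, the (potentially large) values stay on the server.
            for (String key : cache.keySet()) {
                System.out.println("key present: " + key);
            }

            // An explicit get() for a specific key faults just that one value
            // into the local JVM heap.
            byte[] value = cache.get("report-2011-11");
            System.out.println("loaded " + value.length + " bytes");
        }
    }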

However, because the partial loading algorithm for each data structure is different, not all data structures are partially loaded. For those data structures that are partially loaded, the implementation, and therefore the performance characteristics under different scenarios, varies from structure to structure. In the following list of data structures, we describe the partial loading strategy for each and some best practices for taking full advantage of it.

Data Structures

ConcurrentHashMap

Package: java.util.concurrent

Partial Loading: Keys are monolithically loaded; values are partially loaded.

Concurrency: Read access is fully concurrent. Write access is exclusive of read and write access per segment.

ConcurrentHashMap (CHM), unlike a synchronized wrapper around a java.util.HashMap, has very good concurrency characteristics that are exploited by Terracotta in a clustered context.

  • Unlike synchronized collections, which are locked using a monolithic mutex for all operations, CHM uses read and write locks.
  • CHM keys are partitioned into separate segments, so when exclusive access is required (e.g., two threads need to write to the same segment), only that segment is locked instead of the entire data structure.
  • Terracotta clustered CHMs have full read concurrency such that all read operations are allowed to occur concurrently.
  • Terracotta clustered CHMs have a read locking optimization such that all read lock acquisitions become greedy concurrently on all relevant nodes. This allows read lock acquisition to happen without a network hop to the Terracotta server on every node. For more information on the greedy lock optimization in Terracotta, see the Concept and Architecture Guide.
  • Read operations do not have to wait for write operations on other partitions to fully complete.
  • While the key set is monolithically loaded, the values are partially loaded.
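The read/write mix these points describe can be exercised with a small, self-contained sketch (plain JDK code running in a single JVM; the class name ChmReadWriteMix is hypothetical):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ChmReadWriteMix {
        public static void main(String[] args) throws InterruptedException {
            final ConcurrentMap<Integer, Long> hits = new ConcurrentHashMap<Integer, Long>();
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // Writers update disjoint key ranges; with segmented locking they
            // rarely contend, and in a clustered CHM each write locks only its
            // own segment rather than the whole map.
            for (int w = 0; w < 2; w++) {
                final int base = w * 1000;
                pool.execute(new Runnable() {
                    public void run() {
                        for (int i = 0; i < 1000; i++) {
                            hits.put(base + i, System.nanoTime());
                        }
                    }
                });
            }

            // Readers run concurrently with the writers and do not block them.
            for (int r = 0; r < 2; r++) {
                pool.execute(new Runnable() {
                    public void run() {
                        for (int i = 0; i < 1000; i++) {
                            hits.get(i); // may be null if the writer has not reached this key yet
                        }
                    }
                });
            }

            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
            System.out.println("entries written: " + hits.size());
        }
    }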

Coming Soon

Why CHM is good for a mix of read and write, but not as good for read-only.

Coming Soon

Information on the following data structures:

  • Array Primitives
  • ArrayList
  • HashMap
  • Hashtable
  • HashSet
  • LinkedList
  • TreeMap
  • TreeSet
  • Vector
