Integrating Terracotta DSO


Introduction

Terracotta Distributed Shared Objects (DSO) integrates with many of the most powerful Java technologies available today. This document includes resources and provides tips on integrating Terracotta DSO with popular frameworks and libraries.

To learn about integrating with Terracotta products, including Ehcache, Sessions, second-level cache for Hibernate, and Spring, see the Terracotta Product Documentation (/documentation/ga/product-documentation).

Tip

Integrating with a specific application server such as Tomcat is covered in the Terracotta Product Documentation (/documentation/ga/product-documentation). For example, clustering a web application on Tomcat is covered in Clustering Web Applications with Terracotta in the Terracotta Product Documentation.

How Integration Works

Integration with Terracotta DSO is implemented using Terracotta Integration Modules (TIMs), which are installed along with Terracotta clients on the application servers in a cluster. A TIM is a set of configuration elements packaged together as a single, includable module within the Terracotta configuration. Including a TIM in your project is as simple as adding one line to a configuration file.

In the following example, support for clustering Spring Security 2.0 with Terracotta DSO is achieved by adding the following line to the Terracotta configuration file:

Code Block

 <module name="tim-spring-security-2.0" />  

Tip: Module Versions Are Optional

Since the tim-get script finds the optimal version for the current installation of the Terracotta kit, module versions are optional.
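If you do need to pin a module to a specific version, the module element also accepts a version attribute. A sketch of a pinned form of the Spring Security example, with a hypothetical version number:

```xml
<module name="tim-spring-security-2.0" version="2.0.2" />
```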

While a Terracotta configuration file resides with a Terracotta server instance, TIMs are never installed on Terracotta server instances. This is because applications are never integrated with Terracotta server instances, only with Terracotta clients. Terracotta clients get their TIM configurations when they fetch the Terracotta configuration file from a Terracotta server instance or by having their own Terracotta configuration files.

TIMs are available for many software products. The easiest and most efficient way to download and install TIMs is with the tim-get script.

You can view the full complement of TIMs in any of the following ways:

  • See the Terracotta Forge
  • Run tim-get.sh list to see top-level TIMs available to the current installation of Terracotta.
  • Run tim-get.sh list --all to see all TIMs available to the current installation of Terracotta.


For the latest certified versions of platforms, including types and versions of JDKs, see the platforms page.

The Terracotta Toolkit

For certain TIMs, the Terracotta Toolkit is installed as a dependency. You must put the Terracotta Toolkit JAR, terracotta-toolkit-<API version>-<version> (open source) or terracotta-toolkit-<API version>-ee-<version> (Enterprise Edition), on your application's classpath.
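As a concrete sketch of the classpath requirement above (the install path, JAR location, API version 1.2, and version 3.1.0 are all hypothetical; check your own kit for the actual JAR name and location):

```shell
# Hypothetical kit layout and versions -- adjust to match your installation.
TERRACOTTA_HOME=/opt/terracotta
TOOLKIT_JAR="$TERRACOTTA_HOME/common/terracotta-toolkit-1.2-3.1.0.jar"
# Compose the application classpath with the toolkit JAR on it.
CLASSPATH="myapp.jar:$TOOLKIT_JAR"
echo "$CLASSPATH"
```

The application would then be launched with `java -cp "$CLASSPATH" ...` as usual.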

Note: Installing Both Open Source and Enterprise Edition TIMs

If you install both open-source TIMs and Enterprise Edition TIMs, then you must specify both types of Terracotta Toolkit JARs in the Terracotta configuration file. For example, if you want to install tim-tomcat-6.0 and tim-ehcache-2.x-ee, then specify the following:

Code Block (xml)

<modules>
  <module group-id="org.terracotta.toolkit" name="terracotta-toolkit-1.2" />
  <module group-id="org.terracotta.toolkit" name="terracotta-toolkit-1.2-ee" />
  <module name="tim-tomcat-6.0" />
  <module name="tim-ehcache-2.x-ee" />
<!-- Other TIMs here. -->
</modules> 

The Terracotta Toolkit API version available for your Terracotta kit may be different from the one shown in this example.

Available Terracotta DSO Integrations


See the Terracotta Community page (http://www.terracotta.org/Community) for more news on Terracotta community members' success with integrating Terracotta.


Apache Struts

Versions Supported: 1.1

Introduction


Apache Struts (http://struts.apache.org/) is a free, open-source framework for creating Java web applications.

Dependencies

None.

Implementation

To enable clustering with Apache Struts 1.1, add the Struts configuration module to tc-config.xml:

Code Block

<modules>
    <module name="tim-apache-struts-1.1" />
</modules>

Then run the following command from ${TERRACOTTA_HOME} to automatically install the correct version of the TIM:

No Format: UNIX/Linux

[PROMPT] bin/tim-get.sh install-for <path/to/tc-config.xml>

No Format: Microsoft Windows

[PROMPT] bin\tim-get.bat install-for <path\to\tc-config.xml>

Known Issues

None.


iBatis

Versions Supported: 2.2.0


Introduction

The Apache iBatis Data Mapper framework is an ORM solution that makes it easier to use a database with Java and .NET applications.

iBatis allows lazy loading. When enabled, a POJO object returned from an iBatis query will contain a proxy object to the referenced object. For instance, a Customer object returned from a query to the Customer table will have a proxy object in its reference to the Account object. iBatis will de-reference the proxy object when the Account object is queried at a later point in time.

With the Terracotta integration module for iBatis, one node can query a Customer table lazily and share the returned Customer object, while a second node can de-reference the Account object reference and obtain the referenced Account object.
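The proxy behavior described above can be illustrated with a plain java.lang.reflect.Proxy. This is only a stdlib sketch of the lazy-loading idea, not the actual iBatis implementation; the Account interface and the "ACC-1" value are invented for the example:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Account {
    String number();
}

public class LazyAccountDemo {
    // Returns a proxy; the real Account is only resolved on first use.
    static Account lazyAccount() {
        return (Account) Proxy.newProxyInstance(
                Account.class.getClassLoader(),
                new Class<?>[] { Account.class },
                new InvocationHandler() {
                    private Account target; // loaded on demand
                    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                        if (target == null) {
                            // In iBatis this step would be the deferred database query.
                            target = new Account() {
                                public String number() { return "ACC-1"; }
                            };
                        }
                        return m.invoke(target, args);
                    }
                });
    }

    public static void main(String[] args) {
        Account account = lazyAccount();      // no load has happened yet
        System.out.println(account.number()); // the proxy resolves the target here
    }
}
```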

Dependencies

None.

Implementation

To enable clustering with iBatis, add the iBatis configuration module to tc-config.xml:

Code Block (xml)

<modules>
    <module name="tim-iBatis-<iBatis_version>"/>
</modules>

For example, to cluster iBatis 2.2.0, use the module tim-iBatis-2.2.0 by adding the following snippet to tc-config.xml:

Code Block (xml)

<modules>
    <module name="tim-iBatis-2.2.0"/>
</modules>

Then run the following command from ${TERRACOTTA_HOME} to automatically install the correct version of the TIM:

No Format: UNIX/Linux

[PROMPT] bin/tim-get.sh install-for <path/to/tc-config.xml>

No Format: Microsoft Windows

[PROMPT] bin\tim-get.bat install-for <path\to\tc-config.xml>

Known Issues

None.


Lucene

Introduction


The Apache Lucene project (http://lucene.apache.org/) develops open-source search software. Lucene support is included via Compass (http://www.compass-project.org/) and is written about at http://www.kimchy.org/compass-200-rc1-released/.
Tip

The Terracotta integration module tim-searchable, available through the Terracotta Forge, is reported to improve clustering of large Lucene indexes.
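Including tim-searchable would follow the same TIM pattern shown elsewhere in this document (a sketch; confirm the exact module name and any version on the Terracotta Forge):

```xml
<modules>
    <module name="tim-searchable" />
</modules>
```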

Integrating Lucene with Terracotta

By Nam Chu

Overview

This section shows how to integrate Lucene 2.3.2 with Terracotta using the Compass TIM.

1. Download a Compass release. In this document, we use the Compass 2.0.2 release.

2. Unzip the downloaded file.

3. Go to the unzipped folder, then to the "dist" folder. You'll need these files:

  • compass-2.0.2.jar (the TIM)
  • commons-logging.jar
  • lucene-core.jar (the Lucene 2.3.2 release), found in the "lucene" folder

Add these three JAR files to your application classpath.

4. Next, create a folder structure such as TC_HOME/modules/org/compass-project/compass/2.0.2, drop the compass-2.0.2.jar file in this directory, and add the following to the tc-config file:

Code Block


<modules>
      <module group-id="org.compass-project" name="compass" version="2.0.2"/>
 </modules>

Alternatively, if you do not want to create a separate folder, you can specify a repository in the tc-config file:

Code Block


<modules>
      <!-- You may change the line below to point at the right folder -->    
      <repository>/home/neo/workspace/TerracottaLucene/lib</repository>
      <module group-id="org.compass-project" name="compass" version="2.0.2"/>
 </modules>

Now you are ready to build a Terracotta Lucene application.

Sharing an index

Let's make a quick comparison of the index formats in Lucene (FSDirectory, RAMDirectory, and TerracottaDirectory) to help you choose how to cluster your Lucene-based application:

 

                     | FSDirectory | RAMDirectory           | TerracottaDirectory
Index read from      | Hard drive  | RAM                    | RAM (but must be faulted from the TC server)
Read speed           | Fast        | Very fast              | Depends on network speed
Memory footprint     | Small       | Depends on index size  | Small
Pre-instrumentation  | No          | No                     | Yes (with Compass TIM)

Note

We recommend using TerracottaDirectory. RAMDirectory integration is not supported.

Now we will share a TerracottaDirectory index, which is similar to a RAMDirectory. In my opinion, this class is better, since a TerracottaDirectory object is broken into many small buckets. We can thus take advantage of partial faulting by Terracotta: instead of sending the whole index, the Terracotta server sends only the required buckets to its clients. Moreover, thanks to pre-instrumentation, we can cluster this class without any extra configuration. (Further information can be found in the Compass documentation.)
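The bucket idea can be sketched with plain collections. This is an illustration only, not the Compass TerracottaDirectory implementation; the bucket size and names are invented:

```java
import java.util.HashMap;
import java.util.Map;

public class BucketedIndexDemo {
    static final int BUCKET_SIZE = 4; // tiny, for illustration

    // Split a byte array into fixed-size buckets keyed by bucket index.
    static Map<Integer, byte[]> toBuckets(byte[] data) {
        Map<Integer, byte[]> buckets = new HashMap<Integer, byte[]>();
        for (int i = 0; i < data.length; i += BUCKET_SIZE) {
            int len = Math.min(BUCKET_SIZE, data.length - i);
            byte[] b = new byte[len];
            System.arraycopy(data, i, b, 0, len);
            buckets.put(i / BUCKET_SIZE, b);
        }
        return buckets;
    }

    // A reader touching one position only needs the bucket that holds it --
    // in a clustered map, only that bucket would be faulted in from the server.
    static byte readByte(Map<Integer, byte[]> buckets, int pos) {
        return buckets.get(pos / BUCKET_SIZE)[pos % BUCKET_SIZE];
    }

    public static void main(String[] args) {
        Map<Integer, byte[]> buckets = toBuckets("hello world".getBytes());
        System.out.println((char) readByte(buckets, 6));
    }
}
```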

Note

In my code, to avoid extra configuration for our application, I'm using ConcurrentHashMap, which is also pre-instrumented by Terracotta.

Sample Code

Let's look at a small sample application:
The first class, called SharedTerracottaDirectories, has a shared map which is declared as a root in tc-config.xml, containing some indexes (that means these indexes are shared too).
In addition, I have two static methods:

  • addDoc(): add a document to an index.
  • search(): search a word from an index.
Code Block

public class SharedTerracottaDirectories {
	// This is the shared map where index will be stored.
	public static ConcurrentHashMap<String, Directory> indexMap = new ConcurrentHashMap<String, Directory>();

	public static void addDoc(Directory directory, String text, Boolean create) {
		try {
			// Adding a document to the local index
			IndexWriter writer = new IndexWriter(directory,
					new StandardAnalyzer(), create);
			Document doc = new Document();
			doc.add(new Field("fieldname", text, Field.Store.YES,
					Field.Index.TOKENIZED));
			System.out.println("Adding this doc --" + doc.get("fieldname")
					+ "-- to the index");
			writer.addDocument(doc);
			writer.optimize();
			writer.close();
		} ...
	}

	public static void search(Directory directory, String text) {
		// Now search the index:
		try {
			IndexSearcher searcher = new IndexSearcher(directory);

			// Parse a simple query that searches for the string "text":
			QueryParser parser = new QueryParser("fieldname",
					new StandardAnalyzer());
			Query query = parser.parse(text);
			Hits hits = searcher.search(query);
			// Iterate through the results:
			System.out.println("Search results for \""+text+"\":");
			for (int i = 0; i < hits.length(); i++) {
				Document hitDoc = hits.doc(i);
				System.out.println(hitDoc.get("fieldname"));
			}
			// Close the index
			searcher.close();
			directory.close();
		} ...
	}
}

We must declare the map as a root to make it shared:

Code Block


<root>
 <field-name>demo.SharedTerracottaDirectories.indexMap</field-name>
</root>

Now run Main1:

  • We create a TerracottaDirectory index.
  • Then we add a document to this index.
  • Finally, we share the index across the cluster by putting it into the shared map.

Code Block


public class Main1 {
	public static void main(String[] args) {
		TerracottaDirectory directory = new TerracottaDirectory();
		SharedTerracottaDirectories.addDoc(directory, "Hello Terracotta", true);
		SharedTerracottaDirectories.indexMap.put("1", directory);
	}
}

Now let's run Main2:

  • We retrieve the index from the shared map.
  • We add a new document.
  • Then we search the new index.

The search results are as expected, which means the shared index is working well: we can add new documents to it and search it across JVMs.

Code Block


public class Main2 {
	public static void main(String[] args) {
		Directory directory = SharedTerracottaDirectories.indexMap.get("1");
		SharedTerracottaDirectories.addDoc(directory, "Bonjour Terracotta",
				false);
		SharedTerracottaDirectories.search(directory, "Terracotta");
	}
}

You can see it's really simple, because a lot of things are pre-configured in the Compass TIM.

Master/Worker model - Sharing queries/results

Building a Master/Worker model may speed up a Lucene application. In that case, we need to share queries and results. I won't go into the details, but will only show that it is possible.

Again, I write a class named SharedQueryAndHitCollector, which contains two shared maps:

  • One for queries.
  • One for HitCollectors (a kind of Lucene query result).

Code Block


public class SharedQueryAndHitCollector {
	public static ConcurrentHashMap<String, Query> queryMap = new ConcurrentHashMap<String, Query>();
	public static ConcurrentHashMap<String, HitCollector> hitCollectorMap = new ConcurrentHashMap<String, HitCollector>(); 

}


So I add these roots to the tc-config:

Code Block


        <root>
          <field-name>demo.SharedQueryAndHitCollector.queryMap</field-name>
        </root>
        <root>
          <field-name>demo.SharedQueryAndHitCollector.hitCollectorMap</field-name>
        </root>

You can see that we manage the queries and the results just as we deal with POJOs.

I also write a class named MyHitCollector, which inherits from HitCollector. This class simply counts the number of hit documents.

Code Block


public class MyHitCollector extends HitCollector {
	AtomicInteger hit = new AtomicInteger();
	@Override
	public void collect(int arg0, float arg1) {
		// Count each hit document.
		hit.incrementAndGet();
	}
...
}

Note

AtomicInteger is also pre-instrumented, so no extra configuration is required.

In the Main4 program, I've shared two queries:

  • A term query.
  • A MatchAllDocsQuery query.

Code Block


public class Main4 {
	public static void main(String[] args) {
		// Making 2 queries
		QueryParser parser = new QueryParser("fieldname",
				new StandardAnalyzer());
		Query q1 = null;
		try {
			q1 = parser.parse("Hello"); // This query looks for documents having
			// the word "Hello"
		} ...

		Query q2 = new MatchAllDocsQuery(); // This query will match all
											// documents

		// Sharing 2 queries
		SharedQueryAndHitCollector.queryMap.put("1", q1);
		SharedQueryAndHitCollector.queryMap.put("2", q2);

		// Making 2 MyHitCollectors for 2 queries
		MyHitCollector h1 = new MyHitCollector();
		MyHitCollector h2 = new MyHitCollector();

		Directory directory = SharedTerracottaDirectories.indexMap.get("1");
		try {
			IndexSearcher searcher = new IndexSearcher(directory);
			// Execute the first query and stock the result into the first
			// MyHitCollector
			Query q = SharedQueryAndHitCollector.queryMap.get("1");
			searcher.search(q, h1);
			// Execute the second query and stock the result into the second
			// MyHitCollector
			q = SharedQueryAndHitCollector.queryMap.get("2");
			searcher.search(q, h2);

			// Sharing 2 HitCollectors
			SharedQueryAndHitCollector.hitCollectorMap.put("1", h1);
			SharedQueryAndHitCollector.hitCollectorMap.put("2", h2);
		} ...
		System.out.println(h1.hit + " hit(s) for query 1");
		System.out.println(h2.hit + " hit(s) for query 2");
	}
}

Then I execute these two queries and store the results in two MyHitCollector objects. Finally, I can share these two MyHitCollectors in my cluster. You can check all these shared objects in the Terracotta Developer Console:

  • 1 hit document for the first query.
  • 2 hit documents for the second query.

For the configuration, I only have to instrument the shared classes:

Code Block


        <include>
          <class-expression>org.apache.lucene.search.Query</class-expression>
        </include>
        <include>
          <class-expression>org.apache.lucene.search.TermQuery</class-expression>
        </include>
        <include>
          <class-expression>org.apache.lucene.index.Term</class-expression>
        </include>
        <include>
          <class-expression>org.apache.lucene.search.MatchAllDocsQuery</class-expression>
        </include>
        <include>
          <class-expression>demo.MyHitCollector</class-expression>
        </include>
        <include>
          <class-expression>org.apache.lucene.search.HitCollector</class-expression>
        </include>

Now we've explored how to cluster Lucene via Terracotta. If you have any questions or suggestions, I can be reached at neochu1983@yahoo.com.


Quartz

Info: Quartz is Now a Terracotta Product

Quartz is now a Terracotta product. The following information pertains to older versions of Quartz and is not recommended unless you are bound to a pre-1.7 version of Quartz. For the latest information on integrating Terracotta and Quartz, see the Terracotta Product Documentation (/documentation/ga/product-documentation).

Versions Supported: 1.5.1, 1.6.1_RC

Overview

Quartz (http://www.quartz-scheduler.org) is an open-source job-scheduling system that can be integrated with, or used alongside, virtually any J2EE or J2SE application. Quartz can be used to create simple or complex schedules for executing tens, hundreds, or even tens of thousands of jobs, whose tasks are defined as standard Java components or EJBs.

The Quartz Terracotta integration enables Quartz clustering based on RAMJobStore. Compared to the existing JobStoreTx or JobStoreCMT DB-based clustering, Terracotta-based RAMJobStore clustering is much more efficient and easier to configure.

Configuration Module

The quickest and easiest way to cluster Quartz is to include the Quartz configuration module in your Terracotta XML configuration. Add the following Quartz-specific module to tc-config.xml:

Code Block (xml)

<clients>
    <modules>
         <repository>./modules</repository>
         <module name="tim-quartz-1.5.1" />
    </modules>
</clients>

Tip

To integrate Quartz 1.6.1, use tim-quartz-1.6.1.

Then run the following command from ${TERRACOTTA_HOME} to automatically install the correct version of the TIM:

No Format: UNIX/Linux

[PROMPT] bin/tim-get.sh install-for <path/to/tc-config.xml>

No Format: Microsoft Windows

[PROMPT] bin\tim-get.bat install-for <path\to\tc-config.xml>

What is supported

1. A clustered version of Quartz RAMJobStore is available when using tim-quartz along with Terracotta.

2. Job executions are load balanced across cluster nodes: Each Scheduler instance in the cluster attempts to fire scheduled triggers as fast as the Scheduler permits. The Scheduler instances compete for the right to execute a job by firing its trigger. When a trigger for a job has been fired, no other Scheduler will attempt to fire that particular trigger until the next scheduled firing time.

3. Recovery from failed scheduler instances: When a Scheduler instance fails, Terracotta detects it immediately and Jobs are recovered to be executed by another scheduler at their next execution time.

4. Jobs and trigger information survive node failures: If the Terracotta server is configured to run in persistent mode, information is maintained across server restarts. That is, a job triggered on one node can resume execution on another node if the current node goes down or dies.

5. Recoverable jobs are executed immediately. If their scheduler fails, they are executed by another scheduler instance in the cluster.
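The trigger competition described in point 2 above can be sketched with an atomic claim. This is a stdlib illustration of the idea, not Quartz's actual locking code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TriggerClaimDemo {
    // In the real cluster, the claim state would live in shared (Terracotta-managed) memory.
    static boolean tryFire(AtomicBoolean claim) {
        // Only the first scheduler to flip the flag wins the right to execute the job.
        return claim.compareAndSet(false, true);
    }

    public static void main(String[] args) {
        AtomicBoolean trigger = new AtomicBoolean(false);
        System.out.println("node-1 fired: " + tryFire(trigger)); // true
        System.out.println("node-2 fired: " + tryFire(trigger)); // false
    }
}
```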
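Point 4 above depends on the Terracotta server running in persistent mode. A sketch of the relevant tc-config.xml fragment (element names as in Terracotta 3.x; verify against your kit's configuration reference, and the host/name values are placeholders):

```xml
<servers>
    <server host="localhost" name="server1">
        <dso>
            <persistence>
                <mode>permanent-store</mode>
            </persistence>
        </dso>
    </server>
</servers>
```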

Comparison with DB based clustering from Quartz

1. The Terracotta implementation is much faster, as everything works in RAM (compared to the DB approach used in JobStoreTx and JobStoreCMT). Terracotta's field-level changes make updates to Jobs and Triggers lightning fast.

2. Hassle free. No DB setup is required.

3. Failed jobs are recovered immediately, compared to the DB-based approach where scheduler instances are checked for failure on an interval.

4. No Jgroups or any other cluster configuration is required.

Public Source Repository

SVN: http://svn.terracotta.org/svn/forge/projects/tim-quartz

Private Source Repository (for committers)

SVN: https://svn.terracotta.org/svn/forge/projects/tim-quartz

Rife

Versions Supported: 1.6.0

Introduction

RIFE (http://rifers.org/) is a full-stack web application framework with tools and APIs to implement most common web features.

Dependencies

None.

Implementation

To integrate RIFE 1.6.0, add the following snippet to tc-config.xml:

Code Block

<modules>
    <module name="tim-rife-1.6.0" />
</modules>
Then run the following command from ${TERRACOTTA_HOME} to automatically install the correct version of the TIM:

No Format: UNIX/Linux

[PROMPT] bin/tim-get.sh install-for <path/to/tc-config.xml>

No Format: Microsoft Windows

[PROMPT] bin\tim-get.bat install-for <path\to\tc-config.xml>

Known Issues

None.

Anchor
tc_cache
tc_cache

Terracotta Tree Map Cache

Introduction

Terracotta Tree Map Cache, a Terracotta integration module (TIM), is a drop-in replacement for JBoss TreeCache and PojoCache. Terracotta Cache is a distributed cache that is API-compatible with the JBoss TreeCache and PojoCache components. Terracotta Cache can replace these JBoss components with minimal code changes.

Typically, changing a single line of code to instantiate TerracottaCache() instead of TreeCache() is enough, because Terracotta Cache maintains most of the JBoss TreeCache APIs.
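As an illustrative sketch only (the class name, node path, and keys below are assumptions for illustration, not values taken from this document), the one-line swap might look like this:

Code Block

// Before: JBoss TreeCache
// TreeCache cache = new TreeCache();

// After: Terracotta Tree Map Cache (class name assumed for illustration)
TerracottaCache cache = new TerracottaCache();

// The rest of the code continues to use the familiar TreeCache-style API
cache.put("/app/users", "alice", profile);
Object value = cache.get("/app/users", "alice");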

Dependencies

The following integration modules must be installed with the Terracotta Cache module:

  • tim-synchronizedcollection
  • tim-synchronizedset
  • tim-synchronizedsortedset
  • tim-synchronizedmap
  • tim-synchronizedsortedmap

See tim-collections for more information.

Implementation

After you download and unpack the synchronized Java collection TIMs listed above into the Terracotta modules directory, add the following to your tc-config.xml:

Code Block

<clients>
   <modules>
      <module name="tim-tree-map-cache" />
   </modules>
</clients>

Then run the following command from ${TERRACOTTA_HOME} to automatically install the correct version of the TIM:

No Format
titleUNIX/Linux

[PROMPT] bin/tim-get.sh install-for <path/to/tc-config.xml>
No Format
titleMicrosoft Windows

[PROMPT] bin\tim-get.bat install-for <path\to\tc-config.xml>

Known Issues

  • Client applications should declare a CacheManager field and mark it as root.
  • Terracotta Cache currently replaces JBoss Cache v1.4. Due to incompatible API changes, it cannot replace JBoss Cache 2.0.
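For the first point, a root declaration in tc-config.xml might look like the following sketch; the class and field names shown are placeholders for illustration, not values from this document:

Code Block

<roots>
   <root>
      <field-name>com.example.MyApp.cacheManager</field-name>
   </root>
</roots>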

Anchor
wicket
wicket

Wicket

Versions Supported

1.3

Apache Wicket is a feature-rich web-application development framework based on Java and HTML. The Terracotta integration module provides clustering for Wicket, primarily of user sessions for fail-over.

Richard Wilkinson has created an example clustering a simple Wicket application with Terracotta, available on his blog.

Dependencies

Java 1.5.

Implementation

To integrate Wicket 1.3, add the following snippet to tc-config.xml:

Code Block

<modules>
    <module name="tim-wicket-1.3" />
</modules>

Then run the following command from ${TERRACOTTA_HOME} to automatically install the correct version of the TIM:

No Format
titleUNIX/Linux

[PROMPT] bin/tim-get.sh install-for <path/to/tc-config.xml>
No Format
titleMicrosoft Windows

[PROMPT] bin\tim-get.bat install-for <path\to\tc-config.xml>

Known Issues

None.

Anchor
grails
grails

Grails

Courtesy of ALTERThought. Direct inquiries to ALTERThought.

Overview

This section will show how to cluster Grails using Terracotta. We will cover:

  • Generation of a Terracotta configuration file
  • Generation of Terracotta-enabled startup scripts for containers

Usage

1. Install the Terracotta plugin

Code Block

grails install-plugin terracotta

2. Run the startup script generation task
3. Run the Terracotta configuration generation task
4. Generate a war for the application and deploy it in a container
5. Start the Terracotta server(s)
6. Start the application servers

Generate <ContainerName> Script

Currently only two containers are supported: JBoss and Tomcat.

Edit TerracottaConfig.properties to set the parameters for the startup scripts.

The Terracotta install directory on the target JBoss or Tomcat server:

Code Block

terracotta.install.dir

The path where the tc-config.xml will be made available on the servers (this plugin does not copy it remotely; you have to do so manually):

Code Block

terracotta.config.path
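Taken together, a minimal TerracottaConfig.properties might look like the following sketch; the paths shown are assumptions for illustration:

Code Block

terracotta.install.dir=/opt/terracotta
terracotta.config.path=/opt/terracotta/conf/tc-config.xml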

Run

Code Block

grails generate-jboss-scripts

Copy the generated scripts into the startup script folders of the containers. You will later start the containers using these scripts instead of the standard ones.

Generate TcConfig

Generates the tc-config.xml required to run a Terracotta-enabled container. It enables the clustering of all the domain classes defined in the project, and allows any additional classes to be made clusterable by including a user-defined XML segment. It also clusters the HTTP sessions across the container instances for your application.


Edit TerracottaConfig.properties to set the host names and ports for the Terracotta servers to be used by the application.

Edit CustomIncludes.xml to specify any additional Terracotta include rules required by your application.


Run

Code Block

grails generate-tc-config

Copy the generated file to each container server, in the location specified when generating the container startup scripts.


Running the Clustered Application

Deploy your application on all the container instances. Start the Terracotta server(s), then start the containers using the generated startup scripts.


Note

You need a form of load balancing to witness the effects of the clustering, such as Apache with mod_jk.
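As a sketch of one such setup, a minimal mod_jk workers.properties balancing two container nodes might look like this; the host names and ports are assumptions for illustration:

Code Block

worker.list=balancer
worker.node1.type=ajp13
worker.node1.host=node1.example.com
worker.node1.port=8009
worker.node2.type=ajp13
worker.node2.host=node2.example.com
worker.node2.port=8009
worker.balancer.type=lb
worker.balancer.balance_workers=node1,node2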

Test Session Continuity

Start one node of your cluster. Start a session (log in your app, ...).

Stop that server instance, then start a second instance. Continue using your application without losing your session!


Additional Documentation

1. The ALTERthought blog
2. The Grails blog

Shibboleth

Overview

Shibboleth allows users to securely send trusted information about themselves to remote resources. This information may then be used for authentication, authorization, content personalization, and enabling single sign-on across a broad range of services from many different providers.

IdP Clustering via Terracotta

To cluster IdP, the Shibboleth team recommends using Terracotta to replicate the IdP's in-memory state between nodes.

For fail-over, you should run a Terracotta server on each IdP node. One Terracotta server will run in primary mode while the others run in "hot-standby" (backup) mode. If the primary server fails, the other servers elect a new primary and clients can reconnect to it.
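As a sketch, the <servers> section of tc-config.xml for such a setup simply lists one server per IdP node; the host and server names are assumptions for illustration:

Code Block

<servers>
   <server host="idp1.example.org" name="idp1"/>
   <server host="idp2.example.org" name="idp2"/>
</servers>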

The details for IdP clustering using Terracotta are on Shibboleth's site.

Anchor
glassbox
glassbox
Anchor
websphere
websphere
Anchor
glassfish
glassfish

Not Supported at this Time

  • Glassbox
  • IBM WebSphere
  • WebSphere CE
  • Geronimo

Other Integration Resources

  • The TIM Manual – What if your technology isn't listed? Build your own Terracotta Integration Module (TIM)! Also learn how to manage TIM versions in accordance with Terracotta policies.
  • Enterprise Services – If a TIM is not available for your technology and you can't devote resources to build your own, we can custom-build it.
  • Certified platforms page – A one-page list of certified platforms for the current release.
  • The Terracotta Forge – One of the most effective resources available for successfully integrating your project with Terracotta is the Terracotta Forge.