How JSR107 Caching Annotations are meant to be used

I am getting a few questions lately on JSR107 caching annotations and whether implementations of JSR107 are providing them.
Caching annotations can be added to your Java classes and will invoke caching operations when the annotated methods are called. For example, below is an annotated BlogManager.
import java.util.HashMap;
import java.util.Map;

import javax.cache.annotation.CacheDefaults;
import javax.cache.annotation.CacheKey;
import javax.cache.annotation.CacheRemove;
import javax.cache.annotation.CacheRemoveAll;
import javax.cache.annotation.CacheResult;

@CacheDefaults(cacheName = "blgMngr")
public class BlogManagerImpl implements BlogManager {

    private static Map<String, Blog> map = new HashMap<String, Blog>();

    @CacheResult
    public Blog getEntryCached(String title) {
        return map.get(title);
    }

    public Blog getEntryRaw(String title) {
        return map.get(title);
    }

    /**
     * @see manager.BlogManager#clearEntryFromCache(java.lang.String)
     */
    @CacheRemove
    public void clearEntryFromCache(String title) {
    }

    public void clearEntry(String title) {
        map.put(title, null);
    }

    @CacheRemoveAll
    public void clearCache() {
    }

    public void createEntry(Blog blog) {
        map.put(blog.getTitle(), blog);
    }

    @CacheResult
    public Blog getEntryCached(String randomArg, @CacheKey String title,
                               String randomArg2) {
        return map.get(title);
    }

}

Caching annotations, though defined in JSR107, are not meant to be provided by a CachingProvider such as Hazelcast. Instead they must be provided by the dependency injection containers: Spring, Guice and CDI (for Java EE). For Java EE this will happen in Java EE 8, which is a couple of years away. Spring support is coming in 4.1 and is available now for developers to play with in snapshot form. See https://spring.io/blog/2014/04/14/cache-abstraction-jcache-jsr-107-annotations-support for how to use it.
While it will take time for the DI containers to add support, in the JSR107 RI we have written a module for each of them. That code can be added to your existing DI container to enable caching annotation processing. See https://github.com/jsr107/RI/tree/master/cache-annotations-ri.

I will be joining Hazelcast as CTO

I am very excited to announce that I will be joining the world-class team at Hazelcast as CTO.

Hazelcast (www.hazelcast.com) develops, distributes and supports the leading open source in-memory data grid. The product, also called Hazelcast, is a free open source download under the Apache license that any developer can include in minutes to enable them to build elegantly simple mission-critical, transactional, and terascale in-memory applications. The company provides commercially licensed Enterprise extensions, Hazelcast Management Console and professional open source training, development support and deployment support. The company is privately held and headquartered in Palo Alto, California.

What this means for Hazelcast Users

I will join my efforts to those of Hazelcast CEO Talip Ozturk and the team, bringing my deep knowledge of caching to Hazelcast to complement that of the team. I will be out and about at conferences and visiting customers.

With the team we will be figuring out what great features to add to Hazelcast and how to improve the leading open source In-Memory Data Grid.

We will also be bringing to market Enterprise Extensions which add high-value features built around the Hazelcast core.

Hazelcast has made an announcement that puts this move into their own words.

What this means for Terracotta BigMemory Users

We will develop a comparable caching/operational store project and product based on the Hazelcast core. This will then be an alternative for BigMemory users.

What this means for Ehcache Users

Ownership of Ehcache was transferred to Terracotta four and a half years ago when I joined them, and Terracotta has maintained it since.

While Ehcache remains widely used today, the open source version is only suitable for single-node caching. This is not that useful for most production contexts, so it is not directly competitive with Hazelcast or, for that matter, with In-Memory Data Grids, which deal with clusters of computers.

I expect Ehcache will implement JCache and that in the future those ISVs and open source frameworks which currently define Ehcache as their caching layer will instead define it using JCache, of which Ehcache will be one provider.

Hazelcast is already developing its JCache implementation, which is up on GitHub.

What this means for JCache

JCache is the new standard for caches and IMDGs. It includes a key-value API suitable for caches and operational stores. Importantly, it was designed primarily for IMDGs. Listeners, loaders, writers and other user-defined classes are expected to be executed somewhere in the cluster, not in-process. And the spec defines single and batch EntryProcessors, the defining feature of IMDGs, which enable in-situ computation.
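
To make in-situ computation concrete, here is a minimal sketch of an EntryProcessor against the near-final JCache API. The cache, key name and counter semantics are illustrative only.

import java.io.Serializable;

import javax.cache.Cache;
import javax.cache.processor.EntryProcessor;
import javax.cache.processor.MutableEntry;

// Increments a counter where the entry lives, so the value is not shipped
// to the client and back for a read-modify-write cycle.
public class IncrementProcessor implements EntryProcessor<String, Integer, Integer>, Serializable {

    @Override
    public Integer process(MutableEntry<String, Integer> entry, Object... arguments) {
        int next = (entry.exists() ? entry.getValue() : 0) + 1;
        entry.setValue(next);
        return next;
    }

    // Invoked against an existing cache; in an IMDG the processor executes
    // on the member that owns the key.
    public static Integer increment(Cache<String, Integer> counters, String key) {
        return counters.invoke(key, new IncrementProcessor());
    }
}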

I will continue to act as spec lead on new maintenance releases of JCache. I will also work with the Java EE 8 expert group, who are including JCache in Java EE 8, and with open source frameworks and ISVs as they move to add a JCache layer to their architectures.

Hazelcast will be one of the first to market with an implementation of JCache, which should be production-ready in February.

As it is Apache 2 open source, I encourage open source frameworks and ISVs to include Hazelcast in their distributions as they add JCache. That way they can ship with an out of the box IMDG built in without locking themselves or their users/customers into a single vendor.

JSR107 JCache enters Public Draft Review

On 5 July 2013, JSR107 went into Public Draft Review, which will last until 4 August.

Visit the JSR home page on GitHub to get the code. Or read the spec.

I encourage anyone interested in Java caching to take a look and give us feedback, which you can do by posting an issue on GitHub for specific things, or by posting to the JSR107 Google Groups mailing list for more general feedback.

JSR107 enters Early Draft Review after nearly 12 years

On 23 October 2012 the JCP posted the Early Draft specification and API for JSR107. See http://jcp.org/en/jsr/detail?id=107. This is almost 12 years since the JSR kicked off. Note that this material was uploaded to the JCP in February this year but was delayed while the legal complications of having two companies as shared spec leads got sorted out. That is now done and will not be an issue going forward in the process.
We will now be working intensively to drive this to completion in its own right and for inclusion in Java EE 7. We expect to be final in early 2013.
We need your review
In the meantime the early draft review period is open until 22 November. Please visit the home of the project at https://github.com/jsr107 and send your comments to jsr107@googlegroups.com or create issues at https://github.com/jsr107/jsr107spec/issues. For a quick intro see https://github.com/jsr107/jsr107spec.
We have also just added a few new artifacts up on GitHub:
- A very simple demo which can be used when giving talks. https://github.com/jsr107/demo
- ehcache-jcache – an implementation of the 0.5 specification that works with the latest version of ehcache. https://github.com/jsr107/ehcache-jcache
Remaining JCP 2.7 Process Steps
The review period ends 22 November. Once we have a public draft, we will submit that for 30 days’ review. An EC ballot will be held in the last week of the public draft period before we move on to complete the RI and TCK and seek final approval.
Java EE 7 Deadline
We have sought clarification from the EE JSR on their deadline. It is fast approaching. We therefore intend to go hard.

javax.cache: The new Java Caching Standard

This post explores the new Java caching standard: javax.cache.

How it Fits into the Java Ecosystem

This standard is being developed by JSR107, of which the author is co-spec lead. JSR107 is included in Java EE 7, being developed by JSR342. Java EE 7 is due to be finalised at the end of 2012. But in the meantime javax.cache will work in Java SE 6 and higher and Java EE 6 environments, as well as with Spring and other popular environments.

JSR107 has draft status. We are currently at release 0.3 of the API, the reference implementation and the TCK. The code samples in this article work against this version.

Adoption

Vendors who are either active members of the expert group or have expressed interest in implementing the specification are:

  • Terracotta – Ehcache
  • Oracle – Coherence
  • JBoss – Infinispan
  • IBM – eXtreme Scale
  • SpringSource – Gemfire
  • GridGain
  • TMax
  • Google App Engine Java

Terracotta will be releasing a module for Ehcache to coincide with the final draft and then updating that if required for the final version.

Features

From a design point of view, the basic concepts are a CacheManager that holds and controls a collection of Caches. Caches have entries. The basic API can be thought of as map-like, with the following additional features:

  • atomic operations, similar to java.util.ConcurrentMap
  • read-through caching
  • write-through caching
  • cache event listeners
  • statistics
  • transactions including all isolation levels
  • caching annotations
  • generic caches which hold a defined key and value type
  • definition of storage by reference (applicable to on heap caches only) and storage by value

Optional Features

Rather than split the specification into a number of editions targeted at different user constituencies such as Java SE and Spring/EE, we have taken a different approach.

Firstly, for Java SE style caching there are no dependencies. And for Spring/EE where you might want to use annotations and/or transactions, the dependencies will be satisfied by those frameworks.

Secondly we have a capabilities API via ServiceProvider.isSupported(OptionalFeature feature) so that you can determine at runtime what the capabilities of the implementation are. Optional features are:

  • storeByReference – storeByValue is the default
  • transactional
  • annotations

This makes it possible for an implementation to support the specification without necessarily supporting all the features, and allows end users and frameworks to discover what the features are so they can dynamically configure appropriate usage.
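
For example, code that needs transactions only when they are available can check at runtime. This is only a sketch against the draft API described above: provider is a ServiceProvider instance (one way of obtaining one is shown under "Creating a CacheManager" below), and the exact enum constant names are assumptions that may vary between draft releases.

// Sketch only: ServiceProvider and OptionalFeature are the draft types named above.
if (provider.isSupported(OptionalFeature.TRANSACTIONS)) {
    // safe to rely on transactional caches with this implementation
} else {
    // fall back to non-transactional behaviour
}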

Good for Standalone and Distributed Caching

While the specification does not mandate a particular distributed cache topology, it is cognizant that caches may well be distributed. We have one API that covers both usages but it is sensitive to distributed concerns. For example, CacheEntryListener has a NotificationScope of events it listens for so that events can be restricted to local delivery. We do not have high network cost map-like methods such as keySet() and values(). And we generally prefer zero or low cost return types. So while Map has V put(K key, V value), javax.cache.Cache has void put(K key, V value).

Classloading

Caches contain data shared by multiple threads which may themselves be running in different container applications or OSGi bundles within one JVM and might be distributed across multiple JVMs in a cluster. This makes classloading tricky.

We have addressed this problem. When a CacheManager is created a classloader may be specified. If none is specified the implementation provides a default. Either way object de-serialization will use the CacheManager’s classloader.

This is a big improvement over the approach taken by caches like Ehcache that use a fall-back approach. First the thread’s context classloader is used and, if that fails, another classloader is tried. This can be made to work in most scenarios but is a bit hit-and-miss and varies considerably by implementation.

Getting the Code

The spec is in Maven central. The Maven snippet is:

<dependency>
     <groupId>javax.cache</groupId>
     <artifactId>cache-api</artifactId>
     <version>0.3</version>
</dependency>

A Cook’s Tour of the API

Creating a CacheManager

We support the Java 6 java.util.ServiceLoader creational approach. It will automatically detect a cache implementation in your classpath. You then create a CacheManager with:

CacheManager cacheManager = Caching.getCacheManager();

which returns a singleton CacheManager called “__default__”. Subsequent calls return the same CacheManager.

CacheManagers can have names and classloaders configured in, e.g.

CacheManager cacheManager = Caching.getCacheManager("app1", Thread.currentThread().getContextClassLoader());

Implementations may also support direct creation with new for maximum flexibility:

CacheManager cacheManager = new RICacheManager("app1", Thread.currentThread().getContextClassLoader());

Or to do the same thing without adding a compile time dependency on any particular implementation:

String className = "javax.cache.implementation.RIServiceProvider";
Class<ServiceProvider> clazz = (Class<ServiceProvider>) Class.forName(className);
ServiceProvider provider = clazz.newInstance();
return provider.createCacheManager(Thread.currentThread().getContextClassLoader(), "app1");

We expect implementations to have their own well-known configuration files which will be used to configure the CacheManager. The name of the CacheManager can be used to distinguish the configuration file. For Ehcache, this will be the familiar ehcache.xml placed at the root of the classpath, with the CacheManager name used as a hyphenated prefix. So the default CacheManager will simply use ehcache.xml, and the "app1" CacheManager will use app1-ehcache.xml.

Creating a Cache

The API supports programmatic creation of caches. This complements the usual convention of configuring caches declaratively which is left to each vendor.

To programmatically configure a cache named "testCache" which is set for read-through:

cacheManager = getCacheManager();
CacheConfiguration cacheConfiguration = cacheManager.createCacheConfiguration();
cacheConfiguration.setReadThrough(true);
Cache testCache = cacheManager.createCacheBuilder("testCache")
    .setCacheConfiguration(cacheConfiguration).build();
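
With read-through enabled, a cache miss is delegated to a user-supplied CacheLoader that fetches the value from the system of record. As a rough sketch of what that class looks like (the signatures follow the shape the API settled on, namely V load(K); the exact 0.3 draft signatures and the builder method used to register the loader may differ, and BlogRepository is a hypothetical backing store):

import java.util.HashMap;
import java.util.Map;

import javax.cache.CacheLoader;   // package location is an assumption for the 0.3 draft; the final API uses javax.cache.integration

// Illustrative loader: consulted by the cache on a miss when read-through is on.
public class BlogCacheLoader implements CacheLoader<String, Blog> {

    private final BlogRepository repository;   // hypothetical system of record

    public BlogCacheLoader(BlogRepository repository) {
        this.repository = repository;
    }

    @Override
    public Blog load(String title) {
        return repository.findByTitle(title);
    }

    @Override
    public Map<String, Blog> loadAll(Iterable<? extends String> titles) {
        Map<String, Blog> result = new HashMap<String, Blog>();
        for (String title : titles) {
            result.put(title, load(title));
        }
        return result;
    }
}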

Getting a reference to a Cache

You get caches from the CacheManager. To get a cache called “testCache”:

Cache<Integer, Date> cache = cacheManager.getCache("testCache");

Basic Cache Operations

To put to a cache:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);
Date value1 = new Date();
Integer key = 1;
cache.put(key, value1);

To get from a cache:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);
Date value2 = cache.get(key);

To remove from a cache:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);
Integer key = 1;
cache.remove(key);
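
The ConcurrentMap-style atomic operations listed under Features are used the same way. A minimal sketch follows; the method names mirror java.util.concurrent.ConcurrentMap, which is the shape the API uses, though the exact set available in the 0.3 draft may differ:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);

// Insert only if no mapping exists; returns false if another thread got there first.
// Note the low-cost boolean return, rather than ConcurrentMap's V.
boolean inserted = cache.putIfAbsent(1, new Date());

// Compare-and-set style replace: succeeds only if the current value equals oldValue.
Date oldValue = cache.get(1);
boolean replaced = cache.replace(1, oldValue, new Date());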

Annotations

JSR107 introduces a standardised set of caching annotations, which perform method-level caching interception on annotated classes running in dependency injection containers. Caching annotations are becoming increasingly popular, starting with Ehcache Annotations for Spring, which then influenced Spring 3’s caching annotations.

The JSR107 annotations cover the most common cache operations including:

  • @CacheResult – use the cache
  • @CachePut – put into the cache
  • @CacheRemoveEntry – remove a single entry from the cache
  • @CacheRemoveAll – remove all entries from the cache

Where the cache name, key and value can be inferred from defaults they do not need to be specified. See the JavaDoc for the details. To allow greater control, you can specify all these and more. In the following example, the cacheName attribute is specified to be "domainCache", index is specified as the key and domain as the value.

public class DomainDao {
     @CachePut(cacheName="domainCache")
     public void updateDomain(String domainId, @CacheKeyParam int index,
          @CacheValue Domain domain) {
     ...
     }
}

The reference implementation includes an implementation for both Spring and CDI. CDI is the standardised dependency injection mechanism (Contexts and Dependency Injection) introduced in Java EE 6. The implementations are nicely modularised for reuse and use an Apache license, so we expect several open source caches to reuse them. While we have not done an implementation for Guice, this could easily be done.

Annotation Example

This example shows how to use annotations to keep a cache in sync with an underlying data structure, in this case a Blog manager, and also how to use the cache to speed up responses, using @CacheResult:

public class BlogManager {

    @CacheResult(cacheName="blogManager")
    public Blog getBlogEntry(String title) {...}

    @CacheRemoveEntry(cacheName="blogManager")
    public void removeBlogEntry(String title) {...}

    @CacheRemoveAll(cacheName="blogManager")
    public void removeAllBlogs() {...}

    @CachePut(cacheName="blogManager")
    public void createEntry(@CacheKeyParam String title,
                            @CacheValue Blog blog) {...}

    @CacheResult(cacheName="blogManager")
    public Blog getEntryCached(String randomArg,
                               @CacheKeyParam String title) {...}
}

Wiring Up Spring

For Spring the key is the following config line, which adds the caching annotation interceptors into the Spring context:

<jcache-spring:annotation-driven proxy-target-class="true"/>

A full example is:

<beans ...>
<context:annotation-config/>
<jcache-spring:annotation-driven proxy-target-class="true"/>
<bean id="cacheManager" factory-method="getCacheManager" />
</beans>

Spring has its own caching annotations based on earlier work from JSR107 contributor Eric Dalquist. Those annotations and JSR107 will happily co-exist.

Wiring Up CDI

First create an implementation of javax.cache.annotation.BeanProvider and then tell CDI where to find it by declaring a resource named javax.cache.annotation.BeanProvider in the classpath at /META-INF/services/.

For an example using the Weld implementation of CDI, see the CdiBeanProvider in our CDI test harness.

Further Reading

For further reading visit the JSR’s home page at https://github.com/jsr107/jsr107spec.

0.3 of JSR107: javax.cache released

0.3 of the JSR107 spec, RI and TCK have been released.

Changes in this release:

  • Numerous changes across the spec, TCK and RI
  • Annotations implementations in the RI for Spring and CDI
  • Transactions API finalised
The release is in Maven central so the snippet for the API is:
<dependency>
     <groupId>javax.cache</groupId>
     <artifactId>cache-api</artifactId>
     <version>0.3</version>
</dependency>

We are pretty much in the home stretch with this now. Work on Ehcache, Infinispan and Coherence implementations is starting. Work will now shift to closing open issues and dealing with review comments as they come in.

We welcome community involvement. The jumping off point for all things JSR107 is the GitHub Page.

Start using JSR107’s JCache API

JCache is rapidly nearing completion and we would like the community to start using it. The API is becoming quite stable.

The home for all things JCache is: https://github.com/jsr107/jsr107spec. Today I updated that page with the following details so that you can all get started.

We expect to release our first non-snapshot release in a few weeks’ time, with further releases leading up to JavaOne.

I am doing two sessions on caching at JavaOne. If you are attending please come along to learn more. My sessions are:

Session ID: 24223

Session Title: The New JSR 107 Caching Standard

Session ID: 24241

Session Title: The Essence of Caching

For the uninitiated JCache is the API being defined in JSR107. It defines a standard Java Caching API for use by developers and a standard SPI (“Service Provider Interface”) for use by implementers.

Release

The stable releases of this software are tagged with version numbers, starting with 0.1. Eventually, when the specification is further along, releases will match the specification number.

We expect our first stable release in early August 2011.

Snapshot Releases

Snapshot releases of jars for binaries, source and javadoc are available.

Download the cache-api from https://oss.sonatype.org/index.html#nexus-search;quick~javax-cache

or use the following Maven snippet:

<repository>
    <id>sonatype-nexus-snapshots</id>
    <name>Sonatype Nexus Snapshots</name>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <releases>
        <enabled>false</enabled>
    </releases>
    <snapshots>
        <enabled>true</enabled>
    </snapshots>
</repository>

<dependency>
  <groupId>javax.cache</groupId>
  <artifactId>cache-api</artifactId>
  <version>0.2-SNAPSHOT</version>
</dependency>

Javadoc

The JavaDoc is available as a jar with the releases. We also have the latest JavaDoc online.

Specification

The evolving specification is available online as a Google Doc.

Reference Implementation

The reference implementation (“RI”) source is available on GitHub.

This implementation is not meant for production use. For that we would refer you to one of the many open source and commercial implementations of JCache.

The RI is there to ensure that the specification and API works.

For example, some things that we leave out:

  • implementation of transactions.
  • eviction does not use an LRU or similar algorithm; it just evicts an entry when the cache is full.
  • concurrency. The RI is not exhaustively tested for thread safety.
  • tiered storage. A simple on heap store is used.
  • replicated or distributed caching
  • cache sizing. All caches are hard coded to be of size 100 entries.

Why did we do this? Because accomplishing these things requires a much greater engineering effort, which goes into the open source and commercial caches that implement this API.

Having said that, the RI is Apache 2 and is a correct implementation of the spec. It can be used to create new cache implementations.

Building From Source

mvn clean install

Mailing list

Please join the mailing list if you’re interested in using or developing the software: http://groups.google.com/group/jsr107

IRC

We will be using the #jsr107 channel on Freenode for chat.

We also have set up a commit hook which publishes commits to the channel.

Issue tracker

Please log issues to: https://github.com/jsr107/jsr107spec/issues

Contributing

Right now code contribution is limited to the Expert Group, but please feel free to post to the mailing list.

License

The API is available under the JPA license and may be freely used.

The TCK is available under a restricted TCK license.

The reference implementation is available under an Apache 2 license.

For details please read the license in each source code file.

Contributors

This free, open source software was made possible by the JSR107 Expert Group who put many hours of hard work into it.