I will be joining Hazelcast as CTO

I am very excited to announce that I will be joining the world-class team at Hazelcast as CTO.

Hazelcast (www.hazelcast.com) develops, distributes and supports the leading open source in-memory data grid. The product, also called Hazelcast, is a free open source download under the Apache license that any developer can include in minutes to build elegantly simple, mission-critical, transactional, and terascale in-memory applications. The company provides commercially licensed Enterprise extensions, the Hazelcast Management Console and professional open source training, development support and deployment support. The company is privately held and headquartered in Palo Alto, California.

What this means for Hazelcast Users

I will join my efforts to those of Hazelcast CEO Talip Ozturk and the team, bringing my deep knowledge of caching to Hazelcast to complement theirs. I will be out and about at conferences and visiting customers.

With the team we will be figuring out what great features to add to Hazelcast and how to improve the leading open source In-Memory Data Grid.

We will also be bringing to market Enterprise Extensions, which add high-value features based around the Hazelcast core.

Hazelcast has made an announcement that puts this move into their own words.

What this means for Terracotta BigMemory Users

We will develop a comparable caching/operational store project and product based on the Hazelcast core. This will then be an alternative for BigMemory users.

What this means for Ehcache Users

Ownership of Ehcache was transferred to Terracotta four and a half years ago when I joined them, and Terracotta has maintained it since.

While Ehcache remains widely used today, the open source version is only suitable for single-node caching. That is not very useful in most production contexts, so it is not directly competitive with Hazelcast or, for that matter, with In-Memory Data Grids, which deal with clusters of computers.

I expect Ehcache will implement JCache and that in the future those ISVs and open source frameworks which currently define Ehcache as their caching layer will instead define it using JCache, of which Ehcache will be one provider.

Hazelcast is already developing its JCache implementation, which is up on GitHub.

What this means for JCache

JCache is the new standard for caches and IMDGs. It includes a key-value API suitable for caches and operational stores. Importantly, it was designed primarily for IMDGs. Listeners, loaders, writers and other user-defined classes are expected to be executed somewhere in the cluster, not in process. And the spec defines single and batch EntryProcessors, the defining feature of an IMDG, which enable in-situ computation.
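
To make the in-situ computation point concrete, here is a minimal sketch of an EntryProcessor against the JCache API. The package and method names below are those of the near-final spec and may differ slightly in earlier drafts:

import javax.cache.processor.EntryProcessor;
import javax.cache.processor.MutableEntry;

//Increments a counter where the entry lives (on the owning node in an IMDG)
//rather than doing a get-modify-put round trip from the client.
public class IncrementProcessor implements EntryProcessor<String, Integer, Integer> {

    @Override
    public Integer process(MutableEntry<String, Integer> entry, Object... arguments) {
        int next = (entry.exists() ? entry.getValue() : 0) + 1;
        entry.setValue(next);
        return next;
    }
}

Invocation would then be along the lines of cache.invoke("page-hits", new IncrementProcessor()), with invokeAll covering the batch case.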

I will continue to act as spec lead on new maintenance releases of JCache. And I will also work with the Java EE 8 expert group who are including JCache in Java EE 8. And I will be working with open source frameworks and ISVs as they move to add a JCache layer to their architectures.

Hazelcast will be one of the first to market with an implementation of JCache, which should be available in a production-ready form in February.

As it is Apache 2 open source, I encourage open source frameworks and ISVs to include Hazelcast in their distributions as they add JCache. That way they can ship with an out of the box IMDG built in without locking themselves or their users/customers into a single vendor.

Crimson and Clover

As part of the work to provide coverage reports for the JCache TCK, we had to add a coverage tool. As we are open source, I spent some time playing with coverage tools.

We build with Maven 3. For reasons to do with usage of the TCK, we have four standalone modules: jsr107spec, jsr107tck, RI and Demo. Within jsr107tck and RI there is a parent pom and children with back references to their parents. To easily run the four top-level modules we have an aggregate module which is not declared as a parent in the poms of the four top-level modules. It looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>javax.cache</groupId>
    <artifactId>jsr107-parent</artifactId>
    <version>0.12-SNAPSHOT</version>
    <packaging>pom</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>com.atlassian.maven.plugins</groupId>
                <artifactId>maven-clover2-plugin</artifactId>
                <version>3.2.0</version>
                <configuration>
                    <cloverDatabase>${java.io.tmpdir}/clover/clover.db</cloverDatabase>
                    <singleCloverDatabase>true</singleCloverDatabase>
                    <generatePdf>false</generatePdf>
                    <generateXml>false</generateXml>
                    <generateHtml>true</generateHtml>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <modules>
        <module>jsr107spec</module>
        <module>RI</module>
        <module>jsr107tck</module>
        <module>demo</module>
    </modules>
</project>

An unusual aspect of this arrangement is that all tests for the RI are contained in the jsr107tck module. The RI itself has no tests.

My approach was to spend a few hours first with the open source coverage tools, Emma and Cobertura, and then, if needed, to move on to Clover, which I have used before.

Emma’s Maven plugin is old and needs a workaround for Java 7. I got it basically working but could not get it to deal with our complicated structure.

Then I tried Cobertura. Version 2.6 of the Maven plugin was released just in August. It worked fine with Java 7. But once again I found it difficult to make progress with our complicated structure. I was thinking the best approach would be to give all modules the same coverage database location. In Cobertura you do this with a system property, e.g. -Dnet.sourceforge.cobertura.datafile=/path/cobertura.ser. I didn’t get this working.

I then tried Clover and got it up and running. I specified the same clover.db location for all modules. I needed to add this to the aggregate pom and each top-level pom. For those top-level modules with children, defining it in the parent was sufficient. I added the following to each of those poms.
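
The snippet is essentially the same clover2 plugin configuration as in the aggregate pom above, pointing every module at the shared clover.db location; a sketch:

<build>
    <plugins>
        <plugin>
            <groupId>com.atlassian.maven.plugins</groupId>
            <artifactId>maven-clover2-plugin</artifactId>
            <version>3.2.0</version>
            <configuration>
                <cloverDatabase>${java.io.tmpdir}/clover/clover.db</cloverDatabase>
                <singleCloverDatabase>true</singleCloverDatabase>
            </configuration>
        </plugin>
    </plugins>
</build>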

Then to run a build with clover instrumentation:
mvn clean com.atlassian.maven.plugins:maven-clover2-plugin:setup install

And to run the clover report:
mvn com.atlassian.maven.plugins:maven-clover2-plugin:clover

Note the wordy plugin invocations. You can use the short name of clover2 if you add the following to your settings.xml:

<pluginGroups>
    <pluginGroup>com.atlassian.maven.plugins</pluginGroup>
</pluginGroups>

The first example then becomes:
mvn clean clover2:setup install

It would be great if they could change the User Guide to point out this step, as its omission mars what is otherwise a very smooth experience.

Lambda Hello World

I was recently checking Lambda expressions out to ensure the forthcoming JCache spec would work OK with them. I just wanted a really simple example that worked through the different syntactical forms, and ended up putting together a Hello World type example myself.


/**
 * Hello world!
 */
public class App {

    static void print(Something something) {
        System.out.println(something.doSomeThing("Hello World"));
    }

    @FunctionalInterface
    interface Something {

        String doSomeThing(String word);
    }

    public static void main(String[] args) {
        print(new Something() {

            @Override
            public String doSomeThing(String word) {
                return word.concat(" - it is no fun to AnonymousInnerClass");
            }
        });

        //expression form. No return needed.
        print((String s) -> s.concat(" = it is fun to Lambda"));

        //block form. Needs a return as the interface method has a return type.
        print((String s) -> {return s.concat(" = it is fun to Lambda");});

        //expression. No parameter type required as the interface defines it.
        print((s) -> s.concat(" = it is fun to Lambda"));

        //creating a concrete instance of a functional interface via assignment
        Something something = (s) -> s.replace("the", "thee");
        System.out.println(something.doSomeThing("What the!"));

        //the built-in functional interfaces in java.util.function can be assigned the same way
        java.util.function.Predicate<String> predicate = (s) -> s.startsWith("Hello");
        System.out.println(predicate.test("Hello World"));
    }
}

JSR107 JCache enters Public Draft Review

On 5 July 2013, JSR107 went into Public Draft Review, which will last until 4 August.

Visit the JSR home page on GitHub to get the code. Or read the spec.

I encourage anyone interested in Java caching to take a look and give us feedback, which you can do by posting an issue on GitHub for specific things or, for more general feedback, by posting to the JSR107 Google Groups mailing list.

JSR107 enters Early Draft Review after nearly 12 years

On 23 October 2012 the JCP posted the Early Draft specification and API for JSR107. See http://jcp.org/en/jsr/detail?id=107. This is almost 12 years since the JSR kicked off. Note that this material was uploaded to the JCP in February this year but was delayed while the legal complications of having two companies as shared spec leads got sorted out. That is now done and will not be an issue going forward in the process.

We will now be working intensively to drive this to completion in its own right and for inclusion in Java EE 7. We expect to be final in early 2013.

We need your review

In the meantime the early draft review period is open until 22 November. Please visit the home of the project at https://github.com/jsr107 and send your comments to jsr107@googlegroups.com or create issues at https://github.com/jsr107/jsr107spec/issues. For a quick intro see https://github.com/jsr107/jsr107spec.

We have also just added a few new artifacts up on GitHub:
- A very simple demo which can be used when giving talks. https://github.com/jsr107/demo
- ehcache-jcache – an implementation of the 0.5 specification that works with the latest version of ehcache. https://github.com/jsr107/ehcache-jcache

Remaining JCP 2.7 Process Steps

The review period ends 22 November. Once we have a public draft, we will submit that for 30 days’ review. An EC ballot will be held in the last week of the public draft period before we move on to complete the RI and TCK and seek final approval.

Java EE 7 Deadline

We have sought clarification from the EE JSR on their deadline. It is fast approaching. We therefore intend to go hard.

Introducing Deliberate Caching

A few weeks ago I attended a ThoughtWorks Technology Radar seminar. I worked at ThoughtWorks for years and think if anyone knows what is trending up and down in software development these guys do. At number 17 in Techniques with a rising arrow is what they called Thoughtful Caching. At drinks with Scott Shaw, I asked him what it meant.

What the trend is about is the movement from reactive caching to a new style. By reactive I mean you find out your system doesn’t perform or scale after you build it and it is already in production. Lots of Ehcache users come to it that way. This is a trend I am very happy to see.

Deliberate Caching

The new technique is:

  • proactive
  • planned
  • implemented before the system goes live
  • deliberate
  • is more than turning on caching in your framework and hoping for the best – this is the Thoughtful part
  • uses an understanding of the load characteristics and data access patterns

We kicked around a few names for this and came up with Deliberate Caching to sum all of this up.

The work we are doing standardising caching for Java and JVM-based languages, JSR107, will only aid this transition. It will be included in Java EE 7, which, even for those who have lost interest in following EE specifically, sends a signal that this is an architectural decision which should be made deliberately.

Why has it taken this long?

So, why has it taken 10 years after Ehcache and Memcache and plenty of others came along for this “new” trend to emerge? I think there are a few reasons.

Some people think caching is dirty

I have met plenty of developers who think that caching is dirty, and that caching is cheating. They think it indicates some architectural design failure that is best solved some other way.

One of the causes of this is that many early and open source caches (including Ehcache) placed limits on the data safety that could be achieved. So the usual situation was that the data in the cache might be, but was not guaranteed to be, correct. Complicated discussions with Business Analysts were required to find out whether this was acceptable and how stale the data was allowed to be. This has been overcome by the emergence of enterprise caches, such as Enterprise Ehcache, so named because they are feature rich and contain extensive data safety options, including in Ehcache’s case: weak consistency, eventual consistency, strong consistency, explicit locking, local and XA transactions and atomic operations. So you can use caching even in situations where the data has to be right.

Following the lead of the giant dotcoms

The other thing that has happened is that it cannot have escaped anyone’s notice that the giant dotcoms all use tons of caching, and that they won’t work if the caching layer is down. So much so that if you are building a big dotcom app it is clear that you need to build a caching layer in.

Early Performance Optimisation is seen as an anti-pattern

Under Agile we focus on the simplest thing that can possibly work. Requirements are expected to keep changing. Any punts you take on future requirements may turn out to be wrong and your effort wasted. You only add things once it is clear they are needed. Performance and scalability tend to get done this way as well. Following this model you find out about the requirement after you put the app in production and it fails. This same way of thinking causes monolithic systems with single data stores to be built which later turn out to need expensive re-architecting.

I think we need to look at this as Capacity Planning. If we get estimated numbers at the start of the project for number of users, required response times, data volumes, access patterns etc then we can capacity plan the architecture as well as the hardware. And in that architecture planning we can plan to use caching. Because caching affects how the system is architected and what the hardware requirements are, it makes sense to do it then.

javax.cache: The new Java Caching Standard

This post explores the new Java caching standard: javax.cache.

How it Fits into the Java Ecosystem

This standard is being developed by JSR107, of which the author is co-spec lead. JSR107 is included in Java EE 7, being developed by JSR342. Java EE 7 is due to be finalised at the end of 2012. But in the meantime javax.cache will work in Java SE 6 and higher and Java EE 6 environments, as well as with Spring and other popular environments.

JSR107 has draft status. We are currently at release 0.3 of the API, the reference implementation and the TCK. The code samples in this article work against this version.

Adoption

Vendors who are either active members of the expert group or have expressed interest in implementing the specification are:

  • Terracotta – Ehcache
  • Oracle – Coherence
  • JBoss – Infinispan
  • IBM – eXtreme Scale
  • SpringSource – GemFire
  • GridGain
  • TMax
  • Google App Engine Java

Terracotta will be releasing a module for Ehcache to coincide with the final draft and then updating that if required for the final version.

Features

From a design point of view, the basic concepts are a CacheManager that holds and controls a collection of Caches. Caches have entries. The basic API can be thought of as map-like with the following additional features (a short sketch follows the list):

  • atomic operations, similar to java.util.ConcurrentMap
  • read-through caching
  • write-through caching
  • cache event listeners
  • statistics
  • transactions including all isolation levels
  • caching annotations
  • generic caches which hold a defined key and value type
  • definition of storage by reference (applicable to on heap caches only) and storage by value
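
As a short sketch of the map-like flavour of the API, here is what the atomic operations and generic typing look like. This assumes the ConcurrentMap-style method names carry over unchanged (putIfAbsent, replace, remove) and that a cache named "testCache" has been configured as in the Cook's Tour below; exact signatures in the 0.3 draft may differ:

import java.util.Date;
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;

public class AtomicOperationsExample {
    public static void main(String[] args) {
        CacheManager cacheManager = Caching.getCacheManager();

        //a generic cache with a defined key and value type
        Cache<Integer, Date> cache = cacheManager.getCache("testCache");

        //ConcurrentMap-style atomic operations
        boolean added = cache.putIfAbsent(1, new Date());  //put only if no mapping exists
        boolean replaced = cache.replace(1, new Date());   //replace only if a mapping exists
        boolean removed = cache.remove(1);                  //remove, reporting whether it was present
    }
}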

Optional Features

Rather than split the specification into a number of editions targeted at different user constituencies such as Java SE and Spring/EE, we have taken a different approach.

Firstly, for Java SE style caching there are no dependencies. And for Spring/EE where you might want to use annotations and/or transactions, the dependencies will be satisfied by those frameworks.

Secondly we have a capabilities API via ServiceProvider.isSupported(OptionalFeature feature) so that you can determine at runtime what the capabilities of the implementation are. Optional features are:

  • storeByReference – storeByValue is the default
  • transactional
  • annotations

This makes it possible for an implementation to support the specification without necessarily supporting all the features, and allows end users and frameworks to discover what the features are so they can dynamically configure appropriate usage.

Good for Standalone and Distributed Caching

While the specification does not mandate a particular distributed cache topology it is cognizant that caches may well be distributed. We have one API that covers both usages but it is sensitive to distributed concerns. For example CacheEntryListener has a NotificationScope of events it listens for so that events can be restricted to local delivery. We do not have high network cost map-like methods such as keySet() and values(). And we generally prefer zero or low cost return types. So while Map has V put(K key, V value) javax.cache.Cache has void put(K key, V value).
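
A minimal sketch of that difference, reusing the Caching bootstrap and the "testCache" cache from the examples later in this post (the cache is assumed to already be configured):

import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;

public class PutReturnTypes {
    public static void main(String[] args) {
        //java.util.Map hands back the previous value from put(), which in a
        //distributed cache would mean shipping a value over the network that
        //most callers simply discard
        Map<Integer, Date> map = new HashMap<Integer, Date>();
        Date previous = map.put(1, new Date());

        //javax.cache.Cache keeps the common path cheap: put() returns void
        CacheManager cacheManager = Caching.getCacheManager();
        Cache<Integer, Date> cache = cacheManager.getCache("testCache");
        cache.put(1, new Date());
        Date current = cache.get(1);  //fetch explicitly only when the value is needed
    }
}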

Classloading

Caches contain data shared by multiple threads which may themselves be running in different container applications or OSGi bundles within one JVM and might be distributed across multiple JVMs in a cluster. This makes classloading tricky.

We have addressed this problem. When a CacheManager is created a classloader may be specified. If none is specified the implementation provides a default. Either way object de-serialization will use the CacheManager’s classloader.

This is a big improvement over the approach taken by caches like Ehcache that use a fall-back approach. First the thread’s context classloader is used and if that fails, another classloader is tried. This can be made to work in most scenarios but is a bit hit and miss and varies considerably by implementation.
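
The fall-back style being described looks roughly like this (a generic illustration, not code from Ehcache):

public class FallbackClassLoading {

    //try the thread's context classloader first; if it is not set,
    //fall back to the classloader that loaded the cache library itself
    static ClassLoader resolveClassLoader() {
        ClassLoader contextLoader = Thread.currentThread().getContextClassLoader();
        return contextLoader != null ? contextLoader : FallbackClassLoading.class.getClassLoader();
    }
}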

Getting the Code

The spec is in Maven central. The Maven snippet is:

<dependency>
     <groupId>javax.cache</groupId>
     <artifactId>cache-api</artifactId>
     <version>0.3</version>
</dependency>

A Cook’s Tour of the API

Creating a CacheManager

We support the Java 6 java.util.ServiceLoader creational approach. It will automatically detect a cache implementation in your classpath. You then create a CacheManager with:

CacheManager cacheManager = Caching.getCacheManager();

which returns a singleton CacheManager called “__default__”. Subsequent calls return the same CacheManager.

CacheManagers can have names and classloaders configured in. e.g.

CacheManager cacheManager = Caching.getCacheManager("app1", Thread.currentThread().getContextClassLoader());

Implementations may also support direct creation with new for maximum flexibility:

CacheManager cacheManager = new RICacheManager("app1", Thread.currentThread().getContextClassLoader());

Or to do the same thing without adding a compile time dependency on any particular implementation:

String className = "javax.cache.implementation.RIServiceProvider";
Class<ServiceProvider> clazz =(Class<ServiceProvider>)Class.forName(className);
ServiceProvider provider = clazz.newInstance();
return provider.createCacheManager(Thread.currentThread().getContextClassLoader(), "app1");

We expect implementations to have their own well-known configuration files which will be used to configure the CacheManager. The name of the CacheManager can be used to distinguish the configuration file. For ehcache, this will be the familiar ehcache.xml placed at the root of the classpath, with a hyphenated prefix for the name of the CacheManager. So, the default CacheManager will simply use ehcache.xml and “app1” will use app1-ehcache.xml.

Creating a Cache

The API supports programmatic creation of caches. This complements the usual convention of configuring caches declaratively which is left to each vendor.

To programmatically configure a cache named “testCache” which is set for read-through

cacheManager = getCacheManager();
CacheConfiguration cacheConfiguration = cacheManager.createCacheConfiguration();
cacheConfiguration.setReadThrough(true);
Cache testCache = cacheManager.createCacheBuilder("testCache")
    .setCacheConfiguration(cacheConfiguration).build();

Getting a reference to a Cache

You get caches from the CacheManager. To get a cache called “testCache”:

Cache<Integer, Date> cache = cacheManager.getCache("testCache");

Basic Cache Operations

To put to a cache:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);
Date value1 = new Date();
Integer key = 1;
cache.put(key, value1);

To get from a cache:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);
Date value2 = cache.get(key);

To remove from a cache:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);
Integer key = 1;
cache.remove(key);

Annotations

JSR107 introduces a standardised set of caching annotations, which do method level caching interception on annotated classes running in dependency injection containers. Caching annotations are becoming increasingly popular, starting with Ehcache Annotations for Spring, which then influenced Spring 3’s caching annotations.

The JSR107 annotations cover the most common cache operations including:

  • @CacheResult – use the cache
  • @CachePut – put into the cache
  • @CacheRemoveEntry – remove a single entry from the cache
  • @CacheRemoveAll – remove all entries from the cache

Where the cache name, key and value can be inferred they do not need to be specified. See the JavaDoc for the details. To allow greater control, you can specify all these and more. In the following example, the cacheName attribute is specified to be “domainCache”, index is specified as the key and domain as the value.

public class DomainDao {
     @CachePut(cacheName="domainCache")
     public void updateDomain(String domainId, @CacheKeyParam int index,
          @CacheValue Domain domain) {
     ...
     }
}

The reference implementation includes an implementation for both Spring and CDI. CDI is the standardised Contexts and Dependency Injection introduced in Java EE 6. The implementation is nicely modularised for reuse, uses an Apache license, and we therefore expect several open source caches to reuse it. While we have not done an implementation for Guice, this could be easily done.

Annotation Example

This example shows how to use annotations to keep a cache in sync with an underlying data structure, in this case a Blog manager, and also how to use the cache to speed up responses, done with @CacheResult.

public class BlogManager {

    @CacheResult(cacheName="blogManager")
    public Blog getBlogEntry(String title) {...}

    @CacheRemoveEntry(cacheName="blogManager")
    public void removeBlogEntry(String title) {...}

    @CacheRemoveAll(cacheName="blogManager")
    public void removeAllBlogs() {...}

    @CachePut(cacheName="blogManager")
    public void createEntry(@CacheKeyParam String title,
                            @CacheValue Blog blog) {...}

    @CacheResult(cacheName="blogManager")
    public Blog getEntryCached(String randomArg,
                               @CacheKeyParam String title) {...}
}

Wiring Up Spring

For Spring the key is the following config line, which adds the caching annotation interceptors into the Spring context:

<jcache-spring:annotation-driven proxy-target-class="true"/>

A full example is:

<beans ...>
    <context:annotation-config/>
    <jcache-spring:annotation-driven proxy-target-class="true"/>
    <bean id="cacheManager" class="javax.cache.Caching" factory-method="getCacheManager"/>
</beans>

Spring has its own caching annotations based on earlier work from JSR107 contributor Eric Dalquist. Those annotations and JSR107 will happily co-exist.

Wiring Up CDI

First create an implementation of javax.cache.annotation.BeanProvider and then tell CDI where to find it by declaring a resource named javax.cache.annotation.BeanProvider in the classpath at /META-INF/services/.
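
For example, assuming a hypothetical implementation class com.example.MyCdiBeanProvider, the provider-configuration file META-INF/services/javax.cache.annotation.BeanProvider would contain just its fully qualified class name:

com.example.MyCdiBeanProvider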

For an example using the Weld implementation of CDI, see the CdiBeanProvider in our CDI test harness.

Further Reading

For further reading visit the JSR’s home page at https://github.com/jsr107/jsr107spec.

0.3 of JSR107:javax.cache released

0.3 of the JSR107 spec, RI and TCK have been released.

Changes in this release:

  • Numerous changes across the spec, TCK and RI
  • Annotations implementations in the RI for Spring and CDI
  • Transactions API finalised

The release is in Maven central so the snippet for the API is:

<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>0.3</version>
</dependency>

We are pretty much in the home straight with this now. Work on Ehcache, Infinispan and Coherence implementations is starting. Work will now shift to closing open issues and dealing with review comments as they come in.

We welcome community involvement. The jumping off point for all things JSR107 is the GitHub Page.