Performance problems in ConcurrentHashMap vs synchronized HashMap

In Ehcache 1.6, HashMap was replaced with ConcurrentHashMap, using statistical sampling for eviction.
Having completed and released 1.6, I can report a few surprises along the way with ConcurrentHashMap performance.

PUT and GET performance

There is some existing material online about ConcurrentHashMap versus HashMap performance, notably
This article finds that ConcurrentHashMap puts are slower than HashMap puts when the map gets large: fully populating a map of 1 million objects took three times longer than with HashMap in a single-threaded scenario. However, once you get to multi-threaded scenarios, you need to put synchronization around HashMap. For those few of you in doubt about this, email me your HashMap usage and I will send you back a multi-threaded test that turns your computer into a fan heater (i.e. a 100% infinite loop in the CPUs) in about 30 seconds. The cost of synchronization grows as you add concurrency. Put and get work well with ConcurrentHashMap in multi-threaded scenarios.
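For the record, the two thread-safe alternatives look like this (a minimal sketch; the keys and values are purely illustrative):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SafeMaps {
    public static void main(String[] args) {
        // A plain HashMap is unsafe under concurrent writes; for
        // multi-threaded use you must either wrap it in a synchronized
        // view (one global lock, so contention grows with thread count)...
        Map<String, Integer> syncMap =
                Collections.synchronizedMap(new HashMap<String, Integer>());

        // ...or use ConcurrentHashMap, which stripes its locks internally
        // and so scales much better for put and get under contention.
        Map<String, Integer> chm = new ConcurrentHashMap<String, Integer>();

        syncMap.put("a", 1);
        chm.put("a", 1);
        System.out.println(syncMap.get("a") + " " + chm.get("a"));
    }
}
```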

Iterate Performance

Iteration in ConcurrentHashMap is a lot slower than in HashMap, and gets worse as the map grows. See CacheTest#testConcurrentReadWriteRemoveLFU. For my testing scenario, which uses 57 threads doing a majority of gets along with other operations across a variety of map sizes, we get:

In summary, with 1 million objects in the map, a put to Ehcache that uses iteration for eviction takes 381ms!

As a result, I used an alternative to iteration: an algorithm called FastRandom. The result is 0.16 ms, 2,300 times faster!
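FastRandom itself is internal to Ehcache, but the idea behind sampling-based eviction can be sketched: instead of iterating the whole map to find a victim, draw a small random sample of entries and evict the least recently used of the sample. This is a sketch of the technique, not Ehcache's actual implementation; the array layout and SAMPLE_SIZE are illustrative assumptions.

```java
import java.util.Random;

public class SampledEviction {
    private static final int SAMPLE_SIZE = 30;
    private static final Random RANDOM = new Random();

    // Pick an eviction candidate by sampling a handful of entries at
    // random rather than iterating the whole map. 'lastAccess' holds
    // the last-access timestamp for each cached entry.
    static int selectEvictionCandidate(long[] lastAccess) {
        int candidate = RANDOM.nextInt(lastAccess.length);
        for (int i = 1; i < SAMPLE_SIZE; i++) {
            int next = RANDOM.nextInt(lastAccess.length);
            if (lastAccess[next] < lastAccess[candidate]) {
                candidate = next;  // older access time: better LRU victim
            }
        }
        return candidate;
    }

    public static void main(String[] args) {
        long[] lastAccess = new long[1000];
        for (int i = 0; i < lastAccess.length; i++) {
            lastAccess[i] = 1000 + i;
        }
        int victim = selectEvictionCandidate(lastAccess);
        System.out.println("victim index: " + victim);
    }
}
```

The cost is O(SAMPLE_SIZE) regardless of map size, which is why it stays flat while full iteration degrades; the trade-off is that the victim is only approximately, not exactly, the least recently used entry.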

For very small maps, ConcurrentHashMap iteration is quite quick. Based on experimental testing, in Ehcache 1.6 we use iteration for up to 5,000 entries and FastRandom for sizes above that.

size() performance

Though not as bad as iteration, I have found size() slow in ConcurrentHashMap compared to HashMap. In Ehcache 1.6 we limit the usage of size().


If you are using ConcurrentHashMap for more than get and put, test the performance. It may be far, far worse than you were expecting.

To give ConcurrentHashMap the best chance of optimisation, remember to set the size and expected concurrency when you create it. In Ehcache we set the size to the exact size configured for the cache, and we set the concurrency level to 100 threads.
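Concretely, the three-argument constructor takes both settings up front (the capacity and thread count here are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TunedMap {
    public static void main(String[] args) {
        int maxCacheSize = 10000;   // e.g. the size configured for the cache
        int expectedThreads = 100;  // expected concurrent writers

        // Pre-sizing avoids rehashing as the map fills; concurrencyLevel
        // sets the number of internal segments used for lock striping.
        Map<String, Object> store = new ConcurrentHashMap<String, Object>(
                maxCacheSize, 0.75f, expectedThreads);

        store.put("key", "value");
        System.out.println(store.get("key"));
    }
}
```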

Ehcache Server in the Cloud

I am seriously impressed with Amazon’s cloud offering. You get a pick list of virtual machines of different sizes, a CDN, monitoring with elastic forking of new instances, fixed IPs if required, S3, attachable storage and the ability to release software as .amis for easy deployment, map-reduce with Hadoop, load balancing and a payment service.

Each of these is configurable via a RESTful web service. Each one has command-line tools that interact with the web services which you can easily script. And I can see that the new management console will bring this together into an easy to use package. Right now there is a tab for EC2 and another for Map-Reduce. Give it a few more months and I can see this populated with tabs for the other services.

Ehcache Server AMI

I love EC2 so much that I decided to create an Amazon Machine Image (AMI) for Ehcache Server. It is marked public and is available for anyone to use. I see it being used in two ways:

  • To quickly try out and demo Ehcache Server. If you have an EC2 account you can be up and running in less than a minute.
  • As an example of how to deploy Ehcache Server. The AMI comes with an init.d script for service control and ipchains rules mapping ports 80 and 81. You can use it as a template to create your own AMI with your own cache configuration.

Getting Started

  1. Create a new virtual machine. Select ami-3512f45c from the Community AMIs tab. Select a security configuration and a machine size (small is fine) and start it up. Ehcache Server will start automatically.
  2. To test it, hit it with http://amazon_instance_address/ehcache/rest/sampleCache1. From there, try writing a client. See the Cache Server documentation for sample client code in several languages.
  3. To make configuration changes, log into your machine. Ehcache Server is installed in /root/ehcache-server-0.7.
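As a starting point for step 2, here is a minimal Java client sketch; pass your instance's public DNS name as the argument, and note that the resource path and key are illustrative:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EhcacheRestGet {
    // GET an element from an Ehcache Server REST resource.
    static String get(String url) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line);
        }
        in.close();
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        String resource = "/ehcache/rest/sampleCache1/someKey";
        if (args.length == 0) {
            // No host given: just show the URL that would be fetched.
            System.out.println("usage: EhcacheRestGet <host>  (GETs http://<host>" + resource + ")");
        } else {
            System.out.println(get("http://" + args[0] + resource));
        }
    }
}
```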

Video Tutorial

I have put all this in a video tutorial.

The Limitations of Google App Engine

I have a very simple test application up on Google App Engine. See

80MB heap limit

Go to the test page: each time you hit it, exactly 10MB gets added to the Ehcache in-process cache. This is an intentional memory leak designed to find out how much you can stick in the heap.

The answer is around 80MB. I suspect, taking Jetty into account, that there is an -Xmx100m setting in play.

Crashed sites do not recover immediately or start a new instance. Update Feb 2010: Now they do.

When you get an OutOfMemoryError, the site is cooked. There should be some monitoring that notices and takes it down. That is not the case.

I have a wget script which, every 30 seconds, does

The answer is that the dead site stays down for 5 minutes (10 repetitions of my script). And no new instance gets fired up. Your whole site is down.

Update: Google fixed this as of February 2010.

Static content is not distributed through the Google CDN

On the page I put an image, which I did not configure as static. I downloaded it and traced the serving IP, which is in Mountain View, California. I then marked the image as static in appengine-web.xml and redeployed. There was no effect on the serving location or the download speed. It may be that the files are served from the static content location; you would expect that to be distributed via Google's CDN. Here is the header you get from the static content servers.

Another interesting thing – cache expiry is set to 10 minutes. A CDN will normally set the TTL longer and rely on a technique such as resource renaming to overcome browser cache issues.


None of this is good. The first is a very serious limitation. The last two are killers for running a production app. Hopefully Google will fix these things.

Ehcache 1.6.0 is now compatible with Google App Engine

The forthcoming Ehcache 1.6.0 is compatible with Google App Engine. You can get it now from ehcache snapshots.
Google App Engine provides a constrained runtime which restricts networking, threading and file system access. All features of Ehcache can be used except for the DiskStore and replication. Having said that, there are workarounds for these limitations.

Why use Ehcache with Google App Engine?

Ehcache cache operations take a few µs, versus around 60ms for Google's provided client-server cache, memcacheg (as reported on). Because it uses far fewer resources, it is also cheaper.
You can also store non-Serializable objects in it. And finally there is the rich Ehcache API that you can leverage.


Setting up Ehcache as a local cache in front of memcacheg

The idea here is that your caches are set up in a cache hierarchy: Ehcache sits in front and memcacheg behind. Combining the two lets you elegantly work around the limitations imposed by Google App Engine. You get the benefit of the µs speed of Ehcache together with the unlimited size of memcacheg.
Ehcache contains the hooks to easily do this.
To update memcacheg, use a CacheEventListener.
To search against memcacheg on a local cache miss, use cache.getWithLoader() together with a CacheLoader for memcacheg.
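The read-through flow that getWithLoader() drives can be sketched without the Ehcache or memcache APIs; in this self-contained stand-in, 'local' plays the Ehcache MemoryStore and 'remote' plays memcacheg (both names, and the key/value, are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class TieredLookup {
    static final Map<String, String> local = new HashMap<String, String>();
    static final Map<String, String> remote = new HashMap<String, String>();

    // On a local miss, fall through to the remote cache and populate the
    // local tier -- the same flow a CacheLoader implements behind
    // cache.getWithLoader() in Ehcache.
    static String get(String key) {
        String value = local.get(key);
        if (value == null) {
            value = remote.get(key);
            if (value != null) {
                local.put(key, value);
            }
        }
        return value;
    }

    public static void main(String[] args) {
        remote.put("user:1", "alice");
        System.out.println(get("user:1"));       // local miss, remote hit
        System.out.println(local.get("user:1")); // now cached locally
    }
}
```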

Using memcacheg in place of a DiskStore

In the CacheEventListener, ensure that when notifyElementEvicted() is called, which it will be when a put exceeds the MemoryStore's capacity, the key and value are put into memcacheg.
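The overflow-on-eviction pattern can be sketched in plain Java: here a size-capped LinkedHashMap stands in for the MemoryStore, and its removeEldestEntry() hook plays the role of notifyElementEvicted(), pushing the victim into a stand-in 'remote' map. The capacity and keys are illustrative, and this is the pattern only, not Ehcache's listener API.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class EvictToRemote {
    static final Map<String, String> remote = new HashMap<String, String>();
    static final int CAPACITY = 2;

    // Access-ordered map: when a put exceeds CAPACITY, the least recently
    // used entry is copied to 'remote' before being dropped locally.
    static final Map<String, String> local =
            new LinkedHashMap<String, String>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
            if (size() > CAPACITY) {
                remote.put(eldest.getKey(), eldest.getValue());
                return true;
            }
            return false;
        }
    };

    public static void main(String[] args) {
        local.put("a", "1");
        local.put("b", "2");
        local.put("c", "3"); // evicts "a", which lands in 'remote'
        System.out.println(remote.get("a"));
    }
}
```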

Distributed Caching

Configure all notifications in the CacheEventListener to proxy through to memcacheg.
Any work done by one node can then be shared by all others, with the benefit of local caching of frequently used data.

Dynamic Web Content Caching

Google App Engine provides acceleration for files declared static in appengine-web.xml.

You can get acceleration for dynamic files using Ehcache’s caching filters as you usually would.
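As a sketch, wiring a caching filter into web.xml looks like this; the filter class matches the Ehcache web caching filters of this era, while the filter name and URL pattern are illustrative assumptions (a cache of the same name must also be configured in ehcache.xml):

```xml
<!-- Cache rendered dynamic pages under /pages/* -->
<filter>
  <filter-name>SimplePageCachingFilter</filter-name>
  <filter-class>net.sf.ehcache.constructs.web.filter.SimplePageCachingFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>SimplePageCachingFilter</filter-name>
  <url-pattern>/pages/*</url-pattern>
</filter-mapping>
```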

Getting Started

To get started see the Ehcache with Google App Engine HowTo.