Java FitNesse quick start tutorial

Posted: February 16, 2014 in Uncategorized

This is a short and quick tutorial to get you started with FitNesse; I will list the steps below. The official FitNesse tutorials should also do the trick.

Prerequisites: Why FitNesse is used (if you are here, I am guessing you already know this and want to get your hands dirty 🙂 )
Code repository:
1. FitNesse is available for download on the official site. Download and set it up.

Upon successful setup, you should see the customary http://localhost/FitNesse.UserGuide.TwoMinuteExample (your port might differ; I am using the default 80).

2. Once you are at the two minute example, an easy way to test your setup is to edit the page and add a sample test case, e.g. one where the denominator is 1. (In the stock example table there is no row with a denominator of 1.)


Let's edit this wiki page and add numerator = 10, denominator = 1, and quotient? = 10.

Search for the table below and add the last row.

| eg.Division |
| numerator | denominator | quotient? |
| 10 | 2 | 5.0 |
| 12.6 | 3 | 4.2 |
| 22 | 7 | ~=3.14 |
| 9 | 3 | <5 |
| 11 | 2 | 4<_<6 |
| 100 | 4 | 33 |
| 10 | 1 | 10 |

Save and Test. The last assertion should fail, which is expected.


3. Now, let's customize. We will put up:

  • A new test page
  • A fixture, which sits between FitNesse and the System Under Test (SUT)
  • A very simple business logic class which mimics the SUT

4.  Test Page

Edit http://localhost/FitNesse.UserGuide.TwoMinuteExample?edit,
add a line linking to your new test page at the end, and save the page.


Once you save the page, click on the [?] link, which should open up a new test page. Edit it and replace everything with the below:

!define TEST_SYSTEM {slim}
!path D:\workspace\FitnesseBusinessApplication\bin
!define COLLAPSE_SETUP {true}
!define COLLAPSE_TEARDOWN {true}

!|Division Program |
| numerator | denominator | quotient? |
| 10 | 2 | 5.0 |
| 12.6 | 3 | 4.2 |
| 22 | 7 | ~=3.14 |
| 9 | 3 | <5 |
| 11 | 2 | <6 |
| 10 | 1 | 10.0 |

From here onwards, it is a Java implementation, and the steps below should be customized for your target system (.NET etc.).

Everything below is coded in Eclipse, and the class files are generated under D:\workspace\FitnesseBusinessApplication\bin (the !path configured above).


5. SUT (Business Logic)


public class Calculator {

    private float numerator;
    private float denominator;

    public Calculator(float numerator, float denominator) {
        this.numerator = numerator;
        this.denominator = denominator;
    }

    public float getNumerator() {
        return numerator;
    }

    public void setNumerator(float numerator) {
        this.numerator = numerator;
    }

    public float getDenominator() {
        return denominator;
    }

    public void setDenominator(float denominator) {
        this.denominator = denominator;
    }

    public float getQuotient() {
        return numerator / denominator;
    }
}

6. Fixture class, which communicates between the SUT and the FitNesse HTML.
(This is compiled to D:\workspace\FitnesseBusinessApplication\bin, which is the directory referenced by !path on the FitNesse page.)

package com.fitnesse.fixture;

// Calculator is the SUT class from step 5 (import it if it lives in another package).
public class DivisionProgram {

    private float numerator;
    private float denominator;

    public void setNumerator(float numerator) {
        this.numerator = numerator;
    }

    public void setDenominator(float denominator) {
        this.denominator = denominator;
    }

    public float quotient() {
        Calculator calculator = new Calculator(numerator, denominator);
        return calculator.getQuotient();
    }
}

A few common problems:

a. Could not invoke constructor for DivisionProgram[0]


This means FitNesse is not able to resolve the fixture class. Check the below settings on your FitNesse page:

Division Program should point to the DivisionProgram class in the fixture package.

!path D:\workspace\FitnesseBusinessApplication\bin
!|Division Program              |

b.  Method Quotient[0] not found in com.fitnesse.fixture.DivisionProgram.

This will not occur if you follow the instructions exactly, but it might happen if you customize. Basically, a column header Quotient? makes Slim look for a method named Quotient() on the fixture; the lookup is case sensitive, so the fixture's method quotient() will not be found.
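To make that case sensitivity concrete, here is a small standalone sketch (a hypothetical fixture modelled on the tutorial's, not part of FitNesse itself) showing how a Slim-style reflective lookup finds quotient() but not Quotient():

```java
public class NamingDemo {
    // Hypothetical fixture with the correctly-cased method, mirroring step 6.
    static class DivisionProgram {
        private float numerator, denominator;
        public void setNumerator(float n) { numerator = n; }
        public void setDenominator(float d) { denominator = d; }
        public float quotient() { return numerator / denominator; }
    }

    public static void main(String[] args) throws Exception {
        // Slim resolves the "quotient?" column to a method by exact name:
        java.lang.reflect.Method m = DivisionProgram.class.getMethod("quotient");
        System.out.println("found: " + m.getName());
        try {
            DivisionProgram.class.getMethod("Quotient"); // wrong capitalization
        } catch (NoSuchMethodException e) {
            System.out.println("Quotient not found (case matters)");
        }
    }
}
```

Running this prints that quotient was found while Quotient was not, which is exactly the mismatch behind error (b).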


7 weeks before


7 weeks after

How did I do it?

1. Goal Setting – Tom Venuto (Burn the Fat, Feed the Muscle). Absolutely top notch.

2. Training Program  –

Weight Training :

first 4 weeks: Ian King (Get Buffed)

next 3 weeks: John Romaniello. I wish I had got this first, because I would have lost more.

While I did do some cardio in the first 4 weeks, there was none in the last 2 weeks, and I still lost fat, which was kind of amazing because I find cardio extremely boring.

While there is no substitute for Ian King's methods, none whatsoever, period, if you have to lose maximum fat in the shortest possible time, John Romaniello is the man to go to.

3. Diet – 4 to 5 meals a day, high protein, low carbs for the most part. This is where I made the most mistakes, and I would do so much better if I had to redo this. But the key thing was: all my meals were pre-planned, printed, and put on my fridge door, and there was no wavering, none. No cheat meals. Nothing.

Simple rules :

1. Don't mix fat and simple carbs together.

2. Complex carbs and very little fat only after workouts.



Image  —  Posted: November 2, 2013 in Uncategorized

Pulau Ubin, or the Tile Island, is probably the last place in Singapore where you can get a feel of some village life. (Technically it's an offshore island, so it's not really on the Singapore mainland, but it's still part of Singapore.)

We decided to check it out on Singapore's National Day, Aug 9, 2012. We were a group of seven guys, and the rendezvous time at Changi Point ferry terminal was 9:30 am. Since four of us shared a flat, we were the first to reach, at 9:30 am. However, the rest of them didn't turn up till 10:30 am, owing to some unforeseen circumstances like getting up late, not getting a cab on time, etc. After venting our feelings when they finally arrived, we joined the queue.

After a short wait, we boarded a bumboat towards Ubin, which is about 10 minutes away from the mainland. After taking turns at posing and hunting for that perfect Facebook profile pic, we reached the island without further ado.

Once on the island, we were greeted by lots of bicycle shops and some restaurants. We were forewarned, so we booked quite good-looking, new bicycles with full suspension at $20 apiece. Unfortunately, the baskets had run out, so we were left to carry our backpacks on our backs. I also got hold of a map, which, however, didn't prove to be of much use, as there are neat instructions all across the island.

After cycling about 800 m from the shop, there was a split in the road: one branch took us to the Chek Jawa wetlands (far east), and the other to Ketam Mountain Bike Park (north-west). Since the time was around 11:30 already and the low tide had gone by, we decided to ride straight to the bike park.

Another ride of 10 minutes, and we all got thirsty as soon as we saw a shop selling coconut water. We had our fill, clicked some more pics, and then rode till the bike park.

The bike park is huge, about 8-9 km, and has several paths for beginners, intermediates (blue square), experts (black diamond), and super experts (double black diamond). We decided to stick to a blue intermediate trail, about 900 m in length. The bike park is just next to the seashore, so you catch glimpses of the sea in between. After the first trail was covered, we paused to catch our breath. One amongst us had recently been back home to India and had brought some goodies. We had an amazing snack break, and clicked some more.

From that spot onwards there were three markers: a blue, a black, and a double black. Buoyed by a sudden flush of adrenaline, no doubt from the food we had just had, the consensus was to try out a double D. However, since none of us could even see the path through the trees and boulders, we decided to give it a miss. Goals readjusted, we ventured for the single D. The climb was steep, narrow, and frankly too much for us. But we powered on, only this time not on our bikes. The entire track (barring the first 10 metres or so) was covered on foot, with our heads (and bikes) held high.

Finally, we came to a gentler slope, and since the seashore was just beckoning us, we decided to take a dip. The water was not very clean, but the swim was refreshing nevertheless. After some more hilarious photo shoots, as can only happen when there are seven guys in their pants, we decided it was time to move on.

After another coconut water/beer break, we decided to ride back to the jetty. There are some excellent slopes coming back from the park towards the jetty, and we had a great time absolutely zipping through the descents with gay abandon. Soon, the time to take a decision was again upon us. Towards the left was Chek Jawa, a 2.4 km journey, while the jetty lay close by on the right, invitingly within a km. The last slope seemed to have raised everyone's spirits, for the majority of us agreed to go towards Chek Jawa. There were a few murmurs among some of them, but they were nipped in the bud as a couple of us quickly started cycling.

This 2.4 km is the most difficult part of the journey, with rocky roads and seemingly endless inclines. However, the journey is well worth the destination. After docking our bikes at Punai Hut, we walked the last few yards. We were soon greeted by a family of wild boars.

There are three spots at Chek Jawa: two boardwalks and an observation tower overlooking the sea. The tower is pretty high, and our legs felt like lead when we had to drag ourselves up. But once up, you get some amazing views and, yes, you guessed it right, some great photo ops.

We also managed to go to the boardwalks, but since it was high tide then, there was very little on view. The sea creatures are on view during low tide, since Chek Jawa is an intertidal wetland. Since by this time we were really feeling hungry, owing to the prolonged cycling, we decided to ride straight back to the jetty and catch the boat back to the mainland. The track back from Chek Jawa is way easier than the one going to it.

Finally, we returned our bikes and boarded the bumboat back to the mainland. We had quite a filling lunch at a restaurant in Changi Point, the quality of the food on offer no doubt enhanced by our ravenous hunger. That was the end of an extremely nice outing, and for a change, it didn't burn a hole in our pockets in Singapore.

P.S: We went for an hour of table tennis, followed by an hour of swimming when we came back. Yes, you can call us crazy!! 🙂

How to go there? Go to Changi Point, board a bumboat, rent a bike, have fun.

Pointers: carry the following:

  • An extra pair of footwear/clothing to change into for the journey home.
  • A hat/cap.
  • Some water, though there are plenty of coconut water stalls.
  • An umbrella (you never know when it rains in Singapore).
  • Sunscreen, if you don't fancy sunburns.
  • Some food (nothing Indian/vegetarian available); the only restaurants are at the jetty.

Gaga over Scuba!

Posted: August 6, 2012 in outdoors

I want to dive deep and explore the underwater wonders! While that may suggest I am a “liquid” “adventurous” soul, it's stretching the truth a little thin.


As a kid I was petrified of water, thanks to a family friend who had let go of a myopic 8-year-old in a humongous pond. (I come from a suburb which produced Bula Chowdhury, the first woman to cross the seven seas, and huge water bodies are (or were, thanks to the recent spate of construction) numerous around my area.)

It was “his” way of teaching me to swim. However, that one dive scarred my psyche for a good decade or so. A good 10 years later, after my board examination, I finally decided to exorcise the demon for good and asked a kindly soul to teach me to swim. This time the “dighi” (colloquial for water pool) was the largest water body in my district. Coupled with the fact that I was practically blind without my glasses, the sight of the vast pool did nothing to soothe my nerves. However, I persevered and did manage to start swimming after 3 or 4 days, I reckon. I had learnt to swim, but I didn't love water. Not yet. Very quickly the break passed, and my next swimming moment came 6 years later, in Bangalore, in a small pool. It was hardly 15 metres long, and I struggled to cross even that. That was a very costly paying guest accommodation, and with our meagre salaries, some of my college mates and I decided to search for a house. “The end” of my swimming in Bangalore.

In Singapore

Fast forward three years: I travelled to Singapore, and some colleagues and I ended up renting a condo which has a swimming pool. Within 2 months of renting the condo, we went to Phuket, probably the best place to go vacationing if you love the waters. I didn't. I had a blast nevertheless, owing to some cool friends. Even though I snorkelled for some time, I didn't venture out too much. But I saw enough to understand what I was missing. Unfortunately, the condo had a gym as well, and the lure of buffing up was nigh impossible to ignore. And since I am a normal human being who can't prioritize more than 2 things at the same time, swimming again went to the back burner.

In Love

About 2 more years had passed, and I got a new flatmate, an expert swimmer, who again took me to the pool. After about two more months of practising on and off, yesterday I was able to complete my first 100 metres at a single stretch, and it was an exhilarating feeling. More importantly, for the first time in my life, I loved being in water. I always try to understand the “why” of things. A big reason was that I had started wearing lenses. I have noticed that it's very difficult for me to exhale under water if I close my eyes. So the fear of water was not just fear of water, but also fear of darkness, or of being unable to see. So maybe sometimes it's easier to circumvent a small problem to solve a bigger one. Also, the presence of a better-skilled person always helps. Yes, this is something you have to take with a pinch of salt, as being second “always” is NO fun. But improvement is what matters.


I plan to go the whole hog and be NAUI certified as a diver, and as I gleaned from the requirements, I still have some way to go before I even take the test (200 m swim, 15 min underwater swim, 10 min treading). I am probably midway through all of them. So here's wishing me the best of luck to complete this by September end.

To scuba diving!!

Why the 1-month break? The 1st week of July was a planned deconditioning week. I was feeling restless (possibly overtrained), and my lower back was just beginning to bother me. Then my Mrs. fell ill, so I had to help her out for another week. All that took its toll, and when I came back, I myself caught the virus. Hence I decided to take another week off. Finally, momentum had its sway, and I started the 4th week with very little motivation to lift anything at all. It's strange how habits change in the course of a few weeks. In the meantime, I gorged on junk food and bought more stuff from MacD in a week than I did in the past year.

But men will be men! Last weekend, my body fat looked to my naked eyes to be hovering around the 10-12% mark. (My calipers made a scrawny sound as soon as I turned them on yesterday, so the battery has probably run out.) Coupled with the fact that my guns had shrunk back to the puny 12.5-inch baseline from before I started prioritising them around April, I decided enough was enough. Eventually, I did eke out my workout plan for the coming 3 weeks and visited the temple, ahem, the gym, yesterday.

The results

1. I lost some strength, which is obvious given the long break. But it wasn't much. I was squatting just around my bodyweight before the break (2RM). Yesterday, I went for about 10 kg less for a 5RM and literally had plenty in the tank. I decided to play it safe since it was such a long break.

2. I lost LOTS of stamina. I was out of breath after doing just a couple of warmup sets of squats. I used to do 8x8 on squats with a 30 s rest interval, and anybody who's done that knows how gruelling it can be. Yesterday I had to take 1 to 2 minute breaks just to regain my breath. 😦

It's not that I did absolutely nothing during this month. In the last 2 weeks, I swam, though not much. A recent fixation (I hope this dream comes true soon) with NAUI scuba certification has forced me to spend some liquid moments.. ha ha ha. But then, it was highly inadequate, as I kept panting like a dog on a sun-drenched day after every set.

3. My big three (quads, hams, back) didn't lose much size or definition. A little, perhaps (since visual appearance is relative, an inch lost on the legs would obviously look much less than an inch lost on the biceps). So anything built on low reps and heavy weight is going to last longer. I already knew this, but this was a nice firsthand example to rely upon.


A key point here is that my short-term goals are NOT too high at the moment. I just want to keep up the progressive overload, to gain strength. I don't want to follow the low-rest, high-rep route to fire up the GH. My primary goals are in another domain. It's extremely difficult, to the point of self-sabotaging the goals, if there are too many eggs in the basket.

Long-term goal: end 2012 with a body fat of 8% and a weight of 66 kg on an empty stomach.

There might be substantial upheaval on the personal/professional front in the meantime, as there are 6 months to go and lots of other things planned out, but this is something which can be achieved. See you sometime soon!

Hibernate and performance considerations: these two are like twins joined at the hip. And a 2nd level cache is probably the first step towards happy customers and a fast application!

We will go over the following things in this post:

1. A brief introduction to the different caches available and why we chose EhCache.

2. Hibernate 2nd level cache implementation with EhCache with a small application.

3. Detailed differences between the various caching strategies: read-only, nonstrict-read-write, read-write, and transactional.

4. Ways to clean up the 2nd level cache.

5. Query Cache


There are 2 cache levels in Hibernate:

  • 1st level or the session cache.
  • 2nd level or the SessionFactory level Cache.

The 1st level cache is mandatory and is taken care of by Hibernate itself.

The 2nd level cache can be implemented via various 3rd party cache implementations. We shall use EhCache to implement it. There are several other possibilities, like SwarmCache and OSCache.
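To make the scoping of the two levels concrete before we dive into configuration, here is a toy model in plain Java maps (not Hibernate's actual classes): the 1st level cache lives and dies with a Session, while the 2nd level cache is shared by every Session the factory opens.

```java
import java.util.HashMap;
import java.util.Map;

public class CacheScopes {
    static class SessionFactory {
        final Map<Long, Object> secondLevelCache = new HashMap<>(); // shared, factory-scoped
        Session openSession() { return new Session(this); }
    }

    static class Session {
        final SessionFactory factory;
        final Map<Long, Object> firstLevelCache = new HashMap<>(); // per-session
        Session(SessionFactory f) { factory = f; }

        Object get(Long id) {
            // 1st level cache is always consulted first...
            Object o = firstLevelCache.get(id);
            if (o == null) {
                // ...then the 2nd level cache, then (in real life) the database.
                o = factory.secondLevelCache.get(id);
                if (o == null) {
                    o = "row#" + id;                     // pretend database hit
                    factory.secondLevelCache.put(id, o); // populate 2nd level
                }
                firstLevelCache.put(id, o);
            }
            return o;
        }
    }

    public static void main(String[] args) {
        SessionFactory sf = new SessionFactory();
        Session s1 = sf.openSession();
        s1.get(421L);                  // "DB" hit; both caches get filled
        Session s2 = sf.openSession(); // new session: empty 1st level cache
        System.out.println(sf.secondLevelCache.containsKey(421L)); // true
        System.out.println(s2.firstLevelCache.containsKey(421L));  // false
    }
}
```

Session 2 starts with an empty 1st level cache, but its first get() would be served from the shared 2nd level cache without touching the database.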

Why EhCache?

In pre-3.2 Hibernate releases, EhCache is the default.

EhCache has a really vibrant development community, and believe me, that's an important consideration before choosing any open source project/tool. We don't want to be stuck midway in a project and then hunt for answers from a community which doesn't answer queries or track bugs.

As implied earlier, the ‘second-level’ cache exists as long as the session factory is alive. It holds on to the ‘data’ for all properties and associations (and collections, if requested) for individual entities that are marked to be cached.

It is possible to configure a cluster or JVM-level (SessionFactory-level) cache on a class-by-class and collection-by-collection basis.

As a side note, the 2nd level cache is also used to mitigate the N+1 selects problem, though a better approach is obviously to improve the original query using the various fetch strategies.

Application Overview

Let's have an application structure like the one below. We have a state where some patients have to be transferred from their homes to hospitals. Several organizations have voluntarily decided to help with this. Each organization has several volunteers. The volunteers can be either drivers or caregivers who help in transporting the patients. The entire state is split into regions, to help a volunteer pick and choose the regions they want to serve in (perhaps close to home, etc.).

To summarize,

  • 1 organization will have m volunteers.
  • 1 volunteer can be either a driver or a caregiver.
  • 1 volunteer will be linked to m regions.
  • 1 region will be linked to n volunteers.

So, Org: Volunteer = 1 : m

Volunteer : Region = m:n
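A plausible POJO sketch of this model (the field and class names here are assumptions based on the mapping files shown later in the post; the real entities would also carry ids, versions, and proper accessors):

```java
import java.util.HashSet;
import java.util.Set;

public class DomainModel {
    static class Organization {
        String orgName;
        Set<Volunteer> volSets = new HashSet<>(); // 1 org : m volunteers
    }
    static class Volunteer {
        Organization org;                         // many-to-one back to the org
        boolean driver;                           // driver or caregiver
        Set<Region> regions = new HashSet<>();    // owning side of the m:n
    }
    static class Region {
        String name;
        Set<Volunteer> volunteers = new HashSet<>(); // inverse side of the m:n
    }

    public static void main(String[] args) {
        Organization org = new Organization();
        Volunteer v = new Volunteer();
        v.org = org;
        org.volSets.add(v);

        Region east = new Region();
        v.regions.add(east);
        east.volunteers.add(v); // both sides of the m:n kept in sync by hand

        System.out.println(org.volSets.size());          // 1
        System.out.println(east.volunteers.contains(v)); // true
    }
}
```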

Hibernate 2nd level cache implementation with EhCache:

Step 1

Download ehcache-core-*.jar and add it to your classpath. We also need an ehcache.xml in our classpath to override the defaults.

Hibernate Version: 3.6

Step 2

Sample ehcache.xml (to be put in classpath)

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:noNamespaceSchemaLocation="ehcache.xsd" updateCheck="true"
	monitoring="autodetect" dynamicConfig="true">

	<!-- illustrative defaults; tune these values for your application -->
	<defaultCache maxElementsInMemory="10000" eternal="false"
		timeToIdleSeconds="120" timeToLiveSeconds="120"
		overflowToDisk="false" />

</ehcache>


Step 3

Enable EhCache in our hibernate.cfg.xml:

<property name="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</property>

<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.use_query_cache">true</property>


Note: Prior Hibernate versions will require different hibernate properties to be enabled.

As seen above, we have the second level cache and the query cache both enabled.

The second level cache stores the entities/associations/collections (on request). The query cache stores the results of a query, but in a key-based format, where the values are actually stored in the 2nd level cache itself. So, the query cache is useless without a 2nd level cache.
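That division of labour can be illustrated with plain maps (a toy model, not Hibernate internals): the query cache maps a query key to a list of entity ids, and each id is then resolved against the 2nd level cache.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class QueryCacheSketch {
    static Map<String, List<Long>> queryCache = new HashMap<>();  // query -> ids only
    static Map<Long, String> secondLevelCache = new HashMap<>();  // id -> entity data

    static List<String> runQuery(String hql) {
        List<Long> ids = queryCache.get(hql);
        if (ids == null) {
            ids = Arrays.asList(421L, 422L);     // pretend database execution
            queryCache.put(hql, ids);
            secondLevelCache.put(421L, "Org A"); // entity data cached separately
            secondLevelCache.put(422L, "Org B");
        }
        List<String> result = new ArrayList<>();
        for (Long id : ids) {
            // each id is looked up in the 2nd level cache (or the DB on a miss)
            result.add(secondLevelCache.get(id));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(runQuery("from Organization")); // [Org A, Org B]
        System.out.println(runQuery("from Organization")); // served entirely from the caches
    }
}
```

If the 2nd level map were emptied, the cached ids alone would be useless, which is exactly why enabling the query cache without the 2nd level cache buys nothing.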

Recall that we can put cache strategies for both classes and collections.

Step 4

Enable the cache at the class level:

<class name="com.spring.model.Region" table="region">
	<cache usage="read-only" />
	<!-- other properties -->
</class>

Difference between Cache strategies in detail

usage (required) specifies the caching strategy: transactional, read-write, nonstrict-read-write, or read-only.

Straight from the Hibernate API (the fuller explanation comes below, though 🙂):

Strategy: read only(usage=”read-only”)

  • If your application needs to read, but not modify, instances of a persistent class, a read-only cache can be used.
  • Simplest and best performing strategy.
  • Safe for use in a cluster.

Note: we shall see later that a read-only cache allows insertions but no updates/deletes.

For our Region persistent class earlier, we used the read-only cache strategy: the regions are inserted directly into the database, not from the UI, so we can safely say that changes will not be made to the cached data.

Strategy: nonstrict read/write(usage=”nonstrict-read-write”)

  • Caches data that is sometimes updated without ever locking the cache.
  • If concurrent access to an item is possible, this concurrency strategy makes no guarantee that the item returned from the cache is the latest version available in the database. Configure your cache timeout accordingly! This is an “asynchronous” concurrency strategy.
  • In a JTA environment, hibernate.transaction.manager_lookup_class has to be set

For example, for WebLogic: hibernate.transaction.manager_lookup_class=org.hibernate.transaction.WeblogicTransactionManagerLookup

  • For non-managed environments, the transaction should be completed by the time session.close() or session.disconnect() is invoked.
  • This is slower than read-only, but obviously faster than the next one (read-write).

Strategy: read/write(usage=”read-write”)

  • Caches data that is sometimes updated while maintaining the semantics of “read committed” isolation level. If the database is set to “repeatable read”, this concurrency strategy almost maintains the semantics. Repeatable read isolation is compromised in the case of concurrent writes. This is an “asynchronous” concurrency strategy.
  • If the application needs to update data, a read-write cache might be appropriate.
  • This cache strategy should never be used if serializable transaction isolation level is required. In a JTA environment, hibernate.transaction.manager_lookup_class has to be set

For example: hibernate.transaction.manager_lookup_class=org.hibernate.transaction.WeblogicTransactionManagerLookup

Strategy: transactional

  • Support for fully transactional cache implementations like JBoss TreeCache.
  • Note that this might be a less scalable concurrency strategy than ReadWriteCache. This is a “synchronous” concurrency strategy
  • Such a cache can only be used in a JTA environment and you must specify hibernate.transaction.manager_lookup_class.
  • Note: this isn’t available with the EhCache singleton (it is available with a cache server: Terracotta).

Now, if you cannot understand the differences between nonstrict R/W vs R/W vs transactional very well from the above, I don't blame you, as I was in the same boat earlier. Let's delve a bit deeper into the cache workings, shall we?

Basically, two different cache implementation patterns are provided for:

  • A transaction-aware cache implementation might be wrapped by a “synchronous” concurrency strategy, where updates to the cache are written to the cache inside the transaction.
  • A non-transaction-aware cache would be wrapped by an “asynchronous” concurrency strategy, where items are merely “soft locked” during the transaction and then updated during the “after transaction completion” phase.

Note: the soft lock is not an actual lock on the database row, only on the cached representation of the item. In a distributed cache setup, the cache provider should have a cluster-wide lock; otherwise cache correctness is compromised.

In terms of entity caches, the expected call sequences for delete / update / insert operations are:


Delete:

  1. lock(java.lang.Object, java.lang.Object)
  2. evict(java.lang.Object)
  3. release(java.lang.Object, org.hibernate.cache.CacheConcurrencyStrategy.SoftLock)


Update:

  1. lock(java.lang.Object, java.lang.Object)
  2. update(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)
  3. afterUpdate(java.lang.Object, java.lang.Object, java.lang.Object, org.hibernate.cache.CacheConcurrencyStrategy.SoftLock)


Insert:

  1. insert(java.lang.Object, java.lang.Object, java.lang.Object)
  2. afterInsert(java.lang.Object, java.lang.Object, java.lang.Object)

In terms of collection caches, all modification actions actually just invalidate the entry (or entries). The call sequence here is:

  1. lock(java.lang.Object, java.lang.Object)
  2. evict(java.lang.Object)
  3. release(java.lang.Object, org.hibernate.cache.CacheConcurrencyStrategy.SoftLock)

For an asynchronous cache, cache invalidation must be a two-step process (lock -> release, or lock -> afterUpdate). Note, however, that lock() applies only to read-write and not to nonstrict-read-write. release() is meant to release the lock, and update() updates the cache with the changes.

For a synchronous cache, cache invalidation is a single-step process (evict, or update). Since this happens within the original database transaction, there is no locking. Eviction forces Hibernate to look into the database for subsequent queries, whereas update simply updates the cache with the changes.
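The asynchronous two-step dance can be sketched in a few lines of plain Java (a simplified model, not Hibernate's actual CacheConcurrencyStrategy classes): while an entry is soft-locked, readers treat it as a cache miss and fall through to the database, and the real value is only re-cached in the after-completion phase.

```java
import java.util.HashMap;
import java.util.Map;

public class InvalidationSketch {
    static final Object SOFT_LOCK = new Object();
    static Map<Long, Object> cache = new HashMap<>();

    // Asynchronous (read-write style): two steps, lock -> afterUpdate.
    static void lock(Long key) { cache.put(key, SOFT_LOCK); }        // soft-lock the entry
    static Object get(Long key) {
        Object v = cache.get(key);
        return v == SOFT_LOCK ? null : v; // a locked entry behaves like a miss -> go to the DB
    }
    static void afterUpdate(Long key, Object v) { cache.put(key, v); } // re-cache after commit

    // Synchronous (transactional style): one step, inside the DB transaction.
    static void update(Long key, Object v) { cache.put(key, v); }

    public static void main(String[] args) {
        cache.put(421L, "Org A");
        lock(421L);                    // transaction in flight: entry soft-locked
        System.out.println(get(421L)); // null -> readers hit the database instead
        afterUpdate(421L, "Org A v2"); // after-completion phase re-caches the new state
        System.out.println(get(421L)); // Org A v2
    }
}
```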

Note that query result caching does not go through a concurrency strategy; query results are managed directly against the underlying cache regions.

Let's analyze what each of the cache strategies does, though TransactionalCache will most likely be overridden by the individual implementation (the 3rd party cache provider).

DELETE / Collection

| method | read-only | nonstrict-read-write | read-write | transactional |
| lock() | throws UnsupportedOperationException("Can't write to a readonly object") | returns null, so no lock applied | Stops any other transactions reading or writing this item from/to the cache and sends them straight to the database instead (the lock does time out eventually); this implementation tracks concurrent locks of transactions which simultaneously attempt to write to an item | returns null, so no lock applied |
| evict() | (never reached: lock() throws) | this.cache.remove(key); | does nothing | this.cache.remove(key); |
| release() | (never reached: lock() throws) | this.cache.remove(key); | Releases the soft lock on the item; other transactions may now re-cache it (assuming that no other transaction holds a simultaneous lock), though for a deleted item there will be nothing to re-cache | does nothing |

UPDATE

| method | read-only | nonstrict-read-write | read-write | transactional |
| lock() | throws UnsupportedOperationException("Can't write to a readonly object") | returns null, so no lock applied | locks as above | returns null, so no lock applied |
| update() | (never reached: lock() throws) | evict(key), i.e. this.cache.remove(key); returns false | returns false (the cache write is deferred to afterUpdate()) | updates the cache |
| afterUpdate() | (never reached: lock() throws) | release(key, lock), i.e. this.cache.remove(key); returns false | Re-caches the updated state, if and only if there are no other concurrent soft locks, and releases our lock | returns false |

INSERT

| method | read-only | nonstrict-read-write | read-write | transactional |
| insert() | returns false | returns false | returns false | updates the cache |
| afterInsert() | this.cache.update(key, value); returns true | returns false | Adds the new item to the cache, checking that no other transaction has accessed the item | returns false |

(For update/delete, the read-only strategy never gets past lock(), hence the empty cells.)

When does Hibernate look into the cache, and when does it go to the database?

Hibernate will look into the database if any of the below is true:

  1. The entry is not present in the cache.
  2. The session in which we look for the entry is OLDER than the cached entry, meaning the session was opened earlier than the last cache loading of the entry. In this case the cache will be refreshed.
  3. The entry is currently being updated/deleted and the cache strategy is read-write.
  4. An update/delete has recently happened with nonstrict-read-write, which caused the item to be evicted from the cache.
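The four rules above can be condensed into a single predicate. This is a simplified model of the decision, not Hibernate's actual code (in reality the timestamps come from Hibernate's internal timestamper and the "being written" state from the soft locks described earlier):

```java
public class CacheLookupRules {
    // Returns true only when the 2nd level cache may serve the entry;
    // each false branch corresponds to one of the four rules above.
    static boolean serveFromCache(boolean presentInCache, long sessionOpenedAt,
                                  long entryCachedAt, boolean beingWritten) {
        if (!presentInCache) return false;                 // rule 1: not in the cache
        if (sessionOpenedAt < entryCachedAt) return false; // rule 2: session older than the entry
        if (beingWritten) return false;                    // rules 3/4: concurrent update/delete
        return true;
    }

    public static void main(String[] args) {
        System.out.println(serveFromCache(true, 200L, 100L, false)); // true: fresh session, no writes
        System.out.println(serveFromCache(true, 50L, 100L, false));  // false: session opened before caching
    }
}
```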

Now, armed with the knowledge above, which basically tells us that nonstrict R/W (NSRW) never locks an entity while R/W locks it, and knowing when Hibernate looks into the database, let's look at some code.

Let's have the domain objects (only associations and collections depicted):

Organization :

<set name="volSets" cascade="all" inverse="true">
	<key column="org_id" not-null="true" />
	<one-to-many class="com.spring.model.Volunteer" />
</set>

Volunteer :

<many-to-one name="org" column="org_id"
	class="com.spring.model.Organization" not-null="true" />

<set name="regions" table="volunteer_region" inverse="false"
	lazy="true" fetch="select" cascade="none">
	<key column="volunteer_fk" not-null="true" />
	<many-to-many class="com.spring.model.Region">
		<column name="region_fk" not-null="true" />
	</many-to-many>
</set>

Region :

<set name="volunteers" table="volunteer_region" inverse="true"
	lazy="true" fetch="select" cascade="none">
	<key column="region_fk" not-null="true" />
	<many-to-many class="com.spring.model.Volunteer">
		<column name="volunteer_fk" not-null="true" />
	</many-to-many>
</set>

We will load the Organization and its set of volunteers in one transaction, then update the organization name in another transaction, and we will see the differences in action.

NonStrict R/W vs R/W

organization.hbm.xml is marked with nonstrict-read-write

     <cache usage="nonstrict-read-write"/>

Java code:

System.out.println("session1 starts");
Session session1 = sf.openSession();
Transaction tx = session1.beginTransaction();
Organization orgFromSession1 = (Organization) session1.get(Organization.class, 421l);
//loaded in the cache at time t0
tx.commit(); 	//evicted from the cache
System.out.println("session1 ends");

System.out.println("session2 starts");
Session session2 = sf.openSession();//session 2 opened at time t2
Transaction tx2 = session2.beginTransaction();
Organization orgFromSession2 = (Organization) session2.get(Organization.class, 421l);
System.out.println(orgFromSession2.getOrgName());
System.out.println("session2 ends");

Logs:

session1 starts
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
Hibernate: update organization set version=?, org_name=? where org_id=? and version=?
session1 ends

session2 starts
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?



a. In session 1, a select + update.

b. In session 2, another select from the DB to fetch the item, since the item was evicted by the update.


read-write cache enabled at organization.hbm.xml

<cache usage="read-write"  region="org_region"  />

Java code:

Same code as above


session1 starts

Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
Hibernate: update organization set version=?, org_name=? where org_id=? and version=?

session1 ends

session2 starts
session2 ends



a. In session 1, a select + update.

b. In session 2, no selects, since there was no eviction; instead the cache was updated.

Now, we shall just tweak the code so that we open session 2 just before transaction 1 commits. We shall also put in a check on whether the item actually exists in the cache. So the changed code becomes:


Session 2 is opened just before transaction 1 commits. Some more diagnostic messages are added to check whether the item is indeed in the 2nd level cache, using sf.getCache().containsEntity(Organization.class, 421l).

Java code:

System.out.println("session1 starts");
Session session1 = sf.openSession();
Transaction tx = session1.beginTransaction();
Organization orgFromSession1 = (Organization) session1.get(Organization.class, 421l);
//loaded in the cache at time t0
orgFromSession1.setOrgName("org " + System.currentTimeMillis()); //update the name; flushed when session1 closes
System.out.println("session2 starts");
Session session2 = sf.openSession();//session 2 opened at time t1
System.out.println("Cache Contains?"+sf.getCache().containsEntity(Organization.class,421l));

session1.close();//reloaded in the cache at time t2,after the flush happens
System.out.println("session1 ends");
Transaction tx2 = session2.beginTransaction();
Organization orgFromSession2= (Organization) session2.get(Organization.class,421l); //should be from the database
System.out.println("session2 ends");



session1 starts
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
session2 starts
Hibernate: update organization set version=?, org_name=? where org_id=? and version=?
Cache Contains? true
session1 ends
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
session2 ends



a. In session 1, a select + update.

b. In session 2, again a select, since this session was opened before the cache was updated with the entry.

c. Note that the cache did contain the item.

The actual summary is already present in the code comments, but let's reiterate. The below is true for both the nonstrict-R/W and R/W caches:

  • Whenever a session starts, a timestamp is added to it.(ST)
  • Whenever an item is loaded in the cache, a timestamp is added to it.(CT)
  • Now if ST < CT, meaning the session is older than the cached item, then if we look for the cached item in this older session, Hibernate will NOT look in the cache. Instead it will always look in the database, and consequently re-load the item in the cache with a fresh timestamp.

The above was demonstrated in demo 3, where we started the 2nd session before the cache was reloaded with the item. If you check the output, the item was actually present in the cache at the time of querying, and yet the database was queried.
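The timestamp rule can be sketched as a tiny, self-contained model (my own illustration of the rule above, not Hibernate's actual implementation):

```java
// Toy model (not Hibernate's code) of the ST/CT freshness rule described above.
import java.util.HashMap;
import java.util.Map;

public class TimestampRuleDemo {

    // A cached entry remembers when it was (re)loaded into the cache (CT).
    static class CachedItem {
        final Object value;
        final long cachedAt; // CT

        CachedItem(Object value, long cachedAt) {
            this.value = value;
            this.cachedAt = cachedAt;
        }
    }

    static final Map<Long, CachedItem> SECOND_LEVEL_CACHE = new HashMap<>();

    // A session opened at ST may only be served entries cached strictly before ST.
    static boolean servedFromCache(long sessionStartedAt, Long id) {
        CachedItem item = SECOND_LEVEL_CACHE.get(id);
        return item != null && item.cachedAt < sessionStartedAt;
    }

    public static void main(String[] args) {
        long t1 = 200, t2 = 300;

        // The entry was reloaded into the cache at t2 (e.g. after a commit).
        SECOND_LEVEL_CACHE.put(421L, new CachedItem("Org#421", t2));

        // Session opened at t1 (< t2): the entry is newer than the session -> go to the DB.
        System.out.println(servedFromCache(t1, 421L));

        // Session opened after t2: the cached entry may be used.
        System.out.println(servedFromCache(t2 + 1, 421L));
    }
}
```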

Summary of diff. between NS R/W and R/W

For NonStrict-Read-Write

• There’s no locking ever.

• So, when the object is actually being updated in the database, at the point of committing (till the database completes the commit), the cache has the old object while the database has the new object.

• Now, if any other session looks for the object, it will look in the cache and find the old object (DIRTY READ).

• However, as soon as the commit is complete, the object is evicted from the cache, so that the next session which looks for the object has to go to the database.

• If you execute the same code (Demo 1) with the diagnostic System.out.println("Cache Contains?" + sf.getCache().containsEntity(Organization.class, 421l)); placed before and after the tx.commit(), you will find that before the commit the cache contained the entry, and after the commit it's gone, hence forcing session2 to look in the database and reload the data into the cache.

So, nonstrict read/write is appropriate if you don't require absolute protection from dirty reads, or if the odds of concurrent access are so slim that you're willing to accept an occasional dirty read. Obviously, the window for a dirty read is the period when the database has been updated but the object has not yet been evicted from the cache.

For Read-Write

• As soon as somebody tries to update/delete an item, the item is soft-locked in the cache, so that if any other session tries to look for it, it has to go to the database.

• Now, once the update is over and the data has been committed, the cache is refreshed with the fresh data and the lock is released, so that other transactions can now look in the CACHE and don’t have to go to the database.

• So, there is no chance of Dirty Read, and any session will almost ALWAYS read READ COMMITTED data from the database/Cache.
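The soft-lock behavior described above can be modeled with a small in-memory sketch (my own toy illustration, not Hibernate internals): while an update is in flight the cached entry is locked and readers fall through to the database, and after the commit the cache is refreshed and unlocked.

```java
// Toy model of the read-write strategy's soft lock (not Hibernate's code).
import java.util.HashMap;
import java.util.Map;

public class SoftLockDemo {

    static class Entry {
        Object value;
        boolean softLocked;
    }

    static final Map<Long, Entry> CACHE = new HashMap<>();
    static final Map<Long, Object> DATABASE = new HashMap<>();

    static Object read(long id) {
        Entry e = CACHE.get(id);
        if (e == null || e.softLocked) {
            return DATABASE.get(id); // locked or absent -> read committed state from the DB
        }
        return e.value;
    }

    static void beginUpdate(long id) {
        CACHE.computeIfAbsent(id, k -> new Entry()).softLocked = true;
    }

    static void commitUpdate(long id, Object newValue) {
        DATABASE.put(id, newValue); // DB committed first...
        Entry e = CACHE.get(id);
        e.value = newValue;         // ...then the cache is refreshed
        e.softLocked = false;       // ...and the lock is released
    }

    public static void main(String[] args) {
        DATABASE.put(421L, "old name");
        Entry e = new Entry();
        e.value = "old name";
        CACHE.put(421L, e);

        beginUpdate(421L);
        System.out.println(read(421L)); // soft-locked: served from the DB, never a dirty read
        commitUpdate(421L, "new name");
        System.out.println(read(421L)); // unlocked: served from the refreshed cache
    }
}
```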

Differences between R/W and Transactional Cache

Below adapted from (Supplemented with code examples of mine below)

We have to understand that since R/W is asynchronous, the updating of the cache happens outside the tx (i.e. in the afterCommit phase of the transaction). What happens if something goes wrong there?

How is cache transactionality/correctness maintained in the read-write caching strategy during transaction commit, rollback and failures (the so-called value proposition of a transactional cache)? Here is how –

1. When the application commits the transaction, the cache entry is soft-locked, thereby deflecting all reads for this entry to the DB.

2. Then changes are flushed to the DB and transactional resources are committed.

3. Once the transaction is committed (i.e. reflected inside the DB), the cache entry is updated and unlocked in the after-completion phase. Any other transaction starting after the update timestamp can now read the cache entry contents, since the lock has been released.

This is what happens in different stages of transaction completion/failure –

  • So any time there is a lag between 2 & 3 (i.e. when the DB and cache are out of sync), you are using the DB to read the latest state, since the cache is still soft-locked.
  • If the transaction is rolled back, the cache entry still remains locked, and a later read from the DB refreshes the cache entry state.
  • What if the node making the transactional change fails between steps 2 & 3 (i.e. the transaction is committed to the DB but not to the cache) and the cache state is preserved (e.g. in a clustered cache)? Is my cache left corrupted? Not really.

Since the cache entry is locked, other transactions keep reading from the DB directly. Later, Hibernate times out the locked cache entry, its contents are refreshed with the database state, and the entry is again available for read/write operations.

Do you still need a transactional cache that either integrates with hibernate native transactions or JTA?

All a JTA transaction cache guarantees is cache state visibility across transactions and recoverability if any of the transaction phases fails.

With read-write, you are guaranteed to read the correct state all the time. If the cache entry state becomes inconsistent because of a failure in the transaction commit phase, it is guaranteed to recover with the correct state. This is all that a transactional cache guarantees, but at a higher cost (especially when reads outweigh writes).

Hibernate's read-write cache strategy makes a smart decision about reading from the cache or the database based on the cache entries. Any time the cache cannot guarantee the correct contents, the application is deflected to the DB.

What are the caveats? We will test them below in our code sample.

  • The read-write cache might compromise the repeatable-read isolation level if an entity is read from the cache and its contents are later evicted from the 1st level (session) cache. If the transaction reads the same entry again from the DB later, and in the meantime another transaction has updated the entry's state, the current transaction will get a different state than what it read earlier.

Note: This should occur only if the session cache contents are flushed; otherwise, once an entry is read from the 2nd level cache/DB, every subsequent read in the same transaction gets the state from the session cache, thereby guaranteeing the same state again and again. How many people really flush the session cache?

  • Cache entries might expire in lock mode. In lock mode, each entry is assigned a timeout value, and if the update doesn't unlock the entry within the specified timeout, the entry may be unlocked forcefully (this is done to avoid any permanent fallout of an entry from the cache, e.g. when a node fails before unlocking it). A genuinely delayed transaction might therefore create a very small window where the cache contents are stale and other transactions read the old state. The cache entry timeout is a cache provider property and may be tunable if the provider supports it.

Note: For this to occur, the update has to be delayed, the read has to occur after the timeout, and even then the stale window is minuscule. So the majority of applications are safe anyway.

Finally, one word of caution would be:

  • For entity types that are mostly updated and see concurrent reads and updates, the read-write caching strategy may not be very useful, as most reads will be deflected to the database.

Ok, let’s now put the caveats to test.

Testing the 1st caveat: repeatable reads might be compromised. What actually is a repeatable read? It means that if, within a transaction, you read a row at time T1 and read it again at time T2 (T2 > T1), the row shouldn't have changed. One important thing to remember is that Hibernate always looks for the object in the session (1st level cache) first, and then in the 2nd level cache.

Java code:

System.out.println("session1 starts");
Session session1 = sf.openSession();
Transaction tx = session1.beginTransaction();
Organization orgFromSession1 = (Organization) session1.get(Organization.class, 96783514l);
//loaded from the DB (and into the 2nd level cache), then evicted from the session below
session1.evict(orgFromSession1); //removed from the session (1st level) cache only
System.out.println("Cache Contains?" + sf.getCache().containsEntity(Organization.class, 96783514l));
System.out.println("1:" + orgFromSession1.getOrgName());

System.out.println("session2 starts");
Session session2 = sf.openSession();//session 2 opened at time t2
Transaction tx2 = session2.beginTransaction();
Organization orgFromSession2= (Organization) session2.get(Organization.class,96783514l);
//should be from the 2nd level cache
orgFromSession2.setOrgName("org " + System.currentTimeMillis());
tx2.commit(); // cache updated with new entry
System.out.println("inner "+orgFromSession2.getOrgName());
System.out.println("session2 ends");
System.out.println("Cache Contains?" + sf.getCache().containsEntity(Organization.class, 96783514l));

//we load the row again, from the database this time, since this session began before the cache update
orgFromSession1 = (Organization) session1.get(Organization.class, 96783514l);
System.out.println("2:" + orgFromSession1.getOrgName());


If you look at the above code, you will see the following pattern:

  • Session 1 loads an object (thereby also loading it into the 2nd level cache) and then removes it from the session using evict(). Note that it is still present in the 2nd level cache, but has been removed from the session cache.
  • Session 2 updates the same object, retrieving it from the 2nd level cache, hence no DB queries. Once the update completes, the cache is refreshed with the new data.
  • Session 1 tries to read the same entity again, and this time it refers to the 2nd level cache, as the entity has been evicted from the 1st level. Remember that the object is present in the 2nd level cache, but since this session started earlier, it will refer to the database for the object. Thus the object loaded in the first step differs from this one, and hence there are no repeatable reads. Note that the cache did contain the item, but the database was still queried.


session1 starts
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?

Cache Contains?true
1:org 1339490676715

session2 starts
Hibernate: update organization set version=?, org_name=? where org_id=? and version=?
inner org 1339490694572
session2 ends

Cache Contains?true
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
2:org 1339490694572


Right, so we have analyzed the various caching strategies available.

To summarize:

  • A ReadOnly cache can do only reads and inserts; it cannot perform updates/deletes. Fastest performing.
  • A Nonstrict Read Write cache doesn't employ any locks, ever, so there's always a chance of dirty reads. However, it ALWAYS evicts the entry from the cache, so that any subsequent sessions always refer to the DB.
  • A Read Write cache employs locks, but in an asynchronous manner: first the insert/update/delete occurs within the tx, during which the cache entry is soft-locked and other sessions have to refer to the database. Once the tx completes, the lock is released and the cache is updated (outside the transaction). In some cases, repeatable reads might be compromised.
  • Transactional caches update the database and the cache within the same transaction, so the cache is always in a consistent state with respect to the database.
  • For entity types that are mostly updated and see concurrent reads and updates, the read-write caching strategy may not be very useful, as most reads will be deflected to the database.

Collection Caching

Till now, we have discussed caching at the individual entity level. We can also cache collections. Recall that collection caching follows the same steps as the delete(),

i.e. lock() –> evict() –> release()

Collections are not cached by default, and have to be cached explicitly, like below:

<set name="volSets" cascade="all" inverse="true" batch-size="10">
	<cache usage="read-write" />
	<key column="org_id" not-null="true" />
	<one-to-many class="com.spring.model.Volunteer" />
</set>

Removing Entities and Collections from 2nd level cache

Whenever you pass an object to save(), update() or saveOrUpdate(), and whenever you retrieve an object using load(), get(), list(), iterate() or scroll(), that object is added to the internal cache of the Session.

When flush() is subsequently called, the state of that object will be synchronized with the database. If you do not want this synchronization to occur, or if you are processing a huge number of objects and need to manage memory efficiently, the evict() method can be used to remove the object and its collections from the first-level cache.


To evict all objects for Organization, we can call:


To remove from the SessionFactory or the 2nd level cache:

sf.getCache().evictEntity(Organization.class,421l); //Entity

sf.getCache().evictCollection("com.spring.model.Organization.volSets",421l); //Collections
//Note: the collection contains the name of the fully qualified class.

where sf is the SessionFactory; we can omit the identifier if we want all volunteer sets to be evicted.


Query Cache

As mentioned earlier, we need to enable the below property in our hibernate.cfg.xml

<property name="hibernate.cache.use_query_cache">true</property>

This setting creates two new cache regions:

  • org.hibernate.cache.StandardQueryCache, holding the cached query results
  • org.hibernate.cache.UpdateTimestampsCache, holding timestamps of the most recent updates to queryable tables. These are used to validate the results as they are served from the query cache.


UpdateTimestampsCache region should not be configured for expiry at all. Note, in particular, that an LRU cache expiry policy is never appropriate.

Recall that the query cache does not cache the state of the actual entities; it caches only identifier values and results of value type. For this reason, the query cache should always be used in conjunction with the second-level cache for those entities expected to be cached as part of a query result (just as with collection caching).

Even then, individual queries are not cached by default; they must be explicitly marked as cacheable, for both HQL and Criteria queries, by calling setCacheable(true) on the Query:

Criteria c2 = session.createCriteria(Guitar.class).setCacheable(true);

List volList = session.createQuery("from Volunteer vol join vol.regions").setCacheable(true).list();

Well, that’s it then! Whew! That was a long one indeed! If you have followed thus far, I would be delighted to hear your opinions on this post.


Today, we are going to put forth a small EJB 3 application in Glassfish v3.

Why EJB3?

Coding in EJB3 is almost as simple as it gets. EJB has undergone some major, intuitive simplifications in coding from release 2.x to 3.x.

Gone are the days of cumbersome home interfaces, remote interfaces, deployment descriptors and checked exceptions in SessionBean implementations.

EJB3 supports annotation-based coding and dependency injection. While we can (and will) do JNDI lookups to get hold of an EJB/Resource, DI takes away the need to do so.

 While the deployment descriptors (xml based) can still be used, we are going to develop a simple application which will be wholly annotation based.

Why Glassfish?

Glassfish is probably the easiest application server out there. It’s so intuitive that it almost feels like a web server. We will eventually use the same application in a Weblogic server(the big daddy) but Glassfish is great because you can start it up and test concepts in almost no time.

Let's then put the two together and build a small EJB3 application and deploy it in Glassfish v3.

We will be doing this in Windows and we shall use Eclipse as our IDE.


Download the zip file –  glassfish

Our version : glassfish-3.1.2.

Startup of Glassfish

Go to the bin directory after unzipping and key in the following(or start up asadmin.bat)

asadmin start-database ( If you really need Derby to start)
asadmin start-domain

Open http://localhost:4848/asadmin to test the installation.

Log in using the user ID admin and password adminadmin. This will validate the installation.

Coding our EJBs

We only need a POJI and a POJO, and then we will add some simple annotations to magically turn the POJO into an EJB. We will use a stateless session bean here.

import javax.ejb.Remote;
import javax.ejb.Stateless;

@Remote
public interface PlaceAuctionItem {
	void placeAuctionItem();
}

@Stateless(name="PlaceAuctionItem", mappedName="ejb/PlaceAuctionItem")
public class PlaceAuctionItemBean implements PlaceAuctionItem {

	public void placeAuctionItem() {
		System.out.println("saving the AuctionItem ");
		//save the auctionItem in the database
	}
}

@Stateless: marks the session bean as stateless

mappedName: would be used to do JNDI lookup from the client

@Remote: tells that this is the remote interface.

Once done, we need to deploy the EJB to Glassfish

 EJB Deployment in Glassfish

  • Right-click and export the project as a jar file (let's name it test-ejb.jar) and deploy it to


  • Glassfish auto-redeploys the jar file, so every time you change anything in the EJBs you have to re-export the jar (obviously), but you don't have to restart the server.

Check out the image below. (ActionBazaar is our project name in Eclipse)

[Image: exporting the EJB jar from Eclipse to Glassfish]

Once we have done this, we would create the client which will be a simple java class to test our EJBs.

EJB Client

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class TestStatelessSessionBeans {
	private PlaceAuctionItem placeAuctionItem;

	public TestStatelessSessionBeans() {
		try {
			Properties props = new Properties();
			props.put(Context.INITIAL_CONTEXT_FACTORY,
					"com.sun.enterprise.naming.SerialInitContextFactory");
			props.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
			// glassfish default port value will be 3700
			props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");

			InitialContext ctx = new InitialContext(props);
			this.placeAuctionItem = (PlaceAuctionItem) ctx.lookup("ejb/PlaceAuctionItem");
		} catch (NamingException e) {
			e.printStackTrace();
		}
	}

	public void mimicPlaceAuctionItem() {
		placeAuctionItem.placeAuctionItem();
	}
}

public class TestModule {
	public static void main(String args[]) {
		new TestStatelessSessionBeans().mimicPlaceAuctionItem();
	}
}
Some common errors

• @EJB would also mark this field to be dependency-injected by the container; however, we are not deploying the client in Glassfish, hence we need to do the JNDI lookup ourselves.

• We have to use the properties mentioned when instantiating the InitialContext, to prevent errors like:

Lookup failed for ‘ejb/PlaceAuctionItem’ in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.url.pkgs=com.sun.enterprise.naming,} [Root exception is javax.naming.NameNotFoundException: PlaceAuctionItem not found]


javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial

Libs required by project

  • appserv-rt.jar
  • gf-client.jar
  • javaee.jar

Note that gf-client.jar must not be copied into your workspace; it needs to be referenced from the Glassfish server location, since it acts as a wrapper for the other Glassfish client libraries.

Now, if we run the client and look at server.log under domains/domain1/logs, we see the following statement printed from our stateless session bean.

[#|2012-06-06T01:20:12.346+0800|INFO|glassfish3.1.2||_ThreadID=20;_ThreadName=Thread-2;|saving the AuctionItem AuctionItem

Next up would be deploying an ear into Glassfish(which shall contain this EJB and a servlet to test our DI)

Inheritance vs Aggregation

Posted: May 22, 2012 in Techilla

Inheritance vs Aggregation / Composition vs Inheritance / Is-a or Has-a? Call it by any name, but this quandary has baffled many design studies.
The other day I was designing a POC with Hibernate, and I got bitten by the performance bug. Having prior experience with Hibernate, I know what a beast it can be in terms of performance, especially with complex inheritance mappings or associations.

As is often the case with me, one thing led to another and I veered down the Google way 🙂 . To paraphrase many eminent people, who have pretty much pushed me towards aggregation, let me provide a gist below:

An inheritance relationship can always be rewritten as an association, as follows.

instead of

public class A {}
public class B extends A {}

we can use

public class B {
	private A a;
}

We should use aggregation if part of the interface (the methods available for us to override) is not used, or has to be changed to avoid an illogical situation.
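As a concrete sketch of that guideline, B can hold an A and expose only the part of its interface it actually wants (the class names follow the A/B example above; the method names are made up for illustration):

```java
// Illustrative only: B delegates the one operation it needs and hides the rest.
public class DelegationDemo {

    static class A {
        String greet() { return "hello from A"; }
        String danger() { return "method B never wanted to inherit"; }
    }

    // Aggregation: B delegates the part of A's interface it uses
    // and simply does not expose the rest.
    static class B {
        private final A a = new A();
        String greet() { return a.greet(); }
    }

    public static void main(String[] args) {
        System.out.println(new B().greet());
    }
}
```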

We only need to use inheritance if we need almost all of the functionality without major changes. And when in doubt, we should always use aggregation.

Another line of thought was:

1. Whatever design strategy you choose, your choice will likely be the wrong one at some point, because of changing requirements.
2. Changing that choice is difficult once you've made it.
3. Inheritance tends to be the worse choice, as it's more constraining; hence we should go for aggregation.

To quote a discussion between Bill Venners and Erich Gamma  at:

Bill Venners: That extra flexibility of composition over inheritance is what I’ve observed, and it’s something I’ve always had difficulty explaining. That’s what I was hoping you could capture in words. Why? What is really going on? Where does the increased flexibility really come from?

Erich Gamma: We call this black box reuse. You have a container, and you plug in some smaller objects. These smaller objects configure the container and customize the behavior of the container. This is possible since the container delegates some behavior to the smaller thing. In the end you get customization by configuration. This provides you with both flexibility and reuse opportunities for the smaller things. That’s powerful. Rather than giving you a lengthy explanation, let me just point you to the Strategy pattern. It is my prototypical example for the flexibility of composition over inheritance. The increased flexibility comes from the fact that you can plug in different strategy objects and, moreover, that you can even change the strategy objects dynamically at run-time.

Bill Venners: So if I were to use inheritance…

Erich Gamma: You can’t do this mix and match of strategy objects. In particular you cannot do it dynamically at run-time.
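A minimal sketch of the Strategy pattern Erich Gamma refers to (the names here are invented for illustration) shows the run-time mix and match that inheritance cannot do:

```java
// Illustrative Strategy pattern: the behavior object can be swapped at
// run-time, which a fixed superclass cannot.
public class StrategyDemo {

    interface PricingStrategy {
        double price(double base);
    }

    static class Checkout {
        private PricingStrategy strategy;
        Checkout(PricingStrategy s) { this.strategy = s; }
        void setStrategy(PricingStrategy s) { this.strategy = s; } // run-time swap
        double total(double base) { return strategy.price(base); }
    }

    public static void main(String[] args) {
        Checkout c = new Checkout(base -> base);     // full price
        System.out.println(c.total(100.0));
        c.setStrategy(base -> base * 0.9);           // 10% discount, plugged in dynamically
        System.out.println(c.total(100.0));
    }
}
```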

So, the writing’s on the wall for me. Choose aggregation over inheritance whenever you can, and hopefully, you will live to design another project before they find out that the design strategy implemented needs to be changed. 🙂

All the info above is drawn from inheritance-vs-aggregation
and “Design principle”, so all credit goes to the original authors; I just summarized it above.

Logging a Spring + Hibernate application is especially useful during the initial stages, when you want to see whether all the Hibernate configurations are fine. It is also useful at later stages, to look at the generated query log and do some optimization. However, logging combined Spring and Hibernate data onto the same log file is not very straightforward (until you read this post, that is :). That's primarily because Spring and Hibernate use different logging technologies.

Spring uses the Jakarta Commons Logging API (JCL), which must be on the classpath or the application context doesn't get loaded.

Hibernate 3.3+ uses SLF4J, the Simple Logging Facade for Java.

I have always used log4j as the common logging repository for our applications.

Now, for all these to work together and post everything onto the log file, we need the following libraries in our classpath:

  • commons-logging-1.1.1.jar
  • log4j-1.2.13.jar
  • slf4j-api-1.6.1.jar
  • slf4j-log4j12-1.6.4.jar

The last jar is the bridge between SLF4J and log4j; however, note that the slf4j-log4j* and slf4j-jdk* bindings cannot both be on the classpath at the same time, or they cause binding issues, so look out for that. (SLF4J will print a warning in the logs if you have any such binding conflict.)

Finally, our log4j.xml which will contain the logging info.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

	<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
		<layout class="org.apache.log4j.PatternLayout">
			<param name="ConversionPattern" value="[%d{dd/MM/yy hh:mm:ss:sss z}] %5p %c{2}: %m%n" />
		</layout>
	</appender>

	<appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
		<appender-ref ref="CONSOLE" />
		<appender-ref ref="FILE" />
	</appender>

	<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
		<param name="File" value="C:/log/spring-hib.log" />
		<param name="MaxBackupIndex" value="100" />
		<layout class="org.apache.log4j.PatternLayout">
			<param name="ConversionPattern" value="[%d{dd/MM/yy hh:mm:ss:sss z}] %5p %c{2}: %m%n" />
		</layout>
	</appender>

	<category name="org.hibernate">
		<priority value="DEBUG" />
	</category>

	<category name="java.sql">
		<priority value="debug" />
	</category>

	<root>
		<priority value="INFO" />
		<appender-ref ref="FILE" />
	</root>

</log4j:configuration>

Note: org.hibernate at DEBUG will show a full-blown list of boilerplate, so it might be worthwhile to trim it down to org.hibernate.SQL.

Also, if you set hibernate.show_sql=true in your configuration file, it will print the SQL statements on the console as well.
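For reference, the corresponding hibernate.cfg.xml entries would look something like this (hibernate.format_sql is optional and merely pretty-prints the SQL):

```xml
<!-- sketch of the SQL-logging properties in hibernate.cfg.xml -->
<property name="hibernate.show_sql">true</property>
<property name="hibernate.format_sql">true</property>
```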

There is a very nice pictorial representation here which shows the various bindings among the different loggers; it pointed me in the right direction. Thanks, Espen! :)

Uploading a file or an image and retrieving it is an extremely frequent activity, and doing so via Mybatis/Spring ought to be a breeze, given how many users must be doing it!

So I thought, six hours before I started to code it. Unfortunately, the Mybatis user manual has zero references to blob/clob insert/delete, and searching on Google didn't go very far either: there were pointers, but no complete code examples. In the end it proved to be exceedingly simple, and I went to sleep a happy man. Here's the full example, to save half a day for somebody else.

Mybatis version : mybatis-3.1.0
Spring version : 3.1.1
Mybatis-Spring bundle : 1.1.1
Database : Oracle
Reqd. libs : commons-fileupload-*.jar , commons-io*.jar , ojdbc14.jar and obviously the reqd. spring and mybatis jars.

Database columns in EMPLOYEE table:


The actual file data is stored in a BLOB field.

Note: not a CLOB field, as we are mainly hosting images here. (Even if some other file type is uploaded, storing it as a BLOB helps retrieve it exactly as it was stored, without any character encoding being applied, as would happen with a CLOB.)
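The encoding point can be demonstrated with a small, self-contained sketch (my own illustration): pushing raw bytes through a character decode/encode, which is effectively what CLOB storage does, can corrupt binary data, while a byte-for-byte BLOB-style round trip cannot.

```java
// Why binary uploads belong in a BLOB: a charset round trip mangles
// bytes that are not valid text, while a raw byte copy is lossless.
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BlobVsClobDemo {
    public static void main(String[] args) {
        byte[] original = {(byte) 0x89, 'P', 'N', 'G', (byte) 0xFF}; // PNG-like header bytes

        // BLOB-style round trip: bytes in, identical bytes out.
        byte[] blobRoundTrip = Arrays.copyOf(original, original.length);

        // CLOB-style round trip: bytes -> String -> bytes via a charset;
        // invalid UTF-8 bytes get replaced and the data is corrupted.
        byte[] clobRoundTrip = new String(original, StandardCharsets.UTF_8)
                .getBytes(StandardCharsets.UTF_8);

        System.out.println(Arrays.equals(original, blobRoundTrip));
        System.out.println(Arrays.equals(original, clobRoundTrip));
    }
}
```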

We have a Spring MVC application, so we augment our web-config.xml (or whatever config file the DispatcherServlet listens to) with the code below:

<bean id="multipartResolver"
	class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
	<property name="maxUploadSize" value="100000" />
</bean>
The fileupload size is in Bytes.

Model object / Form data

public class Employee {
	private CommonsMultipartFile fileData;
	private byte[] fileDataBytes;
	private String fileName;
	private String fileContentType;
	//getters and setters
}

Our jsp will host the below :

[Image: Spring form file upload]

If we don't set enctype="multipart/form-data", we will not be able to typecast the uploaded file into a CommonsMultipartFile, and hence will not be able to retrieve the contentType and the filename. Without the enctype we can still retrieve the uploaded file as a byte[], but in order to serve the file back we would have to store the content type as well, so the easier option is to use CommonsMultipartFile.
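Since the form screenshot is only an image, here is a minimal sketch of such an upload form (the action URL is hypothetical; the input name is assumed to match the Employee.fileData field):

```html
<!-- hypothetical form; the enctype is the critical part -->
<form method="post" action="saveEmployee.do" enctype="multipart/form-data">
    <input type="file" name="fileData" />
    <input type="submit" value="Upload" />
</form>
```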

Store or Upload the file or image in Mybatis

<update id="updateEmployee" parameterType="com.spring.model.Employee">
	update employee
	<set>
		<if test="fileData.originalFilename != null">fileName = #{fileData.originalFilename,jdbcType=VARCHAR},</if>
		<if test="fileData.contentType != null">fileContentType = #{fileData.contentType,jdbcType=VARCHAR},</if>
		<if test="fileData.bytes != null">fileData = #{fileData.bytes},</if>
	</set>
	where empId = #{empId}
</update>

Note that we don't set the jdbcType of contentType, nor do we use any type handlers, as many have suggested.

Also, note that we use the original CommonsMultipartFile object to retrieve the fileContentType, filename and fileData.

That’s it for storing the file.

Retrieve the uploaded file/image using Mybatis

Now, for retrieving it back, we will use the below query:

<select id="getUploadedFileForEmployee" parameterType="Long"
		resultType="com.spring.model.Employee">
	select empId, fileName, fileContentType, fileData as fileDataBytes from
	employee where empId = #{empId}
</select>

That’s it. We have successfully retrieved the file along with its name and contenttype. Now, we shall see how we can display it / download the file.

public class EmployeeController {
	private EmployeeBaseService employeeService;
	private UploadedObjectView uploadedObjectView;

	public EmployeeBaseService getEmployeeService() {
		return employeeService;
	}

	public ModelAndView downloadFile(@RequestParam("empId") long empId) {
		Map model = new HashMap();
		Employee empMap = employeeService.getUploadedFileForEmployee(empId);
		model.put("data", empMap.getFileDataBytes());
		model.put("contentType", empMap.getFileContentType());
		model.put("filename", empMap.getFileName());
		return new ModelAndView(uploadedObjectView, model);
	}
}

For the save of the employee along with its data and a detailed explanation of the annotations, please refer here.

Ok, so we are done with our controller, but since the file to be downloaded can be of different types, we construct a generic UploadedObjectView and populate it with the file data.

public class UploadedObjectView extends AbstractView {

	//To redirect to another page, with inline text.
	protected void renderMergedOutputModel1(Map model,
			HttpServletRequest request, HttpServletResponse response) throws Exception {
		byte[] bytes = (byte[]) model.get("data");
		String contentType = (String) model.get("contentType");
		response.setContentType(contentType);
		ServletOutputStream out = response.getOutputStream();
		out.write(bytes);
		out.flush();
	}

	//For a download option
	protected void renderMergedOutputModel(Map model,
			HttpServletRequest request, HttpServletResponse response) throws Exception {
		byte[] bytes = (byte[]) model.get("data");
		String contentType = (String) model.get("contentType");
		response.setContentType(contentType);
		response.addHeader("Content-disposition", "attachment; filename=" + model.get("filename"));
		ServletOutputStream out = response.getOutputStream();
		out.write(bytes);
		out.flush();
	}
}

We have 2 methods above:

If we override with the 1st method, the output is rendered inline on the response (text or image shown directly in the browser).

For a totally generic implementation, the second overridden method is safer, as it offers a download option.

That’s it folks :)! I hope this could be a help to somebody.

One common error which I have encountered in Mybatis while uploading large files was :

SQL state [72000]; error code [1013]; ORA-03111: break received on communication channel

This error is due to the query timeout being exceeded.

In mybatis-config.xml (more info here), I changed the value from 10 to 100 to resolve this issue:

<setting name="defaultStatementTimeout" value="100" />

Great post here by Juergen Hoeller if you need further inputs or just anything about Spring in particular. Thanks Juergen!