Archive for the ‘Techilla’ Category

Hibernate and performance considerations: the two are like twins joined at the hip. And a 2nd level cache is probably the first step towards happy customers and a fast application!

We will go over the following things in this post:

1. A brief introduction to the different cache providers available and why we chose EhCache.
2. Hibernate 2nd level cache implementation with EhCache, with a small application.
3. Detailed differences between the various caching strategies: read-only, nonstrict-read-write, read-write and transactional.
4. Ways to clean up the 2nd level cache.
5. Query cache.

Introduction

There are 2 cache levels in Hibernate:

  • 1st level or the session cache.
  • 2nd level or the SessionFactory level Cache.

The 1st level cache is mandatory and is taken care of by Hibernate.
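
As a quick illustration of the 1st level cache (a minimal sketch, assuming the Organization entity and the SessionFactory sf used in the demos later in this post):

Session session = sf.openSession();
// first get(): hits the database and caches the entity in the session
Organization org1 = (Organization) session.get(Organization.class, 421L);
// second get() in the SAME session: served from the 1st level cache, no SQL issued
Organization org2 = (Organization) session.get(Organization.class, 421L);
System.out.println(org1 == org2); // true: the very same instance from the session cache
session.close();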

The 2nd level cache can be implemented via various 3rd party cache implementations. We shall use EhCache to implement it. There are several other possibilities, like SwarmCache and OSCache.

Why EhCache?

In pre-3.2 Hibernate releases, EhCache is the default provider.

EhCache has a really vibrant development community, and believe me, that's an important consideration before choosing any open source project/tool. We don't want to be stuck midway in a project, hunting for answers from a development community which doesn't answer queries or track bugs.

As implied earlier, the 'second-level' cache exists as long as the session factory is alive. It holds on to the 'data' for all properties and associations (and collections, if requested) of individual entities that are marked to be cached.

It is possible to configure a cluster or JVM-level (SessionFactory-level) cache on a class-by-class and collection-by-collection basis.

As a side note, the 2nd level cache is also used to mitigate the N+1 selects problem, though a better approach is obviously to improve the original query using the various fetch strategies.
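
For instance, a sketch using the Organization/Volunteer model introduced below (the HQL is mine, not from the original post): a join fetch pulls the organizations and their volunteers in one query instead of 1 + N selects.

List orgs = session.createQuery(
        "select distinct o from Organization o left join fetch o.volSets")
        .list(); // one outer-join query instead of 1 select + N collection selects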

Application Overview

Let's have an application structure like below. We have a state where some patients have to be transferred from their homes to hospitals. Several organizations have voluntarily decided to help with this. Each organization has several volunteers. The volunteers can be either drivers or caregivers who help in transporting the patients. Now, the entire state is split into regions so that the volunteers can pick and choose the regions they want to serve in (perhaps close to home, etc.).

To summarize,

  • 1 Organization will have m volunteers.
  • 1 volunteer can be either Driver / Caregiver
  • 1 volunteer will be linked to m regions
  • 1 region will be linked to n volunteers

So, Org: Volunteer = 1 : m

Volunteer : Region = m:n

Hibernate 2nd level cache implementation with EhCache:

Step 1

Download ehcache-core-*.jar from http://ehcache.org/ and add it to your classpath. We also need an ehcache.xml in our classpath to override the default configuration.

Hibernate Version: 3.6

Step 2

Sample ehcache.xml (to be put in classpath)



<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:noNamespaceSchemaLocation="ehcache.xsd" updateCheck="true"
	monitoring="autodetect" dynamicConfig="true">

	<defaultCache
		maxElementsInMemory="100000"
		eternal="false"
		timeToIdleSeconds="1000"
		timeToLiveSeconds="1000"
		overflowToDisk="false"
	/>

</ehcache>

Step 3

Enable EhCache in our hibernate.cfg.xml:

<property name="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</property>
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.use_query_cache">true</property>

Note: Prior Hibernate versions will require different hibernate properties to be enabled.

As seen above, we have the second level cache and the query cache both enabled.

The second level cache stores the entities/associations/collections (on request). The query cache stores the results of a query, but only in a key-based format: the actual values are stored in the 2nd level cache. So, the query cache is useless without a 2nd level cache.

Recall that we can put cache strategies for both classes and collections.

Step 4

Enable the cache at class level


<class name="com.spring.model.Region" table="region">
	<cache usage="read-only" />
	<!-- other properties -->
</class>

Difference between Cache strategies in detail

usage (required) specifies the caching strategy: transactional, read-write, nonstrict-read-write or read-only.

The following comes straight from the Hibernate API docs (a fuller explanation comes below though 🙂):

Strategy: read-only (usage="read-only")

  • If your application needs to read, but not modify, instances of a persistent class, a read-only cache can be used.
  • Simplest and best performing strategy.
  • Safe for use in a cluster.

Note: We shall see later that Read-Only cache allows for insertions but no updates/deletes.

For our Region persistent class earlier, we used the read-only cache strategy: the regions are inserted directly into the database, never from the UI, so we can safely say that no changes will be made to the cached data.
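
As a quick illustration (my sketch, not from the Hibernate docs), attempting to update a read-only cached entity fails when the strategy's lock() is invoked at flush/commit time:

Session session = sf.openSession();
Transaction tx = session.beginTransaction();
Region region = (Region) session.get(Region.class, 1L);
region.setName("new name"); // setName is an assumed property, for illustration only
try {
    tx.commit(); // the read-only strategy throws UnsupportedOperationException:
                 // "Can't write to a readonly object"
} catch (RuntimeException e) {
    tx.rollback();
}
session.close();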

Strategy: nonstrict read/write (usage="nonstrict-read-write")

  • Caches data that is sometimes updated without ever locking the cache.
  • If concurrent access to an item is possible, this concurrency strategy makes no guarantee that the item returned from the cache is the latest version available in the database. Configure your cache timeout accordingly! This is an “asynchronous” concurrency strategy.
  • In a JTA environment, hibernate.transaction.manager_lookup_class has to be set

For example, for WebLogic: hibernate.transaction.manager_lookup_class=org.hibernate.transaction.WeblogicTransactionManagerLookup

  • For non-managed environments, the transaction should be completed when session.close() or session.disconnect() is invoked.
  • This is slower than read-only but obviously faster than the next one (read-write).

Strategy: read/write (usage="read-write")

  • Caches data that is sometimes updated while maintaining the semantics of “read committed” isolation level. If the database is set to “repeatable read”, this concurrency strategy almost maintains the semantics. Repeatable read isolation is compromised in the case of concurrent writes. This is an “asynchronous” concurrency strategy.
  • If the application needs to update data, a read-write cache might be appropriate.
  • This cache strategy should never be used if the serializable transaction isolation level is required. In a JTA environment, hibernate.transaction.manager_lookup_class has to be set.

For example: hibernate.transaction.manager_lookup_class=org.hibernate.transaction.WeblogicTransactionManagerLookup

Strategy: transactional

  • Support for fully transactional cache implementations like JBoss TreeCache.
  • Note that this might be a less scalable concurrency strategy than read-write. This is a "synchronous" concurrency strategy.
  • Such a cache can only be used in a JTA environment, and you must specify hibernate.transaction.manager_lookup_class.
  • Note: this isn't available for standalone (singleton) EhCache; it is available with a cache server (Terracotta).

Now, if you cannot understand the differences between nonstrict R/W vs R/W vs transactional from the above very well, I don't blame you, as I was in the same boat earlier. Let's delve a bit deeper into the cache workings, shall we?

Basically, two different cache implementation patterns are provided for:

  • A transaction-aware cache implementation might be wrapped by a "synchronous" concurrency strategy, where updates to the cache are written inside the transaction.
  • A non-transaction-aware cache would be wrapped by an "asynchronous" concurrency strategy, where items are merely "soft locked" during the transaction and then updated during the "after transaction completion" phase.

Note: The soft lock is not an actual lock on the database row – only upon the cached representation of the item. In a distributed cache setup, the cache provider should have a cluster wide lock, otherwise cache correctness is compromised.

In terms of entity caches, the expected call sequences for Create / Update / Delete operations are:

DELETES :

  1. lock(java.lang.Object, java.lang.Object)
  2. evict(java.lang.Object)
  3. release(java.lang.Object, org.hibernate.cache.CacheConcurrencyStrategy.SoftLock)

UPDATES :

  1. lock(java.lang.Object, java.lang.Object)
  2. update(java.lang.Object, java.lang.Object, java.lang.Object, java.lang.Object)
  3. afterUpdate(java.lang.Object, java.lang.Object, java.lang.Object,org.hibernate.cache.CacheConcurrencyStrategy.SoftLock)

INSERTS :

  1. insert(java.lang.Object, java.lang.Object, java.lang.Object)
  2. afterInsert(java.lang.Object, java.lang.Object, java.lang.Object)

In terms of collection caches, all modification actions actually just invalidate the entry(s). The call sequence here is:

  1. lock(java.lang.Object, java.lang.Object)
  2. evict(java.lang.Object)
  3. release(java.lang.Object, org.hibernate.cache.CacheConcurrencyStrategy.SoftLock)

For an asynchronous cache, cache invalidation must be a two-step process (lock → release, or lock → afterUpdate). Note, however, that lock() applies only to read-write and not to nonstrict-read-write. release() releases the lock, and update() updates the cache with the changes.

For a synchronous cache, cache invalidation is a single step process (evict, or update). Since this happens within the original database transaction, there is no locking. Eviction will force Hibernate to look into the database for subsequent queries whereas update will simply update the cache with the changes.

Note that query result caching does not go through a concurrency strategy; query results are managed directly against the underlying cache regions.

Let's analyze what each of the caches does, though TransactionalCache will most likely be overridden by the individual implementation (the 3rd party cache provider).

DELETE / Collection

lock()
  • READ-ONLY: throws UnsupportedOperationException("Can't write to a readonly object").
  • NONSTRICT-READ-WRITE: returns null, so no lock is applied.
  • READ-WRITE: stops any other transaction from reading or writing this item from/to the cache, and sends them straight to the database instead (the lock does time out eventually). This implementation tracks concurrent locks of transactions which simultaneously attempt to write to an item.
  • TRANSACTIONAL: returns null, so no lock is applied.

evict()
  • READ-ONLY: n/a (the delete has already failed at lock()).
  • NONSTRICT-READ-WRITE: this.cache.remove(key);
  • READ-WRITE: does nothing.
  • TRANSACTIONAL: this.cache.remove(key);

release()
  • READ-ONLY: n/a.
  • NONSTRICT-READ-WRITE: this.cache.remove(key);
  • READ-WRITE: releases the soft lock on the item. Other transactions may now re-cache the item (assuming that no other transaction holds a simultaneous lock). But obviously, for this item, there will be nothing, since it has been deleted.
  • TRANSACTIONAL: does nothing.

UPDATE

lock()
  • READ-ONLY: throws UnsupportedOperationException("Can't write to a readonly object").
  • NONSTRICT-READ-WRITE: returns null, so no lock is applied.
  • READ-WRITE: stops any other transaction from reading or writing this item from/to the cache, and sends them straight to the database instead (the lock does time out eventually). This implementation tracks concurrent locks of transactions which simultaneously attempt to write to an item.
  • TRANSACTIONAL: returns null, so no lock is applied.

update()
  • READ-ONLY: n/a (the update has already failed at lock()).
  • NONSTRICT-READ-WRITE: evict(key), i.e. this.cache.remove(key); returns false.
  • READ-WRITE: returns false.
  • TRANSACTIONAL: updates the cache.

afterUpdate()
  • READ-ONLY: n/a.
  • NONSTRICT-READ-WRITE: release(key, lock), i.e. this.cache.remove(key); returns false.
  • READ-WRITE: re-caches the updated state, if and only if there are no other concurrent soft locks, then releases our lock.
  • TRANSACTIONAL: returns false.

INSERT

insert()
  • READ-ONLY: returns false.
  • NONSTRICT-READ-WRITE: returns false.
  • READ-WRITE: returns false.
  • TRANSACTIONAL: updates the cache.

afterInsert()
  • READ-ONLY: this.cache.update(key, value); returns true.
  • NONSTRICT-READ-WRITE: returns false.
  • READ-WRITE: adds the new item to the cache, checking that no other transaction has accessed the item.
  • TRANSACTIONAL: returns false.

When does Hibernate look into the cache, and when into the database?

Hibernate will look into the database if any of the below is true:

  1. The entry is not present in the cache.
  2. The session in which we look for the entry is OLDER than the cached entry, meaning the session was opened earlier than the last cache loading of the entry. The cache will thus be refreshed.
  3. The entry is currently being updated/deleted and the cache strategy is read-write.
  4. An update/delete has recently happened under nonstrict-read-write, which has caused the item to be evicted from the cache.

Now, armed with the knowledge above, which basically tells us that nonstrict R/W (NSRW) never locks an entity while R/W locks it, and knowing when Hibernate looks into the database, let's look at some code.

Let's have the domain objects (only associations and collections depicted):

Organization :

<set name="volSets" cascade="all" inverse="true">
	<key column="org_id" not-null="true" />
	<one-to-many class="com.spring.model.Volunteer" />
</set>	

Volunteer:

<many-to-one name="org" column="org_id"
	class="com.spring.model.Organization" not-null="true" />

<set name="regions" table="volunteer_region" inverse="false"
	lazy="true" fetch="select" cascade="none">
	<key column="volunteer_fk" not-null="true" />
	<many-to-many class="com.spring.model.Region">
		<column name="region_fk" not-null="true" />
	</many-to-many>
</set>

Region:


		<set name="volunteers" table="volunteer_region" inverse="true"
			lazy="true" fetch="select" cascade="none">
			<key column name="region_fk" not-null="true" />
			</key>
			<many-to-many class="com.spring.model.Volunteer">
				<column name="volunteer_fk" not-null="true" />
			</many-to-many>
		</set>

We will load the Organization and its set of volunteers in one transaction, then update the organization name in another transaction, and we will see the differences in action.

NonStrict R/W vs R/W

DEMO 1
organization.hbm.xml is marked with nonstrict-read-write

     <cache usage="nonstrict-read-write"/>

Java code:

System.out.println("session1 starts");
Session session1 = sf.openSession();
Transaction tx = session1.beginTransaction();
Organization orgFromSession1 = (Organization) session1.get(Organization.class, 421L);
// loaded in the cache at time t0
orgFromSession1.setOrgName("org2" + System.currentTimeMillis());
session1.save(orgFromSession1);
tx.commit(); // evicted from the cache
session1.close();
System.out.println("session1 ends");


System.out.println("session2 starts");
Session session2 = sf.openSession(); // session 2 opened at time t2
Transaction tx2 = session2.beginTransaction();
Organization orgFromSession2 = (Organization) session2.get(Organization.class, 421L);
System.out.println(orgFromSession2.getOrgName());
session2.close();
System.out.println("session2 ends");

Logs :

session1 starts
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
Hibernate: update organization set version=?, org_name=? where org_id=? and version=?
session1 ends

session2 starts
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
org21338618053131


Note:

a. In session 1, a select + update.

b. In session 2, another select from the DB to fetch the item, since the item was evicted by the update.

DEMO 2

read-write cache enabled at organization.hbm.xml

<cache usage="read-write"  region="org_region"  />
	

Java code:

Same code as above.

Logs:

session1 starts

Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
Hibernate: update organization set version=?, org_name=? where org_id=? and version=?

session1 ends

session2 starts
org21338617974053
session2 ends


Note:

a. In session 1, a select + update.

b. In session 2, no selects, since there was no eviction; instead, the cache was updated.

Now, we shall tweak the code so that we open session 2 just before transaction 1 commits. We shall also put in a check whether the item actually exists in the cache. So the changed code becomes:

DEMO 3

Session 2 is opened just before transaction 1 commits. Some more diagnostic messages are added to check whether the item is indeed in the 2nd level cache, using sf.getCache().containsEntity(Organization.class, 421L).
Java code:

System.out.println("session1 starts");
Session session1 = sf.openSession();
Transaction tx = session1.beginTransaction();
Organization orgFromSession1 = (Organization) session1.get(Organization.class, 421L);
// loaded in the cache at time t0
orgFromSession1.setOrgName("org2" + System.currentTimeMillis());
session1.save(orgFromSession1);
System.out.println("session2 starts");
Session session2 = sf.openSession(); // session 2 opened at time t1
tx.commit();
System.out.println("Cache Contains? " + sf.getCache().containsEntity(Organization.class, 421L));

session1.close(); // reloaded in the cache at time t2, after the flush happens
System.out.println("session1 ends");
Transaction tx2 = session2.beginTransaction();
Organization orgFromSession2 = (Organization) session2.get(Organization.class, 421L); // should be from the database
System.out.println(orgFromSession2.getOrgName());
session2.close();
System.out.println("session2 ends");


Logs:

session1 starts
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
session2 starts
Hibernate: update organization set version=?, org_name=? where org_id=? and version=?
Cache Contains? true
session1 ends
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
org21338618256152
session2 ends


Note:

a. In session 1, a select + update.

b. In session 2, again a select, since this session was opened before the cache was updated with the entry.

c. Note that the cache did contain the item.

The actual summary is already present in the code comments, but let's reiterate. The below is true for both nonstrict-R/W and R/W caches:

  • Whenever a session starts, a timestamp is added to it (ST).
  • Whenever an item is loaded in the cache, a timestamp is added to it (CT).
  • Now if ST < CT, meaning the session is older than the cached item, then if we look for the cached item in this older session, Hibernate will NOT look in the cache. Instead it will always look in the database, and consequently re-load the item in the cache with a fresh timestamp.

The above was demonstrated in demo 3, where we started the 2nd session before the cache was reloaded with the item. If you check the output, the item was actually present in the cache at the time of querying, and yet the database was consulted.
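
Distilled into pseudo-Java (a conceptual sketch of the rule, not Hibernate's actual source):

// ST = session start timestamp, CT = timestamp at which the item was cached
boolean canUseCachedItem(long sessionStartTimestamp, long cacheEntryTimestamp) {
    // a session may only use entries cached BEFORE it was opened;
    // if ST < CT, Hibernate skips the cache and re-reads from the database
    return sessionStartTimestamp >= cacheEntryTimestamp;
}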

Summary of diff. between NS R/W and R/W

For NonStrict-Read-Write

• There’s no locking ever.

• So, when the object is actually being updated in the database, at the point of committing (till the database completes the commit), the cache has the old object while the database has the new object.

• Now, if any other session looks for the object, it will look in the cache and find the old object (DIRTY READ).

• However, as soon as the commit is complete, the object will be evicted from the cache, so that the next session which looks for the object will have to look in the database.

• If you execute the same code (Demo 1) with the diagnostic System.out.println("Cache Contains? " + sf.getCache().containsEntity(Organization.class, 421L)); before and after the tx.commit(), you will find that before the commit the cache contained the entry; after the commit, it's gone. This forces session2 to look in the database and reload the data into the cache. (A sketch follows below.)
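
A minimal sketch of that diagnostic run (same sf and id 421L as in the demos):

// Demo 1 (nonstrict-read-write) with the containsEntity diagnostic added
Session session1 = sf.openSession();
Transaction tx = session1.beginTransaction();
Organization org = (Organization) session1.get(Organization.class, 421L);
org.setOrgName("org2" + System.currentTimeMillis());
session1.save(org);
System.out.println("Before commit: "
        + sf.getCache().containsEntity(Organization.class, 421L)); // true
tx.commit(); // the committed update evicts the cached entry
System.out.println("After commit: "
        + sf.getCache().containsEntity(Organization.class, 421L)); // false
session1.close();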

So, nonstrict read/write is appropriate if you don't require absolute protection from dirty reads, or if the odds of concurrent access are so slim that you're willing to accept an occasional dirty read. Obviously, the window for a dirty read is the time during which the database has already been updated but the object has NOT yet been evicted from the cache.

For Read-Write

• As soon as somebody tries to update/delete an item, the item is soft-locked in the cache, so that if any other session tries to look for it, it has to go to the database.

• Now, once the update is over and the data has been committed, the cache is refreshed with the fresh data and the lock is released, so that other transactions can now look in the CACHE and don’t have to go to the database.

• So, there is no chance of Dirty Read, and any session will almost ALWAYS read READ COMMITTED data from the database/Cache.

Differences between R/W and Transactional Cache

Below adapted from http://clustermania.blogspot.sg/2009/07/with-read-write-hibernate-2nd-level.html (supplemented with my own code examples below).

We have to understand that since R/W is asynchronous, the updating of the cache happens outside the tx (i.e., in the after-completion phase of the transaction). What happens if something goes wrong there?

How is cache correctness maintained in the read-write caching strategy during transaction commit, rollback and failures (the so-called value proposition of a transactional cache)? Here is how:

1. When the application commits the transaction, the cache entry is soft-locked, thereby deflecting all reads for this entry to the DB.

2. Then changes are flushed to DB and transactional resources are committed.

3. Once the transaction is committed (i.e. reflected inside the DB), in the after-completion phase the cache entry is updated and unlocked. Any other transaction starting after the update timestamp can now read the cache entry contents, since the lock has been released.

This is what happens in different stages of transaction completion/failure –

  • So anytime in the lag between 2 & 3 (i.e. when the DB and cache are out of sync), you are using the DB to read the latest state, since the cache is still soft-locked.
  • If the transaction is rolled back, the cache entry remains locked, and later reads from the DB refresh the cache entry state.
  • What if the node making the transactional change fails between steps 2 & 3 (i.e. the transaction is committed to the DB but not to the cache) and the cache state is preserved (e.g. in a clustered cache)? Is my cache left corrupted? Not really.

Since the cache entry is locked, other transactions keep reading from the DB directly. Later, Hibernate times out the locked cache entry, its contents are refreshed with the database state, and the cache entry is again available for read/write operations.

Do you still need a transactional cache that either integrates with hibernate native transactions or JTA?

All a JTA transaction cache guarantees is cache state visibility across transactions and recoverability if any of the transaction phases fails.

With read-write you are guaranteed to read the correct state all the time. If the cache entry state gets inconsistent because of any failure in the transaction commit phase, it is guaranteed to recover with the correct state. This is all that a transactional cache guarantees, but at a higher cost (especially when reads outweigh writes).

Hibernate's read-write cache strategy makes a smart decision about reading from the cache or the database based on the cache entries. Anytime the cache cannot guarantee the correct contents, the application is deflected to the DB.

What are the caveats? We will test them below in our code sample.

  • Read-write cache might compromise the repeatable-read isolation level if an entity is read from the cache and later its contents are evicted from the 1st level (session) cache. If the transaction reads the same entry again from the DB later and in the meantime another transaction has updated the entry state, the current transaction will get a different state than what it read earlier.

Note: this should occur only if the session cache contents are flushed; otherwise, once an entry is read from the 2nd level cache/DB, every subsequent read in the same transaction will get the state from the session cache, thereby guaranteeing the same state again and again. How many people really flush the session cache?

  • Cache entries might expire while in lock mode. In lock mode each entry is assigned a timeout value, and if the update doesn't unlock the entry within the specified timeout, the entry might be unlocked forcefully (this is done to avoid any permanent fallout of a cache entry from the cache, e.g. when a node fails before unlocking the entry). A genuinely delayed transaction might create a very small window where the cache contents are stale and other transactions are reading the old state. The cache entry timeout is a cache provider property and might be tunable if the provider supports it.

Note: for this to occur, the update has to be delayed, the read has to occur after the timeout, and moreover the stale window is minuscule. So the majority of applications are safe anyway.

Finally, one word of caution would be:

  • For entity types that are mostly updated and see concurrent reads and updates, the read-write caching strategy may not be very useful, as most reads will be deflected to the database.

Ok, let’s now put the caveats to test.

Testing the 1st caveat: repeatable reads might be compromised. What actually is a repeatable read? It means that if, within a transaction, you read a row at time T1 and you read it again at time T2 (T2 > T1), the row shouldn't have changed. One important thing for us to remember is that Hibernate always looks for the object in the session first (1st level cache), and then in the 2nd level cache.

Java code:

System.out.println("session1 starts");
Session session1 = sf.openSession();
Transaction tx = session1.beginTransaction();
Organization orgFromSession1 = (Organization) session1.get(Organization.class, 96783514L);
session1.evict(orgFromSession1);
// loaded in the previous step and then evicted from the session
System.out.println("Cache Contains? " + sf.getCache().containsEntity(Organization.class, 96783514L));
System.out.println("1:" + orgFromSession1.getOrgName());
// loaded from 2nd level, not session cache

{
    System.out.println("session2 starts");
    Session session2 = sf.openSession(); // session 2 opened at time t2
    Transaction tx2 = session2.beginTransaction();
    Organization orgFromSession2 = (Organization) session2.get(Organization.class, 96783514L);
    // should be from the 2nd level cache
    orgFromSession2.setOrgName("org " + System.currentTimeMillis());
    session2.save(orgFromSession2);
    tx2.commit(); // cache updated with new entry
    System.out.println("inner " + orgFromSession2.getOrgName());
    session2.close();
    System.out.println("session2 ends");
}
System.out.println("Cache Contains? " + sf.getCache().containsEntity(Organization.class, 96783514L));

// we load the row again -- from the database this time, since this session began before the cache update
orgFromSession1 = (Organization) session1.get(Organization.class, 96783514L);
System.out.println("2:" + orgFromSession1.getOrgName());
tx.commit();
session1.close();


If you look at the above code, you will see the following pattern:

  • Session 1 loads an object (thereby also into the 2nd level cache) and then removes it from the session using evict(). Note that it's still present in the 2nd level cache, but has been removed from the session cache.
  • Session 2 updates the same object, retrieving it from the 2nd level cache, hence no DB queries. Once the update completes, the cache is refreshed with the new data.
  • Session 1 tries to read the same entity again, and this time it refers to the 2nd level cache as the entity has been evicted from the 1st level. Remember that the object is present in the 2nd level cache, but since this session started earlier, it will refer to the database for the object. Thus the object loaded in step a differs from this one, and hence there are no repeatable reads. Note that the cache did contain the item, but the database was still consulted.

Logs:

session1 starts
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?

Cache Contains?true
1:org 1339490676715

session2 starts
Hibernate: update organization set version=?, org_name=? where org_id=? and version=?
inner org 1339490694572
session2 ends

Cache Contains?true
Hibernate: select organizati0_.org_id as org1_1_0_, organizati0_.version as version1_0_, organizati0_.org_name as org3_1_0_ from organization organizati0_ where organizati0_.org_id=?
2:org 1339490694572


Right, so we have analyzed the various caching strategies available.

To summarize:

  • Read-only cache can do only reads and inserts; it cannot perform updates/deletes. Fastest performing.
  • Nonstrict read-write cache doesn't employ any locks, ever, so there's always a chance of dirty reads. However, it ALWAYS evicts the entry from the cache, so any subsequent sessions always refer to the DB.
  • Read-write cache employs locks, but in an asynchronous manner: first the insert/update/delete occurs within the tx, during which the cache entry is soft-locked and other sessions have to refer to the database. Once the tx is completed, the lock is released and the cache is updated (outside the transaction). In some cases, repeatable reads might be compromised.
  • Transactional caches update the database and the cache within the same transaction, so the cache is always in a consistent state with respect to the database.
  • For entity types that are mostly updated and see concurrent reads and updates, the read-write caching strategy may not be very useful, as most reads will be deflected to the database.

Collection Caching

Till now, we have discussed caching at the individual entity level. We can also cache collections. Recall that collection caching follows the same steps as delete():

i.e. lock() -> evict() -> release()

Collections are not cached by default, and have to be cached explicitly, like below (a verification sketch follows the mapping):

<set name="volSets" cascade="all" inverse="true" batch-size="10">
	<cache usage="read-write" />

	<key column="org_id" not-null="true" />
	<one-to-many class="com.spring.model.Volunteer" />
</set>
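
A quick way to verify the collection actually lands in the cache (a sketch; getVolSets() is the assumed getter for the volSets property):

Session session = sf.openSession();
Organization org = (Organization) session.get(Organization.class, 421L);
Hibernate.initialize(org.getVolSets()); // force the lazy collection to load (and be cached)
session.close();

// the collection role = fully qualified owning class + "." + property name
System.out.println(sf.getCache()
        .containsCollection("com.spring.model.Organization.volSets", 421L)); // true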


Removing Entities and Collections from 2nd level cache

Whenever you pass an object to save(), update() or saveOrUpdate(), and whenever you retrieve an object using load(), get(), list(), iterate() or scroll(), that object is added to the internal cache of the Session.

When flush() is subsequently called, the state of that object will be synchronized with the database. If you do not want this synchronization to occur, or if you are processing a huge number of objects and need to manage memory efficiently, the evict() method can be used to remove the object and its collections from the first-level cache.

session.evict(orgFromSession1);
session.evict(orgFromSession1.getVolSets());

To evict all cached Organization objects from the 2nd level cache, we can evict the whole entity region (session.evict() itself only takes an entity instance, not a class):

sf.getCache().evictEntityRegion(Organization.class);

To remove a specific entity or collection from the SessionFactory-level (2nd level) cache:

sf.getCache().evictEntity(Organization.class, 421L); // entity
sf.getCache().evictCollection("com.spring.model.Organization.volSets", 421L); // collection

where sf is the SessionFactory. To evict all volunteer sets rather than one, drop the identifier and use sf.getCache().evictCollectionRegion("com.spring.model.Organization.volSets") instead.

Note: the collection role is the fully qualified owning class name plus the collection property name.


Query Cache

As mentioned earlier, we need to enable the below property in our hibernate.cfg.xml

<property name="hibernate.cache.use_query_cache">true</property>

This setting creates two new cache regions:

  • org.hibernate.cache.StandardQueryCache, holding the cached query results
  • org.hibernate.cache.UpdateTimestampsCache, holding timestamps of the most recent updates to queryable tables. These are used to validate the results as they are served from the query cache.

Note:

UpdateTimestampsCache region should not be configured for expiry at all. Note, in particular, that an LRU cache expiry policy is never appropriate.

Recall that the query cache does not cache the state of the actual entities; it caches only identifier values and results of value type. For this reason, the query cache should always be used in conjunction with the second-level cache for those entities expected to be cached as part of a query result (just as with collection caching).

But even then, individual queries are not cached by default; caching has to be enabled explicitly for both HQL and Criteria queries by calling setCacheable(true) on the Query:

Criteria c2 = session.createCriteria(Organization.class).setCacheable(true);

List volList = session.createQuery("from Volunteer vol join vol.regions").setCacheable(true).list();
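
If you want the cached results in their own region, so that they can be evicted independently (the region name below is my own example), setCacheRegion() can be chained in:

List volList = session.createQuery("from Volunteer vol join vol.regions")
        .setCacheable(true)
        .setCacheRegion("volunteerQueries") // named region instead of StandardQueryCache
        .list();

// later, evict just that query region from the 2nd level cache
sf.getCache().evictQueryRegion("volunteerQueries");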

Well, that’s it then! Whew! That was a long one indeed! If you have followed thus far, I would be delighted to hear your opinions on this post.


Posted: June 14, 2012 in Techilla

Great post on Performance monitoring using Hibernate


Performance monitoring is, by its very nature, a slippery slope. Once you start looking for inefficiencies in code, it’s easy to get carried away wanting to optimize every last line for supreme efficiency. It’s thus important to balance the value of an optimization versus the time spent investigating the proper fix, the effort and difficulty required to implement it, and its importance relative to other things on the roadmap.

Every environment, every team, and every piece of software is different – so I won’t dare try to formulate any hard and fast rules for what is an appropriate or inappropriate amount of optimization. Suffice it to say, before any decisions can be made as to whether an optimization should be implemented, the nature of the problem must be understood. Below I will show how to use functionality built into Hibernate to help understand where the performance issues in your application…


Today, we are going to build a small EJB 3 application in Glassfish v3.

Why EJB3?

Coding in EJB3 is almost as simple as it gets. EJB has undergone some major, intuitive simplifications in coding style from release 2.x to 3.x.

Gone are the days of cumbersome home interfaces, remote interfaces, deployment descriptors and checked exceptions in SessionBean implementations.

EJB3 supports annotation-based coding and dependency injection. While we can (and will) do JNDI lookups to get hold of an EJB/resource, DI takes away the need to do so.

While the deployment descriptors (XML based) can still be used, we are going to develop a simple application which will be wholly annotation based.

Why Glassfish?

Glassfish is probably the easiest application server out there. It's so intuitive that it almost feels like a web server. We will eventually use the same application in a Weblogic server (the big daddy), but Glassfish is great because you can start it up and test concepts in almost no time.

Let's then put the two together and build a small EJB3 application and deploy it in Glassfish v3.

We will be doing this in Windows and we shall use Eclipse as our IDE.

Pre-requisites

Download the zip file –  glassfish 3x.zip

Our version : glassfish-3.1.2.

Startup of Glassfish

Go to the bin directory after unzipping and key in the following (or start up asadmin.bat):

asadmin start-database   (only if you really need Derby to start)
asadmin start-domain

Open http://localhost:4848/asadmin to test the installation.

Log in using the user ID admin and password adminadmin. This will validate the installation.

Coding our EJBs

We only need a POJO and a POJI (plain old Java interface), and then we will add some simple annotations to magically turn the POJO into an EJB. We will be using a stateless session bean here.

@Remote
public interface PlaceAuctionItem {
	void placeAuctionItem();
}


@Stateless(name="PlaceAuctionItem", mappedName="ejb/PlaceAuctionItem")
public class PlaceAuctionItemBean implements PlaceAuctionItem {

	@Override
	public void placeAuctionItem() {
		System.out.println("saving the AuctionItem ");
		// save the auctionItem in the database
	}
}

Annotations:

@Stateless: marks the session bean as stateless

mappedName: would be used to do JNDI lookup from the client

@Remote: tells that this is the remote interface.

Once done, we need to deploy the EJB to Glassfish

EJB Deployment in Glassfish

  • Right-click and export the project as a jar file (let's name it test-ejb.jar) and deploy it to

                glassfish-3.1.2\glassfish3\glassfish\domains\domain1\autodeploy\test-ejb.jar

  • Glassfish auto-redeploys the jar file, so every time you change anything in the EJBs you have to re-export the jar (obviously), but you don't have to restart the server.

[Image: exporting the EJB jar from Eclipse into the Glassfish autodeploy folder. ActionBazaar is our project name in Eclipse.]

Once we have done this, we will create the client, which will be a simple Java class to test our EJBs.

EJB Client

import java.util.Properties;
import javax.ejb.EJB;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class TestStatelessSessionBeans {

	@EJB
	private PlaceAuctionItem placeAuctionItem;

	public void mimicPlaceAuctionItem() {
		placeAuctionItem.placeAuctionItem();
	}

	public TestStatelessSessionBeans() {
		super();
		try {
			Properties props = new Properties();
			props.put(Context.INITIAL_CONTEXT_FACTORY,
					"com.sun.enterprise.naming.SerialInitContextFactory");
			props.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
			// glassfish default port value will be 3700
			props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");

			InitialContext ctx = new InitialContext(props);
			this.placeAuctionItem = (PlaceAuctionItem) ctx.lookup("ejb/PlaceAuctionItem");
		} catch (NamingException e) {
			e.printStackTrace();
		}
	}
}

public class TestModule {
	public static void main(String args[]) {
		new TestStatelessSessionBeans().mimicPlaceAuctionItem();
	}
}

Some common errors

• @EJB would normally mark this field for DI by the container; however, we are not deploying the client in Glassfish, hence we need to do the JNDI lookup ourselves.

• We have to use the properties mentioned while instantiating the InitialContext to prevent errors like:

Lookup failed for ‘ejb/PlaceAuctionItem’ in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.url.pkgs=com.sun.enterprise.naming, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl} [Root exception is javax.naming.NameNotFoundException: PlaceAuctionItem not found]

Or

javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file: java.naming.factory.initial

Libs required by project

  • appserv-rt.jar
  • gf-client.jar
  • javaee.jar

But take note that gf-client.jar must not be copied into your workspace. It needs to be referenced from the Glassfish server location; it acts as a wrapper for the other Glassfish client libraries.

Now, if we run the client and look at server.log under domains/domain1/logs, we get the following statement printed from our stateless session bean:

[#|2012-06-06T01:20:12.346+0800|INFO|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=20;_ThreadName=Thread-2;|saving the AuctionItem |#]

Next up will be deploying an ear into Glassfish (which shall contain this EJB and a servlet to test our DI).

Inheritance vs Aggregation

Posted: May 22, 2012 in Techilla

Inheritance vs aggregation, composition vs inheritance, is-a or has-a? Call it by any name, but this quandary is something which has baffled many design studies.
The other day I was designing a POC with Hibernate, and I got bitten by the performance bug. Having prior experience with Hibernate, I know what a beast it can be in terms of performance, especially with complex inheritance mappings or associations.

As is often the case with me, one thing led to another and I veered down the Google way 🙂. To paraphrase many eminent people who have pretty much pushed me towards aggregation, let me provide a gist below:

An inheritance relationship can always be rewritten as an association, as follows.

instead of


public class A {}
public class B extends A {}

we can use

public class B {
	private A a;
}

We should use aggregation if part of the interface (the methods available for us to override) is not used, or has to be changed to avoid an illogical situation.

We only need to use inheritance if we need almost all of the functionality without major changes. And when in doubt, we should always use aggregation.

Also, another line of thought was:

1. Whatever design strategy you choose, your choice will likely be the wrong one at some point because of changing requirements.
2. Changing that choice is difficult once you've made it.
3. Inheritance tends to be the worse choice as it's more constraining, and hence we should go for aggregation.

To quote a discussion between Bill Venners and Erich Gamma:

Bill Venners: That extra flexibility of composition over inheritance is what I’ve observed, and it’s something I’ve always had difficulty explaining. That’s what I was hoping you could capture in words. Why? What is really going on? Where does the increased flexibility really come from?

Erich Gamma: We call this black box reuse. You have a container, and you plug in some smaller objects. These smaller objects configure the container and customize the behavior of the container. This is possible since the container delegates some behavior to the smaller thing. In the end you get customization by configuration. This provides you with both flexibility and reuse opportunities for the smaller things. That's powerful. Rather than giving you a lengthy explanation, let me just point you to the Strategy pattern. It is my prototypical example for the flexibility of composition over inheritance. The increased flexibility comes from the fact that you can plug in different strategy objects and, moreover, that you can even change the strategy objects dynamically at run-time.

Bill Venners: So if I were to use inheritance…

Erich Gamma: You can’t do this mix and match of strategy objects. In particular you cannot do it dynamically at run-time.
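
To make that concrete, a minimal Strategy sketch of my own (not from the interview): the container delegates to a pluggable object that can be swapped at run-time, which a fixed subclass relationship cannot do.

interface SortStrategy {                    // the pluggable "smaller object"
    void sort(int[] data);
}

class QuickSort implements SortStrategy {
    public void sort(int[] data) { /* quicksort */ }
}

class InsertionSort implements SortStrategy {
    public void sort(int[] data) { /* insertion sort */ }
}

class Sorter {                              // the "container"
    private SortStrategy strategy;

    Sorter(SortStrategy strategy) { this.strategy = strategy; }

    void setStrategy(SortStrategy s) { this.strategy = s; } // swap at run-time
    void sort(int[] data) { strategy.sort(data); }          // delegate
}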

So, the writing’s on the wall for me. Choose aggregation over inheritance whenever you can, and hopefully, you will live to design another project before they find out that the design strategy implemented needs to be changed. 🙂

All the info above is drawn from inheritance-vs-aggregation and "Design principle", so all credit goes to the original authors; I have just summarized it here.

Logging a Spring + Hibernate application is especially useful during the initial stages, when you want to see whether all Hibernate configurations are fine. It might also be useful at later stages, to look at the generated query log and do some optimization. However, logging combined Spring and Hibernate data to the same log file is not very straightforward (until you read this post, that is :). It's primarily because Spring and Hibernate use different logging technologies.

Spring uses the Jakarta Commons Logging API (JCL), which is mandatory on the classpath, or the application context doesn't get loaded.

Hibernate 3.3+ uses SLF4J, the Simple Logging Facade for Java.

I have always used log4j to have a common logging repository for our applications.

Now, for all these to work together and post everything onto the log file, we need the following libraries in our classpath:

  • commons-logging-1.1.1.jar
  • log4j-1.2.13.jar
  • slf4j-api-1.6.1.jar
  • slf4j-log4j12-1.6.4.jar

The last jar is the bridge between SLF4J and log4j. Note, however, that slf4j-log4j* and slf4j-jdk* cannot remain on the classpath at the same time, or they cause binding issues, so look out for that (any such binding issues will be printed in the logs).

Finally, our log4j.xml which will contain the logging info.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/"
	debug="false">

	<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
		<layout class="org.apache.log4j.PatternLayout">
			<param name="ConversionPattern" value="[%d{dd/MM/yy hh:mm:ss:sss z}] %5p %c{2}: %m%n" />
		</layout>
	</appender>

	<appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
		<appender-ref ref="CONSOLE" />
		<appender-ref ref="FILE" />
	</appender>

	<appender name="FILE" class="org.apache.log4j.RollingFileAppender">

		<param name="File" value="C:/log/spring-hib.log" />
		<param name="MaxBackupIndex" value="100" />

		<layout class="org.apache.log4j.PatternLayout">
			<param name="ConversionPattern" value="[%d{dd/MM/yy hh:mm:ss:sss z}] %5p %c{2}: %m%n" />
		</layout>

	</appender>

	<category name="org.hibernate">
		<priority value="DEBUG" />
	</category>

	<category name="java.sql">
		<priority value="debug" />
	</category>

	<root>
		<priority value="INFO" />
		<appender-ref ref="FILE" />
	</root>

</log4j:configuration>

Note: org.hibernate will show the full-blown list of boilerplate stuff, so it might be worthwhile to trim it down to org.hibernate.SQL.

Also, if you put hibernate.show_sql=true in your configuration file, it will print the SQL statements to the console as well.

There is a very nice pictorial representation here which shows the various bindings among the different loggers; it pointed me in the right direction. Thanks, Espen! :)

Uploading a file or an image and retrieving it is an extremely frequent activity and doing so via Mybatis/Spring must be a breeze as there are so many users who must be doing it!

So I thought, six hours before I started to code it. Unfortunately, the Mybatis user manual has zero references to blob/clob insert/delete, and searching on Google didn't seem to go very far. There were pointers, but no complete code examples. In the end, it proved to be exceedingly simple and I went to sleep a happy man. Here's the full example, to save half a day for somebody else.

Mybatis version : mybatis-3.1.0
Spring version : 3.1.1
Mybatis-Spring bundle : 1.1.1
Database : Oracle 10.2.0.4
Reqd. libs : commons-fileupload-*.jar , commons-io*.jar , ojdbc14.jar and obviously the reqd. spring and mybatis jars.

Database columns in EMPLOYEE table:

FILENAME VARCHAR2 (100 Byte),
FILECONTENTTYPE VARCHAR2 (100 Byte),
FILEDATA BLOB.

The actual file data is stored in a BLOB field.

Note: not a CLOB field, as we are mainly trying to host images here. (Even if some other file type is uploaded, storing it as a BLOB helps retrieve it exactly as it was stored, without any character encoding being applied, as would happen with a CLOB.)

We are having a Spring MVC application, so we will augment our web-config.xml or whatever config file the DispatcherServlet listens to with the below code:


<bean id="multipartResolver"
    class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
	<property name="maxUploadSize">
		<value>10000000</value>
	</property>
</bean>

The file upload size is in bytes.

Model object / Form data

@Alias(value="emp")
public class Employee {
       private CommonsMultipartFile fileData;
       private byte[] fileDataBytes;
       private String fileName;
       private String fileContentType;
       //getters/setters
}

Our jsp will host the below:

[Image: Spring form with a file upload field]

If we don't put enctype="multipart/form-data" on the form, we will not be able to typecast the uploaded file into a CommonsMultipartFile, and hence we will not be able to retrieve the contentType and the filename. Without the enctype we can still retrieve the uploaded file as byte[], but in order to retrieve the file later we would have to store the content type as well, so the easier option is to use CommonsMultipartFile.
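
On the controller side, the multipart data binds straight onto the Employee form object. A hedged sketch (the handler name, mapping and service/view names are mine, not from the original post):

@RequestMapping(value = "/employeeSave", method = RequestMethod.POST)
public String saveEmployee(@ModelAttribute("employee") Employee employee) {
	CommonsMultipartFile file = employee.getFileData();
	// without enctype="multipart/form-data", these would not be populated
	System.out.println(file.getOriginalFilename() + " / " + file.getContentType());
	employeeService.updateEmployee(employee); // assumed service method
	return "redirect:/employeeList";          // assumed view name
}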

Store or Upload the file or image in Mybatis


<update id="updateEmployee" parameterType="com.spring.model.Employee">
	update employee
	<set>
<if test="fileData.originalFilename != null">filename = #{fileData.originalFilename,jdbcType=VARCHAR},</if>
		<if test="fileData.ContentType != null">fileContentType =					#{fileData.contentType,jdbcType=VARCHAR},</if>
		<if test="fileData.bytes != null">fileData = 							#{fileData.bytes},
</if>
	</set>
	where
	empId = #{empId}
</update>

Note that we don't set a jdbcType on the blob parameter (fileData.bytes), nor do we use any type handlers, as many have suggested.

Also, note that we use the original CommonsMultipartFile object to retrieve the fileContentType, filename and fileData.

That’s it for storing the file.

Retrieve the uploaded file/image using Mybatis

Now, for retrieving it back, we will use the below query:

<select id="getUploadedFileForEmployee" parameterType="Long"
		resultType="emp">
		select empId,fileName,fileContentType,fileData as
		fileDataBytes
		from
		employee where
		empId=#{empId}

</select>

That’s it. We have successfully retrieved the file along with its name and contenttype. Now, we shall see how we can display it / download the file.

@Controller
public class EmployeeController {

	@Autowired
	private EmployeeBaseService employeeService;

	@Autowired
	private UploadedObjectView uploadedObjectView;

	public EmployeeBaseService getEmployeeService() {
		return employeeService;
	}
	// other getters, setters

	@RequestMapping(value = "/employeeDownloadFile")
	public ModelAndView downloadFile(@RequestParam("empId") long empId) {
		Map model = new HashMap();
		Employee emp = employeeService.getUploadedFileForEmployee(empId);
		model.put("data", emp.getFileDataBytes());
		model.put("contentType", emp.getFileContentType());
		model.put("filename", emp.getFileName());
		return new ModelAndView(uploadedObjectView, model);
	}
}
For the save of the employee along with its data and a detailed explanation of the annotations, please refer here.

Ok, so we are done with our controller; but since the file to be downloaded can be of different types, we construct a generic UploadedObjectView and populate it with the file data.

public class UploadedObjectView extends AbstractView {
	//To redirect to another page, with inline text.
	protected void renderMergedOutputModel1(Map model,
			HttpServletRequest request, HttpServletResponse response) throws Exception {
		  byte[] bytes = (byte[]) model.get("data");
	      String contentType = (String) model.get("contentType");
	      response.setContentType(contentType);
	      response.setContentLength(bytes.length);
	      ServletOutputStream out = response.getOutputStream();
	      out.write(bytes);
	      out.flush();
	}
	//For a download option
	@Override
	protected void renderMergedOutputModel(Map model,
			HttpServletRequest request, HttpServletResponse response) throws Exception {
		byte[] bytes = (byte[]) model.get("data");
		String contentType = (String) model.get("contentType");
		response.addHeader("Content-disposition","attachment; filename="+model.get("filename"));
		response.setContentType(contentType);
		response.setContentLength(bytes.length);
		ServletOutputStream out = response.getOutputStream();
		out.write(bytes);
		out.flush();
	}
}

We have 2 methods above:

If we override with the 1st method (rename renderMergedOutputModel1 to renderMergedOutputModel), the output is rendered inline on the page as text / image.

For a totally generic implementation, the overriding method 2 is safer, as it provides a download option.

That’s it folks :)! I hope this could be a help to somebody.

One common error which I have encountered in Mybatis while uploading large files was :

SQL state [72000]; error code [1013]; ORA-03111: break received on communication channel

This error is due to the query timeout being exceeded.

In mybatis-config.xml (more info here), I changed the value from 10 to 100 to resolve this issue:

<setting name="defaultStatementTimeout" value="100" />

Great post here by Juergen Hoeller if you need further inputs or just anything about Spring in particular. Thanks Juergen!

Goal of this session

  • Mybatis Mapper xmls and interfaces creation
  • Mybatis MapperFactoryBean to retrieve Mybatis SqlSessions which are threadsafe.
  • Mybatis MapperScannerConfigurer to automatically wire the mapper interfaces
  • Mybatis SqlSessionDaoSupport and SqlSessionTemplate
  • Spring Annotated Controllers
  • Spring example of get, post methods to retrieve and update an object.

Ok, so we have done our configuration changes here.

We have created our datasource, SqlSessionFactoryBean and Mybatis configuration file, and referred to our Mybatis mapper xmls. But we haven't yet created the model object or the mapper xmls.

Step 1:  Model Object Creation : We will create the Employee object first.

public class Employee {
	protected long empId;
	protected String firstname;
	protected String lastname;
	protected String email;
	protected String telephone;
	protected String birthday;

          // we will see these properties later
	private CommonsMultipartFile fileData;
	private byte[] fileDataBytes;
	private String fileName;
	private String fileContentType;
          //getters and setters for all attributes.
	}

Step 2: Creating the Mybatis interfaces and mapper xmls.

Let's start with the interfaces. We had 2 interfaces, BaseMapperInterface and EmployeeMapperInterface. Of these, BaseMapperInterface is a marker interface.

We will go straight to the EmployeeMapperInterface. There are multiple ways to do it, so I will show the legacy way first.

package com.mybatis.dao;

public interface EmployeeMapperInterface extends BaseMapperInterface {
	List<Employee> getEmployeeWithId(Long id);
	int insertEmployee(Employee e);
	int updateEmployee(Employee e);
	void deleteEmployee(Long id);
	// other data access methods
}

Now, we will define the EmployeeMapper.xml

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper
PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.mybatis.dao.EmployeeMapperInterface">
<select id="getEmployeeWithId" parameterType="Long" resultType="emp">
		select empId,firstname,lastname,emailId as email,birthday,fileName from
		employee where
		1=1
		<if test="_parameter != null">
			AND empId=#{empId}
		</if>
		order by empId
	</select>
<insert id="insertEmployee" parameterType="com.spring.model.Employee">
		<!--
			the interface returns int, not long otherwise typecast errors were
			thrown
		-->
		<selectKey keyProperty="empId" resultType="Long" order="BEFORE">
			select Hibernate_Sequence.nextval as empId from dual
  		</selectKey>

		insert into employee(empId,firstname,lastname,emailId,filename,fileContentType,fileData )
		values(#{empId},#{firstname},#{lastname},#{email},#{fileData.originalFilename,jdbcType=VARCHAR},
			#{fileData.contentType,jdbcType=VARCHAR},#{fileData.bytes} )

	</insert>

   <update id="updateEmployee" parameterType="com.spring.model.Employee">
		update employee
		<set>
			<if test="firstname != null">firstname= #{firstname,jdbcType=VARCHAR},</if>
			<if test="lastname != null">lastname =#{lastname,jdbcType=VARCHAR},</if>
			<if test="email != null">emailid = #{email,jdbcType=VARCHAR},</if>
			<if test="fileData.originalFilename != null">filename = #{fileData.originalFilename,jdbcType=VARCHAR},</if>
			<if test="fileData.ContentType != null">fileContentType =	#{fileData.contentType,jdbcType=VARCHAR},</if>
			<if test="fileData.bytes != null">fileData = #{fileData.bytes},</if>
		</set>
		where
		empId = #{empId}
	</update>

	<delete id="deleteEmployee" parameterType="Long">
		delete from employee
		where empId = #{empId}
	</delete>

Now, if you look at the above xml, you will see select, insert, update and delete tags. Don't worry, we will go through each of them later in this tutorial. For now, we shall see the implementation of this interface.

Step 3: Legacy way of defining the implementation of the interface.
Remember MapperFactoryBean from the 1st post? It's not required if we follow this legacy approach, and the changed mapper-config.xml will look like below:

<bean id="employeeMapper" class ="com.mybatis.dao.EmployeeMapperImpl">
   <property name="sqlSessionFactory" ref="sqlSessionFactory" />
</bean>

Obviously, we should still follow the inheritance strategy defined earlier; this direct referencing of sqlSessionFactory by employeeMapper is just for demonstration purposes.

I say legacy because with MapperFactoryBean this is no longer required; we can work only with the interfaces. However, life is not always so beautiful, and we have to look at old code in order to debug, maintain or improve.


package com.mybatis.dao;

import java.util.List;

import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.support.SqlSessionDaoSupport;

import com.spring.model.Employee;

public class EmployeeMapperImpl extends SqlSessionDaoSupport implements EmployeeMapperInterface {

	@Override
	public void deleteEmployee(Long id) {
		getSqlSession().delete("com.mybatis.dao.EmployeeMapperInterface.deleteEmployee", id);
	}

	@Override
	public List<Employee> getEmployeeWithId(Long id) {
		return getSqlSession().selectList("com.mybatis.dao.EmployeeMapperInterface.getEmployeeWithId", id);
	}

	@Override
	public int insertEmployee(Employee e) {
		return getSqlSession().insert("com.mybatis.dao.EmployeeMapperInterface.insertEmployee", e);
	}

	@Override
	public int updateEmployee(Employee e) {
		// getSqlSession() actually hands back a SqlSessionTemplate, so this cast is safe
		SqlSessionTemplate tm = (SqlSessionTemplate) getSqlSession();
		// update() returns the number of affected rows, not an id
		return tm.update("com.mybatis.dao.EmployeeMapperInterface.updateEmployee", e);
	}
}

Now, before we start explaining, let's veer slightly off the MVC pattern and look at how a standalone Mybatis application would work. For a detailed explanation, please visit here.

At the core of all Mybatis operations is an SqlSessionFactory, which opens and closes SqlSessions. An SqlSession is not thread safe and, in a web application, needs to be scoped to the HTTP request cycle.
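To make that concrete, here is a minimal sketch of plain, non-Spring Mybatis usage; the config file name and the class wrapper are assumptions for illustration:

package com.mybatis.standalone;

import java.io.InputStream;
import java.util.List;

import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

import com.spring.model.Employee;

public class StandaloneMybatisExample {

	public static void main(String[] args) throws Exception {
		// build the factory once per application from the core Mybatis config
		InputStream in = Resources.getResourceAsStream("mybatis-config.xml");
		SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(in);

		// one SqlSession per unit of work; it is NOT thread safe
		SqlSession session = factory.openSession();
		try {
			List<Employee> emps = session.selectList(
					"com.mybatis.dao.EmployeeMapperInterface.getEmployeeWithId", 1L);
			System.out.println(emps);
			session.commit();
		} finally {
			session.close(); // always close to release the underlying connection
		}
	}
}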

For Spring-Mybatis, there are 3 ways to retrieve a session:

i. Extend SqlSessionDaoSupport, as we are doing above, and retrieve the session using getSqlSession(). This returns a thread-safe SqlSession which can participate in our Spring transactions. The session can then be used to perform insert/update/delete/selectOne/selectList operations.

ii. We can also use SqlSessionTemplate directly. We can instantiate it in database-config.xml:

<bean id="sqlSession" class="org.mybatis.spring.SqlSessionTemplate">
 <constructor-arg index="0" ref="sqlSessionFactory" />
</bean>

As can be seen above, it is constructed from the sqlSessionFactory; the equivalent in code is:

SqlSession sqlSession = new SqlSessionTemplate(sqlSessionFactory);

To summarize, SqlSessionTemplate should always be used instead of SqlSession because the base MyBatis SqlSession cannot participate in Spring transactions and is not thread safe. Switching between the two classes in the same application can cause data integrity issues.
Also, in option i., when we call getSqlSession() we actually get back a SqlSessionTemplate, so the statement below works fine:

SqlSessionTemplate tm= (SqlSessionTemplate) getSqlSession();

iii. Finally, the MapperFactoryBean.
Instead of using SqlSessionDaoSupport or SqlSessionTemplate directly in the DAOs, we can use MapperFactoryBean to inject interface-based DAOs into our service classes.


<bean id="baseMapper" class="org.mybatis.spring.mapper.MapperFactoryBean">
	<property name="mapperInterface" 									value="com.mybatis.dao.BaseMapperInterface" />
	<property name="sqlSessionFactory" ref="sqlSessionFactory" />
</bean>

<bean id="employeeMapper" parent="baseMapper">
<property name="mapperInterface" value="com.mybatis.dao.EmployeeMapperInterface" />
</bean>

The MapperFactoryBean handles creating an SqlSession as well as closing it. If there is a Spring transaction in progress, the session will also be committed or rolled back when the transaction completes. Finally, any exceptions will be translated into Spring DataAccessExceptions.
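With this in place, the hand-written EmployeeMapperImpl becomes unnecessary and a service can consume the injected interface directly. A minimal sketch, with a made-up class name for illustration:

public class EmployeeLookupService {

	// injected by Spring via the employeeMapper bean defined above
	private EmployeeMapperInterface employeeMapper;

	public void setEmployeeMapper(EmployeeMapperInterface employeeMapper) {
		this.employeeMapper = employeeMapper;
	}

	public Employee findEmployee(Long id) {
		List<Employee> result = employeeMapper.getEmployeeWithId(id);
		return (result == null || result.isEmpty()) ? null : result.get(0);
	}
}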

MapperScannerConfigurer and automatically marking the interfaces

Finally, those of you who don't want to spend lots of time wiring up each and every interface bean can use the MapperScannerConfigurer, which automatically scans the given packages and registers the mapper interfaces.
1. Define the MapperScannerConfigurer and mark the basePackage.


<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
     <property name="basePackage" value="org.mybatis.spring.sample.mapper" />
</bean>

2. The mapper interfaces referred to in the service classes need to be autowired, since their references will no longer be present in mapper-config.xml.

3. Finally, sqlSessionFactory also needs to be configured (contrary to the Mybatis manual), else the mappers will not work and will throw this exception:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'employeeMapperInterface' defined in file [C:\anirban\Work_Ani\SpringIntegration\WebRoot\WEB-INF\classes\com\mybatis\dao\EmployeeMapperInterface.class]: Invocation of init method failed; nested exception is java.lang.IllegalArgumentException: Property 'sqlSessionFactory' or 'sqlSessionTemplate' are required
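One way to satisfy that requirement is shown below. On recent mybatis-spring versions, the sqlSessionFactoryBeanName property (which takes the bean name as a string and resolves it lazily) is the safer choice; this sketch assumes our factory bean is named sqlSessionFactory:

<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
	<property name="basePackage" value="com.mybatis.dao" />
	<!-- bean name, resolved lazily; this also plays well with property placeholders -->
	<property name="sqlSessionFactoryBeanName" value="sqlSessionFactory" />
</bean>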

Note: There was an issue with PropertyPlaceholderConfigurer not working when used in conjunction with mybatis-spring-3.1.0. It is resolved by moving to the mybatis-spring-3.1.1 bundle.

That's it folks for the Mybatis stuff for this session. Now, let's jump to the Spring-related changes. Recall that:

Request -> Spring MVC controller -> accesses Service facade -> accesses the DAO xml/interface.

We are done with the last leg (DAO xml/interface), and we will now construct the Service facade.
As always, to encourage good coding practices, we will create an interface for the EmployeeService.

public interface EmployeeBaseService {
	public Employee getEmployeeById(long empId);
	public long saveEmployee(Employee employee) throws Exception;
	public void deleteEmployee(long empId);

}

Now, the actual implementation of the service:

@Service
public class EmployeeService implements EmployeeBaseService {

	@Autowired
	EmployeeMapperInterface employeeMapper;
	// getters, setters

	@Override
	public Employee getEmployeeById(long empId) {
		// retrieve from the database
		List<Employee> empList = employeeMapper.getEmployeeWithId(empId);
		if (empList != null && !empList.isEmpty()) {
			return empList.get(0);
		}
		return null;
	}

	@Override
	public long saveEmployee(Employee employee) {
		long empId = 0L;
		if (employee.getEmpId() == 0) {
			// insertEmployee() returns the affected row count; the generated key
			// is populated on the employee object by the <selectKey> element
			employeeMapper.insertEmployee(employee);
			empId = employee.getEmpId();
		} else {
			employeeMapper.updateEmployee(employee);
			empId = employee.getEmpId();
		}
		return empId;
	}

	@Override
	public void deleteEmployee(long empId) {
		employeeMapper.deleteEmployee(empId);
	}
}

Annotated Employee Controller

@Controller
public class EmployeeController {

	@Autowired
	private EmployeeBaseService employeeService;

	@RequestMapping(value = "/employeeHome", method = RequestMethod.GET)
	public String displayHomePage(@RequestParam("empId") long empId, ModelMap m) {
		System.out.println("employeeHome>> " + empId);
		// retrieve the employee with this id
		Employee employee = null;
		if (empId == 0) {
			employee = new Employee();
		} else {
			employee = employeeService.getEmployeeById(empId);
		}
		m.addAttribute("employee", employee);
		System.out.println(employee);
		return ProjectConstants.SECURE_FOLDER + "employeeHome";
	}

	@RequestMapping(value = "/employeeEdit", method = RequestMethod.POST)
	public String saveEmployee(@ModelAttribute("employee") Employee emp, BindingResult br) throws Exception {
		if (br.hasErrors()) {
			return "employeeHome";
		}
		System.out.println(emp);

		employeeService.saveEmployee(emp);
		// redirect so that a browser refresh does not re-submit the form
		return "redirect:employeeHome.html?empId=" + emp.getEmpId();
	}
}

Ok, let's go over it. The controller contains a reference to the EmployeeBaseService, and this is autowired.
First, displayHomePage(), which is accessed via a GET.

Points to note:

1. @RequestParam("") is used to bind a request parameter to a method parameter. If a RequestParam is specified, the URI must contain it; otherwise, it can be made optional as @RequestParam(value="", required=false).

2. The handler has a signature of the form: String anyMethodName(@RequestParam("<param>") <datatype> methodArg, ModelMap model).

3. ModelMap.addAttribute("paramName", <param>). This paramName is important: it has to be set in the view as the modelAttribute, else the jsp will not be rendered and the following exception will come:
Neither BindingResult nor plain target object for bean name 'command' available as request attribute

4. Returns the view name, so essentially there would be an employeeHome.jsp lying under ProjectConstants.SECURE_FOLDER.

5. @RequestMapping(value = "/employeeHome", method=RequestMethod.GET) means this method will be invoked for a call like:
http://localhost:8080/springProj/employeeHome.html?empId=92178033
Note the request parameter being passed.

Next, saveEmployee(), which is accessed via a POST.

  1. The @ModelAttribute annotation on the method parameter is used to retrieve the command object after the form has been filled out.
  2. Returns a redirect to the URL that is picked up by the GET method; the redirect prevents an accidental re-update when the user refreshes the screen.
  3. @RequestMapping(value="/employeeEdit", method=RequestMethod.POST) will invoke this method, meaning there must be a form whose action points to employeeEdit.html.

Finally, let's complete this tutorial by showing the view (*.jsp) which will render the model.

Spring ModelAttribute JSP

Features to be noted in this jsp (a sketch follows below):

  1. modelAttribute "employee" is necessary, as mentioned earlier; it must match the attribute name set in the controller.
  2. <form:form method="post" action="employeeEdit.html" ...>, so basically this will look like an employee personal-details update form, where the employee is redirected to his/her view page after the update.
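The original post showed this JSP as a screenshot, so here is a minimal sketch of what it might look like. The field names mirror the Employee model used above, but the exact markup is an assumption, not the original code:

<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
<html>
<body>
	<!-- modelAttribute must match the "employee" attribute set by the controller -->
	<form:form method="post" action="employeeEdit.html" modelAttribute="employee">
		<form:hidden path="empId" />
		First name: <form:input path="firstname" /><br/>
		Last name: <form:input path="lastname" /><br/>
		Email: <form:input path="email" /><br/>
		<input type="submit" value="Save" />
	</form:form>
</body>
</html>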


Right that’s it.