Sunday, May 23, 2010
Transaction Management in Hibernate
In this article I will explain the common techniques for dealing with the Session and transactions in Hibernate applications.
You may go through my previous post about transaction management.
Refer to the Hibernate reference documentation and the "Transactions and Concurrency" chapter for more information.
Unit of Work
A unit of work groups data access operations. We usually refer to the Hibernate Session as a unit of work because the scope of a Session is exactly that, in almost all cases. (The Session is also many other things, for example, a cache and a primary API.) To begin a unit of work you open a Session. To end a unit of work you close a Session. Usually you also flush a Session at the end of a unit of work to execute the SQL DML operations (UPDATE, INSERT, DELETE) that synchronize the in-memory Session state with the database.
A Session also executes SQL queries, whenever the developer triggers a query through the API or through loading on demand (lazy loading). Alternatively, think of the Session as a gateway to your database, a map of managed entity instances that are automatically dirty checked, and a queue of SQL DML statements that are created and flushed by Hibernate automatically.
Transactions
Transactions also group data access operations; in fact, every SQL statement, be it a query or DML, has to execute inside a database transaction. There can be no communication with a database outside of a database transaction. (Note that there are such things as read-only transactions, which can be used to improve cleanup time in a database engine if it is not smart enough to optimize its own operations.)
One approach is the auto-commit mode, where every single SQL statement is wrapped in a very short transaction. This mode is never appropriate for an application, but only for ad-hoc execution of SQL with an operator console. Hibernate disables auto-commit mode, or expects the environment (in J2EE/JEE) to disable it, as applications execute not ad-hoc SQL but a planned sequence of statements. (There are ways to enable auto-commit behavior in Hibernate, but it is by definition slower and less safe than regular transactions. If you want to know more about auto-commit mode, read http://community.jboss.org/wiki/Non-transactionaldataaccessandtheauto-commitmode.)
The right approach is to define clear transaction boundaries in your application by beginning and committing transactions either programmatically, or if you have the machinery to do this, declaratively (e.g. on service/command methods). If an exception occurs the transaction has to be rolled back (or declaratively, is rolled back).
The scope of a unit of work
A single Hibernate Session might have the same scope as a single database transaction.
This is the most common programming model, used for the session-per-request implementation pattern. A single Session and a single database transaction implement the processing of a particular request event (for example, an HTTP request in a web application). Never use the session-per-operation anti-pattern! (There are extremely rare exceptions when session-per-operation might be appropriate; you will not encounter them if you are just learning Hibernate.)
Another programming model is that of long conversations, e.g. an application that implements a multi-step dialog, for example a wizard dialog, to interact with the user in several request/response cycles.
One way to implement this is the session-per-request-with-detached-objects pattern. Persistent objects are considered detached during user think-time and have to be reattached to a new Session after they have been modified.
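A minimal sketch of the reattachment step (a fragment, assuming the usual org.hibernate imports, a SessionFactory named factory, and a detached instance named detachedItem that was kept somewhere, for example in the HttpSession, during user think-time):
Session session = factory.openSession();
Transaction tx = session.beginTransaction();
session.update(detachedItem);   // reattach; Hibernate schedules an UPDATE if it was modified
tx.commit();                    // flush and commit this request's transaction
session.close();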
The session-per-conversation pattern is recommended, however. In this case a single Session has a bigger scope than a single database transaction and may span several database transactions. Each request event is processed in a single database transaction, but flushing of the Session is delayed until the end of the conversation and the last database transaction, to make the conversation atomic. The Session is held in disconnected state, with no open database connection, during user think-time. Hibernate's automatic optimistic concurrency control (with versioning) is used to provide conversation isolation.
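A rough sketch of session-per-conversation (assumptions: the Session is stored between requests, for example in the HttpSession, connections are released after each transaction, and factory is the usual SessionFactory):
Session session = factory.openSession();
session.setFlushMode(FlushMode.MANUAL);   // delay all flushing until the conversation ends
// each request event: reuse the same Session, one transaction per request
Transaction tx = session.beginTransaction();
// ... load and modify versioned entities ...
tx.commit();                              // commit, but nothing is flushed yet
// last request of the conversation
Transaction last = session.beginTransaction();
session.flush();                          // write all queued changes in one final transaction
last.commit();
session.close();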
Hibernate supports several convenience APIs that make implementation of all transaction and conversation strategies easier, with any transaction processing system you might deploy on.
Transaction demarcation with JTA
Hibernate works in any environment that uses JTA; in fact, we recommend using JTA whenever possible as it is the standard Java transaction interface. You get JTA built in with all J2EE/JEE application servers, and each Datasource you use in such a container is automatically handled by a JTA TransactionManager. But this is not the only way to get JTA: you can use a standalone implementation (e.g. JOTM) in any plain JSE environment. Another example is JBoss Seam, which comes bundled with a demo application that uses an embeddable version of the JBoss JCA/JTA/JNDI services and hence provides JTA in any deployment situation.
Hibernate can automatically bind the "current" Session to the current JTA transaction. This enables an easy implementation of the session-per-request strategy with the getCurrentSession() method on your SessionFactory:
UserTransaction tx = (UserTransaction) new InitialContext()
        .lookup("java:comp/UserTransaction");
try {
    tx.begin();
    // Do some work
    factory.getCurrentSession().load(...);
    factory.getCurrentSession().persist(...);
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();
    throw e; // or display error message
}
The advantage of the built-in support should become clear as soon as you write non-trivial applications: you can separate the transaction demarcation code from your data access code. The "current session" refers to a Hibernate Session that Hibernate binds, behind the scenes, to the transaction scope. A Session is opened when getCurrentSession() is called for the first time and closed when the transaction ends. It is also flushed automatically before the transaction commits. You can call getCurrentSession() as often and anywhere you want as long as the transaction runs. To enable this strategy in your Hibernate configuration:
• set hibernate.transaction.manager_lookup_class to a lookup strategy for your JEE container
• set hibernate.transaction.factory_class to org.hibernate.transaction.JTATransactionFactory
See the Hibernate reference documentation for more configuration details.
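If you prefer programmatic configuration, the same two properties can be set on the Configuration object. A hedged sketch (the JBoss lookup class is just one example of a container-specific lookup strategy):
Configuration cfg = new Configuration().configure();   // reads hibernate.cfg.xml
cfg.setProperty("hibernate.transaction.factory_class",
        "org.hibernate.transaction.JTATransactionFactory");
cfg.setProperty("hibernate.transaction.manager_lookup_class",
        "org.hibernate.transaction.JBossTransactionManagerLookup");
SessionFactory factory = cfg.buildSessionFactory();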
This does not mean that all Hibernate Sessions are closed when a transaction is committed! Only the Session that you obtained with sf.getCurrentSession() is flushed and closed automatically. If you decide to use sf.openSession() and manage the Session yourself, you have to flush() and close() it. So a less convenient alternative, without any "current" Session, is this:
UserTransaction tx = (UserTransaction) new InitialContext()
        .lookup("java:comp/UserTransaction");
Session session = factory.openSession();
try {
    tx.begin();
    // Do some work
    session.load(...);
    session.persist(...);
    session.flush();
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();
    throw e; // or display error message
} finally {
    session.close();
}
If you manage the Session yourself, code is more difficult to layer. You can't easily move data access operations into a different layer than transaction and Session demarcation.
Transaction demarcation with plain JDBC
If you don't have JTA and don't want to deploy it along with your application, you will usually have to fall back to JDBC transaction demarcation. Instead of calling the JDBC API directly, it is better to use Hibernate's Transaction API and the built-in session-per-request functionality:
try {
    factory.getCurrentSession().beginTransaction();
    // Do some work
    factory.getCurrentSession().load(...);
    factory.getCurrentSession().persist(...);
    factory.getCurrentSession().getTransaction().commit();
} catch (RuntimeException e) {
    factory.getCurrentSession().getTransaction().rollback();
    throw e; // or display error message
}
Because Hibernate can't bind the "current session" to a transaction, as it does in a JTA environment, it binds it to the current Java thread. It is opened when getCurrentSession() is called for the first time, but in a "proxied" state that doesn't allow you to do anything except start a transaction. When the transaction ends, either through commit or roll back, the "current" Session is closed automatically. The next call to getCurrentSession() starts a new proxied Session, and so on. In other words, the session is bound to the thread behind the scenes, but scoped to a transaction, just like in a JTA environment. This thread-bound strategy works in every JSE application - note that you should use JTA and a transaction-bound strategy in a JEE environment (or install JTA with your JSE application, this is a modular service).
To enable the thread-bound strategy in your Hibernate configuration:
• set hibernate.transaction.factory_class to org.hibernate.transaction.JDBCTransactionFactory
• set hibernate.current_session_context_class to thread
This does not mean that all Hibernate Sessions are closed when a transaction is committed! Only the Session that you obtained with sf.getCurrentSession() is flushed and closed automatically. If you decide to use sf.openSession() and manage the Session yourself, you have to close() it. So a less convenient alternative, without any "current" Session, is this:
Session session = factory.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();
    // Do some work
    session.load(...);
    session.persist(...);
    tx.commit(); // Flush happens automatically
} catch (RuntimeException e) {
    if (tx != null) tx.rollback();
    throw e; // or display error message
} finally {
    session.close();
}
If you manage the Session yourself, code is more difficult to layer. You can't easily move data access operations into a different layer than transaction and Session demarcation.
Transaction demarcation with EJB/CMT
Our goal really is to remove any transaction demarcation code from the data access code:
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void doSomeWork() {
    // Do some work
    factory.getCurrentSession().load(...);
    factory.getCurrentSession().persist(...);
}
Instead of coding the begin, commit, and rollback of your transactions into your application you could use a declarative approach. For example, you might declare that some of your service or command methods require a database transaction to be started when they are called. The transaction ends when the method returns; if an exception is thrown, the transaction will be rolled back. The Hibernate "current" Session has the same scope as the transaction (flushed and closed at commit) and is internally also bound to the transaction. It propagates into all components that are called in one transaction.
Declarative transaction demarcation is a standard feature of EJB, also known as container-managed transactions (CMT). In EJB 2.x you would use XML deployment descriptors to create your transaction assembly. In EJB 3.x you can use JDK 5.0 annotation metadata directly in your source code, a much less verbose approach. To enable CMT transaction demarcation for EJBs in Hibernate configuration:
• set hibernate.transaction.manager_lookup_class to a lookup strategy for your JEE container
• set hibernate.transaction.factory_class to org.hibernate.transaction.CMTTransactionFactory
Custom transaction interceptors
To remove transaction demarcation from your data access code you might want to write your own interceptor that can begin and end a transaction programmatically (or even declaratively). This is a lot easier than it sounds; after all, you only have to move three method calls (begin, commit, rollback) into a different piece of code that runs every time a request has to be processed. Of course more sophisticated solutions would also need to handle transaction propagation, e.g. if one service method calls another one. Typical interceptors are a servlet filter, or an AOP interceptor that can be applied to any Java method or class.
For an implementation with a servlet filter see Open Session in View.
For an implementation with JBoss AOP see Session handling with AOP.
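As a rough illustration of the servlet-filter variant, a sketch only, assuming the thread-bound "current session" strategy shown above and the HibernateUtil helper shown further below:
import java.io.IOException;
import javax.servlet.*;
import org.hibernate.SessionFactory;

public class TransactionFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        SessionFactory sf = HibernateUtil.getSessionFactory();
        try {
            sf.getCurrentSession().beginTransaction();
            chain.doFilter(request, response);   // process the request inside the transaction
            sf.getCurrentSession().getTransaction().commit();
        } catch (RuntimeException e) {
            sf.getCurrentSession().getTransaction().rollback();
            throw e;
        }
    }

    public void init(FilterConfig config) throws ServletException {}

    public void destroy() {}
}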
Implementing long Conversations
If you'd like to design your application with a session-per-conversation strategy, you need to manage the "current" Session yourself. An example with a servlet filter is shown with the Open Session in View pattern.
Implementing data access objects (DAOs)
Writing DAOs that call Hibernate is trivial. You don't need a framework. You don't need to extend some "DAOSupport" superclass from a proprietary library. All you need to do is keep your transaction demarcation (begin and commit) as well as any Session handling code outside of the DAO implementation. For example, a ProductDAO class has a setCurrentSession() method or constructor, or it looks up the "current" Hibernate Session internally. Where this current Session comes from is not the responsibility of the DAO! How a transaction begins and ends is not the responsibility of the DAO! All the data access object does is use the current Session to execute some persistence and query operations. For a pattern that follows these rules, see Generic Data Access Objects.
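For instance, a DAO that follows these rules might look like this sketch (ProductDAO and Product are illustrative names only, not part of any real library; the Session is handed in from outside):
import org.hibernate.Session;

public class ProductDAO {

    private final Session session;

    public ProductDAO(Session session) {   // the "current" Session is provided by the caller
        this.session = session;
    }

    public Product findById(Long id) {
        // Product is a mapped entity class (not shown here)
        return (Product) session.get(Product.class, id);
    }

    public void makePersistent(Product product) {
        session.saveOrUpdate(product);
    }
}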
What about the SessionFactory?
In the examples above you can see access to the SessionFactory. How do you get access to the factory everywhere in your code? Again, if you run in a JEE environment, or use an embedded service in JSE, you could simply look it up from JNDI, where Hibernate can bind it on startup. Another solution is to keep it in a global static singleton after startup. You can in fact solve both the problem of SessionFactory lookup and Hibernate startup with the same piece of code, a trivial helper class (this is from the tutorial in chapter 1, Hibernate reference documentation):
public class HibernateUtil {

    private static final SessionFactory sessionFactory;

    static {
        try {
            // Create the SessionFactory from hibernate.cfg.xml
            sessionFactory = new Configuration().configure().buildSessionFactory();
        } catch (Throwable ex) {
            // Make sure you log the exception, as it might be swallowed
            System.err.println("Initial SessionFactory creation failed." + ex);
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }
}
A more sophisticated version of HibernateUtil that can also switch automatically between JNDI and a static singleton can be found in the CaveatEmptor demo application.
This is all very difficult, can't this be done easier?
Hibernate can only do so much as a persistence service; managing the persistence service is the responsibility of the application infrastructure, or framework. The EJB3 programming model makes transaction and persistence context management very easy; use the Hibernate EntityManager to get this API. Either run your EJBs inside a full J2EE application server (previews available from several vendors) or in a lightweight embeddable EJB3 container, JBoss Embeddable EJB3, in any Java environment. The JBoss Seam framework has built-in support for automatic context management, including persistence and conversations, with only a few annotations in your source code.
References:
http://community.jboss.org/wiki/Non-transactionaldataaccessandtheauto-commitmode
https://www.hibernate.org/43.html
Thursday, February 11, 2010
Transaction Management
Transactions:
The primary tool for handling concurrency in enterprise applications is the transaction. The word "transaction" often brings to mind an exchange of money or goods. Walking up to an ATM, entering your PIN, and withdrawing cash is a transaction. Paying a $3 toll at a bridge toll booth is another.
Looking at typical financial dealings such as these provides a good definition for the term. First, a transaction is a bounded sequence of work, with both start and endpoints well defined. An ATM transaction begins when the card is inserted and ends when cash is delivered or an inadequate balance is discovered. Second, all participating resources are in a consistent state both when the transaction begins and when the transaction ends.
In addition, each transaction must complete on an all-or-nothing basis. The bank can't subtract from an account holder's balance unless the ATM actually delivers the cash.
A transaction is a complete unit of work. It may comprise many computational tasks, which may include user interface, data retrieval, and communications. Completion of a transaction means either commitment or rollback; either outcome results in a consistent state.
Transaction Properties:
ACID
Software transactions are often described in terms of the ACID properties:
· Atomicity: Each step in the sequence of actions performed within the boundaries of a transaction must complete successfully or all work must roll back. Partial completion is not a transactional concept.
· Consistency: A system's resources must be in a consistent, non-corrupt state at both the start and the completion of a transaction.
· Isolation: The result of an individual transaction must not be visible to any other open transactions until that transaction commits successfully.
· Durability: Any result of a committed transaction must be made permanent. This translates to "Must survive a crash of any sort."
Transaction Concurrency Problems:
If locking is not available and several users access a database concurrently, problems may occur if their transactions use the same data at the same time. Concurrency problems include:
These fall into two broad groups: lost updates and inconsistent reads (dirty reads, non-repeatable reads, phantom reads).
- Lost or buried updates.
- Uncommitted dependency (dirty read).
- Inconsistent analysis (non-repeatable read).
- Phantom reads.
Lost Updates:
Lost updates occur when two or more transactions select the same row and then update the row based on the value originally selected. Each transaction is unaware of other transactions. The last update overwrites updates made by the other transactions, which results in lost data.
For example, two editors make an electronic copy of the same document. Each editor changes the copy independently and then saves the changed copy, thereby overwriting the original document. The editor who saves the changed copy last overwrites changes made by the first editor. This problem could be avoided if the second editor could not make changes until the first editor had finished.
Tx1: -----t1: update ------t3: commit.
Tx2: -------t2: update -------- t4: commit/rollback.
Data updated by Tx1 will be lost.
Uncommitted Dependency (Dirty Read):
Uncommitted dependency occurs when a second transaction selects a row that is being updated by another transaction. The second transaction is reading data that has not been committed yet and may be changed by the transaction updating the row.
For example, an editor is making changes to an electronic document. During the changes, a second editor takes a copy of the document that includes all the changes made so far, and distributes the document to the intended audience. The first editor then decides the changes made so far are wrong and removes the edits and saves the document. The distributed document contains edits that no longer exist, and should be treated as if they never existed. This problem could be avoided if no one could read the changed document until the first editor determined that the changes were final.
Tx1: -----t1: updating---------- t3: (rollback).
Tx2: ----------t2: select---------------- t4: (commit).
Tx2 is working on data that no longer exists.
Inconsistent Analysis (Non-repeatable Read):
Inconsistent analysis occurs when a second transaction accesses the same row several times and reads different data each time. Inconsistent analysis is similar to uncommitted dependency in that another transaction is changing the data that a second transaction is reading. However, in inconsistent analysis, the data read by the second transaction was committed by the transaction that made the change. Also, inconsistent analysis involves multiple reads (two or more) of the same row and each time the information is changed by another transaction; thus, the term non-repeatable read.
For example, an editor reads the same document twice, but between each reading, the writer rewrites the document. When the editor reads the document for the second time, it has changed. The original read was not repeatable. This problem could be avoided if the editor could read the document only after the writer has finished writing it.
Tx1: ---------t2: updating--- t3: (commit).
Tx2: -----t1: select------------------ t4: select---
Tx2 will read different data on second select.
Phantom Reads:
Phantom reads occur when an insert or delete action is performed against a row that belongs to a range of rows being read by a transaction. The transaction's first read of the range of rows shows a row that no longer exists in the second or succeeding read, as a result of a deletion by a different transaction. Similarly, as the result of an insert by a different transaction, the transaction’s second or succeeding read shows a row that did not exist in the original read.
For example, an editor makes changes to a document submitted by a writer, but when the changes are incorporated into the master copy of the document by the production department, they find that new unedited material has been added to the document by the author. This problem could be avoided if no one could add new material to the document until the editor and production department finish working with the original document.
Tx1: ----------t2: insert/delete --- t3: (commit) ------
Tx2: ----t1: select---------- t4: select----------------
Solutions to Transaction Concurrency Problems:
Isolation and Immutability:
The problems of concurrency have been around for a while, and software people have come up with various solutions. For enterprise applications two solutions are particularly important: isolation and immutability.
Transaction Concurrency Control:
Optimistic Concurrency
Optimistic concurrency control works on the assumption that resource conflicts between multiple users are unlikely (but not impossible), and allows transactions to execute without locking any resources. Only when attempting to change data are resources checked to determine if any conflicts have occurred. If a conflict occurs, the application must read the data and attempt the change again.
Pessimistic Concurrency
Pessimistic concurrency control locks resources as they are required, for the duration of a transaction. Unless deadlocks occur, a transaction is assured of successful completion.
A good way of thinking about this is that an optimistic lock is about conflict detection while a pessimistic lock is about conflict prevention.
Both approaches have their pros and cons. The problem with the pessimistic lock is that it reduces concurrency. Optimistic locks allow people to make much better progress, because the lock is only held during the commit. The problem with them is what happens when you get a conflict.
The essence of the choice between optimistic and pessimistic locks is the frequency and severity of conflicts. If conflicts are sufficiently rare, or if the consequences are no big deal, you should usually pick optimistic locks because they give you better concurrency and are usually easier to implement. However, if the results of a conflict are painful for users, you'll need to use a pessimistic technique instead.
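To make the difference concrete, here is a hedged sketch of optimistic conflict detection with a version column, using plain JDBC (the table and column names are invented for illustration; a pessimistic variant would instead issue SELECT ... FOR UPDATE when reading the row, blocking concurrent writers):
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdateExample {

    public void updateBalance(Connection connection, long accountId,
                              BigDecimal newBalance, int versionReadEarlier) throws SQLException {
        PreparedStatement ps = connection.prepareStatement(
                "UPDATE account SET balance = ?, version = version + 1 WHERE id = ? AND version = ?");
        ps.setBigDecimal(1, newBalance);
        ps.setLong(2, accountId);
        ps.setInt(3, versionReadEarlier);   // the version observed when the row was first read
        int updated = ps.executeUpdate();
        ps.close();
        if (updated == 0) {
            // conflict detected: another transaction changed (and re-versioned) the row since we read it
            throw new SQLException("Optimistic lock failure - reload the data and retry");
        }
    }
}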
ANSI transaction isolation levels: [2]
1. Read uncommitted: A system that permits dirty reads is said to operate in read uncommitted isolation. One transaction may not write to a row if another uncommitted transaction has already written to it. Any transaction may read any row, however. This isolation level may be implemented in the database-management system with exclusive write locks.
2. Read Committed: A system that permits unrepeatable reads but not dirty reads is said to implement read committed transaction isolation. This may be achieved by using shared read locks and exclusive write locks.
3. Repeatable Read: A system operating in repeatable read isolation mode permits neither unrepeatable reads nor dirty reads. Phantom reads may occur.
4. Serializable: Serializable provides the strictest transaction isolation. This isolation level emulates serial transaction execution, as if transactions were executed one after another, serially, rather than concurrently. Serializability may not be implemented using only row-level locks. There must instead be some other mechanism that prevents a newly inserted row from becoming visible to a transaction that has already executed a query that would return the row.
Isolation level | Dirty read | Non-repeatable read | Phantom |
Read uncommitted | Yes | Yes | Yes |
Read committed | No | Yes | Yes |
Repeatable read | No | No | Yes |
Serializable | No | No | No |
How exactly the locking system is implemented in a DBMS varies significantly; each vendor has a different strategy. You should study the documentation of your DBMS to find out more about the locking system, how locks are escalated (from row-level, to pages, to whole tables, for example), and what impact each isolation level has on the performance and scalability of your system.
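Through plain JDBC the isolation level is chosen per connection; a minimal sketch (the driver and DBMS must actually support the requested level):
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class IsolationExample {

    public void runSerializably(DataSource dataSource) throws SQLException {
        Connection con = dataSource.getConnection();
        try {
            con.setAutoCommit(false);
            con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            // ... execute statements ...
            con.commit();
        } finally {
            con.close();
        }
    }
}
Hibernate exposes the same setting through the hibernate.connection.isolation configuration property, which takes the numeric values of the java.sql.Connection constants.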
References:
- Addison-Wesley: Patterns of Enterprise Application Architecture by Martin Fowler, David Rice, Matthew Foemmel, Edward Hieatt, Robert Mee, Randy Stafford
- Java Persistence with Hibernate by Christian Bauer and Gavin King
Generic DAO Pattern
In this post I have developed a simple application that shows how the generic DAO pattern can be used.
Following are the listings.
The directory structure is:
com/sample/
    dao/
        book/
        factory/
            hibfactory/
            jpafactory/
        hibimpl/
            book/
        jpaimpl/
            book/
        util/
    daoclient/
        book/
        impl/
            book/
    domain/
Let's first see our domain object.
package com.sample.domain;
public class Book {

    private Long id;
    private String name;
    private String author;

    public Book() {
    }

    public Book(Long id, String name, String author) {
        super();
        this.id = id;
        this.name = name;
        this.author = author;
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getAuthor() {
        return author;
    }

    public void setAuthor(String author) {
        this.author = author;
    }
}
Also develop its hbm.xml mapping file (or use annotations); a rough sketch of the annotation approach follows.
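If you prefer annotations over an hbm.xml file, the mapping might look roughly like this (a sketch only; the table name and the generation strategy are assumptions):
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "BOOK")
public class Book {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String author;

    // constructors, getters and setters as in the POJO above
}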
Generic DAO interface.
package com.sample.dao;

import java.io.Serializable;
import java.util.List;

public interface GenericDAO<T, ID extends Serializable> {

    T findById(ID id, boolean lock);

    List<T> findAll();

    List<T> findByExample(T exampleInstance, String... excludeProperty);

    T makePersistent(T entity);

    void makeTransient(T entity);

    void flush();

    void clear();
}
To use Hibernate as our persistence provider:
package com.sample.dao.hibimpl;
import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import java.util.List;
import org.hibernate.Criteria;
import org.hibernate.LockMode;
import org.hibernate.Session;
import org.hibernate.criterion.Criterion;
import org.hibernate.criterion.Example;
import com.sample.dao.GenericDAO;
import com.sample.dao.util.HibernateUtil;
public abstract class GenericHibernateDAO<T, ID extends Serializable> implements GenericDAO<T, ID> {

    private Class<T> persistentClass;
    private Session session;

    @SuppressWarnings("unchecked")
    public GenericHibernateDAO() {
        this.persistentClass = (Class<T>) ((ParameterizedType) getClass()
                .getGenericSuperclass()).getActualTypeArguments()[0];
    }

    public void setSession(Session s) {
        this.session = s;
    }

    protected Session getSession() {
        if (session == null)
            session = HibernateUtil.getCurrentSession();
        return session;
    }

    public Class<T> getPersistentClass() {
        return persistentClass;
    }

    @SuppressWarnings("unchecked")
    public T findById(ID id, boolean lock) {
        T entity;
        if (lock)
            entity = (T) getSession().load(getPersistentClass(), id, LockMode.UPGRADE);
        else
            entity = (T) getSession().load(getPersistentClass(), id);
        return entity;
    }

    public List<T> findAll() {
        return findByCriteria();
    }

    @SuppressWarnings("unchecked")
    public List<T> findByExample(T exampleInstance, String... excludeProperty) {
        Criteria crit = getSession().createCriteria(getPersistentClass());
        Example example = Example.create(exampleInstance);
        for (String exclude : excludeProperty) {
            example.excludeProperty(exclude);
        }
        crit.add(example);
        return crit.list();
    }

    public T makePersistent(T entity) {
        getSession().saveOrUpdate(entity);
        return entity;
    }

    public void makeTransient(T entity) {
        getSession().delete(entity);
    }

    public void flush() {
        getSession().flush();
    }

    public void clear() {
        getSession().clear();
    }

    /**
     * Use this inside subclasses as a convenience method.
     */
    @SuppressWarnings("unchecked")
    protected List<T> findByCriteria(Criterion... criterion) {
        Criteria crit = getSession().createCriteria(getPersistentClass());
        for (Criterion c : criterion) {
            crit.add(c);
        }
        return crit.list();
    }
}
If you want to use JPA as the provider:
package com.sample.dao.jpaimpl;
import java.io.Serializable;
import com.sample.dao.GenericDAO;
public abstract class GenericJpaDAO<T, ID extends Serializable> implements GenericDAO<T, ID> {
    // Methods specific to the JPA implementation............
}
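One possible way to flesh out this stub, purely as a sketch (it assumes an EntityManager is supplied by the caller; query methods such as findAll() and findByExample() are left to subclasses):
package com.sample.dao.jpaimpl;

import java.io.Serializable;
import java.lang.reflect.ParameterizedType;
import javax.persistence.EntityManager;
import com.sample.dao.GenericDAO;

public abstract class GenericJpaDAO<T, ID extends Serializable> implements GenericDAO<T, ID> {

    private Class<T> persistentClass;
    private EntityManager em;

    @SuppressWarnings("unchecked")
    public GenericJpaDAO() {
        this.persistentClass = (Class<T>) ((ParameterizedType) getClass()
                .getGenericSuperclass()).getActualTypeArguments()[0];
    }

    public void setEntityManager(EntityManager em) {
        this.em = em;
    }

    public T findById(ID id, boolean lock) {
        return em.find(persistentClass, id);   // lock handling omitted in this sketch
    }

    public T makePersistent(T entity) {
        return em.merge(entity);               // merge returns the managed instance
    }

    public void makeTransient(T entity) {
        em.remove(entity);
    }

    public void flush() {
        em.flush();
    }

    public void clear() {
        em.clear();
    }
}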
Using the Hibernate implementation:
package com.sample.dao.hibimpl.book;
import com.sample.dao.book.BookDAO;
import com.sample.dao.hibimpl.GenericHibernateDAO;
import com.sample.domain.Book;
/**
* @author
*
*/
public class BookDAOImpl extends GenericHibernateDAO<Book, Long> implements BookDAO {

    public Book findBook(Long id) {
        /* your HQL query to find the book */
        return null;
    }
}
Now we develop our DAO client. Before developing the client, we develop a factory for obtaining DAOs inside the client.
package com.sample.dao.factory;
import com.sample.dao.book.BookDAO;
import com.sample.dao.factory.hibfactory.HibernateDAOFactory;
import com.sample.dao.factory.jpafactory.JpaDAOFactory;
public abstract class DAOFactory {

    public static enum DAOFactoryType { Hibernate, Jpa };

    public static DAOFactory instance(DAOFactoryType factoryType) {
        DAOFactory factory = null;
        try {
            switch (factoryType) {
            case Hibernate:
                factory = HibernateDAOFactory.class.newInstance();
                break;
            case Jpa:
                factory = JpaDAOFactory.class.newInstance();
                break;
            default:
                factory = HibernateDAOFactory.class.newInstance();
                break;
            }
            return factory;
        } catch (Exception ex) {
            throw new RuntimeException("Couldn't create DAOFactory for : " + factoryType);
        }
    }

    @SuppressWarnings("unchecked")
    public static DAOFactory instance(Class factory) {
        try {
            return (DAOFactory) factory.newInstance();
        } catch (Exception ex) {
            throw new RuntimeException("Couldn't create DAOFactory for : " + factory);
        }
    }

    // Add your DAO interfaces here
    public abstract BookDAO getBookDAO();
}
Develop a factory for each specific provider:
package com.sample.dao.factory.hibfactory;
import com.sample.dao.book.BookDAO;
import com.sample.dao.factory.DAOFactory;
import com.sample.dao.hibimpl.GenericHibernateDAO;
import com.sample.dao.hibimpl.book.BookDAOImpl;
public class HibernateDAOFactory extends DAOFactory {

    @SuppressWarnings("unchecked")
    private GenericHibernateDAO instantiateDAO(Class daoClass) {
        try {
            GenericHibernateDAO dao = (GenericHibernateDAO) daoClass.newInstance();
            return dao;
        } catch (Exception ex) {
            throw new RuntimeException("Can not instantiate DAO: " + daoClass, ex);
        }
    }

    /* (non-Javadoc)
     * @see com.sample.dao.factory.DAOFactory#getBookDAO()
     */
    @Override
    public BookDAO getBookDAO() {
        return (BookDAO) instantiateDAO(BookDAOImpl.class);
    }
}
package com.sample.dao.factory.jpafactory;
import com.sample.dao.book.BookDAO;
import com.sample.dao.factory.DAOFactory;
import com.sample.dao.hibimpl.book.BookDAOImpl;
import com.sample.dao.jpaimpl.GenericJpaDAO;
public class JpaDAOFactory extends DAOFactory {

    @SuppressWarnings("unchecked")
    private GenericJpaDAO instantiateDAO(Class daoClass) {
        try {
            GenericJpaDAO dao = (GenericJpaDAO) daoClass.newInstance();
            return dao;
        } catch (Exception ex) {
            throw new RuntimeException("Can not instantiate DAO: " + daoClass, ex);
        }
    }

    /* (non-Javadoc)
     * @see com.sample.dao.factory.DAOFactory#getBookDAO()
     */
    @Override
    public BookDAO getBookDAO() {
        // NOTE: a real JPA factory would instantiate a JPA-based BookDAO that extends
        // GenericJpaDAO; the Hibernate-based BookDAOImpl is reused here only as a
        // placeholder and would fail the cast in instantiateDAO() at runtime.
        return (BookDAO) instantiateDAO(BookDAOImpl.class);
    }
}
Now our DAO client:
package com.sample.daoclient.book;
import com.sample.domain.Book;
public interface BookManager {

    void addBook(Book book);

    void deleteBook(Long id);

    /* more methods..... */
}
import com.sample.dao.book.BookDAO;
import com.sample.dao.factory.DAOFactory;
import com.sample.dao.factory.hibfactory.HibernateDAOFactory;
import com.sample.daoclient.book.BookManager;
import com.sample.domain.Book;
public class BookManagerImpl implements BookManager {

    private BookDAO bookDAO;

    public BookManagerImpl() {
        /* DAOFactory factory = DAOFactory.instance(DAOFactory.DAOFactoryType.Hibernate); */
        DAOFactory factory = DAOFactory.instance(HibernateDAOFactory.class);
        bookDAO = factory.getBookDAO();
    }

    public BookDAO getBookDAO() { return bookDAO; }

    public void setBookDAO(BookDAO bookDAO) { this.bookDAO = bookDAO; }

    public void addBook(Book book) { bookDAO.makePersistent(book); }

    public void deleteBook(Long id) {
        Book book = new Book();
        book.setId(id);
        bookDAO.makeTransient(book);
    }
}
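A hedged usage sketch of the client, tying it back to the transaction-handling post above (it assumes the HibernateUtil class shown next, a mapped Book entity with a generated identifier, and that BookManagerImpl is importable; BookClient itself is not part of the listings):
import org.hibernate.Session;
import org.hibernate.Transaction;
import com.sample.dao.util.HibernateUtil;
import com.sample.daoclient.book.BookManager;
import com.sample.domain.Book;

public class BookClient {

    public static void main(String[] args) {
        BookManager manager = new BookManagerImpl();
        Session session = HibernateUtil.getCurrentSession();
        Transaction tx = session.beginTransaction();
        try {
            Book book = new Book();
            book.setName("Java Persistence with Hibernate");
            book.setAuthor("Bauer/King");
            manager.addBook(book);   // delegates to bookDAO.makePersistent()
            tx.commit();             // flush and commit the unit of work
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
            HibernateUtil.shutdown();
        }
    }
}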
Finally, here is HibernateUtil:
package com.sample.dao.util;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
public class HibernateUtil {

    private static Configuration configuration = new Configuration();
    private static SessionFactory sessionFactory;
    private static ThreadLocal sessions = new ThreadLocal();

    static {
        try {
            sessionFactory = configuration.configure("hibernate.cfg.xml").buildSessionFactory();
        } catch (Throwable ex) {
            ex.printStackTrace();
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        // Alternatively, you could look up in JNDI here
        return sessionFactory;
    }

    public static void shutdown() {
        // Close caches and connection pools
        getSessionFactory().close();
    }

    public static Session getCurrentSession() {
        Session session = (Session) sessions.get();
        if (session == null || !session.isOpen()) {
            if (sessionFactory == null) {
                rebuildSessionFactory();
            }
            session = (sessionFactory != null) ? sessionFactory.openSession() : null;
            sessions.set(session);
        }
        return session;
    }

    public static void rebuildSessionFactory() {
        try {
            sessionFactory = configuration.configure("hibernate.cfg.xml").buildSessionFactory();
        } catch (Exception e) {
            System.err.println("%%%% Error Creating SessionFactory %%%%");
            e.printStackTrace();
        }
    }

    public static Configuration getConfiguration() {
        return configuration;
    }
}
References:
https://www.hibernate.org/328.html