Some best practices for naming Spring framework context configuration files

Like many other developers, you may be required to work on a pre-existing Spring-based codebase developed over the past couple of years by a team of which few members really know why there is a handful of Spring configuration files scattered here and there, each containing a mixture of different configuration elements like security, AOP, data sources, and service definitions, without any proper naming of the configuration files. That is a pain in the neck to follow and to refactor or refine further. Here I list some best practices around Spring context configuration files which may ease reading and understanding them:

  1. Use a unified and proper naming schema like applicationContext-[server|client|shared]-<Configuration Domain>-<Configuration|Beans>-<Additional Clarifier>.xml, which may seem long, but believe me, it will save you or someone else a fair deal of time later on. The naming schema breaks down as follows:
    • If the codebase runs on both client and server, with some modules shared and some exclusive to each tier, use server and client to distinguish them.
    • The configuration domain can be any of: security, data-access, aop, resources, etc.
    • Use configuration when the file's purpose is to configure a set of services using existing base beans.
    • Use beans when you are defining the base beans that will be used to put together the services for the whole system.
  2. Do not reference any of these files directly in the source code; rather, use one single utility class responsible for returning references to the different groups of configuration files used in different places. Make sure that the methods returning these references use a proper return type such as String[], even if you have just one file for a specific domain or group of configuration files. Spring provides the possibility of composing a configuration file from multiple other files, but I prefer using code as it gives us more flexibility.
  3. Leave a one-paragraph comment describing the purpose of the document rather than leaving it as is and hoping that whoever reads it will understand its purpose from the content or the name; you don't know when someone will look at the code or how s/he will interpret the content. Do not leave room for those who come next to guess where to put new bean definitions; specify it by leaving comments, if not documentation, in the configuration file.
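The utility class from item 2 can be sketched as follows. This is a hypothetical example: the class and method names, and the file names (which follow the schema from item 1), are illustrative, not taken from any real codebase.

```java
// Hypothetical central registry of Spring context file locations.
// Keeping every reference here means renaming or splitting a file
// touches exactly one class instead of many call sites.
final class ContextConfigLocations {

    private ContextConfigLocations() {
        // static utility, not meant to be instantiated
    }

    // Server-tier security configuration. Returned as String[] even
    // though there is currently one file, so callers never change
    // when the domain grows to several files.
    public static String[] serverSecurityConfigs() {
        return new String[] {
            "applicationContext-server-security-configuration.xml"
        };
    }

    // Base data-access beans shared between the client and server tiers.
    public static String[] sharedDataAccessBeans() {
        return new String[] {
            "applicationContext-shared-data-access-beans.xml"
        };
    }
}
```

A caller would then bootstrap a context with, for example, new ClassPathXmlApplicationContext(ContextConfigLocations.serverSecurityConfigs()), without ever spelling a file name outside the utility class.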

Introducing OpenESB: from development to administration and management

The OpenESB project was initiated by Sun Microsystems to develop and deliver a high-performance, feature-rich implementation of Java Business Integration (JBI) under an open-source-friendly license. The basic task of a JBI implementation is connecting different types of resources and applications together in a standard and non-intrusive way. The basic building blocks of an ESB include the bus, a message-driven engine and transit path which delivers each message to the designated binding components. These binding components are the second main set of artifacts in an ESB; their main task is receiving the bus's standard-format messages and delivering them to other software in a format that the particular software can use. The third set of artifacts is the service engines, which can relate the bus to other types of containers like Java EE containers, transform messages before they enter the bus, and perform routing activities over the bus messages.

The main concept behind JBI has been around for quite some time, but in recent years several factors helped this concept evolve further: first, the increasing integration problems; second, the level of SOA acceptance among companies; and finally, the introduction of the JSR-208 specification by the JCP, which provides a common interfacing mechanism for JBI artifacts.

The increasing integration problem is nothing but the urge to integrate different software running in different departments of the same enterprise, or among different partners across enterprises. An ESB implementation can be a non-intrusive enabler for integrating such software with no, or only very small, changes to the software itself.

The concept which helped the ESB become well known and popular is Service-Oriented Architecture (SOA), in which an ESB can play the enabler role because of its modular and loosely coupled nature.

1 Introducing OpenESB

The OpenESB project was initiated in 2005 when Sun Microsystems released a portion of its ESB products under the CDDL license. Sun Microsystems, the user community, and some third-party companies work together on the OpenESB project; however, the main manpower and financial support comes from Sun Microsystems.

The project's goal is the development and delivery of a high-performance, feature-rich, and modular JBI-compliant ESB (the runtime) and all the development-phase tools which accelerate and facilitate developing applications based on OpenESB capabilities and features.

The runtime components use GlassFish application server components like the Metro web services stack, the CLI, the administration console infrastructure, and so on. OpenESB uses the NetBeans IDE as the foundation for the development and design-time artifacts of its runtime components. Design-time functionality is provided in a modular manner on top of the NetBeans module system.

1.1 Introducing JBI

JBI defines a specification for assembling integration components to create integration solutions that enable different pieces of software (external systems) to actively cooperate in providing a solution based on the separate capabilities each of them provides on its own. These integration components are plugged into a JBI environment and can provide or consume service messages through the shared communication channel. The shared communication channel can route messages between different components, and it can also transform the messages in a pre-defined or rule-based way. In addition to transformation, the JBI environment provides various other base services. We call this environment the service bus, or enterprise service bus.

Based on the JBI specification and architecture, a component is the main part of a JBI implementation which users and developers should understand. These components can act as consumers or providers. When a component acts as a consumer, it receives a message from the service bus in a normalized, standard format and then delivers the message to the external system in a format that the external system understands. When a component acts as a provider to the service bus, it receives a message in the format produced by the external system and converts it to normalized XML before sending it into the service bus. We call these components binding components because they bind an external system to the service bus.
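The two directions of a binding component can be sketched with a toy example. Everything below is illustrative, not the real JBI API (javax.jbi.*): it only shows the idea of converting an external format to normalized XML on the way into the bus, and back again on the way out. Here the "external format" is a simple comma-separated line.

```java
// Toy binding component for a hypothetical CSV-speaking external system.
// Into the bus: the native CSV line is converted to normalized XML.
// Out of the bus: normalized XML is converted back to the native format.
class CsvBindingComponent {

    // External format -> normalized XML before the message enters the bus.
    String normalize(String csvLine) {
        StringBuilder xml = new StringBuilder("<message>");
        for (String field : csvLine.split(",")) {
            xml.append("<field>").append(field.trim()).append("</field>");
        }
        return xml.append("</message>").toString();
    }

    // Normalized XML from the bus -> the external system's native format.
    String denormalize(String xml) {
        return xml.replaceAll("</field>\\s*<field>", ",") // field boundaries become commas
                  .replaceAll("<[^>]+>", "")              // strip the remaining tags
                  .trim();
    }
}
```

A real binding component would of course deal with WSDL-described operations and message exchanges rather than raw strings; the point is only the normalize/denormalize symmetry around the bus.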

The JBI environment

JBI defines two types of components which JBI developers can develop and plug into a JBI container. Only these two types are standardized; each vendor certainly has many internal, non-standard interfaces, components, and a specific component model for its internal system. The two types of components which can be plugged into a JBI environment are:

  • Service engines provide logic in the environment, such as transformation, scripting-language support, ETL and CEP support, BPEL, and so on.

  • Binding components are adapters which connect an external system to the service bus. For example, a JMS binding component can connect a JMS message broker to the service bus, and a mail server component can bind an emailing system to the bus, making it possible for other components to send and receive email through the bus.

Figure 1 shows the overall architecture of OpenESB from the specification point of view. The architecture may differ from implementation to implementation.

Figure 1 The OpenESB architectural diagram from JBI perspective

The most important task in the JBI environment is message delivery, and content delivery usually introduces requirements like transaction support, naming support, and so on. The internal square illustrates the overall OpenESB environment, while the external boxes represent systems external to the OpenESB runtime. The message format inside the square is XML, while the format between a binding component and an external system is specific to that external system. Every component, whether service engine or binding component, registers the service it provides with a WSDL descriptor.

1.2 OpenESB versions and editions

OpenESB, like GlassFish, has several concurrent versions which are in active development, stable, or in maintenance mode. OpenESB version 1 is in maintenance mode and the only updates are possible bug fixes; no new features will be added to this version.

OpenESB version 2 is the current active version under development. OpenESB version 3, named Project Fuji, is under active design and development. Through Project Fuji, Sun Microsystems will develop the next version of OpenESB, fully based on a modular system similar to GlassFish version 3.

Like OpenSSO and GlassFish, which have commercially packaged versions supported by Sun Microsystems, there is a commercially packaged and supported distribution of OpenESB named GlassFish ESB. The only difference between OpenESB and GlassFish ESB is that GlassFish ESB is a package which contains the stable components of OpenESB along with Sun Microsystems support behind it.

If you are looking for the latest bug fixes, features, and components, you should go with OpenESB; if you want a stable version, you should go with GlassFish ESB. Both are free and you can use them commercially without paying any license fee. The difference is that Sun provides commercial-level support for GlassFish ESB packages.

All versions of OpenESB including GlassFish ESB, Project Fuji, and OpenESB development builds are available at

1.3 OpenESB Architecture

OpenESB version 2 closely follows the specification and has no extensibility or pluggability beyond the JBI-proposed components. The OpenESB JBI container is tightly integrated with the GlassFish application server to provide the overall running ESB. The integration between GlassFish and OpenESB results in an integrated administration console across all three major administration channels: JMX and AMX, the CLI, and the web-based administration console.

The most important parts of an ESB are its message broker and its web services stack. OpenESB uses the Metro web services stack, proven enough to be adopted by other vendors as their web services stack, and OpenMQ as the message broker. OpenMQ can be considered one of the top message brokering systems in terms of performance and capabilities.

1.4 OpenESB supported standards and features

When it comes to listing an ESB's features and supported protocols and formats, we can group them under five categories: overall performance, reliability and scalability, available service engines, available binding components, and finally the effectiveness of the administration and management channels.

Certainly there are other factors which we cannot discuss deeply here, such as the vendor support level, the quality of the available components and service engines, and benchmarks showing how well the product works. But, to some extent, we can discuss the five primary attributes briefly.


The most important part of an ESB is the messaging system; OpenESB uses OpenMQ as its underlying messaging system. In different benchmarks and use cases, OpenMQ has proved to perform very well.

The other important component affecting an ESB's performance is the web services stack used for the underlying WSDL/SOAP communication. The Metro framework, which forms the underlying web services stack of OpenESB, is very well featured and performs outstandingly. IBM uses parts of Metro in its JDK, BEA uses the Metro stack for its WebLogic application server, and so on.

Available service engines

The functional features of an ESB depend directly on the functionality of its service engines and binding components. OpenESB has a very rich set of service engines, developed either directly by Sun Microsystems or by one of the OpenESB project partners. Table 1 lists the most important service engines along with a description of each.

Table 1 Important service engines available through project OpenESB

Service Engine            | Description
BPEL SE                   | A WS-BPEL 2.0 compliant engine that provides business process orchestration capabilities.
IEP SE                    | An engine for Complex Event Processing (CEP) and Event Stream Processing (ESP).
Java EE SE                | Makes the whole Java EE environment available as a JBI component so that EJBs can communicate with other JBI components through the NMR and vice versa.
Worklist Manager SE       | A JBI-based engine providing task management and human intervention in a business process.
Data Integrator / ETL SE  | Performs extract-transform-load to build data warehouses, or can be used for data migration.
Data Mashup SE            | Provides a single view of data from heterogeneous sources, including static web pages and tabular data exposed as web services. Provides options to join/aggregate/clean data, and to generate a response in a WebRowSet schema.
XSLT SE                   | An engine for applying XSLT transformations with basic orchestration capabilities.

All of these service engines include both runtime and design-time components. Some of them are not yet commercially stable enough to be included in GlassFish ESB, and when required we should download and install them manually. The nightly builds are available at

We will discuss BPEL SE in more details in subsequent sections.

Available binding components

Binding components provide the functionality required to connect an external system to the service bus. There are two terms describing binding components: a binding component acts as a consumer or as a provider. For example, an email component acting as a consumer sends a message it receives through the service bus to an email address, and when it injects incoming emails into the bus it is acting as a provider.

Table 2 lists important binding components available through the OpenESB components project. These components are developed either by Sun Microsystems or by one of its partners. Each binding component has a runtime part and a design-time part; the runtime part is deployed into OpenESB while the design-time part is a NetBeans module which we should install.

Table 2 Important binding components available through project OpenESB



Binding Component | Description
eMail BC          | A binding component for sending and receiving emails
Database BC       | Reads messages from and writes messages to a database over JDBC
HTTP BC           | A JBI component for sending and receiving messages over HTTP
JMS BC            | Receives messages from and sends messages to a remote service through JMS
LDAP BC           | Reads messages from and writes messages to an LDAP server
SAP BC            | Provides functionality for interacting with a SAP R/3 system within a JBI environment
SIP BC            | Integrates media capabilities from communications networks using SIP (Session Initiation Protocol)
FTP BC            | Polls for files on an FTP server, and provides for uploading files to an FTP server
File BC           | Polls for files on the file system, and provides for writing files to the file system


The number of binding components reaches 30, covering a variety of standards and protocols. Table 2 lists the most widely used binding components, regardless of whether they are in beta or stable state. You can find a list of all components on the OpenESB components project site; the list contains all necessary information, such as each component's state and whether it is in maintenance mode or deprecated.

Administration and management

One important feature of a middleware product is the possibility to administer, monitor, and manage it in an effective and performant way. OpenESB follows the same path as GlassFish and provides very profound administration functionality, including a CLI and a web-based administration console. The administration console provides monitoring information for service engines, binding components, and deployed artifacts.

2 Getting OpenESB running

Every large piece of software needs some sort of manual for installing and running it; the manual includes information about compatible architectures, operating systems, and runtime platforms. In our case, we can install OpenESB on any operating system for which a Java 5 runtime environment is available. The supported platforms include Windows, Linux flavors, Solaris flavors, and Mac OS X.

2.1 Downloading and installing

OpenESB is based on GlassFish and therefore has the same packaging policy and schema for different operating systems. In our exploration we will use GlassFish ESB version 2, which is available for download. Make sure that you select the right version for your operating system and underlying architecture.

The GlassFish ESB bundle contains the runtime bits, consisting of JBI components and service engines along with the JBI container itself. The distribution package also contains the development-time components, which include the NetBeans IDE and additional modules supporting the bundled JBI runtime components.

The installation process is quite simple; the installer asks for the installation path and configuration information like the GlassFish instance port, administration credentials, and so on. By now you should be very familiar with the installation process. Before starting, ensure that you have around 600MB of free space in the partition where you want to install the package.

Installing both the GlassFish ESB runtime and the NetBeans IDE inside one parent directory is good practice, as we then have all development-related artifacts in the same place.

2.2 Getting familiar with GlassFish ESB

I assume that we installed GlassFish ESB in a folder named glassfishesb_home; the content of this folder is shown in figure 2.


Figure 2 GlassFish ESB installation directory’s content

The content is very straightforward and clean for such a massive development and deployment system. The glassfish folder includes GlassFish version 2ur2 along with stable versions of the OpenESB container and components, while the netbeans directory contains the NetBeans IDE along with all the modules necessary to support GlassFish ESB functionality at design time.

We can start the server separately by running the start_glassfish_domain1 script for times when we do not need the development environment running. Otherwise we can start the NetBeans IDE by running the start_netbeans script and let NetBeans start the server when required.

3 Developing and testing a simple application

The NetBeans IDE is a very feature-rich IDE for developing Java EE applications, and the inclusion of SOA and composite application development functionality targeting OpenESB makes it a perfect tool for developing composite applications and integration components. In this section we create a very simple BPEL-based application to show how the overall process works.

3.1 The sample scenario

It is important to understand what we want to build and which OpenESB artifacts we want to use to implement the required functionality. We have a portal-like system in which users can add a weather portlet to see the current weather in their home city. We want to store some statistics which will later help us determine which countries and cities have the largest number of users using the weather portlet.

To achieve this goal we use two web services: the first translates the IP address of our client to its location, including country and city, and the second provides us with the current weather conditions. We should somehow weave these services together to achieve the desired result. We also need to log all accesses to these web services, with details of country, city, and date, in a text file or database table, so we can perform statistical analysis and extract the usage share of each country and of each city within it.

We will only use two components: the Database BC and the BPEL SE. With these two components we can achieve a complete solution for our problem, which would otherwise take more time to implement with much less flexibility in terms of applying new and updated business rules.
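Before turning to the tooling, the intended data flow can be sketched in plain Java. This is only an illustration of the orchestration we are about to model in BPEL: the two Function parameters are hypothetical stand-ins for the real web service calls, and the in-memory list stands in for the Database BC.

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of the weather-info flow: IP -> (country, city) -> weather,
// logging date, country, and city for every request.
class WeatherInfoFlow {

    private final Function<String, String[]> ipToLocation;      // stand-in for the IP-to-location service
    private final Function<String[], String> locationToWeather; // stand-in for the weather service
    final List<String> requestLog = new ArrayList<>();          // stand-in for the Database BC

    WeatherInfoFlow(Function<String, String[]> ipToLocation,
                    Function<String[], String> locationToWeather) {
        this.ipToLocation = ipToLocation;
        this.locationToWeather = locationToWeather;
    }

    // One request: resolve the location, log it, then fetch the weather.
    String handle(String ipAddress) {
        String[] loc = ipToLocation.apply(ipAddress); // loc[0] = country, loc[1] = city
        requestLog.add(LocalDateTime.now() + "," + loc[0] + "," + loc[1]);
        return locationToWeather.apply(loc);
    }
}
```

The BPEL process we build below implements exactly this shape: receive an IP, invoke the location service, log the result through the Database BC, invoke the weather service, and reply.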

3.2 Creating the BPEL Module project

Run the NetBeans IDE and switch to the Services view (CTRL+5), expand the Databases node, right-click on the Java DB instance, and select Start Server from the context menu. Expand the Drivers node, right-click on the Java DB (Network) node, and select Connect Using.

When the New Database Connection dialog opens, change the attributes of the connection as follows:

  • Database URL: jdbc:derby://;create=true

  • User: APP

  • Password: APP

Click OK and NetBeans will create a new connection to our database. Expand the connection node, right-click on the Tables node, and select Create Table. Create a table named wRequestLog with the fields listed in table 3.

Table 3 The wRequestLog table’s fields

Field Name | Field Type
rdate      | TIMESTAMP
country    | VARCHAR
city       | VARCHAR

The composite application runs inside the container, and to access our WeatherInfo database we need to create a connection pool and a data source in the GlassFish application server. Create a connection pool for the above database and then create a data source named jdbc/winfo.

To create the connection pool you can use following asadmin command.

create-jdbc-connection-pool --datasourceclassname org.apache.derby.jdbc.ClientDataSource --restype javax.sql.XADataSource --property portNumber=1527:password=APP:user=APP:serverName=localhost:databaseName=WeatherInfo WinfoPool

I am sure that you remember how we used to create data sources, but I provide the corresponding asadmin command in case you have just forgotten the spelling ;-).

create-jdbc-resource --connectionpoolid WinfoPool jdbc/winfo

Now we are ready to take one step further and create the project and its related files, like copies of the WSDL documents, the BPEL process file, and so on.

Run the SOA-enabled NetBeans IDE; from the main menu select File>New>Project, from the categories select SOA, and from the projects list select BPEL Module. Click Next and change the project name to WeatherInfo.

Right-click on the project node in the Projects view (CTRL+1) and select New>Other; a window will open. From the categories select XML, from the file types select External WSDL Document, and click Next. We want to create a WSDL from a URL, so enter the service URL and click Finish. NetBeans will create a reference to the WSDL file along with a local copy, which we may edit to tailor it to our requirements. This web service provides us with the location of a given IP address. The IP2Geo web service is provided by CDYNE Corporation, and I got a trial license for using the web service in this book.

Follow the same procedure and create a new WSDL file from the second service's URL. This web service takes a city name along with the country name and returns the current weather. The Global Weather web service is provided by WebserviceX.

We have two more steps before we are finished with creating our project's basic artifacts. The first artifact is a WSDL descriptor which lets us use the Database Binding Component to record all requests, and the second is the WSDL descriptor which describes the input and output types of our provided service, which is our BPEL process.

To create the database binding component descriptor, right-click on the project node and select New>WSDL Document; the wizard will open and guide you through the process. In the first page of the wizard use the following information:

  • Change the name to DBLOGBC

  • Change the WSDL Type to Concrete WSDL Document; two new attributes will become available.

  • Change the Binding to Database and change the Type to Prepared Statement.

Click Next; on this page we should determine which database connection we want to use, so select the connection URL that we created in the previous step and click Next. In this step we need to enter a valid prepared statement which the binding component will execute against the database. Enter the following snippet as the prepared statement value and then press Discover Parameters.

insert into APP.WREQUESTLOG (rdate,country,city) values (?,?,?)

Change the parameter names according to figure 3, click Next, then click Finish, and we are done defining a database binding component which can integrate into the solution and insert our records into the database.


Figure 3 Database binding component’s prepared statement definition

The OpenESB version 2 design-time modules do not work very well when naming the attributes and operations. We need to rename them manually to have easier mapping and navigation inside the project artifacts later. Open DBLOGBC.wsdl in the source editor and replace all instances of newuntitled_Operation with log_Operation. Also look for jdbc/__defaultDS and replace it with jdbc/winfo. Depending on your NetBeans and OpenESB design-time components, the default data source name may differ, or the wizard may even ask you for the data source name, but you can always manually edit the jdbc:address element of the Database BC's WSDL file to set the correct configuration.

You remember that we changed the parameter names, but your OpenESB design-time modules may fail to apply the naming in the XSD file, and we would then face some ambiguity problems during variable mapping in the next steps. So open DBLOGBC.xsd in the source editor and make the following changes:

  • param1 should change to rDate

  • param2 should change to Country

  • param3 should change to City

Here comes the last step before creating the BPEL process: in this step we define the process communication interface which external clients will use to invoke our process and utilize its service.

Right-click on the project node and select New>WSDL Document. When the wizard opens, change the name to wInfoProcess; for the WSDL type select Concrete WSDL Document; for the binding choose SOAP; for the type attribute select RPC Literal; and press Next. Change the input message part name to IPAddress and the output message part name to weatherInfo.

We are finished with defining all the basic artifacts, and now we can design the BPEL process itself. To do so, right-click on the project node, select New>BPEL Process, change the name to wInfoProcess, and click Finish. NetBeans will open the BPEL process in the BPEL Designer, which is a design-time module of OpenESB.

On the right side of the IDE you can see the components palette, and on the left side the project explorer. First let's create the partner links which connect the BPEL process to the other involved parties: the external web services and our database binding component. The BPEL Designer has three sections, left, right, and middle; the left and right sides are used for partner links. Drag and drop the WSDL files from the project explorer into the design area as listed in table 4. When you drop a WSDL into the Designer's left or right side, a window opens which allows you to change the partner link's attributes and characteristics. We accept the default values for all attributes except the partner link names, which are listed in the table.

Table 4 adding WSDLs to BPEL process designer

WSDL name                       | BPEL designer area | Changes required for partner link
The IP2Geo service WSDL         | Right side         | Change the name to Positioning
The Global Weather service WSDL | Right side         | Change the name to Weather
DBLOGBC.wsdl                    | Right side         | Change the name to Logger
wInfoProcess.wsdl               | Left side          | Change the name to WInfoProcess

By now your designer window should be similar to figure 5. Partner links on the left side are meant to be internal to the BPEL process, and those on the right side are external to it.

Figure 5 The BPEL designer after adding all partner links.

Now we need to add BPEL elements to the process to implement the business logic that we discussed. From top to bottom, we need to add the following elements to the BPEL Designer window. The final view of the designer window after we finish adding elements will be similar to figure 6.

 Figure 6 BPEL Designer view after adding all necessary elements

Table 5 shows the complete list of elements which we should add to the designer. To add an element, drag and drop it from the palette on the right side into the middle area of the designer; the designer shows all possible places for dropping the element by showing an orange circle at each position. To rename an element, simply select it and press F2, or double-click on the element name. To connect an element to one of the partners, click on the element; an envelope will appear, which you drag and connect to the partner link mentioned in table 5.

Table 5 List of required BPEL elements which we should add to the designer window

Element Type               | Element Name         | Partner Link
Receive                    | GetWeatherInfo       | WInfoProcess
Assign                     | AssignIP             |
Invoke                     | InvokeToResolveIP    | Positioning
Assign                     | AssignCityAndCountry |
Flow (Structured Activity) |                      |
Invoke (Inside Flow)       | GetWeather           | Weather
Invoke (Inside Flow)       | LogInformation       | Logger
Assign                     | AssignWeather        |
Reply                      |                      | WInfoProcess

Now we need to add the variables and the variable assignments, and then our work with the process and the process designer is finished. In this step we create all the variables, and then we move on to assigning them to each other. To create the variables:

  • Double-click on GetWeatherInfo and for the input variable click Create; accept the defaults and click OK as many times as required to see the designer window again.

  • Double-click on InvokeToResolveIP and create both the input and output variables the same way you created the variable for GetWeatherInfo.

  • Double-click on GetWeather, make sure that you select GetWeather as the operation, and create the input and output variables as in the previous steps.

  • Double-click on LogInformation and create the input and output variables, accepting the defaults for variable names and other attributes.

  • Finally, double-click on the Reply element and create a variable for the normal response, accepting the defaults for its name and other attributes.

Now we should assign the variables to each other to make the data flow through the process possible. Thankfully, NetBeans comes with a very nice and easy-to-use assignment editor called the Mapper, which we can use to assign variables to each other. Double-click on the AssignIP element and use the following procedure to assign the IP received in the previous step to the input variable of InvokeToResolveIP.

On the left side of the Mapper window select the Output button to ensure that it is filtered for output variables, and on the right side select the Input button to ensure that we are looking at input variables.

Expand WInfoProcessOperationIn on the left side and ResolveIPIn on the right side; make sure you expand them fully. Now, from the left side select IPAddress and connect it to ipAddress on the right side. From the Mapper's top menu bar select String and add a String Literal to the middle section of the Mapper window. Change its value to 0 and connect it to the licenseKey variable on the right side. We are now finished with the first assign element. Switch back to the designer view to continue with the mapping for the AssignCityAndCountry element.

Double-click on the AssignCityAndCountry element and assign the variables as follows:

Let's assign the city name and country name to the GetWeather invoke element. To do so, expand ResolveIPOut on the left side and GetWeatherIn on the right side. From the left side, assign the city and country parameters of ResolveIPOut to the corresponding parameters of GetWeatherIn on the right side. Now we need to assign the same information to our database log operation; therefore, expand Log_operationIn and assign the city and country from the left side to its parameters. For the rDate parameter we should use a Current Date & Time predicate: from the top menu bar select Date And Time, drop a Current Date & Time predicate into the middle section of the Mapper window, and connect the new predicate to the rDate variable of Log_operationIn. We are finished with the second variable assignment, and one last item remains. Figure 7 shows the Mapper window after assigning all the variables.


Figure 7 Mapper window with complete AssignCityAndCountry assignment

One more variable assignment and we are finished with the BPEL process development. Double-click on the AssignWeather element and assign the GetWeatherOut parameter to the WInfoProcessOperationOut parameter. The name may make you think it is an output variable, but believe me, it is an input variable.

3.2 Creating the Composite Application project

A composite application is the project type that we use to deploy a composite application into the JBI container. Select File > New Project; from the categories select SOA, from Projects select Composite Application, and click Next. Change the project name to WinfoCA and click Finish. The composite application is created and is ready for adding JBI modules.

Right-click on the project node, select Add JBI Module, and select the WeatherInfo project. The CASA editor opens, showing the BPEL module without any connections. Right-click on the WinfoCA project node and select Clean and Build. NetBeans will compile the composite application and its included JBI modules. When we issue a build command, NetBeans refreshes the CASA editor, and we can add new modules, change the data flow between components, and so on.

3.4 Deploying and testing sample project

Deploying and testing a composite application is very easy using the well-integrated NetBeans, GlassFish, and OpenESB. To deploy the application, right-click on the WinfoCA node and select Deploy. NetBeans will deploy the assembly to the OpenESB runtime. To create a test case for the composite application, expand the WinfoCA node and right-click on the Test node. A wizard will start that helps us generate the test case.

On the first page the wizard asks for the test case name, which can be WinfoTest; in the second step it asks us to choose which WSDL file we want to test. Expand the WeatherInfo node, select winfoprocess.wsdl, and press Next. Now we should select which operation we want to test; select WInfoProcessOperation and click Finish. We now have a test case created. Expand the test case in the project view and double-click on the Input node to edit the test case input values; doing so opens an XML file in the editor. The XML file is a SOAP message whose parameter values we should change to suit our test requirements. Therefore, change the value of the IPAddress element to one of Google's public IP addresses. Save the file and we are finished with creating the test case.
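For orientation, the generated input file is simply a SOAP envelope wrapping the operation's input message. The sketch below shows only the rough shape: the actual namespace URI and wrapper element names come from the generated WSDL, and the address shown is a documentation placeholder, not a real target.

```xml
<!-- Hypothetical test input; the namespace and wrapper element names
     depend on the generated winfoprocess.wsdl -->
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:win="urn:example:winfoprocess">
  <soapenv:Body>
    <win:WInfoProcessOperation>
      <!-- replace with the IP address you want to resolve -->
      <IPAddress>203.0.113.10</IPAddress>
    </win:WInfoProcessOperation>
  </soapenv:Body>
</soapenv:Envelope>
```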

Right-click on WinfoTest and select Run. The IDE will invoke the operation defined in the WSDL file by passing it the SOAP message we created. The output result will then appear under the Output node of the test case. The output should be information about the current weather in Mountain View in the United States.

You can debug a BPEL process, and generally any JBI application, using NetBeans. The IDE allows you to view the variables during different steps of the BPEL execution. The simplest way to debug the BPEL module is to add the required breakpoints and then execute the test in debug mode.

3.5 OpenESB administration from within NetBeans IDE

NetBeans IDE is very well integrated with OpenESB, not only for developing and testing applications but also for administration and monitoring purposes. All of these functionalities live inside the NetBeans integration adapter for OpenESB.

Press CTRL+5 to open the Servers tab, expand the Servers node, expand the OpenESB server instance, and finally expand the JBI node. Your server view should now be similar to figure 8. Press CTRL+SHIFT+7 to open the Properties window and then select the JBI node. Conveniently, you can change the configuration of the JBI container from within the IDE and continue your work without leaving it or switching between different windows.


Figure 8 The OpenESB node in NetBeans Server view

Expand the Service Engines node and select the sun-bpel-engine node; the Properties view shows the entire configuration and the read-only attributes of the BPEL engine. You can enable monitoring by ticking the checkbox, or you can enable the persistence feature to have your BPEL process recovered after any possible crash.

Right-click on the sun-bpel-engine node and explore which operations you can perform on the engine from the context menu. We can start, stop, upgrade, uninstall, and perform any other lifecycle-related task using the context menu. We can view the different endpoints used in the BPEL processes deployed in the BPEL engine using the Show Endpoints Statistics menu item. This menu item helps us understand whether our services were called during the test period and, if they were called, what the response semantics and quality were.

4 OpenESB administration

The same administration channels we saw for GlassFish are available for OpenESB and GlassFish ESB. OpenESB CLI commands are integrated with the GlassFish CLI utility and follow the same principles, and the web-based administration console uses the GlassFish administration console to present the required administration capabilities.

4.1 The CLI utility

OpenESB CLI administration is completely integrated with GlassFish administration and has the same set of characteristics. The only major difference is the lack of some commands: GlassFish ESB version 2 is based on GlassFish version 2, so any new functionality included in GlassFish version 3 is not present in version 2.

Service assembly administration commands

The JBI container, like the Java EE container, has its own application type, which is called a JBI service assembly or simply an assembly. Usually the overall procedure of creating and deploying an assembly is done using build systems or the IDE, but there are some commands in the OpenESB CLI which let us administer the assemblies. All of the OpenESB CLI commands are remote, with all the characteristics of a remote command, like a default target server, and so on.

Deploying and undeploying a service assembly are the two most basic tasks related to OpenESB administration. In the previous step we developed and deployed an application using NetBeans; now let's undeploy the application and then deploy it using the CLI commands.

To undeploy the application we can use the undeploy-jbi-service-assembly command. For example, to undeploy the WinfoCA composite application we can use:

asadmin> undeploy-jbi-service-assembly WinfoCA

Now we can deploy the application archive using the deploy-jbi-service-assembly command. NetBeans puts the built packages into a directory named dist inside the project directory. Let’s deploy the WinfoCA application from that particular directory.

asadmin> deploy-jbi-service-assembly /opt/article/WinfoCA/dist/

This command deploys the application to the default target server with default attributes like the application name and so on.

To list all deployed service assemblies we can use the list-jbi-service-assemblies command. Now that we have one application deployed into the container, this command should show a list containing WinfoCA, similar to:

asadmin> list-jbi-service-assemblies
WinfoCA

Command list-jbi-service-assemblies executed successfully.

Each service assembly contains one or more service units which connect services, binding components, and service engines, along with any deployment artifacts like XSLT documents, BPEL processes, and so on. To view these service units we can use the show-jbi-service-assembly command. Figure 9 shows how we can use the command to view detailed information for the WinfoCA composite application.

Figure 9 viewing the detailed information for an assembly

We can stop, start and forcefully shutdown a service assembly when required by using the OpenESB provided commands.

  • To stop the WinfoCA service assembly: stop-jbi-service-assembly WinfoCA

  • To start the WinfoCA service assembly if it is stopped: start-jbi-service-assembly WinfoCA

  • To forcefully shut down the WinfoCA service assembly if required: shut-down-jbi-service-assembly WinfoCA

There are more JBI service assembly commands in the OpenESB CLI, which you can discover using the ./asadmin help command.

Components administration commands

There are two major types of JBI components that we may need to deal with: service engines and binding components. These two types are accompanied by another type, named shared library, which contains common library classes for several components. Table 6 shows the important commands in this set along with a small explanation of each one.

Table 6 Important JBI components administration commands

start-jbi-component / stop-jbi-component: These commands can be used to start and stop JBI components such as binding components. Stopping a binding component results in stopping any related service unit.

list-jbi-binding-components / list-jbi-service-engines: Show a list of deployed JBI components.

show-jbi-binding-component / show-jbi-service-engine: Show detailed information about a given JBI component which is deployed in the target container.

install-jbi-component: Installs different JBI components into the target container.

uninstall-jbi-component: Uninstalls a given JBI component from the target server.

shut-down-jbi-component: Shuts down the given component. Any service unit deployed on the target component will shut down consequently.

Although there are many other commands which an administrator may need during daily tasks, we have covered the most important commands in this section. In the next section we will take a look at the web-based administration console features for OpenESB.

4.2 Administration console

One good point about GlassFish and its companion products is the unified administration channels, including the CLI, JMX, and the web-based administration console, which give us a unified look and feel and a familiar environment to administer the entire infrastructure.

All JBI administration capabilities are listed under the JBI node in the navigation tree. When you expand all nodes you see something similar to figure 10. As you can see in figure 10, the JBI node also helps us get a big-picture view of our service units’ deployment on the different service engines and binding components.


Figure 10 OpenESB administration node in the GlassFish administration console

The JBI node itself allows us to configure container settings like logging, general statistics, and general configuration related to JBI artifact installation. Subsequent nodes let us administer the deployed applications, binding components, service engines, and shared libraries.

Service assembly administration

The administration console provides deploy, undeploy, start, stop, and shutdown operations for service assemblies. We can also view monitoring information for a deployed service assembly.

Each service assembly may have one or more service units; we can view the service units included in a service assembly and access some monitoring information about them.

JBI components administration

Installing, uninstalling, starting, stopping, and shutting down a JBI component, which can be either a service engine or a binding component, are provided in the Components node. The administration console also lets us change a component's configuration and its loggers' logging levels, view the descriptor file for the component, and, most importantly, view detailed monitoring information about requests and request processing by each component.

We can view which service units from which service assemblies are deployed on a component and navigate to the service unit or service assembly configuration.

JBI shared libraries administration

For the shared libraries, we can install, uninstall, and view the components that depend on each shared library, to ensure that uninstalling the library will not cause unexpected results in other components.

4.3 Installing new components

The OpenESB project provides many components out of the box, but there are tens of additional components available on the OpenESB website. From the components page we can navigate to each component's wiki page, download the required artifacts, and install them into OpenESB and NetBeans. Another effort is underway to let developers download one single installer for each component and let the installer install all required artifacts, like shared libraries, runtime engines, and design-time NetBeans modules. The latest builds of the installers for all components are available on the same site.

Installing the components using these installers is very straightforward: the installer asks for the NetBeans and OpenESB (GlassFish) directories and completes the installation.

5 Summary

In this article we discussed almost all the basics of OpenESB administration and application development in crash-course fashion. We saw how JBI and OpenESB are related and how OpenESB and GlassFish ESB are similar products with different naming. We developed a sample JBI service assembly utilizing several binding components and service engines, and finally we deployed the application into OpenESB and performed functional testing using the NetBeans IDE.

We discussed how we can install and administer OpenESB. We covered the CLI and the administration console for managing OpenESB, and we learned that the OpenESB administration channels use the same rules and look and feel as GlassFish administration and the GlassFish CLI.

GlassFish Modularity System: How to Extend GlassFish CLI and Web Administration Console (Part 2: Developing Sample Modules)

Extending GlassFish CLI and Administration Console, Developing the sample Modules

Administrators are always looking for more effective, easier-to-use, and less time-consuming tools to act as the interface between them and what they are supposed to administer and manage. GlassFish provides effective, easy-to-access, and simple-to-navigate administration channels that cover all the day-to-day tasks administrators need to perform. But when it comes to administration, administrators usually write their own shell scripts to automate some tasks, use CRON or other schedulers to schedule automatic tasks, and so on, to achieve their own customized administration flow.

By using the GlassFish console it is certainly possible to use shell scripts or CRON to further extend the administration and facilitate daily tasks, but sometimes it is far better to have a more sophisticated and mature way to extend and customize the administration interfaces. GlassFish provides all the necessary interfaces and required services to allow administrators or developers to develop new modules for the GlassFish administration interfaces, to add a new feature or enhance already available capabilities.

GlassFish administration channel extensibility is not only there to let administrators customize the administration interfaces and ease their tasks; it also lets container developers build administration interfaces for their containers that are fully integrated with the existing interfaces and fully compatible with the present administration console look and feel. Using administration console extensibility, developers can develop new administration console commands very easily by utilizing the already available services and dependency injection, which provide them with all the environmental objects they need.

This article covers how we can develop new commands for the CLI console and how we can develop new modules that add new pages and navigation nodes to the web administration console.

1 CLI console extensibility

The GlassFish CLI can be considered the easiest-to-access administration console for experienced administrators, as they are better with the keyboard and prefer automated and scripted tasks to clicking through a web page to see a result. Not only is the GlassFish CLI very rich in commands and features, but its design is also based on the same modularity concepts that the whole application server is built on.

CLI modularity is very helpful for those who want to extend GlassFish by providing a new set of commands for their container, deploying new monitoring probes and providing the corresponding CLI commands for accessing the monitoring information, developing new commands which administrators may require and which are not already in place, and so on.

1.1 Prepare your development environment

This is a hands-on article which involves developing some new modules for the GlassFish application server and deploying them to it; therefore we should be able to compile the modules, which usually use HK2 services internally and are bundled as OSGi bundles for deployment.

Although we could use an IDE like NetBeans to build the sample code, we are going to use a more IDE-agnostic approach, Apache Maven, to ensure that we can easily manage the dependency hurdle and make the sample code simple to import into the IDE of your choice. Download Maven and install it according to its installation instructions; then you are ready to dig further into developing modules.

1.2 CLI modularity and extensibility

We discussed how GlassFish utilizes OSGi and HK2 to provide a modular architecture; one place where this modular architecture shows itself is CLI modularity. We know that all extensibility points in GlassFish are based on contracts and contract providers, which in simple terms are interfaces and interface implementations, along with some annotations to mark and further configure the contract implementations.

We are talking about adding new commands to the CLI dynamically by placing an OSGi bundle in the modules directory of GlassFish, whether the module contains just CLI commands or CLI commands together with web-based administration console pages and a container implementation. There is no predefined list of known commands; commands load on demand, as soon as you try to use one of them. For example, when you issue ./asadmin list-commands, the CLI framework checks for all providers which implement the admin command contract and shows the list by extracting the information from each provider (implementation). CLI commands can use the dependency injection provided by HK2, and each command may also extract some information and provide it to other components which might be interested in using it.

Like all providers in the GlassFish modularity system, we should annotate each command implementation with the @Service annotation to ensure that HK2 will treat the implementation as a service for lifecycle management and service locating. CLI commands, like any other HK2 service, should have a scope which determines how the lifecycle of the command is managed by HK2. Usually we should ensure that each command lives only in the context of its execution and not beyond that context; therefore we annotate each command with a suitable scope like PerLookup.class, which results in a new command instance for each lookup.

All that we need to know for developing CLI commands is summarized in understanding a few interfaces and helper classes. The other required knowledge is around AMX and HK2.

Each CLI command must implement a contract: the org.glassfish.api.admin.AdminCommand interface, which is marked with the org.jvnet.hk2.annotations.Contract annotation and shown in listing 1.

Listing 1 The AdminCommand interface which any CLI command has to implement

@Contract                                           #1
public interface AdminCommand {
    public void execute(AdminCommandContext context);            #2
}

The AdminCommand interface has one method which is called by the asadmin utility when we invoke the associated command. At #1 we define the interface as a contract which HK2 service providers can implement. At #2 we have the execute method, which accepts an argument of type org.glassfish.api.admin.AdminCommandContext, shown in listing 2. This class is a set of utilities which give the developer access to some contextual variables like reporting, logging, and the command parameters, the last of which will be deprecated.

Listing 2 The AdminCommandContext class which is a bridge between inside the command and outside the command execution context

public class AdminCommandContext implements ExecutionContext {

    public ActionReport report;
    public final Properties params;
    public final Logger logger;
    private List<File> uploadedFiles;

    public AdminCommandContext(Logger logger, ActionReport report, Properties params) {
        this(logger, report, params, null);
    }

    public AdminCommandContext(Logger logger, ActionReport report, Properties params,
                               List<File> uploadedFiles) {
        this.logger = logger;
        this.report = report;
        this.params = params;
        this.uploadedFiles = (uploadedFiles == null) ? emptyFileList() : uploadedFiles;
    }

    private static List<File> emptyFileList() {
        return Collections.emptyList();
    }

    public ActionReport getActionReport() {
        return report;
    }

    public void setActionReport(ActionReport newReport) {
        report = newReport;
    }

    public Properties getCommandParameters() {
        return params;
    }

    public Logger getLogger() {
        return logger;
    }

    public List<File> getUploadedFiles() {
        return uploadedFiles;
    }
}
The only unfamiliar class is org.glassfish.api.ActionReport, an abstract class with several subclasses that allow us, as CLI command developers, to report the result of our command execution to the original caller. The various subclasses make it possible to report the result of the execution in different formats, such as HTML, JSON, or plain text. The report may include the exit code, possible exceptions, failure or success, and so on, based on the reporter type. Important subclasses of ActionReport are listed in table 1.

Table 1 all available reporters which can be used to report the result of a command execution

Reporter class: Description

HTMLActionReporter: Generates an HTML document containing the report information

JsonActionReporter: A JSON document containing the report information

PlainTextActionReporter: A plain text report which provides the necessary report information as plain text

PropsFileActionReporter: A properties file containing all report elements

SilentActionReport: No report at all

XMLActionReporter: A machine-readable XML document containing all report elements

All of these reporters, except SilentActionReport which is placed inside org.glassfish.embed.impl, live in the com.sun.enterprise.v3.common package.
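To see how the contract, context, and report fit together outside of GlassFish, here is a minimal, self-contained sketch of the execute-and-report flow. The real AdminCommand, AdminCommandContext, and ActionReport types live in the GlassFish API; the types below are simplified stand-ins, illustrative only.

```java
import java.util.Properties;
import java.util.logging.Logger;

// Stand-in for org.glassfish.api.admin.AdminCommand (illustrative only)
interface AdminCommand {
    void execute(CommandContext context);
}

// Simplified stand-in for AdminCommandContext: parameters in, report message out
class CommandContext {
    final Properties params = new Properties();
    final Logger logger = Logger.getLogger("cli");
    String message;                 // what the ActionReport would carry back
    String exitCode = "SUCCESS";    // stand-in for ExitCode.SUCCESS / FAILURE
}

// A trivial command: reads one parameter and builds a report message
class HelloCommand implements AdminCommand {
    public void execute(CommandContext context) {
        String name = context.params.getProperty("name", "world");
        context.message = "Hello " + name;
    }
}

public class AdminCommandSketch {
    public static void main(String[] args) {
        CommandContext ctx = new CommandContext();
        ctx.params.setProperty("name", "GlassFish");
        new HelloCommand().execute(ctx);
        System.out.println(ctx.message + " [" + ctx.exitCode + "]");
    }
}
```

The asadmin framework plays the role of main here: it builds the context from the command line, looks the command up as an HK2 service, calls execute, and hands the report to one of the reporters above.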

Now we need a way to pass parameters to our command, determine which parameters are necessary and which are optional, which ones need an argument, and what the type of the argument should be. There is an annotation which lets us annotate properties of an AdminCommand implementation class to mark them as parameters. The annotation we should use to mark a property as a parameter is @Param, placed in the org.glassfish.api package.

Listing 3 The Param annotation which we can use to have a parameterized CLI command

public @interface Param {
    public String name() default "";
    public String acceptableValues() default "";
    public boolean optional() default false;
    public String shortName() default "";
    public boolean primary() default false;
}

As you can see, the annotation can be placed either on a setter method or on a field. The annotation has several elements, including the parameter name (which by default is the same as the property name), a list of comma-separated acceptable values, the short name for the parameter, whether the parameter is optional or not, and finally whether the parameter is primary, meaning its name does not need to be included on the command line. In each command only one parameter can be primary.

When we mark a property or a setter method with the @Param annotation, the CLI framework will try to initialize the parameter with the given value and then call the execute method. During initialization, the CLI framework checks the different attributes of a parameter, like its necessity, short name, acceptable values, and so on.
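That initialization step can be pictured as a small reflection loop: find the fields carrying the annotation, match them to the supplied values, and fail when a required parameter is missing. The sketch below uses a simplified stand-in @Param annotation, not the real org.glassfish.api.Param, so it stays self-contained and runnable.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

// Illustrative stand-in for org.glassfish.api.Param (fields only, two elements)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Param {
    String name() default "";
    boolean optional() default false;
}

// A command-like object with one required annotated parameter
class SampleCommand {
    @Param(name = "detailsset", optional = false)
    String detailsSet;
}

public class ParamInjectionSketch {
    // Rough idea of what the CLI framework does before calling execute():
    // match command-line values to annotated fields and set them reflectively.
    static void inject(Object command, Map<String, String> values) throws Exception {
        for (Field f : command.getClass().getDeclaredFields()) {
            Param p = f.getAnnotation(Param.class);
            if (p == null) continue;
            String name = p.name().isEmpty() ? f.getName() : p.name();
            String value = values.get(name);
            if (value == null && !p.optional()) {
                throw new IllegalArgumentException("missing required parameter: " + name);
            }
            f.setAccessible(true);
            f.set(command, value);
        }
    }

    public static void main(String[] args) throws Exception {
        SampleCommand cmd = new SampleCommand();
        inject(cmd, Map.of("detailsset", "os"));
        System.out.println(cmd.detailsSet);
    }
}
```

The real framework does considerably more (type conversion, acceptable-value checks, setter support), but the lookup-then-set shape is the same.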

The last thing we should think about for a CLI command is internationalization and the possibility of showing localized versions of text messages about the command and its parameters to the client. This requirement is covered by another annotation, named @I18n; the associated interface is placed in org.glassfish.api. Listing 4 shows the I18n interface declaration.

Listing 4 I18n Interface which we should use to provide internationalized messages for our CLI command

public @interface I18n {
    public String value();
}

The annotation can be used both on properties and on methods. At runtime the CLI framework will extract and inject the localized value into the variable based on the current locale of the JVM.

The localization file, which is a standard resource bundle, should follow the same key=value format that we use for any properties file. A locale-specific bundle file (for example, one with a _en_US suffix for United States English) can sit alongside the default bundle, which is used for any locale that has no corresponding locale file in the bundle.
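As a quick, runnable illustration of the key=value lookup, the sketch below feeds the bundle content through a PropertyResourceBundle instead of a .properties file on disk. The keys mirror the ones the get-jms-details command uses later in this article; the message texts themselves are made up for the example.

```java
import java.io.StringReader;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class LocalizedMessagesSketch {
    static ResourceBundle bundle() throws Exception {
        // In a real command module these pairs live in the bundle's
        // .properties files; they are inlined here only to stay runnable.
        return new PropertyResourceBundle(new StringReader(
                "get-jms-details=Shows details about the domain JMS service\n"
                + "get-jms-details.details_set=Which detail set to show"));
    }

    public static void main(String[] args) throws Exception {
        // Look a command message up by its key, exactly as the framework would
        System.out.println(bundle().getString("get-jms-details"));
    }
}
```

In a deployed module the ResourceBundle machinery picks the most specific locale file available and falls back to the default bundle otherwise.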

1.3 A CLI command which returns the operating system details

First we will develop a very simple CLI command in this section, to get more familiar with the abstractions we discussed before; then we will develop a more sophisticated command which will teach us more details about the CLI framework and its capabilities.

The first command that we are going to develop just shows us which operating system our application server is running on, along with some information about the OS and JVM, like the architecture, JVM version, JVM vendor, and so on. Our command name is get-platform-details.

First let’s look at the Maven build file and analyze its content; although I am not going to scrutinize the whole Maven build file, we will look at the elements related to our CLI command development.

Listing 5 Maven build file for creating a get-platform-details CLI command

<project xmlns="" xmlns:xsi=""
<modelVersion>4.0.0</modelVersion>                       #1
<artifactId> platformDetailsCommand </artifactId>
<packaging>hk2-jar</packaging>                             #2
<description>GlassFish in Action, Chapter 13, Platform Details Command </description>
<directory>src/main/resources</directory>         #3

You will find more details in the POM file included in the sample source code accompanying the book, but what is shown in the listing is enough to build a CLI command module. At #1 we define the Maven POM model version. At #2 we ensure that the bundle we are creating is built using the HK2 module specification, which means inclusion of all the OSGi-related descriptions, like imported and exported packages, in the MANIFEST.MF file which will be generated by Maven and will reside inside the META-INF folder of the final JAR file. At #3 we ensure that Maven will include our help file for the command. The help file name should be similar to the command name, and it can include anything textual. We usually follow a UNIX-like man file structure to create help files for the commands. The help file will be shown when we call ./asadmin get-platform-details --help. Figure 1 shows the directory layout of our sample command project. As you can see, we have two similar directory layouts for the manual file and the Java source code, and the POM.XML file is in the root of the cli-platform-details-command directory.

Figure 1 Directory layout of the get-platform-details command project.

Now let’s see what the source code for the command is and what the outcome of the build process would be. Listing 6 shows the source code for the command itself; when we execute ./asadmin get-platform-details, the GlassFish CLI will call its execute method. The result of executing the command is similar to figure 2; as you can see, our command has executed successfully.

Figure 2 Sample output for calling get-platform-details with runtime as the parameter

Now you can develop your own command and test it to see a similar result in the terminal window.

Listing 6 Source code for PlatformDetails.java, which is a CLI command

@Service(name = "get-platform-details")               #1
@Scoped(PerLookup.class)                              #2
public class PlatformDetails implements AdminCommand {           #3

    ActionReport report;
    OperatingSystemMXBean osmb = ManagementFactory.getOperatingSystemMXBean();   #4
    RuntimeMXBean rtmb = ManagementFactory.getRuntimeMXBean();                   #5
    // this value can be either runtime or os for our demo
    @Param(name = "detailsset", shortName = "DS", primary = true, optional = false)   #6
    String detailsSet;

    public void execute(AdminCommandContext context) {
        try {
            report = context.getActionReport();
            StringBuffer reportBuf;
            if (detailsSet.equalsIgnoreCase("os")) {
                reportBuf = new StringBuffer("OS details: \n");
                reportBuf.append("OS Name: " + osmb.getName() + "\n");
                reportBuf.append("OS Version: " + osmb.getVersion() + "\n");
                reportBuf.append("OS Architecture: " + osmb.getArch() + "\n");
                reportBuf.append("Available Processors: " + osmb.getAvailableProcessors() + "\n");
                reportBuf.append("Average Load: " + osmb.getSystemLoadAverage() + "\n");
                report.setMessage(reportBuf.toString());            #7
                report.setActionExitCode(ExitCode.SUCCESS);         #8
            } else if (detailsSet.equalsIgnoreCase("runtime")) {
                reportBuf = new StringBuffer("Runtime details: \n");
                reportBuf.append("Virtual Machine Name: " + rtmb.getVmName() + "\n");
                reportBuf.append("VM Vendor: " + rtmb.getVmVendor() + "\n");
                reportBuf.append("VM Version: " + rtmb.getVmVersion() + "\n");
                reportBuf.append("VM Start Time: " + rtmb.getStartTime() + "\n");
                reportBuf.append("UpTime: " + rtmb.getUptime() + "\n");
                report.setMessage(reportBuf.toString());
                report.setActionExitCode(ExitCode.SUCCESS);
            } else {
                report.setActionExitCode(ExitCode.FAILURE);         #9
                report.setMessage("The given value for the " +
                        "detailsset parameter is not acceptable; the value " +
                        "can be either 'runtime' or 'os'");
            }
        } catch (Exception ex) {
            report.setActionExitCode(ExitCode.FAILURE);             #10
            report.setMessage("Command failed with the following error:\n" + ex.getMessage());
            context.getLogger().log(Level.SEVERE, "get-platform-details " + detailsSet + " failed", ex);
        }
    }
}



At #1 we mark the implementation as a provider which implements a contract; the provided service name is get-platform-details, which is the same as our CLI command name. At #2 we set a correct scope for the command, as we do not want the same cached instance, and thus the same result, every time we initiate the command, possibly even for different domains. At #3 we implement the contract interface. At #4 and #5 we get the MXBeans which provide the information we need; these can belong either to our local server or to a remote running instance if we use --host and --port to execute the command against a remote instance. At #6 we define a parameter which is not optional. The parameter name is detailsset, which we would use as --detailsset when executing the command; its short name is DS, which we would use as --DS. The parameter is primary, so we do not need to name it at all; we can just pass the value and the CLI framework will assign it to this parameter. Pay attention to the parameter name and the containing variable: if we do not use the name element of the @Param annotation, the parameter name will be the same as the variable name. At #7 we set the output which we want to show to the user as the report object’s message. At #8 we set the exit condition to success, as we managed to execute the command successfully. At #9 we set the exit code to failure, as the value passed for the parameter is not recognizable, and we show a suitable message to help the user diagnose the problem. At #10 we face an exception, so we log the exception and set the exit code to failure.
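The MXBean calls at #4 and #5 are plain java.lang.management APIs, so you can experiment with them outside GlassFish. Here is a small runnable sketch, with hypothetical method names of my own choosing, that builds the same kind of report text as the command without any of the HK2 or ActionReport machinery:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.RuntimeMXBean;

public class PlatformDetailsSketch {
    // Builds the same kind of OS report text as the command's "os" branch
    static String osDetails() {
        OperatingSystemMXBean osmb = ManagementFactory.getOperatingSystemMXBean();
        StringBuilder buf = new StringBuilder("OS details:\n");
        buf.append("OS Name: ").append(osmb.getName()).append('\n');
        buf.append("OS Architecture: ").append(osmb.getArch()).append('\n');
        buf.append("Available Processors: ").append(osmb.getAvailableProcessors()).append('\n');
        return buf.toString();
    }

    // Builds the same kind of JVM report text as the command's "runtime" branch
    static String runtimeDetails() {
        RuntimeMXBean rtmb = ManagementFactory.getRuntimeMXBean();
        StringBuilder buf = new StringBuilder("Runtime details:\n");
        buf.append("Virtual Machine Name: ").append(rtmb.getVmName()).append('\n');
        buf.append("UpTime: ").append(rtmb.getUptime()).append('\n');
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.print(osDetails());
        System.out.print(runtimeDetails());
    }
}
```

Inside GlassFish the very same beans describe the server's JVM, locally or, with --host and --port, on a remote instance.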

GlassFish CLI commands fall under two broad categories. One is the local commands, like the create-domain or start-domain commands, which execute locally; you cannot pass remote connection parameters such as --host and --port and expect to see the command executed on a remote GlassFish installation. These commands extend the com.sun.enterprise.cli.framework.Command abstract class or one of its subclasses, like com.sun.enterprise.admin.cli.BaseLifeCycleCommand, and do not need anything like an already running GlassFish instance.

The next command we will study lists some details about the JMS service of the target domain, but this time we will discuss resource injection and some details around localization of command messages.

Listing 7 shows the source code for a command which shows some information about the domain's JMS service, including startup arguments, JMS service type, address list behavior, and so on. Although the source code only shows how to get these details, setting the values of the attributes works the same way.

Figure 3 shows the result of executing this command on a domain created with default parameters.

Listing 7 Source code for remote command named get-jms-details

@Service(name = "get-jms-details")
@I18n("get-jms-details")                       #1
public class JMSDetails implements AdminCommand {

    @Inject                                      #2
    JmsService jmsService;
    @Inject
    JmsHost jmsHost;
    @Inject
    JmsAvailability jmsAvailability;
    ActionReport report;

    @Param(name = "detailsset", acceptableValues = "all,service,host,availability",
           shortName = "DS", primary = true, optional = false)      #3
    @I18n("get-jms-details.details_set")                            #4
    String detailsSet;

    private static final LocalStringManagerImpl localStrings =
            new LocalStringManagerImpl(JMSDetails.class);           #5

    public void execute(AdminCommandContext context) {
        try {
            report = context.getActionReport();
            StringBuffer reportBuf = new StringBuffer("");
            // Separate if blocks so that "all" collects every section
            if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("service")) {
                reportBuf.append("Default Host: " + jmsService.getDefaultJmsHost() + "\n");
                reportBuf.append("MQ Service: " + jmsService.getMqService() + "\n");
                reportBuf.append("Reconnection Attempts: " + jmsService.getReconnectAttempts() + "\n");
                reportBuf.append("Startup Arguments: " + jmsService.getStartArgs() + "\n");
            }
            if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("host")) {
                reportBuf.append("Host Address: " + jmsHost.getHost() + "\n");
                reportBuf.append("Port Number: " + jmsHost.getPort() + "\n");
            }
            if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("availability")) {
                reportBuf.append("Is Availability Enabled: " + jmsAvailability.getAvailabilityEnabled() + "\n");
                reportBuf.append("Availability Storage Pool Name: " + jmsAvailability.getMqStorePoolName() + "\n");
            }
            report.setMessage(reportBuf.toString());
            report.setActionExitCode(ActionReport.ExitCode.SUCCESS);
        } catch (Exception ex) {
            report.setMessage(localStrings.getLocalString("get-jms-details.failed",
                    "Command failed to execute with the {0} as given parameter", detailsSet)
                    + ". The exception message is: " + ex.getMessage());                #6
            report.setActionExitCode(ActionReport.ExitCode.FAILURE);
            context.getLogger().log(Level.SEVERE, "get-jms-details " + detailsSet + " failed", ex);
        }
    }
}



You are right, the code looks similar to the first sample, but it is more complex: it uses some unfamiliar classes, resource injection, and localization. We used resource injection in order to access the GlassFish configuration. All configuration interfaces which we can obtain using injection are inside the com.sun.enterprise.config.serverbeans package. Now let's analyze the code where it is important or unfamiliar. At #1 we are telling the framework that this command's short help message key in the localization file is get-jms-details. At #2 we are injecting some configuration beans into our variables. At #3 we ensure that the CLI framework will check whether the given value for the parameter is one of the acceptable values. At #4 we are determining the localization key for the parameter's help message. At #5 we initialize the string localization manager which fetches the appropriate string values from the bundle files. At #6 we show a localized error message using the string localization manager.
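To make the injection at #2 less magical, here is a tiny, self-contained sketch of how annotation-driven field injection can work under the hood. The @Inject annotation, JmsService class, and registry below are toy stand-ins defined inside the snippet, not the real HK2 types:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class InjectionDemo {
    // Toy stand-in for HK2's @Inject annotation
    @Retention(RetentionPolicy.RUNTIME)
    @interface Inject {}

    // Toy stand-in for a configuration bean such as JmsService
    static class JmsService {
        public String toString() { return "jms-service-config"; }
    }

    static class JMSDetails {
        @Inject JmsService jmsService;  // filled in by wire() below
    }

    // Minimal "container": set every @Inject-annotated field from a registry
    static JMSDetails wire() throws Exception {
        Map<Class<?>, Object> registry = new HashMap<Class<?>, Object>();
        registry.put(JmsService.class, new JmsService());
        JMSDetails command = new JMSDetails();
        for (Field f : JMSDetails.class.getDeclaredFields()) {
            if (f.isAnnotationPresent(Inject.class)) {
                f.setAccessible(true);
                f.set(command, registry.get(f.getType()));
            }
        }
        return command;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(wire().jmsService);
    }
}
```

The real container does much more (scoping, lazy lookup, lifecycle), but the field-scanning idea is the same.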

Listing 8 shows the localization file content; we will shortly discuss where the file should be placed to allow the CLI framework to find it and load the necessary messages from it.

Listing 8 A snippet of the localization file content for the get-jms-details command

get-jms-details=Getting the Detailed information about JMS service and host in the target domain.                #1

get-jms-details.details_set=The set of details which is required, the value can be all, service, host, or availability.           #2

get-jms-details.failed=Command failed to execute with the {0} as given parameter.          #3

At #1 we include a description of the command; at #2 we include a description of the detailsset parameter using the assigned localization key. At #3 we include the localized version of the error message which we want to show when the command fails. As you can see, we used placeholders to include the parameter value. Figure 4 shows the directory layout of the get-jms-details project. As you can see, we placed the file next to the command source code.
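The {0} placeholder in the failed message is standard java.text.MessageFormat substitution, which the string manager applies when formatting the localized template. A minimal self-contained illustration (the bundle class below is an in-memory stand-in for the real properties file):

```java
import java.text.MessageFormat;
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

public class PlaceholderDemo {
    // In-memory stand-in for the localization properties file
    static class Messages extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] {
                {"get-jms-details.failed",
                 "Command failed to execute with the {0} as given parameter."}
            };
        }
    }

    static String failedMessage(String param) {
        ResourceBundle bundle = new Messages();
        // MessageFormat fills the {0} placeholder with the parameter value
        return MessageFormat.format(bundle.getString("get-jms-details.failed"), param);
    }

    public static void main(String[] args) {
        System.out.println(failedMessage("wrongValue"));
    }
}
```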

Figure 4 Directory layout of the get-jms-details command project.

Developing more complex CLI commands follows the same rules as developing these simple ones. All you need to develop a new command is to start a new project using the given Maven POM files and discover the new things you can do.

2 Administration console pluggability

You know that the administration console is a very easy to use channel for administrating a single instance or multiple clusters with tens of instances and a variety of managed resources. All of these functionalities are based on a navigation tree, tabs, and content pages which we can use to perform our required tasks. You may wonder how we can extend these functionalities, without changing the main web application (known as admingui), so that new functionality is presented in the same look and feel as the existing functionality. The answer lies again in the application server's overall design, HK2 and OSGI, along with some JSF Templating and use of the Woodstock JSF component suite. We are going to write an administration console plug-in which lets us upload and install new OSGI bundles from remote locations using a browser.

2.1 Administration console architecture

Before we start discussing how we can extend the administration console in a non-intrusive way, we should learn more about the console architecture and how it really works. The administration console is a set of OSGI bundles, and each bundle includes some administration console specific artifacts in addition to the default OSGI manifest file and Maven related artifacts.

One of the most basic artifacts required by an administration console plug-in is a descriptor which defines where our administration console navigation items should appear; for example, whether we are going to add a new node to the navigation tree, or a new tab to a currently available tab set like the Application Server page tab set. The file which describes these navigation items and their places is named console-config.xml and should be placed inside the META-INF/admingui folder of the final OSGI bundle.

The second thing required to make our module recognizable by the HK2 kernel is an implementation of the HK2 administration console plug-in contract. So, the second basic item in an administration console plug-in is an HK2 service provider, which is nothing more than a typical Java class, annotated with the @Service annotation, that implements the org.glassfish.api.admingui.ConsoleProvider interface. The console provider interface has only one method, named getConfiguration. We need to implement this method only if we want to use a non-standard place for the console-config.xml file.

In addition to the basic requirements, some other artifacts should be present in order to see some effect in the administration console from our plug-in. These requirements include JSF fragment files which add the node, tab, content, and common task button to the administration console in the places described in the console-config.xml file. These JSF fragments use JSFTemplating project tags to define the nodes, tabs, and buttons which should appear at the integration points.

So far we have only included a node in the navigation tree, a tab in one of the tab sets, or a common task button or group in the common tasks page. What else do we need? We need some pages to show when an administrator clicks a tree node or a common task button. So we should add JSF fragments, implemented using JSFTemplating tags, to show the real content to the administrators. For example, imagine that we want to write a module to deploy OSGI bundles: we will need to show a page which lets users select a file, along with a button which they can press to upload and install the bundle.

Now that we have shown the page which lets users select a file, we need some business logic, or handlers, which receive the uploaded file and write it to the correct place. As we use JSFTemplating for developing pages, we will use JSFTemplating handlers to implement the business logic.

With the functionality related artifacts covered, a plug-in is not all about functionality: we should also provide localized administration pages for administrators who like to use their local languages when dealing with administration tasks. We also need to provide some help files which will guide administrators who need to read a manual before digging into the action.

After we have all of this content and configuration in place, the administration console's add-on service queries for all providers which implement the ConsoleProvider interface; then, for each provider, the service tries to get the configuration file, calling the getConfiguration method of the contract implementation if it cannot find the file in its default place. After that the administration console framework uses the configuration file and the provided integration point template files, and later on the content template files and handlers.
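Conceptually, this lookup resembles the sketch below. The interface here is a simplified stand-in defined inside the snippet (the real ConsoleProvider.getConfiguration returns a java.net.URL, and HK2 discovers the providers for us):

```java
import java.util.Arrays;
import java.util.List;

public class AddOnServiceDemo {
    // Simplified stand-in for org.glassfish.api.admingui.ConsoleProvider
    interface ConsoleProvider {
        String getConfiguration();  // null means "use the default location"
    }

    static class InstallOSGIBundle implements ConsoleProvider {
        public String getConfiguration() { return null; }
    }

    // The add-on service asks each provider, falling back to the default path
    static String resolveConfig(ConsoleProvider p) {
        String cfg = p.getConfiguration();
        return cfg != null ? cfg : "META-INF/admingui/console-config.xml";
    }

    public static void main(String[] args) {
        List<ConsoleProvider> providers =
                Arrays.<ConsoleProvider>asList(new InstallOSGIBundle());
        for (ConsoleProvider p : providers) {
            System.out.println(resolveConfig(p));
        }
    }
}
```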

2.2 JSFTemplating

JSFTemplating is a Sun-sponsored project which provides templating for JSF. By utilizing JSFTemplating we can create web pages or components using template files. Inside template files we can use JSFTemplating and Facelets syntax; other syntaxes may be supported in the future. All syntaxes support all of JSFTemplating's features, such as accessing the page session, event triggering, event handler support, and dynamic reloading of page content. Let's analyze the template file shown in listing 9 to see what we are going to use when we develop administration console plug-ins.

Listing 9 A simple JSFTemplating template file

<!initPage
    setResourceBundle(key="i18n" bundle="mypackage.resource");
/>                                 #1
<sun:head id="head" />
#include /
<sun:form id="form">
    "<p>#{i18n["welcome.msg"]}</p>        #2
    <sun:label value="#{anOut}">        #3
        <!beforeEncode
            GiA.getResponse(userInput="#{in}" response=>$pageSession{anOut});
        />
    </sun:label>
    "<br /><br />
    <sun:textField id="in" value="#{}" /> #4
    "<br /><br />
    <sun:button text="$resource{i18n.button.text}" />
    <sun:hyperlink text="cheat">
        <!command
            setPageSessionAttribute(key="in" value="sds");  #5
        />
    </sun:hyperlink>
</sun:form>
#include /          #6

At #1 we specify the resource file we want to use to get localized text content, and define a key for accessing it. At #2 we are simply using our localization resource file to show a welcome message; as you may have noticed, it is Facelets syntax. At #3 we are using a handler for one of JSFTemplating's predefined events. In the event we send the value of the in variable to our method, and after getting back the method's execution result we set it into a variable named anOut in the page scope; we use the same variable to initialize the Label component. You can see what a handler looks like in listing 10. The event that we used makes it possible to change the text before it gets displayed. There are several pre-defined events, some of which are listed in table 2. At #4 we are using the in variable's value to initialize the text field content. At #5 we are using a pre-defined command as the handler for the hyperlink click. At #6 we are including another template file into this one. All of the components used in this template file are equivalents of the Woodstock project's components.

Table 2 JSFTemplating pre-defined events which can call a handler



AfterCreate, BeforeCreate — event handling commences before or after the component is created

AfterEncode, BeforeEncode — event handling commences before or after the content of a component gets displayed

Command — a command has been invoked, for example a link or a button is clicked

InitPage — page initialization phase, when component values are getting set

Listing 10 The GiA.getResponse handler which has been used in listing 9

@Handler(id = "GiA.getResponse",
    input = {
        @HandlerInput(name = "in", type = String.class)},
    output = {
        @HandlerOutput(name = "response", type = String.class)})
public static void getResponse(HandlerContext handlerCtx) {
    // Illustrative body (the original listing omits it): read the input
    // and expose the result through the declared output parameter
    String in = (String) handlerCtx.getInputValue("in");
    handlerCtx.setOutputValue("response", "You entered: " + in);
}
In listing 10 you can see that the handler ID is used in the JSF page instead of the method's fully qualified name. During compilation, all handler IDs, their fully qualified names, and their input and output parameters are extracted by apt into a mapping file which resides inside a folder named jsftemplating; the jsftemplating folder is placed inside the META-INF folder. The file's structure is similar to a standard properties file containing variable=value pairs.
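The mapping idea can be sketched with plain java.util.Properties plus reflection. The mapping format below is simplified for illustration (the real generated file also records parameter metadata):

```java
import java.io.StringReader;
import java.lang.reflect.Method;
import java.util.Properties;

public class HandlerMapDemo {
    // A handler-like static method the mapping entry points to
    public static String getResponse(String in) {
        return "echo:" + in;
    }

    // Look up a handler id in the properties-style map and invoke the method
    static String dispatch(String mapping, String id, String arg) throws Exception {
        Properties map = new Properties();
        map.load(new StringReader(mapping));
        String fqn = map.getProperty(id);          // e.g. HandlerMapDemo.getResponse
        int dot = fqn.lastIndexOf('.');
        Class<?> cls = Class.forName(fqn.substring(0, dot));
        Method m = cls.getMethod(fqn.substring(dot + 1), String.class);
        return (String) m.invoke(null, arg);
    }

    public static void main(String[] args) throws Exception {
        String mapping = "GiA.getResponse=HandlerMapDemo.getResponse\n";
        System.out.println(dispatch(mapping, "GiA.getResponse", "hi"));
    }
}
```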

Now that we have some knowledge about the JSFTemplating project and the administration console architecture, we can create our plug-in which lets us install any GlassFish module, in the form of an OSGI bundle, using the administration console. After seeing how a real plug-in is developed, we can proceed to learn more details about integration points and changing the administration console theme and brand.

2.3 OSGI bundle installer plug-in

Now we want to create a plug-in which will let us upload an OSGI module and install it into the application server by copying the file into the modules directory. First let's see figure 5, which shows the file and directory layout of the plug-in source code; then we can discuss each artifact in detail.

In figure 5, at the top level we have our Maven build file, with one source directory containing the standard Maven directory structure for Jar files: a main directory with java and resources directories inside it. The java directory is self-describing, as it contains the handler and service provider implementation. The resources directory is again standard for the Maven Jar packager; Maven will copy the content of the resources folder directly into the root of the Jar file. The content of the resources folder is as follows:

  • glassfish: this folder contains a directory layout similar to the java folder; the only file stored in it is our localization resource file

  • images: graphic file which we want to use in our plug-in for example in the navigation tree
  • js: a JavaScript file which contains some functions which we will use in the plug-in
  • META-INF: during development time we will have only one folder named admingui inside the META-INF folder; this folder holds the console-config.xml. After building the project, some other folders, including maven, jsftemplating, and inhabitants, will be created by Maven.
  • pages: this folder contains all of our JSF template files.

Figure 5 directory layout of the administration console plug-in to install OSGI bundles

Some folders, like META-INF, are mandatory, but we could put the other folders' content directly inside the resources folder; we would just end up with an unorganized structure. Now we can discuss each artifact in detail. You can see the content of console-config.xml in listing 11.

Listing 11 Content of the console-config.xml file

<?xml version="1.0" encoding="UTF-8"?>
<console-config id="GiA-OSGI-Module-Install">                  #1
    <integration-point  id="CommonTask-Install-OSGI"           #2
        type="org.glassfish.admingui:commonTask"               #3
        parentId="deployment"                                  #4
        priority="300"                                         #5
        content="pages/CommonTask.jsf" />                      #6
    <integration-point  id="Applications-OSGI"
        type="org.glassfish.admingui:treeNode"                 #7
        parentId="applications"                                #8
        content="pages/TreeNode.jsf" />                        #9
</console-config>


This XML file describes which integration points we want to use, and the priority of our navigation node relative to the already existing navigation nodes. At #1 we give our plug-in a unique ID, because later on we will access our web pages and resources using this ID. At #2 we define an integration point, again with a unique ID. At #3 we determine that this integration point is a common task which we want to add under the deployment tasks group (#4). At #5, the priority number says that all integration points with a smaller priority number will be processed before this integration point, and therefore our common task button will be placed after them. At #6 we determine which template file should be processed to fill in the integration point placeholder; in this integration point the template file contains a button which will fill the placeholder.
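The ordering rule at #5 can be pictured with a short sketch; the integration point IDs other than our own are made up for illustration:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PriorityDemo {
    // Lower priority numbers are processed first, so they are placed first
    static List<String> order(final Map<String, Integer> priorities) {
        List<String> ids = new ArrayList<String>(priorities.keySet());
        ids.sort(Comparator.comparingInt(priorities::get));
        return ids;
    }

    public static void main(String[] args) {
        Map<String, Integer> points = new HashMap<String, Integer>();
        points.put("CommonTask-Install-OSGI", 300);   // our integration point
        points.put("hypothetical-deploy-task", 100);  // made-up built-in tasks
        points.put("hypothetical-undeploy-task", 200);
        System.out.println(order(points));
    }
}
```

Our button, with priority 300, ends up after both built-in tasks.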

At #7 we determine that we want to add a tree node to the navigation tree, by using org.glassfish.admingui:treeNode as the integration point type. At #8 we determine that our tree node should be placed under the applications node. At #9 we say that the template page which will fill the placeholder is pages/TreeNode.jsf. To summarize, each console-config.xml file generally consists of several integration point description elements, and each integration point element has four or five attributes. These attributes are as follows:

id: An identifier for the integration point.

parentId: The ID of the integration point's parent. You can see a list of all parentId values in table 3.

type: The type of the integration point. You can see a list of all types in table 3.

priority: A numeric value that specifies the relative ordering of integration points for add-on components that specify the same parentId. This attribute is optional.

content: The content for the integration point, typically a JavaServer Faces page.
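Since console-config.xml is plain XML, a descriptor like listing 11 can be inspected with the JDK's own DOM parser. This is only a reader sketch, not how the console itself loads the file:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class ConsoleConfigDemo {
    // Parse a descriptor and summarize its first integration-point element
    static String firstIntegrationPoint(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        Element ip = (Element) doc.getElementsByTagName("integration-point").item(0);
        return ip.getAttribute("type") + " under " + ip.getAttribute("parentId")
                + " with priority " + ip.getAttribute("priority");
    }

    public static void main(String[] args) throws Exception {
        String xml = "<console-config id='GiA-OSGI-Module-Install'>"
                + "<integration-point id='CommonTask-Install-OSGI'"
                + " type='org.glassfish.admingui:commonTask' parentId='deployment'"
                + " priority='300' content='pages/CommonTask.jsf'/>"
                + "</console-config>";
        System.out.println(firstIntegrationPoint(xml));
    }
}
```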

The second basic artifact which we discussed is the service provider, which is a simple Java class implementing the org.glassfish.api.admingui.ConsoleProvider interface. Listing 12 shows InstallOSGIBundle, the service provider for our plug-in.

Listing 12 Content of the InstallOSGIBundle class, the plug-in service provider

@Service(name = "install-OSGI-bundle")    #A
@Scoped(PerLookup.class)                  #B
public class InstallOSGIBundle
        implements ConsoleProvider {      #C

    public URL getConfiguration() {
        return null;
    }
}

#A: The service has a name

#B: The service is scoped

#C: The service implements the interface

Now that we have seen the basic artifacts, including the service provider implementation, and know how to define where our navigational items go in the administration console navigation system, we can look at the content of the template files referenced in the console-config.xml file. Listing 13 shows the TreeNode.jsf file content.

Listing 13 Content of the TreeNode.jsf template file which creates a node in the navigation tree


setResourceBundle(key="GiA" bundle="") #1
<sun:treeNode id="GFiA_TreeNode"            #2
    imageURL="resource/images/icon.png"  #3
    text="$resource{GiA.OSGI.install.tree.title}" #4
    url="GiA-OSGI-Module-Install/pages/installOSGI.jsf" #5
    target="main"         #6
    expanded="$boolean{false}" />       #7

You can see the changes that this template causes in figure 6, but first a description of the code. At #1 we are using a JSFTemplating event to initialize the resource bundle. At #2 we declare a tree node with GFiA_TreeNode as its ID, which lets us access the node using JavaScript. At #3 we determine the icon which should appear next to the tree node; we are using resource/images/icon.png as the path to the icon, and the resource prefix leads us to the root of the resources folder. At #4 we say that the title of the tree node should be fetched from the resource bundle. At #5 we determine which page should be loaded when the administrator clicks the node. Using facesContext.externalContext.requestContextPath we get the web application path; GiA-OSGI-Module-Install is our plug-in ID, and we can access whatever we have inside the resources folder by prefixing its path with GiA-OSGI-Module-Install (you can see the ID in listing 11). At #6 we say that the page should open in the main frame (the content frame), and at #7 that the node should not be expanded by default; the effect of #7 is visible when we have some sub-nodes.

Figure 6 The effects of the TreeNode.jsf template file on the administration console navigation tree

You can see that we have our own icon, which is loaded directly from the images folder residing inside the resources directory. The tree node text is fetched from the resource bundle file.

Listing 14 shows CommonTask.jsf, which causes the administration console service to place a button in the common tasks section under the deployment group; the result of this fragment on the common tasks page is shown in figure 7.

Listing 14 Content of the CommonTask.jsf file which is a template to fill the navigation item place

setResourceBundle(key="GiA" bundle="")
<sun:commonTask                                             #1
    onClick="admingui.nav.selectTreeNodeById('form:tree:application:GFiA_TreeNode');                                                       #2
    parent.location='#{facesContext.externalContext.requestContextPath}/GiA-OSGI-Module-Install/pages/installOSGI.jsf'; return false;"     #3
/>


At #1 we are using a commonTask component of the JSFTemplating framework; we use a localized string for its text and tool tip. At #2 we are changing the state of the tree node defined in console-config.xml when the component receives a click; this ensures that the navigation tree shows where the user is. And at #3 we are telling it to load pages/installOSGI.jsf when this button is clicked.

Figure 7 Effects of listing 14 on the common tasks page of the administration console

We fetched the button title and its tool tip from the resource file which we included in the resources folder, as described in the sample project folder layout in figure 5.

The next file we will discuss is the actual operation file, which provides administrators with an interface to select, upload, and install an OSGI bundle. The file, as we already discussed, is named installOSGI.jsf, and listing 15 shows its content.

Listing 15 The installOSGI.jsf file content; this file provides the interface for uploading the OSGI file

<sun:page id="install-osgi-bundle-page" >  #1
    <!beforeCreate
        setResourceBundle(key="GiA" bundle="");
    />
    <sun:head id="propertyhead" title="$resource{GiA.OSGI.install.header.title}">      #2
        <sun:script url="/resource/js/glassfishUtils.js" /> #3
    <sun:form id="OSGI-Install-form" enctype="multipart/form-data">
        <sun:title id="title" title="$resource{GiA.OSGI.install.form.title}" helpText="$resource{}">
        <sun:upload id="fileupload" style="margin-left: 17pt" columns="$int{50}"
        > #4
        <sun:button id="uploadButton" text="$resource{GiA.OSGI.install.form.Install_button}"
            onClick="javascript:
                return submitAndDisable(this, '$resource{GiA.OSGI.install.form.cancel_processing}');
            "> #5

/>  #6

I agree that the file content may look scary, but if you look more carefully you can see many familiar elements and patterns. Figure 8 shows this page in the administration console. In listing 15, at #1 we start a page component and use its beforeCreate event to load the localized resource bundle. At #2 we give the page a title which is loaded from the resource bundle. At #3 we load a JavaScript file which contains one helper JavaScript method; as you can see, we prefixed the file path with resource. At #4 we use a fileUpload component; the binary content of the uploaded file goes to the uploadedFile variable in the request scope. At #5 we use the helper JavaScript method to submit the form and disable the Install button. At #6 we call a custom handler with GiA.uploadFile as its ID, passing the request scoped uploadedFile variable to the handler method. After the command executes, we redirect to a simple page which shows a message indicating that the module is installed.

Figure 8 The installOSGI.jsf page in the administration console

Now, let's see what is behind this page: how the handler works, how it finds the modules directory, and how the installation process commences. Listing 16 shows the InstallOSGIBundleHandlers class, which contains only one handler; it writes the uploaded file into the modules directory of the GlassFish installation which owns the running domain.

Listing 16 The content of InstallOSGIBundleHandlers which is file uploading handler

public class InstallOSGIBundleHandlers {  #A

    @Handler(id = "GiA.uploadFile",  #1
        input = {
            @HandlerInput(name = "file", type = UploadedFile.class)}) #2
    public static void uploadFileToTempDir(HandlerContext handlerCtx) { #3
        try {
            UploadedFile uploadedFile = (UploadedFile) handlerCtx.getInputValue("file"); #4
            String name = uploadedFile.getOriginalName();
            String asInstallPath = AMXRoot.getInstance().getDomainRoot().getInstallDir(); #5
            String modulesDir = asInstallPath + "/modules";
            String serverSidePath = modulesDir + "/" + name;
            uploadedFile.write(new File(serverSidePath));
        } catch (Exception ex) {
            GuiUtil.handleException(handlerCtx, ex); #6
        }
    }
}

#A Class Declaration

As you can see in the listing, it is really simple to write handlers and work with the GlassFish APIs. At #1 we define a handler; the handler's id element allows us to reference it from the template file. At #2 we define the handler's input parameter, named file; the parameter type is com.sun.webui.jsf.model.UploadedFile. At #3 we implement the handler method. At #4 we extract the file content from the context. At #5 we use an AMX utility class to find the GlassFish installation path which owns the currently running domain; there are many utility classes in the org.glassfish.admingui.common.util package. At #6 we handle any possible exception the GlassFish way. We could also define output parameters for our handler and, after the command executes, decide where to redirect the user based on the value of an output parameter.
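Stripped of the GlassFish-specific types, the core of the handler is just "write these bytes under &lt;install-dir&gt;/modules/&lt;name&gt;". A self-contained sketch with java.nio.file, where a temp directory stands in for the real installation directory:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ModuleInstallDemo {
    // Mimics the handler: write the uploaded bytes into <installDir>/modules/<name>
    static Path install(Path installDir, String name, byte[] content) throws Exception {
        Path modulesDir = installDir.resolve("modules");
        Files.createDirectories(modulesDir);
        return Files.write(modulesDir.resolve(name), content);
    }

    public static void main(String[] args) throws Exception {
        Path fakeInstall = Files.createTempDirectory("gf-install");
        // 0x50 0x4B is the "PK" magic that starts any jar/zip file
        Path written = install(fakeInstall, "my-bundle.jar", new byte[]{0x50, 0x4B});
        System.out.println(Files.exists(written));
    }
}
```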

There are three other files which we will briefly discuss. The first, shown in listing 17, is the JavaScript file used in installOSGI.jsf and placed inside the js directory; we mentioned it in the listing 15 discussion.

Listing 17 Content of the glassfishUtils.js file

function submitAndDisable(button, msg, target) {
    button.form.action += "?" + button.name + "=" + encodeURI(button.value);
    button.value = msg;       // show the "processing" message on the button
    button.disabled = true;   // prevent a double submit
    if (target) {
        button.form.target = target;
    }
    button.form.submit();
    return true;
}

The function receives three parameters: the button which it will disable, the message which it will set as the button title while the button is disabled, and finally the target that the button's owner form will submit to.

Listing 18 shows the next file we should take a look at: our localization file, located deep in the resources directory. Although it is a standard localization file, it is not bad to take a look and remember the first time we faced resource bundle files.

Listing 18 Content of the localization file

OSGI.install.header.title=Install OSGI bundle by uploading the file into the server.
OSGI.install.form.title=Select your OSGI bundle to install it in the server's modules directory
OSGI.install.tree.title=Install OSGI Bundle
OSGI.install.task.title=Install OSGI Bundle
OSGI.install.tree.Tooltip=Simply install OSGI bundle by uploading the bundle from the administration console.

And finally, our Maven build file: as we discussed in the beginning of the section, it is placed in the top level directory of our project. If you want to review the directory layout, take a look at figure 5.

Listing 19 The plug-in project Maven build file, pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="" xmlns:xsi="" xsi:schemaLocation="">
<groupId>org.glassfish.admingui</groupId> #A
<packaging>hk2-jar</packaging>           #B
<description>GlassFish in Action, Chapter 13, Admin console plugin to install OSGI bundle</description>

#A Project Group ID

#B HK2 Special Packaging

The pom.xml file, which is available for download, is more complete, with some comments explaining other elements, but listing 19's content is sufficient for building the plug-in.

Now that you have a solid understanding of how an administration console plug-in works, we can extend our discussion to the other integration points and their corresponding JSFTemplating components.

Table 3 shows the integration points and the related JSFTemplating components, along with some explanation about each item. As you can see, there are several more types of integration points with different attributes and navigational places.

Table 3 List of all integration points along with the related JSFTemplating component

Integration point type*, values for the parentId attribute, and the corresponding JSFTemplating component:

  • treeNode — parentId values: tree, applicationServer, applications, webApplications, resources, configuration, node, webContainer. Component: sun:treeNode, which is the equivalent of the Project Woodstock tag webuijsf:treeNode.

  • the tab integration point — parentId value: serverInstTab; adds tabs and sub-tabs to the Application Server page. Component: sun:tab, which is the equivalent of the Project Woodstock tag webuijsf:tab.

  • commonTask — parentId values: deployment, monitoring, updatecenter, documentation. Component: sun:commonTask, which is the equivalent of the Project Woodstock tag webuijsf:commonTask.

  • the common tasks group integration point — parentId value: commonTasksSection; it allows us to create our own common tasks group. Component: sun:commonTasksGroup, which is the equivalent of the Project Woodstock tag webuijsf:commonTasksGroup.

  • the branding integration points — parentId values: masthead, loginimage, loginform, versioninfo. These let us change the login form, header, and version information of the application server in order to change the application server brand.

* All of the first column values have org.glassfish.admingui: as prefix

Now you are equipped with enough knowledge to develop extensions for the GlassFish administration console, to facilitate your own daily tasks or to provide your plug-ins to other administrators who are interested in the same administration tasks you perform using your developed plug-ins.

3 Summary

Understanding an administration console from the end user's point of view is one thing; understanding it from a developer's perspective is another. With the first you can administrate the application server, and with the second you can manage, enhance, and extend the administration console itself. In this article we learned how to develop new asadmin (CLI) commands to ease our daily tasks or extend the GlassFish application server as a third-party partner. We also learned how to develop and extend the web based administration console, which led us to review the JSFTemplating project and take a quick look at the AMX helper classes.

Using XML in Java refcard is available for download, free as in speech


I wrote a new refcard for DZone; the refcard is about using XML in Java and generally covers:

  • What XML is
  • XML use cases: when to use it and when not to
  • What we have in Java for dealing with XML documents
  • What DTD and XSD are
  • How we can perform validation using XSD and DTD in different parsers
  • What the differences are between different techniques of parsing XML documents
  • Basics of XPath
  • Some performance tips

This is my second refcard; the first was about GlassFish administration, and it was an instant success. An updated version of the GlassFish refcard is published and is available at GlassFish V3 refcard. The updated version covers some new features of the upcoming GlassFish v3.

Hibernate dynamic mapping and Dom4J enabled sessions

Hibernate, since version 3.0, provides a very useful feature for people who develop application frameworks: it allows you to work directly with XML documents and elements which represent entities.
Imagine that you have an application or an SDK which helps users manipulate data from different RDBMSs. Hibernate provides rich configuration facilities which help you configure Hibernate dynamically, in terms of adding mapping data or other configuration artifacts that are usually stored in hibernate.cfg.xml or equivalent properties files.

As we are planning to use Hibernate dynamic mapping and the Dom4J entity mode, I am going to blog about it during my evaluation.
OK, Hibernate provides three entity modes:

  • POJO
  • DOM4J
  • MAP

The default mode is POJO, as it is the most commonly used one. The entity mode tells the session how it should handle entities. We can configure a session to use any of these modes when we need it, or we can set it globally in the Hibernate configuration file by adding a property like

 <property name="default_entity_mode">dom4j</property>

to hibernate.cfg.xml. For our sample, however, we will create a session with the dom4j entity mode. You can find a complete sample for this blog entry here. Make sure you read the readme file in the project folder before executing it. For this sample I used NetBeans 6.0 M6 (which really rules) and Hibernate 3.2.1. I won't describe the steps to create the project, the XML files, and so on, but only the actions and code required on the Hibernate side. You can see the project structure in the following image.

Hibernate dom4j session project structure

As you can see, it is a basic Ant-based project.
Let me give you the content of each file and explain it as much as I can. First of all, let's see what we have in hibernate.cfg.xml:

 <?xml version='1.0' encoding='utf-8'?>
 <!DOCTYPE hibernate-configuration PUBLIC
     "-//Hibernate/Hibernate Configuration DTD//EN"
     "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
 <hibernate-configuration>
     <session-factory>
         <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
         <property name="hibernate.connection.url">jdbc:mysql://localhost/hiberDynamic</property>
         <property name="hibernate.connection.username">root</property>
         <property name="hibernate.connection.password">root</property>
         <property name="hibernate.c3p0.min_size">5</property>
         <property name="hibernate.c3p0.max_size">20</property>
         <property name="hibernate.c3p0.timeout">300</property>
         <property name="hibernate.c3p0.max_statements">50</property>
         <property name="hibernate.c3p0.idle_test_period">3000</property>
         <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
         <!--<property name="default_entity_mode">dom4j</property> -->
         <mapping resource="dynamic/Student.hbm.xml"/>
     </session-factory>
 </hibernate-configuration>

The configuration file is a simple, traditional Hibernate configuration file with connection pooling enabled and the dialect set to MySQL.
We have one mapping file, named Student.hbm.xml, so we include it in the configuration file. If you do not have MySQL around, use Derby, which is included with NetBeans 😉.

 The Log4J configuration is another traditional one, as you can see:

 log4j.appender.stdout=org.apache.log4j.FileAppender
 log4j.appender.stdout.File=messages_dynamic.log
 log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
 log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
 log4j.rootLogger=WARN, stdout

We used a file appender which sends formatted log entries into a file named messages_dynamic.log in the project root directory. The next file we are going to look at is Student.hbm.xml, our mapping file, where we define the student as a dynamic entity.

 <?xml version="1.0"?>
 <!DOCTYPE hibernate-mapping PUBLIC
     "-//Hibernate/Hibernate Mapping DTD//EN"
     "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
 <hibernate-mapping>
     <class entity-name="Student" table="Student">
         <id name="id" column="STUDENT_ID" type="long">
             <generator class="native"/>
         </id>
         <property name="name" type="string" column="STUDENT_NAME"/>
         <property name="lastName" type="string" column="STUDENT_LAST_NAME"/>
     </class>
 </hibernate-mapping>

As you can see, there is just one change in the mapping file: we have an entity-name attribute instead of a class attribute. You should know that an entity can have both the class and the entity-name attribute, so it can be dynamic as well as mapped to a concrete class.
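For instance, a mapping along the following lines would let the same table be used through either a POJO or a dynamic entity; this is only a sketch, and the concrete class name dynamic.Student is an assumption rather than part of the sample project:

```xml
<class name="dynamic.Student" entity-name="Student" table="Student">
    <id name="id" column="STUDENT_ID" type="long">
        <generator class="native"/>
    </id>
    <property name="name" type="string" column="STUDENT_NAME"/>
    <property name="lastName" type="string" column="STUDENT_LAST_NAME"/>
</class>
```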

The next step is looking at our HibernateUtil class, which is well known in the community for booting Hibernate and managing the Hibernate instance.
Here is its code:

 package persistence;

 import org.hibernate.*;
 import org.hibernate.cfg.*;

 public class HibernateUtil {
     private static SessionFactory sessionFactory;

     static {
         try {
             sessionFactory = new Configuration().configure().buildSessionFactory();
         } catch (Throwable ex) {
             throw new ExceptionInInitializerError(ex);
         }
     }

     public static SessionFactory getSessionFactory() {
         return sessionFactory;
     }

     public static void shutdown() {
         // Close caches and connection pools
         getSessionFactory().close();
     }
 }

Nothing extra here. Let's look at the last part, in which we use a dom4j session to manipulate our data.

 package dynamic;

 import java.util.*;
 import org.hibernate.EntityMode;
 import org.hibernate.Query;
 import org.hibernate.Session;
 import org.hibernate.Transaction;
 import persistence.HibernateUtil;
 import org.dom4j.*;

 public class DynamicMapping {
     public static void main(String[] args) {
         // Open a session that hands back DOM4J elements instead of POJOs
         Session session = HibernateUtil.getSessionFactory().openSession().getSession(EntityMode.DOM4J);

         // Clean the table before inserting the sample data
         Transaction tx = session.beginTransaction();
         Query deleteQuery = session.createQuery("delete from Student");
         deleteQuery.executeUpdate();
         tx.commit();

         tx = session.beginTransaction();
         // Create some students and save them
         Element aStudent = DocumentHelper.createElement("Student");
         aStudent.addElement("name").setText("Alice");
         aStudent.addElement("lastName").setText("Cooper");
         session.save(aStudent);

         Element anotherStudent = DocumentHelper.createElement("Student");
         anotherStudent.addElement("name").setText("John");
         anotherStudent.addElement("lastName").setText("Connor");
         session.save(anotherStudent);
         tx.commit();

         // List all students
         Query q = session.createQuery("from Student");
         List students = q.list();
         for (Iterator it = students.iterator(); it.hasNext();) {
             Element student = (Element) it.next();
             System.out.println("Printing a Student's details:");
             for (Iterator i = student.elementIterator(); i.hasNext();) {
                 Element element = (Element) i.next();
                 System.out.println(element.getName() + ":  " + element.getText());
             }
         }

         // Retrieve a student, update it and save the change
         tx = session.beginTransaction();
         q = session.createQuery("from Student where name = :studentName");
         q.setParameter("studentName", "Alice");
         Element alice = (Element) q.uniqueResult();
         alice.element("name").setText("No Alice any more");
         tx.commit(); // the session flushes the modified element on commit

         session.close();
     }
 }

In the beginning we create a session with the dom4j entity mode, so it will return Dom4J elements as our entities. In the next two blocks I create two students; one is Alice Cooper and the other is John Connor (what does this name remind you? 😉). We simply ask our session to save them, just as we would in the usual POJO mode. The session knows what to do with dom4j elements, as it is configured as a DOM4J session.
In the second block we query our table and retrieve all entities into a list, but this list is not a list of Student POJOs; instead it is a list of DOM4J elements, so we need to do some XML processing to extract our entity properties. You can learn more about DOM4J here.

In the next step we retrieve a single row, edit it, and save it back to our database. These are all simple DOM4J operations applied to elements in order to manipulate your data.

The build file that I used contains two targets that we will use during this project. The first one is hbm2ddl, which creates our database structure, and the second one is the run target, which executes our main class. It is not necessary to include the build file here; you can download the sample and check it yourself. Make sure you look at the readme file before digging into executing the application.

In the next few days I will try to do a simple benchmark of some basic CRUD operations to get a rough idea of how the DOM4J entity mode performs in our environment.