JavaZone 2010 sessions I am going to attend

I will be attending JavaZone 2010 both as a speaker presenting NIO.2 and as an attendee, sitting and learning the new technologies other speakers kindly share. After looking at the list of sessions, here are the sessions I have decided to attend. It is really hard to decide which speaker and subject to choose; all the speakers are well-known engineers and developers and the selected subjects are exceptionally good.

I may change a few of these items as the JZ2010 staff update the agenda, but the majority of the sessions will be what I have already selected. See you at JZ2010 and the JourneyZone after the conference itself.

8th of September:

  1. Emergent Design — Neal Ford
  2. JRuby: Now With More J! — Nick Sieger
  3. Howto: Implement Collaborative Filtering with Map/Reduce – Ole-Martin Mørk
  4. Building a scalable search engine with Apache Solr, Hibernate Shards and MySQL – Aleksander Stensby, Jaran Nilsen
  5. The not so dark art of Performance Tuning — Dan Hardiker, Kirk Pepperdine
  6. Creating modular applications with Apache Aries and OSGi — Alasdair Nottingham
  7. Java 7 NIO.2: The I/O API for Future — Masoud Kalali -> I am Speaking here
  8. The evolution of data grids, from local caching to distributed computing – Bjørn Vidar Bøe

9th of September:

  1. NOSQL with Java — Aslak Hellesøy
  2. CouchDB and the web — Knut O. Hellan
  3. Decision Making in Software Teams — Tim Berglund
  4. A Practical Introduction to Apache Buildr — Alex Boisvert
  5. Architecture Determines Performance — Randy Stafford
  6. Surviving Java Maintenance – A mini field manual — Mårten Haglind
  7. Using the latest Java Persistence API 2.0 features — Arun Gupta

JavaZone 2010, Oslo

Introducing OpenESB from development to administration and management.

The OpenESB project was initiated by Sun Microsystems to develop and deliver a high-performance, feature-rich implementation of Java Business Integration (JBI) under an open source friendly license. The basic task of a JBI implementation is connecting different types of resources and applications together in a standard and non-intrusive way. The basic building blocks of an ESB include the Bus, a message driven engine with transition paths that deliver each message to its designated binding components. These binding components are the second main set of artifacts an ESB has; their main task is receiving messages in the Bus's standard format and delivering them to other software in a format that the particular software can use. The third set of artifacts is service engines, which can relate the Bus to other types of containers like Java EE containers, transform messages before they enter the Bus, and perform routing-related activities over the Bus messages.
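As a toy illustration of these building blocks (the class and method names below are invented for this sketch, not a real JBI API), the Bus acts as a message driven engine that routes each message along a transition path to the binding component registered for the target endpoint:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class MiniBus {
    // endpoint name -> binding component that delivers to the external system
    private final Map<String, Consumer<String>> endpoints = new HashMap<>();

    void register(String endpoint, Consumer<String> bindingComponent) {
        endpoints.put(endpoint, bindingComponent);
    }

    // The message driven engine: pick the transition path to the
    // designated binding component and hand the message over.
    void route(String endpoint, String normalizedMessage) {
        Consumer<String> bc = endpoints.get(endpoint);
        if (bc == null) throw new IllegalArgumentException("no endpoint: " + endpoint);
        bc.accept(normalizedMessage);
    }

    public static void main(String[] args) {
        MiniBus bus = new MiniBus();
        StringBuilder delivered = new StringBuilder();
        bus.register("email", msg -> delivered.append("EMAIL:").append(msg));
        bus.route("email", "<msg>hi</msg>");
        System.out.println(delivered); // EMAIL:<msg>hi</msg>
    }
}
```

A real bus would of course also normalize messages, support transactions, routing rules, and so on; this only shows the routing idea.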

The main concept behind JBI has been around for quite some time, but in recent years several factors helped this concept evolve further: first, the increasing integration problems; second, the level of SOA acceptance among companies; and finally, the introduction of the JSR-208 specification by the JCP, which provides a common interfacing mechanism for JBI artifacts.

The increasing problem is nothing but the urge to integrate different software running in different departments of the same enterprise, or across partner enterprises. An ESB implementation can be a non-intrusive enabler for integrating such software with no changes, or only a very small amount of changes, in the software itself.

The concept which helped the ESB become well known and popular is Service Oriented Architecture (SOA), in which an ESB can play the enabler role because of its modular and loosely coupled nature.

1 Introducing OpenESB

The OpenESB project was initiated in 2005 when Sun Microsystems released a portion of its ESB products under the CDDL license. Sun Microsystems, the user community, and some third party companies are working together on the OpenESB project. However, the main manpower and financial support comes from Sun Microsystems.

The project goal is the development and delivery of a high-performance, feature-rich, and modular JBI compliant ESB (the runtime) together with all the development-phase tools that accelerate and facilitate developing applications based on OpenESB capabilities and features.

The runtime components use GlassFish application server’s components, like the Metro web services stack, the CLI and administration console infrastructure, and so on. OpenESB uses NetBeans IDE as the foundation for providing development and design time artifacts for its runtime components. Design time functionalities are provided in a modular manner on top of the NetBeans modularity system.

1.1 Introducing JBI

JBI defines a specification for assembling integration components to create integration solutions, enabling different pieces of software (external systems) to actively cooperate on providing a solution based on the separate capabilities each of them provides on its own. These integration components are plugged into a JBI environment and can provide or consume messages, which leads to providing or consuming a service through the shared communication channel. The shared communication channel can route messages between different components, and it can also transform the messages in a pre-defined or rule-based way. In addition to transformation, other base services are provided by the JBI environment. We call this environment the Service Bus or Enterprise Service Bus.

Based on the JBI specification and architecture, a component is the main part of a JBI implementation that users and developers should understand. These components can act as consumers or providers. When a component acts as a consumer, it receives a message from the service bus in a normalized, standard format and delivers it to the external system in a format that the external system understands. When a component acts as a provider to the service bus, it receives a message in the format produced by the external system and converts it to normalized XML before sending it into the service bus. We call these components binding components because they bind an external system to the service bus.
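To make the two directions concrete, here is a minimal sketch (invented names, not the javax.jbi API) of a binding component converting between an external format and the bus's normalized XML:

```java
public class BindingComponentSketch {
    // Provider direction: wrap the external system's raw payload in a
    // hypothetical normalized XML envelope before it enters the bus.
    static String normalize(String externalPayload) {
        return "<message><payload>" + externalPayload + "</payload></message>";
    }

    // Consumer direction: strip the envelope and hand the external system
    // a payload in the format it understands.
    static String denormalize(String normalizedXml) {
        int start = normalizedXml.indexOf("<payload>") + "<payload>".length();
        int end = normalizedXml.indexOf("</payload>");
        return normalizedXml.substring(start, end);
    }

    public static void main(String[] args) {
        String onBus = normalize("SUBJECT=hello");
        System.out.println(denormalize(onBus)); // round-trips to SUBJECT=hello
    }
}
```

A real binding component would build a proper normalized message with metadata and attachments; the point is only that the format inside the bus is standard XML while the format outside is external-system oriented.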

The JBI environment

JBI defines two types of components which JBI developers can develop and plug into a JBI container. Only these two types are standardized; certainly each vendor has many internal, non-standard interfaces and components, and a specific component model for its internal system. The two types of components which can be plugged into a JBI environment are:

  • Service engines provide logic in the environment, such as transformation, scripting language support, ETL and CEP support, BPEL, and so on.

  • Binding components are adapters which connect an external system to the service bus. For example, a JMS binding component can connect a JMS message broker to the service bus, and a mail server component can bind an emailing system to the bus, making it possible for other components to send and receive emails through the bus.

Figure 1 shows the overall architecture of OpenESB from the specification point of view. The architecture may differ from implementation to implementation.

Figure 1 The OpenESB architectural diagram from JBI perspective

The most important task in the JBI environment is message delivery, and content delivery usually introduces requirements like transaction support, naming support, and so on. The internal square illustrates the overall OpenESB environment, while the external boxes represent systems external to the OpenESB runtime. The message format inside the square is XML, while the format between the binding components and an external system is specific to that external system. Every component, whether a service engine or a binding component, registers the services it provides by a WSDL descriptor.

1.2 OpenESB versions and editions

OpenESB, like GlassFish, has several concurrent versions which are either in active development, stable, or in maintenance mode. OpenESB version 1 is in maintenance mode and the only updates are possible bug fixes; no new features will be added to this version.

OpenESB version 2 is the current active version, which is under development. OpenESB version 3, named Project Fuji, is under active design and development. Through Project Fuji, Sun Microsystems will develop the next version of OpenESB, fully based on a modular system similar to GlassFish version 3.

Like OpenSSO and GlassFish, which have commercially packaged versions supported by Sun Microsystems, there is a commercially packaged and supported package for OpenESB named GlassFish ESB. The only difference between OpenESB and GlassFish ESB is that GlassFish ESB is a package which contains the stable components of OpenESB along with Sun Microsystems support behind it.

If you are looking for the latest bug fixes, features, and components you should go with OpenESB; if you want a stable version then you should go with GlassFish ESB. Both versions are free and you can use them commercially without paying any license fee. The difference is that Sun provides commercial level support for GlassFish ESB packages.

All versions of OpenESB including GlassFish ESB, Project Fuji, and OpenESB development builds are available at

1.3 OpenESB Architecture

OpenESB version 2 closely follows the specification and has no extensibility or pluggability beyond the JBI proposed components. The OpenESB JBI container is tightly integrated with the GlassFish application server to provide the overall running ESB. The integration between GlassFish application server and OpenESB results in an integrated administration console in all three major administration channels, including JMX and AMX, the CLI, and the web based administration console.

The most important parts of an ESB are the message broker and its Web services stack. OpenESB uses the Metro Web services stack, which is proven enough to be used by some other vendors as their web services stack, and OpenMQ as the message broker. OpenMQ can be considered one of the top message brokering systems in terms of performance and capabilities.

1.4 OpenESB supported standards and features

When it comes to listing an ESB's features and supported protocols and formats, we can group them under five different categories: the overall performance, the reliability and scalability of the system, the available service engines, the available binding components, and finally the effectiveness of the administration and management channels.

Certainly there are other factors which we cannot discuss deeply here, factors like the vendor support level, the quality of available components and service engines, available benchmarks which show how well the product works, and so on. But, to some extent, we can discuss the five primary attributes briefly.


Performance

The most important part of an ESB is the messaging system; OpenESB uses OpenMQ as the underlying messaging system. In different benchmarks and use cases OpenMQ has proven to perform very well.

The other important component which affects an ESB's performance is the Web services stack, which is used for the underlying WSDL-SOAP communication. The Metro framework, which forms the underlying Web services stack of OpenESB, is very well featured with outstanding performance; IBM uses some parts of Metro in their JDK, BEA uses the Metro stack for its WebLogic application server, and so on.

Available service engines

The functional features of an ESB depend directly on the functionality of its service engines and binding components. OpenESB has a very rich set of service engines which are developed either directly by Sun Microsystems or by one of the OpenESB project partners. Table 1 shows the list of the most important service engines along with a description of each.

Table 1 Important service engines available through project OpenESB

Service Engine
Description

BPEL SE
A WS-BPEL 2.0 compliant engine that provides business process orchestration capabilities.

IEP SE
An engine for Complex Event Processing (CEP) and Event Stream Processing (ESP).

Java EE SE
Makes the whole Java EE environment available as a JBI component so that EJBs can communicate with other JBI components through the NMR and vice versa.

Worklist Manager SE
A JBI based engine providing task management and human intervention in a business process.

Data Integrator / ETL SE
Performs extract, transform, and load to build data warehouses, or can be used for data migration.

Data Mashup SE
Provides a single view of data from heterogeneous sources, including static web pages and tabular data exposed as web services. Provides options to join/aggregate/clean data, and generate a response in a WebRowSet schema.

XSLT SE
An engine for applying XSLT transformations with basic orchestration capabilities.

All of these service engines have runtime and design time components included. Some of them are not commercially stable enough to be included in GlassFish ESB, in which case we should download and install them manually. The nightly builds are available at

We will discuss the BPEL SE in more detail in subsequent sections.

Available binding components

Binding components provide the functionality required to connect an external system to the service bus. We use two expressions about binding components: a binding component acts as a consumer or as a provider. For example, an email component acting as a consumer sends a message it receives through the service bus to an email address, and when it injects incoming emails into the bus it is acting as a provider.

Table 2 lists important binding components available through the OpenESB components project. These components are either developed by Sun Microsystems or one of its partners. Each binding component has a runtime and a design time part. The runtime part should be deployed into OpenESB while the design time part is a NetBeans module which we should install.

Table 2 Important binding components available through project OpenESB



Binding Component
Description

eMail BC
A binding component for sending and receiving emails

Database BC
Reads messages from and writes messages to a database over JDBC

HTTP BC
A JBI component for sending and receiving messages over HTTP

JMS BC
Receives messages from and sends messages to a remote service through JMS

LDAP BC
Reads messages from and writes messages to an LDAP server

SAP BC
Provides functionality for interacting with a SAP R/3 system within a JBI environment

SIP BC
Integrates media capabilities from communications networks using the SIP protocol (Session Initiation Protocol)

FTP BC
Polls for files on an FTP server, and provides for uploading files to an FTP server

File BC
Polls for files on the file system, and provides for writing files to the file system


The number of binding components reaches 30, covering a variety of standards and protocols. Table 2 lists the most widely used binding components, regardless of whether they are in beta or stable state. You can find a list of all components at The list contains all necessary information, like component state, maintenance mode or deprecated components, and so on.

Administration and management

One important feature of a middleware is the possibility to administer, monitor, and manage it in an effective and performant way. OpenESB follows the same path that GlassFish does and provides very profound administration functionality, including a CLI and a web based administration console. The administration console provides monitoring information for service engines, binding components, and deployed artifacts.

2 Getting OpenESB running

Every big piece of software needs some sort of manual for installing and running the application; the manual includes information about compatible architectures, operating systems, and runtime platforms. In our case we can install OpenESB on any operating system for which a Java 5 runtime environment is available. The supported platforms include Windows, Linux flavors, Solaris flavors, and Mac OS X.

2.1 Downloading and installing

OpenESB is based on GlassFish and therefore it has the same packaging policy and scheme for different operating systems. In our exploration we will use GlassFish ESB version 2, which is available at Make sure that you select the right version for your operating system and underlying architecture.

The GlassFish ESB bundle contains the runtime bits, which consist of JBI components and service engines along with the JBI container itself. The distribution package also contains the development time components, which include NetBeans IDE and additional modules for supporting the bundled JBI runtime components.

The installation process is quite simple; the installer asks for the installation path and configuration information like the GlassFish instance port, administration credentials, and so on. By now you should be very comfortable with the installation process. Before starting it, ensure that you have around 600MB of free space on the partition where you want to install the package.

Installing both GlassFish ESB runtime and NetBeans IDE inside one parent directory is a good practice as we have all development related artifacts in the same place.

2.2 Getting familiar with GlassFish ESB

I assume that we installed GlassFish ESB in a folder named glassfishesb_home; the content of this folder is shown in figure 2.


Figure 2 GlassFish ESB installation directory’s content

The content is very straightforward and clean for such a massive development and deployment system. The glassfish folder includes GlassFish version 2ur2 along with stable versions of the OpenESB container and components, while the netbeans directory contains NetBeans IDE along with all necessary modules to support GlassFish ESB functionality at design time.

We can start the server separately by running the start_glassfish_domain1 script for times when we do not need the development environment running. Otherwise we can start NetBeans IDE by running the start_netbeans script and then let NetBeans start the server when required.

3 Developing and testing a simple application

NetBeans IDE is a very feature rich IDE for developing Java EE applications, and the inclusion of SOA and composite application development functionality targeting OpenESB makes it a perfect development tool for building composite applications and integration components. In this section we create a very simple BPEL based application to show how the overall process works.

3.1 The sample scenario

It is important to understand what we want to build and which features of OpenESB artifacts we want to use to implement the required functionality. We have a portal-like system in which users can add a weather Portlet to see the current weather of their home city. We want to store some statistics which will later help us determine which countries and cities have the largest number of users using the weather Portlet.

To achieve this goal we use two web services: the first one translates the IP address of our client to its location, including country and city, and the second service provides us with the current weather condition. We should somehow weave these services together to achieve the desired result. We also need to log all access to these web services, with details of country, city, and date, in a text file or database table in order to perform the statistical analysis and extract the usage portion for each country and for each city in each country.

We will only use two components, the Database BC and the BPEL SE. With these two components we can achieve a complete solution for our problem, which otherwise could take more time to implement with much less flexibility in terms of applying new and updated business rules.
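Before building it in BPEL, the intended data flow can be sketched in plain Java. The two Function parameters below are hypothetical stand-ins for the IP2Geo and Global Weather web services, and requestLog stands in for the Database BC insert; none of these names come from the actual services:

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class WeatherInfoFlow {
    // Hypothetical record of what the Database BC would insert.
    record LogEntry(LocalDateTime date, String country, String city) {}

    static final List<LogEntry> requestLog = new ArrayList<>();

    // The process: IP -> location, location -> weather, plus a log insert.
    // resolveIp returns {country, city}; getWeather maps a location to a report.
    static String process(String ip,
                          Function<String, String[]> resolveIp,
                          Function<String[], String> getWeather) {
        String[] loc = resolveIp.apply(ip);
        requestLog.add(new LogEntry(LocalDateTime.now(), loc[0], loc[1])); // Database BC step
        return getWeather.apply(loc);
    }

    public static void main(String[] args) {
        String weather = process("8.8.8.8",
                ip -> new String[] {"United States", "Mountain View"},  // stubbed IP2Geo
                loc -> "Sunny in " + loc[1] + ", " + loc[0]);           // stubbed weather
        System.out.println(weather); // Sunny in Mountain View, United States
        System.out.println(requestLog.size());
    }
}
```

The BPEL process we are about to design implements exactly this sequence, with the Flow element letting the weather lookup and the log insert run in parallel.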

3.2 Creating the BPEL Module project

Run NetBeans IDE and switch to the Services view (CTRL+5), expand the Databases node, right click on the Java DB instance, and select Start Server from the context menu. Then expand the Drivers node, right click on the Java DB (Network) node, and select Connect Using.

When the New Database Connection dialog opens, change the attributes of the connection as follows:

  • Database URL: jdbc:derby://;create=true

  • User: APP

  • Password: APP

Click OK and NetBeans will create a new connection to our database. Expand the connection node, right click on the Tables node and select the Create Table item. Create a table named wRequestLog with the fields listed in table 3.

Table 3 The wRequestLog table’s fields

Field Name    Field Type

rdate         TIMESTAMP
country       VARCHAR
city          VARCHAR
The composite application runs inside the container, and for accessing our WeatherInfo database we need to create a connection pool and a data source in the GlassFish application server. Create a connection pool for the above database and then create a data source named jdbc/winfo.

To create the connection pool you can use the following asadmin command.

create-jdbc-connection-pool --datasourceclassname org.apache.derby.jdbc.ClientDataSource --restype javax.sql.XADataSource --property portNumber=1527:password=APP:user=APP:serverName=localhost:databaseName=WeatherInfo WinfoPool

I am sure that you remember how we used to create data sources, but I provide the corresponding asadmin command in case you have just forgotten the spelling ;-).

create-jdbc-resource --connectionpoolid WinfoPool jdbc/winfo

Now we are ready to take one step further and create the project and its related files, like copies of the WSDL documents, the BPEL process file, and so on.

Run the SOA enabled NetBeans IDE and from the main menu select File>New>Project; from the categories select SOA and from the projects list select BPEL Module, click Next, and change the project name to WeatherInfo.

Right click on the project node in the Projects view (CTRL+1) and select New>Other; a window will open. From the categories select XML, from the file types select External WSDL Document, and click Next. We want to create a WSDL from a URL, therefore use as the URL and click Finish. NetBeans will create a reference to the WSDL file along with a local copy of it, which we may edit to tailor it to our requirements. This web service provides us with the location of a given IP address. The IP2Geo web service is provided by CDYNE Corporation and I got a trial license for using the web service in the book.

Follow the same procedure and create a new WSDL file from This web service takes a city name along with the country name and returns the current weather. The Global Weather web service is provided by WebserviceX.

We have two more steps and then we are finished with creating our project’s basic artifacts. The first artifact is a WSDL descriptor which lets us use the Database Binding Component to record all requests, and the second artifact is the WSDL descriptor which describes the input and output types of our provided service, which is our BPEL process.

To create the database binding component, right click on the project node and select New>WSDL Document; the wizard will open and guide you through the process. On the first page of the wizard use the following information:

  • Change the name to DBLOGBC

  • Change the WSDL Type to Concrete WSDL Document; two new attributes will become available.

  • Change the Binding to Database and change the Type to Prepared Statement.

Click Next; on this page we should determine which database connection we want to use, so select the connection URL that we created in the previous step and click Next. In this step we need to enter a valid prepared statement which the binding component will execute against the database. Enter the following snippet as the prepared statement value and then press Discover Parameters.

insert into APP.WREQUESTLOG (rdate,country,city) values (?,?,?)
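The Discover Parameters step works against the positional ? markers of the statement. A rough sketch of that idea (a hypothetical helper, not the actual wizard code) just scans for placeholders outside quoted literals:

```java
import java.util.ArrayList;
import java.util.List;

public class ParamDiscovery {
    // Finds the positions of the positional parameters of a prepared
    // statement, roughly the way a "Discover Parameters" step might:
    // count '?' characters that are not inside a quoted string literal.
    static List<Integer> discoverParameters(String sql) {
        List<Integer> positions = new ArrayList<>();
        boolean inQuote = false;
        for (int i = 0; i < sql.length(); i++) {
            char c = sql.charAt(i);
            if (c == '\'') inQuote = !inQuote;          // toggle on string literals
            else if (c == '?' && !inQuote) positions.add(i);
        }
        return positions;
    }

    public static void main(String[] args) {
        String sql = "insert into APP.WREQUESTLOG (rdate,country,city) values (?,?,?)";
        System.out.println(discoverParameters(sql).size()); // 3 parameters to name
    }
}
```

For our statement the wizard should therefore discover exactly three parameters, which we will name in the next step.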

Change the parameter names according to figure 3, click Next, and then click Finish; we are done with defining a database binding component which can be integrated into the solution to insert our records into the database.


Figure 3 Database binding component’s prepared statement definition

The OpenESB version 2 design time modules do not work very well when naming the attributes and operations. We need to rename them manually to have easier mapping and navigation inside the project artifacts later. Open DBLOGBC.wsdl in the source editor and replace all instances of newuntitled_Operation with log_Operation. Also look for jdbc/__defaultDS and replace it with jdbc/winfo. Depending on your NetBeans and OpenESB design time components the default data source name may differ, or the wizard may even ask you for the data source name, but you can always manually edit the jdbc:address element of the Database BC’s WSDL file to make the configuration correct.

You remember that we changed the parameter names, but your OpenESB design time modules may fail to apply the naming in the XSD file, in which case we will face some ambiguity problems during variable mapping in the next steps. So open DBLOGBC.xsd in the source editor and make the following changes:

  • param1 should change to rDate

  • param2 should change to Country

  • param3 should change to City

Here comes the last step before creating the BPEL process: in this step we define the process communication interface which external clients will use to invoke our process and utilize its service.

Right click on the project node and select New>WSDL Document. When the wizard opens, change the name to wInfoProcess; for the WSDL type select Concrete WSDL Document; for the binding choose SOAP; for the type attribute select RPC Literal; and press Next. Change the input message part name to IPAddress and change the output message part name to weatherInfo.

We are finished with defining all basic artifacts prior to designing the process; now we should just continue with designing the BPEL process itself. To do so, right click on the project node and select New>BPEL Process, change the name to wInfoProcess, and click Finish. NetBeans should now open the BPEL process in the BPEL designer, which is a design time module of OpenESB.

On the right side of the IDE you can see the components palette, and on the left side the project explorer is in place. First let’s create the partner links which connect the BPEL process to the other involved parties, like the external web services and our database binding component. The BPEL designer has three sections: left, right, and the middle section; the left and right sides are used for partner links. Therefore, drag and drop the WSDL files from the project explorer into the design area as listed in table 4. When you drop a WSDL into the BPEL designer's left or right side, a window will open which allows you to change the partner link attributes and characteristics. We accept the default values for all attributes except the partner link names, which are listed in the table.

Table 4 Adding WSDLs to the BPEL process designer

WSDL name
BPEL designer area
Changes required for partner link

IP2Geo WSDL
Right side
Change the name to Positioning

Global Weather WSDL
Right side
Change the name to Weather

DBLOGBC.wsdl
Right side
Change the name to Logger

wInfoProcess.wsdl
Left side
Change the name to WInfoProcess

By now your designer window should be similar to figure 5. Partner links on the left side are meant to be internal to the BPEL process, and those on the right side are external to the BPEL process.

Figure 5 The BPEL designer after adding all partner links.

Now we need to add BPEL elements to the process to implement the business logic that we discussed. From top to bottom, we need to add the following elements to the BPEL designer window. The final view of the designer window after we finish adding elements will be similar to figure 6.

Figure 6 BPEL designer view after adding all necessary elements

Table 5 shows the complete list of all elements which we should add to the designer. To add the elements, we drag and drop them from the Palette on the right side into the middle area of the designer. The designer will show you all possible places for dropping the element by showing an orange circle in each position. To rename an element we can simply select the element and press F2, or double click on the element name. To connect an element to one of the partners, click on the element; an envelope will appear, which you drag to connect the element to the partner link mentioned in table 5.

Table 5 List of required BPEL elements which we should add to the designer window

Element Type                 Element Name            Partner Link

Receive                      GetWeatherInfo          WInfoProcess
Assign                       AssignIP                (none)
Invoke                       InvokeToResolveIP       Positioning
Assign                       AssignCityAndCountry    (none)
Flow (Structured Activity)   (default name)          (none)
Invoke (Inside Flow)         GetWeather              Weather
Invoke (Inside Flow)         LogInformation          Logger
Assign                       AssignWeather           (none)
Reply                        (default name)          WInfoProcess

Now we need to add variables and variable assignments, and then our work with the process and the process designer is finished. In this step we will create all the variables, and then we will move on to assigning these variables to each other. To create the variables:

  • Double click on GetWeatherInfo and for the input variable click Create; accept the defaults and click OK as many times as required to see the designer window again.

  • Double click on InvokeToResolveIP and create both the output and input variables the same way that you created the variable for GetWeatherInfo.

  • Double click on GetWeather, make sure that you select GetWeather as the operation, and create the input and output variables similarly to the previous steps.

  • Double click on LogInformation and create the input and output variables, accepting the defaults for variable names and other attributes.

  • Finally, double click on the Reply element and for the normal response create a variable, accepting the defaults for its name and other attributes.

Now we should be able to assign the variables to each other to make the data flow through the process possible. Thankfully, NetBeans comes with a very nice and easy to use assign-variable editor called the Mapper, which we can use to assign the variables to each other. Double click on the AssignIP element and use the following procedure to assign the IP received in the previous step to the input variable of InvokeToResolveIP.

On the left side of the Mapper window select the Output button to ensure that the view is filtered for output variables, and on the right side select the Input button to ensure that we are looking at input variables.

Expand WInfoProcessOperationIn on the left side and ResolveIPIn on the right side; make sure that you expand them fully. Now from the left side select IPAddress and connect it to ipAddress on the right side. From the Mapper's top menu bar select String and add a String Literal to the middle section of the Mapper window. Change its value to 0 and connect it to the licenseKey variable on the right side. Now we are finished with the first assign element. Switch back to the designer view to do the mapping for the AssignCityAndCountry element.

Double click on the AssignCityAndCountry element and assign the variables as follows:

Let’s assign the city name and country name to the GetWeather invoke element. To do so, from the left side expand ResolveIPOut and from the right side expand GetWeatherIn. From the left side, assign the city and country parameters of ResolveIPOut to the similar parameters of GetWeatherIn on the right side. Now we need to assign the same information to our database log operation; therefore, expand Log_OperationIn and assign the city and country from the left side to its parameters. We also have an rDate parameter, for which we should use a Current Date & Time predicate. So, from the top menu bar select Date And Time and drop a Current Date & Time predicate into the middle section of the Mapper window. Connect the new predicate to the rDate variable of Log_OperationIn. We are finished with the second variable assignment and one last item remains to complete. Figure 7 shows the Mapper window after assigning all variables.


Figure 7 Mapper window with complete AssignCityAndCountry assignment

One more variable assignment and we are finished with the BPEL process development. Double click on the AssignWeather element and assign the GetWeatherOut parameter to the WInfoProcessOperationOut parameter. The name looks odd and you may think that it is an output variable, but believe me, it is an input variable.

3.2 Creating the Composite Application project

A composite application is the project type that we use to deploy into the JBI container. Select File > New Project; from the categories select SOA, from Projects select Composite Application, and click Next. Change the project name to WinfoCA and click Finish. The composite application is created and is ready for adding JBI modules.

Right click on the project node and select Add JBI Module, then select the WeatherInfo project. The CASA editor opens and shows the BPEL module without any relations. Right click on the WinfoCA project node and select Clean and Build. NetBeans will compile the composite application and its included JBI modules. When we issue a build command, NetBeans refreshes the CASA editor, and we can add new modules, change data flows between components, and so on.

3.4 Deploying and testing the sample project

Deploying and testing a composite application is very easy using the well-integrated NetBeans, GlassFish and OpenESB. To deploy the application, right click on the WinfoCA node and select Deploy. NetBeans will deploy the assembly to the OpenESB runtime. To create a test case for the composite application, expand the WinfoCA node and right click on the Test node. A wizard starts that helps us generate the test case.

On the first page the wizard asks for the test case name; it can be WinfoTest. In the second step it asks us to choose which WSDL file we want to test: expand the WeatherInfo node, select winfoprocess.wsdl and press Next. Now we should select which operation we want to test; select WInfoProcessOperation and click Finish. Now we have a test case created: expand the test case in the project view and double click on the input node to edit the test case input values. Double clicking the input node opens an XML file in the editor. The XML file is a SOAP message whose parameter values we should change to suit our test requirements. Therefore, change the value of the IPAddress element to one of the Google IP addresses. Save the file and we are finished with creating the test case.
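The generated input file will look roughly like the following sketch. The namespace URI and the exact element names here are assumptions; NetBeans generates the real ones from winfoprocess.wsdl, so take them from the generated file and only replace the IP address value:

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:win="http://j2ee.netbeans.org/wsdl/winfoprocess">
    <soapenv:Body>
        <!-- hypothetical element names; the generated file has the real ones -->
        <win:WInfoProcessOperationIn>
            <win:IPAddress>enter-a-google-ip-here</win:IPAddress>
        </win:WInfoProcessOperationIn>
    </soapenv:Body>
</soapenv:Envelope>
```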

Right click on WinfoTest and select Run. The IDE will invoke the operation defined in the WSDL file, passing it the SOAP message we created. The result will then appear under the output node of the test case. The output should be information about the current weather in Mountain View in the United States.

You can debug a BPEL process, and generally any JBI application, using NetBeans. The IDE allows you to view the variables during the different steps of BPEL execution. The simplest way to debug the BPEL module is to add the required breakpoints and then execute the test in debug mode.

3.5 OpenESB administration from within NetBeans IDE

The NetBeans IDE is very well integrated with OpenESB, not only for developing and testing applications but also for administration and monitoring purposes. All of these functionalities live inside the NetBeans integration adapter for OpenESB.

Press CTRL+5 to open the Services tab and then expand the Servers node, expand the OpenESB server instance and finally expand the JBI node. Now your server view should be similar to figure 8. Now press CTRL+SHIFT+7 to open the Properties window and then select the JBI node. Conveniently, you can change the configuration of the JBI container from within the IDE and continue your work without leaving it and switching between different windows.


Figure 8 The OpenESB node in NetBeans Server view

Expand the Service Engines node and select the sun-bpel-engine node; the Properties view shows the entire configuration and the read-only attributes of the BPEL engine. You can enable monitoring by checking the corresponding checkbox, or enable the persistence feature to have your BPEL processes recovered after any possible crash.

Right click on the sun-bpel-engine node and explore what operations you can perform on the engine from the context menu. We can start, stop, upgrade, uninstall and perform any other lifecycle-related task using the context menu. We can view the different endpoints used in the BPEL processes deployed in the BPEL engine using the Show Endpoints Statistics menu item. This menu item helps us understand whether our services were called during the test period and, if they were called, what the response semantics and quality were.

4 OpenESB administration

The same administration channels which we saw for GlassFish are available for OpenESB and GlassFish ESB. The OpenESB CLI commands are integrated with the GlassFish CLI utility and follow the same principles, and the web-based administration console uses the GlassFish administration console to present the required administration capabilities.

4.1 The CLI utility

OpenESB CLI administration is completely integrated with GlassFish administration and has the same set of characteristics. The only major difference is the lack of some commands, as GlassFish ESB version 2 is based on GlassFish version 2, and any new functionality included in GlassFish version 3 is not present in version 2.

Service assembly administration commands

The JBI container, like the Java EE containers, has its own application type, which is called a JBI service assembly or simply an assembly. Usually the overall procedure of creating and deploying an assembly is done using a build system or the IDE, but there are some commands in the OpenESB CLI which let us administer the assemblies. All of the OpenESB CLI commands are remote commands, with all the characteristics of a remote command, like the default target server, and so on.

Deploying and undeploying a service assembly are the two most basic tasks related to OpenESB administration. In the previous step we developed and deployed an application using NetBeans; now let's undeploy the application and then deploy it again using the CLI commands.

To undeploy the application we can use the undeploy-jbi-service-assembly command. For example, to undeploy the WinfoCA composite application we can use:

asadmin> undeploy-jbi-service-assembly WinfoCA

Now we can deploy the application archive using the deploy-jbi-service-assembly command. NetBeans puts the built packages into a directory named dist inside the project directory. Let’s deploy the WinfoCA application from that particular directory.

asadmin> deploy-jbi-service-assembly /opt/article/WinfoCA/dist/

This command deploys the application to the default target server with default attributes, like the application name, and so on.

To list all deployed service assemblies we can use the list-jbi-service-assemblies command. Now that we have one application deployed in the container, this command should show a list containing WinfoCA, similar to:

asadmin> list-jbi-service-assemblies
WinfoCA

Command list-jbi-service-assemblies executed successfully.

Each service assembly contains one or more service units, which connect services, binding components, and service engines along with any deployment artifacts, like XSLT documents, BPEL processes and so on. To view these service units we can use the show-jbi-service-assembly command. Figure 9 shows how we can use the command to view detailed information for the WinfoCA composite application.

Figure 9 Viewing the detailed information for an assembly

We can stop, start and forcefully shutdown a service assembly when required by using the OpenESB provided commands.

  • To stop the WinfoCA service assembly: stop-jbi-service-assembly WinfoCA

  • To start the WinfoCA service assembly if it is stopped: start-jbi-service-assembly WinfoCA

  • To forcefully shut down the WinfoCA service assembly if required: shut-down-jbi-service-assembly WinfoCA

There are more JBI service assembly commands in the OpenESB CLI, which you can find using the ./asadmin help command.

Components administration commands

There are two major types of JBI components: service engines and binding components. These two types are accompanied by a third type, named shared library, which contains common library classes for several components. Table 6 shows the important commands in this set along with a small explanation of each.

Table 6 Important JBI components administration commands





start-jbi-component, stop-jbi-component
These commands can be used to start and stop JBI components. Stopping a binding component results in any related service units stopping as well.

list-jbi-binding-components, list-jbi-service-engines
Show a list of the deployed JBI components of each type.

show-jbi-binding-component, show-jbi-service-engine
Show detailed information about a given JBI component which is deployed in the target container.

install-jbi-component
Installs different JBI components into the target container.

uninstall-jbi-component
Uninstalls a given JBI component from the target server.

shut-down-jbi-component
Shuts down the given component. Any service unit deployed on the target component will shut down consequently.

Although there are many other commands which an administrator may need during daily tasks, we have covered the most important commands in this section. In the next section we will take a look at the web-based administration console features for OpenESB.

4.2 Administration console

One good point about GlassFish and its companion products is the unified administration channels, including the CLI, JMX and the web-based administration console, which give us a unified look and feel and a familiar environment to administer the whole infrastructure.

All JBI administration capabilities are listed under the JBI node in the navigation tree. When you expand all the nodes you see something similar to figure 10. As you can see in figure 10, the JBI node also helps us get a big-picture view of our service units' deployment on the different service engines and binding components.


Figure 10 OpenESB administration node in the GlassFish administration console

The JBI node itself allows us to configure container settings like logging, general statistics, and general configuration related to JBI artifact installation. The subsequent nodes let us administer the deployed applications, binding components, service engines and shared libraries.

Service assembly administration

The administration console provides deploy, undeploy, start, stop and shutdown operations for service assemblies. We can also view monitoring information for the deployed service assemblies.

For each service assembly we may have one or more service units; we can view the service units included in a service assembly along with some monitoring information about them.

JBI components administration

Installing, uninstalling, starting, stopping and shutting down a JBI component, which can be either a service engine or a binding component, is provided in the Components node. The administration console also provides capabilities which allow us to change a component's configuration and its loggers' logging levels, view the descriptor file for the component, and, most importantly, view detailed monitoring information about requests and request processing by each component.

We can view which service units from which service assemblies are deployed on a component and navigate to the service unit or service assembly configuration.

JBI shared libraries administration

For the shared libraries, we can install, uninstall and view the components that depend on each shared library, to ensure that uninstalling the library will not cause unexpected results in other components.

4.3 Installing new components

The OpenESB project provides many components out of the box, but there are tens of additional components available on the OpenESB website. From the components page we can navigate to each component's wiki page, download the required artifacts and install them into OpenESB and NetBeans. Another ongoing effort lets developers download one single installer for each component; the installer installs all required artifacts, like shared libraries, runtime engines and design-time NetBeans modules. The latest builds of the installers for all components are available on the project site.

Installing the components using these installers is very straightforward: the installer asks for the NetBeans and OpenESB (GlassFish) directories and completes the installation.

5 Summary

In this article we discussed almost all the basics of OpenESB administration and application development in crash-course mode. We saw how JBI and OpenESB are related, and how OpenESB and GlassFish ESB are similar products with different naming. We developed a sample JBI service assembly which utilizes several binding components and service engines, and finally we deployed the application into OpenESB and performed functional testing using the NetBeans IDE.

We discussed how we can install and administer OpenESB, covered the CLI and the administration console for managing it, and learned that the OpenESB administration channels use the same rules and look and feel as the GlassFish administration console and CLI.

Using Spring Security to enforce authentication and authorization on Spring Remoting Services Invoked from a Java SE client…

The Spring framework is one of the biggest and most comprehensive frameworks the Java community can utilize to cover most of the end-to-end requirements of a software system when it comes to implementation.
Spring Security and Spring Remoting are two important parts of the framework: the former covers security in a declarative way, and the latter lets us invoke a Spring bean's methods remotely through a local proxy.

In this entry I will show you how we can use Spring Security to secure a Spring bean exposed over HTTP and invoke its secured methods from a standalone client. In our sample we develop a Spring service which returns the list of roles assigned to the currently authenticated user. I will develop a simple web application containing a secured Spring service, and then a simple Java SE client to invoke that secured service.

To develop the service we need a service interface, which is as follows:

package springhttp;

public interface SecurityServiceIF {
    public String[] getRoles();
}

Then we need an implementation of this interface, which does the actual job of extracting the roles of the currently authenticated user.

package springhttp;

import java.util.Collection;
import java.util.Iterator;

// The security classes live under org.springframework.security.core in
// Spring Security 3.x; in 2.x the package names differ slightly.
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;

public class SecurityService implements SecurityServiceIF {
    public String[] getRoles() {
        Collection<GrantedAuthority> col =
                SecurityContextHolder.getContext().getAuthentication().getAuthorities();
        String[] roles = new String[col.size()];
        int i = 0;
        for (Iterator<GrantedAuthority> it = col.iterator(); it.hasNext();) {
            GrantedAuthority grantedAuthority = it.next();
            roles[i++] = grantedAuthority.getAuthority();
        }
        return roles;
    }
}

Now we should define this remote service in a descriptor file. Here we use the remote-servlet.xml file to describe the service; the file is placed inside the WEB-INF directory of the web application. The file content is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean name="/securityService"
          class="org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter">
        <property name="serviceInterface" value="springhttp.SecurityServiceIF" />
        <property name="service" ref="securityService" />
    </bean>

</beans>
We are simply using /securityService as the relative URL to expose our SecurityService implementation.

The next configuration file is the Spring application context, in which we define all of our beans, the bean wiring and the configuration. The file is named applicationContext.xml and is located inside the WEB-INF directory.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Spring Security 2.0.x era class names -->
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:security="http://www.springframework.org/schema/security"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/security
http://www.springframework.org/schema/security/spring-security.xsd">

<bean id="securityService" class="springhttp.SecurityService" />

<security:http realm="SecRemoting">
<security:intercept-url pattern="/securityService" access="ROLE_ADMIN" />
</security:http>

<security:authentication-manager alias="authenticationManager">
<security:user-service id="uds">
<security:user name="jimi" password="jimi"
authorities="ROLE_USER, ROLE_MANAGER" />
<security:user name="bob" password="bob"
authorities="ROLE_USER,ROLE_ADMIN" />
</security:user-service>
</security:authentication-manager>

<bean id="digestProcessingFilter"
class="org.springframework.security.ui.digestauth.DigestProcessingFilter">
<property name="userDetailsService" ref="uds" />
<property name="authenticationEntryPoint"
ref="digestProcessingFilterEntryPoint" />
</bean>

<bean id="digestProcessingFilterEntryPoint"
class="org.springframework.security.ui.digestauth.DigestProcessingFilterEntryPoint">
<property name="realmName" value="ThisIsTheDigestRealm" />
<property name="key" value="acegi" />
<property name="nonceValiditySeconds" value="10" />
</bean>

<bean id="springSecurityFilterChain"
class="org.springframework.security.util.FilterChainProxy">
<security:filter-chain-map path-type="ant">
<security:filter-chain pattern="/**"
filters="httpSessionContextIntegrationFilter,digestProcessingFilter,exceptionTranslationFilter,filterSecurityInterceptor" />
</security:filter-chain-map>
</bean>

<bean id="httpSessionContextIntegrationFilter"
class="org.springframework.security.context.HttpSessionContextIntegrationFilter" />

<bean id="accessDecisionManager"
class="org.springframework.security.vote.AffirmativeBased">
<property name="decisionVoters">
<list>
<bean class="org.springframework.security.vote.RoleVoter" />
<bean class="org.springframework.security.vote.AuthenticatedVoter" />
</list>
</property>
</bean>

<bean id="exceptionTranslationFilter"
class="org.springframework.security.ui.ExceptionTranslationFilter">
<property name="authenticationEntryPoint"
ref="digestProcessingFilterEntryPoint" />
</bean>

</beans>
Most of the above code is default Spring configuration, except for the following parts, which I am going to explain in more detail. The first snippet defines the bean itself:

<bean id="securityService" class="springhttp.SecurityService" />

The second part is where we specify the security restrictions on the application itself. We are instructing Spring Security to only allow a specific role to invoke the securityService:

<security:http realm="SecRemoting">
<security:intercept-url pattern="/securityService" access="ROLE_ADMIN" />
</security:http>

The third part is where we define the identity repository in which our users and role assignments are stored.

<security:authentication-manager alias="authenticationManager">
<security:user-service id="uds">
<security:user name="jimi" password="jimi"
authorities="ROLE_USER, ROLE_MANAGER" />
<security:user name="bob" password="bob"
authorities="ROLE_USER,ROLE_ADMIN" />
</security:user-service>
</security:authentication-manager>

As you can see, we are using the simple in-memory user service; you may configure an LDAP, JDBC or custom user service in a production environment. All other parts of applicationContext.xml are Spring Security filter definitions. You can find an explanation of any of them in the Spring Security documentation or by googling for it. To enforce security restrictions on the SecurityService for any local invocation as well, we can simply change the first discussed snippet as follows, so that only the listed roles can invoke the service locally.

<bean id="securityService" class="springhttp.SecurityService">
<security:intercept-methods>
<security:protect method="springhttp.SecurityService.getRoles" access="ROLE_MANAGER" />
</security:intercept-methods>
</bean>
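As a side note on what the digest entry point demands from clients: with HTTP Digest authentication the client never sends the raw password, it answers an RFC 2617 challenge instead. The following self-contained sketch (the nonce, method and URI values are made-up examples) shows how such a digest response is computed:

```java
import java.security.MessageDigest;

public class DigestSketch {

    // Hex-encoded MD5, the hash HTTP Digest authentication (RFC 2617) is built on
    static String md5Hex(String s) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(s.getBytes("UTF-8"))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // response = MD5( MD5(user:realm:password) ":" nonce ":" MD5(method:uri) )
    static String digestResponse(String user, String realm, String password,
                                 String nonce, String method, String uri) throws Exception {
        String ha1 = md5Hex(user + ":" + realm + ":" + password);
        String ha2 = md5Hex(method + ":" + uri);
        return md5Hex(ha1 + ":" + nonce + ":" + ha2);
    }

    public static void main(String[] args) throws Exception {
        // bob/bob against the ThisIsTheDigestRealm realm configured above
        System.out.println(digestResponse("bob", "ThisIsTheDigestRealm", "bob",
                "c29tZU5vbmNl", "POST", "/sremoting/securityService"));
    }
}
```

The server-side digest filter performs the same computation with the stored credentials and compares the result with the response header the client sent, which is why the nonce validity window configured above matters.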

Now that we have all the Spring configuration files in place, we need to add some elements to web.xml in order to bootstrap the Spring framework. The following changes need to be included in the web.xml file.
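The exact web.xml content is not reproduced here, but a minimal sketch contains the Spring context listener, the security filter chain and a DispatcherServlet. The servlet name remote is an assumption, chosen so that Spring picks up remote-servlet.xml by its naming convention:

```xml
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

<servlet>
    <servlet-name>remote</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>remote</servlet-name>
    <url-pattern>/*</url-pattern>
</servlet-mapping>
```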


Now that the server part is done, we can develop the client application to invoke the service we developed. The code for the client class is as follows:

package client;

// Spring core and Spring Security imports; the security class locations
// assume Spring Security 3.x and differ slightly in 2.x
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.core.context.SecurityContextImpl;

public class Main {
    public static void main(final String[] arguments) {
        final ApplicationContext context =
                new ClassPathXmlApplicationContext("spring-http-client-config.xml");
        String user = "bob";
        String pw = "bob";
        SecurityContextImpl sc = new SecurityContextImpl();
        Authentication auth = new UsernamePasswordAuthenticationToken(user, pw);
        sc.setAuthentication(auth);
        SecurityContextHolder.setContext(sc);

        String[] roles = ((SecurityServiceIF) context.getBean("securityService")).getRoles();
        for (int i = 0; i < roles.length; i++) {
            System.out.println("Role:" + roles[i]);
        }
    }
}

As you can see, we initialize the application context using an XML file; we will discuss that XML file in a few minutes. After the application context initialization we create a SecurityContextImpl and a UsernamePasswordAuthenticationToken, pass the token to the security context, and finally use this security context to pass the credentials on to the server for authentication and further authorization.

Finally, we invoke the getRoles() method of our SecurityService, which returns the set of roles assigned to the currently authenticated user, and print the roles to the default output stream.

Now, the spring-http-client-config.xml content:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="securityService"
          class="org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean">
        <property name="serviceUrl" value="http://localhost:8084/sremoting/securityService" />
        <property name="serviceInterface" value="client.SecurityServiceIF" />
        <property name="httpInvokerRequestExecutor">
            <!-- an executor that sends the client-side SecurityContext credentials along -->
            <bean class="org.springframework.security.context.httpinvoker.AuthenticationSimpleHttpInvokerRequestExecutor" />
        </property>
    </bean>

</beans>
As you can see, we are just setting some properties, like the serviceUrl, the serviceInterface, the httpInvokerRequestExecutor and so on. The only thing you may need to change to run the sample is the serviceUrl.
You can download the complete sample code from here. There are two NetBeans projects, a web application and a Java SE one. You may need to add the Spring libraries to the projects before any other attempt.

The SecurityService implementation is borrowed from Elvis's weblog.

GlassFish Modularity System: how to extend the GlassFish CLI and Web Administration Console (Part I, the architecture)

Modularity is an essential design and implementation consideration which every software architect and designer should keep in mind in order to get software that is easy to develop, maintain and extend.

GlassFish is an application server which benefits highly from a modularity system to provide different levels of functionality for different deployments and case studies. GlassFish fully supports the Java EE profiles, so it provides a lot of features to suit different case studies and different types of use cases. Every deployment and case study requires only a subset of these functionalities to be provided, administered and maintained, so GlassFish users, application developers and the GlassFish development team can all benefit highly from modularity, which reduces the overall costs of development, maintenance, administration and management for each deployment type that GlassFish supports.

Looking from the functionality point of view, GlassFish provides extension points for extending the administration console, the CLI, the containers and the monitoring capabilities of the application server in a non-intrusive and modular way.

From the development point of view, GlassFish uses OSGI for module lifecycle management, while it uses HK2 for service-layer functionalities like dependency injection, service instance management, service load and unload sequences, and so on.

Looking from the integration point of view, GlassFish's modular architecture provides many benefits at different levels of integration. In the bigger picture, GlassFish can be deployed into a larger OSGI container which, for example, may be running your enterprise application. GlassFish also has many advanced subsystems, like its monitoring infrastructure, which can be used in any enterprise-level application for extensible monitoring; thanks to GlassFish modularity, these capabilities can easily be extracted from GlassFish without any classpath-dependency headaches or versioning conflicts.

GlassFish Modularity

GlassFish supports modularity by providing extension points and the required SPIs to add new functionality to the administration console, the CLI and monitoring, plus the possibility to develop new containers to host new types of applications.

Not only does the modular architecture provide easy extensibility, it also provides better performance and much faster startup and shutdown sequences, as modules are loaded only when they are referenced by another module, and not during startup. For example, the Ruby container will not start unless a Ruby on Rails application gets deployed into the application server.

GlassFish's module system uses a combination of two module systems: the well-known OSGI, and HK2 as an early implementation of the Java Module System (JSR-277). OSGI provides the module and module lifecycle layers, letting the application server fit into bigger deployments which use an OSGI framework. It also makes it possible to benefit from the well-designed and well-tested modularity and lifecycle management techniques included in OSGI. HK2 provides the service-layer functionality and guarantees a smooth migration to the Java SE 7 module system (JSR-277). To put it simply, HK2 is the framework which we use to develop the Java classes inside our modules that perform the main tasks.

We talked about GlassFish modules, but these modules need another entity to load and unload them and feed them with the responsibilities they were designed to accept. The entity which takes care of loading and unloading modules is the GlassFish kernel, which is itself an OSGI bundle that implements the kernel services.

When GlassFish starts, either normally using its internal OSGI framework or using an external OSGI framework, it first loads the GlassFish kernel and checks for all available services by looking for implementations of the different contracts in the class path. The kernel will only load services which are necessary during startup, or services which are referenced by another service as one of its dependencies.

Container services only start when an application gets deployed into the container; administration console extensions load as soon as the administration console loads; CLI commands load on demand, so the system does not preload all commands when asadmin starts; and monitoring modules start when a client binds to them.
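This on-demand loading can be pictured with a toy registry; this is purely an illustration of the idea, not the real HK2 API. Each service is registered as a lazy factory and only instantiated the first time something looks it up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy illustration of on-demand service loading; the real kernel uses HK2 contracts.
public class LazyKernel {

    private final Map<Class<?>, Supplier<?>> factories = new HashMap<>();
    private final Map<Class<?>, Object> instances = new HashMap<>();

    // Registering only records a factory; nothing is instantiated yet.
    public <T> void register(Class<T> contract, Supplier<? extends T> factory) {
        factories.put(contract, factory);
    }

    // The service is created the first time another party references it,
    // then cached so later lookups get the same instance.
    public <T> T lookup(Class<T> contract) {
        Object instance = instances.get(contract);
        if (instance == null) {
            instance = factories.get(contract).get();
            instances.put(contract, instance);
        }
        return contract.cast(instance);
    }

    public static void main(String[] args) {
        LazyKernel kernel = new LazyKernel();
        kernel.register(String.class, () -> "ruby-container started");
        // Only this lookup triggers the actual creation of the "container" service.
        System.out.println(kernel.lookup(String.class));
    }
}
```

The same shape explains why a container module costs nothing at startup: until an application of its type is deployed, its factory is never invoked.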

GlassFish's module system does not only apply to application-server-specific capabilities like administration console extensions; it also applies to the Java EE specification implementations. GlassFish uses different OSGI modules for the different Java EE specifications, like Servlet 3 and JSF 2. These specifications are bundled as OSGI modules to make it easier to update the system when a new release of a specification becomes available.

Different GlassFish modules fit into different categories around the GlassFish kernel. This categorization of modules is the result of modules implementing the GlassFish-provided interfaces for the different extensibility points. We will discuss the extension points in more detail in section 3.



New features added to GlassFish using the module system do not differ from GlassFish's out-of-the-box features, because new modules fit completely into the overall set of provided features.

All extensibility points in GlassFish are designed to ensure that adding new types of containers, for hosting programming languages and frameworks other than Java, is easy to cope with. Every container available for the GlassFish application server adds its own set of CLI commands, administration console pages and navigation nodes, and its own monitoring modules to measure its required metrics.


2 Introducing OSGI 

OSGI is an extensive framework introduced to add modularity capabilities to Java before Java SE 7 ships. Essentially, OSGI is a composition of several layers of functionality built on top of each other to free Java developers from the headaches of class loader complexity, dynamic pluggability, library versioning, code visibility and dependency management, and to provide a publish, find and bind service model to decouple bundles. Layering OSGI gives us something similar to figure 1.

Figure 1 OSGI layers. GlassFish uses all except the service layer. An OSGI-based application may utilize all or some of the layers, in addition to the OS and Java platform which OSGI runs on.

OSGI runs on handheld devices with CDC profile support as well as on Java SE, so a variety of operating systems and devices can benefit from OSGI. Over this minimal platform, the OSGI layers march one over the other, and based on the principles of layered architecture, each layer can only see the layer below it, not the layer above.

The bundle, or module, layer is closest to the Java SE platform. OSGI bundles are JAR files with some additional metadata; this metadata generally defines the module's required and provided interfaces, and it does this task in a very effective manner. OSGI bundles provide 7 definitions to help with dependencies and versioning. These definitions include:

       Multi-version support: several versions of the same bundle can exist in the framework, and dependent bundles can use the versions that satisfy their requirements.

       Bundle fragments: allows bundle content to be extended; bundles can merge to form one bundle and expose unified export and import attributes.

       Bundle dependencies: allows for tight coupling of bundles when required; a bundle can import everything that another, specific bundle exports, and re-exporting and split packages are allowed.

       Explicit code boundaries and dependencies: bundles can explicitly export their internal packages or declare dependencies on external packages.

       Arbitrary export/import attributes for more control: exporters can attach arbitrary attributes to their exports; importers can match against these arbitrary attributes when they look for the required package and before importing.

       Sophisticated class space consistency model

       Package filtering for fine-grained class visibility: Exporters may declare that certain classes are included/ excluded from the exported package.

All of these definitions are included in the MANIFEST.MF file, an easy-to-read-and-create text file located inside the META-INF directory of the JAR file. Listing 4 shows a sample MANIFEST.MF file.

Listing 4 A sample MANIFEST.MF file which shows some OSGI module definitions

Bundle-ManifestVersion: 2                                       #1
Bundle-SymbolicName: gfbook.samples.chapter12                    #2
Bundle-Version: 1.1.0                                            #2
Bundle-Activator: gfbook.samples.chapterActivator             #3 
Bundle-ClassPath: .,gfbook/lib/juice.jar                          #4
Bundle-NativeCode:                                                #5; osname=Linux; processor=x86,                        #5
serialport.dll; osname=Windows 98; processor=x86                   #5
Import-Package: gfbook.samples.chapter7; version="[1.0.0,1.1.0)";  #6
resolution:="optional"                                             #6
Export-Package:                                                    #7
gfbook.samples.chapterservices.monitoring; version=1.1;          #7
vendor="gfbook.samples:"; exclude:="*Impl",                         #7
gfbook.samples.chapterservices.logging; version=1.1;             #8
uses:=""                                   #8

GlassFish uses OSGI Revision 4, and the version number at #1 indicates it. At #2 we uniquely identify this bundle by its name and version together. At #3 we specify a class which implements the BundleActivator interface, so that the bundle gets notified when it starts or stops. This lets developers perform initialization and checks when the bundle starts, or perform cleanup when the bundle stops. At #4 we define our bundle's internal class path; a bundle can have JAR files inside it, and the bundle class loader checks this class path when it looks for the classes it requires, in the same order given in the Bundle-ClassPath header. At #5 we define the bundle's native dependencies per operating system.

At #6 we define an optional dependency on some packages; as you can see, our bundle can use any version inside the given version range. At #7 we describe the exported packages along with the version and the excluded classes, using wildcard notation. At #8 we declare that any bundle which imports gfbook.samples.chapterservices.logging from our bundle must use the same bundles our package uses to satisfy the related imports, which keeps the class spaces consistent.

The next layer in the OSGI modularity system is the life cycle layer, which makes it possible for bundles to be installed, dependency-checked, started, activated, stopped and uninstalled dynamically. It relies on the module layer for class-path-related tasks like class locating and class loading. This layer adds some APIs for managing the modules at run time. Figure 2 shows the functionalities of this layer.

Figure 2 OSGI life cycle layer states. A module needs to be stopped and have no dependencies in order to be uninstalled.

     Installed: the bundle has been installed, but not all of its dependencies have been satisfied; for example, one of its imported packages is not exported by any of the currently resolved bundles.

     Resolved: the bundle is installed and all of its dependencies are satisfied; the bundle is ready to be started.

       Starting: Another temporary state which a bundle goes through when it is going to be activated.

       Active: Bundle is active and running.

     Stopping: A temporary state which a bundle goes through when it is going to be stopped.

       Uninstalled: Bundle is not in the framework anymore.
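The transitions between these states can be modeled as a simple state machine. The following sketch is plain Java of our own, not the OSGi framework's implementation; it only illustrates the ordering rules described above, including the rule that an active bundle must be stopped before it can be uninstalled.

```java
// A simplified model of the OSGi bundle life cycle states described above.
enum BundleState { INSTALLED, RESOLVED, STARTING, ACTIVE, STOPPING, UNINSTALLED }

class BundleLifeCycle {
    private BundleState state = BundleState.INSTALLED;

    BundleState getState() { return state; }

    // Called once the framework has satisfied all dependencies.
    void resolve() {
        require(BundleState.INSTALLED);
        state = BundleState.RESOLVED;
    }

    // Starting is a transient state; the bundle ends up ACTIVE.
    void start() {
        require(BundleState.RESOLVED);
        state = BundleState.STARTING;
        state = BundleState.ACTIVE;
    }

    // Stopping is a transient state; the bundle falls back to RESOLVED.
    void stop() {
        require(BundleState.ACTIVE);
        state = BundleState.STOPPING;
        state = BundleState.RESOLVED;
    }

    // A bundle must not be active when it is uninstalled.
    void uninstall() {
        if (state == BundleState.ACTIVE)
            throw new IllegalStateException("stop the bundle first");
        state = BundleState.UNINSTALLED;
    }

    private void require(BundleState expected) {
        if (state != expected)
            throw new IllegalStateException("expected " + expected + " but was " + state);
    }
}
```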

The next layer in the OSGi model is the Service layer, which provides several functionalities for the deployed bundles. First, it provides some services that are common to all applications, such as logging and security. Second, it provides a set of APIs that bundle developers can use to develop OSGi-friendly services, publish them in the OSGi framework service registry, search the registry for the services their bundles need, and bind to those services. GlassFish does not use this layer.

All that GlassFish takes from OSGi is its Module and Life Cycle layers. Adhering to these two layers' guidelines lets GlassFish fit into bigger OSGi-compliant systems and host new modules based on the OSGi modularity system. Now that you know how GlassFish uses OSGi for its modularity, you are ready to see what is inside these modules and what GlassFish proposes for developing module functionality.

3 Introducing HK2

We said that GlassFish uses HK2 for its service layer functionality. The first question you may ask is why the OSGi service model has not been utilized for GlassFish. The answer is that HK2 is more aligned with the Java Module System (JSR-277), which is planned to become part of the Java runtime in Java SE 7; eventually, replacing HK2 with the Java SE modularity system will be easier than replacing the OSGi service model layer with a Java SE one.

3.1 What HK2 is

GlassFish, as a modular application system, needs a kernel into which all modules plug to provide pieces of functionality, which we call services. In order for the kernel to place the services in the right category and place, services are developed against contracts provided by the GlassFish application server designers and developers.

A contract describes the building blocks of an extension point, and a service is an implementation that fits those building blocks. HK2 is a general-purpose modularity system and can be used in any application that requires a modular design.

An HK2 service should adhere to a contract, which means nothing more than implementing an interface whose structure the HK2 kernel knows and knows how to use. But interfaces alone cannot provide all the information a module may require to identify itself to the kernel, so HK2 provides a set of annotations that let developers attach metadata to a component, making it possible for the kernel to place it correctly.
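To make the pattern concrete, here is a self-contained toy version of it: a contract interface, a service implementation, and an annotation carrying the metadata that a kernel could read via reflection. The annotation and type names below are our own stand-ins, not the real HK2 types.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for HK2's @Service: carries the metadata the kernel needs.
@Retention(RetentionPolicy.RUNTIME)
@interface ToyService {
    String name();
}

// The contract: a plain interface whose structure the kernel knows.
interface ToyCommand {
    String execute();
}

// The service: an implementation of the contract, annotated with metadata.
@ToyService(name = "hello-command")
class HelloCommand implements ToyCommand {
    public String execute() { return "hello"; }
}

class ToyKernel {
    // The kernel reads the annotation to learn the service's name.
    static String serviceName(Class<? extends ToyCommand> service) {
        return service.getAnnotation(ToyService.class).name();
    }
}
```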

A system that calls for modularity usually has complex requirements that lead to a complex architecture, and modularity is one of many helpers in the design and development of complex software.

In software as big as GlassFish, objects need other objects during construction; they need configuration during construction or afterward, during a method execution; objects may need to make some of their internal objects available to other objects that need them; and objects need to be categorized into different scopes to prevent interference between objects that need scoped information and objects that run without storing any state (stateless objects).

First things first: after we develop a service by implementing a contract, which is an interface, we need a way to inform HK2 that our implementation is a service, that it has a name, and that it has a scope. To do this we use the @Service annotation; listing 1 shows an example of its use.

Listing 1 HK2 @Service annotation

@Service(name="restart-domain")                                        #1
public class RestartDomain implements AdminCommand {                   #2

Very simple: an annotated POJO provides the system with all the required information to accept it as a CLI command. At #1 we define the name of the service we want to develop; later on we define the scope of the component. At #2 we implement the contract interface to make it possible for the kernel to use our service in the way it knows. The scope can be one of the built-in scopes or any custom scope implemented against the org.jvnet.hk2.component.Scope contract. Custom scopes should take care of their objects' lifecycle. The built-in scopes are:


     Per-lookup scope: components in this scope get a new instance every time someone asks for one. The implementation class is org.jvnet.hk2.component.PerLookup.

     Singleton scope: objects in this scope stay alive until the session ends, and any subsequent request for such an object returns the already available instance. The implementation class is org.jvnet.hk2.component.Singleton.
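The behavioral difference between the two built-in scopes can be illustrated with a toy component registry. This is our own sketch, not the HK2 implementation: per-lookup components get a fresh instance on every lookup, while singleton components are created once and cached.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy registry illustrating per-lookup vs. singleton scope behavior.
class ToyRegistry {
    private final Map<Class<?>, Supplier<?>> factories = new HashMap<>();
    private final Map<Class<?>, Object> singletons = new HashMap<>();
    private final Map<Class<?>, Boolean> singletonScoped = new HashMap<>();

    <T> void register(Class<T> contract, Supplier<T> factory, boolean singleton) {
        factories.put(contract, factory);
        singletonScoped.put(contract, singleton);
    }

    @SuppressWarnings("unchecked")
    <T> T getComponent(Class<T> contract) {
        if (singletonScoped.get(contract)) {
            // Singleton scope: subsequent calls return the same instance.
            return (T) singletons.computeIfAbsent(contract,
                    c -> factories.get(c).get());
        }
        // Per-lookup scope: a fresh instance on every call.
        return (T) factories.get(contract).get();
    }
}
```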

The @Service annotation accepts several optional elements, listed below. These elements are there for the sake of more readable, maintainable, and customizable code; they are not required, and HK2 determines a default value for each element.

       The name element defines the name of the service. The default value is an empty string.

     The scope element defines the scope to which this service implementation is tied. The default value is org.jvnet.hk2.component.PerLookup.class.

     The factory element defines the factory class for the service implementation, if the service is created by a factory class rather than by calling the default constructor. If this element is specified, the factory component is activated, and Factory.getObject is used instead of the default constructor.


What if our service requires access to another service or another object? How do we decide whether to instantiate a new object or use an already available one? Should we, as service developers, really get involved in a complex service or object initialization task?

First, let's look at instantiation. HK2 is an implementation of the IoC pattern, so no object instantiation needs to go through a new() call; instead, you ask the IoC container to return an instance of the required component. In HK2, the ComponentManager class is in charge of providing a registry of instantiated components, and it also bears the responsibility of initializing components that are not yet initialized when a client requests them. Clients ask the ComponentManager for a component by calling getComponent(Class<T> contract), and the ComponentManager ensures that it either returns a suitable instance, based on the available instances and their scopes, or creates a new instance by traversing the entire object graph required to initialize the component. The ComponentManager methods are thread safe and maintain no session information between consecutive calls.

Component developers need some control to ensure that all required resources and configuration are ready after construction and before any other operation; they may also need to perform some housekeeping and cleanup before an object is destroyed. So two interfaces are provided that components can implement to be notified upon construction and destruction. These interfaces are org.jvnet.hk2.component.PostConstruct and org.jvnet.hk2.component.PreDestroy; each has just one method, named postConstruct and preDestroy respectively. The ComponentManager is also in charge of injecting resources into component properties during construction, and of extracting resources from components.
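The callback mechanism can be sketched with local stand-ins for the two interfaces. The real interfaces live in org.jvnet.hk2.component; the mini container below is our own illustration of when a container would fire each callback.

```java
import java.util.ArrayList;
import java.util.List;

// Local stand-ins for org.jvnet.hk2.component.PostConstruct / PreDestroy.
interface ToyPostConstruct { void postConstruct(); }
interface ToyPreDestroy   { void preDestroy(); }

// A component that wants to be notified on construction and destruction.
class AuditService implements ToyPostConstruct, ToyPreDestroy {
    final List<String> log = new ArrayList<>();
    public void postConstruct() { log.add("resources ready"); }
    public void preDestroy()    { log.add("cleaned up"); }
}

class ToyContainer {
    // After instantiating a component, the container fires postConstruct.
    static <T> T create(Class<T> type) {
        try {
            T instance = type.getDeclaredConstructor().newInstance();
            if (instance instanceof ToyPostConstruct)
                ((ToyPostConstruct) instance).postConstruct();
            return instance;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    // Before discarding a component, the container fires preDestroy.
    static void destroy(Object instance) {
        if (instance instanceof ToyPreDestroy)
            ((ToyPreDestroy) instance).preDestroy();
    }
}
```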

First, let's look at resource extraction, which may be unfamiliar. Some components create an internal object that carries information required by other components in the system, or by similar components that are going to be constructed. These internal objects can be marked with the @Extract annotation, located at org.jvnet.hk2.annotations.Extract. This annotation defines the property to be extracted and placed in a predefined scope, or in the default scope, which is the per-lookup scope.

After an object is extracted, HK2 places it inside a container of type org.jvnet.hk2.component.Habitat; the container will provide the object to anyone that needs one of its inhabitants. Listing 2 shows how a property of an object can be marked for extraction.

Listing 2 The HK2 extraction sample

@Service(name="restart-domain", scope=org.jvnet.hk2.component.PerLookup.class)
public class RestartDomain implements AdminCommand {
    @Extract                                                           #1
    LoggerService logger;

    @Extract                                                           #2
    public CounterService getCounterService() {...}

At #1 we mark a property for extraction, and at #2 we extract the object from a getter method; remember that the @Extract annotation works both at the property and at the getter level. Now let's see how these inhabitant objects can be used by another service that needs these two functionalities.

Listing 3 The HK2 injection sample

@Service(name="restart-domain", scope=org.jvnet.hk2.component.PerLookup.class)
public class RestartDomain implements AdminCommand {
    @Inject protected Habitat habitat;                                 #1

    public void performAudition() {
        LoggerService logger = habitat.getComponent(LoggerService.class);    #2
        CounterService counter = habitat.getComponent(CounterService.class); #2

At #1 we inject the container into a property of this object; you see, this is the world of IoC, and we have no explicit object creation. At #2 we ask the habitat to provide us with instances of LoggerService and CounterService.

Now that you have the required understanding of the OSGi bundle layer and HK2 basics, we can get our hands down to writing some GlassFish modules, to see how these two can be used to build GlassFish modules in this article and the upcoming one.


4 GlassFish Containers and Container extensibility

GlassFish is a multipurpose application server that can host different kinds of applications developed in a variety of programming languages, like Ruby, Groovy, and PHP, in addition to its support for Java EE platform applications.

The container concept makes it possible for the GlassFish application server to host new types of applications, developed either in the Java programming language or in other languages. Imagine that you want to develop a new container that can host a brand new type of application. This type of application lets GlassFish users deploy a new kind of archive containing descriptor files along with the classes required for performing scheduled tasks in a container-managed way. The new application type is developed in Java, so it can access everything inside the GlassFish application server, and in the same way it can access operating system scripts, such as shell scripts, to perform operating-system-related tasks on schedule.

GlassFish may have tens of different containers for different types of applications, so, before anything else, each container must have a name so that the GlassFish kernel can access it easily. Next, GlassFish needs a mechanism to determine which container an application should be deployed to. Then there must be a mechanism to determine which URL patterns should be passed to a specific container for further processing. And finally, each application has a set of classes that the container should load to process incoming requests, so a class loader with a full understanding of the application directory structure is required to load the application classes. Figure 3 shows the procedure of application deployment, from the moment a deployment command is received by GlassFish up to the stage where the container becomes available to the GlassFish deployment manager.

Figure 3 Application deployment activities in GlassFish, from the early start until GlassFish gets the designated container name

Figure 4 shows the activities that happen after the container becomes available to the GlassFish application deployment manager, up to the point where GlassFish gets a class loader for the deployed application.

Figure 4 Activities that happen after GlassFish gets the specific container name

The GlassFish container SPI provides several interfaces that container developers can implement to develop their containers. These interfaces provide the necessary functionality for each activity in figure 3. First, let's see what is required for the first step, which is checking the deployment artifact; the artifact can be a compressed file or a directory.

When an artifact is passed to GlassFish for deployment, it first goes through a check to see which type of archive it is and which container can host the application. So we need to be able to check an archive and determine whether or not it is compatible with our container. To do this we implement an interface with the fully qualified name org.glassfish.api.container.Sniffer. GlassFish creates a list of all Sniffer implementations and passes the archive to each one of them to see whether a Sniffer recognizes the archive. Listing 4 shows a dummy Sniffer implementation.


Listing 4 A dummy Sniffer implementation

@Service(name = "TextApplicationSniffer")
public class TextApplicationSniffer implements Sniffer {
    public boolean handles(ReadableArchive source, ClassLoader loader) {}  #1
    public Class<? extends Annotation>[] getAnnotationTypes() {}           #2
    public String[] getURLPatterns() {}                                    #3
    public String getModuleType() {}                                       #4
    public Module[] setup(String containerHome, Logger logger)
            throws IOException {}                                          #9
    public void tearDown() {}                                              #5
    public String[] getContainersNames() {}                                #6
    public boolean isUserVisible() {}                                      #7
    public Map<String, String> getDeploymentConfigurations(
            final ReadableArchive source) throws IOException {}            #8

An implementation of the Sniffer interface should return true or false depending on whether or not it understands the archive (#1). ReadableArchive implementations provide a virtual representation of the archive content: they let us check for specific files or file sizes, check for the presence of another archive inside the current one, and so on.

If we intend to perform a more complex analysis of the archive, for example checking for the presence of a specific annotation, we should return the list of annotations in getAnnotationTypes (#2). GlassFish scans the archive for the presence of one of these annotations and continues with this Sniffer if a class is annotated with one of them.

If this is the Sniffer that understands the archive, GlassFish then asks for the URL patterns; GlassFish calls the container's service method when a request matches one of these patterns (#3). At #4 we define the module name, which can be a simple string.

At #5 we remove everything related to this container from the module system. At #6 we return a list of all containers that this specific Sniffer can enable.

At #7 we determine whether or not the container is visible to users when they deploy an application into it, and at #8 we return all interesting configurations of the archive, which can be used during deployment or at run time.

The most important method of the Sniffer interface is the setup method (#9), which sets up the container if it is not already installed. As you may have noticed, the method receives the path to the container home directory, so if the container is not installed yet, the Sniffer can download the required bundles from the repositories and configure the container with default configurations. The method returns a list of all modules required by the container; using this list, GlassFish prevents the removal of any of these modules and checks for the container class to resume the operation.
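The core idea behind handles() can be shown with a self-contained toy: a minimal archive abstraction and a sniffer that claims archives containing a marker descriptor. The ToyArchive type and the marker file name below are our own simplifications, not the real SPI types.

```java
import java.util.List;

// A toy stand-in for ReadableArchive: just a list of entry names.
class ToyArchive {
    private final List<String> entries;
    ToyArchive(List<String> entries) { this.entries = entries; }
    boolean exists(String entry) { return entries.contains(entry); }
}

// A toy sniffer: it claims an archive when a marker descriptor is present,
// mirroring how a real Sniffer's handles() inspects the archive content.
class ToyTextApplicationSniffer {
    static final String MARKER = "META-INF/text-application.xml";

    boolean handles(ToyArchive source) {
        return source.exists(MARKER);
    }

    String[] getContainersNames() {
        return new String[] { "TextApplicationContainer" };
    }
}
```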

org.glassfish.api.container.Container is the GlassFish container contract that each container should implement; the Container interface is the entry point for GlassFish container extensibility. Listing 5 shows a dummy Container implementation. The Container class does not need to perform anything specific; all tasks are delegated to another class, the Deployer implementation.

Listing 5 A dummy Container implementation

@Service(name = "gfbook.chaptercontainer.TextApplicationContainer")    #1
public class TextApplicationContainer implements Container {           #2
    public Class<? extends Deployer> getDeployer() {                   #3
        return gfbook.chaptercontainer.TextApplicationDeployer.class;
    }
    public String getName() {
        return "Text Container";                                       #4
    }

At #1 we define a unique name for our container; we can use the fully qualified class name to ensure that it is unique. At #2 we implement the container contract to let the GlassFish kernel use our container in the way it knows. At #3 we return the class of our Deployer interface implementation, which will take care of tasks like loading, unloading, preparing, and other application-related tasks. At #4 we return a human-readable name for our container.

You can see that the container has nothing special, except that it has an implementation of org.glassfish.api.deployment.Deployer that the GlassFish kernel will use to delegate the deployment tasks. Listing 6 shows snippet code from the text application container's Deployer implementation.

Listing 6 Deployer implementation of TextApplication container

@Service
public class TextApplicationDeployer
        implements Deployer<TextContainer, TextApplication> {              #1
    public boolean prepare(DeploymentContext context) {}                   #2
    public <T> T loadMetaData(Class<T> type, DeploymentContext context) {} #3
    public MetaData getMetaData() {}                                       #4
    public TextApplication load(TextContainer container,
            DeploymentContext context) {}                                  #5
    public void unload(TextApplication container,
            DeploymentContext context) {}                                  #6
    public void clean(DeploymentContext context) {}                        #7

The methods shown in the code snippet are just the mandatory methods of the Deployer contract; our implementation may have some helper methods too. At #1 we define our class as an HK2 service that implements the Deployer interface.

At #2 we prepare the application for deployment; this can be unzipping some archive files or pre-processing the application deployment descriptors to extract information required by the deployment task. Several methods of the Deployer interface accept an org.glassfish.api.deployment.DeploymentContext as one of their parameters; this interface provides all the contextual information about the application currently being processed, for example the application content in terms of files and directories, access to the application class loader, all parameters passed to the deploy command, and so on.

At #3 we return a set of metadata associated with our application; this metadata can contain information like the application name and so on. At #4 we return an instance of org.glassfish.api.deployment.MetaData, which contains the special requirements of this Deployer instance; these requirements can be loading a set of classes, using a separate class loader, before loading the application, or a set of metadata that our Deployer instance will provide after successfully loading the application. We will discuss MetaData in more detail in the next step. At #5 we load the application and return an implementation of org.glassfish.api.deployment.ApplicationContainer; the ApplicationContainer manages the lifecycle of the application and its execution environment. At #6 we unload the application from the container, which means that our application should not be available in the container anymore. At #7 we clean up any temporary files created during the prepare method execution.

Now for the TextApplication class, which implements application management tasks like starting and stopping the application. A snippet of this class can look like listing 7.


Listing 7 Snippet code for TextApplication class

public class TextApplication extends TextApplicationAdapter
        implements ApplicationContainer {                                  #1
    public boolean start(ApplicationContext startupContext) {}
    public boolean stop(ApplicationContext stopContext) {}
    public boolean resume() {}
    public boolean suspend() {}
    public ClassLoader getClassLoader() {}
    public Object getDescriptor() {}                                       #2

All portions of the code should be self-explanatory, except that at #2 we return the application descriptor to the GlassFish deployment layer, and at #1 we do a lot of work, including interfacing our container with Grizzly (the web layer of GlassFish).

You remember that in listing 6 we returned an instance of TextApplication when the container called the load method of the Deployer object. This object is responsible for the application lifecycle, including what you can see in listing 7, and it is also responsible for interfacing our container with the Grizzly layer. Listing 8 shows a snippet of the TextApplicationAdapter abstract class implementation. Extending com.sun.grizzly.tcp.http11.GrizzlyAdapter helps with developing custom containers that need to interact with the HTTP layer of GlassFish. GrizzlyAdapter has one abstract method, which we talked about when discussing the Sniffer interface implementation: void service(GrizzlyRequest request, GrizzlyResponse response), which is called whenever a request hits a URL matching one of the container's deployed applications.
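Since the real adapter depends on the Grizzly classes, here is a self-contained sketch of the same shape: an abstract adapter with a single service method, and a text-application adapter overriding it. The ToyRequest and ToyResponse types below are simplified stand-ins of our own for GrizzlyRequest and GrizzlyResponse.

```java
// Simplified stand-ins for GrizzlyRequest / GrizzlyResponse.
class ToyRequest {
    final String path;
    ToyRequest(String path) { this.path = path; }
}
class ToyResponse {
    final StringBuilder body = new StringBuilder();
    void write(String s) { body.append(s); }
}

// The shape of com.sun.grizzly.tcp.http11.GrizzlyAdapter: one abstract
// service method, invoked when a request matches the container's URL patterns.
abstract class ToyGrizzlyAdapter {
    public abstract void service(ToyRequest request, ToyResponse response);
}

// Our container's adapter: serves content for the deployed text application.
class ToyTextApplicationAdapter extends ToyGrizzlyAdapter {
    public void service(ToyRequest request, ToyResponse response) {
        response.write("text application serving " + request.path);
    }
}
```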



5 GlassFish Update Center


The GlassFish update center is one of its outstanding features; it helps developers and system administrators spend less time updating and distributing applications across multiple GlassFish instances. In this section we will dig into how the update center works, and how we can develop new packages for it.

The update center's name may be misleading, suggesting that it only updates the system, but in reality it has more functionality than its name reveals. Figure 5 shows the update center Swing client running on a Linux machine. On your left you will see a navigation tree that lets you select what kind of information you want to view about the update center's functionality. These tasks include:

     Available Updates: The Available Updates node includes information about available updates for installation. It shows the basic details about each available update and identifies whether the update is new and whether its installation requires a restart to complete. You can select the desired component from the table and click Install.

     Installed Software: The Installed Software node shows the software components currently installed. This is the place where you can remove any unneeded piece of software from your GlassFish installation and keep it running light. Upon trying to uninstall a component, GlassFish checks which components depend on it and lets you decide whether or not you want to remove all of them. The Details section of the window shows more information about the installed software, including technical specifications, product support, documentation, and other useful resources.

     Available Software: The Available Software node shows all the software components available for installation through the update center. These are new components such as new containers, new JDBC drivers, new web applications, and so on. The Details section of the window shows more information about each component, including technical specifications, product support, documentation, and other useful resources.

Now that we know what the update center can do, let's see how to bootstrap it in our application server. GlassFish is a modular application server, so the update center itself is a module that may or may not be present. By default, the update center modules are not bundled with the GlassFish installation; GlassFish downloads and installs its bits upon first access, just like the GlassFish administration console.

The GlassFish installer will bootstrap the update center during installation if an Internet connection is available. If your environment had no Internet connection during the installation, you can bootstrap the update center by navigating to GlassFish_Home/bin and running the updatetool script, which is either a shell script or a batch file. When you execute the command, it downloads the necessary bits and configures the update center; running the command again brings up the update center GUI, similar to figure 5.

Figure 5 The Update Center tool; it can manage multiple application installations

In the navigation tree you can see that several installations can be managed using a single update center front end. These are the images installed on the system, along with a manually created image for the GlassFish in Action book.

5.1 Update center in the administration console

The GlassFish administration system is all about effectiveness, integration, and ease of use, so the same update center functionality is available in the GlassFish administration console, letting administrators check for and install updates without shell-level access to their machines. Administrators can navigate to the Update Center page in the GlassFish administration console and see which updates and new features are available for their installed version.

Figure 6 shows the web-based update center page with all the features of the desktop version. Integrating the update center functionality into the administration console is in line with GlassFish's promise to keep all administration tasks for a single machine or a cluster integrated into one single application.




Figure 6 Update center pages in the GlassFish administration console, with the same functionality as the desktop GUI

I know what you want to ask: what about update notification, and how often should you check for updates? The GlassFish development team knows that administrators like to be notified of available updates, so there are two mechanisms for getting notified. The first is the notification text that appears in the top bar of the administration console, as shown in figure 7.

Figure 7 Update center web-based notification, and integration of the notification icon into the Gnome desktop

Are you asking for a more passive way of notification, or are you a CLI-advocate administrator? Then you can use the tray icon notification system, which notifies you about available updates by showing a bulb in your system tray. It makes no difference whether you are using Solaris, Linux, or Windows; you can activate the notification icon simply by issuing the following command in the GlassFish_Home/updatetool/bin/ directory.

./updatetool --register

And you are done; the registration of the update center notification icon is finished, and after your next startup you will see the notification icon, which notifies you about the availability of new updates and features. Figure 7 shows the two notification mechanisms discussed.

The update center helps us keep our application server up to date by notifying us about the availability of new packages and features. These features and packages, as we already know, are either OSGi bundles or other types of packages that can be used to transfer any type of file. The update center uses a general packaging system to deliver the requested packages, which may contain one or more OSGi bundles, operating system binary files, graphics, text, and any other type of content.

This general-purpose and very powerful packaging system is called the pkg(5) Image Packaging System, or IPS for short.

6 GlassFish packages distribution

The update center, as we saw in the GlassFish use case, helps us keep our application server up to date and lets us install new components or remove currently installed ones to keep the house clean of unneeded components. But the so-called update center is much bigger and more important than just updating the GlassFish installation. The update center is part of a larger general-purpose binary package distribution system for distributing modular software on different operating systems, without interfering with the operating system's installation subsystem or requiring so-called root permission.

GlassFish version 2 had an update center that you could use to manage the GlassFish installation; however, that update tool used a module model like NetBeans NBM files to transfer the updates from the server to the local installation. In GlassFish version 3, the update center and the GlassFish installation system have completely changed. You may remember from 2.2 that the GlassFish installer, based on the OpenInstaller framework, uses a specific binary distribution system designed by OpenSolaris, named the pkg(5) Image Packaging System (IPS). Now we are going to discuss IPS in more detail, along with its relation to the GlassFish update center. IPS is developed in Python to ensure that it is platform independent and will execute on every platform with minimal dependencies.

6.1 PKG(5) Image Packaging System, simply IPS

IPS is a software delivery system based heavily on network repositories, distributing the software either completely from the network repositories or through a mixture of network repositories and initially downloaded packages. Generally speaking, IPS helps with distributing and updating software systems at different levels, independent of the operating system and the platform the software is going to be installed on.

IPS consists of several components; some of them are mandatory for maintaining an IPS-based system, and some are just provided to make the overall procedure of creating, maintaining, and using an IPS-based software distribution easier.

     End-user utilities: a set of command-line utilities that let end users interact with the server-side software, which serves as a repository front end to let clients fetch and install updates and new features for their installed software.

     GUI-based client-side software: like the Java-based Update Center used by GlassFish.

     APIs for interacting with IPS: Java APIs are provided so that Java developers can perform IPS-related tasks from their Java applications; they provide better integration with Java applications and more flexibility for ISVs and OEMs to develop their own client-side applications.

     Installation images: an installation image is a set of packages for an application that provides some or all of its functionality.

     Server-side utilities: an HTTP server that sits in front of the physical repository to interact with the clients.

     Development utilities: a set of utilities that help developers create IPS packages, with build-system integration as a subset. For now, IPS integration with Maven and Ant is available, integrating and automating package creation and publishing to repositories from the build system.

In IPS, a software installation is called an image, which we can describe as a customized mirror of the installation repositories in terms of installed packages. Several types of images are defined in IPS, including:

       Operating system level images: This type of image is used for distributing operating systems and operating system upgrades. Images at this level are called full images and partial images.

       Custom application level images: This is the image level which software developers and distributors can use to distribute their applications. These images are called user images and do not require the host operating system to be based on IPS.

Let’s see how the GlassFish installer is related to IPS. The GlassFish installer simply asks the user for the path where the GlassFish IPS image should be expanded, and performs the initial configuration on the expanded image. This initial configuration includes creating a default domain along with setting up the key repositories for the domain password and master password. The expanded image interacts with some preconfigured network repositories to fetch information about updates and new features available for the expanded GlassFish image.

Package repositories, which installation images interact with, contain packages that update, complete, or add new features to an already installed image. These packages can differ from one operating system to another, and because of these differences, different repositories are required for different operating systems.

An installation image can be as big as an operating system image or as small as the bootstrapping files necessary to bootstrap IPS and let it perform the rest of the installation. A user image therefore falls under one of the following types:

       An image with some basic required functionality, for example the GlassFish web container. This basic functionality can be operating system independent for the sake of simplicity in distributing the main image. The image can contain the complete IPS system, including its utilities and client applications like the GlassFish Update Center, or it may just contain a script to bootstrap the IPS system installation.

       An image that contains only the very basic, minimal packages needed to bootstrap the IPS system. The user runs the IPS bootstrapping script to install more IPS related packages, like the graphical Update Center client, and later installs all required features using these IPS utilities.

6.2 Creating packages using IPS

Now that we have an understanding of what IPS is and how it relates to the GlassFish installer and to GlassFish itself, we can get our hands on the IPS utilities and see how we can use the command line utilities to create new packages. Figure 8 shows the GlassFish directory structure after bootstrapping the Update Center. Keep in mind that bootstrapping the GlassFish Update Center results in the installation of the pkg(5) IPS utilities.

You may ask how GlassFish modularity, extensibility, OSGi, and HK2 relate to IPS. A GlassFish IPS image is a zip file containing the GlassFish directory layout, including files such as its OSGi bundles, JAR files, documentation, and so on. Later on, each IPS package, which can be an update to currently installed features or a brand new feature, is a zip file which can contain one or more OSGi bundles along with their related documentation and similar artifacts.

Two directories in the GlassFish directory structure are related to IPS and the GlassFish Update Center. The pkg directory includes all necessary files for the IPS system: man pages, libraries that Java developers can use to build their own applications on top of IPS or their own IPS bootstrapping applications, and a minimal Python distribution that lets users execute the IPS scripts. The vendor-packages directory contains all packages downloaded and installed for this image, which is the Update Center image.

From now on, we refer to the pkg folder as ips_home, so you should replace it with the real path to the pkg directory, or you can set an environment variable with this name pointing to that directory.



Figure 8 Update center and pkg(5) IPS directory layout

The bin directory inside the pkg directory includes IPS scripts for both developers and end users; these scripts are listed in table 1 with their descriptions.

Table 1 IPS scripts with associated description




Script

Description

pkg

Can be used to create and update images.

pkg.depotd

This is the repository server for the image packaging system. The pkg client and other retrieval clients send package and catalog retrieval requests to the repository server.

pkgsend

Lets us publish new packages and new package versions to an image packaging depot server.

pkgrecv

Lets us download the contents of a package from a server, in a format suitable for the pkgsend command.

You can also see some Python scripts in the bin directory. These scripts perform the real tasks behind each of the operating system friendly scripts listed above.

The updatetool folder is where the Update Center GUI application is located, along with its documentation and related scripts. Its bin directory contains two scripts, one for running the Update Center and one for registering the desktop notifier, which keeps you posted about new updates and available features by showing a balloon with the related message in your system tray. The vendor-packages folder contains all packages downloaded and installed for this image, which is the Update Center image.

Now let’s see how we can set up a repository, create a package, and publish the package to the repository. I am going to create a very basic package just to show the procedure; the package may not be useful for GlassFish or any other application.

First of all we need a directory to act as the depot of our packages, a place where we are sure our user has read and write permission along with enough space for our packages. So create a directory named repository; the directory can be inside the ips_home directory. Now start the repository server by issuing the following command inside the ips_home/bin directory.

./pkg.depotd -d ../repository -p 10005

To check that your repository is running, open http://localhost:10005 in your browser; you should see something similar to figure 9, which includes some information about your repository status.

Figure 9 The Repository status which we can see by pointing the browser to the repository URL.


Now we have our repository server running, waiting for us to push our packages into it so that client side utilities like pkg or the Update Center can later download them.

Our packages should be placed in the package repository, and the command which does this for us is pkgsend. You have probably already thought about the need for a way to describe the content of a package, its version, description, and so on. The pkgsend command lets us open a transaction, add all package attributes, its included files and directory layout, and finally close the transaction, at which point pkgsend sends our package to the repository. The pkgsend command acts in a transactional way, which means we can have a set of packages where we need either all of them in the repository or none of them. This model guarantees that we never have incompatible packages in the repository while we push updates.

To create a sample package we need some content, so create a directory inside the pkg directory and name it sample_package, then put two text files named readme.txt and license.txt inside it. Inside the sample_package directory create a directory named figures with an image file named figure1.jpg inside it. These are dummy files and their content can be anything you like. You may add some more files and create a directory structure to test more complex structured packages.

Listing 8 shows a series of commands which we can use to create the package whose content we have just prepared and push it to the repository that we created in the previous step. I assume that you are using Linux, so the commands are Linux flavored; you should enter them line by line in a terminal window, which can be gnome-terminal or any other terminal of your choice. If you want to see how a transaction works, you can point your browser to http://localhost:10005 while executing the listing 8 commands.
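The sample_package content described above can be prepared with a few commands. This is just scaffolding for the example; the file contents are placeholders and irrelevant to the packaging procedure.

```shell
#!/bin/sh
# Build the sample_package layout described above (run inside ips_home).
mkdir -p sample_package/figures
echo "GFiA sample package readme" > sample_package/readme.txt
echo "GPL" > sample_package/license.txt
echo "placeholder image data" > sample_package/figures/figure1.jpg

# Show the layout we are about to package.
find sample_package -type f | sort
```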

Listing 8 Create and push the sample package to our repository

export PKG_REPO=http://localhost:10005/                    #1
eval `./pkgsend open GFiASample@1.0`                              #2
./pkgsend add set name="pkg.name" value="GFiASamplePackage"      #3
./pkgsend add set name="pkg.description" value="sample description" #4
./pkgsend add set name="pkg.detailed_url" value="" #5  
./pkgsend add file ../sample_package/readme.txt path=GFiA/README.txt  #6
./pkgsend add dir path=GFiA/figs/                      #7
./pkgsend add file ../sample_package/figures/figure1.jpg path=GFiA/figs/figure1.jpg #8
./pkgsend add license ../sample_package/license.txt license=GPL         #9
./pkgsend close               #10

At #1 we export an environment variable which the IPS scripts will use as the repository URL; otherwise we would have to pass the URL with -s in each command execution. At #2 we open a transaction named GFiASample@1.0 for uploading a package. At #3 we add the package name. At #4 we add the package description, which will appear in the description column of the Update Center GUI application or in pkg command information retrieval. At #5 we add a URL with complementary information; the Update Center fetches the information from the provided URL and shows it in the description pane. At #6 we add a text file along with its extraction path. At #7 we add a directory which the package installation will create in the destination image. At #8 we add a file to our previously created directory. At #9 we add the package license type along with the license file; the license type can later be used to query the available packages based on their licenses. And finally at #10 we close the transaction, which results in the appearance of the package in the package repository.
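Because pkgsend is transactional, the listing 8 steps can also be wrapped in a script that either publishes the package completely or abandons the half-finished transaction. This is only a sketch: it assumes the pkg(5) toolkit is on the PATH and the depot from the previous step is running on localhost:10005, and it skips politely otherwise.

```shell
#!/bin/sh
# Sketch: publish the sample package in a single all-or-nothing transaction.
if command -v pkgsend >/dev/null 2>&1; then
  export PKG_REPO=http://localhost:10005/

  # pkgsend open prints an export statement for PKG_TRANS_ID; eval captures it.
  eval `pkgsend open GFiASample@1.0`

  pkgsend add set name="pkg.description" value="sample description" &&
  pkgsend add file sample_package/readme.txt path=GFiA/README.txt &&
  pkgsend add license sample_package/license.txt license=GPL &&
  pkgsend close || {
    # Abandon the transaction so no partial package reaches the repository.
    pkgsend close -A
  }
  SKETCH_STATUS=ran
else
  echo "pkgsend not found; skipping (sketch only)"
  SKETCH_STATUS=skipped
fi
```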

Now we have one package in our repository; you can find the number of packages in the repository by opening the repository URL, http://localhost:10005, in your browser.

You may already have recognized a pattern in the pkgsend command and its parameters; if so, you are correct, because pkgsend invocations follow the pattern pkgsend subcommand [subcommand parameters]. A list of important pkgsend subcommands is shown in table 2.

Table 2 List of pkgsend subcommands which can be used to send a package to repository





Subcommand

Description

open

Begins a transaction on the package specified by the package name.

Syntax: pkgsend open pkg_name

add

Adds a resource associated with an action to the current transaction.

Syntax: pkgsend add action [action attributes]

close

Closes the current transaction.

Syntax: pkgsend close


You can see the complete list of subcommands in the pkgsend man pages or on the pkg(5) project website.

The add subcommand is the most useful of the pkgsend subcommands; it adds the actions we need to the transaction, along with the actions' attributes. You have already seen how we can use the set, file, dir, and license actions. As you have seen, each action may accept one or more named attributes. Other important actions are listed in table 3.

Table 3 list of other important add actions


Action

Description and Key Attributes

link

The link action represents a symbolic link. The path attribute defines the file system path where the symlink is installed.

hardlink

The hardlink action represents a physical link. The path attribute defines the file system path where the hard link is installed.

driver

The driver action represents a device driver. It does not reference a payload; the driver files must be installed as file actions. The name attribute represents the name of the driver. This is usually, but not always, the file name of the driver binary.

depend

The depend action represents a dependency between packages. A package might depend on another package to work or to install. Dependencies are optional.

group

The group action defines a UNIX group. No support is present for group passwords. Groups defined with this action initially have no user list; users can be added with the user action.

user

The user action defines a UNIX user as defined in the /etc/passwd, /etc/shadow, /etc/group, and /etc/ftpd/ftpusers files. Users defined with this action have entries added to the appropriate files.


Now that we are finished with pkgsend, let’s see what alternatives we have for using our package repository. We can either install the package into an already installed image, like our GlassFish installation, or we can create a new image on our client machine and install the package into that particular image.

To install the package into the current GlassFish installation, open the Update Center, select the GlassFish node in the navigation tree, and then select Image Properties from the File menu, or press CTRL+I, to open the image properties window. Add a new repository; enter GFiA.Repository as the name and http://localhost:10005 as the repository URL. Now you should be able to refresh the Available Add-ons list and select GFiA Sample Package for installation.

The installation process is fairly simple: it just executes the given actions one by one, which results in the creation of a GFiA directory inside the GlassFish installation directory with two text files inside it. The package installation also creates the figs directory inside the GFiA directory and adds the image file to it.

You get the idea: you can copy your package files anywhere in the host image (the GlassFish installation in this case), so when you want to distribute your OSGi bundle you will only need to put the bundle inside the already existing directory named modules in the GlassFish installation directory.

The other way of using our package repository is to create a new image on our client system and then install the package into this new image. Although we can create the image and install our package into it using the Update Tool, it is more instructive to use the command line utilities to accomplish the task.

  1. Create the local image:

./pkg image-create -U -a GFiA.Repository=http://localhost:10005/ /home/user/GFiA

This command creates a user image, which is determined by the -U parameter; its default package repository is our local repository, determined by the -a parameter. Finally, the path to the image location is /home/user/GFiA, which is where our image will be extracted.

  2. Set a title and description in the image:

./pkg -R /home/user/GFiA set-property title "GlassFish in Action Image"

./pkg -R /home/user/GFiA set-property description "GlassFish in Action Book image which is created in Chapter "

  3. Install the GFiASamplePackage into the newly created image:

./pkg -R /home/user/GFiA install GFiASamplePackage

As you can see, we can install the package into any installation image by providing the installation image path.
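The three steps above can be combined into one script. This is a sketch under the same assumptions as before: the pkg(5) toolkit is on the PATH and the local depot is running on localhost:10005; otherwise the script skips.

```shell
#!/bin/sh
# Sketch: create a user image and install the sample package into it.
if command -v pkg >/dev/null 2>&1; then
  IMAGE=/home/user/GFiA

  # 1. Create a user image (-U) whose default repository is our local depot.
  pkg image-create -U -a GFiA.Repository=http://localhost:10005/ "$IMAGE"

  # 2. Title and description are image properties; -R targets the image.
  pkg -R "$IMAGE" set-property title "GlassFish in Action Image"
  pkg -R "$IMAGE" set-property description "GlassFish in Action book image"

  # 3. Install the sample package into the image and list the result.
  pkg -R "$IMAGE" install GFiASamplePackage
  pkg -R "$IMAGE" list
  SKETCH_STATUS=ran
else
  echo "pkg not found; skipping (sketch only)"
  SKETCH_STATUS=skipped
fi
```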

You as a developer or project manager can use IPS to distribute your own application from the ground up and keep yourself free from the update hurdle. The IPS toolkit is available from the pkg(5) project website.


6 Summary

GlassFish's modular architecture opens the way for easy update and maintenance of the application server, along with providing the possibility for ISVs to use its modules outside the application server or to develop new modules that extend the application server's functionality.

Using OSGi lets the application server be deployed inside a bigger OSGi based software system, and lets system administrators and maintainers deal with one single installation with one single underlying module management system.

GlassFish container development provides the opportunity to develop new types of containers which can host new types of applications without going deep into network server development. It is also convenient that a container can interface with other containers in the application server to use managed resources like JDBC connection pools or EJBs.

The GlassFish Update Center can be seen as one of the most innovative features of the application server, as it takes care of many headaches which administrators usually face when updating and patching the application server. The Update Center automatically checks for available updates for our installed version of GlassFish and, in the blink of an eye, installs the updates for us without making us go through compatibility checks between the available update and our installation.

The Update Center notification mechanism can keep us posted on new updates, either while we are doing daily administration tasks in the administration console or by showing the familiar bulb in our desktop notification area.

The application server is platform independent and so it needs a platform agnostic distribution mechanism; pkg(5) IPS is a proven binary distribution system which GlassFish uses to distribute its binaries.


OpenMQ, the Open source Message Queuing, for beginners and professionals (OpenMQ from A to Z)

Talking about messaging implies two basic capabilities around which all other features are built: support for topics and queues, which lets a message be consumed either by as many interested consumers as have subscribed, or by just one of the interested consumers.

Messaging middleware was present in the industry before Java came along, and each product had its own interfaces for providing messaging services. There were no standard interfaces that vendors could comply with to increase compatibility, interoperability, ease of use, and portability. The achievement of the Java community was defining the Java Message Service (JMS) as a standard set of interfaces for interacting with this type of middleware from Java based applications.

Asynchronous communication of data and events is one of the ever-present communication models in enterprise applications, used to address requirements like long running process delegation, batch processing, loose coupling of different systems, updating a dynamically growing or shrinking set of clients, and so on. Messaging can be considered one of the important building blocks required by every advanced middleware software stack; it provides the essentials required to define an SOA, as it supplies the foundation for loose coupling.

Open MQ is an open source project, mostly sponsored by Sun Microsystems, that provides the Java community with a high performance, cross platform message queuing system under commercial-use friendly licenses, including the CDDL and GPL. Open MQ shares the same code base that Sun Java System Message Queue uses. This shared code base means that any feature available in the commercial product is also available in the open source project; the only difference is that for the commercial one you pay a license fee and in return get professional level support from Sun engineers, while with the open source alternative you do not have commercial level support and instead rely on community provided support.

Open MQ provides a broad range of functionality in security, high availability, performance management and tuning, monitoring, multiple language support, and so on, which makes it easier to utilize its base functionality in an overall software architecture.

1 Introducing Open MQ


Open MQ is the umbrella project for multiple components which form Open MQ and Sun Java System Message Queue, the latter being the name of the productized sibling of Open MQ. These components include:

§     Message broker or messaging server

The message broker is the heart of the messaging system; generally it maintains the message queues and topics, manages the clients' connections and their requests, controls the clients' access to queues and topics either for reading or writing, and so on.

§     Client libraries

Client libraries provide APIs which let developers interact with the message broker from different programming languages. Open MQ provides Java and C libraries in addition to a platform and programming language agnostic interaction mechanism.

§     Administrative tools

Open MQ provides a set of very intuitive and easy to use administration tools, including tools for the broker's life cycle management, broker monitoring, destination life cycle management, and so on.

1.1 Installing Open MQ

From version 4.3, Open MQ provides an easy to use installer which is platform dependent, as it is bundled with a compiled copy of the client libraries and administration tools. Download the binary compatible with your operating system from the project's download page. Make sure that you have JDK 1.5 or above already installed before you try to install Open MQ. Virtually, Open MQ works on any operating system capable of running JDK 1.5 and above.

Beauty of open source

You might be using an operating system for development or deployment for which no binary build is provided; in this case you can simply download the source code archive from the same page mentioned above and build the source code yourself. The build instructions, available on the project site and in Running OpenMQ 4.3 in NetBeans.txt, are straightforward. Even if you cannot manage to build the binary tools, you can still use Open MQ and JMS on that operating system and run your management tools from a supported operating system.


To install Open MQ, extract the downloaded archive and run the installer script or executable file, depending on your operating system. As you will see, the installer is straightforward and does not require any special user interaction beyond accepting some defaults.

After we complete the installation process we will have a directory structure similar to figure 1 in the installation path we selected during the installation.

Figure 1 Open MQ installation directory structure


As you can guess from the figure, we have some metadata directories related to the installer application and a directory named mq that contains the Open MQ related files. Table 1 lists the important Open MQ directories along with their descriptions.

Table 1 Important items inside Open MQ installation directory

Directory name

Description

bin

All administration utilities reside here.

include

Required headers for using the Open MQ C APIs.

lib

All JAR files and some native dependencies (NSS) reside here.

lib (WAR files)

HTTP and HTTP tunneling support for firewall restricted configurations, and the Universal Message Service for language agnostic use of Open MQ.

var/instances

Configuration files for the different broker instances created on this Open MQ installation are stored inside this folder by default.


2 Open MQ administration

Every server side application requires some level of administration and management to keep it up and running, and Open MQ is no exception. As we discussed in the introduction, each messaging system must have some basic capabilities, including point-to-point and publish/subscribe mechanisms for distributing messages, so basic administration revolves around the same concepts.

2.1 Queue and Topic administration and management

When a broker receives a message it should put the message in a queue or topic, or discard it. Queues and topics are called physical destinations, as they are the last place in the broker's space where a message is placed.

In Open MQ we can use two administration utilities for creating queues and topics: imqadmin and imqcmd. The first is a simple Swing application and the latter is a command line utility. To create a queue we can execute the following command in the bin directory of the Open MQ installation.


imqcmd create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ=true"



This command simply creates a destination of type queue. This queue will not store more than 1000 messages, and if producers try to put more messages in the queue, it will remove the oldest present message to open up space for new messages. Removed messages are placed in a specific queue named the Dead Message Queue (DMQ). The queue that we create using this command is named smsQueue. This command creates the queue on the local Open MQ instance, which is running on localhost, port 7676. We can use -b to ask imqcmd to communicate with a remote server. The following command creates a topic named smsTopic on a remote server (replace the host and port with your own); remember that the default username and password for the default broker are admin.


imqcmd -b remotehost:7676 create dst -n smsTopic -t t


imqcmd is a very complete and feature rich command; we can also use it to monitor the physical destinations. A sample monitoring command looks like:


imqcmd metrics dst -m rts -n smsQueue -t q

Sample output for this command is similar to figure 2, which shows the message consumption and production rates of smsQueue. The notable part of the command is the -m argument, which determines which type of metrics we want to see. The different types of observable metrics for physical destinations include:

§              con, which shows destination consumer information

§              dsk, disk usage by this destination

§              rts, which, as already said, shows message rates

§              ttl, which shows message totals; it is the default metric type if none is specified


Figure 2 sample output for OpenMQ monitoring command


There are some other operations which we may perform on destinations:

§              Listing destinations: imqcmd list dst

§              Emptying a destination: imqcmd purge dst -n ase -t q

§              Pausing, resuming a destination: imqcmd pause dst -n ase -t q

§              Updating a destination: imqcmd update dst -t q -n smsQueue -o "maxNumProducers=5"

§              Query destination information: imqcmd query dst -t q -n smsQueue

§              Destroy a destination: imqcmd destroy dst -n smsTopic -t t
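The destination operations above can be strung together into a small life cycle script. This is a sketch: it assumes imqcmd is on the PATH and a broker with default credentials is running on localhost:7676, and it skips otherwise; the -f flag suppresses imqcmd's interactive confirmation prompts.

```shell
#!/bin/sh
# Sketch: create, inspect, pause, resume, and destroy a test queue.
if command -v imqcmd >/dev/null 2>&1; then
  imqcmd create dst -n testQueue -t q -o "maxNumMsgs=100" -u admin -f
  imqcmd query dst -n testQueue -t q -u admin -f
  imqcmd pause dst -n testQueue -t q -u admin -f
  imqcmd resume dst -n testQueue -t q -u admin -f
  imqcmd destroy dst -n testQueue -t q -u admin -f
  SKETCH_STATUS=ran
else
  echo "imqcmd not found; skipping (sketch only)"
  SKETCH_STATUS=skipped
fi
```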

2.2 Broker administration and management

We mentioned that when we install Open MQ it creates a default broker, which starts when we run imqbrokerd without any additional parameters. An Open MQ installation can have multiple brokers running on different ports with different configurations. We can create a new broker by executing the following command, which creates the broker and starts it, ready to accept further commands and incoming messages.

imqbrokerd -port 7677 -name broker02


In table 1 we talked about the var/instances directory, in which all configuration files related to brokers reside. If you look at this folder after executing the above command you will see two directories named imqbroker and broker02, which correspond to the default and the newly created broker. Table 2 shows the important artifacts residing inside the broker02 directory.


Table 2 Brokers important files and directories

File or directory name

Description

etc/accesscontrol.properties

Configurations related to access controls, roles, and the authentication source and mechanism are stored here.

etc/passwd

Users and passwords for accessing Open MQ are stored here.

fs370 (directory)

Broker file based message stores, physical destinations, and so on.

log (directory)

Open MQ log files.

props/config.properties

All configurations related to this broker are stored inside this file: configurations like services, security, clustering, and so on.

lock (file)

Exists when the broker is running and shows the broker's name, host, and port.


Open MQ has a very intuitive, layered configuration mechanism. First of all there are some default configurations which apply to the Open MQ installation; there is an installation configuration file which we can use to override the default configuration; then each broker has its own configuration file which can override the installation configuration parameters; and finally, we can override broker (instance) level configurations by passing their values to imqbrokerd when starting a broker. Once we create a broker we can change its configuration by editing the file mentioned in table 2. Table 3 lists the other mentioned configuration files and their default locations on your hard disk.

Table 3 Open MQ configuration files path and descriptions

Configuration file location

Configuration file description

lib/props/broker/default.properties*

Determines the default configuration parameters.

lib/props/broker/install.properties*

Determines the installation configuration parameters.

props**/config.properties

The instance (broker) level configuration file.

* We discussed this directory in table 1

** This directory was discussed in table 2
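The highest-precedence level in the chain above, passing a property directly to imqbrokerd, can be sketched as follows. This assumes imqbrokerd is on the PATH (it skips otherwise) and that the -bgnd flag is available to run the broker in the background; the overridden property here, imq.log.level, is just an illustration.

```shell
#!/bin/sh
# Sketch: start broker02 while overriding one configuration property
# from the command line; -D values take precedence over the default,
# installation, and instance level configuration files for this run only.
if command -v imqbrokerd >/dev/null 2>&1; then
  imqbrokerd -port 7677 -name broker02 -Dimq.log.level=DEBUG -bgnd
  SKETCH_STATUS=ran
else
  echo "imqbrokerd not found; skipping (sketch only)"
  SKETCH_STATUS=skipped
fi
```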

Now that we understand how to create a broker, let's see how we can administer and manage one. To manage a broker we use the imqcmd utility, which we also discussed in the previous section for destination administration.

There are several levels of monitoring data associated with a broker, including incoming and outgoing messages and packets (-m ttl), message and packet rates (-m rts), and finally operational statistics like connections, threads, and heap size (-m cxn). The following command returns message and packet rates for a broker running on localhost, listening on port 7677; the statistics update every 10 seconds.

imqcmd metrics bkr -b localhost:7677 -m rts -int 10 -u admin

As you can see, we passed the username with the -u parameter, so the server will not ask for the username again. We cannot pass the associated password directly on the command line; instead we can use a plain text file which includes the associated password. The password file should contain:

imq.imqcmd.password=admin

The default password for the Open MQ administrator user is admin, and as you can see the password file is a simple properties file. Table 4 lists some other common broker related tasks along with a description and a sample command.
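Creating such a password file and pointing imqcmd at it can be sketched as follows; the file name passfile.txt is arbitrary, and restricting its permissions is a sensible precaution since it holds a clear text password.

```shell
#!/bin/sh
# Create a properties-style password file for imqcmd and lock it down.
printf 'imq.imqcmd.password=admin\n' > passfile.txt
chmod 600 passfile.txt
cat passfile.txt

# imqcmd can then authenticate without prompting, for example:
#   imqcmd metrics bkr -b localhost:7677 -m rts -u admin -passfile passfile.txt
```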

Table 4 Broker administration and management


Description and sample command

pause and resume

Makes the broker stop or resume accepting connections on all services.

Quiesce and unquiesce

Issuing quiesce on a broker causes it to stop accepting new connections while it continues serving already connected clients; the unquiesce command returns it to normal mode.

imqcmd quiesce bkr -b localhost:7677 -u admin -passfile passfile.txt

Update a broker

Updates a broker's attributes, such as the log level.

imqcmd update bkr -o "imq.log.level=ERROR"

Shutdown broker

imqcmd shutdown bkr -b localhost:7677 -time 90 -u admin

The sample command shuts down the broker after 90 seconds; during this time it only serves already established connections.

Query broker

Shows broker information

./imqcmd query bkr -b localhost:7677 -u admin



3 Open MQ security

After getting a sense of the Open MQ basics, we can look at its security model so we can set up a secure system based on Open MQ's messaging capability. We will discuss connection authentication, to ensure that a user is allowed to create a connection either as a producer or a consumer, and access control, or authorization, to check whether the connected user can send a message to a specific queue or not. Securing the transport layer is a responsibility of the messaging server administrators; Open MQ supports SSL and TLS for preventing message tampering and providing the required level of encryption, but we will not discuss it, as it is out of this article's scope.

Open MQ provides support for two types of user repositories out of the box, along with facilities to use JAAS modules for authentication. The user information repositories which can be used out of the box are a flat file user repository and a directory server with an LDAP communication interface.

3.1 Authentication

Authentication happens before a new connection is established. To configure Open MQ to perform authentication, we should provide an authentication source, and edit the configuration files to enable authentication and point it at our prepared user information repository.

Flat file authentication

Flat file authentication relies on a flat file which contains the username, encoded password, and role name; each line of this file contains these properties for one user.


The default location of each broker's password file is etc/passwd under the broker's home directory. Later, when we use the imqusermgr utility, we either edit this file or the file we specified using the related properties mentioned in table 5.

To configure a broker instance to perform authentication before establishing any connection, we can either use imqcmd to update the broker's properties or directly edit its configuration file. To configure broker02, which we created earlier, to authenticate before establishing a connection, we need to add the properties listed in table 5 to the config.properties file located inside the props directory of the broker's home directory; the broker directory structure and configuration files were listed in tables 2 and 3.

We can ignore the two latter attributes as we want to use the default file in its default path. So, shut down the broker, add the listed attributes to its configuration file, and start it again. Now we can simply use imqusermgr to perform CRUD operations on the user repository associated with our target broker.


Table 5 required changes to enable connection authentication for broker02

Required information by broker

Corresponding property in configuration file

What kind of user information repository we want to use


What is password encoding


Path to directory containing password file


Path to password file which contains passwords


Now that we have configured the broker to perform authentication, we need a mechanism to add, remove, or update the user information repository content. Table 6 shows how we can use the imqusermgr utility to perform CRUD operations on broker02.

Table 6 Flat file user information repository management using imqusermgr

CRUD operation

Sample command

Create user

Add a user to the user group; the other groups are admin and anonymous.

imqusermgr add -u user01 -p password01 -i broker02 -g user


List users

imqusermgr list -i broker02

Update a user

Change the user status to inactive.

imqusermgr update -u user01 -a false -i broker02

Remove a user

Remove user01 from broker02

imqusermgr delete -u user01 -i broker02

Now create another user as follows:

imqusermgr add -u user02 -p password02 -i broker02 -g user

We will use these two users when defining authorization rules, and later on to see how we can connect to Open MQ from Java code when authentication is enabled.


All the other utilities that we have discussed need Open MQ to be running and are able to perform the required actions on remote Open MQ instances, but this command only works on local instances and does not require the instance to be running.

3.2 Authorization

Now that we have enabled authentication, which results in the broker checking user credentials before establishing a connection, we may need to check the connected user's permissions before we let it perform a specific operation like connecting to a service, putting a message into a queue (acting as a producer), subscribing to a topic (acting as a consumer), browsing a physical destination, or automatically creating physical destinations. All these permissions can be defined using a very simple syntax which is shown below:


resourceType.resourceVariant.operation.access.principalType=principals


In this syntax we have six variable elements, which are described in table 7.

Table 7 different elements in each Open MQ authorization rule




resourceType

Type of resource to which the rule applies:

connection: Connections

queue: Queue destinations

topic: Topic destinations

resourceVariant

Specific resource (connection service type or destination) to which the rule applies.

An asterisk (*) may be used as a wild-card character to denote all resources of a given type: for example, a rule beginning with queue.* applies to all queue destinations.

operation

Operation to which the rule applies. This syntax element is not used for resourceType=connection.

access

Level of access authorized:

allow: Authorize user to perform operation

deny: Prohibit user from performing operation

principalType

Type of principal (user or group) to which the rule applies:

user: Individual user

group: User group

principals

List of principals (users or groups) to whom the rule applies, separated by commas.

An asterisk (*) may be used as a wild-card character to denote all users or all groups: for example, a rule ending with user=* applies to all users.
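To make the six syntax elements concrete, the following small helper splits a rule string into the elements of table 7. This is an illustrative sketch only, not part of the Open MQ API; the class name RuleParser is hypothetical, and destination auto-create rules (which omit the resourceVariant element) are not handled.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative helper, not part of the Open MQ API: splits an authorization
// rule into the elements described in table 7. Connection rules carry no
// 'operation' element, so the parser skips it for resourceType=connection.
public class RuleParser {
    public static Map<String, String> parse(String rule) {
        String[] sides = rule.split("=", 2);      // principals follow '='
        String[] dotted = sides[0].split("\\.");
        Map<String, String> parts = new LinkedHashMap<>();
        int i = 0;
        parts.put("resourceType", dotted[i++]);
        parts.put("resourceVariant", dotted[i++]);
        if (!"connection".equals(parts.get("resourceType"))) {
            parts.put("operation", dotted[i++]);
        }
        parts.put("access", dotted[i++]);
        parts.put("principalType", dotted[i]);
        parts.put("principals", sides[1]);
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(parse("queue.smsQueue.produce.allow.user=user01,user02"));
        System.out.println(parse("connection.NORMAL.allow.user=*"));
    }
}
```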


Enabling authorization

To enable authorization we will have to do a bit more than what we did for authentication, because in addition to enabling authorization we need to define role privileges. Open MQ uses a text based file to describe the access control rules. The default path for this file is inside the etc directory of the broker home; if we do not provide a path for this file when configuring Open MQ for authorization, Open MQ will pick up the default file.

In order to enable authorization we need to add the following attributes to one of the configuration files, depending on how widely we want the authorization to be applied. The second attribute is not necessary if we want to use the default file path.

  • To enable authorization: imq.accesscontrol.enabled=true

  • To determine the access control description file path: imq.accesscontrol.file.filename=path/to/access/control/file

Defining access control rules inside the access control file is an easy task; we just need to use the rule syntax and the limited set of variable values to define simple or complex access control rules. Listing 1 shows the rules which we need to add to our access control file, which can either be the default file or another file at a specific path described in the configuration file. If you are using the default access control configuration file, make sure that you comment out all already defined rules by adding a # sign at the beginning of each line.

Listing 1 Open MQ access control rules definition

#connection authorization rules
connection.ADMIN.allow.group=admin                          #1
connection.ADMIN.allow.user=user01                          #1
connection.NORMAL.allow.user=*                              #2
#queue authorization rules
queue.smsQueue.produce.allow.user=user01,user02             #3
queue.smsQueue.consume.allow.user=user01,user02             #4
queue.*.browse.allow.user=*                                 #5
#topic authorization rules
topic.smsTopic.*.allow.group=*                              #6
topic.smsTopic.produce.deny.user=user01                     #7
topic.smsTopic.produce.deny.group=anonymous                 #8
topic.*.consume.allow.user=*                                #9
#Auto creating destinations authorization rules
queue.create.allow.user=*                                   #10
topic.create.allow.group=admin                              #11
topic.create.deny.user=user01                               #12


By adding the content of listing 1 to the access control description file, we are applying the following rules in Open MQ authorization. First of all, we let the admin group connect to the system as administrators, and user01 is the only normal user with the privilege to connect to the system as an administrator #1. We let any user from any group connect as a normal user to the broker #2. At #3 we let the two users that we created before act as producers of the smsQueue destination, and at #4 we let these two users act as its consumers. Because of the rules we defined at #3 and #4, no other user can act as a consumer or producer of any other queue in the system. At #5 we let any user browse any queue in the system. At #6 we allow any operation on smsTopic by any group of users, and later on at #7 we define a rule to deny user01 from acting as an smsTopic producer. As you can see, generalized rules can be overridden by more specific rules, as we did for denying user01 from acting as a producer. At #8 we deny the anonymous group from acting as producers of smsTopic. At #9 we let any user from any group act as a consumer of any topic present in the system. At #10 we allow any user to use the automatic destination creation facility. At #11 we let just one specific group automatically create topics, while at #12 one single user is denied the privilege to automatically create a topic.


If we use an empty access control definition file, no user can connect to the system, so no operation will be possible by any user. Open MQ's default permission for any operation is denial, so if we need to change the default permission to allow, we should add allow rules to the access control definition file.
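The two behaviors just described, denial by default and specific rules overriding general ones, can be sketched in a few lines of Java. This is a toy model for illustration only; Open MQ's real rule evaluator is internal to the broker, and the class and method names here are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

// Toy model (not Open MQ code): demonstrates denial by default, and a
// user-specific rule overriding a wildcard rule for the same resource
// and operation.
public class AccessCheck {
    // Each rule is {resourceVariant, operation, access, principal}.
    public static boolean isAllowed(List<String[]> rules, String resource,
                                    String operation, String user) {
        Boolean wildcardDecision = null;        // no matching rule -> deny
        for (String[] r : rules) {
            if (!r[0].equals(resource) || !r[1].equals(operation)) continue;
            if (r[3].equals(user)) {
                return r[2].equals("allow");    // specific rule wins
            }
            if (r[3].equals("*")) {
                wildcardDecision = r[2].equals("allow");
            }
        }
        return wildcardDecision != null && wildcardDecision;
    }

    public static void main(String[] args) {
        List<String[]> rules = Arrays.asList(
                new String[]{"smsTopic", "produce", "allow", "*"},      // like rule #6
                new String[]{"smsTopic", "produce", "deny", "user01"}); // like rule #7
        System.out.println(isAllowed(rules, "smsTopic", "produce", "user02")); // true
        System.out.println(isAllowed(rules, "smsTopic", "produce", "user01")); // false
        System.out.println(isAllowed(rules, "smsQueue", "consume", "user02")); // false: default deny
    }
}
```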

Now we can simply connect to Open MQ using user01 to perform administrative tasks. For example, executing these commands will create our sample queue and topic in the broker that we created in section 1.2. When imqcmd asks for a username and password, use user01 and password01.


imqcmd -b create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ=true"


imqcmd -b create dst -n smsTopic -t t


As you can see we can simply define very complex authorization rules using a simple text file and its rules definition syntax.

So far, we have tried using a simple text file for user management and authentication purposes, but in addition to a text file, Open MQ can use a more robust and enterprise friendly user repository such as OpenDS. When we use a directory service as the user information repository, we no longer manage it using the Open MQ command line utilities; instead, the enterprise wide administrators perform user management from one central location, and we as Open MQ administrators or developers just define our authorization rules.

In order to configure Open MQ to use a directory server (like OpenDS) as the user repository we need to add the properties listed in table 8 to our broker or installation configuration file.


Table 8 Configuring Open MQ to use OpenDS as the user information repository

Required information by broker

Corresponding property in configuration file

Determine user information repository type


Determine the password exchange encryption; in this case it is base-64 encoding, similar to the directory server's method.


*Directory server host and port


A DN for binding and searching.

imq.user_repository.ldap.principal=cn=gf cn=admin

Provided DN password


User attribute name to compare the username.


**User’s group name


* Multiple directory server addresses can be specified for high availability reasons, for example ldap:// ldap://; each address should be separated from the others by a space.

** The user's group name(s) attribute is vendor dependent and varies from one directory service to another.

There are several other attributes which can be used to tweak LDAP authentication in terms of security and performance, but the essential properties are listed in table 8. Other properties which can be used are described in the Sun Java System Message Queue Administration Guide, available at

The key to use directory service is to know the schema

The key to a successful use of a directory service in any system is knowing the directory service itself and the schema that we are dealing with. For example, different schemas use different attribute names for each property of a directory service object, so before diving into using a directory service implementation make sure that you know both the directory server and its schema.

4 Clustering and high availability

We know that redundancy is one of the high availability enablers. We discussed data and service availability along with horizontal and vertical scalability of software solutions. Open MQ, like any other component of a software solution which drives an enterprise, should be highly available, sometimes in both the data and service layers and sometimes just in the service layer.

The engine behind the JMS interface is the message broker, whose tasks include managing destinations, routing and delivering messages, checking with the security layer, and so on. Therefore, if we can keep the brokers and related services available to the clients, then we have service availability in place. If we additionally manage to preserve the state of queues, topics, durable subscribers, transactions, and so on in the event of a broker failure, then we can provide our clients with high availability in both the service and data layers.

Open MQ provides two types of clustering, conventional clusters and high availability clusters, which can be used depending on the high availability degree required for messaging component in the whole software solution.

Conventional clusters provide service availability but not data availability. If one broker in a cluster fails, clients connected to that broker can reconnect to another broker in the cluster but may be unable to access some data while they are connected to the alternate broker. Figure 3 shows how conventional cluster members and potential clients work together.

Figure 3 Open MQ conventional clustering: one MQ broker acts as the master broker to keep track of changes and propagate them to the other cluster members

As you can see in figure 3, in conventional clusters we have different brokers running with no shared data store. Instead, the cluster configuration is coordinated by a master broker that propagates changes between different cluster members to keep them up to date about destinations and durable subscribers. The master broker keeps newly joined members, or offline members which come back online after a period of time, synchronized with the new configuration information and durable subscribers. Each client is connected to one broker and will use that broker until the broker fails (goes offline, a disaster happens, and so on), at which point the client will connect to another broker in the same cluster.
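As a sketch of how such a cluster is wired together, each member's configuration lists all brokers in the cluster and the agreed master broker. The property names below come from the Message Queue administration guide; the host names and ports are placeholders:

```
#every member lists all brokers in the cluster

#and all members agree on a single master broker
```

Every broker in the conventional cluster carries the same two properties, which is how each member finds its peers and the master.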

High availability clusters provide both service availability and data availability. If one broker in a cluster fails, clients connected to that broker are automatically connected to another broker in the cluster, which takes over the failed broker's store. Clients continue to operate with all persistent data available to the new broker at all times. Figure 4 shows how high availability cluster members and potential clients work together.

Figure 4 Open MQ high availability cluster: no single point of failure, as all information is stored in a highly available database and all cluster members interact with that database

As you can see in figure 4, we have one central highly available database which we rely upon for storing any kind of dynamic data like transactions, destinations, messages, durable subscribers, and so on. This database itself should be highly available, usually a cluster of different database instances running on different machines in different geographical locations. Each client connects to one broker and continues using the same broker until that broker fails; in the event of a broker failure, another broker will take over all of the failed broker's responsibilities, like open transactions and durable subscribers. The client will then reconnect to the broker that took over the failed broker's responsibilities.

All in all we can summarize the differences between conventional clusters and high availability clusters as listed in table 9.

Table 9 comparison between high availability clusters and conventional clusters


Conventional cluster

High availability cluster


Service availability

Yes, but partial when the master broker is not available

Yes


Data availability

No, a failed broker can cause data loss

Yes


Transparent failover recovery

May not be possible if failover occurs during a commit

May not be possible if failover occurs during a commit and the client cannot reconnect to any other broker in the cluster


Configuration

Done by setting appropriate cluster configuration broker properties

Done by setting appropriate cluster configuration broker properties


3rd party software requirement

None

Highly available database

Usually when we are talking about data availability we need to accept a slight performance overhead; remember that we had a similar condition with GlassFish HADB backed clusters.

We said that high availability clusters use a database to store dynamic information; some databases tested with Open MQ include Oracle RAC, MySQL, Derby, PostgreSQL, and HADB. We usually use a highly available database to maximize data availability and reduce the risk of losing data.

We can configure a highly available messaging infrastructure by following a set of simple steps. First, create as many brokers as you need in your messaging infrastructure, as discussed in section 1.2. In the second step we need to configure the brokers to use a shared database for dynamic data storage and to act as cluster members. To perform this step we need to apply some changes to each broker's configuration file; these changes consist of adding the content of listing 2 to the broker's configuration file.

Listing 2 Required properties for making a broker highly available

#data store configuration                        #1

imq.persist.jdbc.dbVendor=mysql                             #2

imq.persist.jdbc.mysql.driver=com.mysql.jdbc.Driver         #2

imq.persist.jdbc.mysql.user=privilegedUser                  #2

imq.persist.jdbc.mysql.needpassword=true                    #2

imq.persist.jdbc.mysql.password=dbpassword                  #2


#cluster related configuration

imq.cluster.ha=true                                         #3

imq.cluster.clusterid=e.cluster01                           #3

imq.brokerid=e.broker02                                     #4




At #1 we specify that the data store is of type jdbc rather than Open MQ managed binary files; for high availability clusters it must be jdbc instead of file. At #2 we provide the configuration parameters Open MQ requires to connect to the database; we can replace mysql with oracle, derby, hadb, or postgresql if we want to. At #3 we configure this broker as a member of a high availability cluster named e.cluster01, and finally at #4 we set the unique name of this broker. Make sure that each broker has a unique name in the entire cluster and that different clusters using the same database are uniquely identified.
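For example, a second member of the same cluster would repeat the data store and cluster properties of listing 2 and change only the broker id (the id e.broker03 below is illustrative):

```
#same data store (#1, #2) and cluster (#3) properties as in listing 2
imq.cluster.ha=true
imq.cluster.clusterid=e.cluster01
#the only per-broker difference is the unique broker id
imq.brokerid=e.broker03
```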

Now that we have a database server available for our messaging infrastructure's high availability and we have configured all of our brokers to use this database, the last step is creating the database that Open MQ is going to use to store the dynamic data. To create the initial schema we should use the following command:

./imqdbmgr create all -b broker_name

This command creates the database and the required tables, and initializes the database with topology related information. As you can see, we passed a broker name to the command; this means that we want imqdbmgr to use that specific broker's configuration file to create the database. As you remember, we can have multiple brokers in the same Open MQ installation, and each broker can operate completely independently from the other brokers.

The imqdbmgr command has many uses, including upgrading storage from old Open MQ versions, creating backups, restoring backups, and so on. To see a complete list of its subcommands and parameters, execute the command with the -h parameter.

Now you should have a good understanding of Open MQ clustering and high availability. In the following sections we will discuss how we can use Open MQ from a Java SE application, and later on we will see how Open MQ is related to GlassFish and the GlassFish clustering architecture.




5 Open MQ management using JMX

Open MQ provides a very rich set of JMX MBeans that expose the administration and management interface of Open MQ. These MBeans can also be used to monitor Open MQ using JMX. Generally, any task which we can do using imqcmd can be done using JMX code or a JMX console like the JDK's Java Monitoring and Management Console (jconsole) utility.

Administrative tasks that we can perform using JMX include managing brokers, services, destinations, consumers, producers, and so on. Meanwhile, management and monitoring are possible using JMX; for example, we can monitor destinations using our own code, or change the broker configuration dynamically using our own code or a JMX console. Some use cases of the JMX connection channel include:

  • We can include JMX code in our JMS client application to monitor application performance and, based on the results, reconfigure the JMS objects we use to improve performance.
  • We can write JMX clients that monitor the broker to identify use patterns and performance problems, and we can use the JMX API to reconfigure the broker to optimize performance.
  • We can write a JMX client to automate regular maintenance tasks, rolling upgrades, and so on.
  • We can write a JMX application that constitutes your own version of imqcmd, and we can use it instead of imqcmd.

To connect to an Open MQ broker using a JMX console we can use service:jmx:rmi:///jndi/rmi://host:port/server as the connection URL pattern. Listing 3 shows how we can use Java code to pause the jms service temporarily.
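The URL pattern itself can be assembled and sanity-checked with the standard library alone. The helper below is an illustrative sketch (the class name is hypothetical, and the host and port values are placeholders, not a real broker address):

```java
import java.net.MalformedURLException;
import javax.management.remote.JMXServiceURL;

// Builds an Open MQ JMX connector URL following the pattern
// service:jmx:rmi:///jndi/rmi://host:port/server given in the text.
public class JmxUrlBuilder {
    public static JMXServiceURL brokerUrl(String host, int port)
            throws MalformedURLException {
        return new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/server");
    }

    public static void main(String[] args) throws Exception {
        // "localhost" and 7676 are illustrative placeholders
        System.out.println(brokerUrl("localhost", 7676));
    }
}
```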



Listing 3 Pausing the jms service using Java code, JMX, and the admin connection factory

AdminConnectionFactory  acf = new AdminConnectionFactory();             #1

acf.setProperty(AdminConnectionConfiguration.imqAddress, "");                                                       #2

JMXConnector  jmxc = acf.createConnection("admin", "admin");          #3

MBeanServerConnection  mbsc = jmxc.getMBeanServerConnection();           #4

ObjectName  serviceConfigName = MQObjectName.createServiceConfig("jms"); #5

mbsc.invoke(serviceConfigName, ServiceOperations.PAUSE, null, null);    #6

jmxc.close();                                                           #7


This sample code uses the admin connection factory, which brings some dependencies on Open MQ classes and requires you to add imqjmx.jar to your classpath. This file is located in the Open MQ installation's lib directory. At #1 we create an administration connection factory; this class is an Open MQ helper class which makes this sample code depend on Open MQ libraries. At #2 we configure the factory to give us a connection to a non-default host. At #3 we set the credentials which we want to use to make the connection, and at #4 we get the connection to the MBean server. At #5 we get the jms service MBean, at #6 we invoke one of its operations, and finally we close the connection at #7.

Listing 4 shows a pure JMX sample code for reading MaxNumProducers attributes of a smsQueue.

Listing 4 Reading smsQueue attributes using pure JMX code

HashMap   environment = new HashMap();                       #1

String[]  credentials = new String[] {"admin", "admin"};   #1

environment.put (JMXConnector.CREDENTIALS, credentials);     #1

JMXServiceURL  url;

url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://"); #2

JMXConnector  jmxc = JMXConnectorFactory.connect(url, environment); #3

MBeanServerConnection  mbsc = jmxc.getMBeanServerConnection(); #4

ObjectName  destConfigName = MQObjectName.createDestinationConfig(DestinationType.QUEUE, "smsQueue"); #5

Integer  attrValue= (Integer)mbsc.getAttribute(destConfigName,DestinationAttributes.MAX_NUM_PRODUCERS); #6

System.out.println( "Maximum number of producers: " + attrValue );

jmxc.close();                                                        #7


At #1 and #2 we prepare the parameters that JMX needs to create a connection, at #3 we create the JMX connection, at #4 we get a connection to the Open MQ MBean server, at #5 we get the MBean representing our smsQueue, at #6 we get the value of the attribute which represents the maximum number of permitted producers, and finally we close the connection at #7.

To learn more about the JMX interfaces and programming model for Open MQ you can check the Sun Java System Message Queue 4.2 Developer's Guide for JMX Clients, which is located at

6 Using Open MQ

The Open MQ messaging infrastructure can be accessed from a variety of channels and programming languages; we can perform messaging tasks using Java, C, and virtually any programming language capable of performing HTTP communication. When it comes to Java, we can access the broker functionalities using either JMS or JMX.

Open MQ directly provides APIs for accessing the broker using C and Java, but for other languages it takes a different approach by providing an HTTP gateway for communicating with any language like Python, JavaScript (AJAX), C#, and so on. This approach is called the Universal Messaging System (UMS), which was introduced in Open MQ 4.3. In this section we only discuss Java and Open MQ; the other supported mechanisms and languages are out of scope. You can find more information about them at

6.1 Using Open MQ from Java

We may use Open MQ either directly or through application server managed resources; here we just discuss how we can use Open MQ from Java SE, as using the JMS service from Java EE is straightforward.

Open MQ provides a broad range of functionality wrapped in a set of JMS compliant APIs; you can perform the usual JMS operations like producing and consuming messages, or you can gather metrics related to different resources using the JMS API. Listing 5 shows a sample application which communicates with a cluster of Open MQ brokers. Running the application will result in sending a message and consuming the same message. In order to execute listings 5 and 6 you will need to add imq.jar, imqjmx.jar, and jms.jar to your classpath. These files are inside the imq installation's lib directory.

Listing 5 Producing and consuming messages from a Java SE application

public class QueueClient implements MessageListener {      #1
   public void startClientConsumer() throws JMSException {
      com.sun.messaging.ConnectionFactory connectionFactory = new com.sun.messaging.ConnectionFactory();
      connectionFactory.setProperty(com.sun.messaging.ConnectionConfiguration.imqAddressList, "mq://,mq://");          #2
      connectionFactory.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true"); #3
      connectionFactory.setProperty(ConnectionConfiguration.imqReconnectAttempts, "5"); #3
      connectionFactory.setProperty(ConnectionConfiguration.imqReconnectInterval, "500"); #3
      connectionFactory.setProperty(ConnectionConfiguration.imqAddressListBehavior, "RANDOM"); #3
      javax.jms.QueueConnection queueConnection = connectionFactory.createQueueConnection("user01", "password01"); #4
      javax.jms.Session session = queueConnection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);               #5
      javax.jms.Queue smsQueue = session.createQueue("smsQueue");
      javax.jms.MessageProducer producer = session.createProducer(smsQueue);
      Message msg = session.createTextMessage("A sample sms message");
      producer.send(msg);                 #6
      javax.jms.MessageConsumer consumer = session.createConsumer(smsQueue);
      consumer.setMessageListener(this);         #7
      queueConnection.start();                #8
   }

   public void onMessage(Message sms) {
      try {
         String smsContent = ((javax.jms.TextMessage) sms).getText(); #9
      } catch (JMSException ex) {
         ex.printStackTrace();
      }
   }
}

As you can see in listing 5, we used the JMS programming model to communicate with Open MQ for producing and consuming messages. At #1 the QueueClient class implements the MessageListener interface. At #2 we create a connection factory which connects to one of the cluster members; at #3 we define the behavior of the connection factory in relation to the cluster members. At #4 we provide a privileged user to connect to the brokers. At #5 we create a session which automatically acknowledges received messages. At #6 we send a text message to the queue. At #7 we set the QueueClient instance as the message listener for our consumer. At #8 we start receiving messages from the server, and finally at #9 we consume the message in the onMessage method, which is the MessageListener interface's method for processing incoming messages.

We said that Open MQ provides functionality which lets us retrieve monitoring information using the JMS APIs. This means that we can receive JMS messages containing Open MQ metrics. But before we can receive such messages we should know where they are published and how we should configure Open MQ to publish them.

First we need to add some configuration to enable gathering metrics and to set the interval at which these metrics should be gathered; to do this we need to add the following properties to one of the configuration files which we discussed in table 5.





Now that we have enabled metric gathering, Open MQ will gather broker, destination list, JVM, queue, and topic metrics and send each type of metric to a pre-configured topic. The broker gathers and sends the metrics messages based on the provided interval. Table 10 shows the topic name for each type of metric information.


Table 10 Topic names for each type of metric information.


Topic Name


Broker metrics: information on connections, message flow, and volume of messages in the broker.


Java Virtual Machine metrics: information on memory usage in the JVM.


A list of all destinations on the broker, and their types.


mq.metrics.destination.queue.queueName

Destination metrics for a queue of the specified name. Metrics data includes the number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the queueName variable.


mq.metrics.destination.topic.topicName

Destination metrics for a topic of the specified name. Metrics data includes the number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the topicName variable.

Now that we know which topics we should subscribe to for our metrics, we should think about the security of these topics and about which privileged users can subscribe to them. To configure authorization for these topics we can simply define some rules in the access control file which we discussed in section 3.2. The syntax for defining authorization for these resources is similar to:

topic.mq.metrics.destination.queue.smsQueue.consume.allow.user=user01,user02

Now that we have the security in place we can write some sample code to retrieve smsQueue metrics; listing 6 shows how.

Listing 6 Retrieving smsQueue metrics using broker metrics messages

public void onMessage(Message metricsMessage) {
        try {
            MapMessage mapMsg = (MapMessage) metricsMessage;
            String metrics[] = new String[11];
            int i = 0;
            metrics[i++] = Long.toString(mapMsg.getLong("numMsgsIn"));
            metrics[i++] = Long.toString(mapMsg.getLong("numMsgsOut"));
            metrics[i++] = Long.toString(mapMsg.getLong("msgBytesIn"));
            metrics[i++] = Long.toString(mapMsg.getLong("msgBytesOut"));
            metrics[i++] = Long.toString(mapMsg.getLong("numMsgs"));
            metrics[i++] = Long.toString(mapMsg.getLong("peakNumMsgs"));
            metrics[i++] = Long.toString(mapMsg.getLong("avgNumMsgs"));
            metrics[i++] = Long.toString(mapMsg.getLong("totalMsgBytes") / 1024);
            metrics[i++] = Long.toString(mapMsg.getLong("peakTotalMsgBytes") / 1024);
            metrics[i++] = Long.toString(mapMsg.getLong("avgTotalMsgBytes") / 1024);
            metrics[i++] = Long.toString(mapMsg.getLong("peakMsgBytes") / 1024);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

Listing 6 only shows the onMessage method of the message listener class; the rest of the code is trivial and very similar to listing 5, with slight changes like subscribing to a topic named mq.metrics.destination.queue.smsQueue instead of smsQueue.

A list of all possible metrics for each type of resource, which are properties of the received map message, can be found in chapter 20 of the Sun Java System Message Queue 4.2 Developer's Guide for Java Clients, which is located at

7 GlassFish and Open MQ


GlassFish, like other Java EE application servers, needs messaging functionality behind its JMS implementation, and it uses Open MQ as the message broker for this purpose. GlassFish supports Java Connector Architecture 1.5 and uses this specification to integrate with Open MQ. If you take a closer look at table 1.4 you can see that, by default, different GlassFish installation profiles use different ways of communicating with the JMS broker, ranging from an embedded instance to a local instance or an Open MQ server running on a remote host. Embedded is the only mode in which the Open MQ broker runs in the same process as GlassFish itself. In local mode, starting and stopping the Open MQ server is done by the application server: GlassFish starts the associated Open MQ server when GlassFish starts and stops it when GlassFish stops. Finally, remote mode means that starting and stopping the Open MQ server should be done independently, and an administrator or an automated script should take care of those tasks.

We already know how to set up a GlassFish cluster, both with in-memory replication and as a highly available cluster with an HADB backend for persisting transient information such as sessions, JMS messages, and so on. In this section we discuss how GlassFish in-memory and HADB-backed clusters relate to Open MQ in more detail, to give you a better understanding of the overall architecture of a highly available solution.

First let’s see what happens when we set up an in-memory cluster: after we configure the cluster, we have several instances, each of which uses a local Open MQ broker instance for messaging tasks. When we start the cluster, all GlassFish instances start, and upon each instance’s startup the corresponding Open MQ broker starts as well.

Remember that we discussed conventional clusters, which need no backend database for storing transient data and instead use one broker as the master broker to propagate changes and keep the other broker instances up to date. In GlassFish in-memory replicated clusters, the Open MQ broker associated with the first GlassFish instance that we create acts as the master broker. A sample deployment diagram for an in-memory replicated cluster and its relation to Open MQ is shown in figure 5.

Figure 5 Open MQ and GlassFish in an in-memory replicated cluster of GlassFish instances. Open MQ works as a conventional cluster, which results in one broker acting as the master broker.


When it comes to enterprise profiles, we usually use an independent, highly available cluster of Open MQ brokers with a database backend, alongside a highly available cluster of GlassFish instances backed by an HADB cluster. We discussed that each GlassFish instance can have multiple Open MQ brokers to choose from when it creates a connection to the broker. To configure GlassFish to use multiple brokers of a cluster, we can add all available brokers to the GlassFish instance and then let GlassFish manage how the load is distributed between the broker instances. To add multiple brokers to a GlassFish instance, follow these steps:

  • Open the GlassFish administration console.
  • Navigate to the instance configuration node or the default configuration node, depending on whether you want to change the configuration of the cluster’s instances or of a single instance.
  • Navigate to Configuration> Java Message Service> JMS Hosts in the navigation tree.
  • Remove the default_JMS_host.
  • Add as many hosts as you want this instance or cluster to use; each host represents a broker in your cluster.
  • Navigate to Configuration> Java Message Service and change the field values as shown in table 11.


Table 11 GlassFish configuration for using a standalone JMS cluster

Field / Value and description

Type

We discussed the different integration types earlier in this section; here we use a remote, standalone cluster of brokers, so the type should be REMOTE.

Startup Timeout

The time that GlassFish waits during startup before giving up on JMS service startup; if it gives up, no JMS service will be available.

Start Arguments

Any arguments that can be passed to the imqbrokerd command can be passed using this field.

Reconnect, Reconnect Interval, Reconnect Attempts

We should enable reconnection and set a meaningful number of retries for connecting to the brokers and a meaningful wait time between reconnection attempts.

Default JMS Host

Determines which broker is used to execute and communicate administrative commands on the cluster, such as creating a destination.

Address List Behavior

Determines how GlassFish selects a broker when it creates a connection; the selection can be either random or based on the first available broker in the host list. The first broker has the highest priority.

Address List Iterations

How many times GlassFish should iterate over the host list to establish a connection if no broker is available in the first iteration.

MQ Scheme

Please refer to table 12.

MQ Service

Please refer to table 12.


As you can see, configuring GlassFish to use a cluster of Open MQ brokers is very easy. You may wonder how this configuration affects your MDBs or a JMS connection that you acquire from the application server. When you create a connection using a connection factory, GlassFish checks the JMS service configuration and returns a connection whose host address is the address of a healthy broker. The broker selection is based on the Address List Behavior value: GlassFish picks either a healthy broker at random or the first healthy host starting from the top of the host list.
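The effect of the Address List Behavior setting can be modeled in a few lines of plain Java. This is an illustration of the selection behavior described above, not GlassFish’s actual implementation, and the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative model of Address List Behavior: "priority" picks the first
// healthy broker in list order, "random" picks any healthy broker at random.
public class BrokerSelector {

    static String selectBroker(List<String> hosts, List<Boolean> healthy,
                               String behavior, Random rnd) {
        if ("random".equals(behavior)) {
            // Collect the healthy candidates and pick one at random
            List<String> candidates = new ArrayList<>();
            for (int i = 0; i < hosts.size(); i++) {
                if (healthy.get(i)) candidates.add(hosts.get(i));
            }
            return candidates.isEmpty() ? null : candidates.get(rnd.nextInt(candidates.size()));
        }
        // "priority": the first healthy host in the list wins
        for (int i = 0; i < hosts.size(); i++) {
            if (healthy.get(i)) return hosts.get(i);
        }
        return null;
    }
}
```

With a host list of two brokers where only the second is healthy, both strategies end up on the second broker; they differ only when several brokers are healthy at once.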

Open MQ is one of the most feature-complete message broker implementations available on the open source and commercial markets. Among its many features is support for multiple connection protocols, which accommodates the wide variety of clients that may need to connect to it. Two concepts are introduced to cover this connection protocol flexibility:

  • Open MQ Service: The different connection handling services implemented to support a variety of connection protocols. All available connection services are listed in table 12.
  • Open MQ Scheme: Determines the connection scheme between an Open MQ client, a connection factory, and the brokers. The supported schemes are listed in table 12.

The full syntax for a message service address is scheme://address-syntax, and before using any of the schemes we should ensure that its corresponding service is already enabled. By setting a broker’s imq.service.activelist property, you can configure it to run any or all of these connection services. The value of this property is a list of connection services to be activated when the broker starts up; if the property is not specified explicitly, the jms and admin services are activated by default.
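The scheme://address-syntax form can be illustrated with a small helper. This is an illustrative sketch, not part of the Open MQ client API; the host names and ports used here are placeholders:

```java
import java.util.List;

// Sketch of the scheme://address-syntax form of an Open MQ message service
// address, and of a comma-separated address list combining several brokers.
public class MessageServiceAddress {

    static String address(String scheme, String host, int port) {
        return scheme + "://" + host + ":" + port;
    }

    // An address list is a comma-separated list of such addresses
    static String addressList(List<String> addresses) {
        return String.join(",", addresses);
    }
}
```

For example, address("mq", "broker1.example.com", 7676) yields "mq://broker1.example.com:7676"; joining several such addresses gives the kind of list that a client uses to reach a cluster of brokers.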

Table 12 Available schemes and connection services in Open MQ 4.3

Scheme / Connection services / Description

mq

jms and ssljms

Uses the broker’s port mapper to assign a port dynamically for either the jms or ssljms connection service.

jms

jms and admin

Bypasses the port mapper and connects directly to a specified port, using the jms connection service.

ssljms

ssljms and ssladmin

Makes an SSL connection to a specified port, using the ssljms connection service.

httpjms

httpjms

Makes an HTTP connection to the Open MQ tunnel servlet at a specified URL, using the httpjms connection service.

httpsjms

httpsjms

Makes an HTTPS connection to the Open MQ tunnel servlet at a specified URL, using the httpsjms connection service.


We usually enable only the services we need and leave the others disabled. Enabling each service requires its own measures for firewall configuration and authorization.


Performance consideration

Performance is always a hurdle in any large-scale system because of the relatively large number of components involved in the system architecture. Open MQ is a high-performance messaging system; however, you should know that performance differs greatly depending on many factors, including:

  • The network topology
  • The transport protocols used
  • Quality of service
  • Topics versus queues
  • Durable versus reliable messaging (with reliable messaging some buffering takes place, but if a consumer is down long enough the messages are discarded)
  • Message timeout
  • Hardware, network, JVM, and operating system
  • The number of producers and the number of consumers
  • The distribution of messages across destinations, along with message size



Messaging is one of the most basic and fundamental requirements in every large-scale application. So far in this article you have learned what Open MQ is, how it works, what its main components are, and how it can be administered either with Java code and JMX or with its command line utilities. You learned how a Java SE application can connect to Open MQ through the JMS interfaces and act as a consumer or producer. We discussed how to configure Open MQ to perform access control management. Finally, you learned how Open MQ broker instances can be created and configured to work in a mission-critical system.

Architecting a system needs a wide knowledge of technologies, COTS products, projects, and standards.

When we start working on a new project as an architect, we basically deal with a set of requirements, and our architecture should act as a foundation for the design and implementation of those requirements in the form of a software system that lets the customer fulfill its requirements in a better and more efficient way.

Preparing the architecture for a software system requires the architect not only to be familiar with the domain, but also to be well aware of the new technologies, frameworks, COTS products, and standards available, both for the domain he is working in and for the development platform that will realize the architecture as a working piece of software.

Knowing a platform is the foundation for knowing the related technologies, standards, and popular frameworks around that platform; but to know the COTS or open source projects in the domain we are going to work in, we need some experience implementing software in similar fields, or we should have done some research in a similar domain.

Having enough experience in architecture and knowing the domain, the platform, and the free or commercial enablers are the necessities for creating a good architecture, but they are not enough, because of the advancements in deployment models, which one can select for developing and deploying the system independently of the domain itself. Knowledge of and experience with component models, modularity frameworks, integration standards and solutions, and deployment models form the next set of necessities for creating a good architecture.

Finally, I think being passionate about doing something plays a big role in being successful at it. So, passion is the most notable requirement for an architect; without it, the architect will not be able to achieve his goal, which is nothing less than preparing a good architecture and realizing it as good software.