Solaris fault administration using the fmadm command

In this article we will study how to use the fmadm command to get the list of faulty components along with detailed information about each fault.

Before starting we should have a command console open; then we can proceed with using the fmadm command. The most basic form of using fmadm is its faulty subcommand, as follows:

# fmadm faulty

When there is no error in the system, this command shows nothing and exits normally, but with a faulty component the output is different. For example, in the following sample we have a faulty ZFS pool because some of its underlying devices are missing.

Starting from the top we have:
  • Identification record: This record consists of the timestamp, a unique event ID, a message ID letting us know which knowledge repository article we can refer to for learning more about the problem and troubleshooting it, and finally the fault severity, which can be Minor or Major.
  • Fault class: This field tells us which device hierarchy is causing the fault.
  • Affects: Tells us which component of our system is affected and how. In this instance some devices are missing, and therefore the fault manager takes the zpool out of service.
  • Problem in: Shows more details about the root of the problem; in this case, the device ID.
  • Description: This field refers us to a knowledge base article discussing this type of fault.
  • Response: Shows what action(s) were executed by the fault manager to repair the problem.
  • Impact: Describes the effect of the fault on the overall system stability and on the component itself.
  • Action: A quick tip on the next step the administrator should follow to troubleshoot the problem. This step is fully described in the knowledge base article we were referred to in the description field.

The following figure shows the output of proceeding with the suggested action.

As we can see, the same article we were referred to is mentioned here again. Two of the three devices have failed, and fpool had no replica for either of those failed devices to replace them automatically.

If we had a mirrored pool and one of the three devices went out of service, the system could automatically take corrective action and continue working in a degraded status until we replace the faulty device.

The fault management framework is a pluggable framework consisting of diagnosis engines and subsystem agents. Agents and diagnosis engines contain the logic for assessing a problem, performing corrective actions if possible, and filing the relevant fault record in the fault database.

To see a list of agents and engines plugged into the fault management framework we can use the config subcommand of the fmadm command. The following figure shows the output of this command.

As we can see in the figure, there are two engines deployed with OpenSolaris: eft and zfs-diagnosis. The eft engine, standing for Eversholt Fault Diagnosis Language, is responsible for assessing and analyzing hardware faults, while zfs-diagnosis is a ZFS-specific engine which analyzes and diagnoses ZFS problems.

The fmadm utility is a powerful tool which can perform much more than what we discussed. Here are a few other tasks we can perform using fmadm.

We can use the repaired subcommand of the fmadm utility to notify the FMD about a fault being repaired, so it changes the component status and allows the component to be enabled and utilized again.

For example to notify the FMD about repairing the missing underlying device of the ZFS pool we can use the following command.

# fmadm repaired  zfs://pool=fpool/vdev=7f8fb1c77433c183

Using the rotate subcommand, we can rotate the log files created by the FMD when we want to keep a log file in a specific state or when we want to start with a fresh log, as shown below.

# fmadm rotate errlog
# fmadm rotate fltlog

The fltlog and errlog log files reside in the /var/fm/fmd/ directory and store all event information regarding faults and the errors causing them.

To learn more about the fmadm command we can use the man pages and the short help provided with the distribution. The following commands show how to invoke the man page and the short help message.

# man fmadm
# fmadm --help
A good starting resource is the FM community wiki page.

Manage, Administrate and Monitor GlassFish v3 from Java code using AMX & JMX

Management is one of the most crucial parts of an application server's set of functionalities. Development of the application which we deploy into the server happens once, with minor development iterations during the software lifecycle, but management is a lifetime task. One of the very powerful features of the GlassFish application server is the set of administration and management channels it provides for different levels of administrators and for developers who want to extend the application server's administration and management interfaces.

GlassFish, as an application server capable of serving mission-critical and large-scale applications, benefits from several administration channels, including the CLI, the web-based administration console, and finally the possibility to manage the application server using the standard Java management extension, JMX.

Not only does GlassFish fully expose its management functionalities as JMX MBeans, it also provides a much easier way to manage the application server using local objects which proxy the JMX MBeans. These local objects are provided as the AMX APIs, which lift the need to learn JMX for administrators and developers who want to interact with the application server from code.

GlassFish provides very powerful monitoring APIs in the form of AMX MBeans, which let developers and administrators monitor any aspect of anything inside the application server using Java code, without needing to understand the JMX APIs or the complexity of monitoring factors and statistics gathering. These monitoring APIs allow developers to monitor a bulk of Java EE functionalities together, or just a single attribute of a single configuration piece.

GlassFish's self-management capability is another powerful feature based on the AMX and JMX APIs; it lets administrators easily automate daily tasks which would otherwise consume a significant amount of time. Self-management can manage the application server dynamically by monitoring it at runtime and changing its configuration dynamically based on predefined rules.

1 Java Management eXtension (JMX)

JMX, native to the Java platform, was introduced to give Java developers a standard, easy-to-learn way of managing and monitoring their Java applications and Java-enabled devices. As architects, designers, and developers of Java applications, which can be as small as an in-house invoice management system or as big as a running stock exchange system, we need a way to expose the management of our software to industry-accepted management software, and JMX is the answer to this need.

1.1 What is JMX?

JMX is a part of the Java Standard Edition; it has been present since the early days of the Java platform and has seen many enhancements during the platform's evolution. The JMX-related specifications define the architecture, design patterns, APIs, and services in the Java programming language for managing and monitoring applications and Java-enabled devices.

Using the JMX technology, we can develop Java classes which perform management and monitoring tasks and expose a set of their functionalities or attributes by means of an interface; these are later exposed to JMX clients through specific JMX services. The objects we use to perform and expose management functionalities are called Managed Beans, or MBeans for short.

In order for MBeans to be accessible to JMX clients, which use them to perform management tasks or gather monitoring data, they need to be registered in a registry which later lets our JMX client application find and initialize them. This registry is one of the fundamental JMX services and is called the MBean server.

Now that we have our MBeans registered with a registry, we need a way to let clients communicate with the running application that registered them in order to execute the MBeans' operations. This part of the system is called the JMX connectors, which let us communicate with the agent from a remote or local management station. The JMX connector and adapter API provides a two-way converter which can transparently connect to a JMX agent over different protocols, giving management software a standard way to communicate with JMX agents regardless of the communication protocol.

1.2 JMX architecture

JMX benefits from a layered architecture heavily based on interfaces, providing independence between the layers in terms of how each layer works and how data and services are provided to each layer by the layer beneath it.

We can divide the JMX architecture into three layers. Each layer relies only on the layer directly below it and is not aware of its upper layer's functionalities. These layers are the instrumentation, agent, and management layers. Each layer provides services, either for other layers, for in-JVM clients, or for remote clients running in other JVMs. Figure 1 shows the different layers of the JMX architecture.

Figure 1 JMX layered architecture and each layer's components

Instrumentation layer

This layer contains MBeans and the resources that MBeans are intended to manage. Any resource that has a Java object representative can be instrumented by MBeans. MBeans can change the values of the object's attributes or call its operations, which can affect the resource that this particular Java object represents. In addition to MBeans, the notification model and the MBean metadata objects are categorized in this layer. There are two different types of MBeans for different use cases:

Standard MBeans: A standard MBean consists of an MBean interface, which defines the exposed operations and properties (using getters and setters), and the MBean implementation class. The implementation class and interface naming must follow a standard naming pattern for standard MBeans. There is another type of standard MBean, called MXBeans, which lifts the requirement of following the naming pattern. The standard MBean naming pattern for the MBean interface is ClassNameMBean, with ClassName as the implementation class. For MXBeans, the naming pattern for the interface is AnythingMXBean, and the implementation class can have any name. We will discuss this naming matter in more detail later on.

Dynamic MBeans: A dynamic MBean implements the DynamicMBean interface instead of implementing a static interface with a set of predefined methods. Dynamic MBeans rely on metadata classes that represent the attributes and operations they expose. MBean client applications call generic getters and setters whose implementation must resolve the attribute or operation name to its intended behavior. Faster implementation of JMX management MBeans for an already completed application and the amount of information provided by the MBean metadata classes are two benefits of dynamic MBeans.
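As a minimal sketch of this contract, the following dynamic MBean resolves a hypothetical Status attribute and reset operation by name; these names are illustrative, not part of any real application:

```java
import javax.management.*;

public class SimpleDynamicMBean implements DynamicMBean {
    private String status = "running";

    // Generic getter: the attribute name is resolved at runtime.
    public Object getAttribute(String name) throws AttributeNotFoundException {
        if ("Status".equals(name)) return status;
        throw new AttributeNotFoundException(name);
    }

    // Generic setter: same name-based resolution.
    public void setAttribute(Attribute attribute) throws AttributeNotFoundException {
        if ("Status".equals(attribute.getName())) status = (String) attribute.getValue();
        else throw new AttributeNotFoundException(attribute.getName());
    }

    public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String n : names) {
            try { list.add(new Attribute(n, getAttribute(n))); }
            catch (AttributeNotFoundException ignored) { }
        }
        return list;
    }

    public AttributeList setAttributes(AttributeList attributes) {
        AttributeList applied = new AttributeList();
        for (Object o : attributes) {
            Attribute a = (Attribute) o;
            try { setAttribute(a); applied.add(a); }
            catch (AttributeNotFoundException ignored) { }
        }
        return applied;
    }

    // Operations are also dispatched by name.
    public Object invoke(String actionName, Object[] params, String[] signature) {
        if ("reset".equals(actionName)) { status = "running"; return null; }
        throw new IllegalArgumentException(actionName);
    }

    // The metadata object is what clients introspect instead of a static interface.
    public MBeanInfo getMBeanInfo() {
        MBeanAttributeInfo statusAttr = new MBeanAttributeInfo(
                "Status", String.class.getName(), "Current status", true, true, false);
        MBeanOperationInfo reset = new MBeanOperationInfo(
                "reset", "Reset the status", new MBeanParameterInfo[0],
                "void", MBeanOperationInfo.ACTION);
        return new MBeanInfo(getClass().getName(), "Example dynamic MBean",
                new MBeanAttributeInfo[] { statusAttr }, null,
                new MBeanOperationInfo[] { reset }, null);
    }
}
```

Once registered with an MBean server, this MBean behaves exactly like a standard one from a client's point of view; only the way attribute names are resolved differs.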

Notification Model: JMX technology introduces a notification model based on the Java event model. Using this model, MBeans can emit notifications, and any interested party can receive and process them; interested parties can be management applications or other MBeans.

MBean Metadata Classes: These classes contain the structures to describe all components of an MBean's management interface, including its attributes, operations, notifications, and constructors. For each of these, the MBeanInfo class includes a name, a description, and its particular characteristics (for example, whether an attribute is readable, writable, or both; for an operation, the signature of its parameters and return type).
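These metadata classes are easy to explore against the JVM's own platform MBeans. The following sketch (the choice of the java.lang:type=Runtime MBean is purely for illustration) asks the platform MBean server for an MBeanInfo and lists the attributes it describes:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MetadataDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // Every registered MBean can describe itself through an MBeanInfo instance.
        MBeanInfo info = mbs.getMBeanInfo(new ObjectName("java.lang:type=Runtime"));

        for (MBeanAttributeInfo attribute : info.getAttributes()) {
            // Name, type, and readability come straight from the metadata.
            System.out.println(attribute.getName() + " : " + attribute.getType()
                    + (attribute.isReadable() ? " (readable)" : ""));
        }
    }
}
```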

Agent layer

This layer contains the JMX agents, which are intended to expose the MBeans to management applications. The JMX agent implementation specifications fall under this layer. Agents are usually located in the same JVM as their MBeans, but this is not an obligation. A JMX agent consists of an MBean server and some helper services which facilitate MBean operations. Management software accesses the agent through an adapter or connector, depending on the management application's communication protocol.

MBean Server: This is the MBean registry, where management applications look to find which MBeans are available to them. The registry exposes each MBean's management interface, not its implementation class. The registry provides two interfaces for accessing MBeans, one for remote clients and one for clients in the same JVM. MBeans can be registered by other MBeans, by the management application, or by the agent itself. MBeans are distinguished by a unique name, which we will discuss more in the AMX section.

Agent Services: There are some helper services for MBeans and the agent that facilitate certain functionalities. These services include: a timer, a dynamic class loader, monitors to observe numeric or string-based properties of MBeans, and finally a relation service which defines associations between MBeans and enforces the cardinality of the relation based on predefined relation types.

Management layer

The management tier contains the components required for developing management applications capable of communicating with JMX agents. Such components provide an interface for a management application to interact with JMX agents through a connector. This layer may contain multiple adapters and connectors to expose the JMX agent and its attached MBeans to different management platforms like SNMP, or to expose them in a semantically rich format like HTML.

JMX related JSRs

Six different JSRs have been defined for the JMX-related specifications during the past 10 years. These JSRs include:

JMX 1.2 (JSR 3): The first JMX specification, whose 1.2 version was integrated into the platform in J2SE 5.0

J2EE Management (JSR 77): A set of standard MBeans to expose application servers' resources, like applications, domains, and so on, for management purposes.

JMX Remote API 1.0 (JSR 160): Interaction with JMX agents using RMI from a remote location.

Monitoring and Management Specification for the JVM (JSR 174): A set of APIs and standard MBeans for exposing the JVM's management to any interested management software.

JMX 2.0 (JSR 255): The new version of JMX for Java SE 7, which introduces the use of generics, annotations, extended monitors, and so on.

Web Services Connector for JMX Agents (JSR 262): Defines a specification for using Web Services to access JMX instrumentation remotely.

1.3 JMX benefits

What are the benefits of JMX that led the JCP to define so many JSRs for it, and why did we not follow another management standard like IEEE Std 828-1990? The reason lies in the following JMX benefits:

Java needs an API for integration with emerging requirements and technologies that is open for extension and closed to change; JMX achieves this through its layered architecture.

JMX is based on already well-defined and proven Java technologies, like the Java event model, for providing some of the required functionalities.

The JMX specification and implementation let us use it in any Java-enabled software, at any scale.

Almost no change is required for an application to become manageable by JMX.

Many vendors use Java to enable their devices; JMX provides one standard to manage both software and hardware.

You can imagine many other benefits for JMX which are not listed above.

1.4 Managed Beans (MBeans)

We discussed that there are generally two types of MBeans which we can choose from to implement our instrumentation layer. Dynamic MBeans are a bit more complex and we would rather skip them in this crash course, so in this section we will discuss how MXBeans can be developed and used, locally and remotely, to prepare ourselves for understanding and using AMX to manage GlassFish.

We said that we should write an interface which defines all exposed operations of the MBean, both for MXBeans and for standard MBeans. So first we will write the interface. Listing 1 shows the WorkerIF interface; it has an operation which is supposed to change the configuration of worker threads, and two properties which return the current number of worker threads and the maximum number of worker threads. The current workers count is read-only, while the maximum number of workers is both readable and updatable.

Listing 1 The MXBean interface WorkerIF


@MXBean
public interface WorkerIF {

    public int getWorkersCount();

    public int getMaxWorkers();

    public void setMaxWorkers(int newMaxWorkers);

    public int stopAllWorkers();
}


I did not tell you that we can forget about the naming convention for MXBean interfaces if we intend to use the @MXBean annotation. As you can see, we simply marked the interface as an MXBean interface and defined some setter and getter methods, along with one operation which will stop some workers and return the number of stopped workers.

The implementation of our MXBean interface just implements some getters and setters along with a dummy operation which prints a message to standard output.

Listing 2 The Worker MXBean implementation

public class Worker implements WorkerIF {

    private int maxWorkers;

    private int workersCount;

    public Worker() {
    }

    public int getWorkersCount() {
        return workersCount;
    }

    public int getMaxWorkers() {
        return maxWorkers;
    }

    public void setMaxWorkers(int newMaxWorkers) {
        this.maxWorkers = newMaxWorkers;
    }

    public int stopAllWorkers() {
        System.out.println("Stopping all workers");
        return 5;
    }
}

We did not follow any naming convention because we are using an MXBean along with the annotation. If it were a standard MBean, we would have had to name the interface WorkerMBean and the implementation class Worker.

Now we should register the MBean with an MBean server to make it available to management software. Listing 3 shows how we can develop a simple agent which hosts the MBean server along with the registered MBeans.


Listing 3 How MBeans server works in a simple agent named WorkerAgent

public class WorkerAgent {

    public WorkerAgent() {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();     #1

        Worker workerBean = new Worker();                                 #2

        ObjectName workerName = null;

        try {
            workerName = new ObjectName("article:name=firstWorkerBean");  #3

            mbs.registerMBean(workerBean, workerName);                    #4

            System.out.println("Enter to exit...");
            System.in.read();                                             #5
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String argv[]) {
        WorkerAgent agent = new WorkerAgent();
        System.out.println("Worker Agent is running...");
    }
}

At #1 we get the platform MBean server to register our MBean with; the platform MBean server is the JVM's default MBean server. At #2 we initialize an instance of our MBean.

At #3 we create a new ObjectName for our MBean. Each JVM may use many libraries, each of which can register tens of MBeans, so MBeans must be uniquely identified in the MBean server to prevent naming collisions. The ObjectName follows a format for representing an MBean name that ensures it is shown in the correct place in the management tree and removes any possibility of naming conflicts. An ObjectName is made up of two parts separated by a colon: a domain and a list of key-value pairs. In our case the domain portion is article and the key property is name=firstWorkerBean.

At #4 we register the MBean with the MBean server. At #5 we just make sure that our application will not close automatically, so that we can examine the MBean.
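The domain and key-property split described above can be verified directly with the ObjectName class itself:

```java
import javax.management.ObjectName;

public class ObjectNameDemo {
    public static void main(String[] args) throws Exception {
        // The part before the colon is the domain; the rest is a key-property list.
        ObjectName workerName = new ObjectName("article:name=firstWorkerBean");

        System.out.println(workerName.getDomain());             // prints "article"
        System.out.println(workerName.getKeyProperty("name"));  // prints "firstWorkerBean"
    }
}
```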

Sample code for this chapter is provided along with the book; you can run the samples by following the readme.txt file included in the chapter06 directory of the source code bundle. Using the sample code, you just use Maven to build and run the application and JConsole to monitor it, but the behind-the-scenes procedure is described in the following paragraphs.

Now let's run the application and see how our MBean appears in a management console, namely the standard JConsole bundled with the JDK. To enable the JMX management agent for local access we need to pass the -Dcom.sun.management.jmxremote property to the JVM. This property lets the management console use inter-process communication to communicate with the management agent. Now that you have the application running, you can run JConsole: open a terminal window and run jconsole. When JConsole opens, it shows a window which lets us select either a remote or a local JVM to connect to. Just scan the list of local JVMs to find WorkerAgent under the Name column; select it and press Connect to connect to the JVM. Figure 2 shows the New Connection window of JConsole which we talked about.

Figure 2 The New Connection window of JConsole

Now you will see how JConsole shows different aspects of the selected JVM, including memory meters, thread status, loaded classes, a JVM overview which includes the OS overview, and finally the MBeans. Select the MBeans tab and you will see a tree of all MBeans registered with the platform MBean server, along with your own MBean. You should remember what I said about the ObjectName class; the tree clearly shows how a domain includes its child MBeans. When you expand the article node you will see something similar to figure 3.

Figure 3 JConsole navigation tree and effect of ObjectName format on the MBean placing in the tree

And if you click on the stopAllWorkers node, the content panel of JConsole will load a page similar to figure 4, which also shows the result of executing the method.

Figure 4 The content panel of JConsole after selecting stopAllWorkers method of the WorkerMBean

That was how we can connect to a local JVM; in section 1.6 we will discuss connecting to a JVM process from a remote location to manage the system using JMX.

1.5 JMX Notification

The JMX API defines a notification and notification-subscription model to enable MBeans to generate notifications signaling a state change, a detected event, or a problem.

To generate notifications, an MBean must implement the NotificationEmitter interface or extend NotificationBroadcasterSupport. To send a notification, we need to construct an instance of the Notification class or one of its subclasses, like AttributeChangeNotification, and pass the instance to NotificationBroadcasterSupport.sendNotification.

Every notification has a source. The source is the object name of the MBean that generated the notification.

Each notification has a sequence number. This number can be used to order notifications coming from the same source when order matters and there is a risk of the notifications being handled in the wrong order. The sequence number can be zero, but preferably the number increments for each notification from a given MBean.
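As a sketch of this model (the worker-style names here are illustrative, not from the book's sample code), an MBean can extend NotificationBroadcasterSupport and emit an AttributeChangeNotification with an incrementing sequence number whenever an attribute changes:

```java
import javax.management.AttributeChangeNotification;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

public class NotifyingWorker extends NotificationBroadcasterSupport {
    private long sequenceNumber = 0;
    private int maxWorkers = 10;

    public int getMaxWorkers() { return maxWorkers; }

    public void setMaxWorkers(int newMaxWorkers) {
        int oldValue = maxWorkers;
        maxWorkers = newMaxWorkers;
        // The notification carries the source, a per-source sequence number,
        // a timestamp, and the old and new attribute values.
        Notification notification = new AttributeChangeNotification(
                this, ++sequenceNumber, System.currentTimeMillis(),
                "MaxWorkers changed", "MaxWorkers", "int",
                oldValue, newMaxWorkers);
        sendNotification(notification);
    }

    public static void main(String[] args) {
        NotifyingWorker worker = new NotifyingWorker();
        // Any interested party can subscribe; here we just print the message.
        worker.addNotificationListener(
                (notification, handback) -> System.out.println(
                        notification.getSequenceNumber() + ": " + notification.getMessage()),
                null, null);
        worker.setMaxWorkers(20);   // prints "1: MaxWorkers changed"
    }
}
```

In a real MBean this class would also implement its management interface; the listener here stands in for a management application subscribed to the emitter.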


1.6 Remote management

To manage our worker application from a remote location using a JMX console like JConsole, we just need to ensure that an RMI connector is open in our JVM and that all appropriate settings, like the port number, authentication mechanism, transport security, and so on, are provided. So, to run our sample application with remote management enabled, we can pass the following parameters to the java command.

-Dcom.sun.management.jmxremote.port=8686                        #1
-Dcom.sun.management.jmxremote.authenticate=true                #2
-Dcom.sun.management.jmxremote.password.file=passwordFile.txt   #3
-Dcom.sun.management.jmxremote.ssl=false                        #4

At #1 we determine the port on which the RMI connector listens for incoming connections. At #2 we enable authentication in order to protect our management portal. At #3 we provide the path to a password file which contains the list of usernames and passwords in plain text; in the password file, each username and password pair is placed on one line with a space between them. At #4 we disable SSL for the transport layer.

To connect to a JVM started with these parameters, we should choose Remote in the New Connection window of JConsole and provide service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi as the Remote Process URL, along with one of the credentials we defined in passwordFile.txt.
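To see the connector machinery without configuring a real remote JVM, the following sketch starts an RMI connector server in-process and connects a client to the address it publishes; the printed address plays the same role as the Remote Process URL in JConsole. (The standalone service:jmx:rmi:// form, without a JNDI/RMI registry path, is used here purely for illustration.)

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class ConnectorDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // Ask for an RMI connector server; the transport details are chosen for us.
        JMXConnectorServer server = JMXConnectorServerFactory.newJMXConnectorServer(
                new JMXServiceURL("service:jmx:rmi://"), null, mbs);
        server.start();

        // This address is what a remote client (or JConsole) would use to connect.
        JMXServiceURL address = server.getAddress();
        System.out.println("Connector listening on: " + address);

        JMXConnector connector = JMXConnectorFactory.connect(address, null);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("MBeans visible to the client: " + connection.getMBeanCount());
        } finally {
            connector.close();
            server.stop();
        }
    }
}
```

The worker agent uses the out-of-the-box management agent instead, but the client side (JMXConnectorFactory plus MBeanServerConnection) is the same in both setups.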

2 Application Server Management eXtension (AMX)

GlassFish fully adheres to J2EE management (JSR 77) in terms of exposing the application server configuration as JMX MBeans, but dealing with JMX is neither easy nor pleasant for all developers, so Sun has included a set of client-side proxies over the JSR 77 MBeans, plus additional MBeans of their own, to make the presence of JMX completely hidden from developers who want to develop management extensions for GlassFish. This API set is named AMX, and usually we use it to develop management rules.

2.1 J2EE management (JSR 77)

Before we dig into AMX we need to know what JSR 77 is and how it helps us use JMX and AMX for managing the application server. The JSR 77 specification introduces a set of MBeans and services which let any JMX-compatible client manage a Java EE container's deployed objects and Java EE services.

The specification defines a set of MBeans which models all Java EE concepts in a hierarchy of MBeans. It determines which attributes and operations each MBean must have and what the effect of calling a method on the managed objects should be. The specification defines a set of events which should be exposed to JMX clients by the MBeans, and it also defines a set of attribute statistics which should be exposed by the JSR 77 MBeans to JMX clients for performance monitoring.

Services and MBeans provided by JSR 77 cover:

Monitoring performance statistics of managed artifacts in the Java EE container, for managed objects like EJBs, Servlets, and so on.

Event subscription for important events of the managed objects, like stopping or starting an application.

Managing the state of different standard managed objects of the Java EE container, like changing an attribute of a JDBC connection pool or undeploying an application.

Navigation between managed objects.

Managed objects

The artifacts whose management and monitoring JSR 77 exposes to JMX clients include a broad range of Java EE components and services. Figure 5 shows the first level of the managed objects in the managed object hierarchy. As you can see in the figure, all objects inherit four attributes which are later used to determine whether an object's state is manageable, whether the object provides statistics, and whether the object provides events for the important actions it performs.

The objectName attribute, which we discussed before, has the same use and format; for example, amx:j2eeType=X-JDBCConnectionPoolConfig,name=DerbyPool represents a connection pool named DerbyPool in the amx domain, and the j2eeType key shows the MBean's type.

Figure 5 First-level hierarchy of the JSR 77 managed objects which, based on the specification, must be exposed for JMX management

Now that you have seen how broad the scope of JSR 77 is, you may ask what the use of these MBeans is, how they work, and how they can be used.

The simple answer is that as soon as a managed object becomes live in the application server, a corresponding JSR 77 MBean gets initialized for it by the application server's management layer. For example, as soon as you create a JDBC connection pool, a new MBean will appear under the JDBCConnectionPoolConfig node of a connected JConsole, representing the newly created JDBC connection pool. Figure 6 shows DerbyPool under the JDBCConnectionPoolConfig node in JConsole.

Figure 6 The DerbyPool placed under the JDBCConnectionPoolConfig node in JConsole

In figure 5 you can see that we can manage all deployed objects using the JSR 77 exposed MBeans; these objects include J2EE applications and modules, which are shown in figure 7. A deployed application may have EJB, Web, and other modules; the GlassFish management service will initialize a new instance of the appropriate MBean for each module in the deployed application. The management service uses the ObjectName which we discussed before to uniquely identify each MBean instance, and later we will use this unique name to access each deployed artifact independently.

Figure 7 The specification urges implementations to provide MBeans for managing all shown deployable objects

Getting lower into the deployed modules, we have Servlets, EJBs, and so on. The JSR 77 specification provides MBeans for managing EJBs and Servlets. The EJB MBeans inherit from the EJB MBean and include all necessary attributes and methods to manage the different types of EJBs. Figure 8 represents the EJB subclasses.

Figure 8 All MBeans related to EJB management in GlassFish application server, based on JSR 77

Java EE resources are one of the most colorful areas of the Java EE specification, and JSR 77 provides MBeans for managing all standard resources managed by Java EE containers. Figure 9 shows which types of resources are exposed for management by JSR 77.

Figure 9 Java EE resources manageable by the JSR 77 MBeans

All of the Java EE base concepts are covered by JSR 77 in order to make it possible for third-party management solution developers to integrate Java EE application server management into their solutions.

The events propagated by JSR 77

At the beginning of this section we talked about the events that interested parties can receive from the JSR 77 MBeans. These events are as follows for the different types of JSR 77 MBeans.

J2EEServer: An event when the corresponding server enters the RUNNING, STOPPED, or FAILED state

EntityBean: An event when the corresponding entity bean enters the RUNNING, STOPPED, or FAILED state

MessageDrivenBean: An event when the corresponding message-driven bean enters the RUNNING, STOPPED, or FAILED state

J2EEResource: An event when the corresponding J2EE resource enters the RUNNING, STOPPED, or FAILED state

JDBCResource: An event when the corresponding JDBC data source enters the RUNNING, STOPPED, or FAILED state

JCAResource: An event when a JCA connection factory or managed connection factory enters the RUNNING, STOPPED, or FAILED state

The monitoring statistics exposed by JSR 77

The specification urges exposing statistics related to the different Java EE components through the JSR 77 MBeans. The statistics required by the specification include:

Servlet statistics: Servlet-related statistics, including the number of currently loaded Servlets, the maximum number of Servlets that were loaded and active, and so on.

EJB statistics: EJB-related statistics, including the number of currently loaded EJBs, the maximum number of live EJBs, and so on.

JavaMail statistics: JavaMail-related statistics, including the maximum number of sessions, the total count of connections, and so on.

JTA statistics: Statistics for JTA resources, including successful transactions, failed transactions, and so on.

JCA statistics: JCA-related statistics, covering both the non-pooled connections and the connection pools associated with the referencing JCA resource.

JDBC resource statistics: JDBC resource-related statistics for both non-pooled connections and the connection pools associated with the referencing JDBC resource or connection factory. The statistics include the total number of opened and closed connections, the maximum number of connections in the pool, and so on. This is really helpful for finding connection leaks in a specific connection pool.

JMS statistics: JMS-related statistics, including statistics for connection sessions, JMS producers, and JMS consumers.

Application server JVM statistics: JVM-related statistics: information like the sizes of different memory sectors, threading information, class loaders and loaded classes, and so on.

2.2 Remotely accessing JSR 77 MBeans by Java code


Now that we have discussed the details of the JSR 77 MBeans, let's see how we can access the DerbyPool MBean from Java code and then how we can change the attribute which represents the maximum number of connections in the connection pool. Listing 4 shows the sample code, which accesses a GlassFish instance with the default port for the JMX listener (the default port is 8686).

Listing 4 Accessing a JSR 77 MBean for changing DerbyPool’s MaxPoolSize attribute.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteClient {

    private MBeanServerConnection mbsc = null;
    private ObjectName derbyPool;

    public static void main(String[] args) {
        try {
            RemoteClient client = new RemoteClient();            #A
            client.connect();                                    #B
            client.changeMaxPoolSize();                          #C
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void connect() throws Exception {
        JMXServiceURL jmxUrl =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://"); #1
        Map<String, Object> env = new HashMap<String, Object>();
        String[] credentials = new String[]{"admin", "adminadmin"};
        env.put(JMXConnector.CREDENTIALS, credentials);          #2
        JMXConnector jmxc =
            JMXConnectorFactory.connect(jmxUrl, env);            #3
        mbsc = jmxc.getMBeanServerConnection();                  #4
    }

    private void changeMaxPoolSize() throws Exception {
        String query = "amx:j2eeType=X-JDBCConnectionPoolConfig,name=DerbyPool";
        ObjectName queryName = new ObjectName(query);            #5
        Set s = mbsc.queryNames(queryName, null);                #6
        derbyPool = (ObjectName) s.iterator().next();
        mbsc.setAttribute(derbyPool,
            new Attribute("MaxPoolSize", new Integer(64)));      #7
    }
}

#A Instantiate the class

#B Get the JMX connection

#C Change the attribute's value

At #1 we create a URL to the GlassFish JMX service. At #2 we prepare the credentials that we must provide to connect to the JMX service. At #3 we initialize the connector. At #4 we create a connection to GlassFish's MBean server. At #5 we build a query matching our DerbyPool MBean. At #6 we get the result of the query as a set. We are sure that an MBean with the given name exists; otherwise we would have to check whether the set is empty. At #7 we simply change the attribute. If you check the attribute in JConsole, you will see that it has changed there as well.

In the sample code we just update DerbyPool's maximum number of connections to 64, which counts as one of the simplest tasks related to JSR 77, management, and JMX. Using plain JMX for a complex task would bury us under many lines of complex reflection-based code that is hard to maintain and debug.
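To make the verbosity concrete, here is a minimal, self-contained sketch of the same generic JMX style, run against the JVM's own platform MBean server rather than GlassFish. PlainJmxDemo and readAttribute are illustrative names, not part of any GlassFish API; every result comes back as an untyped Object that must be cast by hand.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class PlainJmxDemo {

    // Reads one attribute the generic, untyped JMX way.
    static Object readAttribute(MBeanServerConnection mbsc, String name,
                                String attribute) throws Exception {
        ObjectName pattern = new ObjectName(name);
        Set<ObjectName> names = mbsc.queryNames(pattern, null); // find the MBean
        ObjectName target = names.iterator().next();
        return mbsc.getAttribute(target, attribute);            // untyped result
    }

    public static void main(String[] args) throws Exception {
        // MBeanServer extends MBeanServerConnection, so the in-process
        // platform server stands in for a remote connection here.
        MBeanServerConnection mbsc = ManagementFactory.getPlatformMBeanServer();
        String vmName = (String) readAttribute(mbsc, "java.lang:type=Runtime", "VmName");
        System.out.println(vmName);
    }
}
```

Note that every step (query, iterate, cast) is repeated for each attribute we touch; this is exactly the overhead AMX removes.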

2.3 Application Server Management eXtension (AMX)

Now that you have seen how hard it is to work with JSR 77 MBeans directly, I can tell you that you are not going to use them directly in your management applications, although you could.

What is AMX

In the AMX API, java.lang.reflect.Proxy is used to generate Java objects that implement the various AMX interfaces. Each proxy internally stores the JMX ObjectName of a server-side JMX MBean whose MBeanInfo corresponds to the AMX interface implemented by the proxy.

So, at the same time, we have JMX MBeans that can be used through any JMX-compliant management software, and we have the AMX dynamic proxies, which act as easy-to-use local objects for managing the application server.

The GlassFish administration architecture is based on the concept of an administration domain. An administration domain is responsible for managing the multiple resources that belong to it. A resource can be a cluster of multiple GlassFish instances, a single GlassFish instance, a JDBC connection pool inside an instance, and so on. Hundreds of AMX interfaces are defined to proxy all of the GlassFish managed resources, which themselves are exposed as JSR 77 MBeans, for client-side access. All of these interfaces are placed under one package.

There are several benefits of AMX dynamic proxies over plain JMX MBeans, as follows:

Strongly typed methods and attributes for compile-time type checking

Structural consistency with the domain.xml configuration file

Consistent and structured naming for methods, attributes, and interfaces

The possibility to navigate from a leaf AMX bean up to the DAS

AMX MBeans

AMX defines different types of MBeans for different purposes, namely configuration MBeans, monitoring MBeans, utility MBeans, and JSR 77 MBeans. All AMX MBeans share some common characteristics:

They all implement a common interface, which contains methods and fields for checking the interface type and group, and for reaching the MBean's container and its root domain.

They all have a j2eeType and a name property within their ObjectName. The j2eeType property specifies the interface we are dealing with.

All MBeans that logically contain other MBeans implement the container interface. Using it we can navigate from a leaf AMX bean to the DAS and vice versa. For example, from the domain AMX bean we can get a list of all connection pools or EJB modules deployed in the domain.

JSR 77 MBeans that have a corresponding configuration or monitoring peer expose it using getConfigPeer or getMonitoringPeer. However, there are many configuration and monitoring MBeans that do not correspond to JSR 77 MBeans.

Configuration MBeans

We discussed that there are several types of MBeans in the AMX framework; one of them is the configuration MBeans. Basically, these MBeans represent the content and structure of domain.xml and the other configuration files.

In GlassFish all configuration information is stored in one central repository managed by the DAS (Domain Administration Server). In a single-instance installation, the instance acts as the DAS, while in a clustered installation the DAS's sole responsibility is taking care of the configuration and propagating it to all instances. The information stored in the repository is exposed to any interested party, such as an administration console, through the AMX interfaces.

Any developer familiar with the domain.xml structure will feel very comfortable with the configuration interfaces.

Monitoring MBeans

Monitoring MBeans provide transient monitoring information about all the vital components of the application server. A monitoring interface may or may not provide statistics; if it does, it implements the MonitoringStats interface, the JSR 77-compliant interface for providing statistics.

Utility MBeans

Utility MBeans provide commonly used services to the application server. These MBeans all extend the Utility interface, the Singleton interface, or both, and their interfaces are located in a single package. Notable utility MBeans are listed in table 1.

Table 1 AMX Utility MBeans along with description

MBean interface

Description

Provides information about application server capabilities like clustering support


Provides JMX-like queries which are restricted to AMX MBeans.


Provides network-efficient “bulk” calls whereby many Attributes or Operations in many MBeans may be fetched and invoked in one invocation, thus minimizing network overhead.


Provides buffering and selective dynamic listening for Notifications on AMX MBeans.


Supports uploading and downloading of files to/from the application server.

Java EE Management MBeans

The Java EE management MBeans implement, and in some cases extend, the management hierarchy as defined by JSR 77, which specifies the management model for the whole Java EE platform. All JSR 77 MBeans in the AMX domain offer access to their configuration and monitoring MBeans using the getConfigPeer and getMonitoringPeer methods.

Dynamic Client Proxies

Dynamic Client Proxies are an important part of the AMX API and enhance ease of use for the programmer. JMX MBeans can be used directly through an MBeanServerConnection to the server. However, client proxies greatly simplify access to MBean attributes and operations, offering get/set methods and type-safe invocation of operations. Compiling against the AMX interfaces means that compile-time checking is performed, as opposed to the server-side runtime checking that applies when MBeans are invoked generically through MBeanServerConnection.
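The proxy technique itself is plain java.lang.reflect.Proxy and can be sketched without GlassFish at all. In the following hypothetical miniature, RuntimeView plays the role of an AMX interface over the standard java.lang:type=Runtime MBean, and the handler maps each typed getter onto a generic getAttribute call, which is the essence of what AMX proxies do; all names here are invented for illustration.

```java
import java.lang.management.ManagementFactory;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ProxyDemo {

    // A strongly typed, hypothetical view of the java.lang:type=Runtime MBean.
    interface RuntimeView {
        String getVmName();
        long getUptime();
    }

    static RuntimeView proxyFor(MBeanServerConnection mbsc, ObjectName on) {
        // Map each getXxx() call onto a generic getAttribute("Xxx") call.
        InvocationHandler h = (proxy, method, args) ->
                mbsc.getAttribute(on, method.getName().substring(3));
        return (RuntimeView) Proxy.newProxyInstance(
                RuntimeView.class.getClassLoader(),
                new Class<?>[]{RuntimeView.class}, h);
    }

    public static void main(String[] args) throws Exception {
        // MBeanServer extends MBeanServerConnection, so the in-process
        // platform server stands in for a remote connection here.
        MBeanServerConnection mbsc = ManagementFactory.getPlatformMBeanServer();
        RuntimeView view = proxyFor(mbsc, new ObjectName("java.lang:type=Runtime"));
        System.out.println(view.getVmName() + " up for " + view.getUptime() + " ms");
    }
}
```

The caller gets compile-time checking and no casts; the untyped JMX plumbing is hidden inside the handler.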

See the API documentation for the package and its sub-packages for more information about using proxies. The API documentation explains the use of AMX with proxies. If you are using JMX directly (for example, through MBeanServerConnection), the return types, argument types, and method names might vary as needed to account for the difference between a strongly typed proxy interface and the generic MBeanServerConnection or ObjectName interface.

Changing the DerbyPool attributes using AMX

In listing 4 you saw how we can use JMX and the pure JSR 77 approach to change the attributes of a JDBC connection pool. In this part we are going to perform the same operation using AMX, to see how much easier and more effective AMX is.

Listing 5 Using AMX to change DerbyPool MaxPoolSize attribute

AppserverConnectionSource appserverConnectionSource =
    new AppserverConnectionSource(AppserverConnectionSource.PROTOCOL_RMI,
        "", 8686, "admin", "adminadmin", null, null);              #1
DomainRoot dRoot = appserverConnectionSource.getDomainRoot();      #2
JDBCConnectionPoolConfig cpConf = dRoot.getContainee(
    XTypes.JDBC_CONNECTION_POOL_CONFIG, "DerbyPool");              #3
cpConf.setMaxPoolSize("100");                                      #4

You are not mistaken: those four lines of code are all it takes to change the maximum pool size of DerbyPool.

At #1 we create a connection to the server on which we want to perform our management operation. Several protocols can be used to connect to the application server's management layer. As you can see, when we construct the appserverConnectionSource instance we use AppserverConnectionSource.PROTOCOL_RMI as the communication protocol, to ensure that we will not need JAR files from the OpenDMK project. Two other protocols we can use are AppserverConnectionSource.PROTOCOL_HTTP and AppserverConnectionSource.PROTOCOL_JMXMP. The connection that we made does not use TLS, but we could use TLS to ensure transport security.

At #2 we get the AMX domain root, which later lets us navigate among all AMX leaves: servers, clusters, connection pools, listeners, and so on. At #3 we query for an AMX MBean whose name is DerbyPool and whose j2eeType is equal to XTypes.JDBC_CONNECTION_POOL_CONFIG. At #4 we set a new value for the attribute of our choice, the MaxPoolSize attribute.

Monitoring GlassFish using AMX

The term monitoring comes to developers' and administrators' minds whenever they are dealing with performance tuning, but monitoring can also be used for management purposes, such as automating specific tasks that would otherwise need an administrator to take care of them. An example of this type of monitoring is critical-condition notifications, which can be sent via email, SMS, or any other gateway that the administrators and system managers prefer.

Imagine that you have a system running on top of GlassFish and you want to be notified whenever acquiring connections from a connection pool named DerbyPool is taking longer than 35 seconds.

You also want to store all information related to the connection pool when the pool is close to saturation, for example when there are only 5 connections left to give away before the pool gets saturated.

So we need to write an application that monitors the GlassFish connection pool, checks its statistics regularly, and, if the above criteria are met, sends us an email or an SMS and saves the connection pool information.

AMX provides us with all the required means to monitor the connection pool and be notified when any of the connection pool's attributes or monitoring attributes change, so we will just use our current AMX knowledge along with two new AMX concepts.

The first concept is the AMX monitoring MBeans, which provide us with all statistics about the managed objects they monitor. Using the AMX monitoring MBeans is the same as using other AMX MBeans, such as the connection pool MBean.

The other concept is the notification mechanism that AMX provides on top of the already established JMX notification mechanism. The notification mechanism is fairly simple: we register our interest in some notifications, and we receive them whenever the MBeans emit a notification.
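Because the underlying JMX notification mechanism needs no server, the register-then-receive cycle, including the AttributeChangeNotificationFilter we will use in listing 6, can be sketched as a self-contained demo. NotificationDemo is an invented name, and the attribute names are only illustrative stand-ins for the pool attributes.

```java
import java.util.ArrayList;
import java.util.List;
import javax.management.AttributeChangeNotification;
import javax.management.AttributeChangeNotificationFilter;
import javax.management.NotificationBroadcasterSupport;
import javax.management.NotificationListener;

public class NotificationDemo {

    static final List<String> received = new ArrayList<>();

    static void run() {
        NotificationBroadcasterSupport emitter = new NotificationBroadcasterSupport();

        // Register interest only in NumConnUsed_Current changes.
        AttributeChangeNotificationFilter filter = new AttributeChangeNotificationFilter();
        filter.enableAttribute("NumConnUsed_Current");

        NotificationListener listener = (n, handback) ->
                received.add(((AttributeChangeNotification) n).getAttributeName());
        emitter.addNotificationListener(listener, filter, null);

        // This notification passes the filter...
        emitter.sendNotification(new AttributeChangeNotification(
                "pool", 1L, System.currentTimeMillis(), "pool grew",
                "NumConnUsed_Current", "int", 4, 5));
        // ...and this one is silently dropped by the filter.
        emitter.sendNotification(new AttributeChangeNotification(
                "pool", 2L, System.currentTimeMillis(), "other change",
                "SomeOtherAttribute", "int", 0, 1));
    }

    public static void main(String[] args) {
        run();
        System.out.println(received);   // only the enabled attribute arrives
    }
}
```

The same three steps (create a filter, enable attributes, add the listener) appear unchanged in listing 6, just against a GlassFish MBean instead of a local broadcaster.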

We know that we can configure GlassFish to collect statistics about almost all managed objects by changing the monitoring level from OFF to either LOW or HIGH. In our sample code we will change the monitoring level from our own code and then use the same statistics that the administration console shows to check the connection pool's consumed connections.

Listing 6 shows a sample application that monitors DerbyPool and notifies the administrator whenever acquiring a connection takes longer than accepted. The application saves all statistics information when the connection pool gets close to saturation.


Listing 6 Monitoring a connection pool and notifying the administrator when the connection pool is about to reach its maximum size

public class AMXMonitor implements NotificationListener {              #1

    AttributeChangeNotificationFilter filter;                          #2
    AppserverConnectionSource appserverConnectionSource;
    private int cPoolSize;
    private DomainRoot dRoot;
    JDBCConnectionPoolMonitor derbyCPMon;                              #3
    JDBCConnectionPoolConfig cpConf;

    private void initialize() {
        try {
            appserverConnectionSource = new AppserverConnectionSource(
                AppserverConnectionSource.PROTOCOL_RMI, "", 8686,
                "admin", "adminadmin", null, null);
            dRoot = appserverConnectionSource.getDomainRoot();
            ConfigConfig conf =
                dRoot.getDomainConfig().getConfigConfigMap().get("server-config"); #4
            conf.getMonitoringServiceConfig()
                .getModuleMonitoringLevelsConfig()
                .setJDBCConnectionPool(ModuleMonitoringLevelValues.HIGH);          #4
            cpConf = dRoot.getContainee(
                XTypes.JDBC_CONNECTION_POOL_CONFIG, "DerbyPool");
            cPoolSize = Integer.parseInt(cpConf.getMaxPoolSize());

            filter = new AttributeChangeNotificationFilter();          #2
            filter.enableAttribute("ConnRequestWaitTime_Current");     #2
            filter.enableAttribute("NumConnUsed_Current");             #2

            derbyCPMon = dRoot.getContainee(
                XTypes.JDBC_CONNECTION_POOL_MONITOR, "DerbyPool");     #5
            derbyCPMon.addNotificationListener(this, filter, null);    #5
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public void handleNotification(Notification notification, Object handback) {
        AttributeChangeNotification notif =
            (AttributeChangeNotification) notification;                #6
        if (notif.getAttributeName().equals("ConnRequestWaitTime_Current")) {
            int curWaitTime =
                Integer.parseInt((String) notif.getNewValue());        #7
            if (curWaitTime > 3500) {
                sendNotification("Current wait time is: " + curWaitTime);
            }
        } else {
            int curPoolSize =
                Integer.parseInt((String) notif.getNewValue());        #8
            if (curPoolSize > cPoolSize - 5) {
                saveInfoToFile();
                sendNotification("Current pool size is: " + curPoolSize);
            }
        }
    }

    private void saveInfoToFile() {
        try {
            FileWriter fw = new FileWriter(
                new File("stats_" + new Date() + ".sts"));
            Statistic[] stats =
                derbyCPMon.getStatistics(derbyCPMon.getStatisticNames()); #9
            for (int i = 0; i < stats.length; i++) {
                fw.write(stats[i].getName() + " : " + stats[i].getUnit()); #10
            }
            fw.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    private void sendNotification(String message) {
        // send an email or SMS through the preferred gateway
    }

    public static void main(String[] args) {
        AMXMonitor mon = new AMXMonitor();
        mon.initialize();
    }
}

At #1 we implement the NotificationListener interface, as we are going to use the JMX notification mechanism. At #2 we define an AttributeChangeNotificationFilter, which narrows the notifications to the subset we are interested in; we also add the attributes we care about to the set of non-filtered attribute change notifications. At #3 we define and initialize an AMX MBean proxy that represents DerbyPool's monitoring information; note that we will get an instance-not-found exception if the connection pool has had no activity yet. At #4 we change the monitoring level of JDBC connection pools to ensure that GlassFish gathers the required statistics. At #5 we find our designated connection pool monitoring MBean and add a new filtered listener to it.

The handleNotification method is the only method in the NotificationListener interface that we need to implement. At #6 we cast the received notification to AttributeChangeNotification, as we know the notification is of this type. At #7 we deal with a change in the ConnRequestWaitTime_Current attribute: we get its new value to check our condition (we could also get the old value if we were interested). At #8 we deal with the NumConnUsed_Current attribute and, later on, with calling the saveInfoToFile and sendNotification methods.

At #9 we get the names of all connection pool monitoring factors, and at #10 we simply write each monitoring attribute's name, along with its unit, to a text file.

AMX and Dotted Names

AMX is designed with ease of use and efficiency in mind, so in addition to the standard JMX programming model of getters and setters, we can use another, hierarchical model to access all AMX MBean attributes. In the dotted-name model, each attribute path starts from its root, which is either domain for all configuration MBeans or server for all runtime MBeans. For example, domain.resources.jdbc-connection-pool.DerbyPool.max-pool-size represents the maximum pool size of the DerbyPool we discussed before.

Two interfaces are provided to access the dotted names, either for monitoring or for management and configuration purposes. MonitoringDottedNames is provided to assist with reading attributes; the other interface, ConfigDottedNames, provides write access to attributes in the dotted format. We can get an instance of the former's implementation using dRoot.getMonitoringDottedNames().

3 GlassFish Management Rule

The above sample application is promising, especially when you want tens of rules for automatic management, but running and watching a separate process is not to the taste of many administrators. The GlassFish application server provides a very effective way to deploy such management rules into the application server itself, for the sake of simplicity and integration. The benefits and use cases of Management Rules can be summarized as follows:

Managing complexity by self-configuring based on conditions

Keeping administrators free for complex tasks by automating mundane management tasks

Improving performance by self-tuning under unpredictable run-time conditions

Automatically adjusting the system for availability by preventing problems and recovering from them (self-healing)

Enhancing security measures by taking self-protective actions when security threats are detected

A GlassFish Management Rule is a set of:

Event: An event uses the JMX notification mechanism to trigger actions. Events can range from an MBean attribute change to specific log messages.

Action: Actions are associated with events and are triggered when related events happen. Actions can be MBeans that implement the NotificationListener interface.

When we deploy a Management Rule into GlassFish, GlassFish registers our MBean in the MBean server and registers its interest in the notification types that we determined. Therefore, upon any event that our MBean is registered for, the handleNotification method of our MBean executes. GlassFish provides some predefined event types in which we can register our MBean's interest. These events are as follows:

Monitor events: These types of events trigger an action based on an MBean attribute change.

Notification events: Every MBean that implements the NotificationBroadcaster interface can be the source of this event type.

System events: This is a set of predefined events that come from the internal infrastructure of GlassFish application server. These events include: lifecycle, log, timer, trace, and cluster events.

Now, let's see how we can achieve functionality similar to what our AMXMonitor application provides by using GlassFish Management Rules. First we need to reduce our application to a plain JMX MBean that implements the NotificationListener interface and performs the required action, for example sending an email or SMS, in the handleNotification method.

Changing the application to an MBean should be very easy; we just need to define an MBean interface and then the MBean implementation, which will implement one single method, handleNotification.
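As a sketch, such an MBean could look like the following. PoolWatcher is a hypothetical name, the empty management interface merely satisfies the standard <Name>MBean naming convention, and the lastType field exists only to make the behavior observable.

```java
import javax.management.Notification;
import javax.management.NotificationListener;

// The management interface can stay empty: the standard MBean naming
// convention plus NotificationListener is all the rule engine needs.
interface PoolWatcherMBean {
}

public class PoolWatcher implements PoolWatcherMBean, NotificationListener {

    volatile String lastType;   // kept only so the behavior is observable

    @Override
    public void handleNotification(Notification notification, Object handback) {
        lastType = notification.getType();
        // A real rule's action goes here: send the email or SMS alert.
        System.out.println("rule fired: " + lastType);
    }
}
```

Packaged into a JAR, this one class is the whole "action" side of the rule; the "event" side is configured entirely in the administration console.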

Now that we have the compiled JAR file containing our MBean, we can deploy it using the administration console. Open the GlassFish administration console, navigate to the Custom MBeans node, and deploy the MBean by providing the path to the JAR file and the name of the MBean implementation class. With our MBean deployed into GlassFish, it is available to the class loader for use as the action of a Management Rule, so to create the Management Rule, use the following procedure.

In the navigation tree select the Configuration node, then the Management Rules node. In the content panel select New, use dcpRule as the name, make sure that you tick the Enabled checkbox, select monitor in the event type combo box, and let the events be recorded in the log files. Press Next to navigate to the second page of the wizard.

In the second page, for the Observed MBean field enter amx:X-ServerRootMonitor=server,j2eeType=X-JDBCConnectionPoolMonitor,name=DerbyPool, and for the observed attribute enter ConnRequestWaitTime_Current. For the monitor type select counter. The number type is int, and the initial threshold is 35000, the value that, once exceeded by the monitored attribute, triggers our action.

Scroll down and, for the action, select AMXMonitor, which is the name of the MBean we deployed in the previous step.

You saw that the overall process is fairly simple, but there are some limitations, such as being able to monitor only a single MBean and attribute at a time. Therefore we need to create another Management Rule for the NumConnUsed_Current attribute.

Now that we have reached the end of this article, you should be fairly familiar with JMX, AMX, and GlassFish Management Rules. In the next articles we will use the knowledge gained here to create administration commands and monitoring solutions.

4 Summary

We thoroughly discussed managing GlassFish from Java code, starting with JMX, the foundation of all Java-based management solutions and frameworks. We discussed the JMX architecture, its event model, and the different types of MBeans included in the JMX programming model. We also covered AMX, GlassFish's way of providing its management functionality to client-side applications that are not interested in using the complicated JMX APIs.

We discussed GlassFish management using the AMX APIs to show how much simpler AMX is compared to a plain JMX implementation, and we covered GlassFish monitoring using the AMX APIs.

You saw how we can use GlassFish's self-management and self-administration functionality to automate some tasks and keep administrators free from low-level, repetitive administration and management tasks.

OpenMQ, the Open source Message Queuing, for beginners and professionals (OpenMQ from A to Z)

Talking about messaging implies two basic capabilities around which all other features are built: support for topics and queues, which let a message be consumed either by as many interested consumers as have subscribed, or by just one of the interested consumers.
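The two delivery models can be sketched in a few lines of plain Java. This is not JMS, just a toy illustration of the semantics; DestinationDemo and its methods are invented names.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

public class DestinationDemo {

    // Topic semantics: the message is delivered to every subscriber.
    static int publish(List<Consumer<String>> subscribers, String msg) {
        subscribers.forEach(s -> s.accept(msg));
        return subscribers.size();          // number of deliveries made
    }

    // Queue semantics: each message goes to exactly one consumer.
    static String take(Queue<String> queue) {
        return queue.poll();
    }

    public static void main(String[] args) {
        List<String> seen = new ArrayList<>();
        List<Consumer<String>> subs = List.of(seen::add, seen::add);
        publish(subs, "news");              // both subscribers receive "news"

        Queue<String> jobs = new ArrayDeque<>(List.of("job-1", "job-2"));
        System.out.println("topic deliveries: " + seen.size());
        System.out.println("worker A took " + take(jobs));
        System.out.println("worker B took " + take(jobs));
    }
}
```

A broker such as Open MQ adds persistence, transactions, security, and remote access on top of exactly these two delivery patterns.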

Messaging middleware was present in the industry before Java came along, and each product had its own interfaces for providing messaging services. There was no standard set of interfaces that vendors could comply with to increase compatibility, interoperability, ease of use, and portability. The contribution of the Java community was defining the Java Message Service (JMS) as a standard set of interfaces for interacting with this type of middleware from Java-based applications.

Asynchronous communication of data and events is one of the ever-present communication models in enterprise applications, addressing requirements like long-running process delegation, batch processing, loose coupling of different systems, updating a dynamically growing or shrinking set of clients, and so on. Messaging can be considered one of the important building blocks required by every advanced middleware software stack, as it provides the essentials, such as loose coupling, needed to define an SOA.

Open MQ is an open source project, mostly sponsored by Sun Microsystems, that provides the Java community with a high-performance, cross-platform message queuing system under commercial-usage-friendly licenses, including the CDDL and GPL. Open MQ shares the same code base that Sun Java System Message Queue uses. This shared code base means that any feature available in the commercial product is also available in the open source project; the only difference is that for the commercial one you pay a license fee and in return get professional-level support from Sun engineers, while with the open source alternative you rely on community-provided support instead.

Open MQ provides a broad range of functionality for security, high availability, performance management and tuning, monitoring, multiple language support, and so on, which makes it easier to utilize its base functionality in the overall software architecture.

1 Introducing Open MQ


Open MQ is the umbrella project for the multiple components that form Open MQ and Sun Java System Message Queue, the name of the productized sibling of Open MQ. These components include:

§     Message broker or messaging server

The message broker is the heart of the messaging system. It generally maintains the message queues and topics, manages client connections and their requests, controls client access to queues and topics for either reading or writing, and so on.

§     Client libraries

Client libraries provide APIs that let developers interact with the message broker from different programming languages. Open MQ provides Java and C libraries, in addition to a platform- and programming-language-agnostic interaction mechanism.

§     Administrative tools

Open MQ provides a set of very intuitive and easy-to-use administration tools, including tools for the broker's life cycle management, broker monitoring, destination life cycle management, and so on.

1.1 Installing Open MQ

Since version 4.3, Open MQ provides an easy-to-use installer, which is platform dependent as it is bundled with compiled copies of the client libraries and administration tools. Download the binary copy compatible with your operating system, and make sure that you have JDK 1.5 or above installed before you try to install Open MQ. Open MQ works on virtually any operating system capable of running JDK 1.5 or above.

Beauty of open source

You might be using an operating system for development or deployment that is not covered by the installer; in this case you can simply download the source code archive from the same download page and build the source code yourself. The build instructions, available in Running OpenMQ 4.3 in NetBeans.txt, are straightforward. Even if you cannot manage to build the binary tools, you can still use Open MQ and JMS on that operating system and run your management tools from a supported operating system.


To install Open MQ, extract the downloaded archive and run the installer script or executable file, depending on your operating system. As you can see, the installer is straightforward and does not require any special user interaction beyond accepting some defaults.

After we complete the installation process, we will have a directory structure similar to figure 1 in the installation path we selected during installation.

Figure 1 Open MQ installation directory structure


As you can gather from the figure, we have some metadata directories related to the installer application and a directory named mq that contains the Open MQ related files. Table 1 lists the Open MQ directories along with their descriptions.

Table 1 Important items inside Open MQ installation directory

Directory name

Description

All administration utilities reside here


Required headers for using the Open MQ C APIs


All JAR files and some native dependencies (NSS) reside here


HTTP and HTTPS tunneling support for firewall-restricted configurations, and the Universal Message Service for language-agnostic use of Open MQ


Configuration files for the different broker instances created on this Open MQ installation are stored inside this directory by default


2 Open MQ administration

Every server-side application requires some level of administration and management to keep it up and running, and Open MQ is no exception. As we discussed in the introduction, each messaging system must provide some basic capabilities, including point-to-point and publish/subscribe mechanisms for distributing messages, so basic administration revolves around those same concepts.

2.1 Queue and Topic administration and management

When a broker receives a message, it puts the message in a queue or topic, or discards it. Queues and topics are called physical destinations, as they are the final place within the broker where a message is placed.

In Open MQ we can use two administration utilities for creating queues and topics: imqadmin and imqcmd. The former is a simple Swing application and the latter is a command line utility. To create a queue we can execute the following command in the bin directory of the Open MQ installation.


imqcmd create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ"



This command simply creates a destination of type queue named smsQueue. The queue will not store more than 1000 messages, and if producers try to put more messages in the queue, it will remove the oldest message present to open up space for new ones. Removed messages are placed in a special queue named the Dead Message Queue (DMQ). This command creates the queue on the local Open MQ instance, which is running on localhost port 7676; we can use -b to ask imqcmd to communicate with a remote server. The following command creates a topic named smsTopic on a remote server; remember that the default username and password for the default broker are admin.


imqcmd -b create dst -n smsTopic -t t
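For the queue created earlier, the limitBehavior=REMOVE_OLDEST and useDMQ settings interact as in this plain-Java sketch; maxNumMsgs is scaled down from 1000 to 3 to keep the trace short, and all names here are illustrative, not broker APIs.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class RemoveOldestDemo {

    // Applies limitBehavior=REMOVE_OLDEST: evicted messages land in the DMQ.
    static void offer(Deque<String> queue, List<String> dmq,
                      int maxNumMsgs, String msg) {
        if (queue.size() == maxNumMsgs) {
            dmq.add(queue.removeFirst());   // evict the oldest message
        }
        queue.addLast(msg);
    }

    public static void main(String[] args) {
        Deque<String> smsQueue = new ArrayDeque<>();
        List<String> dmq = new ArrayList<>();
        for (int i = 1; i <= 5; i++) {
            offer(smsQueue, dmq, 3, "msg-" + i);   // maxNumMsgs=3 stands in for 1000
        }
        System.out.println("queue: " + smsQueue);  // [msg-3, msg-4, msg-5]
        System.out.println("dmq:   " + dmq);       // [msg-1, msg-2]
    }
}
```

Producers never block under this policy; the cost is that the oldest unconsumed messages are sacrificed to the DMQ instead.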


imqcmd is a very complete and feature-rich command; we can also use it to monitor the physical destinations. A sample command to monitor a destination looks like:


imqcmd metrics dst -m rts -n smsQueue -t q

Sample output for this command is similar to figure 2, which shows the message production and consumption rates of smsQueue. The notable part of the command is the -m argument, which determines which type of metrics we want to see. The observable metric types for physical destinations include:

§              con, which shows the destination's consumer information

§              dsk, which shows disk usage by this destination

§              rts, which, as already mentioned, shows message rates

§              ttl, which shows message totals; this is the default metric type if none is specified


Figure 2 sample output for OpenMQ monitoring command


There are some other operations that we may perform on destinations:

§              Listing destinations: imqcmd list dst

§              Emptying a destination: imqcmd purge dst -n ase -t q

§              Pausing or resuming a destination: imqcmd pause dst -n ase -t q

§              Updating a destination: imqcmd update dst -t q -n smsQueue -o "maxNumProducers=5"

§              Querying destination information: imqcmd query dst -t q -n smsQueue

§              Destroying a destination: imqcmd destroy dst -n smsTopic -t t

2.2 Brokers administration and management

We discussed that when we install Open MQ it creates a default broker, which starts when we run imqbrokerd without any additional parameters. An Open MQ installation can have multiple brokers running on different ports with different configurations. We can create a new broker by executing the following command, which creates the broker and starts it so it can accept further commands or incoming messages.

imqbrokerd -port 7677 -name broker02


In table 1 we talked about a var/instances directory in which all configuration files related to brokers reside. If you look at this folder after executing the above command, you will see two directories named imqbroker and broker02, which correspond to the default broker and the newly created one. Table 2 shows important artifacts residing inside the broker02 directory.


Table 2 Brokers important files and directories

File or directory name

Description



Configurations related to access control, roles, and the authentication source and mechanism are stored here.


Users and passwords for accessing Open MQ are stored here.

fs370 (directory)

The broker's file-based message store: physical destinations, messages, and so on.

log (directory)

Open MQ log files


All configurations related to this broker are stored inside this file: services, security, clustering, and so on.

lock (file)

Exists while the broker is running; records the broker's name, host, and port.


Open MQ has a very intuitive configuration mechanism. First, a set of default configuration values applies to the Open MQ installation; an installation configuration file can be used to override these defaults; each broker then has its own configuration file, which can override the installation configuration parameters; and finally, we can override broker (instance) level configurations by passing their values to imqbrokerd when starting a broker. When we create a broker we can change its configuration by editing the file mentioned in table 2. Table 3 lists all the other mentioned configuration files and their default locations on disk.
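Conceptually, this layering behaves like ordinary properties overriding; later layers win. The following illustrative Java sketch is our own (not Open MQ code) and simply models the four layers described above:

```java
import java.util.Properties;

// Illustrative sketch of Open MQ's four configuration layers:
// defaults < installation file < broker (instance) file < imqbrokerd command line.
// Each later layer overrides the one beneath it, exactly like Properties.putAll.
public class ConfigLayers {
    public static Properties effective(Properties... layersInPrecedenceOrder) {
        Properties merged = new Properties();
        for (Properties layer : layersInPrecedenceOrder) {
            merged.putAll(layer); // later layers win
        }
        return merged;
    }
}
```

For example, if the installation file set imq.log.level=INFO and the broker's own configuration file set imq.log.level=ERROR, the effective value for that broker would be ERROR.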

Table 3 Open MQ configuration files path and descriptions

Configuration file location

Configuration file description

Located in lib/props/broker* directory, for determining default configuration parameters

Located in lib/props/broker* directory, for determining installation configuration parameters

Located inside props** folder of instance directory

* We discussed this directory in table 2

** This directory discussed in table 4

Now that we understand how to create a broker, let's see how to administer and manage one. To manage a broker we use the imqcmd utility, which was also discussed in the previous section for destination administration.

There are several levels of monitoring data associated with a broker, including incoming and outgoing messages and packets (-m ttl), message and packet incoming and outgoing rates (-m rts), and operational statistics like connections, threads, and heap size (-m cxn). The following command returns message and packet rates for a broker running on localhost, listening on port 7677; the statistics update every 10 seconds.

imqcmd metrics bkr -b -m rts -int 10 -u admin

As you can see, we passed the username with the -u parameter, so the server will not ask for it again. We cannot pass the associated password directly on the command line; instead we can use a plain text file that includes the password. The password file should contain:


The default password for the Open MQ administrator user is admin, and as you can see the password file is a simple properties file. Table 4 lists some other common broker-related tasks along with a description and a sample command.
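As a sketch, assuming the property name documented for imqcmd in the MQ Administration Guide, a minimal passfile.txt would contain a single line:

```properties
# hypothetical passfile.txt contents; admin is the default administrator password
imq.imqcmd.password=admin
```

We would then pass the file with, for example, imqcmd query bkr -u admin -passfile passfile.txt.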

Table 4 Broker administration and management


Description and sample command

pause and resume

Makes the broker stop or resume accepting connections on all services.

Quiescence, un-quiescence

Quiescing a broker causes it to stop accepting new connections while continuing to serve the connections that are already established; the un-quiesce command returns it to normal mode.

imqcmd quiesce bkr -b -u admin -passfile passfile.txt

Update a broker

Updates broker attributes, for example:

imqcmd update bkr -o "imq.log.level=ERROR"

Shutdown broker

imqcmd shutdown bkr -b -time 90 -u admin

The sample command shuts down the broker after 90 seconds; during this time it only serves already established connections.

Query broker

Shows broker information

imqcmd query bkr -b -u admin



3 Open MQ security

Having covered the Open MQ basics, we can look at its security model so we can set up a secure system based on Open MQ's messaging capability. We will discuss connection authentication, which ensures a user is allowed to create a connection either as a producer or a consumer, and access control (authorization), which checks whether the connected user can, for example, send a message to a specific queue. Securing the transport layer is a responsibility of messaging server administrators; Open MQ supports SSL and TLS for preventing message tampering and providing the required level of encryption, but we will not discuss them as they are out of this article's scope.

Open MQ supports two types of user information repositories out of the box, along with facilities for using JAAS modules for authentication. The repositories usable out of the box are a flat-file user information repository and a directory server with an LDAP communication interface.

3.1 Authentication

Authentication happens before a new connection is established. To configure Open MQ to perform authentication, we must provide an authentication source and edit the configuration files to enable authentication and point it at our prepared user information repository.

Flat file authentication

Flat-file authentication relies on a flat file containing a username, encoded password, and role name; each line of this file contains these properties for one user. A sample line looks similar to:


The default location of each broker's password file is etc/passwd inside the broker's home directory. Later, when we use the imqusermgr utility, we edit either this file or the file we specified using the related properties mentioned in table 5.
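As an illustration only (the real encoded password is generated by imqusermgr, and the exact field layout should be checked against the MQ Administration Guide), a line of the flat file has a colon-separated shape like:

```text
user01:<encoded-password>:user:1
```

where the fields are the username, the encoded password, the group (role) name, and the active state.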

To configure a broker instance to perform authentication before establishing any connection, we can either use imqcmd to update the broker's properties or directly edit its configuration file, which is mentioned in table 5. To configure broker02, which we created in section 1.2, to authenticate before establishing a connection, we need to add the properties listed in table 5 to the configuration file located inside the props directory of the broker's home directory; the broker directory structure and configuration files are listed in tables 2 and 3.

We can ignore the two latter attributes because we want to use the default file in its default path. So, shut down the broker, add the listed attributes to its configuration file, and start it again. Now we can simply use imqusermgr to perform CRUD operations on the user repository associated with our target broker.


Table 5 required changes to enable connection authentication for broker02

Required information by broker

Corresponding property in configuration file

What kind of user information repository we want to use


What is password encoding


Path to directory containing password file


Path to password file which contains passwords


Now that we have configured the broker to perform authentication, we need a mechanism to add, remove, and update the contents of the user information repository. Table 6 shows how we can use the imqusermgr utility to perform CRUD operations on broker02.

Table 6 Flat file user information repository management using imqusermgr

CRUD operation

Sample command

Create user

Adds a user to the user group; the other groups are admin and anonymous.

imqusermgr add -u user01 -p password01 -i broker02 -g user


List users

imqusermgr list -i broker02

Update a user

Changes the user's status to inactive.

imqusermgr update -u user01 -a false -i broker02

Remove a user

Remove user01 from broker02

imqusermgr delete -u user01 -i broker02

Now create another user as follows:

imqusermgr add -u user02 -p password02 -i broker02 -g user

We will use these two users in defining authorization rules, and later to see how we can connect to Open MQ from Java code when authentication is enabled.


All the other utilities we discussed require Open MQ to be running, and all of them are able to perform their actions on remote Open MQ instances; imqusermgr, in contrast, works only on local instances and does not require the instance to be running.

3.2 Authorization

After enabling authentication, which results in the broker checking user credentials before establishing a connection, we may need to check the connected user's permissions before letting it perform a specific operation: connecting to a service, putting a message into a queue (acting as a producer), subscribing to a topic (acting as a consumer), browsing a physical destination, or automatically creating physical destinations. All these permissions can be defined using a very simple syntax, shown below:


resourceType.resourceVariant.operation.access.principalType=principals


In this syntax we have six variable elements, which are described in table 7.

Table 7 different elements in each Open MQ authorization rule




Type of resource to which the rule applies:

connection: Connections

queue: Queue destinations

topic: Topic destinations


Specific resource (connection service type or destination) to which the rule applies.

An asterisk (*) may be used as a wild-card character to denote all resources of a given type: for example, a rule beginning with queue.* applies to all queue destinations.


Operation to which the rule applies. This syntax element is not used for resourceType=connection


Level of access authorized:

allow: Authorize user to perform operation

deny: Prohibit user from performing operation


Type of principal (user or group) to which the rule applies:

user: Individual user

group: User group


List of principals (users or groups) to whom the rule applies, separated by commas.

An asterisk (*) may be used as a wild-card character to denote all users or all groups: for example, a rule ending with user=* applies to all users.
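To make the rule grammar concrete, here is a small illustrative parser (our own sketch, not part of Open MQ) that splits a rule into the elements of table 7; note that connection rules omit the operation element, and auto-creation rules omit the resource variant:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Splits an access-control rule of the form
//   resourceType.resourceVariant.operation.access.principalType=principals
// into named elements. Connection rules omit the operation element, and
// auto-creation rules (queue.create..., topic.create...) omit the variant.
public class AccessRule {
    public static Map<String, String> parse(String rule) {
        String[] kv = rule.split("=", 2);
        String[] p = kv[0].trim().split("\\.");
        Map<String, String> m = new LinkedHashMap<>();
        m.put("resourceType", p[0]);
        if (p.length == 5) {                      // e.g. queue.smsQueue.produce.allow.user
            m.put("resourceVariant", p[1]);
            m.put("operation", p[2]);
        } else if (p[0].equals("connection")) {   // e.g. connection.NORMAL.allow.user
            m.put("resourceVariant", p[1]);
        } else {                                  // e.g. topic.create.deny.user
            m.put("operation", p[1]);
        }
        m.put("access", p[p.length - 2]);
        m.put("principalType", p[p.length - 1]);
        m.put("principals", kv[1].trim());
        return m;
    }
}
```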


Enabling authorization

To enable authorization we have to do a bit more than we did for authentication, because in addition to enabling authorization we need to define the privileges of each role. Open MQ uses a text-based file to describe the access control rules. The default path for this file is inside the etc directory of the broker home; if we do not provide a path for this file when configuring Open MQ for authorization, Open MQ will pick up the default file.

To enable authorization we need to add the following attributes to one of the configuration files, depending on how widely we want authorization to be applied. The second attribute is not necessary if we want to use the default file path.


§              To enable the authorization: imq.accesscontrol.enabled=true

§              To determine the access control description file path: imq.accesscontrol.file.filename=path/to/access/control/file

Defining access control rules inside the access control file is an easy task: we just need to use the rule syntax and a limited set of variable values to define simple or complex access control rules. Listing 1 shows the rules we need to add to our access control file, which can be either the default file or another file at the specific path described in the configuration file. If you are using the default access control configuration file, make sure you comment out all the already defined rules by adding a # sign at the beginning of each line.

Listing 1 Open MQ access control rules definition

#connection authorization rules                   #1
connection.ADMIN.allow.group=admin                #1
connection.ADMIN.allow.user=user01                #1
connection.NORMAL.allow.user=*                    #2
#queue authorization rules
queue.smsQueue.produce.allow.user=user01,user02   #3
queue.smsQueue.consume.allow.user=user01,user02   #4
queue.*.browse.allow.user=*                       #5
#topic authorization rules
topic.smsTopic.*.allow.group=*                    #6
topic.smsTopic.produce.deny.user=user01           #7
topic.*.produce.deny.group=anonymous              #8
topic.*.consume.allow.user=*                      #9
#auto-created destinations authorization rules
queue.create.allow.user=*                         #10
topic.create.allow.group=user                     #11
topic.create.deny.user=user01                     #12


By adding the content of listing 1 to the access control description file, we apply the following rules. We let the admin group connect to the system as administrators, and make user01 the only individual user with the privilege to connect as an administrator #1. We let any user from any group connect to the broker as a normal user #2. At #3 we let the two users we created before act as producers for the smsQueue destination, and at #4 we let the same two users act as its consumers. Because of the rules defined at #3 and #4, no other user can act as a consumer or producer of this queue. At #5 we let any user browse any queue in the system. At #6 we allow any operation on smsTopic by any group of users, and later, at #7, we define a rule that denies user01 the right to act as a producer for smsTopic; as you can see, generalized rules can be overridden by more specific rules, as we did here to deny user01 the producer role. At #8 we deny the anonymous group the right to act as a producer. At #9 we let any user from any group act as a consumer of any topic present in the system. At #10 we allow any user to use the automatic destination creation facility. At #11 we let only one specific group automatically create topics, while at #12 one single user is denied the privilege of automatically creating a topic.


If we use an empty access control definition file, no user can connect to the system, so no operation will be possible by any user: Open MQ's default permission for any operation is denial. To change the outcome for an operation, we must add allow rules to the access control definition file.
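The behaviors described above (default denial, and specific rules overriding generalized ones) can be sketched with a toy evaluator. This is our own illustration, not Open MQ's actual algorithm, and it only handles five-element rules whose principal type is user:

```java
import java.util.Arrays;
import java.util.List;

// Toy evaluator illustrating default denial and specific-over-general
// precedence. Handles only rules shaped like
//   resourceType.resourceVariant.operation.allow|deny.user=principals
public class RuleEval {
    public static boolean allowed(List<String> rules, String resourceType,
                                  String name, String op, String user) {
        Boolean general = null, specific = null;
        for (String r : rules) {
            String[] kv = r.split("=", 2);
            String[] p = kv[0].trim().split("\\.");
            if (p.length != 5 || !p[0].equals(resourceType) || !p[2].equals(op)) continue;
            boolean principalMatch = kv[1].equals("*")
                    || Arrays.asList(kv[1].split(",")).contains(user);
            if (!principalMatch) continue;
            boolean allow = p[3].equals("allow");
            if (p[1].equals(name)) specific = allow;       // rule names this destination
            else if (p[1].equals("*")) general = allow;    // generalized wild-card rule
        }
        if (specific != null) return specific;             // specific rule wins
        if (general != null) return general;
        return false;                                      // default is denial
    }
}
```

With the rules of listing 1 in mind, a generalized "allow everyone to produce" rule is overridden for user01 by a specific deny rule, and with no matching rules at all the answer is always deny.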

Now we can simply connect to Open MQ as user01 to perform administrative tasks. For example, executing the following commands creates our sample queue and topic in the broker we created in section 1.2. When imqcmd asks for a username and password, use user01 and password01.


imqcmd -b create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ=true"


imqcmd -b create dst -n smsTopic -t t


As you can see, we can define very complex authorization rules using a simple text file and its rule definition syntax.

So far we have used simple text files for user management and authentication, but in addition to text files, Open MQ can use a more robust, enterprise-friendly user repository such as OpenDS. When we use a directory service as the user information repository, we no longer manage it using the Open MQ command line utilities; instead, enterprise administrators perform user management from one central location, and we as Open MQ administrators or developers just define our authorization rules.

To configure Open MQ to use a directory server (such as OpenDS) as the user repository, we need to add the properties listed in table 8 to our broker or installation configuration file.


Table 8 Configuring Open MQ to use OpenDS as the user information repository

Required information by broker

Corresponding property in configuration file

Determine user information repository type


Determines the password exchange encoding; in this case base-64 encoding, matching the directory server's method.


*Directory server host and port


A DN for binding and searching.

imq.user_repository.ldap.principal=cn=gf cn=admin

Provided DN password


The user attribute name against which the given username is compared.


**User’s group name


* Multiple directory server addresses can be specified for high availability reasons, for example ldap:// ldap://; each address should be separated from the others by a space.

** The attribute holding the user's group name(s) is vendor dependent and varies from one directory service to another.

There are several other attributes that can be used to tweak LDAP authentication in terms of security and performance, but the essential properties are listed in table 8. The other properties are documented in the Sun Java System Message Queue Administration Guide.

The key to using a directory service is knowing the schema

The key to successfully using a directory service in any system is knowing the directory service itself and the schema we are dealing with. Different schemas use different attribute names for each property of a directory service object, so before diving into a directory service implementation, make sure you know both the directory server and its schema.

4 Clustering and high availability

We know that redundancy is one of the enablers of high availability. We have discussed data and service availability along with horizontal and vertical scalability of software solutions. Open MQ, like any other component of a software solution that drives an enterprise, should be highly available, sometimes in both the data and service layers and sometimes just in the service layer.

The engine behind the JMS interface is the message broker, whose tasks include managing destinations, routing and delivering messages, enforcing security, and so on. Therefore, if we can keep the brokers and related services available to clients, we have service availability in place. If we additionally manage to preserve the state of queues, topics, durable subscribers, transactions, and so on in the event of a broker failure, we can provide our clients with high availability in both the service and data layers.

Open MQ provides two types of clustering, conventional clusters and high availability clusters, which can be used depending on the high availability degree required for messaging component in the whole software solution.

Conventional clusters provide service availability but not data availability. If one broker in a cluster fails, clients connected to that broker can reconnect to another broker in the cluster, but they may be unable to access some data while connected to the alternate broker. Figure 3 shows how conventional cluster members and their clients work together.

Figure 3 Open MQ conventional clustering: one MQ broker acts as the master broker to keep track of changes and propagate them to the other cluster members

As you can see in figure 3, in a conventional cluster different brokers run with no shared data, but the cluster configuration is maintained by a master broker that propagates changes among cluster members to keep them up to date about destinations and durable subscribers. The master broker also brings newly joined members, or members that come back online after a period of downtime, into sync with the current configuration information and durable subscribers. Each client connects to one broker and uses it until that broker fails (goes offline, a disaster happens, and so on), at which point the client connects to another broker in the same cluster.

High availability clusters provide both service availability and data availability. If one broker in a cluster fails, clients connected to that broker are automatically connected to another broker in the cluster, which takes over the failed broker's store. Clients continue to operate, with all persistent data available to the new broker at all times. Figure 4 shows how high availability cluster members and their clients work together.

Figure 4 Open MQ high availability cluster: no single point of failure, as all information is stored in a highly available database with which all cluster members interact

As you can see in figure 4, we have one central highly available database that we rely upon for storing any kind of dynamic data: transactions, destinations, messages, durable subscribers, and so on. This database itself should be highly available, usually a cluster of database instances running on different machines in different geographical locations. Each client connects to one broker and continues using it until that broker fails; in the event of a broker failure, another broker takes over all of the failed broker's responsibilities, such as open transactions and durable subscribers, and the client reconnects to that broker.

All in all we can summarize the differences between conventional clusters and high availability clusters as listed in table 9.

Table 9 comparison between high availability clusters and conventional clusters

Service availability

Conventional cluster: Yes, but partial when the master broker is not available

High availability cluster: Yes

Data availability

Conventional cluster: No, a failed broker can cause data loss

High availability cluster: Yes

Transparent failover recovery

Conventional cluster: May not be possible if failover occurs during a commit

High availability cluster: May not be possible if failover occurs during a commit and the client cannot reconnect to any other broker in the cluster

Cluster configuration

Conventional cluster: Done by setting appropriate cluster configuration broker properties

High availability cluster: Done by setting appropriate cluster configuration broker properties

3rd-party software requirement

Conventional cluster: None

High availability cluster: A highly available database

Usually when we talk about data availability we must accept a slight performance overhead; as you may remember, we had a similar condition with GlassFish HADB-backed clusters.

We said that high availability clusters use a database to store dynamic information; databases tested with Open MQ include Oracle RAC, MySQL, Derby, PostgreSQL, and HADB. We usually use a highly available database to maximize data availability and reduce the risk of losing data.

We can configure a highly available messaging infrastructure by following a few simple steps. First, create as many brokers as you need in your messaging infrastructure, as discussed in section 1.2. Second, configure the brokers to use a shared database for dynamic data storage and to act as cluster members. To perform this step we need to make some changes to each broker's configuration file: add the content of listing 2 to it.

Listing 2 Required properties for making a broker highly available

#data store configuration
imq.persist.store=jdbc                                #1
imq.persist.jdbc.dbVendor=mysql                       #2
imq.persist.jdbc.mysql.driver=com.mysql.jdbc.Driver   #2
imq.persist.jdbc.mysql.user=privilegedUser            #2
imq.persist.jdbc.mysql.needpassword=true              #2
imq.persist.jdbc.mysql.password=dbpassword            #2

#cluster related configuration
imq.cluster.ha=true                                   #3
imq.cluster.clusterid=e.cluster01                     #3
imq.brokerid=e.broker02                               #4




At #1 we specify that the data store is JDBC-based rather than Open MQ's managed binary files; for high availability clusters it must be jdbc instead of file. At #2 we provide the configuration parameters Open MQ needs to connect to the database; we can replace mysql with oracle, derby, hadb, or postgresql if we want. At #3 we configure this broker as a member of a high availability cluster named e.cluster01, and finally at #4 we set the unique name of this broker. Make sure each broker has a unique name in the entire cluster, and that different clusters using the same database are uniquely identified.

Now that we have a database server available for our messaging infrastructure's high availability and have configured all of our brokers to use it, the last step is creating the database that Open MQ will use to store the dynamic data. To create the initial schema we use the following command:

imqdbmgr create all -b broker_name

This command creates the database and required tables and initializes the database with topology related information. As you can see, we passed a broker name to the command; this means we want imqdbmgr to use that specific broker's configuration file when creating the database. As you remember, we can have multiple brokers in the same Open MQ installation, and each broker can operate completely independently of the others.

The imqdbmgr command has many uses, including upgrading storage from old Open MQ versions, creating backups, restoring backups, and so on. To see a complete list of its subcommands and parameters, execute the command with the -h parameter.

Now you should have a good understanding of Open MQ clustering and high availability. In the next section we will discuss how to use Open MQ from a Java SE application, and later we will see how Open MQ relates to GlassFish and the GlassFish clustering architecture.




5 Open MQ management using JMX

Open MQ provides a very rich set of JMX MBeans that expose its administration and management interface. These MBeans can also be used to monitor Open MQ over JMX. Generally, any task we can do using imqcmd can be done using JMX code or a JMX console such as the JDK's Java Monitoring and Management Console (jconsole).

Administrative tasks we can perform using JMX include managing brokers, services, destinations, consumers, producers, and so on. Management and monitoring are likewise possible using JMX; for example, we can monitor destinations from our own code, or change the broker's configuration dynamically from code or from a JMX console. Some use cases for the JMX connection channel include:

  • We can include JMX code in our JMS client application to monitor application performance and, based on the results, reconfigure the JMS objects we use to improve performance.
  • We can write JMX clients that monitor the broker to identify usage patterns and performance problems, and use the JMX API to reconfigure the broker to optimize performance.
  • We can write a JMX client to automate regular maintenance tasks, rolling upgrades, and so on.
  • We can write a JMX application that constitutes our own version of imqcmd and use it instead of imqcmd.

To connect to an Open MQ broker using a JMX console we can use service:jmx:rmi:///jndi/rmi://host:port/server as the connection URL pattern. Listing 3 shows how we can use Java code to pause the jms service temporarily.



Listing 3 pausing the jms service using Java code, JMX and admin connection factory

AdminConnectionFactory  acf = new AdminConnectionFactory();             #1

acf.setProperty(AdminConnectionConfiguration.imqAddress, "");                                                       #2

JMXConnector  jmxc = acf.createConnection("admin", "admin");          #3

MBeanServerConnection  mbsc = jmxc.getMBeanServerConnection();           #4

ObjectName  serviceConfigName = MQObjectName.createServiceConfig("jms"); #5

mbsc.invoke(serviceConfigName, ServiceOperations.PAUSE, null, null);    #6

jmxc.close();                                                           #7


This sample code uses the admin connection factory, which introduces dependencies on Open MQ classes and requires adding imqjmx.jar to your classpath; this file is located in the lib directory of the Open MQ installation. At #1 we create an administration connection factory, an Open MQ helper class (which is what makes this sample depend on Open MQ libraries); at #2 we configure the factory to give us a connection to a non-default host (the default is the local host on port 7676); at #3 we provide the credentials used to make the connection; at #4 we get a connection to the MBean server; at #5 we get the jms service MBean; at #6 we invoke one of its operations; and finally we close the connection at #7.

Listing 4 shows a pure JMX sample code for reading MaxNumProducers attributes of a smsQueue.

Listing 4 Reading smsQueue attributes using pure JMX code

HashMap   environment = new HashMap();                       #1

String[]  credentials = new String[] {"admin", "admin"};   #1

environment.put (JMXConnector.CREDENTIALS, credentials);     #1

JMXServiceURL  url;

url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://"); #2

JMXConnector  jmxc = JMXConnectorFactory.connect(url, environment); #3

MBeanServerConnection  mbsc = jmxc.getMBeanServerConnection(); #4

ObjectName  destConfigName= MQObjectName.createDestinationConfig(DestinationType.QUEUE, "smsQueue"); #5

Integer  attrValue= (Integer)mbsc.getAttribute(destConfigName,DestinationAttributes.MAX_NUM_PRODUCERS); #6

System.out.println( "Maximum number of producers: " + attrValue );

jmxc.close();                                                        #7


At #1 and #2 we set up the parameters JMX needs to create a connection; at #3 we create the JMX connection; at #4 we get a connection to Open MQ's MBean server; at #5 we get the MBean representing our smsQueue; at #6 we read the value of the attribute representing the maximum number of permitted producers; and finally we close the connection at #7.
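The same MBeanServerConnection pattern used in listings 3 and 4 works against any MBean server. As a broker-free illustration (querying the JVM's own platform MBean server rather than Open MQ's), reading an attribute looks like this:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Reads an attribute via JMX, exactly as mbsc.getAttribute does in listing 4,
// but against the local JVM's platform MBean server so no broker is needed.
public class JmxAttributeDemo {
    public static String vmName() throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName runtime = new ObjectName("java.lang:type=Runtime");
        return (String) mbs.getAttribute(runtime, "VmName");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("VmName: " + vmName());
    }
}
```

Once you can reach a broker's MBean server, the only differences are the connection URL and the Open MQ-specific object names.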

To learn more about the JMX interfaces and programming model for Open MQ, see the Sun Java System Message Queue 4.2 Developer's Guide for JMX Clients.

6 Using Open MQ

The Open MQ messaging infrastructure can be accessed from a variety of channels and programming languages: we can perform messaging tasks using Java, C, and virtually any programming language capable of HTTP communication. When it comes to Java, we can access the broker's functionality using either JMS or JMX.

Open MQ directly provides APIs for accessing the broker from C and Java; for other languages it takes a different approach, providing an HTTP gateway for communicating from any language, such as Python, JavaScript (AJAX), C#, and so on. This approach is called the Universal Message Service (UMS), introduced in Open MQ 4.3. In this section we only discuss Java; the other supported mechanisms and languages are out of scope. You can find more information about them in the Open MQ documentation.

6.1 Using Open MQ from Java

We can use Open MQ either directly or through application server managed resources; here we just discuss using Open MQ from Java SE, since using the JMS service from Java EE is straightforward.

Open MQ provides a broad range of functionality wrapped in a set of JMS compliant APIs: you can perform the usual JMS operations, such as producing and consuming messages, or gather metrics related to different resources through the JMS API. Listing 5 shows a sample application that communicates with a cluster of Open MQ brokers; running the application sends a message and then consumes the same message. To execute listings 5 and 6 you need to add imq.jar, imqjmx.jar, and jms.jar to your classpath. These files are inside the lib directory of the imq installation.

Listing 5 Producing and consuming messages from a Java SE application

public class QueueClient implements MessageListener {                             #1
    public void startClientConsumer() throws JMSException {
        com.sun.messaging.ConnectionFactory connectionFactory =
                new com.sun.messaging.ConnectionFactory();
        connectionFactory.setProperty(
                com.sun.messaging.ConnectionConfiguration.imqAddressList,
                "mq://,mq://");                                                    #2
        connectionFactory.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");      #3
        connectionFactory.setProperty(ConnectionConfiguration.imqReconnectAttempts, "5");        #3
        connectionFactory.setProperty(ConnectionConfiguration.imqReconnectInterval, "500");      #3
        connectionFactory.setProperty(ConnectionConfiguration.imqAddressListBehavior, "RANDOM"); #3
        javax.jms.QueueConnection queueConnection =
                connectionFactory.createQueueConnection("user01", "password01");   #4
        javax.jms.Session session =
                queueConnection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);        #5
        javax.jms.Queue smsQueue = session.createQueue("smsQueue");
        javax.jms.MessageProducer producer = session.createProducer(smsQueue);
        Message msg = session.createTextMessage("A sample sms message");
        producer.send(msg);                                                        #6
        javax.jms.MessageConsumer consumer = session.createConsumer(smsQueue);
        consumer.setMessageListener(this);                                         #7
        queueConnection.start();                                                   #8
    }

    public void onMessage(Message sms) {
        try {
            String smsContent = ((javax.jms.TextMessage) sms).getText();           #9
        } catch (JMSException ex) {
            ex.printStackTrace();
        }
    }
}

As you can see in listing 5, we used the JMS programming model to communicate with Open MQ for producing and consuming messages. At #1 the QueueClient class implements the MessageListener interface. At #2 we create a connection factory which connects to one of the cluster members; at #3 we define the behavior of the connection factory in relation to the cluster members. At #4 we provide a privileged user to connect to the brokers. At #5 we create a session which automatically acknowledges received messages. At #6 we send a text message to the queue. At #7 we set QueueClient as the message listener for our consumer. At #8 we start receiving messages from the server, and finally at #9 we consume the message in the onMessage method, which is the MessageListener interface’s method for processing incoming messages.

We said that Open MQ provides functionality which lets us retrieve monitoring information using the JMS APIs. This means that we can receive JMS messages containing Open MQ metrics. But before we can receive such messages, we should know where these messages are published and how we should configure Open MQ to publish them.

First we need to add some configuration to enable metric gathering and to set the interval at which the metrics should be gathered. To do this we need to add the following properties to one of the configuration files which we discussed in table 5.
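As a sketch based on the Sun Java System Message Queue administration documentation, the broker properties that control topic-based metrics look like the following; the 60-second interval is an illustrative value:

```
# Enable publishing metric messages to the mq.metrics.* topics

# Interval, in seconds, between metric messages
```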





Now that we have enabled metric gathering, Open MQ will gather broker, destination-list, JVM, queue, and topic metrics and send each type of metric to a pre-configured topic. The broker gathers and sends the metrics messages based on the provided interval. Table 10 shows the topic names for each type of metric information.


Table 10 Topic names for each type of metric information.

  • mq.metrics.broker
    Broker metrics: information on connections, message flow, and volume of messages in the broker.
  • mq.metrics.jvm
    Java Virtual Machine metrics: information on memory usage in the JVM.
  • mq.metrics.destination_list
    A list of all destinations on the broker, and their types.
  • mq.metrics.destination.queue.queueName
    Destination metrics for a queue of the specified name. Metrics data includes number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the queueName variable.
  • mq.metrics.destination.topic.topicName
    Destination metrics for a topic of the specified name. Metrics data includes number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the topicName variable.

Now that we know which topics we should subscribe to for our metrics, we should think about the security of these topics and the privileged users which can subscribe to them. To configure authorization for these topics, we can simply define some rules in the access control file which we discussed in 2.2, for example a rule that allows only user01 and user02 to consume from the metrics topics.
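For illustration, such rules in the accesscontrol.properties file might look like the following sketch; the user names and the particular topics listed are assumptions for this example:

```
# Allow only user01 and user02 to consume broker metrics (illustrative rule)
topic.mq.metrics.broker.consume.allow.user=user01,user02

# Allow only user01 and user02 to consume smsQueue destination metrics (illustrative rule)
topic.mq.metrics.destination.queue.smsQueue.consume.allow.user=user01,user02
```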

Now that we have the security in place, we can write some sample code to retrieve smsQueue metrics; listing 6 shows how.

Listing 6 Retrieving smsQueue metrics using the broker’s metrics messages

public void onMessage(Message metricsMessage) {
    try {
        String metricTopicName = "mq.metrics.destination.queue.smsQueue";
        MapMessage mapMsg = (MapMessage) metricsMessage;
        String metrics[] = new String[11];
        int i = 0;
        metrics[i++] = Long.toString(mapMsg.getLong("numMsgsIn"));
        metrics[i++] = Long.toString(mapMsg.getLong("numMsgsOut"));
        metrics[i++] = Long.toString(mapMsg.getLong("msgBytesIn"));
        metrics[i++] = Long.toString(mapMsg.getLong("msgBytesOut"));
        metrics[i++] = Long.toString(mapMsg.getLong("numMsgs"));
        metrics[i++] = Long.toString(mapMsg.getLong("peakNumMsgs"));
        metrics[i++] = Long.toString(mapMsg.getLong("avgNumMsgs"));
        metrics[i++] = Long.toString(mapMsg.getLong("totalMsgBytes") / 1024);
        metrics[i++] = Long.toString(mapMsg.getLong("peakTotalMsgBytes") / 1024);
        metrics[i++] = Long.toString(mapMsg.getLong("avgTotalMsgBytes") / 1024);
        metrics[i++] = Long.toString(mapMsg.getLong("peakMsgBytes") / 1024);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}

Listing 6 only shows the onMessage method of the message listener class; the rest of the code is trivial and very similar to listing 5, with slight changes like subscribing to a topic named mq.metrics.destination.queue.smsQueue instead of consuming from smsQueue.

A list of all possible metrics for each type of resource, which are properties of the received map message, can be found in chapter 20 of the Sun Java System Message Queue 4.2 Developer’s Guide for Java Clients, which is located at

7 GlassFish and Open MQ


GlassFish, like other Java EE application servers, needs to provide messaging functionality behind its JMS implementation, and it uses Open MQ as the message broker for this purpose. GlassFish supports the Java Connector Architecture 1.5 and uses this specification to integrate with Open MQ. If you take a closer look at table 1.4, you can see that, by default, different GlassFish installation profiles use different ways of communicating with the JMS broker, ranging from an embedded instance to a local or remote Open MQ server. Embedded mode is the only mode in which Open MQ runs in the same process as GlassFish itself. In local mode, starting and stopping the Open MQ server is done by the application server: GlassFish starts the associated Open MQ server when GlassFish starts and stops it when GlassFish stops. Finally, remote mode means that starting and stopping the Open MQ server is done independently, and an administrator or an automated script should take care of these tasks.

We already know how to set up a GlassFish cluster, both with in-memory replication and as a highly available cluster with an HADB backend for persisting transient information like sessions, JMS messages, and so on. In this section we discuss the relation of GlassFish in-memory and HADB-backed clusters with Open MQ in more detail, to give you a better understanding of the overall architecture of a highly available solution.

First let’s see what happens when we set up an in-memory cluster: after we configure the cluster, we have several instances in the cluster, each of which uses a local Open MQ broker instance for messaging tasks. When we start the cluster, all GlassFish instances start, and upon each instance’s startup the corresponding Open MQ broker starts.

You remember that we discussed conventional clusters, which need no backend database for storing transient data and instead use one broker as the master broker to propagate changes and keep the other broker instances up to date. In GlassFish in-memory replicated clusters, the Open MQ broker associated with the first GlassFish instance that we create acts as the master broker. A sample deployment diagram for an in-memory replicated cluster and its relation with Open MQ is shown in figure 5.

Figure 5 Open MQ and GlassFish in an in-memory replicated cluster of GlassFish instances. Open MQ works as a conventional cluster, which leads to one broker acting as the master broker.


When it comes to enterprise profiles, we usually use an independent, highly available cluster of Open MQ brokers with a database backend, together with a highly available cluster of GlassFish instances backed by an HADB cluster. We discussed that each GlassFish instance can have multiple Open MQ brokers to choose from when it creates a connection to the broker. To configure GlassFish to use multiple brokers of a cluster, we can add all available brokers to the GlassFish instance and then let GlassFish manage how to distribute the load between the broker instances. To add multiple brokers to a GlassFish instance we can follow these steps:

  • Open the GlassFish administration console.
  • Navigate to the instance configuration node or the default configuration node of the application server, depending on whether you want to change the configuration of the cluster’s instances or of a single instance.
  • Navigate to Configuration > Java Message Service > JMS Hosts in the navigation tree:

    § Remove the default_JMS_host.

    § Add as many hosts as you want this instance or cluster to use; each host represents a broker in your cluster.

  • Navigate to Configuration > Java Message Service and change the field values as shown in table 11.
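The JMS Hosts part of the steps above can also be scripted with the asadmin CLI; the following is a sketch, where the broker host names, ports, credentials, and the target configuration name are placeholders for your own values:

```
asadmin delete-jms-host --target cluster1-config default_JMS_host
asadmin create-jms-host --target cluster1-config --mqhost broker1 --mqport 7676 --mquser admin --mqpassword admin broker1-host
asadmin create-jms-host --target cluster1-config --mqhost broker2 --mqport 7676 --mquser admin --mqpassword admin broker2-host
```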


Table 11 GlassFish configuration for using a standalone JMS cluster

  • Type
    We discussed the different types of integration in this section; here we use REMOTE, since we use a remote standalone cluster of brokers.
  • Startup Timeout
    The time GlassFish waits during its own startup before it gives up on the JMS service startup; if it gives up, no JMS service will be available.
  • Start Arguments
    We can pass any arguments accepted by the imqbrokerd command using this field.
  • Reconnect, Reconnect Interval, Reconnect Attempts
    We should enable reconnection, set a meaningful number of retries to connect to the brokers, and set a meaningful wait time between reconnection attempts.
  • Default JMS Host
    Determines which broker will be used to execute and communicate administrative commands on the cluster, such as creating a destination.
  • Address List Behavior
    Determines how GlassFish selects a broker when it creates a connection; selection can be either random or based on the first available broker in the host list. The first broker has the highest priority.
  • Address List Iterations
    How many times GlassFish should iterate over the host list in order to establish a connection if no broker is available in the first iteration.
  • MQ Scheme
    Please refer to table 12.
  • MQ Service
    Please refer to table 12.


As you can see, we can configure GlassFish to use a cluster of Open MQ brokers very easily. You may like to know how this configuration affects your MDBs or a JMS connection that you acquire from the application server: when you create a connection using a connection factory, GlassFish checks the JMS service configuration and returns a connection which has the address of a healthy broker as its host address. The selection of the broker is based on the Address List Behavior value; GlassFish can select a healthy broker at random, or select the first healthy host starting from the top of the host list.
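GlassFish performs this selection internally, but the effect of the Address List Behavior setting can be illustrated with a small, self-contained sketch; the host strings and the pickBroker helper below are purely illustrative and are not GlassFish APIs:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class AddressListBehaviorDemo {
    // Pick a broker address according to the configured behavior:
    // "PRIORITY" always walks the list from the top, so the first
    // (healthy) host wins; "RANDOM" picks any entry with equal chance.
    static String pickBroker(List<String> hosts, String behavior, Random rnd) {
        if ("RANDOM".equals(behavior)) {
            return hosts.get(rnd.nextInt(hosts.size()));
        }
        return hosts.get(0);
    }

    public static void main(String[] args) {
        List<String> hosts = Arrays.asList("mq://broker1:7676", "mq://broker2:7676");
        System.out.println(pickBroker(hosts, "PRIORITY", new Random()));
        System.out.println(pickBroker(hosts, "RANDOM", new Random()));
    }
}
```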

Open MQ is one of the most feature-complete message broker implementations available in the open source and commercial market. It provides many features, including multiple connection protocols, for the sake of simplicity and the vast variety of clients that may need to connect to it. Two concepts are introduced to cover this connection protocol flexibility:

  • Open MQ Service: the different connection handling services implemented to support a variety of connection protocols. All available connection services are listed in table 12.
  • Open MQ Scheme: determines the connection scheme between an Open MQ client, a connection factory, and the brokers. The supported schemes are listed in table 12.

The full syntax for a message service address is scheme://address-syntax, and before using any of the schemes we should ensure that its corresponding service is enabled. By setting a broker’s imq.service.activelist property, you can configure it to run any or all of these connection services. The value of this property is a list of connection services to be activated when the broker starts up; if the property is not specified explicitly, the jms and admin services are activated by default.
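For illustration, a few message service addresses following this syntax; the host names and port numbers are placeholders:

```
mq://broker1:7676/jms          connect through the broker's port mapper
mqtcp://broker1:7677/jms       connect directly to the jms service port
mqssl://broker1:7679/ssljms    SSL connection to the ssljms service port
```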

Table 12 Available schemes and services in Open MQ 4.3

  • mq (connection services: jms and ssljms)
    Uses the broker’s port mapper to assign a port dynamically for either the jms or ssljms connection service.
  • mqtcp (connection services: jms and admin)
    Bypasses the port mapper and connects directly to a specified port, using the jms connection service.
  • mqssl (connection services: ssljms and ssladmin)
    Makes an SSL connection to a specified port, using the ssljms connection service.
  • http (connection service: httpjms)
    Makes an HTTP connection to the Open MQ tunnel servlet at a specified URL, using the httpjms connection service.
  • https (connection service: httpsjms)
    Makes an HTTPS connection to the Open MQ tunnel servlet at a specified URL, using the httpsjms connection service.


We usually enable the services that we need and leave the other services disabled. Enabling each service requires its own measures for firewall configuration and authorization.


Performance consideration

Performance is always a hurdle in any large-scale system because of the relatively large number of components involved in the system architecture. Open MQ is a high performance messaging system; however, you should know that performance differs greatly depending on many factors, including:

  • The network topology
  • The transport protocols used
  • Quality of service
  • Topics versus queues
  • Durable versus reliable messaging (some buffering takes place, but if a consumer is down long enough the messages are discarded)
  • Message timeout
  • Hardware, network, JVM, and operating system
  • Number of producers and number of consumers
  • Distribution of messages across destinations, along with message size



Messaging is one of the most basic and fundamental requirements in every large-scale application. So far in this article you learned what Open MQ is, how it works, what its main components are, and how it can be administered either using Java code and JMX or using its command line utilities. You learned how a Java SE application can connect to Open MQ using the JMS interfaces and act as a consumer or producer. We discussed how we can configure Open MQ to perform access control management. You learned how Open MQ broker instances can be created and configured to work in a mission critical system.

And GlassFish v3 is Here

The long awaited and much anticipated version of GlassFish was released today. GlassFish v3 fully implements the Java EE 6 specification, which means EJB 3.1, Servlet 3, JAX-RS, JPA 2, Contexts and Dependency Injection for Java EE, Bean Validation, Java EE profiles, and so on.


GlassFish is not only the most up-to-date application server, but it also benefits from a very good architecture. The GlassFish architecture provides extension points and contracts which third-party developers can use to add functionality to GlassFish (even the administration console is pluggable).


GlassFish v3 is an important milestone in GlassFish’s life because it is now fully based on the OSGi modularity framework, which means GlassFish can be integrated into a bigger OSGi system.


In addition to the Java EE profiles, GlassFish v3 is also available as an embedded application server, which we can use for testing purposes or any other kind of in-process use case.


What I like most about GlassFish is its close integration with many other well-developed products like OpenESB, OpenSSO, IDEs, and so on, in addition to its superb performance and administration channels.


GlassFish v3 adds another reason to consider it the prime option for deploying simple and complex applications: its extensibility, which so far has made it possible to host different kinds of scripting language based applications, such as applications based on RoR or PHP, in the same process that hosts the Java EE applications.


GlassFish v3 is available for download at:


Like GlassFish v2, Sun Microsystems provides support for GlassFish if you need a higher level of assurance and guaranteed support. To get more information on the provided support model, take a look at the GlassFish Enterprise home page.


And if you are interested, there is a GlassFish v3 Refcard published by DZone and authored by me, which introduces GlassFish v3 in more detail and gives you all you need to start working with GlassFish v3.



I have an upcoming book about GlassFish which is due to be published in April 2010 by Packt Publishing. The book mainly discusses GlassFish security (administration and configuration), Java EE security, and using OpenSSO to secure Java EE applications in general and Java EE web services in particular.


To learn more about Java EE 6, you can take a look at the Sun Java EE 6 white paper located at:


Another way for you to meet the GlassFish folks and other GlassFish community members, and to learn more about GlassFish v3, is to join the virtual conference about GlassFish v3 which is supposed to take place on the 15th of December.

Monitoring GlassFish application server’s HTTP Service using VisualVM.


The GlassFish application server has some monitoring features which are exposed through its administration channels, including the Administration Console and the asadmin utility. I discussed some of these features here, and in this article we will see how we can use VisualVM and its plugins as a desktop application to monitor the GlassFish HTTP service.

VisualVM is a JDK tool based on the NetBeans platform, the NetBeans profiler, and a handful of plug-ins which let us profile any Java application running on JDK 6 or 7. VisualVM does not work with older JDKs, nor with a JRE, whether 6 or 7. Starting from JDK 6 update 7, VisualVM is included in the Sun JDK distribution.

Downloading and installing the latest version of VisualVM is straightforward: just point to the download page, grab the zip file, and extract its content, and you are ready to go. Follow the installation instructions (4 very easy steps) here.

Now, to monitor the GlassFish HTTP service, we need to install a VisualVM plugin that can interact with the GlassFish application server through its JMX/AMX API to gather the statistics and let us view them live in our desktop application. Run VisualVM using the provided script in the bin directory, open the Tools > Plugins menu item, and from the available plug-ins section select VisualVM-GlassFish and press the Install button to install the plug-in. Restart VisualVM, and we are ready to connect to a GlassFish instance.


Now we have two options: we can monitor either a local GlassFish instance or a remote one. All local JVMs are listed under the Local node, and in order to create a remote connection we need to register the host under the Remote node by right clicking on the node and adding the remote host. On the remote host we may have several JMX enabled applications running, each application on its own port. The default JMX port for the GlassFish application server is 8686; therefore, to add a remote GlassFish application server, we can right click on its host node under the Remote node and select Add JMX connection. A window will open which asks you to provide the connection information, including the port number and credentials. The default administration credentials for GlassFish are admin/adminadmin as username/password.

Now that the remote GlassFish is registered, the right side panel should be similar to the following figure.


In my case I have a remote and a local GlassFish instance. When you double click on an instance, a window will open that shows all the monitoring information which VisualVM can gather about the running application, arranged in different panels, something similar to the following figure.


This page shows general information about the JVM which we are going to monitor and some of the plug-in specific configuration which we may need to change. Make sure that you set the HTTP Service monitoring level and the Web Container monitoring level to HIGH or LOW (anything other than OFF), which will result in GlassFish gathering monitoring statistics about the HTTP service.
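If you prefer the command line, the same monitoring levels can also be raised with asadmin set and the monitoring-service dotted names; this sketch assumes the default standalone instance named server:

```
asadmin set server.monitoring-service.module-monitoring-levels.http-service=HIGH
asadmin set server.monitoring-service.module-monitoring-levels.web-container=HIGH
```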


Now let your application server get some hits from clients and then switch to the HTTP Service tab in order to get a general overview of your server’s HTTP service performance. Failed, incoming, and average connections, keep-alive, and file cache performance can be monitored here. The following figure shows the HTTP Service tab.


Now the fun part begins. Assuming that we have a dozen applications hosted in the same GlassFish instance, we expect to be able to monitor each application separately to see how the HTTP service is performing for that particular application. It is easy and doable in no more than 3 steps. Under the GlassFish node you can see Model, and under the Model you can see the different web applications that you have hosted on your server; double clicking on any of those applications results in viewing the HTTP service statistics for that particular application. For example, if I double click on admingui, I will get a result page similar to the next figure.


The fun continues further: you can also monitor the different servlets used in your application separately. For example, I can expand the admingui node and double click on ThemeServlet to get a statistical view similar to the next figure.


All the statistical information shown includes two major factors, processing time and error/request count, in addition to average processing time, maximum processing time, and so on.

Experimenting with replication and failover recovery (high availability) in OpenDS 1.3 build 1

If you do not know what OpenDS is, you can simply learn about it by looking at its website. A brief description: OpenDS is a high performance, feature rich, pure Java directory server which is under active development by Sun Microsystems.

Today I grabbed OpenDS 1.3 build 1 to see what is new and to check its replication and failover recovery; you can grab a copy as well. The first thing that I noticed is the new control panel, which replaces the old status panel. You can see a screenshot of the control panel right here.


Although the control panel has a good set of features and functionality, and it is very good to have a built-in LDAP browser and management utility coming with OpenDS, this control panel is not user friendly enough. I think we will see some changes to this control panel in the near future, for example a better menu design and a tab-based content pane instead of opening new windows. To run the control panel application, you can run the control-panel script from either the bat or the bin directory.


Down to business: I thought I could test the replication and failover recovery of OpenDS with some simple Java code and the new control panel application. To install OpenDS in a specific directory, extract the zip file in that directory and run the setup script. I installed the first instance of the OpenDS server in /home/masoud/opends1.3/OpenDS-1.3.0-inst01. The installation process is quite simple: you just need to execute the setup script, which opens a GUI setup application that guides you through the installation. The following screenshots show the installation process for the first instance.

Welcome page:
Server Setting page: I used admin as password
Topology Options
Directory Data
Installation review page
Installation finished

The installation application will open the control panel application; the control panel needs administration credentials to connect to the directory server. The administration credentials are cn=Directory Manager as the bind DN and admin as the password. (If you used anything else, then you should use your own credentials.)

Now we should install the second directory server instance; this instance will form a replication group with the first instance. I extracted the zip file into /home/masoud/opends1.3/OpenDS-1.3.0-inst02 and then executed the setup script to commence the installation. The following screenshots show the installation process:

Welcome page:
Server Settings page: I used admin as the password; as you can see, the port numbers are different, because the default ports are in use and the setup application tries to use new port numbers instead.

Topology Options: here we are adding this server instance to a replication topology which already has one member. We connect this instance to another instance in the topology by providing the setup application with the host name, administration port, and administration credentials of that server. In my case both instances are installed on the same machine.


Global Administration: administration credentials which can be used to manage the whole replication topology. I used admin/admin as username/password.


Data Replication: as we want to have a replica of our remote server, we should select "Create local instance of existing base DNs and….", and we should select the base DNs which we want to replicate.



Review: review the installation, and if you find anything wrong you can go back and fix it.


As both installation tasks are finished, we have our replication topology installed and configured.

So far, we should have two control panels open. Each of them can manage one of our installations, and when it comes to data management, if we change data in one control panel we can see the change in the other.

To test the replication configuration, in one of the control panel applications, under the Directory Data tab, select Manage Entries and then delete some entries; now go to the other control panel and you will see those entries are gone. To make the test more meaningful for failover recovery, stop one server, delete some entries on the other server, then start the server which you stopped: you should see that the deleted entries are gone from the restarted server as soon as it has started.

Directory Data tab:
Deleting some entries:

We have a replication topology configured and working; what can we do with this topology from a Java application? The answer is simple: as we cannot afford to see our client applications stop working because a directory server machine is down, or because a router which routes the clients to one of the servers is not working, and so on, we need our Java application to use any available server instead of depending on one server and stopping when that server is down.

The following sample code shows how we can use these two instances to prevent our client application from stopping when one instance is down.

import java.util.Date;
import java.util.Hashtable;

import java.util.Timer;
import java.util.TimerTask;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class Main {

    public static void main(String[] args) throws NamingException {

        final Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap:// ldap://");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager");
        env.put(Context.SECURITY_CREDENTIALS, "admin");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        Timer t = new Timer();
        TimerTask ts = new TimerTask() {

            public void run() {
                try {
                    InitialDirContext ctx = new InitialDirContext(env);
                    Attributes attrs = ctx.getAttributes("");
                    final NamingEnumeration enm = attrs.getAll();
                } catch (NamingException ex) {