How the REST interface covers for the absence of the JMX/AMX administration and management interface in GlassFish 3.1

For some time I have wanted to write this entry and explain what happened to the GlassFish JMX/AMX management and administration interface, but being busy with other stuff prevented me from doing so. This article can be considered an upgrade to my earlier article about the GlassFish 3.0 JMX administration interface, which I wrote a while ago. Long story short, in GlassFish 3.1 the AMX/JMX interface is no longer available; instead we can use the REST interface to change the server settings and perform all the administration, management, and monitoring activities we need. There is a fair number of articles and blog entries about the RESTful interface all over the web, which I have included at the end of this blog. First of all, the REST interface is available through the administration console application, meaning that we can access it using a URL similar to http://localhost:4848/management/domain/. The administration console and the REST interface run on a separate virtual server, and therefore a separate HTTP listener and, if required, transport configuration. What I will explain here are the following items:

  • How to use Apache HttpClient to perform administration tasks using GlassFish REST interface
  • How to find the request parameters for different commands.
  • GlassFish administration, authentication and transport security

How to use Apache HttpClient to interact with GlassFish administration and management application

Now back to the RESTful interface: this is an HTTP-based interaction channel with the GlassFish administration infrastructure which basically allows us to do almost anything we can do using asadmin, through HTTP in a RESTful manner. We can use basically any programming language capable of writing to a socket to interact with the RESTful interface. Here we will use Apache HttpClient to take care of sending the commands to the GlassFish RESTful console. When using GlassFish REST management we can use the POST, GET, and DELETE methods to perform the following tasks:

  • POST: create and partially update a resource
  • GET: get information like details of a connection pool
  • DELETE: to delete a resource

The following sample code shows how to use the HttpClient to perform some basic operations, including updating a resource, getting a list of resources, creating a resource, and finally deleting it.

[java]
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URISyntaxException;
import java.util.logging.Logger;

import org.apache.http.HttpEntity;
import org.apache.http.HttpException;
import org.apache.http.HttpResponse;
import org.apache.http.auth.AuthenticationException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpDelete;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicHeader;
import org.apache.http.protocol.HTTP;

/**
*
* @author Masoud Kalali
*/
public class AdminGlassFish {

//change the ports to your own settings
private static final String ADMINISTRATION_URL = "http://localhost:4848/management";
private static final String MONITORING_URL = "http://localhost:4848/monitoring";
private static final String CONTENT_TYPE_JSON = "application/json";
private static final String CONTENT_TYPE_XML = "application/xml";
private static final String ACCEPT_ALL = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
private static final Logger LOG = Logger.getLogger(AdminGlassFish.class.getName());

public static void main(String args[]) throws IOException, HttpException, URISyntaxException {

//just changing the indent level for the JSON and XML output to make them readable, for humans…
String prettyFormatRestInterfaceOutput = "{\"indentLevel\":2}";
String response = postInformation("/domain/configs/config/server-config/_set-rest-admin-config", prettyFormatRestInterfaceOutput);
LOG.info(response);
//getting list of all JDBC resources
String jdbcResources = getInformation("/domain/resources/list-jdbc-resources");
LOG.info(jdbcResources);

// creating a JDBC resource on top of the default pool
String createJDBCResource = "{\"id\":\"jdbc/Made-By-Rest\",\"poolName\":\"DerbyPool\"}";
String resourceCreationResponse = postInformation("/domain/resources/jdbc-resource", createJDBCResource);
LOG.info(resourceCreationResponse);

// deleting a JDBC resource
String deletionResponse = deleteResource("/domain/resources/jdbc-resource/jdbc%2FMade-By-Rest");
LOG.info(deletionResponse);

}

//using HTTP get
public static String getInformation(String resourcePath) throws IOException, AuthenticationException {
DefaultHttpClient httpClient = new DefaultHttpClient();
HttpGet httpG = new HttpGet(ADMINISTRATION_URL + resourcePath);
httpG.setHeader("Accept", CONTENT_TYPE_XML);
HttpResponse response = httpClient.execute(httpG);
HttpEntity entity = response.getEntity();
InputStream instream = entity.getContent();
return isToString(instream);
}

//using HTTP post for creating and partially updating resources
public static String postInformation(String resourcePath, String content) throws IOException {
HttpClient httpClient = new DefaultHttpClient();
HttpPost httpPost = new HttpPost(ADMINISTRATION_URL + resourcePath);
StringEntity entity = new StringEntity(content);

//setting the content type
entity.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, CONTENT_TYPE_JSON));
httpPost.addHeader("Accept",ACCEPT_ALL);
httpPost.setEntity(entity);
HttpResponse response = httpClient.execute(httpPost);

return response.toString();
}

//using HTTP delete to delete a resource
public static String deleteResource(String resourcePath) throws IOException {
HttpClient httpClient = new DefaultHttpClient();
HttpDelete httpDelete = new HttpDelete(ADMINISTRATION_URL + resourcePath);
httpDelete.addHeader("Accept",
ACCEPT_ALL);
HttpResponse response = httpClient.execute(httpDelete);
return response.toString();

}

//converting the get output stream to something printable
private static String isToString(InputStream in) throws IOException {
StringBuilder sb = new StringBuilder();
BufferedReader br = new BufferedReader(new InputStreamReader(in), 1024);
for (String line = br.readLine(); line != null; line = br.readLine()) {
sb.append(line);
}
in.close();
return sb.toString();
}
}
[/java]

You may ask how one could know the required attribute names; there are several ways to find out:

  • Look at the reference documents…
  • View the source of the HTML page representing that kind of resource. For example, for the JDBC resource it is http://localhost:4848/management/domain/resources/jdbc-resource; if you open it in the browser you will see an HTML page, and viewing its source will give you the names of the different attributes. I think the labels for the attributes are also the same as the attribute names themselves.

  • Submit the above page and monitor the request using a browser plugin; for example, in the case of Chrome and the JDBC resource it looks like the following picture.

GlassFish administration, authentication and transport security

By default, when we install GlassFish or create a domain, the domain administration console is protected neither by authentication nor by HTTPS, so whatever we send to the application server through this channel can be read by anyone sniffing around. Therefore you may need to enable authentication using the following command:

./asadmin change-admin-password

You may ask: now that we have enabled authentication for the admin console, how can our sample code use it? The answer is quite simple: just set the credentials for the request object and you are done. Something like:

[java]
UsernamePasswordCredentials cred = new UsernamePasswordCredentials("admin", "admin");
httpGet.addHeader(new BasicScheme().authenticate(cred, httpGet));
[/java]

Make sure that you are using the correct username and password, as well as the correct request object. In this case the request object is httpGet.
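For example, here is a minimal sketch of the getInformation method from the sample above with basic authentication wired in; the admin/admin credentials are placeholders for your own:

[java]
import org.apache.http.HttpResponse;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.auth.BasicScheme;
import org.apache.http.impl.client.DefaultHttpClient;

public static String getInformation(String resourcePath) throws Exception {
    DefaultHttpClient httpClient = new DefaultHttpClient();
    HttpGet httpGet = new HttpGet(ADMINISTRATION_URL + resourcePath);
    httpGet.setHeader("Accept", CONTENT_TYPE_XML);
    //attach the admin credentials as a Basic authentication header
    UsernamePasswordCredentials cred = new UsernamePasswordCredentials("admin", "admin");
    httpGet.addHeader(new BasicScheme().authenticate(cred, httpGet));
    HttpResponse response = httpClient.execute(httpGet);
    return isToString(response.getEntity().getContent());
}
[/java]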

Now, about HTTPS: to have encryption during transport we need to enable SSL for the admin HTTP listener. The following steps show how to use the RESTful interface through a browser to enable HTTPS for the admin console. When used from a browser, the GlassFish admin console shows basic HTML forms for the different resources.

  1. Open http://localhost:4848/management/domain/configs/config/server-config/network-config/protocols/protocol/admin-listener in the browser
  2. Select true for the security-enabled option element
  3. Click Update to save the settings.
  4. Restart the server using http://localhost:4848/management/domain/restart

The above steps have the same effect as

./asadmin enable-secure-admin

This will enable the SSL layer for the admin-listener, but it will use the default, self-signed certificate. Here I explained how to install a GoDaddy digital certificate into the GlassFish application server, to be sure that no one can eavesdrop during the transport of command parameters and that the certificate is valid instead of being self-signed. And here I explained how one can use EJBCA to set up and use a small corporate certificate authority with GlassFish; though that manual is a little old, it will give you enough understanding to use the newer versions of EJBCA.
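Going back to the REST steps above, the same toggle can also be flipped from our sample code. The following rough sketch reuses the postInformation and getInformation helpers from the sample; the security-enabled attribute name mirrors the form field from step 2, but verify it against your GlassFish version before relying on it:

[java]
//enable SSL on the admin listener; "security-enabled" mirrors the form field name
String enableSecurity = "{\"security-enabled\":\"true\"}";
String response = postInformation(
        "/domain/configs/config/server-config/network-config/protocols/protocol/admin-listener",
        enableSecurity);
LOG.info(response);
//the server must be restarted for the change to take effect
LOG.info(getInformation("/domain/restart"));
[/java]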

If you are wondering how our small sample application can work with this dummy self-signed certificate of GlassFish, you need to wait until next time, when I will explain how to bypass the invalid certificate installed on our test server.

In the next part of this series I will cover more details on the monitoring part, as well as discussing how to bypass the self-signed digital certificate during development…

Managing Logical network interfaces in Solaris

Like other operating systems, we can assign multiple IP addresses to a network interface. These secondary addresses are called logical interfaces, and we can use them to make one machine with a single network interface own multiple IP addresses for different purposes. We may need to assign multiple IP addresses to an interface to make it available to both internal and external networks, or for testing purposes.

We should have one network interface configured in our system in order to create additional logical interfaces.

We are going to add a logical interface to the e1000g1 interface, with 10.0.2.24 as its static IP address. Before doing so, let's see what network interfaces we have, using the ifconfig command.

Now to add the logical interface we only need to execute the following command:

# ifconfig e1000g1 addif 10.0.2.24/24 up
Invoking this command performs the following tasks:
  • Creates a logical interface named e1000g1:1. The naming scheme for logical interfaces conforms to interface_name:logical_interface_number, where the number element can range from 1 to 4096.
  • Assigns 10.0.2.24/24 as its IP address, netmask, and broadcast address.

Now if we invoke the ifconfig -a command, the output should contain the logical interface status as well. The following figure shows a fragment of the ifconfig -a output.

The operating system does not retain this configuration over a system reboot; to make the configuration persistent we need to make some changes in the interface configuration file. For example, to make the configuration we applied in this recipe persistent, the content of /etc/hostname.e1000g1 should be something similar to the following snippet.

10.0.2.23/24
addif 10.0.2.24/24

The first line, as we discussed in recipe 3 of this chapter, assigns the given address to this interface, and the second line adds the logical interface with the given address to the interface. To remove a logical interface we can simply unplumb it using the ifconfig command, as shown below.

# ifconfig e1000g1:1 unplumb

When we create a logical interface, OpenSolaris registers that interface in the network, and any packet received by the interface will be delivered to the same stack that handles the underlying physical interface.


Configuring Solaris Link Aggregation (Ethernet bonding)

Link aggregation, commonly known as Ethernet bonding, allows us to enhance network availability and performance by combining multiple network interfaces to form an aggregation that acts as a single network interface with greatly enhanced availability and performance.

When we aggregate two or more network interfaces, we are forming a new network interface on top of those physical interfaces combined in the link layer.
We need to have at least two network interfaces in our machine to create a link aggregation. The interfaces must be unplumbed in order for us to use them in a link aggregation. The following command shows how to unplumb a network interface.
# ifconfig e1000g1 down unplumb
We should disable NWAM, because link aggregation and NWAM cannot coexist.
# svcadm disable network/physical:nwam
The interfaces we want to use in a link aggregation must not be part of a virtual interface; otherwise it will not be possible to create the aggregation. To ensure that an interface is not part of a virtual interface, check the output of the following command.
# dladm show-link
The following figure shows that my e1000g0 has a virtual interface on top of it, so it cannot be used in an aggregation.
To delete the virtual interface we can use the dladm command as follows:
# dladm delete-vlan vlan0
Link aggregation, as the name suggests, works in the link layer, and therefore we use the dladm command to make the necessary configurations. We use the create-aggr subcommand of dladm with the following syntax to create aggregation links.
dladm  create-aggr [-l interface_name]*  aggregation_name
In this syntax we should have at least two occurrences of the -l interface_name option, followed by the aggregation name.
Assuming that we have e1000g0 and e1000g1 at our disposal, the following command configures an aggregation link on top of them.
# dladm create-aggr -l e1000g0 -l e1000g1 aggr0
Now that the aggregation is created, we can configure its IP allocation in the same way that we configure a physical or virtual network interface. The following command plumbs the aggr0 interface, assigns a static IP address to it, and brings the interface up.
# ifconfig aggr0 plumb 10.0.2.25/24 up
Now we can use the ifconfig command to see the status of our new aggregated interface.
# ifconfig aggr0
The result of the above command should be similar to the following figure.
To get a list of all available network interfaces, either virtual or physical, we can use the dladm command as follows:
# dladm show-link
And to get a list of aggregated interfaces we can use another subcommand of dladm, as follows:
# dladm show-aggr
The output of the previous dladm commands is shown in the following figure.
We can change an aggregation's underlying interfaces by adding an interface to the aggregation or removing one from it, using the add-aggr and remove-aggr subcommands of dladm. For example:
# dladm add-aggr -l e1000g2 aggr0
# dladm remove-aggr -l e1000g1 aggr0
The aggregation we created will survive a reboot, but our ifconfig configuration will not, unless we persist it using the interface configuration files.
To make the aggregation IP configuration persistent we just need to create the /etc/hostname.aggr0 file with the following content:
10.0.2.25/24
The interface configuration files are discussed in recipes 2 and 3 of this chapter in great detail. Reviewing them is recommended.
To delete an aggregation we can use the delete-aggr subcommand of dladm. For example, to delete aggr0 we can use the following commands.
# ifconfig aggr0 down unplumb
# dladm delete-aggr aggr0
As you can see, before we can delete the aggregation we should bring its interface down and unplumb it.
In recipe 11 we discussed IPMP, which allows us to have high availability by grouping network interfaces and, when required, automatically failing over the IP address of any failed interface to a healthy one. In this recipe we saw how we can join a group of interfaces together to get better performance. By grouping a set of aggregations we can have the high availability that IPMP offers along with the performance boost that link aggregation offers.
Link aggregation works in layer 2, meaning that the aggregation groups the interfaces in layer 2 of the network, where network packets are dealt with. In this layer the network layer's packets are extracted or created with the designated IP address of the aggregation and then delivered to the neighboring layers.

Configuring DHCP server in Solaris

A Dynamic Host Configuration Protocol (DHCP) server leases IP addresses to clients that are connected to the network and have a DHCP client enabled on their network interface.
Before we can set up and start the DHCP server we need to install the DHCP configuration packages. Detailed information about installing packages is provided in a recipe of chapter 1, but to save time we can use the following command to install the packages.
# pkg install SUNWdhcs
After installing these packages we can continue with the next step.
How to do it…
The first thing in setting up the DHCP server is creating the storage and initial settings for it. The following command does the trick for us.
# dhcpconfig -D -r SUNWfiles -p /fpool/dhcp_fs -a 10.0.2.254 -d domain.nme -h files -l 86400
In the above command we used several parameters and options; each of these is explained below.
  • The -D specifies that we are setting up a new instance of the DHCP service.
  • The -r SUNWfiles specifies the storage type. Here we are using plain-text storage while SUNWbinfiles and SUNWnisplus are available as well.
  • The -p /fpool/dhcp_fs specifies the absolute path to where the configuration files should be stored.
  • The -a 10.0.2.254 specifies the DNS server to use on the LAN. We can specify multiple comma-separated addresses for DNS servers.
  • The -d domain.nme specifies the network domain name.
  • The -h files specifies where the host information should be stored. Other values are nisplus and dns.
  • The -l 86400 specifies the lease time in seconds.
Now that the initial configuration is created we should proceed to the next step and create a network.
# dhcpconfig -N 10.0.2.0 -m 255.255.255.0  -t 10.0.2.1
Parameters we used in the above command are explained below.
  • The -N 10.0.2.0 specifies the network address.
  • The -m 255.255.255.0 specifies the network mask to use for the network.
  • The -t 10.0.2.1 specifies the default gateway.
All the configurations we created are stored in the DHCP server configuration files. We can manage them using the dhtadm command. For example, to view all of the current DHCP server configuration entries we can use the following command.
# dhtadm -P
This command’s output is similar to the following figure.
Each command we invoked previously is stored as a macro with a unique name in the DHCP configuration storage. Later on we will use these macros in subsequent commands.
Now we need to create a network table for the addresses we want to lease. The following command creates the table for our network.
# pntadm -C 10.0.2.0
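The network table is empty at this point; at any time we can print its content to verify the entries, as the following sketch based on the pntadm utility shows:

# pntadm -P 10.0.2.0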
If we need to reserve an address for a specific host, or for a specific interface in a host, we should add the required configuration to ensure that the host or interface receives the designated IP address. For example:
# pntadm -A 10.0.2.22 -f MANUAL -i 01001BFC92BC10 -m 10.0.2.0 -y 10.0.2.0
In the above command we have:
  • The -A 10.0.2.22 adds the IP address 10.0.2.22.
  • The -f MANUAL sets the flag MANUAL in order to only assign this IP address to the MAC address specified.
  • The -i 01001BFC92BC10 sets the MAC address of the host this entry is assigned to.
  • The -m 10.0.2.0 specifies that this host is going to use the 10.0.2.0 macro.
  • The -y asks the command to verify that the macro entered actually exists.
  • The 10.0.2.0 operand specifies the network the address is assigned to.
Finally, we should restart the DHCP server for all the changes to take effect. The following command restarts the corresponding service.
#  svcadm restart dhcp-server
When we set up the DHCP service, we store the related configuration in the storage of our choice. When we start the service, it reads the configuration from the storage and waits, dormant, until it receives a request for leasing an IP address. The service checks the configuration and, if an IP address is available for lease, leases it to the client.
Prior to leasing an IP address, the DHCP service checks all leasing conditions, like leasing a specific IP address to a specific client, to ensure that it leases the right address to the right client.
We can use the DHCP Manager GUI application to configure a DHCP server. The DHCP Manager can also migrate the DHCP storage from one format to another. To install the DHCP Manager package we can use the following command.
# pkg install SUNWdhcm
Now we can invoke the DHCP Manager using the following command, which opens the DHCP Manager welcome page shown in the following figure.
# dhcpmgr

Learning GlassFish v3 Command Line Administration Interface (CLI)

Terminals and consoles were among the earliest types of communication interfaces between a system administrator and the system administration layer. Due to this long presence, command line administration consoles have become one of the most utilized administration channels for configuring different software, ranging from database engines to routers' embedded operating systems.

GlassFish provides several administration channels; one of them is the command line administration interface, or CLI from now on. The CLI has many unique features which make it very convenient for command line advocates and for new administrators who like to get familiar with the CLI and use it on a daily basis. The CLI allows us to manage, administer, and monitor any resource which the application server exposes to administrators. So we can perform all of our administration tasks just by firing up a terminal, navigating to the GlassFish bin directory, and executing the asadmin script, which is either a shell script on UNIX-based operating systems or a batch file on Windows.

The CLI has some degree of embedded intelligence which helps us find the correct spelling of, and commands similar to, the command or phrase that we try. Therefore, with basic knowledge of GlassFish application server administration, we can find the command that we need and proceed with the task list in front of us.

1 The CLI environment

The CLI is an administration interface which lets administrators and developers change configurations, like changing a listener port; perform lifecycle-related tasks, like deploying a web application or creating a new domain; monitor different aspects of the application server; and finally perform maintenance duties, like creating a backup of the domain configuration, preparing maintenance reports, and so on.

The door to the CLI is a script placed in the glassfish_home/bin directory. For Linux and UNIX platforms it is a shell script named asadmin, and for the Windows platform it is a batch file named asadmin.bat. So, before we start doing anything with the CLI we need to either have glassfish_home/bin in the system path, or we should be in the same directory where the script resides.

Add glassfish_home/bin to operating system path

We can add glassfish_home/bin to the Windows operating system path using the following procedure. You should have the glassfish_home variable already defined; if not, you can replace %glassfish_home% with the full path to the GlassFish installation directory.

  • Right-click on the My Computer icon and select the Properties item.
  • Select the Advanced tab from the tab set.
  • Click on the Environment Variables button; you will see two sets of variables, one for the user and one for the system. I suggest you choose the user space to add your variable or update the currently defined variable.
  • If you cannot find path in the list of variables, create a new variable named path and set its value to %glassfish_home%/bin. If the variable is present, then add ;%glassfish_home%/bin to the end of the variable's value.

Close all cmd windows and open a new one to have the new path applied

And for UNIX and Linux machines, we can add it to the operating system path using the following procedure. Replace "${glassfish_home}" with the full installation path of GlassFish.

  • Open a text editor like ktext, gedit, or any other editor you are familiar with.
  • Open the ~/.bashrc file using the file menu.
  • At the end of the file (on the last line) add export PATH="${glassfish_home}"/bin:"${PATH}"
  • Close all terminals and fire up a new terminal console to have the changes applied.

Info

We can execute asadmin.bat on Windows by calling asadmin.bat or asadmin in a cmd window. We should either be in the bin directory or have the script in the path.

For Linux and UNIX, if we have the script in the path we can call asadmin in a terminal window; if we do not have it in the path, we should be in the bin directory and execute it by typing ./asadmin in the terminal window.

Info

In this article I will use an operating-system-neutral syntax; for example, I will use asadmin version, which can be interpreted as asadmin.bat version on Windows, ./asadmin version on UNIX, or plain asadmin version when we have asadmin in our path.

2 Getting started with CLI

2.1 The commands format

The asadmin script is the door to the GlassFish command line interface, and all of its commands follow a de-facto naming convention and a standard format which we need to remember. All command names are self-explanatory, and we can understand a command's purpose from its name. All of the asadmin commands use one or more words as the command name to completely explain the command's purpose. The words which form the command name are separated using a dash (-).

The naming convention also provides another learning point about the asadmin commands. Commands are categorized into lifecycle management commands, configuration commands, listing commands, and monitoring commands. Each command starts with a verb which determines the command's nature; for example, create, delete, update, list, and configure are the starting words for many asadmin commands. The command syntax is shown in the following snippet.

command-name options operands

The command-name can be start-domain, stop-domain, and so on. The options are parameters which affect the command execution, for example the domain instance port, the JDBC connection pool data source class name, and so on. The operands are usually the names of the resources which the command affects, like the name of the connection pool we want to create, or the name of the domain we want to start.
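For example, in the following sketch (built around a resource-creation command we will meet later in this article), create-jdbc-resource is the command name, --connectionpoolid DerbyPool is an option, and jdbc/sample is the operand; the pool and JNDI names are just placeholders:

create-jdbc-resource --connectionpoolid DerbyPool jdbc/sample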

2.2 The asadmin operating modes

The asadmin utility which we use to execute our administrative commands operates in two modes. Each mode is suitable for a specific set of tasks and has its own characteristics when it comes to performance and productivity.

Headless operating mode

In the headless mode, which is very suitable for writing scripts, we call the asadmin utility and pass the command name along with its options and operands. The asadmin utility launches a JVM and executes the command. The exit code of the command will be 0 for success and 1 for failure. An example of this mode is similar to the following script:

asadmin version

This will simply execute the version command, which shows the application server version. To get help for a command in this mode we can use asadmin help version, or we can use asadmin version --help.
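Because the exit code follows the usual shell convention, scripts can test it directly; a minimal sketch:

asadmin version
echo $?   # prints 0 when the command succeeded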

Multimode operating mode

This operating mode lets us execute multiple commands in one JVM instance, and is therefore faster for executing commands. This execution mode is not suitable for script developers, as we execute the commands in the asadmin environment and not in the shell command line.

To execute a command in this mode we should enter the asadmin environment and then enter our commands along with their options and operands. For example:

asadmin  A

asadmin> version --help B

A: Hit enter

B: Command name with parameters

The asadmin utility is an operating system native shell script which wraps around the actual Java application responsible for executing the administration commands we issue. Later on we will see how this script can prove useful for administrating some domains from a system which is behind a proxy.

2.3 Local commands

Two sets of commands are present in asadmin which help administrators perform a variety of tasks. Local commands are a set of commands which either affect the environment in which the application server is running, or need access to the application server environment locally, to execute some scripts or batch files to perform a job.

A good example of local commands is the domain lifecycle management commands, which include creating a new domain, deleting an existing domain, and starting a domain. All of these commands need either to access the application server installation directory layout or to run a script on the target machine to perform the task.

Another place where we can see local commands at work is the Java DB copy included with the GlassFish application server installation bundle. The following command starts the database on the local machine. The command starts the database by creating another Java process directly, using the operating system shell.

asadmin start-database

You may wonder why this command has no options or operands. Generally, all GlassFish administration commands need the minimum possible options and operands to perform the task when we use the default settings and configuration.

Each command has its own set of parameters, but most of the asadmin commands, either local or remote, have some shared parameters, which are listed in table 1. A command's parameters can have a default value which will be used if we do not specify it; in table 1 the default value is the first value listed.

Table 1 List of shared parameters between local and remote commands

Parameter | Acceptable values | Description
--terse, -t | false, true | Indicates that any output data must be very concise.
--echo, -e | true, false | If set to true, the command-line statement is echoed on the standard output.
--interactive, -I | true, false | If set to true, only the required password options are prompted.
--passwordfile | path to the password file | We will discuss it in section 5 of this article.
--user, -u | no default value | A user with administrative privileges on the target domain.

These parameters have no effect on the command execution; they just change the way a command shows us information and how we interact with it.
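As an illustration, the following sketch combines two of these shared parameters with the list-domains command we will meet later; the option spelling should be verified against your asadmin version:

asadmin --terse=true --echo=true list-domains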

2.4 Remote commands

On the opposite side of the local commands we have remote commands: a set of commands that affect the running application server instance's configuration, and that access the application server environment and file system using an application deployed in the application server itself. Therefore the target instance should be running, and there should be a network route between the administration workstation and the GlassFish instance running on the server machine.

When we execute a remote command we should let asadmin know where the target application server instance is running and what the instance's administration port is. asadmin uses this information, along with the authentication information, to communicate with the application server over HTTP or HTTPS, using a REST-like URL pattern for commands and parameters. Figure 1 demonstrates the remote and local commands.

Figure 1 The asadmin utility communication with remote and local GlassFish domain and installation

In figure 1 you can see that in the running GlassFish instance there is an application deployed under the /__asadmin/ context which listens for incoming asadmin commands. This application listens on the admin listener, which by default uses port 4848.

Listing 1 shows a sample command's content as it heads toward the application server. It should remind you of the RESTful web services development articles or books that you have already read.

Listing 1 Content of the HTTP request for the version command

GET /__asadmin/version HTTP/1.1
Content-type: application/octet-stream
Connection: Keep-Alive
Authorization: Basic YWRtaW46YWRtaW5hZG1pbg==
User-Agent: hk2-agent     #A
Cache-Control: no-cache
Pragma: no-cache
Host: localhost:4848
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2

#A The asadmin's agent name

Listing 2 shows the response generated by the application server for the version command. As you can see, the response is plain and simple.

Listing 2 Response generated by the application server for the version command

HTTP/1.1 200 OK
Content-Type: text/html
Transfer-Encoding: chunked
Date: Mon, 12 Jan 2009 19:26:01 GMT
Signature-Version: 1.0
message: GlassFish v3 Prelude (build b28c)
exit-code: SUCCESS #A

#A The command execution result

If you are thinking that trying these URLs in your browser will result in some output, you are completely right. For example, if you try browsing http://127.0.0.1:4848/__asadmin/uptime you will get a result page similar to figure 2.

Figure 2 Using browsers to run asadmin commands and view the result in browser

Now you may wonder how a command can generate HTML-oriented output in the browser while it generates plain text output when we use asadmin to execute the same command. The answer lies in the way the browser identifies itself versus the way the asadmin utility does. The asadmin utility uses hk2-agent as its agent name, and the asadmin web application responds to this type of agent with a plain text response, while it provides fully readable HTML for other agent types.

The only time that we notice the difference between these two sets of commands is when we try to execute a local command on a remote machine. For example, creating a domain on a remote machine is not possible unless we have a node agent installed on that machine and we are using the DAS to control it.

A good example of remote commands is the application deployment command. We can deploy an application archive like an EAR or WAR file using asadmin. The deployment command will upload the file to the server, and the server takes care of the deployment. The following command will deploy gia.war to the default GlassFish instance.

asadmin deploy gia.war

The deploy command has several parameters; we will discuss them in detail in subsequent sections. You may have already asked yourself how we identify which domain we want to execute a command against. The answer lies in a set of parameters shared between all remote commands which distinguish the target domain that the command affects. These parameters are listed in table 2, along with descriptions and default values.

Table 2 List of all shared parameters between remote commands

Parameter | Default value | Description
--host, -H | localhost | When no value is given, the asadmin utility will try to communicate with localhost.
--port, -p | 4848 | When no value is given, the asadmin utility will try to communicate with the given host on port 4848.
--secure, -s | false | If the asadmin application is exposed on a secure listener, we use this parameter to prevent any security breach.

All of these default parameter values allow us to have more compact commands that are easier to write and understand. Based on the content of this table, we can say that the sample deploy command will deploy the application into a domain running on localhost, with port 4848 assigned to its admin listener.
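To spell the same defaults out explicitly, the sample deployment could be written as the following sketch:

deploy --host localhost --port 4848 --secure=false gia.war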

3 Performing common tasks with CLI

Application server administration is composed of some common tasks which deal with common concepts of Java EE application servers, like domain administration, application lifecycle administration, and managed resources administration. Along with these common tasks there are many other tasks which may or may not be common between application servers. These tasks include listener administration, resource adapter administration, application server security administration (which covers both application server security and Java EE related security definitions), virtual server administration, monitoring, and so on. In this section we will discuss the common administration tasks.

3.1 Domain lifecycle administration

Domains are the foundation of GlassFish application server instances; each domain is an independent instance of the application server which uses the same application server installation and hardware resources. Domain lifecycle administration covers creating, starting, stopping, configuration backup, deleting the domain, and domain configuration recovery using previously created backups.

Creating a new domain

Creating a new domain is fairly simple when you know what attributes distinguish one domain from another. A sample command which creates a domain is as follows:

create-domain --user admin --adminport 4747  --domaindir /opt/dev/apps/domains/GiADomain --profile developer --instanceport 8282 --domainproperties jms.port=8687 --savemasterpassword=true --passwordfile=/opt/dev/apps/passfile.txt  --savelogin=true GiADomain

This is a sample of the create-domain command with most of the applicable parameters. We are using --adminport to determine the admin port; --domaindir lets us create the domain in a non-default path, like the one we used instead of glassfish_home/domains, which is the default directory. The --instanceport parameter lets us determine the port that the default application listener uses. Some of the domain properties, like the JMS port, JMX port, and so on, can be determined using the --domainproperties parameter. All parameters are self-describing except for the following three, which you need to understand thoroughly.

  • The --savemasterpassword parameter: Setting this option to true asks the domain creation process to save the master password into a file and facilitates unattended (headless) domain startup. We will discuss this password in more detail in section 4 of this article. For now, just remember that the default value for the master password is changeit. I am sure the value of that password speaks for the importance of changing it.
  • The --passwordfile parameter: The asadmin utility policy encourages administrators not to enter passwords in the console and instead use a file which includes the required passwords. During domain creation two passwords are required: the domain administration password and the master password. To avoid typing them we can store them in a plain text file, similar to the following snippet, and pass it to any command that we execute instead of typing the passwords interactively.

AS_ADMIN_PASSWORD=adminadmin

AS_ADMIN_MASTERPASSWORD=changeit

The file content is in properties file format and can be created using any text editor. If we do not use the --passwordfile parameter and a password file containing the passwords, the asadmin utility will ask for the passwords interactively.

  • The --savelogin parameter: We will discuss automatic login in more detail in section 4 of this article, but to be brief, this parameter makes it possible to execute our commands on different domains without providing a username and password on every command execution.

Starting and stopping a domain

We can start a domain using the start-domain command. An example of starting a domain is similar to the following snippet.

start-domain --debug=true --verbose=true --user admin --passwordfile=/opt/dev/apps/passfile.txt --domaindir=/opt/dev/apps/domains/ GiADomain

Although the command can be as simple as start-domain domain1, that is for occasions when we are using all of the default values for every parameter and no customization happened when we created the domain. The --debug=true means that the domain is started in debug mode and we can attach our debugger to the running instance. The --verbose=true means that we want to see all messages that the application server emits during startup; it is useful for catching errors and troubleshooting. The --passwordfile parameter lifts the need to enter the admin and master passwords. The file content is similar to the password file we used during domain creation. And finally, we need to determine where the domain resides if we created the domain in a non-standard location.

To stop a domain we can use a command similar to the following snippet which is fairly simple.

stop-domain --domaindir=/opt/dev/apps/domains/ GiADomain

We just need to provide the path to the directory which contains the domain. The only applicable shared parameters are the items that we used; none of the other shared parameters applies to this command.

Domain utility commands

There are two utility commands for domain administration. The first command lists all of the domains which reside in the given directory, along with each domain's status indicating whether it is running or not. The second command checks the structure of the domain.xml file.

list-domains --domaindir=/opt/dev/apps/domains/

verify-domain-xml --verbose=true --domaindir=/opt/dev/apps/domains/ GiADomain

The verify-domain-xml command is very useful when we edit the domain.xml file manually.

Domain configuration backup and recovery

Yes, we can back up a domain configuration and later restore that backup if we mess up the domain configuration. Each backup file is an archive which contains description files and vital domain configuration files, including domain.xml. By default all backups are placed inside a directory named backups, which resides inside the domain directory. The backup file names follow a pattern similar to sjsas_backup_v00001.zip, which means it is the first backup of this domain. To create a backup, a command similar to the following snippet will do the trick.

backup-domain --description "backup made for glassfish in action book" --domaindir /opt/dev/apps/domains/ GiADomain

After some time we will have several backup archives for each of our domains, so we will need a command to get a list of all available backups, or of the backups related to one single domain. To get the list of backups for a single domain we can use:

list-backups --domaindir /opt/dev/apps/domains/ GiADomain

Or, to get a list of all backups of all domains in the default domains directory, we can use the list-backups command alone, with no parameters.

Now that we can list the available backups for each domain, we need a command which allows us to restore a domain. To restore a domain to one of its previous backups we should stop the domain and then use the restore command, which is similar to:

restore-domain --domaindir /opt/dev/apps/domains/ --filename sjsas_backup_v00001.zip GiADomain

The command will simply pick up the backup archive and restore the given domain using its content.

Deleting a domain

We should deal with this command very carefully, as it will delete the domain directory with all of its content, including backups, deployed applications, configuration files, and so on. To delete a domain we can use the delete-domain command, as follows, to delete our GiADomain domain.

delete-domain --domaindir=/opt/dev/apps/domains/ GiADomain

Domain management is the most basic task related to application server administration. Every asadmin command, including the domain administration commands, has manual pages included in the asadmin commands package and accessible using the asadmin utility. An example of getting help for the create-domain command is similar to:

asadmin help create-domain

You can see the complete list of parameters and their associated values for each of the discussed commands using the help command. All of the domain administration commands are local commands. We can use the list-commands command to see the list of all available commands in both the remote and local categories.

3.2 Applications lifecycle administration

Applications, including web applications, enterprise applications, resource adapters, and so on, are our bits inside the application server which our clients rely on to get the required services. Applications can be deployed from the command line in a very effective and customizable manner. Like all other administration commands, application deployment is a single-parameter command when we want to use the default parameters.

Deploying and undeploying applications

To deploy an application independently of its type (WAR, EAR, RAR, and so on) we can use the deploy command. The command, which is one of the most basic remote commands, deploys any kind of application known to the GlassFish application server to its dedicated container. The command has many options, which are omitted here to show the common use case. The GlassFish instance should be up and running to use deploy or any other remote command.

deploy --host 127.0.0.1 --port 4747 --user admin --name aName --contextroot cRoot --upload=true --dbvendorname derby --createtables=true /home/masoud/GiA.ear

Deploying a web application using the default parameters is as simple as passing the archive to the deploy command. The context name will be the same as the archive file name.

deploy GiA.war

We can remove an application using the undeploy command. The command will remove the application bits from the application server directories, and remove it from the list of known applications in the domain.xml file. The following snippet shows how we can use it.

undeploy  --droptables=true aName

The undeploy command uses the application name to remove the application from the application server; we have the liberty to drop the automatically created tables (for enterprise applications) at the same time. Deploying and undeploying other application types is similar to enterprise application deployment. You can view the complete list of parameters by executing deploy with no parameters, or you can view the complete help content by using the help deploy command.

Application utility commands

For domains with the developer profile we have only one utility command, named list-applications, which lists the deployed applications. In addition to all the shared parameters, this command has one important parameter named --type, which accepts application, connector, ejb, jruby, and web as its value. The following command will list all web applications deployed in our domain.

list-applications --type=web

You can see all commands related to application administration by trying asadmin application, which will show all commands that have the word application in their names.

3.3 Application server managed resources

Dealing with managed resources is one of the most recurring tasks an administrator faces during application deployment and development. The most important types of managed resources are JDBC and JMS resources, which we will discuss in this section.

JDBC connection pool administration

A JDBC connection pool is a group of reusable connections for a particular database. Because creating each new physical connection is time consuming, the server maintains a pool of available connections to increase performance. When an application requests a connection, it obtains one from the pool and when the application closes the connection, the connection is returned to the pool instead of being closed. To create a JDBC connection pool we can use create-jdbc-connection-pool command.

create-jdbc-connection-pool --user admin --passwordfile=/opt/dev/apps/passfile.txt --host localhost --port 4747 --datasourceclassname org.apache.derby.jdbc.ClientDataSource --restype javax.sql.XADataSource --property portNumber=1527:password=APP:user=APP:serverName=localhost:databaseName=GiADatabase:create=true GiA_Derby_Pool

The command may look scary at first, but if you take a closer look you can see that all of the parameters and values are already familiar. We can delete a JDBC connection pool using delete-jdbc-connection-pool, which in addition to the shared parameters accepts an operand: the name of the connection pool we want to delete.

Now that we have a connection pool we need a reference to the connection pool in the JNDI tree to ensure that our applications can access the connection pool through the application server.

JDBC resources administration

A JDBC resource, or data source, is the mediator between the application and the connection pool, providing a JNDI name to access the pool. Multiple JDBC resources can point to a single connection pool.

create-jdbc-resource --user admin --port 4747 --host localhost --passwordfile=/opt/dev/apps/passfile.txt --connectionpoolid GiA_Derby_Pool jdbc/GiAPool

We determined a name for our JDBC connection pool when we created the pool. When we create the data source, we use that name to identify the JDBC connection pool we want to reference in the JDBC resource.

We can delete a JDBC resource using delete-jdbc-resource, which in addition to the shared parameters accepts an operand: the name of the JDBC resource we want to delete.

JMS Destination administration

In enterprise applications, asynchronous communication is one of the inevitable requirements. JMS destinations are, virtually, the places that messages are headed to; JMS clients listen on these destinations to receive the messages. We can create a JMS destination, either a queue or a topic, using the create-jmsdest command.

create-jmsdest --user admin --passwordfile=/opt/dev/apps/passfile.txt --host localhost --port 4747 --desttype queue  --property User=public:Password=public GiAQueue

We can create a JMS topic by changing the value of the --desttype parameter to topic. To delete a JMS destination we can use the delete-jmsdest command, which accepts the name of the destination we want to delete as an operand. For example, the following command will delete the JMS queue we created in the previous step.
delete-jmsdest GiAQueue

Make sure that you create the above queue again, as we will need it in the next steps.

JMS resources administration

JMS connection factories are the door to consuming and producing JMS messages. We need to define these factories before we can access a JMS destination. To create a JMS connection factory we can use the create-jms-resource command. For example:

create-jms-resource --user admin --passwordfile=/opt/dev/apps/passfile.txt  --host localhost --port 4747  --restype javax.jms.QueueConnectionFactory --property ClientId=MyID:UserName=guest:Password=guest jms/GiAQueueConnectionFactory

Our sample command creates a connection factory which provides us with queue connections. When we need to interact with a JMS topic we can change the --restype parameter to javax.jms.TopicConnectionFactory.

Deleting a JMS resource is possible using the delete-jms-resource command, which takes the name of the resource we want to delete as an operand, in addition to the shared parameters.

Utility commands

There are several utility commands related to managed resources; table 3 shows these commands along with a sample and a description.

Table 3 Some asadmin utility commands for administering managed resources

Command | Sample | Description
flush-jmsdest | flush-jmsdest GiAQueue* | Purges the messages in a JMS destination.
jms-ping | jms-ping | Checks whether the JMS provider is running; the command can ping cluster-wide configurations.
list-jms-resources | list-jms-resources --restype javax.jms.TopicConnectionFactory | Lists all JMS resources.
list-jmsdest | list-jmsdest | Lists all JMS destinations.
list-jdbc-connection-pools | list-jdbc-connection-pools | Lists all JDBC connection pools.
list-jdbc-resources | list-jdbc-resources | Lists all JDBC data sources.

* I removed all shared parameters to fit the commands in the table.

The asadmin utility has a very rich set of commands for administering managed resources; we have covered the basic commands here.

4 Monitoring with CLI

Monitoring is an integral part of an administrator's daily task list. Monitoring helps us find possible problems, detect performance bottlenecks, find system resource shortages, plan system capacity or upgrades, and finally watch the daily performance of the system. The GlassFish application server infrastructure allows us to monitor almost any object which corresponds to a Java EE concept. For example, we can monitor an HTTP listener, a servlet, or a JDBC connection pool.

To monitor an object, the system should gather statistics about that object, and this statistics gathering has some overhead even in ideal software, as more bytecode must be executed to gather the statistics. Because the GlassFish application server should be well tuned out of the box, all monitoring functionality is turned off to prevent any extra overhead during the system's working hours.

GlassFish provides three monitoring levels for any object available for monitoring. These levels are:

  • OFF: no monitoring information is gathered.
  • LOW: only changes to the object's attributes are monitored.
  • HIGH: in addition to attribute changes, method executions are counted.

The asadmin utility provides multiple ways of getting statistical data from the application server core. Multiple commands are available to facilitate our job of viewing this statistical information. These commands include:

  • The monitor command: We can use this command to view common statistical information about different application server services; we can filter the statistics down to one monitor-able object in a service, or view cumulative statistical information about all objects in that particular service. For example, all JDBC connection pools, or just one single JDBC connection pool's statistics, can be viewed using the monitor command.
  • The list command: We can use this command to get the hierarchy of all monitor-able and configuration objects, or of a specific type of object, like JDBC connection pools. The list command has an important boolean parameter named monitor which determines whether we want to see monitor-able objects or configuration objects.
  • The set command: We can use this command to change the monitoring level of a monitor-able object. In a broader view, we can use it to change the application server configuration by changing the attributes of configuration objects.
  • The get command: We can get customized monitoring information about different attributes of different monitor-able objects. In a broader view, we can use this command to view the value of any configuration object's attributes.

4.1 Dotted names

All monitor-able and configuration objects shape a tree composed of the objects and their child objects. The child objects have attributes which are either the monitoring factors or configuration elements. Objects in the tree are separated using a dot, so we can simply navigate from the root object down to any of the lower-level child objects. For example, if we execute list --monitor=true server we will get a list of all monitor-able objects which are direct children of the server object. The list can be similar to the following snippet.

server.applications
server.connector-service
server.http-service
server.jms-service
server.jvm
server.orb
server.resources
server.thread-pools
server.transaction-service

After getting the high-level children, we can go deeper toward the leaf nodes by listing the immediate or related children of any top-level object. For example, to get the immediate children of the http-service object we can use list --monitor=true server.http-service, which shows all immediate children of the http-service object and can result in output similar to:

server.http-service.__asadmin
server.http-service.connection-queue
server.http-service.file-cache
server.http-service.keep-alive
server.http-service.pwc-thread-pool
server.http-service.server

We can go further down and monitor any monitor-able object in the application server. We can use list --monitor=true server.http-service.* to view a recursive list of the immediate children and the children of those children. We use * with the list command as a placeholder for any part of the dotted name. For example, we can use server.jvm.* or server.resource* to get all children of server.jvm, or all children whose names start with resource.

The dotted names are not provided solely for monitoring purposes; we can also use them to view and change the application server configuration through the asadmin utility. We can get the list of configurable objects by omitting the --monitor=true parameter from the list command.

Enabling the GlassFish monitoring

GlassFish monitoring is turned off when we install the application server, so we need to enable the monitoring system to gather statistical information for us. To do so, we can use the set command to change the monitoring level for one or all of the GlassFish services, like the JDBC connection pools, HTTP listeners, and so on. But before changing the monitoring level from OFF to HIGH, we can check the current level by using a simple get command similar to the following snippet.

get server.monitoring-service.module-monitoring-levels.*

Executing this snippet shows a result similar to the following, which means that monitoring is OFF for all services.

server.monitoring-service.module-monitoring-levels.connector-service=OFF
server.monitoring-service.module-monitoring-levels.ejb-container=OFF
server.monitoring-service.module-monitoring-levels.http-service=OFF
server.monitoring-service.module-monitoring-levels.jdbc-connection-pool=OFF
server.monitoring-service.module-monitoring-levels.jms-service=OFF
server.monitoring-service.module-monitoring-levels.jvm=OFF
server.monitoring-service.module-monitoring-levels.orb=OFF
server.monitoring-service.module-monitoring-levels.thread-pool=OFF
server.monitoring-service.module-monitoring-levels.transaction-service=OFF
server.monitoring-service.module-monitoring-levels.web-container=OFF

We can enable monitoring for JDBC connection pools by using the set command as follows:

set server.monitoring-service.module-monitoring-levels.jdbc-connection-pool=HIGH

Now we can be sure that any activity involving any of the JDBC connection pools is monitored, and that its statistical information is available for us to view and analyze.
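For example, using the service names shown in the get output above, we could watch HTTP traffic closely while keeping JVM monitoring lightweight by combining levels; a small sketch:

set server.monitoring-service.module-monitoring-levels.http-service=HIGH
set server.monitoring-service.module-monitoring-levels.jvm=LOW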

3.2 The monitor command

Using the monitor command is the simplest way to view monitoring information about the GlassFish application server. The following snippet shows the monitor command syntax; I have not included the shared parameters, to make the command-specific parameters easier to see.

monitor --type monitor_type [--filename file_name] [--interval interval] [--filter filter_name] instance_name

We can save the command output to a CSV file using the --filename parameter, and we can determine the interval at which the command retrieves statistics using the --interval parameter. The instance_name argument lets us run this command against a DAS to retrieve statistics for one of the DAS member instances. The two other parameters determine which service, and which object in the service, we want to monitor. The most important acceptable values for the --type parameter are listed in table 4 along with their descriptions.

Table 4 acceptable values for the monitor command's --type parameter

  • statefulsession: Statistics related to stateful session beans
  • statelesssession: Statistics related to stateless session beans
  • connectorpool: JCA connector pool statistics
  • endpoint: Web service endpoint statistics
  • entitybean: Entity bean statistics
  • filecache: GlassFish HTTP-level file cache statistics
  • httplistener: HTTP listener statistics
  • httpservice: HTTP service statistics
  • jdbcpool: JDBC connection pool statistics
  • jvm: Underlying JVM statistics
  • servlet: Servlet statistics

The following snippet shows an example command for monitoring our newly created JDBC connection pool.

monitor --type jdbcpool --interval 10 --filter GiA_Derby_Pool server

Here we just want to view statistics for one JDBC connection pool on the default instance, with an interval of 10 seconds between statistics retrievals. The output of this command is very similar to figure 3.

Figure 3 Outputs for monitoring GiA_Derby_Pool JDBC connection pool using monitor command

Monitoring results are categorized under several columns, and for each column we have several statistical factors, like the highest number of free connections and the current number of free connections. The next service we want to monitor is the application server runtime, the JVM instance our application server runs on. To monitor the JVM we can use a command similar to the following snippet.

monitor --type jvm --interval 1 server

The command output is similar to figure 4, which shows almost no changes in heap size during the monitoring period.

Figure 4 outputs for JVM monitoring using the monitor command

The monitor command, combined with the LOW monitoring level, is very useful when we need an overview of different GlassFish services in a production environment.
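As a sketch of the CSV option mentioned earlier (the /tmp path and the listener name http-listener-1 are illustrative assumptions), statistics can also be captured to a file for later analysis:

monitor --type httplistener --filter http-listener-1 --interval 30 --filename /tmp/http-stats.csv server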

3.3 Monitoring and configuration by get and set commands


Monitoring a JDBC connection pool

Now we want to use this set of four commands to monitor our newly created JDBC connection pool.

get --monitor=true server.resources.GiA_Derby_Pool.*

This command returns a rich set of statistical information about our connection pool, though we can change the command to get only one of the connection pool attributes instead of the complete statistical set. To get the number of free connections in our JDBC pool we can use the following command:

get --monitor=true  server.resources.DerbyPool.numconnfree-current

We can use get to view the configuration attributes of any configurable object in the application server. If we use get without --monitor=true, we should try to view a configuration attribute instead of a monitoring attribute. For example, to view the maximum pool size of our connection pool we can use the following command:

get server.resources.jdbc-connection-pool.DerbyPool.max-pool-size

We can use the set command alongside the get command to change configuration parameters using the dotted names and the asadmin utility.

set server.resources.jdbc-connection-pool.DerbyPool.max-pool-size=64

It is very simple and effective to get detailed monitoring information out of the application server to track down performance related issues.

4 CLI security

The CLI is our means of administrating the application server. While it is very useful for daily tasks, it can be catastrophic if we fail to protect it from unauthorized access.

4.1 The asadmin related credentials

In the commands we studied in this article, you saw that for some of them we used the --user and --passwordfile parameters and for some of them we did not. The reason we could execute a command with or without providing credentials goes back to the way we created the domain: if you look at the command we used to create the domain, you can see that we used --savemasterpassword=true and --savelogin=true. These two parameters save the domain credentials, and asadmin reuses those credentials whenever we execute a command.

The so-called master password

The master password is designated to protect the domain's encrypted files, like the digital certificate store, from unauthorized access.

When we start a GlassFish domain, the startup process needs to read these certificates and therefore needs to open the certificate store files. GlassFish needs the master password to open the store files, so we should either provide the master password interactively during start-domain execution or use the --passwordfile parameter to provide the process with the master password.

The most common situation for using a previously saved master password is when we need our application server to start in an unattended environment, such as when we create a Windows service or a Linux daemon for it.

To provide this password for the application server startup process, we should use --savemasterpassword=true during domain creation. If we use this parameter, the master password is saved into a file named master-password which resides inside the domain's config directory.

We said that the master password protects some files from unauthorized access. One set of files the master password protects is the digital certificate stores, which differ between the developer and cluster profiles.

The domain administration credentials

So far, we saw that we can execute asadmin commands with or without providing the password file. You may ask how it is possible that we enter no password and provide no password file, and yet our commands are still carried out by asadmin.

We did not provide a password interactively or through a password file because we used a parameter during domain creation. The --savelogin=true parameter asks the domain creation command to save the administration password of our domain into a file named .asadminpass, located inside our home directory. The content of this file is similar to the following snippet.

asadmin://admin@localhost:4747 YWRtaW5hZG1pbg==

The syntax simply records the administration username and password for a domain running on localhost that uses 4747 as its administration port. The password is stored in an encoded form rather than clear text.

There is an asadmin command which performs a task similar to --savelogin for domains that are already created. We can use this command to save the credentials so that the asadmin utility does not ask for them, either interactively or through the --passwordfile parameter. The following snippet shows how we can use the login command.

login   --host localhost --port 9138

This command interactively asks for the administration username and password, and after a successful authentication it saves the provided credentials into the .asadminpass file. After we execute this command, the content of .asadminpass will be similar to the following snippet.

asadmin://admin@localhost:4747 YWRtaW5hZG1pbg==

asadmin://admin@localhost:9138 YWRtaW5hZG1pbg==

The .asadminpass file does not store passwords in clear text; even so, the file should be protected, since anyone who obtains it may be able to recover the credentials it holds.

The login command comes in very handy when we need to administrate several domains from one administration workstation. We can simply log in to each remote or local domain we need to administrate, and asadmin will pick up the correct credentials from the .asadminpass file based on the --host and --port parameters.
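For example, after logging in to the domain listening on port 9138, a remote command like the following should run without prompting for credentials (list-jdbc-connection-pools is just an illustrative choice of command):

asadmin --host localhost --port 9138 list-jdbc-connection-pools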

Changing passwords

As administrators we usually like to change our passwords from time to time to maintain a higher level of security. GlassFish lets us change the master or administration password using the provided commands.

To change the master password we must first make sure the application server is not running, and then we can run change-master-password as follows:

change-master-password --domaindir=/opt/dev/apps/domains/ --savemasterpassword=true GiADomain

After executing this command, the asadmin utility interactively asks us for the new master password, which must be at least 8 characters. We use --savemasterpassword to ensure that the master password is saved so that we do not need to provide it to asadmin during domain startup. We also need to update the password file if we are using one to feed the asadmin utility the master password.

To change the administration password we can use change-admin-password, which is a remote command and needs the application server to be running. The following snippet shows how we can change the administration password for a given username.

change-admin-password --host 127.0.0.1 --port 4747 --user admin

After executing this command, asadmin asks for the administration password and then changes the password of the given user. If we change a user's password, we will need to log in to that domain again if we want to use automatic login. We also need to update the password file if we are using one to feed the asadmin utility its required passwords.

In the advanced administration article we will discuss how we can add more administration users and how we can use an existing credential store, such as an LDAP directory, for administration authentication.

4.2 Protecting passwords

There are many places in any application server where we need to provide usernames and passwords that the application server uses to communicate with external systems like JMS brokers and databases. Usually each part of the overall infrastructure has its own policies and permission sets, so we should protect these passwords and avoid leaving them in plain text anywhere, even in the application server configuration files, which can be opened with any text editor.

GlassFish provides a very elegant way of protecting these passwords: it gives us the commands and infrastructure to encrypt the passwords and use each encrypted password's assigned alias in the application server configuration files. The encrypted passwords are stored inside an encrypted file named domain-passwords, which resides inside the domain's config directory. The domain-passwords file is encrypted using the master password, so if the master password is compromised, this file can be decrypted.

The command for creating password aliases is a remote command named create-password-alias; a sample usage is shown in the following snippet:

create-password-alias --user admin --host localhost --port 4747 GiA_Derby_Pool_Alias

After we execute this command, the asadmin utility asks for the password which we want this alias to hold; asadmin may also ask for administration credentials if we are not logged in.

Now that we have created the alias, we can use it with the alias access syntax, which follows the ${ALIAS=alias-name} format. For example, to create the JDBC connection pool discussed in 3.3 using the alias, we can change the command as follows:

create-jdbc-connection-pool --user admin --host localhost --port 4747 --datasourceclassname org.apache.derby.jdbc.ClientDataSource --restype javax.sql.XADataSource --property portNumber=1527:password=${ALIAS=GiA_Derby_Pool_Alias}:user=APP:serverName=localhost:databaseName=GiADatabase:create=true GiA_Derby_Pool

Password aliasing is not present just for external resources; it can also be used to protect the content of the password file which holds the administration and master passwords used instead of typing them when asadmin asks interactively. We can simply create a password alias for the administration password and for the master password and use them in the password file. Sample content for a password file with aliased passwords looks like:

AS_ADMIN_PASSWORD=${ALIAS=admin-alias}
AS_ADMIN_MASTERPASSWORD=${ALIAS=master-alias}
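With such a file in place, we can point asadmin at it instead of typing passwords; a sketch, assuming the file is saved as /opt/dev/apps/passfile.txt:

asadmin --user admin --passwordfile /opt/dev/apps/passfile.txt stop-domain GiADomain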

Like all other administration command sets, the alias administration set includes some other commands which help with alias administration. The other commands in the set are:

  • The delete-password-alias command: We can delete an alias when we are sure we no longer use it.
  • The list-password-aliases command: We can get a list of all aliases in our domain-passwords file.
  • The update-password-alias command: We can update an alias by changing the password it holds.

Password aliasing is very helpful when we do not want to give our passwords to the personnel in charge of application server management or administration tasks; instead we can provide them with aliased passwords which they can use.
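For instance, reviewing and then rotating the alias created earlier could look like the following sketch:

list-password-aliases --user admin --host localhost --port 4747
update-password-alias --user admin --host localhost --port 4747 GiA_Derby_Pool_Alias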

5 Summary

The GlassFish command line administration interface is one of the most powerful and feature-complete command line administration utilities in the application server market. The asadmin utility provides all the commands we need to administrate any part of the application server directly from the command line, instead of wandering through XML configuration files, third party utilities, or the web based administration console.

The asadmin utility is not only a client side application: it is a composition of a client side and a server side application, which makes it possible to use asadmin to administrate remote domains as well as local domains.

The asadmin utility has enough intelligence embedded in its routines to help us find the required command and its structure simply by entering part of the command. Using the asadmin utility we can administrate domains, applications, and application server managed resources.

When it comes to security, one of the highest concerns of administrators, GlassFish provides many measures to protect passwords and keep the risk of compromise as low as possible, encrypting both stored external resource passwords and asadmin administration passwords.

OpenMQ, the open source message queuing system, for beginners and professionals (OpenMQ from A to Z)

Talking about messaging implies two basic functionalities around which all other features are built: support for topics and queues, which let a message be consumed either by every interested, subscribed consumer or by just one of the interested consumers.

Messaging middleware was present in the industry before Java came along, and each product had its own interfaces for providing messaging services; there were no standard interfaces that vendors tried to comply with to increase compatibility, interoperability, ease of use, and portability. The achievement of the Java community was defining the Java Message Service (JMS) as a standard set of interfaces for interacting with this type of middleware from Java based applications.

Asynchronous communication of data and events is one of the ever-present communication models in enterprise applications, used to address requirements like long running process delegation, batch processing, loose coupling of different systems, updating a dynamically growing or shrinking set of clients, and so on. Messaging can be considered one of the important building blocks required by every advanced middleware stack; it provides the essentials required to define a SOA, as it supplies the basics needed for loose coupling.

Open MQ is an open source project, mostly sponsored by Sun Microsystems, which provides the Java community with a high performance, cross platform message queuing system under commercial-usage-friendly licenses, including CDDL and GPL. Open MQ, hosted at mq.dev.java.net, shares the same code base as Sun Java System Message Queue. This shared code base means that any feature available in the commercial product is also available in the open source project; the only difference is that for the commercial one you pay a license fee and in return get professional support from Sun engineers, while with the open source alternative you rely on community provided support.

Open MQ provides a broad range of functionality in security, high availability, performance management and tuning, monitoring, multiple language support, and so on, which makes it easier to utilize its base capabilities in the overall software architecture.

1 Introducing Open MQ

Open MQ is the umbrella project for the multiple components which form Open MQ and Sun Java System Message Queue, the productized sibling of Open MQ. These components include:

  • Message broker or messaging server: The message broker is the heart of the messaging system; it maintains the message queues and topics, manages client connections and their requests, controls client access to queues and topics for reading or writing, and so on.

  • Client libraries: Client libraries provide APIs which let developers interact with the message broker from different programming languages. Open MQ provides Java and C libraries in addition to a platform and programming language agnostic interaction mechanism.

  • Administrative tools: Open MQ provides a set of very intuitive and easy to use administration tools, including tools for broker life cycle management, broker monitoring, destination life cycle management, and so on.

1.1 Installing Open MQ

Since version 4.3, Open MQ provides an easy to use installer which is platform dependent, as it is bundled with compiled copies of the client libraries and administration tools. Installer downloads are available at https://mq.dev.java.net/downloads.html; download the binary copy compatible with your operating system. Make sure you have JDK 1.5 or above installed before you try to install Open MQ. Open MQ works on virtually any operating system capable of running JDK 1.5 and above.

Beauty of open source

You might be using an operating system for development or deployment for which no prebuilt installer is provided; in this case you can simply download the source code archive from the same page mentioned above and build the source code yourself. The building instructions are available at http://download.java.net/mq/open-mq/4.3/b05/Compiling and Running OpenMQ 4.3 in NetBeans.txt and are straightforward. Even if you cannot manage to build the binary tools, you can still use Open MQ and JMS on that operating system and run your management tools from a supported operating system.

To install Open MQ, extract the downloaded archive and run the installer script or executable file, depending on your operating system; the installer is straightforward and does not require any special user interaction beyond accepting some defaults.

After we complete the installation process we will have a directory structure similar to figure 1 in the installation path we selected during installation.

Figure 1 Open MQ installation directory structure

As you can see in the figure, we have some metadata directories related to the installer application and a directory named mq that contains the Open MQ related files. Table 1 lists the Open MQ directories along with their descriptions.

Table 1 Important items inside the Open MQ installation directory

  • bin: All administration utilities reside here
  • include: Required headers for using the Open MQ C APIs
  • lib: All JAR files and some native dependencies (NSS) reside here
  • lib/*.war: WAR files providing HTTP tunneling for firewall-restricted configurations and the Universal Messaging Service for language-agnostic use of Open MQ
  • var/instances: Configuration files for the broker instances created on this Open MQ installation are stored here by default

2 Open MQ administration

Every server side application requires some level of administration and management to keep it up and running, and Open MQ is no exception. As we discussed in the article introduction, each messaging system must provide some basic capabilities, including point-to-point and publish/subscribe mechanisms for distributing messages, so basic administration revolves around the same concepts.

2.1 Queue and Topic administration and management

When a broker receives a message it should put the message in a queue or topic, or discard it. Queues and topics are called physical destinations, as they are the final place inside the broker where a message is placed.

In Open MQ we can use two administration utilities for creating queues and topics: imqadmin and imqcmd; the former is a simple Swing application and the latter is a command line utility. To create a queue we can execute the following command in the bin directory of the Open MQ installation.

imqcmd create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ=true"

This command simply creates a destination of type queue named smsQueue. The queue will not store more than 1000 messages, and if producers try to put more messages into it, the oldest present message is removed to open up space for new ones; removed messages are placed in a special queue named the Dead Message Queue (DMQ). The command creates the queue on the local Open MQ instance, which is running on localhost port 7676. We can use -b to ask imqcmd to communicate with a remote server. The following command creates a topic named smsTopic on a server running on 192.168.100.1:7676; remember that the default username and password for the default broker are both admin.

imqcmd -b 192.168.100.1:7676 create dst -n smsTopic -t t

imqcmd is a very complete and feature-rich command; we can also use it to monitor physical destinations. A sample monitoring command looks like:

imqcmd metrics dst -m rts -n smsQueue -t q

Sample output for this command is similar to figure 2, which shows the consuming and producing message rates of smsQueue. The -m argument determines which type of metrics we want to see; the observable metric types for physical destinations include:

  • con: destination consumer information
  • dsk: disk usage by this destination
  • rts: message rates, as already mentioned
  • ttl: message totals; this is the default metric type if none is specified

Figure 2 sample output for OpenMQ monitoring command
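The other metric types work the same way; for example, to view consumer information and message totals for the same smsQueue (a sketch using the options listed above):

imqcmd metrics dst -m con -n smsQueue -t q
imqcmd metrics dst -m ttl -n smsQueue -t q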

 

There are some other operations which we may perform on destinations:

  • Listing destinations: imqcmd list dst
  • Emptying a destination: imqcmd purge dst -n ase -t q
  • Pausing or resuming a destination: imqcmd pause dst -n ase -t q
  • Updating a destination: imqcmd update dst -t q -n smsQueue -o "maxNumProducers=5"
  • Querying destination information: imqcmd query dst -t q -n smsQueue
  • Destroying a destination: imqcmd destroy dst -n smsTopic -t t

2.2 Brokers administration and management

We discussed that when we install Open MQ it creates a default broker, which starts when we run imqbrokerd without any additional parameters. An Open MQ installation can host multiple brokers running on different ports with different configurations. We can create a new broker by executing the following command, which creates the broker and starts it so it can accept further commands and incoming messages.

imqbrokerd -port 7677 -name broker02

In table 1 we mentioned the var/instances directory, inside which all broker configuration files reside. If you look at this folder after executing the above command you will see two directories named imqbroker and broker02, corresponding to the default broker and the newly created one. Table 2 shows the important artifacts residing inside the broker02 directory.

Table 2 Brokers important files and directories

  • etc/accesscontrol.properties: Configuration related to access control, roles, and the authentication source and mechanism is stored here
  • etc/passfile: Usernames and passwords for accessing Open MQ are stored here
  • fs370 (directory): The broker's file-based message store, physical destinations, and so on
  • log (directory): Open MQ log files
  • props/config.properties: All configuration related to this broker is stored in this file: services, security, clustering, and so on
  • lock (file): Exists while the broker is running and records the broker's name, host, and port

Open MQ has a very intuitive configuration mechanism. First, there are default configurations which apply to the whole Open MQ installation; an installation configuration file can override these defaults; each broker then has its own configuration file which can override the installation level parameters; and finally, we can override broker (instance) level configurations by passing values to imqbrokerd when starting a broker. When we create a broker we can change its configuration by editing the config.properties file mentioned in table 2. Table 3 lists the other configuration files mentioned and their default locations on disk.

Table 3 Open MQ configuration files path and descriptions

  • default.properties: Located in the lib/props/broker directory*; holds the default configuration parameters
  • install.properties: Located in the lib/props/broker directory*; holds the installation configuration parameters
  • config.properties: Located inside the props folder** of the instance directory

* We discussed the lib directory in table 1.

** The props directory was discussed in table 2.
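As a quick sketch of the last override level, a broker property can be supplied directly on the imqbrokerd command line with -D (the property value here is an illustrative assumption):

imqbrokerd -name broker02 -port 7677 -Dimq.autocreate.queue=false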

Now that we understand how to create a broker, let's see how we can administrate and manage one. To manage a broker we use the imqcmd utility, which was also discussed in the previous section for destination administration.

There are several levels of monitoring data associated with a broker, including incoming and outgoing messages and packets (-m ttl), incoming and outgoing message and packet rates (-m rts), and operational statistics like connections, threads, and heap size (-m cxn). The following command returns message and packet rates for a broker running on localhost, listening on port 7677; the statistics update every 10 seconds.

imqcmd metrics bkr -b 127.0.0.1:7677 -m rts -int 10 -u admin

As you can see, we passed the username with the -u parameter, so the server will not ask for it again. We cannot pass the associated password directly on the command line; instead, we can use a plain text file which includes it. The password file should contain:

imq.imqcmd.password=admin
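With that file in place, the metrics command can run non-interactively; a sketch, with an illustrative file path:

imqcmd metrics bkr -b 127.0.0.1:7677 -m rts -int 10 -u admin -passfile /path/to/passfile.txt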

The default password for the Open MQ administrator user is admin, and as you can see the password file is a simple properties file. Table 4 lists some other common broker-related tasks along with a description and a sample command.

Table 4 Broker administration and management

  • Pause and resume: Make the broker stop or resume accepting connections on all services.
  • Quiesce and unquiesce: Quiescing a broker causes it to stop accepting new connections while it keeps serving the connections already established; unquiescing returns it to normal mode.
    imqcmd quiesce bkr -b 127.0.0.1:7677 -u admin -passfile passfile.txt
  • Update a broker: Update broker attributes, for example:
    imqcmd update bkr -o "imq.log.level=ERROR"
  • Shutdown broker: imqcmd shutdown bkr -b 127.0.0.1:7677 -time 90 -u admin
    The sample command shuts the broker down after 90 seconds; during this time it only serves already established connections.
  • Query broker: Show broker information.
    imqcmd query bkr -b 127.0.0.1:7677 -u admin

3 Open MQ security

After getting a sense of the Open MQ basics, we can look at its security model to be able to set up a secure system based on Open MQ's messaging capability. We will discuss connection authentication, to ensure that a user is allowed to create a connection either as a producer or a consumer, and access control or authorization, to check whether the connected user can, for example, send a message to a specific queue. Securing the transport layer is a responsibility of the messaging server administrators; Open MQ supports SSL and TLS to prevent message tampering and to provide the required level of encryption, but we will not discuss it, as it is out of this article's scope.

Open MQ provides support for two types of user repository out of the box, along with facilities for using JAAS modules for authentication. The user information repositories usable out of the box are a flat file user information repository and a directory server with an LDAP communication interface.

3.1 Authentication

Authentication happens before a new connection is established. To configure Open MQ to perform authentication, we should provide an authentication source and edit the configuration files to enable authentication and point it at our prepared user information repository.

Flat file authentication

Flat file authentication relies on a flat file containing a username, encoded password, and role name; each line of this file holds these properties for one user. A sample line looks similar to:

admin:-2d5455c8583c24eec82c7a1e273ea02e:admin:1

The default location of each broker's password file is etc/passwd under the broker's home directory; later, when we use the imqusermgr utility, we edit either this file or the file we specified using the related properties mentioned in table 5.

To configure a broker instance to perform authentication before establishing any connection, we can either use imqcmd to update the broker's properties or directly edit its configuration file. To configure broker02, which we created in section 2.2, to authenticate before establishing a connection, we need to add the properties listed in table 5 to the config.properties file located inside the props directory of the broker's home directory; the broker directory structure and configuration files were listed in tables 2 and 3.

We can ignore the two latter properties if we want to use the default file in its default path. So, shut down the broker, add the listed properties to its configuration file, and start it again. Then we can simply use imqusermgr to perform CRUD operations on the user repository associated with our target broker.

Table 5 required changes to enable connection authentication for broker02

  • Kind of user information repository to use: imq.authentication.basic.user_repository=file
  • Password encoding: imq.authentication.type=digest
  • Path of the directory containing the password file: imq.passfile.dirpath=path/to/the/folder
  • Name of the password file: imq.passfile.name=password/file/name

Now that we have configured the broker to perform authentication, we need a mechanism to add, remove, and update the user information repository content. Table 6 shows how we can use the imqusermgr utility to perform CRUD operations on broker02.

Table 6 Flat file user information repository management using imqusermgr

  • Create user: Add a user to the user group (the other groups are admin and anonymous).
    imqusermgr add -u user01 -p password01 -i broker02 -g user
  • List users: imqusermgr list -i broker02
  • Update a user: Change the user's status to inactive.
    imqusermgr update -u user01 -a false -i broker02
  • Remove a user: Remove user01 from broker02.
    imqusermgr delete -u user01 -i broker02

Now create another user as follows:

imqusermgr add -u user02 -p password02 -i broker02 -g user

We will use these two users in defining authorization rules, and later in section 6 we will see how to connect to Open MQ from Java code when authentication is enabled.

All the other utilities we discussed need Open MQ to be running and are able to perform their actions on remote Open MQ instances; imqusermgr, in contrast, only works on local instances and does not require the instance to be running.

3.2 Authorization

After enabling authentication, which results in the broker checking user credentials before establishing a connection, we may also need to check the connected user's permissions before letting it perform a specific operation, like connecting to a service, putting messages into a queue (acting as a producer), subscribing to a topic (acting as a consumer), browsing a physical destination, or automatically creating physical destinations. All these permissions can be defined using a very simple syntax, shown below:

resourceType.resourceVariant.operation.access.principalType=principals

In this syntax we have 6 variable elements which are described in table 7.

Table 7 different elements in each Open MQ authorization rule

  • resourceType: Type of resource to which the rule applies: connection (connections), queue (queue destinations), or topic (topic destinations).
  • resourceVariant: Specific resource (connection service type or destination) to which the rule applies. An asterisk (*) may be used as a wild-card character to denote all resources of a given type; for example, a rule beginning with queue.* applies to all queue destinations.
  • operation: Operation to which the rule applies. This syntax element is not used for resourceType=connection.
  • access: Level of access authorized: allow (authorize the user to perform the operation) or deny (prohibit the user from performing it).
  • principalType: Type of principal to which the rule applies: user (individual user) or group (user group).
  • principals: List of principals (users or groups) to whom the rule applies, separated by commas. An asterisk (*) may be used as a wild-card character to denote all users or all groups; for example, a rule ending with user=* applies to all users.

Enabling authorization

To enable authorization we have to do a bit more than we did for authentication, because in addition to enabling authorization we need to define the roles' privileges. Open MQ provides a text based file to describe the access control rules; the default path for this file is inside the etc directory of the broker home, and the default name is accesscontrol.properties. If we do not provide a path for this file when configuring Open MQ for authorization, Open MQ picks up the default file.

In order to enable authorization we need to add the following properties to one of the configuration files, depending on how widely we want authorization applied. The second property is not necessary if we want to use the default file path.

  • Enable authorization: imq.accesscontrol.enabled=true
  • Determine the access control description file path: imq.accesscontrol.file.filename=path/to/access/control/file

Defining access control rules inside the access control file is an easy task; we just need to use the rule syntax and a limited set of variable values to define simple or complex access control rules. Listing 1 shows the list of rules we need to add to our access control file, which can be either the default file or another file at the specific path described in the configuration file. If you are using the default access control configuration file, make sure you comment out all the predefined rules by adding a # sign at the beginning of each line.

Listing 1 Open MQ access control rules definition


#connection authorization rules
connection.ADMIN.allow.group=admin                #1
connection.ADMIN.allow.user=user01                #1
connection.NORMAL.allow.group=*                   #2

#queue authorization rules
queue.smsQueue.produce.allow.user=user01,user02   #3
queue.smsQueue.consume.allow.user=user01,user02   #4
queue.*.browse.allow.user=*                       #5

#topic authorization rules
topic.smsTopic.*.allow.group=*                    #6
topic.smsTopic.produce.deny.user=user01           #7
topic.*.produce.deny.group=anonymous              #8
topic.*.consume.allow.user=*                      #9

#auto-created destination authorization rules
queue.create.allow.user=*                         #10
topic.create.allow.group=user                     #11
topic.create.deny.user=user01                     #12

By adding the content of listing 1 to the access control description file, we apply the following rules to Open MQ authorization. First, we let members of the admin group connect to the system as administrators, and we also grant user01, an otherwise normal user, the privilege to connect as an administrator (#1). We let any user from any group connect as a normal user to the broker (#2). At #3 we let the two users we created before act as producers of the smsQueue destination, and at #4 we let the same two users act as its consumers; because of the rules defined at #3 and #4, no other user can act as a consumer or producer of this queue. At #5 we let any user browse any queue in the system. At #6 we allow any operation on smsTopic by any group of users, and at #7 we define a rule denying user01 the ability to act as a producer of smsTopic; as you can see, generalized rules can be overridden by more specific rules, as we did here. At #8 we deny the anonymous group the ability to produce to any topic. At #9 we let any user from any group act as a consumer of any topic in the system. At #10 we allow any user to use the automatic destination creation facility. At #11 we let one specific group automatically create topics, while at #12 one single user is denied that privilege.

WARNING

If we use an empty access control definition file, no user can connect to the system, so no operation is possible for any user. Open MQ's default permission for any operation is denial; if we want the default permission to be allow, we must add allow rules to the access control definition file.

Now we can simply connect to Open MQ using user01 to perform administrative tasks. For example, executing the following commands creates our sample queue and topic in the broker that we created in section 2.2. When imqcmd asks for a username and password, use user01 and password01.

imqcmd -b 127.0.0.1:7677 create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ=true"

imqcmd -b 127.0.0.1:7677 create dst -n smsTopic -t t

As you can see, we can define very complex authorization rules using a simple text file and its rule definition syntax.

So far we have used a simple text file for user management and authentication purposes, but in addition to a text file, Open MQ can use a more robust and enterprise-friendly user repository such as OpenDS. When we use a directory service as the user information repository we no longer manage it using the Open MQ command line utilities; instead, the enterprise-wide administrators perform user management from one central location, and we as Open MQ administrators or developers just define our authorization rules.

In order to configure Open MQ to use a directory server (like OpenDS) as the user repository, we need to add the properties listed in table 8 to our broker or installation configuration file.

Table 8 Configuring Open MQ to use OpenDS as the user information repository

  • User information repository type: imq.authentication.basic.user_repository=ldap
  • Password exchange encryption; in this case base-64 encoding, matching the directory server's method: imq.authentication.type=basic
  • Directory server host and port*: imq.user_repository.ldap.server=127.0.0.1:7677
  • A DN for binding and searching: imq.user_repository.ldap.principal=cn=gf cn=admin
  • The bind DN's password: imq.user_repository.ldap.password=admin
  • User attribute name compared against the username: imq.user_repository.ldap.uidattr=uid
  • User's group attribute name**: imq.user_repository.ldap.gidattr=groupid

* Multiple directory server addresses can be specified for high availability reasons, for example ldap://127.0.0.1:7677 ldap://192.168.1.1:7677; each address should be separated from the others by a space.

** The user's group name attribute is vendor dependent and varies from one directory service to another.

There are several other attributes which can be used to tweak the LDAP authentication in terms of security and performance, but the essential properties are listed in table 8. The other available properties are covered in the Sun Java System Message Queue Administration Guide, available at http://docs.sun.com/app/docs/coll/1307.5.

The key to using a directory service is knowing the schema

The key to successfully using a directory service in any system is knowing the directory service itself and the schema we are dealing with. Different schemas use different attribute names for each property of a directory service object, so before diving into a directory service implementation, make sure you know both the directory server and its schema.

4 Clustering and high availability

We know that redundancy is one of the enablers of high availability. We have discussed data and service availability along with horizontal and vertical scalability of software solutions. Open MQ, like any other component of a software solution that drives an enterprise, should be highly available; sometimes both in the data and service layers, and sometimes just in the service layer.

The engine behind the JMS interface is the message broker, whose tasks include managing destinations, routing and delivering messages, enforcing security, and so on. Therefore, if we can keep the brokers and related services available to clients, we have service availability in place. If we also manage to preserve the state of queues, topics, durable subscribers, transactions, and so on in the event of a broker failure, we can provide our clients with high availability in both the service and data layers.

Open MQ provides two types of clustering, conventional clusters and high availability clusters, which can be chosen depending on the degree of high availability required for the messaging component in the whole software solution.

Conventional clusters provide service availability but not data availability. If one broker in a cluster fails, clients connected to that broker can reconnect to another broker in the cluster, but they may be unable to access some data while they are connected to the alternate broker. Figure 3 shows how conventional cluster members and potential clients work together.

Figure 3 Open MQ conventional clustering: one MQ broker acts as the master broker to keep track of changes and propagate them to other cluster members

As you can see in figure 3, in conventional clusters we have different brokers running with no shared data. Instead, configuration is shared through a master broker that propagates changes between cluster members to keep them up to date about destinations and durable subscribers. The master broker also keeps newly joined members, and members that come back online after being offline for a while, synchronized with the current configuration and durable subscriber information. Each client is connected to one broker and uses that broker until it fails (goes offline, a disaster happens, and so on), at which point the client connects to another broker in the same cluster.

High availability clusters provide both service availability and data availability. If one broker in a cluster fails, clients connected to it are automatically connected to the broker in the cluster that takes over the failed broker's store. Clients continue to operate with all persistent data available to the new broker at all times. Figure 4 shows how high availability cluster members and potential clients work together.

Figure 4 Open MQ high availability cluster: no single point of failure, as all information is stored in a highly available database and all cluster members interact with that database

As you can see in figure 4, we have one central highly available database which we rely upon for storing all kinds of dynamic data: transactions, destinations, messages, durable subscribers, and so on. This database itself should be highly available, usually a cluster of database instances running on different machines in different geographical locations. Each client connects to one broker and continues using that broker until it fails; in the event of broker failure, another broker takes over all of the failed broker's responsibilities, like open transactions and durable subscribers, and the client reconnects to the broker that took over.

All in all, we can summarize the differences between conventional clusters and high availability clusters as listed in table 9.

Table 9 comparison between high availability clusters and conventional clusters

  • Performance: conventional clusters are faster; high availability clusters are slower.
  • Service availability: conventional clusters provide it, but only partially when the master broker is unavailable; high availability clusters provide it fully.
  • Data availability: conventional clusters do not provide it, as a failed broker can cause data loss; high availability clusters do.
  • Transparent failover recovery: in conventional clusters it may not be possible if failover occurs during a commit; in high availability clusters it may not be possible if failover occurs during a commit and the client cannot reconnect to any other broker in the cluster.
  • Configuration: in both cases, done by setting the appropriate cluster configuration broker properties.
  • 3rd-party software requirement: none for conventional clusters; a highly available database for high availability clusters.

Usually when we talk about data availability we have to accept a slight performance overhead; you may remember that we had a similar situation with GlassFish HADB-backed clusters.

We said that high availability clusters use a database to store dynamic information; some databases tested with Open MQ include Oracle RAC, MySQL, Derby, PostgreSQL, and HADB. We usually use a highly available database to maximize data availability and reduce the risk of losing data.

We can configure a highly available messaging infrastructure by following a set of simple steps. First, create as many brokers as you need in your messaging infrastructure, as discussed in section 2.2. Second, configure the brokers to use a shared database for dynamic data storage and to act as cluster members. To perform this step we need to add the content of listing 2 to each broker's configuration file.

Listing 2 Required properties for making a broker highly available

#data store configuration
imq.persist.store=jdbc                                            #1
imq.persist.jdbc.dbVendor=mysql                                   #2
imq.persist.jdbc.mysql.driver=com.mysql.jdbc.Driver               #2
imq.persist.jdbc.mysql.property.url=jdbc:mysql://localhost:3306   #2
imq.persist.jdbc.mysql.property.databasename=jmsStore             #2
imq.persist.jdbc.mysql.user=privilegedUser                        #2
imq.persist.jdbc.mysql.needpassword=true                          #2
imq.persist.jdbc.mysql.password=dbpassword                        #2

#cluster related configuration
imq.cluster.ha=true                                               #3
imq.cluster.clusterid=e.cluster01                                 #3
imq.brokerid=e.broker02                                           #4


At #1 we specify that the data store is of type JDBC rather than Open MQ managed binary files; for high availability clusters it must be jdbc instead of file. At #2 we provide the configuration parameters Open MQ needs to connect to the database; we can replace mysql with oracle, derby, hadb, or postgresql if we want to. At #3 we configure this broker as a member of a high availability cluster named e.cluster01, and finally at #4 we set the unique name of this broker. Make sure each broker has a unique name in the entire cluster, and that different clusters using the same database are uniquely identified.

Now that we have a database server available for our messaging infrastructure's high availability and we have configured all of our brokers to use it, the last step is creating the database that Open MQ is going to use to store the dynamic data. To create the initial schema we should use the following command:

./imqdbmgr create all -b broker_name

This command creates the database and required tables and initializes the database with topology-related information. As you can see, we passed a broker name to the command; this means we want imqdbmgr to use that specific broker's configuration file to create the database. Remember that we can have multiple brokers in the same Open MQ installation, and each broker can operate completely independently of the others.

The imqdbmgr command has many uses, including upgrading the storage from old Open MQ versions, creating backups, restoring backups, and so on. To see a complete list of its commands and parameters, execute the command with the -h parameter.
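For example, backing up the highly available store could look like the following sketch (the backup directory path is an illustrative assumption):

./imqdbmgr backup -dir /backups/jmsStore -b broker02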

Now you should have a good understanding of Open MQ clustering and high availability. In the next section we will discuss how we can use Open MQ from a Java SE application, and later we will see how Open MQ relates to GlassFish and the GlassFish clustering architecture.


5 Open MQ management using JMX

Open MQ provides a very rich set of JMX MBeans exposing its administration and management interface. These MBeans can also be used to monitor Open MQ through JMX. Generally, any task which we can do using imqcmd can also be done using JMX code or a JMX console like the JDK's Java Monitoring and Management Console utility, jconsole.

Administrative tasks that we can do using JMX include managing brokers, services, destinations, consumers, producers, and so on. Meanwhile, management and monitoring are also possible using JMX; for example, we can monitor destinations or change the broker configuration dynamically using our own code or a JMX console. Some use cases of the JMX connection channel include:

  • We can include JMX code in our JMS client application to monitor application performance and, based on the results, reconfigure the JMS objects we use to improve performance.
  • We can write JMX clients that monitor the broker to identify usage patterns and performance problems, and use the JMX API to reconfigure the broker to optimize performance.
  • We can write a JMX client to automate regular maintenance tasks, rolling upgrades, and so on.
  • We can write a JMX application that constitutes our own version of imqcmd and use it instead of imqcmd.

To connect to an Open MQ broker using a JMX console we can use service:jmx:rmi:///jndi/rmi://host:port/server as the connection URL pattern. Listing 3 shows how we can use Java code to pause the jms service temporarily.
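Before writing any code, a quick way to explore the available MBeans is to point jconsole at the broker using that URL pattern; a sketch for a local default broker:

jconsole service:jmx:rmi:///jndi/rmi://localhost:7676/server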


Listing 3 Pausing the jms service using Java code, JMX, and the admin connection factory

AdminConnectionFactory acf = new AdminConnectionFactory();                    #1
acf.setProperty(AdminConnectionConfiguration.imqAddress, "192.168.1.1:7677"); #2
JMXConnector jmxc = acf.createConnection("admin", "admin");                   #3
MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();                 #4
ObjectName serviceConfigName = MQObjectName.createServiceConfig("jms");       #5
mbsc.invoke(serviceConfigName, ServiceOperations.PAUSE, null, null);          #6
jmxc.close();                                                                 #7

This sample code uses the admin connection factory, which introduces dependencies on Open MQ classes and requires adding imqjmx.jar to your classpath; this file is located in the Open MQ installation's lib directory. At #1 we create an administration connection factory; this class is an Open MQ helper class, which is what makes the sample depend on Open MQ libraries. At #2 we configure the factory to give us a connection to a non-default host (the default is 127.0.0.1:7676), at #3 we provide the credentials and create the connection, and at #4 we get a connection to the MBean server. At #5 we get the jms service MBean, at #6 we invoke one of its operations, and finally we close the connection at #7.

Listing 4 shows pure JMX sample code for reading the MaxNumProducers attribute of smsQueue.

Listing 4 Reading smsQueue attributes using pure JMX code

HashMap environment = new HashMap();                                           #1
String[] credentials = new String[] {"admin", "admin"};                        #1
environment.put(JMXConnector.CREDENTIALS, credentials);                        #1
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://192.168.1.1:7677/server"); #2
JMXConnector jmxc = JMXConnectorFactory.connect(url, environment);             #3
MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();                  #4
ObjectName destConfigName = MQObjectName.createDestinationConfig(DestinationType.QUEUE, "smsQueue"); #5
Integer attrValue = (Integer) mbsc.getAttribute(destConfigName, DestinationAttributes.MAX_NUM_PRODUCERS); #6
System.out.println("Maximum number of producers: " + attrValue);
jmxc.close();                                                                  #7

At #1 and #2 we prepare the parameters that JMX needs to create a connection, at #3 we create the JMX connection, at #4 we get a connection to Open MQ's MBean server, at #5 we get the MBean representing our smsQueue, at #6 we read the value of the attribute representing the maximum number of permitted producers, and finally we close the connection at #7.
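
Writable attributes can be changed through the same connection; a minimal sketch under the same setup, assuming MaxNumProducers is writable on this config MBean:

mbsc.setAttribute(destConfigName, new javax.management.Attribute(DestinationAttributes.MAX_NUM_PRODUCERS, new Integer(100)));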

To learn more about the JMX interfaces and programming model for Open MQ you can check the Sun Java System Message Queue 4.2 Developer's Guide for JMX Clients, located at http://docs.sun.com/app/docs/doc/820-5207.

6 Using Open MQ

The Open MQ messaging infrastructure can be accessed from a variety of channels and programming languages; we can perform messaging tasks using Java, C, and virtually any programming language capable of performing HTTP communication. When it comes to Java, we can access the broker functionalities using either JMS or JMX.

Open MQ directly provides APIs for accessing the broker from C and Java, but for other languages it takes a more evolutionary approach by providing an HTTP gateway for communicating from any language, such as Python, JavaScript (AJAX), C#, and so on. This approach is called the Universal Messaging System (UMS) and was introduced in Open MQ 4.3. In this section we only discuss Java and Open MQ; the other supported mechanisms and languages are out of scope. You can find more information about them at http://mq.dev.java.net

6.1 Using Open MQ from Java

We may use Open MQ either directly or through application server managed resources; here we just discuss how we can use Open MQ from Java SE, as using the JMS service from Java EE is straightforward.

Open MQ provides a broad range of functionality wrapped in a set of JMS compliant APIs: you can perform the usual JMS operations like producing and consuming messages, or you can gather metrics related to different resources using the JMS API. Listing 5 shows a sample application which communicates with a cluster of Open MQ brokers. Running the application results in sending a message and consuming the same message. In order to execute listings 5 and 6 you will need to add imq.jar, imqjmx.jar, and jms.jar to your classpath. These files are inside the lib directory of the Open MQ installation.

Listing 5 Producing and consuming messages from a Java SE application


public class QueueClient implements MessageListener {      #1
   public void startClientConsumer() throws JMSException {
      com.sun.messaging.ConnectionFactory connectionFactory = new com.sun.messaging.ConnectionFactory();
      connectionFactory.setProperty(com.sun.messaging.ConnectionConfiguration.imqAddressList, "mq://127.0.0.1:7676,mq://127.0.0.1:7677"); #2
      connectionFactory.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true");      #3
      connectionFactory.setProperty(ConnectionConfiguration.imqReconnectAttempts, "5");        #3
      connectionFactory.setProperty(ConnectionConfiguration.imqReconnectInterval, "500");      #3
      connectionFactory.setProperty(ConnectionConfiguration.imqAddressListBehavior, "RANDOM"); #3
      javax.jms.QueueConnection queueConnection = connectionFactory.createQueueConnection("user01", "password01"); #4
      javax.jms.Session session = queueConnection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE); #5
      javax.jms.Queue smsQueue = session.createQueue("smsQueue"); // look up the physical destination instead of leaving the reference null
      javax.jms.MessageProducer producer = session.createProducer(smsQueue);
      Message msg = session.createTextMessage("A sample sms message");
      producer.send(msg);                  #6
      javax.jms.MessageConsumer consumer = session.createConsumer(smsQueue);
      consumer.setMessageListener(this);   #7
      queueConnection.start();             #8
   }

   public void onMessage(Message sms) {
      try {
         String smsContent = ((javax.jms.TextMessage) sms).getText(); #9
         System.out.println(smsContent);
      } catch (JMSException ex) {
         ex.printStackTrace();
      }
   }
}

As you can see in listing 5, we used the JMS programming model to communicate with Open MQ for producing and consuming messages. At #1 the QueueClient class implements the MessageListener interface. At #2 we create a connection factory which connects to one of the cluster members; at #3 we define the behavior of the connection factory in relation to the cluster members. At #4 we provide a privileged user to connect to the brokers. At #5 we create a session which automatically acknowledges received messages. At #6 we send a text message to the queue. At #7 we set QueueClient as the message listener for our consumer. At #8 we start receiving messages from the server, and finally at #9 we consume the message in the onMessage method, the MessageListener interface method for processing incoming messages.

We said that Open MQ provides functionality which lets us retrieve monitoring information using the JMS APIs. This means that we can receive JMS messages containing Open MQ metrics. But before we can receive such messages, we should know where these messages are published and how to configure Open MQ to publish them.

First we need to add some configuration to enable metric gathering and to set the interval at which the metrics are gathered; to do this we add the following properties to one of the configuration files which we discussed in table 5.

 

imq.metrics.topic.enabled=true

imq.metrics.topic.interval=30

 

Now that we have enabled metric gathering, Open MQ will gather broker, destination list, JVM, queue, and topic metrics and send each type of metric to a pre-configured topic. The broker gathers and sends the metrics messages at the provided interval. Table 10 shows the topic names for each type of metric information.

 

Table 10 Topic names for each type of metric information

  • mq.metrics.broker: Broker metrics: information on connections, message flow, and volume of messages in the broker.
  • mq.metrics.jvm: Java Virtual Machine metrics: information on memory usage in the JVM.
  • mq.metrics.destination_list: A list of all destinations on the broker, and their types.
  • mq.metrics.destination.queue.queueName: Destination metrics for a queue of the specified name. Metrics data includes number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the queueName variable.
  • mq.metrics.destination.topic.topicName: Destination metrics for a topic of the specified name. Metrics data includes number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the topicName variable.

Now that we know which topics to subscribe to for our metrics, we should think about the security of these topics and which privileged users can subscribe to them. To configure authorization for these topics we can simply define some rules in the access control file which we discussed in 2.2. The syntax and template for defining authorization for these resources looks like the following:

 
topic.mq.metrics.broker.consume.deny.user=*
topic.mq.metrics.broker.consume.allow.user=user01,user02
topic.mq.metrics.destination.topic.t1.consume.deny.user=*
topic.mq.metrics.destination.topic.t1.consume.allow.user=user01

Now that we have the security in place, we can write sample code to retrieve the smsQueue metrics; listing 6 shows how.

Listing 6 Retrieving smsQueue metrics using broker metrics messages


public void onMessage(Message metricsMessage) {
    try {
        // name of the metrics topic this listener is subscribed to
        String metricTopicName = "mq.metrics.destination.queue.smsQueue";
        // metrics messages are MapMessages whose entries are the individual metric values
        MapMessage mapMsg = (MapMessage) metricsMessage;
        String metrics[] = new String[11];
        int i = 0;
        metrics[i++] = Long.toString(mapMsg.getLong("numMsgsIn"));   // messages received by the destination
        metrics[i++] = Long.toString(mapMsg.getLong("numMsgsOut"));  // messages delivered from the destination
        metrics[i++] = Long.toString(mapMsg.getLong("msgBytesIn"));
        metrics[i++] = Long.toString(mapMsg.getLong("msgBytesOut"));
        metrics[i++] = Long.toString(mapMsg.getLong("numMsgs"));     // messages currently stored
        metrics[i++] = Long.toString(mapMsg.getLong("peakNumMsgs"));
        metrics[i++] = Long.toString(mapMsg.getLong("avgNumMsgs"));
        metrics[i++] = Long.toString(mapMsg.getLong("totalMsgBytes") / 1024);     // byte sizes converted to KB
        metrics[i++] = Long.toString(mapMsg.getLong("peakTotalMsgBytes") / 1024);
        metrics[i++] = Long.toString(mapMsg.getLong("avgTotalMsgBytes") / 1024);
        metrics[i++] = Long.toString(mapMsg.getLong("peakMsgBytes") / 1024);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
 

Listing 6 only shows the onMessage method of the message listener class; the rest of the code is trivial and very similar to listing 5, with slight changes such as subscribing to a topic named mq.metrics.destination.queue.smsQueue instead of to smsQueue. A sketch of that subscription follows.
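
Here is a minimal sketch of the subscription part, following the same connection factory pattern as listing 5 and assuming a broker at 127.0.0.1:7676 and a user01 account that is authorized to consume from the metrics topic:

com.sun.messaging.ConnectionFactory connectionFactory = new com.sun.messaging.ConnectionFactory();
connectionFactory.setProperty(com.sun.messaging.ConnectionConfiguration.imqAddressList, "mq://127.0.0.1:7676"); // assumed broker address
javax.jms.TopicConnection topicConnection = connectionFactory.createTopicConnection("user01", "password01");
javax.jms.Session session = topicConnection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
javax.jms.Topic metricTopic = session.createTopic("mq.metrics.destination.queue.smsQueue"); // metrics topic from table 10
javax.jms.MessageConsumer consumer = session.createConsumer(metricTopic);
consumer.setMessageListener(this); // "this" is the listener whose onMessage appears in listing 6
topicConnection.start();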

A list of all possible metrics for each type of resource, which are properties of the received map message, can be found in chapter 20 of the Sun Java System Message Queue 4.2 Developer's Guide for Java Clients, located at http://docs.sun.com/app/docs/doc/820-5205/aeqej?a=view

7 GlassFish and Open MQ

 

GlassFish, like other Java EE application servers, needs messaging functionality behind its JMS implementation, and it uses Open MQ as the message broker for that purpose. GlassFish supports Java Connector Architecture 1.5 and integrates with Open MQ through this specification. If you take a closer look at table 1.4 you can see that, by default, different GlassFish installation profiles use different ways of communicating with the JMS broker, ranging from an embedded instance to a local one to an Open MQ server running on a remote host. Embedded mode is the only mode in which Open MQ runs in the same process as GlassFish itself. In local mode, starting and stopping the Open MQ server is done by the application server: GlassFish starts the associated Open MQ server when GlassFish starts and stops it when GlassFish stops. Finally, remote mode means that starting and stopping the Open MQ server is done independently, and an administrator or an automated script should take care of those tasks.
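
The integration mode can also be switched from the command line; a rough sketch with asadmin, where the dotted name is an assumption based on the usual jms-service attribute layout, so verify it against your GlassFish version:

asadmin set server.jms-service.type=REMOTE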

We already know how to set up a GlassFish cluster, both with in-memory replication and as a highly available cluster with an HADB backend for persisting transient information like sessions, JMS messages, and so on. In this section we discuss the relation of the GlassFish in-memory and HADB-backed clusters with Open MQ in more detail, to give you a better understanding of the overall architecture of a highly available solution.

First let's see what happens when we set up an in-memory cluster: after we configure the cluster, we have several instances in the cluster, each of which uses a local Open MQ broker instance for messaging tasks. When we start the cluster, all GlassFish instances start, and upon each instance's startup the corresponding Open MQ broker starts.

Remember that we discussed conventional clusters, which need no backend database for storing transient data and instead use one broker as the master broker to propagate changes and keep the other broker instances up to date. In GlassFish in-memory replicated clusters, the Open MQ broker associated with the first GlassFish instance we create acts as the master broker. A sample deployment diagram for an in-memory replicated cluster and its relation with Open MQ is shown in figure 5.

Figure 5 Open MQ and GlassFish in an in-memory replicated cluster of GlassFish instances. Open MQ works as a conventional cluster, with one broker acting as the master broker.

 

When it comes to the enterprise profile, we usually use an independent highly available cluster of Open MQ brokers with a database backend, together with a highly available cluster of GlassFish instances backed by a highly available HADB cluster. We discussed that each GlassFish instance can have multiple Open MQ brokers to choose between when it creates a connection to the broker; to configure GlassFish to use multiple brokers of a cluster, we can add all available brokers to the GlassFish instance and then let GlassFish manage how the load is distributed between the broker instances. To add multiple brokers to a GlassFish instance we can follow these steps (equivalent asadmin commands are sketched after the list):

  • Open the GlassFish administration console.
  • Navigate to the instance configuration node or the default configuration node of the application server, depending on whether you want to change the configuration of the cluster's instances or of a single instance.
  • Navigate to Configuration > Java Message Service > JMS Hosts in the navigation tree:
      • Remove the default_JMS_host.
      • Add as many hosts as you want this instance or cluster to use; each host represents a broker in your cluster.
  • Navigate to Configuration > Java Message Service and change the field values as shown in table 11.
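
The same steps can be scripted; a minimal sketch with asadmin, assuming two brokers running on host1 and host2 with default ports and admin credentials (verify the options against your GlassFish version):

asadmin delete-jms-host default_JMS_host
asadmin create-jms-host --mqhost host1 --mqport 7676 --mquser admin --mqpassword admin broker1
asadmin create-jms-host --mqhost host2 --mqport 7676 --mquser admin --mqpassword admin broker2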

 

Table 11 GlassFish configuration for using a standalone JMS cluster

  • Type: We discussed the different types of integration in this section; here we use a remote, standalone cluster of brokers (REMOTE).
  • Startup Timeout: The time GlassFish waits during startup before it gives up on the JMS service startup; if it gives up, no JMS service will be available.
  • Start Arguments: Any arguments that we could pass to the imqbrokerd command can be passed using this field.
  • Reconnect, Reconnect Interval, Reconnect Attempts: We should enable reconnection and set a meaningful number of retries for connecting to the brokers and a meaningful wait time between reconnection attempts.
  • Default JMS Host: Determines which broker is used to execute and communicate administrative commands on the cluster, such as creating a destination.
  • Address List Behavior: Determines how GlassFish selects a broker when it creates a connection; the selection can be random or based on the first available broker in the host list, where the first broker has the highest priority.
  • Address List Iterations: How many times GlassFish should iterate over the available host list to establish a connection if no broker is available in the first iteration.
  • MQ Scheme: Please refer to table 12.
  • MQ Service: Please refer to table 12.

 

As you can see, we can configure GlassFish to use a cluster of Open MQ brokers very easily. You may wonder how this configuration affects an MDB or a JMS connection that you acquire from the application server: when you create a connection using a connection factory, GlassFish checks the JMS service configuration and returns a connection whose host address is the address of a healthy broker. The selection of the broker is based on the Address List Behavior value; GlassFish either selects a healthy broker at random or picks the first healthy host starting from the top of the host list.
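
To make this concrete, here is a minimal sketch of acquiring such a connection inside a Java EE component; the resource name jms/smsQueueFactory is hypothetical and stands for whatever connection factory resource you create against this JMS service configuration:

@Resource(mappedName = "jms/smsQueueFactory") // hypothetical resource name bound to the broker cluster
private javax.jms.ConnectionFactory connectionFactory;

public void sendSms(String text) throws javax.jms.JMSException {
    // GlassFish hands back a connection to a healthy broker, selected per the Address List Behavior value
    javax.jms.Connection connection = connectionFactory.createConnection();
    javax.jms.Session session = connection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
    javax.jms.Queue smsQueue = session.createQueue("smsQueue");
    session.createProducer(smsQueue).send(session.createTextMessage(text));
    connection.close();
}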

Open MQ is one of the most feature complete message broker implementations available in the open source and commercial market. Among many other features, it provides the possibility of using multiple connection protocols, both for the sake of simplicity and to support the vast variety of clients that may need to connect to it. Two concepts are introduced to cover this connection protocol flexibility:

  • Open MQ service: different connection handling services, implemented to support a variety of connection protocols. All available connection services are listed in table 12.
  • Open MQ scheme: determines the connection scheme between an Open MQ client, a connection factory, and the brokers. The supported schemes are listed in table 12.

The full syntax for a message service address is scheme://address-syntax, and before using any of the schemes we should ensure that its corresponding service is enabled. By setting a broker's imq.service.activelist property, you can configure it to run any or all of these connection services. The value of this property is a list of connection services to be activated when the broker starts up; if the property is not specified explicitly, the jms and admin services are activated by default.
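
For example, adding the following line to the broker configuration (the same configuration files we used for the metrics properties) would activate the ssljms service alongside the defaults; the exact service list is of course deployment specific:

imq.service.activelist=jms,ssljms,admin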

Table 12 Available services and schemes in Open MQ 4.3

  • mq (services: jms and ssljms): Uses the broker's port mapper to assign a port dynamically for either the jms or ssljms connection service.
  • mqtcp (services: jms and admin): Bypasses the port mapper and connects directly to a specified port, using the jms connection service.
  • mqssl (services: ssljms and ssladmin): Makes an SSL connection to a specified port, using the ssljms connection service.
  • http (service: httpjms): Makes an HTTP connection to the Open MQ tunnel servlet at a specified URL, using the httpjms connection service.
  • https (service: httpsjms): Makes an HTTPS connection to the Open MQ tunnel servlet at a specified URL, using the httpsjms connection service.

 

We usually enable only the services that we need and leave the other services disabled. Each enabled service needs its own firewall configuration and authorization measures.
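
As an illustration of the scheme syntax, the scheme appears in the addresses we pass to the connection factory, as in listing 5; the following values are sketches of the scheme://address-syntax pattern with placeholder hosts and ports:

// mq scheme: the port mapper assigns the jms service port dynamically
connectionFactory.setProperty(ConnectionConfiguration.imqAddressList, "mq://broker1:7676/jms");
// mqtcp scheme: bypasses the port mapper and connects straight to the jms service port
connectionFactory.setProperty(ConnectionConfiguration.imqAddressList, "mqtcp://broker1:7677/jms");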

 

Performance considerations

Performance is always a hurdle in any large scale system because of the relatively large number of components involved in the system architecture. Open MQ is a high performance messaging system; however, you should know that performance differs greatly depending on many factors, including:

  • The network topology
  • The transport protocols used
  • The quality of service
  • Topics versus queues
  • Durable versus reliable messaging (some buffering takes place, but if a consumer is down long enough the messages are discarded)
  • Message timeout
  • Hardware, network, JVM, and operating system
  • Number of producers and number of consumers
  • Distribution of messages across destinations, along with message size

 

Summary

Messaging is one of the most basic and fundamental requirements in every large scale application. So far in this article you have learned what Open MQ is, how it works, what its main components are, and how it can be administered either using Java code and JMX or using its command line utilities. You learned how a Java SE application can connect to Open MQ using the JMS interfaces and act as a consumer or producer. We discussed how to configure Open MQ to perform access control management. And you learned how Open MQ broker instances can be created and configured to work in a mission critical system.