GlassFish Modularity System: How to Extend the GlassFish CLI and Web Administration Console (Part 2: Developing Sample Modules)

Extending the GlassFish CLI and Administration Console: Developing the Sample Modules

Administrators are always looking for more effective, easier to use, and less time consuming tools to sit between them and whatever they are supposed to administer and manage. GlassFish provides effective, easy to access, and easy to navigate administration channels which cover all the day to day tasks that administrators need to perform. But when it comes to administration, administrators usually write their own shell scripts to automate some tasks, use CRON or other schedulers to run tasks automatically, and so on, to achieve their own customized administration flow.

By using the GlassFish console it is still possible to use shell scripts or CRON to further extend the administration and facilitate daily tasks, but sometimes it is far better to have a more sophisticated and mature way to extend and customize the administration interfaces. GlassFish provides all the necessary interfaces and required services to let administrators or developers build new modules for the GlassFish administration interfaces, either to add a new feature or to enhance an already available capability.

The extensibility of the GlassFish administration channels is not only there to let administrators customize the administration interfaces; it also lets container developers build administration interfaces for their containers that are fully integrated with the existing interfaces and fully compatible with the present administration console look and feel. Using this extensibility, developers can create new administration console commands very easily by utilizing the already available services and dependency injection, which provide them with all the environmental objects they need.

This article covers how we can develop new commands for the CLI console and how we can develop new modules that add new pages and navigation nodes to the web administration console.

1 CLI console extensibility

The GlassFish CLI can be considered the easiest to access administration console for experienced administrators, as they are better with the keyboard and prefer automated and scripted tasks to clicking through a web page to see the result. Not only is the GlassFish CLI very rich in commands and features, but its design is also based on the same modularity concepts that the whole application server is built on.

CLI modularity is very helpful for those who want to extend GlassFish by providing a new set of commands for their container, deploying new monitoring probes and providing the corresponding CLI commands for accessing the monitoring information, developing new commands which administrators may need but which are not already in place, and so on.

1.1 Prepare your development environment

This is a hands-on article which involves developing some new modules for the GlassFish application server and deploying them to it, so we should be able to compile modules which usually use HK2 services internally and are bundled as OSGI bundles for deployment.

Although we could use an IDE like NetBeans to build the sample code, we are going to use a more IDE-agnostic approach, Apache Maven, to ensure that we can easily manage the dependencies and import the sample code into the IDE of your choice. Get Maven from http://maven.apache.org/download.html and install it according to the installation instructions provided on the same page. Then you are ready to dig further into developing modules.

1.2 CLI modularity and extensibility

We have already discussed how GlassFish utilizes OSGI and HK2 to provide a modular architecture, and one place where this modular architecture shows itself is CLI modularity. All extensibility points in GlassFish are based on contracts and contract providers, which in simple terms are interfaces and interface implementations, along with some annotations to mark and further configure the contract implementations.

We are talking about adding new commands to the CLI dynamically by placing an OSGI bundle in the modules directory of GlassFish, whether the bundle contains just CLI commands or CLI commands together with web based administration console pages and a container implementation. There is no predefined list of known commands; commands are loaded on demand, as soon as you try to use one of them. For example, when you issue ./asadmin list-commands, the CLI framework checks for all providers which implement the admin command contract and shows the list by extracting the information from each provider (implementation). CLI commands can use the dependency injection provided by HK2, and each command may also extract some information and provide it to other components which might be interested in using it.

Like all providers in the GlassFish modularity system, each command implementation should be annotated with the @Service annotation to ensure that HK2 will treat the implementation as a service for lifecycle management and service locating. CLI commands, like any other HK2 service, should have a scope which determines how the lifecycle of the command is managed by HK2. Usually we should ensure that each command lives only in the context of its execution and not beyond it, so we annotate each command with an appropriate scope such as PerLookup.class, which results in a new command instance for each lookup.
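To make this concrete, the following is a minimal sketch of such a command service; the command name and message are made up for illustration, and the package locations of the imports are the ones used by the GlassFish v3 APIs discussed in this article.

import org.glassfish.api.ActionReport;
import org.glassfish.api.admin.AdminCommand;
import org.glassfish.api.admin.AdminCommandContext;
import org.jvnet.hk2.annotations.Scoped;
import org.jvnet.hk2.annotations.Service;
import org.jvnet.hk2.component.PerLookup;

@Service(name = "say-hello")       // hypothetical command name, invoked as ./asadmin say-hello
@Scoped(PerLookup.class)           // a new command instance is created for each lookup/execution
public class SayHelloCommand implements AdminCommand {

    public void execute(AdminCommandContext context) {
        ActionReport report = context.getActionReport();
        report.setMessage("Hello from a custom CLI command");        // text shown to the caller
        report.setActionExitCode(ActionReport.ExitCode.SUCCESS);     // report successful execution
    }
}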

All that we need to know for developing CLI commands is summarized in a handful of interfaces and helper classes. The other required knowledge is around AMX and HK2.

Each CLI command must implement a contract, an interface named AdminCommand which is marked with the org.jvnet.hk2.annotations.Contract annotation and is shown in listing 1.

Listing 1 The AdminCommand interface which any CLI command has to implement


@Contract                                                 #1
public interface AdminCommand {
    public void execute(AdminCommandContext context);     #2
}

The AdminCommand interface has one method which is called by the asadmin utility when we call the associated command. At #1 we mark the interface as a contract which HK2 service providers can implement. At #2 we have the execute method, which accepts an argument of type org.glassfish.api.admin.AdminCommandContext, shown in listing 2. This class is a set of utilities which gives the developer access to some contextual variables like reporting, logging, and command parameters (the last of which is to be deprecated).

Listing 2 The AdminCommandContext class, a bridge between the inside of the command and the outside execution context


public class AdminCommandContext implements ExecutionContext {

    public ActionReport report;
    public final Properties params;
    public final Logger logger;
    private List<File> uploadedFiles;

    public AdminCommandContext(Logger logger, ActionReport report, Properties params) {
        this(logger, report, params, null);
    }

    public AdminCommandContext(Logger logger, ActionReport report, Properties params,
                               List<File> uploadedFiles) {
        this.logger = logger;
        this.report = report;
        this.params = params;
        this.uploadedFiles = (uploadedFiles == null) ? emptyFileList() : uploadedFiles;
    }

    private static List<File> emptyFileList() {
        return Collections.emptyList();
    }

    public ActionReport getActionReport() {
        return report;
    }

    public void setActionReport(ActionReport newReport) {
        report = newReport;
    }

    public Properties getCommandParameters() {
        return params;
    }

    public Logger getLogger() {
        return logger;
    }

    public List<File> getUploadedFiles() {
        return uploadedFiles;
    }
}

The only unfamiliar class is the org.glassfish.api.ActionReport abstract class, which has several subclasses that let us, as CLI command developers, report the result of our command execution back to the original caller. The subclasses make it possible to report the result of the execution in different formats, such as HTML, JSON, or plain text. The report may include the exit code, possible exceptions, failure or success, and so on, depending on the reporter type. Important subclasses of ActionReport are listed in table 1.

Table 1 all available reporters which can be used to report the result of a command execution

Reporter class             Description
HTMLActionReporter         Generates an HTML document containing the report information
JsonActionReporter         A JSON document containing the report information
PlainTextActionReporter    A plain text report which provides the necessary report information as plain text
PropsFileActionReporter    A properties file containing all report elements
SilentActionReport         No report at all
XMLActionReporter          A machine readable XML document containing all report elements

All of these reporters are placed in the com.sun.enterprise.v3.common package, except SilentActionReport, which is placed inside org.glassfish.embed.impl.

Now we need a way to pass parameters to our command, determine which parameters are mandatory and which are optional, which ones need an argument, and what the type of the argument should be. There is an annotation which lets us annotate properties of the AdminCommand implementation class to mark them as parameters. That annotation is @Param and it is placed in the org.glassfish.api package.

Listing 3 The Param annotation which we can use to have a parameterized CLI command


@Retention(RUNTIME)
@Target({METHOD, FIELD})
public @interface Param {
    public String name() default "";
    public String acceptableValues() default "";
    public boolean optional() default false;
    public String shortName() default "";
    public boolean primary() default false;
}

As you can see, the annotation can be placed either on a setter method or on a field. The annotation has several elements, including the parameter name, which by default is the same as the property name; a list of comma separated acceptable values; the short name for the parameter; whether the parameter is optional; and finally whether the parameter is primary, meaning its value can be passed without naming the parameter. In each command only one parameter can be primary.

When we mark a property or a setter method with the @Param annotation, the CLI framework will try to initialize the parameter with the given value before calling the execute method. During this initialization, the CLI framework checks the different attributes of a parameter, such as whether it is required, its short name, its acceptable values, and so on.

The last thing which we should think about for a CLI command is internationalization and the possibility of showing localized text messages about the command and its parameters to the client. This requirement is covered by another annotation named @I18n; the associated annotation type is placed in org.glassfish.api. Listing 4 shows the I18n declaration.

Listing 4 The I18n annotation which we should use to provide internationalized messages for our CLI command


@Retention(RUNTIME)
@Target({TYPE, METHOD, FIELD})
public @interface I18n {
    public String value();
}

The annotation can be used both on properties and on methods. At runtime the CLI framework will extract and inject the localized value into the variable based on the current locale of the JVM.

The localization file, which is a standard resource bundle, should follow the same key=value format we use for any properties file. The standard name for the file is LocalStrings_Language_Country.properties; an example would be LocalStrings_en_US.properties for United States English, with LocalStrings.properties as the default bundle for any locale which has no corresponding locale file in the bundle.

1.3 A CLI command which returns the operating system details

In this section we will first develop a very simple CLI command to get more familiar with the abstractions we discussed before, and then we will develop a more sophisticated command which will teach us more details about the CLI framework and its capabilities.

The first command that we are going to develop just shows which operating system our application server is running on, along with some related information such as the OS architecture, JVM version, JVM vendor, and so on. Our command name is get-platform-details.

First let’s look at the Maven build file and analyze its content. I am not going to scrutinize the whole Maven build file, but we will look at the elements related to our CLI command development.

Listing 5 Maven build file for creating a get-platform-details CLI command


<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>                       #1
<parent>
<groupId>org.glassfish.pluggability</groupId>
<artifactId>pluggability</artifactId>
<version>3.0-SNAPSHOT</version>
</parent>
<artifactId>glassfish.book.chapter.platformDetailsCommand</artifactId>
<packaging>hk2-jar</packaging>                             #2
<name>platform-details-command</name>
<description>GlassFish in Action, Chapter 13, Platform Details Command </description>
<dependencies>
<dependency>
<groupId>org.glassfish.admin</groupId>
<artifactId>cli-framework</artifactId>
<version>${project.parent.version}</version>
</dependency>
<dependency>
<groupId>org.glassfish.common</groupId>
<artifactId>common-util</artifactId>
<version>${project.parent.version}</version>
</dependency>
</dependencies>
<build>
<resources>
<resource>
<directory>src/main/resources</directory>         #3
<includes>
<include>**/*.1</include>
</includes>
</resource>
</resources>
</build>
</project>

You will find more details in the POM file included in the sample source code accompanying the book, but what is shown in the listing is enough to build a CLI command module. At #1 we define the Maven POM model version. At #2 we ensure that the bundle we are creating is built using the HK2 module specification, which means the inclusion of all OSGI related descriptions, such as imported and exported packages, in the MANIFEST.MF file that Maven generates inside the META-INF folder of the final JAR file. At #3 we ensure that Maven will include our help file for the command. The help file name should match the command name and it can include any textual content; we usually follow a UNIX-like man page structure to create help files for commands. The help file will be shown when we call ./asadmin get-platform-details --help. Figure 1 shows the directory layout of our sample command project. As you can see we have two similar directory layouts for the manual file and the Java source code, and the pom.xml file is in the root of the cli-platform-details-command directory.

Figure 1 Directory layout of the get-platform-details command project.
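As an illustration, the help file mentioned above (a plain text file hypothetically named get-platform-details.1, stored under src/main/resources) could be as simple as the following man-page-like text:

NAME
    get-platform-details - prints operating system or JVM runtime details

SYNOPSIS
    asadmin get-platform-details [--help] os|runtime

DESCRIPTION
    Prints details about the platform the application server process is
    running on. Pass "os" for operating system details or "runtime" for
    JVM runtime details.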

Now let’s see what the source code for the command is and what the outcome of the build process would be. Listing 6 shows the source code for the command itself; when we execute ./asadmin get-platform-details, the GlassFish CLI will call its execute method. The result of executing the command is similar to figure 2; as you can see, our command has executed successfully.

Figure 2 Sample output for calling get-platform-details with runtime as the parameter

Now you can develop your own command and test it to see a similar result in the terminal window.

Listing 6 Source code for PlatformDetails.java which is a CLI command


@Service(name = "get-platform-details")                                  #1
@Scoped(PerLookup.class)                                                 #2
public class PlatformDetails implements AdminCommand {                   #3

    ActionReport report;
    OperatingSystemMXBean osmb = ManagementFactory.getOperatingSystemMXBean();   #4
    RuntimeMXBean rtmb = ManagementFactory.getRuntimeMXBean();                   #5

    // this value can be either runtime or os for our demo
    @Param(name = "detailsset", shortName = "DS", primary = true, optional = false)   #6
    String detailsSet;

    public void execute(AdminCommandContext context) {
        try {

            report = context.getActionReport();
            StringBuffer reportBuf;
            if (detailsSet.equalsIgnoreCase("os")) {
                reportBuf = new StringBuffer("OS details: \n");
                reportBuf.append("OS Name: " + osmb.getName() + "\n");
                reportBuf.append("OS Version: " + osmb.getVersion() + "\n");
                reportBuf.append("OS Architecture: " + osmb.getArch() + "\n");
                reportBuf.append("Available Processors: " + osmb.getAvailableProcessors() + "\n");
                reportBuf.append("Average Load: " + osmb.getSystemLoadAverage() + "\n");
                report.setMessage(reportBuf.toString());                 #7
                report.setActionExitCode(ExitCode.SUCCESS);              #8

            } else if (detailsSet.equalsIgnoreCase("runtime")) {

                reportBuf = new StringBuffer("Runtime details: \n");
                reportBuf.append("Virtual Machine Name: " + rtmb.getVmName() + "\n");
                reportBuf.append("VM Vendor: " + rtmb.getVmVendor() + "\n");
                reportBuf.append("VM Version: " + rtmb.getVmVersion() + "\n");
                reportBuf.append("VM Start Time: " + rtmb.getStartTime() + "\n");
                reportBuf.append("UpTime: " + rtmb.getUptime() + "\n");
                report.setMessage(reportBuf.toString());
                report.setActionExitCode(ExitCode.SUCCESS);

            } else {
                report.setActionExitCode(ExitCode.FAILURE);              #9
                report.setMessage("The given value for " +
                        "detailsset parameter is not acceptable, the value " +
                        "can be either 'runtime' or 'os'");
            }

        } catch (Exception ex) {

            report.setActionExitCode(ExitCode.FAILURE);                  #10
            report.setMessage("Command failed with the following error:\n" + ex.getMessage());
            context.getLogger().log(Level.SEVERE, "get-platform-details " + detailsSet + " failed", ex);

        }

    }
}

At #1 we mark the implementation as a provider which implements a contract; the provided service name is get-platform-details, which is the same as our CLI command name. At #2 we set a correct scope for the command, because we do not want a single instance to live beyond one execution. At #3 we implement the contract interface. At #4 and #5 we get the MXBeans which provide the information we need; these can belong either to our local server or to a remote running instance if we use --host and --port to execute the command against a remote instance. At #6 we define a parameter which is not optional; the parameter name is detailsset, which we would normally pass as --detailsset, and its short name is DS, which we would pass as --DS. The parameter is primary, so we do not even need to name it; we can just pass the value and the CLI framework will assign it to this parameter. Pay attention to the parameter name and the containing variable: if we do not use the name element of the @Param annotation, the parameter name will be the same as the variable name. At #7 we set the output which we want to show to the user as the report object’s message. At #8 we set the exit condition to success, as we managed to execute the command successfully. At #9 we set the exit code to failure because the value passed for the parameter is not recognizable, and we show a suitable message to help the user diagnose the problem. At #10 we face an exception, so we log the exception and set the exit code to failure.

GlassFish CLI commands fall into two broad categories. One is the local commands, like create-domain or start-domain, which execute locally; you cannot pass remote connection parameters such as --host and --port and expect the command to execute on a remote GlassFish installation. These commands extend the com.sun.enterprise.cli.framework.Command abstract class or one of its subclasses, like com.sun.enterprise.admin.cli.BaseLifeCycleCommand, and do not need an already running GlassFish instance. The other category is the remote commands, like the ones we develop in this article, which implement the AdminCommand contract and execute against a running instance.

The next command we will study lists some details about the JMS service of the target domain; this time we will discuss resource injection and some details about the localization of command messages.

Listing 7 shows the source code for a command which shows some information about the domain’s JMS service, including its starting arguments, JMS service type, address list behavior, and so on. Although the source code only shows how we can get some details, setting the values of these attributes works the same way as getting them.

Figure 3 shows the result of executing this command on a domain created with default parameters.

Listing 7 Source code for remote command named get-jms-details


@Service(name = "get-jms-details")
@Scoped(PerLookup.class)
@I18n("get-jms-details")                                                 #1
public class JMSDetails implements AdminCommand {

    @Inject                                                              #2
    JmsService jmsService;
    @Inject
    JmsHost jmsHost;
    @Inject
    JmsAvailability jmsAvailability;
    ActionReport report;

    @Param(name = "detailsset", acceptableValues = "all,service,host,availability",
            shortName = "DS", primary = true, optional = false)          #3
    @I18n("get-jms-details.details_set")                                 #4
    String detailsSet;

    final private static LocalStringManagerImpl localStrings =
            new LocalStringManagerImpl(JMSDetails.class);                #5

    public void execute(AdminCommandContext context) {
        try {
            report = context.getActionReport();
            StringBuffer reportBuf = new StringBuffer("");
            if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("service")) {
                reportBuf.append("Default Host: " + jmsService.getDefaultJmsHost() + "\n");
                reportBuf.append("MQ Service: " + jmsService.getMqService() + "\n");
                reportBuf.append("Reconnection Attempts: " + jmsService.getReconnectAttempts() + "\n");
                reportBuf.append("Startup Arguments: " + jmsService.getStartArgs() + "\n");
            }
            if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("host")) {
                reportBuf.append("Host Address: " + jmsHost.getHost() + "\n");
                reportBuf.append("Port Number: " + jmsHost.getPort() + "\n");
            }
            if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("availability")) {
                reportBuf.append("Is Availability Enabled: " + jmsAvailability.getAvailabilityEnabled() + "\n");
                reportBuf.append("Availability Storage Pool Name: " + jmsAvailability.getMqStorePoolName() + "\n");
            }
            report.setMessage(reportBuf.toString());
            report.setActionExitCode(ExitCode.SUCCESS);

        } catch (Exception ex) {
            report.setActionExitCode(ExitCode.FAILURE);
            report.setMessage(localStrings.getLocalString("get-jms-details.failed",
                    "Command failed to execute with the {0} as given parameter", detailsSet)
                    + ". The exception message is: " + ex.getMessage()); #6
            context.getLogger().log(Level.SEVERE, "get-jms-details " + detailsSet + " failed", ex);

        }

    }
}


You are right, the code looks similar to the first sample, but it is more complex in its use of some unfamiliar classes, resource injection, and localization. We use resource injection to access the GlassFish configuration; all configuration interfaces which we can obtain using injection are inside com.sun.enterprise.config.serverbeans. Now let’s analyze the code where it is important or somewhat unfamiliar. At #1 we are telling the framework that the key of this command’s short help message in the localization file is get-jms-details. At #2 we inject some configuration beans into our variables. At #3 we ensure that the CLI framework will check the given value for the parameter to see whether it is one of the acceptable values. At #4 we determine the localization key for the parameter’s help message. At #5 we initialize the string localization manager, which fetches the appropriate string values from the bundle files. At #6 we show a localized error message using the string localization manager.

Listing 8 shows the LocalStrings.properties file content; we will shortly discuss where it should be placed so that the CLI framework can find it and load the necessary messages from it.

Listing 8 A snippet of the localization file content for the get-jms-details command

get-jms-details=Getting the Detailed information about JMS service and host in the target domain.                #1

get-jms-details.details_set=The set of details which is required, the value can be all, service, host, or availability.           #2

get-jms-details.failed=Command failed to execute with the {0} as given parameter.          #3

At #1 we include a description of the command; at #2 we include the description of the detailsset parameter using the assigned localization key. At #3 we include the localized version of the error message which we want to show when the command fails; as you can see, we used a placeholder to include the parameter value. Figure 4 shows the directory layout of the get-jms-details project. As you can see, we placed the LocalStrings.properties file next to the command source code.

Figure 4 Directory layout of the get-jms-details command project.

Developing more complex CLI commands follows the same rules as developing these simple ones. All you need to develop a new command is to start a new project using the given Maven POM files and discover the new things you can do.

2 Administration console pluggability

You know that the administration console is a very easy to use channel for administering a single instance or multiple clusters with tens of instances and a variety of managed resources. All of these functionalities are based on the navigation tree, tabs, and content pages which we use to perform our required tasks. You may wonder how we can extend these functionalities, without changing the main web application (known as admingui), so that new functionality is presented with the same look and feel as the existing one. The answer again lies in the application server’s overall design, HK2 and OSGI, along with some JSFTemplating and use of the Woodstock JSF component suite. We are going to write an administration console plug-in which lets us upload and install new OSGI bundles from remote locations using a browser.

2.1 Administration console architecture

Before we start discussing how we can extend the administration console in a non-intrusive way, we should learn more about the console architecture and how it really works. The administration console is a set of OSGI bundles, and each bundle includes some administration console specific artifacts in addition to the default OSGI manifest file and Maven related artifacts.

One of the most basic artifacts required by an administration console plug-in is a descriptor which defines where our administration console navigation items should appear: for example, whether we are going to add a new node to the navigation tree, or a new tab to a currently available tab set like the Application Server page tab set. The file which describes these navigation items and their places is named console-config.xml and should be placed inside the META-INF/admingui folder of the final OSGI bundle.

The second thing required to make our module recognizable by the HK2 kernel is an implementation of the HK2 administration console plug-in contract. So the second basic item in an administration console plug-in is an HK2 service provider, which is nothing more than a typical Java class, annotated with the @Service annotation, that implements the org.glassfish.api.admingui.ConsoleProvider interface. The console provider interface has only one method, named getConfiguration; we need to implement this method only if we want to use a non-standard place for the console-config.xml file.
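The contract itself is tiny; as used in this article it looks essentially like the following sketch (only the method shown later in listing 12 is required):

package org.glassfish.api.admingui;

import java.net.URL;
import org.jvnet.hk2.annotations.Contract;

@Contract
public interface ConsoleProvider {
    // Returns the location of console-config.xml, or null to use the default
    // META-INF/admingui/console-config.xml packaged inside the bundle.
    public URL getConfiguration();
}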

In addition to the basic requirements, we have some other artifacts which should be present in order to see some effect in the administration console from our plug-in. These requirements include JSF fragment files which add the node, tab, content, or common task button to the administration console at the places described in the console-config.xml file. These JSF fragments use JSFTemplating project tags to define the nodes, tabs, and buttons which should appear at the integration points.

So far we have only included a node in the navigation tree, a tab in one of the tab sets, or a common task button or group in the common tasks page. What else do we need to include? We need some pages to show when administrators click a tree node or a common task button. So we should add JSF fragments, implemented using JSFTemplating tags, to show the real content to the administrators. For example, imagine that we want to write a module to deploy OSGI bundles: we will need to show a page which lets users select a file, along with a button which they can press to upload and install the bundle.

Now that we have shown the page which lets users select a file, we should have some business logic, or handlers, which receive the uploaded file and write it to the correct place. As we use JSFTemplating for developing pages, we will use JSFTemplating handlers to handle the business logic.

That covers the functionality related artifacts, but not everything is about functionality: we should provide localized administration pages for administrators who like to use their local languages when dealing with administration tasks. We also need to provide some help files which will guide administrators when they need to read a manual before digging into the action.

After we have all of this content and configuration in place, the administration console’s add-on service queries for all providers which implement the ConsoleProvider interface; if it cannot find the configuration file in its default place, it calls the getConfiguration method of the contract implementation to locate it. After that the administration console framework uses the configuration file and the provided integration point template files, and later on the content template files and handlers.

2.2 JSFTemplating

JSFTemplating is a Sun sponsored project hosted at https://jsftemplating.dev.java.net/ which provides templating for JSF. By utilizing JSFTemplating we can create web pages or components using template files. Inside template files we can use the JSFTemplating and Facelets syntaxes; other syntax support may be provided in the future. All syntaxes support all of JSFTemplating’s features, such as access to the page session, event triggering and event handler support, and dynamic reloading of page content. Let’s analyze the template file shown in listing 9 to see what we are going to use when we develop administration console plug-ins.

Listing 9 A simple JSFTemplating template file


<!initPage
    setResourceBundle(key="i18n" bundle="mypackage.resource");
/>                                                        #1
<sun:page>
    <sun:html>
        <sun:head id="head" />
        <sun:body>
            #include /header.inc
            <sun:form id="form">
                "<p>#{i18n["welcome.msg"]}</p>            #2
                <sun:label value="#{anOut}">              #3
                    <!beforeEncode
                        GiA.getResponse(userInput="#{in}" response=>$pageSession{anOut});
                    />
                </sun:label>
                "<br /><br />
                <sun:textField id="in" value="#{pageSession.in}" />   #4
                "<br /><br />
                <sun:button text="$resource{i18n.button.text}" />
                <sun:hyperlink text="cheat">
                    <!command
                        setPageSessionAttribute(key="in" value="sds");   #5
                    />
                </sun:hyperlink>
            </sun:form>
            #include /footer.inc                          #6
        </sun:body>
    </sun:html>
</sun:page>

At #1 we determine the resource file which we want to use for localized text content, and we define a key for accessing it. At #2 we simply use our localization resource file to show a welcome message; as you have already noticed, it is Facelets syntax. At #3 we use a handler for one of JSFTemplating’s predefined events: in the event we send the value of the in variable to our method, and after getting back the method’s execution result we set it into a variable named anOut in the page scope, which we then use to initialize the label component. You can see what a handler looks like in listing 10. The event that we used makes it possible to change the text before it gets displayed. There are several predefined events, some of which are listed in table 2. At #4 we use the in variable’s value to initialize the text field content. At #5 we use a predefined command as the handler for the hyperlink click. At #6 we include another template file into this template file. All of the components that we used in this template file are equivalents of Woodstock project components.

Table 2 JSFTemplating pre-defined events which can call a handler

Event                        Description
AfterCreate, BeforeCreate    Event handling commences before or after the component is created
AfterEncode, BeforeEncode    Event handling commences before or after the content of a component gets displayed
Command                      A command has been invoked, for example a link or a button was clicked
InitPage                     Page initialization phase, when component values are getting set

Listing 10 The GiA.getResponse handler which has been used in listing 9


@Handler(id = "GiA.getResponse",
        input = {
            @HandlerInput(name = "in", type = String.class)},
        output = {
            @HandlerOutput(name = "response", type = String.class)})
public static void getResponse(HandlerContext handlerCtx) {
}

In listing 10 you can see that the handler id is what we use in the JSF page instead of the method’s fully qualified name. During compilation, all handler ids, their fully qualified names, and their input and output parameters are extracted by apt into a file named Handler.map, which resides inside a folder named jsftemplating; the jsftemplating folder is placed inside the META-INF folder. The Handler.map structure is similar to a standard properties file containing variable=value pairs.
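Putting the pieces together, a handler that actually consumes its input and produces its output could look like the following sketch; getInputValue and setOutputValue are the HandlerContext methods for reading declared inputs and publishing declared outputs, and the greeting logic itself is made up for illustration:

@Handler(id = "GiA.getResponse",
        input = {
            @HandlerInput(name = "in", type = String.class)},
        output = {
            @HandlerOutput(name = "response", type = String.class)})
public static void getResponse(HandlerContext handlerCtx) {
    String userInput = (String) handlerCtx.getInputValue("in");   // read the declared input parameter
    String response = "You entered: " + userInput;                // illustrative logic only
    handlerCtx.setOutputValue("response", response);              // publish the declared output parameter
}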

Now that we have some knowledge about the JSFTemplating project and the administration console architecture, we can create our plug-in, which lets us install any GlassFish module in the form of an OSGI bundle using the administration console. After we have seen how a real plug-in is developed, we can proceed to learn more details about integration points and changing the administration console theme and brand.

2.3 OSGI bundle installer plug-in

Now we want to create a plug-in which will let us upload an OSGI module and install it into the application server by copying the file into the modules directory. First let’s look at figure 5, which shows the file and directory layout of the plug-in source code, and then we can discuss each artifact in detail.

In figure 5, at the top level we have our Maven build file with one source directory containing the standard Maven directory structure for JAR files: a main directory with java and resources directories inside it. The java directory is self describing, as it contains the handler and the service provider implementation. The resources directory again is standard for the Maven JAR packager; Maven will copy the content of the resources folder directly into the root of the JAR file. The content of the resources folder is as follows (a sketch of the resulting tree appears after figure 5):

  • glassfish: this folder contains a directory layout similar to the java folder; the only file stored in it is our localization resource file, which is named Strings.properties
  • images: graphic files which we want to use in our plug-in, for example in the navigation tree
  • js: a JavaScript file which contains some functions which we will use in the plug-in
  • META-INF: during development we will have only one folder, named admingui, inside the META-INF folder; this folder holds the console-config.xml. After building the project, some other folders, including maven, jsftemplating and inhabitants, will be created by Maven.
  • pages: this folder contains all of our JSF template files.

Figure 5 directory layout of the administration console plug-in to install OSGI bundles
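Based on that description, the plug-in source tree looks roughly like the following sketch; the file names are the ones used later in this section, while the project folder name and the assumption that the Java classes share the resource bundle’s package are illustrative:

admingui-osgi-install-plugin/
    pom.xml
    src/main/java/glassfish/book/chapteradmingui/plugin/
        InstallOSGIBundle.java
        InstallOSGIBundleHandlers.java
    src/main/resources/
        glassfish/book/chapteradmingui/plugin/Strings.properties
        images/icon.png
        js/glassfishUtils.js
        META-INF/admingui/console-config.xml
        pages/CommonTask.jsf
        pages/TreeNode.jsf
        pages/installOSGI.jsf
        pages/OSGIModuleInstalled.jsf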

Some folders, like META-INF, are mandatory, but we could put the other folders’ content directly inside the resources folder; we would just end up with an unorganized structure. Now we can discuss each artifact in detail. You can see the content of console-config.xml in listing 11.


Listing 11 Content of the console-config.xml file which describes the plug-in integration points

<?xml version="1.0" encoding="UTF-8"?>
<console-config id="GiA-OSGI-Module-Install">                  #1
<integration-point  id="CommonTask-Install-OSGI"          #2
type="org.glassfish.admingui:commonTask"          #3
parentId="deployment"                           #4
priority="300"                                   #5
content="pages/CommonTask.jsf" />             #6
<integration-point  id="Applications-OSGI"
type="org.glassfish.admingui:treeNode"       #7
priority="500"
parentId="applications"                       #8
content="pages/TreeNode.jsf" />                #9
</console-config>

This XML file describes which integration points we want to use and what the priority of our navigation items is compared with the existing ones. At #1 we give our plug-in a unique ID, because later on we will access our web pages and resources using this ID. At #2 we define an integration point, again with a unique ID. At #3 we determine that this integration point is a common task which we want to add under the deployment tasks group (#4). At #5 the priority number says that all integration points with a smaller priority number will be processed before this integration point, and therefore our common task button will be placed after theirs. At #6 we determine which template file should be processed to fill in the integration point placeholder; for this integration point the template file contains a button which will fill the placeholder.

At #7 we determine that we want to add a tree node to the navigation tree by using org.glassfish.admingui:treeNode as the integration point type. At #8 we determine that our tree node should be placed under the applications node. At #9 we say that the template page which will fill the placeholder is pages/TreeNode.jsf. To summarize, each console-config.xml file generally consists of several integration point description elements, and each integration point element has four or five attributes. These attributes are as follows:

Id: An identifier for the integration point.

parentId: The ID of the integration point’s parent. You can see a list of all parentId values in table 3.

type: The type of the integration point. You can see a list of all types in table 3.

priority: A numeric value that specifies the relative ordering of integration points for add-on components that specify the same parentId. This attribute is optional.

content: The content for the integration point, typically a JavaServer Faces page.

The second basic artifact which we discussed is the service provider, a simple Java class implementing the org.glassfish.api.admingui.ConsoleProvider interface. Listing 12 shows the InstallOSGIBundle class, which is the service provider for our plug-in.

Listing 12 Content of the InstallOSGIBundle.java, the plug-in service provider


@Service(name = "install-OSGI-bundle")                      #A
@Scoped(PerLookup.class)                                    #B
public class InstallOSGIBundle implements ConsoleProvider { #C

    public URL getConfiguration() {
        return null;
    }
}

#A: Service has name

#B: Service is scoped

#C: Service implements the interface

Now that we have seen the basic artifacts, including the service provider implementation and how we define where our navigational items go in the administration console navigation system, we can look at the content of the template files referenced from the console-config.xml file. Listing 13 shows the TreeNode.jsf file content.

Listing 13 Content of the TreeNode.jsf template file which creates a node in the navigation tree

<!initPage
setResourceBundle(key="GiA" bundle="glassfish.book.chapteradmingui.plugin.Strings") #1
/>
<sun:treeNode id="GFiA_TreeNode"            #2
imageURL="resource/images/icon.png"  #3
text="$resource{GiA.OSGI.install.tree.title}" #4
url="GiA-OSGI-Module-Install/pages/installOSGI.jsf" #5
target="main"         #6
expanded="$boolean{false}">       #7
</sun:treeNode>

You can see the changes that this template causes in figure 6; the description of the code is as follows. At #1 we use a JSFTemplating event to initialize the resource bundle. At #2 we declare a tree node with GFiA_TreeNode as its ID, which lets us access the node using JavaScript. At #3 we determine the icon which should appear next to the tree node; you can see that we use resource/images/icon.png as the path to the icon, and the resource prefix leads us to the root of the resources folder. At #4 we say that the title of the tree node should be fetched from the resource bundle. At #5 we determine which page should be loaded when the administrator clicks the node: GiA-OSGI-Module-Install is our plug-in id, and we can access whatever we have inside the resources folder by prefixing its path with this id (you can see GiA-OSGI-Module-Install in listing 11); in CommonTask.jsf we additionally use #{facesContext.externalContext.requestContextPath} to get the web application path. At #6 we say that the page should open in the main frame (the content frame), and at #7 we say that the node should not be expanded by default; the effect of #7 is visible when we have some sub nodes.

Figure 6 the effects of TreeNode.jsf template file on the administration console navigation tree node

You can see that we have our own icon, which is loaded directly from the images folder residing inside the resources directory. The tree node text is fetched from the resource bundle file.

Listing 14 shows CommonTask.jsf, which causes the administration console service to place a button in the common tasks section under the deployment group; the result of this fragment on the common tasks page is shown in figure 7.

Listing 14 Content of the CommonTask.jsf file which is a template to fill the navigation item place


<!initPage
setResourceBundle(key="GiA" bundle="glassfish.book.chapteradmingui.plugin.Strings")
/>
<sun:commonTask                                             #1
text="$resource{GiA.OSGI.install.task.title}"
toolTip="$resource{GiA.OSGI.install.tree.Tooltip}"
onClick="admingui.nav.selectTreeNodeById('form:tree:application:GFiA_TreeNode');                                                       #2
parent.location='#{facesContext.externalContext.requestContextPath}/GiA-OSGI-Module-Install/pages/installOSGI.jsf'; return false;"     #3

>
</sun:commonTask>

At #1 we use a commonTask component of the JSFTemplating framework, with a localized string for its text and tool tip. At #2 we change the state of the tree node defined in console-config.xml when the component receives a click; this ensures that the navigation tree shows where the user is. And at #3 we say that we want to load pages/installOSGI.jsf when this button is clicked.

Figure 7 Effects of listing 14 on the common tasks page of the administration console

We fetch the button title and its tool tip from the resource file which we included in the resources folder, as described in the sample project folder layout in figure 5.

The next file we will discuss is the actual operation file, which provides administrators with an interface to select, upload, and install an OSGI bundle. The file, as we already discussed, is named installOSGI.jsf, and listing 15 shows its content.

Listing 15 The installOSGI.jsf file content; this file provides the interface for uploading the OSGI file


<sun:page id="install-osgi-bundle-page" >  #1
<!beforeCreate
setResourceBundle(key="GiA" bundle="glassfish.book.chapteradmingui.plugin.Strings");
/>
<sun:html>
<sun:head id="propertyhead" title="$resource{GiA.OSGI.install.header.title}">      #2
<sun:script url="/resource/js/glassfishUtils.js" /> #3
</sun:head>
<sun:body>
<sun:form id="OSGI-Install-form" enctype="multipart/form-data">
<sun:title id="title" title="$resource{GiA.OSGI.install.form.title}" helpText="$resource{GiA.OSGI.install.form.help.title}">
<sun:upload id="fileupload" style="margin-left: 17pt" columns="$int{50}"
uploadedFile="#{requestScope.uploadedFile}"
> #4
</sun:upload>
<sun:button id="uploadButton" text="$resource{GiA.OSGI.install.form.Install_button}"
onClick="javascript:
return submitAndDisable(this, '$resource{GiA.OSGI.install.form.cancel_processing}');
"> #5
<!command
GiA.uploadFile(file="#{uploadedFile}");

redirect(page="#{request.contextPath}/GiA-OSGI-Module-Install/pages/OSGIModuleInstalled.jsf");
/>  #6
</sun:button>
</sun:title>
</sun:form>
</sun:body>
</sun:html>
</sun:page>

I agree that the file content may look scary, but if you look more carefully you can see many familiar elements and patterns. Figure 8 shows this page in the administration console. In listing 15, at #1 we start a page component and use its beforeCreate event to load the localized resource bundle. At #2 we give the page a title which is loaded from the resource bundle. At #3 we load a JavaScript file which contains one helper JavaScript method; as you can see, we prefixed the file path with resource. At #4 we use a fileUpload component, and the binary content of the uploaded file goes into uploadedFile in the request scope. At #5 we use the helper JavaScript method to submit the form and disable the Install button. At #6 we call a custom handler with GiA.uploadFile as its ID, passing the request scoped uploadedFile variable to the handler method; after the command has executed, we redirect to a simple page which shows a message indicating that the module is installed.

Figure 8 The installOSGI.jsf page in the administration console

Now let’s see what is behind this page: how the handler works, how it finds the modules directory, and how the installation process commences. Listing 16 shows the InstallOSGIBundleHandlers class, which contains only one handler, for writing the uploaded file into the modules directory of the GlassFish installation that owns the running domain.

Listing 16 The content of InstallOSGIBundleHandlers, which contains the file uploading handler


public class InstallOSGIBundleHandlers {  #A

    @Handler(id = "GiA.uploadFile",                                                    #1
            input = {
                @HandlerInput(name = "file", type = UploadedFile.class)})              #2
    public static void uploadFileToTempDir(HandlerContext handlerCtx) {                #3
        try {
            UploadedFile uploadedFile = (UploadedFile) handlerCtx.getInputValue("file");   #4
            String name = uploadedFile.getOriginalName();
            String asInstallPath = AMXRoot.getInstance().getDomainRoot().getInstallDir();  #5
            String modulesDir = asInstallPath + "/modules";
            String serverSidePath = modulesDir + "/" + name;
            uploadedFile.write(new File(serverSidePath));
        } catch (Exception ex) {
            GuiUtil.handleException(handlerCtx, ex);                                    #6
        }
    }
}

#A Class Declaration

As you can see in the listing, it is really simple to write handlers and work with the GlassFish APIs. At #1 we define a handler; the handler id element allows us to refer to it from the template file. At #2 we define the handler input parameter named file, whose type is com.sun.webui.jsf.model.UploadedFile. At #3 we implement the handler method. At #4 we extract the file content from the context. At #5 we use an AMX utility class to find the path of the GlassFish installation that owns the currently running domain; there are many utility classes in the org.glassfish.admingui.common.util package. At #6 we handle any possible exception in the GlassFish way. We could also define output parameters for our handler and, after the command has executed, decide where to redirect the user based on the value of an output parameter.

There are three other files which we will briefly discuss. The first, shown in listing 17, is the JavaScript file which we used in installOSGI.jsf; it is placed inside the js directory and was mentioned in the discussion of listing 15.

Listing 17 Content of the glassfishUtils.js file


function submitAndDisable(button, msg, target) {
button.className='Btn1Dis';
button.disabled=true;
button.form.action += "?" + button.name + "=" + encodeURI(button.value);
button.value=msg;
if (target) {
button.form.target = target;
}
button.form.submit();
return true;
}

The function receives three parameters: the button which it will disable, the message which it will set as the button title when the button is disabled, and finally the target that the button’s owner form will submit to.

Listing 18 shows the next file which we should take a look at. Strings.properties is our localization file, located deep in the resources directory under the glassfish.book.chapteradmingui.plugin package. Although it is a standard localization file, it does not hurt to take a look and remember the first time we faced resource bundle files.

Listing 18 Content of the Strings.properties file


OSGI.install.header.title=Install OSGI bundle by uploading the file into the server.
OSGI.install.form.title=Install OSGI bundle by uploading the file into the server.
OSGI.install.form.help.title=Browse and select your OSGI bundle to install it in the server's modules directory
OSGI.install.form.button_processing=Processing
OSGI.install.form.Install_button=Install
OSGI.install.tree.title=Install OSGI Bundle
OSGI.install.task.title=Install OSGI Bundle
OSGI.install.tree.Tooltip=Simply install OSGI bundle by uploading the bundle from the administration console.

And finally our Maven build file; as we discussed at the beginning of the section, it is placed in the top level directory of our project. If you want to review the directory layout, take a look at figure 5.

Listing 19 The plug-in project Maven build file, pom.xml


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.glassfish.admingui</groupId> #A
<artifactId>admingui</artifactId>
<version>3.0-SNAPSHOT</version>
</parent>
<artifactId>glassfish.book.chapterAdminConsolePlugin</artifactId>
<packaging>hk2-jar</packaging>           #B
<name>OSGI-bundle-Installation-Adming-Plugin</name>
<description>GlassFish in Action, Chapter 13, Admin console plugin to install OSGI bundle</description>
<dependencies>
<dependency>
<groupId>org.glassfish.common</groupId>
<artifactId>glassfish-api</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.glassfish.admingui</groupId>
<artifactId>console-common</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.glassfish.admingui</groupId>
<artifactId>console-plugin-service</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.glassfish.common</groupId>
<artifactId>amx-api</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.glassfish.deployment</groupId>
<artifactId>deployment-client</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>
</project>

#A Project Group ID

#B HK2 Special Packaging

The pom.xml file, which is available for download at www.manning.com/kalali, is more complete, with some comments explaining other elements, but listing 19’s content is sufficient for building the plug-in.

Now that you have a solid understanding of how an administration console plug-in works, we can extend our discussion to the other integration points and their corresponding JSFTemplating components.

Table 3 shows the integration points and the related JSFTemplating components, along with some explanation of each item. As you can see, there are several types of integration points with different attributes and navigational places.

Table 3 List of all integration points along with the related JSFTemplating compoent

Integration point: treeNode
Values for parentId attribute: tree, applicationServer, applications, webApplications, resources, configuration, node, webContainer, httpService
Corresponding JSFTemplating component: sun:treeNode, which is the equivalent of the Project Woodstock tag webuijsf:treeNode

Integration point: serverInstTab
Values for parentId attribute: serverInstTab; adds tabs and sub tabs to the Application Server page
Corresponding JSFTemplating component: sun:tab, which is the equivalent of the Project Woodstock tag webuijsf:tab

Integration point: commonTask
Values for parentId attribute: deployment, monitoring, updatecenter, documentation
Corresponding JSFTemplating component: sun:commonTask, which is the equivalent of the Project Woodstock tag webuijsf:commonTask

Integration point: commonTask
Values for parentId attribute: commonTasksSection; it allows us to create our own common tasks group
Corresponding JSFTemplating component: sun:commonTasksGroup, which is the equivalent of the Project Woodstock tag webuijsf:commonTasksGroup

Integration point: customtheme
Values for parentId attribute: masthead, loginimage, loginform, versioninfo
Corresponding JSFTemplating component: none; this integration point lets us change the login form, header, and version information of the application server to change the application server brand

* All of the integration point values have org.glassfish.admingui: as a prefix

Now you are equipped with enough knowledge to develop extensions for the GlassFish administration console, either to facilitate your own daily tasks or to provide your plug-ins to other administrators who are interested in the same administration tasks that you perform using them.

3 Summary

Understanding an administration console from the end user’s point of view is one thing, and understanding it from a developer’s perspective is another: with the first you can administer the application server, and with the second you can manage, enhance, and extend the administration console itself. In this article we learned how to develop new asadmin (CLI) commands to ease our daily tasks or extend the GlassFish application server as a third party partner. We also learned how to develop and extend the web based administration console, which led us to review the JSFTemplating project and take a quick look at the AMX helper classes.

Using Spring Security to enforce authentication and authorization on Spring Remoting Services Invoked from a Java SE client…

The Spring framework is one of the biggest and most comprehensive frameworks the Java community can utilize to cover most of the end to end requirements of a software system when it comes to implementation.
Spring Security and Spring Remoting are two important parts of the framework: the first covers security in a declarative way, and the second lets us invoke the methods of a Spring bean remotely through a local proxy.

In this entry I will show you how we can use Spring Security to secure a Spring bean exposed over HTTP and invoke its secured methods from a standalone client. In our sample we develop a Spring service which returns the list of roles assigned to the currently authenticated user. I will develop a simple web application containing a secured Spring service, and then a simple Java SE client to invoke that secured service.

To develop the service we need a service interface, which is as follows:

package springhttp;

public interface SecurityServiceIF {
    public String[] getRoles();
}

Then we need to have an implementation of this interface which will do the actual job of extracting the roles for the currently authenticated user.

package springhttp;

import java.util.Collection;
import java.util.Iterator;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;

public class SecurityService implements SecurityServiceIF {

    public String[] getRoles() {
        Collection<GrantedAuthority> col =
                SecurityContextHolder.getContext().getAuthentication().getAuthorities();
        String[] roles = new String[col.size()];
        int i = 0;
        for (Iterator<GrantedAuthority> it = col.iterator(); it.hasNext();) {
            GrantedAuthority grantedAuthority = it.next();
            roles[i] = grantedAuthority.getAuthority();
            i++;
        }
        return roles;
    }
}

Now we should define this remote service in a descriptor file. Here we will use the remote-servlet.xml file to describe the service; the file can be placed inside the WEB-INF folder of the web application. The file content is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN" "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
<bean name="/securityService"
class="org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter">
<property name="serviceInterface" value="springhttp.SecurityServiceIF" />
<property name="service" ref="securityService" />
</bean>
</beans>

We are simply using /securityService as the relative URL to expose our SecurityService implementation.

The next configuration file is the Spring application context, in which we define all of our beans, the bean wiring, and configuration. The file is named applicationContext.xml and is located inside the WEB-INF directory.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xmlns:util="http://www.springframework.org/schema/util"
xmlns:security="http://www.springframework.org/schema/security"
xsi:schemaLocation="http://www.springframework.org/schema/beans      http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd   http://www.springframework.org/schema/security   http://www.springframework.org/schema/security/spring-security-3.0.xsd ">

<bean id="securityService" class="springhttp.SecurityService">
</bean>

<security:http realm="SecRemoting">
<security:http-basic/>
<security:intercept-url pattern="/securityService" access="ROLE_ADMIN" />
</security:http>

<security:authentication-manager alias="authenticationManager">
<security:authentication-provider>
<security:user-service id="uds">
<security:user name="Jami" password="Jami"
authorities="ROLE_USER, ROLE_MANAGER" />
<security:user name="bob" password="bob"
authorities="ROLE_USER,ROLE_ADMIN" />
</security:user-service>
</security:authentication-provider>
</security:authentication-manager>

<bean id="digestProcessingFilter"
class="org.springframework.security.web.authentication.www.DigestAuthenticationFilter">
<property name="userDetailsService" ref="uds" />
<property name="authenticationEntryPoint"
ref="digestProcessingFilterEntryPoint" />
</bean>

<bean id="digestProcessingFilterEntryPoint"
class="org.springframework.security.web.authentication.www.DigestAuthenticationEntryPoint">
<property name="realmName" value="ThisIsTheDigestRealm" />
<property name="key" value="acegi" />
<property name="nonceValiditySeconds" value="10" />
</bean>

<bean id="springSecurityFilterChain"
class="org.springframework.security.web.FilterChainProxy">
<security:filter-chain-map path-type="ant">
<security:filter-chain pattern="/**"
filters="httpSessionContextIntegrationFilter,digestProcessingFilter,exceptionTranslationFilter,filterSecurityInterceptor" />
</security:filter-chain-map>
</bean>

<bean id="httpSessionContextIntegrationFilter"
class="org.springframework.security.web.context.HttpSessionContextIntegrationFilter" />
<bean id="accessDecisionManager" class="org.springframework.security.access.vote.UnanimousBased">
<property name="decisionVoters">
<list>

<bean class="org.springframework.security.access.vote.RoleVoter" />
<bean class="org.springframework.security.access.vote.AuthenticatedVoter" />
</list>
</property>
</bean>

<bean id="exceptionTranslationFilter"  class="org.springframework.security.web.access.ExceptionTranslationFilter">
<property name="authenticationEntryPoint"
ref="digestProcessingFilterEntryPoint" />
</bean>
</beans>

Most of the above code is default Spring configuration except for the following parts, which I am going to explain in more detail. The first snippet defines the bean itself:

<bean id="securityService" class="springhttp.SecurityService">
</bean>

The second part is where we specify the security restrictions on the application itself. We are instructing Spring Security to only allow a specific role to invoke the securityService:

<security:http realm="SecRemoting">
<security:http-basic/>
<security:intercept-url pattern="/securityService" access="ROLE_ADMIN" />
</security:http>

The third part is where we define the identity repository in which our users and their role assignments are stored:

<security:authentication-manager alias="authenticationManager">
<security:authentication-provider>
<security:user-service id="uds">
<security:user name="jimi" password="jimi"
authorities="ROLE_USER, ROLE_MANAGER" />
<security:user name="bob" password="bob"
authorities="ROLE_USER,ROLE_ADMIN" />
</security:user-service>
</security:authentication-provider>
</security:authentication-manager>

As you can see we are using the simple in-memory user service; in a production environment you may configure an LDAP, JDBC, or custom user service instead. All other parts of applicationContext.xml are Spring Security filter definitions. You can find an explanation of any of them in the Spring Security documentation or by searching for it. To enforce security restrictions on the SecurityService for local invocations as well, we can simply change the first discussed snippet as follows to allow ROLE_ADMIN and ROLE_MANAGER to invoke the service locally.

 <bean id="securityService" class="springhttp.SecurityService">
<security:intercept-methods>
<security:protect
method="springhttp.SecurityService.getRoles" access="ROLE_MANAGER" />
</security:intercept-methods>
</bean>

Now that we have all Spring configuration files in place, we need to add some elements to web.xml in order to let the Spring framework bootstrap. The following entries need to be included in the web.xml file:

<listener>
<listener-class>
org.springframework.web.context.ContextLoaderListener
</listener-class>
</listener>
<context-param>
<param-name>contextConfigLocation</param-name>
<param-value>
/WEB-INF/applicationContext.xml
</param-value>
</context-param>
<filter>
<filter-name>springSecurityFilterChain</filter-name>
<filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
<filter-name>springSecurityFilterChain</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
<servlet>
<servlet-name>remote</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<load-on-startup>2</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>remote</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>

Now that the server part is complete, we can develop the client application to invoke the service. The code for the client class is as follows:

package client;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.core.context.SecurityContextImpl;

public class Main {

    public static void main(final String[] arguments) {
        final ApplicationContext context =
                new ClassPathXmlApplicationContext(
                        "client/spring-http-client-config.xml");

        // credentials of the user defined in the server side user service
        String user = "bob";
        String pw = "bob";

        // place an authentication token in the client side security context;
        // the HTTP invoker executor sends it along with the remote call
        SecurityContextImpl sc = new SecurityContextImpl();
        Authentication auth = new UsernamePasswordAuthenticationToken(user, pw);
        sc.setAuthentication(auth);
        SecurityContextHolder.setContext(sc);

        String[] roles = ((SecurityServiceIF) context.getBean("securityService")).getRoles();
        for (int i = 0; i < roles.length; i++) {
            System.out.println("Role:" + roles[i]);
        }
    }
}

As you can see we are initializing the application context using an XML file, which we will discuss in a few minutes. After the application context initialization we create a SecurityContextImpl and a UsernamePasswordAuthenticationToken, pass the token to the security context, and finally this security context is passed on to the server for authentication and authorization.

Finally we invoke the getRoles() method of our SecurityService, which returns the set of roles assigned to the currently authenticated user, and print them to the default output stream.

Now, the spring-http-client-config.xml content:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd">
<bean id="securityService"
class="org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean">
<property name="serviceUrl" value="http://localhost:8084/sremoting/securityService" />
<property name="serviceInterface" value="client.SecurityServiceIF" />
<property name="httpInvokerRequestExecutor">
<bean class="org.springframework.security.remoting.httpinvoker.AuthenticationSimpleHttpInvokerRequestExecutor"  />
</property>
</bean>
</beans>

As you can see we are just setting some properties like the serviceUrl, the serviceInterface, the httpInvokerRequestExecutor and so on. The only thing which you may need to change to run the sample is the serviceUrl.
You can download the complete sample code from here. There are two NetBeans projects, a web application and a Java SE one. You may need to add the Spring libraries to the projects before anything else.

The SecurityService implementation is borrowed from Elvis weblog available at http://elvisfromhell.blogspot.com/2008/06/this-was-hard-one.html

GlassFish Modularity System, How extend GlassFish CLI and Web Administration Console (Part I, The architecture)

Modularity is an essential design and implementation consideration which every software architect and designer should keep in mind in order to get software that is easy to develop, maintain, and extend.

GlassFish is an application server which benefits greatly from a modularity system to provide different levels of functionality for different deployments and case studies. GlassFish fully supports the Java EE profiles, so it provides a lot of features which suit different case studies and different types of use cases. Every deployment and case study requires only a subset of functionality to be provided, administered, and maintained, so GlassFish users, application developers, and the GlassFish development team can all benefit from modularity to reduce the overall costs of development, maintenance, administration, and management for each deployment type that GlassFish supports.

Looking from the functionality point of view, GlassFish provides extension points for further extending the administration console, the CLI, the containers, and the monitoring capabilities of the application server in a non-intrusive and modular way.

From the development point of view, GlassFish uses OSGI for module lifecycle management while it uses HK2 for service layer functionality like dependency injection, service instance management, service load and unload sequences, and so on.

Looking from the integration point of view, the GlassFish modular architecture provides many benefits at different levels of integration. In the bigger picture, GlassFish can be deployed into a larger OSGI container which, for example, may be running your enterprise application. GlassFish has many advanced subsystems, like its monitoring infrastructure, which can be used in any enterprise level application for extensible monitoring, and thanks to GlassFish modularity these capabilities can easily be extracted from GlassFish without any class path dependency headaches or versioning conflicts.

GlassFish Modularity

GlassFish supports modularity by providing extension points and the required SPIs to add new functionality to the administration console, the CLI, and monitoring, as well as the possibility to develop new containers to host new types of applications.

Not only does the modular architecture provide easy extendibility, but it also provides better performance and much faster startup and shutdown sequences, as modules are loaded only when they are referenced by another module and not during startup. For example, the Ruby container will not start unless a Ruby on Rails application gets deployed to the application server.

The GlassFish modular system uses a combination of two module systems: the well known OSGI, and HK2 as an early implementation of the Java module system (JSR-277). OSGI provides the module and module lifecycle layer functionality that lets the application server fit into bigger deployments which use an OSGI framework. It also makes it possible to benefit from the well designed and tested modularity and lifecycle management techniques included in OSGI. HK2 provides the service layer functionality and promises a smooth migration to the Java SE 7 module system (JSR-277). To make it simple, HK2 is the framework which we use to develop the Java classes that live inside our module and perform the main tasks.

We talked about GlassFish modules, but these modules need another entity to load and unload them and to feed them with the responsibilities they were designed to accept. The entity which takes care of loading and unloading modules is the GlassFish kernel, which is itself an OSGI bundle implementing the kernel services.

When GlassFish starts, either normally using its internal OSGI framework or using an external OSGI framework, it first loads the GlassFish kernel and checks for all available services by looking for implementations of the different contracts in the class path. The kernel only loads services which are necessary during startup or services which are referenced from another service as one of its dependencies.

Container services only start if an application gets deployed into the container; administration console extensions load as soon as the administration console loads; CLI commands load on demand, so the system does not preload all commands when asadmin starts; monitoring modules start when a client binds to them.

The GlassFish modular system does not apply only to application server specific capabilities like administration console extensions; it also applies to the Java EE specification implementations. GlassFish uses different OSGI modules for different Java EE specifications like Servlet 3 and JSF 2. These specifications are bundled using OSGI to make it easier to update the system when a new release of each specification is available.

Different GlassFish modules fit into different categories around the GlassFish kernel. These categories are the result of modules implementing the GlassFish provided interfaces for the different extendibility points. We will discuss the extension points in more detail in section 3.

 

 

New features added to GlassFish using the modularity system will not differ from GlassFish out of the box features because new modules completely fit into the overall provided features.

All extendibility points in GlassFish are designed to ensure that adding new types of containers, hosting programming languages and frameworks other than Java, is easy to cope with. Every container that is available for the GlassFish application server adds its own set of CLI commands, administration console web pages and navigation nodes, and its own monitoring modules to measure the metrics it requires.

 

2 Introducing OSGI 

OSGI is an extensive framework introduced to add modularity capabilities to Java before Java SE 7 ships. Essentially OSGI is a composition of several layers of functionality built on top of each other to free Java developers from the headaches of class loader complexity, dynamic pluggability, library versioning, code visibility, and dependency management; it also offers a publish, find and bind service model to decouple bundles. Layering OSGI gives us something similar to Figure 1.

Figure 1 OSGI layers, GlassFish uses all except the service layer. An OSGI based application may utilize all or some of the layers in addition to OS and Java platform which OSGI is running on.

OSGI runs on hand held devices with CDC profile support as well as on Java SE, so a variety of operating systems and devices can benefit from OSGI. Over this minimum platform, the OSGI layers stack one over the other, and based on the principles of layered architecture each layer can only see the layer below it and not the layer above it.

The bundle or module layer is closest to the Java SE platform. OSGI bundles are JAR files with some additional metadata; this metadata generally defines the module's required and provided interfaces, and it does so in a very effective manner. OSGI bundles provide seven definitions to help with dependencies and versioning. These definitions include:

       Multi-version support: several versions of the same bundle can exist in the framework, and dependent bundles can use the versions that satisfy their requirements.

       Bundle fragments: Allows bundle content to be extended; bundles can merge to form one bundle and expose unified export and import attributes.

       Bundle dependencies: Allows for tight coupling of bundles when required; a bundle can import everything that another specific bundle exports, and re-exporting and split packages are allowed.

       Explicit code boundaries and dependencies: Bundles can explicitly export their internal packages or declare dependencies on external packages.

       Arbitrary export/import attributes for more control: Exporters can attach arbitrary attributes to their exports; importers can match against these arbitrary attributes when they find the required package and before importing.

       Sophisticated class space consistency model

       Package filtering for fine-grained class visibility: Exporters may declare that certain classes are included/ excluded from the exported package.

All of these definitions are included in the MANIFEST.MF file, an easy to read and easy to create text file located inside the META-INF directory of JAR files. Listing 4 shows a sample MANIFEST.MF file.

Listing 4 A sample MANIFEST.MF file which shows some OSGI module definitions


Bundle-ManifestVersion: 2                                       #1
Bundle-SymbolicName: gfbook.samples.chapter12                    #2
Bundle-Version: 1.1.0                                            #2
Bundle-Activator: gfbook.samples.chapterActivator             #3 
Bundle-ClassPath: .,gfbook/lib/juice.jar                          #4
Bundle-NativeCode:                                                #5
serialport.so; osname=Linux; processor=x86,                        #5
serialport.dll; osname=Windows 98; processor=x86                   #5
Import-Package: gfbook.samples.chapter7; version="[1.0.0,1.1.0)";  #6
resolution:="optional"                                             #6
Export-Package:                                                    #7
gfbook.samples.chapterservices.monitoring; version=1.1;          #7
vendor="gfbook.samples:"; exclude:="*Impl",                         #7
gfbook.samples.chapterservices.logging; version=1.1;             #8
uses:="samples.chapter7.services"                                   #8

GlassFish uses OSGI Revision 4 and the version number at #1 indicates it. At #2 we uniquely identify this bundle by its name and version together. At #3 we specify a class which implements the BundleActivator interface in order to let the bundle get notified when it starts or stops; it lets developers perform initialization and sanity checks when the bundle starts, or perform some cleanup when the bundle stops. At #4 we define our bundle's internal class path; a bundle can contain JAR files, and the bundle class loader checks this class path, in the order given in the Bundle-ClassPath header, when it looks for a class that it requires. At #5 we define the bundle's native dependencies per operating system.

At #6 we define an optional dependency on some packages; as you can see, our bundle can use any version inside the given version range. At #7 we describe the exported packages along with their version and the classes excluded using wildcard notation. At #8 we specify that if any bundle imports gfbook.samples.chapterservices.logging from our bundle and the importing bundle also needs samples.chapter7.services, it should satisfy that import from the same bundle that our package uses.
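To make the Bundle-Activator header (#3 in Listing 4) more concrete, here is a minimal sketch of an activator class; the class name and the log messages are hypothetical, while the BundleActivator interface and its start and stop callbacks are standard OSGI API.

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// A minimal activator sketch; the framework calls start() when the bundle
// is activated and stop() when the bundle is stopped.
public class Activator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        // perform initialization and sanity checks here
        System.out.println("Bundle " + context.getBundle().getSymbolicName() + " started");
    }

    public void stop(BundleContext context) throws Exception {
        // release resources and perform cleanup here
        System.out.println("Bundle " + context.getBundle().getSymbolicName() + " stopped");
    }
}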

The next layer in the OSGI modularity system is the life cycle layer, which makes it possible for bundles to be installed, dependency checked, started, activated, stopped, and uninstalled dynamically. It relies on the module layer for class path related tasks like class locating and class loading. This layer also adds some APIs for managing the modules at run time (a short sketch follows the state list below). Figure 2 shows the functionality of this layer.

Figure 2 OSGI life cycle layer states. A module needs to be stopped and have no dependents before it can be uninstalled.

     Installed: the bundle has been installed but not all of its dependencies have been satisfied; for example, one of its imported packages is not exported by any of the currently resolved bundles.

     Resolved: the bundle is installed and all of its dependencies are satisfied; a bundle in this state is ready to be started.

       Starting: Another temporary state which a bundle goes through when it is going to be activated.

       Active: Bundle is active and running.

     Stopping: A temporary state which a bundle goes through when it is going to be stopped.

       Uninstalled: Bundle is not in the framework anymore.
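As mentioned before Figure 2, the life cycle layer exposes APIs for driving bundles through these states at run time. The following is a minimal sketch of that idea; the bundle location is hypothetical, and a BundleContext is assumed to be available (for example inside a BundleActivator).

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;

// A minimal sketch of managing a bundle's life cycle programmatically.
public class LifeCycleDemo {

    public void manage(BundleContext context) throws Exception {
        // install: the bundle enters the Installed state
        Bundle bundle = context.installBundle("file:/tmp/sample-bundle.jar");

        // start: the framework resolves the bundle and moves it through
        // Starting into the Active state
        bundle.start();

        // stop: the bundle passes through Stopping and becomes inactive again
        bundle.stop();

        // uninstall: the bundle leaves the framework (Uninstalled)
        bundle.uninstall();
    }
}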

The next layer in the OSGI model is the service layer, which provides several functionalities for the deployed bundles. First, it provides some services which are common to all applications, such as logging and security. Secondly, it provides a set of APIs which bundle developers can use to develop OSGI friendly services, publish them in the OSGI framework service registry, search the registry for services that their bundles need, and bind to those services. This layer is not used by GlassFish.
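Although GlassFish does not use this layer, a small sketch may help clarify the publish, find and bind model; the Greeter interface and its inline implementation are hypothetical, while registerService, getServiceReference, getService, and ungetService are standard OSGI service registry calls.

import java.util.Hashtable;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// A minimal sketch of publishing a service and binding to it from the registry.
public class ServiceLayerDemo {

    public interface Greeter {
        void greet(String name);
    }

    public void publishAndLookup(BundleContext context) {
        // publish: register an implementation in the framework service registry
        context.registerService(Greeter.class.getName(), new Greeter() {
            public void greet(String name) {
                System.out.println("Hello " + name);
            }
        }, new Hashtable<String, String>());

        // find and bind: look the service up in the registry and invoke it
        ServiceReference reference = context.getServiceReference(Greeter.class.getName());
        if (reference != null) {
            Greeter greeter = (Greeter) context.getService(reference);
            greeter.greet("OSGI");
            context.ungetService(reference);
        }
    }
}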

All that GlassFish has taken from OSGI is its bundle and life cycle management layers. Adhering to these two layers' guidelines lets GlassFish fit into bigger systems which are OSGI compliant, and lets it host new modules based on the OSGI modularity system. Now that you know how the GlassFish modularity system uses OSGI, you are ready to see what is inside these modules and what GlassFish proposes for developing the module functionality.

3 Introducing HK2

We said that GlassFish uses HK2 for the service layer functionality. The first question you may ask is why the OSGI service model has not been utilized in GlassFish; the answer is that HK2 is more aligned with the Java Module System (JSR-277), which is going to be a part of the Java runtime in Java SE 7, so eventually replacing HK2 with the Java SE modularity system will be easier than replacing the OSGI service model layer with the Java SE service model layer.

3.1 What HK2 is

GlassFish, as a modular application system, has a kernel into which all modules plug to provide some functionality; these pieces of functionality are called services. In order for the kernel to place the services in the right category and place, services are developed based on contracts which are provided by the GlassFish application server designers and developers.

A contract describes an extension point's building blocks, and a service is an implementation which fits those particular building blocks. HK2 is a general purpose modularity system and can be used in any application which requires a modular design.

An HK2 service should adhere to a contract, which means nothing more than implementing an interface whose structure the HK2 kernel knows and knows how to use. But interfaces alone cannot provide all the information which a module may require to identify itself to the kernel, so HK2 provides a set of annotations which let developers attach some metadata to the component and make it possible for the kernel to put it in the correct place.
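As a minimal sketch of this contract and service pairing, assuming HK2's @Contract annotation from org.jvnet.hk2.annotations, a hypothetical greeting contract and its implementation could look like the following (each type would normally live in its own source file).

import org.jvnet.hk2.annotations.Contract;
import org.jvnet.hk2.annotations.Service;

// The contract: an interface whose structure the kernel knows how to use.
@Contract
public interface GreetingService {
    String greet(String name);
}

// A service adhering to the contract; the @Service metadata lets the kernel
// place it correctly.
@Service(name = "english-greeting")
class EnglishGreetingService implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}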

A system which looks for modularity usually has complex requirements which lead to a complex architecture, and modularity is one of many helpers in the design and development of complex software.

In software as big as GlassFish, objects need other objects during construction; they need configuration during construction or afterwards during a method execution; objects may need to make some of their internal objects available to other objects which may need them; and objects need to be categorized into different scopes to prevent any interference between objects that need scoped information and objects that run without storing any state information (stateless).

First things first: after we develop a service by implementing a contract, which is an interface, we should have a way to inform HK2 that our implementation is a service, that it has a name, and that it has a scope. To do this we use the @Service annotation; listing 1 shows an example of using the @Service annotation.

Listing 1 HK2 @Service annotation


@Service (name="restart-domain",                                  #1
scope=org.jvnet.hk2.component.PerLookup.class,                                    
factory=gfbook.chapterCommandFactory)
public class RestartDomain implements AdminCommand {                     #2
...
}

Very simple: just an annotated POJO provides the system with all the information required to accept it as a CLI command. At #1 we define the name of the service that we want to develop, and later on we define the scope of the component. At #2 we implement the contract interface to make it possible for the kernel to use our service in the way that it knows. The scope can be one of the built-in scopes or any custom scope implemented against the org.jvnet.hk2.component.Scope contract; custom scopes should take care of their objects' lifecycle. The built-in scopes are:

 

     Per lookup scope: Components in this scope get a new instance created every time someone asks for them; the implementation class is org.jvnet.hk2.component.PerLookup.

     Singleton scope: Objects in this scope stay alive until the session ends, and any subsequent request for such an object returns the already available instance. The implementation class is org.jvnet.hk2.component.Singleton.

The @Service annotation accepts several optional elements, listed below. These elements are just there for the sake of more readable, maintainable, and customizable code; they are not required, and HK2 determines default values for each element.

       The name element defines the name of the service. The default value is an empty string.

     The scope element defines the scope to which this service implementation is tied. The default value is org.jvnet.hk2.component.PerLookup.class.

     The factory element defines the factory class for the service implementation, if the service is created by a factory class rather than by calling the default constructor. If this element is specified, the factory component is activated, and Factory.getObject is used instead of the default constructor.

 

What if our service requires access to another service or another object? How will we decide whether we should instantiate a new object or use a currently available one? Should we, as service developers, really get involved with a complex service or object initialization task?

First let's look at instantiation. HK2 is an IOC pattern implementation, so no object instantiation needs to go through the new operator; instead you ask the IOC container to return an instance of the required component. In HK2 the ComponentManager class is in charge of providing a registry for instantiated components, and it also bears the responsibility of initializing components that are not yet initialized when a client requests them. Clients ask the ComponentManager for a component by calling getComponent(Class<T> contract), and the ComponentManager ensures that it either returns a suitable instance, based on the available instances and their scopes, or creates a new instance by traversing the entire object graph required for initializing the component. The ComponentManager methods are thread safe and maintain no session information between consecutive calls.

Component developers need some control to ensure all required resources and configuration are ready after construction and before any other operation; they may also need to perform some housekeeping and cleanup before the object is destroyed. So two interfaces are provided which components can implement to register for a notification upon object construction and destruction. These two interfaces are org.jvnet.hk2.component.PostConstruct and org.jvnet.hk2.component.PreDestroy; each interface has just one method, named postConstruct and preDestroy respectively. The ComponentManager is also in charge of injecting resources into component properties during construction, and of extracting resources from components.
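A minimal sketch of such a component, assuming only the two callback interfaces named above, could look like the following; the service name and the work done in the callbacks are hypothetical.

import org.jvnet.hk2.annotations.Service;
import org.jvnet.hk2.component.PostConstruct;
import org.jvnet.hk2.component.PreDestroy;

// A component that is notified right after construction and right before destruction.
@Service(name = "audit-service")
public class AuditService implements PostConstruct, PreDestroy {

    public void postConstruct() {
        // all injected resources are available here; open files, verify configuration, ...
    }

    public void preDestroy() {
        // perform housekeeping and cleanup before the component is discarded
    }
}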

First let's look at resource extraction, which may be unfamiliar. Some components may create an internal object which carries information required by other components in the system, or by similar components which are going to be constructed. These internal objects can be marked with the @Extract annotation, located at org.jvnet.hk2.annotations.Extract. This annotation marks the property to be extracted and placed in a predefined scope, or in the default scope which is the PerLookup scope.

After an object is extracted, HK2 places it inside a container of type org.jvnet.hk2.component.Habitat; the container will serve any object which needs one of its inhabitants. Listing 2 shows how a property of an object can be marked for extraction.

Listing 2 The HK2 extraction sample


@Service (name="restart-domain",                                 scope=org.jvnet.hk2.component.PerLookup.class,                                    
factory=gfbook.chapterCommandFactory)
public class RestartDomain implements AdminCommand {                   
@Extract                                                                                                                  #1
LoggerService logger;
@Extract                                                                                                  #2
public CounterService getCounterService() {...}
}
 

At #1 we mark a property for extraction, and at #2 we extract the object from a getter method; remember that the @Extract annotation works at both the property and the getter level. Now let's see how these inhabitant objects can be used by another service which needs these two functionalities.

Listing 3 The HK2 injection sample


@Service (name="restart-domain",                                 scope=org.jvnet.hk2.component.PerLookup.class,                                    
factory=gfbook.chapterCommandFactory)
public class RestartDomain implements AdminCommand {                   
@Inject protected Habitat habitat;                                                                      #1
public void performAudition() {
LoggerService logger = habitat.getComponent(LoggerService.class);         #2
CounterService counter = habitat.getComponent(CounterService.class);       #2
}
}

At #1 we inject the container into a property of the object; as you can see, this is the world of IOC and we have no explicit object creation. At #2 we ask the habitat to provide us with instances of LoggerService and CounterService.

Now that you have the required understanding of the OSGI bundle layer and HK2 basics, we can get our hands dirty writing some GlassFish modules to see how these two are used in GlassFish modules, both in this article and in the upcoming one.

 

4 GlassFish Containers and Container extendibility

GlassFish is a multi-purpose application server which can host different kinds of applications developed using a variety of programming languages like Ruby, Groovy, and PHP, in addition to supporting Java EE platform based applications.

The concept of a container makes it possible for the GlassFish application server to host new types of applications, which can be developed using either the Java programming language or other programming languages. Imagine that you want to develop a new container which can host a brand new type of application. This type of application will let GlassFish users deploy a new kind of archived package which contains descriptor files along with the classes required for performing scheduled tasks in a container managed way. The new type of application is developed using Java, so it can access all GlassFish application server content, and in the same way it can access some operating system related scripts, like shell scripts, for performing operating system related tasks on schedule.

GlassFish may have tens of different container types for different types of applications, so, before anything else, each container must have a name in order to make it possible for the GlassFish kernel to access it easily. In the next step, GlassFish should have some mechanism to determine which container an application should be deployed to. Then there should be a mechanism to determine which URL patterns should be passed to a specific container for further processing. And finally, each application has a set of classes which the container should load to process the incoming requests, so a class loader with full understanding of the application directory structure is required to load the application classes. Figure 3 shows the procedure of application deployment from the moment that a deployment command is received by GlassFish up to the stage that the container becomes available to the GlassFish deployment manager.

Figure 3 Application deployment activities in GlassFish, from the early start until GlassFish gets the designated container name

Figure 4 shows the activities that happen after the container becomes available to the GlassFish application deployment manager, up to the point that GlassFish gets a class loader for the deployed application.

Figure 4 Activities which happen after GlassFish gets the specific container name

The GlassFish container SPI provides several interfaces which container developers can implement to develop their containers. These interfaces provide the necessary functionality for each activity in figure 3. First let's see what is required for the first step, which is checking the deployment artifact; the artifact can be a compressed file or a directory.

When an artifact is passed to GlassFish for deployment, the artifact first goes through a check to see which type of archive it is and which container can host the application. So we need to be able to check an archive and determine whether or not it is compatible with our container. To do this we must implement an interface with the fully qualified name org.glassfish.api.container.Sniffer; GlassFish creates a list of all Sniffer implementations and passes the archive to each one of them to see whether a Sniffer knows the archive or not. Listing 4 shows a dummy Sniffer implementation.

 

Listing 4 A dummy Sniffer implementation


@Service(name = "TextApplicationSniffer")
@Scoped(Singleton.class)
public interface TextApplicationSniffer implements Sniffer{
public boolean handles(ReadableArchive source, ClassLoader loader){} #1
public Class<? extends Annotation>[] getAnnotationTypes(){} #2
public String[] getURLPatterns(){}                                  #3
public String getModuleType(){}                                     #4
public Module[] setup(String containerHome, Logger logger) throws IOException(){}                                                      #9
public void tearDown(){}                                           #5
public String[] getContainersNames(){}                             #6
   public boolean isUserVisible(){}                                   #7
public Map<String,String> getDeploymentConfigurations(final          ReadableArchive source) throws IOException{}                           #8
}

At #1, an implementation of the Sniffer interface should return true or false based on whether or not it understands the archive. ReadableArchive implementations provide a virtual representation of the archive content and let us check for specific files or file sizes, check for the presence of another archive inside the current one, and so on.
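As a minimal sketch of how handles could be implemented inside the TextApplicationSniffer from Listing 4, assuming a hypothetical marker entry inside the archive, the method could probe the archive content like this:

public boolean handles(ReadableArchive source, ClassLoader loader) {
    try {
        // claim the archive only if it carries our (hypothetical) marker file
        return source.exists("META-INF/text-application.txt");
    } catch (Exception e) {
        return false;
    }
}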

If we intend to use a more complex analysis of the archive, for example by checking for the presence of a specific annotation, we should return the list of annotations in getAnnotationTypes at #2. GlassFish scans the archive for the presence of one of these annotations and continues with this Sniffer if a class is annotated with one of them.

If this is the Sniffer that understands the archive, then at #3 GlassFish asks for the URL patterns for which it should call the container's service method when a request matches one of them. At #4 we define the module name, which can be a simple string.

At #5 we just remove everything related to this container from the module system. At #6 we should return a list of all containers that this specific Sniffer can enable.

At #7 we determine whether or not the container is visible to users when they deploy an application into it, and at #8 we should return all interesting configuration of the archive which can be used during the deployment or at runtime of the application.

The most important method in the Sniffer interface is the setup method at #9, which will set up the container if it is not already installed. As you have already noticed, the method receives the path to the container home directory, so if the container is not already installed the Sniffer can download the required bundles from the repositories and configure the container with a default configuration. The method returns a list of all modules that are required by the container; using this list GlassFish prevents removal of any of these modules and checks for the container class to resume the operation.

The org.glassfish.api.container.Container interface is the GlassFish container contract which each container should implement; it is the entry point for GlassFish container extendibility. Listing 5 shows a dummy Container implementation. The Container class does not need to perform anything specific; all tasks are delegated to another class, the Deployer, which performs the actual deployment related work.

Listing 5 A dummy Container implementation


@Service(name = "gfbook.chaptercontainer.TextApplicationContainer")       #1
public class TextApplicationContainer implements Container {                 #2
  public Class<? extends Deployer> getDeployer() {              #3
      return gfbook.chaptercontainer.TextApplicationDeployer.class;#3
  }
  public String getName() {
      return " Text Container";                   #4
  }

At #1 we define a unique name for our container; we can use the fully qualified class name to ensure that it is unique. At #2 we implement the container contract to let the GlassFish kernel use our container in the way that it knows. At #3 we return our Deployer implementation class, which will take care of tasks like loading, unloading, preparing, and other application related tasks. At #4 we return a human readable name for our container.

You can see that the container has nothing special except that it points to an implementation of org.glassfish.api.deployment.Deployer, which the GlassFish kernel will use to delegate the deployment tasks. Listing 6 shows a code snippet from the text application container's Deployer implementation.

Listing 6 Deployer implementation of TextApplication container


@Service
public class TextApplicationDeployer implements Deployer<TextContainer, TextApplication> {  #1
    public boolean prepare(DeploymentContext context) {}                                    #2
    public <T> T loadMetaData(Class<T> type, DeploymentContext context) {}                  #3
    public MetaData getMetaData() {}                                                        #4
    public TextApplication load(TextContainer container, DeploymentContext context) {}      #5
    public void unload(TextApplication container, DeploymentContext context) {}             #6
    public void clean(DeploymentContext context) {}                                         #7
}

The methods shown in the code snippet are just the mandatory methods of the Deployer contract; our implementation may have some helper methods too. At #1 we define our class as an HK2 service which implements the Deployer interface.

At #2 we prepare the application for deployment; this can mean unzipping some archive files or pre-processing the application deployment descriptors to extract information required for the deployment task. Several methods of the Deployer interface accept an org.glassfish.api.deployment.DeploymentContext as one of their parameters; this interface provides all contextual information about the application currently being processed, for example the application content in terms of files and directories, access to the application class loader, all parameters which were passed to the deploy command, and so on.

At #3 we return the metadata associated with our application; this metadata can contain information like the application name and so on. At #4 we should return an instance of org.glassfish.api.deployment.MetaData which contains the special requirements this Deployer instance needs; these requirements can be loading a set of classes before loading the application using a separate class loader, or a set of metadata that our Deployer instance will provide after successfully loading the application. We will discuss the MetaData in more detail in the next step. At #5 we load the application and return an implementation of org.glassfish.api.deployment.ApplicationContainer; the ApplicationContainer manages the lifecycle of the application and its execution environment. At #6 we undeploy the application from the container, which means that our application should not be available in the container anymore. At #7 we clean up any temporary files created during the prepare method execution.

Now we get to the TextApplication class, which implements application management tasks like starting and stopping the application. A sample snippet for this class can be similar to listing 7.

 

Listing 7 Snippet code for TextApplication class


public class TextApplication extends TextApplicationAdapter
        implements ApplicationContainer {                       #1
    public boolean start(ApplicationContext startupContext) {}
    public boolean stop(ApplicationContext stopContext) {}
    public boolean resume() {}
    public ClassLoader getClassLoader() {}
    public boolean suspend() {}
    public Object getDescriptor() {}                            #2
}

All portions of the code should be self explanatory, except that at #2 we return the application descriptor to the GlassFish deployment layer, and at #1 we do a lot of work, including interfacing our container with Grizzly (the web layer of GlassFish).

You remember that in listing 6 we returned an instance of TextApplication when the container calls the load method of the Deployer object; this object is responsible for the application lifecycle, including what you can see in listing 7, and it is also responsible for interfacing our container with the Grizzly layer. Listing 8 shows a snippet of the TextApplicationAdapter abstract class implementation. Extending com.sun.grizzly.tcp.http11.GrizzlyAdapter helps with developing custom containers which need to interact with the HTTP layer of GlassFish. The GrizzlyAdapter has one abstract method, which we talked about when we were discussing the Sniffer interface implementation: void service(GrizzlyRequest request, GrizzlyResponse response), which is called whenever a request hits a URL matching one of the container's deployed applications.
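As a minimal sketch, and not the actual Listing 8 code, such an adapter could look like the following; the setStatus call is assumed from the Grizzly response API, and a real implementation would locate the deployed text application and write its content to the response.

import com.sun.grizzly.tcp.http11.GrizzlyAdapter;
import com.sun.grizzly.tcp.http11.GrizzlyRequest;
import com.sun.grizzly.tcp.http11.GrizzlyResponse;

// A sketch of the adapter that bridges the text container and the HTTP layer.
public abstract class TextApplicationAdapter extends GrizzlyAdapter {

    @Override
    public void service(GrizzlyRequest request, GrizzlyResponse response) {
        // invoked whenever a request matches one of the URL patterns that the
        // container's Sniffer returned from getURLPatterns()
        response.setStatus(200);
        // a real adapter would render the requested text application content here
    }
}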

 

 

5 GlassFish Update Center

 

The GlassFish Update Center is one of GlassFish's outstanding features, helping developers and system administrators spend less time on updating and distributing applications over multiple GlassFish instances. In this section we will dig more into how the Update Center works and how we can develop new packages for it.

The Update Center name may mislead you into thinking that it only works for updating the system, but in reality it has more functionality than its name reveals. Figure 5 shows the Update Center Swing client running on a Linux machine. On your left hand side you will see a navigation tree which lets you select what kind of information you want to view regarding the Update Center functionality. These tasks include:

       Available Updates: The Available Updates node includes information about available updates for installation. It shows the basic details about each available update and identifies if the update is new and whether its installation needs a restart to complete. You can select the desired component from the table and click install.

       Installed Software: The Installed Software node shows the software components currently installed. This is the place where you can remove any unneeded piece of software from your GlassFish installation and let it run more lightly. Upon trying to uninstall any component, GlassFish will check which components depend on it and let you decide whether you want to remove all of them or not. The Details section of the window shows more information about the installed software, including technical specifications, product support, documentation, and other useful resources.

       Available Software: The Available Software node shows all the software components available for installation through the Update Center. These are new components like new containers, new JDBC drivers, new web applications, and so on. The Details section of the window shows more information about the available software, including technical specifications, product support, documentation, and other useful resources.

Now that we know what the Update Center can do, let's see how we bootstrap it in our application server. GlassFish is a modular application server, so the Update Center itself is a module which may or may not be present; by default the Update Center modules are not bundled with the GlassFish installation, and GlassFish downloads and installs its bits upon first access, just like the GlassFish administration console.

The GlassFish installer will bootstrap the GlassFish Update Center during installation if an Internet connection is available. If your environment had no Internet connection during the installation, you can bootstrap the Update Center by navigating to GlassFish_Home/bin and running the updatetool script, which is either a shell script or a batch file. After you execute the command it will download the necessary bits and configure the Update Center. Running the command again will bring up the Update Center Tool GUI, similar to figure 5.

Figure 5 Update Center Tool, it can manage multiple application installation

In the navigation tree you can see that several installations can be managed using a single Update Center front end. These are the images installed in the system, along with a manually created image for the GlassFish in Action book.

5.1 Update center in the administration console

The GlassFish administration system is all about effectiveness, integration, and ease of use, so the same Update Center functionality is available in the GlassFish administration console to let administrators check and install updates without having shell level access to their machines. Administrators can navigate to the Update Center page in the GlassFish administration console and see which updates and which new features are available for their installed version.

Figure 6 shows the web based Update Center page with all the features that the desktop based version has. Integrating the Update Center functionality into the administration console is in line with the GlassFish promise to keep all administration tasks for a single machine or a cluster integrated into one single application.

 

 

 

Figure 6 Update center pages in GlassFish administration console with the same functionalities of desktop GUI.

I know what you want to ask: what about update notification, and how often should I check for updates? The GlassFish development team knows that administrators like to be notified of available updates, therefore there are two mechanisms for you to get notified. The first is the notification text which appears in the top bar of the administration console, as shown in figure 7.

Figure 7 Update center web based notification and integration of the notification icon in Gnome desktop

If you are asking for a more passive way of notification, or you are a CLI advocate administrator, then you can use the tray icon notification system, which notifies you about available updates by showing a balloon in your system tray. It makes no difference whether you are using Solaris, Linux, or Windows; you can activate the notification icon simply by issuing the following command in the GlassFish_Home/updatetool/bin/ directory.

./updatetool --register

And you are done: the registration of the Update Center notification icon is finished, and after your next startup you can see the notification icon which notifies you about the availability of new updates and features. Figure 7 shows the two notification mechanisms discussed.

The Update Center helps us keep our application server up to date by notifying us about the availability of new packages and features. These features and packages, as we already know, should either be OSGI bundles or some other type of package which can be used for transferring any kind of file. The Update Center uses a general packaging system to deliver the requested packages, which may contain one or more OSGI bundles, operating system binary files, graphics, text, and any other type of content.

This general purpose and very powerful packaging system is called pkg(5) Image Packaging System or in brief IPS.

6 GlassFish packages distribution

The Update Center, as we can see in the GlassFish use case, helps us keep our application server up to date and lets us install new components or remove currently installed components to keep the house clean of unneeded components. But the so-called Update Center is much more important and bigger than just updating the GlassFish installation. The Update Center is part of a larger, general purpose binary package distribution system for distributing modular software systems on different operating systems without interfering with the operating system's installation sub-system or requiring so-called root permission.

GlassFish version 2 had an update center which you could use to manage the GlassFish installation; however, that update tool used a module model similar to NetBeans NBM files to transfer the updates from the server to the local installation. In GlassFish version 3 the update center system and the GlassFish installation system have completely changed. You may remember from the discussion of the GlassFish installer and the OpenInstaller framework in 2.2 that the GlassFish installer uses a specific type of binary distribution system designed by OpenSolaris, named the pkg(5) Image Packaging System (IPS). Now we are going to discuss IPS in more detail along with its relation to the GlassFish Update Center. IPS is developed in Python to ensure that it is platform independent and will execute on every platform with minimal dependencies.

6.1 PKG(5) Image Packaging System, simply IPS

IPS is a software delivery system based heavily on network repositories; it distributes the software system either completely from the network repositories or through a mixture of network repositories and initially downloaded packages. Generally speaking, IPS helps with distributing and updating software systems at different levels, independent of the operating system and platform the software is going to be installed on.

IPS consists of several components; some of them are mandatory for maintaining a system based on IPS, and some of them are just provided to make the overall procedure of creating, maintaining, and using an IPS based software distribution easier.

       End user utilities: a set of command line utilities that let end users interact with the server side software which serves as the repository front end, allowing clients to fetch and install updates and new features for their installed software.

       GUI based client side software: like the Java based Update Center which is used by GlassFish.

       API for interacting with IPS: Java APIs are provided so that Java developers can perform IPS related tasks from their Java applications; this provides better integration with Java applications and more flexibility for ISVs and OEMs to develop their own client side applications.

       Installation images: An Installation image is a set of packages for an application that provides some or all of its functionalities.

       Server side utilities: An HTTP server which sits in front of the physical repository to interact with the clients.

       Development utilities: A set of utilities that helps developers create IPS packages. Build system integration is a subset of the development utilities; for now IPS integration with Maven and ANT is available, which integrates and automates creating packages and publishing them to repositories from the build system.

In IPS, a software installation is called an image, which we can think of as a customized mirror of the installation repositories in terms of installed packages. Different types of images are defined in IPS, including:

       Operating system level images: This type of image is only used for distributing operating systems and operating system upgrades. Images at this level are called full images and partial images.

       Custom application level images: This is the image level which software developers and distributors can use to distribute their applications. These images are called user images and do not require the host operating system to be based on IPS.

Let's see how the GlassFish installer is related to IPS. The GlassFish installer simply asks the user for the path where the GlassFish IPS image should be expanded, and performs the initial configuration on the expanded image. This initial configuration includes creating a default domain along with setting up the key repositories for the domain password and master password. The expanded image will interact with some preconfigured network repositories to fetch information about updates and new features which are available for the expanded GlassFish image.

Package repositories, which an installation image interacts with, contain packages which update, complete, or add new features to an already installed image. These packages can differ from one operating system to another, and because of these differences different repositories are required for different operating systems.

An installation image can be as big as an operating system image or as small as the bootstrapping files necessary to bootstrap IPS and let it perform the rest of the installation task. So a user image can be categorized under one of the following types:

       An image with some basic required functionality, for example the GlassFish web container. This basic functionality can be operating system independent for the sake of simplicity in distributing the main image. The image can contain the complete IPS system, including its utilities and client applications like the GlassFish Update Center, or it may just contain a script to bootstrap the IPS system installation.

       An image that contains only the very basic and minimal packages to bootstrap the IPS system; the user runs the IPS bootstrapping script to install more IPS related packages, like the graphical Update Center client, and later on installs all required features by using these IPS utilities.

6.2 Creating packages using IPS

Now that we have an understanding of what IPS is and what its relation is with the GlassFish installer and GlassFish itself, we can get our hands on the IPS utilities and see how we can use the command line utilities to create new packages. Figure 8 shows the GlassFish directory structure after bootstrapping the Update Center. Keep in mind that bootstrapping the GlassFish Update Center results in the installation of the pkg(5) IPS utilities.

You may ask what the relation between GlassFish modularity, extendibility, OSGI, HK2, and IPS is: an IPS image is a zip file containing the GlassFish directory layout, including files like its OSGI bundles, JAR files, documentation, and so on. Later on, each IPS package, which can be an update to currently installed features or a brand new feature, is a zip file which can contain one or more OSGI bundles along with their related documentation and the like.

Two directories in the GlassFish directory structure are related to IPS and the GlassFish Update Center. The pkg directory includes all the necessary files for the IPS system, which includes man pages, libraries which Java developers can use to develop their own applications on top of the IPS system or their own applications for bootstrapping IPS; it also includes a minimal Python distribution to let users execute the IPS scripts. The vendor-packages directory contains all packages downloaded and installed for this image, which is the Update Center image.

From now on, we refer to the pkg folder with the name ips_home, so you should replace it with the real path to the pkg directory, or you can set an environment variable with this name pointing to that directory.

 

 

Figure 8 Update center and pkg(5) IPS directory layout

The bin directory that resides inside the pkg directory includes IPS scripts for both developers and end users; these scripts are listed in table 1 with their descriptions.

Table 1 IPS scripts with associated description

pkg: Used to create images and to install and update the packages within them.

pkg.depotd: The repository (depot) server for the Image Packaging System; pkg and other retrieval clients send package and catalog retrieval requests to this server.

pkgsend: Lets us publish new packages and new package versions to an image packaging depot server.

pkgrecv: Lets us download the contents of a package from a server in a format suitable for the pkgsend command.

You can see some Python scripts in the bin directory. These scripts perform the real work behind each of the operating system friendly wrapper scripts described above.

The updatetool folder is where the Update Center GUI application is located, along with its documentation and related scripts. Its bin directory contains two scripts: one for running the Update Center and one for registering the desktop notifier, which keeps you posted about new updates and available features by showing a balloon with the related message in your system tray. The vendor-packages folder contains all packages downloaded and installed for this image, which is the Update Center image.

Now let’s see how we can set up a repository, create a package, and publish the package to the repository. I am just going to create a very basic package to show the procedure; the package may not be useful for GlassFish or any other application.

First of all we need a directory to act as the depot of our packages, a place where we are sure our user has read and write permission along with enough space for our packages. So create a directory named repository; the directory can be inside the ips_home directory. Now start the repository server by issuing the following command inside the ips_home/bin directory.

./pkg.depotd -d ../repository -p 10005

To check that your repository is running, open http://127.0.0.1:10005 in your browser; you should see something similar to figure 9, which includes some information about your repository status.

Figure 9 The Repository status which we can see by pointing the browser to the repository URL.

 

Now we have our repository server running and waiting for us to push our packages into it and later on download those packages by our client side utilities like pkg or Update Center.

Our packages should be placed in the package repository, and the command which does this for us is pkgsend. You have probably already thought about the need for a way to describe the content of a package, its version, description, and so on. The pkgsend command lets us open a transaction, add all the package attributes, its included files and directory layout, and finally close the transaction, which causes pkgsend to send our package to the repository. Because pkgsend works transactionally, we can publish a set of packages so that either all of them appear in the repository or none of them do; this model guarantees that we never have incompatible packages in the repository while we push updates. To create a sample package we need some content, so create a directory inside the pkg directory and name it sample_package, and put two text files named readme.txt and license.txt inside it. Inside the sample_package directory create a directory named figures with an image file named figure1.jpg inside it. These are dummy files and their content can be anything you like; you may add some more files and create a directory structure to test more complex packages. Listing 8 shows the series of commands we can use to create the package whose content we just prepared and push it to the repository we created in the previous step. I assume you are using Linux, so the commands are Linux flavored and should be entered line by line in a terminal window, which can be gnome-terminal or any other terminal of your choice. If you want to see how a transaction works you can point your browser to http://127.0.0.1:10005/ while executing the listing 8 commands.

Listing 8 Create and push the sample package to our repository


export PKG_REPO=http://127.0.0.1:10005/                           #1
eval `./pkgsend open GFiASample@1.0`                              #2
./pkgsend add set name="pkg.name" value="GFiASamplePackage"      #3
./pkgsend add set name="pkg.description" value="sample description" #4
./pkgsend add set name="pkg.detailed_url" value="sample.com/kalali" #5  
./pkgsend add file ../sample_package/readme.txt path=GFiA/README.txt  #6
./pkgsend add dir path=GFiA/figures/                      #7
./pkgsend add file ../sample_package/figures/figure1.jpg path=GFiA/figures/figure1.jpg #8
./pkgsend add license ../sample_package/license.txt license=GPL         #9
./pkgsend close               #10
 

At #1 we export an environment variable which the IPS scripts will use as the repository URL; otherwise we would have to pass the URL with -s http://127.0.0.1:10005/ on each command execution. At #2 we open a transaction named GFiASample@1.0 for uploading a package. At #3 we add the package name. At #4 we add the package description, which will appear in the description column of the Update Center GUI application or in the pkg command's information output. At #5 we add a URL which contains complementary information; the Update Center fetches the information from the provided URL and shows it in the description pane. At #6 we add a text file along with its extraction path. At #7 we add a directory which the package installation will create in the destination image. At #8 we add the image file to this newly created directory. At #9 we add the package license type along with the license file; later on the license type can be used to query the available packages based on their licenses. And finally at #10 we close the transaction, which results in the package appearing in the package repository.

Now we have one package in our repository; you can find the number of packages in the repository by opening the repository URL, http://127.0.0.1:10005/, in your browser.

You may already have recognized a pattern in using the pkgsend command and its parameters, and if so you are correct: pkgsend invocations follow the pattern pkgsend subcommand [subcommand parameters]. A list of the important pkgsend subcommands is shown in table 2.

Table 2 List of pkgsend subcommands which can be used to send a package to repository

 

open: Begins a transaction on the package specified by the package name. Syntax: pkgsend open pkg_name

add: Adds a resource associated with an action to the current transaction. Syntax: pkgsend add action [action attributes]

close: Closes the current transaction. Syntax: pkgsend close

 

You can see the complete list of subcommands in the pkgsend man pages or on the pkg(5) project website at http://www.opensolaris.org/os/project/pkg/.

The add subcommand is the most useful of the pkgsend subcommands; it takes the action we want to add to the transaction along with that action's attributes. You have already seen how we can use the set, file, dir, and license actions. As you can see, each action may accept one or more named attributes. Other important actions are listed in table 3, and a short depend example follows the table.

Table 3 list of other important add actions

link: Represents a symbolic link. The path attribute defines the file system path where the symlink is installed.

hardlink: Represents a physical (hard) link. The path attribute defines the file system path where the link is installed.

driver: Represents a device driver. It does not reference a payload; the driver files must be installed as file actions. The name attribute holds the name of the driver, which is usually, but not always, the file name of the driver binary.

depend: Represents a dependency between packages. A package might depend on another package in order to work or to install. Dependencies are optional.

group: Defines a UNIX group. No support is present for group passwords. Groups defined with this action initially have no user list; users can be added with the user action.

user: Defines a UNIX user as defined in the /etc/passwd, /etc/shadow, /etc/group, and /etc/ftpd/ftpusers files. Users defined with this action have entries added to the appropriate files.
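To give one concrete illustration of the depend action, the following sketch adds a require-type dependency to an open pkgsend transaction; the package name GFiABasePackage is hypothetical and only serves as an example:

./pkgsend add depend type=require fmri=pkg:/GFiABasePackage@1.0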

 

Now that we are finished with pkgsend, let’s see what alternatives we have for using our package repository. We can either install the package into an already existing image, like our GlassFish installation, or we can create a new image on our client machine and install the package into that particular image.

To install the package into the current GlassFish installation, open the Update Center, select the GlassFish node in the navigation tree, and then select Image Properties from the File menu or press CTRL+I to open the image properties window. Add a new repository; enter GFiA.Repository as the name and http://127.0.0.1:10005/ as the repository URL. Now you should be able to refresh the Available Add-ons list and select GFiA Sample Package for installation.

The installation process is fairly simple, as it just executes the given actions one by one, which results in the creation of a GFiA directory inside the GlassFish installation directory with two text files inside it. The package installation also creates the figures directory inside the GFiA directory and places the image file inside it.

You get the idea: you can copy your package files anywhere in the host image (the GlassFish installation in this case), so when you want to distribute your OSGI bundle you only need to put the bundle inside the already existing directory named modules in the GlassFish installation directory.
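For instance, assuming a hypothetical bundle named my-glassfish-module.jar, the corresponding file action inside a pkgsend transaction could look like this sketch; the path is relative to the image root, which is the GlassFish installation directory in our case:

./pkgsend add file ../my_module/my-glassfish-module.jar path=modules/my-glassfish-module.jar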

The other way of using our package repository is creating a new image on our client system and then installing the package into this new image. Although we can create the image and install our package into it using the Update Tool, it is more instructive to use the command line utilities to accomplish the task.

  1. Create the local image:

./pkg image-create -U -a GFiA.Repository=http://localhost:10005/ /home/user/GFiA

The command creates a user image, which is determined by the -U parameter; its default package repository is our local repository, determined by the -a parameter; and finally the path /home/user/GFiA is the location where the image will be created.

  2. Set a title and description in the image:

./pkg set-property title "GlassFish in Action Image"

./pkg set-property description "GlassFish in Action Book image which is created in Chapter "

  3. Install the GFiASamplePackage into the newly created image:

./pkg  /home/user/GFiA install GFiASamplePackage

As you can see, we can install the package into any installation image by providing the installation image path.
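To double check the result you can, for example, list the packages known to the new image; the following sketch assumes the same image-path convention used in the commands above:

./pkg /home/user/GFiA list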

You, as a developer or project manager, can use IPS to distribute your own application from the ground up and keep yourself free from the update hurdle. The IPS toolkit is available at http://wikis.sun.com/display/IpsBestPractices/Downloads.

 

7 Summary

GlassFish’s modular architecture opens the way for easy update and maintenance of the application server, along with the possibility of using its modules outside the application server in ISV products or developing new modules that extend the application server’s functionality.

Using OSGI lets the application server be deployed inside a bigger OSGI based software system and lets system administrators and maintainers deal with one single installation with one single underlying module management system.

GlassFish container development provides the opportunity to develop new types of servers which can host new types of applications without going deep into network server development. It is also attractive because the container can interface with other containers in the application server to use managed resources like JDBC connection pools or EJBs.

The GlassFish Update Center can be seen as one of the most innovative features of the application server, as it takes care of many headaches which administrators usually face when updating and patching the application server. The Update Center automatically checks for available updates for our installed version of GlassFish and, in the blink of an eye, installs them for us without making us go through a compatibility check between the available update and our installation.

The Update Center notification mechanism can keep us posted about new updates, either while we are doing daily administration tasks in the administration console or by showing the familiar balloon in our desktop notification area.

The application server is platform independent and so it needs a platform agnostic distribution mechanism; the pkg(5) IPS is a proven binary distribution system which GlassFish uses to distribute its binaries.

 

OpenMQ, the Open source Message Queuing, for beginners and professionals (OpenMQ from A to Z)

Talking about messaging implies two basic capabilities around which all other features are built: support for topics and queues, which let a message be consumed either by as many interested consumers as have subscribed or by just one of the interested consumers.

Messaging middleware was present in the industry before Java came along, and each product had its own interfaces for providing messaging services. There were no standard interfaces which vendors tried to comply with in order to increase compatibility, interoperability, ease of use, and portability. The contribution of the Java community was defining the Java Message Service (JMS), a standard set of interfaces for interacting with this type of middleware from Java based applications.

Asynchronous communication of data and events is one of the ever-present communication models in enterprise applications, used to address requirements like long running process delegation, batch processing, loose coupling of different systems, updating a dynamically growing or shrinking set of clients, and so on. Messaging can be considered one of the important building blocks required by every advanced middleware stack, and it provides the essentials needed to define a SOA, as it supplies the basis for loose coupling.

Open MQ is an open source project mostly sponsored by Sun Microsystems that provides the Java community with a high performance, cross platform message queuing system under commercial-usage-friendly licenses, including CDDL and GPL. Open MQ, which is hosted at mq.dev.java.net, shares the same code base that Sun Java System Message Queue uses. This shared code base means that any feature available in the commercial product is also available in the open source project; the only difference is that for the commercial one you pay a license fee and in return get professional support from Sun engineers, while with the open source alternative you rely on community provided support.

Open MQ provides a broad range of functionality in security, high availability, performance management and tuning, monitoring, multiple language support, and so on, which makes it easier to use its base messaging features within an overall software architecture.

1 Introducing Open MQ

 

Open MQ is the umbrella project for multiple components which form Open MQ and Sun Java System Message Queue, the productized sibling of Open MQ. These components include:

§     Message broker or messaging server

The message broker is the heart of the messaging system; it maintains the message queues and topics, manages client connections and their requests, controls client access to queues and topics for reading or writing, and so on.

§     Client libraries

Client libraries provide APIs which let developers interact with the message broker from different programming languages. Open MQ provides Java and C libraries in addition to a platform and programming language agnostic interaction mechanism.

§     Administrative tools

Open MQ provides a set of very intuitive and easy to use administration tools, including tools for broker life cycle management, broker monitoring, destination life cycle management, and so on.

1.1 Installing Open MQ

From version 4.3, Open MQ provides an easy to use installer which is platform dependent, as it is bundled with a compiled copy of the client libraries and administration tools. Installer bundles are available at https://mq.dev.java.net/downloads.html; download the binary copy compatible with your operating system. Make sure that you have JDK 1.5 or above installed before you try to install Open MQ. Open MQ works on virtually any operating system capable of running JDK 1.5 or above.

Beauty of open source

You might be using an operating system for development or deployment for which no binary installer is provided; in this case you can simply download the source code archive from the same page mentioned above and build the source code yourself. The build instructions are available at http://download.java.net/mq/open-mq/4.3/b05/Compiling and Running OpenMQ 4.3 in NetBeans.txt and are straightforward. Even if you cannot manage to build the binary tools, you can still use Open MQ and JMS on that operating system and run your management tools from a supported operating system.

 

To install Open MQ, extract the downloaded archive and run the installer script or executable file, depending on your operating system. As you will see, the installer is straightforward and does not require any special user interaction except accepting some defaults.

After we complete the installation process we will have a directory structure similar to figure 1 in the installation path we selected during the installation.

Figure 1 Open MQ installation directory structure

 

As you can tell from the figure, we have some metadata directories related to the installer application and a directory named mq that contains the Open MQ related files. Table 1 lists the Open MQ directories along with their descriptions.

Table 1 Important items inside Open MQ installation directory

bin: All administration utilities reside here.

include: Required headers for using the Open MQ C APIs.

lib: All JAR files and some native dependencies (NSS) reside here.

lib/*.war: HTTP and HTTPS tunneling servlets for firewall restricted configurations, and the Universal Message Service for language agnostic use of Open MQ.

var/instances: Configuration files for the different broker instances created on this Open MQ installation are stored here by default.

 

2 Open MQ administration

Every server side application requires some level of administration and management to keep it up and running, and Open MQ is no exception. As we discussed in the article introduction, each messaging system must have some basic capabilities, including point to point and publish/subscribe mechanisms for distributing messages, so basic administration revolves around these same concepts.

2.1 Queue and Topic administration and management

When a broker receives a message it should put the message in a queue or a topic, or discard it. Queues and topics are called physical destinations, as they are the last place in the broker space where a message is placed.

In Open MQ we can use two administration utilities for creating queues and topics, imqadmin and imqcmd; the first is a simple Swing application and the latter is a command line utility. In order to create a queue we can execute the following command in the bin directory of the Open MQ installation.

 

imqcmd create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ=true"

 

 

This command simply creates a destination of type queue; this queue will not store more than 1000 messages, and if producers try to put more messages into it, the oldest present message is removed to open up space for new messages. Removed messages are placed in a specific queue named the Dead Message Queue (DMQ). The queue we create with this command is named smsQueue, and it is created on the local Open MQ instance, which is running on localhost port 7676. We can use -b to ask imqcmd to communicate with a remote server. The following command creates a topic named smsTopic on a server running on 192.168.100.1:7676; remember that the default username and password for the default broker are admin.

 

imqcmd -b 192.168.100.1:7676 create dst -n smsTopic -t t

 

imqcmd is a very complete and feature rich command; we can also use it to monitor physical destinations. A sample command to monitor a destination looks like this:

 

imqcmd metrics dst -m rts -n smsQueue -t q

Sample output for this command is similar to figure 2, which shows the consuming and producing message rates of smsQueue. The notable part of the command is the -m argument, which determines which type of metrics we want to see; the different types of observable metrics for physical destinations include the following (an example invocation follows the list):

§              con, which shows destination consumer information

§              dsk, which shows disk usage by this destination

§              rts, which, as already mentioned, shows message rates

§              ttl, which shows message totals; this is the default metric type if none is specified
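For instance, using the same syntax as the rts example above, the consumer related metrics of smsQueue can be requested like this:

imqcmd metrics dst -m con -n smsQueue -t q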

 

Figure 2 sample output for OpenMQ monitoring command

 

There are some other operations which we may perform on destinations:

§              Listing destinations: imqcmd list dst

§              Emptying a destination: imqcmd purge dst -n ase -t q

§              Pausing, resuming a destination: imqcmd pause dst -n ase -t q

§              Updating a destination: imqcmd update dst -t q -n smsQueue -o "maxNumProducers=5"

§              Query destination information: imqcmd query dst -t q -n smsQueue

§              Destroy a destination: imqcmd destroy dst -n smsTopic -t t

2.2 Broker administration and management

We discussed that when we install Open MQ it creates a default broker, which starts when we run imqbrokerd without any additional parameters. An Open MQ installation can have multiple brokers running on different ports with different configurations. We can create a new broker by executing the following command, which creates the broker and starts it so it can accept further commands and incoming messages.

imqbrokerd -port 7677 -name broker02

 

In table 1 we mentioned the var/instances directory, inside which all configuration files related to brokers reside. If you look at this folder after executing the above command you will see two directories named imqbroker and broker02, which correspond to the default and the newly created broker. Table 2 shows the important artifacts residing inside the broker02 directory.

 

Table 2 Brokers important files and directories

etc/accesscontrol.properties: Configuration related to access control, roles, and the authentication source and mechanism is stored here.

etc/passfile: Users and passwords for accessing Open MQ are stored here.

fs370 (directory): The broker's file based message store, physical destinations, and so on.

log (directory): Open MQ log files.

props/config.properties: All configuration related to this broker, such as services, security, and clustering, is stored in this file.

lock (file): Exists while the broker is running and records the broker's name, host, and port.

 

Open MQ has a very intuitive configuration mechanism. First of all there are some default configuration values which apply to the whole Open MQ installation; there is an installation configuration file which we can use to override those defaults; then each broker has its own configuration file which can override the installation configuration parameters; and finally, we can override broker (instance) level configuration by passing values to imqbrokerd when starting a broker. When we create a broker we can change its configuration by editing the config.properties file mentioned in table 2. Table 3 lists the other mentioned configuration files and their default locations on disk.

Table 3 Open MQ configuration files path and descriptions

default.properties: Located in the lib/props/broker* directory; holds the default configuration parameters.

install.properties: Located in the lib/props/broker* directory; holds the installation configuration parameters.

config.properties: Located inside the props** folder of each broker instance directory.

* The lib directory is listed in table 1.

** The props directory of a broker instance is listed in table 2.

Now that we understand how to create a broker, let’s see how we can administer and manage one. In order to manage a broker we use the imqcmd utility, which was also discussed in the previous section for destination administration.

There are several levels of monitoring data associated with a broker, including incoming and outgoing messages and packets (-m ttl), message and packet incoming and outgoing rates (-m rts), and operational statistics like connections, threads, and heap size (-m cxn). The following command returns message and packet rates for a broker running on localhost and listening on port 7677; the statistics are updated every 10 seconds.

imqcmd metrics bkr  -b 127.0.0.1:7677 -m rts -int 10 -u admin

As you can see, we passed the username with the -u parameter, so the server will not ask for it again. We cannot pass the associated password directly on the command line; instead we can use a plain text file which includes the associated password. The password file should contain:

imq.imqcmd.password=admin
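With such a file in place, the same metrics command can be run without an interactive password prompt; this sketch assumes the file is saved as passfile.txt in the current directory:

imqcmd metrics bkr -b 127.0.0.1:7677 -m rts -int 10 -u admin -passfile passfile.txt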

The default password for the Open MQ administrator user is admin, and as you can see the password file is a simple properties file. Table 4 lists some other common broker related tasks along with a description and a sample command.

Table 4 Broker administration and management

pause and resume: Makes the broker stop or resume accepting connections on all its services.

quiesce and unquiesce: Quiescing a broker causes it to stop accepting new connections while it continues to serve the already established ones; unquiescing returns it to normal mode. Example: imqcmd quiesce bkr -b 127.0.0.1:7677 -u admin -passfile passfile.txt

update a broker: Updates broker attributes, for example: imqcmd update bkr -o "imq.log.level=ERROR"

shutdown broker: imqcmd shutdown bkr -b 127.0.0.1:7677 -time 90 -u admin. The sample command shuts the broker down after 90 seconds; during that time it only serves already established connections.

query broker: Shows broker information: ./imqcmd query bkr -b 127.0.0.1:7677 -u admin

 

 

3 Open MQ security

After getting a sense of the Open MQ basics we can look at its security model, so that we are able to set up a secure system based on Open MQ messaging. We will discuss connection authentication, which ensures that a user is allowed to create a connection either as a producer or a consumer, and access control (authorization), which checks whether the connected user can, for example, send a message to a specific queue. Securing the transport layer is a responsibility of messaging server administrators; Open MQ supports SSL and TLS for preventing message tampering and providing the required level of encryption, but we will not discuss it, as it is out of this article's scope.

Open MQ provides support for two types of user repository out of the box, along with facilities for using JAAS modules for authentication. The user information repositories which can be used out of the box are a flat file user repository and a directory server accessed over LDAP.

3.1 Authentication

Authentication happens before a new connection is established. To make Open MQ perform authentication we should provide an authentication source and edit the configuration files to enable authentication and point it at our prepared user information repository.

Flat file authentication

Flat file authentication relies on a flat file which contains the username, encoded password, and role name; each line of this file contains these properties for one user. A sample line looks similar to:

admin:-2d5455c8583c24eec82c7a1e273ea02e:admin:1

The default location of each broker’s password file is the etc directory inside the broker’s home directory; later, when we use the imqusermgr utility, we either edit this file or the file that we configured using the related properties mentioned in table 5.

To configure a broker instance to perform authentication before establishing any connection, we can either use imqcmd to update the broker's properties or directly edit its configuration file. In order to configure broker02, which we created in section 2.2, to authenticate users before establishing a connection, we need to add the properties listed in table 5 to the config.properties file located inside the props directory of the broker's home directory; the broker directory structure and configuration files were listed in tables 2 and 3.

We can ignore the last two properties if we want to use the default file in its default path. So shut down the broker, add the listed properties to its configuration file (a consolidated snippet follows the table), and start it again. Now we can simply use imqusermgr to perform CRUD operations on the user repository associated with our target broker.

 

Table 5 required changes to enable connection authentication for broker02

What kind of user information repository we want to use: imq.authentication.basic.user_repository=file

What password encoding is used: imq.authentication.type=digest

Path to the directory containing the password file: imq.passfile.dirpath=path/to/the/folder

Name of the password file: imq.passfile.name=password/file/name
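Put together, the relevant fragment of broker02's config.properties would look like the following sketch; the directory and file name are placeholders, and the last two lines can be dropped to use the default password file:

imq.authentication.basic.user_repository=file
imq.authentication.type=digest
imq.passfile.dirpath=/path/to/passfile/folder
imq.passfile.name=passfile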

 

Now that we have configured the broker to perform authentication, we need a mechanism to add, remove, and update the content of the user information repository. Table 6 shows how we can use the imqusermgr utility to perform CRUD operations on broker02.

Table 6 Flat file user information repository management using imqusermgr

Create user: Adds a user to the user group (other groups are admin and anonymous): imqusermgr add -u user01 -p password01 -i broker02 -g user

List users: imqusermgr list -i broker02

Update a user: Changes a user's status to inactive: imqusermgr update -u user01 -a false -i broker02

Remove a user: Removes user01 from broker02: imqusermgr delete -u user01 -i broker02

Now create another user as follows:

imqusermgr add -u user02 -p password02 -i broker02 -g user

We will use these two users when defining authorization rules and later, in section 6, to see how we can connect to Open MQ from Java code when authentication is enabled.

 

All the other utilities that we have discussed need Open MQ to be running, and all of them are able to perform the required action on remote Open MQ instances, but imqusermgr only works on local instances and does not require the instance to be running.

3.2 Authorization

After enabling authentication, which results in the broker checking user credentials before establishing a connection, we may also need to check the connected user's permissions before we let it perform a specific operation, like connecting to a service, putting messages into a queue (acting as a producer), subscribing to a topic (acting as a consumer), browsing a physical destination, or automatically creating physical destinations. All these permissions can be defined using a very simple syntax, shown below:

 

resourceType.resourceVariant.operation.access.principalType=principals

 

In this syntax we have 6 variable elements which are described in table 7.

Table 7 different elements in each Open MQ authorization rule

resourceType: Type of resource to which the rule applies: connection (connections), queue (queue destinations), or topic (topic destinations).

resourceVariant: Specific resource (connection service type or destination) to which the rule applies. An asterisk (*) may be used as a wild-card character to denote all resources of a given type; for example, a rule beginning with queue.* applies to all queue destinations.

operation: Operation to which the rule applies. This syntax element is not used for resourceType=connection.

access: Level of access authorized: allow (authorize the user to perform the operation) or deny (prohibit the user from performing the operation).

principalType: Type of principal to which the rule applies: user (individual user) or group (user group).

principals: List of principals (users or groups) to whom the rule applies, separated by commas. An asterisk (*) may be used as a wild-card character to denote all users or all groups; for example, a rule ending with user=* applies to all users.

 

Enabling authorization

To enable authorization we have to do a bit more than what we did for authentication, because in addition to enabling authorization we need to define role privileges. Open MQ uses a text based file to describe the access control rules. The default path for this file is the etc directory of the broker home, and the default name is accesscontrol.properties. If we do not provide a path for this file when configuring Open MQ for authorization, Open MQ will pick up the default file.

In order to enable authorization we need to add the following properties to one of the configuration files, depending on how widely we want the authorization to be applied. The second property is not necessary if we want to use the default file path.

 

§              To enable authorization: imq.accesscontrol.enabled=true

§              To determine the access control description file path: imq.accesscontrol.file.filename=path/to/access/control/file

Defining access control rules inside the access control file is an easy task; we just need to use the rule syntax and a limited set of variable values to define simple or complex access control rules. Listing 1 shows the rules which we need to add to our access control file, which can either be the default file or another file at the specific path we configured above. If you are using the default access control file, make sure that you comment out all already defined rules by adding a # sign at the beginning of each line.

Listing 1 Open MQ access control rules definition


#connection authorization rules
connection.ADMIN.allow.group=admin                            #1
connection.ADMIN.allow.user=user01                            #1
connection.NORMAL.allow.group=*                                          #2
 
#queue authorization rules
queue.smsQueue.produce.allow.user=user01,user02                                          #3
queue.smsQueue.consume.allow.user=user01,user02                                          #4
queue.*.browse.allow.user=*                                                                      #5
 
#topic authorization rules
topic.smsTopic.*.allow.group=*                                                                      #6
topic.smsTopic.produce.deny.user=user01                                                        #7
topic.*.produce.deny.group=anonymous                                                        #8             
topic.*.consume.allow.user=*                                                                      #9
 
#Auto creating destinations authorization rules
queue.create.allow.user=*                                                                      #10
topic.create.allow.group=user                                                                      #11
topic.create.deny.user=user01                                                                      #12

 

By adding the content of listing 1 to the access control description file, we apply the following rules to Open MQ authorization. First of all we let the admin group connect to the system as administrators (#1) and let user01, as the only normal user with this privilege, connect to the system as an administrator (#1). We let any user from any group connect as a normal user to the broker (#2). At #3 we let the two users we created earlier act as producers of the smsQueue destination, and at #4 we let the same two users act as its consumers; because of the rules defined at #3 and #4, no other user can act as a producer or consumer of smsQueue. At #5 we let any user browse any queue in the system. At #6 we allow any operation on smsTopic by any group of users, and at #7 we define a rule that denies user01 the role of smsTopic producer; as you can see, generalized rules can be overridden by more specific rules, as we did when denying user01 the producer role. At #8 we deny the anonymous group the right to produce to any topic. At #9 we let any user act as a consumer of any topic in the system. At #10 we allow any user to use the automatic destination creation facility for queues. At #11 we let just one specific group automatically create topics, while at #12 one single user is denied the privilege of automatically creating a topic.

WARNING

If we use an empty access control definition file, no user can connect to the system, so no operation will be possible by any user. Open MQ's default permission for any operation is denial, so if we need to change the default permission to allow, we must add allow rules to the access control definition file.

Now we can simply connect to Open MQ as user01 to perform administrative tasks. For example, executing the following commands will create our sample queue and topic in the broker that we created in section 2.2. When imqcmd asks for a username and password, use user01 and password01.

 

imqcmd -b 127.0.0.1:7677 create dst -n smsQueue -t q -o "maxNumMsgs=1000" -o "limitBehavior=REMOVE_OLDEST" -o "useDMQ=true"

 

imqcmd -b 127.0.0.1:7677 create dst -n smsTopic -t t

 

As you can see, we can define quite complex authorization rules using a simple text file and its rule definition syntax.

So far we have used a simple text file for user management and authentication purposes, but in addition to a text file, Open MQ can use a more robust and enterprise friendly user repository such as OpenDS. When we use a directory service as the user information repository we no longer manage it using the Open MQ command line utilities; instead the enterprise wide administrators perform user management from one central location, and we, as Open MQ administrators or developers, just define our authorization rules.

In order to configure Open MQ to use a directory server (like OpenDS) as the user repository, we need to add the properties listed in table 8 to our broker or installation configuration file; a consolidated snippet follows the table.

 

Table 8 Configuring Open MQ to use OpenDS as the user information repository

Determine the user information repository type: imq.authentication.basic.user_repository=ldap

Determine the password exchange encoding (base-64, as used by the directory server): imq.authentication.type=basic

*Directory server host and port: imq.user_repository.ldap.server=127.0.0.1:7677

A DN for binding and searching: imq.user_repository.ldap.principal=cn=gf cn=admin

Password for the provided DN: imq.user_repository.ldap.password=admin

User attribute name to compare the username against: imq.user_repository.ldap.uidattr=uid

**User's group name attribute: imq.user_repository.ldap.gidattr=groupid

* Multiple directory server addresses can be specified for high availability reasons, for example ldap://127.0.0.1:7677 ldap://192.168.1.1:7677; each address should be separated from the others by a space.

** The attribute holding the user's group name(s) is vendor dependent and varies from one directory service to another.
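Combined, and reusing the example values from table 8 (which are placeholders rather than a verified OpenDS setup), the LDAP related fragment of the configuration file would read:

imq.authentication.basic.user_repository=ldap
imq.authentication.type=basic
imq.user_repository.ldap.server=127.0.0.1:7677
imq.user_repository.ldap.principal=cn=gf cn=admin
imq.user_repository.ldap.password=admin
imq.user_repository.ldap.uidattr=uid
imq.user_repository.ldap.gidattr=groupid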

There are several other properties which can be used to tweak the LDAP authentication in terms of security and performance, but the essential properties are listed in table 8. The other properties are documented in the Sun Java System Message Queue Administration Guide, available at http://docs.sun.com/app/docs/coll/1307.5.

The key to using a directory service is knowing the schema

The key to successfully using a directory service in any system is knowing the directory service itself and the schema that we are dealing with. Different schemas use different attribute names for each property of a directory object, so before diving into a particular directory service implementation, make sure that you know both the directory server and its schema.

4 Clustering and high availability

We know that redundancy is one of the enablers of high availability. We have discussed data and service availability along with horizontal and vertical scalability of software solutions. Open MQ, like any other component of an enterprise software solution, should be highly available, sometimes in both the data and service layers and sometimes just in the service layer.

The engine behind the JMS interface is the message broker, whose tasks are managing destinations, routing and delivering messages, checking with the security layer, and so on. Therefore, if we can keep the brokers and related services available to the clients, then we have service availability in place. If, in addition, we manage to keep the state of queues, topics, durable subscribers, transactions, and so on preserved in the event of a broker failure, then we can provide our clients with high availability in both the service and data layers.

Open MQ provides two types of clustering, conventional clusters and high availability clusters, which can be used depending on the degree of high availability required of the messaging component in the whole software solution.

Conventional clusters provide service availability but not data availability. If one broker in a cluster fails, clients connected to that broker can reconnect to another broker in the cluster but may be unable to access some data while they are connected to the alternate broker. Figure 3 shows how conventional cluster members and potential clients work together.

Figure 3 Open MQ conventional clustering: one MQ broker acts as the master broker to keep track of changes and propagate them to other cluster members

As you can see in figure 3, in a conventional cluster we have different brokers running with no shared data, but the cluster configuration is maintained by a master broker that propagates changes between the cluster members to keep them up to date about destinations and durable subscribers. The master broker also brings newly joined members, and members that were offline for a period of time, back in sync with the current configuration information and durable subscribers. Each client is connected to one broker and will use that broker until it fails (goes offline, a disaster happens, and so on), at which point the client connects to another broker in the same cluster.

High availability clusters provide both service availability and data availability. If one broker in a cluster fails, clients connected to that broker are automatically connected to the broker in the cluster which takes over the failed broker's store. Clients continue to operate with all persistent data available to the new broker at all times. Figure 4 shows how high availability cluster members and potential clients work together.

Figure 4 Open MQ high availability cluster: no single point of failure, as all information is stored in a highly available database and all cluster members interact with that database

As you can see in figure 4, we have one central highly available database which we rely upon for storing any kind of dynamic data like transactions, destinations, messages, durable subscribers, and so on. This database itself should be highly available, usually a cluster of database instances running on different machines in different geographical locations. Each client connects to one broker and continues using that broker until it fails; in case of a broker failure, another broker takes over all of the failed broker's responsibilities, like open transactions and durable subscribers, and the client reconnects to the broker that took them over.

All in all we can summarize the differences between conventional clusters and high availability clusters as listed in table 9.

Table 9 comparison between high availability clusters and conventional clusters

Functionality: Conventional cluster / High availability cluster

Performance: Faster / Slower

Service availability: Yes, but partial when the master broker is not available / Yes

Data availability: No, a failed broker can cause data loss / Yes

Transparent failover recovery: May not be possible if failover occurs during a commit / May not be possible if failover occurs during a commit and the client cannot reconnect to any other broker in the cluster

Configuration: Done by setting the appropriate cluster configuration broker properties / Done by setting the appropriate cluster configuration broker properties

3rd party software requirement: None / Highly available database

 

Usually when we are talking about data availability we need to accept a slight performance overhead; remember that we had a similar situation with GlassFish HADB backed clusters.

We said that high availability clusters use a database to store dynamic information; some databases tested with OpenMQ include Oracle RAC, MySQL, Derby, PostgreSQL, and HADB. We usually use a highly available database to maximize data availability and reduce the risk of losing data.

We can configure a highly available messaging infrastructure by following a few simple steps. First, create as many brokers as you need in your messaging infrastructure, as discussed in section 2.2. Second, configure the brokers to use a shared database for dynamic data storage and to act as cluster members; to perform this step we need to add the content of listing 2 to each broker's configuration file.

Listing 2 Required properties for making a broker highly available

#data store configuration

imq.persist.store=jdbc                          #1

imq.persist.jdbc.dbVendor=mysql                  #2

imq.persist.jdbc.mysql.driver=com.mysql.jdbc.Driver   #2

imq.persist.jdbc.mysql.property.url=jdbc:mysql://localhost:3306 #2

imq.persist.jdbc.mysql.property.databasename=jmsStore        #2

imq.persist.jdbc.mysql.user=privilegedUser                  #2

imq.persist.jdbc.mysql.needpassword=true                    #2

imq.persist.jdbc.mysql.password=dbpassword                   #2

 

#cluster related configuration

imq.cluster.ha=true                                          #3

imq.cluster.clusterid=e.cluster01                           #3

imq.brokerid=e.broker02                                      #4

 

 


At #1 we define that the data store is of type JDBC rather than Open MQ managed binary files; for high availability clusters it must be jdbc instead of file. At #2 we provide the configuration parameters required for Open MQ to connect to the database; we can replace mysql with oracle, derby, hadb, or postgresql if we want to. At #3 we configure this broker as a member of a high availability cluster named e.cluster01, and finally at #4 we determine the unique name of this broker. Make sure that each broker has a unique name in the entire cluster and that different clusters using the same database are uniquely identified.

Now that we have a database server available for our messaging infrastructure's high availability, and we have configured all of our brokers to use this database, the last step is creating the database that Open MQ is going to use to store the dynamic data. To create the initial schema we should use the following command:

./imqdbmgr create all -b broker_name

This command creates the database and the required tables, and initializes the database with topology related information. As you can see, we passed a broker name to the command; this means that we want imqdbmgr to use that specific broker's configuration file when creating the database. As you remember, we can have multiple brokers in the same Open MQ installation, and each broker can operate completely independently from the others.

The imqdbmgr command has many uses, including upgrading the store from older Open MQ versions, creating backups, restoring backups, and so on. To see a complete list of its subcommands and parameters, execute the command with the -h parameter.

Now you should have a good understanding of Open MQ clustering and high availability. In the following sections we will discuss how we can manage Open MQ using JMX and how we can use it from a Java SE application, and later on we will see how Open MQ is related to GlassFish and the GlassFish clustering architecture.


 

 

5 Open MQ management using JMX

Open MQ provides a very rich set of JMX MBeans that expose its administration and management interfaces. These MBeans can also be used to monitor Open MQ using JMX. Generally, any task which we can do using imqcmd can also be done using JMX code or a JMX console like the JDK's Java Monitoring and Management Console, jconsole.

Administrative tasks that we can perform using JMX include managing brokers, services, destinations, consumers, producers, and so on. Management and monitoring are equally possible: for example, we can monitor destinations from our own code or change broker configuration dynamically using our own code or a JMX console. Some use cases of the JMX connection channel include:

  • We can include JMX code in our JMS client application to monitor application performance and, based on the results, reconfigure the JMS objects we use to improve performance.
  • We can write JMX clients that monitor the broker to identify usage patterns and performance problems, and use the JMX API to reconfigure the broker to optimize performance.
  • We can write a JMX client to automate regular maintenance tasks, rolling upgrades, and so on.
  • We can write a JMX application that constitutes our own version of imqcmd and use it instead of imqcmd.

To connect to an Open MQ broker using a JMX console we can use service:jmx:rmi:///jndi/rmi://host:port/server as the connection URL pattern. Listing 3 shows how we can use Java code to pause the jms service temporarily.


 

Listing 3 pausing the jms service using Java code, JMX and admin connection factory

AdminConnectionFactory  acf = new AdminConnectionFactory();             #1

acf.setProperty(AdminConnectionConfiguration.imqAddress, "192.168.1.1:7677");                                                       #2

JMXConnector  jmxc = acf.createConnection("admin", "admin");          #3

MBeanServerConnection  mbsc = jmxc.getMBeanServerConnection();           #4

ObjectName  serviceConfigName = MQObjectName.createServiceConfig("jms"); #5

mbsc.invoke(serviceConfigName, ServiceOperations.PAUSE, null, null);    #6

jmxc.close();                                                           #7

 

This sample code uses the admin connection factory, which brings some dependencies on Open MQ classes, so you need to add imqjmx.jar to your class path; this file is located in the lib directory of the Open MQ installation. At #1 we create an administration connection factory; this class is an Open MQ helper class, which is what makes the sample depend on the Open MQ libraries. At #2 we configure the factory to give us a connection to a non-default host (the default is 127.0.0.1:7676). At #3 we create the connection with the credentials we want to use, at #4 we get a connection to the MBean server, at #5 we get the jms service MBean name, at #6 we invoke one of its operations, and finally we close the connection at #7.

Listing 4 shows how to read the MaxNumProducers attribute of smsQueue using the standard JMX connector APIs.

Listing 4 Reading smsQueue attributes using pure JMX code

HashMap   environment = new HashMap();                       #1

String[]  credentials = new String[] {"admin", "admin"};   #1

environment.put (JMXConnector.CREDENTIALS, credentials);     #1

JMXServiceURL  url;

url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://192.168.1.1:7677/server"); #2

JMXConnector  jmxc = JMXConnectorFactory.connect(url, environment); #3

MBeanServerConnection  mbsc = jmxc.getMBeanServerConnection(); #4

ObjectName  destConfigName= MQObjectName.createDestinationConfig(DestinationType.QUEUE, "smsQueue"); #5

Integer  attrValue= (Integer)mbsc.getAttribute(destConfigName,DestinationAttributes.MAX_NUM_PRODUCERS); #6

System.out.println( "Maximum number of producers: " + attrValue );

jmxc.close();                                                        #7

 

At #1 we set up the parameters that JMX needs to create a connection and at #2 we define the connection URL, at #3 we create the JMX connection, at #4 we get a connection to the Open MQ MBean server, at #5 we get the MBean name representing our smsQueue, at #6 we read the value of the attribute which represents the maximum number of permitted producers, and finally we close the connection at #7.

To learn more about JMX interfaces and programming model for Open MQ you can check Sun Java System Message Queue 4.2 Developer’s Guide for JMX Clients which is located at http://docs.sun.com/app/docs/doc/820-5207.

6 Using Open MQ

The Open MQ messaging infrastructure can be accessed from a variety of channels and programming languages; we can perform messaging tasks using Java, C, and virtually any programming language capable of performing HTTP communication. When it comes to Java we can access the broker functionality using either JMS or JMX.

Open MQ directly provides APIs for accessing the broker from C and Java, but for other languages it takes a more language agnostic approach by providing an HTTP gateway that can be used from any language like Python, JavaScript (AJAX), C#, and so on. This approach is called the Universal Message Service (UMS) and was introduced in Open MQ 4.3. In this section we only discuss Java and Open MQ; the other supported mechanisms and languages are out of scope. You can find more information about them at http://mq.dev.java.net.

6.1 Using Open MQ from Java

We may use Open MQ either directly or through application server managed resources; here we just discuss how we can use Open MQ from Java SE, as using the JMS service from Java EE is straightforward.

Open MQ provides a broad range of functionality wrapped in a set of JMS compliant APIs; you can perform the usual JMS operations, like producing and consuming messages, or you can gather metrics related to different resources using the JMS API. Listing 5 shows a sample application which communicates with a cluster of Open MQ brokers; running the application results in sending a message and consuming the same message. In order to execute listings 5 and 6 you will need to add imq.jar, imqjmx.jar, and jms.jar to your classpath. These files are inside the lib directory of the Open MQ installation.

Listing 5 Producing and consuming messages from a Java SE application


public class QueueClient implements MessageListener {      #1
   public void startClientConsumer()throws JMSException {
   com.sun.messaging.ConnectionFactory connectionFactory = new com.sun.messaging.ConnectionFactory();            
connectionFactory.setProperty(com.sun.messaging.ConnectionConfiguration.imqAddressList, "mq://127.0.0.1:7676,mq://127.0.0.1:7677");          #2
connectionFactory.setProperty(ConnectionConfiguration.imqReconnectEnabled, "true"); #3
connectionFactory.setProperty(ConnectionConfiguration.imqReconnectAttempts, "5"); #3
connectionFactory.setProperty(ConnectionConfiguration.imqReconnectInterval, "500"); #3
connectionFactory.setProperty(ConnectionConfiguration.imqAddressListBehavior, "RANDOM"); #3
   javax.jms.QueueConnection queueConnection = connectionFactory.createQueueConnection("user01", "password01"); #4
   javax.jms.Session session = queueConnection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);               #5
   javax.jms.Queue smsQueue = session.createQueue("smsQueue");
   javax.jms.MessageProducer producer = session.createProducer(smsQueue);
   Message msg = session.createTextMessage("A sample sms message");
   producer.send(msg);                 #6
   javax.jms.MessageConsumer consumer = session.createConsumer(smsQueue);
   consumer.setMessageListener(this);         #7
   queueConnection.start();                #8
   }

   public void onMessage(Message sms) {
try {
   String smsContent = ((javax.jms.TextMessage) sms).getText(); #9
   System.out.println(smsContent);
} catch (JMSException ex) {
  ex.printStackTrace();
}
   }
}

As you can see in listing 5, we use the JMS programming model to communicate with Open MQ for producing and consuming messages. At #1 the QueueClient class implements the MessageListener interface. At #2 we create a connection factory which connects to one of the cluster members; at #3 we define the behavior of the connection factory in relation to the cluster members. At #4 we provide a privileged user to connect to the brokers. At #5 we create a session which automatically acknowledges received messages. At #6 we send a text message to the queue. At #7 we set QueueClient as the message listener for our consumer. At #8 we start receiving messages from the server, and finally at #9 we consume the message in the onMessage method, which is the MessageListener interface's method for processing incoming messages.

We said that Open MQ lets us retrieve monitoring information using the JMS APIs. This means we can receive JMS messages containing Open MQ metrics. But before we can receive such messages, we should know where they are published and how to configure Open MQ to publish them.

First we need to enable metric gathering and set the interval at which the metrics are gathered. To do this we add the following properties to one of the configuration files discussed in table 5.

 

imq.metrics.topic.enabled=true

imq.metrics.topic.interval=30

 

Now that we have enabled metric gathering, Open MQ will gather broker, destination list, JVM, queue, and topic metrics and send each type of metric to a pre-configured topic. The broker gathers and sends the metrics messages at the configured interval. Table 10 shows the topic names for each type of metric information.

 

Table 10 Topic names for each type of metric information.

Topic name: mq.metrics.broker
Description: Broker metrics: information on connections, message flow, and volume of messages in the broker.

Topic name: mq.metrics.jvm
Description: Java Virtual Machine metrics: information on memory usage in the JVM.

Topic name: mq.metrics.destination_list
Description: A list of all destinations on the broker, and their types.

Topic name: mq.metrics.destination.queue.queueName
Description: Destination metrics for a queue of the specified name. Metrics data includes number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the queueName variable.

Topic name: mq.metrics.destination.topic.topicName
Description: Destination metrics for a topic of the specified name. Metrics data includes number of consumers, message flow or volume, disk usage, and more. Specify the destination name for the topicName variable.

Now that we know which topics to subscribe to for our metrics, we should think about the security of these topics and about which privileged users may subscribe to them. To configure authorization for these topics we can define some rules in the access control file we discussed in 2.2. The syntax and template for defining authorization for these resources look like the following:

 
topic.mq.metrics.broker.consume.deny.user=*
topic.mq.metrics.broker.consume.allow.user=user01,user02
topic.mq.metrics.destination.topic.t1.consume.deny.user=*
topic.mq.metrics.destination.topic.t1.consume.allow.user=user01

Now that we have the security in place, we can write some sample code to retrieve the smsQueue metrics; listing 6 shows how.

Listing 6 Retrieving smsQueue metrics using the broker's metrics messages


public void onMessage(Message metricsMessage) {
        try {
            // Metrics arrive as a MapMessage on the mq.metrics.destination.queue.smsQueue topic
            MapMessage mapMsg = (MapMessage) metricsMessage;
            String metrics[] = new String[11];
            int i = 0;
            metrics[i++] = Long.toString(mapMsg.getLong("numMsgsIn"));
            metrics[i++] = Long.toString(mapMsg.getLong("numMsgsOut"));
            metrics[i++] = Long.toString(mapMsg.getLong("msgBytesIn"));
            metrics[i++] = Long.toString(mapMsg.getLong("msgBytesOut"));
            metrics[i++] = Long.toString(mapMsg.getLong("numMsgs"));
            metrics[i++] = Long.toString(mapMsg.getLong("peakNumMsgs"));
            metrics[i++] = Long.toString(mapMsg.getLong("avgNumMsgs"));
            // The byte counters are converted from bytes to kilobytes
            metrics[i++] = Long.toString(mapMsg.getLong("totalMsgBytes") / 1024);
            metrics[i++] = Long.toString(mapMsg.getLong("peakTotalMsgBytes") / 1024);
            metrics[i++] = Long.toString(mapMsg.getLong("avgTotalMsgBytes") / 1024);
            metrics[i++] = Long.toString(mapMsg.getLong("peakMsgBytes") / 1024);
            System.out.println(java.util.Arrays.toString(metrics));
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
 

Listing 6 only shows the onMessage method of the message listener class; the rest of the code is trivial and very similar to listing 5, with slight changes such as subscribing to a topic named mq.metrics.destination.queue.smsQueue instead of the smsQueue queue, as sketched below.
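For completeness, the following fragment is a minimal sketch of that subscription step, assuming the same connection factory, credentials, and listener class as in listings 5 and 6; the method and variable names are illustrative only and do not come from the original listings.

public void startMetricConsumer(com.sun.messaging.ConnectionFactory connectionFactory) throws JMSException {
   // Sketch only: connectionFactory is assumed to be configured as in listing 5
   javax.jms.TopicConnection topicConnection = connectionFactory.createTopicConnection("user01", "password01");
   javax.jms.Session session = topicConnection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
   // Subscribe to the metrics topic instead of the smsQueue destination
   javax.jms.Topic metricTopic = session.createTopic("mq.metrics.destination.queue.smsQueue");
   javax.jms.MessageConsumer metricConsumer = session.createConsumer(metricTopic);
   metricConsumer.setMessageListener(this);   // this class implements MessageListener, as in listing 6
   topicConnection.start();
}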

A list of all possible metrics for each type of resource, which are available as properties of the received map message, can be found in chapter 20 of Sun Java System Message Queue 4.2 Developer’s Guide for Java Clients, which is located at docs.sun.com/app/docs/doc/820-5205/aeqej?a=view

7 GlassFish and Open MQ

 

GlassFish, like other Java EE application servers, needs messaging functionality behind its JMS implementation, and it uses Open MQ as the message broker for this purpose. GlassFish supports the Java Connector Architecture 1.5 and integrates with Open MQ through this specification. If you take a closer look at table 1.4 you can see that, by default, different GlassFish installation profiles use different ways of communicating with the JMS broker, ranging from an embedded instance to a local broker or an Open MQ server running on a remote host. Embedded mode is the only mode in which Open MQ runs in the same process as GlassFish itself. In local mode, starting and stopping the Open MQ server is done by the application server itself: GlassFish starts the associated Open MQ server when GlassFish starts and stops it when GlassFish stops. Finally, remote mode means that starting and stopping the Open MQ server is done independently, and an administrator or an automated script should take care of those tasks.

We already know how to set up a GlassFish cluster, both with in-memory replication and as a highly available cluster with an HADB backend for persisting transient information like sessions, JMS messages, and so on. In this section we discuss how GlassFish in-memory and HADB backed clusters relate to Open MQ in more detail, to give you a better understanding of the overall architecture of a highly available solution.

First let's see what happens when we set up an in-memory cluster: after we configure the cluster, we have several instances, each of which uses a local Open MQ broker instance for messaging tasks. When we start the cluster, all GlassFish instances start and, upon each instance's startup, the corresponding Open MQ broker starts.

You may remember that we discussed conventional clusters, which need no backend database for storing transient data and instead use one broker as the master broker to propagate changes and keep the other broker instances up to date. In GlassFish in-memory replicated clusters, the Open MQ broker associated with the first GlassFish instance we create acts as the master broker. A sample deployment diagram for an in-memory replicated cluster and its relation with Open MQ is shown in figure 5.

Figure 5 Open MQ and GlassFish in an in-memory replicated cluster of GlassFish instances. Open MQ works as a conventional cluster, which leads to one broker acting as the master broker.

 

When it comes to enterprise profiles, we usually use an independent, highly available cluster of Open MQ brokers with a database backend together with a highly available cluster of GlassFish instances backed by an HADB cluster. We discussed that each GlassFish instance can have multiple Open MQ brokers to choose between when it creates a connection to the broker. To configure GlassFish to use multiple brokers of a cluster, we can add all available brokers to the GlassFish instance and then let GlassFish manage how the load is distributed between the broker instances. To add multiple brokers to a GlassFish instance we can follow these steps:

  • Open the GlassFish administration console.
  • Navigate to the instance configuration node or the default configuration node, depending on whether you want to change the configuration of the cluster's instances or of a single instance.
  • Navigate to Configuration > Java Message Service > JMS Hosts in the navigation tree.
  • Remove the default_JMS_host.
  • Add as many hosts as you want this instance or cluster to use; each host represents a broker in your cluster.
  • Navigate to Configuration > Java Message Service and change the field values as shown in table 11 (the same changes can also be made from the command line, as sketched after this list).
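If you prefer the CLI over the web console, the equivalent configuration can be applied with asadmin. The following is a minimal sketch assuming a cluster configuration named cluster1-config and two brokers on broker1 and broker2; the host names, ports, credentials, and JMS host names are placeholders, and the exact dotted names may vary slightly between GlassFish versions.

asadmin delete-jms-host --target cluster1-config default_JMS_host
asadmin create-jms-host --target cluster1-config --mqhost broker1 --mqport 7676 --mquser admin --mqpassword admin broker1-host
asadmin create-jms-host --target cluster1-config --mqhost broker2 --mqport 7676 --mquser admin --mqpassword admin broker2-host
asadmin set cluster1-config.jms-service.type=REMOTE
asadmin set cluster1-config.jms-service.reconnect-enabled=true
asadmin set cluster1-config.jms-service.reconnect-attempts=5
asadmin set cluster1-config.jms-service.addresslist-behavior=random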

 

Table 11 GlassFish configuration for using a standalone JMS cluster

Field: Type
Description: We discussed the different types of integration in this section; here we use a remote, standalone cluster of brokers.

Field: Startup Timeout
Description: The time GlassFish waits during startup before it gives up on starting the JMS service; if it gives up, no JMS service will be available.

Field: Start Arguments
Description: Any arguments that can be passed to the imqbrokerd command can be passed using this field.

Field: Reconnect, Reconnect Interval, Reconnect Attempts
Description: We should enable reconnection, set a meaningful number of attempts to connect to the brokers, and a meaningful wait time between reconnection attempts.

Field: Default JMS Host
Description: Determines which broker is used to execute administrative commands on the cluster, such as creating a destination.

Field: Address List Behavior
Description: Determines how GlassFish selects a broker when it creates a connection; the selection can be either random or based on the first available broker in the host list, where the first broker has the highest priority.

Field: Address List Iterations
Description: How many times GlassFish should iterate over the available host list in order to establish a connection if no broker is available in the first iteration.

Field: MQ Scheme
Description: Please refer to table 12.

Field: MQ Service
Description: Please refer to table 12.

 

As you can see, we can configure GlassFish to use a cluster of Open MQ brokers very easily. You may wonder how this configuration affects your MDBs or a JMS connection that you acquire from the application server. When you create a connection using a connection factory, GlassFish checks the JMS service configuration and returns a connection whose host address is the address of a healthy broker; the selection of the broker is based on the Address List Behavior value, so GlassFish either selects a healthy broker at random or takes the first healthy host starting from the top of the host list.

Open MQ is one of the most feature complete message broker implementations available on the open source and commercial market. Among many other features, it supports several connection protocols to accommodate the many types of clients that may need to connect to it. Two concepts are introduced to cover this connection protocol flexibility:

  • Open MQ Service: the different connection handling services implemented to support a variety of connection protocols. All available connection services are listed in table 12.
  • Open MQ Scheme: determines the connection scheme between an Open MQ client, a connection factory, and the brokers. Supported schemes are listed in table 12.

The full syntax for a message service address is scheme://address-syntax, and before using any of the schemes we should ensure that its corresponding service is enabled. By setting a broker's imq.service.activelist property, you can configure it to run any or all of these connection services. The value of this property is the list of connection services to be activated when the broker starts up; if the property is not specified explicitly, the jms and admin services are activated by default.
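For example, the following broker property and address list illustrate the idea; the host name and ports are placeholders, and the exact ports depend on your broker configuration.

imq.service.activelist=jms,ssljms,admin

mq://broker1.example.com:7676/jms
mqtcp://broker1.example.com:7677/jms
mqssl://broker1.example.com:7674/ssljms

The first address goes through the broker's port mapper (port 7676 by default) to look up the jms service, while the mqtcp and mqssl addresses bypass the port mapper and connect directly to the port on which the corresponding service listens, as described in table 12.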

Table 12 Available services and schemes in Open MQ 4.3

Scheme: mq
Service: jms and ssljms
Description: Uses the broker's port mapper to assign a port dynamically for either the jms or ssljms connection service.

Scheme: mqtcp
Service: jms and admin
Description: Bypasses the port mapper and connects directly to a specified port, using the jms connection service.

Scheme: mqssl
Service: ssljms and ssladmin
Description: Makes an SSL connection to a specified port, using the ssljms connection service.

Scheme: http
Service: httpjms
Description: Makes an HTTP connection to the Open MQ tunnel servlet at a specified URL, using the httpjms connection service.

Scheme: https
Service: httpsjms
Description: Makes an HTTPS connection to the Open MQ tunnel servlet at a specified URL, using the httpsjms connection service.

 

We usually enable only the services we need and leave the other services disabled. Enabling each service requires its own measures for firewall configuration and authorization.

 

Performance consideration

Performance is always a concern in any large scale system because of the relatively large number of components involved in the system architecture. Open MQ is a high performance messaging system; however, you should know that its performance differs greatly depending on many factors, including:

  • The network topology
  • The transport protocols used
  • Quality of service
  • Topics versus queues
  • Durable messaging versus reliable messaging (some buffering takes place, but if a consumer is down long enough the messages are discarded)
  • Message timeout
  • Hardware, network, JVM, and operating system
  • Number of producers and number of consumers
  • Distribution of messages across destinations, along with message size

 

Summary

Messaging is one of the most basic and fundamental requirements in every large scale application. So far you have learned what Open MQ is, how it works, what its main components are, and how it can be administered either from Java code using JMX or using its command line utilities. You learned how a Java SE application can connect to Open MQ using the JMS interfaces and act as a consumer or producer. We discussed how to configure Open MQ to perform access control, and you learned how Open MQ broker instances can be created and configured to work in a mission critical system.

How to prepare for, and install GoDaddy SSL certificate into GlassFish v3

Here are the steps showing how to prepare and install an SSL certificate purchased from GoDaddy into a GlassFish v3 server. To learn more about GoDaddy certificates and the steps to buy one, take a look at http://www.godaddy.com/ssl/ssl-certificates.aspx?app_hdr=. After you understand what GoDaddy offers and whether it suits your requirements, you can use the following steps to get and install the certificate into GlassFish.

  • Generate a keypair for your server using the following command. This command generates a keypair and stores it in a keystore of type JKS. Later on we will submit the public key portion, together with the other details provided during key generation, to a CA to sign it for us.
  • keytool -keysize 2048 -genkey -alias www.domain.com -keyalg RSA -dname "CN=www.domain.com,O=company,L=city,S=State,C=Country" -keypass changeit -storepass changeit -keystore server.keystore

     

  • You may check whether you entered the correct information in the key generation phase by inspecting the key with the following command:
  • keytool -list -v -alias www.domain.com -keystore server.keystore

     

  • Generate a CSR which you will submit to GoDaddy so they can sign it for you. This CSR contains the public key which matches the private key you generated previously:

    keytool -certreq -alias www.domain.com -keystore server.keystore -storepass changeit -keypass changeit -file server-2048.csr

    Now, before you submit the CSR, make sure you have backed up server.keystore in a safe place, because it contains your private key; if you lose it, your certificate will be useless. Also make sure the file cannot fall into the wrong hands: if a malicious person gets hold of it you will be in trouble unless you change your certificate, since anyone with basic knowledge can use the private key in that file to decrypt messages encrypted with your public key.

    Now that you have created a backup of server.keystore and purchased your certificate from GoDaddy, it is time to import everything into the designated keystores. Note that GoDaddy will give you a certificate named something like domain.com.crt; you will also need to download the GoDaddy CA certificates from its repository located at https://certs.godaddy.com/repository/. The files referenced by the import commands below are the root certificate (valicert_class2_root.crt) and the intermediate certificates (gd_cross_intermediate.crt and gd_intermediate.crt).

    Now place all of these files into your domain directory ($domain.dir), fire up a terminal (cmd), and execute the following commands:

  • Import the root certificate into the GlassFish keystore so that the secondary certificates can be validated. keytool may tell you that the certificate already exists in the global cacerts store; if so, do not import this one:
  • keytool -import -alias root -keystore keystore.jks -trustcacerts -file valicert_class2_root.crt

     

  • Import the secondary CA certificates into the keystore so that the server certificate signed by GoDaddy can be validated and accepted:
  • keytool -import -alias cross -keystore keystore.jks -trustcacerts -file gd_cross_intermediate.crt

    keytool -import -alias intermed -keystore keystore.jks -trustcacerts -file gd_intermediate.crt

  • Import the server certificate into the keystore. Make sure the alias used for the certificate is the same as the alias used for the private key; otherwise the validation chain will not be completed and the certificate will not be imported into the keystore.

keytool -import -alias www.domain.com -keystore keystore.jks -trustcacerts -file domain.com.crt
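Before wiring the certificate into GlassFish, you may want to confirm that the complete chain was imported. A quick check, assuming the default GlassFish keystore password of changeit, is:

keytool -list -v -keystore keystore.jks -storepass changeit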

The certificate installation is finished; the only remaining step is changing the certificate nickname in your domain.xml file to the new alias name we used in the commands above.

  • Make sure the domain is stopped, using asadmin stop-domain domain_name.
  • Create a backup of domain.xml.
  • Open domain.xml in a text editor like gedit, kate, or WordPad and replace all occurrences of s1as with www.domain.com, which is the certificate alias (a sketch of the relevant fragment follows this list).
  • Save the file and start the domain in verbose mode using asadmin start-domain --verbose domain_name.
  • Open https://server:8181/ and see whether it works properly. If you use the exact https://www.domain.com:8181/ address you should get no warning and everything should work as expected. If you use https://localhost:8181/ you will get a warning about a misused certificate, explaining that the certificate is issued for www.domain.com but is installed on localhost.
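For reference, the replacement boils down to something like the following domain.xml fragment; this is only a sketch, the surrounding elements are omitted, and the exact attribute set on the ssl element of your HTTPS listener may differ.

<!-- Before: the HTTPS listener points at the default self-signed certificate -->
<ssl ssl3-enabled="false" cert-nickname="s1as"/>

<!-- After: it points at the GoDaddy-signed certificate imported above -->
<ssl ssl3-enabled="false" cert-nickname="www.domain.com"/>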