Java EE Security Refcard is now available for download at no cost.

The Java EE Security refcard is available for download. It covers Java EE 6 security and discusses how each application server supports the specification. The refcard covers authentication, authorization, and transport security in web applications, EJB applications, and web services by introducing each concept along with the related annotations and deployment descriptor elements that help us realize it.

GlassFish, Geronimo, and JBoss are discussed in the refcard to show how we can use the vendor-specific deployment descriptors to implement the security design of our applications.

The following list shows what is covered in the refcard:

  • Security in Java EE applications
  • Authentication and Authorization in Java EE
  • Web Applications Security
    • Authentication and Authorization in Web Module
    • Enforcing Transport Security
    • Other Security Elements of Web application deployment descriptors
    • Using Annotations to enforce security in Web modules
    • Programmatic Security in Web Module
  • EJB Applications Security
    • EJB module deployment descriptors
    • Security Annotation of EJB modules in Java EE 6
    • Securing EJB Modules programmatically
  • Application Client Security
    • Security enforcement in Geronimo ACC
    • Security enforcement in JBoss ACC
  • Defining Security in Enterprise application level
  • Securing Web Services in Java EE
    • Web Services Security in Web Modules
    • Web Services Security in EJB Modules
    • Web Services Authentication in GlassFish
    • Web Services Authentication in Geronimo
    • Web Services Authentication in JBoss

The refcard comes with 4 figures showing the relations between different elements and components in Java EE, along with 7 tables explaining the deployment descriptor elements and security annotations. For most of the above headings you will find sample code included in the refcard showing how to implement the discussed functionality according to Java EE and the mentioned application servers.

GlassFish v3 and EJBCA 3.x: a fair couple for mutual SSL authentication.

Please use the following articles while I am updating this entry:

  1. How to have your Own CA and configure Glassfish and your clients for mutual authentication?
  2. How to have your Own CA and configure Glassfish and your clients for mutual authentication?, Part II

Please post any comments or questions here so we can have one main reference for this topic.

The GlassFish Security book, which covers GlassFish v3 security, Java EE 6 security, and OpenSSO, has just been published.

The book in detail:

Security was, is, and will be one of the most important aspects of enterprise applications and one of the most challenging areas for architects, developers, and administrators. It is essential for Java EE application developers to secure their enterprise applications using GlassFish security features.

Learn to secure Java EE artifacts (like Servlets and EJB methods), configure and use GlassFish JAAS modules, and establish environment and network security using this practical guide filled with examples. One of the things you will love about this book is that it covers the advantages of protecting application servers and web service providers using OpenSSO.

The book starts by introducing Java EE security in Web, EJB, and Application Client modules. Then it introduces the Security Realms provided in GlassFish, which developers and administrators can use to complete the authentication and authorization setup. In the next step, we develop a completely secure Java EE application with Web, EJB, and Application Client modules.

The next part includes a detailed and practical guide to setting up, configuring, and extending GlassFish security. This part covers everything an administrator needs to know about GlassFish security, starting from installation and operating environment security, listeners and password security, through policy enforcement, to auditing and developing new auditing modules.

Before starting the third major part of the book, we have a chapter on OpenDS discussing how to install and administer OpenDS. The chapter covers importing and exporting data, setting up replication, backup and recovery, and finally developing LDAP-based solutions using OpenDS and Java.

Finally, the third part starts by introducing OpenSSO and continues by guiding you through OpenSSO features, installation, and configuration, and how you can use it to secure Java EE applications in general and web services in particular.

Inspired from real development cases, this practical guide shows you how to secure a GlassFish installation and how to develop applications with secure authentication based on GlassFish, Java EE, and OpenSSO capabilities.

What you will learn from this book :

  • Develop secure Java EE applications including Web, EJB, and Application client modules.
  • Reuse the security assets you have by learning GlassFish security realms in great detail, along with a sample for each realm.
  • Secure GlassFish installation including operating system security and JVM policy configuration.
  • Secure Java EE applications using OpenSSO and set up Single Sign-On (SSO) between multiple applications.
  • Secure web services using Java EE built-in features, OpenSSO and WS-Security.
  • Secure network listeners and passwords using GlassFish provided facilities.
  • Learn to use OpenSSO services, SDKs, and agents to secure Java EE enterprise applications, including web services.
  • Learn to use OpenDS, both as an administrator and as an LDAP solution developer.
  • All command lines and more than 90% of the book content apply to both GlassFish v3 and v2.


Security is driven by requirements and design, and we implement security on the basis of the requirements provided by analysts. In this book, we take a programmatic approach to understanding Java EE and GlassFish security.

You will find plenty of code samples in this book. It is easier to secure your application when you have a demonstration of a complete, working application explained in the book, isn’t it? Each chapter starts with the importance and relevance of its topic by introducing some Java EE application requirements, which will encourage you to read further.

Who this book is written for

This book is for application designers, developers and administrators who work with GlassFish and are keen to understand Java EE and GlassFish security.

To take full advantage of this book, you need to be familiar with Java EE and the GlassFish application server. You will love this book if you are looking for coverage of Java EE security, using GlassFish features to create secure Java EE applications, securing the GlassFish installation and operating environment, and using OpenSSO.


Learning GlassFish v3 Command Line Administration Interface (CLI)


Terminals and consoles were among the earliest communication interfaces between a system administrator and the system administration layer. Due to this long presence, command line administration consoles have become one of the most widely used administration channels for configuring software ranging from database engines to routers’ embedded operating systems.

GlassFish provides several administration channels; one of them is the command line administration interface, or CLI from now on. The CLI has many unique features which make it very convenient both for command line advocates and for new administrators who would like to get familiar with the CLI and use it on a daily basis. The CLI allows us to manage, administer, and monitor any resource the application server exposes to administrators. So we can perform all of our administration tasks just by firing up a terminal, navigating to the GlassFish bin directory, and executing the asadmin script, which is either a shell script on UNIX-based operating systems or a batch file on Windows.

The CLI has some degree of embedded intelligence which helps us find the correct spelling of, and commands similar to, the command or phrase we try. Therefore, with basic knowledge of GlassFish application server administration we can find the command we need and proceed with the task list in front of us.

1 The CLI environment

The CLI is an administration interface which lets administrators and developers change configuration, such as a listener port; perform lifecycle tasks, like deploying a web application or creating a new domain; monitor different aspects of the application server; and finally perform maintenance duties, like backing up the domain configuration, preparing maintenance reports, and so on.

The door to the CLI is a script placed in the glassfish_home/bin directory. On Linux and UNIX platforms it is a shell script named asadmin, and on Windows it is a batch file named asadmin.bat. So, before we start doing anything with the CLI we need to either have glassfish_home/bin on the system path or be in the directory where the script resides.

Adding glassfish_home/bin to the operating system path

We can add glassfish_home/bin to the Windows path using the following procedure. You should have the glassfish_home variable already defined; if not, replace %glassfish_home% with the full path to the GlassFish installation directory.

  • Right-click the My Computer icon and select the Properties item.
  • Select the Advanced tab from the tab set.
  • Click the Environment Variables button; you will see two sets of variables, one for the user and one for the system. I suggest you choose the user set to add your variable or update the currently defined one.
  • If you cannot find path in the list of variables, create a new variable named path and set its value to %glassfish_home%\bin. If the variable is present, append ;%glassfish_home%\bin to the end of the variable’s value.

Close all cmd windows and open a new one for the new path to take effect.

On UNIX and Linux machines, we can add it to the operating system path using the following procedure. Replace ${glassfish_home} with the full path to the GlassFish installation.

Open a text editor like ktext, gedit, or any editor you are familiar with.

Open the ~/.bashrc file using the file menu.

At the end of the file (last line) add: export PATH="${glassfish_home}"/bin:"${PATH}"

Close all terminals and fire up a new terminal console for the change to take effect.


We can execute asadmin on Windows by calling asadmin.bat or asadmin in a cmd window. We should either be in the bin directory or have the script on the path.

On Linux and UNIX, if we have the script on the path we can call asadmin in a terminal window; if not, we should be in the bin directory and execute it by typing ./asadmin in the terminal window.


In this article I will use an operating-system-neutral syntax; for example, I will write asadmin version, which should be interpreted as asadmin.bat version on Windows, ./asadmin version on UNIX, or asadmin version itself when we have asadmin on our path.

2 Getting started with CLI

2.1 The commands format

The asadmin script is the door to the GlassFish command line interface, and all of its commands follow a de facto naming convention and a standard format which we need to remember. All command names are self-explanatory: we can understand a command's purpose from its name. Each asadmin command uses one or more words as the command name to completely explain its purpose, and the words forming the name are separated by a dash (-).

The naming convention also provides another learning point about the asadmin commands. Commands are categorized into lifecycle management commands, configuration commands, listing commands, and monitoring commands. Each command starts with a verb which determines the command's nature; for example, create, delete, update, list, and configure are the starting words of many asadmin commands. The command syntax is shown in the following snippet.

command-name options operands

The command-name can be start-domain, stop-domain, and so on. The options are parameters which affect the command execution, for example the domain instance port, the JDBC connection pool's data source class name, and so on. The operands are usually the names of the resources the command affects, such as the name of the connection pool we want to create or the name of the domain we want to start.
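To make the format concrete, here is one of the commands used later in this article broken into its parts (the option values and the domain name are only illustrative):

```
command-name   options                        operand
start-domain   --verbose=true --debug=false   GiADomain
```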

2.2 The asadmin operating modes

The asadmin utility which we use to execute our administrative commands operates in two modes. Each mode is suitable for a specific set of tasks and has its own characteristics when it comes to performance and productivity.

Headless operating mode

In headless mode, which is very suitable for writing scripts, we call the asadmin utility and pass the command name along with its options and operands. The asadmin utility launches a JVM and executes the command. The command's exit code is 0 on success and 1 on failure. An example of this mode is similar to the following script:

asadmin version

This will simply execute the version command, which shows the application server version. To get help for a command in this mode we can use asadmin help version, or we can use asadmin version --help.
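Because the exit code follows the standard shell convention (0 means success), headless mode combines naturally with shell scripting. The following is a minimal sketch, assuming asadmin is on the path and a domain is running; the echoed messages are my own:

```
#!/bin/sh
# React to asadmin's exit code: 0 means success, 1 means failure.
if asadmin version > /dev/null 2>&1; then
    echo "GlassFish responded"
else
    echo "GlassFish did not respond"
fi
```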

Multimode operating mode

This operating mode lets us execute multiple commands in one JVM instance, so it is faster for executing commands. This mode is not suitable for script developers, as we execute the commands inside the asadmin environment and not in the shell command line.

To execute a command in this mode we enter the asadmin environment and then type our commands along with their options and operands. For example:

asadmin                  #A

asadmin> version --help  #B

#A Hit Enter to enter the asadmin environment

#B Command name with parameters

The asadmin utility is an operating system native shell script which wraps around the actual Java application responsible for executing the administration commands we issue. Later on we will see how this script can prove useful for administering domains from a system behind a proxy.

2.3 Local commands

Two sets of commands are present in asadmin which help administrators perform a variety of tasks. Local commands are commands which either affect the environment in which the application server is installed or need local access to the application server environment to execute scripts or batch files to perform a job.

A good example of local commands is the domain lifecycle management commands, which include creating a new domain, deleting an existing domain, or starting a domain. All of these commands need either to access the application server installation directory layout or to run a script on the target machine to perform the task.

Another place where we encounter local commands is the Java DB copy included with the GlassFish application server installation bundle. The following command starts the database on the local machine by creating another Java process directly through the operating system shell.

asadmin start-database

You may wonder why this command has no options or operands. Generally, all GlassFish administration commands need the minimum possible options and operands to perform their task when we use the default settings and configuration.

Each command has its own set of parameters, but most asadmin commands, local or remote, share some parameters, which are listed in table 1. A command parameter can have a default value, which is used if we do not specify it.

Table 1 List of parameters shared between local and remote commands

Parameter          Acceptable values         Description
--terse, -t        false, true               Indicates that any output data must be very concise
--echo, -e         true, false               If set to true, the command-line statement is echoed on the standard output
--interactive, -I  true, false               If set to true, only the required password options are prompted
--passwordfile     path to a password file   We will discuss it in section 5 of this article
--user, -u         no default value          A user with administrative privilege on the target domain

These parameters have no effect on the command execution; they just change the way the command shows us information and how we interact with it.

2.4 Remote commands

On the opposite side of the local commands, we have remote commands: a set of commands which affect the running application server instance's configuration and access the application server environment and file system through an application deployed in the application server itself. Therefore, the target instance should be running, and there should be a network route between the administration workstation and the GlassFish instance running on the server machine.

When we execute a remote command we should let asadmin know where the target application server instance is running and what its administration port is. The asadmin utility uses this information, along with the authentication information, to communicate with the application server over HTTP or HTTPS using a REST-like URL pattern for commands and parameters. Figure 1 demonstrates the remote and local commands.

Figure 1 The asadmin utility communication with remote and local GlassFish domain and installation

In figure 1 you can see that in the running GlassFish instance there is an application deployed under the /__asadmin/ context which listens for incoming asadmin commands. This application listens on the admin listener, which uses port 4848 by default.

Listing 1 shows the content of a sample command as it heads toward the application server. It should immediately remind you of the RESTful web services articles or books you have already read.

Listing 1 content of HTTP request for version command

GET /__asadmin/version HTTP/1.1

Content-type: application/octet-stream

Connection: Keep-Alive

Authorization: Basic YWRtaW46YWRtaW5hZG1pbg==

User-Agent: hk2-agent     #A

Cache-Control: no-cache

Pragma: no-cache

Host: localhost:4848

Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2

#A The asadmin’s agent name
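The Authorization header in Listing 1 is ordinary HTTP Basic authentication: the value after "Basic" is just the base64 encoding of user:password. Decoding the value from the listing with a standard base64 tool makes this visible:

```shell
# Decode the Basic credentials captured in Listing 1.
header='YWRtaW46YWRtaW5hZG1pbg=='
printf '%s' "$header" | base64 -d
# prints: admin:adminadmin
```

This is also a reminder of why remote administration should go over the secure admin listener: Basic credentials are only encoded, not encrypted.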

Listing 2 shows the response generated by the application server for the version command. As you can see, the response is plain and simple.

Listing 2 response generated by application server for version command

HTTP/1.1 200 OK

Content-Type: text/html

Transfer-Encoding: chunked

Date: Mon, 12 Jan 2009 19:26:01 GMT

Signature-Version: 1.0

message: GlassFish v3 Prelude (build b28c)

exit-code: SUCCESS #A

If you are thinking that trying these URLs in your browser will produce some output, you are completely right. For example, if you browse to http://localhost:4848/__asadmin/version you will get a result page similar to figure 2.

Figure 2 Using browsers to run asadmin commands and view the result in browser

Now you may wonder how a command can generate HTML-oriented output in the browser while it generates plain text output when we execute the same command with asadmin. The answer lies in the way the browser and the asadmin utility identify themselves. The asadmin utility uses hk2-agent as its agent name, and the asadmin web application responds to this type of agent with a plain text response, while it provides fully readable HTML for other agent types.

The only time we feel the difference between these two sets of commands is when we try to execute a local command on a remote machine. For example, creating a domain on a remote machine is not possible unless we have a node agent installed on that machine and we are using the DAS to control it.

A good example of a remote command is the application deployment command. We can deploy an application archive like an EAR or WAR file using asadmin. The deploy command uploads the file to the server, and the server takes care of the deployment. The following command deploys gia.war to the default GlassFish instance.

asadmin deploy gia.war

The deploy command has several parameters; we will discuss them in detail in subsequent sections. You may have already asked yourself how we identify which domain we want to execute a command against. The answer lies in some parameters shared between all remote commands which distinguish the target domain the command affects. These parameters are listed in table 2 along with descriptions and default values.

Table 2 List of parameters shared between remote commands

Parameter     Default value   Description
--host, -H    localhost       When no value is given for this parameter, the asadmin utility tries to communicate with localhost
--port, -p    4848            When no value is given for this parameter, the asadmin utility communicates with the given host on port 4848
--secure, -s  false           If the asadmin application is exposed on a secure listener, we use this parameter to prevent any security breach

All of these default parameter values allow us to write more compact, easier to read and understand commands. Based on the content of this table, we can say that the sample deploy command will deploy the application into a domain running on localhost, with port 4848 assigned to its admin listener.
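Given these defaults, the compact deploy command shown earlier is shorthand for a fully qualified form; on a default installation the following two lines should be equivalent (shown only to illustrate the defaults):

```
asadmin deploy gia.war
asadmin --host localhost --port 4848 --secure=false deploy gia.war
```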

3 Performing common tasks with CLI

Application server administration is composed of some common tasks which deal with common concepts of Java EE application servers, like domain administration, application lifecycle administration, and managed resources administration. Along with these common tasks there are many other tasks which may or may not be common between application servers. These tasks include listener administration, resource adapter administration, application server security administration (covering both application server security and Java EE security definitions), virtual server administration, monitoring, and so on. In this section we will discuss the common administration tasks.

3.1 Domain lifecycle administration

Domains are the foundation of GlassFish application server instances; each domain is an independent instance of the application server which uses the same application server installation and hardware resources. Domain lifecycle administration covers creating, starting, stopping, and deleting a domain, backing up its configuration, and recovering the configuration from previously created backups.

Creating a new domain

Creating a new domain is fairly simple when you know which attributes distinguish one domain from another. A sample command which creates a domain is as follows:

create-domain --user admin --adminport 4747  --domaindir /opt/dev/apps/domains/GiADomain --profile developer --instanceport 8282 --domainproperties jms.port=8687 --savemasterpassword=true --passwordfile=/opt/dev/apps/passfile.txt  --savelogin=true GiADomain

This is a sample of the create-domain command with most of the applicable parameters. We use --adminport to determine the admin port; --domaindir lets us create the domain in a non-default path, as we did here, instead of glassfish_home/domains, which is the default directory. The --instanceport parameter determines the port the default application listener uses. Some of the domain properties, like the JMS port, JMX port, and so on, can be set using the --domainproperties parameter. All parameters are self-describing except for the following three, which you need to understand thoroughly.

  • The --savemasterpassword parameter: Setting this option to true asks the domain creation process to save the master password into a file and facilitates unattended (headless) domain startup. We will discuss this password in more detail in section 4 of this article. For now, just remember that the default value for the master password is changeit; the value itself speaks for the importance of changing it.
  • The --passwordfile parameter: The asadmin utility policy encourages administrators not to enter passwords in the console and instead use a file which contains the required passwords. During domain creation two passwords are required: the domain administration password and the master password. To avoid typing the passwords we can store them in a plain text file and pass the file to any command we execute, instead of typing the passwords interactively.



The file content is in properties file format and can be created using any text editor. If we do not use the --passwordfile parameter and a password file containing the passwords, the asadmin utility will ask for the passwords interactively.

  • The --savelogin parameter: We will discuss automatic login in more detail in section 4 of this article, but in brief, this parameter makes it possible to execute our commands on different domains without providing a username and password on every command execution.
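For reference, the password file passed with --passwordfile is a plain properties file using the standard AS_ADMIN_* keys; a file matching the create-domain command above might look like this (the values are only examples, not recommendations):

```
AS_ADMIN_PASSWORD=adminadmin
AS_ADMIN_MASTERPASSWORD=changeit
```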

Starting and stopping a domain

We can start a domain using the start-domain command. An example of starting a domain is similar to the following snippet.

start-domain --debug=true --verbose=true --user admin --passwordfile=/opt/dev/apps/passfile.txt --domaindir=/opt/dev/apps/domains/ GiADomain

Although the command can be as simple as start-domain domain1, that is for occasions when we use all of the default values for every parameter and no customization happened when we created the domain. The --debug=true option means the domain starts in debug mode and we can attach our debugger to the running instance. The --verbose=true option means we want to see all messages the application server emits during startup, which is useful for catching errors and troubleshooting. The --passwordfile parameter lifts the need to enter the admin and master passwords; the file content is similar to the password file we used during domain creation. Finally, we need to specify where the domain resides if we created it in a non-standard location.

To stop a domain we can use a command similar to the following snippet which is fairly simple.

stop-domain --domaindir=/opt/dev/apps/domains/ GiADomain

We just need to provide the path to the directory containing the domain. The only applicable shared parameters are the ones we used; none of the other shared parameters applies to this command.

Domain utility commands

There are two utility commands for domain administration. The first lists all domains residing in the given directory, along with each domain's status indicating whether it is running. The second checks the structure of the domain.xml file.

list-domains --domaindir=/opt/dev/apps/domains/

verify-domain-xml --verbose=true --domaindir=/opt/dev/apps/domains/ GiADomain

The verify-domain-xml command is very useful when we edit the domain.xml file manually.

Domain configuration backup and recovery

Yes, we can back up a domain configuration and later restore that backup if we mess up the domain configuration. Each backup file is an archive which contains description files and vital domain configuration files, including domain.xml. By default, all backups are placed inside a directory named backups which resides inside the domain directory. The backup file names follow a naming pattern with an increasing sequence number, so the name tells us, for example, that an archive is the first backup of the domain. To create a backup, a command similar to the following snippet will do the trick.

backup-domain --description "backup made for glassfish in action book" --domaindir /opt/dev/apps/domains/ GiADomain

After some time we will have several backup archives for each of our domains, and we will need a command to get a list of all available backups, or of the backups related to one single domain. To get the list of backups for a single domain we can use:

list-backups --domaindir /opt/dev/apps/domains/ GiADomain

Or, to get a list of all backups of all domains in the default domains directory, we can use the list-backups command alone, with no parameters.

Now that we can list the available backups for each domain, we need a command which allows us to restore a domain. To restore a domain to one of its previous backups, we should stop the domain and then use the restore command, which is similar to:

restore-domain --domaindir /opt/dev/apps/domains/ --filename GiADomain

The command will simply pick up the backup archive and restore the given domain using its content.
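Putting the lifecycle commands together, a complete restore cycle for our sample domain could look like the following sketch. I omit the --filename parameter here on the assumption that the most recent backup is then used; pass --filename explicitly to pick an older archive, and check asadmin help restore-domain on your version to confirm the default:

```
asadmin stop-domain --domaindir /opt/dev/apps/domains/ GiADomain
asadmin restore-domain --domaindir /opt/dev/apps/domains/ GiADomain
asadmin start-domain --domaindir /opt/dev/apps/domains/ GiADomain
```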

Deleting a domain

We should use this command very carefully, as it will delete the domain directory with all of its content, including backups, deployed applications, configuration files, and so on. To delete a domain we can use the delete-domain command as follows to delete our GiADomain domain.

delete-domain --domaindir=/opt/dev/apps/domains/ GiADomain

Domain management is the most basic task in application server administration. Every asadmin command, including the domain administration commands, has a manual page included in the asadmin commands package and accessible through the asadmin utility. An example of getting help for the create-domain command is:

asadmin help create-domain

You can see the complete list of parameters and their associated values for each discussed command using the help command. All of the domain administration commands are local commands. We can use the list-commands command to see the list of all available commands in both the remote and local categories.

3.2 Applications lifecycle administration

Applications, including web, enterprise, resource adapter, and other modules, are our bits inside the application server which our clients rely on to get the required services. Applications can be deployed from the command line in a very effective and customizable manner. Like all other administration commands, application deployment is a single-parameter command when we want to use the default parameters.

Deploying and undeploying applications

To deploy an application independently of its type (WAR, EAR, RAR, and so on) we can use the deploy command. This command, one of the most basic remote commands, deploys any kind of application known to the GlassFish application server to its dedicated container. The command has many options, most of which are omitted here to show the common use case. The GlassFish instance should be up and running to use deploy or any other remote command.

deploy --host localhost --port 4747 --user admin --name aName --contextroot cRoot --upload=true --dbvendorname derby --createtables=true /home/masoud/GiA.ear

Deploying a web application using the default parameters is as simple as passing the archive to the deploy command. The context root will be the same as the archive file name.

deploy GiA.war

We can remove an application using the undeploy command. The command removes the application bits from the application server directories and removes the application from the list of known applications in the domain.xml file. The following snippet shows how we can use it.

undeploy  --droptables=true aName

The undeploy command uses the application name to remove it from the application server; we have the liberty to drop the automatically created tables (for enterprise applications) as well. Deploying and undeploying other application types is similar to enterprise application deployment. You can view the complete list of parameters by executing deploy with no parameters, or view the complete help content using the help deploy command.

Application utility commands

For domains with the developer profile we have only one utility command, named list-applications, which lists the deployed applications. In addition to all shared parameters, this command has one important parameter named --type, which accepts application, connector, ejb, jruby, and web as its value. The following command lists all web applications deployed in our domain.

list-applications --type=web

You can see all commands related to application administration by trying asadmin application, which shows all commands that have the word application in their names.

3.3 Application server managed resources

Dealing with managed resources is one of the most recurring tasks an administrator faces during application deployment and development. The most important types of managed resources are JDBC and JMS resources, which we discuss in this section.

JDBC connection pool administration

A JDBC connection pool is a group of reusable connections for a particular database. Because creating each new physical connection is time consuming, the server maintains a pool of available connections to increase performance. When an application requests a connection, it obtains one from the pool, and when the application closes the connection, the connection is returned to the pool instead of being closed. To create a JDBC connection pool we can use the create-jdbc-connection-pool command.

create-jdbc-connection-pool --user admin --passwordfile=/opt/dev/apps/passfile.txt --host localhost --port 4747 --datasourceclassname org.apache.derby.jdbc.ClientDataSource --restype javax.sql.XADataSource --property portNumber=1527:password=APP:user=APP:serverName=localhost:databaseName=GiADatabase:create=true GiA_Derby_Pool

The command may look scary at first, but if you take a closer look you can see that all of the parameters and values are already familiar. We can delete a JDBC connection pool using delete-jdbc-connection-pool, which in addition to the shared parameters accepts an operand: the name of the connection pool we want to delete.
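The borrow-and-return life cycle described above can be sketched in plain Java. This is only a conceptual illustration with invented names and strings standing in for physical connections, not GlassFish's actual pool implementation:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A toy pool: resources are created once up front, borrowed by
// clients, and returned on "close" instead of being destroyed.
public class SimplePool {
    private final BlockingQueue<String> idle;

    public SimplePool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            // A real JDBC pool would open a physical connection here.
            idle.add("connection-" + i);
        }
    }

    public String borrow() throws InterruptedException {
        return idle.take();          // blocks when the pool is exhausted
    }

    public void release(String conn) {
        idle.offer(conn);            // returning, not closing
    }

    public int available() {
        return idle.size();
    }

    public static void main(String[] args) throws InterruptedException {
        SimplePool pool = new SimplePool(2);
        String c = pool.borrow();
        System.out.println("borrowed " + c + ", available: " + pool.available());
        pool.release(c);
        System.out.println("after release, available: " + pool.available());
    }
}
```

The key point mirrored from the text: the expensive step (creation) happens once, and "closing" merely hands the connection back for reuse.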

Now that we have a connection pool we need a reference to the connection pool in the JNDI tree to ensure that our applications can access the connection pool through the application server.

JDBC resources administration

A JDBC resource, or data source, is the mediator between an application and the connection pool; it provides a JNDI name used to access the connection pool. Multiple JDBC resources can point to a single connection pool.

create-jdbc-resource --user admin --port 4747 --host localhost --passwordfile=/opt/dev/apps/passfile.txt --connectionpoolid GiA_Derby_Pool jdbc/GiAPool

We chose a name for our JDBC connection pool when we created it; when we create the data source, we use that name to identify which connection pool the JDBC resource refers to.

We can delete a JDBC resource using delete-jdbc-resource, which in addition to the shared parameters accepts an operand: the JNDI name of the JDBC resource we want to delete.

JMS Destination administration

In enterprise applications, asynchronous communication is one of the inevitable requirements. JMS destinations are the virtual places to which messages are sent; JMS clients listen on these destinations to receive the messages. We can create a JMS destination, either a queue or a topic, with the create-jmsdest command.

create-jmsdest --user admin --passwordfile=/opt/dev/apps/passfile.txt --host localhost --port 4747 --desttype queue  --property User=public:Password=public GiAQueue

We can create a JMS topic by changing the value of the --desttype parameter to topic. To delete a JMS destination we can use the delete-jmsdest command, which accepts the name of the destination to delete as an operand. For example, the following command deletes the JMS queue we created in the previous step.
delete-jmsdest GiAQueue

Make sure you create the queue again, as we will need it in the next steps.

JMS resources administration

JMS connection factories are the doors to consuming and producing JMS messages. We need to define these factories before we can access a JMS destination. To create a JMS connection factory we can use the create-jms-resource command. For example:

create-jms-resource --user admin --passwordfile=/opt/dev/apps/passfile.txt  --host localhost --port 4747  --restype javax.jms.QueueConnectionFactory --property ClientId=MyID:UserName=guest:Password=guest jms/GiAQueueConnectionFactory

Our sample command creates a connection factory that provides us with queue connections. When we need to interact with a JMS topic, we can change the --restype parameter to javax.jms.TopicConnectionFactory.

Deleting a JMS resource is possible with the delete-jms-resource command, which takes the name of the resource to delete as an operand in addition to the shared parameters.

Utility commands

There are several utility commands related to managed resources; table 3 shows these commands along with a sample and a description.

Table 3 Some asadmin utility commands for administering managed resources

flush-jmsdest GiAQueue*: Purges the messages in a JMS destination

jms-ping*: Checks whether the JMS provider is running; the command can ping cluster-wide configurations

list-jms-resources --restype javax.jms.TopicConnectionFactory*: Lists all JMS resources, optionally filtered by resource type

list-jmsdest*: Lists all JMS destinations

list-jdbc-connection-pools*: Lists all JDBC connection pools

list-jdbc-resources*: Lists all JDBC data sources

*I removed all shared parameters to fit the commands in the table.

The asadmin utility has a very rich set of commands for administering managed resources; we have covered only the basic ones here.

3 Monitoring with CLI

Monitoring is an integral part of an administrator's daily task list. Monitoring helps us find possible problems, detect performance bottlenecks, discover system resource shortages, plan system capacity or upgrades, and keep an eye on the day-to-day performance of the system. The GlassFish application server infrastructure allows us to monitor almost any object that corresponds to a Java EE concept. For example, we can monitor an HTTP listener, a servlet, or a JDBC connection pool.

To monitor an object, the system must gather statistics about it, and this gathering has some overhead even in ideal software, since extra bytecode must be executed to collect the statistics. Because the GlassFish application server should be well tuned out of the box, all monitoring functionality is turned off by default to prevent any extra overhead during normal operation.

GlassFish provides three monitoring levels for any object available for monitoring. These levels are:

  • OFF: No monitoring information is gathered
  • LOW: Only changes to the object's attributes are monitored
  • HIGH: In addition to attribute changes, method executions are counted

The asadmin utility provides multiple ways of getting statistical data from the application server core. Several commands are introduced to facilitate viewing this statistical information. These commands include:

  • The monitor command: We can use this command to view common statistical information about different application server services. We can filter the statistics down to a single monitorable object in a service, or view cumulative statistics about all objects in that service. For example, statistics for all JDBC connection pools, or for just one pool, can be viewed using the monitor command.
  • The list command: We can use this command to get the hierarchy of all monitorable and configuration objects, or of a specific type of object such as JDBC connection pools. The list command has an important boolean parameter named monitor that determines whether we see monitorable objects or configuration objects.
  • The set command: We can use this command to change the monitoring level of a monitorable object. More broadly, we can use it to change the application server configuration by changing the attributes of configuration objects.
  • The get command: We can get customized monitoring information about different attributes of different monitorable objects. More broadly, we can use this command to view the value of any configuration object's attributes.

3.1 Dotted names

All monitorable and configuration objects form a tree composed of the objects and their children. The child objects have attributes that are either monitoring factors or configuration elements. Objects in the tree are separated by dots, so we can simply navigate from the root object down to any lower-level child. For example, if we execute list --monitor=true server we get a list of all monitorable objects that are direct children of the server object. The list can look similar to the following snippet.

server.applications
server.connector-service
server.http-service
server.jms-service
server.jvm
server.orb
server.resources
server.thread-pools
server.transaction-service
After getting the top-level children we can go deeper toward the leaf nodes by listing the immediate children of any object. For example, to get the immediate children of the http-service object we can use list --monitor=true server.http-service, which shows all immediate children of http-service and can result in output similar to:

server.http-service.connection-queue
server.http-service.dns
server.http-service.file-cache
server.http-service.keep-alive
server.http-service.server
We can go further down and monitor any monitorable object in the application server. We can use list --monitor=true server.http-service.* to view a recursive list of the immediate children and their children in turn. We use * with the list command as a placeholder for any part of the dotted name. For example, we can use server.jvm.* or server.resource* to get all children of server.jvm, or all children whose names start with resource.

The dotted names are not provided solely for monitoring; we can use them to view and change the configuration of the application server through the asadmin utility. We can get the list of configurable objects by omitting the --monitor=true parameter from the list command.

Enabling the GlassFish monitoring

GlassFish monitoring is turned off when we install the application server; we need to enable the monitoring system before it gathers statistical information for us. To do so, we can use the set command to change the monitoring level for one or all of the GlassFish services, such as JDBC connection pools, HTTP listeners, and so on. Before changing the monitoring level from OFF to HIGH, we can check the current level with a simple get command similar to the following snippet.

get server.monitoring-service.module-monitoring-levels.*

Executing this snippet shows a result similar to the following, which means that monitoring is OFF for all services.

server.monitoring-service.module-monitoring-levels.connector-connection-pool = OFF
server.monitoring-service.module-monitoring-levels.connector-service = OFF
server.monitoring-service.module-monitoring-levels.ejb-container = OFF
server.monitoring-service.module-monitoring-levels.http-service = OFF
server.monitoring-service.module-monitoring-levels.jdbc-connection-pool = OFF
server.monitoring-service.module-monitoring-levels.jms-service = OFF
server.monitoring-service.module-monitoring-levels.jvm = OFF
server.monitoring-service.module-monitoring-levels.orb = OFF
server.monitoring-service.module-monitoring-levels.thread-pool = OFF
server.monitoring-service.module-monitoring-levels.web-container = OFF
We can enable monitoring for JDBC connection pools by using the set command as follows:

set server.monitoring-service.module-monitoring-levels.jdbc-connection-pool=HIGH

Now we can be sure that any activity involving any of the JDBC connection pools will be monitored, and its statistical information will be available for us to view and analyze.

3.2 The monitor command

Using the monitor command is the simplest way to view monitoring information about the GlassFish application server. The following snippet shows the monitor command syntax; I have not included the shared parameters, to make the command-specific parameters easier to see.

monitor --type monitor_type [--filename file_name] [--interval interval] [--filter filter_name] instance_name

We can save the command output to a CSV file using the --filename parameter, and we can determine the interval at which the command retrieves statistics using the --interval parameter. The instance_name operand lets us use this command on a DAS to retrieve statistics for one of its member instances. Two other parameters determine which service, and which object in that service, we want to monitor. The most important acceptable values for the --type parameter are listed in table 4 along with descriptions.

Table 4 Acceptable values for the monitor command's type parameter

statefulsession: Statistics related to stateful session beans

statelesssession: Statistics related to stateless session beans

connectorpool: JCA connector pool statistics

endpoint: Web service endpoint statistics

entitybean: Entity bean statistics

filecache: GlassFish HTTP-level file cache statistics

httplistener: HTTP listener statistics

httpservice: HTTP service statistics

jdbcpool: JDBC connection pool statistics

jvm: Underlying JVM statistics

servlet: Servlet statistics

The following snippet shows an example of using the monitor command to monitor our newly created JDBC connection pool.

monitor --type jdbcpool --interval 10 --filter GiA_Derby_Pool server

Here we view the statistics related to one JDBC connection pool on the default instance; the interval for retrieving the statistics is 10 seconds. The output of this command is very similar to figure 3.

Figure 3 Outputs for monitoring GiA_Derby_Pool JDBC connection pool using monitor command

Monitoring results are categorized under several columns, and for each column we have several statistics, such as the highest number of free connections and the current number of free connections. The next service we want to monitor is the application server runtime, that is, the JVM instance our application server runs on. To monitor the JVM we can use a command similar to the following snippet.

monitor --type jvm --interval 1 server

The command output is similar to figure 4, which shows almost no change in heap size during the monitoring period.

Figure 4 outputs for JVM monitoring using the monitor command

The monitor command, combined with the LOW monitoring level, is very useful when we need an overview of different GlassFish services in a production environment.

3.3 Monitoring and configuration by get and set commands

The get and set commands give us fine-grained access to individual monitoring and configuration attributes, addressed by the dotted names discussed earlier.

Monitoring a JDBC connection pool

Now we want to use these commands to monitor our newly created JDBC connection pool.

get --monitor=true server.resources.GiA_Derby_Pool.*

This command returns a sizable set of statistical information about our connection pool. We can also narrow the command to return only one of the connection pool attributes instead of the complete statistical set. To get the number of free connections in our JDBC pool we can use the following command:

get --monitor=true server.resources.GiA_Derby_Pool.numconnfree-current

We can use get to view the configuration attributes of any configurable object in the application server. When we use get without --monitor=true, we are asking for a configuration attribute instead of a monitoring attribute. For example, to view the maximum pool size of our connection pool we can use the following command:

get server.resources.jdbc-connection-pool.GiA_Derby_Pool.max-pool-size

We can use the set command alongside the get command to change configuration parameters using the dotted names and the asadmin utility.

set server.resources.jdbc-connection-pool.GiA_Derby_Pool.max-pool-size=64

As you can see, it is simple and effective to get detailed monitoring information out of the application server when tracking down performance-related issues.

4 CLI security

The CLI is our means of administering the application server. While it is very useful for daily tasks, it can be catastrophic if we fail to protect it from unauthorized access.

4.1 The asadmin related credentials

In the commands we studied in this article, you saw that for some we used the --user and --passwordfile parameters and for some we did not. The reason we could execute a command with or without providing credentials goes back to the way we created the domain. If you look at the command we used to create the domain, you can see that we used --savemasterpassword=true and --savelogin=true. These two parameters save the domain credentials, and asadmin reuses them whenever we execute a command.

The so-called master password

The master password is designated to protect the domain's encrypted files, such as the digital certificate stores, from unauthorized access.

When we start a GlassFish domain, the startup process needs to read these certificates and therefore needs to open the certificate store files. GlassFish needs the master password to open the store files, so we should either provide the master password interactively during start-domain execution or use the --passwordfile parameter to provide the process with the master password.

The most common situation for using a previously saved master password is when we need our application server to start in an unattended environment, for example when we run it as a Windows service or a Linux daemon.

To provide this password to the application server startup process, we should use --savemasterpassword=true during domain creation. If we use this parameter, the master password is saved into a file named master-password, which resides inside the domain's config directory.

We said that the master password protects some files from unauthorized access. One such set of files is the digital certificate stores, which differ between the developer and cluster profiles.

The domain administration credentials

So far, we have seen that we can execute asadmin commands with or without providing the password file. You may ask how it is possible that we neither enter the password interactively nor provide the password file, and yet our commands are carried out by asadmin.

We did not need to provide a password interactively, or a password file, because of a parameter we used during domain creation. The --savelogin=true parameter asks the domain creation command to save the administration password of our domain into a file named .asadminpass, located inside our home directory. The content of this file is similar to the following snippet.

asadmin://admin@localhost:4747 YWRtaW5hZG1pbg==

This line simply records the administration username and password for a domain running on localhost that uses 4747 as its administration port. The password is stored in encoded form to keep it from being read at a glance.

There is an asadmin command that performs a task similar to --savelogin for domains that are already created. We can use this command to save the credentials so that the asadmin utility does not ask for them, either interactively or through the --passwordfile parameter. The following snippet shows how we can use the login command.

login   --host localhost --port 9138

This command interactively asks for the administration username and password, and after successful authentication it saves the provided credentials into the .asadminpass file. After we execute it, the content of .asadminpass will be similar to the following snippet.

asadmin://admin@localhost:4747 YWRtaW5hZG1pbg==

asadmin://admin@localhost:9138 YWRtaW5hZG1pbg==

The .asadminpass file contains an encoded copy of the passwords rather than plain text; the encoding deters casual reading, but the file should still be protected carefully, since anyone who obtains it may be able to recover the stored credentials.
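In many GlassFish releases the stored entry is a simple Base64 encoding rather than a one-way hash, which is exactly why the file's permissions matter. The following illustrative snippet decodes the sample entry shown above:

```java
import java.util.Base64;

// Decodes the token from the sample .asadminpass line above.
// Base64 is an encoding, not encryption: it is fully reversible.
public class DecodeAsadminpass {
    public static void main(String[] args) {
        String token = "YWRtaW5hZG1pbg==";
        String decoded = new String(Base64.getDecoder().decode(token));
        System.out.println(decoded);  // prints: adminadmin
    }
}
```

Keep the file readable only by your own user account, just as you would an SSH private key.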

The login command comes in very handy when we need to administer several domains from one administration workstation. We can simply log in to each remote or local domain we administer, and asadmin will pick up the correct credentials from the .asadminpass file based on the --host and --port parameters.

Changing passwords

As administrators, we usually like to change our passwords from time to time to maintain a higher level of security. GlassFish lets us easily change the master or administration password using the provided commands.

To change the master password we must make sure the application server is not running, and then we can run change-master-password as follows:

change-master-password --domaindir=/opt/dev/apps/domains/ --savemasterpassword=true GiADomain

After executing this command, the asadmin utility interactively asks us for the new master password, which must be at least 8 characters long. We use --savemasterpassword to ensure that the master password is saved, so we do not need to provide it to asadmin during domain startup. We also need to update the password file if we are using one to feed the asadmin utility the master password.

To change the administration password we can use change-admin-password, which is a remote command and needs the application server to be running. The following snippet shows how we can change the administration password for a given username.

change-admin-password --host localhost --port 4747 --user admin

After executing this command, asadmin asks for the administration password and then changes the password of the given user. If we change a user's password, we need to log in to that domain again to keep using automatic login. We also need to update the password file if we are using one to feed the asadmin utility its required passwords.

In the advanced administration article we will discuss how we can add more administration users and how we can use an existing credential store, such as an LDAP server, for administration authentication.

4.2 Protecting passwords

There are many places in an application server where we need to provide usernames and passwords that the server uses to communicate with external systems such as JMS brokers and databases. Each part of the overall infrastructure usually has its own policies and permission sets, so we should protect these passwords and avoid leaving them in plain text anywhere, even in the application server configuration files, which can be opened with any text editor.

GlassFish provides an elegant way of protecting these passwords: it gives us the commands and infrastructure to encrypt the passwords and to use each encrypted password's assigned alias in the application server configuration files. The encrypted passwords are stored inside an encrypted file named domain-passwords, which resides inside the domain's config directory. The domain-passwords file is encrypted using the master password, so if the master password is compromised, this file can be decrypted.

The command for creating password aliases is a remote command named create-password-alias; a sample usage is shown in the following snippet:

create-password-alias --user admin --host localhost --port 4747 GiA_Derby_Pool_Alias

After we execute this command, the asadmin utility asks for the password that we want this alias to hold; it may also ask for administration credentials if we are not logged in.

Now that we have created the alias, we can use it with the alias access syntax, which follows the ${ALIAS=password-alias-name} format. For example, if we want to create the JDBC connection pool from section 3.3 using the alias, we can change the command as follows:

create-jdbc-connection-pool --user admin --host localhost --port 4747 --datasourceclassname org.apache.derby.jdbc.ClientDataSource --restype javax.sql.XADataSource --property portNumber=1527:password=${ALIAS=GiA_Derby_Pool_Alias}:user=APP:serverName=localhost:databaseName=GiADatabase:create=true GiA_Derby_Pool

Password aliasing is not only for external resources; it can also protect the content of the password file, which holds the administration and master passwords used instead of typing them when asadmin asks interactively. We can simply create password aliases for the administration and master passwords and use them in the password file. Sample content of a password file with aliased passwords looks like the following.

AS_ADMIN_PASSWORD=${ALIAS=admin-password-alias}
AS_ADMIN_MASTERPASSWORD=${ALIAS=master-password-alias}

Like all other administration command sets, the alias administration set includes a few more commands that help with managing the aliases. The other commands in the set are:

  • The delete-password-alias command: We can delete an alias when we are sure we no longer use it.
  • The list-password-aliases command: We can get a list of all aliases in our domain-passwords file.
  • The update-password-alias command: We can update an alias by changing the password it holds.

Password aliasing is very helpful when we do not want to hand our passwords to the personnel in charge of application server management or administration tasks; instead, we can provide them with aliased passwords they can use.

5 Summary

The GlassFish command line administration interface is one of the most powerful and feature-complete command-line administration utilities in the application server market. The asadmin utility provides all the commands we need to administer any part of the application server directly from the command line, instead of wandering through XML configuration files, third-party utilities, or the web-based administration console.

The asadmin utility is not only a client-side application; it is a composition of a client-side and a server-side application, which makes it possible to use asadmin to administer remote domains as well as local domains.

The asadmin utility has enough intelligence embedded in its routines to help us find the required commands and their structure simply by entering part of a command. Using the asadmin utility we can administer domains, applications, and application server managed resources.

When it comes to security, one of the highest concerns of administrators, GlassFish provides many measures to protect passwords and minimize the chance of their compromise, including encrypted storage for external resource passwords and asadmin administration passwords.

Manage, Administer, and Monitor GlassFish v3 from Java Code Using AMX and JMX

Management is one of the most crucial parts of an application server's functionality. Development of the applications we deploy into the server happens once, with minor iterations during the software lifecycle, but management is a lifetime task. One of the very powerful features of the GlassFish application server is the set of administration and management channels it provides for administrators, and for developers who want to extend the application server's administration and management interfaces.

GlassFish, as an application server capable of serving mission-critical and large-scale applications, benefits from several administration channels, including the CLI, the web-based administration console, and finally the possibility of managing the application server using the standard Java Management Extensions (JMX).

Not only does GlassFish fully expose its management functionality as JMX MBeans, it also provides an easier way to manage the application server using local objects that proxy the JMX MBeans. These local objects are provided as the AMX APIs, which remove the need to learn JMX for administrators and developers who want to interact with the application server from code.

GlassFish provides very powerful monitoring APIs in the form of AMX MBeans, which let developers and administrators monitor any aspect of the application server from Java code without needing to understand the JMX APIs or the complexity of monitoring factors and statistics gathering. These monitoring APIs allow developers to monitor a set of Java EE functionalities together, or just a single attribute of a single configuration piece.

GlassFish's self-management capability is another powerful feature based on the AMX and JMX APIs; it lets administrators automate daily tasks that would otherwise consume a significant amount of time. Self-management can manage the application server dynamically by monitoring it at runtime and changing its configuration on the fly based on predefined rules.

1 Java Management Extensions (JMX)

JMX, native to the Java platform, was introduced to give Java developers a standard, easy-to-learn way to manage and monitor Java applications and Java-enabled devices. As architects, designers, and developers of Java applications, which can be as small as an in-house invoice management system or as large as a running stock exchange, we need a way to expose the management of our software to industry-accepted management tools, and JMX is the answer to this need.

1.1 What is JMX?

JMX is part of the Java Standard Edition; it has been present since the early days of the Java platform and has seen many enhancements during the platform's evolution. The JMX-related specifications define the architecture, design patterns, APIs, and services in the Java programming language for managing and monitoring applications and Java-enabled devices.

Using JMX technology, we develop Java classes that perform management and monitoring tasks and expose a set of their operations and attributes by means of an interface, which is later exposed to JMX clients through specific JMX services. The objects we use to perform and expose management functionality are called managed beans, or MBeans for short.

For MBeans to be accessible to JMX clients, which use them to perform management tasks or gather monitoring data, they must be registered in a registry that later lets our JMX client applications find and use them. This registry is one of the fundamental JMX services and is called the MBean server.

Now that we have our MBeans registered with a registry, we need a way for clients to communicate with the running application that registered them, in order to execute the MBeans' operations. This part of the system is the JMX connector, which lets us communicate with the agent from a remote or local management station. The JMX connector and adapter APIs provide a two-way converter that can transparently connect to a JMX agent over different protocols, giving management software a standard way to communicate with JMX agents regardless of the communication protocol.

1.2 JMX architecture

JMX benefits from a layered architecture, heavily based on interfaces, that provides independence between the layers in terms of how each layer works and how data and services are provided to each layer by the one below it.

We can divide the JMX architecture into three layers. Each layer relies only on the layer directly below it and is not aware of the functionality of the layers above. These layers are the instrumentation, agent, and management layers. Each layer provides services either for other layers, for in-JVM clients, or for remote clients running in other JVMs. Figure 1 shows the different layers of the JMX architecture.

Figure 1 The JMX layered architecture and each layer's components

Instrumentation layer

This layer contains the MBeans and the resources that MBeans are intended to manage. Any resource that has a Java object representation can be instrumented by MBeans. MBeans can change the values of an object's attributes or call its operations, which can affect the resource that the Java object represents. In addition to MBeans, the notification model and the MBean metadata objects belong to this layer. There are two different types of MBeans for different use cases:

Standard MBeans: A Standard MBean consists of an MBean interface, which defines the exposed operations and properties (via getters and setters), and the MBean implementation class. In Standard MBeans, the implementation class and interface names must follow a standard naming pattern: the interface is named ClassNameMBean and the implementation class is named ClassName. There is another kind of standard MBean, called an MXBean, that removes the need to follow this pattern: the interface is named AnythingMXBean and the implementation class can have any name. We will discuss naming in more detail later on.
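As a minimal sketch of the Standard MBean pattern (the CacheConfig names are invented for this example), the following registers an MBean with the JVM's built-in platform MBean server and manipulates it through the server:

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class StandardMBeanDemo {

    // Management interface: must be named <ClassName>MBean.
    public interface CacheConfigMBean {
        int getSize();
        void setSize(int size);
    }

    // Implementation class: public, with the matching name.
    public static class CacheConfig implements CacheConfigMBean {
        private int size = 100;
        public int getSize() { return size; }
        public void setSize(int size) { this.size = size; }
    }

    public static void main(String[] args) throws Exception {
        // The platform MBean server stands in for GlassFish's agent here.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=CacheConfig");
        server.registerMBean(new CacheConfig(), name);

        // Clients go through the MBean server, never the object directly.
        server.setAttribute(name, new Attribute("Size", 256));
        System.out.println("Size = " + server.getAttribute(name, "Size"));
    }
}
```

Note how the attribute name "Size" is derived from the getSize/setSize pair by the JMX introspector; this is the naming convention the paragraph above describes.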

Dynamic MBeans: A dynamic MBean, instead of implementing a static interface with a set of predefined methods, implements the javax.management.DynamicMBean interface and relies on metadata that describes the attributes and operations it exposes. MBean client applications call generic getters and setters whose implementation must resolve the attribute or operation name to its intended behavior. Faster instrumentation of an already completed application, and the richness of the information provided by the MBean metadata classes, are two benefits of dynamic MBeans.
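A bare-bones dynamic MBean, again with invented names, shows the generic getters and setters and the runtime-built metadata that replace a static interface:

```java
import java.util.HashMap;
import java.util.Map;
import javax.management.*;

// Attributes live in a map and are described by MBeanInfo built at
// runtime instead of by a compile-time interface.
public class PropertiesMBean implements DynamicMBean {
    private final Map<String, Object> attrs = new HashMap<>();

    public PropertiesMBean() { attrs.put("Level", "OFF"); }

    public Object getAttribute(String name) throws AttributeNotFoundException {
        if (!attrs.containsKey(name)) throw new AttributeNotFoundException(name);
        return attrs.get(name);   // generic getter: name resolved at runtime
    }

    public void setAttribute(Attribute a) {
        attrs.put(a.getName(), a.getValue());   // generic setter
    }

    public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String n : names) list.add(new Attribute(n, attrs.get(n)));
        return list;
    }

    public AttributeList setAttributes(AttributeList list) {
        for (Object o : list) setAttribute((Attribute) o);
        return list;
    }

    public Object invoke(String op, Object[] params, String[] sig) {
        return null;   // no operations exposed in this sketch
    }

    public MBeanInfo getMBeanInfo() {
        // Metadata describing the single exposed attribute.
        MBeanAttributeInfo level = new MBeanAttributeInfo(
            "Level", "java.lang.String", "Monitoring level", true, true, false);
        return new MBeanInfo(getClass().getName(), "Demo dynamic MBean",
            new MBeanAttributeInfo[] { level }, null, null, null);
    }

    public static void main(String[] args) throws Exception {
        PropertiesMBean mbean = new PropertiesMBean();
        mbean.setAttribute(new Attribute("Level", "HIGH"));
        System.out.println("Level = " + mbean.getAttribute("Level"));
    }
}
```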

Notification Model: JMX technology introduces a notification model based on the Java event model. Using this event model, MBeans can emit notifications, and any interested party can receive and process them; interested parties can be management applications or other MBeans.

MBean Metadata Classes: These classes contain the structures that describe all components of an MBean's management interface, including its attributes, operations, notifications, and constructors. For each of these, the MBeanInfo class includes a name, a description, and its particular characteristics (for example, whether an attribute is readable, writable, or both; for an operation, the signature of its parameters and its return type).
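The Dynamic MBean dispatch and the metadata classes described above can be made concrete with a short sketch; the DynamicWorker name and its members are illustrative, not from the book's samples:

```java
import javax.management.*;

// A minimal Dynamic MBean: one read-only attribute and one operation, both
// resolved by name at runtime instead of through a static interface.
public class DynamicWorker implements DynamicMBean {

    private int workersCount = 5;

    public Object getAttribute(String name) throws AttributeNotFoundException {
        if ("WorkersCount".equals(name)) {
            return workersCount;
        }
        throw new AttributeNotFoundException(name);
    }

    public void setAttribute(Attribute attribute) throws AttributeNotFoundException {
        // All attributes are read-only in this sketch.
        throw new AttributeNotFoundException(attribute.getName());
    }

    public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String name : names) {
            try {
                list.add(new Attribute(name, getAttribute(name)));
            } catch (AttributeNotFoundException ignored) {
            }
        }
        return list;
    }

    public AttributeList setAttributes(AttributeList attributes) {
        return new AttributeList();
    }

    public Object invoke(String actionName, Object[] params, String[] signature)
            throws MBeanException {
        if ("stopAllWorkers".equals(actionName)) {
            int stopped = workersCount;
            workersCount = 0;
            return stopped;
        }
        throw new MBeanException(new IllegalArgumentException(actionName));
    }

    public MBeanInfo getMBeanInfo() {
        // The metadata classes describe the management interface to clients.
        MBeanAttributeInfo attr = new MBeanAttributeInfo("WorkersCount", "int",
                "Current number of workers", true, false, false);
        MBeanOperationInfo op = new MBeanOperationInfo("stopAllWorkers",
                "Stops all workers and returns how many were stopped",
                new MBeanParameterInfo[0], "int", MBeanOperationInfo.ACTION);
        return new MBeanInfo(getClass().getName(), "Dynamic worker MBean",
                new MBeanAttributeInfo[]{attr}, null,
                new MBeanOperationInfo[]{op}, null);
    }
}
```

Registering an instance with an MBean server works exactly as for Standard MBeans; the server simply calls getMBeanInfo to learn what the MBean exposes.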

Agent layer

This layer contains the JMX agents, which are intended to expose the MBeans to management applications. The JMX agent implementation specifications fall under this layer. Agents are usually located in the same JVM as the MBeans they expose, but this is not an obligation. A JMX agent consists of an MBean server and some helper services that facilitate MBean operations. Management software accesses the agent through an adapter or connector, depending on the management application's communication protocol.

MBean Server: This is the MBean registry, where management applications look to find which MBeans are available to them. The registry exposes each MBean's management interface and not its implementation class. The MBean registry provides two interfaces for accessing the MBeans, one for remote clients and one for clients in the same JVM. MBeans can be registered by other MBeans, by the management application, or by the agent itself. MBeans are distinguished by a unique name, which we will discuss further in the AMX section.
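As a quick illustration of the registry, the local JVM's MBean server can be queried with a pattern ObjectName; a minimal sketch using only JDK classes:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RegistryLookup {
    public static void main(String[] args) throws Exception {
        // The platform MBean server is the in-JVM registry; every JVM
        // pre-registers standard MBeans in the java.lang domain.
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> names =
                mbs.queryNames(new ObjectName("java.lang:*"), null);
        for (ObjectName name : names) {
            System.out.println(name);
        }
    }
}
```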

Agent Services: There are some helper services that provide extra functionality to MBeans and the agent. These services include a timer, a dynamic class loader, monitors that observe numeric or string attributes of MBeans, and finally the relation service, which defines associations between MBeans and enforces the cardinality of the relation based on predefined relation types.

Management layer

The management layer contains components required for developing management applications capable of communicating with JMX agents. Such components provide an interface for a management application to interact with JMX agents through a connector. This layer may contain multiple adapters and connectors to expose the JMX agent and its attached MBeans to different management platforms, such as SNMP, or to expose them in a semantically rich format like HTML.

JMX related JSRs

Six different JSRs have been defined for JMX-related specifications during the past 10 years. These JSRs are:

JMX 1.2 (JSR 3): The first version of JMX, which was later included in J2SE 5.0

J2EE Management (JSR 77): A set of standard MBeans to expose application servers' resources, like applications, domains, and so on, for management purposes.

JMX Remote API 1.0 (JSR 160): Interaction with JMX agents using RMI from a remote location.

Monitoring and Management Specification for the JVM (JSR 174): A set of APIs and standard MBeans for exposing the JVM's management to any interested management software.

JMX 2.0 (JSR 255): The new version of JMX for Java SE 7, which introduces the use of generics, annotations, extended monitors, and so on.

Web Services Connector for JMX Agents (JSR 262): Defines a specification for using Web Services to access JMX instrumentation remotely.

1.3 JMX benefits

What are the benefits of JMX that made the JCP define so many JSRs for it, and why did we not follow another management standard like IEEE Std 828-1990? The answer lies in the following JMX benefits:

Java needs an API that is open for extension and closed for change in order to integrate with emerging requirements and technologies; JMX achieves this through its layered architecture.

JMX is based on well-defined and proven Java technologies, like the Java event model, for providing some of the required functionality.

The JMX specification and implementation let us use it in any Java-enabled software, at any scale.

Almost no change is required for an application to become manageable by JMX.

Many vendors use Java to enable their devices; JMX provides one standard to manage both software and hardware.

You can imagine many other benefits for JMX which are not listed above.

1.4 Managed Beans (MBeans)

We discussed that there are generally two types of MBeans we can choose from to implement our instrumentation layer. Dynamic MBeans are a bit more complex and we would rather skip them in this crash course, so in this section we will discuss how MXBeans can be developed and used, both locally and remotely, to prepare ourselves for understanding and using AMX to manage GlassFish.

We said that we should write an interface that defines all exposed operations of the MBean, both for MXBeans and for Standard MBeans, so first we will write the interface. Listing 1 shows the WorkerMXBean interface. The interface has one operation that is supposed to stop the worker threads and two properties that return the current number of worker threads and the maximum number of worker threads. The current number of worker threads is read-only, and the maximum number of threads is both readable and updatable.

Listing 1 The MXBean interface for WorkerMXBean


@MXBean
public interface WorkerIF {

    public int getWorkersCount();

    public int getMaxWorkers();

    public void setMaxWorkers(int newMaxWorkers);

    public int stopAllWorkers();
}


I did not tell you that we can forget about the naming convention for MXBean interfaces if we intend to use the Java annotation. As you can see, we simply marked the interface as an MXBean interface using the @MXBean annotation and defined some setter and getter methods, along with one operation which will stop some workers and return the number of stopped workers.

The implementation of our MXBean interface will just implement the getter and setters, along with a dummy operation that just prints a message to standard output.

Listing 2 The Worker MXBean implementation

public class Worker implements WorkerIF {

    private int maxWorkers;

    private int workersCount;

    public Worker() {
    }

    public int getWorkersCount() {
        return workersCount;
    }

    public int getMaxWorkers() {
        return maxWorkers;
    }

    public void setMaxWorkers(int newMaxWorkers) {
        this.maxWorkers = newMaxWorkers;
    }

    public int stopAllWorkers() {
        System.out.println("Stopping all workers");
        return 5;
    }
}

We did not follow any naming convention because we are using an MXBean along with the annotation. If it were a Standard MBean, we should have named the interface WorkerMBean and the implementation class Worker.

Now we should register the MBean with an MBean server to make it available to management software. Listing 3 shows how we can develop a simple agent that hosts the MBean server along with the registered MBeans.


Listing 3 How the MBean server works in a simple agent named WorkerAgent

public class WorkerAgent {

    public WorkerAgent() {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();    #1
        Worker workerBean = new Worker();                                #2
        ObjectName workerName = null;
        try {
            workerName = new ObjectName("article:name=firstWorkerBean"); #3
            mbs.registerMBean(workerBean, workerName);                   #4
            System.out.println("Enter to exit...");
            System.in.read();                                            #5
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] argv) {
        WorkerAgent agent = new WorkerAgent();
        System.out.println("Worker Agent is running...");
    }
}

At #1 we get the platform MBean server to register our MBean with; the platform MBean server is the JVM's default MBean server. At #2 we initialize an instance of our MBean.

At #3 we create a new ObjectName for our MBean. Each JVM may use many libraries, each of which can register tens of MBeans, so MBeans must be uniquely identified in the MBean server to prevent naming collisions. The ObjectName follows a format that represents the MBean name in a way that ensures it is shown in the correct place in the management tree and removes any possibility of naming conflicts. An ObjectName is made up of two parts, a domain and a set of name-value pairs, separated by a colon. In our case the domain portion is article and the name-value pair is name=firstWorkerBean.

At #4 we register the MBean with the MBean server. At #5 we just make sure that our application will not close automatically, to let us examine the MBean.
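As an aside, the domain and key properties of an ObjectName like the one created at #3 can be read back programmatically; a minimal sketch:

```java
import javax.management.ObjectName;

public class ObjectNameDemo {
    public static void main(String[] args) throws Exception {
        // Domain and name-value pairs are separated by a colon.
        ObjectName workerName = new ObjectName("article:name=firstWorkerBean");
        System.out.println(workerName.getDomain());             // article
        System.out.println(workerName.getKeyProperty("name"));  // firstWorkerBean
    }
}
```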

Sample code for this chapter is provided along with the book; you can run the samples by following the readme.txt file included in the chapter06 directory of the source code bundle. Using the sample code, you will just use Maven to build and run the application and JConsole to monitor it. The behind-the-scenes procedure is described in the following paragraphs.

Let's run the application and see how our MBean appears in a management console, in this case the standard JConsole bundled with the JDK. To enable the JMX management agent for local access we need to pass the -Dcom.sun.management.jmxremote parameter to the JVM. This parameter lets the management console use inter-process communication to communicate with the management agent. Now that you have the application running, you can run JConsole: open a terminal window and run jconsole. When JConsole opens, it shows a window that lets us select either a remote or a local JVM to connect to. Just scan the list of local JVMs to find WorkerAgent under the Name column; select it and press Connect to connect to the JVM. Figure 2 shows the New Connection window of JConsole which we talked about.

Figure 2 The New Connection window of JConsole

Now you can see how JConsole shows different aspects of the selected JVM, including memory meters, thread status, loaded classes, a JVM overview which includes the OS overview, and finally the MBeans. Select the MBeans tab and you can see a tree of all MBeans registered with the platform MBean server, along with your MBean. Remember what I said about the ObjectName class: the tree clearly shows how a domain includes its child MBeans. When you expand the article node you will see something similar to figure 3.

Figure 3 JConsole navigation tree and effect of ObjectName format on the MBean placing in the tree

And if you click on the stopAllWorkers node, the content panel of JConsole will load a page similar to figure 4, which also shows the result of executing the method.

Figure 4 The content panel of JConsole after selecting stopAllWorkers method of the WorkerMBean

That was how we can connect to a local JVM; in section 1.6 we will discuss connecting to a JVM process from a remote location to manage the system using JMX.

1.5 JMX Notification

The JMX API defines a notification and notification subscription model that enables MBeans to generate notifications to signal a state change, a detected event, or a problem.

To generate notifications, an MBean must implement the NotificationEmitter interface or extend NotificationBroadcasterSupport. To send a notification, we need to construct an instance of the Notification class or one of its subclasses, like AttributeChangeNotification, and pass the instance to NotificationBroadcasterSupport.sendNotification.

Every notification has a source. The source is the object name of the MBean that generated the notification.

Each notification has a sequence number. This number can be used to order notifications coming from the same source when order matters and there is a risk of the notifications being handled in the wrong order. The sequence number can be zero, but preferably the number increments for each notification from a given MBean.
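The emitting side described above can be sketched as follows; the NotifyingWorker name and its MaxWorkers attribute are illustrative, not part of the book's samples:

```java
import javax.management.AttributeChangeNotification;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

// Hypothetical emitter; a real MBean would also implement its management
// interface (for example WorkerMBean) so that it can be registered.
public class NotifyingWorker extends NotificationBroadcasterSupport {

    private int maxWorkers;
    private long sequenceNumber = 1; // preferably increments per notification

    public int getMaxWorkers() {
        return maxWorkers;
    }

    public void setMaxWorkers(int newMaxWorkers) {
        int oldValue = maxWorkers;
        maxWorkers = newMaxWorkers;
        // AttributeChangeNotification is one of the predefined Notification
        // subclasses; sendNotification delivers it to registered listeners.
        Notification n = new AttributeChangeNotification(
                this, sequenceNumber++, System.currentTimeMillis(),
                "MaxWorkers changed", "MaxWorkers", "int",
                oldValue, newMaxWorkers);
        sendNotification(n);
    }
}
```

Any listener added through addNotificationListener then receives the notification each time the attribute changes.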


1.6 Remote management

To manage our worker application from a remote location using a JMX console like JConsole, we just need to ensure that an RMI connector is open in our JVM and that all appropriate settings, like port number, authentication mechanism, transport security and so on, are provided. So, to run our sample application with remote management enabled, we can pass the following parameters to the java command (the port number is an example):

-Dcom.sun.management.jmxremote.port=8686                        #1
-Dcom.sun.management.jmxremote.authenticate=true                #2
-Dcom.sun.management.jmxremote.password.file=passwordFile.txt   #3
-Dcom.sun.management.jmxremote.ssl=false                        #4

At #1 we determine a port for the RMI connector to listen on for incoming connections. At #2 we enable authentication in order to protect our management portal. At #3 we provide the path to a password file which contains the list of usernames and passwords in plain text; in the password file, each username and password pair is placed on one line with a space between them. At #4 we disable SSL for the transport layer.
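For example, a passwordFile.txt with two hypothetical accounts would look like this (the usernames and passwords here are made up; also note that on Unix systems the JVM refuses to use a password file whose read access is not restricted to its owner):

```text
admin adminadmin
monitor monitorpass
```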

To connect to a JVM started with these parameters, we should choose Remote in the New Connection window of JConsole and provide service:jmx:rmi:///jndi/rmi:// (completed with the target host and port) as the Remote Process URL, along with one of the credentials we defined in passwordFile.txt.

2 Application Server Management eXtension (AMX)

GlassFish fully adheres to J2EE Management (JSR 77) in terms of exposing the application server configuration as JMX MBeans, but dealing with JMX directly is neither easy nor pleasant for all developers, so Sun has included a set of client-side proxies over the JSR 77 MBeans, plus additional MBeans of their own, to make the presence of JMX completely hidden from developers who want to develop management extensions for GlassFish. This API set is named AMX, and usually we use it to develop management rules.

2.1 J2EE management (JSR 77)

Before we dig into AMX we need to know what JSR 77 is and how it helps us use JMX and AMX for managing the application server. The JSR 77 specification introduces a set of MBeans and services that let any JMX-compatible client manage a Java EE container's deployed objects and Java EE services.

The specification defines a set of MBeans that model all Java EE concepts in a hierarchy of MBeans. It determines which attributes and operations each MBean must have and what the effect of calling a method on the managed object should be. The specification defines a set of events which should be exposed to JMX clients by the MBeans, and also a set of attribute statistics which should be exposed by the JSR 77 MBeans to JMX clients for performance monitoring.

The services and MBeans provided by JSR 77 cover:

Monitoring performance statistics of managed artifacts in the Java EE container, such as EJBs, Servlets, and so on.

Event subscription for important events of the managed objects like stopping or starting an application.

Managing the state of different standard managed objects of the Java EE container, like changing an attribute of a JDBC connection pool or undeploying an application.

Navigation between Managed objects.

Managed objects

The artifacts whose management and monitoring JSR 77 exposes to JMX clients include a broad range of Java EE components and services. Figure 5 shows the first level of the managed objects hierarchy. As you can see in the figure, all objects inherit four attributes which are later used to determine whether an object's state is manageable, whether the object provides statistics, and whether the object provides events for the important actions it performs.

The objectName attribute, which we discussed before, has the same use and format; for example, amx:j2eeType=X-JDBCConnectionPoolConfig,name=DerbyPool represents a connection pool named DerbyPool in the amx domain, and the j2eeType attribute shows the MBean's type.

Figure 5 First-level hierarchy of JSR 77 managed objects which, based on the specification, must be exposed for JMX management.

Now that you have seen how broad the scope of JSR 77 is, you may ask what the use of these MBeans is, how they work, and how they can be used.

The simple answer is that as soon as a managed object becomes live in the application server, a corresponding JSR 77 MBean is initialized for it by the application server's management layer. For example, as soon as you create a JDBC connection pool, a new MBean will appear under the JDBCConnectionPoolConfig node of a connected JConsole, representing the newly created JDBC connection pool. Figure 6 shows the DerbyPool under the JDBCConnectionPoolConfig node in JConsole.

Figure 6 The DerbyPool place under JDBCConnectionPoolConfig node in JConsole

In figure 5 you can see that we can manage all deployed objects using the JSR 77 exposed MBeans; these objects are J2EE applications and modules, which are shown in figure 7. A deployed application may have EJB, Web, and other modules; the GlassFish management service will initialize a new instance of the appropriate MBean for each module in the deployed application. The management service uses the ObjectName, which we discussed before, to uniquely identify each MBean instance, and later on we will use this unique name to access each deployed artifact independently.

Figure 7 The specification urges implementation of MBeans to manage all shown deployable objects

Getting lower into the deployed modules, we have Servlets, EJBs, and so on. The JSR 77 specification provides MBeans for managing EJBs and Servlets. The EJB MBeans inherit from the EJB MBean and include all necessary attributes and methods to manage the different types of EJBs. Figure 8 represents the EJB subclasses.

Figure 8 All MBeans related to EJB management in GlassFish application server, based on JSR 77

Java EE resources are one of the most colorful areas of the Java EE specification, and JSR 77 provides MBeans for managing all standard resources managed by Java EE containers. Figure 9 shows which types of resources are exposed for management by JSR 77.

Figure 9 Java EE resources manageable by the JSR 77 MBeans

All of the Java EE base concepts are covered by JSR 77 in order to make it possible for third-party management solution developers to integrate Java EE application server management into their solutions.

The events propagated by JSR 77

At the beginning of this section we talked about events that interested parties can receive from JSR 77 MBeans. The events for the different types of JSR 77 MBeans are as follows:

J2EEServer: An event when the corresponding server enters the RUNNING, STOPPED, or FAILED state

EntityBean: An event when the corresponding Entity Bean enters RUNNING, STOPPED, or FAILED state

MessageDrivenBean: An event when the corresponding Message Driven Bean enters the RUNNING, STOPPED, or FAILED state

J2EEResource: An event when the corresponding J2EE Resource enters RUNNING, STOPPED, or FAILED state

JDBCResource: An event when the corresponding JDBC data source enters RUNNING, STOPPED, or FAILED state

JCAResource: An event when a JCA connection factory or managed connection factory enters the RUNNING, STOPPED, or FAILED state

The monitoring statistics exposed by JSR 77

The specification urges JSR 77 MBeans to expose statistics related to the different Java EE components. The statistics required by the specification include:

Servlet statistics: Servlet-related statistics, which include the number of currently loaded Servlets, the maximum number of loaded Servlets that were active, and so on.

EJB statistics: EJB-related statistics, which include the number of currently loaded EJBs, the maximum number of live EJBs, and so on.

JavaMail statistics: JavaMail-related statistics, which include the maximum number of sessions, the total count of connections, and so on.

JTA statistics: Statistics for JTA resources, which include successful transactions, failed transactions, and so on.

JCA statistics: JCA-related statistics, which include both the non-pooled connections and the connection pools associated with the referencing JCA resource.

JDBC resource statistics: JDBC-related statistics for both non-pooled connections and the connection pools associated with the referencing JDBC resource or connection factory. The statistics include the total number of opened and closed connections, the maximum number of connections in the pool, and so on. This is really helpful for finding connection leaks in a specific connection pool.

JMS statistics: JMS-related statistics, including statistics for connections, sessions, JMS producers, and JMS consumers.

Application server JVM statistics: JVM-related statistics, with information like the sizes of the different memory sectors, threading information, class loaders and loaded classes, and so on.
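These JVM statistics map onto the standard platform MXBeans defined by JSR 174, which are available in any modern JVM; a small sketch using only JDK classes:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmStats {
    public static void main(String[] args) {
        // Heap usage, live threads, and loaded classes: the same numbers a
        // management console reads through the JVM's platform MBean server.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();

        System.out.println("Heap used: " + memory.getHeapMemoryUsage().getUsed());
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Loaded classes: " + classes.getLoadedClassCount());
    }
}
```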

2.2 Remotely accessing JSR 77 MBeans by Java code


Now that we have discussed the details of the JSR 77 MBeans, let's see how we can access the DerbyPool MBean from Java code and then change the attribute which represents the maximum number of connections in the connection pool. Listing 4 shows the sample code, which accesses a GlassFish instance with the default port for the JMX listener (the default port is 8686).

Listing 4 Accessing a JSR 77 MBean for changing DerbyPool’s MaxPoolSize attribute.

public class RemoteClient {

    private MBeanServerConnection mbsc = null;

    private ObjectName derbyPool;

    public static void main(String[] args) {
        try {
            RemoteClient client = new RemoteClient();      #A
            client.connect();                              #B
            client.changeMaxPoolSize();                    #C
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void connect() throws Exception {
        JMXServiceURL jmxUrl =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://");    #1
        Map env = new HashMap();
        String[] credentials = new String[]{"admin", "adminadmin"};
        env.put(JMXConnector.CREDENTIALS, credentials);             #2
        JMXConnector jmxc =
            JMXConnectorFactory.connect(jmxUrl, env);               #3
        mbsc = jmxc.getMBeanServerConnection();                     #4
    }

    private void changeMaxPoolSize() throws Exception {
        String query = "amx:j2eeType=X-JDBCConnectionPoolConfig,name=DerbyPool";
        ObjectName queryName = new ObjectName(query);               #5
        Set s = mbsc.queryNames(queryName, null);                   #6
        derbyPool = (ObjectName) s.iterator().next();
        mbsc.setAttribute(derbyPool,
            new Attribute("MaxPoolSize", new Integer(64)));         #7
    }
}

#A Initiate an instance of the class

#B Get the JMX connection

#C Change the attribute's value

At #1 we create a URL to the GlassFish JMX service. At #2 we prepare the credentials which we should provide for connecting to the JMX service. At #3 we initialize the connector. At #4 we create a connection to GlassFish's MBean server. At #5 we query the registered MBeans for an MBean matching our DerbyPool MBean. At #6 we get the result of the query as a set; we are sure that we have an MBean with the given name, otherwise we should have checked whether the set is empty or not. At #7 we just change the attribute. You can check the attribute in JConsole and you will see that it changes there as well.

In the sample code we just updated DerbyPool's maximum number of connections to 64. It can be counted as one of the simplest tasks related to JSR 77, management, and using JMX. Using plain JMX for a complex task would burden us with many lines of complex reflection-based code which is hard to maintain and debug.

2.3 Application Server Management eXtension (AMX)

Now that you have seen how hard it is to work with JSR 77 MBeans, I can tell you that you are not going to use JSR 77 MBeans directly in your management applications, although you can.

What is AMX

In the AMX APIs, java.lang.reflect.Proxy is used to generate Java objects which implement the various AMX interfaces. Each proxy internally stores the JMX ObjectName of a server-side JMX MBean whose MBeanInfo corresponds to the AMX interface implemented by the proxy.

So, at the same time, we have the JMX MBeans for use through any JMX-compliant management software, and we have the AMX dynamic proxies to use as easy-to-use local objects for managing the application server.
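The proxy technique itself can be illustrated with plain JDK dynamic proxies; this is a simplified sketch of the mechanism, not the actual AMX implementation, and the PoolConfigProxy interface and attribute map are hypothetical:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class ProxyDemo {

    // A hypothetical, strongly typed management interface.
    public interface PoolConfigProxy {
        int getMaxPoolSize();
        String getName();
    }

    public static PoolConfigProxy newProxy(Map<String, Object> attributes) {
        // Resolve a typed getXyz() call to the attribute named "Xyz", the
        // way an AMX proxy forwards typed calls to a server-side MBean.
        InvocationHandler handler = (proxy, method, args) -> {
            String attribute = method.getName().substring(3);
            return attributes.get(attribute);
        };
        return (PoolConfigProxy) Proxy.newProxyInstance(
                PoolConfigProxy.class.getClassLoader(),
                new Class<?>[]{PoolConfigProxy.class}, handler);
    }

    public static void main(String[] args) {
        Map<String, Object> attrs = new HashMap<>();
        attrs.put("MaxPoolSize", 32);
        attrs.put("Name", "DerbyPool");
        PoolConfigProxy pool = newProxy(attrs);
        System.out.println(pool.getName() + ": " + pool.getMaxPoolSize());
    }
}
```

The caller gets compile-time type checking on getMaxPoolSize() even though the lookup behind the proxy is completely generic.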

The GlassFish administration architecture is based on the concept of an administration domain. An administration domain is responsible for managing multiple resources; a resource can be a cluster of multiple GlassFish instances, a single GlassFish instance, a JDBC connection pool inside an instance, and so on. Hundreds of AMX interfaces are defined to proxy all of the GlassFish managed resources, which themselves are defined as JSR 77 MBeans, for client-side access. All of these interfaces are placed under the com.sun.appserv.management package.

There are several benefits to AMX dynamic proxies over plain JMX MBeans, as follows:

Strongly typed methods and attributes for compile time type checking

Structural consistency with the domain.xml configuration file.

Consistent and structured naming for methods, attributes and interfaces.

Possibility to navigate from a leaf AMX bean up to the DAS.

AMX MBeans

AMX defines different types of MBeans for different purposes, namely configuration MBeans, monitoring MBeans, utility MBeans, and JSR 77 MBeans. All AMX MBeans share some common characteristics:

They all implement the AMX interface, which contains methods and fields for checking the interface type and group, and for reaching the MBean's container and its root domain.

They all have a j2eeType and a name property within their ObjectName. The j2eeType property specifies the interface we are dealing with.

All MBeans that logically contain other MBeans implement the Container interface. Using the Container interface we can navigate from a leaf AMX bean to the DAS and vice versa. For example, having the domain AMX bean, we can get a list of all connection pools or EJB modules deployed in the domain.

JSR 77 MBeans that have a corresponding configuration or monitoring peer expose it using getConfigPeer or getMonitoringPeer. However, there are many configuration and monitoring MBeans that do not correspond to JSR 77 MBeans.

Configuration MBeans

We discussed that there are several types of MBeans in the AMX framework; one of them is the configuration MBeans. Basically, these MBeans represent the content and structure of domain.xml and the other configuration files.

In GlassFish, all configuration information is stored in one central repository managed by the DAS (Domain Administration Server). In a single-instance installation, the instance acts as the DAS, while in a clustered installation the DAS's sole responsibility is taking care of the configuration and propagating it to all instances. The information stored in the repository is exposed to any interested party, like an administration console, through the AMX interfaces.

Any developer familiar with the domain.xml structure will feel very comfortable with the configuration interfaces.

Monitoring MBeans

Monitoring MBeans provide transient monitoring information about all the vital components of the application server. A monitoring interface may or may not provide statistics; if it does, it should implement the MonitoringStats interface, which is the JSR 77-compliant interface for providing statistics.

Utility MBeans

Utility MBeans provide commonly used services to the application server. These MBeans all extend either or both of the Utility and Singleton interfaces. All of these MBean interfaces are located in the com.sun.appserv.management.base package. Notable utility MBeans are listed in table 1.

Table 1 AMX Utility MBeans along with descriptions

MBean interface: SystemInfo
Provides information about application server capabilities, like clustering support.

MBean interface: QueryMgr
Provides JMX-like queries which are restricted to AMX MBeans.

MBean interface: BulkAccess
Provides network-efficient "bulk" calls whereby many attributes or operations in many MBeans may be fetched and invoked in one invocation, thus minimizing network overhead.

MBean interface: NotificationService
Provides buffering and selective dynamic listening for Notifications on AMX MBeans.

MBean interface: UploadDownloadMgr
Supports uploading and downloading of files to/from the application server.

Java EE Management MBeans

The Java EE management MBeans implement, and in some cases extend, the management hierarchy as defined by JSR 77, which specifies the management model for the whole Java EE platform. All JSR 77 MBeans in the AMX domain offer access to their configuration and monitoring MBeans using the getMonitoringPeer and getConfigPeer methods.

Dynamic Client Proxies

Dynamic client proxies are an important part of the AMX API and enhance ease of use for the programmer. JMX MBeans can be used directly through an MBeanServerConnection to the server; however, client proxies greatly simplify access to attributes and operations on MBeans, offering get/set methods and type-safe invocation of operations. Compiling against the AMX interfaces means that compile-time checking is performed, as opposed to server-side runtime checking when MBeans are invoked generically through MBeanServerConnection.

See the API documentation for the AMX package and its sub-packages for more information about using proxies. The API documentation explains the use of AMX with proxies. If you are using JMX directly (for example, by using MBeanServerConnection), the return type, argument types, and method names might vary as needed to account for the difference between a strongly typed proxy interface and the generic MBeanServerConnection or ObjectName interface.

Changing the DerbyPool attributes using AMX

In listing 4 you saw how we can use JMX and the pure JSR 77 approach to change the attributes of a JDBC connection pool; in this part we are going to perform the same operation using AMX to see how much easier and more effective AMX is.

Listing 5 Using AMX to change DerbyPool MaxPoolSize attribute

AppserverConnectionSource appserverConnectionSource =
    new AppserverConnectionSource(AppserverConnectionSource.PROTOCOL_RMI,
        "", 8686, "admin", "adminadmin", null, null);                  #1

DomainRoot dRoot = appserverConnectionSource.getDomainRoot();          #2

JDBCConnectionPoolConfig cpConf = dRoot.getContainee(
    XTypes.JDBC_CONNECTION_POOL_CONFIG, "DerbyPool");                  #3

cpConf.setMaxPoolSize("100");                                          #4

You are not mistaken: those four lines of code let us change the maximum pool size of the DerbyPool.

At #1 we create a connection to the server on which we want to perform our management operation. Several protocols can be used to connect to the application server's management layer. As you can see, when we construct the appserverConnectionSource instance we use AppserverConnectionSource.PROTOCOL_RMI as the communication protocol to ensure that we will not need some JAR files from the OpenDMK project. Two other protocols which we can use are AppserverConnectionSource.PROTOCOL_HTTP and AppserverConnectionSource.PROTOCOL_JMXMP. The connection that we made does not use TLS, but we can use TLS to ensure transport security.

At #2 we get the AMX domain root, which later lets us navigate between all AMX leaves, which are servers, clusters, connection pools, listeners, and so on. At #3 we query for an AMX MBean whose name is DerbyPool and whose j2eeType is equal to XTypes.JDBC_CONNECTION_POOL_CONFIG. At #4 we set a new value for the attribute of our choice, the MaxPoolSize attribute.

Monitoring GlassFish using AMX

The monitoring term comes to developers' and administrators' minds whenever they are dealing with performance tuning, but monitoring can also be used for management purposes, like the automation of specific tasks which would otherwise need an administrator to take care of them. An example of this type of monitoring is critical condition notifications, which can be sent via email, SMS, or any other gateway which the administrators and system managers prefer.

Imagine that you have a system running on top of GlassFish and you want to be notified whenever acquiring connections from a connection pool named DerbyPool is taking longer than 35 seconds.

You also want to store all information related to the connection pool when the pool is close to saturation, for example when there are only 5 connections left to give away before the pool becomes saturated.

So we need to write an application which monitors the GlassFish connection pool, checks its statistics regularly and, if the above criteria are met, sends us an email or an SMS along with saving the connection pool information.

AMX provides us with all the means required to monitor the connection pool and to be notified when any connection pool attribute or any of its monitoring attributes changes, so we will just use our current AMX knowledge along with two new AMX concepts.

The first concept we will use is AMX monitoring MBeans, which provide us with all statistics about the Managed Objects they monitor. Using the AMX monitoring MBeans is the same as using other AMX MBeans like the connection pool MBean.

The other concept is the notification mechanism which AMX provides on top of the already established JMX notification mechanism. The notification mechanism is fairly simple: we register our interest in some notifications, and we receive them whenever the MBeans emit a notification.
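Since the JMX notification plumbing that AMX builds on lives entirely in the JDK, the register-and-receive flow can be sketched without a running GlassFish instance. The sketch below uses only javax.management types; the source and attribute names are merely illustrative:

```java
import java.util.ArrayList;
import java.util.List;

import javax.management.AttributeChangeNotification;
import javax.management.AttributeChangeNotificationFilter;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

// Minimal sketch of the JMX notification flow AMX builds on:
// register a listener with a filter, then receive only matching notifications.
public class NotificationFlowDemo {

    // Returns how many notifications make it through the filter.
    public static int deliveredCount() {
        NotificationBroadcasterSupport emitter = new NotificationBroadcasterSupport();
        final List<Notification> received = new ArrayList<Notification>();

        AttributeChangeNotificationFilter filter = new AttributeChangeNotificationFilter();
        filter.enableAttribute("ConnRequestWaitTime_Current");

        // register our interest: listener + filter
        emitter.addNotificationListener(
                (notification, handback) -> received.add(notification), filter, null);

        // this one matches the enabled attribute and is delivered
        emitter.sendNotification(new AttributeChangeNotification(
                "DerbyPool", 1L, System.currentTimeMillis(), "wait time changed",
                "ConnRequestWaitTime_Current", "java.lang.Integer", 100, 4200));

        // this one is filtered out
        emitter.sendNotification(new AttributeChangeNotification(
                "DerbyPool", 2L, System.currentTimeMillis(), "unrelated change",
                "SomeOtherAttribute", "java.lang.Integer", 5, 6));

        return received.size();
    }

    public static void main(String[] args) {
        System.out.println("Delivered notifications: " + deliveredCount());
    }
}
```

The same pattern, with an AMX MBean proxy as the emitter, is what Listing 6 uses against the real connection pool monitor.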

We know that we can configure GlassFish to collect statistics about almost all Managed Objects by changing the monitoring level from OFF to either LOW or HIGH. In our sample code we will change the monitoring level from our own code and then use the same statistics that the administration console shows to check the connection pool's consumed connections.

Listing 6 shows a sample application which monitors DerbyPool and notifies the administrator whenever acquiring a connection takes longer than accepted. The application saves all statistics information when the connection pool gets close to saturation.

Listing 6 Monitoring a connection pool and notifying the administrator when the connection pool is going to reach its maximum size

public class AMXMonitor implements NotificationListener {                     #1

    AttributeChangeNotificationFilter filter;                                 #2
    AppserverConnectionSource appserverConnectionSource;
    private int cPoolSize;
    private DomainRoot dRoot;
    JDBCConnectionPoolMonitor derbyCPMon;                                     #3
    JDBCConnectionPoolConfig cpConf;

    private void initialize() {
        try {
            appserverConnectionSource = new AppserverConnectionSource(
                    AppserverConnectionSource.PROTOCOL_RMI, "localhost", 8686,
                    "admin", "adminadmin", null, null);
            dRoot = appserverConnectionSource.getDomainRoot();
            ConfigConfig conf =
                    dRoot.getDomainConfig().getConfigConfigMap().get("server-config");  #4
            conf.getMonitoringServiceConfig().getModuleMonitoringLevelsConfig()
                    .setJDBCConnectionPool(ModuleMonitoringLevelValues.HIGH);           #4
            cpConf = dRoot.getContainee(XTypes.JDBC_CONNECTION_POOL_CONFIG, "DerbyPool");
            cPoolSize = Integer.parseInt(cpConf.getMaxPoolSize());
            filter = new AttributeChangeNotificationFilter();                           #2
            filter.enableAttribute("ConnRequestWaitTime_Current");                      #2
            filter.enableAttribute("NumConnUsed_Current");                              #2
            derbyCPMon = dRoot.getContainee(
                    XTypes.JDBC_CONNECTION_POOL_MONITOR, "DerbyPool");                  #5
            derbyCPMon.addNotificationListener(this, filter, null);                     #5
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public void handleNotification(Notification notification, Object handback) {
        AttributeChangeNotification notif = (AttributeChangeNotification) notification; #6
        if (notif.getAttributeName().equals("ConnRequestWaitTime_Current")) {
            int curWaitTime = Integer.parseInt(notif.getNewValue().toString());         #7
            if (curWaitTime > 35000) {  // 35 seconds, in milliseconds
                sendNotification("Current wait time is: " + curWaitTime);
            }
        } else {
            int curPoolSize = Integer.parseInt(notif.getNewValue().toString());         #8
            if (curPoolSize > cPoolSize - 5) {
                saveInfoToFile();
                sendNotification("Current pool size is: " + curPoolSize);
            }
        }
    }

    private void saveInfoToFile() {
        try {
            FileWriter fw = new FileWriter(new File("stats_" + new Date().getTime() + ".sts"));
            Statistic[] stats = derbyCPMon.getStatistics(derbyCPMon.getStatisticNames()); #9
            for (int i = 0; i < stats.length; i++) {
                fw.write(stats[i].getName() + " : " + stats[i].getUnit());              #10
            }
            fw.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    private void sendNotification(String message) {
        // send an email or an SMS through the administrator's preferred gateway
    }

    public static void main(String[] args) {
        AMXMonitor mon = new AMXMonitor();
        mon.initialize();
    }
}

At #1 we implement the NotificationListener interface, as we are going to use the JMX notification mechanism. At #2 we define an AttributeChangeNotificationFilter which narrows the notifications down to the subset we are interested in; we add the attributes we care about to the set of non-filtered attribute change notifications. At #3 we define an AMX MBean which represents the DerbyPool monitoring information; we will get an instance-not-found exception if the connection pool has had no activity yet. At #4 we change the monitoring level of JDBC connection pools to ensure that GlassFish gathers the required statistics. At #5 we find our designated connection pool monitoring MBean and add a new filtered listener to it.

The handleNotification method is the only method in the NotificationListener interface which we need to implement. At #6 we cast the received notification to AttributeChangeNotification, as we know the notification is of this type. At #7 we deal with a change in the ConnRequestWaitTime_Current attribute: we get its new value to check our condition; we could also get the old value if we were interested. At #8 we deal with the NumConnUsed_Current attribute and then call the saveInfoToFile and sendNotification methods.

At #9 we get the names of all connection pool monitoring factors, and at #10 we write each monitoring attribute's name along with its unit to a text file.

AMX and Dotted Names

AMX is designed with ease of use and efficiency in mind, so in addition to using the standard JMX programming model (the getters and setters), we can use another, hierarchical model to access all AMX MBean attributes. In the dotted-name model the path to each attribute of an MBean starts from its root, which is either domain for all configuration MBeans or server for all runtime MBeans. For example, domain.resources.jdbc-connection-pool.DerbyPool.max-pool-size represents the maximum pool size for the DerbyPool which we discussed before.

Two interfaces are provided to access the dotted names, either for monitoring or for management and configuration purposes. MonitoringDottedNames is provided to assist with reading attributes; the other interface, ConfigDottedNames, provides write access to attributes using the dotted format. We can get an instance of the monitoring implementation using dRoot.getMonitoringDottedNames().
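As a plain-Java illustration (no AMX types involved), each segment of a dotted name corresponds to one level of this hierarchy, with the root first and the attribute last:

```java
// Illustrative only: a dotted name is simply a period-separated path
// through the MBean hierarchy, root first, attribute last.
public class DottedNameDemo {

    // Splits a dotted name into its hierarchy segments.
    public static String[] segments(String dottedName) {
        return dottedName.split("\\.");
    }

    public static void main(String[] args) {
        String[] parts =
                segments("domain.resources.jdbc-connection-pool.DerbyPool.max-pool-size");
        System.out.println("root = " + parts[0]);                      // domain
        System.out.println("attribute = " + parts[parts.length - 1]);  // max-pool-size
    }
}
```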

3 GlassFish Management Rules

The above sample application is promising, especially when you want to have tens of rules for automatic management, but running a separate process and watching over that process is not to the taste of many administrators. The GlassFish application server provides a very effective way to deploy such management rules into GlassFish for the sake of simplicity and integration. The benefits and use cases of Management Rules can be summarized as follows:

Manage complexity by self-configuring based on the conditions

Keep administrators free for complex tasks by automating mundane management tasks

Improve performance by self-tuning in unpredictable run-time conditions

Adjust the system automatically for availability by preventing problems and recovering from them (self-healing)

Enhance security measures by taking self-protective actions when security threats are detected

A GlassFish Management Rule is a set of:

Event: An event uses the JMX notification mechanism to trigger actions. Events can range from an MBean attribute change to specific log messages.

Action: Actions are associated with events and are triggered when related events happen. Actions can be MBeans that implement the NotificationListener interface.

When we deploy a Management Rule into GlassFish, GlassFish will register our MBean in the MBean server and register its interest in the notification types that we determined. Therefore, upon any event which our MBean is registered for, the handleNotification method of our MBean will execute. GlassFish provides some predefined types of events which we can choose from to register our MBean's interest. These events are as follows:

Monitor events: These types of events trigger an action based on an MBean attribute change.

Notification events: Every MBean which implements the NotificationBroadcaster interface can be a source of this event type.

System events: This is a set of predefined events that come from the internal infrastructure of GlassFish application server. These events include: lifecycle, log, timer, trace, and cluster events.

Now, let's see how we can achieve functionality similar to what our AMXMonitor application provides by using GlassFish Management Rules. First we need to change our application into a mere JMX MBean which implements the NotificationListener interface and performs the required action, for example sending an email or SMS, in the handleNotification method.

Changing the application to an MBean should be very easy; we just need to define an MBean interface and then the MBean implementation, which will implement one single method, the handleNotification method.
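A minimal sketch of that MBean pair might look like the following. The names follow the standard MBean convention (implementation class plus a matching <Class>MBean interface); the getHandledCount attribute is an addition of mine for illustration, and the email/SMS call is left as a placeholder comment:

```java
import javax.management.AttributeChangeNotification;
import javax.management.Notification;
import javax.management.NotificationListener;

// One possible shape for the MBean pair described above.
interface AMXMonitorMBean {
    int getHandledCount();   // a readable attribute, handy for sanity checks
}

public class AMXMonitor implements AMXMonitorMBean, NotificationListener {

    private int handledCount;

    public int getHandledCount() {
        return handledCount;
    }

    // GlassFish calls this method when the management rule's event fires.
    public void handleNotification(Notification notification, Object handback) {
        handledCount++;
        if (notification instanceof AttributeChangeNotification) {
            AttributeChangeNotification acn = (AttributeChangeNotification) notification;
            // place the email/SMS gateway call here
            System.out.println(acn.getAttributeName() + " changed to " + acn.getNewValue());
        }
    }
}
```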

Now that we have our MBean's compiled JAR file, we can deploy it using the administration console. So open the GlassFish administration console, navigate to the Custom MBeans node, and deploy the MBean by providing the path to the JAR file and the name of the MBean implementation class. Now that our MBean is deployed into GlassFish, it is available to the class loader to be used as an action for a Management Rule, so in order to create the Management Rule, use the following procedure.

In the navigation tree select the Configuration node, then select the Management Rules node. In the content panel select New and use dcpRule as the name, make sure that you tick the Enabled checkbox, select monitor in the event type combo box, and let the events be recorded in the log files; press Next to navigate to the second page of the wizard.

In the second page, for the Observed MBean field enter amx:X-ServerRootMonitor=server,j2eeType=X-JDBCConnectionPoolMonitor,name=DerbyPool and for the observed attribute enter ConnRequestWaitTime_Current. For the monitor type select counter. The number type is int, and the initial threshold is 35000, the value which, when exceeded by the monitored attribute, starts our action.

Scroll down, and for the action select AMXMonitor, the name of the MBean which we deployed in the previous step.

As you saw, the overall process is fairly simple, but there are some limitations, such as being able to monitor only a single MBean and attribute at a time. Therefore we need to create another Management Rule for the NumConnUsed_Current attribute.

Now that we have reached the end of this article, you should be fairly familiar with JMX, AMX, and GlassFish Management Rules. In the next articles we will use the knowledge that we gained here to create administration commands and monitoring solutions.

4 Summary

We thoroughly discussed managing GlassFish using Java code by discussing JMX, which is the foundation of all Java based management solutions and frameworks. We discussed the JMX architecture, its event model, and the different types of MBeans included in the JMX programming model. We also covered AMX as the GlassFish way of providing its management functionality to client-side applications which are not interested in using the complicated JMX APIs.

We discussed GlassFish management using the AMX APIs to show how much simpler AMX is compared to a plain JMX implementation, and we covered GlassFish monitoring using the AMX APIs.

You saw how we can use GlassFish's self-management and self-administration functionality to automate some tasks and keep administrators free from dealing with low-level repetitive administration and management tasks.

GlassFish Modularity System, How to Extend GlassFish CLI and Web Administration Console (Part 2: Developing Sample Modules)

Extending GlassFish CLI and Administration Console, Developing the sample Modules

Administrators are always looking for a more effective, easier to use, and less time consuming tool to use as the interface sitting between them and what they are supposed to administer and manage. GlassFish provides effective, easy-to-access, and simple-to-navigate administration channels which cover all the day-to-day tasks that administrators need to perform. But when it comes to administration, administrators usually write their own shell scripts to automate some tasks, use CRON or other schedulers to schedule automatic tasks, and so on, to achieve their own customized administration flow.

By using the GlassFish console, it is quite possible to use shell scripts or CRON to further extend the administration and facilitate daily tasks. But sometimes it is far better to have a more sophisticated and mature way to extend and customize the administration interfaces. GlassFish provides all the necessary interfaces and required services to allow administrators or developers to develop new modules for the GlassFish administration interfaces, either to add a new feature or to enhance already available capabilities.

The extensibility of the GlassFish administration channels is not only for easing administrators' tasks by customizing the administration interfaces; it is also there to let container developers build administration interfaces for their containers that are fully integrated with the already available interfaces and fully compatible with the present administration console look and feel. Using this extensibility, developers can develop new administration console commands very easily by utilizing already available services and dependency injection, which provides them with all the environmental objects they need.

This article covers how we can develop new commands for the CLI console and how we can develop new modules to add new pages and navigation nodes to the web administration console.

1 CLI console extensibility

GlassFish CLI can be considered the easiest-to-access administration console for experienced administrators, as they are better with the keyboard and prefer automated and scripted tasks to clicking the mouse in a web page to see the result. Not only is the GlassFish CLI very command and feature rich, but its design is also based on the same modularity concepts that the whole application server is based on.

CLI modularity is very helpful for those who want to extend GlassFish by providing a new set of commands for their container, deploying new monitoring probes and providing the corresponding CLI commands for accessing the monitoring information, developing new commands which administrators may require and which are not already in place, and so on.

1.1 Prepare your development environment

This is a hands-on article which involves developing some new modules for the GlassFish application server and deploying them to it, so we should be able to compile modules which usually use HK2 services on the inside and are bundled as OSGI bundles for deployment.

Although we could use an IDE like NetBeans to build the sample code, we are going to use a more IDE-agnostic way, Apache Maven, to ensure that we can simply manage the dependency hurdle and make the sample code easily importable into the IDE of your choice. Download Maven and install it according to the installation instructions provided on its website. Then you are ready to dig further into developing modules.

1.2 CLI modularity and extensibility

We discussed how GlassFish utilizes OSGI and HK2 to provide a modular architecture; one place where this modular architecture shows itself is CLI modularity. We know that all extensibility points in GlassFish are based on contracts and contract providers, which, put simply, are interfaces and interface implementations along with some annotations to mark and further configure the contract implementations.

We are talking about adding new commands to the CLI dynamically by placing an OSGI bundle in the modules directory of GlassFish, whether the module contains just CLI commands or CLI commands, web-based administration console pages, and a container implementation together. At first there is no predefined list of known commands; commands load on demand, as soon as you try to use one of them. For example, when you issue ./asadmin list-commands, the CLI framework will check for all providers which implement the admin command contract and show the list by extracting the information from each provider (implementation). CLI commands can use the dependency injection provided by HK2, and each command may also extract some information and provide it to other components which might be interested in using it.

Like all providers in the GlassFish modularity system, we should annotate each command implementation with the @Service annotation to ensure that HK2 will treat the implementation as a service for lifecycle management and service locating. CLI commands, like any other HK2 service, should have a scope which determines how the lifecycle of the command is managed by HK2. Usually we should ensure that each command lives only in the context of its execution and not beyond that context. Therefore we annotate each command with a proper scope like PerLookup.class, which results in instantiating the command on each lookup.

All that we need to know for developing CLI commands is summarized in a few interfaces and helper classes; the other required knowledge is around AMX and HK2.

Each CLI command must implement a contract, the org.glassfish.api.admin.AdminCommand interface, which is marked with the org.jvnet.hk2.annotations.Contract annotation and is shown in listing 1.

Listing 1 The AdminCommand interface which every CLI command has to implement

@Contract                                           #1
public interface AdminCommand {
    public void execute(AdminCommandContext context);            #2
}

The AdminCommand interface has one method, which is called by the asadmin utility when we call the associated command. At #1 we define the interface as a contract which HK2 service providers can implement. At #2 we have the execute method, which accepts an argument of type org.glassfish.api.admin.AdminCommandContext, shown in listing 2. This class is a set of utilities which give the developer access to some contextual variables like reporting, logging, and command parameters (the last of which will be deprecated).

Listing 2 The AdminCommandContext class which is a bridge between the inside of the command and the outside execution context

public class AdminCommandContext implements ExecutionContext {

    public ActionReport report;
    public final Properties params;
    public final Logger logger;
    private List<File> uploadedFiles;

    public AdminCommandContext(Logger logger, ActionReport report, Properties params) {
        this(logger, report, params, null);
    }

    public AdminCommandContext(Logger logger, ActionReport report, Properties params,
                               List<File> uploadedFiles) {
        this.logger = logger;
        this.report = report;
        this.params = params;
        this.uploadedFiles = (uploadedFiles == null) ? emptyFileList() : uploadedFiles;
    }

    private static List<File> emptyFileList() {
        return Collections.emptyList();
    }

    public ActionReport getActionReport() {
        return report;
    }

    public void setActionReport(ActionReport newReport) {
        report = newReport;
    }

    public Properties getCommandParameters() {
        return params;
    }

    public Logger getLogger() {
        return logger;
    }

    public List<File> getUploadedFiles() {
        return uploadedFiles;
    }
}
The only unfamiliar class is the org.glassfish.api.ActionReport abstract class, which has several subclasses that allow us, as CLI command developers, to report the result of our command execution to the original caller. The subclasses make it possible to report the execution result in different forms, such as an HTML, JSON, or plain text report. The report may include the exit code, possible exceptions, failure or success, and so on, based on the reporter type. The important subclasses of ActionReport are listed in table 1.

Table 1 Available reporters which can be used to report the result of a command execution

Reporter class             Description

HTMLActionReporter         Generates an HTML document containing the report information

JsonActionReporter         Generates a JSON document containing the report information

PlainTextActionReporter    Generates a plain text report which provides the necessary report information as plain text

PropsFileActionReporter    Generates a properties file containing all report elements

SilentActionReport         Produces no report at all

XMLActionReporter          Generates a machine-readable XML document containing all report elements

All of these reporters except SilentActionReport, which resides in org.glassfish.embed.impl, are placed in the com.sun.enterprise.v3.common package.

Now we need a way to pass some parameters to our command, determine which parameters are mandatory and which are optional, which ones need an argument, and what the type of the argument should be. There is an annotation which lets us annotate properties of the AdminCommand implementation class to mark them as parameters: the @Param annotation, placed in the org.glassfish.api package.

Listing 3 The Param annotation which we can use to have a parameterized CLI command

public @interface Param {
    public String name() default "";
    public String acceptableValues() default "";
    public boolean optional() default false;
    public String shortName() default "";
    public boolean primary() default false;
}
As you can see, the annotation can be placed either on a setter method or on a field. The annotation has several elements, including the parameter name (which by default is the same as the property name), a list of comma-separated acceptable values, the short name for the parameter, whether the parameter is optional, and finally whether the parameter is primary, meaning its value can be passed without naming the parameter. In each command only one parameter can be primary.

When we mark a property or a setter method with the @Param annotation, the CLI framework will try to initialize the parameter with the given value, and then it will call the execute method. During the initialization, the CLI framework checks the different attributes of a parameter, like its necessity, short name, acceptable values, and so on.
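As a rough mental model (this is not GlassFish's actual implementation), the initialization step behaves like a small reflection-based injector that matches command-line arguments to annotated fields. The @Param annotation and SampleCommand class below are simplified stand-ins for the real ones:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.Map;

// Simplified model of how a CLI framework could inject named parameter
// values into @Param-annotated fields before calling execute().
public class ParamInjector {

    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.FIELD, ElementType.METHOD})
    public @interface Param {
        String name() default "";
        boolean optional() default false;
    }

    public static class SampleCommand {
        @Param(name = "detailsset")
        public String detailsSet;
    }

    public static void inject(Object command, Map<String, String> args) throws Exception {
        for (Field f : command.getClass().getFields()) {
            Param p = f.getAnnotation(Param.class);
            if (p == null) continue;
            // default the parameter name to the field name, as @Param does
            String name = p.name().isEmpty() ? f.getName() : p.name();
            if (args.containsKey(name)) {
                f.set(command, args.get(name));
            } else if (!p.optional()) {
                throw new IllegalArgumentException("Missing required parameter: " + name);
            }
        }
    }
}
```

The real framework additionally validates short names and acceptableValues before execute() runs.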

The last thing we should consider for a CLI command is its internationalization and the possibility of showing localized versions of the text messages about the command and its parameters to the client. This requirement is covered by another annotation named @I18n; the associated interface is placed in org.glassfish.api. Listing 4 shows the I18n interface declaration.

Listing 4 I18n Interface which we should use to provide internationalized messages for our CLI command

public @interface I18n {
    public String value();
}

The annotation can be used for annotating both properties and methods. At runtime the CLI framework will extract and inject the localized value into the variable based on the current locale of the JVM.

The localization file, which is a standard resource bundle, should follow the same key=value format that we use for any properties file. The standard base name for the file is LocalStrings.properties; for example, LocalStrings_en_US.properties serves United States English, while LocalStrings.properties acts as the default bundle for any locale which has no corresponding locale file in the bundle.
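The lookup-plus-formatting the framework performs can be sketched with plain JDK classes. The bundle content and key below are hypothetical, but the key=value format and {0}-style placeholders are exactly what the localized messages use:

```java
import java.io.StringReader;
import java.text.MessageFormat;
import java.util.PropertyResourceBundle;

// Sketch of key=value lookup plus {0}-style placeholder formatting,
// the same mechanics the CLI localization support relies on.
public class LocalizedMessageDemo {

    public static String lookup(String bundleContent, String key, Object... args)
            throws Exception {
        PropertyResourceBundle bundle =
                new PropertyResourceBundle(new StringReader(bundleContent));
        return MessageFormat.format(bundle.getString(key), args);
    }

    public static void main(String[] args) throws Exception {
        // hypothetical key and message, in the standard key=value format
        String props = "command.failed=Command failed to execute with the {0} as given parameter";
        System.out.println(lookup(props, "command.failed", "os"));
    }
}
```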

1.3 A CLI command which returns the operating system details

First we will develop a very simple CLI command in this section to get more familiar with the abstractions that we discussed before, and then we will develop a more sophisticated command which will teach us more details about the CLI framework and its capabilities.

The first command that we are going to develop just shows which operating system our application server is running on, along with some related information like the architecture, JVM version, JVM vendor, and so on. Our command name is get-platform-details.

First let's look at the Maven build file and analyze its content; I am not going to scrutinize the whole build file, but we will look at the elements related to our CLI command development.

Listing 5 Maven build file for creating a get-platform-details CLI command

<project xmlns="" xmlns:xsi="">
    <modelVersion>4.0.0</modelVersion>                       #1
    <artifactId>platformDetailsCommand</artifactId>
    <packaging>hk2-jar</packaging>                             #2
    <description>GlassFish in Action, Chapter 13, Platform Details Command</description>
    ...
    <directory>src/main/resources</directory>         #3
    ...
</project>
You will find more details in the POM file included in the sample source code accompanying the book, but what is shown in the listing is enough to build a CLI command module. At #1 we define the POM model version. At #2 we ensure that the bundle we are creating is built using the HK2 module specification, which means inclusion of all OSGI-related descriptions, like imported and exported packages, in the MANIFEST.MF file which will be generated by Maven and will reside inside the META-INF folder of the final JAR file. At #3 we ensure that Maven will include our help file for the command. The help file name should be the same as the command name, and it can include any textual content; we usually follow a UNIX man-page-like structure to create help files for the commands. The help file will be shown when we call ./asadmin get-platform-details --help. Figure 1 shows the directory layout of our sample command project. As you can see, we have two similar directory layouts for the manual file and the Java source code, and the POM.XML file is in the root of the cli-platform-details-command directory.

Figure 1 Directory layout of the get-platform-details command project.

Now let's see what the source code for the command is and what the outcome of the build process would be. Listing 6 shows the source code for the command itself; when we execute ./asadmin get-platform-details, the GlassFish CLI will call its execute method. The result of executing the command is similar to figure 2; as you can see, our command executed successfully.

Figure 2 Sample output for calling get-platform-details with runtime as the parameter

Now you can develop your own command and test it to see a similar result in the terminal window.

Listing 6 Source code for, a CLI command

@Service(name = "get-platform-details")               #1
@Scoped(PerLookup.class)                                  #2
public class PlatformDetails implements AdminCommand {           #3

    ActionReport report;
    OperatingSystemMXBean osmb = ManagementFactory.getOperatingSystemMXBean();   #4
    RuntimeMXBean rtmb = ManagementFactory.getRuntimeMXBean();     #5

    // this value can be either runtime or os for our demo
    @Param(name = "detailsset", shortName = "DS", primary = true, optional = false)  #6
    String detailsSet;

    public void execute(AdminCommandContext context) {
        try {
            report = context.getActionReport();
            StringBuffer reportBuf;
            if (detailsSet.equalsIgnoreCase("os")) {
                reportBuf = new StringBuffer("OS details: \n");
                reportBuf.append("OS Name: " + osmb.getName() + "\n");
                reportBuf.append("OS Version: " + osmb.getVersion() + "\n");
                reportBuf.append("OS Architecture: " + osmb.getArch() + "\n");
                reportBuf.append("Available Processors: " + osmb.getAvailableProcessors() + "\n");
                reportBuf.append("Average Load: " + osmb.getSystemLoadAverage() + "\n");
                report.setMessage(reportBuf.toString());            #7
                report.setActionExitCode(ExitCode.SUCCESS);         #8
            } else if (detailsSet.equalsIgnoreCase("runtime")) {
                reportBuf = new StringBuffer("Runtime details: \n");
                reportBuf.append("Virtual Machine Name: " + rtmb.getVmName() + "\n");
                reportBuf.append("VM Vendor: " + rtmb.getVmVendor() + "\n");
                reportBuf.append("VM Version: " + rtmb.getVmVersion() + "\n");
                reportBuf.append("VM Start Time: " + rtmb.getStartTime() + "\n");
                reportBuf.append("UpTime: " + rtmb.getUptime() + "\n");
                report.setMessage(reportBuf.toString());            #7
                report.setActionExitCode(ExitCode.SUCCESS);         #8
            } else {
                report.setActionExitCode(ExitCode.FAILURE);         #9
                report.setMessage("The given value for " +
                        "the detailsset parameter is not acceptable; the value " +
                        "can be either 'runtime' or 'os'");
            }
        } catch (Exception ex) {                                    #10
            report.setActionExitCode(ExitCode.FAILURE);
            report.setMessage("Command failed with the following error:\n" + ex.getMessage());
            context.getLogger().log(Level.SEVERE,
                    "get-platform-details " + detailsSet + " failed", ex);
        }
    }
}
At #1 we mark the implementation as a provider which implements a contract; the provided service name is get-platform-details, which is the same as our CLI command name. At #2 we set a proper scope for the command, as we want a fresh instance, and thus fresh details, on every execution. At #3 we implement the contract interface. At #4 and #5 we get the MXBeans which provide the information we need; these can belong either to our local server or to a remote running instance if we use the --host and --port parameters to execute the command against a remote instance. At #6 we define a parameter which is not optional; the parameter name is detailsset, which we would pass as --detailsset when executing the command, and its short name is DS, which we would pass as --DS. The parameter is primary, so we do not even need to name it; we can just pass the value and the CLI framework will assign it to this parameter. Pay attention to the parameter name and the containing variable: if we do not use the name element of the @Param annotation, the parameter name will be the same as the variable name. At #7 we set the output which we want to show to the user as the report object's message. At #8 we set the exit code to success, as we managed to execute the command successfully. At #9 we set the exit code to failure, as the value passed for the parameter is not recognizable, and we show a suitable message to help the user diagnose the problem. At #10 we face an exception, so we log the exception and set the exit code to failure.

GlassFish CLI commands fall into two broad categories. The first is the local commands, like the create-domain and start-domain commands, which execute locally; you cannot pass the remote connection parameters, including --host and --port, and expect the command to execute on a remote GlassFish installation. These commands extend the com.sun.enterprise.cli.framework.Command abstract class or one of its subclasses, like com.sun.enterprise.admin.cli.BaseLifeCycleCommand, and do not need anything like an already running GlassFish instance.

The next command we will study lists some details about the JMS service of the target domain; this time we will discuss resource injection and some details of the localization of command messages.

Listing 7 shows the source code for a command which shows some information about the domain's JMS service, including the starting arguments, JMS service type, address list behavior, and so on. Although the source code only shows how we can get some details, setting the values of the attributes works the same way as getting them.

Figure 3 shows the result of executing this command on a domain created with default parameters.

Listing 7 Source code for the remote command named get-jms-details

@Service(name = "get-jms-details")
@I18n("get-jms-details")                       #1
public class JMSDetails implements AdminCommand {

    @Inject                                      #2
    JmsService jmsService;
    @Inject
    JmsHost jmsHost;
    @Inject
    JmsAvailability jmsAvailability;
    ActionReport report;

    @Param(name = "detailsset", acceptableValues = "all,service,host,availability",
            shortName = "DS", primary = true, optional = false) #3
    @I18n("get-jms-details.details_set")                #4
    String detailsSet;

    final private static LocalStringManagerImpl localStrings =
            new LocalStringManagerImpl(JMSDetails.class);             #5

    public void execute(AdminCommandContext context) {
        try {
            report = context.getActionReport();
            StringBuffer reportBuf = new StringBuffer("");
            if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("service")) {
                reportBuf.append("Default Host: " + jmsService.getDefaultJmsHost() + "\n");
                reportBuf.append("MQ Service: " + jmsService.getMqService() + "\n");
                reportBuf.append("Reconnection Attempts: " + jmsService.getReconnectAttempts() + "\n");
                reportBuf.append("Startup Arguments: " + jmsService.getStartArgs() + "\n");
            }
            if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("host")) {
                reportBuf.append("Host Address: " + jmsHost.getHost() + "\n");
                reportBuf.append("Port Number: " + jmsHost.getPort() + "\n");
            }
            if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("availability")) {
                reportBuf.append("Is Availability Enabled: " + jmsAvailability.getAvailabilityEnabled() + "\n");
                reportBuf.append("Availability Storage Pool Name: " + jmsAvailability.getMqStorePoolName() + "\n");
            }
            report.setMessage(reportBuf.toString());
            report.setActionExitCode(ExitCode.SUCCESS);
        } catch (Exception ex) {
            report.setActionExitCode(ExitCode.FAILURE);
            report.setMessage(localStrings.getLocalString("get-jms-details.failed",
                    "Command failed to execute with the {0} as given parameter", detailsSet)
                    + ". The exception message is: " + ex.getMessage());  #6
            context.getLogger().log(Level.SEVERE,
                    "get-jms-details " + detailsSet + " failed", ex);
        }
    }
}

You are right, the code looks similar to the first sample, but it is more complex: it uses some unfamiliar classes, resource injection, and localization. We used resource injection in order to access GlassFish configuration. All configuration interfaces which we can obtain using injection are inside com.sun.enterprise.config.serverbeans. Now let's analyze the code where it is important or unfamiliar. At #1 we tell the framework that this command's short help message key in the localization file is get-jms-details. At #2 we inject some configuration beans into our variables. At #3 we ensure that the CLI framework will check the given value for the parameter to see whether it is one of the acceptable values. At #4 we determine the localization key for the parameter's help message. At #5 we initialize the string localization manager to get the appropriate string values from the bundle files. At #6 we show a localized error message using the string localization manager.
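The acceptableValues check at #3 is performed by the CLI framework before execute() ever runs. The following is a minimal JDK-only sketch of such a membership test, not the framework's actual code; the class and the isAcceptable helper are hypothetical names used for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class ParamValidationDemo {
    // Hypothetical re-creation of the check implied by
    // acceptableValues = "all,service,host,availability"
    static boolean isAcceptable(String value, String acceptableValues) {
        List<String> allowed = Arrays.asList(acceptableValues.split(","));
        return allowed.contains(value.toLowerCase());
    }

    public static void main(String[] args) {
        System.out.println(isAcceptable("host", "all,service,host,availability"));  // true
        System.out.println(isAcceptable("queue", "all,service,host,availability")); // false
    }
}
```

Because the framework rejects unacceptable values up front, the command body only needs to distinguish between the accepted ones.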

Listing 8 shows the localization file content; we will shortly discuss where the file should be placed to allow the CLI framework to find it and load the necessary messages from it.

Listing 8 A snippet of the localization file content for the get-jms-details command

get-jms-details=Getting the Detailed information about JMS service and host in the target domain.                #1

get-jms-details.details_set=The set of details which is required, the value can be all, service, host, or availability.           #2

get-jms-details.failed=Command failed to execute with the {0} as given parameter.          #3

At #1 we include a description of the command; at #2 we include a description of the detailsset parameter using the assigned localization key. At #3 we include the localized version of the error message which we want to show when the command fails. As you can see, we used placeholders to include the parameter value. Figure 4 shows the directory layout of the get-jms-details project. As you can see, we placed the localization file next to the command source code.
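The {0} placeholder in listing 8 follows the java.text.MessageFormat convention, which is what the string localization manager relies on when substituting the parameter value. A quick JDK-only illustration (the class name is made up for the demo):

```java
import java.text.MessageFormat;

public class PlaceholderDemo {
    // Substitutes {0}, {1}, ... with the given arguments, the same
    // mechanism used for keys like get-jms-details.failed.
    static String localize(String pattern, Object... args) {
        return MessageFormat.format(pattern, args);
    }

    public static void main(String[] args) {
        System.out.println(localize(
                "Command failed to execute with the {0} as given parameter.", "ports"));
        // prints: Command failed to execute with the ports as given parameter.
    }
}
```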

Figure 4 Directory layout of the get-jms-details command project.

Developing more complex CLI commands follows the same rules as developing these simple ones. All you need to develop a new command is to start a new project using the given Maven POM files and discover the new things that you can do.

2 Administration console pluggability

You know that the administration console is a very easy to use channel for administrating a single instance or multiple clusters with tens of instances and a variety of managed resources. All of these functionalities are based on the navigation tree, tabs, and content pages which we use to perform our required tasks. You may wonder how we can extend these functionalities without changing the main web application, known as admingui, so that new functionalities appear with the same look and feel as the ones already present. The answer lies again in the application server's overall design, HK2 and OSGI, along with some JSF Templating and use of the Woodstock JSF component suite. We are going to write an administration console plug-in which lets us upload and install new OSGI bundles from remote locations using a browser.

2.1 Administration console architecture

Before we start discussing how we can extend the administration console in a non-intrusive way we should learn more about the console architecture and how it really works. Administration console is a set of OSGI bundles and each bundle includes some administration console specific artifacts in addition to the default OSGI manifest file and Maven related artifacts.

One of the most basic artifacts required by an administration console plugin is a descriptor which defines where our administration console navigation items should appear. For example, are we going to add a new node to the navigation tree, or do we want to add a new tab to a currently available tab set, like the Application Server page tab set? The file which describes these navigation items and their places is named console-config.xml and should be placed inside the META-INF/admingui folder of the final OSGI bundle.

The second thing which is required to make it possible for our module to be recognized by the HK2 kernel is the HK2 administration console plugin’s contract implementation. So, the second basic item in the administration console plug-in is an HK2 service provider which is nothing more than a typical Java class, annotated with @Service annotation that implements the org.glassfish.api.admingui.ConsoleProvider interface. The console provider interface has only one method named getConfiguration. We need to implement this method if we want to use a non-standard place for the console-config.xml file.

In addition to the basic requirements, some other artifacts should be present in order to see some effect in the administration console from our plug-in's side. These requirements include JSF fragment files which add the node, tab, content, and common task button to the administration console in the places described in the console-config.xml file. These JSF fragments use JSFTemplating project tags to define the nodes, tabs, and buttons which should appear at the integration points.

So far we have only included a node in the navigation tree, a tab in one of the tab sets, or a common task button or group in the common tasks page. What else do we need? We need some pages to show when administrators click on a tree node or on a common task button. So we should add JSF fragments, implemented using JSFTemplating tags, to show the real content to the administrators. For example, imagine that we want to write a module to deploy OSGI bundles; we will need to show a page which lets users select a file, along with a button which they can press to upload and install the bundle.

Now that we have shown the page which lets users select a file, we need some business logic, or handlers, to receive the uploaded file and write it to the correct place. As we use JSFTemplating for developing pages, we will use JSFTemplating handlers for the business logic.

We have now mentioned all functionality-related artifacts, but not everything is about functionality: we should provide localized administration pages for administrators who like to use their local languages when dealing with administration tasks. We also need to provide some help files which will guide administrators who need to read a manual before digging into the action.

After we have all of this content and configuration in place, the administration console's add-on service queries for all providers which implement the ConsoleProvider interface; if the service cannot find the configuration file in its default place, it tries to get it by calling the getConfiguration method of the contract implementation. After that, the administration console framework uses the configuration file and the provided integration point template files, and later on the content template files and handlers.

2.2 JSFTemplating

JSFTemplating is a Sun-sponsored project which provides templating for JSF. By utilizing JSFTemplating we can create web pages or components using template files. Inside template files we can use the JSFTemplating and Facelets syntaxes; other syntax support may be provided in the future. All syntaxes support all of JSFTemplating's features, such as accessing the page session, event triggering, event handler support, and dynamic reloading of page content. Let's analyze the template file shown in listing 9 to see what we are going to use when we develop administration console plug-ins.

Listing 9 A simple JSFTemplating template file

<!initPage
    setResourceBundle(key="i18n" bundle="mypackage.resource");
/>                                 #1
<sun:head id="head" />
#include /
<sun:form id="form">
    "<p>#{i18n["welcome.msg"]}</p>        #2
    <sun:label value="#{anOut}">        #3
        <!beforeEncode
            GiA.getResponse(userInput="#{in}" response=>$pageSession{anOut});
        />
    </sun:label>
    "<br /><br />
    <sun:textField id="in" value="#{in}" /> #4
    "<br /><br />
    <sun:button text="$resource{i18n.button.text}" />
    <sun:hyperlink text="cheat">
        <!command
            setPageSessionAttribute(key="in" value="sds");  #5
        />
    </sun:hyperlink>
</sun:form>
#include /          #6

At #1 we determine the resource file which we want to use to get localized text content, and we define a key for accessing it. At #2 we simply use our localization resource file to show a welcome message; as you have probably noticed, it is Facelets syntax. At #3 we use a handler for one of JSFTemplating's pre-defined events. In the event we send the value of the in variable to our method, and after getting the method's execution result back we set it into a variable named anOut in the page scope; we use the same variable to initialize the Label component. You can see what a handler looks like in listing 10. The event that we used makes it possible to change the text before it gets displayed. There are several pre-defined events, some of which are listed in table 2. At #4 we use the in variable's value to initialize the text field content. At #5 we use a pre-defined command as the handler for the hyperlink click. At #6 we include another template file into this template file. All of the components used in this template file are equivalents of the Woodstock project's components.

Table 2 JSFTemplating pre-defined events which can call a handler



afterCreate, beforeCreate

Event handling commences before or after the component is created

afterEncode, beforeEncode

Event handling commences before or after the content of a component gets displayed

command

A command has been invoked, for example a link or a button is clicked

initPage

Page initialization phase, when component values are getting set

Listing 10 The GiA.getResponse handler which has been used in listing 9

@Handler(id = "GiA.getResponse",
    input = {
        @HandlerInput(name = "in", type = String.class)},
    output = {
        @HandlerOutput(name = "response", type = String.class)})
public static void getResponse(HandlerContext handlerCtx) {
    // an illustrative body (elided in the original): echo the input back
    String in = (String) handlerCtx.getInputValue("in");
    handlerCtx.setOutputValue("response", "You said: " + in);
}

In listing 10 you can see the handler id which we were using in the JSF page instead of the method's fully qualified name. During compilation, all handler ids, their fully qualified names, and their input and output parameters are extracted by apt into a file which resides inside a folder named jsftemplating, placed inside the META-INF folder. The file's structure is similar to a standard properties file containing variable=value pairs.
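Since the generated mapping uses the variable=value format, its content can be read back with java.util.Properties. The mapping content and the fully qualified class name below are hypothetical, used only to show the shape of such a file:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class HandlerMapDemo {
    // Looks a handler id up in variable=value mapping content,
    // like the file generated at compile time by apt.
    static String lookup(String mappingContent, String handlerId) throws IOException {
        Properties map = new Properties();
        map.load(new StringReader(mappingContent));
        return map.getProperty(handlerId);
    }

    public static void main(String[] args) throws IOException {
        String mapping = "GiA.getResponse=com.example.gia.Handlers.getResponse\n";
        System.out.println(lookup(mapping, "GiA.getResponse"));
    }
}
```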

Now that we have some knowledge about the JSFTemplating project and the administration console architecture, we can create our plug-in which lets us install any GlassFish module, in the form of an OSGI bundle, using the administration console. After we have seen how a real plug-in is developed, we can proceed to learn more details about integration points and changing the administration console theme and brand.

2.3 OSGI bundle installer plug-in

Now we want to create a plug-in which will let us upload an OSGI module and install it into the application server by copying the file into the modules directory. First let's see figure 5, which shows the file and directory layout of the plug-in source code; then we can discuss each artifact in detail.

In figure 5, at the top level we have our Maven build file, with one source directory containing the standard Maven directory structure for Jar files: a main directory with java and resources directories inside it. The java directory is self-describing, as it contains the handler and service provider implementation. The resources directory is again standard for the Maven Jar packager; Maven will copy the content of the resources folder directly into the root of the Jar file. The content of the resources folder is as follows:

  • glassfish: this folder contains a directory layout similar to the java folder; the only file stored in it is our localization resource file
  • images: graphic files which we want to use in our plug-in, for example in the navigation tree
  • js: a JavaScript file which contains some functions which we will use in the plug-in
  • META-INF: during development we will have only one folder, named admingui, inside the META-INF folder; this folder holds the console-config.xml. After building the project, some other folders, including maven, jsftemplating and inhabitants, will be created by Maven.
  • pages: this folder contains all of our JSF template files.

Figure 5 directory layout of the administration console plug-in to install OSGI bundles

Some folders, like META-INF, are mandatory, but we could put the other folders' content directly inside the resources folder; we would just end up with an unorganized structure. Now we can discuss each artifact in detail. You can see the content of console-config.xml in listing 11.

Listing 11 Content of the console-config.xml file

<?xml version="1.0" encoding="UTF-8"?>
<console-config id="GiA-OSGI-Module-Install">                  #1
    <integration-point  id="CommonTask-Install-OSGI"          #2
        type="org.glassfish.admingui:commonTask"              #3
        parentId="deployment"                                 #4
        priority="300"                                        #5
        content="pages/CommonTask.jsf" />                     #6
    <integration-point  id="Applications-OSGI"
        type="org.glassfish.admingui:treeNode"                #7
        parentId="applications"                               #8
        content="pages/TreeNode.jsf" />                       #9
</console-config>

This XML file describes which integration points we want to use, and the priority of our navigation node relative to the already existing navigation nodes. At #1 we give our plugin a unique ID, because later on we will access our web pages and resources using this ID. At #2 we define an integration point, again with a unique ID. At #3 we determine that this integration point is a common task which we want to add under the deployment tasks group (#4). At #5 the priority number says that all integration points with a smaller priority number will be processed before this integration point, and therefore our common task button will be placed after them. At #6 we determine which template file should be processed to fill the integration point's placeholder; in this integration point the template file contains a button which will fill the placeholder.

At #7 we determine that we want to add a tree node to the navigation tree by using org.glassfish.admingui:treeNode as the integration point type. At #8 we determine that our tree node should be placed under the applications node. At #9 we say that the template page which will fill the placeholder is pages/TreeNode.jsf. To summarize, each console-config.xml file consists of several integration point description elements, and each integration point element has four or five attributes. These attributes are as follows:

id: An identifier for the integration point.

parentId: The ID of the integration point's parent. You can see a list of all parentId values in table 3.

type: The type of the integration point. You can see a list of the types in table 3.

priority: A numeric value that specifies the relative ordering of integration points for add-on components that specify the same parentId. This attribute is optional.

content: The content for the integration point, typically a JavaServer Faces page.
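Because console-config.xml is plain XML, its integration points can be inspected with the JDK's built-in DOM parser. A small sketch that pulls out the id and type attributes; the inline XML mirrors listing 11 and the class name is made up for the demo:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class ConsoleConfigDemo {
    // Returns "id -> type" for the first integration-point element.
    static String firstPoint(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList points = doc.getElementsByTagName("integration-point");
        Element p = (Element) points.item(0);
        return p.getAttribute("id") + " -> " + p.getAttribute("type");
    }

    public static void main(String[] args) throws Exception {
        String xml = "<console-config id='GiA-OSGI-Module-Install'>"
                + "<integration-point id='CommonTask-Install-OSGI' "
                + "type='org.glassfish.admingui:commonTask' parentId='deployment' "
                + "priority='300' content='pages/CommonTask.jsf'/>"
                + "</console-config>";
        System.out.println(firstPoint(xml));
    }
}
```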

The second basic artifact which we discussed is the service provider, a simple Java class implementing the org.glassfish.api.admingui.ConsoleProvider interface. Listing 12 shows the InstallOSGIBundle class, which is the service provider for our plug-in.

Listing 12 Content of the InstallOSGIBundle class, the plug-in service provider

@Service(name = "install-OSGI-bundle")  #A
@Scoped(PerLookup.class)                #B
public class InstallOSGIBundle
        implements ConsoleProvider {    #C

    public URL getConfiguration() {
        return null; // null means: use the default console-config.xml location
    }
}

#A: Service has name

#B: Service is scoped

#C: Service implements the interface

Now that we have seen the basic artifacts, including the service provider implementation and how we define where in the administration console navigational system our navigational items should go, we can look at the content of the template files we referenced in the console-config.xml file. Listing 13 shows the TreeNode.jsf file content.

Listing 13 Content of the TreeNode.jsf template file which creates a node in the navigation tree


<!initPage
    setResourceBundle(key="GiA" bundle="")          #1
/>
<sun:treeNode id="GFiA_TreeNode"                    #2
    imageURL="resource/images/icon.png"             #3
    text="$resource{GiA.OSGI.install.tree.title}"   #4
    url="GiA-OSGI-Module-Install/pages/installOSGI.jsf" #5
    target="main"                                   #6
    expanded="$boolean{false}" />                   #7

You can see the changes this template causes in figure 6, but first the description of the code. At #1 we use a JSFTemplating event to initialize the resource bundle. At #2 we declare a tree node with GFiA_TreeNode as its ID, which lets us access the node using JavaScript. At #3 we determine the icon which should appear next to the tree node; you can see that we use resource/images/icon.png as the path to the icon, where the resource prefix leads us to the root of the resources folder. At #4 we say that the title of the tree node should be fetched from the resource bundle. At #5 we determine which page should be loaded when the administrator clicks the node: using facesContext.externalContext.requestContextPath we get the web application path, GiA-OSGI-Module-Install is our plug-in id, and we can access whatever we have inside the resources folder by prefixing its path with GiA-OSGI-Module-Install (you can see this ID in listing 11). At #6 we say that the page should open in the main frame (the content frame), and at #7 that the node should not be expanded by default; the effect of #7 is visible when we have some sub-nodes.

Figure 6 the effects of TreeNode.jsf template file on the administration console navigation tree node

You can see that we have our own icon, which is loaded directly from the images folder residing inside the resources directory. The tree node text is fetched from the resource bundle file.

Listing 14 shows CommonTask.jsf, which causes the administration console service to place a button in the common task section under the deployment group; the result of this fragment on the common tasks page is shown in figure 7.

Listing 14 Content of the CommonTask.jsf file which is a template to fill the navigation item place

<!initPage
    setResourceBundle(key="GiA" bundle="")
/>
<sun:commonTask                                             #1
    text="$resource{GiA.OSGI.install.task.title}"
    toolTip="$resource{GiA.OSGI.install.tree.Tooltip}"
    onClick="admingui.nav.selectTreeNodeById('form:tree:application:GFiA_TreeNode');                                                       #2
        parent.location='#{facesContext.externalContext.requestContextPath}/GiA-OSGI-Module-Install/pages/installOSGI.jsf'; return false;"     #3
/>


At #1 we use a commonTask component of the JSFTemplating framework; we use localized strings for its text and tool tip. At #2 we change the state of the tree node defined in console-config.xml when the component receives a click; this ensures that the navigation tree shows where the user is. And at #3 we load pages/installOSGI.jsf when this button is clicked.

Figure 7 Effects of listing 14 on the common task page of the administration console

We fetched the button title and its tool tip from the resource file which we included in the resource folder as described in the sample project folder layout in figure 5.

The next file we will discuss is the actual operation file, which provides administrators with an interface to select, upload, and install an OSGI bundle. The file, as already discussed, is named installOSGI.jsf, and listing 15 shows its content.

Listing 15 The installOSGI.jsf file content; this file provides the interface for uploading the OSGI file

<sun:page id="install-osgi-bundle-page" >  #1
    <!beforeCreate
        setResourceBundle(key="GiA" bundle="");
    />
    <sun:head id="propertyhead" title="$resource{GiA.OSGI.install.header.title}">      #2
        <sun:script url="/resource/js/glassfishUtils.js" /> #3
    </sun:head>
    <sun:form id="OSGI-Install-form" enctype="multipart/form-data">
        <sun:title id="title" title="$resource{GiA.OSGI.install.form.title}"
            helpText="$resource{}">
            <sun:upload id="fileupload" style="margin-left: 17pt" columns="$int{50}"
                uploadedFile="#{requestScope.uploadedFile}">    #4
            </sun:upload>
            <sun:button id="uploadButton" text="$resource{GiA.OSGI.install.form.Install_button}"
                onClick="javascript:
                    return submitAndDisable(this, '$resource{GiA.OSGI.install.form.cancel_processing}');
                ">    #5
                <!command
                    GiA.uploadFile(file="#{uploadedFile}");
                />  #6
            </sun:button>
        </sun:title>
    </sun:form>
</sun:page>

I agree that the file content may look scary, but if you look more carefully you will see many familiar elements and patterns. Figure 8 shows this page in the administration console. In listing 15, at #1 we start a page component and use its beforeCreate event to load the localized resource bundle. At #2 we give the page a title which is loaded from the resource bundle. At #3 we load a JavaScript file which contains one helper method; as you can see, we prefixed the file path with resource. At #4 we use a fileUpload component, and the binary content of the uploaded file goes to the uploadedFile variable in the request scope. At #5 we use the helper JavaScript method to submit the form and disable the Install button. At #6 we call a custom handler with GiA.uploadFile as its ID and pass the request scoped uploadedFile variable to it. After the handler executes, we redirect to a simple page which shows a message indicating that the module is installed.

Figure 8 The installOSGI.jsf page in the administration console

Now, let's see what is behind this page: how the handler works, how it finds the modules directory, and how the installation process commences. Listing 16 shows the InstallOSGIBundleHandlers class, which contains only one handler; it writes the uploaded file into the modules directory of the GlassFish installation which owns the running domain.

Listing 16 The content of InstallOSGIBundleHandlers, the file-uploading handler

public class InstallOSGIBundleHandlers {  #A

    @Handler(id = "GiA.uploadFile",  #1
        input = {
            @HandlerInput(name = "file", type = UploadedFile.class)}) #2
    public static void uploadFileToTempDir(HandlerContext handlerCtx) { #3
        try {
            UploadedFile uploadedFile = (UploadedFile) handlerCtx.getInputValue("file"); #4
            String name = uploadedFile.getOriginalName();
            String asInstallPath = AMXRoot.getInstance().getDomainRoot().getInstallDir(); #5
            String modulesDir = asInstallPath + "/modules";
            String serverSidePath = modulesDir + "/" + name;
            uploadedFile.write(new File(serverSidePath));
        } catch (Exception ex) {
            GuiUtil.handleException(handlerCtx, ex); #6
        }
    }
}

#A Class Declaration

As you can see in the listing, it is really simple to write handlers and work with GlassFish APIs. At #1 we define a handler; the handler's id element allows us to use it in the template file. At #2 we define the handler's input parameter, named file, whose type is com.sun.webui.jsf.model.UploadedFile. At #3 we implement the handler method. At #4 we extract the file content from the context. At #5 we use an AMX utility class to find the GlassFish installation path which owns the currently running domain; there are many utility classes in the org.glassfish.admingui.common.util package. At #6 we handle any possible exception the GlassFish way. We could also define output parameters for our handler and, after the command executes, decide where to redirect the user based on the value of an output parameter.
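Stripped of the GlassFish-specific classes, the write step of the handler boils down to copying the uploaded bytes into the modules directory. Here is a JDK-only sketch of that step; BundleInstaller and its install method are illustrative names, not part of the console API:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class BundleInstaller {
    // Copies an uploaded bundle into the modules directory, the same effect
    // as uploadedFile.write(new File(serverSidePath)) in listing 16.
    static Path install(InputStream uploadedBytes, Path modulesDir, String originalName)
            throws IOException {
        Files.createDirectories(modulesDir);
        Path target = modulesDir.resolve(originalName);
        Files.copy(uploadedBytes, target, StandardCopyOption.REPLACE_EXISTING);
        return target;
    }

    public static void main(String[] args) throws IOException {
        // a temporary directory stands in for <install-dir>/modules
        Path modulesDir = Files.createTempDirectory("modules");
        Path installed = install(new ByteArrayInputStream("fake-bundle".getBytes()),
                modulesDir, "my-bundle.jar");
        System.out.println(Files.exists(installed)); // true
    }
}
```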

There are three other files which we will briefly discuss. The first, shown in listing 17, is the JavaScript file used in installOSGI.jsf and placed inside the js directory; we already discussed its use in listing 15.

Listing 17 Content of the glassfishUtils.js file

function submitAndDisable(button, msg, target) {
    button.disabled = true;
    button.value = msg;
    button.form.action += "?" + button.name + "=" + encodeURI(button.value);
    if (target) {
        button.form.target = target;
    }
    button.form.submit();
    return true;
}
The function receives three parameters: the button which it will disable, the message which it will set as the button's title while it is disabled, and finally the target that the button's owner form will submit to.

Listing 18 shows the next file we should look at: our localization file, located deep in the resources directory. Although it is a standard localization file, it is not bad to take a look and remember the first time we faced resource bundle files.

Listing 18 Content of the localization resource file

OSGI.install.header.title=Install OSGI bundle by uploading the file into the server.
OSGI.install.form.title=Select your OSGI bundle to install it in the server's modules directory
OSGI.install.tree.title=Install OSGI Bundle
OSGI.install.task.title=Install OSGI Bundle
OSGI.install.tree.Tooltip=Simply install OSGI bundle by uploading the bundle from the administration console.
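At runtime, keys such as $resource{GiA.OSGI.install.tree.title} resolve against this file through the standard Java resource bundle mechanism. A JDK-only illustration using PropertyResourceBundle (the class name is made up for the demo):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class BundleLookupDemo {
    // Loads properties-style content and resolves a key, as the console
    // does for $resource{...} expressions.
    static String resolve(String propsContent, String key) throws IOException {
        ResourceBundle bundle = new PropertyResourceBundle(new StringReader(propsContent));
        return bundle.getString(key);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(resolve("OSGI.install.tree.title=Install OSGI Bundle\n",
                "OSGI.install.tree.title"));
        // prints: Install OSGI Bundle
    }
}
```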

And finally, our Maven build file; as we discussed at the beginning of the section, it is placed in the top-level directory of our project. If you want to review the directory layout, take a look at figure 5.

Listing 19 The plug-in project Maven build file, pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="" xmlns:xsi="" xsi:schemaLocation="">
<groupId>org.glassfish.admingui</groupId> #A
<packaging>hk2-jar</packaging>           #B
<description>GlassFish in Action, Chapter 13, Admin console plugin to install OSGI bundle</description>

#A Project Group ID

#B HK2 special packaging

The pom.xml file available for download is more complete, with comments which explain the other elements, but listing 19's content is sufficient for building the plug-in.

Now that you have a solid understanding of how an administration console plug-in works, we can extend our discussion to other integration points and their corresponding JSFTemplating components.

Table 3 shows the integration points and the related JSFTemplating components, along with some explanation of each item. As you can see, we have five types of integration points with different attributes and navigational places.

Table 3 List of all integration points along with the related JSFTemplating components

  • treeNode. parentId values: tree, applicationServer, applications, webApplications, resources, configuration, node, webContainer. Component: sun:treeNode, the equivalent of the Project Woodstock tag webuijsf:treeNode.

  • Tab integration point. parentId value: serverInstTab; it adds tabs and sub-tabs to the Application Server page. Component: sun:tab, the equivalent of the Project Woodstock tag webuijsf:tab.

  • commonTask. parentId values: deployment, monitoring, updatecenter, documentation. Component: sun:commonTask, the equivalent of the Project Woodstock tag webuijsf:commonTask.

  • Common tasks group integration point. parentId value: commonTasksSection; it allows us to create our own common tasks group. Component: sun:commonTasksGroup, the equivalent of the Project Woodstock tag webuijsf:commonTasksGroup.

  • Branding integration points. parentId values: masthead, loginimage, loginform, versioninfo. These integration points let us change the login form, header, and version information of the application server to change its brand.

* All of the integration point types have org.glassfish.admingui: as prefix

Now you are equipped with enough knowledge to develop extensions for the GlassFish administration console, whether to facilitate your own daily tasks or to provide your plug-ins to other administrators interested in the same administration tasks you perform with them.

3 Summary

Understanding an administration console from the end user's point of view is one thing; understanding it from a developer's perspective is another. With the first you can administer the application server, and with the second you can manage, enhance, and extend the administration console itself. In this article we learned how to develop new asadmin (CLI) commands to ease our daily tasks or extend the GlassFish application server as a third-party partner. We also learned how to develop and extend the web-based administration console, which led us to review the JSFTemplating project and take a quick look at the AMX helper classes.