JavaZone 2010 sessions I am going to attend

I will be attending JavaZone 2010 both as a speaker presenting NIO.2 and as an attendee, sitting and learning the new technologies other speakers kindly share. After looking at the list of sessions, here are the sessions I have decided to attend. It is really hard to decide which speaker and subject to choose: all the speakers are well-known engineers and developers, and the selected subjects are exceptionally good.

I may change a few of these items as the JZ2010 staff update the agenda, but the majority of the sessions will be the ones I have already selected. See you at JZ2010 and at the JourneyZone after the conference itself.

8th of September:

  1. Emergent Design — Neal Ford
  2. JRuby: Now With More J! — Nick Sieger
  3. Howto: Implement Collaborative Filtering with Map/Reduce — Ole-Martin Mørk
  4. Building a scalable search engine with Apache Solr, Hibernate Shards and MySQL — Aleksander Stensby, Jaran Nilsen
  5. The not so dark art of Performance Tuning — Dan Hardiker, Kirk Pepperdine
  6. Creating modular applications with Apache Aries and OSGi — Alasdair Nottingham
  7. Java 7 NIO.2: The I/O API for the Future — Masoud Kalali (I am speaking here)
  8. The evolution of data grids, from local caching to distributed computing — Bjørn Vidar Bøe

9th of September:

  1. NOSQL with Java — Aslak Hellesøy
  2. CouchDB and the web — Knut O. Hellan
  3. Decision Making in Software Teams — Tim Berglund
  4. A Practical Introduction to Apache Buildr — Alex Boisvert
  5. Architecture Determines Performance — Randy Stafford
  6. Surviving Java Maintenance – A mini field manual — Mårten Haglind
  7. Using the latest Java Persistence API 2.0 features — Arun Gupta


Manage, Administrate and Monitor GlassFish v3 from Java code using AMX & JMX

Management is one of the most crucial parts of an application server's set of functionalities. Development of the application which we deploy into the server happens once, with minor development iterations during the software lifecycle, but management is a lifetime task. One of the most powerful features of the GlassFish application server is the set of administration and management channels it provides for different levels of administrators and for developers who want to extend the application server's administration and management interfaces.

GlassFish, as an application server capable of serving mission-critical and large-scale applications, benefits from several administration channels, including the CLI, the web-based administration console, and finally the possibility of managing the application server using the standard Java Management Extensions (JMX).

Not only does GlassFish fully expose its management functionalities as JMX MBeans, but it also provides a much easier way to manage the application server using local objects which proxy the JMX MBeans. These local objects are provided as the AMX APIs, which remove the need to learn JMX for administrators and developers who want to interact with the application server through code.

GlassFish provides very powerful monitoring APIs in the form of AMX MBeans, which let developers and administrators monitor any aspect of anything inside the application server using Java code, without needing to understand the JMX APIs or the complexity of monitoring factors and statistics gathering. These monitoring APIs allow developers to monitor a bulk of Java EE functionalities together, or just a single attribute of a single configuration piece.

GlassFish's self-management capability is another powerful feature, based on the AMX and JMX APIs, which lets administrators easily automate daily tasks that would otherwise consume a considerable amount of time. Self-management can manage the application server dynamically by monitoring it at runtime and changing its configuration on the fly based on predefined rules.

1 Java Management eXtension (JMX)

JMX, native to the Java platform, was introduced to give Java developers a standard, easy-to-learn, and easy-to-use way of managing and monitoring their Java applications and Java-enabled devices. As architects, designers, and developers of Java applications, which can be as small as an in-house invoice management system or as big as a running stock exchange system, we need a way to expose the management of our software to industry-accepted management software, and JMX is the answer to this need.

1.1 What is JMX?

JMX is a part of the Java Standard Edition; it has been around since the early days of the Java platform and has seen many enhancements during the platform's evolution. The JMX-related specifications define the architecture, design patterns, APIs, and services in the Java programming language for managing and monitoring applications and Java-enabled devices.

Using the JMX technology, we can develop Java classes which perform management and monitoring tasks and expose a set of their functionalities or attributes by means of an interface, which is later exposed to JMX clients through specific JMX services. The objects which we use to perform and expose management functionalities are called Managed Beans, or MBeans for short.

In order for MBeans to be accessible to JMX clients, which will use them to perform management tasks or gather monitoring data, they need to be registered in a registry which later lets our JMX client application find and initialize them. This registry is one of the fundamental JMX services and is called the MBean Server.

Now that we have our MBeans registered with a registry, we need a way to let clients communicate with the running application that registered the MBeans, so that they can execute the MBeans' operations. This part of the system is provided by JMX connectors, which let us communicate with the agent from a remote or local management station. The JMX connector and adapter API provides a two-way converter which can transparently connect to a JMX agent over different protocols, giving management software a standard way to communicate with JMX agents regardless of the communication protocol.
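For example, here is a minimal sketch of opening the most common connector, the RMI connector, programmatically so that remote JMX clients can reach the platform MBean server. The class name, port, and URL are illustrative.

import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class ConnectorDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        LocateRegistry.createRegistry(10006);      // RMI registry holding the connector stub
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:10006/jmxrmi");
        JMXConnectorServer server =
                JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        server.start();                            // remote clients can now connect to the URL
        System.out.println("Connector listening on " + url);
    }
}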

1.2 JMX architecture

JMX benefits from a layered architecture, heavily based on interfaces, which provides independence between the different layers in terms of how each layer works and how data and services are provided to each layer by the one beneath it.

We can divide the JMX architecture into three layers. Each layer relies only on the layer directly beneath it and is not aware of the functionalities of the layers above it. These layers are the instrumentation, agent, and management layers. Each layer provides services either to other layers, to in-JVM clients, or to remote clients running in other JVMs. Figure 1 shows the different layers of the JMX architecture.

Figure 1 The layered JMX architecture and each layer's components

Instrumentation layer

This layer contains MBeans and the resources that MBeans are intended to manage. Any resource that has a Java object representative can be instrumented by MBeans. MBeans can change the values of an object's attributes or call its operations, which can affect the resource that this particular Java object represents. In addition to MBeans, the notification model and the MBean metadata objects are categorized in this layer. There are two different types of MBeans for different use cases:

Standard MBeans: A Standard MBean consists of an MBean interface, which defines the exposed operations and properties (using getters and setters), and the MBean implementation class. The implementation class and the interface must follow a standard naming pattern in Standard MBeans: the interface is named ClassNameMBean and the implementation class is named ClassName. There is another kind of Standard MBean, called an MXBean, which removes the need to follow the naming pattern; for MXBeans, the interface is named AnythingMXBean and the implementation class can have any name. We will discuss this naming matter in more detail later on; a short hypothetical example of the Standard MBean pattern follows.
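As a quick, made-up illustration of the Standard MBean naming pattern, an implementation class named Cache must be paired with an interface named CacheMBean (the names are invented for the example, and the two types would live in separate source files):

// The interface name must be the implementation class name plus "MBean".
public interface CacheMBean {
    public int getSize();   // exposed as the read-only attribute "Size"
    public void clear();    // exposed as the operation "clear"
}

public class Cache implements CacheMBean {
    private int size;
    public int getSize() { return size; }
    public void clear() { size = 0; }
}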

Dynamic MBeans: A Dynamic MBean implements javax.management.DynamicMBean instead of implementing a static interface with a set of predefined methods. Dynamic MBeans rely on javax.management.MBeanInfo, which represents the attributes and operations they expose. MBean client applications call generic getters and setters whose implementation must resolve the attribute or operation name to its intended behavior. Faster implementation of JMX management MBeans for an already completed application, and the amount of information provided by the MBean metadata classes, are two benefits of Dynamic MBeans. A minimal sketch follows.
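Here is a minimal sketch of a dynamic MBean exposing a single read-only attribute; the class name DynamicWorker and the attribute name are illustrative.

import javax.management.*;

public class DynamicWorker implements DynamicMBean {

    private int workersCount = 0;

    public Object getAttribute(String name) throws AttributeNotFoundException {
        if ("WorkersCount".equals(name)) {
            return workersCount;
        }
        throw new AttributeNotFoundException(name);
    }

    public void setAttribute(Attribute attr) throws AttributeNotFoundException {
        throw new AttributeNotFoundException(attr.getName()); // read-only MBean
    }

    public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String name : names) {
            try {
                list.add(new Attribute(name, getAttribute(name)));
            } catch (AttributeNotFoundException ignored) {
            }
        }
        return list;
    }

    public AttributeList setAttributes(AttributeList attributes) {
        return new AttributeList(); // nothing is writable
    }

    public Object invoke(String action, Object[] params, String[] signature)
            throws MBeanException {
        throw new MBeanException(new UnsupportedOperationException(action));
    }

    // The metadata a management client uses to discover what this MBean exposes.
    public MBeanInfo getMBeanInfo() {
        MBeanAttributeInfo count = new MBeanAttributeInfo(
                "WorkersCount", "int", "Current number of workers",
                true, false, false);
        return new MBeanInfo(getClass().getName(), "Dynamic worker MBean",
                new MBeanAttributeInfo[]{count}, null, null, null);
    }
}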

Notification Model: JMX introduces a notification model based on the Java event model. Using this event model, MBeans can emit notifications and any interested party can receive and process them; interested parties can be management applications or other MBeans.

MBean Metadata Classes: These classes contain the structures to describe all components of an MBean's management interface, including its attributes, operations, notifications, and constructors. For each of these, the MBeanInfo class includes a name, a description, and its particular characteristics (for example, whether an attribute is readable, writable, or both; for an operation, the signature of its parameters and its return type).

Agent layer

This layer contains the JMX agents, which are intended to expose the MBeans to management applications. The JMX agent implementation specifications fall under this layer. Agents are usually located in the same JVM as the MBeans, but that is not an obligation. A JMX agent consists of an MBean server and some helper services which facilitate MBean operations. Management software accesses the agent through an adapter or connector, depending on the management application's communication protocol.

MBean Server: This is the MBean registry, where management applications look to find which MBeans are available for them to use. The registry exposes the MBeans' management interfaces and not their implementation classes. The registry provides two interfaces for accessing the MBeans, one for remote clients and one for clients in the same JVM. MBeans can be registered by other MBeans, by the management application, or by the agent itself. MBeans are distinguished by a unique name, which we will discuss further in the AMX section.

Agent Services: There are some helper services for MBeans and the agent which facilitate certain functionalities. These services include a timer, a dynamic class loader, observers to observe numeric or string-based properties of MBeans, and finally a relation service which defines associations between MBeans and enforces the cardinality of the relation based on predefined relation types.

Management layer

The management tier contains the components required for developing management applications capable of communicating with JMX agents. Such components provide an interface for a management application to interact with JMX agents through a connector. This layer may contain multiple adapters and connectors to expose the JMX agent and its attached MBeans to different management platforms like SNMP, or to expose them in a semantically rich format like HTML.

JMX related JSRs

Six different JSRs have been defined for the JMX-related specifications during the past 10 years. These JSRs are:

JMX 1.2 (JSR 3): The specification behind the first versions of JMX; JMX was included in the Java platform starting with J2SE 5.0.

J2EE Management (JSR 77): A set of standard MBeans to expose application servers’ resources like applications, domains, and so on for management purposes.

JMX Remote API 1.0 (JSR 160): Interaction with JMX agents using RMI from a remote location.

Monitoring and Management Specification for the JVM (JSR 174): A set of APIs and standard MBeans for exposing the JVM's management to any interested management software.

JMX 2.0 (JSR 255): The new version of JMX, targeted at Java 7, which introduces the use of generics, annotations, extended monitors, and so on.

Web Services Connector for JMX Agents (JSR 262): Defines a specification for using Web Services to access JMX instrumentation remotely.

1.3 JMX benefits

What are the benefits of JMX that led the JCP to define so many JSRs for it, and why did we not follow another management standard such as IEEE Std 828-1990? The answer lies in the following JMX benefits:

Java needs an API that is open for extension and closed to change, so that it can integrate with emerging requirements and technologies; JMX achieves this through its layered architecture.

JMX is based on already well-defined and proven Java technologies, such as the Java event model, for providing some of the required functionalities.

The JMX specification and implementation let us use it in any Java-enabled software, at any scale.

Almost no change is required for an application to become manageable through JMX.

Many vendors use Java to enable their devices; JMX provides one standard to manage both software and hardware.

You can imagine many other benefits for JMX which are not listed above.

1.4 Managed Beans (MBeans)

We discussed that there are generally two types of MBeans which we can choose from to implement our instrumentation layer. Dynamic MBeans are a bit more complex and we would rather skip them in this crash course, so in this section we will discuss how MXBeans can be developed and used, both locally and remotely, to prepare ourselves for understanding and using AMX to manage GlassFish.

We said that we should write an interface which defines all the exposed operations of the MBean, both for MXBeans and for Standard MBeans, so first we will write the interface. Listing 1 shows the WorkerIF interface. The interface has one operation, which is supposed to stop the worker threads, and two properties: the current number of worker threads, which is read-only, and the maximum number of worker threads, which is both readable and updatable.

Listing 1 The WorkerIF MXBean interface

@MXBean
public interface WorkerIF {

    public int getWorkersCount();

    public int getMaxWorkers();

    public void setMaxWorkers(int newMaxWorkers);

    public int stopAllWorkers();
}

I did not tell you yet: we can forget about the naming convention for MXBean interfaces if we intend to use the Java annotation. As you can see, we simply marked the interface as an MXBean interface and defined some getter and setter methods along with one operation, which will stop some workers and return the number of stopped workers.

The implementation of our MXBean interface will just implement the getters and setters along with a dummy operation which prints a message to standard output.

Listing 2 The Worker MXBean implementation

public class Worker implements WorkerIF {

    private int maxWorkers;
    private int workersCount;

    public Worker() {
    }

    public int getWorkersCount() {
        return workersCount;
    }

    public int getMaxWorkers() {
        return maxWorkers;
    }

    public void setMaxWorkers(int newMaxWorkers) {
        this.maxWorkers = newMaxWorkers;
    }

    public int stopAllWorkers() {
        System.out.println("Stopping all workers");
        return 5;
    }
}

We did not follow any naming convention because we are using an MXBean together with the annotation. If it were a Standard MBean, we would have had to name the interface WorkerMBean and the implementation class Worker.

Now we should register the MBean with an MBean server to make it available to any management software. Listing 3 shows how we can develop a simple agent which hosts the MBean server along with the registered MBean.


Listing 3 How MBeans server works in a simple agent named WorkerAgent

public class WorkerAgent {

    public WorkerAgent() {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();     #1
        Worker workerBean = new Worker();                                 #2
        ObjectName workerName = null;
        try {
            workerName = new ObjectName("article:name=firstWorkerBean"); #3
            mbs.registerMBean(workerBean, workerName);                    #4
            System.out.println("Enter to exit...");                      #5
            System.in.read();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String argv[]) {
        WorkerAgent agent = new WorkerAgent();
        System.out.println("Worker Agent is running...");
    }
}

At #1 we get the platform MBean server, with which we will register our MBean; the platform MBean server is the JVM's default MBean server. At #2 we initialize an instance of our MBean.

At #3 we create a new ObjectName for our MBean. Each JVM may use many libraries, each of which can register tens of MBeans, so MBeans must be uniquely identified in the MBean server to prevent naming collisions. The ObjectName follows a format that represents an MBean's name in a way that ensures it is shown in the correct place in the management tree and removes any possibility of a naming conflict. An ObjectName is made up of two parts, a domain and a set of name-value pairs, separated by a colon. In our case the domain portion is article and the key property is name=firstWorkerBean.

At #4 we register the MBean with the MBean server. At #5 we just make sure that our application will not exit immediately, so that we can examine the MBean.

Sample code for this chapter is provided along with the book; you can run the samples by following the readme.txt file included in the chapter06 directory of the source code bundle. Using the sample code, you will just use Maven to build and run the application and JConsole to monitor it, but the behind-the-scenes procedure is described in the following paragraphs.

Let's run the application and see how our MBean appears in a management console, in this case the standard JConsole bundled with the JDK. To enable the JMX management agent for local access, we need to pass -Dcom.sun.management.jmxremote to the JVM. This option lets the management console use inter-process communication to talk to the management agent. Now that you have the application running, you can start JConsole: open a terminal window and run jconsole. When JConsole opens, it shows a window which lets us select either a remote or a local JVM to connect to. Just scan the list of local JVMs to find WorkerAgent under the Name column; select it and press Connect. Figure 2 shows the New Connection window of JConsole which we just talked about.

Figure 2 The New Connection window of JConsole

Now you will see how JConsole shows different aspects of the selected JVM, including memory meters, thread status, loaded classes, a JVM overview which includes an OS overview, and finally the MBeans. Select the MBeans tab and you can see a tree of all MBeans registered with the platform MBean server, along with your own MBean. You should remember what I said about the ObjectName class; the tree clearly shows how a domain contains its child MBeans. When you expand the article node you will see something similar to figure 3.

Figure 3 The JConsole navigation tree and the effect of the ObjectName format on the MBean's placement in the tree

And if you click on the stopAllWorkers node, the content panel of JConsole will load a page similar to figure 4, which also shows the result of executing the method.

Figure 4 The content panel of JConsole after selecting the stopAllWorkers method of the Worker MBean

That is how we can connect to a local JVM; in section 1.6 we will discuss connecting to a JVM process from a remote location to manage the system using JMX.

1.5 JMX Notification

The JMX API defines a notification and notification subscription model which enables MBeans to generate notifications signaling a state change, a detected event, or a problem.

To generate notifications, an MBean must implement the NotificationEmitter interface or extend NotificationBroadcasterSupport. To send a notification, we construct an instance of the class javax.management.Notification or one of its subclasses, such as AttributeChangeNotification, and pass the instance to NotificationBroadcasterSupport.sendNotification.

Every notification has a source. The source is the object name of the MBean that generated the notification.

Each notification has a sequence number. This number can be used to order notifications coming from the same source when order matters and there is a risk of the notifications being handled in the wrong order. The sequence number can be zero, but preferably the number increments for each notification from a given MBean.
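Putting these pieces together, here is a minimal sketch of a notification-emitting variant of the Worker MBean from listings 1 and 2; the message text and the sequence handling are illustrative.

import java.util.concurrent.atomic.AtomicLong;
import javax.management.AttributeChangeNotification;
import javax.management.NotificationBroadcasterSupport;

public class NotifyingWorker extends NotificationBroadcasterSupport
        implements WorkerIF {

    private final AtomicLong sequence = new AtomicLong(); // per-MBean sequence number
    private int maxWorkers;
    private int workersCount;

    public int getWorkersCount() {
        return workersCount;
    }

    public int getMaxWorkers() {
        return maxWorkers;
    }

    public void setMaxWorkers(int newMaxWorkers) {
        int oldValue = this.maxWorkers;
        this.maxWorkers = newMaxWorkers;
        // Emit an attribute change notification to all registered listeners.
        sendNotification(new AttributeChangeNotification(
                this, sequence.incrementAndGet(), System.currentTimeMillis(),
                "MaxWorkers changed", "MaxWorkers", "int",
                oldValue, newMaxWorkers));
    }

    public int stopAllWorkers() {
        return 0;
    }
}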


1.6 Remote management

To manage our worker application from a remote location using a JMX console like JConsole, we just need to ensure that an RMI connector is open in our JVM and that all the appropriate settings, like the port number, authentication mechanism, transport security, and so on, are provided. So, to run our sample application with remote management enabled, we can pass the following parameters to the java command.

-Dcom.sun.management.jmxremote.port=10006     #1

-Dcom.sun.management.jmxremote.authenticate=true #2

-Dcom.sun.management.jmxremote.password.file=passwordFile.txt #3

-Dcom.sun.management.jmxremote.ssl=false #4

At #1 we choose a port for the RMI connector to listen on for incoming connections. At #2 we enable authentication in order to protect our management portal. At #3 we provide the path to a password file which contains the list of usernames and passwords in plain text; in the password file, each username and password pair is placed on one line with a space between them. At #4 we disable SSL for the transport layer.

To connect to a JVM started with these parameters, we choose Remote in the New Connection window of JConsole and provide service:jmx:rmi:///jndi/rmi://127.0.0.1:10006/jmxrmi as the Remote Process URL, along with one of the credentials which we defined in passwordFile.txt.

2 Application Server Management eXtension (AMX)

GlassFish fully adheres to J2EE Management (JSR 77) in terms of exposing the application server configuration as JMX MBeans, but dealing with JMX is neither easy nor likeable for all developers, so Sun has included a set of client-side proxies over the JSR 77 MBeans, plus additional MBeans of their own, to make the presence of JMX completely hidden from developers who want to develop management extensions for GlassFish. This API set is named AMX, and we usually use it to develop management rules.

2.1 J2EE management (JSR 77)

Before we dig into AMX we need to know what JSR 77 is and how it helps us use JMX and AMX for managing the application server. The JSR 77 specification introduces a set of MBeans and services which let any JMX-compatible client manage a Java EE container's deployed objects and Java EE services.

The specification defines a set of MBeans which model all Java EE concepts in a hierarchy of MBeans. The specification determines which attributes and operations each MBean must have and what the effect of calling a method on the managed objects should be. The specification defines a set of events which should be exposed to JMX clients by the MBeans. It also defines a set of attribute statistics which should be exposed by the JSR 77 MBeans to JMX clients for performance monitoring.

The services and MBeans provided by JSR 77 cover:

Monitoring performance statistics of managed artifacts in the Java EE container, for managed objects like EJBs, Servlets, and so on.

Event subscription for important events of the managed objects like stopping or starting an application.

Managing the state of the different standard managed objects of the Java EE container, like changing an attribute of a JDBC connection pool or undeploying an application.

Navigation between Managed objects.

Managed objects

The artifacts whose management and monitoring JSR 77 exposes to JMX clients include a broad range of Java EE components and services. Figure 5 shows the first level of the managed objects in the managed object hierarchy. As you can see in the figure, all objects inherit four attributes which are later used to determine whether an object's state is manageable, whether the object provides statistics, and whether the object provides events for the important actions it performs.

The objectName attribute, which we discussed before, has the same use and format; for example, amx:j2eeType=X-JDBCConnectionPoolConfig,name=DerbyPool represents a connection pool named DerbyPool in the amx domain, and the j2eeType attribute shows the MBean's type.

Figure 5 The first-level hierarchy of JSR 77 managed objects which, based on the specification, must be exposed for JMX management

Now that you have seen how broad the scope of JSR 77 is, you may ask what the use of these MBeans is, how they work, and how they can be used.

The simple answer is that as soon as a managed object comes alive in the application server, a corresponding JSR 77 MBean is initialized for it by the application server's management layer. For example, as soon as you create a JDBC connection pool, a new MBean representing the newly created pool appears under the JDBCConnectionPoolConfig node of a connected JConsole. Figure 6 shows the DerbyPool under the JDBCConnectionPoolConfig node in JConsole.

Figure 6 The DerbyPool under the JDBCConnectionPoolConfig node in JConsole

In figure 5 you can see that we can manage all deployed objects using the MBeans exposed by JSR 77; these objects are the J2EE applications and modules shown in figure 7. A deployed application may have EJB, web, and other modules; the GlassFish management service will initialize a new instance of the appropriate MBean for each module in the deployed application. The management service uses the ObjectName, which we discussed before, to uniquely identify each MBean instance, and later we will use this unique name to access each deployed artifact independently.

Figure 7 The specification urges the implementation of MBeans to manage all the deployable objects shown

Going lower into the deployed modules, we have Servlets, EJBs, and so on. The JSR 77 specification provides MBeans for managing EJBs and Servlets. The EJB MBeans inherit from the EJB MBean and include all the attributes and methods necessary to manage the different types of EJBs. Figure 8 represents the EJB subclasses.

Figure 8 All MBeans related to EJB management in GlassFish application server, based on JSR 77

Java EE resources are one of the richest areas of the Java EE specification, and JSR 77 provides MBeans for managing all standard resources managed by Java EE containers. Figure 9 shows which types of resources are exposed for management by JSR 77.

Figure 9 Java EE resources manageable by the JSR 77 MBeans

All of the basic Java EE concepts are covered by JSR 77 in order to make it possible for third-party management solution developers to integrate Java EE application server management into their solutions.

The events propagated by JSR 77

At the beginning of this section we talked about the events that interested parties can receive from JSR 77 MBeans. These events are as follows for the different types of JSR 77 MBeans.

J2EEServer: An event when the corresponding server enters the RUNNING, STOPPED, or FAILED state

EntityBean: An event when the corresponding Entity Bean enters the RUNNING, STOPPED, or FAILED state

MessageDrivenBean: An event when the corresponding Message Driven Bean enters the RUNNING, STOPPED, or FAILED state

J2EEResource: An event when the corresponding J2EE Resource enters the RUNNING, STOPPED, or FAILED state

JDBCResource: An event when the corresponding JDBC data source enters the RUNNING, STOPPED, or FAILED state

JCAResource: An event when a JCA connection factory or managed connection factory enters the RUNNING, STOPPED, or FAILED state

The monitoring statistics exposed by JSR 77

The specification urges the exposure of some statistics related to the different Java EE components by the JSR 77 MBeans. The statistics required by the specification include:

Servlet statistics: Servlet-related statistics, which include the number of currently loaded Servlets, the maximum number of active Servlets loaded, and so on.

EJB statistics: EJB-related statistics, which include the number of currently loaded EJBs, the maximum number of live EJBs, and so on.

JavaMail statistics: JavaMail-related statistics, which include the maximum number of sessions, the total count of connections, and so on.

JTA statistics: Statistics for JTA resources, which include successful transactions, failed transactions, and so on.

JCA statistics: JCA-related statistics, which include both the non-pooled connections and the connection pools associated with the referencing JCA resource.

JDBC resource statistics: JDBC-related statistics for both the non-pooled connections and the connection pools associated with the referencing JDBC resource or connection factory. The statistics include the total number of opened and closed connections, the maximum number of connections in the pool, and so on. This is really helpful for finding a connection leak in a specific connection pool.

JMS statistics: JMS-related statistics, including statistics for connections, sessions, JMS producers, and JMS consumers.

Application server JVM statistics: JVM-related statistics, with information like the sizes of the different memory sectors, threading information, class loaders and loaded classes, and so on.

2.2 Remotely accessing JSR 77 MBeans by Java code


Now that we have discussed the details of the JSR 77 MBeans, let's see how we can access the DerbyPool MBean from Java code and then change the attribute which represents the maximum number of connections in the connection pool. Listing 4 shows the sample code, which accesses a GlassFish instance on the default port for the JMX listener (the default port is 8686).

Listing 4 Accessing a JSR 77 MBean for changing DerbyPool’s MaxPoolSize attribute.

public class RemoteClient {

    private MBeanServerConnection mbsc = null;
    private ObjectName derbyPool;

    public static void main(String[] args) {
        try {
            RemoteClient client = new RemoteClient();      #A
            client.connect();                              #B
            client.changeMaxPoolSize();                    #C
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void connect() throws Exception {
        JMXServiceURL jmxUrl = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://127.0.0.1:8686/jmxrmi");  #1
        Map env = new HashMap();
        String[] credentials = new String[]{"admin", "adminadmin"};
        env.put(JMXConnector.CREDENTIALS, credentials);              #2
        JMXConnector jmxc =
            JMXConnectorFactory.connect(jmxUrl, env);                #3
        mbsc = jmxc.getMBeanServerConnection();                      #4
    }

    private void changeMaxPoolSize() throws Exception {
        String query = "amx:j2eeType=X-JDBCConnectionPoolConfig,name=DerbyPool";
        ObjectName queryName = new ObjectName(query);                #5
        Set s = mbsc.queryNames(queryName, null);                    #6
        derbyPool = (ObjectName) s.iterator().next();
        mbsc.setAttribute(derbyPool,
            new Attribute("MaxPoolSize", new Integer(64)));          #7
    }
}

#A Initiate an instance of the class

#B Get the JMX connection

#C Change the attribute's value

At #1 we create a URL pointing to GlassFish's JMX service. At #2 we prepare the credentials which we must provide to connect to the JMX service. At #3 we initialize the connector. At #4 we create a connection to GlassFish's MBean server. At #5 we query the registered MBeans for an MBean matching our DerbyPool MBean. At #6 we get the result of the query as a set; we are assuming here that an MBean with the given name exists, otherwise we would have to check whether the set is empty. At #7 we just change the attribute. You can check the attribute in JConsole and you will see that it has changed there as well.

In the sample code we just updated DerbyPool's maximum number of connections to 64. This can be counted as one of the simplest tasks related to JSR 77, management, and JMX; using plain JMX for a complex task would burden us with many lines of complex reflection-based code which are hard to maintain and debug.

2.3 Application Server Management eXtension (AMX)

Now that you have seen how hard it is to work with the JSR 77 MBeans, I can tell you that you are not going to use them directly in your management applications, although you could.

What is AMX

In the AMX APIs, java.lang.reflect.Proxy is used to generate Java objects which implement the various AMX interfaces. Each proxy internally stores the JMX ObjectName of a server-side JMX MBean whose MBeanInfo corresponds to the AMX interface implemented by the proxy.

So, at the same time, we have JMX MBeans for use through any JMX-compliant management software, and we have the AMX dynamic proxies to use as easy-to-use local objects for managing the application server.

The GlassFish administration architecture is based on the concept of the administration domain. An administration domain is responsible for managing the multiple resources that belong to it. A resource can be a cluster of multiple GlassFish instances, a single GlassFish instance, a JDBC connection pool inside an instance, and so on. Hundreds of AMX interfaces are defined to proxy, for client-side access, all of the GlassFish managed resources, which are themselves defined as JSR 77 MBeans. All of these interfaces are placed under the com.sun.appserv.management package.

The AMX dynamic proxies have several benefits over plain JMX MBeans, as follows:

Strongly typed methods and attributes for compile-time type checking

Structural consistency with the domain.xml configuration file

Consistent and structured naming for methods, attributes, and interfaces

The possibility of navigating from a leaf AMX bean up to the DAS

AMX MBeans

AMX defines different types of MBeans for different purposes, namely configuration MBeans, monitoring MBeans, utility MBeans, and JSR 77 MBeans. All AMX MBeans share some common characteristics, including:

They all implement the com.sun.appserv.management.base.AMX interface, which contains methods and fields for checking the interface type and group, and for reaching the MBean's container and root domain.

They all have a j2eeType and a name property within their ObjectName. The j2eeType property specifies the interface we are dealing with.

All MBeans that logically contain other MBeans implement the com.sun.appserv.management.base.Container interface. Using the Container interface we can navigate from a leaf AMX bean to the DAS and vice versa; for example, from the domain AMX bean we can get a list of all connection pools or EJB modules deployed in the domain.

JSR 77 MBeans that have a corresponding configuration or monitoring peer expose it using getConfigPeer or getMonitoringPeer. However, there are many configuration and monitoring MBeans that do not correspond to JSR 77 MBeans.

Configuration MBeans

We discussed that there are several types of MBeans in the AMX framework; one of them is the configuration MBeans. Basically, these MBeans represent the content and structure of domain.xml and the other configuration files.

In GlassFish, all configuration information is stored in one central repository owned by the DAS. In a single-instance installation the instance acts as the DAS, while in a clustered installation the DAS's sole responsibility is taking care of the configuration and propagating it to all instances. The information stored in the repository is exposed to any interested party, like an administration console, through the AMX interfaces.

Any developer familiar with the domain.xml structure will feel very comfortable with the configuration interfaces.

Monitoring MBeans

Monitoring MBeans provide transient monitoring information about all the vital components of the application server. A monitoring interface may or may not provide statistics; if it provides statistics, it should implement the MonitoringStats interface, which is a JSR 77-compliant interface for providing statistics.

Utility MBeans

Utility MBeans provide commonly used services to the application server. These MBeans all extend the Utility interface, the Singleton interface, or both. All of these MBean interfaces are located in the com.sun.appserv.management.base package. Notable utility MBeans are listed in table 1.

Table 1 AMX Utility MBeans along with description

SystemInfo: Provides information about application server capabilities, like clustering support

QueryMgr: Provides JMX-like queries which are restricted to AMX MBeans

BulkAccess: Provides network-efficient “bulk” calls whereby many attributes or operations in many MBeans may be fetched and invoked in one invocation, thus minimizing network overhead

NotificationService: Provides buffering and selective dynamic listening for Notifications on AMX MBeans

UploadDownloadMgr: Supports uploading and downloading of files to/from the application server
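As a small example of a utility MBean in action, the QueryMgr can list all the JDBC connection pool configurations in the domain. This sketch assumes a DomainRoot obtained the same way as in listing 5.

Set<JDBCConnectionPoolConfig> pools =
        dRoot.getQueryMgr().queryJ2EETypeSet(XTypes.JDBC_CONNECTION_POOL_CONFIG);
for (JDBCConnectionPoolConfig pool : pools) {
    System.out.println(pool.getName()); // e.g. DerbyPool
}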

Java EE Management MBeans

The Java EE management MBeans implement, and in some cases extend, the management hierarchy as defined by JSR 77, which specifies the management model for the whole Java EE platform. All JSR 77 MBeans in the AMX domain offer access to configuration and monitoring MBeans using the getMonitoringPeer and getConfigPeer methods.

Dynamic Client Proxies

Dynamic client proxies are an important part of the AMX API and enhance ease of use for the programmer. JMX MBeans can be used directly through an MBeanServerConnection to the server; however, client proxies greatly simplify access to attributes and operations on MBeans, offering get/set methods and type-safe invocation of operations. Compiling against the AMX interfaces means that compile-time checking is performed, as opposed to server-side runtime checking when MBeans are invoked generically through MBeanServerConnection.

See the API documentation for the com.sun.appserv.management package and its sub-packages for more information about using proxies. The API documentation explains the use of AMX with proxies. If you are using JMX directly (for example, by using MBeanServerConnection), the return types, argument types, and method names might vary as needed to account for the difference between a strongly typed proxy interface and the generic MBeanServerConnection or ObjectName interface.

Changing the DerbyPool attributes using AMX

In listing 4 you saw how we can use the plain JMX and JSR 77 approach to change the attributes of a JDBC connection pool; in this part we are going to perform the same operation using AMX, to see how much easier and more effective AMX is.

Listing 5 Using AMX to change DerbyPool MaxPoolSize attribute

AppserverConnectionSource appserverConnectionSource =
    new AppserverConnectionSource(AppserverConnectionSource.PROTOCOL_RMI,
        "127.0.0.1", 8686, "admin", "adminadmin", null, null);           #1
DomainRoot dRoot = appserverConnectionSource.getDomainRoot();            #2
JDBCConnectionPoolConfig cpConf =
    dRoot.getContainee(XTypes.JDBC_CONNECTION_POOL_CONFIG, "DerbyPool"); #3
cpConf.setMaxPoolSize("100");                                            #4

You are not mistaken: those four lines of code let us change the maximum pool size of the DerbyPool.

At #1 we create a connection to the server on which we want to perform our management operation. Several protocols can be used to connect to the application server's management layer; as you can see, when we construct the appserverConnectionSource instance we use AppserverConnectionSource.PROTOCOL_RMI as the communication protocol, to ensure that we will not need extra JAR files from the OpenDMK project. The two other protocols we could use are AppserverConnectionSource.PROTOCOL_HTTP and AppserverConnectionSource.PROTOCOL_JMXMP. The connection that we made does not use TLS, but we could use TLS to ensure transport security.

At #2 we get the AMX domain root, which later lets us navigate between all the AMX leaves: servers, clusters, connection pools, listeners, and so on. At #3 we query for an AMX MBean whose name is DerbyPool and whose j2eeType is equal to XTypes.JDBC_CONNECTION_POOL_CONFIG. At #4 we set a new value for the attribute of our choice, the MaxPoolSize attribute.

Monitoring GlassFish using AMX

The monitoring term comes to developers' and administrators' minds whenever they are dealing with performance tuning, but monitoring can also be used for management purposes, like the automation of specific tasks which would otherwise need an administrator to take care of them. An example of this type of monitoring is critical-condition notifications, which can be sent via email, SMS, or any other gateway that the administrators and system managers prefer.

Imagine that you have a system running on top of GlassFish and you want to be notified whenever acquiring a connection from a connection pool named DerbyPool takes longer than 35 seconds.

You also want to store all information related to the connection pool when the pool is close to saturation, for example when there are only 5 connections left to give away before the pool becomes saturated.

So we need to write an application which monitors the GlassFish connection pool, checks its statistics regularly, and, if the above criteria are met, sends us an email or an SMS and saves the connection pool information.

AMX provides us with all the required means to monitor the connection pool and to be notified when any connection pool attribute, or any of its monitoring attributes, changes, so we will just use our current AMX knowledge along with two new AMX concepts.

The first concept we will use is the AMX monitoring MBeans, which provide us with all the statistics about the managed objects they monitor. Using the AMX monitoring MBeans is the same as using other AMX MBeans, like the connection pool MBean.

The other concept is the notification mechanism, which AMX provides on top of the already established JMX notification mechanism. The notification mechanism is fairly simple: we register our interest in some notifications and we receive them whenever the MBeans emit a notification.

We know that we can configure GlassFish to collect statistics about almost all managed objects by changing the monitoring level from OFF to either LOW or HIGH. In our sample code we will change the monitoring level from our own code and then use the same statistics that the administration console shows to check the connection pool's consumed connections.

Listing 6 shows a sample application which monitors DerbyPool and notifies the administrator whenever acquiring a connection takes longer than accepted. The application saves all statistics information when the connection pool gets close to saturation.


Listing 6 Monitoring a connection pool and notifying the administrator when the connection pool is about to reach its maximum size

public class AMXMonitor implements NotificationListener {                     #1

    AttributeChangeNotificationFilter filter;                                 #2
    AppserverConnectionSource appserverConnectionSource;
    private int cPoolSize;
    private DomainRoot dRoot;
    JDBCConnectionPoolMonitor derbyCPMon;                                     #3
    JDBCConnectionPoolConfig cpConf;

    private void initialize() {
        try {
            appserverConnectionSource =
                new AppserverConnectionSource(AppserverConnectionSource.PROTOCOL_RMI,
                    "127.0.0.1", 8686, "admin", "adminadmin", null, null);
            dRoot = appserverConnectionSource.getDomainRoot();
            ConfigConfig conf =
                dRoot.getDomainConfig().getConfigConfigMap().get("server-config"); #4
            conf.getMonitoringServiceConfig().getModuleMonitoringLevelsConfig()
                .setJDBCConnectionPool(ModuleMonitoringLevelValues.HIGH);          #4
            cpConf = dRoot.getContainee(XTypes.JDBC_CONNECTION_POOL_CONFIG,
                "DerbyPool");
            cPoolSize = Integer.parseInt(cpConf.getMaxPoolSize());
            filter = new AttributeChangeNotificationFilter();                 #2
            filter.enableAttribute("ConnRequestWaitTime_Current");            #2
            filter.enableAttribute("NumConnUsed_Current");                    #2
            derbyCPMon = dRoot.getContainee(XTypes.JDBC_CONNECTION_POOL_MONITOR,
                "DerbyPool");                                                 #5
            derbyCPMon.addNotificationListener(this, filter, null);           #5
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public void handleNotification(Notification notification, Object handback) {
        AttributeChangeNotification notif =
            (AttributeChangeNotification) notification;                       #6
        if (notif.getAttributeName().equals("ConnRequestWaitTime_Current")) {
            int curWaitTime =
                Integer.parseInt(String.valueOf(notif.getNewValue()));        #7
            if (curWaitTime > 35000) {
                saveInfoToFile();
                sendNotification("Current wait time is: " + curWaitTime);
            }
        } else {
            int curPoolSize =
                Integer.parseInt(String.valueOf(notif.getNewValue()));        #8
            if (curPoolSize > cPoolSize - 5) {
                saveInfoToFile();
                sendNotification("Current pool size is: " + curPoolSize);
            }
        }
    }

    private void saveInfoToFile() {
        try {
            FileWriter fw = new FileWriter(new File("stats_" + new Date() + ".sts"));
            Statistic[] stats =
                derbyCPMon.getStatistics(derbyCPMon.getStatisticNames());     #9
            for (int i = 0; i < stats.length; i++) {
                fw.write(stats[i].getName() + " : " + stats[i].getUnit());    #10
            }
            fw.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    private void sendNotification(String message) {
    }

    public static void main(String[] args) {
        AMXMonitor mon = new AMXMonitor();
        mon.initialize();
    }
}

At #1 we implement the NotificationListener interface, as we are going to use the JMX notification mechanism. At #2 we define an AttributeChangeNotificationFilter, which filters the notifications down to the subset that we are interested in, and we add the attributes that we care about to the set of non-filtered attribute change notifications. At #3 we declare the AMX MBean which represents DerbyPool's monitoring information; note that we will get an instance-not-found exception if the connection pool has had no activity yet. At #4 we change the monitoring level of JDBC connection pools to ensure that GlassFish gathers the required statistics. At #5 we look up our designated connection pool monitoring MBean and add a new filtered listener to it.

The handleNotification method is the only method in the NotificationListener interface which we need to implement. At #6 we cast the received notification to AttributeChangeNotification, as we know that the notification is of this type. At #7 we deal with a change in the ConnRequestWaitTime_Current attribute: we get its new value to check our condition (we could also get the old value if we were interested). At #8 we deal with the NumConnUsed_Current attribute and then call the saveInfoToFile and sendNotification methods.

At #9 we get the names of all the connection pool monitoring statistics, and at #10 we write each statistic's name, along with its unit, to a text file.

AMX and Dotted Names

AMX is designed with ease of use and efficiency in mind, so in addition to the standard JMX programming model of getters and setters, we can use another, hierarchical model to access all AMX MBean attributes. In the dotted-name model, each attribute of an MBean starts from its root, which is domain for all configuration MBeans and server for all runtime MBeans. For example, domain.resources.jdbc-connection-pool.DerbyPool.max-pool-size represents the maximum pool size of the DerbyPool which we discussed before.

Two interfaces are provided to access the dotted names, one for monitoring and one for management and configuration purposes. MonitoringDottedNames is provided to assist with reading monitoring attributes, while ConfigDottedNames provides write access to attributes using the dotted format. We can get an instance of the former's implementation using dRoot.getMonitoringDottedNames().
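As a hedged sketch only: assuming the DottedNames contract exposes a generic dottedNameGet lookup (check the com.sun.appserv.management.base javadoc for the exact signature), and using a well-known JVM monitoring dotted name, a read could look like this:

// Assumptions: dottedNameGet(String) returns the attribute value as an Object,
// and the dotted name below is available at the configured monitoring level.
Object value = dRoot.getMonitoringDottedNames().dottedNameGet(
        "server.jvm.memory.usedheapsize-count");
System.out.println(value);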

3 GlassFish Management Rule

The above sample application is promising, especially when you want to have tens of rules for automatic management, but running a separate process and watching that process is not to the taste of many administrators. The GlassFish application server provides a very effective way to deploy such management rules into the application server itself, for the sake of simplicity and integration. The benefits and use cases of Management Rules can be summarized as follows:

Managing complexity by self-configuring based on conditions

Keeping administrators free for complex tasks by automating mundane management tasks

Improving performance by self-tuning under unpredictable runtime conditions

Automatically adjusting the system for availability by preventing problems and recovering from them (self-healing)

Enhancing security by taking self-protective actions when security threats are detected

A GlassFish Management Rule consists of:

Event: An event uses the JMX notification mechanism to trigger actions. Events can range from an MBean attribute change to specific log messages.

Action: Actions are associated with events and are triggered when related events happen. Actions can be MBeans that implement the NotificationListener interface.

When we deploy a Management Rule into GlassFish, GlassFish registers our MBean in the MBean server and registers its interest in the notification types that we determined. Therefore, upon any event which our MBean is registered for, the handleNotification method of our MBean will execute. GlassFish provides some predefined types of events which we can register our MBean's interest in. These events are as follows:

Monitor events: These types of events trigger an action based on an MBean attribute change.

Notification events: Every MBean which implements the NotificationBroadcaster interface can be a source of this event type.

System events: This is a set of predefined events that come from the internal infrastructure of GlassFish application server. These events include: lifecycle, log, timer, trace, and cluster events.

Now, let's see how we can achieve functionality similar to what our AMXMonitor application provides, using GlassFish Management Rules. First we need to change our application into a mere JMX MBean which implements the NotificationListener interface and performs the required action, for example sending an email or SMS, in the handleNotification method.

Changing the application into an MBean should be very easy; we just need to define an MBean interface and then the MBean implementation, which implements one single method, the handleNotification method. A minimal sketch follows.
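Here is what such an MBean might look like; the class names and the body of handleNotification (where you would plug in email or SMS delivery) are illustrative, and the two types would live in separate source files.

import javax.management.Notification;
import javax.management.NotificationListener;

// Standard MBean naming: interface AMXMonitorMBean, implementation AMXMonitor.
public interface AMXMonitorMBean extends NotificationListener {
}

public class AMXMonitor implements AMXMonitorMBean {

    public void handleNotification(Notification notification, Object handback) {
        // Called by GlassFish whenever the Management Rule's event fires;
        // here we would send an email or SMS and log the details.
        System.out.println("Rule fired: " + notification.getMessage());
    }
}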

Now that we have our MBean's compiled JAR file, we can deploy it using the administration console. So open the GlassFish administration console, navigate to the Custom MBeans node, and deploy the MBean by providing the path to the JAR file and the name of the MBean implementation class. Now that our MBean is deployed into GlassFish, it is available to the class loader to be used as an action for a Management Rule, so to create the Management Rule, use the following procedure.

In the navigation tree, under the Configuration node, select the Management Rules node. In the content panel select New, use dcpRule as the name, make sure that you tick the enabled checkbox, select monitor in the event type combo box, and let the events be recorded in the log files; press Next to navigate to the second page of the wizard.

On the second page, for the Observed MBean field enter amx:X-ServerRootMonitor=server,j2eeType=X-JDBCConnectionPoolMonitor,name=DerbyPool and for the observed attribute enter ConnRequestWaitTime_Current. For the monitor type select counter. The number type is int and the initial threshold is 35000, which is the value that, when exceeded by the monitored attribute, triggers our action.

Scroll down and, for the action, select AMXMonitor, which is the name of the MBean we deployed in the previous step.

You saw that the overall process is fairly simple, but there are some limitations, such as the possibility of monitoring only a single MBean and attribute at a time. Therefore, we need to create another Management Rule for the NumConnUsed_Current attribute.

Now that we have reached the end of this article, you should be fairly familiar with JMX, AMX, and GlassFish Management Rules. In the next articles we will use the knowledge we gained here to create administration commands and monitoring solutions.

4 Summary

We thoroughly discussed managing GlassFish from Java code, starting with JMX, which is the foundation of all Java-based management solutions and frameworks. We discussed the JMX architecture, the event model, and the different types of MBeans included in the JMX programming model. We also covered AMX as GlassFish's way of providing its management functionality to client-side applications that are not interested in using the complicated JMX APIs.

We discussed GlassFish management using the AMX APIs to show how much simpler AMX is compared to the plain JMX approach, and we covered GlassFish monitoring using the AMX APIs.

You saw how we can use GlassFish's self-management and self-administration functionalities to automate some tasks and keep administrators free from low-level, repetitive administration and management work.

GlassFish Modularity System: How to Extend GlassFish CLI and Web Administration Console (Part 2: Developing Sample Modules)

Extending GlassFish CLI and Administration Console: Developing the Sample Modules

Administrators are always looking for more effective, easier-to-use, and less time-consuming tools to sit between them and what they are supposed to administer and manage. GlassFish provides effective, easy-to-access, and simple-to-navigate administration channels which cover all the day-to-day tasks that administrators need to perform. But when it comes to administration, administrators usually write their own shell scripts to automate some tasks, use CRON or other schedulers to schedule automatic tasks, and so on, to achieve their own customized administration flow.

With the GlassFish console it is quite possible to use shell scripts or CRON to further extend the administration and facilitate daily tasks, but sometimes it is far better to have a more sophisticated and mature way of extending and customizing the administration interfaces. GlassFish provides all the necessary interfaces and required services to allow administrators or developers to develop new modules for the GlassFish administration interfaces, either to add a new feature or to enhance already available capabilities.

The extensibility of the GlassFish administration channels is not only there to let administrators customize the administration interfaces; it is also there to let container developers build administration interfaces for their containers that are fully integrated with the existing interfaces and fully compatible with the present administration console's look and feel. Using the administration console's extensibility, developers can create new administration console commands very easily by utilizing the already available services and dependency injection, which provides them with all the environmental objects they need.

This article covers how we can develop new commands for the CLI console and how we can develop new modules to add new pages and navigation nodes to the web administration console.

1 CLI console extensibility

The GlassFish CLI can be considered the most accessible administration console for experienced administrators, as they are better with the keyboard and prefer automated and scripted tasks to clicking the mouse in a web page to see a result. Not only is the GlassFish CLI very rich in commands and features, but its design is also based on the same modularity concepts that the whole application server is based on.

CLI modularity is very helpful for those who want to extend GlassFish by providing a new set of commands for their container, deploying new monitoring probes and providing the corresponding CLI commands for accessing the monitoring information, developing new commands which administrators may require and which are not already in place, and so on.

1.1 Prepare your development environment

This is a hands-on article which involves developing some new modules for the GlassFish application server and deploying them to it; therefore, we should be able to compile the modules, which usually use HK2 services internally and are bundled as OSGi bundles for deployment.

Although we can use an IDE like NetBeans to build the sample code, we are going to use a more IDE agnostic way, Apache Maven, to ensure that we can simply manage the dependency hurdle and make the sample code easy to import into the IDE of your choice. Get Maven from http://maven.apache.org/download.html and install it according to the installation instructions provided on the same page. Then you are ready to dig further into developing modules.
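
As a quick sketch of the workflow we will follow for every module in this article, the commands below build a module and drop the resulting bundle into the application server; the artifact name and the GLASSFISH_HOME variable are hypothetical and depend on your own project and installation layout:

# build the module from the project root (assuming Maven is on the PATH)
mvn install
# copy the resulting bundle into the modules directory (hypothetical paths)
cp target/my-module.jar $GLASSFISH_HOME/modules/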

1.2 CLI modularity and extendibility

We discussed how GlassFish utilizes OSGI and HK2 to provide a modular architecture; one place where this modular architecture shows itself is CLI modularity. We know that all extendibility points in GlassFish are based on contracts and contract providers, which, simply put, are interfaces and interface implementations along with some annotations to mark and further configure the contract implementation.
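
As a minimal sketch of this contract and provider pattern (the service and its name are hypothetical, not part of GlassFish itself), a contract is simply an annotated interface and a provider is an annotated implementation:

import org.jvnet.hk2.annotations.Contract;
import org.jvnet.hk2.annotations.Service;

@Contract                                   // marks the extension point
public interface GreetingService {
    String greet(String name);
}

@Service(name = "simple-greeting")          // marks this implementation as a provider
public class SimpleGreetingService implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}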

We are talking about adding new commands to the CLI dynamically by placing an OSGI bundle in the modules directory of GlassFish, whether the bundle just contains CLI commands or it contains CLI commands, web based administration console pages, and a container implementation. At first there is no predefined list of known commands; commands load on demand, as soon as you try to use one of them. For example, when you issue ./asadmin list-commands, the CLI framework will check for all providers which implement the admin command contract and show the list by extracting the information from each provider (implementation). CLI commands can use the dependency injection provided by HK2, and each command may also extract some information and provide it to other components which might be interested in using it.

Like all providers in the GlassFish modularity system, we should annotate each command implementation with the @Service annotation to ensure that HK2 will treat the implementation as a service for lifecycle management and service locating. CLI commands, like any other HK2 service, should have a scope which determines how the lifecycle of the command is managed by HK2. Usually we should ensure that each command only lives in the context of its execution and not beyond that context; therefore we annotate each command with a suitable scope, like the PerLookup.class scope, which results in a new command instance for each lookup.

All we need to know to develop CLI commands boils down to understanding some interfaces and helper classes; the other required knowledge is around AMX and HK2.

Each CLI command must implement a contract: the org.glassfish.api.admin.AdminCommand interface, which is marked with the org.jvnet.hk2.annotations.Contract annotation and is shown in listing 1.

Listing 1 The AdminCommand interface which any CLI command has to implement


@Contract                                              #1
public interface AdminCommand {
    public void execute(AdminCommandContext context);  #2
}

The AdminCommand interface has one method which is called by the asadmin utility when we call the associated command. At #1 we define the interface as a contract which HK2 service providers can implement. At #2 we have the execute method, which accepts an argument of type org.glassfish.api.admin.AdminCommandContext, shown in listing 2. This class is a set of utilities which gives developers access to some contextual variables like reporting, logging, and command parameters (the last of which will be deprecated).

Listing 2 The AdminCommandContext class, a bridge between the command implementation and its execution context


public class AdminCommandContext implements ExecutionContext {

    public ActionReport report;
    public final Properties params;
    public final Logger logger;
    private List<File> uploadedFiles;

    public AdminCommandContext(Logger logger, ActionReport report, Properties params) {
        this(logger, report, params, null);
    }

    public AdminCommandContext(Logger logger, ActionReport report, Properties params,
                               List<File> uploadedFiles) {
        this.logger = logger;
        this.report = report;
        this.params = params;
        this.uploadedFiles = (uploadedFiles == null) ? emptyFileList() : uploadedFiles;
    }

    private static List<File> emptyFileList() {
        return Collections.emptyList();
    }

    public ActionReport getActionReport() {
        return report;
    }

    public void setActionReport(ActionReport newReport) {
        report = newReport;
    }

    public Properties getCommandParameters() {
        return params;
    }

    public Logger getLogger() {
        return logger;
    }

    public List<File> getUploadedFiles() {
        return uploadedFiles;
    }
}

The only unfamiliar class is the org.glassfish.api.ActionReport abstract class, which has several subclasses that allow us, as CLI command developers, to report the result of our command execution to the original caller. Several subclasses exist so that the result of the execution can be reported in different environments, for example as an HTML, JSON, or plain text report. The report may include the exit code, possible exceptions, failure or success, and so on, based on the reporter type. The important subclasses of ActionReport are listed in table 1.

Table 1 All available reporters which can be used to report the result of a command execution

  Reporter class            Description
  HTMLActionReporter        Generates an HTML document containing the report information
  JsonActionReporter        Generates a JSON document containing the report information
  PlainTextActionReporter   Provides the necessary report information as plain text
  PropsFileActionReporter   Generates a properties file containing all report elements
  SilentActionReport        No report at all
  XMLActionReporter         Generates a machine readable XML document containing all report elements

All of these reporters are placed in the com.sun.enterprise.v3.common package, except SilentActionReport, which resides in org.glassfish.embed.impl.
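
To get a feeling for how these pieces fit together, the following fragment sketches how a command could be exercised outside asadmin, for example in a test. It assumes the AdminCommandContext constructors shown in listing 2, a hypothetical myCommand variable holding any AdminCommand instance with its parameters already set, and that ActionReport exposes a getMessage method:

// build a context with a plain text reporter and run the command by hand
ActionReport report = new PlainTextActionReporter();
AdminCommandContext context =
        new AdminCommandContext(Logger.getLogger("test"), report, new Properties());
myCommand.execute(context);
System.out.println(report.getMessage());   // assuming ActionReport exposes getMessage()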

Now we need a way to pass parameters to our command: to determine which parameters are necessary and which are optional, which ones need an argument, and what the type of the argument should be. There is an annotation which lets us annotate properties of the AdminCommand implementation class to mark them as parameters. The annotation we should use to mark a property as a parameter is @Param, placed in the org.glassfish.api package.

Listing 3 The Param annotation which we can use to have a parameterized CLI command


@Retention(RUNTIME)
@Target({METHOD, FIELD})
public @interface Param {
    public String name() default "";
    public String acceptableValues() default "";
    public boolean optional() default false;
    public String shortName() default "";
    public boolean primary() default false;
}

As you can see, the annotation can be placed either on a setter method or on a field. The annotation has several elements, including the parameter name, which by default is the same as the property name; a list of comma separated acceptable values; the short name for the parameter; whether the parameter is optional; and finally whether the parameter name is required to be included or not. In each command only one parameter can be primary.

When we mark a property or a setter method with the @Param annotation, the CLI framework will try to initialize the parameter with the given value and then call the execute method. During the initialization, the CLI framework checks the different attributes of a parameter, like its necessity, short name, acceptable values, and so on.
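
For example, an optional parameter can be paired with a default value directly in the field declaration; the fragment below is a hypothetical snippet that would sit inside an AdminCommand implementation:

// used as --verbose true|false; when the caller omits it, the default below is kept
@Param(name = "verbose", optional = true, acceptableValues = "true,false")
String verbose = "false";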

The last thing we should think about for a CLI command is its internationalization and the possibility of showing localized versions of the text messages about the command and its parameters to the client. This requirement is covered by another annotation named @I18n; the associated interface is placed in org.glassfish.api. Listing 4 shows the I18n interface declaration.

Listing 4 I18n Interface which we should use to provide internationalized messages for our CLI command


@Retention(RUNTIME)
@Target({TYPE, METHOD, FIELD})
public @interface I18n {
    public String value();
}

The annotation can be used both on properties and on methods. At runtime the CLI framework will extract and inject the localized value into the variable based on the current locale of the JVM.

The localization file, which is a standard resource bundle, should follow the same key=value format that we use for any properties file. The standard name for the file is LocalStrings_Language_Country.properties; an example file can be LocalStrings_en_US.properties for United States English, or LocalStrings.properties as the default bundle for any locale which has no corresponding locale file in the bundle.

1.3 A CLI command which returns the operating system details

In this section we will first develop a very simple CLI command to get more familiar with the abstractions we discussed before, and then we will develop a more sophisticated command which will teach us more details about the CLI framework and its capabilities.

The first command we are going to develop simply shows which operating system our application server is running on, along with some information about the OS and the JVM, like the architecture, JVM version, JVM vendor, and so on. Our command name is get-platform-details.

First let's look at the Maven build file and analyze its content; I am not going to scrutinize the entire build file, but we will look at the elements related to our CLI command development.

Listing 5 Maven build file for creating a get-platform-details CLI command


<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>                             #1
    <parent>
        <groupId>org.glassfish.pluggability</groupId>
        <artifactId>pluggability</artifactId>
        <version>3.0-SNAPSHOT</version>
    </parent>
    <artifactId>glassfish.book.chapterplatformDetailsCommand</artifactId>
    <packaging>hk2-jar</packaging>                                 #2
    <name>platform-details-command</name>
    <description>GlassFish in Action, Chapter 13, Platform Details Command</description>
    <dependencies>
        <dependency>
            <groupId>org.glassfish.admin</groupId>
            <artifactId>cli-framework</artifactId>
            <version>${project.parent.version}</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.common</groupId>
            <artifactId>common-util</artifactId>
            <version>${project.parent.version}</version>
        </dependency>
    </dependencies>
    <build>
        <resources>
            <resource>
                <directory>src/main/resources</directory>          #3
                <includes>
                    <include>**/*.1</include>
                </includes>
            </resource>
        </resources>
    </build>
</project>

You will find more details in the POM file included in the sample source code accompanying the book, but what is shown in the listing is enough to build a CLI command module. At #1 we define the Maven POM model version. At #2 we ensure that the bundle we are creating is built using the HK2 module specification, which means including all the OSGI related descriptions, like imported and exported packages, in the MANIFEST.MF file which Maven generates inside the META-INF folder of the final JAR file. At #3 we ensure that Maven will include our help file for the command. The help file name should be similar to the command name and it can include anything textual; we usually follow a UNIX man page structure to create help files for the commands. The help file is shown when we call ./asadmin get-platform-details --help. Figure 1 shows the directory layout of our sample command project. As you can see, we have two similar directory layouts for the manual file and the Java source code, and the pom.xml file is in the root of the cli-platform-details-command directory.

Figure 1 Directory layout of the get-platform-details command project.

Now let's see the source code for the command and the outcome of the build process. Listing 6 shows the source code for the command itself; when we execute ./asadmin get-platform-details, the GlassFish CLI will call its execute method. The result of executing the command is similar to figure 2; as you can see, our command executed successfully.

Figure 2 Sample output for calling get-platform-details with runtime as the parameter

Now you can develop your own command and test it to see a similar result in the terminal window.

Listing 6 Source code for PlatformDetails.java which is a CLI command


@Service(name = "get-platform-details")               #1
@Scoped(PerLookup.class)                                  #2
public class PlatformDetails implements AdminCommand {           #3

ActionReport report;
OperatingSystemMXBean osmb = ManagementFactory.getOperatingSystemMXBean();                  #4
RuntimeMXBean rtmb = ManagementFactory.getRuntimeMXBean();     #5
// this value can be either runtime or os for our demo
@Param(name = "detailsset", shortName = "DS", primary = true, optional = false)                                                        #6
String detailsSet;

public void execute(AdminCommandContext context) {
try {

report = context.getActionReport();
StringBuffer reportBuf;
if (detailsSet.equalsIgnoreCase("os")) {
reportBuf = new StringBuffer("OS details: n");
reportBuf.append("OS Name: " + osmb.getName() + "n");
reportBuf.append("OS Version: " + osmb.getVersion() + "n");
reportBuf.append("OS Architecture: " + osmb.getArch() + "n");
reportBuf.append("Available Processor: " + osmb.getAvailableProcessors() + "n");
reportBuf.append("Average Load: " + osmb.getSystemLoadAverage() + "n");
report.setMessage(reportBuf.toString());            #7
report.setActionExitCode(ExitCode.SUCCESS);         #8

} else if (detailsSet.equalsIgnoreCase("runtime")) {

reportBuf = new StringBuffer("Runtime details: n");
reportBuf.append("Virtual Machine Name: " + rtmb.getVmName() + "n");
reportBuf.append("VM Vendor: " + rtmb.getVmVendor() + "n");
reportBuf.append("VM Version: " + rtmb.getVmVersion() + "n");
reportBuf.append("VM Start Time: " + rtmb.getStartTime() + "n");
reportBuf.append("UpTime: " + rtmb.getUptime() + "n");
report.setMessage(reportBuf.toString());
report.setActionExitCode(ExitCode.SUCCESS);

} else {
report.setActionExitCode(ExitCode.FAILURE);         #9
report.setMessage("Th given value for " +
"detailsset parameter is not acceptable, the value " +
"can be either 'runtime' or 'os'");
}

} catch (Exception ex) {

report.setActionExitCode(ExitCode.FAILURE);
report.setMessage("Command failed with the following error:n" + ex.getMessage());
context.getLogger().log(Level.SEVERE, "get-platform-details " + detailsSet + " failed", ex);

}

}
}

At #1 we are marking the implementation as a provider which implements a contract. The provided service name is get-platform-details, which is the same as our CLI command name. At #2 we set a suitable scope for the command: a new instance should be created for each execution rather than one instance being reused. At #3 we implement the contract interface. At #4 and #5 we get the MXBeans which provide the information we need; these beans belong to our local server or to a remote running instance if we use --host and --port to execute the command against a remote instance. At #6 we define a parameter which is not optional; the parameter name is detailsset, which we pass as --detailsset when we execute the command, and its short name is DS, which we pass as --DS. The parameter is primary, so we do not even need to name it; we can just pass the value and the CLI framework will assign it to this parameter. Pay attention to the parameter name and the containing variable: if we do not use the name element of the @Param annotation, the parameter name will be the same as the variable name. At #7 we set the output which we want to show to the user as the report object's message. At #8 we set the exit code to success as we managed to execute the command. At #9 we set the exit code to failure because the value passed for the parameter is not recognizable, and we show a suitable message to help the user diagnose the problem. At #10 we handle any exception by logging it and setting the exit code to failure.
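
Assuming the module is built and dropped into the modules directory of a default local installation, the following invocations are equivalent ways of calling the command:

./asadmin get-platform-details --detailsset runtime
./asadmin get-platform-details --DS runtime
./asadmin get-platform-details runtime        # primary parameter, name omitted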

GlassFish CLI commands fall into two broad categories. One is local commands, like create-domain or start-domain, which execute locally; you cannot pass remote connection parameters like --host and --port and expect the command to execute on a remote GlassFish installation. These commands extend the com.sun.enterprise.cli.framework.Command abstract class or one of its subclasses, like com.sun.enterprise.admin.cli.BaseLifeCycleCommand, and do not need an already running GlassFish instance. The other category is remote commands, like the ones we develop in this article: they implement the AdminCommand contract and execute inside a running instance, which is why the connection parameters apply to them.

The next command we will study lists some details about the JMS service of the target domain, but this time we will discuss resource injection and some details around the localization of command messages.

Listing 7 shows the source code for a command which shows some information about the domain's JMS service, including starting arguments, JMS service type, address list behavior, and so on. Although the source code only shows how to get these details, setting the value of each attribute works the same way as getting it.

Figure 3 shows the result of executing this command on a domain created with default parameters.

Listing 7 Source code for remote command named get-jms-details


@Service(name = "get-jms-details")
@Scoped(PerLookup.class)
@I18n("get-jms-details")                       #1
public class JMSDetails implements AdminCommand {

@Inject                                      #2
JmsService jmsService;
@Inject
JmsHost jmsHost;
@Inject
JmsAvailability jmsAvailability;
ActionReport report;
@Param(name = "detailsset", acceptableValues = "all,service,host,availability",
shortName = "DS", primary = true, optional = false) #3
@I18n("get-jms-details.details_set")                #4
String detailsSet;
final private static LocalStringManagerImpl localStrings = new LocalStringManagerImpl(JMSDetails.class);             #5

public void execute(AdminCommandContext context) {
try {
report = context.getActionReport();
StringBuffer reportBuf = new StringBuffer("");
if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("service")) {
reportBuf.append("Default Host: " + jmsService.getDefaultJmsHost() + "n");
reportBuf.append("MQ Service: " + jmsService.getMqService() + "n");
reportBuf.append("Reconnection Attempts: " + jmsService.getReconnectAttempts() + "n");
reportBuf.append("Startup Arguments: " + jmsService.getStartArgs() + "n");
} else if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("host")) {
reportBuf.append("Host Address: " + jmsHost.getHost() + "n");
reportBuf.append("Port Number: " + jmsHost.getPort() + "n");
} else if (detailsSet.equalsIgnoreCase("all") || detailsSet.equalsIgnoreCase("availability")) {
reportBuf.append("Is Availability Enabled: " + jmsAvailability.getAvailabilityEnabled() + "n");
reportBuf.append("Availability Storage Pool Name: " + jmsAvailability.getMqStorePoolName() + "n");
}
report.setMessage(reportBuf.toString());
report.setActionExitCode(ExitCode.SUCCESS);

} catch (Exception ex) {
report.setActionExitCode(ExitCode.FAILURE);
report.setMessage(localStrings.getLocalString("get-jms-details.failed", "Command failed to execute with the {0} as given parameter", detailsSet) + ". The exception message is:" +
ex.getMessage());  #6
context.getLogger().log(Level.SEVERE, "get-jms-details " + detailsSet + " failed", ex);

}

}
}


You are right, the code looks similar to the first sample, but it is more complex in its use of some unfamiliar classes, resource injection, and localization. We used resource injection in order to access GlassFish configurations; all configuration interfaces which we can obtain using injection are inside com.sun.enterprise.config.serverbeans. Now let's analyze the code where it is important or somehow unfamiliar. At #1 we are telling the framework that this command's short help message key in the localization file is get-jms-details. At #2 we are injecting some configuration beans into our variables. At #3 we ensure that the CLI framework will check the given value for the parameter to see whether it is one of the acceptable values or not. At #4 we determine the localization key for the parameter's help message. At #5 we initialize the string localization manager which fetches the appropriate string values from the bundle files. At #6 we show a localized error message using the string localization manager.

Listing 8 shows the LocalStrings.properties file content; we will shortly discuss where the file should be placed to allow the CLI framework to find it and load the necessary messages from it.

Listing 8 A snippet of the localization file content for the get-jms-details command

get-jms-details=Getting the Detailed information about JMS service and host in the target domain.                #1

get-jms-details.details_set=The set of details which is required, the value can be all, service, host, or availability.           #2

get-jms-details.failed=Command failed to execute with the {0} as given parameter.          #3

At #1 we include a description of the command; at #2 we include a description of the detailsset parameter using the assigned localization key. At #3 we include the localized version of the error message which we want to show when the command fails. As you can see, we used a placeholder to include the parameter value. Figure 4 shows the directory layout of the get-jms-details project. As you can see, we placed the LocalStrings.properties file next to the command source code.

Figure 4 Directory layout of the get-jms-details command project.

Developing more complex CLI commands follows the same rules as developing these simple ones. All you need to develop a new command is to start a new project using the given Maven POM files and explore further from there.

2 Administration console pluggability

You know that the administration console is a very easy to use channel for administrating a single instance or multiple clusters with tens of instances and a variety of managed resources. All of these functionalities are based on the navigation tree, tabs, and content pages which we use to perform our required tasks. You may wonder how we can extend these functionalities, without changing the main web application (known as admingui), to get new functionality with the same look and feel as the features already present. The answer lies again in the application server's overall design: HK2 and OSGI, along with some JSFTemplating and the Woodstock JSF component suite. We are going to write an administration console plug-in which lets us upload and install new OSGI bundles from remote locations using a browser.

2.1 Administration console architecture

Before we start discussing how we can extend the administration console in a non-intrusive way, we should learn more about the console architecture and how it really works. The administration console is a set of OSGI bundles, and each bundle includes some administration console specific artifacts in addition to the default OSGI manifest file and Maven related artifacts.

One of the most basic artifacts required by an administration console plug-in is a descriptor which defines where our administration console navigation items should appear; for example, whether we are going to add a new node to the navigation tree, or a new tab to a currently available tab set like the Application Server page tab set. The file which describes these navigation items and their places is named console-config.xml and should be placed inside the META-INF/admingui folder of the final OSGI bundle.

The second thing required to make our module recognizable by the HK2 kernel is an implementation of the HK2 administration console plug-in contract. So, the second basic item in an administration console plug-in is an HK2 service provider, which is nothing more than a typical Java class, annotated with the @Service annotation, that implements the org.glassfish.api.admingui.ConsoleProvider interface. The console provider interface has only one method, named getConfiguration. We need to implement this method if we want to use a non-standard place for the console-config.xml file.
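
For instance, a provider that keeps its descriptor in a non-standard place could look like the sketch below; the class name and resource path are hypothetical, and returning null instead (as listing 12 does later) simply tells the console to look in the default META-INF/admingui location:

import java.net.URL;
import org.glassfish.api.admingui.ConsoleProvider;
import org.jvnet.hk2.annotations.Service;

@Service(name = "my-console-plugin")
public class MyConsoleProvider implements ConsoleProvider {
    public URL getConfiguration() {
        // point the console to a custom descriptor location inside the bundle
        return getClass().getClassLoader().getResource("config/my-console-config.xml");
    }
}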

In addition to the basic requirements, some other artifacts should be present in order to see some effect in the administration console from our plug-in. These requirements include JSF fragment files which add the node, tab, content, and common task button to the administration console in the places described in the console-config.xml file. These JSF fragments use JSFTemplating project tags to define the nodes, tabs, and buttons which should appear at the integration points.

So far we have only included a node in the navigation tree, a tab in one of the tab sets, or a common task button or group in the common tasks page. What else do we need? We need some pages to show when administrators click a tree node or a common task button. So we should add JSF fragments, implemented using JSFTemplating tags, to show the real content to the administrators. For example, imagine that we want to write a module to deploy OSGI bundles; we will need to show a page which lets users select a file, along with a button which they can press to upload and install the bundle.

Now that we have shown the page which lets users select a file, we need some business logic, or handlers, which receive the uploaded file and write it to the correct place. As we use JSFTemplating for developing the pages, we will use JSFTemplating handlers to implement the business logic.

These are the functionality related artifacts, but functionality is not everything: we should provide localized administration pages for administrators who like to use their own language when dealing with administration tasks. We also need to provide some help files which guide administrators when they need to read a manual before digging into the action.

After we have all of this content and configuration in place, the administration console's add-on service queries for all providers which implement the ConsoleProvider interface; if the configuration file is not in its default place, the service obtains it by calling the getConfiguration method of the contract implementation. The administration console framework then uses the configuration file and the integration point template files, and later on the content template files and handlers.

2.2 JSFTemplating

JSFTemplating is a Sun sponsored project, hosted at https://jsftemplating.dev.java.net/, which provides templating for JSF. By utilizing JSFTemplating we can create web pages or components using template files. Inside template files we can use JSFTemplating and Facelets syntax; support for other syntaxes may be provided in the future. All syntaxes support all of JSFTemplating's features, such as accessing the page session, event triggering and event handler support, and dynamic reloading of page content. Let's analyze the template file shown in listing 9 to see what we are going to use when we develop administration console plug-ins.

Listing 9 A simple JSFTemplating template file


<!initPage
    setResourceBundle(key="i18n" bundle="mypackage.resource");
/>                                                         #1
<sun:page>
    <sun:html>
        <sun:head id="head" />
        <sun:body>
            #include /header.inc
            <sun:form id="form">
                "<p>#{i18n["welcome.msg"]}</p>             #2
                <sun:label value="#{anOut}">               #3
                    <!beforeEncode
                        GiA.getResponse(userInput="#{in}" response=>$pageSession{anOut});
                    />
                </sun:label>
                "<br /><br />
                <sun:textField id="in" value="#{pageSession.in}" />   #4
                "<br /><br />
                <sun:button text="$resource{i18n.button.text}" />
                <sun:hyperlink text="cheat">
                    <!command
                        setPageSessionAttribute(key="in" value="sds");   #5
                    />
                </sun:hyperlink>
            </sun:form>
            #include /footer.inc                           #6
        </sun:body>
    </sun:html>
</sun:page>

At #1 we determine the resource file which we want to use to get localized text content, and we define a key for accessing it. At #2 we are simply using our localization resource file to show a welcome message; as you have already noticed, it is Facelets syntax. At #3 we are using a handler for one of JSFTemplating's predefined events: we send the value of the in variable to our method and, after getting back the method's execution result, we set the result into a variable named anOut in the page scope, and we use the same variable to initialize the label component. You can see what a handler looks like in listing 10. The event that we used makes it possible to change the text before it gets displayed; there are several predefined events, some of which are listed in table 2. At #4 we are using the in variable's value to initialize the text field content. At #5 we are using a predefined handler to react to the hyperlink click. At #6 we are including another template file into this template file. All of the components used in this template file are equivalents of Project Woodstock components.

Table 2 JSFTemplating pre-defined events which can call a handler

  Event                       Description
  AfterCreate, BeforeCreate   Event handling commences after or before the component is created
  AfterEncode, BeforeEncode   Event handling commences after or before the content of a component gets displayed
  Command                     A command has been invoked, for example a link or a button was clicked
  InitPage                    Page initialization phase, when component values are being set

Listing 10 The GiA.getResponse handler which has been used in listing 9


@Handler(id = "GiA.getResponse",
input = {
@HandlerInput(name = "in", type = String.class)},
output = {
@HandlerOutput(name = "response", type = String.class)
})
public static void getResponse(HandlerContext handlerCtx) {
}

In listing 10 you can see the handler ID which we use in the JSF page instead of the method's fully qualified name. During compilation, every handler's ID, fully qualified name, and input and output parameters are extracted by apt into a file named Handler.map, which resides inside a folder named jsftemplating, placed inside the META-INF folder. The Handler.map structure is similar to a standard properties file containing variable=value pairs.

Now that we have some knowledge about the JSFTemplating project and the administration console architecture, we can create our plug-in which lets us install any GlassFish module, in the form of an OSGI bundle, using the administration console. After we have seen how a real plug-in is developed, we can proceed to learn more details about the integration points and about changing the administration console theme and brand.

2.3 OSGI bundle installer plug-in

Now we want to create a plug-in which will let us upload an OSGI module and install it into the application server by copying the file into the modules directory. First let's look at figure 5, which shows the file and directory layout of the plug-in source code; then we can discuss each artifact in detail.

In figure 5, at the top level we have our Maven build file with one source directory containing the standard Maven directory structure for JAR files, which is a main directory with java and resources directories inside it. The java directory is self describing, as it contains the handler and service provider implementation. The resources directory, again, is standard for the Maven JAR packager; Maven will copy the content of the resources folder directly into the root of the JAR file. The content of the resources folder is as follows:

  • glassfish: this folder contains a directory layout similar to the java folder; the only file stored in it is our localization resource file, named Strings.properties
  • images: graphic files which we want to use in our plug-in, for example in the navigation tree
  • js: a JavaScript file which contains some functions which we will use in the plug-in
  • META-INF: during development we will have only one folder, named admingui, inside the META-INF folder; this folder holds the console-config.xml. After building the project some other folders, including maven, jsftemplating, and inhabitants, will be created by Maven.
  • pages: this folder contains all of our JSF template files.

Figure 5 directory layout of the administration console plug-in to install OSGI bundles

Some folders, like META-INF, are mandatory, but we could put the other folders' content directly inside the resources folder; we would just end up with an unorganized structure. Now we can discuss each artifact in detail. You can see the content of the console-config.xml in listing 11.

Listing 11 Content of the console-config.xml file


<?xml version="1.0" encoding="UTF-8"?>
<console-config id="GiA-OSGI-Module-Install">                  #1
    <integration-point id="CommonTask-Install-OSGI"            #2
                       type="org.glassfish.admingui:commonTask"   #3
                       parentId="deployment"                   #4
                       priority="300"                          #5
                       content="pages/CommonTask.jsf" />       #6
    <integration-point id="Applications-OSGI"
                       type="org.glassfish.admingui:treeNode"  #7
                       priority="500"
                       parentId="applications"                 #8
                       content="pages/TreeNode.jsf" />         #9
</console-config>


This XML file describes which integration points we want to use and what priority our navigation items have compared to already existing ones. At #1 we give our plug-in a unique ID; later on we will access our web pages and resources using this ID. At #2 we define an integration point, again with a unique ID. At #3 we determine that this integration point is a common task which we want to add under the deployment tasks group #4. At #5 the priority number says that all integration points with a smaller priority number will be processed before this integration point, and therefore our common task button will be placed after them. At #6 we determine which template file should be processed to fill in the integration point placeholder; for this integration point the template file contains a button.

At #7 we determine that we want to add a tree node to the navigation tree by using org.glassfish.admingui:treeNode as the integration point type. At #8 we determine that our tree node should be placed under the applications node. At #9 we specify that the template page which will fill the placeholder is pages/TreeNode.jsf. To summarize, each console-config.xml file consists of several integration point description elements, and each integration point element has four or five attributes. These attributes are as follows:

id: An identifier for the integration point.

parentId: The ID of the integration point's parent. You can see a list of all parentId values in table 3.

type: The type of the integration point. You can see a list of all types in table 3.

content: The content for the integration point, typically a JavaServer Faces page.

priority: A numeric value that specifies the relative ordering of integration points for add-on components that specify the same parentId. This attribute is optional.

The second basic artifact which we discussed is the service provider, which is a simple Java class implementing the org.glassfish.api.admingui.ConsoleProvider interface. Listing 12 shows the InstallOSGIBundle class, which is the service provider for our plug-in.

Listing 12 Content of the InstallOSGIBundle.java, the plug-in service provider


@Service(name = "install-OSGI-bundle") #A
@Scoped(PerLookup.class)  #b
public class InstallOSGIBundle
implements ConsoleProvider {  #c

public URL getConfiguration() {
return null;
}
}

#A: Service has name

#B: Service is scoped

#C: Service implements the interface

Now that we have seen the basic artifacts, including the service provider implementation and how we define where our navigational items go in the administration console, we can look at the content of the template files referenced in the console-config.xml file. Listing 13 shows the TreeNode.jsf file content.

Listing 13 Content of the TreeNode.jsf template file which creates a node in the navigation tree

<!initPage
    setResourceBundle(key="GiA" bundle="glassfish.book.chapteradmingui.plugin.Strings")   #1
/>
<sun:treeNode id="GFiA_TreeNode"                               #2
    imageURL="resource/images/icon.png"                        #3
    text="$resource{GiA.OSGI.install.tree.title}"              #4
    url="GiA-OSGI-Module-Install/pages/installOSGI.jsf"        #5
    target="main"                                              #6
    expanded="$boolean{false}">                                #7
</sun:treeNode>

You can see the changes that this template causes in figure 6; the description of the code is as follows. At #1 we are using a JSFTemplating event to initialize the resource bundle. At #2 we declare a tree node with GFiA_TreeNode as its ID, which lets us access the node using JavaScript. At #3 we determine the icon which should appear next to the tree node; we use resource/images/icon.png as the path to the icon, and the resource prefix leads us to the root of the resources folder. At #4 we specify that the title of the tree node should be fetched from the resource bundle. At #5 we determine which page should be loaded when the administrator clicks the node: GiA-OSGI-Module-Install is our plug-in ID, and we can access whatever we have inside the resources folder by prefixing its path with GiA-OSGI-Module-Install (you can see the ID definition in listing 11); the web application path itself is obtained from facesContext.externalContext.requestContextPath, as listing 14 shows. At #6 we specify that the page should open in the main frame (the content frame), and at #7 we specify that the node should not be expanded by default; the effect of #7 is visible when we have some sub nodes.

Figure 6 The effects of the TreeNode.jsf template file on the administration console navigation tree

You can see that we have our own icon, which is loaded directly from the images folder residing inside the resources directory. The tree node text is fetched from the resource bundle file.

Listing 14 shows CommonTask.jsf, which causes the administration console service to place a button in the common tasks section under the deployment group. The result of this fragment on the common tasks page is shown in figure 7.

Listing 14 Content of the CommonTask.jsf file which is a template to fill the navigation item place


<!initPage
    setResourceBundle(key="GiA" bundle="glassfish.book.chapteradmingui.plugin.Strings")
/>
<sun:commonTask                                                #1
    text="$resource{GiA.OSGI.install.task.title}"
    toolTip="$resource{GiA.OSGI.install.tree.Tooltip}"
    onClick="admingui.nav.selectTreeNodeById('form:tree:application:GFiA_TreeNode');   #2
             parent.location='#{facesContext.externalContext.requestContextPath}/GiA-OSGI-Module-Install/pages/installOSGI.jsf'; return false;"   #3
>
</sun:commonTask>

At #1 we are using a commonTask component of the JSFTemplating framework; we use localized strings for its text and tool tip. At #2 we change the state of the tree node defined in the console-config.xml when the component receives a click; this ensures that the navigation tree shows where the user is. And at #3 we specify that pages/installOSGI.jsf should load when the button is clicked.

Figure 7 Effects of the listing 13 on the common task page of the administration console

We fetched the button title and its tool tip from the resource file which we included in the resource folder as described in the sample project folder layout in figure 5.

The next file which we will discuss is the actual operation file, which provides administrators with an interface to select, upload, and install an OSGI bundle. The file, as we already discussed, is named installOSGI.jsf, and listing 15 shows its content.

Listing 15 The installOSGI.jsf file content, this file provide interface for uploading the OSGI file


<sun:page id="install-osgi-bundle-page" >                      #1
    <!beforeCreate
        setResourceBundle(key="GiA" bundle="glassfish.book.chapteradmingui.plugin.Strings");
    />
    <sun:html>
        <sun:head id="propertyhead" title="$resource{GiA.OSGI.install.header.title}">   #2
            <sun:script url="/resource/js/glassfishUtils.js" />   #3
        </sun:head>
        <sun:body>
            <sun:form id="OSGI-Install-form" enctype="multipart/form-data">
                <sun:title id="title" title="$resource{GiA.OSGI.install.form.title}"
                           helpText="$resource{GiA.OSGI.install.form.help.title}">
                    <sun:upload id="fileupload" style="margin-left: 17pt" columns="$int{50}"
                                uploadedFile="#{requestScope.uploadedFile}">   #4
                    </sun:upload>
                    <sun:button id="uploadButton" text="$resource{GiA.OSGI.install.form.Install_button}"
                                onClick="javascript:
                                    return submitAndDisable(this, '$resource{GiA.OSGI.install.form.cancel_processing}');
                                ">                             #5
                        <!command
                            GiA.uploadFile(file="#{uploadedFile}");
                            redirect(page="#{request.contextPath}/GiA-OSGI-Module-Install/pages/OSGIModuleInstalled.jsf");
                        />                                     #6
                    </sun:button>
                </sun:title>
            </sun:form>
        </sun:body>
    </sun:html>
</sun:page>

I agree that the file content may look scary, but if you look more carefully you can see many familiar elements and patterns. Figure 8 shows this page in the administration console. In listing 15, at #1 we start a page component and use its beforeCreate event to load the localized resource bundle. At #2 we give the page a title which is loaded from the resource bundle. At #3 we load a JavaScript file which contains one helper JavaScript method; as you can see, we prefixed the file path with resource. At #4 we use a fileUpload component; the binary content of the uploaded file goes into uploadedFile in the request scope. At #5 we use the helper JavaScript method to submit the form and disable the Install button. At #6 we call a custom handler with GiA.uploadFile as its ID and pass the request scoped uploadedFile variable to the handler method; after the command executes, we redirect to a simple page which shows a message indicating that the module is installed.

Figure 8 The installOSGI.jsf page in the administration console

Now, let's see what is behind this page: how the handler works, how it finds the modules directory, and how the installation process commences. Listing 16 shows the InstallOSGIBundleHandlers class, which contains only one handler; it writes the uploaded file into the modules directory of the GlassFish installation which owns the running domain.

Listing 16 The content of InstallOSGIBundleHandlers, which contains the file uploading handler


public class InstallOSGIBundleHandlers {                       #A

    @Handler(id = "GiA.uploadFile",                            #1
        input = {
            @HandlerInput(name = "file", type = UploadedFile.class)})   #2
    public static void uploadFileToTempDir(HandlerContext handlerCtx) { #3
        try {
            UploadedFile uploadedFile = (UploadedFile) handlerCtx.getInputValue("file");  #4
            String name = uploadedFile.getOriginalName();
            String asInstallPath = AMXRoot.getInstance().getDomainRoot().getInstallDir(); #5
            String modulesDir = asInstallPath + "/modules";
            String serverSidePath = modulesDir + "/" + name;
            uploadedFile.write(new File(serverSidePath));
        } catch (Exception ex) {
            GuiUtil.handleException(handlerCtx, ex);           #6
        }
    }
}

#A Class Declaration

As you can see in the listing, it is really simple to write handlers and work with the GlassFish APIs. At #1 we define a handler; the handler ID element allows us to use it in the template file. At #2 we define the handler input parameter named file; the parameter type is com.sun.webui.jsf.model.UploadedFile. At #3 we implement the handler method. At #4 we extract the file content from the context. At #5 we use an AMX utility class to find the installation path of the GlassFish instance which owns the currently running domain; there are many utility classes in the org.glassfish.admingui.common.util package. At #6 we handle any possible exception the GlassFish way. We can also define output parameters for our handler and, after the command executes, decide where to redirect the user based on the value of the output parameter.
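
As a sketch of that last idea, a handler can declare an output parameter and set it through the handler context, and the template can then branch on the value. The handler ID and the validation logic below are hypothetical, reusing the annotation and context types from listings 10 and 16:

@Handler(id = "GiA.checkBundle",
    input = {
        @HandlerInput(name = "file", type = UploadedFile.class)},
    output = {
        @HandlerOutput(name = "valid", type = Boolean.class)})
public static void checkBundle(HandlerContext handlerCtx) {
    UploadedFile file = (UploadedFile) handlerCtx.getInputValue("file");
    // a very naive check: an OSGI bundle is at least a JAR file
    boolean valid = file != null && file.getOriginalName().endsWith(".jar");
    handlerCtx.setOutputValue("valid", valid);
}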

There are three other files which we will briefly discuss. The first, shown in listing 17, is the JavaScript file used in installOSGI.jsf and placed inside the js directory; we already mentioned it in the discussion of listing 15.

Listing 17 Content of the glassfishUtils.js file


function submitAndDisable(button, msg, target) {
    button.className = 'Btn1Dis';
    button.disabled = true;
    button.form.action += "?" + button.name + "=" + encodeURI(button.value);
    button.value = msg;
    if (target) {
        button.form.target = target;
    }
    button.form.submit();
    return true;
}

The function receives three parameters: the button which it will disable, the message which it will set as the button title when the button is disabled, and finally the target that the button's owner form will submit to.

Listing 18 shows the next file we should take a look at. Strings.properties is our localization file, located deep in the resources directory under the glassfish.book.chapteradmingui.plugin package. Although it is a standard localization file, it does not hurt to take a look and remember the first time we faced resource bundle files.

Listing 18 Content of the Strings.properties file


OSGI.install.header.title=Install OSGI bundle by uploading the file into the server.
OSGI.install.form.title=Install OSGI bundle by uploading the file into the server.
OSGI.install.form.help.title=Browse and select your OSGI bundle to install it in the server's modules directory
OSGI.install.form.button_processing=Processing
OSGI.install.form.Install_button=Install
OSGI.install.tree.title=Install OSGI Bundle
OSGI.install.task.title=Install OSGI Bundle
OSGI.install.tree.Tooltip=Simply install OSGI bundle by uploading the bundle from the administration console.

And finally our Maven build file; as we discussed at the beginning of the section, it is placed in the top level directory of our project. If you want to review the directory layout, take a look at figure 5.

Listing 19 The plug-in project Maven build file, pom.xml


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.glassfish.admingui</groupId>                 #A
        <artifactId>admingui</artifactId>
        <version>3.0-SNAPSHOT</version>
    </parent>
    <artifactId>glassfish.book.chapterAdminConsolePlugin</artifactId>
    <packaging>hk2-jar</packaging>                                #B
    <name>OSGI-bundle-Installation-Adming-Plugin</name>
    <description>GlassFish in Action, Chapter 13, Admin console plugin to install OSGI bundle</description>
    <dependencies>
        <dependency>
            <groupId>org.glassfish.common</groupId>
            <artifactId>glassfish-api</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.admingui</groupId>
            <artifactId>console-common</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.admingui</groupId>
            <artifactId>console-plugin-service</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.common</groupId>
            <artifactId>amx-api</artifactId>
            <version>${project.version}</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.deployment</groupId>
            <artifactId>deployment-client</artifactId>
            <version>${project.version}</version>
        </dependency>
    </dependencies>
</project>

#A Project Group ID

#B HK2 Special Packaging

The pom.xml file, which is available for download at www.manning.com/kalali, is more complete, with comments explaining other elements, but listing 19's content is sufficient for building the plug-in.

Now that you have a solid understanding of how an administration console plug-in works, we can extend our discussion to the other integration points and their corresponding JSFTemplating components.

Table 3 shows the integration points and the related JSFTemplating components, along with some explanation about each item. As you can see, there are several types of integration points with different attributes and navigational places.

Table 3 List of all integration points along with the related JSFTemplating component

  Integration point*  Values for parentId attribute                  Corresponding JSFTemplating component
  treeNode            tree, applicationServer, applications,         sun:treeNode, equivalent of the Project
                      webApplications, resources, configuration,     Woodstock tag webuijsf:treeNode
                      node, webContainer, httpService
  serverInstTab       serverInstTab; adds tabs and sub tabs to       sun:tab, equivalent of the Project
                      the Application Server page                    Woodstock tag webuijsf:tab
  commonTask          deployment, monitoring, updatecenter,          sun:commonTask, equivalent of the Project
                      documentation                                  Woodstock tag webuijsf:commonTask
  commonTask          commonTasksSection; allows us to create        sun:commonTasksGroup, equivalent of the Project
                      our own common tasks group                     Woodstock tag webuijsf:commonTasksGroup
  customtheme         masthead, loginimage, loginform,               This integration point lets us change the login
                      versioninfo                                    form, header, and version information to
                                                                     rebrand the application server

* All of the first column values have the org.glassfish.admingui: prefix

Now you are equipped with enough knowledge to develop extensions for the GlassFish administration console, either to facilitate your own daily tasks or to provide your plug-ins to other administrators who are interested in the same administration tasks you perform with them.

3 Summary

Understanding an administration console from the end user's point of view is one thing, and understanding it from a developer's perspective is another: with the first you can administrate the application server, and with the second you can manage, enhance, and extend the administration console itself. In this article we learned how to develop new asadmin (CLI) commands to ease our daily tasks or extend the GlassFish application server as a third party partner. We also learned how to develop and extend the web based administration console, which led us to review the JSFTemplating project and briefly point at the AMX helper classes.

GlassFish Modularity System, How to Extend GlassFish CLI and Web Administration Console (Part 1: The Architecture)

Modularity is an essential design and implementation consideration which every software architect and designer should have in mind in order to get software that is easy to develop, maintain, and extend.

GlassFish is an application server which benefits highly from a modularity system to provide different levels of functionality for different deployments and case studies. GlassFish fully supports the Java EE profiles, so it provides a lot of features suiting different case studies and types of use cases. Every deployment requires only a subset of these functionalities to be provided, administrated, and maintained, so GlassFish users, application developers, and the GlassFish development team can all benefit highly from modularity to reduce the overall costs of development, maintenance, administration, and management for each deployment type that GlassFish supports.

Looking from the functionality point of view, GlassFish provides extension points for further extending the administration console, the CLI, the containers, and the monitoring capabilities of the application server in a non-intrusive and modular way.

From the development point of view, GlassFish uses OSGI for module lifecycle management, while it uses HK2 for service layer functionalities like dependency injection, service instance management, service load and unload sequences, and so on.

Looking from the integration point of view, the GlassFish modular architecture provides many benefits at different levels of integration. In the bigger picture, GlassFish can be deployed into a larger OSGI container which, for example, may be running your enterprise application. GlassFish also has many advanced subsystems, like its monitoring infrastructure, which can be used in any enterprise level application for extendable monitoring; thanks to GlassFish modularity these capabilities can easily be extracted from GlassFish without any class path dependency headaches or versioning conflicts.

1 GlassFish Modularity

GlassFish supports modularity by providing extension points and the required SPIs to add new functionality to the administration console, the CLI, and monitoring, plus the possibility to develop new containers to host new types of applications.

Not only does the modular architecture provide easy extendibility, it also provides better performance and much faster start and stop sequences, because modules are loaded as soon as they get referenced by another module and not during startup. For example, the Ruby container will not start unless a Ruby on Rails application gets deployed into the application server.

The GlassFish module system uses a combination of two module systems: the well known OSGI, and HK2 as an early implementation of the Java module system (JSR-277). OSGI provides the module and module lifecycle layer functionalities, letting the application server fit into bigger deployments which use an OSGI framework. It also makes it possible to benefit from the well designed and tested modularity and lifecycle management techniques included in OSGI. HK2 provides the service layer functionality and aims for a smooth migration to the Java SE 7 module system (JSR-277). To make it simple: HK2 is the framework which we use to develop the Java classes inside our modules that perform the main tasks.

We have talked about GlassFish modules, but these modules need another entity to load and unload them and feed them with the responsibilities they are designed to accept. The entity which takes care of loading and unloading modules is the GlassFish kernel, which is itself an OSGI bundle implementing the kernel services.

When GlassFish starts, either normally using its internal OSGI framework or using an external OSGI framework, it first loads the GlassFish kernel, which checks for all available services by looking for implementations of the different contracts in the class path. The kernel only loads services which are necessary during startup, or services which are referenced by another service as one of its dependencies.

Container services only start if an application gets deployed into the container; administration console extensions load as soon as the administration console loads; CLI commands load on demand, and the system does not preload all commands when asadmin starts; monitoring modules start when a client binds to them.

The GlassFish module system does not only apply to application server specific capabilities like administration console extensions; it also applies to the Java EE specification implementations. GlassFish uses different OSGI modules for different Java EE specifications, like Servlet 3 and JSF 2. These specifications are bundled using OSGI to make it easier to update the system when a new release of each specification is available.

Different GlassFish modules fit into different categories around the GlassFish kernel. These categories result from modules implementing the GlassFish provided interfaces for the different extendibility points. We will discuss the extension points in more detail in section 3.

New features added to GlassFish using the module system do not differ from GlassFish out of the box features, because new modules fit completely into the overall set of provided features.

All extendibility points in GlassFish are designed to ensure that adding new types of containers, hosting programming languages and frameworks other than Java, is easy to cope with. Every container that is available for the GlassFish application server adds its own set of CLI commands, administration console web pages and navigation nodes, and its own monitoring modules to measure its required metrics.

2 Introducing OSGI 

OSGI is an extensive framework introduced to add modularity capabilities to Java before Java SE 7 ships. Essentially OSGI is a composition of several layers of functionality building on top of each other, freeing Java developers from the headache of class loader complexity, dynamic pluggability, library versioning, code visibility, and dependency management, and providing a publish, find and bind service model to decouple bundles. Layering OSGI gives us something similar to figure 1.

Figure 1 OSGI layers; GlassFish uses all except the service layer. An OSGI based application may utilize all or some of the layers, in addition to the OS and Java platform that OSGI runs on.

OSGI runs on hand held devices with CDC profile support as well as Java SE, so a variety of operating systems and devices can benefit from OSGI. Over this minimal platform, the OSGI layers stack one over the other, and based on the principles of layered architecture each layer can only see the layer below it, not the layer above it.

The bundle or module layer is closest to the Java SE platform. OSGI bundles are JAR files with some additional metadata; this metadata generally defines the module's required and provided interfaces, and it does so in a very effective manner. OSGI bundles provide 7 definitions to help with dependencies and versioning. These definitions include:

       Multi-version support: several versions of the same bundle can exist in the framework, and dependent bundles can use the versions that satisfy their requirements.

       Bundle fragments: allows bundle content to be extended; bundles can merge to form one bundle and expose unified export and import attributes.

       Bundle dependencies: allows for tight coupling of bundles when required, importing everything that another, specific bundle exports; also allows re-exporting and split packages.

       Explicit code boundaries and dependencies: bundles can explicitly hide their internal packages or declare dependencies on external packages.

       Arbitrary export/import attributes for more control: exporters can attach arbitrary attributes to their exports; importers can match against these attributes when they look for the required package and before importing it.

       Sophisticated class space consistency model.

       Package filtering for fine-grained class visibility: exporters may declare that certain classes are included in or excluded from the exported package.

All of these definitions are included in the MANIFEST.MF file, an easy to read and create text file located inside the META-INF directory of the JAR file. Listing 4 shows a sample MANIFEST.MF file.

Listing 4 A sample MANIFEST.MF file which shows some OSGI module definitions


Bundle-ManifestVersion: 2                                       #1
Bundle-SymbolicName: gfbook.samples.chapter12                    #2
Bundle-Version: 1.1.0                                            #2
Bundle-Activator: gfbook.samples.chapterActivator             #3 
Bundle-ClassPath: .,gfbook/lib/juice.jar                          #4
Bundle-NativeCode:                                                #5
serialport.so; osname=Linux; processor=x86,                        #5
serialport.dll; osname=Windows 98; processor=x86                   #5
Import-Package: gfbook.samples.chapter7; version="[1.0.0,1.1.0)";  #6
resolution:="optional"                                             #6
Export-Package:                                                    #7
gfbook.samples.chapterservices.monitoring; version=1.1;          #7
vendor="gfbook.samples:"; exclude:="*Impl",                         #7
gfbook.samples.chapterservices.logging; version=1.1;             #8
uses:="samples.chapter7.services"                                   #8

GlassFish uses OSGI Revision 4, and the version number at #1 indicates it. At #2 we uniquely identify this bundle by a name and its version together. At #3 we designate a class that implements the BundleActivator interface so the bundle gets notified when it starts or stops; it lets developers perform initialization and sanity checks when the bundle starts, or perform cleanup when the bundle stops. At #4 we define our bundle's internal class path; a bundle can have JAR files inside it, and the bundle class loader checks this class path, in the order given in the Bundle-ClassPath header, when it looks for a class it requires. At #5 we define the bundle's native dependencies per operating system.

At #6 we define an optional dependency on some packages; as you can see, our bundle can use any version inside the given version range. At #7 we describe the exported packages along with the version and the excluded classes, using wildcard notation. At #8 we declare that if any bundle imports gfbook.samples.chapterservices.logging from our bundle and the importer also needs samples.chapter7.services, it should satisfy that import from the same bundle our package uses.
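As a hedged sketch of the Bundle-Activator mentioned at #3, the following class mirrors the manifest entry in listing 4 (the class name is copied from the manifest; the log messages are illustrative):

package gfbook.samples;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class chapterActivator implements BundleActivator {
    // Called by the framework when the bundle starts; a good place for
    // initialization and sanity checks.
    public void start(BundleContext context) {
        System.out.println("Starting " + context.getBundle().getSymbolicName());
    }
    // Called by the framework when the bundle stops; a good place for cleanup.
    public void stop(BundleContext context) {
        System.out.println("Stopping " + context.getBundle().getSymbolicName());
    }
}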

The next layer in the OSGI modularity system is the Life Cycle layer, which makes it possible for bundles to be installed, dependency checked, started, activated, stopped, and uninstalled dynamically. It relies on the module layer for class path related tasks like class locating and class loading, and it adds APIs for managing the modules at run time; a sketch of these APIs follows the state list below. Figure 2 shows the functionality of this layer.

Figure 2 OSGI life cycle layer states. A module must be stopped and have no dependents before it can be uninstalled.

     Installed: the bundle has been installed, but not all bundle dependencies have been satisfied; for example, one of its imported packages is not exported by any of the currently resolved bundles.

     Resolved: the bundle is installed and all of its dependencies are satisfied; this is an in-transit state and bundles do not stay in it.

     Starting: another temporary state that a bundle goes through when it is being activated.

     Active: the bundle is active and running.

     Stopping: a temporary state that a bundle goes through when it is being stopped.

     Uninstalled: the bundle is not in the framework anymore.
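As a hedged sketch of the life cycle APIs mentioned above (the bundle location URL is illustrative), walking a bundle through these states with the standard OSGI API could look like this:

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class LifeCycleDemo {
    public static void walkStates(BundleContext context) throws BundleException {
        // Installed: the framework now knows the bundle
        Bundle bundle = context.installBundle("file:/tmp/gfbook-sample.jar");
        // start() resolves the bundle (Resolved), passes through Starting
        // and leaves the bundle Active
        bundle.start();
        // stop() passes through Stopping and returns the bundle to Resolved
        bundle.stop();
        // Uninstalled: the bundle is removed from the framework
        bundle.uninstall();
    }
}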

The next layer in the OSGI model is the Service layer, which provides several functionalities for the deployed bundles. First, it offers some services common to all applications, such as logging and security. Second, it provides a set of APIs that bundle developers can use to develop OSGI friendly services, publish them in the OSGI framework service registry, search the registry for services their bundles need, and bind to those services. This layer is not used by GlassFish.

All that GlassFish takes from OSGI is its bundle and life cycle management layers. Adhering to these two layers' guidelines lets GlassFish fit into bigger OSGI compliant systems and host new modules based on the OSGI modularity system. Now that you know how the GlassFish modularity system uses OSGI, you are ready to see what is inside these modules and what GlassFish proposes for developing the module functionality.

3 Introducing HK2

We said that GlassFish uses HK2 for the service layer functionality. The first question you may ask is why the OSGI service model is not utilized in GlassFish; the answer is that HK2 is more aligned with the Java Module System (JSR-277), which is going to be part of the Java runtime in Java SE 7, so eventually replacing HK2 with the Java SE modularity system will be easier than replacing the OSGI service layer with the Java SE service model.

3.1 What HK2 is

GlassFish, as a modular application system, has a kernel into which all modules plug to provide some functionality; these pluggable pieces are called services. In order for the kernel to place the services in the right category and place, services are developed based on contracts provided by the GlassFish application server designers and developers.

A contract describes an extension point's building blocks, and a service is an implementation that fits those particular building blocks. HK2 is a general purpose modularity system and can be used in any application that requires a modular design.

An HK2 service adheres to a contract, which means nothing more than implementing an interface whose structure the HK2 kernel knows and knows how to use. But interfaces alone cannot provide all the information a module may require to identify itself to the kernel, so HK2 provides a set of annotations that let developers attach metadata to a component, making it possible for the kernel to place it correctly.
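As a minimal sketch, a contract is just an annotated interface; the interface name and method here are illustrative:

import org.jvnet.hk2.annotations.Contract;

// @Contract marks this interface as an extension point whose
// implementations (the services) the kernel can discover.
@Contract
public interface ReportGenerator {
    void generate(String reportName);
}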

A system that calls for modularity usually has complex requirements that lead to a complex architecture, and modularity is one of many techniques that help with the design and development of complex software.

In software as big as GlassFish, objects need other objects during construction; they need configuration during construction, or afterwards during a method execution; objects may need to make some of their internal objects available to other objects; and objects need to be categorized into different scopes to prevent interference between objects that need scoped information and objects that run without storing any state (stateless).

First things first: after we develop a service by implementing a contract, which is an interface, we need a way to inform HK2 that our implementation is a service, that it has a name, and that it has a scope. To do this we use the @Service annotation; listing 1 shows an example of using the @Service annotation.

Listing 1 HK2 @Service annotation


@Service (name="restart-domain",                                  #1
scope=org.jvnet.hk2.component.PerLookup.class,                                    
factory=gfbook.chapterCommandFactory)
public class RestartDomain implements AdminCommand {                     #2
...
}

Very simple: an annotated POJO provides the system with all the information required to accept it as a CLI command. At #1 we define the name of the service we want to develop, and later we define the scope of the component. At #2 we implement the contract interface to make it possible for the kernel to use our service in the way that it knows. The scope can be one of the built-in scopes, or any custom scope implemented against the org.jvnet.hk2.component.Scope contract; custom scopes should take care of their objects' lifecycle. The built-in scopes are:

 

     Per lookup scope: components in this scope get a new instance every time someone asks for them; the implementation class is org.jvnet.hk2.component.PerLookup.

     Singleton scope: objects in this scope stay alive until the session ends, and any subsequent request for such an object returns the already available instance. The implementation class is org.jvnet.hk2.component.Singleton.

The @Service annotation accepts several optional elements, as follows. These elements exist for the sake of more readable, maintainable and customizable code; otherwise they are not required, and HK2 determines a default value for each element.

       The name element defines the name of the service. The default value is an empty string.

     The scope element defines the scope to which this service implementation is tied. The default value is org.jvnet.hk2.component.PerLookup.class.

     The factory element defines the factory class for the service implementation, if the service is created by a factory class rather than by calling the default constructor. If this element is specified, the factory component is activated, and Factory.getObject is used instead of the default constructor.
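As a hedged sketch of the factory element (the factory class name is illustrative, and we assume HK2's Factory contract, org.jvnet.hk2.component.Factory, with its single getObject method):

import org.jvnet.hk2.annotations.Service;
import org.jvnet.hk2.component.Factory;

@Service
public class RestartDomainFactory implements Factory {
    // Called by HK2 instead of the service's default constructor when the
    // factory element of @Service names this class.
    public Object getObject() {
        return new RestartDomain();
    }
}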

 

What if our service requires access to another service or another object? How will we decide whether to create a new object or use an already available one? Should we, as service developers, really get involved with a complex service or object initialization task?

First let's look at instantiation. HK2 is an IOC (inversion of control) implementation, so no object instantiation needs to go through the new operator; instead you ask the IOC container to return an instance of the required component. In HK2, the ComponentManager class is in charge of providing a registry of instantiated components, and it also bears the responsibility of initializing components that are not yet initialized when a client requests them. Clients ask the ComponentManager for a component by calling getComponent(Class<T> contract), and the ComponentManager ensures that it either returns a correct instance, based on the available instances and their scopes, or creates a new instance by traversing the entire object graph required to initialize the component. The ComponentManager methods are thread safe and maintain no session information between consecutive calls.

Component developers need some control to ensure all required resources and configuration are ready after construction and before any other operation; they may also need to perform some housekeeping and cleanup before the object is destroyed. So two interfaces are provided that components can implement to be notified upon construction and destruction. These two interfaces are org.jvnet.hk2.component.PostConstruct and org.jvnet.hk2.component.PreDestroy; each has a single method, postConstruct and preDestroy respectively. The ComponentManager is also in charge of injecting resources into a component's properties during construction, and of extracting resources from components.
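A minimal sketch of these callbacks, assuming an illustrative service name and log messages:

import org.jvnet.hk2.annotations.Service;
import org.jvnet.hk2.component.PostConstruct;
import org.jvnet.hk2.component.PreDestroy;

@Service(name = "audit-service")
public class AuditService implements PostConstruct, PreDestroy {
    // Invoked by the ComponentManager after construction and injection;
    // open files, verify configuration, etc. here.
    public void postConstruct() {
        System.out.println("audit-service is ready");
    }
    // Invoked before the component is released; close resources here.
    public void preDestroy() {
        System.out.println("audit-service is going away");
    }
}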

First let's see resource extraction, which may be unfamiliar. Some components create an internal object that carries information required by other components in the system, or by similar components that are going to be constructed. These internal objects can be marked with the @Extract annotation, located at org.jvnet.hk2.annotations.Extract. The annotation marks the property to be extracted and placed in a predefined scope, or in the default scope, which is the per lookup scope.

After an object is extracted, HK2 places it inside a container of type org.jvnet.hk2.component.Habitat; the container will provide the object to anything that needs one of its inhabitants. Listing 2 shows how a property of an object can be marked for extraction.

Listing 2 The HK2 extraction sample


@Service (name="restart-domain",                                 scope=org.jvnet.hk2.component.PerLookup.class,                                    
factory=gfbook.chapterCommandFactory)
public class RestartDomain implements AdminCommand {                   
@Extract                                                                                                                  #1
LoggerService logger;
@Extract                                                                                                  #2
public CounterService getCounterService() {...}
}
 

At #1 we mark a property for extraction, and at #2 we extract the object from a getter method; remember that the @Extract annotation works both at the property and the getter level. Now let's see how these inhabitant objects can be used by another service that needs these two functionalities.

Listing 3 The HK2 injection sample


@Service (name="restart-domain",                                 scope=org.jvnet.hk2.component.PerLookup.class,                                    
factory=gfbook.chapterCommandFactory)
public class RestartDomain implements AdminCommand {                   
@Inject protected Habitat habitat;                                                                      #1
public void performAudition() {
LoggerService logger = habitat.getComponent(LoggerService.class);         #2
CounterService counter = habitat.getComponent(CounterService.class);       #2
}
}

At #1 we inject the container into a property of the object; you see, this is the world of IOC, and we have no explicit object creation. At #2 we ask the habitat to provide us with instances of LoggerService and CounterService.

Now that you have the required understanding of the OSGI bundle layer and HK2 basics, we can get our hands dirty writing some GlassFish modules to see how these two are used to build GlassFish modules in this article and the upcoming one.

 

4 GlassFish Containers and Container extendibility

GlassFish is a multi-purpose application server which can host different kinds of applications developed using a variety of programming languages like Ruby, Groovy and PHP, in addition to its support for Java EE platform based applications.

The concept of a container makes it possible for the GlassFish application server to host new types of applications, developed either in the Java programming language or in other languages. Imagine that you want to develop a new container that hosts a brand new type of application. This type of application lets GlassFish users deploy a new kind of archived package, containing descriptor files along with the classes required for performing scheduled tasks in a container managed way. The new application type is developed in Java, so it can access all GlassFish application server content, and in the same way it can invoke operating system scripts, such as shell scripts, to perform operating system related tasks on schedule.

GlassFish may have tens of different container types for different types of applications, so, before anything else, each container must have a name to make it possible for the GlassFish kernel to access it easily. In the next step, GlassFish needs a mechanism to determine which container an application should be deployed to. Then there must be a mechanism to determine which URL patterns should be passed to a specific container for further processing. And finally, each application has a set of classes that the container must load to process incoming requests, so a class loader with full understanding of the application directory structure is required to load the application classes. Figure 3 shows the procedure of application deployment, from the moment a deployment command is received by GlassFish up to the stage where the container becomes available to the GlassFish deployment manager.

Figure 3 Application deployment activities in GlassFish, from the early start until GlassFish gets the designated container name.


Figure 4 shows the activities that happen after the container becomes available to the GlassFish application deployment manager, up to the point where GlassFish gets a class loader for the deployed application.

Figure 4 Activities that happen after GlassFish gets the specific container name

The GlassFish container SPI provides several interfaces that container developers can implement to develop their containers. These interfaces provide the necessary functionality for each activity in figure 3. First, let's see what is required for the first step, which is checking the deployment artifact; the artifact can be a compressed file or a directory.

When an artifact is passed to GlassFish for deployment, the artifact first goes through a check to see which type of archive it is and which container can host the application. So we need to be able to examine an archive and determine whether it is compatible with our container or not. To do this we implement an interface with the fully qualified name org.glassfish.api.container.Sniffer; GlassFish creates a list of all Sniffer implementations and passes the archive to each of them to see whether a Sniffer recognizes the archive or not. Listing 4 shows a dummy Sniffer implementation.

 

Listing 4 A dummy Sniffer implementation


@Service(name = "TextApplicationSniffer")
@Scoped(Singleton.class)
public interface TextApplicationSniffer implements Sniffer{
public boolean handles(ReadableArchive source, ClassLoader loader){} #1
public Class<? extends Annotation>[] getAnnotationTypes(){} #2
public String[] getURLPatterns(){}                                  #3
public String getModuleType(){}                                     #4
public Module[] setup(String containerHome, Logger logger) throws IOException(){}                                                      #9
public void tearDown(){}                                           #5
public String[] getContainersNames(){}                             #6
   public boolean isUserVisible(){}                                   #7
public Map<String,String> getDeploymentConfigurations(final          ReadableArchive source) throws IOException{}                           #8
}

An implementation of the Sniffer interface returns true or false depending on whether it understands the archive or not (#1). ReadableArchive implementations give us a virtual representation of the archive content: they let us check for specific files or file sizes, check for the presence of another archive inside the current one, and so on.

If we intend to perform a more complex analysis of the archive, for example by checking for the presence of a specific annotation, we should return the list of annotations in getAnnotationTypes (#2). GlassFish scans the archive for the presence of one of these annotations and continues with this Sniffer if a class is annotated by one of them.

If this is the Sniffer that understands the archive, GlassFish then asks for the URL patterns for which it should call the container's service method when a request matches (#3). At #4 we define the module name, which can be a simple string.

At #5 we just remove everything related to this container from the module system. At #6 we should return a list of all containers that this specific Sniffer can enable.

At #7 we determine whether the container is visible to users when they deploy an application into it, and at #8 we return all interesting configurations of the archive, which can be used during deployment or at runtime of the application.

The most important method in the Sniffer interface is the setup method, which will set up the container if it is not already installed (#9). As you have already noticed, the method receives the path to the container home directory, so if the container is not already installed the Sniffer can download the required bundles from the repositories and configure the container with default configurations. The method returns a list of all modules required by the container; using this list, GlassFish prevents removal of any of these modules and checks for the container class to resume the operation.

The org.glassfish.api.container.Container interface is the GlassFish container contract that each container should implement; the Container interface is the entry point for GlassFish container extendibility. Listing 5 shows a dummy Container implementation. The Container class does not need to perform anything specific; all tasks are delegated to another class, the Deployer, which performs the real work.

Listing 5 A dummy container implementation


@Service(name = "gfbook.chaptercontainer.TextApplicationContainer")       #1
public class TextApplicationContainer implements Container {                 #2
  public Class<? extends Deployer> getDeployer() {              #3
      return gfbook.chaptercontainer.TextApplicationDeployer.class;#3
  }
  public String getName() {
      return " Text Container";                   #4
  }

At #1 we define a unique name for our container; we can use the fully qualified class name to ensure that it is unique. At #2 we implement the container contract to let the GlassFish kernel use our container in the way that it knows. At #3 we return the class of our Deployer interface implementation, which will take care of tasks like loading, unloading, preparing, and other application related tasks. At #4 we return a human readable name for our container.

You can see that the container has nothing special, except that it points to an implementation of org.glassfish.api.deployment.Deployer that the GlassFish kernel will use to delegate the deployment tasks. Listing 6 shows snippet code from the text application container's deployer implementation.

Listing 6 Deployer implementation of the TextApplication container


@Service
public class TextApplicationDeployer implements Deployer<TextContainer, TextApplication> {                                                     #1
    public boolean prepare(DeploymentContext context) {...}                   #2
    public <T> T loadMetaData(Class<T> type, DeploymentContext context) {...} #3
    public MetaData getMetaData() {...}                                       #4
    public TextApplication load(TextContainer container, DeploymentContext context) {...}                                                     #5
    public void unload(TextApplication container, DeploymentContext context) {...}                                                            #6
    public void clean(DeploymentContext context) {...}                        #7
}

The methods shown in the code snippet are just the mandatory methods of the Deployer contract; our implementation may have some helper methods too. At #1 we define our class as an HK2 service that implements the Deployer interface.

At #2 we prepare the application for deployment; this can mean unzipping some archive files or pre-processing the application deployment descriptors to extract information required for the deployment task. Several methods of the Deployer interface accept an org.glassfish.api.deployment.DeploymentContext as one of their parameters; this interface provides all contextual information about the application currently being processed, for example the application content in terms of files and directories, access to the application class loader, all parameters passed to the deploy command, and so on.

At #3 we return metadata associated with our application; this metadata can contain information like the application name and so on. At #4 we return an instance of org.glassfish.api.deployment.MetaData, which contains the special requirements of this Deployer instance; these requirements can be loading a set of classes before loading the application using a separate class loader, or a set of metadata that our Deployer instance will provide after successfully loading the application. We will discuss MetaData in more detail in the next step. At #5 we load the application and return an implementation of org.glassfish.api.deployment.ApplicationContainer; the ApplicationContainer manages the lifecycle of the application and its execution environment. At #6 we unload the application from the container, which means that our application will no longer be available in the container. At #7 we clean up any temporary files created during the prepare method execution.

Now let's look at the TextApplication class, which implements application management tasks like starting and stopping the application. A snippet of this class looks similar to listing 7.

 

Listing 7 Snippet code for TextApplication class


public class TextApplication extends TextApplicationAdapter
        implements ApplicationContainer {                         #1
    public boolean start(ApplicationContext startupContext) {...}
    public boolean stop(ApplicationContext stopContext) {...}
    public boolean suspend() {...}
    public boolean resume() {...}
    public ClassLoader getClassLoader() {...}
    public Object getDescriptor() {...}                           #2
}

Most of the code should be self explanatory, except that at #2 we return the application descriptor to the GlassFish deployment layer, and at #1 we do a lot of work, including interfacing our container with Grizzly (the web layer of GlassFish).

You remember that in listing 6 we returned an instance of TextApplication when the container called the load method of the Deployer object; this object is responsible for the application lifecycle, including what you can see in listing 7, and it is also responsible for interfacing our container with the Grizzly layer. Extending com.sun.grizzly.tcp.http11.GrizzlyAdapter helps with developing custom containers that need to interact with the HTTP layer of GlassFish. GrizzlyAdapter has one abstract method, void service(GrizzlyRequest request, GrizzlyResponse response), which we touched on when discussing the Sniffer implementation; it is called whenever a request hits a URL matching one of the container's deployed applications. A minimal sketch of the TextApplicationAdapter abstract class follows.
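The sketch below assumes the Grizzly classes named above and servlet-like getWriter and setStatus methods on GrizzlyResponse; the plain-text payload is illustrative, and a real adapter would dispatch the request to the deployed text application instead.

import com.sun.grizzly.tcp.http11.GrizzlyAdapter;
import com.sun.grizzly.tcp.http11.GrizzlyRequest;
import com.sun.grizzly.tcp.http11.GrizzlyResponse;

public abstract class TextApplicationAdapter extends GrizzlyAdapter {

    // Called by the Grizzly layer whenever a request matches one of the
    // URL patterns our Sniffer returned from getURLPatterns().
    public void service(GrizzlyRequest request, GrizzlyResponse response) {
        try {
            response.setContentType("text/plain");
            // Illustrative payload; a real adapter would locate and run the
            // deployed text application here.
            response.getWriter().write("Handled by the text container");
        } catch (java.io.IOException e) {
            response.setStatus(500);
        }
    }
}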

 

 

5 GlassFish Update Center

 

The GlassFish update center is one of its outstanding features; it helps developers and system administrators spend less time updating and distributing applications on multiple GlassFish instances. In this section we will dig into how the update center works, and how we can develop new packages for it.

The update center name may mislead you into thinking it only works for updating the system, but in reality it has more functionality than its name reveals. Figure 5 shows the update center Swing client running on a Linux machine. On your left hand side you will see a navigation tree which lets you select what kind of information you want to view regarding the update center functionalities. These tasks include:

       Available Updates: The Available Updates node includes information about updates available for installation. It shows the basic details about each available update and identifies whether the update is new and whether its installation needs a restart to complete. You can select the desired component from the table and click install.

       Installed Software: The Installed Software node shows the software components currently installed. This is the place where you can remove any piece of software you do not require from your GlassFish installation and let it run light. Upon trying to uninstall a component, GlassFish checks which components depend on it and lets you decide whether you want to remove all of them or not. The Details section of the window shows more information about the installed software, including technical specifications, product support, documentation, and other useful resources.

       Available Software: The Available Software node shows all the software components available for installation through the Update Center. These are new components like new containers, new JDBC drivers, new web applications and so on. The Details section of the window shows more information about the available software, including technical specifications, product support, documentation and other useful resources.

Now that we know what the update center can do, let's see how to bootstrap it in our application server. GlassFish is a modular application server, so the update center itself is a module which may or may not be present; by default the update center modules are not bundled with the GlassFish installation, and GlassFish downloads and installs its bits upon first access, just like the GlassFish administration console.

The GlassFish installer will bootstrap the GlassFish Update Center during installation if an Internet connection is available. If your environment had no Internet connection during the installation, you can bootstrap the update center by navigating to GlassFish_Home/bin and running the updatetool script, which is either a shell script or a batch file. The first execution downloads the necessary bits and configures the update center; running the command again brings up the update center tool GUI, similar to figure 5.
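For example, on a Unix machine the two runs could look like this (the path assumes the default installation layout):

cd GlassFish_Home/bin
./updatetool        # first run: downloads and configures the Update Center bits
./updatetool        # second run: launches the Update Center GUI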

Figure 5 The Update Center tool; it can manage multiple application installations

In the navigation tree you can see that several installations can be managed using a single Update Center front end. These are the installed images on the system, along with a manually created image for the GlassFish in Action book.

5.1 Update center in the administration console

The GlassFish administration system is all about effectiveness, integration and ease of use, so the same update center functionalities are available in the GlassFish administration console, letting administrators check for and install updates without shell level access to their machines. Administrators can navigate to the Update Center page in the GlassFish administration console and see which updates and new features are available for their installed version.

Figure 6 shows the web based update center page with all the features the desktop based version has. Integrating the update center functionality into the administration console is in line with GlassFish's promise to keep all administration tasks for a single machine or a cluster integrated into one single application.

 

 

 

Figure 6 Update center pages in the GlassFish administration console, with the same functionality as the desktop GUI.

I know what you want to ask: what about update notification, and how often should I check for updates? The GlassFish development team knows that administrators like to be notified of available updates, so there are two mechanisms to get notified. The first is the notification text which appears in the top bar of the administration console, as shown in figure 7.

Figure 7 Update center web based notification, and integration of the notification icon in the Gnome desktop

If you are asking for a more passive way of notification, or you are a CLI advocate administrator, you can use the tray icon notification system, which notifies you about available updates by showing a bulb in your system tray. It makes no difference whether you are using Solaris, Linux or Windows; you can activate the notification icon simply by issuing the following command in the GlassFish_Home/updatetool/bin directory.

./updatetool --register

And you are done: the registration of the update center notification icon is finished, and after your next startup you will see the notification icon, which notifies you about the availability of new updates and features. Figure 7 shows the two notification mechanisms discussed.

The update center helps us keep our application server up to date by notifying us about the availability of new packages and features. These features and packages, as we already know, are either OSGI bundles or other types of packages that can be used for transferring any type of file. The update center uses a general packaging system to deliver the requested packages, which may contain one or more OSGI bundles, operating system binary files, graphics, text, and any other type of content.

This general purpose and very powerful packaging system is called the pkg(5) Image Packaging System, or IPS for short.

6 GlassFish packages distribution

The update center, as we can see in the GlassFish use case, helps us keep our application server up to date and lets us install new components or remove currently installed ones to keep the house clean of unneeded components. But the so called update center is much bigger and more important than just updating a GlassFish installation: it is part of a general purpose binary package distribution system for distributing modular software on different operating systems without interfering with the operating system's installation sub-system or requiring so called root permission.

GlassFish version 2 had an update center which you could use to manage the GlassFish installation; however, that update tool used a module model like NetBeans NBM files to transfer the updates from the server to the local installation. In GlassFish version 3 the update center and the GlassFish installation system have completely changed. You may remember from 2.2 that the GlassFish installer, based on the OpenInstaller framework, uses a specific binary distribution system designed by OpenSolaris, named the pkg(5) Image Packaging System (IPS). Now we are going to discuss IPS in more detail, along with its relation to the GlassFish update center. IPS is developed in Python to ensure that it is platform independent and will execute on every platform with minimal dependencies.

6.1 PKG(5) Image Packaging System, simply IPS

IPS is a software delivery system based heavily on network repositories, distributing a software system either completely from the network repositories or through a mixture of network repositories and initially downloaded packages. Generally speaking, IPS helps with distributing and updating software systems on different levels, independent of the operating system and the platform on which the software is going to be installed.

IPS consists of several components; some of them are mandatory for maintaining a system based on IPS, and some are just provided to make the overall procedure of creating, maintaining, and using an IPS based software distribution easier.

       End user utilities: a set of command line utilities that let end users interact with the server side software, which serves as a repository front end letting clients fetch and install updates and new features for their installed software.

       GUI based client side software: like the Java based Update Center which is in use by GlassFish.

       APIs for interacting with IPS: Java APIs that let Java developers perform IPS related tasks from their applications; they provide better integration with Java applications and more flexibility for ISVs and OEMs to develop their own client side applications.

       Installation images: an installation image is a set of packages for an application that provides some or all of its functionality.

       Server side utilities: an HTTP server which sits in front of the physical repository to interact with the clients.

       Development utilities: a set of utilities that help developers create IPS packages, with build system integration as a subset; for now, IPS integration with Maven and ANT is available, which integrates and automates creating and publishing packages to repositories from the build system.

In IPS, a software installation is called an image, which we can describe as a customized mirror of the installation repositories in terms of installed packages. Several types of images are defined in IPS, including:

       Operating system level images: this type of image is used for distributing operating systems and operating system upgrades. Images at this level are called full images and partial images.

       Custom application level images: this is the image level that software developers and distributors can use to distribute their applications. These images are called user images and do not require the host operating system to be based on IPS.

Let's see how the GlassFish installer is related to IPS. The GlassFish installer simply asks the user for the path where the GlassFish IPS image should be expanded, and performs the initial configuration on the expanded image. This initial configuration includes creating a default domain along with setting up the key repositories for the domain password and master password. The expanded image will interact with some preconfigured network repositories to fetch information about updates and new features available for the expanded GlassFish image.

Package repositories, which installation images interact with, contain packages that update, complete or add new features to an already installed image. These packages can differ from one operating system to another, and because of these differences, different repositories are required for different operating systems.

An installation image can be as big as an operating system image, or as small as the bootstrapping files necessary to bootstrap IPS and let it perform the rest of the installation. So a user image can be categorized under one of the following types:

       An image with some basic required functionality, for example the GlassFish web container. This basic functionality can be operating system independent for the sake of simplicity in distributing the main image. The image can contain the complete IPS system, including its utilities and client applications like the GlassFish update center, or it may just contain a script to bootstrap the IPS installation.

       An image containing only the very basic and minimal packages to bootstrap the IPS system; the user runs the IPS bootstrapping script to install more IPS related packages, like the graphical Update Center client, and later installs all required features using the IPS utilities.

6.2 Creating packages using IPS

Now that we understand what IPS is and how it relates to the GlassFish installer and GlassFish itself, we can get our hands on the IPS utilities and see how we can use the command line utilities to create new packages. Figure 8 shows the GlassFish directory structure after bootstrapping the Update Center. Keep in mind that bootstrapping the GlassFish Update Center results in the installation of the pkg(5) IPS utilities.

You may ask what the relation of GlassFish modularity, extendibility, OSGI and HK2 is with IPS. An IPS image is a zip file containing the GlassFish directory layout, including files like its OSGI bundles, JAR files, documentation and so on. Later on, each IPS package, which can be an update to currently installed features or a brand new feature, is a zip file which can contain one or more OSGI bundles along with their related documentation and things like that.

Two directories in the GlassFish directory structure are related to IPS and the GlassFish Update Center. The pkg directory includes all the files necessary for the IPS system: man pages, libraries that Java developers can use to build their own applications on top of IPS or their own IPS bootstrapping applications, and a minimal Python distribution that lets users execute the IPS scripts. The vendor-packages directory contains all packages downloaded and installed for this image, which is the Update Center image.

From now on, we refer to the pkg folder as ips_home; replace it with the real path to the pkg directory, or set an environment variable with this name pointing to that directory.

 

 

Figure 8 Update center and pkg(5) IPS directory layout

The bin directory inside the pkg directory includes IPS scripts for both developers and end users; these scripts are listed in table 1 with their descriptions.

Table 1 IPS scripts with associated descriptions

pkg: can be used to create and update images.

pkg.depotd: the repository server for the image packaging system; pkg and other retrieval clients send package and catalog retrieval requests to this repository server.

pkgsend: lets us publish new packages and new package versions to an image packaging depot server.

pkgrecv: lets us download the contents of a package from a server, in a format suitable for the pkgsend command.

You can also see some Python scripts in the bin directory; these scripts perform the real tasks behind each of the operating system friendly scripts above.

The updatetool folder is where the Update Center GUI application is located, along with its documentation and related scripts. Its bin directory contains two scripts: one for running the Update Center and one for registering the desktop notifier, which keeps you posted about new updates and available features by showing a balloon with the related message in your system tray. The vendor-packages folder contains all packages downloaded and installed for this image, which is the Update Center image.

Now let's see how we can set up a repository, create a package and publish the package into the repository. I am just going to create a very basic package to show the procedure; the package may not be useful for GlassFish or any other application.

First of all we need a directory to act as the depot of our packages, a place where we are sure our user has read and write permission along with enough space for our packages. So create a directory named repository; the directory can be inside the ips_home directory. Now start the repository server by issuing the following command inside the ips_home/bin directory.

./pkg.depotd -d ../repository -p 10005

To check that your repository is running, open http://127.0.0.1:10005 in your browser; you should see something similar to figure 9, which includes some information about your repository status.

Figure 9 The repository status page, which we can see by pointing the browser to the repository URL.

 

Now we have our repository server running, waiting for us to push our packages into it; later we can download those packages using client side utilities like pkg or the Update Center.

Our packages should be placed in the package repository, and the command which does this for us is pkgsend. You have probably already thought about the need for a way to describe the content of a package, its version, its description, and so on. The pkgsend command lets us open a transaction, add all the package attributes, its included files and directory layout, and finally close the transaction, at which point pkgsend sends our package to the repository. Because pkgsend acts in a transactional way, we can have a set of packages for which we need either all of them in the repository or none of them; this model guarantees that we never have incompatible packages in the repository while we push updates.

To create a sample package we need some content, so create a directory inside the pkg directory and name it sample_package; put two text files named readme.txt and license.txt inside it. Inside the sample_package directory create a directory named figures, with an image file named figure1.jpg inside it. These are dummy files and their content can be anything you like. You may add some more files and create a directory structure to test more complex structured packages.

Listing 8 shows the series of commands we can use to create the package we have just prepared and push it to the repository we created in the previous step. I assume that you are using Linux, so the commands are Linuxish; enter them line by line in a terminal window, which can be gnome-terminal or any other terminal of your choice. If you want to see how a transaction works, you can point your browser to http://127.0.0.1:10005/ while executing the listing 8 commands.

Listing 8 Create and push the sample package to our repository


export PKG_REPO=http://127.0.0.1:10005/                           #1
eval `./pkgsend open GFiASample@1.0`                              #2
./pkgsend add set name="pkg.name" value="GFiASamplePackage"      #3
./pkgsend add set name="pkg.description" value="sample description" #4
./pkgsend add set name="pkg.detailed_url" value="sample.com/kalali" #5  
./pkgsend add file ../sample_package/readme.txt path=GFiA/README.txt  #6
./pkgsend add dir path=GFiA/figs/                        #7
./pkgsend add file ../sample_package/figures/figure1.jpg path=GFiA/figs/figure1.jpg #8
./pkgsend add license ../sample_package/license.txt license=GPL         #9
./pkgsend close               #10
 

At #1 we export an environment variable which the IPS scripts use as the repository URL; otherwise we would have to pass the URL with -s http://127.0.0.1:10005/ on each command execution. At #2 we open a transaction named GFiASample@1.0 for uploading a package; the eval makes the transaction ID printed by pkgsend available to the subsequent commands. At #3 we add the package name. At #4 we add the package description, which will appear in the description column of the Update Center GUI application or in the pkg command's information retrieval. At #5 we add a URL containing complementary information; the Update Center fetches the information from the provided URL and shows it in the description pane. At #6 we add a text file along with its extraction path. At #7 we add a directory which our package installation will create in the destination image. At #8 we add a file to our previously created directory. At #9 we add the package license type along with the license file; later on, the license type can be used to query the available packages based on their licenses. And finally, at #10 we close the transaction, which results in the appearance of the package in the package repository.

Now we have one package in our repository; you can find the number of packages in the repository by opening the repository URL, http://127.0.0.1:10005/, in your browser.

You may already have recognized a pattern in the pkgsend command and its parameters; if you did, you are correct, because pkgsend commands follow the pattern pkgsend subcommand [subcommand parameters]. A list of important pkgsend subcommands is shown in table 2.

Table 2 List of pkgsend subcommands which can be used to send a package to a repository

open: begins a transaction on the package specified by the package name. Syntax: pkgsend open pkg_name

add: adds a resource associated with an action to the current transaction. Syntax: pkgsend add action [action attributes]

close: closes the current transaction. Syntax: pkgsend close

 

You can see complete list of subcommands in the pkgsend man files or in the pkg(5) project website located at http://www.opensolaris.org/os/project/pkg/.

The add subcommand is the most useful among the pkgsend subcommands; it adds the actions we need to the transaction, along with the actions' attributes. You have already seen how we can use the set, file, dir, and license actions. As you see, each action may accept one or more named attributes. Other important actions are listed in table 3.

Table 3 List of other important add actions

link: represents a symbolic link. The path attribute defines the file system path where the symlink is installed.

hardlink: represents a physical link. The path attribute defines the file system path where the link is installed.

driver: represents a device driver. It does not reference a payload; the driver files must be installed as file actions. The name attribute represents the name of the driver, which is usually, but not always, the file name of the driver binary.

depend: represents a dependency between packages. A package might depend on another package in order to work or to install. Dependencies are optional.

group: defines a UNIX group. No support is present for group passwords. Groups defined with this action initially have no user list; users can be added with the user action.

user: defines a UNIX user, as defined in the /etc/passwd, /etc/shadow, /etc/group and /etc/ftpd/ftpusers files. Users defined with this attribute have entries added to the appropriate files.
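For example, a dependency on another package could be added to an open pkgsend transaction like this (the package name and version are hypothetical; type=require marks the dependency as mandatory):

./pkgsend add depend type=require fmri=pkg:/GFiABasePackage@1.0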

 

Now that we are finished with pkgsend, let's see what alternatives we have for using our package repository. We can either install the package into an already installed image, like our GlassFish installation, or we can create a new image on our client machine and install the package into that particular image.

To install the package into the current GlassFish installation, open the Update Center, select the GlassFish node in the navigation tree and then select Image Properties from the File menu, or press CTRL+I, to open the image properties window. Add a new repository: enter GFiA.Repository as the name and http://127.0.0.1:10005/ as the repository URL. Now you should be able to refresh the Available Add-ons list and select the GFiA sample package for installation.

The installation process is fairly simple, as it just executes the given actions one by one, resulting in the creation of a GFiA directory inside the GlassFish installation directory with two text files inside it. The package installation also creates the figs directory inside the GFiA directory and adds the image file to it.

You get the idea: you can copy your package files anywhere in the host image (the GlassFish installation in this case), so when you want to distribute your OSGI bundle, you only need to put the bundle inside the already existing directory named modules in the GlassFish installation directory.

The other way of using our package repository is creating a new image on our client system and then installing the package into this new image. Although we could create the image and install our package into it using the Update Tool, it is instructive to use the command line utilities to accomplish the task.

  1. Create the local image:

./pkg image-create -U -a GFiA.Repository=http://localhost:10005/ /home/user/GFiA

The command creates a user image, which is determined by the -U parameter; its default package repository is our local repository, determined by the -a parameter. Finally, the path /home/user/GFiA is the location where our image will be created.

  2. Set a title and description in the image:

./pkg -R /home/user/GFiA set-property title "GlassFish in Action Image"

./pkg -R /home/user/GFiA set-property description "GlassFish in Action Book image which is created in Chapter "

  3. Install the GFiASamplePackage into the newly created image:

./pkg -R /home/user/GFiA install GFiASamplePackage

As you can see, we can install the package into any installation image by providing the installation image path.
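As a quick check, and assuming the image path used above, listing the image's installed packages should now show our sample package:

./pkg -R /home/user/GFiA list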

As a developer or project manager, you can use IPS to distribute your own application from the ground up and free yourself from the update distribution hurdle. The IPS toolkit is available at http://wikis.sun.com/display/IpsBestPractices/Downloads.

 

7 Summary

GlassFish's modular architecture opens the way for easy updating and maintenance of the application server, along with the possibility of using its modules outside the application server in ISV products, or developing new modules that extend the application server's functionality.

Using OSGI lets the application server be deployed inside a bigger OSGI based software system, and lets system administrators and maintainers deal with one single installation with one single underlying module management system.

GlassFish container development provides the opportunity to develop new types of servers that can host new types of applications without going deep into network server development. It is also attractive because the container can interface with other containers in the application server to use managed resources like JDBC connection pools or EJBs.

The GlassFish update center can be seen as one of the most innovative features of the application server, as it takes care of many headaches administrators usually face when updating and patching the application server. The update center automatically checks for updates available for our installed version of GlassFish and, in the blink of an eye, installs them for us without making us go through the compatibility check between the available update and our installation.

The update center notification mechanism keeps us posted about new updates, either while we are doing daily administration tasks in the administration console or by showing the famous bulb in our desktop notification area.

The application server is platform independent, so it needs a platform agnostic distribution mechanism; the pkg(5) IPS is a proven binary distribution system which GlassFish uses to distribute its binaries.

 

And GlassFish v3 is Here

The long awaited and most anticipated version of GlassFish was released today. GlassFish v3 fully implements the Java EE 6 specification, which means EJB 3.1, Servlet 3, JAX-RS, JPA 2, Contexts and Dependency Injection for Java EE, Bean Validation, Java EE profiles and so on.

 

GlassFish is not only the most up to date application server, but it also benefits from a very good architecture. The GlassFish architecture provides extension points and contracts which 3rd party developers can use to add functionality to GlassFish (even the administration console is pluggable).

 

GlassFish v3 is an important milestone in GlassFish's life because it is now fully based on the OSGI modularity framework, which means GlassFish can be integrated into a bigger OSGI system.

 

In addition to Java EE profiles, GlassFish v3 is also available as an embedded application server, which we can use for testing purposes or any other kind of in-process use case.

 

What I like the most about GlassFish is its close integration with many other well developed products like OpenESB, OpenSSO, IDEs, and so on in addition to its superb performance and administration channels.

 

GlassFish v3 adds another reason to consider it the prime option for deploying simple and complex applications: its extensibility, which has so far made it possible to host different kinds of scripting language based applications, such as applications based on RoR or PHP, in the same process that hosts the Java EE applications.

 

GlassFish v3 is available for download at: https://glassfish.dev.java.net/

 

Like GlassFish v2, Sun Microsystems provides support for GlassFish if you need a higher level of assurance and guaranteed support. To get more information on the provided support model, take a look at the GlassFish Enterprise home page.

 

And if you are interested, there is a GlassFish v3 Refcard published by DZone and authored by me, which introduces GlassFish v3 in more detail and gives you all you need to start working with GlassFish v3.

 

 

I have an upcoming book about GlassFish, due to be published in April 2010 by Packt Publishing. The book mainly discusses GlassFish security (administration and configuration), Java EE security, and using OpenSSO to secure Java EE applications in general and Java EE web services in particular.

 

To learn more about Java EE 6, you can take a look at Sun Java EE 6 white paper located at: https://www.sun.com/offers/docs/java_EE6_overview_paper.pdf

 

Another way for you to meet GlassFish folks and other GlassFish community members, and to learn more about GlassFish v3, is to join the virtual conference about GlassFish v3, which is scheduled for the 15th of December.

So you want to develop a rich client application on top of NetBeans RCP?

I was involved in the development of an RCP application based on the NetBeans platform, and now I have found a few minutes to share some of the experience with you.

All standard coding and best practices are applicable here. Use a project management system like Trac, or any project management system that you know; never start a project without one. Use coding standards, unit testing, design and implementation document versioning, design discussion sessions and so on.

Prepare all library wrappers and give them proper names. In NetBeans RCP we cannot add JAR files to modules directly; instead, JAR files need to be wrapped inside library wrapper modules, and these modules can then be included in the list of other modules' dependencies. Use proper names for library wrappers, for example jfreechart-wrapper-1.0.13. Never think about having one fat wrapper which contains all required libraries.

Avoid including the same JAR file in more than one wrapper that will be used by one module. Instead, include all common JAR files in one wrapper and include that wrapper in the dependencies of each wrapper that needs it.

Think ahead and decide which modules you want to have and how these modules depend on each other. For example: a module which holds the domain model and persistence layer (it will be wrapped inside a library wrapper if the persistence layer is based on JPA), a module which contains all Web Services access objects (again, a Java SE project wrapped inside a library wrapper), a module for reporting, a module for security implementation, and a module to host all utility classes and common resources like icons, graphics, configuration files, etc.

Avoid circular dependencies. You cannot access the security module from the persistence module and then use persistence objects from the security module. A best practice is to define some logical layers and ensure that each layer only accesses the layer beneath it and no other layer.

If acceptable to the customer, prepare an update center and push updates through it to avoid extra work. The update center will help with distributing software updates, which can be new modules, updates, bug fixes, etc.
Keep these FAQs close as they will help you along the way:
http://wiki.netbeans.org/NetBeansDeveloperFAQ
http://deadlock.netbeans.org/hudson/job/faqsuck/lastSuccessfulBuild/artifact/other/faqsuck/build/faq.html