Using GlassFish domain templates to easily create several customized domains

It may have happened to you that you needed to customize GlassFish's behavior after creating a domain, in order to make the domain fit the basic requirements of your organization or your development purposes. Some of the files that we usually manipulate to customize GlassFish include logging.properties, keystore.jks, cacerts.jks, default-web.xml, server.policy and domain.xml. These files can be customized through different asadmin commands, through JDK commands like keytool and policytool, or manually using a text editor, after you have created the domain, in the config directory of the domain itself. But repeating the steps for multiple domains is a laborious task, which can be avoided by changing the template files from which GlassFish domains are created. The templates are located inside the GlassFish installation, and we can simply open them and edit their properties to make them fit our needs better.

The benefit of modifying the templates, rather than copying the config directory of one domain into another, is that domain-specific settings like port numbers have placeholders in the template domain.xml which the asadmin command fills in when it creates a domain. An example of a placeholder is %%%JMS_PROVIDER_PORT%%%, which will be replaced with the JMS provider port by the asadmin command.
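For illustration, a fragment of such a template carrying this placeholder could look similar to the following sketch; apart from the placeholder token itself, the element and attribute values here are illustrative rather than copied from the actual template:

[xml]
<!-- hypothetical sketch of a domain.xml template fragment; only the
     %%%JMS_PROVIDER_PORT%%% token is taken from the text above -->
<jms-service type="EMBEDDED" default-jms-host="default_JMS_host">
    <jms-host name="default_JMS_host" host="localhost" port="%%%JMS_PROVIDER_PORT%%%"/>
</jms-service>
[/xml]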

A walkthrough for the fork/join framework introduced in Java SE 7

Java SE 7 brought some neat features to the table for Java developers. One of these features is the fork/join framework, basically a new parallel programming framework we can use to easily implement our divide-and-conquer solutions. The point is that a solution to a problem should have the following characteristics for the fork/join framework to be used effectively:

  • The problem domain, whether it is a file, a list, etc. to be processed, or a computation, should be dividable into smaller subtasks.
  • Processing a chunk should be possible without requiring the results of the other chunks.

To summarize, solving or processing the problem domain should require no self-feedback, to make it possible to use the framework. For example, if you want to process a list and processing each element requires the result of processing the previous element, then it is impossible to use any parallel computing for that job. Similarly, if you want to apply an FFT over a sound stream where processing each pulse requires feedback from the previous pulses, it is not possible to speed up the processing using the fork/join framework.

Well, before we start learning the fork/join framework, we had better know what it is and what it is not. What the fork/join framework is:

  • A parallel programming framework for Java
  • Part of Java SE 7
  • Suitable for implementing parallel processing solutions, mostly data-intensive ones with little or no shared state between the workers that process the data chunks.
  • Suitable when no synchronization is required between the workers

What fork/join framework is not:

  • It is not magic that makes your code run fast on machines with multiple processors; you need to think about and implement your solution in a parallel manner.
  • It is not hard and obscure like other frameworks, MPI for example. Using this framework is way easier than anything I have used before.

If you want to learn the mechanics behind the fork/join framework, you can read the original article written by Doug Lea, which explains the motivation and the design. The article is available at http://gee.cs.oswego.edu/dl/papers/fj.pdf. If you want to see how we can use the framework, then continue reading this article.

First let's see which important classes one needs to know in order to implement a divide-and-conquer solution using the fork/join framework, and then we will start using those classes.

  • The ForkJoinPool: this is the worker pool where you can post your ForkJoinTask to be executed. The default parallelism level is the number of processors available to the runtime.
  • The RecursiveTask&lt;V&gt;: this is a task, a subclass of ForkJoinTask, which can return a value of type V; for example, processing a list of DTOs and returning the result of the processing.
  • The RecursiveAction: another subclass of ForkJoinTask, without any return value; for example, processing an array…

I looked at this new API mainly for data pipelining, where I needed to process a pretty huge list of objects and turn it into another format, to keep the processing result of one library consumable by the next one in the data flow, and I am happy with the result: pretty easy and straightforward.

Following is a small sample showing how to process a list of Row objects and convert them to a list of Entity objects. In my case it was something similar: processing Row objects and turning them into OData OEntity objects.

[java]
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/**
 *
 * @author Masoud Kalali <mkalali>
 */
class RowConverter extends RecursiveTask<List<Entity>> {

    // if the range is larger than 5000 elements we will use parallel processing
    static final int SINGLE_THREAD_TOP = 5000;
    int begin;
    int end;
    List<Row> rows;

    public RowConverter(int begin, int end, List<Row> rows) {
        this.begin = begin;
        this.end = end;
        this.rows = rows;
    }

    @Override
    protected List<Entity> compute() {

        if (end - begin <= SINGLE_THREAD_TOP) {
            // actual processing happens here
            List<Entity> preparedEntities = new ArrayList<Entity>(end - begin);
            System.out.println(" begin: " + begin + " end: " + end);
            for (int i = begin; i < end; ++i) {
                preparedEntities.add(convertRow(rows.get(i)));
            }
            return preparedEntities;
        } else {
            // here we divide the work and combine the results
            // divider specifies the size of the chunk we split off for each subtask
            int divider = 5000;
            // one can calculate the divider based on the list size and the number of processors available
            // using http://download.oracle.com/javase/7/docs/api/java/lang/Runtime.html#availableProcessors()
            // decrease the divider number and examine the changes.

            // split off the first chunk and fork it, then process the rest in this task
            RowConverter curLeft = new RowConverter(begin, begin + divider, rows);
            RowConverter curRight = new RowConverter(begin + divider, end, rows);
            curLeft.fork();
            List<Entity> rightResult = curRight.compute();
            List<Entity> leftResult = curLeft.join();
            leftResult.addAll(rightResult);
            return leftResult;
        }
    }

    // dummy converter method turning one DTO into another
    private Entity convertRow(Row row) {
        return new Entity(row.getId());
    }
}

// the driver class which owns the pool
public class Fjf {

    public static void main(String[] args) {

        List<Row> rawData = initDummyList(10000);
        ForkJoinPool pool = new ForkJoinPool();
        System.out.println("number of worker threads: " + pool.getParallelism());

        List<Entity> res = pool.invoke(new RowConverter(0, rawData.size(), rawData));

        // add a breakpoint here and examine the pool object.
        // check how the stealCount, which shows the number of subtasks taken on by available workers,
        // changes when you use a smaller divider and thus produce more tasks
        System.out.println("processed list: " + res.size());
    }

    /**
     * creates a dummy list of rows
     *
     * @param size number of rows in the list
     * @return the list of @see Row objects
     */
    private static List<Row> initDummyList(int size) {

        List<Row> rows = new ArrayList<Row>(size);

        for (int i = 0; i < size; i++) {
            rows.add(new Row(i));
        }
        return rows;
    }
}

// dummy classes which should be converted from one form to another
class Row {

    int id;

    public Row(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }
}

class Entity {

    int id;

    public Entity(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }
}
[/java]

Just copy and paste the code into your IDE and try running and examining it to get a deeper understanding of how the framework can be used. Post any comments and questions that you may have here and I will try to help you out with them.
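As the in-line comments in the sample suggest, the divider does not have to be hard-coded. A minimal sketch of deriving it from the list size and the number of available processors, inside the main method above, could look like the following; the factor of 4 is an arbitrary illustration, not a recommendation:

[java]
// aim for a few tasks per worker thread; the factor 4 is illustrative
int workers = Runtime.getRuntime().availableProcessors();
int divider = Math.max(1, rawData.size() / (workers * 4));
[/java]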

How the REST interface covers for the absence of the JMX/AMX administration and management interface in GlassFish 3.1

For some time I have wanted to write this entry and explain what happened to the GlassFish JMX/AMX management and administration interface, but being busy with other stuff prevented me from doing so. This article can serve as an upgrade to my other article about the GlassFish 3.0 JMX administration interface, which I wrote a while ago. Long story short, in GlassFish 3.1 the AMX/JMX interface is no longer available, and instead we can use the REST interface to change the server settings and perform all the administration, management and monitoring activities we need. There are a fair number of articles and blog entries about the RESTful interface all over the web, which I have included at the end of this blog. First of all, the REST interface is available through the administration console application, meaning that we can access the interface using a URL similar to http://localhost:4848/management/domain/. The administration console and the REST interface run on a separate virtual server, and therefore a separate HTTP listener and, if required, transport configuration. What I will explain here are the following items:

  • How to use Apache HttpClient to perform administration tasks using GlassFish REST interface
  • How to find the request parameters for different commands.
  • GlassFish administration, authentication and transport security

How to use Apache HttpClient to interact with GlassFish administration and management application

Now back to the RESTful interface: this is an HTTP-based interaction channel with the GlassFish administration infrastructure, which basically allows us to do almost anything that is possible with asadmin, over HTTP in a RESTful manner. We can use basically any programming language capable of writing to a socket to interact with the RESTful interface. Here we will use the Apache HttpClient to take care of sending the commands to the GlassFish RESTful console. When using GlassFish REST management we can use the POST, GET and DELETE methods to perform the following tasks:

  • POST: create and partially update a resource
  • GET: get information like details of a connection pool
  • DELETE: delete a resource

The following sample code shows how to use the HttpClient to perform some basic operations, including updating a resource, getting a list of resources, and creating a resource and finally deleting it.

[java]
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URISyntaxException;
import java.util.logging.Logger;

import org.apache.http.HttpEntity;
import org.apache.http.HttpException;
import org.apache.http.HttpResponse;
import org.apache.http.auth.AuthenticationException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpDelete;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicHeader;
import org.apache.http.protocol.HTTP;

/**
 *
 * @author Masoud Kalali
 */
public class AdminGlassFish {

    // change the ports to your own setting
    private static final String ADMINISTRATION_URL = "http://localhost:4848/management";
    private static final String MONITORING_URL = "http://localhost:4848/monitoring";
    private static final String CONTENT_TYPE_JSON = "application/json";
    private static final String CONTENT_TYPE_XML = "application/xml";
    private static final String ACCEPT_ALL = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
    private static final Logger LOG = Logger.getLogger(AdminGlassFish.class.getName());

    public static void main(String args[]) throws IOException, HttpException, URISyntaxException {

        // just changing the indent level of the JSON and XML output to make them readable, for humans…
        String prettyFormatRestInterfaceOutput = "{\"indentLevel\":2}";
        String response = postInformation("/domain/configs/config/server-config/_set-rest-admin-config", prettyFormatRestInterfaceOutput);
        LOG.info(response);

        // getting the list of all JDBC resources
        String jdbcResources = getInformation("/domain/resources/list-jdbc-resources");
        LOG.info(jdbcResources);

        // creating a JDBC resource on top of the default pool
        String createJDBCResource = "{\"id\":\"jdbc/Made-By-Rest\",\"poolName\":\"DerbyPool\"}";
        String resourceCreationResponse = postInformation("/domain/resources/jdbc-resource", createJDBCResource);
        LOG.info(resourceCreationResponse);

        // deleting a JDBC resource
        String deletionResponse = deleteResource("/domain/resources/jdbc-resource/jdbc%2FMade-By-Rest");
        LOG.info(deletionResponse);
    }

    // using HTTP GET
    public static String getInformation(String resourcePath) throws IOException, AuthenticationException {
        DefaultHttpClient httpClient = new DefaultHttpClient();
        HttpGet httpGet = new HttpGet(ADMINISTRATION_URL + resourcePath);
        httpGet.setHeader("Accept", CONTENT_TYPE_XML);
        HttpResponse response = httpClient.execute(httpGet);
        HttpEntity entity = response.getEntity();
        InputStream instream = entity.getContent();
        return isToString(instream);
    }

    // using HTTP POST for creating and partially updating resources
    public static String postInformation(String resourcePath, String content) throws IOException {
        HttpClient httpClient = new DefaultHttpClient();
        HttpPost httpPost = new HttpPost(ADMINISTRATION_URL + resourcePath);
        StringEntity entity = new StringEntity(content);

        // setting the content type
        entity.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, CONTENT_TYPE_JSON));
        httpPost.addHeader("Accept", ACCEPT_ALL);
        httpPost.setEntity(entity);
        HttpResponse response = httpClient.execute(httpPost);

        return response.toString();
    }

    // using HTTP DELETE to delete a resource
    public static String deleteResource(String resourcePath) throws IOException {
        HttpClient httpClient = new DefaultHttpClient();
        HttpDelete httpDelete = new HttpDelete(ADMINISTRATION_URL + resourcePath);
        httpDelete.addHeader("Accept", ACCEPT_ALL);
        HttpResponse response = httpClient.execute(httpDelete);
        return response.toString();
    }

    // converting the GET output stream to something printable
    private static String isToString(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(new InputStreamReader(in), 1024);
        for (String line = br.readLine(); line != null; line = br.readLine()) {
            sb.append(line);
        }
        in.close();
        return sb.toString();
    }
}
[/java]

You may ask how one could know the required attribute names; there are several ways to find them:

  • Look at the reference documents…
  • View the source of the HTML page representing that kind of resource; for example, for the JDBC resource it is http://localhost:4848/management/domain/resources/jdbc-resource. If you open it in the browser you will see an HTML page, and viewing its source will give you the names of the different attributes. The labels of the form fields appear to be the same as the attribute names themselves.

  • Submit the above page and monitor the request using your browser's developer tools or a plugin; in Chrome, for example, you can inspect the exact request that is sent for the JDBC resource form.

GlassFish administration, authentication and transport security

By default, when we install GlassFish or create a domain, the domain administration console is neither protected by authentication nor by HTTPS, so whatever we send to the application server through this channel can be read by anyone sniffing around. Therefore you may need to enable authentication using the following command:

./asadmin change-admin-password

Now that we have enabled authentication for the admin console, you may ask how our sample code can use it. The answer is quite simple: just set the credentials on the request object and you are done. Something like:

[java]
UsernamePasswordCredentials cred = new UsernamePasswordCredentials("admin", "admin");
httpGet.addHeader(new BasicScheme().authenticate(cred, httpGet));
[/java]

Make sure that you are using the correct username and password, as well as the correct request object; in this case the request object is httpGet.
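Putting the pieces together, an authenticated variant of the getInformation method from the sample above could look like the following sketch; it assumes the admin credentials are admin/admin (adjust to your own) and additionally needs the org.apache.http.auth.UsernamePasswordCredentials and org.apache.http.impl.auth.BasicScheme imports:

[java]
// a sketch of getInformation with basic authentication added;
// belongs in the AdminGlassFish class shown earlier
public static String getInformationAuthenticated(String resourcePath)
        throws IOException, AuthenticationException {
    DefaultHttpClient httpClient = new DefaultHttpClient();
    HttpGet httpGet = new HttpGet(ADMINISTRATION_URL + resourcePath);
    httpGet.setHeader("Accept", CONTENT_TYPE_XML);

    // attach the Authorization header carrying the admin credentials
    UsernamePasswordCredentials cred = new UsernamePasswordCredentials("admin", "admin");
    httpGet.addHeader(new BasicScheme().authenticate(cred, httpGet));

    HttpResponse response = httpClient.execute(httpGet);
    return isToString(response.getEntity().getContent());
}
[/java]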

Now about HTTPS: to have encryption on the transport we need to enable SSL for the admin HTTP listener. The following steps show how to use the RESTful interface through a browser to enable HTTPS for the admin console. When used from a browser, the GlassFish REST interface shows basic HTML forms for the different resources.

  1. Open http://localhost:4848/management/domain/configs/config/server-config/network-config/protocols/protocol/admin-listener in the browser
  2. Select true for the security-enabled option element
  3. Click Update to save the settings.
  4. Restart the server using http://localhost:4848/management/domain/restart

The above steps have the same effect as

./asadmin enable-secure-admin
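The same change can also be scripted by reusing the postInformation helper from the sample above. This is only a sketch: the attribute name is an assumption based on the security-enabled field shown on the HTML form, so verify it against the form source first:

[java]
// hypothetical scripted version of the browser steps above; the
// "security-enabled" attribute name is assumed from the HTML form
String result = postInformation(
        "/domain/configs/config/server-config/network-config/protocols/protocol/admin-listener",
        "{\"security-enabled\":\"true\"}");
LOG.info(result);
[/java]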

Either way, this will enable the SSL layer for the admin-listener, but it will use the default, self-signed certificate. Here I explained how to install a GoDaddy digital certificate into the GlassFish application server, so that no one can eavesdrop on the command parameters in transit and, at the same time, the certificate is valid instead of being self-signed. And here I explained how one can use EJBCA to set up and use a small inter-corporate certificate authority with GlassFish; though the manual is a little old, it will give you enough understanding to use the newer versions of EJBCA.

If you are wondering how our small sample application can work with this dummy self-signed certificate of GlassFish, you will need to wait until next time, when I will explain how to bypass the invalid certificate installed on our test server.

In the next part of this series I will cover more details on the monitoring side, as well as discussing how to bypass the self-signed digital certificate during development…

Updating a web application’s Spring context from bean definitions partially stored in the database…

As you know, Spring Security providers can get complex, as you may need several beans such as implementations of UserDetailsService, SaltSource, PasswordEncoder, the JDBC setup and so on. I was working on a Spring-based application which needed to load its security configuration from the database, because the system administrator could select the security configuration from several pre-defined templates like LDAP, JDBC, file-based, etc., change some attributes like the LDAP address or the file location to fit the template to the environment, and then apply the new configuration to be used by the application.

I was to port some parts of the application to the web and a 3-tier architecture, so I had to configure authentication for the web application from the database, using the existing implementations of the beans required for the security provider configurations.

Normally this is plain and simple: load all of the context configuration files by adding them to web.xml and let the Spring filter use them to initialize the context, or extend XmlWebApplicationContext or one of its siblings and return the configuration file locations by overriding the getConfigLocations method. This works perfectly when everything is in plain XML files and you have access to everything… It won't work when some of the context configuration files are stored in the database and the only means of accessing the database is the Spring context itself, which needs to be initialized before you can access the database through it.

What I needed to do was put basic authentication in front of the web application while using the ProviderManager whose configuration is stored in the database. Without the ProviderManager you cannot have the security filters, and thus no security will be applied over the context.

The first part, creating the security configuration and specifying the URL patterns that need to be protected, is straightforward. But the filters use the ProviderManager, which is not there yet, and thus the context initialization will fail. To solve this I used the following workaround, which might help someone else as well. In all of our templates the ProviderManager bean name was the same, so I could simply devise the following solution: create a temporary basic security provider definition file with the following beans:

  • A basic UserDetailsService bean based on InMemoryDaoImpl
  • An AuthenticationProvider on top of the above UserDetailsService
  • A ProviderManager which uses the above AuthenticationProvider.

The complete file looks like this:

[xml]

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"
       default-lazy-init="true">

    <bean id="tmpUserDetailsService"
          class="org.springframework.security.core.userdetails.memory.InMemoryDaoImpl">
        <property name="userMap">
            <value>
            </value>
        </property>
    </bean>

    <bean id="tmpAuthenticationProvider"
          class="org.springframework.security.authentication.dao.DaoAuthenticationProvider">
        <property name="userDetailsService" ref="tmpUserDetailsService"/>
    </bean>

    <bean id="authenticationManager"
          class="org.springframework.security.authentication.ProviderManager">
        <property name="providers">
            <list>
                <ref local="tmpAuthenticationProvider"/>
            </list>
        </property>
    </bean>
</beans>

[/xml]

Using this file, the Spring context will get initialized, and thus at the very least no NoSuchBeanDefinitionException will be thrown in your face. You may say: OK, why not load the entire security definitions and configurations after the context is initialized, so the temporary security provider is not needed? The answer is that having no security applied right after the context initialization finishes is a security risk, because during the brief moment before the context gets updated with the security definitions, people can access the entire system without any authentication or access control. Even if we say that brief moment is negligible, a bigger factor is that a possible failure in loading the definitions after the context is initialized means the application would remain without any security restrictions if we did not lock it down with the temporary provider.

Now that the Spring context can get initialized, you can hook into the context initialization with a listener and load the security provider definitions from the database into the context, overriding the temporary beans with the actual ones stored in the database.

To hook into the Spring context initialization process you need to follow these steps:

  • Implement your ApplicationListener; the following snippet shows how:
  • [java]
    import org.springframework.beans.factory.config.AutowireCapableBeanFactory;
    import org.springframework.beans.factory.support.BeanDefinitionRegistry;
    import org.springframework.beans.factory.xml.XmlBeanDefinitionReader;
    import org.springframework.context.ApplicationEvent;
    import org.springframework.context.ApplicationListener;
    import org.springframework.context.event.ContextRefreshedEvent;
    import org.springframework.core.io.ByteArrayResource;
    import org.springframework.web.context.support.XmlWebApplicationContext;

    public class SpringContextEventListener implements ApplicationListener {

        private XmlWebApplicationContext context;

        public void onApplicationEvent(ApplicationEvent e) {
            if (e instanceof ContextRefreshedEvent) {
                context = (XmlWebApplicationContext) ((ContextRefreshedEvent) e).getApplicationContext();
                loadSecurityConfigForServer();
            }
        }

        private void loadSecurityConfigForServer() {
            AutowireCapableBeanFactory factory = context.getAutowireCapableBeanFactory();
            BeanDefinitionRegistry registry = (BeanDefinitionRegistry) factory;
            String securityConfig = loadSecurityConfigFromDatabase();
            XmlBeanDefinitionReader xmlReader = new XmlBeanDefinitionReader(registry);
            xmlReader.loadBeanDefinitions(new ByteArrayResource(securityConfig.getBytes()));
        }

        private String loadSecurityConfigFromDatabase() {
            // use the context and load the configuration from the database;
            // a possible implementation is sketched after these steps
            return null;
        }
    }
    [/java]

  • Now that you have a listener which listens for ApplicationContext events and loads your security configuration, you can hook this listener into your context, because on its own it won't do anything 🙂
  • To add the listener to your application context, just add something similar to the following snippet to one of the context configuration files and you will be done.

    [xml]
    <bean id="applicationListener" class="your.package.SpringContextEventListener"/>
    [/xml]

This is all you need to do in order to get the context updated after the web application starts, without opening a security hole in the application due to the lack of security definitions right after startup.
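The loadSecurityConfigFromDatabase method was left empty above because it depends entirely on your schema. A minimal sketch, assuming a hypothetical security_config table with a config_xml column and a DataSource bean available in the temporary context, could look like this:

[java]
// hypothetical implementation; table and column names are made up for illustration
// (requires javax.sql.DataSource and org.springframework.jdbc.core.JdbcTemplate)
private String loadSecurityConfigFromDatabase() {
    DataSource dataSource = context.getBean(DataSource.class);
    JdbcTemplate template = new JdbcTemplate(dataSource);
    return template.queryForObject(
            "select config_xml from security_config where active = 1", String.class);
}
[/java]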

Monitoring ZFS pool performance using zpool iostat

Performance, performance, performance; this is what we hear in all software development and management sessions. ZFS provides a few utility commands to monitor the performance of one or more pools.

You should remember that we used the fsstat command to monitor the UFS performance metrics. We can use the iostat subcommand of the zpool command to monitor the performance metrics of ZFS pools.

The iostat subcommand provides some options and arguments which we can see in its syntax, shown below:

iostat [-v] [pool] ... [interval [count]]

The -v option shows detailed performance metrics about the pools it monitors, including the underlying devices. We can pass as many pools as we want to monitor, or we can omit the pool name so the command shows performance metrics of all pools. The interval and count arguments specify the number of seconds between two consecutive samples and how many samples to take.

For example, we can use the following command to view detailed performance metrics of fpool, taking 100 samples at 5-second intervals:

# zpool iostat -v fpool 5 100

The output of this command looks similar to the following.
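The sample below follows the layout of zpool iostat -v output; the numbers are illustrative only and will differ on your system:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
fpool        512M  1.45G      1      4  12.5K  58.3K
  c2t1d0     512M  1.45G      1      4  12.5K  58.3K
----------  -----  -----  -----  -----  -----  -----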

The first group of columns shows the pool capacity stats, including how much space was used at sampling time and how much was available. The second group shows how many read and write operations were performed during the interval, and the last group shows the bandwidth used for reading from and writing into the pool.

The zpool iostat command retrieves some of its information from the read-only attributes of the requested pools and the system metadata, and calculates the other figures by collecting samples on each interval.

Introducing NIO.2 (JSR 203) Part 6: Filtering directory content and walking over a file tree

In this part we will look at how the directory tree walker and the directory stream reader work. These two features are another couple of long-requested features which were not included in core Java before Java 7.

First, let's see what the directory stream reader is. This API allows us to filter the content of a directory on the file system and extract the file names that match our filter criteria. The feature works even for very large folders with thousands of files.

For filtering we can use a PathMatcher expression matching the file name, or we can filter the directory content based on different file attributes, for example the file permissions or the file size.

The following sample code shows how to use the DirectoryStream along with filtering. For using a PathMatcher expression, we can just use another overload of the newDirectoryStream method which accepts the PathMatcher expression instead of the filter.

[java]
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.attribute.Attributes;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Iterator;

public class DirectoryStream2 {

    public static void main(String args[]) throws IOException {
        // getting the default file system and getting a path
        FileSystem fs = FileSystems.getDefault();
        Path p = fs.getPath("/usr/bin");

        // creating a directory stream filter based on file attributes
        DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {

            public boolean accept(Path file) throws IOException {
                long size = Attributes.readBasicFileAttributes(file).size();
                String perm = PosixFilePermissions.toString(
                        Attributes.readPosixFileAttributes(file).permissions());
                if (size > 8192L && perm.equalsIgnoreCase("rwxr-xr-x")) {
                    return true;
                }
                return false;
            }
        };

        // creating a directory stream with the newly developed filter
        DirectoryStream<Path> ds = p.newDirectoryStream(filter);
        Iterator<Path> it = ds.iterator();
        while (it.hasNext()) {
            Path pp = it.next();
            System.out.println(pp.getName());
        }
    }
}
[/java]
    

The above code is self-explanatory and I will not explain it any further than the in-line comments.
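As a side note, when the filter criterion is just the file name, the glob-accepting overload of newDirectoryStream mentioned above makes this much shorter. A sketch against the same early-access API used in the listing (the glob pattern here is just an example):

[java]
// filtering by a name pattern instead of attributes, using the glob overload
DirectoryStream<Path> ds = p.newDirectoryStream("*.{so,conf}");
for (Path entry : ds) {
    System.out.println(entry.getName());
}
[/java]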

The next subject of this entry is directory tree walking, or basically the file visitor API. This API allows us to walk over a file system tree and execute any operation we need over the files we find. The good news is that we can scan down to any depth we require using the provided API.

With the directory tree walking API, the Java core allows us to register a visitor class with the tree walker, and then for each entry the API comes across, either a file or a folder, it calls our visitor's methods. So the first thing we need is a visitor to register with the tree walker. The following snippet shows a simple visitor which only prints the file type using the Files.probeContentType() method.

[java]
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.logging.Level;
import java.util.logging.Logger;

class visit extends SimpleFileVisitor<Path> {

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
        try {
            // print the probed content type of every file we visit
            System.out.println(Files.probeContentType(file));
        } catch (IOException ex) {
            Logger.getLogger(visit.class.getName()).log(Level.SEVERE, null, ex);
        }
        return super.visitFile(file, attrs);
    }

    @Override
    public FileVisitResult postVisitDirectory(Path dir, IOException exc) {
        return super.postVisitDirectory(dir, exc);
    }

    @Override
    public FileVisitResult preVisitDirectory(Path dir) {
        return super.preVisitDirectory(dir);
    }

    @Override
    public FileVisitResult preVisitDirectoryFailed(Path dir, IOException exc) {
        return super.preVisitDirectoryFailed(dir, exc);
    }

    @Override
    public FileVisitResult visitFileFailed(Path file, IOException exc) {
        return super.visitFileFailed(file, exc);
    }
}
[/java]
    

As you can see, we extended the SimpleFileVisitor and we have visitor methods for all possible cases.

Now that we have the visitor class, the rest of the code is straightforward. The following sample shows how to walk over the /home/masoud directory down to two levels.

[java]
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.FileVisitOption;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.EnumSet;

public class FileVisitor {

    public static void main(String args[]) {

        FileSystem fs = FileSystems.getDefault();
        Path p = fs.getPath("/home/masoud");
        visit v = new visit();
        // walk at most two levels deep, starting from /home/masoud
        Files.walkFileTree(p, EnumSet.allOf(FileVisitOption.class), 2, v);
    }
}
[/java]
    

You can grab the latest version of Java 7, aka Dolphin, from here, and from the same page you can grab the latest JavaDoc, which is generated from the same source code that is used to produce the binary bits. Other entries of this series are available under the NIO.2 tag of my weblog.