Masoud Kalali's Blog

My thoughts on software engineering and beyond…


My last blog in 2011: My 2012 wishes and predictions…

I almost stopped blogging during 2011 because of lots of complications I was dealing with, but this entry is something I couldn't just pass over. Hopefully I will resume blogging in 2012 as actively as I did during late 2009 and early 2010.

My predictions for 2012 in the technology realm, especially where Java is concerned, are as follows:

  • Oracle will push Java forward like never before.
  • The Java ecosystem will thrive, with JavaFX getting open sourced and new big names joining the JCP.
  • We will see the best Java release for Mac OS yet, Java SE 7.
  • IBM will keep pushing the idea of how cool the Rational set of IDEs is and how good WebSphere is, and people will believe it until they are caught with no way to return.
  • RIM will probably stop development of its own operating system and instead develop a powerpack for Android…
  • Google Chrome will eat other browsers' market share as fast as browserly possible.
  • Some of the new cool boys in the JVM town that claim to be the next Java will vanish, or start vanishing, without a trace :-D
  • GlassFish will gain more market share and more people will benefit from its modular and extensible architecture.
  • Google will market a revolutionary Android tablet that will change the concept.

What I wish for during 2012

  • No more war and, instead, some peace and quiet around the globe.
  • No disasters like the ones we had in 2011 and, instead, some ground-breaking scientific discoveries in medicine, energy and space travel.
  • No economic breakdown anywhere in the world.
  • A cell phone thinner than the Motorola DROID RAZR :-D
  • Google to provide good cloud storage for end users so I can stop using a combination of Dropbox, Google and SkyDrive.

Other predictions for 2012, which I would truly like to be proven wrong about:

  • The Iranian government will not go away and will not change into a sane governing body.
  • The Pakistani army and ISI will continue supporting, training and harboring Al Qaeda and the Taliban, and will continue destabilizing Afghanistan's southern and central provinces.
  • The Iranian government will continue meddling in other countries' affairs, especially in Afghanistan and the Arab countries.
  • The Syrian dictatorship will remain intact with the support of the Iranian government, and the region will stay as unstable as it is now.

I wish everyone a happy new year with lots of joy and success.


My thoughts on JSR 351, Java Identity API

Identity is something we hear about more often these days, with Web 2.0, social services, and more and more web-based public services growing around us. The notion of identity is an integral part of a security system for distributed services. Developing an effective software system requires an effective security and access control system, which Java provides; not exactly in the shape it should have in 2011, but it does provide the bare-bones necessities to develop applications and frameworks on top of it and benefit from its presence. The Identity API is going to ease the interaction between identity providers and those who consume and trust the identities they provide, in addition to governing and managing identity attributes.

I was studying the JSR details and it seems to cover everything required for identity attribute governance, along with the required API for both ends of the usage, including the client API and the governing/producing API. Producing and consuming identity is not new: there are a fair number of public identity producers like Facebook, Twitter, etc., and also products that system integrators can use, like OpenAM as an open source option or heavyweight commercial products like Oracle Identity Management, IBM Tivoli Identity Manager, and so on.

In a very simple set of words, JSR 351, the Java Identity API, will be as successful as its adoption. With no adoption it will end up dying in some dark corner… Design a simple and elegant API, ship it with some free and easy-to-use service implementations, and it may get some momentum; otherwise it will be a goner and people will stick with what they have. I like the new features it is going to introduce in the decision-making, or authorization, part, but we should see how well it will be adopted by identity providers to develop the services that provide the interaction point between the JSR interfaces and their repositories. Pushing it as a JSR won't really do much without wide adoption in the community. Look at how many implementations of JSR 115 and JSR 196 exist that can be plugged into application servers supporting those contracts and you will get what I am referring to by community adoption.


An analysis on the monthly increase of NetBeans IDE active users count

A few weeks ago I was talking to Jiří Kovalský about NetBeans and its 1,000,000 active users, and I asked him to check whether it is possible to get the number of active users by month; today he got back to me with a nice chart showing the number of active users for each month since Jan 2004. Jiří is the NetBeans community manager and also the first person on the NetBeans team I got acquainted with, long ago during NetCat50.

Here I have attached the chart, followed by a short analysis of what it can represent.

Number of NetBeans Active users per month

Now a brief overview of the chart:

  • The NetBeans team started to count the active users in Jan 2004.
  • During the summer and around New Year's Eve there is a decline in the number of active users each year, and as the overall number of users grows, this decline can be seen more clearly.
  • The number of active users is increasing continuously.

Now I want to combine the above chart with another table, the release dates of the different NetBeans versions.

NetBeans versions release dates

  • The last bar is for June 2011; the July numbers are not calculated yet.
  • It took NetBeans two years, until Jan 2006, to reach 200,000 active users, but the growth in the number of active users was accelerating from the beginning, as the chart suggests.
  • In the next 3 years, from Jan 2006 to Jan 2009, the number of users increased by 400,000 to a total of 600,000 active users, which means the user growth accelerated quite well. This is the post-NetBeans 5 era, when each version's changelog had quite a large number of bug fixes, performance improvements and new features.
  • The biggest increase in the number of users over one year can be seen between June 2010 and June 2011, with about 200,000 new users. This is the second year of Oracle being in charge of Sun and its products.
  • It looks like after NetBeans 6.9 the number of active users has been increasing faster than before, and the reason is clearly the stability and performance improvements, in addition to tons of new features in the core and support for PHP and C++.

As a long-time user of the NetBeans IDE, I should say that NetBeans has come a long, long way to become the IDE we use in our daily jobs nowadays. The number of features introduced in the IDE and the number of bug fixes are enormous. You can find and try NetBeans 4 or 5 and compare it to NetBeans 7 to understand the huge distance between the two.

NetBeans has seen several shifts in its direction, especially during the NetBeans 6 timeframe, when more languages were being supported in the IDE and a diverse set of SOA and XML development features were being included. Then another shift happened: all those features and language supports were dropped, and the NetBeans team put more effort into the core to make it more stable and feature-rich, which, as you can see in the chart, has paid off pretty well.

The 1,000,000 active users figure is not just a number; it shows that a vast, versatile and living community is behind the NetBeans IDE as users, contributors, and the core development team. Long live the good community and the good IDE.


A walkthrough for the fork/join framework introduced in Java SE 7

Java SE 7 brought some neat features to the table for Java developers; one of these features is the fork/join framework, basically a new parallel programming framework we can use to easily implement our divide-and-conquer solutions. The point is that, to use the fork/join framework effectively, a solution to a problem should be devised with the following characteristics:

  • The problem domain, whether it is a file, a list, etc. to be processed, or a computation, should be divisible into smaller subtasks.
  • Processing a chunk should be possible without requiring the results of the other chunks.

To summarize, solving or processing the problem domain should require no self-feedback for the framework to be applicable. For example, if you want to process a list and processing each element requires the result of processing the previous element, then it is impossible to use any parallel computing for that job. If you want to apply an FFT over a sound stream where processing each pulse requires feedback from the previous pulses, it is not possible to speed up the processing using the fork/join framework, and so on.
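
As an illustration (a made-up sketch of my own, not tied to any particular library), the first loop below carries a dependency from one iteration to the next and cannot be naively forked, while the second processes each element independently and is a natural fit for the framework:

class DependencyIllustration {

    static void demo(int[] values) {
        //prefix sums: element i depends on element i - 1, so the iterations cannot run independently
        long[] prefixSums = new long[values.length];
        for (int i = 0; i < values.length; i++) {
            prefixSums[i] = values[i] + (i > 0 ? prefixSums[i - 1] : 0);
        }

        //per-element squares: every iteration is independent, so the range can be split into chunks
        long[] squares = new long[values.length];
        for (int i = 0; i < values.length; i++) {
            squares[i] = (long) values[i] * values[i];
        }
    }
}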

Well, before we start learning the fork/join framework, we had better know what it is and what it is not. What the fork/join framework is:

  • A parallel programming framework for Java
  • Part of Java SE 7
  • Suitable for implementing parallel processing solutions, mostly data-intensive ones with few or no shared resources between the workers that process the data chunks.
  • Suitable when no synchronization is required between the workers.

What fork/join framework is not:

  • It is not magic that makes your code run fast on machines with multiple processors; you need to think about and implement your solutions in a parallel manner.
  • It is not hard and obscure like other frameworks, MPI for example. Using the framework is way easier than anything I have used before.

If you want to learn the mechanics behind the fork/join framework, you can read the original article written by Doug Lea, which explains the motivation and the design. The article is available at http://gee.cs.oswego.edu/dl/papers/fj.pdf. If you want to see how we can use the framework, then read on.

First let's see which important classes one needs to know in order to implement a divide-and-conquer solution using the fork/join framework, and then we will start using those classes.

  • The ForkJoinPool: this is the worker pool where you can post your ForkJoinTask to be executed. The default parallelism level is the number of processors available to the runtime.
  • The RecursiveTask<V>: a task, a subclass of ForkJoinTask, which can return a value of type V; for example, processing a list of DTOs and returning the result of the processing.
  • The RecursiveAction: another subclass of ForkJoinTask without any return value; for example, processing an array in place (see the small sketch right after this list).
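
Since the longer sample below uses RecursiveTask, here is a minimal RecursiveAction sketch of my own (the class and threshold names are made up) that increments every element of an array in place:

import java.util.concurrent.RecursiveAction;

class IncrementAction extends RecursiveAction {

    //below this size the chunk is processed directly instead of being split further
    static final int THRESHOLD = 1000;
    final long[] data;
    final int begin;
    final int end;

    IncrementAction(long[] data, int begin, int end) {
        this.data = data;
        this.begin = begin;
        this.end = end;
    }

    @Override
    protected void compute() {
        if (end - begin <= THRESHOLD) {
            for (int i = begin; i < end; i++) {
                data[i]++;
            }
        } else {
            int mid = begin + (end - begin) / 2;
            //fork both halves as subtasks and wait for them to finish
            invokeAll(new IncrementAction(data, begin, mid),
                      new IncrementAction(data, mid, end));
        }
    }
}

//usage: new java.util.concurrent.ForkJoinPool().invoke(new IncrementAction(new long[1000000], 0, 1000000));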

I looked at this new API mainly for data pipelining, in which I need to process a pretty huge list of objects and turn it into another format to keep the processing result of one library consumable by the next one in the data flow, and I am happy with the result: pretty easy and straightforward.

The following is a small sample showing how to process a list of Row objects and convert them to a list of Entity objects. In my case it was something similar, processing Row objects and turning them into OData OEntity objects.


import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/**
 *
 * @author Masoud Kalali <mkalali>
 */
class RowConverter extends RecursiveTask<List<Entity>> {

    //if the chunk is larger than 5000 elements it is split and processed in parallel
    static final int SINGLE_THREAD_TOP = 5000;
    int begin;
    int end;
    List<Row> rows;

    public RowConverter(int begin, int end, List<Row> rows) {
        this.begin = begin;
        this.end = end;
        this.rows = rows;
    }

    @Override
    protected List<Entity> compute() {

        if (end - begin <= SINGLE_THREAD_TOP) {
            //actual processing happens here
            List<Entity> preparedEntities = new ArrayList<Entity>(end - begin);
            System.out.println("  begin: " + begin + " end: " + end);
            for (int i = begin; i < end; ++i) {
                preparedEntities.add(convertRow(rows.get(i)));
            }
            return preparedEntities;
        } else {
            //here we divide the work and combine the results
            //the range is split in the middle; one can tune SINGLE_THREAD_TOP based on the list size
            //and the number of processors available, see
            //http://download.oracle.com/javase/7/docs/api/java/lang/Runtime.html#availableProcessors()
            //decrease the threshold and examine the changes.
            int mid = begin + (end - begin) / 2;

            RowConverter left = new RowConverter(begin, mid, rows);
            RowConverter right = new RowConverter(mid, end, rows);
            //fork the left half as a subtask, compute the right half in the current thread, then join
            left.fork();
            List<Entity> rightResult = right.compute();
            List<Entity> leftResult = left.join();
            leftResult.addAll(rightResult);
            return leftResult;
        }
    }

    //dummy converter method, converting one DTO to another
    private Entity convertRow(Row row) {

        return new Entity(row.getId());
    }
}

// the driver class which owns the pool
public class Fjf {

    public static void main(String[] args) {

        List<Row> rawData = initDummyList(10000);
        ForkJoinPool pool = new ForkJoinPool();
        System.out.println("number of worker threads: " + pool.getParallelism());


        List<Entity> res = pool.invoke(new RowConverter(0, rawData.size(), rawData));

        // add a breakpoint here and examine the pool object.
        //check how the stealCount, which shows the number of subtasks taken on by available workers,
        //changes when you use a smaller threshold and thus produce more tasks
        System.out.println("processed list: " + res.size());

    }

    /**
     * creates a dummy list of rows
     * 
     * @param size number of rows in the list
     * @return the list of @see Row objects
     */
    private static List<Row> initDummyList(int size) {

        List<Row> rows = new ArrayList<Row>(size);

        for (int i = 0; i < size; i++) {
            rows.add(new Row(i));
        }
        return rows;
    }
}

//dummy classes which should be converted from one form to another
class Row {

    int id;

    public Row(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }
}

class Entity {

    int id;

    public Entity(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }
}

Just copy and paste the code into your IDE and try running and examining it to get a deeper understanding of how the framework can be used. Post any comments and questions that you may have here and I will try to help you out with them.


How REST interface covers for the absence of JMX/AMX administration and management interface in GlassFish 3.1

For some time I have wanted to write this entry and explain what happened to the GlassFish JMX/AMX management and administration interface, but being busy with other stuff prevented me from doing so. This article can be considered an update to my other article about the GlassFish 3.0 JMX administration interface, which I wrote a while ago. Long story short, in GlassFish 3.1 the AMX/JMX interface is no longer available, and instead we can use the REST interface to change server settings and perform all the administration, management and monitoring activities we need. There are a fair number of articles and blog entries all over the web about the RESTful interface, which I have included at the end of this post.

First of all, the REST interface is available through the administration console application, meaning that we can access it using a URL similar to http://localhost:4848/management/domain/. The administration console and the REST interface run on a separate virtual server, and therefore a separate HTTP listener and, if required, transport configuration. What I will explain here are the following items:

  • How to use Apache HttpClient to perform administration tasks using GlassFish REST interface
  • How to find the request parameters for different commands.
  • GlassFish administration, authentication and transport security

How to use Apache HttpClient to interact with GlassFish administration and management application

Now back to the RESTful interface: this is an HTTP-based interaction channel with the GlassFish administration infrastructure which basically allows us to do almost anything that is possible with asadmin, over HTTP in a RESTful manner. We can use basically any programming language capable of writing to a socket to interact with the RESTful interface. Here we will use Apache HttpClient to take care of sending the commands to the GlassFish RESTful console. When using GlassFish REST management we can use the POST, GET and DELETE methods to perform the following tasks:

  • POST: create and partially update a resource
  • GET: get information like details of a connection pool
  • DELETE: to delete a resource

The following sample code shows how to use Apache HttpClient to perform some basic operations, including updating a resource, getting a list of resources, creating a resource and finally deleting it.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URISyntaxException;
import java.util.logging.Logger;

import org.apache.http.HttpEntity;
import org.apache.http.HttpException;
import org.apache.http.HttpResponse;
import org.apache.http.auth.AuthenticationException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpDelete;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicHeader;
import org.apache.http.protocol.HTTP;

/**
 *
 * @author Masoud Kalali
 */
public class AdminGlassFish {

    //change the ports to your own settings
    private static final String ADMINISTRATION_URL = "http://localhost:4848/management";
    private static final String MONITORING_URL = "http://localhost:4848/monitoring";
    private static final String CONTENT_TYPE_JSON = "application/json";
    private static final String CONTENT_TYPE_XML = "application/xml";
    private static final String ACCEPT_ALL = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
    private static final Logger LOG = Logger.getLogger(AdminGlassFish.class.getName());

    public static void main(String args[]) throws IOException, HttpException, URISyntaxException {

        //just changing the indent level for the JSON and XML output to make them readable, for humans...
        String prettyFormatRestInterfaceOutput = "{\"indentLevel\":2}";
        String response = postInformation("/domain/configs/config/server-config/_set-rest-admin-config", prettyFormatRestInterfaceOutput);
        LOG.info(response);
        //getting list of all JDBC resources
        String jdbcResources = getInformation("/domain/resources/list-jdbc-resources");
        LOG.info(jdbcResources);

//        creating  a JDBC resource on top of the default pool
        String createJDBCResource = "{\"id\":\"jdbc/Made-By-Rest\",\"poolName\":\"DerbyPool\"}";
        String resourceCreationResponse = postInformation("/domain/resources/jdbc-resource", createJDBCResource);
        LOG.info(resourceCreationResponse);

//        deleting a JDBC resource
        String deletionResponse = deleteResource("/domain/resources/jdbc-resource/jdbc%2FMade-By-Rest");
        LOG.info(deletionResponse);

    }

    //using HTTP get
    public static String getInformation(String resourcePath) throws IOException, AuthenticationException {
        DefaultHttpClient httpClient = new DefaultHttpClient();
        HttpGet httpG = new HttpGet(ADMINISTRATION_URL + resourcePath);
        httpG.setHeader("Accept", CONTENT_TYPE_XML);
        HttpResponse response = httpClient.execute(httpG);
        HttpEntity entity = response.getEntity();
        InputStream instream = entity.getContent();
        return isToString(instream);
    }

    //using HTTP post for creating and partially updating resources
    public static String postInformation(String resourcePath, String content) throws IOException {
        HttpClient httpClient = new DefaultHttpClient();
        HttpPost httpPost = new HttpPost(ADMINISTRATION_URL + resourcePath);
        StringEntity entity = new StringEntity(content);

        //setting the content type
        entity.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, CONTENT_TYPE_JSON));
        httpPost.addHeader("Accept",ACCEPT_ALL);
        httpPost.setEntity(entity);
        HttpResponse response = httpClient.execute(httpPost);

        return response.toString();
    }

    //using HTTP delete to delete a resource
    public static String deleteResource(String resourcePath) throws IOException {
        HttpClient httpClient = new DefaultHttpClient();
        HttpDelete httpDelete = new HttpDelete(ADMINISTRATION_URL + resourcePath);
        httpDelete.addHeader("Accept",
                ACCEPT_ALL);
        HttpResponse response = httpClient.execute(httpDelete);
        return response.toString();

    }

//converting the get output stream to something printable
    private static String isToString(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(new InputStreamReader(in), 1024);
        for (String line = br.readLine(); line != null; line = br.readLine()) {
            sb.append(line);
        }
        in.close();
        return sb.toString();
    }
}

You may ask how one could know what the required attribute names are; there are several ways to find out:

  • Look at the reference documents…
  • View the source of the HTML page representing that kind of resource; for example, for a JDBC resource it is http://localhost:4848/management/domain/resources/jdbc-resource. If you open it in the browser you will see an HTML form, and viewing its source will give you the names of the different attributes. I think the labels of the attributes are the same as the attribute names themselves.

  • Submit the above page and monitor the request using your browser's developer tools or a plugin; in Chrome, for example, the captured request for the JDBC resource form shows the exact attribute names being sent.

GlassFish administration, authentication and transport security

By default, when we install GlassFish or create a domain, the domain administration console is protected neither by authentication nor by HTTPS, so whatever we send to the application server through this channel is readable by anyone sniffing around. Therefore you may need to enable authentication using the following command:
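
If I remember correctly, setting an admin password (and, for remote administration, enabling secure admin) is what turns authentication on; something along these lines, though please double-check against the GlassFish 3.1 documentation:

asadmin change-admin-password
asadmin enable-secure-admin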

You may ask, now that we have enabled authentication for the admin console, how can we use it from our sample code? The answer is quite simple: just set the credentials on the request object and you are done. Something like:

UsernamePasswordCredentials cred = new UsernamePasswordCredentials("admin", "admin");
httpget.addHeader(new BasicScheme().authenticate(cred, httpget));

Make sure that you are using the correct username and password, as well as the correct request object; in this case the request object is httpget.
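
To put that in context, here is how a credential-aware variant of the getInformation method from the sample above might look (my own sketch on top of HttpClient 4.x's BasicScheme; drop it into the AdminGlassFish class and adjust the hard-coded credentials). The extra imports needed are org.apache.http.auth.UsernamePasswordCredentials and org.apache.http.impl.auth.BasicScheme.

    //same as getInformation, but attaching a Basic authentication header built from the admin credentials
    public static String getInformationWithAuth(String resourcePath) throws IOException, AuthenticationException {
        DefaultHttpClient httpClient = new DefaultHttpClient();
        HttpGet httpget = new HttpGet(ADMINISTRATION_URL + resourcePath);
        httpget.setHeader("Accept", CONTENT_TYPE_XML);

        UsernamePasswordCredentials cred = new UsernamePasswordCredentials("admin", "admin");
        httpget.addHeader(new BasicScheme().authenticate(cred, httpget));

        HttpResponse response = httpClient.execute(httpget);
        return isToString(response.getEntity().getContent());
    }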

Now, about HTTPS: to have encryption during transport we need to enable SSL for the admin HTTP listener. The following steps show how to use the RESTful interface through a browser to enable HTTPS for the admin console. When used from a browser, the GlassFish admin console shows basic HTML forms for the different resources.

  1. Open the http://localhost:4848/management/domain/configs/config/server-config/network-config/protocols/protocol/admin-listener in the browser
  2. Select true for security-enabled option element
  3. Click Update to save the settings.
  4. Restart server using http://localhost:4848/management/domain/restart

The above steps have roughly the same effect as enabling secure administration from the asadmin command line.
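
For completeness, the same switch could presumably also be scripted with the postInformation and getInformation helpers from the sample above (my own sketch; the security-enabled attribute name mirrors the field shown in the browser form):

        //flip the security-enabled flag on the admin listener through the REST interface
        String enableSsl = "{\"security-enabled\":\"true\"}";
        String sslResponse = postInformation(
                "/domain/configs/config/server-config/network-config/protocols/protocol/admin-listener",
                enableSsl);
        LOG.info(sslResponse);

        //restart the domain so the listener change takes effect
        LOG.info(getInformation("/domain/restart"));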

This will enable the SSL layer for the admin-listener, but it will use the default, self-signed certificate. Here I explained how to install a GoDaddy digital certificate into the GlassFish application server, so that no one can listen in during the transport of command parameters and, on the other hand, the certificate is valid instead of being self-signed. And here I explained how one can use EJBCA to set up and use a small in-house certificate authority with GlassFish; though the manual is a little old, it will give you enough understanding to use the newer versions of EJBCA.

If you are asking how our small sample application can work with GlassFish's dummy self-signed certificate, you will need to wait until next time, when I will explain how to bypass the invalid certificate installed on our test server.

In the next part of this series I will cover more details on the monitoring side, as well as discuss how to bypass the self-signed digital certificate during development…


Updating a Web application's Spring context from bean definitions partially stored in the database…

As you know, Spring Security providers can get complex, as you may need several beans: implementations of UserDetailsService, SaltSource, PasswordEncoder, the JDBC setup and so on. I was working on a Spring-based application which needed to load its security configuration from the database, because the system administrator was able to select the security configuration from several pre-defined templates like LDAP, JDBC, file based, etc., change some attributes like the LDAP address or file location to fit the template to the environment, and then apply the new configuration to be used by the application.

I was to port some parts of the application to the web and a 3-tier architecture, and so I had to configure authentication for the web application from the database, reusing the existing implementations of the beans required for the security provider configurations.

It sounds plain and simple: load all of the context configuration files by adding them to web.xml and let the Spring filter use them to initialize the context, or extend XmlWebApplicationContext or one of its siblings and return the configuration file locations by overriding the getConfigLocations method. This works perfectly when everything is in plain XML files and you have access to everything… It won't work when some of the context configuration files are stored in the database and the only means of accessing the database is the Spring context itself, which needs to be initialized before you can access the database through it.
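
For reference, the second of those two standard options looks roughly like this (a minimal sketch assuming Spring 3.x; the file names are made up), and the custom context class would then be registered through the contextClass parameter in web.xml:

import org.springframework.web.context.support.XmlWebApplicationContext;

public class FileBasedWebApplicationContext extends XmlWebApplicationContext {

    //return the classic file-based locations; this only works when every bean definition lives in a file
    @Override
    protected String[] getConfigLocations() {
        return new String[]{
            "/WEB-INF/applicationContext.xml",
            "/WEB-INF/security-context.xml"
        };
    }
}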

What I needed to do was put basic authentication in front of the web application while using a ProviderManager whose configuration is stored in the database. Without the ProviderManager you cannot have the security filters, and thus no security will be applied over the context.

The first part, creating the security configuration and specifying the URL patterns that need to be protected, is straightforward. But the filters use the ProviderManager, which is not there yet, and thus the context initialization will fail. To solve this I used the following workaround, which might help someone else as well. In all of our templates the ProviderManager bean name was the same, so I could simply devise the following solution: create a temporary, basic security provider definition file with the following beans:

  • A basic UserDetailsService bean based on InMemoryDaoImpl
  • An AuthenticationProvider on top of the above UserDetailsService
  • A ProviderManager which uses the above AuthenticationProvider.

The complete file looks like this:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"
    default-lazy-init="true">

    <bean id="tmpUserDetailsService"
          class="org.springframework.security.core.userdetails.memory.InMemoryDaoImpl">
        <property name="userMap">
            <value>
            </value>
        </property>
    </bean>

    <bean id="tmpAuthenticationProvider"
          class="org.springframework.security.authentication.dao.DaoAuthenticationProvider">
        <property name="userDetailsService" ref="tmpUserDetailsService"/>
    </bean>

    <bean id="authenticationManager"
        class="org.springframework.security.authentication.ProviderManager">
        <property name="providers">
            <list>
                <ref local="tmpAuthenticationProvider"/>
            </list>
        </property>
    </bean>
</beans>

Using this file, the Spring context will get initialized, and thus no NoSuchBeanDefinitionException will be thrown in your face, for the least. You may ask: why not load the entire set of security definitions and configurations after the context is initialized, so the temporary security provider is not needed? The answer is that having no security applied right after context initialization is a security risk, because during the brief moment before the context gets updated with the security definitions, people can access the entire system without any authentication or access control. Let's say that brief moment is negligible; a bigger factor is that a possible failure in loading the definitions after the context is initialized would leave the application without any security restrictions if we did not lock it down with the temporary provider.

Now that the Spring context can get initialized, you can hook into the Spring context initialization with a listener and load the security provider from the database into the context, overriding the temporary beans with the actual ones stored in the database.

To hook into the Spring context initialization process you need to follow the steps below:

  • Implement your ApplicationListener; the following snippet shows how (the imports are from Spring's context, beans and core packages):

    import org.springframework.beans.factory.config.AutowireCapableBeanFactory;
    import org.springframework.beans.factory.support.BeanDefinitionRegistry;
    import org.springframework.beans.factory.xml.XmlBeanDefinitionReader;
    import org.springframework.context.ApplicationEvent;
    import org.springframework.context.ApplicationListener;
    import org.springframework.context.event.ContextRefreshedEvent;
    import org.springframework.core.io.ByteArrayResource;
    import org.springframework.web.context.support.XmlWebApplicationContext;

    public class SpringContextEventListener implements ApplicationListener {

        private XmlWebApplicationContext context;

        public void onApplicationEvent(ApplicationEvent e) {

            if (e instanceof ContextRefreshedEvent) {
                context = (XmlWebApplicationContext) ((ContextRefreshedEvent) e).getApplicationContext();
                loadSecurityConfigForServer();
            }
        }

        private void loadSecurityConfigForServer() {

            AutowireCapableBeanFactory factory = context.getAutowireCapableBeanFactory();
            BeanDefinitionRegistry registry = (BeanDefinitionRegistry) factory;
            String securityConfig = loadSecurityConfigFromDatabase();
            XmlBeanDefinitionReader xmlReader = new XmlBeanDefinitionReader(registry);
            xmlReader.loadBeanDefinitions(new ByteArrayResource(securityConfig.getBytes()));
        }

        private String loadSecurityConfigFromDatabase() {
            //use the context and load the configuration (a beans XML document) from the database
            return null; //placeholder so the snippet compiles
        }
    }
    
  • Now that you have a listener which listens for application context events and loads your security configuration, you can hook this listener into your context, because on its own it won't do anything :-)
  • To add the listener to your application context, just add something similar to the following snippet to one of the context configuration files and you are done.

    <bean id="applicationListener" class="your.package.SpringContextEventListener"/>
    

    This is all you need to do to get the context updated after the web application starts, without opening a security hole in the application due to the lack of security definitions right after startup.