Getting started with CouchDB and MongoDB, Part II: Beginning with Security

In the first part of this series I discussed how we can perform CRUD operations in MongoDB and CouchDB. In this part we will look into the security features of MongoDB and CouchDB. In the next two installments of this series I will go into how to apply the configurations described here and how to write code that works with them.

Basic security requirements

There are some basic features and functionalities, in addition to tons of other more advanced fine-grained features, that different applications require and that one can expect a semi-structured data storage server to provide. If they are not provided, other measures should be applied to limit the risks of the missing features. The basic features are listed below:

  • Authentication: Ensuring that when a token is presented to our system, the system can verify that the token is valid; for example, a username can be verified using the password that accompanies it.
  • Access control: Limiting access to a specific resource, or set of resources, to an authenticated client or a group of authenticated clients sharing common characteristics, like an assigned role name.
  • Transport safety: Ensuring that data is transferred over the network safely, without the possibility of tampering or reading, depending on the required level of confidentiality.
  • On-disk/In-memory data encryption: Keeping the data encrypted on disk to prevent raw access to the data without the decryption key/token. The same applies to systems that keep a lot of data in memory: the in-memory data should be protected from memory dumps/views.

General Security precautions

  • Do not run the server on the default network interface it is configured with, usually 0.0.0.0, which means all available interfaces. Imagine you have a LAN and a WAN interface and you want your NoSQL storage to listen only on the LAN interface, while by default it listens on all interfaces (including your outbound WAN). See the example after this list for how to bind MongoDB and CouchDB to a specific interface.
  • Do not use the default port numbers; change the port numbers that your backend application server, database, NoSQL storage, or any other network server listens on. No need to wait for an intrusion on the default port number.
  • Do not run your network server, e.g. your NoSQL server, as root or any other privileged user. Create a user which is relaxed enough in disk/network access to let the network server perform its task, and no more relaxed than that!
  • Do not let the above user have access to any portion of the file system other than the one it requires.
  • Enable disk quota!
  • If on-disk/in-memory encryption is supported, enable it depending on the sensitivity of the data.
  • Enable transport security. One way or another, transport security is required to prevent others from peeping at or tampering with the data, requests and responses!
  • Make sure you enable iptables and limit the above user to have access only to the ports/interfaces it is meant to have access to, nothing more.
  • Disable any sample welcome page, any default “I am UP” message, any default “Welcome page” or anything else that your network server may serve out of the box. This will prevent people from peeping in to see what version of which network server you are running.
  • Disable any extra interface/communication channel that your network server has and you are not using. E.g. in an application server we may just not need the JMX interface, and thus we disable it rather than keeping an extra door into the application server; although the door is locked but…
  • Do not use anything default: no default port, no default username, no default password, no default welcome message, no default… Nothing default!
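
As a concrete example of the first two points, both servers let you pick the interface and port they bind to; the addresses and port numbers below are illustrative. For MongoDB you can pass them on the command line, and for CouchDB you can set them in local.ini:

./mongod --bind_ip 192.168.1.10 --port 37017

[httpd]
bind_address = 192.168.1.10
port = 15984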

MongoDB and Security

Below is a description of how MongoDB covers the basic security requirements.

  • Authentication: Supports two modes of access. Either no authentication is enabled and anonymous users can read/write, or authentication is enabled (per database) and users are authenticated and then checked for their authorization before accessing the database. We will go into the details of enabling authentication in the next installments of this blog series; a quick preview follows this list.
  • Authorization: Supports role-based access control. There are several roles which can be assigned to users; each role adds some more privileges to the user (remember, unauthenticated users cannot interact with the database when authentication is enabled). Details of the roles are in the MongoDB access control documentation.
  • Transport security: In the community edition there is no built-in transport security out of the box; you can use MongoDB Enterprise, or you need to build it from source to get SSL included in your non-enterprise copy. The procedure is described at Configure SSL.
  • On-disk/In-memory encryption: I am not aware of any built-in support for either of these. One possibility is to use OS-level disk encryption and/or application-level encryption for the sensitive fields in the document, which imposes limitations on indexing and similarity searches.
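
As a quick preview of the authentication point above (the details will come in the next installments), you start the server with authentication enabled and then create a user from the mongo shell; the username, password and role below are illustrative:

./mongod --auth

> use admin
> db.addUser({user: "admin", pwd: "secret", roles: ["userAdminAnyDatabase"]})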

CouchDB and Security

Below is a description of how Apache CouchDB covers the basic security requirements.

  • Authentication: By default any connection to CouchDB has access to perform CRUD on the server (creating/reading/deleting databases) and also CRUD operations on documents stored in the databases. When authentication is enabled, the user’s credentials are checked before it can perform CRUD on the server or on databases, or read from a database; we will go into how each one of these tasks can be performed in the next blog entry. At the token level, CouchDB supports OAuth, cookie-based authentication and HTTP basic authentication; a quick preview of enabling an admin user follows this list.
  • Authorization: There are 3 roles in CouchDB: the server admin role, which can do anything and everything possible in the server; the database admin role on a database; and the read-only role on a database.
  • Transport security: Apache CouchDB has built-in support for transport security using SSL, and thus all communication that requires transport security (encryption, integrity) can be made over HTTPS.
  • On-disk/In-memory encryption: I am not aware of any built-in support for either of these. One possibility is to use OS-level disk encryption and/or application-level encryption for the sensitive fields in the document. Application-level encryption imposes limitations on indexing and similarity searches.
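
As the promised preview, enabling a server admin in CouchDB is as simple as adding a line to the [admins] section of local.ini; the username and password below are of course illustrative:

[admins]
admin = mySecretPassword

After a restart, CouchDB replaces the plain-text password in this file with a salted hash of it.
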
This blog entry was written based on CouchDB 1.3.1 and MongoDB 2.4.5, and thus the descriptions are valid for these versions and may differ in other versions. The sample codes should work with the mentioned versions and any newer release.

Getting started with CouchDB and MongoDB, Part I: CRUD operations

In this series of blog entries I am going to cover CouchDB and MongoDB as two popular document databases. In each one of the entries I will show how you can do similar tasks with each of these two, so that you get some understanding of the differences and similarities between them. I am not going to do any comparison here, but reading the series you can draw your own conclusion on which one seems more suitable for your case, judging by how different tasks can be accomplished using these two.

Some technicalities:

  • All of the source code for this series is available at https://github.com/kalali/nosql, which you can clone to get and run the samples.
  • All sample codes assume default port numbers unless I mention otherwise in the blog entry itself.
  • All the blog entries use CouchDB 1.3.1 and MongoDB 2.4.5, and thus I expect them to work fine with these versions and any newer ones.

A little about these two databases:

  • MongoDB and CouchDB: Both of them are document databases which let you store freeform data in the storage in the form of a document with key/value fields of data. Just look at JSON to see what a document looks like.
  • Apache CouchDB: Provides built-in support for reading/writing JSON documents over a REST interface (a tiny taste of it follows this list). The result of a query is JSON and the data you store is in the form of JSON, all through the HTTP interface. There are tens of access APIs implemented around the REST interface in different languages, each with its own unique characteristics, which one can use to avoid using the REST interface directly. A list of these implementations is available at: Related Projects. Apache CouchDB has more emphasis on availability rather than consistency if you look at it from the CAP theorem perspective. CouchDB is cross-platform and developed in Erlang.
  • MongoDB: MongoDB does not provide a built-in REST interface to interact with the document store, and it does not provide direct support for storing and retrieving JSON documents. However, it provides a fast native protocol which can be accessed using the MongoDB drivers and client libraries, which are available for almost any language you can think of; there are also some bridges which one can use to create a REST interface on top of the native interface. MongoDB has more emphasis on consistency rather than availability if you look at it from the CAP theorem perspective. MongoDB is cross-platform and developed in C++.
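
To get a first taste of the CouchDB REST interface mentioned above, you can point any HTTP client at the server root; with the default port it looks like this (the version field reflects your installation):

curl http://127.0.0.1:5984/
{"couchdb":"Welcome","version":"1.3.1"}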

Downloading and starting:

  • MongoDB: You can download MongoDB from the download page, and after extracting it you can start the document store by navigating to the bin directory and running ./mongod (the ./mongo binary is the interactive shell). The default administration port is 28017 and the access port is 27017 by default.
  • Apache CouchDB: You can download Apache CouchDB from the download page, or if you are running a Linux distribution you can get it from the package repositories, or you can just build it from source. After installation, run sudo couchdb to start the server.

The basic scenario for this blog entry is to create a database, add a document, find that document, update the document and read it back from the store, delete the document, and finally delete the database to make the sample code’s execution repeatable.

The CouchDB sample code for the sample scenario:

import javax.json.JsonObject;
import javax.json.JsonReader;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.Form;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import java.io.StringReader;
import java.util.UUID;

/**
 * @author Masoud Kalali
 */

public class App {
    private static final String DB_BASE_URL = "http://127.0.0.1:5984/";
    private static final String STORE_NAME = "book_ds";
    private static Client c = ClientBuilder.newClient();

    public static void main(String[] args) {

        createDocumentStore();

        addingBookToDocumentStore();

        JsonObject secBook = readBook();

        String revision = secBook.getStringValue("_rev");
        updateBook(revision);

        JsonObject secBookAfterUpdate = readBook();
        deleteBookDocument(secBookAfterUpdate);

        deleteDocumentStore();
    }

    private static void createDocumentStore() {
        System.out.println("Creating Document Store...");
        javax.ws.rs.core.Response r = c.target(DB_BASE_URL).path(STORE_NAME).request().accept(MediaType.APPLICATION_JSON_TYPE).put(Entity.form(new Form()));
        if (!r.getStatusInfo().getFamily().equals(Response.Status.Family.SUCCESSFUL)) {
            throw new RuntimeException("Failed to create document storage: " + r.getStatusInfo().getReasonPhrase());
        }
    }

    private static void addingBookToDocumentStore() {
        System.out.println("Library document store created. Adding a book to the library...");
        javax.ws.rs.core.Response r = c.target(DB_BASE_URL).path(STORE_NAME).path("100").request(MediaType.APPLICATION_JSON_TYPE).put(Entity.text("{\n" +
                "   \"name\": \"GlassFish Security\",\n" +
                "   \"author\": \"Masoud Kalali\"\n" +
                "}"));

        if (!r.getStatusInfo().getFamily().equals(Response.Status.Family.SUCCESSFUL)) {
            throw new RuntimeException("Failed to create book document in the store: " + r.getStatusInfo().getReasonPhrase());
        }
    }

    private static JsonObject readBook() {

        System.out.println("Reading the book from the library...");
        javax.ws.rs.core.Response r = c.target(DB_BASE_URL).path(STORE_NAME).path("100").request(MediaType.APPLICATION_JSON_TYPE).get();
        if (!r.getStatusInfo().getFamily().equals(Response.Status.Family.SUCCESSFUL)) {
            throw new RuntimeException("Failed to read the book document from the data store: " + r.getStatusInfo().getReasonPhrase());

        }
        String ec = r.readEntity(String.class);
        System.out.println(ec);
        JsonReader reader = new JsonReader(new StringReader(ec));
        return reader.readObject();
    }

    private static void updateBook(String revision) {
        System.out.println("Updating the GlassFish Security book...");
        javax.ws.rs.core.Response r = c.target(DB_BASE_URL).path(STORE_NAME).path("100").request(MediaType.APPLICATION_JSON_TYPE).put(Entity.text(

                "{\"_rev\":\"" + revision + "\",\"name\":\"GlassFish Security And Java EE Security\",\"author\":\"Masoud Kalali\"}"
        ));

        if (!r.getStatusInfo().getFamily().equals(Response.Status.Family.SUCCESSFUL)) {
            throw new RuntimeException("Failed to update book document in the data store: " + r.getStatusInfo().getReasonPhrase());
        }
    }

    private static void deleteBookDocument(JsonObject secBook) {
        System.out.println("Deleting the the GlassFish Security book's document...");
        javax.ws.rs.core.Response r = c.target(DB_BASE_URL).path(STORE_NAME).path("100").queryParam("rev", secBook.getStringValue("_rev")).request().buildDelete().invoke();
        if (!r.getStatusInfo().getFamily().equals(Response.Status.Family.SUCCESSFUL)) {
            throw new RuntimeException("Failed to delete book document in the data store: " + r.getStatusInfo().getReasonPhrase());
        }
    }

    private static void deleteDocumentStore() {
        System.out.println("Deleting the document store...");
        javax.ws.rs.core.Response r = c.target(DB_BASE_URL).path(STORE_NAME).request().accept(MediaType.APPLICATION_JSON_TYPE).delete();
        if (!r.getStatusInfo().getFamily().equals(Response.Status.Family.SUCCESSFUL)) {
            throw new RuntimeException("Failed to delete document storage: " + r.getStatusInfo().getReasonPhrase());
        }
    }
}

I used JAX-RS and some JSON-P in the code to add some salt to it, rather than using a plain HTTP connection and plain-text content. That aside, the most important thing that you may want to remember is that we are using HTTP verbs for the different actions: PUT to create or update (POST can also be used to create a document with a server-generated id), DELETE to delete, and GET to query and read. The other important factor that you may have noticed is the _rev, or revision, attribute of the document. You see that in order to update the document I specified the revision string, _rev, in the document, and to delete the document I used the _rev id as well. This is required because at each point in time there might be multiple revisions of the document living in the database, and an update or delete must name the revision it is based on: if someone else updated the document while you were preparing your change, your _rev is stale and your operation is rejected with a conflict instead of silently overwriting the newer revision.
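
To see the conflict behavior in action, try updating the document with a stale revision using any HTTP client; the revision string below is the pre-update one from the sample output further down, and CouchDB rejects the write:

curl -X PUT http://127.0.0.1:5984/book_ds/100 -d '{"_rev":"1-99fb78c8607deb8ff966e2e2b3f3f7b3","name":"GlassFish Security","author":"Masoud Kalali"}'
{"error":"conflict","reason":"Document update conflict."}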

The POM file for the CouchDB sample is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>me.kalali.couchdb</groupId>
    <artifactId>couchdb_ch01</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>couchdb_ch01</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.glassfish.jersey.media</groupId>
            <artifactId>jersey-media-moxy</artifactId>
            <version>2.2</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.core</groupId>
            <artifactId>jersey-client</artifactId>
            <version>2.2</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.json</artifactId>
            <version>1.0-b02</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.2.1</version>
                <executions>
                    <execution>
                        <id>run-main</id>
                        <phase>package</phase>
                        <goals>
                            <goal>java</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <mainClass>me.kalali.couchdb.ch01.App</mainClass>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Now, doing a clean install on the CouchDB sample project, we get:

Creating Document Store...
Library document store created. Adding a book to the library...
Reading the book back from the library...
{"_id":"100","_rev":"1-99fb78c8607deb8ff966e2e2b3f3f7b3","name":"GlassFish Security","author":"Masoud Kalali"}

Updating the GlassFish Security book...
Book details after update:
{"_id":"100","_rev":"2-27c86bc501856c8bbe301b5a27f8da02","name":"GlassFish Security And Java EE Security","author":"Masoud Kalali"}
Deleting the GlassFish Security book's document...
Deleting the document store...

If you pay attention you will see that the _rev attribute has changed after updating the document. That is because by default a read operation reads the latest revision of the document; in the second read, after we updated the document, the revision has changed and the document that is read is the latest revision.
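
Older revisions remain readable, as long as the database has not been compacted, by asking for them explicitly with the rev query parameter; the revision string below is the first one from the sample output above:

curl http://127.0.0.1:5984/book_ds/100?rev=1-99fb78c8607deb8ff966e2e2b3f3f7b3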

The MongoDB code for the sample scenario:

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;
import com.mongodb.WriteResult;

import java.net.UnknownHostException;

/**
 * @author Masoud Kalali
 */
public class App {
    private static final String STORE_NAME = "book_ds";
    private static MongoClient mongoClient;

    public static void main(String[] args) throws UnknownHostException {

        mongoClient = new MongoClient("127.0.0.1", 27017);

        DBCollection collection = createDocumentStore();

        addBookToCollection(collection);

        DBObject secBook = readBook(collection);
        System.out.println("Book details read from document store: \n" + secBook.toString());

        updateBook(collection);

        DBObject secBookAfterUpdate = readBook(collection);
        System.out.println("Book details after update: \n" + secBookAfterUpdate.toString());

        deleteBookDocument(collection);

        deleteDocumentStore();
    }

    private static DBCollection createDocumentStore() {
        System.out.println("Creating Document Store...");
        DB db = mongoClient.getDB(STORE_NAME);
        DBCollection collection = db.getCollection("books");
        return collection;
    }

    private static void addBookToCollection(DBCollection collection) {
        System.out.println("Library document store created. Adding a book to the library...");
        DBObject object = new BasicDBObject();
        object.put("name", "GlassFish Security");
        object.put("_id", "100");
        object.put("author", "Masoud Kalali");
        collection.insert(object);
    }

    private static DBObject readBook(DBCollection collection) {

        DBObject secBook = new BasicDBObject();
        secBook.put("_id", "100");
        return collection.findOne(secBook);
    }

    private static void updateBook(DBCollection collection) {
        System.out.println("Updating the GlassFish Security book...");
        DBObject matchings = collection.findOne(new BasicDBObject("_id", "100"));
        DBObject updateStmt = new BasicDBObject();
        updateStmt.put("name", "GlassFish Security And Java EE Security");
        updateStmt.put("_id", "100");
        updateStmt.put("author", "Masoud Kalali");
        WriteResult result = collection.update(matchings, updateStmt);
        System.out.println("Updated documents count: " + result.getN());
    }

    private static void deleteBookDocument(DBCollection collection) {
        DBObject object = new BasicDBObject();
        object.put("_id", "100");
        WriteResult result = collection.remove(object);
        System.out.println("Deleted documents count: " + result.getN());
    }

    private static void deleteDocumentStore() {
        System.out.println("Deleting the document store...");
        mongoClient.dropDatabase(STORE_NAME);
    }
}

As you can see, in the code we are using the MongoDB API rather than direct HTTP communication, because MongoDB does not provide a built-in REST interface but rather has its own native protocol, as explained above. The other important part, in comparison with CouchDB, is the absence of a revision, _rev, or an attribute with a similar semantic role. That is because MongoDB does not keep multiple revisions of the document; look at the CAP theorem for more information on the consistency and availability attributes of these two databases.
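
A side effect of having no revisions is that an update simply overwrites the stored document, as the update method in the sample does with a full replacement document. If you only want to change a single field instead of replacing the whole document, the same driver also accepts modifier documents such as $set; a minimal sketch against the same collection:

// update only the name field of the book with _id 100, leaving the other fields untouched
DBObject query = new BasicDBObject("_id", "100");
DBObject change = new BasicDBObject("$set", new BasicDBObject("name", "GlassFish Security And Java EE Security"));
collection.update(query, change);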

Now, the POM file for the MongoDB sample is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>me.kalali.mongodb</groupId>
    <artifactId>mongodb_ch01</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>mongodb_ch01</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.mongodb</groupId>
            <artifactId>mongo-java-driver</artifactId>
            <version>2.10.1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.2.1</version>
                <executions>
                    <execution>
                        <id>run-main</id>
                        <phase>package</phase>
                        <goals>
                            <goal>java</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <mainClass>me.kalali.mongodb.ch01.App</mainClass>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

The output for doing a clean install on the MongoDB sample will be as shown below:

Creating Document Store...
Library document store created. Adding a book to the library...
Book details read from document store:
{ "_id" : "100" , "name" : "GlassFish Security" , "author" : "Masoud Kalali"}
Updating the GlassFish Security book...
Updated documents count: 1
Book details after update:
{ "_id" : "100" , "name" : "GlassFish Security And Java EE Security" , "author" : "Masoud Kalali"}
Deleting the GlassFish Security book's document...
Deleting the document store...

The sample code for this blog entry is available at GitHub: https://github.com/kalali/nosql

This blog entry was written based on CouchDB 1.3.1 and MongoDB 2.4.5, and thus the descriptions are valid for these versions and may differ in other versions. The sample codes should work with the mentioned versions and any newer release.


Using GlassFish domain templates to easily create several customized domains

It might have happened to you to require some customization of GlassFish's behavior after you create a domain, in order to make the domain fit the basic requirements that you have in your organization or for your development purposes. The files that we usually manipulate to customize GlassFish include logging.properties, keystore.jks, cacerts.jks, default-web.xml, server.policy and domain.xml. These files can be customized through different asadmin commands, or JDK commands like keytool and policytool, or manually using a text editor, after you have created the domain, in the config directory of the domain itself. But repeating the steps for multiple domains is a laborious task, which can be avoided by changing the template files from which GlassFish domains are created. The templates are located inside the GlassFish installation directory, and we can simply open them and edit the properties to make them fit our needs better.

The benefit of modifying the templates, rather than copy-pasting the config directory of one domain to another, is that domain-specific settings like port numbers have placeholders in the template domain.xml which are filled in by the asadmin command. An example of a placeholder is %%%JMS_PROVIDER_PORT%%%, which will be replaced with the JMS provider port by the asadmin command.
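
To give an idea of what such a placeholder looks like in context, here is an illustrative fragment in the shape of the domain.xml template; only the %%%JMS_PROVIDER_PORT%%% token itself is taken from the real templates, the surrounding element is simplified:

[xml]
<jms-host name="default_JMS_host" host="localhost" port="%%%JMS_PROVIDER_PORT%%%"/>
[/xml]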

A walkthrough for the fork/join framework introduced in Java SE 7

Java SE 7 brought some neat features to the table for Java developers; one of these features is the fork/join framework, basically a new parallel programming framework we can use to easily implement our divide-and-conquer solutions. The point is that, to use the fork/join framework effectively, a solution to a problem should be devised with the following characteristics:

  • The problem domain, whether it is a file, a list, etc. to be processed, or a computation, should be dividable into smaller subtasks.
  • Processing a chunk should be possible without requiring the results of the other chunks.

To summarize, solving or processing the problem domain should require no self-feedback for the framework to be applicable. For example, if you want to process a list and processing each element in the list requires the result of processing the previous element, then it is impossible to use any parallel computing for that job. If you want to apply some FFT over a sound stream where processing each pulse requires feedback from the previous pulses, it is not possible to speed up the processing using the fork/join framework, and so on.

Well, before we start learning the fork/join framework, we had better know what it is and what it is not. What the fork/join framework is:

  • A parallel programming framework for Java
  • Part of Java SE 7
  • Suitable for implementing parallel processing solutions, mostly data-intensive ones, with few or no shared resources between the workers that process the data chunks.
  • Suitable when no synchronization is required between the workers

What fork/join framework is not:

  • It is not magic that makes your code run fast on machines with multiple processors; you need to think about and implement your solutions in a parallel manner.
  • It is not hard and obscure like other frameworks, MPI for example. Using the framework is way easier than anything I used before.

If you want to learn the mechanics behind the fork/join framework you can read the original article written by Doug Lea, which explains the motivation and the design. The article is available at http://gee.cs.oswego.edu/dl/papers/fj.pdf. If you want to see how we can use the framework, then continue reading this article.

First let’s see what the important classes are that one needs to know in order to implement a divide-and-conquer solution using the fork/join framework, and then we will start using those classes.

  • The ForkJoinPool: This is the workers pool where you can post your ForkJoinTask to be executed. The default parallelism level is the number of processors available to the runtime.
  • The RecursiveTask<V>: A task, subclass of ForkJoinTask, which can return a value of type V; for example, processing a list of DTOs and returning the result of the processing.
  • The RecursiveAction: Another subclass of ForkJoinTask, without any return value; for example, processing an array in place (see the short sketch right after this list).
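
For the RecursiveAction flavor, a minimal sketch that processes an array in place could look like this (the threshold and the doubling logic are just for illustration):

[java]
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class DoubleInPlace extends RecursiveAction {

    static final int THRESHOLD = 1000;
    private final int[] data;
    private final int begin, end;

    DoubleInPlace(int[] data, int begin, int end) {
        this.data = data;
        this.begin = begin;
        this.end = end;
    }

    @Override
    protected void compute() {
        if (end - begin <= THRESHOLD) {
            // small enough: process directly, there is no value to return
            for (int i = begin; i < end; i++) {
                data[i] *= 2;
            }
        } else {
            // split in the middle and let the pool run both halves
            int middle = begin + (end - begin) / 2;
            invokeAll(new DoubleInPlace(data, begin, middle),
                      new DoubleInPlace(data, middle, end));
        }
    }

    public static void main(String[] args) {
        int[] numbers = new int[100000];
        new ForkJoinPool().invoke(new DoubleInPlace(numbers, 0, numbers.length));
    }
}
[/java]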

I looked at this new API mainly for data pipelining, in which I needed to process a pretty huge list of objects and turn it into another format to keep the processing result of one library consumable by the next one in the data flow, and I am happy with the result: pretty easy and straightforward.

Following is a small sample showing how to process a list of Row objects and convert them to a list of Entity objects. In my case it was something similar, processing Row objects and turning them into OData OEntity objects.

[java]
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/**
 * @author Masoud Kalali <mkalali>
 */
class RowConverter extends RecursiveTask<List<Entity>> {

    // chunks of 5000 elements or fewer are processed in the current thread
    static final int SINGLE_THREAD_TOP = 5000;
    int begin;
    int end;
    List<Row> rows;

    public RowConverter(int begin, int end, List<Row> rows) {
        this.begin = begin;
        this.end = end;
        this.rows = rows;
    }

    @Override
    protected List<Entity> compute() {

        if (end - begin <= SINGLE_THREAD_TOP) {
            // actual processing happens here
            List<Entity> preparedEntities = new ArrayList<Entity>(end - begin);
            System.out.println(" begin: " + begin + " end: " + end);
            for (int i = begin; i < end; ++i) {
                preparedEntities.add(convertRow(rows.get(i)));
            }
            return preparedEntities;
        } else {
            // here we divide the work and combine the results: split the range
            // in the middle, fork the left half as a new subtask and process
            // the right half in the current thread.
            // One could instead derive the threshold from the list size and the
            // number of processors available, see
            // http://download.oracle.com/javase/7/docs/api/java/lang/Runtime.html#availableProcessors()
            // Decrease SINGLE_THREAD_TOP and examine the changes.
            int middle = begin + (end - begin) / 2;
            RowConverter curLeft = new RowConverter(begin, middle, rows);
            RowConverter curRight = new RowConverter(middle, end, rows);
            curLeft.fork();
            List<Entity> rightResult = curRight.compute();
            List<Entity> leftResult = curLeft.join();
            leftResult.addAll(rightResult);
            return leftResult;
        }
    }

    // dummy converter method converting one DTO to another
    private Entity convertRow(Row row) {
        return new Entity(row.getId());
    }
}

// the driver class which owns the pool
public class Fjf {

    public static void main(String[] args) {

        List<Row> rawData = initDummyList(10000);
        // the default parallelism level is the number of available processors
        ForkJoinPool pool = new ForkJoinPool();
        System.out.println("number of worker threads: " + pool.getParallelism());

        List<Entity> res = pool.invoke(new RowConverter(0, rawData.size(), rawData));

        // add a breakpoint here and examine the pool object.
        // check how the stealCount, which shows the number of subtasks taken on
        // by available workers, changes when you use a smaller threshold and
        // thus produce more tasks
        System.out.println("processed list: " + res.size());
    }

    /**
     * creates a dummy list of rows
     *
     * @param size number of rows in the list
     * @return the list of @see Row objects
     */
    private static List<Row> initDummyList(int size) {
        List<Row> rows = new ArrayList<Row>(size);
        for (int i = 0; i < size; i++) {
            rows.add(new Row(i));
        }
        return rows;
    }
}

// dummy classes which should be converted from one form to another
class Row {

    int id;

    public Row(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }
}

class Entity {

    int id;

    public Entity(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }
}
[/java]

Just copy and paste the code into your IDE and try running and examining it to get a deeper understanding of how the framework can be used. Post any comments and questions that you may have here and I will try to help you out with them.

How REST interface covers for the absence of JMX/AMX administration and management interface in GlassFish 3.1

For some time I have wanted to write this entry and explain what happened to the GlassFish JMX/AMX management and administration interface, but being busy with other stuff prevented me from doing so. This article can be considered an upgrade to my other article about the GlassFish 3.0 JMX administration interface, which I wrote a while ago. Long story short, in GlassFish 3.1 AMX/JMX is no longer available, and instead we can use the REST interface to change the server settings and perform all the administration/management and monitoring activities we need. There are a fair number of articles and blog entries all over the web about the RESTful interface, which I have included at the end of this blog. First of all, the REST interface is available through the administration console application, meaning that we can access the interface using a URL similar to: http://localhost:4848/management/domain/ The administration console and the REST interface run on a separate virtual server, and therefore a separate HTTP listener and, if required, transport configuration. What I will explain here are the following items:

  • How to use Apache HttpClient to perform administration tasks using GlassFish REST interface
  • How to find the request parameters for different commands.
  • GlassFish administration, authentication and transport security

How to use Apache HttpClient to interact with GlassFish administration and management application

Now back to the RESTful interface: this is an HTTP-based interaction channel with the GlassFish administration infrastructure which basically allows us to do almost anything possible with asadmin, over HTTP, in a RESTful manner. We can use basically any programming language capable of writing to a socket to interact with the RESTful interface. Here we will use Apache HttpClient to take care of sending the commands to the GlassFish RESTful console. When using GlassFish REST management we can use the POST, GET and DELETE methods to perform the following tasks:

  • POST: create and partially update a resource
  • GET: get information like details of a connection pool
  • DELETE: to delete a resource

The following sample code shows how to use the HttpClient API to perform some basic operations, including updating a resource, getting a list of resources, creating a resource and finally deleting it.

[java]
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URISyntaxException;
import java.util.logging.Logger;

import org.apache.http.HttpEntity;
import org.apache.http.HttpException;
import org.apache.http.HttpResponse;
import org.apache.http.auth.AuthenticationException;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpDelete;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.message.BasicHeader;
import org.apache.http.protocol.HTTP;

/**
 * @author Masoud Kalali
 */
public class AdminGlassFish {

    // change the ports to your own setting
    private static final String ADMINISTRATION_URL = "http://localhost:4848/management";
    private static final String MONITORING_URL = "http://localhost:4848/monitoring";
    private static final String CONTENT_TYPE_JSON = "application/json";
    private static final String CONTENT_TYPE_XML = "application/xml";
    private static final String ACCEPT_ALL = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
    private static final Logger LOG = Logger.getLogger(AdminGlassFish.class.getName());

    public static void main(String args[]) throws IOException, HttpException, URISyntaxException {

        // just changing the indent level of the JSON and XML output to make them readable, for humans…
        String prettyFormatRestInterfaceOutput = "{\"indentLevel\":2}";
        String response = postInformation("/domain/configs/config/server-config/_set-rest-admin-config", prettyFormatRestInterfaceOutput);
        LOG.info(response);

        // getting the list of all JDBC resources
        String jdbcResources = getInformation("/domain/resources/list-jdbc-resources");
        LOG.info(jdbcResources);

        // creating a JDBC resource on top of the default pool
        String createJDBCResource = "{\"id\":\"jdbc/Made-By-Rest\",\"poolName\":\"DerbyPool\"}";
        String resourceCreationResponse = postInformation("/domain/resources/jdbc-resource", createJDBCResource);
        LOG.info(resourceCreationResponse);

        // deleting a JDBC resource
        String deletionReponse = deleteResource("/domain/resources/jdbc-resource/jdbc%2FMade-By-Rest");
        LOG.info(deletionReponse);
    }

    // using HTTP GET
    public static String getInformation(String resourcePath) throws IOException, AuthenticationException {
        DefaultHttpClient httpClient = new DefaultHttpClient();
        HttpGet httpG = new HttpGet(ADMINISTRATION_URL + resourcePath);
        httpG.setHeader("Accept", CONTENT_TYPE_XML);
        HttpResponse response = httpClient.execute(httpG);
        HttpEntity entity = response.getEntity();
        InputStream instream = entity.getContent();
        return isToString(instream);
    }

    // using HTTP POST for creating and partially updating resources
    public static String postInformation(String resourcePath, String content) throws IOException {
        HttpClient httpClient = new DefaultHttpClient();
        HttpPost httpPost = new HttpPost(ADMINISTRATION_URL + resourcePath);
        StringEntity entity = new StringEntity(content);

        // setting the content type
        entity.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, CONTENT_TYPE_JSON));
        httpPost.addHeader("Accept", ACCEPT_ALL);
        httpPost.setEntity(entity);
        HttpResponse response = httpClient.execute(httpPost);

        return response.toString();
    }

    // using HTTP DELETE to delete a resource
    public static String deleteResource(String resourcePath) throws IOException {
        HttpClient httpClient = new DefaultHttpClient();
        HttpDelete httpDelete = new HttpDelete(ADMINISTRATION_URL + resourcePath);
        httpDelete.addHeader("Accept", ACCEPT_ALL);
        HttpResponse response = httpClient.execute(httpDelete);
        return response.toString();
    }

    // converting the GET output stream to something printable
    private static String isToString(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(new InputStreamReader(in), 1024);
        for (String line = br.readLine(); line != null; line = br.readLine()) {
            sb.append(line);
        }
        in.close();
        return sb.toString();
    }
}
[/java]

You may ask how one could know what the required attribute names are; there are several ways to find out:

  • Look at the reference documents…
  • View the source code of the HTML page representing that kind of resource; for example, for the JDBC resource it is http://localhost:4848/management/domain/resources/jdbc-resource, which if you open it in the browser you will see an HTML page, and viewing its source will give you the names of the different attributes. I think the labels of the attributes are also the same as the attribute names themselves. You can also ask the interface itself for a machine-readable description, as shown after this list.

  • Submit the above page and monitor the request using your browser's developer tools or a plugin; for example, in Chrome, for the JDBC resource it looks like the following picture.
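
For the machine-readable route, a GET with a JSON Accept header returns a description of the resource and its attributes (assuming secure admin is not enabled yet, so no credentials are needed):

curl -H "Accept: application/json" http://localhost:4848/management/domain/resources/jdbc-resource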

GlassFish administration, authentication and transport security

By default, when we install GlassFish or create a domain, the domain administration console is neither protected by authentication nor by HTTPS, so whatever we send to the application server through this channel is readable by anyone sniffing around. Therefore you may need to enable authentication using the following command:

./asadmin change-admin-password

You may ask: now that we have enabled authentication for the admin console, how can we use it in our sample code? The answer is quite simple: just set the credentials for the request object and you are done. Something like:

[java]
UsernamePasswordCredentials cred = new UsernamePasswordCredentials("admin", "admin");
httpget.addHeader(new BasicScheme().authenticate(cred, httpget));
[/java]

Make sure that you are using the correct username and password, as well as the correct request object; in this case the request object is httpget.

Now, about HTTPS: to have encryption during transport we need to enable SSL for the admin HTTP listener. The following steps show how to use the RESTful interface through a browser to enable HTTPS for the admin console. When using the browser, the GlassFish admin console shows basic HTML forms for the different resources.

  1. Open the http://localhost:4848/management/domain/configs/config/server-config/network-config/protocols/protocol/admin-listener in the browser
  2. Select true for the security-enabled option element
  3. Click Update to save the settings.
  4. Restart server using http://localhost:4848/management/domain/restart

The above steps have the same effect as

./asadmin enable-secure-admin

This will enable the SSL layer for the admin-listener, but it will use the default, self-signed certificate. Here I explained how to install a GoDaddy digital certificate into the GlassFish application server, to be sure that no one can listen in during the transport of command parameters and, on the other hand, that the certificate is valid instead of being self-signed. And here I explained how one can use EJBCA to set up and use a small intra-corporate certificate authority with GlassFish; though the manual is a little old, it will give you enough understanding to use the newer versions of EJBCA.

If you are asking how our small sample application can work with the dummy self-signed certificate of GlassFish, you need to wait till next time, when I will explain how to bypass the invalid certificate installed on our test server.

In the next part of this series I will cover more details on the monitoring part, as well as discussing how to bypass the self-signed digital certificate during development…

Updating Web application’s Spring Context from Beans definitions partially stored in the database…

As you know, Spring Security providers can get complex, as you may need several beans like implementations of UserDetailsService, SaltSource, PasswordEncoder, the JDBC setup and so on. I was working on a Spring-based application which needed to load the security configuration from the database, because the system administrator was able to select the security configuration from several pre-defined templates like LDAP, JDBC, file based, etc., change some attributes like the LDAP address or file location, etc. to fit the template to the environment, and then apply the new configuration to be used by the application.

I was to port some parts of the application to the web and a 3-tier architecture, and so I had to configure authentication for the web application from the database, using the current implementations of the beans required for the security provider configurations.

Normally it is plain and simple: load all of the context configuration files by adding them to web.xml and let the Spring filter use them to initialize the context, or extend XmlWebApplicationContext or one of its siblings and return the configuration file addresses by overriding the getConfigLocations method. This works perfectly when everything is in plain XML files and you have access to everything… It won't work when some of the context configuration files are stored in the database and the only means of accessing the database is the Spring context, which itself needs to be initialized before you can access the database through it.

What I needed to do was put together basic authentication in front of the web application while using the ProviderManager whose configuration is stored in the database. Without the ProviderManager you cannot have the security filters, and thus no security will be applied over the context.

The first part, creating the security configuration and specifying the URL patterns which need to be protected, is straightforward. The filters use the ProviderManager, which is not there yet, and thus the context initialization will fail. To solve this I used the following workaround, which might help someone else as well. In all of our templates the ProviderManager bean name was the same, so I could simply devise the following solution: create a temporary basic security provider definition file with the following beans:

  • A basic UserDetailsService bean based on InMemoryDaoImpl
  • An AuthenticationProvider on top of the above UserDetailsService
  • A ProviderManager which uses the above AuthenticationProvider.

The complete file looks like this:

[xml]
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"
       default-lazy-init="true">

    <bean id="tmpUserDetailsService"
          class="org.springframework.security.core.userdetails.memory.InMemoryDaoImpl">
        <property name="userMap">
            <value>
            </value>
        </property>
    </bean>

    <bean id="tmpAuthenticationProvider"
          class="org.springframework.security.authentication.dao.DaoAuthenticationProvider">
        <property name="userDetailsService" ref="tmpUserDetailsService"/>
    </bean>

    <bean id="authenticationManager"
          class="org.springframework.security.authentication.ProviderManager">
        <property name="providers">
            <list>
                <ref local="tmpAuthenticationProvider"/>
            </list>
        </property>
    </bean>
</beans>
[/xml]

Using this file, the Spring context will get initialized, and thus no NoSuchBeanDefinitionException will be thrown in your face, at the very least. You may say: why not load the entire security definitions and configurations after the context is initialized, so you won't need the temporary security provider? The answer is that having no security applied right after the context initialization finishes is a security risk, because in the brief moment before the context gets updated with the security definitions, people can access the entire system without any authentication or access control. Let's say that brief moment is negligible; a bigger factor here is that a possible failure in loading the definitions after the context is initialized would leave the application without any security restrictions if we did not lock it down with the temporary provider.

Now that the Spring context can get initialized, you can hook into the Spring context initialization with a listener and load the security provider from the database into the context, overriding the temporary beans with the actual ones stored in the database.

To hook into the Spring context initialization process you need to follow the steps below:

  • Implement your ApplicationListener; the following snippet shows how (imports added for completeness, and the database lookup left as a placeholder):

    [java]
    import org.springframework.beans.factory.config.AutowireCapableBeanFactory;
    import org.springframework.beans.factory.support.BeanDefinitionRegistry;
    import org.springframework.beans.factory.xml.XmlBeanDefinitionReader;
    import org.springframework.context.ApplicationEvent;
    import org.springframework.context.ApplicationListener;
    import org.springframework.context.event.ContextRefreshedEvent;
    import org.springframework.core.io.ByteArrayResource;
    import org.springframework.web.context.support.XmlWebApplicationContext;

    public class SpringContextEventListener implements ApplicationListener {

        private XmlWebApplicationContext context;

        public void onApplicationEvent(ApplicationEvent e) {
            if (e instanceof ContextRefreshedEvent) {
                context = (XmlWebApplicationContext) ((ContextRefreshedEvent) e).getApplicationContext();
                loadSecurityConfigForServer();
            }
        }

        private void loadSecurityConfigForServer() {
            AutowireCapableBeanFactory factory = context.getAutowireCapableBeanFactory();
            BeanDefinitionRegistry registry = (BeanDefinitionRegistry) factory;
            String securityConfig = loadSecurityConfigFromDatabase();
            XmlBeanDefinitionReader xmlReader = new XmlBeanDefinitionReader(registry);
            xmlReader.loadBeanDefinitions(new ByteArrayResource(securityConfig.getBytes()));
        }

        private String loadSecurityConfigFromDatabase() {
            // use the context and load the configuration from the database
            return null; // placeholder so the snippet compiles
        }
    }
    [/java]

  • Now that you have a listener which listens for ApplicationContext events and loads your security configuration, you can hook this listener into your context, because on its own it won't do anything 🙂
  • To add the listener to your application context, just add something similar to the following snippet to one of the context configuration files and you will be done.

    [xml]
    <bean id="applicationListener" class="your.package.SpringContextEventListener"/>
    [/xml]

    This is all you need to do in order to get the context updated after the web application starts, without imposing any security hole on the application for lack of security definitions right after application startup.