My last blog in 2012: My 2013 wishes and predictions…

Last year I wrote a wish list and predictions for 2012, and this year I am going to do the same for 2013. My predictions for 2013 in the technology realm, especially around Java, are as follows:

  • Java SE 8 will be an awesome release despite the Jigsaw setback.
  • Java EE 7 will bring more ease of use and clarity into the community.
  • GlassFish 4 will be awesome and more people will benefit from its modular and extensible architecture…
  • In late 2013 NetBeans IDE 8 will rock!
  • IBM will push the idea of how cool the Rational set of IDEs is and how good WebSphere is, and people will believe it until they are caught with no way back.
  • RIM seems to be pulling it together and it is likely to keep its own operating system rather than adopting Android.
  • Google Chrome will continue eating other browsers' market share as fast as browserly possible.
  • Some of the new cool kids in JVM town that claim to be the next Java will vanish, or start vanishing, without a trace.
  • I wish for a very thin, edge-to-edge tablet and cell phone running Android so I could switch to another phone. This will be something that Google_Moto will do.
  • Maybe I will see someone with a Windows Mobile phone somewhere other than in advertisements.

What I wish for during 2013, unrelated to technology

  • No more war, and instead some peace and quiet around the globe.
  • No disasters like the ones we had in 2011, and instead some groundbreaking scientific discoveries in medicine, energy, and space travel.
  • No economic breakdown anywhere in the world.
  • To win more bets against my nemesis.

Other predictions for 2013, some of which I would truly like to be proven wrong about:

  • The Iranian government will not go away and will not change into a sane governing body.
  • The Pakistani army and ISI will continue supporting, training, and harboring Al Qaeda and the Taliban, and will continue destabilizing Afghanistan's southern and central provinces.
  • The Iranian government will continue meddling in other countries' affairs, especially in Afghanistan and the Arab countries.
  • It is highly likely that the Syrian dictatorship will lose the battle for the capital and leave it, but it will remain a player in the country and wreak havoc for the time being.

I wish everyone a happy new year with lots of joy and success.

An analysis of the monthly increase in the NetBeans IDE active user count

A few weeks ago I was talking to Jiří Kovalský about NetBeans and its 1,000,000 active users, and I asked him whether it would be possible to get the number of active users by month. Today he got back to me with a nice chart showing the number of active users for each month since January 2004. Jiří is the NetBeans community manager and also the first person on the NetBeans team I got acquainted with, long ago during NetCat50.

Here I have attached the chart, followed by a short analysis of what it represents.

Number of NetBeans Active users per month

Now a brief overview of the chart:

  • The NetBeans team started counting active users in January 2004.
  • Every year there is a decline in the number of active users during the summer and around New Year's Eve, and as the overall number of users grows, this decline becomes more visible.
  • The number of active users is increasing continuously.

Now I want to combine the above chart with a table of the release dates of the different NetBeans versions.

NetBeans versions release dates

  • The last bar in the chart is for June 2011; the July numbers have not been calculated yet.
  • It took NetBeans two years, until January 2006, to reach 200,000 active users, but as the chart suggests, the growth in the number of active users was accelerating from the beginning.
  • In the next three years, from January 2006 to January 2009, the number of users increased by 400,000 to a total of 600,000 active users, which means the user growth accelerated quite well. This is the post-NetBeans 5 era, when each version's changelog had a large number of bug fixes, performance improvements, and new features.
  • The biggest one-year increase in the number of users can be seen between June 2010 and June 2011, with about 200,000 new users. This is the second year that Oracle has been in charge of Sun and its products.
  • It looks like after NetBeans 6.9 the number of active users has been increasing faster than before, and the reason is clearly the stability and performance improvements, in addition to tons of new features in the core and support for PHP and C++.

As a long-time user of the NetBeans IDE, I should say that NetBeans has come a long, long way to become the IDE we use in our daily jobs today. The number of features introduced in the IDE and the number of bug fixes are enormous. You can find and try NetBeans 4 or 5 and compare it with NetBeans 7 to appreciate the huge distance between the two.

NetBeans has seen several shifts in its direction, especially during the NetBeans 6 era, when more languages were being supported and a diverse set of SOA and XML development features were being included in the IDE. Then another shift happened: many of those features and language supports were dropped, and the NetBeans team put more effort into making the core more stable and feature rich, which, as you can see in the chart, has paid off pretty well.

The 1,000,000 active users figure is not just a number; it shows that a vast, versatile, and living community stands behind the NetBeans IDE as users, contributors, and the core development team. Long live the good community and the good IDE.

Writing a Solaris Service Management Facility (SMF) service manifest

SMF services are basically daemons that stay in the background waiting for the requests they should serve; when a request comes in, the daemon wakes up, serves the request, and then waits for the next one.

Services are built using different software development platforms and languages, but they all share one common aspect, which we are going to discuss here: the service manifest, which describes the service to SMF and lets SMF manage and understand the service's life cycle.

To write a service manifest we should be familiar with XML and with the service manifest schema located at /usr/share/lib/xml/dtd/service_bundle.dtd.1. This file specifies which elements can be used to describe a service to SMF.

The next thing we need is a text editor, preferably an XML-aware one. GNOME gedit can do the task for us.

The service manifest file is composed of six important elements, which are listed below:

  • The service declaration: specifies the service name, type, and instantiation model.
  • Zero or more dependency declaration elements: specify the service's dependencies.
  • Lifecycle methods: specify the start, stop, and refresh methods.
  • Property groups: specify which property groups the service description carries.
  • Stability element: specifies how stable the service interface is across version changes.
  • Template element: provides more human-readable information about the service.

To describe a service, the first thing we need to do is identify the service itself. The following snippet shows how we can declare a service named jws.

<service name='network/jws' type='service' version='1'>
<single_instance/>

The first line specifies the service name, its version, and its type. The name attribute forms the FMRI of the service, which in this case will be svc:/network/jws. In the second line we are telling SMF that it should instantiate only one instance of this service, which will be svc:/network/jws:default. We can use the create_default_instance element to control the automatic creation of the default instance.
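
For example, a minimal sketch of such a declaration, asking SMF to create the default instance automatically but leave it disabled until an administrator enables it (the enabled value here is only an assumption for illustration), would sit right before the single_instance element:

<create_default_instance enabled='false'/>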

All of the elements which we are going to mention in the following sections of this article are immediate child elements of the service element, which is itself a child element of the service_bundle element.

The next important element is the dependency declaration element. We can have one or more of these elements in our service manifest.

<dependency name='net-physical' grouping='require_all' restart_on='none' type='service'>
<service_fmri value='svc:/network/physical'/>
</dependency>

Here we are saying that our service depends on the svc:/network/physical service, and that this service needs to be online before our service can start. Some of the possible values for the grouping attribute are as follows (a second dependency sketch follows the list):

  • require_all: all services marked with this grouping must be online before our service can come online.
  • require_any: any one of the services in this grouping suffices; our service can come online if at least one of them is online.
  • optional_all: the presence of the services marked with this grouping is optional; our service works with or without them.
  • exclude_all: specifies services that conflict with ours; our service cannot come online in their presence.
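
As another illustration, a manifest could declare that jws also needs name resolution before starting. The dependency name and the restart_on value below are assumptions chosen just for this sketch:

<dependency name='name-services' grouping='require_all' restart_on='refresh' type='service'>
<service_fmri value='svc:/milestone/name-services'/>
</dependency>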

The next important elements specify how SMF should start, stop, and refresh the service. For these tasks we use three exec_method elements, as follows:

<exec_method name='start' type='method' exec='/opt/jws/runner start' timeout_seconds='60'>
</exec_method>

This is the start method; SMF will invoke whatever the exec attribute specifies when it wants to start the service.

<exec_method name='stop' type='method' exec=':kill' timeout_seconds='60'>
</exec_method>

SMF will terminate the processes it started for the service using a kill signal. By default it uses SIGTERM, but we can specify our own signal; for example we can use ':kill -9' or ':kill -HUP', or any other signal we find appropriate for terminating our service.
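
For instance, a stop method that asks SMF to send SIGHUP instead of the default SIGTERM could be sketched as follows; whether SIGHUP is an appropriate termination signal depends entirely on the service:

<exec_method name='stop' type='method' exec=':kill -HUP' timeout_seconds='60'>
</exec_method>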

<exec_method name='refresh' type='method' exec='/opt/jws/runner reload_conf' timeout_seconds='60'>
</exec_method>

The final method we should describe is the refresh method, which should reload the service configuration without disturbing its operation.

The start and stop method descriptions are required to be present in the manifest in order to import it into the SMF repository, but the refresh method description and implementation are optional.

The timeout_seconds='60' attribute specifies that SMF should wait 60 seconds before aborting the method execution and retrying it. We can use a longer timeout when we know that executing the method takes longer, or a shorter one when we know that our service starts sooner.

The property group elements specify which property groups SMF should associate with the service. We can use any of the SMF built-in property groups or define our own.

<property_group name='startd' type='framework'>
<propval name='ignore_error' type='astring' value='core,signal'/>
</property_group>

The above snippet shows how we can use one of the framework's built-in property groups. The built-in property groups and their properties have meaning for the SMF framework and change the way it deals with our service. For example, the above snippet says that SMF should not treat it as a failure when the service terminates because of a core dump or because another application killed it with a signal.
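
We can also define our own application-type property group to carry service configuration. The group name, property names, and values below are hypothetical, chosen only to illustrate the idea:

<property_group name='config' type='application'>
<propval name='port' type='count' value='8080'/>
<propval name='document_root' type='astring' value='/opt/jws/docs'/>
</property_group>

The service can then read these values at run time, for example with a command such as svcprop -p config/port jws.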

The next important element is the stability element, which specifies whether the property names, the service name, its dependencies, and other service attributes may change in the next release. For example, the following snippet specifies that the service attributes may change in the next release.

<stability value='Unstable'/>

Finally we have the template element, which contains descriptive information about the service. For example, we can have something like the following element, which describes our service for human users.

<template>
<common_name>
<loctext xml:lang='C'>Java Network Server</loctext>
</common_name>
<documentation>
<manpage title='JWS' section='1M' manpath='/usr/share/man'/>
</documentation>
</template>

The entire element is self-describing, except for the manpage element, which specifies where the man utility should look for the man pages of the jws service.

All of these elements should be saved into an XML file with the following structure:
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle ...>
<service ...>
<single_instance .../>
<dependency ... />
<exec_method ... />
<property_group ... />
<stability ... />
<template ... />
</service>
</service_bundle>

This is a raw representation of what the service manifest skeleton can look like. The syntax is not exact, and attributes and child elements are omitted to make it easier to scan.

The service_bundle, service, and the start and stop method elements are mandatory, while the other elements are optional.
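
Before importing the manifest, it is a good idea to check it against the DTD. The svccfg validate subcommand does this (the path below is just the placeholder used later in this article):

# svccfg validate /path/to/manifest.xml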

Now we should install the service into the SMF repository and then administer it. To install the service we can use the following command.

# svccfg import /path/to/manifest.xml

When we import the service manifest into the service repository, SMF scans the service profiles and checks whether this service needs to be enabled, either directly or because another service depends on it. If so, SMF uses the start method specified in the manifest to start the service. But before starting it, SMF checks its dependencies and recursively starts any required services that are not already online.
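
If the service is not enabled by any profile, we can enable it and check its state ourselves. A possible sequence, using the jws service from our example, would be:

# svcadm enable svc:/network/jws:default
# svcs -l jws

If the service fails to come online, svcs -x jws and the service log under /var/svc/log are the first places to look.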

Reviewing the DTD located at /usr/share/lib/xml/dtd/service_bundle.dtd.1, and staying tuned for my next few entries, is recommended for gaining a better understanding of SMF and Solaris service administration and management.

Configuring Solaris Link Aggregation (Ethernet bonding)

Link aggregation, commonly known as Ethernet bonding, allows us to enhance network availability and performance by combining multiple network interfaces into an aggregation that acts as a single network interface with greatly enhanced availability and performance.

When we aggregate two or more network interfaces, we form a new network interface on top of those physical interfaces, combined at the link layer.

We need at least two network interfaces in our machine to create a link aggregation. The interfaces must be unplumbed before we can use them in an aggregation. The following command shows how to unplumb a network interface.

# ifconfig e1000g1 down unplumb

We should also disable NWAM, because link aggregation and NWAM cannot coexist.

# svcadm disable network/physical:nwam

The interfaces we want to use in a link aggregation must not be part of a virtual interface; otherwise it will not be possible to create the aggregation. To ensure that an interface is not part of a virtual interface, check the output of the following command.

# dladm show-link

The following figure shows that my e1000g0 has a virtual interface on top of it, so it cannot be used in an aggregation. To delete the virtual interface we can use the dladm command as follows.

# dladm delete-vlan vlan0

Link aggregation, as the name suggests, works at the link layer, so we use the dladm command to make the necessary configuration. We use the create-aggr subcommand of dladm with the following syntax to create aggregation links.

dladm create-aggr [-l interface_name]* aggregation_name

In this syntax we need at least two occurrences of the -l interface_name option, followed by the aggregation name. Assuming that we have e1000g0 and e1000g1 at our disposal, the following command configures an aggregation link on top of them.

# dladm create-aggr -l e1000g0 -l e1000g1 aggr0

Now that the aggregation is created, we can configure its IP allocation the same way we configure a physical or virtual network interface. The following command plumbs the aggr0 interface, assigns a static IP address to it, and brings the interface up.

# ifconfig aggr0 plumb 10.0.2.25/24 up

Now we can use the ifconfig command to see the status of our new aggregated interface.

# ifconfig aggr0

The result of the above command should be similar to the following figure.

To get a list of all available network interfaces, either virtual or physical, we can use the dladm command as follows.

# dladm show-link

To get a list of aggregated interfaces we can use another subcommand of dladm.

# dladm show-aggr

The output of the previous dladm commands is shown in the following figure.

We can change an aggregation's underlying interfaces by adding an interface to the aggregation or removing one from it, using the add-aggr and remove-aggr subcommands of dladm. For example:

# dladm add-aggr -l e1000g2 aggr0
# dladm remove-aggr -l e1000g1 aggr0

The aggregation we created will survive a reboot, but our ifconfig configuration will not, unless we persist it using the interface configuration files. To make the aggregation's IP configuration persistent we just need to create the /etc/hostname.aggr0 file with the following content:

10.0.2.25/24
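
One quick way to create this file (any text editor works just as well) is:

# echo '10.0.2.25/24' > /etc/hostname.aggr0
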
The interface configuration files are discussed in recipes 2 and 3 of this chapter in great detail. Reviewing them is recommended.

To delete an aggregation we can use the delete-aggr subcommand of dladm. For example, to delete aggr0 we can use the following commands.

# ifconfig aggr0 down unplumb
# dladm delete-aggr aggr0

As you can see, before we can delete the aggregation we have to bring its interface down and unplumb it.

In recipe 11 we discussed IPMP, which provides high availability by grouping network interfaces and, when required, automatically failing over the IP address of a failed interface to a healthy one. In this recipe we saw how to join a group of interfaces together for better performance. By placing a set of aggregations into an IPMP group we can have the high availability that IPMP offers along with the performance boost that link aggregation offers.

Link aggregation works at layer 2, meaning that the aggregation groups the interfaces at the data link layer, where frames are handled. At this layer, the network layer's packets carrying the aggregation's designated IP address are extracted from, or encapsulated into, frames and then handed to the higher or lower layer respectively.

Oracle is NOT taking back OpenSolaris; ZDNet's Dana Blankenhorn got it wrong.

Once again, FUD about the fate of Solaris and OpenSolaris started to spread after Dana Blankenhorn misunderstood the licensing terms and used an eye-catching, visitor-attracting title, "Oracle taking back OpenSolaris", for his blog entry. Well, from this article we can see that even veteran writers can get things wrong and spread incorrect news 🙂

Folks, Solaris is one of the biggest Sun assets that Oracle now owns after taking over Sun. Solaris and OpenSolaris are going to be around, in much better shape than before, because Oracle is betting its fight for market share on this operating system as part of a complete stack including storage, hardware, OS, middleware, support, and so on.

Oracle may change the licensing terms for the Solaris OS, which is the commercial distribution of OpenSolaris (with some components added or removed) that Sun supported in the old era, but close the OpenSolaris code base? No way. Changing the licensing terms may be the result of Oracle seeking a higher revenue stream from the product, and I bet Oracle will be able to get more out of Solaris than Sun did, thanks to its powerful marketing department 😛

Looking at this FUD from any angle tells you it is not correct, for at least the following reasons:

  • OpenSolaris has a large community around it, which Oracle does not want to send away.
  • Solaris/OpenSolaris adoption increased greatly after Sun pushed the source code into the OpenSolaris project; as adoption of OpenSolaris grew (the whole Solaris-on-z-architecture effort is one example), so did adoption of Solaris itself. Long story short, just take a look at http://www.genunix.org/ and http://hub.opensolaris.org/bin/view/Main/downloads to see how many active distributions are based on the OpenSolaris core.
  • Solaris/OpenSolaris is too important to Oracle to let it fall apart, because it has a lot to offer in Oracle's strategy of providing an end-to-end stack of its own.

People are asking why the 2010.03 release is not out when it is already the first days of April; the answer is that a few more weeks of development and testing will give us a more stable OS. If you want to check the latest features that will be included in 2010.03, grab the latest build (which is build 134 right now) from http://www.genunix.org/dist/indiana/ and play with it, but keep in mind that the build is not production-ready yet. If you want the source code of OpenSolaris, take a look at http://hub.opensolaris.org/bin/view/Main/get to get the source code and build the OS yourself.

I wonder what these people get out of spreading wrong words and incorrect news about things they have no clue about. Folks, the Solaris OS is not OpenSolaris. OpenSolaris is CDDL-licensed (except for some parts that are not under the CDDL: http://hub.opensolaris.org/bin/view/Main/no_source), while the Solaris distribution contains some of the OpenSolaris components and features, plus some value-added components, along with license/distribution fees and first-class support from Oracle.

Well, these were my personal feelings about the whole issue of the OpenSolaris/Solaris FUD flying around.

My refcard for Oracle Berkeley DB Java Edition published by DZone

The refcard discusses the following items:

  1. The BDB Family: an introduction to the different BDB family members, including BDB Base Edition, BDB XML Edition, and BDB Java Edition, with tables comparing their features.
  2. Key Features: key features of the BDB family members and features exclusive to BDB Java Edition are explained here.
  3. Introducing Berkeley DB Java Edition, including:

  • Installation: how to install BDB JE and which JAR files are required to be on the classpath.
  • Description of Access APIs: three different access APIs are explained and a working sample for each is included. The discussed APIs are the Base API (key/value pair storage and retrieval), the DPL (Direct Persistence Layer for object persistence), and the Collections API (using BDB JE as an on-disk collection).

  4. BDB Java Edition Environment Anatomy: how BDB JE stores information on disk and what the directory and file layout of BDB JE looks like.
  5. BDB Java Edition APIs Characteristics: what makes BDB JE different and how these differences can help developers in different scenarios.
  6. Transaction Support: explanation and sample code for using transactions with the Base API and the DPL.
  7. Persisting Complex Object Graphs using the DPL: how to define relations between objects using annotations and how to persist and retrieve related objects.
  8. BDB JE Backup/Recovery and Tuning: how to create a backup, restore it, and which tuning factors can be used to fine-tune BDB JE.
  9. Helper Utilities: helper utilities for exporting and importing data.
You can download the refcard free of charge from here.

You will find several tables and illustrations to make it easier to understand the whole concept of BDB JE and the differences between BDB JE, BDB XML Edition, and BDB Base Edition. All included sample code is fully explained.