Thursday, July 26, 2007

Order Is Disorder

Eclipse offers the option to auto-sort the members of a class while formatting its code. After seeing the result of member sorting applied to code I have written, I must confess I find this "feature" useless at best, deeply asinine at worst, because the cons far outweigh the pros.

If this sounds too harsh, then here is a more polished point of view from this developerWorks article:
Sometimes it's nice to have members sorted alphabetically to better navigate in your code. Again, there may be arguments against it. Imagine you structure your code so that methods that call each other are located close together for code navigation. The sort would reorganize them, and they may not be in the wanted order.

Sorting members alphabetically is very often counter-intuitive. Let's take javax.servlet.Filter as an example. Which implementation do you prefer:

The alphabetic order
void destroy()
void doFilter()
void init()

The execution flow order
void init()
void doFilter()
void destroy()

The latter is of course the more natural one: time cannot be factored out when reading and understanding source code. In this respect, alphabetic ordering acts as a randomization of the source code's time line.

On top of time-line ordering, programmers naturally organize related members in groups because it is easier to keep a visual context that spans several methods, which facilitates their comprehension. Using the previous example, I would sort the members this way:

void init()
void destroy()
void doFilter()

keeping the lifecycle methods (init and destroy) visually grouped and time-line sorted.
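This grouped layout can be sketched as follows. Note this is a plain illustrative class, not a real javax.servlet.Filter implementation: the class name and the String-based doFilter signature are made up for the example.

```java
// Illustrative class: members are grouped by concern and time-line
// ordered, not sorted alphabetically.
public class LoggingFilter {

    private final StringBuilder trace = new StringBuilder();

    // -- lifecycle methods, kept visually grouped --
    public void init() {
        trace.append("init;");
    }

    public void destroy() {
        trace.append("destroy;");
    }

    // -- request processing --
    public void doFilter(String request) {
        trace.append("doFilter(").append(request).append(");");
    }

    public String getTrace() {
        return trace.toString();
    }

    public static void main(String[] args) {
        LoggingFilter filter = new LoggingFilter();
        filter.init();
        filter.doFilter("/index.html");
        filter.destroy();
        System.out.println(filter.getTrace());
        // prints: init;doFilter(/index.html);destroy;
    }
}
```

An alphabetic sort would wedge doFilter between the two lifecycle methods, breaking the visual group.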

Here again, alphabetic sorting damages another important factor for code understanding: the distance between related statements.

My conclusion is that member sorting goes far beyond formatting concerns and should not be applied automatically. If some kind of sorting is needed, it is better to sort the hierarchical outline of the class, as explained in the aforementioned article:
The outline view offers a nice feature to sort members in the view, but not in the code. The specific settings and how members are sorted can be found in Preferences > Java > Appearance > Members Sort Order.

Monday, July 23, 2007

Some Serious Tooling

To further promote Windows Developer Power Tools, O'Reilly has just released two badges to show off the work done on the tools and on the book.

Windows Developer Power Tools

I really like the circular saw: though software engineering is more like plumbing, there are some serious cutting jobs as well, so a saw like this one can come in handy.

Anyway, I have started using the badges on the NxBRE sites and documentation.

Sunday, July 22, 2007

Content Rides The Bus

I have just released the very first version of the JCR Transport for Mule. This transport supports JCR 1.0 (Java Content Repository 1.0, as defined by JSR 170) repositories and is receiver-only. It leverages the asynchronous notification mechanism, named observation, to receive events whenever a change occurs in the repository; it is not intended for storing or reading messages from a content repository.

Indeed the use case for this connector is to monitor a content repository for particular events and to transform these events into fully-resolved serializable content-rich objects that Mule ESB can route to interested endpoints or components, enabling them to trigger actions like instantiating work-flow sequences or flushing caches.

Developing this connector using Mule's Transport Artifact was very easy, the biggest part of the code being the (optional) augmentation of the JCR events with content from the repository. The development environment that MuleSource has opened to developers, named MuleForge, uses all the good tools of the moment, like Subversion, Confluence, Jira, Bamboo and, of course, Maven, with Xircles as the one platform to bind them all into a consistent dashboard.

MuleForge has allowed me to develop, build, distribute and document this transport in a very efficient and professional way. For example, the wiki templates look exactly like the Mule ESB official guide, which is a great plus for end users, who will enjoy a consistent reading experience.

All in all, MuleForge is a smart initiative that fosters creativity and contribution around a powerful and well-established open source product. I am confident the effort MuleSource has put into setting up this environment will pay off: it will nurture projects contributed by a vibrant community of users and developers and will eventually enrich the overall Mule ESB offering.

Saturday, July 21, 2007

DieSeL for the Engine

Thanks to ANTLR and ANTLRWorks, I have been able to make significant progress on the addition of Domain Specific Language (DSL) support for the Inference Engine of NxBRE. A usable version is available in the SVN repository on SourceForge.

The main goal of NxBRE DSL, aka NxDSL, is to allow rule authors to use their own natural language to write executable rules. Technically, this feature is based on:
  • a language-specific ANTLR grammar that strictly enforces the structure of a rule file: this file is not supposed to be edited by the implementer,
  • a custom definition file that translates statements into RuleML atoms: this file must be created by the implementer to capture, with regular expressions, the natural language fragments and how they translate into RuleML atoms.

In the following example, the structural keywords (rule, then deduct) are parsed by ANTLR while the statements are matched by the regular expressions from the definition file.

rule "Discount for regular products"
The customer is rated premium
The product is a regular product
then deduct
The discount on this product for the customer is 5.0 %

The variable terms (such as the rule label and the 5.0 % value) are captured by ANTLR and the regular expressions to provide values for labels, implication actions and atom values.

As you can see, the ANTLR grammar defines and enforces the structure of the rulebase, i.e. the skeleton of the rules, logical blocks and statements.
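The definition-file side of this mechanism can be sketched with plain regular expressions. The pattern and the simplified atom rendering below are hypothetical, not NxDSL's actual definition-file syntax:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical translation of one natural-language statement into a
// simplified RuleML-like atom, in the spirit of NxDSL's definition file.
public class StatementTranslator {

    // Captures the rating in: "The customer is rated premium"
    private static final Pattern CUSTOMER_RATING =
        Pattern.compile("The customer is rated (\\w+)");

    public static String toAtom(String statement) {
        Matcher m = CUSTOMER_RATING.matcher(statement);
        if (m.matches()) {
            // Render as a simplified atom: predicate(individual, value)
            return "rated(customer, " + m.group(1) + ")";
        }
        throw new IllegalArgumentException("No pattern matches: " + statement);
    }

    public static void main(String[] args) {
        System.out.println(toAtom("The customer is rated premium"));
        // prints: rated(customer, premium)
    }
}
```

A real definition file would hold one such pattern per supported statement, leaving the rule skeleton to the ANTLR grammar.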

NxDSL comes with a grammar that allows using French terms for defining the rule structure, thus opening the door to consistently write the body of the rules in the same language, as shown hereafter:

règle "Remise pour les produits standards"
Le client est en catégorie premium
Le produit est de type standard
alors déduis
La remise pour ce produit pour ce client est de 5.0 %

While NxDSL adds extra work for the engine implementer, the positive impact on rule writers will justify its usage for projects that need to let non-technical users manage rules.

NxDSL also opens the possibility of writing a plain-text rule editor by leveraging the definition file to provide code assistance to the writer. Anyone feeling like contributing such an editor?

Thursday, July 19, 2007

Lang for the Commons

I had to deal today with a class that contains an attribute of type Serializable. When I asked Eclipse to generate the equals and hashCode methods I needed, it warned me that the generated code would not work properly because Serializable does not mandate anything in terms of implementation of these two methods.

And as expected, I started to get red lights when my test cases were stuffing byte[] into the Serializable attribute: the hash computation and equality algorithm used for this type simply did not work, and two instances of my object containing the same sequence of bytes in two different byte arrays were bound to be considered different.
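The underlying problem is easy to reproduce: byte[] inherits identity-based equals and hashCode from Object, so two arrays holding the same bytes are not equal unless compared by content with java.util.Arrays:

```java
import java.util.Arrays;

public class ArrayEqualityDemo {
    public static void main(String[] args) {
        byte[] a = { 1, 2, 3 };
        byte[] b = { 1, 2, 3 };

        // Object identity: same content, different instances.
        System.out.println(a.equals(b)); // prints: false

        // Identity hash codes will almost certainly differ too,
        // breaking any hashCode built naively on the field.

        // Content comparison, which is what the domain object needs.
        System.out.println(Arrays.equals(a, b)); // prints: true
    }
}
```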

Since I was already using Commons Lang for building the String representation of my object, I thought I should give a shot at the hashCode and equals builders it offers. Reading no doc and proceeding only by analogy with the toString method, I quickly typed this:

public int hashCode() {
    return HashCodeBuilder.reflectionHashCode(this);
}

public boolean equals(Object obj) {
    return EqualsBuilder.reflectionEquals(this, obj);
}
And it worked immediately! Not only did it save me a lot of lines of code, but it worked exactly as I expected from the names of the classes and methods.

Nice API, nice library: you need it.

Wednesday, July 18, 2007

July And Still Jolting

DDJ has (finally) published the write-ups for the winners of this year's Jolt Product Excellence Awards.

There is no master page for the different categories, so here is a link that will provide you with search results showing all the relevant pages.

My own write-ups are in the Enterprise Tools and Libraries, Frameworks and Components categories.

Enjoy the reading!

Saturday, July 14, 2007

Everything Is Happy (or Content?)

On this sunny Bastille Day, there is at least one good reason to rejoice: the public review of JSR-283 (aka JCR 2.0) has been published yesterday! Knowing that one of the most interesting Java APIs of the moment has just gotten better is a true source of (geeky) happiness and the promise that it will keep shaking and shaping the world of content repositories.

Indeed, JCR has been playing and will keep playing a fundamental role in opening the doors of a domain that used to be the private hunting field of a few mammoth vendors. Of course, this is not about getting great repositories for free, instead the main benefits are allowing users to be in control of their content assets and enabling developers to build innovative content-driven solutions.

I did not have time to dig deep into the specification, but I noticed that observation now produces richer events (source paths and identifiers) and that the previous SQL and XPath query languages have been deprecated in favor of an Abstract Query Model (AQM) that takes concrete form in a new SQL dialect (JCR-SQL2) and a query object model (JCR-JQOM). I was convinced that XPath was a perfect fit for JCR searching but apparently I was wrong! I think JQOM will be the more appealing option for tool builders, as mapping query features to this object model will be less clumsy than generating SQL.

Now to my little wish list! As a user of the API, here are the three changes I would like to see in the new version. Some are trivial, like:
  • A Property object can be single- or multi-valued, hence offers getValue() and getValues(). I find it logical that calling getValue() on a multi-valued property fails, but I find it deeply counter-intuitive that calling getValues() on a single-valued property fails. By allowing getValues() to return a unique value from a single-valued property, the code needed to read property content would be vastly unified, thus simplified. This would also conform to the principle of least astonishment.

  • An Event has a type property that defines the reason why it was raised. This is defined by a list of static integers, while I think it would be better to use a type-safe enumeration, as is done for PropertyType. I can understand that this change would create a backward compatibility challenge, so at least adding a helper class that could render to / parse from String would be very helpful for translating event types into a human-readable form (PropertyType offers such methods: nameFromValue and valueFromName). On top of that, making Event serializable would also facilitate its processing.

  • A less trivial addition would be to offer a unified way of acquiring a repository. Currently each vendor can offer a different client factory: writing an application that connects to several repositories forces you to write vendor-specific code (or to use Spring Modules). A unified URI-driven repository factory mechanism a la JDBC would be very helpful.
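Such a mechanism could resemble JDBC's DriverManager. The API below is purely hypothetical, not part of any JCR specification; every name in it is made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Purely hypothetical sketch of a URI-driven repository factory,
// analogous to JDBC's DriverManager. Not a real JCR API.
public class RepositoryManager {

    // A vendor driver knows whether it handles a given URI.
    public interface Driver {
        boolean accepts(String uri);
        Object connect(String uri); // would return javax.jcr.Repository
    }

    private static final Map<String, Driver> drivers = new HashMap<String, Driver>();

    public static void register(String scheme, Driver driver) {
        drivers.put(scheme, driver);
    }

    public static Object getRepository(String uri) {
        String scheme = uri.substring(0, uri.indexOf(':'));
        Driver driver = drivers.get(scheme);
        if (driver == null || !driver.accepts(uri)) {
            throw new IllegalArgumentException("No driver for: " + uri);
        }
        return driver.connect(uri);
    }

    public static void main(String[] args) {
        register("mock", new Driver() {
            public boolean accepts(String uri) { return uri.startsWith("mock:"); }
            public Object connect(String uri) { return "connected to " + uri; }
        });
        System.out.println(getRepository("mock://demo"));
        // prints: connected to mock://demo
    }
}
```

Application code would then depend only on a repository URI, just as JDBC code depends only on a connection URL.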
Do not get me wrong: I am not grumpy about JCR! I am even working on some goodness based on it. If I were grumpy about something, it would be the fact that, thanks to Sun's policy on binary redistribution, no JCR libraries are available in the central Maven repository, making builds a little tricky (but feasible, as Day saves the day again).

Thursday, July 12, 2007

Skeletons And Open Closets

Before it went to printed-publication paradise, Software Development Magazine used to run a "Horror Stories" series around Halloween every year. It was fun to write, instructive to read and pretty scary as well.

Nowadays, The Daily WTF, prudishly renamed Worse Than Failure, brings us this kind of story every day, if not several times a day. A common point between all these stories seems to be the universal existence of software skeletons in IT closets, waiting to jump onto poor new programmers' laps and make their lives a nightmare.

Life being short, it can be desirable to avoid boarding a ship that carries such monsters, but how can you tell that a company is facing "legacy code issues" from a few interviews? This kind of risk might make working for an open source software company a desirable move. Skeletons dislike open closets.

Are open source companies free of monsters? Of course not: for all the pieces of software that are not public (think back-office systems, web sites...), there is a risk of facing the ugly spawn of years of software rot. But at least all the public-facing code has to live up to some elevated standards and is "up to the challenge of 1000+ eyeballs reading [it] every day".

Should code skeletons be avoided at all costs? I do not think so. Most of us can not see dead people; similarly, management can not see dead programs. IIABDFI ("if it ain't broke, don't fix it") is the golden rule, but sometimes it becomes clear that there are only so many marathons a dead man can run. Then, if your job is to refactor such a monster so it becomes maintainable and versatile, it can be as challenging as it is fun.

What should be avoided at all costs is the summoning of new skeletons into closets. With all the knowledge we now have thanks to luminaries in our industry, this should be possible.