Sunday, April 29, 2007

Prefactoring A Bell

I am currently reading Ken Pugh's Prefactoring, a seminal book on writing software "right" from the beginning without erring on the side of BDUF. While reading this book, I have found that some concepts Ken introduces (or re-introduces, as many of them were already known) directly map to certain situations I am currently facing. I will share this here, and maybe in upcoming posts, if other situations ring my bell...


Tight Coupling and the Singleton Identity

Of course, avoiding tight coupling is a goal every conscientious developer has in mind and strives to reach as much as possible. The difficulty is to spot tight coupling, that is, coupling to a particular implementation, as it sometimes takes place unnoticed.

For example, I recently came across the case of a developer who needed to test the identity of an object and opted to use equality because he knew the object was a singleton.
if (theObject == Singleton.theInstance)
This created tight coupling because, should the object cease to be a singleton, testing for equality would break. The following should have been used:
if (theObject instanceof Singleton)
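The difference between the two checks can be sketched in a few lines. This is a hypothetical reconstruction of the situation, with an illustrative class name, imagining the day the class stops enforcing a single instance:

```java
// Sketch of the situation described above: imagine Singleton's constructor
// has been opened up, so the class no longer guarantees a single instance.
class Singleton {
    static final Singleton theInstance = new Singleton();
}

public class IdentityVsType {
    public static void main(String[] args) {
        Object theObject = new Singleton(); // a second instance now exists

        // The equality test silently breaks...
        System.out.println(theObject == Singleton.theInstance); // false

        // ...while the type test keeps working.
        System.out.println(theObject instanceof Singleton);     // true
    }
}
```

The type check only couples the caller to the class itself, not to the implementation detail of how many instances exist.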

APIs of Least Surprise

Designing APIs is a tough subject: the intense discussion between Josh Bloch and Michael Feathers at the latest SD West was quite a lively proof of it. Sticking to the principle of least surprise is surely an excellent guideline for interface designers.

I recently came to use the javax.management.MBeanServerFactory class and bumped into an inconsistent behavior between two of its helper methods:
createMBeanServer(String domain)
findMBeanServer(String agentId)
As you can see, when you create an MBeanServer, you provide the API with a domain name, while when you use the same API to look for MBeanServers, you have to provide an agent ID. Since both are Strings, I assumed they both represented the same concept, but I was wrong. And surprised!
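The surprise is easy to reproduce with the standard javax.management classes. In this sketch I assume a default JVM setup, where the agent ID is auto-generated rather than derived from the domain name:

```java
import java.util.List;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;

public class MBeanServerSurprise {
    public static void main(String[] args) {
        // Create an MBeanServer with a *domain* name...
        MBeanServer server = MBeanServerFactory.createMBeanServer("MyDomain");

        // ...then naively look it up with the same String: no match, because
        // findMBeanServer() expects an *agent ID*, not a domain name.
        List<MBeanServer> byDomain = MBeanServerFactory.findMBeanServer("MyDomain");
        System.out.println(byDomain.size()); // 0

        // Passing null returns all registered servers, including ours.
        List<MBeanServer> all = MBeanServerFactory.findMBeanServer(null);
        System.out.println(all.contains(server)); // true

        MBeanServerFactory.releaseMBeanServer(server);
    }
}
```

Two methods on the same factory, two String parameters, two different concepts: a textbook violation of the principle of least surprise.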

Saturday, April 21, 2007

My Top Three Mac OS X Annoyances

Now that I have switched to Mac OS X as my main OS, all my troubles seem so far away and it's a wonderful life.

Just kidding! Though OS X is a great OS, it carries a fair share of annoyances, pretty much like every system does. Here is the list of the top three glitches that drive me nuts:

  • Bad keyboard support: I find myself forced to use the mouse too often. Not that I dislike that kind of small mammal, but having to leave the keyboard to twiddle the mouse really slows me down, usually at the worst moment (when typing code, for example). Very often a dialog will pop up and I will have no way to get rid of it but to use the mouse. Likewise, when paging up and down in large texts, the fact that the caret does not actually move forces me to use the mouse to click and position the cursor. Windows XP does a much better job with keyboard support, as you can do almost everything with your hands on the keys.

  • Lame file explorer: I am sorry, but Finder is a pain in the neck. Navigating a folder hierarchy, creating folders exactly where you want them, moving files around, renaming them... all these operations carry a certain degree of clunkiness that quickly makes me fume and rant. Again, Windows XP does a much better job here (except for network folders, which consistently freeze the file explorer, if not worse).

  • Sweet and sour JVM: though Apple boasts about its superb JVM integration, not being able to use a standard one from Sun prevents you from staying up to date. So the JVM is great, but it lags behind the official releases from Sun. As of this writing, version 6 is still a developer preview while the mainstream VM is already at update 1. I think Apple should keep integrating the JVM into OS X as they do, but also make it simple for developers to deploy Sun ones in "private" mode.
Now that I have written this down, I start to realize that my good old Kubuntu Dapper Drake machine, with its archaic-looking UI, is not doing so badly after all!

Sunday, April 15, 2007

A Bridge, a Donkey and a lot of Fire

Did it need to be so high?

JMS is a simple yet powerful API that allows developers to build asynchronous and loosely coupled systems pretty easily. In fact, it is so easy that its usage usually expands very rapidly in the IT landscape of a company until it hits a wall that is as high, austere and disabling as the Berlin Wall was, namely: the firewall.

JMS listeners rely on specific ports, usually dynamically assigned, which generally prevents their usage through a firewall, as administrators are reluctant to open ranges of ports. Fortunately, there is a highway that goes through this wall: it is called HTTP. It has a particular traffic regulation, as it is a one-way road that goes from the inside (the intranet zone) to the outside (the external DMZ that we will call the Internet zone).


Mule to the rescue!

This post demonstrates how to leverage Mule, the open source ESB, to bridge JMS queues that reside on both sides of the firewall through this highway. The following deployment diagram details what is involved in this scenario: as you can see, Mule is not deployed as a standalone application but is embedded in a J2EE web application and deployed on a server. The reasons for this approach are multiple:
  • System administrators can be reluctant to deploy new tools: deploying Mule as a web application on your company's standard J2EE server alleviates this resistance.
  • The inbound queues used by the bridge can be hosted by the server itself, leading to a neat and consistent self-contained component without any interaction with an external system.
  • Using Mule's servlet connector allows you to leverage the well-known web stack provided by your favorite J2EE server.
When an application wants to send a message to another zone, it does so by sending the message to a dedicated queue in its own zone, which acts as a "letter box". The routing itself is based on a specific JMS message property (named "internet_destination" or "intranet_destination") that contains the targeted queue name alias. This bridge uses aliases instead of real server and queue names to reduce coupling and to limit routing to pre-defined destinations.

The following diagram presents the different components involved in the bridge. Routing from the intranet to the Internet is shown in green; the other direction is shown in red. The arrows are oriented in the direction of message flows, not in the direction of the call from a particular caller. The gray boxes represent the application servers involved in the bridge and the Mule and JMS components they host.

[ Configuration files for JBoss 4.x: Intranet - Internet ]


From Intranet to Internet

A Mule component subscribes to the letter box queue in the intranet zone and listens to messages published there. When it gets a new message, it sends it via HTTP POST to the Mule servlet in the Internet zone. This servlet is the endpoint of a Mule component that performs the routing based on the aforementioned JMS property and publishes the message to the targeted queue (or stores it in a DLQ, aka Dead Letter Channel, in case the target is unknown).
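The alias-based routing decision itself can be sketched independently of Mule or any JMS provider. The property key, aliases and queue names below are illustrative assumptions, not taken from the actual bridge configuration:

```java
import java.util.HashMap;
import java.util.Map;

public class AliasRouter {
    // Pre-defined routes: alias -> physical queue name (made-up values).
    private static final Map<String, String> ROUTES = new HashMap<String, String>();
    static {
        ROUTES.put("orders", "queue/internet/orders");
        ROUTES.put("invoices", "queue/internet/invoices");
    }

    static final String DEAD_LETTER_QUEUE = "queue/internet/dlq";

    // Resolve the alias carried by the message's "internet_destination"
    // property into a physical destination; unknown aliases go to the DLQ.
    static String resolve(String destinationAlias) {
        String target = ROUTES.get(destinationAlias);
        return target != null ? target : DEAD_LETTER_QUEUE;
    }

    public static void main(String[] args) {
        System.out.println(resolve("orders"));  // queue/internet/orders
        System.out.println(resolve("unknown")); // queue/internet/dlq
    }
}
```

Because only pre-registered aliases resolve to real destinations, the bridge cannot be abused to push messages to arbitrary queues, which is exactly the decoupling and containment argument made above.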


From Internet to Intranet

The other direction implies bringing messages back into the intranet zone, because no sending can be initiated from the Internet zone. This is achieved in this bridge by using a Mule component in the intranet zone that regularly polls another Mule component in the Internet zone. The latter uses the power of scripting in Mule to define a component that consumes messages from the Internet letter box only when requested by a call from the intranet zone.


Your Turn Now

As you can see, this example does not cover temporary destinations (used by requesters, for example), nor the reply-to feature of JMS. Note that, with a little extra work, it would be fairly easy to support the case of reply-to targeting non-temporary destinations. This would be done by rewriting the destination JMS property in the messages entering the bridge to have the reply channel go through a pre-configured route.

Similarly, this scenario lacks any kind of retry mechanism, needed should an HTTP transfer fail, as well as a possible message staging area where payloads could be scanned for viruses before being routed to their intranet destination.

In fact, this example gives you a fairly complete view of what can be achieved with Mule, a little bit of configuration and not a single line of compiled code.

The fact that no coding is involved is pretty important for production matters: any skilled system administrator can now activate new routes or deactivate existing ones by simply tweaking the Mule configuration. This can be done without involving a software developer. In that sense, this JMS Bridge becomes a first class citizen of the IT infrastructure.

Do not wait any longer and fetch the beast of burden that will massage your messages! But leave the cow alone...

Friday, April 13, 2007

Seriously Infected

After writing some tests for a Tomcat valve today, I came to wonder about what it is that I like most about unit testing.

At first, I thought it is when the red lights turn green - oh the jolly green! This is a truly enjoyable moment, when all tests pass.

Then I thought it is maybe when green lights turn red - oh the scary red! When I touch some piece of code and immediately see the impact it has, I really feel like unit testing saved my day (and possibly some nights).

But finally, I came to realize that, for me, the best moment with unit testing is when I ask myself this simple question: "How am I going to test this?". No matter if I am testing an existing piece of code or writing the tests first, asking this question is really a delightful moment. I reckon this is because I start considering what I will test as a living entity and not anymore as a mere concept: by figuring out its execution environment, its inputs and outputs, all this code or code-to-be comes to life in a very vivid way.

Am I describing the apex of some kind of geeky childbirth? I do not know, but what I know for sure is that the harder it is to imagine how to test something at first glance, the more rewarding it will be to figure out how to do it!

Tuesday, April 10, 2007

Where Have All The Bad Spams Gone?

Is this just me or has GMail spam filtering improved drastically these past days?

My junk mail count has come down from ~90 per day to ~15 per day since last Sunday.

This is great and scary at the same time: I have started to wonder if I am not losing legitimate emails in the process!

Friday, April 06, 2007

Put Your Business Rules to the Test

In the November 2006 issue of DDJ, Scott Ambler explained in an article titled "Ensuring Database Quality" the why, what and how of database testing. With the advent of commercial and open source business rules engines and their increased usage in applications of every size and complexity, ensuring rules quality has become a critical issue for many businesses.


No blind spot tolerated

There are several reasons why applications externalize their business rules instead of hard coding them, including the following:

  • to accommodate the need for frequently changing business rules in the cleanest possible way,
  • to enable non-technical personnel to author business rules,
  • to allow the exchange of rules between heterogeneous systems,
  • to delegate the handling of complex rules to a component trusted for its performance and reliability.

A successfully implemented rules engine will quickly end up handling the most critical aspects of a modern business infrastructure, as business strategy becomes reified in rules and enforced by the engine. Therefore, the need to properly manage these rules will quickly become compelling.

Whether they use commercial tools (that come complete with advanced tooling) or open source implementations (that are traditionally weak in terms of tooling), implementers will have to put in place a rules life cycle similar to the one shown hereafter.


One of the critical aspects of rule management we are going to discuss here is testing. Given what is at stake with business rules, keeping them in a blind spot and simply hoping for the best will, sooner or later, have direct and dramatic consequences that are likely to hurt the bottom line for the aforementioned reasons.

We will discuss the different steps necessary to properly set up a test environment and then how to iterate on this basis.

The specific case of the home grown engine

If the rule engine you use is home grown, it is essential to test it extensively to certify that, release after release, it keeps its deduction power! To ensure this, a combination of component-level unit testing and white box testing is needed. Both will rely on a specific set of data and rules, usually not actual ones but ones made to cover core specifications, well-known technical challenges (like critical values handling) and regression testing (tests created after bug reports).

These tests should also be complemented with performance and load tests, in order to guarantee that your engine stays within its defined production constraints.
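The "critical values handling" point above is worth a tiny illustration. The discount rule below is entirely made up for this sketch; the idea is simply that a home-grown engine's tests should probe values just below, exactly at, and above each rule threshold:

```java
public class DiscountRuleTest {
    // Toy rule, invented for this sketch: 10% discount from 100.0 upward.
    static double discountRate(double orderAmount) {
        return orderAmount >= 100.0 ? 0.10 : 0.0;
    }

    private static void check(boolean condition, String label) {
        if (!condition) throw new AssertionError(label);
        System.out.println("OK: " + label);
    }

    public static void main(String[] args) {
        // Critical values: just below, exactly at, and above the threshold.
        check(discountRate(99.99) == 0.0, "below threshold -> no discount");
        check(discountRate(100.0) == 0.10, "at threshold -> 10%");
        check(discountRate(250.0) == 0.10, "above threshold -> 10%");
    }
}
```

A regression test created after a bug report would be added to the same suite, so the bug can never silently reappear in a later release of the engine.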

Setup Step 1: Setup the test sandbox

Rules engines are generally external components designed to be fed with data they can process and make deductions on, which usually ends in updating the existing data or creating new data. This relative isolation, which somehow plays against the engine at runtime because it is costly to alleviate, can be leveraged for testing, as it simplifies the setup of a sandbox, i.e. a dedicated and segregated environment where the engine will be free to modify data without touching any actual critical information.

For engines that are directly connected to enterprise resources like databases, the setup recommendations in Scott's article will apply.


Setup Step 2: Build input reference

Unless your business rules are simple or the amount of information you are dealing with is limited, it is almost impossible to manually create sample data that represents the diversity of cases your engine will have to process and apply rules to. If you are in this situation, you will have to work with business analysts to select relevant batches of data that represent a reasonable input in terms of data complexity and diversity. This will become your input reference.


Setup Step 3: Build output reference

After building this input reference, you will have to make your engine apply rules to it: the result will become your output reference, but only after having been validated by an expert.

With this reference data in hand, you will be in a position to certify your rule base, which is usually done by versioning it. It is of paramount importance that not only the rule file(s) be versioned but also all the reference data and related artifacts (like configuration files). This set of files will form a consistent, tested and validated body of information that will be a candidate for production and, if need be, for restoration should a downgrade be necessary. It will also be the unit of data that will be copied when a new version of the rule base is needed.


Following Steps: Iterate

After this initial phase, each subsequent modification of the rule base will have to go through a similar validation process, with the difference that the input reference will probably need editing to include new business cases or remove irrelevant ones.

When testing the modified rule base, differences from the output reference will be induced by the simple fact that business rules have been modified (for example, if a discount rate has been lowered, the newly computed values will change). Again, a business analyst will have to analyze the differences between the previously validated output reference and the new candidate. A diff-like visual tool with a simple accept/reject feature could be of great help here.
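The core of such a diff tool is straightforward. This sketch assumes each test case is keyed by an identifier and produces a single numeric result (a discount rate, say); the case names and values are invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OutputReferenceDiff {
    // Report every case whose candidate value differs from the validated reference.
    static Map<String, String> diff(Map<String, Double> reference, Map<String, Double> candidate) {
        Map<String, String> differences = new LinkedHashMap<String, String>();
        for (Map.Entry<String, Double> entry : reference.entrySet()) {
            Double newValue = candidate.get(entry.getKey());
            if (!entry.getValue().equals(newValue)) {
                differences.put(entry.getKey(), entry.getValue() + " -> " + newValue);
            }
        }
        return differences;
    }

    public static void main(String[] args) {
        Map<String, Double> reference = new LinkedHashMap<String, Double>();
        reference.put("order-1", 0.15); // validated rates (made-up data)
        reference.put("order-2", 0.00);

        Map<String, Double> candidate = new LinkedHashMap<String, Double>();
        candidate.put("order-1", 0.10); // the lowered rate shows up as a difference
        candidate.put("order-2", 0.00);

        System.out.println(diff(reference, candidate)); // {order-1=0.15 -> 0.1}
    }
}
```

The analyst would then accept each reported difference (promoting the candidate value into the new output reference) or reject it (flagging a rule regression).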


Merry nights ahead

Business rule engines are now first class citizens in the agilist's toolbox. As such, they deserve a well-defined and thorough test strategy. Depending on the engine used, setting up such a strategy will vary between defining your own and embracing the one provided by a vendor. At the end of the day, what really matters is what testing buys you: the possibility to make the most of business rules engines, which are intrinsically able to embrace change, without losing sleep over them.