Wednesday, October 08, 2008

Database Cargo Cult

Today I attended an epic product walk-through. When the demonstrator came to explore the application's database and opened the stored procedures directory, the audience was aghast at the shocking display of hundreds of these entities. I was told this is the canonical way of developing applications in the Microsoft world. That may well be true, as it creates coupling that favors the vendor, but to me it is more a matter of database cargo cult.

The fallacy of the database as an application tier is only equaled by the folly of using it as an integration platform. Both approaches create a massive technical debt that takes many years to pay off, if it is ever paid. Why? Because both approaches lack the abstraction layer that you need to create loosely coupled systems. Why again? Because both approaches preclude any sound testing strategy.

I have already talked about what I think are valid use cases for stored procedures. You do not need to believe me. Check what my colleague Tim has to say about it. And you do not need to believe him either, but at least read what Neal Ford says about it.

If you are still not convinced, then I have for you the perfect shape for your diagrams:

1BDB stands for one big database. And if you are in a database cargo cult, you already know why it is in flames.

Thursday, October 02, 2008

Jolting 2008

The doors are now open for the 2008 Jolt Product Excellence Awards nominations.

If you have a great book or a cool product that has been published or had a significant version release in 2008, then do not wait any longer and enter the competition!

Oh, did I mention that early birds and open-source non-profit organizations get a discount?

Wednesday, October 01, 2008

Just Read: Working Effectively with Legacy Code


This book from Michael C. Feathers came at a perfect moment in my life as a software developer, as I started reading it right before coming to work on a legacy monstrosity.

The book is well organized, easy to read and full of practical guidelines and best practices for taming the legacy codebases that are lurking out there.

I really appreciated Michael's definition of legacy code: for him "it is simply code without tests". And, indeed, untested code is both the cause and the characteristic of legacy code.
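To make this definition concrete, here is a minimal sketch (my own made-up example, not one from the book) of the kind of characterization test Michael recommends for getting legacy code under test. You do not assert what the code should do, only what it currently does:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LegacyPricingCharacterizationTest {

    // Stand-in for some untested legacy logic we dare not change yet
    private static int legacyPriceOf(final int quantity) {
        return quantity <= 10 ? quantity * 100 : quantity * 90;
    }

    @Test
    public void characterizeCurrentPricing() {
        // These expected values were obtained by running the code and
        // pinning down its current output: the test documents behavior,
        // it does not judge it
        assertEquals(500, legacyPriceOf(5));
        assertEquals(990, legacyPriceOf(11));
    }
}

Once such tests pin the existing behavior down, refactoring can start with a safety net.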

Near the end of the book, Michael has written a short chapter titled "We feel overwhelmed" that I found encouraging and inspiring. Yes, working with legacy code can actually be fun if you look at it from the right angle. My experience in this domain is that increasing test coverage is elating, deleting dead code and inane comments is bliss, and seeing design emerge where only chaos existed is ecstasy.

Conclusion: read this book if you are dealing with today's legacy code or if you do not want to build tomorrow's.

Meme(me)


From Sacha:
1. Take a picture of yourself right now.
2. Don’t change your clothes, don’t fix your hair…just take a picture.
3. Post that picture with NO editing.
4. Post these instructions with your picture.

Tuesday, September 30, 2008

Can Mule Spring further?

Just wondering.

Mule 2 is already very Spring-oriented, or at least very Spring-friendly. Will Mule's distribution, which is currently a nice set of Maven-built artifacts, evolve into a set of OSGI bundles a la Spring for version 3?

If MuleSource follows that path and distributes Spring Dynamic Modules, that would allow people to run on the SpringSource dm Server, if they want to, and benefit from sweetness like hot service replacement.

I doubt that Mule could use the SpringSource dm Server as a default platform unless they clear up what I think could be licensing incompatibilities. But end users could make that choice and put the two Sources, Mule and Spring, together.

Just wondering.

Wednesday, September 24, 2008

The Build Manifesto

My ex-colleague Owen has just introduced the Build Manifesto. I invite you to read it and share your thoughts with him.

Tuesday, September 23, 2008

Soon Serving Spring

I finally had a chance to step beyond trivialities with the SpringSource dm Server.

To make things clear, this server is all about giving life to Spring Dynamic Modules (DM). Spring DM is not new, but putting this technology in action used to be awkward: you had to deal with an embedded OSGI container and figure a lot of things out. The SpringSource dm Server provides an integrated no-nonsense environment where dynamic modules live a happy and fruitful life.

The benefits of dynamic modules are well known, and mostly derive from the OSGI architecture. The goodness Spring adds on top of OSGI is its clean model for service declaration and referencing. This is priceless, as it enables a truly highly cohesive and loosely coupled application architecture within a single JVM. If you come from a world of EJBs and troubled class loaders, this is the holy grail.

To exercise this sweet platform beyond the obvious, I have built the prototype of a JMS-driven application. The architecture was pretty simple: a bundle using Message Driven POJOs to consume a Sun OpenMQ destination, which then informs clients of new messages through a collection of listener service references. The idea was to allow the deployment of new clients or the hot replacement of a particular client at runtime. Verdict? It just works. The SpringSource dm Server goes to great lengths to isolate you from bundles going up and down (depending on the cardinality of your references to OSGI services, you may still get a "service unavailable" exception).
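To give an idea of how little code the broadcasting part requires, here is a hedged sketch of the consuming bundle's POJO; the NewMessageListener contract and all the names are hypothetical, and Spring DM would be configured to inject the listeners collection from an <osgi:list> of service references:

import java.util.List;

// Hypothetical contract that client bundles publish as an OSGI service
interface NewMessageListener {
    void onMessage(String payload);
}

// Message Driven POJO of the consuming bundle: Spring DM keeps the
// injected list in sync as client bundles come and go at runtime
public class MessageBroadcaster {

    private List<NewMessageListener> listeners;

    public void setListeners(final List<NewMessageListener> listeners) {
        this.listeners = listeners;
    }

    // Called by the JMS message listener container for each new message
    public void handleMessage(final String payload) {
        for (final NewMessageListener listener : listeners) {
            listener.onMessage(payload);
        }
    }
}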

What I really enjoyed was the possibility to use bundles of Spring-wired beans as first-class citizen applications. Gone is the need for wrapping Spring in a web application to get a proper life cycle... POJOs rule!

Version RC2 of the server still has some rough edges (for example, the deployment order in the pickup directory is fuzzy, and the logging messages are badly truncated and ill-formatted) but they are minor compared to the amount of work done on the platform itself.

For me, the main challenge remains in making this server truly production-grade. Here are some points that I think need improvement:
  • Consoles all the way down: You have to juggle between the Web Admin Console, the Equinox OSGI console and JConsole to perform different bundle management operations. The SpringSource dm Server needs a single console to rule them all.

  • The configuration management is unclear: I ended up using an extra module, the Pax ConfigManager, to load external properties files into the OSGI Configuration Admin service. The SpringSource dm Server needs a default tool for this before going to production.

  • A proper service shell script is missing: you just get a start and a stop script, which is not enough for production. A unique script with the classic start/restart/stop/status commands would be way better. If the matter of licensing were not such a hot subject at SpringSource currently, I would suggest they use Tanuki's Wrapper as their boot script.

It is obvious that SpringSource will take this server to the next level and make it production-ready pretty soon. It might already be done while I write this.

If you are a latecomer like me and have not started to seriously investigate this technology, now is the time.

Sunday, September 21, 2008

Final painted all over it

In my previous post, I mentioned that I let Eclipse add the final keyword everywhere in my code. I was initially reluctant about this idea.

Then I came to work on a piece of code that dealt heavily with string processing. The temptation to re-use a variable and assign a different string to it several times, as the processing went on, was really high. Using the final keyword forced me to declare a new variable each time. The great benefit was that I had to create a new variable name each time, thus making the code clearer in its intention.
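To illustrate with a made-up fragment (not the actual code I was working on), compare reusing one mutable variable with declaring a chain of final ones:

static String normalize(final String input) {
    // Tempting with a mutable variable: one name, many successive meanings
    // String s = input.trim();
    // s = s.toLowerCase();
    // s = s.replaceAll("\\s+", " ");

    // With final variables, each processing step gets its own telling name
    final String trimmed = input.trim();
    final String lowerCased = trimmed.toLowerCase();
    final String normalized = lowerCased.replaceAll("\\s+", " ");
    return normalized;
}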

Hence, using only final variables makes the code stricter and cleaner. Moreover, it is a safety net for the days when I am sloppy: final variables give me the big no-no if I feel inclined to repurpose one of them!

In Clean Code, Uncle Bob advocates against such a systematic use of the final keyword, for the reason that it creates too much visual clutter. I agree with him, though you quickly tend to visually ignore these keywords. Scala has solved the cluttering issue elegantly thanks to specific declaration keywords instead of a modifier (val for values and var for variables).

Did I just say Scala?

Saturday, September 20, 2008

Microtechniques: a little more conversation?

Interestingly, just when I was about to write about the pleasure of not typing many things while coding in Eclipse, J. B. Rainsberger posted about the potential importance of typing fast. I guess this is the nature of the webernet: everything and its opposite at the same time, and leave it to the readers to sort things out!

I was reflecting on my coding habits with Eclipse after a thread in the Java forums, where a poster was complaining about the continuous compilation feature, prompted some internal debates (me, myself and I often disagree). Interestingly, this feature has been touted by others as a key one.

I personally like the continuous feedback. The rare times I write C# code in SharpDevelop, it takes me a while to remember that I have to tell the IDE to look at my code. First, I do not like having to tell my friend the computer to do things. Second, I want the continuous feedback. I do not mind if the compiler is temporarily confused because my syntax is momentarily broken. I want the early warning that this feature gives me.

In fact, continuous compilation creates a neat state of flow, where you actually engage in a conversation with the IDE. And this is where my stance about not typing comes into play. I do not type class names, but merely each capital letter of the name, and let the code assistance propose something that fits. I do not write variable declarations: I either assign them automatically or extract them from existing code. I write method declarations less and less, extracting them instead. I allow Eclipse to add keywords (like final) and annotations (like @Override) everywhere it thinks necessary. I leave it up to the code formatter to clean up my mess and make it stick with the coding conventions in place.

All these automatic features do not dumb me down. They establish a rich conversation between my slow and fuzzy brain and the fast and strict development environment. The compiler tells me: "ok, this is what I understand" and the IDE tells me: "alright, here is what I propose". And then I correct course or keep going.

So maybe we need a little less typing and a little more conversation?

Tuesday, September 16, 2008

Just Read: Clean Code


I had been expecting this new book from Uncle Bob for quite a while, so, as soon as I got my copy, I rushed through it!

If you have no idea who this guy named Robert C. Martin is and mainly expect people to "sent you teh codez", then you have to read this book. It will not transmogrify you into a craftsman but, at least, you will get a fair measure of the journey you still have to go through and be pointed in the right direction.

If you are familiar with Uncle Bob's writings and attend his conference talks, there will be no new concepts for you in this book. It will still be an insightful read because of the extensive code samples and the refactoring sessions where you can actually follow the train of thought, and the actions it entails, as if you were in the master's head. It's like being John Malkovich, but a little geekier.

The book itself is somewhat structurally challenged and lacks a little consistency from a reader's standpoint.

But who cares? As long as you can read this kind of stuff:
Clean code is not written by following a set of rules. You don't become a software craftsman by learning a list of heuristics. Professionalism and craftsmanship come from values that drive disciplines.

Robert C. Martin, Clean Code
... is the form so important?

Thursday, September 11, 2008

Colliding Cow Bells

I found that the original Large Hadron Collider Rap was lacking cowbell. Here is the revised version, now fully compliant with Christopher Walken and Blue Oyster Cult standards:


Friday, September 05, 2008

Code Onion

The code of an application is like an onion, which is why it may make you cry sometimes. It looks like this:


The core
    This is the code domain where unit test coverage is the highest, hence refactoring is free. This code is as clean as it can be. This is the comfortable place where everybody wants to work. Thanks to high test coverage, the feedback loop is short and fast, so morale and courage are high when it comes to touching anything in this area.
The borders
    This is where most of the compromises happen. Dictated by the use of frameworks or application containers, code becomes invaded with inane accessors, class names end up hard-coded in configuration files, and out-of-code indirections (like JNDI-based look-ups) weaken the edifice. Unit and integration tests help build reasonable confidence, but some refactorings can induce issues that can only be detected when deploying to a target container. This slow and long feedback loop reduces the opportunities to make things better in this area.
The wilderness
    This is the outside world, where the rift between the world of code and the harsh reality of life resides. Databases inhabit this place and provide great services, but they mismatch with objects. The network is there too, always happy to teach your application pesky lessons about latency, dropped packets or broken pipes. Worst of all, users (yes, Tron, they exist) have invaded this area: from there they will constantly find creative ways to abuse your innocent code.
Tensions exist at the contact surfaces between these three layers, resulting in incongruous invasions of concerns and perversions of one layer into another. Ideally, these layers would not exist: we would not need frameworks to bridge the world of pure code with the real world. Until this dream comes true, we will have to keep dealing with the code onion and, every now and then, shed a tear over less than ideal code.

ESB Testing Strategies with Mule

SOA World Magazine has just put online my latest article, "ESB Testing Strategies with Mule", which they have published in their August issue.

Tuesday, September 02, 2008

That's silver, Jerry!

That's silver... for Mule In Action in the current Manning early access best seller roster. And the book is only getting better thanks to our great reviewers!

Monday, September 01, 2008

Reading Calendar Irony

My six readers (according to Google Reader) will certainly appreciate the irony of my reading calendar.

Here are the two books I am currently reading:

Isn't this coincidence really fun?

Friday, August 29, 2008

Commit Risk Analysis

I like to compare the stability of a legacy application to a saddle point:

QA provides some lateral stability that prevents the legacy application (this funny little red dot) from falling sideways. But, still, a little push on the wrong side and down the hole the application will fall.

What the legacy code typically misses is unit tests. With unit tests, the saddle disappears and the code stability ends up on a pretty stable parabola:
In an ideal world, there would be no legacy code. In a slightly less ideal world, there would be legacy code, but every time a developer touched any class, he would first write unit tests to fully cover the code he is about to modify.

Unfortunately, in our world, code is written by human beings. Human beings turned the original code into legacy code, and human beings will maintain and evolve this legacy code. Hence, modifications to non-unit-tested code will end up checked in. Inevitably, the little red ball will drift to the wrong side of the saddle.

I came to wonder how one could estimate the risk introduced by changes in a code base. Basically, the risk would be inversely proportional to the test coverage of the modified classes.
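As a hedged sketch of the idea (deliberately naive, and not Corian's actual code), the risk of a commit could be averaged from the coverage of the classes it touches:

import java.util.Map;

public class CommitRiskEstimator {

    // Coverage is expressed between 0.0 and 1.0 per modified class: an
    // uncovered class contributes maximum risk, a fully covered one none
    public static double commitRisk(final Map<String, Double> coverageByModifiedClass) {
        if (coverageByModifiedClass.isEmpty()) {
            return 0.0;
        }
        double risk = 0.0;
        for (final double coverage : coverageByModifiedClass.values()) {
            risk += 1.0 - coverage;
        }
        return risk / coverageByModifiedClass.size();
    }
}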

I came up with a very basic tool, Corian (Commit Risk Analyzer), that simply fetches the Cobertura coverage percentage for classes that were modified in Subversion over a given number of past days.

Do you know of any method or tool for estimating the risk that a series of revisions could have introduced in a code base?

Wednesday, August 27, 2008

Bauhaus & Software Development

It took me a while to realize this, but I finally noticed the deep similarities between software development and the Bauhaus school of design. My CS teacher and mentor, who was knowledgeable about almost everything, had a particular penchant for the Bauhaus: it only took me 16 years to grok why...

Quoting Wikipedia, "one of the main objectives of the Bauhaus was to unify art, craft, and technology". Is this unification not realized in software development? In fact, should this unification not be the basis of successful, satisfying and fulfilling endeavors in this field?

    Technology - This is the easiest one. Software development is obviously about technology, as the concrete manifestation of scientific and engineering progress. The smaller the transistors, the denser the processors, the more powerful the computers, the happier the software developers!

    Craft - Uncle Bob speaks about it better than I could ever dream of. He has just proposed a fifth element for the Agile Manifesto:
    Craftsmanship over Execution

    Most software development teams execute, but they don’t take care. We value execution, but we value craftsmanship more.


    Art - The connection between software development and art is often controversial. Here, I will let Kent Beck convince you, with an excerpt of Implementation Patterns:
    Aesthetics engage more of your brain than strictly linear logical thought. Once you have cultivated your sense of the aesthetics of code, the aesthetic impressions you receive of your code is valuable feedback about the quality of the code.

A superficial look at Bauhaus buildings or paintings may give an impression of coldness and impersonality. But if you look again while keeping in mind the objective of unifying art, craft and technology in a harmonious design, you will see things differently.

Try with this:


Or that:

And now with that:

Tuesday, August 26, 2008

Just Read: Managing Humans


Michael Lopp's capacity to deconstruct and analyze every aspect of both software engineering management and nerd internal mechanics is simply outstanding.

This book is not only insightful and amusing, but is a looking glass through which all the intricacies of managing humans are revealed.

A. Must. Read.

Blog Inaction

Larry O'Brien said it better than I could ever formulate it, so here you go:
I've been busier than some metaphorical thing in some metaphorical place where things are really busy.

And here is what kept me and is still keeping me busy those days:

Monday, August 18, 2008

Silverfight

The Register is running what seems to be a balanced review of the yet-to-come Microsoft Silverlight 2.0.

It seems balanced because you get ten pros and ten cons, which might suggest that adopting Silverlight is merely a matter of taste (XAML is attractive) or politics (like for the NBC Olympics). But I think that, if you weigh the different pros and cons, you might end up with a balance that leans to a particular side (I will let you guess which one).

The availability of designer tools that run on the Macintosh platform will certainly be critical if Microsoft wants to entice designers out of the Macromedia world.

Similarly, the heroic efforts deployed in Moonlight to make Silverlight cross platform will be key to the overall success of this, otherwise proprietary, platform.

As of today, here is how a simple example comparing Silverlight and Flash runs on my machine:

Yep, this is a big empty white box, with statistics in the status bar about how fast Silverlight renders it. If Microsoft is serious about dethroning Flash, which I am not entirely convinced of, they will have to go past this kind of... emptiness.

Wednesday, August 13, 2008

Just Read: Implementation Patterns

If you are an aficionado of formal pattern books, you might be disappointed by the latest book from Kent Beck. This book is more about a mentor sharing his experience than a succession of diagrams, code samples and rules for when to apply a particular pattern or not.

In this book, Kent clearly took the decision to engage the reader in a direct manner: there is no fluff, just the nitty-gritty. Just years of experience and experiments summarized in fewer than 150 pages. I leave it to your imagination to figure out how dense the book is. It is sometimes so rich that I came to wish a little bit of code or a neat hand-drawn schema could have been added here and there, just to make a particular pattern more digestible for a slow brain like mine.

There is an intense tension in this book: I was fulminating after reading some takes from Kent where he states counterintuitive approaches as far as defensive coding is concerned. And then I reached the last part ("Evolving Frameworks") and it struck me: so far in the book, Kent was not coding for public APIs. And it struck me again: Kent is a master, he adapts his way of coding to the context.

Let Kent Beck talk to you: buy this short book and listen to what he wants to share with you.

Monday, August 11, 2008

GMail Auto-resizing Rules!

Just a quick "thank you" to GMail's team for the new auto-resizing feature that makes the edit box use the available screen real estate efficiently.

This is good.

Wednesday, August 06, 2008

Cuil Geared To Hidden Success

It took me a while to realize this, but Cuil, the new flashy search engine that randomly displays porn and has a name that looks like the French word for an unspeakable part of the male anatomy, bears in its name the inevitable fate of a hidden success.

Let me explain. To succeed on the webernets, you need two Os in your domain name. Amazon made the risky choice of an M-separated double A, and they surprisingly do well, so far. But anyway, if we narrow the field down to search engines only, it is pretty obvious that the double O is de rigueur for success.

So what on Earth happened to the guys at Cuil? Well, you see, the trick is in the pronunciation. It is pronounced "cool". Here you go! The double O, which was hidden in the domain name, becomes visible when you say it.

Consequently, a hidden double O can only lead to a hidden success. Which is not a failure, by the way.

Tuesday, July 29, 2008

Tainted Heroes?

I am perplexed by Microsoft's recent {Open Source} Heroes campaign.

Believe me, I am working very hard to fight any bias against the stuff that comes from Redmond (out of respect for the great people they have and the cool stuff they are cooking in their labs). But for this campaign, I cannot help but smell something fishy. Maybe because I am (lightly) active in the .NET open source community.

Anyway, for this campaign, Microsoft was granting a Hack Pack containing trial copies of Windows Server 2008 and Visual Studio 2008 to open source developers all around the world. How is that going to help the .NET open source community? I do not have the faintest idea. But I can easily see how it can benefit Microsoft, especially when the trial period is over and the hero needs to buy a license.

I do believe there are real open source minded people at Microsoft. I also believe they are not allowed to come anywhere near the marketing department. They probably wear a special dress and have "to ring bells to warn people of their presence" too.

My open source experience in .NET land, compared to the one I have in Java-lala-land, suggests that the last thing Microsoft developers need is yet another tool-lock-in scheme. I find .NET developers deeply engrossed with their IDE, sorry, with the IDE, to the extent that any project that is not formatted and designed for Visual Studio is a real challenge.

A few years ago, I made the choice to use SharpDevelop for developing NxBRE. The first versions of this IDE were pretty rough, but I was immediately convinced by the fact that a version of SharpDevelop was not tied to a particular version of .NET. This establishes the necessary distinction between the CLR and the SDK on one hand, and the development environment on the other.

So what about our heroes? Open source developers do not need time-trialed (or not) vendor-specific tooling. They need 36-hour days and 2 extra arms, something Microsoft can do nothing about. They also need a community of like-minded developers, something Microsoft should stop smothering and start fostering.

Sunday, July 27, 2008

Ouch of the day

This book is built on a rather fragile premise: that good code matters. I have seen too much ugly code make too much money to believe that quality of code is either necessary or sufficient for commercial success or widespread use. However, I still believe that quality of code matters even if it doesn't provide control over the future. Businesses that are able to develop and release with confidence, shift direction in response to opportunities and competition, and maintain positive morale through challenges and setbacks will tend to be more successful than businesses with shoddy, buggy code.

Thursday, July 24, 2008

GR8 JRB MOO!

As this screen shot shows, Mercury does not care if I have a 24" monitor. It is going to be 200 pixels and not one more, sir! Get over it, sir!



Has anybody tested this page? In the same vein: has anybody tested Outlook Web Access 2007? It behaves like a web mail from the early 2000s.

Why do expensive enterprise tools feel they must be sub-standard in their GUIs? Why is it that as soon as a product is deemed enterprise-grade, the game changes and it suddenly is all about paying six-figure license fees for something an open source project would feel embarrassed about?

Will vendors react or do they just try to milk the cow until it decides to kick them away? Or does the cow care at all? Maybe the cow likes arid and painful tools so they feel enterprisey?

Moo.

Wednesday, July 23, 2008

YAJIHP

I am currently on an interviewing spree, which is both exhilarating and exhausting. My colleague Josh and I have established a pretty nice routine: he asks smart questions and I ask stupid ones with my outrageous accent.

I think I should share some tips that could be interesting to anyone planning to go through Java job interviews:
  • Java 5 is not new anymore. Maybe the Fortune 500 company you have been working for is still on JDK 1.4 (or earlier), but please do not call Java 5 "new". In case you do not know it, it is already in its end-of-life transition period.
  • Do not oversell yourself. If you grant yourself 9.5/10 on Java knowledge or title yourself Senior Something, expect advanced questions on threading, concurrency or the JVM memory model. If you have not read Effective Java or Java Concurrency In Practice, either postpone the interview or consider refactoring your displayed proficiency level.
  • No CS but no BS. We are not Google, so we will not question you on Java Data Structures and Algorithms. This said, we expect you to know the core collections and what they are good for, even roughly. Even if software engineering is closer to plumbing than computer science, this kind of basic knowledge is necessary.
  • Out of your past box. If you have been consulting for a large corporation, you have certainly been exposed to the home-grown Mother Of All Frameworks. That is great, but we do not really care because, even if you have used the Mother Of All Frameworks for eight years, it is disposable knowledge. Do not refer to it as an answer to the questions we ask.
  • Idle the IDE. It is great that you have been using this particular plug-in of this particular IDE, but an IDE is a tool and there are many of them. Never ever give the impression that your IDE has been driving your development activity. It surely supports it, with refactoring aids for example, but your first answer should not be "with this plug-in...".
I might update this post in the future after going through some more interviews.

Oh, YAJIHP? Yet Another Java Interview Hints Post.

Sunday, July 20, 2008

De Majestic Car

I have no interest at all in cars: they are boring means of transportation that make me lose my time and sometimes my temper.

But I knew that my neighbor had this in his garage:

And since he is selling his place and will move soon, I figured this week-end was a good time to dare ask for a discovery tour!

Needless to say, even without a flux capacitor or a Mr. Fusion, my eyes were all sparkling at the silver gray splendor, and that was not all due to the stainless steel body.

Life is like this: one achievement at a time. "Get acquainted with a DeLorean" is now ticked off my list.

Saturday, July 19, 2008

Gnirps is Bliss

Four years ago, I was spending most of my day job helping people move away from proprietary J2EE application servers in favor of the JBoss platform. It was such a relief in many respects: financially, as millions of Euros of per-CPU licensing were saved; support-wise, as the typical three-levels-of-escalation inane support was replaced with a geeky, efficient one; and technically, as closed-source monoliths were replaced with the elegant micro-kernel architecture of JBoss. Of course, there were glitches, like the Unfixable (universal?) Class Loader, but they were compensated by great features (like the dynamic proxy client invocation stack). And better, way better than any documentation, the source code was always accessible.

Nowadays, I am spending most of my day job finding ways to part from JBoss in favor of lighter approaches like Spring and Tomcat. And it is a relief because, despite the thin nature of its kernel, JBoss turned into the tightly-coupled bloatware J2EE seems to mandate as the ideal server platform. I find it very ironic that what was hip and enjoyable four years ago has turned into such a subject of pain and wrath today. But yes, there is no doubt that thinner, simpler and lighter is the way to go. Scarcity makes code better. Loose coupling makes platforms better.

So what is next?

Let us pretend we are in 2012. SpringSource has been bought by ${boring-company}. Like Marc Fleury, Rod Johnson has come back to his true passion: music. They might even have founded a band together (name: La Cucumber Picante). We are now moving away from Spring to Gnirps, a project founded by some dissidents after ${boring-company} decided to change the cover image of the love book.

Gnirps is both a language and a framework that runs on the JVM (there is still no better cross-platform execution environment). The language is a fusion of Nice and Einstein, with concurrency and distribution concepts borrowed from Erlang. The framework is still heavily based on Spring, which has been freed from all the J2EE compatibility layers and classes. It is now mainly focused on OSGI and SCA, and has kept only one way of doing things where Spring used to support three or four back in 2008.

Aahh, finally, thanks to Gnirps, life is such bliss.

Wait. Until the new and improved version of...

Wednesday, July 16, 2008

Breaking The Pipe

I needed to generate broken pipe exceptions on one of my servers. But how to do so without resorting to fiddling with a browser?

I have written the following, which does the trick and consistently generates at least one broken pipe exception when it hits the passed URL.


import java.net.HttpURLConnection;
import java.net.URL;

private static void pipeBreaker(final URL url) {
    // Hammer the URL repeatedly: each connection pretends it will read the
    // response, then gives up almost immediately, leaving the server writing
    // to a connection that is gone
    for (int i = 0; i < 50; i++) {
        try {
            final HttpURLConnection connection =
                (HttpURLConnection) url.openConnection();

            // a 1ms read timeout ensures we bail out long before the
            // server has delivered the full response
            connection.setReadTimeout(1);
            connection.setDoOutput(false);
            connection.setDoInput(true);
            connection.setRequestMethod("GET");
            connection.connect();
            connection.getInputStream().read();
        } catch (final Exception e) {
            // expected: read timeouts and broken connections are the goal
        }
    }
}


All this code does is give the server the impression that it is going to drain the response, but it times out very quickly, and does so in a repeated manner.

That is how I break my pipes. Now, how do you break yours? Do you have a better way?

Monday, July 14, 2008

Generate, Then Degenerate?

My friend Celso Gonzalez and some guy named Scott Ambler (just kidding Scott, this is an old Marc Fleury joke) have released a thought-provoking article in the June issue of Better Software, titled "Agile Model-Driven Development".

Beyond using formal models as bootstrapping artifacts for TDD, so that it can scale and survive team distribution, one key aspect of AMDD resides in the skilled transformation that happens in the generation chain.

This is a subject dear to my heart. My very first published article was about a model-to-code generation tool, and I had the chance to initiate and gravitate around the roll-out of a pragmatic MDA framework (notice the emphasis on pragmatic: I have not drunk the MDA kool-aid).

To make this skilled transformation happen, you have to capture architectural decisions so a model will translate into an application that is layered according to some standards.

But what happens after that? Developers' hands still have to run over their keyboards and, as business decisions come and go, they will be likely to take shortcuts and alter the original architecture vision to satisfy some pressing needs. From there on, all bets are off... The original architectural intention will be compromised sooner or later. What was generated will degenerate.

We do have some pretty nice analysis tools to help us ensure that code keeps on satisfying pre-defined coding standards, avoids common bugs and shoots for acceptable levels of test coverage.

But what about architecture?

The June issue of Computer talks about a tool, named SAVE, which could help... save the day! SAVE, an acronym for Software Architecture Visualization and Evaluation, started as a Fraunhofer-supported thesis in 2004 and looks really promising.

I have tried to find the Eclipse update site of the SAVE plug-in, but to no avail (a generic term like "save" does not give Google much of a hint about what I am really looking for).

Does anybody know where to get this tool? And does anybody know if it is possible to run SAVE in the build process, like other code base analyzers? Indeed, I think it is important that any IDE tool has a counterpart that is usable at build time.

Architectural verification sounds tasty. Is it too good to be true?

Friday, July 11, 2008

JCR Transport 2.0.0-M1 Released!

I am happy to announce the first milestone release of the upcoming JCR Transport for Mule 2.

Though there are still open tasks in the roadmap of the 2.0.0 final release, the transport is already able to support all the features of the 1.x branch.

Alongside the specific JCR XML schema for the configuration, you will also appreciate the improved support for streaming.

Enjoy the full distribution or the Maven artifact. And please report glitches to JIRA.

Saturday, July 05, 2008

Its name is JBound

I needed a little tool to torture the public domain model objects of a project I am working on with boundary values. There is excellent test generation software out there, like Jtest or the late AgitarOne, but I needed something really scaled down that would act as a complement to existing unit tests.

Enter JBound, a mere six-class utility whose sole purpose in life is to inject unfriendly values, like extremes and nulls, into the constructors and setters of a class. It also calls getters and standard methods like equals, hashCode and toString, just to be sure that the created object is not completely ravaged inside.

Using JBound is easy:
@Test
public void fullyExerciseBeans() {
    JBound.run(new Exercises() {
        {
            forClasses(MutableBean.class, ImmutableBean.class);
        }
    });
}
By no means should JBound be used as an artificial means of creating high test coverage. You still need to write sensible tests, especially for methods like equals if your object identity is strictly controlled. Moreover, it is debatable whether JBound is useful for non-public APIs, as internal classes are less prone to be exposed to end-user abuse.

Is JBound original? Not at all! But, as I have been baffled by the amount of glitches this tiny tool has found in my project, I thought it could be interesting to share it. Maybe JBound will prove useful to you too. Maybe you will even feel you can make it better. If this is the case, please tell me how and I will welcome you aboard!

Friday, June 27, 2008

Oh Management

Sure enough, the fact that the software industry is crippled by technologically-challenged project managers is mind-boggling. But I have recently been flabbergasted by the shameless disclaimer of such a manager who, in the middle of a crucial meeting directly concerning her project, announced: "Sorry, I am not technical". Hence all the discussion was sheer geeky mumbo-jumbo to her and she could not grasp that her project was going astray.

This is sad. Not even funny. Just plain sad. Direct management of software developers cannot be that clueless.

But I have prepared my childish revenge. Next time a manager asks me for estimates, I will pretend not to understand and will feign the following excuse: "Sorry, I am not managerial".

Saturday, June 21, 2008

A Zephyr is blowing

My attention has recently been drawn to a new tool named Zephyr, which dubs itself "The Next Generation Test Management Tool".

My very first impression is that the Zephyr team has done a great job putting online a complete workable demonstration environment. This is great for quickly delving into what the software is really about and making sure that "it works on my machine", which is crucial nowadays, as the corporate IT landscape is much more diverse than the traditional Windows + Internet Explorer desktop environment it used to be.

As far as platforms are concerned, Zephyr seems to run "on standard Windows desktop" (sic). I reckon they meant "Windows server", though I have seen production systems on desktops! Hence, I could not test the ease of install nor investigate the technologies used. Neither could I estimate its capacity to scale or work across a WAN.

The second impression is that the graphical interface is compelling, if not mind-blowing. These guys did a great job of making arid form filling an almost bearable task. Indeed, the tool is brainy enough to avoid manual data copying and is able to pre-fill or filter data according to the context of the task.

Zephyr is also smartly aware of agile principles, as the notion of Scrum's sprint is hard-wired into its dashboards. And dashboards are where Zephyr really shines. I love dashboards of all sorts, even complicated ones. But Zephyr's are truly awesome:

The dashboards and workspaces are tailored to the user profile, which makes navigation easier because you do not have to filter out a lot of non-relevant features. The tool does a great job at integrating and aggregating all sorts of QA-related data, including data coming from defect tracking systems.

And I think this is where one of the challenges Zephyr will face resides. It currently connects to Bugzilla only, but there are many other defect trackers out there. Moreover, companies have developed a habit of using their defect tracking system as a management tool for QA. How is Zephyr going to convert these users to this new platform? How disruptive for the practice would it be to move from using, say, JIRA to using Zephyr?

The thing that really truly bugs me about this tool is that it does not go further than the traditional "QA monkey" work, in which human beings are presented with a list of actions to perform and report the results thereof. That a tool with an agile penchant does not incite people to evolve their QA practices towards more automation is flabbergasting. Where is the Selenium integration? Where is the FIT connector? Though manual QA will never be fully replaced, at least supporting a blend of automated and non-automated tasks would be a great first step. What the software industry really needs is lazy QA teams who mainly use their brains to work on automating their tasks!

To finish on a positive note, Zephyr takes integration seriously. It exposes JSON and REST APIs, the assurance that, if you opt for this tool, you will not end up with yet another instant legacy application. This is something I would like to see more of in so-called enterprise-grade applications!

If you are looking for ways to improve your QA management, I can only recommend that you get Zephyr now, as the free 3-user license will allow you to see what this "Next Generation Test Management Tool" can do for you.

UPDATE 31-MAY-2009: Zephyr version 2.5 now comes with test automation features that include the ZBot technology.


ZBot allows you to execute testing scripts on remote machines and aggregate all results back in Zephyr. This is a great move, which addresses my concerns about the lack of support for automation in the previous releases.

Sunday, June 15, 2008

Best. Game. Ever.



Ok, so I am an old timer and there are plenty of great games nowadays, so it is probably not the best game ever.

But Carrier Command is a true jewel of playability combined with a perfect mix of strategy and action. This is rare. Moreover, it is the only game I have ever played 11 hours uninterrupted: it was 20 years ago and I still remember this all-night session!

Friday, June 06, 2008

Just Read: Dreaming in Code


Dreaming in Code is a scary book, the kind of book that makes you wonder if it is really wise to keep pursuing the vain ambition of writing software. By telling the story of Chandler, an open source project aimed at revolutionizing Personal Information Management tools, the author takes us deep into the quicksand of software development.

The most daunting aspect of the book is the following: if the best developers in the world, gathered together under the supervision of a level 5 leader (Mitch Kapor), struggle to build software just like the rest of us mortals do in our pesky daily jobs, then is there any hope?

Maybe hope lies in the unintentional software we build while intending to build something else? We got Ruby on Rails, Blogger and Flickr that way, after all.

Saturday, May 24, 2008

Heron, Gentil Heron

This is a follow-up to my last rant about what an unhappy Java developer I was on Leopard. I have just had my first week of work with Hardy Heron running on my MacBook Pro and I am so glad I went through the few hours of setup and configuration: this bird really flies!

Gone are the feelings of clunkiness and resistance Leopard gave me: things flow so easily in Ubuntu that I actually forget about the OS. Of course, I turned off all the fancy schmancy visual effects and activated just what is needed to jump between applications and desktops with a few keystrokes.

I can now enjoy standard Java JDKs, too! It is a pretty good feeling to be back in 2008.

In the migration, I have lost Entourage, which is in fact probably a blessing. I now use Thunderbird connected to Exchange via IMAP and keep a Firefox tab open on the Outlook Web Access 2007 (pre-web 2.0) interface to access my calendar.

The only application I will surely miss is the excellent OmniGraffle Pro. Any decent alternative on Linux?

So after this trial, I have decided, gentle heron, that I will not pluck the feathers off your head.

Friday, May 23, 2008

Unit Tests: No Future?

Industry expert Andrew Binstock has just posted an entry on his blog titled "Is the popularity of unit tests waning?", where he discusses the stagnating state of the practice of unit testing.

Andrew asks the crucial question of why we ended up in this situation. I do not pretend to have the answer, but here are some patterns I have observed which, I think, could decrease the appeal of unit testing.

Bad unit testing practices

It is very easy to write fragile unit tests, for example by using stubs when mocks would be enough, or by creating time-sensitive tests that work intermittently or stop working after a while. Similarly, it is common to see integration-like tests slip into the realm of unit testing, making these tests fragile, slow and dependent on external resources.
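As a made-up illustration of the time-sensitive kind, consider an event object that stamps itself with the system clock: asserting on real elapsed time works only intermittently, while handing the clock in makes the test deterministic:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TimeSensitivityExample {

    // Testable design: the clock value is handed in instead of being
    // grabbed from System.currentTimeMillis() inside the constructor
    static class Event {
        final long timestamp;

        Event(final long clock) {
            timestamp = clock;
        }
    }

    @Test
    public void timestampIsDeterministic() {
        // No sleeps, no tolerance windows: the test controls time entirely
        final Event event = new Event(1234567890L);
        assertEquals(1234567890L, event.timestamp);
    }
}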

These bad practices tend to lead to a decrease of confidence in unit tests, which can then lead to a progressive reduction of their usage. For example, a programmer can decide he will not write any tests for DAOs anymore after struggling with poorly written ones.


Bad software design

There is a tight relationship between good code design and testability. I have already blogged about how increasing test coverage can lead to a better design: unfortunately, not all developers are ready to revisit their design to make their code more testable.

To be fair, our languages and frameworks often force us to write code that is hard or uninteresting to test. Who wants to write unit tests for infamous JavaBeans getters and setters? Not Allen Holub, for sure!

Other programmatic idioms, like equals/hashCode, often exhibit a high cyclomatic complexity: writing complete tests for those would be tedious, unless one uses test-generating products like Agitator, from the late Agitar, or Jtest, from Parasoft.
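A glance at a typical IDE-generated equals makes the point; assuming a hypothetical Person class with a single name field, every branch below is a test case waiting to be written:

@Override
public boolean equals(final Object obj) {
    if (this == obj) return true;                    // branch 1: identity
    if (obj == null) return false;                   // branch 2: null
    if (getClass() != obj.getClass()) return false;  // branch 3: type
    final Person other = (Person) obj;
    if (name == null) {
        if (other.name != null) return false;        // branch 4: null field
    } else if (!name.equals(other.name)) {
        return false;                                // branch 5: differing field
    }
    return true;
}

And that is with a single field: each additional field adds two more branches.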


Test benefits blindness

Management tends to be blind to the benefits of unit testing and very aware of its costs. A project I know has been deemed to have gone overboard with unit testing. At the same time, this project has the code base that is the most maintainable, flexible and fun to work with. It seems something prevents businesses from seeing the added value of solid test practices, as if it was all about some obscure geeky self-satisfaction activity (like writing "perfect code" would be).


No Future?

Is there any hope, then? I think that, as for most of our problems on this little planet, education is the key. Whether we look at software engineers or business managers, there is still a great lack of education about unit testing, how it works and why it pays back.

Wednesday, May 21, 2008

SVN? VoilaSVN!

If you are using Subversion as your source control management system, there is now a way to go further with it and turn it into a full-fledged project and knowledge management system.

Indeed, Arcetis has just released the first version of VoilaSVN Enterprise Edition, which "provides the tools to successfully manage your projects, to coordinate all your resources and capitalise on the knowledge of your team".

VoilaSVN leverages GWT to deliver a smooth web interface, which is pretty nice for a tool into which you will have to enter and mine data.

There is also a free edition, if you have simpler needs. So give it a try and, voila!, see how far you can go with Subversion.

Friday, May 16, 2008

Just Read: Release It! and ThoughtWorks Anthology


If you intend to write software that lasts and fares well during its journey, give yourself a hand and read this book. From actual situations, the author derives very concrete recommendations towards writing scalable and operable applications.

As far as I am concerned, reading this book has been an exhilarating experience. Though on a much smaller scale, I have experienced the same pains and come to similar conclusions as the author. Reading some pages entailed sheer moments of excitement, very much like: "I have been saying the same!".

All in all, I am grateful that Michael Nygard has written such an authoritative book on this crucial matter: now, no one will be allowed to say "I did not know" anymore.



This book does a pretty good job at offering insights and real-world feedback from top-notch ThoughtWorkers on a variety of software development subjects (coding, designing, building, QA). For a collection of essays from different authors, the overall uniformity has been pretty well maintained in both style and content.

As a software developer, I have found Neal Ford's "Polyglot Programming" and Jeff Bay's "Object Calisthenics" to be the most compelling pieces of this book.

At the end of the day, the real interest of this anthology resides in the fact that a small consulting firm has decided to share its sheer passion for software in a direct and hype-free manner. How many of the big ones out there could do the same?

Saturday, May 03, 2008

I don't know... yet!

In movies, computers know everything. Just ask the computer and you will get an accurate answer right now. For us developers, who have to deliver on the promises made by Hollywood, programming software that gives correct answers immediately is very easy. Unfortunately, it is also very costly.

Giving the correct answer generally translates into querying the one source of truth of an application: the database. This is easy.

Sometimes the database is far away from the access layer the caller interacts with, whether it is a web tier or a service tier. No problem! It is very easy to ask the caller to hold his breath until a long chain of synchronous calls gathers all the correct data needed and finally presents it as a glorious reply. Usually the caller has enough breath to wait until it has to get some fresh air again (some call this a timeout).

All this is fairly easy but unfortunately very costly. Contention to access the centralized source of blessed truth increases as more and more callers want to access it. If, for the sins of the application, it becomes successful, the number of callers holding their breath while waiting will increase dramatically.

There comes a time when the application would love to have callers with smaller lungs, so they could time out faster and free the threads they are holding while sitting idle. But they do not: the dozens of seconds they wait are an eternity for a server. Instead of being stuck waiting for a remote service or database to deliver the ultimate answer, the application would like to afford replying:
I don't know... yet!

At least, this imprecise but immediate answer would allow the server to release threads almost as fast as they come. This would give the application the luxury of replying later, whenever possible...
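Here is a minimal sketch of this "reply later" approach, assuming a plain in-memory service (a real one would involve messaging or a polling/callback contract with the caller):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EventualAnswerService {
    private final ExecutorService workers = Executors.newFixedThreadPool(10);
    private final Map<String, String> answers = new ConcurrentHashMap<String, String>();

    // Returns a ticket immediately: the caller's thread is released at once
    public String ask(final String question) {
        final String ticket = UUID.randomUUID().toString();
        workers.submit(new Runnable() {
            public void run() {
                answers.put(ticket, expensiveLookup(question));
            }
        });
        return ticket;
    }

    // The caller comes back later with its ticket: until the work is done,
    // the honest answer is the title of this post
    public String answerFor(final String ticket) {
        final String answer = answers.get(ticket);
        return answer != null ? answer : "I don't know... yet!";
    }

    private String expensiveLookup(final String question) {
        return "42"; // stand-in for the long chain of synchronous calls
    }
}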

Of course, such a paradigm shift is anything but transparent for the caller: he must learn to deal with imprecision. He must embrace asynchrony. He must survive eventual consistency.

We now have the tools, from the web tier to the back end. Can we succeed in this paradigm shift?

I don't know... yet!

Saturday, April 26, 2008

Not Dash Bored

I love dashboards. You probably got that from my previous post. If I had enough screens, I would be surrounded by dashboards of all kinds: work task lists, continuous integration server control panels, project metrics sites and server monitors.

Maybe this comes from the time when I was handling another kind of dashboard, one on which my life directly depended!

(Yes, I am a pre-glass cockpit flying dinosaur)

Dashboards are great because they present a synthetic view of a situation in a form that is visually expressive and does not require a lot of concentrated attention to capture relevant and crucial information.

The most recent dashboard I built exposes particular aspects of several instances of Mule ESB I have under my control. Through JMX, Mule exposes a wealth of statistics about the different components of a particular instance. Here is a small portion of the HTML console that displays this information:


This is more information than the brain, or at least my brain, can digest efficiently and quickly enough. Hence, I created a simplified view that represents the variations of message routing statistics on a selection of components:


The components are simply selected by name: I decided to prefix the important ones with "process" and "dispatch" and derived a simple selection pattern from there. I use different colors to show different states:
  • Green: no activity,
  • Yellow: at least one message went through,
  • Orange: the backing event queue has been resized up,
  • Red: an error has been routed,
  • Gray: no delta available (first call of the dashboard),
  • Black: component statistics unavailable.
Extra symbols represent the non-active states of a component, like paused or stopped. The dashboard itself is the aggregation (frames, yuck!) of the HTML output of a specific component deployed alongside the other ones.
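To give an idea of the plumbing behind it, here is a simplified sketch of the polling side; the MBean domain, the name selection and the "EventsReceived" attribute are all hypothetical, as the real Mule MBean names are different and much richer:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ComponentStatisticsPoller {

    // Queries the counters of all components whose name matches the
    // selection pattern, so deltas (and colors) can be computed between polls
    public void poll(final MBeanServerConnection connection) throws Exception {
        final Set<ObjectName> names =
            connection.queryNames(new ObjectName("Mule:*"), null);

        for (final ObjectName name : names) {
            // Hypothetical selection on the component name prefix
            final String component = name.getKeyProperty("name");
            if (component == null || !component.startsWith("process")) {
                continue;
            }
            final long count = (Long) connection.getAttribute(name, "EventsReceived");
            // ... compare count with the previous poll's value and pick a color
        }
    }
}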

Of course, this does not compare to the professional-grade monitoring tools you can buy from MuleSource, but it is already handy for deployments of limited scope and criticality.

I think the most interesting aspect of this dashboard is how quickly you develop the ability to tell a normal behavior pattern from a faulty one. It is pretty much like reading the matrix undecoded... Now how could you get bored with such a board!

UPDATE 03-MAY-2008: This dashboard is now available on MuleForge.

Sunday, April 20, 2008

Building Value

This week, one of my colleagues (a guy named Josh) was all grumpy about the time he spent adding documentation to his projects' Maven sites so operations could deploy his application properly. This made me reflect on how great this tool is, not only for building software but for building value in general.

By giving developers the opportunity to document in the same environment as the one where they code, and by embedding the HTML rendition of this documentation in the project site alongside all the other technical reports, Maven presents the stakeholders with an overview of the value built by a project.

Value is a vague term, so let me be more specific:
  • Intrinsic value: technical metrics, like the ones coming from static bug analysis, package dependencies, test coverage or code style compliance, represent the core value of the code in terms of quality, flexibility and maintainability.
  • Business value: results from acceptance testing tools like Selenium or FitNesse represent the capacity of the application to satisfy business requirements.
  • Corporate value: on top of the auto-generated technical documentation from the code base, all the extra documentation that is added, whether it is installation guides, monitoring procedures or deployment diagrams, brings value to the company as a whole, from operation teams to new recruits.
With all this goodness available, I am still baffled by the limited number of managers who pay attention to the reports generated by Maven. I imagine that they might be too technical for "generic" managers who are in the business of software as if they were in the business of gravel and stone delivery. Moreover, the usual metrics for software development are generally focused only on feature delivery and deadlines.

This said, thanks to continuous integration and the dashboard plugin, I believe it is possible to catch the interest of a broader audience, because it is now possible to display trends instead of static values: management understands trends.

For example, a flat test coverage value is meaningless, but a trend that shows it increasing means that quality, and thus value, does the same. Similarly, comparing projects based on their raw metrics is nonsense, while comparing their trends makes sense.

Have you had any good experience sharing Maven sites with management? Did they get the feeling that value was being built?

Saturday, April 12, 2008

Abstraction First

When designing services, the common wisdom is to opt for a contract-first approach instead of an implementation-first one. There is no question that this is a valid approach, but I think the emphasis should be put on the necessity to design a good abstraction first.

Consider this: technical implementations, especially in strongly typed languages, often result in a pollution of the client model by the service model. Interfaces or stubs used by a client to perform remote invocations can very easily become mixed with its own domain.

This creates a tension between a service provider and its consumers, as each often tends to pull the contract too much to its side because it considers the local crystallization of the contract as a part of its model. Exacerbated, this tendency can result in tight coupling between both parties.

Let me give you an old but typical example from my tumultuous past.

A little more than a decade ago, I worked on a corporate centralized contact management system. It was supposed to serve contact details (persons, organizations, addresses and whatnot) to all the applications in use in the company, including the secretaries' word processors. Admittedly an interesting project, it quickly turned into a death march. I soon learned this was the sixth attempt at such an endeavor and that people wanted to see how a n00b like me would fare.

I then realized that each department wanted very different things out of this system and that their views could not be reconciled. I ended up shipping a version that was only usable by the secretaries, which I think was a smart move, as you must always be good to them!

The next n00b was assigned the creation of version 7 of the contact manager, built on what I did, with the goal of generalizing it to all departments. Of course it failed, and the project disappeared forever. Maybe the mythical number seven had to be reached before the whole thing could be killed.


At that time, if I had known better, I think the best I could have done would have been to advocate for the deployment of an LDAP provider. Indeed, a directory server accessed via this protocol does not try to be everything for everybody and does not present a contract that any application would consider using in its own domain model. Yet it offers a simple and powerful abstraction that an application can use to query directory information and then tie it into its own object model.
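
As an illustration, here is a minimal sketch of such a query using the standard JNDI API; the directory host, base DN, search filter and attribute names are all hypothetical:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class DirectoryLookup {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389");

            DirContext ctx = new InitialDirContext(env);
            try {
                SearchControls controls = new SearchControls();
                controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
                // Ask only for the attributes the application actually cares about
                controls.setReturningAttributes(new String[] { "cn", "mail" });

                NamingEnumeration<SearchResult> results =
                    ctx.search("ou=people,dc=example,dc=com", "(sn=Smith)", controls);
                while (results.hasMore()) {
                    // Tie the returned attributes into your own object model here
                    System.out.println(results.next().getAttributes());
                }
            } finally {
                ctx.close();
            }
        }
    }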

Let me quote Uncle Bob:
"Abstraction is the elimination of the irrelevant and the amplification of the essential"

To me this sounds almost like a caricature, where the most prominent features are made so obvious and visible that there is no doubt left about what really matters. A good abstraction for a service should then translate into clear intents and well-defined boundaries, which would guide the creation of valuable contracts.

Finally, for all these existing services that want to invade your application domain, there are fortunately ways to remain insulated. For example, you could:
  • use dynamic language scriptlets to perform remote invocations and set values on your local domain,
  • use Dozer to map client stubs to your own objects,
  • consider invocation responses as raw XML and extract values out of them with XPath, as in the sketch after this list.
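
Here is a minimal sketch of that last option, using the JDK's built-in XPath support; the response payload and element names are made up for the example:

    import java.io.StringReader;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.xml.sax.InputSource;

    public class ResponseExtractor {
        public static void main(String[] args) throws Exception {
            // Raw XML response received from a remote service (hypothetical payload)
            String response =
                "<contact><name>Ada Lovelace</name><email>ada@example.com</email></contact>";

            XPath xpath = XPathFactory.newInstance().newXPath();
            // Extract only the values we need and feed them to our own domain objects,
            // without importing the service's stubs or schema-generated classes
            String name =
                xpath.evaluate("/contact/name", new InputSource(new StringReader(response)));
            String email =
                xpath.evaluate("/contact/email", new InputSource(new StringReader(response)));

            System.out.println(name + " <" + email + ">");
        }
    }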

Saturday, April 05, 2008

Fruit and coffea? I don't think so...

Last year, a guy named Marc Fleury ranted about how the Macintosh was not a suitable platform for software development. At the time I thought he just hated it because he is French, and French people hate everything. Let me quote him:

"a mac is like a bimbo". It looks good and shiny from a distance, you think you really want to try it. But once you do, after 2 weeks of "doing it" you are bored, bored to tears. Tired of everything, the pretty animation stuff, the big tatas, the transparent look and feel, the big tatas, the genie bullshit animation, the big tatas, the stuff that is different from windows "just to be different", the big tatas and the empty brains.

Nowadays, I am the one who daily (hourly) complains about how clunky and inappropriate Mac OS X is for Java developers. People around me roll their eyes and probably assume I just hate it because I am French too, and French people hate everything.

But, boy, since I started to use this platform professionally, how much I have found the Mac to be a hindrance to my development activities! How often the OS gets in my way and pulls me out of the state of flow I am in...

Here is a non-exhaustive list of my grievances, so you can decide if it is mere ranting or if there are some valid reasons for my grumbling:
  • Bitter Java: official JDK support is way behind other OSes. For example, the Apple JDK 6 was beta when I was on Tiger and disappeared on Leopard. Sure, I could follow the great work of Landon Fuller with OpenJDK, but I honestly do not have time to invest in such endeavors.
  • Keyboard support is a joke: even though I am using Quicksilver, I constantly have to grab the mouse to click this, highlight that or dismiss a pop-up. The last one really drives me nuts: why is it that the escape key does not always cancel a pop-up dialog?
  • Messy focus: there is always an application that steals the focus from my working window. Maybe this is the fault of badly behaving applications, but it is the first OS where this happens to me so often that I get annoyed by it. Is this OS making it easier for applications to become focus-rude?
  • Prone to freeze: the UI freezes very easily, to the point where I cannot even switch applications or invoke the task killer. It seems that a badly behaving application can very easily mess up the whole user interface, hence the whole OS.
  • Hidden BSD: it is hard to say this, but as far as overall stability is concerned, OS X reminds me of Windows ME sans Blue Screen of Death. I have to do a hard reboot at least once a week, whether because the screen saver locks me out forever or because the UI freezes to the point where I cannot do anything except sit on the power button. I have not seen this in any OS for almost a decade. And having this kind of behavior on an almost freshly re-installed machine, with minimal applications running, is a plain disappointment.
  • Finder sucks big time: for an OS that is supposed to be all about user experience, I find the Finder to be a complete disgrace. Try to create a new directory: it never ends up where you want it. Try to shift-select files with the keyboard: going up and down performs some counter-intuitive file selections. Then use the mouse to drag the selected files: they might end up where you drop them, or not. Instead of adding a new view in Leopard (Cover Flow), if only Apple had fixed the existing ones so they became really usable (who uses the insane column view?).
  • Flaky AirPort: I am constantly losing connectivity with my Wi-Fi router. Maybe my cheap D-Link router is the issue, but then why do my other non-Mac machines have no problem keeping their connections up and running for hours?

I will not mention Eclipse, which dies unexpectedly on Leopard while it was stable on Tiger (yes, I have configured the JVM memory parameters, thank you). I will not mention the disgrace that is Entourage, because it is a Microsoft product and Apple cannot be blamed for it. And I will not mention that my MacBook Pro hard drive fried just after a year (bye-bye warranty), which is the first time something like this has happened to me in the past ten years I have been working on laptops.

Nuf' said! So what are my platforms of choice for Java development? In order of preference: Kubuntu, Windows XP and... Mac OS X. But at home, I am very happy with the little white MacBook we use for browsing, e-mailing and managing photos. For this kind of home activity, having an OS that shows off is acceptable. For professional usage, the less the OS gets in your face, the better it is.

Wednesday, April 02, 2008

Healthy Health Checks?

Here is a little story that happened to a friend of mine a few years ago. Users of his web application started to complain about the system being broken and not responding anymore. The curious thing was that the operations team was not aware of any issue. After checking with them, it appeared that the monitor they had in place for this web application was simply checking whether an HTTP response was received. Any response. Even a 500 one!

This sounds naive and ridiculous, but setting up application monitoring is a little hairier than it appears at first glance. Consider another, more recent case that came to my attention: the application was still replying positively to its health check monitor but was not functioning properly, as it was unable to access required file system resources. Again, the end users were affected while the monitoring was happily receiving correct responses from the application.

So how can we, software developers, create health checks that operations can rely on?

Taking the canonical multi-tiered web application as an example, the following schema shows a health check that is too shallow to be useful (in red) and one that exercises the full layer depth (in green).

While it is clear that the shallow approach brings little value as far as end-user quality of service is concerned, why don't we always shoot for the deep approach?

Well, if you consider how a serious load balancer appliance (like BIG-IP) works, you will realize that it performs health checks very regularly (by default, every 5 seconds) in order to have the most up-to-date view of the sanity of the members of the pools it handles. Bearing this in mind, if a health check request exercised the full depth of an application, you would add a permanent load to your system, increasing the strain on your diverse resources, down to the database itself. With a farm of n servers, the cumulative strain induced by the health check requests on all its members would quickly become non-negligible on any shared resource: a farm of 20 servers, each polled every 5 seconds, would for example hit the database with 4 extra deep requests per second, around the clock.

My take on this would be the following: create an internal watchdog that evaluates the sanity of the application at a reasonable pace, and report the current state of this watchdog whenever a monitor requests a health check from the application.

As shown in the above schema, the watchdog life cycle is decoupled from the health check one, which reduces the strain on the underlying resources while still allowing the monitoring environment to become aware of an application issue almost as soon as the application realizes it itself (because the monitor polling frequency can be kept high).
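
Here is a minimal sketch of this watchdog idea, assuming a plain servlet container; the checkDatabase and checkFileSystem methods are placeholders for real deep resource checks:

    import java.io.IOException;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class HealthCheckServlet extends HttpServlet {
        // Last known state: written by the watchdog, read by the health check
        private volatile boolean healthy = true;

        private ScheduledExecutorService watchdog;

        @Override
        public void init() {
            watchdog = Executors.newSingleThreadScheduledExecutor();
            // Evaluate the application sanity at a reasonable pace (every minute here),
            // independently of how often the load balancer polls the health check
            watchdog.scheduleWithFixedDelay(new Runnable() {
                public void run() {
                    healthy = checkDatabase() && checkFileSystem();
                }
            }, 0, 60, TimeUnit.SECONDS);
        }

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
            // Report the cached state: cheap to serve, even at a 5-second polling frequency
            if (healthy) {
                response.getWriter().write("OK");
            } else {
                response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            }
        }

        @Override
        public void destroy() {
            watchdog.shutdownNow();
        }

        // Placeholder deep checks: exercise the real resources here
        private boolean checkDatabase() { return true; }
        private boolean checkFileSystem() { return true; }
    }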

What is your own experience in this field and what is the path you have followed in order to build dependable health checks?

Saturday, March 29, 2008

Dumb XML

Beautiful Code is a book that cannot leave one indifferent. While reading it, I was really annoyed by a statement made by one of the authors:
What I always tell people is that XML documents are just big text strings. Therefore, it's usually easier to just write one out using StringBuffer rather than trying to build a DOM (Document Object Model) or using a special XML generator library.

I fully disagree with this position, because I have seen time and again the adverse results of such a simplistic approach to XML generation:
  • Malformed XML: basic string construction does not handle the escaping of general entities, element name validity or correctly balanced tags. I once had to deal with an XML document that was so broken I initially thought it was SGML. It was neither, and I ended up using regular expressions instead of SAX to parse it.
  • Invalid XML: applications that generate XML should be polite enough to validate the data they produce before sharing it with other applications. Back when DTDs were current, I followed the practice of adding the external declaration only once the document had been tested to be valid, like a proofing stamp. Of course, I also had to deal with an application that never considered it important to output XML that complied with its own schema!
  • Bad encoding: I realize that many developers live and work in a place where 7-bit ASCII is enough to represent all the characters they need. But the rest of the world cares about accents and other language particularities. Hence, again: basic string building gives no guarantee in terms of correct representation of Unicode characters.

Of course, for trivial XML blocks that will never contain any special character nor vary much in form, using a StringBuilder (not a StringBuffer, by the way: most of the time it is unnecessary to use the synchronized version) is more than enough. And of course, you can use a helper class to encode all strings and escape all entities.

But if you go beyond the trivial use cases, or if the data you integrate in your XML document comes from an uncontrolled source (like a database connection or another application layer), use a proper library for building XML.
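
As an illustration, here is a minimal sketch using the JDK's own DOM and Transformer APIs, which take care of entity escaping and character encoding (the document structure and values are made up):

    import java.io.StringWriter;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class XmlBuilder {
        public static void main(String[] args) throws Exception {
            Document doc =
                DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();

            Element contact = doc.createElement("contact");
            doc.appendChild(contact);

            Element name = doc.createElement("name");
            // Ampersands, angle brackets and accented characters are handled for us
            name.setTextContent("Müller & Söhne <SARL>");
            contact.appendChild(name);

            Transformer transformer = TransformerFactory.newInstance().newTransformer();
            transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8");

            StringWriter out = new StringWriter();
            transformer.transform(new DOMSource(doc), new StreamResult(out));
            System.out.println(out);
        }
    }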

XML is simple, but do not dumb it down to simplistic.

Thursday, March 27, 2008

Bugs Of Opportunity

Last night, I fixed a trivial bug in NxBRE. In doing so, I spent almost five times more time writing tests to assert both the current behavior and the behavior expected once the bug would be killed.
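
NxBRE is a .NET project, but the practice is language-agnostic, so here is a minimal sketch of it in Java and JUnit; the sumRange method and its off-by-one bug are made up for the example:

    import junit.framework.TestCase;

    public class SumRangeTest extends TestCase {

        // The code under test, inlined for the example; the hypothetical bug was
        // an off-by-one error where the loop stopped at 'to - 1'
        static int sumRange(int from, int to) {
            int sum = 0;
            for (int i = from; i <= to; i++) { // the fix: '<=' instead of '<'
                sum += i;
            }
            return sum;
        }

        public void testRangeSumIsInclusive() {
            // This test failed while the bug was alive; it now pins the fix
            assertEquals(6, sumRange(1, 3)); // 1 + 2 + 3
        }

        public void testSingleElementRange() {
            // Asserts behavior that already worked, so the fix cannot regress it
            assertEquals(5, sumRange(5, 5));
        }
    }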

This reminds me of an earlier reflection on how being test-infected changed my reaction to incoming bugs. Before I became green-light addicted, bugs were my enemy and received a treatment that depended on my mood:

They are now an opportunity to improve the code and increase its test coverage, regardless of the mood I happen to be in:
But, as you have certainly noticed, grumbling is still part of the process!