Wednesday, March 19, 2008

Fighting The Good Fight

An article published yesterday in the Wall Street Journal (Pleasing Google's Tech-Savvy Staff) made me reflect on the fight for the corporate standardization of technologies, a fight in which I have been deeply involved for the past four years.

This battle happens at many different levels: operating systems; database, application and web servers; development platforms, tools, frameworks and libraries. Is it worth fighting?

First, let us bear in mind that there is a strong rationale for unifying the different technologies in an IT landscape. Here are a few reasons:
  • Fewer and simpler licensing and support contract negotiations,
  • Facilitated software maintenance and operations,
  • Improved interoperability and potential for re-use.
But there are risks too, like:
  • Golden hammerism: Forcing the use of technologies into scenarios where they are not adequate,
  • Team entrenchment: any technological choice usually satisfies as many people as it displeases,
  • Kool-aid intoxication: single-vendor dependency often reduces options and opportunities for using better-suited approaches.
Is the Google way, where people are left free - and responsible - to make their own technological choices, a viable approach? Can it even work anywhere else, where the density of geniuses is much lower?

I will not discuss the advantages of single-OS server strategies: despite the clear advantage in terms of resource management, the rise of virtualization platforms has made multi-OS environments an easier possibility. Narrowing the discussion to developers, who have the tough job of making decisions in a world where every day brings a new and promising tool, what is the actual risk of letting them make choices?

My experience in the matter taught me that:
  • No project gets doomed by its technological choices: I have seen more harm done by the abuse or misuse of a particular framework than by the framework itself. And yes, this also holds true for projects condemned to use Entity Beans.
  • Applications designed on the same platform do not inter-operate by the sole fact of being developed and run on the same platform.
  • Similarly, re-use does not happen by unifying technologies (except if you develop for portals and want to re-use widgets without resorting to HTML scraping).
  • Paid-for support for open source projects is seldom useful (while "quick-start" consulting gigs are valuable).
So, rather than the quixotic pursuit of the perfect unified IT environment, what is the good fight worth sweating and bleeding for? I believe it is worth fighting for the following:
  • Practice quality over dependency uniformity: improving code readability, test coverage, application design and build practices has a bigger impact on the maintainability and evolvability of an application than any common set of libraries.
  • Loose coupling over platform uniformity: an IT landscape greatly benefits from systems that have clear contracts with each other, interact in well-defined and contained manners, and can gracefully survive when their neighbors have temporarily fallen into digital limbo.
  • Operation friendliness over environment uniformity: applications that are developed with operations in mind have a happier life in this world. Targeting a particular server, database or OS does not automatically translate into an application that will be easily handled by a company's production team.
Let us fight the good fight!

Just Read: Beautiful Code

Beautiful Code is probably the most uneven software book I have ever read, both in terms of style and content. Some chapters are very formal and academic, while others are more relaxed and down to earth. Some chapters really offer food for thought on software development, while others are arid displays of obscure code with no lesson to draw from them. If all the royalties were not given to Amnesty International, I would have felt totally frustrated by this book. At least I have the feeling of having served a good cause with my money.

Saturday, March 15, 2008

Unchaining Backward Chaining

NxBRE's Flow Engine is currently under development: I have added a simple backward chaining scheduler that can execute sets in order to produce a specified goal in the rule context. Not all rule bases qualify for it: to be usable in this context, no construct may exist outside of a set.
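
To give an idea of the mechanics, here is a generic backward chaining sketch in Java (not NxBRE's actual C# implementation; the class and method names are mine): starting from the goal, the scheduler recursively plans the sets that produce the goal's prerequisites before the goal itself.

```java
import java.util.*;

// Generic backward chaining sketch: each "set" produces one fact and may
// require other facts first; scheduling a goal means recursively scheduling
// the producers of its prerequisites, detecting circular dependencies.
public class BackwardChainer {

    // Maps a produced fact to the facts its rule set requires first.
    private final Map<String, List<String>> producers = new HashMap<>();

    public void addSet(String produces, List<String> requires) {
        producers.put(produces, requires);
    }

    // Returns the facts in the order they must be derived to reach the goal.
    public List<String> schedule(String goal) {
        List<String> order = new ArrayList<>();
        resolve(goal, order, new HashSet<>());
        return order;
    }

    private void resolve(String fact, List<String> order, Set<String> visiting) {
        if (order.contains(fact)) return;              // already scheduled
        if (!visiting.add(fact))
            throw new IllegalStateException("Circular dependency on " + fact);
        for (String dep : producers.getOrDefault(fact, List.of()))
            resolve(dep, order, visiting);             // derive prerequisites first
        order.add(fact);                               // then the fact itself
        visiting.remove(fact);
    }
}
```

Scheduling "premium" when its set requires "age" and "history" would thus yield the execution order [age, history, premium].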

This will make it possible to support Reaction RuleML as an input alongside the current proprietary syntaxes, and hence to consume the output of Acumen's RuleManager, an excellent tool for BRE-related development on the .NET platform.

Stay tuned for the upcoming new release. In the meantime, the most adventurous can already try the backward chaining engine by checking the latest version out of SVN!

Friday, March 07, 2008

Was @SD West 08

SD West 2008 is over and my brain hurts. But my batteries are recharged: this is what happens when you get close to the luminaries of our industry. And it is a great feeling.

The CMP event team did a great job both expanding the conference with new sessions and filtering out the vendor kool-aid sessions that sometimes managed to enter the schedule.

Kudos and a big thank you to the organizing team!

@SD West 08: Highlights of Day Five

Ten Ways to Improve Your Code (Neal Ford)

Even if I do not fly airplanes anymore (for now?), I try to stay informed about what happens in the pilots' world. One aspect of it that has always impressed me is the importance given to constantly improving one's practice. Software development should not be different. This is why I like this kind of session, as there is always something to improve somewhere!

Neal presented ten ways to walk this path of improvement. Here is a very short version of these ways; consult the slides on-line for the full version:
  1. TDD for its design benefits (including DI).
  2. Static analysis (byte-code & source analysis).
  3. Good citizenship (encapsulation, invariants preservation from construction to mutation, cautious usage of singleton).
  4. YAGNI (no speculative development, no more ivory-towerish frameworks pleaeaease).
  5. Occam's razor (make the difference between essential & accidental complexity).
  6. Question authority (including established so-called standards, rebuke anti-patterns).
  7. SLAP (single level of abstraction principle: for this, refer to yesterday's "Clean Code").
  8. Polyglot programming (leverage languages targeted at specific problems)
  9. Learn the nuances of Java (discover the hidden JDK gems!).
  10. Anti-objects (too inspired by the real world and solving problems in a reverse manner).
Neal also presented 10 corporate development smells that would be hilarious if they were not tragic! Discover them now. For point 4 (the inane debate about stored procedures), you can also read my take on it in this discussion.

To finish, I think it is worth quoting Neal's answer to clients who try to find excuses for not making things better:
"Your problem is not more complex or that different from everybody else's!"

Responsible Web Design (Scott Fegette)

As if anything in software development could be responsible (software liabilities, anyone?), Scott restated the need to stay abreast of current standards and best practices during his very open and non-dogmatic session. He also reminded us where we come from and all the progress that has been made along the way.

Here are a few key points:
  • Thinking semantically (avoiding layout-specific markup as much as possible), with microformats for example.
  • Properly managing CSS styles.
  • Opting for unobtrusive JavaScript (enough of these links that break when JS is disabled!).
  • Staying current with the standards and practices.
I will not claim I followed everything in this session, even though Scott did a great job of demonstrating his points with actual web sites and code samples.


Memory Leaks in Java Applications (Gregg Sporar)

My worst memory leak, despite forgetting anniversaries, happened in 2002 when using Xalan. Each XSL transformation was leaving a bunch of uncollectible objects in memory, until the JVM had had enough and died. Since then, I have been worried about all sorts of leakage (and since I am getting older, this should not surprise you), but I still like XSL ;-)

Gregg is obsessed with memory leaks too, but he works for Sun and knows the problem like the back of his hand. The goal he had for us in his class was:
"To understand the different types of tools and techniques available for finding memory leaks."

He then went through a thorough review and demonstration of different techniques:
  • Post-mortem inspection: analyzing a heap dump, with proper tooling, after the JVM has crashed.
  • Instrumentation: performing live analysis of what is going on in the JVM, mainly by watching the trend in object generation counts.
  • A combination of both: to capture dumps on running code.
He also explained the lack of user-friendly tooling for analyzing class-loader related memory leaks (which can lead to the infamous permanent generation error). To learn more about this interesting and tricky subject, I encourage you to read the two articles he co-authored for ST&P in April and May 2007, from which he extracted the material for his talk.
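
To illustrate the kind of leak these techniques hunt for, here is the classic Java pattern (my own illustration, not an example from the talk): a long-lived static collection keeps strong references to objects the application believes it has discarded, so the garbage collector can never reclaim them.

```java
import java.util.*;

// Classic Java memory leak: a static cache that only ever grows. Instrumented
// monitoring would show the entry count trending up until an OutOfMemoryError.
public class LeakyCache {

    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void remember(String key, byte[] payload) {
        CACHE.put(key, payload);  // entries are added...
    }
    // ...but nothing ever removes them: every payload stays strongly reachable.

    public static int size() {
        return CACHE.size();
    }
}
```

An explicit eviction policy, or weak references (java.lang.ref, WeakHashMap), is the usual cure.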


Mock Objects / Mock Turtles: The Role of Patterns in TDD (Scott Bain)

After reminding us of the youth of our industry, Scott made this very interesting statement:
"Software Development is both an intellectual and a practical profession."

I find this interesting because I tend to over-emphasize the intellectual part of the job when I digress about the nature of our profession. So, yes, there is also a practical dimension to it, and Scott argued that testing is a driving force for this concretization.

So how do design patterns play a role in unit testing? On top of allowing us to use a simple name to communicate a lot of information and context, which is extremely valuable, did the GoF give us best practices for testing these patterns? Unfortunately not. But all hope is not lost, as Scott went on to detail applicable test strategies for the most prominent patterns (strategy, decorator, façade). This is a work in progress that can be contributed to on the Net Objectives web site.

So what is the relationship with turtles? Some patterns force you to know a lot about the chain of objects behind the scenes (turtles all the way down) to be able to test them; placing a mock at the right place can alleviate this issue. It is for this astutely located mock that Scott coined the term mock turtle (well, in fact, he recycled the term).

Thursday, March 06, 2008

@SD West 08: Highlights of Day Four

HTTP for Web Developers (Jason Hunter)

After yesterday's talk about caching, Jason detailed the core of the HTTP magic. Indeed, modern web development tools very often abstract HTTP out of the development paradigm. You end up with developers who talk about controls on forms as if they were developing Access applications (do not laugh, it happened to me).

HTTP is a wild world: with browsers and servers interpreting the standard in their own manner (often leading to bastardized additions to the standard itself), developers need to know what is happening under the hood.

I will not detail all that Jason talked about, but I am always amazed by the amount of extra knowledge you can get when an expert revisits the basics! A highly recommended exercise for anyone who does things with the webernet.


Clean Code: Functions in Java (Robert C. Martin)

Woohoo! A new class from Uncle Bob! It is of course impossible to properly summarize an hour and a half of such a masterful session. The main concept the talk detailed is the following:
"Making code more readable allows us to write code faster, because we end up reading code twenty times more than we write it while we are in the process of writing code."

So how to achieve this? The general idea is that a function should be an executive summary of what happens below it: it must be short and should not mix concepts from different layers of the application, in order to remain understandable. This leads to a recursive functional decomposition and the creation of functions with meaningful and descriptive names all the way down through the abstraction layers. And by the way: if you find it hard to find a good name for a method, it is probably because it does too many things.

Here is a short summary of the talk:
  • Write small functions and, if possible, even smaller ones.
  • In a method: do one thing to preserve cohesion, i.e. one level of abstraction.
  • Three is the absolute maximum number of arguments: think about how two arguments are already confusing (right order?).
  • No side effects: a method should have no strange temporal coupling, it should always have the same semantics whenever it is called.
  • Throw exceptions instead of returning error codes: error codes get mixed with the natural outcome of the method and force ugly calling code.
Someone in the audience asked a question that is dear to me: "there is nothing new here, why keep repeating it?" Uncle Bob replied: "because we do not believe it". This sounds like the classic "Shema Yisrael", with prophets and priests repeating the same things again and again because the people were unfaithful. We need to hear things again and again to internalize them, so that we practice them. Writing code as if reading it matters is not natural, but we can reach this goal thanks to method extraction refactoring.
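
As a tiny illustration of this recursive decomposition (a made-up example of mine, not code from the talk), each function below is a one-line executive summary of the level beneath it:

```java
// Each function sits at a single level of abstraction and delegates the
// details to descriptively named functions below it.
public class PayrollReport {

    // Top level: reads like a summary of the whole computation.
    public static String reportFor(String name, double hours, double rate) {
        double gross = grossPay(hours, rate);
        double net = netPay(gross);
        return formatLine(name, net);
    }

    private static double grossPay(double hours, double rate) {
        return hours * rate;
    }

    private static double netPay(double gross) {
        return gross - tax(gross);
    }

    private static double tax(double gross) {
        return gross * 0.25;  // hypothetical flat rate, for the example only
    }

    private static String formatLine(String name, double net) {
        return name + ": " + net;
    }
}
```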

Ouch. I will spend the rest of my life refactoring my own code.


Parallel or Perish!! - Are you Ready? (James Reinders)

After assuring us that multi-core processors are not a temporary trick to gain performance that will later be superseded by faster single-core processors, James made very clear the urgency and necessity of a mindset shift towards parallel programming, especially for client-side developers.

He then gave us several tips that you can read in an article he wrote for DDJ a while ago.

I really appreciated his remark on considering multicore parallelism in light of an increased workload, and not only as a way to accelerate existing code.

I was surprised by James' remark about the lack of parallelism abstraction in Java: since version 1.5, the JDK offers a wealth of concurrency-oriented high-level constructs (like collections, conditions, locks, futures and whatnot...). He might have been referring to thread management itself, but again, I find that executors offer an interesting way to split work across threads. He confessed his main focus is on C/C++, though!
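
For instance, here is the kind of JDK 1.5+ construct I had in mind - a sketch of mine (the names and slicing strategy are arbitrary) that splits a computation across a thread pool with an ExecutorService and collects the partial results through futures:

```java
import java.util.*;
import java.util.concurrent.*;

// Splits the summing of a list across a fixed thread pool: each task sums a
// slice, and the futures are joined to produce the total.
public class ParallelSum {

    public static long sum(List<Integer> numbers, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Long>> futures = new ArrayList<>();
            int chunk = (numbers.size() + threads - 1) / threads;
            for (int i = 0; i < numbers.size(); i += chunk) {
                List<Integer> slice =
                        numbers.subList(i, Math.min(i + chunk, numbers.size()));
                futures.add(pool.submit(() -> {
                    long s = 0;
                    for (int n : slice) s += n;  // each task sums its slice
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : futures) total += f.get();  // join the results
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```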

I am in fact more surprised by the number of Java developers who do not have clear (or at least basic) guidelines on how to write code that runs concurrently without coughing. As James said: we all have to learn and think about parallel development. Coming from Intel, this shows how much learning we can expect in the coming years!

In the meantime, thanks to either smart scheduling or pure randomness, this keynote was followed by two sessions on Java concurrency and parallelism: so I had an immediate opportunity to keep climbing the learning curve!


Thousands of Threads and Blocking I/O (Paul Tyma)

I was really looking forward to this session and was not disappointed. The breadth and depth of the material Paul shared made the session worth attending. He did a great job debunking some myths about synchronous vs. asynchronous server models, all backed with hard facts. Here are a few of them:
  • Java asynchronous NIO has higher throughput than Java IO (false)
  • Thread context switching is expensive (false)
  • Synchronization is expensive (false, usually)
  • Thread per connection servers cannot scale (false)
Visit Paul's blog for more details and even the slides of the presentation. All in all, this talk confirmed my gut feeling about concurrent development: let the threads flow and be smart about their touch points!
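
The model Paul defended can be sketched in a few lines of plain blocking I/O (a toy echo server of my own, not code from the talk): one thread per connection, trusting the JVM and the OS to keep context switches cheap.

```java
import java.io.*;
import java.net.*;

// Thread-per-connection server using plain blocking I/O: accept() blocks
// until a client connects, then a dedicated thread serves that client.
public class BlockingEchoServer {

    public static void serve(ServerSocket server) throws IOException {
        while (!server.isClosed()) {
            Socket client = server.accept();            // blocks for a client
            new Thread(() -> handle(client)).start();   // one thread per connection
        }
    }

    static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null)
                out.println(line);                      // blocking read and write
        } catch (IOException ignored) {
            // connection dropped: the thread simply ends
        }
    }
}
```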


Anti-Patterns in Software Projects: Human Factor (Rob Daigneau)

Writing code is a very mental process, hence subject to our human nature, with all its upsides and downsides. Rob gave a great presentation about how the human factor affects software development, and how leaders of all sorts can act to make the workplace a better place. This of course spawned a lot of lively discussions, as everybody has so much to say about what happens in their own software development life.

It is hard to summarize all that Rob said, but I really liked his emphasis on passion and how essential it is to let it burn in developers (without letting it burn them out, which is destructive). I also appreciated one of his final words: unless we write life-critical software, it does not really matter that much, so we had better have fun while working. Sounds like good advice to me.


Developer Bowl

Last year's Developer Bowl was all about Googlers and their incredible supremacy in terms of computer science knowledge. With Google being in no shortage of big brains, I walked into the theater expecting a fair deal of deja-vu...

This year was as entertaining as last year's edition. The questions, which were partially submitted by DDJ's readers, were much less oriented towards the fundamentals of computer science and more towards history and anecdotes. So this year Google did not pass the first round, and IBM won against Intel!

Oh well, it is all rigged anyway ;-)

Wednesday, March 05, 2008

@SD West 08: Highlights of Day Three

Demonstrating WCF: Beyond the Endpoints - Juval Löwy

So .NET is legacy and WCF is here to increase your productivity? What is this all about? Juval did a great job demonstrating how, by building this new platform on the CLR, Microsoft has delivered a complete development environment that offers a clean and efficient programming model for "enterprise" applications.

But is there anything new here? For .NET developers, surely yes. But from a JEE development standpoint: not really. All this sounds like a mix of EJB3 (framework-free classes, remote exceptions), JBoss call stack model (dynamic proxies, client-side and server-side interception), unified synchronous/asynchronous invocation model and workflow for long running operations.

To be fair, in this big mix of already-known stuff, there are some pretty powerful features, like resilience to change in service contracts. WCF goes to great lengths to transparently allow clients and servers at different version levels to keep exchanging messages even if they have changed.

Moreover, unlike the usual stack of disparate half-baked products that is common in Java-land, WCF is a typical Microsoft product: it comes complete with a wealth of tools (like the pretty impressive visual call stack analyzer) and offers developers a trustworthy and stable development framework. Conclusion for .NET developers: if you are not using WCF today, do not delay leveraging it any further!


Behavior-Driven Database Design (BDDD) - Scott Ambler

Scott warned the audience: he was going to be blunt. And he was. Let me quote him: "any monkey can rename a column in a production database"! If this sounds like a scary prospect to you, do not think you are alone: surveys show that this is still a challenge for a majority of corporations. Why is it so? Mostly because of the mystical belief that what is in the database is perfect and trustworthy, hence does not require testing.

No testing? Wait a minute. Surveys show that a majority of companies have business-critical functionality in their databases (triggers, stored procedures...). So... no testing? Does this sound reasonable? When all serious software developers are now test-infected, is it acceptable that the data management community lags ten years behind in terms of quality-oriented practices? When the problem of bad data quality is estimated to cost $600B per year in the USA, can we keep going on like this? Of course not.

Is this lack of testing the only factor that makes database refactoring look so hard? Not at all. Scott stated another factor very clearly: poor data access architectures lead to tight coupling with the database itself. Ouch! Blunt again. True again.

So how can BDDD help? In short: BDDD is an evolutionary and test-driven (not model-driven) approach to refactoring databases. It encourages us to consider databases as having nothing special about them, hence to dare applying the whole panoply of agile software development practices to them, which are, to name a few: continuous integration, automated builds, SCM, versioning, developer sandboxes, granular refactoring (to minimize the risk of collision) and regular deployment between environments (frequent from developer to integration, less frequent to QA, highly controlled to production).
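
To make "granular refactoring" and versioning concrete, here is the bookkeeping idea behind database change management, as a minimal sketch (my own illustration with a hypothetical API, not anything Scott showed): every change is a small, versioned script, and a database at version N only ever receives the scripts it has not yet seen.

```java
import java.util.*;

// Keeps change scripts sorted by version and computes which ones are still
// pending for a database known to be at a given version.
public class Migrator {

    private final SortedMap<Integer, String> scripts = new TreeMap<>();

    public void register(int version, String sql) {
        scripts.put(version, sql);
    }

    // Scripts to run, in order, against a database at 'currentVersion'.
    public List<String> pending(int currentVersion) {
        return new ArrayList<>(scripts.tailMap(currentVersion + 1).values());
    }
}
```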

Because it is so different from traditional approaches, it appears as a threat to most data professionals. This should not be the case, as their skills and knowledge are needed for this refactoring to succeed. Moreover, with many developers now using some sort of O/R mapping tool and tempted to dumb databases down to "just storage", the feedback of data professionals can help leverage the different features of databases, which must be considered first-class citizens of a software architecture (as application servers are).

The following stop sign in Sacramento shows the current situation: data and quality orthogonal to each other, and a big stop sign in the middle.


It does not have to stay like this forever. With the software development and data communities working together, a little understanding and a good share of courage, it is possible to make things better!


Object-Oriented Programming and Generic Programming and What Else? (Bjarne Stroustrup)

When the father of C++ gives a keynote about OOP and GP, everybody sits quietly and listens, because everybody knows that anything below fully concentrated attention will not be enough to follow. I did my best to follow the master and did not regret it! Bjarne detailed and compared the strengths and weaknesses of OOP versus GP (using the classical shapes example). He then presented where C++ is heading (for its 0x version), making the interesting statement that the needs of concurrent programming will more and more shape the destiny of languages.

Bjarne has been honored with the Dr. Dobbs Excellence in Programming Award later on that day, a well deserved recognition for the extent of his contribution to our field.


Web Caching (Jason Hunter)

After reminding us why web caching is critical (which I will not detail here: if you do not know why it is, you had better stop reading this blog and start searching the web), Jason detailed the basics of HTTP before delving into five different cache techniques from Yahoo's Mike Radwin. Here is a non-exhaustive list of what he mentioned:
  • browser cache and conditional get for revalidation (leading to very cheap 304 replies from the server),
  • proxy cache that happens at different levels (company, country...),
  • beware of what can prevent caching (cookies, authentication), using mitigation techniques ...
... like:
  • serving static content from a cookie-free TLD or using cache control directives,
  • deciding that images never expire and use different versions of them (there is an Apache mod that helps for this),
  • going even further and versioning even JavaScript and Flash files (forcing you to adapt HTML files so that links target the right and consistent versions),
  • leveraging cache and expire directives correctly (for example, instead of using 0 to mean "already expired", use a date far in the past, but not before the epoch),
  • not trusting anyone to understand these directives or their latest versions, so covering the whole spectrum of parameters and planning for stubborn clients (for example, a 302 redirect to a highly cached site).
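
Those "very cheap 304 replies" mentioned above boil down to a date comparison on the server. Here is a minimal sketch of the decision (my own illustration using java.time; the method name is mine): the If-Modified-Since header sent by the browser is compared with the resource's last modification date, and the body is skipped when the cached copy is still fresh.

```java
import java.time.*;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

// Conditional GET in a nutshell: answer 304 Not Modified when the client's
// cached copy is at least as recent as the resource, 200 otherwise.
public class ConditionalGet {

    static final DateTimeFormatter HTTP_DATE = DateTimeFormatter.RFC_1123_DATE_TIME;

    public static int statusFor(String ifModifiedSince, ZonedDateTime lastModified) {
        if (ifModifiedSince == null) return 200;  // no cached copy advertised
        try {
            ZonedDateTime cached = ZonedDateTime.parse(ifModifiedSince, HTTP_DATE);
            return lastModified.isAfter(cached) ? 200 : 304;
        } catch (DateTimeParseException e) {
            return 200;  // malformed date: play safe and resend the body
        }
    }
}
```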

The Busy .NET Developer's Guide to Rules and Rules Engines (Ted Neward)

I always wonder how good or bad the stuff I am doing with NxBRE is, so I decided to attend this session and listen to Ted giving a review of business rules engines in the world of .NET.

We all understand that business rules are natural members of software applications and, I hope, we all realize that hardcoding them makes our life more difficult, especially for rules that change often and bear a lot of conditional branching. Hence the rise of business rules engines (BREs), whose origins are rooted in the artificial intelligence efforts and expert systems of the 70s. Ted gave this interesting definition of BREs:

"Business rules engines are generalized expert systems in which the expertise is missing, to be entered via some form of programming language at a later date"

I like it because it relates BREs to expert systems while showing the part left to developers. It reinforces the idea that there will be a learning curve and that integration will be needed, hence that a BRE will not be for every project nor every budget.

Other interesting aspects to consider when using such beasts include rule editing (and the mythical "business user" editor), testing and validation (about which you can read my take) and the operational lifecycle (promoting to production and vice-versa).

I am glad that Ted mentioned NxBRE in this session (without throwing stones at it, woohoo), even if he mainly focused on Microsoft Workflow Engine and Drools aka JBoss Rules .NET (definitely more interesting for the audience anyway, as WF is available to anyone with .NET 3.x while Drools has been around for a long while).

The fact that he did not mention RuleML even once, not even when making fun of MS WF's indigestible XML syntax, puzzled me... maybe RuleML still tastes too much of academia for the industry?


18th Annual Jolt Product Excellence Awards
Dr. Dobb’s Excellence in Programming Award


When someone asked Uncle Bob if he would attend the Jolt ceremony, I overheard him reply "naaahh". Still, I can remember him four years ago cracking jokes from the first row with Alexa. So is all the fun gone for real? It is with these thoughts in mind that I entered the theater, ready to be more surprised by the results than ever, as I unfortunately had to drop Jolt judging for this round.

Well, this ceremony was simply excellent. The whole process has been streamlined, with Productivity Awards simply named and good synchronization between drum rolls and slides! Robert X. Cringely, who was hosting the event, first provided us with great insight into the transient nature of the software we produce, though he stated that this transience removes nothing of its importance. He then proceeded to the formal trophy-handing celebration with brilliance and pizzazz.

I leave it up to the DDJ site to list all the winners of this year. My highlights are the following:
  • Atlassian (or should I say Cenqua?) received well-deserved awards for both Fisheye and Clover.
  • Smart Bear has been recognized for Code Collaborator, whose company-published book I have talked about before.
  • O'Reilly Radar was high on the radar screen! If you do not read it yet, well, you know what you have to do...
  • I was really hoping to see the Spring Framework get the Jolt but it went to Google's Guice. I am not convinced that Guice will ever jolt the industry as Spring does. But, hey, a vote is a vote!

Friday, February 29, 2008

Stuffing My Brain

I will be attending the last three days of SD West 2008 next week, stuffing my brain with all the good things this conference always delivers!

Should you want to discuss NxBRE, the Mule JCR Transport or any other stuff, please shoot me an email and we will arrange a meeting.

Wednesday, February 27, 2008

Canonical Implementations

A few days ago, a colleague made a tongue-in-cheek remark about the fact that I am directly using some Spring Framework helper classes (in this case: FileCopyUtils). At first, I started to wonder if this was really bad: after all, are we not supposed to avoid direct dependencies on any framework?

Then it struck me as obvious: the Spring Framework provides canonical implementations of almost anything.

Sure, it is all bound together thanks to the dependency injection kernel. But, whether it is via direct access to helper classes or via configuration, Spring provides state-of-the-art, rock-solid implementations of most of the tricky things you want to do.

Like copying streams while handling all the checked exceptions correctly. Or reading from a JMS destination with a configurable pool of consumers. Or creating and applying aspects. Or registering MBeans. Or... whatnot...
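
To make the first example concrete, here is roughly the boilerplate that a one-line call to Spring's FileCopyUtils.copy(in, out) spares you from writing (a plain-Java sketch of mine, not Spring's actual source):

```java
import java.io.*;

// Buffered stream copy that flushes the output, closes both streams even on
// failure, and reports the number of bytes copied.
public class StreamCopy {

    public static int copy(InputStream in, OutputStream out) throws IOException {
        try {
            int total = 0;
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
                total += read;
            }
            out.flush();
            return total;  // like FileCopyUtils.copy, report the byte count
        } finally {
            try { in.close(); } catch (IOException ignored) {}
            try { out.close(); } catch (IOException ignored) {}
        }
    }
}
```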

So why write code when you can re-use code that has been read, executed and improved so many times over the past few years?

Why not tap this huge source of canonical implementations?

Happy Bus Ride

I am happy to announce the second milestone release for the version 1.1.0 of the JCR transport for Mule.

There is some cool new stuff, so check the updated user guide and examples to discover it. You can also check the road map to see what is cooking for the next milestones...

Make your bus content: connect it to JCR!

UPDATED 09-FEB-2008: Version 1.1.0 is finally out!

UPDATED 27-FEB-2008: Michael Marth of Day, the company leading the JCR specification, has interviewed me about this transport.

Friday, February 15, 2008

Pragmatic ESB Use Cases

After four years fiddling with ESBs, I thought it could be interesting to share what I have found to be pragmatic use cases for that tool. But, first, what do I mean by pragmatic?

By pragmatic, I mean free of any vendor kool-aid or SOA hype, and driven by concrete needs instead of general fantasies of "business processology and ontological governance" (you get the picture). I also mean cases where an ESB did not feel overkill for the job and where its inherent drawbacks were acceptable compared to the features it brought.

So here are some use cases grouped by themes, with highlights of what particular virtues an ESB brings in them.


The external world outreach use case

Isolating corporate software from the pesky details of calling third party services over the internet is a very good use case for an ESB. Indeed, these services tend to have bizarre authentication schemes, IP range or access count limits and random down times. It is then worthwhile to hide these remote services behind an abstraction layer that allows developers to invoke them without having to learn about their particularities.

Consuming feeds is also another task for which it is more desirable to write configuration instead of code, something an ESB offers natively.

Reaching servers inside a DMZ with non-HTTP protocols is also another good usage for an ESB, with nodes installed in each firewalled subnet. I have talked about this before.

ESB virtues: Distribution, Content-based routing, Configuration over code.


The instant legacy integration use case

Though I have been lucky enough to be spared the traditional legacy integration consulting gigs, I have had to perform some stunts in the realm of instant legacy integration. This kind of software does not consider that standards carry any value, hence often opts for weird invocation protocols or communication mechanisms.

The message transformation and protocol adaptation features an ESB offers come in very handy in this kind of scenario: a remote synchronous mumbo-jumbo talking service can then be exposed as a SOAP or MQ one to the interested applications.

The versatility of an ESB, which allows it to come into play with various systems without having to modify them or needing them to know anything about the bus, compensates for the black box middleware it can appear to be. It is in fact often desirable to keep the ugly details of talking the crazy talk of some systems in a black box, to preserve the sanity of the software developers who need to use them.

ESB virtues: Message transformation, Protocol adaptation, Non-invasiveness.


The fast talker use case

An interesting case I was once faced with involved dealing with high-performance communications alongside a more standard database-oriented web application. Even the hip J2EE application server I was using at that time was not designed to handle this kind of traffic: EJB, SOAP or CORBA were fine, but not bursts of data packets over POTCPIP (Plain Old TCP/IP).

This was another good use case for an ESB, as it offered not only excellent support for even such a low-level protocol, but also the capacity to configure and manage the pools of threads and components used to process the particular load of this side of the application. Acting as a "heat shield", the ESB was analyzing, aggregating and pre-chewing data packets into messages ready to be sent to the rest of the J2EE application over standard EJB and JMS channels.
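As a rough illustration of the aggregation half of that heat shield, here is a sketch in plain Java (the batching policy, the names and the comma-joined framing are invented for the example; the real thing was ESB configuration, not hand-written code):

```java
import java.util.ArrayList;
import java.util.List;

// Collects bursts of small raw packets and consolidates them into one
// message, the way the ESB pre-chewed TCP traffic before handing it to
// the J2EE side over JMS.
class PacketAggregator {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();

    PacketAggregator(int batchSize) { this.batchSize = batchSize; }

    // Returns the aggregated message when the batch is full, null otherwise.
    String offer(String packet) {
        buffer.add(packet.trim());
        if (buffer.size() < batchSize) return null;
        String message = String.join(",", buffer);
        buffer.clear();
        return message;
    }
}

public class AggregatorDemo {
    public static void main(String[] args) {
        PacketAggregator agg = new PacketAggregator(3);
        System.out.println(agg.offer("p1")); // null
        System.out.println(agg.offer("p2")); // null
        System.out.println(agg.offer("p3")); // p1,p2,p3
    }
}
```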

ESB virtues: Configurable behavior (threading/polling), Protocol support, Message aggregation.


Now to the real expert

So where is the "ESB in the middle of all remote service calls" use case? Well, though very satisfying for the mind, this kind of deployment very often ends up as a not-so-great idea. Jim Webber, who is a real expert, can explain why.

Saturday, February 09, 2008

JCR + ESB = You Decide!

After two milestone releases, I have finally released the latest version of my JCR Transport for Mule.

If you are looking for new ways to leverage JCR, then consider this transport, which allows you to benefit from Mule ESB's numerous transports for integrating content repositories into your infrastructure.

Saturday, January 26, 2008

16 Years Ago

As promised before, here is the second part of my "power of two" flashback series of posts.

During the development of a modem hub sixteen years ago, something happened to me, something that touched my geeky soul and left me nostalgic for years, something that has apparently happened to many other people: I have been touched by OS/2.

I mean, touched by:
Sorry, I could not resist. There is still something mythical and magical about this operating system.

I had the chance to develop a system that received chemists' orders sent by modem, analyzed and multiplexed them, and then forwarded them to an AS/400 machine. The application was developed in C, in a multi-process and multi-threaded manner, with communications over named pipes and semaphores. The local persistence database was DB/2. And it was internationalized (French and Flemish).

At every stage of the development, I was amazed at how OS/2's memory and threading models were making my life easier. Totally resilient to hard crashes (C has pointers, remember?) and inviting you to think asynchronously, it was a dream of a programming environment.

Then I switched jobs and began the arid journey through the Valley of the Shadow of Death that started with Windows 3.11 (for Workgroups!) and ended with Windows XP/2003 (the first really solid and well-designed OS from Redmond, AFAIAC).

For a while I thought I should be mad at Microsoft for having turned my developer life into such a miserable experience. But in fact I became bitter at IBM and their inane marketing campaigns for OS/2. Their incapacity to make a superior OS dominate the market was flabbergasting. This sense of waste is probably what fuels the nostalgia of OS/2 lovers all around the world to this day.

In fact, I was mad at IBM until they redeemed themselves with Eclipse. But this is another story...

Tuesday, January 22, 2008

Loan Redemption

I had my first loan redemption on Kiva today: it is truly great to see that the small business I lent money to has done well enough to pay it back. I cannot wait to re-invest this money and help another developing-country entrepreneur!

If you are not already involved in microcredit, consider it seriously. Small entrepreneurship is the fabric that makes a country develop in a stable and durable manner. The few bucks you lend (and get back) are the little help that another person needs to start changing her life, her community and, let us think big, her country!

Monday, January 21, 2008

Need Some Rest?

If you are looking for a no-nonsense REST framework built on all the good stuff Spring offers, look no further and get a good Mattress.

Despite the pun, this framework is an interesting initiative to watch, as it is a pragmatic (hence partial?) implementation of JSR-311.

Stay tuned and look for the first release and, I am pretty sure knowing the framework author, the upcoming witty logo.

UPDATED 21-JAN-2008: The first release of the framework is now available in Maven's central repository. You will want to give it a try!

Wednesday, January 16, 2008

BBC?

Beautiful British Columbia

Really?

See for yourself: I have put my best BC shots on Panoramio.

Big Integers


Bernard Tapie, knowing this first-hand, once said:

"If you owe one million to your banker, he holds you. If you owe him ten billion, you hold him."

So, how much do you owe your banker, Larry?

Tuesday, January 15, 2008

One Step Further... Dynamism

So the buzz is all about using a mix of strongly and dynamically typed languages in our applications. Is this really useful, or is it just another hyped concept?

Well. Let us suppose that, while using our favorite ESB platform, we come up against the need to invoke some remote EJBs. We would traditionally need to provide the client libraries to connect to our remote application server, plus the required interfaces to cast dynamic proxies into something usable.

Now suppose we add JavaScript to the equation. In fact, if we use JDK 6, we do not need to add anything. We simply need to write a simple script that, thanks to the dynamic nature of the language, will not need any of the EJB interfaces to compile. Nor to execute. We will end up with fully functional remote EJB invocations without the hurdle of shipping extra dependencies.

The invocation chain that was dynamic down to the level of the local proxy can now go much further into your code without needing strong typing.
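The same spirit can be approximated in plain Java with reflection: the caller compiles and runs without ever referencing the target's interface. In this hedged sketch, OpaqueService and its greet method are invented stand-ins for a remote proxy you would really obtain from a JNDI lookup:

```java
import java.lang.reflect.Method;

// Stand-in for a remote EJB proxy whose interface we do not have at
// compile time. (Hypothetical: imagine it came back from JNDI.)
class OpaqueService {
    public String greet(String name) { return "Hello, " + name; }
}

public class DynamicCallDemo {
    // Invoke a method by name on an object whose static type we never
    // reference, much like a JavaScript snippet would on an EJB proxy.
    static Object call(Object target, String methodName, Object arg) {
        try {
            Method m = target.getClass().getMethod(methodName, arg.getClass());
            return m.invoke(target, arg);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Object proxy = new OpaqueService(); // no interface in sight
        System.out.println(call(proxy, "greet", "world")); // Hello, world
    }
}
```

A scripting language makes this pattern the default rather than an incantation, which is exactly the appeal described above.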

This might sound obvious to you: it means you are already addicted to mixing strongly and dynamically typed languages in your daily practice. Or it might sound too good to be true: then be my guest and give it a try. The only thing that is sure is: this is no hype.

Saturday, January 12, 2008

Direct Kudos

I do believe that DirectX is one of the greatest achievements of Microsoft.

Not only did this SDK provide, from the very beginning, a clean and efficient way to develop high-performance graphic applications for Windows (anybody remember WinG?), it is also a bright demonstration of the Redmond giant's mastery of the art of backwards compatibility.

Every time I get my hands on a new version of Windows, I run PackAttack, a basic DirectDraw 2D action game I wrote several years ago. The last time I compiled it was in December 2000. If memory serves, it was compiled for DirectX 3. I am flabbergasted to see this game still working with DirectX 9 on Windows XP.

Anybody with Vista out there who would dare try to run PackAttack and see if the miracle still happens, despite the fact that Microsoft is supposed to have lost the API war?

Monday, January 07, 2008

8 Years Ago

Computer is running a very instructive column named "32 & 16 Years Ago". As a blogging exercise, I will do the same but, because I am not that old, with smaller powers of two (8 and 16).

To get back to the things I was doing 8 years ago, I thought it would be interesting to see what happened to the domain names used by the startup I was involved with at that time. For our new line of products, we had opted for a family of Djinn-prefixed names, to express the magic of the stuff that was happening behind the scenes.

At that time we had:

  • Webdjinn, a pure DHTML RIA platform, with automatic code execution balancing between the browser and the server. It used an XML-based language (that we named ApiML) to describe the application, its components and their behavior. Knowing that this was developed in 1999 and 2000, you can imagine the technical difficulties of building such a project on the browsers and servers of that time!

  • Datadjinn, a SQL generator that used a generic XML query syntax (that we named DQML), which was transformed into optimized native database queries. It spoke a few database dialects and was used for the database connectivity of Webdjinn and of a pure HTML report generator we also built.
I leave it to your imagination to figure out the fate of these bold endeavors... So what happened to their names?

Datadjinn.com seems to be now used to host a link farm. Boo.

Webdjinn.com had a better fate and landed in the hands of an Australian "husband, web developer, human" who totally understood the value of the *Djinn pattern, as he created the cmsDjinn moniker for the "perfect Web Content Management System (CMS or WCMS)" he is creating.

Now, that is a bold endeavor!

LinkedIn OSS Context

If you are a user of NxBRE or the Mule JCR Transport and want to connect with me on LinkedIn, please use one of the "fake" positions I have created for that purpose, so we can get connected in the right context.

Thank you!

Sunday, January 06, 2008

New Year New Engine!

I am happy to announce the release of NxBRE 3.1.1.

This is mainly a service release for the Inference Engine, which got some bugs fixed and simple features added.

See the release notes for more information. Enjoy!

Thursday, January 03, 2008

Sub My Version

We all love Subversion: it is a solid, simple and powerful SCM that is as free as beer on a Super Bowl day. But when Subversion starts throwing a tantrum, the sense of free fall that ensues is really disconcerting, to put it mildly.

I know how to find my way out of most problems (twiddling the .svn folder usually does the trick), but it is not always enough, and not all issues can be solved locally (at least by reasonably educated developers). Indeed, the thing I hate above all is when I need to break the build in order to solve a Subversion issue: when the local files are in such a painful state that a commit followed by an update is needed, crap will happen on the CI server.

Take this simple case for example: extracting an interface with the same name as the source class, while renaming that class. I always do it in the wrong order:
  1. Rename the class (say Foo to FooImpl),
  2. Extract interface (Foo out of FooImpl),
  3. Bang! Subversion wants to delete Foo from the repository (from step 1) and add a new version of it (from step 2).
Why do I have to care about the SCM when performing such a trivial operation? Read Larry O'Brien's suggestion about this.

Thursday, December 20, 2007

Just Read: Best Kept Secrets of Peer Code Review

Though a company-sponsored book, this collection of essays on code review is enlightening. Rooted in the real world, the book makes a clear case for a practice that tends to be overlooked in most software shops. Recommended reading for anyone who wants to learn about the state of the art of code review and how it can benefit an organization and its people.

Sunday, December 16, 2007

Time To Run

Everybody loves re-use: no one would like to re-invent the wheel (at least no one worth calling a colleague). Re-use in action can be tricky, though, as the library or component you intend to leverage can come with strings attached, in the form of runtime dependencies. These dependencies can take several forms, each with a different twist but all with the same promise to make your job a little more... intense!

  • Framework dependencies are often the (incestuous?) children of configuration and reflection: when the latter gets driven by the former, the uncertainty concerning which concrete classes your application will actually use grows, and can reach the point where it endangers your system. As an example, consider the way JAXP's concrete classes are determined at runtime: though you program to a unified API, which allows you to re-use existing parsers and processors, you must code your application defensively to ensure the runtime dependencies will provide the features you need.

  • Data dependencies can be pulled into your application when you re-use a component that carries its own DAO layer. Provided the data layer is flexible enough to accommodate your choice of database engine, this can be managed easily. Things can get tricky if you have several applications re-using the same component on the same data source: you might well figure out that the component was not designed for this particular deployment model. Things can get even trickier if several applications use different versions of this component on a shared database! A positive example here is jBPM, which you can embed in your application and which requires state persistence: thanks to Hibernate, this runtime dependency will speak the SQL dialect of your existing database, hence will not bring havoc to your application.

  • Service dependencies differ from data ones in several of their characteristics: they promote sharing logic above data, they can encourage loose coupling (but not always: think about EJB clients) and they are generally more concerned with, and capable of, backwards compatibility. These kinds of reusable components, which are often developed in-house, must be carefully crafted to evolve well in space (different deployment environments) and time (different versions). When the grand dream of service auto-discovery and auto-wiring fully materializes, these dependencies will become more predictable and manageable. SCA will also play a great role in improving the usability of service-dependent components.
Though modern build tools, like Maven, do a great job of dealing with software dependencies, the domain of runtime dependencies is not yet clearly formalized or fully standardized. Figuring out these hidden catches beforehand, and their potential impact on the component re-use you envision, can save you some pain.
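Coming back to the JAXP example: here is a hedged sketch of the kind of defensive coding meant above, probing the implementation resolved at runtime for the features you depend on (the failure policy, of course, is yours to choose):

```java
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

// Defensively probe whatever JAXP implementation happens to be on the
// classpath: ask for the features you rely on and fail fast (or degrade
// gracefully) if the runtime dependency cannot provide them.
public class JaxpProbe {
    static DocumentBuilderFactory safeFactory() {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // we rely on namespace support
        try {
            factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
        } catch (ParserConfigurationException e) {
            // The concrete parser resolved at runtime does not support it:
            // decide here whether to degrade or to abort loudly.
            throw new IllegalStateException("Unsuitable JAXP implementation", e);
        }
        return factory;
    }

    public static void main(String[] args) {
        System.out.println(safeFactory().isNamespaceAware()); // true
    }
}
```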

Keep It SimpleDB

Now that Amazon SimpleDB is out, the open source competition has started and we can expect some pretty good stuff to come out soon, allowing us to leverage this cool new tool from the AWS family.

I would personally be interested in a JCR adapter for SimpleDB: this would enable a semantically meaningful data storage layer to be plugged on top of the Amazon service. Think about a massively distributed content management system...

Another cool idea, one that would really be dear to me, would be a RuleML fact base adapter for NxBRE. Since SimpleDB is built on Erlang, it naturally manipulates tuples, which are the essence of RuleML's atoms and facts. A fact base adapter would allow the rule engine to tap into a centralized knowledge base and make deductions out of it. Think about a massively distributed expert system...

Thursday, December 06, 2007

Precious Revisions

In his most recent post, Martin Fowler talks about the necessity of teaching clients about the value that resides in the tests his company delivers with any software solution they ship (and how, without this recognition, tests will be altered and mangled by the client, and thus lose most of their intrinsic value).

Indeed, it is interesting to recognize that clients might consider the only valuable outcomes of a software project to be the software itself and its documentation, and consequently ignore other valuable resources.

In that matter, I think there is another valuable outcome that exists but is generally ignored by clients: the source control revision history. Understanding how a piece of software has been built not only reveals a lot about the professionalism of the builder, but also helps in understanding how the application took form and reached its current incarnation, which can further help it evolve gracefully.

As a software consulting client, the most convenient option is probably to open a dedicated source control repository for the consultants and grant them access both internally and externally (they will very often work off site). After the project has been completed, the source code and its history can then be moved from this repository to the client's private one. Alternatively, the consulting firm can use their own repository, provided an export/import path exists between the consulting and client source control tools.

To sum it up: cherish your code and its revisions! After all, this should come as no surprise since everybody knows there is a lot to learn from history: this is just a different context!

Monday, December 03, 2007

Jung Juggles Data Jungles

I am convinced that one of the tasks of a software architect consists of improving or building tools for software developers, so they can be more efficient in their daily dose of fun called "work". This of course includes visualization tools, a subject dear to Gregor Hohpe, who gives great insights on how and why visualization can help make sense of data whose sheer amount or lack of order makes it hard to grasp.

At the beginning of this year, I started to experiment with a very good library named JUNG, an acronym that stands for "Java Universal Network/Graph framework" and explains exactly what it is. So far I have built a pair of tools:
  • A grapher that analyzes the JCR repository of a web CMS and represents the components and templates as graphs, by following the hierarchy of inheritance they engage in.
  • An analyzer that parses a lengthy Java EE web application deployment descriptor and represents as a tree the filters involved in processing HTTP requests.
Very easy and fast to develop, thanks to JUNG's clear API, these visualizers have been useful for finding discrepancies: for example, graph disconnections or node duplications were nice visual clues that helped locate problems.
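For the curious, the data-extraction half of the second tool boils down to something like this sketch (the JUNG rendering is left out, and the inlined descriptor is a toy example, not a real web.xml):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class FilterChainLister {
    // Returns, in declaration order, the filter names a deployment
    // descriptor maps onto requests -- the raw data the tree view consumed.
    static List<String> mappedFilters(String webXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(webXml.getBytes(StandardCharsets.UTF_8)));
            NodeList mappings = doc.getElementsByTagName("filter-mapping");
            List<String> filters = new ArrayList<>();
            for (int i = 0; i < mappings.getLength(); i++) {
                Element mapping = (Element) mappings.item(i);
                filters.add(mapping.getElementsByTagName("filter-name")
                        .item(0).getTextContent().trim());
            }
            return filters;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String webXml = "<web-app>"
                + "<filter-mapping><filter-name>auth</filter-name></filter-mapping>"
                + "<filter-mapping><filter-name>gzip</filter-name></filter-mapping>"
                + "</web-app>";
        System.out.println(mappedFilters(webXml)); // [auth, gzip]
    }
}
```

Feeding each name into a JUNG graph as a vertex, with edges following the request path, is then the easy part.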

I invite you to explore the power of visualization on your own, with your own data and challenges.

Saturday, December 01, 2007

Mule Meets Rabbit

I have just released version 1.1.0-M1 of the Mule JCR Transport.

This new version represents a major step forward, as the transport now supports both reading and writing content to a JCR repository. The transport can also handle standard and custom node types, thanks to an extensible type-handling framework.

Just go and read the user guide to learn more about the features of this milestone release. You can also download the packaged release or simply add the Maven artifact to your project.

Note that support for transactions and streaming is scheduled for the M2 release.

I hope you will enjoy the new possibilities that occur when a Mule meets a Rabbit.

Monday, November 26, 2007

Celesstin Project Alumni LinkedIn Group

Celesstin, developed in the early nineties, is a system that converts mechanical engineering drawings into a format suitable for CAD.

I encourage all alumni from the Celesstin Project (I, II, III and IV) to join this group.

Sunday, November 25, 2007

WebSpring

In the recent announcement for Spring Framework 2.5, I have been struck by the following line:

Officially certified WebLogic support

I think this is a very positive development in the Spring and BEA joint adventure. From a technical standpoint, the Pitchfork project was already very appealing, thanks to the seamless integration of Spring with WebLogic, providing all the goodness of the former at the core of the latter (instead of the usual "on top of JEE" deployment model).

Moreover, for developers, being able to benefit from Spring was a good way to restore the somewhat tainted "coolness" of such a traditional platform as WebLogic. It is also interesting to compare BEA's move to that of JBoss, the hippest application server of its time. Indeed, JBoss's passive-aggressive relationship with Spring has been, and still is, instrumental in making developers reconsider their commitment to this application server.

This newly announced certified support will ring a bell at the management level, especially in the risk-averse prime market of WebLogic (financial and insurance institutions), where using an open source framework, as excellent as it is, is very often frowned upon (on the other hand, pragmatism is also a characteristic of such a clientele, especially in the UK, explaining the successful commitment of VOCA to Spring).

How this official support will materialize will be critical to the eventual success of this partnership, especially knowing the usually poor support vendors offer their clients compared to what they can get from open source projects.

Friday, November 23, 2007

Eclipse Default Key Mapping Request

Every time I install Eclipse, I have to bind "Refactor > Extract Constant" to Control+Alt+C or the equivalent with the funny Mac keys.

This refactoring is really common and I think it ought to be in the default mappings of Eclipse.

Am I the only one thinking so?

Recently Read

I read this book cover to cover in just a few commute trips: it is indeed a fascinating read to discover the ingenuity that hackers employ to abuse on-line games, sometimes for profit, often for fun, and the privacy-invading counter-measures game companies put in place. An eye opener for anyone playing on-line games who would not be willing to share all their private information with vendors.

A very good introduction to Erlang, which really invites you to start building resilient and parallel applications. I was amazed to see the similarity between Erlang's pattern matching philosophy and RuleML's.


Tuesday, November 20, 2007

Moral Code 2.0?

Should software developers have a moral code about their coding? ThoughtWorks says yes, according to eWeek's "Toward a Discussion of Morality and Code" article.

Anything new here? Not for any member of the IEEE, as its Code of Ethics clearly puts forward values that constitute an inspiring moral code.

So is it worth mentioning ThoughtWorks' position? Certainly, because our industry needs thought leaders who establish credible models to follow. Why is this? Maybe because software development is one of the rare professional fields where someone can read a "Teach Yourself" book and proclaim themselves a specialist the week after.

Not surprisingly, this practice of over-inflating skills is not rare and is often even encouraged by software consulting firms, in order to seduce clients and secure contracts. Hence, an interesting question to ask Mr. Singham back is: should software consulting firms have a moral code about their developers?

His answer might also teach a lesson for others.


Monday, November 19, 2007

Root of the Rot

I recently had to satisfy a pretty simple feature request for one of my projects: to be able to reload part of its configuration at run time. Not a big deal, right? Well, not exactly. In fact, I have been amazed by the impact such a simple change had on the system, if not as a whole, at least in all its critical sections.

It is well known that applications tend to rot over time (i.e. after changes have been made to them) but, up to this recent feature request, I was unsure of the actual cause of this rot.

Application rot comes from the chaotic relief of the tension created by changes that alter an application's fundamentals and invariants. This chaotic relief increases the software's entropy, as software quality and maintainability principles get violated.

As I was implementing the aforementioned new feature, the tension on the application was causing its design, its thread safety and its clarity to degrade at a distressing speed. I was going fast, but not well. Fortunately, my alarm bell was ringing loud and clear and, after reverting to the latest head revision, I started again with a holistic plan that took care not to increase the entropy of the application.

During this I have noted that nowadays:
  • Tools are efficient in helping us keep entropy low (FindBugs and Checkstyle were shaking their heads about some bad stuff I was doing),
  • Libraries are now rich enough to help mitigate changes that can compromise thread safety (think Java Concurrency or Intel Threading Building Blocks),
  • Industry luminaries have preached the need for elevated professional standards enough to make us conscious of software rot when it happens (the bell keeps ringing).
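Returning to the configuration reload that started all this, here is a minimal sketch of one entropy-friendly approach: keep the configuration immutable and swap it atomically through an AtomicReference, so reader threads always see a consistent snapshot. (A sketch under assumed names, not the actual project code.)

```java
import java.util.concurrent.atomic.AtomicReference;

public class ReloadableConfig {
    // Immutable snapshot: a new instance per reload, never mutated in place,
    // so a reader thread can never observe a half-updated configuration.
    static final class Snapshot {
        final int poolSize;
        final String endpoint;
        Snapshot(int poolSize, String endpoint) {
            this.poolSize = poolSize;
            this.endpoint = endpoint;
        }
    }

    private final AtomicReference<Snapshot> current =
            new AtomicReference<>(new Snapshot(10, "http://example.com/a"));

    Snapshot get() { return current.get(); }            // called by worker threads
    void reload(Snapshot fresh) { current.set(fresh); } // called by the reloader

    public static void main(String[] args) {
        ReloadableConfig config = new ReloadableConfig();
        Snapshot before = config.get();
        config.reload(new Snapshot(20, "http://example.com/b"));
        System.out.println(before.poolSize + " -> " + config.get().poolSize); // 10 -> 20
    }
}
```

The point is that thread safety is preserved by design rather than retrofitted, which is exactly the kind of holistic plan that keeps entropy from creeping in.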

Tuesday, November 13, 2007

Will Find Your Bugs... Tomorrow

I have just upgraded to the latest version of the Findbugs Eclipse Plugin (1.3.0.20071108) and landed in a terrible world of sluggishness.

The new version of the plug-in is so slow on my machine (a 10-month-old MacBook Pro with 2GB of RAM, running Eclipse Europa) that I had to revert to the previous version of the plug-in (1.2.1.20070531). I did not consider disabling the auto-run feature of FindBugs because I do not want to forget to run it: this is one of the interesting aspects of this plug-in (without this option, I would simply uninstall the plug-in and rely on the Maven report that contains the same information).

Maybe the issue is visible because my project has a little more than 80 dependencies (the joys of open source). But the previous version was fast enough, so something has probably gone bonkers in the latest release of this very useful plug-in.

Anyone else out there facing the same issue?

Sunday, November 04, 2007

Back to Humans

This month's issue of Computer runs an article titled "Generation 3D: Living in Virtual Worlds", which ends up predicting that virtual 3D worlds could become pervasive in our lives by 2047. I must admit that, as cool as living a virtual life in an MMORPG sounds to a geek like me, I am frightened by the implications for our societies.

If our avatars become the main mental projection of our psyches and our disincarnate selves become our main subject of concern, what would happen to such fragile things as the environment, democracy or compassion?

Will it matter to the "generation 3D" if the Earth must be over-exploited to produce enough energy for powering the zillions of servers hosting their fantasy worlds?

Will it matter to them if their countries turn into police states where their only liberties will be virtual, abandoning the ideals that founding fathers and thinkers of the past had for mankind?

And finally, will it matter at all if others are left out, dying of cold or hunger at the fringe of the digital society?

Was Queen prophetic?

Tuesday, October 30, 2007

Geek Pride

Whatever the subject of his post, Uncle Bob always ends up hammering home the ultimate goal that should drive us software developers:

"It is not good enough that a program work. A program must also be written well. As a programmer you should take pride in your work and never leave a mess under the hood. Remember, a product that works, but that has a bad internal structure is a bad product."

Thank you Uncle Bob. May you be read at all levels of management.

Software Patents Or Not?

This month's issue of IEEE Canadian Review runs a very interesting article titled "Patenting Software Innovations: A brief overview of the situation in some jurisdictions of interest" (PDF available for download at the top left corner of this page).

In this article, Alexandre Abecassis gives a short but informative overview of the reality of software patents in Canada, the USA and Europe. A must-read if you want to have clear ideas on the subject.

Sunday, October 21, 2007

Another One Rides The Bus

At a time when big brains start to wonder what an Enterprise Service Bus (ESB) is all about and have doubts about the health of Service Oriented Architecture (SOA), today's post by Uncle Bob is a refreshingly pragmatic counterpoint, as we can expect from him.

Of course I can only concur, but I must say that scorning enterprise service buses, as he does, is not necessary (maybe it is, just for the purpose of counterbalancing vendors who try hard to push expensive ESBs on clients...).

For me, an ESB is a distributed intermediation middleware whose main goals are:
  • Facilitating applications interoperability, and
  • Reducing applications coupling, and
  • Avoiding point-to-point communication, but also
  • Favoring asynchronous messaging and eventing over synchronous remote invocation.
I believe that deploying an ESB does not imply embracing the full canonical SOA gong show, but can, in fact, be a good occasion for preaching the values I listed above and alleviating the following inevitable traits of a mature software landscape:
  • Applications tend to know too much about each other, with integration happening at data level, if not database level, and sometimes happening beyond the enterprise boundaries.
  • Applications tend to wait too much for each other, engaging in long chains of synchronous requests while asynchronous messaging could be used to free up threads, hence resources.
  • Applications tend to talk too much when they have nothing to say: polling mobilizes resources while efficient, yet simple, notification mechanisms have been around for a while.
As time passes, each change becomes more and more complex and risky, as it is hard to estimate which other applications will go bonkers if you dare touch something. Nothing new here: this is just a macroscopic replay of what happens inside applications, where components also tend to develop tight coupling.

So an ESB is not a golden hammer, but it is an occasion, a driver, an extra reason for making things better. Presented like this, it is not surprising to find out that those who value what they do are willing to ride the bus.

Monday, October 15, 2007

Today is B.A.D.

Indeed, this is Blog Action Day, and bloggers all around the world are talking about the crucial subject that is the environment.

What could I say that has not been said before? I do not know, so here is a picture of a salmon I took yesterday afternoon in the creek that flows next to my house.

I wish many wild salmon to the next generations.


Sunday, October 14, 2007

Yeah, Please Fix It!



I am tired of having my MacBook lose its WiFi connection, while my XP and Kubuntu boxes have no trouble with it.

Saturday, October 13, 2007

The inspiring life of Eric Hahn

In the October 2007 issue of IEEE Spectrum, an article (The Codemaker) depicts the life of Eric Hahn, who "has been an executive, an entrepreneur, and an investor. But he's happiest of all to call himself a programmer".

The life of Mr. Hahn can only resonate intimately with the lives of the many whiz kids who started computing when this activity was only beginning to become known to the public. Indeed, I started a few years after him and on a smaller scale of machines (a Sinclair ZX-81 instead of a Digital PDP-8/m), writing tiny games instead of hard-core emulators. As I was living in the countryside of north-east France, the analogy stops there, since Mr. Hahn had access to the much more stimulating and responsive environments of New York and Silicon Valley.

One very touching aspect of his life is the tension between making a career and remaining a programmer. Throughout the years he kept his passion for writing code and found enough will and talent to create opportunities for himself to keep developing. This tension is symptomatic of our societies, which respect those who make others do more than those who do, pushing people away from what they thrive doing.
“I wonder,” Mr. Hahn says, “how many programmers are trapped in the bodies of Silicon Valley executives. We tend to leave programming jobs because they just don't pay enough to support kids and mortgages here in Silicon Valley. But increasingly, when people have some material independence, they revert.”

The only thing I can teach Mr. Hahn is that this is not happening only in the Valley!

____
As a side note: if you are not already a reader of IEEE Spectrum and have any interest in technology, I can only strongly encourage you to subscribe, as this is the best magazine I happen to read nowadays and the only one I read cover to cover.

Thursday, October 11, 2007

Show off your cool NxBRE project!

Do you feel like giving a short remote demonstration of what you have accomplished with NxBRE and RuleML in your project?

Then you are up for the RuleML-2007 challenge!

Please contact me ASAP for more details.

Sunday, October 07, 2007

Blinded By Trust

Sun has improved the new version of their Java forums, so my previous rant about how disastrous it was must now be taken with a grain of salt. Browsing these forums to help people with their development issues is attractive again.

Answering questions on these forums is a very instructive process because, the same way we learn from our own mistakes, there is a lot to learn from the fumbles of others.

Another interesting aspect is trying to figure out what went wrong in the code submitted by a developer: it is a very hard exercise because you have to fight the natural tendency to trust the other party to have stated their problem correctly.

Moreover, this exercise reveals incredible blind spots in the way we perceive other people's code when we assume that they know what they are doing, which is the normal position you take with your colleagues, for example.

Today's exchange on the forum is symptomatic of this. Focusing on the programmer's stated serialization issue, I totally disregarded the JDBC code he wrote, assuming it was correct. It was in fact badly flawed!

The lesson from this is that, when helping a developer with an issue, you should fight the natural desire to trust what he is reporting while, of course, maintaining a respectful attitude. This will help alleviate biases and blind spots when reviewing the defective code.

Wednesday, October 03, 2007

Paint It White

The Register recently reported that, according to boffins, it is dark times for application development.

Just when I thought everything was getting better. Way better.

  • Writing code has never been so fun: we have great IDEs, loaded with refactoring features, enriched by a wealth of plugins that turn them into tailored productivity platforms.

  • Our tool boxes are now loaded with pragmatism-driven frameworks, multi-threaded building blocks and a panoply of libraries for everything and whatnot.

  • Testing has never been so easy: we have a great variety of tools for testing applications at almost all levels and in fully automated ways.

  • Testing has never been so rewarding: funny colored lights give us instant reward on our efforts while test coverage tools provide us with an exciting challenge.

  • Source control management is now accessible to mere mortals: no need to be a command line guru or a sysadmin to store and manage code in repositories anymore.

  • Collaborating on-line is now a reality thanks to tools designed for sharing ideas, tracking issues and progress, and authoring content over the Internet.

  • Making reproducible and automated builds is a piece of cake: dependency management and library repositories combined with continuous integration platforms produce a sense of velocity and fluidity that makes development thrive.

  • The tyranny of modelling and the myth of big design up front have been debunked and relegated to the museum of toxic ideas.

  • Industry luminaries have risen and their voices have encouraged the inception of methodologies that promote communication, honesty, courage and elevated professional standards.

  • Hype and buzzwords are consistently derided and exposed for what they are by the same thought leaders.
Of course, we face clunkiness, bugs and disappointment every day: does this make our times dark ones?

Sunday, September 30, 2007

Quick Web Silver Runner

I have started to use Mozilla Webrunner on my Mac: used in conjunction with Quicksilver, this is a neat way to start web sites as standalone sand-boxed applications. This does not replace tab browsing, of course, but is very useful for the applications you usually dock on your second monitor (or third one, if you are a real guru), like your continuous integration server dashboard or your web mail home page.

To have Quicksilver bootstrap Webrunner applications, simply store your profiles in a directory that you register as a custom catalog. Et voilà: you can launch your web applications with a few keystrokes.

This is another way to reduce the latency between thinking about what the computer should do for you and having it actually do it. When you think of the zillions of cycles the computer wastes waiting for you, any tool, gadget or utility that helps minimize this loss is to be lauded!

Saturday, September 29, 2007

Agile In The Burbs?

In a recent post, my friend Alex commented on the complex art of managing employees in remote locations, which came at a time when I was thinking about the high toll of co-located teams.

My reflection started a few weeks ago when I was reading a reader's letter in IEEE Spectrum's Forum. Commenting on the very complete coverage Spectrum had just run on big cities and their challenges, the reader said:
Why does modern society think that it’s entitled to expend all that energy, in whatever form, merely to transport people to their jobs? No one mentions the toll that a 4-hour-per-day commute takes on relationships. (...) What has always seemed more sensible to me is to live where you work. My commute is 10 minutes each way, on foot. And in my entire career as an engineer, the longest commute I’ve had was a half-hour drive.
(Read the full letter "Megacommutes to megacities")

My first reaction was: lucky man! My second thought was: why can't we all have such a life, a life where the distance to work does not put a toll on our lives and our environment?

So why do we rush to big cities? Because that is where the demand for IT workers is high: banks, public sector entities and private companies tend to locate themselves downtown. Since they need plenty of software engineers, they act as a magnet for us geeks of all sorts.

But why don't we telecommute more? After all, this revolution was announced a long time ago, and we now have the tools that would allow us to make it happen. Massively distributed open source communities have proven that the model can work.

On the contrary, agile principles tell us that teams should be as co-located as possible, because of the millstone that distance puts on communication, whose efficiency is a major cause of success for software projects (and whose lack is a major cause of failure). Industry luminaries and extensive discussions have made this point very clear.

Since the housing madness only allows singles and dinkies to live downtown, those of us with families are forced to live in the burbs, far away from work and far away from the perfect commute the aforementioned reader says he has enjoyed all his life.

Trying to accommodate the often conflicting requirements of agility, personal life and business is not a trivial task, but it certainly calls for a broader rethinking of the way our cities and workplaces are organized and located.

Saturday, September 22, 2007

Code Literature

Documentation for open source projects is a touchy subject.

I know this first hand from my own experience with NxBRE: writing and maintaining the 55+ page PDF guide is not only a significant effort, but one of a very different nature than developing the product itself.

The wiki-based knowledge base is a good alternative: less formal, built from users' questions and easy to maintain, it is definitely a viable approach for documenting small-scale projects like mine.

This is also something I have learned in the field, as I work a lot with open source solutions. Pristine and up-to-date documentation is still the exception, at least for projects without corporate backing. After hours of trial and error, fighting with on-line documentation that was sometimes outdated (many examples would simply not work) and sometimes too advanced (showing features available in snapshot builds only), I came to consider recommending a paid-for solution, just for the sake of having someone to blame if the documentation was bad.

But surrendering that way would have been too easy! Instead, I reverted to the more courageous tactics of the open source addict:
  • explore the provided running examples,
  • if that is not enough, browse the test cases,
  • if that is still not enough, step through in the debugger with the source code attached to the IDE.
Except for company-backed projects, you have to be ready to do this when opting for open source. In fact, the presence of running samples, a wealth of unit tests and human-readable source code are important decision factors when selecting an open source solution. After all, the ultimate truth is in the code; all the rest is literature.

Monday, September 17, 2007

Saved By The Tickler

A few days ago, Sun rolled out a new version of their Developer Forums. Since then, using these forums has turned into a nightmare:

  • All my watches are regularly lost, preventing me from following up with people asking questions, effectively killing the main value these forums are supposed to offer.
  • The new text editor mangles any input that is anything other than plain text: copying and pasting from any source ends up adding funky tags that appear only when you save (or preview) your post.
Congratulations Sun! You have successfully killed the last useful Java resource you still had under control.

Oh, maybe not, there is still the fancy JAVA Nasdaq ticker...

Sunday, September 09, 2007

Performance Driven Refactoring

I have recently talked about how fun it is to refactor code in order to increase its testability. Similarly, I have discovered another kind of refactoring driver: performance.

Starting to load test and performance profile an application you have developed is always an exhilarating time: I compare it to Whack-A-Mole where, instead of funny animal heads, you hammer your classes down the execution tree until most of the time is spent in code that is not yours.

Interestingly, I have seen the design of my application evolve while optimizing its critical parts. It is truly refactoring, as no feature gets added, but the code gets better performance-wise.

Consider the following example. Here is the original application design:

Profiling showed that computing the values of the context was really expensive and that these values were not always used by the target object.

I refactored to the following design:

In this design, the context values are only computed when the target object requests them, using a simple callback mechanism.
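This callback-based design can be sketched as follows. It is a hypothetical reconstruction of the idea rather than the original code: the `LazyContext` and `register` names are mine, and it uses today's `java.util.function.Supplier` where the 2007 code would have used a hand-rolled one-method callback interface.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of the refactored context: same get() contract as before,
// but values are computed on demand through callbacks instead of eagerly.
class LazyContext {
    private final Map<String, Supplier<Object>> providers = new HashMap<>();
    private final Map<String, Object> cache = new HashMap<>();

    // O1 registers how to compute a value, without computing it yet.
    void register(String key, Supplier<Object> provider) {
        providers.put(key, provider);
    }

    // O2 keeps calling get(): the expensive computation only runs on
    // first access, and only for the values actually requested.
    Object get(String key) {
        return cache.computeIfAbsent(key, k -> providers.get(k).get());
    }
}
```

The point of the sketch is that neither O1 nor O2 changes its contract: O1 still populates the context, O2 still reads from it, yet the expensive work is deferred and skipped entirely for values O2 never asks for.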

One could wonder why I did not simply remove the original context object and make O2 call O1. This would work but has several disadvantages:
  • it unnecessarily increases coupling between the two classes,
  • it is a visible design change, while I wanted the refactoring to respect the existing contracts between objects (the context, in this case).
In conclusion, there are many good reasons for refactoring: the quest for performance can be a very valid one!

Tuesday, September 04, 2007

High Quantum Leap?

I am new to JPA and discovered today that HQL is not JPQL. This sounds pretty obvious, but the circumstances in which I was reminded of this cruel reality were puzzling.

First, let's start with my mistake! I wrote something like this:

FROM Magazine WHERE title = 'JDJ'

instead of writing that:

SELECT x FROM Magazine x WHERE x.title = 'JDJ'

i.e. HQL instead of JPQL.

You might think I must be a lousy tester to release code that does not work, but the trick is that it was working fine and passing the integration tests.

The issue was that the JPA provider I am using, Hibernate, was not running in strict mode and hence joyfully accepted the HQL queries. Everything was fine in the integration tests run on Jetty, but when deployed on JBoss, where a strictly configured Hibernate was used as the JPA provider, all hell broke loose.

So conclusion one is that, yes, I am a lousy tester: always target the same platform for your integration tests as for your deployments. There are simply too many provided runtime dependencies in an application server to take any chances with them.

And conclusion two is more an open question about the value of standardized Java APIs built on top of existing proprietary industry standards. Some successes exist, like JCR, which is quite an achievement in the complex matter of unifying access to CMS implementations. But I must admit that one could find JPA an off-putting keyhole on Hibernate, strewn with tricky drawbacks.

I want to believe that, the same way JAXP started as a gloomy reduction of the capabilities of Crimson and Xalan behind wacky factories and ended up as a convenient way to unify XML coding in Java, JPA will evolve towards a bright future.
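Since hindsight is cheap, here is the kind of naive guard that would have caught my mistake before JBoss did. This is a hypothetical helper of my own, not part of JPA or Hibernate, and it only catches the implicit-SELECT shortcut from this post (HQL lets a query start directly with FROM, while a JPQL select statement must begin with an explicit SELECT clause); real strict-mode validation is of course far more involved.

```java
// Hypothetical sanity check: flag HQL-style queries that a strictly
// configured JPA provider would reject at runtime.
class JpqlGuard {
    static boolean looksStrict(String query) {
        String q = query.trim().toUpperCase();
        // JPQL statements start with SELECT, UPDATE or DELETE;
        // a leading FROM is the HQL shortcut that bit me.
        return q.startsWith("SELECT ")
            || q.startsWith("UPDATE ")
            || q.startsWith("DELETE ");
    }
}
```

Running such a check over the query strings in an integration test would have failed on Jetty just as surely as the deployment failed on JBoss, which is the cheapest way I know of keeping conclusion one honest.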

Monday, September 03, 2007

A Triplet Of Releases

What better day than Labor Day for releases? Sorry, I could not resist the cheap childbirth analogy! So, I have just released the latest versions of NxBRE, the Inference Engine Console and the DSL plug-in.

I have decided not to finalize support for RuleML 0.91 Naf Datalog, mainly because it is not clear how integrity queries are now implemented. The release was worthwhile anyway, thanks to the patches, bug reports and feature requests submitted by the community of users.

Enough labor for today!

Saturday, September 01, 2007

Virtual Shelf

Thanks to Vijay, I have discovered Shelfari, a really neat virtual shelf where you can share your experience of your current and past readings and consult other people's points of view.

You can even integrate a widget on your blog, as you can see on the lower right of this page.

Highly recommended!