Friday, October 24, 2008

SpringSource's Elitism

The SpringSource community is elitist and despises juniors. Here is the captcha I just got on their forums:

Why so much hatred? Seriously guys?

Half Life And No Regrets

In the latest edition of the excellent IEEE Software magazine, the no less excellent Philippe Kruchten conjectured the existence of a five-year half-life for software engineering ideas. That would mean that 50% of our current fads will be gone within five years.

Though not (yet) backed by any hard numbers, his experience (like mine and yours) suggests that this is most probably true.

All in all, the sheer idea of such a half-life decay pattern helps us let obsolete knowledge go without regret. It is fine to forget everything about EJBs and code with POJOs. JavaBeans are evil: you can come back to pure OO development. You can bury your loved-hated O/R-mapper skillset in favor of writing efficient SQL. It is OK to drop the WS-Death* stack for REST (this one is purposefully provocative; if a WS-* zealot reads this blog, I am dead).

Unlearning things that were so incredibly painful to learn may feel like a waste, but it is not. This half-life decay is in fact great news because it translates into less knowledge overload and more focus on what matters... or, unfortunately, into the adoption of new short-lived fads! And the latter is the scariest part. How much of what we are currently learning will survive more than five years?

Nothing platform or even language specific, I would say. But sound software engineering practices will remain: building elevated personal and professional standards, treasuring clean code, challenging our love for hype with the need for pragmatism...

This is where today's investment must be: in the things that last.

Just Read: The Productive Programmer


In this concise book, Neal Ford shares his hard-gained, real-world experience with software development. Because he consults for the best, his experience is diverse and rich, and hence definitely worth reading.

This book feels a lot like "The Pragmatic Programmer Reloaded": same concepts with updated samples and situations. It is a perfect read for any junior developer, as the book will instill the right mindset. It is also a recommended book for any developer who struggles with his own practice (all of us, right?) and wants to shift into the next gear.

Personally, having attended some of Neal's conference speeches and being somewhat of an old dog in the field of software, I kind of knew where he would take me in this book, hence I had no big surprise nor made any huge discovery while reading it. That said, three chapters are truly outstanding and speak to experienced geeks: "Ancient Philosophers", "Question Authority" and "Polyglot Programming".

Saturday, October 18, 2008

Two-Minutes Scala Rules Engine

Scala is really seductive. To drive my learning of this JVM programming language, I thought I should go beyond the obvious experiments and try to tackle a subject dear to my heart: building a general purpose rules engine.

Of course, the implementation I am going to talk about here is naive and inefficient, but the rules engine is just a pretext: this is about Scala and how it shines. It also covers only a subset of what Scala is good at: I merely scratch the surface, focusing on pattern matching and collections handling.

The fact representation I use is the same as the one used by RuleML: a fact is a named relationship between predicates. For example, in "blue is the color of the sky", color is the relationship between blue and sky. The rules engine matches these facts on predefined patterns and infers new facts out of these matches.

In Scala, I naturally opted for using a case class to define a fact, as the pattern matching feature it offers is all that is needed to build inference rules:
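A minimal sketch of such a case class (the field names are my own; the original also overrode hashCode, as explained just below, which I omit here):

```scala
// A fact is a named relationship between two predicates: "blue is the
// color of the sky" becomes Fact("color", "blue", "sky").
case class Fact(relation: String, left: String, right: String)

// Pattern matching on a fact is all an inference rule needs:
Fact("color", "blue", "sky") match {
  case Fact("color", what, of) => println(what + " is the color of the " + of)
  case _                       => println("no match")
}
// prints: blue is the color of the sky
```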
Normally you do not need to implement your own hashCode method, as the one Scala generates is very smart about your object's internals, but a known issue with case classes forced me to do so.

For rules, I opted to have them created as objects constrained by a particular trait:
The method of note in this trait is run, which accepts a list of n facts and returns either zero (None) or one (Some) new fact, where n equals the value of the arity function.
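A sketch of what such a trait could look like (names are mine; the Fact class is repeated so the snippet stands alone), together with a toy arity-1 rule:

```scala
case class Fact(relation: String, left: String, right: String)

// A rule declares how many facts it consumes (arity) and attempts to
// derive at most one new fact from a list of exactly that many facts.
trait Rule {
  def arity: Int
  def run(facts: List[Fact]): Option[Fact]
}

// A toy example: the "married" relation is symmetric.
object MarriedIsSymmetric extends Rule {
  val arity = 1
  def run(facts: List[Fact]) = facts match {
    case List(Fact("married", a, b)) => Some(Fact("married", b, a))
    case _                           => None
  }
}
```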

So how is the list of facts fed to the rule's run method built? This is where the naive and inefficient part comes into play: each rule tells the engine how many facts it needs to receive as arguments when called. The engine then feeds the rule with all the unique combinations of facts that match its arity.

This is an inefficient but working design. The really amazing part here is how Scala shines again by providing all that is required to build these combinations in a concise manner. This is achieved thanks to the for-comprehension mechanism, as shown here:
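Such a combination builder could be sketched as follows (the function name is mine):

```scala
// All n-element combinations of a list, preserving list order, built
// with a for-comprehension; note it knows nothing about the type T.
def combinations[T](n: Int, list: List[T]): List[List[T]] =
  if (n == 0) List(Nil)
  else for {
    (head, i) <- list.zipWithIndex
    tail      <- combinations(n - 1, list.drop(i + 1))
  } yield head :: tail

println(combinations(2, List("a", "b", "c")))
// prints: List(List(a, b), List(a, c), List(b, c))
```

Today's Scala standard library ships a built-in combinations method on sequences, but rolling our own is part of the exercise.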
Notice how this code is generic and knows nothing about the objects it combines. With this combination function in place, the engine itself is a mere three functions:
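Those three functions might look like this (my naming; the fact, rule and combination pieces are repeated so the snippet compiles on its own):

```scala
case class Fact(relation: String, left: String, right: String)

trait Rule {
  def arity: Int
  def run(facts: List[Fact]): Option[Fact]
}

def combinations[T](n: Int, list: List[T]): List[List[T]] =
  if (n == 0) List(Nil)
  else for {
    (head, i) <- list.zipWithIndex
    tail      <- combinations(n - 1, list.drop(i + 1))
  } yield head :: tail

// 1. Fire one rule on every combination of facts matching its arity.
def fire(rule: Rule, facts: List[Fact]): List[Fact] =
  combinations(rule.arity, facts).flatMap(c => rule.run(c).toList)

// 2. Fire all rules and keep only the facts we did not know yet.
def inferOnce(rules: List[Rule], facts: List[Fact]): List[Fact] =
  rules.flatMap(fire(_, facts)).filterNot(facts.contains).distinct

// 3. Iterate until no new fact can be inferred (a fixed point).
def infer(rules: List[Rule], facts: List[Fact]): List[Fact] = {
  val fresh = inferOnce(rules, facts)
  if (fresh.isEmpty) facts else infer(rules, facts ++ fresh)
}
```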
Now let us look at the facts and rules I created to solve the classic RuleML discount problem (I added an extra customer). Here is the initial facts definition:
And here are the two rules:
With this in place, calling:
produces this new fact base with the correct discount granted to good old Peter Miller but not to the infamously cheap John Doe:
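Putting it all together, the whole example could be reconstructed as the following self-contained sketch (the relation names, spending figures and Honda product are my guesses, not the original's, and the engine pieces are repeated so the snippet runs on its own):

```scala
case class Fact(relation: String, left: String, right: String)

trait Rule {
  def arity: Int
  def run(facts: List[Fact]): Option[Fact]
}

def combinations[T](n: Int, list: List[T]): List[List[T]] =
  if (n == 0) List(Nil)
  else for {
    (head, i) <- list.zipWithIndex
    tail      <- combinations(n - 1, list.drop(i + 1))
  } yield head :: tail

def fire(rule: Rule, facts: List[Fact]): List[Fact] =
  combinations(rule.arity, facts).flatMap(c => rule.run(c).toList)

def inferOnce(rules: List[Rule], facts: List[Fact]): List[Fact] =
  rules.flatMap(fire(_, facts)).filterNot(facts.contains).distinct

def infer(rules: List[Rule], facts: List[Fact]): List[Fact] = {
  val fresh = inferOnce(rules, facts)
  if (fresh.isEmpty) facts else infer(rules, facts ++ fresh)
}

// The initial fact base, with the extra (and infamously cheap) customer.
val facts = List(
  Fact("spending", "Peter Miller", "min 5000 euro"),
  Fact("spending", "John Doe", "min 200 euro"),
  Fact("regular", "Honda", "product"))

// Rule 1: a customer who spent at least 5000 euro is premium.
object PremiumRule extends Rule {
  val arity = 1
  def run(facts: List[Fact]) = facts match {
    case List(Fact("spending", customer, "min 5000 euro")) =>
      Some(Fact("premium", customer, "customer"))
    case _ => None
  }
}

// Rule 2: a premium customer gets the discount on a regular product.
// Combinations are unordered, so both orders must be matched.
object DiscountRule extends Rule {
  val arity = 2
  def run(facts: List[Fact]) = facts match {
    case List(Fact("premium", c, "customer"), Fact("regular", p, "product")) =>
      Some(Fact("discount", c, p))
    case List(Fact("regular", p, "product"), Fact("premium", c, "customer")) =>
      Some(Fact("discount", c, p))
    case _ => None
  }
}

infer(List(PremiumRule, DiscountRule), facts).foreach(println)
// prints the three initial facts followed by:
//   Fact(premium,Peter Miller,customer)
//   Fact(discount,Peter Miller,Honda)
```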
This is truly a two-minute rules engine: don't ship it to production! Thanks to the standard pattern matching and collection handling functions in Scala, it was pure bliss to develop. On top of that, Scala's advanced type inference mechanism and very smart and explicit compiler take strong typing to another dimension, one that could seduce duck typing addicts.

So why does Scala matter? I believe the JVM is still the best execution platform available as of today. It is stable, fast, optimized, cross-platform and manageable. An alternative byte-code compiled language like Scala opens the door to new approaches in software development on the JVM.

Closing note: an interesting next step will be to parallelize the engine by using Scala actors. But this is another story...

Wednesday, October 08, 2008

Database Cargo Cult

Today, I attended an epic product walk-through. When the demonstrator came to explore the database of the application and opened the stored procedures directory, the audience was aghast at the shocking display of hundreds of these entities. I have been told it is the canonical way of developing applications in the Microsoft world. That may well be, as it creates a favorable vendor coupling, but, to me, it is more a matter of database cargo cult.

The fallacy of the database as an application tier is only equaled by the folly of using it as an integration platform. Both approaches create a paramount technical debt that takes many years to pay, if it is ever paid. Why? Because both approaches lack the abstraction layer that you need to create loosely coupled systems. Why again? Because both approaches preclude any sound testing strategy.

I have already talked about what I think are valid use cases for stored procedures. You do not need to believe me. Check what my colleague Tim has to say about it. And you do not need to believe him either, but at least read what Neal Ford says about it.

If you are still not convinced, then I have for you the perfect shape for your diagrams:

1BDB stands for "one big database". And if you are in a database cargo cult, you already know why it is in flames.

Thursday, October 02, 2008

Jolting 2008

The doors are now open for the 2008 Jolt Product Excellence Awards nominations.

If you have a great book or a cool product that has been published or had a significant version release in 2008, then do not wait any longer and enter the competition!

Oh, did I mention that early birds and open-source non-profit organizations get a discount?

Wednesday, October 01, 2008

Just Read: Working Effectively with Legacy Code


This book from Michael C. Feathers came at a perfect moment in my software developer's life, as I started to read it right before I came to work on a legacy monstrosity.

The book is well organized, easy to read and full of practical guidelines and best practices for taming the legacy codebases that are lurking out there.

I really appreciated Michael's definition of legacy code: for him "it is simply code without tests". And, indeed, untested code is both the cause and the characteristic of legacy code.

Near the end of the book, Michael has written a short chapter titled "We feel overwhelmed" that I found encouraging and inspiring. Yes, working with legacy code can actually be fun if you look at it from the right angle. My experience in this domain is that increasing test coverage is elating, deleting dead code and inane comments is blissful, and seeing design emerge where only chaos existed is ecstatic.

Conclusion: read this book if you are dealing with today's legacy code or if you do not want to build tomorrow's.

Meme(me)


From Sacha:
1. Take a picture of yourself right now.
2. Don’t change your clothes, don’t fix your hair…just take a picture.
3. Post that picture with NO editing.
4. Post these instructions with your picture.