Friday, May 25, 2007

Max Planck and the TCO

In "Pitching Agile to Senior Management" (DDJ June 2007), Scott Ambler presents tactics for introducing agile approaches to management. Besides the necessity of talking the right talk, Scott emphasizes the importance of avoiding an "us versus them" mindset and, to that end, of recognizing the virtues and values of management.

In this article, Scott shows how quickly agile software development starts to provide value and how this factor can help pitch the positive bottom-line impact of agile. There is, though, a parameter that management will also consider in this board game: an agile team is significantly more expensive than a traditional one. Agile teams are usually staffed with seasoned developers who are generalizing specialists: this species is more expensive than the usual programmers and analysts that traditionally managed projects are used to dealing with. And this is without mentioning the folly of co-located teams when you can have cheap and qualified labor near or off management's shores!

Hence the comparison graph of the total software project costs will probably look like this...
... with the green line showing the cost of agile approaches and the red one the cost of traditional ones. So this is good news: agile still beats traditional over time! Yes, but the big question is how far along the product lifetime line management will look when making their decision. It might sound obvious that the whole lifetime will always be considered, but it is not.

There are situations where management will have a narrow sight on this:
  • Organizational reasons: maintenance of the product will be handed off to a different unit, unconnected with the current managers. This happens in large structures, where the point in the hierarchical pyramid at which the development and maintenance management chains meet is so high that no one will look into how decisions on one side affect the other.

  • Personal reasons: an upcoming promotion or retirement can make a particular manager disinclined to look too far into the future. Though this might sound unprofessional or rare, with the baby boomers now departing, this situation will occur more often than you think.
In these situations, you might end up hitting a wall harder than Planck's. And if this wall happens to stand before the point where agile starts to deliver its financial goodness, as shown here...
... your pitch might be very difficult! In that case, you will have to be agile and refactor the pitch to focus more on time to market or quality aspects rather than sticking to the money side.

Monday, May 21, 2007

ANTLRWorks Works For Me

Alright, so writing XML really put Uncle Bob in a rage. Of course, he is right: XML should be limited to machine-to-machine exchanges and should never be forced down the throats of human beings, let alone geeks of all sorts. The natural consequence is that I have decided to start looking into adding DSL support to NxBRE, as writing RuleML is really not a fun task.

Only in my wildest dreams will it be remotely as good as the DSL support of Drools (including a code-assist-driven full text editor). The harsh reality of (a busy) life will probably limit the scope of this addition to NxBRE, but it should anyway give rule authors a better way of expressing themselves.

To build the grammar I decided to use ANTLR and its great companion tool: ANTLRWorks. I came to this choice thanks to Martin Fowler's current exploratory works on DSLs.

ANTLRWorks has proven really useful in this endeavor: the immediate testing and debugging of the grammar is complemented by a tree representation of the exploration graph that simplifies the detection of syntax goofs and other mistakes.

I have committed the embryo of a rules grammar in the trunk Misc directory. Capturing is still to be implemented. Then a translation table from plain-English format blocks to RuleML atoms will have to be added.

ETA is obviously N/A ;-)

Thursday, May 17, 2007

For French Readers Only

Sorry for the exclusionary title but, alas, this post only concerns those of you who can read French.

Indeed I am happy to invite those of you who can to subscribe to Zeskyizelimit, a witty blog from IT industry samurai Jean-Luc Ensch. Sometimes impertinent and always pertinent, this blog will give you a different view on what is going on in our beloved professional field and also on what happens in this part of the galaxy.

I say alas because, unfortunately, no translation tool will be able to provide English readers with a fair rendering of Jean-Luc's humor and bons mots.

Enjoy the reading!

Tuesday, May 15, 2007

Microsoft 2.0?

So this is it. Microsoft has started its long demise... Maybe not yet, but the company has clearly started a new strategy of betting on the wrong horses and doing it in a very visible manner.

For example, remember the recent introduction of an open XML office document specification while the world already had one or, two days ago, the asinine take on the open-source community's supposed patent infringements.

For this last fumble, several industry notables replied, including Linus Torvalds, but to my mind the most sensible analysis of the situation came from software maven Alan Zeichick, who clearly weighed Microsoft's lack of innovation against its preference for litigation.

It is really time for Microsoft to realize that times have changed: we have entered a new era where the operating system and the office productivity suite are no longer fully in their hands. With on-line solutions and open source alternatives, these two components of a personal computer are not as critical as they used to be. In fact, Microsoft is obviously aware of this trend, as Windows and Office are its two traditional cash cows.

What should they do? Instead of trying to push a new office standard, why not build the best office suite for the existing open format? Users are now educated enough to recognize and appreciate a highly usable piece of software: they would certainly be willing to pay a reasonable amount of money for a productivity suite that would not lock their data in the playground of a vendor.

They could also start to innovate. Really. I cannot name any invention from Microsoft: they have drastically improved existing things, but what have they really invented? Well, many things, I am sure, by the look of the really cool stuff they are doing in their labs. So where is all the cool stuff going?

Well, I guess the crux of the problem is that it goes through the "bully filter" that still exists at the top level of the company. This filter is in fact a transformation that turns innovation into products that lock users in and force them to buy the full stack of Microsoft delicacies. And this, forever.

Even after the master of bully, Bill Gates himself, stepped back, the company is still run by thugs who do not realize that they cannot keep walking this bloody path. Even Redmond product enthusiasts are starting to look elsewhere.

Can Microsoft change and find redemption? Considering that IBM succeeded in converting itself from an insipid consulting firm and dinosaurish hardware maker into a vibrant community daring to stuff Cell/B.E.s into mainframes, I think there is hope that Microsoft will leverage its army of bright engineers and its deep pockets to build a new version of itself!

Edited 31-MAY-07: Old habits die hard: Microsoft is still a bully that does not care about slapping people who create value on its technology if these people do not play by its rules.

Saturday, May 12, 2007

Business Under The Sea

With his "Mobilis In Mobile" motto, was Captain Nemo the first modern agilist?

Whatever your reply to this question, I think the famous submariner deserves a little tribute. So what could be more enviable than to proudly wear Captain Nemo's motto and crest? Nothing, I guess, and if you agree then go shopping in a frenzy at the newly opened Nautilus Warehouse, a dedicated shop I have created on Cafepress.

Oh, and if you wonder what the outrageous $2 cap on each item is for, you will be happy to learn that the true tribute to this tormented humanist resides in this tiny cap, which will be invested in Kiva micro-loans. So this is for fun and a good cause.

Thursday, May 10, 2007

The Four Lives Of The Geek

A few days ago someone asked on Slashdot this very question: "Where to Go After a Lifetime in IT?".

If you filter out the trolls, you will find that the asker was left with these two categories of replies:
  • Do not change anything and keep cashing until you can enjoy your upcoming retirement,
  • Do not be afraid of a drastic change: not doing it will turn into a millstone of regrets.
All this is common wisdom. What can the lives of exceptional people tell us about it? Consider the life of Thomas Kailath, recent recipient of the prestigious IEEE Medal of Honor. During his career, he was never scared of exploring new fields: indeed, he did it four times, which sounds like a good way of using a standard human working lifespan.

What could be the conditions for having four professional lives? If you look at the watermark underneath Professor Kailath's professional path, you will discover that:
  • Passion and curiosity must be the main drivers,
  • Excellence and rigor must be constantly sought,
  • Courage and optimism should be nurtured.
Is this reserved for the IEEE's crème de la crème? It happens more and more. Look around you: people are daring to experiment with their next life. But, of course, nothing comes close to our feline companions and their nine lives.

Monday, May 07, 2007

Adaptive Parallelization Shall Rise

In a recent post on his blog, software guru Larry O'Brien talked again about the pitfalls of code parallelization and concluded with a truly insightful line:
This is a great example of why neither of the simplistic approaches to parallelization ("everything's a future" or "let the programmer decide") will ultimately prevail and how something akin to run-time optimization (a la HotSpot) will have to be used.

Like many of us, I have explored the parallelization of various process-intensive tasks and found that, most of the time, my efforts to chunk and parallelize them just added a processing overhead leading to worse performance. Even when using pooling to mitigate the expense of thread creation, the cost of context switching and of the synchronization ultimately needed to build the final state of the computation was still dragging the overall performance down.

In subtler attempts, like piping XSL transformations instead of chaining them, the results were sensitive to the amount of data processed (the more, the better) and to the way the XSLs behaved (one that starts to output results early leads to better performance when involved in a flow). Hence the context itself was of great importance for the result.

All in all, this led me to think the following as far as parallelization and concurrency are concerned:
  • Let us write correct code with regard to thread safety,
  • Let us write efficient code as if only one thread was available,
  • Let us write readable code and avoid "clever" programming.
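As a minimal illustration of the overhead problem described above, here is a sketch (with made-up sizing) of a chunked parallel sum: it follows the three principles, yet for small inputs the pooling, context switching and final aggregation easily cost more than the computation itself.

```java
import java.util.Arrays;
import java.util.concurrent.*;
import java.util.stream.LongStream;

public class ChunkedSum {
    // Sum an array by splitting it into chunks processed by a thread pool.
    // Correct and thread-safe, but for small arrays the pool management and
    // result aggregation outweigh any gain from using extra cores.
    static long parallelSum(long[] data, int chunks) {
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        try {
            int size = (data.length + chunks - 1) / chunks;
            CompletionService<Long> results = new ExecutorCompletionService<>(pool);
            int submitted = 0;
            for (int start = 0; start < data.length; start += size) {
                final int from = start, to = Math.min(start + size, data.length);
                results.submit(() -> Arrays.stream(data, from, to).sum());
                submitted++;
            }
            long total = 0; // building the final state requires synchronization
            for (int i = 0; i < submitted; i++) {
                total += results.take().get();
            }
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        long[] data = LongStream.rangeClosed(1, 1000).toArray();
        System.out.println(parallelSum(data, 4)); // 500500
    }
}
```

Timing this against a plain loop over the same 1000 elements makes the overhead obvious; only much larger inputs tip the balance toward the pool.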

When Larry's vision of run-time automated parallelization optimization becomes reality, such code will certainly fly and, if not, will be easily refactored to do so. And if you think this idea of adaptive optimization is far-fetched, read about out-of-order processors and Java HotSpot optimization: today, we take all these for granted, but a few decades ago this was sci-fi.

Thursday, May 03, 2007

beautifulMinds--;

I certainly think that professionalism is very important....To be a proper professional you need to think about the context and motivation and justifications of what you're doing...You don't need a fundamental philosophical discussion every time you put finger to keyboard, but as computing is spreading so far into people's lives you need to think about these things....I've always felt that once you see how important computing is for life you can't just leave it as a blank box and assume that somebody reasonably competent and relatively benign will do something right with it.
Karen Spärck Jones
(1935-2007)
Emeritus Professor of Computing and Information
at the University of Cambridge


Sunday, April 29, 2007

Prefactoring A Bell

I am currently reading Ken Pugh's Prefactoring, a seminal book on writing software "right" from the beginning without erring on the side of BDUF. While reading it, I have found that some concepts Ken introduces (or re-introduces, as many of them were already known) map directly to situations I am currently facing. I will share these here, and maybe more in upcoming posts, if other situations ring my bell...


Tight Coupling and the Singleton Identity

Of course, avoiding tight coupling is a goal every conscientious developer keeps in mind and tries to reach as much as possible. The difficulty is to spot tight coupling, i.e. coupling to a particular implementation, as it sometimes takes place unnoticed.

For example, I recently came across the case of a developer who needed to test whether an object was the singleton and, for this, opted to use reference equality because he knew the object was a singleton.
if (theObject == Singleton.theInstance)
This created tight coupling because, should the object cease to be a singleton, the equality test would break. The following should have been used instead:
if (theObject instanceof Singleton)
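To make the coupling concrete, here is a small self-contained sketch (the minimal Singleton class is invented for illustration): as soon as a second instance exists, the identity test silently fails while the type test keeps working.

```java
// Hypothetical minimal singleton, for illustration only.
class Singleton {
    static final Singleton theInstance = new Singleton();
}

public class IdentityCheck {
    public static void main(String[] args) {
        // As long as theInstance is the only instance, both tests agree.
        Object theObject = Singleton.theInstance;
        System.out.println(theObject == Singleton.theInstance); // true
        System.out.println(theObject instanceof Singleton);     // true

        // Should the class cease to be a singleton, only the type test holds.
        theObject = new Singleton();
        System.out.println(theObject == Singleton.theInstance); // false
        System.out.println(theObject instanceof Singleton);     // true
    }
}
```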

APIs of Least Surprise

Designing APIs is a tough subject: the intense discussion between Josh Bloch and Michael Feathers at the latest SD West was lively proof of it. Sticking to the principle of least surprise is surely an excellent guideline for interface designers.

I recently came to use the javax.management.MBeanServerFactory class and bumped into an inconsistent behavior between two of its helper methods:
createMBeanServer(String domain)
findMBeanServer(String agentId)
As you can see, when you create an MBeanServer you provide the API with a domain name, while when you use the same API to look for MBeanServers you have to provide an agent ID. Since both are Strings, I assumed they represented the same concept, but I was wrong. And surprised!
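A short sketch of the surprise (the domain name "MyDomain" is just an example): the String passed to findMBeanServer is matched against the auto-generated agent ID, not the domain, so looking up by the domain you created with comes back empty.

```java
import java.util.List;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;

public class MBeanServerSurprise {
    public static void main(String[] args) {
        // Create a server whose default domain is "MyDomain"...
        MBeanServer server = MBeanServerFactory.createMBeanServer("MyDomain");
        System.out.println(server.getDefaultDomain()); // MyDomain

        // ...but findMBeanServer() expects an agent ID, not a domain,
        // so this lookup finds nothing:
        List<MBeanServer> byDomain = MBeanServerFactory.findMBeanServer("MyDomain");
        System.out.println(byDomain.size()); // 0

        // Passing null returns all MBeanServers registered in this JVM.
        List<MBeanServer> all = MBeanServerFactory.findMBeanServer(null);
        System.out.println(all.isEmpty()); // false
    }
}
```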

Saturday, April 21, 2007

My Top Three Mac OS X Annoyances

Now that I have switched to Mac OS X as my main OS, all my troubles seem so far away and it's a wonderful life.

Just kidding! Though OS X is a great OS, it carries a fair share of annoyances, pretty much like every system does. Here is the list of the top three glitches that drive me nuts:

  • Bad keyboard support: I find myself forced to use the mouse too often. Not that I dislike this kind of small mammal, but having to leave the keyboard to twiddle the mouse really slows me down, usually at the worst moment (when typing code, for example). Very often a dialog will pop up and I will have no way to get rid of it but to use the mouse. Or, when paging up and down in large texts, the fact that the caret does not actually move will also force me to use the mouse to reposition the cursor. Windows XP does a much better job with keyboard support, as you can do almost everything with your hands on the keys.

  • Lame file explorer: I am sorry, but Finder is a pain in the neck. Navigating a folder hierarchy, creating folders where you want them, moving files around, renaming them... all these operations carry a certain degree of clunkiness that quickly makes me fume and rant. Again, Windows XP does a much better job here (except for network folders, which consistently freeze the file explorer, if not worse).

  • Sweet and sour JVM: though Apple boasts about its superb JVM integration, not being able to use a standard one from Sun prevents you from being up to date. So the JVM is great but drags behind the official releases from Sun. As of this writing, version 6 is still a developer preview while the mainstream VM is already at update 1. I think Apple should keep integrating the JVM in OS X as it does, but also make it simple for developers to deploy Sun ones in "private" mode.
Now that I have written this down, I start to realize that my good old Kubuntu Dapper Drake machine, with its archaic-looking UI, is not doing so badly after all!

Sunday, April 15, 2007

A Bridge, a Donkey and a lot of Fire

Did it need to be so high?

JMS is a simple yet powerful API that allows developers to build asynchronous and loosely coupled systems pretty easily. In fact, it is so easy that its usage usually expands very rapidly in the IT landscape of a company until it hits a wall as high, austere and disabling as Berlin's was, namely: the firewall.

JMS listeners rely on specific ports, usually dynamically assigned, which usually prevents their usage through a firewall, as administrators are reluctant to open ranges of ports. Fortunately, there is a highway that goes through this wall: it is called HTTP. It has a particular traffic regulation, as it is a one-way road that goes from the inside (the intranet zone) to the outside (the external DMZ that we will call the Internet zone).


Mule to the rescue!

This post demonstrates how to leverage Mule, the open source ESB, to bridge JMS queues that reside on both sides of the firewall through this highway. The following deployment diagram details what is involved in this scenario: as you can see, Mule is not deployed as a standalone application but is embedded in a J2EE web application and deployed on a server. The reasons for this approach are multiple:
  • System administrators can be reluctant to deploy new tools: deploying Mule as a web application on your company's standard J2EE server alleviates this resistance.
  • The inbound queues used by the bridge can be hosted by the server itself, leading to a neat and consistent self-contained component without any interaction with an external system.
  • Using Mule's servlet connector lets you leverage the well-known web stack provided by your favorite J2EE server.
When an application wants to send a message to another zone, it does so by sending the message to a dedicated queue in its own zone, which acts as a "letter box". The routing itself is based on a specific JMS message property (named "internet_destination" or "intranet_destination") that contains the targeted queue name alias. The bridge uses aliases instead of real server and queue names to reduce coupling and to limit routing to pre-defined destinations.

The following diagram presents the different components involved in the bridge. Routing from the intranet to the Internet is shown in green; the other direction is shown in red. The arrows are oriented in the direction of the message flows, not in the direction of the call from a particular caller. The gray boxes represent the application servers involved in the bridge and the Mule and JMS components they host.

[ Configuration files for JBoss 4.x: Intranet - Internet ]


From Intranet to Internet

A Mule component subscribes to the letter box queue in the intranet zone and listens to messages published there. When it gets a new message, it sends it by HTTP POST to the Mule servlet in the Internet zone. This servlet is the endpoint of a Mule component that performs the routing based on the aforementioned JMS property and publishes the message to the targeted queue (or stores it in a DLQ - aka Dead Letter Channel - if the target is unknown).
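The alias-based routing performed by that component can be sketched, independently of Mule and of any JMS provider, as a simple lookup from the value of the routing property to a real destination (the queue names below are hypothetical):

```java
import java.util.Map;

public class AliasRouter {
    // Pre-defined routes: alias carried in the JMS property -> real queue name.
    static final Map<String, String> ROUTES = Map.of(
            "orders",   "queue/internet/orders",
            "invoices", "queue/internet/invoices");

    // Resolve the "internet_destination" alias to a real queue, falling
    // back to the dead letter queue when the target is unknown.
    static String resolve(String alias) {
        return ROUTES.getOrDefault(alias, "queue/internet/DLQ");
    }

    public static void main(String[] args) {
        System.out.println(resolve("orders"));  // queue/internet/orders
        System.out.println(resolve("hacked"));  // queue/internet/DLQ
    }
}
```

Keeping the routing table pre-defined is what limits the bridge to known destinations: an attacker (or a typo) cannot make it deliver to an arbitrary queue.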


From Internet to Intranet

The other direction implies bringing messages back into the intranet zone, because no sending can be initiated from the Internet zone. This is achieved in this bridge by a Mule component in the intranet zone that regularly polls another Mule component in the Internet zone. The latter uses the power of scripting in Mule to define a component that consumes messages from the Internet letter box only when requested by a call from the intranet zone.


Your Turn Now

As you can see, this example covers neither temporary destinations (used by requesters, for example) nor the reply-to feature of JMS. Note that, with a little extra work, it would be fairly easy to support reply-to targeting non-temporary destinations. This would be done by rewriting the destination JMS property in the messages entering the bridge so that the reply channel goes through a pre-configured route.

Similarly, this scenario lacks any kind of retry mechanism, needed if a failure occurs in an HTTP transfer, as well as any message staging where the payload could be scanned for viruses before being routed to the intranet destination.

In fact, this example gives you a fairly complete view of what can be achieved with Mule, a little bit of configuration and not a single line of compiled code.

The fact that no coding is involved is pretty important for production matters: any skilled system administrator can now activate new routes or deactivate existing ones by simply tweaking the Mule configuration. This can be done without involving a software developer. In that sense, this JMS Bridge becomes a first class citizen of the IT infrastructure.

Do not wait any longer and fetch the beast of burden that will massage your messages! But leave the cow alone...

Friday, April 13, 2007

Seriously Infected

After writing some tests for a Tomcat valve today, I came to wonder what it is that I like most about unit testing.

At first, I thought it is when those red lights turn green - oh, the jolly green! It is a truly enjoyable moment when all tests pass.

Then I thought it is maybe when green lights turn red - oh, the scary red! When I touch some piece of code and immediately see the impact it has, I really feel like unit testing saved my day (and possibly some nights).

But finally, I came to realize that, for me, the best moment in unit testing is when I ask myself this simple question: "How am I going to test this?". No matter whether I am testing an existing piece of code or writing the tests first, asking this question is really a delightful moment. I reckon this is because I start considering what I will test as a living entity and no longer as a mere concept: by figuring out its execution environment, its inputs and its outputs, all this code or code-to-be comes to life in a very vivid way.

Am I describing the apex of some kind of geeky childbirth? I do not know, but what I know for sure is that the more complex it is to imagine how to test something at first glance, the more rewarding it will be to figure out how to do it!

Tuesday, April 10, 2007

Where Have All The Bad Spams Gone?

Is this just me or has GMail spam filtering improved drastically these past days?

My junk mail count has come down from ~90 per day to ~15 per day since last Sunday.

This is great and scary at the same time: I have started to wonder if I am not losing legitimate emails in the process!

Friday, April 06, 2007

Put Your Business Rules to the Test

In the November 2006 issue of DDJ, Scott Ambler explained in an article titled "Ensuring Database Quality" the why, what and how of database testing. With the advent of commercial and open source business rules engines and their increased usage in applications of all size and complexity, ensuring rules quality has become a critical issue for many businesses.


No blind spot tolerated

There are several reasons why applications externalize their business rules instead of hard coding them, including the following:

  • to accommodate the need for frequently changing business rules in the cleanest possible way,
  • to enable non-technical personnel to author business rules,
  • to allow the exchange of rules between heterogeneous systems,
  • to delegate the handling of complex rules to a component trusted for its performance and reliability.

A successfully implemented rules engine will quickly end up handling the most critical aspects of a modern business infrastructure, as its strategy will become reified in rules and enforced by the engine. Therefore, the need for properly managing these rules will quickly become compelling.

Whether they use commercial tools (that come complete with advanced tooling) or open source implementations (traditionally weak in terms of tooling), implementers will have to put in place a rules life cycle similar to the one shown hereafter.


One of the critical aspects of rule management we are going to discuss here is testing. Given what is at stake with business rules, keeping them in a blind spot and simply hoping for the best will, sooner or later, have direct and dramatic consequences that are likely to hurt the bottom line for the aforementioned reasons.

We will discuss the different steps necessary to properly set up a test environment and then how to iterate on this basis.

The specific case of the home grown engine

If the rule engine you use is home grown, it is essential to test it extensively to certify that, release after release, it keeps its deduction power! To ensure this, a combination of component-level unit testing and white box testing is needed. Both will rely on a specific set of data and rules, usually not actual ones but ones crafted to cover core specifications, well-known technical challenges (like critical values handling) and regression testing (tests created after bug reports).

These tests should also be complemented with performance and load tests, in order to guarantee that your engine stays within its defined production constraints.

Setup Step 1: Set up the test sandbox

Rules engines are generally external components designed to be fed with data they can process and make deductions on, which usually ends up updating the existing data or creating new data. This relative isolation, which somehow plays against the engine at runtime because it is costly to alleviate, can be leveraged for testing, as it simplifies the setup of a sandbox, i.e. a dedicated and segregated environment where the engine is free to modify data without touching any actual critical information.

For engines that are directly connected to enterprise resources like databases, the setup recommendations in Scott's article will apply.


Setup Step 2: Build input reference

Unless your business rules are simple or the amount of information you are dealing with is limited, it is almost impossible to manually create sample data that represents the diversity of cases your engine will have to process and apply rules on. If you are in this situation, you will have to work with business analysts to select relevant batches of data that represent a reasonable input in terms of data complexity and diversity. This will become your input reference.


Setup Step 3: Build output reference

After building this input reference, you will have to make your engine apply rules to it: the result will become your output reference, but only after it has been validated by an expert.

With this reference data in hand, you will be in a position to certify your rule base, which is usually done by versioning it. It is of paramount importance that not only the rule file(s) be versioned but also all the reference data and related artifacts (like configuration files). This set of files will form a consistent, tested and validated body of information that will be a candidate for production and, if need be, for restoration if a downgrade is needed. It will also be the unit of data that is copied when a new version of the rule base is needed.


Following Steps: Iterate

After this initial phase, each subsequent modification of the rule base will have to go through a similar validation process, with the difference that the input reference will probably need editing to include new business cases or remove irrelevant ones.

When testing the modified rule bases, differences from the output reference will be induced by the simple fact that the business rules have been modified (for example, if a discount rate has been lowered, the newly computed values will change). Again, a business analyst will have to analyze the difference between the previously validated output reference and the new candidate. A diff-like visual tool with a simple accept/reject feature could be of great help here.
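The core of such a diff tool can be sketched in a few lines (the keys and values below are made up): compare the candidate output against the validated reference and report only the entries the analyst needs to accept or reject.

```java
import java.util.Map;
import java.util.Objects;
import java.util.TreeMap;

public class RuleOutputDiff {
    // Compare a candidate rule-run output against the validated reference.
    // Returns, for each changed key, the [reference, candidate] value pair;
    // unchanged entries are filtered out so the analyst only reviews deltas.
    static Map<String, String[]> diff(Map<String, String> reference,
                                      Map<String, String> candidate) {
        Map<String, String[]> changes = new TreeMap<>();
        for (String key : reference.keySet()) {
            String before = reference.get(key);
            String after = candidate.get(key);
            if (!Objects.equals(before, after)) {
                changes.put(key, new String[] { before, after });
            }
        }
        return changes;
    }

    public static void main(String[] args) {
        Map<String, String> ref  = Map.of("discount", "10%", "shipping", "free");
        Map<String, String> cand = Map.of("discount", "8%",  "shipping", "free");
        diff(ref, cand).forEach((key, values) ->
                System.out.println(key + ": " + values[0] + " -> " + values[1]));
        // prints: discount: 10% -> 8%
    }
}
```

A real tool would also surface keys that appear only in the candidate and attach the accept/reject decision to each delta, but the principle stays the same.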


Merry nights ahead

Business rule engines are now first class citizens in the agilist's toolbox. As such, they deserve a well-defined and thorough test strategy. Depending on the engine used, setting up such a strategy will vary between defining your own and embracing the one provided by a vendor. At the end of the day, what really matters is what testing buys you: the ability to make the most of business rules engines, which are intrinsically able to embrace change, without losing sleep.

Friday, March 23, 2007

SD West Classes and Keynotes: Day 3

Software Visualization and Model Generation (Gregor Hohpe & Erik Doernenburg)

I am convinced that part of the job of an architect is to select, improve and, if need be, create tools to support the activity of developers. In this class, Gregor presented concrete cases and best practices for building simple and efficient tools that turn complex software into pictures we can understand.

This approach also concurs with Granville Miller's notion of "trailing shadows" that must come for free: a visualization tool that automatically produces models is useful for keeping track of the actual architecture of a system. Since systems evolve in uncontrolled ways in the agile world, models should evolve dynamically or be discarded.

Because we humans are good at spotting patterns, it is essential to generate simple pictures that focus only on the aspects we want to consider. Complete models of large systems are therefore, if not impossible to build, at least impossible to read and thus useless. Building focused ad-hoc models is the key. Gregor introduced a five-step approach for this:

  • Select Meta-Model: No academic work here! Simply define the elements, their relationships and the rules that apply to them. Popular meta-models include: metrics, directed graphs, trees and process models.

  • Inspection / Instrumentation: There are two approaches here. Static inspection of the system design (source code, configuration...) and dynamic instrumentation of the running system (profiling, message or network sniffing, log file parsing...).

  • Mapping to Model: Instead of mapping to the graphic artifacts, map to the model, an intermediate abstraction that isolates the concerns of internal and visual representation of the system (they both evolve differently and should not be connected).

  • Visualization / Output: Use an automated graph layout tool (like GraphViz or JUNG).

  • Validation / Analysis: It is possible to apply rules against the model, like finding cycles in a dependency graph or identifying islands and root nodes.
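For instance, the cycle detection mentioned in the last step can be implemented with a plain depth-first search over the directed dependency graph (the module names below are invented):

```java
import java.util.*;

public class CycleFinder {
    // Detect a cycle in a directed dependency graph with a depth-first
    // search, using a "visiting" set to spot back edges.
    static boolean hasCycle(Map<String, List<String>> graph) {
        Set<String> done = new HashSet<>(), visiting = new HashSet<>();
        for (String node : graph.keySet()) {
            if (dfs(graph, node, visiting, done)) return true;
        }
        return false;
    }

    static boolean dfs(Map<String, List<String>> graph, String node,
                       Set<String> visiting, Set<String> done) {
        if (visiting.contains(node)) return true; // back edge: cycle found
        if (done.contains(node)) return false;    // already fully explored
        visiting.add(node);
        for (String dep : graph.getOrDefault(node, List.of())) {
            if (dfs(graph, dep, visiting, done)) return true;
        }
        visiting.remove(node);
        done.add(node);
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
                "web",     List.of("service"),
                "service", List.of("dao"),
                "dao",     List.of("service")); // service <-> dao cycle
        System.out.println(hasCycle(deps)); // true
    }
}
```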
This session was an interesting confirmation of the things I am currently working on (a JCR repository grapher for a Web CMS). One very big disappointment, though: when Gregor asked who in the classroom knew the Spring Framework, only a handful of hands came up. How despicable ;-)


API Design as if Unit Testing Mattered (Michael Feathers)

Another packed presentation by Object Mentor: this company sure is a guru seedbed. Even Josh Bloch and Gregor Hohpe attended... and in an active manner!

After fiddling a little with JavaMail, Michael quickly made the point that it is pretty easy to make an API that is not only convoluted but also very hard (if not impossible) to use in code you want to unit test.

An API is a complex software entity because, once you have published it, you have made an implicit commitment and cannot back away from it. A good design makes the API useful for its clients while remaining extensible for future needs. It also must not impair the testability of the system that uses it.

Michael then listed some anti-patterns, like having only private implementers, exposing partially implemented superclasses, forcing the user to instantiate chains of objects and using static factory methods. To be frank, I kind of disagree with some of them: for example, if I provide a public interface, why should I also make public the implementing class I use? While agreeing with these points, Josh often gave ways to follow these anti-patterns in a virtuous way, for example by using an SPI mechanism.
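To make my point concrete, here is a hypothetical sketch (all names invented, not Michael's or Josh's code) of a static factory used in the virtuous way: the interface is the only published type, and the concrete class stays hidden behind it:

```java
// A hypothetical API surface: clients only ever see the Mailer
// interface and the static factory; the concrete class is private
// and can be swapped (or looked up via an SPI) without breaking anyone.
public class MailerApi {
    public interface Mailer {
        void send(String to, String body);
    }

    // Hidden implementation: never part of the published contract.
    private static class SmtpMailer implements Mailer {
        public void send(String to, String body) {
            System.out.println("sending to " + to);
        }
    }

    public static Mailer defaultMailer() {
        return new SmtpMailer();
    }

    public static void main(String[] args) {
        Mailer m = defaultMailer();
        m.send("reader@example.com", "hello");
    }
}
```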

We also got a list of tips and tricks including:
  • avoiding static methods (unless an operation is completely replaceable or will never need to be replaced),

  • keeping the envelope of an API to its envelope of use (to which Josh responded that it is possible to expose advanced methods with plenty of warnings: see java.lang.Enum.name),

  • supplying interfaces for all public objects,

  • ensuring the users have ways to mock the API (else they will have to wrap it),

  • avoiding making public classes final (unless there is a compelling reason for doing so),

  • practicing writing unit tests for code that uses the API (not only for the API itself),

  • supplying the unit tests of the API to the users.
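To illustrate the mocking point from the list above, here is a hypothetical sketch (all names invented): because the API exposes an interface, code built on top of it can be unit tested with a hand-rolled mock, with no wrapping layer needed:

```java
import java.util.ArrayList;
import java.util.List;

// OrderService depends on the (hypothetical) Notifier interface of an
// API. Because it is an interface, a test can substitute a recording
// mock for the real implementation.
public class OrderService {
    public interface Notifier {            // the API's published interface
        void notify(String message);
    }

    private final Notifier notifier;

    public OrderService(Notifier notifier) {
        this.notifier = notifier;
    }

    public void placeOrder(String item) {
        // ...business logic would live here...
        notifier.notify("ordered: " + item);
    }

    public static void main(String[] args) {
        // The "mock": records calls instead of hitting a real backend.
        List<String> calls = new ArrayList<>();
        OrderService service = new OrderService(calls::add);
        service.placeOrder("book");
        System.out.println(calls); // [ordered: book]
    }
}
```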
Another aspect to take into account is the politics of API design: who is responsible when an API changes? What is acceptable and what is not? As Michael stated himself, the purpose of this talk was not to give strict recipes but to increase awareness, which I think it did pretty well, judging by the heated discussions during and after the class.


The Social Enterprise: Innovation + Technology = Good Samaritan? (Carlos Baradello & Patrick Guerra)

Starting with the Good Samaritan story, this keynote led us through the reality and the necessity of social entrepreneurship. The good thing about it? It has nothing to do with pure philanthropy but is really about building sustainable business models that end up benefiting all the parties involved.

In geek talk, you will be glad to learn there is a 5 Giga Bucks market of 4 Mega People out there waiting to be tapped, but in the right way. Like the Good Samaritan, seeing when others merely watch, and being able to connect dots where others stay in the blur, is of paramount importance. This involves all of the usual entrepreneurial qualities (ambition, mission driven, strategic thinking, resourcefulness, results orientation) plus an exacerbated desire to step out of the comfort zone.

How to leverage the lessons learned in Silicon Valley for that? How to combine opportunity and innovation in order to achieve sustainability, positive social impact and social benefit? How to strike a balance between a fully for profit enterprise and a fully non-profit organization?

To start finding answers, it is important to consider that we live in a highly interconnected world, where new types of diasporas have emerged because air travel is now affordable and pervasive telecommunication makes it possible to stay connected with one's roots. It is also helpful to consider the opportunities from a high-level point of view. Here is a list of the current ones: digital divide, health care, "collaboratories", affinity group aggregation, mobile knowledge, commerce and e-commerce, and off-the-grid infrastructure. Trying to build solutions in these fields by leveraging massively available technologies (like mobile phones) is another critical aspect.

To my sense, the main question raised by this keynote is the following: with all the knowledge we have, what are we going to do to improve the condition of mankind? To anyone considering new career options or reaching a certain age in the business, I believe this question should become a burning one.


Extreme Hiring: A Radical New Approach to Finding the Best Programmers (Brian Robertson)

In this very interesting session, Brian shared with us the lessons his company has learned from hiring programmers and the process they now have in place. As a direct teaser, it is important to know that their extreme hiring process allowed them to divide the number of applications by ten, which was a great thing because the remaining applicants are of much greater quality.

How to reach that goal? They first had to realize that the current pervasive vision of the world is still "business as machine", where machines are predictable, organized and made of parts. Where do people fit in this world view? Well, they are "human resources" (read: cogs). They came to favor an alternative world view, where humans are living systems, which self-organize, sustain themselves and follow the rules of dynamic control, not predictive control. And so are organizations.

Hence hiring should not be a process of finding the cogs that best match some specifications, by applying pattern matching on their specification sheets, er, their resumes. Dee Hock, of Visa International, said that we should "hire and promote integrity, motivation, capacity, understanding, knowledge and experience", in that very order of importance.

Brian broke this list into two categories: the first three items into "cultural fit and talent" and the last three into "skills". Though skills can be acquired and built, talent and cultural fit cannot. Therefore it is of paramount importance for a company to first define the particular talents it needs and what its core values are (these will determine the cultural fit of a candidate, or the lack thereof). This introspection must be honest and based on actual facts, not wishes.

How to spot talents? The usual evidence is instantaneous responses, recurring patterns of behavior and passion. Brian stated the interesting fact that a candidate's strength is a skill served by a talent and that using this strength is energizing and satisfying. I find this definition of great interest.

So what recruiting process do they use? I do not want to disclose too much of it, mainly because I think each company should tailor it in an iterative process (sure enough, there will be casting errors). This said, here are some of their key ideas:

  • Post job ads designed to attract great people, with clear cultural requirements and core values that allow candidates to self-select out,

  • Ask for custom application letters, not formal cover ones,

  • Make the applicant write several short essays replying to simple questions,

  • Offer resume submission as truly optional,

  • Ask for a remote coding test and offer the candidate the possibility to upload bits of code he has written in other contexts,

  • Do technical and personal phone interviews,

  • Do on-site interviews with team project simulation, debriefing, group and personal interviews and after hours socialization.
As you can guess, this takes an awful lot of time. But it is the price to pay to spot the talents and assess the general maturity and overall suitability of a candidate. A successful hiring process must show the company values and culture in action: it is both an attraction and a screening tool. It is a tough objective but, as you can guess, it pays back each and every penny invested in it.


War Stories: Fighter Jets and Agile Development at Lockheed Martin (Mike Zwicker)

Man, SD West has been breathtaking until the last minute. This session, brilliantly presented by Mike Zwicker, was amazing! When you hear Lockheed Martin, you immediately translate it into "waterfall". Mike shared with us the story of introducing agile in this venerable company, detailing the compelling reasons for agile, its successes and its challenges.

Was there anything new here? Nope. Was it worth attending, then? Yes! Consider this: when you tell your other half you love her, is there anything new there? Nope. But it is just so good and so right to hear it again, and again, and again. Why? Because circumstances change every day, and this love you keep re-asserting is like these stories of successful agility in different enterprise contexts. It is sheer bliss to keep telling, over and over again, these stories of drastic improvements in productivity, reduced software defects, shortened time to market and lighter costs!

I can hardly sum up my notes so I will just share three facts out of Mike's prolific talk:

  • They have introduced agility in 3 phases: pilot (6-9 months), department launch (24-27 months) and enterprise rollout (24 months), each building on the previous phase. In fact, they have not started the last phase yet but are getting there.

  • They have opted for Scrum for the management framework it provides, but have also shopped for practices at XP, Crystal and other shops of delicacies.

  • They decided to use a tool (VersionOne) only because they had distributed teams.
One last piece of wisdom from Mike: "Under pressure? Do not abandon the practice!". Let us print this on motivational posters.


It is now time to leave sunny Silicon Valley. For me, SD West 2007 was a great vintage, with only one out of 15 presentations that turned out disappointing. The rest was extremely instructive, thought provoking and entertaining.

Final note: slap on my hand! By turning AirPort off when not needed and tuning the performance settings, I could squeeze a lofty 4 hours of battery life out of my MacBook Pro. This machine really rocks.

Suggestion for Google's Next Acquisition

LinkedIn

This would be a great step for Google into high end social networking, as LinkedIn is a very credible actor in that field.

This would also be a great opportunity to deliver its content as a public service under an open format. This would benefit any application requiring the user to provide some sort of biography (ever tried to use a job board resume builder?).

Thursday, March 22, 2007

SD West Classes and Keynotes: Day 2

Agile Architectures (Granville Miller)

8.30 and the room is packed! Architecture sure is a subject that attracts people and makes them react.

After reminding us that canonical agile does not mandate any architect role, Granville explained how the community has evolved towards recognizing the need for architecture. This said, the agile myth that "refactoring is enough to make architecture useless" is still alive. Though experienced developers will have good design reflexes, there are at least two compelling reasons for them to err on the side of architecture:
  • when they will need to step back in order to keep on progressing (ditto Ward Cunningham),

  • to ensure smooth integration of the different parts of a large project.

Hence, when a developer wears the architect's hat, he will typically:
  1. partition the system on a whiteboard,

  2. discuss this partition to reach an agreement among the team,

  3. write unit tests, façades and scaffolding to lay out the baseline of the architecture,

  4. then take his hat off and code.
How long will the architecture defined at step 1 be valid? Well, about an iteration, which is, by the way, the timeline of these 4 steps. The main lesson is that agile architecture is not carved in stone but evolves with the rest of the system in order to satisfy the most important of the customer's rights: to change his mind without paying an exorbitant cost.
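As a hypothetical sketch of step 3 (all names invented, not Granville's material), the baseline could be a façade over an agreed partition, backed by a scaffold implementation that real code replaces during the iteration:

```java
// A baseline laid down while wearing the architect's hat: a façade for
// the agreed "billing" partition, with a scaffold behind the boundary.
public class BillingFacade {
    // The partition boundary the team agreed on at the whiteboard.
    public interface RateCalculator {
        double rateFor(String customer);
    }

    // Scaffold: just enough for other partitions to integrate and for
    // tests to run; replaced by real code as the iteration progresses.
    static class FlatRateScaffold implements RateCalculator {
        public double rateFor(String customer) { return 10.0; }
    }

    private final RateCalculator calculator = new FlatRateScaffold();

    public double invoiceTotal(String customer, int units) {
        return units * calculator.rateFor(customer);
    }

    public static void main(String[] args) {
        // A first "architectural" unit check: the boundary works end to end.
        System.out.println(new BillingFacade().invoiceTotal("acme", 3)); // 30.0
    }
}
```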

To visualize this reality, Granville presented the notion of shadow architecture on a chart that shows the transfer between leading shadows and trailing shadows. At the start of the iteration, leading shadows represent the majority of the architecture shadow: they are the products of white board design sessions. At the end of the iteration, trailing shadows totally replace the leading shadows: they are the actual reification of the architecture, possibly visualized in code analysis tools. If some leading shadows remain, it means more work to do on the next iteration. This is a great tribute to Punxsutawney Phil and the doom a visible shadow represents for this innocent animal and the rest of us, Spring addicts (!).


Creating a Domain-Specific Language (Juha-Pekka Tolvanen)

During this class, Juha-Pekka explored the creation of a domain-specific language (DSL) in the particular context of interactive TV. The main goal of the process was to provide content producers with a limited language (15 concepts maximum, compared to UML's hundreds of concepts) that they could use to actually build interactive TV applications.

How to proceed? Here are four main steps:
  1. Identify abstractions (ask the producer to "mind map" the different use cases in order to build a metamodel),

  2. Define modeling concepts and rules (notation),

  3. Specify notation,

  4. Implement generators & test with reference cases.
The overall presentation then focused on creating the DSL using the speaker's company's tool, MetaEdit+, which unfortunately does not currently run on Intel Macs, so I could not do the exercise on my machine.

To my sense, like all vendor-sponsored talks, it was too focused on a particular tool and not enough on the available options out there. I was, for example, expecting to see lighter, less box-and-arrows-and-XML-outcome things, similar to what Martin Fowler is currently exploring in his bliki or to the Drools DSL.
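For contrast, here is what a toy internal DSL (in the Fowler sense, nothing to do with MetaEdit+; all names invented) could look like in plain Java for the interactive TV domain:

```java
// A toy internal DSL (a fluent builder) describing an interactive TV
// menu. A real generator would emit application code from this; here
// we just accumulate a textual spec to keep the sketch self-contained.
public class TvMenu {
    private final StringBuilder spec = new StringBuilder();

    public static TvMenu menu(String title) {
        TvMenu m = new TvMenu();
        m.spec.append("menu:").append(title);
        return m;
    }

    public TvMenu item(String label, String action) {
        spec.append(" item:").append(label).append("->").append(action);
        return this;  // returning this is what makes the calls chain
    }

    public String describe() {
        return spec.toString();
    }

    public static void main(String[] args) {
        String flow = menu("Sports")
                .item("News", "showNews")
                .item("Vote", "openPoll")
                .describe();
        System.out.println(flow);
        // menu:Sports item:News->showNews item:Vote->openPoll
    }
}
```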


Agile Principles of Object-Oriented Class Design (Uncle Bob)

Uncle Bob right after lunch is probably the best cure for food drowsiness. His capacity to captivate the audience is amazing: I suspect he radiates some mystical fluid that makes our bodies redirect energy from food processing to brain power. Or he is simply a brilliant orator.

After a quick and obviously unconventional introduction to OO, Uncle Bob immediately exposed the root cause of why our application code rots over time until it becomes an unmaintainable, tangled Gordian knot. And it is... drum roll... poor dependency management! Of course not (only) in the Maven sense, but in the broader sense of dependencies between objects, packages and components.

Poor dependency management leads to rigid, fragile and non-reusable code. If any change implies going through the whole structure, if touching one place breaks the system in many places, if no bit of code is easily re-purposable, then you surely have tight coupling, which is a manifestation of poor dependency management. How to write solid code? Tattoo "solid" on your forehead? Nope. But use it as a mnemonic for the five principles of software development craftsmanship wisdom:

  • Single responsibility principle: a class should have one and only one reason to change (it must not support features that have completely different evolution life-cycles).

  • Open closed principle: adding a new feature must be done by adding new code, not by modifying existing one.

  • Liskov substitution principle: derived classes must be valid substitutes for their base classes.

  • Interface segregation principle: avoid fat interfaces that offer methods relevant to many use cases but mostly irrelevant to any single user.

  • Dependency inversion principle: details should depend on abstractions, never the opposite.
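A minimal sketch of the last principle (my own invented example, not Uncle Bob's code): both the high-level policy and the low-level detail depend on the abstraction, never the policy on the detail:

```java
// Dependency inversion: ReportService (high-level policy) and
// ConsoleSender (low-level detail) both depend on the Sender
// abstraction; neither depends on the other.
public class Solid {
    public interface Sender {                      // the abstraction
        void send(String report);
    }

    public static class ReportService {            // high-level policy
        private final Sender sender;
        public ReportService(Sender sender) { this.sender = sender; }
        public void publish(String data) { sender.send("report: " + data); }
    }

    public static class ConsoleSender implements Sender {  // low-level detail
        public void send(String report) { System.out.println(report); }
    }

    public static void main(String[] args) {
        // Swapping the detail (console, SMTP, file...) never touches
        // ReportService: it only knows the Sender abstraction.
        new ReportService(new ConsoleSender()).publish("Q1 numbers");
    }
}
```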

In this quest for good dependency management, abstraction is the most important keyword. Everything should depend on abstractions, not on implementations, in order to reach the ultimate goal of independently deployable units. Because abstractions are critical to protect our applications from the changes that will inevitably occur, how is it possible to figure them out in order to be safe? Should it be an upfront task? Is a crystal ball needed?

This is where the agile word comes into play, not as hollow buzz but as a reality: build the abstractions as you need them, which can be as often as you get feedback from your users. Hence, expose your system to them early and frequently...


What's New in XML in Java 5 and Java 6 (Elliotte Rusty Harold)

XML is not only for girls and can actually prove useful, mind you! But parsing or processing XML has never come for free in Java and has always required careful development practices to limit its impact on CPU or RAM. In this class, Elliotte presented the latest additions of Java 5 and 6 and how they can make XML-related development easier and more efficient.

Here is what Elliotte had in his grab bag for us today:

Java 5


JAXP: The API is at version 1.3 in Java 5, with Crimson finally kicked out and replaced by Xerces, and at version 1.4 in Java 6.

DOM3: Among the many sweet additions of DOM Level 3, Elliotte insisted on the addition of get/setTextContent on the Node interface. I cannot agree more: time and again I have seen developers pull in an extra XML API just because writing or reading text was too painful with plain old DOM Level 2. Now the JDK offers everything you need for regular XML twiddling: that is one dependency to remove, which is always good.
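A quick sketch of how direct text access has become (the helper method is my own invention, but getTextContent is the real DOM Level 3 API shipped with Java 5):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

// Reading element text with plain DOM Level 3: no helper library
// needed any more. setTextContent works the same way for writing,
// replacing all the node's children with a single text node.
public class TextContentDemo {

    static String titleOf(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            Node title = doc.getElementsByTagName("title").item(0);
            return title.getTextContent();   // new in DOM Level 3
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(titleOf("<book><title>Refactoring</title></book>"));
        // prints: Refactoring
    }
}
```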

I also find the tree position methods interesting, as you sometimes want to refer to a node without having to enter the realm of XPath. Other additions of note include node similarity and equality methods, better namespace support and the possibility to bind any custom user data to any node.

DOM has also evolved as a framework, with the capacity to control how documents are represented in memory (DOMConfiguration), new ways of creating documents (DOMImplementation) that advantageously replace the factory-based JAXP builders, and load and save features (LSParser and LSSerializer).

In the matter of parsing files, the newly added LSParserFilter can be leveraged to accept only certain nodes and hence reduce the memory footprint of a document, a good strategy to consider for large XML instances you do not fully need in memory. LSParserFilter can also be used to alter a document at load time (for example to change a namespace into another).

XPath: is now available in v1.0, with namespace support (NamespaceContext) and custom XPathFunction support.
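A sketch of the javax.xml.xpath API added in Java 5 (the helper method is my own; the API calls are real):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

// Evaluating an XPath 1.0 expression directly against a DOM document
// with the javax.xml.xpath API: no third-party engine required.
public class XPathDemo {

    static String evaluate(String xml, String expr) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            XPath xpath = XPathFactory.newInstance().newXPath();
            return xpath.evaluate(expr, doc);  // the expression's string value
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<books><book price='30'/><book price='12'/></books>";
        System.out.println(evaluate(xml, "count(//book[@price > 20])")); // 1
    }
}
```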

Validation: has been added in a very versatile way, as you can select the schema language you intend to use (W3C XML Schema, Relax NG, DTD...). A validator also has the interesting ability to augment a document by adding optional elements discovered in its associated schema. It is now also possible to discover the type of a node based on the validator of the document.
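A sketch of the javax.xml.validation API (the helper and the toy schema are my own; the API calls are the real Java 5 additions):

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

// Pick the schema language via its URI, build a Schema once, then
// validate any number of sources against it.
public class ValidationDemo {

    static boolean isValid(String xsd, String xml) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema =
                factory.newSchema(new StreamSource(new StringReader(xsd)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;   // a SAXException signals an invalid document
        }
    }

    public static void main(String[] args) {
        String xsd = "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
                   + "<xs:element name='note' type='xs:string'/></xs:schema>";
        System.out.println(isValid(xsd, "<note>hi</note>"));   // true
        System.out.println(isValid(xsd, "<memo>hi</memo>"));   // false
    }
}
```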

Java 6

XML Digital Signature: which is apparently a pretty involved API that is not limited to XML signing...

StAX: which is a promising pull parsing technology that is fast, memory efficient, streamable and read-only. Sounds like SAX? Yes, but think of StAX as an inverted SAX where the application actually asks for events via method invocations instead of being called back when they occur. This allows the application to control the flow of events and hence orient parsing for greater efficiency.
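A sketch of StAX pull parsing (the helper method is my own toy example; javax.xml.stream is the real Java 6 API):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

// Pull parsing with StAX: the application asks for the next event in
// a loop, so it can skip, stop, or branch whenever it wants, instead
// of being driven by SAX callbacks.
public class StaxDemo {

    static List<String> elementNames(String xml) {
        try {
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            List<String> names = new ArrayList<>();
            while (reader.hasNext()) {                      // we pull...
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    names.add(reader.getLocalName());
                }
            }
            return names;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(elementNames("<a><b/><c/></a>")); // [a, b, c]
    }
}
```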

And what about the future? Elliotte mentioned that the XQuery API will probably make it into Java 7, while XML Encryption will probably wait a little longer before showing up in the JDK.


OK, I managed to conclude my blogging frenzy earlier today. Note that my main issue in this field trip blogging is the short life of my MacBook Pro battery...

Testing Toolbox Gets Richer

ZePAG has released the first version of Usurper, a neat tool for easily creating test instances of your value objects. It is a nice complement to the classical mocking and stubbing tools.

I warmly invite you to read his post presenting Usurper and then to rush to the tool's web site to start benefiting from its goodness right away.

And please, do not forget to ask him why he opted for the org.org package root!

Wednesday, March 21, 2007

Note to Blogger: Your Composer Sucks!

Sorry guys, but when I type a less-than or greater-than sign in the compose mode of the post editor, I really intend to display < and > and not some kind of HTML element.

If I want to enter HTML elements, I use the "Edit Html" tab.

So why on earth is your editor unable to properly escape these damn characters? It took me 15 minutes to clean up the mess your stupid composer made of my generic code samples in my previous post, and the manual cleaning of the messy HTML produced was so tedious that I decided to drop half of the code sample.

Please, please, please do something about it. This is not rocket science. This is basic character escaping.

Thank you.