Saturday, December 30, 2006

The Day My iPod Became Just Another Appliance

Today my iPod officially joined the ranks of my electronic appliances that need to be reset from time to time. The screen showed a bizarre circular shift of 15% to the left, displaying part of the left side of the screen on its right side. Everything was frozen with the backlight on.

For the record, until now only my oven and my dishwasher needed a little reboot from time to time. Now they are three in the club! Interestingly, they are all reset using an odd combination of front panel keys. Appliance makers do what they can with what they have, but only a few keys are needed for this maintenance operation (look at Windows: Ctrl + Alt + Del does the trick).

My wild guesses concerning the cause of failure for these diverse but similar software controlled appliances:
  • Oven: heat excess in the control panel,
  • Dishwasher: humidity too high in the control panel,
  • iPod: overall system too cold, as it stayed overnight in my entrance hall.
Oh the true happiness of modernity.

Friday, December 29, 2006

Busy Buzz For Lazy Bees

After Sony, who really screwed up their buzz marketing for the PS3, Microsoft does the same thing for Vista.

Mmmh, what could be the reason for these two companies (not the tiniest or dumbest on earth) to fail a marketing campaign so miserably? What can make them so desperate to have people talk about their glimmering new products?

The answer might lie in what the PS3 and Vista have in common: expensive, unattractive, deja vu.

Do you find I am being tough? Well, let's get tougher: guys, next time, ask a buzz specialist to advise you, or else drop the idea. A failed campaign is not something you want to experience. Again.

Thursday, December 28, 2006

NxBRE 3.1.0Beta1 Is Out

I have just posted a beta release of NxBRE 3.1.0, which is mainly intended for those interested in testing one of these two new features:
  • A regular expression matcher operator has been added to the Flow Engine (it is also available in the Inference Engine),

  • The implication scheduling in the agenda has been improved to limit the number of fired implications, hence yielding better performance with large rule bases.
There is also a preview implementation of the RuleML 0.91 Naf Datalog adapter. So far, this implementation does not support the cool features of the new version of RuleML (like multiple named rule bases in one rule file or retract operations), so it is not really worth using yet. Hey, it's a beta after all!

Friday, December 22, 2006

Unexpected Performances Can Occur (Sometimes)

I just can not believe the message Khalid Khalil has just posted on NxBRE's open discussion forum!

Using object pooling, Khalid has reached a throughput of 2000 requests per second with the Flow Engine, one of the two engine flavors available in NxBRE. This is far beyond the performance I was expecting from my tiny business rules engine: way too cool!

Tuesday, December 19, 2006

Behind Closed Doors

My friend Stuart Hogue, Strategy Director of frog design inc. in NYC, has published a pretty interesting article about hyper exclusive social networks.

I find it significant that his article comes at a time when Metcalfe's law is under criticism. In different replies to IEEE Spectrum, Robert Metcalfe stands by his law, which states that the value of a network grows with the square of the number of users. His opponents are mainly saying that: "The fundamental flaw underlying (...) Metcalfe's (...) law is in the assignment of equal value to all connections or all groups".
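For the record, the two positions boil down to a one-line formula each. Metcalfe's law values a network of n users at roughly

V(n) ∝ n²

while his critics in the IEEE Spectrum exchange argue for something closer to

V(n) ∝ n·log(n)

precisely because connections follow a decreasing-value distribution instead of all being worth the same.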

So the key, and maybe the mathematical justification, of hyper exclusive social networks might lie in this principle: not all connections have the same value, hence dedicated selective networks will maximize their value by admitting only people who will engage in high value relationships.

In practice, it is true that I have quickly dropped Orkut and Friendster to favor the more selective LinkedIn network. I am still waiting to see returns from the highly selective Blue Chip Expert network, to which I have recently been invited. If you visit their homepage, you will appreciate the main picture: a closed door that looks very serious!

Is the true value of social networks waiting behind closed doors?

Saturday, December 16, 2006

All I'm askin'

A recent and insightful Computer article from Stephen Jenkins (Concerning Interruptions) led me to rethink my authoritative position on working with headsets (read Kindly Kill That Headset).

I think the crux of the problem is not about knowing if programmers need isolation to avoid frustrating and concentration killing interruptions (this is granted), but actually respecting this need for isolation.

If everybody respected this need by refraining from asking out-of-context questions (the ones that consume the most brain resources) and by saving interruptions for better times (like planned meetings, lunch time or pit stops), programmers would have less need to stick headsets on their ears and would therefore be more available for direct in-context questions from fellow team members.

Aretha was right: it's all about respect!

Tuesday, December 12, 2006

Proxy That Proxy

One day or the other, you will wake up with the bold idea of deploying several EAR files in your JBoss 4.0.x Application Server. Why bold? Because it is almost certain that:

  • you will need to isolate the class loaders of the EAR files (for example to load different versions of the same class in the different applications),

  • you will want EJBs to communicate between the different EAR files.
It will then be the day you meet the often challenged, always challenging Unified Class Loader (a disciple of Dolph Lundgren) and its scoped deployment configuration options, get a few ClassCastExceptions and finally opt for switching all your EJB calls to the "by value" semantics. Then you will start drinking to forget not only the amount of pain involved in the process, but also the loss of performance induced by passing all parameters by value.

If your EJBs only take and return classes from the JDK, rejoice! There is hope for you! Indeed, you can get rid of this problematic cast:

HomeInterface hi = (HomeInterface) lookedUpHome;

which throws exceptions because the looked-up interface has been loaded by a different class loader than the one that loaded the interface you cast to. The trick is to use a dynamic proxy that directly implements the business interface of your remote EJB.

To simplify this, you can leverage Spring Framework's SimpleRemoteStatelessSessionProxyFactoryBean, which will look up your EJB, create it for you and return a dynamic proxy that implements the business methods. By this means, the need for casting is removed.
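For instance, a bean definition along these lines (the JNDI name and business interface are made up for the example) hands you a ready-to-use proxy, so no cast against the home interface is needed anymore:

<bean id="accountService"
      class="org.springframework.ejb.access.SimpleRemoteStatelessSessionProxyFactoryBean">
  <!-- The factory bean looks up the home, calls create() and proxies the business interface -->
  <property name="jndiName" value="ejb/AccountService"/>
  <property name="businessInterface" value="com.example.AccountService"/>
</bean>

Any collaborator can then simply be injected with accountService, typed as com.example.AccountService, JDK dynamic proxy included.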

Of course, as I said before, this only works if you exchange primitives or objects from the JDK (because they are loaded from a common hierarchy of class loaders that includes the system class loader). But it is a viable alternative to keep in mind...

Saturday, December 02, 2006

Hetero-Genius

In the November issue of Computer, there is an interesting article from David Alan Grier, titled "The Benefits of Being Different", which talks about a subject I am currently particularly sensitive to: the dangers of uniformity in IT systems.

My reflection started a few weeks ago when IEEE Spectrum published a seminal article (Shrink the Targets). Indeed, after the rise and establishment of Microsoft's supremacy on personal computers, we are in great need of diversity, because homogeneity has many hidden catches we will have to pay for one day or the other.

Homogeneity makes IT systems fragile because the same flaw or weakness is reproduced on hundreds of thousands (if not millions) of machines, making all of them sensitive to a well targeted attack. The discussions about the superiority of a particular OS or tool (like a browser) that end up with plans for the worldwide replacement of one by another are childish and futile. All these pieces of software have reached such a level of quality that you can not select them on a purely technical basis. So there is no supremacy to target, as replacing one domination by another would leave us with a situation as fragile as before. We must see the advent of non-geeky tribes who will recognize themselves in the usage of a particular combination of OS and tools, and who will be empowered to assume their choice without being exposed to nasty technical problems or being singled out as oddballs.

Homogeneity carries another risk: the supremacy of a particular tool or vendor always induces the usage of proprietary file formats. Being locked into a vendor's tool because its features are so far above the competitors' is nothing bad: it benefits the user and, moreover, it never lasts. Competition always catches up. Then open source tools come in, disrupt the model and force everybody to think harder about increasing the quality of the tools and the services that gravitate around them. But being locked in because your own personal data are trapped in a proprietary file format is not acceptable anymore. Diversity forces vendors and tool makers to enable data exchange, freeing the content that was held captive.

In a time when our most precious material assets are mainly digital (doesn't this feel like a quasi-oxymoron?), homogeneity is not an option anymore. Let us be hetero-genius!

Saturday, November 18, 2006

Mapping panacea

To paraphrase the golden hammer saying, I have the feeling that nowadays, because you have object mapping frameworks, everything looks like an object!

What is it with developers who seem to get infatuated with such tools? Does everything need to be mapped to objects? Of course not! Here are two recent cases that kind of drive me nuts:

  • Why read XML files when I can map them to objects? Well, maybe because what you are looking for is actually... data. So please save your application from the miseries of JAXB 1.x (which does not even care to mandate a clear concurrency behavior in its API) and use SAX, XPath or any other efficient XML reading tool (see the sketch after this list)!

  • Why query the database when I can map it to objects? Well, maybe because you are performing batch operations that are mainly about data retrieval, and the overhead your O/R mapper will add, plus its memory consumption, will bring your virtual machine to its knees. So please take the fast lane!
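To illustrate the first point, here is a minimal sketch (file and element names are made up) of grabbing a value with plain XPath, no mapping framework involved:

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XPathReader {
    public static void main(String[] args) throws Exception {
        // Parse the document once...
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse("orders.xml");
        // ...and extract only the data you actually need
        XPath xpath = XPathFactory.newInstance().newXPath();
        String total = xpath.evaluate("/orders/order[1]/total", doc);
        System.out.println("First order total: " + total);
    }
}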
End Of Rant.

Thursday, November 16, 2006

Feeding the White Rabbit

So now that, for my birthday, I got my own Nabaztag, a white WiFi rabbit that can move its ears, display light signals on 5 RGB LEDs and play digitized sounds, the big question is: how to keep the funny bunny happy?

I guess by feeding it properly.

The default services offered for free by the vendor are pretty basic: weather, stock exchange, email account watcher... And the coolest: rabbit Tai Chi! Hey, what do you expect: they are free, right?

Since the animal has an HTTP based API that allows controlling its ears, lights and sounds (including a nice TTS feature), I do not see any reason to subscribe to the advanced paying services! It will be much more fun to write services for it!
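The API being plain HTTP, calling it from Java is a one-liner affair. The sketch below is only an illustration: the host name and parameter names are assumptions, not the vendor's documented ones.

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class RabbitSpeaker {

    // Hypothetical endpoint and parameter names, for illustration only
    private static final String API = "http://api.rabbit-vendor.example/api";

    public static void say(String serial, String token, String message) throws IOException {
        String query = "?sn=" + URLEncoder.encode(serial, "UTF-8")
                + "&token=" + URLEncoder.encode(token, "UTF-8")
                + "&tts=" + URLEncoder.encode(message, "UTF-8");
        // A simple GET: the vendor's server queues the message and the rabbit speaks it
        HttpURLConnection connection = (HttpURLConnection) new URL(API + query).openConnection();
        System.out.println("HTTP " + connection.getResponseCode());
        connection.disconnect();
    }

    public static void main(String[] args) throws IOException {
        say("SERIAL", "TOKEN", "You have three unread messages");
    }
}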

So I came up with the first one today, an appetizer for the white rabbit: the service connects to the GMail accounts of the family members and builds a vocal summary of the emails waiting to be read. And that is already cool ;-)

There are many other funny things to do: a service that would report the daily statistics of my open source projects could be fun. Another one that would connect Maven to the animal and color the poor thing red whenever a build is broken or unit tests are not A-OK would be terrific.

You might wonder what the moniker of this new family member is? Knowing that anyone who knows its name can make it speak, and that the thingy sits in the middle of our living room, where any sentence rated above G is not welcome, I would say that we will call it "secret". For now.

Saturday, November 11, 2006

Centralized Configuration Management for JBoss

NOTE 12-MAY-2009: This particular post is driving a lot of traffic to my blog. Even if the JBoss Properties Service is still worth using, nowadays I would not advocate managing properties in SVN, as it is problematic security-wise and requires an HA repository. I would rather embed default property values in my project and allow the overriding of some values (passwords...) via a simple file created in a well known location on the server.

Managing the configuration of application servers and the applications they host in a centralized and rationalized way is a requirement for any corporation doing its own hosting. Not only does configuration differ between server instances and the applications they host, but also between the different environments they exist in (i.e. development, integration, QA, pre-production and production).

The JBoss 4 user guide shows how to leverage the Service Binding Manager to handle server configuration in a central way. This is a great approach but I think we can go further by leveraging Java system properties in conjunction with two other features of the server: the Properties Service and the fact that JBoss parses all XML configuration files (including J2EE deployment descriptors) and resolves ${references} to system properties. Moreover, if you use the Spring Framework in your application, you can also leverage its PropertyPlaceholderConfigurer to retrieve configuration information from Java system properties.

The main idea is to store configuration information as properties files in a Subversion repository and have the different JBoss Application Servers retrieve these configurations through HTTP, thanks to this kind of configuration of the Properties Service (in properties-service.xml):
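In essence, such a Properties Service configuration looks like this minimal sketch (the repository host and path are assumptions):

<mbean code="org.jboss.varia.property.SystemPropertiesService"
       name="jboss:type=Service,name=SystemProperties">
  <!-- Properties file served over HTTP straight from the Subversion repository -->
  <attribute name="URLList">
    http://svn.mycompany.example/config/${user.name}/jboss.properties
  </attribute>
</mbean>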



These system properties will then be loaded at JBoss startup time (or on any modification, like a touch, of the properties-service.xml file) and will be available for use in many different places like:

  • XML server configuration files that configure ports and host names (for web server, invokers, JNDI tree...),

  • J2EE deployment descriptors to avoid hardcoding of parameters (like user name and password used to connect an MDB to a remote JMS destination),

  • application parameters, provided they are read from Java system properties (which is easy and nice to do if you use Spring).
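As an illustration of the last point, a minimal Spring sketch (bean and property names are made up): the placeholder configurer lets the Java system properties loaded by the Properties Service override locally defined values.

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <!-- System properties win over any local value -->
  <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
</bean>

<bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
  <property name="host" value="${smtp.host}"/>
</bean>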
Of course, there should be a way to find the right configuration in the SVN repository. As you surely have noticed in the previous configuration fragment, since the Properties Service is configured by an XML file that gets parsed, it is possible to use a system property to configure it (now this is eating your own dog food!). The system property used to select the proper configuration can be either ad hoc (a custom one added in the startup script or run.conf of the particular JBoss instance) or standardized (for example, if each JBoss instance is run as a specific user, the system property can then be user.name).

Another interesting feature of the Properties Service worth leveraging is that you can provide a list of URLs: the properties will be loaded from all these URLs, the last ones taking precedence over the first ones. This can be used to organize configuration in a hierarchy, with global settings, server specific settings and application level settings loaded from a common place and merged together in the Java system properties. Here is an example of such a configuration:
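(A sketch of what such a list could look like; hosts and file names are assumptions.)

<!-- Later URLs override the earlier ones: global, then per-server, then per-application -->
<attribute name="URLList">
  http://svn.mycompany.example/config/global.properties,
  http://svn.mycompany.example/config/${user.name}/server.properties,
  http://svn.mycompany.example/config/${user.name}/applications.properties
</attribute>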



As you can see, not only is JBoss a versatile application server, but it is also an extremely configurable one that can be easily tuned to fit the needs of large companies with complex deployment constraints.

Saturday, November 04, 2006

New Position: File Name Manager at Oracle Corp.

Goals:
  • Rename Oracle JDBC driver file names so they actually differ between different versions,

  • If possible, improve the manifest files so there is more difference than the "Implementation-Time" attribute between two different versions.
Required skills:
  • A true desire to help development teams and production departments make sense of the different driver releases.
Disclaimer for the slow-brained: this is a hoax. I decided to create this position simply because someone had to do something.

To fully enjoy this post, go to these pages (9i, 10g & 10gR2) and download the different versions of the JDBC driver: you can not miss them, they are all named ojdbc14.jar. You can then try your skills at figuring out sensible names for these files!

Sunday, October 29, 2006

A 16 Months Iteration

I have finally been able to release a first alpha version of jinFORM, 16 months after the project started. The title is voluntarily provocative: it was not an iteration but a slow maturation!

Why did it take so long? As explained in jinFORM news, part of the time has been spent waiting for a presentation server able to render some much needed features of XForms 1.1. Initially, Chiba was targeted as the presentation server but testing and experiments have finally pushed the project to use Orbeon Presentation Server, which offers a more stable XHTML + JavaScript rendering, a better compliance to standards (XForms, XML Pipelines...) and the much needed XForms 1.1 features mentioned before.

So, what is jinFORM 0.1.0 worth? Not much for production: you can only fill simple forms with repeating sections, but without computation or logic. The only submission currently supported is saving the XML instance built by the form behind the scenes.

Then, was it worth the effort and why should anyone care about this project? Well, anyone with a slight interest in the intimate details of the Microsoft InfoPath file format (aka XSN) should be interested to see how jinFORM transforms this proprietary form definition into XForms.

Because an XSN file is simply a CAB stuffed with XML, XSL and extra files, described by a documented manifest, grokking its content is fairly easy. Doing anything with it is another matter. Take the XSL for view rendering (usually named view1.xsl) that is generated by InfoPath. Because this XSL relies on implicit MSXML objects to support many extension function calls, jinFORM has to pre-chew it and implement similar functions in order for anything to work.

When designing this XSL, Microsoft, of course, had anything but elegance or portability in mind. It is cluttered with prescriptive commands like xsl:for-each, traps like testing for the existence of one function and then using another, and oddities like using a function to retrieve the current document the XSL applies to. A surprising approach from Microsoft? After all, they love functions and procedural programming: why would they embrace the template concept that underlies XSL transformations?

There are still many things to add in jinFORM to make it really usable but it is a good first step. Foundations have been laid, now let's build on it!

Monday, October 16, 2006

Vive le ROI!

Please excuse my French but I could not resist the pun!

Agile proponents, including me, often use an improved return on investment (ROI) as an argument for avoiding traditional software development methodologies in favor of agility. The scope considered for this ROI usually concerns the financial investment made for the particular project and the result the stakeholders are expecting from it.

There is another ROI that I think is important to consider and that is also heavily impacted by the adoption of agile practices: the intellectual return on investment.

Indeed, developers involve a lot of themselves when working on a software project. The most obvious investment is the time spent at their desks but there is also a tremendous part of the job that is done after hours, outside of the office.

Ever driven home from work like a zombie while mentally exploring the arcana of a new system's architecture? Ever had a light bulb turn on while revisiting your latest code in the shower? If yes, you probably see what I mean when I say that software development is pretty mind invasive.

Pecuniary compensation is often considered the just reward for this invasion: I contend this is not enough. Money only buys time. The intellectual investment must be compensated by a specific ROI. What could this specific ROI be?

From my experience, a combination of:
  • satisfaction in the quality of the work,
  • a sustainable, elevated productivity and
  • an increased confidence in the system under construction
forms a good intellectual ROI.
 
Knowing that these are the kind of conditions that agile methodologies can create, this can be used as an argument to sell them to programmers, who can sometimes be convinced of the benefits for their company (or the project stakeholders or their managers) but are not always convinced of the benefits for themselves.
 

Tuesday, October 03, 2006

The Roots of a Bias

A few weeks ago, Kubuntu released an upgrade that crashed the OS: this was a major goof and they promised they would never do it again. When my machine refused to boot after the upgrade, I frowned, grumbled, downgraded the faulty package and kept going until a new update restored the situation a few hours later.

Now, when I work on Microsoft Windows, I almost instantly go fuming and ballistic at the smallest glitch. Should the Explorer freeze for ages because I clicked on a network resource, I see red. Should I have to defragment the mess that my drive has become after a few months of work, there is a smell of burning martyr in the air.

Could I be somewhat biased against Windows? This would be a subject of shame because any prejudice is a reduction of reality that leads to unfair behavior. Yuck! Nothing to be proud of...

What could be the roots of such a bias? I do not particularly care about Windows' supremacy on desktop computers (except that such global uniformity is a security problem), nor about the personal fortune of some Microsoft big shot. In fact, Windows XP Professional is probably the best Microsoft operating system I have ever used, so where is this wrath coming from?

Doing a little introspection and exploring the depths of my psyche, I think I have found the event that triggered this anger: it was in the mid-90s, so this is surely a case of unfinished business!

My first application that went into production was a modem hub for the Belgian branches of a pharmaceutical distributor. The application was written in C and, the most important factor, developed and executed on OS/2. Coming from Atari TOS, I really fell in love with OS/2!

Then I finished school and started to work in the professional world, that is, the world of Windows for Workgroups 3.11. Coming from a world of sessions, processes and threads, WfW seemed to me like a house of cards waiting for the faintest breath of wind to fall down in pieces. And it was. Rebooting was a big part of a developer's activity at that time.

When Windows 95 started to be announced, I was certain the industry would make the right decision based on the misery of WfW and move to OS/2 Warp. Yes, I was naive enough to believe that technical soundness and robustness would help revert the trend towards Windows in favor of what I considered a better professional operating system.

History proved me deeply wrong and left me frustrated for the many years it took Microsoft to build an operating system as stable and usable as OS/2. I think this old frustration, which has turned into bitterness, is the root of my bias. I need to work on it and restore a pacified relationship with Windows. And I will keep using Kubuntu as my main operating system, just in case... A relapse is always possible!

Monday, September 25, 2006

Gnosis of Diagnosis

For a few days, I had been procrastinating about writing on the ill distribution of debugging skills among developers, then I read a letter sent to Computer and published in the September 2006 issue under the evocative title of "Deskilling Software Engineering".

The author states that "diagnosing a problem remains an art and requires skill". I like the fact that he evokes art, because there is truly a part of instinct in the difficult task of figuring out why a system fails or refuses to do what it is supposed to. Instinct is hard to define and quantify, so what else would a developer need to be successful when chasing bugs?

From my experience, I have retained what I consider the three topmost traits that help in the art of diagnosis:

  • Systematic thinking: software systems are complex; they take a lot of various inputs (configuration, data, user entries...) and turn them into various outputs, usually with only one being the desired one. Modifying too many inputs at the same time makes it impossible to figure out the impact of each modification on the output. Programmers without rigor or a varnish of scientific method will often modify many parameters at the same time, which will ruin any attempt to empirically figure out what is causing the issue.

  • Pugnacity and thoroughness: some problems are hard to tackle, especially the ones that require a long setup time before you can test what is happening (think of rebooting a server to release a particular resource before testing). Others require the ability to read through extensive traces without panicking! I have seen many non-rookie developers unable to read a stack trace and I understand why: it is intimidating and requires an effort of will to delve into.

  • Hands-on experience: as the saying goes, "once bitten, twice shy"; nothing replaces years of experience, that is, the numerous blows and slaps you get creating then solving bug after bug. With the current trend of devaluing hands-on experience in favor of managerial skills, hourly rates for programmers are a clear signal that there is nothing good in continuing to write code. Hence, when someone gets good at the job of writing code, he is kindly asked to stop doing it and is moved to a more desirable (and rewarded) position, leaving projects with rookies who will have to go through years of blows and slaps to become efficient bug killers.
Of course, these traits can help in other software development activities as well. But their lack always impairs any diagnosis endeavor.

Tuesday, September 19, 2006

The price of common sense

In "Agile Software Development with Scrum", Ken Schwaber stated that:
 
"Scrum demands the liberal application of common sense".
 
I think this applies to any agile methodology: common sense is a driving force for any practice that relies on empirical management and self-organizing teams.
 
Now, I have just read "Unsystematic Engineering", an IEEE Spectrum article where Robert W. Lucky dares saying out loud what everybody knows:
 
"(...) systems engineering is often based on experience and common sense, and we know where common sense fits in the hierarchy of things that justify a high salary."
 
Okay, we all work for the beauty and love of our craftsmanship and not for the bottom line, but isn't this pattern of under-valuing the qualities we have learnt to be the right ones something that gets harder and harder to accept?
 
So, what should be the true price of common sense? Could it not be evaluated by measuring the cost of projects that failed because of a lack of it?
 

Monday, September 11, 2006

Dependable Dependencies

With the advent of human friendly build platforms like Maven 2 or Buildix, I am expecting to see improvements in the way programmers manage their dependencies. Of course, I might be too candid, but proper tools can give developers enough incentive to deal with their dependencies in a less hip-shooting style and therefore benefit from more predictable, reproducible and automated build sequences.

Classical patterns of classpath quagmire I have noticed include:

  • Deployment of libraries needed only at compile time: this is the usual rookie error, where you typically end up with things like servlet.jar in WEB-INF/lib. A little explanation is usually enough for this to not happen again.

  • Deployment of libraries needed only at unit test time: less harmful than the previous error, it is simply disturbing to have applications deployed with unit test and mock libraries merrily packaged altogether.

  • Deployment of platform libraries in excess or in different versions: this is typically the case for applications that are designed to be deployed on "any" J2EE compliant server. Very often, such an application will contain libraries that are already present in the server, sometimes in different versions, which will lead to hard to track issues. This is usually done just in case your particular server does not support a particular version of a J2EE sub-component, and, unless you run on the exact same server the vendor is using, you will typically end up shaving a dozen useless jar files off the libraries folder!

To my mind, when it comes to dependencies, a NoClassDefFoundError is much better than a ClassCastException, or, in other words, too little is better than too much. It is indeed easier to figure out which libraries are missing than which ones are in conflicting versions.

Anyway, as build tools help us formalize the role of each library used by our projects (compile? test? deploy?), we can expect to see better handled and dependable dependencies around us in the near future.
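For instance, here is what this formalization looks like in a Maven 2 POM (versions are just illustrative): the servlet API is flagged as provided by the container and JUnit as test-only, so neither ends up in WEB-INF/lib.

<dependencies>
  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>servlet-api</artifactId>
    <version>2.4</version>
    <!-- needed to compile, provided by the container at runtime -->
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>3.8.1</version>
    <!-- needed only to run the unit tests -->
    <scope>test</scope>
  </dependency>
</dependencies>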

Tuesday, September 05, 2006

A Simple Pattern for Prevayler

While implementing Prevayler* in a cool project I will detail in the near future, I of course met the need to do it in a way that would allow me to unit test my prevalent system independently from the persistence mechanism. This would allow me to exercise all the code involved in the business logic of my domain without actually bootstrapping the prevalence engine.

I came up with the simple pattern described below.

The IDomainManager interface exposes all the domain related methods. The MemoryDomainManager implements this interface and contains all the serializable data: it is the actual root of the prevalent system and the only object to thoroughly unit test.

The PrevalentDomainManager is merely an empty shell that delegates all calls to the MemoryDomainManager, wrapped in Prevayler transactions. It also contains the basic code to bootstrap the persistence engine.

Note that the MemoryDomainManager object does not implement any synchronization mechanism: Prevayler, thanks to its support for transactions, blissfully takes care of this.

Unifying MemoryDomainManager and PrevalentDomainManager behind the IDomainManager interface naturally makes it possible to unit test all the components relying on the domain objects without using actual persistence, and to deploy the production application by designing it to use the prevalent implementation of the said interface.
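Put into code, the pattern could look roughly like the following sketch (Prevayler 2.x style API; the addItem() domain method and the class bodies are simplified placeholders, the real domain being of course richer):

import java.io.Serializable;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import org.prevayler.Prevayler;
import org.prevayler.PrevaylerFactory;
import org.prevayler.Transaction;

interface IDomainManager {
    void addItem(String item);
    List getItems();
}

// The prevalent system root: pure, serializable domain logic, easy to unit test
class MemoryDomainManager implements IDomainManager, Serializable {
    private final List items = new ArrayList();
    public void addItem(String item) { items.add(item); }
    public List getItems() { return items; }
}

// The command that gets journaled by Prevayler
class AddItemTransaction implements Transaction {
    private final String item;
    AddItemTransaction(String item) { this.item = item; }
    public void executeOn(Object prevalentSystem, Date executionTime) {
        ((MemoryDomainManager) prevalentSystem).addItem(item);
    }
}

// The empty shell: bootstraps Prevayler and wraps each call in a transaction
class PrevalentDomainManager implements IDomainManager {
    private final Prevayler prevayler;

    PrevalentDomainManager(String prevalenceBase) throws Exception {
        prevayler = PrevaylerFactory.createPrevayler(new MemoryDomainManager(), prevalenceBase);
    }

    public void addItem(String item) {
        prevayler.execute(new AddItemTransaction(item));
    }

    public List getItems() {
        return ((MemoryDomainManager) prevayler.prevalentSystem()).getItems();
    }
}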

So, how about dropping your prejudices and jumping head first into the jolly world of Prevayler?

________
* For those of you who wonder, Prevayler is a purely object oriented persistence framework that offers a solid alternative to the now classic "RDBMS+O/R mapping framework" pair. It received a well deserved Productivity Award in 2004.

Thursday, August 31, 2006

Surfing the expansion wave

Software development is a very demanding professional field. Anyone who harbors the desire to make a living from it should be very aware of this in order to take a well informed decision before plunging into this business. It is sometimes best to decline the red pill and take the blue one.

I vividly remember my 1989 student project leader who, after introducing the exhilarating objectives of the Celesstin research project, done in cooperation with IBM and Dassault, opened the door and took the time to explain the challenges of software development: there would be no shame in leaving the room before the project started, but after, it would be a world of pain for everyone. I wish all computer science teachers would give such an introductory speech: many people would have a better fitting job and would stop driving passionate developers mad (on the other hand, what would then happen to the Daily WTF?).

For me, the most demanding aspect of our job is staying current with what is going on out there. Software development, unlike most jobs, has a constantly growing body of knowledge. All the difficulty is to stay somewhere close to the border of this constantly expanding universe. This is an exhausting and sometimes frustrating endeavor:

  • exhausting because on top of our daily paid-for activities, we have to explore this growing universe, very often using our free time and what is left of our brain power after a long day of work to read, download, test, wander and sometimes wade in terra incognita that others have pioneered for us,

  • frustrating because it expands in all directions and it is therefore virtually impossible to stay close to the border all around this sphere of knowledge: every day brings its wealth of new technologies, new releases, new frameworks, new tools, new methodologies, new fundamental discoveries... We can only choose an area we feel comfortable with and surf the expansion wave as much as we can.
Once you have accepted that every morning you will wake up, the universe will have expanded a little and the border will have moved further, so it will be another day of learning and discovery, life will be easier. No more dreams of "being at the top" but, instead, the simple path of the craftsman who learns and improves every day, and every day realizes that the amount of things to learn and practices to improve has grown a little bigger than the day before.

Now what is the fuel for doing this, if not passion?

Sunday, August 27, 2006

Coverage Generation or Generations of Coverage?

While I was thinking about this post, I came across the "32 & 16 Years Ago" section of the July 2006 issue of Computer and was amazed by what I read in the entry titled "Software Testing". It started like this: "The objective of (...) probe insertion mechanisms (...) is not to find errors; rather it is to quantitatively assess how thoroughly a program has been exercised by a set of test cases" (edited by Computer).

This is dated... July 1976! Code coverage in 1976! Golly! What a blunt reminder that we are, at best, building on the shoulders of giants... Humility is definitely the only path that is safe to walk.

Anyway, my current concern with code coverage was the 100% target that I was talking about in my "Programming for Agility" post and that I borrowed from Uncle Bob himself. This is a nice high level target, but can it really be reached? I do not have much doubt for code I am writing with my own busy little hands, but what about generated code? Especially code that contains many conditional branches, like the equals() and hashCode() methods Eclipse generates?
 
@Override
public boolean equals(Object obj) {
    if (this == obj)
        return true;
    if (obj == null)
        return false;
    if (getClass() != obj.getClass())
        return false;
    final DefaultTodoListItem other = (DefaultTodoListItem) obj;
    if (id != other.id)
        return false;
    return true;
}
I came up with three possible strategies, including one for which I apologize in advance:
  1. Exclude generated code from test coverage estimation: for having written this one, I have already slapped my own hand as due punishment*! Generated code does not mean intrinsically safe code: it is live code and is subject to all sorts of possible errors you can imagine.

  2. Use test code generators: great tools like Agitar Agitator can very conveniently generate test cases for all sorts of extreme values that will most probably explore all the possible execution paths of the aforementioned generated methods. These tests would complement nicely the ones written by the developers.

  3. Reduce the number of needed tests by enforcing class invariants: this can be done by working with immutable objects and checking for invalid or exceptional conditions at construction time. This also allows pre-calculating values like hashCode() and the toString() rendering. The tests will then target user written code and will exercise the construction of invalid objects instead of calling all the methods possibly sensitive to an invalid object state.
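To illustrate the third strategy, a minimal sketch (class and field names are hypothetical) of an immutable object that enforces its invariants at construction time and pre-computes its hash code:

public final class TodoListItem {

    private final int id;
    private final String label;
    private final int hash;

    public TodoListItem(int id, String label) {
        // The invariants are checked once and for all: no setter can later
        // put the object into an invalid state
        if (id <= 0) {
            throw new IllegalArgumentException("id must be positive");
        }
        if (label == null) {
            throw new IllegalArgumentException("label can not be null");
        }
        this.id = id;
        this.label = label;
        // Pre-calculated: hashCode() becomes a trivial, branch-free method
        this.hash = 31 * id + label.hashCode();
    }

    public int getId() { return id; }

    public String getLabel() { return label; }

    @Override
    public int hashCode() {
        return hash;
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof TodoListItem)) {
            return false;
        }
        TodoListItem other = (TodoListItem) obj;
        return id == other.id && label.equals(other.label);
    }
}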
Please do share other possible approaches or practices you think of or use daily in this matter. Even if we are the second generation of code coverage buffs, we might still have something useful to say!

____________________
* An efficient self-punishment technique I learnt from J.B. Rainsberger during SD West 2005.

Thursday, August 24, 2006

Parallelize This

Software guru Larry O'Brien has started a thought-provoking series of articles on DevX about leveraging multi-core systems with concurrent programming techniques.

This led me to wonder how we, J2EE developers (or JEE developers, for the most advanced of us), could leverage these multi-core systems, which are bound to become our standard configurations. I might be completely wrong, but I have not figured out how this could change our practice! Let me elaborate.

J2EE applications are designed to be deployed and executed on J2EE servers, which are highly multi-threaded environments: any J2EE server will contain several thread pools waiting to process your HTTP requests, or your EJB invocations, or your JMS message submissions... Such a system is de facto a greedy consumer of multi-core or multi-processor architectures.

In fact, it requires a lot of talent to ensure that a J2EE application does not leverage this multi-threaded environment. I have seen such a product once: the developers had to be very consistent in using thread unsafe objects to ensure the system would crash if more than one user used it. Indeed, even the simplest web-tier only application will benefit from the thread pool dedicated to serving web requests.

Attempts to parallelize some processes by creating specific threads can be made, but with great caution, as it is not recommended to create random threads in an application server. In fact, for such parallelizing, internal JMS destinations and consumers can be used: this is already common practice.
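For the record, such a parallelization boils down to something like this sketch: post one message per chunk of work on an internal queue and let the container's pool of message-driven consumers grind them concurrently (the listener and payload are hypothetical).

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Deployed as a message-driven bean: the container instantiates several of these
// and dispatches messages concurrently on its own managed threads
public class WorkItemListener implements MessageListener {

    public void onMessage(Message message) {
        try {
            TextMessage textMessage = (TextMessage) message;
            // Each message carries one independent chunk of work
            System.out.println("Processing work item: " + textMessage.getText());
        } catch (JMSException e) {
            throw new RuntimeException("Could not process work item", e);
        }
    }
}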

Besides, it is interesting to note that more native thread execution capabilities will make it possible to better leverage JVM features like parallel GC. But this is true for any kind of Java application (for JVM versions greater than or equal to 1.3.1), not only for J2EE ones.

To sum it up, these new multi-core architectures will improve the performance of J2EE applications, but without the need for programmers to do anything about it.

This is good news, anyway.

Thursday, August 17, 2006

Ban the Nocturnal Gnome!

This is war time, we will not take it anymore! Read my lips: transparently subversive and constantly deceiving actions will not remain unpunished. We will strike fearlessly and get revenge for this odious and hateful rampage:


Yes, Mr. Knitty Knotty, your days are numbered! Soon your nocturnal gnomic activities of wire knotting and knitting will be over. The people of Earth can not stand you anymore: your mischievousness, all devoted to messing up our computer connectivity while we are asleep, is coming to an end.

While the world goes wireless, you, Mr. Knitty Knotty, are entering oblivion. You will not be regretted. So long.

Saturday, August 12, 2006

Big Crunch Happens!

So, some say that if there was a big bang, there might even be a big crunch. I am far from being as versed in astronomy as Robert C. Martin (look closely at the picture behind him: it is one he took during his most recent wormhole expedition), but I must say that what I am currently observing in the code base of NxBRE might well happen to the universe as well. Or not.

Anyway, while I am working on the RC2 of the brand new v3.0.0, I am aggressively refactoring the code base, which basically translates into:
  • not adding any new features,
  • removing any useless code,
  • and simplifying everything that can be.
And I am amazed to see hundreds of LOCs and even complete classes going away!

This seems to be the fate of any software development project: after the big bang of features, bells, whistles and whatnots, a big crunch is bound to happen when enough significant milestones have been reached. At this stage, code smells can be merrily sniffed and dealt with.

Of course, this is only possible thanks to the wide test suite of NxBRE, which contains unit tests, white box tests and load tests.

Oh, what is the point of fixing it if it ain't broken? Well, let me give you at least one good reason: the plan is to have v3.0.0 out before O'Reilly's Windows Developer Power Tools comes out and a new crowd of developers starts discovering NxBRE! Hopefully what remains after the big crunch will still be working ;-)

Monday, August 07, 2006

Helping the Helpline Operator

From time to time, the infamous "we always do that" or "we have always done it that way" excuse comes up: nothing new here. When hearing such a sentence over the phone, delivering a correct answer can be challenging though. No body language will help you get through the wall of resistance to change that has just materialized in front of you (well, at the other end of the line).

What could possibly be a straight to the point reply, one that will not hurt the feelings of the other person (a real risk when questioning traditions), but will still make the need for change a clear necessity and trigger the reflection required to change one's mind? I propose the following punch lines:
  • I did not know it could even work that way!
  • You have been lucky so far!
  • Call Guinness!
What would you like to add?

Sunday, July 30, 2006

Going Hollow

If you go driving through France, you will notice that every hundred miles or so you cross the borders of the different historic regions, roughly matching the ancient duchies of the kingdom. Road signs clearly mark these landmarks by displaying the name and the "logo" of the region.

While driving back to Lorraine, my home region, I noticed that the said logo is now a vague alteration of the original coat of arms. This alteration makes no sense and does not carry any value or meaning, unlike the original blazon. See for yourself:

Original Coat of Arms (dated 1183)
Current Logo (dated 1999)

I am not preaching at the altar of traditionalism here: all I am saying is that the original coat of arms had a meaning, while the new one has none. The three birds on the red band were a reminder of the legend saying that Godefroy de Bouillon, the initiator of the first crusade who took Jerusalem in 1099, killed three birds with one arrow during the siege of the holy city.

The current logo is simply an aesthetic redesign of the original blazon: the red band has disappeared, the birds point to the top right corner and they look like arrow-headed hang gliders.

Previous Logo (dated 1986)

In that sense, the previous logo carried more meaning. The Lorraine region is positioned on the hexagon that traditionally represents France. The colors represent the resources of the region (water, agriculture, steel industry...).

What is the lesson for us in the IT field? Let me propose the following: let us not pursue novelty for its own sake but only if and when it brings new sense and value to our industry.

Monday, July 17, 2006

Agility Does Not Work!

Well, at least this is what some current discussions on the blogosphere are about. Of course, this is as nonsensical as saying that concentration, stars or ants do not work. They all accomplish a purpose and do it very well when done at the right moment and in the right place.

Most of the complainers have probably been frustrated by the fact that agility is not a tool-set, nor even a skill-set, but truly a mind-set. This mind-set, whose substance is described by the Agile Manifesto, gets translated into many different practices. Therefore, one could imagine that by following these different practices, the project will become transmogrified into an agile gizmo. Unfortunately it does not work this way: as the letter of the law is spawned by the spirit of the law, agile practices come from agile minds. You need an agile mind first in order to put agility into practice.

With this mindset, it is possible to adapt agile principles to the harsh reality of your project or your clients, which is where the true challenge resides. Not all clients will accept the full monty: adaptation is needed. Here are some examples:

  • Your client is addicted to BDUF: discreetly invest “analysis” time in running spike projects (call them “proof of concept”) to tackle as soon as possible all the technological risks and uncertainties (allowing to fail fast if need be),

  • Your client loves methodologies: turn to Scott Ambler's AgileUP and see how agile the unified process has turned under Scott's expert hands,

  • Your client does not want to be collocated: release often and expose your client as early and as openly as possible to the system you are building for him (he sure will react and provide an invaluable feedback),

  • Your client has a really big project, so big that you think agility has reached its limits: then use a more traditional approach (serialization) to pilot agile driven smaller sub-projects (read Balancing Agility and Discipline for more about this).

There are many other situations where it will be difficult or impossible to openly sell an agile project to a client: pair programming (why pay two guys for the job of one?), time & features negotiation (these are the specs: what is the price and the deadline?)...

The important thing to remember is that more than a list of recipes, agility is a state of mind that will materialize differently in each and every project.



Saturday, July 15, 2006

Coming to the Rough Cuts!

Jim Holmes and James Avery's new book, Windows Developer Power Tools, has hit the O'Reilly Rough Cuts, where its chapters will be progressively released.

Chapter 18, covering the vast subject of Frameworks, will include a section titled "Externalize Business Rules with NxBRE". You can imagine how excited I am: after almost three years of hard work, and thanks to the continuous feedback of an excellent community of users and developers, NxBRE is reaching a new level of visibility: an O'Reilly book!

This chapter has not made it to the Rough Cuts yet but it is on its way. I will update this post when it gets there.

In the meantime, you can also browse the TOC of the book to discover all the gems waiting to be discovered in it!

[Update 20-JUL-2006] The book has a jolly cool cover!
[Update 19-AUG-2006] The book is now in pre-order on Amazon.

Monday, July 10, 2006

Managing the Gap

Zepag has posted an interesting story on his Blogical shift blog (whatever that means) titled "From software to detergentware...", where he talks about the new wave of two-zeros (Web, SOA and whatnot) that is currently hitting our jolly technoworld.

The fact that marketing feels like creating such apparently brain-dead new versions of old stuff probably means that it is part of a natural motion in the world of selling things (be they concepts, software, soap, consultants or SOAP).

This reminded me of the excellent blog posts from Eric Sink about the necessity and strategies for closing the gap between the product and the customer (read part 1 and part 2). And I came to the conclusion that this gap does not really need to be closed but to be managed.

When the gap becomes too small, the customers are in a position of deep familiarity with the product, up to the point where they stop considering it as something they have thought about and paid for, and see it instead as a simple, commoditized part of their own life.

This is when you need to manage the gap: creating a new two-zero of something puts the customer in need (if not in urge) of getting this new product. Instantly, it has become a new object of desire. In that sense, vendors really know what they do and how to influence the crowd we are.

Of course, nowadays, with the instant worldwide reactivity of the internet connected network of customers we all are, there is a serious risk of backfire. Still, this is a strategy that pays.

A few years ago, Microsoft understood this and decided to take the whole board game one step further: they switched from a raw number version system to a vintage-like year version system. This was very clever because it made the gap easier to grasp for the human mind: who cares about running Word 7 in 2006? Now if you say that this is Word 95, you immediately feel in great need of an upgrade!

Hence the gap between products and customers is not a negative fact but a parameter that must be managed by marketing people.

Friday, July 07, 2006

Who Can Have The Keys?

With the current upturn in the IT industry, recruiting software engineers is a hot business... again! There are numerous posts throughout the Internet talking about companies' strategies and candidates' adventures (good and bad). I have found this post from Reg Braithwaite particularly interesting, both from the story he tells and the links he shares at the bottom of the entry.

Let me explain how we recruit software engineers at Agile Partner. Like everybody else, we look for smart people who can solve problems but, because we are a small IT consultancy, we do not do generic recruiting like Amazon or Google might do. We look for particular profiles (junior, senior, architects) for particular technologies (.NET, Java), then we tune the "smart people who can solve problems" filter for this particular job offer.

As a purveyor of software and services for enterprises and governmental agencies, we look for people able to develop enterprise software (I am so scared to use this expression because of one terrifying post from Alex Papadimoulis), which I reckon is the case for the vast majority of companies similar to ours. What does this imply in terms of recruiting?

Most of our software engineers' daily job consists in wiring building blocks together in order to create back-office systems that solve the business problems of our clients.

This job is not exactly formal Computer Science, as almost no algorithm knowledge is required (with the exception of proper XSL-T writing, which involves a lot of recursion; I tend to consider a good XSL-T developer a talented developer). Hence high CS grades are not something we care for very much.

But this job is about curiosity, instinct and quality.

This is why we look for people:

  • who are eager to learn and able to do it fast, in order to find their way in open source project code bases or application server stack fests,

  • who know the secrets of good middleware development and can conceptualize the intimate details of a multi-threaded application,

  • who have high work ethics and work hard on themselves to stick to these standards.

If we feel that we share this set of core values and other common reference points (like gurus we worship or RSS feeds we read), we will then try to estimate the actual scope of the candidate's technical knowledge, considering that lacks are gaps to fill, not show-stoppers.

We carefully avoid tricky questions because they show nothing, except a good memory or the reading of an interview preparation guide. We are much more interested in a candidate who knows where to find the information she needs than in one who can write an algorithm on a white board or tell whether an obscure piece of code compiles (if you can not tell what a piece of code does by reading it, it smells like it is in great need of refactoring).

Then we ensure that the actual coding practices of the candidate match her assertions. In case the person is a committer on an open source project, this task is greatly simplified: the code review can be done with a simple browser, as most code repositories are accessible via HTTP. Otherwise, we ask the candidate to write a simple program, with somewhat incomplete specifications (as in real life) and with no time constraints (the exercise being done at home).

All this can sound pretty lax but, believe me, very few make it through this process to the end.

Which is OK because, at the end of the day, they will end up with the keys to the company. Something only possible with people you can trust.


Thursday, July 06, 2006

Not A Mere Option Anymore

Just a few days after receiving my free and good looking Kubuntu CD, I am up and running on this splendid platform.

My Dell Precision M90 is fully supported, including WiFi. All my tools are up and running: Callisto, jEdit, Firefox, JBoss, OpenOffice... And it really flies (JBoss up and running in 11 seconds).

So what is left for the other operating system, still available as a dual boot? .NET development, because I want to stay on a platform close to the one NxBRE users run. What else? Ah, gaming of course!

With the nice marketing of Vista currently going on, switching to Linux is not a mere option anymore, it is a choice anyone can make, thanks to Kubuntu.

Let us spread the word.

Sunday, July 02, 2006

The Way Frameworks Go

While converting NxBRE to .NET 2.0, I came to realize that the SDK has been super sized.

Being a daytime Java geek and an open source developer on .NET, I can say I have a fairly good grasp of the two frameworks. When I started to work on .NET, I really appreciated the compactness and the clarity of its SDK. Informative namespace monikers and well-thought APIs were two features a guy coming from Java could appreciate.

Now the whole thing is getting really fat, really Java-ish to speak frankly. But it is not the super-sizing that I find disturbing.

The introduction of generics has triggered some really strange forks in the SDK. Take a look at the System.Collections namespace and you will notice that it has spawned a generic version of itself under System.Collections.Generic.

I would really appreciate it if some .NET guru could explain why such a thing has happened. How come the existing collections have not been made generic? This has been done in Java, so why not in .NET?
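For contrast, a quick Java sketch of what "retrofitted" means in practice: thanks to type erasure, the raw and the parameterized collection are one and the same class, so no second hierarchy was introduced.

import java.util.HashMap;
import java.util.Map;

public class RetrofittedCollections {
    public static void main(String[] args) {
        Map<String, Integer> typed = new HashMap<String, Integer>();
        Map raw = new HashMap();

        typed.put("answer", 42);
        System.out.println(typed.get("answer")); // 42

        // Same runtime class: the generics were bolted onto the existing type
        System.out.println(typed.getClass() == raw.getClass()); // true
    }
}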

Now we end up with two IDictionary interfaces, which can be a pain to handle in the same class. This can happen because you can not, and do not want to, use generics everywhere, especially if you are after performance with collections of millions of entries.

To add to the confusion, the common implementation of IDictionary that was Hashtable now has a generic counterpart named Dictionary. Who said consistency was bad? I leave it to the reader to appreciate the definition of Dictionary:
public class Dictionary<TKey,TValue> : IDictionary<TKey,TValue>, ICollection<KeyValuePair<TKey,TValue>>, IEnumerable<KeyValuePair<TKey,TValue>>, IDictionary, ICollection, IEnumerable, ISerializable, IDeserializationCallback

Truly delightful. I wonder how many pearls like this now hide in the SDK...

I thought .NET was a Java copycat but without its flaws. Not anymore. That is the way frameworks go.

PS for the Ruby guys: beware!

Sunday, June 25, 2006

Gloomywood

The June issue of IEEE Spectrum features a terrifying article titled "Death by DMCA" which relates how far the American entertainment companies (aka "Hollywood") are willing to go to protect their copyrights. They basically intend to control each and every bit of information from production to consumption, and for this they will go to any length, like putting a lid on overly innovative technologies.

It is somehow funny to see that analog-to-digital converters are on their target list. Understandably, any bridge between the horrific world of audio and video tapes and the clean and controlled world of binary files is not something Hollywood appreciates.

Where is this going? Can their grand plan for control succeed anyway?

Consider the French pay-TV channel Canal+. Whatever effort they put into building a new unbreakable decoder working with a new undecipherable signal was systematically matched by crafty people who not only created but industrialized series of pirate decoders.

Another example is the pathetic DVD zone segregation: who is still constrained by this lame attempt to control the marketing of movies? Everybody wants to be able to play DVDs bought anywhere in the world, so everyone goes surfing the net and, voila, ten minutes later their player is de-zoned!

If people want to crack it, they will do it. And they will make the crack available to the common man.

So here are my $.02 for Hollywood: instead of trying to prevent people from stealing your goods, produce goods that people actually want to pay for.

Let me illustrate this with two possible marketing proposals:

Proposal A
  • Produce tasteless music made by disposable "artists", movie scenarios written by brain-dead pen-pushers and games that are endless clones of themselves,
  • Sell all this at prices that no-one finds fair,
  • Pretend you want to protect intellectual property while you clearly despise artistic creation.
Proposal B
  • Leverage respect people have for their favorite artist and encourage their passion by providing a wide access to all sort of music, movies and games,
  • Recognize that people are ready to pay for a nice box with cool artwork that wraps the product they buy,
  • Strongly assert the rights of the artists before the rights of the vendors.

All in all, these locks and bars Hollywood wants to put in place will end up adding new complications for non-techies (like when you buy a CD and can not play it because the copy protection makes your player cough).

Is there any hope? For music, Jamendo is surely exploring new grounds and opening new possibilities. Open source games are interesting but will they ever be able to compete with multi-million $ games? And as for the movies...

Friday, June 23, 2006

The Case for NxBRE v3

I am about to launch a complete refactoring of NxBRE, which will be done under version number 3. I have already prepared the Subversion repository for that matter: instead of branching, I have created separate sub-folders in the trunk, to reflect that v2 and v3 will not be branches but parallel projects.

At this point of your reading, I can hear you guys, NxBRE users in particular and software developers in general, saying: "Ohmigod, he is about to go the Netscape way and do one of the things you should never do"!

Please rest assured that I will not fall into this pitfall (but maybe in other ones!): this will not be a complete rewrite, not even an incomplete one. If you browse the feature requests for group "NxBRE3", you will see that this refactoring is mainly about:

  • Refactoring the API, ie reorganizing files and folders, making namespaces more .NET-flavored and substituting implementations with interfaces (this is mainly inspired by Joshua Bloch's reflections on API design; see the sketch after this list),
  • Switching to .NET 2.0 and SharpDevelop 2.0,
  • Improving peripheral features like configuration and logging.
No new feature will be added to v3.0 so the same test suite will be used to validate that the new version has remained functionally equivalent.
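To illustrate the interface substitution mentioned in the list above, here is a hedged sketch (every name in it is illustrative, not NxBRE's actual API): consumers obtain an interface from a factory, so the concrete engine class behind it can be reorganized freely between versions.

public interface IExampleEngine
{
    void Process();
}

internal class ExampleEngineImpl : IExampleEngine
{
    public void Process()
    {
        // rule processing would happen here
    }
}

public static class ExampleEngineFactory
{
    // Callers depend only on the interface, never on the implementation class.
    public static IExampleEngine NewEngine()
    {
        return new ExampleEngineImpl();
    }
}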

The new version will break backward compatibility, which means that users of NxBRE will have to modify their code. This said, class names will not change, so the impact should be limited to modifying using clauses.

This refactoring will make NxBRE more palatable to new users and well-oriented for upcoming new features (like support for RuleML 1.0). It should have been done earlier, in fact, even before the very first release, right after converting JxBRE to .NET. This means I was much less wise three years ago!

Of course, no deadline has been set... After all, this is open source! Just kidding... I want things to be well done, so version 3.0 will be cut when everybody is happy with what they see in the trunk!

Saturday, June 17, 2006

Not Grandma's Java

Though it is well known that Java has never been the language of choice for great hackers, I am convinced that great hackers were active in the Java space, at least in the early days, when people (including me) were trying to figure out where Dancing Duke could possibly take us.

Nowadays, Java has the reputation of having become the new Visual Basic, ie the language that Grandma uses for her quick and dirty programming. The Grandma common sense refers to in that case is not the wise old-timer but the programmer who has not found another way of paying his bills and who punches keys to kill time until pay day.

This reputation hurts people who try very hard to practice like good craftsmen and are by default lumped in with the crowd. It is nowadays hard to present yourself as a Java developer without adding extra words like skilled or seasoned or whatnot-that-sounds-a-little-shiny, unless you feel that sounding like a perfect schmuck is an acceptable conversation starter.

But rejoice my fellow Java programmers! After two intense days spent at SpringOne 2006, let me tell you that there is hope, light and future in our field.

There is a Java that is productive and robust, scalable and maintainable, professional and fun! A Java that no JSR has turned into a maze full of traps and pitfalls or into an altar for preaching the gospel of a particular vendor. There is a Java that can peacefully withstand Microsoft .NET and Ruby On Rails (the latter being even more of a true competitor).

This Java is brought to us by the efforts, dedication and rigor of the Spring Framework development team.

And this Java is not Grandma's.

Friday, June 09, 2006

Introducing the O/WS Impedance Mismatch

According to French cartoonist Marcel Gotlib, it takes at least four arms to correctly fold a road map, which makes it virtually impossible for any human being. So if you know someone who can fold a road map, he certainly is a visitor from another planet.

Nowadays, there is another way of spotting such hidden visitors: if you know someone who can make sense of these crazy web service standards, then she might well be from another planet.

In fact, you do not need four arms for that, but four heads:
  • one for dealing with your programming language,
  • one for dealing with the programming language of the service you want to call or the client you want to be called from,
  • one for dealing with the subtleties of SOAP style and encoding and other WSDL non-senses,
  • and one for the cursing, because believe me, there will be a lot of profanity involved.
How did we get there? Why on Earth do we have to deal with a new impedance mismatch issue? The origin of the classical O/R one is obvious: the database and object-oriented worlds followed two different tracks at different periods of time and then decided to meet (collide would be a more appropriate term).

But the origin of this new Object-WebService Impedance Mismatch is more obscure to me. The lessons from CORBA and DCOM were well known, so creating an interoperable standard for web services that is easy to use and extend should have been possible.

The basics were good: HTTP is simple yet powerful, and XML is strict yet offers plenty of room for extensibility. But we are now far away from those basics. Two examples:
  • SOAP does not mean "Simple Object Access Protocol" anymore: since June 2003 it simply means soap, voila! Some said it was because the "Object Access" part was misleading, but the truth is that the "Simple" part was misleading. There is nothing simple in there anymore. Moreover, soap is a fitting name because it can make you slip and crash violently on the floor. It can also burn your eyes and make you cry. You have been warned.

  • The standard APIs are just plain lame. If you go for static stubs, you get so tightly coupled to the service you invoke that it would be wiser to embed its code in your application; at least performance would benefit. If you go for dynamic proxies, you will swallow your hat when you call services that disguise their RPC aspirations in document-literal draperies. What else can you try? Proprietary APIs that outfox the standard ones? Probably, if that is an option.
I will not even mention the tragic WS-* family, where some members bear the same moniker, to make sure that the bravest, who were still trying hard to find their way out, stumble and fall to the ground in a terrible noise of crushed iron.

Is there any hope?

If you control both sides of the equation, do yourself a favor and take some REST: any pragmatic XML over HTTP approach will make your life easier and might even extend it by a few years.
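As an illustration, here is a minimal sketch of that pragmatic approach using only the plain .NET networking and XML classes (the URL and class name are placeholders of mine): a straight GET of an XML representation, with no WSDL and no SOAP envelope in sight.

using System.Net;
using System.Xml;

public class RestishClient
{
    public static XmlDocument FetchOrder()
    {
        using (WebClient client = new WebClient())
        {
            // Plain XML over HTTP: fetch the document and parse it, nothing more.
            string payload = client.DownloadString("http://example.com/orders/42");

            XmlDocument document = new XmlDocument();
            document.LoadXml(payload);
            return document;
        }
    }
}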

If you control only one side and have to deal with the O/WS mismatch, do not spread the misery across all of your consumers or producers: federate the knowledge of the WS arcana in a pervasive middleware component, have your guys talk only to it, and let it handle the chewing and spitting of standard web service messages to the outside (and cruel) world.
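A hedged sketch of what such a federating component could expose internally (the interface and its method are mine, purely illustrative): plain XML in, plain XML out, with the SOAP envelopes, encodings and the rest of the arcana hidden behind its implementation.

public interface IPartnerGateway
{
    // Internal code hands over a plain XML payload; the implementation alone
    // knows how to wrap it in whatever the outside world's web services expect.
    string SubmitOrder(string orderXml);
}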

Saturday, June 03, 2006

Si Senior!

At one point in my career, I started to consider myself "senior" in my profession. I thought I knew it all and nothing was left to discover, very much like scientists at the end of the 19th century did.

Then I discovered OO and Java. Ouch! Landing was tough, a brand new world was opening up and it was vast. But it was also fun and rewarding to discover so I started to climb the learning curve.

At one point on this learning curve, I started to think I knew enough and, again, I started to consider myself a "senior".

Then I met genuine seniors and gurus. Ouch! Landing was tough again! And its taste of déjà vu was, if not frustrating, at least vexing, because I hate making the same mistake twice, like any of us...

What lesson could I possibly learn from these mistakes?

Well, nowadays, I do dare introduce myself as "senior", but simply because I have definitively stopped considering myself as such.

I am just walking the path of learning and discovery, I know it has no end and, therefore, is much bigger than me. Isn't this seniority?

Wednesday, May 24, 2006

Flows in Action

Knowing that it was the flow of money that made the three-estate society of the Middle Ages collapse, will the flow of information now surging across sub-Saharan Africa make the bars of political oppression and the walls of cultural isolation collapse?

This is the question that struck me while reading an insightful article in IEEE Spectrum. Comment here if you think similarly. Or not!

Sunday, May 21, 2006

Dematerialize Me

Every computer crash or re-install is the occasion for asking the right questions: what was wrong in my previous set-up? What can I improve?

Switching the OS for one that requires much less care (defragmentation, anyone?) is certainly an appropriate action: I will come back to it in a later post.

Reducing the hard disk drive footprint is surely another one. This means less to re-install and less to lose between two backups. This means dematerializing applications and using their web counterparts instead. So far, here I am:

Of course, this poses the question of confidentiality, as my data is in other people's hands. I try not to store anything confidential there and I will probably resume using PGP. But the advantages, to my mind, far outweigh this inconvenience.

Not only are my web-related tools available anywhere there is internet connectivity, but session state is persisted: for example, all the RSS feeds that I have marked as read will remain so (which is not the case when you have multiple clients on multiple machines).

The next step is to dematerialize me: I am still thinking about how I could be replaced with my web counterpart... In the meantime, if you know other smart tools like the ones I have listed before, please comment on this post!

Saturday, May 13, 2006

Duties and Rights (Quote of the day)

"A people that values its privileges above its principles soon loses both."

General Dwight D. Eisenhower

Tuesday, May 09, 2006

The Mystique of Software Craftsmanship, Part 5

[Previous Part]

Fifth Discipline : Control Temporal Localization

It has been said enough that the psyche is the key factor of success in human endeavors. One aspect worth paying attention to is the way we control how our mental activities are distributed over time, or, said differently, how we focus our streams of reflection on the past, the present and the future.

In that matter, anything that does not follow a standard normal distribution (also known as the bell curve), where the peak of the curve is the present time, the left side the past and the right side the future, is damaging for our work and ourselves. This sounds pretty obvious and natural, but for intellectual workers the risk is high that the shape deviates in one direction or the other, and the deviation must be recognized in order to be controlled.

To understand why temporal localization is important, let us examine some potential side effects of deviating from the bell curve.


For each temporal focus, an excess or a deficit of attention carries its own risks and mitigations:

Past
  • Excess of focus: this tendency could be the immediate consequence of work habits too much oriented towards the past (see part 4). Regrets about a glorious past or a previous technical platform can make a developer unconsciously sabotage his work. It is important to recognize the constantly changing nature of our business (and the fact that a glorious past is a self-made myth) to avoid dwelling on the past.
  • Deficit of focus: not having enough focus on the past leaves the developer at risk of ignoring valuable experience. Systematic project postmortems can help turn past experience into useful memories instead of denied and buried ones. Easily built knowledge bases (for example using a Wiki) can also be of great help.

Present
  • Excess of focus: focusing too much on the present can expose developers to the feeling of being overwhelmed by the task they are working on. It is very difficult to reach the right level of detachment necessary to alleviate the emotional impact of the technical difficulties that a programmer meets almost every minute. Developing a humorous attitude can be a positive way of shielding oneself!
  • Deficit of focus: the consequences of not focusing enough on the present are well known: lack of attention to detail, the unsatisfactory feeling of not seizing the instant... If this escapism is rooted in boredom, talk to your manager: you will be surprised by how convergent your interests are!

Future
  • Excess of focus: it is very easy to focus too much on the future because software development entails huge prospective mental constructions. The risk is a high mental load that ends up in extreme fatigue and a difficulty to “go off line” (the fourth state in the cycle described in part 3). Writing down informal to-do lists, models or plans as they come to mind can help reduce the number of prospective concepts manipulated and explored simultaneously by the brain, thus reducing the propensity to look excessively forward.
  • Deficit of focus: not paying enough attention to the future generally translates into ill-designed systems, unable to absorb changes without breaking. Of course, over-engineering is a real risk that must not be ignored, but learning simple non-intrusive techniques (like programming to interfaces) can help to efficiently prepare for the future.

Because we are where our mind is, temporal localization is a factor that software developers should be aware of in order to optimize both their work capacity and mental load.


Monday, May 01, 2006

The Mystique of Software Craftsmanship, Part 4

[Previous Part]

Fourth Discipline : Question Traditions

No workplace is free of traditions: in fact, establishing traditions is inherent to our human nature, hence computer-related workplaces, whether they deal with software development or IT operations, are also bound to some form of traditionalism.

We should become very aware of these traditions and by no means be afraid to question them. Very often a tradition acts as a disabling force instead of simply being a neutral vector of knowledge. The same way they can turn a living faith into a dead religion, traditions can weigh on decision-making processes and steer them toward cumbersome or counterproductive acts.

There are many patterns under which traditions can take hold in your working environment: it is crucial to get used to exposing them. Here are three examples that you can use as starting points for your inquiry:

  • the “self-perpetuating pain”, where the cause of a problem or the limitation of a system is long gone, but the workarounds or restrictions that were put in place remain, unquestioned,

  • the infamous “we have always done it that way”, where the reason for a particular choice has been forgotten and, precisely because of that, the choice has become impossible to change,

  • the “myth of the mountain”, where a practice, a system or a person is considered impossible to change or improve simply because communication has given way to prejudice.

Of course, we all know that questioning practices that have reached the status of habits is a very difficult endeavor, if not a risky one. Questioning a tradition goes far beyond merely discussing a decision or a piece of knowledge: it sometimes touches the very fundamentals of an organization and will instinctively be considered destructive.

In the course of your adventure through the traditional aspects of your organization, you will quickly find out that the most virulent supporters of a particular habit are often the ones who suffer the most from it: for them, enforcing such a painful tradition is close to the psychological trait known as capture-bonding.

Knowing the powerful psychological forces at stake in such a process, a wise craftsman will become a cunning tactician: he will avoid a direct confrontation and favor a more subtle approach, like systematically undermining practices that are peripheral to the targeted tradition or evangelizing intermediaries who can approach its fervent keepers.