I have started to use Mozilla Webrunner on my Mac: used in conjunction with Quicksilver, it is a neat way to run web sites as standalone, sandboxed applications. This does not replace tabbed browsing, of course, but it is very useful for the applications you usually dock on your second monitor (or third one, if you are a real guru), like your continuous integration server dashboard or your web mail home page.
To have Quicksilver bootstrap Webrunner applications, simply store your profiles in a directory that you register as a custom catalog. And voilà: you can launch your web applications with a few keystrokes.
This is another way to reduce the latency between thinking about what the computer should do for you and having it actually do it. When you think of the zillions of cycles the computer wastes waiting for you, any tool, gadget or utility that helps minimize this loss is to be lauded!
Sunday, September 30, 2007
Saturday, September 29, 2007
Agile In The Burbs?
In a recent post, my friend Alex commented on the complex art of managing employees in remote locations, which came at a time when I was thinking about the high toll of co-located teams.
My reflection started a few weeks ago when I read a reader's letter in IEEE Spectrum's Forum. Commenting on the very thorough coverage Spectrum had just run on big cities and their challenges, the reader said:
Why does modern society think that it’s entitled to expend all that energy, in whatever form, merely to transport people to their jobs? No one mentions the toll that a 4-hour-per-day commute takes on relationships. (...) What has always seemed more sensible to me is to live where you work. My commute is 10 minutes each way, on foot. And in my entire career as an engineer, the longest commute I’ve had was a half-hour drive. (Read the full letter, "Megacommutes to megacities")
My first reaction was: lucky man! Then my second thought was: why can't we all have such a life, a life where the distance to work does not take a toll on our lives and our environment?
So why do we rush to big cities? Because this is where the demand for IT workers is high: banks, public sector entities and private companies tend to locate themselves downtown. Since they have a high need for software engineers, they act as a magnet for us geeks of all sorts.
But why don't we telecommute more? After all, this revolution was announced a long time ago, and we now have the tools that would allow us to make it happen. Massively distributed open source communities have proven that this model can work.
Agile principles, on the contrary, tell us that teams should be as co-located as possible, because of the millstone that distance puts on communication, whose efficiency is a major factor in a software project's success (and its lack a major cause of failure). Industry luminaries and extensive discussions have made this point very clear.
Since the housing madness only allows singles and DINKs to live downtown, those of us with a family are forced to live in the burbs, far away from work and far away from the perfect commute the aforementioned reader says he enjoyed all his life.
Trying to accommodate the often conflicting requirements of agility, personal life and business is not a trivial task, but it is one that certainly calls for a broader rethinking of the way our cities and workplaces are organized and located.
Saturday, September 22, 2007
Code Literature
Documentation is a touchy subject for open source projects.
I know it first hand from my own experience with NxBRE: writing and maintaining the 55+ page PDF guide is not only a significant effort, but one of a very different nature than the effort of developing the product itself.
The wiki-based knowledge base is a good alternative: less formal, built from users' questions and easy to maintain, it is definitely a viable approach for documenting small-scale projects like mine.
This is also something I have learned in the field, as I work a lot with open source solutions. Pristine, up-to-date documentation is still the exception, at least for projects without corporate backing. After hours of trial and error, fighting with online documentation that was sometimes outdated (many examples would simply not work) and sometimes too advanced (showing features available only in snapshot builds), I came to consider recommending a paid-for solution, just for the sake of having someone to blame if the documentation was bad.
But surrendering that way would have been too easy! Instead, I reverted to the more courageous tactics of the open source addict:
- explore the provided running examples,
- if that is not enough, browse the test cases,
- if that is still not enough, trace through the code in debug mode with the sources attached to the IDE.
Monday, September 17, 2007
Saved By The Tickler
A few days ago, Sun rolled out a new version of their Developer Forums. Since then, using these forums has turned into a nightmare:
- All my watches are regularly lost, preventing me from following up with people asking questions and effectively killing the main value these forums are supposed to offer.
- The new text editor mangles any input that is anything other than plain text: copying and pasting from any source ends up adding funky tags that appear only when you save (or preview) your post.
Oh, maybe not, there is still the fancy JAVA Nasdaq ticker...
Sunday, September 09, 2007
Performance Driven Refactoring
I have recently talked about how fun it is to refactor code in order to increase its testability. Similarly, I have discovered another kind of refactoring driver: performance.
Starting to load-test and profile an application you have developed is always an exhilarating time: I compare it to a game of Whack-A-Mole where, instead of funny animal heads, you hammer your classes down the execution tree until most of the time is spent in code that is not yours.
Interestingly, I have seen the design of my application evolve while I optimized its critical parts. It is truly refactoring, as no feature gets added, but the code gets better performance-wise.
Consider the following example that arose during this exercise. Here is the original application design:
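In code form, the original design amounts to something like this minimal sketch (the O1, O2 and context names come from the discussion below; everything else is assumed for illustration):

import java.util.HashMap;
import java.util.Map;

// A context eagerly populated with precomputed values.
class Context {
    private final Map<String, Object> values;

    Context(Map<String, Object> values) {
        this.values = values;
    }

    Object get(String key) {
        return values.get(key);
    }
}

// The target object: it may or may not read the expensive value.
class O2 {
    void execute(Context context) {
        Object value = context.get("expensiveValue");
        // ... use the value, or not, depending on the execution path
    }
}

// The caller: it pays the computation cost on every call, used or not.
class O1 {
    void run(O2 target) {
        Map<String, Object> values = new HashMap<String, Object>();
        values.put("expensiveValue", computeExpensiveValue());
        target.execute(new Context(values));
    }

    Object computeExpensiveValue() {
        return new Object(); // stands in for the genuinely expensive work
    }
}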
Profiling showed that computing the values of the context was really expensive and that these values were not always used by the target object.
I refactored to the following design:
In this design, the context values are only computed when the target object requests them, using a simple callback mechanism.
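Again as an assumed sketch, the refactored design could look like this, with the context pulling values from its provider through a callback only on demand:

import java.util.HashMap;
import java.util.Map;

// Callback through which the context fetches values on demand.
interface ContextValueProvider {
    Object provide(String key);
}

// The context now computes (and caches) a value only when it is requested.
class Context {
    private final Map<String, Object> cache = new HashMap<String, Object>();
    private final ContextValueProvider provider;

    Context(ContextValueProvider provider) {
        this.provider = provider;
    }

    Object get(String key) {
        if (!cache.containsKey(key)) {
            cache.put(key, provider.provide(key));
        }
        return cache.get(key);
    }
}

// O2 is unchanged: it still only sees the context, not O1.
class O2 {
    void execute(Context context) {
        Object value = context.get("expensiveValue");
        // ... use the value, or not
    }
}

// O1 implements the callback; the expensive work happens only if O2 asks.
class O1 implements ContextValueProvider {
    void run(O2 target) {
        target.execute(new Context(this)); // nothing computed yet
    }

    public Object provide(String key) {
        return computeExpensiveValue();
    }

    Object computeExpensiveValue() {
        return new Object(); // the genuinely expensive work
    }
}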
One could wonder why I did not simply remove the original context object and make O2 call O1. That would work but would have several disadvantages:
- it would unnecessarily increase the coupling between these two classes,
- it would visibly change the design, while I wanted the refactoring to respect the existing contracts between objects (the context, in that case).
Tuesday, September 04, 2007
High Quantum Leap?
I am new to JPA and discovered today that HQL is not JPQL. This sounds pretty obvious, but the circumstances in which I was reminded of this cruel reality were puzzling.
First let's start with my mistake! I wrote something like this:
FROM Magazine WHERE title = 'JDJ'
instead of writing that:
SELECT x FROM Magazine x WHERE x.title = 'JDJ'
i.e. HQL instead of JPQL.
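For reference, here is how such a query would typically run through the standard JPA API (a minimal sketch; the Magazine entity and the "magazines" persistence unit name are assumptions):

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class MagazineQuery {
    public static void main(String[] args) {
        // "magazines" is an assumed persistence unit name
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("magazines");
        EntityManager em = emf.createEntityManager();
        try {
            // JPQL requires an explicit SELECT clause and alias,
            // while Hibernate's HQL lets you omit both.
            List<?> results = em.createQuery(
                "SELECT x FROM Magazine x WHERE x.title = :title")
                .setParameter("title", "JDJ")
                .getResultList();
            System.out.println(results.size() + " magazine(s) found");
        } finally {
            em.close();
            emf.close();
        }
    }
}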
You might think that I must be a lousy tester to release code that does not work, but the trick is that it was working fine and passing the integration tests.
The issue was that the JPA provider I am using, Hibernate, was not running in strict mode and hence was joyfully accepting the HQL queries. Everything was fine in the integration tests run on Jetty, but when the application was deployed on JBoss, where a strictly configured Hibernate was used as the JPA provider, all hell broke loose.
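One way to catch this kind of mismatch before it ever reaches JBoss is to run the integration tests against a strictly configured factory too. A sketch, assuming Hibernate 3.x, where strict JPQL parsing is exposed through the hibernate.query.jpaql_strict_compliance property (worth verifying for your version):

import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class StrictJpaBootstrap {
    public static void main(String[] args) {
        Map<String, String> properties = new HashMap<String, String>();
        // Assumed Hibernate 3.x flag: makes the query parser reject
        // HQL-only constructs; check the property name for your version.
        properties.put("hibernate.query.jpaql_strict_compliance", "true");
        // "magazines" is an assumed persistence unit name
        EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("magazines", properties);
        // ... point the integration tests at this strictly configured factory
        emf.close();
    }
}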
So conclusion one is that yes, I am a lousy tester: always target the same platform for your integration tests as for your deployments. There are simply too many provided runtime dependencies in an application server to take any chances with them.
And conclusion two is more an open question about the value of standardized Java APIs built on top of existing proprietary industry standards. Some successes exist, like JCR, which is quite an achievement in the complex matter of unifying access to CMS implementations. But I must admit that one could find JPA an off-putting keyhole on Hibernate, strewn with tricky drawbacks.
I want to believe that, the same way JAXP started as a gloomy reduction of the capabilities of Crimson and Xalan behind wacky factories and ended up as a convenient way to unify XML coding in Java, JPA will evolve towards a bright future.
Monday, September 03, 2007
A Triplet Of Releases
What better day than Labor Day for releases? Sorry, I could not resist the cheap childbirth analogy! So, I have just released the latest versions of NxBRE, the Inference Engine Console and the DSL plug-in.
I have decided not to finalize the implementation of support for RuleML 0.91 Naf Datalog, mainly because it is not clear how integrity queries are now supposed to be implemented. The release was worthwhile anyway, thanks to the patches, bug reports and feature requests submitted by the community of users.
Enough labor for today!
Saturday, September 01, 2007
Virtual Shelf
Thanks to Vijay, I have discovered Shelfari, a really neat virtual shelf where you can share your thoughts on your current and past readings and consult the points of view of others.
You can even integrate a widget on your blog, as you can see on the lower right of this page.
Highly recommended!