Thursday, December 20, 2007
Just Read: Best Kept Secrets of Peer Code Review
Though a company-sponsored book, this collection of essays on code review is enlightening. Rooted in the real world, the book makes a clear case for a practice that tends to be overlooked in most software shops. Recommended reading for anyone who wants to learn about the state of the art of code review and how it can benefit an organization and its people.
Labels:
Readings
Sunday, December 16, 2007
Time To Run
Everybody loves re-use: no-one wants to re-invent the wheel (at least no-one worth calling a colleague). Re-use in action can be tricky, though, as the library or component you intend to leverage can come with strings attached in the form of runtime dependencies. These dependencies take several forms, each with a different twist but all with the same promise: to make your job a little more... intense!
- Framework dependencies are often the (incestuous?) children of configuration and reflection: when the latter is driven by the former, the uncertainty about which concrete classes your application will actually use grows, and can reach the point where it endangers your system. As an example, consider the way JAXP's concrete classes are determined at runtime: though you program to a unified API, which allows you to re-use existing parsers and processors, you must code your application defensively to ensure the runtime dependencies will provide the features you need (see the sketch after this list).
- Data dependencies can be pulled into your application when you re-use a component that carries its own DAO layer. Provided the data layer is flexible enough to accommodate your choice of database engine, this can be managed easily. Things can get tricky if you have several applications re-using the same component on the same data source: you might well figure out the component was not designed for this particular deployment model. Things can get even trickier if several applications use different versions of this component on a shared database! A positive example here is jBPM, which you can embed in your application and which requires state persistence: thanks to Hibernate, this runtime dependency will speak the SQL dialect of your existing database, whatever its engine, hence will not wreak havoc on your application.
- Service dependencies differ from data dependencies in several characteristics: they promote sharing logic rather than data, they can encourage loose coupling (but not always: think about EJB clients) and they are generally more concerned with, and capable of, backwards compatibility. This kind of reusable component, often developed in-house, must be carefully crafted to evolve well in space (different deployment environments) and time (different versions). When the grand dream of service auto-discovery and auto-wiring fully materializes, these dependencies will become more predictable and manageable. SCA will also play a great role in improving the usability of service-dependent components.
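To make the defensive coding point concrete, here is a minimal sketch of what I mean; the feature URI and the fail-fast error handling are illustrative assumptions, not a prescription:

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class DefensiveJaxp {
    public static DocumentBuilderFactory newValidatingFactory() {
        // Which implementation this returns is only known at runtime
        // (system property, services file or JRE default).
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        try {
            // Fail fast if the resolved parser lacks a feature we rely on.
            factory.setFeature("http://xml.org/sax/features/validation", true);
        } catch (ParserConfigurationException pce) {
            throw new IllegalStateException("The JAXP implementation resolved at runtime ("
                + factory.getClass().getName() + ") lacks a required feature", pce);
        }
        return factory;
    }
}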
Labels:
Craftsmanship
Keep It SimpleDB
Now that Amazon SimpleDB is out, the open source competition has started and we can expect some pretty good stuff to come out soon, allowing us to leverage this cool new tool from the AWS family.
I would personally be interested in a JCR adapter for SimpleDB: this would enable a semantically meaningful data storage layer to be plugged on top of the Amazon service. Think of a massively distributed content management system...
Another cool idea, one that is really dear to me, would be a RuleML fact base adapter for NxBRE. Since SimpleDB is built on Erlang, it naturally manipulates tuples, which are the essence of RuleML's atoms and facts. A fact base adapter would allow the rule engine to tap into a centralized knowledge base and make deductions out of it. Think of a massively distributed expert system...
Thursday, December 06, 2007
Precious Revisions
In his most recent post, Martin Fowler talks about the necessity of teaching clients about the value that resides in the tests his company delivers with any software solution they ship (and how, without this recognition, tests will be altered and mangled by the client and thus lose most of their intrinsic value).
Indeed, it is interesting to recognize that clients might consider that the only valuable outcomes of a software project are the software itself and its documentation, and consequently ignore other valuable resources.
In that matter, I think there is another valuable outcome that exists but is generally ignored by clients: the source control revision history. Understanding how a piece of software has been built not only reveals a lot about the professionalism of the builder but also helps in understanding how the application took form and reached its current incarnation, which can further help it evolve gracefully.
As a software consulting client, the most convenient option is probably to open a dedicated source control repository for the consultants and grant them access both internally and externally (they will very often work off site). After the project has been completed, the source code and its history can then be moved from this repository to the client's private one. Alternatively, the consulting firm can use their own repository, provided an export/import path exists between the consulting and the client source control tools.
To sum it up: cherish your code and its revisions! After all, this should come as no surprise since everybody knows there is a lot to learn from history: this is just a different context!
Labels:
Craftsmanship
Monday, December 03, 2007
Jung Juggles Data Jungles
I am convinced that one of the tasks of a software architect consists in improving or building tools for software developers, so they can be more efficient in their daily dose of fun called "work". This of course includes visualization tools, a subject dear to Gregor Hohpe, who gives great insights on how and why visualization can help make sense of data whose volume or lack of order makes it hard to grasp.
At the beginning of this year, I started to experiment with a very good library named JUNG, an acronym that means "Java Universal Network/Graph framework" and explains exactly what it is. So far I have built a pair of tools, listed below with a small usage sketch:
- A grapher that analyzes the JCR repository of a web CMS and represents the components and templates as graphs, by following the hierarchy of inheritance they engage in.
- An analyzer that parses a lengthy Java EE web application deployment descriptor and represents as a tree the filters involved in processing HTTP requests.
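For readers who have not used JUNG before, here is a minimal sketch of the kind of code involved (assuming the JUNG 2.x API; the vertex and edge names are made up for illustration):

import java.awt.Dimension;
import javax.swing.JFrame;
import edu.uci.ics.jung.algorithms.layout.CircleLayout;
import edu.uci.ics.jung.graph.DirectedSparseGraph;
import edu.uci.ics.jung.visualization.VisualizationViewer;

public class TemplateGraphDemo {
    public static void main(String[] args) {
        // Build a tiny directed graph: one template inheriting from another.
        DirectedSparseGraph<String, String> graph = new DirectedSparseGraph<String, String>();
        graph.addVertex("base-template");
        graph.addVertex("article-template");
        graph.addEdge("inherits", "article-template", "base-template");

        // Render it in a Swing frame with a simple circular layout.
        VisualizationViewer<String, String> viewer =
            new VisualizationViewer<String, String>(new CircleLayout<String, String>(graph));
        viewer.setPreferredSize(new Dimension(400, 400));

        JFrame frame = new JFrame("Template hierarchy");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().add(viewer);
        frame.pack();
        frame.setVisible(true);
    }
}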
I invite you to explore the power of visualization on your own, with your own data and challenges.
Labels:
Craftsmanship,
Tools
Saturday, December 01, 2007
Mule Meets Rabbit
I have just released version 1.1.0-M1 of the Mule JCR Transport.
This new version represents a major step forward, as the transport now supports both reading content from and writing content to a JCR repository. It can also handle standard and custom node types thanks to an extensible type handling framework.
Just go and read the user guide to learn more about the features of this milestone release. You can also download the packaged release or simply add the Maven artifact to your project.
Note that support for transactions and streaming is scheduled for the M2 release.
I hope you will enjoy the new possibilities that occur when a Mule meets a Rabbit.
Monday, November 26, 2007
Celesstin Project Alumni LinkedIn Group
Celesstin, developed in the early nineties, is a system that converts mechanical engineering drawings into a format suitable for CAD.
I encourage all alumni from the Celesstin Project (I, II, III and IV) to join this group.
Labels:
World
Sunday, November 25, 2007
WebSpring
In the recent announcement for Spring Framework 2.5, I was struck by the following line:
Officially certified WebLogic support
I think this is a very positive development in the Spring and BEA joint adventure. From a technical standpoint, the Pitchfork project was already very appealing, thanks to the seamless integration of Spring with WebLogic, providing all the goodness of the former at the core of the latter (instead of the usual "on top of JEE" deployment model).
Moreover, for developers, being able to benefit from Spring was a good way to restore the somewhat tainted "coolness" of such a traditional platform as WebLogic. It is also interesting to compare BEA's move to that of JBoss, the hippest application server of the time. Indeed, JBoss's passive-aggressive relationship with Spring has been, and still is, instrumental in making developers reconsider their commitment to this application server.
This newly announced certified support will ring a bell at the management level, especially in the risk-averse prime market of WebLogic (financial and insurance institutions), where using an open source framework, as excellent as it is, is very often frowned upon (on the other hand, pragmatism is also a characteristic of such a clientele, especially in the UK, explaining the successful commitment of VOCA to Spring).
How this official support will materialize will be critical to the eventual success of this partnership, especially knowing the usually poor support vendors offer their clients compared to what they can get from open source projects.
Friday, November 23, 2007
Eclipse Default Key Mapping Request
Every time I install Eclipse, I have to bind "Refactor > Extract Constant" to Control+Alt+C or the equivalent with the funny Mac keys.
This refactoring is really common and I think it ought to be in the default mappings of Eclipse.
Am I the only one thinking so?
Labels:
Tools
Recently Read
I have read this book cover to cover in just a few commute trips: it is indeed a fascinating read to discover the ingenuity that hackers employ to abuse on-line games, sometimes for profit, often for fun, and the privacy-invading counter-measures game companies put in place. An eye opener for anyone playing on-line games who is not willing to share all their private information with vendors.
A very good introduction to Erlang, which really invites you to start building resilient and parallel applications. I was amazed to see the similarity between Erlang's pattern matching philosophy and RuleML's.
Labels:
Readings
Tuesday, November 20, 2007
Moral Code 2.0?
Should software developers have a moral code about their coding? ThoughtWorks says yes, according to eWeek's "Toward a Discussion of Morality and Code" article.
Anything new here? Not for any member of the IEEE, as its Code of Ethics clearly puts forward values that constitute an inspiring moral code.
So is it worth mentioning ThoughtWorks' position? Certainly, because our industry needs thought leaders that establish credible models to follow. Why is this? Maybe because software development is one of the rare professional fields where someone can read a "Teach Yourself" book and proclaim themselves a specialist the week after.
Not surprisingly, this practice of over-inflating skills is not rare and is often even encouraged by software consulting firms in order to seduce clients and secure contracts. Hence, an interesting question to ask Mr. Singham in return is: should software consulting firms have a moral code about their developers?
His answer might also teach a lesson for others.
Labels:
Craftsmanship
Monday, November 19, 2007
Root of the Rot
I recently had to satisfy a pretty simple feature request for one of my projects: to be able to reload part of its configuration at run time. Not a big deal, right? Well, not exactly. In fact, I have been amazed by the impact such a simple change had on the system, if not as a whole, at least in all its critical sections.
It is well known that applications tend to rot over time (i.e. after changes have been made to them) but, up to this recent feature request, I was unsure of the actual cause of this rot.
Application rot comes from the chaotic relief of the tension created by changes that alter fundamentals and invariants. This chaotic relief increases the software's entropy, as quality and maintainability principles get violated.
As I was implementing the aforementioned new feature, the tension on the application was causing its design, its thread safety and its clarity to degrade at a distressing speed. I was going fast, but not well. Fortunately, my alarm bell was ringing loud and clear and, after reverting to the latest head revision, I started again with a holistic plan that took care of not increasing the entropy of the application.
During this exercise, I noted that nowadays:
- Tools are efficient in helping us keep entropy low (FindBugs and Checkstyle were shaking their heads at some bad stuff I was doing),
- Libraries are now rich enough to help mitigate changes that can compromise thread safety (think Java Concurrency or Intel Threading Building Blocks; a sketch follows this list),
- Industry luminaries have preached the need for elevated professional standards enough to make us conscious of software rot when it happens (the alarm bell rings on).
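This is not the actual code of my project, but a minimal sketch of the kind of java.util.concurrent building block that keeps a runtime-reloadable configuration thread safe: an immutable snapshot swapped atomically, so readers never see a half-built configuration (the Snapshot contents are made up):

import java.util.concurrent.atomic.AtomicReference;

public class ReloadableConfiguration {
    // An immutable snapshot of the reloadable part of the configuration.
    public static final class Snapshot {
        private final int poolSize;
        public Snapshot(int poolSize) { this.poolSize = poolSize; }
        public int getPoolSize() { return poolSize; }
    }

    private final AtomicReference<Snapshot> current;

    public ReloadableConfiguration(Snapshot initial) {
        current = new AtomicReference<Snapshot>(initial);
    }

    // Readers always get a complete snapshot, old or new.
    public Snapshot snapshot() {
        return current.get();
    }

    // Called by whatever triggers the reload (file watcher, JMX, admin console).
    public void reload(Snapshot fresh) {
        current.set(fresh);
    }
}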
Labels:
Craftsmanship
Tuesday, November 13, 2007
Will Find Your Bugs... Tomorrow
I have just upgraded to the latest version of the Findbugs Eclipse Plugin (1.3.0.20071108) and landed in a terrible world of sluggishness.
The new version of the plug-in is so slow on my machine (a 10-month-old MacBook Pro with 2GB of RAM, running Eclipse Europa) that I had to revert to the previous version of the plug-in (1.2.1.20070531). I did not consider disabling the auto-run feature of FindBugs because I do not want to forget to run it: this is one of the interesting aspects of this plug-in (without this option, I would simply uninstall the plug-in and rely on the Maven report that contains the same information).
Maybe the issue is visible because my project has a little more than 80 dependencies (the joys of open source). But the previous version was fast enough, so something probably went bonkers in the latest release of this very useful plug-in.
Anyone else out there facing the same issue?
Labels:
Tools
Sunday, November 04, 2007
Back to Humans
This month's issue of Computer runs an article titled "Generation 3D: Living in Virtual Worlds", which ends up predicting that virtual 3D worlds could become pervasive in our lives by 2047. I must admit that, as cool as living a virtual life in an MMORPG sounds to a geek like me, I am frightened by the implications for our societies.
If our avatars become the main mental projection of our psyches and our disincarnate selves become our main subject of concern, what would happen to such fragile things as the environment, democracy or compassion?
Will it matter to the "generation 3D" if the Earth must be over-exploited to produce enough energy for powering the zillions of servers hosting their fantasy worlds?
Will it matter to them if their countries turn into police states where their only liberties will be virtual, abandoning the ideals that founding fathers and thinkers of the past had for mankind?
And finally will it matter at all if others will be left out dying of cold or hunger at the fringe of the digital society?
Was Queen prophetic?
Labels:
World
Tuesday, October 30, 2007
Geek Pride
Whatever the subject of his post, Uncle Bob always ends up hammering home the ultimate goal that should drive us, software developers:
"It is not good enough that a program work. A program must also be written well. As a programmer you should take pride in your work and never leave a mess under the hood. Remember, a product that works, but that has a bad internal structure is a bad product."
Thank you Uncle Bob. May you be read at all levels of management.
Labels:
Craftsmanship
Software Patents Or Not?
This month's issue of IEEE Canadian Review runs a very interesting article titled "Patenting Software Innovations: A brief overview of the situation in some jurisdictions of interest" (PDF available for download at the top left corner of this page).
In this article, Alexandre Abecassis gives a short but informative overview of the reality of software patents in Canada, the USA and Europe. A must read if you want clear ideas on the subject.
Labels:
World
Sunday, October 21, 2007
Another One Rides The Bus
At a time when big brains are starting to wonder what an Enterprise Service Bus (ESB) is all about and have doubts about the health of Service Oriented Architecture (SOA), today's post from Uncle Bob is a refreshingly pragmatic counterpoint, as we can expect from him.
Of course I can only concur, but I must say that scorning enterprise service buses, as he does, is not necessary (maybe it is, just for the purpose of counterbalancing vendors who try hard to push expensive ESBs on clients...).
For me, an ESB is a distributed intermediation middleware whose main goals are:
- Facilitating applications interoperability, and
- Reducing applications coupling, and
- Avoiding point to point communication, but also
- Favoring asynchronous messaging and eventing over synchronous remote invocation.
Riding the bus is also an occasion to cure recurring integration ailments:
- Applications tend to know too much about each other, with integration happening at the data level, if not the database level, and sometimes beyond the enterprise boundaries.
- Applications tend to wait too much for each other, engaging in long chains of synchronous requests while asynchronous messaging could be used to free up threads, hence resources (see the sketch after this list).
- Applications tend to talk too much when they have nothing to say: polling mobilizes resources while efficient, yet simple, notification mechanisms have been around for a while.
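As an illustration of the second point, here is a minimal JMS 1.1 sketch (the queue name and message are made up) of the fire-and-forget style that releases the caller's thread instead of blocking it on a remote invocation:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class AsyncNotifier {
    // The factory typically comes from JNDI or the ESB configuration.
    public void notifyOrderAccepted(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("orders"));
            // The calling thread is free as soon as the broker accepts the message.
            producer.send(session.createTextMessage("order #42 accepted"));
        } finally {
            connection.close();
        }
    }
}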
So an ESB is not a golden hammer but an occasion, a driver, an extra reason for making things better. Presented like this, it is not surprising to find out that people who value what they do are willing to ride the bus.
Labels:
Craftsmanship,
Tools
Monday, October 15, 2007
Today is B.A.D.
Indeed, this is Blog Action Day and bloggers all around the world are talking about that crucial subject, the environment.
What could I say that has not been said before? I do not know, so here is a picture of a salmon I took yesterday afternoon in the creek that flows next to my house.
I wish the next generations many wild salmon.
Labels:
World
Sunday, October 14, 2007
Yeah, Please Fix It!
I am tired of my MacBook losing its WiFi connection while my XP and Kubuntu boxes have no trouble with it.
Labels:
Platform
Saturday, October 13, 2007
The inspiring life of Eric Hahn
In the October 2007 issue of IEEE Spectrum, an article (The Codemaker) depicts the life of Eric Hahn, who "has been an executive, an entrepreneur, and an investor. But he's happiest of all to call himself a programmer".
The life of Mr. Hahn can only resonate intimately with the lives of the many whiz kids who started computing when this activity was only starting to become known to the public. Indeed, I started a few years after him and on a smaller scale of machines (a Sinclair ZX-81 instead of a Digital PDP-8/m), writing tiny games instead of hard-core emulators. Since I was living in the countryside of North-East France, the analogy stops there, as Mr. Hahn had access to the far more stimulating and responsive environments of New York and Silicon Valley.
One very touching aspect of his life is the tension between making a career and remaining a programmer. Throughout the years he kept his passion for writing code and found enough will and talent to create opportunities for himself to keep developing. This tension is symptomatic of our societies, which respect those who make others do more than those who do, pushing people away from what they thrive on doing.
“I wonder,” Mr. Hahn says, “how many programmers are trapped in the bodies of Silicon Valley executives. We tend to leave programming jobs because they just don't pay enough to support kids and mortgages here in Silicon Valley. But increasingly, when people have some material independence, they revert.”
The only thing I can teach Mr. Hahn is that this is not happening only in the Valley!
____
As a side note: if you are not already a reader of IEEE Spectrum and have any interest in technology, I can only strongly encourage you to subscribe, as this is the best magazine I happen to read nowadays and the only one I read cover to cover.
Labels:
Craftsmanship
Thursday, October 11, 2007
Show off your cool NxBRE project!
Do you feel like running a few minutes' remote demonstration of what you have accomplished with NxBRE and RuleML in your project?
Then you are up for the RuleML-2007 challenge!
Please contact me ASAP for more details.
Labels:
My OSS
Sunday, October 07, 2007
Blinded By Trust
Sun has improved the new version of their Java forums, so my previous rant about how disastrous it was must now be taken with a grain of salt. Browsing these forums to help people with their development issues is attractive again.
Answering questions on these forums is a very instructive process because, the same way we learn from our own mistakes, there is a lot to learn from the fumbles of others.
Another interesting aspect is trying to figure out what went wrong in the code submitted by a developer: it is a very hard exercise because you have to fight the natural tendency to trust the other party to have stated their problem correctly.
Moreover, this exercise reveals incredible blind spots in the way we perceive other people's code when we assume they know what they are doing, which is the normal position to take with your colleagues, for example.
Today's exchange on the forum is symptomatic of this. Focusing on the programmer's stated serialization issue, I totally disregarded the JDBC code he wrote, assuming it was correct. But it was really badly flawed!
The lesson from this is that, when helping a developer with an issue, fight the natural desire to trust what he is reporting while, of course, maintaining a respectful attitude. This will help alleviate biases and blind spots when reviewing the defective code.
Labels:
Craftsmanship
Wednesday, October 03, 2007
Paint It White
The Register recently reported that, according to boffins, these are dark times for application development.
Just when I thought everything was getting better. Way better.
- Writing code has never been so fun: we have great IDEs, loaded with refactoring features, enriched by a wealth of plug-ins that turn them into tailored productivity platforms.
- Our tool boxes are now loaded with pragmatism-driven frameworks, multi-threaded building blocks and a panoply of libraries for everything and whatnot.
- Testing has never been so easy: we have a great variety of tools for testing applications at almost all levels and in fully automated ways.
- Testing has never been so rewarding: funny colored lights give us instant reward on our efforts while test coverage tools provide us with an exciting challenge.
- Source control management is now accessible to mere mortals: no need to be a command line guru or a sysadmin to store and manage code in repositories anymore.
- Collaborating on-line is now a reality thanks to tools designed for sharing ideas, tracking issues and progress, and authoring content over the Internet.
- Making reproducible and automated builds is a piece of cake: dependency management and library repositories combined with continuous integration platforms produce a sense of velocity and fluidity that makes development thrive.
- The tyranny of modelling and the myth of big design up front have been debunked and relegated to the museum of toxic ideas.
- Industry luminaries have risen and their voices have encouraged the inception of methodologies that promote communication, honesty, courage and elevated professional standards.
- Hype and buzz words are consistently derided and exposed to their true natures by the same thought leaders.
Labels:
Craftsmanship,
Tools
Sunday, September 30, 2007
Quick Web Silver Runner
I have started to use Mozilla Webrunner on my Mac: used in conjunction with Quicksilver, this is a neat way to start web sites as standalone sand-boxed applications. This does not replace tab browsing, of course, but is very useful for the applications you usually dock on your second monitor (or third one, if you are a real guru), like your continuous integration server dashboard or your web mail home page.
To have Quicksilver bootstrap Webrunner applications, simply store your profiles in a directory that you register as a custom catalog. And voila, you can launch your web applications with a few key strokes.
This is another way to reduce the latency between thinking about what the computer should do for you and having it actually do it. When you think of the zillions of cycles the computer wastes waiting for you, any tool, gadget or utility that helps minimize this loss is to be lauded!
Labels:
Tools
Saturday, September 29, 2007
Agile In The Burbs?
In a recent post, my friend Alex commented on the complex art of managing employees in remote locations, which came at a time when I was thinking about the high toll of co-located teams.
My reflection started a few weeks ago when I was reading a reader's letter in IEEE Spectrum's Forum. Commenting on a very complete coverage Spectrum had just run on big cities and their challenges, the reader said:
Why does modern society think that it’s entitled to expend all that energy, in whatever form, merely to transport people to their jobs? No one mentions the toll that a 4-hour-per-day commute takes on relationships. (...) What has always seemed more sensible to me is to live where you work. My commute is 10 minutes each way, on foot. And in my entire career as an engineer, the longest commute I’ve had was a half-hour drive.(Read the full letter "Megacommutes to megacities")
My first reaction was: lucky man! My second thought was: why can't we all have such a life, a life where the distance to work does not put a toll on our lives and our environment?
So why do we rush to big cities? Because this is where the demand for IT workers is high: banks, public sector entities and private companies tend to locate themselves downtown. Since they have a high need for software engineers, they act as a magnet for us geeks of all sorts.
But why don't we telecommute more? After all, this is a revolution that was announced a long time ago, and we now have the tools to make it happen. Massively distributed open source communities have proven that this model can work.
On the contrary, agile principles tell us that teams should be as co-located as possible, because of the millstone that distance puts on communication, whose efficiency is a major cause of success for software projects (and the lack thereof a major cause of failure). Industry luminaries and extensive discussions have made this point very clear.
Since housing madness only allows singles and dinkies to live downtown, those of us with a family are forced to live in the burbs, far away from work and far away from the perfect commute the aforementioned reader says he has experienced all his life.
Trying to accommodate the often conflicting requirements of agility, personal life and business is not a trivial task, but one that certainly calls for a broader rethinking of the way our cities and workplaces are organized and located.
Labels:
Craftsmanship
Saturday, September 22, 2007
Code Literature
Open source project documentation is a touchy subject.
I know it first hand from my own experience with NxBRE: writing and maintaining the 55+ page PDF guide is not only a significant effort, but one of a very different nature than the effort of developing the product itself.
The wiki-based knowledge base is a good alternative: less formal, built from users' questions and easy to maintain, it is definitely a viable approach for documenting small scale projects like mine.
This is also something I have learned in the field, as I work a lot with open source solutions. Pristine and up-to-date documentation is still the exception, at least for non-company-backed projects. After hours of trial and error, fighting with on-line documentation that was sometimes outdated (many examples would simply not work) and sometimes too advanced (showing features available in snapshot builds only), I came to consider recommending a paid-for solution, just for the sake of having someone to blame if the documentation was bad.
But it would have been too easy to surrender that way! Instead, I resorted to the more courageous tactics of the open source addict:
- explore the provided running examples,
- if not enough, browse the test cases,
- if still not enough, trace debug with the source code attached to the IDE.
Labels:
Craftsmanship,
My OSS
Monday, September 17, 2007
Saved By The Tickler
A few days ago, Sun rolled out a new version of their Developer Forums. Since then, using these forums has turned into a nightmare:
- All my watches are regularly lost, preventing me from following up with people asking questions, effectively killing the main value these forums are supposed to offer.
- The new text editor mangles any input that is anything other than plain text: copying/pasting from any source ends up with the addition of funky tags that appear only when you save (or preview) your post.
Oh, maybe not, there is still the fancy JAVA Nasdaq ticker...
Labels:
World
Sunday, September 09, 2007
Performance Driven Refactoring
I have recently talked about how fun it is to refactor code in order to increase its testability. Similarly, I have discovered another kind of refactoring driver: performance.
Starting to load test and performance profile an application you have developed is always an exhilarating time: I compare it to the Whack-A-Mole game where, instead of funny animal heads, you hammer your classes down the execution tree until most of the time gets spent in code that is not yours.
Interestingly, I have seen the design of my application evolve while optimizing its critical parts. It is truly refactoring, as no feature gets added, but the code gets better performance-wise.
Consider the following example that arose. Here is the original application design:
Profiling showed that computing the values of the context was really expensive and that these values were not always used by the target object.
I refactored to the following design:
In this design, the context values are only computed when the target object requests them, using a simple callback mechanism.
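To make the callback mechanism concrete, here is a minimal sketch (all names are made up, and the real design obviously has more moving parts):

import java.util.HashMap;
import java.util.Map;

public class LazyContext {
    // The callback a value provider (like O1) implements.
    public interface ValueProvider {
        Object computeValue();
    }

    private final Map<String, ValueProvider> providers = new HashMap<String, ValueProvider>();
    private final Map<String, Object> cache = new HashMap<String, Object>();

    public void register(String key, ValueProvider provider) {
        providers.put(key, provider);
    }

    // The expensive computation runs only if and when the target object (like O2) asks.
    public Object get(String key) {
        if (!cache.containsKey(key)) {
            cache.put(key, providers.get(key).computeValue());
        }
        return cache.get(key);
    }
}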
One could wonder why I did not simply remove the original context object and make O2 call O1. This would work but would have several disadvantages:
- unnecessarily increasing coupling between these two classes,
- visible design change, while I wanted the refactoring to respect the existing contracts between objects (the context in that case).
Labels:
Craftsmanship
Tuesday, September 04, 2007
High Quantum Leap?
I am new to JPA and discovered today that HQL is not JPQL. This sounds pretty obvious, but the circumstances in which I was reminded of this cruel reality were puzzling.
First let's start with my mistake! I wrote something like this:
FROM Magazine WHERE title = 'JDJ'
instead of writing that:
SELECT x FROM Magazine x WHERE x.title = 'JDJ'
i.e. HQL instead of JPQL.
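For context, this is how the correct JPQL form is executed through the standard JPA API (Magazine is the entity from the example above; the EntityManager setup is assumed):

import java.util.List;
import javax.persistence.EntityManager;

public class MagazineFinder {
    // Runs the JPQL query; on a strict provider the HQL form would be rejected here.
    @SuppressWarnings("unchecked")
    public List<Magazine> findJdjMagazines(EntityManager em) {
        return em.createQuery("SELECT x FROM Magazine x WHERE x.title = 'JDJ'")
                 .getResultList();
    }
}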
You might think that I must be a lousy tester to release code that does not work but the trick is that it was working fine and passing the integration tests.
The issue was that the JPA provider I am using, Hibernate, was not running in strict mode, hence was joyfully accepting the HQL queries. Everything was fine in the integration tests run on Jetty, but when deployed on JBoss, where a strictly configured Hibernate was used as the JPA provider, all hell broke loose.
So conclusion one is that yes, I am a lousy tester: always target the same platform for your integration tests as for your deployments. There are simply too many provided runtime dependencies in an application server to take any chances with them.
And conclusion two is more an open question about the value of standardized Java APIs built on top of existing proprietary industry standards. Some successes exist, like JCR, which is quite an achievement in the complex matter of unifying access to CMS implementations. But I must admit that one could find JPA an off-putting keyhole on Hibernate, strewn with tricky drawbacks.
I want to believe that, the same way JAXP started as a gloomy reduction of the capacities of Crimson and Xalan behind wacky factories and ended as a convenient way to unify XML coding in Java, JPA will evolve towards a bright future.
Labels:
Craftsmanship,
Tools
Monday, September 03, 2007
A Triplet Of Releases
What better day than Labor Day for releases? Sorry, I could not resist the cheap childbirth analogy! So, I have just released the latest versions of NxBRE, the Inference Engine Console and the DSL plug-in.
I have decided not to finalize support for RuleML 0.91 Naf Datalog, mainly because it is not clear how integrity queries are now implemented. The release was worthwhile anyway, thanks to patches, bug reports and feature requests submitted by the community of users.
Enough labor for today!
Labels:
My OSS
Saturday, September 01, 2007
Virtual Shelf
Thanks to Vijay, I have discovered Shelfari, a really neat virtual shelf where you can share your experience of your current and past readings and consult the points of view of others.
You can even integrate a widget on your blog, as you can see on the lower right of this page.
Highly recommended!
Labels:
Tools
Friday, August 31, 2007
Green or Bust!
WordWeb (Dictionary + Thesaurus + Word Finder) wants a better future for all of us.
Read their free version license agreement.
You might find it laughable but I think it is laudable.
Labels:
World
Sunday, August 12, 2007
(Still) Waiting For Java 5
After the recent release of the public review of JSR-283, discussions have mainly focused on the demise of XPath support and the introduction of a new query language model. One thing that shocked me when I browsed through the API was the complete lack of any Java 5 language constructs, like generics or enumerations.
Or take a look at the parent POM of a project like Mule ESB: the target JVM is 1.4.
Hence "enterprise" Java is still a 1.4 world. Even if it is possible to run 1.4-compiled applications on recent JVMs, mainly to reap their performance benefits or management features, the Java community is dragged behind by this state of affairs.
And this means we have to carry on with pre-1.5 compatibility features, like erasure, until the corporate world leaves the 1.4 platform for something less retarded. The good thing is that this will inevitably happen.
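As a small illustration (class and values made up), here is the 1.4 style we keep writing for such targets versus its Java 5 equivalent:

import java.util.ArrayList;
import java.util.List;

public class TargetJvmStyles {
    public static void main(String[] args) {
        // 1.4 style: raw types and casts, the compiler cannot help.
        List names = new ArrayList();
        names.add("JSR-283");
        String first = (String) names.get(0);

        // Java 5 style: generics move the same check to compile time.
        List<String> typedNames = new ArrayList<String>();
        typedNames.add("JSR-283");
        String typedFirst = typedNames.get(0);

        System.out.println(first + " / " + typedFirst);
    }
}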
Labels:
Craftsmanship
Thursday, July 26, 2007
Order Is Disorder
Eclipse offers the option to auto-sort the members of a class while formatting its code. After seeing the result of member sorting applied to code I have written, I must confess I find this "feature" totally useless at best and deeply asinine at worst, because the cons totally outnumber the pros.
If this sounds too harsh, then here is a more polished point of view from this developerWorks article:
Sometimes it's nice to have members sorted alphabetically to better navigate in your code. Again, there may be arguments against it. Imagine you structure your code so that methods that call each other are located close together for code navigation. The sort would reorganize them, and they may not be in the wanted order.
Sorting members alphabetically is very often counter-intuitive. Let's take javax.servlet.Filter as an example. Which implementation do you prefer:
The alphabetic order
void destroy()
void doFilter()
void init()
The execution flow order
void init()
void doFilter()
void destroy()
The latter is of course the more natural one: time cannot be factored out when reading and understanding source code. In this matter, alphabetic ordering acts as a randomization of the time line of the source code.
On top of time-line ordering, programmers naturally organize related members in groups because it is easier to keep a visual context that covers several methods, which facilitates their comprehension. Using the previous example, I would opt to sort the members this way:
void init()
void destroy()
void doFilter()
keeping the lifecycle methods (init and destroy) visually grouped and time-line sorted.
Here again, alphabetic sorting damages another important factor for code understanding: the distance between related statements.
My conclusion is that member sorting goes far beyond formatting concerns and should not be automatically applied. If some sort of sorting is needed, it is better to sort the hierarchical outline of the class, as explained in the aforementioned article:
The outline view offers a nice feature to sort members in the view, but not in the code. The specific settings and how members are sorted can be found in Preferences > Java > Appearance > Members Sort Order.
Labels:
Craftsmanship
Monday, July 23, 2007
Some Serious Tooling
To further promote Windows Developer Power Tools, O'Reilly has just released two badges to show off the work done on the tools and on the book.
I really like the circular saw: though software engineering is more like plumbing, there are some serious cutting jobs as well, so a saw like this one can be handy.
Anyway, I have started using the badges on the NxBRE sites and documentation.
Sunday, July 22, 2007
Content Rides The Bus
I have just released the very first version of the JCR Transport for Mule. This transport supports JCR 1.0 (Java Content Repository 1.0, as defined by JSR 170) repositories and is receiver-only: it leverages the asynchronous notification mechanism, named observation, to receive events whenever a change occurs in the repository, and is not intended for storing or reading messages from a content repository.
Indeed, the use case for this connector is to monitor a content repository for particular events and to transform them into fully-resolved, serializable, content-rich objects that Mule ESB can route to interested endpoints or components, enabling them to trigger actions like instantiating workflow sequences or flushing caches.
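For readers unfamiliar with JCR observation, here is a minimal sketch of plain JSR-170 event listening, independent of the transport itself (session acquisition is assumed to happen elsewhere, and the /content path is arbitrary):

import javax.jcr.Session;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventIterator;
import javax.jcr.observation.EventListener;
import javax.jcr.observation.ObservationManager;

public class ContentChangeListener implements EventListener {
    // called asynchronously by the repository when matching changes occur
    public void onEvent(EventIterator events) {
        while (events.hasNext()) {
            Event event = events.nextEvent();
            try {
                // react to the change, e.g. flush a cache entry for this path
                System.out.println("Change " + event.getType()
                        + " at " + event.getPath());
            } catch (Exception e) {
                // getPath() can throw RepositoryException
                e.printStackTrace();
            }
        }
    }

    public static void register(Session session) throws Exception {
        ObservationManager om = session.getWorkspace().getObservationManager();
        // listen deeply for node additions and removals under /content
        om.addEventListener(new ContentChangeListener(),
                Event.NODE_ADDED | Event.NODE_REMOVED,
                "/content", true, null, null, false);
    }
}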
Developing this connector using Mule's Transport Artifact was very easy, the biggest part of the code being the (optional) augmentation of the JCR events with content from the repository. The development environment that MuleSource has opened to developers, named MuleForge, uses all the good tools of the moment, like Subversion, Confluence, Jira, Bamboo and, of course, Maven; with Xircles as the one platform to bind them all in a consistent dashboard.
MuleForge has allowed me to develop, build, distribute and document this transport in a very efficient and professional way. For example, the wiki templates look exactly like the Mule ESB official guide, which is a great plus for end-users, who will enjoy a consistent reading experience.
All in all, MuleForge is a smart initiative that fosters creativity and contribution around a powerful and well-established open source product. I am confident the efforts MuleSource has put into setting up this environment will pay off: it will nurture projects contributed by a vibrant community of users and developers, and will eventually enrich the overall Mule ESB offering.
Saturday, July 21, 2007
DieSeL for the Engine
Thanks to ANTLR and ANTLRWorks, I have been able to make significant progress on the addition of Domain Specific Language (DSL) support for the Inference Engine of NxBRE. A usable version is available in the SVN repository on SourceForge.
The main goal of NxBRE DSL, aka NxDSL, is to allow rule authors to use their own natural language to write executable rules. Technically, this feature is based on:
- a language-specific ANTLR grammar that strictly enforces the structure of a rule file: this file is not supposed to be edited by the implementer,
- a custom definition file that translates statements into RuleML atoms: this file must be created by the implementer to capture, with regular expressions, the natural language fragments and how they translate into RuleML atoms.
In the following example, the structural keywords (rule, if, and, then deduct) are parsed by ANTLR, while the statements between them are matched by the regular expressions from the definition file.
rule "Discount for regular products"
if
The customer is rated premium
and
The product is a regular product
then deduct
The discount on this product for the customer is 5.0 %
Some terms (the rule label, the implication action and the atom values) are captured by ANTLR and the regular expressions to provide values for the resulting rules.
As you can see, the ANTLR grammar defines and enforces the structure of the rulebase, i.e. the skeleton of the rules, logical blocks and statements.
NxDSL comes with a grammar that allows using French terms for defining the rule structure, thus opening the door to consistently write the body of the rules in the same language, as shown hereafter:
règle "Remise pour les produits standards"
si
Le client est en catégorie premium
et
Le produit est de type standard
alors déduis
La remise pour ce produit pour ce client est de 5.0 %
While NxDSL adds extra work for the engine implementer, the positive impact for rule writers will justify its usage on projects that need to let non-technical users manage rules.
NxDSL also opens the possibility of writing a plain-text rule editor by leveraging the definition file to provide code assistance to the writer. Anyone feeling like contributing such an editor?
Labels:
My OSS
Thursday, July 19, 2007
Lang for the Commons
I had to deal today with a class that contains an attribute of type java.io.Serializable. When I asked Eclipse to generate the equals and hashCode methods that I needed, it warned me that the generated code would not work properly, because Serializable does not mandate anything in terms of implementation of these two methods.
And as expected, I started to get red lights when my test cases were stuffing byte[] into the Serializable attribute: the hash computation and equality algorithm used for this type was simply not working, and two instances of my object containing the same sequence of bytes in two different byte arrays were bound to be considered different.
Since I was already using Commons Lang for building the String representation of my object, I thought I should give the hashCode and equals builders it offers a shot. Reading no doc and proceeding only by analogy with the toString method, I quickly typed this:
// the reflection-based builders from org.apache.commons.lang.builder
// inspect all fields and handle arrays (like byte[]) element-wise
public int hashCode() {
    return HashCodeBuilder.reflectionHashCode(this);
}

public boolean equals(Object obj) {
    return EqualsBuilder.reflectionEquals(this, obj);
}
And it worked immediately! Not only it saved me a lot of lines of code but it worked exactly as I expected from the look at the name of the classes and the methods.
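To illustrate, here is a hedged sketch of the behavior I was after (MyHolder is a hypothetical stand-in for my actual class): two instances wrapping equal but distinct byte arrays now compare equal, because the reflection builders compare array fields element-wise.

MyHolder a = new MyHolder(new byte[] { 1, 2, 3 });
MyHolder b = new MyHolder(new byte[] { 1, 2, 3 });
// distinct arrays, same content
System.out.println(a.equals(b));                  // true
System.out.println(a.hashCode() == b.hashCode()); // true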
Nice API, nice library: you need it.
Labels:
Craftsmanship
Wednesday, July 18, 2007
July And Still Jolting
DDJ has (finally) published the write-ups for the winners of this year's Jolt Product Excellence Awards.
There is no master page for the different categories, so here is a link that will provide you with search results showing all the relevant pages.
My own write-ups are in the Enterprise Tools and Libraries, Frameworks and Components categories.
Enjoy the reading!
Labels:
Tools
Saturday, July 14, 2007
Everything Is Happy (or Content?)
On this sunny Bastille Day, there is at least one good reason to rejoice: the public review of JSR-283 (aka JCR 2.0) was published yesterday! Knowing that one of the most interesting Java APIs of the moment has just gotten better is a true source of (geeky) happiness and the promise that it will keep shaking and shaping the world of content repositories.
Indeed, JCR has been playing, and will keep playing, a fundamental role in opening the doors of a domain that used to be the private hunting ground of a few mammoth vendors. Of course, this is not about getting great repositories for free: the main benefits are allowing users to be in control of their content assets and enabling developers to build innovative content-driven solutions.
I did not have time to dig deep into the specification, but I noticed that observations now produce richer events (source paths and identifiers) and that the previous SQL and XPath query languages have been deprecated in favor of an Abstract Query Model (AQM), embodied in a new SQL dialect (JCR-SQL2) and a query object model (JCR-JQOM). I was convinced that XPath was a perfect fit for JCR searching, but apparently I was wrong! I think JQOM will be the more appealing option for tool builders, as mapping query features to this object model will be less clumsy than generating SQL.
Now to my little wish list! As a user of the API, here are the three changes I would like to see in the new version. Some are trivial, like:
- A Property object can be single or multi-valued, hence offers getValue() and getValues(). I find it logical that calling getValue() on a multi-valued property fails, but I find it deeply counter-intuitive that calling getValues() on a single-valued property fails. By allowing getValues() to return a unique value from a single-valued property, the code needed to read property content would be vastly unified, thus simplified (see the sketch after this list). This would also conform to the principle of least astonishment.
- An Event has a type property that defines the reason why it was raised. This is defined by a list of static integers, while I think it would be better to use a type-safe enumeration, as is done for PropertyType. I can understand that this change would create a backwards-compatibility challenge, so at least adding a helper class that could render to / parse from String would be very helpful for translating event types into a human-readable form (PropertyType offers such methods: nameFromValue and valueFromName). On top of that, making Event serializable would also facilitate its processing.
- A less trivial addition would be to offer a unified way of acquiring a repository. Currently each vendor can offer a different client factory: writing an application that connects to several repositories forces you to write vendor-specific code (or to use SpringModules). A unified URI-driven repository factory mechanism a la JDBC would be very helpful.
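To show what the first wish is about, here is a sketch (my own helper, not part of the API) of the boilerplate that JCR 1.0 currently forces on callers who just want the values of a property:

import javax.jcr.Property;
import javax.jcr.RepositoryException;
import javax.jcr.Value;

public final class PropertyReader {
    // unify reading: JCR 1.0 makes getValues() fail on single-valued
    // properties, hence the manual branching and wrapping
    public static Value[] readValues(Property property)
            throws RepositoryException {
        if (property.getDefinition().isMultiple()) {
            return property.getValues();
        }
        return new Value[] { property.getValue() };
    }
}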
Labels:
Tools
Thursday, July 12, 2007
Skeletons And Open Closets
Before it went to the printed publications paradise, Software Development Magazine was running a "Horror Stories" series every year around Halloween. It was fun to write, instructive to read and pretty scary as well.
Nowadays, the "Daily WTF", prudishly renamed Worse Than Failure, brings us this kind of story every day, if not several times per day. A common point between all these stories seems to be the universal existence of software skeletons in IT closets, waiting to jump on poor new programmers' laps and make their lives a nightmare.
Life being short, it can be desirable to avoid getting aboard a ship that carries such monsters, but how can you tell from a few interviews that a company is facing "legacy code issues"? This kind of risk might make working for an open source software company a desirable move. Skeletons dislike open closets.
Are open source companies free of monsters? Of course not: for all the pieces of software that are not public (think back office systems, web sites...), there is a risk of facing the ugly spawn of years of software rot. But at least all the public-facing code will have to stand up to elevated standards and is "up to the challenge of 1000+ eyeballs reading [it] every day".
Should code skeletons be avoided at all cost? I do not think so. Most of us cannot see dead people; similarly, management cannot see dead programs. IIABDFI ("if it ain't broke, don't fix it") is the golden rule, but sometimes it becomes clear that there are only so many marathons a dead man can run. Then, if your job is to refactor such a monster so it becomes maintainable and versatile, it can be as challenging as it is fun.
What should be avoided at all cost is the summoning of new skeletons in closets. With all the knowledge we now have thanks to luminaries in our industry, this should be possible.
Labels:
Craftsmanship
Friday, June 29, 2007
Obfuscated by obfuscation
From time to time, I have to add to the list of dependencies of one of my projects a library whose byte code has been obfuscated.
Who cares, you might ask? Well, not me, until I have to step-debug into said library, because it is usually easier to follow an execution path than a technical guide. And then the pain begins: obfuscated libraries are (intentionally) a mess and you cannot do anything with them in debug mode.
So here is my message to vendors and other smart guys who think security by obscurity works: this is freaking backwards! You prevent legitimate users from doing their business with your library the way experienced users do. No matter how good you think your user guides are, when something goes wrong, nothing replaces the ability to follow the actual execution of a program. As Bruce Schneier says:
Security by obscurity: it doesn't work, and it's a royal pain to recover when it fails.
So please, do not obfuscate the libraries you distribute, no matter how proprietary you think they are. Your value, as a software company, does not reside in the heart of the few bytes that you try to hide. It lies in your people, your know-how and your services.
Let there be light!
Labels:
Craftsmanship,
Tools
Sunday, June 24, 2007
Iconoclast
I am wondering how long this icon (a floppy disk) will keep meaning "Save" to anyone:
This icon (a rotary-dial telephone) already does not mean anything to my kids:
For them a phone is either cordless or cellular: no funny roundy thingy on it!
Coming back to my first example, the last time I used a floppy disk was in 2005: is it time to overhaul our icons?
Labels:
Craftsmanship
Friday, June 08, 2007
How To Always Look On Strike?
Follow the example of the world champion in the domain, i.e. SNCF, the French national railway.
Like them, implement some kind of lousy "not available" message to display when someone hits your web site without typing www in the URL. Since many users now omit the dub-dub-dub prefix, your site (hence your company) will always appear under repair. Or on strike. Or both.
Try it! Click http://sncf.fr now!
Labels:
Fun
Wednesday, June 06, 2007
Agitation On Demand
In my previous post, I talked about how sweet it is to push the limits of unit tests for the sake of increasing test coverage. During the testing frenzy I mentioned, I came to test a class most of whose methods throw an UnsupportedOperationException.
Ugly? Well, this class is a subclass of javax.servlet.jsp.PageContext with a very narrow scope of usage, hence the design. I wanted to generate calls to all these methods and catch the exception as the expected test result. My first thought was to use reflection to do this in a dynamic manner, but before opting for this approach I decided to Google for free unit test generators. This is how I came to discover JUnit Factory, a test generation service from Agitar. I already knew Agitar from Agitator, their award-winning unit test generator, but their online generation service was unknown to me.
Needless to say, this service is very well done. The client I used is an Eclipse plug-in that seamlessly installs on version 3.2 from a proper update site. To avoid uploading my whole project to their service, I isolated my troubled class in a temporary project and pushed the magic button.
A few seconds later, a test case was automatically added to my project, with all my problematic methods (and all the other ones) thoroughly test covered. Awesome: agitation on demand truly works! In other times, anarchists would have rejoiced.
When I decided to include this test case in my main project, I started to hit some issues. The unit test does not extend JUnit's TestCase but an Agitar custom one. Determined not to be stopped by such a minor issue, I promptly added their library to our Maven repository.
Then I pushed the execute button in Eclipse to run my tests, and the Agitar test case complained because the launcher was the standard JUnit one and not their own. This is when I called it quits. Of course, I could have modified my Eclipse start command to use their launcher. But what about my Maven builds and reports?
That was a little too much to do, so I decided to walk the path of reflection to systematically invoke the target methods.
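For the record, here is a minimal sketch of that reflection approach (MyPageContext is a hypothetical stand-in for my actual subclass; methods with primitive parameters are skipped for brevity, as they would need non-null default arguments):

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import junit.framework.TestCase;

public class UnsupportedMethodsTest extends TestCase {
    public void testMethodsThrowUnsupportedOperation() throws Exception {
        MyPageContext target = new MyPageContext();
        for (Method method : MyPageContext.class.getDeclaredMethods()) {
            if (hasPrimitiveParameter(method)) {
                continue;
            }
            // null is an acceptable argument for any object-typed parameter
            Object[] args = new Object[method.getParameterTypes().length];
            try {
                method.invoke(target, args);
                fail(method.getName() + " should have thrown");
            } catch (InvocationTargetException ite) {
                // the expected outcome for every method of this class
                assertTrue(ite.getCause() instanceof UnsupportedOperationException);
            }
        }
    }

    private boolean hasPrimitiveParameter(Method method) {
        for (Class<?> type : method.getParameterTypes()) {
            if (type.isPrimitive()) {
                return true;
            }
        }
        return false;
    }
}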
This is too bad because JUnit Factory is really a great concept. Try it, it might work very well for your needs.
Push Tests, Pull Cover!
I have spent nearly the whole day increasing the test coverage of one of my projects. For traditional employers or ill-educated project managers, this could sound like an incredible waste of time. For others, this could sound like pure geeky zealotry or a new means of self-satisfaction for IT nerds.
Indeed, seeing the coverage report display numbers above 80% is extremely satisfying, but the true reason for this testing frenzy is elsewhere: this project is bound to play a critical role in the core business of my company, so testing it thoroughly is simply the only option.
Beyond the assurance such high coverage buys, the implicit documentation of the system's behavior that an extensive battery of tests creates is worth the effort on its own. But there is more. As recently discussed by industry guru Andrew Binstock, writing tests will inevitably lead you to question both design and implementation. Hard-to-test classes full of uncovered "blind spots" will cry out for refactoring!
I was amazed to see my code getting simpler and more concise while I was struggling to push the limits of the test coverage. I wish you similar amazing moments!
Labels:
Craftsmanship,
Testing
Friday, May 25, 2007
Max Planck and the TCO
In "Pitching Agile to Senior Management" (DDJ June 2007), Scott Ambler presents tactics for introducing agile approaches to management. Besides the necessity of talking the right talk, Scott emphasizes the importance of avoiding an "us versus them" way of thinking and, for this, to recognize the virtues and values of management.
In this article, Scott presents how quickly agile software development starts to provide value and how this factor can help pitch the positive bottom-line impact of agile. There is, though, a parameter that management will also consider in this board game: an agile team is significantly more expensive than a traditional one. Agile teams are usually staffed with seasoned developers who are generalizing specialists: these species are more expensive than the usual programmers and analysts that traditionally managed projects are used to dealing with. And this is without mentioning the folly of co-located teams when you can have cheap and qualified labor near or off management's shores!
Hence the comparison graph of the total software project costs will probably look like this...
... with the green line showing the cost of agile approaches and the red one showing the cost of traditional ones. So this is good news: agile still beats traditional over time! Yes, but the big question is how far along the product's lifetime management will look when making its decision. It might sound obvious that the whole lifetime will always be considered, but it is not.
There are situations where management will take a narrow view of this:
- Organizational reasons: maintenance of the product will be handed off to a different unit, unconnected with the current managers. This happens in large structures where the point in the hierarchical pyramid at which the development and maintenance management chains meet is so high that no-one will look into how decisions on one side affect the other.
- Personal reasons: an upcoming promotion or retirement can make a particular manager disinclined to look too far into the future. Though this might sound unprofessional or rare, with the baby boomers now on the way out, this situation will occur more often than you think.
... and if you are in one of these situations, your pitch might be very difficult! In that case, you will have to be agile and refactor the pitch to focus it more on time-to-market or quality aspects rather than sticking to the money side.
Labels:
Craftsmanship
Monday, May 21, 2007
ANTLRWorks Works For Me
Alright, so writing XML really put Uncle Bob in a rage. Of course, he is right: XML should be limited to machine-to-machine exchanges and should never be forced down the throats of human beings, let alone geeks of all sorts. The natural consequence is that I decided to start looking into adding DSL support to NxBRE, as writing RuleML is really not a fun task.
In my wildest dreams, it will be remotely as good as the DSL support of Drools (including a code-assist-driven full text editor). The harsh reality of (a busy) life will probably limit the scope of this addition to NxBRE, but it should anyway give rule authors a better way of expressing themselves.
To build the grammar I decided to use ANTLR and its great companion tool: ANTLRWorks. I came to this choice thanks to Martin Fowler's current exploratory works on DSLs.
ANTLRWorks has proven really useful in this endeavor: the immediate testing and debugging of the grammar is complemented by a tree representation of the exploration graph that simplifies the detection of syntax goofs and other mistakes.
I have committed the embryo of a rules grammar in the trunk Misc directory. Capturing is still to be implemented. Then a translation table from plain-English format blocks to RuleML atoms will have to be added.
ETA is obviously N/A ;-)
Sunday, May 20, 2007
Meet The Real Brain
My cousin Manuel will be talking tomorrow at Nanotech 2007, in Santa Clara, CA.
The subject of his session will be: Near-Field Raman and Luminescence Spectroscopies to evidence chemical heterogeneity of surfaces with sub-wavelength spatial resolution.
Boy, I feel dizzy, proud and really like an educated plumber.
Labels:
Science
Thursday, May 17, 2007
For French Readers Only
Sorry for the segregative title but, alas, this post only concerns those of you who can read French.
Indeed, I am happy to invite those of you who can to subscribe to Zeskyizelimit, a witty blog from IT industry samurai Jean-Luc Ensch. Sometimes impertinent and always pertinent, this blog will give you a different view on what is going on in our beloved professional field, and also on what happens in this part of the galaxy.
I say alas because, unfortunately, no translation tool will be able to provide English readers with a fair rendering of Jean-Luc's humor and bons mots.
Enjoy the reading!
Labels:
Fun
Tuesday, May 15, 2007
Microsoft 2.0?
So this is it. Microsoft has started its long demise... Maybe not yet, but the company has clearly started a new strategy of betting on the wrong horses and doing it in a very visible manner.
For example, remember the recent introduction of an open XML office document specification while the world already had one or, two days ago, the asinine take on the open-source community's supposed patent infringements.
For this last fumble, several industry notables replied, including Linus Torvalds, but to my sense the most sensible analysis of the situation came from software maven Alan Zeichick who clearly balanced Microsoft's lack of innovation with its preference for litigation.
It is really time for Microsoft to realize that times have changed: we have entered a new era where the operating system and the office productivity suite are not fully in their hands anymore. With on-line solutions and open source alternatives, these two components of a personal computer are not as critical as they used to be. In fact, it is obvious that Microsoft is aware of this trend, as Windows and Office are their two traditional cash cows.
What should they do? Instead of trying to push a new office standard, why not build the best office suite for the existing open format? Users are now educated enough to recognize and appreciate a highly usable piece of software: they would certainly be willing to pay a reasonable amount of money for a productivity suite that would not lock their data in the playground of a vendor.
They could also start to innovate. Really. I cannot name any innovation from Microsoft: they have drastically improved existing things, but what have they really invented? Well, many things I am sure, judging by the really cool stuff they are doing in their labs. So where is all the cool stuff going?
Well, I guess the crux of the problem is that it goes through the "bully filter" that still exists at the top level of the company. This filter is in fact a transformation that turns innovation into products that lock users in and force them to buy the full stack of Microsoft delicacies. And this, forever.
Even after the stepping back of the master bully, Bill Gates himself, the company is still run by thugs who do not realize that they cannot continue walking this bloody path. Even Redmond product enthusiasts are starting to look elsewhere.
Can Microsoft change and enter into redemption? Considering that IBM succeeded in its conversion from an insipid consulting firm and dinosaurish hardware maker into a vibrant community daring to stuff Cell/B.E.s into mainframes, I think there is hope that they will leverage their army of bright engineers and their deep pockets to build a new version of themselves!
Edited 31-MAY-07: Old habits die hard: Microsoft is still a bully that does not care about slapping people who create value on top of its technology if these people do not play according to its rules.
Labels:
World
Saturday, May 12, 2007
Business Under The Sea
With his "Mobilis In Mobile" motto, was Captain Nemo the first modern agilist?
Whatever your reply to this question, I think the famous submariner deserves a little tribute. And what could be more enviable than proudly wearing Captain Nemo's motto and crest? Nothing, I guess, and if you agree then go on a shopping frenzy in the newly opened Nautilus Warehouse, a dedicated shop I have created on Cafepress.
Oh, and if you wonder what the outrageous $2 cap on each item is for, you will be happy to learn that the true tribute to this tormented humanist resides in this tiny cap, which will be invested in Kiva micro-loans. So this is for fun and a good cause.
Labels:
Fun
Thursday, May 10, 2007
The Four Lives Of The Geek
A few days ago someone asked on Slashdot this very question: "Where to Go After a Lifetime in IT?".
If you filter the trolls out, you will find that the asker was left with these two categories of replies:
- Do not change anything and keep cashing until you can enjoy your upcoming retirement,
- Do not be afraid of a drastic change: not doing it will turn into a millstone of regrets.
What could be the conditions for having four professional lives? If you look at the watermark underneath Professor Kailath's professional path, you will discover that:
- Passion and curiosity must be the main drivers,
- Excellence and rigor must be constantly sought,
- Courage and optimism should be nurtured.
Labels:
World
Monday, May 07, 2007
Adaptive Parallelization Shall Rise
In a recent post in his blog, software guru Larry O'Brien talked again about the pitfalls of code parallelization and concluded with a truly insightful line:
This is a great example of why neither of the simplistic approaches to parallelization ("everything's a future" or "let the programmer decide") will ultimately prevail and how something akin to run-time optimization (a la HotSpot) will have to be used.
Like many of us, I have explored the parallelization of different processing-intensive tasks and found that, most of the time, my efforts to chunk and parallelize them were just adding a processing overhead leading to worse performance. Even when using pooling to mitigate the expense of thread creation, the cost of context switching and of the synchronization ultimately needed to build the final state of the computation was still dragging the overall performance down.
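As an illustration of the pattern (a hedged sketch, not the actual code of those experiments), here is a naive chunked parallelization of a cheap aggregation; the submission, context-switching and result-merging costs can easily exceed the parallel gain:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkedSum {
    public static long parallelSum(final long[] data, int chunks)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        List<Future<Long>> futures = new ArrayList<Future<Long>>();
        int chunkSize = data.length / chunks;
        for (int i = 0; i < chunks; i++) {
            final int start = i * chunkSize;
            final int end = (i == chunks - 1) ? data.length : start + chunkSize;
            futures.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long sum = 0;
                    for (int j = start; j < end; j++) {
                        sum += data[j];
                    }
                    return sum;
                }
            }));
        }
        long total = 0;
        for (Future<Long> future : futures) {
            total += future.get(); // synchronization: building the final state
        }
        pool.shutdown();
        return total;
    }
}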
In subtler attempts, like piping XSL transformations instead of chaining them, the results were sensitive to the amount of data processed (the more, the better) and to the way the XSLs behaved (one that starts to output results early leads to better performance when involved in a flow). Hence the context itself was of great importance for the result.
All in all, this led me to think the following as far as parallelization and concurrency are concerned:
- Let us write correct code with regard to thread safety,
- Let us write efficient code as if only one thread was available,
- Let us write readable code and avoid "clever" programming.
When Larry's vision of run-time automated parallelization optimization algorithms becomes reality, such code will certainly fly and, if not, will be easily refactored to do so. And if you think this idea of adaptive optimization is far-fetched, read about out-of-order processors and Java HotSpot optimization: today we take all these for granted, but a few decades ago this was sci-fi.
Labels:
Multi-threading
Thursday, May 03, 2007
beautifulMinds--;
I certainly think that professionalism is very important....To be a proper professional you need to think about the context and motivation and justifications of what you're doing...You don't need a fundamental philosophical discussion every time you put finger to keyboard, but as computing is spreading so far into people's lives you need to think about these things....I've always felt that once you see how important computing is for life you can't just leave it as a blank box and assume that somebody reasonably competent and relatively benign will do something right with it.
Karen Spärck Jones (1935-2007)
Emeritus Professor of Computing and Information
at the University of Cambridge
Labels:
World
Sunday, April 29, 2007
Prefactoring A Bell
I am currently reading Ken Pugh's Prefactoring, a seminal book on writing software "right" from the beginning without erring on the side of BDUF. While reading this book, I have found that some concepts Ken introduces (or re-introduces, as many of them were already known) directly map to situations I am currently facing. I will share them here, and maybe in upcoming posts if other situations ring my bell...
Tight Coupling and the Singleton Identity
Of course, avoiding tight coupling is a goal every conscientious developer has in mind and tries to reach as much as he can. The difficulty is to spot tight coupling, which is coupling to a particular implementation, as it sometimes takes place unnoticed.
For example, I recently had the case of a developer who needed to test the identity of an object and for this opted to use equality because he knew the object was a singleton.
if (theObject == Singleton.theInstance)
This created tight coupling because, should the object cease to be a singleton, testing for equality would break. The following should have been used:
if (theObject instanceof Singleton)
APIs of Least Surprise
Designing APIs is a tough subject: the intense discussion between Josh Bloch and Michael Feathers at the latest SD West was quite a lively proof of it. Sticking to the principle of least surprise is surely an excellent guideline for interface designers.
I recently came to use the javax.management.MBeanServerFactory class and bumped into an inconsistent behavior between two of its helper methods:
createMBeanServer(String domain)
findMBeanServer(String agentId)
As you can see, when you create an MBeanServer you provide the API with a domain name, while when you use the same API to look for MBeanServers you have to provide an agent ID. Since both are Strings, I assumed that they both represented the same concept, but I was wrong. And surprised!
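A small sketch of the surprise (the method names are real, the expectations in the comments are mine):

import java.util.ArrayList;
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;

public class LeastSurpriseDemo {
    public static void main(String[] args) {
        // the String here is a default domain...
        MBeanServer server = MBeanServerFactory.createMBeanServer("MyDomain");
        // ...but the String here is an agent ID, so this lookup will
        // most likely come back empty
        ArrayList<MBeanServer> byDomain =
                MBeanServerFactory.findMBeanServer("MyDomain");
        System.out.println("Found with the domain name: " + byDomain.size());
        // passing null returns all MBeanServers registered in this JVM
        ArrayList<MBeanServer> all = MBeanServerFactory.findMBeanServer(null);
        System.out.println("All servers: " + all.size());
    }
}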
Labels:
Craftsmanship
Saturday, April 21, 2007
My Top Three Mac OS X Annoyances
Now that I have switched to Mac OS X as my main OS, all my troubles seem so far away and it's a wonderful life.
Just kidding! Though OS X is a great OS, it carries a fair share of annoyances, pretty much like every system does. Here is the list of the top three glitches that drive me nuts:
- Bad keyboard support: I find myself forced to use the mouse too often. Not that I dislike this kind of small mammal, but having to leave the keyboard to twiddle the mouse really slows me down, usually at the worst moment (when typing code, for example). Very often a dialog will pop up and I will have no other way to get rid of it but to use the mouse. Or, when paging up and down in large texts, the fact that the caret does not actually move will also force me to use the mouse to click to position the cursor. Windows XP does a much better job with keyboard support, as you can do almost everything with your hands on the keys.
- Lame file explorer: I am sorry, but Finder is a pain in the neck. Navigating a folder hierarchy, creating folders at the place you want them to be, moving files around, renaming them... all these operations carry a certain degree of clunkiness that quickly makes me fume and rant. Again, Windows XP does a much better job here (except for network folders, which consistently freeze the file explorer, if not worse).
- Sweet and sour JVM: though Apple boasts about their superb JVM integration, not being able to use a standard one from Sun prevents you from being up to date. So the JVM is great, but it lags behind the official releases from Sun. At this date, version 6 is still a developer preview while Sun's mainstream VM is already at update 1. I think Apple should keep working on integrating the JVM in OS X as they do, but also make it simple for developers to deploy Sun ones in "private" mode.
Labels:
Platform
Sunday, April 15, 2007
A Bridge, a Donkey and a lot of Fire
Did it need to be so high?
JMS is a simple yet powerful API that allows developers to build asynchronous and loosely coupled systems pretty easily. In fact, it is so easy that its usage usually expands very rapidly in the IT landscape of a company, until it hits a wall that is as high, austere and disabling as Berlin's was, namely: the firewall.
JMS listeners rely on specific ports, usually dynamically assigned, something that generally prevents their usage through a firewall, as administrators are reluctant to open ranges of ports. Fortunately, there is a highway that goes through this wall: it is called HTTP. It has a particular traffic regulation, as it is a one-way road that goes from the inside (the intranet zone) to the outside (the external DMZ that we will call the Internet zone).
Mule to the rescue!
This post demonstrates how to leverage Mule, the open source ESB, to bridge JMS queues that reside on both sides of the firewall through this highway. The following deployment diagram details what is involved in this scenario: as you can see, Mule is not deployed as a standalone application but is embedded in a J2EE web application deployed on a server. There are several reasons for this approach:
- System administrators can be reluctant to deploy new tools: deploying Mule as a web application on your company's standard J2EE server helps alleviate this resistance.
- The inbound queues used by the bridge can be hosted by the server itself, leading to a neat and consistent self-contained component with no dependency on an external system.
- Using the servlet connector of Mule lets you leverage the well-known web stack provided by your favorite J2EE server (see the web.xml sketch after this list).
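To make this concrete, here is a minimal sketch of the web.xml for such an embedding, based on Mule 1.x-era class names (MuleXmlBuilderContextListener bootstraps Mule from a configuration file, MuleReceiverServlet exposes servlet endpoints over the container's HTTP stack); the configuration path and the servlet mapping are assumptions for illustration, so verify them against your Mule version:

<!-- Sketch of a web.xml embedding Mule (Mule 1.x-era class names) -->
<web-app>
  <!-- Location of the Mule configuration file (hypothetical path) -->
  <context-param>
    <param-name>org.mule.config</param-name>
    <param-value>/WEB-INF/mule-config.xml</param-value>
  </context-param>

  <!-- Starts and stops Mule along with the web application -->
  <listener>
    <listener-class>org.mule.config.builders.MuleXmlBuilderContextListener</listener-class>
  </listener>

  <!-- Exposes servlet:// endpoints through the container's web stack -->
  <servlet>
    <servlet-name>muleServlet</servlet-name>
    <servlet-class>org.mule.providers.http.servlet.MuleReceiverServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>muleServlet</servlet-name>
    <url-pattern>/messages/*</url-pattern>
  </servlet-mapping>
</web-app>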
The following diagram presents the different components involved in the bridge. Routing from the intranet to the Internet is shown in green; the other direction is shown in red. The arrows follow the direction of message flow, not the direction of the calls that produce it. The gray boxes represent the application servers involved in the bridge, along with the Mule and JMS components they host.
From Intranet to Internet
A Mule component subscribes to the letter box queue in the intranet zone and listens for messages published there. When it gets a new message, it sends it by HTTP POST to the Mule servlet in the Internet zone. This servlet is the endpoint of a Mule component that performs the routing based on the aforementioned JMS property and publishes the message to the targeted queue (or stores it in a DLQ, aka Dead Letter Channel, in case the target is unknown).
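As an illustration, here is a hedged sketch of what the two component declarations could look like in a Mule 1.x-style configuration. The bridge component, pass-through router, filtering router, property filter and catch-all strategy are standard Mule 1.x classes, but the queue names, the servlet URL and the routing property name ("Target" here) are made up for the example:

<!-- Intranet side: consumes from the letter box and POSTs over HTTP
     (endpoint addresses are hypothetical) -->
<mule-descriptor name="intranetToInternetBridge"
                 implementation="org.mule.components.simple.BridgeComponent">
  <inbound-router>
    <endpoint address="jms://intranet.letterbox"/>
  </inbound-router>
  <outbound-router>
    <router className="org.mule.routing.outbound.OutboundPassThroughRouter">
      <endpoint address="http://internet.host/bridge/messages"/>
    </router>
  </outbound-router>
</mule-descriptor>

<!-- Internet side: receives via the Mule servlet and routes on a JMS property
     ("Target" is a made-up property name; unknown targets fall into the DLQ) -->
<mule-descriptor name="internetRouter"
                 implementation="org.mule.components.simple.BridgeComponent">
  <inbound-router>
    <endpoint address="servlet://messages"/>
  </inbound-router>
  <outbound-router>
    <catch-all-strategy className="org.mule.routing.ForwardingCatchAllStrategy">
      <endpoint address="jms://internet.dlq"/>
    </catch-all-strategy>
    <router className="org.mule.routing.outbound.FilteringOutboundRouter">
      <endpoint address="jms://internet.target.queue"/>
      <filter className="org.mule.routing.filters.MessagePropertyFilter"
              pattern="Target=target.queue"/>
    </router>
  </outbound-router>
</mule-descriptor>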
From Internet to Intranet
The other direction implies bringing messages back into the intranet zone, because no connection can be initiated from the Internet zone. This bridge achieves it with a Mule component in the intranet zone that regularly polls another Mule component in the Internet zone. The latter uses the power of scripting in Mule to define a component that consumes messages from the Internet letter box only when requested by a call from the intranet zone.
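Again as a hedged sketch, the scripted consumer on the Internet side might look like the following, using Mule 1.x's JSR-223 scripting component with Groovy. The endpoint addresses are made up, and the exact binding name (eventContext) and receiveEvent signature are assumptions meant only to illustrate the on-demand consumption idea:

<!-- Internet side: a scripted component that consumes one message from the
     letter box only when an HTTP poll arrives from the intranet
     (element and method names follow Mule 1.x conventions; verify against
     your version before relying on them) -->
<mule-descriptor name="letterboxOnDemandConsumer"
                 implementation="org.mule.components.script.jsr223.ScriptComponent">
  <inbound-router>
    <endpoint address="servlet://poll"/>
  </inbound-router>
  <properties>
    <property name="scriptEngineName" value="groovy"/>
    <text-property name="scriptText">
      // Try to consume one message from the Internet letter box,
      // waiting at most one second; return nothing if the box is empty.
      message = eventContext.receiveEvent("jms://internet.letterbox", 1000)
      return message != null ? message.getPayload() : null
    </text-property>
  </properties>
</mule-descriptor>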
Your Turn Now
As you can see, this example covers neither temporary destinations (used by requesters, for example) nor the reply-to feature of JMS. Note that, with a little extra work, it would be fairly easy to support reply-to targeting non-temporary destinations: rewrite the JMSReplyTo property of messages entering the bridge so that the reply channel goes through a pre-configured route.
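As a rough illustration of that idea, a message properties transformer could rewrite the reply destination on the way in. This sketch assumes Mule 1.x's MessagePropertiesTransformer and that the JMS transport resolves a string JMSReplyTo property into a destination; both assumptions, as well as the made-up queue name, should be checked against your Mule version:

<!-- Hypothetical transformer rewriting the reply destination of inbound
     messages so replies travel back through a pre-configured bridge route -->
<transformers>
  <transformer name="RewriteReplyTo"
               className="org.mule.transformers.simple.MessagePropertiesTransformer">
    <properties>
      <map name="addProperties">
        <!-- "bridge.replies" is a made-up queue watched by the bridge -->
        <property name="JMSReplyTo" value="bridge.replies"/>
      </map>
    </properties>
  </transformer>
</transformers>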
Similarly, this scenario lacks any retry mechanism for failed HTTP transfers, as well as a message staging area where payloads could be scanned for viruses before being routed to their intranet destination.
Even with these limitations, this example gives you a fairly complete view of what can be achieved with Mule, a little configuration and not a single line of compiled code.
The fact that no coding is involved matters in production: any skilled system administrator can activate new routes or deactivate existing ones simply by tweaking the Mule configuration, without involving a software developer. In that sense, this JMS bridge becomes a first-class citizen of the IT infrastructure.
Do not wait any longer and fetch the beast of burden that will massage your messages! But leave the cow alone...
Labels:
Mule,
Pragmatic ESB,
Tools