Friday, March 23, 2007

SD West Classes and Keynotes: Day 3

Software Visualization and Model Generation (Gregor Hohpe & Erik Doernenburg)

I am convinced that part of the job of an architect is to select, improve and, if need be, create tools to support the activity of developers. In this class, Gregor presented concrete cases and best practices for building simple and efficient tools that turn complex software into pictures we can understand.

This approach also concurs with Granville Miller's notion of "trailing shadows" that must come for free: a visualization tool that automatically produces models is useful for keeping track of the actual architecture of a system. Since systems evolve in an uncontrolled fashion in the agile world, models should evolve dynamically or be discarded.

Because we humans are good at spotting patterns, it is essential to generate simple pictures that focus only on the aspects we want to consider. Complete models of large systems are therefore, if not impossible to build, at least impossible to read and thus useless. Building focused ad-hoc models is the key. Gregor introduced a five-step approach for this:

  • Select Meta-Model: No academic work here! Simply define the elements, their relationships and the rules that apply to them. Popular meta-models include: metrics, directed graphs, trees and process models.

  • Inspection / Instrumentation: There are two approaches here. Static inspection of the system design (source code, configuration...) and dynamic instrumentation of the running system (profiling, message or network sniffing, log file parsing...).

  • Mapping to Model: Instead of mapping directly to graphic artifacts, map to the model, an intermediate abstraction that isolates the internal representation of the system from its visual representation (the two evolve differently and should not be coupled).

  • Visualization / Output: Use an automated graph layout tool (like GraphViz or JUNG); a minimal sketch follows this list.

  • Validation / Analysis: It is possible to apply rules against the model, like finding cycles in a dependency graph or identifying islands or root nodes.
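
As an illustration of the mapping and visualization steps, here is a minimal sketch (with hypothetical class names and dependencies) of how a model reduced to a directed graph can be emitted as GraphViz DOT markup, leaving the layout work to the tool:

import java.io.PrintWriter;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DependencyGrapher {
    public static void main(String[] args) throws Exception {
        // Hypothetical model: each class mapped to the classes it depends on
        Map<String, List<String>> model = new LinkedHashMap<String, List<String>>();
        model.put("OrderService", Arrays.asList("OrderRepository", "Mailer"));
        model.put("OrderRepository", Arrays.asList("DataSource"));

        // Map the model to DOT markup; GraphViz does the actual drawing
        PrintWriter out = new PrintWriter("deps.dot");
        out.println("digraph dependencies {");
        for (Map.Entry<String, List<String>> entry : model.entrySet()) {
            for (String target : entry.getValue()) {
                out.println("  \"" + entry.getKey() + "\" -> \"" + target + "\";");
            }
        }
        out.println("}");
        out.close();
        // Render with: dot -Tpng deps.dot -o deps.png
    }
}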

This session was an interesting confirmation of the things I am currently working on (a JCR repository grapher for a Web CMS). One very big disappointment though: when Gregor asked who knew the Spring Framework in the classroom, only a handful of hands came up. How despicable ;-)


API Design as if Unit Testing Mattered (Michael Feathers)

Another packed presentation by Object Mentor: this company sure is a guru seedbed. Even Josh Bloch and Gregor Hohpe attended... and in an active manner!

After fiddling a little with JavaMail, Michael quickly made the point that it is pretty easy to make an API that is not only convoluted but also very hard (if not impossible) to use in code you want to unit test.

An API is a complex software entity because, once you have published it, you have made an implicit commitment and cannot back away from it. A good design makes the API useful to its clients while keeping it extensible for future needs. It also must not impair the testability of the systems that use it.

Michael then listed some anti-patterns, like having only private implementers, exposing partially implemented superclasses, forcing the user to instantiate chains of objects and using static factory methods. To be frank, I kind of disagree with some of them: for example, if I provide a public interface, why should I also make public the implementing class I use? While agreeing with these points, Josh often gave ways to apply these patterns in a virtuous way, for example by using an SPI mechanism.

We also got a list of tips and tricks including:
  • avoiding static methods (unless an operation is completely replaceable or will never need to be replaced),

  • keeping the envelope of an API to its envelope of use (to which Josh responded that it is possible to expose advanced methods with plenty of warnings: see java.lang.Enum.name),

  • supplying interfaces for all public objects,

  • ensuring the users have ways to mock the API, else they will have to wrap it (see the sketch after this list),

  • avoiding making public classes final (unless there is a compelling reason for doing so),

  • practicing writing unit tests for code that uses the API (not only for the API itself),

  • supplying the unit tests of the API to the users.
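
To make the mocking tip concrete, here is a minimal sketch (all names are hypothetical) of an API exposing an interface instead of a static method, so that client code can substitute a test double:

public interface Clock {
    long now();
}

// Production implementation supplied by the API
class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

// Client code depends on the interface only...
class ExpiryChecker {
    private final Clock clock;
    ExpiryChecker(Clock clock) { this.clock = clock; }
    boolean isExpired(long deadline) { return clock.now() > deadline; }
}

// ...so a unit test can inject a fake instead of wrapping the API
class FixedClock implements Clock {
    private final long instant;
    FixedClock(long instant) { this.instant = instant; }
    public long now() { return instant; }
}

A test can then build an ExpiryChecker around a FixedClock and assert its behavior at any chosen instant.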

Another aspect to take into account is the politics of API design: who is responsible when an API changes? What is acceptable and what is not? As Michael himself stated, the purpose of this talk was not to give strict recipes but to increase awareness, which I think it did pretty well, judging by the heated discussions during and after the class.


The Social Enterprise: Innovation + Technology = Good Samaritan? (Carlos Baradello & Patrick Guerra)

Starting with the Good Samaritan story, this keynote led us through the reality and the necessity of social entrepreneurship. The good thing about it? It has nothing to do with pure philanthropy but is really about building sustainable business models that end up benefiting all the parties involved.

In geek talk, you will be glad to learn there is a 5 Giga Bucks market of 4 Mega People out there waiting to be tapped, but the right way. Like the Good Samaritan, seeing when others merely watch, and being able to connect dots where others stay in the blur, is of paramount importance. This involves all the usual entrepreneurial qualities (ambition, mission-driven, strategic thinking, resourcefulness, results orientation) plus an exacerbated desire to step out of the comfort zone.

How to leverage the lessons learned in Silicon Valley for that? How to combine opportunity and innovation in order to achieve sustainability, positive social impact and social benefit? How to strike a balance between a fully for profit enterprise and a fully non-profit organization?

To start finding answers, it is important to consider that we live in a highly interconnected world, where new types of diasporas have emerged because air travel is nowadays affordable and pervasive telecommunication makes it possible to maintain connectivity with one's roots. It is also helpful to consider the opportunities from a high-level point of view. Here is a list of the current ones: digital divide, health care, "collaboratories", affinity group aggregation, mobile knowledge, commerce and e-commerce, and off-the-grid infrastructure. Trying to build solutions in these fields by leveraging massively available technologies (like mobile phones) is another critical aspect.

To my sense, the main question raised by this keynote is the following: with all the knowledge we have, what are we going to do to improve the condition of mankind? To anyone considering new career options or reaching a certain age in the business, I believe this question should become a burning one.


Extreme Hiring: A Radical New Approach to Finding the Best Programmers (Brian Robertson)

In this very interesting session, Brian shared with us the lessons his company has learned from hiring programmers and the process they now have in place. As a direct teaser, it is important to know that their extreme hiring process allowed them to divide the number of applications by ten, which was a great thing because the actual applicants are now of much greater quality.

How to reach that goal? They first had to realize that the current pervasive vision of the world is still "business as machine", where machines are predictable, organized and made of parts. Where do people fit in this world view? Well, they are "human resources" (dub it: cogs). They came to favor an alternative world view, where humans are living systems that self-organize, sustain themselves and follow the rules of dynamic control, not predictive control. And so are organizations.

Hence hiring should not be a process of finding the cogs that best match some specifications by applying pattern matching on their specification sheets, er, their resumes. Dee Hock, of Visa International, said that we should "hire and promote integrity, motivation, capacity, understanding, knowledge and experience", in that very order of importance.

Brian broke this list into two categories: the first three items form "cultural fit and talent" and the last three "skills". Though skills can be acquired and built, talent and cultural fit cannot. Therefore it is of paramount importance for a company to first define the particular talents it needs and what its core values are (these will determine the cultural fit of a candidate, or the lack thereof). This introspection must be honest and based on actual facts, not wishes.

How to spot talents? The usual evidence is instantaneous responses, recurring patterns of behavior and passion. Brian stated the interesting fact that a candidate's strength is a skill served by a talent and that using this strength is energizing and satisfying. I find this definition of great interest.

So what recruiting process do they use? I do not want to disclose too much of it, mainly because I think each company should tailor its own in an iterative process (sure enough, there will be casting errors). This said, here are some of their key ideas:

  • Post job ads designed to attract great people, with clear cultural requirements and core values that allow candidates to self-select out,

  • Ask for custom application letters, not formal cover ones,

  • Make the applicant write several short essays replying to simple questions,

  • Offer resume submission as truly optional,

  • Ask for a remote coding test and offer the candidate the possibility to upload bits of code he has written in other contexts,

  • Do technical and personal phone interviews,

  • Do on-site interviews with team project simulation, debriefing, group and personal interviews and after hours socialization.

As you can guess, this takes an awful lot of time. But it is the price to pay to spot the talents and assess the general maturity and overall suitability of a candidate. A successful hiring process must show the company's values and culture in action: it is both an attraction and a screening tool. It is a tough objective but, as you can guess, it pays back each and every penny invested in it.


War Stories: Fighter Jets and Agile Development at Lockheed Martin (Mike Zwicker)

Man, SD West has been breathtaking until the last minute. This session, brilliantly presented by Mike Zwicker, was amazing! When you hear Lockheed Martin, you immediately translate it into "waterfall". Mike shared with us the story of introducing agile in this venerable company, detailing the compelling reasons for agile, its successes and its challenges.

Was there anything new here? Nope. Was it worth attending, then? Yes! Consider this: when you tell your other half you love her, is there anything new there? Nope. But it is just so good and so right to hear it again, and again, and again. Why? Because circumstances change every day, and this love you keep re-asserting is like these stories of successful agility in different enterprise contexts. It is sheer bliss to keep telling, over and over again, these stories of drastic improvements in productivity, reduced software defects, shortened time to market and lighter costs!

I can hardly sum up my notes so I will just share three facts out of Mike's prolific talk:

  • They have introduced agility in 3 phases: pilot (6-9 months), department launch (24-27 months) and enterprise rollout (24 months); each phase builds on the previous one. In fact, they have not started the last phase yet but are getting there.

  • They have opted for Scrum for the management framework it provides, but have also shopped for practices at XP, Crystal and other shops of delicacies.

  • They decided to use a tool (VersionOne) only because they had distributed teams.

One last piece of wisdom from Mike: "Under pressure? Do not abandon the practice!" Let us print this on motivational posters.


It is now time to leave sunny Silicon Valley. For me, SD West 2007 was a great vintage, with only one out of 15 presentations that turned out disappointing. The rest was extremely instructive, thought-provoking and entertaining.

Final note: slap on my hand! By turning AirPort off when not needed and tuning the performance setup, I could squeeze a lofty 4 hours of battery life out of my MacBook Pro. This machine really rocks.

Suggestion for Google's Next Acquisition

LinkedIn

This would be a great step for Google into high end social networking, as LinkedIn is a very credible actor in that field.

This would also be a great opportunity to deliver its content as a public service under an open format. This would benefit any application requiring the user to provide some sort of a biography (ever tried to use a job board resume builder?).

Thursday, March 22, 2007

SD West Classes and Keynotes: Day 2

Agile Architectures (Granville Miller)

8:30 and the room is packed! Architecture sure is a subject that attracts people and makes them react.

After reminding us that canonical agile does not mandate any architect role, Granville explained how the community has evolved towards recognizing the need for architecture. This said, the agile myth that "refactoring is enough to make architecture useless" is still alive. Though experienced developers will have good design reflexes, there are at least two compelling reasons for them to err on the side of architecture:
  • when they will need to step back in order to keep on progressing (ditto Ward Cunningham),

  • to ensure smooth integration of the different parts of a large project.

Hence when a developer wears the architect's hat, he will typically:
  1. partition the system on a whiteboard,

  2. discuss this partition to reach an agreement among the team,

  3. write unit tests, façades and scaffolding to lay out the baseline of the architecture,

  4. then take his hat off and code.

How long will the architecture defined at step 1 be valid? Well, about an iteration, which is by the way the timeline of these 4 steps. The main lesson is that agile architecture is not carved in stone but evolves with the rest of the system in order to satisfy the most important of the customer's rights: to change his mind without paying an exorbitant cost.

To visualize this reality, Granville presented the notion of shadow architecture on a chart that shows the transfer between leading shadows and trailing shadows. At the start of the iteration, leading shadows represent the majority of the architecture shadow: they are the products of whiteboard design sessions. At the end of the iteration, trailing shadows totally replace the leading shadows: they are the actual reification of the architecture, possibly visualized in code analysis tools. If some leading shadows remain, it means more work to do in the next iteration. This is a great tribute to Punxsutawney Phil and the doom a visible shadow represents for this innocent animal and the rest of us, Spring addicts (!).


Creating a Domain-Specific Language (Juha-Pekka Tolvanen)

During this class, Juha-Pekka explored the creation of a domain-specific language (DSL) in the particular context of interactive TV. The main goal of this process was to provide content producers with a limited language (15 concepts maximum, compared to UML's hundreds of concepts) that they could use to actually build interactive TV applications.

How to proceed? Here are four main steps:
  1. Identify abstractions (ask the producer to "mind map" the different use cases in order to build a metamodel),

  2. Define modeling concepts and rules (notation),

  3. Specify notation,

  4. Implement generators & test with reference cases.

The overall presentation then focused on creating the DSL using the speaker's company's tool, MetaEdit+, which unfortunately does not currently run on Intel Macs, so I could not do the exercise on my machine.

To my sense, like all vendor-sponsored talks, it was too focused on a particular tool and not enough on the available options out there. I was, for example, expecting to see lighter-less-box-and-arrows-and-XML-outcome things similar to what Martin Fowler is currently exploring in his bliki or to Drools DSL.
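
For a flavor of that lighter end of the spectrum, here is a minimal sketch of a fluent internal DSL in plain Java, with hypothetical names from the interactive TV domain; nothing to do with MetaEdit+'s graphical approach, but it shows the kind of thing Fowler is exploring:

public class Schedule {
    private String program;
    private String day;
    private int hour;

    // Each method returns 'this' so calls chain into a readable sentence
    public Schedule show(String program) { this.program = program; return this; }
    public Schedule on(String day) { this.day = day; return this; }
    public Schedule at(int hour) { this.hour = hour; return this; }

    @Override
    public String toString() {
        return program + " airs " + day + " at " + hour + ":00";
    }

    public static void main(String[] args) {
        // Reads almost like the producer's own language
        Schedule s = new Schedule().show("Morning News").on("Monday").at(8);
        System.out.println(s);
    }
}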


Agile Principles of Object-Oriented Class Design (Uncle Bob)

Uncle Bob right after lunch is probably the best cure for food drowsiness. His capacity to captivate the audience is amazing: I suspect he irradiates some mystical fluid that makes our bodies redirect energy from food processing to brain power. Or he is simply a plainly bright orator.

After a quick and obviously unconventional introduction to OO, Uncle Bob immediately exposed the root cause of why our application code starts to rot over time until it becomes an unmaintainable and tangled Gordian knot. And it is... drum roll... poor dependency management! Of course not (only) in the Maven sense but in the broader sense of dependencies between objects, packages and components.

Poor dependency management leads to rigid, fragile and non-reusable code. If any change implies going through the whole structure, if touching one place breaks the system in many places, if no bit of code is easily re-purposable, then you surely have tight coupling, which is a manifestation of poor dependency management. How to write solid code? Tattoo "solid" on your forehead? Nope. But use it as a mnemonic for the five principles of software development craftsmanship wisdom:

  • Single responsibility principle: a class should have one and only one reason to change (it must not support features that have completely different evolution life-cycles).

  • Open closed principle: adding a new feature must be done by adding new code, not by modifying existing one.

  • Liskov substitution principle: derived classes must be valid substitutes for their base classes.

  • Interface segregation principle: avoid fat interfaces that offer methods relevant to many use cases but mainly irrelevant to their users.

  • Dependency inversion principle: details should depend on abstractions, never the opposite (a minimal sketch follows this list).
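
To make the last principle concrete, here is a minimal dependency-inversion sketch (hypothetical names): the high-level policy depends on an abstraction, and the low-level detail implements it, so the detail can be swapped without touching the policy:

public interface MessageSender {
    void send(String message);
}

// A low-level detail, depending on the abstraction by implementing it
class SmtpSender implements MessageSender {
    public void send(String message) {
        // SMTP plumbing would go here
    }
}

// The high-level policy knows nothing about SMTP
class AlertService {
    private final MessageSender sender;
    AlertService(MessageSender sender) { this.sender = sender; }
    void raise(String alert) { sender.send("ALERT: " + alert); }
}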

In this quest for good dependency management, abstraction is the most important keyword. Everything should depend on abstractions and not on implementations, in order to reach the ultimate goal of independently deployable units. Because abstractions are critical to protect our application from the changes that will inevitably occur, how is it possible to figure them out in order to be safe? Should it be an upfront task? Is a crystal ball needed?

This is where the agile word comes into play, not as hollow buzz but as a reality: build the abstractions as you need them, which can be as often as you get feedback from your users. Hence expose your system to them early and frequently...


What's New in XML in Java 5 and Java 6 (Elliotte Rusty Harold)

XML is not only for girls and can actually prove useful, mind you! But parsing or processing XML never came for free in Java and has always required careful development practices to limit the impact on CPU or RAM. In this class, Elliotte presented the latest additions of Java 5 and 6 and how they can make XML-related development easier and more efficient.

Here is what Elliotte had in his grab bag for us today:

Java 5


JAXP: The API is at version 1.3 in Java 5 (with Crimson finally kicked out and replaced by Xerces) and at version 1.4 in Java 6.

DOM3: Among the many sweet additions of DOM Level 3, Elliotte insisted on the addition of get/setTextContent on the Node interface. I could not agree more: time and again I have seen developers pull in an extra XML API just because writing or reading text was too painful with plain old DOM Level 2. Now the JDK offers everything you need for regular XML twiddling: that is one dependency to remove, which is always good.
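
As a tiny illustration, here is what text handling looks like with these methods, using nothing but JDK classes (the element name is a made-up example):

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class TextContentDemo {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element title = doc.createElement("title");
        // DOM Level 3: no child Text node juggling required
        title.setTextContent("SD West 2007");
        doc.appendChild(title);
        System.out.println(title.getTextContent()); // prints SD West 2007
    }
}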

I also find Tree Position interesting, as you sometimes want to refer to a node without having to enter the realm of XPath. Other additions of note include node similarity and equality methods, better namespace support and the possibility to bind any custom user data to any node.

DOM has also evolved as a framework, with the capacity to control how documents are represented in memory (DOMConfiguration), new ways of creating documents (DOMImplementation) that advantageously replace the factory-based JAXP builders, and load and save features (LSParser and LSSerializer).

In the matter of parsing files, the newly added LSParserFilter can be leveraged to accept only certain nodes and hence reduce the memory footprint of a document, a good strategy to consider for large XML instances you do not fully need in memory. LSParserFilter can also be used to alter a document at load time (for example to change one namespace into another).

XPath: now available in v1.0 with namespace support (NamespaceContext) and custom XPathFunction support.
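
A minimal sketch of this API in action (the document and expression are made-up examples):

import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class XPathDemo {
    public static void main(String[] args) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        InputSource source = new InputSource(
                new StringReader("<book><title>Effective Java</title></book>"));
        // Evaluate the expression and get the result back as a String
        System.out.println(xpath.evaluate("/book/title", source));
    }
}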

Validation: has been added in a very versatile way, as you can select the schema language you intend to use (W3C Schema, Relax NG, DTD...). A validator also has the interesting ability to augment a document by adding optional elements discovered in its associated schema. It is now also possible to discover the type of a node based on the validator of the document.

Java 6

XML Digital Signature: which is apparently a pretty involved API that is not limited to XML signing...

StAX: a promising pull parsing technology that is fast, memory efficient, streamable and read-only. Sounds like SAX? Yes, but think of StAX as an inverted SAX where the application actually asks for events via method invocations instead of being called back when they occur. This allows the application to control the flow of events and hence orient parsing for greater efficiency.
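
Here is a minimal sketch of the pull style (the XML snippet is a made-up example): the application drives the loop and asks the parser for the next event, instead of registering callbacks as with SAX:

import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<books><book>Effective Java</book></books>";
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (reader.hasNext()) {
            // The application pulls the next event when it is ready for it
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "book".equals(reader.getLocalName())) {
                System.out.println(reader.getElementText());
            }
        }
        reader.close();
    }
}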

And what about the future? Elliotte mentioned that the XQuery API will probably make it into Java 7, while XML Encryption will probably wait a little longer before showing up in the JDK.


OK, I managed to conclude my blogging frenzy earlier today. Note that my main issue in this field trip blogging is the short life of my MacBook Pro battery...

Testing Toolbox Gets Richer

ZePAG has released the first version of Usurper, a neat tool for easily creating test instances of your value objects. It is a nice complement to the classical mocking and stubbing tools.

I warmly invite you to read his post presenting Usurper and then to rush to the tool's web site to start benefiting from its goodness right away.

And please, do not forget to ask him why he opted for the org.org package root!

Wednesday, March 21, 2007

Note to Blogger: Your Composer Sucks!

Sorry guys, but when I type a less-than or a greater-than sign in the compose mode of the post editor, I really intend to display < and > and not some kind of HTML element.

If I want to enter HTML elements, I use the "Edit Html" tab.

So why on earth is your editor unable to properly escape these damn characters? It took me 15 minutes to clean up the mess your stupid composer made of my generic code samples in my previous post, and the manual cleaning of the messy HTML produced was so tedious that I decided to drop half of the code samples.

Please, please, please do something about it. This is not rocket science. This is basic character escaping.

Thank you.

SD West Classes and Keynotes: Day 1

Only two hours of flight and here I am in the Bay Area! For a European used to enduring the kindness of airline crews for 13 excruciating hours before reaching the Golden State, this is a great difference. And no jetlag, so I do not even spend the whole day yawning like a big cow or wandering around like a red-eyed ghost.

In this post and the coming ones, I will try to share what I have learned or heard during the classes and keynotes of SD West 2007. Some classes are very dense, so I will only share their key points.


Effective Java Reloaded (Josh Bloch)

During this class, Josh served us a couple of appetizers, a main course and some sweet dessert! Dub it: a couple of neat tricks, serious generics material and some extra advice.

Appetizer 1: Leverage Type Inference

I did not know the compiler was able to infer the types in generic constructs. This allows this kind of cool syntax:

Map<String, Integer> map = newHashMap();

Note that I do not need to spell out the types again when calling this method, which is declared as follows:

static <K, V> HashMap<K, V> newHashMap() { return new HashMap<K, V>(); }

Is this not sweet? The compiler has inferred, from the map declaration, the types to use when calling the method.

Appetizer 2: Builder to the rescue

Instantiating immutable objects that have numerous attributes can be painful, especially if some of the attributes are optional. You end up with a class bloated with a bunch of telescoping constructors that try to represent the different construction scenarios that make sense. Should immutability be sacrificed and some good old setters added? Nope! Use a dedicated builder and voila:

MyImmutableClass mic = new MyImmutableClass.Builder(p1, p2).p3(value).p4(value).build();

In this example, the Builder is a public static nested class of MyImmutableClass whose only constructor is public and takes the mandatory parameters of said immutable object (p1 and p2). The optional parameters are added by calling methods named after the parameters (p3 and p4). Finally, the build method invokes the unique, private constructor of MyImmutableClass, passing the builder instance to it.

Another interesting aspect of this approach is that it emulates the named parameters that exist in other languages, a safer mechanism than relying only on parameter types and positions as in a classic constructor.
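
Here is a minimal sketch of what MyImmutableClass could look like, following Josh's description (the attribute types are arbitrary):

public class MyImmutableClass {
    private final int p1;
    private final int p2;
    private final int p3;
    private final int p4;

    public static class Builder {
        private final int p1;
        private final int p2;
        private int p3; // optional, defaults to 0
        private int p4; // optional, defaults to 0

        public Builder(int p1, int p2) { // mandatory parameters only
            this.p1 = p1;
            this.p2 = p2;
        }

        public Builder p3(int value) { this.p3 = value; return this; }
        public Builder p4(int value) { this.p4 = value; return this; }

        public MyImmutableClass build() { return new MyImmutableClass(this); }
    }

    // The unique, private constructor: only the Builder can call it
    private MyImmutableClass(Builder builder) {
        this.p1 = builder.p1;
        this.p2 = builder.p2;
        this.p3 = builder.p3;
        this.p4 = builder.p4;
    }
}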

Main Course: Making the most of generics

Here are a couple of tricks or reminders to improve our usage of generics:
  • Use @SuppressWarnings("unchecked") at the smallest possible scope, i.e. a variable declaration. Do not hesitate to create a local temporary variable just for that purpose: it is still better than adding the annotation to the enclosing method.

  • To make life easy for users of the generic public methods of an API, prefer a wildcard to a type variable if the variable appears only once in the method signature; the exception to this rule being conjunctive types, where you need a type variable.

  • Even if our first impression tells us the opposite, a collection of a subtype is not a subtype of a collection of the supertype. This is worth remembering, as it helps in understanding the origin of some very cryptic compilation error messages. It is also the root cause of the next tip.

  • Use bounded wildcards to increase the usability of the collection-related methods of an API. More precisely: use <? extends T> for read/input collections and <? super T> for write/output collections, else you might end up with pretty useless collections, i.e. collections that only offer basic read methods. I remember this being explained in detail by Uncle Bob when he was writing the Craftsman column for SD Magazine, but it is good to hear it again!

  • Do not confuse bounded wildcards and bounded type variables. The former makes a lousy return type and should only be used for parameter types.

  • Leverage the non-instantiable Void class instead of using a wildcard when you want to support the notion of "nothing", for example: Future<Void> submit(Runnable task), or Map<X, Void> for a map where the values are of no importance.

  • Generics and arrays do not mix well: prefer generics whenever possible (but do not throw arrays away, as they are useful too).

  • Leverage the new Class.cast method to perform runtime casting compatible with generics. This allows building Typesafe Heterogeneous Containers (not related to this other THC) that can handle multi-typed dynamic objects like database rows.
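
A minimal sketch of such a container, along the lines of Josh's favorites example: the Class object serves both as map key and as runtime type token, and Class.cast restores type safety at retrieval time:

import java.util.HashMap;
import java.util.Map;

public class Favorites {
    private final Map<Class<?>, Object> favorites = new HashMap<Class<?>, Object>();

    public <T> void put(Class<T> type, T instance) {
        favorites.put(type, instance);
    }

    public <T> T get(Class<T> type) {
        return type.cast(favorites.get(type)); // generics-friendly runtime cast
    }

    public static void main(String[] args) {
        Favorites f = new Favorites();
        f.put(String.class, "Java");
        f.put(Integer.class, 5);
        String s = f.get(String.class); // no explicit cast, no unchecked warning
        System.out.println(s + " " + f.get(Integer.class));
    }
}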


Dessert 1: Do the right "over"

Always use the @Override annotation to ensure at compilation time that you are actually overriding and not overloading members (remember the infamous equals overloading that you have written at least once).

Dessert 2: Final word

Declaring all variables final should become a typing reflex, except, of course, when an immutable object is not suitable (for example, if it creates serialization or cloning issues).


Forgotten Algorithms (Jason Hunter)

Since I am not very well versed in the art of algorithms, I decided to attend this session and see how much a plumber like me was missing in terms of the behind-the-scenes magic that happens in my beloved computers.

The in-depth exploration of several concrete cases (swapping without a temporary variable, the credit card check aka the Luhn algorithm, public key cryptography, negative number storage strategies, Google MapReduce) was interesting and entertaining, but left me with the feeling that either I was missing the real big picture or there was no big picture at all.
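
For the record, here is a minimal sketch of the Luhn check mentioned above: starting from the rightmost digit, double every second digit, subtract 9 when the doubling exceeds 9, and verify that the sum is a multiple of 10:

public class Luhn {
    static boolean isValid(String number) {
        int sum = 0;
        boolean doubleIt = false;
        for (int i = number.length() - 1; i >= 0; i--) {
            int digit = number.charAt(i) - '0';
            if (doubleIt) {
                digit *= 2;
                if (digit > 9) digit -= 9;
            }
            sum += digit;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isValid("79927398713")); // true: the classic test number
    }
}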


Challenges in Scaling Infrastructure (Felipe Cabrera & Jeff Barr)

In this keynote, Amazon gurus and enthusiastic clients came to talk about AWS goodness.

With great humor and sharp insight, Felipe explained the needs of typical AWS clients (fast growth, exploding scale of infrastructure), their classic options (build their own infrastructure or outsource it a la IBM), their requirements (on-demand scalability, elastic capacity, high performance and availability, rock solidity and cost effectiveness) and their expected benefits (the ability to focus on product and core competencies, not data center stuff). Felipe then explained the tough design decisions they went through and how he guided those decisions by sticking to the blessed principle of "simplicity first".

After that, Doug Kaye of GigaVox Media and Marten Nelson of Abaca Technology presented how they use the different services of the AWS suite, namely S3, EC2 and SQS. The main lessons are: a development shift toward loosely coupled asynchronous services, the attention that must be paid to the impact of this on GUIs, and the necessity of rethinking approaches to map best onto AWS's simple APIs (for example, there is no way to count the number of messages waiting for processing in a queue, but you can keep track of their insertion time and determine latency, which is an even better way of monitoring a system).
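
The latency idea is simple enough to sketch without any AWS API (names are hypothetical): stamp each message with its insertion time and compute its age when it is consumed:

// Hypothetical wrapper: the enqueue timestamp travels with the message body
class StampedMessage {
    final String body;
    final long enqueuedAt = System.currentTimeMillis();

    StampedMessage(String body) { this.body = body; }

    // Age of the message at consumption time, i.e. the queue latency
    long latencyMillis() { return System.currentTimeMillis() - enqueuedAt; }
}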

Felipe concluded by quickly sketching the near future of AWS, which will mainly consist of making S3, EC2 and SQS work better together by using consistent protocols.


The Buzz about Fuzz: A powerful way to find software vulnerabilities (Herbert Thompson)

I am deeply convinced that any developer should take serious security courses, maybe because, these past years, I have seen enough monstrosities like the Share My Drive Servlet, an innocent servlet intended to serve generated files from a well-defined folder but revealed to be able to serve the full file system. That is why I decided to attend Thompson's class, as he is a renowned security expert. In fact, the true reason I attended is that Herbert is blessed with terrific stage presence!

In his talk, he presented how fuzzing, a testing technique that consists of finding weird behavior in software by feeding it random data, is becoming more relevant for today's software, which tends to be highly complex and composed of heterogeneous components.

The main keywords were:
  • Fuzzable, which is basically any piece of software that accepts input.

  • Random Fuzzing, which is the simplest and least efficient form of the art of fuzz (a minimal sketch follows this list).

  • Context Sensitive Fuzzing, which goes one step beyond the previous one by building flawed but apparently correct random data (for example by computing correct checksums).

  • Adaptive Fuzzing, which is the most advanced technique: it tries to produce data that will explore the whole code base of an application, using real-time tracers / decompilers to guide the fuzz generation. Similarly to test code coverage, this allows one to decide when an application has been fuzzed enough.

  • Oracling, which is the process of defining what is a correct application outcome and what is not.
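
Here is a minimal sketch of the random flavor (the parse method is a hypothetical stand-in for the real code under test): feed random bytes in a loop and flag any unexpected exception:

import java.util.Random;

public class RandomFuzzer {
    public static void main(String[] args) {
        Random random = new Random(42); // fixed seed keeps failures reproducible
        for (int run = 0; run < 10000; run++) {
            byte[] input = new byte[random.nextInt(1024)];
            random.nextBytes(input);
            try {
                parse(input); // the fuzz target
            } catch (IllegalArgumentException expected) {
                // cleanly rejecting bad input is acceptable behavior
            } catch (RuntimeException crash) {
                System.err.println("Run " + run + " crashed: " + crash);
            }
        }
    }

    // Hypothetical target, standing in for the real code under test
    static void parse(byte[] input) {
        if (input.length > 0 && input[0] == 0x42) {
            throw new IndexOutOfBoundsException("weird behavior found");
        }
    }
}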

Herbert advises fuzzing an application until it is actually phased out, as it is always less embarrassing and costly to find a vulnerability internally than to let your clients do it for you.


Designing Contracts for Web Services (Christian Gross)

I was expecting a lot from this class, as designing good APIs is one of my subjects of concern and because I have recently struggled with the difficulty of building such APIs on current web service technologies.

International expert Christian Gross left us with four steps:
  • Define use cases in order to build services clients will actually be willing to use and pay for.

  • Choose your technology but do not care too much about it, and favor the usage of an intermediate broker that can bridge the different technologies.

  • Decide on a context, i.e. which of a data-oriented, RPC-oriented or messaging-oriented service will best fit, knowing that RPC remains the paradigm of choice.

  • Stick to simple types and avoid playing with complex object return types.

Christian presented a matrix for picking the best web service technology depending on the type of client and server (browser, dynamic language, classic typed language). I must confess that I felt the talk was too oriented toward AJAX stuff: in fact, the title of the PowerPoint document was "Developing AJAX Applications", so I had the feeling the presentation had been repurposed. This said, Christian did a great job emphasizing pragmatic strategies and decision paths.

Why Software Sucks? (David Platt)

Software sucks because we, unrepentant geeks, have not yet recognized that the users are not us! Since the web became pervasive, the vast majority of users are now common people of the real world. They care about one thing only: the purpose of your software and how it will allow them to do what they intend to; they could not care less about the software itself.

To try to correct this attitude in our industry, David shook the audience with a hilarious yet magisterial keynote and left us with five recommendations:
  • Add a virgin to the team: well, someone virgin to the software itself, who will focus only on what it does and will be able to question each aspect of it, even the most fundamental ones. Unless you have access to some MIB technology, it is not possible to make a developer forget about the application itself and focus on its features.

  • Break convention if needed: the fact that every application does the same thing does not mean it is right. For example, is it really good to ask the user if he wants to save the data he just modified, or would it not be better to save changes by default and let the user skip saving only if he really needs to?

  • Do not let edge cases complicate the main flow: developers will often be tempted to implement extra features based only on the technical possibilities a particular technique gives them. These bonus features usually prove counterproductive for the end user.

  • Instrument - Carefully: instrumenting an application allows you to get solid, concrete facts on what users are actually doing with your application and, therefore, to be in a better position to decide which features make sense to improve or phase out. Of course this requires full disclosure, user agreement and no intrusion of the instrumentation into the application.

  • Question each decision: ask whether each decision takes your project closer to the result or further away from it.

Phew, what a day! I am on my knees... More fun to come tomorrow.

Monday, March 12, 2007

Going South to SD West

I will be attending SD West 2007 from Wednesday to Friday.

In case you do not know this conference, let me highly recommend it: not centered on a particular technology or vendor, though clearly organized in specialized tracks, it is the place to get the latest feedback from the greatest experts of our industry.

If you feel like meeting me, for example to discuss NxBRE, that would be a great place and time to do so. Just shoot me an e-mail.

Wednesday, March 07, 2007

Accelerated Diversification

Wow. A year ago, all the workstation operating systems I was using, either at home or at work, were Microsoft Windows based. It was a full range of old to recent machines, running everything from Windows 98 OSR2 to Windows XP Professional.

One year later, my landscape has drastically changed, and I just realized how much. For work, I am using a MacBook Pro. At home, we now run a MacBook and a dual-boot Kubuntu/Windows XP Professional Dell Precision M90. The only reasons I boot into Windows are .NET development (for NxBRE) and gaming. I do all the rest, from Java development, office work and personal finance to writing this blog, on Kubuntu.

This transition to an almost Microsoft-less world happened naturally, mainly because there are now software and hardware options that make it easy to opt out: office productivity suites (like OpenOffice, NeoOffice or KOffice), a good browser, a cool IDE... plus the Intel-based Macintosh machines.

Of course, this diversification entailed some turbulence like:

  • Keyboard shortcut mix-up: switching between OSes disturbs a well-trained brain used to its shortcuts. It is worth tuning these shortcuts to unify them as much as possible, to maintain a good productivity level.
  • Application downgrade: Microsoft Office is still way ahead of its open source followers in terms of comfort of use. A special mention to Entourage, a Microsoft sub-Outlook for the Mac that would have been better named Sabotage.
  • Driver quagmire: not all of my laptop's devices are recognized by Kubuntu, and printing to a network-shared HP PSC 750 from the Mac has never really worked.

But all in all, this diversification is a good thing. As I said before, I do not long for a world unified on Linux or Mac, but for one where each OS has a fair share of users.

What about Vista? Mmmmh, not yet on my horizon, not even a faint blip on my radar screen. Maybe for my next machine, in half a decade. Or not.