So .NET is legacy and WCF is here to increase your productivity? What is this all about? Juval did a great job demonstrating how, by building this new platform on the CLR, Microsoft has delivered a complete development environment that offers a clean and efficient programming model for "enterprise" applications.
But is there anything new here? For .NET developers, surely yes. But from a JEE development standpoint: not really. All this sounds like a mix of EJB3 (framework-free classes, remote exceptions), the JBoss call stack model (dynamic proxies, client-side and server-side interception), a unified synchronous/asynchronous invocation model and workflows for long-running operations.
To be fair, in this big mix of already-known stuff, there are some pretty powerful features, like resilience to change in service contracts. WCF goes to great lengths to transparently allow clients and servers at different version levels to keep exchanging messages, even after their contracts have changed.
Moreover, unlike the usual stack of disparate half-baked products that is common in Java-land, WCF is a typical Microsoft product: it comes complete with a wealth of tools (like the pretty impressive visual call stack analyzer) and offers developers a trustworthy and stable development framework. Conclusion for .NET developers: if you are not using WCF today, do not delay any longer and start leveraging it!
Behavior-Driven Database Design (BDDD) - Scott Ambler
Scott warned the audience: he is going to be blunt. And he was. Let me quote him: "any monkey can rename a column in a production database"! If this sounds like a scary prospect to you, do not think you are alone: surveys show that a majority of companies still find this a challenge. Why is it so? Mostly because of the mystical belief that what is in the database is perfect and trustworthy, hence does not require testing.
No testing? Wait a minute. Surveys show that a majority of companies have business-critical functionality in their databases (triggers, stored procedures...). So... no testing? Does this sound reasonable? When all serious software developers are now test infected, is it acceptable that the data management community lags ten years behind in terms of quality-oriented practices? When the problem of bad data quality is estimated to cost $600B per year in the USA, can we keep going on like this? Of course not.
Is this lack of testing the only factor that makes database refactoring look so hard? Not at all. Scott stated another factor very clearly: poor data access architectures that lead to tight coupling with the database itself. Ouch! Blunt again. True again.
So how can BDDD help? In short: BDDD is an evolutionary and test-driven (not model-driven) approach to refactoring databases. It encourages us to treat databases as nothing special, hence to dare applying the whole panoply of agile software development practices to them, which are, to name a few: continuous integration, automated builds, SCM, versioning, developer sandboxes, granular refactorings (to minimize the risk of collision) and regular deployments between environments (frequent from developer to integration, less frequent to QA, highly controlled to production).
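To make "granular refactoring" concrete, here is a minimal, hypothetical sketch (my own, in Python with SQLite, not anything Scott showed): a new column is introduced alongside its legacy counterpart, a trigger keeps both in sync during a transition period, and an automated test guards the change before it moves from one environment to the next.

```python
import sqlite3

def apply_refactoring(conn):
    """Granular refactoring: introduce customer.full_name alongside the
    legacy 'name' column, keeping both in sync during a transition period
    so that old and new code can coexist."""
    conn.execute("ALTER TABLE customer ADD COLUMN full_name TEXT")
    conn.execute("UPDATE customer SET full_name = name")
    # A trigger keeps legacy writers working until they are migrated.
    conn.execute("""
        CREATE TRIGGER sync_full_name AFTER UPDATE OF name ON customer
        BEGIN
            UPDATE customer SET full_name = NEW.name WHERE id = NEW.id;
        END
    """)
    conn.commit()

# Regression test: run against a sandbox before promoting the change.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (name) VALUES ('Ada')")
apply_refactoring(conn)
assert conn.execute("SELECT full_name FROM customer").fetchone()[0] == "Ada"
conn.execute("UPDATE customer SET name = 'Grace' WHERE id = 1")
assert conn.execute("SELECT full_name FROM customer").fetchone()[0] == "Grace"
```

The point is not the SQL dialect but the workflow: the refactoring is a small, scripted, testable unit that can be versioned and deployed like any other piece of code.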
Because it is so different from traditional approaches, it appears as a threat to many data professionals. It should not, as their skills and knowledge are needed for this refactoring to succeed. Moreover, with many developers now using some sort of O/R mapping tool and tempted to dumb databases down to "just storage", the feedback of data professionals can help leverage the distinctive features of databases, which must be considered first-class citizens of a software architecture (as application servers are).
The following stop sign, photographed in Sacramento, sums up the current situation: Data and Quality orthogonal to each other, with a big stop sign in the middle.
It does not have to stay this way. With the software development and data communities working together, a little understanding and a good share of courage, it is possible to make things better!
Object-Oriented Programming and Generic Programming and What Else? (Bjarne Stroustrup)
When the father of C++ gives a keynote about OOP and GP, everybody sits quietly and listens, because everybody knows that anything less than full concentration will not be enough to keep up. I did my best to follow the master and did not regret it! Bjarne detailed and compared the strengths and weaknesses of OOP versus GP (using the classical shapes example). He then presented where C++ is heading (with its 0x version), making the interesting statement that the needs of concurrent programming will increasingly shape the destiny of languages.
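The OOP-versus-GP contrast from the shapes example can be loosely sketched outside C++ too. Here is my own illustrative take in Python (not Bjarne's code): the OOP version requires every shape to inherit from a common base class, while the generic version accepts any type that offers the right operations (C++ would do this at compile time with templates; Python does it structurally at runtime).

```python
from abc import ABC, abstractmethod

# OOP style: runtime polymorphism through a common base class.
class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self) -> float:
        return self.side ** 2

def total_area_oop(shapes: list[Shape]) -> float:
    # Only accepts objects belonging to the Shape hierarchy.
    return sum(s.area() for s in shapes)

# GP style: no common base class required; any type with an area()
# operation fits, even one defined by a third party.
class Triangle:  # note: deliberately NOT a Shape subclass
    def __init__(self, base, height):
        self.base, self.height = base, height
    def area(self) -> float:
        return self.base * self.height / 2

def total_area_generic(shapes) -> float:
    # Works with any mix of types providing area().
    return sum(s.area() for s in shapes)
```

The trade-off Bjarne discussed shows up even in this toy: the OOP version gives an explicit, enforceable contract; the generic version gives flexibility and avoids intrusive inheritance.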
Bjarne was honored with the Dr. Dobb's Excellence in Programming Award later that day, a well-deserved recognition for the extent of his contribution to our field.
Web Caching (Jason Hunter)
After reminding us why web caching is critical (which I will not detail here: if you do not know why it is, you had better stop reading this blog and start searching the web), Jason detailed the basics of HTTP before delving into five different cache techniques from Yahoo's Mike Radwin. Here is a non-exhaustive list of what he mentioned:
- browser caching plus conditional GETs for revalidation (leading to very cheap 304 replies from the server),
- proxy caches operating at different levels (company, country...),
- being aware of what can prevent caching (cookies, authentication) and applying mitigation techniques,
- serving static content from a cookie-free domain or using cache control directives,
- deciding that images never expire and serving new versions of them under different names (there is an Apache mod that helps with this),
- leveraging cache and expiry directives correctly (for example, instead of using 0 to mean "already expired", use a date far in the past, but not before the epoch),
- not trusting any client to understand these directives, or their latest versions: cover the whole spectrum of parameters and plan for stubborn clients (for example, a 302 redirect to a highly cached site).
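To make the revalidation and far-future-expiry techniques above concrete, here is a small sketch in Python (my own toy code, not from Jason's talk) that builds the relevant response headers and answers a conditional GET with a cheap 304:

```python
from email.utils import formatdate, parsedate_to_datetime

ONE_YEAR = 365 * 24 * 3600

def cache_headers(last_modified_ts, never_expires=False):
    """Build response headers: a Last-Modified stamp so clients can
    revalidate with a conditional GET, and optionally a far-future
    expiry for versioned static assets."""
    headers = {"Last-Modified": formatdate(last_modified_ts, usegmt=True)}
    if never_expires:
        # Versioned assets (e.g. logo.v42.png) can be cached "forever".
        headers["Cache-Control"] = "public, max-age=%d" % ONE_YEAR
    return headers

def respond(request_headers, last_modified_ts):
    """Return a 304 with no body when the client's copy is still
    current, otherwise a 200 with fresh cache headers."""
    ims = request_headers.get("If-Modified-Since")
    if ims and parsedate_to_datetime(ims).timestamp() >= last_modified_ts:
        return 304, {}
    return 200, cache_headers(last_modified_ts)
```

Real servers and frameworks handle this for you; the sketch just shows why a revalidation round-trip is so cheap: the 304 branch sends headers only, no body.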
The Busy .NET Developer's Guide to Rules and Rules Engines (Ted Neward)
I always wonder how good or bad the stuff I am doing with NxBRE is, so I decided to attend this session and listen to Ted giving a review of business rules engines in the world of .NET.
We all understand that business rules are natural members of software applications and, I hope, we all realize that hardcoding them makes our life more difficult, especially for rules that change often and bear a lot of conditional branching. Hence the rise of business rules engines (BREs), whose origins are rooted in the artificial intelligence efforts and expert systems of the 70s. Ted gave this interesting definition of BREs:
"Business rules engines are generalized expert systems in which the expertise is missing, to be entered via some form of programming language at a later date"
I like it because it relates BREs to expert systems while showing the part left to developers. It reinforces the idea that there will be a learning curve and that integration will be needed, hence that a BRE will not be for every project nor every budget.
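To illustrate Ted's definition (an engine where the expertise arrives later, as data), here is a toy forward-chaining loop in Python; it is purely illustrative and reflects neither NxBRE's nor Drools' actual API:

```python
def forward_chain(facts, rules):
    """Toy forward-chaining inference: repeatedly fire any rule whose
    conditions all hold, until no rule can add a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# The "expertise" is plain data, entered separately from the engine:
# each rule is a (conditions, conclusion) pair.
rules = [
    ({"gold customer", "order > 1000"}, "free shipping"),
    ({"free shipping", "express requested"}, "upgrade to express"),
]
```

Note how the second rule fires off the conclusion of the first: chaining conclusions into new conditions is exactly what makes an engine preferable to hand-coded conditional branching.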
Other interesting aspects to consider when using such beasts include rule editing (and the mythical "business user" editor), testing and validation (on which you can read my take) and the operational lifecycle (promotion to production and vice versa).
I am glad that Ted mentioned NxBRE in this session (without throwing stones at it, woohoo), even if he mainly focused on the Microsoft Workflow Engine and Drools, aka JBoss Rules .NET (definitely more interesting for the audience anyway, as WF is available to anyone with .NET 3.x while Drools has been around for a long while).
The fact that he did not mention RuleML even once, not even when making fun of MS WF's indigestible XML syntax, puzzled me... maybe RuleML still tastes too much of academia for the industry?
18th Annual Jolt Product Excellence Awards
Dr. Dobb’s Excellence in Programming Award
When someone asked Uncle Bob if he would attend the Jolt ceremony, I overheard him reply "naaahh". Still, I can remember him four years ago cracking jokes from the first row with Alexa. So is all the fun gone for real? It was with these thoughts in mind that I entered the theater, ready to be more surprised by the results than ever, as I unfortunately had to drop Jolt judging for this round.
Well, this ceremony was simply excellent. The whole process had been streamlined, with the Productivity Awards simply named, and good synchronization between drum rolls and slides! Robert X. Cringely, who was hosting the event, first gave us a great insight into the transient nature of the software we produce, while stating that this transience takes nothing away from its importance. He then proceeded to the formal trophy-handing celebration with brilliance and pizzazz.
I leave it up to the DDJ site to list all the winners of this year. My highlights are the following:
- Atlassian (or should I say Cenqua?) received well-deserved awards for both Fisheye and Clover.
- Smart Bear has been recognized for Code Collaborator, whose company-published book I have talked about before.
- O'Reilly Radar was high on the radar screen! If you do not read it yet, well, you know what you have to do...
- I was really hoping to see the Spring Framework get the Jolt but it went to Google's Guice. I am not convinced that Guice will ever jolt the industry as Spring does. But, hey, a vote is a vote!