Friday, February 27, 2009

Conversation with a Web Thread

DD: Hi Mr. Web Thread and thanks for joining us.
WT: My pleasure. Do you mind if I stay in the pool?

DD: Hmm? Sure, why not. So, can you please tell us how your life is these days?
WT: Life has been pretty good. I have become very popular recently and have come to perform some massive gigs on highly trafficked web sites. I really like this pool.

DD: Mmhh, okay. How do you think developers treat you, nowadays?
WT: Well, I am glad you ask. I think things have improved a lot, thanks to the emergence of concepts like continuations and AJAX. Still, I sometimes get badly beaten by some reckless coders. This pool is awesome.

DD: So, if you were to give a piece of advice to these programmers, what would it be?
WT: I think that these developers only need to understand what the ultimate goal of my life is.

DD: Which is?
WT: Coming back to that pool!

DD: Pardon me?
WT: You see, I get pulled out of this pool very often and sometimes I am forced out of it for too long. In that case, I am like a fish out of water. And when I suffer like that, the whole application suffers.

DD: So what you are saying is that developers should strive to let you return to the pool as quickly as possible?
WT: Absolutely.

DD: Whatever the cost may be?
WT: Well, it depends of course, but if they can afford to let me return with slightly stale data, that is the thing to do.

DD: Where would they get such stale data from?
WT: Oh, I realize I never introduced you to Mrs. Web Thread, née Cache. She usually takes care of this.

DD: But ladies, sorry, caches are complex. They require eviction strategies, invalidation message broadcasts, etc.
WT: Or not. You can use a simple time-evicted opportunistic cache, provided it matches your business needs.

DD: Ah, I see, something like a 5-minute cache.
WT: Or 5 seconds.

DD: You've got to be kidding me: what's a 5-second cache worth?
WT: Do you have any idea of all the things I can do in 5 seconds? Do you realize that it is an agonizing eternity for me?

DD: Well, uh, I guess not. Let us lighten up the debate a little. Now that you are a rock star, thanks to your success in multi-million-user web sites, do you get to sign a lot of autographs? Do people recognize you a lot?
WT: Sure, I can hardly go anywhere without being spotted. But, believe it or not, it still happens that I get mixed up with my cousins.

DD: Oh, you have cousins?
WT: Yes, Worker Thread and Background Thread. Sometimes, I get confused with one of them and receive way too much work to do or work that simply does not concern me at all.

DD: What do you do in that case?
WT: As I said before: I become grumpy and the whole web tier suffers with me too. I kind of like to share my pain!

DD: OK, Mr. Web Thread, I think it is time we wrap up now. Thanks a lot for your time and sharing your experience with us.
WT: You are welcome. Now back to the pool.

Splash!
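For the record, the kind of time-evicted opportunistic cache Mrs. Web Thread has in mind can be as simple as the following sketch; it is illustrative only, not tied to any particular caching library, and uses the 5-second TTL from the conversation above:

import java.util.concurrent.Callable;
import java.util.concurrent.atomic.AtomicReference;

// A minimal time-evicted opportunistic cache: callers get the cached value
// until it is older than the TTL, at which point whoever notices the
// staleness pays for the refresh (no locking: a sketch, not production code).
public class TimeEvictedCache<T> {
    private static final long TTL_MILLIS = 5000L; // "Or 5 seconds."

    private final Callable<T> loader;
    private final AtomicReference<Entry<T>> current = new AtomicReference<Entry<T>>();

    public TimeEvictedCache(Callable<T> loader) {
        this.loader = loader;
    }

    public T get() throws Exception {
        Entry<T> entry = current.get();
        if (entry == null || entry.isStale()) {
            // Whoever notices the staleness reloads; between refreshes every
            // caller gets data that is at most TTL_MILLIS old, which is the
            // accepted trade-off for returning to the pool quickly.
            entry = new Entry<T>(loader.call());
            current.set(entry);
        }
        return entry.value;
    }

    private static class Entry<T> {
        private final T value;
        private final long loadedAt = System.currentTimeMillis();

        private Entry(T value) {
            this.value = value;
        }

        private boolean isStale() {
            return System.currentTimeMillis() - loadedAt > TTL_MILLIS;
        }
    }
}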

Just Read: Statistics Hacks


More than twenty years after my last statistics class, this book really felt like a rejuvenating read. It is well structured, with an opening focused on theory followed by numerous applications in all sorts of domains (yes, including poker, though my preferred subject was the Drake Equation). As such, the book will stand as a quick reference guide to which the reader will return every now and then.

Recommended for aging minds in need of a refresher (like mine) or curious minds wanting to learn more about statistics and how relevant they are to their everyday lives.

As a side note, this is the first e-book I bought: I got it in EPUB format, which renders almost flawlessly on my Sony PRS-700 (text, figures, and diagrams come out well, though very large tables get cut). Kudos to O'Reilly, who do a truly great job with their e-books, providing them in three formats and not limiting the number of times you can download them (so I don't even bother backing up my purchases).

Wednesday, February 25, 2009

Standards vs. Upgrade pains

In Standards Based vs. Standardized, Neal Ford develops a very interesting argument that is mainly focused on what he sees being "wrong with SOA (Service Oriented Architecture) in the way that it's being sold by vendors", but it really touches a vast subject with many thought-provoking ramifications.

While considering the application upgrades I am involved in or that occur around me, it is clear that applications deployed on standardized platforms have easier upgrade paths. Or do they?

Let me consider the whole spectrum of applications I am involved with and discuss the related upgrade pains, how standards play a role there and whether they are worth the effort. I won't discuss the necessity of upgrading application platforms: I reckon everybody feels the same itch when they see an application running on a platform that is many releases behind its current revision.

Non-standardized

Neal mentions ESBs as canonical non-standardized platforms, so I will start with them and consider my experience with keeping up with Mule versions. Many aspects of Mule are standards-based, from its protocol stacks to the zillions of libraries it is built upon (which I consider industry standards). Even its configuration mechanism piggybacks on Spring's schema-driven configuration. What is not standardized are Mule's API and configuration syntax. They therefore tend to evolve quickly, and sometimes drastically, as Mule expands and refines its feature base. Consequently, following Mule versions is no small feat: it requires a pretty advanced knowledge of the platform to be done right (and the assistance of extensive unit and integration tests, but everyone has those already, right?). Why would anyone willingly agree to jump through such hoops? Because it is worth it: being exposed to Mule's non-standard aspects is rewarded with full access to its complete feature set and its full power.

Standardized

At the other end of the spectrum lie web applications that fully pack their dependencies and can be deployed on pretty much any container that understands their standardized configuration. A typical example is a Spring/Hibernate application deployed on a JEE web container, like Tomcat or Jetty. It is worth noting that, even if this kind of application can have very complex needs, its reliance on the JEE standard is actually quite small: a few entries in web.xml that hook some basic application entry points to the container, and that is pretty much it. The rest is not standardized but fully resides in the realm of what the application controls (Spring and Hibernate configurations, for example). Hence migrating between different versions, or even different implementations, of the JEE container is usually a no-brainer.

Somewhat standardized

Finally, in the middle of the spectrum, we find the applications deployed on platforms where standards fall short of allowing them to easily follow releases. Consider a full-fledged JEE application, say deployed on JBoss. In this case, two forces act against the standard. The first one is the limited scope of the standard: anything that has been left to the interpretation of the implementor will likely differ enough between major releases to cause trouble. The second force is the fact that the standard itself is a keyhole applied to the full range of features of the platform: these features are there, accessible, often compelling, and it requires a lot of willpower to avoid using them directly (by shielding them behind a custom facade) or to avoid them altogether (by pulling in extra dependencies that provide the needed features). These two forces combined usually lead to full-range JEE applications that do not evolve gracefully when their underlying standardized platform is upgraded.

So, there is a clear trade-off between having full access to a platform and being isolated from the disruptive aspects of its evolution over time. It is also true that standards with more teeth are probably needed (OSGi is an interesting one in this matter).

But everybody knew this already.

Sunday, February 22, 2009

Two minutes on Ping.fm

Always interested in sharing the futility of my own existence, I thought that I could use Ping.fm to broadcast my boredom level (i.e. LinkedIn status) on different channels.

All went fine until I hit the page for connecting my Ping.fm account to my LinkedIn one. See for yourself:

Seriously, guys, you want the password of my LinkedIn account? Is this really the best we can do for integrating social networks? Are we waiting until the Wrath of OWASP falls on us to fix this?

Then I closed my Ping.fm account.

Thursday, February 19, 2009

Integration Testing Asynchronous Services

Whether you use Mule or another integration framework, writing integration tests for asynchronous services can be a little tricky, as you may start running assertions in your main test thread while messages are still being processed.

Suppose you want to ensure that your content-based router sends messages to the expected target services. Your integration test suite will consist of sending valid and invalid messages to the input channel in front of this router and checking that they end up in the expected destination services.

But how can you be sure that the delivery has happened and that it is safe to start asserting values?

The naive solution to this problem consists in:
Thread.sleep(1000);

It is naive because it arbitrarily slows down your test suite without giving any guarantee that the message will be delivered. It may work on your machine, but the same suite running on a busy integration server will behave very differently: one second may not be a long enough wait if the threads handling the message delivery behind the scenes are slower than usual.

A problematic fix to this broken solution consists in:
while (messageNotReceived) Thread.sleep(1000);

This is problematic because if, for any reason, the message does not hit the expected service, this loop will hang the test suite forever. Of course, you can add a counter to limit the loop and reduce the sleep time to be more reactive (as sketched just below), but this amounts to re-inventing the wheel when higher-level concurrency primitives are ready-made for that.
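For illustration, the counter-capped variant would look something like this (the numbers are arbitrary):

int attempts = 0;
while (messageNotReceived && attempts++ < 50) Thread.sleep(100); // poll every 100 ms, give up after roughly 5 seconds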

Indeed, we can use a countdown latch to hold the main test thread until the message hits the expected service; this is the approach the helper method I use for my Mule integration testing is built on.

It leverages the notification mechanism of Mule and the capacity its functional test component has to broadcast such an event when it is hit by a message. Though this is a Mule-specific implementation, the design principle I discuss here is applicable to other situations.

Notice how I pass the triggering action of the test (like sending a message with the MuleClient) as a Runnable to this helper method, sketched below. This allows me to prepare a latch and register a listener before calling this Runnable and then to wait on the latch until it is lowered. When the execution comes back to the caller method, it can safely assert properties on the received message.
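For readers outside the Mule world, here is a stripped-down sketch of that pattern; the DeliveryListener interface and the class and method names are illustrative stand-ins for Mule's notification mechanism and my actual helper, not real API:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Latch-based probe for asynchronous integration tests: it is registered as a
// listener on the target service, then holds the test thread until a delivery
// is reported or the safeguard timeout expires.
public class AsyncDeliveryProbe {
    // Stand-in for whatever callback your framework offers (in Mule, the
    // functional test component's notification).
    public interface DeliveryListener {
        void onDelivery(Object message);
    }

    private final CountDownLatch latch = new CountDownLatch(1);
    private final AtomicReference<Object> received = new AtomicReference<Object>();

    // Register the returned listener with the expected service before triggering the test.
    public DeliveryListener asListener() {
        return new DeliveryListener() {
            public void onDelivery(Object message) {
                received.set(message);
                latch.countDown(); // delivery happened: release the test thread
            }
        };
    }

    // Runs the triggering action (e.g. sending a message with the MuleClient)
    // and blocks until delivery is reported, or fails after 15 seconds.
    public Object runAndAwait(Runnable trigger) throws InterruptedException {
        trigger.run();
        if (!latch.await(15, TimeUnit.SECONDS)) {
            throw new AssertionError("No delivery observed within 15 seconds");
        }
        return received.get();
    }
}

In a Mule test, the listener part would be wired to the functional test component's notifications and the Runnable would typically wrap a MuleClient dispatch; the assertions then run on the returned message.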

Obviously, the test is designed so its normal execution leads to a message always hitting the right service. Hence, in practice, the wait on the latch is very short, way under the safeguard timeout of 15 seconds.

But what if we want to test that no message will hit a particular service?

This is when things get tricky with asynchronous testing, because there is no systematic approach to testing the absence of an event when it can occur unpredictably in the future.

One strategy I use is to turn the problem around and change the negative assertion into a positive one: for example, by always ensuring a message goes somewhere, even if that somewhere is a service that sends messages to oblivion. By doing so, instead of testing that a bad message never hits a good service, I test that a bad message hits the oblivion service (see the sketch below).
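Reusing the latch-based probe sketched earlier, the inverted check stays a positive assertion; the oblivion service wiring and the dispatchInvalidMessage() call below are purely illustrative:

// The probe's listener is registered on the oblivion service (not shown),
// so the test only passes if the bad message actually lands there.
final AsyncDeliveryProbe oblivionProbe = new AsyncDeliveryProbe();
oblivionProbe.runAndAwait(new Runnable() {
    public void run() {
        dispatchInvalidMessage(); // illustrative: send the bad message to the input channel
    }
});
// Reaching this point means the bad message did hit the oblivion service within the timeout.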

Do you have any other advice to share about asynchronous testing?