Monday, May 24, 2010

Data Interaction Patterns

Throughout my experience working on back-end systems for everything from big governmental projects to online gaming, I have come to develop a particular appreciation of the interactions that happen between data consumers and data producers. The following is a non-exhaustive and non-authoritative review of the different data interaction patterns I've had the chance to play with. These are mostly unstructured notes from my experience in the field that I hope may prove useful to others.

As you know, when data is involved, caching comes into play as soon as performance and scalability are sought. In the diagrams that follow, the cache is represented as a vertical rectangle, the persistent storage as a vertical blue cylinder, and horizontal cylinders represent some form of reliable, asynchronous message delivery channel. The data interactions are represented with curved arrows: they can denote either reading or writing.

Direct [R/W]

Besides the obvious drawbacks coming from the temporal coupling with the persistent storage mechanism, the interesting thing to note in such a trivial data access pattern is that there is often some form of request-scoped caching happening without the need to explicitly do anything. This first level of cache you get from data access layers helps optimize operations, provided they occur within the same request (to which the transaction, if one exists, is bound).

Being short lived, this kind of caching is free from the problem of evicting expired cache entries: it can kick in transparently without the application being aware of it.
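To make this concrete, here is a minimal Python sketch of the kind of first-level cache a data access layer typically gives you for free (all names are illustrative placeholders, not any particular framework's API):

    class RequestScopedSession:
        # Identity map that lives only for the duration of one request.
        def __init__(self, storage):
            self._storage = storage   # gateway to the persistent storage
            self._identity_map = {}   # discarded when the request ends

        def load(self, key):
            # Repeated loads of the same key within a single request hit
            # the map instead of the persistent storage.
            if key not in self._identity_map:
                self._identity_map[key] = self._storage.read(key)
            return self._identity_map[key]

        def save(self, key, value):
            self._storage.write(key, value)
            self._identity_map[key] = value

Since the map dies with the request, there is nothing to expire or evict.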

Through Cache [R/W]

Reading through cache is a simple and powerful mechanism where an application first tries to read from a long-lived cache (a very cheap operation) and, if the requested data can't be found, proceeds with a read from the persistent storage (a far more expensive operation).
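In Python, a minimal read-through sketch boils down to something like this (cache and storage are placeholder gateways exposing get/set and read, and the time-to-live is arbitrary):

    def read_through(cache, storage, key, ttl=300):
        # Try the long-lived cache first: a cheap operation.
        value = cache.get(key)
        if value is None:
            # Cache miss: fall back to the expensive read from persistent
            # storage and repopulate the cache for subsequent readers.
            value = storage.read(key)
            cache.set(key, value, ttl)
        return value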

It's interesting to note that write operations don't necessarily happen the same way, i.e. it is entirely possible that a write to the persistent storage doesn't perform a similar write in the cache. Why is that? Cached data is often a specific representation of the data available in the storage: it can be, for example, an aggregation of different data points that corresponds to a particular cache key. The same persistent data can lead to the creation of several different cache entries. In that case, a write can simply lead to an immediate cache flush, leaving it to subsequent read operations to repopulate these entries with new data.
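A sketch of this flush-on-write behavior could look like the following (related_keys stands for whatever derived or aggregated cache entries the application knows were built from this piece of data):

    def write_and_flush(cache, storage, entity_id, new_value, related_keys):
        # The persistent storage gets the new data...
        storage.write(entity_id, new_value)
        # ...and the derived cache entries are simply dropped, leaving it
        # to subsequent read-through operations to repopulate them.
        for key in related_keys:
            cache.delete(key)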

Conversely, it's possible to have write operations update the cache, which raises the interesting problem of consistency. In the current scenario, the persistent storage remains the absolute source of truth: the application must handle the case where the cache was inconsistent and led to an invalid data operation in the persistent storage. I've found that localized cache evictions work well: the system goes through a little hiccup but quickly restores its data sanity.
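As an illustration, here is a sketch of such a localized eviction, assuming the storage gateway signals the problem with an exception (the exception type and gateway methods are hypothetical):

    class StorageConstraintViolation(Exception):
        pass

    def write_with_eviction(cache, storage, key, update):
        try:
            storage.write(key, update)   # the storage remains the source of truth
            cache.set(key, update)
        except StorageConstraintViolation:
            # The cached view was stale and led to an invalid operation:
            # evict just this entry and repopulate it from fresh data.
            cache.delete(key)
            cache.set(key, storage.read(key))
            raise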

Though some data access technologies allow the automatic management of this kind of second-level caching, I personally prefer that my applications have an explicit interaction with the caching technology they use, and that they do so at the service layer. This is especially true when considering distributed caching and the need to address the inherent idiosyncrasies of such a caching model.

Cache distribution or clustering is not compulsory though: you can reap the benefits of reading through cache with localized caches, but at the expense of having to establish some form of stickiness between the data consumers and the producers (for example, by keeping a user sticky to a particular server based on their IP or session ID).

That said, stickiness skews load balancing and doesn't play well when you alter a pool of servers: I've become convinced that you get better applications by preventing stickiness and letting requests hit any server. In that case, cache distribution or clustering becomes necessary: the former presents some challenges (like getting stale data after a repartition of the caching continuum) but scales better than the latter.
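The caching continuum I'm alluding to is usually implemented with some form of consistent hashing; the following bare-bones sketch (illustrative only, not any particular client library) shows why a repartition only remaps the keys that fall in the affected arc of the ring:

    import bisect
    import hashlib

    class HashRing:
        def __init__(self, servers, replicas=100):
            # Each server is hashed to many points on the ring to smooth
            # out the distribution of keys.
            self._ring = sorted(
                (self._hash("%s:%d" % (server, i)), server)
                for server in servers
                for i in range(replicas)
            )

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def server_for(self, key):
            # Walk clockwise to the first point at or after the key's hash.
            index = bisect.bisect(self._ring, (self._hash(key),)) % len(self._ring)
            return self._ring[index][1]

Adding or removing a server only moves the keys between it and its neighbors on the ring, which keeps the amount of stale or relocated data bounded.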

Write Behind [W]

Writing behind consists of updating the data cache synchronously and then deferring the write to the persistent storage to an asynchronous process, through a reliable messaging channel.
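Here is a minimal, in-process Python sketch of the idea; in a real deployment the queue would of course be a durable messaging channel, not a simple in-memory queue, and cache and storage are placeholder gateways:

    import queue
    import threading

    class WriteBehindStore:
        def __init__(self, cache, storage):
            self._cache = cache
            self._storage = storage
            self._channel = queue.Queue()   # stand-in for a reliable channel
            threading.Thread(target=self._drain, daemon=True).start()

        def write(self, key, value):
            # The consumer sees the new value immediately...
            self._cache.set(key, value)
            # ...while the persistent write is deferred to the background.
            self._channel.put((key, value))

        def _drain(self):
            while True:
                key, value = self._channel.get()
                self._storage.write(key, value)
                self._channel.task_done()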

This is possible with regular caching technologies if there are no strong integrity constraints or if it's acceptable to present temporarily wrong data to the data consumer. If the application has strong integrity constraints, the caching technology must be able to become the primary source of integrity truth: a consistent distributed cache that supports some form of transactional data manipulation becomes necessary.

In this scenario, the persistent storage doesn't enforce any form of data constraint, mostly because it is too hard to propagate violation issues back to the upstream layers in any meaningful form. One could wonder what the point is of using a persistent storage dumbed down to such a mundane role: if this storage is an RDBMS, there is still value in writing to it, because external systems like a back office or business intelligence tools often require access to a standard data store.

Cache Push [R]

Pushing to cache is very useful for data whose lifecycle is not related to the interactions with its consumers. This is valid for feeds or the result of expensive computations not triggered by client requests.

The mechanism that pushes to cache can be something like a scheduled task or a process consuming asynchronous message channels.
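For example, a scheduled refresher in Python could look like this (fetch_feed and the cache key are placeholders for whatever produces and identifies the data):

    import threading

    def start_feed_refresher(cache, fetch_feed, interval_seconds=60):
        def refresh():
            # Push fresh data into the cache regardless of consumer activity,
            # then reschedule the next run.
            cache.set("latest-feed", fetch_feed())
            threading.Timer(interval_seconds, refresh).start()
        refresh()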

Future Read [R]

In this scenario, the data producer synchronously answers the consumer with the promise of future delivery of the requested data. When available, this data is delivered to the client via some sort of server push mechanism (see next section).

This approach works very well for expensive computations triggered by client requests.
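A minimal sketch of both halves of this interaction, assuming a message channel and an asynchronous worker on the other end (all names are illustrative), could be:

    import uuid

    def request_report(channel, parameters):
        # Answer the consumer immediately with a ticket: the promise of
        # a future delivery of the requested data.
        ticket = str(uuid.uuid4())
        channel.put({"ticket": ticket, "parameters": parameters})
        return {"status": "pending", "ticket": ticket}

    def report_worker(cache, channel, compute):
        # The asynchronous side: perform the expensive computation and make
        # the result available under the promised ticket, to be delivered by
        # server push (or fetched with the ticket).
        while True:
            job = channel.get()
            cache.set(job["ticket"], compute(job["parameters"]))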

Server Push [R]

Server push can be used to complement any of the previous interactions: in that case, a process prepares some data and delivers it directly to the consumer. There are many well-known technological approaches for this, including HTTP long-polling, AJAX/CometD, web sockets and AMQP. Enabling server push in an application opens the door to very interesting data interactions, as it decouples the activities of the data consumers and producers.
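As a toy illustration of the long-polling flavor, here is a sketch where each consumer owns an in-memory inbox; a real system would rely on one of the technologies listed above rather than an in-process queue:

    import queue

    subscribers = {}

    def publish(consumer_id, data):
        # A producer drops prepared data into the consumer's inbox.
        subscribers.setdefault(consumer_id, queue.Queue()).put(data)

    def long_poll(consumer_id, timeout=30):
        # The consumer's pending request parks here until data shows up,
        # or times out and is simply re-issued by the client.
        inbox = subscribers.setdefault(consumer_id, queue.Queue())
        try:
            return inbox.get(timeout=timeout)
        except queue.Empty:
            return None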


Monday, May 17, 2010

Infected but not driven

The least I can say is that I'm test infected: when a coverage report shows lines of code that are not exercised by any test, I can't help but freak out a little (unless it appears that this code is truly useless and can be mercilessly pruned). This quasi-obsession with testing is not vain at all: time and again I have experienced the quality, stability and freedom of movement a high test coverage gives me. Things work, regressions are rare and refactoring is bliss thanks to the safety net tests provide.

So what's with TDD... and me?

There are some interesting discussions going on around TDD and its applicability, which I think are mostly fueled by the heavy insistence of TDD advocates on their particular way of approaching software development in general and testing in particular. The more time I spend thinking about these discussions, the more it becomes clear to me that, as far as testing is concerned, the usual rule of precaution of our industry applies: i.e. it depends.

To be frank, I'm having a hard time with the middle D in TDD: as I said, I'm test infected, low test coverage gives me the creeps, but my process of building software is not driven by tests. From an external viewpoint, it is driven by features so that would make it FDD. From my personal viewpoint, it is driven by gratification, which makes it GDD.

Being gratified when writing software is what has driven me since I was a kid: I didn't spend countless hours hurting my fingers on a flat and painful ZX-81 keyboard for the sake of it. I did it to see my programs turned into tangible actions on the computer. It was gratifying. And this is what I'm still looking for when writing software.

But let's go back to the main point of this discussion: TDD. With all the industry notables weighing in on writing code driven by tests, should I conclude there's something wrong with my practice? Or is the insistence on test first just a way to get developers to write tests at all?

Adding features to a system, at least for the kind of systems I'm working on, mainly consists of implementing a behavior and exposing it through some sort of public interface. Let's consider these two activities and how testing relates to them.

Behavior

When I write simple utility functions, like chewing on some binary or data structure and spitting out a result, I will certainly write tests first because I will be able to express the complete intended behavior of the function with these tests.

Unfortunately, most of the functions I write are not that trivial: they interact with functions in other modules in non-obvious ways (asynchronously) and support different failure scenarios. Following a common Erlang idiom, these functions often end up replying with a simple ok: such a result is not enough to drive the development of the function (otherwise fun() -> ok end would be the only function you'd ever need to write). In fact, testing this kind of function first implies expressing with mock expectations all the interactions that will happen when calling the top function. That's MDD (Mock Driven Development), and it's only a letter away from making me MAD. Sorry, but writing mocks first makes me nauseous.

My approach to developing and testing complex functions is, to me, more palatable as it leads to faster gratification: I start by creating an empty function. Then I fill it with a blueprint of the main interactions I am envisioning, expressed as comments. Afterwards, I reify this blueprint by turning the few comments in the original function into a cascade of smaller functions. At this point, I fire up the application and manually exercise the new function: this is when the fun begins, as I see this new code coming to life, finding implementation bugs and fixing potential oversights. After being gratified with running code, I then proceed to unit test it thoroughly, exploring each failure scenario with mocks and using a code coverage tool to ensure I haven't forgotten any execution branch in my tests.

This said, there is another behavior-related circumstance under which I will write tests first: when the implemented behavior is proven wrong. In that case, writing tests that make the problem visible before fixing it is the best approach to debugging as it deals with the problem of bad days and lurking regressions.

Public Interface

Writing usable modules implies designing interfaces that are convenient to use. Discussing good API design is way beyond the point here. The point is: could writing tests first be a good guide for creating good interfaces? The immediate answer is yes, as eating your own dog food first makes you more inclined to cook it in the most palatable form possible (anyone who has had to eat dog food, say while enduring hazing, knows this is a parable).

In my practice, I have found things to be a little different, again for less than trivial functions, which unfortunately make up most of a complex production system. For this class of functions, I have found that the context of a unit test is seldom rich enough to fairly represent the actual context where the functions will be used. Consequently, the capacity to infer a well-designed interface from these tests, first and alone, is limited. Indeed, a unit test context is not reality: look at all the mocks in it, don't they make the whole set look like a movie stage? Do you think it's air you're breathing?

When creating non-trivial public functions, I've found it greatly helpful to go through a serious amount of code reading in the different places where it is envisioned these functions will be used. Reading a lot of code before writing a little of it is commonplace in our industry: while going through the reading phase, you're actually loading all sorts of contextual information into your short-term memory. Armed with such a mental model, it becomes possible to design new moving parts that will naturally fit into this edifice. So I guess that practice would be RDD (Reading Driven Development).

Daring to conclude?

I find it hard to conclude anything from the dichotomy between my practices and what TDD proponents advocate. I consider myself a well-rounded software professional producing code of reasonably good quality: unless I'm completely misguided about myself, I think the conclusion is that it's possible to write solid production code without doing it in a test-driven fashion. If you have the discipline to write tests, you can afford not to be driven by them.

Friday, May 14, 2010

Just Read: Zabbix 1.8 Network Monitoring

Since Zabbix 1.8 came out, I have been wanting to upgrade, just for the sake of getting the new and improved AJAXy front-end. Indeed, the Achilles' heel of the previous versions of this otherwise very solid and capable monitoring platform was the poorly responsive GUI. But I kept pushing the upgrade to a later date.

When the good folks at Packt Publishing offered to let me take a peek at their brand new Zabbix book, my procrastination was over. Equipped with such a complete and up-to-date reference, I had no reason not to take the plunge and upgrade.

This 400+ page book is not only welcome as a supporting resource when upgrading, it is also a consummate reference guide that was much needed by all Zabbix users. I've found the book easy to read, as it is loaded with screenshots, but it also goes one step beyond a pure user guide. Indeed, the author covers general subjects around application monitoring: for example, the section on SNMP is actually a very good introduction to this protocol, with tons of hands-on examples to guide you through the learning path.

On the down side, as is often the case with technical books, I have found the index to be wanting (it's a little short and sometimes misleading). This is not a big deal though because, to make the most of this comprehensive book, it's a good idea to get the eBook version and use full-text search to reach the information you need.

Whether you're already using Zabbix and want to deepen your skills, or you want to learn about monitoring in practice, this book has you covered. And if you don't want to take my word for it, download this free chapter and see for yourself!