Saturday, November 19, 2011

QCon San Francisco - Day 3

QCon San Francisco 2011 is a wrap. I have mixed feelings about the conference, and I believe I'm not the only one; the feedback forms will speak... Here are the last tweets I favorited, representing the ideas I grabbed from the sessions I attended. Thank you to all the Twitter users I've shamelessly re-used :)

Wednesday, November 16, 2011

QCon San Francisco - Day 1

As I'm getting old and lazy, I've decided to cover QCon by favoriting other people's tweets on the sessions I attended and posting the result as daily chunks...

Saturday, November 05, 2011

API-First Development

A while back, REST advocate Juergen Brendel tweeted:
API-first development: Modern dev. must consider multiple clients. Think about your REST API first, then add front end.
I retweeted him because I couldn't agree more. In fact, even if there is only a single client, I'm convinced API-first development is a winning approach.

I've been involved in two or three API-first projects, all very different in terms of domain, size and technology. But in all cases things turned out pretty well, and, in fact, way better than with traditional approaches to web development.

Clear responsibility delineation - When an API gets defined, client and server responsibilities become easier to define. The API becomes the guardian of this separation of concerns: any temptation to let concerns from one side permeate the other will be either impossible to act upon or at least extremely cumbersome. The API almost naturally resists attempts to pervert it.

Better maintainability - Server-side front-end generation usually ends up in a pretty ugly mess where presentation concerns get mixed with service concerns. Even when following the MVC pattern, the sheer fact that all artifacts are contained within the same project space opens the door to nasty transfers of responsibility between layers. An API is a contract around which clients and server can evolve harmoniously.

Clients liberated - Front-end developers can use rich internet application frameworks and work against a clear interface with the back-end, instead of being constrained by a particular technology and having to deal with data transfer objects they don't fully control and which may change unexpectedly.

Test what matters most - This revolves around my previous rants about what's really important to test. When you expose an API, you can fully test your system as it is experienced by its different clients. There is no need for in-browser testing gimmicks: a simple HTTP client allows you to guarantee that the back-end is performing its duty and offering the front-end the features it needs. And that's what matters most.
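As a minimal sketch of this kind of client-level testing (the endpoint, the JSON payload and the in-process stub below are all made up for illustration, standing in for a real test deployment of the back-end), a plain HTTP client from the standard library is all it takes to assert on what the API actually serves:

```ruby
require 'socket'
require 'net/http'
require 'json'
require 'uri'

# Throwaway single-request HTTP stub standing in for the back-end
# (in a real suite you would point the client at a test server instead).
server = TCPServer.new('127.0.0.1', 0)
port = server.addr[1]

Thread.new do
  client = server.accept
  client.gets                       # request line, e.g. "GET /api/widgets/42 HTTP/1.1"
  nil until client.gets == "\r\n"   # skip the request headers
  body = { 'id' => 42, 'name' => 'widget' }.to_json
  client.write "HTTP/1.1 200 OK\r\n" \
               "Content-Type: application/json\r\n" \
               "Content-Length: #{body.bytesize}\r\n" \
               "Connection: close\r\n\r\n#{body}"
  client.close
end

# The actual test: drive the API exactly as a client would, over plain HTTP.
response = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/api/widgets/42"))
entity = JSON.parse(response.body)
puts response.code    # "200"
puts entity['name']   # "widget"
```

The same handful of lines works unchanged whether the eventual client is a browser app, a mobile app or another service, which is exactly the point.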

Ninjas only - This point is a little harsh but here we go: API-first tolls the bell of tag soup developers. This category of developers, not skilled (or experienced) enough to do pure server-side development nor pure front-end development, is not needed anymore. Developers building clients for a server API are experts in their domain, be it HTML5, JavaScript, Flex, Android, iOS, PhoneGap or whatnot. They can't rely on "the back-end guys" to pre-chew their work and are fully in charge of what they do.

In API-first development, reaching the point where you attempt a first connection between an actual client and the server is an exhilarating moment. Sure, you've been testing the back-end for a while with a simulated front-end, but when the two connect for the first time, it's party time! And amazingly, things work very well right away, all thanks to a well-defined API.

Before closing, I should mention a downside for which I haven't found a satisfactory answer yet: against what back-end should client developers work? Sure, it's easy enough to expose a development-grade server for front-end developers to use, but it's not always practical. A local server is one option, a client-specific stub another. Or maybe a state machine simulating the server API, allowing front-end developers to change its state at will and easily simulate test scenarios?

So, what's your experience with API-first development? Any blazing success or horror story to share?

Saturday, October 29, 2011

Service Oriented Organizations

Earlier this year, I tweeted this:

and that:

Though seemingly disconnected, these tweets are actually related. So here we are, in the blink of an eye, almost five months later, and I can finally find some time to circle back to these ideas and expand on them a little.

Both these tweets are related to the problem of growth and the pains a software company goes through when it expands.

In a company's early days, life is good and sharing code is easy. Whether everybody sits in the same room or not, the number of people involved in coding the product is so limited that synchronization is easy. Friction is limited, things go fast. Conflicts can be resolved with beer and pizza.

If, by accident (!), the company ends up being successful, things change, sometimes very quickly and most of the time not for the best. The team grows, divides into groups, and once the 150-person mark is passed, people start losing track of who's who.

The traditional path to handling growth gracefully consists in adding layers of management. Whether the control this approach brings is real or illusory, the fact of the matter is that it increases the distance between teams. I'm not talking about physical distance here, though that may be the case too, but about perceived distance, the kind of distance from which the "us versus them" mindset stems.

Independently of all that, code still gets written. As the business has grown, so has the code base. Teams share code, but now any code change must go through several layers of management, both up and down, for each team. Fluidity has been lost in favor of process, which protects business stability by doing what process does best: slowing things down, if possible to the point where nothing happens at all, thus preventing anything bad from happening.

At this point, the sheer fact of sharing code becomes a heavy burden. In software companies, shared code doesn't typically consist of pure libraries, like Apache Commons. More often than not, it involves shared dependencies on enterprise resources, like databases. A high level of coupling, if not outright tangling, exists in the shared codebase. The mismatch between the boundaries drawn across teams and the actual delineation of software artifacts becomes an impediment to progress.

This is where the idea for a Service Oriented Organization comes from (notice how I avoided Service Oriented Business for obvious "acronymistic" reasons). It is not about getting rid of management or meetings. It is about organizing teams around tangible boundaries that fit the needs of sustainable software development.

In a Service Oriented Organization teams interact around well defined contracts: APIs and SLAs are the promises teams make to each other. And they fully define the actual extent of their commitments to each other. Teams do not share code but services.

Having clear objectives in order to achieve common goals, without exposing any gory details, is beneficial both for people management and software development. Whether the APIs expose fine-grained technical methods or coarse-grained business ones, teams will have a rabid desire to provide the best possible service for what they're responsible for, in both functional and non-functional terms.

Of course, all this sounds nice and rosy, and is quite possibly the wishful thinking of a utopian. The quest many people have embarked upon is to find ways for corporations not to succumb to the illusion of control and grow themselves into dumbness.

Service Oriented Organizations have been discussed before. This is just my two cents about the concept. Please share yours.

Saturday, October 22, 2011

SNMP Monitoring for Scout

Half a year ago, I created a JMX plug-in for Scout, a monitoring platform in the cloud. This time, I've created a plug-in for reading values from SNMP-enabled applications or systems.

The plug-in is available on GitHub.

It's capable of reading multiple specific values, or of walking the tree and gathering all the values it reads.

Let me know what you think of it: post bugs or feature requests here or directly on GitHub.

Saturday, September 03, 2011

The loggr Erlang Client is out!

I've just released the very first version of loggErL, my Erlang client for loggr. In case you don't know it, loggr is a sweet piece of SaaS that offers a great deal of compelling log-related features wrapped in a beautiful UI.
loggErL offers direct calls to the loggr API, including support for loggr's optional event fields. It also comes complete with a Log4Erl appender that allows sending log events to loggr, again with optional support for extra fields.
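To give a flavor of what a direct call looks like, here is a sketch; note that the module, function and field names below are illustrative guesses, not necessarily loggErL's actual API, so check the project's README for the real signatures:

```erlang
%% Hypothetical sketch: the function name and option keys are guesses
%% for illustration; see the loggErL README for the actual API.
loggerl:post_event(<<"deployment finished">>,
                   [{tags,   <<"release,production">>},
                    {source, <<"build-server">>}]).
```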
Feel free to fork the project on GitHub.

Friday, July 29, 2011

Mounting Resque Web Server in Ruby on Rails 3

If you want to serve Resque's web UI on a URL path alongside your main Ruby on Rails 3 application, there is no need to mess with Rack::URLMap, as shown in various places.

The solution is way simpler: it consists in using Rails 3's ability to mount Rack applications directly in the routes table, as shown here:
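A minimal version of the relevant routes file, assuming an application class named `YourApp` and picking `/resque` as an arbitrary mount point, would look like this:

```ruby
# config/routes.rb
require 'resque/server'

YourApp::Application.routes.draw do
  # Resque's Sinatra-based UI is a plain Rack application,
  # so it can be mounted directly on a sub-path.
  mount Resque::Server.new, :at => "/resque"
end
```

With this in place, the UI is available at /resque while the rest of the routes table is untouched.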

Yep, it's that simple. Enjoy!

Thursday, May 26, 2011

Erlang Monads FTW!

A few days ago, the big brains at RabbitMQ released Erlando, a nifty pair of parse transformers that add support for cuts and do-syntax monads in Erlang. Like many others, I'm sure, I've quickly started using these new language constructs.

Here is a quick demonstration of how the Maybe monad and the do syntax can simplify the chaining of functions. The following shows a succession of condition evaluations, all of which must succeed for the final function (process_json_entities) to be called:
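A sketch of such a chain is shown below; the request-handling function names are hypothetical, with Erlando's maybe_m monad module driving the do-block:

```erlang
%% Hypothetical sketch of the chain described above: every step must
%% succeed for the next one to run, otherwise the whole block
%% short-circuits.
handle_request(ReqData) ->
    do([maybe_m ||
        Body         <- extract_body(ReqData),
        Json         <- parse_json(Body),
        JsonEntities <- extract_json_entities(Json),
        process_json_entities(JsonEntities)]).
```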

For those of you who wonder: yes, I know that WebMachine could do most of this for me, but using this awesome REST framework is not an option for my project.
Without the help of the Maybe monad, I would have had to either nest case clauses (yuck!) or chain functions, each of them calling the next one when its condition succeeds.

There are two problems with the latter approach:
  • the most obvious problem is that the overall sequence of calls would not be in one place but buried deeper and deeper in the succession of functions,
  • the less obvious problem pertains to the classic issue of naming things: finding good names for a chain of functions, each testing a condition and calling the next one, is very hard.
So what's inside each of these functions? Let's look at the first one, which is basically a blueprint for most of them:
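Here is a sketch of what such a blueprint function could look like (the body and the accessor it calls are illustrative, not the original code):

```erlang
%% Illustrative blueprint: evaluate one condition and use the Maybe
%% monad's return/1 or fail/1 to drive the flow of the enclosing do-block.
extract_body(ReqData) ->
    case get_request_body(ReqData) of   % hypothetical accessor
        undefined ->
            maybe_m:fail(no_body);      % short-circuits the whole sequence
        Body ->
            maybe_m:return(Body)        % binds Body for the next step
    end.
```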

Notice how the return/1 and fail/1 functions of the Maybe monad are used to influence the flow of the sequence in the encapsulating do-block. The values I pass to these two functions are purely arbitrary here, but that is not always the case: the value passed to return/1 in extract_json_entities ends up assigned to JsonEntities, which is eventually passed to the actual processing function.

To make this work, apart from adding a Rebar dependency on Erlando, not much is needed besides the following directives:
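The directives in question enable Erlando's two parse transforms in each module that uses them:

```erlang
%% Enable Erlando's parse transforms for cuts and do-blocks.
-compile({parse_transform, cut}).
-compile({parse_transform, do}).
```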

Hopefully this will whet your appetite and convince you to give Erlando a try!

Wednesday, April 20, 2011

JMX Monitoring for Scout

Scout is a very convenient monitoring platform in the cloud that I started using recently. I needed to monitor JMX data points, something Scout doesn't do by default.

One of the many shiny things about Scout is its extensibility: it is super trivial to write a Ruby plug-in and have Scout start using it to report custom data points.

Therefore, I've created a JMX plug-in which, after some QA from the awesome team at Scout, just ended up in their repository of supported plug-ins.

Read more about this here.

Monday, February 28, 2011

Put a rabbit in your HTTP

I'm pleased to announce the release of http-safe, a store-and-forward HTTP gateway plugin for RabbitMQ.

Its goal is to simplify the integration and communication of services over HTTP by relieving systems from the chore of resending requests when something goes wrong on "the other side".

http-safe goes beyond the fire-and-forget paradigm: it supports the notion of a delivery callback, informing the originating system of the success or failure of its dispatch request.