Monday, February 09, 2015

Unveiling RxMule

I'm super excited to announce RxMule, my latest open source project. RxMule provides reactive extensions for Mule ESB, via a set of specific bindings for RxJava.


For several years, I've been mulling over the idea of creating a DSL for configuring Mule. Indeed, there is a treasure trove of pre-existing transports and connectors for Mule, which is very compelling for anyone building connected applications (which, nowadays, is probably everybody). Unfortunately, developers can be put off by the XML-based DSL used to configure Mule, and thus may pass on the opportunity to leverage all this available goodness.

As Rx is gaining traction, more and more developers are getting accustomed to its concepts and primitives. With this in mind, and knowing that Mule is at its core an event processing platform, it dawned on me that instead of creating a DSL that would mimic the XML artifacts (which are Mule specific), I'd rather create bindings to allow using Mule's essential moving parts via Rx.

In short, RxMule adds a number of classes on top of RxJava that make it possible to create Observable<MuleEvent> instances from different sources. A MuleEvent is what Mule creates and processes: it wraps a MuleMessage, which contains the actual data and meta-data being processed.
You can read more about the structure of a MuleMessage here.
The following demonstrates an asynchronous HTTP-to-Redis bridge that accepts only one request per remote IP:
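The actual bridge is written in Java against RxMule's API; as a rough sketch of the per-IP gating semantics it relies on (the names and types below are illustrative stand-ins, not RxMule's actual API), here is the idea in Python:

```python
from dataclasses import dataclass


@dataclass
class Event:
    """Stand-in for a MuleEvent: a payload plus meta-data about its origin."""
    payload: str
    remote_ip: str


def one_per_ip(events):
    """Yield only the first event seen from each remote IP, mirroring
    an Rx 'distinct by key' operator applied to the event stream."""
    seen = set()
    for event in events:
        if event.remote_ip not in seen:
            seen.add(event.remote_ip)
            yield event


# Simulated inbound HTTP requests.
requests = [
    Event("GET /a", "10.0.0.1"),
    Event("GET /b", "10.0.0.1"),  # same IP: filtered out
    Event("GET /c", "10.0.0.2"),
]

accepted = list(one_per_ip(requests))
```

In the real pipeline, each accepted event would then be handed off asynchronously to Redis instead of collected into a list.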


So why don't you take RxMule for a spin and see if it helps you achieve your application and API integration needs: with Mule at its core, I bet it will! Bug reports and pull requests are welcome at GitHub.

Saturday, November 15, 2014

Call me never (Ignite Talk)

I was super honoured to give an ignite talk during DevOps Days Vancouver 2014. Ignite talks are intense, as the slides mercilessly fly by every 15 seconds, for 5 minutes sharp (yes, that's just 20 slides!).





In this talk, I tried to present some of the lessons we've learned at Unbounce while rebuilding our page serving infrastructure. Our availability target is five-nines (that's an allowance of 6 seconds of downtime per week) so we've put lots of effort into building a stable, self-healing, gracefully-degrading piece of software. We had a few close calls though, hence the lessons learned shared in this talk.







I was initially planning to cover this subject in a 30-minute talk and had gathered tons of material to go in-depth, so delivering this material in 5 minutes was an interesting challenge! It was good actually, as it forced me to be drastically concise, while trying to preserve interesting content.





If you can put up with my French accent, you can watch the recorded presentation here:


Otherwise, here are the slides:

Wednesday, November 05, 2014

yopa is (almost) Your Own Personal AWS

If you're using AWS SQS and SNS for more than trivial things, you've probably wished that you could run your queues and topics locally, and be able to peek at the message flows happening in topic subscriptions.


Enter yopa, a local SQS and SNS simulator whose sole purpose in life is to make developers' lives easier! Open sourced by Unbounce and coded in Clojure by yours truly, yopa builds on the solid foundation of ElasticMQ and adds its own SNS implementation.

Granted, yopa only supports SQS and SNS for now so it's light years away from truly being Your Own Personal AWS but, hey, let's start small and see where it goes. And actually, there are already discussions about adding support for some EC2 and S3 API features as well.
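To picture what a local SNS/SQS simulator gives you, here is a toy in-memory model of the core behaviour, namely messages published to a topic fanning out to every subscribed queue (yopa itself speaks the real SQS and SNS wire protocols; this sketch only illustrates the semantics, and all names are made up):

```python
from collections import defaultdict, deque


class LocalBroker:
    """Toy in-memory stand-in for a local SNS/SQS simulator."""

    def __init__(self):
        self.queues = defaultdict(deque)        # queue name -> pending messages
        self.subscriptions = defaultdict(list)  # topic name -> subscribed queues

    def subscribe(self, topic, queue):
        self.subscriptions[topic].append(queue)

    def publish(self, topic, message):
        # SNS-style fanout: every subscribed queue gets its own copy.
        for queue in self.subscriptions[topic]:
            self.queues[queue].append(message)

    def receive(self, queue):
        # SQS-style receive: pop the oldest message, or None if empty.
        return self.queues[queue].popleft() if self.queues[queue] else None


broker = LocalBroker()
broker.subscribe("orders", "billing")
broker.subscribe("orders", "shipping")
broker.publish("orders", "order-42")
```

With yopa, you would drive the same topology through your usual AWS SDK pointed at the local endpoint instead of calling a toy class, but being able to inspect the queues after a publish is exactly the "peek at the message flows" convenience described above.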

So if you're using SQS and SNS, please give yopa a try! Pull requests are more than welcome.

PS. Oh, there's also a Docker image available...


Saturday, April 05, 2014

I write only about funny animals

The rabbit is out of the hat: I'm indeed working on a new book. It's called "RabbitMQ Essentials" and is published by Packt Publishing. Yes, you're reading that right: after Mule, it's now RabbitMQ's turn! Clearly, I'm specializing in writing about animal-named technologies.


Why write yet another book about RabbitMQ? After all, there are already several excellent books on the subject out there. I think Ross Mason gave the best answer to this question on Twitter:



Let me further articulate the reasons why I decided to embark on this new book project while the ink on Mule in Action is still wet:

  • RabbitMQ is a great piece of open source technology and I think it can use all the coverage it can get. From the get-go, RabbitMQ has been built with the idea of doing things right: it's a breeze to install and configure, and adding extra plug-ins is a no-brainer. It's enterprise-grade software without the enterprise kludge. Moreover, AMQP is one of the rare messaging protocols that are truly interoperable, so it too deserves to be discussed again and again.
  • It's 2014 and surprisingly many developers still don't have a good grasp of messaging concepts. Beyond covering RabbitMQ, I'm covering more general principles around building distributed applications that leverage messaging as a decoupling agent. In the book, I'm sharing the love for everything asynchronous!
  • I love writing not only code but also about coding. I'm not an outdoor person by any stretch of the imagination, so this new book is a perfect project for the gloomy and wet winter we get in the Pacific Northwest.
The book will be short and easy to read. Its style will be conversational, with lots of questions asked of the reader. It won't be an in-depth coverage of any particular feature of RabbitMQ but will instead give a broad coverage of the broker, the AMQP protocol and some of the custom extensions added to it. The book will contain lots of code (Java, Ruby, Python) and will be structured around the coding journey of a fictitious company that starts using RabbitMQ and sees its usage multiply... and multiply...

Hopefully you'll enjoy this journey too and will learn a few things while reading RabbitMQ Essentials. You can pre-order your copy right from Packt's website.


Now what could be the next animal I could write about?

Saturday, March 01, 2014

Explicit content required

I had this post in gestation for a while, but the recent post on SmartBear's blog ("Please stop saying Java sucks") prompted me to finalize and publish it.

I believe it's a complete fallacy to equate "less code" with "better", whether one is considering a language or a framework. In that matter (like in many others), I think the Zen of Python provides the correct viewpoint:

Explicit is better than implicit
Why is this important? Here is a quote from Bob Martin's Clean Code:

The ratio of time spent reading (code) versus writing is well over 10 to 1 ... (therefore) making it easy to read makes it easier to write.

I'm inclined to add that not having any code to read at all hinders the capacity to reason about a piece of software, hence reduces the ability to work with it.

One could brush this argument aside and argue that if a programmer knew language X or framework Y better, he or she would know that this or that behaviour implicitly applies in the considered context. I would then posit that readability trumps knowledge.

Applications tend to last longer than fashionable tools, languages or frameworks. Long after the knowledge of the intimate details of whatever technology has been deemed useless and dumped into oblivion, someone will have to read the code and maintain it. This someone will appreciate being able to navigate all the aspects of the application by following the explicit seams that bind them together.

When everything is implicit, an application's code ends up like a novel where important points are pushed to footnotes, but without any reference to these footnotes. In that respect, I find Scala's implicits emblematic of a good approach to making code disappear, precisely because they must be explicitly brought into scope before they take effect.
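To make the point concrete, here is a small, hypothetical Python contrast (the names are made up for illustration): both functions behave the same today, but only the explicit one lets a reader predict that behaviour from the call site alone.

```python
DEFAULT_ENCODING = "utf-8"  # hidden module-level state


def decode_implicit(data):
    # The reader must hunt down DEFAULT_ENCODING elsewhere in the
    # module to know what this call actually does.
    return data.decode(DEFAULT_ENCODING)


def decode_explicit(data, encoding):
    # The decision is visible at every call site.
    return data.decode(encoding)


assert decode_implicit(b"caf\xc3\xa9") == "café"
assert decode_explicit(b"caf\xc3\xa9", encoding="utf-8") == "café"
```

If the module-level default were ever changed, every implicit call site would silently change behaviour, while the explicit ones would keep reading exactly as they run.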

Coming back to SmartBear's post, the key problem with old Java code is its signal-to-noise ratio. Things are getting better release after release: noise is decreasing and thus the ratio improves.

And that's the whole point of this post: throwing away the signal with the noise is not the solution. Code needs to be explicit so we can reason about it and so we can evolve it with confidence.

Saturday, November 23, 2013

More than a language

Nearly 300 high-level programming languages have been developed during the last decade.
One can only nod when reading this quote: indeed there's hardly a month without a new language being announced. The fun part is that it's the opening sentence of "A Guide to PL/I", printed in 1969! Clearly the programming community has always been prolific when it comes to spawning languages.

And this trend makes sense: the tension between the need for general programming languages, which can deal with any type of problems, and the desire for specific programming languages, which best accommodate particular issues, is prone to give birth to a wide range of diverse languages.

As a programmer, you therefore have plenty of choice when it comes to picking up a language for your shiny new project. Beyond the intrinsic qualities of the language, I posit that there's more to consider when deciding. Here are a few points that come to mind:

  • Platform - What is the runtime environment that this language runs on? Is it mature? Is it easy to install and upgrade? Can it be well monitored in production? Does it use system resources efficiently?
  • Ecosystem - Is there a large body of libraries available? Are these libraries mature? Does the community have a tradition for rolling out breaking changes in libraries?
  • Tooling - Are there any tools for crafting code with this language? Any auto-formatters so SCM conflicts are minimal? Any static analysis tools? Any dynamic analysis tools?
I haven't included the "availability of developers" criterion because I think it's a red herring. Indeed, I'm convinced any programmer worth hiring should be able to pick up a new language very quickly.

So if your project can accommodate risk and potential yak shaving, you can disregard these points and select a language only for itself. But for those building systems that should last and evolve gracefully, these extra criteria should be considered as much as the qualities of the language itself.

Are there any other aspects you typically consider when opting for a particular language that you'd like to share?

Friday, April 26, 2013

Meet jerg, a JSON Schema to Erlang Records Generator

I'm happy to announce the very first release of my latest Erlang open source project, jerg, a JSON Schema to Erlang Records Generator.

The objective of this project is to be a build tool that parses a rich domain model defined in JSON Schema (as specified in the IETF Internet Draft) in order to generate record definitions that can be used by Erlang applications to both read and write JSON.

I believe that having a formally defined object model for the data that's exchanged over web resources is a major plus for APIs. It doesn't necessarily imply that consumers have to generate static clients based on such a definition, but as far as the server is concerned, it raises the bar in terms of consistency and opens the door for self-generated documentation.

jerg doesn't deal with the actual mapping between JSON and records: there is already a library named json_rec for that. Nor does it deal with validation, as a library named jesse can take care of it.

This first version of jerg generates records with the appropriate type specification to enable type checking with dialyzer. It supports:
  • cross-references for properties and collection items (i.e. the $ref property),
  • default values,
  • integer enumerations (other types cannot be enumerated, a limitation of Erlang type specifications).
It also supports a few extensions to JSON schema:
  • extends: to allow a schema to extend another one in order to inherit all the properties of its parent (and any ancestors above it),
  • abstract: to mark schemas that actually do not need to be output as records because they are just used through references and extension,
  • recordName: to customize the name of the record generated from the concerned schema definition.
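To give a feel for what this looks like in practice, here is a hypothetical input schema and the kind of record definition jerg could generate from it (both are illustrative sketches, not taken from jerg's documentation):

```erlang
%% Hypothetical input schema (illustrative):
%%   { "id": "user",
%%     "type": "object",
%%     "properties": {
%%       "name": { "type": "string" },
%%       "age":  { "type": "integer", "default": 0 }
%%     } }
%%
%% A generated record of the kind described above, with dialyzer-friendly
%% type specs and the default value carried over:
-record(user, {
    name :: binary(),
    age = 0 :: integer()
}).
```

A library like json_rec could then map incoming JSON onto #user{} instances, while dialyzer checks that application code respects the declared field types.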
jerg has a few limitations, like its lack of support for embedded object schemas: pull-requests are more than welcome!

Enough talking: time to get started! Your next stop is jerg's GitHub project. Enjoy!