Friday, April 06, 2007

Put Your Business Rules to the Test

In the November 2006 issue of DDJ, Scott Ambler explained the why, what, and how of database testing in an article titled "Ensuring Database Quality". With the advent of commercial and open source business rules engines and their increased usage in applications of all sizes and complexities, ensuring rules quality has become a critical issue for many businesses.


No blind spot tolerated

There are several reasons why applications externalize their business rules instead of hard coding them, including the following:

  • to accommodate frequently changing business rules in the cleanest possible way,
  • to enable non-technical personnel to author business rules,
  • to allow the exchange of rules between heterogeneous systems,
  • to delegate the handling of complex rules to a component trusted for its performance and reliability.

A successfully implemented rules engine will quickly end up handling the most critical aspects of a modern business infrastructure, as the business strategy becomes reified in rules and enforced by the engine. The need for properly managing these rules will therefore quickly become compelling.

Whether they use commercial tools (which come complete with advanced tooling) or open source implementations (which are traditionally weak in terms of tooling), implementers will have to put in place a rules life cycle similar to the one shown hereafter.


One of the critical aspects of rule management we are going to discuss here is testing. Given what is at stake with business rules, keeping them in a blind spot and simply hoping for the best will sooner or later have direct and dramatic consequences, ones likely to hurt the bottom line for the reasons mentioned above.

We will discuss the different steps necessary to properly set up a test environment and then how to iterate on this basis.

The specific case of the home grown engine

If the rule engine you use is home grown, it is essential to test it extensively to certify that, release after release, it keeps its deduction power! To ensure this, a combination of component-level unit testing and white box testing is needed. Both will rely on a specific set of data and rules, usually not actual production ones but ones crafted to cover core specifications, well-known technical challenges (like the handling of critical values), and regression testing (tests created after bug reports).
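As a minimal sketch of such component-level tests, consider a toy stand-in for a home-grown engine: `apply_rules()` is a hypothetical deduction function (not from any real product) that derives a discount fact from an order total, with a boundary at 1000. The tests cover the core specification, a regression-style case, and the critical boundary value:

```python
# Hypothetical stand-in for a home-grown engine's deduction step:
# orders of 1000 or more earn a 10% discount.
def apply_rules(facts):
    """Derive a 'discount' fact from an 'order_total' fact."""
    derived = dict(facts)
    if facts.get("order_total", 0) >= 1000:
        derived["discount"] = 0.10
    else:
        derived["discount"] = 0.0
    return derived

# Component-level tests covering the core specification...
assert apply_rules({"order_total": 1500})["discount"] == 0.10
assert apply_rules({"order_total": 999})["discount"] == 0.0
# ...and a critical value: exactly 1000 qualifies
# (a classic off-by-one regression to guard against).
assert apply_rules({"order_total": 1000})["discount"] == 0.10
```

In a real setup these assertions would live in a test harness (JUnit, pytest, etc.) and run on every release of the engine.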

These tests should also be complemented by performance and load tests, in order to guarantee that your engine stays within its defined production constraints.

Setup Step 1: Setup the test sandbox

Rules engines are generally external components designed to be fed data they can process and make deductions on, which usually ends up updating the existing data or creating new data. This relative isolation, which somewhat works against the engine at runtime because it is costly to bridge, can be leveraged for testing, as it simplifies the setup of a sandbox, i.e. a dedicated and segregated environment where the engine is free to modify data without touching any actual critical information.
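In the simplest case, a sandbox is nothing more than a deep copy of production-like data that the engine may mutate freely. The sketch below assumes a hypothetical `run_engine()` whose rules flip records to a processed status; the point is that the original records are provably untouched:

```python
import copy

# Production-like records we must not modify.
production_like = [{"id": 1, "status": "new"}, {"id": 2, "status": "new"}]

def run_engine(working_memory):
    # Hypothetical stand-in engine: rules mark every record processed.
    for record in working_memory:
        record["status"] = "processed"
    return working_memory

# The sandbox is a segregated deep copy: the engine mutates it freely.
sandbox = copy.deepcopy(production_like)
run_engine(sandbox)

assert all(r["status"] == "processed" for r in sandbox)
# The source data stays pristine.
assert all(r["status"] == "new" for r in production_like)
```

For engines wired to a database rather than in-memory data, the equivalent is a dedicated schema or instance loaded from a snapshot, per the recommendations cited above.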

For engines that are directly connected to enterprise resources like databases, the setup recommendations in Scott's article will apply.


Setup Step 2: Build input reference

Unless your business rules are simple or the amount of information you are dealing with is limited, it is almost impossible to manually create sample data that represents the diversity of cases your engine will have to process and apply rules on. If you are in this situation, you will have to work with business analysts to select relevant batches of data that represent a reasonable input in terms of data complexity and diversity. This will become your input reference.
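One hedged sketch of this selection step: assume analysts have tagged candidate records with the business case they represent (the tags and records here are invented for illustration). A simple sampler then keeps a bounded number of records per case, and the result is serialized so it can be versioned as the input reference:

```python
import json

# Hypothetical records, pre-tagged by analysts with a business case.
candidates = [
    {"case": "standard_order", "total": 250},
    {"case": "standard_order", "total": 300},
    {"case": "bulk_order", "total": 5000},
    {"case": "returned_order", "total": -120},
]

def select_reference(records, per_case=1):
    """Keep at most `per_case` records for each business case."""
    reference, seen = [], {}
    for rec in records:
        if seen.get(rec["case"], 0) < per_case:
            reference.append(rec)
            seen[rec["case"]] = seen.get(rec["case"], 0) + 1
    return reference

input_reference = select_reference(candidates)
# Serialize the selection so it can be versioned with the rule base.
snapshot = json.dumps(input_reference, indent=2)

assert len(input_reference) == 3  # one record per business case
```

The per-case cap keeps the reference small enough to review by hand while still covering each distinct situation the rules must handle.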


Setup Step 3: Build output reference

After building this input reference, you will have to make your engine apply rules on it: the result will become your output reference, but only after having been validated by an expert.

With this reference data in hand, you will be in a position to certify your rule base, which is usually done by versioning it. It is of paramount importance that not only the rule file(s) be versioned but also all the reference data and related artifacts (like configuration files). This set of files will form a consistent, tested, and validated unit of information that will be a candidate for production and, if need be, for restoration should a downgrade be required. It will also be the unit of data that is copied when a new version of the rule base is needed.
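The "version everything together" idea can be sketched as bundling the rule files, configuration, and reference data into one structure and tagging it with a digest, so the whole set can be certified, promoted, or restored as a unit. File names and contents below are placeholders, not from any real product:

```python
import hashlib
import json

# The certified unit: rules + config + both reference data sets, together.
bundle = {
    "rules": "contents of discount.rules ...",          # placeholder
    "config": "contents of engine.conf ...",            # placeholder
    "input_reference": [{"order_total": 1500}],
    "output_reference": [{"order_total": 1500, "discount": 0.10}],
}

# A digest over the whole bundle gives a cheap certification tag:
# any change to rules, config, or reference data changes the tag.
digest = hashlib.sha256(
    json.dumps(bundle, sort_keys=True).encode("utf-8")
).hexdigest()
version_tag = f"rulebase-1.0-{digest[:8]}"

assert version_tag.startswith("rulebase-1.0-")
```

In practice the same effect is achieved by committing all of these files under a single tag in a version control system; the digest simply makes the "one consistent set" property explicit.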


Following Steps: Iterate

After this initial phase, each subsequent modification of the rule base will have to go through a similar validation process, with the difference that the input reference will probably need editing to include new business cases or remove irrelevant ones.

When testing the modified rule bases, differences with the output reference will arise from the simple fact that business rules have been modified (for example, if a discount rate has been lowered, the newly computed values will change). Again, a business analyst will have to analyze the differences between the previously validated output reference and the new candidate. A diff-like visual tool with a simple accept/reject feature can be of great help here.
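The core of such a diff-and-accept workflow can be sketched in a few lines (the record keys and values are invented for illustration): compare the candidate output against the validated reference, surface only the changed records for analyst review, and promote accepted changes into the new reference:

```python
# Previously validated output reference, keyed by record id.
reference = {"order-1": {"discount": 0.10}, "order-2": {"discount": 0.0}}
# New candidate output, produced by the modified rule base
# (here, the discount rate for order-1 was lowered).
candidate = {"order-1": {"discount": 0.05}, "order-2": {"discount": 0.0}}

# Only records that actually differ need analyst review.
differences = {
    key: (reference[key], candidate[key])
    for key in reference
    if reference[key] != candidate[key]
}
assert list(differences) == ["order-1"]

# The analyst accepts the change; accepted records are promoted,
# rejected ones keep their previously validated values.
accepted = {"order-1"}
new_reference = {
    key: (candidate[key] if key in accepted else reference[key])
    for key in reference
}
assert new_reference["order-1"]["discount"] == 0.05
```

A visual tool would present each entry of `differences` side by side with accept/reject buttons; the promotion step at the end is what turns the reviewed candidate into the next certified output reference.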


Merry nights ahead

Business rule engines are now first-class citizens in the agilist's toolbox. As such, they deserve a well-defined and thorough test strategy. Depending on the engine used, setting up such a strategy will vary between defining your own and embracing the one provided by a vendor. At the end of the day, what really matters is what testing buys you: the possibility to make the most of business rules engines, which are intrinsically able to embrace change, without losing your sleep.