AJS_Validator Motivation

Constraining Type, Value and State

The term 'Design by Contract' (also known as 'Programming by Contract') was coined by Bertrand Meyer, the designer of Eiffel. Design by Contract (DbC) is an approach to software development wherein developers define and enforce rigorously the interfaces between software components, along with their behaviour, such that contravention of the limits imposed constitutes an exception.

In this light, strong-typing in languages like C++ can be seen as a form of DbC because passing, say, a string to a function that is defined as taking an int causes a compile-time error. DbC in full, however, goes beyond this to include the concept of applying pre- and post-conditions to the execution of a function at run-time, to require that certain things be true on entry to the function, and that certain things be true on exit.

This approach reduces dramatically the need for unit testing during application development; by definition, tests are performed implicitly and repeatedly as code executes (and thus DbC can be seen as a form of automated Test-Driven Development). Additionally, these tests are performed in situ, thus fostering the detection of defects that unit tests may not trap. These factors expedite development and improve system reliability. Yet JavaScript does not support run-time type- and value-checking intrinsically, and so an extrinsic equivalent is necessary to achieve the same effect.

The following sections in this page consider the unfavourable nature of conventional approaches to implementing DbC in JavaScript, before exploring the ways in which AJS_Validator – a design-by-contract tool that the AJS object makes possible – can yield all the benefits of design-by-contract development without the serious drawbacks that the other techniques embody.

Contents

Constraining Type, Value and State
Conditional Clutter
Special Comments
Hungarian Notation
Elidable Logic
The Case for Method-Call Interception
AJS_Validator Defined
Easier Library-Management
Better Documentation
Better Design
Non-DbC Benefits

Conditional Clutter

The most straightforward approach to DbC in JavaScript requires peppering one's code with often-complex conditional-logic. Example 1 illustrates this by adding 18 lines of extra code (not counting whitespace) to a function that could otherwise be relatively short; perhaps just one line.

Those extra lines of code bloat the function, harming readability and, from there, comprehension. They also consume bandwidth and contribute to latency should they remain in place when a system is deployed across a network, and they impinge on run-time performance too. To compound this, the extra code is dedicated to trapping exceptional conditions, yet exceptional conditions, by definition, arise only rarely. This means that the checking logic always exacts a hefty price yet pays its way only very occasionally, thus worsening its deleterious effects.

Ideally, we would have a mechanism that allowed us to elide trivially the error-checking code from the system when it is deployed, and which allowed us to re-instate that logic with equal ease should we need to return to the system to resolve defects, or to update it in some way.

Moreover, if we consider the try/throw/catch model of exception handling, we see that it has the effect of gathering all exception-related code in one place, thus removing it from the non-exception related code. Implementing DbC in JavaScript should afford us the same benefit, where contract-enforcing code resides separately from the code that is subject to contract enforcement.
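
By way of a minimal illustration of that separation (a generic sketch, not AJS_Validator code; the submitOrder and reportFailure functions are hypothetical), consider how try/catch already gathers the exception-related logic in one place:


 // -- Illustrative sketch: try/catch separates exception code from normal code

 function submitOrder   (Order) { if (!Order.Id) { throw new Error ("Order has no Id"); } }
 function reportFailure (Err)   { console.log ("Order processing failed: " + Err.message); }

 function processOrders (Orders)
    {
    try
       {
       Orders.forEach (function (Order) { submitOrder (Order); });   // Normal code only
       }
    catch (Err)
       {
       reportFailure (Err);                                          // Exception-related code,
       }                                                             // gathered in one place
    }

 processOrders ([{ Id : 1 }, { }]);   // The second order triggers the catch block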

A further problem with the approach shown in Example 1 is that the conditional logic is just that; it must embody not only the factors that we wish to check but also the steps by which the validation should be performed. When using regular expressions, we state patterns that direct the searches that the regex engine performs, without needing to state how those searches should be implemented. It follows that we would prefer an approach to DbC where we state what should be policed without having to implement the validation process too.
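
The contrast is easy to see in miniature (a hedged sketch; the three-digit-code check is purely illustrative):


 // -- Illustrative sketch: declarative versus imperative validation

 var Input = "042";                                       // A hypothetical value to validate

 // Declarative: we state only the pattern; the regex engine decides how to search.

 var MatchesPattern = /^[0-9]{3}$/.test (Input);

 // Imperative: we must also implement the steps of the check ourselves.

 var MatchesManually = (typeof Input === "string") && (Input.length === 3);

 for (var Idx = 0; MatchesManually && Idx < Input.length; Idx++)
    {
    if (Input.charAt (Idx) < "0" || Input.charAt (Idx) > "9") { MatchesManually = false; }
    }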

To compound these drawbacks, the argument-checking that Example 1 implements applies to that function alone, and is not re-used elsewhere. We could address this by transferring the error-checking code into some form of generic validation-module, which would contain a family of methods for performing a range of checks. However, we would still have to place calls to those methods within any function the execution of which we wished to police. While this would reduce the code-verbiage, we would still suffer a net loss.
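
A sketch of such a module follows (the Validate object and its method names are hypothetical); note that the calls to it must still appear inside every function that we wish to police:


 // -- Illustrative sketch: a generic validation-module still requires in-place calls

 var Validate = {

    assertClass : function (Value, ClassName, Message)
       {
       if (Object.prototype.toString.call (Value) !== "[object " + ClassName + "]")
          { throw new Error (Message); }
       },

    assertOneOf : function (Value, Permitted, Message)
       {
       if (Permitted.indexOf (Value) === -1) { throw new Error (Message); }
       }

    };

 function someFunc (Num, Str, FuncRef)
    {
    Validate.assertClass (Num,     "Number",   "Num argument is not a number");        // The verbiage shrinks,
    Validate.assertOneOf (Num,     [2, 4, 6],  "Num argument has an incorrect value"); // but the calls still
    Validate.assertClass (Str,     "String",   "Str argument is not a string");        // clutter every policed
    Validate.assertClass (FuncRef, "Function", "FuncRef argument is not a function");  // function.

    return 42;
    }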

Finally, using this approach in debugging a legacy code-base forces us to edit the code in question, where we would prefer to leave that code untouched. It follows that the optimum DbC tool will allow us to diagnose problems in legacy code-bases without using 'invasive' techniques, and a parallel is to be found in modern medicine; doctors prefer to use body scanners to diagnose disease, rather than to perform exploratory surgery that carries substantial risks.


 // -- Example 1 --------------------------------------------------------------
 //
 // Note that using the typeof operator is not the best way to determine an
 // object's class in JavaScript, and that a considerably superior alternative
 // exists, which is explored in the AJS_Validator user-guide. That is: the
 // typeof operator is used here for the purposes of familiarity and brevity
 // only.
 //

 function someFunc (Num, Str, FuncRef)
    {
    if (       Num     ===  undefined) { throw new Error ("Num argument is undefined");                  }
    if (       Num     ===  null)      { throw new Error ("Num argument is null");                       }
    if (typeof Num     !== "number")   { throw new Error ("Num argument is not a number");               }

    if (       Num     !==  2
    &&         Num     !==  4
    &&         Num     !==  6)         { throw new Error ("Num argument has an incorrect value");        }

    if (       Str     ===  undefined) { throw new Error ("Str argument is undefined");                  }
    if (       Str     ===  null)      { throw new Error ("Str argument is null");                       }

    if (typeof Str     !== "string")   { throw new Error ("Str argument is not a string");               }

    if (       FuncRef ===  undefined) { throw new Error ("FuncRef argument is undefined");              }
    if (       FuncRef ===  null)      { throw new Error ("FuncRef argument is null");                   }
    if (typeof FuncRef !== "function") { throw new Error ("FuncRef argument is not a function");         }

    // -- Normal Code ----------------------------

    var Value = 42;


    // ...


    // -- End of Normal Code ---------------------

    if (       Value   ===  undefined) { throw new Error ("Return value from someFunc is undefined");    }
    if (       Value   ===  null)      { throw new Error ("Return value from someFunc is null");         }
    if (typeof Value   !== "number")   { throw new Error ("Return value from someFunc is not a number"); }

    if (       Value   !==  3
    &&         Value   !==  5
    &&         Value   !==  7)         { throw new Error ("Return value from someFunc is incorrect");    }

    return Value;

    }

 someFunc (6, true, "I am not a function reference");

 -- Output ----------------------------

 Error: Str argument is not a string
      

Special Comments

A second approach to DbC that addresses some of the concerns enumerated above is to use a statically-evaluated commenting scheme, such as that employed by Google's Closure Compiler. This minifying/optimising tool can use comments that start with the '/**' comment-convention as directives that describe the types that a given function will accept as arguments. It can then check the arguments passed at each call-point throughout a given code-base for the function in question.

Example 2 demonstrates this approach. Here, the comment-header states that the first argument to someFunc should always be a number, that the second should be a string, and that the third is always a reference to a function that takes a single string-argument.

The advantage here is that we lose the bandwidth and run-time problems from which run-time checking-logic suffers (minification removes the comments), but we must still clutter the code with the required comments. Moreover, the analysis that Google's tool performs is static only. It can do nothing about run-time type violations where, for example, a function is passed a reference to another function (held within, say, an array of such references) whose signature violates the constraints laid down in the comment-header – a case sketched after Example 2 below.

Furthermore, its static nature means there is no way of enforcing pre- and post-conditions, and so, at best, this approach is only half the solution to implementing DbC in JavaScript.


 // -- Example 2 --------------------------------------------------------------

 /**
  * @param {number} SomeNum
  * @param {string} SomeString
  * @param {function (string)} someFuncRef
  */

 function someFunc (SomeNum, SomeString, someFuncRef)
    {

    // ...

    }
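
To illustrate the run-time gap described above (a hypothetical sketch, not one of the numbered examples; whether the Closure Compiler flags it depends on how much type information it can infer about the array):


 /**
  * @param {function (string)} Callback
  */
 function notify (Callback)
    {
    Callback ("Job done");                                   // Assumes a string-accepting callback
    }

 var Callbacks = [];                                         // Populated at run-time, perhaps from
                                                             // configuration or remote data...
 Callbacks.push (function (Count) { return Count + 1; });    // ...so a numeric callback can slip in.

 notify (Callbacks [0]);                                     // No error is raised; the signature
                                                             // mismatch goes entirely unnoticed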
      

Hungarian Notation

Another conceivable approach is to observe a formal argument-naming scheme akin to 'Hungarian Notation', which some programmers employ in a quasi-DbC context, in order to provide information about an object's type, class and purpose within a given object's name (Microsoft adopted this technique heavily in its use of C and C++).

In Hungarian Notation, a variable that, for example, holds a real number describing a numeric-width value might be called Width (its 'given name', as it is known in Hungarian Notation), and might be rendered in code as rnWidth, whereas a boolean called, say, SystemBusy would be rendered as bSystemBusy. We could use this to implement proper DbC by extending the naming-scheme to include value-constraints, where some form of pre-processor (of considerable sophistication) would parse the code for the DbC notation, and would ensure, for every appearance of a given object, that correct operations only were performed on that object.

In principle, this technique offers certain advantages: it does away with messy commenting-schemes that bloat the code, and it lessens the need for additional non-DbC comments to clarify the intent and proper usage of a given object. Moreover, the manifest nature of type, class and value-limit information in the code absolves developers from looking up such information in separate documentation-files. In addition, minification reduces symbol length to a minimum, so the scheme does not worsen the latency that deployed code incurs in a networking context.

However, this approach also carries a slew of disadvantages for the developer. For example, changing the type-/class-/value-constraints on an object would entail effecting a name-change across, potentially, an entire code-base. This could be tedious and time-consuming, especially in the earlier phases of a project when design and implementation are in a high state-of-flux, where re-naming operations are frequent.

Moreover, it is impossible to turn the scheme on and off trivially in order to gain respite from the symbolic overhead; once you start down this route, you are wedded to the naming regimen, where divorce is protracted and tiresome because ridding oneself of the scheme completely requires time and effort to rename objects throughout the system in question.

It also leads to exceptionally long and unwieldy names. For example, how would one name an object that was some form of counting variable with a given name of 'Count', and which should always hold an integer value that must never be zero or negative, but which must be less than, say, 100?

'Count_IntNotZeroNonNegLT100' comes painfully to mind; indeed, such monstrosities would serve to bloat one's code excessively. This would render it far less readable, and far harder with which to work – DbC is supposed to make life easier, not re-introduce defect liability by the back door.

To demonstrate this point, Examples 3 and 4 give a before-and-after comparison of the technique. The code in Example 3 defines a function that accepts an argument called Radius (representing the radius of a circle), and which uses the value of that to return an object containing the diameter, circumference and area values for a circle of that radius. Pretty simple stuff, and the code is eminently readable; even a neophyte could make a reasoned guess as to what the code does.

Example 4, however, re-defines that function, prefixing the various names with typing- and value-constraints, and the difference is notable; indeed, the example shows that Hungarian notation carries a degree of harmful redundancy. A name such as 'Circumference' already implies clearly and reliably that the object in question is most likely a real number that relates to circles, yet affixing type/class/value signifiers only blunts that immediate readability. In this way, we make our code a slave to the approach, which constitutes an inversion of concerns – DbC is supposed to be there for us, not the other way round.

Furthermore, and as with Google's approach, enforcing pre- and post-conditions is impossible, as the technique is purely static; nor can we re-use type-, class- and value-checking assertions across a system (or even a set of systems). This was the problem with HTML prior to CSS: developers had to state styling properties as attributes in every relevant tag, which was dreadfully inefficient and unwieldy, and which made changing styles across an entire site exceptionally tedious and time consuming.

To add to these concerns, developers would have to beware of using given names that clashed with the DbC notation-scheme. For example, a system that was designed to manage, say, a photographic-film archive would have to avoid using the term 'Neg' to name objects that related to photographic negatives, because this might collide with the use of the term NonNeg, which, in the example given above, would mandate that a numeric object never be negative in value. This could cause confusion on the part of the developers working on that system, thus impinging on the project unfavourably.


 // -- Example 3 --------------------------------------------------------------

 function getCircleValues (Radius)
    {
    return {

       Diameter      : 2 * Radius,
       Area          :     Math.PI * Radius * Radius,
       Circumference : 2 * Math.PI * Radius

       };

    }
            

 // -- Example 4 --------------------------------------------------------------

 function funcRtnObj_GetCircleValues (RealNonNeg_Radius)
    {
    return {

       RealNonNeg_Diameter      : 2 * RealNonNeg_Radius,
       RealNonNeg_Area          :     Math.PI * RealNonNeg_Radius * RealNonNeg_Radius,
       RealNonNeg_Circumference : 2 * Math.PI * RealNonNeg_Radius

       };

    }

 // -- Prefix Key ------------------------------
 //
 // Func   - The symbol is a function name (necessary because we may wish to take a reference to a function)
 // Rtn    - Denotes what the function returns
 // Obj    - Denotes an object (rather than an array, function reference, integer etc.)
 // Real   - A number with a fractional component (i.e. not an integer)
 // NonNeg - The value cannot be negative
 //
      

Elidable Logic

Another approach to DbC is to use, for example, the assert macro when programming in C. The additional logic that this introduces into a compilation unit can be eliminated prior to the application's release simply by re-compiling with the NDEBUG macro defined before the inclusion of <assert.h>.

As a parallel when working in other languages like JavaScript, we can use special comments to enclose our validation logic, thus indicating the tracts of code that a preprocessor should elide from the system prior to deployment. Example 5 demonstrates this, where a suitable preprocessor would remove anything sitting between the REMOVE_ME_START and REMOVE_ME_END comments (a minimal sketch of such a preprocessor follows the example).

This is a viable scheme that banishes the bandwidth and performance issues that dog the most-straightforward approach considered above. Moreover, the checking-logic remains dynamic, thus enabling us, in principle, to trap any class of error during development. However, the clutter of the validation syntax is still present in the code, which reprises the problem demonstrated by Example 1, and lards that code with obtrusive comments that bloat matters still further. This harms readability and comprehension even more than before.


 // -- Example 5 --------------------------------------------------------------

 function someFunc (SomeNum, SomeString, someFuncRef)
    {
    // REMOVE_ME_START

    //
    // Argument and pre-condition checking here as before
    //

    // REMOVE_ME_END


    //
    // Function-core, as before
    //


    // REMOVE_ME_START

    //
    // Return-value and post-condition checking as before
    //

    // REMOVE_ME_END

    return // ...

    }
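
Such a preprocessor need not be elaborate. The following Node.js sketch (hypothetical, and not part of AspectJS) strips every REMOVE_ME block from a source file before deployment:


 // -- Illustrative sketch: a minimal Node.js preprocessor for REMOVE_ME blocks

 var fs = require ("fs");

 function stripValidation (SourcePath, TargetPath)
    {
    var Source   = fs.readFileSync (SourcePath, "utf8");
    var Stripped = Source.replace (/[ \t]*\/\/ REMOVE_ME_START[\s\S]*?\/\/ REMOVE_ME_END[^\n]*\n?/g, "");

    fs.writeFileSync (TargetPath, Stripped);
    }

 stripValidation ("someFunc.js", "someFunc.deploy.js");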
      

The Case for Method-Call Interception

Happily, however, we can use method-call interception to implement a design-by-contract mechanism that satisfies all of the requirements detailed above, with none of the restrictions that the other approaches entail.

By using the AJS object, we can attach a prefix- and suffix-function to a given method, where the prefix checks the type and value of arguments that are passed to that method, as well as system state just prior to entry to the method (if necessary). Similarly, the suffix can check the type and value returned by the method, as well as system state following the method's execution (again, if necessary).
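
In outline, and setting the AJS object's actual API to one side, interception amounts to wrapping a method so that arbitrary checks run before and after the original (the intercept function and the Circle object below are purely illustrative):


 // -- Illustrative sketch: wrapping a method with prefix- and suffix-checks

 function intercept (Obj, MethodName, Prefix, Suffix)
    {
    var Original = Obj [MethodName];

    Obj [MethodName] = function ()
       {
       Prefix.apply (null, arguments);                      // Check arguments/pre-conditions

       var RtnValue = Original.apply (this, arguments);

       Suffix (RtnValue);                                    // Check return value/post-conditions

       return RtnValue;
       };
    }

 var Circle = {
    area : function (Radius) { return Math.PI * Radius * Radius; }
    };

 intercept (Circle, "area",
    function (Radius)   { if (typeof Radius !== "number") { throw new Error ("Radius is not a number"); } },
    function (RtnValue) { if (RtnValue < 0)               { throw new Error ("Area is negative");       } });

 Circle.area (2);          // Passes both checks
 Circle.area ("two");      // Throws: Radius is not a number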

We can develop this concept to the point that we use an object to define the constraints that must be placed on arguments and return-values (a 'Validation Definition' or 'ValidationDef' in the terminology used here), and which also supports optional, user-defined checks on other factors in a system. We can then pass such an object to a 'Validation Enforcer', which uses it thereon as a set of directives that control its validation operations.

The diagram depicts these essential ideas, showing that they confer enormous advantages over the other approaches to DbC explored above. To wit:

  • We need not clutter our code with special comments, thus we can preserve readability and comprehension throughout.
  • Methods that we police for contract violations remain uncluttered by contract logic. As with the try/throw/catch model of exception handling, this approach gathers all the validation-related code in one place, while all the 'normal' code resides elsewhere. This gives the potential to validate legacy code-bases without editing them. Even if we wish to validate calls to methods of objects that are internal to a given function, we can collect the relevant ValidationDef(s) in one place in that function, separate from the non-validation code.
  • We need not pervert the names we give our objects, such that they read like medieval Klingon, thus we can keep our code maximally clear and readable.
  • We can perform type-, class- and value-checking dynamically rather than statically, thus catching errors that a static approach would miss.
  • As with regular expressions, the ValidationDef object need state only what to check, not how to check it – the validation enforcer can do that for us.
  • It is trivial to turn the enforcer on and off by commenting-out or re-instating a single method-call. This allows us to eliminate the overhead of the DbC system easily when rigorous policing is not needed (i.e. when the system is deployed, or when a particular sub-set of the application's code can be considered sound).
  • We can re-use a given ValidationDef on methods that share the same calling signature, thus reducing redundancy.
  • We can re-use common ValidationDef-elements in perhaps many ValidationDefs, thus reducing redundancy still further.
  • We can debug remotely as well as locally by providing the contract enforcer with an XHR or a WebSocket object by which it can send exception notifications across the network.
Diagram of internal call-chains followed by AJS_Validator's ancillary methods

AJS_Validator Defined

This, then, is the rationale underlying AJS_Validator, an object that uses the AJS object to implement the ideals outlined above. AJS_Validator lends JavaScript the equivalent formal support for design-by-contract programming that languages such as Ada and Eiffel possess natively, and you need only a little understanding to start using it profitably, with no need to understand any other element of the AspectJS library.

In practice, AJS_Validator proves to be a bug-hunter par excellence, and in the development of sophisticated systems such as the Online Bibliography of Ottoman-Turkish Literature (of which a screen-shot is presented here), its use resulted in a demonstrably high-quality application, and saved weeks of valuable time during one six-month phase alone.

The Ottoman system is a scholarly research database, and its principal functionality constitutes a modern, single-page web application that is JavaScript programming writ large (none of your arthritic one-page-per-action model here, thank you). Its implementation draws upon a set of over 70 fine-grained JavaScript components, all of which have corresponding ValidationDefs. During development, and at any point since at which the system has undergone some form of extension or modification, AJS_Validator was enabled, and thus subjected the application to an exacting, unrelenting and automatic test-regimen whenever a given element of the system's functionality was exercised.

In fact, it is striking how AJS_Validator will detect problems even when you are not looking for them (assuming a scorched-earth policy when generating ValidationDefs for the application in question). The tool brings defects to your attention unbidden and immediately as you test an application, giving detailed information on the nature and precise location of the problem, and so acts as an ever-present but transparent sentinel that saves substantial development time.

Indeed, AJS_Validator assisted the development of AJS_Logger and AJS_ODL too, because they possessed their own constantly-evolving ValidationDefs from the outset (see the Dog-Food Consumption section in the Product Overview), and this cut the development time of the AspectJS product (and this web site) as a whole. Moreover, the sizes of those ValidationDefs demonstrate clearly the volume of validation logic we can pull out of a system when using method-call interception to catch exceptional conditions. That is: when comparing their minified sizes (in the interest of maximal accuracy), AJS_Logger and AJS_ODL are 1.359k and 4.498k respectively, while the validation code for each is 4.962k and 10.518k – the validation sources are substantially bigger than those of the corresponding validatees.

Screen-shot of Ottoman-Literature site

Easier Library-Management

AJS_Validator has also proved to be of great value in the development of libraries that are shared among multiple applications. This benefit was not anticipated during the tool's inception, but became apparent in the development of kembrapublications.com and beckybye.com, screen-shots for which are presented here. Development of these sites commenced after the Ottoman system came into being, and their administrative functionality (which is not available to ordinary site-visitors) draws from the same set of JavaScript components.

This situation demonstrated what we all know to be true: having developed a body of code, you will return to it almost inevitably in order to resolve defects, improve performance, or to extend or adapt it in some way. If that code constitutes a component library that is shared between applications, and you modify that library in order to satisfy the needs of a current project, there is a fair chance that such modification will introduce defects into those applications that were developed previously.

This will force you to re-visit those prior applications in order to resolve the trouble, and this can be challenging when you have not worked recently on the code concerned. We are all familiar with that experience: you have to re-familiarise yourself with the system's design, implementation and modus operandi, and this can take considerable, frustrating time (why did I do it that way? What is that doing there? I don't remember writing that!).

This is where static typing can really come into its own, because laying down the rules for calling a given method when defining that method relieves you of the need to remember, perhaps years later, how that method should be called. However, JavaScript does not support static typing, meaning that one's only guard against shared-library mis-calls in that language is extraordinary and unrelenting vigilance, the acceptance of a far greater debugging-burden, or the use of some form of design-by-contract tool.

Experience shows the final option to be the clear winner; AJS_Validator brings run-time type-enforcement to JavaScript, where, with that tool in place, defects that arise from shared-library modifications come to your attention swiftly and automatically as you put the relevant applications through their paces.

It is worth noting too the sweet sense of relief that such notifications bring, as you realise that a given problem can be fixed very quickly, and that perhaps hours of your time have been saved for use on something far more profitable and satisfying than yet more debugging.

Screen-shot of beckybye.com and kembrapublications.com

Better Documentation

A further benefit that comes from the use of AJS_Validator – one that also manifested unexpectedly during its development – is that it aids the documentation process (something of a chore that none of us approach with relish). This is because the properties that ValidationDefs can possess are precise in meaning and relatively few in number, and this makes it easy to convert them into natural prose, or some form of tabular format.

The documentation process is aided still further by the fact that all the contract-related code is defined separately rather than being scattered like weeds throughout the non-contract-related code. To put this another way: when you need to generate API documentation for a given tranche of functionality, you need only gather the ValidationDefs for the components in question in one place, and then read-off the API information directly from there.

To demonstrate this, Example 6 shows a real ValidationDef that relates to a fundamental component in the library on which the systems cited above depend. The user guide elucidates the meaning of this code fully but, to summarise, it states that a method called createTransactor accepts six arguments, all of which may be of class Object only, except the second argument, which may also be an array. It states also that the three trailing arguments must be objects that carry certain identifier-strings ('tags' – again, see the user guide for clarification), and it states too that AJS_Validator must append a tag with the value of 'Transactor' to the objects that createTransactor returns (tags are of great value in debugging).


 // -- Example 6 --------------------------------------------------------------

 AJS_Validator.pushValidationDef (this,                                            // The function to validate
    {                                                                              // is at global scope...
    MethodNames       : ["createTransactor"],                                      // ...And it is called
    CallDef           :                                                            // createTransactor.
       [
       { AllowClasses : ["Object"          ],                                              },  // Arg 0
       { AllowClasses : ["Object", "Array" ],                                              },  //     1
       { AllowClasses : ["Object"          ],                                              },  //     2
       { AllowClasses : ["Object"          ], AllowTags : ["Messenger"                   ] },  //     3
       { AllowClasses : ["Object"          ], AllowTags : ["CtrlSet", "BtnSet", "LinkSet"] },  //     4
       { AllowClasses : ["Object"          ], AllowTags : ["Form"                        ] }   //     5
       ],

    RtnDef            : { ApplyTag : "Transactor" }                                // Label the objects
                                                                                   // that createTransactor
    });                                                                            // returns.
      

Better Design

This documentation bonus extends to the point where AJS_Validator facilitates improvement of a given design (another unanticipated bonus). Beyond its superb powers of bug detection, the tool proves to be an agent of disclosure that can illuminate areas of inconsistency and inefficiency in a system's design and implementation.

This occurs when you review side-by-side a set of ValidationDefs for a given set of methods, and see, for example, inconsistencies between method signatures. You may see, say, methods that accept identical arguments but which differ in the order in which they accept those arguments (the hypothetical defs shown below illustrate the point). This readily apparent contrast helps you to re-factor your design and implementation so as to increase its orthogonality, and this feeds back in turn into the composition of the ValidationDefs in question, in that you are able to re-code them subsequently, thus yielding a greater degree of ValidationDef re-use – a win-win situation.
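
For instance, the two hypothetical ValidationDefs below (which follow the shape shown in Example 6, although the method names and permitted classes are invented for illustration) place such an inconsistency in plain sight:


 AJS_Validator.pushValidationDef (this,
    {
    MethodNames : ["renderHeader"],
    CallDef     :
       [
       { AllowClasses : ["Object"] },                    // Arg 0 - a Messenger...
       { AllowClasses : ["String"] }                     //     1 - ...then a title
       ]
    });

 AJS_Validator.pushValidationDef (this,
    {
    MethodNames : ["renderFooter"],
    CallDef     :
       [
       { AllowClasses : ["String"] },                    // Arg 0 - a title...
       { AllowClasses : ["Object"] }                     //     1 - ...then a Messenger. The
       ]                                                 //         inconsistency is now obvious,
    });                                                  //         inviting a re-factoring.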

Non-DbC Benefits

The method-call interception approach to system validation also yields benefits that do not relate directly to design-by-contract practice. Given that the AJS object allows us to apply multiple prefixes and/or suffixes to a method, we can combine transparently the use of AJS_Validator with run-time logging and/or on-demand resource retrieval. This means that a given method's execution can be subject simultaneously to validation and logging and performance instrumentation (using AJS_Logger), and can also trigger the automatic loading of other code (using AJS_ODL), where each such requirement is satisfied in blissful ignorance of the others.

This chimes deeply with the fundamental concept behind aspect-oriented programming, in that it allows us to separate out cross-cutting concerns – tight cohesion with loose coupling – and this demonstrates the power of the AspectJS library.
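
As a rough sketch of such stacking (generic JavaScript; the AJS object's actual attachment API may differ, and the addPrefix function and Store object are invented for illustration):


 // -- Illustrative sketch: stacking independent prefixes on one method

 function addPrefix (Obj, MethodName, Prefix)
    {
    var Original = Obj [MethodName];

    Obj [MethodName] = function ()
       {
       Prefix.apply (null, arguments);
       return Original.apply (this, arguments);
       };
    }

 var Store = {
    save : function (Record) { /* ... */ }
    };

 addPrefix (Store, "save", function (Record) { if (Record === undefined) { throw new Error ("Record is undefined"); } });  // Validation concern
 addPrefix (Store, "save", function (Record) { console.log ("save called at " + Date.now ()); });                         // Logging concern

 Store.save ({ Id : 1 });   // Both prefixes run, each in blissful ignorance of the other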

So, to learn how to go a long way towards ensuring that your JavaScript software does exactly what it should, that it is considerably smaller when deployed, that it implements a more-favourable design, and that its development requires significantly less time, go now to the AJS_Validator user guide.