Testing in yeti using yebspec

Posted on April 1, 2011


After some first steps using the REPL to learn yeti – the statically typed functional language for the JVM – you probably want to do a bit more to get a better feel for the language. In my experience one of the best ways to do so is to write unit tests.

As it is easy to write regular Java classes in yeti, you can write JUnit TestCases and do your unit testing that way. However, in this post I want to introduce a more ‘yetish’ approach using yebspec, a very simple test module that sits somewhere between classical unit testing and specs, the Scala Behaviour-Driven Development (BDD) test framework.

Yebspec is, implementation-wise, a very simple test helper. It is just one module in the yeb project (https://github.com/chrisichris/yeb), a still very alpha project for yeti. However, yebspec is the most stable (and most used) part of the project.

In this blog post I want to show:

  1. how to write a yebspec test, then
  2. how to run it from the console or through the JUnit integration, and finally
  3. a bit of how it is implemented.

Writing Specifications to Specify and Test your code

As said, yebspec is implementation-wise a very simple framework – much simpler than, for example, Scala specs. All it does is collect and report AssertionFailedErrors which might be thrown inside user-defined specification functions by assert functions – exactly like in JUnit.

Describe what Something Should do

You basically create a dedicated spec module (similar to a JUnit TestCase class) and define different specifications and assertions in it. This module can then be run; it returns a result summary, which can be printed or otherwise reported and analyzed.

Let’s look at an example, where we test part of the yeti.lang.std module:

module example.stdSpec;

load org.yeb.yebspec;

specificationsFor "the yeti.lang.std module which is implicitly loaded in yeti" \(

    specification "for the list manipulation functions" \(
        baseList = [1 .. 20];

        describe "map" \(
            should "apply function to each element" \(
                r = map (+2) baseList;
                assertEquals [3 .. 22] r;
            );
            should "work on empty lists as well" \(
                assertEquals [] (map id []);
            );
        );
    );

    specification "for the string functions" \(
        describe "strLength" \(
            should "give right length" \(assertEquals 3 (strLength "123"));
        );
    );

    //a describe can also be used directly, without an enclosing specification
    describe "peekObject" \(
        should "read struct" \(..);
    );
);

To define specifications a new module is created. The result of the module is the result of its single specificationsFor call.

The specificationsFor function takes a descriptive string and a function argument – \( ... ) – which is evaluated once each time the result of specificationsFor is evaluated.

Inside the argument function, the functions specification, describe and should (collectively called subSpecs) are called. Each in turn takes a description string and a function argument, which can recursively contain further subSpecs and assert statements. From the code-logic point of view there is no difference between specification, describe and should: they are all the same function, just with different names for better readability.

To compare this with JUnit: the module plus the specificationsFor function corresponds to a TestCase class, and the subSpecs (specification, describe, should) correspond to the test methods, which contain the assert statements.

Asserting Behavior

Like in JUnit, different assert functions can be used in yebspec to assert that code does what it should do. (In BDD it is a central concept to use different names than assert, because there you do not test something but describe it; yebspec, however, uses assert like JUnit.)

The assert statements must be executed inside the argument functions of one of the subSpecs (specification, describe, should). Like their JUnit counterparts, the assert functions throw a junit.framework.AssertionFailedError if the assertion does not hold; this error is caught by the subSpecs and collected for the report.

There are many different assert functions: basic ones (fail, assertTrue, assertFalse, assertEquals, assertNotEquals, assertSame, assertDefined), some for lists (assertEmpty, assertNotEmpty, assertContains, assertContainsAll, assertEqualsElements) and some for exceptions (assertException, assertFailWith, assertExceptionIs). (To see how to use them it is currently best to look at the source of the src/main/java/org/yeb/yebspec.yeti module at https://github.com/chrisichris/yeb.)
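As a sketch, the list asserts might be used like this (the argument order is an assumption of mine; check the yebspec source for the exact signatures):

```yeti
should "filter even numbers" \(
    //build the test data inside the should
    evens = filter do x: x % 2 == 0 done [1 .. 10];
    assertNotEmpty evens;
    assertContains evens 4;
    assertEqualsElements [2, 4, 6, 8, 10] evens;
);
```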

If you want, you can add a description to an assert using the assertLabel function:

assertLabel "some description" \(assertTrue someCondition);

Writing a new assert function is simple: just check the condition and, if it fails, call testFail with an assert-specific message. For example, assertEquals is implemented this way:

assertEquals right test =
    if right != test then
        testFail ("Should be [\(right)] but was [\(test)]")
    else () fi;

The testFail function will throw the right exception and take care that the assertLabel is added.
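Following the same pattern, a custom assert of your own – here a hypothetical assertGreater, not part of yebspec – could be written like this:

```yeti
assertGreater limit test =
    if test <= limit then
        testFail ("Should be greater than [\(limit)] but was [\(test)]")
    else () fi;
```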


Fixtures and Test Environments

The argument functions passed to specificationsFor and to the subSpecs are normal yeti functions. Any code can be executed in them, and the subSpecs themselves are normal functions which are invoked – no magic here.

Therefore all normal yeti idioms can be used to define fixtures etc. There is no need for setUp() tearDown() methods.

To define a fixture which is fresh for each ‘should’, you generally do the following:

describe "array functions" \(
    shouldFX desc fn =
        should desc \(fn (array [1, 2, 3]));

    shouldFX "push" do ar:
        push ar 4;
        assertEquals (array [1 .. 4]) ar;
    done;

    shouldFX "shift" do ar:
        assertEquals 1 (shift ar);
        assertEquals (array [2, 3]) ar;
    done;
);

Here we define our own should function, shouldFX. It takes the description string and a function which gets a fresh array as test data. shouldFX then calls the standard should, applying the provided function to a fresh array. The shouldFX function is then used like the standard should function to test the push and shift array functions from yeti.lang.std.

If you have to do some clean-up, you should wrap the call to the provided function in a try ... finally ... yrt (just as you would in Java or yeti):

shouldFX desc fn = should desc \(
    stream = ....;
    try fn stream finally stream#close() yrt;
);

To have a test environment which you do not want to set up for each ‘should’ because it is too expensive (e.g. a slow-starting service in integration testing), you just wrap many shoulds in a try ... finally ... yrt:

describe "expensive service" \(
    service = ....;
    try
        should "provide this" \(..);
        should "provide that" \(..);
    finally
        closeService service;
    yrt;
);

This is all standard yeti. There is no support from yebspec for fixtures, test environments etc., because yeti itself provides them more concisely than the framework could.

Composing a Suite out of different Yebspec Modules

Like in JUnit, different yebspec modules can be combined into a suite, which to the outside looks itself like a yebspec module. Suites can in turn be combined into other suites, and so on.

This comes in handy if you want to run all the tests from one place, or combine tests from different places.

To do so, define a module which calls the specSuite function with a list of yebspec modules:

module example.allSpecSuite;

load org.yeb.yebspec;  

specSuite "all specs for yeti lang"
        [load example.stdSpec,
         load example.ioSpec,
         load example.fooSpec];

Running from the Repl and JUnit Integration

A specification module can be run from the regular yeti console, more conveniently through the yeti-maven-plugin repl, and through JUnit for build-tool and IDE integration.

Running from the regular console

yeti>ys = load org.yeb.yebspec;
yeti>res = (load example.stdSpec) none
yeti>ys.printResult print res

First the yebspec module is loaded, then the spec we have defined is evaluated, and finally the result is printed.

There is a lot of uninteresting intermediate output, so you generally do this in one line:

yeti>(load org.yeb.yebspec).printResult print ((load example.stdSpec) none)
 Y e b   Y e t i   S p e c s
success:   specifications for: the yeti.lang.std module which is implicitly loaded in yeti (s:6, f:0, e:0)
  success:   specification: for the list manipulation functions (s:4, f:0, e:0)
    success:   describe: map (s:2, f:0, e:0)
      success:   should: apply function to each element
      success:   should: work on empty lists as well

    success:   describe: array (s:2, f:0, e:0)
      success:   should: push
      success:   should: shift

  success:   specification: for the string functions (s:1, f:0, e:0)
    success:   describe: strLength (s:1, f:0, e:0)
      success:   should: give right length

  success:   describe: peekObject (s:1, f:0, e:0)
    success:   should: read struct

Result: SUCCESS:
Specs run: 6, failed:0, exceptions:0
===== E N D   Y e b   Y e t i   S p e c s

Running from the yeti-maven-plugin Repl

If you are using the yeti-maven-plugin (which is handy when developing with yeti; see a previous blog entry), there is additional support.

In particular, you can run all or certain specs automatically each time you change a source file. This is very helpful during development, because yeti immediately recompiles the changed source file and tests it. It is similar to the instant error reporting of modern Java IDEs, and goes further, because it reports not only compile errors but also test failures. And it does not slow down the workflow, because yeti compiles fast – very fast. This way you get nearly both the interactive development of a dynamic language and the compile-time safety of a static language.

To enable the additional maven support, add the yeb dependency and configure the yeti-maven-plugin repl to automatically load the org.yeb.yebshell module. The org.yeb.yebshell module is the helper module which provides, among other things, the repl support for yebspec:

            ys = load org.yeb.yebshell;

Start the maven repl from the command line and call -ys.checkSpec "module.name" to run a spec once, or -ys.monitorSpec "module.name" to run a spec each time a source file changes:

C:\yeti\example>mvn -Dyetic=false clean yeti:repl
yeti>-ys.checkSpec "example.stdSpec"
yeti>-ys.monitorSpec "example.stdSpec"

Note that the module name is given as a string. This is because the tests are executed in a separate repl.

Use -ys.showMonitors () to show all current monitors and -ys.removeMonitor id to remove a monitored spec.

JUnit integration

JUnit is of course excellently integrated into build tools, IDEs etc. – something yebspec is certainly not ;-). However, yebspec uses JUnit as a bridge to get the same support JUnit has: nice graphical output in IDEs, automatic testing through maven, ant etc.

If you want that – and you probably do – just create a Java JUnit TestCase class and load all the tests of your yebspec in its suite() method. To JUnit (and all the tools) this then looks exactly like a regular JUnit 3 TestSuite:

package example;

import junit.framework.Test;
import junit.framework.TestCase;
import org.yeb.YebUtils;

public class StdSpecTest extends TestCase {

    public StdSpecTest(String testName) {
        super(testName);
    }

    public static Test suite() {
        Test suite = YebUtils.createSuite("example.stdSpec");
        return suite;
    }
}

That’s it: now all your specs are run and reported as JUnit tests by maven, NetBeans etc.


How it is Implemented

As said, the implementation is actually very simple. It is just one module, half of which is assert definitions and reporting. You can take a look at this module at the project’s github site in src/main/java/org/yeb/yebspec.yeti.

At the core there are these functions:

_newNode name is string -> resultNode = {
    name,
    var parent = none,
    var children = [],
    var result = Right (),
    var msg = "",
    var rights = 0,
    var fails = 0,
    var exceptions = 0
};

_parentNodeTL = (x is None () | Some resultNode = none; threadLocal x);

_specs maybeParentNodeIn name action is (None () | Some resultNode) -> string -> (resultNode -> 'a) -> resultNode = (
    //executes the given action on a node and all its ancestors (parents)
    withParents node ac = (
        ac node;
        case node.parent of
            None _ : ();
            Some n : withParents n ac;
        esac;
    );

    maybeParentNode = case maybeParentNodeIn of
                          None _ : _parentNodeTL.value;
                          Some _ : maybeParentNodeIn;
                      esac;

    //create the node for executing this spec
    myNode = _newNode name;
    myNode.parent := maybeParentNode;
    case maybeParentNode of
        None _ : ();
        Some pn : pn.children := pn.children ++ [myNode];
    esac;

    //set our node as the new thread-local parent
    oldTLV = _parentNodeTL.value;
    _parentNodeTL.value := Some myNode;

    try
        _ = action myNode;
        myNode.msg := "";
        myNode.result := Right ();
        if empty? myNode.children then
            withParents myNode do n: n.rights := n.rights + 1 done;
        fi;
    catch AssertionFailedError ex:
        withParents myNode do n: n.fails := n.fails + 1 done;
        myNode.msg := "fail: \(ex#getMessage())";
        myNode.result := Fail ex;
    catch java.lang.Exception ex:
        withParents myNode do n: n.exceptions := n.exceptions + 1 done;
        myNode.msg := "exception: \(ex)";
        myNode.result := Exception ex;
    finally
        //reset the parent node
        _parentNodeTL.value := oldTLV;
    yrt;
    myNode;
);

specificationsFor name action maybeParentNode =
    _specs maybeParentNode "specifications for: \(name)" action;

_subSpec specName name action = (
    _ = _specs none (specName ^ name) action;
);

specification name action = _subSpec "specification: " name action;
describe name action = _subSpec "describe: " name action;
should name action = _subSpec "should: " name action;

The central function here is _specs. It takes an optional parent node (a node is a structure which describes, as part of a tree, the result of evaluating a subSpec), a description and a function which it executes, and it returns a result node. If no parent node is given, it checks the ThreadLocal _parentNodeTL for one. Then it creates a new node for its own result and adds it to the parent node (if there is one). And then it binds its new node as the parent node in the ThreadLocal.

After that it just executes the given function, catches any AssertionFailedError or other exception, and records the result of the failed or successful run in the node (and its parents).

The specificationsFor, specification, describe and should functions all just delegate to _specs. specificationsFor returns the node, which can then be printed, analyzed, used for the JUnit integration etc.
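To make the mechanism concrete, here is a minimal, self-contained Java sketch of the same idea (hypothetical names; this is not part of yeb, and it only counts successes and assertion failures): each spec call creates a node, attaches it to the thread-local current parent, runs its body, propagates the counts up the parent chain, and restores the previous parent.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of yebspec's core mechanism: a thread-local parent node plus
// a tree of result nodes (hypothetical names, not part of yeb).
public class SpecTree {

    public static class Node {
        public final String name;
        public final Node parent;
        public final List<Node> children = new ArrayList<>();
        public int rights = 0, fails = 0;

        Node(String name, Node parent) {
            this.name = name;
            this.parent = parent;
        }
    }

    // like _parentNodeTL: the current parent node of the running spec
    private static final ThreadLocal<Node> current = new ThreadLocal<>();

    // like _specs: create a node, run the body, record the result
    public static Node spec(String name, Runnable body) {
        Node parent = current.get();
        Node node = new Node(name, parent);
        if (parent != null) parent.children.add(node);
        current.set(node); // nested spec calls will attach to this node
        try {
            body.run();
            // only leaf specs count as a success, as in yebspec
            if (node.children.isEmpty())
                for (Node n = node; n != null; n = n.parent) n.rights++;
        } catch (AssertionError e) {
            for (Node n = node; n != null; n = n.parent) n.fails++;
        } finally {
            current.set(parent); // reset the parent node
        }
        return node;
    }

    public static void main(String[] args) {
        Node root = spec("root", () -> {
            spec("passes", () -> {});
            spec("fails", () -> { throw new AssertionError("boom"); });
        });
        // root aggregates its children's results
        System.out.println(root.rights + " " + root.fails); // prints "1 1"
    }
}
```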


Well, that’s about it. I think the most important point here is probably that you can see how easy it is to write useful code in yeti.

I hope you like it, and even more I hope you comment on it. It is just more rewarding if there are comments – positive ones, but of course also critiques.

Posted in: Yeti