Ben Nadel
On User Experience (UX) Design, JavaScript, ColdFusion, Node.js, Life, and Love.
Ben Nadel at NCDevCon 2011 (Raleigh, NC) with: Andrew Duvall

Tiny Test - An Exploration Of Unit Testing In ColdFusion

By Ben Nadel on
Tags: ColdFusion

I am not good at unit testing my code. I've played around a little bit with MXUnit (for ColdFusion) and Jasmine (for JavaScript); but, I've not really committed to using unit testing within my professional workflow. I know this is bad; and, I know it needs to be fixed. So, I decided to sit down and really try to get comfortable with unit testing from the inside out. The best way to do that: write code. In doing so, I ended up creating a little project called "Tiny Test." Tiny Test is a bare-bones unit testing framework for ColdFusion designed to "just work" with my particular style of programming. It requires zero configuration, consists of just a few files, and works perfectly with ALT-TAB-Refresh style programming.



View the Tiny Test project on GitHub.

The Tiny Test project is designed to be dropped, "as is," into your tests directory. It has its own Application.cfc (you extend this file implicitly) which overrides the index file request and outputs the test runner. The test runner is optimized for both clickability and window-focus-based running. It scans your "specs" directory and lists any test case files that end in "Test.cfc". You can choose to run any or all of these test cases at one time.
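As a rough sketch of that spec-scanning behavior (this is not the actual Tiny Test code - the directory path and variable names here are my own), listing a specs folder for test case files might look like:

```cfml
<!---
	Hypothetical sketch of spec discovery - not the actual Tiny Test
	internals. List the specs directory, keeping only files whose
	names end in "Test.cfc".
--->
<cfset specsDirectory = expandPath( "./specs/" ) />

<cfdirectory
	action="list"
	directory="#specsDirectory#"
	filter="*Test.cfc"
	name="specFiles"
	/>

<!--- Each matching file represents a runnable test case. --->
<cfoutput query="specFiles">
	#specFiles.name#<br />
</cfoutput>
```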

Out of the box, I only wanted to provide the most bare bones assertion methods:

  • assert( truthy )
  • assertTrue( truthy )
  • assertFalse( falsey )
  • assertEquals( simpleValue, simpleValue )
  • assertNotEquals( simpleValue, simpleValue )

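By way of illustration, here is a hypothetical test case that exercises those methods (the component name, method name, and assertions are mine, not from the project):

```cfml
<!--- specs/StringTest.cfc - a hypothetical example test case. --->
component
	extends = "TestCase"
	output = false
	hint = "I demonstrate the core Tiny Test assertions."
	{

	function testUpperCasing() {

		// Each assertion method takes the simple values described above.
		assert( len( "hello" ) );
		assertTrue( ucase( "hello" ) == "HELLO" );
		assertFalse( ucase( "hello" ) == "hello" );
		assertEquals( ucase( "hello" ), "HELLO" );
		assertNotEquals( ucase( "hello" ), "hello" );

	}

}
```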
But, if you want to add your own assertion methods, you can easily do so by augmenting the "TestCase.cfc" component that ships in the "specs" directory. All of your test cases should extend this base TestCase.cfc, which in turn, extends the core Tiny Test TestCase.cfc.
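For example, a custom assertion could be added to the specs/TestCase.cfc by building on the core assert() method (the method name here is just an illustration, not part of the framework):

```cfml
<!---
	Inside specs/TestCase.cfc - a hypothetical custom assertion
	built on top of the core assert() method.
--->
function assertIsNumeric( required any value ) {

	// Delegate to the core assertion so that failures are reported
	// through the normal Tiny Test mechanism.
	assert( isNumeric( value ) );

}
```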

To run your test cases using your mouse, you simply click anywhere within the "status" portion of the page (ie. the huge colored area at the top). If you want to be really fast, however, you can enable the auto-run feature (ie. the checkbox in the bottom-left corner of the page). When this feature is enabled, the selected tests will be re-run automatically whenever the test runner window receives focus. This means that when you update your code, you simply need to ALT-TAB back to your test runner and your tests will be re-run instantaneously - no clicking required!

Tiny Test is very minimal and was primarily a means through which I could become more comfortable with unit testing. It is definitely not intended to compete with robust unit testing frameworks like MXUnit. I just felt the need to create something that was perfectly aligned with my own programming workflow.

Reader Comments

I'm in the same boat, have never adopted it or particularly seen the value in it. I generally just opt to not write code that would ever error.

But I know that that's "wrong", excited to play with this. Thanks!


Awesome - if you have any feedback / suggestions, I'm all ears. I was trying to make it as easy as possible to use.


I definitely like the idea of a single assert() method. That was one of the first things I wanted to try; as I've played with other testing frameworks, the diversity of asserts always felt like it was diluting my intent, which was simply to say that such-and-such expression was either True or False.

I'll look more closely; and again, I am very new to unit testing.

I noticed that if two assertions fail the interface only reports the first. Is that by design? Just for simplicity?

Thank you for making the report interface attractive. Also, loving the ALT-TAB feature btw, though it took me a while to find on the page (I remembered it from when I first read this post a while ago).


I suppose I look at "ALT-Tab" more as a mind-set and life philosophy than as an actual set of key-commands :P

When the tests run, the test runner actually stops on the first failed assertion. So, even if multiple tests *would* have failed, only the first failing one actually gets executed and reported.

I thought about continuing the testing even after a failed test; but, I couldn't think of a persuasive reason to do so. It seems that if a test fails, I need to fix it - I am not sure there is extra value in seeing multiple tests fail. If anything, I think it's easier on the mind to see one at a time :)

In a team situation seeing multiple tests fail allows the team to see multiple pieces of code fail when used together...

For a single dev, this would be less important. My 2c

Getting back into CF, I'm going to check this and rocketunit out again...