
Unit Testing Generated HTML

by dws (Chancellor)
on Jul 02, 2002 at 07:43 UTC ( #178785=perlmeditation )

Here are a pair of techniques I helped devise a while back when unit testing UIs for web applications. If they work for you, you're welcome to them. If you have a better way, please post. I can always use more testing tricks.

The Problem

So you've got the Unit Test religion, and you're building up a set of unit tests as you develop a web-based application. Good for you. And you're using templates to separate as much content as you can from application logic. Good for you again. Now how do you write unit tests that you won't have to touch every time someone changes a template? And how do you test in the presence of dynamic data (like file modification times) that you can't easily dummy up?

How We Approached the Problem

When we wrote unit tests to cover the "upper" portion of our application, we captured generated HTML and compared it against known-good reference copies. But easy-to-change HTML in templates meant that the HTML got changed more often, breaking unit tests until we could update the known-good reference copy. We were spending too much time updating our unit tests in reaction to non-problems. Then someone had the bright idea (or maybe they read it somewhere) of keeping a separate set of "testing" templates, with content stripped to a bare minimum, where "bare" meant no Javascript, DHTML, whatever. After all, these unit tests weren't trying to simulate a browser; they were merely testing that the stuff we expected to get expanded into the templates did, in fact, get expanded.
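To make the idea concrete, here's a sketch of a production template next to its stripped-down testing twin. (I'm assuming Template Toolkit-style [% %] directives here; the post doesn't say which templating system we used, so treat the syntax as illustrative.)

```html
<!-- production template: layout, CSS, Javascript, the works -->
<html><head><title>[% page.title %]</title>
<script src="/js/menu.js"></script></head>
<body class="storefront">
  <h1>[% page.title %]</h1>
  <p class="desc">[% product.longdesc %]</p>
</body></html>

<!-- testing template: the same interpolation points, nothing else -->
title: [% page.title %]
desc: [% product.longdesc %]
```

Both templates get fed the same data structures, so a test that verifies the testing template's output is verifying the same expansion the production template relies on.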

This worked great when the template expansion could be forced to be "statically dynamic". That is, when the "underlying" dynamic data could be supplied statically by the unit tests. But we kept having stuff like file modification timestamps that couldn't easily be faked without causing some intrusive changes to the code. (We were trying to avoid having special debugging logic interwoven through the codebase.) What to do?
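Here's a minimal sketch of the "statically dynamic" idea. Everything in it is invented for illustration (render_page is a trivial stand-in for whatever template expansion the real application did); the point is just that the test supplies the "dynamic" data as fixed values, so the output can be diffed deterministically:

```perl
use strict;
use warnings;

# Hypothetical stand-in for the app's template expansion. The real
# application would use its own templating system; this just
# interpolates [% name %] directives from a hash.
sub render_page {
    my ($template, $data) = @_;
    $template =~ s/\[%\s*(\w+)\s*%\]/$data->{$1}/g;
    return $template;
}

# The unit test hands in the "dynamic" data statically ...
my $got = render_page("modified: [% mtime %]\n",
                      { mtime => "2002-07-01 12:00:00" });

# ... so the comparison against a known-good reference is exact.
my $expected = "modified: 2002-07-01 12:00:00\n";
print $got eq $expected ? "ok\n" : "not ok\n";
```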

Our first approach was to write some custom post-diff logic that would figure out which differences were safe to ignore. This was an ugly hack. In some cases it meant an extra page of Perl per unit test, just to massage diff output before deciding whether a test had really failed, or whether it passed once some set of magic differences was ignored. Yuck.

Insight number two was to insert pairs of special HTML comment markers into our testing templates to denote where dynamic stuff would appear when a given template was expanded. Then, before diff'ing the results against the known-good reference output, we would wipe out the special comment tag pairs and everything in between each pair. The diff was then straightforward and authoritative. All of the kludgey "what do we ignore in the diff output for this test" logic went away. Very cool.
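A minimal sketch of the marker-scrubbing step. The DYN marker name and the strip_dynamic function are made up for illustration; the post doesn't give the actual markers we used:

```perl
use strict;
use warnings;

# Remove everything between paired <!-- DYN --> ... <!-- /DYN -->
# comments, markers included, so the rest of the HTML can be diffed
# verbatim against the known-good reference copy.
sub strip_dynamic {
    my ($html) = @_;
    $html =~ s{<!--\s*DYN\s*-->.*?<!--\s*/DYN\s*-->}{}gs;
    return $html;
}

my $page = qq{<p>Last modified: <!-- DYN -->Tue Jul  2 07:43:12 2002<!-- /DYN --></p>\n};
print strip_dynamic($page);   # <p>Last modified: </p>
```

The /s modifier lets a dynamic region span multiple lines, and the non-greedy .*? keeps one pair of markers from swallowing the next.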

Replies are listed 'Best First'.
Re: Unit Testing Generated HTML
by Aristotle (Chancellor) on Jul 02, 2002 at 10:52 UTC


    Especially in the beginning of a project it's not unlikely that the data structures getting pulled into the production templates will change from time to time, if only in minor ways. If that happens frequently, one might further improve efficiency by building a tool that parses the production templates and spits out debug templates with raw interpolation directives. (Possibly with HTML-escaped surrounding tags if they just provide the content for a tag's attribute, e.g. for things like the following.)

    <img width="[% product.pic.x %]" height="[% product.pic.y %]" src="[% product.pic.url %]" alt="[% product.shortdesc %]">

    ____________
    Makeshifts last the longest.
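One way to sketch such a generator, assuming Template Toolkit-style [% %] directives as in the example above (debug_template is an invented name, not a real tool): pull every interpolation directive out of the production template and emit them bare, one per line, as the debug template.

```perl
use strict;
use warnings;

# Hypothetical generator: extract each [% ... %] directive from a
# production template and emit a bare-bones debug template containing
# only the interpolation points.
sub debug_template {
    my ($tmpl) = @_;
    my @directives = $tmpl =~ /\[%\s*(.+?)\s*%\]/g;
    return join "", map { "[% $_ %]\n" } @directives;
}

my $production = '<img width="[% product.pic.x %]" height="[% product.pic.y %]"'
               . ' src="[% product.pic.url %]" alt="[% product.shortdesc %]">';
print debug_template($production);
```

Regenerating the debug templates from the production ones whenever the data structures change keeps the two sets from drifting apart, which was the maintenance problem in the first place.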

Node Type: perlmeditation [id://178785]
Approved by particle
Front-paged by FoxtrotUniform