This is part one of a two-part series on the value of creating mail tests that produce measurable, and telling, results for your catalog. This week, I tell the story of one cataloger and how its testing produced inexact results.
Catalogers these days are racking their brains for ways to cut costs. Many have turned to changing their books. But as I illustrate below, making universal changes to your catalog can have mixed results. Some of the earliest direct marketers called their work “scientific advertising.”
Catalogers separate themselves from brand marketers by measuring their results and learning from their successes and failures. That’s why catalogers must consistently stay true to the principles of scientific advertising and test everything. I know you’re desperate to reduce costs. But I implore you to test before making any universal changes to your book. To illustrate my point, let me share with you a story — more like a cautionary tale — of a company that shunned the notion of a test before rolling out a major change to its catalog’s paper.
A client of mine decided that upgrading the paper it used in its catalog would result in increased sales — just a gut instinct. I urged them to test first, and to run some profit and loss scenarios, to determine how the additional costs would affect their catalog’s break-even point.
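To make the break-even idea concrete, here is a minimal sketch of the kind of pro forma arithmetic involved. All of the figures below — cost per book, order value, margin, and the size of the paper upcharge — are illustrative assumptions, not the client's actual numbers.

```python
# Hypothetical pro forma break-even sketch. Every dollar figure here is
# an illustrative assumption, not the client's real economics.

def breakeven_response_rate(cost_per_book, avg_order_value, margin_pct):
    """Response rate needed for a mailing to cover its in-the-mail cost.

    Each responder contributes avg_order_value * margin_pct toward the
    mailing's cost, so break-even is reached when the contribution per
    book mailed equals the cost per book.
    """
    contribution_per_order = avg_order_value * margin_pct
    return cost_per_book / contribution_per_order

# Assume regular paper costs $0.60 per book in the mail, and the
# upgraded stock adds $0.08 per book in paper and postage.
base = breakeven_response_rate(0.60, avg_order_value=85.0, margin_pct=0.40)
upgraded = breakeven_response_rate(0.68, avg_order_value=85.0, margin_pct=0.40)

print(f"break-even response, regular paper:  {base:.2%}")      # 1.76%
print(f"break-even response, upgraded paper: {upgraded:.2%}")  # 2.00%
```

Under these assumed numbers, the heavier paper raises the response rate needed just to break even — which is exactly the question the client was being asked to answer before rolling out the change.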
They scoffed at the notion of running a pro forma break-even analysis to determine how much revenue they needed to offset the additional paper and postal costs. In fact, it took their paper merchant, printer and service bureau reps, along with me, to convince them to set up a test before changing their paper weight. We set up a straightforward scientific A/B split test. We took half their customers and prospects and sent them a catalog printed on their regular paper. The other half were sent the book with the upgraded, more costly paper.
To keep the test scientific, the service bureau chose every other name from each list segment. In scientific terms, the A portion of names represented the “control” group and were mailed the “before” catalog; the “test” B group was mailed the catalog with the upgraded paper stock. The goal was for the test group to outperform the control group.
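The every-other-name selection the service bureau used can be sketched in a few lines. The segment names below are made up for illustration; the point is the alternating selection, not the data.

```python
# Minimal sketch of an every-other-name (nth-select) A/B split, of the
# kind a service bureau performs within each list segment.

def ab_split(names):
    """Alternate names between control (A) and test (B) within a segment.

    Taking every other name keeps both halves statistically similar:
    list files are often sorted (by ZIP, recency, and so on), so a
    simple top-half/bottom-half split would bias one group.
    """
    control = names[0::2]  # A: mailed the regular-paper "before" catalog
    test = names[1::2]     # B: mailed the upgraded-paper catalog
    return control, test

segment = ["Avery", "Blake", "Casey", "Drew", "Emerson", "Finley"]
a, b = ab_split(segment)
print(a)  # ['Avery', 'Casey', 'Emerson']
print(b)  # ['Blake', 'Drew', 'Finley']
```

Because the split is repeated within each list segment, results can later be compared segment by segment — which is what made the housefile-versus-prospect difference in this story visible at all.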
When we analyzed the results of the test, the client and I came to two different conclusions. They concluded the test was a success. After all, there was a marginal increase in response rates driving in a few new customers in some, but not all, of the list segments.
On the other hand, I saw something entirely different.
Yes, results were up slightly in some prospecting segments, but response rates were down in the housefile, especially the single-buyer segments. This meant the catalog was converting fewer new customers (who were just trying the products) into multibuyers — a key metric and clear indicator of future success.
Just as important, the incremental cost of the higher grade of paper was not covered by the slight bump in sales. Prospects were costing more to acquire, and the mailer was losing slightly more money up front — money that would have to be made up in future mailings and orders. Customers were accounting for less profit on a per-order basis.
So who was right?
The client at that point was willing to lose a bit more up front to increase sales and put out a sharper-looking product. Eventually, however, the increased costs caught up with it. The business suffered, and the paper grade had to be scaled back.
Next week, in the final part of this two-part series, I’ll continue this week’s story by providing some key pointers you can take away from this example, as well as some tips to help your company run successful, objective tests.