Had the same newsletter template or design for 12 months or more? It's a good time for a facelift. Here are a few reasons why.
- It’s a good opportunity to do some in-depth analysis on what is working and what is not. If a section of your newsletter is not performing, yank it or change it up in the template. If anything, your readers will appreciate a fresh look.
- From a best-practices standpoint, it goes without saying that you should always be adapting your emails to contend with image-rendering issues and the like.
- From a design stand point, try and keep it simple but aesthetically pleasing, and don’t be afraid of white space. It will clearly define your content.
Once you have your new template, it's time to optimize your newsletter. I think all e-marketers struggle with how much is too much, or too little. Each month, I suggest selecting a section of the newsletter and running an A/B test. For example, if you have a "top 10 tips" section:
1) Version A- Feature 1 tip with a CTA to all 10 tips
2) Version B- Feature 3 tips with a CTA to all 10 tips
This is a great exercise. Depending on the content, I've seen anything from a 20% lift in click-through to a 20% decline.
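As a sketch of how the lift from such a split might be scored, here's a minimal example; the click and delivery counts are hypothetical placeholders, not real campaign numbers:

```python
# Score the A/B split described above: compare click-through rates
# and express version B's performance as a lift over version A.
# All counts below are made-up placeholders.

def click_through_rate(clicks: int, delivered: int) -> float:
    """Unique clicks divided by delivered emails."""
    return clicks / delivered

def relative_lift(rate_b: float, rate_a: float) -> float:
    """How much better (or worse) B did than A, as a fraction."""
    return (rate_b - rate_a) / rate_a

# Version A: 1 featured tip; Version B: 3 featured tips
ctr_a = click_through_rate(clicks=240, delivered=10_000)   # 2.4%
ctr_b = click_through_rate(clicks=288, delivered=10_000)   # 2.88%

print(f"lift of B over A: {relative_lift(ctr_b, ctr_a):+.0%}")  # +20%
```

Running the same calculation per segment (rather than over the whole list) will tell you whether a lift is concentrated in one audience or broadly shared.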
Change is good – here is the header image layout for Gallery Exposure, that was tested well over 4 years:
The use of web analytics to target email campaigns improves revenue by nine times more than does the use of broadcast mailings. Despite additional campaign costs, relevant campaigns increase net profits by an average of 18 times more than do broadcast mailings. (Source: JupiterResearch, Email Marketing: An Hour a Day, by Jeannie Mullen and David Daniels)
Most of us know that relevant, personal emails vastly increase the success of an email campaign. In my experience I’ve seen anywhere from 10% to 70% higher metrics, when the campaign has been segmented and targeted against additional data.
For those using a hosted solution, you can also get your ESP to add data points to the system. Most of the ones I've talked to- MailChimp, Yesmail, and Responsys, for example- have been helpful and interested in building out client datasets.
What do these additional data points look like? And by the way, they're all within your current data systems (I don't advocate appending third-party data):
Live purchase information. A simple set of daily key metrics will give you a huge boost, and you can test and rebuild the feed to add more detail
- first purchase
- last (most recent) purchase
- lifetime purchase value
- products purchased- detailed, or a simple category
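The first three purchase metrics above can come out of a single aggregate query against your order history. Here's a minimal sketch using SQLite; the `orders` table and its column names are assumptions for illustration:

```python
import sqlite3

# Hypothetical orders table; schema and sample rows are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, '2008-01-15', 40.0),
        (1, '2009-03-02', 25.0),
        (2, '2009-02-10', 99.0);
""")

# One row per customer: first purchase, most recent purchase, lifetime value.
rows = conn.execute("""
    SELECT customer_id,
           MIN(order_date) AS first_purchase,
           MAX(order_date) AS last_purchase,
           SUM(amount)     AS lifetime_value
    FROM orders
    GROUP BY customer_id
    ORDER BY customer_id
""").fetchall()

for row in rows:
    print(row)
# (1, '2008-01-15', '2009-03-02', 65.0)
# (2, '2009-02-10', '2009-02-10', 99.0)
```

A daily job that materializes this result and pushes it to the ESP is usually enough to start segmenting on; product-level detail can be layered in later.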
Live browsing information. Who clicked on what, when, and keep this data fresh. If this data is too large to bring in, specify product areas, specific types of customers (prospects, existing) and work with these segments incrementally.
Unique industry information. Any kind of information on your site that is specific and unique to your company.
Email marketing feedback and response data. Opens, clicks, bounces and unsubscribes, by campaign (and segmented target).
- Print and catalog
- Ad banner clicks
- Affiliate activity (hosted on other sites)
- Face to face event data
Social site data
- Blog responses
- Twitter accounts
- Facebook accounts
The work involved in adding the data can quickly pay for itself. It does require some database-developer time to find, implement, and automate the feeds, but the payoff is open-ended revenue streams. The next post will cover the tactical, technical details of implementing additional data.
So the general idea is that you have a customer database, and you feel it’s inadequate so you go to one of these companies and send them info, and they match it and send it back to you. Sometimes you get these cute scores and nicknames based on zip code, like mine, “Bohemian Mix.” I’m supposed to have an Audi, read the New Yorker, etc. None of that is true.
Managers who decide to buy data appends are crying out in desperation for insights into certain hidden segments of their database. What they really need is an analytical tool to help them dive into their own data. Acquired data is not going to tell them anything useful. Instead, they have a lot of information, they just don’t have the tools to get to it.
There are three main reasons why I avoid them at all costs. First, they require you to submit data, which they then add to their databases and sell the findings to other customers (anonymously, but still). More importantly, there is no visibility into the logic they use to build their models and segmentation, nor into their sources. Finally, it's largely done without consumer permission.
But companies have permission from their customers- from interactions. If the companies just take the time and effort to dive into their own data, they can find out trends and behaviors far more significant and relevant than those provided by these vendors.
Try it at home, as I've done on numerous occasions to prove the uselessness of these farms: randomly split a list into a control and two test groups. In group A, segment further using your own behavioral thresholds; in group B, apply the relevant data-farm segments (i.e., "Bohemian Mix," "Digerati," etc.). Give the control group a randomly pruned list of the same size. My experience, doing this on various campaigns over a year, was that the data-farm segments provided less lift, or none at all, compared with internally derived findings based on purchase history, interactions, and past campaign behavior, to name a few.
I believe the savvy email marketer has to recognize (and avoid) their own bias, rely on previous findings from tests and analysis, and continue to test and model to create an ongoing, maturing view of the customer. Quick fixes like data appends simply muddy the waters, and worse, substantiate assumptions about the customers without data, testing, and findings behind it. Their sources are too diffuse, and the wins aren’t big enough.
Another issue for me is that to use the appends you must provide them with data- this in turn creates the very product they are selling back to you. Repackaging data and forwarding it on is unethical in my view, a disservice to the consumer, and costly.
(This is a series: See Believing the Numbers)
We have many site-based transactional emails, and we have little visibility into the metrics on those emails. Do you know any vendors that can sweep in and determine metrics, or set up metrics on an ongoing production system, so that we can access them, tweak the marketing messaging, or find other issues? At least have an idea of whether they are being opened or not?
Blinded by Ops
Strongmail is basically built for internal mail systems like this, but you probably don’t have it. Keep it in mind, though, if you do want a more robust metrics & control system on those internal messages. Deliverability usually refers to seeding your output with specific emails that are then tracked and tested through a third party software. To analyze real customer emails going out in a custom application, that’s a different story.
Essentially, it's a log-parsing problem, as all mail systems emit logs recording each email sent (and any associated failures). There are log parsers out there. Getting a one-time parsing of the emails shouldn't be costly or an issue if you have internal resources you can point at the problem. Engaging Habeas for professional services may work- or any other deliverability expert- as they probably do this as part of discovery on any project.
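As a rough illustration of the log-parsing approach: a few lines of script can tally delivery outcomes from MTA logs. The sample lines and the regex below assume a sendmail-style `stat=` field; adapt both to whatever format your mail system actually writes.

```python
import re
from collections import Counter

# Assumed sendmail-style delivery lines with a stat= field.
# Both the regex and the sample lines are illustrative; real logs will differ.
STAT_RE = re.compile(r"to=<(?P<rcpt>[^>]+)>.*stat=(?P<stat>\w+)")

sample_log = [
    "Mar  1 10:00:01 mx sendmail[101]: q1: to=<a@example.com>, stat=Sent (ok)",
    "Mar  1 10:00:02 mx sendmail[102]: q2: to=<b@example.com>, stat=Deferred (timeout)",
    "Mar  1 10:00:03 mx sendmail[103]: q3: to=<c@nowhere.invalid>, stat=Bounced (550 user unknown)",
]

counts = Counter()
for line in sample_log:
    m = STAT_RE.search(line)
    if m:
        counts[m.group("stat")] += 1

print(counts)  # one Sent, one Deferred, one Bounced
```

A one-time pass like this over a month of logs will at least tell you how many transactional messages went out and how many bounced, before investing in anything fancier.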
Getting an ongoing view of internally sent email, though, is a bit thornier. You need to setup trigger points in the application that populate a database (ideally) with various records: sent, opened, clicked, and has segment and cell names associated so you can do some live tracking of the effects of transactional systems. Essentially you want to design an API to another email service, and have the developers access that class or function instead of outputting to sendmail/qmail, or whatever internal mail system they are using.
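The trigger-point idea above can be sketched as a thin wrapper that developers call instead of shelling out to sendmail. The table layout, function names, and segment labels below are my own assumptions for illustration, not a prescribed schema:

```python
import sqlite3
import smtplib
from datetime import datetime, timezone
from email.message import EmailMessage

# Every transactional send is recorded in a database before being handed to
# the mail system, so reporting never depends on the MTA. Schema is illustrative.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE IF NOT EXISTS email_events
              (sent_at TEXT, recipient TEXT, campaign TEXT, segment TEXT, event TEXT)""")

def record_event(recipient: str, campaign: str, segment: str,
                 event: str = "sent") -> None:
    """Write one tracking row; 'opened' and 'clicked' rows can be added later."""
    db.execute("INSERT INTO email_events VALUES (?, ?, ?, ?, ?)",
               (datetime.now(timezone.utc).isoformat(), recipient,
                campaign, segment, event))
    db.commit()

def send_tracked(recipient: str, subject: str, body: str,
                 campaign: str, segment: str) -> None:
    """The API developers call instead of outputting to sendmail/qmail directly."""
    msg = EmailMessage()
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)

    record_event(recipient, campaign, segment, "sent")

    # Hand off to the local mail system (assumes an SMTP listener on localhost).
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
```

Once opens and clicks are written to the same table (via a tracking pixel and redirect links), "live tracking of the effects of transactional systems" is just a query away.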
If you have experience parsing mail logs with a favorite tool, or find yourself in a similar situation, please comment!
I hear from a lot of people that they know technically the best way to approach email marketing, but getting the organization around the idea is the hard part. This series will address various issues in the workplace, surrounding email marketing best practices.
My boss doesn’t understand, or believe the numbers. I show him how segmentation works, how we get 30% more profit by targeting our base and providing relevant content. I’ve done A/B splits, and we recognize profit, but he still thinks that’s “too much work” and the gains we get without segmentation are enough. How do I get him to see the light?
Unhappy in Non Profit
Traditional marketers will always have a distaste for email marketing, because statistics and response numbers rule the roost, and gut marketing is out the window. Sadly, direct marketing and direct mail have been doing segmentation for a long time- especially in non-profit and subscription businesses.
Here are a few techniques in dealing with management resistant to the magic of quick response reports from email:
- Show, over time, the behavior of various segmentations. Seeing the numbers correspond to his understanding of how various campaigns did or did not perform will help him see the validity of the numbers. I’ve seen this with other clients- if they can map poor response of one segment to a known factor that they understood, it’s more likely they’ll trust the validity of metrics on phenomenon they don’t understand.
- Start small: pick a small campaign and segment, and work on the results, show it around to people in the organization including him, and then branch out into bigger and bigger campaigns. The groundswell of support for the initiatives will outweigh his odd beliefs. Especially if he’s talking cost and hours in comparison to profit, small “proof of concept” (just proving it to him, of course!) may help swallow the pill of larger scale segmentation strategies.
- Money talks. Keep beating the drum that this brings in more money. Talk to people other than him about how this is the way to go, for a sheer revenue standpoint. In budget meetings, when people want to implement bells & whistles, mention that “if we had done segmentation, we’d have this money on the table.”
- I believe many marketers have a hard time listening to their customer base. They get used to understanding the customer as one thing, and when the customers change, the marketers have a hard time changing with them. It’s also about ego. Many managers have a lot riding on the line of a former understanding of the customer. Surveys can help “speak for the customer,” customer testimonials, or how the competition is addressing this shift in the perception of the customer. Agree on a specific metric, and map it back a few years. Show how the agreed-on metric is changing, and how that impacts response to fundraising and subscriptions (as I assume you’re doing as you’re a non-profit).
- Do a simple data audit of the workflow to confirm that all systems are OK. Assuage his fears, basically, and discount them methodically.
In the spirit of spring- with the wacky winds and brisk air, and moments of clear sunshine, we tend to want to roll up our sleeves and fix things that have been annoying us for a long time. In email marketing, I’ve noticed that these projects have started to get underway:
Database & List Cleanup
- Shrink your database. Identify areas that are no longer in use and retire them.
- Evaluate your data: do you really need it? Or is it just a one-off, rarely used item that could make way for more important, useful data? There's a trend toward targeting customers behaviorally instead of demographically- is your system set up for that?
- Look through your inbox and find requests for data that you didn't have; make a list and start working through it.
- Check the data flow and see if all of the data points are working as planned: a simple audit of key touchpoints like unsubscribes, new data, mailing-list uploads, etc.
- With some simple analysis, find out when people truly become inactive in your system, and segment your campaigns accordingly.
- Do a quick review of your email list by domain, region, and email client (whatever availability data is out there), and see if your design guidelines match up. Basically: if you've decided not to support Gmail, see if it's a significant portion of your list.
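The domain review in the last point is a quick tally. A minimal sketch, with placeholder addresses standing in for a real subscriber list:

```python
from collections import Counter

# Tally the list by domain and surface the share of any domain you've
# chosen not to design for. Addresses below are placeholders.
subscribers = [
    "a@gmail.com", "b@gmail.com", "c@yahoo.com",
    "d@hotmail.com", "e@aol.com", "f@gmail.com",
]

by_domain = Counter(addr.split("@")[1].lower() for addr in subscribers)
total = sum(by_domain.values())

for domain, n in by_domain.most_common():
    print(f"{domain}: {n} ({n / total:.0%})")

# If Gmail is unsupported in your design but holds a large share, reconsider.
gmail_share = by_domain["gmail.com"] / total
```

On this toy list Gmail is half the file, which would make "we don't support Gmail" a very expensive design decision.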
Marketing Program Clean-Up
- Are there campaigns still running that really aren’t earning their keep? If you can’t justify it with revenue or customer relationship gains, time to move on. Great opportunity to experiment with new subject lines, copy, or targeting.
- Are you caught in blast-land, or managing a nice lifecycle program? Time to re-evaluate the efficiency of your programs and make your life (and your customers') more pleasant.
- Offers that are no-wins. We may have thought an offer was a good idea at one time, but due to competition or consumer trends, nobody uses it anymore. Refresh your offer list with invigorating new offers- and be realistic about pet projects.
- There’s good complexity and bad complexity. Are your reports really singing the truth or just dragging you down into the muck? Re-focus your internal metrics and get them to speak to your goals.
- Check out old reports, and create time-lapse reporting on specific campaigns, consumer behavior or email metrics. These combinations of timely reports can give you insight you didn’t have.
- Distribute reports. I hate being the bearer of bad news, but if we’re all on the same page, it makes it easier to get the car out of the ditch. I’m loving these business metaphors.
A snippet from an article I wrote on MarketingProfs:
You may or may not be using the basic segmentation strategy of RFM (recency, frequency, monetary)—that is, dividing your mailing list into a few buckets based on recency in ordering or visitation to the site, the number of times they’ve ordered or visited the site, and the lifetime spend.
My issue with RFM models is that I'd rather see each activity threshold made explicit, and tweak it on an ongoing basis. That's the joy of email marketing: it's all so available and adjustable, in real time.
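To make the point concrete, here's a minimal RFM scoring sketch where every cutoff lives in one place so it can be tweaked campaign by campaign. The thresholds and the sample customer are illustrative assumptions, not recommendations:

```python
from datetime import date

# Explicit, tweakable thresholds: adjust these per campaign, not buried logic.
RECENCY_DAYS = [30, 90, 365]          # <=30d -> 3, <=90d -> 2, <=365d -> 1, else 0
FREQUENCY_ORDERS = [10, 4, 1]         # >=10 -> 3, >=4 -> 2, >=1 -> 1, else 0
MONETARY_SPEND = [500.0, 100.0, 1.0]  # lifetime spend tiers, same pattern

def recency_score(last_order: date, today: date) -> int:
    days = (today - last_order).days
    for score, limit in zip((3, 2, 1), RECENCY_DAYS):
        if days <= limit:
            return score
    return 0

def tiered_score(value: float, tiers) -> int:
    """Score 3/2/1/0 against descending floor values."""
    for score, floor in zip((3, 2, 1), tiers):
        if value >= floor:
            return score
    return 0

today = date(2009, 6, 1)
r = recency_score(date(2009, 5, 15), today)   # 17 days ago -> 3
f = tiered_score(5, FREQUENCY_ORDERS)         # 5 orders -> 2
m = tiered_score(220.0, MONETARY_SPEND)       # $220 lifetime -> 2
print(f"RFM = {r}{f}{m}")  # RFM = 322
```

Because the thresholds are just three lists, nudging a recency bucket from 90 to 60 days and re-scoring the whole file is a one-line change.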
Read the full article on MarketingProfs.
Today I was midway through a sluggish afternoon meeting when something was projected on the wall that made me jerk awake and wonder: why are my delivereds so low? Why are bounces so high? What did we do right? What did we do wrong?
In one way email marketing is a godsend: it's hard, cold data, which we can measure and count to our hearts' delight. Yet in another way, it's also too much data. It's got lots of ins and outs and variances.
Take, for instance, delivered (or is it sent?). And bounce (or soft, unless you use hard…). And then opened (well, with images off, that number is… interesting). Then clicks: pretty clear, if counted distinctly, as in one person's click per day. Then clicks-to-open (CTO), or maybe you prefer clicks-from-delivered (CTR). You get my idea. But the funny thing is that these are the easy metrics.
- If there are high opens but low clicks, did the subject line oversell?
- If there are low opens but high clicks, did the subject line undersell?
- If the click through is low, really any content- design, offer, expectation- could be the cause
- Are the leads qualified? Are there known or unknown variances between cells and segmentation? The undiscovered segmentations that magically hit on the offer.
- Timing and contact frequency. Did you have suppression rules on the campaign, de-prioritizing it in favor of some other campaign? Did that starve your list?
- More timing- is this the third email in three days? Could you have exhausted your interested readers?
- Have webmail sites deployed some new foil? Yahoo, for example, recently hid the "view images" button from me (need to research), which I expect will show up as lowered opens for newer Yahoo subscribers.
I end up routinely asking these questions of metrics, when something on the projected wall of a meeting is odd or unusual:
- Don't just show the percentages; show the real numbers. We know volume skews ratios, so show the volume delivered, and then the CTO or open rate.
- Apples to apples. Be aware of your cell and campaign segmentation before comparing the metrics. Campaigns that target more active, engaged targets will have far better metrics than less engaged, larger “blast” campaigns.
- Be aware of contact frequency of campaigns in tight time frames
- Calculate a few side ratios, even seemingly insignificant ones, to give yourself an idea of the inaccuracy. My favorite is the non-open click: the percentage of customers who click through an email without loading images (image loads being what count as opens). This skews the CTO metric- which I like- but it's good to know in general by how much. I also keep track of sent vs. delivered for a few types of campaigns (transactional and promotional). I don't need it for every cell or segmentation, just in general.
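The non-open-click check above amounts to a couple of ratios. A sketch, with placeholder campaign numbers, of the adjustment and how it shifts CTO:

```python
# Sanity-check ratios: what share of clickers never "opened" (loaded images),
# and how much does folding them back in change the CTO metric?
# All counts are made-up placeholders.

delivered = 50_000
recorded_opens = 9_000        # image loads counted as opens
unique_clicks = 1_400
clicks_without_open = 300     # clicked, but images never loaded

non_open_click_rate = clicks_without_open / unique_clicks
adjusted_opens = recorded_opens + clicks_without_open

cto_raw = unique_clicks / recorded_opens
cto_adjusted = unique_clicks / adjusted_opens

print(f"non-open clicks: {non_open_click_rate:.1%} of all clicks")
print(f"CTO raw {cto_raw:.1%} vs adjusted {cto_adjusted:.1%}")
```

Tracking this ratio once per campaign type is usually enough; the point is knowing roughly how far off the open-based metrics run, not fixing every report.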
More reading on metrics, and their variability:
Improving Email Open Rates by the folks at MailChimp
Email marketing statistics: six misinterpretations and Seven Tips for Interpreting Your Email Marketing Reports by Mark Brownlow
Benchmarketing Email Response Metrics by Tamara Gielen
Obsessed with Open Rates? Stop it; Focus on Feedback Loop by Ken Magill
Unified Multichannel Metric by Kevin Hillstrom
Ask The Expert: How does my open rate compare to my peers? by DJ Waldow
First, he showed the chart on "taking the temperature" of email marketing, which showed, year over year, a stagnation of the "rosy" responses; basically, problems have gotten more intense for email marketers.
Stefan then flipped to a slide showing something we all know- “newsletters still work”- for acquiring leads, essentially.
He displayed an odd report on the percentage of false positives in spam filtering: the majority (61%) were flagged by a single ISP, with a huge drop-off for messages flagged by up to three ISPs. The ISPs examined were Hotmail, AOL, and Yahoo. He finished by saying that "the score does not tell you the entire story, you should monitor ISPs." Interestingly, this is more a concern for B2B than B2C.
I was pleased to see the analysis of opens and clicks across segmented, versus unsegmented lists, and that sometimes the difference is 2:1 more efficacy for highly segmented lists.
On copywriting, and subject lines, Stefan quoted Ann Holland that “subject lines are getting shorter”, and they are getting ‘short & punchy.’ Essentially, “the first twenty characters are going to get looked at,” of the 45 characters suggested.
The most interesting report displayed, of course, the eyetracking chart, with renewed advice to move the template around periodically (every three campaigns) to get users to click on non-content, less interesting, or less-focused panels, such as sponsor ads.
He offered advice on landing-page optimization: remove navigation, and, surprisingly enough, repeated page testing raises efficacy by up to 400%. 400%. Amazing.
The last slide reviewed email marketers rating their in-house, ASP, or ESP vendors from "good to great." It turns out full-service ESPs win that category, with 60% of email marketers saying their ESPs are "able to handle complexity." Basically, they're worth the expense!
Interesting quick talk, in a very crowded hall, but deemed useful by my neighbors.
MarketingSherpa offers a certification course before the EmailSummit sessions begin on Monday. Last night at dinner, my two colleagues and I were talking about the course, and I wondered whether, since we were already doing pretty sophisticated emails, segmentation, etc., it was going to be worthwhile. The bits I witnessed, with Dr. Flint McGlaughlin of MarketingExperiments, were chock full of great advice, methodology, and data points. More important, perhaps, is that it spurred a conversation with those same colleagues on the efficacy of one of our key campaigns. McGlaughlin walked through various emails, digging clearly into the reading flow of consumers, the issue of matching expectations set by the subject line with the content of the email, as well as the offer.
In the presentation, MarketingExperiments presented a formula for the effectiveness of email marketing, and the fact that they had a formula at all dovetailed nicely with a book I read on the plane (and will review here at some point): SuperCrunchers, which champions data and regression testing in contrast to what I term gut marketing: intuitive defenses of the ways and means of effective marketing. Haven't we all seen that- someone at the management or executive level saying "I don't like that," with no justification, data, or analysis to back it up? Frustrating.
This certification course has a lot of testing and information in helping corporate email marketing departments defend certain issues and “reasons why” we do things- short email subjects, not selling so hard (Phil hates the word ‘deal’ for very good reasons!), etc. So in this regard the certification course is very valuable. Looking forward to the email marketing internal politics session after the keynote tomorrow!
Back to the subject of data analysis in constant battle with "gut marketing": what's interesting to me is that data analysis is not the gold standard, because, basically, learnings age. Negotiating that aging period is the real challenge. When do you have to retest various lessons you learned? When do offers get stale (i.e., "free shipping"), when does a voice get old, when does segmentation become so complex it's no longer useful?