Saturday, October 15, 2011

NAMP Blog Salon posts

Last week, I participated in the National Arts Marketing Project Blog Salon over at Americans for the Arts. My two entries focused on applying research and feedback-gathering principles to a marketing context. Not the typical Createquity fare, but if such things interest you, there's more information below.

Is Your Arts Programming Usable? considers the concept of usability testing taken outside of its usual tech- or product-specific milieu. Here’s an excerpt:

At Fractured Atlas, we’re in the process of rolling out a few new technology products that have been in the pipeline for the past year or so. One of these is Artful.ly, which is the hosted version of the ATHENA open-source ticketing and CRM platform that was released earlier this year. Another is a calendar and rental engine add-on to our performing arts space databases in New York City and the San Francisco Bay Area that will allow visitors to the site to reserve and pay for space directly online.

For both of these resources, we felt it was important to get feedback from actual users before proceeding with a full launch. So we engaged in a round of what's called usability testing. Usability testing differs from focus groups in that it involves observing participants as they actually use the product. So, rather than having people sit around a room and talk about (for example) how they might react to a new feature or what challenges they face in their daily work, you have them sit in front of a computer and try to navigate a website's capabilities while staff members look over their shoulders and take notes.

Whither the Time Machine? Considering the Counterfactual in Arts Marketing explains why deducing what would have happened if things had gone differently is the central problem of arts research, and offers a couple of examples of how arts marketing can take advantage of control groups.

In a marketing-specific context, counterfactual scenarios come into play when considering alternative strategies aimed at driving sales or conversions. One technique that a number of organizations have used is called A/B testing, which is when two different versions of, say, a newsletter or a website get sent to random segments of your target audience.

Internet technology makes A/B testing relatively painless to execute: in the case of a newsletter, for example, all it requires is a random sort in Excel to divide the list in two before sending a slightly different newsletter version to each half as you normally would. You could test which design results in more clickthroughs to a specific link, or which subject line results in a higher open rate.
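
To make the mechanics concrete, here's a minimal sketch of that random split in Python rather than Excel. The file names and one-address-per-line format are hypothetical, just for illustration:

```python
import random

# Hypothetical input: one subscriber email address per line
with open("newsletter_list.txt") as f:
    subscribers = [line.strip() for line in f if line.strip()]

# Shuffle so the split is random, then cut the list in half
random.shuffle(subscribers)
midpoint = len(subscribers) // 2
group_a = subscribers[:midpoint]   # receives newsletter version A
group_b = subscribers[midpoint:]   # receives newsletter version B

# Write each half out for your email tool to send separately
with open("group_a.txt", "w") as f:
    f.write("\n".join(group_a))
with open("group_b.txt", "w") as f:
    f.write("\n".join(group_b))
```

From there, each version goes out exactly as it normally would; the only systematic difference between the two groups is which version they received.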

By creating an A group and a B group, you are finding a way to test the counterfactual without a time machine to go back and try things a different way. Assuming the groups truly are random and the sample size isn't tiny, it's a remarkably reliable way of learning what actually works.
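
As a rough sketch of what "reliable" means here, and why sample size matters, a simple two-proportion comparison (with entirely made-up open counts) checks whether the gap between the groups is bigger than chance alone would produce:

```python
import math

# Hypothetical results: opens out of sends for each version
opens_a, sends_a = 220, 1000   # subject line A
opens_b, sends_b = 180, 1000   # subject line B

# Two-proportion z-test: is the difference in open rates
# larger than random variation between the groups?
p_a = opens_a / sends_a
p_b = opens_b / sends_b
p_pool = (opens_a + opens_b) / (sends_a + sends_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
z = (p_a - p_b) / se

print(f"Open rates: A = {p_a:.1%}, B = {p_b:.1%}, z = {z:.2f}")
# |z| above roughly 1.96 means the gap is statistically
# significant at the conventional 95% confidence level
```

With smaller lists the same percentage gap produces a much smaller z score, which is exactly why a tiny sample can't tell you much.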

A/B testing is not the only way to pursue this kind of inquiry, however. Sometimes it isn't so easy to divide your target audience in two.

Enjoy!
