The Neuromarketing Challenge: First Response
It’s been more than a year since I posted the first Neuromarketing Challenge, and we’ve just now received our first response. The challenge, in case you missed that post, was for neuromarketing firms to submit a detailed case study or white paper that demonstrated a successful application of neuromarketing techniques. There’s very little published academic research on the topic (see Neuromarketing Proof? UCLA Brain Scans Predict Ad Success for one paper), and client data is usually closely guarded by commercial firms. We issued the challenge to try to bring some additional data into public view.
Innerscope Research sent the first response. It’s not fully responsive to the challenge, as the detailed numeric data isn’t there for us to pick apart. But it’s a step in the right direction. I’m going to post their response without editing. Feel free to chime in with questions and your thoughts in the comments.
From Innerscope Research:
Innerscope Biometric Research Helps Mimoco Pre-Test Product Design Pipeline – Validation Case Study
There is a new breed of designer USB flash drives called MIMOBOTs that are a must-have for the gadget obsessed – an “it” accessory that is part pop art and part functional technology. These drives, made by Mimoco, are sold by a wide variety of retailers from museum shops to high-end department stores, as well as online, priced at about $20-$70 for 8 to 64 gigabytes of memory, compared to generic versions sold for $10 to $35.
As reported in The New York Times, Mimoco is the “Haagen Dazs” of designer flash drives. Innerscope Research partnered with Mimoco to study the emotional engagement of potential MIMOBOT users – tech-savvy, fashion-conscious, 20-somethings – as they reviewed a variety of designs. The product designs were split into two groups for passive evaluation while using Innerscope’s biometric technology:
(1) 30 designs that were already in-market with known sales performance
(2) 30 new designs that were being proposed for in-market use
If the emotional responses to (1) lined up with Mimoco’s sales data, the Company would feel confident in using the responses to (2) for selecting which designs to move into production.
There were two explicit goals for the study – test Innerscope’s ability to predict actual sales behavior, and offer Mimoco specific insights on what design elements lead to higher sales. It was a win-win for both parties, particularly Mimoco, which could use data on what its target customer base found emotionally engaging at a nonconscious level to inform future design choices.
Part I: Predicting Online Sales Behavior
Out of their 300+ in-market designs at the time of the study, Mimoco provided a sampling of both best sellers and poor performers, including pop culture MIMOBOT character designs from the “Hello Kitty” and “Star Wars” brands. They did not share the sales data with any of us prior to fielding the study. It was truly blind validation research on a series of designs that represented over 100,000 unique purchases.
Data was collected using our Biometric Monitoring System™, which involves a medical grade biometric belt that measures moment-by-moment changes in heart rate, respiration, skin sweat, and motion in combination with eye tracking to gauge visual attention and cognitive processing. The individual responses were combined across the audience to determine the emotional response for each MIMOBOT design.
The correlation between the emotional response of 60 potential Mimobot users and Mimoco’s historical sales for the sample designs was 0.73, incredibly close to a perfect correlation of 1. This is a strong statistical indicator of the connection between designs that evoke high emotional response and those that achieve higher sales.
In fact, the designs we found to be most engaging actually ranked #1, #3, #4, and #5 in sales within the dataset.
“I was amazed when I received the initial findings from Innerscope because the research correctly identified four of our five best-selling designs from the sample set,” said Dwight Schultheis, Mimoco Vice President. “These designs outsold the less engaging designs 3:1 in the marketplace. From that moment on, I was confident that Innerscope had the ability to truly measure our customers’ emotional engagement with our product.”
Part II: Identifying Design Elements that Work
Once Mimoco was convinced of the connection between emotional response and actual sales, we turned our attention to identifying design themes that emerged among the highest performers. This evaluation took into account all of the designs studied – both in-market and concepts under review.
Across the board, we saw common elements in the high performers – prominent features, often with large eyes and a bold color palette. The most engaging designs also tended to “express an attitude.” Interestingly, the inverse of these traits (small eyes, pale colors, vacant stares) was common among the designs that generated low emotional engagement. Here is a chart of what this looks like:
“The findings from Innerscope have had a material impact on Mimoco’s business,” said Schultheis. “Our ‘hit rate’ of producing ‘good’ and ‘great’ selling MIMOBOT designs has improved as a result of Innerscope’s design recommendations (namely, prominent eyes, bold colors and contrast, and design detail).”
Part III: Predicting Online Ad Response
We then applied these findings to an advertising scenario to explore what it could mean in terms of predicting future behavior. We created two versions of the same Mimobot online display ad and ran them on MySpace for roughly 200,000 exposures.
Version 1 used two of the most engaging Mimobot designs:
Version 2 featured a couple of the less engaging designs:
The ad featuring the most engaging Mimobots received twice as many clicks as the ad with the less engaging designs. This was a strong indicator that high-engagement designs lead to stronger ads.
In summary, our emotional response metric had a strong correlation to Mimoco’s sales data with the highly engaging designs clearly outselling the less engaging ones. We were also able to predict online advertising behavior – the emotionally engaging designs drove twice as much click-through behavior.
“In a consumer products business like Mimoco’s, the most basic criteria for success are that consumers love our product,” said Schultheis. “By identifying key design attributes of our MIMOBOT flash drive product, Innerscope handed us a recipe for success. We have more best-selling designs than ever before (and much less expensive and slow-selling inventory).”
We are confident that this case study offers direct evidence of the validity of using biometrics to measure unconscious emotional engagement, and of the long-term value this research approach offers to brands.
——-End of Innerscope-provided content—–
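One note on the Part III numbers before the discussion: the "twice as many clicks over roughly 200,000 exposures" claim can be given a quick statistical sanity check. The actual click counts weren't disclosed, so the figures below are hypothetical – 100,000 exposures per version with 0.2% vs. 0.1% click-through rates, chosen only to match the reported 2x ratio – and the method is a standard two-proportion z-test, not anything from Innerscope's own methodology:

```python
import math

def two_proportion_ztest(clicks_a, n_a, clicks_b, n_b):
    """Normal-approximation z-test for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # Pooled rate under the null hypothesis that both ads perform equally
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 100,000 exposures per ad, with a 2x click-through gap
z = two_proportion_ztest(200, 100_000, 100, 100_000)
print(f"z = {z:.2f}")  # comfortably above the conventional 1.96 cutoff
```

With these assumed counts, a 2x gap at that volume would be very unlikely to be chance; with much smaller click counts, the same 2x ratio could easily be noise, which is why the undisclosed raw numbers matter.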
If you can get past the occasional burst of press-release style prose, what do you think? Does this make the case for biometric measurements as a predictor of consumer preference?
I wonder how the #2 seller ranked in emotional response. And if it ranked very low, doesn’t that make the error margin too big?
While it would be nice to establish near-perfect correlation between two metrics, the real world usually doesn’t work that neatly, Beate. If we had all of the data, we could form a better opinion. Based on what we have, matching four of the top 5 seems fairly amazing. #2 could have achieved higher sales for a separate reason – a large single purchase, similarity to a popular cartoon character, a different emotional trigger than the rest, etc. We have no way of knowing.
This is interesting work, and thank you for hosting this challenge. My work as a consultant for companies making and branding consumer goods has required me to find tools that help guide design teams in identifying key features that grab and hold consumers’ attention. It would be nice – as you mentioned in the article – if the data file were available for study, and of course the detailed workings of the emotional response metric too; this is unlikely to occur for obvious proprietary reasons.

Would it be feasible, however, to host your own test? For example: purchase online EEG and other biometric monitoring gear, affiliate with a university that has biometric analysis tools, and test older products with known sales outcomes against new concepts with unknown sales (plus post online to check hits)?

What strikes me as interesting is that I think one day the companies who conduct this research could face a big change, as monitoring gear leaves their offices and winds up inside branded companies, who will then conduct their own in-house research. The biometric analysis will come from small firms who demonstrate strong and accurate findings with analytic software that is easy to use and interpret. This is obviously simple to request, but a big bite to chew. Thanks again for your work and postings.
Good idea, Jim, but at the moment I don’t plan to buy the hardware and get down the learning curve myself. There are several neuromarketing programs at universities (Akron University, Iowa State) along with a variety of decision science, neuroeconomics, and other programs at a bigger selection of schools. So, I expect some relevant academic papers to start appearing. I think testing of products and ads with known sales outcomes is a great idea – I’ve been suggesting that for years, but have seen very little effort of that type so far.
I do agree that a few firms will bring the neuromarketing work in house, but I expect many companies will outsource it just as they do with other market research tasks. Thanks for stopping by!
Wait… Martin Lindstrom says he does this all the time on the Today Show?!?!? 🙂
Martin Lindstrom, AK Pradeep, and others have shared some great info in their books and appearances, but you make a good point – none of it really rises to the level of scientific proof. I understand the “secret sauce” argument, but I do think there’s a middle ground where firms can share persuasive data without revealing the fine points of their methodology. Without more data, the study in this post doesn’t quite make that cut, but we’re getting closer. Maybe by the next entry…
As a scientist, I feel a bit left out of the data loop by this press-release-style report. The biometric belt creates a ton of data – what did they use? Also, what did they learn from the eye-tracker data? If they are making assertions based on correlations, it is important to see what data they chose to correlate. NOTE: a .73 correlation, while quite high, is not “incredibly close” to perfect. In fact, it accounts for only about half the variance, so technically it is half as good as a perfect correlation. Sorry, but this report leaves me dry and wanting to see the data. Correlation does not imply causation and is fraught with “technical” issues that have not even been mentioned. How many data points? How much variance? And on and on and on…
Dr. Larry Rosen
Professor of Psychology
Author of “iDisorder: Understanding Our Obsession With Technology and Overcoming its Hold on Us” (2012)
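Rosen's variance point is worth making concrete: the proportion of variance in one variable accounted for by its linear relationship with another is the square of their correlation. A minimal sketch (the toy `pearson_r` helper and sample data are mine; only the 0.73 figure comes from the case study):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly linear relationship yields r = 1.0
assert abs(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-9

# The reported r = 0.73 explains r^2 of the variance in sales --
# strong for field data, but far from a perfect correlation
r = 0.73
print(f"variance explained: {r ** 2:.1%}")  # 53.3%
```

So roughly half the variation in sales rank lines up with the engagement metric, and half is left unexplained – which is exactly the distinction Rosen is drawing.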
While I am very interested in the tools employed by neuromarketing (and the statistical software developed to interpret the data), I believe – as I believe you are – that independent testing should be conducted to help us understand just how helpful this type of applied science is. I can tell you from first-hand experience that working with a neuromarketing firm to conduct a study is not at all inexpensive. If the science is valid and really delivers the goods, then the fees are justified. If, on the other hand, the science is being overly hyped, then prices and promises need to be modified. Thanks for your post, Dr. Rosen.
Jim – Have you been following the independent testing efforts that the ARF has been doing in the neuro space? We were one of several firms that participated in the original NeuroStandards Collaboration project. The outcomes from that project have led into the ARF’s current “Predicting How TV Advertising Drives Sales” research initiative that is in conjunction with Temple University’s Center for Neural Decision Making and New York University’s marketing department.
Thank you for the two university references. I will dig into their work’s progress shortly. (Can you tell me what ARF is?) I have seen the latent power of decision science in action, and “am a believer” now. However, the industry is new and it will take folks a while to become comfortable with the new science.

One area where I wish to use it is to help bridge the divide between the industrial design/engineering and marketing departments for a few clients. They generally speak slightly different dialects, see the world a bit differently, and are driven by different forces of the same consumer market segment they serve. I would still love to bring in a set of tools for designers to perform quick checks on new designs, to measure whether these concepts are meeting marketing-brief expectations and desires. (And there are cases I have worked with where a designer/engineering team has pushed into very new territory, while other business stakeholders have yet to “see” – or, better, “feel” – the potential benefits of the new design ground being evaluated.)

There will be discrepancies along the way in this data – for example, Falk, Berkman, and Lieberman’s “From Neural Responses to Population Behavior: Neural Focus Group Predicts Population-Level Media Effects,” Psychological Science, 2012. Self-reporting will differ from what is measured, and this is a critical area I will have to navigate with clients so they understand clearly what is happening: these tools measure without emotional bias, while we humans definitely are moved by our emotions. Thank you very much for your reply, and keep up the great work!
Sure thing. The ARF is the acronym for the Advertising Research Foundation. I don’t know how to embed links into this post, but here is the path to the project if you want to take a look: http://www.thearf.org/tv-advertising-sales.php.
All of your questions are good ones, Dr. Rosen. This piece was an effort to tell the story of the study while meeting the parameters of Roger’s challenge. In an effort to stay “fairly short, say, under 1,000 words” and gear the style toward the lay reader, it focuses more on the execution of the study, the key results, and how Mimoco was able to use them, with as much emphasis on that last point as possible.
Similar to Erik du Plessis and Roger’s comments posted on 9/6, the effectiveness of a methodology (at least from our clients’ perspective) is typically based on their ability to get business value for their research investment. Mimoco has continued using the results past the conclusion of the study, so the ROI for them keeps growing.
A mountain of research shows that people don’t always say exactly what they think, so the data gathered is subjective and – since it relies on spoken statements – also explicit. Neuromarketing removes the subjectivity by not asking any questions and instead simply measuring the brain regions that activate whenever an emotion is experienced. In this sense, neuromarketing is completely implicit.
Hi Roger, Thanks for this competition and the results. I think this is much better than anything Lindstrom or Pradeep has published.
We can question the lack of the datafiles – but then so can we for many academic papers.
What I like is that the results carry a certain ‘common sense’ that most marketing directors would feel comfortable with.
I agree, Erik, the data does have a kind of practical appeal. I’m reminded about the joke about the difference between a mathematician and an engineer. Both (males, of course, it’s an old joke) are placed across the room from a beautiful woman. They are told they can go half the distance toward her, and half the remaining distance again, and so on. The mathematician is crestfallen, saying, “No matter how many times I do that, I’ll never get there!” The engineer is far more cheerful, noting, “That may be true, but I’ll get close enough!”
For a marketer, this kind of data is likely actionable and better than no data at all.
My company does video. I find this particular submission interesting and perfectly in line with the principles we most strongly favor in creating commercial video. We always encourage clients to use actual people in their ads because it creates connection. Interestingly, the most compelling designs had hyper-human imagery.
Another thing we tend to do for commercial video is bump up the color in post-production editing. Greens are vibrantly green, reds are powerful, yellows are warm and lush. This is why TV ads never look like the pictures you take on your best camera. But we use color to sell. We find that it increases the “look-i-ness” of any video we create. And, again, this study showed how bright colors and high-contrast designs were simply better for the bottom line.
In the comparative sets, the lower-performing designs actually have bigger eyes, and one is more expressive, which goes against the study conclusions. Also, it seems to me that a differentiator between the sets is that the lower-performing designs were both clearly threatening, with exposed teeth. Interestingly, I chose the one-eyed design before reading the article, which was a low performer in the study. Maybe an age-group filter should have been included in the study. Sorry, but I really don’t see any scientific validation of the claims.
Thanks for the feedback. Part One of the study was the validation component where the engagement metric was compared to historical sales data. Part Two was about offering guidance to the Mimoco design selection team. The trends were identified through observational analysis informed by the biometric data and are not equivalent to the sales validation of Part One.
Keep in mind that only nine of the 60 designs are used as imagery in the article. The Mimobots pictured in the chart are representative of the specific trend they are associated with. We make no claim to being a design firm or a creative agency, just to using this research to help inform Mimoco’s decision-making process.
I like Roger’s analogy of half a step forward, i.e., that companies are publishing some case studies that have face-credibility.
Yes, we can knock this submission for lack of data being offered, but then even in the best peer-reviewed journals the base data is not given because it often will take up all the space for the paper.
For this type of submission, I am happier than many of the critics that something is being published.
Maybe the authors want to give a bit more comment given the interest this generated?
I pointed NMSBA to this – I think this carried more substantiation than most they have published.
Thanks for weighing in, Erik. The creation of the NMSBA is an exciting milestone in the global development of this as a permanent research area. Hopefully, they will keep pushing for validation just like Roger.
Thanks Roger and everyone else who has commented so far. We responded to the challenge in the hope of just this sort of dialogue taking place.
We want to publicly thank Mimoco for allowing us to do this. They are the exception to the rule in the sense that companies typically use our research to gather competitive intelligence or gain a competitive advantage. They typically keep findings very close to the vest, particularly when the results correlate to sales. Feel free to poke fun if this sounds like corporate messaging when you read it, but it is the truth.
Director of Marketing
Innerscope Research, Inc.
Quite interesting. Please correct me if I am wrong, but a step-by-step summary of what Innerscope did is:
1. Measure the emotional response to designs already on the market (through heart rate, eye fixation, etc.)
2. Suggest design elements through subjective evaluation of the results (a subjective inference)
In that sense, why can I not, as a firm, do the same thing myself?
1. Check out the sales statistics
2. Come up with causation analysing the best sellers (which are emotionally engaging)
What was the exact contribution of neuroscience to this process ?
To my mind, neuromarketing can work as long as it can predict what I cannot predict as a company.
This is healthy skepticism, for sure. One thing I can say from first-hand research experience (and supported in the academic literature) is that there is a discrepancy between self-reported responses/reactions/effectiveness of product design (in this case) and what modern equipment can detect – and the software can tease out – as the individual’s emotional state over time. This is new cognitive science coming into the mainstream of mature product design and marketing methodologies, but it has, in my opinion, a place at the table.
Burak, I totally agree.
The first part is interesting because they correlated old sales with current emotional response. However, for the second and third part you don’t need the biometric data. As Burak said, just look at the best sellers and the worst sellers groups, learn the common characteristics of each group and suggest new design tips.
The advantage of any physiological tool would be if they add something above and beyond what you can achieve with behavior only.
If anyone could show that the biometric data predicts future sales better than behavior alone, then there would be good reason to use this method.
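The "above and beyond" question raised here has a standard statistical form, often called incremental validity: fit sales on the behavioral predictor alone, then check whether the biometric score still correlates with the leftover residuals. A sketch on synthetic data – every number is invented, and the data is constructed so that engagement carries extra signal by design:

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
n = 200
behavior = [random.uniform(0, 10) for _ in range(n)]    # e.g. past purchase behavior
engagement = [random.uniform(0, 10) for _ in range(n)]  # biometric engagement score
# Synthetic "sales" that depend on BOTH predictors, plus noise
sales = [2.0 * b + 1.5 * e + random.gauss(0, 1) for b, e in zip(behavior, engagement)]

# Simple least-squares fit of sales on behavior alone
mb, ms = sum(behavior) / n, sum(sales) / n
slope = (sum((b - mb) * (s - ms) for b, s in zip(behavior, sales))
         / sum((b - mb) ** 2 for b in behavior))
residuals = [s - (ms + slope * (b - mb)) for b, s in zip(behavior, sales)]

# If engagement predicts the residuals, it adds information beyond behavior
print(f"r(engagement, residuals) = {pearson_r(engagement, residuals):.2f}")
```

On real data, a residual correlation near zero would mean the biometric measure adds little beyond what sales history already tells you – which is precisely the test the commenters are asking neuromarketing firms to publish.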
C’mon guys – you’re missing the point.
Yes, you are right. A shortcut method would have been (for anyone) to look at past successes, hypothesize a reason (called marketing insight), and then make recommendations.
Roger’s competition is aimed at gathering evidence that neuromarketing can work. Here the client asked them to prove their technology before using it, and then used it. I think they did this credibly.
We need many such case studies so that we can learn. If we knock everything that people submit to this competition, we will actively discourage submissions and will never build a databank of cases, if such a databank can be developed at all.
I’m studying the use of neuroscience in market research. In Brazil there are few case studies available. This report will be really helpful because I’m collecting data on what has already been published.
Thank you, and if there is any other material to share, please send it to me.
[…] double the so-called “gold standard” USA Today conscious measure of dial testing. This and other validation studies are useful in demonstrating the use of neuroscience in market research as potentially very […]