Back in November, I mentioned that Affectiva, a firm in our neuromarketing companies list, was working on facial expression analysis using the webcams connected to most computers. Now, if you want to see how this works, you can watch a selection of recent Super Bowl ads with your camera turned on – you’ll see how your expressions stack up against those of other viewers.
Here’s what mine looked like for a Chevy ad:
The system reports on a variety of metrics, including “smile” and the more cryptic “valence.” According to Affectiva,
Bayesian machine learning processes are used to combine the facial and head movements in order to recognize positive and negative displays of emotion as well as complex states such as interest and confusion.
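Affectiva hasn’t published the details of its model, so as a purely illustrative sketch of the general idea in that quote – a Bayesian classifier combining several facial and head signals into a positive/negative judgment – here’s a tiny naive Bayes example. The feature names, training data, and binary encoding are all my own invented stand-ins, not anything from the actual system:

```python
from collections import defaultdict

# Hypothetical binary facial-action features (1 = present, 0 = absent).
# The real feature set and training data are not public.
TRAINING = [
    ({"smile": 1, "brow_furrow": 0, "head_nod": 1}, "positive"),
    ({"smile": 1, "brow_furrow": 0, "head_nod": 0}, "positive"),
    ({"smile": 0, "brow_furrow": 1, "head_nod": 0}, "negative"),
    ({"smile": 0, "brow_furrow": 1, "head_nod": 1}, "negative"),
]

def train(data):
    """Estimate P(label) and P(feature=1 | label) with Laplace smoothing."""
    label_counts = defaultdict(int)
    feature_counts = defaultdict(lambda: defaultdict(int))
    for features, label in data:
        label_counts[label] += 1
        for name, value in features.items():
            feature_counts[label][name] += value
    total = sum(label_counts.values())
    priors = {lbl: n / total for lbl, n in label_counts.items()}
    likelihoods = {
        lbl: {name: (feature_counts[lbl][name] + 1) / (label_counts[lbl] + 2)
              for name in data[0][0]}
        for lbl in label_counts
    }
    return priors, likelihoods

def classify(features, priors, likelihoods):
    """Return the label with the highest posterior probability."""
    scores = {}
    for lbl in priors:
        score = priors[lbl]
        for name, value in features.items():
            p = likelihoods[lbl][name]
            score *= p if value else (1 - p)
        scores[lbl] = score
    return max(scores, key=scores.get)

priors, likelihoods = train(TRAINING)
# A smiling, nodding viewer scores as "positive"
print(classify({"smile": 1, "brow_furrow": 0, "head_nod": 1},
               priors, likelihoods))
```

The combining step is what makes it interesting: no single signal decides the outcome, so a furrowed brow during an otherwise smiling, nodding viewing session only lowers the posterior rather than flipping it.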
The firm strikes a cautious note in describing the limitations of the technique:
Although some people claim that their face-reading technologies “recognize emotions,” it’s important to note that the state of the technology is such that it recognizes outward expressions, which may or may not correspond to true feelings.
So, Neuromarketing readers, here’s your assignment:
- Go to the Affectiva face-reading demo and view an ad or two with your webcam on.
- Come back here and let everyone know how it worked for you.
I’m not sure how I felt about the accuracy of my data. At one point, for example, I found the volume was too low and had to hunt for the controls; I can’t imagine that my expressions at that point had much to do with the ad. Plus, I’m not sure I emote very much when watching a typical ad. On the other hand, with a large enough sample, interruptions and irrelevant expressions from any one subject will presumably be canceled out by the mass of data. And because it’s a web-based solution that uses standard user equipment, it should scale well. Let us know what you think!