Opinions differ on the value of social media monitoring. What I think it comes down to is whether you love the random, chaotic nature of the data or get frustrated by its inconsistencies, vagaries and lack of absolutes.
What I’ve done is identify a few of those aspects of the data that are perceived to be either positive or negative.
People’s authentic views: what you find when mining this data is people’s genuine beliefs about things. They could be about brands, customer service or how to make a cup of tea, but they are real.
Unfettered by constraint or control, they’re not answering a researcher’s questions and they’re not responding to a list of product imagery. These people have been moved, for whatever reason, to voice an opinion.
Instantaneous reaction: comments, opinion, spleen-venting and joy are often generated immediately as a result of an experience – good or bad – without time taken to ponder the consequences.
People share: social media is about sharing and so not only can you see how far a story, piece of content or trending topic has spread but also its origin and drivers.
Relevant associations (challenging competitor sets): we always recommend setting results in the context of a competitor, particularly when we undertake large-scale retrospective social media monitoring pieces. It’s always interesting to see who the brand thinks their competitors are in comparison to how customers view the market.
Not geographically specific: because of the nature of the internet, it is largely impossible to tell the country of origin of a post or mention. Data of that nature is often collected in places like Facebook and forums, but it is not publicly available. Twitter users can include a location in their bio, but it is not to be trusted.
Not easy to identify demographics: as with geographic location, this sort of data is collected by Facebook and some forums but is not publicly available. So trying to guess the sex, earning power or occupation of someone posting on a forum or writing a tweet is nigh-on impossible.
(Some social media monitoring tools now present data along these lines, but the volume of data on which it’s based is so small as to render it unrepresentative, and the less said about its reliability, the better.)
People don’t express themselves clearly: This could be due to many things. The scourges of natural language processing – irony and facetiousness – are obvious examples. It could also be haste, lack of grammar or use of slang or colloquialisms.
People gossip: As in offline life, people don’t always have something original to say and may merely be repeating what they’ve heard from someone else, or seen in a newspaper or on the television. Separating this repetition of mainstream media from the data in order to find consumer insight or naturally occurring trends is hard work.
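To give a sense of the work involved, repetition of this kind can be flagged with simple near-duplicate detection. A minimal sketch in Python, assuming you already have the posts and the mainstream-media headlines as plain strings (the function names and the 0.5 threshold are illustrative, not taken from any particular monitoring tool):

```python
def shingles(text, n=3):
    """Break a piece of text into overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0 = disjoint, 1 = identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_repetition(posts, headlines, threshold=0.5):
    """Return posts that largely repeat a mainstream-media headline."""
    headline_shingles = [shingles(h) for h in headlines]
    flagged = []
    for post in posts:
        ps = shingles(post)
        if any(jaccard(ps, hs) >= threshold for hs in headline_shingles):
            flagged.append(post)
    return flagged
```

Posts that clear the threshold can then be set aside, leaving the genuinely original opinion for analysis; in reality the threshold needs tuning per project, which is exactly the hard work referred to above.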
Dirty data: no matter how clean a set of data a social media monitoring tool claims to present, there is always work to be done. Whether it’s excluding a site that carries a linked story on every page or removing forum posts because a very active member has used a product name in their signature footer, there always has to be a certain lack of confidence in the volumes of data.
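In practice, much of that cleaning ends up as hand-written exclusion rules built up as problems are spotted. A minimal sketch, assuming each mention is a dict with `url` and `text` fields (the field names, domains and footer strings are all hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical blocklists, grown by hand as noisy sources are spotted.
EXCLUDED_DOMAINS = {"linkfarm.example.com"}       # site carrying the story on every page
FOOTER_PATTERNS = ["Proud owner of a WidgetPro"]  # product name in a member's signature

def clean(mentions):
    """Drop mentions from excluded sites and strip known signature footers."""
    kept = []
    for m in mentions:
        if urlparse(m["url"]).netloc in EXCLUDED_DOMAINS:
            continue  # the whole site is noise, skip it
        text = m["text"]
        for pattern in FOOTER_PATTERNS:
            text = text.replace(pattern, "")
        if text.strip():  # drop posts that were nothing but footer
            kept.append({**m, "text": text.strip()})
    return kept
```

Rules like these are never complete, which is why the resulting volume figures always deserve a degree of scepticism.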
Social media didn’t evolve in order to provide marketers and brands with the hugely powerful (positive) sources of data listed above. It’s not an analytics discipline driven by data and numbers in the way that email, web and direct marketing analytics are. It’s much more about language and opinion, and it reflects the nature of social media and, more broadly, the internet itself.
We can use tools to focus on a specific area of the massive amorphous whole of the web, but these will only ever do half the job for us.
When you’re undertaking social media monitoring for a popular brand there is always too much data to go through it all. You have to clean as much as you can and then read as much as you can in order to understand the data’s true meaning and value.
Language, in this context, is both a conduit and a barrier and so social media monitoring sits on the borders between analytics and research.