
Authentic Data Narratives: The Responsibility of Researchers and Brands in Communicating Data Accurately in the Media
September 2025

By Craig Young. Based in Sydney, Craig is the Managing Director of Sandpiper Research & Insights and has 30 years of experience as a research practitioner and evidence-based strategy consultant.
For communicators, data is a powerful tool for storytelling. In recent reports on AI adoption and the safety of smartphones for children, data has been used in media headlines to hook the interest of readers.
Proof points that back up and validate information are extremely powerful in an era when the accuracy of information is under threat.
However, psychological research shows that most people don’t remember the statistics alone; they remember the stories that the data helps to tell. Stories connect with people emotionally, provide context, and make information easier to remember.
For researchers and data scientists, this means that stories matter as much as analysis and insights. It also means that we have the responsibility to tell clear stories that authentically and factually represent research data.
For anyone who’s ever spent long hours stewing over a complex data set, the inevitable question is: How can we represent the scientific methodology and statistical context of the data, and simultaneously deliver “a-ha” insights to an audience that may not be interested in the context and nuance of how the data was produced?
Several recent examples that made headlines illustrate the challenge.
Stories and Soundbites
A recent MIT study on AI adoption sought to uncover whether businesses are successfully driving returns on their investments in AI. The study drew on 52 structured interviews with enterprise stakeholders, a systematic analysis of more than 300 public AI initiatives and announcements, and a survey of 153 leaders to develop its findings.
After the report was published, the story was carried by multiple media outlets and exploded into professional social media feeds. Headlines included:
- Forbes: MIT Finds 95% Of GenAI Pilots Fail Because Companies Avoid Friction
- Fortune: MIT report: 95% of generative AI pilots at companies are failing
From these headlines, one could be forgiven for believing that AI is failing businesses, a conclusion that conveniently fits the narrative that GenAI is overrated.
But digging deeper into the report, the story is more complex. It shows that businesses, as a whole, are struggling to move beyond using AI to drive individual productivity and into production-level returns on investment that add to profits. In short, GenAI can help individuals in their roles, but companies have not yet unlocked how to use it at scale to drive increased returns on investment.
However, one would also be right to take away that quick returns from an AI-driven transformation initiative do happen, although rarely. Organisations should aspire to learn from the 5% of businesses that found significant ROI from AI pilots within the first six months.
The above may be a more meaningful interpretation of the study than many of the headlines suggest.
Business transformation through AI initiatives is complex and time-consuming. Impact on the bottom line takes time, as adapting an organisation to run more efficiently and intelligently with AI tools requires genuine bottom-up transformation. Six months may not be enough time to see these results for every business, but for some it is.
What is important instead is for organisations to learn what is working and what is not, and to gain insights from real-world examples. As Harvard Business Review noted: “To be sure, the MIT report is actually a bit more nuanced than the headline finding suggests: it argues that while individuals are successfully adopting gen AI tools that increase their productivity, such results aren’t measurable at a P&L level, and companies are struggling with enterprise-wide deployments.”
Is this as clean a soundbite? No. But is it a more authentic story and representation of the data? Decidedly, yes.
Avoiding Tropes
Tropes about the demise of society at the hands of the newest generation date as far back as ancient Greece, when philosophers decried that youth had no respect for their elders. In the budding democracies of the seventeenth century, publications fretted over the fickleness of revolution-minded young masses. Today, we see hand-wringing over a new generation that at once co-exists seamlessly with technology and is potentially undermined by it.
A recent study published in the scientific journal The Lancet found that school smartphone bans in isolation don’t lead to significant improvements in student wellbeing. Media outlets immediately picked up the research to urge that smartphone bans be paired with other policies, failing to note that the research had not studied those other factors.
After initial pushback, headlines were changed to: “School phone bans alone do not improve grades or wellbeing, says UK study.”
Again, context and authenticity are key. While a headline claiming that pairing smartphone bans with other measures can improve wellbeing makes an effective hook, it doesn’t accurately represent the research. A better approach, and the one ultimately landed on, was to showcase the nuance of the data within the story.
How Communicators Can Be Authentic to the Data
From these two cases we see that many stories can arise from the same data set. These stories tend not to be purposely counterfactual, but they may be missing context.
This is why researchers and communicators alike need to focus on context and authenticity in data storytelling. Media’s main goal is to serve its readers with facts, but those facts still need to drive viewership or readership.
When communicating data, the soundbite is important, as is alignment with existing trends that are easy for the average person to understand. Surfacing the data that does this work is our job as research professionals and communicators.
When converting data to a story, communicators must always consider:
- Will the public understand the findings? Distilling the simplest story is essential for non-research audiences and will drive retention of information.
- Have we successfully represented the data and accurately portrayed potential findings? Accuracy and context are key to building trust.
- Does the data align with existing preconceptions, or does it refute them? Both can be powerful storytelling tools.





