
Sandpiper Tech Insights: AI and Cybersecurity – Friend or Foe?
July 2025

By the Sandpiper Technology Team
Cybersecurity is a dynamic and volatile topic for organisations, and it has only grown more complex with the rapid rise of AI, which brings benefits to users and cybersecurity firms but also, increasingly, to bad actors. In this evolving landscape, leaders must ask: how can my organisation make sure AI works for us, and not against us?
Cloudflare’s Navigating the New Security Landscape: Asia Pacific Cybersecurity Readiness Survey 2024 revealed that 87% of cybersecurity leaders are concerned about AI increasing the sophistication and severity of data breaches. And in the recent 2025 Cybersecurity Assessment Report from Bitdefender, 51% of IT and cybersecurity experts surveyed say AI-generated threats are their number one concern, while 63% say they have already experienced an AI-related cybersecurity incident in the past year.
Cybersecurity leaders are gearing up to tackle AI-driven risks, exemplified by Cisco’s 2025 Cybersecurity Readiness Index, where nine out of 10 respondents say their company’s cybersecurity budget has increased in the past 12 to 24 months, in many cases to tackle AI-related cybersecurity risks. For enterprise technology leaders across the Asia Pacific region, and beyond, the influence and impact of AI is hard to ignore or avoid. The question is no longer whether AI will impact cybersecurity, but how to harness its potential while mitigating its risks.
AI as Cybersecurity Friend
Organisations are investing heavily in AI, with a report from February this year by IDC predicting that AI spending will grow at 1.7x the rate of overall digital technology budget spend over the next three years. These investments are being funnelled towards diverse organisational needs.
One such area of focus for AI investment is cybersecurity. In a 2025 study by British cybersecurity company Darktrace, 95% of the 1,500 cybersecurity professionals surveyed say that AI-powered cybersecurity solutions significantly improve the speed and efficiency of prevention, detection, response, and recovery. Advanced machine learning and predictive analytics enable systems to detect anomalies, such as unusual user behaviour or unexpected network traffic, with speed and precision. AI also strengthens proactive defences: predictive models identify likely attack vectors, allowing organisations to bolster their defences before threats materialise.
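To make the idea of anomaly detection concrete, the sketch below shows the simplest statistical version of the technique: flagging values that sit far from a baseline. This is an illustrative toy using made-up traffic numbers and a standard z-score threshold, not the method of any product named in this article; real AI-powered tools layer far more sophisticated models on the same underlying idea.

```python
# Toy statistical anomaly detection: flag values far from the baseline.
# All data and thresholds here are illustrative assumptions.
from statistics import mean, stdev

def find_anomalies(samples, threshold=3.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Hourly outbound traffic (MB) for one host; the spike mimics data exfiltration.
traffic = [12, 14, 11, 13, 12, 15, 13, 900, 14, 12, 11, 13]
print(find_anomalies(traffic))  # only the 900 MB hour is flagged
```

In practice, the value of machine learning over a fixed rule like this is that the "baseline" is learned per user, per host, and per time of day, so subtler deviations can be caught without drowning analysts in false positives.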
Beyond detection, AI-driven tools automate repetitive tasks like malware detection and vulnerability scanning. This frees up cybersecurity teams to focus on strategy and critical decision-making – a game changer, especially as the industry continues to face a perennial talent crunch. ISC2, the world’s leading member association for cybersecurity professionals, estimated the 2024 cybersecurity talent gap in Asia Pacific at upwards of 3.3 million, with no signs of the situation improving any time soon. For enterprises managing sprawling IT infrastructures, AI’s ability to scale across systems and geographies is therefore invaluable.
AI as Cybersecurity Foe
The same capabilities that make AI a powerful ally can also make it a formidable adversary. Cloudflare’s 2024 study revealed that respondents were concerned about AI increasing the sophistication and severity of multiple types of cyber threats, and Cisco’s 2025 study found that 86% of business leaders with cybersecurity responsibilities reported at least one AI-related cyber incident in the past 12 months. In fact, there is already hard evidence of AI being used to perpetrate deepfake fraud. Group-IB, a leading creator of cybersecurity technologies to investigate, prevent, and fight digital crime globally, found that in August 2024 AI was used to create highly realistic deepfakes that bypassed the biometric security protocols of an Indonesian financial institution. In another case, an AI deepfake was used to steal $25 million from UK engineering firm Arup.
There are also regulatory, compliance, and ethical concerns to contend with, such as algorithmic bias and data privacy risks when training models. Effective AI governance is more crucial than ever, yet, in Cisco’s index, under half (45%) of respondents said they feel their company has the resources and expertise to conduct comprehensive AI security assessments.
The lack of talent with expertise in AI governance was also cited as an impediment to improving AI readiness when it came to ethics and governance. Cloudflare’s study found that 48% of respondents reported spending more than 10% of their work week keeping pace with industry regulatory and certification requirements.
Navigating the Opportunities and Risks
What is abundantly clear is that AI is here to stay. How, then, can organisations maximise AI’s potential while managing its pitfalls? A balanced approach is essential:
- Human-AI Collaboration: AI excels at processing data, but human expertise is crucial for interpretation and judgement. Ensuring cybersecurity teams are equipped to work with AI tools is vital.
- Ethical AI Practices: Transparency and fairness in AI deployment build trust. Aligning systems with international and local regulations mitigates risks.
- Continuous Learning: Both AI systems and cybersecurity teams must evolve to counter emerging threats. Regular updates, training, and simulations are critical.
- Resilience Against AI-Powered Threats: Organisations should bolster their defences with advanced monitoring and foster security awareness across all levels of the organisation.
The Case for Leadership from Asia Pacific
Asia Pacific is in an exciting position when it comes to AI proliferation. Investment continues to pour into the technology, and innovative startups are mushrooming across the ecosystem. This presents unique opportunities and challenges for AI-driven cybersecurity. Rapid digital transformation and escalating cyber threats make the integration of AI into security strategies urgent. At the same time, uneven technological maturity, regulatory variation, and disparities in access to talent and resources all call for tailored solutions and steady counsel.
Collaborative efforts among governments, private sector enterprises, and academia can address these disparities and build robust regional ecosystems to responsibly proliferate AI technology, while enhancing regulatory frameworks, developing talent pipelines, and ensuring AI is used in a safe and ethical manner.
In this landscape, AI is neither friend nor foe for cybersecurity. Instead, it is a tool; one that, used thoughtfully, can redefine how organisations defend themselves against a complex and ever-changing threat landscape. By fostering a balance of innovation, vigilance, and ethical governance, enterprises can ensure AI remains an ally in their cybersecurity strategies.