July 2023
by Mark Johnson, Associate Director at Sandpiper, based in Singapore. Mark leads Sandpiper’s Technology Communications practice in the Asia-Pacific region. He has more than 15 years’ experience advising and positioning business leaders and senior politicians across multiple regions, and a proven track record of adapting campaign methodology to communications and public affairs projects.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” – Statement of AI Risk from the Center for AI Safety.
This statement was signed by leading scientists, academics, and businesspeople, most notably Sam Altman, CEO of OpenAI, the company that gave us ChatGPT. It came not long after Elon Musk caused a stir by co-signing an open letter asking artificial intelligence labs to pause the training of their most powerful systems for six months, so that we could have time to fully understand the implications of artificial intelligence (AI) advancements (or, as some argued, so that he could catch up with the competition).
Both drew attention to a growing sense of dread about AI taking hold in the popular consciousness. It is as if a future combining the worst of all our favourite science fiction were inevitable. Yet does this chime with the reality of how we have been adopting AI? Is a moral panic taking hold in policy circles? And is it based on a misunderstanding of what AI does and what it can achieve?
AI has been reshaping the foundations of the technology that drives modern societies for over a decade. In our day-to-day lives, the breadth of AI use cases is extensive and growing by the day, though until recently most people were barely aware of it. From unlocking your phone with your face or fingerprint to autonomous cars, AI is embedded throughout. Whenever you use Spotify or settle down to find something to watch on Netflix, guess what? You are deep in the world of AI.
Despite this, there has been a renewed focus on AI in both the media and policy circles, driven primarily by the explosion of interest in generative AI following OpenAI’s release of ChatGPT and the wave of generative AI platforms that has followed. For policymakers and businesses, this rapid evolution has created uncertainty at the government level over the risks and rewards of AI technology.
This uncertainty over how to define and regulate AI is underlined by the fragmented approaches taken at the national and regional levels.
Much has been made of the EU’s steps to implement the EU AI Act. With the draft legislation approved by the European Parliament, the details must now be agreed with the European Commission and member states. Once finalised, the EU will have the building blocks of a risk-based approach that establishes obligations for providers according to the level of risk an AI system is forecast to pose. The UK, along with the US, is taking a far more hands-off, wait-and-see approach, prizing competitiveness above all else. At the other end of the spectrum is China, which has taken a strong stance against some of the more harmful elements attributed to AI, and to generative AI in particular.
External bodies are getting in on the act too. The OECD recently announced that it would be updating its guidelines on the use of generative AI. Whilst the details of the changes remain to be decided, they are expected to reflect the new reality member states are facing concerning advancements in AI.
Looking closer at APAC, the piecemeal, incremental approach taken by different governments becomes clear.
Singapore, often a bellwether for a progressive approach to regulating technology, released a broad, overarching AI Strategy in 2019 specifically designed to drive responsible AI adoption. The government also established an “Advisory Council on the Ethical Use of AI and Data” in 2018 and developed a “Model AI Governance Framework” in 2019 to advise decision-makers on the ethical, legal, and policy issues arising from the commercial development of AI.
Taking perhaps the quickest steps toward comprehensive AI legislation, the South Korean government has proposed its “Law on Nurturing the AI Industry and Establishing a Trust Basis”, which places regulatory requirements on “high-risk” systems comparable to those in the EU AI Act. If the legislation becomes a reality, it is likely to be the first of its kind, with the European Union’s implementation not expected before 2024.
In Vietnam, the Ministry of Information and Communications (‘MIC’) requested public comments on 20 April 2023 on the draft National Standard on Artificial Intelligence and Big Data, signalling its intent to push through AI regulations soon.
Other countries, including India, Australia, Thailand, Malaysia, and Japan, are at different stages of their thinking on AI legislation. It is expected that further draft regulations in the region will be forthcoming.
As policy practitioners and business leaders confront this regulatory reality, many will ask what part they can play in shaping fair and open AI governance frameworks. Frameworks that mitigate the most harmful use cases while encouraging innovation will require a diversity of voices from business and the public sphere to shape AI impact assessments, ensuring these are fit for purpose and proportional to the risks posed by the technology. Where regulatory frameworks already exist, such as those covering the dissemination of fake news, business leaders need to be prominent in informing and educating, so that regulators do not reinvent the wheel with AI regulations. Finally, they need to be active in the areas where they want to shape AI policy, collaborating with other stakeholders: like-minded companies, industry associations, and government agencies.
Artificial intelligence presents an enormous opportunity, provided AI governance treats it as such. Cutting through the persuasive visions of doomsday and letting sensible heads prevail is a challenge that can, and should, be surmounted.