June 2023
In March 2023, Sandpiper launched the first-ever global report on the AI opportunities and risks for the communications and public affairs industries. The study draws on a survey of more than 400 communications professionals across five continents.
86% see AI as an opportunity
Those at the mid-point of their careers (aged 35-44) are the most positive.
61% use generative AI in their work today
Tools are mainly used to support desk research, analyse data, generate creative ideas, and create content for social media posts and articles.
89% expect generative AI to become part of ‘business as usual’ within six months
More than one in four expect daily usage within six months, and nearly half of respondents expect it within two years.
85% are concerned about potential legal and ethical risks
Only 11% of companies have an AI policy
Despite governance concerns, only one in three plans to have a policy or guideline in place within 12 months.
Industry leaders lack knowledge and are perceived as not moving fast enough
Half of respondents believe their company is not doing enough to educate its employees. Over 7 in 10 say they would like more training.
Australia
Australia set out its national “AI Action Plan” in 2021. It details the country’s vision to become a leading digital economy and society by 2030 through direct government measures, programmes, and incentives that drive the growth of the technology, digital skills, and foundational policy setting.
China
In 2017, China’s State Council issued the “New Generation Artificial Intelligence Development Plan” (AIDP), an overarching strategy for the AI industry that set out an ambitious roadmap with targets through 2030.
The AIDP declares that by 2025, AI will become “the main driving force for China’s industrial upgrading and economic transformation”, and that by 2030, China’s AI industry will reach a world-leading level of competitiveness, with the scale of its core AI industry exceeding RMB 1 trillion.
The European Union
The EU AI Act, overwhelmingly approved by the European Parliament in June, will be the world’s first comprehensive set of rules governing AI. The rules follow a risk-based approach and establish obligations for providers depending on the level of risk an AI system is forecast to generate. High-risk AI areas include threats to people’s health, safety, fundamental human rights, or the environment.
India
In April 2023, the Indian government’s Ministry of Electronics and Information Technology said in a written response to a parliamentary inquiry that it would not introduce any immediate regulations on AI, choosing instead to take a light-touch approach to nurture the growth of the technology.
Japan
Japan launched its AI Strategy in 2019, setting out four strategic objectives: (i) human resource development, (ii) strengthening the competitiveness of its industries, (iii) realising a sustainable society embracing diversity, and (iv) strengthening research and development. However, citing a lack of progress in nurturing the application of the technology, the government updated its strategy in 2021 and 2022.
Malaysia
Malaysia launched its “Malaysian National AI Roadmap for 2021-2025” to strengthen national competitiveness and encourage innovation that drives growth in the sector. The roadmap also incorporates principles for the responsible use of AI, including Fairness; Reliability, Safety and Control; Privacy and Security; Inclusiveness; Transparency; Accountability; and the Pursuit of Human Benefit and Happiness.
Singapore
Singapore has a broad, overarching AI Strategy, released in 2019, that is specifically designed to drive responsible AI adoption and promote its deployment across public and private organisations. The government also set up an “Advisory Council on the Ethical Use of AI and Data” in 2018 and developed a “Model AI Governance Framework” in 2019 to advise decision-makers on ethical, legal, and policy issues arising from the commercial development of AI.
South Korea
On the regulatory front, earlier this year the South Korean government proposed its “Law on Nurturing the AI Industry and Establishing a Trust Basis”, which places regulatory requirements on “high-risk” systems comparable to those of the EU AI Act. If the legislation passes, it is likely to be the first of its kind, as the European Union’s implementation is not expected before 2024.
Thailand
Thailand’s Cabinet approved its draft national AI strategy, the “Thailand National AI Strategy and Action Plan (2022-2027)”, in July 2022, under a vision of promoting AI development and application to “enhance the economy and quality of life within 2027”. The government body primarily responsible for the strategy is the National Electronics and Computer Technology Centre (NECTEC). The first phase of implementation, covering 2022-23, focuses on three sectors: government services, food and agriculture, and healthcare and medical services.
The United States
The US has largely stayed away from creating laws that shape and regulate industry use of AI, taking a light-touch approach to growing applications. In 2022, the Biden administration issued voluntary guidance on AI through its Blueprint for an AI Bill of Rights, encouraging federal agencies to move AI principles into practice. It also issued an Executive Order focused on equity and taking action against “algorithmic discrimination”.
Vietnam
Vietnam’s Ministry of Information and Communications (‘MIC’) requested, on 20 April 2023, public comments on the draft National Standard on Artificial Intelligence and Big Data. Notably, the draft Standard on AI sets out the aim of establishing quality assurance and transparency for AI modules, proposing quality requirements for the safety, privacy, and ethics of AI. It also outlines that the first step in evaluating AI is to determine whether the AI module in question presents a high or low risk, along with the steps that should be considered when undertaking a risk assessment of AI modules.