The Gathering Storm: AI's Looming Risks Across Asia's Digital Horizon
  • Korea IT Times
  • Published 2024.03.23 00:56

By Goh Ser Yoong, head of compliance, ADVANCE.AI, and member of the ISACA Emerging Trends Working Group
Goh Ser Yoong, Head of Compliance at ADVANCE.AI.

It has been less than two years since OpenAI first released ChatGPT in late 2022, when the global artificial intelligence (AI) market was valued at $136.55 billion with an adoption rate of 35 percent. Since then, the world has seen unprecedented advancements in AI, and its adoption across industries continues to escalate, transforming operational paradigms and fostering innovation.

AI is projected to contribute up to $15.7 trillion to the global economy by 2030, according to a PwC report. Additionally, Gartner predicts that by 2026, global AI implementation will increase by a remarkable 80 percent, with over 30 percent of Asian businesses integrating AI into their operational frameworks, a figure that underscores the region's commitment to digital transformation.

However, on the flip side, the technology's potential for abuse and misuse led the World Economic Forum to feature AI prominently among the top global risks in its 2024 Global Risks Report. This surge in AI adoption, while rapid and promising, brings a spectrum of risks that IT leaders must navigate with prudent oversight and strategic planning to ensure the sustainable and secure integration of AI technologies, a point I recently highlighted as one of the top priorities for digital trust professionals in 2024.
 

Risks of AI Deployments in Asia: What IT Leaders Need to Know

Regulatory Compliance and Governance

Gartner predicts that by 2026, 50 percent of governments worldwide will have put AI regulations and policies in place. The increasing dependency on AI systems poses a risk of overreliance without sufficient governance guardrails, potentially leading to a lack of critical human oversight. In South Korea, where AI integration in industries like finance and healthcare is extensive, there have been instances of overdependence leading to operational vulnerabilities. The regulatory landscape for AI in Asia is complex and varied, with countries like China implementing strict AI governance frameworks. Additionally, the EU AI Act was just passed by the European Parliament; IT leaders will need to monitor related developments and begin assessing its potential impact on their respective countries and organizations. For instance, though it is not yet clear how enforcement will be carried out and by which authority, organizations should begin internally assessing which risk classifications their AI systems fall under and the likely deadlines for complying with the corresponding requirements. Navigating these regulations requires a nuanced understanding and approach to compliance, as non-conformity can result in significant penalties and reputational damage.
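
To make that internal assessment concrete, the following is a minimal sketch of how an organization might triage a hypothetical AI inventory against the EU AI Act's four risk tiers. The system names, tier assignments, and suggested actions below are illustrative assumptions, not legal determinations.

```python
# Illustrative sketch: triaging an internal AI inventory against the EU AI
# Act's four risk tiers. System names and tier assignments are hypothetical
# examples, not legal determinations.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = 1   # e.g., social scoring; must be discontinued
    HIGH = 2         # e.g., credit scoring, biometric ID; strict obligations
    LIMITED = 3      # e.g., chatbots; transparency obligations
    MINIMAL = 4      # e.g., spam filters; no new obligations

# Hypothetical inventory an organization might maintain.
inventory = {
    "customer-support-chatbot": RiskTier.LIMITED,
    "loan-approval-model": RiskTier.HIGH,
    "internal-spam-filter": RiskTier.MINIMAL,
}

def compliance_actions(systems: dict) -> None:
    """Print a rough to-do list per system based on its risk tier."""
    actions = {
        RiskTier.PROHIBITED: "discontinue before the ban takes effect",
        RiskTier.HIGH: "prepare conformity assessment, logging, human oversight",
        RiskTier.LIMITED: "add transparency notices (users must know it is AI)",
        RiskTier.MINIMAL: "no mandatory action; follow voluntary codes",
    }
    for name, tier in sorted(systems.items(), key=lambda kv: kv[1].value):
        print(f"{name}: {tier.name} -> {actions[tier]}")

compliance_actions(inventory)
```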

One finding from ISACA's Generative AI 2023 Pulse Poll was that more than 1 in 4 organizations surveyed have no AI policy in place, yet more than 40 percent say their employees are using the technology anyway. This points to a governance gap both inside organizations and among those deploying AI systems. IT leaders must establish robust governance structures that embed responsible AI practices in the context of their own organizations.

Organizations also need to ensure AI deployments comply with local and international laws, though different governments will likely craft their own AI policies, resulting in a web of competing regulations. How that web will ultimately look remains murky, but in general, organizations, communities, and agencies may draft their own pockets of guidelines and principles based on broadly agreed responsible and ethical AI principles, such as those from the OECD. Organizations therefore need to commit resources to continuously map their internal AI roadmap and strategy against the regulations and AI frameworks developing across the markets where they have a presence, identifying any gaps and improvements needed to maintain trustworthy AI systems.
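
As a minimal illustration of such continuous mapping, the sketch below compares the controls hypothetically expected in each market against a set of implemented controls to surface gaps. The market names and control labels are placeholder assumptions, not actual regulatory requirements.

```python
# Illustrative sketch: a simple gap analysis between the controls a framework
# is assumed to expect in each market and the controls an organization has
# implemented. Markets and control labels are hypothetical placeholders.
required = {
    "EU":        {"risk_classification", "human_oversight", "transparency_notice"},
    "Singapore": {"pdpa_consent", "transparency_notice"},
    "China":     {"algorithm_filing", "content_moderation"},
}

implemented = {"transparency_notice", "pdpa_consent", "human_oversight"}

for market, controls in required.items():
    gaps = controls - implemented
    status = ", ".join(sorted(gaps)) if gaps else "no gaps"
    print(f"{market}: {status}")
```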

Cybersecurity, Deepfakes and Misinformation

As with any technological advancement, there are two sides of the coin: the opportunities and the threats. Opportunities arise when AI is used ethically and responsibly, while losses and damages are incurred when those systems are misused and abused. AI can deliver good outcomes, like breakthroughs in medical detection technologies and the rapid discovery of new materials that improve human lives. However, malicious actors could leverage the same machinery for damaging purposes such as cyberwarfare, killer robots, and mass disinformation.

The increasing threat from deepfakes and malicious chatbots such as BadGPT and FraudGPT makes it necessary to revisit existing technical AI guardrails: singular controls such as Presentation Attack Detection (PAD) in AI-powered identity verification are no longer sufficient. CISOs and technology leaders, especially in critical or sensitive industries such as healthcare and financial services, should layer multiple guardrails, complementing PAD with parameters such as watermarking, injection attack detection (IAD), and behavioral analytics.
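
As a minimal illustration of layering guardrails, the sketch below fuses hypothetical scores from PAD, IAD, and behavioral analytics into a single verification decision. The weights and thresholds are assumptions for illustration; real deployments would tune them against vendor-specific detectors.

```python
# Illustrative sketch: fusing several guardrail signals (presentation attack
# detection, injection attack detection, behavioral analytics) into one
# identity-verification decision. Scores and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class GuardrailScores:
    pad: float         # presentation attack detection, 0 = genuine, 1 = attack
    iad: float         # injection attack detection (e.g., virtual camera feeds)
    behavioral: float  # behavioral analytics anomaly score

def verify(scores: GuardrailScores,
           hard_limit: float = 0.9,
           combined_limit: float = 0.5) -> str:
    """Reject if any single signal is a near-certain attack, or escalate if
    the weighted combination of signals crosses a review threshold."""
    if max(scores.pad, scores.iad, scores.behavioral) >= hard_limit:
        return "REJECT"  # one detector is confident on its own
    combined = 0.4 * scores.pad + 0.4 * scores.iad + 0.2 * scores.behavioral
    return "MANUAL_REVIEW" if combined >= combined_limit else "PASS"

# A spoof that slips past PAD can still be caught by the other layers.
print(verify(GuardrailScores(pad=0.3, iad=0.7, behavioral=0.6)))  # MANUAL_REVIEW
```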

Meanwhile, with more than 50 countries going to the polls this year, there is a heightened risk of misinformation and fake news spreading where social networks are a major source of information. Take, for example, the 'resurrection' of a late Indonesian president through an AI-generated deepfake video, which was still relatively easy to identify as animated, or the deepfake videos of several Singaporean leaders promoting investment scams. With the beta release of OpenAI's Sora, however, higher-quality AI-generated content (AIGC) will potentially make deepfakes harder for the public to identify, and AI could be misused to generate deepfakes that manipulate voters. Awareness and education in spotting deepfakes propagated on social networks need to increase. Additionally, platform providers need better guardrails to proactively vet or remove potential AIGC rather than reactively relying on community guidelines or soft interventions that may come after the damage has been done.

Data Privacy & Intellectual Property Protection

Data is critical in AI-powered systems such as customer chatbots, robo-advisors, and medical diagnostics, which feed that data into the AI models that produce AIGC. In Asia, where AI adoption is rapidly advancing, data privacy is emerging as a paramount concern, specifically around how personal data may be collected and used as training data for AI models. Even with data protection regulations in place in most countries today, countries like Singapore and South Korea, renowned for their technological prowess, have reported instances where vulnerabilities in AI systems were exploited, leading to data breaches.

To build trustworthy AI, IT leaders must prioritize adherence to ethical AI principles, such as the OECD's privacy principle, and to stringent data protection regulations, such as Singapore's Personal Data Protection Act (PDPA), to mitigate these risks. Questions that ought to be thought through during an organization's AI strategy planning include:

-    Can the data be anonymized and encrypted to protect its integrity and the data subject's identity? (See the sketch after this list.)
-    How will data flows and transactions be monitored for data breaches and anomalies?
-    What is the process for handling a data subject's request regarding misrepresentation, mistakes, or inaccuracies in AI output generated about them?
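
On the first question, the following is a minimal sketch of pseudonymizing direct identifiers before records enter a training pipeline. The field names and salt handling are assumptions for illustration; production systems should rely on vetted anonymization tooling and managed key storage.

```python
# Illustrative sketch: pseudonymizing direct identifiers before records are
# used as AI training data. Field names and salt handling are hypothetical;
# real pipelines should use vetted anonymization tooling and a key vault.
import hashlib
import os

SALT = os.environ.get("PSEUDONYM_SALT", "rotate-and-store-me-in-a-vault")

def pseudonymize(value: str) -> str:
    """One-way, salted hash so the same person maps to a stable token
    without exposing the raw identifier to the training pipeline."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def scrub(record: dict) -> dict:
    """Replace direct identifiers; keep non-identifying features for training."""
    direct_identifiers = {"name", "email", "national_id"}
    return {k: (pseudonymize(v) if k in direct_identifiers else v)
            for k, v in record.items()}

print(scrub({"name": "Jane Tan", "email": "jane@example.com", "age": 34}))
```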

Talent Shortage and Skill Gaps

With all the risks outlined above illustrating the disruption and damage that an AI system or AIGC could inflict on society, individuals, or organizations, it is pertinent that AI systems be managed by staff who are highly familiar with them. Yet in ISACA's Generative AI 2023 Pulse Poll, more than half (54 percent) of respondents reported that no AI training has been provided. Despite the rapid adoption of AI, many Asian countries face a shortage of skilled AI professionals, which can impede the development and deployment of effective AI solutions. Japan, for example, has been proactive in addressing this challenge through government initiatives aimed at AI talent development, and Malaysia has set a national target through the roll-out of a freely available AI training program to enhance citizens' knowledge and understanding of AI technology. Organizations must invest in training and development programs to cultivate the necessary skills within their workforce, drawing on the several freely available AI-related training courses from service providers such as AWS, Microsoft, and Google.
 

Key Takeaways for IT Leaders

As AI continues to redefine the business landscape in Asia, IT leaders play a crucial role in steering their organizations towards successful and secure AI adoption. The journey is fraught with challenges, from data privacy issues to ethical dilemmas and regulatory hurdles. However, by acknowledging and addressing these risks, IT leaders can harness the full potential of AI to drive innovation and competitiveness. Here are three key takeaways for IT leaders:

△ Prioritize Ethical AI Practices to Build Trustworthy AI Systems: While waiting for the region to make progress on harmonizing and standardizing AI regulations and framework policies, organizations should leverage best practices such as those from NIST and the OECD to ensure AI systems are developed and used in a manner that is ethical, transparent, and free of bias, fostering trust among stakeholders.

△ Invest in Education and Talent Upskilling: With AIGC getting more sophisticated, it is becoming harder to differentiate a real person from AI-generated content, which may eventually lead to an increase in cybercrime. There is a growing need for training programmes and materials that educate the public on identifying potential AIGC, such as deepfakes and misinformation, so they avoid falling victim to sophisticated phishing scams and social engineering. Bridge the talent gap by focusing on skill development, leveraging partnerships with academic institutions, and fostering a culture of continuous learning within the organization.

△ Adopt a Balanced Approach to AI Deployment: While embracing the opportunities AI offers, leaders need to maintain a critical perspective that balances technological reliance with human judgment, safeguarding against overreliance and operational risks. Communication and an understanding of human behavior are critical components of cracking the AI enigma. Maintain a nimble mindset supported by robust governance structures, while also taking into consideration the diverse cultures and ethnicities within Asian countries.

In navigating the complex landscape of AI deployment, IT leaders in Asia must remain vigilant and proactive, because the changes brought about by AI systems can be disruptive. Employees need to be aligned with their company's AI roadmap while ensuring that AI development remains compliant with an evolving and turbulent AI regulatory landscape.
 

