Navigating the AI Frontier: Five Imperatives for Digital Trust Professionals in 2024
  • Korea IT Times
  • Published 2024.02.06 11:26

By Goh Ser Yoong, CISA, CISM, CGEIT, CRISC, CDPSE, CISSP, MBA.

In today's digitally driven world, the proliferation of artificial intelligence (AI) has undeniably changed the way we live, work, and interact. We regularly witness exciting breakthroughs, such as the recent AI- and supercomputing-aided discovery of a new material that could reduce the use of lithium in batteries. However, this rapid integration of AI technologies has also raised legitimate concerns about ethics, bias, and accountability.

For digital trust professionals, the task of fostering trust and ensuring ethical AI practices is becoming paramount. Here are five key priorities for building trust in AI systems in the coming year:


Awareness of misinformation, deepfakes, and disinformation
In 2024, more than a dozen countries will hold elections, and with them comes a rising risk of misinformation, deepfake fraud, and disinformation campaigns. For example, there have already been cases of fake videos impersonating Singaporean leaders to lure the public into investment scams. Personal vigilance is key to maintaining the integrity of information and democratic processes. Just as phishing and vishing attacks try to trick individuals into visiting fake websites, fabricated media tries to trick them into believing false events. Individuals should verify the authenticity of news or videos they receive against appropriate sources, pay closer attention to videos that seem "too good to be true," and confirm the validity of shocking content by cross-checking multiple sources.

Expect increased regulation of AI use
AI-powered digitization and innovation, from autonomous vehicles to breakthroughs in disease research, hold enormous promise. Pursued recklessly, however, they can put human lives at risk. With the EU taking the lead by reaching agreement on its AI Act in late 2023, with obligations expected to phase in from 2025, digital trust professionals can expect other regions, industries, and countries to follow suit with regulations of their own. It would be prudent to stay abreast of the potential impact of these regulations.

As governments and regulatory bodies seek to address ethical concerns and ensure responsible AI practices, understanding these regulations and compliance requirements is critical. Compliance may require solutions to obtain new certifications or add enhancements, such as watermarking AI-generated video; a minimal sketch of the underlying idea follows. Awareness and foresight of these implications enable digital trust professionals to better advise their organizations on preparing for the evolution of AI regulatory frameworks.
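To make the watermarking idea concrete, here is a minimal Python sketch of the adjacent technique of provenance labeling: recording a tamper-evident fingerprint and an explicit AI-generated disclosure for a media file. The function name make_provenance_record, the file clip.mp4, and the generator label are illustrative assumptions; real deployments would follow an established standard such as C2PA content credentials rather than this ad hoc format.

```python
import hashlib
import json
from datetime import datetime, timezone


def make_provenance_record(path: str, generator: str) -> dict:
    """Build a simple provenance record for an AI-generated media file (sketch only)."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": path,
        "sha256": digest,  # tamper-evident fingerprint of the exact bytes
        "generated_by": generator,  # hypothetical model/tool label
        "created_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,  # the explicit disclosure regulators are moving toward
    }


if __name__ == "__main__":
    # Demo only: create a placeholder file so the sketch runs end to end.
    with open("clip.mp4", "wb") as f:
        f.write(b"placeholder video bytes")
    print(json.dumps(make_provenance_record("clip.mp4", "example-video-model"), indent=2))
```

Publishing such a record alongside the file lets anyone who re-hashes a downloaded copy confirm it is the disclosed, unaltered artifact.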

Assess data privacy risks and transparency in the use of AI
Because AI typically requires large amounts of data, digital trust professionals should also consider the privacy risks and threats associated with excessive collection or sharing of personal data for AI applications. Excessive data sharing can lead to data breaches, identity theft, and unauthorized use of personal information; a data-minimization sketch follows this paragraph.
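As one illustration of reducing that exposure, the sketch below applies data minimization before records reach an AI pipeline: it keeps only the features the use case needs and replaces the raw identifier with a keyed pseudonym. The field names, the allow-list, and the secret key are assumptions for illustration; a real pipeline would add retention limits, access controls, and a documented lawful basis.

```python
import hashlib
import hmac

# Assumptions for illustration: the features the model needs, and a
# secret pseudonymization key held outside the dataset.
ALLOWED_FEATURES = {"age_band", "country", "purchase_count"}
SECRET_KEY = b"rotate-me-regularly"


def minimize(record: dict) -> dict:
    """Keep only needed features and replace the raw ID with a keyed pseudonym."""
    pseudo_id = hmac.new(
        SECRET_KEY, record["user_id"].encode(), hashlib.sha256
    ).hexdigest()[:16]
    features = {k: v for k, v in record.items() if k in ALLOWED_FEATURES}
    return {"pseudo_id": pseudo_id, **features}


raw = {
    "user_id": "u-1001",
    "email": "a@example.com",  # direct identifier: never leaves this boundary
    "age_band": "30-39",
    "country": "SG",
    "purchase_count": 7,
}
print(minimize(raw))  # the email and raw user_id are dropped before sharing
```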

Being vigilant about data privacy protects individuals from potential harm and ensures responsible and ethical use of personal data in AI applications. When using AI, digital trust professionals must advocate for transparency, which is the cornerstone of building trust in AI systems.

Disclosing the use of AI in systems empowers individuals to make informed decisions about engaging with these technologies. Clear communication about AI integration helps individuals understand how their data is being processed and used. This transparency fosters a sense of control and allows users to evaluate the risks and benefits associated with AI-enabled systems.

Understand AI decision-making processes and fairness
Understanding how AI models arrive at decisions is fundamental to building trust. When digital trust professionals demonstrate and advocate for governance of how the algorithms powering AI are built, users gain visibility into the factors that influence a model's outputs, whether decisions or recommendations; one common technique is sketched below. Clarity around the decision-making process ensures consistency and reliability, and when users can understand the reasoning behind AI-generated results, trust in the system's accuracy and fairness is enhanced.
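As one concrete way to gain that visibility, the sketch below uses scikit-learn's permutation importance on a toy model: each feature is shuffled in turn, and the resulting drop in accuracy indicates how much the model relies on it. The synthetic dataset and random-forest model are assumptions for illustration; in practice, reviewers would pair this with techniques suited to the model class, such as SHAP values or counterfactual explanations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data and model, purely for illustration.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# the larger the drop, the more the model's decisions rely on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:+.3f}")
```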

Behind every AI decision is its input: data. Since fairness is another critical aspect of trustworthy AI, digital trust professionals must ensure that the data used to train AI models is representative and free of bias. Preventing unintentional discrimination requires rigorous measures to identify and mitigate bias at every stage of the AI lifecycle, from data collection to model development and deployment; a simple group-level check is sketched below. This makes the task of building fair and transparent AI all the more challenging.
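One simple and widely used check is demographic parity: comparing the model's positive-outcome rate across groups. The sketch below computes it over illustrative data; the groups, decisions, and any acceptable-gap threshold are assumptions, and a real fairness review would examine several metrics across the lifecycle.

```python
from collections import defaultdict

# Illustrative (group, model_decision) pairs; real data would come from
# an audit log of the model's decisions.
predictions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    positives[group] += decision

# Positive-outcome rate per group, and the gap between best and worst.
rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                                  # {'A': 0.75, 'B': 0.25}
print(f"demographic parity gap: {gap:.2f}")   # a large gap warrants investigation
```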

Ensure security and resilience
Ensuring the safety and resilience of AI systems is imperative, especially when the outcome could affect a person's life, as in the case of AI in self-driving cars or other autonomous vehicles. Users must have confidence that these systems will not cause harm and will perform reliably under a variety of circumstances, including unexpected inputs. Robustness and resilience in AI models provide confidence that they will perform as intended without compromising safety.
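A basic way to probe that robustness is perturbation testing: feed the model slightly noisy versions of its inputs and measure how often its predictions flip. The sketch below does this for a toy scikit-learn classifier; the noise scale and the model are assumptions for illustration, and safety-critical systems would need far more rigorous, domain-specific test suites.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy classifier standing in for the system under test.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = model.predict(X)

# Perturb the inputs with small Gaussian noise and count prediction flips.
flip_rates = []
for _ in range(20):  # repeat with fresh noise draws for a stable estimate
    noisy = X + rng.normal(scale=0.05, size=X.shape)
    flip_rates.append(np.mean(model.predict(noisy) != baseline))

print(f"average prediction flip rate under noise: {np.mean(flip_rates):.1%}")
# A review would compare this rate against a tolerance agreed for the use case.
```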

In this regard, it would be worthwhile for digital trust professionals to acquire quality assurance and testing skills. In addition, existing regulations in affected industries should be studied to better understand and anticipate how AI will affect them. The stakes are already visible in the growing number of lawsuits against technology giants such as Meta, Google, and Amazon.

Make 2024 the year to take stock of AI's impact
2024 will be an exciting year for digital trust professionals involved in projects that use AI, both professionally and in their personal lives. However, depending on the evolution of regulations, resources, and other external factors, 2024 could also be a year to take stock and rethink the impact of AI. Building trust in AI requires a multifaceted approach that prioritizes transparency, fairness, security, and effective governance, and digital trust professionals have a critical role to play in ensuring that AI systems are not only technically robust but also ethically sound, aligned with societal values, and respectful of individual privacy rights. The security risks and threats posed by AI have grown sharply in the past year and will only become more challenging as malicious actors weaponize AI and it becomes entangled in geopolitical threats.

By considering the five priorities above, digital trust professionals can be better informed and positioned to help their organizations develop responsible and trustworthy integration of AI, fostering a future where AI enhances human potential while maintaining dignity, fairness, and reliability.

