Generative AI, surge in phishing attacks amid the emergence of ChatGPT
  • Yeon Choul-woong
  • Approved 2024.03.06 02:40

Image Source: ETRI

A Double-Edged Sword for Cybersecurity

Generative AI is one of the most exciting and impactful technologies of our time. It can potentially revolutionize fields as diverse as art, entertainment, education, and healthcare by creating novel and realistic content from scratch. However, as generative AI becomes more accessible and powerful, it also poses serious cybersecurity challenges. We explore how generative AI is being used for both good and bad in the cyber domain, and what can be done to mitigate the risks.

The Dark Side of Generative AI 
One of the most prominent applications of generative AI is the creation of natural language content, such as text, speech, and dialog. Large language models (LLMs), such as OpenAI's GPT-4 and Google DeepMind's Gemini, are capable of generating coherent and persuasive text on any topic, given a few words or sentences as input. These models can also process images and video, enabling multimodal content generation.

While these models can be used for beneficial purposes such as education, entertainment, and journalism, they can also be exploited by malicious actors for nefarious purposes such as phishing, fraud, propaganda, and cyberattacks. According to a report by Stocklytics, email phishing attacks have increased more than tenfold since the launch of ChatGPT, OpenAI's web-based conversational AI chatbot, with some organizations reporting increases as high as 1,265%.

Phishing is a form of social engineering that involves sending deceptive emails, texts, or social media messages that appear to come from legitimate sources, such as banks, government agencies, or trusted contacts. The goal is to trick recipients into clicking on malicious links, downloading malicious attachments, or disclosing sensitive information such as usernames, passwords, credit card numbers, or personal identification numbers. Phishing can lead to financial loss, identity theft, data breaches, or compromised accounts.

Generative AI makes phishing more effective and scalable by creating realistic and personalized messages that are tailored to the target's profile, preferences, and behavior. For example, a phishing email can mimic the writing style and tone of a colleague, friend, or family member, and include relevant details such as names, dates, events, or locations to add credibility and urgency to the message. Alternatively, a phishing text or call can use a synthetic voice that sounds like a trusted person or authority and engage in a natural conversation with the target, using emotional cues such as fear, anger, or sympathy to manipulate the target's response.

Generative AI can also create fake images and videos, known as deepfakes, that can be used to impersonate or defame individuals, spread misinformation or propaganda, or blackmail or extort victims. For example, a deepfake video can show a politician, celebrity, or business leader saying or doing something they never did to damage their reputation or sway public opinion. Alternatively, a deepfake image can show a person's face or body in a compromising or inappropriate situation and be used to coerce or threaten that person.

The Bright Side of Generative AI 

Despite the dangers of generative AI, it can also be a powerful ally for cybersecurity. Generative AI can help detect, prevent, and respond to cyber threats by leveraging its ability to analyze, synthesize, and optimize data. For example, generative AI can:

Detect phishing and deepfake attacks by using natural language understanding, computer vision, and anomaly detection techniques to identify inconsistencies, errors, or anomalies in content, such as grammar, spelling, punctuation, style, tone, sentiment, context, logic, or authenticity.

Prevent phishing and deepfake attacks by using natural language generation, image and video synthesis, and adversarial learning techniques to create realistic and varied content that can be used to train and test security systems, such as spam filters, antivirus software, or biometric authentication systems, and improve their robustness and accuracy.

Respond to phishing and deepfake attacks by using natural language dialogue, image and video processing, and reinforcement learning techniques to create interactive and adaptive content that can be used to communicate and collaborate with victims, such as alerting, guiding, or offering assistance, and mitigate the impact and damage of attacks.
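As a toy illustration of the detection idea described above, the sketch below scores an email using a few handcrafted textual signals: urgency cues, requests for credentials, and links whose visible text names a different domain than the one they actually point to. All word lists, weights, and names here are hypothetical; real detectors combine far richer features with trained models rather than fixed rules.

```python
import re

# Hypothetical signal lists -- real systems learn these from data.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expires"}
CREDENTIAL_WORDS = {"password", "ssn", "credit card", "login"}

def link_mismatches(html_body: str) -> int:
    """Count anchors whose visible text shows one domain but whose href
    points to a different one -- a classic phishing trick."""
    count = 0
    anchors = re.findall(r'<a href="https?://([^/"]+)[^"]*">([^<]*)</a>', html_body)
    for href_domain, visible_text in anchors:
        shown = re.search(r'([\w.-]+\.\w{2,})', visible_text)
        if shown and shown.group(1).lower() not in href_domain.lower():
            count += 1
    return count

def phishing_score(subject: str, body: str) -> int:
    """Return a crude suspicion score; higher means more suspicious."""
    text = (subject + " " + body).lower()
    score = sum(w in text for w in URGENCY_WORDS)       # urgency cues
    score += 2 * sum(w in text for w in CREDENTIAL_WORDS)  # credential bait
    score += 3 * link_mismatches(body)                  # deceptive links
    return score

email = ('Your account will be suspended! Verify your password now: '
         '<a href="http://evil.example.net/login">mybank.com</a>')
print(phishing_score("Urgent notice", email))  # flags the message with a high score
```

A benign note such as "See you at noon" scores zero under these rules, while the example above trips all three signal types; the gap between the two is the kind of margin a statistical classifier would be trained to widen.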

Generative AI can also help improve cybersecurity education and awareness by creating engaging and immersive content that can be used to train and test users, such as employees, customers, or citizens, and improve their knowledge and skills. For example, generative AI can:

Create realistic and personalized scenarios such as emails, texts, phone calls, images, or videos that simulate phishing or deepfake attacks and challenge users to identify and avoid them, and provide feedback and recommendations.

Create interactive and adaptive games, such as quizzes, puzzles, or simulations, that educate users about cybersecurity concepts, principles, and best practices; assess their understanding and performance; and provide rewards and incentives.

Create dynamic and varied stories, such as articles, podcasts, or documentaries, that inform users about cybersecurity trends, developments, and innovations, and inspire them to learn more and take action.

The Future of Generative AI and Cybersecurity

Generative AI is a double-edged sword for cybersecurity. It can be used for both good and evil, and the balance between the two will depend on how we develop, deploy, and regulate this technology. As generative AI becomes more accessible and powerful, we need to ensure that it is used responsibly and ethically and that it aligns with our values and goals. We must also be aware of the risks and challenges it poses and be prepared to deal with them effectively and efficiently.

To achieve this, we need to foster collaboration and coordination among different stakeholders, such as researchers, developers, regulators, policymakers, educators, and users, and establish standards and guidelines for the design, evaluation, and governance of generative AI systems. We also need to invest in research and innovation and explore new ways to use generative AI for cybersecurity and address the technical and societal issues that arise from its use.

Generative AI is a game changer for cybersecurity. It can be a formidable threat or a valuable asset, depending on how we use it.



  • Publisher and Editor in Chief: Monica Younsoo Chung | Chief Editorial Writer: Hyoung Joong Kim | Editor: Yeon Jin Jung
  • Juvenile Protection Manager: Choul Woong Yeon
  • Masthead: Korea IT Times. Copyright(C) Korea IT Times, All rights reserved.