
Generative AI – the next biggest cyber security threat


As technology continues to evolve at an unprecedented rate, the field of artificial intelligence (AI) has made remarkable advancements. Among its various branches, generative AI has attracted significant attention for its ability to create realistic content, ranging from text to images and even video. ChatGPT is one such system: it uses the Generative Pre-trained Transformer (GPT) family of models to understand and replicate natural language patterns with human-like fluency. Without question, generative AI will create opportunities across all industries, particularly those that depend on large volumes of natural language data.

While generative AI has tremendous potential across industries, it also poses a significant cyber security threat. There are concerns that AI services could be used to identify and exploit vulnerabilities, given their ability to automate code completion, code summarisation, and bug detection. In this article, we explore the emergence of generative AI as the next biggest cyber security threat and discuss its implications.



Understanding Generative AI


Generative AI refers to the subset of AI focused on generating new content based on the patterns and data it has been trained on. The technology uses complex algorithms and neural networks to create content that can be indistinguishable from human output. Models such as OpenAI's GPT-4 have demonstrated impressive capabilities in producing coherent, contextually relevant text. As these models have become more sophisticated, it has become increasingly difficult to distinguish AI-generated content from human-created content.
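
To make this concrete, the sketch below generates text with an off-the-shelf model. It is a minimal sketch assuming the Hugging Face transformers library and the small, publicly available GPT-2 checkpoint, used here purely as a stand-in for larger models such as GPT-4, which is only accessible through an API:

```python
# A minimal sketch, assuming the Hugging Face `transformers` library
# and the small public GPT-2 checkpoint (an illustrative stand-in for
# larger models such as GPT-4, which is only available via API).
from transformers import pipeline

# Downloads the model weights on first run.
generator = pipeline("text-generation", model="gpt2")

prompt = "The biggest cyber security risk this year is"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

Even this small model produces locally fluent prose; larger models apply the same principle at a scale that makes the output far harder to distinguish from human writing.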


Potential Applications of Generative AI


Generative AI has found applications in various fields, including the creative arts, content generation, and customer service automation. However, its potential for misuse poses significant concerns for cyber security experts.

Earlier this year, Microsoft announced VALL-E, an AI system capable of recreating a person's voice from a three-second clip of their speech. This demonstrates how quickly the technology can replicate a key element of a person's digital identity.


Case Study: Misusing Generative AI


Nick Evershed, a journalist for Guardian Australia, ran a test in March in which he used an AI-generated clone of his own voice to gain access to his own Centrelink account through Services Australia's phone self-service line. The result raised concerns among security experts.

The Guardian's investigation suggested the “voiceprint” security system could easily be fooled. In its 2021-22 annual report, Services Australia said voice biometrics had been used to authenticate more than 56,000 calls per day, and described voiceprint as “as secure as fingerprint” because it was “very difficult for someone to mimic your voiceprint and access your personal information”.


"Don't assume you can identify purely from looking at a message whether it's real or fake anymore. You have to be suspicious and think critically about what you're seeing."

- Mark Gorrie, Asia Pacific Managing Director at cyber security software company Gen Digital



Last month, the US Federal Trade Commission warned consumers about fake family-emergency calls that use AI-generated voice clones. The FBI has also issued warnings about virtual kidnapping scams.

These concerns have led experts to suggest a few basic tactics people can use to protect themselves from voice cloning:

  • Call friends or family directly to verify their identity, or agree on a safe word to say over the phone to confirm a real emergency.
  • Be wary of unexpected phone calls, even from people you know, as caller ID numbers can be faked.
  • Be careful if you are asked to share personal identifying information such as your address, birth date or middle name.


The Dark Side of Generative AI


Beyond voice cloning, security experts warn that generative AI lowers the barrier to a range of attack techniques:

  • Deepfake Threat: Deepfakes are manipulated videos or images that use generative AI to superimpose someone's face onto another person's body or create entirely fabricated content. This technology has the potential to deceive individuals, spread disinformation, and undermine trust in media and public figures.
  • Phishing Attacks: Cybercriminals can harness the power of generative AI to create highly sophisticated and personalized phishing attacks. By generating convincing emails or messages that mimic the writing style and tone of trusted individuals or organizations, attackers can trick users into divulging sensitive information or performing malicious actions.
  • Automated Social Engineering: Generative AI can be utilized to automate social engineering attacks, where attackers manipulate individuals into revealing confidential information. AI-powered bots can generate persuasive messages that mimic human interaction, making it increasingly challenging for users to distinguish between real and fake conversations.
  • Malware Generation: Cybercriminals can leverage generative AI to develop new and sophisticated malware strains. By training AI models on existing malware samples, they can generate new variants that are capable of evading traditional security defences.

Research by PA Consulting found that 69% of individuals are afraid of AI and 72% say they don’t know enough about AI to trust it. Overall, this analysis highlights a reluctance to incorporate AI systems into existing processes.


Defence Strategies


To mitigate the cyber security risks posed by generative AI, organizations and individuals must adopt proactive measures:

  • Enhanced Authentication: Implementing robust multi-factor authentication methods can help reduce the success rate of phishing attacks.
  • AI-Powered Security Solutions: Developing AI-driven security systems that can identify and counter generative AI attacks is crucial. These systems can analyse patterns, detect anomalies, and differentiate between genuine and AI-generated content.
  • User Education: Raising awareness among individuals about the existence of generative AI and its potential misuse can empower them to identify and respond appropriately to suspicious content or requests.
  • Fraud Detection: AI can also be used to detect and prevent fraudulent activities. By analysing historical transaction data, user behaviour, and other relevant factors, models can identify suspicious patterns and flag potentially fraudulent transactions (see the sketch after this list).
  • Legislation and Regulation: Governments and regulatory bodies need to address the emerging threats of generative AI through legislation and regulations that promote responsible use and accountability.
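
As a concrete illustration of the fraud-detection point, the sketch below trains an unsupervised anomaly detector on simulated transaction history and flags outliers. It is a minimal sketch assuming scikit-learn and an invented three-feature view of each transaction (amount, hour of day, distance from home); a production system would use far richer features and labelled feedback:

```python
# A minimal sketch of AI-assisted fraud detection, assuming scikit-learn
# and an invented three-feature view of each transaction:
# (amount in dollars, hour of day, distance from home in km).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated history: mostly modest daytime purchases close to home.
history = np.column_stack([
    rng.normal(60, 20, 500),   # amount
    rng.normal(14, 3, 500),    # hour of day
    rng.normal(5, 2, 500),     # km from home
])

# The model learns what "normal" behaviour looks like from history.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history)

# Two incoming transactions: one typical, one clearly anomalous.
incoming = np.array([
    [55.0, 13.0, 4.0],       # routine purchase
    [4200.0, 3.0, 900.0],    # large amount, 3 a.m., far from home
])

for tx, label in zip(incoming, model.predict(incoming)):
    verdict = "FLAG FOR REVIEW" if label == -1 else "ok"
    print(f"amount={tx[0]:>7.2f}  hour={tx[1]:>2.0f}  dist={tx[2]:>4.0f} km -> {verdict}")
```

The design point is that the detector learns a baseline of normal behaviour from history, so it can flag novel fraud patterns without needing labelled examples of every attack in advance.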

One enhanced-authentication technique worth singling out is the double-blind password strategy, in which the credential stored in a password manager is deliberately incomplete: at login, the user appends a memorised unique identifier, so neither the vault nor the user alone holds the full password. With this strategy, it is important to remember that the password manager will only auto-fill its stored portion; submitting the form without the unique identifier will fail. This trades some usability for security, and the method only works in environments where users actually adopt it. If an organization uses a shared password vault, splitting the password is ineffective unless the identifier is also distributed manually to everyone who needs it, which weakens the scheme.
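
The sketch below illustrates the mechanics under stated assumptions: the vault portion, identifier, and function names are all hypothetical, and a real service would store only a salted hash of the full credential rather than compare plaintext:

```python
# A minimal sketch of the double-blind password strategy. All values and
# function names here are hypothetical, chosen purely for illustration;
# a real service would store only a salted hash of the full credential.
import hmac

VAULT_PORTION = "k9#Vq2!xT7pL"   # stored and auto-filled by the password manager
MEMORISED_ID = "cobalt42"        # known only to the user, never written down

def full_password(vault_portion: str, memorised_id: str) -> str:
    """The real credential is the two halves joined together."""
    return vault_portion + memorised_id

def login_attempt(submitted: str, expected: str) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(submitted.encode(), expected.encode())

expected = full_password(VAULT_PORTION, MEMORISED_ID)

# Auto-fill alone fails: the form was submitted without the identifier.
print(login_attempt(VAULT_PORTION, expected))                 # False
# Auto-fill plus the memorised identifier succeeds.
print(login_attempt(VAULT_PORTION + MEMORISED_ID, expected))  # True
```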


Conclusion


Generative AI undoubtedly possesses immense potential to revolutionize various industries. However, it also presents a significant cyber security threat. The ability to generate convincing and deceptive content raises concerns about privacy, data integrity, and trust in the digital world. By understanding the risks associated with generative AI and implementing robust security measures, we can stay ahead of cybercriminals and safeguard our digital infrastructure.

Used defensively, generative AI can also strengthen overall security postures and help mitigate the ever-evolving cyber threats that organizations face today.



About the Author

Ruben George