
AI tools such as ChatGPT, Copilot, Bard, and DALL-E offer significant potential for positive applications, such as helping doctors reach diagnoses and widening access to expertise. However, there is concern that individuals with malicious intent could misuse these technologies, putting the general public at risk.

Criminals using AI to run scams

Criminals are already leveraging AI chatbots for hacking and scams. The UK government’s Generative AI Framework and the National Cyber Security Centre both highlight the risks associated with AI. ChatGPT and DALL-E, for instance, are generative AI systems that criminals could exploit to craft convincing scam and phishing messages, because they can produce tailored content from simple prompts.

A particular risk is that scammers can feed basic details such as a name, gender, and job title into large language models (LLMs) like ChatGPT to generate tailored phishing messages. Despite preventive measures built into these systems, this remains possible. LLMs also enable phishing scams at scale: analysis of underground hacking communities has revealed criminals exploiting them for fraud, information theft, and even creating ransomware.

Malicious variants of large language models have also emerged, such as WormGPT and FraudGPT, which can create malware, exploit security vulnerabilities, and facilitate scams, hacking, and the compromise of electronic devices. Love-GPT is a newer variant used in romance scams, generating fake dating profiles to deceive users on platforms such as Tinder and Bumble.

Concerns about AI platforms’ ability to protect personal data

There are also concerns about privacy and trust when using AI platforms like ChatGPT and Copilot, as they may inadvertently expose personal or corporate confidential information. The risk is compounded by the fact that LLM providers may incorporate user inputs into training datasets, and a compromised system could expose sensitive data.

Researchers have uncovered vulnerabilities in ChatGPT that allow sensitive data to be leaked with simple prompts. These privacy risks have prompted companies such as Apple and Amazon to restrict its use internally. Users should exercise caution: verify information produced by AI tools, refrain from sharing sensitive data, and seek employer approval before using AI in work settings. Such precautions are crucial as the technology, and the threats around it, continue to evolve.
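One practical way to follow the advice above is to strip obvious personal identifiers from text before it is ever submitted to a hosted AI service. The sketch below is purely illustrative: the patterns, labels, and `redact` helper are our own assumptions, not part of any particular tool, and real-world redaction needs far more thorough tooling than a few regular expressions.

```python
import re

# Illustrative patterns only (assumed, not from any vendor): real personal
# data takes many more forms than these simple regexes can catch.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w[\w.]*"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a [REDACTED-<label>] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "Summarise this: contact jane.doe@example.com or call +44 7700 900123."
print(redact(prompt))
```

A scrubbing step like this reduces what a third-party service ever sees, but it is a mitigation, not a guarantee; the safest course remains keeping genuinely sensitive material out of AI prompts altogether.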