Any technology has two sides, and artificial-intelligence-driven technology is no different. ChatGPT, built on the third generation of OpenAI's Generative Pre-trained Transformer models, is a case in point. While social networks buzz over its human-like responses, hackers have jumped on the bandwagon, using the tool to write malicious software and break into devices.
All you need to know about ChatGPT being a helping hand!
OpenAI, the Microsoft-backed creator of the technology, is currently offering it to the public for free as part of a research preview. This, however, opens a Pandora's box: the potential uses of the technology, both good and evil, are endless. According to Check Point Research (CPR), a cybersecurity firm, Russian fraudsters are already attempting to bypass OpenAI's restrictions in order to exploit ChatGPT for illegal purposes.
On underground hacking forums, hackers are debating ways to circumvent the IP-address, payment-card, and phone-number checks that block access to ChatGPT from Russia. CPR published screenshots of what it observed and issued a warning about hackers' growing desire to use the AI to scale illicit activity: because ChatGPT can help them operate more cheaply and profitably, cybercriminals are increasingly drawn to it.
ChatGPT can be put to both good and bad uses; for example, it can help developers write code. On December 29, a post titled "ChatGPT – Benefits of Malware" surfaced on a well-known underground hacking forum. The thread's creator revealed that he was using ChatGPT to experiment with malware strains and techniques documented in research papers and write-ups about common malware.
A threat actor published a Python script on December 21, stating that it was the "first script he ever developed." After another cybercriminal commented that the malware's structure resembled OpenAI-generated code, the hacker acknowledged that OpenAI had given him a "good helping hand to finish the script with a nice scope."
This may imply that would-be cybercriminals with little or no development experience could use ChatGPT to create malicious tools and advance into full-fledged, technically capable cybercriminals. The use of ChatGPT to spread fake news and disinformation poses another risk. However, OpenAI is already on guard in this regard.
Its researchers have worked with the Stanford Internet Observatory and Georgetown University's Center for Security and Emerging Technology in the US to examine how large language models could be abused for disinformation. As generative language models advance, they open new opportunities in a variety of industries, including law, science, medicine, and education.