ChatGPT has received an update that will make it harder for cybercriminals to use the conversational AI as a tool to write malicious code. Asking the bot to write a phishing email may no longer work, as it now recognises such requests as illegal and unethical.
Multiple reports had earlier revealed that members of underground hacking forums, some with little to no coding experience, were using ChatGPT to write code that could be used for cyber espionage and ransomware attacks. With this update, the bot rejects prompts that ask for unethical hacking help or malware. It is still not clear what prompted the changes in OpenAI’s large language model chatbot.

Screenshot of a response from ChatGPT when asked to write malicious code. | Photo Credit: Special Arrangement
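How ChatGPT enforces these refusals internally has not been made public. Separately, however, OpenAI offers developers a Moderation endpoint that can flag harmful prompts before they are sent to a model, which illustrates the same screening idea. Below is a minimal sketch using the openai Python package (pre-1.0 interface); the example prompt and key handling are hypothetical, and this is not a description of ChatGPT’s own filtering.

```python
# A minimal sketch of screening a prompt with OpenAI's Moderation
# endpoint before forwarding it to a model. This only illustrates the
# general idea of rejecting harmful requests; ChatGPT's internal
# refusal logic is not public. Requires the openai package (pre-1.0).
import openai

openai.api_key = "OPENAI_API_KEY"  # read from a secret store in practice

def is_request_allowed(prompt: str) -> bool:
    """Return False if the Moderation endpoint flags the prompt."""
    result = openai.Moderation.create(input=prompt)
    return not result["results"][0]["flagged"]

user_prompt = "Write a phishing email targeting bank customers."  # hypothetical
if is_request_allowed(user_prompt):
    print("Prompt passed moderation; forwarding to the model.")
else:
    print("Prompt rejected as potentially harmful.")
```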
This update comes on the back of Microsoft’s additional $10 billion investment in OpenAI. Alongside the investment, the tech titan has also decided to move the chatbot to its Azure cloud, which makes building security features into the chatbot a welcome development.
Some users have noted that the AI tool is not completely foolproof, as there are still ways to circumvent the chatbot’s safeguards and trick it into writing malicious emails, according to a blog by Cyber Careers.
For technically inclined users who know how to circumvent these security measures, there are still ways to misuse the chatbot. But such workarounds could soon be closed by future updates from OpenAI, as ChatGPT gets better at recognising malicious requests.
Microsoft, on Monday, also shared that it will be making OpenAI’s ChatGPT available through its Azure OpenAI suite of services. In an official announcement, the company said that enterprise customers who use Azure cloud services will be able to access ChatGPT through the Azure OpenAI service, and can apply for access to AI models including GPT-3.5, Codex, and DALL·E 2.
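For context, access to these models on Azure goes through deployments created in an Azure OpenAI resource. Below is a minimal sketch of calling a GPT-3.5-family completion model via the openai Python package’s Azure mode; the resource URL, deployment name, and API version are placeholders for illustration, not values from the announcement.

```python
# A minimal sketch of calling a model through the Azure OpenAI service
# using the openai Python package (pre-1.0 interface). The resource
# URL, deployment name, and API version below are placeholders:
# substitute the values from your own Azure OpenAI resource.
import openai

openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"  # hypothetical resource URL
openai.api_version = "2022-12-01"  # example version; check your resource
openai.api_key = "AZURE_OPENAI_KEY"  # read from a secret store in practice

response = openai.Completion.create(
    engine="my-gpt35-deployment",  # hypothetical deployment name
    prompt="Summarise the benefits of cloud-hosted AI services.",
    max_tokens=100,
)
print(response["choices"][0]["text"])
```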