What Happened: OpenAI has disclosed that networks linked to Russia, China, Iran, and Israel are exploiting its AI tools to disseminate disinformation, according to a report by the Financial Times on Thursday.
The San Francisco-based company, known for its ChatGPT chatbot, said it had identified five covert influence operations that used its AI models to generate misleading text and images.
These operations have focused on topics such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, European and U.S. politics, and criticisms of the Chinese government.
OpenAI’s policies explicitly prohibit using its models to deceive or mislead. Even so, these networks used the AI tools to boost their productivity on tasks such as debugging code ...