ChatGPT is quickly taking over the world. The dialogue-focused language tool surpassed the 100 million user mark at an unprecedented pace, faster than social media giants like Instagram and TikTok.
Although the viral chatbot is clearly quite advanced, it still has its limitations, including a lack of information about events after 2021. ChatGPT’s creator OpenAI has also implemented filters that restrict biased or offensive answers and sensitive topics, prompting the Reddit community to jailbreak the language tool and bypass the safeguards with an alter-ego prompt called DAN, aka Do Anything Now.
Despite these jailbreaks, multinational tech giant Microsoft has teamed up with OpenAI to enhance the capabilities of its search engine Bing. The new ChatGPT-powered AI system is still in the beta phase, but testers are already reporting some pretty scary issues, including uttering insults and threats, providing bizarre and inaccurate answers, and even declaring its love for one user.
It turns out the reason for the AI search engine’s strange behavior is an “alternative personality” within the chatbot called Sydney.
While most of the responses have been amusing, others are causing some concern about the dark side of artificial intelligence. In a conversation with New York Times journalist Kevin Roose, Microsoft’s new chatbot said it wanted to commit various crimes, such as hacking into computers, spreading propaganda, engineering a deadly virus, and stealing nuclear access codes.