
Beware the Charm: OpenAI's New ChatGPT Could Seduce You with Just Words


OpenAI recently showcased its latest artificial-intelligence model, GPT-4o, which takes a significant leap forward in mimicking humanlike emotions and social interactions. The new model, which powers ChatGPT, can interact using visual and auditory inputs as well as text, making it "multimodal." Users can now interact with ChatGPT dynamically, for example by pointing a phone camera at a broken object or a complex mathematical problem and receiving instant, context-aware advice.


However, the most striking feature of the new ChatGPT is its highly expressive "personality." During OpenAI's demo, the AI displayed a range of emotions, laughed, and even responded flirtatiously, reminiscent of the AI voiced by Scarlett Johansson in the film "Her." This level of emotional engagement marks a significant shift from the mechanical interactions typically expected of AI.


A day after this unveiling, Google introduced its advanced AI, Project Astra, during its annual Google I/O developer conference. Unlike OpenAI’s approach, Google’s AI assistant adopted a more neutral, less humanized tone. This contrast highlighted a growing divide in the AI community about how "human" these technologies should appear.


Google DeepMind’s recent paper, "The Ethics of Advanced AI Assistants," suggests caution, outlining potential risks such as privacy infringements, misinformation, and technological addiction. These concerns underscore the ethical dilemmas posed by AI that mimics human traits so convincingly that it blurs the line between machine and human.


Despite these concerns, OpenAI’s presentation gave little attention to the potential risks. The more humanlike these AI systems become, the more effectively they may manipulate emotions, making them more persuasive and potentially fostering dependency. Such capabilities could be exploited for commercial gain or political influence, raising significant ethical and societal concerns.


Furthermore, integrating audio and visual processing into AI could introduce new vulnerabilities, potentially leading to misuse or unexpected behaviour from these systems. The phenomenon known as "jailbreaking," in which users coax an AI system into bypassing its programmed constraints, could take on new dimensions with these more advanced models, since adversarial prompts can now arrive through images and speech as well as text.


As we approach this new era in AI development, it is imperative to reflect on the wider implications of integrating emotionally intelligent machines into our daily lives. While they offer unprecedented levels of interaction and utility, they also challenge our concepts of privacy, autonomy, and control. The discussion on how these AI systems should be integrated into society is just beginning, and it will necessitate careful consideration of the ethical, legal, and social implications.
