
Opinion: Want an advanced AI assistant? Prepare for them to be all up in your business

Concerns about privacy, consent and memory raised by artificial intelligence will become more urgent as more advanced models enter the market.
Sophisticated AI assistants are becoming commonplace in people's lives.

The growing proliferation of AI-powered chatbots has led to debates around their social roles as friend, companion or work assistant.

And they’re growing increasingly sophisticated. The role-playing platform Character.ai promises personal and creative engagement through conversations with its bot characters. There have also been negative outcomes: Character.ai is currently facing a court case involving its chatbot’s role in a teen’s suicide.

Others, like ChatGPT and Google Gemini, promise improved work efficiency through genAI. But where is this going next? Amid this frenzy, inventors are now developing advanced AI assistants that will be far more socially intuitive and capable of more complex tasks.

Future shock

The shock instigated by OpenAI’s ChatGPT two years ago was not only due to the soaring rate of adoption and the threat to jobs, but also because of the cultural blow it aimed at creative writing and education.

My research explores how the hype surrounding AI affects some people’s ability to make professional judgments about it. That hype feeds on anxiety about the vulnerability of human civilization, fuelling the idea of a future “superintelligence” that might outpace human control.

With US$1.3 trillion in revenue projected for 2032, the financial forecast for genAI drives further hype.

Mainstream media coverage also sensationalizes AI’s creativity, and frames the tech as a threat to human civilization.

Raising the alarm

Scientists all over the world have signalled the urgency of scrutinizing how AI is being implemented and applied.

Geoffrey Hinton, Nobel Prize winner and AI pioneer, left his position at Google over disagreements about the direction of AI development, saying he regretted his part in the technology’s progress. The future threat, however, is much more personal.

Recreating users

The turn in AI underway now is a shift toward self-centred, personalized AI tools that go well beyond current capabilities by recreating what has become a commodity: the self. AI technologies reshape how we perceive ourselves: our personas, thoughts and feelings.

The next wave of AI assistants, a form of AI agents, will not only know their users intimately, but they will be able to act on a user’s behalf or even impersonate them. This idea is far more compelling than those that only serve as assistants writing text, creating video or coding software.

These personalized AI agents will be able to infer a user’s intentions and carry out tasks on their behalf.

Iason Gabriel, senior research scientist at Google DeepMind, and a large team of researchers wrote about the ethical development of advanced AI assistants. Their research sounds the alarm that AI assistants can “influence user beliefs and behaviour,” including through “deception, coercion and exploitation.”

There is still a techno-utopian aspect to AI. In a podcast, Gabriel ruminates that “many of us would like to be plugged into a technology that can take care of a lot of life tasks on our behalf,” also calling it a “thought partner.”

Cultural disruption

This more recent turn in AI disruption will interfere with how we understand ourselves, and as such, we need to anticipate the techno-cultural impact.

Online, people express hyper-real and highly curated versions of themselves across platforms like X, Instagram or LinkedIn. And the way users interact with personal digital assistants like Apple’s Siri or Amazon’s Alexa has socialized us to reimagine our personal lives. These “life narrative” practices play a key role in shaping the next wave of advanced assistants.

In the quantified self movement, users track their lives through apps, wearable technologies and social media platforms. New developments in AI assistants could leverage these same tools for biohacking and self-improvement, yet these emerging tools also raise concerns about how personal data is processed. AI tools carry risks of identity theft, gender and racial discrimination, and various digital divides.

More than assistance

Human-AI assistant interaction can converge with other fields. Digital twin technologies for health draw on user biodata to create a virtual representation of a person’s physiological state, which can help predict future developments. But this could also lead to over-reliance on AI assistants for medical information without oversight from medical professionals.

Other advanced AI assistants will “remember” people’s pasts and infer intentions or make suggestions for future life goals. Serious harms have already been identified when remembering is automated, such as for victims of intimate partner violence.

We need to expand data protections and governance models to address potential privacy harms. This upcoming cultural disruption will require regulating AI. Let’s prepare now for AI’s next cultural turn.

This article is republished from The Conversation. Isabel Pedersen receives funding from the Social Sciences and Humanities Research Council of Canada (SSHRC).