From AI Tool to Teammate
- Corrie Dark

- Jan 7
- 3 min read
The Risk of AI 'Companions' at Work

There’s a growing habit of using AI for companionship and emotional support. This may sound like a personal trend that doesn’t matter in the business world. But it has a direct impact on how your teams work.
There’s an underlying behavioural habit here that businesses need to be mindful of, because these stories aren’t just about AI as a personal companion. They show that people are placing implicit trust in AI in life, at home, and at work.
If you’re a business deploying tools like Copilot, that means your teams aren’t just using them for detached conversation; they’re having meaningful dialogue. They’re relying on AI to think for them and decide the actions they take. They’re using it to make commercial decisions. And that implicit trust means they’re no longer analysing the answers.
Our terminology highlights the change: we’ve shifted from ‘Googling’ something to ‘asking’ AI. ‘I had a conversation with ChatGPT today’ is rapidly becoming part of the vernacular. We’re no longer using it as a tool to help research a topic; we’re asking questions and implicitly trusting the answers.
It’s an easy trap to fall into when the answers come so easily and our questions are met with genuflecting praise. Every thought, question and idea you have is greeted with “Brilliant!” or “Great idea!” - the kind of sycophantic praise you’d expect from a hype squad, not a tool for critical commercial thinking. I mean, even Batman’s butler called him out occasionally.
It’s also relentlessly confident, so it’s natural for teams to trust the result without question. That makes AI more than a knowledgeable colleague: it becomes a trusted team member who seems to have all the answers.
And that trust is an intentional part of the AI build.
Sam Altman, CEO, OpenAI:
“A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good!... I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions.”
And although it “makes him uneasy”, Large Language Models (LLMs) like ChatGPT are built for this.
(Source: Sam Altman on X)
We've already seen some of the big players blindsided by this trust. In October 2025, Deloitte Australia was forced to refund part of a $290,000 government report after it was discovered to contain AI-generated fabricated references, fake academic citations, and even a quote falsely attributed to a Federal Court judgment. And if the big commercial thinkers are falling into the trust trap, the rest of the business world needs to take note, because in comparison, most businesses have fewer staff, less time, and less critical-thinking capability in-house.
(Source: CFO Dive — Deloitte AI debacle)
McKinsey's 2025 State of AI report confirms this is widespread: 51% of organisations using AI have experienced at least one negative consequence, with nearly one-third of those incidents stemming from AI inaccuracy.
So what do you do about it?
You’ve likely taken into account the use of AI tools for tactical tasks. But have you thought about what happens when employees start relying on AI as a much-loved guru? It’s a trust trap businesses need to train around, and it requires a different type of governance and guidance. It means teaching teams query methods that make the most of the tool while keeping the human thinking element on top. And it means doing all of this in plain, non-technical language so every team member understands.
Here are just three of the questions you need to ask:
- Where is the line between employees using AI tools to aid decisions and letting AI make them?
- If this behaviour became public tomorrow, could you show the guardrails you have in place to mitigate the risks?
- How are you ensuring teams are thinking critically rather than just relying on the tool, and do your policies and procedures reflect that?
Book a free 20-minute Clarity Call to see how we can help you with your AI strategy.


