
Will people ever stop using AI wrong? 🤖
Ha!
Discover why you can expect AI to be inaccurate across a broad range of business and marketing subjects.
(I’m neither an AI evangelist nor a Luddite.
Nuanced opinions don’t sell on LinkedIn, especially with hyped topics.
But I’m going to keep trying anyway. 🤷‍♂️)
Here’s a simple experiment that shows why it’s risky to treat AI chatbots as founts of knowledge, especially in marketing and business.
I asked GPT-4 and Bard a straightforward question posed by a client:
“What are the leading models and frameworks for mapping customer journeys?” 💡
The twist? I generated answers three times for each.
Results?
• High variability in GPT-4’s responses, with differing list lengths and items.
• Over-indexing on thought leaders with high publishing volume and PR presence.
• The influential Nielsen Norman Group framework appeared only once.
• Bard’s anonymized labels made its lists nearly useless.
• Both occasionally included loosely related concepts (e.g., AIDA).
Why does this matter? 🤔
On a practical level, response variance tells you whether you’re in the AI’s comfort zone or its twilight zone, and how much fact-checking is needed. High variance? Be cautious!
But there’s a bigger issue. Customer journey mapping comes in many flavours and branded versions, so LLMs struggle to surface a clear dominant pattern. The same problem runs through marketing, strategy, and business concepts generally.
Let’s not force AI into corners. It’s just not a great desk research tool at this point. Regenerate and iterate for better clarity. 💡
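If you want to run the regenerate-and-compare test yourself, here’s a minimal sketch. It assumes the OpenAI Python SDK; the model name and the bullet-parsing heuristic are illustrative stand-ins, not part of the original experiment.

```python
# Minimal sketch: regenerate an answer N times and measure how much the
# bulleted lists agree. Assumes the OpenAI Python SDK; model name and
# bullet-parsing heuristic are illustrative, not canonical.
from itertools import combinations
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PROMPT = "What are the leading models and frameworks for mapping customer journeys?"

def ask(n_runs: int = 3) -> list[set[str]]:
    answers = []
    for _ in range(n_runs):
        resp = client.chat.completions.create(
            model="gpt-4",  # assumed model name
            messages=[{"role": "user", "content": PROMPT}],
        )
        text = resp.choices[0].message.content
        # Crude heuristic: treat lines starting with a bullet or digit as list items.
        items = {
            line.lstrip("-*0123456789. ").strip().lower()
            for line in text.splitlines()
            if line.strip().startswith(("-", "*")) or line.strip()[:1].isdigit()
        }
        answers.append(items)
    return answers

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 1.0

runs = ask()
scores = [jaccard(a, b) for a, b in combinations(runs, 2)]
print(f"Mean pairwise overlap: {sum(scores) / len(scores):.2f}")
# Low overlap => high variance => twilight zone; fact-check everything.
```

A low mean overlap across runs is your cue that you’ve left the model’s comfort zone.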
(Version 1.5 of “the sceptic’s guide to ChatGPT” is out)