
Will people ever stop using AI wrong? 🤔
Ha!
Discover why you can expect AI to be inaccurate across a broad range of business and marketing subjects.
(I'm neither an AI evangelist nor a Luddite.
Nuanced opinions don't sell on LinkedIn, especially on hyped topics.
But I'm going to keep trying anyway. 🤷)
Here's a simple experiment that shows why it's risky to treat AI chatbots as founts of knowledge, especially in marketing and business.
I asked GPT-4 and Bard a straightforward question posed by a client:
"What are the leading models and frameworks for mapping customer journeys?" 💡
The twist? I generated answers three times for each.
Results?
• High variability in GPT-4's responses, with differing list lengths and items.
• Over-indexing on thought leaders with high publishing volume / PR.
• The influential Nielsen Norman Group framework appeared only once.
• Bard's anonymized labels made its lists nearly useless.
• Both occasionally included loosely related concepts (e.g., AIDA).
Why does this matter? 🤔
On a practical level, result variance signals whether you're in the AI's comfort zone or its twilight zone, and how much fact-checking is needed. High variance? Be cautious!
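The variance check is easy to make concrete: regenerate the answer a few times, pull out the items each run lists, and measure how much the lists overlap. A minimal sketch in Python, using pairwise Jaccard similarity; the three framework lists below are hypothetical placeholders standing in for real model output:

```python
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of listed items (1.0 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0


def mean_overlap(runs: list[set]) -> float:
    """Average pairwise Jaccard similarity across regenerated answers."""
    pairs = list(combinations(runs, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


# Hypothetical items extracted from three regenerations (not real output).
runs = [
    {"purchase funnel", "consumer decision journey", "NN/g journey map"},
    {"purchase funnel", "consumer decision journey"},
    {"purchase funnel", "service blueprint"},
]

print(f"mean pairwise overlap: {mean_overlap(runs):.2f}")
```

A score near 1.0 suggests the model is in its comfort zone; a low score like this one is the cue to start fact-checking.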
But there's a bigger issue. With so many flavours and branded versions of customer journey mapping, LLMs struggle to find clear dominant patterns. The same problem crops up across marketing, strategy, and business concepts.
Let's not force AI into corners. It's just not a great desk-research tool at this point. Regenerate and iterate for better clarity. 💡
(Version 1.5 of “the sceptic’s guide to ChatGPT” is out)