
About a week ago, I experienced the most incredible AI demo since my first encounter with ChatGPT. Trust me, this is no exaggeration.
This post isn’t just about a cool tool; it also touches on a crucial, unresolved issue in Generative AI: how to handle the massive, complex data sets that are key to unlocking deeper value in many business and research areas.
Enter NotebookLM
I recently had a strategy brainstorming session with Angus Grundy. (Angus is developing highly effective strategic frameworks; if you’re looking to strategise, plan, or think things through in any business or personal context, reach out to him. Insights guaranteed.) We recorded our 90-minute chat using Zoom’s AI assistant, and Angus later fed the transcript into Google’s experimental tool, NotebookLM.
On some levels, NotebookLM is a game-changer. You can upload extensive data sets (up to 50 files of 500,000 words each; that’s more than 40 copies of War and Peace) and interact with a chatbot in elaborate ways. You can ask highly specific questions and generate timelines, study guides, FAQs… all cross-referenced to your sources. You can then create notes from the chat results or add your own, shift focus between sources, manipulate it all, and more. You can even create, at the push of a button, an “audio overview”, in which two AI voices hold a podcast-style discussion of your content. It’s as strange as it is delightful. But is it actually useful?

