Hallucinations For Fun and Profit

NYC 2023


About This Session

Large language models (LLMs) have a tenuous grasp on the truth. They tend to "hallucinate", offering confident answers that have little bearing on reality. This has typically been considered a weakness of the technology and a major obstacle to putting LLMs to practical business use. This session will argue the opposite: that hallucination is not only the most powerful characteristic of this technology, but also the one most likely to radically reshape how marketing works. We'll talk about why that's the case and demo some experiments that treat hallucination as a feature of LLMs rather than a bug.