Hallucinating Machines: Why Artificial Intelligence Sometimes Gets It Wrong


By Northeastern University London

AI can write with brilliance, but it can also confidently state things that simply aren't true. This lecture explores why.

Location

Online

Highlights

  • 1 hour
  • Online

About this event


AI can write with brilliance, but it can also confidently state things that simply aren't true. This lecture explores why. We'll look at how large language models generate text through probabilistic token prediction rather than by retrieving stored facts, making hallucination an inherent feature rather than a flaw.
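The token-prediction idea described above can be sketched in a few lines of Python. This is a toy illustration only: the tokens and logit values are invented for the example and do not come from any real model.

```python
import math
import random

# Toy logits a model might assign to candidate next tokens after
# "The capital of France is" (illustrative values, not real model output)
logits = {"Paris": 2.1, "London": 1.9, "Rome": 0.3}

# Softmax turns raw logits into a probability distribution
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

# The model samples a token from this distribution. Nothing here
# "looks up" a fact, so a plausible-but-wrong token ("London")
# is emitted with real, nonzero probability.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(token, probs)
```

Because generation is sampling rather than retrieval, even a well-trained model occasionally emits the wrong token with full fluency.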

We'll also examine the key causes, from training data gaps to out-of-distribution queries and the model's tendency to stay coherent even when uncertain.
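The last point, staying coherent even when uncertain, can be made concrete with a small sketch. One common way to quantify a model's uncertainty is the Shannon entropy of its next-token distribution (the specific probability values below are illustrative assumptions, not any model's real output):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: higher means the model is less sure."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = [0.90, 0.05, 0.05]   # one clear winner
uncertain = [0.34, 0.33, 0.33]   # nearly flat: the model has little idea

print(entropy(confident))   # low
print(entropy(uncertain))   # close to log2(3), the maximum for 3 options

# Greedy decoding picks the argmax in both cases, so the generated
# text reads as equally fluent whether or not the model was sure.
print(max(range(3), key=lambda i: uncertain[i]))
```

The decoding step discards the uncertainty: the reader sees a single confident-looking token either way, which is why hallucinated text so often sounds authoritative.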

You’ll also get an inside look at postgraduate study at Northeastern University London, including an overview of the MSc Artificial Intelligence programme and a live Q&A with our expert panel.


*We have an exciting suite of AI courses and scholarships for the Spring 2026 term to help you dive into the field. If you're interested or have any questions, please contact masters@nulondon.ac.uk.

*For course information, please check our website.
*Interested in what else is coming up? Visit our website for more upcoming events.


Organized by

Northeastern University London


Free
Oct 21 · 10:00 AM PDT