Amsterdam tried using algorithms to fairly assess welfare applicants, but bias still crept in. Why did Amsterdam fail? And more importantly, can this ever be done right? Hear from MIT Technology Review editor Amanda Silverman, investigative reporter Eileen Guo, and Lighthouse Reports investigative reporter Gabriel Geiger as they explore whether algorithms can ever be fair.
Speakers: Eileen Guo, features & investigations reporter; Amanda Silverman, features & investigations editor; and Gabriel Geiger, investigative reporter at Lighthouse Reports
Recorded on July 30, 2025
Related Coverage:
- Inside Amsterdam’s high-stakes experiment to create fair welfare AI
- The true dangers of AI are closer than we think
- How we investigated Amsterdam’s attempt to build a ‘fair’ fraud detection model – Lighthouse Reports
- The coming war on the hidden algorithms that trap people in poverty
- Machine Bias – ProPublica
- Suspicion Machines investigation – Wired