Ethical AI Risk Discovery Session

By A Jar of Insights
Online event

Overview

Join us online to explore ethical AI risks and how to spot them before they become problems!

Join Our Ethical AI Risk Discovery Session!

If you're building AI for healthcare, education, or security, this free 60-minute workshop will show you exactly where your users might feel judged, invaded, or unsafe, and it will give you a framework to fix it.

What You'll Experience in This Workshop

This is not a lecture. It's a hands-on, interactive session where you'll actively map your product's ethical risk landscape alongside other founders and product leaders facing similar challenges.

Plus, you'll schedule a follow-up call where we'll review your progress and troubleshoot any challenges.

What Makes This Different from Other Ethical AI Workshops

Most ethical AI training focuses on:

  • Algorithmic bias (are outcomes fair across demographic groups?)
  • Data privacy (is user data protected?)
  • Transparency (can users understand how decisions are made?)

This workshop focuses on:

  • Emotional harm at vulnerable moments (does your product make users feel judged, invaded, or unsafe?)
  • Trust barriers that cause abandonment (why do working products still lose users?)
  • Practical implementation (not just principles, but specific product decisions)


Category: Science & Tech, High Tech

Good to know

Highlights

  • 1 hour
  • Online

Location

Online event

Agenda

Intro: The Implementation Gap

You'll discover why having ethical principles like "respect dignity" or "build trust" isn't enough, and why even technically excellent products fail when emotional stakes are high.

Part 1: Build Your Ethical Stakeholder Map

Using your specific product, you'll map your stakeholders on the Emotional Radar. You'll identify:

  • Who's in your high-risk zone
  • What emotional states these users experience, and why
  • Where your product might violate their dignity, autonomy, or safety

What you'll have by the end: a visual map showing exactly which user groups might need the most careful design and which moments are most dangerous.

Part 2: Document Your Risk Assumptions

Now that you know WHO is vulnerable and WHERE they're at risk, you'll document specific assumptions your product makes that might cause harm.

Organised by

A Jar of Insights

Free
Jan 29 · 09:30 PST