Hack Before You Launch
Sharpen your coding skills and hack your way to a killer launch before anyone else does!
You Built It with AI. Now Watch How It Breaks.
🎬 Watch Katie hack a live AI agent before you read another word.
We've pulled a free 5-minute clip from Katie Paxton-Fear's session "Understanding AI Agents and MCP Tool Security", and it's available to watch right now. In it, you'll see exactly how MCP tools become attack surfaces in real AI stacks. It's the kind of thing that makes you immediately think about your own code.
👉 Watch for free here → (Watch instantly)
AI tools can help you build and ship apps faster than ever. But while they’re great at fixing bugs, they often miss something far more dangerous: security vulnerabilities.
And once your app is live, those vulnerabilities don’t stay hidden for long.
In this live, hands-on workshop, ethical hacker Katie Paxton-Fear will show you exactly how AI-generated applications break in the real world—and how to fix them before attackers find them.
What you’ll experience
This isn’t a theoretical session.
You’ll watch a real AI-built app get:
- Exploited through authentication flaws
- Hijacked through prompt injection
- Made to leak data through insecure handling
Then, step by step, you’ll see how those issues can be identified and fixed using a practical, lightweight approach you can apply immediately.
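To make the first of those flaws concrete, here's a hypothetical sketch (our names and code, not the workshop's demo) of a pattern AI-generated handlers often ship with: trusting a client-supplied user ID instead of the authenticated session, a classic broken access control (IDOR) bug, shown alongside its fix.

```python
# Hypothetical sketch of an authentication flaw common in AI-generated
# code: the handler trusts a client-supplied user ID (an IDOR bug).

ORDERS = {"alice": ["order-1"], "bob": ["order-2"]}

def get_orders_vulnerable(requested_user: str) -> list[str]:
    # BUG: anyone can read anyone's orders just by changing the ID they send.
    return ORDERS.get(requested_user, [])

def get_orders_fixed(session_user: str, requested_user: str) -> list[str]:
    # FIX: authorize against the server-side session identity, not client input.
    if session_user != requested_user:
        raise PermissionError("cannot view another user's orders")
    return ORDERS[requested_user]

# An attacker logged in as "bob" requests "alice"'s orders:
print(get_orders_vulnerable("alice"))   # leaks alice's data
try:
    get_orders_fixed("bob", "alice")
except PermissionError as e:
    print("blocked:", e)                # the fixed version refuses
```

The rule of thumb the fix illustrates: derive the acting user from server-side state, and never from anything the client can edit.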
What you’ll walk away with
By the end of this session, you’ll be able to:
- Spot the most common (and dangerous) security flaws in AI-generated code
- Understand how attackers approach and exploit your app
- Apply a simple pre-launch security checklist before going live
- Decide whether your app is safe to ship—or needs fixing first
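As a flavour of what a lightweight pre-launch check can look like, here's a hypothetical sketch (our red-flag list, not the session's actual checklist) that scans source text for a few issues AI-generated code commonly ships with:

```python
import re

# Hypothetical minimal pre-launch scan (our illustration): flag a few
# patterns that frequently slip into AI-generated code.
RED_FLAGS = {
    "hardcoded secret": re.compile(r"""(api[_-]?key|secret|password)\s*=\s*['"][^'"]+['"]""", re.I),
    "debug mode on":    re.compile(r"debug\s*=\s*True"),
    "wildcard CORS":    re.compile(r"Access-Control-Allow-Origin.*\*"),
}

def scan(source: str) -> list[str]:
    """Return the names of every red flag found in the given source text."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(source)]

sample = 'API_KEY = "sk-123"\napp.run(debug=True)\n'
print(scan(sample))  # ['hardcoded secret', 'debug mode on']
```

A regex scan like this is only a first pass, of course; the point is that even a few minutes of automated checking before deploy catches the most embarrassing mistakes.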
Who this is for
- Developers building apps with AI tools (ChatGPT, Copilot, etc.)
- Indie hackers and fast-moving builders shipping MVPs
- Anyone using AI to write code—but unsure about security
Why this matters
AI can get you to launch in hours.
Attackers can break your app in minutes.
Before you hit deploy, make sure you know what you’re exposing.
About the speaker
Katie Paxton-Fear is a security researcher and ethical hacker known for uncovering real-world vulnerabilities and translating them into practical insights for developers. She specializes in showing how systems break—and how to fix them before it’s too late.
Want to see what the session covers before you commit?
We've made a 5-minute clip from Katie's session on AI Agent and MCP Tool Security available for free. If it's relevant to what you're building, the live session is worth your time.
Good to know
Highlights
- 1 hour 30 minutes
- Online
Location
Online event