Build, Run and Deploy AI Agents with OpenClaw & Docker Model Runner
Create, secure, and deploy your own private AI assistant using OpenClaw and Docker Model Runner (no cloud dependency required).
AI assistants are everywhere, but most rely on external APIs, raising concerns around privacy, cost, and control.
In this hands-on workshop, you’ll learn how to build your own fully local, private AI assistant powered by OpenClaw and Docker Model Runner, giving you complete control over your data and infrastructure.
Whether you want a personal productivity assistant or a deployable AI interface for messaging platforms like WhatsApp or Telegram, this workshop will guide you step-by-step from setup to deployment.
By the end of this session, you’ll have a working AI assistant running locally and the knowledge to extend it further.
What You’ll Learn
- How OpenClaw works and how it enables personal AI assistants
- Setting up and configuring OpenClaw locally
- Running and managing local LLMs using Docker Model Runner
- Security best practices for local AI deployments
- Connecting your assistant to real-world messaging platforms
- Designing a scalable and extensible assistant architecture
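As a taste of the Docker Model Runner portion, the commands below sketch the basic workflow. The model name `ai/smollm2` is just an example; substitute any model from Docker Hub's `ai/` namespace.

```shell
# Pull a small model from Docker Hub's ai/ namespace
docker model pull ai/smollm2

# List the models available locally
docker model ls

# Send a one-shot prompt to the model
docker model run ai/smollm2 "Summarize what a local LLM is in one sentence."
```

Because the model weights and the inference engine both live on your machine, nothing in this loop leaves your laptop.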
Hands-On Outcomes
By the end of the workshop, you will:
- Run a fully functional local AI assistant
- Configure and manage local LLMs via Docker
- Deploy your assistant to WhatsApp or Telegram
- Understand security and privacy implications
- Have a reusable architecture for future AI projects
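Docker Model Runner also exposes an OpenAI-compatible API, which is how an assistant like OpenClaw can talk to a local model. The sketch below shows one way a client might call the chat-completions endpoint from Python; the port (12434) is Docker Model Runner's default host-side endpoint, and the model name is an assumption — check `docker model ls` for what is actually pulled on your machine.

```python
# Minimal sketch: calling Docker Model Runner's OpenAI-compatible
# chat-completions endpoint. The port and model name are assumptions;
# adjust them to match your local setup.
import json
import urllib.request

BASE_URL = "http://localhost:12434/engines/v1"  # default host-side endpoint


def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(model: str, prompt: str) -> str:
    """POST the payload and return the assistant's reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("ai/smollm2", "Say hello in one sentence."))
```

Because the endpoint speaks the same protocol as hosted providers, swapping a cloud API for a local model is mostly a matter of changing the base URL.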
Prerequisites
Participants should have:
- Basic familiarity with Python and APIs
- A laptop with:
  - Docker installed
  - 16 GB of RAM or more (recommended for smooth local inference)
- Optional: Experience with LLMs or agent frameworks
Lineup
Rami Krispin
Highlights
- 4 hours
- Online