Responsible AI Professional (RAP) Certification

Lead with ethics in an AI-first world.

A four-week certification for leaders, executives, and practitioners who need to understand, implement, and govern responsible AI in their organizations. Limited to 30 participants per cohort.

 

Two cohorts running in 2026:

  • Cohort 1 — Online pilot. Kickoff info session Friday, May 22, 2026, at 3:00 PM PT, with the four weekly sessions following. Delivered online via Zoom. Registration open now.
  • Cohort 2 — In-person intensive. Tuesday October 27, 2026, 9:30 AM – 4:30 PM PT, at 1000 Parker St, Vancouver. Early-bird pricing through October 1, 2026.

Why we built RAP

Last October, I watched a room full of 250 people at the Vancouver Planetarium lean forward when our speaker asked: “How many of you are deploying AI systems at work right now?” Nearly every hand went up.

Then he asked: “How many of you have a governance framework for how you’re doing it?” The hands dropped. Maybe a dozen stayed raised.

That gap — between deployment speed and ethical practice — is why we built RAP.

Every organization is rolling out AI. Fast. The pressure to ship, to automate, to not fall behind — it’s real. I get it. But the systems we’re deploying now (hiring algorithms, healthcare decisions, content moderation, financial assessments) are making choices that affect real people. When the algorithm is biased, when the system hallucinates in a high-stakes moment, when trust erodes because nobody asked the hard questions early enough — the damage isn’t theoretical.

The organizations that will lead in the AI era aren’t the ones with the fastest models. They’re the ones with people who know how to ask: Should we build this? Who gets hurt if we get it wrong? What oversight do we need? How do we maintain human agency?

— Kris Krüg, Executive Director, BC + AI Ecosystem Association

Four weeks, four artifacts

RAP runs four weeks, with 90 minutes live each week plus pre-work and practical exercises. Certification isn’t attendance-based: you have to pass quizzes and complete the artifacts. Here’s the structure:

  • Week 1 — Foundations. Understanding AI and its limits. How these systems actually work. The accuracy problem: why AI sounds confident when it’s wrong. Global frameworks you can reference when making decisions. Artifact: Personal AI Inventory.
  • Week 2 — Core Ethics. Bias, privacy, and ownership. Algorithmic bias and fairness definitions (and why they conflict). Privacy and consent in an age of surveillance capitalism. Copyright questions nobody’s fully answered yet. Artifact: Ethics Assessment.
  • Week 3 — Societal Impact. Deployment, labor, and environment. When should AI be deployed? When should it never be? Labor implications. Environmental costs nobody wants to talk about. Artifact: Deployment Checklist.
  • Week 4 — The Human Element. Authenticity, relationships, and meaning. Deepfakes and trust erosion. AI companions and vulnerable populations. What creativity and agency mean when machines can do the work. Artifact: Ethics Impact Assessment.

By the end, those four artifacts become your Ethics Practice Assistant: a custom GPT built from your own work that knows how you think about these issues. Your ongoing practice partner after the program ends.

Who’s teaching

What unites this team: we believe responsible AI work requires both technical depth and human understanding. You can’t do it with just frameworks. You can’t do it with just good intentions either.

 

Kris Krüg — Program Lead

Executive Director, BC + AI Ecosystem Association. Founder of the Vancouver AI Meetup. 25 years building technology across creative and technical worlds — from the world’s first Drupal development company to Dead.net for the Grateful Dead, plus books on BitTorrent and iPhone photography. Brings the community perspective: two years listening to what people actually struggle with, what questions keep coming up, what gaps exist between theory and practice.

 

Martin Lopatka — Curriculum Lead

PhD in Forensic Statistics, Master’s in AI. Mozilla alumnus with experience shipping production ML systems. Knows responsible AI assessment frameworks cold. He’s volunteering his expertise because this work matters to him.

 

Sarah Downey — Governance Facilitation

20+ years in nonprofit leadership, helping mission-driven organizations adopt AI responsibly. Her perspective grounds everything we do in values-centered practice.

Investment

Pricing (per cohort):

  • Standard: $1,500
  • Early Bird: $1,200 (Cohort 2: through October 1, 2026)
  • BC + AI Member: $750
  • Early Bird Member: $600

The membership math is simple. BC + AI annual membership is $340, and the member rate saves you $750 off the standard price. Subtract the membership fee and you’re still $410 ahead, plus you get Friday office hours, Discord access, meetup priority, and all future BC + AI certification discounts.

 

Join BC + AI first →

Ready to join?

Who this is for:

  • Leaders and executives overseeing AI deployments who need governance frameworks that actually work, not policy documents that sit in a drawer.
  • Career transitioners with upskilling funds, pivoting into AI governance.
  • Practitioners building a practice, not just collecting a credential.

No technical AI experience required. If you’re making decisions about AI — budgeting, deploying, governing — you’re qualified.

 

Questions? Come to Friday Office Hours (12–1 PM PT, free, open to all) or email [email protected].