Before I tell you about a thing we built, I want to give you something you can use at work today.
I’ve spent the last three years teaching AI to working professionals. Executives. Nonprofit leaders. Creatives. Comms teams. Founders. Not machine learning engineers. Regular people doing real work under real pressure with powerful tools and almost no guidance.
Across all those workshops, office hours, and late-night conversations, one thing became obvious.
Most people do not need more prompts. They need a filter, a way to decide whether a given use of AI should happen at all.
So here’s the filter I use.
Three questions. Every time. Steal them!
The Three-Question Deployment Test
Question 1: If this AI gets it wrong, who eats the consequences?
This is the first question because it cuts through the fantasy fast.
When AI gets something wrong, the damage does not land on the model. It lands on a person. A job applicant who never gets seen. A patient who gets deprioritized. A junior staffer who is told to trust the output and move on.
The cost is always human. Usually specific. Usually borne by someone with less power than the person approving the system.
So name them.
Not “the user.”
Not “the public.”
A real person.
A first-time job applicant.
A patient navigating care in a second language.
An employee who cannot safely challenge the machine.
If you cannot name the person carrying the downside, you have not thought hard enough about the deployment.
(This is the ground we cover in Week 3, Deployment Ethics. When is AI ready? When should it never be used? We build a Deployment Checklist you actually use.)
Question 2: Would the subject be pissed if they knew?
This one is brutally simple. Imagine telling the person affected exactly how AI was used.
“By the way, we ran your interview through AI before making a decision.”
“By the way, your performance review was partly drafted by ChatGPT.”
“By the way, an automated system helped decide whether you moved forward.”
Would they feel informed? Or would they feel tricked?
That reaction tells you a lot.
This is not an argument against using AI. It is a test for whether your use can survive daylight. If transparency would make the whole thing feel sketchy, the problem is not optics. The problem is the decision.
If you would not want to disclose it, stop and ask why.
(This question straddles Week 2, Privacy and Consent, and Week 4, Authenticity, Deepfakes, and Trust Erosion. Two of the heaviest weeks of the course.)

Question 3: What is the catch layer?
AI is wrong in a very particular way. It is wrong fluently.
That is what makes it dangerous.
Humans hesitate when they are unsure. Language models often do the opposite. They deliver guesswork in the same tone they deliver truth. So the question is not whether the system makes mistakes. It will. The question is what catches those mistakes before they ship.

What is the review layer?
What is the verification step?
Who checks the thing before it affects someone else?
A senior editor reviews every line.
A recruiter looks at every rejection.
A researcher checks every citation against source material.
No catch layer means you are borrowing confidence from a machine and calling it judgment.
Bad trade.
That’s the test.
If you only remember three things from this email, make them these:
Who pays when it fails.
How it looks in full daylight.
What catches it before harm leaves the building.
(This is Week 1 territory: the confidence-accuracy gap, hallucinations, confabulation. When inaccuracy is annoying versus when it is dangerous. You leave with a Personal AI Inventory that maps every system you use and the catch layer behind it.)
The Responsible AI Professional Certification is a four-week live program for people who want to use AI well without becoming a liability to their team, organization, clients, or community.
We cover foundations and governance, privacy and consent, deployment ethics, labour and environmental impacts, authenticity, deepfakes, and trust. You leave with practical artifacts you can actually use, including an AI Inventory, an Ethics Assessment, a Deployment Checklist, and an Ethics Impact Assessment.
This is for people who want a real standard. Something they can defend in a meeting, explain to a board, use with clients, or bring back to their team.
Four Fridays starting May 22, 2026.
Live sessions.
Small cohort.
Built with Martin Lopatka and Sarah Downey.
And even if you never take the course, run these three questions against every serious AI use around you for the next month. You will start seeing the cracks immediately.
Technology is not neutral and neither are we.
Kris Krüg
Executive Director
BC + AI Ecosystem Association
Multi-modal, multi-cultural, radically local, and future-facing.