Context: AI is everywhere, but kids are learning it the wrong way
Over the last two years, more and more parents have come to us with the same question: "I want my child to learn AI." We get the worry — the world is changing fast. But often what a parent is picturing isn't what the child actually needs.
Most "AI for kids" content online boils down to: how to use ChatGPT to write a school essay. That's "using a tool", not understanding AI. It's like saying you've learned maths because you can use a calculator.
We teach something quite different. In the Black Belt AI program (ages 11–13), kids learn how AI thinks under the hood — how a model is "trained", how it makes mistakes, what data is, how to evaluate whether it's working.
What an 11-year-old can grasp about AI
Concept 1: AI = telling things apart
The closest analogy: AI is a program that learns to tell things apart. If we want an AI to recognise a cat in a photo, we show it thousands of pictures labelled "cat" and thousands without. After enough examples, it starts telling them apart on its own.
An 11-year-old gets this immediately. We draw two squares on the board. "What makes a cat different from a dog?" — discussion. "What makes a happy face different from a sad one?" — discussion. AI does exactly that, just faster and with more examples.
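For parents curious to peek under the hood themselves, the whole idea fits in a few lines of Python. This is a toy sketch, not the actual classroom tool: the "features" (ear pointiness, tail fluffiness) and all the numbers are invented for illustration. The program averages the examples for each label, then classifies a new animal by asking which average it's closest to.

```python
# A toy "telling things apart" program: learn from labelled examples,
# then classify a new one. Features and numbers are made up.

def train(examples):
    """Average the feature values for each label (a tiny 'model')."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Pick the label whose average example is closest."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: distance(model[label]))

# Two made-up features per animal: (ear pointiness, tail fluffiness), 0..1.
examples = [
    ((0.9, 0.8), "cat"), ((0.8, 0.9), "cat"), ((0.95, 0.7), "cat"),
    ((0.3, 0.4), "dog"), ((0.2, 0.5), "dog"), ((0.4, 0.3), "dog"),
]
model = train(examples)
print(predict(model, (0.85, 0.75)))  # close to the cat examples → "cat"
```

More examples make the averages more reliable — which is exactly the "thousands of pictures" point from above, just in miniature.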
Concept 2: AI makes mistakes — and makes them predictably
The biggest theme in the first month of the program: AI isn't "smarter than a human". AI is just faster at some things and a lot dumber at others.
When a child trains their own small model (using Teachable Machine or a similar tool) and sees the model confuse "cat fur" with "a fluffy carpet" — they're learning the limits of AI. That's a more valuable lesson than any ChatGPT prompt tip.
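The carpet mix-up isn't magic — it falls straight out of how these classifiers work. If the model only pays attention to "fluffiness", anything fluffy looks like a cat. Here's a minimal sketch of that failure (a nearest-neighbour toy with invented numbers, not the real tool):

```python
# A classifier that only sees "fluffiness" makes a predictable mistake.
# All numbers are invented for illustration.

def nearest_label(examples, features):
    """1-nearest-neighbour: copy the label of the closest known example."""
    def distance(other):
        return sum((a - b) ** 2 for a, b in zip(features, other))
    closest = min(examples, key=lambda ex: distance(ex[0]))
    return closest[1]

# One feature only: how fluffy the picture looks (0..1).
examples = [((0.9,), "cat"), ((0.85,), "cat"),
            ((0.1,), "floor"), ((0.2,), "floor")]

print(nearest_label(examples, (0.8,)))  # a fluffy carpet → "cat" (wrong!)
```

The fix the kids discover is the same one real engineers use: give the model better examples (add labelled carpet photos) or better features — not "make it smarter".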
Concept 3: Data drives behaviour
If an AI only sees photos of blonde hair, it won't recognise dark hair well. If it only learns from one language, it won't know another. AI ethics starts right there — and kids understand it.
A conversation we have a lot: "Why do you think some AIs don't understand the words you use every day?" Answer: because their data about your world isn't complete. A big lesson.
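That answer can even be acted out in code. A deliberately silly sketch (the word list is made up): a "mood guesser" that was only ever trained on formal words has literally nothing to say about everyday slang.

```python
# If an AI's data doesn't cover your world, it can't answer about it.
# The training vocabulary here is invented for illustration.

TRAINING_WORDS = {"excellent": "happy", "dreadful": "sad", "splendid": "happy"}

def guess_mood(word):
    """Only knows words that appeared in its training data."""
    return TRAINING_WORDS.get(word.lower(), "no idea")

print(guess_mood("Splendid"))  # "happy" — this word was in the data
print(guess_mood("lit"))       # "no idea" — everyday slang never was
```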
What kids actually build
The Black Belt AI program runs for one term. A typical project arc:
- Weeks 1–2: A "guess what I drew" game — a model that recognises drawings.
- Weeks 3–5: A voice assistant for a parent — say "turn on the light" and the program responds.
- Weeks 6–8: A small posture-detection system — a camera spots whether the child is sitting up straight (mum can use this one).
- Weeks 9–11: An AI-driven robot — a Lego robot following voice commands.
- Week 12: A final project of the child's choosing.
Kids do all of this themselves. The teacher helps but doesn't do the work for them. Final projects are often "mum, look what I made" moments, and many aren't school assignments at all but things the child actually keeps using afterwards.
Best example from last year's group
A twelve-year-old built an AI that detects when his baby brother cries. The app texts mum when the brother wakes up. It actually works. Nobody assigned him that project — he came up with it once he realised what he could actually do.
What kids don't learn (and why that's fine)
Let's also be straight about what we don't do:
- We don't teach "prompt engineering" for ChatGPT. We figure that's a skill the child will pick up in five minutes once they need it. Understanding how AI works is harder to learn and more valuable to know.
- We're not building Skynet. Kids' AI projects are small, local, simple. The goal is understanding, not grand ambition.
- We don't go into the maths behind AI. Calculus, linear algebra, gradients — concepts an 11-year-old doesn't and shouldn't grasp. Intuitive understanding is more than enough.
- We don't teach Python AI libraries (TensorFlow, PyTorch). That's for university. We use visual tools that are powerful enough to train real models without writing complex code.
The goal isn't to turn our twelve-year-olds into AI engineers. The goal is to give them a good intuition for something that will surround them for life — to know what AI can do, what it can't, and how to use it wisely.
What parents should know before signing up
- Black Belt AI is for ages 11–13. Younger kids aren't ready — not cognitively, but in terms of attention span.
- Black Belt isn't recommended as a first program. Ideally, before AI, the child has the Blue or Red Belt. If they're coming "from outside" — we do an assessment.
- There's no homework, but kids often work on their projects at home. Don't push it — let them do it only if they want to.
- After Black Belt AI, the natural next step is Python (Black Belt III). But kids can also wrap up the DigiKids path here and choose where to go next.
Bonus: AI ethics for kids
Five questions we ask kids during the program:
- Is AI always right? (No.)
- Who's responsible when AI gets it wrong — the developer or the user? (Discussion.)
- Can you copy someone else's work and call it your own? (No.) Can you hand in work that AI helped with? (Yes, as long as you say so.)
- Can AI be biased? (Yes, because of the data.)
- What happens if everyone uses AI to write their homework? (Open question, great discussion.)
Honestly, these conversations are valuable beyond the coding itself. A child discussing AI ethics at twelve understands the dilemmas of the world they're walking into.
More about the program: Black Belt — AI and Robotics. The trial class is free, and afterwards we give you our honest opinion on whether the program is right for your child at this stage.
