As a K–12 student in the mid-1990s through the late 2000s, I was taught how to use computer programs like Microsoft Word, leveraging new tools like spellcheck to expedite and support my work. After I graduated high school, K–12 schools began expanding technology coursework to include computer science, giving students opportunities to build, not just use, digital tools.
Today, students are engaging with artificial intelligence in the classroom in a variety of ways, from using it as a support to outsourcing their thinking altogether. In many ways, this reflects an earlier stage of technology in my education: heavy use, but often light understanding. Rarely, however, are students explicitly taught AI literacy, meaning how these systems work and what impacts they have, and I believe they should be.
The sections that follow are designed as a practical AI 101 guide for educators, exploring key concepts behind the technology before moving into how bias and ethics shape these tools. I’ve also included recommended activities that can be used directly with students to help explore these concepts in a meaningful manner.
1 AI 101
Let’s establish an understanding of how AI works at a foundational level. A helpful way to frame this for students is through a simple progression of Sense, Think, and Act. I’ve found it’s most effective to introduce machine learning by comparing it with human learning, so we’ll use the example of a human driver vs. a self-driving car to illustrate this sequence.
Sense refers to how information is gathered to understand an environment.
Humans: Use senses like sight and hearing to gather information about current driving conditions.
Self-Driving Cars: Use sensors such as cameras and radar to gather information about current driving conditions.
Think refers to how information is interpreted to understand the situation.
Humans: Take in cues like reduced speed signs, orange barrels, and flashing lights collectively to recognize a construction zone.
Self-Driving Cars: Process signs, barrels, and lights one at a time, matching each one to a model. Together, those models help the car recognize a construction zone.
In AI, a model is what a machine uses to recognize what it senses. We’ll explore this more in the Models & Datasets section.
Act is the response that follows.
Humans: Slow down, increase following distance, and proceed cautiously because prior driving experience has shown it’s the safest way to navigate construction zones.
Self-Driving Cars: Slow down and increase following distance because they are programmed to respond this way in construction zones.
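For educators comfortable showing a little code, the progression above can be sketched as three tiny functions. Everything here is illustrative (the object names, the "two cues means construction" rule), not how a real autonomy system works:

```python
# A toy Sense -> Think -> Act loop for the self-driving car example.
# All names and rules are invented for illustration.

def sense(environment):
    """Gather raw observations, like a car's cameras and radar would."""
    return environment["objects"]

def think(observations):
    """Match each observation against simple 'models' to interpret the scene."""
    construction_cues = {"reduced speed sign", "orange barrel", "flashing lights"}
    matches = [obs for obs in observations if obs in construction_cues]
    return "construction zone" if len(matches) >= 2 else "normal road"

def act(situation):
    """Respond with a pre-programmed behavior for the recognized situation."""
    if situation == "construction zone":
        return "slow down and increase following distance"
    return "maintain speed"

environment = {"objects": ["reduced speed sign", "orange barrel", "flashing lights"]}
print(act(think(sense(environment))))  # slow down and increase following distance
```

Note how each stage hands its output to the next: raw observations become an interpreted situation, and the interpreted situation triggers a programmed response.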

When introducing this progression in the classroom, the Sorting Hat scene in Harry Potter serves as an accessible example for students to follow and can also be used as a quick informal assessment activity where students identify the Sense, Think, and Act steps.
- When placed on a student’s head, the Sorting Hat senses (not with sensors, but with magic) the traits they possess.
- The Sorting Hat then thinks about which Hogwarts house—each house acting like a different model—best aligns with the student’s traits and qualities.
- Finally, the Sorting Hat acts by shouting the student’s house placement across the Great Hall.
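The Sorting Hat steps above can also be written as a playful program, which some classes enjoy tracing through. The houses are real to the books, but the trait lists and the overlap-counting rule are my own simplifications:

```python
# A playful sketch of the Sorting Hat as a Sense -> Think -> Act program.
# The trait sets and scoring rule are invented for illustration.

HOUSE_TRAITS = {  # each house acts like a different "model" of traits
    "Gryffindor": {"brave", "daring"},
    "Ravenclaw": {"curious", "clever"},
    "Hufflepuff": {"loyal", "patient"},
    "Slytherin": {"ambitious", "cunning"},
}

def sorting_hat(student_traits):
    # Sense: receive the student's traits (by magic, not sensors).
    traits = set(student_traits)
    # Think: score each house model by how many traits it shares with the student.
    best_house = max(HOUSE_TRAITS, key=lambda house: len(HOUSE_TRAITS[house] & traits))
    # Act: shout the placement across the Great Hall.
    return f"{best_house.upper()}!"

print(sorting_hat(["curious", "clever", "patient"]))  # RAVENCLAW!
```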
2 Models & Datasets
For machines to execute the “Think” stage, they first need to learn. Machine learning is shaped by two key concepts: models and datasets.
- A model is built-in knowledge that a machine uses to understand what it is sensing.
- A dataset is a collection of examples used to build/train that model.
Let’s explore these concepts using shapes.
As young students learn about triangles and squares, they build a model of what makes something a shape—sides, angles, etc. For kindergartners, the dataset of examples used to build their initial shape model is quite small.
As students progress in school, they learn more types of shapes—hexagons, trapezoids, etc.—which expand their dataset and make their model of shapes more complete.
Similarly, AI models become more “intelligent” when they are trained on larger and more varied datasets. To help students explore what makes a strong dataset, have them create a collage of stop sign images that could be used to train a self-driving car. Be sure to ask them to factor in different lighting conditions, angles, weather, and obstructions a car might encounter.
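The shape example can be made concrete with a few lines of code. This is a deliberately minimal "model"—just a lookup table learned from labeled examples—but it captures the core idea that a richer dataset produces a more capable model:

```python
# Toy illustration: a "model" of shapes built from a dataset of labeled
# examples. The dataset contents are invented for illustration.

def train(dataset):
    """Build a model (side count -> shape name) from labeled examples."""
    model = {}
    for sides, name in dataset:
        model[sides] = name
    return model

def predict(model, sides):
    """Use the model to recognize a shape; unfamiliar inputs stump it."""
    return model.get(sides, "unknown shape")

# A kindergartner's small dataset...
small_dataset = [(3, "triangle"), (4, "square")]
# ...versus a larger, more varied one learned over years of school.
large_dataset = small_dataset + [(6, "hexagon"), (8, "octagon")]

print(predict(train(small_dataset), 6))  # unknown shape
print(predict(train(large_dataset), 6))  # hexagon
```

The same six-sided input stumps the model trained on the small dataset but is recognized by the one trained on the larger dataset, mirroring how students' shape knowledge grows with more examples.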
3 Biases & Ethics
The stop sign activity can also be used to spark a broader conversation about bias: What happens when a dataset is incomplete or unrepresentative? Because humans rely on pattern recognition to make sense of the world, the data we collect to build models often reflects those same patterns and blind spots.
Lead students through an exploration of biases and how they show up in our decision-making. Then examine how those same patterns can carry over into the data used to train intelligent systems:
- In pedestrian detection, some systems perform less accurately when identifying people with darker skin tones, due in part to training datasets that contain a higher proportion of lighter-skinned individuals.
- In hiring, AI screening tools have shown bias toward resumes with experience like collegiate internships, which were uncommon or nonexistent for older generations, putting those applicants at a disadvantage.
- In healthcare, underrepresentation of women in clinical trials (only required by U.S. law since 1993) has led to AI systems, trained on historical data, having a harder time accurately recognizing and predicting health outcomes for women.
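For students ready to see the mechanism rather than just the examples, a small deterministic simulation shows how an unrepresentative dataset produces uneven accuracy. The groups, feature values, and nearest-neighbor rule below are all invented for illustration:

```python
# Toy, deterministic demonstration of dataset bias. A simple classifier
# predicts the label of the nearest training example. Group A is well
# represented in training; group B has only one example. All numbers
# are made up for illustration.

def nearest_label(training, x):
    """Predict the label of the closest training example (1-nearest-neighbor)."""
    return min(training, key=lambda example: abs(example[0] - x))[1]

training = [(0.0, "A"), (1.0, "A"), (2.0, "A"), (3.0, "A"), (4.0, "A"),
            (8.0, "B")]

# Test examples spread around each group's typical feature values.
tests_a = [(0.5, "A"), (1.5, "A"), (2.5, "A"), (3.5, "A"), (4.5, "A")]
tests_b = [(5.5, "B"), (6.5, "B"), (7.5, "B"), (8.5, "B"), (9.5, "B")]

def accuracy(tests):
    correct = sum(nearest_label(training, x) == label for x, label in tests)
    return correct / len(tests)

print(f"group A accuracy: {accuracy(tests_a):.0%}")  # group A accuracy: 100%
print(f"group B accuracy: {accuracy(tests_b):.0%}")  # group B accuracy: 80%
```

The classifier is not written to treat the groups differently; the gap comes entirely from the training data, which is exactly the dynamic behind the real-world examples above.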
Beyond the challenge of mitigating bias, it is also important for students to consider the ethical questions raised when AI is involved in consequential decision-making.
When leading discussions of ethics, I often start with a clip from The Good Place that humorously explores the trolley problem. This lead-in helps students recognize that many ethical decisions are not black and white. From there, students can examine scenarios where AI is increasingly being used to make fraught decisions:
- Content moderation: Platforms like YouTube or TikTok use AI to detect and remove harmful content. Where is the line between safety and free expression?
- Medical priority: AI systems help hospitals prioritize care, including ICU beds, transplants, or early screenings. How should a model weigh factors like age, health history, or likelihood of survival?
- Disaster response: AI is used to help route emergency resources during events like wildfires and hurricanes. When resources are limited, how should systems determine which areas receive assistance first?
Beyond decision-making, AI also raises ethical questions around privacy, copyright, environmental impact, job displacement, and surveillance. The debates surrounding these matters will likely intensify as AI grows more prominent. AI literacy is the best tool we have to prepare students to approach these topics with understanding and nuance.

Across states, districts, and schools, there is a growing momentum to formalize guidance around the use and teaching of AI. Institutions such as CSTA have begun updating their standards to include AI literacy in K–12 education as well. For educators interested in further guidance on teaching machine learning, MIT RAISE, TeachAI, and AI4K12 offer a range of resources and support.
As machine learning tools and standards evolve, the goal is not to turn every student into an AI developer, but to ensure they leave school with a grounded understanding of how these systems work, and how to think critically about their role in the world.





