
40 AI Terms — Plain-English Guide + Hands-On “How-To”

Audience: teachers, student-learners, community mentors, and beginner builders using Incubator.org.
Use this guide: Every term has a short, human-readable definition plus at least one way to try it yourself using widely-used tools (most are free or have generous tiers). Scan the glossary for fast definitions, then jump to the Try it / Tools snippets to actually do something with each idea. Copy the prompt boxes into your AI tool of choice and adapt them for class, self-study, or a workshop.

Quick Glossary (plain English)

  1. Bias — When an AI prefers or treats some things unfairly because of patterns in its training data.
  2. Label — The “answer” attached to data (e.g., a cat/not-cat tag).
  3. Model — The trained program that makes predictions or generates content.
  4. Training — Teaching a model using examples so it improves.
  5. Chatbot — A program that converses by text or voice.
  6. Dataset — A large, structured collection of examples used for training/evaluation.
  7. Algorithm — A step-by-step method to solve a problem.
  8. Token — A chunk of text (or data unit) used by language models.
  9. Overfitting — When a model memorizes the training set and fails on new data.
  10. AI Agent — Software that can plan and act (often across tools/APIs) toward a goal.
  11. AI Ethics — Principles and practices to make AI fair, safe, and accountable.
  12. Explainability — Ways to understand why a model made a decision.
  13. Inference — Running the trained model to get outputs (predictions/generations).
  14. Turing Test — A thought experiment: can a machine’s responses pass for human?
  15. Prompt — The instruction or input you give an AI system.
  16. Fine-Tuning — Further training a model on your specific data or style.
  17. Generative AI — Models that create text, images, audio, or video.
  18. AI Automation — Using AI to complete multi-step tasks without constant supervision.
  19. Neural Network — A model architecture loosely inspired by brain neurons.
  20. Computer Vision — AI that interprets images or video.
  21. Transfer Learning — Starting from a pretrained model and adapting it to a new task.
  22. Guardrails — Controls to keep outputs safe, on-topic, and policy-compliant.
  23. Open-Source AI — Models/tools whose code/weights are openly shared.
  24. Deep Learning — Neural networks with many layers that learn complex patterns.
  25. Reinforcement Learning — Training by trial, error, and rewards.
  26. Hallucination — When an AI confidently makes up facts.
  27. Zero-Shot Learning — Doing a new task without training examples, guided by the prompt.
  28. Speech Recognition — Turning spoken language into text.
  29. Supervised Learning — Training with labeled examples.
  30. Model Context Protocol (MCP) — An open standard that lets AI models securely connect to external tools and data sources.
  31. Machine Learning — Letting computers learn patterns from data.
  32. AI (Artificial Intelligence) — Systems that perform tasks we associate with human intelligence.
  33. Unsupervised Learning — Finding patterns in unlabeled data.
  34. LLM (Large Language Model) — Big text models that read/write code, essays, etc.
  35. ASI (Artificial Superintelligence) — Hypothetical AI vastly beyond human general ability.
  36. GPU (Graphics Processing Unit) — Hardware that speeds up AI training/inference.
  37. NLP (Natural Language Processing) — AI that works with human language.
  38. AGI (Artificial General Intelligence) — Hypothetical AI that can learn anything a human can.
  39. GPT (Generative Pre-trained Transformer) — A popular LLM family pretrained on broad text, then adapted for specific tasks.
  40. API (Application Programming Interface) — A standard way apps/services talk to each other.

Hands-On: tools, links, and “try it” steps

(Each bullet: what to try + where to click. Most tools have free tiers.)

Bias, Labels, Overfitting, Explainability, Supervised/Unsupervised

Try it: Load a small dataset in scikit-learn → split train/test → compare test vs. train accuracy (spot overfitting) → run SHAP to see which features mattered.
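
The steps above can be sketched with scikit-learn's built-in iris dataset standing in for "a small dataset." An unconstrained decision tree memorizes the training split, so the train/test accuracy gap makes overfitting visible; the SHAP step is left as a follow-up.

```python
# Spot overfitting: a deep decision tree memorizes its training set.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# An unconstrained tree can fit the training data perfectly.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)

print(f"train accuracy: {train_acc:.2f}")  # memorized
print(f"test accuracy:  {test_acc:.2f}")   # the gap is the overfitting
```

Once this runs, `shap.TreeExplainer(model)` (from the separate `shap` package) can show which features drove each prediction.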


Datasets & Algorithms

Try it: Pick any dataset on Kaggle → open in a Notebook → train LogisticRegression and RandomForest → compare accuracy and confusion matrices.
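
A minimal sketch of the comparison, using scikit-learn's built-in wine dataset as a stand-in for a Kaggle download (swap in your own `X, y` once you have one):

```python
# Compare two classic algorithms on the same train/test split.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

results = {}
for name, clf in [
    # Scaling helps logistic regression converge; forests don't need it.
    ("logreg", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("forest", RandomForestClassifier(random_state=0)),
]:
    clf.fit(X_train, y_train)
    results[name] = clf.score(X_test, y_test)
    print(name, f"accuracy={results[name]:.2f}")
    print(confusion_matrix(y_test, clf.predict(X_test)))  # rows=true, cols=predicted
```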


Tokens, Prompts, Guardrails

Try it: Paste your assignment into a tokenizer tool (e.g., OpenAI’s Tokenizer page) and observe the token count. Then add a system prompt with rules plus a JSON schema (e.g., via the Guardrails library) to keep outputs on-format.
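
To build intuition for token counts without any library, here is a very rough stdlib approximation. Real LLM tokenizers use byte-pair encoding (e.g., the `tiktoken` package) and split words into sub-word pieces, so their counts will differ:

```python
import re

def rough_token_count(text: str) -> int:
    """Crude proxy: one token per word or punctuation mark.
    Real BPE tokenizers split differently (often sub-word pieces)."""
    return len(re.findall(r"\w+|[^\w\s]", text))

prompt = "Summarize the causes of World War I in three bullet points."
print(rough_token_count(prompt))  # 11 words + 1 period = 12
```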


Chatbots, LLMs, GPTs, APIs, MCP

Try it: Build a tiny Q&A bot: create an API key → call a /chat/completions endpoint with a system prompt and a user question → display the assistant’s reply in your app.
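
A stdlib-only skeleton of those steps for an OpenAI-compatible `/chat/completions` endpoint. The URL, model name, and key below are placeholders — substitute your provider's values:

```python
# Minimal Q&A bot skeleton for an OpenAI-compatible /chat/completions API.
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder
API_KEY = "YOUR_API_KEY"                                 # placeholder

def build_request(question: str) -> dict:
    """Assemble the standard chat-completions payload."""
    return {
        "model": "your-model-name",  # placeholder
        "messages": [
            {"role": "system", "content": "You are a concise homework helper."},
            {"role": "user", "content": question},
        ],
    }

def ask(question: str) -> str:
    """POST the payload and pull the assistant's reply out of the response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(question)).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Inspect the payload before spending API credits.
    print(json.dumps(build_request("What is a token?"), indent=2))
```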


Generative AI (text, image, audio, video) & Fine-Tuning / Transfer Learning

Try it: Take a small set of your organization’s emails → fine-tune a support-style model with LoRA → evaluate on held-out examples before using it in production.
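
Real LoRA fine-tuning needs a GPU and libraries like `peft`, but the core idea fits in a few lines of numpy: freeze the pretrained weight matrix and learn only a small low-rank update. A conceptual sketch (not a training loop; dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 64, 4   # model dimension and low rank (r << d)
alpha = 8      # LoRA scaling factor

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection (starts at zero)

# Effective weight after fine-tuning: the frozen W plus a scaled low-rank update.
W_adapted = W + (alpha / r) * B @ A

# Only A and B are trained: 2*d*r parameters instead of d*d.
print("full:", d * d, "LoRA:", 2 * d * r)
```

Because B starts at zero, the adapted model behaves exactly like the pretrained one before training begins — a deliberate LoRA design choice.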


Neural Networks, Deep Learning, GPUs

Try it: In Colab, run a Keras CNN on MNIST (handwritten digits). Toggle GPU runtime and compare training speed vs. CPU.
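
Before running the full Keras model, it helps to see what one CNN layer actually computes. A minimal numpy sketch of a single 2D convolution (one filter, no padding, stride 1), acting as a vertical-edge detector:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a small filter over the image (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Tiny image: dark left half, bright right half.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])  # responds where brightness jumps

response = conv2d(image, edge_kernel)
print(response)  # strongest response at the column where the edge sits
```

A deep CNN stacks many such learned filters; the GPU runtime in Colab speeds up exactly this kind of repeated multiply-and-sum.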


Computer Vision

Try it: Detect objects in a classroom photo with YOLOv8 and draw bounding boxes. Discuss privacy and consent before sharing images.
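
Detectors like YOLOv8 score candidate bounding boxes against each other using intersection-over-union (IoU); a library-free sketch of that metric:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero width/height if the boxes do not intersect).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```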


Speech Recognition

Try it: Record a 1-minute reflection → transcribe with Whisper → summarize with an LLM.


NLP (language) & RAG to reduce Hallucination

Try it: Build a “course FAQ” bot: index your PDFs with LlamaIndex → answer questions with sources → compare against answers without RAG to see how grounded sources reduce hallucinations.
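
The heart of RAG is retrieving relevant passages before the model answers. A library-free sketch using crude word-overlap scoring (real systems like LlamaIndex use embedding similarity instead, but the flow is the same):

```python
import re

def score(question: str, passage: str) -> int:
    """Count question words that also appear in the passage — a crude
    stand-in for the embedding similarity real RAG systems use."""
    q_words = set(re.findall(r"\w+", question.lower()))
    p_words = set(re.findall(r"\w+", passage.lower()))
    return len(q_words & p_words)

passages = [
    "The final project is due on May 12 and counts for 40% of the grade.",
    "Office hours are held every Tuesday afternoon in room 204.",
    "Late submissions lose 10% per day unless an extension is granted.",
]

question = "When is the final project due?"
best = max(passages, key=lambda p: score(question, p))
print(best)  # the passage to hand the LLM as grounded context
```

The retrieved passage is then pasted into the prompt ("Answer using only this source: …"), which is what gives the model something to cite instead of inventing an answer.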


Reinforcement Learning

Try it: Train a CartPole agent for a class demo, then discuss reward shaping and safety.
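
The CartPole demo needs the `gymnasium` package, but the reward-driven idea fits in pure Python. A tiny Q-learning loop on a toy "walk right to the goal" environment (states 0–4 on a line; reaching state 4 earns +1):

```python
import random

random.seed(0)

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or step right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: nudge toward reward + discounted future value.
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy steps right from every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

This is the same trial-error-reward loop CartPole uses, just small enough to trace by hand — a good lead-in to the reward-shaping and safety discussion.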


AI Ethics, Guardrails, Safety

Try it: Create a classroom “AI Use Policy” one-pager—roles, acceptable use, privacy, citation norms.


Open-Source AI

Try it: Run a small open model locally (e.g., llama.cpp, ollama) and compare responses with hosted LLMs.


AI Automation & Agents

Try it: Build a “research → draft → summarize → email” pipeline with one button.
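
Under the hood, the "one button" is just function composition. A sketch with placeholder steps — in a real build, each stand-in function would call an LLM or an email service's API:

```python
# Each step is a placeholder standing in for an LLM or API call.
def research(topic: str) -> list[str]:
    return [f"Note about {topic} #1", f"Note about {topic} #2"]

def draft(notes: list[str]) -> str:
    return "Draft based on: " + "; ".join(notes)

def summarize(text: str) -> str:
    return text[:60] + ("..." if len(text) > 60 else "")

def email(body: str, to: str) -> dict:
    # A real version would hand this payload to an email service.
    return {"to": to, "body": body, "status": "queued"}

def run_pipeline(topic: str, recipient: str) -> dict:
    """The 'one button': chain the steps end to end."""
    return email(summarize(draft(research(topic))), recipient)

print(run_pipeline("solar energy", "class@example.org"))
```

An AI agent goes one step further: instead of a fixed chain, the model decides at runtime which of these tools to call and in what order.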


Concepts & Big-Picture (AI, ML, AGI, ASI, Turing Test)

Try it: Host a class debate: “What would count as evidence of AGI?”


Copy-Prompt Boxes (paste into your LLM)

Copy Prompt — Teach the Term
“Explain {term} in 120 words for a mixed classroom (ages 15–18). Give 1 real-world example, 1 pitfall to avoid, and a 2-step mini-activity students can do in 10 minutes.”

Copy Prompt — Compare & Contrast
“In a table, compare supervised, unsupervised, and reinforcement learning: goals, data needed, example tools, quick classroom demo.”

Copy Prompt — Reduce Hallucination
“Answer the question using only the provided sources. Quote and link sources inline. If a claim isn’t supported, say ‘insufficient evidence.’ Return JSON with keys: answer, citations.”

Copy Prompt — Build a Guardrailed Helper
“You are a classroom writing coach. Follow these rules: no personal data collection; no copyrighted text over 90 words; cite 2 reputable sources; refuse unsafe requests. Output in this JSON schema: {"tips": [string], "outline": [string], "sources": [{"title": string, "url": string}]}.”


Classroom & Cohort “How-To” Ideas

  • Mini-lab: Count tokens of your prompt, then optimize wording to fit a token budget.

  • Project: Build a FAQ bot with RAG for your course or club handbook.

  • Ethics circle: Use a real case (e.g., face recognition) and analyze bias, consent, and impact.

  • Show-and-tell: Students fine-tune a tiny model (LoRA) on their writing voice and present safety mitigations.



Authors

Incubator.org Editorial Team
