Mythos

This @memo is a glossary of terms for all things related to @Artificial Intelligence (AI).

Glossary

B

  • Benchmarks – Standardized test sets for measuring model performance (MMLU, GSM8K, ARC, etc.).

C

  • Chain-of-Thought (CoT) – Prompting technique that elicits step-by-step reasoning before the final answer.
  • Closed-weight Model – Proprietary models with non-public parameters.
  • Context Rot – Decline in reliability with long or cluttered prompts.
  • Context Window – Max tokens a model can process in one pass.
  • Custom GPT – Tailored GPT built for specific use cases.
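To make Chain-of-Thought concrete, here is a minimal sketch of a CoT prompt as a plain string; the worked example and question are made up, and the actual model call is omitted since no specific API is named in this glossary:

```python
# One worked example plus a "think step by step" cue -- the core of a
# few-shot chain-of-thought prompt. The content is illustrative only.
COT_PROMPT = """Q: A shop sells pens at 3 dollars each. How much do 4 pens cost?
A: Each pen costs 3 dollars. 4 pens cost 4 * 3 = 12 dollars. The answer is 12.

Q: {question}
A: Let's think step by step."""

prompt = COT_PROMPT.format(
    question="A train travels 60 km/h for 2 hours. How far does it go?"
)
print(prompt)
```

The leading example shows the model the reasoning format; the trailing cue nudges it to produce intermediate steps for the new question.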

D

  • Deep Learning (DL) – Multi-layer neural nets for vision, speech, language.

E

  • Embeddings – Dense vector representation of text, images, or data.
  • Episodic Memory – Recall of past events to improve personalization.
  • Evals – Frameworks for testing AI across standard datasets.
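Embeddings are compared by vector similarity, most often cosine similarity. A toy sketch with hand-made 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two dense vectors (embeddings)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" chosen by hand so that related concepts point the same way.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
car = [0.0, 0.1, 0.9]

print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

Semantically close items end up with nearby vectors, which is what retrieval and vector search exploit.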

F

  • Faithfulness – Outputs remaining true to given sources.
  • Few-shot – Task learning guided by multiple examples.
  • Foundation Model – Large pre-trained models adaptable to many tasks.
  • Frontier Model – Cutting-edge models pushing AI performance.
  • Function Calling – AI invoking APIs/tools with structured inputs.
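Function calling hinges on describing a tool with a schema so the model can emit structured arguments. A sketch in the JSON Schema style several providers use; the tool name, fields, and envelope are illustrative, and the exact wire format varies by API:

```python
import json

# Hypothetical tool definition in JSON Schema form. Real providers wrap this
# in their own request envelope; only the general shape is shown here.
get_weather = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# The model replies with arguments matching the schema, which the host
# application parses and forwards to the real function or API.
model_output = '{"city": "Oslo", "unit": "celsius"}'
args = json.loads(model_output)
print(args["city"])  # Oslo
```

The schema constrains the model's output to something the host code can validate and execute safely.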

G

  • Golden Set – Curated input–output reference pairs used for regression-testing quality.
  • Graph-of-Thought (GoT) – Reasoning with DAG subproblems and reusable paths.

H

  • Hallucination Rate – Share of unsupported claims in model outputs.
  • HNSW – ANN algorithm for fast, high-recall vector search.
  • HumanEval – Benchmark for code generation.
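HNSW itself is an involved graph structure, but the problem it solves is easy to state: find the nearest embeddings to a query. A sketch of the exact brute-force search that ANN indexes like HNSW approximate at much larger scale (the corpus and vectors are made up):

```python
import math

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def knn(query, corpus, k=2):
    """Exact k-nearest-neighbor search: score every vector, keep the top k.
    HNSW trades this O(n) scan for a navigable graph with near-constant hops."""
    return sorted(corpus, key=lambda doc_id: -cos(query, corpus[doc_id]))[:k]

corpus = {"doc1": [1.0, 0.0], "doc2": [0.9, 0.4], "doc3": [0.0, 1.0]}
print(knn([1.0, 0.1], corpus))  # ['doc1', 'doc2']
```

Brute force is fine for thousands of vectors; HNSW and similar indexes keep recall high while avoiding the full scan at millions.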

J

  • Jailbreaks – Prompts that bypass AI’s safety or alignment rules.

K

  • KV Cache – Speeds generation by reusing attention states.
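The KV-cache idea can be shown with a toy: per-position states for a prefix are computed once and reused when the sequence grows, so each new token costs one computation instead of reprocessing the whole sequence. The state function below is a stand-in, not real attention:

```python
# Toy illustration of the KV-cache idea. In a real transformer the cached
# states are per-layer key/value tensors; here hash() stands in for them.
class ToyKVCache:
    def __init__(self):
        self.states = []       # cached per-token states (the "KV cache")
        self.computed = 0      # how many states we actually computed

    def _state(self, token):
        self.computed += 1
        return hash(token)     # stand-in for real key/value tensors

    def extend(self, tokens):
        # Only positions beyond the cached prefix are computed.
        for tok in tokens[len(self.states):]:
            self.states.append(self._state(tok))
        return self.states

cache = ToyKVCache()
cache.extend(["The", "cat", "sat"])           # computes 3 states
cache.extend(["The", "cat", "sat", "down"])   # computes only 1 more
print(cache.computed)  # 4
```

Without the cache, generating token n would recompute all n-1 earlier states; with it, generation is linear in new tokens only.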

L

  • @LLM-as-a-Judge – Models used to evaluate outputs by rubric.
  • Long-context – Models with extended token windows.

M

  • @Machine Learning (ML) – AI subset learning from data patterns.
  • Mixture-of-Experts (MoE) – Large models with specialist subnetworks.
  • @Model Context Protocol (MCP) – Open standard for tool integration.
  • Model Landscape – Core AI building blocks and structure.
  • @Multi-Agent AI (MAAI) – Systems with multiple cooperating AI agents.
  • Multimodal LLM (MLLM) – Models combining text, image, audio, video.

O

  • Open-weight Model – Models with publicly available parameters.

P

  • Pairwise Preference – Evaluation method comparing two outputs.
  • Program-of-Thought (PoT) – Reasoning expressed as code steps.
  • @Prompt Injection – Hidden instructions tricking models.
  • Prompt Template – Reusable structure with variable placeholders.
  • Prompting – Crafting inputs to guide AI outputs.
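A prompt template in practice is just a string with placeholders, filled per request. A minimal sketch using the standard library; the template text and variable names are illustrative:

```python
from string import Template

# Reusable prompt structure with variable placeholders ($doc_type, $n_sentences,
# $text). The wording is an example, not a recommended production prompt.
SUMMARY_TEMPLATE = Template(
    "You are a concise assistant.\n"
    "Summarize the following $doc_type in at most $n_sentences sentences:\n\n"
    "$text"
)

prompt = SUMMARY_TEMPLATE.substitute(
    doc_type="email", n_sentences=2, text="..."
)
print(prompt)
```

Keeping the structure fixed and only the variables changing makes prompts testable and versionable like any other code artifact.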

R

  • ReAct – Pattern mixing reasoning and tool actions.
  • Reasoning Model – AI built to plan, verify, and justify answers.
  • Regression Tests – Checks for quality after updates.
  • Retrieval-Augmented Generation (RAG) – Combines models with document retrieval.
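The RAG loop can be sketched end to end in a few lines: retrieve the most relevant document, then inject it into the prompt. Retrieval here is naive keyword overlap as a stand-in; real systems use embeddings and a vector index, and the final model call is omitted:

```python
# Minimal RAG sketch. The documents are made up, and keyword overlap stands in
# for embedding-based retrieval.
DOCS = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question):
    """Pick the document sharing the most words with the question."""
    words = set(question.lower().split())
    return max(DOCS.values(), key=lambda d: len(words & set(d.lower().split())))

def build_prompt(question):
    """Ground the model by pasting the retrieved document into the prompt."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many days does standard shipping take?"))
```

Because the answer must come from the injected context, RAG reduces hallucination and lets the knowledge base be updated without retraining the model.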

S

  • Safety – Ensuring models produce non-harmful outputs.
  • Self-Refine – Iterative AI self-revision loop.
  • Semantic Caching – Stores responses for similar queries.
  • Semantic Memory – AI recall of facts or user details.
  • Session Memory – Context retained across turns within a single chat session.
  • Sycophancy – AI over-agreeing with users.
  • System Prompt – Foundational instruction guiding AI behavior.
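Semantic caching differs from an ordinary cache in that lookup is by meaning, not by exact string match. A sketch where a letter-count vector stands in for a real embedding model, and a similarity threshold decides whether a new query reuses a stored response:

```python
import math

def embed(text):
    """Toy stand-in for an embedding model: a 26-dim letter-count vector."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Stores (embedding, response) pairs; similar-enough queries hit the cache."""
    def __init__(self, threshold=0.95):
        self.entries = []
        self.threshold = threshold

    def get(self, query):
        q = embed(query)
        for emb, response in self.entries:
            if cos(q, emb) >= self.threshold:
                return response   # cache hit on a semantically similar query
        return None               # cache miss: caller asks the model, then put()

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("what is the capital of France", "Paris")
print(cache.get("What is the capital of France?"))  # Paris
```

Minor rephrasings hit the cache and skip a model call; genuinely different queries fall below the threshold and miss.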

T

  • Temperature – Randomness control in output generation.
  • Top-k – Limits next-token choices to top k options.
  • Top-p – Samples tokens from cumulative probability mass.
  • Tree-of-Thought (ToT) – Branching reasoning exploration.
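Temperature, top-k, and top-p all act on the same next-token distribution, so one small function can show all three together. A self-contained sketch (the logits are made up, and real decoders operate on tensors, not lists):

```python
import math
import random

def sample(logits, temperature=1.0, top_k=0, top_p=1.0, rng=random):
    """Sample a token index with temperature scaling, then top-k, then
    top-p (nucleus) filtering."""
    # Temperature: divide logits before softmax; lower => sharper, greedier.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    probs = [math.exp(l - m) for l in scaled]
    total = sum(probs)
    probs = [p / total for p in probs]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    if top_k:
        order = order[:top_k]     # keep only the k most likely tokens
    kept, cum = [], 0.0
    for i in order:               # nucleus: smallest set covering top_p mass
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    weights = [probs[i] for i in kept]
    return rng.choices(kept, weights=weights)[0]

# Low temperature plus a tight nucleus makes sampling effectively greedy.
print(sample([2.0, 1.0, 0.1], temperature=0.1, top_p=0.9))  # 0
```

With `temperature=0.1` the first token holds nearly all the probability mass, so the nucleus contains only index 0 and the call is deterministic; raising temperature or `top_p` widens the candidate set.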

U

  • User Prompt – Direct input from a user to the AI.

V

  • Vector Database – Stores embeddings for retrieval and search.
  • VLM – Vision-language model, multimodal with images.

Z

  • Zero-shot – Solving tasks without prior examples.

Contexts

  • #ai-lexicon (this is the @Root Memo)
  • #glossary
Created with 💜 by One Inc | Copyright 2026