Think it. Build it. Prove it.

Simple enough to ship your first mission in hours. Rigorous enough to earn a credential employers actually trust.

Foundation

Six domains. Calibrated to your skill level.

Every learner calibrates on the same six domains before entering a school. Three speeds let you skip what you can already prove and drill what you cannot.

Python

The default language of modern AI engineering.

  • Type hints, async/await, dataclasses, modern stdlib
  • Testing discipline — pytest, fixtures, mocking
  • Package management and environment hygiene
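As a small taste of the modern-Python territory the bullets above name, here is a hedged sketch combining type hints, a dataclass, and async/await. The names (`Mission`, `fetch_mission`) are illustrative, not Safua curriculum code:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Mission:
    """A toy record type -- dataclasses generate __init__ and __repr__."""
    title: str
    score: float


async def fetch_mission(title: str) -> Mission:
    # Simulate an async I/O call (e.g. an API request) with a short sleep.
    await asyncio.sleep(0.01)
    return Mission(title=title, score=0.87)


async def main() -> list[Mission]:
    # Run several "requests" concurrently with asyncio.gather.
    return list(await asyncio.gather(
        fetch_mission("rate-limiter"),
        fetch_mission("etl-pipeline"),
    ))


missions = asyncio.run(main())
```

The two fetches overlap rather than run back to back, which is the whole point of async/await for I/O-bound work.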

Git & collaboration

Ship code the way teams actually ship code.

  • Branching, merging, rebasing, resolving conflicts
  • PR review culture — small changes, readable diffs
  • CI basics and pre-merge checks

APIs & HTTP

Every system is an API. Learn the contract.

  • REST fundamentals, status codes, idempotency
  • Auth patterns (API keys, tokens, OAuth basics)
  • Rate limits, retries, timeouts, failure handling
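The retry-and-backoff pattern the last bullet describes can be sketched in a few lines of stdlib Python. This is a simplified illustration, not a production client: `do_request` stands in for a real HTTP call with a timeout, and real code should only retry idempotent requests:

```python
import random
import time


def call_with_retries(do_request, max_attempts: int = 4, base_delay: float = 0.5):
    """Retry a flaky call with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return do_request()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure
            # Exponential backoff (base, 2x, 4x, ...) with jitter so many
            # clients do not retry in lockstep after the same outage.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))


# A fake endpoint that fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return 200


status = call_with_retries(flaky, base_delay=0.01)
```

The jitter term matters more than it looks: without it, every client that saw the same failure retries at the same instant and re-creates the overload.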

Databases & SQL

Data is where most AI systems actually live.

  • Schema design, indexing, query planning
  • SQL joins and window functions beyond the basics
  • Transactions, isolation levels, migrations
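Window functions, from the second bullet, are easy to demo with Python's built-in `sqlite3` (SQLite 3.25 or newer, which modern CPython builds bundle). The table and data here are invented for illustration:

```python
import sqlite3

# In-memory database: one row per (engineer, review score).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE reviews (engineer TEXT, score REAL)")
con.executemany(
    "INSERT INTO reviews VALUES (?, ?)",
    [("ada", 0.91), ("ada", 0.87), ("lin", 0.95), ("lin", 0.60)],
)

# A window function: rank each review inside its engineer's partition,
# without collapsing the rows the way GROUP BY would.
rows = con.execute(
    """
    SELECT engineer, score,
           RANK() OVER (PARTITION BY engineer ORDER BY score DESC) AS rnk
    FROM reviews
    """
).fetchall()
```

The key contrast with `GROUP BY`: every input row survives, each annotated with a value computed over its partition.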

Docker & runtime

Ship your code in a container. Reproducibly.

  • Dockerfiles, layers, and image hygiene
  • docker-compose for local multi-service dev
  • Healthchecks, resource limits, logging

AI integration

Wire models into real systems without breaking them.

  • Provider-neutral LLM calls, streaming, tool use
  • Prompt templates, retries, structured outputs
  • Observability — tracing tokens, latency, cost
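The "structured outputs with retries" idea above can be sketched provider-neutrally: wrap any model call, demand JSON, validate the shape, and retry on garbage. `call_llm` here is a stub standing in for a real provider SDK, and the key names are invented for the example:

```python
import json


def call_llm(prompt: str) -> str:
    """Stand-in for a provider SDK call -- swap in any real client here."""
    # A real implementation would call OpenAI, Anthropic, a local model, etc.
    return json.dumps({"sentiment": "positive", "confidence": 0.93})


def structured_call(prompt: str, required_keys: set[str],
                    max_attempts: int = 3) -> dict:
    """Ask for JSON, validate the shape, and retry on malformed output."""
    for _ in range(max_attempts):
        raw = call_llm(prompt + "\nRespond with JSON only.")
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # model returned non-JSON: retry
        if required_keys <= data.keys():
            return data  # all required fields present
    raise ValueError("model never produced valid structured output")


result = structured_call("Classify: 'great mission!'",
                         {"sentiment", "confidence"})
```

Because the validation loop sits outside the provider call, swapping vendors changes one function, not the whole pipeline.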

Three speeds through Foundation.

  1. Beginner

    All six domains, full depth. Structured passes through every capability with graded missions at each step.

  2. Intermediate

Targeted pass on the gaps the calibration identifies. Skip what you can already prove, drill what you cannot.

  3. Experienced

    Bypass Foundation via a verified assessment. Go straight to school entry with a capability snapshot on your profile.

Build

Real missions. Real companies. Real engineering.

You ship work for four virtual companies across 16 industry contexts, inside a full execution cockpit — IDE, terminal, AI tutor, and real compute.

Virtual companies

Four companies. Sixteen industries. One engineering record.

Each company has its own CTO, VP of engineering, senior engineers, and tech lead — with review priorities that mirror the real industry they represent.

  • NovaMind AI

    Agent orchestration, generative AI, retrieval quality

    Fast iteration and hallucination control at the bleeding edge of AI.

  • DataForge Labs

    High-throughput data engineering, real-time pipelines, observability

    Scale and reliability under production pressure.

  • VisionArc

    Deep learning, computer vision, edge inference

    Performance under hardware and latency constraints.

  • Sentient Health

    Healthcare AI, explainability, compliance

    Building AI that meets strict regulatory requirements (HIPAA, audit trails).

Industry context

16 industry packs. Every mission lands in a real-world brief.

When a virtual company assigns a ticket, it carries an industry brief — domain constraints, compliance expectations, terminology your reviewer will actually use.

  • Healthcare
  • Finance
  • Retail
  • Manufacturing
  • Logistics
  • Energy
  • Climate
  • Agriculture
  • Education
  • Legal
  • Media
  • Public sector
  • Insurance
  • Real estate
  • Transportation
  • Security

Execution cockpit

Everything you need to ship, in one pane.

A full cloud IDE with terminal, inline AI tutor, real compute, and the named reviewer who sees your work the moment you commit.

  • Full cloud IDE with terminal and language servers — no local setup.
  • Inline AI Tutor — your named Lead Instructor, in-context, answering in the codebase.
  • Real compute — run training jobs, start a database, spin up a small cluster.
  • Review triggers the moment you commit — feedback from a named reviewer, not a queue.

Prove

Every submission graded. Every win verified.

Five dimensions. One Confidence Score. A public profile at safua.ai/u/yourhandle that employers can verify in seconds.

The rubric

Five dimensions. One standard across every mission.

  • Correctness · 94%

    Does the system actually work? The reviewer runs hidden evaluation cases, not just the ones the learner saw.

  • Code quality · 88%

    Modular, tested, documented. Catches AI-generated code that works once but ships untested.

  • Problem solving · 91%

    How the problem was decomposed. The single skill that separates engineers who direct AI from engineers replaced by it.

  • Engineering thinking · 87%

    Architecture, trade-offs, failure modes. Production readiness graded like a senior engineer would grade it.

  • Communication · 95%

    Can the learner explain why they built it this way? The most telling signal when AI generated most of the code.

Confidence Score

A single number employers can read.

Every submission produces a 0.00–1.00 score. Submissions below 0.60 are flagged for resubmission. Learners predict their own score before review — tolerance is ±0.25, and accuracy on that prediction is itself graded.
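The two rules above reduce to simple arithmetic. A minimal sketch (function names are illustrative, not Safua's API):

```python
def prediction_hit(predicted: float, actual: float,
                   tolerance: float = 0.25) -> bool:
    """True when a self-prediction lands within the grading tolerance."""
    return abs(predicted - actual) <= tolerance


def needs_resubmission(actual: float, floor: float = 0.60) -> bool:
    """Scores below the floor are flagged for resubmission."""
    return actual < floor


# Predicting 0.80 against an actual 0.62: off by 0.18, inside +/-0.25.
hit = prediction_hit(0.80, 0.62)
# An actual score of 0.55 sits below the 0.60 floor.
flagged = needs_resubmission(0.55)
```

So a learner can miss their prediction by up to a quarter of the scale and still be graded as calibrated.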

Proof Profile

A public URL employers can verify in seconds.

Every mission, review, artifact, and score lands at safua.ai/u/yourhandle. Hiring managers see the same profile you do — same reviewers, same scores, same trajectory. No private debrief, no screenshot theatre.

Verifiable credentials

W3C-standard. Cryptographically signed. Portable.

Every milestone mints a W3C Verifiable Credential signed by Safua. Verifiers can check the credential against the signed issuer key without contacting Safua. Your profile is yours — not a lock-in.

Your instructors

30+ synthetic engineers with persistent memory.

Named deans, lead instructors, teaching assistants, and company senior engineers. They remember your code, your progress, and your gaps — across every mission you ship.

  • Persistent memory, mission over mission.

    Your Lead Instructor remembers the data model you drew last week and calls you on the decision three missions later when it bites you.

  • The Senior Engineer sees your code, not a summary.

    Reviewers diff your submission against the rubric. They cite lines, not abstractions.

  • Understand → Build → Reflect conversations.

    Every major mission opens with an Understand chat (why are you building this) and closes with a Reflect chat (what you changed and why). Graded on Communication.

  • Per-learner personalisation, not generic prompts.

    Faculty tune their questions to what you have shipped before — quieter on things you have proven, firmer on gaps you keep dodging.

Your trajectory

Real-time career mapping.

Jordan Hayes, the Career Director, maps your profile against live market signals. Weekly briefings, salary benchmarks, skill gaps, and portfolio readiness score — updated every time you ship.

  • Weekly briefings

    Plain-English market updates on the skills employers are actually asking for right now.

  • Salary benchmarks

    Role × region × skill-level bands updated against live job-market signals.

  • Gap analysis

    The skills your profile is missing for the roles you are aiming at — ranked by marginal impact.

  • Portfolio readiness

    A numeric score for how ready your profile is to support an application today.

Beyond the tools

Learn to set up the infrastructure, not just use it.

MLOps learners provision the harness, not just the model. Kubernetes clusters, MLflow deployments, Prometheus monitoring, CI/CD pipelines — configured by hand, reviewed by Viktor.

  • Kubernetes

    Provision a cluster from scratch. Namespaces, RBAC, resource quotas, autoscaling — configured, not copy-pasted.

  • MLflow

    Deploy a tracking server with persistent storage, artifact registry, and a model registry that multiple services can read from.

  • Prometheus + Grafana

    Wire up metrics scraping, define SLOs, author alerts that fire on symptoms engineers actually care about.

  • CI/CD

    Build a pipeline that tests, containerises, deploys, and rolls back — reviewed on idempotency and recoverability, not just the happy path.

Reviewed by Viktor · Lead Instructor, School of MLOps & Infrastructure

Faculty

Meet the reviewers your code will answer to.

30+ named engineers — each with a persona, a persistent memory of your work, and a review style your submissions will learn to anticipate.

See how your work becomes an un-fakeable profile.

Every tool, every review, every shipped project — one continuous engineering record.