How Safua Works

Safua is structured as a three-phase engineering residency. Every learner passes through Foundation, then Build, then Prove. Each phase has a distinct purpose and a distinct output. Together they produce a public, verifiable record of what an engineer can actually do in production.

This page is the five-minute version of the platform. Each concept linked here has a dedicated page with more depth.

Phase 1 — Foundation#

Foundation calibrates what a learner already knows.

Before anyone ships a mission, Safua needs an honest picture of their baseline. Self-reported skills don't cut it. Neither do multiple-choice assessments — they measure recall, not engineering judgment. Foundation uses six domains:

  1. Python (syntax, idioms, testing)
  2. Git (branching, rebasing, merge conflict resolution)
  3. APIs (HTTP semantics, auth, pagination, error handling)
  4. SQL (joins, aggregations, query planning)
  5. Docker (images, volumes, composition)
  6. AI integration (prompting, embeddings, API patterns)

Each domain is a small adaptive probe: the probe picks questions based on the learner's responses and stops as soon as it has enough signal to place them. A confident junior completes Foundation in an afternoon. Someone genuinely strong in, say, SQL but weak in Docker leaves with a skew — their Build missions are calibrated to that skew rather than a uniform starting line.
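Safua doesn't publish its probe internals, but the shape described above — adjust difficulty with each response, stop once the placement settles — can be sketched as a simple difficulty ladder. Everything here (level counts, stopping rule, function names) is invented for illustration:

```python
import statistics

def run_probe(answer, levels=10, max_questions=12, settle=3):
    """Minimal adaptive-probe sketch: step difficulty up after a correct
    answer and down after a miss, stopping early once the last few
    placements agree within one level (enough signal to place the learner).

    `answer(difficulty)` returns True if the learner answers a question
    at that difficulty correctly. All parameters are illustrative.
    """
    level = levels // 2          # start in the middle of the band
    history = []
    for _ in range(max_questions):
        correct = answer(level)
        history.append(level)
        level = min(levels - 1, level + 1) if correct else max(0, level - 1)
        # Stop as soon as the placement has settled (oscillating between
        # two adjacent levels counts as settled).
        recent = history[-settle:]
        if len(history) >= settle and max(recent) - min(recent) <= 1:
            break
    return round(statistics.mean(history[-settle:]))

# A learner who reliably answers anything at difficulty <= 7
# gets placed at level 7 after five questions:
placement = run_probe(lambda d: d <= 7)
```

The early-stop rule is what makes the probe cheap: a learner with a clear edge answers far fewer questions than one whose responses oscillate across the band.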

The output of Foundation is not a grade. It's a placement — which missions in Build are at the learner's edge, which are below, and which are still too far ahead. Foundation doesn't rank; it routes.

Phase 2 — Build#

Build is where the work happens.

Every learner in Build is assigned missions. A mission is a real engineering brief — the kind a senior engineer writes when they need a junior to ship something. Build a batch-to-streaming ingest migration that preserves idempotency. Add a HIPAA audit trail to an existing AI pipeline without blocking the request path. Red-team a RAG system and document the three prompt-injection vectors you found.

Missions live inside one of four virtual companies — NovaMind AI, DataForge Labs, VisionArc, Sentient Health. Each company has its own industry framing and its own review priorities. A mission shipped to Sentient Health gets graded against a different compliance bar than the same shape of mission at NovaMind — because real healthcare engineering doesn't forgive PHI leaks.

Every submitted mission is reviewed by a named senior engineer — a specific person on the faculty, not an anonymous grader. The reviewer grades the work against Safua's five-dimension rubric and leaves written feedback. Reviews include concrete references to the submitted artifacts: the function that doesn't handle the null case, the test that would have caught it, the design choice that made the null case possible in the first place.

A learner doesn't finish a mission when the code runs. They finish when the review lands and they've iterated on what the reviewer flagged. Iteration is the pedagogy. First-pass code is rarely the code that ships; the point of Build is to practice the cycle of submit → review → revise at production cadence, in public, with real stakes, before you're doing it with a paycheck on the line.

Phase 3 — Prove#

Prove graduates learners into a public record of their work.

Every mission completed in Build — including the reviews, the iterations, and the aggregate Confidence Score — lands on a public profile at safua.ai/u/<handle>. Employers verify the profile directly; no phone call, no "take our word for it."

The Proof Profile shows:

  • Every mission: brief, submitted artifact (reviewer-approved excerpts only — never raw code), review summary, iteration history.
  • Every score: per-dimension breakdown across the five review dimensions, aggregated per-school Confidence Scores, trend line over the residency.
  • Top artifacts: the reviewer-flagged highlights — specific functions, design decisions, write-ups that demonstrate strength.
  • Emerging strengths: the pattern across missions that the reviewer feedback has surfaced (e.g., "communication consistently strong on governance-heavy missions; correctness still developing in agent orchestration contexts").

The profile is the credential. It's not a PDF certificate, not a badge image, not a LinkedIn endorsement — it's the submitted work, linked to the review, the reviewer's identity, and the date of submission. Every record is signed, timestamped, and anchored to an engineer whose public profile on the Faculty page is verifiable. Tampering with a profile entry would require forging a named engineer's review signature; that forgery cost is the security property the system is designed around.
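Safua's actual signing scheme isn't documented here, but the tamper-evidence property works roughly like this sketch: canonicalise the record, sign it with the reviewer's key, and reject any entry whose signature no longer matches. The field names and key are invented; a real deployment would use an asymmetric scheme such as Ed25519 so anyone can verify with only the reviewer's public key, whereas this stdlib-only sketch uses an HMAC as a stand-in:

```python
import hashlib
import hmac
import json

def sign_entry(entry: dict, reviewer_key: bytes) -> str:
    """Sign a profile entry. Canonical JSON (sorted keys, no whitespace)
    is MACed, so changing the mission, score, reviewer, or timestamp
    changes the signature."""
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hmac.new(reviewer_key, canonical.encode(), hashlib.sha256).hexdigest()

def verify_entry(entry: dict, signature: str, reviewer_key: bytes) -> bool:
    expected = sign_entry(entry, reviewer_key)
    return hmac.compare_digest(expected, signature)  # constant-time compare

key = b"reviewer-demo-key"   # stand-in for a reviewer's signing key
entry = {
    "mission": "rag-red-team",
    "reviewer": "rafael.mendes",
    "score": {"correctness": 4, "communication": 5},
    "submitted_at": "2025-01-15T10:30:00Z",
}
sig = sign_entry(entry, key)
assert verify_entry(entry, sig, key)

# Bumping any field — here, the correctness score — invalidates the signature:
tampered = dict(entry, score={"correctness": 5, "communication": 5})
assert not verify_entry(tampered, sig, key)
```

The design choice that matters is canonicalisation: without a deterministic serialisation, two byte-level encodings of the same record would verify differently.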

Six schools#

Under the residency framing, Safua organises its curriculum into six engineering schools, each with its own faculty, its own mission catalog, and its own standards:

A learner can focus on one school or traverse several. Each school defines its own Confidence threshold for graduation and its own weighting across the five dimensions — correctness weighs more in Safety & Governance than in Data Engineering, for example. The rubric is uniform; the weights are school-specific.
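The uniform-rubric, school-specific-weights split can be made concrete with a small sketch. The dimension names, scores, and weights below are invented (Safua's real rubric dimensions and weights aren't listed on this page); the point is only that the same review scores yield different Confidence Scores under different schools' weightings:

```python
def confidence_score(dimension_scores, school_weights):
    """Aggregate per-dimension review scores (0-5 scale assumed) into a
    single Confidence Score using one school's weights. The rubric
    (the set of dimensions) is the same everywhere; only weights vary."""
    assert set(dimension_scores) == set(school_weights), "rubric mismatch"
    total_weight = sum(school_weights.values())
    return sum(dimension_scores[d] * w
               for d, w in school_weights.items()) / total_weight

# One learner's averaged review scores across five hypothetical dimensions:
scores = {"correctness": 4.0, "design": 3.5, "testing": 4.5,
          "communication": 5.0, "operability": 3.0}

# Same rubric, different weights: a governance-flavoured school weights
# correctness more heavily than a data-engineering-flavoured one.
governance = {"correctness": 3, "design": 2, "testing": 2,
              "communication": 2, "operability": 1}
data_eng   = {"correctness": 2, "design": 2, "testing": 2,
              "communication": 1, "operability": 3}

governance_score = confidence_score(scores, governance)  # 4.1
data_eng_score = confidence_score(scores, data_eng)      # 3.8
```

Because the dimension set is fixed, scores stay comparable across schools even though each school's graduation threshold is applied to its own weighted aggregate.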

Who reviews the work#

Safua's faculty — all 30+ of them — are the named engineers behind every review. Each one has a specialty, an organisational affiliation (school, company, or the Career Office), and a review voice that shows up consistently in feedback. Rafael Mendes's feedback on a RAG mission reads differently from Jin Park's feedback on a pipeline mission, and that's the point: learners develop taste by being reviewed by distinct engineers with distinct standards.

The faculty list is public. Reviews are attributed. There's no anonymous grader pool.

What comes next#

The pages linked below go deeper into each concept:

If you're here to evaluate Safua for a team, the enterprise page covers procurement, SSO, audit exports, and deployment options. If you're an individual learner, the pricing page covers the plans.