Python
The default language of modern AI engineering.
- Type hints, async/await, dataclasses, modern stdlib
- Testing discipline — pytest, fixtures, mocking
- Package management and environment hygiene
Simple enough to ship your first mission in hours. Rigorous enough to earn a credential employers actually trust.
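What those fundamentals look like in practice — a minimal sketch, with illustrative names (no real Safua mission code):

```python
from dataclasses import dataclass, field

# Illustrative only: a typed dataclass of the kind early Python missions exercise.
@dataclass
class Submission:
    mission_id: str
    score: float = 0.0
    tags: list[str] = field(default_factory=list)

    def passed(self, threshold: float = 0.6) -> bool:
        """Type-hinted method: did this submission clear the bar?"""
        return self.score >= threshold

# pytest-style check — run with `pytest`, or call it directly:
def test_passed():
    assert Submission("m1", score=0.72).passed()
    assert not Submission("m2", score=0.41).passed()
```

Type hints, dataclasses, and a pytest-discoverable test in a dozen lines — the discipline the bullets above name, at its smallest scale.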
Foundation
Every learner calibrates on the same six domains before entering a school. Three speeds let you skip what you can already prove and drill what you cannot.
The default language of modern AI engineering.
Ship code the way teams actually ship code.
Every system is an API. Learn the contract.
Data is where most AI systems actually live.
Ship your code in a container. Reproducibly.
Wire models into real systems without breaking them.
All six domains, full depth. Structured passes through every capability with graded missions at each step.
Targeted pass on the gaps the calibration identifies. Skip what you can already prove, drill what you cannot.
Bypass Foundation via a verified assessment. Go straight to school entry with a capability snapshot on your profile.
Build
You ship work for four virtual companies across 16 industry contexts, inside a full execution cockpit — IDE, terminal, AI tutor, and real compute.
Virtual companies
Each company has its own CTO, VP of engineering, senior engineers, and tech lead — with review priorities that mirror the real industry they represent.
Agent orchestration, generative AI, retrieval quality
Fast iteration and hallucination control at the bleeding edge of AI.
High-throughput data engineering, real-time pipelines, observability
Scale and reliability under production pressure.
Deep learning, computer vision, edge inference
Performance under hardware and latency constraints.
Healthcare AI, explainability, compliance
Building AI that meets strict regulatory requirements (HIPAA, audit trails).
Industry context
When a virtual company assigns a ticket, it carries an industry brief — domain constraints, compliance expectations, terminology your reviewer will actually use.
Execution cockpit
A full cloud IDE with terminal, inline AI tutor, real compute, and the named reviewer who sees your work the moment you commit.
Prove
Five dimensions. One Confidence Score. A public profile at safua.ai/u/yourhandle that employers can verify in seconds.
The rubric
Does the system actually work? The reviewer runs hidden evaluation cases, not just the ones the learner saw.
Modular, tested, documented. Catches AI-generated code that works once but ships untested.
How the problem was decomposed. The single skill that separates engineers who direct AI from engineers replaced by it.
Architecture, trade-offs, failure modes. Production readiness graded like a senior engineer would grade it.
Can the learner explain why they built it this way? The most telling signal when AI generated most of the code.
Confidence Score
Every submission produces a 0.00–1.00 score. Submissions below 0.60 are flagged for resubmission. Learners predict their own score before review — tolerance is ±0.25, and accuracy on that prediction is itself graded.
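The scoring rules above fit in a few lines. A sketch — the function names are ours, not Safua's, but the 0.60 floor and ±0.25 tolerance come straight from the text:

```python
def needs_resubmission(score: float) -> bool:
    # Submissions scoring below 0.60 are flagged for resubmission.
    return score < 0.60

def prediction_accurate(predicted: float, actual: float, tol: float = 0.25) -> bool:
    # Learners predict their own score before review; tolerance is +/-0.25.
    return abs(predicted - actual) <= tol

# Example: learner predicted 0.80, reviewer scored 0.62.
print(needs_resubmission(0.62))         # False: 0.62 clears the floor
print(prediction_accurate(0.80, 0.62))  # True: off by 0.18, within +/-0.25
```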
Proof Profile
Every mission, review, artifact, and score lands at safua.ai/u/yourhandle. Hiring managers see the same profile you do — same reviewers, same scores, same trajectory. No private debrief, no screenshot theatre.
Verifiable credentials
Every milestone mints a W3C Verifiable Credential signed by Safua. Verifiers can check the signature against the issuer's published key without contacting Safua. Your profile is yours — not a lock-in.
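The verification flow can be sketched with a stand-in signature scheme. Real W3C Verifiable Credentials use public-key signatures (e.g. Ed25519), so a verifier needs only the issuer's published public key; the HMAC below is a stdlib-only stand-in, and every field beyond the W3C `@context`/`type` shape is illustrative:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"stand-in-for-issuer-key"  # real VCs: an asymmetric keypair

def sign(credential: dict, key: bytes) -> str:
    # Canonicalize, then sign. Real issuers sign with their private key.
    payload = json.dumps(credential, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(credential: dict, signature: str, key: bytes) -> bool:
    # Offline check: no callback to the issuer required.
    return hmac.compare_digest(sign(credential, key), signature)

vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "credentialSubject": {"id": "safua.ai/u/yourhandle", "milestone": "Foundation"},
}
sig = sign(vc, ISSUER_KEY)
print(verify(vc, sig, ISSUER_KEY))  # True
```

The point the copy makes survives the simplification: verification is a local computation over the credential and the issuer's key, which is why no one has to contact Safua to trust the claim.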
Your instructors
Named deans, lead instructors, teaching assistants, and company senior engineers. They remember your code, your progress, and your gaps — across every mission you ship.
Your Lead Instructor remembers the data model you drew last week and calls you on the decision three missions later when it bites you.
Reviewers diff your submission against the rubric. They cite lines, not abstractions.
Every major mission opens with an Understand chat (why are you building this?) and closes with a Reflect chat (what you changed and why). Graded on Communication.
Faculty tune their questions to what you have shipped before — quieter on things you have proven, firmer on gaps you keep dodging.
Your trajectory
Jordan Hayes, the Career Director, maps your profile against live market signals. Weekly briefings, salary benchmarks, skill gaps, and portfolio readiness score — updated every time you ship.
Weekly briefings
Plain-English market updates on the skills employers are actually asking for right now.
Salary benchmarks
Role × region × skill-level bands updated against live job-market signals.
Gap analysis
The skills your profile is missing for the roles you are aiming at — ranked by marginal impact.
Portfolio readiness
A numeric score for how ready your profile is to support an application today.
Beyond the tools
MLOps learners provision the harness, not just the model. Kubernetes clusters, MLflow deployments, Prometheus monitoring, CI/CD pipelines — configured by hand, reviewed by Viktor.
Kubernetes
Provision a cluster from scratch. Namespaces, RBAC, resource quotas, autoscaling — configured, not copy-pasted.
MLflow
Deploy a tracking server with persistent storage, an artifact store, and a model registry that multiple services can read from.
Prometheus + Grafana
Wire up metrics scraping, define SLOs, author alerts that fire on symptoms engineers actually care about.
CI/CD
Build a pipeline that tests, containerizes, deploys, and rolls back — reviewed on idempotency and recoverability, not just the happy path.
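Idempotency and recoverability are properties you can state in code. A minimal sketch — the `deploy` function and its `apply` callable are hypothetical stand-ins for real pipeline steps:

```python
def deploy(target_version: str, current: dict, apply) -> dict:
    """Idempotent deploy step: applying the same version twice is a no-op."""
    if current.get("version") == target_version:
        return current  # already deployed: nothing to do, nothing to break
    try:
        apply(target_version)
        return {"version": target_version, "previous": current.get("version")}
    except Exception:
        apply(current.get("version"))  # recover: roll back to last known-good
        raise

# Happy path, with a stand-in `apply` that does nothing:
state = {"version": "v1"}
state = deploy("v2", state, apply=lambda v: None)  # deploys v2
state = deploy("v2", state, apply=lambda v: None)  # second run: no-op
print(state["version"])  # v2
```

Re-running the pipeline never re-applies work already done, and a failed apply restores the previous version before surfacing the error — the two properties the review rubric names.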
Reviewed by Viktor · Lead Instructor, School of MLOps & Infrastructure
Faculty
30+ named engineers — each with a persona, a persistent memory of your work, and a review style your submissions will learn to anticipate.
Dr. Emeka Adeyemi
Dean
School of Data Engineering
“Data is the foundation. Everything else is built on top of what you build here.”
Marcus
Lead Instructor
School of Data Engineering
“Tell me WHY before you show me HOW.”
Tomás
Teaching Assistant — SQL & Databases
School of Data Engineering
“Before you write the query, draw the tables. What connects them?”
Rina
Teaching Assistant — Pipeline Architecture
School of Data Engineering
“What happens when this task fails at 3am? Show me your retry logic.”
Kwame
Teaching Assistant — Data Modeling
School of Data Engineering
“Model for the questions people will ask, not the data you have.”
Dr. Sarah Lin
Dean
School of Machine Learning
“A model that works but can’t be explained is a liability, not an asset.”
Priya
Lead Instructor
School of Machine Learning
“Before you tune hyperparameters, tell me why you chose this architecture.”
David
Teaching Assistant — Classical ML
School of Machine Learning
“Try a simple baseline first. You’d be surprised how often it wins.”
Mei
Teaching Assistant — Deep Learning
School of Machine Learning
“Think of attention as the model asking: which parts of the input matter most for this output?”
Alejandro
Teaching Assistant — Computer Vision
School of Machine Learning
“Your model is accurate. Now make it run in 50ms on a device with 2GB RAM.”
Dr. Amara Osei
Dean
School of AI Engineering
“AI engineering is not about the model. It’s about the system around the model.”
Kaia
Lead Instructor
School of AI Engineering
“But does it scale?”
Nadia
Teaching Assistant — RAG & Retrieval
School of AI Engineering
“If your retrieval is wrong, your generation is confidently wrong. Fix retrieval first.”
Hiroshi
Teaching Assistant — Fine-Tuning
School of AI Engineering
“Show me your training data before you show me your hyperparameters.”
Zara
Teaching Assistant — AI App Architecture
School of AI Engineering
“What’s the cost per request? Your architecture is only viable if the unit economics work.”
Dr. Rashid Patel
Dean
School of Agentic AI
“You’re not building a tool. You’re building something that makes decisions. Treat that seriously.”
Soren
Lead Instructor
School of Agentic AI
“What happens when agent A and agent B disagree? Design for conflict, not just cooperation.”
Yuna
Teaching Assistant — Agent Frameworks
School of Agentic AI
“Build a tiny agent first. Make it work. Then make it smart.”
Eliot
Teaching Assistant — Memory & Planning
School of Agentic AI
“An agent without memory is just a function call. Memory is what makes it an agent.”
Dr. Chen Wei
Dean
School of MLOps & Infrastructure
“If it’s not in production with monitoring, it doesn’t exist.”
Viktor
Lead Instructor
School of MLOps & Infrastructure
“Your model is only as reliable as your deployment pipeline. Show me the pipeline.”
Fatima
Teaching Assistant — Model Serving
School of MLOps & Infrastructure
“What’s your p99 latency? What’s your cost per prediction? Those two numbers define your architecture.”
Andrei
Teaching Assistant — Monitoring
School of MLOps & Infrastructure
“If your alert fires, you’re already late. Design monitoring that predicts the problem.”
Dr. Adaeze Nwosu
Dean
School of AI Safety & Governance
“The question is never just ‘can we build it?’ It’s ‘should we, and how do we build it responsibly?’”
James
Lead Instructor
School of AI Safety & Governance
“Compliance is not paperwork. It’s architecture. Build it into the system from day one.”
Lucia
Teaching Assistant — Explainability
School of AI Safety & Governance
“If a patient asks why the model flagged them, what do you say? That’s explainability.”
Omar
Teaching Assistant — Red-Teaming
School of AI Safety & Governance
“I found three ways to make your model produce harmful output. Now let’s fix all three.”
Dr. Yuki Tanaka
CTO
NovaMind AI
“Would you deploy this to production with your name on it?”
Dimitri Volkov
VP of Engineering
NovaMind AI
“This isn’t a coding exercise. The retrieval team is blocked until your pipeline ships.”
Rafael Mendes
Senior Engineer — RAG & Retrieval
NovaMind AI
“Your chunking strategy tells me everything about how you think about this problem.”
Isla Nakamura
Senior Engineer — Agent Orchestration
NovaMind AI
“An agent that can’t explain its own decision is an agent you can’t trust.”
Amir Rezaei
Tech Lead
NovaMind AI
“What’s the smallest thing you can ship today that moves us forward?”
Dr. Miriam Okafor
CTO
DataForge Labs
“Show me how this handles a million records. Then we’ll talk about your algorithm.”
Marcus Chen
VP of Engineering
DataForge Labs
“If you can’t monitor it, you can’t ship it.”
Jin Park
Senior Engineer — Pipelines
DataForge Labs
“I evaluate idempotency before business logic.”
Clara Johansson
Senior Engineer — MLOps
DataForge Labs
“A notebook is not a deployment. Show me the Dockerfile, the CI pipeline, and the rollback strategy.”
Luis Morales
Tech Lead
DataForge Labs
“If your teammate can’t understand this code in 6 months, rewrite it.”
Dr. Andreas Müller
CTO
VisionArc
“Fast enough is never fast enough. Find the bottleneck and eliminate it.”
Priyanka Sharma
VP of Engineering
VisionArc
“You have 2GB of RAM and 100ms latency budget. Make it work within those constraints.”
Dr. Fatou Diallo
Senior Engineer — Computer Vision
VisionArc
“Explain the receptive field of your architecture. If you can’t, you don’t understand your model.”
Kai Yamamoto
Senior Engineer — Deep Learning
VisionArc
“The best architecture is the simplest one that meets your requirements. Start there.”
Aisha Patel
Tech Lead
VisionArc
“What’s the smallest thing you can ship today?”
Dr. Elizabeth Okafor
CTO
Sentient Health
“Behind every data point is a patient. Build systems worthy of that responsibility.”
Daniel Nakamura
VP of Engineering
Sentient Health
“Every line of code in healthcare is auditable. Write it like a regulator is reading it.”
Dr. Lena Kowalski
Senior Engineer — Compliance & Safety
Sentient Health
“In healthcare, a logging mistake isn’t a bug — it’s a lawsuit.”
Dr. Ravi Mehta
Senior Engineer — Explainability
Sentient Health
“Your model denied a patient coverage. Can you explain why to their doctor? That’s the standard.”
Sophie Tremblay
Tech Lead
Sentient Health
“We ship slower than other companies. But what we ship never harms a patient.”
Jordan Hayes
Career Director
Safua Career Office
“Your profile says more than your resume ever could.”
Every tool, every review, every shipped project — one continuous engineering record.