DS314
Faculty Profiles

Aleksandr Kuznetsov
ML/AI Architect & Tech Lead

Sergey Cherepanov
CTO at Squad.Education
Skills you’ll learn
This course teaches students to design, build, and reason about agentic AI systems — software systems where language models act as runtime components that reason, use tools, manage memory, and coordinate with humans and other agents. The first half builds a working multi-agent assistant from minimal Python and direct provider APIs, without hiding architecture behind frameworks. The second half introduces production frameworks, retrieval, evaluation, observability, guardrails, and deployment. By the end, students will have built, tested, and presented a complete agentic system.
15 classes
Practical techniques for using coding assistants: feedback loops, context as a constraint, condensation, subagents as noise bins. Paired contrasts make the workflow-vs-agent distinction concrete, and the concepts students extract become the course map.
The LLM as a precise callable: API mechanics, structured output, and the context window as a design constraint. Reasoning models and per-call model routing. Hands-on: a Python chatbot with a two-model router.
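Per-call model routing can be sketched in a few lines. This is an illustrative stub, not the course's reference code: the model names are placeholders, and the keyword heuristic stands in for whatever routing signal a real router would use.

```python
# Hypothetical two-model router: send short, simple turns to a cheap model
# and route messages that look like multi-step work to a reasoning model.
FAST_MODEL = "small-fast-model"            # placeholder model name
REASONING_MODEL = "large-reasoning-model"  # placeholder model name

def pick_model(user_message: str) -> str:
    """Choose a model per call based on a crude complexity heuristic."""
    needs_reasoning = (
        len(user_message) > 400
        or any(kw in user_message.lower() for kw in ("plan", "compare", "step by step"))
    )
    return REASONING_MODEL if needs_reasoning else FAST_MODEL
```

The chatbot then passes `pick_model(message)` as the model parameter on each API call, so routing is a per-call decision rather than a global setting.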
Function calling, tool schemas, and the observe → reason → act → verify loop with stopping conditions. Irreversibility-based human-in-the-loop. Hands-on: the first agent with one approval gate.
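The observe → reason → act → verify loop can be sketched as below. The `llm_decide` function is a deterministic stub standing in for a real function-calling model, and the tool set is hypothetical; the point is the loop shape and its explicit stopping conditions.

```python
def llm_decide(observation: dict) -> dict:
    """Stub: a real implementation would call a provider API with tool schemas."""
    if "result" in observation:
        return {"action": "finish", "args": observation["result"]}
    return {"action": "lookup", "args": "query"}

TOOLS = {"lookup": lambda q: {"result": f"answer for {q}"}}

def run_agent(task: str, max_steps: int = 5) -> str:
    observation = {"task": task}
    for step in range(max_steps):           # stopping condition 1: step budget
        decision = llm_decide(observation)  # reason over the latest observation
        if decision["action"] == "finish":  # stopping condition 2: model declares done
            return decision["args"]
        tool = TOOLS.get(decision["action"])
        if tool is None:                    # verify: refuse unknown tool calls
            raise RuntimeError(f"unknown tool {decision['action']!r}")
        observation = tool(decision["args"])  # act, then observe the result
    raise RuntimeError("step budget exhausted")
```

An approval gate for irreversible actions would sit just before the tool call, pausing for human confirmation when the chosen tool is destructive.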
Separating state layers — conversation, task state, operating rules, durable knowledge — and session routing as a first-class concern. Context built deliberately each run; JSON for prototypes, SQLite for structure.
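The SQLite variant of the layered store can be sketched as one table per state layer, with context assembled deliberately on each run. Table and column names here are assumptions for illustration, not the course schema.

```python
import sqlite3

# One table per state layer: conversation, task state, operating rules,
# durable knowledge. In-memory here; a file path would persist across runs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE conversation (session_id TEXT, turn INTEGER, role TEXT, content TEXT);
CREATE TABLE task_state   (session_id TEXT, key TEXT, value TEXT);
CREATE TABLE rules        (name TEXT PRIMARY KEY, body TEXT);
CREATE TABLE knowledge    (topic TEXT, fact TEXT, learned_at TEXT);
""")

def build_context(session_id: str, last_n: int = 10) -> dict:
    """Assemble the prompt context deliberately each run: all standing
    rules, plus only the most recent turns of this session."""
    rules = [body for (body,) in conn.execute("SELECT body FROM rules")]
    turns = conn.execute(
        "SELECT role, content FROM conversation WHERE session_id = ? "
        "ORDER BY turn DESC LIMIT ?", (session_id, last_n)
    ).fetchall()[::-1]  # restore chronological order
    return {"rules": rules, "recent_turns": turns}
```

Keeping the layers in separate tables makes session routing explicit: a new session starts with empty conversation and task state but inherits rules and durable knowledge.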
When a split is actually justified, and the orchestrator-worker, planner-executor, and router-specialist patterns. Standing orders written before any system prompt. Hands-on: orchestrator plus two specialists with scoped tools and explicit authority bands.
Moving from chat window to “ambient” agents: meeting users where they live, acting without being asked, and integrating with real data. The email-copilot walkthrough. Hands-on: add one non-request-response trigger.
Acceptance criteria before code, then unit and smoke tests adapted for non-deterministic loops. Failure-triage workshop across tool-error propagation, runaway loops, context overflow, and over-delegation. Supervised integration before the midpoint demo.
Students demonstrate their first-half systems: end-to-end flow, architecture, authority boundaries, and failure handling. Transition to the second instructor.
LLM-abstraction frameworks (LangChain, PydanticAI) vs. orchestration frameworks (LangGraph): what each replaces and when raw Python is still the answer. Hands-on: refactor one first-half component and compare what was easier, what was hidden.
Embeddings, chunking, and why pure vector search fails on exact terms. Hybrid retrieval (BM25 + RRF), rerankers, and agentic RAG loops as long-term memory on top of the first-half store.
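Reciprocal rank fusion (RRF) is small enough to show in full. This is a minimal sketch: it takes ranked ID lists from any retrievers (e.g. BM25 and vector search) and fuses them without needing comparable scores; `k = 60` is the conventional damping constant from the RRF literature.

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists: each document scores sum(1 / (k + rank)) over
    every list it appears in, so items ranked well by multiple retrievers
    rise to the top without score normalisation."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A document ranked first by BM25 and second by vector search beats one that only a single retriever liked, which is exactly the behaviour hybrid retrieval wants.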
RAG evaluation (faithfulness, context precision/recall, answer relevance) separated from agent evaluation (trajectory metrics, tool selection, goal completion). LLM-as-judge biases; pass@k vs. pass^k; honest reading of GAIA, SWE-bench, τ-bench.
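The pass@k vs. pass^k distinction is easy to make concrete. Given n sampled runs of a task with c successes, pass@k estimates the chance that at least one of k attempts succeeds (capability), while pass^k estimates the chance that all k attempts succeed (reliability); the combinatorial forms below are the standard unbiased estimators.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """P(at least one of k sampled attempts succeeds), given c of n passed."""
    if n - c < k:          # too few failures to fill k draws: certain success
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def pass_hat_k(n: int, c: int, k: int) -> float:
    """P(all k sampled attempts succeed) — the reliability metric."""
    if c < k:              # too few successes to fill k draws: certain failure
        return 0.0
    return comb(c, k) / comb(n, k)
```

With 5 successes out of 10 runs, pass@2 is about 0.78 while pass^2 is about 0.22: the same agent looks capable by one metric and unreliable by the other, which is why the two must be reported separately.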
OpenTelemetry GenAI semantic conventions as the vendor-neutral standard: span types, attributes, metrics. MLflow Tracing as an OTel-compatible backend. Hands-on: instrument end-to-end and debug a planted failure from a trace.
Prompt injection defense in depth: input filtering, structural separation, output filtering, and capability constraints. ML-based guardrails, sandboxed code execution, permission scoping, and adversarial tests wired into the eval suite.
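Two of the cheaper layers can be sketched directly; the patterns and tag names below are illustrative assumptions, and a real deployment would combine them with ML-based guardrails and capability constraints rather than rely on them alone.

```python
import re

# Assumed heuristic patterns — a deliberately incomplete input-filtering layer.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your|the) system prompt",
    r"you are now",
]

def screen_input(user_text: str) -> bool:
    """Input-filtering layer: flag obvious injection phrasing. Returns
    True when the text passes, False when it should be blocked or escalated."""
    return not any(
        re.search(p, user_text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS
    )

def wrap_untrusted(text: str) -> str:
    """Structural-separation layer: delimit untrusted content so the prompt
    can instruct the model to treat it as data, never as instructions."""
    return f"<untrusted_document>\n{text}\n</untrusted_document>"
```

Neither layer is sufficient on its own; the defense-in-depth framing is that each layer only has to catch what the previous ones missed.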
Async execution, streaming, and cost optimization through model routing, caching, and batching. Containerization, staging environments, and CI/CD with eval-gated deploys. An end-to-end production walkthrough.
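Of the cost levers above, exact-match response caching is the simplest to sketch. This is a minimal in-process version — real deployments typically use a shared store such as Redis and consider provider-side prompt caching — and `call_fn` is a stand-in for the actual API call.

```python
import hashlib
import json

_cache: dict[str, str] = {}

def cached_call(model: str, prompt: str, call_fn) -> str:
    """Exact-match response cache: identical (model, prompt) pairs are
    served from memory, so repeated calls never hit the paid API."""
    key = hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_fn(model, prompt)
    return _cache[key]
```

Exact-match caching only helps with literally repeated prompts; semantic caching (matching on embedding similarity) trades correctness risk for a higher hit rate.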
Students present second-half deltas on top of the midpoint baseline: framework refactor, retrieval, evaluation results, observability, safety posture, and production readiness.
Proficiency in Python programming.
Comfort with the command line and development environments (VS Code).
Working familiarity with git (branches, commits, diffs, merges).
Basic exposure to machine learning concepts (what a model is, what training and inference mean).
Ability to read and work with APIs and JSON.
Advanced topics such as distributed systems, DevOps, containerisation, or deep ML theory are not prerequisites.
Each three-hour session is split into two halves of roughly 90 minutes. Each half opens with 30–40 minutes of concept introduction, followed by 50–60 minutes of guided notebook walkthroughs, live demonstrations, and hands-on implementation.
The first seven sessions emphasise building from minimal abstractions: students work in Python with Visual Studio Code and Jupyter Notebooks, using direct provider APIs. Frameworks and higher-level tools are introduced in the second half, after students understand what those frameworks are implementing.
Students build a single assistant that evolves across all 15 sessions. Each homework assignment extends the system built so far — beginning with personal configuration of a coding assistant, growing into a working multi-agent assistant by the midpoint demonstration (Session 8), then hardening with frameworks, retrieval, evaluation, observability, and production practices in the second half. The instructor demonstrates concepts using a reference project; students choose from two to three offered tracks and build a structurally related project of their own.
Aleksandr is an AI/ML Tech Lead based in Barcelona, with over 10 years of experience across research and industry in healthcare, pharma, and life sciences. He specialises in Natural Language Processing, Computer Vision, and Machine Learning Engineering, with a focus on production-grade ML solutions for clients ranging from early-stage startups to large enterprises.
As a technical leader, he has guided ML teams and led the end-to-end development of AI systems — from prototyping and validation through to deployment on Amazon Web Services and Google Cloud Platform. His backend engineering work spans Python and Rust.
Sergey is the CTO of Gauss, a US-focused fintech startup, with over 10 years of experience across engineering leadership and hands-on development. He specialises in backend architecture, functional programming, and production agentic AI systems, with a focus on shipping reliable and maintainable software.
As a technical leader, he has guided engineering teams and led the end-to-end development of complex products, from early prototypes through to production deployment in the cloud. His engineering work spans Haskell, Python, and TypeScript, and he recently spoke in Barcelona on LLM-assisted engineering inside large codebases.
Apply for this course
by Aleksandr Kuznetsov, Sergey Cherepanov
Total hours
45 Hours
Dates
Jul 20 - Aug 07, 2026
Fee for single course
€1500
Fee for degree students
€750
How to secure your spot
Complete the form below to kickstart your application
Schedule your Harbour.Space interview
If successful, get ready to join us on campus
FAQ
Will I receive a certificate after completion?
Yes. Upon completion of the course, you will receive a certificate signed by the director of the program your course belongs to.
Do I need a visa?
This depends on your case. Please check with the Spanish or Thai consulate in your country of residence about visa requirements. We will do our part to provide you with the necessary documents, such as the Certificate of Enrollment.
Can I get a discount?
Yes. The easiest way to enroll in a course at a discounted price is to register for multiple courses, which reduces the cost per individual course. Please ask the Admissions Office for more information about the other kinds of discounts we offer and how to qualify for them.