
DS213BKK

Intro to Deep Learning

Bangkok Campus
May 27, 2024 - Jun 14, 2024
This course will introduce you to Neural Networks (often referred to simply as AI), one of the most exciting areas of Machine Learning.
Faculty

Igor Slinko

Computer Vision Engineer at SportTotal.tv

Course length

3 weeks

Duration

3 hours
per day

Total hours

45 hours

Credits

4 ECTS

Language

English

Course type

Offline

Fee for single course

€1500

Fee for degree students

€750

Skills you’ll learn

Computer Vision, Neural Networks, Regression, Classification, Solving Natural Language Processing Problems

Overview

By completing this course, students will gain fundamental knowledge and practical skills in deep learning, their first step towards becoming a data scientist.

To start, we'll go through the basics of neural networks. We will explore their architecture and training algorithms, gaining a deep understanding of everything that happens after you click "Start Training". We will also discuss how best to formulate a problem for a neural network, since not every problem is solvable as posed; here, the method of maximum likelihood will assist us.

Next, we will explore two major application areas of neural networks: computer vision and natural language processing. To gain a good understanding of the former, you will study convolutional neural networks, regularisation methods, and normalisation. As for natural language processing, we will discuss transformers, BERT, and GPT. We will also learn how image and text representations can be placed in the same space using CLIP.

Learning highlights

  • Ability to formulate a problem in terms of machine learning
  • Knowledge of specific machine learning tasks, such as regression, classification, detection, segmentation, and text and image generation
  • Knowledge of deep learning architectures: dense and convolutional neural networks, transformers, and their variations
  • Ability to train a deep learning model for a specific business task
  • Knowledge of the basic metrics for evaluating model quality

Course outline

15 classes

Dive into the details of the course and get a sense of what each class will cover.
Monday
Tuesday
Wednesday
Thursday
Friday
Monday
1

Neuron and Neural Network

  • Mathematical Model of a Neuron
  • Theoretical Problems
  • Boolean Operations as Neurons
  • From Neuron to Neural Network
  • Practice: Basic Work in PyTorch
  • Building your First Neural Network
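The "Boolean Operations as Neurons" topic can be previewed in a few lines of plain Python: a neuron is just a weighted sum followed by a threshold. The weights below are hypothetical hand-picked values (not trained) chosen so the neuron computes Boolean AND:

```python
def neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs plus bias is positive."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

# AND gate: the output is 1 only when both inputs are 1.
AND_WEIGHTS, AND_BIAS = [1.0, 1.0], -1.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", neuron(x, AND_WEIGHTS, AND_BIAS))
```

Replacing the hard threshold with a smooth activation such as a sigmoid turns this into the differentiable unit that neural networks are built from.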
Tuesday
2

Building your First Neural Network

  • Recovering Dependencies with Neural Networks
  • Components of a Neural Network
  • Neural Network Tuning Algorithm
  • Theoretical Problems: Computational Graphs and BackProp
  • Theoretical Problems: Dependency Recovery
  • Practice: Implementing Gradient Descent
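The gradient-descent practice above can be sketched on a toy problem: recovering the slope a in y = a·x from a few sample points by minimising mean squared error. The data and learning rate here are made-up illustrative values:

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # generated with the true slope a = 2

def mse_gradient(a):
    """d/da of mean((a*x - y)^2) = mean(2 * (a*x - y) * x)."""
    n = len(xs)
    return sum(2 * (a * x - y) * x for x, y in zip(xs, ys)) / n

a = 0.0      # start far from the answer
lr = 0.01    # learning rate
for _ in range(500):
    a -= lr * mse_gradient(a)   # step against the gradient

print(round(a, 4))   # converges towards the true slope 2.0
```

The same update rule, applied to millions of weights with gradients computed by backpropagation, is what "tuning a neural network" means.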
Wednesday
3

Tasks Solved with Neural Networks in Computer Vision

  • Binary Classification? Binary Cross-Entropy!
  • Multiclass Classification? Softmax!
  • Localization, Detection, Segmentation, and Super-Resolution
  • Theoretical Problems: Loss Functions
  • Practice: Building Your First Neural Network
  • Practice: Classification in PyTorch
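The two slogans above ("Binary Classification? Binary Cross-Entropy!" and "Multiclass? Softmax!") boil down to a few lines each. A minimal plain-Python sketch:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    m = max(logits)                         # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def binary_cross_entropy(p, y):
    """Loss for predicted probability p of the positive class, label y in {0, 1}."""
    eps = 1e-12                             # avoid log(0)
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print([round(p, 3) for p in softmax([2.0, 1.0, 0.1])])  # probabilities summing to 1
print(round(binary_cross_entropy(0.9, 1), 4))           # small loss: confident and correct
print(round(binary_cross_entropy(0.1, 1), 4))           # large loss: confident and wrong
```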
Thursday
4

Optimisation Methods

  • Basic Gradient Descent
  • Modifications of Gradient Descent
  • Theoretical Problems: Understanding SGD with Momentum
  • Practice: Classification of Handwritten Digits with a Fully Connected Network
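The "SGD with Momentum" idea above can be sketched on a toy objective f(w) = (w - 3)², whose gradient is 2(w - 3). The learning rate and momentum values below are illustrative, not prescriptive:

```python
def grad(w):
    return 2 * (w - 3)

w, velocity = 0.0, 0.0
lr, momentum = 0.1, 0.9

for _ in range(200):
    velocity = momentum * velocity + grad(w)   # accumulate a running direction
    w -= lr * velocity                         # step along the smoothed gradient

print(round(w, 4))   # approaches the minimum at w = 3
```

The velocity term damps oscillations across steep directions and speeds up progress along shallow ones, which is why momentum variants dominate plain gradient descent in practice.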
Friday
5

Convolutional Neural Networks

  • Convolution, Convolution Cascades
  • Practice: Implementing a Convolutional Layer
  • Collect Them All: LeNet Architecture (1998)
  • Collect Them All: AlexNet (2012) and VGG (2014)
  • Collect Them All: GoogLeNet and ResNet (2015)
  • Practice: Recognizing Handwritten Digits with a Convolutional Neural Network
  • Theoretical Problems: Architectures of Convolutional Neural Networks
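The "Implementing a Convolutional Layer" practice can be previewed with a bare-bones 2D convolution (strictly, cross-correlation, as in deep-learning libraries) over a tiny made-up image:

```python
def conv2d(image, kernel):
    """Slide the kernel over the image; no padding, stride 1."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A vertical-edge detector on a toy image: left half dark, right half bright.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1]]   # responds where brightness increases left to right

print(conv2d(image, kernel))
```

The architectures listed above (LeNet through ResNet) are, at their core, cascades of exactly this operation with learned kernels.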
Monday
6

Regularisation and Normalisation

  • Fighting Overfitting with Dropout!
  • Dropout Not Enough? Batch Normalisation!
  • Practice: Normalisation Layer
  • Practice: Solving a Classification Problem on the CIFAR Dataset
  • Theoretical Problems: Regularisation
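The "Normalisation Layer" practice centres on one step: shifting each feature to zero mean and unit variance across the batch. A minimal sketch (the learnable scale and shift parameters, gamma and beta, are omitted here):

```python
import math

def batch_norm(values, eps=1e-5):
    """Normalise a batch of scalar activations to zero mean, unit variance."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return [(v - mean) / math.sqrt(var + eps) for v in values]

batch = [2.0, 4.0, 6.0, 8.0]
normed = batch_norm(batch)
print([round(v, 3) for v in normed])   # centred at 0 with unit spread
```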
Tuesday
7

Maximum Likelihood Method and Competition

  • Maximum Likelihood Method
  • Theoretical Problems: Maximum Likelihood Method
  • Practice: Transfer Learning with a Kaggle Competition Example
Wednesday
8

Fundamentals of Natural Language Processing

  • Text Embeddings
  • Practice: Text Classification through One-Hot Encoding and a Linear Model
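The one-hot (bag-of-words) representation used in the practice session can be sketched as follows; the texts below are made-up examples:

```python
def build_vocab(texts):
    """Map each distinct lowercase word to a vector index."""
    vocab = sorted({word for text in texts for word in text.lower().split()})
    return {word: i for i, word in enumerate(vocab)}

def vectorize(text, vocab):
    """Count-of-words vector over the fixed vocabulary; unknown words are dropped."""
    vec = [0] * len(vocab)
    for word in text.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1
    return vec

texts = ["good movie", "bad movie", "good good film"]
vocab = build_vocab(texts)
print(vocab)
print(vectorize("good movie", vocab))
```

Feeding such vectors into a linear model is the baseline that the later embedding-based methods (Word2Vec, transformers) are measured against.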
Thursday
9

Text Embeddings

  • Unsupervised Learning
  • Word2Vec and FastText
  • Practice: Emotion Classification Competition for Twitter Posts
Friday
10

Transformer Architecture

  • Positional Encodings
  • Text Translation Task from One Language to Another
  • Attention Mechanism and the Quadratic Time Computation Problem
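The "Positional Encodings" topic above can be sketched with the sinusoidal scheme from "Attention Is All You Need": even dimensions use sine, odd dimensions cosine, at wavelengths growing geometrically with the dimension index:

```python
import math

def positional_encoding(position, d_model):
    """Sinusoidal encoding of a single position as a d_model-dimensional vector."""
    pe = []
    for i in range(d_model):
        angle = position / (10000 ** (2 * (i // 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

print([round(v, 3) for v in positional_encoding(0, 4)])   # position 0: [0, 1, 0, 1]
print([round(v, 3) for v in positional_encoding(1, 4)])
```

Because attention itself is order-agnostic, these vectors are added to the token embeddings so the model can tell positions apart.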
Monday
11

Splitting the Transformer into Two Parts

  • BERT and GPT
  • HuggingFace Library
  • Practice: Competition on Determining Programming Language from Code
Tuesday
12

Transformers for Computer Vision

  • Discussing the Paper "An Image is Worth 16x16 Words"
  • Practice: Image Classification Task in the CIFAR Dataset through Vision Transformer
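The "image is worth 16x16 words" idea can be previewed by cutting an image into non-overlapping square patches and flattening each into a token vector; the 4x4 toy "image" and 2x2 patch size below are illustrative stand-ins for 224x224 images and 16x16 patches:

```python
def patchify(image, patch):
    """Split an HxW image into non-overlapping patch tokens, row by row."""
    h, w = len(image), len(image[0])
    tokens = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tokens.append([image[i + di][j + dj]
                           for di in range(patch) for dj in range(patch)])
    return tokens

image = [[r * 4 + c for c in range(4)] for r in range(4)]   # a 4x4 toy "image"
tokens = patchify(image, 2)
print(len(tokens), "tokens of length", len(tokens[0]))
print(tokens[0])
```

The Vision Transformer then treats this short "sentence" of patch tokens exactly as a text transformer treats words.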
Wednesday
13

Combining Images and Text

  • CLIP Architecture
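The core of CLIP's matching step can be sketched in isolation: once an image and a text are embedded into the same space, the match score is the cosine similarity between the two vectors. The embeddings below are made-up toy values, not real CLIP outputs:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 = same direction, -1 = opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

image_embedding = [0.9, 0.1, 0.4]    # hypothetical embedding of a dog photo
caption_dog = [0.8, 0.2, 0.5]        # hypothetical embedding of "a dog"
caption_car = [-0.7, 0.6, 0.1]       # hypothetical embedding of "a car"

print(round(cosine_similarity(image_embedding, caption_dog), 3))
print(round(cosine_similarity(image_embedding, caption_car), 3))
```

Training pushes matching image-text pairs toward high similarity and mismatched pairs toward low similarity, which is what places both modalities in one shared space.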
Thursday
14

Recap of the Course

  • Recap of the Course
Friday
15

Final Exam

  • Final Exam

Prerequisites

Python

Students need basic knowledge of linear algebra and calculus. They must remember what the equation of a plane looks like and what a gradient is.

Methodology

Each lesson lasts 3 hours. In the first half, we study new material and go over the homework; in the second half, we work on a practical task. Each week, students take part in a contest or challenge (similar to a Kaggle competition) to train a model for a particular task.

Grading

The final grade will be composed of the following criteria:
40% - Homework
40% - Contests
20% - Participation
Igor Slinko

Computer Vision Engineer at SportTotal.tv

Previously: Samsung AI Center, Yandex, VK, Brickit.app, OneSoil. Master of Computer Science at MIPT.

Igor Slinko obtained a Master's degree in Mathematics and Computer Science at MIPT (Moscow). After that, he worked as a C++ and Python developer at Yandex. Several years later, he turned his attention to Data Science and Computer Vision, moving to a researcher position at Mail.ru and starting to teach Machine Learning at HSE (Moscow). He then became a team lead at the newly opened Samsung AI Center, where he developed Computer Vision algorithms for Robotics. He also collaborated with Michael Romanov to create an open course, "Neural Networks and Computer Vision", which amassed an audience of 50k students.

See full profile

Apply for this course

Snap up your chance to enroll before all spaces fill up.

Intro to Deep Learning

by Igor Slinko

Total hours

45 hours

Dates

May 27 - Jun 14, 2024

Fee for single course

€1500

Fee for degree students

€750

How to secure your spot

Complete the form below to kickstart your application

Schedule your Harbour.Space interview

If successful, get ready to join us on campus

FAQ

Will I receive a certificate after completion?

Yes. Upon completion of the course, you will receive a certificate signed by the director of the program your course belongs to.

Do I need a visa?

This depends on your case. Please check with the Spanish or Thai consulate in your country of residence about visa requirements. We will do our part to provide you with the necessary documents, such as the Certificate of Enrollment.

Can I get a discount?

Yes. The easiest way to enroll at a discounted price is to register for multiple courses, which reduces the cost per individual course. Please ask the Admissions Office about the other kinds of discounts we offer and what you can do to receive one.