Bibliographic resources
Mandatory bibliographic resource
Optional bibliographic resource
Summary Sheet
Academic Year 2019/2020
Assignment type: Doctorate
CFU: 5.00  Course type: Monodisciplinary
Lecturers: Coordinator (co-lecturers): Silvano Cristina


Detailed program and expected learning outcomes

Objectives of the course:

Recent trends in neural networks have established hardware accelerators as the most viable solution for several classes of applications, such as image recognition and classification for self-driving cars, computer vision, and speech recognition. The main goal of the course is to present the major concepts and the different hardware optimization techniques used in computer architectures to accelerate Deep Neural Networks. In more detail, the course presents and discusses how the mapping of DNNs interacts with the runtime usage of hardware resources to optimize performance and memory bandwidth. The overall goal is to understand the most recent commercial computer architecture solutions and the trends that drive the evolution of research in the field.

Program of the course:

  1. Recap on Deep Neural Networks: Basic concepts and terminology; Fully-connected Neural Networks; Convolutional Neural Networks; Recurrent Neural Networks; Long Short-Term Memory;
  2. Overview on Deep Neural Networks: LeNet, AlexNet, OverFeat, VGGNet, GoogLeNet, ResNet; Datasets and metrics to evaluate and compare hardware solutions;
  3. Deep Convolutional Neural Networks: Compiler optimizations of convolutional kernels; convolutional kernel computation in hardware; memory access and data reuse; fused layers; energy efficiency transformations; approximate precision;
  4. Methodologies for design space exploration and runtime mapping of DCNNs to HW accelerators;
  5. Survey of DNN architectures: Intel CPUs for Deep Learning; Nvidia GPUs for Deep Learning;
  6. Survey of DNN accelerators: Google TPU family; STMicroelectronics Orlando SoC; Intel Movidius VPUs; MIT Eyeriss DNN Accelerator; Imperial College FPGAConvNet for DL; Nvidia NVDLA on Xavier SoC; GreenWave GAP8;
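As a minimal illustration of the convolutional kernel computation covered in item 3, the naive loop nest below is the starting point that the compiler and hardware optimizations discussed in the course (loop reordering, tiling for data reuse, layer fusion) transform. This is a sketch for orientation only, not course material; the variable naming convention (M filters, C channels, R×S filter window) is an assumption:

```python
import numpy as np

def conv2d(ifmap, weights):
    """Direct convolution as the nested loops an accelerator reorders and tiles.

    ifmap:   (C, H, W)     input feature map
    weights: (M, C, R, S)  M filters over C input channels
    returns: (M, H-R+1, W-S+1) output feature map (no padding, stride 1)
    """
    C, H, W = ifmap.shape
    M, _, R, S = weights.shape
    out = np.zeros((M, H - R + 1, W - S + 1))
    for m in range(M):                      # output channels
        for y in range(H - R + 1):          # output rows
            for x in range(W - S + 1):      # output columns
                for c in range(C):          # input channels
                    for r in range(R):      # filter rows
                        for s in range(S):  # filter columns
                            # one multiply-accumulate (MAC) per iteration:
                            # the unit of work hardware accelerators parallelize
                            out[m, y, x] += ifmap[c, y + r, x + s] * weights[m, c, r, s]
    return out
```

Each input activation is read R×S times and each weight (H-R+1)×(W-S+1) times across the loop nest, which is why the memory-access and data-reuse techniques of item 3 matter.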

Prerequisites: knowledge of computer architectures at the level of the MSc course on "Advanced Computer Architectures" is required, together with basic concepts of deep learning.


Notes on the assessment method

Expected learning outcomes:

DdD 1 (Knowledge and understanding): Students will learn how to:

  • Understand and analyze the key design solutions to implement DCNNs;
  • Understand the most advanced techniques used to accelerate in hardware DNNs;
  • Understand recent architectures and technology trends and opportunities;

DdD 2 (Applying knowledge and understanding): Given specific problems and project cases, students will be able to:

  • Evaluate the performance of DNN accelerators by applying the appropriate metrics;
  • Understand power/performance tradeoffs among various design architecture solutions;
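The two abilities above rest on a small set of standard accelerator metrics. A minimal sketch of how throughput and energy efficiency are typically derived from measured quantities follows; the function name and the example numbers (latency and power) are hypothetical, chosen only for illustration, while the ~724M MACs per AlexNet inference is a commonly cited figure:

```python
def accelerator_metrics(macs, latency_s, power_w):
    """Derive common DNN-accelerator metrics from measured quantities.

    macs:      multiply-accumulate operations per inference
    latency_s: time per inference, in seconds
    power_w:   average power draw, in watts
    returns:   (throughput in GOPS, energy efficiency in GOPS/W)
    """
    ops = 2 * macs                              # 1 MAC = 2 ops (multiply + add)
    throughput_gops = ops / latency_s / 1e9     # billions of ops per second
    efficiency_gops_per_w = throughput_gops / power_w
    return throughput_gops, efficiency_gops_per_w

# Illustrative (hypothetical) operating point: AlexNet-sized workload,
# 10 ms per inference at 5 W -> ~144.8 GOPS, ~29 GOPS/W.
print(accelerator_metrics(724e6, 0.01, 5.0))
```

Comparing two designs on GOPS alone can be misleading; the GOPS/W ratio is what distinguishes, e.g., an embedded accelerator from a server GPU at similar throughput.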

DdD 3 (Making judgements): Given specific problems and project cases, students will be able to analyze the performance goals and autonomously compare different architectures for Deep Neural Networks in terms of power/performance tradeoffs.

DdD 4 (Communication skills): Students will learn how to orally present a scientific text and how to discuss the benefits and drawbacks of different technical solutions in front of their colleagues.

DdD 5 (Learning skills): Students will be able to autonomously learn the most advanced research techniques related to computer architectures for Deep Neural Networks.

Final exam:

At the end of the course, an oral exam will verify the level of knowledge reached by each student. The oral exam will cover the concepts and techniques presented during the course (expected learning outcomes 1, 2, 3, 5).

Optionally, students can ask to be assigned: 1) a scientific paper to present orally and discuss in terms of benefits and drawbacks; or 2) a design project to implement selected solutions and run empirical experiments using open-source design tools (expected learning outcomes 1, 2, 3, 5).


Teaching activity period
Start date
End date

Textual calendar of the teaching activity
Due to the COVID emergency, the course schedule has been reorganized into Microsoft Teams online sessions over two periods: a first period from 15 June to 15 July 2020 and a second period from 1 to 14 September 2020. Final exams can be taken from 15 September 2020.
Course schedule (25 h):
1. Introduction to the course (1h), Mon. 15 June 2020, 9:00-10:00
2. Introduction to Image Classification and Neural Networks (2h), Mon. 15 June 2020, 10:00-12:00
3. Convolutional Neural Networks for Image Classification (2h), Mon. 15 June 2020, 13:00-15:00
4. Convolutional Neural Networks for Advanced Visual Recognition Tasks (2h), Wed. 17 June 2020, 10:00-12:00
5. Overview on Deep Neural Networks, Datasets and Metrics (2h), Wed. 1 July 2020, 9:00-11:00
6. Efficient Deep Neural Networks (2h), Wed. 8 July 2020, 9:00-11:00
7. Deep Neural Networks in Hardware (2h), Wed. 15 July 2020, 9:00-11:00
8. DNN Compiler Optimizations (2h), Tue. 1 Sept. 2020, 9:00-11:00
9. Exploration and Mapping of DNNs to HW Accelerators (2h), Tue. 1 Sept. 2020, 11:00-13:00
10. Survey of DNN Architectures (4h), Mon. 7 Sept. 2020, 9:00-13:00
11. Survey of DNN Accelerators (4h), Mon. 14 Sept. 2020, 9:00-13:00
Periodicity: the course is offered every two years.

Mandatory bibliographic resource: slides will be provided to accompany the lectures.

Slides will be available through the course website on Beep.

Mandatory bibliographic resource: links to scientific literature and tutorials will be provided.

Software used
No software required

Mix of teaching formats
Teaching format / Teaching hours
Computer laboratory
Experimental laboratory
Project laboratory

Information in English to support internationalization
Course taught in English

Lecturer's notes