Start date: 11 December 2019
Duration: Half Day (3 hours)
Location: Xilinx, City West, Dublin
Cost: Members Free, for up to 2 attendees per company
Course code: N/A
We would like to run this course again in 2020, depending on interest. Please use the email link below to register your interest and to be notified once dates are confirmed.
Computational aspects and engineering challenges of accelerating Machine Learning
Attendees will finish with a good understanding of deep learning algorithms and their scope and applicability; popular architecture choices for acceleration; algorithmic optimization techniques, in particular quantization; and how to take neural networks from frameworks to hardware.
Who is the course for?
Professionals with computer architecture and general programming knowledge who are interested in developing expertise in deep learning, and data scientists who would like to develop expertise in quantizing deep learning algorithms.
What will the course cover?
– General machine learning paradigms
– Applications of deep learning (what works and what doesn’t)
– Basic understanding of the compute and memory requirements of deep learning algorithms
– Popular optimization schemes for CNNs, in particular pruning and reduced precision
o How to train for these optimizations
o In particular, quantization-aware training techniques
– Architectural choices
– End-to-end tool flow with FINN: an example framework that allows customization of CNNs and hardware architectures for CNNs, with a fast prototyping flow on FPGAs
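To give a flavour of the quantization topic above, the sketch below shows symmetric uniform quantization of a weight tensor in Python. It is a minimal illustration, not course material or FINN code; the function names and the per-tensor scaling scheme are our own assumptions.

```python
import numpy as np

def quantize_symmetric(w, num_bits=8):
    """Map a float tensor to signed integers using one per-tensor scale.

    This is a hypothetical helper for illustration only: the largest
    weight magnitude is mapped to the largest representable integer.
    """
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax               # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the integer representation."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight vector and inspect the round trip.
w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_symmetric(w)
w_hat = dequantize(q, s)    # approximate reconstruction of w
```

Reduced-precision training schemes such as quantization-aware training insert this kind of quantize/dequantize round trip into the forward pass so the network learns weights that remain accurate after quantization.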
Yaman Umuroglu: Machine Learning Researcher @ Xilinx Research
Nicholas Fraser: Machine Learning Researcher @ Xilinx Research
Michaela Blott: Distinguished Engineer @ Xilinx Research