Class projects
At Columbia University, students have the opportunity to do a project on ESP as part of CSEE 6868 - Embedded Scalable Platforms, an interdisciplinary seminar on the design and programming of SoC architectures. CSEE 6868 builds upon the concepts learned in CSEE 4868 - System-on-Chip Platforms, a course described in the paper Teaching Heterogeneous Computing with System-Level Design Methods. This page collects information and material for the CSEE 6868 class projects.
Design and integration of an accelerator with ESP
For this project, each student will use ESP to design one or more accelerators and integrate them into a system-on-chip (SoC) capable of booting Linux. The student will then evaluate the SoC both in RTL simulation and on FPGA.
To get a more practical sense of the project, you should familiarize yourself with ESP by using the resources on this website. Specifically:
- Check out the ESP website Homepage, including the short introductory video.
- Watch the 16-minute overview video in the Documentation section.
- Watch the videos and read the guides of the relevant hands-on tutorials available in the Documentation section. Especially relevant are the “How to: setup” and “How to: design a single-core SoC” guides, as well as the “How to: design an accelerator in …” guide that applies to your specific project.
- Explore the rest of the website to get the full picture of the ESP project.
Accelerator flows
For your project proposal, you are asked to choose which design flow you want to use to build your accelerator.
ESP offers multiple accelerator design flows: the Stratus HLS flow (accelerator designed in SystemC), the Vivado HLS flow (accelerator designed in C/C++), the Catapult HLS flow (accelerator designed in C/C++), and the hls4ml flow (accelerator designed in Keras/PyTorch/ONNX).
Other options include designing the accelerator in RTL (Verilog, VHDL, SystemVerilog, Chisel). These options are not fully supported and documented yet: they can be used, but they require a bigger integration effort.
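For the C/C++ HLS flows, it may help to see the general shape of the code those tools consume. The sketch below is a hypothetical outline, not ESP-generated code: it shows the load/compute/store phase structure around private local memories (PLMs) that ESP accelerators follow, using a toy vector-add kernel; all names and sizes are placeholders that the skeleton generated for your accelerator will replace.

```c
/* Hypothetical outline of an ESP accelerator for a C/C++ HLS flow.
 * The generated skeleton provides the real interfaces; the names
 * and the private-local-memory (PLM) arrays here are placeholders. */
#define LEN 1024

static int plm_in[2 * LEN]; /* private local memory, filled by DMA */
static int plm_out[LEN];

/* Load phase: bring data from main memory into the PLM (stubbed). */
static void load(const int *mem)
{
	for (int i = 0; i < 2 * LEN; i++)
		plm_in[i] = mem[i];
}

/* Compute phase: the kernel operates only on local memories. */
static void compute(void)
{
	for (int i = 0; i < LEN; i++)
		plm_out[i] = plm_in[i] + plm_in[LEN + i];
}

/* Store phase: write results back to main memory (stubbed). */
static void store(int *mem)
{
	for (int i = 0; i < LEN; i++)
		mem[i] = plm_out[i];
}

void accelerator_top(const int *in, int *out)
{
	load(in);
	compute();
	store(out);
}
```

Keeping the three phases separate lets the HLS tool overlap DMA transfers with computation (e.g., via ping-pong buffering of the PLMs), which is a common first optimization during design space exploration.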
Accelerator choice
For your project proposal, you are asked to choose which application you want to target with hardware acceleration. The following list of benchmark suites provides a variety of applications that are suitable for hardware acceleration. We recommend selecting your application(s) of choice from these benchmark suites. Alternatively, you can propose your own application, for which you should provide a clean implementation written in C.
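If you propose your own application, a small self-checking C program is a good shape for that deliverable. The example below is purely illustrative (the same toy vector-add used above): the kernel is isolated from the test harness so that the same function can later serve directly as the golden model for the accelerator design.

```c
/* Illustrative reference application: a hypothetical vector-add
 * kernel, kept separate from its test harness so it can serve as
 * the golden model during accelerator design. */
#include <stdio.h>

#define LEN 1024

/* The kernel to be accelerated: no I/O, no dynamic allocation,
 * fixed-size buffers -- traits that ease HLS later on. */
static void vecadd(const int *a, const int *b, int *out, int len)
{
	for (int i = 0; i < len; i++)
		out[i] = a[i] + b[i];
}

int main(void)
{
	int a[LEN], b[LEN], out[LEN];

	for (int i = 0; i < LEN; i++) {
		a[i] = i;
		b[i] = 2 * i;
	}

	vecadd(a, b, out, LEN);

	/* Self-checking: verify against the closed-form result. */
	for (int i = 0; i < LEN; i++) {
		if (out[i] != 3 * i) {
			printf("FAIL at %d\n", i);
			return 1;
		}
	}
	printf("PASS\n");
	return 0;
}
```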
The above benchmark suites do not include deep learning applications. If you decide to work on a deep learning accelerator specified in Keras/PyTorch/ONNX, you should either build and train your own neural network model for an application of your choice (e.g., image classification on the ImageNet dataset) or pick an existing neural network model.
Milestones
Students can propose to implement one or more accelerators with the design flows of their preference.
The following is an example of a sequence of steps for a typical project.
- Study and polish the reference application to be accelerated. This will be the golden model for the accelerator design.
- Generate the accelerator skeleton with ESP, which includes a testbench, a device driver, and test applications for both bare metal and Linux. Complete the design of the accelerator starting from the skeleton and test it in isolation.
- Optimize the accelerator and perform a design space exploration. This step can be postponed to a later stage of the project.
- Complete the design of the test applications: the bare-metal application and the Linux user application (a sketch of the latter follows this list).
- Simulate the full-system RTL of an SoC after integrating one or more instances of the accelerator.
- Perform an FPGA-based emulation of the SoC, first with the bare-metal application and then with the user application running on top of Linux. Compare the results with the execution of the application on a processor, as in the sketch below.
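As a concrete reference for the last two milestones, here is a sketch of a Linux user application that runs both the software golden model and the accelerator, then compares their results and execution times. It assumes the esp_alloc/esp_run/esp_free user-space API covered in the ESP tutorials; token_t, cfg_000, and NACC stand in for the names that the ESP-generated cfg.h defines for your accelerator, so treat them all as placeholders.

```c
/* Hypothetical Linux test app for an ESP "vecadd" accelerator.
 * A sketch only: libesp.h and cfg.h come from the ESP-generated
 * skeleton; token_t, cfg_000, and NACC are placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

#include "libesp.h" /* esp_alloc(), esp_run(), esp_free() */
#include "cfg.h"    /* generated: token_t, esp_thread_info_t cfg_000[], NACC */

#define LEN 1024

/* Software golden model: the polished reference application. */
static void vecadd_sw(const token_t *a, const token_t *b, token_t *out)
{
	for (int i = 0; i < LEN; i++)
		out[i] = a[i] + b[i];
}

static double elapsed_ms(struct timeval t0, struct timeval t1)
{
	return (t1.tv_sec - t0.tv_sec) * 1000.0 +
	       (t1.tv_usec - t0.tv_usec) / 1000.0;
}

int main(void)
{
	struct timeval t0, t1;

	/* Accelerator buffer, laid out as [a | b | out], allocated in
	 * DMA-accessible memory through the ESP allocator. */
	token_t *buf = esp_alloc(3 * LEN * sizeof(token_t));
	token_t *gold = malloc(LEN * sizeof(token_t));

	for (int i = 0; i < LEN; i++) {
		buf[i] = i;           /* a */
		buf[LEN + i] = 2 * i; /* b */
	}

	/* Software execution (baseline on the processor). */
	gettimeofday(&t0, NULL);
	vecadd_sw(buf, buf + LEN, gold);
	gettimeofday(&t1, NULL);
	printf("software: %.3f ms\n", elapsed_ms(t0, t1));

	/* Hardware execution via the generated configuration. */
	cfg_000[0].hw_buf = buf;
	gettimeofday(&t0, NULL);
	esp_run(cfg_000, NACC);
	gettimeofday(&t1, NULL);
	printf("accelerator: %.3f ms\n", elapsed_ms(t0, t1));

	/* Validate the accelerator output against the golden model. */
	int errors = memcmp(gold, buf + 2 * LEN, LEN * sizeof(token_t)) ? 1 : 0;
	printf("%s\n", errors ? "FAIL" : "PASS");

	esp_free(buf);
	free(gold);
	return errors;
}
```

Note that the shared buffer comes from esp_alloc rather than malloc, because the accelerator accesses it via DMA; the golden-model output lives in ordinary heap memory and is only compared at the end.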
Logistics
Project repository
Each student will work in their own fork of ESP. You can choose between a public and a private fork on GitHub.
- Set up a public fork: GitHub fork instructions
- Set up a private fork. Private forks are not allowed on GitHub, so you will create a mirror of the ESP repository instead: GitHub mirror instructions. Since the repository is private, you should give the instructors access to it.
Class servers
The instructors may provide class servers with all the software you might need to work with ESP pre-installed. To connect to the servers, we recommend using SSH for everything you can do in a terminal and X2Go whenever you need to open a graphical interface. The alternative to the class servers is to follow the “How to: setup” guide to prepare the environment on your own machine. To simplify this task, you can use the provided ESP Docker images, as explained in the guide.
Class FPGAs
The instructors may provide (remote) access to an FPGA. In this case, the students will receive detailed instructions on how to use it: how to deploy ESP bitstreams on the FPGA and how to run the bare-metal apps or boot the Linux OS.
Deliverables
In addition to committing their work regularly to their Git repository (frequent commits are recommended), students will deliver a project proposal, a midterm report and presentation, and a final report and presentation.
ESP support
Both instructors and students are welcome to contact the ESP team for support on setting up the project and on ESP-specific issues. See the Contacts page.
Note: More material on ESP class projects is coming soon!