Thierry Tambe
Assistant Professor in Electrical Engineering, Stanford University
My research builds a heterogeneous set of solutions co-optimized across the algorithm, memory subsystem, hardware architecture, and silicon stack to generate breakthrough advances in arithmetic performance, compute density and flexibility, and energy efficiency for on-chip machine learning and other emerging compute-intensive applications. I also have a keen interest in agile chip design methodologies.
My work received a Best Paper Award at DAC (2020), an ACM SIGDA Research Highlight (2021), and an IEEE MICRO Top Picks Honorable Mention (2022), and has been recognized with an NVIDIA Graduate Fellowship (2021) and an IEEE SSCS Predoctoral Achievement Award (2021).
I received my Ph.D. in Electrical Engineering from Harvard University. Prior to beginning my doctoral studies, I was a senior engineer at Intel, where I designed mixed-signal transceiver and peripheral circuits for EMIB-based chips.
Research Interests:
- VLSI systems (i.e., number systems, schedulers, architectures, circuits, devices, and chips) for emerging AI and compute-intensive applications
- AI for VLSI (e.g., AI-aided hardware and compiler design, AI-based smart power management ICs)
- Heterogeneous system integration (2D, 2.5D, 3D chiplets and systems-in-package)
- Agile chip development
News
Jan 2024 | Our work on building a 12nm 64mm2 heterogeneous RISC-V SoC will appear at ISSCC'24!
Jan 2024 | Happy to serve as a Workshop & Tutorial chair at MICRO 2024!
Oct 2023 | Our paper on eDRAM-based on-device ML training will appear at HPCA'24!
Aug 2023 | Beginning a post-doc at NVIDIA Research.
May 2023 | Our paper on model-architecture co-design for efficient on-device ML training using on-chip embedded DRAMs is released on arXiv.
Selected Papers [full list]
- CAMEL: Co-Designing AI Models and Embedded DRAMs for Efficient On-Device Learning. In International Symposium on High-Performance Computer Architecture (HPCA), 2024