Thierry Tambe is an Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science, and the William George and Ida Mary Hoover Faculty Fellow at Stanford University. His research explores the intersection of AI/ML and hardware systems, developing algorithms, hardware architectures, chips, and tools to make accelerated AI computing more portable, scalable, efficient, and easier to design. Previously, Thierry was a visiting research scientist at NVIDIA and an engineer at Intel. He received B.S. and M.Eng. degrees from Texas A&M University and a Ph.D. from Harvard University, all in Electrical Engineering. His research has been recognized through an NVIDIA Graduate PhD Fellowship, an IEEE SSCS Predoctoral Achievement Award, and distinguished paper awards at DAC and MICRO.
News
Jan 2024 | Our work on building a 12nm 64mm² heterogeneous RISC-V SoC will appear at ISSCC'24!
Jan 2024 | Happy to serve as a Workshop & Tutorial chair at MICRO 2024!
Oct 2023 | Our paper on eDRAM-based on-device ML training will appear at HPCA'24!
Aug 2023 | Beginning a post-doc at NVIDIA Research.
May 2023 | Our paper on model-architecture co-design for efficient on-device ML training using on-chip embedded DRAMs is released on arXiv.
Selected Papers [full list]
- BlockDialect: Block-wise Fine-grained Mixed Format for Energy-Efficient LLM Inference. arXiv:2501.01144, 2025
- CAMEL: Co-Designing AI Models and Embedded DRAMs for Efficient On-Device Learning. In International Symposium on High-Performance Computer Architecture (HPCA), 2024