Thierry Tambe

Research Scientist, NVIDIA Research
Incoming Assistant Professor in Electrical Engineering, Stanford University (official start Sept. 2024)


My research builds a heterogeneous set of solutions co-optimized across the algorithm, memory subsystem, hardware architecture, and silicon stack to generate breakthrough advances in arithmetic performance, compute density, flexibility, and energy efficiency for on-chip machine learning and emerging compute-intensive applications. I also have a keen interest in agile chip design methodologies.

My work has received a Best Paper Award at DAC (2020), an ACM SIGDA Research Highlights selection (2021), and an IEEE MICRO Top Picks Honorable Mention (2022), and has been recognized with an NVIDIA Graduate Fellowship (2021) and an IEEE SSCS Predoctoral Achievement Award (2021).

I received my Ph.D. in Electrical Engineering from Harvard University. Prior to beginning my doctoral studies, I was a senior engineer at Intel, where I designed mixed-signal transceiver and peripheral circuits for EMIB-based chips.

Research Interests:
  • VLSI systems (i.e., number systems, schedulers, architectures, circuits, devices, and chips) for emerging AI and compute-intensive applications
  • AI for VLSI (e.g., AI-aided hardware and compiler design, AI-based smart power management ICs)
  • Heterogeneous system integration (2D, 2.5D, 3D chiplets and systems-in-package)
  • Agile chip development


Jan, 2024
Our work on building a 12nm 64mm2 heterogeneous RISC-V SoC will appear at ISSCC’24!
Jan, 2024
Happy to serve as a Workshop & Tutorial chair at MICRO 2024!
Oct, 2023
Our paper on eDRAM-based on-device ML training will appear at HPCA’24!
Aug, 2023
Beginning a post-doc at NVIDIA Research.
May, 2023
Our paper on model-architecture co-design for efficient on-device ML training using on-chip embedded DRAMs is released on arXiv.

Selected Papers [full list]

  1. HPCA
    CAMEL: Co-Designing AI Models and Embedded DRAMs for Efficient On-Device Learning
    Sai Qian Zhang*, Thierry Tambe*, Nestor Cuevas, Gu-Yeon Wei, and David Brooks
    In International Symposium on High-Performance Computer Architecture (HPCA), 2024
  2. ISSCC
    A 12nm 18.1TFLOPs/W Sparse Transformer Processor with Entropy-Based Early Exit, Mixed-Precision Predication and Fine-Grained Power Management
    Thierry Tambe, Jeff Zhang, Coleman Hooper, Tianyu Jia, Paul N. Whatmough, Joseph Zuckerman, Maico Cassel Dos Santos, Erik Jens Loscalzo, Davide Giri, Kenneth Shepard, Luca Carloni, Alexander Rush, David Brooks, and Gu-Yeon Wei
    In 2023 IEEE International Solid-State Circuits Conference (ISSCC), 2023
  3. JSSC
    A 16-nm SoC for Noise-Robust Speech and NLP Edge AI Inference With Bayesian Sound Source Separation and Attention-Based DNNs
    Thierry Tambe, En-Yu Yang, Glenn G. Ko, Yuji Chai, Coleman Hooper, Marco Donato, Paul N. Whatmough, Alexander M. Rush, David Brooks, and Gu-Yeon Wei
    IEEE Journal of Solid-State Circuits, 2023
  4. MICRO
    EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference
    Thierry Tambe, Coleman Hooper, Lillian Pentecost, Tianyu Jia, En-Yu Yang, Marco Donato, Victor Sanh, Paul Whatmough, Alexander M. Rush, David Brooks, and Gu-Yeon Wei
    In MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, 2021
  5. DAC
    Best Paper Award
    Algorithm-Hardware Co-Design of Adaptive Floating-Point Encodings for Resilient Deep Learning Inference
    Thierry Tambe, En-Yu Yang, Zishen Wan, Yuntian Deng, Vijay Janapa Reddi, Alexander Rush, David Brooks, and Gu-Yeon Wei
    In 2020 57th ACM/IEEE Design Automation Conference (DAC), 2020