Gus Henry Smith

gussmith@cs.washington.edu | justg.us

Fellowships

Fall 2024–Summer 2025 (ongoing)

Bonderman Fellowship for Independent Travel

University of Washington

Fellowship funding independent solo travel around the world, focusing on long-term stays in unfamiliar countries. Current itinerary includes Singapore, Thailand, Vietnam, India, Taiwan, and Japan.

Education

2018–2024

University of Washington

Ph.D. in Computer Science and Engineering

Dissertation: Generation of Compiler Backends from Formal Models of Hardware. Co-advised by Luis Ceze and Zach Tatlock. Focus: using programming-languages techniques to automatically generate compilers for custom hardware.

2013–2018

Penn State Schreyer Honors College

B.S. and M.S. in Computer Science and Engineering

Advised by Vijay Narayanan and John Sampson.

Publications

Scaling Program Synthesis Based Technology Mapping with Equality Saturation. Submitted to WOSET 2024. Gus Henry Smith, Colin Knizek, Daniel Petrisko, Zachary Tatlock, Jonathan Balkind, Gilbert Louis Bernstein, Haobin Ni, and Chandrakana Nandi. (The Churchroad workshop paper.)

Generation of Compiler Backends from Formal Models of Hardware. Dissertation, University of Washington, 2024. arXiv link.

FPGA Technology Mapping Using Sketch-Guided Program Synthesis. ASPLOS 2024. arXiv link. Gus Henry Smith, Ben Kushigian, Vishal Canumalla, Andrew Cheung, Steven Lyubomirsky, Sorawee Porncharoenwase, René Just, and Zachary Tatlock. (The Lakeroad paper.)

Application-Level Validation of Accelerator Designs Using a Formal Software/Hardware Interface. TODAES 2023. arXiv link. Bo-Yuan Huang, Steven Lyubomirsky, Yi Li, Mike He, Gus Henry Smith, Thierry Tambe, Akash Gaonkar, Vishal Canumalla, Gu-Yeon Wei, Aarti Gupta, Zachary Tatlock, and Sharad Malik. (The 3LA paper.)

Fridge Compiler: Optimal Circuits from Molecular Inventories. International Conference on Computational Methods in Systems Biology. Lancelot Wathieu, Gus Smith, Luis Ceze, and Chris Thachuk.

Generate Compilers from Hardware Models! PLARCH @ PLDI 2023. arXiv link. Gus Henry Smith, Ben Kushigian, Vishal Canumalla, Andrew Cheung, René Just, and Zachary Tatlock.

Pure Tensor Program Rewriting via Access Patterns (Representation Pearl). MAPS 2021. arXiv link. Gus Henry Smith, Andrew Liu, Steven Lyubomirsky, Scott Davidson, Joseph McMahan, Michael Taylor, Luis Ceze, and Zachary Tatlock. (The Glenside paper.)

From DSLs to Accelerator-Rich Platform Implementations: Addressing the Mapping Gap. LATTE 2021. Bo-Yuan Huang, Steven Lyubomirsky, Thierry Tambe, Yi Li, Mike He, Gus Smith, Gu-Yeon Wei, Aarti Gupta, Sharad Malik, and Zachary Tatlock.

Enumerating Hardware-Software Splits with Program Rewriting. YArch 2020. Gus Smith, Zachary Tatlock, and Luis Ceze.

A FerroFET-Based In-Memory Processor for Solving Distributed and Iterative Optimizations via Least-Squares Method. IEEE Journal on Exploratory Solid-State Computational Devices and Circuits, 2019. Insik Yoon, Muya Chang, Kai Ni, Matthew Jerry, Samantak Gangopadhyay, Gus Henry Smith, Tomer Hamam, Justin Romberg, Vijaykrishnan Narayanan, Asif Khan, Suman Datta, and Arijit Raychowdhury.

Designing Processing in Memory Architectures via Static Analysis of Real Programs. M.S. Thesis, Penn State University, 2018.

Computing With Networks of Oscillatory Dynamical Systems. Proceedings of the IEEE, 2018. Arijit Raychowdhury, Abhinav Parihar, Gus Henry Smith, Vijaykrishnan Narayanan, György Csaba, Matthew Jerry, Wolfgang Porod, and Suman Datta.

A FeFET Based Processing-In-Memory Architecture for Solving Distributed Least-Square Optimizations. DRC 2018. Insik Yoon, Muya Chang, Kai Ni, Matthew Jerry, Samantak Gangopadhyay, Gus Smith, Tomer Hamam, Vijaykrishnan Narayanan, Justin Romberg, Shih-Lien Lu, Suman Datta, and Arijit Raychowdhury.

Third Eye: A Shopping Assistant for the Visually Impaired. IEEE Computer, 2017. Peter A Zientara, Sooyeon Lee, Gus H Smith, Rorry Brenner, Laurent Itti, Mary B Rosson, John M Carroll, Kevin M Irick, and Vijaykrishnan Narayanan.

Research Projects

since Summer 2024

Churchroad

Lead Researcher

Scaling program-synthesis-based FPGA technology mapping (Lakeroad) using equality saturation.

since Fall 2021

Lakeroad

Lead Researcher; UW SAMPL Lab/UW PLSE Lab/Real-time Machine Learning

Implementing more complete, more correct technology mapping for specialized FPGA primitives (e.g., DSPs) using program synthesis and formal semantics automatically extracted from hardware simulation models.

2020–2024

3LA

Contributor; UW PLSE Lab, w/ colleagues at Princeton and Harvard

Designed a methodology for verifiably mapping deep learning models to custom accelerators; used Glenside to expose mappings in workloads.

2020–2022

Glenside

Lead Researcher; UW SAMPL Lab/Real-time Machine Learning

Designed a pure, binder-free intermediate language for optimizing low-level tensor programs via program rewriting. (See Pure Tensor Program Rewriting via Access Patterns.) Used the language to map computations to custom hardware. (See Application-Level Validation of Accelerator Designs Using a Formal Software/Hardware Interface.)

2018–2020

Bring Your Own Datatypes

Lead Researcher; UW SAMPL Lab

Enabled the exploration of new, nontraditional datatypes (i.e., alternatives to IEEE 754 floating point) with an extension to TVM, a deep learning compiler. This was my Ph.D. qualifying exam project.

2017–2018

Static Analysis for Processing in Memory Accelerator Design

Master’s Project; PSU Microsystems Design Lab

Given a model of accelerating computation using processing in memory, used LLVM to detect potentially offloadable code sections within workloads.

2014–2018

ThirdEye: Shopping Assistant for the Visually Impaired

Contributor, later Lead Researcher; PSU Microsystems Design Lab

Built a wearable system to assist the visually impaired in shopping. My undergraduate research.

Industry Experience

Fall 2023–present

Sandia National Laboratories

Student Research Intern

Working on Lakeroad and related technologies.

Fall 2021–Summer 2022

Google

Student Researcher (part-time)

Continued my previous work developing a learned cost model for configuring sparse tensor kernels.

Summer 2021

Google

Software Engineering Intern, MLIR

Developed a learned cost model for configuring sparse tensor kernels. In addition, contributed to the MLIR sparse tensor dialect.

Summer 2019

Microsoft

Research Intern, AI and Advanced Architectures

Statically analyzed deep learning workloads to inform architecture design.

Summer 2018

Google

Software Engineering Intern, Fuchsia

Implemented the RFCOMM Bluetooth protocol for Fuchsia, one of Google's operating systems.

Summer 2017

Google

Software Engineering Intern, Chrome

Helped the Chrome Remote Desktop team identify and implement optimizations for embedded devices such as the Raspberry Pi.

Summer 2016

Google

Software Engineering Intern, Android Internal Tools

Contributed to Java-based Android profiling tools.

Service

Other Stuff I’ve Written

In the News