About Etched
Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep chain-of-thought reasoning.
Software Engineer, LLM Infrastructure
Transformer ASICs, like those built by Etched, dramatically improve time-to-first-token (TTFT) latency. For a large model like Llama-3-70B with 2048 input tokens, TTFT will be single-digit milliseconds (we will announce performance figures publicly at our launch).
However, single-digit millisecond latency means nothing if the rest of the serving stack adds 100+ ms, and it only matters if customers actually use our stack (or adopt its optimizations into their own). You will help make both of these happen.
You will work with our software team to build software for continuous batching, and write world-class interactive documentation (like PyTorch's "Run in Colab" feature) to show customers how it works. You will get this software working on our pre-silicon platform, and port it over to the physical chips once they are done being fabbed. You will find creative new ways to improve this latency: can we speculatively decode the user's inputs? Can we preempt sequences when we run out of KV cache space and recompute them later? Can we cache common prefills?
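To give a flavor of the scheduling problems involved, here is a minimal sketch of a continuous-batching step that preempts sequences when the KV cache fills up and reuses cached prefills for repeated prompts. Every name in it (Request, Scheduler, the slot accounting) is a hypothetical illustration, not Etched's actual serving stack.

```python
# Illustrative only: a toy continuous-batching step with KV-cache preemption
# and prefix caching. All names and interfaces here are hypothetical.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Request:
    prompt_tokens: list[int]
    generated: list[int] = field(default_factory=list)

    def total_len(self) -> int:
        return len(self.prompt_tokens) + len(self.generated)


class Scheduler:
    def __init__(self, kv_slots: int):
        self.kv_slots = kv_slots              # total KV-cache capacity, in tokens
        self.running: list[Request] = []      # sequences currently decoding
        self.waiting: deque[Request] = deque()
        self.prefix_cache: dict[tuple[int, ...], dict] = {}  # prompt -> cached KV state

    def used_slots(self) -> int:
        return sum(r.total_len() for r in self.running)

    def step(self) -> list[Request]:
        """Assemble the batch for the next decode iteration."""
        # Preempt: if running sequences no longer fit in the KV cache, push the
        # newest ones back to the queue; their KV state is recomputed later.
        while self.running and self.used_slots() > self.kv_slots:
            self.waiting.appendleft(self.running.pop())

        # Admit: pull waiting requests while there is KV-cache headroom,
        # skipping the prefill entirely when the exact prompt was seen before.
        while self.waiting:
            candidate = self.waiting[0]
            if self.used_slots() + candidate.total_len() > self.kv_slots:
                break
            self.waiting.popleft()
            key = tuple(candidate.prompt_tokens)
            if key not in self.prefix_cache:
                self.prefix_cache[key] = self.prefill(candidate)
            self.running.append(candidate)

        return self.running

    def prefill(self, request: Request) -> dict:
        # Placeholder for running the prompt through the model to build KV state.
        return {"prompt_len": len(request.prompt_tokens)}
```

A production scheduler would track KV blocks rather than whole sequences and share cached prefixes at block granularity; the sketch only shows where preemption and prefix caching hook into the batching loop.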
Representative projects:
- Working with emulators like Palladium to develop software for chips while they are being fabricated
- Developing algorithms for balancing prefill and completion tokens when serving LLMs (see the sketch after this list)
- Profiling network latency when responding to prompts to help eliminate it in our test environment
- Developing ways for customers to work with our pre-silicon infrastructure and understand how their workloads will run on it
- Building tools for Jupyter notebooks to connect to emulated and physical Etched systems
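As one illustration of the prefill/completion balancing problem mentioned above, the sketch below splits a fixed per-iteration token budget between decode tokens and (possibly chunked) prefill. The function names, the budget value, and the priority rule are assumptions made for the example, not a description of our production policy.

```python
# Illustrative only: splitting a per-iteration token budget between prefill
# (new prompts) and completion (decode) work. Names and numbers are assumptions.
from dataclasses import dataclass


@dataclass
class PendingPrompt:
    request_id: int
    prompt_len: int  # prompt tokens still awaiting prefill


def plan_iteration(decoding_ids: list[int],
                   pending: list[PendingPrompt],
                   token_budget: int = 4096) -> dict:
    """Decide how many tokens of each kind to run this iteration."""
    # Decode tokens come first: every running sequence emits exactly one token
    # per iteration, and stalling them hurts inter-token latency.
    decode_tokens = min(len(decoding_ids), token_budget)
    remaining = token_budget - decode_tokens

    # Spend the rest on (possibly chunked) prefill so queued requests keep
    # making progress toward their first token.
    prefill_plan: list[tuple[int, int]] = []  # (request_id, tokens to prefill now)
    for p in pending:
        if remaining == 0:
            break
        chunk = min(p.prompt_len, remaining)
        prefill_plan.append((p.request_id, chunk))
        remaining -= chunk

    return {"decode_tokens": decode_tokens, "prefill_plan": prefill_plan}


# Example: 300 sequences decoding, plus new prompts of 2048 and 8192 tokens.
plan = plan_iteration(list(range(300)),
                      [PendingPrompt(0, 2048), PendingPrompt(1, 8192)])
print(plan)  # {'decode_tokens': 300, 'prefill_plan': [(0, 2048), (1, 1748)]}
```

Reserving budget for decode first keeps inter-token latency flat under load, while chunking long prefills keeps a single huge prompt from blocking everyone else's first token.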
You may be a good fit if you:
- Have 3+ years of software engineering experience
- Are good at math, and good at communicating mathematical ideas
- Pick up slack, even if it goes outside your job description
- Are results-oriented, and bias towards shipping products
- Want to learn more about machine learning research
We encourage you to apply even if you do not believe you meet every single qualification.
Strong candidates may also have experience with:
- Palladium emulation
- Real-time audio and video communication
- GPU kernel profiling and low-level programming
- Transformer optimizations, such as FlashAttention
- Ongoing research in machine learning
How we’re different:
Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.
We are a fully in-person team in Cupertino, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.
Benefits:
- Full medical, dental, and vision packages, with 100% of premiums covered for employees and 90% for dependents
- Housing subsidy of $2,000/month for those living within walking distance of the office
- Daily lunch and dinner in our office
- Relocation support for those moving to Cupertino