ML Inference Engineer

Rhoda AI

Software Engineering, Data Science
Palo Alto, CA, USA
Posted on Mar 17, 2026

Employment Type

Full time

Department

Software

At Rhoda AI, we're building the full-stack foundation for the next generation of humanoid robots, from high-performance, software-defined hardware to the foundational models and video world models that control it. Our robots are designed to be generalists capable of operating in complex, real-world environments and handling scenarios unseen in training. We work at the intersection of large-scale learning, robotics, and systems, with a team that includes researchers from Stanford, Berkeley, Harvard, and beyond. We're not building a feature; we're building a new computing platform for physical work. With over $400M raised, we're investing aggressively in R&D, hardware development, and manufacturing scale-up to make that a reality.

We're looking for an ML Inference Engineer to help build and operate the inference systems that power our automation stack. You'll be responsible for running large foundation models efficiently and reliably, integrating closely with our robot platform and internal task tooling.

What You'll Do

  • Build and maintain infrastructure to run model inference across cloud and on-prem environments

  • Optimize latency, throughput, and reliability of deployed models

  • Design and scale services to serve various foundation models across research and production use cases

  • Work closely with research and robotics teams for inference optimization and integration

  • Build tooling for model deployment, versioning, and observability to support fast iteration cycles

  • Contribute to the reliability and scalability of the inference stack as model complexity and deployment footprint grow

What We're Looking For

  • 3+ years of experience in ML infrastructure, MLOps, or backend systems

  • Experience deploying and managing ML inference workloads in production

  • Strong proficiency with Kubernetes and containerized deployment pipelines

  • Experience with cloud providers (e.g., AWS, GCP) and GPU orchestration

  • Familiarity with common ML frameworks (e.g., PyTorch, TensorFlow) and model serving tools (e.g., Triton, TorchServe, Ray Serve)

  • Strong debugging instincts and ownership mentality — comfortable driving issues to resolution across the stack

Nice to Have (But Not Required)

  • Experience optimizing inference performance (e.g., quantization, batching, caching)

  • Familiarity with multimodal or large foundation models

  • Exposure to real-time systems, robotics, or edge/cloud hybrid deployment patterns

  • Interest in building tooling for model deployment, observability, and version control

  • Experience with on-robot or edge inference and the latency constraints that come with it

Why This Role

  • Own the inference layer that connects our foundation models to real robot behavior — a direct line between your work and what the robot does in the world

  • Be part of building the infrastructure stack for one of the most technically ambitious robotics companies in the world