Cloud Infrastructure Engineer
Rhoda AI
Location: Palo Alto, CA, USA
Employment Type: Full time
Department: Software
At Rhoda AI, we're building the full-stack foundation for the next generation of humanoid robots — from high-performance, software-defined hardware to the foundational models and video world models that control it. Our robots are designed to be generalists capable of operating in complex, real-world environments and handling scenarios unseen in training. We work at the intersection of large-scale learning, robotics, and systems, with a team that includes researchers from Stanford, Berkeley, Harvard, and beyond. We're not building a feature; we're building a new computing platform for physical work — and with over $400M raised, we're investing aggressively in the R&D, hardware development, and manufacturing scale-up to make that a reality.
We're looking for a Cloud Infrastructure Engineer to build and operate the systems that power our robotics and AI platform. You'll own the infrastructure that collects training data, keeps our robots running in the field, and trains and evaluates our models — designing for high reliability and low latency across every layer. The systems you build will be the backbone of what we ship.
What You'll Do
Design, build, and maintain cloud infrastructure supporting data collection pipelines, robot operations, and model training and evaluation workflows
Own the reliability, availability, and latency of core infrastructure including databases, data warehouses, and object storage systems
Develop and maintain backend services and APIs that expose infrastructure capabilities to internal teams and customers
Identify and resolve performance bottlenecks across the data and compute stack to meet latency and throughput requirements
Partner with research teams to understand model training and evaluation infrastructure needs and translate them into scalable solutions
Collaborate with robotics teams to ensure field operations are reliably supported by low-latency backend services
Build observability tooling — metrics, logging, alerting — to proactively detect and respond to infrastructure issues
Define and enforce infrastructure best practices around security, cost management, and scalability
Participate in on-call rotations and contribute to incident response and postmortems
What We're Looking For
4+ years of experience in cloud infrastructure, platform engineering, or a related role
Strong proficiency with at least one major cloud provider (AWS, GCP, or Azure), including compute, networking, storage, and managed database services
Hands-on experience managing relational and NoSQL databases in production, including performance tuning, replication, and failover
Experience operating data warehouse solutions (e.g., BigQuery, Redshift, Snowflake) and large-scale object storage (e.g., S3, GCS)
Solid backend development skills — comfortable writing and maintaining services in Python, Go, or a similar language
Strong understanding of distributed systems concepts: consistency, availability, fault tolerance, and latency trade-offs
Familiarity with container orchestration using Kubernetes or equivalent platforms
Proven ability to debug and resolve complex production incidents under pressure
Nice to Have (But Not Required)
Experience building infrastructure for ML workloads — GPU cluster management, distributed training frameworks, or model serving pipelines
Familiarity with robotics or embedded systems backends, including real-time telemetry or command-and-control infrastructure
Experience designing and operating high-throughput, low-latency data pipelines using tools like Kafka, Flink, or Spark
Background working with time-series databases (e.g., InfluxDB, TimescaleDB) for sensor or operational data
Experience with infrastructure-as-code tools such as Terraform or Pulumi
Track record of building multi-tenant infrastructure that serves diverse customer and internal stakeholder needs simultaneously
Experience building self-service infrastructure platforms that reduce engineering toil for research or product teams
Why This Role
Own the infrastructure layer that everything else runs on — from robot field ops to model training — with direct, measurable impact on reliability and research velocity
Work at the intersection of cloud systems and physical AI, building backends that support both frontier model training and real humanoids operating in the world
Foundational role on a small team where your architectural decisions shape the platform the entire company scales on