Xiyuan (Cyrus) Liu

Staff Software Engineer · Robotics, Autonomous Driving and AI

I build the software that makes autonomous vehicles think. Currently at Bosch developing AI-driven systems for autonomous vehicles. Previously at Waymo, Motional, Nvidia, and Aurora. M.S. from CMU Robotics Institute.

Motion Planning · E2E Autonomy · RL · Software Architecture · LLM / RAG Systems · Federated Learning · C++ · PyTorch
GitHub · LinkedIn · Email
Experience & Education

Building autonomous vehicles
from A* search to AI planning.

Bosch USA · Sunnyvale, CA
Sep 2022 – Present

Staff Software Engineer — Behavior, AI-based Planning

  • Defined the software architecture and modular design for the classical L4 behavior planning stack, establishing integration contracts across prediction, trajectory generation, and control interfaces.
  • Identified data generation as a foundational gap for E2E ML planning and led the pipeline from concept to org-wide adoption, ahead of broader team prioritization.
  • Pioneered a GPU-based RL simulator for closed-loop motion planning, now a core project at the Sunnyvale site.
  • Architected federated learning infrastructure across Azure and Tencent Cloud, positioning Sunnyvale as the technical lead for global ML within Bosch.
  • Scaled a RAG-based quality management platform from hackathon prototype to 100+ weekly global users on a $0 budget, driving AI transformation across organizational boundaries.
L4 · Behavioral Planning · L2++ · Data Generation · RL · Federated Learning · LLM/RAG · Azure
Nvidia · Santa Clara, CA
Aug 2025 – Oct 2025

Staff Software Engineer IC5 — Prediction Planning & Control

  • Rapidly onboarded to the L2++ planning stack; overhauled the path optimizer and built an iLQR rapid-prototyping and debugging toolchain, driving architectural impact in an unfamiliar codebase.
  • Implemented lateral behavior improvements for urban driving, including road edge detection and avoidance logic.
L2++ · iLQR · Path Optimization · C++
Waymo LLC · Mountain View, CA
Jul 2020 – Sep 2022

Software Engineer — Motion Generation

  • Developed foundational search-based algorithms, including reasoning interfaces, cost functions, and graph construction, and modularized the sampling pipeline for scalability and maintainability.
  • Built evaluation and debugging tooling that enabled comprehensive assessment of search quality and effectiveness.
  • Architected the next-generation motion planner with modularized component selection; shaped org-wide technical direction and received the Onboard Software Excellence Award.
Motion Planning · Search Algorithms · C++ · Software Architecture
Aurora Innovation · Pittsburgh, PA
Nov 2019 – Jul 2020

Software Engineer — TeleAssist

  • As primary developer, designed and shipped the first fully operational tele-operation MVP, integrating closely with the perception stack, the autonomy stack, and user control systems.
Tele-operation · C++ · gRPC · Rapid Prototyping
Aptiv PLC (now Motional) · Pittsburgh, PA
Feb 2018 – Oct 2019

Senior R&D Software Engineer — Motion Planning & Control

  • Enhanced graph generation for a 40x speedup; validated across a 100+ vehicle fleet in Las Vegas.
  • Led the optimization pipeline architecture and spearheaded the speed planning transition from prototype to fleet deployment in three months.
Motion Planning · Optimization · A* Search · C++
Marble Robot · San Francisco, CA
Summer 2017

Software Engineering Intern — Perception & Localization

  • Built a 3D point cloud filter for semi-static object removal to improve sidewalk robot localization. SLAM maps captured during mapping runs contain transient objects (parked cars, pedestrians) that are absent during task runs, causing localization drift.
  • Implemented a visibility-based range image filter: it projects consecutive LiDAR frames into spherical range images and flags map points that are occluded by closer observations in the current scan as transient. This effectively removes dynamic objects such as pedestrians, where inter-frame point cloud differences are high.
  • Identified the open challenge posed by semi-static objects (e.g. parked cars) that appear static within a run but disappear between runs, a problem class requiring longer-horizon map maintenance beyond frame-to-frame comparison.
LiDAR · SLAM · Point Cloud · Range Image · C++
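The occlusion check described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not Marble's actual code; the image resolution and depth margin are assumed values.

```python
import numpy as np

def to_range_image(points, h=32, w=360):
    """Project Nx3 points into a spherical range image (min depth per pixel)."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1)) # elevation
    u = ((az + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = np.clip(((el + np.pi / 2) / np.pi * h).astype(int), 0, h - 1)
    img = np.full((h, w), np.inf)
    np.minimum.at(img, (v, u), r)          # keep the closest return per pixel
    return img, (v, u), r

def flag_transient(map_points, scan_points, margin=0.5):
    """Flag map points occluded by closer observations in the current scan."""
    scan_img, _, _ = to_range_image(scan_points)
    _, (v, u), r_map = to_range_image(map_points)
    return r_map > scan_img[v, u] + margin  # boolean mask over map points
```

A map point whose pixel sees a strictly closer return in the live scan (beyond the margin) is marked transient; pixels the scan never hit stay at infinity and flag nothing.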
Carnegie Mellon University · Pittsburgh, PA
2016 – 2017

M.S., Robotic Systems Development

  • Robotics Institute, School of Computer Science.
  • Focused on autonomous systems, motion planning, and ML-based robotics.
Robotics · Motion Planning · Machine Learning
Wake Forest University · Winston-Salem, NC
2012 – 2016

B.S., Computer Science — Summa Cum Laude

  • Graduated with highest honors in Computer Science.
Computer Science
Feature Work

Projects & experiments
things I built for fun.

2015

3D printer animatronic hand

archived

A 3D-printed animatronic hand, driven by servos and fishing wire. Controlled by a flex sensor glove through Arduino and Bluetooth, and later with a Myo muscle sensor band.

Animatronic hand v1
Sensor glove

This was my summer project at Wake Forest and the true catalyst for my interest in robotics. Seeing my code move the hand in real-time just... clicked. It became clear that having a physical connection to software was exactly what I wanted to do. I remember thinking, ‘You know what? It’d be pretty cool if I could do this for a living.’

Robotics · 3D Printing · Arduino
2017

Neural Network

archived

A simple feedforward neural network that learns to drive itself.

Neural network car simulation

A bit of a lazy project for a neuroscience course. ALVINN did this back in the 80s with a real car, so this wasn't exactly groundbreaking. But honestly, even a simple multi-layer perceptron is surprisingly powerful—and definitely underrated.

Neural Networks
2017

Team Loco

archived

A 1/10-scale autonomous RC car built from scratch, aiming to handle high-speed evasive maneuvers that push past tire friction limits, and more...

Team Lo(w)-Co(efficient). Our capstone project at CMU. While our goal was handling high-speed moose tests, frankly, being able to perform a self-driving-drift-parallel-park was definitely cooler. If the hand got me into robotics, Loco was the reason I fell in love with autonomous driving and motion planning.

Motion Planning · System Development · Robotics · Hardware
2017

Vehicle Drift Simulation

archived

Physics-accurate 2D drift simulator using the Stanford Dynamic Control Lab vehicle model, with iLQR for trajectory optimization. Ported to TypeScript as a playable browser game.

Drift simulation
System ID / vehicle model

My primary focus on Team Loco was simulation and motion planning using iLQR. We used the Stanford Dynamic Control Lab’s model for drifting vehicles and applied iLQR for high-level control. Naturally, once you have a working drift sim, you have to turn it into a game. I’ve written this in MATLAB, Python, and Swift at various times; I finally ported it to TypeScript so you can play the demo right here. Fair warning: a ‘physically accurate’ simulation doesn’t always equal a ‘fun’ game... it’s a nightmare to control on a keyboard.
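For a flavor of what the planner iterates over, here is a minimal rollout with a kinematic bicycle model in Python. The real sim uses the Stanford single-track drift model with tire slip; the wheelbase and step size here are invented for illustration.

```python
import math

def bicycle_step(state, v, steer, wheelbase=0.3, dt=0.02):
    """One Euler step of a kinematic bicycle; state = (x, y, heading)."""
    x, y, th = state
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += v / wheelbase * math.tan(steer) * dt
    return (x, y, th)

def rollout(state, controls):
    """Integrate a sequence of (v, steer) controls. iLQR repeatedly
    linearizes around exactly this kind of rollout, runs a backward
    pass for feedback gains, then line-searches a new forward pass."""
    traj = [state]
    for v, steer in controls:
        state = bicycle_step(state, v, steer)
        traj.append(state)
    return traj
```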

TypeScript · Canvas · Vehicle Dynamics · Physics · iLQR
2017

Adversarial Vehicle Planning with Deep RL

archived

DDPG-based autonomous driving agent trained against adversarial opponents in the TORCS simulator, using raw sensor observations as input.

This was back before transformers were cool and VLAs weren’t even a thing yet. DDPG was the state-of-the-art for this kind of problem: driving in simulation with limited sensor inputs. We pushed it further by playing against adversarial AI in the TORCS simulator. We saw some ‘fun emergent behavior’... which is just a fancy way of saying ‘maybe it was luck?’
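DDPG's two signature tricks are a replay buffer and slowly tracking target networks. The soft (Polyak) target update, the part that keeps the critic's bootstrapped targets stable, can be sketched with NumPy arrays standing in for network parameters:

```python
import numpy as np

def soft_update(target_params, online_params, tau=0.005):
    """DDPG soft target update: theta_target <- tau * theta + (1 - tau) * theta_target.
    Applied after each gradient step so the critic's TD targets drift slowly,
    which stabilizes training with a deterministic actor."""
    return [tau * w + (1.0 - tau) * wt
            for w, wt in zip(online_params, target_params)]
```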

Python · Deep RL · DDPG · Autonomous Driving · PyTorch · TORCS
2026

Pareidolia

live

GPU-scaled RL system for discovering trading signals in financial time-series data, inspired by research on pattern emergence in random-trader markets.

Pareidolia preview

I saw a YouTube video about how stock market patterns can emerge from purely random traders, which suggests that technical analysis isn't just the male equivalent of astrology. Naturally, I had to try it myself. Knowing what I know now about scaling RL on GPUs, a simulator like this could make me rich... maybe...
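The core idea can be shown with a toy CPU version: each step, every trader randomly buys or sells, and the log-price moves with the net order flow. This is a stand-in sketch, not the actual Taichi implementation, which parallelizes many such markets on the GPU as RL environments.

```python
import numpy as np

def random_trader_market(n_traders=1000, steps=500, impact=0.01, seed=0):
    """Toy random-trader market: price impact is proportional to the
    net buy/sell imbalance of purely random traders. Even this produces
    trends and reversals that look like 'patterns' to a chart reader."""
    rng = np.random.default_rng(seed)
    log_price = np.zeros(steps)
    for t in range(1, steps):
        net_flow = rng.choice([-1, 1], size=n_traders).sum()
        log_price[t] = log_price[t - 1] + impact * net_flow / n_traders
    return np.exp(log_price)
```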

Python · Taichi · GPU · Quantitative Finance · Simulation
2026

Dungeon OS

live

CLI-based D&D engine where Claude acts as the Dungeon Master — handling storytelling, bookkeeping, and NPC management via LLM tool calls.

I’m a huge fan of Baldur’s Gate 3, even though I’ve never played D&D in real life (and don’t actually know anyone who does). With the AI boom, an AI Dungeon Master seemed like the logical next step. With a bit of CLI skill and some APIs, you don't even have to code a DM—Claude can just *be* the DM. Storytelling, bookkeeping, and NPC management, all handled. It’s playable, though I have no idea if it’s actually ‘good’ D&D. But hey, it’s a start.
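Under the hood, this kind of engine is mostly a dispatch loop: the model replies with either prose or a named tool call, and the engine executes the tool and feeds the result back. A schematic of the dispatch half, with tool names invented for illustration rather than taken from Dungeon OS:

```python
import random

# Illustrative tool registry; a real DM engine would expose many more
# tools and track full campaign state between calls.
TOOLS = {
    "roll_dice": lambda args, state: random.randint(1, args.get("sides", 20)),
    "get_hp": lambda args, state: state["hp"][args["character"]],
}

def dispatch(tool_call, state):
    """Execute one model-issued tool call of the form
    {"name": ..., "args": {...}} against the current game state."""
    return TOOLS[tool_call["name"]](tool_call["args"], state)
```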

Python · LLM · Agentic · D&D · CLI
2026

PR Monitor

live

Aggregated PR review dashboard for engineers with multiple GitHub accounts across organizations, with priority sorting and noise filtering.

One of the best lessons I learned from my mentors at Waymo and Google: PR reviews should be priority zero. It should almost always be the first thing you do as soon as a request comes in; that’s how you keep development momentum alive. However, I’ve somehow ended up with five GitHub accounts across four different organizations, and it’s a pain to filter through the noise GitHub throws at you. Back at Google, there was a handy internal tool for this, but I haven’t found anything similar for the outside world. Since we’re in the age of AI, I figured it should be trivial to just build it myself.
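The fetching side is mostly GitHub's search API (it supports a `review-requested:<user>` qualifier); the interesting part is ranking. A toy scorer might look like this, with field names and weights invented for illustration rather than taken from the actual tool:

```python
def pr_priority(pr):
    """Score a PR dict for review priority: direct requests beat team
    pings, small diffs beat large ones, and staleness adds urgency."""
    score = 0.0
    score += 100 if pr.get("directly_requested") else 10
    score += max(0, 50 - pr.get("changed_lines", 0) / 10)  # small PRs first
    score += min(pr.get("hours_waiting", 0), 48)           # cap staleness bonus
    return score

def triage(prs, noise_threshold=20):
    """Sort by priority and drop anything below the noise floor."""
    ranked = sorted(prs, key=pr_priority, reverse=True)
    return [p for p in ranked if pr_priority(p) >= noise_threshold]
```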

Python · GitHub API · DevTools · Workflow · Automation