
In the Burnout racing game series’ infamous “Crash Mode,” players are rewarded for how aggressively they drive, how many risky maneuvers they take, and how much damage their inevitable high-speed collisions cause. Chia-Hao Ching, a 2017 DigiPen MS in Computer Science graduate, has built his career out of developing “games” that are the complete and utter opposite of Burnout’s Crash Mode.

“After graduation, I found an autonomous vehicle company hiring people who had experience working on the Unreal Engine, in C++, and with 3D meshes,” Ching explains. “They were looking for that because, for self-driving vehicles, simulations are actually a really important part of development — they’re a much safer way to test new features than out on the road.” Having now worked at two separate San Jose-area self-driving vehicle companies, Ching has spent his career developing detailed simulations of traffic scenarios that look a lot like video games — games that autonomous cars “play” to get better at driving safely.

Self-driving cars run on different “modules” that all work in tandem to assess traffic situations and make driving decisions. The car’s perception module may observe a vehicle or pedestrian approaching at a particular distance and angle, and the other modules then decide to slow down or turn in response. “All these different modules are basically just a program, and they don’t just work on cars,” Ching explains. “They can run on any computer as long as you have the data.” Those software modules are the entities that “play” Ching’s simulations — and the way they decide to play informs how the company’s engineers adjust their behavior out on the road in real life.
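
To make that idea concrete, here is a minimal sketch in plain C++ (with entirely hypothetical types, names, and values, not the interfaces of any real autonomy stack) of how a perception module’s output might feed a planning module, and why the same code can run on any computer once it has data to work with:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical, simplified types -- real autonomy stacks are far richer.
struct DetectedObject {
    std::string kind;      // "vehicle", "pedestrian", ...
    double distance_m;     // distance from the ego car
    double bearing_deg;    // angle relative to the car's heading
};

// A "perception module": turns sensor data into detected objects. Here the
// data is faked, but it could equally come from a log replay or a simulator.
std::vector<DetectedObject> perceive() {
    return {{"pedestrian", 12.0, -5.0}, {"vehicle", 40.0, 2.0}};
}

struct DrivingCommand {
    double target_speed_mps;
    double steering_deg;
};

// A "planning module": decides what to do given what perception reported.
DrivingCommand plan(const std::vector<DetectedObject>& objects) {
    DrivingCommand cmd{13.0, 0.0};  // cruise by default
    for (const auto& obj : objects) {
        if (obj.kind == "pedestrian" && obj.distance_m < 20.0) {
            cmd.target_speed_mps = 3.0;  // slow down
            cmd.steering_deg = 10.0;     // steer gently away
        }
    }
    return cmd;
}

int main() {
    // The modules are "just a program": feed them data, get decisions out.
    DrivingCommand cmd = plan(perceive());
    std::cout << "speed: " << cmd.target_speed_mps
              << " m/s, steering: " << cmd.steering_deg << " deg\n";
}
```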

The self-driving car simulations Ching builds look very similar to the free and open-source Voyage Deepdrive simulator above, which also utilizes Unreal Engine 4.

“When you first start building these simulations, it really does feel a lot like making a video game,” Ching says. “You have pedestrians, objects, vehicles. You build out the world. You can have weather systems, and you’re making different tasks for the car to do.” Ching’s simulations typically come in two flavors. The first kind replicates actual traffic data gathered from old road tests. “We will replay the data as a simulation and see, after we change some code, does the vehicle respond to the situation differently?” The second kind consists of hypothetical events Ching generates himself, often used to test vehicle responses to particularly dire situations. “Those are more useful for very extreme scenarios you don’t get from real-life road tests — scenarios you don’t want from real-life road tests,” Ching says.
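
A rough sketch of that split, using made-up data structures rather than any company’s actual formats: both flavors ultimately produce the same stream of scenario frames, so the modules under test never need to know whether the traffic came from a recorded log or a hand-authored edge case.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// One moment of a scenario: where every other road user is right now.
struct Actor { double x, y, speed; };
struct Frame { double time_s; std::vector<Actor> actors; };

// Flavor 1: replay frames parsed from a recorded road test.
// (Here the "log" is a hard-coded stand-in for real logged data.)
std::vector<Frame> load_replay() {
    return {
        {0.0, {{50.0, 0.0, 10.0}}},
        {0.1, {{49.0, 0.0, 10.0}}},
    };
}

// Flavor 2: synthesize an extreme scenario you'd never want to capture on a
// real road, e.g. another car cutting in hard at close range.
std::vector<Frame> make_cut_in_scenario(double duration_s, double dt) {
    std::vector<Frame> frames;
    for (double t = 0.0; t < duration_s; t += dt) {
        Actor cutter{20.0 - 15.0 * t,           // closing fast from ahead
                     3.5 * std::exp(-2.0 * t),  // sliding into our lane
                     15.0};
        frames.push_back({t, {cutter}});
    }
    return frames;
}

int main() {
    auto replay = load_replay();
    auto synthetic = make_cut_in_scenario(2.0, 0.1);
    // Either set of frames can be fed to the exact same driving modules.
    std::cout << replay.size() << " replayed frames, "
              << synthetic.size() << " synthetic frames\n";
}
```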

At the first self-driving vehicle company he worked for, Ching put together all the simulations in the Unreal Engine — but the similarities to building a video game usually stopped after the initial setup. “When you think about how these are actually being utilized, the company wants as much data as possible. So you’re always thinking about, how can I run more scenarios? Can I get 100, maybe 1,000 simulations running at the same time? How do I gather the statistics from them all? That’s something you don’t think about at a game studio,” Ching says. When testing the car’s camera sensors in particular, Ching’s simulations tend to look the most like a video game. “If you want to simulate and test the cameras, you’re trying to make the render result as close as possible to what a real-world camera would see,” Ching explains. “Of course Unreal 4 has great graphics, but it’s hard to get the render as close as what real-world cameras capture!”
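
As for the scale question Ching raises, a toy illustration of that fan-out-and-gather pattern (hypothetical scenario IDs and metrics, using nothing beyond the C++ standard library) might look like this: launch many scenarios as concurrent jobs, then fold their results into a single set of statistics. In practice the fan-out would span a whole compute cluster rather than one machine, but the gather step looks much the same.

```cpp
#include <algorithm>
#include <future>
#include <iostream>
#include <random>
#include <vector>

// Result of one simulated scenario (hypothetical metrics).
struct Result { bool collision; double min_gap_m; };

// Stand-in for driving one full scenario through the autonomy modules.
Result run_scenario(int scenario_id) {
    std::mt19937 rng(scenario_id);
    std::uniform_real_distribution<double> gap(0.2, 8.0);
    double min_gap = gap(rng);
    return {min_gap < 0.5, min_gap};
}

int main() {
    const int kScenarios = 100;  // in production this might be thousands,
                                 // spread across a fleet of machines

    // Fan out: launch every scenario as a concurrent task.
    std::vector<std::future<Result>> jobs;
    for (int id = 0; id < kScenarios; ++id)
        jobs.push_back(std::async(std::launch::async, run_scenario, id));

    // Fan in: fold every run's result into one set of statistics.
    int collisions = 0;
    double tightest_gap_m = 1e9;
    for (auto& job : jobs) {
        Result r = job.get();
        if (r.collision) ++collisions;
        tightest_gap_m = std::min(tightest_gap_m, r.min_gap_m);
    }
    std::cout << collisions << " collisions in " << kScenarios
              << " runs; tightest gap " << tightest_gap_m << " m\n";
}
```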

Given the amount of polish and time required to achieve that level of graphical fidelity, running thousands of simultaneous camera sensor simulations isn’t feasible. Instead, Ching’s simulations often graphically recreate what’s called LIDAR, or Light Detection and Ranging. One of self-driving vehicles’ key remote-sensing systems alongside cameras, LIDAR is essentially a 3D laser scan the car takes by rapidly emitting light beams and bouncing them off the environment. “Usually there are 100,000 beams every 0.1 seconds,” Ching says. “So for that part, I use my graphics knowledge to simulate the car’s LIDAR view in real-time.” The data generated by Ching’s simulations is collected in real-time too — another huge benefit over real-life road tests, which require long data upload times after the fact.
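
A rough sketch of what one simulated sweep involves (a toy ray cast against a flat ground plane rather than any production LIDAR model, with illustrative beam counts only): each beam is just a ray whose hit point becomes one point in the scan.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

struct Point { double x, y, z; };

constexpr double kPi = 3.14159265358979323846;

// Cast one beam from the sensor and intersect it with a flat ground plane
// at z = 0 -- a stand-in for ray-casting against the full 3D scene.
bool cast_beam(double azimuth_rad, double elevation_rad,
               double sensor_height_m, double max_range_m, Point* hit) {
    double dz = std::sin(elevation_rad);
    if (dz >= 0.0) return false;            // beam points up: no ground hit
    double range = -sensor_height_m / dz;   // distance along the beam
    if (range > max_range_m) return false;  // beyond the sensor's reach
    double horizontal = std::cos(elevation_rad) * range;
    *hit = {horizontal * std::cos(azimuth_rad),
            horizontal * std::sin(azimuth_rad), 0.0};
    return true;
}

int main() {
    // Roughly 100,000 beams per 0.1-second sweep, e.g. 64 vertical channels
    // times 1,600 azimuth steps (illustrative numbers only).
    const int kChannels = 64, kSteps = 1600;
    std::vector<Point> cloud;
    for (int c = 0; c < kChannels; ++c) {
        double elevation = (-25.0 + 0.5 * c) * kPi / 180.0;
        for (int s = 0; s < kSteps; ++s) {
            double azimuth = s * 2.0 * kPi / kSteps;
            Point p;
            if (cast_beam(azimuth, elevation, 1.8, 100.0, &p))
                cloud.push_back(p);
        }
    }
    std::cout << cloud.size() << " points in this simulated sweep\n";
}
```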

Recreating autonomous cars’ LIDAR view isn’t as graphically intense as recreating their camera view, allowing Ching to run more simulations in less time.

The first company Ching worked at made a splash in 2020 by debuting China’s first public fleet of driverless taxis in Shenzhen. Early this year, they followed it up by earning a permit from the state of California to start testing their driverless vehicles in parts of San Jose. “The project here had been running for over a year just to get that permit,” Ching says. “It basically just takes lots of tests and simulations to get to that point!” While Ching recently left the company to join a separate autonomous car company – also based in San Jose – his work still largely looks the same. “The big difference is that the new company has their own simulation program based on C++, so I don’t work in Unreal anymore,” Ching says. “But I’m still building out these scenarios and scenes to help our engineers check out this data more easily.”

When it comes to helping engineers utilize his simulations and tools easily, Ching says his DigiPen education has played a huge role. “What DigiPen taught me is that if you want to make a really good game, really good tools, or really good software in general, the interaction and user interface is way more important than most engineers think,” Ching says. “Many software engineers I’ve encountered are solely pursuing performance and stability, but DigiPen taught me to always consider, ‘Is this feature going to be easy to use?’ It’s just like games — you want it to look good and feel good for the ‘player’ so they keep coming back.”