Aurora just explained their simulation approach in detail on their blog. In particular, they write that by applying procedural generation to simulation, they can "create scenarios at the massive scale needed to rapidly develop and deploy the Aurora Driver."
Interestingly, they have hired a team with Hollywood computer-generated imagery experience to automate the construction of simulation tests. Procedural generation allows engineers to produce thousands of specific tests from only a few general parameters for a scenario.
For example, Aurora engineers might ask for lots of tests involving highway merges in the rain, within a certain speed range. Their system would then generate thousands of permutations of that type of test, using a combination of mapping and behavioral data from the real world, and simulation-specific data.
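Aurora doesn't publish its tooling, but the core idea can be sketched: a handful of high-level parameters expand into thousands of concrete scenario permutations, enumerating discrete axes and sampling continuous ones. A minimal illustration in Python, with all field names, parameter ranges, and counts being hypothetical placeholders rather than anything Aurora has described:

```python
import itertools
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Scenario:
    """One concrete simulation test case (all fields hypothetical)."""
    maneuver: str
    weather: str
    ego_speed_mph: float
    merge_gap_s: float    # gap the merging vehicle accepts, in seconds
    traffic_density: int  # surrounding vehicles per mile

def generate_scenarios(maneuver, weather, speed_range_mph, n_per_combo=50, seed=0):
    """Expand a few high-level parameters into many concrete permutations.

    Discrete axes (speed bucket, traffic density) are enumerated
    exhaustively; continuous ones (merge gap) are sampled randomly.
    """
    rng = random.Random(seed)
    lo, hi = speed_range_mph
    speeds = [lo + i * (hi - lo) / 9 for i in range(10)]  # 10 speed buckets
    densities = [20, 40, 60, 80]
    scenarios = []
    for speed, density in itertools.product(speeds, densities):
        for _ in range(n_per_combo):
            scenarios.append(Scenario(
                maneuver=maneuver,
                weather=weather,
                ego_speed_mph=round(speed, 1),
                merge_gap_s=round(rng.uniform(1.0, 4.0), 2),
                traffic_density=density,
            ))
    return scenarios

# Ask for highway merges in the rain, 45-70 mph, and get 1,000 tests back.
tests = generate_scenarios("highway_merge", "rain", (45, 70), n_per_combo=25)
print(len(tests))  # 10 speeds x 4 densities x 25 samples = 1000
```

The point is the leverage: the engineer specifies three arguments, and the combinatorial expansion does the rest, which is how a small team can cover a large scenario space.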
It’s a really interesting read, and something Aurora believes in strongly. “The Aurora Driver performed 2.27 million unprotected left turns in simulation before even attempting one in the real world,” they reveal.
The timing of the blog post is interesting, coming right on the heels of the 2020 California Autonomous Vehicle Mileage and Disengagement Reports. Aurora's numbers in those reports were quite low, probably a function of the company's focus on Pittsburgh and other testing areas outside California.
Nonetheless, one piece of the puzzle I'd love to see in Aurora's blog post is a metric for how well simulation translates into on-road performance. Ultimately, that should be the true measure of a simulator's effectiveness.