A lot is at stake every time an autonomous car or plane strikes off on its own. Engineers place great trust in the intelligent systems that see and sense the world and help self-controlled vehicles steer clear of virtually every hazard that might come their way.

“Our main challenge as a field is how do we guarantee that these amazing capabilities of AI systems (driverless cars and pilotless planes) are safe before we deploy them in places where human lives are at stake?” says Anthony Corso, a postdoctoral scholar in aeronautics and astronautics and executive director of the Stanford Center for AI Safety.

“The systems themselves are extremely complex, but the environments we are asking them to operate in are incredibly complex, too,” Corso says. “Machine learning has enabled robotic driving in downtown San Francisco, for example, but it’s a huge computational problem that makes validation all the harder.”

Road tests are the ultimate arbiter of safety, but they typically come only at the very last stages of the design cycle and are freighted with the same risks to human life that researchers are hoping to avoid.