Opinion | Autonomous Vehicles Are Driving Blind

In a recent opinion piece, Julia Angwin highlights the dangers posed by autonomous vehicles in the absence of federal software safety testing standards. She points out that while much attention goes to speculative future threats of artificial intelligence seizing control, far less goes to the immediate hazards of AI-driven vehicles: self-driving cars have interfered with firefighting efforts, and Tesla’s Autopilot system has been involved in numerous crashes and fatalities since 2019.

The key issue is the absence of government scrutiny and regulation of the AI software used in autonomous vehicles. While the hardware components of vehicles are regulated, the AI systems that drive them are not. In some states, companies can obtain permits to operate driverless cars simply by declaring their vehicles safe to operate.

Angwin argues that we need more data and testing to determine whether autonomous vehicles are actually safer than human drivers. AI systems can make unexpected mistakes, and their behavior can be unpredictable. She suggests that AI should face licensing requirements analogous to the vision and performance tests required of pilots and drivers.

The article makes the case for a more comprehensive approach to AI safety, not just in autonomous vehicles but across the many domains where AI is deployed. Angwin emphasizes that the focus should shift from distant, catastrophic scenarios to the immediate risks AI already poses.