Waymo is recalling more than 3,700 vehicles due to a software vulnerability that could cause the autonomous fleet to drive into standing water, such as flooded roadways. The move comes as the company continues to scale its driverless ride-hailing operations, highlighting the persistent struggle to solve "edge cases"—those rare but dangerous environmental scenarios that challenge the perception systems of artificial intelligence.
The recall focuses on a specific failure in how the vehicles' software interprets standing water. In certain conditions, the system may fail to recognize a flooded road as a hazard, potentially directing the vehicle into depths that could cause mechanical failure or create dangerous situations for passengers and other road users. While Waymo has not reported any accidents resulting from this specific software flaw, the recall is a preemptive safety measure intended to address the risk before any harm occurs.
For those of us who have spent time in the weeds of software engineering, this issue is a classic example of the "perception gap" in robotics. Autonomous vehicles do not "see" the world the way humans do; they construct a mathematical model of their surroundings based on a fusion of sensors. When that model fails to account for the unique optical properties of water, the results can be catastrophic.
The technical struggle with water perception
To understand why a sophisticated AI would drive into a flood, one has to look at the physics of the sensors involved. Waymo vehicles rely heavily on LiDAR (Light Detection and Ranging), which fires millions of laser pulses per second to create a 3D map of the environment. However, water is a specular surface, meaning it often reflects laser pulses away from the sensor rather than bouncing them back. This can create “holes” in the point cloud, making a flooded street appear as a void or a perfectly flat, drivable surface.
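To make the "holes in the point cloud" idea concrete, here is a minimal, hypothetical sketch (not Waymo's actual pipeline): given a bird's-eye-view grid of LiDAR return counts, cells with suspiciously few returns are flagged as possible specular or void regions that deserve extra caution.

```python
def find_point_cloud_holes(returns_per_cell, min_returns=5):
    """Flag bird's-eye-view grid cells with suspiciously few LiDAR returns.

    Standing water is specular: it tends to reflect laser pulses away from
    the sensor, so a flooded patch can show up as a "hole" with few or no
    returns rather than as an obstacle. This toy check marks any cell
    whose return count falls below a threshold.
    """
    return [
        [count < min_returns for count in row]
        for row in returns_per_cell
    ]

# Toy grid of pulse-return counts (rows = forward distance, cols = lateral).
grid = [
    [40, 38, 42, 41],
    [39,  1,  0, 37],  # sparse cells: candidate water/void region
    [41, 36, 40, 43],
]
holes = find_point_cloud_holes(grid)
flagged = sum(cell for row in holes for cell in row)
print(flagged)  # → 2 cells flagged for downstream caution
```

The key point is that the hazard manifests as *missing* data, not as an obstacle return—which is precisely why a naive "no obstacle means drivable" rule fails over water.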

Computer vision—the cameras that identify lane lines and signs—faces a similar hurdle. Standing water acts like a mirror, reflecting the sky, nearby buildings, or traffic lights. This “mirror effect” can confuse semantic segmentation models, which are designed to categorize pixels as “road,” “sidewalk,” or “obstacle.” If the AI identifies the reflection of a clear blue sky on a flooded road as simply “empty space,” it may conclude the path is safe to traverse.
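One simple way to catch the mirror effect is a plausibility check on the segmentation output. The sketch below is a hypothetical heuristic, not a production model: any pixel labeled "sky" that appears *below* the horizon line is physically implausible and is flagged as possible reflected sky on a wet road.

```python
def flag_suspicious_sky_pixels(labels, horizon_row):
    """Flag 'sky' labels that appear below the horizon line.

    A mirror-like flooded road reflects the sky, so a segmentation model
    can emit 'sky' pixels inside the road region. Sky below the horizon
    is physically impossible, so such pixels are flagged as possible
    standing water rather than trusted as 'empty space'.
    """
    suspicious = []
    for r, row in enumerate(labels):
        for c, label in enumerate(row):
            if label == "sky" and r > horizon_row:
                suspicious.append((r, c))
    return suspicious

# Toy label map: rows 0-1 lie above the horizon, rows 2-3 are the road region.
labels = [
    ["sky",  "sky",  "sky"],
    ["sky",  "tree", "sky"],
    ["road", "sky",  "road"],   # reflected sky on a wet road surface
    ["road", "road", "road"],
]
print(flag_suspicious_sky_pixels(labels, horizon_row=1))  # → [(2, 1)]
```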
These environmental anomalies are what engineers call "long-tail" problems. While an AI can easily master 99% of driving scenarios, the final 1%—such as flash floods, heavy snow, or erratic human behavior—requires a level of contextual reasoning that current machine learning models still struggle to replicate.
How the recall will be implemented
Unlike traditional automotive recalls that require a trip to a physical dealership, the majority of this correction will likely happen via over-the-air (OTA) software updates. This allows Waymo to push new code to the entire fleet simultaneously, updating the perception algorithms to better identify the visual and sensor signatures of standing water.
The scale of the recall, affecting over 3,700 vehicles, suggests a significant portion of the active fleet is involved. This may lead to temporary service disruptions in cities where Waymo One operates, as vehicles are taken offline or restricted to specific zones until the update is verified.
| Detail | Information |
|---|---|
| Vehicles Affected | 3,700+ |
| Primary Issue | Software perception of high water |
| Risk Factor | Driving into flooded roadways |
| Resolution Method | Software update |
Regulatory oversight and the path to trust
This recall occurs under the watchful eye of the National Highway Traffic Safety Administration (NHTSA), which has increased its scrutiny of autonomous vehicle (AV) operators. The agency now requires companies to report crashes and software failures involving automated systems, ensuring that “silent” failures—those that don’t result in a crash but pose a risk—are addressed publicly.
The transparency of this recall is a double-edged sword for Waymo. On one hand, it demonstrates a commitment to safety and a willingness to proactively address flaws. On the other, it reminds the public that the “driver” in the car is still a work in progress. For a ride-hailing service to achieve mass adoption, the public must trust that the vehicle can handle not just a sunny day in Phoenix, but a torrential downpour in a complex urban environment.
The industry is currently moving toward “redundant perception,” where different types of sensors are used to cross-verify data. For example, combining LiDAR with thermal imaging or ultrasonic sensors could help a vehicle “feel” or “sense” water levels more accurately than cameras alone.
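The cross-verification logic can be illustrated with a deliberately conservative voting scheme. This is a hypothetical sketch of the general idea, not any vendor's actual fusion stack: each sensor casts a per-cell verdict, and a single hazardous vote—or even a missing reading, like a LiDAR hole over water—downgrades the cell.

```python
def assess_cell(votes):
    """Cross-verify per-sensor verdicts for one patch of road.

    `votes` maps a sensor name to True (looks drivable), False (looks
    hazardous), or None (no reading, e.g. a LiDAR hole over water).
    Redundant perception is deliberately conservative: one hazardous
    vote means avoid, and any missing reading means proceed with caution.
    """
    if any(v is False for v in votes.values()):
        return "avoid"
    if any(v is None for v in votes.values()):
        return "caution"   # sensors disagree or data is missing
    return "drivable"

print(assess_cell({"lidar": True, "camera": True, "radar": True}))   # → drivable
print(assess_cell({"lidar": None, "camera": True, "radar": True}))   # → caution
print(assess_cell({"lidar": None, "camera": False, "radar": True}))  # → avoid
```

The design choice worth noting is that missing data is treated as a signal in itself, rather than being silently interpreted as "nothing there."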
As Waymo rolls out the fix, the company will likely focus on expanding its training datasets to include more diverse weather patterns. By feeding the AI thousands of hours of footage and sensor data from flooded environments, the system can learn to recognize the subtle cues that signal danger—such as the way water ripples around a curb or the specific reflectivity of a submerged lane.
The next critical checkpoint will be the company’s safety report following the update, which will detail whether the new software successfully mitigates the risk without introducing new “regressions”—bugs that appear when fixing an existing problem. We expect further updates from regulators as they monitor the fleet’s performance during the next rainy season.
Do you trust a robotaxi to navigate a storm? Share your thoughts in the comments or let us know your experience with autonomous rides.
