When Waymo's Safest Feature Becomes a Trap

Published on March 17, 2026
Source: Daniel Ramirez from Honolulu, USA, CC BY 2.0, via Wikimedia Commons

In January, a San Francisco man spent roughly six minutes punching the windows of a stopped Waymo, trying to lift it off the ground, and screaming death threats at the three riders inside. According to passenger Doug Fulop, who described the ordeal to The New York Times, the car sat immobile while bystanders cheered the attacker on. A police report reviewed by the Times confirmed his account. The car eventually pulled away only after the man drifted far enough from the vehicle that its sensors cleared.

The incident is disturbing on its own terms. But it also surfaces a design tension that Waymo has not publicly resolved: the same software that makes the car stop when a human is nearby — a genuine safety achievement — is the mechanism that kept Fulop and the other riders locked in place while someone threatened to kill them.

A feature, not a bug — until it is

Waymo's stop-for-humans protocol is central to its safety record. The company's own data, cited in peer-reviewed research, shows an approximately 90 percent reduction in serious-injury crashes compared with human drivers over equivalent distances, according to KQED. Those numbers are real, and they reflect the value of a system that never drives drunk, never texts, and reacts in milliseconds. But the Fulop incident joins a growing catalog of cases in which that same protocol leaves passengers stranded — not because the car is malfunctioning, but because it is working exactly as designed.

The Times reported that Waymo told riders it would not remotely command the car to leave if someone was standing nearby, and that the car's software has no provision for passengers to climb into a driver's seat and override the system. Fulop's only option was to stay on the phone with Waymo's support line and wait.

This is not a new vulnerability. A documented pattern of incidents stretches back years: in 2024, a man covered a robotaxi's sensors while passengers were inside, effectively freezing it; three women were trapped as vandals spray-painted their cab; a solo rider in SoMa was harassed by men who blocked the vehicle and tried to get her phone number; and in February 2024, a Waymo in Chinatown was surrounded, its windows smashed, and a lit firework thrown inside — though that vehicle, fortunately, was empty. In each case, the car's response to human presence near its body was the same: stop and hold.

An accountability gap that goes beyond the car

The January attack lands during a period of escalating tension between San Francisco and its robotaxi fleet. In December 2025, a city-wide power outage caused 1,593 Waymo vehicles to stall for at least two minutes on city roadways, according to data the company submitted to the California Public Utilities Commission. The city's Department of Emergency Management placed 31 calls to Waymo's first-responder line that afternoon and kept being placed on hold, the SF Examiner reported. Sixty-four vehicles had to be physically moved by people. At a subsequent City Hall hearing, Supervisor Jackie Fielder was pointed: "Innovation without accountability is recklessness."

Accountability, it turns out, is structurally difficult with a driverless vehicle. When a Waymo executed an illegal U-turn in San Bruno in September 2025 directly in front of a sign prohibiting the maneuver, local police were unable to issue a citation — there was no human driver to ticket, as the Associated Press reported. Federal investigators opened separate inquiries after Waymo vehicles allegedly passed stopped school buses with signals active — at least 19 times before a software recall, with additional incidents afterward, according to Autoblog. A state bill that would have given San Francisco the authority to cap the number of robotaxis on its streets died in the legislature two years ago.

The public is noticing

Anti-robotaxi sentiment has moved from Reddit threads to organized political action. On January 9, Lyft and Uber drivers rallied outside the California Public Utilities Commission headquarters in San Francisco, demanding greater oversight — while a steady stream of Waymo vehicles rolled past. Similar demonstrations have followed in Seattle, Los Angeles, and New York City, with labor unions and gig worker organizations as the organizing spine. In Boston, Teamsters Local 25 formed a formal coalition against robotaxis last fall. A recent Pew Research Center survey of more than 4,600 people found that about 85 percent believe the rollout of driverless cars will lead to job losses, and roughly 70 percent called them a "bad idea for society" or said they were unsure, according to Futurism.

That hostility has turned violent. In June 2025, demonstrators torched five Waymo vehicles during protests in downtown Los Angeles, forcing the company to suspend service in parts of the city, as The Washington Post documented. San Diego's regional transit board has since filed a formal protest with the CPUC, seeking local control over Waymo's planned expansion there. Meanwhile, records obtained by Fast Company and analyzed by Carnegie Mellon's Safety21 show that San Francisco's Municipal Transportation Agency created an entirely new dispatch category — "Driverless Car Incident" — to log the growing volume of stalled-vehicle reports from Muni bus and trolley operators, with resolution sometimes taking up to an hour.

The passengers caught in the middle

None of this hostility has made riding through it feel any safer for Waymo passengers. Fulop told the Times he stopped using the service after the January attack and would avoid it at night unless the company revised its policy of not intervening when someone is actively threatening passengers. Other riders have reached different conclusions from similar experiences — one rider attacked by e-bikers in Los Angeles told the Times he actually felt reassured, trusting the car's cameras to record the incident without a driver who might panic or escalate. The divergence in those reactions points to the core question Waymo hasn't answered: at what threshold does imminent danger to a passenger override the stop-for-humans rule?

Waymo spokeswoman Katherine Barna called the Fulop incident "very unfortunate" but a "rare occurrence," telling the Times the company's remote team had stayed on the phone with riders throughout. The company's expansion plans remain unchanged: 15 million trips completed in 2025, with a target of more than 20 new cities by the end of this year. Hoodline has previously documented the smaller frictions that accumulate city-block by city-block — cones on sensors, nose-to-nose standoffs on narrow streets — that haven't yet prompted policy change either.

Fulop's final word to the Times was something closer to a policy demand than a complaint: "As passengers, we deserve more safety than that if someone is trying to attack us. This can't be the policy to be trapped there." Whether Waymo treats that as an edge case or a design flaw may say more about the company's priorities than any safety statistic.