Having experienced a thrilling law enforcement career, with front-row seats to advances in how police work is done with the latest technology, I have wondered about the advent of self-driving automobiles and how cops would enforce the law against such an unorthodox mode of transportation. Despite feats of automobile automation, computer engineering marvels, and ever-more-sophisticated interfaces, accidents are still bound to happen. Given that last factor, who gets the ticket(s)?
Although the self-driving concept conjures an unoccupied self-operating vehicle, the reality is that there is at least one breathing human on board. Other than delivering pizza to the driveway of my house, what would be the point of an entirely empty self-driving car? I'm not sure, but WIRED's Aarian Marshall wrote in November, "driverless rides aren't human-free, not yet." We've come a long way…and we have a long way to go.
Before we go any further, Google's self-driving car subsidiary, Waymo, describes its product this way: "Our [self-driving] vehicles have sensors and software that are designed to detect pedestrians, cyclists, vehicles, road work and more from a distance of up to two football fields away in all directions." So that is the scope of functionality built into autonomously operating autos.
However, San Francisco booted Uber's fleet of autonomous cars from its streets after a pattern of close calls in which cyclists nearly got waffled. Per coverage in Bicycling.com, "Allegations of the vehicles taking dangerous 'right hook' turns raised concerns about whether the cars were ready for any permits at all."
Indeed, Uber's chronic permitting problems compelled the ride-share giant to relocate its self-driving fleet to Arizona. Instead of correcting the reported safety problem, the company simply moved it to another metropolis. And it did so upon "flatbeds attached to Otto self-driving semi-trucks and transporting them to Arizona, with the blessing of Arizona Gov. Doug Ducey," reported Dan Roe.
There is a human presence in self-driving autos, although not exactly behind the wheel. Think of a cab driver transporting a fare around town, except there is no cabbie turning the wheel or working the pedals. But the car is occupied by someone. Incidentally, Waymo is preparing to launch a completely driverless taxi service in Phoenix, Arizona.
So, our central question is: when police action is necessary, how would cops pursue violations of law?
One vision I have involves what is commonly known in DUI investigations as "constructive possession." In basic legal terms, a person with constructive possession bears the same legal responsibility as a person with actual possession. The Legal Dictionary defines constructive possession as existing when "a person has knowledge of an object plus the ability to control the object."
In a DUI example, an intoxicated motorist behind the wheel of a car that is, let's say, parked alongside (or on) the curb with the keys dangling from the ignition is deemed in constructive possession of that auto. That equates to probable cause for police to conduct a field sobriety examination and, potentially, make an arrest.
Given that example, can "constructive possession" principles enable cops to legally hold accountable, and cite, passengers in self-driving cars? Perhaps the brief list of test cities will provoke answers to these legal questions.
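To make that two-prong test concrete, here is a minimal sketch in Python of the Legal Dictionary's definition as it might be applied to a self-driving car's occupant. The field names and the "disengage control" heuristic are my own assumptions, not anything drawn from a statute; whether a hands-off passenger's access to controls counts as the "ability to control" is precisely the open question.

```python
# A minimal sketch of the constructive-possession test described above.
# Field names and the disengage heuristic are hypothetical assumptions,
# not drawn from any statute or case law.
from dataclasses import dataclass

@dataclass
class Occupant:
    knows_vehicle_is_operating: bool  # prong 1: knowledge of the object
    can_engage_disengage: bool        # prong 2: ability to control the object

def in_constructive_possession(occupant: Occupant) -> bool:
    """Both prongs must be satisfied: knowledge plus the ability to control."""
    return occupant.knows_vehicle_is_operating and occupant.can_engage_disengage

# The open legal question: does access to a "disengage" button count as
# the ability to control the vehicle?
passenger = Occupant(knows_vehicle_is_operating=True, can_engage_disengage=True)
print(in_constructive_possession(passenger))  # True, under this reading
```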
Google weighed in while also boasting that none of its self-driving cars has ever received a ticket from a cop. The Atlantic published a piece on this, saying, "When the car is in operation, there is someone sitting in the driver's seat [or the front/rear passenger seats], but that person isn't actually doing anything. Perhaps the ticket should go to the programmer who wrote the algorithm that made the mistake?" Doesn't that say how much (or how little) they think of their own product design?
Perhaps the answer to that comes from Google's self-driving gurus.
Whether cocky, confident, or putting on its poker face, Google's position on driverless-car violations is this: "What we've been saying to the folks in the DMV, even in public session, for unmanned vehicles, we think the ticket should go to the [auto-making] company. Because the decisions are not being made by the individual," said Ron Medford, safety director for Google's self-driving car program and former deputy administrator of the National Highway Traffic Safety Administration.
At present, self-driving automobiles are tested in Arizona, California, Florida, Michigan, Nevada, Ohio, Texas, Washington State, and Washington, D.C. Per the California Department of Motor Vehicles, 20 manufacturers have been issued permits to test hundreds of self-driving cars in that state.
Although self-driving automobiles are unquestionably futuristic and otherworldly, the cost factor is said to be out of this world as well.
Will owners of these rather expensive machines have a false sense of indemnity, or will they be entirely responsible for what the freewheeling auto-pilot does (or fails to do), despite the navigational awe of the on-board sensors?
According to autoinsurancecenter.com, the per-unit "cost of implementing the new technology could be way out of reach for most Americans. Currently, the engineering, power and computer requirements, software, and sensors add up to more than $100,000." Surely, the cutting-edge technology feeding the AI brain is not inexpensive. Eek, too cost-constraining for my wallet. And the lack of legal precedent can also be dissuasive for consumers.
Now that we've established we have an expensive hot-potato topic on our hands, how do we enforce violations of law without a cop bobbling on the side of the road, staring at an empty driver's seat? Traditionally, statutes require motorists in violation of traffic laws to sign the ticket, so how are cops expected to get a signature from mechanical, AI entities? Is a passenger's signature a suitable endorsement? Will "I didn't do it, the machine did!" and "You can't place me behind the wheel!" arguments hold up in traffic court?
Refraining from drinking and driving is unquestionably beneficial and safe for the community (which makes self-driving cars sound like a utopian concept), but what about the passenger(s) possessing open containers? That is a violation of state law everywhere. If observed by a cop, how does enforcement action take place? Does the auto's brain (sensors) sense a pissed-off police officer needing to conduct a traffic stop? Will the passenger somehow activate the "disengage" features (more on that in a moment) and obey lawful directions to pull over? Lacking human cooperation, how the heck does a police officer stop such a contraption? Do police have to aim their cruiser at the thing to have its sensors halt movement? Playing chicken could be perilous to public safety.
And what about sobriety checkpoints? It is my understanding that autonomous vehicles sense objects and steer clear or come to a halt altogether. Field sobriety cops want and need that vehicle to follow their directions. Anything less is a virtual standoff and a tension-filled road episode. And if any occupant of that driverless car attempts to take over operations, to a cop that looks like hoodwinking: someone attempting to switch out drivers.
In my days as a policeman, I made several traffic stops in which the driver and a passenger swiftly switched seats to avoid detection for "driving on a dirty license." Not only does it not look good, it raises the enforcement ante. The interplay with a self-driving car could be quite interesting under these circumstances.
Can driverless cars be engineered to comply with police procedures automatically? Yeah, I know I may be reaching! But cops are a thought-provoking bunch, and this particular one can kinda envision police pursuing something akin to the headless horseman. Disney is not the only one with imagination. Besides, cops must be curious to be effective in their role.
Speaking of imagination, what are lawmakers doing about the question of self-driving cars and law enforcement? Some states may not have even a crumb of legislation regarding law enforcement practices encompassing self-driving cars. Through no fault of their own, cops may not be familiar with the capabilities of this entirely nascent concept.
Case in point: a Mountain View police officer pulled over a Google-made self-driving car in November 2015 for “going too slow” and “impeding traffic flow.”
In its daily incident report, the Mountain View PD bulletin stated, "This afternoon a Mountain View Police Department traffic officer noticed traffic backing up behind a slow moving car travelling in the eastbound lane on El Camino Real. The car was travelling at 24mph in a 35mph zone. As the officer approached the slow moving car he realized it was a Google Autonomous Vehicle.
"The officer stopped the car and made contact with the operators to learn more about how the car was choosing speeds along certain roadways and to educate the operators about impeding traffic per the California Highway Code." (Where the bulletin says "operators," it should actually read "occupants.")
Google's response to the traffic stop? "Driving too slowly? Bet humans don't get pulled over for that too often. We've capped the speed of our prototype vehicles at 25mph for safety reasons." Swell. So they admitted it is engineered to underperform when a more rapid traffic pace is legally warranted. So, who gets the ticket: the owner or the manufacturer?
A report in NetworkWorld.com examined it from the other side of the speed factor, saying “state and local governments will need to account for a drastic reduction in fines from traffic violations as autonomous cars stick to the speed limit.” So, with driverless cars come economic impacts as well.
Connected Cars and Smart Cities
OpsLens recently published a piece exploring the artificial intelligence quotient in traffic-control operations in cities where vehicle crashes were stacking up, and how predictive policing using AI technology can subdue such snarls before they transpire. But how does that translate to autonomous cars tooling around American streets and responding in advance of traffic woes? Do self-driving autos have the capacity to identify and avoid trouble spots? If not, what is the workaround?
Google’s Waymo website revealed such hiccups: “While we’re working toward fully self-driving cars, we have test drivers who monitor our vehicles and can take over driving if needed—we call this a ‘disengage.’ By the end of 2016, our rate of safety-related disengages had fallen dramatically. That was a four-time improvement in just 12 months and we’re working on bringing the numbers down even further.”
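For concreteness, here is what a "four-time improvement" means arithmetically. The starting rate below is an illustrative assumption, not Waymo's published figure:

```python
# Illustrative arithmetic only: the starting rate is an assumption,
# not Waymo's published data. A "four-time improvement" means the
# safety-related disengage rate fell to a quarter of its prior value.
rate_per_1000_miles_2015 = 0.8   # assumed starting disengage rate
improvement_factor = 4           # "a four-time improvement in just 12 months"

rate_per_1000_miles_2016 = rate_per_1000_miles_2015 / improvement_factor
print(rate_per_1000_miles_2016)  # 0.2 disengages per 1,000 miles, if so
```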
Well, it sounds like calibrations are askew in Robo-town, even as Waymo touts its autonomous-driving product: "This will deliver the biggest impact on improving road safety and mobility for everyone." If in 2016 the manufacturer of self-driving autos recognized that operations may go awry, and even gave the phenomenon a working name ("safety-related disengages"), there is cause for concern, especially as it relates to law enforcers having to interface with these roadsters.
Every police officer writes traffic citations and traffic crash reports. On these legal documents are tiny boxes the cop checks off to indicate factors relevant to the traffic infraction and/or crash causation, whether operational, environmental, mechanical, or climatic (Mother Nature sometimes spits hail at windshields). I suppose jurisdictions will have to check the Other or Unknown box and write in "self-driving stuff."
Hackers’ Hijackings
Given the full automation of self-driving cars, how do we preempt pesky hackers and their hijinks? Taking over the brain of an autonomous car means nefarious yet tech-savvy hackers could careen a car into a crowd at will. Or lock up its braking system, resulting in a massive pile-up. Or park it in a busy intersection and seize up its ability to move when necessary.
Lee Riley, CEO of ALPS (Auto Legal Protection Services), wrote, “Driverless vehicles relying on 4G/5G connectivity will bring with them new risks such as cyber crime, hacking, ransomwares. It’s not inconceivable (particularly when cars are increasingly the weapon of choice for terrorists) that hackers could re-program a vehicle to change lanes and have a head-on [collision], override vehicle controls, or even create what has been coined ‘spam jams’ (hacker-created congestion).”
Among those concerned, US lawmakers do not feel we are ready for driverless cars either.
“For a few senior party lawmakers, the fear is that these computer-driven vehicles aren’t yet ready for major roadways or might be susceptible to cyber attacks. So they’re standing in the way of a Senate vote on the bill, demanding changes that they say are essential to protect riders’ safety,” reported Recode.net on January 18, 2018.
Divided between support and apprehension, the sometimes-kooky Congress still steps forward. Not only did the House embrace the self-driving concept, it bent over backwards for industry bigwigs such as Tesla, Google (Waymo), and Uber.
According to Recode reporter Tony Romm, “Lawmakers there specifically sought to help tech giants and automakers obtain special exemptions so that they could test droves of new experimental vehicles around the country — without adhering to the same safety standards that apply to older cars. Their bill, called the Self-Drive Act, won swift, broad approval from House Democrats and Republicans alike.”
This, while Senator Dianne Feinstein astutely emphasized, “People need to be assured, and they need to be assured over time. And you can’t just dump something on a freeway and have people looking over saying, ‘My God, there’s no driver.'” How refreshing to have some clear-cut common sense discussion in our nation’s Capitol. Makes me wanna throw confetti and go for a Sunday drive, waving to passersby with one hand as I steer with the other. I prefer control.
Democratic Sens. Richard Blumenthal, Edward Markey and Dianne Feinstein have holds on the driverless cars bill, preventing unanimous consent, Sen. John Thune said. https://t.co/3ae6aIVtIs
— Roll Call (@rollcall) January 25, 2018
Lending his voice to Sen. Feinstein's stance, Sen. Richard Blumenthal dubbed autonomous driving "an emerging and unproven technology." I cannot argue with Sen. Blumenthal's assertion. In Wednesday's Roll Call edition, Sen. Blumenthal was cited as having invoked "Unsafe at Any Speed," the title of Ralph Nader's 1965 book that "accused manufacturers of failing to make cars safe."
Fast Company reported, “It seems that the problems faced by driverless cars and by human drivers are much the same. We try to avoid crashes and collisions, and we have to make split-second decisions when we can’t.” Again, in either event, someone must be held accountable.
Frankly, I liken this self-driving automobile and law enforcement quandary to how automated red-light violations (recorded by cameras at intersections) are handled. No matter who is in the car (or not) at the time of the infraction, the registered owner of record gets the ticket. As with all traffic citations, the courts are there to hear each side of the case.
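As a rough sketch of that red-light-camera analogy (a simplification, with hypothetical names), the attribution logic is strikingly blunt: liability follows the plate, not the person:

```python
# A minimal sketch of the red-light-camera analogy: the citation attaches
# to the registered owner of record, regardless of who, or what, was
# driving. Names and structure are hypothetical.
def issue_camera_citation(plate: str, registry: dict[str, str]) -> str:
    """Return the party cited for an automated-camera violation."""
    # No attempt is made to identify who was behind the wheel,
    # or whether anyone was behind the wheel at all.
    return registry[plate]

registry = {"ABC1234": "Registered Owner of Record"}
print(issue_camera_citation("ABC1234", registry))  # the owner gets the ticket
```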
Google's Medford said the California legislature has not "changed the motor vehicle codes. We've encouraged the DMV to think creatively about how you deal with this so they don't give a ticket to a person who is not responsible or involved in driving."
California DMV counsel Brian Soublet explained traffic codes regarding self-driving cars: “The vehicle code defines an operator as the person seated in the driver’s seat or, if there is no one seated in the driver’s seat, the person who causes the autonomous technology to engage.” And if it’s a hacker?
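Soublet's definition reads almost like executable logic. Here is a minimal sketch of the rule exactly as he states it (hypothetical names; not actual DMV code), which shows why the hacker question stings: the rule, read literally, names whoever engaged the system.

```python
# A sketch of the California vehicle-code rule as Soublet describes it:
# the "operator" is whoever occupies the driver's seat or, failing that,
# whoever caused the autonomous technology to engage. Hypothetical names.
from typing import Optional

def determine_operator(person_in_drivers_seat: Optional[str],
                       person_who_engaged_autonomy: Optional[str]) -> Optional[str]:
    if person_in_drivers_seat is not None:
        return person_in_drivers_seat
    return person_who_engaged_autonomy

print(determine_operator(None, "vehicle owner"))   # empty seat: the owner
print(determine_operator(None, "unknown hacker"))  # read literally: the hacker
```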
How states mete out traffic enforcement statutes oughta be interesting, don't you think?