It is the future. Sitting in your robotic, autonomous car, you find the commute to work has become a time for reflection, preparation, or relaxation. You can do whatever you choose, from surfing the internet to catching up on missed sleep. The autonomous car has liberated you from the daily grind—unless, of course, you use it to do even more work.
But no matter what you envision yourself doing, that’s the story we’ve all been told, and according to many with a vested interest in seeing autonomous driving become reality, it’s just a few years away—five or six, if you believe the likes of Tesla Motors CEO Elon Musk. But are we being sold a bill of goods? Despite what Musk and other futurists and technophiles say, both inside the automotive industry and out, autonomous vehicles (also referred to as self-driving, robotic, or driverless vehicles) still have some hurdles to clear.
A truly autonomous car is one that can, without any intervention from the human occupant, go from point A to point B. This type of driverless car is known as a “Level 4” autonomous car in the current National Highway Traffic Safety Administration (NHTSA) legal terminology, which assigns five levels of automated driving, from 0 through 4. Level 0 means no automation at all. Level 1 is a single automated system, such as electronic stability control (ESC) or adaptive cruise control; since the 2012 model year, all new cars in the U.S. have been required to offer this level of autonomy via ESC. Levels 2 and 3 apply to a fair number of today’s cars, which may allow the driver to cede control and attention to a significant degree but still require a human element. It’s the leap from Level 3 to Level 4 that concerns us here, a shift that will require massive amounts of philosophical, technological, and legislative effort and change.
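For the gearheads who also write software, the NHTSA ladder is easy to picture as a simple enumeration. The sketch below is purely illustrative; the class and function names are made up for this post, and the level descriptions are paraphrased from the agency's definitions.

```python
from enum import IntEnum

class NhtsaLevel(IntEnum):
    """Paraphrased NHTSA automation levels, 0 through 4."""
    NO_AUTOMATION = 0         # the driver does everything
    FUNCTION_SPECIFIC = 1     # one automated system, e.g. ESC or adaptive cruise
    COMBINED_FUNCTIONS = 2    # two or more systems working together
    LIMITED_SELF_DRIVING = 3  # the car drives itself, but a human must stand by
    FULL_SELF_DRIVING = 4     # the car handles the whole trip, no human needed

def needs_human_backup(level: NhtsaLevel) -> bool:
    """Only a Level 4 car never hands control back to a person."""
    return level < NhtsaLevel.FULL_SELF_DRIVING

print(needs_human_backup(NhtsaLevel.LIMITED_SELF_DRIVING))  # True
print(needs_human_backup(NhtsaLevel.FULL_SELF_DRIVING))     # False
```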
Ethics
Creating a car that autonomously moves through the populated physical world requires ethical considerations. What happens when the car finds itself in a predicament where there is no way to avoid an accident? To put a finer point on it, how can a driver—human or machine—choose between killing or maiming a pedestrian and killing or maiming the occupants of an oncoming car?
Humans make split-second judgment calls like these in accidents every day. But the human brain is a remarkable data-collection and computing system, even when it’s not at its best. While actual computers may react more quickly and more precisely, there are few, if any, vehicle-borne computer systems that can take in and interpret all the data a human can, and none that can present a self-informed case for a moral or ethical judgment. Asking your car to choose between lives is about as logical as asking your Nest thermostat whether you should spank your child.
Machine ethics isn’t a highly developed field as yet. One of the most thoroughly thought-out systems for the management of machine decision-making comes from the field of science fiction, through the writings of Isaac Asimov. More than half a century ago, Asimov was exploring the ways we might regulate the use of decision-making machines that would face questions of human peril or mortality, most famously through his Three Laws of Robotics: a robot may not harm a human (or allow one to come to harm through inaction), must obey human orders unless they conflict with the first law, and must protect itself unless doing so conflicts with the first two.
Much of Asimov’s fiction involves exploring the conflict between these laws and the seemingly—or even factually—impossible situations such a decision-making machine might face in the real world.
Even if we can build computerized cars capable of adhering to Asimov’s tenets, humans would have to program those cars with ethical code. That means not just humans but corporations would be directly responsible for the decisions made, in real time, by a machine potentially thousands of miles away and years down the road. Where does culpability lie? Who should go to jail?
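To see why "just program it in" is easier said than done, consider a toy sketch (entirely hypothetical, not any carmaker's actual software) of what priority-ordered rules in the Asimov mold might look like. In a no-win emergency every available maneuver may harm someone, yet the code will still pick one, and someone has to answer for that pick.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Hypothetical predicted consequences of one possible maneuver."""
    description: str
    humans_harmed: int    # people the maneuver is predicted to injure
    obeys_occupant: bool  # does it follow the occupant's instruction?
    vehicle_damaged: bool

def choose_maneuver(options: list[Outcome]) -> Outcome:
    """Naive priority ordering loosely inspired by Asimov's laws:
    minimize predicted human harm first, then obedience, then
    self-preservation. The hard part is that the predictions are
    uncertain and, in a no-win situation, every option harms someone."""
    return min(options, key=lambda o: (o.humans_harmed,
                                       not o.obeys_occupant,
                                       o.vehicle_damaged))

# A no-win scenario: the function still returns an answer,
# but who is responsible for the choice it encodes?
print(choose_maneuver([
    Outcome("swerve toward the pedestrian", 1, True, False),
    Outcome("brake into the oncoming car", 2, True, True),
]))
```

The point is not that such a function is plausible; it is that whatever replaces it encodes somebody's moral judgment in advance, long before the crash happens.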
Questions like these are so complex they might require significantly more than a few years to work out; they may be the work of generations.
Technology
We are told to expect computer-driven cars to be at least 10 times safer than those piloted by humans, a figure that likely would require a constantly applicable intelligence equal to that of—at the least—the youngest legal human drivers.
Right now, there are companies that have produced cars capable of negotiating closed race courses at speeds matching those of professional human racing drivers. We have autonomous cars testing on American highways, mostly in California and Nevada, but also in Florida and Michigan. Those cars are required by law to have human occupants ready to take the controls at all times. They are works in progress, and while they are able to achieve impressive feats of self-driving, they still aren’t completely ready to deal with the world at large.
There are practical matters, too, including packaging the technology into something that leaves room for humans and their cargo. The smartest current autonomous cars carry tech that occupies the entire cargo area, and some must have a technician with a laptop monitoring their status at all times. It’s also largely experimental hardware and software, with a cost measuring well into the tens of millions of dollars.
With Moore’s Law on our side, we can expect miniaturization to happen, perhaps even within a few years. With appropriate government subsidy or sufficient volumes of commercial interest, the cost issue may be solved, too. But that assumes an industry that turns the full brunt of its research and engineering might toward autonomous cars. And it still leaves the question of whether all that technological effort can deliver machines truly capable of handling any eventuality.
Law
Assuming we could solve the ethical dilemma and make the technology work for a reasonable price, we still have another obstacle: the law. There are dozens of potential legal pitfalls in the development of a self-driving car, but three big ones stand out: product liability, legislation, and multi-jurisdictionality.
Unlike with the ethical question, here we’re not playing the blame game in a strictly moral sense but in a more practical one: Who should foot the bill when people get hurt? Product liability is a broad and multifaceted field, one already well equipped to handle novel circumstances, so it will likely handle the self-driving car with something approaching ease, at least in judicial terms. But one possible outcome of the existing product-liability system tackling the self-driven car is to pin the blame on the company that produced it.
Even allowing for the predicted order-of-magnitude reduction in crashes promised by fully autonomous cars, that leaves some number of accidents per year that could see some or all of the financial responsibility fall on the vehicle manufacturer. With individual product-liability judgments sometimes totaling multiple millions of dollars, that’s enough to put a multibillion-dollar strain on the automotive industry—and potentially enough to put some companies out of business altogether.
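To get a feel for that scale, here is a back-of-envelope sketch. The inputs are illustrative assumptions only (roughly six million police-reported U.S. crashes per year is a commonly cited NHTSA ballpark; the blame share and judgment size are placeholders), not figures from this article.

```python
def liability_exposure(annual_crashes: float,
                       crash_reduction: float,
                       share_blamed_on_maker: float,
                       avg_judgment_usd: float) -> float:
    """Rough annual product-liability exposure for manufacturers:
    crashes that still happen, times the fraction pinned on the maker,
    times the average judgment."""
    remaining_crashes = annual_crashes * (1 - crash_reduction)
    return remaining_crashes * share_blamed_on_maker * avg_judgment_usd

# Placeholder inputs: ~6M crashes/year, a 90% (order-of-magnitude) reduction,
# 1 in 1,000 remaining crashes yielding a $2M judgment against the maker.
exposure = liability_exposure(6_000_000, 0.90, 0.001, 2_000_000)
print(f"~${exposure / 1e9:.1f} billion per year")  # ~$1.2 billion
```

Even with generous assumptions, the exposure lands in billion-dollar territory, which is the strain the paragraph above is pointing at.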
We may yet find a solution that protects the consumer without bankrupting the industry, but there still remains the issue of legislating and regulating the construction, sale, and use of autonomous cars on America’s roads.
According to the Stanford Law School’s Center for Internet and Society, of the 30 states that have taken autonomous-driving legislation under consideration (even for the simple purpose of allowing testing), nine have failed to pass any legislation at all, 17 are still waiting to make a decision, and only the aforementioned four (plus the District of Columbia) have actually given the green light to any form of autonomous-car testing.
Driving is a state issue, with the age for licensure, testing requirements, renewal periods, penalties, laws, and rules all varying—sometimes widely. Composing legislation that allows a single autonomous car to navigate on its own through more than one of the 50 states, much less a significant contiguous portion of them, will be a monumental—and time-consuming—challenge.
Because there are so many potential variations in laws, manufacturers could be forced to build vehicles that conform to the laws of each of the jurisdictions in which they wish to sell cars. Much like the current emissions-regulations scenario, this could prove to be an expensive and burdensome situation for carmakers.
None of this is to deny the benefits of autonomous-vehicle development. No one actually enjoys stop-and-go traffic, and knowing the inattentive moron in the car behind you isn’t going to end up in your back seat is something to welcome with open arms, a smile, and, hey—why not?—a cold beer. Oh, wait, that’s one more issue to sort out. It is to say, however, that as much as some people might want, or even need, autonomous cars in our lives, bringing that reality about will take time, thought, effort, and expense of a societal magnitude.
from Car and Driver Blog http://ift.tt/1XWYdtv