Autonomous autos could cause controversy (Image: Jasper Juinen/Bloomberg via Getty Images)

A car crashes while on autopilot and the driver dies. A hefty security robot at a US shopping mall knocks down and injures a child.

Autonomous systems and machines that make their own decisions are here, and more, such as airborne and ground-based delivery drones, are on the way. But the way these machines learn about the world and decide how to act is complex and opaque, making it hard to know how often they will get it wrong. As yet, there are no standards for assessing this.

So perhaps it is time, before such artificially intelligent machines become more widespread, to insist on a layer of AI-savvy oversight to certify the decision-making of partially or completely autonomous machines.

Technological leaps have always spawned new regulatory bodies to keep innovators in check and ensure safety standards are met. For instance, in the 1930s and 1940s, the ghastly crash rate of early airliners saw aviation safety authorities established worldwide to certify aircraft designs.

And in the 1960s, horrific road crash injuries and fatalities saw a mass consumer movement force car makers to improve safety – adding seat belts and safety glass, for instance. This led to mandatory national standards and the establishment of the US National Highway Traffic Safety Administration (NHTSA).

Tough to test

Such regulators traditionally pronounce products fit for duty or send them back to the drawing board. That’s easier for machines in which a limited set of starting conditions produces clear, testable outcomes. But it’s tougher for cognitive systems, which let a machine make its …
