Autonomous systems and machines that make their own decisions are here, and more are on the way, such as airborne drones and ground-based robots. But the way they learn about the world and decide how to act is complex, opaque and hard to scrutinise. There are, as yet, no standards set for this.
So perhaps it is time, before such artificially intelligent machines become more widespread, to insist on a layer of AI-savvy oversight to certify this aspect of partially or completely autonomous machines.
Technological leaps have always spawned new regulatory bodies to keep innovators in check and ensure safety standards are met. For instance, in the 1930s and 1940s, the ghastly crash rate of early airliners saw aviation safety authorities established worldwide to certify aircraft designs.
And in the 1960s, horrific road crash injuries and fatalities saw a mass consumer movement force car makers to improve safety – adding seat belts and safety glass, for instance. This led to mandatory national standards and the establishment of the US National Highway Traffic Safety Administration (NHTSA).
Tough to test
Such regulators traditionally pronounce products fit for duty or send them back to the drawing board. That’s easier for machines in which a limited set of starting conditions produces clear, testable outcomes. But AI systems, which let a machine make its own decisions, are far harder to test …