Process helps technology firms develop safer AI, machine learning

Safety experts have developed the first standardized procedure to help assure the safety of robots, drones, self-driving cars, and any other products that use machine learning.
By Jean-Pierre Joosting


A team of computer scientists at the University of York in the UK has developed a ground-breaking process to help make machine learning (ML) and AI for autonomous technologies safe.

The methodology helps engineers build a safety case that explicitly and systematically establishes confidence in the ML long before the technology ends up in the hands of everyday users.

As robots, delivery drones, smart factories and driverless cars become an increasing part of our lives, current safety regulations for autonomous technologies present a grey area. Global guidelines for autonomous systems are not as stringent as those for other high-risk technologies, and current standards often lack detail, meaning new technologies that use AI and ML to improve our lives are potentially unsafe when they go to market.

Developed by the Assuring Autonomy International Programme (AAIP) at the University of York, this new guidance is called the Assurance of Machine Learning for use in Autonomous Systems (AMLAS). The AAIP worked with industry experts to develop the process, which systematically integrates safety assurance into the development of ML components.
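AMLAS itself is a documented assurance process rather than software, but the core idea, tying each stage of ML development to explicit safety claims backed by evidence, can be sketched in code. The Python sketch below is purely illustrative: the stage names, claims, and data structures are hypothetical assumptions for this example and are not drawn from the AMLAS guidance.

```python
# Illustrative sketch only: models the general idea of attaching explicit
# safety claims and supporting evidence to each stage of an ML component's
# development. Stage names and claims are hypothetical, not from AMLAS.

from dataclasses import dataclass, field


@dataclass
class AssuranceArtefact:
    """A safety claim paired with the evidence offered to support it."""
    claim: str
    evidence: str
    satisfied: bool = False


@dataclass
class LifecycleStage:
    """One stage of ML development and its associated assurance artefacts."""
    name: str
    artefacts: list[AssuranceArtefact] = field(default_factory=list)

    def is_assured(self) -> bool:
        # A stage is assured only when every claim has been satisfied.
        return all(a.satisfied for a in self.artefacts)


def build_safety_case(stages: list[LifecycleStage]) -> dict[str, bool]:
    """Report, per stage, whether all safety claims are backed by evidence."""
    return {s.name: s.is_assured() for s in stages}


# Example: a drone's obstacle-detection model, with made-up stages and claims.
stages = [
    LifecycleStage("data management", [
        AssuranceArtefact("training data covers low-light scenes",
                          "dataset coverage audit", satisfied=True),
    ]),
    LifecycleStage("model verification", [
        AssuranceArtefact("false-negative rate below agreed threshold",
                          "held-out test report", satisfied=False),
    ]),
]

for stage, ok in build_safety_case(stages).items():
    print(f"{stage}: {'assured' if ok else 'open claims remain'}")
```

The point of a structure like this is that gaps in the safety case are visible before deployment: any stage with unsatisfied claims is flagged, rather than discovered after the product reaches users.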

Dr Richard Hawkins, Senior Research Fellow and one of the authors of AMLAS, said: “The current approach to assuring safety in autonomous technologies is haphazard, with very little guidance or set standards in place. Sectors everywhere struggle to develop new guidelines fast enough to ensure that robotics and autonomous systems are safe for people to use.

“If the rush to market is the most important consideration when developing a new product, it will only be a matter of time before an unsafe piece of technology causes a serious accident.”
