Process helps technology firms develop safer AI, machine learning

Technology News
Safety experts have developed the first standardized procedure to help assure the safety of robots, drones, self-driving cars, or any other products that use machine learning.
By Jean-Pierre Joosting


A team of computer scientists at the University of York in the UK has developed a ground-breaking process to help make machine learning (ML) and AI for autonomous technologies safe.

The methodology helps engineers build a safety case that explicitly and systematically establishes confidence in the ML long before the technology ends up in the hands of everyday users.

As robots, delivery drones, smart factories and driverless cars become an increasingly routine part of our lives, current safety regulation for autonomous technologies remains a grey area. Global guidelines for autonomous systems are not as stringent as those for other high-risk technologies, and current standards often lack detail, meaning new technologies that use AI and ML to improve our lives are potentially unsafe when they go to market.

Developed by the Assuring Autonomy International Programme (AAIP) at the University of York, this new guidance is called the Assurance of Machine Learning for use in Autonomous Systems (AMLAS). The AAIP worked with industry experts to develop the process, which systematically integrates safety assurance into the development of ML components.

Dr Richard Hawkins, Senior Research Fellow and one of the authors of AMLAS, said: “The current approach to assuring safety in autonomous technologies is haphazard, with very little guidance or set standards in place. Sectors everywhere struggle to develop new guidelines fast enough to ensure that robotics and autonomous systems are safe for people to use.

“If the rush to market is the most important consideration when developing a new product, it will only be a matter of time before an unsafe piece of technology causes a serious accident.”

Machine learning in healthcare

The AMLAS methodology has already been used in several applications, including transport and healthcare. In one of its healthcare projects, AAIP is working with NHS Digital, the British Standards Institution, and Human Factors Everywhere to use AMLAS to help create resources that support manufacturers to meet the regulatory requirements for their ML healthcare tools.

Dr Ibrahim Habli, Reader at the University of York and another of the authors, said: “Although there are many standards related to digital health technology, there is no published standard addressing specific safety assurance considerations. There is little published literature supporting the adequate assurance of AI-enabled healthcare products.

“AMLAS bridges the gap between existing medical regulations, which predate AI and ML, and the proliferation of these new technologies in the domain.”
