Monday September 05, 2022

CPSC Report Outlines Steps for AI/ML Safety Reviews

A four-step risk assessment process for artificial intelligence (AI) and machine learning (ML) is the core of an August 24 report by CPSC's chief technologist. The report also urges a three-part process for understanding AI/ML applications in products, and it (bit.ly/3AZrBtL) calls on commissioners to consider these steps in the ongoing review of CPSC's FY2023 operating plan.

For risk assessment, CPSC would:

  • "Screen consumer products and identify the existence of AI/ML technologies."

  • "Assess capabilities and determine the implications of these technologies."

  • "Analyze contributing factors that AI/ML have to discern if hazardous."

  • "Monitor/Measure conditions to determine if/when AI/ML evolves beyond safe parameters."
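The four steps above amount to a review pipeline. As a rough illustration only, they could be sketched as follows; the `Product` record, its fields, and the decision strings are hypothetical and do not appear in the CPSC report:

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    """Illustrative consumer-product record (hypothetical fields)."""
    name: str
    uses_ai_ml: bool
    capabilities: list = field(default_factory=list)
    hazard_factors: list = field(default_factory=list)
    within_safe_parameters: bool = True

def review(product: Product) -> str:
    # Step 1: screen the product and identify the existence of AI/ML.
    if not product.uses_ai_ml:
        return "no AI/ML: standard review"
    # Step 2: assess capabilities and determine their implications
    # (here, every declared capability is simply tracked as an implication).
    implications = list(product.capabilities)
    # Step 3: analyze contributing factors to discern if hazardous.
    hazardous = bool(product.hazard_factors)
    # Step 4: monitor/measure conditions; flag if beyond safe parameters.
    if hazardous or not product.within_safe_parameters:
        return "flag for safety action"
    return f"monitor ({len(implications)} implication(s) tracked)"
```

In practice each step would involve agency judgment rather than boolean fields, but the sketch shows the gating order the report describes: screen first, then assess, analyze, and monitor.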

To understand AI/ML in products, CPSC would look at:

  • "Functions/features…to discern if AI/ML is contributing to unreasonable hazards that are creating the conditions to cause a safety concern."

  • "The system in which the AI/ML capabilities exist…to determine whether there are contributing factors which create product safety concerns."

  • "Facts/data that are ingested by the system, as well as the interpretation of the outputs, and the environmental considerations…to determine if the data veracity, unforeseen use, or unintended consequences arise as a result of AI/ML technologies within the product."
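The three lenses above work as a checklist: a concern under any one of them flags the product. A minimal sketch, assuming a yes/no answer per lens (the lens labels and `examine` helper are invented for illustration, not from the report):

```python
# Hypothetical checklist for the three AI/ML examination lenses;
# the labels paraphrase the report and the structure is illustrative.
LENSES = (
    "functions/features contributing to unreasonable hazards",
    "the surrounding system and its contributing factors",
    "ingested data, output interpretation, and environment",
)

def examine(answers):
    """Pair each lens with a yes/no concern flag.

    Returns the per-lens findings and whether any lens raised a concern.
    """
    findings = dict(zip(LENSES, answers))
    return findings, any(answers)

# Example: a concern under the second lens flags the product.
findings, concern = examine([False, True, False])
```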


This report is a follow-up to one last year (PSL, 5/31/21) that addressed plans more generally, including the need to collaborate with other agencies and voluntary standards groups. The new report reiterates those needs. NIST – which set up an AI advisory panel last fall (PSL, 9/13/21) – is a primary partner and resource for CPSC.