Three Key Principles to Successfully Deploy AI for Public Safety
By Jack Williams
Thursday, September 23, 2021
The deployment of artificial intelligence (AI) is becoming increasingly ubiquitous, and investment shows no sign of abating; it is projected to grow from $50.1 billion worldwide in 2020 to more than $110 billion annually by 2024. The promise of machine intelligence is understandably tantalizing for every industry, as it stands to unlock a range of impacts that will transform the way people everywhere live and work.

That is not to say the road to AI adoption is without challenges, however. One hurdle is commonly referred to as the “black box” problem: the reasoning behind the conclusions AI reaches is not always fully apparent to humans. Yet full transparency regarding the decision-making process is crucial if we are to one day trust AI with the most critical applications.

Public safety is certainly one such critical application, as lives depend daily on quick, accurate choices by first responders. There are many promising applications of AI in public safety, from resource deployment to monitoring the mental health of public-safety staff. Whenever public-safety agencies consider using AI to improve workflows and outcomes, they should first assess three key principles that will help guide the implementation.

The first principle addresses the “black box” problem directly, because trust in AI is fundamental to its success. The end result must always be explainable and understandable; instead of a black box, AI must be a glass box. In public safety specifically, transparency is particularly important in order to mitigate potential issues of fairness. It is paramount that public-safety AI treat all people equally, so an explicit, concerted effort must be made to eliminate bias from AI models.

The second key principle for successfully deploying AI is that it must operate reliably even when circumstances change. This is especially true in public safety, where emergency circumstances are by definition unpredictable, but it also applies to nearly every industry as each competitive landscape evolves. In potentially life-threatening situations, first responders must have absolute confidence that AI will operate reliably even though the details of each circumstance are unique.

The last key principle for successful AI deployment concerns the sanctity of the underlying data: chiefly protecting it from cyberattacks, but also ensuring the privacy of any personal user data. The importance of sound cybersecurity measures has grown in step with the exponential increase in cyberattacks around the world, and the decisions an AI reaches cannot be trusted if the data underpinning them is not secure.

For some applications of AI, there is a long road ahead before each of these principles is met, but AI does not have to be an all-or-nothing proposition. Right now in public safety, there are “assistive AI” solutions that focus on helping humans do their jobs better instead of replacing them. Assistive AI is embedded within an operational system such as computer-aided dispatch (CAD) and focuses on augmenting human judgment and intuition in real time. It acts as a second set of eyes, helping first responders make better, more informed decisions by surfacing information that is already available. In this way, humans remain an integral part of the decision-making process, so concerns about transparency, fairness and integrity are always addressed.

For example, public-safety answering point (PSAP) operators fielding dozens of calls during a major traffic accident might not catch that one witness described a truck involved in the crash as a “tanker truck.” That slight nuance could indicate the vehicle was carrying explosive or otherwise hazardous liquids, information that would be vital for first responders arriving on scene, especially if the incident sparked a fire. Assistive AI would immediately highlight this critical detail to PSAP staff, who could pass it along so arriving first responders could take the needed precautions.
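
Conceptually, this kind of flag can be raised with even very simple scanning of incoming call notes. The sketch below is only an illustration of that idea, not Hexagon's implementation; the keyword list, function names and alert format are assumptions made for this example.

# Minimal sketch of how an assistive flag over call notes might work.
# Purely illustrative: the keyword list, names and alert format are
# assumptions for this example, not any vendor's actual implementation.
from dataclasses import dataclass

HAZARD_TERMS = {
    "tanker truck": "possible flammable or hazardous cargo",
    "propane": "flammable gas",
    "fuel spill": "flammable liquid on the roadway",
}

@dataclass
class Alert:
    incident_id: str
    term: str
    reason: str
    excerpt: str

def flag_hazards(incident_id, call_notes):
    """Scan free-text call notes for terms a busy operator might miss."""
    alerts = []
    for note in call_notes:
        lowered = note.lower()
        for term, reason in HAZARD_TERMS.items():
            if term in lowered:
                # Keep a short excerpt so the operator can verify it in context.
                start = max(lowered.index(term) - 30, 0)
                alerts.append(Alert(incident_id, term, reason, note[start:start + 80]))
    return alerts

notes = [
    "Caller reports a multi-vehicle crash on the interstate southbound.",
    "Second caller says a tanker truck jackknifed and is leaking.",
]
for alert in flag_hazards("INC-0423", notes):
    print(f"[{alert.incident_id}] {alert.term}: {alert.reason} | ...{alert.excerpt}")

A production system would draw on live CAD data and far richer language understanding, but even this toy version shows the design intent: surface the detail to a human operator rather than act on it automatically.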

AI is poised to be the most disruptive innovation of our generation. To ensure its continued success, adopters must practice thoughtful implementation that delivers reliable outcomes, and assistive AI is one way to do so. The public-safety agencies that take advantage of the benefits AI provides will be able to maximize their life-saving work.

Jack Williams is director of industry and portfolio marketing in Hexagon’s Safety, Infrastructure and Geospatial division.



 
 