University of Missouri Researchers Streamline Data for First Responders
Wednesday, June 29, 2016

A group of computer science researchers at the University of Missouri has developed a visual cloud computing architecture that streamlines the processing and delivery of visual data.

Visual data generated by numerous security cameras, personal mobile devices and aerial video offer useful information to first responders and law enforcement. That information can be critical for knowing where to send emergency personnel and resources, tracking suspects in man-made disasters or detecting hazardous materials.

“In disaster scenarios, the amount of visual data generated can create a bottleneck in the network,” said Prasad Calyam, assistant professor of computer science in the University of Missouri College of Engineering. “This abundance of visual data, especially high-resolution video streams, is difficult to process even under normal circumstances. In a disaster situation, the computing and networking resources needed to process it may be scarce or even unavailable. We are working to develop the most efficient way to process data and study how to quickly present visual information to first responders and law enforcement.”

The research team, including Kannappan Palaniappan and Ye Duan, associate professors in the Department of Computer Science, developed a framework for disaster incident data computation that links the system to mobile devices in a mobile cloud. Algorithms designed by the team help determine what information needs to be processed by the cloud and what information can be processed on local devices, such as laptops and smartphones. This spreads the processing over multiple devices and helps responders receive the information faster.
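The paper describes this kind of split as a placement decision: work goes to the core cloud only when uploading the data and computing remotely finishes sooner than processing it on the local device. The sketch below illustrates that trade-off with a simple completion-time heuristic; the `Task` fields, parameter names and numeric values are illustrative assumptions, not the team's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A unit of visual processing work (e.g., one video segment)."""
    input_bytes: int      # size of the raw visual data
    compute_cost: float   # relative CPU cost (operations) to process it

def place_task(task: Task, uplink_bps: float,
               local_flops: float, cloud_flops: float) -> str:
    """Choose 'edge' or 'cloud' by comparing estimated completion time.

    Cloud time = upload time + (fast) remote compute time.
    Edge time  = (slower) local compute time, with no upload needed.
    """
    upload_s = task.input_bytes * 8 / uplink_bps
    cloud_s = upload_s + task.compute_cost / cloud_flops
    edge_s = task.compute_cost / local_flops
    return "cloud" if cloud_s < edge_s else "edge"

# A large video chunk over a congested disaster-area link stays local;
# the same chunk over a fast link is worth shipping to the core cloud.
task = Task(input_bytes=50_000_000, compute_cost=1e9)
print(place_task(task, uplink_bps=1e6, local_flops=1e8, cloud_flops=1e10))  # edge
print(place_task(task, uplink_bps=1e9, local_flops=1e8, cloud_flops=1e10))  # cloud
```

Under this heuristic, the scarce-bandwidth disaster scenario Calyam describes naturally pushes computation toward laptops and smartphones, while well-connected sites offload to the cloud.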

“Often, we see many of the same images from overlapping cameras,” Palaniappan said. “Responders generally do not need to see two separate pictures but rather the distinctive parts. That mosaic stitching that we helped define happens in the periphery of the network to limit the amount of data that needs to be sent to the cloud. This is a natural way of compressing visual data without losing information. Clever algorithms help determine what types of visual processing to perform in the edge or fog of the network, and what data and computation should be done in the core cloud.”
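The deduplication idea Palaniappan describes can be sketched in a few lines: if an edge node tracks which image regions it has already forwarded, an overlapping camera view only needs to upload its distinctive parts. The tile-hash representation below is a simplification for illustration, not the team's mosaic-stitching algorithm.

```python
def distinctive_tiles(seen_frame: list, new_frame: list) -> list:
    """Return indices of tiles in new_frame not already covered by seen_frame.

    Frames are modeled as lists of tile fingerprints (e.g., hashes of
    image blocks). Overlapping cameras produce many identical tiles,
    which need not be re-sent to the core cloud.
    """
    seen = set(seen_frame)
    return [i for i, tile in enumerate(new_frame) if tile not in seen]

cam1 = ["h1", "h2", "h3", "h4"]   # tiles already uploaded from camera 1
cam2 = ["h3", "h4", "h5", "h6"]   # camera 2 overlaps camera 1 on h3, h4
print(distinctive_tiles(cam1, cam2))  # [2, 3] -> only h5 and h6 are uploaded
```

Because the duplicate regions are genuinely redundant, dropping them at the edge compresses the stream without discarding information, which is the point of doing this work in the "fog" rather than the core cloud.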

“Incident-supporting visual cloud computing utilizing software-defined networking” was published in the journal IEEE Transactions on Circuits and Systems for Video Technology in a special issue on cloud computing for mobile devices. Guna Seetharaman of the U.S. Naval Research Laboratory also contributed to the study.

Funding for the project came from a combination of ongoing grants from the National Science Foundation, Air Force Research Laboratory and the U.S. National Academies Jefferson Science Fellowship. The content is the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
