Researchers developing autonomous robot surveillance

Cornell University researchers plan to develop a robot surveillance system that would involve robots sharing “information as they move around, and if necessary, interpret what they see. This would allow the robots to conduct surveillance as a single entity with many eyes.” All of this, allegedly, would be done to “protect you from danger.”

According to the robot surveillance project paper “Convolutional-Features Analysis and Control for Mobile Visual Scene Perception,” the researchers want to develop a surveillance method that would go further than any surveillance system to date, as it would “operate autonomously and robustly under unknown, and possibly disconnected, topologies.”

The researchers already have experience in matching and combining images from several surveillance cameras of the same area, as well as identifying and tracking objects and people from place to place. The university's press release states, “The work will require groundbreaking research because most prior work in the field has focused on analyzing images from just a single camera as it moves around. The new system will fuse information from fixed cameras, mobile observers and outside sources.”

A summary of how they intend to pull this off states:

The proposed research will develop new deep-learning Bayesian video-processing, estimation, and planning algorithms with the following capabilities: (i) extract low- to high-level mission-relevant data from video with little or no prior knowledge of the relevant states or the scene; (ii) fuse spatio-temporal video data obtained with different viewpoints and changes in appearance, scale, illumination, and focus; (iii) extract and share compact models and classifications autonomously from video with few manually labeled data; (iv) operate autonomously and robustly under unknown, and possibly disconnected, topologies.
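Capability (ii), fusing detections of the same scene from cameras with different viewpoints, can be illustrated with a toy sketch. Everything below, including the camera poses, the greedy clustering rule, and the one-metre merge radius, is an invented simplification for illustration, not the project's actual algorithm.

```python
from dataclasses import dataclass
from math import cos, sin, hypot

@dataclass
class Camera:
    x: float        # camera position in the shared world frame
    y: float
    heading: float  # rotation of the camera frame w.r.t. the world, radians

    def to_world(self, dx: float, dy: float) -> tuple[float, float]:
        """Transform a detection (dx, dy) from camera frame to world frame."""
        wx = self.x + dx * cos(self.heading) - dy * sin(self.heading)
        wy = self.y + dx * sin(self.heading) + dy * cos(self.heading)
        return (wx, wy)

def fuse(points: list[tuple[float, float]], radius: float = 1.0):
    """Greedily merge detections within `radius` into single fused objects."""
    fused: list[list[float]] = []  # each entry is [x, y, count]
    for px, py in points:
        for f in fused:
            if hypot(px - f[0], py - f[1]) <= radius:
                # fold the new sighting into the cluster's running average
                f[0] = (f[0] * f[2] + px) / (f[2] + 1)
                f[1] = (f[1] * f[2] + py) / (f[2] + 1)
                f[2] += 1
                break
        else:
            fused.append([px, py, 1])
    return fused

cam_a = Camera(0.0, 0.0, 0.0)
cam_b = Camera(10.0, 0.0, 3.14159)  # facing back toward cam_a
# Both cameras see roughly the same person near world point (5, 0).
sightings = [cam_a.to_world(5.0, 0.1), cam_b.to_world(5.0, -0.1)]
print(len(fuse(sightings)))  # the two sightings fuse into one object
```

A real system would work in three dimensions, model calibration error, and use the Bayesian estimation the proposal mentions rather than a fixed merge radius; the sketch only shows why a common world frame lets separate cameras report one object instead of two.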

The researchers suggested that “mobile observers might include autonomous aircraft and ground vehicles and perhaps humanoid robots wandering through a crowd. They will send their images to a central control unit, which might also have access to other cameras looking at the region of interest, as well as access to the internet for help in labeling what it sees. What make of car is that? How do you open this container? Identify this person.”
By knowing the spatial layout and context of a scene, robot observers “can detect suspicious actors and activities that might otherwise go unnoticed. For example, a person running may be a common occurrence on a college campus, while it may require further scrutiny in a secured area.”
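The running example amounts to context-dependent anomaly scoring: the same activity is scored against a per-zone baseline. The sketch below is a minimal, hypothetical version of that idea; the zone names, rates, and threshold are all invented, and the researchers' actual deep-learning approach is far more sophisticated.

```python
# Baseline frequency of each activity per zone (assumed prior observations).
BASELINES = {
    "campus_quad": {"walking": 0.70, "running": 0.25, "loitering": 0.05},
    "secured_area": {"walking": 0.90, "running": 0.02, "loitering": 0.08},
}

def suspicion_score(zone: str, activity: str, floor: float = 1e-3) -> float:
    """Return a simple surprise score: higher means rarer in this zone."""
    rate = BASELINES.get(zone, {}).get(activity, floor)
    return 1.0 - max(rate, floor)

def needs_scrutiny(zone: str, activity: str, threshold: float = 0.9) -> bool:
    """Flag an activity whose surprise score exceeds the threshold."""
    return suspicion_score(zone, activity) >= threshold

# Running is common on the quad but rare in the secured area.
print(needs_scrutiny("campus_quad", "running"))   # False
print(needs_scrutiny("secured_area", "running"))  # True
```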

The robot squad will sort through redundant and task-irrelevant data, take the appearance of a person, vehicle or object and the type of action into account for context, extract “scene features at a level of detail suited to the accuracy required” and “operate in real time.”
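The filtering step described above can be sketched in miniature: keep only detections of task-relevant classes, and forward a tracked object again only when its action changes. The class names, track structure, and relevance set here are invented for illustration; they stand in for whatever learned representations the actual system would use.

```python
# Classes assumed relevant to the current mission (a made-up example set).
RELEVANT = {"person", "vehicle"}

def filter_detections(detections):
    """Keep task-relevant classes; drop repeats of an unchanged track."""
    seen = {}   # track id -> last reported action
    kept = []
    for det in detections:
        if det["cls"] not in RELEVANT:
            continue  # task-irrelevant (e.g. "bird")
        if seen.get(det["id"]) == det["action"]:
            continue  # redundant: same track, same action as last report
        seen[det["id"]] = det["action"]
        kept.append(det)
    return kept

stream = [
    {"id": 1, "cls": "person", "action": "walking"},
    {"id": 1, "cls": "person", "action": "walking"},  # redundant repeat
    {"id": 2, "cls": "bird", "action": "flying"},     # irrelevant class
    {"id": 1, "cls": "person", "action": "running"},  # state change: keep
]
print(len(filter_detections(stream)))  # 2
```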

The unified framework being developed in this project will enable robots “to access and rapidly process large amounts of data and videos, such as datasets available on the web, while also extracting mission-relevant information from local video frames,” the researchers say.

The point of all this is to help the U.S. Navy and Marine Corps in the future when robots can autonomously provide aid during operations in “unstructured and potentially hazardous conditions.” Examples of missions that could benefit from this new robot surveillance include “intelligence gathering, surveillance, reconnaissance, security monitoring and situation awareness.”

Once the researchers have developed the system, it will first be tested on the Cornell University campus “using research robots to ‘surveil’ crowded areas while drawing on an overview from existing webcams.” The press release added, “This work might lead to incorporating the new technology into campus security.”

Source: http://www.networkworld.com