One of the biggest obstacles in security surveillance has always been that the volume of collection far exceeds the manpower available for analysis. A camera may catch every move in an airport, but is the person watching that camera going to be able to sift through all that footage to pick out suspicious activity? Probably not. In response, technologies that can do some of the analysis for us are being developed. Much of this technology remains, for the moment, in laboratories. But Charles Cohen, the boss of Cybernet Systems, a firm based in Ann Arbor, Michigan, which is working for America’s Army Research Laboratory, says behaviour-recognition systems are getting good, and are already deployed at some security checkpoints.
Human gaits, for example, can provide a lot of information about people’s intentions. Correlating these movements with consequences, such as the throwing of a bomb, allows researchers to develop computer models that link posture and consequence reasonably reliably. The system can, for example, pick out a person in a crowd who is carrying a concealed package with the weight of a large explosives belt. According to Mr Morelli, the army plans to deploy the system at military checkpoints, on vehicles and at embassy perimeters.
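The idea of a model that links posture to consequence can be sketched very simply. The following is an illustrative toy only: a nearest-centroid classifier over hand-crafted gait features, standing in for the kind of posture model the article describes. The feature names, labels, and all numbers are invented assumptions, not anything from Cybernet's actual system.

```python
# Toy sketch of a posture-consequence model: classify an observed gait by
# which labelled "average gait" it sits closest to. All features and
# numbers below are hypothetical, purely for illustration.
from math import dist

# Each centroid is (stride_length_m, torso_lean_deg, arm_swing_deg),
# imagined as averages learned from labelled footage.
CENTROIDS = {
    "unburdened": (0.75, 2.0, 30.0),
    # shorter stride, forward lean, stiff arms when carrying a heavy load
    "carrying_heavy_load": (0.55, 8.0, 12.0),
}

def classify_gait(features):
    """Return the label whose centroid is nearest to the observed features."""
    return min(CENTROIDS, key=lambda label: dist(features, CENTROIDS[label]))

# A short stride with a pronounced forward lean lands in the "load" class:
print(classify_gait((0.50, 9.0, 10.0)))  # carrying_heavy_load
```

A real system would use learned statistical models over video-derived features rather than three hand-picked numbers, but the core step, mapping observed movement onto consequence-labelled examples, is the same.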
Some intelligent surveillance systems are able to go beyond even this. Instead of merely learning what a threat looks like, they can learn the context in which behaviour is probably threatening. That people linger in places such as bus stops, for example, is normal. Loitering in a stairwell, however, is a rarer occurrence that may warrant examination by human security staff.
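The bus-stop-versus-stairwell distinction amounts to judging the same behaviour against different norms per location. A minimal sketch, with invented threshold values standing in for norms a real system would learn from data:

```python
# Context-dependent loitering check: the same dwell time is treated
# differently depending on where it occurs. Thresholds are hypothetical.
NORMAL_DWELL_SECONDS = {
    "bus_stop": 900,   # waiting 15 minutes for a bus is unremarkable
    "stairwell": 60,   # lingering in a stairwell is rare
}

def flag_for_review(location, dwell_seconds):
    """Flag behaviour only when it exceeds the norm for its context."""
    return dwell_seconds > NORMAL_DWELL_SECONDS.get(location, 300)

print(flag_for_review("bus_stop", 600))   # False: normal waiting
print(flag_for_review("stairwell", 600))  # True: unusual for the context
```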
As object- and motion-recognition technology improves, researchers are starting to focus on facial expressions and what they can reveal. The Human Factors Division of America’s Department of Homeland Security (DHS), for example, is running what it calls Project Hostile Intent. This boasts a system that scrutinises fleeting “micro-expressions”, easily missed by human eyes. Many flash for less than a tenth of a second and involve just a small portion of the face.
Terrorists are often trained to conceal emotions; micro-expressions, however, are largely involuntary. Even better, from the researchers’ point of view, conscious attempts to suppress facial expressions actually accentuate micro-expressions. Sharla Rausch, the director of the Human Factors Division, refers to this somewhat disturbingly as “micro-facial leakage”.
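A quick back-of-the-envelope calculation shows why expressions this brief are easily missed, by cameras as well as by eyes: at common frame rates, an event lasting under a tenth of a second spans only a handful of frames. The frame rates below are generic assumptions, not details from Project Hostile Intent.

```python
# How many frames a brief facial event spans at a given frame rate.
from math import floor

def frames_captured(duration_s, fps):
    """Number of whole frames covering an event of the given duration."""
    return floor(duration_s * fps)

# An 0.08-second micro-expression:
print(frames_captured(0.08, 30))   # 2 frames at a typical CCTV rate
print(frames_captured(0.08, 120))  # 9 frames with a high-speed camera
```

This suggests why automated systems need high-frame-rate capture to analyse micro-expressions reliably.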
While all of this technology seems likely to create a safer environment in which it will be easier to spot terrorists in our midst, it raises a few questions about innocent civilians and their right to be “awkward” at security checkpoints in public places like airports. For example, anyone who is nervous about flying, or in a hurry to get through security because they are running late for a flight, may seem anxious. Profiling is already an accepted form of security screening in airports, but with this new technology it seems that anyone acting in any way suspicious could be red-flagged on a security tape. I’m a toe-walker; is this type of gait going to get me pulled out of line to be questioned by authorities? I also get very cold on planes; is the fact that I wear an oversized sweatshirt to fly, even in August, going to put me on the no-fly list?
A second concern is that expressions of emotion differ across cultures. An American expresses anxiety in a very different way from a citizen of India. Is the software going to be programmed to recognize cultural background and adjust for those differences as well?
While I’m sure that authorities will have the final say, it seems that leaving the analysis to a computer could cut down on the legwork of sifting through a plethora of surveillance footage, but at what cost to the citizens it is trying to protect? The age-old debate concerning the security of civilians is how many of our civil liberties we are willing to concede so that order can be maintained. We already allow our bags to be searched, our shoes to be scanned, and our faces to be profiled. Will we now concede the rhythm of our gait, as well as our expression of emotion, in order to feel safe?
http://www.economist.com/science/displaystory.cfm?story_id=12465303
3 comments:
That's some serious 1984 type shit, there.
Meanwhile, DHS is playing futuristic games while much more potent technologies are already available, but the so-called DHS bureaucratic "scientists" either have no idea about them or lack the brains to understand them.
See URL: www.northampsychotech.com
Thought-provoking stuff. Awesome.