
Insight into Facial Recognition


Lately, India and several other countries in the world have been the targets of terrorist attacks – suicide plots that have destroyed innocent lives and public property. People and governments all over the world are gearing up to fight this menace and protect the innocent, and much of the answer lies in technology – biometrics in particular, which is widely regarded as one of the surest ways to ensure the protection and security of vulnerable populations and, hopefully, someday, the elimination of terrorism altogether.

From fingerprints to faceprints

Biometrics, or the measurement and analysis of biological data, is the arm of technology that could ensure a safer world, if scientists are to be believed. Biometrics refers to the technologies of identifying, recording and correlating characteristics of the human body such as fingerprints, retinas and irises, voice patterns, palm geometry and DNA. Fingerprints, one of the oldest forms of biometric analysis, were used in 14th-century China and subsequently by modern police forces, and are still an unavoidable ingredient of pulp detective fiction and TV shows.

For more than a decade now, technologists, researchers and startups all over the world have seemed convinced that the unique features of the face, processed by artificial intelligence, can be put to constructive use: computers could create more foolproof security systems. Facial structures, when translated into mathematical descriptions, could help security agencies face up to the post-9/11 threats of infiltration into public spaces by suspected criminals. Face detection and face-tagging are already familiar to us as the wow-features of our digital cameras, photo applications and sharing sites such as iPhoto, Flickr and Picasa, and the not-so-foolproof VeriFace facial recognition login of Lenovo's notebooks.

The first and most well-known large-scale application of facial recognition software came at the January 2001 Super Bowl in Tampa, Florida, where video security cameras recorded the facial features of thousands of fans entering the stadium and compared them with mug shots in the Tampa Police database. The cameras ran a face-recognition application called FaceIt, created by Visionics Corporation of New Jersey. Famously, the system produced several 'false positives' (matches with people who were not actually in the database) and led to the arrest of not a single wanted criminal. Civil liberty activists raised the alarm about Big Brother tactics and infringements of privacy. Since then, there have been several advances in facial recognition technologies, and in October 2008, Interpol proposed an automated face-recognition system for international borders. Again, voices were raised against the proposal, as it could lead to an infringement of privacy, invite abuse by officials, and, like most facial recognition systems before it, fall prey to inaccuracies. Yet these systems are highly attractive to the powers-that-be: they can be used to prevent voter fraud, thwart the misuse of ATMs, and, basically, provide an easy means of keeping track of large groups of people – the masses. But this is only possible if the error margins decrease. Perhaps the reason for the high error frequency lies in how the systems work.

How facial recognition systems work

Some of us never forget a face. Humans have an intrinsic ability to remember hundreds of faces and, more often than not, to connect each of these faces with a name. The challenge for a face recognition system is to mimic this ability with at least an equivalent measure of accuracy, if not better, and with minimal human intervention.

Facial recognition technologies are usually used for verification (confirming that a person is who he or she claims to be) and identification (matching unknown faces taken from surveillance footage with images in a database, e.g. criminal records). The techniques used for facial recognition can be geometric (feature-based) or photometric (template-based). Traditionally, four basic methods have been employed by facial recognition systems:

Eigenfaces

The famous mathematician David Hilbert was the first to use the term eigen (meaning 'own' or 'peculiar to') for a non-zero vector which, when a particular linear transformation is applied to it, may change in length but not in direction. The Eigenfaces technology was patented at MIT, and it makes use of 2D greyscale images that represent the distinguishing features of a facial image. Values are assigned to these features and an average set is prepared. Using statistical calculations, the eigenvectors of a covariance matrix are computed, each eigenvector representing an 'eigenface'. Each new face the system encounters is then evaluated on the basis of how it differs from the 'mean face'. When a face is 'enrolled', the subject's eigenface is mapped to a series of numbers – understandably, it is all about numbers in the end, not really visuals – and these are compared to a 'template' in the database. For verification, a subject's live template is compared against his or her enrolled template; for identification, the comparison set grows to the whole database, but the process stays the same. The most significant drawbacks of eigenfaces lie in the preconditions: for optimal results, the images must always be frontal and full-face, and the surroundings well lit.
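The enroll-project-compare pipeline is easiest to see in code. Below is a minimal eigenfaces sketch in Python with NumPy; the random stand-in gallery, the choice of ten components and the distance threshold are illustrative assumptions, not parameters from any production system.

```python
# A minimal eigenfaces sketch. The gallery of random vectors stands in
# for enrolled, aligned, well-lit frontal face images.
import numpy as np

IMG_PIXELS = 64 * 64                      # each greyscale face, flattened
rng = np.random.default_rng(0)
gallery = rng.random((20, IMG_PIXELS))    # stand-in for 20 enrolled faces

mean_face = gallery.mean(axis=0)          # the 'mean face'
centered = gallery - mean_face            # deviation of each face from it

# The eigenvectors of the covariance matrix are the 'eigenfaces';
# SVD of the centered data gives them without forming the matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                      # keep the 10 strongest components

def enroll(face):
    """Map a face to a short series of numbers - its template."""
    return eigenfaces @ (face - mean_face)

templates = np.array([enroll(f) for f in gallery])

def verify(live_face, claimed_id, threshold=2.0):
    """1:1 - compare the live template with one enrolled template."""
    return np.linalg.norm(enroll(live_face) - templates[claimed_id]) < threshold

def identify(live_face):
    """1:N - find the closest enrolled template in the whole database."""
    dists = np.linalg.norm(templates - enroll(live_face), axis=1)
    return int(np.argmin(dists)), float(dists.min())
```

The verify and identify functions make the 1:1 versus 1:N distinction concrete: the arithmetic is identical; only the size of the comparison set changes.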

Feature analysis

Local feature analysis, the technique Visionics used in Tampa, is based on dividing the face into feature building blocks while simultaneously incorporating the relative position of each feature.

The interesting aspect of this software is that it even anticipates small movements of a feature and the simultaneous shifting of adjacent features that inevitably occurs. Unlike Eigenfaces, it can accept head angles of up to 25 degrees (horizontal) and 15 degrees (vertical). Each human face has about 80 'nodal points' – distinguishing peaks and valleys – and the software measures aspects such as the distance between the eyes, the width of the nose, the shape of the cheekbones, the depth of the eye sockets and the length of the jaw line to create a 'faceprint'. However, poor lighting and the angle at which the face is tilted towards the camera can still adversely affect the results.
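A toy faceprint along these lines might look as follows; the landmark names and the normalization by inter-eye distance are assumptions made for illustration, since commercial systems measure many more nodal points.

```python
# A toy 'faceprint' built from a handful of nodal-point measurements.
import math

def faceprint(landmarks):
    """landmarks: dict mapping point names to (x, y) pixel coordinates."""
    def dist(a, b):
        (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
        return math.hypot(x2 - x1, y2 - y1)

    eye_dist = dist("left_eye", "right_eye")
    # Dividing by the inter-eye distance makes the print scale-invariant,
    # so the same face shot from nearer or farther away still matches.
    return [
        dist("nose_left", "nose_right") / eye_dist,    # width of the nose
        dist("left_cheek", "right_cheek") / eye_dist,  # cheekbone spread
        dist("nose_tip", "chin") / eye_dist,           # length of the jaw line
    ]

def distance(print_a, print_b):
    """Smaller means more similar; 0.0 means identical measurements."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(print_a, print_b)))
```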

Neural network mapping

Artificial neural networks are programs that imitate the interconnected, adaptive behavior of biological neurons, and they are often combined with Eigenface systems to produce better facial recognition results. An algorithm determines the similarity between a live face and an enrolled or reference face, and the system automatically re-adjusts the weights it assigns to individual features whenever a false match occurs.
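A minimal sketch of that feedback loop, assuming a hand-picked learning rate and a small faceprint vector (it is not any vendor's actual update rule):

```python
# A weighted similarity score whose per-feature weights are nudged down
# whenever they contribute to a false match.
import numpy as np

weights = np.ones(3)          # one weight per faceprint feature

def similarity(a, b):
    """Weighted distance between two feature vectors (lower = more alike)."""
    return float(np.sqrt(weights @ (np.asarray(a) - np.asarray(b)) ** 2))

def penalize_false_match(a, b, lr=0.1):
    """After a false match, shrink the weights of the features that looked
    deceptively similar, so they count for less next time."""
    global weights
    diff = (np.asarray(a) - np.asarray(b)) ** 2
    misleading = 1.0 - diff / (diff.max() + 1e-9)   # high where diff is small
    weights = np.clip(weights - lr * misleading, 0.1, None)
```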

Automatic face processing

A simplistic but sometimes quicker technology, automatic face processing (AFP) uses the distances between prominent facial features such as the eyes, the end of the nose and the corners of the mouth to create its template. It is, however, not as discriminating as the systems above.

3D Facial Recognition

In the quest for greater accuracy, the trend of the last decade has been towards facial recognition software that uses a 3D model: the image of a person's face is captured in 3D, allowing the system to note the curves of the eye sockets, for example, or the contours of the chin or forehead. Even a face in profile would suffice, because the system uses depth and an axis of measurement, which give it enough information to construct a full face. The 3D system usually proceeds through the following steps (a code sketch of the whole pipeline follows the matching step):

Detection

Capturing a face either by scanning a photograph or photographing a person’s face in real time.

Position

Determining the location, size and angle of the head.

Measurement

Assigning measurements to each curve of the face to make a template with specific focus on the outside of the eye, the inside of the eye and the tip of the nose.

Representation

Converting the template into a code – a numerical representation of the face, and,

Matching

Comparing the received data with faces in the existing database. If the 3D image is to be compared with another 3D image, it needs no alteration. Typically, however, stored photos are in 2D, and in that case the 3D image needs a few changes. This is tricky, and it is one of the biggest challenges in the field today.
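A deliberately tiny, runnable walk-through of these steps might look like this; the three-landmark faces, the tolerance and the database are fabricated for illustration, and real systems work on dense 3D meshes rather than a handful of points.

```python
# A minimal, concrete walk-through of the 3D pipeline using three
# (x, y, z) landmark points per face.
import numpy as np

def position(points):
    """Position: center the point cloud so head location drops out.
    (A real system would also correct rotation and scale.)"""
    pts = np.asarray(points, dtype=float)
    return pts - pts.mean(axis=0)

def measure(pts):
    """Measurement: pairwise 3D distances between the landmarks -
    depth is what lets even a profile view yield a full template."""
    n = len(pts)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i in range(n) for j in range(i + 1, n)])

def represent(template):
    """Representation: round to a fixed-precision code for storage."""
    return tuple(np.round(template, 2))

def match(code, database, tol=0.05):
    """Matching: nearest stored code within a tolerance, else None."""
    best, best_d = None, tol
    for name, stored in database.items():
        d = float(np.abs(np.array(code) - np.array(stored)).max())
        if d < best_d:
            best, best_d = name, d
    return best

# Detection stands in for a camera here: eye corners plus nose tip.
alice = [(0, 0, 0), (6.4, 0, 0), (3.2, -4.0, 2.5)]
db = {"alice": represent(measure(position(alice)))}
probe = [(10, 5, 1), (16.4, 5, 1), (13.2, 1.0, 3.5)]   # same face, shifted
print(match(represent(measure(position(probe))), db))  # -> 'alice'
```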

In the Face Recognition Grand Challenge, held in 2006 at Arlington, Virginia, the new algorithms proved to be ten times more precise than those of 2002. The advances even allowed systems to accurately differentiate between identical twins.

Skin texture

Skin biometrics is another supportive technology, developed by companies such as Identix (which merged with identity solutions provider Viisage in 2006), that uses the uniqueness of skin texture for more precise results. The software takes a picture of a patch of skin – a 'skinprint' – which an algorithm breaks into smaller blocks and converts into a mathematical space, picking out the lines, pores and patterns that constitute the skin. Systems such as FaceIt have been developed to combine eigenvectors, local feature analysis and surface texture analysis to optimize results. However, long hair, dim lighting or sunglasses can hinder the system's performance.
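One published way to turn a skin patch into comparable numbers is the local binary pattern (LBP) histogram; whether commercial surface texture analysis uses LBP specifically is an assumption here, but the family of technique – describing each pixel by which of its neighbours are brighter – is the same.

```python
# Local binary patterns: each interior pixel becomes an 8-bit code
# recording which of its eight neighbours are at least as bright.
import numpy as np

def lbp_histogram(patch):
    """patch: 2D greyscale array. Returns a 256-bin texture histogram."""
    p = np.asarray(patch, dtype=float)
    centre = p[1:-1, 1:-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    # The eight neighbours, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = p[1 + dy : p.shape[0] - 1 + dy,
                      1 + dx : p.shape[1] - 1 + dx]
        code |= (neighbour >= centre).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()    # normalized so patch size drops out

# Two patches of the same skin should yield very similar histograms;
# comparing histograms (e.g. chi-squared distance) gives the match score.
```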

The commitment of resources by governments and venture capitalists, and the labor of scientists and research students, indicate that face recognition is developing in unanticipated directions and that face biometrics has already become an integral part of our daily lives.

Tomorrow is today

It is a distinctive feature of our times that we are living to see many science-fiction predictions come true. A lot of people were impressed by the sci-fi gadgetry in Steven Spielberg's Minority Report. If you were one of them, you'd probably be even more impressed – and pleasantly surprised – to know that something quite similar is already happening around you.

Here are a few examples to shock and awe you:

Infrared facials

An innovative new project in the UK has led to a breakthrough in eSecurity services using infrared light. St. Neots Community College in Cambridgeshire has given its support to the project and allowed the system to be installed to record attendance. Students need only walk over to a scanner and enter a PIN for their identity to be verified. The scanner takes an image using infrared light, invisible to the human eye, reinforcing the belief that face recognition is the most non-intrusive and time-saving biometric technology today. Poor ambient lighting does not hamper the system, because the infrared flash lights up the face sufficiently without the subject perceiving it. The system measures features such as the distance between the pupils of the eyes and, in less than two seconds, makes a match with the templates in the existing database.
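The flow is simple enough to sketch: the PIN selects a single enrolled template, and the infrared capture merely confirms it. The template format, distance threshold and enrolled values below are invented for illustration.

```python
# PIN-plus-face attendance: the PIN claims an identity, the face verifies it.
import numpy as np

enrolled = {                                  # PIN -> enrolled face template
    "4921": np.array([62.0, 38.5, 51.2]),     # e.g. pupil distance and others
}

def check_in(pin, live_template, threshold=3.0):
    """Verify the live infrared capture against the template the PIN
    claims, and record attendance only on a match."""
    stored = enrolled.get(pin)
    if stored is None:
        return False
    distance = float(np.linalg.norm(stored - np.asarray(live_template)))
    return distance < threshold

print(check_in("4921", [61.5, 38.0, 51.0]))   # -> True
```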

Korean forays

San Francisco-based 3VR Security Inc. recently released the results of its facial recognition technology tests, conducted in collaboration with Korea's SK Networks. The report claims 85 to 92 per cent accuracy, with very few false positives, in highly uncontrolled and densely populated environments – namely, the subway stations of Seoul. 3VR's software is geared towards overcoming the challenges of video-based biometrics, specifically the task of recognizing and tracking faces in crowds of hundreds in real time, in spite of poor lighting and imperfect camera angles. If 3VR's re-modeled still-image face recognition algorithms find their way into CCTV cameras all over the world, who knows – in a few years, you might just be picked off the street for that age-old parking ticket you didn't bother to pay.

Smarter licenses

The transport authorities in Queensland, Australia have decided to replace almost three million drivers' licenses with 'smartcards' using facial recognition technology provided by Unisys Australia. The project is huge, involving several partner companies such as Leigh Mardon Australia, which will design the customer interface devices, and Daon, which will provide the biometric enrolment technology and middleware. Starting in April 2009, and over the following five years, Unisys will help prevent identity theft by embedding a biometric facial image into the smartcard chip. Typically, if someone's photo does not match the one on the card, or matches the picture on another card, an alarm is set off, and it is then up to humans to commence an investigation.
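The two checks implied here – does the live face match this card, and does it also match a template already issued on another card – might be sketched like this, with an invented similarity measure and threshold:

```python
# Smartcard screening: a mismatch with this card, or a match with some
# other card on record, raises an alarm for a human officer.
import numpy as np

def similar(a, b, threshold=2.0):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b))) < threshold

def screen(live, card_template, card_id, issued):
    """issued: dict of card_id -> template for all cards on record.
    Returns a list of alarms for a human to investigate."""
    alarms = []
    if not similar(live, card_template):
        alarms.append("live face does not match this card")
    for other_id, other in issued.items():
        if other_id != card_id and similar(live, other):
            alarms.append(f"face also matches card {other_id}")
    return alarms
```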

Faucet facets

Home, a company based in China, has come out with the ultimate luxury device – a super-intelligent tap! The Smartfaucet activates itself once it detects motion and uses face recognition technology to identify exactly who has stepped into the tub. The device then sets the temperature of the bath according to that person's saved preferences. It even has an LCD touch screen that can connect you to the World Wide Web.

Face-savvy facades

Yahoo! Japan plans to install billboards that scan passers-by, identify their gender and age group using facial recognition technology, and, in true Minority Report style, flash advertisements based on this assessment. Yahoo! has teamed up with Comel, a Tokyo-based company, for the hardware, and NEC Soft (Japan) for the facial analysis technology. The boards will also display weather updates and news content apart from advertisements. Yahoo! plans to set up about 500 such billboards in the malls and railway stations of Fukuoka to begin with, and later expand to Tokyo and Osaka. And you don't need to wake up when September ends for this one – it's already happening.
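The selection logic itself is presumably a simple mapping from predicted demographic to advertisement; the categories and catalogue below are invented, and NEC Soft's actual classifier is proprietary.

```python
# A toy version of the billboard logic: classify the passer-by's
# demographic, then choose content for it.
ADS = {
    ("female", "20s"): "cosmetics spot",
    ("male", "40s"): "business hotel spot",
}
DEFAULT = "weather and news ticker"

def choose_content(gender: str, age_group: str) -> str:
    """Pick an ad for the detected demographic, or fall back to news."""
    return ADS.get((gender, age_group), DEFAULT)

print(choose_content("male", "40s"))    # -> 'business hotel spot'
```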

Lip Reads

Ever since the London bombings and the hours upon hours of CCTV footage that had to be examined, law enforcers in the U.K. have decided to take the bull of terrorism by the horns. Earlier this year, groundbreaking research began in an attempt to rectify the aberrations of suspect identification on CCTV records by adding lip movement and speech pattern recognition to existing face recognition software. Face recognition experts OmniPerception and BAE Systems, a defense and security company, have tied up to try and hone the system to such a degree that a subject's face could be accurately recognized from distances as great as a hundred meters.

The entire world, it seems, is working to create that perfect face recognition solution. But what lies in store for face-based software in the near future? In order to remedy the inaccuracies of the systems developed so far, scientists all over the world have realized that the solution lies not just in face recognition itself but in other branches of science entirely – among them neurophysiology, ornithology, entomology and the study of the human visual system.

How Humans See

The whole point of artificial intelligence is to try and create computing systems that operate more and more like humans and don’t appear artificial at all. New research into the perception patterns of human beings has revealed certain strange and useful facts that are sure to have a significant impact on face recognition technologies.

Cues from the face-blind: Recent research into face blindness in humans has shown that the presence of emotional information in a face increases neural activity in the area of the brain called the FFA (fusiform face area).

This suggests that body- and face-sensitive perception depends on emotional content and that neutral faces are more difficult to differentiate between; the FFA displays lower activation for neutral faces. Perhaps, then, our systems need to identify emotions in faces to make accurate assessments, just as humans do.

Remote control faces: A computer scientist in San Diego seems to have done just that. Jacob Whitehill, a computer science Ph.D. student at UC San Diego's Jacobs School of Engineering, has developed a technology for detecting facial expressions (part of an ongoing project at the school). Early last year, Whitehill successfully showed how his facial expressions could be turned into a remote control that speeds up or slows down video playback.
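The control side of that demo is easy to sketch, assuming an expression detector that outputs a smile intensity between 0 and 1 (the mapping below is invented, not Whitehill's):

```python
# Map a smile-intensity score from an expression detector onto video
# playback speed: puzzled frowns slow the lecture, smiles speed it up.
def playback_rate(smile_intensity: float) -> float:
    """0.0 (no smile) -> half speed; 1.0 (broad smile) -> double speed."""
    smile = max(0.0, min(1.0, smile_intensity))
    return 0.5 + 1.5 * smile

print(playback_rate(0.0))   # -> 0.5
print(playback_rate(1.0))   # -> 2.0
```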

Robotic Eyes

Facial expressions fall under the broad head of 'non-verbal communication'. At the Tokyo Institute of Technology, a team led by Yoichi Yamazaki has built an eye-robot – nothing but a pair of 'eyeballs' – which can convey a whole range of emotions. While the attempt is to make a robot more expressive, using eyes as we humans do, it stands to reason that if a machine can successfully emote, it can surely be made to identify human emotions as displayed on a human face before it.

Expressions in binary code

Jessica Dennis and Michael Ulrich, engineering students at the Rowan University College of Engineering, spent 2008 developing a software program capable of reading facial expressions. Using the JAFFE (Japanese Female Facial Expression) database, they arrived at four key measurements: the height and width of the mouth, and the height and width of the eyebrows. How wide the eyes were open, and the jaw line, were considered secondary measurements. The computer was then made to understand the expressions by translating these measurements into binary code. After testing hundreds of photos, the duo determined that their program had 94.2 per cent accuracy for the emotions of surprise, fear and anger, and an average of 76.5 per cent across all expressions.
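A nearest-centroid rule over those four measurements would be one simple way to do the classification; the centroid values below are invented stand-ins, not figures from the Rowan project, which trained on the JAFFE database.

```python
# Classify an expression from four normalized measurements:
# (mouth height, mouth width, eyebrow height, eyebrow width).
import numpy as np

CENTROIDS = {
    "surprise": np.array([0.9, 0.5, 0.9, 0.6]),
    "anger":    np.array([0.3, 0.4, 0.2, 0.7]),
    "fear":     np.array([0.6, 0.3, 0.8, 0.5]),
    "neutral":  np.array([0.4, 0.5, 0.5, 0.6]),
}

def classify(measurements):
    """Return the emotion whose centroid is nearest the measurements."""
    m = np.asarray(measurements, dtype=float)
    return min(CENTROIDS, key=lambda e: np.linalg.norm(CENTROIDS[e] - m))

print(classify([0.85, 0.5, 0.88, 0.6]))   # -> 'surprise'
```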

Liar, Liar

Paul Ekman, a veteran in the study of human facial expressions, has, through studying people with mental disorders, successfully identified what he calls "microexpressions" – split-second facial expressions that betray the subject's emotional state. According to him, no one can fake a microexpression, because they are inadvertent and slip out in spite of the subject's conscious attempts at repression. Ekman and his colleagues then set up the web site www.humintell.com – a facility anyone can log onto to train themselves in the art of picking out the microexpressions that can show that someone is lying.

This work has been taken forward by Mark Frank, a former student of Ekman's who presently teaches at the University at Buffalo, New York, with the help of computer scientists from the University of California. They have succeeded in automating Ekman's Facial Action Coding System (FACS), turning it into a technology called the Computer Expression Recognition Toolbox (CERT). This has tremendous potential for use in forensics and, if proved reliable, could be used not just on video footage of court trials but also in interviews, weddings and business contracts. The only point that perhaps ought to be considered here is that machines are now going to be better at reading faces than most of us humans are. What that portends for the human race, we leave you to contemplate.

Facing facts

To cut a long story short, research into facial recognition technologies is digging channels into territories that were deemed unconnected until a few years ago. The fact remains, however, that just as retina and fingerprint scanners can be fooled, faces can be altered (vide Michael Jackson) to outwit the most accurate algorithms. Even if Picasa's face-tagging software can distinguish between identical twins, there's a hacker born every minute, and, in the long run, relying only on facial biometrics would be, well, myopic. It seems inevitable, then, that a combination of several biometric technologies holds the key to a more secure identification system in the future. The latest biometric topics of interest include palm geometry, the pattern of veins under the palm, the way a person walks (gait), the way a person bangs out text on a keyboard (keystroke dynamics), DNA and voiceprints – and combining some of these with face recognition could make it far harder to fool. The right combination might just wipe that smile off the terrorist's face for good.
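Such a combination could be as simple as fusing per-modality match scores into one decision; the weights and threshold below are illustrative assumptions, not tuned values.

```python
# Multimodal score fusion: several biometric match scores merged into
# one accept/reject decision.
def fused_decision(scores, weights=None, threshold=0.7):
    """scores: dict of modality -> match score in [0, 1]. A weighted
    average means a convincing face alone cannot grant access if gait
    and keystroke dynamics disagree."""
    weights = weights or {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total
    return fused >= threshold

print(fused_decision({"face": 0.95, "gait": 0.40, "keystroke": 0.35}))
# -> False: the altered-face attack fails when the other modalities vote no
```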