
Facebook's DeepFace shows serious facial recognition skills

We can no longer say that computers will one day be able to put names to human faces better than we can -- that day might already be here.

Facebook researchers have published a paper about a newly designed facial recognition system with 97.25 percent accuracy -- a mere 0.28 percentage points below human performance. The project, called DeepFace, performed better than most facial recognition systems when measured against Labeled Faces in the Wild, a data set commonly used to judge the effectiveness of such systems.

First reported by MIT Technology Review, the development of DeepFace represents a significant advancement over previous facial recognition systems. This is due to the new approach to artificial intelligence known as "deep learning," in which networks of simulated neurons learn to recognize patterns in large amounts of data.

Although the concept of deep learning is decades old, computers have only just become powerful enough for the mathematical computations needed to create more layers of virtual neurons. With this greater depth come major advances in speech and image recognition, and web companies like Facebook, Google, Pinterest and Netflix are cashing in.
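For readers curious what "layers of virtual neurons" look like in practice, here is a minimal sketch: each layer applies a weighted sum followed by a nonlinearity, and stacking layers gives the network its depth. The layer sizes and weights below are arbitrary placeholders rather than anything learned from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One layer of virtual neurons: weighted sums passed through a nonlinearity."""
    return np.maximum(weights @ x, 0.0)  # ReLU nonlinearity

# Three stacked layers mapping a 256-dimensional input down to 16 numbers.
sizes = [256, 128, 64, 16]
weights = [rng.normal(size=(sizes[i + 1], sizes[i])) * 0.05 for i in range(3)]

x = rng.normal(size=256)   # stand-in for pixel or audio features
for w in weights:
    x = layer(x, w)        # each layer builds on the previous one's output
print(x.shape)             # (16,) -- a compact internal representation
```

In a real system the weights are tuned on large amounts of data, which is the computationally expensive part that only recently became practical.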

The reason tech companies are so hot for these systems is that they allow computers to recognize objects in images and analyze language without human supervision. From mining advertising data to identifying untagged friends in photos, these systems are ultimately supposed to help build better social networks.

Facial recognition systems conventionally work in four stages: detect, align, represent and classify. Essentially, the artificial intelligence recognizes the small features that make up an object or a single piece of text and then puts them together to create a map of the whole thing. DeepFace takes the aligning and representing stages one step further.
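To make the four stages concrete, here is a deliberately simplified, runnable sketch of the pipeline. Each stage is a crude stand-in (a center crop, a fixed-size resample, a normalized pixel vector, a distance check), not Facebook's actual implementation, and the threshold is an arbitrary placeholder.

```python
import numpy as np

def detect(image: np.ndarray) -> np.ndarray:
    """Stand-in detector: crop the central region where a face is assumed to be."""
    h, w = image.shape[:2]
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def align(face: np.ndarray, size: int = 32) -> np.ndarray:
    """Stand-in alignment: resample the crop to a fixed canonical size."""
    rows = np.linspace(0, face.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, face.shape[1] - 1, size).astype(int)
    return face[np.ix_(rows, cols)]

def represent(aligned: np.ndarray) -> np.ndarray:
    """Stand-in representation: flatten to a unit-length descriptor vector."""
    v = aligned.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def classify(desc_a: np.ndarray, desc_b: np.ndarray, threshold: float = 0.5) -> bool:
    """Stand-in classifier: call it a match if the descriptors are close."""
    return float(np.linalg.norm(desc_a - desc_b)) <= threshold

# Usage: push two (random, stand-in) images through the full pipeline.
img_a = np.random.rand(128, 128)
img_b = np.random.rand(128, 128)
print(classify(represent(align(detect(img_a))), represent(align(detect(img_b)))))
```

DeepFace's contribution lies in replacing the middle two stand-ins: a 3-D alignment step and a deep neural network representation.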

Using a three-dimensional model of an average face, the software rotates each face in an image so that it looks forward. Then a numerical description of the reoriented face is calculated using a simulated neural network. If DeepFace comes up with similar enough descriptions from two different images, they most likely show the same face, according to MIT Technology Review.
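As a rough illustration of that comparison step, the sketch below reduces two hypothetical photos to descriptor vectors and declares a match when their cosine similarity crosses a threshold. The vectors and the threshold are made-up placeholders, not values from the DeepFace paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two descriptors: 1.0 means they point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_face(desc_a: np.ndarray, desc_b: np.ndarray, threshold: float = 0.8) -> bool:
    """Declare a match if the numerical descriptions are similar enough."""
    return cosine_similarity(desc_a, desc_b) >= threshold

# Hypothetical descriptors produced by the network for two photos.
photo_1 = np.array([0.12, 0.85, 0.31, 0.44])
photo_2 = np.array([0.10, 0.88, 0.29, 0.47])

print(same_face(photo_1, photo_2))  # True: the descriptions are close enough
```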

The Facebook researchers trained the DeepFace system by tapping into the company's trove of user images -- four million facial images belonging to almost 4,000 people, or an average of nearly 1,000 images per person.

The researchers plan to present the work at the IEEE Conference on Computer Vision and Pattern Recognition in June.
