Facial recognition tech helping place faces from the past

[Image: Facial recognition market. Credit: IDGNS]

Smart indexing capable of finding people in specific situations



21 January 2019

Most of us are familiar with the ability of platforms like Facebook to find our faces and recognise friends in embarrassing photos taken 10 years ago.

Typically the technology works by mapping a face’s geometry: the relative positions of, and distances between, the eyes, nose, brow, mouth and chin. Up to 70 ‘facial landmarks’ can be used to give a face its ‘facial signature’ and distinguish it from others.

This signature can be used to find other faces in a database with very similar signatures, and so identify your face in long-forgotten pics or video footage.
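The comparison idea can be sketched in plain JavaScript (matching the Node JS stack described later in this piece). The landmark-distance vectors and the 0.6 threshold below are invented for illustration only; real systems derive their signatures and thresholds from trained models.

```javascript
// Illustrative sketch: each "signature" is a hypothetical vector of
// normalised distances between facial landmarks (eye spacing,
// nose-to-chin, and so on), reduced to four numbers for brevity.

function euclideanDistance(a, b) {
  if (a.length !== b.length) throw new Error('signature lengths differ');
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// Two signatures "match" when their distance falls under a threshold;
// 0.6 here is an arbitrary value chosen for the example.
function isSameFace(sigA, sigB, threshold = 0.6) {
  return euclideanDistance(sigA, sigB) < threshold;
}

// photo2 is a slightly different capture of the same (made-up) face;
// photo3 is a clearly different face.
const photo1 = [0.42, 0.31, 0.58, 0.27];
const photo2 = [0.44, 0.30, 0.57, 0.29];
const photo3 = [0.91, 0.02, 0.13, 0.84];

console.log(isSameFace(photo1, photo2)); // true
console.log(isSameFace(photo1, photo3)); // false
```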

In recent years, thanks in part to the off-the-shelf facial recognition services offered by major cloud providers, the same technique has been applied to identify people not just in photos from our college days, but in images dating as far back as the 1860s.

Photo fit
Last year, developer Vignesh Sankaran built a tool that finds similar faces in the State Library of New South Wales’ digitised image collection.

The application used Amazon Web Services’ Rekognition facial detection and recognition capabilities to pick out faces in photographs from the library’s Sam Hood collection.

Hood worked as a photographer and photojournalist predominantly in the Sydney area from the 1880s to the 1950s. A collection of more than 30,000 of Hood’s negatives was acquired by the library in the 1970s.

“Clicking on an image shows the results of the facial detection with bounding boxes around the detected faces. Bounding boxes coloured in dark blue are faces that have had similar faces detected in the sample image collection, with a 95% degree of confidence,” Sankaran described.

Clicking on a blue box brings up any other photos in which that face appears.
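The 95% cut-off described above amounts to filtering face matches by a similarity score. The sketch below models its input on the shape of the JSON that Rekognition's CompareFaces operation returns; the similarity values and bounding boxes are invented for illustration.

```javascript
// Hypothetical sample, shaped like a Rekognition CompareFaces response
// (values invented for illustration).
const response = {
  FaceMatches: [
    { Similarity: 99.2, Face: { BoundingBox: { Left: 0.11, Top: 0.08, Width: 0.21, Height: 0.30 } } },
    { Similarity: 96.4, Face: { BoundingBox: { Left: 0.55, Top: 0.12, Width: 0.19, Height: 0.28 } } },
    { Similarity: 88.7, Face: { BoundingBox: { Left: 0.34, Top: 0.40, Width: 0.18, Height: 0.26 } } },
  ],
};

// Keep only the matches that clear the 95% similarity bar, i.e. the
// faces that would get a dark blue bounding box in the tool's UI.
function confidentMatches(resp, minSimilarity = 95) {
  return resp.FaceMatches.filter((m) => m.Similarity >= minSimilarity);
}

console.log(confidentMatches(response).length); // 2
```

In practice the cut-off can also be applied server-side, since CompareFaces accepts a SimilarityThreshold parameter.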

“The results of the facial analysis were stored as JSON files in S3, alongside the images themselves. API endpoints for the front end were built with the Serverless framework in Node JS, and were hosted on AWS Lambda. Serverless handled the deployment and configuration details, and was quite easy to use. The front end was built with React JS, which I found to be a complementary technology to Node JS,” he said.
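A minimal sketch of what one such Lambda-backed endpoint might look like. The handler shape, event fields, file names and analysis object below are all assumptions for illustration; the real application would fetch the stored JSON from S3 with the AWS SDK rather than hold it in memory.

```javascript
// Hypothetical analysis data: in the described architecture this JSON
// lives in S3 alongside the images; it is inlined here so the handler
// can be exercised without AWS credentials.
const analysisByImage = {
  'hood-00123.jpg': {
    faces: [{ faceId: 'f-1', similarFaces: ['hood-00877.jpg'] }],
  },
};

// Lambda-style handler (names assumed) returning the stored facial
// analysis for a requested image as an API Gateway-style response.
// In a real project this would be the module's exported handler.
const handler = async (event) => {
  const imageId = event.pathParameters && event.pathParameters.imageId;
  const analysis = analysisByImage[imageId];
  if (!analysis) {
    return { statusCode: 404, body: JSON.stringify({ error: 'not found' }) };
  }
  return { statusCode: 200, body: JSON.stringify(analysis) };
};
```

A React front end would then call this endpoint with an image identifier and draw bounding boxes from the returned face data.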

Potentially, the application could be further developed to attach names to any recognised faces, making searching for individuals in the collection far easier for library staff.

Similar work is underway on a larger scale in the US to match faces found in crowd-sourced and archived American Civil War photos.

In 2017, a collaboration between researchers at Virginia Tech, the Virginia Center for Civil War Studies and Military Images magazine resulted in the development of CivilWarPhotoSleuth (CWPS).

The tool uses facial recognition software to identify 27 ‘facial landmarks’ in photographs from the era uploaded by the public. CWPS then compares the unique facial reference points against the tens of thousands of photos in its archive.

“Face recognition allows us to find matches even when the soldier’s facial hair changes, or if a different view of him is in our archive,” the tool’s makers said.

“One of the greatest strengths of the site is that the more people use it, the more valuable it becomes. When you add an identified photo from your collection, it may instantly match a mystery photo that another user has been trying to identify for years. Likewise, if you search an unidentified photo and don’t find a match at first, you will be automatically notified if a potential matching photo appears on the site at any point in the future,” they added.

A public version of the site was launched in August.

Similar work is being undertaken by Danish firm Vintage Cloud, which uses a visual recognition API offered by Clarifai to apply meta-tagging to old film stock in a product called Smart Indexing. The company recently announced a database of 100,000 faces that customers can access and match to those found in archive footage.

“Imagine if a producer came to you, needing footage of Marlon Brando, a fire in a skyscraper or a 1976 Ford Pinto,” said Peter Englesson, CEO of Vintage Cloud. “Smart Indexing your archive assets would allow you not only to quickly establish whether you had the desired clip but also to access it immediately – providing the opportunity to realise the value of that asset.”

IDG News Service
