Facebook on Monday began using artificial intelligence to help people with visual impairments enjoy photos posted on the leading social network.
Facebook introduced machine learning technology trained to recognise objects in pictures and then describe photos aloud.
The feature was first being tested on mobile devices powered by Apple's iOS software with screen readers set to English. Facebook planned to expand the capability to devices running other operating systems and to add more languages, according to Facebook accessibility specialist Matt King, who lost his vision as a US college student studying electrical engineering.
The technology works across Facebook's family of applications and is based on a "neural network" taught to recognise things in pictures using millions of examples.
The Silicon Valley-based social network said that it was moving slowly with the feature to avoid potentially offensive or embarrassing gaffes in automatically describing what is in pictures. Words used in descriptions included those related to transportation, outdoor settings, sports, food, and people's appearances.
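The cautious approach described above, naming only the things the system is confident about, can be sketched in a few lines. The labels, confidence scores, threshold value, and the `describe_image` function below are all illustrative assumptions for this sketch, not Facebook's actual pipeline.

```python
# Hypothetical sketch of confidence-gated photo description.
# Labels, scores, and the 0.8 threshold are illustrative assumptions.

def describe_image(concepts, threshold=0.8):
    """Build an alt-text sentence from (label, confidence) pairs,
    keeping only labels the model is reasonably sure about."""
    confident = [label for label, score in concepts if score >= threshold]
    if not confident:
        return "Image may contain: no description available."
    return "Image may contain: " + ", ".join(confident) + "."

# Low-confidence guesses are dropped rather than risked aloud.
print(describe_image([("two people", 0.95), ("smiling", 0.90),
                      ("outdoor", 0.85), ("pizza", 0.40)]))
```

Raising the threshold trades completeness for safety: the description gets shorter, but the screen reader is less likely to announce something embarrassing that is not actually in the photo.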
Microsoft, meanwhile, said its Cortana Intelligence Suite boasted the ability to let applications see, hear, speak, understand and interpret people's needs.
Microsoft said that a "Seeing AI" research project was underway to show how those capabilities could be woven into applications to help people who are visually impaired or blind better learn what is around them, say by scanning scenes with smartphone cameras or specially equipped eyewear.

Copyright Agence France-Presse, 2016
