Microsoft continues to use technology to improve accessibility, and it is trying to further that work with Project Tokyo, a combination of technologies that extend people’s capabilities. Project Tokyo uses artificial intelligence to help improve accessibility.
It’s powered specifically by object and scene recognition and machine learning. Because facial recognition is built in as well, you could take a picture of your friends and hear who’s doing what and where, and whether there’s a dog in the picture (important), and so on.
According to a blog post published by Microsoft, the research group began by following athletes and spectators with varying levels of vision on a trip from the U.K. to the 2016 Paralympic Games in Rio de Janeiro, Brazil, observing how they interacted with other people as they navigated airports, attended sporting venues, and went sightseeing, among other activities.
The Project Tokyo team developed the aforementioned algorithms and connected them to a HoloLens from which the front lenses have been removed. The machine learning models run on graphics processing units housed in a PC.
All of this information is relayed to the wearer through audio cues. For example, if the modified HoloLens detects a person one meter away on the user’s left side, it will play a click that sounds as though it’s coming from roughly that distance to the left.
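Microsoft has not published the details of how Project Tokyo renders these cues, but the idea of mapping a detected person’s position to a directional sound can be sketched with standard audio techniques. The snippet below is a hypothetical illustration using constant-power stereo panning and distance-based attenuation; the function name, parameters, and the four-meter sensing range are assumptions for the example, not part of Microsoft’s system.

```python
import math

def spatial_cue(azimuth_deg, distance_m, max_range_m=4.0):
    """Map a detected person's position to simple stereo cue parameters.

    azimuth_deg: angle from straight ahead (negative = left, positive = right).
    distance_m:  distance to the person in meters.
    Returns (left_gain, right_gain, volume), each in [0, 1].
    """
    # Constant-power panning: clamp azimuth to [-90, 90] degrees and map it
    # to a pan angle between 0 (hard left) and pi/2 (hard right).
    theta = math.radians(max(-90.0, min(90.0, azimuth_deg)))
    pan = (theta + math.pi / 2) / 2
    left_gain = math.cos(pan)
    right_gain = math.sin(pan)

    # People farther away sound quieter, clamped to an assumed sensing range.
    volume = max(0.0, 1.0 - min(distance_m, max_range_m) / max_range_m)
    return left_gain, right_gain, volume

# A person one meter away on the user's left: the click is louder
# in the left channel and played at moderate volume.
left, right, vol = spatial_cue(azimuth_deg=-45.0, distance_m=1.0)
```

In a real headset, a head-related transfer function (HRTF) would replace simple panning so the click also conveys elevation and front/back position, but the mapping from detection to cue parameters follows the same shape.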
In May, as part of its accessibility efforts, Microsoft announced a pledge of $25 million over five years for universities, philanthropic organizations, and others developing AI tools for people with disabilities.