Today, Apple published a new entry in its Machine Learning Journal discussing face detection and the related Vision framework, which developers can use in apps for macOS, iOS, and tvOS.
The entry, titled “An On-device Deep Neural Network for Face Detection,” explores the barriers to making Vision work on-device, and how Apple preserves user privacy by running detection locally rather than on cloud servers.
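For developers curious what that looks like in practice, here is a minimal sketch of running the Vision framework’s face detector entirely on-device in Swift; the function name and the bare-bones error handling are our own simplifications, not code from Apple’s paper.

```swift
import Vision
import UIKit

// Minimal sketch: detect face bounding boxes in a UIImage, entirely on-device.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // The request runs Apple's face detector locally; no data leaves the device.
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox uses normalized coordinates (0...1, origin at bottom-left).
            print("Found face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Face detection failed: \(error)")
    }
}
```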
An excerpt from the paper, written by Apple’s Computer Vision Machine Learning Team, reads: “The deep-learning models need to be shipped as part of the operating system, taking up valuable NAND storage space. They also need to be loaded into RAM and require significant computational time on the GPU and/or CPU.”
The team further argues for on-device computation over cloud-based services, even though the framework must share system resources while other apps are running. At the same time, the team writes that the computation must be highly efficient: capable of processing a large Photos library in a short time, with low thermal impact and low power usage.
To break these barriers, Apple optimized the framework to ‘fully leverage’ the CPU and GPU using its own BNNS (Basic Neural Network Subroutines) library and Metal graphics framework. It also optimized memory usage for image loading, caching, and network inference.
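To give a flavor of what BNNS provides, here is a hedged sketch of a single fully connected layer built with the Accelerate framework’s original BNNS filter API; the layer sizes, weights, and input are arbitrary placeholders and bear no relation to Apple’s actual face detection network.

```swift
import Accelerate

// Sketch of one fully connected layer via BNNS (CPU-side, tuned by Accelerate).
// All sizes and values below are arbitrary placeholders.
let inSize = 4, outSize = 2
var weights: [Float] = [0.1, 0.2, 0.3, 0.4,
                        0.5, 0.6, 0.7, 0.8]   // outSize x inSize
var bias: [Float] = [0.0, 0.0]

var inDesc  = BNNSVectorDescriptor(size: inSize,  data_type: .float,
                                   data_scale: 0, data_bias: 0)
var outDesc = BNNSVectorDescriptor(size: outSize, data_type: .float,
                                   data_scale: 0, data_bias: 0)

var params = BNNSFullyConnectedLayerParameters(
    in_size: inSize,
    out_size: outSize,
    weights: BNNSLayerData(data: &weights, data_type: .float,
                           data_scale: 0, data_bias: 0, data_table: nil),
    bias: BNNSLayerData(data: &bias, data_type: .float,
                        data_scale: 0, data_bias: 0, data_table: nil),
    activation: BNNSActivation(function: .rectifiedLinear, alpha: 0, beta: 0))

guard let filter = BNNSFilterCreateFullyConnectedLayer(&inDesc, &outDesc,
                                                       &params, nil) else {
    fatalError("Could not create BNNS layer")
}

var input: [Float] = [1, 2, 3, 4]
var output = [Float](repeating: 0, count: outSize)
BNNSFilterApply(filter, input, &output)   // run the layer on the CPU
print(output)
BNNSFilterDestroy(filter)
```

Stacking filters like this one is the low-level route; the point of Vision is that app developers get the tuned CPU/GPU paths without writing such code themselves.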
Apple has been investing heavily in machine learning of late. The company built a dedicated ‘Neural Engine’ into the A11 Bionic processor that powers the iPhone X and iPhone 8. Furthermore, CEO Tim Cook said earlier this year that machine learning is an indispensable asset for Apple’s self-driving car platform, which is currently being tested on the roads of California.
What do you think of Apple’s latest contributions to machine learning? Do you have anything to add about the paper? Share your opinions in the comments, and stay with us for more tech updates.