Apple Publishes First Artificial Intelligence Research Paper, Focusing On Advanced Image Recognition

Apple is proving true to its word that it would begin publishing papers on AI research. Less than a month after the announcement, the Cupertino-based tech giant pulled back the curtain and its research team published its first AI paper.

At NIPS 2016 earlier this month, Apple made waves when it officially announced at the conference that its artificial intelligence (AI) and machine learning researchers would be allowed to publish their work, share it, and engage with academia. This is notable, as the tech giant has long been secretive about its development processes.

Although the AI paper was submitted on Nov. 15, Apple published it on Dec. 22, according to Forbes. The paper is titled "Learning from Simulated and Unsupervised Images through Adversarial Training."

According to Engadget, Apple's first AI paper tackles the problem of teaching an algorithm to recognize objects by training it on simulated images rather than real-world ones. In other words, the paper focuses on advanced image recognition.

Using synthetic images, similar to those in video games, to train neural networks can be more efficient because synthetic images come already annotated and labeled. Real-world images, by contrast, require human workers to individually label and annotate everything the computer sees, such as a bike, cat, or flower.
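To illustrate that efficiency argument, here is a minimal, hypothetical sketch (not taken from Apple's paper): because the program generates each synthetic image itself, it knows the correct label at creation time, whereas a real photo of the same objects would still need a person to tag it. The `make_synthetic_example` helper, the shapes, and the image size are illustrative assumptions.

```python
import numpy as np

def make_synthetic_example(label, size=32, rng=None):
    """Render a trivial 'simulated' image whose label is known for free.

    label 0 -> a filled square, label 1 -> a filled circle.
    Because we generated the image ourselves, no human annotation is needed.
    """
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size), dtype=np.float32)
    cx, cy = rng.integers(8, size - 8, size=2)
    if label == 0:                       # square
        img[cy - 4:cy + 4, cx - 4:cx + 4] = 1.0
    else:                                # circle of radius 4
        yy, xx = np.ogrid[:size, :size]
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= 16] = 1.0
    return img, label                    # image and its ground-truth label together

# A labeled synthetic training set costs nothing to annotate:
rng = np.random.default_rng(0)
dataset = [make_synthetic_example(rng.integers(0, 2), rng=rng) for _ in range(1000)]
# A real-photo dataset of the same size would need a person to tag every image by hand.
```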

However, fully switching to a synthetic-image approach has downsides: "synthetic data is often not realistic enough, leading the network to learn details only present in synthetic images and fail to generalize well on real images," per the AI paper published by Apple.

As reported by MacRumors, the AI paper from Apple reads: "In this paper, we propose Simulated+Unsupervised (S+U) learning, where the goal is to improve the realism of synthetic images from a simulator using unlabeled real data. The improved realism enables the training of better machine learning models on large datasets without any data collection or human annotation effort."
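As a rough illustration of the S+U idea quoted above, the sketch below pairs a "refiner" network that nudges synthetic images toward realism with an adversarial discriminator trained against unlabeled real images, plus a self-regularization term that keeps each refined image close to its synthetic source so its free labels remain valid. This is a schematic PyTorch approximation, not Apple's code: the network architectures, the L1 regularization weight, and the training-loop details are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Refiner(nn.Module):
    """Maps a synthetic image to a refined, more realistic image of the same size."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 1),
        )
    def forward(self, x):
        return torch.tanh(self.net(x))

class Discriminator(nn.Module):
    """Outputs a single logit: is the input a real image or a refined synthetic one?"""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

refiner, disc = Refiner(), Discriminator()
opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lambda_reg = 0.1  # assumed weight for the self-regularization term

def train_step(synthetic, real):
    """One adversarial update on a batch of synthetic and unlabeled real images."""
    # Refiner: try to fool the discriminator while staying close to the synthetic input.
    refined = refiner(synthetic)
    adv_loss = bce(disc(refined), torch.ones(refined.size(0), 1))
    reg_loss = F.l1_loss(refined, synthetic)
    loss_r = adv_loss + lambda_reg * reg_loss
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    # Discriminator: separate real images from (detached) refined ones.
    d_real = bce(disc(real), torch.ones(real.size(0), 1))
    d_fake = bce(disc(refined.detach()), torch.zeros(refined.size(0), 1))
    loss_d = d_real + d_fake
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_r.item(), loss_d.item()

# Toy usage with random tensors standing in for synthetic and unlabeled real batches:
print(train_step(torch.rand(8, 1, 32, 32), torch.rand(8, 1, 32, 32)))
```

The key design point, as the quoted passage describes, is that the real images never need labels: they are used only to teach the discriminator what "realistic" looks like, while the synthetic images carry their annotations through the refinement step.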

The AI paper's lead author is Ashish Shrivastava, who holds a Ph.D. in computer vision from the University of Maryland. The other authors are Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb. Susskind co-founded Emotient, an AI startup that assesses a person's emotions by simply looking at facial expressions. For a full description of the AI paper published by Apple, you can check it out here.
