Intel Brings “Real-Time AI” to Microsoft’s Deep Learning Platform

Aniruddha Paul
Writer passionate about covering the latest technology updates. He follows social media, business, gaming, and the cultural references that mark tech progress. Philosophy, creativity, life, and freedom are his passions.

AI (Artificial Intelligence) and DL (Deep Learning) have the potential to change communications and business operations for the better. Intel is bringing that potential to life by partnering with Microsoft on real-time AI performance, supplying the AI technology needed to run complex DL models and enable new applications.

Microsoft is using Intel® Stratix® 10 FPGAs as the chief hardware accelerator for its new accelerated DL platform, Project Brainwave. With this addition, the platform can deliver real-time artificial intelligence, letting cloud infrastructure process and return data as fast as it is received and achieving ultra-low latency in the process.

Real-time AI demands

AI is still evolving rapidly, and it needs hardware that can keep pace with its workloads while delivering consistent accuracy and a smooth user experience. The combination of Intel AI technologies and Microsoft’s new DL platform is designed to meet that need: Project Brainwave and Intel Stratix 10 are already raising the standard for real-time AI computation.

Demand for real-time AI in the cloud keeps growing, because systems must process live data streams and return results to users with minimal delay. The partnership aims to meet that demand.
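To make the latency requirement concrete, here is a minimal, purely illustrative Python sketch; the article does not describe Project Brainwave’s interface, so the `score` function and the 5 ms budget below are hypothetical stand-ins. The point is only that each item arriving on a live stream must be scored and returned within a fixed latency budget.

```python
import time

LATENCY_BUDGET_MS = 5.0  # hypothetical real-time budget per request

def score(item):
    """Placeholder for a DNN inference call (hypothetical stand-in)."""
    return sum(item) / len(item)

def serve_stream(stream):
    """Score each live item and check whether it stays within the latency budget."""
    for item in stream:
        start = time.perf_counter()
        result = score(item)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        yield result, elapsed_ms, elapsed_ms <= LATENCY_BUDGET_MS

# Toy "live stream" of fixed-size feature vectors
for result, ms, ok in serve_stream([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]):
    print(f"result={result:.3f} latency={ms:.3f} ms within_budget={ok}")
```

In a production system, the scoring step would be the FPGA-accelerated DNN inference, and the budget would be set by the application’s real-time requirements.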

Microsoft and Intel

In a blog post released today, Microsoft detailed how its platform uses Intel FPGAs. Microsoft was, in fact, the first major cloud provider to deploy FPGAs in its public cloud infrastructure.

According to the blog, the FPGAs can handle even the most complex DL models with high flexibility and performance. They accelerate DNNs (Deep Neural Networks), computational models loosely inspired by the way the human brain processes information.
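For context on what a DNN actually computes, a minimal, self-contained sketch of a forward pass through a small fully connected network is shown below (Python with NumPy; the layer sizes and random weights are arbitrary). Inference reduces to repeated matrix multiplications and nonlinearities, which is exactly the kind of workload that hardware accelerators such as FPGAs target.

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity applied between layers
    return np.maximum(x, 0.0)

def forward(x, layers):
    """Run one inference pass: a chain of matrix multiplies and nonlinearities."""
    for weights, bias in layers:
        x = relu(x @ weights + bias)
    return x

# Arbitrary demo network: 4 inputs -> 8 hidden units -> 2 outputs
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 8)), np.zeros(8)),
    (rng.standard_normal((8, 2)), np.zeros(2)),
]
print(forward(rng.standard_normal(4), layers))
```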

Microsoft is currently working to deploy Project Brainwave in the Azure cloud, which will let customers run complex DL models at high performance.

As for Intel technologies, transforming data at high performance is one of their strong suits, and Intel FPGAs are both programmable and customizable.
