Why tech companies are racing each other to make their own custom A.I. chips


On Thursday, Alibaba said that its recently formed research and development arm – dubbed the Academy for Discovery, Adventure, Momentum and Outlook – has been working on an AI chip called the Ali-NPU and that the chips will become available for anyone to use through its public cloud, a spokesman told CNBC.

The idea is to strengthen the Alibaba cloud and enable the future of commerce and a variety of AI applications across many industries, the spokesman said. In the fourth quarter, Alibaba held 4 percent of the cloud infrastructure services market, trailing Amazon, Microsoft, IBM and Google, according to Synergy Research Group.

Alibaba’s research academy has been opening offices around the world, including in Bellevue, Washington, near Microsoft headquarters. Last year Alibaba hired Qualcomm employee Liang Han as an “AI chip architect” in the Silicon Valley city of Sunnyvale. Job listings show that Alibaba is looking to add more people to the effort at that location.

The activity bears a resemblance to Google-parent Alphabet’s efforts.

Internally, Alphabet engineers have been using Google’s custom-built tensor processing units, or TPUs, to accelerate their own machine-learning tasks since 2015. Last year Google announced a second-generation TPU that could handle more challenging computing work, and in February Google started letting the public use second-generation TPUs through its cloud.

The second-generation Google AI chip can be used in place of graphics processing units from the likes of Nvidia, chips that can do more than just train AI models.

The Alibaba and Google server chip programs are still in their relative infancy, at least compared with Nvidia’s GPU business in data centers.

Indeed, Google and Nvidia remain partners, and Nvidia’s GPUs remain available on the Google cloud alongside the TPUs. Alibaba also offers Nvidia GPUs through its cloud and will continue to do so after the Ali-NPU comes out, the spokesman said.

In a note last July, analysts Matthew Ramsay and Vinod Srinivasaraghavan with Canaccord Genuity said that with the release of Nvidia’s latest GPUs, they have “increased confidence Nvidia will … more successfully defend pricing as data center sales scale and in-house and merchant ASIC [application-specific integrated circuit] offerings increase.”
