
Google Cloud TPUs announced at I/O 2017

17 May 2017

Google I/O 2017 has just started, and there have already been plenty of new announcements from the company. Just a few minutes ago, Sundar Pichai, Google's CEO, announced that there are more than 2 billion active Android devices around the world. This I/O is all about the shift from a Mobile First to an AI First paradigm, and Google is applying that shift across all of the services the company provides.

We all know about Google's AlphaGo, the machine learning system that defeated the world champion in Go not so long ago. The technology behind it was a custom chip called a TPU (Tensor Processing Unit), which delivers very high throughput so that machine learning models can run faster.

Today at I/O 2017, the company announced the second-generation TPUs, which are much faster and will be accessible to people all around the world through Google Compute Engine. This is what the hardware for the gen-2 TPUs looks like.


While our first TPU was designed to run machine learning models quickly and efficiently—to translate a set of sentences or choose the next move in Go—those models still had to be trained separately. Training a machine learning model is even more difficult than running it, and days or weeks of computation on the best available CPUs and GPUs are commonly required to reach state-of-the-art levels of accuracy.

The new TPUs announced today are built to handle training as well as inference on the same device, which removes bottlenecks and improves overall performance both for training a model and for running it. Each TPU delivers a throughput of 180 teraflops, and these chips are designed to work together in a cluster known as a TPU Pod. One TPU Pod holds 64 TPUs, which works out to 64 × 180 teraflops, or roughly 11.5 petaflops. That's insane!

Using these TPU pods, we've already seen dramatic improvements in training times. One of our new large-scale translation models used to take a full day to train on 32 of the best commercially-available GPUs—now it trains to the same accuracy in an afternoon using just one eighth of a TPU pod.

As we have seen, these TPUs are very powerful, and Google is going to let people all over the world take advantage of them through Google Cloud TPUs: the processing power of the TPUs will be offered as a service to Cloud users, and the TPUs can be programmed using TensorFlow.
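As a rough illustration of what "programmed using TensorFlow" means in practice, here is a minimal sketch of how a TensorFlow program might target a Cloud TPU. The TPU name "my-cloud-tpu" is a placeholder, and the exact API has evolved since 2017; this sketch uses the current tf.distribute TPU APIs rather than the original 2017 interface.

```python
import tensorflow as tf

# Resolve and initialize the Cloud TPU this VM is attached to.
# "my-cloud-tpu" is a hypothetical TPU name supplied by the user.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-cloud-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# A distribution strategy replicates the model across the TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # An ordinary Keras model defined inside this scope runs on the TPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# model.fit(train_dataset) would then train across the TPU cores in parallel.
```

The point of the sketch is that, from the developer's side, targeting a Cloud TPU is mostly a matter of pointing TensorFlow at the device; the model-building code itself stays the same.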

Google is also going to provide 1,000 Cloud TPUs for free to researchers around the world, since getting hold of such high-end hardware is often a hurdle for research. Google is calling the program the TensorFlow Research Cloud.

