Google has unveiled the Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) tailored for machine learning. The company has been experimenting with this type of solution in its data centers for over a year and stated that the TPU's performance per watt is an order of magnitude better than that of standard hardware.
Sundar Pichai, who became Google's CEO after the reorganization that created the Alphabet holding company, recently gave an interview to Forbes in which he discussed his plans for artificial intelligence. In November 2015, the company released TensorFlow, its own machine learning engine, as open source to stimulate its development. It's just one of Google's moves in this field.
In recent years, Google has been developing various services to make interaction with users more natural, and that requires corresponding advances in machine learning. Software alone, however, is not enough: it must run on hardware that can exploit it to the fullest.
The TPU requires fewer transistors per operation because it is more tolerant of reduced computational precision. As a result, it can perform more operations per second, use more sophisticated and powerful machine learning models, and apply them more quickly. The idea is to deliver smarter results to users, faster.
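"Reduced precision" in this context typically means quantization: storing values as small integers instead of 32-bit floats, which takes far less silicon per arithmetic unit. The article doesn't describe the TPU's actual scheme, so the following is only a minimal sketch of the general idea, with made-up weight values:

```python
# Hypothetical float weights from a trained model (illustrative values only).
weights = [-0.42, 0.07, 0.91, -0.15]

# Linear quantization to signed 8-bit integers: map the observed
# range onto [-127, 127] with a single scale factor.
scale = max(abs(w) for w in weights) / 127.0
quantized = [round(w / scale) for w in weights]

# Dequantize to approximate the originals; some precision is lost,
# but each value now fits in 8 bits instead of 32.
approx = [q * scale for q in quantized]
max_error = max(abs(w - a) for w, a in zip(weights, approx))
print(max_error <= scale / 2)  # rounding error is at most half a quantization step
```

The trade-off is exactly the one the article describes: each stored value carries less information, but a chip can pack many more 8-bit multipliers into the same transistor budget than 32-bit floating-point units.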
This approach also has other uses, such as improving the accuracy and quality of maps and navigation in Google Street View. Machine learning has a variety of applications across the company's services, which is why Google decided to design the TPU instead of relying on standard hardware solutions.
According to Google, the performance increase is equivalent to fast-forwarding the technology roughly seven years into the future, about three generations of Moore's Law. This is remarkable progress that confirms the company's determination to be a leader in machine learning, a crucial factor in the evolution of many Google services.
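The seven-year equivalence is simple doubling arithmetic. A back-of-the-envelope check, assuming a roughly two-year doubling period (an assumption of this sketch, not a figure stated by Google):

```python
import math

gain = 10.0           # "an order of magnitude" better performance per watt
doubling_years = 2.0  # assumed Moore's Law cadence (hypothetical)

doublings = math.log2(gain)          # generations needed for a 10x gain
years = doublings * doubling_years   # time those generations would take

print(round(doublings, 1), round(years, 1))  # prints "3.3 6.6"
```

A tenfold gain corresponds to a bit over three doublings, and at two years per doubling that lands close to the seven-year figure cited in the announcement.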