
Google turns to machine learning and RAISR technology to save bandwidth

RAISR, which was introduced in November, uses machine learning to produce great quality versions of low-resolution images.

Photographers of all specialities, skills and genres have long made their home on Google+, sharing their work with a supportive community. Whether it’s of toys, travel or street art, each photo has a unique story to tell, and deserves to be viewed at the best possible resolution.

Traditionally, viewing images at high resolution has meant using lots of bandwidth, leading to slower loading speeds and higher data costs. For many folks, especially those in places where data is pricey or the internet is spotty, this is a significant concern.

To help everyone see the photos that photographers share to Google+ in their full glory, Google has turned to machine learning and a new technology called RAISR. RAISR, which was introduced in November, uses machine learning to produce great quality versions of low-resolution images, allowing users to see photos as the photographers intended them to be seen. By using RAISR to display some of the large images on Google+, Google has been able to use up to 75 per cent less bandwidth per image it has been applied to.
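One plausible reading of that figure, purely as an illustration: send an image at half its width and height (a quarter of the pixels, so roughly 75 per cent less data before compression) and restore the display size on the device. The sketch below is not Google's RAISR implementation; a plain bicubic resize stands in for RAISR's learned, edge-aware upscaling filters, and the file name and helper functions are hypothetical.

```python
# Illustrative sketch only -- not the RAISR algorithm itself.
# Shows the bandwidth arithmetic: transferring an image at half the
# width and height moves ~1/4 of the pixels, then a super-resolution
# step (here, a stand-in bicubic resize) restores the display size.

from PIL import Image


def downscale_for_transfer(path, factor=2):
    """Server side (hypothetical helper): shrink the image before sending it."""
    img = Image.open(path)
    return img.resize((img.width // factor, img.height // factor), Image.LANCZOS)


def upscale_on_device(small, factor=2):
    """Client side (hypothetical helper): restore display resolution.
    Bicubic interpolation stands in here for RAISR's learned filters."""
    return small.resize((small.width * factor, small.height * factor), Image.BICUBIC)


if __name__ == "__main__":
    small = downscale_for_transfer("photo.jpg")   # hypothetical file
    restored = upscale_on_device(small)
    # Pixels transferred drop by 1 - 1/factor**2, i.e. ~75% at factor=2.
    print(f"pixel reduction: {1 - 1 / 2**2:.0%}")
```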

While Google has so far rolled this out only for high-resolution images appearing in the streams of a subset of Android devices, the company is already applying RAISR to more than 1 billion images per week, reducing these users' total bandwidth by about a third. In the coming weeks, it plans to roll this technology out more broadly.
