You probably know as well as I do that the internet is littered with low-resolution images, whether limited by a device's camera or purposely downgraded for faster loading on slow connections. Unfortunately, enlarging an image many times over while still preserving detail is something only possible in episodes of CSI. But thanks to the magic of machine learning, Google has been developing a solution: RAISR, short for Rapid and Accurate Image Super-Resolution.

Existing methods of upsampling (generating a larger image from a smaller one) use simple algorithms that fill in new pixels with values interpolated from the surrounding ones. Most of these methods result in a blurry image that loses the fine details of the original.
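To make that concrete, here's a toy bilinear upsampler in NumPy (bilinear being one of the simple interpolation methods alongside the bicubic one shown below). It's a minimal sketch, not a production resizer: each new pixel is just a weighted average of its nearest source pixels, which is exactly why fine detail smears out.

```python
import numpy as np

def bilinear_upsample(img, scale):
    """Upsample a 2-D array by blending each output pixel from its
    four nearest source pixels."""
    h, w = img.shape
    # Fractional source coordinates for every output pixel.
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]        # vertical blend weights
    wx = (xs - x0)[None, :]        # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

small = np.arange(16.0).reshape(4, 4)   # tiny stand-in "image"
big = bilinear_upsample(small, 4)
print(small.shape, "->", big.shape)     # 4x4 grows to 16x16
```

Every output value here lies between its neighbors, so sharp edges in the source can only come out softened; that smoothing is the blurriness visible in the comparison images.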

Left: Original image, Right: Bicubic upsampled version

That's where RAISR comes in. Essentially, Google trains RAISR on 10,000 pairs of images (one low quality, one high) to create filters that reconstruct detail closely approximating the original images. There is a far more technical explanation at the source link below, but that's the general idea.
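The core idea can be sketched in a few lines. This is a toy, not Google's implementation: RAISR learns many filters, selected per patch from local image statistics, whereas the code below learns a single filter by least squares, with a box blur standing in for a low-quality upscale and random pixels standing in for the training photographs.

```python
import numpy as np

def box_blur(img):
    """Cheap 3x3 box blur, standing in for a blurry upscaled image."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))   # stand-in "high quality" training image
blurry = box_blur(sharp)       # stand-in "cheaply upscaled" version

k = 5  # learned filter size: predict each pixel from a k x k patch
# Build a least-squares system: each row is a flattened blurry patch,
# each target is the sharp pixel at that patch's center.
rows, targets = [], []
for y in range(sharp.shape[0] - k + 1):
    for x in range(sharp.shape[1] - k + 1):
        rows.append(blurry[y:y + k, x:x + k].ravel())
        targets.append(sharp[y + k // 2, x + k // 2])
A, b = np.array(rows), np.array(targets)
filt, *_ = np.linalg.lstsq(A, b, rcond=None)

# The learned filter should land closer to the sharp image than the
# blurry input does.
mse_blurry = np.mean((A[:, (k * k) // 2] - b) ** 2)
mse_filtered = np.mean((A @ filt - b) ** 2)
print(f"blurry MSE {mse_blurry:.4f} -> filtered MSE {mse_filtered:.4f}")
```

The filter is applied to an already-upscaled image, which matches the pipeline shown in the figure below: do a cheap upscale first, then run the learned filters over it to restore detail.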


The steps of RAISR's algorithm, applied to an existing upscaled image.

Google hopes this can be used to restore images taken with low-resolution cameras, or as an improvement to pinch-to-zoom on mobile devices. This could even allow for less mobile data usage by transmitting images at a lower resolution and upsampling them on the recipient's end. I can't wait until the practical applications of this start to take shape, but it could be a while.