Over the past few years we’ve seen a number of Nvidia research projects that tap into the company’s machine learning research and powerful graphics cards to automate dull creative tasks or make results look more realistic – whether de-noising and resizing images, removing objects, or producing more convincing oil-painting effects.
The company’s latest project, GauGAN, takes this to a whole new level, generating realistic landscapes based on simple blocks of colour. As you can see in the video below, you select the element type you want – sky, tree, mountain etc – and add shapes onto a background using a mixture of pen, pencil and paint bucket tools.
The software then generates a landscape based on these elements – with trees reflected in ponds and other signs of realism. And you can even change the season or time of day.
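The blocks of colour the user paints are, in effect, a semantic segmentation map: every pixel carries a class label (sky, tree, water and so on), and the network generates an image conditioned on that map. A minimal sketch of the idea, using hypothetical class labels rather than Nvidia’s actual label set:

```python
import numpy as np

# Hypothetical class labels (not Nvidia's actual label set)
SKY, TREE, MOUNTAIN, WATER = 0, 1, 2, 3

# A tiny "painting": each cell holds the class the user painted there.
label_map = np.full((4, 6), SKY, dtype=np.uint8)  # start with all sky
label_map[2:, :] = WATER                          # paint water across the lower half
label_map[1, 1:3] = MOUNTAIN                      # a mountain on the horizon
label_map[1, 4] = TREE                            # a single tree

# A conditional generator would typically consume a one-hot encoding
# of this map, one channel per class.
one_hot = np.eye(4, dtype=np.float32)[label_map]  # shape (4, 6, 4)
print(one_hot.shape)  # (4, 6, 4)
```

The real system works at far higher resolution and feeds the encoded map into a deep generator network, but the input it learns from is the same kind of per-pixel label grid.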
The results are more impressionistic than photoreal, which is why Nvidia has called the tech GauGAN – referencing the 19th-century painter Paul Gauguin as well as the Generative Adversarial Networks that underpin how it works.
You can learn more about the tech behind it in Nvidia’s blog post. The model was trained on a million images.
In the video, Bryan Catanzaro, vice president of applied deep learning research at Nvidia, says “Wouldn’t it be great if anyone could be an artist – if we could take our ideas and turn them into compelling images?” – as if the tech alone could replace artistic talent and a knowledge of expression, composition and lighting.
But it could be extremely useful for quickly producing landscapes that form part of other projects – whether seen through a window or as a photo on a wall – or that exist only as roughs. However, in our experience most Nvidia research projects like these don’t become part of commercial applications.