27/4/2016

The Next Rembrandt: Training a deep learning machine to paint


Can technology and data bring one of the greatest painters of all time back to life? To answer this question, a group of data scientists, developers, engineers and Rembrandt experts joined forces to create what is now known as ‘The Next Rembrandt’.

The painting was created using data from Rembrandt’s entire body of work, combined with deep learning algorithms and facial recognition techniques. It consists of over 148 million pixels and is based on 168,263 painting fragments from Rembrandt’s oeuvre.

The project is a collaboration between presenting partner ING Bank, advertising agency J. Walter Thompson Amsterdam, supporting partner Microsoft, and advisors from Delft University of Technology (TU Delft), the Mauritshuis and Museum Het Rembrandthuis.

Using data to paint

The first step to making The Next Rembrandt was analyzing all 346 of Rembrandt’s paintings using high-resolution 3D scans and digital files, which were upscaled by a deep learning algorithm. Supporting partner Microsoft contributed its cloud platform Azure to host and analyze this data.
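The article doesn’t go into how the upscaling was done. As a rough, illustrative sketch only: a small SRCNN-style super-resolution network in PyTorch could refine a bicubically enlarged scan, roughly like this (the architecture, layer sizes and placeholder data are assumptions, not the team’s actual pipeline):

```python
import torch
import torch.nn as nn

class SimpleSRCNN(nn.Module):
    """A minimal SRCNN-style network: the input is first enlarged with
    bicubic interpolation, then a small CNN restores fine detail."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Hypothetical usage: upscale a low-resolution scan by a factor of 2.
model = SimpleSRCNN()                   # in practice: trained on painting scans
low_res = torch.rand(1, 3, 256, 256)    # placeholder for a scanned fragment
upsampled = nn.functional.interpolate(low_res, scale_factor=2,
                                      mode="bicubic", align_corners=False)
high_res = model(upsampled)             # refined high-resolution output
print(high_res.shape)                   # torch.Size([1, 3, 512, 512])
```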

To determine the subject, the team analyzed Rembrandt’s corpus and concluded that the most likely subject of a new piece would be a middle-aged Caucasian male.
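How this corpus analysis was carried out isn’t spelled out. As a hedged illustration of the general idea, one could tally subject attributes over a hypothetical metadata table of Rembrandt’s portraits and take the most common value of each:

```python
from collections import Counter

# Hypothetical metadata for Rembrandt's portrait corpus; in the real project
# this would come from catalogue data covering all 346 paintings.
portraits = [
    {"gender": "male", "age_group": "middle-aged", "facing": "right"},
    {"gender": "male", "age_group": "middle-aged", "facing": "right"},
    {"gender": "female", "age_group": "young", "facing": "left"},
    # ... one record per painted subject
]

# Tally each attribute independently and report the most common value,
# giving a profile of the most likely subject for a new piece.
for attribute in ("gender", "age_group", "facing"):
    counts = Counter(p[attribute] for p in portraits)
    value, n = counts.most_common(1)[0]
    print(f"{attribute}: {value} ({n}/{len(portraits)} portraits)")
```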

This analysis, alongside input from Rembrandt experts, also helped the team master his style.

“Rembrandt relied on his innovative use of lighting to shape the features in his paintings. By using very concentrated light sources, he essentially created a ‘spotlight effect’ that gave great attention to the lit elements and left the rest of the painting shrouded in shadows. This resulted in some of the features being very sharp and in focus and others becoming soft and almost blurry, an effect that had to be replicated in the new artwork,” they explain.

The team designed a software system that could understand Rembrandt’s style based on his use of geometry, composition, and painting materials. A facial recognition algorithm identified and classified the most typical geometric patterns Rembrandt used to paint human features, then applied those learned principles to replicate his style and generate new facial features for the painting.
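The actual facial recognition pipeline isn’t published, but the idea of extracting “typical geometric patterns” can be sketched roughly as follows: assume facial landmarks (a standard 68-point layout) have already been extracted from faces in the corpus, compute a simple geometric descriptor for each face, and cluster the descriptors to find the dominant proportions. The descriptor, placeholder data and cluster count are all illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def eye_to_nose_ratio(landmarks):
    """Geometric descriptor from a 68-point landmark array (shape (68, 2)):
    the distance between the eye centres divided by the nose length."""
    left_eye = landmarks[36:42].mean(axis=0)
    right_eye = landmarks[42:48].mean(axis=0)
    nose_top, nose_tip = landmarks[27], landmarks[33]
    eye_distance = np.linalg.norm(right_eye - left_eye)
    nose_length = np.linalg.norm(nose_tip - nose_top)
    return eye_distance / nose_length

# Hypothetical input: landmark arrays extracted earlier from corpus faces.
faces = [np.random.rand(68, 2) for _ in range(50)]   # placeholder data
descriptors = np.array([[eye_to_nose_ratio(f)] for f in faces])

# Cluster the descriptors to find the most typical geometric proportions;
# the largest cluster's centre serves as a "typical" proportion to reuse.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(descriptors)
largest = np.bincount(kmeans.labels_).argmax()
print("typical eye-to-nose ratio:", kmeans.cluster_centers_[largest][0])
```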

[Image: facial features generated by the software]

Next, these individual features were assembled into a fully formed face and bust according to Rembrandt’s use of proportions. When the 2D version of the painting was ready, depth and texture were added. With the help of TU Delft, a height map was created to identify patterns on the surfaces of Rembrandt’s canvases. By transforming pixel data into height data, the computer could mimic the brushstrokes used by Rembrandt.
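The height-mapping step can be illustrated with a minimal sketch: take a grayscale scan of a painted surface and map its luminance to per-pixel relief heights. The height scale, smoothing and placeholder data below are assumptions for illustration, not the project’s actual parameters:

```python
import numpy as np

# In practice the input would be a high-resolution scan of a painted surface
# (e.g. loaded with PIL); a random array stands in for it here.
texture = np.random.rand(1024, 1024) * 255.0   # grayscale pixel values 0-255

# Map pixel luminance to a physical relief height. The 50-micron maximum is
# an assumption for illustration, not the project's actual print height.
max_height_um = 50.0
height_map = (texture / 255.0) * max_height_um

# A light 3x3 mean filter suppresses scanner noise while keeping stroke ridges.
padded = np.pad(height_map, 1, mode="edge")
smoothed = sum(
    padded[i:i + height_map.shape[0], j:j + height_map.shape[1]]
    for i in range(3) for j in range(3)
) / 9.0

print("relief height range (microns): %.1f - %.1f" % (smoothed.min(), smoothed.max()))
```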

Finally, to bring the painting to life, the team used an advanced 3D printer specially designed to make high-end reproductions of existing artwork. Thirteen layers of UV ink were printed, one on top of the other, to create a realistic painted texture.
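The printer’s slicing software isn’t described, but the basic idea of building relief from stacked ink layers can be sketched by thresholding the height map into 13 passes. The layer count comes from the article; everything else below is an assumption:

```python
import numpy as np

def slice_into_layers(height_map, num_layers=13):
    """Return one boolean mask per layer: True where ink should be deposited.
    Each successive layer covers only the regions whose relief height exceeds
    that layer's threshold, so stacking the layers reproduces the relief."""
    layer_height = height_map.max() / num_layers
    return [height_map >= (i + 1) * layer_height for i in range(num_layers)]

# Hypothetical usage with a small random relief map standing in for the
# real brushstroke height data computed earlier.
relief = np.random.rand(512, 512) * 50.0           # heights in microns
layers = slice_into_layers(relief, num_layers=13)  # 13 UV-ink passes, per the article
for i, mask in enumerate(layers, start=1):
    print(f"layer {i:2d}: ink on {mask.mean():.0%} of the surface")
```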

[Image: the 3D printing process]

