“This looks Photoshopped.” How many times have you heard — or said — those three skeptical words? And yet every year, Photoshop and tools like it get a little more sophisticated, and it gets tougher to tell what’s a ‘shop and what’s not. More research comes out, more programs are written to incorporate the new research, and then artists get their hands on the tools and develop a deft and subtle touch.
Now a group of researchers from Cornell University and Adobe has layered neural nets atop an image style transfer AI to create an even more powerful image manipulation tool they’re calling Deep Photo Style Transfer. It takes a reference image, often heavily stylized, and an input image, then clones the style of the reference image onto the input image. What it spits out at the end is startling because of how seamless the changes can be.
Using “semantic segmentation,” the authors teased apart the concepts of edges, textures, content, and style to build their neural nets. You can think of it as a combination of the magic wand tool and the heal tool from Photoshop, or perhaps as a “format painter” like the one in Microsoft Word except for photos. The study authors used their tool to swap the textures of apples, for example, and to change the weather and time of day in photos.
Semantic segmentation is most valuable in the way it can be tuned for whatever input image it receives. In a mathematical modeling sense, a tree, a building, a face, or any other element in an image has its own recurring statistical signature in its edges and textures, which a model can use to distinguish one region from another. We’re getting closer to being able to pick out cats in images without needing an entire supercomputer facility to do it.
For example, as the authors explain in the paper: “consider an image with less sky visible in the input image; a transfer that ignores the difference in context between style and input may cause the style of the sky to ‘spill over’ the rest of the picture.” Deep style transfer is capable of accounting for these differences in context, so it respects the edges while confidently changing the textures.
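The spill-over fix boils down to comparing style statistics only between matching semantic regions (sky-to-sky, building-to-building). Here is a minimal NumPy sketch of that idea using segmentation-masked Gram matrices, in the spirit of Gram-matrix style losses; the function names and the toy two-channel features are illustrative, not the paper’s actual implementation:

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a (channels, num_pixels) feature map:
    channel-to-channel correlations that summarize texture/style."""
    return feats @ feats.T / feats.shape[1]

def masked_style_distance(style_feats, input_feats, style_mask, input_mask):
    """Compare style statistics only inside one semantic region
    (e.g. sky-to-sky), so sky texture cannot 'spill over' onto buildings.

    style_feats, input_feats: (C, N) flattened feature maps
    style_mask, input_mask:   (N,) 0/1 masks selecting one semantic label
    """
    gs = gram_matrix(style_feats * style_mask)   # style stats, masked pixels only
    gi = gram_matrix(input_feats * input_mask)   # input stats, masked pixels only
    return float(np.sum((gs - gi) ** 2))         # squared Frobenius distance

# Toy example: two feature channels over six pixels; "sky" = first three pixels.
style = np.array([[1., 1., 1., 0., 0., 0.],
                  [0., 0., 0., 1., 1., 1.]])
mask = np.array([1., 1., 1., 0., 0., 0.])
print(masked_style_distance(style, style, mask, mask))  # identical regions -> 0.0
```

Because each semantic label gets its own masked Gram comparison, the sky’s texture statistics never get averaged into the rest of the scene.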
“People are very forgiving when they see [style transfer images] in these painterly styles,” coauthor Kavita Bala told The Verge. “But with real photos there’s a stronger expectation of what we want it to look like, and that’s why it becomes an interesting challenge.”
Prior art in image style transfer has already hit the streets in app form; there’s an app called Prisma that can apply painterly styles onto images using AI. It’s like applying a Photoshop filter, but without having to get all the settings right yourself. MIT also released an app that let users turn photos into nightmare fuel. In this new work, the authors started with these same methods and added another layer of AI to ensure the semantic details of the original image are preserved in the output image. The resulting neural net can tell which parts of an image are which. In short, you get your same input photo back, but seamlessly altered to have a different visual style. It’s as if they drape the textures of the reference image atop the lines and edges of the input image.
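Conceptually, that extra layer amounts to adding a photorealism term to the usual content-plus-style objective. Below is a hedged sketch of such a weighted loss: the content and style terms follow standard Gram-matrix style transfer, while `photorealism_penalty` is a simple smoothness stand-in for the paper’s more sophisticated regularizer; all function names and weights here are illustrative assumptions:

```python
import numpy as np

def content_loss(out_feats, in_feats):
    """Keep the output's high-level features close to the input photo's."""
    return np.mean((out_feats - in_feats) ** 2)

def style_loss(out_feats, ref_feats):
    """Match channel correlations (Gram matrices) to the reference style."""
    gram = lambda f: f @ f.T / f.shape[1]
    return np.mean((gram(out_feats) - gram(ref_feats)) ** 2)

def photorealism_penalty(image):
    """Stand-in for a photorealism regularizer: penalize abrupt local
    color changes that make the result look painted rather than shot."""
    dx = np.diff(image, axis=1)
    dy = np.diff(image, axis=0)
    return np.mean(dx ** 2) + np.mean(dy ** 2)

def total_loss(out_feats, in_feats, ref_feats, image,
               alpha=1.0, beta=100.0, gamma=10.0):
    """Weighted sum: stay faithful to content, adopt the reference style,
    and remain photo-like. Weights are illustrative, not from the paper."""
    return (alpha * content_loss(out_feats, in_feats)
            + beta * style_loss(out_feats, ref_feats)
            + gamma * photorealism_penalty(image))
```

An optimizer would then adjust the output image to drive this combined loss down, trading off style strength against photorealism via the weights.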
The researchers are already thinking about other applications for photorealistic style transfer. “The question of how far you can push it is important,” said Bala. “Video is a logical thing for it to go to, and that, I expect, will happen.”
Source: https://www.extremetech.com/extreme/247152-new-application-layered-neural-nets-make-photoshop-freaky-good