Monday, April 11, 2016

Wed, April 13 - A Neural Algorithm of Artistic Style

A Neural Algorithm of Artistic Style. Leon A. Gatys, Alexander S. Ecker, Matthias Bethge. 2015.

implementation, arXiv

22 comments:

  1. This paper presents a mechanism by which the artistic style of a painting can be isolated and then combined with another subject image to generate a new, unique image that appears to have been made in the style of the style source while retaining the content and general arrangement of the subject image. This is accomplished by minimizing the weighted sum of two losses defined over three images: a content loss, the squared distance between the feature responses of a noise image and the subject image at a particular CNN layer, minimized by gradient descent so that the noise image evolves into one that generates the same response at that layer as the subject image; and a style loss, the mean-squared distance between the Gram matrices (the inner products of the vectorised feature maps at a given layer) of the noise image and the style source image, computed and weighted across several layers. By varying the weights of the two components, the contributions of the subject and the style to the final image can be traded off.
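
    As a concrete illustration, here is a minimal NumPy sketch of the two loss terms as the paper defines them (the helper names and the variables F, P, A, weights are mine, not the authors'; F is the N x M matrix of N vectorised feature maps of length M at one layer, and A holds the precomputed Gram matrices of the style image):

      import numpy as np

      def content_loss(F, P):
          # Squared-error distance between the feature responses of the
          # generated image (F) and the subject image (P) at one layer.
          return 0.5 * np.sum((F - P) ** 2)

      def gram(F):
          # Inner products between the N vectorised feature maps: N x N.
          return F @ F.T

      def style_loss(F_layers, A_layers, weights):
          # Mean-squared distance between Gram matrices, weighted per layer.
          total = 0.0
          for F, A, w in zip(F_layers, A_layers, weights):
              N, M = F.shape
              total += w * np.sum((gram(F) - A) ** 2) / (4.0 * N**2 * M**2)
          return total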

    Questions/Discussion
    1) It seems this process has been applied mostly to scene-based images - would it work on object-centric images? If so, could it also be used to produce interpolation-type images where an iconic image of, say, a cat is applied to another iconic image, say of a chair, to produce a cat-chair?
    2) Could this process be used to generate scenes realistically by taking existing scene images as the "style" source and iconic object images as the subject source?

  2. Summary:
    This paper presents a novel application of CNNs: artistic style transfer on a given image. The basic idea is that a CNN can separate the content and texture of an image well, using an idea similar to the paper "Understanding Deep Image Representations by Inverting Them". In detail, their method jointly minimizes a content loss (over one layer) and a style loss (over several layers), starting from a random initialization, with carefully designed loss functions and several hyperparameters.


    Question:
    Are there any failure cases for this style transfer? Are there results where style and content can't be well separated by this architecture?

  3. This paper's main contribution is finding a plausible way of splitting the "content" of an image from its style. Roughly, it goes like this: start with a random noise image and use the gradient to modify it until the distance between its feature responses and those of the original image at layer l is minimized. Something similar is done for style using Gram matrices (uncentered covariances of the filter responses).

    Q/D
    1. Average pooling: why would average pooling give stronger gradients than max pooling?
    2. Non-images. I think it would be interesting to see how well this would work for non-visual information (audio and text).

  4. This paper presents a Deep Learning approach for finding separate content and style representations of an image. The authors use CNNs trained on object recognition to get content representations of objects which are robust to object variations, and they use the correlations of these representations across the filter banks of each layer to compute a Gram matrix which represents the style of the image. Thus, they are able to abstract the content from the style as two separate representations. Using these, the authors demonstrate combining the content of one image with the style of another (an artwork) to generate novel and visually appealing images, a form of NPR (non-photorealistic rendering).
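
    As a sketch of how one might pull those per-layer feature maps out of a pretrained network, here is a possible setup with torchvision's VGG-19 (the layer indices are placeholders to be chosen by the user; the paper uses conv4_2 for content and conv1_1 through conv5_1 for style):

      import torch
      from torchvision import models

      vgg = models.vgg19(pretrained=True).features.eval()

      def get_features(image, layer_ids):
          # Run the image through VGG's conv stack, keeping activations
          # at the requested layer indices.
          feats, x = {}, image
          for i, layer in enumerate(vgg):
              x = layer(x)
              if i in layer_ids:
                  feats[i] = x
          return feats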

    Discussion:
    1. The authors only mention calculating the gradient of the Gram Matrix loss for lower layers. Is there any particular reason for this? The paper does not make the reasoning quite clear.

    2. Any particular reason why beta is at least 1000x alpha? In a convex combination the coefficients sum to 1, so this seems novel and counter-intuitive.

    Replies
    1. Found this: https://deepart.io/
      Great application built around this paper.

  5. The paper presents a way to blend the intermediate filter activations of a network when it is passed two dissimilar images (one having distinct visual content and the other a distinct visual style). The authors go on to argue that the approach presented is not totally implausible as a biological representation of visual style. VGG features are used for this, but no comparison to other networks or methods is performed.


    Discussion:
    1) What exactly do they mean when they describe their style representation as being 'stationary'?
    2) Just for fun, what would results look like if you switch the inputs (treating the photo as the 'style' source and the painting as the 'content' source)?

  6. The authors propose that a non-realistic image (like a painting) can be generated by combining the visual style of an actual painting (or rather, an image of one) with a subject image. A deep CNN can be used to separate out the style of a painting (akin to texture information drawn from several layers) and the content of the subject image (essentially the response of a higher layer of a classifier). This paper uses VGG-net, with the fully connected layers dropped and max-pooling replaced with average pooling.
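
    For the pooling swap mentioned above, a sketch of how it could be done on torchvision's VGG (my code, not the authors' implementation):

      import torch.nn as nn
      from torchvision import models

      vgg = models.vgg19(pretrained=True).features.eval()

      # Replace every max-pooling layer with average pooling of the same
      # geometry; the paper reports this gives more appealing results.
      for i, layer in enumerate(vgg):
          if isinstance(layer, nn.MaxPool2d):
              vgg[i] = nn.AvgPool2d(kernel_size=layer.kernel_size,
                                    stride=layer.stride)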

    Discussion:
    1. The results shown in the paper and the GitHub examples use a limited number of paintings. Also, all the paintings used are brush-painted. I'm curious whether this can be extended to matching the style of a pencil sketch or crayons. For example, can we combine a pencil sketch with a photo of Brad Pitt to get a pencil sketch of Brad Pitt?

    Replies
    1. I have one more question:
      2. In a previous class, the authors used optical flow to quantify the performance of a renderer. How can we quantitatively assess the paintings generated by this paper's method?
      Example: on GitHub, under Content / Style Tradeoff, there are 4 generated paintings of Brad Pitt. How can we say which painting is good and which is not?

  7. Abstract:

    The paper presents a neural network architecture to create a stylized image from a given content image and style image. They argue that deep neural networks learn enough information that they can be used to separate content and style information in an image, and this information can then be used to create stylized images. For content information they use an internal layer of VGG net, and for style information they use correlations between filter responses within several internal layers. They then create a loss function which is a linear combination of the two losses. The final results are quite impressive, as they are able to recreate images with style information borrowed from famous artworks.
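
    Continuing the NumPy sketch under comment 1, the linear combination is simply the following (alpha and beta are the content and style weights; the paper reports using alpha/beta ratios of 1e-3 or 1e-4):

      def total_loss(F, P, F_layers, A_layers, weights,
                     alpha=1.0, beta=1e3):
          # Weighted sum of the content and style terms defined earlier.
          return (alpha * content_loss(F, P)
                  + beta * style_loss(F_layers, A_layers, weights))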

    Discussion:

    1) I am a little unclear on how they capture the style information as correlations between layer activations. Could you please elaborate on that part?

    2) The results look amazing to me. Is there any other interesting work where people have used deep learning to create such artistic images?

  8. The paper presents a pretty cool method of image synthesis: combining the style representation of one image with the visual content of another. This is done by jointly minimizing two separate loss functions (one for style and the other for content similarity). The stylistic representation is obtained from the correlations of different filter responses.

    Discussion:

    1) The loss function for content similarity is calculated over a single layer only. Is there any reason for this?
    Shouldn't it be possible to control the level of content (or color) retained by experimenting with different layers for the content loss?

  9. This paper presents a method for generating images by combining the stylistic elements of one image and the visual elements of another. This is done by passing the two images through VGG net, then taking a white-noise image and varying it until it minimizes both the content loss (the mean-squared error at a layer l) and the style loss.

    Question:
    1. What would happen if they used a natural image as style input?

  10. In this paper, an approach to combining the content of natural images with the style of artistic paintings is shown. They use a pre-trained CNN to combine the two feature spaces, and show how combining feature spaces at different levels of the CNN affects the output.

    Questions:

    Are they learning any weights for the CNN, or just using it to compute the error?

    Why not train a CNN which learns the style of the artistic images and then use it to generate more natural artistic images?

  11. The folks used a CNN to separate image style from content. They then merged the style of one image (typically a painting) with the content of another (typically a photograph) to create a new picture that had the same content as the photograph, but the stylizations of the painting. They started with VGG net’s 16 convolutional layers and 5 pooling layers (ignoring the fully connected ones). They modify a random seed image to jointly minimize: 1) the difference in network features between the new image and the original content image, and 2) the difference between the style statistics, computed on top of VGG, of the new image and the original style image. They can adjust the relative weight of style vs. content this way.
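
    A sketch of that seed-image update loop in PyTorch (compute_total_loss is a stand-in for the alpha/beta-weighted objective; the paper uses plain gradient descent, while many re-implementations use L-BFGS as here):

      import torch

      x = torch.randn(1, 3, 224, 224, requires_grad=True)  # random seed image
      opt = torch.optim.LBFGS([x])

      def closure():
          opt.zero_grad()
          loss = compute_total_loss(x)  # stand-in: weighted content + style loss
          loss.backward()
          return loss

      for _ in range(50):  # each step re-evaluates the closure several times
          opt.step(closure)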


    It looks like the color palette is coupled with the style information. Can we undo that coupling to capture style alone?

  12. This paper presents an approach for taking the style of one image and applying it to another. Using the VGG net pretrained on object recognition, the authors separate the content and style of an image. This is accomplished by creating two loss functions. The style loss function is the mean-squared distance between the Gram matrices of the style image and a generated image, summed across several layers. The content loss function is the squared-error loss between the feature representations of the generated image and the original image at a single layer. The final loss function is a linear combination of the two. By minimizing it, the authors are able to produce an image that mixes the content and style of the two input images.
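
    For the Gram matrices mentioned here, a sketch of the computation on an actual 4-D activation tensor (my helper, assuming a batch of one):

      def gram_matrix(act):
          # act: (1, N, H, W) activations -> (N, N) Gram matrix of the
          # N vectorised feature maps.
          n, h, w = act.shape[1], act.shape[2], act.shape[3]
          F = act.reshape(n, h * w)
          return F @ F.t()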

    Questions:

    1. I'd like to see how this does on more varied types of images. Are there other places besides the paper and the github repo that have examples of this in action?

    2. I'm confused about the content loss function. Why is only one layer considered, and how does the final output change when a different layer is used?

  13. This paper proposes a method for style transfer. The method generates a final image that appears to have been created in the style of the first image with the composition of the second. It first generates a feature representation for the 'style' of the first image and the 'content' of the second. The problem is then modeled as an optimization problem in which a new image is created iteratively by minimizing a loss function over the content and style representations.

    1. Can we do this with videos?
    2. What other methods are used to generate artistic outputs?

  14. This paper proposes a novel approach to rendering unique images by incorporating the content of a photograph and the style of an artwork in a deep learning framework. The authors use the VGG network for this purpose, and describe how the style and content representations of an image can be separated using a CNN. The objective function minimizes the loss between a white-noise image and both the input image and the artwork. Correlations among the feature responses within each layer, accumulated over several layers, are used to capture texture features that match the style representation of the artwork.

    Questions-

    1. How does average pooling benefit compared to max pooling? I think average pooling will provide more spatial information per pixel, resulting in a smoother response.

    2. Other methods for similar purpose? How about object-centric images?

  15. The paper presents a CNN model that separates a painting's style from the content of an image, then applies the learned style to the content of a new scene. To capture texture information, they designed a feature space based on the correlations between filter responses in different layers. The final loss function is a combination of the two loss functions.


    If we transfer the styles of two different paintings by the same artist onto the content of another image, would the resulting images look similar? Is there a correlation between an artist's style across different paintings and what the network is learning?

  16. Summary
    This paper presents a method using CNNs (VGGNet) to extract content and style from images. They extract content directly from the layer outputs, and extract style from texture-capturing correlations of the filter responses at those outputs. The function minimized by backpropagation is the weighted sum of the content and style losses.
    Questions
    Have there been any works where other elements, besides content and style, have been extracted?

  17. The paper proposes a method for jointly combining the style and content of two different images to create a novel artwork. The authors use the VGG network to generate the content features, and then generate the style representation, which they claim can be separated from the content of an image. They then create the new image by jointly minimizing the distance of a white-noise image from the content and style representations.

    Discussion -
    Could you elaborate on how the new image is created by combining the style and content?

  18. The paper introduces a deep neural network based system that creates artistic images of high perceptual quality. This is done by combining the style representation from one image with the visual content of another. A feature space consisting of the correlations between the different filter responses in each layer of the network is used to obtain a stationary, multi-scale representation of the input image, which in turn captures its texture. The two images are passed through VGG net and the loss functions are minimized. The loss function minimized during image synthesis contains two well-separated terms for content and style.
    Question: How do correlations between the different filter responses represent style information?

  19. This paper presents a system to build high-quality artistic images using CNNs. The authors use CNNs to get representations of the content and the texture of an image; the CNN they use for this task is VGGNet. They define two loss functions, one for the content and the other for the stylistic representation, and then minimize them.

    Discussion:
    Since they use a CNN trained on object recognition tasks, why would they choose to deal only with scene-centric images and not also try object-centric ones?
    Can this work be extended to videos?

  20. The paper presents a method to generate artistic images given a content image and a painting whose style is to be mimicked. The authors argue that CNNs trained on object recognition are capable of capturing the high-level content of an image, so they use the outputs of the conv layers of the VGG model to capture the content of the image. To obtain a representation of the style, the authors use a feature space designed to capture texture information. They formulate image generation as a loss-minimization problem: starting with a random noise image, they run gradient descent until the noise image generates the same response at a particular CNN layer as the source image. For style, the distance between the Gram matrices of the noise image and the style image is minimized.
    Questions:
    1. What is meant by obtaining a “stationary” representation of style?
    2. Why does average pooling yield better results than max pooling?
