This paper presents an interactive visualization for exploring large image datasets by manipulating average images, putting forward a tool for both user-guided clustering and user-guided image alignment. By modifying the average image for a cluster of images within some category, the user tells the system which images should belong to that cluster, and the system responds by selecting images such that the average of the generated cluster is close to the user-specified average.
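To make the clustering idea concrete, here is a minimal sketch of how image selection against a user-edited average might look, assuming each image and the edited average are already encoded as feature vectors; the cosine-similarity threshold and the feature representation are my own assumptions, not the paper's exact method:

```python
import numpy as np

def cluster_around_edited_average(image_feats, edited_avg_feat, threshold=0.5):
    """Select the images whose features lie close to the user-edited average.

    image_feats:     (N, D) array, one precomputed feature vector per image
    edited_avg_feat: (D,) feature vector encoding the user's edited average
    threshold:       hypothetical cosine-similarity cutoff for cluster membership
    """
    img_norms = np.linalg.norm(image_feats, axis=1) + 1e-8
    avg_norm = np.linalg.norm(edited_avg_feat) + 1e-8
    sims = (image_feats @ edited_avg_feat) / (img_norms * avg_norm)

    # Images above the cutoff form the cluster, so the cluster's average
    # ends up close to the user-specified one by construction.
    members = np.where(sims >= threshold)[0]
    return members, sims
```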
Questions:
1) The patch features used are HOG features and color-space histograms; has anyone tried doing this with deep features?
2) How would this work for datasets of non-iconic images?
3) It seems like there's no easy way of manually generating good starting clusters. What about using class or attribute labels to generate initial clusters?
-Stefano
I think I've seen this tool used to show what the average person from specific ethnicities looks like; it went viral a few months ago. This paper outlines an algorithm and tool that can identify and select modes in collections of images taken from the internet. The tool lets the user give certain image regions higher weights in the weighted mode calculation, and the mode detection can be used to propagate a single annotation across the entire mode set.
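As a rough sketch of the annotation-propagation point, assuming the system stores how each mode member was aligned to the average (simplified here to a pure 2D translation, which is my simplification rather than the paper's warp model):

```python
import numpy as np

def propagate_keypoint(keypoint_xy, member_offsets):
    """Copy one keypoint, annotated on the mode's average image,
    into every member image of that mode.

    keypoint_xy:    (x, y) location of the keypoint on the average image
    member_offsets: dict mapping image_id -> (dx, dy) translation that was
                    applied to align the image to the average (assumed stored)
    """
    keypoint = np.asarray(keypoint_xy, dtype=float)
    propagated = {}
    for image_id, (dx, dy) in member_offsets.items():
        # Undo the alignment shift to place the keypoint back in the
        # original image's coordinate frame.
        propagated[image_id] = keypoint - np.array([dx, dy])
    return propagated
```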
Discussion: I wonder how this work could be applied to other data types besides images. For example, with health data, a query could reveal lifestyle modes for people with specific conditions.
https://feministphilosophers.files.wordpress.com/2011/02/averageface.jpg
^^ that's the picture, but it was made with a tool from another university.
The authors present in this paper a tool to interactively visualize large, internet-scale image datasets via the medium of average images. The novelty of their approach is that they favor a human-in-the-loop process, using human "brush strokes" and cues to adaptively weight each image in the dataset and then compute the weighted average image, made possible via a real-time user interface. The tools they provide the user are 1. a coloring brush to specify desired colors, 2. a sketching brush to highlight which parts of the image are more important, 3. an explorer brush to select which region of an average image the AverageExplorer should find more matches for, and 4. a tool to interactively cluster images based on the average image after a set of user edits. They show qualitatively that AverageExplorer gives sharper average images than various other contemporary techniques, and they also demonstrate via user studies the representative power of their technique and its ease of use.
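For readers who want the core operation spelled out, here is a minimal sketch of the weighted-average step, assuming the brush interactions have already been turned into one scalar weight per roughly aligned image; how those weights are derived is the part the paper's scoring machinery handles:

```python
import numpy as np

def weighted_average_image(images, weights):
    """Compute a weighted average image.

    images:  (N, H, W, 3) float array of roughly aligned images
    weights: (N,) nonnegative per-image weights derived from the user's edits
    """
    w = np.asarray(weights, dtype=float)
    w = w / (w.sum() + 1e-8)                     # normalize weights to sum to 1
    return np.tensordot(w, images, axes=(0, 0))  # sum_i w_i * I_i, shape (H, W, 3)
```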
Discussion:
1. The video example for online shopping makes a great argument for using this technique with semantic segmentation, not to segment the object itself but to find all the images that satisfy the segment constraints. Are there any examples of work that do this?
2. Are the clusters being generated manually with the user clicking the + button each time? Or is there some sort of pre-clustering that is then refined? This was not particularly obvious to me.
Summary:
This paper presents an interactive image explorer based on the average image of a user-selected category. HOG features at different spatial regions are pre-computed for all images in the database to make real-time interaction possible. The coloring and sketching brushes let the user explore similar objects with slightly different colors or shapes, while the explorer brush, by searching the database specifically at local patches, lets the user explore images across categories.
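Here is a hedged sketch of the offline pre-computation described above, using scikit-image's HOG on grayscale crops; the image size, cell size, and the single global descriptor per image are illustrative simplifications, not the paper's exact per-region setup:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog
from skimage.transform import resize

def precompute_hog_features(images, size=(80, 80)):
    """Offline step: one HOG descriptor per image, so interactive edits
    only need fast dot products against a fixed feature matrix.

    images: list of (H, W, 3) arrays
    size:   common resolution the images are resized to (illustrative choice)
    """
    feats = []
    for img in images:
        gray = rgb2gray(resize(img, size, anti_aliasing=True))
        feats.append(hog(gray, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2), feature_vector=True))
    return np.stack(feats)  # (N, D) matrix queried at interaction time

def score_images(hog_matrix, edit_descriptor):
    """Interactive step: similarity of every image to a user edit expressed
    in the same HOG space (a simplification of the paper's per-patch scoring)."""
    return hog_matrix @ edit_descriptor
```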
Questions:
It's true that using HOG features helps with real-time interaction, but given that the paper was published in 2014, what were the main concerns that kept them from using CNN embeddings, since those can be pre-computed as well?
Abstract:
The paper presents a tool to explore and visualize large image datasets. The technique used is to visualize averages of the images. Simple averaging of random images doesn't give the user the power to explore, so the paper gives users the ability to define constraints through a user interface. The weighted average of the images is calculated under these constraints in real time and shown to the user. The tool also helps users explore specific local regions of the images.
Discussion:
1) Are there examples of conv features being used in this kind of work?
2) Any other applications besides the ones mentioned in the paper?
This paper discusses an interactive user interface that visualizes average images from large image databases such as Google. The tool allows the user to search based on words, color part of an image, draw line strokes, or sharpen small areas. Averaging is done in real time as the user interacts with the tool, and it is a weighted average where the weights depend on how well each image matches the user's inputs. Unlike older average-image algorithms, which didn't align the images before averaging, this paper proposes max pooling, which is robust to small misalignments, and performs image warping to align the images.
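A toy sketch of the max-pooling idea as I read it: rather than scoring a user stroke against a single patch location, the best response within a small spatial neighborhood is kept, which tolerates small misalignments. The window size and the dot-product scoring here are assumptions, not the paper's exact formulation:

```python
import numpy as np

def max_pooled_match(feature_map, stroke_feat, center, radius=2):
    """Score one image against a user stroke while tolerating small shifts.

    feature_map: (H, W, D) per-cell descriptors (e.g. HOG cells) of an image
    stroke_feat: (D,) descriptor of the user's stroke at one cell
    center:      (row, col) cell where the stroke was drawn
    radius:      how many cells of misalignment to tolerate (assumed value)
    """
    H, W, _ = feature_map.shape
    r0, c0 = center
    best = -np.inf
    for r in range(max(0, r0 - radius), min(H, r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(W, c0 + radius + 1)):
            # Max-pool the dot-product response over the neighborhood.
            best = max(best, float(feature_map[r, c] @ stroke_feat))
    return best
```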
This paper presents a tool to interactively visualize large image datasets based on the average image of a given category. The tool computes weighted averages where the weights reflect the input of the user. This is all done in real time, and max pooling is used to make the algorithm more robust to alignment issues.
Discussion: Conv features?
This paper discusses an interactive framework that allows users to visualize a large image collection using average images. The user of the system is given control over constraints to view the weighted average image with real-time updates. The interface also tackles the problem of misaligned objects to prevent blurring of the averages.
Discussion:
How would multi-scale exploration help in visualizing a dataset with complex concepts?
This paper presents a mechanism that allows a user to quickly preview a large set of images by summarizing the set as an automated, weighted average built around user-selected metrics. As the user continues to use the system to summarize a particular data collection, the average is updated to take the user's input into account, treating their choices as constraints on the average calculation.
Questions/Discussion:
I'm interested in the application of deep features to this problem, in place of the HOG features and histograms of colors.
-John Turner
Summary:
This paper presents an interactive framework for retrieving average images that match the user's criteria. These average images are produced using weights calculated from the user's constraints: strokes, brushes, and warps. This was shown to be useful for online shopping, interactive portraits, etc.
Questions:
How is s_m calculated in the seed patch ranking?
This paper provides a novel interactive framework for generating average images from large data collections and visualizing them effectively. The use of average images has been prominent in the field of art: an average captures the commonalities and differences between images of the same category, producing a sharp yet dreamy-looking image. The paper introduces a tool that gives users the power to generate average images as they like. The color brush, sketch brush, and explorer brush are the tools users employ to sharpen, focus, and change the background color of the average images. The software also sorts and aligns the images in real time. Different applications such as online shopping and social-media portraits are explained. Keypoint propagation is another application the authors mention where this tool can be utilized.
Questions:
Has anyone approached this problem using deep image descriptors as features? It would be nice to see how that performs.
The authors present an interactive tool to visualize large, internet-scale datasets as average images. They use a human-in-the-loop process in which user "brush strokes" and cues adaptively weight each image in the dataset to compute the weighted average image through a real-time user interface. Applications include online shopping, interactive portraits, etc.
Discussion:
What exactly are the conv features used here?
This paper presents an interactive tool that helps users explore a large image collection through average images, which leads to user-guided clustering and alignment. As the user makes strokes, the weights of the images shift and the average image is recomputed in real time. The authors provide a few brush tools with different purposes: the coloring and sketching brushes bring out more visible detail in the average, while the explorer brush provides a way to uncover more hidden information.
The objects seem to be roughly centered in the images. How does this work with non-iconic images?
The paper introduces a framework that enables users to visualize a large image collection by using a weighted average of the collection, where the weights reflect user-indicated importance. The user interactively edits the average image using various types of brushes and warps, and with each edit a new constraint is added to update the average. The proposed method is a novel way to perform user-guided clustering and user-guided alignment on visual data.
Questions:
1. How can the proposed framework be extended so that it works on a scene-centric dataset as well?
2. What are the applications of the proposed visual data summarization method other than the ones mentioned in the paper?
The paper proposes an interactive tool to find visually similar images in large databases. They use the concept of multi-mode data clustering to separate the data into patches chosen by the user. The spatial location of the mouse, together with the four tools, helps guide the process.
They utilize mid-level discriminative patches to find nearest neighbors and then align them with each other. The results are impressive, with a user study showing that the images produced from the dataset are visually similar to human expectations.
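To illustrate the alignment step, here is a simplified stand-in that searches only integer cell translations between an image's descriptor grid and the current average; the real system's nearest-neighbor patches and warping are richer than this, so treat it purely as a sketch:

```python
import numpy as np

def best_alignment_offset(avg_feat_map, img_feat_map, radius=3):
    """Find the small translation that best aligns an image's descriptors
    to the current average.

    avg_feat_map, img_feat_map: (H, W, D) descriptor grids
    radius: maximum shift, in cells, searched in each direction (assumed)
    """
    H, W, _ = avg_feat_map.shape
    best_score, best_offset = -np.inf, (0, 0)
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            # Overlapping region of the two grids under the shift (dr, dc).
            r0, r1 = max(0, dr), min(H, H + dr)
            c0, c1 = max(0, dc), min(W, W + dc)
            a = avg_feat_map[r0:r1, c0:c1]
            b = img_feat_map[r0 - dr:r1 - dr, c0 - dc:c1 - dc]
            score = float((a * b).sum())
            if score > best_score:
                best_score, best_offset = score, (dr, dc)
    return best_offset
```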
Questions:
Most of the images, I feel, are heavily influenced by color; is there a way to determine the context being averaged?
The images also seem to have a clear foreground/background separation; what happens with clutter?
This paper describes an interactive tool for image exploration that helps visualize images from large datasets. It makes use of average images to capture the similarities and differences between images. The average is continuously updated based on the user's input. The user has the option to search, partially color, sharpen, or apply the sketch brush to the image.
Discussion:
Have any deep learning features figured in this kind of work?
Would this work well on an MS-COCO-like dataset?
The paper presents a novel method to visualise a large dataset by taking its average image. The paper is inspired by the work of Jason Salavon, who computed the average image of a particular theme by manually aligning the images. The authors automate this process by providing an interactive tool that allows users to search, partially color, sketch, and sharpen the given average image. For the partial-coloring and sketch options, each image in the database is reweighted according to its dot product with the user input.
The authors demonstrate various applications of their tool, including interactive exploration and visual data representation.
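A small sketch of the dot-product reweighting mentioned above, assuming both the user edit and each database image live in the same feature space; clipping negative scores to zero before normalizing is my own choice to keep the weights nonnegative, not a detail taken from the paper:

```python
import numpy as np

def reweight_images(image_feats, edit_feat):
    """Turn dot-product scores against a user edit into per-image weights.

    image_feats: (N, D) feature matrix, one row per database image
    edit_feat:   (D,) feature vector encoding the user's color/sketch edit
    """
    scores = image_feats @ edit_feat          # raw dot-product responses
    weights = np.maximum(scores, 0.0)         # keep only positive evidence
    return weights / (weights.sum() + 1e-8)   # normalize for the weighted average
```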
Question:
The average image reminds me of the visualised outputs of the different layers of a CNN that we read about in a previous paper. What could we expect if a CNN were trained on these average images?