Mid-Level Feature Detection in Abstract Images, Poster 36
Abstract
Can abstract images be described and categorized? Humans use concepts like shape, proximity, and symmetry – can computers learn to recognize similar properties? Most image recognition research focuses on either high-level features, such as whether an image contains a certain object (e.g. a cat), or low-level features, such as edges. However, neither approach is particularly helpful for describing abstract images. Instead, this research focuses on the extraction of mid-level parameters such as roundness, messiness, or blurriness.
Multiple neural networks are trained to predict the values of these mid-level features from thousands of example images. Each network generates values for a single parameter or for multiple related parameters, leveraging the similarity between them. The example images are generated with the Processing toolkit specifically to define each parameter. Compared to using existing images, this gives the author much more control over the definitions of the parameters the networks learn. Multiple training sets are created for each parameter to broaden the model's definition of it, improving performance on completely new examples. Values for these mid-level aesthetic features could open up new opportunities in the generation of abstract images or art.
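As a rough illustration of this setup, the sketch below trains a small convolutional network to regress one or more mid-level parameter values (e.g. roundness and messiness) from images. The framework (PyTorch), the architecture, and names such as MidLevelFeatureNet are assumptions made for illustration only; the abstract does not specify the implementation, and the placeholder tensors stand in for the Processing-generated training images and their known parameter values.

```python
import torch
import torch.nn as nn

class MidLevelFeatureNet(nn.Module):
    """Small CNN that regresses mid-level parameters (e.g. roundness) from an
    abstract image. Architecture details are illustrative, not from the poster."""

    def __init__(self, n_parameters: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One output per mid-level parameter; a single network can predict one
        # parameter or several related ones, as described in the abstract.
        self.head = nn.Linear(64, n_parameters)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


def train_step(model, images, targets, optimizer, loss_fn=nn.MSELoss()):
    """One regression step: images would be procedurally generated examples
    (e.g. rendered with Processing); targets are their known parameter values."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = MidLevelFeatureNet(n_parameters=2)  # e.g. roundness and messiness
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Placeholder batch standing in for generated training images and labels.
    images = torch.rand(8, 3, 128, 128)
    targets = torch.rand(8, 2)
    print(train_step(model, images, targets, optimizer))
```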
Authors
- Hans Goudey '19.5
Topic Area
Science & Technology
Session
P2 » Poster Presentations: Group 2 and Refreshments (2:45pm - Friday, 20th April, MBH Great Hall, 331 and 338)