# Testing for beauty

Posted 4 years ago
What do you think of the idea of automatically judging whether a piece of data is beautiful? This could mean the data in an image (`ImageData`), the result of a computation (e.g. `CellularAutomaton`), or anything really, though I am thinking primarily of a list or an array of numbers.

My first thought was that there are many filters for image processing, but I don't know which might be useful. The next thing that comes to mind is mathematical transforms. For example, taking the Fourier or Hadamard transform, you expect the coefficients to decay; if they don't, that would not be nice.

This code deletes the constant term and computes a rough measure of the variance, using `Mean` as a shortcut for counting the 0's and 1's (the coefficients closer to the min and to the max, respectively) without knowing the length or dimension. (Note that `Fourier` does not assume the size is a power of 2, but the Hadamard transform does.)

```
FourierBeauty[list_] := 
 Mean[1. - Round[Rescale[Abs[Rest[Flatten[Fourier[list]]]]]]]
```

Maybe for an image this might not be bad. Here is what it picks out of the `ExampleData` test images:

```
Grid[{#, ExampleData[#]} & /@ 
  MaximalBy[ExampleData["TestImage"], 
   FourierBeauty[
     ImageData[
      Binarize[
       ImageResize[ColorConvert[ExampleData[#], "Grayscale"], {64, 64}]]]] &], 
 Frame -> All]
```

And here are the CAs it likes the most, summing the score over 100 random initial conditions:

```
MaximalBy[Range[0, 255], 
 Sum[FourierBeauty[
    CellularAutomaton[#, RandomInteger[1, 2^8], {{0, 2^8 - 1}}]], {100}] &]
```

which gives `{1, 3, 5, 17, 57, 87, 119, 127}`.
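Since the post mentions the Hadamard transform as an alternative, here is a sketch of what a Hadamard analogue of the same score might look like. The name `HadamardBeauty` is my own, not from the post; it assumes the flattened input length is a power of 2, and it builds the transform explicitly with `HadamardMatrix` rather than using a fast transform:

```
(* Sketch only: same "discard the constant term, count large vs. small
   coefficients" idea, but with the Hadamard transform. Assumes the
   flattened data has power-of-2 length, as Hadamard requires. *)
HadamardBeauty[list_] :=
 With[{v = N@Flatten[list]},
  Mean[1. - Round[Rescale[Abs[Rest[HadamardMatrix[Length[v]] . v]]]]]]
```

For example, `HadamardBeauty[RandomInteger[1, 64]]` scores a random 0/1 list of length 64. Since the matrix product is O(n^2), this is only practical for small inputs; a fast Walsh-Hadamard transform would be needed for anything large.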
Posted 4 years ago
As a general concept, it sounds like you're describing something similar to what is called computational aesthetics. I haven't kept up with the overall progress, but I have worked on two specific issues of interest to me (part of my job is being a professional photographer and digital imaging consultant):

1. Auto-cropping. There are a variety of research papers on how to do this, but each one seems to scaffold yet more heavy-duty math on top of the last. I wanted to see if there was a way to do this more "simply" using a DNN. I played with it a bit, but it wasn't obvious to me how to encode the data (which is effectively a transform rather than a single sample), and the available training data also didn't fit very well with the way I wanted to model it. Part of my problem is that I don't know much about parallel DNNs or RNNs, and I think something like that is needed, so back to virtual school for me.

2. Choosing among a series of photos to pick the best one. I started on this nearly 20 years ago using analysis functions in Mathematica and didn't come up with anything that could be generalized. It seems like GPUs and DNNs would provide much more powerful possibilities for revisiting this.

Some of the teams doing a lot in this area are the obvious suspects: Google (Photos), Adobe (Research), and some of the other cloud vendors with millions (or billions) of photos. Definitely something I'm interested in.
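On the auto-cropping point, short of a trained DNN, a very crude baseline can be built from saliency: crop to the bounding box of the most salient connected region. The sketch below (my own toy example, not anything from the research papers mentioned above) uses `ImageSaliencyFilter` and treats the largest bright component of the thresholded saliency map as the subject:

```
(* Crude saliency-based auto-crop sketch, not a learned cropper:
   threshold the saliency map, keep the largest component, and trim
   the image to that component's bounding box. *)
autoCrop[img_Image] :=
 Module[{mask, box},
  mask = SelectComponents[Binarize[ImageSaliencyFilter[img]], "Count", 1];
  box = First[Values[ComponentMeasurements[mask, "BoundingBox"]]];
  ImageTrim[img, box]]
```

Try it with, say, `autoCrop[ExampleData[{"TestImage", "Sailboat"}]]`. Obviously this ignores composition rules, aspect-ratio constraints, and everything a learned model would capture, but it gives something to compare a DNN against.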