Object Differentiation in Image

Posted 11 years ago
I posted the following question on Stack Exchange.
I am trying to differentiate the <> elements of the image; although a method was identified on the site, it is still slow, and perhaps another avenue of exploration is available.

I wanted to see if MorphologicalComponents could help. It quickly splits the image into the individual letters (except where letters touch each other), but now I'm stuck on whether there is a method that will match or cluster the resulting images together.

 i = Import[""];
 bwimg = Binarize@i;
 (* squared Sobel-style edge magnitude *)
 myedges[v_] := ((ImageData@
        ImageConvolve[#, {{-3, -10, -3}, {0, 0, 0}, {3, 10, 3}}])^2 +
      (ImageData@
        ImageConvolve[#, {{-3, 0, 3}, {-10, 0, 10}, {-3, 0, 3}}])^2) &@
    Binarize@v;
 mc = MorphologicalComponents[bwimg];
 bb = ComponentMeasurements[mc, "BoundingBox"];
 itemCount = Last@First@ComponentMeasurements[mc, "LabelCount"];
 myletters =
   Table[Tooltip[Image@myedges[ImageTrim[bwimg, x /. bb]], x], {x, itemCount}]
 Table[Module[{sort =
     Sort[ComponentMeasurements[mc, property], #1[[2]] > #2[[2]] &][[All, 1]]},
   Tooltip[Image@myedges[ImageTrim[bwimg, # /. bb]], #] & /@ sort],
  {property, {"EulerNumber", "Eccentricity", "Orientation", "Circularity", "Complexity"}}]

Any ideas on how to take it from here (if this is indeed the correct path) would be welcome.
POSTED BY: Diego Zviovich
8 Replies
Hello Diego,
I noticed that you haven't received a response in several days. Perhaps it is because you haven't specified what criteria you wish to use for matching or clustering. Is your objective to collect the images into groups of the same letter?

POSTED BY: W. Craig Carter
Posted 11 years ago
Dear Craig, that is correct. The idea would be to tally each type of letter.

Something like
Tally[{"a", "b", "c", "c", "d", "b", "c", "e"}]
(*{{"a", 1}, {"b", 2}, {"c", 3}, {"d", 1}, {"e", 1}}*)

The desired outcome would be to establish the frequency of occurrence of each letter in the alphabet pasta box.
POSTED BY: Diego Zviovich
Hello Diego,
These are just ideas to try, but not solutions.  If you find a solution using this suggestion, perhaps you can post it here for others.

For each morphological component, construct a series ImageRotate[image, theta] over a range of angles, and try TextRecognize on each rotated image. If you get any matches, tally them and accept the most common recognition.
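A minimal sketch of that rotate-and-tally loop, assuming letterImage is one trimmed component (for example, an entry of myletters from the first post); the angle step Pi/36 is an arbitrary choice:

    guesses = DeleteCases[
       Table[TextRecognize[ImageRotate[letterImage, t, Background -> White]],
        {t, 0, 2 Pi, Pi/36}], ""];
    (* accept the most common nonempty recognition, if any *)
    If[guesses === {}, Missing["NotRecognized"], First@Commonest[guesses]]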

You may have to do something with ImageForestingComponents, WatershedComponents, or MorphologicalBinarize. Also, you may wish to Binarize your original image instead of looking only at the edges.
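For the touching-letters case, one rough WatershedComponents sketch (reusing bwimg from the first post) might be:

    dist = ImageAdjust@DistanceTransform[bwimg]; (* rescale distances to 0..1 *)
    wc = WatershedComponents[ColorNegate@dist];  (* basins split touching blobs *)
    Colorize[wc]                                 (* visualize the segmentation *)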
POSTED BY: W. Craig Carter
I am not sure if you have seen this new function called Classify; look through the examples in the linked article, where there is a handwriting-recognition application. It's based on machine learning, and I wonder if the idea can be used here too. We would segment the image into small parts that cut out every single letter and then run Classify on the set of small images. This is future-functionality documentation, but I wonder if you can already play with it on the R-Pi. Some examples are mentioned here too:

POSTED BY: Sam Carrettie
Posted 11 years ago
Hi Sam, I tested the concept with Classify yesterday on the RPi. It didn't work; I gather that is because the different letters are rotated at different angles. What I would like to try later is whether we can use it to recognize whose handwriting a letter belongs to.
POSTED BY: Diego Zviovich
Posted 11 years ago
Hi WCC. No success with TextRecognize.

I will try WatershedComponents, etc., next.
 bb = ComponentMeasurements[mc, "BoundingBox"];
 itemCount = Last@First@ComponentMeasurements[mc, "LabelCount"];
 myletters =
   Table[ColorNegate@Image[ImageTrim[bwimg, x /. bb]], {x, itemCount}];
 (* taking the letter B to see if it is recognized *)
 {#, TextRecognize[#]} & /@
  Table[ImageRotate[myletters[[8]], \[Alpha], Background -> White],
   {\[Alpha], Pi/72, 143/72 Pi, Pi/72}]
 (* no success *)
POSTED BY: Diego Zviovich
Hello Diego, Classify can recognize rotation but needs to be trained sufficiently. For example, train on finely stepped rotations:
learn = (# -> "A" & /@ Table[ImageRotate[Rasterize[Style["A", 20]], k,
      Background -> White], {k, 0, 2 Pi, .3}])~
  Join~(# -> "B" & /@ Table[ImageRotate[Rasterize[Style["B", 20]], k,
      Background -> White], {k, 0, 2 Pi, .3}])

cfun = Classify[learn];

Determine the letter from a random rotation:
test = ImageRotate[Rasterize[Style[RandomChoice[{"A", "B"}], 20]], 2 Pi #, Background -> White] & /@ RandomReal[1, 10]

In[] = cfun[test]
Out[] = {"A", "A", "B", "A", "B", "B", "A", "A", "B", "B"}

So I imagine cutting up a letter and fine-step rotating it for the training set could possibly do the job. Other details may be important, such as the training-image resolution.
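If that works, the whole pipeline might reduce to this hypothetical sketch, reusing bwimg, bb, and itemCount from the first post and assuming cfun has been trained on every letter of the alphabet:

    letters = Table[ImageTrim[bwimg, x /. bb], {x, itemCount}];
    Tally[cfun /@ letters] (* letter frequencies, as in the Tally example above *)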
POSTED BY: Vitaliy Kaurov
Posted 11 years ago
Hi Vitaliy, very good point! I failed to rotate each image to provide more training material; the set was too small, as you indicate. I'll prepare a more robust training subset, run it against an out-of-sample image, and then report back here.
POSTED BY: Diego Zviovich