Orthogonalize and scale image

Posted 9 years ago

Hi,

I have taken on the task of figuring out how to take an image (like the attached skewed example) and 'deskew' it to look like the 'corrected' image. An 'eyeball' procedure would be an iterative combination of stretching and skewing until the 4 squares in the picture have the appropriate pitch and orthogonality. BUT, there has to be a more elegant way to do this. I have looked into the Gram-Schmidt and Householder procedures, but can't think how to apply them to an image. Also, one requirement is that I:

  • Keep track of all transformations so that any new image is generated with the transformations built in, i.e. corrected.

My goal is to select three points and then apply an algorithm that will orthogonalize and scale the image to the known pitch of the grid and then spit out critical transformation values that can be applied during acquisition.
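
To make this concrete, the kind of computation I have in mind would look roughly like this (a sketch only, not working code; p1, p2, p3 stand for the three picked points and pitch for the known grid spacing):

picked = {p1, p2, p3};                   (* measured positions of three picked corners *)
ideal = pitch {{0, 0}, {1, 0}, {0, 1}};  (* where those corners should sit on the grid *)
(* solve ideal == m.pt + t for the 2x2 matrix m and the translation t, using p1 as origin: *)
m = Transpose[LinearSolve[{picked[[2]] - picked[[1]], picked[[3]] - picked[[1]]},
    {ideal[[2]] - ideal[[1]], ideal[[3]] - ideal[[1]]}]];
t = ideal[[1]] - m.picked[[1]];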

Please let me know if this is not clear.

Thanks in advance mrphud

Attachments:
POSTED BY: Cole Pierson
15 Replies
Posted 9 years ago

Update:

A linear model fit was used to subtract out the tilt. From there I used a subset of the data to isolate just one of the square pillars and then used that as the kernel for correlation with the original image. The result is a correlation contour map. I then use MaxDetect and Position to find the peaks.

(* use one of the square pillars as the correlation kernel: *)
kernel = correcteddata[[340 ;; 500, 60 ;; 100]];
(* correlate the kernel image against the filtered data image: *)
correlation = ImageCorrelate[filterdataimage, kernelimage] // ImageAdjust
correlationdata = Transpose[ImageData[correlation]];
(* binarize the peaks and read off their positions: *)
bin = MaxDetect[correlationdata]; pos = Position[bin, 1]

It seems to give me what I need. Some pretty pictures too!
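
For completeness, the tilt removal mentioned above was just a plane fit; roughly along these lines (a sketch; data stands for the raw height array, and the exact fit I used may differ):

(* fit a plane z = a + b x + c y to the raw height array and subtract it: *)
pts = Flatten[MapIndexed[{#2[[2]], #2[[1]], #1} &, data, {2}], 1];
fit = LinearModelFit[pts, {x, y}, {x, y}];
correcteddata = Table[data[[i, j]] - fit[j, i],
   {i, Length[data]}, {j, Length[First[data]]}];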

POSTED BY: Cole Pierson
Posted 9 years ago

Hi Henrik,

In your last post, you selected the corners where the image transitions to the blue background. In a real case I wouldn't have that luxury. A real image would have the skewing of the above image, but the outline would be square. Basically, the features in the image are skewed, but the borders of the acquired image itself are square.

So, I would need to select features within the image in order to deskew it. What I have done previously is use my eyes (the ultimate image processor) to locate corners of the squares and use those points in my orthonormalization algorithm.

How would I go about finding the corners of the features in the image? They are part of my reference sample. I can upload the data array if that would help. Note that the data is on a slightly sloped plane so the image is not flat. See attached.

Thanks mrphud

Attachments:
POSTED BY: Cole Pierson
Posted 9 years ago

Hello All,

Just an update. I have determined all of the appropriate matrix operations to orthonormalize the original image and the matrix transformations needed to adjust the original transformation matrix.

Now I am at a point where I think the previous responses are more applicable.

How would I locate at least three common points in the same image? To put this into context, I would like to find the position of, say, the lower left corner of each of the squares in the original distorted image. Knowing these positions, I can apply the matrix transformations I figured out.

The goal is to acquire an image and automatically correct it with my algorithm.

I should amend that I have the original data as an array of heights.

Thanks mrphud

POSTED BY: Cole Pierson

I guess my last post is at least an outline of an answer to your question. What did you try? Where is your code?

Regards -- Henrik

POSTED BY: Henrik Schachner

Well, then I understand your problem of not having a reference image. Maybe you can solve it like this (I am now working with the skewed image only!):

  • I first cropped the image to avoid any frame/axis in the image; to compensate, a small image pad is used (with an appropriate image acquisition this should not be necessary):

[image: the cropped and padded skewed image]

  • then I made kind of a mask of the image and detected the corners:

    (* setting all blue background pixels to black, then binarizing: *)
    bckgrnd = {0.596078431372549`, 0.7686274509803922`, 0.9137254901960784`};
    imgskd1 = Binarize[
       ColorConvert[ImageApply[If[# == bckgrnd, {0, 0, 0}, #] &, imgskd0], "Grayscale"],
       0.001];
    ipoints = ImageCorners[imgskd1, MaxFeatures -> 4];
    Show[imgskd1, Graphics[{PointSize[.05], Red, Point /@ ipoints}]]
    

[image: the binarized mask with the four detected corners marked in red]

  • finally one has to bring these points into a defined order so that they can be associated with reference points:

    (* ordering these points in counterclockwise fashion: *)
    middle = Mean[ipoints];
    posVec = # - middle & /@ ipoints;
    order = Ordering[ArcTan @@@ posVec];
    opoints = ipoints[[order]];
    (* defining reference points in the same order: *)        
    refpoints = {{0, 0}, {500, 0}, {500, 500}, {0, 500}};
    (* ... and performing the correction *)        
    gtf = Last@FindGeometricTransform[refpoints, opoints];
    ImagePerspectiveTransformation[imgskd, gtf, DataRange -> Full]
    

The result is:

[image: the corrected (deskewed) image]

Regards -- Henrik

POSTED BY: Henrik Schachner

Dear Henrik,

very nice solution indeed!

Best wishes,

Marco

POSTED BY: Marco Thiel

Dear Marco, thanks a lot for your nice compliment! Best wishes -- Henrik

POSTED BY: Henrik Schachner

Really nice.

The only thing I can suggest is, for purposes of working with hardware that does not have Mathematica, possibly a singular value decomposition can be useful. This is implemented in LAPACK and so is available on many platforms. What I cannot say, unfortunately, is how best to apply it to the problem at hand (a bit too far from my expertise). But I'm pretty sure some form of SVD, on normalized pixel values or perhaps on a matrix given by the "corner" positions, should be applicable for finding the appropriate shear.
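
For illustration only (not a worked-out solution), the factorization itself would look like this, with lin standing for the 2x2 linear part of whatever correction matrix is found:

(* SVD: lin == u.w.ConjugateTranspose[v], with u and v orthogonal and w diagonal; *)
(* u and v are effectively rotations, and w holds the scalings along the principal axes *)
{u, w, v} = SingularValueDecomposition[N[lin]];
Chop[u.w.ConjugateTranspose[v] - lin]   (* sanity check: should give the zero matrix *)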

POSTED BY: Daniel Lichtblau
Posted 9 years ago

Hi Henrik,

Yes, I do have access to Mathematica and I have been using it for my matrix operations. The transformation matrix does seem close, but this approach relies on a reference. And I only get the reference after I correct the image. Once it's corrected, I don't need this operation.

With that said, it is still good information for the future.

At this point, I have determined the set of operations that are needed to orthogonalize the image and how to scale them appropriately. I am now trying to figure out how to translate these steps so that they work at an arbitrary initial image angle, meaning that the original image is captured at, say, 30 degrees and then rotated to zero so I can perform the operation. I basically need to 'undo' the original rotation so that the operations I have just performed are relevant to the arbitrary rotation angle.

I thought that this would be a change of basis by an arbitrary rotation angle, but that doesn't seem to give me the desired results, so here I am.
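
In matrix terms, the change of basis I tried amounts to something like this (a sketch; m stands for the correction matrix determined at zero degrees and theta for the acquisition angle):

(* conjugate the zero-degree correction by the acquisition rotation, *)
(* so that it acts in the rotated frame: *)
r = RotationMatrix[theta];
mRotated = r . m . Inverse[r]   (* Inverse[r] == Transpose[r] for a pure rotation *)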

Thanks mrphud

POSTED BY: Cole Pierson
Posted 9 years ago

Wow. Thanks Henrik! That looks really good. I will not be able to run this in Mathematica, so it is important that I determine a general algorithm for unskewing the image.

Thanks again

mrphud

POSTED BY: Cole Pierson

I will not be able to run this in Mathematica ...

Well, I am assuming that you have at least some access to Mathematica, because you are posting on the Wolfram community platform!

[image: the 3x3 matrix of the TransformationFunction gtf, with the linear part and the translation indicated]

This way you can get the information for your specific image correction, which can then be used by whatever other program you are doing the actual correction with.
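
In code the extraction amounts to roughly the following (gtf being the TransformationFunction returned by FindGeometricTransform):

mat = TransformationMatrix[gtf];       (* the full 3x3 homogeneous matrix *)
linearPart = mat[[1 ;; 2, 1 ;; 2]];    (* scaling / rotation / shear part *)
translation = mat[[1 ;; 2, 3]];        (* translation vector *)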

Regards -- Henrik

POSTED BY: Henrik Schachner
Posted 9 years ago

Henrik,

Thank you for showing how to extract the parts of TransformationFunction which correspond to translation and linear transformation. Could you please also explain how to deal with the bottom row of the TransformationFunction's matrix?

POSTED BY: Alexey Popkov

Hi Alexey,

I do not have any idea about the meaning of this bottom row! Actually, before I posted the above "explanation" I searched the documentation and StackExchange, but I could not find anything; so I was hoping I could get away without being asked that obvious question. I would like to know it myself! Surely someone from this community can help ...

Regards -- Henrik

POSTED BY: Henrik Schachner

Hi Cole,

the function FindGeometricTransform seems to be perfect for this task; with ImagePerspectiveTransformation the resulting transformation can then be applied:

imgskd = Import["skewed.png"];
imgcor = Import["corrected.png"];
gtf = Last@FindGeometricTransform[imgcor, imgskd];
ImagePerspectiveTransformation[imgskd, gtf, DataRange -> Full]

The outcome looks like this:

[image: the skewed image after the perspective transformation]

This is just a quick "proof of concept"; for better results you should crop your images properly.

Regards -- Henrik

POSTED BY: Henrik Schachner
Posted 9 years ago

UPDATE: So I found something called shear mapping. This seems to be exactly what I need to apply, but because I am starting with an image sheared in both x and y, there is coupling between the axes. I have tried a While loop that iterates the shearing operation in both x and y until it's "gone", but it feels like this has to be a known problem.
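
For reference, the coupled x/y shear is just a single 2x2 matrix, so in principle it could be determined in one step instead of iterated (a sketch; sx and sy are placeholders for the shear factors):

shear = {{1, sx}, {sy, 1}};   (* combined x- and y-shear *)
(* composing separate x- and y-shears shows where the coupling comes from: *)
{{1, sx}, {0, 1}} . {{1, 0}, {sy, 1}}   (* -> {{1 + sx sy, sx}, {sy, 1}} *)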

POSTED BY: Cole Pierson