Message Boards

13 Replies

How can accurate measurements be made of angles found in an image?

Posted 9 years ago

I am a materials scientist studying the microstructures of tungsten-iron-nickel alloys processed by liquid-phase sintering. These alloys develop microstructures full of spheroidal tungsten particles, some of them connected to other particles, within an iron-nickel matrix. Where two particles connect, they form a "neck" between them, which appears as a solid-solid boundary extending in two directions to "triple points," where the solid-liquid boundaries of the surfaces of the two connected particles intersect it. The angles formed by the orientations of the three intersecting boundaries are known in the literature as "dihedral angles" and are indicative of the relative energies of the boundaries that form them.

The relative differences in boundary energies have a dominant effect on how the microstructures of these alloys evolve over the processing/sintering time, so it is important to have tools that can take automated measurements of these angles as they appear in digital micrographs. Large samples of measurements can then be accumulated, giving more accurate estimates of the population statistics.

I have developed Mathematica code that binarizes my grayscale images and refines the binary images so that they closely represent the shapes and contours of the particle surface boundaries. I have also used the ImageCorners operation to identify the triple points in the images, with good success. That leaves the task of measuring the angles at these triple points. Most of the discussion I have seen so far involves measuring angles at "branches," where vertices are identified and numbered and then used to find the angles with the VectorAngle operation. This approach may work, but the closest I have come to isolating branches in my images is using the Perimeter function to make an image of just the pixels at the particle boundaries and their intersections. That leaves a fair bit of code still to write, and I am just starting out in Mathematica and not quite up to the task yet. Any help is appreciated.

POSTED BY: Phillip Green
Posted 6 years ago

Hello David, could you please let me know how to export all the angles with their coordinates to a text or Excel file? I am pretty unfamiliar with Mathematica, so I'm struggling to get it.

Regards, Mirtunjay

POSTED BY: Mohammad Ashiq

I'll send you an email now.
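For anyone else reading, a minimal export sketch (untested; the names `corners`, for the list of {x, y} triple-point coordinates, and `angles`, for the matching list of measured angles, are assumptions about how the results are stored):

```mathematica
(* build {x, y, angle} rows and write them with a header line *)
rows = MapThread[Append, {corners, angles}];
Export["angles.csv",  Prepend[rows, {"x", "y", "angle"}]];   (* plain text *)
Export["angles.xlsx", Prepend[rows, {"x", "y", "angle"}]];   (* for Excel *)
```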

POSTED BY: David Gathercole

Thanks for your insight on this. I understand the challenge tolerably well in concept, and believe that some more work on the portion of the code that fits the data will produce results. However, I'm not confident that my skills in the Wolfram Language will let me work constructively on such a problem, at least not yet. I would like to ask whether you might consider collaborating on this problem in a more formal way, perhaps with the benefit of being published, or for a consulting fee. Any arrangement I make will need approval by my academic advisor, but I believe this can happen. My contact information is as follows: Phillip Green,

POSTED BY: Phillip Green

The original linear fit is probably inaccurate for any set of points with duplicate x values, and the problem is most prominent where many x values are similar. My understanding of this can be illustrated by considering a vertical perimeter segment. Say we have pixels all along the line x = 0, through y values {1, 2, 3, 4, 5}; what average y value does a line through these points take at x = 0? The answer is 3. Thus we obtain the constant function F[x] = 3, but this line is perpendicular to the fit we want!
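A tiny demonstration of that failure mode:

```mathematica
(* a vertical run of perimeter pixels *)
pts = {{0, 1}, {0, 2}, {0, 3}, {0, 4}, {0, 5}};
Fit[pts, {1, x}, x]
(* the least-squares result is a constant near 3, i.e. a horizontal line:
   the vertical direction of the data is lost entirely *)
```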

As I say, this occurs for any data set with multiple y values for one x, and thus influences the majority of our fits. This can be seen when the gradient obtained by dividing the averaged differences (my last post) is compared to the original fits: most results are similar, but not identical!

If you extract and rewrite the average-difference gradient algorithm, trying to move the trigonometry closer to the data (at present ArcTan is one of the last things applied), I expect the division-by-zero errors can be navigated safely. One solution would indeed be to express the vector for each line as the average of the vectors between the constituent points, and then pass the vectors to the VectorAngle function and let it worry about all the zero divisions and infinities.
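A sketch of that vector-averaging idea (untested; each branch is assumed to be a list of perimeter points ordered away from the corner):

```mathematica
(* averaging the successive difference vectors gives a direction without
   ever forming a gradient, so vertical branches need no special case *)
branchDirection[pts_] := Normalize[Mean[Differences[N[pts]]]];
cornerAngle[{branch1_, branch2_}] :=
  VectorAngle[branchDirection[branch1], branchDirection[branch2]]/Degree;

cornerAngle[{{{0, 0}, {0, 1}, {0, 2}}, {{0, 0}, {1, 0}, {2, 0}}}]  (* 90. *)
```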

Once again, I doubt even a well-implemented linear fit is ultimately what you want, given the curvature of these shapes. I'd encourage you to take the cornerpoints data and try algorithms on a small set until you find something you're happy with. My original work was mostly concerned with extracting the correct data: turning pixel data into perimeter cycles.

POSTED BY: David Gathercole

Thanks for the response. I will attempt to implement this workaround to eliminate the improper fits/angles, but I have a couple of ideas that I wanted to get your opinion on. When I first began working on this problem I thought to fit vectors to the perimeter points close to the corners and then use the VectorAngle operation to calculate the angle between them. However, I haven't yet found a good way to tell Mathematica to estimate the vector most closely aligned with the perimeter tangent.

All things considered, it seems to me that your algorithm does a reasonably accurate job of measuring angles where neither line fit approaches vertical (and the gradient infinity), so a simple solution might be to test for that condition and drop the measurements where it is true. In the case of the microstructure I'm working with, eliminating those measurements should not bias the results, since the orientations of particle contacts should be completely random (after all, they were processed in the absence of gravity). Of course the algorithm would not be as robust, but it would still provide a good basis for further work in the field. Let me know what you think. Phill

POSTED BY: Phillip Green

In a fit of laziness I've used Fit to fit lines to the point sets in cornerpoints, where we are after the line gradients. The issue here is that Fit finds the y values in terms of the x values, and thus isn't appropriate for us where one x has multiple y images. There are various workarounds, but it would be better to just use a more suitable algorithm.

Take each set of points and calculate the differences between successive points in each dimension. Then total these differences and divide to obtain the average gradient along each line. Plug these gradients into some trigonometry for the angle between the two lines.

ListPlot[#,
   PlotLabel -> N[Abs[-180/Pi ArcTan[(x - y)/(1 + x y) /. {
        x -> Divide @@ Total[Differences[#[[1]]]],
        y -> Divide @@ Total[Differences[#[[2]]]]}]]],
   AspectRatio -> 1, PlotRange -> All] & /@ cornerpoints

This leads to a lot of division-by-zero complications (which I have not resolved in the implementation above), and makes the fitted lines harder to plot over the data; but most importantly, it is always correct!

POSTED BY: David Gathercole

Hello David G., I was able to adjust the code to accept a full-sized digital image and ran the calculation with some initial success. Overall I believe this code estimates each angle in a fundamentally sound way, with a few exceptions. The biggest problem appears with angles where the points along the perimeter adjacent to the corner run nearly vertical, i.e. nearly parallel to the y-axis of the plot. The fitted lines shown in some of these plots (I have shown the first 25) don't seem to fit the data points well. I suspect the problem is related to the lack of a defined slope when lines are vertical. Do you have any thoughts?

POSTED BY: Phillip Green

Hello Dave G., I just read your comments and I am absorbing them now, but wanted to acknowledge them and thank you for them. I am glad this problem has piqued some interest out there. I am cautious in saying this, but it seems to me that extracting this type of individual measurement at this scale in microstructures is a bit novel and a departure from other work in image analysis that I've seen so far. I'll look at the notebook and let you know what I can do with it. Thanks again. Phill.

POSTED BY: Phillip Green

I haven't developed a particularly interesting angle-analysis algorithm, but I have done a lot of implementation work that may be of interest to you.

Whilst Mathematica's image-processing and morphological tools are fantastic, I felt one could only go so far with purely image-based analysis. Ideally one would calculate an ordered perimeter, so that the position of the data used relative to each corner is well understood.

To this end I split the perimeter into closed cycles to order the data. [attached image 1: perimeter cycles]

Then I manually detected corners by traversing these cycles with a rolling average and logging high angular change.
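That traversal might be sketched like this (untested; `cycle` is assumed to be an ordered, closed list of {x, y} pixel coordinates, and the window and threshold are illustrative defaults):

```mathematica
(* at each position i on the cycle, compare the mean direction of the w
   steps behind with the mean direction of the w steps ahead, and log
   positions where the turn exceeds a threshold *)
detectCorners[cycle_, w_: 8, threshold_: 60 Degree] :=
 Module[{n = Length[cycle], dir},
  dir[i_] := Normalize@Mean@Differences[N[cycle[[Mod[i + Range[0, w], n, 1]]]]];
  Select[Range[n], VectorAngle[dir[Mod[# - w, n, 1]], dir[#]] > threshold &]]
```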

This lets one look directly at the pixel data on either side of a corner point. Ideally one might fit a curve to each meeting edge, allowing a low-radius angle to be calculated from higher-radius data. I just use linear fits, fifteen pixels either side of each corner, and this is fairly OK. [attached image 2]

The whole procedure is fairly clean and commented in the attached notebook.

I have to agree with David on this one, this is an interesting problem!

POSTED BY: David Gathercole

Hello David. The approach you describe is one I have been considering for the last couple of weeks, since I first saw a similar one mentioned on Mathematica Stack Exchange for a similar problem (I think it was an effort to analyze biological structures, of which I have seen tons over my years of studying stereology). I believe there is great promise in such an approach, since it effectively samples the relative orientations of the solid-liquid surfaces as they appear further and further away from their point of intersection (and this is of great interest to us). Of course the confidence interval for individual measurements would be large, due to the noise at the digital "edges" in the binary representation of the particles; however, it is the mean of the population that we want.

This is important for two reasons. First, the images I have are of metallographic surfaces, i.e. planes cut through the solid samples. It has been established that the measure of an angle formed in such a plane (known as the "section angle") by a three-dimensional dihedral angle depends BOTH on the measure of the dihedral angle itself AND on its orientation with respect to the plane. Therefore one cannot measure three-dimensional dihedral angles by observing individual two-dimensional metallographic planes. However, it has been shown that the population mean of the section angles is equal to the population mean of the dihedral angles in a given sample! So we primarily need a robust sample of section-angle measurements (hence the desire for automation). Second, we have learned that the population distribution of the section angles gives an indication of the distribution of the dihedral angles, so evaluating the frequency distribution and estimating the standard deviation is very useful as well. I have stored grayscale images of many thousands of these angles, which are a rich source for this analysis.

As I mentioned, I was considering the disk approach, but I am still a Mathematica neophyte! I know how to develop a list of coordinates for the triple points, and also how to create a disk primitive with a specified radius and center point, but I still have some work to do. I see the future steps like this: use the list of vertex coordinates to have Mathematica create disks of a defined radius at each vertex; combine the binary image of the microstructure with the newly created image containing the disks, using the microstructure image to mask/remove the portions of each disk bounded by its circumference and the two intersecting particle boundaries; then analyze the difference between the areas of the original disks and the new areas with those sections removed. The results should be proportional to the angles at the chosen radius from the point of intersection, and the trend with respect to changes in that radius would indicate curvature. This is a lot of fun and I can't wait to try it out, but I just don't have the Wolfram chops yet. Any suggestions on code would be worth a lot to me! Much obliged, Phill.
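To sketch what I have in mind (untested, and the names are placeholders: `img` for the binarized microstructure with particles as foreground, and a triple-point coordinate `{cx, cy}`):

```mathematica
(* fraction of the disk of radius r around the vertex that is covered by
   particle pixels, times 360, as an estimate of the section angle at
   that radius *)
sectionAngle[img_, {cx_, cy_}, r_] :=
 Module[{w, h, pts, vals},
  {w, h} = ImageDimensions[img];
  pts = Select[
    Flatten[Table[{x, y},
      {x, Max[1, Floor[cx - r]], Min[w, Ceiling[cx + r]]},
      {y, Max[1, Floor[cy - r]], Min[h, Ceiling[cy + r]]}], 1],
    Norm[N[# - {cx, cy}]] <= r &];
  vals = PixelValue[img, pts];   (* 1 for particle pixels, 0 for matrix *)
  360. Total[vals]/Length[vals]]
```

Mapping this over the triple-point list at several radii, e.g. `sectionAngle[img, #, 15] & /@ corners`, would then show the radius trend.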

POSTED BY: Phillip Green
Posted 9 years ago

This is an interesting problem. I have been thinking of an approach based on disks of successive radii. Looking at the top image, begin at a vertex and define a disk region of small radius. Count the black pixels within the disk. Do the same for successively greater radii. At each step, the rate at which the black-pixel count grows is a function of the arc subtended by the black region at that radius. For any given radius this may be a noisy number, but if a curve could be fit to the counts, perhaps an estimate of the angle of separation at the vertex could be determined from the fit. It would help if some assumption could be made about the form of the function to be fit.
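For instance, if near the vertex the black region is roughly a wedge of angle theta, it contains about (theta/2) r^2 pixels, so the fit step might look like this (untested sketch; `blackCount[img, pt, r]` is an assumed helper counting black pixels within radius r of the vertex `pt`):

```mathematica
(* fit count ~ (theta/2) r^2 and read the angle off the coefficient *)
radii = Range[3, 25];
counts = blackCount[img, vertex, #] & /@ radii;
fit = Fit[Transpose[{radii, N[counts]}], {r^2}, r];
theta = 2 Coefficient[fit, r, 2]   (* radians; 180/Pi theta for degrees *)
```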

POSTED BY: David Keith

Hello David, thank you very much for your thoughts about this problem. Allow me to add some additional discussion to make some clarifications.

The images I posted are a small section of an image that is 5464x4080 pixels. The images were originally captured in grayscale at 150 dpi, and I have them saved in .tif format at that resolution. The routines I used to produce the image showing the identified "corners" have, of course, also recognized similar features at the edges of the image; my plan is to eliminate such artifacts by cropping the processed images, along with the data collected from them, once the code is satisfactory.

With regard to any approach for the angle measurement itself, there are a few things to keep in mind. First, the angle at the triple point is analogous to the wetting angle described in the literature by Young's equation, which is used to determine whether a liquid will spread over the surface of a solid (whether the surface is hydrophobic or hydrophilic). This angle is defined as measured between the normals of the intersecting surfaces at the point of intersection. At later stages of the liquid-phase sintering process, the surface energies tend to dominate the contact angles and produce a classic shape at these points. The images posted here, however, are from a sample sintered for only 1 minute, which is not yet enough time to achieve this shape at all contacts; some of the contacts, as you can see, are still somewhat irregular. I believe this sample was a good choice to start with precisely because it is difficult and requires a robust method.

My plan was first to try to fit vectors to the surfaces of the particles, beginning at the triple points and sampling pixels along the surface up to a specified range, perhaps 5 or 10 pixels. Once I have a good method for fitting these vectors, I am interested in adjusting the number of neighboring "surface pixels" the code uses, to observe the effect on the measurement outcomes. Following the classical definition, it would be entirely proper to use only the first (closest) neighbor to fit the vector, but these pixels are placed in their positions by a process containing inherent errors and approximations, and the resulting measurements would have an unacceptably high variance.

I have posted a section of the original image at the original resolution of 150 dpi for you to look at.

POSTED BY: Phillip Green

Hi Phillip,

Would you mind posting some source data, for example one of your input images at maximum resolution?

My immediate comment regards data cleanliness. The sample image you're working from contains four areas that are aligned with varying precision. You are examining corners along these boundaries, along with corners at the edges of the image (notably, some noise in the top right has created at least one erroneous corner).

I would suggest partitioning your input into each separate image, and then extracting the shapes entirely contained in each image. If you're expecting high variance this suggestion may be inappropriate, as it may skew your corner data towards that of smaller 'blobs'.

Finally, all of these corners have highly curved edges, so the higher the resolution at which we examine them, the smaller the output angles will be. Will all the input data be at a consistent resolution? Is there a specific pixel radius you want to measure at?


POSTED BY: David Gathercole