
Aftermath of the solar eclipse

[image]

POSTED BY: Marco Thiel
16 Replies

Thank you for a very interesting analysis, Professor Thiel. Here are a few pictures of the eclipse from Aberdeen, around 9:25 am.

[image]

[image]

POSTED BY: Tanvi Chheda

Hi Tanvi and Vitaliy,

I know that this does not work quite well yet, but if I take the photo that Tanvi posted

img = Import["~/Desktop/Eclipse.jpg"]

and crop it a bit:

img2 = ImageTrim[img, {{447.386`, 409.278`}, {513.381`, 344.818`}}]

I get:

[image]

The question, which also relates to Sander's question in Vitaliy's post, is: how much of the sun was covered? The idea is to cover the crescent with a bounding disk:

MorphologicalComponents[Binarize[img2, 0.4], Method -> "BoundingDisk"] // Colorize

[image]

then determine the shape of the crescent itself:

MorphologicalComponents[Binarize[img2, 0.4]] // Colorize

[image]

and then calculate the respective areas:

full = ComponentMeasurements[MorphologicalComponents[Binarize[img2, 0.4], Method -> "BoundingDisk"], "Area"][[1, 2]]
(*1206.5*)
crescent = ComponentMeasurements[MorphologicalComponents[Binarize[img2, 0.4]], "Area"][[1, 2]]
(*311.75*)

The fraction of the disk still visible is:

crescent/full
(*0.258392*)

so the covering is about $74\%$ ($1 - 0.258 \approx 0.742$). This number is only a rough estimate; the fitted disk is not quite right, as you can see in this plot:

ImageMultiply[img2, ColorNegate@(MorphologicalPerimeter[MorphologicalComponents[Binarize[img2, 0.4], Method -> "BoundingDisk"], 0.2] // Colorize)]

[image]

The black circle is the estimate of the shape of the full sun, which is obviously not quite what we want.
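A possible refinement (my own sketch, not part of the original analysis): the outer arc of the crescent lies on the true solar limb, so one could fit a circle to the crescent's perimeter pixels by least squares instead of using the bounding disk. All names here besides `img2` are illustrative:

```mathematica
(* sketch: algebraic (Kasa) least-squares circle fit to the perimeter
   pixels of the crescent; img2 is the cropped eclipse image from above *)
pts = PixelValuePositions[
    Image[MorphologicalPerimeter[Binarize[img2, 0.4]]], 1] // N;
(* fit x^2 + y^2 + a x + b y + c == 0 in the least-squares sense *)
fit = FindFit[{#1, #2, -#1^2 - #2^2} & @@@ pts,
    a x + b y + c, {a, b, c}, {x, y}];
{cx, cy} = {-a/2, -b/2} /. fit;
rad = Sqrt[cx^2 + cy^2 - (c /. fit)];
HighlightImage[img2, {Circle[{cx, cy}, rad]}]
```

Restricting `pts` to the bright outer arc before fitting would likely improve the estimate of the full solar disk, and $\pi\,\text{rad}^2$ then replaces `full` in the covering ratio.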

Cheers,

M.

PS: Tanvi, thanks a lot for posting the photos!

POSTED BY: Marco Thiel

Hi Marco,

Inspired by your and Sander's posts on how much the sun was covered during the last eclipse, I also tried to write an evaluation from the picture data. For this I use Tanvi's first picture (@Tanvi: thanks for posting!), because in the second one the sun seems to be "cut" a bit at the edges by clouds. My ansatz is a model of the eclipse consisting of two equal-sized disks overlapping each other. This simple model has five parameters (the number on the right side represents the goodness of match):

[image]

One can define a "fitness function" to quantify the goodness of match and try an optimization for this. Unfortunately I was not able to get that working perfectly - the result of NMaximize is not bad but by no means perfect. This seems strange because it is very easy to improve the solution by hand.
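Henrik's actual code is in the attached notebook; purely as an illustration (my own sketch, not his implementation), such a fitness function might compare the binarized photo with a rasterized two-disk model pixel by pixel. The names `bin`, `model`, and `fitness` are assumptions:

```mathematica
(* sketch: pixelwise agreement between the binarized photo and a rendered
   two-disk model; {x, y} centre and r radius of the sun, dr/phi the offset
   of the covering disk -- all illustrative *)
bin = Binarize[img2, 0.4];
{w, h} = ImageDimensions[bin];
model[x_, y_, r_, dr_, phi_] := Binarize@Rasterize[
    Graphics[{White, Disk[{x, y}, r],
      Black, Disk[{x + dr Cos[phi], y + dr Sin[phi]}, r]},
     Background -> Black, PlotRange -> {{0, w}, {0, h}}],
    "Image", ImageSize -> {w, h}];
fitness[x_?NumericQ, y_?NumericQ, r_?NumericQ, dr_?NumericQ, phi_?NumericQ] :=
  1. - ImageMeasurements[ImageDifference[bin, model[x, y, r, dr, phi]], "Mean"];
(* then e.g.: NMaximize[{fitness[x, y, r, dr, phi], 0 < r < w/2, 0 <= dr < 2 r,
   0 <= phi < 2 Pi}, {x, y, r, dr, phi}] *)
```

A fitness of 1 would mean the model matches the binarized photo exactly; rasterization makes the objective piecewise constant, which may be one reason NMaximize struggles to polish the solution.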

Once a result is obtained, e.g.:

[image]

one can very easily calculate the covering:

\[ScriptCapitalR] =  RegionDifference[Disk[{x, y}, r], Disk[{x + dr Cos[\[Phi]], y + dr Sin[\[Phi]]}, r]] /. opt;
Print["covering: ", 100 (1 - RegionMeasure[\[ScriptCapitalR]]/(Pi r^2) /. opt), "%"]

Otherwise I would not have had a quick idea of how to do that: probably some ugly integrals ...
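For the record, the integrals actually have a closed form: the overlap ("lens") of two equal disks of radius $r$ whose centres are a distance $d$ apart has area

$$A_{\text{lens}} = 2 r^2 \cos^{-1}\!\left(\frac{d}{2r}\right) - \frac{d}{2}\sqrt{4 r^2 - d^2},$$

so the covering is simply $A_{\text{lens}}/(\pi r^2)$; the numerical `RegionMeasure` computation above is a convenient but not strictly necessary route.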

I have attached the notebook with the code. The next solar eclipse may come!

Henrik

Attachments:
POSTED BY: Henrik Schachner

Great Marco, excellent detective work ;) We know it was clear in Aberdeen on the eclipse day:

[image]

So I am curious: did you see the eclipse yourself? You must have had almost 100% sun coverage. Also, I am looking forward to doing similar data digging when the total solar eclipse spans the USA in 2017. I just posted about its path (image below); I hope folks in the US will have similar devices to those you featured.

[image]

POSTED BY: Vitaliy Kaurov

Hi Vitaliy,

I guess it might become important for the global warming discussion one day. The good thing is that there is very good data coverage in some areas. I think two of the main problems are: (i) there is basically no data in the Arctic and Antarctic, and these regions are very important for climate models. Of course, you can use Mathematica to do e.g. kriging to estimate the temperatures in these remote regions, or use additional satellite data. (ii) Climate change is a rather long-term process. It is not weather change, but is supposed to take place over long periods of time. So these private weather stations would have to be operational for many years, much longer than the product has existed.

In principle it should be possible to monitor (data mine with the help of DataDrop?) some extreme weather events, e.g. pressure profiles during a Hurricane etc.

There is something else that might be interesting. It is way off the original topic of the post, but it is meant as a side remark, so I'll just post it here. The people from Netatmo used the indoor sound measurements, which are not publicly available, to analyse the sound levels during the football/soccer world championships and find out which fans are most enthusiastic. This was actually a quite nice addition to the fantastic blog posts (first / second) on the Wolfram blog. All three studies use data from a crowd-sourced database.

For my own data at home I can do something similar:

data = SemanticImport["~/Desktop/Indoor_25_3_2015.csv"];

[image]

If you now plot the noise levels (column6) you get

DateListPlot[Transpose[{Lookup[data // Normal, "column2"][[23 ;;]], MovingAverage[Lookup[data // Normal, "column6"][[4 ;;]], 20] // N}], FrameLabel -> {"Time", "Noise level dB"}, LabelStyle -> Directive[Bold, Medium]]

[image]

This is the noise level at my home. You can see that something changed at the end of August last year, which is when my daughter was born. Before Christmas the noise levels went up, which agrees with her getting unsettled at that time; over the Christmas holidays, when we travelled, the level went down. After coming back from the Christmas holidays we left again for another week or so, as you can clearly see from the data. There is also a striking pattern developing towards the end of the time series:

DateListPlot[Transpose[{Lookup[data // Normal, "column2"][[23 ;;]], 
    MovingAverage[Lookup[data // Normal, "column6"][[4 ;;]], 20] // N}][[-500 ;;]], FrameLabel -> {"Time", "Noise level dB"}, LabelStyle -> Directive[Bold, Medium], AspectRatio -> 1/3, Filling -> Bottom]

[image]

The high frequency component corresponds to days, i.e. there is a daily cycle of the noise levels. Interestingly, there is also a longer 10-11 day cycle. I have absolutely no idea where that comes from. Any hints are welcome!
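One way to pin down such cycle lengths (my own sketch, not from the original post; it assumes the `data` and column names used above) is to look at the spectrum of the noise series, e.g. with Periodogram, and read off the dominant peaks:

```mathematica
(* sketch: spectral check of the noise series; "data"/"column6" as above *)
noise = MovingAverage[Lookup[data // Normal, "column6"][[4 ;;]], 20] // N;
noise = noise - Mean[noise];  (* remove the DC component *)
Periodogram[noise, PlotRange -> All, FrameLabel -> {"Frequency", "Power"}]
(* a peak at frequency f corresponds to a cycle of 1/f samples; multiplying
   by the station's sampling interval converts this to days *)
```

If the 10-11 day peak is real, it should stand out clearly next to the daily peak.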

Cheers,

Marco

POSTED BY: Marco Thiel

Marco, thank you for such an extended reply! Do you think the types of data you were mining here have significance for the global warming discussion? Have you or your team ever done research in that direction?

POSTED BY: Vitaliy Kaurov

To highlight the diversity of the discussion we can use the new Wolfram Language function WordCloud:

post = Import["http://community.wolfram.com/groups/-/m/t/463721"];
Row[{
  WordCloud[ToUpperCase[DeleteStopwords[post]],
   ColorFunction -> "RustTones", Background -> Black, 
   ScalingFunctions -> (#^.1 &)],
  WordCloud[ToUpperCase[DeleteStopwords[post]],
   ColorFunction -> "RedBlueTones", ScalingFunctions -> (#^.1 &),
   Background -> ColorData["RedBlueTones"][0], 
   WordOrientation -> {{0, \[Pi]/2}}]}]

[image]

In the code above there are three handy tricks. The first is that the option WordOrientation has diverse settings for the words' directions. The second is that a good choice of ScalingFunctions can grant a good visual appeal; the simple power law I've chosen is often more flexible than a logarithmic one. The third trick is subtler: it is the choice of the background color to be the "bottom" color of the ColorFunction used. Then not only do the sizes of the words stress their weights, but so does their fading into the background.

POSTED BY: Vitaliy Kaurov

Hi Vitaliy,

I have just seen that it won't be long before MMA10 arrives. I cannot wait to try things out. I have recently programmed a little miner for Twitter. It would be nice to see how WordCloud does in combination with that.

Cheers,

M.

POSTED BY: Marco Thiel

This related discussion seems very interesting to me: Solar eclipses on other planets

POSTED BY: Vitaliy Kaurov

I mention all of you and your ideas in this blog: Solar Eclipses from Past to Future, Earth to Jupiter. Thanks for wonderful contributions and fun!

POSTED BY: Vitaliy Kaurov

For anyone looking for more resources ahead of the April 8, 2024, eclipse (especially Wolfram Language resources for computing and analyzing eclipses), check out Stephen Wolfram's new book "Predicting the Eclipse: A Multimillennium Tale of Computation". You can get a copy on Amazon: https://www.amazon.com/Predicting-Eclipse-Multimillennium-Tale-Computation/dp/1579550878

POSTED BY: Paige Vigliarolo

Thanks Dr. Kaurov, I took the photos on an iPad, there's no processing done on the pictures.

POSTED BY: Tanvi Chheda

Dear Henrik,

That looks really great. I was aware that my method was quite flawed: I only cover the crescent with a disk, which is by no means ideal. I had hoped to be able to use a bit of maths to correct for that. Your Manipulate GUI is really nice! Thanks for sharing.

Best wishes,

M.

POSTED BY: Marco Thiel

Tanvi, thank you for posting! Did you or someone you know take these photos? If yes, do you happen to know with what camera, and whether the photos were post-processed?

POSTED BY: Vitaliy Kaurov

You have earned a Featured Contributor Badge. Your exceptional post has been selected for our editorial column Staff Picks http://wolfr.am/StaffPicks, and your profile is now distinguished by a Featured Contributor Badge and is displayed on the Featured Contributor Board. Thank you!

POSTED BY: Moderation Team