# Easy Focus Stacking with Mathematica: Live from M&M 2013

Posted 10 years ago
23597 Views | 18 Replies | 30 Total Likes
My colleagues from Wolfram Research and I are currently attending the Microscopy and Microanalysis 2013 conference in Indianapolis. It is really fun to mingle and chat with the microscopy crowd among huge microscopes and monitors flashing myriads of invisible worlds blown up thousands of times. There is a lot of new technology being shown and talked about, and our image processing functionality fits right in. I just wanted to give you one example of that.

A few times our Wolfram booth was approached by folks wondering how easy it would be to do focus stacking in Mathematica. For a general reference:

> Focus stacking (aka focal plane merging or z-stacking) is a digital image processing technique which combines multiple images taken at different focus distances to give a resulting image with a greater depth of field than any of the individual source images. Focus stacking can be used in any situation where individual images have a very shallow depth of field; macro photography and optical microscopy are two typical examples. ~ Wikipedia

The manager of our Image Processing team, Shadi Ashnai, is right here with us at the conference, and she worked out an example of focus stacking in a matter of minutes. She kindly agreed to explain it in detail below.
Posted 8 years ago
Dear Shadi, what about functions for image reduction in astrophotography? Are you thinking about adding those?
Posted 8 years ago
Mathematica 10.2 has a new function for focus stacking called ImageFocusCombine. For now only one method is implemented, with an option for choosing between speed and quality. Eventually we will include more methods.
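For reference, a minimal usage sketch; the image names here are hypothetical placeholders for a real focus stack, not part of any documented example:

```mathematica
(* Sketch: combine a list of images of the same scene taken at *)
(* different focus distances (Mathematica 10.2 or later).      *)
(* focusStack is a hypothetical, pre-captured list of images.  *)
focusStack = {imgNear, imgMid, imgFar};
combined = ImageFocusCombine[focusStack]
```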
Posted 9 years ago
Hi everybody,

I am an astronomer, and I have been studying an astrophotography program based on Mathematica commands, as I am helping an amateur meteor-capturing network that uses cheap CCTVs: http:video meteor homepage. A proprietary program captures events, and then an .avi file is created, like this sample: sample of a video of celestial phenomena.

First of all, I have to clean the AVI video to prepare it for astrometry. I plan to do it this way:

a. A video (it could also be a sequence of images) is recorded with some celestial phenomenon (in our example, a fireball).

b. After the video (a), I take dark frames (thermal noise from the CCD): 2 seconds of video, for example, and then I build a "master dark frame". The principle of dark frames is to take pictures with the lens closed, using the same equipment and temperature, so that only the noise current is registered and the images are dark. Averaging these frames gives a final master dark frame that is used to subtract the noise from the original image (or video).

c. I split the video (a) into frames (using the VirtualDub program): sample of video frames of a celestial phenomenon. Then I stack them into one single picture and subtract the master dark frame (b) from it.

The goal here is a single image: the trajectory of the celestial object, with the dark-current noise removed.

So I wrote the following code, using a set of frames from a video file (.avi) for the dark images; they are already separated into frames (I used VirtualDub previously): sample of dark frames.

Adding frames and taking an average frame for the master dark frame:

    imagDark = ImageTake[#, 463] & /@
       Table[Import["C:\\Users\\decicco\\Desktop\\TratImag\\DarkSeparadoAvi\\" <>
          ToString[FileNames[][[i]]]], {i, 1, Length[FileNames[]]}];
    ImageAdd[imagDark];
    MasterDark = ImageApply[#/27 &, %]

Then I get the frames from the sample video:

    SetDirectory["C:\\Users\\decicco\\Desktop\\TratImag\\AviSeparado\\"]
    FileNames[]
    imagBackGroun = ImageTake[#, 462] & /@
       Table[Import["C:\\Users\\decicco\\Desktop\\TratImag\\AviSeparado\\" <>
          ToString[FileNames[][[i]]]], {i, 1, Length[FileNames[]]}]
    ImagEmpilhamento = ImageAdd[(* here the jpeg frames *)];
    ImageSubtract[ImagEmpilhamento, MasterDark];
    Export["C:\\Users\\decicco\\Desktop\\TratImag\\ImagemFinal\\ImagFinal.jpeg", %]

But the final result is this:

Well, any help is welcome!

References: Richard Berry, http://www.stargazing.net/david/ccd/ccdmaglib.html
Posted 8 years ago
Marcelo, I read your post, but all the files you linked to are gone. It is possible to attach files to Community posts. Could you please edit your post and attach small samples?
Posted 8 years ago
Dear Vitaliy, the files are too large, so I am sharing their links again. If there is any problem, please let me know. Salute!
Posted 8 years ago
Marcelo,

A couple of things I noticed about your code:

1) You can get the list of dark frames more simply:

    imagDarks = ImageTake[#, 463] & /@
       Import["C:\\Users\\decicco\\Desktop\\TratImag\\DarkSeparadoAvi\\*.jpeg"];

2) You need to average to get the master dark frame, not just add. By just adding you get a 'white' frame, which reduces the final stack to 'black' when subtracted. I have the following function that I use for this:

    ImageMeanFlat[imageList_List] :=
     Module[{sepList},
      sepList = Map[ColorSeparate, imageList];
      ColorCombine[{
        Image[Mean[Map[ImageData, Map[#[[1]] &, sepList]]]],
        Image[Mean[Map[ImageData, Map[#[[2]] &, sepList]]]],
        Image[Mean[Map[ImageData, Map[#[[3]] &, sepList]]]]}]];

You can also create a similar function using Median, which may also be useful.

    masterDark = ImageMeanFlat[imagDarks];

Some people like to create libraries of master dark frames based on temperature and exposure.

3) You can do the same thing for the light frames (cropping the same as the darks):

    imagLights = ImageTake[#, 463] & /@
       Import["C:\\Users\\decicco\\Desktop\\TratImag\\AviSeparado\\*.jpeg"];

4) Then you subtract out the dark frame to get the final image (which can then be stretched using ImageAdjust):

    theImage = ImageSubtract[ImageMeanFlat[imagLights], masterDark];

Since this is all coming from an AVI, I'm guessing the overall exposure isn't terribly long, so there shouldn't be a big problem aligning the sub-frames. However, if you are trying to capture a transient event, like a meteor or an asteroid moving across the frame, be aware that the stacking process may smear it out unless you align each frame on the object you are trying to capture.

Here is the final image I obtained from using the above procedure on your data (plus some stretching):

Tom
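Following up on the Median remark above, here is one possible sketch of such a variant (not Tom's code): a median combine is more robust to outlier pixels such as hot pixels, cosmic-ray hits, or satellite trails, since Median applied to a list of arrays in the Wolfram Language computes elementwise medians across the list:

```mathematica
(* Sketch of a Median-based master-frame combiner, mirroring the   *)
(* structure of ImageMeanFlat above. Median rejects outlier pixel  *)
(* values (hot pixels, cosmic rays) better than Mean.              *)
ImageMedianFlat[imageList_List] :=
 Module[{sepList},
  sepList = Map[ColorSeparate, imageList];
  ColorCombine[
   Table[Image[Median[Map[ImageData, Map[#[[c]] &, sepList]]]], {c, 3}]]]
```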
Posted 8 years ago
Tom,

Thanks! I implemented the code and it worked fine! I also tried using the FITS format, and the result was even better, as follows. First I converted the AVI file to FITS files using the IRIS software (freeware), and then imported them:

    Table[Import["H:\\Images\\FitsConvert\\" <> ToString[i] <> ".fit"], {i, 1, 172}];
    (* yes, I know there is much better code, but when importing
       the files they came in a disorganized list! *)
    imagLights = #[[1]] & /@ %;
    images = ImageTake[#, 560] & /@ %;

(Here I modified Tom's algorithm for grayscale:)

    ImageMeanFlat[imageList_List] :=
     Module[{sepList}, sepList = imageList;
      Image[Mean[Map[ImageData, Map[# &, sepList]]]]]

I did not use any master dark frame, so:

    imagFinal = ImageMeanFlat[images];
    Export["H:\\Images\\ResultFinal\\imagtestePet1_a.bmp", %]

And after adjusting contrast and brightness:
Posted 8 years ago
Marcelo,

This looks really good, and you can see what I suspect are some meteor tracks. One further simplification you can try is to pull the AVI file into Mathematica directly, without converting to FITS:

    images = ImageTake[#, 560] & /@ Import["myvideo.avi", "ImageList"]

Video import like this, prior to stacking, is ideal for processing bright objects, like planetary images.

Tom
Posted 8 years ago
Dear Tom,

I tried your code, but the output is a list like this: { , , , etc... }

EDIT 1:

    In[20]:= Internal`$VideoEncodings
    Out[20]= {"Animação", "BMP", "Cinepak", "DVCPRO - PAL", "DV/DVCPRO - NTSC",
      "DV - PAL", "Elementos Gráficos", "Foto - JPEG", "H.261", "H.263",
      "JPEG 2000", "Motion JPEG A", "Motion JPEG B", "MPEG-4 Video", "Nenhum",
      "Planar RGB", "PNG", "Sorenson Video", "TGA", "TIFF", "Uncompressed",
      "Vídeo", "Vídeo Componente"}

    In[21]:= Import["D:\\Dropbox\\Public\\M20150524_223758_PET_1.avi", "VideoEncoding"]
    Out[21]= "YUV"
Posted 8 years ago
I just checked your video: it has YUV encoding, and I wasn't able to import it directly. I opened it up in VirtualDub and then saved it right back out. The encoding now reads as "Uncompressed", and the above Import line does import it directly.
Posted 8 years ago
Dear Tom,

It WORKED FINE! See the pic. Two things: first, is there any loss of information during this reprocessing? And yes, the video is a fireball crossing the southern skies, captured by my camera; the small trail is probably a satellite. Now I need to make the trail of the meteor stronger; as you have noticed, the trail has become smooth because of the frame averaging.
Posted 10 years ago
Tom, I will look forward to what you can do and would be very happy to be a tester for anything you create. I have quite a few sets of images that the other software did not process well, and I can test with those. I do both tracked and untracked imaging with cameras and telescopes. I have only been at it for nine months and have wanted to use Mathematica for the process. My guess is that there are a lot of astrophotography folks who would be interested. I give talks at the local astronomy clubs periodically, and I think they would be very interested in a presentation on this at some point.
Posted 10 years ago
Like! This is focus stacking for two images. Can anyone do it for n_ /; And[n > 2, IntegerQ[n]]?
Posted 10 years ago
Seth, I am lucky enough to understand your programming metaphors because I speak (code?) the same language ;-) Actually, it is very easy to apply Shadi's code to a larger set of images. The procedure above treats the images as independent, so if you apply it as-is to a larger set, it will just work.
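One possible way to sketch that generalization, using the same sharpness-mask idea from Shadi's walkthrough in this thread (this is an illustrative sketch, not code from the thread; `stack` is a hypothetical list of already-aligned images of the same scene):

```mathematica
(* Sketch: n-image focus stacking. Each image contributes only     *)
(* its sharp regions, found as large differences from a blurred    *)
(* copy; the masked layers are then composed in turn.              *)
focusStack[stack_List] :=
 Module[{masks, layers},
  masks = FillingTransform[
      Dilation[Binarize[ImageAdjust@ImageDifference[Blur[#, 5], #]], 1]] & /@ stack;
  layers = MapThread[SetAlphaChannel, {stack, masks}];
  Sharpen[Fold[ImageCompose, First[stack], layers]]]
```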
Posted 10 years ago
Steven, both astro stacking and post-processing of stacked images in Mathematica are doable and easy. We once prototyped such a program in Mathematica with an astronomer colleague, Tom Sherlock, who is also the author of this blog post: "Fixing Bad Astrophotography Using Mathematica 8 and Advanced Image Deconvolution".

A quick list of steps to take for stacking (as far as I remember):

1. Increase the brightness of the images (ImageAdd or ImageAdjust)
2. Align the images (ImageAlign)
3. Reverse step 1 (ImageSubtract or ImageAdjust)
4. Add the images together (Fold of ImageAdd)

If needed, I can provide you the exact steps once I am back in the office from the M&M conference. Also, for post-processing of images, please look at some Demonstrations or documentation examples like:

- Adjusting RGB Color of Image
- Image Deconvolution
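The four steps above can be sketched roughly as follows. This is an illustrative sketch only, not the exact prototype mentioned: `frames` is a hypothetical list of dark, roughly overlapping exposures, and brightening is done here by a fixed factor (so step 3 can undo it exactly) rather than by ImageAdjust:

```mathematica
(* Rough sketch of the stacking recipe above.                      *)
stackFrames[frames_List, factor_: 4] :=
 Module[{bright, aligned, restored},
  bright = ImageMultiply[#, factor] & /@ frames;          (* step 1: brighten so   *)
                                                          (* alignment has features *)
  aligned = Prepend[
     ImageAlign[First[bright], #] & /@ Rest[bright],
     First[bright]];                                      (* step 2: align to first *)
  restored = ImageMultiply[#, 1/factor] & /@ aligned;     (* step 3: undo step 1    *)
  Fold[ImageAdd, restored]]                               (* step 4: sum the stack  *)
```

Note that ImageAlign can fail on featureless frames, so a production version would need to handle that case.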
Posted 10 years ago
 This is great.  Would it be possible to devise a stacking system for astrophotography in Mathematica similar to or better than the DeepSkyStacker or Registax programs?
Posted 10 years ago
Steven,

I'm working on a blog post to describe this process. It works similarly to DeepSkyStacker and Registax, but it does the sub-frame alignments automatically. Right now I'm using it to reduce noise in my images, since I can't get my scope to track accurately without guiding for more than about 1 minute per sub-frame, rather than using it to combine parts of images. However, Shadi's technique might be useful for building images of galaxies, because their dynamic range is so great.
Posted 10 years ago
 Lets import an image from the Wikipedia page about Focus Stacking. Note that the first and second image parts are out-of-focus: the head or tail of the fly. The goal is to get a result comparable to the 3rd image part below where all in-focus areas are merged in single crispy image.i = Import["http://goo.gl/xMThiZ"]Because it is one large image, we start by grabbing first 2 images to process further.images = {i1, i22} = Most@Flatten@ImagePartition[i, {{Scaled[1/3]}, Scaled[1]}] Next we need to align images perfectly so no artifacts will appear during mergingi2 = RemoveAlphaChannel@ImageAlign[i1, i22];Now if we blur images and find difference with original we can find sharper domainslist = ImageAdjust@ImageDifference[Blur[#, 5], #] & /@ {i1, i2}Having that we make masks covering those domainsmasks = FillingTransform@Dilation[Binarize[#], 1] & /@ listAnd use those masks to set an alpha (transparency) channel in the imageslist2 = MapThread[SetAlphaChannel, {{i1, i2}, masks}] And now we can compose these with original to get complete-in-focus image. And after slight sharpening, voila, as crispy as we can get it. Note how in this image both head and tail of the fly are sharp.ImageCompose[i1, ImageCompose @@ list2] // Sharpen