I recently gained access to a 3D printer, and my first thoughts went straight to my hobby, tabletop wargaming.
The idea is that I would be able to use the Wolfram Language to produce a 3D model of an object from a set of photos I have taken.
I have found this is not easy for someone as new to the language as I am; it requires image processing, etc.
Roughly, the pipeline I have in mind is:

1. Import an image and find a way to isolate the section of the image I need.
2. Convert that into a 2D image of the subject that contains only the relevant edges (EdgeDetect seems to pick up every edge whatsoever, most of which are not needed).
3. Convert the edges into a usable set of data.
4. Combine the data produced from consecutive photos of the same item taken from different angles.
5. Convert all the vertex data into a 3D model that can then be exported in the required format.
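For the first two steps, a minimal sketch in the Wolfram Language might look like this (the file name, radius, and threshold are placeholders you would tune for your photos, and RemoveBackground may need manual help on a busy background):

```
img = Import["photo.jpg"];            (* hypothetical file name *)
subject = RemoveBackground[img];      (* try to isolate the subject from the background *)
edges = EdgeDetect[subject, 4, 0.2];  (* larger radius and higher threshold suppress minor edges *)
clean = DeleteSmallComponents[edges]  (* discard tiny spurious edge fragments *)
```

Raising EdgeDetect's second and third arguments (pixel range and strength threshold) is the usual way to keep only the strong, relevant edges rather than every edge in the image.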
Has anyone tried to do this before?
Is there anything that can be improved upon in terms of my approach?
Does anyone have any code or advice I can use to get this little project of mine off the ground?
This would be quite an endeavour, I think. Your method sounds like space carving, or am I mistaken?
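To give a feel for the silhouette-carving idea, here is a toy sketch, assuming orthographic views and using synthetic circular silhouettes in place of real segmented photos (with real photos you would binarize the isolated subject from each view instead):

```
n = 64;
(* hypothetical binary silhouettes from two orthogonal views *)
silFront = Table[Boole[(x - n/2)^2 + (z - n/2)^2 < (n/3)^2], {z, n}, {x, n}];
silSide  = Table[Boole[(y - n/2)^2 + (z - n/2)^2 < (n/3)^2], {z, n}, {y, n}];
(* a voxel survives only if it projects inside every silhouette *)
vox = Table[silFront[[z, x]] silSide[[z, y]], {z, n}, {y, n}, {x, n}];
mesh = ImageMesh[Image3D[vox]];  (* surface mesh of the carved volume *)
Export["model.stl", mesh]        (* hypothetical output file for printing *)
```

More views carve the volume closer to the true shape; the hard part with real photos is knowing the camera pose for each view so the projections line up.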
You could also use two photos taken from slightly different angles (like human eyes) and then recover depth using some image analysis (e.g. ImageDisplacements).
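A minimal sketch of that stereo idea (file names are hypothetical, and it is worth checking the ImageDisplacements documentation for the exact form of the result, which I believe is a per-pixel displacement field):

```
{imgL, imgR} = Import /@ {"left.jpg", "right.jpg"};  (* hypothetical stereo pair *)
disp = ImageDisplacements[{imgL, imgR}];
(* for a horizontally offset pair, the horizontal component of the
   displacement is the disparity, and depth varies as 1/disparity *)
```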
That does seem to be pretty much the right idea, so it seems I am at least heading in the right direction. I am just struggling with the implementation.