Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Image Processing sorted by active

[GIF] Visualizing Interstellar's Wormhole: from article to programming
http://community.wolfram.com/groups/-/m/t/852052
----------
[![enter image description here][5]][5]
Let me start off by saying that I know almost nothing about general relativity, but I thought it was really fun translating the equations presented in [this paper](http://arxiv.org/abs/1502.03809) by Oliver James, Eugenie von Tunzelmann, Paul Franklin, and Kip Thorne into notebook expressions.
Embedding Diagrams
==================
The paper gives some really cool figures to show the curvature of 4-dimensional spacetime in the region around a wormhole. The physics of the wormhole is essentially described by three parameters:
1. $\rho$ - the radius of the wormhole
2. $a$ - the length of the wormhole
3. $\mathcal{M}$ - a parameter describing the curvature, described in the paper as the "gentleness of the transition from the wormhole's cylindrical interior to its asymptotically flat exterior"
To look at the curvature for a given set of parameters, we only really care about the ratios $a/\rho$ and $\mathcal{M}/\rho$.
Taking equations (5) and (6) from the paper, we can plot the curvature for any pair of these parameters using cylindrical coordinates. Since the $z$ coordinate is found via numerical integration, I chose to speed up the `ParametricPlot3D` by first forming an interpolating function.
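For reference, the shape function being plotted, as transcribed from the code below (Eqs. (5) and (6) of the paper, with $\theta(\cdot)$ the unit step), is

$$x(\ell) = \frac{2(|\ell| - a)}{\pi \mathcal{M}}, \qquad r(\ell) = \rho + \theta(|\ell| - a)\, \mathcal{M}\left(x \arctan x - \tfrac{1}{2}\ln(1 + x^2)\right),$$

with the height coordinate obtained by integrating the embedding condition

$$z(\ell) = \int_0^{\ell} \sqrt{1 - \left(\frac{dr}{d\ell'}\right)^{2}}\, d\ell'.$$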
embeddingDiagram[a_, M_, lmax_: 4] := Module[{ρ = 1, z, zz, x, r},
  (* Eq. (5): the shape function r(l) *)
  x[l_] := (2 (Abs[l] - a))/(π*M);
  r[l_] := ρ + UnitStep[Abs[l] - a] (M (x[l]*ArcTan[x[l]] - 1/2 Log[1 + (x[l])^2]));
  (* Eq. (6): z(l) by numerically integrating Sqrt[1 - r'(l)^2] *)
  z[l_] := NIntegrate[
    Sqrt[1 - (UnitStep[Abs[ll] - a] (2 ArcTan[(2 (-a + Abs[ll]))/(M π)] Sign[ll])/π)^2],
    {ll, 0, l}];
  (* interpolate z(l) once, so ParametricPlot3D doesn't re-integrate *)
  zz = Interpolation@({#, z[#]} & /@ Subdivide[lmax, 20]);
  ParametricPlot3D[
   {{r[l] Cos[t], r[l] Sin[t], zz[l]}, {r[l] Cos[t], r[l] Sin[t], -zz[l]}},
   {l, 0, lmax}, {t, 0, 2 π},
   PlotStyle -> Directive[Orange, Specularity[White, 50]],
   Boxed -> False,
   Axes -> False,
   ImageSize -> 500,
   PlotPoints -> {40, 15}]
  ]
and here are three examples shown in the paper,
embeddingDiagram[0.005, 0.05/1.43]
embeddingDiagram[0.5, 0.014]
embeddingDiagram[0.5, 0.43, 10]
![enter image description here][2]
Tracing rays through the wormhole
=================================
The appendix to the paper describes a procedure for creating an image taken from a camera on one side of the wormhole. The procedure involves generating a map from one set of spherical polar coordinates (the "camera sky") to the "celestial spheres" describing the two ends of the wormhole.
First, a location is chosen for the camera; then light rays are traced backwards in time from the camera to one of the two celestial spheres. This ray tracing involves solving five coupled differential equations from $t=0$ back to minus infinity (in practice, a large negative time).
For this I use [`ParametricNDSolve`](http://reference.wolfram.com/language/ref/ParametricNDSolve.html). The functions being solved for are the spherical coordinates of the light rays and their momenta.
The parameters for `ParametricNDSolve` are the wormhole parameters listed above, the camera's position `{lcamera, θcamera, ϕcamera}`, and the "camera sky" coordinates used to build the map. Rather than walk through their derivation (again, I'm not a cosmologist), I refer to the paper for the equations implemented below:
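Written out, the five coupled equations (Eq. A.7 of the paper) that get integrated are

$$\frac{dl}{dt} = p_l, \qquad \frac{d\theta}{dt} = \frac{p_\theta}{r^2}, \qquad \frac{d\phi}{dt} = \frac{p_\phi}{r^2 \sin^2\theta},$$

$$\frac{dp_l}{dt} = \frac{b^2\, r'(l)}{r^3}, \qquad \frac{dp_\theta}{dt} = \frac{p_\phi^2}{r^2}\,\frac{\cos\theta}{\sin^3\theta},$$

where $b^2$ and $p_\phi$ are constants of the motion fixed by the initial conditions (Eqs. A.9b-A.9d).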
rayTrace = Module[{
    (* auxiliary variables *)
    nl, nϕ, nθ, pϕ, bsquared, M, x, r, rprime,
    (* parameters for ParametricNDSolve *)
    θcamsky, ϕcamsky, ρ, lcamera, θcamera, ϕcamera, W, a,
    (* time-dependent functions to be solved for *)
    l, θ, ϕ, pl, pθ,
    (* the time variable *)
    t
    },
   (* Eq. (7) *)
   M = W/1.42953;
   (* Eq. (5) *)
   x[l_] := (2 (Abs[l] - a))/(π*M);
   r[l_] := ρ + UnitStep[Abs[l] - a] (M (x[l]*ArcTan[x[l]] - 1/2 Log[1 + (x[l])^2]));
   rprime[l_] := UnitStep[Abs[l] - a] (2 ArcTan[(2 (-a + Abs[l]))/(M π)] Sign[l])/π;
   (* Eq. A.9b *)
   nl = -Sin[θcamsky] Cos[ϕcamsky];
   nϕ = -Sin[θcamsky] Sin[ϕcamsky];
   nθ = Cos[θcamsky];
   (* Eq. A.9d *)
   pϕ = r[lcamera] Sin[θcamera] nϕ;
   bsquared = (r[lcamera])^2*(nθ^2 + nϕ^2);
   ParametricNDSolveValue[{
     (* Eq. A.7 *)
     l'[t] == pl[t],
     θ'[t] == pθ[t]/(r[l[t]])^2,
     ϕ'[t] == pϕ/((r[l[t]])^2 (Sin[θ[t]])^2),
     pl'[t] == bsquared*rprime[l[t]]/(r[l[t]])^3,
     pθ'[t] == pϕ^2/(r[l[t]])^2 Cos[θ[t]]/(Sin[θ[t]])^3,
     (* Eq. A.9c *)
     pl[0] == nl,
     pθ[0] == r[lcamera] nθ,
     (* initial conditions, paragraph following Eq. A.9d *)
     l[0] == lcamera,
     θ[0] == θcamera,
     ϕ[0] == ϕcamera
     },
    {l, θ, ϕ, pl, pθ},
    {t, 0, -10^6},
    {θcamsky, ϕcamsky, lcamera, θcamera, ϕcamera, ρ, W, a}]];
Now to use the `rayTrace` function: we want to build up an array of values from which a `ListInterpolation` function can map any direction in the camera's local sky to coordinates on one of the celestial spheres. Exactly which celestial sphere is determined by the sign of the length coordinate, `l`. The size of the array is very important. I find that it is important to use an odd number of array elements, or you'll end up with an ugly vertical line in the center of your image.
generateMap[nn_, lc_, θc_, ϕc_, ρ_, W_, a_] :=
 ParallelTable[
  {Mod[#2/π, 1], Mod[#3/(2 π), 1], #1} & @@
   Through[rayTrace[θ, ϕ, lc, θc, ϕc, ρ, W, a][-10^6]][[;; 3]],
  {θ, Subdivide[π, nn]}, {ϕ, Subdivide[2 π, nn]}]
Finally, you need a function to transform the two input images using the map generated by the above function. I would be very happy if someone could suggest a better method - perhaps using `ImageTransformation`? I was able to make something work with `ImageTransformation`, but it was much less efficient than this. Essentially, `ImageTransformation` can map pixels from one part of an image to another, but it won't grab pixels from another image. You could create a composite image with the two stacked on top of each other, or you could apply the transformation function to each one separately and then combine them.
blackHoleImage[foreground_, background_, map_] :=
 Module[{raytracefunc, img1func, img2func, nrows, ncols, mapfunc},
  {nrows, ncols} = Reverse@ImageDimensions@foreground;
  (* interpolate the ray-trace map over pixel coordinates *)
  raytracefunc =
   ListInterpolation[#, {{1, nrows}, {1, ncols}}, InterpolationOrder -> 1] & /@
    Transpose[map, {2, 3, 1}];
  (* per-channel interpolating functions for each image *)
  img1func =
   ListInterpolation[#, {{0, 1}, {0, 1}}] & /@
    Transpose[ImageData[foreground], {2, 3, 1}];
  img2func =
   ListInterpolation[#, {{0, 1}, {0, 1}}] & /@
    Transpose[ImageData[background], {2, 3, 1}];
  (* the sign of l selects which celestial sphere supplies the pixel *)
  mapfunc[a_, b_, x_ /; x <= 0] := Through[img2func[a, b]];
  mapfunc[a_, b_, x_ /; x > 0] := Through[img1func[a, b]];
  Image@Array[mapfunc @@ Through[raytracefunc[#1, #2]] &, {nrows, ncols}]
  ]
Low-resolution test
===================
To generate a map using `nn=501` takes about 15 to 20 minutes on my PC, so it's no good for testing the effects of various parameters. Instead we'll make a much smaller map, at the cost of image quality. We can grab a couple of images from the site listed in the paper,
foreground=Import["http://www.dneg.com/wp-content/uploads/2015/02/InterstellarWormhole_Fig6b.jpg"];
background=Import["http://www.dneg.com/wp-content/uploads/2015/02/InterstellarWormhole_Fig6a.jpg"];
and make a 101 by 101 map in under a minute:
map1 = generateMap[101, 6.0, π/2, 0, 5.0, 0.07, 2.4]; // AbsoluteTiming
(* {36.2135, Null} *)
Here I've taken some parameters I think make a cool picture ($\rho = 5.0$, $a = 2.4$, $W = 0.07$, so $\mathcal{M} = W/1.43 \approx 0.049$) and put the camera at $\{l, \theta, \phi\} = \{ 6, \pi/2, 0 \}$. Since the map is low resolution, I can reduce the resolution of the images to get a quick result,
blackHoleImage[ImageResize[foreground, 500],
ImageResize[background, 500], map1]
[![enter image description here][3]][4]
But if you set `nn=501` and don't resize the images, you get
![enter image description here][5]
Alien invasion
==============
Have you ever read the *Commonwealth Saga* by Peter F. Hamilton, wherein an alien invades human-held territories via wormhole with the intent of exterminating our species?
![Mathematica graphics](http://i.imgur.com/DQ33Qkh.gif)
[Here](https://www.dropbox.com/s/f9zu8eyjnxx77jj/out.mp4?dl=1) is a higher-quality, smaller mp4 of the above animation. To make it, I varied the wormhole width from 0 up to 5, used `ImageCompose` to add in [this](http://www.wpclipart.com/cartoon/aliens/alien_ship/flying_saucer_2_T.png) stock flying-saucer image, then shrank the wormhole back to zero width.
Unfinished tasks
--
I think it would be very interesting to take the result of `rayTrace` and plot it on top of the embedding diagram, but I haven't quite figured this out.
I also think it would be pretty neat to take a terrestrial picture (say of the White House), and have a wormhole open up in the background. Finally, I would be very pleased to figure out how to put the wormhole at any position in an image I want, rather than just the center.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=frame_100.png&userId=130877
[2]: http://i.stack.imgur.com/f3drY.png
[3]: http://i.stack.imgur.com/cvKBK.png
[4]: http://i.stack.imgur.com/cvKBK.png
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=frame_100.jpg&userId=20103

Jason Biggs, 2016-05-06T13:44:35Z

Set Image acquisition ($ImagingDevice) on Unix?
http://community.wolfram.com/groups/-/m/t/1288310
When I evaluate
$ImagingDevice
I get the following message
Message[$ImagingDevice::notsupported, "Unix"]
What can I do to fix it?

Santiago Hincapie, 2018-02-19T23:04:51Z

Read SEGY data in Mathematica?
http://community.wolfram.com/groups/-/m/t/1272436
I need code or software to read SEG-Y data so that it can be imported into Mathematica.

John Smith Jr., 2018-01-25T17:35:21Z

Solving Puzzle: 4 Pics 1 Word
http://community.wolfram.com/groups/-/m/t/1283422
4 Pics 1 Word is an Android and iOS puzzle in which you guess a word based on four pictures that have something in common.
For example, the answer for the screenshot below is "TOOL".
![enter image description here][1]
**Goal**
-----------------------
Solve this puzzle by using Mathematica.
![enter image description here][2]
**Recognize Characters**
-----------------------
TextRecognize[] cannot recognize characters from the full screenshot.
TextRecognize[screenshot, Language -> "English", RecognitionPrior -> "Character"]
Output is Null.
Find the 12 character-box areas.
img2 = ImageTrim[screenshot, {{0, 150}, {900, 450}}];
img2b = Binarize[img2];
mask = MorphologicalTransform[img2b, "BoundingBoxes", Infinity];
mask = Thinning[mask, 4];
corners = Sort[ImageCorners[mask]];
HighlightImage[img2b, corners]
![enter image description here][3]
Trim the 12 character boxes and assemble them.
c2 = Partition[corners, 8];
rectanglerule = {a_, b_, c_, d_, e_, f_, g_, h_} -> {{a, b, e, f}, {c, d, g, h}};
c3 = Flatten[c2 /. rectanglerule, 1];
asm = ImageTrim[img2b, #] & /@ c3 // ImageAssemble
![enter image description here][4]
Now TextRecognize[] can recognize the 12 characters.
TextRecognize[asm, RecognitionPrior -> "Word", Language -> "English"] // Characters
Output is {"L", "T", "K", "W", "J", "M", "H", "D", "U", "O", "O", "I"}.
**Recognize Word Length of the Answer**
-----------------------
In this screenshot, the word length of the answer is four.
![enter image description here][5]
Trim one line within the input area of the answer (above the red line).
img3 = ImageTrim[screenshot, {{0, 555}, {1080, 555}}];
img3b = Binarize[img3]
Output is
![enter image description here][6]
The word length of the answer is the number of white areas minus 1 (in this case, 4 = 5 - 1). The elements of the black lines are 0 and the elements of the white lines are 1.
getWordLength[screenshot_] := Module[{img3, img3b},
img3 = ImageTrim[screenshot, {{0, 555}, {1080, 555}}];
img3b = Binarize[img3];
(ImageData[img3b] //. {x___, a_, a_, y___} -> {x, a, y} // Flatten //
Total) - 1
];
getWordLength[screenshot]
Output is 4.
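The run-collapsing rule inside `getWordLength` can be sanity-checked on a made-up row of pixel values (a toy example, not taken from the screenshot): `ReplaceRepeated` collapses each run of identical neighbors to a single element, so the surviving 1s count the white runs.

    row = {0, 1, 1, 0, 1, 1, 1, 0, 1, 0};
    row //. {x___, a_, a_, y___} -> {x, a, y}
    (* {0, 1, 0, 1, 0, 1, 0} *)
    Total[%]
    (* 3, the number of white runs in row *)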
**Find Candidates**
-----------------------
Find candidates using DictionaryLookup[].
getCandidates[string_String, n_] :=
Module[{list},
list = StringJoin /@ Permutations[Sort[Characters[string]], {n}];
Select[list,
Length[DictionaryLookup[#, IgnoreCase -> True]] != 0 &] //
ToUpperCase
];
getCandidates[StringJoin[{"L", "T", "K", "W", "J", "M", "H", "D", "U", "O", "O", "I"}], 4]
Output is {"DHOW", "DOLT", "DOOM", "DOTH", "HILT", "HOLD", "HOLT", "HOOD", "HOOK", "HOOT", "HOWL", "HTML", "HULK", "IDOL", "JILT", "JODI", "JOLT", "JOWL", "JUDO", "KILO", "KILT", "KITH", "KOHL", "LIDO", "LIMO", "LOKI", "LOOK", "LOOM", "LOOT", "LOUD", "LOUT", "LUDO", "MILD", "MILK", "MILO", "MILT", "MOHO", "MOIL", "MOLD", "MOLT", "MOOD", "MOOT", "MOTH", "ODOM", "OHIO", "OMIT", "THOU", "THUD", "TOIL", "TOJO", "TOLD", "TOOK", "**TOOL**", "WHIM", "WHIT", "WHOM", "WILD", "WILT", "WITH", "WOLD", "WOOD", "WOOL"}.
The candidates include the answer "TOOL".
**Identify Pictures**
-----------------------
Identify one of the pictures.
ImageTrim[screenshot, {{575, 1215}, {950, 1660}}]
ImageIdentify[%, All, 10]
![enter image description here][7]
Trim the four picture areas and identify them.
getCommonNames[screenshot_, n_: 10] :=
Module[{img41, img42, img43, img44, entities, cn},
img41 = ImageTrim[screenshot, {{55, 700}, {505, 1145}}];
img42 = ImageTrim[screenshot, {{575, 700}, {950, 1145}}];
img43 = ImageTrim[screenshot, {{55, 1215}, {505, 1660}}];
img44 = ImageTrim[screenshot, {{575, 1215}, {950, 1660}}];
entities = ImageIdentify[#, All, n] & /@ {img41, img42, img43, img44} // Flatten;
cn = CommonName /@ entities;
StringSplit /@ cn // Flatten // Union // Sort // ToUpperCase
];
cns = getCommonNames[screenshot]
Output is {"AIRPLANE", "ALLIGATOR", "ASTRONOMICAL", "BIT", "BOTTLE", "BUFFER", "CARPENTER'S", "CASSEGRAINIAN", "CHUCK", "CHURCHKEY", "CLIP", "COLLET", "COMPOUND", "CUTLERY", "CUTTER", "DRILL", "DRYER", "EDGE", "ELECTRIC", "FASTENING", "HAIR", "HAMMER", "KHUKURI", "KNIFE", "LEVER", "MALLET", "OF", "OPENER", "OPTICAL", "PAIR", "PIPE", "PLIERS", "POCKET", "POWER", "PROPELLER", "REFLECTING", "REGULATOR", "SCISSORS", "SLEDGEHAMMER", "SPIGOT", "TAILPIPE", "TELESCOPE", "**TOOL**", "TWIST", "VIAL", "WIRE"}.
The common names include the answer "TOOL".
**Get Answer**
-----------------------
The answer is the element common to both the candidate words and the picture identifications.
Intersection[
{"DHOW", "DOLT", "DOOM", "DOTH", "HILT", "HOLD", "HOLT", "HOOD",
"HOOK", "HOOT", "HOWL", "HTML", "HULK", "IDOL", "JILT", "JODI",
"JOLT", "JOWL", "JUDO", "KILO", "KILT", "KITH", "KOHL", "LIDO",
"LIMO", "LOKI", "LOOK", "LOOM", "LOOT", "LOUD", "LOUT", "LUDO",
"MILD", "MILK", "MILO", "MILT", "MOHO", "MOIL", "MOLD", "MOLT",
"MOOD", "MOOT", "MOTH", "ODOM", "OHIO", "OMIT", "THOU", "THUD",
"TOIL", "TOJO", "TOLD", "TOOK", "TOOL", "WHIM", "WHIT", "WHOM",
"WILD", "WILT", "WITH", "WOLD", "WOOD", "WOOL"},
{"AIRPLANE", "ALLIGATOR", "ASTRONOMICAL", "BIT", "BOTTLE", "BUFFER",
"CARPENTER'S", "CASSEGRAINIAN", "CHUCK", "CHURCHKEY", "CLIP",
"COLLET", "COMPOUND", "CUTLERY", "CUTTER", "DRILL", "DRYER", "EDGE",
"ELECTRIC", "FASTENING", "HAIR", "HAMMER", "KHUKURI", "KNIFE",
"LEVER", "MALLET", "OF", "OPENER", "OPTICAL", "PAIR", "PIPE",
"PLIERS", "POCKET", "POWER", "PROPELLER", "REFLECTING", "REGULATOR",
"SCISSORS", "SLEDGEHAMMER", "SPIGOT", "TAILPIPE", "TELESCOPE",
"TOOL", "TWIST", "VIAL", "WIRE"}]
Output is {"TOOL"}.
**Integrate**
-----------------------
Get 12 box characters from a screenshot.
getCharacters[screenshot_] :=
Module[{img2, img2b, mask, rectanglerule, corners, c2, c3, asm},
img2 = ImageTrim[screenshot, {{0, 150}, {900, 450}}];
img2b = Binarize[img2];
mask = MorphologicalTransform[img2b, "BoundingBoxes", Infinity];
mask = Thinning[mask, 4];
corners = Sort[ImageCorners[mask]];
c2 = Partition[corners, 8];
rectanglerule = {a_, b_, c_, d_, e_, f_, g_, h_} -> {{a, b, e, f}, {c, d, g, h}};
c3 = Flatten[c2 /. rectanglerule, 1];
asm = ImageTrim[img2b, #] & /@ c3 // ImageAssemble;
TextRecognize[asm, RecognitionPrior -> "Word", Language -> "English"] // Characters // ToUpperCase
];
Integrate all functions.
Pics41[screenshot_] := Module[{answer, chars, n, candidates, cns},
   answer = {};
   chars = getCharacters[screenshot];
   n = getWordLength[screenshot];
   Print["Word Length: ", n];
   Print["characters: ", chars[[{2, 4, 6, 8, 10, 12, 1, 3, 5, 7, 9, 11}]]];
   Print["searching..."];
   candidates = getCandidates[StringJoin[chars], n];
   cns = getCommonNames[screenshot];
   answer = Intersection[candidates, cns];
   If[answer =!= {}, Print["found"]; answer, Print["not found"]; candidates]
   ];
Another example is
![enter image description here][8]
![enter image description here][9]
**Finally**
-----------------------
I have some problems.
- Pics41[] often cannot find the answer; more information is needed from each picture.
For example, Pics41[] cannot find the answer "COMIC".
![enter image description here][10]
![enter image description here][11]
- Some manual operations are necessary.
- launch 4 Pics 1 Word
- get screenshot
- mail it to my PC
- import it to Mathematica
I would like Mathematica to handle 4 Pics 1 Word as directly as possible.
Any ideas very welcome.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1091501.jpg&userId=1013863
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=244402.jpg&userId=1013863
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=499903.jpg&userId=1013863
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=723404.jpg&userId=1013863
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=264405.jpg&userId=1013863
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=746806.jpg&userId=1013863
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=589007.jpg&userId=1013863
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1003308.jpg&userId=1013863
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=979209.jpg&userId=1013863
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=737810.jpg&userId=1013863
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=960111.jpg&userId=1013863

Kotaro Okazaki, 2018-02-12T12:31:20Z

Pixel Editor for Images
http://community.wolfram.com/groups/-/m/t/1286303
Often I would edit bitmaps in programs like MS Paint and then import them back into my notebook. To save time, I developed a tool for editing the pixels of an image within a notebook: Wolfram Paint.
![enter image description here][1]
----------
Features include:
- Three tools: pen (single pixel change), bucket (change all identical pixel groups), and eye drop (get pixel color)
- Mouse down uses primary color, `Control` + mouse uses secondary color
- Editable "favorite" colors panel
- [`DominantColors`][2] automatically loaded as "favorites"
- Uses RGBA color space
- Resizable image, inset, and zoom
- "Undo" to a state that you first manually bookmark
- Click-and-drag moves the zoomed region (left panel) or edits pixels (right panel)
I'm sure I could mimic other tools, like drawing lines, circles, or other primitives, but as a proof of concept I'm pleased that it came to only 300 lines of code.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=WLPaint.PNG&userId=829295
[2]: http://reference.wolfram.com/language/ref/DominantColors.html

Kevin Daily, 2018-02-16T04:29:47Z