Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Staff Picks sorted by active

Temperature in Champaign, IL from 1973-2017 - Visualization
http://community.wolfram.com/groups/-/m/t/1205691
Oftentimes when browsing the [DataIsBeautiful][1] subreddit I look at the content and think to myself, "this would be pretty easy to make in Wolfram Language," at which point I promptly forget I had that thought and move on to the next post (where I likely have the same thought).
Today I stumbled upon [a post][2] with a beautiful graphic visualizing the hourly temperature in Brooklyn from 1973 through 2017. This time I decided to act on my "this would be pretty easy to make in Wolfram Language" thought, and went at it.
Wolfram Language's `WeatherData` functionality made gathering the temperatures a breeze (weather puns), and I was able to bring all the data into Mathematica with a single line of code:
WeatherData[Entity["City", {"Champaign", "Illinois", "UnitedStates"}],
"Temperature", {{1970, 1, 1}, Today, "Day"}]
After that it was just a matter of converting the degrees to Fahrenheit, getting the colors right (I really liked the colors in the original graphic, so I tried to mimic them), and then creating and formatting the rows (click to zoom).
[![Champaign Weather][3]][4]
I really don't want to take anything away from the original creator (here is a [GitHub link][5] with his R source code), as his version was much more carefully done (he accounted for leap years so his hours actually line up, and so on); I just wanted to demonstrate how easy it is to accomplish something similar in the Wolfram Language.
Here is [the notebook][6] (also attached) I used to create this (in the Cloud some things don't line up quite the same as they do on my desktop, but the end result is the same). All that would be required to make this same chart for another city is to edit the city entity in the first line of code, and perhaps adjust the scaling to better accommodate the maximum and minimum temperatures for that region.
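The two numeric steps in the notebook -- Celsius-to-Fahrenheit conversion and rescaling onto [0, 1] for the color blend -- are plain arithmetic. Here is a minimal cross-check in Python (the function names are illustrative, not from the notebook):

```python
def c_to_f(c):
    """Convert degrees Celsius (WeatherData's default) to Fahrenheit."""
    return c * 9 / 5 + 32

def rescale(f, lo=-10.0, hi=100.0):
    """Map a Fahrenheit value onto [0, 1], mirroring the notebook's
    Rescale over the -10..100 F display range."""
    return (f - lo) / (hi - lo)

print(c_to_f(0))      # water freezes: 32.0 F
print(rescale(45.0))  # midpoint of the -10..100 F scale: 0.5
```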
----------
# CODE
champaignTemp=WeatherData[
Entity["City",{"Champaign","Illinois","UnitedStates"}],"Temperature",{{1970,1,1},Today,"Day"}];
datePath=champaignTemp["DatePath"]/.{x_,y_Quantity}:>{x,UnitConvert[y,"DegreesFahrenheit"]};
{min,max}=MinMax[datePath[[All,2]]]
![enter image description here][7]
mean=Mean@datePath[[All,2]]
![enter image description here][8]
$blendingColors={RGBColor[1/6,0,2/3],RGBColor[2/3,1/4,5/12],RGBColor[1,9/10,0]};
Graphics[Table[{Blend[$blendingColors,x],Disk[{8x,0}]},{x,0,1,1/8}]]
(*quick pic for looking at how the blending will turn out*)
![enter image description here][9]
cf[x_]:=Blend[$blendingColors,x]
$legend=BarLegend[{cf[#]&,{0,1}},"Ticks"->{0,.5,1},"TickLabels"->(Style[#,FontFamily->"Avenir"]&/@
{Quantity[-10,"DegreesFahrenheit"],Quantity[45,"DegreesFahrenheit"],Quantity[100,"DegreesFahrenheit"]}),
LegendLabel->Style["Temperature",FontFamily->"Avenir"]];
scaledDatePath=datePath/.{x_,y_}:>
{x,Rescale[y,{Quantity[-10,"DegreesFahrenheit"],Quantity[100,"DegreesFahrenheit"]}]};
sortedScaledByYear=GroupBy[scaledDatePath,#[[1,1,1]]&];
Table[imgData[i]=Table[#,4]&/@(Blend[$blendingColors,#]&/@sortedScaledByYear[i][[All,2]])
//Transpose,{i,sortedScaledByYear//Keys}];
imgData[2017]=Transpose[Join[imgData[2017]//Transpose,
Table[{White,White,White,White},365-Length@Last@sortedScaledByYear]]];
grid=Grid[
Join[
{{Null,Column[{Style["Daily Temperature in Champaign, IL",FontFamily->"Helvetica",
18,Bold],Style["1973 - 2017",FontFamily->"Avenir",13]}],SpanFromLeft}
},
Table[{Style[i,FontFamily->"Avenir"],Image[imgData[i],ImageSize->1200],SpanFromLeft},
{i,sortedScaledByYear//Keys}],
{{Null,Style["January",FontFamily->"Avenir",13],Style["December",FontFamily->"Avenir",13]}}
]
,
Spacings->{.5,0.1},
Alignment->{{Left,Left,Right},Top}
];
final=Framed[Labeled[grid,$legend,Right],ImageMargins->10,FrameStyle->None]
SetDirectory[NotebookDirectory[]];
finalImg=Rasterize[final,ImageResolution->200]
Export["ChampaignWeather.png",finalImg]
[1]: https://www.reddit.com/r/dataisbeautiful/
[2]: https://www.reddit.com/r/dataisbeautiful/comments/77an8j/temperature_of_every_hour_in_ny_since_1973_oc/
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ChampaignWeather.png&userId=570780
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ChampaignWeather.png&userId=570780
[5]: https://gist.github.com/yeedle/82c5a7d73ab785eaaa8a220b5b0f1bf2
[6]: https://www.wolframcloud.com/objects/user-770cd577-83d6-4fc1-9b12-8de56235edbb/personalDeployments/ChampaignWeather.nb
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4ertwg.png&userId=11733
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ert54ert.png&userId=11733
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=saf43qtegwdfav.png&userId=11733

Kyle Martin, 2017-10-20T01:44:20Z

[GIF] Vitals (Animated von Mises distribution)
http://community.wolfram.com/groups/-/m/t/1199921
![Animated von Mises distribution][1]
**Vitals**
This one is pretty simple: there are 22 rows of dots, translating left or right depending on whether the row number is even or odd. Within each row, the dots plot the density of the [von Mises distribution][2]: specifically, the radius of each dot is the value of the von Mises pdf at that point.
Note that the von Mises distribution is like a Gaussian distribution on the circle. In particular, it is a periodic probability distribution, which is why each row is periodic, showing a total of 5 periods.
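The two properties used here -- periodicity and being a genuine probability density on the circle -- can be checked numerically. A small Python sketch (a cross-check outside the Wolfram code; the series-based `bessel_i0` helper is my own, since Python's `math` module has no Bessel functions):

```python
import math

def bessel_i0(x, terms=20):
    """Modified Bessel function I0 via its power series;
    converges quickly for small x."""
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

MU, KAPPA = 0.0, 0.3                    # same parameters as the animation
NORM = 2 * math.pi * bessel_i0(KAPPA)   # normalizing constant, computed once

def von_mises_pdf(theta):
    """Density of the von Mises distribution on the circle."""
    return math.exp(KAPPA * math.cos(theta - MU)) / NORM

# Periodic: the density repeats after one full turn.
print(abs(von_mises_pdf(1.0) - von_mises_pdf(1.0 + 2 * math.pi)) < 1e-9)

# Integrates to 1 over one period (simple Riemann sum).
n = 20000
total = sum(von_mises_pdf(-math.pi + 2 * math.pi * i / n) * 2 * math.pi / n
            for i in range(n))
print(abs(total - 1) < 1e-6)
```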
Here's the code:
Manipulate[
Show[
GraphicsGrid[
Table[{
Graphics[{
Lighter[ColorData["Rainbow"][1 - (n + 10)/21], .2],
Disk[{#, 0}, PDF[VonMisesDistribution[0, .3], Mod[# + (-1)^n t, 2 π, -π]]] & /@ Range[-4 π, 4 π, π/6]},
PlotRange -> {{-4 π - π/12, 4 π + π/12}, {-.5, .5}}]},
{n, -10, 11}],
Spacings -> {-6, Automatic}], ImageSize -> 600, Background -> GrayLevel[.1],
PlotRangePadding -> {None, Scaled[.0242]}
],
{t, 0, 2 π}]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dots12sc.gif&userId=610054
[2]: https://en.wikipedia.org/wiki/Von_Mises%E2%80%93Fisher_distribution

Clayton Shonkwiler, 2017-10-09T00:55:26Z

[GIF] Trifolium (Envelope of the trifolium curve)
http://community.wolfram.com/groups/-/m/t/1202255
![Envelope of the trifolium curve][1]
**Trifolium**
The animation shows 800 tangent lines to the trifolium as they slowly move around the curve. In order to make it, I first ran `PlaneCurveData["Trifolium", "ParametricEquations"]` to get the parametrization, which I then rotate to get a vertically-oriented trifolium:
tri[t_] := RotationMatrix[π/6].{-Cos[t] Cos[3 t], -Cos[3 t] Sin[t]};
After that, it's just a matter of creating the tangent lines with `InfiniteLine[]` and choosing some colors. Here's the code:
With[{d = 2 π/800.},
Manipulate[
Show[
Table[
Graphics[{Thickness[.001], Opacity[.8], Hue[Mod[(s + t)/π, 1]],
InfiniteLine[tri[t + s], tri'[t + s]]},
PlotRange -> {{-1.4, 1.4}, {-1.18125`, 1.61875`}}],
{t, 0., π - d, d}],
ImageSize -> 540, Background -> GrayLevel[.1]],
{s, 0, d}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=lem8.gif&userId=610054

Clayton Shonkwiler, 2017-10-13T03:19:40Z

[GIF] Grid (Transformation of the square grid)
http://community.wolfram.com/groups/-/m/t/1204826
![Transformation of the square grid][1]
**Grid**
Unlike [_Part of the Journey_][2], [_Play_][3], and [_Limits_][4], this is not a conformal transformation of a regular grid on the plane. Instead, I've taken the square grid, inverse stereographically projected it to the sphere, then orthogonally projected back to the plane, producing a collection of curves contained in the unit disk. This is not conformal since orthogonal projection does not preserve angles.
In the animation, I'm translating the entire grid by $-t (1,2)$ as $t$ varies from 0 to 1, which is a symmetry of the square grid, and applying the inverse-stereographic-project-then-orthogonally-project transformation.
There are a couple of quirks in the code. First, the `Disk[]` is there because I didn't extend the grid out far enough to actually fill in the center (which would have been computationally expensive); instead I just placed a small disk in the center to cover the hole in the middle. Second, the funny business on `x` in the `Table[]` is because I'm using progressively less precision for the grid lines which cluster in the center in order to cut down on computational complexity that doesn't actually contribute anything visible.
Anyway, here is the code:
InverseStereo[{x_, y_}] := {2 x/(1 + x^2 + y^2), 2 y/(1 + x^2 + y^2), (x^2 + y^2 - 1)/(1 + x^2 + y^2)};
With[{d = 30, cols = RGBColor /@ {"#FF5151", "#000249"}},
Manipulate[
Graphics[
{cols[[1]], Disk[{0, 0}, .07], Thickness[.003],
Line /@ # & /@ (Transpose /@ Table[InverseStereo[# - t {1, 2}][[;; 2]] & /@ {{n, x}, {x, n}},
{n, -d - 0.5, d + 0.5, 1},
{x, Join[Range[-d, -20], Table[-20 + i Abs[n]/40, {i, 1, 1600/Abs[n]}], Range[20, d]]}])},
Background -> cols[[-1]], ImageSize -> 540, PlotRange -> 1.1],
{t, 0., 1}]
]
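The `InverseStereo` map is easy to sanity-check outside the Wolfram code: every image point should land on the unit sphere. A small Python sketch (illustrative, not part of the original post):

```python
def inverse_stereo(x, y):
    """Inverse stereographic projection of the plane onto the unit sphere,
    matching the InverseStereo definition above."""
    d = 1 + x * x + y * y
    return (2 * x / d, 2 * y / d, (x * x + y * y - 1) / d)

# Every image point should satisfy X^2 + Y^2 + Z^2 = 1,
# and the origin maps to the south pole.
for x, y in [(0.0, 0.0), (1.0, 2.0), (-30.5, 17.0)]:
    X, Y, Z = inverse_stereo(x, y)
    print(abs(X * X + Y * Y + Z * Z - 1) < 1e-12)
```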
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=stereotiles12c.gif&userId=610054
[2]: http://community.wolfram.com/groups/-/m/t/1171765
[3]: http://community.wolfram.com/groups/-/m/t/1173214
[4]: http://community.wolfram.com/groups/-/m/t/1179440

Clayton Shonkwiler, 2017-10-18T05:10:47Z

Basis of factors for large degree polynomials
http://community.wolfram.com/groups/-/m/t/1198917
In $\mathbb{Z}_2$, the polynomial $x^{2^6}+x+1$ or $x^{64}+x+1$ factors into $x^4+x+1$, $x^{12}+x^9+x^5+x^2+1$, $x^{12}+x^9+x^5+x^4+x^2+x+1$, $x^{12}+x^9+x^8+x^5+1$, $x^{12}+x^9+x^8+x^5+x^4+x+1$, $x^{12}+x^9+x^8+x^6+x^3+x^2+1$. Below, under $1+x+x^{64}$, you can see the degree 12 factors arranged as columns, followed by the basis (a gray bar separates factors and basis). The same is shown for $n$ from 5 to 13.
[![factors and basis][1]][1]
In $\mathbb{Z}_2$, $x^{2^n}+x+1$ has many factors of degree $2 n$ and the number of basis elements always seems to be $n-2$. Here are pictures of the basis for $n$ from 7 to 18.
[![basis in z2][2]][2]
Here's Mathematica code for the first image.
data = Table[Module[{ polynomials, len, polyandbasis},
polynomials = Last[Sort[SplitBy[SortBy[CoefficientList[#, x] & /@ (First /@
FactorList[x^(2^power) + x + 1, Modulus -> 2]), {Length[#], Reverse[#]} &], Length[#] &]]];
len = Length[polynomials[[1]]];
polyandbasis = Flatten /@ Transpose[{ 3 Transpose[polynomials], Table[{0, 1, 0}, {len}],
3 Transpose[Select[RowReduce[polynomials, Modulus -> 2], Total[#] > 0 &]]}];
Column[{Text[x^(2^power) + x + 1], ArrayPlot[polyandbasis, PixelConstrained -> True,
ImageSize -> {800, 2 len + 4}, Frame -> False]}, Alignment -> Center]], {power, 5, 13}];
Column[{Row[Take[data, 6], Spacer[30]], Row[Take[data, {7, 8}], Spacer[60]], Row[Take[data, {9}]]}]
First question: Does the $\mathbb{Z}_2$ polynomial $x^{2^n}+x+1$ have a particular name? It has a lot of nice properties.
I'd like to make pictures of higher order basis elements. Unfortunately, Mathematica doesn't want to Factor $x^{1048576}+x+1$, claiming it's out of bounds. Also, PolynomialGCD doesn't like high exponents. I've looked at the [Cantor–Zassenhaus algorithm](https://en.wikipedia.org/wiki/Cantor%E2%80%93Zassenhaus_algorithm) and other factorization methods over finite fields, but didn't readily understand them.
Is there some clever way to get the basis of the $\mathbb{Z}_2$ factors of $x^{2^n}+x+1$ for $n$ from 19 to 120 in Mathematica? Is there some nice way of quickly getting some of the degree $2n$ factors?
(Also at [math.stackexchange](https://math.stackexchange.com/questions/2460638/basis-of-factors-for-large-degree-polynomials) )
[1]: https://i.stack.imgur.com/GGXyR.jpg
[2]: https://i.stack.imgur.com/SwUO0.jpg

Ed Pegg, 2017-10-06T18:33:27Z

Lines of 4 points and 3 degrees above zero
http://community.wolfram.com/groups/-/m/t/1190377
At [Extreme Orchards for Gardner](http://community.wolfram.com/groups/-/m/t/947771) I mentioned barycentric coordinates. I found an interesting connection between the terms of degree 0, 1, 2, 3 and barycentrics for record-setting lines in 3D space. The power set below has the *ab* terms of degree 0 to 3. I compute points from permutations of this power set; by appending 1 to each, the points can be gathered into lines with RowReduce. The resulting graphic shows how 64 points can be arranged in 66 lines of 4 points.
powerset ={{0, 0, 0, 1}, {0, 0, a, b}, {0, a^2, a b, b^2}, {a^3, a^2 b, a b^2, b^3}};
tetra = {{-1, -1, -1}, {-1, 1, 1}, {1, -1, 1}, {1, 1, -1}};
FromBarycentrics[{m_, n_, o_, p_}, {{x1_, y1_, z1_}, {x2_, y2_, z2_}, {x3_, y3_, z3_}, {x4_, y4_, z4_}}] := {m x1 + n x2 + o x3 + (1 - m - n - o) x4, m y1 + n y2 + o y3 + (1 - m - n - o) y4, m z1 + n z2 + o z3 + (1 - m - n - o) z4};
pts = Sort[FromBarycentrics[#/Total[#], tetra] & /@ Flatten[Permutations[#] & /@ (powerset /. {a -> 5, b -> 9}), 1]];
lines = Select[Union[Flatten[#, 1]] & /@ SplitBy[SortBy[Subsets[Append[#, 1] & /@ pts, {2}], RowReduce], RowReduce], Length[#] == 4 &];
lindex = Flatten[Position[pts, Drop[#, -1]] & /@ #] & /@ lines;
Graphics3D[{Sphere[#, .092] & /@ pts, Tube[pts[[#]], .02] & /@ lindex}, SphericalRegion -> True, Boxed -> False, ImageSize -> 800]
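The barycentric-to-Cartesian step is just a convex combination of the tetrahedron's vertices, which is easy to cross-check in Python (illustrative names, not the original code; here the weight list carries all four weights explicitly, whereas the Mathematica version derives the fourth as 1-m-n-o):

```python
def from_barycentrics(w, tetra):
    """Convex combination of the tetrahedron's vertices with weights w
    (w sums to 1), mirroring FromBarycentrics above."""
    return tuple(sum(wi * v[k] for wi, v in zip(w, tetra)) for k in range(3))

tetra = [(-1, -1, -1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]

# Equal weights give the centroid, which for this tetrahedron is the origin.
print(from_barycentrics((0.25, 0.25, 0.25, 0.25), tetra))

# A pure barycentric coordinate returns the corresponding vertex.
print(from_barycentrics((1, 0, 0, 0), tetra))
```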
![lines of 4][1]
Above, I use $a=5, b=9$, but any reasonable values work. The rows 0001 0011 0111 1111 1122 1112 1222 0012 0112 0122 1123 1223 1233 0123 2345 can be used instead to produce 241 lines of 5, a sporadic 3D extension of my previous graphic.
![lines of 5][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=linesof4.jpg&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=trees25-18-5.gif&userId=21530

Ed Pegg, 2017-09-23T01:21:42Z

[GIF] Voronoi visualization
http://community.wolfram.com/groups/-/m/t/1202074
I recently saw a gif showing the [growth of a Voronoi diagram][1] on [this][2] wiki page. This gif shows a Voronoi diagram but restricts each cell to lie in a disk that slowly grows over time.
I decided to recreate this with the Wolfram Language and thought I'd share the code and final result here.
# Visualization
First and foremost, here's the result:
![enter image description here][3]
# Code
First I start off with 20 random points in 2D:
pts = RandomReal[{-1, 1}, {20, 2}];
Then I extract each point's Voronoi cell by calling `VoronoiMesh` and then arranging the primitives to correspond to `pts`.
voronoi = VoronoiMesh[pts, {{-1, 1}, {-1, 1}}];
prims = BoundaryDiscretizeRegion /@ MeshPrimitives[voronoi, 2];
prims = Table[First[Select[prims, RegionMember[#, p] &]], {p, pts}];
Let's quickly pause to make sure the cells correspond to the correct point.
MapThread[Show[#1, Epilog -> {Red, PointSize[Large], Point[#2]}] &, {prims, pts}][[1 ;; 5]]
![enter image description here][4]
Now that we have the primitives, we can show the scene at radius $r$ by intersecting each cell with a disk of radius $r$ via `RegionIntersection`.
First we will discretize a disk to force `RegionIntersection` to return a `BoundaryMeshRegion`.
disk[{x_, y_}, d_, n_:100] := BoundaryMeshRegion[CirclePoints[{x, y}, d, n], Line[Mod[Range[n + 1], n, 1]]]
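The discretized disk is just $n$ evenly spaced points on a circle joined into a closed boundary. A rough Python analogue of that construction (a sketch; the starting angle is a convention and may not match Mathematica's `CirclePoints` exactly):

```python
import math

def circle_points(center, r, n=100):
    """n points evenly spaced on a circle of radius r about center."""
    cx, cy = center
    return [(cx + r * math.cos(2 * math.pi * i / n),
             cy + r * math.sin(2 * math.pi * i / n)) for i in range(n)]

pts = circle_points((0.0, 0.0), 2.0, n=8)
# All points lie at distance r from the center.
print(all(abs(math.hypot(x, y) - 2.0) < 1e-12 for x, y in pts))
```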
Now at radius $r$ we intersect each cell with a disk, which I packed into a function. First, here's the code for a single cell. It takes the Voronoi cell, its corresponding point, and a color for styling purposes.
colors = RandomColor[RGBColor[_, _, _, 0.3], 20];
PartialVoronoiCell[r_][p_, cell_, color_] :=
BoundaryMeshRegion[
RegionIntersection[disk[p, r], cell],
MeshCellStyle -> {1 -> Directive[Thick, GrayLevel[.5]], 2 -> color}
]
The main function will effectively map over each point. When $r \leq 0$, we just show the points.
PartialVoronoiCells[_?NonPositive] = Graphics[Point[pts], PlotRange -> {{-1, 1}, {-1, 1}}, PlotRangePadding -> Scaled[.0125]];
PartialVoronoiCells[r_] :=
Show[
MapThread[PartialVoronoiCell[r], {pts, prims, colors}],
Epilog -> Point[pts], PlotRange -> {{-1, 1}, {-1, 1}}, PlotRangePadding -> Scaled[.0125]
]
This function is fast enough to visualize the growth with `Manipulate`.
Manipulate[PartialVoronoiCells[r], {r, 0, 1}]
![enter image description here][5]
[1]: https://en.wikipedia.org/wiki/Voronoi_diagram#/media/File:Voronoi_growth_euclidean.gif
[2]: https://en.wikipedia.org/wiki/Voronoi_diagram
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=voronoi.gif&userId=46025
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-10-1221.55.07.png&userId=46025
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.gif&userId=46025

Chip Hurst, 2017-10-13T02:59:08Z

Triangular and pyramidal numbers
http://community.wolfram.com/groups/-/m/t/1205088
Let f(n) = "the triangular numbers" = 1 + 2 + . . . + n. Using 9 for n,
Graphics[{PointSize[Large], Point[Permutations[Range@9, {2}]]}]
![enter image description here][1]
I.e., the unequal pairs of numbers up to 9 plot as a 9×9 square minus
its length-9 diagonal. But this is just two copies of 1 + . . . + 8. So
![enter image description here][2]
Solving for f(n)
RSolve[%, f[n], n] // Factor
![enter image description here][3]
This is essentially how Gauß, the youngster, startled his grammar school teacher.
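The counting argument above can be replayed in Python (a cross-check, not part of the original post):

```python
from itertools import permutations

def T(n):
    """Closed form for the n-th triangular number, n(n+1)/2."""
    return n * (n + 1) // 2

n = 9
# Ordered unequal pairs from 1..n: an n x n square minus its diagonal...
pairs = list(permutations(range(1, n + 1), 2))
print(len(pairs) == n * n - n)
# ...which is two copies of 1 + 2 + ... + (n - 1):
print(len(pairs) == 2 * T(n - 1))
print(T(9))  # 45
```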
In high school, my friend Reed Carson and I got this a different way - by adding up the areas of n+1 triangles:
Table[{x, y}, {x, 4}, {y, x}]
> {{{1, 1}}, {{2, 1}, {2, 2}}, {{3, 1}, {3, 2}, {3, 3}}, {{4, 1}, {4,
> 2}, {4, 3}, {4, 4}}}
Graphics[{EdgeForm[White],
Blue, % /. {x_Integer, y_} -> Rectangle[{x, y}]}] /. {L___,
Rectangle[{x_Integer, y_}]} :> {L,
Triangle[{{x, y}, {x + 1, y}, {x + 1, y + 1}}], Red,
Triangle[{{x, y}, {x, y + 1}, {x + 1, y + 1}}]}
![enter image description here][4]
Now for 3D. Using as coordinates the integers 1, 2, ..., n taken three at a time,
Graphics3D[Sphere[Permutations[Range@9, {3}], 1/2]]
![enter image description here][5]
The unequal triads of numbers up to 9 plots as a 9x9x9 cube minus
those diagonals which have two or three coordinates the same:
Graphics3D[Sphere[Select[Tuples[Range@9, 3], Not[UnsameQ @@ #] &], 1/2]]
![enter image description here][6]
This is six triangulars of size 8 (= n-1) plus the central column of 9 (=n).
The former is six pyramids of size 7 (= n-2). Together they make a cube of size n=9.
So the equation for the pyramidal numbers solves as
RSolve[6 f[n - 2] + 6 n (n - 1)/2 + n == n^3, f[n], n] // Factor
![enter image description here][7]
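The recurrence fed to RSolve can be checked numerically. A small Python sketch, assuming (consistent with the recurrence) that the closed form is the tetrahedral-number formula n(n+1)(n+2)/6:

```python
def tri(n):
    """Triangular numbers: n(n+1)/2."""
    return n * (n + 1) // 2

def pyr(n):
    """Ordinary pyramidal (tetrahedral) numbers: n(n+1)(n+2)/6."""
    return n * (n + 1) * (n + 2) // 6

# Six pyramids of size n-2, six triangles of size n-1 (6*T(n-1) = 3n(n-1)),
# plus the central column of n should fill an n^3 cube:
print(all(6 * pyr(n - 2) + 6 * tri(n - 1) + n == n ** 3 for n in range(2, 50)))
```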
Instead of summing the triangular numbers 1 + 3 + 6 + . . ., summing $1^2 + 2^2 + . . . + n^2$ gives the "square pyramidal numbers",
Table[Sphere[{x, y, z}, 1/2], {z, 7}, {y, z}, {x, z}] // Graphics3D
![enter image description here][8]
each of which is the sum of two consecutive ordinary pyramidal numbers:
% /. Sphere[{x_, y_, z_}, 1/2] :> Sphere[{x, y, z}, 1/4] /; x < y
![enter image description here][9]
So the sum of the first n squares, $1^2 + 2^2 + . . . + n^2$, is
Factor[1/6 (n-1) (n+1) n+1/6 (n+1) (n+2) n]
![enter image description here][10]
(You can also see that each square (each layer) is the sum of two consecutive triangular numbers.)
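The "two consecutive pyramidal numbers" claim, and the closed form it yields, both check out numerically (a Python cross-check, outside the original Mathematica):

```python
def pyr(n):
    """Ordinary pyramidal (tetrahedral) numbers: n(n+1)(n+2)/6."""
    return n * (n + 1) * (n + 2) // 6

def sq_pyr(n):
    """Square pyramidal numbers by direct summation of squares."""
    return sum(k * k for k in range(1, n + 1))

# Each square pyramid is two consecutive ordinary pyramids,
# and matches the closed form n(n+1)(2n+1)/6:
print(all(sq_pyr(n) == pyr(n - 1) + pyr(n) for n in range(1, 50)))
print(all(sq_pyr(n) == n * (n + 1) * (2 * n + 1) // 6 for n in range(1, 50)))
```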
But, as before, my friend Reed and I got this geometrically, as the sum of a big pyramid plus two triangular arrays of half-cubes, plus n "notched" cubes (shown transparent) running up the long diagonal, each notch being a little pyramid of volume ⅓:
Graphics3D[{White,
Table[If[x == y, {Opacity[.0], #}, #] &@
Cuboid[{x - 1, y - 1, 5 - z}], {z, 5}, {y, z}, {x, z}],
Polygon[{{{0, 0, 0}, {5, 0, 0}, {5, 5, 0}, {0, 5, 0}}, {{0, 0,
0}, {0, 0, 5}, {5, 0, 0}}, {{0, 0, 5}, {5, 5, 0}, {5, 0,
0}}, {{0, 0, 5}, {0, 0, 0}, {0, 5, 0}}, {{0, 0, 5}, {5, 5,
0}, {0, 5, 0}}}]}]
![enter image description here][11]
For n = 5, the volume of this is big pyramid + n notched cubes + two triangles of half-cubes, or
Factor[n^3/3+(1-1/3) n+(2 n (n-1))/(2 2)]
![enter image description here][12]
(For this problem, we don't need to know that the volume of a pyramid is always ⅓ the volume of its circumscribing prism, because a cube can always dissect into three copies of the (square based) pyramid we're discussing.)
![enter image description here][13]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=78961.png&userId=20103
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=73512.png&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=29923.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=64344.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=35585.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=68706.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=25207.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=99038.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=103309.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=175810.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=797911.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=852212.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=902213.png&userId=20103

Bill Gosper, 2017-10-18T15:46:36Z

Mike Foale's Machine Learning Flight System
http://community.wolfram.com/groups/-/m/t/1179035
## Introduction ##
A few weeks ago, I had the privilege of speaking with Michael Foale -- astronaut, astrophysicist, entrepreneur, and Mathematica fan -- about his recent work with the Wolfram Language on the Raspberry Pi. As an experienced pilot, Mike thought he could use the Wolfram Language's machine learning capabilities to alert pilots to abnormal or unsafe flying conditions. Using a collection of sensors attached to a Pi, the Wolfram Language's Classify function would constantly analyze things like altitude, velocity, roll, pitch, and yaw to determine whether the pilot might be about to lose control of the plane. You can read [Mike's User story here][1], his experience with [Mathematica on the Space Station Mir here][2], and the full code for Mike's Imminent Loss Of Control Identification system, or ILOCI, [is here][3].
## Getting the Initial Data ##
Mike decided that the Raspberry Pi would be a good base for this project because of how easy it is to connect various sensors to the device. In his case, there were 3 sensors he wanted to read from: an accelerometer, an aero sensor attached to the tail, and a flex sensor attached to the rudder. All of these are attached via GPIO pins, which the Wolfram Language can communicate through. In Mike's case, he wrote a MathLink driver in C to do this, but since he began this project we have been pushing to make GPIO communication easier. Now, the Wolfram Language can read from sensors with [the DeviceOpen and DeviceRead functions][4]. If Mike were to re-do this project, that would have saved him quite a bit of time and debugging!
![Left: Prototype Pi and sensor setup. Right: ILOCI attached to a small plane for a test flight.][5]
Before the Classify function can tell whether current flying conditions are normal or not, it must first learn what normal is. To do this, Mike took his Pi setup on a test-flight where he occasionally forced his plane to stall, or come close to stalling (please, do not try this for yourself!). After landing, he manually separated the sensor readings into "in family" and "out of family" moments -- that is, normal flying conditions and moments where loss of control is imminent.
## Importing the Data ##
For Mike's flights, he had the Pi save data from an initial test flight to a CSV file and had Mathematica import that file to train his Classify function on. For example:
datafile = FileNameJoin[{"","home","pi","Documents","training.dat"}];
data = Import[datafile];
For convenience's sake, I've embedded this same data in a notebook attached to this post, so that you can test and manipulate this data on your own as well. This data, saved under the variable "data", was divided up by Mike into the "in family" and "out of family" periods mentioned earlier:
noloc1 = Take[data, {3500, 3800}]; (* Ground, engine off *)
noloc2 = Take[data, {3900, 4200}]; (* Ground, engine on *)
noloc3 = Take[data, {4500, 4600}]; (* Takeoff, climbing at 60 knots *)
noloc4 = Take[data, {4800, 4900}]; (* Climbing, 70 knots *)
noloc5 = Take[data, {5000, 5200}]; (* Climbing, 50 knots *)
noloc6 = Take[data, {6200, 6300}]; (* Gliding, 42 knots *)
noloc7 = Take[data, {6900, 7100}]; (* Climbing, 60 to 70 knots *)
noloc8 = Take[data, {9200, 9400}]; (* Landing *)
noloc9 = Take[data, {9300, 9400}]; (* Rolling out*)
loc1 = Take[data, {6450, 6458}]; (* Straight stall *)
loc2 = Take[data, {6480, 6484}]; (* Left stall *)
loc3 = Take[data, {6528, 6534}]; (* Right stall *)
loc4 = Take[data, {6693, 6700}]; (* Right Yaw *)
loc5 = Take[data, {6720, 6727}]; (* Left Yaw *)
You might notice that the list above doesn't cover the entire dataset. That's because some of the data is kept aside for verification, to ensure the Classify function is recognizing "in family" and "out of family" moments correctly. This is a very important part of training any artificially intelligent program! Below are some plots of the above datasets, just to get a visual feel for what the Pi is "seeing" during a flight.
ListLinePlot[Transpose[noloc4],
PlotLegends -> {"AccelerationZ", "AccelerationY", "AccelerationX",
"RudderDeflection", "ElevatorDeflection", "TailYawRelative",
"TailAngleAttackRelative", "TailAirspeedRelative"}, Frame -> True]
![Sensor readings while climbing at 70 knots][6]
ListLinePlot[Transpose[loc1],
PlotLegends -> {"AccelerationZ", "AccelerationY", "AccelerationX",
"RudderDeflection", "ElevatorDeflection", "TailYawRelative",
"TailAngleAttackRelative", "TailAirspeedRelative"}, Frame -> True]
![Sensor readings while stalled][7]
These plots give us a good idea of what happens at some specific instances, but what does the flight as a whole look like? Recall from above that takeoff begins around datapoint 4500, the first stall around 6450, landing around 9200, and the rollout ends around 9400. According to Mike, the rudder movement around 11000 is simply moving the rudder to steer the aircraft back into the hangar.
ListLinePlot[Transpose[data],
PlotLegends -> {"AccelerationZ", "AccelerationY", "AccelerationX",
"RudderDeflection", "ElevatorDeflection", "TailYawRelative",
"TailAngleAttackRelative", "TailAirspeedRelative"}, Frame -> True,
ImageSize -> Large]
![Sensor readings from whole flight][8]
## Training the Classifier Data ##
After separating the data, Mike created Rules to label these moments as "Normal", "Stall Response", "Yaw Response Left", and "Yaw Response Right"; respectively "N", "D", "L", and "R". These Rules teach Classify which patterns belong to which label, so that later on Classify can tell what the appropriate label is for incoming, unlabeled data. Note that the ConstantArray functions simply repeat the data 10 times so the "out of family" moments are not overshadowed by the "in family" ones.
normal = Join[noloc1, noloc2, noloc3, noloc4, noloc5, noloc6, noloc7, noloc8];
normalpairs = Rule[#1, "N"] & /@ normal;
down = Flatten[ConstantArray[Join[loc1, loc2, loc3], 10], 1];
downpairs = Rule[#1, "D"] & /@ down;
left = Flatten[ConstantArray[loc4, 10], 1];
leftpairs = Rule[#1, "L"] & /@ left;
right = Flatten[ConstantArray[loc5, 10], 1];
rightpairs = Rule[#1, "R"] & /@ right;
Finally, with the data segmented and labeled, Mike created a ClassifierFunction able to take live data from the sensors, then quickly tell the pilot when something is wrong and how to correct it.
classify = Classify[Join[normalpairs, downpairs, leftpairs, rightpairs], Method -> {"NeuralNetwork", PerformanceGoal -> "Quality"}]
ClassifierInformation[classify]
![Output of the ClassifierInformation function][9]
Let's use this ClassifierFunction on a couple of the data points that we set aside earlier, to be sure the ClassifierFunction is correct, and to show how a ClassifierFunction is used.
nolocVerify = Take[data, {4600, 4700}];
locVerify = Take[data, {6587, 6595}];
classify[nolocVerify]
classify[locVerify]
![Results of Mike's classifier function on untrained data][10]
Recall that the ClassifierFunction returns one of 4 labels for each data point: "N" for normal, "L" for left yaw response, "R" for right yaw response, and "D" for down response. Mike's ClassifierFunction perfectly recognizes the first verification set, and comes close to being perfect on the second set. Not bad, given how little data he had to train it with!
This use-case gives a pretty good idea of how the ClassifierFunction works, but for a more in-depth example you can watch [this video from Wolfram Research][11].
## Using the ClassifierFunction on Real Data ##
At the moment I am neither a pilot nor the owner of an airplane, so performing a live test of Mike's ClassifierFunction would be a bit challenging. Fortunately, the Wolfram Language makes it easy to take Mike's recorded data and re-run it as though the ClassifierFunction were receiving this data in real time. First, we need to import the data that Mike recorded in his test flight. We actually did this already, when we called Import on the file containing the data. Next we'll import the timing data from the test flight. This is the absolute time in seconds and microseconds from the beginning and end of the flight, so subtracting the beginning time from the end time gives us the total time of the flight. With the times and the data known, we can determine how often the Pi read in measurements on the test flight. We will use that frame time to "replay" the test flight accurately. Mike's ILOCI system records timing information that we can Import from, but again I'll include it here for the sake of convenience:
timing = {1466861802, 255724, 1466867826, 498879, 11660};
dataseconds = timing[[3]] - timing[[1]];
datausecs = timing[[4]] - timing[[2]];
frametime = (dataseconds + datausecs*1.0*^-6)/(Length[data] - 1)
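The frame-time arithmetic is language-agnostic. Here is the same computation in Python, with one assumption labeled explicitly: the fifth field of the timing record is taken to be the number of recorded samples (the Mathematica code uses `Length[data] - 1` instead, which this sketch approximates):

```python
# Timing record from the flight log: start seconds, start microseconds,
# end seconds, end microseconds, and (assumed here) the sample count.
timing = (1466861802, 255724, 1466867826, 498879, 11660)

data_seconds = timing[2] - timing[0]
data_usecs = timing[3] - timing[1]
n_samples = timing[4]  # assumption: the fifth field is the record count

frame_time = (data_seconds + data_usecs * 1e-6) / (n_samples - 1)
print(data_seconds)          # total flight time in whole seconds
print(round(frame_time, 4))  # seconds between successive sensor readings
```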
Now, let's use that frame length to create an animated output of the ClassifierFunction. This goes through the whole dataset and runs at the same rate as the Pi did. If we were actually flying an airplane, this would show us what the Pi thinks about our current environment, whether our motion is "in-family" or "out-of-family".
Animate[
classify[ data[[frame]] ],
{frame, 1, Length[data], 1},
AnimationRate -> (1/frametime),
FrameLabel -> {{None, None}, {None, "Full Flight Playback"}}
]
This would take a little over 90 minutes to run, and the non-normal readings go by fairly quickly, so let's focus in on some of the more interesting sections. First, let's see the straight stall that was reserved for verification. Again, the ClassifierFunction was not trained using this set -- it is brand new as far as the Classifier is concerned.
Animate[
classify[ data[[frame]] ],
{frame, 6580, 6595, 1},
AnimationRate -> (1/frametime),
FrameLabel -> {{None, None}, {None, "Stall Playback"}}
]
![Animate of the stall playback][12]
Notice that the Classifier constantly reads the situation and updates its classification accordingly. Next let's look at one of the verification sets where everything was normal. It should read as "N" for the entire set:
Animate[
classify[ data[[frame]] ],
{frame, 6300, 6400, 1},
AnimationRate -> (1/frametime),
FrameLabel -> {{None, None}, {None, "Normal Playback"}}
]
![Animate of the normal playback][13]
## Conclusion ##
If we go back and count, there are about 60 lines of code. That's all that was needed to create plots, animations, and a neural net-based Classifier that might one day save lives. This is what makes the Wolfram Language such a powerful choice for projects like this -- quick prototyping and a plethora of built-in functions allow users to create some truly unique projects, regardless of experience or expertise. We hope that this will inspire you to start experimenting with your own ideas in the Wolfram Language!
[1]: http://www.wolfram.com/mathematica/customer-stories/training-a-neural-network-to-think.html
[2]: http://www.wolfram.com/mathematica/customer-stories/astronaut-places-a-customer-service-call-to-wolfram-research-from-space-station-mir.html
[3]: https://github.com/cfoale/ILOCI
[4]: http://reference.wolfram.com/language/guide/UsingConnectedDevices.html
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=PiAndPlaneSetup.png&userId=313765
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=LinePlot1.png&userId=313765
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=LinePlot2.png&userId=313765
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=LinePlot3.png&userId=313765
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ClassifierInfo.png&userId=313765
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ClassifierResults.png&userId=313765
[11]: https://youtu.be/ce6UptPYKxI?t=20m40s
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=StallPlayback.gif&userId=313765
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=NormalPlayback.gif&userId=313765

*Brett Haines, 2017-09-07*

Simulating GeoNearest with GeoEntities & GeoDisk
http://community.wolfram.com/groups/-/m/t/1203815
Imagine you have a set of geo positions that outline a path, and you need to find all ZIP codes within a 5-mile distance from this path. A `GeoNearest` computation might time out due to complexity even for a single geo position, and even more so for, say, a few hundred of them along a path. The computation is hard because it involves many intersections with the ZIPCode polygons. But there is an easy workaround via `GeoEntities`, suggested [here][1]. I will show how to generalize it to a path. Import the data (attached) and extract the positions:
data = Import["Line2.kml"];
pos = First[Cases[data, _GeoPosition, Infinity]]
![enter image description here][2]
Sample 5-mile geo disks along the path at an optimal step: not too small, to avoid redundant overlap, and not too large, to avoid gaps in coverage:
disks = GeoDisk[GeoPosition[#], Quantity[5, "Miles"]] & /@ pos[[1, 1 ;; -1 ;; 30]]
![enter image description here][3]
As we can see, this gives pretty good coverage:
GeoGraphics[{disks, Point[pos]}]
![enter image description here][4]
But why a step of 30? At first it could be trial and error, but a simple calculation gives the same answer. We need the number of points along the path (for disk centers):
Length[First[pos]]
(* 257 *)
Length of path in miles:
UnitConvert[GeoLength[GeoPath[pos]], "Miles"]
(* Quantity[29.52736447128438`, "Miles"] *)
And now the upper bound for the step, which I then decreased a bit to 30 for better coverage:
Length[First[pos]] Quantity[5, "Miles"]/UnitConvert[GeoLength[GeoPath[pos]], "Miles"]
(*43.51895345247198`*)
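This step calculation can be wrapped in a small helper; `diskStep` is a hypothetical name, and `Floor` keeps the step conservative so consecutive disks overlap rather than leave gaps:

(* Hypothetical helper: largest index step between disk centers such that
   consecutive disks of the given radius along the path still overlap *)
diskStep[pos_GeoPosition, radius_Quantity] :=
 Floor[Length[First[pos]]*radius/
   UnitConvert[GeoLength[GeoPath[pos]], QuantityUnit[radius]]]

diskStep[pos, Quantity[5, "Miles"]]
(* 43 for this path *)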
Now easily get all ZIPs with `GeoEntities`:
zips = Flatten[GeoEntities[disks, "ZIPCode"]]
![enter image description here][5]
and visualize the result. We take `Union` below to remove duplicates. Note the high diversity of ZIP-region sizes. The ZIP regions obviously extend beyond the disks, but all disks are contained within the outermost border of the ZIP regions, so we did not miss anything.
GeoGraphics[{disks, Point[pos], {EdgeForm[{Red, Thick}], FaceForm[], Polygon[Union[zips]]}}]
![enter image description here][6]
[1]: http://community.wolfram.com/groups/-/m/t/1199959
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=we5q34terg.png&userId=11733
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-10-16at3.14.58AM.png&userId=11733
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=45trhgar4q3t5wthr.png&userId=11733
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-10-16at3.16.45AM.png&userId=11733
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rehgw5465ujetyhrsg.png&userId=11733

*Vitaliy Kaurov, 2017-10-16*

Affordable Care Act: an experiment on possible new regulations
http://community.wolfram.com/groups/-/m/t/1201589
President Trump will evidently propose today (10/12/2017) some expansion of alternative health insurance arrangements, such as "short term health insurance" or "Association Health Plans." Supporters say this proposal will give more Americans the chance to buy health insurance policies at lower prices and with terms that better fit their needs. Critics say that this expansion will result in healthier people deserting insurance plans in which the price cannot depend on the expected claims of the individual. This desertion will in turn cause expected claims in the original pool to rise, leading to insurers losing money and raising prices.
Can Mathematica quantify this possibility? Thus, a little quick experiment.
I start with a reparameterized version of the lognormal distribution in which the first parameter is the mean and the second parameter is the ratio of the median to the mean.
LogNormalDistribution3[m_,ν_]:=LogNormalDistribution[Log[m]-Log[1/ν],Sqrt[2] Sqrt[Log[1/ν]]]
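A quick check that the reparameterization behaves as intended: the mean comes back as *m*, and the median-to-mean ratio as ν:

With[{d = LogNormalDistribution3[7000, 0.27]},
 {Mean[d], Median[d]/Mean[d]}]
(* {7000., 0.27} *)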
Here is the probability density function for the parameter values used below:
Plot[PDF[LogNormalDistribution3[7000, 0.27], x], {x, 0, 10^3}, PlotTheme -> "Business"]
![enter image description here][1]
I now create a risk pool of 100,000 persons with mean claims of 7,000 and a median-to-mean ratio of 0.27. This models the current situation fairly well. The actual choice of mean does not affect the results; the median-to-mean ratio, however, does.
rv = RandomVariate[LogNormalDistribution3[7000, 0.27], 100000]
![enter image description here][2]
Now, create a function that computes the mean claims of those not defecting to an alternative pool when those with expected claims below the median have a specified probability of defecting to the alternative pool. We could create a more general defection function that made the probability of defection depend inversely on the expected claims of the individual, but this approach strikes me as reasonable for a quick and dirty analysis.
residualExpectedClaims[rv_, defectionProbability_] := Mean@Pick[rv,
  With[{median = Median[rv]},
   Map[If[# < median, RandomVariate[BernoulliDistribution[1 - defectionProbability]], 1] &, rv]], 1]
Now let's run an experiment in which 25% of those below the median defect.
r025 = residualExpectedClaims[rv, 0.25]
We can now compute the fractional increase in risk in the original pool.
(r025-Mean[rv])/Mean[rv]
The answer is about 13%.
We can also make a table showing how the residual expected claims vary as the fraction of defectors increases:
originalPoolExpectedClaims=Table[{defectPct,residualExpectedClaims[rv,defectPct]},{defectPct,0,0.9,0.1}]
Here's the output:
> {{0., 7000.}, {0.1, 7322.07}, {0.2, 7684.62}, {0.3, 8111.13}, {0.4, 8552.09},
> {0.5, 9061.95}, {0.6, 9677.75}, {0.7, 10359.4}, {0.8, 11146.6}, {0.9, 12142.1}}
ListLinePlot[originalPoolExpectedClaims, PlotTheme -> "Business"]
![enter image description here][3]
There's obviously more that can be done, but I thought Mathematica did a great job here in quickly modeling out the consequences of what may be a very important policy change in the United States.
# Closed form solution
It turns out there is a closed form solution to the problem I originally posed and for which I originally simulated results. I present it here.
Recall that the project is to determine the ratio between the expected values (means) of two distributions: an original distribution and a post-defection distribution. The original distribution represents the health insurance claims filed by a pool of insureds. It is described as a lognormal distribution that has a mean of *m* and a ratio between its median and its mean of *v*. One can think of *v* as a measure of the heterogeneity of claims. Low *v* means claims are highly heterogeneous; high *v* means the distribution is more symmetric and that claims are fairly homogeneous.
LogNormalDistribution3[m_, \[Nu]_] :=
LogNormalDistribution[Log[m] - Log[1/\[Nu]],
Sqrt[2] Sqrt[Log[1/\[Nu]]]];
original = LogNormalDistribution3[m, v];
The idea of the post-defection distribution is that, because of hypothesized implementations of an Executive Order signed today by President Trump, some fraction of those below the median level of claims might purchase alternative forms of insurance. We can describe the post-defection distribution as a mixture distribution in which the weights are *a* and 1-*a*. The components are both truncated distributions, the first being right-truncated at the median and the second being left-truncated at the median.
postDefection =
MixtureDistribution[{a,
1 - a}, {TruncatedDistribution[{0, m*v}, original],
TruncatedDistribution[{m*v, \[Infinity]}, original]}]
We can now compute the mean of the post-defection distribution relative to the original mean *m*; the output is called *ratio*. Warning: this may take up to 10 minutes, depending on one's computer. Apparently, the underlying integral is somewhat complex.
ratio = FullSimplify[
Refine[Mean[postDefection], 0 < a < 1 && m > 0 && 0 < v < 1]/m]
The result is:
$$(2 a-1) \text{erfc}\left(\sqrt{-\log (v)}\right)-2 a+2$$
What we see is that the mean *m* has dropped out and that the ratio depends simply on the weighting *a* placed on those with claims less than the median and *v* the heterogeneity of the original distribution.
I would prefer to describe the scenario not in terms of weights attached to those with claims less than the median but rather with a fraction *b* of those with claims below the median that will defect to alternative forms of insurance due to implementations of the Trump executive order. This can be done with a reparameterization rule:
reparameterizationRule = a -> ((1 - b)*(1/2))/((1 - b)*(1/2) + 1/2);
FullSimplify[ratio /. reparameterizationRule]
The result is
$$\frac{b \text{erfc}\left(\sqrt{-\log (v)}\right)-2}{b-2}$$
We can thus write our final function *meanRatio* that yields the desired result:
meanRatio[v_, b_] := (-2 + b*Erfc[Sqrt[-Log[v]]])/(-2 + b)
If we set v=0.27 and b=0.25 -- which was what was done in the original simulation -- we obtain 1.12777, which is extremely close to the simulated result. If 25% of those with claims below the median defect to alternative forms of insurance, the mean claims of those remaining in the original pool goes up about 13%.
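Evaluating the closed form numerically confirms the simulated result:

meanRatio[0.27, 0.25] // N
(* 1.12777 *)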
The code below produces a surface showing how different values of *v* and *b* determine the ratio of mean claims post defection to original mean claims.
Plot3D[meanRatio[v, b], {v, 0.05, 0.95}, {b, 0, 1},
AxesLabel -> {"median to mean ratio", "defector fraction", "new mean ratio"},
ImageSize -> 600, ColorFunction -> "TemperatureMap", MeshFunctions -> {#3 &}]
![enter image description here][4]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=erttrtgsfd345rehgsda.png&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=34yewgadfs.png&userId=11733
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ert4354whtrsbdv.png&userId=11733
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-10-20at5.35.48AM.png&userId=11733

*Seth Chandler, 2017-10-12*

[WSS16] Quantum Computing with the Wolfram Language
http://community.wolfram.com/groups/-/m/t/897811
**Introduction to the Problem**
While a gate-based quantum computer has yet to be implemented with more than a handful of qubits, and some worry that the decoherence problem will remain an obstacle to real-world use of these machines, the field of theoretical quantum computing has its own virtues apart from these problems of construction and implementation. The theory of quantum computation and quantum algorithms has been used as a powerful tool to tackle long-standing problems in classical computation, such as proving the security of certain encryption schemes and refining complexity classifications for
some approaches to the Traveling Salesman problem. Moreover, learning how to apply quantum effects like superposition, interference, and entanglement in a useful, computational, manner can help students gain a better understanding of how the quantum world really works. These educational and research advantages of quantum computing, along with the ever-present goal of designing new quantum algorithms that can provide us with speedups over their classical counterparts, furnish ample reason to make the field as accessible as possible. The goal of this project was to do just that by using the Wolfram language to design functionality that allows for researchers and students alike to engage with quantum computing in a meaningful way.
**Getting it Done**
This project involved the design and development of a suite of functions that allows for the simulation of quantum computing algorithms. The overarching goal was a framework that allows for easy implementation of quantum circuits, with minimal work done by the user. The specific design challenges were to have a tool simple enough to be used as an educational aide and powerful enough for researchers. To this end circuits can be built iteratively, allowing students, and those new to quantum computing, to build a working knowledge of the field as they increase the complexity of the algorithms. The system has a universal set of gates allowing it to carry out any operation possible for a quantum computer (up to limits on the number of qubits due to the size of the register).
----------
*Short note on this: I have not rigorously tested the system yet, but unless you want to wait several hours for your computation to complete, I suggest not attempting computations with more than ~20 qubits. Classically simulating an N-qubit register requires a state vector of length 2<sup>N</sup>. Interestingly, it was this insight into the computational difficulty of simulating a quantum state that led Feynman to realize the power that quantum computing could have.*
----------
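To make that scaling concrete: with machine-precision complex amplitudes at 16 bytes apiece, the state vector's memory footprint doubles with each added qubit (rough figures, ignoring overhead):

(* {qubits, state-vector length, approximate memory in MiB} *)
Table[{n, 2^n, N[2^n*16/2^20]}, {n, {10, 20, 30}}]
(* {{10, 1024, 0.015625}, {20, 1048576, 16.}, {30, 1073741824, 16384.}} *)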
The project has functionality for the following gates: Hadamard, X, Y, Z, Rotation (any angle, about any axis), C-NOT, C-anything, SWAP, and QFT. It takes input in standard quantum circuit notation, and can output circuit diagrams, and the corresponding unitary transformation matrix as well as return the probabilities for results of measurements on a given qubit. Moreover, there is built in functionality for easy circuit addition, allowing one to stitch together large circuits from smaller ones, a boon for comprehension and testing.
**A Simple Example**
We initialize a circuit by specifying its corresponding circuit notation. For the sake of brevity, we start with a medium-sized circuit that is already formed and perform operations on it, but one can just as easily build a circuit up qubit-by-qubit and gate-by-gate with the `applyQ` and `circuitQ` functions. Below we define the variable `quantumCircuit` using the function `circuitQ`, to which we pass some circuit notation. This notation is just a matrix representing the quantum logic circuit, with the gates and qubits arranged schematically.
quantumCircuit= circuitQ[{{"H", "R[1,Pi/2]", "N", "SWAP1"}, {"H", 1, "C",
"SWAP2"}, {"X", 1, "C", 1}}];
`circuitQ` outputs the circuit diagram corresponding to the notation given:
![enter image description here][1]
But, say I wish to alter the circuit. We can add in as many layers of gates or extra qubits as we wish, without having to deal with the pesky notation matrix. Here I add a Hadamard gate to the second qubit after the SWAP using the function `applyQ`:
applyQ[quantumCircuit, "H", 2]
the output of which is:
![enter image description here][2]
One can also use `Append`, `Join`, `Nest`, and a variety of other Wolfram Language functions to build up highly complex circuits. However, the `circuitQ` function is overloaded, and one can also perform computations with it. We will now build the actual unitary transformation matrix that corresponds to the circuit diagram:
unitar=matrixBuild@quantumCircuit
which, for our circuit, produces:
![enter image description here][3].
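For intuition about where such matrices come from: each layer of a circuit contributes the Kronecker product of its per-qubit gate matrices, and the layer matrices multiply together. A generic two-qubit illustration (not the internals of `matrixBuild`):

had = {{1, 1}, {1, -1}}/Sqrt[2];   (* Hadamard gate *)
x = {{0, 1}, {1, 0}};              (* Pauli-X (NOT) gate *)
(* One layer applying H to qubit 1 and X to qubit 2 *)
KroneckerProduct[had, x] // MatrixForm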
Now we can easily perform operations with the circuit. Let's specify a random 3-qubit initial state (in the computational basis):
initialState = {1, 0, 0, 1, 0, 0, 1, 0} // Normalize
![enter image description here][4]
We can pass this initial state to the circuit easily with:
premeasure=unitar.initialState
which gives back the state of the quantum register (in this case our 3 qubits) after they have been operated on by the circuit, but pre-measurement:
![enter image description here][5]
We can now sample our state using the `projection` function. Here we will calculate the probability of getting state |0> when measuring qubit #3:
projection[2,0,premeasure]
which, for our case, gives back a probability of 2/3.
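The measurement step itself is simple linear algebra: the probability of a given outcome on a given qubit is the summed squared magnitude of the amplitudes of all basis states whose corresponding bit matches that outcome. A hypothetical stand-alone sketch (`qubitProbability` is not the project's `projection` function, and it assumes qubits are indexed from 0 starting at the most significant bit):

(* Probability of measuring `outcome` (0 or 1) on qubit k of `state` *)
qubitProbability[k_Integer, outcome_Integer, state_List] :=
 With[{n = Log2[Length[state]]},
  Total[Abs[Pick[state,
      IntegerDigits[Range[0, Length[state] - 1], 2, n][[All, k + 1]],
      outcome]]^2]]

qubitProbability[0, 0, Normalize[{1, 0, 0, 1}]]
(* 1/2 for a Bell state *)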
**Wrap Up**
This was only a very simple example. Using `applyQ` and `circuitQ` one can build and modify highly complex quantum circuits easily. `matrixBuild` does all the math of calculating the corresponding unitary transformation matrix for you. All that is left is for the user to pass an initial state and see the output. A good learning technique is to start with a very simple circuit and initial state, and slowly build up in complexity, performing measurements at each step, to build an intuition and working knowledge of any given quantum circuit.
An obvious next step for the project would be to add functionality that allows for the easy implementation of a general quantum oracle. I would also like to add more gates to the gate library, including $\sqrt{SWAP}$, Toffoli, and QFT<sup>-1</sup>, which were left out due to lack of time and are trivial to implement. These tools would make it significantly easier for researchers to model any given quantum circuit.
**Where is the NKS?**
Finding quantum algorithms that perform useful tasks faster than their classical counterparts is an open area of research. However, it is often quite difficult to design these algorithms to take advantage of interference, as well as the structure in a given computational problem that may be useful to exploit. As such, there are only a small number of important quantum algorithms that are currently known. Hopefully this tool will allow for NKS-style search experiments for interesting behavior in quantum circuits. Similar searches have been carried out for classical circuits, and the tools I built will make it easy to generate vast sets of random quantum circuits that follow certain rules. What remains is to build useful analytic tools for combing the space.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=circuit.jpeg&userId=896802
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=circuit2.jpeg&userId=896802
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=unitar.jpeg&userId=896802
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3114initialstate.jpeg&userId=896802
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=finalstate.jpeg&userId=896802

*Aaron Tohuvavohu, 2016-08-02*

Create and deploy your own paclets in the Workbench
http://community.wolfram.com/groups/-/m/t/961186
# References
[![enter image description here][7]][8]
- [GitLink for Wolfram Language][9]
- [What is a “Paclet”?][10]
- [What can I do with “Syntax Templates” section at start of Symbol Pages (in Workbench)?][11]
# Eclipse work flow
If necessary, install the Wolfram WorkBench Plugin for Eclipse.
http://support.wolfram.com/kb/27221
In Window > Preferences > Wolfram > Special, make the following changes:
![enter image description here][1]
Apply and press OK. Then re-open preferences and go to Paclet Development. Enable function paclet support.
![enter image description here][2]
# Create a new application project
![enter image description here][3]
![enter image description here][4]
Open the ***PacletInfo.m*** file and useful buttons now appear.
![enter image description here][5]
Also edit your project properties or the nice buttons may vanish after a build or when you close the project.
![enter image description here][6]
# Build
Build your project and documentation as usual and then create your Paclet file and deploy.
# Install
You can install from a local file path, http, or ftp, as printing the definitions reveals:
PacletInstall[filepath]
Needs["GeneralUtilities`"]
PrintDefinitions["PacletInstall"]
PrintDefinitions["PacletManager`Manager`Private`installPacletFromFileOrURL"]
There is currently no support for https, but this can be patched, or you can simply download the file yourself and call PacletInstall on the local copy. This could be handy if you wanted to install from GitHub.
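A minimal sketch of that download-then-install approach (the URL and paclet name here are made up for illustration):

(* Download over https to a local file, then install the local copy *)
local = URLSave["https://example.com/downloads/MyPaclet.paclet",
   FileNameJoin[{$TemporaryDirectory, "MyPaclet.paclet"}]];
PacletInstall[local]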
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdf34q5w6yethdgsdfa.png&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rthe567iteyrtq54htefadvs.png&userId=11733
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=345tjhgfsdase56eutyjhstdgas.png&userId=11733
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=safd345q65uyhrtgfdasr43q5ega.png&userId=11733
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=asdf435657iifgshr54657eiutj.png&userId=11733
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fghu356rtgerfsgw6ejtrsa.png&userId=11733
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=logo.png&userId=11733
[8]: https://github.com/WolframResearch/GitLink
[9]: https://github.com/WolframResearch/GitLink
[10]: http://mathematica.stackexchange.com/questions/1660/what-is-a-paclet
[11]: http://mathematica.stackexchange.com/questions/66754

*Emerson Willard, 2016-11-11*