Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community, showing discussions in the tag Wolfram Science, sorted by most recent activity.
What's the difference between ( ) and [ ]? and some programming questions
http://community.wolfram.com/groups/-/m/t/1204255
1. I accidentally typed cos( ) instead of Cos[ ]; what happened here?
f[t_] := ((Exp[-b*t])*cos (w*t - alpha))
D[f[t], t]
Out:cos E^(-b t) w - b cos E^(-b t) (-alpha + t w)
Mathematica version 11.0.2
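For reference, `cos (w*t - alpha)` parses as the product of an undefined symbol `cos` with `(w*t - alpha)`, which is why `cos` survives into the derivative above as if it were a constant. A hedged sketch of the intended definition, using square brackets to call the built-in `Cos`:

```mathematica
(* square brackets call the built-in Cos; parentheses only group *)
f[t_] := Exp[-b*t]*Cos[w*t - alpha]
D[f[t], t]  (* the product rule now differentiates Cos[w*t - alpha] as well *)
```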
2. What's the difference between ( ) and [ ] when dealing with functions and variables?
3. How do I change text output into a handwriting style, and how do I change it back into computer-readable form?
J C 2017-10-17T23:33:00Z
Using Automatic for FaceGrids with Plot3D not working as for Plot
http://community.wolfram.com/groups/-/m/t/1204646
Hi,
I am sure there is a good reason why I don't seem to be able to do:
Plot3D[Evaluate[Abs[zin2/.{f->f0}]], {ca, c1/2, 2*c1}, {cb, c2/2, 2*c2},
FaceGrids->
{{{0,0,1}, {{c1},{c2}}},
{{1,0,0}, {{c2},Automatic}},
{{0,-1,0}, {{c1},{All}}}
}
]
but this will plot with a specific vertical grid and automatic horizontal grid lines with no problems:
Plot[Evaluate[Abs[zin2/.{{cb->c2, f->f0}}]], {ca, 0.5 c1, 2 c1}, GridLines->{{c1},Automatic}]
The documentation for FaceGrids specifically states that "For each face, specifications {xgridi, ygridi} can be given to determine the arrangement of grid lines. These specifications have the form described in the notes for GridLines."
The documentation for GridLines states that "For each direction, the following grid line options can be given:" and proceeds to list five options, including Automatic.
Any help would be greatly appreciated.
Thanks,
Jorge
Jorge Diaz-Santiago 2017-10-18T02:13:57Z
How to use the Product function when i is an index of variables?
http://community.wolfram.com/groups/-/m/t/1204490
Hello,
I am trying to write the following polynomial
P=(1+p_1+p_1^2)*(1+p_2+p_2^2)*...(1+p_i+p_i^2)... ...(1+p_n+p_n^2)
So I use Product function in the following way
P=Product[(1+p_i+p_i^2),{i,1,n}]
and get the wrong output.
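A likely cause, offered as a hedged guess: in input form `p_i` is the pattern `Pattern[p, i]`, not a subscripted variable, so the product does not generate distinct variables. Indexed variables such as `p[i]` (or explicit `Subscript[p, i]`) behave as intended:

```mathematica
(* p[i] gives a distinct indexed variable for each i *)
P = Product[1 + p[i] + p[i]^2, {i, 1, n}]
(* a concrete check with n = 2: *)
Product[1 + p[i] + p[i]^2, {i, 1, 2}]  (* (1 + p[1] + p[1]^2) (1 + p[2] + p[2]^2) *)
```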
Please help
Thanks in advance
Marina Kogan 2017-10-17T20:14:02Z
Calculate mean graph distance in a citation network?
http://community.wolfram.com/groups/-/m/t/1204062
Hi,
I'm a PhD student. For my research, I'm trying to find the shortest path lengths in a citation network. As I understand it, I can use the MeanGraphDistance function for this purpose in Mathematica. However, for a citation graph similar to the following, the mean graph distance shows up as infinity. Moreover, the number of connected components in this graph shows up as 10 even though it is a connected graph.
g = Graph[{1 -> 2, 1 -> 3, 1 -> 4, 2 -> 5, 2 -> 6, 2 -> 3, 3 -> 7,
3 -> 8, 4 -> 9, 4 -> 7, 4 -> 8, 4 -> 10}];
ConnectedComponents[g]
MeanGraphDistance[g]
This is the output I get
{{7}, {8}, {3}, {5}, {6}, {2}, {9}, {10}, {4}, {1}}
\[Infinity]
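A hedged explanation: the `->` edges make `g` a directed graph, so `ConnectedComponents` returns strongly connected components and `MeanGraphDistance` averages over directed reachability, where many vertex pairs are unreachable (hence the infinity). If edge direction is irrelevant for the citation distance, one sketch:

```mathematica
ug = UndirectedGraph[g];       (* ignore edge direction *)
ConnectedComponents[ug]        (* a single component *)
MeanGraphDistance[ug]          (* now finite *)
WeaklyConnectedComponents[g]   (* the directed-graph analogue of "connected" here *)
```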
I'm not able to understand why. Any help would be greatly appreciated.
Praveena Chandra 2017-10-17T01:56:12Z
[✓] Multiply more than 2 matrices?
http://community.wolfram.com/groups/-/m/t/1203793
I am trying to find the stiffness matrix of a bilinear rectangular element, so I need to multiply three matrices together in Mathematica. Multiplying three at once didn't seem to work, so I decided to multiply the first two matrices, and then that result by the third matrix. That doesn't seem to be working, either. It seems that Mathematica is not seeing my [T] matrix as a matrix, but rather as just a variable, even after I defined [T] above. Attached is what I've entered.
Any help is greatly appreciated!
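One pitfall worth checking in the code below, offered as a hedged guess: `Q = Transpose[B].M.B // MatrixForm` assigns a `MatrixForm` display wrapper to `Q`, not a matrix, so subsequent operations see an opaque expression rather than a matrix. A sketch separating computation from display:

```mathematica
(* assign the plain matrix; apply MatrixForm only when displaying *)
Q = Transpose[B].M.B;
Q // MatrixForm
Integrate[Q, {x, 0, 10}, {y, 0, 10}]
```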
Clear [x, y]
Clear [B]
B = ( {
{(y - 10)/100, 0, (10 - y)/100, 0, y/100, 0, -y/100, 0},
{0, (x - 10)/100, 0, -x/100, 0, x/100, 0, (10 - x)/100},
{(x - 10)/100, (y - 10)/100, -x/100, (10 - y)/100, x/100,
y/100, (10 - x)/100, -y/100}
});
Clear [M]
M = ({
{10666.67, 2666.67, 0},
{2666.67, 10666.67, 0},
{0, 0, 4000}
});
Clear [Q]
Q = Transpose[B].M.B // MatrixForm
Integrate[Q, {x, 0, 10}, {y, 0, 10}]
Tracy Borne 2017-10-16T17:57:26Z
Avoid evaluating cells twice for the initialization of a Manipulate?
http://community.wolfram.com/groups/-/m/t/1204311
The Manipulate program below allows the user to enter points on a LocatorPane and then draws a line through the points.
![enter image description here][1]
The Manipulate has an Initialization, where the programmer can define initial values for the points. The initial values are the variable ipts in the code below. What I observe is that the definition of the Manipulate must be evaluated twice before the modified values are displayed. I suspect that I am misunderstanding some fundamental concept about how initialization works. Could someone point out how to revise the code?
On a possibly related note, what must be done so that the variables ipts, xMin and xMax in the code below have a scope local to the Manipulate?
myTest2 = Manipulate[
(*User points*)
posSorted = Sort[pos];
xMin = Min[posSorted[[All, 1]] ];
xMax = Max[posSorted[[All, 1]] ];
gvfuncUser = Interpolation[posSorted, InterpolationOrder -> 1];
myPlot =
Plot[gvfuncUser[x], {x, xMin, xMax},
PlotRange -> {{0, 1}, {0, 1}}, Frame -> True, ImageSize -> 400];
Grid[{
{LocatorPane[Dynamic@pos, myPlot, LocatorAutoCreate -> True,
ContinuousAction -> False] }
}]
,
(*list of controls*)
{{pos, ipts}, ControlType -> None}
(*Initialization*)
, TrackedSymbols :> {pos, ipts}
, Initialization :> (
ipts = {{0.15, 0.35}, {0.25, 0.15}, {0.50, 0.17}, {0.75,
0.18}, {1, 1}};
posSorted = {}; (*do this so that posSorted is local to this Manipulate (?) *)
gvfuncUser = {};
)
, SynchronousUpdating -> False
];
myTest2
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Manipulate_Initialization.PNG&userId=894223
Robert McHugh 2017-10-17T04:38:31Z
[✓] Speed up Sum while implementing the Jacobi method for linear systems?
http://community.wolfram.com/groups/-/m/t/1203706
Hi,
I've come across a strange performance issue while implementing the Jacobi method. In the attachment, the last parameter in the "Jacobi" function just controls how a certain sum is computed: using the built-in Sum (WithSum == 1) or by standard partial sums. In my understanding these should be equivalent; however, I see a whole different story...
For very small dimensions, say n <= 200, using Sum gives slightly smaller computational times. However, from a certain point on (in my case n = 250), Sum suddenly starts taking forever... For n = 250, using Sum takes almost 40 times longer than not using it!
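Without the attachment this is only a guess, but a common cause is that `Sum` re-runs its own symbolic analysis on every call inside the iteration, while explicit partial sums stay purely numeric. Replacing the inner sums with vectorized operations usually sidesteps the issue entirely; a minimal sketch of one Jacobi sweep (hypothetical names, not the poster's attached code):

```mathematica
(* x' = D^-1 (b - R x), with R = A minus its diagonal; a.x - d*x computes R.x *)
jacobiStep[a_?MatrixQ, b_?VectorQ, x_?VectorQ] :=
  Module[{d = Diagonal[a]}, (b - (a.x - d*x))/d]
```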
How can this be??
João Janela 2017-10-16T11:45:46Z
[✓] Get two significant digits to the right of the decimal point?
http://community.wolfram.com/groups/-/m/t/1203574
N[x,3] gives me 1.10 and 0.982, when what I want is 0.98.
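If the goal is a fixed count of digits after the decimal point (rather than N's significant digits), `NumberForm` with a `{total, fractional}` digit specification is one option, and `Round` keeps the result numeric. A hedged sketch:

```mathematica
NumberForm[0.982, {3, 2}]  (* displays 0.98 *)
Round[0.982, 0.01]         (* 0.98 as a number rather than a formatted display *)
```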
How do I get two and only two digits to the right of the decimal point?
Nelson Zink 2017-10-15T21:09:09Z
Using .NET with MONO under OSX 10.10
http://community.wolfram.com/groups/-/m/t/472977
Hello,
I'm quite new to using Mathematica 10.0.0.2 Home Edition and .NET on my MacBook running OS X 10.10.
In order to use .NET I installed the latest MONO version "MonoFramework-MDK-3.12.1-macosx10.xamarin.x86.pkg".
MONO install path is "/Library/Frameworks/Mono.framework/Versions/3.12.1/lib/libmonoboehm-2.0.1.dylib"
But when I try to establish a .NET communication e.g. with
Needs["NETLink`"]
InstallNET[];
the only message is :
Cannot set current directory to \
/Applications/Mathematica.app/Contents/Contents/Frameworks/mathlink.\
framework/. >>
The path is wrong as the Mathematica installation path is :
/Applications/Mathematica.app/Contents/SystemFiles/Links/NETLink
So the path "../contents/contents/.." seems to be the issue.
Is anybody out there able to provide some advice?
Thanks a lot in advance.
Christian
Christian Boge 2015-04-03T13:15:31Z
Avoid GeoNearest timing out?
http://community.wolfram.com/groups/-/m/t/1199959
This is the complete code that shows the issue:
GeoNearest["ZIPCode", GeoPosition[{40.11, -88.24}], {All, Quantity[5,"Miles"]}]
Error message:
A network operation for Geonearest timed out. Please try again later.
Tried 10/8/2017 9:40 PM PDT
I am using the Wolfram Programming Lab with a URL that starts with https://lab.wolframcloud.com
Steve LaDuke 2017-10-09T04:47:06Z
[✓] Find all ZipCodes within 5 miles of a KML line?
http://community.wolfram.com/groups/-/m/t/1203262
I drew a line using Google Earth. I saved it as "Line2.kml" (not kmz). I uploaded it to Wolfram Programming Lab.
I can successfully import the line into Wolfram:
kml1 = Import["Line2.kml", "Graphics"];
I can find all of the zip codes within 5 miles of a single point:
zipCodeList = GeoEntities[GeoDisk[GeoPosition[{46.73333672951149,-117.009711345202,0}], Quantity[5, "Miles"]], "ZIPCode"]
Is there a way to find all of the zip codes within 5 miles of all of the points?
Steve LaDuke 2017-10-15T07:55:31Z
Simulating GeoNearest with GeoEntities & GeoDisk
http://community.wolfram.com/groups/-/m/t/1203815
Imagine you have a set of geo positions that outline a path, and you need to find all ZIP codes within a 5 mi distance from this path. A `GeoNearest` computation might time out due to complexity even for a single geo position, and even more so for, say, a few hundred of them along a path. The computation is hard because it involves lots of intersections with the ZIPCode polygons. But there is an easy workaround via `GeoEntities` suggested [here][1]. I will show how to generalize it to a path. Import the data (attached) and extract positions:
data = Import["Line2.kml"];
pos = First[Cases[data, _GeoPosition, Infinity]]
![enter image description here][2]
Sample 5 mi geo disks along the path at an optimal step: not too small, to avoid redundant overlap, and not too large, to avoid gaps in coverage:
disks = GeoDisk[GeoPosition[#], Quantity[5, "Miles"]] & /@ pos[[1, 1 ;; -1 ;; 30]]
![enter image description here][3]
As we can see this is pretty good coverage:
GeoGraphics[{disks, Point[pos]}]
![enter image description here][4]
But why step 30? It could be found by trial and error, but a simple calculation gives the same answer. We need the number of points along the path (for the disk centers):
Length[First[pos]]
(* 257 *)
Length of path in miles:
UnitConvert[GeoLength[GeoPath[pos]], "Miles"]
(* Quantity[29.52736447128438`, "Miles"] *)
And now the upper bound for the step, which I then decreased a bit, to 30, for better coverage:
Length[First[pos]] Quantity[5, "Miles"]/UnitConvert[GeoLength[GeoPath[pos]], "Miles"]
(*43.51895345247198`*)
Now easily get all ZIPs with `GeoEntities`:
zips = Flatten[GeoEntities[disks, "ZIPCode"]]
![enter image description here][5]
and visualize the result. We take `Union` below to remove duplicates. Note the high diversity of ZIP region sizes. The ZIP regions obviously extend beyond the disks, but all disks are contained within the outermost border of the ZIPs, so we did not miss anything.
GeoGraphics[{disks, Point[pos], {EdgeForm[{Red, Thick}], FaceForm[], Polygon[Union[zips]]}}]
![enter image description here][6]
[1]: http://community.wolfram.com/groups/-/m/t/1199959
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=we5q34terg.png&userId=11733
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-10-16at3.14.58AM.png&userId=11733
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=45trhgar4q3t5wthr.png&userId=11733
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-10-16at3.16.45AM.png&userId=11733
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rehgw5465ujetyhrsg.png&userId=11733
Vitaliy Kaurov 2017-10-16T08:19:29Z
Make Mathematica's interface less blurry?
http://community.wolfram.com/groups/-/m/t/1202244
I run Mathematica 11.2 on Windows 10. I have a 4K monitor (resolution 3840x2160) at work and another at home, running at the recommended 150% scale. The Mathematica interface looks really blurry, and it is painful to read (see the attached image; the window behind Mathematica is the browser window where this message was being composed. Browser text is very sharp, as is the rest of Windows. Mathematica text is blurry).
My laptop (Surface book) runs at a resolution of 3000x2000 and 200% scale and Mathematica there looks even blurrier.
High dpi monitors have been out for many years and Mathematica has always been blurry for me on them. Is there a way of making it give good text? Am I missing some non-obvious setting that improves this?
Luis.
Luis Rademacher 2017-10-13T02:23:21Z
Get gradient of PredictorFunction with respect to input?
http://community.wolfram.com/groups/-/m/t/1203073
For special forms of the PredictorFunction, there is an analytical formula for the gradient of the predictor wrt the input, x. For example, in a Gaussian Process, the prediction at $x$ is the posterior mean $m(x)$, and it is a linear combination of the kernel used
$$ m(x) = \sum_n a_n \, k(x_n, x) $$
Hence, it is possible to obtain analytically the gradient of the mean wrt $x$ by taking a linear combination of the gradient of $k$. I was wondering, is it already implemented in the Wolfram function `Predict[]` or maybe `PredictorFunction[]` ? If not, is there an easy way to find the pieces needed? E.g. kernel parameters and kernel used, and possibly its gradient wrt $x$?
Thanks
Umberto Noe 2017-10-16T01:41:36Z
[✓] Give a different color to each point in a plot of points?
http://community.wolfram.com/groups/-/m/t/1203194
Hello everyone, I'm new to Mathematica, so I have had some problems with it. I have to plot some science data as points in 3D, and I have to give each point a different color... and that's where my issue is. I've been trying this:
colorfunc = ColorData["Rainbow"];
norm = (vels - Min[vels])*1000; (*Some of the velocities are <0, so I shift them to be nonnegative.*)
cs = Normalize[norm];
Graphics3D[{PointSize[0.005], colorfunc@#[[1]], Point[#[[2]]]} & /@ Transpose[{cs, coord}], Axes -> True]
where coord is a list of 3D points (x, y and z coordinates), vels is a list of velocities, and cs is the list containing the data for the colors. My problem is that when I execute the code, all the points in the resulting graph have pretty similar colors, and I already know that I should see a difference between them, so I'm asking for help with this. How should I give a different color to each point in such a way that I can see the difference between each point?
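A hedged guess at the cause: `Normalize` divides the whole list by its Euclidean norm, so every entry lands in a narrow band near zero and maps to nearly the same hue. `Rescale`, which maps the values linearly onto [0, 1], spreads them over the full color range:

```mathematica
colorfunc = ColorData["Rainbow"];
cs = Rescale[vels];  (* min -> 0, max -> 1; handles negative velocities directly *)
Graphics3D[{PointSize[0.005], colorfunc@#[[1]], Point[#[[2]]]} & /@
   Transpose[{cs, coord}], Axes -> True]
```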
Thank you.
Brayan Del Valle 2017-10-15T00:48:20Z
[✓] Compute a triple integral with NIntegrate?
http://community.wolfram.com/groups/-/m/t/1196434
I am trying to numerically compute a triple integral. The code is as follows:
I61 =(8 (((-1+E^(-(1/2) (u+v+z)^2)) z^2)/(u+v+z)^2-((-1+E^(-(1/2) (u+v+z)^2)) (v+z)^2)/(u+v+z)^2+((-E^(-(z^2/2))+E^(-(1/2) (u+v+z)^2)) (v+z)^2)/((u+v) (u+v+2 z))+(E^(-(1/2) (u+v+z)^2) (-1+E^(1/2 u (u+2 (v+z)))) z^2)/(u (u+2 (v+z)))))/(v z^2 (v+z)^2 (v+2 z))
I62 = Simplify[Together[I61]]
NIntegrate[I62, {z, 0, Infinity}, {u, 0, Infinity}, {v, 0, Infinity}, PrecisionGoal -> 4]
This works if I set PrecisionGoal to 4. But increasing it to 5 causes an error:
"The integrand ... has evaluated to Overflow, Indeterminate, or Infinity for all sampling points in the
region with boundaries {{3.,1.}, {0.,1.70984*10^14},{\[Infinity],7.}}"
Is there any way to increase the precision a bit?
Xing Shi Cai 2017-10-02T14:29:30Z
Computer Based Maths, list of educational outcomes.
http://community.wolfram.com/groups/-/m/t/1200235
Computer Based Maths ( https://www.computerbasedmath.org/about ) wishes to consult you on our list of educational outcomes. These are the long-term goals that we want students learning mathematics to achieve through their schooling.
As valuable members of our community, we would like your feedback, to critique, compliment or suggest improvements upon the fundamentals that drive the initiative.
So please take the time to step through the list of outcomes, including the details and provide some feedback on what you think (comment / post below).
https://www.computerbasedmath.org/outcomes
The link points to the online list of the outcomes; on that same page is a link to download the outcomes in PDF format, should that prove useful!
Mark Braithwaite 2017-10-09T10:54:59Z
[GIF] Voronoi visualization
http://community.wolfram.com/groups/-/m/t/1202074
I recently saw a gif showing the [growth of a Voronoi diagram][1] on [this][2] wiki page. This gif shows a Voronoi diagram but restricts each cell to lie in a disk that slowly grows over time.
I decided to recreate this with the Wolfram Language and thought I'd share the code and final result here.
#Visualization
First and foremost, here's the result:
![enter image description here][3]
#Code
First I start off with 20 random points in 2D:
pts = RandomReal[{-1, 1}, {20, 2}];
Then I extract each point's Voronoi cell by calling `VoronoiMesh` and then arranging the primitives to correspond to `pts`.
prims = BoundaryDiscretizeRegion /@ MeshPrimitives[VoronoiMesh[pts], 2];
prims = Table[First[Select[prims, RegionMember[#, p] &]], {p, pts}];
Let's quickly pause to make sure the cells correspond to the correct point.
MapThread[Show[#1, Epilog -> {Red, PointSize[Large], Point[#2]}] &, {prims, pts}][[1 ;; 5]]
![enter image description here][4]
Now that we have the primitives, we can show the scene with disks of radius $r$ by applying `RegionIntersection` at each cell with a disk of radius $r$.
First we will discretize a disk to force `RegionIntersection` to return a `BoundaryMeshRegion`. We will also restrict the intersection to lie in $[-1, 1] \times [-1, 1]$.
disk[{x_, y_}, d_, n_: 100] := BoundaryMeshRegion[CirclePoints[{x, y}, d, n], Line[Mod[Range[n + 1], n, 1]]]
bound = BoundaryDiscretizeRegion[Cuboid[{-1, -1}, {1, 1}]];
Now at radius $r$ we intersect, which I packed into a function. First, here's the code for a single cell. It will take the Voronoi cell, its corresponding point, and a color for styling purposes.
colors = RandomColor[RGBColor[_, _, _, 0.3], 20];
PartialVoronoiCell[r_][p_, cell_, color_] :=
BoundaryMeshRegion[
RegionIntersection[bound, disk[p, r], cell],
MeshCellStyle -> {1 -> Directive[Thick, GrayLevel[.5]], 2 -> color}
]
The main function will effectively map over each point. When $r \leq 0$, we just show the points.
PartialVoronoiCells[_?NonPositive] = Graphics[Point[pts], PlotRange -> {{-1, 1}, {-1, 1}}, PlotRangePadding -> Scaled[.0125]];
PartialVoronoiCells[r_] :=
Show[
MapThread[PartialVoronoiCell[r], {pts, prims, colors}],
Epilog -> Point[pts], PlotRange -> {{-1, 1}, {-1, 1}}, PlotRangePadding -> Scaled[.0125]
]
This function is fast enough to visualize the growth with `Manipulate`.
Manipulate[PartialVoronoiCells[r], {r, 0, 1}]
![enter image description here][5]
[1]: https://en.wikipedia.org/wiki/Voronoi_diagram#/media/File:Voronoi_growth_euclidean.gif
[2]: https://en.wikipedia.org/wiki/Voronoi_diagram
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=voronoi.gif&userId=46025
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshot2017-10-1221.55.07.png&userId=46025
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=screenshot.gif&userId=46025
Chip Hurst 2017-10-13T02:59:08Z
[✓] Obtain inverse of a function?
http://community.wolfram.com/groups/-/m/t/1202155
I'm quite new to Mathematica and have so far been unable to resolve the following (minor) technical issue. In principle, the task is very straightforward: I'd like to define the inverse of the function F(x) = 1 - x^2/(2 cosh(x) - 2) *for x >= 0*, but because F is not a one-to-one function on the reals, I often get answers with the wrong sign if I set:
g[x_] = InverseFunction[F][x]
I need to compose g with another function, so it's not enough to just reflect the plot of F in the line y=x.
The easy fix I've found is just to let
g[x_] = Abs[InverseFunction[F][x]]
but this feels like a bit of a cheat, and on my machine it takes quite a long time to generate a plot (is it possible to speed up this process?)
Instead, I've been trying to define g as a function with a restricted domain using ConditionalExpression, as in the example at
http://reference.wolfram.com/language/ref/InverseFunction.html
I must be doing something wrong, because I don't get any plot whatsoever!
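One alternative sketch, assuming F is one-to-one on the nonnegative reals: define the restricted inverse numerically with FindRoot, which also tends to plot much faster than `InverseFunction`:

```mathematica
F[x_] := 1 - x^2/(2 Cosh[x] - 2);
(* bracketed search keeps the root on x >= 0; bounds 0.001 and 30 are illustrative *)
g[y_?NumericQ] := x /. FindRoot[F[x] == y, {x, 1, 0.001, 30}]
Plot[g[y], {y, 0.05, 0.95}]
```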
Any help would be much appreciated.
Oliver Feng 2017-10-12T15:52:55Z
[GIF] Trifolium (Envelope of the trifolium curve)
http://community.wolfram.com/groups/-/m/t/1202255
![Envelope of the trifolium curve][1]
**Trifolium**
The animation shows 800 tangent lines to the trifolium as they slowly move around the curve. In order to make it, I first ran `PlaneCurveData["Trifolium", "ParametricEquations"]` to get the parametrization, which I then rotate to get a vertically-oriented trifolium:
tri[t_] := RotationMatrix[π/6].{-Cos[t] Cos[3 t], -Cos[3 t] Sin[t]};
After that, it's just a matter of creating the tangent lines with `InfiniteLine[]` and choosing some colors. Here's the code:
With[{d = 2 π/800.},
Manipulate[
Show[
Table[
Graphics[{Thickness[.001], Opacity[.8], Hue[Mod[(s + t)/π, 1]],
InfiniteLine[tri[t + s], tri'[t + s]]},
PlotRange -> {{-1.4, 1.4}, {-1.18125`, 1.61875`}}],
{t, 0., π - d, d}],
ImageSize -> 540, Background -> GrayLevel[.1]],
{s, 0, d}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=lem8.gif&userId=610054
Clayton Shonkwiler 2017-10-13T03:19:40Z
Why is a Histogram not returned properly as an image with CloudDeploy?
http://community.wolfram.com/groups/-/m/t/952355
In a notebook, the code below runs properly:
myDist[Latitude_, Longitude_] :=
EstimatedDistribution[
WeatherData[{Latitude, Longitude},
"MaxWindSpeed", {{1990}, {2015}, "Year"}],
GumbelDistribution[\[Alpha], \[Beta]],
ParameterEstimator -> "MaximumLikelihood", WorkingPrecision -> 25]
URLExecute[CloudDeploy[api], {"Latitude" -> 40, "Longitude" -> 18}]
and the result looks like this
![enter image description here][1]
But I need to deploy to the cloud, and after CloudDeploy and APIFunction are used, the response is sometimes only the data set instead of the PNG image with the histogram, even if I use the same coordinates as input (i.e. 40, 18).
![enter image description here][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=histogram.png&userId=392591
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=hist2.png&userId=392591
Balazs Kisfali 2016-10-31T06:46:16Z
Affordable Care Act: an experiment on possible new regulations
http://community.wolfram.com/groups/-/m/t/1201589
President Trump will evidently propose today (10/12/2017) some expansion of alternative health insurance arrangements such as "short term health insurance" or "Association Health Plans." Supporters say this proposal will give more Americans the chance to buy health insurance policies at lower prices and that better fit their needs. Critics say that this expansion will result in healthier people deserting insurance plans in which the price can not depend on the expected claims of the individual. This desertion will in turn cause expected claims in the original pool to rise, leading to insurers losing money and raising prices.
Can Mathematica quantify this possibility? Thus, a little quick experiment.
I start with a reparameterized version of the lognormal distribution in which the first parameter is the mean and the second parameter is the ratio of the median to the mean.
LogNormalDistribution3[m_,ν_]:=LogNormalDistribution[Log[m]-Log[1/ν],Sqrt[2] Sqrt[Log[1/ν]]]
Here is probability density function for the values of parameters used below:
Plot[PDF[LogNormalDistribution3[7000, 0.27], x], {x, 0, 10^3}, PlotTheme -> "Business"]
![enter image description here][1]
I now create a risk pool of 100,000 persons with mean claims of 7,000 and a ratio of median to mean of 0.27. This models the current situation fairly well. The actual selection of the mean does not affect the results. The median-to-mean ratio does affect the results, however.
rv = RandomVariate[LogNormalDistribution3[7000, 0.27], 100000]
![enter image description here][2]
Now, create a function that computes the mean claims of those not defecting to an alternative pool when those with expected claims below the median have a specified probability of defecting to the alternative pool. We could create a more general defection function that made the probability of defection depend inversely on the expected claims of the individual, but this approach strikes me as reasonable for a quick and dirty analysis.
residualExpectedClaims[rv_,defectionProbability_]:=Mean@Pick[rv,
With[{median=Median[rv]},Map[If[#<median,RandomVariate[BernoulliDistribution[1-defectionProbability]],1]&,rv]],1]
Now let's run an experiment in which 25% of those below the median defect.
r025 = residualExpectedClaims[rv, 0.25]
We can now compute the fractional increase in risk in the original pool.
(r025-Mean[rv])/Mean[rv]
The answer is about 13%.
We can also make a table showing how the residual expected claims vary as the fraction of defectors increases:
originalPoolExpectedClaims=Table[{defectPct,residualExpectedClaims[rv,defectPct]},{defectPct,0,0.9,0.1}]
Here's the output:
> {{0., 7000.}, {0.1, 7322.07}, {0.2, 7684.62}, {0.3, 8111.13}, {0.4, 8552.09},
> {0.5, 9061.95}, {0.6, 9677.75}, {0.7, 10359.4}, {0.8, 11146.6}, {0.9, 12142.1}}
ListLinePlot[originalPoolExpectedClaims, PlotTheme -> "Business"]
![enter image description here][3]
There's obviously more that can be done, but I thought Mathematica did a great job here in quickly modeling out the consequences of what may be a very important policy change in the United States.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=erttrtgsfd345rehgsda.png&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=34yewgadfs.png&userId=11733
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ert4354whtrsbdv.png&userId=11733
Seth Chandler 2017-10-12T13:45:43Z
Use SemanticImport in 11.2.0.0?
http://community.wolfram.com/groups/-/m/t/1200910
I can't get SemanticImport to work in 11.2.0.0. Even the Documentation files won't function. Mac, OS High Sierra, MacBook Pro.
Colin Grace 2017-10-10T07:04:50Z
Find the combinations of 10 inputs that result in a Boolean logic output (0 or 1)?
http://community.wolfram.com/groups/-/m/t/1201941
I have a complex Boolean logic with, say, 10 inputs and one output. Is there any mathematical method or algorithm to identify all the possible combinations of the inputs that can result in a specific output (0 or 1)? Note that the logic is known, but we don't want to 'try' or simulate all possible combinations of inputs and check which ones result in the specific output. We would like to solve the problem backward: knowing the desired output and the logic, find the input combinations that can result in that output.
Hamid Jahanian 2017-10-12T10:19:26Z
Use NIntegrate with vectors?
http://community.wolfram.com/groups/-/m/t/1202204
Is there a way to get Mathematica to provide a meaningful answer, perhaps semi-numerically, for the following numerical integral over vectors? Note that it is OK to assume a value for \[Alpha]. Additionally, the vector $\vec{x}$ is not being integrated over, so, if absolutely essential, different values of $\vec{x}$ could be taken for the numerical integration.
$Assumptions = Element[p1v | p3v | p4v | p5v | xv, Vectors[3, Reals]];
a = Simplify[ReleaseHold[Hold[E^((-I)*p1v . xv)]]]
b = Simplify[ReleaseHold[Hold[(p1v . p1v + p3v . p3v)/
((p3v - p1v)*(p3v - p1v)*(\[Alpha]^2*p3v . p3v + 1)^2)]]]
jj = FullSimplify[a*b]
Now, the following symbolic integral doesn't seem to work, i.e. Mathematica just spits back the input
Integrate[(p1v . p1v + p3v . p3v)/(E^(I*p1v . xv)*
((p1v - p3v)^2*(1 + \[Alpha]^2*p3v . p3v)^2)),
{p1v, -Infinity, Infinity}, {p3v, -Infinity, Infinity}]
but neither do the following NIntegrate commands work
NIntegrate[(p1v . p1v + p3v . p3v)/(E^(I*p1v . xv)*
((p1v - p3v)^2*(1 + p3v . p3v)^2)), {p1v, 0, 1}, {p3v, 0, 1},
{xv, 0, 1}]
NIntegrate::inumr: The integrand (E^(-I p1v.xv) (p1v.p1v+p3v.p3v))/((p1v-p3v)^2 (1+p3v.p3v)^2) has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,1},{0,1},{0,1}}.
Note that above, \Alpha was taken to be zero, and a simultaneous integration over $\vec{x}$ was attempted, if Mathematica can't do any kind of semi-numerical integration.
The following NIntegrate doesn't work either - probably because I don't know how to make Mathematica perform a numerical integration with an algebraic parameter.
NIntegrate[(p1v . p1v + p3v . p3v)/(E^(I*p1v . xv)*
((p1v - p3v)^2*(1 + p3v . p3v)^2)), {p1v, 0, 1}, {p3v, 0, 1}]
NIntegrate::inumr: The integrand (E^(-I p1v.xv) (p1v.p1v+p3v.p3v))/((p1v-p3v)^2 (1+p3v.p3v)^2) has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,1},{0,1}}.
If there is a way to know conclusively before integration whether the integrals are non-convergent, that would be very helpful, but I also don't know how to do that in Mathematica.
Arny Toynbee 2017-10-12T16:51:05Z
Increase speed in notebook that uses Dynamic[FinancialData]
http://community.wolfram.com/groups/-/m/t/1200792
I want to make a CDF that presents an analysis of the course of stock prices obtained from FinancialData, and in which the user can jump from one stock to another. But when I, for instance, make a notebook with the following commands
Dynamic[p]
Dynamic[FinancialData[p, {2017, 1, 1}]]
p="GE"
Mathematica becomes awfully slow, while other programs at the same time keep running at the usual speed. Is there any remedy for this problem?
In the Windows 10 Task Manager I can see that Mathematica uses only about 1% of the Intel i5 processor and 92 MB of the 8 GB of memory, while the Mathematica kernel uses about 5% of the processor, 288 MB of memory and 1 to 2 Mbps of my 50 Mbps internet download speed.
Laurens Wachters 2017-10-10T12:35:43Z
[✓] Choose dates dynamically with a Slider?
http://community.wolfram.com/groups/-/m/t/1195481
I would like to select a date from a list of dates using a slider, and show the chosen date right underneath the slider in the form of a date string.
Take for instance the following simple list:
list={{2000, 1, 3}, {2000, 1, 4}, {2000, 1, 5}, {2000, 1, 6}, {2000, 1, 7}, {2000, 1, 10}, {2000, 1, 11}, {2000, 1, 12}, {2000, 1, 13}, {2000, 1, 14}}
So I could, for instance, choose the third element of this list with the slider, which is {2000, 1, 5}, and then convert this with the command DateString[DateObject[{2000, 1, 5}]] to the date string "Wed 5 Jan 2000". Now I want this date string shown underneath the slider instead of the usual number, which in this case would be 3.
The slider would be for instance:
Slider[Dynamic[m], {1, 10, 1}]
The problem is now that Dynamic[m] is not accepted by Part. With the command
list[[Dynamic[m]]]
one gets an error message that the result from the slider cannot be used as a part specification.
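The usual pattern, sketched below with the list from above, is to put the Part extraction inside Dynamic rather than handing Dynamic[m] to Part:

```mathematica
m = 1;  (* initialize so list[[m]] is valid before the slider is moved *)
Column[{
  Slider[Dynamic[m], {1, Length[list], 1}],
  Dynamic[DateString[DateObject[list[[m]]]]]  (* whole expression wrapped in Dynamic *)
 }]
```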
How do I solve this problem? Thanks in advance for your help.
Laurens Wachters 2017-10-01T20:35:36Z
[WSS16] Quantum Computing with the Wolfram Language
http://community.wolfram.com/groups/-/m/t/897811
**Introduction to the Problem**
While a gate-based quantum computer has yet to be implemented at the level of more than a handful of qubits, and some worry that the decoherence problem will remain an obstacle to real-world use of these machines, the field of theoretical quantum computing has its own virtue apart from these problems of construction and implementation. The theory of quantum computation and quantum algorithms has been used as a powerful tool to tackle long-standing problems in classical computation, such as proving the security of certain encryption schemes and refining complexity classifications for some approaches to the Traveling Salesman problem. Moreover, learning how to apply quantum effects like superposition, interference, and entanglement in a useful, computational manner can help students gain a better understanding of how the quantum world really works. These educational and research advantages of quantum computing, along with the ever-present goal of designing new quantum algorithms that can provide us with speedups over their classical counterparts, furnish ample reason to make the field as accessible as possible. The goal of this project was to do just that by using the Wolfram Language to design functionality that allows researchers and students alike to engage with quantum computing in a meaningful way.
**Getting it Done**
This project involved the design and development of a suite of functions that allows for the simulation of quantum computing algorithms. The overarching goal was a framework that allows for easy implementation of quantum circuits, with minimal work done by the user. The specific design challenge was to have a tool simple enough to be used as an educational aid and powerful enough for researchers. To this end, circuits can be built iteratively, allowing students, and those new to quantum computing, to build a working knowledge of the field as they increase the complexity of their algorithms. The system has a universal set of gates, allowing it to carry out any operation possible for a quantum computer (up to limits on the number of qubits due to the size of the register).
----------
*A short note on this: I have not rigorously tested the system yet, but unless you want to wait several hours for your computation to complete, I suggest not attempting computations with more than ~20 qubits. Classically simulating an N-qubit register requires a state vector of length 2<sup>N</sup>. Interestingly, it is this insight into the computational difficulty of simulating a quantum state that led Feynman to realize the power that quantum computing could have.*
----------
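To make that scaling concrete, here is a rough estimate of the state-vector memory footprint, assuming a dense vector of machine-precision complex amplitudes at 16 bytes each (an idealized estimate, ignoring any overhead of the actual implementation):

```mathematica
(* Memory in MiB for a dense 2^n-amplitude state vector,
   at 16 bytes per machine-precision complex number: *)
Table[{n, 2^n*16/2.^20}, {n, {10, 20, 30}}]
(* -> {{10, 0.015625}, {20, 16.}, {30, 16384.}} *)
```

So 20 qubits need a manageable 16 MiB, while 30 qubits already demand 16 GiB, which is why the ~20-qubit guideline above is a practical limit.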
The project has functionality for the following gates: Hadamard, X, Y, Z, Rotation (any angle, about any axis), C-NOT, C-anything, SWAP, and QFT. It takes input in standard quantum circuit notation and can output circuit diagrams and the corresponding unitary transformation matrix, as well as return the probabilities for the results of measurements on a given qubit. Moreover, there is built-in functionality for easy circuit addition, allowing one to stitch together large circuits from smaller ones, a boon for comprehension and testing.
**A Simple Example**
We initialize some random circuit by specifying its corresponding circuit notation. For the sake of brevity, we start with a medium-sized circuit that is already formed and perform operations on it, but one can easily build a circuit up qubit-by-qubit and gate-by-gate with the applyQ and circuitQ functions. Below we define a variable `quantumCircuit` using the function `circuitQ`, to which we pass some circuit notation. This notation is just a matrix representing the quantum logic circuit, with the gates and qubits arranged schematically.
quantumCircuit= circuitQ[{{"H", "R[1,Pi/2]", "N", "SWAP1"}, {"H", 1, "C",
"SWAP2"}, {"X", 1, "C", 1}}];
`circuitQ` outputs the circuit diagram corresponding to the notation given:
![enter image description here][1]
But, say I wish to alter the circuit. We can add in as many layers of gates or extra qubits as we wish, without having to deal with the pesky notation matrix. Here I add a Hadamard gate to the second qubit after the SWAP using the function `applyQ`:
applyQ[quantumCircuit, "H", 2]
the output of which is:
![enter image description here][2]
One can also use `Append`, `Join`,`Nest` and a variety of other Wolfram language functions to build up highly complex circuits. However, the `circuitQ` function is overloaded, and one can also perform computations with it. We will now build the actual unitary transformation matrix that corresponds to the circuit diagram:
unitar=matrixBuild@quantumCircuit
which, for our circuit, produces:
![enter image description here][3].
Now we can easily perform operations with the circuit. Let's specify some random 3-qubit initial state (in the computational basis):
initialState = {1, 0, 0, 1, 0, 0, 1, 0} // Normalize
![enter image description here][4]
We can pass this initial state to the circuit easily with:
premeasure=unitar.initialState
which gives back the state of the quantum register (in this case our 3 qubits) after they have been operated on by the circuit, but pre-measurement:
![enter image description here][5]
We can now sample our state using the `projection` function. Here we will calculate the probability of getting state |0> when measuring qubit #3:
projection[2,0,premeasure]
which, for our case, gives back a probability of 2/3.
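For reference, here is a minimal sketch of what such a projective measurement computes; the helper name and the conventions (qubits 0-indexed, basis states ordered with the last qubit varying fastest) are my own assumptions for illustration, not taken from the package:

```mathematica
(* Probability that `qubit` reads `value`: sum |amplitude|^2 over the
   basis states whose corresponding bit matches. Indexing conventions
   here are assumptions, not the package's documented behavior. *)
qubitProbability[state_, qubit_, value_] :=
 Module[{n = Log2[Length[state]]},
  Total[Abs[#]^2 & /@
    Pick[state,
     IntegerDigits[Range[0, Length[state] - 1], 2, n][[All, qubit + 1]],
     value]]]

qubitProbability[Normalize[{1, 0, 0, 1, 0, 0, 1, 0}], 2, 0]
(* -> 2/3 *)
```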
**Wrap Up**
This was only a very simple example. Using `applyQ` and `circuitQ` one can build and modify highly complex quantum circuits easily. `matrixBuild` does all the math of calculating the corresponding unitary transformation matrix for you. All that is left is for the user to pass an initial state and see the output. A good learning technique is to start with a very simple circuit and initial state, and slowly build up in complexity, performing measurements at each step, to build an intuition and working knowledge of any given quantum circuit.
An obvious next step for the project would be to add functionality that allows for the easy implementation of a general quantum oracle. I would also like to add more gates to the gate library, including $\sqrt{SWAP}$, Toffoli, and QFT<sup>-1</sup>, which were left out for lack of time and are trivial to implement. These tools would make it significantly easier for researchers to model any given quantum circuit.
**Where is the NKS?**
Finding quantum algorithms that perform useful tasks faster than their classical counterparts is an open area of research. However, it is often quite difficult to design these algorithms to take advantage of interference, as well as the structure in a given computational problem that may be useful to exploit. As such, there are only a small number of important quantum algorithms that are currently known. Hopefully this tool will allow for NKS-style search experiments for interesting behavior in quantum circuits. Similar searches have been carried out for classical circuits, and the tools I built will make it easy to generate vast sets of random quantum circuits that follow certain rules. What remains is to build useful analytic tools for combing the space.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=circuit.jpeg&userId=896802
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=circuit2.jpeg&userId=896802
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=unitar.jpeg&userId=896802
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3114initialstate.jpeg&userId=896802
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=finalstate.jpeg&userId=896802Aaron Tohuvavohu2016-08-02T02:35:38Z[✓] Represent a vector moving on three mutually perpendicular circles?
http://community.wolfram.com/groups/-/m/t/1201063
Dear Friends,
I have three concentric circles in planes mutually perpendicular to each other (say, x^2 + y^2 = r^2, z^2 + y^2 = r^2, x^2 + z^2 = r^2). There is a vector R(theta, phi), which is constrained to move only on the circumference of these circles (having its tail fixed at the common center of the three circles). How can I represent this vector mathematically in Mathematica, in order to find its dot product with another fixed vector F(theta_1, phi_1)?
Will appreciate any suggestion regarding this.
ThanksS G2017-10-11T07:21:59ZSemanticImport fails for every file in Mathematica 11.0
http://community.wolfram.com/groups/-/m/t/904715
This is a follow up on this [post][1].
Using SemanticImport on any file, even on those mentioned in the documentation examples, e.g.:
SemanticImport["ExampleData/RetailSales.tsv"]
Gives an error:
**SemanticImport::unexpinvaliderr: Unexpected invalid input was detected. Processing will not continue**
I came across an issue with SemanticImport when a previous attempt to import a large CSV file crashed the Mathematica kernel and took Windows 10 Pro 64-bit down with it, as it consumed all remaining 4GB of RAM.
The work-around mentioned in the aforementioned post still works i.e. make sure you change the path to the temp directory i.e.
Block[{$TemporaryDirectory = "C:\\temp"}, SemanticImport["ExampleData/RetailSales.tsv"]]
Is this a known bug?
Cheers,
Dave
  [1]: http://community.wolfram.com/groups/-/m/t/819494Dave Middleton2016-08-13T23:48:12ZWolfram Player for iOS
http://community.wolfram.com/groups/-/m/t/1197947
[![enter image description here][1]][2]
We are [**excited to announce**][2] the release of Wolfram Player for iOS! Now you can harness the power of the Computable Document Format (CDF) anytime, anywhere. Wolfram Player syncs up with other apps on your mobile device—allowing you to access documents from any source—including from the Wolfram Cloud, iTunes, Dropbox and more!
One of the biggest features of this app is the ability to sideload documents into Wolfram Player from other applications. Additionally, Demonstrations and other Manipulate figures look and feel a lot more like what we envision them to be. The intuitive tactile interface offered by Wolfram Player gives a true hands-on approach to interactive modeling in a way that's never been done before.
When integrated with the Wolfram Cloud app, you can seamlessly move between designing notebooks and pulling them down for testing, thereby paving the way for increased collaboration and efficiency for a variety of projects.
[**Check it out for yourself**][3].
[![enter image description here][4]][3]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=player_image.jpg&userId=515558
[2]: http://blog.wolfram.com/2017/10/04/notebooks-in-your-pocket-wolfram-player-for-ios-is-now-shipping/
[3]: https://itunes.apple.com/us/app/wolfram-player/id1059014516
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Wolfram_Notebooks_Timeline2.png&userId=11733Jesse Dohmann2017-10-04T16:21:05ZCreate and deploy your own paclets in the Workbench
http://community.wolfram.com/groups/-/m/t/961186
# References
[![enter image description here][7]][8]
- [GitLink for Wolfram Language][9]
- [What is a “Paclet”?][10]
- [What can I do with “Syntax Templates” section at start of Symbol Pages (in Workbench)?][11]
# Eclipse work flow
If necessary, install the Wolfram WorkBench Plugin for Eclipse.
http://support.wolfram.com/kb/27221
In Window > Preferences > Wolfram > Special, make the following changes:
![enter image description here][1]
Apply and press OK. Then re-open preferences and go to Paclet Development. Enable function paclet support.
![enter image description here][2]
# Create a new application project
![enter image description here][3]
![enter image description here][4]
Open the ***PacletInfo.m*** file and useful buttons now appear.
![enter image description here][5]
Also edit your project properties or the nice buttons may vanish after a build or when you close the project.
![enter image description here][6]
# Build
Build your project and documentation as usual and then create your Paclet file and deploy.
# Install
You can install from a local file path, HTTP, or FTP, as printing the definitions reveals:
PacletInstall[filepath]
Needs["GeneralUtilities`"]
PrintDefinitions["PacletInstall"]
PrintDefinitions["PacletManager`Manager`Private`installPacletFromFileOrURL"]
There is currently no support for HTTPS, but this can be patched, or you can simply call PacletInstall yourself after downloading the file and giving the download a proper file name. This could be handy if you wanted to install from GitHub.
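For example, a rough sketch of the download-then-install route (the URL and paclet name below are placeholders, not a real paclet):

```mathematica
(* Sketch: fetch a .paclet file over https manually, then install it
   from the local path. Placeholder URL and file name. *)
local = FileNameJoin[{$TemporaryDirectory, "MyPaclet-1.0.0.paclet"}];
URLDownload["https://example.com/MyPaclet-1.0.0.paclet", local];
PacletInstall[local]
```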
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdf34q5w6yethdgsdfa.png&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=rthe567iteyrtq54htefadvs.png&userId=11733
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=345tjhgfsdase56eutyjhstdgas.png&userId=11733
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=safd345q65uyhrtgfdasr43q5ega.png&userId=11733
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=asdf435657iifgshr54657eiutj.png&userId=11733
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fghu356rtgerfsgw6ejtrsa.png&userId=11733
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=logo.png&userId=11733
[8]: https://github.com/WolframResearch/GitLink
[9]: https://github.com/WolframResearch/GitLink
[10]: http://mathematica.stackexchange.com/questions/1660/what-is-a-paclet
[11]: http://mathematica.stackexchange.com/questions/66754Emerson Willard2016-11-11T13:22:20ZAnalysis of rates of murder by firearms in the US
http://community.wolfram.com/groups/-/m/t/1200009
## Data Collection ##
###Murder by firearms###
The FBI, through its Criminal Justice Information Services Division, collects information on murders by type of weapon at the state level. The statistics used here are from 2011.
[https://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2011/crime-in-the-u.s.-2011/tables/table-20][1]
murderData =
  Import["https://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2011/crime-in-the-u.s.-2011/tables/table-20",
    "Data"][[3, 2, 3, 2]];
murderLbl = First@murderData;
murderData = Most@First@Rest@murderData;
murderData[[All, 1]] =
  Check[Interpreter["AdministrativeDivision"][#], #] & /@ murderData[[All, 1]];
murderData[[12, 1]] = Entity["AdministrativeDivision", {"Illinois", "UnitedStates"}]
MapIndexed[Sequence, murderLbl]
(*{"State", {1}, "Total murders 1", {2}, "Total firearms", {3}, "Handguns", {4}, "Rifles", {5}, "Shotguns", {6}, "Firearms (type unknown)", {7}, "Knives or cutting instruments", {8}, "Other weapons", {9}, "Hands, fists, feet, etc. 2", {10}}*)
###Population Data###
Population data is readily available in Wolfram Mathematica.
pop = EntityValue[murderData[[All, 1]], "Population"];
###Legislative Control###
State and legislative control data can be obtained from the National Conference of State Legislatures.
[http://www.ncsl.org/Portals/1/Documents/Elections/Legis_Control_%202016_Apr20.pdf][2]
The data was scrubbed and placed in the variable stateComposition, which holds information on which party controls each state. Finally, a dataset was created holding the political control of each state.
dComp = Dataset[
Association[(Thread[
Rule[stateComposition[[All,
1]], (AssociationThread[{"abbreviation", "party"}, #] & /@
stateComposition[[All, 2 ;; 3]])]])]]
GeoRegionValuePlot[(dComp[All, "party"] // Normal //
Normal) /. {"Dem" -> 1, "Rep" -> 2, "Divided" -> 3},
ColorRules -> {1 -> Blue, 2 -> Red, 3 -> Yellow},
PlotLegends ->
Placed[SwatchLegend[{Blue, Red, Yellow}, {"Democrat", "Republican",
"Divided"}, LegendFunction -> "Frame"], Bottom]]
![enter image description here][3]
###Gun Freedom Index###
Used Guns & Ammo Magazine data to rank states numerically based on the following categories.
1. Right to Carry: how restrictive each state is in prohibiting carry in different locations, how readily citizens can obtain permits, etc.
2. Modern Sporting Rifles: restrictions on semiautomatic firearms not regulated by the NFA, and restrictions on magazine capacity and/or accessories.
3. NFA: The National Firearms Act (NFA) of 1934 placed certain restrictions on the purchase of certain categories of weaponry. States can further restrict and regulate these weapons (machine guns, silencers, short-barrelled rifles and shotguns, etc.).
4. Castle Doctrine: English common law established that a man's home is his castle and he has a right to defend it. Statute and case law in each state can regulate and impose restrictions on a citizen's ability to act in self-defense.
5. Miscellaneous: issues such as purchase/registration requirements, gun ownership percentage, availability of ranges, etc.
[http://www.gunsandammo.com/network-topics/culture-politics-network/best-states-for-gun-owners-2014/][4]
SetDirectory[NotebookDirectory[]];
gunFreedomIndexData = SemanticImport["gunfreedomindex.xlsx"];
gfi[state_] :=
Flatten@Normal[
Normal[gunFreedomIndexData[
Select[#State == state &], {"Ranking", "total"}][Values]]]
gfi = gfi[#] & /@ murderData[[All, 1]];
data = {#[[1]], N[100000 #[[2]]/QuantityMagnitude[#[[3]]]], #[[4]], #[[5]]} & /@
  (Flatten /@ Transpose[{murderData[[All, {1, 3}]], pop, gfi}]);
ds = Dataset[Association[(Thread[Rule[data[[All, 1]], (AssociationThread[{"gunFreedomIndex",
"murderByFirearm"}, #] & /@ data[[All, {3, 2}]])]])]]
###Auxiliary functions for Data Visualization###
colorRules = {"Dem" -> (BaseStyle -> {FontColor -> White,
Background -> Blue}),
"Divided" -> (BaseStyle -> {FontColor -> Black,
Background -> Yellow}),
"Rep" -> (BaseStyle -> {FontColor -> White, Background -> Red})};
text[state_, function_] :=
Text[dComp[state, "abbreviation"], function[state],
dComp[state, "party"] /. colorRules]
##Data Visualization##
###Murder Rates by Firearms vs. Gun Control###
Let's chart the murder rates vs. the gun freedom ranking. This should give us an indication if the gun control restrictions have any influence on murder rates committed with firearms.
coords[state_] :=
ds[state][{"gunFreedomIndex", "murderByFirearm"}] // Values // Normal
lmf = LinearModelFit[coords /@ Normal[Keys[ds]], x, x];
Column[{Show[
ListPlot[coords /@ Normal[Keys[ds]], PlotTheme -> "Detailed",
FrameLabel -> {"Gun Freedom Ranking",
"MurderByFireArmRate (per 100k)"},
Epilog ->
Inset[Style[
"\!\(\*SuperscriptBox[\(R\), \(2\)]\)=" <>
ToString@lmf["AdjustedRSquared"]], {5, 11}],
PlotRangePadding -> None, PlotRange -> {{0, 52}, {0, 12}},
ImageSize -> Large,
PlotLabel ->
Style["Murder Rate vs Gun Control Measures", Black, Bold]],
Plot[lmf[x], {x, 0, 52}],
Graphics[text[#, coords] & /@ Normal[Keys[ds]]]],
Row[{Text["Democrat",
BaseStyle -> {FontColor -> White, Background -> Blue}],
Text[" "],
Text["Divided", BaseStyle -> {Background -> Yellow}],
Text[" "],
Text["Republican",
BaseStyle -> {FontColor -> White, Background -> Red}]}]},
Alignment -> Center]
![enter image description here][5]
There seems to be no correlation between murders by firearms and the gun freedom index.
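As an aside, the scatter-plus-fit scaffold above is repeated for each factor examined below; it could be factored into a helper along these lines (a sketch reusing `ds` and `text` from this post; `fitPlot` and its argument list are my own naming, and the legend row is omitted for brevity):

```mathematica
(* Reusable scatter + linear fit + state-label plot. coordsF maps a
   state entity to an {x, y} pair, as the coords/gof/gini/poverty
   functions in this post do; r2Pos positions the R^2 annotation. *)
fitPlot[coordsF_, xLabel_, title_, xMax_, r2Pos_] :=
 Module[{pts = coordsF /@ Normal[Keys[ds]], lm},
  lm = LinearModelFit[pts, x, x];
  Show[
   ListPlot[pts, PlotTheme -> "Detailed",
    FrameLabel -> {xLabel, "MurderByFireArmRate (per 100k)"},
    Epilog ->
     Inset[Style["R^2=" <> ToString@lm["AdjustedRSquared"]], r2Pos],
    PlotRange -> {{0, xMax}, {0, 12}}, PlotRangePadding -> None,
    ImageSize -> Large, PlotLabel -> Style[title, Black, Bold]],
   Plot[lm[x], {x, 0, xMax}],
   Graphics[text[#, coordsF] & /@ Normal[Keys[ds]]]]]
```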
###Murder Rates by Firearms vs. Gun Ownership Rates###
The article by [Bindu Kalesan, et al., "Gun Ownership and social culture"][6] provides some data points on gun ownership rates by state.
gunOwnerShip = Import["GunOwnershipRate.xlsx", {"Data", 1}];
gunOwnerShip[[All, 1]] =
Interpreter["AdministrativeDivision"][#] & /@ gunOwnerShip[[All, 1]];
go = Dataset[Association[Rule[#1, #2] & @@@ gunOwnerShip]]
gof[state_] := {go[state], ds[state]["murderByFirearm"]}
lmf = LinearModelFit[gof /@ Normal[Keys[ds]], x, x];
Column[{Show[
ListPlot[gof /@ Normal[Keys[ds]], PlotTheme -> "Detailed",
FrameLabel -> {"Gun Ownership (%)",
"MurderByFireArmRate (per 100k)"},
Epilog ->
Inset[Style[
"\!\(\*SuperscriptBox[\(R\), \(2\)]\)=" <>
ToString@lmf["AdjustedRSquared"]], {60, 11}],
PlotRangePadding -> None, ImageSize -> Large,
PlotRange -> {{0, 70}, {0, 12}},
PlotLabel ->
Style["Murder Rate vs Gun Ownership (%)", Black, Bold]],
Plot[lmf[x], {x, 0, 70}],
Graphics[text[#, gof] & /@ Normal[Keys[ds]]]],
Row[{Text["Democrat",
BaseStyle -> {FontColor -> White, Background -> Blue}],
Text[" "],
Text["Divided", BaseStyle -> {Background -> Yellow}],
Text[" "],
Text["Republican",
BaseStyle -> {FontColor -> White, Background -> Red}]}]},
Alignment -> Center]
![enter image description here][7]
In this case, again, we don't see a correlation between citizen gun ownership and murder rates.
We can't say the same, however, about the relationship between gun restrictions and the availability of guns to law-abiding citizens:
gRestriction[state_] := {ds[state]["gunFreedomIndex"], go[state]};
lmf = LinearModelFit[gRestriction /@ Normal[Keys[ds]], x,
x]; Column[{Show[
ListPlot[gRestriction /@ Normal[Keys[ds]], PlotTheme -> "Detailed",
FrameLabel -> {"Gun Freedom Ranking", "Gun Ownership (%)"},
Epilog ->
Inset[Style[
"\!\(\*SuperscriptBox[\(R\), \(2\)]\)=" <>
ToString@lmf["AdjustedRSquared"]], {5, 48}],
PlotRangePadding -> None, ImageSize -> Large,
PlotRange -> {{0, 52}, {0, 70}},
PlotLabel ->
Style["Gun Ownership (%) vs. Gun Freedom Ranking", Black, Bold]],
Plot[lmf[x], {x, 0, 52}],
Graphics[text[#, gRestriction] & /@ Normal[Keys[ds]]]],
Row[{Text["Democrat",
BaseStyle -> {FontColor -> White, Background -> Blue}],
Text[" "],
Text["Divided", BaseStyle -> {Background -> Yellow}],
Text[" "],
Text["Republican",
BaseStyle -> {FontColor -> White, Background -> Red}]}]},
Alignment -> Center]
![enter image description here][8]
Let's explore the data against socioeconomic factors now.
###Murder Rates vs Income Inequality###
The Gini Index is a measure of income inequality:
gini[state_] := {EntityValue[state, "GiniIndex"],
ds[state]["murderByFirearm"]}
lmf = LinearModelFit[gini /@ Normal[Keys[ds]], x, x]
Column[{Show[
ListPlot[gini /@ Normal[Keys[ds]], PlotTheme -> "Detailed",
FrameLabel -> {"Gini Index", "MurderByFireArmRate (per 100k)"},
Epilog ->
Inset[Style[
"\!\(\*SuperscriptBox[\(R\), \(2\)]\)=" <>
ToString@lmf["AdjustedRSquared"]], {0.53, 8}],
PlotRangePadding -> None, ImageSize -> Large,
PlotRange -> {{0.4, 0.55}, {0, 12}},
PlotLabel -> Style["Murder Rate vs Gini Index", Black, Bold]],
Plot[lmf[x], {x, 0.4, 0.55}],
Graphics[text[#, gini] & /@ Normal[Keys[ds]]]],
Row[{Text["Democrat",
BaseStyle -> {FontColor -> White, Background -> Blue}],
Text[" "],
Text["Divided", BaseStyle -> {Background -> Yellow}],
Text[" "],
Text["Republican",
BaseStyle -> {FontColor -> White, Background -> Red}]}]},
Alignment -> Center]
![enter image description here][9]
###Murder Rate vs. Poverty Level###
Poverty-level data was obtained from the Census Bureau. Numbers represent the estimated percentage of individuals living below the poverty level in 2009.
[(http://www2.census.gov/library/publications/2011/compendia/statab/131ed/tables/12s0709.xls)][10]
stateRules =
Thread[Rule[
EntityValue[
EntityList[
EntityClass["AdministrativeDivision", "AllUSStatesPlusDC"]],
"StateAbbreviation"],
EntityList[
EntityClass["AdministrativeDivision", "AllUSStatesPlusDC"]]]]
povertyLevel =
Dataset[Association[
Rule[#1, #2] & @@@ {{"AL", 17.5}, {"AK", 9.}, {"AZ", 16.5}, {"AR",
18.8}, {"CA", 14.2}, {"CO", 12.9}, {"CT", 9.4}, {"DE",
10.8}, {"DC", 18.4}, {"FL", 14.9}, {"GA", 16.5}, {"HI",
10.4}, {"ID", 14.3}, {"IL", 13.3}, {"IN", 14.4}, {"IA",
11.8}, {"KS", 13.4}, {"KY", 18.6}, {"LA", 17.3}, {"ME",
12.3}, {"MD", 9.1}, {"MA", 10.3}, {"MI", 16.2}, {"MN",
11.}, {"MS", 21.9}, {"MO", 14.6}, {"MT", 15.1}, {"NE",
12.3}, {"NV", 12.4}, {"NH", 8.5}, {"NJ", 9.4}, {"NM",
18.}, {"NY", 14.2}, {"NC", 16.3}, {"ND", 11.7}, {"OH",
15.2}, {"OK", 16.2}, {"OR", 14.3}, {"PA", 12.5}, {"RI",
11.5}, {"SC", 17.1}, {"SD", 14.2}, {"TN", 17.1}, {"TX",
17.2}, {"UT", 11.5}, {"VT", 11.4}, {"VA", 10.5}, {"WA",
12.3}, {"WV", 17.7}, {"WI", 12.4}, {"WY", 9.8}} /. stateRules]]
poverty[state_] := {povertyLevel[state], ds[state]["murderByFirearm"]}
lmf = LinearModelFit[poverty /@ Normal[Keys[ds]], x, x]
Column[{Show[
ListPlot[poverty /@ Normal[Keys[ds]], PlotTheme -> "Detailed",
FrameLabel -> {"Individuals under Poverty Line (%)",
"MurderByFireArmRate (per 100k)"},
Epilog ->
Inset[Style[
"\!\(\*SuperscriptBox[\(R\), \(2\)]\)=" <>
ToString@lmf["AdjustedRSquared"]], {23, 6}],
PlotRangePadding -> None, ImageSize -> Large,
PlotRange -> {{5, 25}, {0, 12}},
PlotLabel -> Style["Murder Rate vs Poverty Level", Black, Bold]],
Plot[lmf[x], {x, 5, 25}],
Graphics[text[#, poverty] & /@ Normal[Keys[ds]]]],
Row[{Text["Democrat",
BaseStyle -> {FontColor -> White, Background -> Blue}],
Text[" "],
Text["Divided", BaseStyle -> {Background -> Yellow}],
Text[" "],
Text["Republican",
BaseStyle -> {FontColor -> White, Background -> Red}]}]},
Alignment -> Center]
![enter image description here][11]
###Murder Rates vs Race Composition###
Population estimates by race were obtained from the US Census Bureau. (https://www.census.gov/popest/data/state/asrh/2014/index.html)
race = Import["PEP_2011_PEPSR5H.xls", {"Data", 1}];
racelbl = Rest@First@race; race = Rest@Rest@race;
race[[All, 1]] =
Interpreter["AdministrativeDivision"][#] & /@ race[[All, 1]];
dsRace = Dataset[
Association[
Thread[Rule[race[[All, 1]],
AssociationThread[racelbl, #] & /@ race[[All, 2 ;;]]]]]]
aaRate[state_] := {100 dsRace[state, "africanamericanRate"],
ds[state, "murderByFirearm"]}
lmf = LinearModelFit[aaRate /@ Normal[Keys[ds]], x, x]
Column[{Show[
ListPlot[aaRate /@ Normal[Keys[ds]], PlotTheme -> "Detailed",
FrameLabel -> {"African American Population (%)",
"MurderByFireArmRate (per 100k)"}, PlotRangePadding -> None,
ImageSize -> Large, PlotRange -> {{0, 60}, {0, 12}},
Epilog ->
Inset[Style[
"\!\(\*SuperscriptBox[\(R\), \(2\)]\)=" <>
ToString@lmf["AdjustedRSquared"]], {50, 10}],
PlotLabel ->
Style["Murder Rate vs African American Population(%)", Black,
Bold]], Plot[lmf[x], {x, 0, 60}],
Graphics[text[#, aaRate] & /@ Normal[Keys[ds]]]],
Row[{Text["Democrat",
BaseStyle -> {FontColor -> White, Background -> Blue}],
Text[" "],
Text["Divided", BaseStyle -> {Background -> Yellow}],
Text[" "],
Text["Republican",
BaseStyle -> {FontColor -> White, Background -> Red}]}]},
Alignment -> Center]
![enter image description here][12]
The Pew Hispanic Center monitors trends in the Hispanic population in the US. A breakout of the population by state is available at their website.
[http://www.pewhispanic.org/states/][13]
hispanicData =
Import["http://www.pewhispanic.org/files/states/xls/ALL_11.xlsx", \
{"Data", 1}];
hispanicDataLbl = hispanicData[[5]];
hispanicData = hispanicData[[7 ;;]];
hispanicData[[All, 1]] =
Interpreter["AdministrativeDivision"][#] & /@
hispanicData[[All, 1]];
dsHispanic =
Dataset[Association[
Thread[Rule[hispanicData[[All, 1]], hispanicData[[All, 3]]]]]];
hispanic[state_] := {100 dsHispanic[state],
ds[state, "murderByFirearm"]}
lmf = LinearModelFit[hispanic /@ Normal[Keys[ds]], x, x]
Column[{Show[
ListPlot[hispanic /@ Normal[Keys[ds]], PlotTheme -> "Detailed",
FrameLabel -> {"Hispanic Population (%)",
"MurderByFireArmRate (per 100k)"}, PlotRangePadding -> None,
ImageSize -> Large, PlotRange -> {{0, 60}, {0, 12}},
Epilog ->
Inset[Style[
"\!\(\*SuperscriptBox[\(R\), \(2\)]\)=" <>
ToString@lmf["AdjustedRSquared"]], {55, 3}],
PlotLabel ->
Style["Murder Rate vs Hispanic Population(%)", Black, Bold]],
Plot[lmf[x], {x, 0, 60}],
Graphics[text[#, hispanic] & /@ Normal[Keys[ds]]]],
Row[{Text["Democrat",
BaseStyle -> {FontColor -> White, Background -> Blue}],
Text[" "],
Text["Divided", BaseStyle -> {Background -> Yellow}],
Text[" "],
Text["Republican",
BaseStyle -> {FontColor -> White, Background -> Red}]}]},
Alignment -> Center]
![enter image description here][14]
[1]: https://www.fbi.gov/about-us/cjis/ucr/crime-in-the-u.s/2011/crime-in-the-u.s.-2011/tables/table-20
[2]: http://www.ncsl.org/Portals/1/Documents/Elections/Legis_Control_%202016_Apr20.pdf
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4951dComp.png&userId=78214
[4]: http://www.gunsandammo.com/network-topics/culture-politics-network/best-states-for-gun-owners-2014/
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mrvsgc.png&userId=78214
[6]: http://injuryprevention.bmj.com/content/early/2015/06/09/injuryprev-2015-041586.full.pdf?keytype=ref&ijkey=doj6vx0laFZMsQ2
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mrgo.png&userId=78214
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gogfr.png&userId=78214
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mrgini.png&userId=78214
[10]: http://www2.census.gov/library/publications/2011/compendia/statab/131ed/tables/12s0709.xls
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mrpl.png&userId=78214
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mraa.png&userId=78214
[13]: http://www.pewhispanic.org/states/
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mrhp.png&userId=78214William Playfair2017-10-09T04:06:31ZSolve f(X)=0 where X is a matrix and f(X) is a complex matrix?
http://community.wolfram.com/groups/-/m/t/1191309
Why does the solution have dimensions {2, 2, 2} instead of {2, 2}?
(Thanks to Bill Simpson)
Clear[a, b, c, d];
X = {{a, b}, {c, d}};
h1 = RandomComplex[{2 + I, 10 + 20 I}, {2, 2}];(*MIMO channel*)
h2 = RandomComplex[{2 + I, 10 + 20 I}, {2, 2}];
myu = RandomReal[1, 2]; (*probabilities of the realization of the MIMO channels*)
Lambda = 0.3;
FX1 = myu[[1]]*ConjugateTranspose[h1].Inverse[
ConjugateTranspose[h1.X.h1]/0.2 + IdentityMatrix[2]].h1;
FX2 = myu[[2]]*ConjugateTranspose[h2].Inverse[
ConjugateTranspose[h2.X.h2]/0.2 + IdentityMatrix[2]].h2;
FXlambda = FX1 + FX2;
Map[X /. # &,
NSolve[myu[[1]]*
ConjugateTranspose[h1].Inverse[
ConjugateTranspose[h1.X.h1]/0.2 + IdentityMatrix[2]].h1 -
Lambda*IdentityMatrix[2] == RandomComplex[{0, 0}, {2, 2}], {a, b, c, d}]]Massa Ndong2017-09-24T14:21:25ZWhy does Multinomial[-1/2, -1/2, 1] give Indeterminate rather than 1/Pi?
http://community.wolfram.com/groups/-/m/t/1200741
Is this a bug? Or there might be something I don't understand.
In[1]:= Multinomial[-1/2, -1/2, 1]
"During evaluation of In[1]:= Infinity::indet: Indeterminate expression 0 ComplexInfinity encountered."
Out[1]= Indeterminate
None of the arguments are located at poles of the factorial function (negative integers), so I expected this answer:
In[2]:= (-1/2 - 1/2 + 1)! / ((-1/2)! (-1/2)! (1)!)
Out[2]= 1/Pi
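Equivalently, writing each factorial as a Gamma function (n! = Gamma[n+1]) makes it explicit that no argument lands on a pole:

```mathematica
(* Same quantity in terms of Gamma; every argument is strictly
   positive, so no pole is involved: *)
Gamma[-1/2 - 1/2 + 1 + 1]/(Gamma[-1/2 + 1] Gamma[-1/2 + 1] Gamma[1 + 1])
(* -> 1/Pi *)
```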
Checking with Simplify using Assumptions:
In[3]:= Assuming[a == -1/2 && b == -1/2 && c == 1,
Simplify[ Multinomial[a, b, c] == 1/Pi ]
]
Out[3]= True
Thanks!
(I'm running 11.2)
In[4]:= $Version
Out[4]= "11.2.0 for Mac OS X x86 (64-bit) (September 11, 2017)"Brad Chalfan2017-10-09T22:24:33ZPlot a polyhedron/region trapped between 4 planes?
http://community.wolfram.com/groups/-/m/t/1200672
I need to plot the region trapped between the 4 planes
x = 0, y = 0, z = 0, and x + y + z - 1 = 0.
Here is the code that I used:
RegionPlot3D[ContourPlot3D[x == 0, {x, 0, 1}, {y, 0, 1}, {z, 0, 1}],
ContourPlot3D[y == 0, {x, 0, 1}, {y, 0, 1}, {z, 0, 1}],
ContourPlot3D[z == 0, {x, 0, 1}, {y, 0, 1}, {z, 0, 1}],
ContourPlot3D[x + y + z - 1 == 0, {x, 0, 1}, {y, 0, 1}, {z, 0, 1}]]
But there are other extra bits that I do not know how to delete.Amir Baghban2017-10-09T19:17:40Z[WSS16] Image Colorization
http://community.wolfram.com/groups/-/m/t/884348
The aim of my project for the Wolfram Science Summer School was to build a neural network able to colorize grayscale images in a realistic way. The network was built following the article [1]. In this paper, the authors propose a fully automated approach to the colorization of grayscale images, which uses a combination of global image features, extracted from the entire image, and local image features, computed from small image patches. Global priors provide information at the image level, such as whether the image was taken indoors or outdoors, whether it is day or night, etc., while local features represent the local texture or object at a given location. By combining both kinds of features, it is possible to leverage semantic information to color images without requiring human interaction. The approach is based on Convolutional Neural Networks, which have a strong capacity for learning, and the model is trained to predict the chrominance of a grayscale image in the CIE L*a*b* colorspace. Predicting colors has the nice property that training data is practically free: any color photo can be used as a training example.
**Net Layers**
The model consists of four main components: a low-level features network, a mid-level features network, a global features network, and a colorization network. First, a common set of shared low-level features are extracted from the image. Using these features, a set of global image features and mid-level image features are computed. Then, the mid-level and the global features are both fused by a "fusion layer" and used as the input to a colorization network that outputs the final chrominance map.
Each layer has a ReLU transfer function except for the last convolution of the colorization network, where a sigmoid function is applied. The model is able to process images of any size, but it is most efficient when the input images are 224x224 pixels, as the shared low-level features layers can share outputs. Note that when the input image has a different resolution, while the low-level feature weights are shared, a rescaled image of size 224x224 must be used for the global features network. This requires processing both the original image and the rescaled image through the low-level features network, increasing both memory consumption and computation time. For this reason, we trained the model exclusively with images of size 224x224 pixels.
*Low-Level Features Network*
A 6-layer Convolutional Neural Network obtains low-level features directly from the input image. The convolution filter bank that the network represents is shared, feeding both the global features network and the mid-level features network. In order to reduce the size of the feature maps, we use convolution layers with increased strides instead of the max-pooling layers usual in similar networks: with a stride of 2 and suitable padding, the output is effectively half the size of the input layer. We used 3x3 convolution kernels exclusively and a padding of 1x1 to ensure the output is the same size as the input (or half, when using a stride of 2).
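As an illustration of the strided-convolution design described above, here is a minimal sketch of such a 6-layer stack in the Mathematica 11 NeuralNetworks framework. Channel counts follow the paper; this is a sketch, not the exact trained network.

```mathematica
(* Sketch of the 6-layer low-level features network: 3x3 kernels,
   1x1 padding, with strides of 2 halving the spatial size. *)
lowLevelFeatures = NetChain[{
   ConvolutionLayer[64, {3, 3}, "Stride" -> 2, "PaddingSize" -> 1], Ramp,
   ConvolutionLayer[128, {3, 3}, "Stride" -> 1, "PaddingSize" -> 1], Ramp,
   ConvolutionLayer[128, {3, 3}, "Stride" -> 2, "PaddingSize" -> 1], Ramp,
   ConvolutionLayer[256, {3, 3}, "Stride" -> 1, "PaddingSize" -> 1], Ramp,
   ConvolutionLayer[256, {3, 3}, "Stride" -> 2, "PaddingSize" -> 1], Ramp,
   ConvolutionLayer[512, {3, 3}, "Stride" -> 1, "PaddingSize" -> 1], Ramp},
  "Input" -> {1, 224, 224}]
```

With a 1-channel 224x224 input, the three stride-2 layers bring the spatial size down to 28x28 with 512 channels, matching the mid-level input described below.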
*Global Features Network*
The global image features are obtained by further processing the low-level features with four convolutional layers followed by three fully-connected layers. This results in a 256-dimensional vector representation of the image.
*Mid-Level Features Network*
The mid-level features are obtained by processing the low-level features further with two convolutional layers. The output is bottlenecked from the original 512-channel low-level features to 256-channel mid-level features. Unlike the global image features, the low-level and mid-level features networks are fully convolutional networks, such that the output is a scaled version of the input.
*Fusion Layer*
In order to combine the global image features, a 256-dimensional vector, with the (mid-level) local image features, a 28x28x256-dimensional tensor, the authors introduce a fusion layer. This can be thought of as concatenating the global features with the local features at each spatial location and processing them through a small one-layer network. This effectively combines the global and local features to obtain a new feature map that is, like the mid-level features, a 3D volume.
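A minimal sketch of this fusion operation, assuming the ReplicateLayer and CatenateLayer layers of the later (11.1+) NeuralNetworks framework; the actual fusion weights and wiring in the paper differ slightly:

```mathematica
(* Broadcast the 256-dimensional global feature vector over the
   28x28 spatial grid, catenate it with the 256x28x28 mid-level
   features along the channel dimension, and mix with a 1x1
   convolution. A sketch only. *)
fusionLayer = NetGraph[{
   ReplicateLayer[28, -1],        (* 256 -> 256x28 *)
   ReplicateLayer[28, -1],        (* 256x28 -> 256x28x28 *)
   CatenateLayer[],               (* -> 512x28x28 *)
   ConvolutionLayer[256, {1, 1}], Ramp},
  {NetPort["Global"] -> 1 -> 2,
   {NetPort["MidLevel"], 2} -> 3 -> 4 -> 5}]
```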
*Colorization Network*
Once the features are fused, they are processed by a set of convolutions and upsampling layers, which use nearest-neighbour interpolation so that the output is twice as wide and twice as tall. These layers are alternated until the output is half the size of the original input. The output layer of the colorization network consists of a convolutional layer with a sigmoid transfer function that outputs the chrominance of the input grayscale image. Finally, the computed chrominance is upsampled and combined with the input intensity/luminance image to produce the resulting color image. In order to train the network, we used the Mean Squared Error (MSE) criterion. Given a color image for training, the input of the model is the grayscale image, while the target output is the a*,b* components of the CIE L*a*b* colorspace. The a*,b* components are globally normalized so they lie in the [0,1] range of the sigmoid transfer function.
*Colorization with Classification*
While training with only color images using the MSE criterion does give good performance, the network sometimes makes obvious mistakes due to not properly learning the global context of the image, e.g., whether it is indoors or outdoors. As learning these networks is a non-convex problem, we facilitated the optimization by also training for classification jointly with the colorization. As we trained the model using a large-scale dataset for classification of N classes (the Mathematica ImageIdentify dataset), we had classification labels available for training. These labels correspond to a global image tag and thus can be used to guide the training of the global image features. We did this by introducing another very small neural network that consists of two fully-connected layers: a hidden layer with 256 outputs and an output layer with as many outputs as the number of classes in the dataset. The input of this network is the second-to-last layer of the global features network, with 512 outputs. We trained this network using the cross-entropy loss, jointly with the MSE loss for the colorization network.
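The auxiliary classification branch can be sketched as follows (LinearLayer is the 11.1+ name for a fully-connected layer; `nClasses` is a placeholder for the number of classes in the training dataset):

```mathematica
(* Two fully-connected layers on top of the 512-dimensional
   second-to-last activation of the global features network,
   trained jointly with a cross-entropy loss. Sketch only. *)
nClasses = 205;  (* placeholder; depends on the dataset used *)
classificationBranch = NetChain[{
   LinearLayer[256], Ramp,
   LinearLayer[nClasses], SoftmaxLayer[]},
  "Input" -> 512]
```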
*Implementation*
The aim of my project was to build the network described in the paper using the new NeuralNetworks framework of Mathematica 11. In order to achieve this, some adjustments were needed.
First of all, we decided to train and evaluate the network only on images of size 224x224 pixels, in order to use (and train) only one low-level features network instead of two with shared weights and different outputs.
The final network has two inputs: the first is the colored 224x224 px image, encoded by a "NetEncoder" in the LAB colorspace; the second is the class of the image. The two outputs (named "Loss" and "Output") represent the values of the two loss functions used (one for the colorization, the other for the classification), which are then summed together by the NetTrain function. The three color channels of the input image are separated by a split layer: the L channel feeds the low-level features network, while the a,b channels are scaled and concatenated in order to obtain a target set for the mean squared loss function, comparable with the output of the colorization network.

The fusion layer has been replaced by a broadcast layer, which joins the rank-3 tensor output by the mid-level network with the vector from the global features network. However, the way they are combined is not exactly the same as described in the paper. To evaluate the trained network on a grayscale image, it's necessary to drop some branches of the network, such as the classification network and the layers that process the a,b channels of the colored input image to produce the target set for the colorization loss function.
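At evaluation time, recombining the predicted chrominance with the input luminance can be sketched as follows, where `colorize` is a hypothetical placeholder for the trained colorization branch, assumed to return the predicted a*,b* channel images:

```mathematica
(* Sketch only: recombine the original L channel with the predicted,
   upsampled a,b chrominance channels. "colorize" stands for the
   trained network and is not defined here. *)
colorizeImage[img_Image] := Module[{l, ab},
  l = First@ColorSeparate[ColorConvert[img, "LAB"]];
  ab = ImageResize[#, ImageDimensions[img]] & /@ colorize[l];
  ColorCombine[Join[{l}, ab], "LAB"]]
```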
![Network described in the paper][1]
![Network implementation with Mathematica NeuralNetworks framework][2]
**Results**
![enter image description here][3]
**Conclusions**
The network described in the paper has been trained on the Places scene dataset [Zhou et al. 2014], which consists of 2,448,872 training images and 20,500 validation images, with 205 classes corresponding to scene types. They filtered the images with a small automated script, removing grayscale images and those that have little color variance. They trained using a batch size of 128 for 200,000 iterations, corresponding to roughly 11 epochs. This takes roughly 3 weeks on one core of an NVIDIA Tesla K80 GPU.
We needed to introduce some new layers into the existing framework and to fix some bugs, so we were able to train our network for only 14 hours, on a dataset of 350,000 images, on one core of a GPU Titan machine. Furthermore, the images in our training set mainly represent specific items, so better results could probably be achieved by also including images of other types of subjects (landscapes, human-made images, indoor scenes, etc.). The results we obtained are shown in the section above and are quite good. We are confident that with deeper and longer training our network would give considerably better results.
**Open Problems / Future Developments**
Due to the separation between the global and local features, it is possible to use global features computed on one image in combination with local features computed on another image, to change the style of the resulting colorization. One of the more interesting things the model can do is adapt the colorization of one image to the style of another. This is straightforward to do with this model due to the decorrelation between the global features and the mid-level features. In order to colorize an image A using the style taken from an image B, it's necessary to compute the mid-level local features of image A and the global features of image B. Then it's possible to fuse these features and process them with the colorization network. Both the local and the global features are computed from grayscale images: it's not necessary to use any color information at all.
The main limitation of the method lies in the fact that it is data-driven and thus will only be able to colorize images that share common properties with those in the training set. In order to evaluate it on significantly different types of images, it would be necessary to train the model on all types of images (indoor, outdoor, human-made, ...). In order to obtain good style transfer results, it is important for both images to have some level of semantic similarity.
**References**
[1] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. "Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification".
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=netwPicture.png&userId=884315
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MyNetwork.png&userId=884315
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=result2.png&userId=884315Sabrina Giollo2016-07-07T19:53:19ZVariable Styles inside Text not supported any more?
http://community.wolfram.com/groups/-/m/t/1200540
In teaching Mathematica, I frequently annotate code with text cells that include System names and other small bits of code. Until now, if you were writing something like "The name for the number pi is Pi." you could highlight Pi and convert it to Input style. In version 11.2, if I do this, the whole cell is converted to Input. Is there a new setting that overrides this behavior?
Ken Levasseur
UMass LowellKen Levasseur2017-10-09T14:25:52Z[✓] Know the number of braces in an expression?
http://community.wolfram.com/groups/-/m/t/1197512
An expression may contain many braces, for example
a = {1, 2, {3, 4, {5}}}
How can I find out how many braces there are in an expression, using a function?Math Logic2017-10-03T20:54:00ZExtract Images from particular page of PDF
http://community.wolfram.com/groups/-/m/t/1200163
Hi!
I am trying to extract the images stored on a particular page of a pdf. I can open the page using the following code:
Import["c:\\ABCDE.pdf", {"Pages", 4}]
How would I go about importing all the images only on this page? The below line extracts all the images from all pages:
Import["c:\\ABCDE.pdf", "Images"]
As I am working with a large file, I need to extract the images on a page-by-page basis.
I hope someone can help!
Thanks and Regards,
Priyan.Priyan Fernando2017-10-09T11:18:49ZAvoid graphics-jitter when Animating Plot?
http://community.wolfram.com/groups/-/m/t/1198937
On my system, this produces a result where the vertical-axis labels jitter left and right by several pixels as the animation progresses:
Module[{tableOfExamples =
Table[Plot[E^(-Pi x^2), {x, -Pi, +Pi},
PlotRange -> {Full, {-1.2, +1.2}}, ImageSize -> 270], 32]},
Animate[tableOfExamples[[k]], {k, 1, Length[tableOfExamples], 1}]]
*All 32 plots in the animated table are identical*, so there's no difference among them to account for the jitter.
I see the jitter when I have magnification for the notebook set to 125% (setting in the lower-right corner of the notebook). When I change the magnification to 100% or 150%, I don't see any jitter.
I'm on Windows 10, and my screen resolution is 1920 x 1080.
![enter image description here][1]
The jitter on my screen is faster, but my screen-capture software under-sampled it.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=jitter.gif&userId=1009289Joe Donaldson2017-10-07T02:26:57Z[✓] Do threshold Histogram?
http://community.wolfram.com/groups/-/m/t/1200032
Hello Friends,
I am trying to create a thresholded histogram in Mathematica.
It's not available in the Help either.
Kindly advise how to do it (see attached figure).
ThanksMan Oj2017-10-09T06:25:17ZObtain exact results from GenerateConditions?
http://community.wolfram.com/groups/-/m/t/1199756
In[1]:= squareRootOfTermPlusOne[term_] :=
Sum[Binomial[1/2, n]*term^n, {n, 0, Infinity},
GenerateConditions -> True]
In[2]:= squareRootOfTermPlusOne[x]
Out[2]= ConditionalExpression[Sqrt[1 + x], Abs[x] < 1]
It says the infinite sum converges if and only if
Abs[x]<1
But here are counterexamples of convergence on arguments with an absolute value not less than 1:
In[3]:= squareRootOfTermPlusOne /@ {+1, -1, I}
Out[3]= {Sqrt[2], 0, Sqrt[1 + I]}
And similarly for GeneratingFunction:
In[11]:= GeneratingFunction[Binomial[1/2, n], n, x,
GenerateConditions -> True]
Out[11]= ConditionalExpression[Sqrt[1 + x], Abs[x] < 1]Joe Donaldson2017-10-08T23:33:23Z[✓] Find the critical solutions?
http://community.wolfram.com/groups/-/m/t/1199464
The question states, "Given that dP/dx=3P-2P^2 for the population P of a certain species at time t, find the critical solutions (equilibrium solutions)." I'm not exactly sure how to do that. I've tried using DSolveValue and Solve, but they give two different answers, so I'm not sure which one is right, or if either is right. I'm new to Mathematica, so I don't know what to use.
In[3]:= (*2.1*)
DSolveValue[P'[x] == 3*P[x] - 2*P[x]^2, P, x]
Out[3]= Function[{x}, (3 E^(3 x))/(2 E^(3 x) + E^(3 C[1]))]
In[6]:= Clear[P, x]
Solve[P'[x] == 3*P[x] - 2*P[x]^2, P[x]]
Out[7]= {{P[x] -> 1/4 (3 - Sqrt[9 - 8 Derivative[1][P][x]])}, {P[x] ->
1/4 (3 + Sqrt[9 - 8 Derivative[1][P][x]])}}Brendan Isaac2017-10-08T17:07:45Z[GIF] Vitals (Animated von Mises distribution)
http://community.wolfram.com/groups/-/m/t/1199921
![Animated von Mises distribution][1]
**Vitals**
This one is pretty simple: there are 22 rows of dots, translating left or right depending on whether the row number is even or odd. Within each row, you can see the dots as plotting the density of the [von Mises distribution][2]. Specifically, the radius of each dot is the value of the von Mises pdf at that point.
Note that the von Mises distribution is like a Gaussian distribution on the circle. In particular, it is a periodic probability distribution, which is why each row is periodic, showing a total of 5 periods.
Here's the code:
Manipulate[
Show[
GraphicsGrid[
Table[{
Graphics[{
Lighter[ColorData["Rainbow"][1 - (n + 10)/21], .2],
Disk[{#, 0}, PDF[VonMisesDistribution[0, .3], Mod[# + (-1)^n t, 2 π, -π]]] & /@ Range[-4 π, 4 π, π/6]},
PlotRange -> {{-4 π - π/12, 4 π + π/12}, {-.5, .5}}]},
{n, -10, 11}],
Spacings -> {-6, Automatic}], ImageSize -> 600, Background -> GrayLevel[.1],
PlotRangePadding -> {None, Scaled[.0242]}
],
{t, 0, 2 π}]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dots12sc.gif&userId=610054
[2]: https://en.wikipedia.org/wiki/Von_Mises%E2%80%93Fisher_distributionClayton Shonkwiler2017-10-09T00:55:26ZWhat's the difference between plotting and solving?
http://community.wolfram.com/groups/-/m/t/1198413
Hello community,
this is something that's been bugging me for a long time, I simply don't get it: Why can Mathematica plot some functions without any problem, but not numerically solve an equation containing this function? I mean, I can *see* that Mathematica obviously calculated the values for plotting. Now why can't NSolve just use these values, compare them to the value I want to solve for (e.g., 0) and then tell me for which x the function f(x) is closest to 0? I had to build my own functions doing exactly that, and that's cumbersome and very prone to errors since I'm not a programmer.
Any insight provided to me about this issue is appreciated!Sebastian Neumann2017-10-05T10:21:18ZSolve these trigonometric equations?
http://community.wolfram.com/groups/-/m/t/1199382
I tried to solve these two equations using Solve and Reduce, but it does not work. Can anyone help me, please?
eq1 = (1500 - 100/Sin[alp])/Sin[180 - alp - th2] == (1176.795 - 100/Tan[alp])/Sin[th2];
eq2 = 200 Cos[90 - alp - th2] - 500 Sin[alp] == 100;
Solve[{eq1,eq2},alp]Ahmed Khodari2017-10-08T18:37:10ZWhat is the logic of transparency handling by image-processing functions?
http://community.wolfram.com/groups/-/m/t/1195286
When working on [this][1] post I [discovered ][2] that starting from *Mathematica* 10.0 the behavior of such functions as `Blur`, `GaussianFilter`, `ImageConvolve` and some others was changed. Namely, they no longer affect alpha channel of an `Image` they are applied to. Unfortunately the Documentation is completely silent about this important change which has broken all dependent code, including [this][3] ingenious function written by [Heike][4].
My interpretation was that the purpose of this incompatible change is to make transparency handling by all image processing functions consistent. In versions 8 and 9, `ImageAdjust`, `Dilation` and some other similar functions weren't applied to the alpha channel. A consistent implementation would make them all ignore the alpha channel, or make them all process the alpha channel (or, better, make this behavior optional). But further investigation showed that even in the latest version 11.2.0 there are closely related functions which still affect the alpha channel: `ImageCorrelate` and `ImageFilter` (and maybe others, I didn't test *every* function).
Is it a bug that `ImageCorrelate` and `ImageFilter` still affect alpha channel? If so, we should expect another code-breaking change in one of the future releases of *Mathematica*.
Or is it *by design* and we can rely on this functionality? If so, what is the logic behind this design decision? Currently I see no way to *predict* this behavior, and there is also no information in the Docs. Is there any method to know besides testing every function by hand?
<sub>*(Cross-posted on [Mathematica.SE][5].)*</sub>
[1]: http://community.wolfram.com/groups/-/m/t/1188397
[2]: https://mathematica.stackexchange.com/q/156210/280
[3]: https://mathematica.stackexchange.com/a/4155/280
[4]: https://mathematica.stackexchange.com/users/46/heike
[5]: https://mathematica.stackexchange.com/q/156384/280Alexey Popkov2017-10-01T00:46:47ZFourierCosTransform and FourierTransform don't agree for even functions.
http://community.wolfram.com/groups/-/m/t/1199360
The Wolfram-Language specification [states][1]:
> Results from FourierCosTransform and FourierTransform agree for even functions
In this respect, Mathematica does not conform to the Wolfram-Language specification:
In[1]:= x = E^(+I s)/2 + E^(-I s)/2;
In[2]:= FourierTransform[x, s, t]
Out[2]= Sqrt[\[Pi]/2] DiracDelta[-1 + t] +
Sqrt[\[Pi]/2] DiracDelta[1 + t]
In[3]:= FourierCosTransform[x, s, t]
Out[3]= 0
Moreover, Mathematica gives inconsistent results on arguments claimed equal by Mathematica:
In[4]:= x == ExpToTrig[x] // Simplify
Out[4]= True
In[5]:= x = ExpToTrig[x]
Out[5]= Cos[s]
In[6]:= FourierCosTransform[x, s, t]
Out[6]= Sqrt[\[Pi]/2] (DiracDelta[-1 + t] + DiracDelta[1 + t])
[1]: http://reference.wolfram.com/language/ref/FourierCosTransform.htmlJoe Donaldson2017-10-08T07:39:58ZBasis of factors for large degree polynomials
http://community.wolfram.com/groups/-/m/t/1198917
In $\mathbb{Z}_2$, the polynomial $x^{2^6}+x+1$ or $x^{64}+x+1$ factors into $x^4+x+1$, $x^{12}+x^9+x^5+x^2+1$, $x^{12}+x^9+x^5+x^4+x^2+x+1$, $x^{12}+x^9+x^8+x^5+1$, $x^{12}+x^9+x^8+x^5+x^4+x+1$, $x^{12}+x^9+x^8+x^6+x^3+x^2+1$. Below, under $1+x+x^{64}$, you can see the degree 12 factors arranged as columns, followed by the basis (a gray bar separates factors and basis). The same is shown for $n$ from 5 to 13.
[![factors and basis][1]][1]
In $\mathbb{Z}_2$, $x^{2^n}+x+1$ has many factors of degree $2 n$ and the number of basis elements always seems to be $n-2$. Here are pictures of the basis for $n$ from 7 to 18.
[![basis in z2][2]][2]
Here's Mathematica code for the first image.
data = Table[Module[{ polynomials, len, polyandbasis},
polynomials = Last[Sort[SplitBy[SortBy[CoefficientList[#, x] & /@ (First /@
FactorList[x^(2^power) + x + 1, Modulus -> 2]), {Length[#], Reverse[#]} &], Length[#] &]]];
len = Length[polynomials[[1]]];
polyandbasis = Flatten /@ Transpose[{ 3 Transpose[polynomials], Table[{0, 1, 0}, {len}],
3 Transpose[Select[RowReduce[polynomials, Modulus -> 2], Total[#] > 0 &]]}];
Column[{Text[x^(2^power) + x + 1], ArrayPlot[polyandbasis, PixelConstrained -> True,
ImageSize -> {800, 2 len + 4}, Frame -> False]}, Alignment -> Center]], {power, 5, 13}];
Column[{Row[Take[data, 6], Spacer[30]], Row[Take[data, {7, 8}], Spacer[60]], Row[Take[data, {9}]]}]
First question: Does the $\mathbb{Z}_2$ polynomial $x^{2^n}+x+1$ have a particular name? It has a lot of nice properties.
I'd like to make pictures of higher order basis elements. Unfortunately, Mathematica doesn't want to Factor $x^{1048576}+x+1$, claiming it's out of bounds. Also, PolynomialGCD doesn't like high exponents. I've looked at the [Cantor–Zassenhaus algorithm](https://en.wikipedia.org/wiki/Cantor%E2%80%93Zassenhaus_algorithm) and other factorization methods over finite fields, but didn't readily understand them.
Is there some clever way to get the basis of the $\mathbb{Z}_2$ factors of $x^{2^n}+x+1$ for $n$ from 19 to 120 in Mathematica? Is there some nice way of quickly getting some of the degree $2n$ factors.
(Also at [math.stackexchange](https://math.stackexchange.com/questions/2460638/basis-of-factors-for-large-degree-polynomials) )
[1]: https://i.stack.imgur.com/GGXyR.jpg
[2]: https://i.stack.imgur.com/SwUO0.jpgEd Pegg2017-10-06T18:33:27Z