Community RSS Feed
http://community.wolfram.com
RSS feed for Wolfram Community, showing discussions from all groups sorted by activity.

Why does the multiplication of these terms result in 0 using Wolfram|Alpha?
http://community.wolfram.com/groups/-/m/t/1167388
Hello
This is my first time using W|A, and it has been a long time since I took an algebra class; now I am taking calculus. While I understand the terms and procedures, I get lost in the algebra parts.
Consider this part of an exercise and see the multiplication in the numerator:
![example][1]
As you can see, according to my professor the product in the fraction's numerator is (x^2 + x) - x^2. But when I try to multiply that numerator in W|A, the result is x = 0. Is this because W|A is just giving me the value of x? Here is my input in W|A: Solve (sqrt(x^2+x) - x) (sqrt(x^2+x) + x)
So what can I do to obtain the same results as we do in class?
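A likely explanation, sketched in Wolfram Language (this snippet is an editorial illustration, not part of the original question): a bare "Solve" of an expression is interpreted as solving expression == 0, whereas expanding the product reproduces the classroom algebra.

```mathematica
(* Solve treats the expression as an equation set equal to 0: *)
Solve[(Sqrt[x^2 + x] - x) (Sqrt[x^2 + x] + x) == 0, x]
(* {{x -> 0}} *)

(* Expanding the product instead gives the result from class: *)
Expand[(Sqrt[x^2 + x] - x) (Sqrt[x^2 + x] + x)]
(* x, i.e. (x^2 + x) - x^2 *)
```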
Please... thank you!!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Example.jpg&userId=1167310

Samuel Otero, 2017-08-19T07:06:47Z

Elster Type A1800 energy meter with DLMS compatibility
http://community.wolfram.com/groups/-/m/t/1167376
I have an Elster A1800-type energy meter, model A1830RAL, style ZE3KL4R00LH-PA. Is this model compatible with DLMS/COSEM, or will I have to use a communication module for the DLMS protocol?

Jack Lohana, 2017-08-19T05:05:56Z

[GIF] Pack It In (Stereographic projection of circles)
http://community.wolfram.com/groups/-/m/t/1166591
![Stereographic projection of circles centered at cube vertices][1]
**Pack It In**
Consider the eight vertices of the cube as points on the unit sphere, then make (spherical) circles centered on those points with radii chosen so that the circles are just tangent. Then this gives a packing of the sphere by 8 circles of (spherical) radius $\frac{1}{2} \arccos \left(\frac{1}{3}\right) \approx 0.61548$. (Of course, this is not the optimal packing by 8 circles; that is given by circles centered on the vertices of the square antiprism.)
Now, rotate the sphere and the eight circles around the $x$-axis and stereographically project down to the plane. After adding some color, the result is this animation.
Here's the code:
Stereo[{x_, y_, z_}] := 1/(1 - z) {x, y};
DynamicModule[{p, b,
verts = Normalize /@ PolyhedronData["Cube", "VertexCoordinates"],
t = 1/2 ArcCos[1/3],
cols = RGBColor /@ {"#EF6C35", "#2BB3C0", "#161C2E"}},
Manipulate[
p = RotationMatrix[θ, {1, 0, 0}].# & /@ verts;
b = Orthogonalize[NullSpace[{#}]] & /@ p;
Graphics[{EdgeForm[None],
Table[{cols[[Floor[(i - 1)/4] + 1]],
Polygon[Table[Stereo[Cos[t] p[[i]] + Sin[t] (Cos[s] b[[i, 1]] + Sin[s] b[[i, 2]])], {s, 0, 2 π, 2 π/200}]]},
{i, 1, Length[verts]}]}, PlotRange -> Sqrt[6],
Background -> cols[[-1]], ImageSize -> 540],
{θ, 0, π/2}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=packing5.gif&userId=610054

Clayton Shonkwiler, 2017-08-17T21:46:10Z

Design a three-dimensional ConvolutionLayer?
http://community.wolfram.com/groups/-/m/t/1096361
Is it possible to design a three-dimensional ConvolutionLayer in version 11.1.1?

Luis Mendes, 2017-05-16T21:46:07Z

Classification methods formula
http://community.wolfram.com/groups/-/m/t/1166949
Dear Sir or Madam,
I have been using the support vector machine method to classify data with the "Classify" function, and I extracted the class probabilities. How can I access the methodology and formulas of the support vector machine model, and the probability formulas used in classification, so that I can cite them as a reference?
I really appreciate it in advance.
Best Regards,
Vahid

vahid karimi, 2017-08-18T12:28:02Z

Avoid wrong results while using SystemModeler Complex Product Block?
http://community.wolfram.com/groups/-/m/t/1166757
I've come across an issue where the Complex Product block in SystemModeler just produces outright incorrect results.
I ran a test simulation where I multiplied 1+3i by 100+200i using the block and got the result -30,000+40,000i instead of the expected -500+500i. How is it even possible to get a result this far off? Attached are images showing the setup.
Any help would be appreciated.

Daniel Frew, 2017-08-18T00:55:20Z

[✓] Calculate Bessel Function zeros? (Can PayPal for solution)
http://community.wolfram.com/groups/-/m/t/1165759
I am working on a drum synthesizer based off the Bessel Function zeros. (The modal frequencies of a circular drum membrane are predicted by the Bessel Function zeros.)
To build it, I have been manually calculating Bessel zeros using Casio's online calculator:
http://keisan.casio.com/exec/system/1180573472
This is working. However, it is very slow work, as I am calculating each zero one by one, manually.
Bessel equations in Wolfram are incredibly easy by contrast. How they work is summarized here:
http://mathworld.wolfram.com/BesselFunctionZeros.html
> the Bessel function ![enter image description here][1] for nonnegative integer values of n and k can be found in the Wolfram Language using the command BesselJZero[n, k].
In an ideal world, I'd like Bessel zeros to 6 significant digits for n = 0...99 and k = 1...100. This would produce a table or list of 10,000 Bessel zeros.
If it is easy, can anyone here maybe do me a huge favor and punch these into your Wolfram Language system to produce a table or list you can share? I would be happy to PayPal you $20 for your effort if so. If it's more work than that, let me know what it would cost.
Otherwise, how would I set up my Windows system to do this? Can I work with Wolfram Language from Windows? I don't have a Raspberry Pi.
I've calculated 1,200 of these things manually which, as you can imagine, has been very tedious. While I'm getting the results I want, I will likely need at least 2,000 more to get truly realistic results. I'd hate to spend days and weeks manually working out what Wolfram can spit out in 5 minutes!
In an ideal world, I'd like a table like this in Excel or any other workable format:
![enter image description here][2]
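For reference, a short Wolfram Language sketch that should produce exactly such a table (the CSV filename is just an illustrative choice):

```mathematica
(* Zeros of BesselJ[n, x] to 6 significant digits, for n = 0..99 and k = 1..100 *)
zeros = Table[N[BesselJZero[n, k], 6], {n, 0, 99}, {k, 1, 100}];

(* Export one row per n, one column per k; "besselzeros.csv" is an example name *)
Export["besselzeros.csv", zeros]
```

The resulting CSV opens directly in Excel.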
Very hopeful for any help. Thank you very much.
Mike
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Inline16.gif&userId=1165745
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=wolframbessels.PNG&userId=1165745

Michael M, 2017-08-16T04:54:28Z

How to "cheat" on Flash games using Mathematica
http://community.wolfram.com/groups/-/m/t/1166766
A friend of mine showed me this Facebook game, 8 Ball Pool. I don't particularly like playing games, and this pool game has a prediction line for the balls (which I find kind of a cheat, since real-life pool has no guides). So I thought I could concoct some code to extend the prediction line and improve aiming, just as a fun exercise (which in itself was more enjoyable than playing the actual game).
The image below shows the extended lines.
![enter image description here][1]
And the code:
This part is for positioning. I open the Facebook page on one screen while the other one has Mathematica.
![enter image description here][2]
This part is basically cropping the screenshot, making it black-and-white, finding the cue ball and the prediction-line and overlaying it on top of the original image.
![enter image description here][3]
This part dynamically takes screenshots and outputs the overlay image, which is the first image shown in this post.
![enter image description here][4]
## Disclaimer ##
I don't find this sort of thing cheating per se, since you can achieve basically the same result by putting a ruler on your PC or phone.
Still, I would advise against cheating in games, if only because it makes the game less fun (that's why I posted the code as images, and I'm in no way responsible for your use of it). This post is only meant to show another side of Mathematica: with a few minutes of programming, it was possible to "cheat" in a game fairly easily (if buggily).
A better application of this would be to use a neural network to play games, train an AI, etc. (although the crude approach of capturing the screen is a huge bottleneck...).
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture16.png&userId=845022
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2017-08-17_222251.png&userId=845022
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2017-08-17_222300.png&userId=845022
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2017-08-17_222313.png&userId=845022

Thales Fernandes, 2017-08-18T01:39:54Z

[WMP17] Convert Pixel Image into Vector Graphics
http://community.wolfram.com/groups/-/m/t/1160838
As part of my Wolfram Mentorship Program, I chose to work on this interesting and challenging project that combines image processing, artificial intelligence, graph theory and the Wolfram Language (who can think of a better combination?!). Since part of the work is confidential and can't be shared (it uses some internal WL code), this post is not intended to be a full step-by-step tutorial but rather a conceptual overview of what we are working on. I hope you'll enjoy it and find it useful!
## Project Description ##
Write a function to find an approximation to images as vector graphics (effectively: "convert GIF to SVG"). Flags may be a helpful example to consider. There will be a tradeoff between "fitting" the image and having a simple "model" in terms of vector graphics.
Category: Algorithms, Artificial Intelligence
Suggested readings:
- http://reference.wolfram.com/language/guide/ImageProcessing.html
- http://reference.wolfram.com/language/ref/Interpolation.html
-------
## Introduction ##
Two typical representations for pictures are raster images and vector graphics. Raster images are represented by an array of picture elements called pixels, each of which has a numerical value indicating its color. Pixel-based image representation has revealed its drawbacks in many applications, such as image magnification and printing. Vector graphics defines images using graphics objects like curves and surfaces rather than discrete pixels. The vector representation has many attractive features: it is scalable, compact and resolution-independent.
Conversion between pixel images and vector graphics is often desired. The conversion from vectors to images in WL is straightforward using the Rasterize function. The opposite direction, however, is surprisingly hard, and has attracted increasing interest in recent years. The following figure illustrates some possible issues in reconstructing line edges and surface colors from an image.
![Possible issues with ImageGraphics[] function][1]
-------
## Approach ##
The conversion process is generally divided into three stages. First, edge detection is performed to highlight the image outline; the result is a chain of pixels. Not all of those pixels are necessary to reconstruct the original image, so a feature-extraction stage is required to keep only the important ones (edges, corners, etc.). Finally, the key points are connected together to form graphics lines and meshes, as shown below, to reconstruct the image outlines and surface colors. The following figure illustrates these stages:
![Conversion from Raster Image to Vector Graphics][2]
The scope of this post covers converting line-drawing images to vector graphics (i.e. reconstructing the shape outline only, without surface color).
Image to graph
--
It is convenient to use a graph object as an intermediate representation of the image features, where key points represent graph nodes and image lines are graph links. Once the graph is constructed, it is easy to replace graph links with the actual curve shapes. A good starting point is the *MorphologicalGraph* function, starting from a skeleton image (e.g. ![skeleton image][3], the result of the *Binarize* and *Thinning* functions applied to the input image). Then modify the graph links to match the exact shape of the original image.
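As a rough sketch of that starting point, using only documented built-ins (not the internal code this project builds on):

```mathematica
(* From a raster line drawing to an intermediate graph of key points and links *)
img = Rasterize[Style["A", FontSize -> 200]];
skeleton = Thinning@Binarize@ColorNegate@img;  (* 1-pixel-wide white-on-black skeleton *)
g = MorphologicalGraph[skeleton]               (* branch/end points become graph vertices *)
```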
The process is divided into three steps:
1. Convert image into list of vertices and links
2. Find connection between vertices and links
3. Find edges
**Feature Extraction (Vertices & Links)**
In the first step, we use *SkeletonEndPoints* and *SkeletonBranchPoints* as feature points. They will serve as the graph's vertices, which define the shape of our vector graphics. I tested the function on a simple image with 3 endpoints and 1 branch point (I should have included a loop for completeness). Each vertex is given a unique index (marked in yellow for better visualization). One tricky situation is a circle (or any isolated loop) that doesn't contain any branch: in this case *MorphologicalTransform* returns an empty result, so we need something else. I found *ImageCorners* pretty useful, but it returns (x,y)-coordinates instead of (row,column)-pairs, so you need to convert them first before producing an image that can be combined via *ImageAdd* with the vertices.
findVertices[skeleton_, opt : OptionsPattern[Graph]] := Module[
  {vertices, others},
  vertices = ImageAdd[
    MorphologicalTransform[skeleton, "SkeletonEndPoints", Padding -> 0],
    MorphologicalTransform[skeleton, "SkeletonBranchPoints", Padding -> 0]];
  (* "others" stands in for additional internal data that can't be shared *)
  {vertices, others}
  ];
The result of this function is shown below.
![vertices][4]
Subtracting the vertices from the image produces line segments. These are the links that will connect the graph vertices, and they will be fed to the simplification algorithm in the next stage. Note that each link is given an index that is unique among both links and vertices. The number of segments, as well as the length of each segment, is also measured.
findLinks[skeleton_, vertices_, vertexCount_] := Module[
  {dims = ImageDimensions[skeleton], linkComponents, others},
  linkComponents = MorphologicalComponents@ImageSubtract[skeleton, vertices];
  (* shift link indices so they don't collide with vertex indices *)
  linkComponents = Replace[linkComponents, Except[0, n_] :> n + vertexCount, {2}];
  {linkComponents, others}
  ]
![links][5]
**Features to Graph (Links and Nodes)**
The second step is by far the most involved in the whole process: finding which vertex is connected to which link. A connection can be strong (from the four axis directions) or weak (also including diagonals). There are also special cases, like finding short connections that consist of only 2 pixels. A good starting point is *ComponentMeasurements*, measuring the *"Neighbors"* of vertices and links. To find strong connections, set *CornerNeighbors* to *False*; setting it to *True* yields weak connections. A sample output is shown below.
![Strong Connections][6] ![Weak Connections][7]
Note that point 4 and link 6 are strongly connected. Point 2 is clearly a branch point, since it is connected to three links (5, 6, 7). Without going into the internal subfunctions involved in this step, I'll show the general function to give you an idea of how complex it is.
Options[morphologicalGraphics] = Options[Graph];
morphologicalGraphics[skeleton_, opts : OptionsPattern[morphologicalGraphics]] := Module[
  {vertices, others, vertexComponents, linkComponents, strong4, weak8,
   redundantEdges, directEdges, linkedEdges, loopEdges, cleanEdges, extraEdges, allEdges, allLines},
  {vertices, others} = findVertices[skeleton];
  {linkComponents, others} = findLinks[skeleton, vertices, others];
  {strong4, others} = findStrongConnections[vertexComponents, linkComponents, others];
  {weak8, others} = findWeakConnections[vertexComponents, linkComponents, others];
  redundantEdges = findRedundantEdges[strong4, others];
  {directEdges, linkedEdges, loopEdges, cleanEdges} = findEdges[weak8, redundantEdges, others];
  {extraEdges, others} = findExtraEdges[strong4, linkComponents, weak8, others];
  (* convert all edges into lists of points *)
  allLines = Join[
    Replace[List @@@ directEdges, Dispatch@vertexCoordinates, {2}],
    Replace[Join[List @@@ cleanEdges, List @@@ extraEdges], Dispatch@Join[vertexCoordinates, linkPositions], {2}]
    ]
  ];
The final output of the graph is converted to a list of coordinates defining the lines in the original image. These points are not necessarily listed in the correct order. Furthermore, not all of them are needed to reproduce the line segments. This leaves room for the line simplification process (next stage), which results in smoother vector graphics as well as smaller file sizes.
The following code tests the *morphologicalGraphics* function shown above.
Framed@Module[{x},
x = morphologicalGraphics[skeleton];
Image[skeleton, ImageSize -> Small] ->
Graphics[{Line[x], Red, PointSize[0.03], Point /@ x, Blue,
Point[Join[First /@ x, Last /@ x]]},
PlotLabel -> "Blue edges are correct, \nred points are not in the correct order", ImageSize -> 250]
]
![Result of morphologicalGraphics][8]
----------
## Line Simplification ##
The Douglas-Peucker algorithm is used to simplify the curves and lines connecting graph vertices (called polylines). Simplification in this context means reducing the number of points needed to draw a polyline while maintaining its basic shape. The algorithm assumes the points connecting two vertices are in the right order (this is handled by the *pointsToPath* function). The process is divided into two steps:
1. Rearrange list of points
2. Simplify line segments
**Rearrange points as Path**
In the first step, we benefit from the intermediate representation we have. Since we converted the raster image into a graph of nodes and links, we can use graph theory to rearrange the list of points: create a *DelaunayMesh*, then trace the shortest path from start to end using *FindHamiltonianPath*.
There are two special cases: 1) when the starting point meets the end point, we have a loop, and *FindHamiltonianPath* fails to find a path; one possible solution is to use *FindHamiltonianCycle*. 2) If by chance all points are aligned in a straight line, *DelaunayMesh* produces an empty region (since the points are one-dimensional); in this case we simply use *Sort* on the points. The following code illustrates this step.
pointsToPath[points2_List] := Module[
  {a, b, points, region, edges, graph, path},
  points = points2;
  (* Loop detection :TODO: needs enhancement using FindHamiltonianCycle *)
  If[points[[1]] == points[[-1]],
   points = Drop[points, -1];
   {a, b} = TakeDrop[points, {Position[points, Last@Nearest[points, points[[1]], 2]][[1, 1]]}];
   points = Join[b, a];
   ];
  (* Create a Delaunay mesh; lowercase "region" avoids shadowing the built-in Region *)
  region = DelaunayMesh[points];
  If[region === EmptyRegion[2], Return[Sort@points]];
  (* Mesh to graph *)
  edges = Map[Sort, MeshCells[region, 1][[All, 1]]];
  graph = Graph[Range[Max@edges], Apply[UndirectedEdge, edges, {1}],
    EdgeWeight -> Map[EuclideanDistance @@ points[[#]] &, edges]];
  path = FindHamiltonianPath[graph, 1, Length[points]];
  Part[points, path]
  ]
And here is the result.
![rearranged points][9]
**Simplification Step.**
Once the points are in the correct order, we can apply the Douglas-Peucker algorithm. Referring to the previous figure: the algorithm does not move the blue points (vertices); it simplifies the red points (segment points), reconstructing the curves that connect the blue points with a minimum number of red points. The process works as follows:
First, connect the two endpoints (p1, p2) with a line segment and measure the distance between each intermediate point (qi) and this line.
pointLineDistance[q_, {p1_, p2_}] := With[
{eqn = (q - p1).(p2 - p1) / (p2 - p1).(p2 - p1)},
Which[
eqn <= 0 , Norm[q - p1],
eqn >= 1 , Norm[q - p2],
True , Norm[q - (p1 + eqn (p2 - p1))]
]
];
Then, if the distance to the furthest point is greater than the threshold (smoothness factor), pick it as a new endpoint and split the line into two segments. Otherwise, the segmentation is done and all intermediate points can be eliminated.
lineSplit[segment[points_List], threshold_] := Module[
{dists, dmax = 0, pos},
dists = Map[pointLineDistance[#, {points[[1]], points[[-1]]}] &, points[[2 ;; -2]]];
dmax = Max[dists];
If[dmax > threshold,
pos = Position[dists, dmax][[1, 1]] + 1;
{segment[points[[1 ;; pos]]], segment[points[[pos ;;]]]},
segment[points, done]
]
] /; Length[points] > 2
lineSplit[segment[points_List], threshold_] := segment[points, done];
lineSplit[segment[points_List, done], threshold_] := segment[points, done];
Finally, recursively apply the segmentation process using *ReplaceRepeated* until all segments are marked *done*. Note that *First /@ First /@ ...* takes the first point of each *segment[points, done]* wrapper produced by *lineSplit*; the final point is appended separately.
simplifyPolyline[points_List, threshold_] :=
Append[First /@ First /@ Flatten[
ReplaceRepeated[{segment[points]},
s_segment :> lineSplit[s, threshold]]
], Last[points]
];
A simple example illustrating the simplification process is shown below. To better approximate curvy shapes, the line segments can be used as control points for *BSplineCurve* curves.
![line simplification][10]
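A quick way to exercise *simplifyPolyline* on synthetic data (this snippet is an editorial illustration, assuming the definitions above):

```mathematica
(* Sample a sine curve densely, then simplify it with a 0.05 tolerance *)
pts = Table[{t, Sin[t]}, {t, 0., 2 Pi, 0.1}];
simple = simplifyPolyline[pts, 0.05];

(* Original polyline in gray, simplified points and polyline in red *)
Graphics[{LightGray, Line[pts], Red, PointSize[Medium], Point[simple], Line[simple]}]
```

Raising the tolerance drops more of the intermediate points while keeping the overall shape.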
----------
## SketchGraphics Function ##
Finally, we are ready to wrap everything up in a nice, compact function. As described at the very beginning, the function accepts a raster line-drawing image (skeleton) and produces vector graphics by converting the image into a graph and then simplifying the links between nodes.
sketchGraphics[skeleton_Image, polyline_, smoothness_,
opts : OptionsPattern[Graph]] := Module[
{allLines, allPaths},
allLines = morphologicalGraphics[skeleton];
allPaths = Map[ simplifyPolyline[#, smoothness] &,
pointsToPath[#] & /@ allLines];
Graphics[polyline /@ allPaths, Sequence @@ Flatten@{opts}]
];
And here we are. Left: input, Right: output.
![final result][11]
----------
## More Tests ##
![Test 1 PNG][12] == ![Test 1 SVG][13]
![Test 2 PNG][14] == ![Test 2 SVG][15]
![Test 3 PNG][16] == ![Test 3 SVG][17]
![Test 4 PNG][18] == ![Test 4 SVG][19]
![Test 5 PNG][20] == ![Test 5 SVG][21]
![Test 6 PNG][22] == ![Test 6 SVG][23]
----------
## Technical Notes ##
- **Using ImageCorners**:
I mentioned that this function can help solve the loop problem when no branch or end point is found. However, some processing is needed before its output can be added to the vertices image. *ImageCorners* returns graphics coordinates (x,y), so you need to convert them to matrix coordinates (row, column) and then to a binary image. The following code is one way to do that (I guess there should be a better way to get rid of the *For* loop):
vertices = Transpose@ConstantArray[0, ImageDimensions[skeleton]];
height = Last@ImageDimensions[skeleton];
vertexCoordinates = ImageCorners[skeleton, 1, 0.001, 2];
{row, col} = IntegerPart@Apply[{height - (#2 - .5), #1 + .5} &, vertexCoordinates, {1}] // Transpose;
For[i = 1, i <= Length[vertexCoordinates], i++,
vertices[[ row[[i]], col[[i]] ]] = 1;
];
Image[vertices]
![coordinate systems][24]
- **More details on *pointsToPath* function**
In the simplification step, we must make sure that the points in a line segment are in the correct order before applying the Douglas-Peucker algorithm. This involves two operations: constructing a Delaunay mesh and then finding a Hamiltonian path. The following three figures show (from left to right) the list of points, the distorted graphics object, and the sorted points on the mesh.
![Point to Path][25]
- **From Image to Graphics:**
In case you'd like to see how I created the following illustration, here is the code from Electromagnetics Package
![Conversion from Raster Image to Vector Graphics][26]
<< NDSolve`FEM`;
img[size_] := ColorNegate@Binarize@Rasterize[Style["Hi", size]];
{{
ArrayPlot[
pimg = ReplacePart[
MorphologicalComponents[img[20]], {5, 19} -> 1], Mesh -> All,
ImageSize -> Small, PlotLabel -> "Pixel Image",
ColorRules -> {0 -> White, 1 -> Red, 2 -> Green, 3 -> Blue}],
cimg = ImageCorners[img[700], 0.5];
HighlightImage[ColorNegate@Thinning[EdgeDetect@img[700]],
{PointSize[0.03], Green,
Point /@ cimg[[{1, 2, 13, 14, 3, 4, 20, 19, 9, 8, 18, 17, 1}]],
Blue, cimg[[{5, 6, 16, 11, 22, 21, 10, 15, 7, 12, 5}]],
Red, cimg[[23 ;; -1 ;; 3]]},
ImageSize -> Small, PlotLabel -> "Points to Graphics Lines"],
}, {
toMesh[poly_, color_] :=
MeshRegion[
ToElementMesh[poly, MaxCellMeasure -> 570, AccuracyGoal -> .125],
MeshCellStyle -> {{1, All} -> color}, PlotTheme -> "Lines"];
Labeled[Show[{
toMesh[
Polygon@cimg[[{1, 2, 13, 14, 3, 4, 20, 19, 9, 8, 18, 17, 1}]],
Green],
toMesh[Polygon@cimg[[{5, 6, 16, 11, 22, 21, 10, 15, 7, 12, 5}]],
Blue],
toMesh[Disk[Mean /@ Transpose@cimg[[23 ;;]], 45], Red]}
], "Points to Graphics Mesh", Top, FrameMargins -> 5,
LabelStyle ->
Directive[GrayLevel[.5], FontSize -> 12,
FontFamily -> "Helvetica"]],
Labeled[
gimg = Graphics[{Green,
Polygon@cimg[[{1, 2, 13, 14, 3, 4, 20, 19, 9, 8, 18, 17, 1}]],
Blue, Polygon@cimg[[{5, 6, 16, 11, 22, 21, 10, 15, 7, 12, 5}]],
Red, Disk[Mean /@ Transpose@cimg[[23 ;;]], 45]}],
"Vector Graphics", Top, FrameMargins -> 7,
LabelStyle ->
Directive[GrayLevel[.5], FontSize -> 12,
FontFamily -> "Helvetica"]]
}} // Grid
----------
## Acknowledgment ##
To conclude this post, I'd like to thank Wolfram Research for this amazing opportunity, with special thanks to my mentors Markus van Almsick and Todd Rowland. And I guess it would be nice to finish with this piece of code. This time I converted the whole code into a Wolfram package. :)
SetDirectory[NotebookDirectory[]];
Needs["sketchGraphics`"];
skeleton = Thinning@Binarize@EdgeDetect@Rasterize@Style["Thanks Markus !",
FontSize -> 40, FontFamily -> "Bold Italic Art"];
sketchGraphics[skeleton, BSplineCurve, 0.2];
And here is the result !
![Thanks Markus][27]
**References:**
- Mark McClure, "Polyline Simplification", http://demonstrations.wolfram.com/PolylineSimplification/
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.PNG&userId=884569
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.PNG&userId=884569
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=884569
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.PNG&userId=884569
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.PNG&userId=884569
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.PNG&userId=884569
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8.PNG&userId=884569
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9.PNG&userId=884569
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.PNG&userId=884569
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.PNG&userId=884569
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12.PNG&userId=884569
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=13.png&userId=884569
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=14.svg&userId=884569
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15.png&userId=884569
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15.svg&userId=884569
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16.png&userId=884569
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16.svg&userId=884569
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=17.png&userId=884569
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=17.svg&userId=884569
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=18.png&userId=884569
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=18.svg&userId=884569
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19.png&userId=884569
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19.svg&userId=884569
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=98495.PNG&userId=884569
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20.PNG&userId=884569
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.PNG&userId=884569
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20.svg&userId=884569

Sa'di Altamimi, 2017-08-09T15:13:40Z

Topology Optimization in Stress-Strain problem
http://community.wolfram.com/groups/-/m/t/1163322
## Introduction ##
Good day, everyone.
I want to present one of my projects in Wolfram Mathematica. The main idea was to create a fast topology-optimization algorithm for solid objects. The most common methods for this are BESO (bi-directional evolutionary structural optimization) and SIMP (solid isotropic material with penalization). Both methods share the same idea: minimizing a function such as the full strain energy of an object, the heat flow through its external surface, etc. Optimization methods like these are used in aircraft/spacecraft and automobile engineering. So I wanted to create my own topology-optimization algorithm in Wolfram Mathematica. I will show some examples with different boundary conditions, to demonstrate that it works with different shapes and constraints.
## Some examples ##
Examples of such optimizations:
Minimization of the maximum temperature in a radiator: we have a metal plate and must find the shape that minimizes the maximum temperature in the plate. But we have a constraint: a fixed amount of material. We can wisely redistribute the material in space to create an optimal shape. At the beginning of the optimization process we have a homogeneous structure; then we apply the two algorithms.
**Initial** state:
![enter image description here][1]
**BESO** optimization:
![enter image description here][2]
**SIMP** Optimization:
![enter image description here][3]
As we can see, the two algorithms produce different shapes. The main reason is the different way each redistributes material in space. **BESO** can only move mass around, so we get voids, marked as *black* regions in the image. In the **SIMP** method the space can be partially filled with material, so we get *gray* regions. Many applications of both methods can be considered.
## Planning of the work ##
Now to the main idea of this topic: I wanted to implement the SIMP method in Wolfram Mathematica and then build my own method for topology optimization. I divided the plan into steps:
- FEM modelling of 2D objects: plates, shells, beams, etc.
- Reproducing the SIMP algorithm in Mathematica over 2D regions
- Checking the results
- FEM modelling of 3D objects
- Reproducing the SIMP algorithm over 3D regions
- Checking the results
- Thinking about my own idea of topology optimization for minimizing strain energy
- Implementing this idea
- Comparing the new method with SIMP
- Optimizing the new algorithm
## First steps ##
The SIMP method is based on FEM. It introduces a virtual density field, so each finite element gets a new parameter: its density. We then look for the density values that minimize the goal function. Wolfram Mathematica offers a large variety of methods for discretizing regions and analyzing them. Here we take a rectangle and discretize it with triangles:
Reg = DiscretizeRegion[Parallelogram[{0, 0}, {{0, a1}, {a2, 0}}], MeshCellLabel -> {0 -> "Index"},
MaxCellMeasure -> .01];
Coord = MeshCoordinates[Reg];
Polys = MeshCells[Reg, 2];
ActionSquare = Show[Reg, ImageSize -> 600]
![enter image description here][4]
*Polys* contains the information about each finite element (triangle), and *Coord* contains the coordinates of all points. From these data we can construct the stiffness matrix of the object and solve the stress-strain problem. As a result, we get the vectors of displacements, strains and stresses in each finite element. We can visualize them; for example, von Mises stresses in an axisymmetric problem (red - highest stresses, blue and violet - lowest stresses):
![enter image description here][5]
## 2D Solutions of SIMP method ##
Then we apply the SIMP algorithm to this solution to redistribute the material in space: we move material from unstressed areas into areas with high stresses. SIMP is an iterative method, so we get the result after a number of steps. SIMP also has a large number of control parameters: the penalization factor, the continuation coefficient, etc.
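The core of SIMP is its power-law interpolation of element stiffness; the following is the standard textbook formulation, sketched in Wolfram Language (the symbols `E0`, `Emin` and the default exponent are illustrative choices, not the author's actual code):

```mathematica
(* Standard SIMP interpolation: element stiffness scales as density^p.
   Emin keeps the stiffness matrix non-singular for (near-)void elements;
   p > 1 is the penalization factor that pushes densities toward 0 or 1. *)
youngsModulus[ρ_, E0_, Emin_, p_: 3] := Emin + ρ^p (E0 - Emin)
```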
- Big size of cells:
![enter image description here][6]
- Small cells:
![enter image description here][7]
We can analyze the main criterion of the optimization: the ***total strain energy***
![enter image description here][8]
The **x-axis** shows the SIMP iteration number, so we reach the final solution at step 12-13. Different dashings correspond to different optimization parameters; we see that they change only the "path" of the optimization. The energy decreases from step to step, so the main goal was achieved: I implemented the SIMP method in Mathematica. You can find the code on my [GitHub][9].
## Main Part ##
We do the same in 3D. I will skip the explanations in this part because they mirror the 2D case. Here are some examples of the optimization:
- Initial conditions
![enter image description here][10]
- Result of SIMP Optimization
![enter image description here][11]
In the 3D case, however, the computational cost grows very fast, so on my notebook it is very hard to handle the large number of finite elements needed for topology optimization. I began looking for ways to reduce the computations. We have a hypothesis that our function has only one global extremum. If we assume it is ***true***, a much faster solution exists: we construct a goal function of many arguments.
goalFunction = displacement.StifnessMatrix.displacement;
massEquation =
Sum[V[[k]]*density[[k]], {k, 1, Length[Tetras]}] ==
Sum[V[[k]], {k, 1, Length[Tetras]}]*0.75;
densityEq = Thread[0.0 <= # <= 1 &[density]];
The goal function is the strain energy of the solid object, and its arguments are the densities of the finite elements. We also have constraints: we cannot introduce new mass, and the elements cannot be overfilled. ***displacement*** is the vector of displacements of each point of the discretized object, obtained by solving the stress-strain problem. The StiffnessMatrix comes from the properties of the object, and ***density*** is the vector of densities of the finite elements.
sysOfEq = Flatten[{goalFunction, massEquation, Equation, densityEq}];
variables = Flatten[{density, DisplacementI}];
Here we assemble the system of equations for our problem and the vector of variables. I intend to use **FindMinimum** to find the global minimum, but we must first define the initial point for the search.
initDis = Thread[{#, 0} &[DisplacementI]];
initDen = Thread[{#, 0.5} &[density]];
initCond = Flatten[{initDen, initDis}, 1];
Then we run FindMinimum with these initial conditions and equations:
resultFMin =
Quiet[FindMinimum[sysOfEq, initCond,
Method -> "InteriorPoint"]]; // AbsoluteTiming
After this I got remarkable results: the density vectors from the **SIMP** method and from **FindMinimum** coincide almost completely, and the computation is roughly 40 times faster than the SIMP algorithm. All results are also on my [GitHub][12].
## Conclusions ##
- FindMinimum gives the same result as the SIMP method for the stress-strain problem;
- It has a big time advantage over SIMP: SIMP time / FindMinimum time ≈ 40-50;
- I still must prove the hypothesis that the multidimensional function has a single global minimum; any advice on this is welcome.
- We can use this to construct optimized models for 3D printing, which was introduced in WM v11.0.0.
![enter image description here][13]
## Further explorations ##
- In Mathematica we can find several FEM packages: NDSolve`FEM` or the ACEFEM package by Jože Korelc. I want to try to connect them somehow.
- Create a CDF application to make topology optimization easy to apply.
- Further analysis of finding the global minimum of the goal function.
- Improve the algorithms for constructing the StiffnessMatrix and finding the minimum, i.e. build specialized functions for this problem.
## Questions ##
- How can I prove with Mathematica that a multidimensional function has only one minimum?
## Link for video ##
[Video][14]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.PNG&userId=1083954
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=22.PNG&userId=1083954
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=33.PNG&userId=1083954
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=44.PNG&userId=1083954
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=55.PNG&userId=1083954
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=66.PNG&userId=1083954
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=77.PNG&userId=1083954
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=88.PNG&userId=1083954
[9]: https://github.com/AndreyKrotkikh/TopologyOptimization
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=99.PNG&userId=1083954
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=111.PNG&userId=1083954
[12]: https://github.com/AndreyKrotkikh/TopologyOptimization
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=122.jpg&userId=1083954
[14]: https://drive.google.com/open?id=0BwoYVUBhV8KJbGdwNmpKZWk4MWcAndrey Krotkikh2017-08-11T19:08:54Z[WSS17] Non-SyntaxQ correction to SyntaxQ
http://community.wolfram.com/groups/-/m/t/1136114
## Main Idea ##
The main idea of this project: find ways to correct non-SyntaxQ expressions into SyntaxQ expressions.
For example, take the string `"Fold[Plus,a,{b,c,d,e}]"`. Removing one character from it yields `StringLength["Fold[Plus,a,{b,c,d,e}]"]` different strings, which we can generate with
inp = "Fold[Plus,a,{b,c,d,e}]";
StringDrop[inp, {#}] & /@ Range[StringLength[inp]]
So, there will be 22 different strings:
`old[Plus,a,{b,c,d,e}]
Fld[Plus,a,{b,c,d,e}]
...
Fold[Plus,a,{b,c,d,e]
Fold[Plus,a,{b,c,d,e}`
Some of them return `True` from `SyntaxQ`. Let's pick out the invalid ones with `Select`:
dropped = StringDrop[inp, {#}] & /@ Range[StringLength[inp]]
Select[dropped, Not[SyntaxQ[#]] &]
Only 4 of the 22 remain:
`FoldPlus,a,{b,c,d,e}]
Fold[Plus,a,b,c,d,e}]
Fold[Plus,a,{b,c,d,e]
Fold[Plus,a,{b,c,d,e}
`
Let's try to evaluate them with `ToExpression`:
Map[ToExpression, Select[dropped, Not[SyntaxQ[#]] &]]
We get a list of failures:
`{$Failed, $Failed, $Failed, $Failed}`
This confirms that the code is not valid, but it is also instructive to run `ToExpression` on the unfiltered list of strings:
Map[ToExpression, dropped]
`{old[Plus,a,{b,c,d,e}]
Fld[Plus,a,{b,c,d,e}]
Fod[Plus,a,{b,c,d,e}]
Fol[Plus,a,{b,c,d,e}]
$Failed
lus[lus[lus[lus[a,b],c],d],e]
Pus[Pus[Pus[Pus[a,b],c],d],e]
Pls[Pls[Pls[Pls[a,b],c],d],e]
Plu[Plu[Plu[Plu[a,b],c],d],e]
Plusa[Plusa[Plusa[b,c],d],e]
b+c+d+e+Null
a b+a c+a d+a e
$Failed
a+c+d+e+Null
a+bc+d+e
a+b+d+e+Null
a+b+cd+e
a+b+c+e+Null
a+b+c+de
a+b+c+d+Null
$Failed
$Failed`
As we can see, some of them evaluate without errors. This is expected: `old`, `Fld`, `Fod`, `Fol`, etc. count as undefined functions, and `Null` counts as a function argument. That is the main reason for filtering the *dropped* list of strings. The next step is finding a way to correct our strings: we must construct an algorithm that turns all `$Failed` strings into SyntaxQ forms.
![Graph of correction][1]
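As a compact restatement of the filtering step above (a sketch of mine, not from the original post), one can count how many one-character deletions remain syntactically valid:

```mathematica
inp = "Fold[Plus,a,{b,c,d,e}]";
dropped = StringDrop[inp, {#}] & /@ Range[StringLength[inp]];
Count[dropped, _?SyntaxQ]        (* 18 candidates still parse *)
Count[dropped, _?(Not@*SyntaxQ)] (* the 4 broken ones to be corrected *)
```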
## "Intellectual" inserting ##
The general approach to correction is to determine the missing character and insert it at every position in the string, then check all the new candidates for syntactic correctness.
Inserting a missing character, the general way:
correctInserting[str_String, strinst_String] :=
Module[{s = str, s1, s2, result = {}, i, symb = strinst},
s1 = StringSplit[str, x : PunctuationCharacter :> x];
For[
i = 1, i <= Length[s1], i++,
If[
Not[MemberQ[$SystemSymbols, s1[[i]]]],
s2 =
{# + Total[StringLength[#] & /@ (s1[[1 ;; i - 1]])],
StringInsert[s1[[i]], symb, {#}]} &
/@ Range[StringLength[s1[[i]]] + 1];
AppendTo[
result, {s2[[#, 1]], {s1[[1 ;; i - 1]], s2[[#, 2]],
s1[[i + 1 ;;]]}} & /@ Range[StringLength[s1[[i]]] + 1]]
]
];
DeleteDuplicates[Transpose[{Transpose[Flatten[result, 1]][[1]],
StringJoin /@ Transpose[Flatten[result, 1]][[2]]}]]
This function will not insert a character inside function names that appear in:
Names["System`*"]
## First approach ##
I tried two approaches: using string patterns, and using Mathematica's internal error-checking machinery. During my work at WSS 2017 I chose the second one; let me explain why. The main idea of the pattern system was to decompose the string into substrings. For example:
`Fold[f,x,{1,2,3}] -> {Fold,f,x,{1,2,3}}`
Then we check each element with SyntaxQ:
SyntaxQ/@{Fold,f,x,{1,2,3}}
Then we convert the list of Boolean values into a list of 1s and 0s with `Boole`:
Boole[SyntaxQ/@{Fold,f,x,{1,2,3}}]
I supposed that the decomposition whose binary representation gives the largest number is the right correction of our string. But this approach cannot cover every possible variant, which suggests a function like `Boole` but in a ternary or quaternary system. I implemented the ternary equivalent of `Boole`, and it can handle more kinds of inputs.
But this system has many disadvantages: it is slow, inefficient, etc.
Here is the last version of the string-pattern system:
brPrSol3[str_String] :=
Module[{x = str, result},
fbr =
FixedPointList[
StringReplace[
#,
x__ ~~ "[" ~~ y__ ~~ "]" ~~ z___ /;
StringCount[y, "["] == StringCount[y, "]"] &&
StringCount[x, {"[", "]"}] == 0 &&
ContainsAny[Names["System`*"], {x}] &&
StringCount[z, {"[", "]"}] == 0 &&
commaCounter[x] ==
StringCount[emptyBracketCollapse@bracesCollapse@y, ","]
:> y
] &,
str
];
result = Flatten[fbr]
]
It worked very slowly, and I could always find an input for which it produced no correct output:
Subsets[{a, b, c]
You can look at this pattern system on my [GitHub][2] or via this [direct link][3].
## Second approach ##
The second approach uses a `Block` construction:
messageAnalysis[str_] :=
Module[{mess, result},
Block[{$MessageList = {}},
Quiet[
ToExpression[str];
mess = $MessageList;
];
result = (mess === {})
];
result
];
So, if any message was generated, it returns `False`, and this function `messageAnalysis` allows us to analyze all our constructions more precisely.
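For instance (my sketch), a string that parses and evaluates silently gives `True`, while one that triggers a syntax message gives `False`:

```mathematica
messageAnalysis["Fold[Plus, a, {b, c, d, e}]"]  (* True: no messages *)
messageAnalysis["Fold[Plus, a, b, c, d, e}]"]   (* False: ToExpression emits a syntax message *)
```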
You can read more [here][4]. `messageAnalysis` lets us check each result of `correctInserting`. Then we introduce some filters:
testAnalyser[str_String, initStr_, nulltest_] :=
messageAnalysis[str] &&
EditDistance[
StringDelete[ToString[ToExpression[str], InputForm],
WhitespaceCharacter], initStr] >=
1 && (StringCount[
StringDelete[ToString[ToExpression[str], InputForm],
WhitespaceCharacter], "Null"] == 0 || Not[nulltest])
With `nulltest` we can sort the results into two piles: `does not contain Null` and `may contain Null`. We also remove all results whose `EditDistance` from the initial string is less than 1. The full code is [here][5].
## Panel Version ##
We also tried to build something like an application for live-correcting an input cell:
Panel[
Style[
Column[
{
InputField[Dynamic[inpstr], String, ContinuousAction -> True],
Dynamic[Quiet[syntaxCorrector[inpstr]]]
}
], Background -> White
]
]
![1st example][6]
![2nd example][7]
![enter image description here][8]
![enter image description here][9]
## "Prototype" of Neural Net ##
We built a prototype of a neural net that may be able to learn the templates of the Wolfram Language. I did not have enough time to immerse myself in neural networks and finish this part of the WSS project.
validCharacters = StringJoin@CharacterRange[30, 125];
net00 = NetGraph[
<|
"revIn" -> SequenceReverseLayer[],
"encGRU<" -> GatedRecurrentLayer[96],
"revOut" -> SequenceReverseLayer[],
"encGRU>" -> GatedRecurrentLayer[96],
"cat" -> CatenateLayer[2],
"decGRU" -> GatedRecurrentLayer[96],
"classifier" -> NetMapOperator[
NetChain[{LinearLayer[StringLength[validCharacters]],
SoftmaxLayer[]}]
]
|>,
{NetPort["Input"] -> "revIn" -> "encGRU<" -> "revOut",
NetPort["Input"] -> "encGRU>", {"revOut", "encGRU>"} ->
"cat" -> "decGRU" -> "classifier" },
"Input" -> NetEncoder[{"Characters", validCharacters, "UnitVector"}],
"Output" -> NetDecoder[{"Characters", validCharacters}]
]
![Neural Net][10]
## Conclusions ##
- We have an algorithm that can handle every string that can be corrected by adding a single character.
- We have a prototype of a neural net and a dataset for it.
## Future directions ##
- Complete the neural net.
- Introduce new features into the algorithm.
- Improve the algorithm to allow inserting 2 or more additional characters into the string.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ssss.PNG&userId=1083954
[2]: https://github.com/AndreyKrotkikh
[3]: https://github.com/AndreyKrotkikh/WSS2017/blob/master/SyntaxCorrector.nb
[4]: https://github.com/AndreyKrotkikh/WSS2017/blob/master/FinalVersion.nb
[5]: https://github.com/AndreyKrotkikh/WSS2017/blob/master/FinalVersion.nb
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=s1.PNG&userId=1083954
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=s2.PNG&userId=1083954
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=s3.PNG&userId=1083954
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=s4.PNG&userId=1083954
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=s5.PNG&userId=1083954Andrey Krotkikh2017-07-05T19:25:02Z[✓] Plot in 3D a X-zylo ?
http://community.wolfram.com/groups/-/m/t/1165290
The X-zylo is a simple cylindrical object with a sine-shaped (Sin[t]) undulation on top.
You can find link below. It's a flying gyroscope.
https://www.amazon.com/X-Zylo-X-zylo-Flying-Gyroscope/dp/B0069G6CKG
How can I make this? (A cylindrical parametric function? Just bending Sin[t], {t, 0, 10 Pi} around 360 degrees?)
Plot3D
ParametricPlot3D
RevolutionPlot3D
in any form to make it..
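One possible sketch (my suggestion, not from the post): parametrize the cylinder by angle and height, and let the surface carry a sine undulation that is strongest at the top rim:

```mathematica
(* unit cylinder whose height is modulated by a Sin[5 t] ripple *)
ParametricPlot3D[{Cos[t], Sin[t], h (1 + 0.1 Sin[5 t])},
 {t, 0, 2 Pi}, {h, 0, 1}, Mesh -> None]
```

The frequency 5 and amplitude 0.1 are arbitrary; adjust them to match the actual toy.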
Thanks in advance!Donghwan Roh2017-08-15T11:44:15ZFind all pairs of common tangents to a curve?
http://community.wolfram.com/groups/-/m/t/1166192
My question is kind of an extension of a question asked previously (https://mathematica.stackexchange.com/questions/25892/common-tangent-to-a-curve). The difference is that (instead of just one pair) I need to find all pairs of common tangents to a plot.
The list is
normalizedfplot={{0.01, 0.}, {0.02, -0.0000539749}, {0.03, -0.000101768}, {0.04, \
-0.000147006}, {0.05, -0.000191386}, {0.06, -0.000235889}, {0.07, \
-0.000281142}, {0.08, -0.000327577}, {0.09, -0.000375499}, {0.1, \
-0.000425131}, {0.11, -0.000476637}, {0.12, -0.000530134}, {0.13, \
-0.000585708}, {0.14, -0.000643418}, {0.15, -0.000703296}, {0.16, \
-0.00076536}, {0.17, -0.000829608}, {0.18, -0.000896023}, {0.19, \
-0.000964576}, {0.2, -0.00103522}, {0.21, -0.0011079}, {0.22, \
-0.00118255}, {0.23, -0.00125909}, {0.24, -0.00133741}, {0.25, \
-0.00141742}, {0.26, -0.00149898}, {0.27, -0.00158197}, {0.28, \
-0.00166622}, {0.29, -0.00175156}, {0.3, -0.0018378}, {0.31, \
-0.00192473}, {0.32, -0.00201211}, {0.33, -0.00209968}, {0.34, \
-0.00218713}, {0.35, -0.00227415}, {0.36, -0.00236037}, {0.37, \
-0.00244538}, {0.38, -0.00252871}, {0.39, -0.00260987}, {0.4, \
-0.00268827}, {0.41, -0.00276326}, {0.42, -0.00283413}, {0.43, \
-0.00290008}, {0.44, -0.00296023}, {0.45, -0.00301362}, {0.46, \
-0.00305925}, {0.47, -0.00309612}, {0.48, -0.00312327}, {0.49, \
-0.00313991}, {0.5, -0.00314552}, {0.51, -0.00313991}, {0.52, \
-0.00312327}, {0.53, -0.00309612}, {0.54, -0.00305925}, {0.55, \
-0.00301362}, {0.56, -0.00296023}, {0.57, -0.00290008}, {0.58, \
-0.00283413}, {0.59, -0.00276326}, {0.6, -0.00268827}, {0.61, \
-0.00260987}, {0.62, -0.00252871}, {0.63, -0.00244538}, {0.64, \
-0.00236037}, {0.65, -0.00227415}, {0.66, -0.00218713}, {0.67, \
-0.00209968}, {0.68, -0.00201211}, {0.69, -0.00192473}, {0.7, \
-0.0018378}, {0.71, -0.00175156}, {0.72, -0.00166622}, {0.73, \
-0.00158197}, {0.74, -0.00149898}, {0.75, -0.00141742}, {0.76, \
-0.00133741}, {0.77, -0.00125909}, {0.78, -0.00118255}, {0.79, \
-0.0011079}, {0.8, -0.00103522}, {0.81, -0.000964576}, {0.82, \
-0.000896023}, {0.83, -0.000829608}, {0.84, -0.00076536}, {0.85, \
-0.000703296}, {0.86, -0.000643418}, {0.87, -0.000585708}, {0.88, \
-0.000530134}, {0.89, -0.000476637}, {0.9, -0.000425131}, {0.91, \
-0.000375499}, {0.92, -0.000327577}, {0.93, -0.000281142}, {0.94, \
-0.000235889}, {0.95, -0.000191386}, {0.96, -0.000147006}, {0.97, \
-0.000101768}, {0.98, -0.0000539749}, {0.99, 0.}}
The plot looks like this
![enter image description here][1]
Now when I use the code used in the solution to the previous problem, it gives me only one solution.
fo = Interpolation[normalizedfplot, Method -> "Spline"];
boundary = FindRoot[{y1==fo[x1],y2==fo[x2], fo'[x1]==fo'[x2], fo'[x1]==(y2-y1)/(x2-x1)},
{x1,0.01},{x2,0.5},{y1,-0.025},{y2,-0.0275}]
{x1 -> 0.00124073, x2 -> 0.419065, y1 -> 0.000055584,
y2 -> -0.0028277}
This gives me only one pair of solutions (if I'm lucky). I need code that will work for any plot and return all pairs of common tangents (there are two pairs in the above plot: the pair given above, and another pair at approximately 0.6-0.99). Using NSolve did not help either.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4042improvedsol3.png&userId=1146915Deb Chicago2017-08-16T23:49:26ZAvoid problems with Fluid library in System Modeler 5?
http://community.wolfram.com/groups/-/m/t/1159830
I'm happy to see version 5 now supports the Media library. Is there some reason the Fluid library is not supported?
I of course tried 3.2.2 Fluid library from modelica.org with mixed results.
It would be helpful to know what it is about the Fluid library that System Modeler 5 has trouble with, and when you plan to support it.
Thanks.Eric Meyers2017-08-07T15:16:26ZManipulate table-functions?
http://community.wolfram.com/groups/-/m/t/1166499
Hi,
I am searching for a solution to the following situation.
To be more precise: to what extent can a Table expression be manipulated?
L = 2;
Q = 2;
K = 3;
M = 3;
A = 2;
rpixel = 10.0;
dphi = 2*Pi/L ;
deltaphiGrad = 10.0;
deltaphi =
deltar = 1.0;
phi[l_, q_, m_] := l*dphi + q*deltaphi/m - dphi/2(* --- Position in Phi --- *)
xcontrol = MatrixForm[Table[phi[l, q, m], {m, 1, M}, {l, 1, L}, {q, 0, m*A*(Q - 1)}]]
In the last line I have a Table expression which generates 2 blocks of numbers, where the 1st row contains 3 values, the 2nd row 5 values, and the 3rd row 7 values. With the variable A it is possible to extend the number of values in each row.
Question: my goal is that the variable A should only be applied once m has been incremented to 2.
xcontrol = MatrixForm[Table[phi[l, q, m], {m, 1, M}, {l, 1, L}, {q, 0, m*A*(Q - 1)}]]
That is, the 1st row should be computed independently of A, and only the 2nd and 3rd rows should be extended.
So, is there a way to manipulate the Table expression to achieve that? Maybe some kind of programmatic table construction is needed?
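A possible sketch (my suggestion, not verified against the intended output): make the upper limit of `q` itself depend on `m`, e.g. with `If`:

```mathematica
(* apply the factor A only from m = 2 onward; for m = 1 the limit ignores A *)
xcontrol = MatrixForm[Table[phi[l, q, m], {m, 1, M}, {l, 1, L},
   {q, 0, If[m == 1, m*(Q - 1), m*A*(Q - 1)]}]]
```

Since iterator bounds in `Table` may reference earlier iterators, any expression in `m` works there.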
Many thanks in advance for some hints!Andreas Maier2017-08-17T11:12:01Z[✓] Solve and plot an integration with 2 variables?
http://community.wolfram.com/groups/-/m/t/1166540
Dear All,
I've tried to plot a function containing an integral with two variables, and encountered a convergence problem.
The code has been running for more than 24 hours and is still going....
Could you kindly help me figure out what the problem could be? Is there a better way to write the code so that it converges much quicker?
Thank you very much in advance!
Yujin
The code is attached!Yujin Tong2017-08-17T12:28:39Z[✓] Access an image in a System.Windows.Media.Imaging.CachedBitmap ?
http://community.wolfram.com/groups/-/m/t/1148854
I am trying to use a camera with a .NET interface. The code below seems like it may be capturing an image into a System.Windows.Media.Imaging.CachedBitmap object.
Any ideas how to access this from within Mathematica? Eventually, I will need a solution that runs quickly. Saving to disk using .NET and then reading the file into Mathematica would functionally give me what I need, but will be too slow.
Needs["NETLink`"];
InstallNET[];
LoadNETAssembly["C:\\XIMEA\\API\\x64"];
LoadNETType["xiApi.NET.xiCam"];
myCam = NETNew["xiApi.NET.xiCam"];
myCam@OpenDevice[0];
(*Start acquisition *)
myCam@StartAcquisition[];
timeouts = 1000;
myCam@GetImage[myImage, timeouts]
myCam@StopAcquisition[];
myImage
Output of last statement is:
« NETObject[System.Windows.Media.Imaging.CachedBitmap]»Jeff Burns2017-07-18T22:49:33Z[✓] ContourPlot 2 variables functions in W|A?
http://community.wolfram.com/groups/-/m/t/1166234
Hello, I was testing out Wolfram|Alpha for the first time with two-variable functions, and it seems pretty inaccurate.
As you can see in the image below, where I zoomed into the contour plot, a point that is clearly in the orange part has a lower value than a point clearly in the red part. It should be the other way around, since whiter means higher, right?
[link to the wolfram page to see it in live][1]
![enter image description here][2]
[1]: https://www.wolframalpha.com/input/?i=Plot3D%5B%28472,5%2bx%29*%28101,8%2by%29,%20%7Bx,%200,%20500%7D,%20%7By,%200,%2020%7D%5D
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=wolframe.png&userId=1166218ALEXANDRE SCHYNS2017-08-17T04:10:50Z[✓] Solve second order differential equation with W|A?
http://community.wolfram.com/groups/-/m/t/1166330
Hello all,
First off, thank you to anyone who can respond to my query.
Can www.wolframalpha.com solve the following?
y''+0.11y'+2sin(y) = 0, y(0) = 0.15, y'(0)=0
I want to use a numeric method to find an estimate when x = 1, or find the true value, but when I type in:
"Runge-Kutta method, y''+0.11y'+2sin(y)=0, y(0) = 0.15, y'(0)=0, from 0 to 1, h = 0.2"
It seems like it doesn't know how to interpret it.
All the numeric examples I can see appear to be only first order, so I wondered if that was the problem.
If it can't, how might I get around it?
Thanks,
SteveSteven Wood2017-08-17T03:22:29ZHow does Reduce solve Bivariate Diophantine quadratic equations?
http://community.wolfram.com/groups/-/m/t/1165255
Hi,
I'm a number theory hobbyist and currently playing with prime numbers. While working with Mersenne's Primes, I came across Diophantine bivariate quadratic equations of the form:
x + y + 2 * n * x * y = (2^(n-1) - 1) / n
where n is an odd prime.
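For a concrete instance (my illustration, with n = 5 chosen arbitrarily), the equation can be posed to Reduce like this:

```mathematica
(* (2^(5-1) - 1)/5 = 3, so this asks for nonnegative integer solutions
   of x + y + 10 x y == 3 *)
With[{n = 5},
 Reduce[x + y + 2 n x y == (2^(n - 1) - 1)/n && x >= 0 && y >= 0,
  {x, y}, Integers]]
```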
I found that the Reduce command works a bit faster than my own C++ algorithm that runs on my laptop.
I went through the relevant reference page on your website and found that this equation falls into the category of "Hyperbolic-Type Equations with Square Determinants". In this case, the equation is reduced to "D*f(x,y) - g". I did the computation and found that this value comes out to be "2n(2nx + 1)(2ny + 1)". The resulting linear functions only produce the trivial solutions: (0, c) and (c, 0), where c = (2^(n-1) - 1)/n.
So, my questions are as follows:
1. Does your algorithm use simple exhaustive brute force to find the solutions, or is some more advanced algorithm used?
I'm asking because my algorithm is plain brute force and yours is a bit faster than mine. So, is it just your high-end, server-grade hardware infrastructure, or the algorithm as well?
2. Is there any way to tell whether a particular equation of this form has a solution (other than the trivial ones) without going through the brute force? I've looked around a bit, and at this point it seems the answer is negative.
Thanks for your time.
manojman t2017-08-15T10:18:18ZGoogle Translate Structure (TextStructure like function)
http://community.wolfram.com/groups/-/m/t/1096168
`TextStructure` is a very nice new function in Mathematica. It can create amazing things like:
TextStructure@"If love be blind, love cannot hit the mark."
![TextStructure][1]
Can we do the same for translations?
This piece of code downloads JSON-like data from Google Translate without the need for API calls (which I never bothered to learn).
GoogleTranslate[str_String] := GoogleTranslate@str = Import[
StringTemplate["https://translate.googleapis.com/translate_a/single?client=gtx&sl=`1`&tl=`2`&dt=t&q=`3`"][
"pt", "en", URLEncode@str], "JSON"][[1, 1, 1]]
And this other piece of code formats the translation.
MakeBoxes[TranslateElement[main_, down_], _] := GridBox[
{{MakeBoxes@main}, {StyleBox[MakeBoxes@down, "TextElementLabel"]}},
BaseStyle -> "TextElementGrid"]
GoogleTranslateStructure[str_String] := Block[{sentence, words, phrase},
sentence = StringSplit[str, p:"."|"," :> p] //. {a___String, s_String, p:"."|",", b___String} :> {a, s<>p, b};
phrase = Table[
words = StringSplit[sentence[[i]], WhitespaceCharacter];
TranslateElement[Row@Riffle[TranslateElement @@@ Transpose@{words, GoogleTranslate /@ words}, " "], GoogleTranslate@sentence[[i]]]
, {i, Length@sentence}];
If[Length@sentence == 1,
phrase[[1]],
TranslateElement[Row@Riffle[phrase, " "], GoogleTranslate@str]
]
]
A usage example would be:
GoogleTranslateStructure@"Se amor é cego, nunca acerta o alvo."
![GoogleTranslateStructure][2]
Changing the language from English to Japanese (which I don't speak, btw):
![JP][3]
Or French:
![FR][4]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2017-05-17_004746.png&userId=845022
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2017-05-17_005236.png&userId=845022
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2017-05-17_005549.png&userId=845022
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2017-05-17_005723.png&userId=845022Thales Fernandes2017-05-17T03:57:58ZRead sensor data of I2C BMP180 directly from MMA?
http://community.wolfram.com/groups/-/m/t/1165435
Excuse me please, are there any example codes available for I2C devices like the BMP180 that read the sensor data directly from Mathematica?
Thank you,
LiborLibor Caha2017-08-15T19:13:05Z[✓] Verify a symbolic equality?
http://community.wolfram.com/groups/-/m/t/1165803
Hello everyone. Is it possible to verify the following identity in Mathematica?
Sum[Subscript[b, m] z^(-m), {m, 0, M}]/
Sum[Subscript[a, n] z^(-n), {n, 0, N}] == (Subscript[b, 0]/
Subscript[a,
0]) (Sum[(Subscript[b, m]/Subscript[b, 0]) z^(-m), {m, 0, M}]/
Sum[(Subscript[a, n]/Subscript[a, 0]) z^(-n), {n, 0, N}])
With `==` the output is neither True nor False; it is the same as the input.
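One approach to try (a sketch of mine, not a guaranteed answer; note that the built-in symbol `N` should not be used as a summation limit, so I rename it `nn` here): subtract the two sides and ask `Simplify` whether the difference vanishes, assuming the leading coefficients are nonzero:

```mathematica
lhs = Sum[Subscript[b, m] z^-m, {m, 0, M}]/
   Sum[Subscript[a, n] z^-n, {n, 0, nn}];
rhs = (Subscript[b, 0]/Subscript[a, 0])*
   Sum[(Subscript[b, m]/Subscript[b, 0]) z^-m, {m, 0, M}]/
   Sum[(Subscript[a, n]/Subscript[a, 0]) z^-n, {n, 0, nn}];
Simplify[lhs - rhs,
 Assumptions -> Subscript[a, 0] != 0 && Subscript[b, 0] != 0]
```

Whether this returns 0 depends on how far Mathematica can pull the constant factors out of the symbolic sums.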
Thank you for your help.Gennaro Arguzzi2017-08-16T10:25:15Z[✓] ContourPlot of (Cos(2x))^2 ==0 ?
http://community.wolfram.com/groups/-/m/t/1164106
Hello,
I am trying to plot the contour curve of (cos(2x))^2 == 0 on the xy-plane for {x, -pi/2, pi/2} and {y, 0, 2pi}, but the result is an empty plot in Mathematica. However, I obtained the correct plot for cos(2x)==0.
I used the following command lines:
ContourPlot[Cos[2x]^2 ==0, {x, -Pi/2, Pi/2}, {y, 0, 2Pi}]
How can I get the correct graph for this?
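A possible explanation and workaround (my sketch, not from the post): `Cos[2x]^2` is nonnegative, so it never changes sign, and `ContourPlot` detects zero contours through sign changes; plotting the unsquared equation has the same zero set:

```mathematica
(* the zero sets of Cos[2x]^2 and Cos[2x] coincide *)
ContourPlot[Cos[2 x] == 0, {x, -Pi/2, Pi/2}, {y, 0, 2 Pi}]
```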
Thank you,
BhagyaBhagya Athu2017-08-13T03:22:00Z[✓] Simple rewrite rule using String function works but shows error?
http://community.wolfram.com/groups/-/m/t/1164621
I used Matlab/Mathematica during my PhD in 1993 and am now trying to learn the fascinating Wolfram Language, so this might be a newbie question; bear with me.
The below simple expression
{"a.del", "b.del", "c.del"} /. s__ -> StringDelete[s, ".del"]
generates the expected content
{"a", "b", "c"}
but it also generates the following error
StringDelete::strse: String or list of strings expected at position 1 in StringDelete[s,.del].
What am I doing wrong?
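A likely cause (my sketch, not a confirmed answer): with `->` the right-hand side is evaluated immediately, so `StringDelete` first sees the bare pattern symbol `s` and complains; using `:>` (RuleDelayed) with a string-restricted pattern avoids the message:

```mathematica
(* RuleDelayed defers evaluation until s is bound to an actual string *)
{"a.del", "b.del", "c.del"} /. s_String :> StringDelete[s, ".del"]
(* {"a", "b", "c"}, with no error message *)
```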
Thanks for your help.
Regards
MarkusMarkus Sonderegger2017-08-13T21:58:22ZDoes "FindRoot" work with numerical functions?
http://community.wolfram.com/groups/-/m/t/1158470
I have a problem with FindRoot. I want to find the root of a function that consists of two parts: an analytical function and a numerical one. For example, imagine f1(x) = x is my analytical function, but the other one does not depend symbolically on a variable: it just gives me a number when I call it, say f2(1.2) = 5.6. I cannot evaluate f2(x) symbolically; that is not how my second function is built. It has no known closed form because it is quite complicated, so it is a numerical approach.
The thing is, I want to solve two equations for liquid-vapor equilibrium. The two equations depend on two functions, one analytic and the other numeric, as explained above. I don't show how they depend on them because the code is large and complicated, but the attached document shows clearly what I want to solve.
p[ T, \[Rho]V] == p[ T, \[Rho]L],
\[Beta]\[Mu][ T, \[Rho]V] == \[Beta]\[Mu][ T, \[Rho]L]
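A common pattern for this situation (a sketch with hypothetical stand-in functions, not the poster's actual code): define the numerical function with a `_?NumericQ` pattern, so that `FindRoot` only ever calls it with explicit numbers and never tries to evaluate it symbolically:

```mathematica
(* f2 is a placeholder for any black-box numerical routine *)
f2[x_?NumericQ] := NIntegrate[Sin[t]/t, {t, 0, x}]
f1[x_] := x/2
FindRoot[f1[x] == f2[x], {x, 1.0}]
```

The same idea extends to systems: give every numerical function `?NumericQ` arguments and pass the equations to `FindRoot` with one starting value per unknown.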
Thank you and I hope you understood my problem.Charls Mejía2017-08-04T03:06:08ZNA handling in TimeSeries["Date"] ... “0. + NA” ?
http://community.wolfram.com/groups/-/m/t/1165386
I have an issue with the handling of TimeSeries, which is best described by the following MWE:
data = Import["https://pastebin.com/raw/Xj6x3PSe"];
dates = Table[DateList[{StringTake[data[[k, 1]], 10], {"Day", "Month","Year"}}], {k, 2, Length[data]}];
tsdata = TimeSeries[data[[2 ;;, 2 ;;]], {dates}];
tsdata["August 26, 2008"]
The resulting output is:
> {6.67345,6.55845,7.46281,7.32023,6.89391,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,1.82743,1.77073,1.65871,1.56591,1.52516,1.46369,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,1.47368,1.57085,1.65731,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,1.18979,1.02756,1.08294,1.12263,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA,0. +NA}
However, it should be:
tsdata["Values"][[7, ;;]]
> {6.67345,6.55845,7.46281,7.32023,6.89391,NA,NA,NA,NA,NA,NA,1.82743,1.77073,1.65871,1.56591,1.52516,1.46369,NA,NA,NA,NA,NA,NA,NA,1.47368,1.57085,1.65731,NA,NA,NA,NA,NA,NA,1.18979,1.02756,1.08294,1.12263,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA}
I can't figure out what causes this behaviour. Any suggestion is much appreciated.
(I'm using Mathematica 11.0.1.0)
For reference, I raised the same question on [Mathematica Stackexchange][1].
[1]: https://mathematica.stackexchange.com/q/153732/29792Fabian Wo2017-08-15T13:51:12ZTOUCH PHAT evaluation on RaspberryPi Zero
http://community.wolfram.com/groups/-/m/t/1165354
The PIMORONI TOUCH PHAT was tested with Mathematica on a Raspberry Pi Zero. Even without Mathematica's I2C functions, we can control I2C peripherals connected to the Raspberry Pi Zero by calling the standard i2cget and i2cset processes.
![enter image description here][1]
Unfortunately, on the TOUCH PHAT the LED addresses and the CTouch addresses are in reversed order relative to each other, so the Mathematica program must convert the obtained CTouch address into the LED address.
chipAdr = "0x2c";
rGet[reg_] :=
StringTrim[
RunProcess[{"i2cget", "-y", "1", chipAdr, reg}, "StandardOutput"]];
rPut[reg_, value_] :=
StringTrim[
RunProcess[{"i2cset", "-y", "1", chipAdr, reg, value}, "StandardOutput"]];
map = <| "0x01" -> "0x20", "0x02" -> "0x10", "0x04" -> "0x08",
"0x08" -> "0x04", "0x10" -> "0x02", "0x20" -> "0x01" |>;
While[True,
rPut["0x00", "0x00"];
While[(rcode = rGet["0x03"]) == "0x00"];
rPut["0x74", map[rcode]];
rPut["0x74", "0x00"];
]
$Aborted
The looped process may be aborted with Command+dot.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=84852017-08-1520.08.28.jpg&userId=897049Hirokazu Kobayashi2017-08-15T11:46:44Z[✓] Transfer HTML table to Excel using Wolfram Language?
http://community.wolfram.com/groups/-/m/t/1148737
In the link below there is a table that I need to transfer to Excel. The table is needed by a Mathematica program I'm now developing. This must be done in the Mathematica environment; otherwise it would be necessary to stop the program, do it by hand, and import the Excel file. I'm aware of the Mathematica Link for Excel add-on, but I can't use it because other users of my program may not have it.
[Table currency values in Brazil][1]
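A sketch of one possible route (mine, untested against that page's actual layout): let `Import` pull the page's tabular content and `Export` write it to an Excel file:

```mathematica
(* "Data" extracts tabular content from the HTML page *)
data = Import["http://www.excelcontabilidade.com.br/indice/5/IPCA", "Data"];
Export["ipca.xlsx", data]
```

The table of interest may need to be located inside `data` first (e.g. with `Cases`), since `"Data"` returns everything tabular on the page.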
I will appreciate any help, even if it is "not possible".
Anderson
[1]: http://www.excelcontabilidade.com.br/indice/5/IPCAAnderson Gaudio2017-07-18T19:42:53ZDefine a function and then use it in an equation?
http://community.wolfram.com/groups/-/m/t/1161091
I defined a function and then substituted it into an equation in two ways.
The first way (eq1, q2, eq2) didn't work, while the second (q3, eq3) did.
In[159]:= eq1 = (
A1 Sqrt[Dc] Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Dc E^((lh Sqrt[s + \[Theta]c])/Sqrt[Dc] - (
x Sqrt[s + \[Theta]c])/Sqrt[Dc]))/(
Sqrt[s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (
A1 Dc Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (-A2 +
pinf (s + \[Theta]c))/(s (s + \[Theta]c)) - (
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1 (-A2 + pinf (s + \[Theta]c)))/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) +
E^((Sqrt[s] x)/Sqrt[Dp]) b[1][s] + (
E^((2 lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[s] - Sqrt[Dp] \[Alpha]1) b[1][s])/(
Sqrt[s] + Sqrt[Dp] \[Alpha]1)
Out[159]= (
A1 Sqrt[Dc] Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Dc E^((lh Sqrt[s + \[Theta]c])/Sqrt[Dc] - (x Sqrt[s + \[Theta]c])/
Sqrt[Dc]))/(Sqrt[s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (
A1 Dc Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (-A2 +
pinf (s + \[Theta]c))/(s (s + \[Theta]c)) - (
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1 (-A2 + pinf (s + \[Theta]c)))/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) +
E^((Sqrt[s] x)/Sqrt[Dp]) b[1][s] + (
E^((2 lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[s] - Sqrt[Dp] \[Alpha]1) b[1][s])/(
Sqrt[s] + Sqrt[Dp] \[Alpha]1)
In[160]:= q2[x_, s_] := eq1
In[161]:= eq2 = \[Alpha]1 q2[lh, s] -
\!\(\*SuperscriptBox[\(q2\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[lh, s] == 0
Out[161]= \[Alpha]1 ((
A1 Sqrt[Dc] Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Dc E^((lh Sqrt[s + \[Theta]c])/Sqrt[Dc] - (
x Sqrt[s + \[Theta]c])/Sqrt[Dc]))/(
Sqrt[s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (
A1 Dc Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (-A2 +
pinf (s + \[Theta]c))/(s (s + \[Theta]c)) - (
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1 (-A2 + pinf (s + \[Theta]c)))/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) +
E^((Sqrt[s] x)/Sqrt[Dp]) b[1][s] + (
E^((2 lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[s] - Sqrt[Dp] \[Alpha]1) b[1][s])/(
Sqrt[s] + Sqrt[Dp] \[Alpha]1)) == 0
In[162]:=
q3[x_, s_] := (
A1 Sqrt[Dc] Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Dc E^((lh Sqrt[s + \[Theta]c])/Sqrt[Dc] - (
x Sqrt[s + \[Theta]c])/Sqrt[Dc]))/(
Sqrt[s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (
A1 Dc Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (-A2 +
pinf (s + \[Theta]c))/(s (s + \[Theta]c)) - (
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1 (-A2 + pinf (s + \[Theta]c)))/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) +
E^((Sqrt[s] x)/Sqrt[Dp]) b[1][s] + (
E^((2 lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[s] - Sqrt[Dp] \[Alpha]1) b[1][s])/(
Sqrt[s] + Sqrt[Dp] \[Alpha]1)
In[163]:= eq3 = \[Alpha]1 q3[lh, s] -
\!\(\*SuperscriptBox[\(q3\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[lh, s] == 0 // Simplify // Normal
Out[163]= True
![enter image description here][1]
![enter image description here][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=function_Page_1.jpg&userId=586844
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=function_Page_2.jpg&userId=586844Zhonghui Ou2017-08-10T03:59:30ZUse Wolfram|Alpha in Python?
http://community.wolfram.com/groups/-/m/t/1165138
Hi there! I'm developing a program in Python to calculate the energy of a primary cosmic ray particle using the NKG-function. Part of the process involves solving an equation in x in the form ax^1.2 + bx^2.2 + cx^3.2 + dx^4.2 - s = 0, where a, b, c, d and s are known variables.
I've installed the Wolfram Alpha package using pip, and am working in the IDE PyCharm. The code correctly calculates the value of x, but I think it is stored as a string, rather than a float. In order to use this x value in further calculations, I need to convert it into a float. I know how to do this in Python, but was wondering how this is done using the Wolfram Alpha language?
This is the section of the code so far:
client = wf.Client("app_ID") #app_ID is the app id
q = "Solve 274770*x**1.2 + 8271.8*x**2.2 + 79.555*x**3.2 + 0.22184*x**4.2 - 35435557377 = 0"
res = client.query(q)
answer = next(res.results).text
print(answer)
ra1 = answer
print("ra1:", ra1)
ra2 = (k*r0**eta) / (measured_density_a*(h*ra1**0.2 + g*ra1**1.2 + i*ra1**2.2 + e*ra1**3.2))
print("ra2", ra2)
This is the output when I run the code:
x = 385.279
ra1: x = 385.279
Traceback (most recent call last):
File "C:/Users/Nadia/PycharmProjects/Particle_Energy/energy_code/complete_code_wolfram.py", line 282, in <module>
ra2 = (k * r0 ** eta) / (measured_density_a * (h * ra1 ** 0.2 + g * ra1 ** 1.2 + i * ra1 ** 2.2 + e * ra1 ** 3.2))
TypeError: unsupported operand type(s) for ** or pow(): 'str' and 'float'
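One way to handle this (a sketch, assuming the result text always has the form `x = <number>`) is to strip the `x =` prefix before converting:

```python
def parse_result(answer: str) -> float:
    """Extract the numeric right-hand side from a result like 'x = 385.279'."""
    rhs = answer.split("=", 1)[1].strip()  # take the text after the first '='
    return float(rhs)

ra1 = parse_result("x = 385.279")
assert abs(ra1 - 385.279) < 1e-9
# ra1 is now a float and can be used in further arithmetic, e.g. ra1 ** 0.2
```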
Thank you in advance!Nadia Singh2017-08-14T22:58:05ZInstall Mathematica on a Mac?
http://community.wolfram.com/groups/-/m/t/1165006
I just downloaded Mathematica for Mac, but I couldn't launch the installer. Does anyone know how?Jiann Yaw Lee2017-08-14T17:50:00ZA predictor based on cellular automata
http://community.wolfram.com/groups/-/m/t/1058981
Hi, I'm a trader on financial markets, a quantitative researcher, and a developer of trading algorithms. I'm not a user of Wolfram products yet. I've read the book NKS and found it very interesting overall, and especially the parts about cellular automata with universal-computing capabilities. My intent is to develop something like a function implementing such an automaton, taking as input a normalized series of historical prices and giving as output a forecast of the next day's price. The automaton's rules would be set by a genetic algorithm working on a training set of historical data. I'd love to know if someone here is working on something like this. If you like, you can take a look at my work at [www.co-re.it][1]
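For readers unfamiliar with the NKS-style automata mentioned above, here is a minimal, generic sketch in Python of a single elementary-CA update (binary cells, radius 1, periodic boundary). It is only the rule machinery, not the price predictor the post envisions:

```python
def ca_step(cells, rule):
    """One update of an elementary (radius-1, binary) cellular automaton.

    `cells` is a list of 0/1 values with periodic boundary conditions;
    `rule` is the Wolfram rule number (0-255).
    """
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 90 (left XOR right) from a single seed cell:
row = ca_step([0, 0, 0, 1, 0, 0, 0], 90)
assert row == [0, 0, 1, 0, 1, 0, 0]
```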
Thanks - Marcello Calamai
[1]: http://www.co-re.it "COmputational REsearch"Marcello Calamai2017-04-11T14:17:34ZBuilding volatility objects – data science approach
http://community.wolfram.com/groups/-/m/t/1164432
*NOTE: all Wolfram Language code and data are available in the attached notebook at the end of the post.*
----------
A novel method for building volatility objects is discussed in the attached note. Machine learning techniques are applied to building financial-derivatives volatility, and the resulting predictions fit the sample data well. The resulting objects are reasonable: they build quickly and produce logically correct estimates within the given domains. As such, the data science route offers an interesting alternative to traditional modelling assumptions.
![enter image description here][1]
#Introduction#
Volatility plays a critical role in the modelling and valuation of financial derivatives, so it is not surprising to see the continuous attention of many quants and researchers on this subject, its pattern decomposition, and its process modelling. Knowing the 'right' volatility and being able to estimate its future path is therefore a critical ingredient of consistent derivatives pricing.
Financial volatilities are either given (quoted by broker-dealers) or implied (derived from option prices). By their nature, financial-product volatilities are generally 'forward-looking' rather than historical/realised volatilities; this stems from the principles of risk-neutral pricing. When a volatility is quoted, it is in many instances derived from volatility models. These can be simple (such as Black-Scholes) or more complex; local volatility, SABR, and 'mixture' models such as SVLV are currently the most widely used volatility models in the market. To operate properly, all these models require extensive calibration to market data.
We propose an *alternative method* for volatility object building that takes a data science approach. Using Mathematica's routines for machine learning, we use the **Predict** functionality to build volatility paths by 'learning' from quoted data. We will look at three volatility objects - **FX**, **Equity** and **Interest rate** swaptions - to show Mathematica's capabilities in the construction and management of volatility objects by 'learning' from given examples. The method is generally fast and can be fully automated, which improves its usability and future application in quantitative finance.
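The idea behind the Gaussian-process predictor can be illustrated outside Mathematica as well. The following NumPy sketch (not the author's code, and a much-simplified stand-in for `Predict[..., Method -> "GaussianProcess"]`) fits an RBF-kernel regression to a toy smile and reproduces the training points:

```python
import numpy as np

def gp_fit_predict(X, y, X_new, length=1.0, noise=1e-6):
    """Gaussian-process regression with an RBF kernel (zero-mean prior)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))   # kernel matrix plus jitter
    alpha = np.linalg.solve(K, y)
    return k(X_new, X) @ alpha             # posterior mean at the new points

# Toy smile (illustrative numbers): (maturity, delta/100) -> vol
X = np.array([[1.0, 0.10], [1.0, 0.50], [1.0, 0.90]])
y = np.array([23.1, 18.7, 15.3])
pred = gp_fit_predict(X, y, X)             # evaluate at the training points
assert np.allclose(pred, y, atol=1e-3)     # the fit reproduces the quotes
```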
#FX Volatility#
FX volatility is quoted in the market in either one or two dimensions. The former is a vector of at-the-money (ATM) volatilities for different option maturities, whilst the latter is a 2D surface that adds option strikes to the maturity dimension. Strikes are generally shown on the horizontal axis, expressed as FX delta: 50 delta is ATM, 10 and 25 delta represent out-of-the-money (OTM) calls, whilst 75 and 90 delta represent puts.
The non-negativity of the FX market leads to log-normal assumptions about the distribution of FX data, so the quoted volatility is log-normal (also known as relative volatility).
We use recent FX volatility data for the JPY/BRL currency cross. Option maturities range from 1 day to 10 years, and the FX smile is defined for both calls and puts on the delta scale above.
fxmat = {1/360, 7/360, 14/360, 21/360, 30/360, 60/360, 90/360,
120/360, 150/360, 180/360, 1, 1.5, 2, 3, 4, 5, 7, 10};
fxdelta = {10, 25, 50, 75, 90};
xtbl = Table[{i, j}, {i, fxmat}, {j, fxdelta}] // Transpose;
fxvols0 = {{28.9347360822946, 23.266, 18.69473828125, 17.336,
17.082}, {22.817, 18.889, 18.7004305555556, 14.768,
14.435}, {22.462, 18.476, 18.6928159722222, 14.352,
14.026}, {22.741, 18.775, 18.6928532986111, 14.673,
14.356}, {23.885, 20.609, 18.6928532986111, 17.419,
17.814}, {23.41, 20.449, 18.6928159722222, 17.174,
17.161}, {23.811, 20.843, 18.7004305555556, 17.395,
17.229}, {24.998, 21.005, 18.6759444444444, 17.188,
17.089}, {23.849, 20.471, 18.7022222222222, 16.766,
16.488}, {23.107, 20.095, 18.6926666666667, 16.466,
16.117}, {23.099, 19.807, 18.7308888888889, 15.833,
15.346}, {22.2539294889054, 20.404, 18.578, 16.29,
15.8708687695123}, {22.7761138678155, 20.642, 18.8, 16.449,
16.1821988373345}, {22.0841109536103, 20.326, 18.7, 16.059,
15.4239471417806}, {22.6021123295428, 20.563, 18.6926666666667,
16.158, 15.6147034126386}, {21.4177604234308, 20.017,
18.7308888888889, 15.563, 14.4858894630447}, {22.4813659703195,
20.508, 18.7078518518519, 15.849,
15.023193562278}, {23.0150689352065, 20.75, 18.7104691358025,
16.003, 15.3165637372759}};
fxvols = fxvols0 // Transpose;
The FX volatility surface looks as follows:
TableForm[fxvols // Transpose, TableHeadings -> {fxmat, fxdelta}]
![enter image description here][2]
We can visualise it as follows:
ListPlot3D[fxvols, PlotLabel -> Style["JPY/BRL vol surface", 14]]
![enter image description here][3]
##Training predictor on the FX volatility data##
We use the quoted volatility data as a 'training set' to discover the pattern in the data for prediction purposes. The **Predict** function is our main tool for this task, and we build two objects:
- Vol object with the Gaussian process method
- Vol object with the Random forest method
We first build the training-set object and format it into the required form:
fxvoldata0 =
Table[{fxmat[[i]], fxdelta[[j]]} -> fxvols[[j, i]], {j, 1,
Length[fxdelta]}, {i, 1, Length[fxmat]}];
fxvoldata1 = Flatten[fxvoldata0, 1];
Using the data object, we now train two predictors:
fxvmodelGP =
Predict[fxvoldata1, PerformanceGoal -> "Quality",
Method -> "GaussianProcess"]
fxvmodelRF =
Predict[fxvoldata1, PerformanceGoal -> "Quality",
Method -> "RandomForest"]
![enter image description here][4]
and examine the information about each predictor function
{PredictorInformation[fxvmodelGP],
PredictorInformation[fxvmodelRF]} // Row
![enter image description here][5]
We can now test the predictors on same sample data:
{fxvmodelGP[{7, 10}], fxvmodelRF[{7, 10}], fxvmodelGP[{1/2, 50}],
fxvmodelRF[{1/2, 50}]}
> {22.2577, 22.2484, 18.6702, 18.6932}
We observe a decent fit to the original data. Using the model, we can now build the entire volatility object, filling the gaps in the quoted spectrum:
fxmodres =
Table[{i, j, fxvmodelGP[{i, j}]}, {i, 0.5, 10, 0.25}, {j, 5, 95, 5}];
fxmoevals = Flatten[fxmodres, 1];
ListPlot3D[%,
PlotLabel ->
Style["Predicted FX Vol Surface: Gaussian Process approach", Blue,
Italic, 15], AxesLabel -> {"Tenor", "Delta"},
ColorFunction -> "Rainbow", ImageSize -> 400]
![enter image description here][6]
The Gaussian process model builds a smooth, well-behaved volatility surface in both dimensions and nicely smooths out the edges observed in the original data.
fxmodres2 =
Table[{i, j, fxvmodelRF[{i, j}]}, {i, 0.5, 10, 0.25}, {j, 5, 95,
10}];
Flatten[fxmodres2, 1];
ListPlot3D[%, ColorFunction -> "TemperatureMap",
PlotLabel ->
Style["Predicted FX Vol Surface: Random Forest approach", Blue,
Italic, 15], AxesLabel -> {"Tenor", "Delta"},
ColorFunction -> "TemperatureMap", ImageSize -> 400]
![enter image description here][7]
The nature of the Random forest model means that the modelled surface looks step-wise. If smoothness is preferable for the vol-object construction, then the Gaussian process is the better choice.
#Equity volatility#
We now look at equity volatility data and build vol objects in a similar way to the FX case above. Equity volatility data exist as a 2D surface with (i) an option maturity dimension and (ii) an option strike dimension, closely resembling the FX volatilities. By nature, equity volatilities are also log-normal, since equity prices are always positive.
Equity option maturities typically range from 1 month up to 5 years, whereas option strikes - expressed in terms of 'moneyness' - range between 40% and 200%.
We take the recent Nikkei 225 equity volatility data defined on the grid mentioned above:
eqdates = {1/12, 2/12, 3/12, 6/12, 1, 3/2, 2, 3, 4, 5};
eqmoney = {0.4, 0.6, 0.8, 0.9, 0.95, 0.975, 1, 1.025, 1.05, 1.1, 1.2,
1.3, 1.5, 1.75, 2};
eqv = {{54.743, 42.171, 33.275, 24.208, 20.015, 17.999, 16.541,
15.679, 15.173, 15.752, 18.185, 20.298, 24.619, 27.651,
29.413}, {46.068, 39.862, 29.681, 22.664, 19.624, 18.228, 17.138,
16.358, 15.777, 15.358, 16.869, 18.244, 21.776, 24.987,
27.007}, {42.368, 38.084, 27.929, 22.151, 19.673, 18.547, 17.639,
16.995, 16.55, 16.053, 16.726, 17.704, 20.029, 23.105,
25.135}, {42.136, 34.798, 25.302, 21.623, 19.971, 19.281, 18.721,
18.289, 17.966, 17.57, 17.655, 18.439, 19.792, 21.623,
23.704}, {38.829, 30.246, 23.945, 21.428, 20.393, 19.97, 19.614,
19.322, 19.087, 18.762, 18.574, 18.876, 19.975, 21.059,
22.035}, {35.555, 28.012, 23.123, 21.135, 20.363, 20.046, 19.775,
19.547, 19.358, 19.079, 18.844, 18.951, 19.739, 20.737,
21.493}, {33.111, 26.718, 22.555, 20.905, 20.28, 20.022, 19.799,
19.608, 19.446, 19.198, 18.952, 18.968, 19.518, 20.418,
21.127}, {30.028, 25.228, 21.861, 20.629, 20.171, 19.981, 19.813,
19.667, 19.54, 19.339, 19.108, 19.058, 19.347, 20.034,
20.687}, {28.217, 24.273, 21.397, 20.413, 20.049, 19.896, 19.761,
19.641, 19.536, 19.366, 19.156, 19.084, 19.245, 19.765,
20.341}, {26.918, 23.453, 20.936, 20.115, 19.812, 19.683, 19.568,
19.466, 19.376, 19.227, 19.033, 18.952, 19.04, 19.439, 19.936}};
eqvols = eqv // Transpose;
TableForm[eqv, TableHeadings -> {eqdates, eqmoney}]
![enter image description here][8]
This is the equity volatility surface quoted in the market:
eqtab = Table[{eqdates[[i]], eqmoney[[j]], eqv[[i, j]]}, {i, 1,
Length[eqdates]}, {j, 1, Length[eqmoney]}];
Flatten[eqtab, 1];
ListPlot3D[%, PlotLabel -> Style["Nikkei 225 vol surface", 14]]
![enter image description here][9]
The skew above is a typical feature of the equity markets.
##Training predictor on the Nikkei volatility data##
We first configure the vol data object for the modelling purposes
eqs = Table[{eqdates[[i]], eqmoney[[j]]} -> eqv[[i, j]], {i, 1,
Length[eqdates]}, {j, 1, Length[eqmoney]}];
eqdataset = Flatten[eqs, 1];
and then train two predictors:
- Gaussian process
- Neural network
eqvolmodelGP = Predict[eqdataset, Method -> "GaussianProcess"]
eqvolmodelNN = Predict[eqdataset, Method -> "NeuralNetwork"]
![enter image description here][10]
Obtain information about each method:
{PredictorInformation[eqvolmodelGP], PredictorInformation[eqvolmodelNN]}
![enter image description here][11]
We test each predictor on a sample data
{eqvolmodelGP[{1, 0.6}], eqvolmodelNN[{1, 0.6}], eqvolmodelGP[{3, 2}], eqvolmodelNN[{3, 2}]}
> {29.8471, 31.0239, 20.6998, 20.9776}
We can see a decent fit to the original data.
We now generate the full volatility surface by extending the boundaries outside the original domain:
eqmodres =
Table[{i, j, eqvolmodelGP[{i, j}]}, {i, 0.5, 10, 0.25}, {j, 0.2, 3,
0.1}];
eqmodres = Flatten[eqmodres, 1];
ListPlot3D[%,
PlotLabel ->
Style["Nikkei EQ Vol Surface: Gaussian Process approach", Blue,
Italic, 15], AxesLabel -> {"Tenor", "Money"},
ColorFunction -> "Rainbow", ImageSize -> 400]
![enter image description here][12]
eqmodres =
Table[{i, j, eqvolmodelNN[{i, j}]}, {i, 0.5, 10, 0.25}, {j, 0.2, 3,
0.1}];
eqmodres = Flatten[eqmodres, 1];
ListPlot3D[%,
PlotLabel ->
Style["Nikkei EQ Vol Surface: Neural network approach", Blue,
Italic, 15], AxesLabel -> {"Tenor", "Money"},
ColorFunction -> "TemperatureMap", ImageSize -> 400]
![enter image description here][13]
Both predictors produce smooth volatility objects, with the Neural network staying closer to the underlying data.
#Swaption cube#
Our third example is based on a more complex case - the 3D swaption cube. Interest rate swaptions are defined on a 3D scale: (i) option maturity, (ii) underlying swap maturity and (iii) strike, which makes this case more complicated. Option maturities range from 1 month to 30 years, swap maturities are typically between 1 year and 30 years, and strikes are usually in the range of -200 to 200, where the number represents the basis-point offset from the ATM swap rate.
Since rates are now negative in many currencies, the market has moved from quoting log-normal volatilities to normal ones. These are also known as 'absolute' volatilities and are usually expressed on a rates-convention basis.
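As a rough rule of thumb at the money, the two quoting conventions are linked by the forward rate: normal (absolute) volatility is approximately the log-normal volatility times the forward. A one-line Python sketch with illustrative numbers (not data from this post):

```python
def lognormal_to_normal_atm(sigma_ln: float, forward: float) -> float:
    """ATM approximation: normal (absolute) vol ~ log-normal vol x forward rate."""
    return sigma_ln * forward

# A 20% log-normal vol on a 3.5% forward swap rate is roughly a 70 bp normal vol:
sigma_n = lognormal_to_normal_atm(0.20, 0.035)
assert abs(sigma_n - 0.0070) < 1e-12
```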
We take the recent EUR swaption volatility data and create training set for the Mathematica's predictor:
optmat = {1/2, 3/4, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 25, 30};
swmat = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 25, 30};
swox = {-200, -150, -100, -75, -50, -25, 0, 25, 50, 75, 100, 150, 200};
![enter image description here][14]
Fully-defined cubes, such as the EUR one, are generally large:
Map[Length, swv] // Total
> 3315
##Training the predictor on the EUR swaption volatility data##
We first build the vol object from the data
swvres1 =
Table[{optmat[[i]], swmat[[k]], swox[[j]]} -> swv[[i, j]], {k, 1,
Length[swmat]}, {j, 1, Length[swox]}, {i, 1, Length[optmat]}];
swvres2 = Flatten[swvres1, 2];
and create three predictors:
- Gaussian process
- Neural network
- Random forest
swvolmodGP =
Predict[swvres2, Method -> "GaussianProcess",
PerformanceGoal -> "Quality"]
![enter image description here][15]
swvolmodNN =
Predict[swvres2, Method -> "NeuralNetwork",
PerformanceGoal -> "Quality"]
![enter image description here][16]
swvolmodRF =
Predict[swvres2, Method -> "RandomForest",
PerformanceGoal -> "Quality"]
![enter image description here][17]
Whilst the Neural network and Random forest predictors are generally fast to build, the Gaussian process is slower:
{Predict[swvres2, Method -> "GaussianProcess",
PerformanceGoal -> "Quality"] // Timing,
Predict[swvres2, Method -> "NeuralNetwork",
PerformanceGoal -> "Quality"] // Timing,
Predict[swvres2, Method -> "RandomForest",
PerformanceGoal -> "Quality"] // Timing}
![enter image description here][18]
We test the predictors on the sample data
{swvolmodGP[{10, 1, 0}], swvolmodNN[{10, 1, 0}],
swvolmodRF[{10, 1, 0}]}
> {0.698667, 0.698477, 0.69649}
We again observe a decent fit to the original data.
Using the three vol models, we predict the volatility data and fill the cubes:
volmodGP =
Table[swvolmodGP[{i, j, k}], {i, 1, 5, 0.5}, {j, 1, 10,
1}, {k, -100, 100, 50}];
volmodNN =
Table[swvolmodNN[{i, j, k}], {i, 1, 10, 0.25}, {j, 1, 10,
1}, {k, -200, 200, 25}];
volmodRF =
Table[swvolmodRF[{i, j, k}], {i, 1, 10, 0.25}, {j, 1, 10,
1}, {k, -200, 200, 25}];
{ListPlot3D[Table[volmodNN[[i]], {i, Length[volmodNN]}],
ColorFunction -> "Rainbow",
PlotLabel -> Style["EUR Swaption cube: NN approach", 12],
AxesLabel -> {"Opt Tenor", "Swap Tenor"}, ImageSize -> 250],
ListPlot3D[Table[volmodGP[[i]], {i, Length[volmodGP]}],
ColorFunction -> "TemperatureMap",
PlotLabel -> Style["EUR Swaption cube: GP approach", 12],
AxesLabel -> {"Opt Tenor", "Swap Tenor"}, ImageSize -> 250],
ListPlot3D[Table[volmodRF[[i]], {i, Length[volmodRF]}],
ColorFunction -> "NeonColors",
PlotLabel -> Style["EUR Swaption cube: RF approach", 12],
AxesLabel -> {"Opt Tenor", "Swap Tenor"}, ImageSize -> 250]}
![enter image description here][19]
All three predictors correctly show the flattening of the surfaces for higher strikes. The Neural network produces the smoothest surface and is additionally the fastest to build, so it may be well suited to live market data and active volatility management.
#Conclusion#
The objective of this note was to show that machine learning offers a viable alternative to traditional volatility models based on single- or multi-factor processes. The data science approach is attractive because it actively 'learns' from available data samples and adjusts its parameters when market conditions or direction change. Mathematica's built-in Predict function provides excellent routines for volatility data fitting, and the three tested methods produce reasonable predictions for the modelled data. More importantly, higher dimensions, such as cubes, pose no problem for object rendering, which remains robust and fast.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=FXVolObject.jpg&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sadf453yrtehdgfs.png&userId=11733
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=107222.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-08-14at13.29.58.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-08-14at13.30.14.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=86825.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=76126.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=59597.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19818.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-08-14at13.38.23.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=622210.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=546611.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1029512.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=938213.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=599914.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1037115.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=182316.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=944017.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=138619.png&userId=20103Igor Hlivka2017-08-13T18:06:19Z[✓] Ratios in expanded Fibonacci series?
http://community.wolfram.com/groups/-/m/t/1164443
My father is interested in a formula for the ratio of "expanded" Fibonacci series. The "normal" series adds 2 numbers together (1, 1, 2, 3, 5, ...), and its ratio is 1.61803398... By adding 3 numbers together, regardless of the initial three numbers, the ratio is 1.83928... By adding 4 numbers together, the ratio is 1.9275620... And so on. The formula for the normal Fibonacci ratio is (1 + Sqrt[5])/2. What is the formula for the "expanded" series?Mark Good2017-08-13T18:38:42ZGlobal optimization with ParametricIPOPTMinimize
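The constants described in the question above are the so-called k-bonacci ratios: the limiting ratio of a k-term generalized Fibonacci series is the unique real root in (1, 2) of x^k = x^(k-1) + x^(k-2) + ... + 1. For k = 2 this recovers (1 + Sqrt[5])/2; for k >= 3 there is no formula as simple as the golden ratio, though the cubic case can still be solved in radicals. A small Python sketch finds the roots numerically by bisection:

```python
def nbonacci_constant(k: int, tol: float = 1e-12) -> float:
    """Real root in (1, 2) of x^k - x^(k-1) - ... - x - 1 = 0, by bisection."""
    f = lambda x: x**k - sum(x**i for i in range(k))
    lo, hi = 1.0, 2.0          # f(1) = 1 - k < 0 and f(2) = 1 > 0 for k >= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

assert abs(nbonacci_constant(2) - 1.6180339887) < 1e-8   # golden ratio
assert abs(nbonacci_constant(3) - 1.8392867552) < 1e-8   # tribonacci constant
assert abs(nbonacci_constant(4) - 1.9275619755) < 1e-8   # tetranacci constant
```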
http://community.wolfram.com/groups/-/m/t/1164680
The documentation provides a step-by-step example of using ParametricIPOPTMinimize for global optimization but doesn't provide a function for doing it routinely. The code defining iMin, which uses ParametricIPOPTMinimize, is in the attached notebook.
In[11]:= AbsoluteTiming @
NMinimize[{x^2 - y^2,
Cos[x - y] >= 0.5, -5 <= x <= 5, -5 <= y <= 5}, {x, y}]
During evaluation of In[11]:= NMinimize::cvmit: Failed to converge to the requested accuracy or precision within 100 iterations.
Out[11]= {0.455478, {-24.9443, {x -> 0.235988, y -> -5.}}}
In[12]:= AbsoluteTiming @
iMin[x^2 - y^2, {Cos[x - y] >= 0.5}, {{x, -5, 5}, {y, -5, 5}}, 5, 0]
During evaluation of In[12]:= {{Solve_Succeeded,5}}
Out[12]= {0.151122, {-24.9443, {x -> -0.235988, y -> 5.}}}
In[13]:= AbsoluteTiming @
NMinimize[{x Sin[10 x] y ^2 Cos[5 y], (x + 1)^2 + (y + 1)^2 <=
1}, {{x, -2, 0}, {y, -2, 0}}, Method -> "DifferentialEvolution"]
Out[13]= {0.286165, {-5.12441, {x -> -1.41862, y -> -1.90816}}}
In[14]:= AbsoluteTiming @
iMin[x Sin[10 x] y ^2 Cos[5 y], {(x + 1)^2 + (y + 1)^2 <=
1}, {{x, -2, 0}, {y, -2, 0}}, 25, 0]
During evaluation of In[14]:= {{Solve_Succeeded,25}}
Out[14]= {0.147763, {-5.12441, {x -> -1.41862, y -> -1.90816}}}Frank Kampas2017-08-14T15:43:25Z[✓] Plot the circles within a Manipulate?
http://community.wolfram.com/groups/-/m/t/1164758
Consider the following code:
hypocycloid[tend_] :=
Module[{a = 4, b = 1, x, y, p},
x[t_, p_] := b (a/b - 1) Cos[t] + b p Cos[(a/b - 1) t];
y[t_, p_] := b (a/b - 1) Sin[t] - b p Sin[(a/b - 1) t];
ParametricPlot[{x[t, 1], y[t, 1]}, {t, 0, tend},
PlotStyle -> {Red, Thick}, Ticks -> None, ImageSize -> 225,
Epilog -> {Thick, Blue, Circle[{0, 0}, a], Black,
Circle[{x[tend, 0], y[tend, 0]}, b], Red, PointSize[Large],
Point[{x[tend, 1], y[tend, 1]}], Black,
Line[{{x[tend, 0], y[tend, 0]}, {x[tend, 1], y[tend, 1]}}]},
PlotRange -> (1 + 1/20) {-a, a}]];
Manipulate[Quiet@hypocycloid[tend], {tend, .0001, 2*Pi}]Lisan Lisov2017-08-14T11:04:42Z[✓] Use your own functions in a parametric plot?
http://community.wolfram.com/groups/-/m/t/1164404
Consider the following code:
x[t_] := (a - b) Cos[t] + b Cos[(a/b - 1) t]
y[t_] := (a - b) Sin[t] - b Sin[(a/b - 1) t]
for example
ParametricPlot[{x[t_], y[t_] }, {t, 0, 12 \[Pi]}, .....]
I tried different ways without success.Lisan Lisov2017-08-13T13:33:47ZSolve an equation with Laplace and Inverse Laplace Transform?
http://community.wolfram.com/groups/-/m/t/1164710
The problem - one equation with 3 conditions - can be solved by the Laplace Transform and the Inverse Laplace Transform.
But the solution is wrong: it drops far below the numerical solution.
This program needs to be done with Mathematica and Maple.
Would you help me check the program?
Thanks.
(*Citrate direvative with respect to time.*)
(* Sqrt[Dc]F1\[Lambda]->A1, cinf\[Theta]c\[Lambda]->A2, \
Dp\[Theta]c/(Dc-Dp)->A3, (1+Dc/(Dc-Dp))\[Theta]c->A4, \
Dc\[Theta]c/(Dc-Dp)->A5, (Dc F1 \[Lambda])/(Dc-Dp)->A6,(Sqrt[Dc] F1 \
\[Alpha]1 \[Lambda] )/ \[Theta]c->A7, Sqrt[Dp] pinf \[Alpha]1->A8, \
cinf Sqrt[Dp] \[Alpha]1 \[Lambda]->A9,(-lh+x)/(2 Sqrt[Dc] )->A10, \
(Dc^(3/2) F1 \[Lambda])/((Dc-Dp) Sqrt[\[Pi]] )->A11,(-lh+x)/(2 \
Sqrt[Dp ])->A12, ( cinf Sqrt[\[Theta]c] \[Lambda] )/Sqrt[\[Pi]]->A13, \
cinf Sqrt[\[Theta]c] \[Lambda]-> A14*)
(*The sink term in the equation.*)
ct[x_, t_] := ((E^(-((lh - x)^2/(4 Dc t)) - t \[Theta]c)) F1 Sqrt[
Dc] )/(Sqrt[\[Pi]] Sqrt[t]) - cinf*\[Theta]c*E^(-t* \[Theta]c)
(*The model includes euqation, left and right boundary conditions, \
and initial condition.*)
(*Phosphate equation. *)
eq1 = D[p[x, t], t] == Dp*D[p[x, t], {x, 2}] + \[Lambda]*ct[x, t] //
Simplify // Normal
\!\(\*SuperscriptBox[\(p\), \*
TagBox[
RowBox[{"(",
RowBox[{"0", ",", "1"}], ")"}],
Derivative],
MultilineFunction->None]\)[x, t] ==
E^(-t \[Theta]c) ((Sqrt[Dc] E^(-((lh - x)^2/(4 Dc t))) F1)/(
Sqrt[\[Pi]] Sqrt[t]) - cinf \[Theta]c) \[Lambda] + Dp
\!\(\*SuperscriptBox[\(p\), \*
TagBox[
RowBox[{"(",
RowBox[{"2", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[x, t]
(*Left boundary condition.*)
eq2 = D[p[x, t], x] == \[Alpha]1*p[x, t] /. x -> lh
\!\(\*SuperscriptBox[\(p\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[lh, t] == \[Alpha]1 p[lh, t]
(*Right boundary condition.*)
eq3 = D[p[x, t], x] == 0 /. x -> +\[Infinity]
\!\(\*SuperscriptBox[\(p\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[\[Infinity], t] == 0
(*Initial condition.*)
eq4 = p[x, t] == pinf /. t -> 0
p[x, 0] == pinf
(*Laplacian Transform of eq1-eq4 with respect to t, p(x,t)->q (x,s). \
eq1-eq4 -> eq6-eq9.*)
eq6 = LaplaceTransform[eq1, t, s] // Simplify // Normal
(cinf \[Theta]c \[Lambda])/(s + \[Theta]c) +
s LaplaceTransform[p[x, t], t, s] == (
Sqrt[Dc] E^(-(Sqrt[s + \[Theta]c]/Sqrt[(Dc/(lh - x)^2)]))
F1 \[Lambda])/Sqrt[s + \[Theta]c] + Dp LaplaceTransform[
\!\(\*SuperscriptBox[\(p\), \*
TagBox[
RowBox[{"(",
RowBox[{"2", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[x, t], t, s] + p[x, 0]
eq7 = LaplaceTransform[eq2, t, s] // Simplify // Normal
\[Alpha]1 LaplaceTransform[p[lh, t], t, s] == LaplaceTransform[
\!\(\*SuperscriptBox[\(p\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[lh, t], t, s]
eq8 = LaplaceTransform[eq3, t, s] // Simplify // Normal
LaplaceTransform[
\!\(\*SuperscriptBox[\(p\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[\[Infinity], t], t, s] == 0
eq9 = LaplaceTransform[eq4, t, s] // Simplify // Normal
(pinf == p[x, 0])/s
(*Simplify the nonhomogeneous term, the sink term of eq6. *)
eq10 = FullSimplify[(cinf \[Theta]c \[Lambda])/(s + \[Theta]c) - (
Sqrt[Dc] E^(-(Sqrt[s + \[Theta]c]/Sqrt[(Dc/(lh - x)^2)]))
F1 \[Lambda])/Sqrt[s + \[Theta]c], (lh - x) < 0 &&
Dc > 0 && (s + \[Theta]c) > 0]
((cinf \[Theta]c -
E^((lh - x) Sqrt[(s + \[Theta]c)/Dc]) F1 Sqrt[
Dc (s + \[Theta]c)]) \[Lambda])/(s + \[Theta]c)
eq11 = Apart[eq10]
(cinf \[Theta]c \[Lambda])/(s + \[Theta]c) - (
E^((lh - x) Sqrt[(s + \[Theta]c)/Dc]) F1 Sqrt[
Dc (s + \[Theta]c)] \[Lambda])/(s + \[Theta]c)
eq12 = (cinf \[Theta]c \[Lambda])/(s + \[Theta]c) - (
E^((lh - x) /Sqrt[Dc ] Sqrt[s + \[Theta]c]) F1 Sqrt[
Dc ] \[Lambda])/
Sqrt[ (s + \[Theta]c)] /. {Sqrt[Dc] F1 \[Lambda] -> A1,
cinf \[Theta]c \[Lambda] -> A2}
A2/(s + \[Theta]c) - (
A1 E^(((lh - x) Sqrt[s + \[Theta]c])/Sqrt[Dc]))/Sqrt[s + \[Theta]c]
(*eq6-eq8 become eq13-eq15 by Laplace Transform as p(x,t)->q (x,s).*)
eq13 = s *q[x, s] - Dp*D[q[x, s], {x, 2}] - pinf + eq12 == 0
-pinf + A2/(s + \[Theta]c) - (
A1 E^(((lh - x) Sqrt[s + \[Theta]c])/Sqrt[Dc]))/Sqrt[
s + \[Theta]c] + s q[x, s] - Dp
\!\(\*SuperscriptBox[\(q\), \*
TagBox[
RowBox[{"(",
RowBox[{"2", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[x, s] == 0
eq14 = \[Alpha]1*(q[x, s] /. x -> lh) - (D[q[x, s], x] /. x -> lh) == 0
\[Alpha]1 q[lh, s] -
\!\(\*SuperscriptBox[\(q\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[lh, s] == 0
eq15 = (D[q[x, s], x] /. x -> \[Infinity]) == 0
\!\(\*SuperscriptBox[\(q\), \*
TagBox[
RowBox[{"(",
RowBox[{"1", ",", "0"}], ")"}],
Derivative],
MultilineFunction->None]\)[\[Infinity], s] == 0
(*Solve the equation eq13 with the left boundary condition eq14.*)
eq16 = DSolve[{eq13, eq14}, q[x, s], {x, s},
GeneratedParameters -> B ] // Simplify // Normal
{{q[x, s] -> (E^(-((Sqrt[s] x)/Sqrt[
Dp])) (A1 Sqrt[Dc]
s (s + \[Theta]c) (Sqrt[
Dc] (Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp]) \[Alpha]1 -
E^((Sqrt[s] x)/Sqrt[
Dp] + ((lh - x) Sqrt[s + \[Theta]c])/Sqrt[
Dc]) (Sqrt[s] + Sqrt[Dp] \[Alpha]1)) +
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp]) Sqrt[
s + \[Theta]c]) + (-Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[
Dp]) \[Alpha]1 +
E^((Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[s] + Sqrt[Dp] \[Alpha]1)) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c)) (-A2 +
pinf (s + \[Theta]c)) +
s (E^((2 lh Sqrt[s])/Sqrt[
Dp]) (Sqrt[s] - Sqrt[Dp] \[Alpha]1) +
E^((2 Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[s] + Sqrt[Dp] \[Alpha]1)) (s + \[Theta]c)^(
3/2) (-Dc s + Dp (s + \[Theta]c)) B[1][s]))/(s (Sqrt[s] +
Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)^(
3/2) (-Dc s + Dp (s + \[Theta]c)))}}
(*Simplify the solution eq16 in eq17-eq18.*)
eq17 = (q[x, s] /. eq16[[1]])
(E^(-((Sqrt[s] x)/Sqrt[
Dp])) (A1 Sqrt[Dc]
s (s + \[Theta]c) (Sqrt[
Dc] (Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp]) \[Alpha]1 -
E^((Sqrt[s] x)/Sqrt[Dp] + ((lh - x) Sqrt[s + \[Theta]c])/
Sqrt[Dc]) (Sqrt[s] + Sqrt[Dp] \[Alpha]1)) +
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp]) Sqrt[
s + \[Theta]c]) + (-Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[
Dp]) \[Alpha]1 +
E^((Sqrt[s] x)/Sqrt[Dp]) (Sqrt[s] + Sqrt[Dp] \[Alpha]1)) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c)) (-A2 +
pinf (s + \[Theta]c)) +
s (E^((2 lh Sqrt[s])/Sqrt[Dp]) (Sqrt[s] - Sqrt[Dp] \[Alpha]1) +
E^((2 Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[s] + Sqrt[Dp] \[Alpha]1)) (s + \[Theta]c)^(
3/2) (-Dc s + Dp (s + \[Theta]c)) B[1][s]))/(s (Sqrt[s] +
Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)^(
3/2) (-Dc s + Dp (s + \[Theta]c)))
eq18 = ExpandAll[eq17, x]
(A1 Sqrt[Dc] Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Dc E^((lh Sqrt[s + \[Theta]c])/Sqrt[Dc] - (x Sqrt[s + \[Theta]c])/
Sqrt[Dc]))/(Sqrt[s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (
A1 Dc Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (-A2 +
pinf (s + \[Theta]c))/(s (s + \[Theta]c)) - (
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1 (-A2 + pinf (s + \[Theta]c)))/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) +
E^((Sqrt[s] x)/Sqrt[Dp]) B[1][s] + (
E^((2 lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[s] - Sqrt[Dp] \[Alpha]1) B[1][s])/(
Sqrt[s] + Sqrt[Dp] \[Alpha]1)
(*Define the solution q1[x,t] with eq18 after Laplace Transform to \
examine the right boundary condition and determine B[1][s].*)
q1[x_, t_] := (
A1 Sqrt[Dc] Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Dc E^((lh Sqrt[s + \[Theta]c])/Sqrt[Dc] - (
x Sqrt[s + \[Theta]c])/Sqrt[Dc]))/(
Sqrt[s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (
A1 Dc Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (-A2 +
pinf (s + \[Theta]c))/(s (s + \[Theta]c)) - (
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1 (-A2 + pinf (s + \[Theta]c)))/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) +
E^((Sqrt[s] x)/Sqrt[Dp]) B[1][s] + (
E^((2 lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) (Sqrt[s] - Sqrt[Dp] \[Alpha]1) B[1][s])/(
Sqrt[s] + Sqrt[Dp] \[Alpha]1)
(*Use the right boundary condition eq15 to determine B[1][s], \
B[1][s]=0.*)
eq19 = D[q1[x, t], x]
(A1 Sqrt[Dc]
E^((lh Sqrt[s + \[Theta]c])/Sqrt[Dc] - (x Sqrt[s + \[Theta]c])/
Sqrt[Dc]))/(-Dc s + Dp (s + \[Theta]c)) - (
A1 Sqrt[Dc] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[Dp]) Sqrt[
s])/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Dc E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[Dp]) Sqrt[
s] \[Alpha]1)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1 (-A2 + pinf (s + \[Theta]c)))/(
Sqrt[s] (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c)) + (
E^((Sqrt[s] x)/Sqrt[Dp]) Sqrt[s] B[1][s])/Sqrt[Dp] - (
E^((2 lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[Dp]) Sqrt[
s] (Sqrt[s] - Sqrt[Dp] \[Alpha]1) B[1][s])/(
Sqrt[Dp] (Sqrt[s] + Sqrt[Dp] \[Alpha]1))
(*The particular solution.*)
eq30 = eq18 /. B[1][s] -> 0
(A1 Sqrt[Dc] Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) - (
A1 Dc E^((lh Sqrt[s + \[Theta]c])/Sqrt[Dc] - (x Sqrt[s + \[Theta]c])/
Sqrt[Dc]))/(Sqrt[s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (
A1 Dc Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (-A2 +
pinf (s + \[Theta]c))/(s (s + \[Theta]c)) - (
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1 (-A2 + pinf (s + \[Theta]c)))/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c))
(*Use the inverse Laplace transform on eq30 to get the solution of \
eq1-eq4. There are 5 components in eq30; apply the inverse Laplace \
transform in 3 steps: eq30[[4]], eq30[[2]], and \
eq30[[1]]+eq30[[3]]+eq30[[5]].*)
(*The first step. Inverse Laplace transform of eq30[[4]].*)
eq31 = eq30[[4]] // Simplify // Normal
(-A2 + pinf (s + \[Theta]c))/(s (s + \[Theta]c))
(*The first part of solution.*)
eq32 = InverseLaplaceTransform[eq31, s, t] // Simplify // Normal
pinf + (A2 (-1 + E^(-t \[Theta]c)))/\[Theta]c
(*The second step. Inverse Laplace transform of eq30[[2]]: split \
eq30[[2]] into 2 factors, take the inverse Laplace transform of each \
respectively, and then convolve them.*)
eq33 = FullSimplify[eq30[[2]], Dc > 0 && (s + \[Theta]c) > 0]
(A1 Dc E^((lh - x) Sqrt[(s + \[Theta]c)/Dc]))/(Sqrt[
s + \[Theta]c] (Dc s - Dp (s + \[Theta]c)))
(*The first factor.*)
eq34 = (A1 Dc E^((lh - x) Sqrt[(s + \[Theta]c)/Dc]))/Sqrt[
s + \[Theta]c]
(A1 Dc E^((lh - x) Sqrt[(s + \[Theta]c)/Dc]))/Sqrt[s + \[Theta]c]
(*The second factor.*)
eq35 = FullSimplify[eq33/eq34, Dc > 0 && (s + \[Theta]c) > 0]
1/(Dc s - Dp (s + \[Theta]c))
eq36 = eq34 /. {s -> s1 - \[Theta]c}
(A1 Dc E^(Sqrt[s1/Dc] (lh - x)))/Sqrt[s1]
eq37 = E^(-\[Theta]c*t) InverseLaplaceTransform[eq36, s1, t] //
Simplify // Normal
(A1 E^(-((lh - x)^2/(4 Dc t)) - t \[Theta]c) Sqrt[Dc/(
t (lh - x)^2)] (-lh + x))/(Sqrt[1/Dc] Sqrt[\[Pi]])
eq38 = FullSimplify[eq37, (lh - x) < 0 && Dc > 0 && t > 0]
(A1 Dc E^(-((lh - x)^2/(4 Dc t)) - t \[Theta]c))/(Sqrt[\[Pi]] Sqrt[t])
eq39 = InverseLaplaceTransform[eq35, s, t] // Simplify // Normal
E^((Dp t \[Theta]c)/(Dc - Dp))/(Dc - Dp)
eq40 = (eq39 /. {t -> (T - t)})*eq38
(A1 Dc E^(-((lh - x)^2/(4 Dc t)) - t \[Theta]c + (
Dp (-t + T) \[Theta]c)/(Dc - Dp)))/((Dc - Dp) Sqrt[\[Pi]] Sqrt[t])
(*Parameters' substitution.*)
eq41 = eq40 /. {A1 -> Sqrt[Dc] F1 \[Lambda]} /. {(Dp*\[Theta]c)/(
Dc - Dp) -> A3} /. {-((lh - x)^2/(4 Dc t)) -> -(A10^2/t)} /. {(
Dc^(3/2) F1 \[Lambda])/((Dc - Dp) Sqrt[\[Pi]] ) -> A11}
(A11 E^(-(A10^2/t) + A3 (-t + T) - t \[Theta]c))/Sqrt[t]
(*The second part of solution with the form of convolution.*)
eq42 = Integrate[eq41, {t, 0, T}] // Simplify // Normal
\!\(
\*SubsuperscriptBox[\(\[Integral]\), \(0\), \(T\)]\(
\*FractionBox[\(A11\
\*SuperscriptBox[\(E\), \(\(-
\*FractionBox[
SuperscriptBox[\(A10\), \(2\)], \(t\)]\) + A3\ \((\(-t\) + T)\) -
t\ \[Theta]c\)]\),
SqrtBox[\(t\)]] \[DifferentialD]t\)\)
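The convolution identity relied on in this step can be checked on a simple standalone pair (this is only an illustration, not part of the derivation): the inverse transform of a product equals the convolution of the individual inverse transforms.

```Mathematica
(* ILT of 1/(s(s+1)) directly, and via convolution of ILT[1/s] and ILT[1/(s+1)] *)
f = InverseLaplaceTransform[1/s, s, t];        (* 1 *)
g = InverseLaplaceTransform[1/(s + 1), s, t];  (* E^-t *)
conv = Integrate[(f /. t -> t - u)*(g /. t -> u), {u, 0, t}];
direct = InverseLaplaceTransform[1/(s (s + 1)), s, t];
Simplify[conv - direct]  (* 0 *)
```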
(*The third step. Inverse Laplace transform of \
eq30[[1]]+eq30[[3]]+eq30[[5]]: split it into 2 factors, take the \
inverse Laplace transform of each respectively, and then convolve them.*)
eq50 = eq30[[1]] + eq30[[3]] + eq30[[5]]
(A1 Sqrt[Dc] Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]))/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) (-Dc s +
Dp (s + \[Theta]c))) + (
A1 Dc Sqrt[Dp]
E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1)/((Sqrt[s] + Sqrt[Dp] \[Alpha]1) Sqrt[
s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) - (
Sqrt[Dp] E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]) \[Alpha]1 (-A2 + pinf (s + \[Theta]c)))/(
s (Sqrt[s] + Sqrt[Dp] \[Alpha]1) (s + \[Theta]c))
(*The first factor of eq30[[1]]+eq30[[3]]+ eq30[[5]].*)
eq51 = (E^((lh Sqrt[s])/Sqrt[Dp] - (Sqrt[s] x)/Sqrt[
Dp]))/(Sqrt[s] + Sqrt[Dp] \[Alpha]1) // Simplify // Normal
E^((Sqrt[s] (lh - x))/Sqrt[Dp])/(Sqrt[s] + Sqrt[Dp] \[Alpha]1)
(*eq51 can't be inverse Laplace transformed in Mathematica, but Maple \
can do it.*)
eq52 = InverseLaplaceTransform[eq51, s, t] // Simplify // Normal
InverseLaplaceTransform[E^((Sqrt[s] (lh - x))/Sqrt[Dp])/(
Sqrt[s] + Sqrt[Dp] \[Alpha]1), s, t]
(*The inverse Laplace Transform of eq51 in Maple.*)
eq53 = (E^(-((lh - x)^2/(4 *Dp *t)))) / Sqrt[\[Pi]*t] -
Sqrt[Dp]* \[Alpha]1*
Erfc[(-lh + x)/(2 Sqrt[Dp *t]) + Sqrt[Dp *t]*\[Alpha]1]*
E^(\[Alpha]1*(Dp*\[Alpha]1*t - lh + x))
E^(-((lh - x)^2/(4 Dp t)))/(Sqrt[\[Pi]] Sqrt[t]) -
Sqrt[Dp] E^(\[Alpha]1 (-lh + x +
Dp t \[Alpha]1)) \[Alpha]1 Erfc[(-lh + x)/(2 Sqrt[Dp t]) +
Sqrt[Dp t] \[Alpha]1]
(*Parameters' substitution.*)
eq54 = eq53 /. {(-lh + x) -> 2*A12*Sqrt[Dp], (-lh + x)/(
2 Sqrt[Dpt ]) -> A12/Sqrt[
t], -((lh - x)^2/(4 Dp t)) -> -(A12^2/t)}
E^(-(A12^2/t))/(Sqrt[\[Pi]] Sqrt[t]) -
Sqrt[Dp] E^(\[Alpha]1 (2 A12 Sqrt[Dp] +
Dp t \[Alpha]1)) \[Alpha]1 Erfc[(A12 Sqrt[Dp])/Sqrt[Dp t] +
Sqrt[Dp t] \[Alpha]1]
(*The second factor of eq30[[1]]+eq30[[3]]+ eq30[[5]].*)
eq55 = FullSimplify[eq50[[1]]/eq51] + FullSimplify[eq50[[2]]/eq51] +
FullSimplify[eq50[[3]]/eq51]
(A1 Sqrt[Dc] Sqrt[Dp])/(-Dc s + Dp (s + \[Theta]c)) + (
A1 Dc Sqrt[Dp] \[Alpha]1)/(
Sqrt[s + \[Theta]c] (-Dc s + Dp (s + \[Theta]c))) + (
Sqrt[Dp] \[Alpha]1 (A2 - pinf (s + \[Theta]c)))/(s (s + \[Theta]c))
(*The first part of eq55.*)
eq56 = InverseLaplaceTransform[eq55[[1]], s, t] // Simplify // Normal
-((A1 Sqrt[Dc] Sqrt[Dp] E^((Dp t \[Theta]c)/(Dc - Dp)))/(Dc - Dp))
(*The second part of eq55.*)
eq57 = eq55[[2]] /. {s -> (s1 - \[Theta]c)}
(A1 Dc Sqrt[Dp] \[Alpha]1)/(Sqrt[s1] (Dp s1 - Dc (s1 - \[Theta]c)))
eq58 = E^(-\[Theta]c*t)*InverseLaplaceTransform[eq57, s1, t] //
Simplify // Normal
-((A1 Sqrt[Dp] E^((Dp t \[Theta]c)/(Dc - Dp)) \[Alpha]1 Sqrt[(
Dc t \[Theta]c)/(Dc - Dp)] Erf[Sqrt[(Dc t \[Theta]c)/(Dc - Dp)]])/(
Sqrt[t] \[Theta]c))
(*The third part of eq55.*)
eq59 = InverseLaplaceTransform[eq55[[3]], s, t] // Simplify // Normal
(Sqrt[Dp] \[Alpha]1 (A2 - A2 E^(-t \[Theta]c) -
pinf \[Theta]c))/\[Theta]c
(*The Inverse Laplace Transform of eq55.*)
eq60 = eq56 + eq58 + eq59 // Simplify // Normal
Sqrt[Dp] (-((A1 Sqrt[Dc] E^((Dp t \[Theta]c)/(Dc - Dp)))/(
Dc - Dp)) + (\[Alpha]1 (A2 - A2 E^(-t \[Theta]c) -
pinf \[Theta]c))/\[Theta]c - (
A1 E^((Dp t \[Theta]c)/(Dc - Dp)) \[Alpha]1 Sqrt[(Dc t \[Theta]c)/(
Dc - Dp)] Erf[Sqrt[(Dc t \[Theta]c)/(Dc - Dp)]])/(
Sqrt[t] \[Theta]c))
(*Parameters' substitution.*)
eq61 = Expand[
eq60 /. {A1 -> Sqrt[Dc] F1 \[Lambda],
A2 -> cinf \[Theta]c \[Lambda],
Sqrt[(Dc t \[Theta]c)/(Dc - Dp)] ->
Sqrt[(Dc \[Theta]c)/(Dc - Dp)] Sqrt[t]}]
-Sqrt[Dp] pinf \[Alpha]1 - (
Dc Sqrt[Dp] E^((Dp t \[Theta]c)/(Dc - Dp)) F1 \[Lambda])/(Dc - Dp) +
cinf Sqrt[Dp] \[Alpha]1 \[Lambda] -
cinf Sqrt[Dp] E^(-t \[Theta]c) \[Alpha]1 \[Lambda] - (
Sqrt[Dc] Sqrt[Dp] E^((Dp t \[Theta]c)/(Dc - Dp)) F1 \[Alpha]1 Sqrt[(
Dc \[Theta]c)/(
Dc - Dp)] \[Lambda] Erf[
Sqrt[t] Sqrt[(Dc \[Theta]c)/(Dc - Dp)]])/\[Theta]c
(*Parameters' substitution.*)
eq62 = eq61 /. {(Dp*\[Theta]c)/(Dc - Dp) ->
A3, (1 + Dc/(Dc - Dp)) \[Theta]c -> A4, (Dc*\[Theta]c)/(
Dc - Dp) -> A5} /. {(Dc F1 \[Lambda])/(Dc - Dp) -> A6, (
Sqrt[Dc] F1 \[Alpha]1 \[Lambda] )/ \[Theta]c -> A7}
-A6 Sqrt[Dp] E^(A3 t) - Sqrt[Dp] pinf \[Alpha]1 +
cinf Sqrt[Dp] \[Alpha]1 \[Lambda] -
cinf Sqrt[Dp] E^(-t \[Theta]c) \[Alpha]1 \[Lambda] -
Sqrt[A5] A7 Sqrt[Dp] E^(A3 t) Erf[Sqrt[A5] Sqrt[t]]
(*Parameters' substitution.*)
eq63 = eq62 /. {Sqrt[Dp] pinf \[Alpha]1 -> A8,
cinf Sqrt[Dp] \[Alpha]1 \[Lambda] -> A9}
-A8 + A9 - A6 Sqrt[Dp] E^(A3 t) - A9 E^(-t \[Theta]c) -
Sqrt[A5] A7 Sqrt[Dp] E^(A3 t) Erf[Sqrt[A5] Sqrt[t]]
eq64 = eq54*(eq63 /. {t -> (T - t)}) // Simplify // Normal
(-A8 + A9 - A6 Sqrt[Dp] E^(A3 (-t + T)) - A9 E^((t - T) \[Theta]c) -
Sqrt[A5] A7 Sqrt[Dp] E^(A3 (-t + T))
Erf[Sqrt[A5] Sqrt[-t + T]]) (E^(-(A12^2/t))/(
Sqrt[\[Pi]] Sqrt[t]) -
Sqrt[Dp] E^(\[Alpha]1 (2 A12 Sqrt[Dp] +
Dp t \[Alpha]1)) \[Alpha]1 Erfc[(
A12 Sqrt[Dp] + Dp t \[Alpha]1)/Sqrt[Dp t]])
(*The third part of solution with the form of convolution.*)
eq65 = Integrate[eq64, {t, 0, T}] // Simplify // Normal
\!\(
\*SubsuperscriptBox[\(\[Integral]\), \(0\), \(T\)]\(\(\((\(-A8\) +
A9 - A6\
\*SqrtBox[\(Dp\)]\
\*SuperscriptBox[\(E\), \(A3\ \((\(-t\) + T)\)\)] - A9\
\*SuperscriptBox[\(E\), \(\((t - T)\)\ \[Theta]c\)] -
\*SqrtBox[\(A5\)]\ A7\
\*SqrtBox[\(Dp\)]\
\*SuperscriptBox[\(E\), \(A3\ \((\(-t\) + T)\)\)]\ Erf[
\*SqrtBox[\(A5\)]\
\*SqrtBox[\(\(-t\) + T\)]])\)\ \((
\*FractionBox[
SuperscriptBox[\(E\), \(-
\*FractionBox[
SuperscriptBox[\(A12\), \(2\)], \(t\)]\)], \(
\*SqrtBox[\(\[Pi]\)]\
\*SqrtBox[\(t\)]\)] -
\*SqrtBox[\(Dp\)]\
\*SuperscriptBox[\(E\), \(\[Alpha]1\ \((2\ A12\
\*SqrtBox[\(Dp\)] + Dp\ t\ \[Alpha]1)\)\)]\ \[Alpha]1\ Erfc[
\*FractionBox[\(A12\
\*SqrtBox[\(Dp\)] + Dp\ t\ \[Alpha]1\),
SqrtBox[\(Dp\ t\)]]])\)\) \[DifferentialD]t\)\)
(*The solution of eq1-eq4.*)
eq66 = (eq32 + eq42 + eq65) // Simplify // Normal
pinf + (A2 (-1 + E^(-t \[Theta]c)))/\[Theta]c + \!\(
\*SubsuperscriptBox[\(\[Integral]\), \(0\), \(T\)]\(
\*FractionBox[\(A11\
\*SuperscriptBox[\(E\), \(\(-
\*FractionBox[
SuperscriptBox[\(A10\), \(2\)], \(t\)]\) + A3\ \((\(-t\) + T)\) -
t\ \[Theta]c\)]\),
SqrtBox[\(t\)]] \[DifferentialD]t\)\) + \!\(
\*SubsuperscriptBox[\(\[Integral]\), \(0\), \(T\)]\(\(\((\(-A8\) +
A9 - A6\
\*SqrtBox[\(Dp\)]\
\*SuperscriptBox[\(E\), \(A3\ \((\(-t\) + T)\)\)] - A9\
\*SuperscriptBox[\(E\), \(\((t - T)\)\ \[Theta]c\)] -
\*SqrtBox[\(A5\)]\ A7\
\*SqrtBox[\(Dp\)]\
\*SuperscriptBox[\(E\), \(A3\ \((\(-t\) + T)\)\)]\ Erf[
\*SqrtBox[\(A5\)]\
\*SqrtBox[\(\(-t\) + T\)]])\)\ \((
\*FractionBox[
SuperscriptBox[\(E\), \(-
\*FractionBox[
SuperscriptBox[\(A12\), \(2\)], \(t\)]\)], \(
\*SqrtBox[\(\[Pi]\)]\
\*SqrtBox[\(t\)]\)] -
\*SqrtBox[\(Dp\)]\
\*SuperscriptBox[\(E\), \(\[Alpha]1\ \((2\ A12\
\*SqrtBox[\(Dp\)] + Dp\ t\ \[Alpha]1)\)\)]\ \[Alpha]1\ Erfc[
\*FractionBox[\(A12\
\*SqrtBox[\(Dp\)] + Dp\ t\ \[Alpha]1\),
SqrtBox[\(Dp\ t\)]]])\)\) \[DifferentialD]t\)\)
![enter image description here][1]
![enter image description here][2]
![enter image description here][3]
![enter image description here][4]
![enter image description here][5]
![enter image description here][6]
![enter image description here][7]
![enter image description here][8]
![enter image description here][9]
![enter image description here][10]
![enter image description here][11]
![enter image description here][12]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14_Page_1.jpg&userId=586844
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14_Page_2.jpg&userId=586844
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14_Page_3.jpg&userId=586844
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14_Page_4.jpg&userId=586844
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14_Page_5.jpg&userId=586844
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14_Page_6.jpg&userId=586844
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14_Page_7.jpg&userId=586844
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14_Page_8.jpg&userId=586844
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14_Page_9.jpg&userId=586844
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14-Maple_Page_1.jpg&userId=586844
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=analyticalsolution-20170610-14-Maple_Page_2.jpg&userId=586844
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=14-2.jpg&userId=586844
Zhonghui Ou 2017-08-14T04:43:55Z
[✓] Avoid XLSX import failure on first row in small file?
http://community.wolfram.com/groups/-/m/t/1163097
Hello,
I have an Excel workbook with one sheet containing 86370 rows and 28 columns of numeric cells. File size is ~ 27MB.
I cannot successfully import this file, even if I ask for just the first row. The Java heap always explodes, regardless of JVM heap parameters -Xmx and -Xms (or where they're specified).
For example,
Import["file.xlsx", {"Data", 1, 1}]
fails.
It seems that WRI does not take the Excel file format seriously.
My workaround saves to and imports from binary. But since I'm often interacting with internal/external customers via Excel, I'd strongly prefer not to introduce additional data products, to make the workflow simpler, more transparent, and less error-prone.
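One workaround worth trying first (a sketch, not a guaranteed fix: it assumes the XLSX importer runs through the bundled Java runtime, and the heap size here is illustrative) is to restart Java with a larger heap via J/Link before importing:

```Mathematica
(* Restart the Java runtime with a larger maximum heap, then import *)
Needs["JLink`"];
ReinstallJava[JVMArguments -> "-Xmx8g"]; (* illustrative heap size *)
firstRow = Import["file.xlsx", {"Data", 1, 1}]
```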
Here's hoping that XLSX import/export gets some TLC from WRI in the near future, with full/efficient support for the format's maximum specs (~1M rows, ~16K columns, unlimited sheets as of Excel 2016).
Am I missing something?
Vince
Vincent Virgilio 2017-08-11T17:44:08Z
[✓] Cell tag containing a comma?
http://community.wolfram.com/groups/-/m/t/1163251
Is there a way to define a cell tag which contains a comma, so that the comma remains part of the cell tag, and not a separator which divides the tag into two different tags?
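One possible workaround (a sketch: it assumes the comma splitting happens only in the Add/Remove Cell Tags dialog, so setting the option programmatically bypasses that parsing) is to select the cell's bracket and assign the tag directly:

```Mathematica
(* With the target cell's bracket selected in the front end: *)
SetOptions[NotebookSelection[InputNotebook[]],
 CellTags -> {"(Wolfram, 2017)"}] (* the comma stays inside one tag *)
```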
The reason for the question is that in an article I want to make references to the items in the bibliography/list of references as hyperlinks which point to the corresponding items in the list of references. One can get the hyperlink into the text by clicking on the tag name. Now my references are in Harvard style, so the citation in the text should be something like "(Wolfram, 2017)". However, if I specify the cell tag as "(Wolfram, 2017)", then it becomes two tags, one "(Wolfram" and the other "2017)". Thus, to get the citation into the text as a hyperlink, I need to create two hyperlinks and connect them with a comma. I tried to embed the cell tag with the comma between quotation marks, between curly brackets, to precede the comma with a backslash, etc., but nothing helped. Is there a way to get around this?
Imre Pázsit 2017-08-12T00:53:33Z
Deploy a dynamic notebook to the Wolfram Cloud?
http://community.wolfram.com/groups/-/m/t/1160191
How can I deploy a dynamic notebook (an .nb file) to the cloud so that the user can neither modify it nor see the code?
I did the following:
1.- upload the nb file to the cloud
2.- deploy it as a dynamic object
3.- double-clicked the resulting URL
As a result, I got an editable website.
Jorge Manrique 2017-08-08T01:47:10Z
[✓] Use DynamicModule for better presentation?
http://community.wolfram.com/groups/-/m/t/1163875
Hi
I have programs for solving equations by FDM & FEM, and I need a DynamicModule that shows both the FDM & FEM results so I can see the difference between them. I want to present them with a DynamicModule for a better, more professional presentation. Can you help me find the errors in the program I wrote with DynamicModule?
Thank You
"FDM Code"
ClearAll["Global`*"];
n = Input["n"];
m = n - 1; \[Alpha] = -4 + h^2; \[Beta] = 6 - 2 h^2 + h^4; h = 1/n;
Do[Do[a[i, j] = 0, {i, 1, m}], {j, 1, m}];
Do[b[j, 1] = 0, {j, 1, m}]; b[m, 1] = -9 - 3 \[Alpha] - 5 h^3;
b[1, 1] = 3 + 2 h - h^2; b[2, 1] = -1;
b[m - 1, 1] = -3; If[n == 4, b[2, 1] = -4, b[2, 1] = -1];
Do[a[j, j] = \[Beta], {j, 1, m - 1}];
Do[a[j, j] = \[Beta] - 3, {j, m, m}];
Do[a[j, j + 1] = \[Alpha], {j, 1, m - 1}];
Do[a[j + 1, j] = \[Alpha], {j, 1, m - 1}];
Do[a[j + 1, j] = \[Alpha] + 1, {j, m - 1, m}];
Do[a[j, j + 2] = 1., {j, 1, m - 2}];
Do[a[j + 2, j] = 1., {j, 1, m - 2}];
A = Array[a, {m, m}];
B = Array[b, {m, 1}];
Y = Flatten[LinearSolve[A, B]];
ListPlot[{Y}, Joined -> True, PlotMarkers -> Automatic]
"FEM Code"
sol = DSolve[{u''''[x] + u''[x] + u[x] == 0, u[0] == 1, u'[0] == 2,
   u[1] == 3, u'''[1] == 5}, u[x], x];
(*Plot[u[x]/.First[sol],{x,1,2}]*)
(*Assign through sol rather than %: a line ending in ";" leaves % as Null.*)
u[x_] = u[x] /. First[sol];
{u[0.25], u[0.5], u[0.75]}
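A common structural error with DynamicModule, relevant to the code below: it expects the local-variable list as its first argument and a single body as its second, and anything that must recompute when a control changes has to sit inside Dynamic. A minimal sketch of that interface, assuming the FDM solver has been packaged as a hypothetical function fdmSolve[n] returning the Y vector:

```Mathematica
(* Variable list first, one body second; the plot re-evaluates when n changes *)
DynamicModule[{n = 4},
 Panel[Column[{
    SetterBar[Dynamic[n], Range[4, 20]],
    Dynamic[ListPlot[fdmSolve[n], Joined -> True,
      PlotMarkers -> Automatic]]
    }]]]
```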
"DynamicModule Code "
DynamicModule[
m = n - 1; \[Alpha] = -4 + h^2; \[Beta] = 6 - 2 h^2 + h^4; h = 1/n;
Do[Do[a[i, j] = 0, {i, 1, m}], {j, 1, m}];
Do[b[j, 1] = 0, {j, 1, m}]; b[m, 1] = -9 - 3 \[Alpha] - 5 h^3;
b[1, 1] = 3 + 2 h - h^2; b[2, 1] = -1; b[m - 1, 1] = -3;
If[n == 4, b[2, 1] = -4, b[2, 1] = -1];
Do[a[j, j] = \[Beta], {j, 1, m - 1}];
Do[a[j, j] = \[Beta] - 3, {j, m, m}];
Do[a[j, j + 1] = \[Alpha], {j, 1, m - 1}];
Do[a[j + 1, j] = \[Alpha], {j, 1, m - 1}];
Do[a[j + 1, j] = \[Alpha] + 1, {j, m - 1, m}];
Do[a[j, j + 2] = 1., {j, 1, m - 2}];
Do[a[j + 2, j] = 1., {j, 1, m - 2}];
A = Array[a, {m, m}];
B = Array[b, {m, 1}];
Y = Flatten[LinearSolve[A, B]]; {z = 0.00001}, {Panel[
SetterBar[Dynamic[n], Range[4, 20]],
Dynamic[ListPlot[{Y}, Joined -> True, PlotMarkers -> Automatic]]]}]
Saleh Baradaran 2017-08-12T13:59:25Z
[CALL] Common mistakes in using Wolfram Language & Mathematica
http://community.wolfram.com/groups/-/m/t/1070264
[Wolfram Language][1] (WL) is a powerful multi-paradigm programming language. There is a set of common mistakes that repeatedly entrap new users. **This is a call to describe such mistakes, building a "black-listing" guide for novice coders.** Please consider contributing. I suggest following simple rules (with gratitude adapted from a [similar effort][2]):
- One topic per answer
- Focus on non-advanced uses (it is intended to be useful for beginners and as a question closing reference)
- Include a self explanatory title in header style (example: "# Basic built-in function syntax"; see [syntax guide][3] )
- Explain the symptoms, the mechanism behind the scenes, and all possible causes and solutions you can think of. Be sure to include a beginner's-level explanation (and a more advanced one too, if you can)
*Please, use "**Reply**" to a specific comment for structured clarity of nested comments.*
----------
## Table of Contents
- [Basic syntax of built-in functions][4]
- [Learn how to use the Documentation Center effectively][5]
- [Sorting numerical data and the behavior of Sort][6]
- [What does @#(%=<\[!} et cetera mean?][7]
- [Consider Reap/Sow Instead of AppendTo][8]
- [Case sensitivity and typos][9]
[1]: https://www.wolfram.com/language
[2]: https://mathematica.stackexchange.com/q/18393/13
[3]: http://community.wolfram.com/groups/-/m/t/270507
[4]: http://community.wolfram.com/groups/-/m/t/1069885
[5]: http://community.wolfram.com/groups/-/m/t/1070285
[6]: http://community.wolfram.com/groups/-/m/t/1070705
[7]: http://community.wolfram.com/groups/-/m/t/1070946
[8]: http://community.wolfram.com/groups/-/m/t/1084289
[9]: http://community.wolfram.com/groups/-/m/t/1084920
Vitaliy Kaurov 2017-04-23T23:54:23Z
[✓] Meaning of & in output - derivative?
http://community.wolfram.com/groups/-/m/t/1164220
Hello everyone. What is the meaning of & in the output of the following code?
g[x_] := x^9
Derivative[1][g]
9 #1^8 &
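The `&` marks a pure (anonymous) function: `#1` is its first argument slot, so `9 #1^8 &` is the function x ↦ 9 x^8. A quick standalone check:

```Mathematica
f = 9 #1^8 &;            (* the same pure function Derivative returned *)
f[2]                     (* 9*2^8 = 2304 *)
Derivative[1][#^9 &][2]  (* applying the derivative directly: also 2304 *)
```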
Thank you for your help.
Gennaro Arguzzi 2017-08-13T10:00:22Z
Try to beat these MRB constant records!
http://community.wolfram.com/groups/-/m/t/366628
POSTED BY: Marvin Ray Burns.
I think this important point got buried near the end.
When it comes to the passion some of us have for calculating many digits, and the dislike a few others have for it, it simply tells us that the human mind is multifaceted, giving one person a passion for one task and another person a passion for a different task!
The MRB constant is defined below. See http://mathworld.wolfram.com/MRBConstant.html
> ![enter image description here][2] ![enter image description here][3]
Here are some record computations. If you know of any others, let me know.
1. On or about Dec 31, 1998 I computed 1 digit of the (additive inverse of the) MRB constant with my TI-92s, by adding 1-sqrt(2)+3^(1/3)-4^(1/4) as far as I could. That first digit, by the way, is just 0.
2. On Jan 11, 1999 I computed 3 digits of the MRB constant with the Inverse Symbolic Calculator.
3. In Jan of 1999 I computed 4 correct digits of the MRB constant using Mathcad 3.1 on a 50 MHz IBM 80486 personal computer running Windows 95.
4. Shortly afterwards I computed 9 correct digits of the MRB constant using Mathcad 7 Professional on the Pentium II mentioned below.
5. On Jan 23, 1999 I computed 500 digits of the MRB constant with the online tool called Sigma.
6. In September of 1999, I computed the first 5,000 digits of the MRB Constant on a 350 MHz Pentium II with 64 Mb of ram using the simple PARI commands \p 5000;sumalt(n=1,((-1)^n*(n^(1/n)-1))), after allocating enough memory.
7. On June 10-11, 2003, over a period of 10 hours on a 450 MHz P3 with 512 MB of available RAM, I computed 6,995 accurate digits of the MRB constant.
8. Using a Sony Vaio P4 2.66 GHz laptop computer with 960 MB of available RAM, at 2:04 PM on 3/25/2004 I finished computing 8,000 digits of the MRB constant.
9. On March 01, 2006, with a 3 GHz PD with 2 GB of RAM available, I computed the first 11,000 digits of the MRB constant.
10. On Nov 24, 2006 I computed 40,000 digits of the MRB constant in 33 hours and 26 minutes via my own program written in Mathematica 5.2. The computation was run on a 32-bit Windows 3 GHz PD desktop computer using 3.25 GB of RAM.
11. Finishing on July 29, 2007 at 11:57 PM EST, I computed 60,000 digits of the MRB constant. Computed in 50.51 hours on a 2.6 GHz AMD Athlon with 64-bit Windows XP. Max memory used was 4.0 GB of RAM.
12. Finishing on Aug 3, 2007 at 12:40 AM EST, I computed 65,000 digits of the MRB constant. Computed in only 50.50 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 5.0 GB of RAM.
13. Finishing on Aug 12, 2007 at 8:00 PM EST, I computed 100,000 digits of the MRB constant. They were computed in 170 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 11.3 GB of RAM. Median (typical) daily record of memory used was 8.5 GB of RAM.
14. Finishing on Sep 23, 2007 at 11:00 AM EST, I computed 150,000 digits of the MRB constant. They were computed in 330 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 22 GB of RAM. Median (typical) daily record of memory used was 17 GB of RAM.
15. Finishing on March 16, 2008 at 3:00 PM EST, I computed 200,000 digits of the MRB constant using Mathematica 5.2. They were computed in 845 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 47 GB of RAM. Median (typical) daily record of memory used was 28 GB of RAM.
16. Washed away by Hurricane Ike: on September 13, 2008, sometime between 2:00 PM and 8:00 PM EST, an almost complete computation of 300,000 digits of the MRB constant was destroyed. It had run for a long 4,015 hours (23.899 weeks, or 1.4454*10^7 seconds) on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 91 GB of RAM. The Mathematica 6.0 code used follows:
Block[{$MaxExtraPrecision = 300000 + 8, a, b = -1, c = -1 - d,
d = (3 + Sqrt[8])^n, n = 131 Ceiling[300000/100], s = 0}, a[0] = 1;
d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++];
For[k = 0, k < n, c = b - c;
b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
N[1/2 - s/d, 300000]]
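The Block above is an exact-arithmetic version of the Cohen-Rodriguez Villegas-Zagier acceleration for alternating series (the (3 + Sqrt[8])^n weight is its signature). A small standalone sketch of the same scheme on a series with a known limit, Sum[(-1)^k/(k+1)] = Log[2]:

```Mathematica
(* CVZ acceleration: s/d converges to Sum[(-1)^k a[k], {k, 0, Infinity}] *)
n = 20; d = (3 + Sqrt[8])^n; d = (d + 1/d)/2;
{b, c, s} = {-1, -d, 0};
Do[c = b - c;
  s = s + c/(k + 1);  (* here a[k] = 1/(k+1) *)
  b = b (k + n) (k - n)/((k + 1/2) (k + 1)),
  {k, 0, n - 1}];
N[s/d - Log[2], 10]  (* should be vanishingly small for n = 20 *)
```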
17. On September 18, 2008 a computation of 225,000 digits of the MRB constant was started with a 2.66 GHz Core2Duo using 64-bit Windows XP. It was completed in 1,072 hours. Memory usage is recorded in the attachment pt 225000.xls, near the bottom of this post.
18. 250,000 digits was attempted but failed to complete due to a serious internal error which restarted the machine. The error occurred sometime on December 24, 2008 between 9:00 AM and 9:00 PM. The computation began on November 16, 2008 at 10:03 PM EST. Like the 300,000-digit computation, this one was almost complete when it failed. The max memory used was 60.5 GB.
19. On Jan 29, 2009, 1:26:19 pm (UTC-0500) EST, I finished computing 250,000 digits of the MRB constant with a multiple-step Mathematica command running on a dedicated 64-bit XP machine using 4 GB DDR2 RAM on board and 36 GB virtual. The computation took only 333.102 hours. The digits are at http://marvinrayburns.com/250KMRB.txt . The computation is completely documented in the attached 250000.pd at the bottom of this post.
20. On Sun 28 Mar 2010 21:44:50 (UTC-0500) EST, I started a computation of 300,000 digits of the MRB constant using an i7 with 8.0 GB of DDR3 RAM on board, but it failed due to hardware problems.
21. I computed 299,998 digits of the MRB constant. The computation began Fri 13 Aug 2010 10:16:20 pm EDT and ended 2.23199*10^6 seconds later, on Wednesday, September 8, 2010. I used Mathematica 6.0 for Microsoft Windows (64-bit) (June 19, 2007). That is an average of 7.44 seconds per digit. I used my Dell Studio XPS 8100 i7 860 @ 2.80 GHz with 8 GB physical DDR3 RAM. Windows 7 reserved an additional 48.929 GB of virtual RAM.
22. I computed exactly 300,000 digits to the right of the decimal point of the MRB constant from Sat 8 Oct 2011 23:50:40 to Sat 5 Nov 2011 19:53:42 (2.405*10^6 seconds later). This run was 0.5766 seconds per digit slower than the 299,998-digit computation even though it used 16 GB physical DDR3 RAM on the same machine. The working precision and accuracy goal combination were maximized for exactly 300,000 digits, and the result was automatically saved as a file instead of just being displayed on the front end. Windows reserved a total of 63 GB of working memory, of which 52 GB were recorded being used. The 300,000 digits came from the Mathematica 7.0 command
Quit; DateString[]
digits = 300000; str = OpenWrite[]; SetOptions[str,
PageWidth -> 1000]; time = SessionTime[]; Write[str,
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> digits + 3, AccuracyGoal -> digits,
Method -> "AlternatingSigns"]]; timeused =
SessionTime[] - time; here = Close[str]
DateString[]
23. 314,159 digits of the constant took 3 tries due to hardware failure. Finishing on September 18, 2012, I computed 314,159 digits, using 59 GB of RAM. The digits came from the Mathematica 8.0.4 code
DateString[]
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> 314169, Method -> "AlternatingSigns"] // Timing
DateString[]
Where I have 10 digits to round off. (The command NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> big number, Method -> "AlternatingSigns"] tends to give about 3 digits of error to the right.)
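A standalone way to estimate how many trailing digits to distrust is to run the same NSum at two working precisions and compare; the digits on which the two results agree are the trustworthy ones (a sketch; the precision offsets and digit count here are illustrative):

```Mathematica
(* Compare the same sum at two working precisions *)
digits = 50;
lo = NSum[(-1)^n*(n^(1/n) - 1), {n, Infinity},
   WorkingPrecision -> digits + 3, Method -> "AlternatingSigns"];
hi = NSum[(-1)^n*(n^(1/n) - 1), {n, Infinity},
   WorkingPrecision -> digits + 23, Method -> "AlternatingSigns"];
Round[-Log[10, Abs[lo - hi]]]  (* approximate count of matching digits *)
```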
**The following records are due to the work of Richard Crandall, found [here][4].**
24. Sam Noble of Apple computed 1,000,000 digits of the MRB constant in 18 days 9 hours 11 minutes 34.253417 seconds.
25. Finishing on Dec 11, 2012, Richard Crandall, an Apple scientist, computed 1,048,576 digits in a lightning-fast 76.4 hours. That's on a 2.93 GHz 8-core Nehalem.
26. I computed a little over 1,200,000 digits of the MRB constant in 11 days, 21 hours, 17 minutes, and 41 seconds (finishing on March 31, 2013). I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
27. On May 17, 2013 I finished a 2,000,000-or-more digit computation of the MRB constant, using only around 10 GB of RAM. It took 37 days 5 hours 6 minutes 47.1870579 seconds. I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
28. Finally, I would like to announce a new unofficial world record computation of the MRB constant that was finished on Sun 21 Sep 2014 18:35:06. It took 1 month 27 days 2 hours 45 minutes 15 seconds. I computed 3,014,991 digits of the MRB constant with Mathematica 10.0. I used my new version of Richard Crandall's code, below, optimized for my platform and large computations. I also used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of RAM, of which only 16 GB was used. Can you beat it (in number of digits, memory used, or time taken)? This confirms that my previous "2,000,000 or more digit computation" was actually accurate to 2,009,993 digits. (They were used as MRBtest2M.)
(*Fastest (at MRB's end) as of 25 Jul 2014.*)
DateString[]
prec = 3000000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] :=
Module[{a, d, s, k, b, c, n, end, iprec, xvals, x, y, pc, cores = 12,
tsize = 2^7, chunksize, start = 1, ll, ctab,
pr = Floor[1.005 pre]}, chunksize = cores*tsize;
n = Floor[1.32 pr];
end = Ceiling[n/chunksize];
Print["Iterations required: ", n];
Print["end ", end];
Print[end*chunksize]; d = ChebyshevT[n, 3];
{b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
iprec = Ceiling[pr/27];
Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
x = N[E^(Log[ll]/(ll)), iprec];
pc = iprec;
While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^ll - ll;
x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];(**N[Exp[Log[ll]/ll], pr]**)x, {l, 0, tsize - 1}], {j, 0, cores - 1},
Method -> "EvaluationsPerKernel" -> 4]];
ctab = ParallelTable[Table[c = b - c;
ll = start + l - 2;
b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
c, {l, chunksize}], Method -> "EvaluationsPerKernel" -> 2];
s += ctab.(xvals - 1);
start += chunksize;
Print["done iter ", k*chunksize, " ", SessionTime[] - T0];, {k, 0,
end - 1}];
N[-s/d, pr]];
t2 = Timing[MRBtest2 = expM[prec];]; DateString[]
Print[MRBtest2]
MRBtest2 - MRBtest2M
From the computation, t2 was {1.961004112059*10^6, Null}.
Here are a couple of graphs of my record computations in max digits per year:
![enter image description here][5]![enter image description here][6]
[1]: http://mathworld.wolfram.com/MRBConstant.html
[2]: /c/portal/getImageAttachment?filename=m.JPG&userId=366611
[3]: /c/portal/getImageAttachment?filename=m2.JPG&userId=366611
[4]: http://www.marvinrayburns.com/UniversalTOC25.pdf
[5]: /c/portal/getImageAttachment?filename=7559mrbrecord1.JPG&userId=366611
[6]: /c/portal/getImageAttachment?filename=mrbrecord3.JPG&userId=366611Marvin Ray Burns2014-10-09T18:08:49ZTransfer an artistic style to an image
http://community.wolfram.com/groups/-/m/t/1093926
![enter image description here][16]
# Introduction
Back in [Wolfram Summer School 2016][1] I worked on the project "Image Transformation with Neural Networks: Real-Time Style Transfer and Super-Resolution", which was later [published on Wolfram Community][2]. At the time I had to use the MXNetLink package, but now all the needed functionality is built in, so here is a top-level implementation of artistic style transfer in the Wolfram Language. This is a slightly simplified version of the original method, as it uses a single VGG layer to extract the style features, but a full implementation is of course possible with minor modifications to the code. You can also find this example in the docs:
[NetTrain][3] >> Applications >> Computer Vision >> Style Transfer
# Code
Create a new image with the content of a given image and in the style of another given image. This implementation follows the method described in Gatys et al., *A Neural Algorithm of Artistic Style*. An example content and style image:
![enter image description here][4]
To create the image which is a mix of both of these images, start by obtaining a pre-trained image classification network:
vggNet = NetModel["VGG-16 Trained on ImageNet Competition Data"];
Take a subnet that will be used as a feature extractor for the style and content images:
featureNet = Take[vggNet, {1, "relu4_1"}]
![enter image description here][5]
There are three loss functions used. The first loss ensures that the "content" is similar in the synthesized image and the content image:
contentLoss = NetGraph[{MeanSquaredLossLayer[]}, {1 -> NetPort["LossContent"]}]
![enter image description here][6]
The second loss ensures that the "style" is similar in the synthesized image and the style image. Style similarity is defined as the mean-squared difference between the Gram matrices of the input and target:
gramMatrix = NetGraph[{FlattenLayer[-1], TransposeLayer[1 -> 2], DotLayer[]}, {1 -> 3, 1 -> 2 -> 3}];
styleLoss = NetGraph[{gramMatrix, gramMatrix, MeanSquaredLossLayer[]},
{NetPort["Input"] -> 1, NetPort["Target"] -> 2, {1, 2} -> 3, 3 -> NetPort["LossStyle"]}]
![enter image description here][7]
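Concretely (this is the standard definition from Gatys et al., restated here for intuition): if $F$ is the $c \times hw$ matrix of flattened channel activations produced by the FlattenLayer, the Gram matrix is

$$G = F F^{\mathsf{T}}, \qquad G_{ij} = \sum_{k} F_{ik} F_{jk},$$

and the style loss is the mean-squared difference between $G$ computed for the input and for the target, so only channel correlations matter, not where features appear spatially.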
The third loss ensures that the magnitude of intensity changes across adjacent pixels in the synthesized image is small. This helps the synthesized image look more natural:
l2Loss = NetGraph[{ThreadingLayer[(#1 - #2)^2 &], SummationLayer[]}, {{NetPort["Input"], NetPort["Target"]} -> 1 -> 2}];
tvLoss = NetGraph[<|
"dx1" -> PaddingLayer[{{0, 0}, {1, 0}, {0, 0}}, "Padding" -> "Fixed" ],
"dx2" -> PaddingLayer[{{0, 0}, {0, 1}, {0, 0}}, "Padding" -> "Fixed"],
"dy1" -> PaddingLayer[{{0, 0}, {0, 0}, {1, 0}}, "Padding" -> "Fixed" ],
"dy2" -> PaddingLayer[{{0, 0}, {0, 0}, {0, 1}}, "Padding" -> "Fixed"],
"lossx" -> l2Loss, "lossy" -> l2Loss, "tot" -> TotalLayer[]|>,
{{"dx1", "dx2"} -> "lossx", {"dy1", "dy2"} -> "lossy",
{"lossx", "lossy"} -> "tot" -> NetPort["LossTV"]}]
![enter image description here][8]
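In formula form, the net above computes (up to the boundary handling introduced by the "Fixed" padding) a standard squared total-variation penalty on the image $x$:

$$\mathcal{L}_{TV} = \sum_{i,j}\left(x_{i,j+1}-x_{i,j}\right)^2 + \sum_{i,j}\left(x_{i+1,j}-x_{i,j}\right)^2,$$

summed over all channels; the four PaddingLayers produce the one-pixel-shifted copies whose differences appear in the two sums.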
Define a function that creates the final training net for any content and style image. This function also creates a random initial image:
createTransferNet[net_, content_Image, styleFeatSize_] := Module[{dims = Prepend[3]@Reverse@ImageDimensions[content]},
NetGraph[<|
"Image" -> ConstantArrayLayer["Array" -> RandomReal[{-0.1, 0.1}, dims]],
"imageFeat" -> NetReplacePart[net, "Input" -> dims],
"content" -> contentLoss,
"style" -> styleLoss,
"tv" -> tvLoss|>,
{"Image" -> "imageFeat",
{"imageFeat", NetPort["ContentFeature"]} -> "content",
{"imageFeat", NetPort["StyleFeature"]} -> "style",
"Image" -> "tv"},
"StyleFeature" -> styleFeatSize ] ]
Define a [NetDecoder][9] for visualizing the predicted image:
meanIm = NetExtract[featureNet, "Input"][["MeanImage"]]
> {0.48502, 0.457957, 0.407604}
decoder = NetDecoder[{"Image", "MeanImage" -> meanIm}]
![enter image description here][10]
The training data consists of features extracted from the content and style images. Define a feature extraction function:
extractFeatures[img_] := NetReplacePart[featureNet, "Input" -> NetEncoder[{"Image", ImageDimensions[img],
"MeanImage" -> meanIm}]][img];
Create a training set consisting of a single example of a content and style feature:
trainingdata = <|
"ContentFeature" -> {extractFeatures[contentImg]},
"StyleFeature" -> {extractFeatures[styleImg]}
|>
Create the training net whose input dimensions correspond to the content and style image dimensions:
net = createTransferNet[featureNet, contentImg,
Dimensions@First@trainingdata["StyleFeature"]];
When training, the three losses are weighted differently to set the relative importance of the content and style. These values might need to be changed with different content and style images. Create a loss specification that defines the final loss as a combination of the three losses:
perPixel = 1/(3*Apply[Times, ImageDimensions[contentImg]]);
lossSpec = {"LossContent" -> Scaled[6.*10^-5],
"LossStyle" -> Scaled[0.5*10^-14],
"LossTV" -> Scaled[20.*perPixel]};
Optimize the image using [NetTrain][11]. [LearningRateMultipliers][12] are used to freeze all parameters in the net except for the [ConstantArrayLayer][13]. The training is best done on a GPU, as it will take up to an hour to get good results with CPU training. The training can be stopped at any time via Evaluation -> Abort Evaluation:
trainedNet = NetTrain[net,
trainingdata, lossSpec,
LearningRateMultipliers -> {"Image" -> 1, _ -> None},
TrainingProgressReporting ->
Function[decoder[#Weights[{"Image", "Array"}]]],
MaxTrainingRounds -> 300, BatchSize -> 1,
Method -> {"ADAM", "InitialLearningRate" -> 0.05},
TargetDevice -> "GPU"
]
![enter image description here][14]
Extract the final image from the [ConstantArrayLayer][15] of the trained net:
decoder[NetExtract[trainedNet, {"Image", "Array"}]]
![enter image description here][16]
[1]: https://education.wolfram.com/summer/school/alumni/2016/salvarezza/
[2]: http://community.wolfram.com/groups/-/m/t/885941
[3]: http://reference.wolfram.com/language/ref/NetTrain.html
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=I_432.png&userId=95400
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_179.png&userId=95400
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_180.png&userId=95400
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_181.png&userId=95400
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_182.png&userId=95400
[9]: http://reference.wolfram.com/language/ref/NetDecoder.html
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_184.png&userId=95400
[11]: http://reference.wolfram.com/language/ref/NetTrain.html
[12]: http://reference.wolfram.com/language/ref/LearningRateMultipliers.html
[13]: http://reference.wolfram.com/language/ref/ConstantArrayLayer.html
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_185.png&userId=95400
[15]: http://reference.wolfram.com/language/ref/ConstantArrayLayer.html
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=I_466.png&userId=95400Matteo Salvarezza2017-05-15T10:33:59ZEarn $ creating a Mathematica notebook
http://community.wolfram.com/groups/-/m/t/1164210
I am writing a paper, "Simple Random Walk Models of Entanglement Effects on the Elasticity and Viscoelasticity of Polymer Networks," which I will be presenting at the Polymer Networks meeting in Prague.
Of course, I'm writing it in a Mathematica notebook, but it will have a bunch of equations, and my physical disability makes it difficult for me to enter them (also, I am not very familiar with the use of the mathematical templates in WL / Mathematica).
I have PDFs of my published articles from which I am creating this 'tutorial' article. These PDFs have all the equations I will be using in the article.
What I need is someone to enter each equation into its own cell in a Mathematica notebook.
I can then take that notebook and create a notebook containing both the text, which I will be composing, and the equations, which I will 'copy and paste' from the 'notebook of equations'.
I can email the PDFs to the person who, after creating the notebook of equations, can email it back to me.
So this is a simple, straightforward job that will earn some income for someone (the pricing is negotiable, based on the time involved, the difficulty of the work, etc.).Richard Gaylord2017-08-13T09:34:53Z[GIF] Grassfire (Parallels of the limaçon trisectrix)
http://community.wolfram.com/groups/-/m/t/1158782
![Parallels of the limaçon trisectrix][1]
**Grassfire**
An interesting thing I noticed while playing around with the [grassfire transform][2], which just pushes a curve off itself to get parallel copies: the [limaçon trisectrix][3] is self-parallel, meaning that there is a special distance (namely $\sqrt{\frac{207}{64} + \frac{3 \sqrt{3}}{4}}$) so that if you travel exactly that distance in a direction normal to every point on the limaçon, you get a congruent copy of the limaçon.
I did some searching to try to determine who first noticed this, but couldn't find anything. Does anybody have a good reference?
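As a quick static sanity check (just a sketch, reusing the same $f$ and $g$ as in the animation code), plotting the curve together with its normal pushoff at the special distance shows two congruent limaçons:

```mathematica
(* Sketch: the limaçon trisectrix and its parallel at the
   self-parallel distance, plotted together for comparison *)
f[t_] := {-(1 - 2 Cos[t]) Sin[t], (1 - 2 Cos[t]) Cos[t]};
g[t_] = {-#[[2]], #[[1]]} &[Normalize[D[f[t], t]]];
r0 = Sqrt[207/64 + (3 Sqrt[3])/4];
ParametricPlot[{f[t], f[t] + r0 g[t]}, {t, 0, 2 Pi}]
```

Of course a plot is not a proof; it only makes the congruence visually plausible.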
Here's the code, where $f$ parametrizes the curve and $g$ gives the normal to the curve:
DynamicModule[{f, g, rmax = Sqrt[207/64 + (3 Sqrt[3])/4], n = 400,
m = 200, cols = RGBColor /@ {"#4E1184", "#FD367E", "#0E1555"}},
f[t_] := {-(1 - 2 Cos[t]) Sin[t], (1 - 2 Cos[t]) Cos[t]};
g[t_] = {-#[[2]], #[[1]]} &[Normalize[D[f[t], t]]];
Manipulate[
Graphics[
{EdgeForm[Directive[cols[[1]], Thickness[.002]]], FaceForm[None],
Table[
{EdgeForm[Directive[Opacity[n/(m*rmax) (s - r) + 1], Blend[cols[[;; 2]], n/(m*rmax) (s - r) + 1]]],
Polygon[Table[f[t] + s g[t], {t, 0., 2 π, 2 π/200}]]},
{s, Max[r - m * rmax/n, 0], Min[r, rmax], rmax/n}]},
Axes -> None, ImageSize -> 540, Background -> cols[[-1]],
PlotRange -> {{-(3 + rmax)/2, (3 + rmax)/2}, {-3.5, rmax - 1 + 1/2}}],
{r, 0., rmax + m*rmax/n}]
]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=parallels18c.gif&userId=610054
[2]: https://en.wikipedia.org/wiki/Grassfire_transform
[3]: https://en.wikipedia.org/wiki/Lima%C3%A7on_trisectrixClayton Shonkwiler2017-08-05T02:42:32Z[✓] Fastest safe way to change keys in large Dataset?
http://community.wolfram.com/groups/-/m/t/1163227
Consider a large Dataset (ds) that wraps a list of Associations or an Association of Associations. Perhaps the Dataset has something like a million rows and thirty columns. I decide I want to change a few of the column names. The fastest way I have found to safely change the column names is with a function like this (assuming we have a list of Associations):
changeColumnNames[ds_,newColumnNames_]:=Dataset[Map[
AssociationThread[newColumnNames, #] &, Normal@ds[All, Values]]]
Basically I rebuild the Dataset using AssociationThread. This executes in a time roughly proportional to the number of rows. Given that the Dataset already has embedded inside its FullForm (second element) a list of the column names, I am wondering if there might be a way of doing this more swiftly, particularly if one is willing to use the Dataset` package.
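For comparison, here is a sketch of an alternative that renames only the affected keys via KeyMap (the function name and example rules are illustrative, and I have not benchmarked it against the AssociationThread version):

```mathematica
(* Sketch: rename selected columns with replacement rules; keys that
   match no rule are left unchanged by the operator form of Replace *)
renameColumns[ds_, rules_] := ds[All, KeyMap[Replace[rules]]]
(* e.g. renameColumns[ds, {"oldName" -> "newName"}] *)
```

This keeps the query inside the Dataset type system, though it still touches every row.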
All ideas welcome!
P.S. I've played around with Transpose-ing the Dataset but have not gotten anything to work both well and swiftly.Seth Chandler2017-08-11T20:36:35Z