Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing discussions in tag Wolfram Science, sorted by active

Solve PDE in Mathematica
http://community.wolfram.com/groups/-/m/t/1291173
Hi, I want to solve the following PDE:
heqn = D[u[x, t], t] == c*D[u[x, t], {x, 2}] - A*u[x, t] - B; (* A, B and c are constants *)
ic = u[x, 0] == 2*A/b; (* b is a constant *)
bc = {u[0, t] == 0, u[500, t] == 0};
sol = DSolveValue[{heqn, ic, bc}, u[x, t], {x, t}]
The solver is not working; can you please let me know where I am making a mistake?
Thanks,
Vishal
vishal nandigana, 2018-02-24T08:19:14Z

Understand algorithm for AudioLocalMeasurements, & ModifiedKullbackLeibler?
http://community.wolfram.com/groups/-/m/t/1287765
I recently was doing some experiments with AudioLocalMeasurements for some music analysis tasks, and in the process of trying to explain what I did to someone I realized that I didn't know what the ModifiedKullbackLeibler measurement is computing. Specifically, what is modified versus regular KL divergence, and what is done to the frequency spectra of the windows being compared? Without understanding what the algorithm is actually reporting, it's hard to make any interpretation of the results that it computes.
Matthew Sottile, 2018-02-19T07:23:15Z

2D Hexagon to 3D Hexagon prism, or Thick Hexagon Mesh?
http://community.wolfram.com/groups/-/m/t/1290648
I tried this code for 2D, but I would like to have, say, a slab with a defined thickness, not just a plane.
The code for 2D is:
h[x_, y_] :=
Polygon[Table[{Cos[2 Pi k/6] + x, Sin[2 Pi k/6] + y}, {k, 6}]];
Graphics[{EdgeForm[Opacity[.7]], LightBlue,
Table[h[3 i + 3 ((-1)^j + 1)/4, Sqrt[3]/2 j], {i, 10}, {j, 15}]}]
I also found and modified this code [here][1], but it gives a very thin sheet; I would like to be able to define a thickness for it:
Graphics3D[
With[{hex =
Polygon[Table[{Cos[2 Pi k/6] + #, Sin[2 Pi k/6] + #2}, {k,
6}]] &},
Table[hex[3 i + 3 ((-1)^j + 1)/4, Sqrt[3]/2 j], {i, 10}, {j,
15}]] /.
Polygon[l_] :> {Red, Polygon[l], Polygon[{1, 0} # & /@ l]} /.
Polygon[l_List] :> Polygon[top @@@ l], Boxed -> False,
Axes -> False, PlotRange -> All, Lighting -> "Neutral"]
Can you help me convert the 2D hexagonal mesh into a 3D hexagonal slab with a user-defined thickness?
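One possible approach (a sketch, not the only way): build each hexagonal prism explicitly from a bottom face, a top face, and six side quads, with the thickness `h` as a parameter:

    (* hexagonal prism centered at {x, y}, extruded from z = 0 to z = h *)
    hexPrism[x_, y_, h_] :=
     Module[{pts = Table[{Cos[2 Pi k/6] + x, Sin[2 Pi k/6] + y}, {k, 6}]},
      {Polygon[Append[#, 0] & /@ pts],     (* bottom face *)
       Polygon[Append[#, h] & /@ pts],     (* top face *)
       Table[Polygon[{Append[pts[[k]], 0],
          Append[pts[[Mod[k, 6] + 1]], 0],
          Append[pts[[Mod[k, 6] + 1]], h],
          Append[pts[[k]], h]}], {k, 6}]}] (* six side walls *)

    Graphics3D[{EdgeForm[Opacity[.7]], LightBlue,
      Table[hexPrism[3 i + 3 ((-1)^j + 1)/4, Sqrt[3]/2 j, 0.5], {i, 10}, {j, 15}]},
     Boxed -> False, Lighting -> "Neutral"]

Here 0.5 is just an illustrative thickness; replace it with whatever the user requires.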
[1]: https://mathematica.stackexchange.com/questions/77312/hexagonal-mesh-on-a-3d-surface
Arm Mo, 2018-02-24T02:10:32Z

ListLinePlot is not working with NumberForm data - Why?
http://community.wolfram.com/groups/-/m/t/1290871
Dear All,
I have a question regarding ListLinePlot in combination with NumberForm data.
Why does ListLinePlot not work with NumberForm data?
TiR0Day1 = ((2*81) + 128)/3;
TiR0Day2 = ((2*82) + 131)/3;
TiR0Day3 = ((2*72) + 119)/3;
TiR0Day4 = ((2*90) + 141)/3;
TiR0Day5 = ((2*79) + 131)/3;
TiR0Day6 = ((2*68) + 116)/3;
TiR0Day7 = ((2*68) + 124)/3;
TiR0Day8 = ((2*62) + 106)/3;
TiR0Day9 = ((2*66) + 122)/3;
TiR0Day10 = ((2*66) + 103)/3;
ListLinePlot[{TiR0Day1, TiR0Day2, TiR0Day3, TiR0Day4, TiR0Day5,
TiR0Day6, TiR0Day7, TiR0Day8, TiR0Day9, TiR0Day10}]
![Without NumberForm][1]
Now with NumberForm and the same data as above. The plot is empty. Why?
TiR0Day1N = NumberForm[((2*81) + 128)/3, {Infinity, 1}];
TiR0Day2N = NumberForm[((2*82) + 131)/3, {Infinity, 1}];
TiR0Day3N = NumberForm[((2*72) + 119)/3, {Infinity, 1}];
TiR0Day4N = NumberForm[((2*90) + 141)/3, {Infinity, 1}];
TiR0Day5N = NumberForm[((2*79) + 131)/3, {Infinity, 1}];
TiR0Day6N = NumberForm[((2*68) + 116)/3, {Infinity, 1}];
TiR0Day7N = NumberForm[((2*68) + 124)/3, {Infinity, 1}];
TiR0Day8N = NumberForm[((2*62) + 106)/3, {Infinity, 1}];
TiR0Day9N = NumberForm[((2*66) + 122)/3, {Infinity, 1}];
TiR0Day10N = NumberForm[((2*66) + 103)/3, {Infinity, 1}];
ListLinePlot[{TiR0Day1N, TiR0Day2N, TiR0Day3N, TiR0Day4N, TiR0Day5N,
TiR0Day6N, TiR0Day7N, TiR0Day8N, TiR0Day9N, TiR0Day10N}]
![enter image description here][2]
Please can you help ?
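A likely explanation (a sketch of an answer): `NumberForm` does not return a number; it returns a formatting wrapper intended only for display, so `ListLinePlot` finds no numeric data to plot. Keeping the values numeric and applying `NumberForm` only when displaying them avoids the problem (shown here with just the first three values):

    values = {((2*81) + 128)/3, ((2*82) + 131)/3, ((2*72) + 119)/3};
    ListLinePlot[values]                         (* plot the raw numbers *)
    NumberForm[N[#], {Infinity, 1}] & /@ values  (* formatted, for display only *)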
Regards,.....Jos
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6773Plot.gif&userId=185016
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Plot2.gif&userId=185016
Jos Klaps, 2018-02-23T14:37:07Z

Import data from Excel and run the Anderson Darling test on it
http://community.wolfram.com/groups/-/m/t/1291087
I want to import an Excel .xlsx sheet into Mathematica and perform the AndersonDarlingTest on it.
Unfortunately I'm in a hurry, and unfortunately I'm a very early beginner with Mathematica, so I decided to ask for help on this platform.
I get this error message, but I have no idea how to solve the issue:
AndersonDarlingTest::rctnln: The argument {{{151.6,151.9,149.8,148.8},{151.9,150.9,148.8,147.9},
{150.5,150.9,148.2,147.9},{149.9,148.8,150.7,151.},{152.6,152.2,153.4,149.9},<<41>>,
{151.,152.27,153.87,150.93},{152.65,152.21,151.67,150.21},{153.6,151.19,152.56,154.01},
{150.92,154.21,149.75,151.33},<<49>>}} at position 1 should be a rectangular array of real
numbers with length greater than the dimension of the array.
I kindly ask for some help and an explanation of what I did wrong.
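Judging from the extra braces in the error message, a likely cause (a sketch, assuming the default `Import` behavior) is that importing an .xlsx file returns a *list of sheets*, so the data is wrapped one level too deep. Extracting the first sheet and flattening it into a plain list of reals should let the test run (the file path here is a placeholder):

    data = First@Import["data.xlsx"];   (* first sheet; path is a placeholder *)
    AndersonDarlingTest[Flatten[data]]  (* test all values as a single sample *)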
THANKS and BR, Hans
johann.spies, 2018-02-24T06:40:10Z

How can I replace a list of functions by another list of functions in DEs?
http://community.wolfram.com/groups/-/m/t/1291607
I have this list as an output of one of the steps:
{4. u^(0,2)(0,x)+0.666667 u^(0,2)(1,x)==2.,0.666667 u^(0,2)(0,x)+1.33333 u^(0,2)(1,x)==0.}.
What I would like to do is define a replacement where u(i,x) is replaced by a predefined function f(i,x), so that the system becomes
{4. f^(0,2)(0,x)+0.666667 f^(0,2)(1,x)==2., 0.666667 f^(0,2)(0,x)+1.33333 f^(0,2)(1,x)==0.}
Note, the size of the system may vary and be more than two equations.
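One possible approach (a sketch): since the flattened output above is `Derivative[0, 2][u][i, x]` internally, and the head `u` sits inside the `Derivative` expression, replacing the symbol itself propagates through derivatives of any order, regardless of how many equations the system has:

    eqns = {4. Derivative[0, 2][u][0, x] + 0.666667 Derivative[0, 2][u][1, x] == 2.,
       0.666667 Derivative[0, 2][u][0, x] + 1.33333 Derivative[0, 2][u][1, x] == 0.};
    eqns /. u -> f  (* every u^(0,2)(i, x) becomes f^(0,2)(i, x) *)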
Thanks in advance.
Maha Youssef, 2018-02-24T19:22:44Z

Thoughts on a Python interface, and why ExternalEvaluate is just not enough
http://community.wolfram.com/groups/-/m/t/1185247
`ExternalEvaluate`, introduced in M11.2, is a nice initiative. It enables limited communication with multiple languages, including Python, and appears to be designed to be relatively easily extensible (see ``ExternalEvaluate`AddHeuristic`` if you want to investigate, though I wouldn't invest in this until it becomes documented).
**My great fear, however, is that with `ExternalEvaluate` Wolfram will consider the question of a Python interface settled.**
This would be a big mistake. A *general* framework, like `ExternalEvaluate`, that aims to work with *any* language and relies on passing code (contained in a string) to an evaluator and getting JSON back, will never be fast enough or flexible enough for *practical scientific computing*.
Consider a task as simple as computing the inverse of a $100\times100$ Mathematica matrix using Python (using [`numpy.linalg.inv`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html)).
I challenge people to implement this with `ExternalEvaluate`. It's not possible to do it *in a practically useful way*. The matrix has to be sent *as code*, and piecing together code from strings just can't replace structured communication. The result will need to be received as something encodable to JSON. This has terrible performance due to multiple conversions, and even risks losing numerical precision.
Just sending and receiving a tiny list of 10000 integers takes half a second (!)
In[6]:= ExternalEvaluate[py, "range(10000)"]; // AbsoluteTiming
Out[6]= {0.52292, Null}
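To make the point concrete, here is a hedged sketch of what the string-splicing workaround for the matrix-inverse task looks like (assuming a running Python session `py` with numpy installed); it is exactly the code-in-a-string construction this post argues against, with a JSON round-trip on both sides:

    mat = RandomReal[1, {100, 100}];
    ExternalEvaluate[py,
     "__import__('numpy').linalg.inv(" <> ExportString[mat, "JSON"] <>
      ").tolist()"]

This "works" only because a JSON array happens to also be a valid Python literal; the multiple text conversions are what make the approach slow and precision-lossy.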
Since I am primarily interested in scientific and numerical computing (as I believe most M users are), I simply won't use `ExternalEvaluate` much, as it's not suitable for this purpose. What if we need to do a [mesh transformation](https://mathematica.stackexchange.com/q/155484/12) that Mathematica can't currently handle, but there's a Python package for it? It's exactly the kind of problem I am looking to apply Python for. I have in fact done mesh transformations using MATLAB toolboxes directly from within Mathematica, using [MATLink][1], while doing the rest of the processing in Mathematica. But I couldn't do this with ExternalEvaluate/Python in a reasonable way.
In 2017, any scientific computing system *needs* to have a Python interface to be taken seriously. [MATLAB has one][2], and it *is* practically usable for numerical/scientific problems.
----
## A Python interface
I envision a Python interface which works like this:
- The MathLink/WSTP API is exposed to Python, and serves as the basis of the system. MathLink is good at transferring large numerical arrays efficiently.
- Fundamental data types (lists, dictionaries, bignums, etc.) as well as datatypes critical for numerical computing (numpy arrays) can be transferred *efficiently* and *bidirectionally*. Numpy arrays in particular must translate to/from packed arrays in Mathematica with the lowest possible overhead.
- Python functions can be set up to be called from within Mathematica, with automatic argument translation and return type translation. E.g.,
PyFun["myfun"][ (* myfun is a function defined in Python *)
{1,2,3} (* a list *),
PyNum[{1,2,3}] (* cast to numpy array, since the interpretation of {1,2,3} is ambiguous *),
PySet[{1,2,3}] (* cast to a set *)
]
- The system should be user-extensible to add translations for new datatypes, e.g. a Python class that is needed frequently for some application.
- The primary mode of operation should be that Python is run as a slave (subprocess) of Mathematica. But there should be a second mode of operation where both Mathematica and Python are being used interactively, and they are able to send/receive structured data to/from each other on demand.
- As a bonus: Python can also call back to Mathematica, so e.g. we can use a numerical optimizer available in Python to find the minimum of a function defined in Mathematica
- An interface whose primary purpose is to call Mathematica from Python is a different topic, but can be built on the same data translation framework described above.
The development of such an interface should be driven by real use cases. Ideally, Wolfram should talk to users who use Mathematica for more than fun and games, and do scientific computing as part of their daily work, with multiple tools (not just M). Start with a number of realistic problems, and make sure the interface can help in solving them. As a non-trivial test case for the datatype-extension framework, make sure people can set up auto-translation for [SymPy objects][3], or a [Pandas dataframe][4], or a [networkx graph][5]. Run `FindMinimum` on a Python function and make sure it performs well. (In a practical scenario this could be a function implementing a physics simulation rather than a simple formula.) As a performance stress test, run `Plot3D` (which triggers a very high number of evaluations) on a Python function. Performance and usability problems will be exposed by such testing early, and then the interface can be *designed* in such a way as to make these problems at least solvable (if not immediately solved in the first version). I do not believe that they are solvable with the `ExternalEvaluate` design.
Of course, this is not the only possible design for an interface. J/Link works differently: it has handles to Java-side objects. But it also has a different goal. Based on my experience with MATLink and RLink, I believe that *for practical scientific/numerical computing*, the right approach is what I outlined above, and that the performance of data structure translation is critical.
----
## ExternalEvaluate
Don't get me wrong, I do think that the `ExternalEvaluate` framework is a very useful initiative, and it has its place. I am saying this because I looked at its source code and it appears to be easily extensible. R has zeromq and JSON capabilities, and it looks like one could set it up to work with `ExternalEvaluate` in a day or so. So does Perl; anyone want to give it a try? `ExternalEvaluate` is great because it is simple to use and works (or can be made to work) with just about any interpreted language that speaks JSON and zeromq. But it is also, in essence, a quick and dirty hack (that's extensible in a quick and dirty way), and won't be able to scale to the types of problems I mentioned above.
----
## MathLink/WSTP
Let me finally say a few words about why MathLink/WSTP are critical for Mathematica, and what should be improved about them.
I believe that any serious interface should be built on top of MathLink. Since Mathematica already has a good interface capable of inter-process communication, that is designed to work well with Mathematica, and designed to handle numerical and symbolic data efficiently, use it!!
Two things are missing:
- Better documentation and example programs, so more people will learn MathLink
- If the MathLink library (not Mathematica!) were open source, people would be able to use it to link to libraries [which are licensed under the GPL][6]. Even a separate open source implementation that only supports shared memory passing would be sufficient—no need to publish the currently used code in full. Many scientific libraries are licensed under the GPL, often without their authors even realizing that they are practically preventing them from being used from closed source systems like Mathematica (due to the need to link to the MathLink libraries). To be precise, GPL licensed code *can* be linked with Mathematica, but the result cannot be shared with anyone. I have personally requested the author of a certain library to grant an exception for linking to Mathematica, and they did not grant it. Even worse, I am not sure they understood the issue. The authors of other libraries *cannot* grant such a permission because they themselves are using yet other GPL'd libraries.
[MathLink already has a more permissive license than Mathematica.][7] Why not go all the way and publish an open source implementation?
I am hoping that Wolfram will fix these two problems, and encourage people to create MathLink-based interfaces to other systems. (However, I also hope that Wolfram will create a high-quality Python link themselves instead of relying on the community.)
I have talked about the potential of Mathematica as a glue-language at some Wolfram events in France, and I believe that the capability to interface external libraries/systems easily is critical for Mathematica's future, and so is a healthy third-party package ecosystem.
[1]: http://matlink.org/
[2]: https://www.mathworks.com/help/matlab/matlab-engine-for-python.html
[3]: http://www.sympy.org/
[4]: http://pandas.pydata.org/
[5]: https://networkx.github.io/
[6]: https://en.wikipedia.org/wiki/Copyleft
[7]: https://www.wolfram.com/legal/agreements/mathlink.html
Szabolcs Horvát, 2017-09-15T12:33:04Z

MATLink (MATLAB ==> Mathematica)
http://community.wolfram.com/groups/-/m/t/1290731
I am Hamed. I have MATLAB R2015a and Mathematica 11.1.1.0, and I now want to install MATLink to allow a quick and easy transition from MATLAB to Mathematica. MATLink 1.1 does not work on my computer; can someone help me, please?
Hamed BOUARE, 2018-02-23T12:47:50Z

A New Sorting: Top 70 Greatest Mathematicians in History
http://community.wolfram.com/groups/-/m/t/1290697
When my kids asked me, "Dad, who is the greatest mathematician?",
I found that the old internet discussions of the [Top 10 Greatest Mathematicians][1] were not completely satisfying.
http://listverse.com/2010/12/07/top-10-greatest-mathematicians/
So I decided to build a new ranking of the great mathematicians, based on the mathematical contributions documented by Wolfram and MathWorld. The result and code are shown below.
![enter image description here][2]
![enter image description here][3]
![enter image description here][4]
![enter image description here][5]
![enter image description here][6]
Looking at all those achievements of the great mathematicians in human history,
as an amateur-level mathematics enthusiast, I would like to quote the code below as an ending.
TextTranslation["Wir müssen erinnern, wir werden erinnern."]
[1]: http://listverse.com/2010/12/07/top-10-greatest-mathematicians/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Top20.png&userId=569571
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Top70.png&userId=569571
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=king1.png&userId=569571
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=king2.png&userId=569571
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=king3.png&userId=569571
Frederick Wu, 2018-02-24T07:07:31Z

[✓] Integrate a piecewise function?
http://community.wolfram.com/groups/-/m/t/1290148
I am trying to integrate a step function given in a book, but it doesn't work.
\[CapitalPhi][x_] := Piecewise[{{-Hg x, -g/2 <= x <= g/2}, {-g Hg/2, x >= g/2}, {g Hg/2, x < -g/2}}]
I can plot it, and it gives the same result as in the book, but the integration does not match. My integration code is:
Integrate[y1/\[Pi] \[CapitalPhi][x]/((x1 - x)^2 + y1^2), {x, -Infinity, Infinity}, Assumptions -> x1 > 0 && y1 > 0] (* here y1 = y in the book *)
The book only says y > 0, so I assume x > 0 as well, because with just y > 0 Mathematica doesn't integrate. Thanks for your help.
My code and the book page are attached.
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=IMG_4799.JPG&userId=1289160
Arm Mo, 2018-02-22T23:29:57Z

[✓] Plot a graph for viscosity vs. molecular weight?
http://community.wolfram.com/groups/-/m/t/1289179
Hi,
I just started learning Mathematica.
I'm trying to plot the graph of viscosity vs. molecular weight (see the attached pictures: the formula [1] and
the book's plot [2], from Cosgrove's book on colloid science).
I first tried to sum the functions (the linear part EE1(M) and the non-linear part EE2(M)), but I don't think it works like that (line 3).
So I used the `If` command, and it seems that is also wrong (line 5). I think I have a problem with defining functions and parameters in Mathematica.
My .nb file is attached, and
I appreciate your help.
P.S.: Also look at line 7: why doesn't Mathematica plot when I use the function's name?
EE1[M1_] := a M1;
EE2[M1_] := a M1^3.4;
Plot[{a M1 + a M1^3.4}, {M1, 1, 100}, PlotRange -> Automatic]
Mc = 54;
ee[M_] := If[M < Mc, M, M^3.4](*ee is viscosity [\eta]*)
Plot[If[M < Mc, M, M^3.4], {M, 1, 100}, PlotRange -> Automatic]
Also, why doesn't this code produce a plot inside Manipulate when I use the functions' names?
Manipulate[Plot[{EE1 + EE2}, {M1, 1, 100}, PlotRange -> Automatic], {a, 1, 3}]
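A likely reason (a sketch of an answer): `EE1 + EE2` adds the bare symbols rather than calling the functions, and the `a` inside the earlier definitions is not tied to the `Manipulate` control. One way around both problems is to pass `a` as an explicit argument:

    EE1[M1_, a_] := a M1;      (* linear part *)
    EE2[M1_, a_] := a M1^3.4;  (* non-linear part *)
    Manipulate[
     Plot[EE1[M1, a] + EE2[M1, a], {M1, 1, 100}, PlotRange -> Automatic],
     {a, 1, 3}]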
![enter image description here][1]
![`enter image description here`][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=formula.png&userId=1289160
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=plotofviscosity.png&userId=1289160
Arm Mo, 2018-02-21T18:38:12Z

How to use the Wolfram|Alpha time dilation calculator on Black Holes?
http://community.wolfram.com/groups/-/m/t/772569
Hello,
I’m a graphic artist working for an education company in Arizona. I was given the task of writing a twenty page reader about black holes for middle school students, and all of my research has gone well. But I’ve reached a dead end regarding time dilation. I’m writing a scenario where the reader visits a ten-solar mass black hole while his or her friend stays at a safe distance. I would like to write the following:
> If you could stay just in front of the event horizon, you could watch
> your ten year old friend turn 100 years old in just [*xxx* *amount of
> time*].
Unfortunately, I can’t get an adequate answer to this. I was directed to the time dilation calculator here http://www.wolframalpha.com/input/?i=time+dilation+calculator , but I don’t know how to use it. Just playing with it I’ve gotten negative numbers, *i*, and “exceeds the speed of light”. I have no idea what any of this means. What’s the gravitational acceleration? What’s the rest frame? What’s the radius of what?
I know the time should be very short, but “a blink of an eye” isn’t useful. Would some kind-hearted soul be willing to walk me through this in layman’s language? Or better yet, give me an accurate (but not necessarily precise) number. Any help is greatly appreciated.
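For a non-rotating (Schwarzschild) black hole, the standard gravitational time-dilation factor for a hovering observer is $1/\sqrt{1 - r_s/r}$, where $r_s = 2GM/c^2$ is the Schwarzschild radius. A sketch of the calculation (the hovering radius of 0.1% outside the horizon is an illustrative assumption, not part of the question):

    G = 6.674*10^-11; c = 2.998*10^8; M = 10*1.989*10^30; (* SI units, 10 solar masses *)
    rs = 2 G M/c^2                    (* Schwarzschild radius, roughly 30 km *)
    dilation[r_] := 1/Sqrt[1 - rs/r]  (* far-away time elapsed per unit of local time *)
    dilation[1.001 rs]                (* hovering 0.1% outside the horizon *)

At that hypothetical distance the factor is about 32, so roughly 90 far-away years would pass in just under 3 years of local time; the factor grows without bound as r approaches rs, which is why the scenario has no single answer without fixing the hovering distance.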
Thank you and best regards,
Jack
Jack M, 2016-01-13T02:39:29Z

Length and Width of a line
http://community.wolfram.com/groups/-/m/t/1291132
Hello,
How do I measure the average length and the average width of a line from an image?
For example, the length and the width of the river in the attached picture.![enter image description here][1]
Thank you
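One rough approach (a sketch, assuming the river can first be segmented into a binary mask, e.g. with `Binarize` after some color-based selection): skeletonize the mask to estimate the centerline length in pixels, then divide the mask's area by that length to get an average width:

    (* mask is assumed to be a binary Image of the river, white on black *)
    measureLine[mask_] := Module[{skeleton, length, area},
      skeleton = Thinning[mask];              (* one-pixel-wide centerline *)
      length = Total[ImageData[skeleton], 2]; (* centerline pixel count ~ length *)
      area = Total[ImageData[mask], 2];       (* total river pixels *)
      {"length" -> length, "averageWidth" -> N[area/length]}]

Both results are in pixels; convert to real units with the image's known scale.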
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fsbdev3_066534.jpg&userId=1291117
Subbarao Raikar, 2018-02-23T22:56:12Z

[✓] Calculate this integral with one/two unknown variables?
http://community.wolfram.com/groups/-/m/t/1290048
In this double integral, "a" and "e" are unknown parameters. I want to calculate the integral when "a" or "e" is set to a constant, and, if possible, I want to draw a 3D plot of the integral with both "a" and "e" unknown. Thanks very much.
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=%E6%96%B0%E5%BB%BA%E4%BD%8D%E5%9B%BE%E5%9B%BE%E5%83%8F6.jpg&userId=1290032
zhaojx84, 2018-02-22T14:04:43Z

Keyboard shortcut for "Evaluate Notebook"
http://community.wolfram.com/groups/-/m/t/1290781
I recently upgraded to Mathematica 11 for some side projects. I love the program, but I was annoyed to see that there is no keyboard shortcut for the Evaluate Notebook command. F8 appears to be unused in the normal configuration on Linux. So I’m posting a solution for adding F8 as a keyboard shortcut for Evaluate Notebook:
If you are on Linux, all you have to do is edit /usr/local/Wolfram/Mathematica/11.2/SystemFiles/FrontEnd/TextResources/X/MenuSetup.tr
1. Make a backup of MenuSetup.tr
2. Open MenuSetup.tr
3. Search for "Evaluate Notebook"
4. You'll find the correct line (there is only one). Add `, MenuKey["F8", Modifiers -> {}]` after "EvaluateNotebook"
5. The revised line should read `MenuItem["Evaluate N&otebook", "EvaluateNotebook", MenuKey["F8", Modifiers -> {}]],`
6. Re-launch Mathematica, and you should see F8 next to the Evaluate Notebook command. Try it out.
7. Success!
Is there some reason why there is no keyboard shortcut for Evaluate Notebook? I am at a loss, because it is simple to add one.
Andrew Watters, 2018-02-23T17:03:02Z

Does anyone run Mathematica on an ODROID-MC1?
http://community.wolfram.com/groups/-/m/t/1290909
Hi,
Does anyone run Mathematica under any Linux variant on an ODROID-MC1? (https://odroidinc.com/collections/odroid-single-board-computers/products/odroid-mc1-1)
If yes, what was the WolframMark score?
Thanks ahead,
János
Janos Lobb, 2018-02-23T19:50:43Z

[✓] Plot an Archimedean Spiral with equidistant points?
http://community.wolfram.com/groups/-/m/t/1180645
Hi
Has anybody experience with the Archimedean spiral?
In general it is quite simple to create an Archimedean spiral, e.g. via a line function. However, I want to create a spiral with equidistant points.
The number of "sampling" points should be adjustable.
I saw some examples, but I could not follow them; for example:
[Equation to place points equidistantly on an Archimedian Spiral using arc-length][1]
I created following code so far:
a = 0.75;
K = 3;
L1 = 90;
rp = 0.0;
alpha = L1*Pi/180;
Sample = 80; (* sampling per azimuth direction *)
M = 1; (* number of spirals *)
x1[t_, m_] := (rp*Cos[alpha]) + a*t*Cos[t + (m*2*Pi/M)];
y1[t_, m_] := (rp*Sin[alpha]) + a*t*Sin[t + (m*2*Pi/M)];
data1 = Table [{x1[t, m], y1[t, m]}, {t, 0, K*2 \[Pi],
2 \[Pi]/Sample}, {m, 1, M}];
dataflat1 = Flatten[data1, 1];
Graphics[{Thick, Blue, PointSize[0.0075], Point[dataflat1]}, Axes -> True, AxesLabel -> {X, Y}]
![enter image description here][2]
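One way to get truly equidistant points (a sketch): for the spiral r = a t, the arc length from the center is s(t) = (a/2)(t√(1+t²) + arcsinh t), so points can be placed by numerically inverting s at equal arc-length steps (the initial guess √(2d/a) comes from the large-t approximation s ≈ a t²/2):

    a = 0.75; K = 3; n = 80;                      (* n = number of points *)
    s[t_] := a/2 (t Sqrt[1 + t^2] + ArcSinh[t]);  (* arc length of r = a t *)
    total = s[2 Pi K];
    ts = Table[t /. FindRoot[s[t] == d, {t, Sqrt[2 d/a]}],
       {d, total/n, total, total/n}];             (* invert s(t) = d numerically *)
    pts = a # {Cos[#], Sin[#]} & /@ ts;
    Graphics[{Thick, Blue, PointSize[0.0075], Point[pts]}, Axes -> True, AxesLabel -> {X, Y}]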
[1]: https://math.stackexchange.com/questions/1371668/equation-to-place-points-equidistantly-on-an-archimedian-spiral-using-arc-length/2216736#2216736
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3425trgfa.png&userId=11733
Nikki Peter, 2017-09-10T15:18:29Z

Mathematica for nuclear medicine?
http://community.wolfram.com/groups/-/m/t/1290721
One of my relatives is a cardiologist, and he uses a ridiculously expensive medical imaging workstation with proprietary software that costs tens of thousands of dollars. This thing is ancient and runs Solaris. It's called a Pegasys ADAC I believe. Here is a sample of what this type of system analyzes:
![Image slices in a nuclear medicine study][1]
These are image slices from a camera that orbits around the patient and looks at their heart. It works by detecting radioactive emissions from isotopes injected into the patient. The proprietary software ([examples of software][2]) assembles these slices into a volumetric visualization of blood flow through the patient's heart, and also computes things like blood flow and other relevant measurements. The resolution has not seemed to improve in 20 years despite advances in camera technology and other areas, so he is stuck using this old technology.
I was intrigued by [this page][3] on the Wolfram site regarding volumetric visualization and manipulation of 3D visuals based on image slices.
It appears Mathematica can do exactly the same thing that the medical imaging system does, for about 1/30th the price, and produce better results, and also not lock someone into a proprietary file format. Also, Mathematica would permit many additional features such as statistical analyses and various other features across large volumes of collected patient data. Currently, that is a manual process involving data entry into Excel spreadsheets, if it is even done at all.
Has anyone used Mathematica for nuclear medicine?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=UPPER_LIMIT_slices.jpg&userId=1290344
[2]: http://www.medx-inc.co/index2e5c.html?fuseaction=products.236&
[3]: http://reference.wolfram.com/language/tutorial/VolumeRenderingAndProcessing.html
Andrew Watters, 2018-02-23T11:34:42Z

Make Mathematica's interface less blurry?
http://community.wolfram.com/groups/-/m/t/1202244
I run Mathematica 11.2 in Windows 10. I have a 4k monitor (resolution 3840x2160) at work and another at home running at recommended 150% scale. The Mathematica interface looks really blurry and it is painful to read (see attached image, the window behind Mathematica is the browser window where this message was being composed. Browser text is very sharp, as is the rest of Windows. Mathematica text is blurry).
My laptop (Surface book) runs at a resolution of 3000x2000 and 200% scale and Mathematica there looks even blurrier.
High dpi monitors have been out for many years and Mathematica has always been blurry for me on them. Is there a way of making it give good text? Am I missing some non-obvious setting that improves this?
Luis.
Luis Rademacher, 2017-10-13T02:23:21Z

[✓] Keep x/y (Divide[x,y]) from turning into Times[x, Power[y,-1]]?
http://community.wolfram.com/groups/-/m/t/1290158
I would like to have a way to keep x/y in the Divide[x, y] notation, but it seems
to go immediately to the standard form
Times[x, Power[y, -1]]
There does not seem to be any attribute that controls this as there is
for properties such as commutative and associative, etc.
Similarly, you have
FullForm[2/(3x)]
as
Times[2, Times[Rational[1, 3], Power[x, -1]]]
when
I might want to distinguish between 2/(3x) and (2/3)(1/x) etc.
I know this is normally a feature, not a bug, but there are times when I'd like to have
more control over conversion to standard form. Is there any way to do this??
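One documented workaround (a sketch): wrap the head in `Inactive`, which keeps the expression in the `Divide[x, y]` shape until `Activate` is applied, while still displaying as a quotient:

    expr = Inactive[Divide][x, y];
    FullForm[expr]  (* Inactive[Divide][x, y] -- the structure is preserved *)
    Activate[expr]  (* evaluates to the usual Times[x, Power[y, -1]] *)

Note this only helps for expressions you construct explicitly; input typed as 2/(3x) is already parsed into Times/Power before any hold can intervene.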
Much thanks. - Elaine
Elaine Kant, 2018-02-22T23:32:14Z

Analyze stereo panning and width of panned instruments?
http://community.wolfram.com/groups/-/m/t/1290711
Hi, I'm an independent music producer with an interest in R&D.
For background on this question, here is a helpful article on [stereo panning][1]. Stereo panning is used to give the sensation that a sound or instrument is coming from a particular direction, left, center, or right. Through adjustment of panning in a digital audio workstation or mix console, the listener can hear all of the instruments clearly even when there is a full band. By contrast, with mono sound, all the sounds come out of both sides of the system and seem like they are on top of each other. This is one reason stereo is such a vast improvement.
Anyway, in a typical mix console, there are potentiometers that control left and right panning. Here is a picture of a classic Solid State Logic 4000 G analogue console:
![SSL 4000 G][2]
Interesting effects are possible when you double the instrument and pan one copy hard left and one copy hard right, or some combination of that. This produces a sensation of "width" where the sound seems wider than a normal instrument.
I am wondering whether anyone has used Mathematica to analyze panning and the perceived width of sound in stereo mixes. If not, I think it would be an awesome use of the program. For my first act, I would compute panning and width of different mixes of the same work to see what listeners respond to the best. That also seems like a possible research opportunity.
[1]: http://joelambertmastering.com/mix-tips-from-your-mastering-engineer-4-going-wide-panning-to-leave-an-impression/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ssl-4000-g-ultimation-548708.jpg&userId=1290344
Andrew Watters, 2018-02-23T11:20:39Z

Catch up on documentation
http://community.wolfram.com/groups/-/m/t/1290522
A great deal of functionality is undocumented despite having been around in the same form for a long time.
While much of it is 'documented unofficially' here or on mathematica.stackexchange, the fact that it is not officially supported means:
- we can't expect WRI Support to answer (though they are more helpful than they need to be)
- it is very risky to build commercial solutions using it, and very cumbersome without it
I would really prefer that over anything new.
I'm talking about:
- **PacletManager`*** (Paclets, PacletInfo.m and all related)
Absolute must, how come it is not documented when it is now the standard way of managing/distributing packages?
- **Internal`*** (at least partially)
People use InheritedBlock, WithLocalSettings, HandlerBlock! [**all the time**.][1] They are really helpful
- **Language`***
Responsible for dependencies collecting for e.g. CloudDeployed APIFunction/FormFunction. But not only, stuff it contains would be a great help for advanced developers.
Why don't you give us tools as opposed to the solution which does not fit all use cases?
- **DynamicNamespace**, `DynamicLocation` and friends
Extremely helpful framework behind Graph's plotting feature (see what we know so far: [**mma.se.com/q/155237**](https://mathematica.stackexchange.com/q/155237/5478))
- **FrontEnd in general**: ``FrontEnd` `` ``FEPrivate` `` (e.g. AttachedCell) and longstanding bugs
Stagnation in this area saddens me deeply so I will just make a note that I care :-) And I know that supporting three front ends on different OSs and browsers is a huge task.
There are cases of hard-to-debug issues that have not been fixed since, sometimes, V9. If you are not planning to fix them, maybe it is time to mention them in the *Possible Issues* section, as I understand that they are not always pure bugs but sometimes side effects of subtle optimization patches.
----------
Did I forget about anything? Do you agree?
[1]: https://mathematica.stackexchange.com/search?q=InheritedBlock
Kuba Podkalicki, 2018-02-23T07:12:59Z

[✓] Make a 3D scatter plot using multiple data sets imported from excel?
http://community.wolfram.com/groups/-/m/t/1290300
I have multiple data sets in several Excel (.xlsx) files that I want to visualize as a single 3D scatter plot. I can import each data set individually and plot it using the ListPointPlot3D command:
data3 = Import["/Users/hewittwm/Documents/Mathematica/data3.xlsx"]
ListPointPlot3D[data3, BoxRatios -> 1, PlotRange -> {{0, 3000}, {0, 1}, {0, 1}}, AxesLabel -> {"Volume", "Buriedness", "Hydrophobicity"}]
data4 = Import["/Users/hewittwm/Documents/Mathematica/data4.xlsx"]
ListPointPlot3D[data4, BoxRatios -> 1, PlotRange -> {{0, 3000}, {0, 1}, {0, 1}}, AxesLabel -> {"Volume", "Buriedness", "Hydrophobicity"}]
But when I try to plot both together:
ListPointPlot3D[{data3,data4}]
I get an error saying "must be a valid array or a list of valid arrays"
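A hedged guess at the cause (my addition, not part of the original question): `Import` of an `.xlsx` file returns a list of sheets, so `data3` is itself wrapped in an extra list, and `{data3, data4}` is then not a list of point arrays. Taking the first sheet of each (the variable names `points3`/`points4` are mine) may resolve it:

```mathematica
(* Import["...xlsx"] yields {sheet1, ...}; unwrap the single sheet first *)
points3 = First[data3];
points4 = First[data4];
ListPointPlot3D[{points3, points4}, BoxRatios -> 1,
 PlotRange -> {{0, 3000}, {0, 1}, {0, 1}},
 AxesLabel -> {"Volume", "Buriedness", "Hydrophobicity"}]
```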
How do I resolve this? Is there a better way to go about plotting my data points in 3D? Any advice would be greatly appreciated as I am fairly new to Mathematica and the Wolfram Language.
William Hewitt2018-02-22T21:08:15Z
[GIF] Visualizing Interstellar's Wormhole: from article to programming
http://community.wolfram.com/groups/-/m/t/852052
*Click on image to zoom. Press browser back button to return to the post.*
----------
[![enter image description here][5]][5]
Let me start off by saying that I know almost nothing about general relativity, but I thought it was really fun translating the equations presented in [this paper](http://arxiv.org/abs/1502.03809) by Oliver James, Eugenie von Tunzelmann, Paul Franklin, and Kip Thorne into notebook expressions.
Embedding Diagrams
==================
The paper gives some really cool figures to show the curvature of 4-dimensional spacetime in the region around a wormhole. The physics of the wormhole is essentially described by three parameters:
1. $\rho$ - the radius of the wormhole
2. $a$ - the length of the wormhole
3. $\mathcal{M}$ - a parameter describing the curvature, described in the paper as the "gentleness of the transition from the wormhole's cylindrical interior to its asymptotically flat exterior"
To look at the curvature for a given set of parameters, we only really care about the ratios $a/\rho$ and $\mathcal{M}/\rho$.
Taking equations (5) and (6) from the paper, we can plot the curvature for any pair of these parameters using cylindrical coordinates. Since the $z$ coordinate is found via numerical integration, I chose to speed up the `ParametricPlot3D` by first forming an interpolating function.
embeddingDiagram[a_, M_, lmax_: 4] := Module[{ρ = 1, z, zz, x, r},
  x[l_] := (2 (Abs[l] - a))/(π*M);
  r[l_] := ρ + UnitStep[Abs[l] - a] (M (x[l]*ArcTan[x[l]] - 1/2 Log[1 + (x[l])^2]));
  z[l_] := NIntegrate[
    Sqrt[1 - (UnitStep[Abs[ll] - a] (2 ArcTan[(2 (-a + Abs[ll]))/(M π)] Sign[l])/π)^2],
    {ll, 0, l}];
  zz = Interpolation@({#, z[#]} & /@ Subdivide[lmax, 20]);
  ParametricPlot3D[
   {{r[l] Cos[t], r[l] Sin[t], zz[l]}, {r[l] Cos[t], r[l] Sin[t], -zz[l]}},
   {l, 0, lmax}, {t, 0, 2 π},
   PlotStyle -> Directive[Orange, Specularity[White, 50]],
   Boxed -> False, Axes -> False,
   ImageSize -> 500, PlotPoints -> {40, 15}]
  ]
and here are three examples shown in the paper,
embeddingDiagram[0.005, 0.05/1.43]
embeddingDiagram[0.5, 0.014]
embeddingDiagram[0.5, 0.43, 10]
![enter image description here][2]
Tracing rays through the wormhole
=================================
The appendix to the paper describes a procedure for creating an image taken from a camera on one side of the wormhole. The procedure involves generating a map from one set of spherical polar coordinates (the "camera sky") to the "celestial spheres" describing the two ends of the wormhole.
First a location is chosen for the camera, then light rays are traced backwards in time from the camera to one of the two celestial spheres. This ray tracing involves solving 5 coupled differential equations back from $t=0$ to minus infinity (or a large negative time).
For this I use [`ParametricNDSolve`](http://reference.wolfram.com/language/ref/ParametricNDSolve.html). The functions being solved for are the spherical coordinates of the light rays and their momenta.
The parameters for `ParametricNDSolve` are the wormhole parameters listed above, the camera's position `{lcamera, θcamera, ϕcamera}` and the "camera sky" coordinates, used to build the map. Rather than walk through their derivation (again, not a cosmologist), I cite the paper for the equations given below:
rayTrace = Module[{
   (* auxiliary variables *)
   nl, nϕ, nθ, pϕ, bsquared, M, x, r, rprime,
   (* parameters for ParametricNDSolve *)
   θcamsky, ϕcamsky, ρ, lcamera, θcamera, ϕcamera, W, a,
   (* time-dependent functions to be solved for *)
   l, θ, ϕ, pl, pθ,
   (* the time variable *)
   t
   },
(* Eq. (7) *)
M = W/1.42953;
(*Eq. 5 *)
x[l_] := (2 (Abs[l] - a))/(π*M);
r[l_] := ρ +
UnitStep[
Abs[l] - a] (M (x[l]*ArcTan[x[l]] - 1/2 Log[1 + (x[l])^2]));
rprime[l_] :=
UnitStep[
Abs[l] -
a] (2 ArcTan[(2 (-a + Abs[l]))/(M π)] Sign[l])/π;
(* Eq. A.9b *)
nl = -Sin[θcamsky] Cos[ϕcamsky];
nϕ = -Sin[θcamsky] Sin[ϕcamsky];
nθ = Cos[θcamsky];
(*Eq. A.9d*)
pϕ = r[lcamera] Sin[θcamera] nϕ;
bsquared = (r[lcamera])^2*(nθ^2 + nϕ^2);
ParametricNDSolveValue[{
(* Eq. A.7 *)
l'[t] == pl[t],
θ'[t] == pθ[t]/(r[l[t]])^2,
ϕ'[t] == pϕ/((r[l[t]])^2 (Sin[θ[t]])^2),
pl'[t] == bsquared*rprime[l[t]]/(r[l[t]])^3,
pθ'[t] ==
pϕ^2/(r[l[t]])^2 Cos[θ[t]]/(Sin[θ[t]])^3,
(* Eq. A.9c *)
pl[0] == nl,
pθ[0] == r[lcamera] nθ,
(* Initial conditions, paragraph following Eq. A.9d *)
l[0] == lcamera,
θ[0] == θcamera,
ϕ[0] == ϕcamera
},
{l, θ, ϕ, pl, pθ},
{t, 0, -10^6},
{θcamsky, ϕcamsky,
lcamera, θcamera, ϕcamera, ρ, W, a}]];
Now to use the `rayTrace` function: we want to build up an array of values from which a `ListInterpolation` function can map any direction in the camera's local sky to coordinates in one of the celestial spheres. Exactly which celestial sphere is determined by the sign of the length coordinate, `l`. The size of the array is very important. I find that it is important to use an odd number of array elements, or you'll end up with an ugly vertical line in the center of your image.
generateMap[nn_, lc_, θc_, ϕc_, ρ_, W_, a_] :=
 ParallelTable[
  {Mod[#2/π, 1], Mod[#3/(2 π), 1], #1} & @@
   Through[rayTrace[θ, ϕ, lc, θc, ϕc, ρ, W, a][-10^6]][[;; 3]],
  {θ, Subdivide[π, nn]}, {ϕ, Subdivide[2 π, nn]}]
Finally you need a function to transform the two input images using the map generated by the above function. I would be very happy if someone could suggest a method to do this better - perhaps using `ImageTransformation`? I was able to make something work with `ImageTransformation` but it was much less efficient than this. Essentially, `ImageTransformation` can map pixels from one part of an image to another, but they won't grab pixels from another image. You could create a composite image, with the two stacked on top of each other, or you could use the transformation function on each one separately and then combine them.
blackHoleImage[foreground_, background_, map_] :=
Module[{raytracefunc, img1func, img2func, nrows, ncols, mapfunc},
{nrows, ncols} = Reverse@ImageDimensions@foreground;
raytracefunc =
ListInterpolation[#, {{1, nrows}, {1, ncols}},
InterpolationOrder -> 1] & /@ Transpose[(map), {2, 3, 1}];
img1func =
ListInterpolation[#, {{0, 1}, {0, 1}}] & /@
Transpose[(foreground // ImageData), {2, 3, 1}];
img2func =
ListInterpolation[#, {{0, 1}, {0, 1}}] & /@
Transpose[(background // ImageData), {2, 3, 1}];
mapfunc[a_, b_, x_ /; x <= 0] := Through[img2func[a, b]];
mapfunc[a_, b_, x_ /; x > 0] := Through[img1func[a, b]];
Image@Array[
mapfunc @@ Through[raytracefunc[#1, #2]] &, {nrows, ncols}]
]
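For reference, here is a heavily hedged sketch of the composite-image idea mentioned above, using `ImageTransformation` on the two images stacked with `ImageAssemble`. The helper name `wormholeViaTransformation` is mine, and the coordinate conventions (which map component is θ, whether the vertical direction needs flipping) are assumptions that may need adjusting:

```mathematica
(* Sketch only: stack foreground over background, then let the
   transformation function pick the half based on the sign of l. *)
wormholeViaTransformation[foreground_, background_, map_] :=
 Module[{composite, fθ, fϕ, fl},
  composite = ImageAssemble[{{foreground}, {background}}];
  (* unit-square interpolations for θ', ϕ' and l from the map *)
  {fθ, fϕ, fl} =
   ListInterpolation[#, {{0, 1}, {0, 1}}, InterpolationOrder -> 1] & /@
    Transpose[map, {2, 3, 1}];
  ImageTransformation[composite,
   Function[p,
    With[{θ = fθ[1 - p[[2]], p[[1]]], ϕ = fϕ[1 - p[[2]], p[[1]]],
      l = fl[1 - p[[2]], p[[1]]]},
     (* foreground occupies the upper half of the composite *)
     {ϕ, If[l > 0, 1/2 + (1 - θ)/2, (1 - θ)/2]}]],
   DataRange -> {{0, 1}, {0, 1}}]]
```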
Low-resolution test
===================
To generate a map using `nn=501` takes about 15 to 20 minutes on my PC, so it's no good for testing the effects of various parameters. Instead, we'll make a much smaller map, and the quality of the image will be lower. We can grab a couple of images from the site listed in the paper,
foreground=Import["http://www.dneg.com/wp-content/uploads/2015/02/InterstellarWormhole_Fig6b.jpg"];
background=Import["http://www.dneg.com/wp-content/uploads/2015/02/InterstellarWormhole_Fig6a.jpg"];
and make a 101 by 101 map in under a minute:
map1 =
generateMap[101, 6.0, π/2, 0, 5.0, 0.07, 2.4]; // AbsoluteTiming
(* {36.2135, Null} *)
Here I've taken some parameters I think make a cool picture ($\rho = 5.0$, $a = 2.4$, $W = \mathcal{M}/1.43 = 0.07$) and put the camera at $\{l, \theta, \phi\} = \{ 6, \pi/2, 0 \}$. Since the map is low resolution, I can reduce the resolution of the images to get a quick result,
blackHoleImage[ImageResize[foreground, 500],
ImageResize[background, 500], map1]
[![enter image description here][3]][4]
But if you set `nn=501` and don't resize the images you get
![enter image description here][5]
Alien invasion
==============
Have you ever read the *Commonwealth Saga* by Peter F. Hamilton, wherein an alien invades human-held territories via wormhole with the intent of exterminating our species?
![Mathematica graphics](http://i.imgur.com/DQ33Qkh.gif)
[here](https://www.dropbox.com/s/f9zu8eyjnxx77jj/out.mp4?dl=1) is a better quality, lower filesize mp4 of the above animation. To make this one, I varied the wormhole width from 0 up to 5, then used `ImageCompose` to add in [this](http://www.wpclipart.com/cartoon/aliens/alien_ship/flying_saucer_2_T.png) stock image flying saucer, then shrunk the wormhole back to zero width.
Unfinished tasks
--
I think it would be very interesting to take the result of `rayTrace` and plot it on top of the embedding diagram, but I haven't quite figured this out.
I also think it would be pretty neat to take a terrestrial picture (say of the White House), and have a wormhole open up in the background. Finally, I would be very pleased to figure out how to put the wormhole at any position in an image I want, rather than just the center.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=frame_100.png&userId=130877
[2]: http://i.stack.imgur.com/f3drY.png
[3]: http://i.stack.imgur.com/cvKBK.png
[4]: http://i.stack.imgur.com/cvKBK.png
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=frame_100.jpg&userId=20103
Jason Biggs2016-05-06T13:44:35Z
Avoid truncated tweet texts when using ServiceExecute?
http://community.wolfram.com/groups/-/m/t/1289820
I use the Twitter command to extract tweets and do some analysis
twitter = ServiceConnect["Twitter", "New"]
listTws = twitter["TweetSearch", "Query" -> "Trump", MaxItems -> 10];
For example, the text of the first tweet can be extracted with
listTws[[1]]["Text"]
whose output is
"RT @BreitbartNews: \"Let\[CloseCurlyQuote]s turn a negative into a
positive... I want them to say, \[OpenCurlyQuote]Look, we are not
profiting off the deaths of children. We..."
The tweet is truncated after "We...". If I look at the [original tweet][1] (ID 966454631361589249), the original phrase is complete.
How can I extract the whole sentence?
Is there a way to avoid importing retweets? They are the majority and look like "RT @name blabla".
[Here][2] I read that I need to set "tweet_mode=extended". How can I do that with Mathematica?
**UPDATE:** I have noticed that the truncated tweets' length is 140 characters, whereas the full length is 280. This may be related to Twitter's new 280-character limit. Is there a way to fix it?
[2]: https://twittercommunity.com/t/retrieve-full-tweet-when-truncated-non-retweet/75542/3
[1]: https://twitter.com/nigel_trump/status/966454631361589249
Francesco Sgarlata2018-02-21T23:57:16Z
WordDefinition function in 10.3 fails
http://community.wolfram.com/groups/-/m/t/583266
I just installed 10.3. Here is the result of running the WordDefinition function exactly as it appears in the docs:
In[3]:= WordDefinition["dolphin"]
During evaluation of In[3]:= CreatePaclet::badppi: The paclet C:\Users\David\AppData\Roaming\Mathematica\Paclets\Temporary\WordData_Canonicalization-10.0.254656959.paclet does not have a properly formatted PacletInfo.m or PacletInfo.wl file. You can use the VerifyPaclet function to get more detailed information about the error.
During evaluation of In[3]:= PacletInstall::instl: An error occurred installing paclet from file C:\Users\David\AppData\Roaming\Mathematica\Paclets\Temporary\WordData_Canonicalization-10.0.254656959.paclet: Not a valid paclet.
During evaluation of In[3]:= WordData::dlfail: Internet download of data for WordData failed. Use Help > Internet Connectivity... to test or reconfigure internet connectivity. >>
During evaluation of In[3]:= CreatePaclet::badppi: The paclet C:\Users\David\AppData\Roaming\Mathematica\Paclets\Temporary\WordData_Canonicalization-10.0.254656410.paclet does not have a properly formatted PacletInfo.m or PacletInfo.wl file. You can use the VerifyPaclet function to get more detailed information about the error.
During evaluation of In[3]:= PacletInstall::instl: An error occurred installing paclet from file C:\Users\David\AppData\Roaming\Mathematica\Paclets\Temporary\WordData_Canonicalization-10.0.254656410.paclet: Not a valid paclet.
During evaluation of In[3]:= WordData::dlfail: Internet download of data for WordData failed. Use Help > Internet Connectivity... to test or reconfigure internet connectivity. >>
Out[3]= Missing["NotAvailable"]
David Keith2015-10-16T00:40:48Z
Analyzing historical State of the Union address data
http://community.wolfram.com/groups/-/m/t/1275556
When I heard the State of the Union was coming up, I knew it was going to call for a Mathematica data deep dive! With the help of the data from [this site][1] and some archived Wikipedia data, I was able to do just that. Mathematica made it easy to compare and contrast the State of the Union addresses given by each president.
**Creating a CSV file of all the relevant data**
With all the data from the UCSB site, I was well on my way to doing something interesting. I was able to copy most of this information as plain text into an Excel document. However, with dates outside the normal date range, I had to be careful that none of these were converted into invalid dates. I also had to hand paste all of the URLs to the SOTU addresses I found on Wikipedia as well as the time length of the spoken speeches. I've linked the CSV file [here][2] as well as in the Mathematica notebook, so you can avoid the same hassle!
**Importing the data and speeches**
Using SemanticImport, this process was relatively simple. The dates, time lengths, and URLs were automatically recognized as entities, which saved me quite a bit of programming time.
fileLocation =
"https://amoeba.wolfram.com/index.php/s/8RtPmIDGwzrr0FR/download";
SOTU = SemanticImport[fileLocation, Automatic, "Rows"]
I was then able to use Import to pull all of the speeches from the URLs obtained in that SemanticImport. These were all imported as lists of strings automatically.
speechList = Import[SOTU[[#]][[6]]] & /@ Range[Length[SOTU]]
Using the AppendTo functionality, I was easily able to add these imported speeches to the original dataset using an iterative loop.
i = 1;
While[i < Length[speechList] + 1,
AppendTo[SOTU[[i]], speechList[[i]]];
i++;
]
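As an aside, the same loop can be written without explicit indexing - a sketch of mine, assuming `SOTU` is a plain list of row lists as imported above:

```mathematica
(* append each imported speech to its corresponding row in one pass *)
SOTU = MapThread[Append, {SOTU, speechList}];
```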
Finally, using the GroupBy function, I was able to sort this list into lists associated with specific presidents. This was especially helpful for some of the Manipulate functions used later to create WordClouds.
SOTUgrouped = GroupBy[SOTU, First]
**Using WordCloud to visualize each speech**
WordCloud is one of my favorite functions to use in analyzing text and speeches. This function makes it extremely easy to quickly visualize word frequencies in textual data. In this case, I analyzed all of President Trump's State of the Union addresses, using WordSelectionFunction to exclude the "applause" markers included in the transcript. Considering there was only one, this was relatively easy, but I wanted to create a skeleton for the next portion that would join all of his speeches together with the StringJoin function and Slot functionality.
string = StringJoin[
SOTUgrouped["Donald Trump"][[#]][[7]] & /@
Range[Length[SOTUgrouped["Donald Trump"]]]];
WordCloud[string, WordSelectionFunction -> (# != "applause" &)]
![enter image description here][3]
I will say this time and time again: one of the most interesting functions in Mathematica is the Manipulate function. Using this, I was able to produce the same WordCloud for every single one of the presidents and give the user the ability to flip through different presidents and compare. Using the Keys function, I was able to quickly create a list of presidents for the variable President from my grouped list created earlier.
totalsWC = Manipulate[
string =
StringJoin[
SOTUgrouped[President][[#]][[7]] & /@
Range[Length[SOTUgrouped[President]]]];
WordCloud[string, WordSelectionFunction -> (# != "applause" &)],
{President, Keys[SOTUgrouped]}
]
![enter image description here][4]
To take this a step further, I thought it would be even more interesting to have the option to select either the president's speeches as a whole or particular speeches to see how the focus changed throughout the presidency. I first programmed the Speech variable to update dynamically based on the number of speeches under each president. Unfortunately, this would cause some issues when flipping between presidents if a Speech value was out of range. I used some simple If statements with the MemberQ function to make this a bit more user-friendly when jumping around between presidents. Overall though, I followed a pretty similar skeleton to the previous example.
bySpeechWC = Manipulate[
speeches :=
Join[{0 -> "All Speeches"}, # -> "Speech #" <> ToString[#] & /@
Range[Length[SOTUgrouped[President]]]];
wc := If[
MemberQ[Keys[speeches], Speech],
If[
Speech == 0,
WordCloud[string,
WordSelectionFunction -> (# != "Applause" && # != "applause" &),
ImageSize -> Large],
WordCloud[SOTUgrouped[President][[Speech]][[7]],
WordSelectionFunction -> (# != "Applause" && # != "applause" &),
ImageSize -> Large]
],
"Word Cloud cannot be generated until you choose a speech # \
within range"];
date := If[
MemberQ[Keys[speeches], Speech],
If[
Speech == 0,
"N/A",
SOTUgrouped[President][[Speech]][[2]]
],
"Date cannot be generated until you choose a speech # within \
range"];
string =
StringJoin[
SOTUgrouped[President][[#]][[7]] & /@
Range[Length[SOTUgrouped[President]]]];
Column[{
Row[{"President: " <> President}],
Row[{"Date: ", date}],
Row[{wc}]
}],
{President, Keys[SOTUgrouped]},
{Speech, Dynamic[speeches]},
ControlType -> PopupMenu,
Initialization :> (Speech = 0)
]
![enter image description here][5]
**Exploring word usage across different eras**
I decided to take this textual analysis a step further and compare the usage of the most common state of the union words over time. I started by creating a list of the top 100 used words across all of the speeches using WordCounts. I also used StringJoin to combine all of the speeches and DeleteStopwords to avoid counting words like "the", "and", "with", etc.
top100PresTerms =
Take[WordCounts[DeleteStopwords[StringJoin[speechList]],
IgnoreCase -> True], 100]
I knew I would need to tally these within each speech, so I started by testing the Tally function with George Washington's original State of the Union then used the Keys function to find each of the Keys in the top 100 within this speech. This list was put into a list of Associations. Missing[] terms were used as placeholders for top 100 words not found.
gw1 = WordCounts[DeleteStopwords[SOTU[[1]][[7]]], IgnoreCase -> True]
Association[# -> gw1[#] & /@ Keys[top100PresTerms]]
Using this same methodology, I was able to iterate through every speech in order to pair a date with their respective lists of associations of the top 100 words.
i = 1;
wordUse = {};
While[i < Length[SOTU] + 1,
allWords =
WordCounts[DeleteStopwords[SOTU[[i]][[7]]], IgnoreCase -> True];
compList = Association[# -> allWords[#] & /@ Keys[top100PresTerms]];
AppendTo[wordUse, SOTU[[i]][[2]] -> compList];
i++;
]
To explore how this list could be used, I picked a specific word, "government", from my top 100 list. I used this to create a time series of all of the "government" mentions in the speeches. This list was then usable for a DateListPlot to show the change in the frequency of this word over time.
govCount = {wordUse[[#]][[1]], wordUse[[#]][[2]]["government"]} & /@
Range[Length[wordUse]]
DateListPlot[govCount, ImageSize -> Large]
![enter image description here][6]
Using the same methodology, I was able to create similar time series for each of the top 100 words and store them in a list of associations, which would prove to be beneficial for the Manipulate function.
i = 1;
timeSeries = <||>;
While[i < Length[Keys[top100PresTerms]] + 1,
key = Keys[top100PresTerms][[i]];
keyList =
Association[
wordUse[[#]][[1]] -> wordUse[[#]][[2]][key] & /@
Range[Length[wordUse]]];
AppendTo[timeSeries, key -> DeleteMissing[keyList]];
i++;
]
This list of associations made it easy to pull the specific time series for each of the words in the top 100 list. You can see an example of this with the word "state".
timeSeries["state"]
Using this same methodology, I was able to use the Keys function to again pull specific time series from my new list of associations based on the selected top 100 word. The time series was then plotted on a DateListPlot. I allowed for the user to select and compare two different key words. A nice added feature is the legend that shows the word and also their position in the top 100 list. This was made possible with the Position function.
Manipulate[
DateListPlot[{timeSeries[keyWord1], timeSeries[keyWord2]},
Filling -> Bottom,
ImageSize -> Full,
PlotRange -> Full,
PlotStyle -> {Red, Blue},
GridLines -> {Range[DateObject[{1790}], DateObject[{2015}],
Quantity[5, "Years"]], Range[0, 200, 5]},
PlotLegends ->
Placed[{keyWord1 <> " " <>
ToString[Flatten[Position[Keys[top100PresTerms], keyWord1]]],
keyWord2 <> " " <>
ToString[Flatten[Position[Keys[top100PresTerms], keyWord2]]]},
Above]],
{keyWord1, Keys[top100PresTerms]},
{keyWord2, Keys[top100PresTerms]}
]
![enter image description here][7]
**Observing the trend of spoken vs. written speeches**
I noticed that there were also ebbs and flows of written vs. spoken speeches. To look at this, I tallied the spoken and written speeches of each president and put them into yet another list of associations.
i = 1;
presKeys = Keys[SOTUgrouped];
sORwTally = <||>;
While[i < Length[presKeys] + 1,
indivTally =
Tally[SOTUgrouped[presKeys[[i]]][[#]][[3]] & /@
Range[Length[SOTUgrouped[presKeys[[i]]]]]];
AppendTo[sORwTally, presKeys[[i]] -> indivTally];
i++;
]
I decided that I wanted to visualize these in a PairedBarChart, so I placed them into two lists, spoken and written. I used a series of If statements to test for "spoken" and "written" tallies in my original list, and used 0 as a placeholder in the respective list if one or the other was not present for a specific president.
i = 1;
spoken = {};
written = {};
While[i < Length[presKeys] + 1,
If[Length[sORwTally[presKeys[[i]]]] > 1,
AppendTo[spoken, sORwTally[presKeys[[i]]][[1]][[2]]];
AppendTo[written, sORwTally[presKeys[[i]]][[2]][[2]]];
,
If[sORwTally[presKeys[[i]]][[1]][[1]] == "spoken",
AppendTo[spoken, sORwTally[presKeys[[i]]][[1]][[2]]];
AppendTo[written, 0];
,
AppendTo[spoken, 0];
AppendTo[written, sORwTally[presKeys[[i]]][[1]][[2]]];
]
];
i++;
]
Using this compiled list, it was simple to use the PairedBarChart. I experimented with ChartLabels to make the list a little more user-friendly and ChartStyle to give it a little extra color.
PairedBarChart[spoken, written,
ChartLabels -> {Placed[{"Spoken", "Written"}, Above], None,
presKeys}, ImageSize -> Full, ChartStyle -> "Rainbow"]
![enter image description here][8]
**Comparing the average word count of each president (written, spoken, and total)**
On a similar note, I thought it may be interesting to look at how the word counts of both spoken and written State of the Union addresses varied among presidents. I created a list of associations of presidents' spoken, written, and total word count averages using a series of While loops. I added some If statements to set the average to 0 instead of using the Mean function, as I anticipated that some presidents would not have any "spoken" or "written" instances, per what I found in my previous analysis.
i = 1;
presWordMeans = <||>;
While[i < Length[presKeys] + 1,
j = 1;
indS = {};
indW = {};
indAll = {};
sORwList =
SOTUgrouped[presKeys[[i]]][[#]][[3]] & /@
Range[Length[SOTUgrouped[presKeys[[i]]]]];
While[j < Length[sORwList] + 1,
If[sORwList[[j]] == "spoken",
AppendTo[indS, SOTUgrouped[presKeys[[i]]][[j]][[4]]],
AppendTo[indW, SOTUgrouped[presKeys[[i]]][[j]][[4]]]
];
AppendTo[indAll, SOTUgrouped[presKeys[[i]]][[j]][[4]]];
j++;
];
If[Length[indS] == 0,
sMean = 0;,
sMean = N[Mean[indS]];
];
If[Length[indW] == 0,
wMean = 0;,
wMean = N[Mean[indW]];
];
allMean = N[Mean[indAll]];
AppendTo[presWordMeans, presKeys[[i]] -> {sMean, wMean, allMean}];
i++;
]
Again, using the Manipulate function, I was able to make an interesting dynamic BarChart for users to compare this data across five different presidents. The Keys function again made it incredibly simple to create lists of presidents for the user to select from. ChartLegends were added to distinguish the different averages and ChartLabels were added to clarify which group corresponded to which president.
Manipulate[
BarChart[{presWordMeans[pres1], presWordMeans[pres2],
presWordMeans[pres3], presWordMeans[pres4], presWordMeans[pres5]},
ImageSize -> Full, BarSpacing -> {0, 1},
ChartLabels -> {{pres1, pres2, pres3, pres4, pres5}, None},
ChartStyle -> {Red, Blue, Gray},
ChartLegends ->
Placed[{"Average Spoken Words", "Average Written Words",
"Average Words"}, Above]],
{pres1, presKeys},
{pres2, presKeys},
{pres3, presKeys},
{pres4, presKeys},
{pres5, presKeys}
]
![enter image description here][9]
**Conclusion**
I hope this has proven to be an interesting exploration of textual data analysis. I rarely get a chance to dive into the social sciences with Mathematica, so it was certainly a fun experience for me. This should be a good example of all the different input capabilities as well as the several types of data visualization tools that can be used with the Manipulate function.
Download the full notebook [here][10] or via the attached file! (Sorry guys, but you'll have to upload this year's data on your own!)
[1]: http://www.presidency.ucsb.edu/sou.php
[2]: https://amoeba.wolfram.com/index.php/s/8RtPmIDGwzrr0FR
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUtrumpwc.JPG&userId=1161398
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUpreswc.JPG&userId=1161398
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUbyspeechman.JPG&userId=1161398
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUgovtdateplot.JPG&userId=1161398
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUwordman.JPG&userId=1161398
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUpairedbc.JPG&userId=1161398
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUbcman.JPG&userId=1161398
[10]: https://amoeba.wolfram.com/index.php/s/gS3Z8hcgAmIwlXn
Sam Tone2018-01-31T15:41:28Z
Connect to a FileMaker .fmp12 database using DatabaseLink?
http://community.wolfram.com/groups/-/m/t/1289467
Using `DatabaseLink`, is it possible to connect to and query a FileMaker `.fmp12` database? If so, how?
Murray Eisenberg2018-02-21T21:59:27Z
Load a function from cubin, ptx or library file using CUDAFunctionLoad?
http://community.wolfram.com/groups/-/m/t/1289656
According to the documentation of `CUDAFunctionLoad` it should be easy to specify a compiled file (cubin, ptx, dll should all work) as the source for loading a `CUDAFunction`. Unfortunately it does not for me. Compiling from source works fine, but as soon as I try to load the `CUDAFunction` from the compiled file (I tried cubin, ptx and a dll) things fail.
Here is a very simple example that does not work for me, no matter what combination I try:
Let's create a cubin file first from a very simple CUDA kernel:
Needs["CUDALink`"];
code = "
__global__ void addTwo(int * in, int * out, int length) {
int index = threadIdx.x + blockIdx.x*blockDim.x;
if (index < length)
out[index] = in[index] + 2;
}";
cubinFile = CreateExecutable[code, "test", "Compiler" -> NVCCCompiler,
"CreateCUBIN" -> True];
This successfully creates `test.cubin`.
Unfortunately loading the function `addTwo` fails:
cudaFun = CUDAFunctionLoad[File[cubinFile],
"addTwo", {{_Integer, _, "Input"}, {_Integer, _,
"Output"}, _Integer}, 256, "ShellCommandFunction" :> Print,
"ShellOutputFunction" -> Print];
> CUDAFunctionLoad::invsrc: CUDALink encountered invalid source input.
> The source input must be either a string containing the program, or a
> list of one element indicating the file containing the program.
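Reading that message literally suggests a workaround worth trying (a hedged guess on my part, not a confirmed fix): pass the file path as a one-element list rather than wrapped in `File`:

```mathematica
(* hedged guess based on the error text above: a one-element list
   "indicating the file containing the program" *)
cudaFun = CUDAFunctionLoad[{cubinFile},
  "addTwo", {{_Integer, _, "Input"}, {_Integer, _, "Output"}, _Integer},
  256]
```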
The input file should be valid, but maybe I am missing something obvious here. Interestingly enough going the same route creating a .ptx file yields a different error:
ptxFile =
CreateExecutable[code, "test", "Compiler" -> NVCCCompiler,
"CreatePTX" -> True];
cudaFun =
CUDAFunctionLoad[File[ptxFile],
"addTwo", {{_Integer, _, "Input"}, {_Integer, _,
"Output"}, _Integer}, 256, "ShellCommandFunction" :> Print,
"ShellOutputFunction" -> Print];
> CUDAFunctionLoad::notfnd: CUDALink resource not found.
In addition to cubin and ptx files I tried compiling a library file using `CreateLibrary`, which was created fine but also could not be loaded using `CUDAFunctionLoad`.
Any ideas on what is going wrong here and how I can actually load a CUDAFunction from a compiled file?
You can just copy and paste the code above into Mathematica and run it, as long as you have CUDA set up properly. Can you reproduce the behavior?
------------------
Additional Information:<br>
I am running Mathematica 11.2 on Windows 10.<br>
CUDA is set up properly and I can do all CUDA computations in Mathematica.
Wizard2018-02-22T02:06:27Z
Consistent foreign exchange options
http://community.wolfram.com/groups/-/m/t/1278848
Consistency in foreign exchange derivatives is discussed in the note below, where we look at the problem from the probability measure perspective. We review option valuation from both sides of the FX contract and conclude that investors' preferences are subject to different probability measures when the FX rate inverts. Following this, we prove the validity of Siegel's paradox.
![enter image description here][1]
**Introduction**
------------
Foreign exchange options are the oldest options in the market, with a long history of trading. As such, they have been deeply researched and are well understood. Nevertheless, we return to this topic to look at product consistency, since this may still not be entirely clear. We review this consistency from both the domestic and foreign perspectives and show what adjustments are required to ensure the options are arbitrage-free when the investor's position changes.
**Foreign exchange options - 1st currency measure**
-----------------------------------------------
Foreign exchange options are financial contracts on an FX rate, i.e. the rate of exchange of currency 1 for currency 2. GBP/USD or EUR/USD are examples of such currency pairs. Options are essentially contracts on the future spot FX rate. We will demonstrate the exposition of this subject using the EUR/USD exchange rate. This is the rate that sets the exchange equation $X = \[Euro]1.
A reader familiar with the equity derivatives market will immediately spot the similarity between these two products. If the equity growth rate under the risk-neutral measure is the risk-free rate r, the equity pays a continuous dividend yield q, and the price process is assumed log-normal, this is identical to the FX case when we interpret r as the USD risk-free rate and q as the equivalent EUR risk-free rate.
Looking at this from the USD perspective, we can express the EUR/USD FX process as:
$$dF = F (r-q) dt + σ F dW$$
This is a well-known log-normal process for the exchange rate where F represents the EUR/USD rate, \[Sigma] is the FX rate volatility and W represents a Wiener process under the USD-measure.
Pricing an option on this future rate is trivial - this is an option to buy \[Euro]1 for K USD at time T. Therefore, from the USD perspective, the option pays Max[0, F-K], where K is the strike exchange rate. Pricing this option in Mathematica is easy - we build the standard Ito process for initial value F0.
ipUSD = Refine[
ItoProcess[{(r - q)*F, \[Sigma]*F}, {F, F0}, t], {\[Sigma] > 0,
F[t] > 0, t > 0}];
{Mean[ipUSD[t]], Variance[ipUSD[t]]}
{E^((-q + r) t) F0, E^(-2 q t + 2 r t) (-1 + E^(t \[Sigma]^2)) F0^2}
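As a cross-check outside Mathematica, the closed-form mean and variance above can be verified by Monte Carlo simulation of the log-normal process. The following Python sketch (with illustrative parameter values assumed for this check) uses only the standard library:

```python
import math
import random

# Illustrative parameters (assumed for this check, not market data)
F0, r, q, sigma, t = 1.35, 0.01, 0.012, 0.2, 0.5

# Closed-form moments returned by ItoProcess above
mean_cf = F0 * math.exp((r - q) * t)
var_cf = math.exp(-2 * q * t + 2 * r * t) * (math.exp(t * sigma**2) - 1) * F0**2

# Monte Carlo: F_t = F0 Exp[(r - q - sigma^2/2) t + sigma Sqrt[t] Z], Z ~ N(0,1)
random.seed(42)
n = 200_000
samples = [F0 * math.exp((r - q - 0.5 * sigma**2) * t
                         + sigma * math.sqrt(t) * random.gauss(0.0, 1.0))
           for _ in range(n)]
mean_mc = sum(samples) / n
var_mc = sum((x - mean_mc) ** 2 for x in samples) / (n - 1)

assert abs(mean_mc / mean_cf - 1) < 0.01
assert abs(var_mc / var_cf - 1) < 0.05
```

The simulated moments agree with the closed forms to within Monte Carlo error.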
The option premium from the USD-perspective is an expectation of the above Ito Process.
usdOpt = Exp[-r*t]*
Expectation[Max[F[t] - K, 0], F \[Distributed] ipUSD,
Assumptions ->
F0 > 0 && K > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
-(1/2) E^(-r t) (-2 E^((-q + r) t) F0 +
E^((-q + r) t)
F0 Erfc[(t (-2 q + 2 r + \[Sigma]^2) + 2 Log[F0] - 2 Log[K])/(
2 Sqrt[2] Sqrt[t] \[Sigma])] +
K Erfc[(t (2 q - 2 r + \[Sigma]^2) - 2 Log[F0] + 2 Log[K])/(
2 Sqrt[2] Sqrt[t] \[Sigma])])
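The Erfc expression above is the familiar Garman-Kohlhagen formula in disguise: Erf and the normal CDF are related by Phi(x) = (1 + Erf(x/Sqrt[2]))/2. As an illustration, here is a Python sketch of the equivalent normal-CDF form, evaluated at the parameter values used in the numerical comparison later in this note:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2)))/2
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fx_call_usd(F0, K, t, sigma, r, q):
    """Garman-Kohlhagen value of a call on EUR/USD from the USD perspective."""
    d1 = (math.log(F0 / K) + (r - q + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return F0 * math.exp(-q * t) * norm_cdf(d1) - K * math.exp(-r * t) * norm_cdf(d2)

price = fx_call_usd(F0=1.35, K=1.36, t=0.5, sigma=0.2, r=0.01, q=0.012)
print(round(price, 6))  # ~0.070452
```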
**Foreign exchange options - 2nd currency measure**
-----------------------------------------------
Now we touch upon a part that is less clear - what if the option buyer (seller) thinks from the EUR perspective? This is quite legitimate, as option buyers or sellers can have different preferences when entering into the option contract. How do we ensure that the option contract is consistent from each side's perspective?
Let's spell out the EUR investor's position by replicating the USD investor's side:
- The EUR riskless process is dP = P q dt, rather than dB = B r dt, which represents the USD process
- The exchange rate is now 1/F rather than F
- If the SDE for the exchange rate from the USD point of view is the one above, then by Ito's lemma the process for 1/F becomes:
f = 1/F;
ip02 = Refine[
ItoProcess[{(r - q)*F, \[Sigma]*F, f}, {F, F0}, t], {\[Sigma] > 0,
F[t] > 0, t > 0, r > 0, q > 0}];
ipEUR = ItoProcess[ip02] // Simplify
ItoProcess[{{(-q + r) F[t], (q - r + \[Sigma]^2)/
F[t]}, {{\[Sigma] F[t]}, {-(\[Sigma]/F[t])}}, \[FormalX]1[
t]}, {{F, \[FormalX]1}, {F0, 1/F0}}, {t, 0}]
The inverted FX rate (USD/EUR) produces a different Ito process from the one observed on the USD side. This is clear from the definition below:
$$d(1/F) = (1/F) (q-r+σ^2) dt -σ (1/F) d W$$
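The extra \[Sigma]^2 term in the drift of 1/F can be checked by simulation: averaging 1/F_t over USD-measure paths should reproduce (1/F0) E^((q - r + \[Sigma]^2) t). A stdlib-Python sketch (illustrative parameters assumed):

```python
import math
import random

# Illustrative parameters (assumed)
F0, r, q, sigma, t = 1.35, 0.01, 0.012, 0.2, 0.5

# Simulate F_t under the USD measure and average 1/F_t
random.seed(7)
n = 200_000
inv_mean_mc = sum(
    1.0 / (F0 * math.exp((r - q - 0.5 * sigma**2) * t
                         + sigma * math.sqrt(t) * random.gauss(0.0, 1.0)))
    for _ in range(n)
) / n

# Mean implied by the drift (q - r + sigma^2) of the inverted process
inv_mean_cf = (1.0 / F0) * math.exp((q - r + sigma**2) * t)

assert abs(inv_mean_mc / inv_mean_cf - 1) < 0.01
```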
Our objective is to find probability measure under which the FX option priced in the first section from the USD-perspective will be identical to the one priced from the EUR-perspective. Let's take all tradable components of the trade: (i) USD risk-free discount factor B , (ii) FX rate EUR/USD F and (iii) EUR discount factor P. Based on this we define:
- USD-risk-free process converted to EUR: B/F
- Discounted value of the above : B/ (F P)
So, we need a multi-dimensional Ito process to model B/(F P)
ip03 = Refine[
ItoProcess[{{0, r B, q P}, {F \[Sigma], 0, 0},
B/(P F)}, {{F, B, P}, {F0, B0, P0}}, t], {\[Sigma] > 0, r > 0,
q > 0, t > 0}] // Simplify;
ipEUR2 = ItoProcess[ip03]
ItoProcess[{{0, r B[t], q P[t], (-q B[t] + r B[t] + \[Sigma]^2 B[t])/(
F[t] P[t])}, {{\[Sigma] F[t]}, {0}, {0}, {-((\[Sigma] B[t])/(
F[t] P[t]))}}, \[FormalX]1[t]}, {{F, B, P, \[FormalX]1}, {F0, B0,
P0, B0/(F0 P0)}}, {t, 0}]
From the above Ito formula, we extract two coefficients - the drift and the volatility of B/(F P) - and create a new ItoProcess that reflects the change when the FX inversion occurs.
Flatten[ipEUR2[[1]]];
dr = %[[4]] /. {F[t] -> 1, B[t] -> 1, P[t] -> 1}
vl = %%[[8]] /. {F[t] -> 1, B[t] -> 1, P[t] -> 1}
ItoProcess[{dr F, vl F}, {F, F0}, t];
ipEUR3 = ItoProcess[%]
-q + r + \[Sigma]^2
-\[Sigma]
ItoProcess[{{(-q + r + \[Sigma]^2) F[t]}, {{-\[Sigma] F[t]}},
F[t]}, {{F}, {F0}}, {t, 0}]
It is quite clear that the inverted FX rate process USD/EUR is indeed different from the one observed in the EUR/USD case.
In order to prove this consistency, we need to show that an FX call option on EUR/USD priced from the EUR point of view is identical to the one priced from the USD perspective. So, we need to prove that:
$$e^{-r t}\, E_{USD}\left[\max(F_t - K, 0)\right] = F_0\, e^{-q t}\, E_{EUR}\left[\frac{\max(F_t - K, 0)}{F_t}\right]$$
This is because the USD payoff is first converted into EUR at maturity (division by F_t) and the resulting EUR value is then converted back to USD at inception (multiplication by F_0). All we need to price this option is the following expectation:
eurOpt = F0 Exp[-q t] Expectation[Max[F[t] - k, 0]/F[t],
F \[Distributed] ipEUR3,
Assumptions ->
F0 > 0 && k > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
1/2 E^(-(q + r) t) (E^(r t) F0 - E^(q t) k +
E^(r t) F0 Erf[(
t (-2 q + 2 r + \[Sigma]^2) + 2 Log[F0] - 2 Log[k])/(
2 Sqrt[2] Sqrt[t] \[Sigma])] +
E^(q t) k Erf[(t (2 q - 2 r + \[Sigma]^2) - 2 Log[F0] + 2 Log[k])/(
2 Sqrt[2] Sqrt[t] \[Sigma])])
To finalise this exercise, we compute both option premiums:
usdNum = usdOpt /. {F0 -> 1.35, t -> 0.5, K -> 1.36, \[Sigma] -> 0.2,
r -> 0.01, q -> 0.012}
eurNum = eurOpt /. {F0 -> 1.35, t -> 0.5, k -> 1.36, \[Sigma] -> 0.2,
r -> 0.01, q -> 0.012}
usdNum - eurNum // Chop
0.070452
0.070452
0
Both option premiums are the same. This proves they are ***consistent***.
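The same comparison can be reproduced outside Mathematica by coding the two closed-form outputs directly; a Python sketch (a straightforward transcription, with Erf/Erfc mapped to math.erf/math.erfc):

```python
import math

F0, K, t, sigma, r, q = 1.35, 1.36, 0.5, 0.2, 0.01, 0.012
s = 2.0 * math.sqrt(2.0) * math.sqrt(t) * sigma
a1 = (t * (-2*q + 2*r + sigma**2) + 2*math.log(F0) - 2*math.log(K)) / s
a2 = (t * (2*q - 2*r + sigma**2) - 2*math.log(F0) + 2*math.log(K)) / s

# usdOpt output, transcribed (Erfc -> math.erfc)
usd = -0.5 * math.exp(-r*t) * (
    -2.0 * math.exp((r - q)*t) * F0
    + math.exp((r - q)*t) * F0 * math.erfc(a1)
    + K * math.erfc(a2)
)

# eurOpt output, transcribed (Erf -> math.erf)
eur = 0.5 * math.exp(-(q + r)*t) * (
    math.exp(r*t) * F0 - math.exp(q*t) * K
    + math.exp(r*t) * F0 * math.erf(a1)
    + math.exp(q*t) * K * math.erf(a2)
)

assert abs(usd - eur) < 1e-12   # consistent, as in the Mathematica output
print(round(usd, 6))            # ~0.070452
```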
**Siegel's paradox**
----------------
In the context of the above discussion, it is worth mentioning ***Siegel's paradox***, as it directly links the FX processes to probability measures. Let's start again with the definition of the FX evolution from the USD perspective. Under the USD probability measure (USD risk-neutral process), we showed earlier that this was:
$$dF = F (r-q) dt + σ F dW$$
The expected future FX rate - the ***FX Forward*** at time t - is an expectation of Subscript[F, t] under the USD measure:
usdExp = Expectation[F[t], F \[Distributed] ipUSD,
Assumptions -> F0 > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
E^((-q + r) t) F0
Let's now look at the EUR investor's point of view. (S)he can do a similar calculation, and under her/his risk-neutral measure the USD/EUR process follows:
$$d(1/F) = (1/F) (q-r) dt + (1/F) σ dW$$
So, the forward rate of 1/F (EUR per USD) is:
eurExp2 =
Expectation[1/F[t], F \[Distributed] ipEUR3,
Assumptions -> F0 > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
E^((q - r) t)/F0
This seems logical, since the inverted FX rate is simply 1/F. Here lies the problem: since 1/F is a convex function, Jensen's inequality gives:
$$\left(E[F]\right)^{-1} < E\left[F^{-1}\right]$$
when both expectations are taken w.r.t. the same probability measure - i.e. calculated with the same distribution - and F is non-constant. This runs contrary to our assertion above, where consistency required a different probability measure on each side.
Siegel's paradox is simply a statement confirming that the spot rate inversion does not extrapolate to the forward space, and that the forward FX rate in general ***cannot*** be an unbiased estimate of the future spot FX rate. At least not *simultaneously* for both sides of the contract, due to the *convexity* of the inverted FX function - this is the Jensen's inequality statement above. If the property holds for the USD investor, it cannot be true for the EUR investor, and vice versa, since their forward expectations are taken under ***different probability measures***.
We demonstrate this in a simple case - define a standard Ito process and then take the expectations of F and 1/F:
ip05 = Refine[
ItoProcess[{(r - q)*F, \[Sigma]*F}, {F, F0}, t], {\[Sigma] > 0,
F[t] > 0, t > 0, r > 0, q > 0}];
usdFwrd =
Expectation[F[t], F \[Distributed] ip05,
Assumptions -> F0 > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
eurFwrd =
Expectation[1/F[t], F \[Distributed] ip05,
Assumptions -> F0 > 0 && \[Sigma] > 0 && t > 0 && r > 0 && q > 0] //
Simplify
E^((-q + r) t) F0
E^(t (q - r + \[Sigma]^2))/F0
We see that the FX forwards are different, as they are taken under different probability distributions (with different mean and variance). The forward of 1/F also depends on volatility, whereas that of F does not. Let's verify Jensen's inequality by comparing 1/Subscript[F, USD] and Subscript[F, EUR]:
fxMeanDiff = 1/usdFwrd - eurFwrd // Simplify
-((E^((q - r) t) (-1 + E^(t \[Sigma]^2)))/F0)
Since the above quantity is negative, this shows that indeed
$$\left(E[F_t]\right)^{-1} < E\left[F_t^{-1}\right]$$
Plot[fxMeanDiff /. {F0 -> 1.35, r -> 0.01, q -> 0.0045,
t -> 0.5}, {\[Sigma], 0.1, 0.3},
PlotLabel ->
Style["Jensen's inequality and FX forward rates", {15, Bold}, Blue],
PlotStyle -> {Thick, Red}]
![enter image description here][2]
The Jensen's inequality effect increases with volatility. On the other hand, the only instance in which both forwards are consistent w.r.t. the same probability measure occurs when \[Sigma]=0. Since this is never the case in practice, we conclude that Siegel's paradox holds.
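The sign of fxMeanDiff can also be confirmed outside Mathematica; a short Python sketch evaluating the two closed-form forwards over a volatility grid (parameters as in the plot above):

```python
import math

# Parameters as in the plot above
F0, r, q, t = 1.35, 0.01, 0.0045, 0.5

for sigma in [0.05, 0.1, 0.2, 0.3, 0.5]:
    usd_fwd = F0 * math.exp((r - q) * t)               # E[F_t]
    eur_fwd = math.exp((q - r + sigma**2) * t) / F0    # E[1/F_t], same measure
    diff = 1.0 / usd_fwd - eur_fwd                     # fxMeanDiff
    assert diff < 0  # Jensen: (E[F_t])^-1 < E[F_t^-1] whenever sigma > 0
    # Closed form derived above: -(E^((q-r)t) (E^(sigma^2 t) - 1)) / F0
    closed = -(math.exp((q - r) * t) * (math.exp(sigma**2 * t) - 1.0)) / F0
    assert abs(diff - closed) < 1e-12
```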
**Conclusion**
----------
The objective of this note was to present FX derivatives - forwards and options - from different perspectives. Whilst the FX spot market is reasonably simple, derivatives are more complicated, especially when we start looking at them from each contractual perspective. A change of probability measure, and hence different probabilities, is required to ensure consistency. The existence of Siegel's paradox proves this.
The change of probability measure is handled implicitly by Mathematica once the FX process is correctly defined. The same applies to proving Siegel's paradox.
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Jensinequality.png&userId=387433

Igor Hlivka 2018-02-04T20:12:16Z

Avoid DatabaseExamplesBuild to fail?
http://community.wolfram.com/groups/-/m/t/1289450
In Mathematica 11.2.0.0 (under macOS HighSierra 10.13.3), I'm trying to use `DatabaseLink` with the supplied example databases. Here's what I've done:
<< DatabaseLink`DatabaseExamples`
DatabaseExamplesBuild[]
> JDBC::error: invalid authorization specification - not found: SA
> $Failed[1, DatabaseLink`DatabaseExamples`Package`RestoreException["publisher"]]
What's wrong and how should it be fixed?

Murray Eisenberg 2018-02-21T21:27:38Z

Factor out known multipliers in algebraic expressions?
http://community.wolfram.com/groups/-/m/t/1285533
Hi there,
I have this function a[t], which is a long function of other time-dependent variables and some constant parameters (Ixx, Iyy, Izz), and I really want to write it explicitly as a sum over the possible combinations of those constant parameters.
I've tried to Simplify and then Apart, but Mathematica doesn't group similar coefficients. I wonder if I could get this result applying some kind of mapping (...)?
Well, below is the function a[t]:
a[t_] := -((Cos[ϕ[t]]*
Izz[t]*(Ty[t] -
Cos[ϕ[t]]*Iyy[t]*Derivative[1][ϕ][t]*
Derivative[1][ψ][t] +
Cos[ϕ[t]]*(-Ixx[t] + Izz[t])*
(Sin[ψ[t]]*Derivative[1][θ][t] +
Derivative[1][ϕ][t])*((-Cos[ψ[t]])*
Tan[ϕ[t]]*Derivative[1][θ][t] +
Derivative[1][ψ][t]) +
Iyy[t]*Derivative[1][θ][
t]*(Cos[ψ[t]]*Sin[ϕ[t]]*
Derivative[1][ϕ][t] +
Cos[ϕ[t]]*Sin[ψ[t]]*
Derivative[1][ψ][t])) -
Iyy[t]*Sin[ϕ[t]]*(Tz[t] +
Izz[t]*Sin[ϕ[t]]*Derivative[1][ϕ][t]*
Derivative[1][ψ][
t] + (Ixx[t] -
Iyy[t])*(Sin[ψ[t]]*Derivative[1][θ][t] +
Derivative[1][ϕ][t])*
(Cos[ϕ[t]]*Cos[ψ[t]]*
Derivative[1][θ][t] +
Sin[ϕ[t]]*Derivative[1][ψ][t]) +
Izz[t]*Derivative[1][θ][
t]*(Cos[ϕ[t]]*Cos[ψ[t]]*
Derivative[1][ϕ][t] -
Sin[ϕ[t]]*Sin[ψ[t]]*
Derivative[1][ψ][t])))/((-Cos[ϕ[t]]^2)*
Cos[ψ[t]]*Iyy[t]*Izz[t] -
Cos[ψ[t]]*Iyy[t]*Izz[t]*Sin[ϕ[t]]^2));
Summarizing, what I want is a way to say : **rewrite a[t] factorizing in a sum by these terms (Ixx, Iyy, Izz) on any combination between them.**
( In other way, I could ask the same by *rewriting a[t] factorizing in terms that don't vary in time?* Would this be possible/easier? )
Can anybody suggest a good thing to try?
Thanks.

André Barbosa 2018-02-15T10:31:20Z

Incremental Machine Learning with feeding data every hour?
http://community.wolfram.com/groups/-/m/t/1283797
I ran couple of simple ML examples available from the Documentation.
A small data set with less than 100 records would take a whole 10 sec to be trained in my laptop.
If I have a new set of sample data of 100 records available every hour, do I need to add those records to the population and train the entire Classifier all over again every hour? Is the Classifier capable of retaining its previously trained knowledge while we keep feeding newly available sets of data to it every now and then?

Peter Lim 2018-02-13T09:48:59Z

Avoid System Modeler to crash when performing basic functions?
http://community.wolfram.com/groups/-/m/t/1289004
Hi,
I've just downloaded a free-trial version of System Modeler. I've validated it OK but it crashes every time I try to perform basic operations like opening a file. The error message it gives is:
"A critical error has occurred and Model Center must be restarted. Any unsaved Modelica classes will be restored if possible.
Click OK to restart Model Center, or Cancel to exit."
Does anyone know why? My OS is Windows 7 Professional, 64-bit.
Thanks, Archie

Archie Watts-Farmer 2018-02-20T16:39:45Z

Live code templates
http://community.wolfram.com/groups/-/m/t/1273720
## Background
I enjoy coding in the FrontEnd (except it crashes and lookup across files does not exist), but I often miss 'hands on keyboard', customizable code templates.
E.g. I often forget to wrap an option name with quotes "_" or I'm starting a new function and would like to avoid retyping `Attributes/Options` `Catch/Check` etc.
I don't like palettes for something that I need to do quickly and frequently.
So I created a little stylesheet, for 11+ (v10 on todo list) and in a *beta* stage at the moment.
Should work on Win/MacOs. Do not use on pre V11 as it may crash the FE.
https://github.com/kubaPod/DevTools
In case you are interested and/or have any ideas about this or similar features, let me know here or create an Issue on GitHub.
Topic cross posted on Mathematica.stackexchange: https://mathematica.stackexchange.com/q/164653/5478
## Setup
(*additional package I use to install github assets' paclets,
you can download .paclet manually if you want
*)
Import["https://raw.githubusercontent.com/kubapod/mpm/master/install.m"]
Needs["MPM`"]
(*installing the package*)
MPMInstall["kubapod", "devtools"]
(*changing default .m stylesheet to a dev's stylesheet*)
CurrentValue[$FrontEnd, "DefaultPackageStyleDefinitions"] =
FrontEnd`FileName[{"DevTools", "DevPackage.nb"}]
(*test*)
FrontEndTokenExecute["NewPackage"]
## How to:
- <kbd>Ctrl</kbd>+<kbd>1</kbd> to open a menu
- navigate with arrows and hit enter/return or hit a shortkey like <kbd>n</kbd>
/ <kbd>{</kbd> / <kbd>[</kbd>
## Customization
Once you setup a new stylesheet the package should have an additional toolbar with 'Edit code templates' button on the top right. Click on it and a user's templates file should open.
It is just a .m file with a header that should explain everything. It will be improved in future.
## Example
[![enter image description here][1]][1]
There is also a dark one based on a build-in ReversedColors.nb stylesheet:
CurrentValue[$FrontEnd, "DefaultPackageStyleDefinitions"
] = FrontEnd`FileName[{"DevTools", "DevPackageDark.nb"}]
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/v81cV.gif
[2]: https://i.stack.imgur.com/g96TY.gif

Kuba Podkalicki 2018-01-28T18:00:23Z

[✓] Find values of a function's variable that satisfy a certain condition?
http://community.wolfram.com/groups/-/m/t/1289212
Given a function
F[x,y]=x+y
I want to find all values of **y** for which
Abs[F[x,y]]<= 5
when
0 <= x <= 1
The answer should be
0 <= y <= 4
Is there built-in functionality that can handle a problem of this type? I'm looking for a shortcut to help avoid implementing a full-blown algorithm on my own.

Piotr Pawlowicz 2018-02-21T01:14:36Z

[✓] Set logarithmic and reversal scaling axes?
http://community.wolfram.com/groups/-/m/t/1287170
I'm trying to create a Hertzsprung-Russell Diagram, which requires both axes to be logarithmic and the x-axis to be reversed. I've found the ScalingFunctions option for the ListPlot command, but it only seems to apply one transformation to each axis at a time ({"Log","Log"} or {"Reverse","Log"}).
Are there any other methods of manipulating axes or a way to get an extra change out of the ScalingFunctions option?
Thanks!

Peter Driscoll 2018-02-18T01:42:46Z

[✓] Control the precision of the result with FindMinimum?
http://community.wolfram.com/groups/-/m/t/1287379
Dear all,
I have an issue with a constrained minimization and FindMinimum.
Consider two cylinders of length L and radius R: the center of the first is located at the origin and its axis is parallel to the z axis, the center of the second is located at {xT, 0, xz}, and its axis is oriented according to the polar angles \theta and \phi.
Given two vectors {r[1],..,r[3]} and {s[1],..,s[3]}, I want to find the minimum of the square distance between them
f(r,s) = (r[1] - s[1])^2 + (r[2] - s[2])^2 + (r[3] - s[3])^2,
with the constraint that the vector {r[1],r[2],r[3]} lies within the first cylinder, and {s[1],..,s[3]} lies within the second cylinder:
In[1]:= R = 1/2;
L = 4;
xT = 78/100;
xz = -4/10;
\[CurlyTheta] = 8/10;
\[Phi] = 3;
In[7]:= FindMinimum[{(r[1] - s[1])^2 + (r[2] - s[2])^2 + (r[3] -
s[3])^2, r[1]^2 + r[2]^2 <= R^2, L + 2 r[3] >= 0,
2 r[3] <=
L, (Sin[\[Phi]] (xT - s[1]) +
Cos[\[Phi]] s[
2])^2 + (Cos[\[CurlyTheta]] (Cos[\[Phi]] (-xT + s[1]) +
Sin[\[Phi]] s[2]) + Sin[\[CurlyTheta]] (xz - s[3]))^2 <= R^2,
2 Cos[\[Phi]] Sin[\[CurlyTheta]] (xT - s[1]) +
2 Cos[\[CurlyTheta]] (xz - s[3]) <=
L + 2 Sin[\[CurlyTheta]] Sin[\[Phi]] s[2],
2 Sin[\[CurlyTheta]] Sin[\[Phi]] s[2] <=
L + 2 Cos[\[Phi]] Sin[\[CurlyTheta]] (xT - s[1]) +
2 Cos[\[CurlyTheta]] (xz - s[3])}, {{r[1], 0}, {s[1], xT}, {r[2],
0}, {s[2], 0}, {r[3], 0}, {s[3], xz}}]
Out[7]= {1.7518957310232823*10^-15, {r[1] -> 0.24200719173126448,
s[1] -> 0.24200723086912757, r[2] -> 0.0173827341902361,
s[2] -> 0.01738274867534208, r[3] -> -0.0952795259719789,
s[3] -> -0.0952795227618218}}
In this example the two cylinders overlap, the minimum is r = s, and the value of the objective function at the minimum must be zero. However, FindMinimum returns some small but nonzero value ~ 1e-15.
Is there a way to make sure that, if the minimum is r = s, then the minimum of the objective function is exactly zero, i.e., `0.`?
Thank you.

Joao Porto 2018-02-18T14:37:20Z

What distance function does FindClusters use?
http://community.wolfram.com/groups/-/m/t/1288262
My list contains numbers from 0-40k. The figure shows data distribution:
![enter image description here][1]
I tried `FindClusters[list] `
The output is two clusters as seen here:
{{4169, 7114, 5025, 7316, 4977, 10411, 9352, 16438, 8719, 14330,
10277, 7144, 11950, 18572, 10471, 4915, 4958, 7556, 5145, 13862,
8466, 14138, 10861, 11815, 5638, 15242, 16666, 23564, 4256, 13014,
9865, 3729, 5980, 7740, 14290, 14067, 12038, 14125, 6436, 14240,
19054, 9622, 13876, 8362, 5983, 7163, 4908, 12856, 15923, 14368,
14467, 9393, 9555, 8537, 9149, 10272, 8228, 6525, 6596, 10401, 6244,
16576, 15262, 12593, 16128, 13189, 13508, 14206, 15115, 24985,
19442, 18195, 14522, 9103, 8781, 9394, 4716, 6760, 9281, 6958,
10581, 10862, 11518, 11508, 5691, 8567, 9797, 10897, 9535, 8723,
7645, 7035, 7186, 7392, 6913, 7549, 18990, 12778, 15982, 5145,
14650, 14468, 13480, 20918, 14713, 17319, 22983, 20166, 9464, 23675,
8466, 9598, 9698, 7082, 18233, 15193, 11804, 10285, 25290, 17428,
11320, 6441, 11868, 14666, 18505, 11778, 12131, 9275, 6347, 13024,
19351, 14984, 14150, 18093, 7455, 20572, 14041, 23137, 12763, 14986,
11280, 13584, 17583, 14394, 17540, 18123, 16960, 9344, 20265,
21251, 19206, 25316, 17411, 17123, 17137, 11778, 19055, 15926,
18753, 19731, 14524, 21106, 12309, 12357, 17689, 23076, 20067,
10224, 16353, 7571, 8493, 8927, 15024, 18869, 14585, 16099, 18462,
14361, 15621, 15584, 20522, 18542, 13220, 19124, 16885, 10800,
20395, 18752, 17369, 21940, 14893, 14939, 25153, 19275, 15273,
18337, 18835, 17250, 26872, 15279, 14366, 15319, 20846, 15711,
18547, 20289, 22089, 17250, 18777, 21723, 17813, 21230, 24460, 8375,
14843, 18409, 4854, 10552, 13598, 14440, 14707, 17834, 18916,
22908, 7045, 20264, 20317, 6742, 8589, 15747, 17136, 12764, 18185,
6882, 8867, 7009, 13119, 10461, 11362, 14844, 14337, 9780, 7170,
8486, 8538, 8758, 8383, 5024, 7285, 10365, 5239, 7644, 8675, 7909,
8781, 7353, 6439, 9123, 8136, 11655, 18012, 8834, 11400, 8248, 8207,
9232, 11126, 24912, 12578, 8352, 13299, 6344, 8347, 6876, 14591,
11316, 18416, 11233, 8438, 20095, 10800, 7596, 5791, 7083, 7931,
6021, 6088, 13472, 9212, 6992, 8428, 9336, 11558, 10948, 8795, 6353,
11253, 9172, 15023, 6512, 7775, 11892, 7908, 7545, 8135, 10378,
8896, 7302, 12794, 10991, 10490, 7240, 9780, 4285, 4694, 6847, 9383,
6969, 7879, 12737, 5840, 5550, 12252, 9034, 8661, 10347, 11444,
8241, 11445, 11539, 14462, 17701, 13711, 8229, 7458, 12440, 13455,
12092, 13517, 12047, 10099, 18228, 14068, 17192, 18021, 12252,
11070, 11711, 12952, 12144, 9109, 6563, 4531, 7438, 8839, 15560,
11478, 18469, 14584}, {35494, 32082, 27490, 29077, 31458, 31198}}
My second try was to specify the number of clusters using `FindClusters[list,4] `. The output was:
{{4169, 7114, 5025, 7316, 4977, 10411, 9352, 16438, 8719, 14330,
10277, 7144, 11950, 18572, 10471, 4915, 4958, 7556, 5145, 13862,
8466, 14138, 10861, 11815, 5638, 15242, 16666, 23564, 4256, 13014,
9865, 3729, 5980, 7740, 14290, 14067, 12038, 14125, 6436, 14240,
19054, 9622, 13876, 8362, 5983, 7163, 4908, 12856, 15923, 14368,
14467, 9393, 9555, 8537, 9149, 10272, 8228, 6525, 6596, 10401, 6244,
16576, 15262, 12593, 16128, 13189, 13508, 14206, 15115, 19442,
18195, 14522, 9103, 8781, 9394, 4716, 6760, 9281, 6958, 10581,
10862, 11518, 11508, 5691, 8567, 9797, 10897, 9535, 8723, 7645,
7035, 7186, 7392, 6913, 7549, 18990, 12778, 15982, 5145, 14650,
14468, 13480, 20918, 14713, 17319, 22983, 20166, 9464, 23675, 8466,
9598, 9698, 7082, 18233, 15193, 11804, 10285, 17428, 11320, 6441,
11868, 14666, 18505, 11778, 12131, 9275, 6347, 13024, 19351, 14984,
14150, 18093, 7455, 20572, 14041, 23137, 12763, 14986, 11280, 13584,
17583, 14394, 17540, 18123, 16960, 9344, 20265, 21251, 19206,
17411, 17123, 17137, 11778, 19055, 15926, 18753, 19731, 14524,
21106, 12309, 12357, 17689, 23076, 20067, 10224, 16353, 7571, 8493,
8927, 15024, 18869, 14585, 16099, 18462, 14361, 15621, 15584, 20522,
18542, 13220, 19124, 16885, 10800, 20395, 18752, 17369, 21940,
14893, 14939, 19275, 15273, 18337, 18835, 17250, 15279, 14366,
15319, 20846, 15711, 18547, 20289, 22089, 17250, 18777, 21723,
17813, 21230, 24460, 8375, 14843, 18409, 4854, 10552, 13598, 14440,
14707, 17834, 18916, 22908, 7045, 20264, 20317, 6742, 8589, 15747,
17136, 12764, 18185, 6882, 8867, 7009, 13119, 10461, 11362, 14844,
14337, 9780, 7170, 8486, 8538, 8758, 8383, 5024, 7285, 10365, 5239,
7644, 8675, 7909, 8781, 7353, 6439, 9123, 8136, 11655, 18012, 8834,
11400, 8248, 8207, 9232, 11126, 12578, 8352, 13299, 6344, 8347,
6876, 14591, 11316, 18416, 11233, 8438, 20095, 10800, 7596, 5791,
7083, 7931, 6021, 6088, 13472, 9212, 6992, 8428, 9336, 11558, 10948,
8795, 6353, 11253, 9172, 15023, 6512, 7775, 11892, 7908, 7545,
8135, 10378, 8896, 7302, 12794, 10991, 10490, 7240, 9780, 4285,
4694, 6847, 9383, 6969, 7879, 12737, 5840, 5550, 12252, 9034, 8661,
10347, 11444, 8241, 11445, 11539, 14462, 17701, 13711, 8229, 7458,
12440, 13455, 12092, 13517, 12047, 10099, 18228, 14068, 17192,
18021, 12252, 11070, 11711, 12952, 12144, 9109, 6563, 4531, 7438,
8839, 15560, 11478, 18469, 14584}, {35494}, {24985, 25290, 25316,
27490, 25153, 29077, 26872, 24912}, {32082, 31458, 31198}}
Could you explain how this function works? I don't want a huge cluster containing most of the values; instead, I expect the function to recognise clusters of values near 10k, 15k, 20k and 30k.
**What is the distance function used in FindClusters?**
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=histo.jpg&userId=1123184

Veronica Estrada 2018-02-19T16:36:27Z

How does Wolfram Community count views?
http://community.wolfram.com/groups/-/m/t/1253940
I noticed that if I view a discussion twice or have to refresh, the two views are recorded, but later one is taken off. Sometimes a discussion will even show a few fewer visits after a few minutes than it did before. Is that a correct observation?

Marvin Ray Burns 2017-12-25T15:46:42Z

Little pieces of code for graph and networks theory
http://community.wolfram.com/groups/-/m/t/98022
I thought it would be nice to have a discussion of little pieces of Mathematica code that can help when working with graphs and networks. Here for example, one to undirect graphs when needed:
[mcode]ToUndirectedGraph[dirGraph_] :=
Graph[VertexList@dirGraph, #[[1]] \[UndirectedEdge] #[[2]] & /@
Union[Sort[{#[[1]], #[[2]]}] & /@ EdgeList@dirGraph]][/mcode]

or what about a graph planarity check function (i.e. adding any edge would destroy its planarity):

[mcode]MaxPlanarQ[graph_] :=
PlanarGraphQ[graph] &&
With[{pos =
Select[Position[Normal@AdjacencyMatrix@graph,
0], #[[1]] < #[[2]] &],
vertex = VertexList[graph],
edges = EdgeList[graph]
},
val = True;
Do[If[PlanarGraphQ[
Graph[Append[edges,
vertex[[i[[1]]]] \[UndirectedEdge] vertex[[i[[2]]]]]]],
val = False; Break[]], {i, pos}]; Return[val]][/mcode]

And one to produce random permutations for a given graph with the indicated number n of nodes:

[mcode]PermuteGraph[g_, n_] :=
Table[AdjacencyMatrix@
Graph[RandomSample[VertexList@g], EdgeList@g], {n}][/mcode]

What about code for counting the sizes of graph automorphism groups? I have some, but it uses Saucy, an open-source program that has been tested to be (surprisingly) very fast in practice, despite the NP question underlying this task (it is unknown whether it has a polynomial-time algorithm or is NP-complete). There is a function in Combinatorica, but you can read about its drawbacks on the [url=http://mathworld.wolfram.com/GraphAutomorphism.html]graph automorphism[/url] page in MathWorld.

Hector Zenil 2013-08-15T18:51:44Z

Mathematica One Liner Generator
http://community.wolfram.com/groups/-/m/t/1288883
I would like to suggest a twist to the Mathematica One Liner Competition.
The idea of this version of the challenge is to make use of the meta-programming capabilities in WL to create a one-liner generator. Some suggested competition rules:
1. The Generator would need to be written in WL, but would not itself have to comprise a single line of WL code, although that would be a really neat trick!
2. The output of the Generator would be a single executable line of WL code, or a single-line function taking one or more arguments. In other words, the output has to comprise valid WL syntax and be executable.
3. The output code produced by the Generator would itself produce some kind of result. There are no stipulations as to what that result might be, although graphical output tends to be heavily favored. But it could be, for instance, a famous number sequence, a mathematical equation, or something else entirely. All that is stipulated is that the output produced by the generated one-liner is "interesting".
4. The challenge solution could include code (e.g. machine learning/DNN code) to determine whether the generated one-liner is capable of producing an "interesting" result. One liners that are not valid WL syntax, or which produce no output, for example, would be classified as "uninteresting". A one-liner that generated large prime numbers, or cool animations, might be classified as "interesting". And so on.

Jonathan Kinlay 2018-02-20T18:08:49Z

[✓] Create a network of 300 nodes that can be clustered into 100-node chunks
http://community.wolfram.com/groups/-/m/t/1288902
I have a rather basic problem where I am asked to create a network of 300 nodes that can be clustered into 100-node chunks. Essentially, the nodes have many connections within a chunk and comparatively few between different 100-node chunks.
For example, if I make the connections within a chunk occur with 100% probability and between chunks with 0%, I get the following:
edges = {};
For[l = 1, l <= 3 , l++, h = 100*l;
For[n = 100 (l - 1) + 1, n < 100*l + 1, n++,
For[m = n + 1, m <= 300, m++,
If[m <= h,
If[RandomReal[{0, 1}] > 0.0, AppendTo[edges, n <-> m]],
If[RandomReal[{0, 1}] > 0.99, AppendTo[edges, n <-> m]]]]]]
![enter image description here][1]
which is expected and gives a corresponding MatrixPlot of the Adjacency Matrix as:
![enter image description here][2]
Which is also all well and good. Now, something very strange seems to be going on when I try to make the connections between the 100-node chunks non-zero. For example, if I make the probability 1%, the following is obtained:
![enter image description here][3]
This looks ok, but it becomes very evident that there is a problem when one looks at the corresponding Matrix Plot of the Adjacency Matrix
![enter image description here][4]
Evidently, there is a very big problem somewhere. I did try to check this in a bit more detail and it seems that everything is working properly with the Matrix Plot. It seems that there is somewhere a problem with my code - but where? The code, in my opinion, seems rather simple and straightforward. I am really confused with what the issue is and could use assistance, it doesn't seem possible that Mathematica would have a problem with something so basic.
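For reference, the intended construction is a stochastic block model - dense within 100-node blocks, sparse between them. A Python sketch of the same generation logic (block sizes and probabilities taken from the description above) gives the expected edge counts:

```python
import random

random.seed(1)
n_blocks, block_size = 3, 100
p_in, p_out = 1.0, 0.01          # within-chunk and between-chunk probabilities
n = n_blocks * block_size

edges = []
for a in range(n):
    for b in range(a + 1, n):
        p = p_in if a // block_size == b // block_size else p_out
        if random.random() < p:
            edges.append((a, b))

within = sum(1 for a, b in edges if a // block_size == b // block_size)
between = len(edges) - within

# With p_in = 1 every within-block pair is connected: 3 * C(100, 2) = 14850
assert within == n_blocks * block_size * (block_size - 1) // 2
# Between-block edges: 30000 cross pairs at 1% -> about 300 expected
assert 150 < between < 450
```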
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture1.PNG&userId=1288488
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture2.PNG&userId=1288488
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture3.PNG&userId=1288488
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture4.PNG&userId=1288488

Boris Barron 2018-02-20T16:41:48Z

Solve Pdes in cylinder coordinates? ( Infinity error due to 1/r )
http://community.wolfram.com/groups/-/m/t/1288801
I have been trying to solve the following equations (Eq. 1) in cylinder coordinates, and I want my solution domain to be r >= 0. Because of the 1/r and 1/r^2 terms in Eq. 1, I ran into the Infinity error problem when using NDSolve. I then rewrote my equations by multiplying them by r or r^2 to remove the 1/r and 1/r^2 terms and got Eq. 2, but I still met the Infinity problem using NDSolve. So, is it possible to get solutions if my solution domain is r >= 0? I believe this solution is physically real, but I do not know how to get it using NDSolve. Any help would be great.
![enter image description here][1]
![enter image description here][2]
And my codes for Eq.2 are:
TMax = 1.615; S = 1/Pi^2/2; rMin = 0; rMax = 2;
{usol, hsol} =
NDSolveValue[{D[u[t, r], t]*r^2 == -D[u[t, r], r]*u[t, r]*r^2 +
3*1/h[t, r]^4*D[h[t, r], r]*r^2 +
3*S*(D[h[t, r], r, r, r]*r^2 - D[h[t, r], r] +
r*D[h[t, r], r, r]) +
4/h[t, r]*(h[t, r]*r^2*D[u[t, r], r, r] +
D[u[t, r], r]*D[h[t, r], r]*r^2 + h[t, r]*r*D[u[t, r], r] -
h[t, r]*u[t, r] - u[t, r]*r/2*D[h[t, r], r]),
D[h[t, r], t]*r == -h[t, r]*u[t, r] - u[t, r]*r*D[h[t, r], r] -
h[t, r]*r*D[u[t, r], r], u[0, r] == 0,
h[0, r] == 1 - 0.2*Cos[Pi*r], h[t, rMin] == h[t, rMax]}, {u,
h}, {t, 0, TMax}, {r, rMin, rMax}, PrecisionGoal -> Infinity,
AccuracyGoal -> 10, MaxSteps -> 10^6,
Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"MaxPoints" -> 5000, "MinPoints" -> 5000,
"DifferenceOrder" -> 4}}]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20180220110517.jpg&userId=1266560
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20180220104746.jpg&userId=1266560

Yixin Zhang 2018-02-20T11:03:08Z

Interest rates derivatives in multi-curves framework
http://community.wolfram.com/groups/-/m/t/1273892
We discuss the changes to the interest rate processes when we move from a mono-curve setting to a multi-curve framework. This is characterised by the presence of several curves - a dedicated discount curve and a set of estimation curves, one for each specific Libor rate. The first is generally assumed to be the OIS curve, whilst the rest are 'tenor' curves for given Libor tenors.
The changes in the forward Libor estimation are due to the 'loss' of the martingale property when the mono-curve world is replaced by the multi-curve framework. We show how the multiplicative adjustment works in this new setting and how interest rate derivatives are affected. Various modelling assumptions are used to show derivatives pricing in this new setting.
![enter image description here][23]
#Introduction#
We review the setting of interest rate derivatives in the post-crisis era, characterised by a multi-curve environment where dedicated yield curves are defined for forward rate estimation and cashflow discounting. The multi-curve framework is a direct consequence of the financial crisis of 2007-2008, when the so-called 'Libor market' - represented by a single yield curve - stopped being seen as risk-free, and new curves started to emerge to better reflect the counterparty risk in the financial markets.
The current interest rate framework exists in its simplest form in the dual-curve setting: (i) a discounting curve, usually built with OIS instruments, and (ii) an estimation curve, generally the 'main' estimation curve in a given currency. This is the 3-month Libor curve for USD or the 6-month Euribor curve for EUR.
The existence of a dual-curve environment does change the interest rate mathematics. Martingales defined in the single-curve framework no longer hold, and the processes have to be adapted to account for the curve duality. We demonstrate how this works and show how interest rate derivatives, both linear and optional, behave when we move from the single-curve to the dual-curve framework.
#Interest rate derivatives in a single curve framework#
We first look at the single-curve framework of the pre-2007 era. When only one yield curve exists, derivatives pricing is simple and tidy. The forward rate defined on the single curve from two discount factors coincides with the forward rate agreement (FRA) rate:
Subscript[F, S] = 1/(Subscript[T, 2] - Subscript[T, 1])*(P[0, Subscript[T, 1]]/P[0, Subscript[T, 2]] - 1);
where Subscript[F, S] is the forward rate in the single-curve setting, Subscript[T, 1] and Subscript[T, 2] are two maturity dates with Subscript[T, 2] > Subscript[T, 1], and P[0, Subscript[T, 1]], P[0, Subscript[T, 2]] are the two discount factors at time 0 with maturities Subscript[T, 1] and Subscript[T, 2].
The FRA rate is then defined as Subscript[F, FRA] = K such that the payoff L[Subscript[T, 1], Subscript[T, 2]] - K of the contract has value 0 at inception, with L[Subscript[T, 1], Subscript[T, 2]] defined as the forward term rate.
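As a quick numerical sanity check of this definition, here is a Python sketch; the flat 2% curve and its discount factors are hypothetical, chosen only for illustration:

```python
import math

def forward_rate(p1, p2, t1, t2):
    """Single-curve forward rate: F_S = 1/(T2 - T1) * (P(0,T1)/P(0,T2) - 1)."""
    return (p1 / p2 - 1) / (t2 - t1)

# Hypothetical flat 2% continuously compounded curve: P(0,T) = exp(-0.02*T)
p1, p2 = math.exp(-0.02 * 1.0), math.exp(-0.02 * 2.0)
f = forward_rate(p1, p2, 1.0, 2.0)
print(round(f, 6))  # simple forward rate over [T1, T2] = [1, 2]
```

For a flat continuously compounded curve the simple forward comes out slightly above the curve rate, as expected from compounding.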
## Interest rate swaps in single-curve setting ##
An interest rate swap, together with the FRA, is one of the simplest linear interest rate derivatives. It usually involves the exchange of a fixed rate for a series of floating forward rates up to final maturity:
fixedLeg = S Sum[\[Delta][i] P[0, i], {i, 1, m}]
floatLeg = Sum[\[Delta][i] P[0, i] Subscript[L, S][i], {i, 1, n}]
![enter image description here][2]
where $L_S$ is the forward Libor rate in a single curve framework. This is identical to the FRA rate defined above:
Subscript[L, S][i] = 1/\[Delta][i]*(P[0, i - 1]/P[0, i] - 1)
floatLeg =
Sum[\[Delta][i]*
P[0, i]*(1/\[Delta][i])*(P[0, i - 1]/P[0, i] - 1), {i, 1, n}]
> P[0, 0] - P[0, n]
Then swap rate *S* is simply a solution to the equation:
Solve[fixedLeg == floatLeg, S]
![enter image description here][3]
This shows that in a single-curve framework the swap rate is simply a difference in two discount factors normalised by $annuity= \sum_{i = 1}^n\delta[i]\ P[0, i]$. The same curve is used to produce discount factors that are used for (i) discounting and (ii) forward Libor estimation.
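The telescoping of the float leg can be verified numerically. The Python sketch below uses a hypothetical flat curve (not data from the post) and checks that the float leg collapses to P[0, 0] - P[0, n]:

```python
import math

def df(t, r=0.02):
    # hypothetical flat 2% curve, used both for discounting and estimation
    return math.exp(-r * t)

n = 10
delta = [1.0] * n                    # annual periods
P = [df(i) for i in range(n + 1)]    # P[0], ..., P[n]

# float leg built from single-curve forward Libors telescopes exactly
float_leg = sum(delta[i - 1] * P[i] * ((P[i - 1] / P[i] - 1) / delta[i - 1])
                for i in range(1, n + 1))
annuity = sum(delta[i - 1] * P[i] for i in range(1, n + 1))
swap_rate = (P[0] - P[n]) / annuity
print(round(swap_rate, 6))
```

The assertion that the float leg equals the difference of the first and last discount factors is exactly the single-curve identity derived above.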
#Multi-curve framework for Interest rate derivatives#
When we move to the multi-curve setting, we assume:
- Separate discounting curve - generally OIS curve
- Separate estimation curve for the 'main' index - 3M or 6M
- Separate estimation curves for 'minor' indices - say 1M, 12M or 6M (in a 3M setting)
When the discounting curve is set to the OIS curve, we define the OIS forward rate with tenor h similarly to the forward rate in the mono-curve setting:
![enter image description here][4]
for i = 1...n where $\delta[i]$ is a year fraction for interval $T_{i-1} -T_i$ and $P_{OIS}[t,T_i]$ is a discount factor from the OIS curve at time t maturing at time $T_i$
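For readers viewing this without the screenshot, the formula shown there follows by direct analogy with the single-curve definition above; my transcription:

$$F_{OIS}[t; T_{i-1}, T_i] = \frac{1}{\delta[i]}\left(\frac{P_{OIS}[t, T_{i-1}]}{P_{OIS}[t, T_i]} - 1\right)$$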
In the multi-curve framework, the single-curve Libor definition no longer holds: L[Subscript[T, 1], Subscript[T, 2]] != Subscript[F, S][t; Subscript[T, 1], Subscript[T, 2]] != Subscript[F, OIS][t, Subscript[T, 1], Subscript[T, 2]], since the discount factors in the single-curve definition of Libor are not the same as in the dual-curve case. Libor in the dual-curve setting is calculated from the estimation curve with its own unique set of discount factors.
The expectation of forward Libor in the dual curve setting can be expressed as
$E_t^{Q_{OIS}^{T_2}}[F_D(T_1; T_1, T_2)] = E_t^{Q_{OIS}^{T_2}}[L(T_1, T_2) \mid \mathcal{F}_t]$. The valuation of a forward rate agreement on the Libor forward rate is therefore defined as: $FRA(t, T_1, T_2) = P_{OIS}(t, T_2)\,\delta(T_1, T_2)\,E_t^{Q_{OIS}}[L(t, T_1, T_2) - K]$.
Current market practice takes a shortcut and simply values the FRA as $FRA(t, T_1, T_2) = P_{OIS}(t, T_2)\,\delta(T_1, T_2)\,(L(t, T_1, T_2) - K)$, where the discount factor $P_{OIS}(t, T_2)$ comes from the OIS curve and the forward Libor $L(t, T_1, T_2)$ is taken from the estimation curve. This is clearly inconsistent, since the forward Libor is NOT a martingale under the OIS forward measure.
##Libor adjustment in multi-curve framework##
To restore the no-arbitrage relationship, the forward Libor rate has to be adjusted. We refer to this adjustment as the forward basis; it restores the equilibrium between Subscript[F, OIS] and Subscript[F, E]. Assuming a multiplicative basis Aj, we get:
Fd = (1/\[Delta]) (Pd[T1]/Pd[T2] - 1);
Fe = (1/\[Gamma]) (Pe[T1]/Pe[T2] - 1);
Solve[Fe \[Gamma] == Fd \[Delta] Aj, Aj]
![enter image description here][5]
where Subscript[F, d] represents the forward rate from the OIS curve, Subscript[F, e] is the forward Libor from the estimation curve, Subscript[P, d] is a discount factor from the OIS curve and Subscript[P, e] is a discount factor from the estimation curve.
The forward basis is therefore a ratio of discount factors from both curves and can be recovered ex-post once both curves have been calibrated to market data.
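A small numerical illustration of this recovery (a Python sketch; both curves here are hypothetical flat curves, not market data):

```python
import math

# Hypothetical curves: OIS (discounting) flat at 2.0%, estimation flat at 2.5%
Pd = lambda t: math.exp(-0.020 * t)
Pe = lambda t: math.exp(-0.025 * t)

T1, T2 = 1.0, 1.25
delta = gamma = T2 - T1

Fd = (Pd(T1) / Pd(T2) - 1) / delta   # forward from the OIS curve
Fe = (Pe(T1) / Pe(T2) - 1) / gamma   # forward Libor from the estimation curve

# multiplicative basis recovered from Fe*gamma == Fd*delta*Aj
Aj = (Fe * gamma) / (Fd * delta)
print(round(Fd, 6), round(Fe, 6), round(Aj, 6))
```

With the estimation curve above the OIS curve, the recovered basis Aj comes out above 1, as one would expect for a positive Libor-OIS spread.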
From a modelling perspective, however, it is desirable to express the forward rate in terms of a single curve. We introduce a new discount factor adjuster Subscript[B, j] and re-calculate the forward adjustment spread:
Fd = (1/\[Delta]) (PdT1/PdT2 - 1);
Fe = (1/\[Gamma]) (PeT1/PeT2 - 1);
PeT1 = BjT1 PdT1;
PeT2 = BjT2 PdT2;
Solve[Fd == Fe Aj, Aj] // Simplify
![enter image description here][6]
and get the forward Libor $F_e$
Fe
![enter image description here][7]
This shows that the Libor is a function of (i) the OIS curve and (ii) the discount factor adjuster. To proceed, we assume that the forward rate follows LogNormal martingale dynamics under the forward measure Subscript[Q, e]:
Fe = GeometricBrownianMotionProcess[0, \[Sigma], x0];
Fe[t]
![enter image description here][8]
We apply a similar process to the forward adjuster defined under the forward measure $Q_d$:
Bj = GeometricBrownianMotionProcess[0, \[Eta], y0];
Bj[t]
![enter image description here][9]
To change the measure from Subscript[Q, e] to Subscript[Q, d], we use the change-of-measure technique, which gives:
Subscript[E, OIS][Fe] = Subscript[E, e][Fd Bj]. To carry out the change of measure, we need the joint expectation of the OIS forward and the forward adjuster. We apply the **Binormal Copula** with LogNormal marginals:
cDist = CopulaDistribution[{"Binormal", ρ}, {Fe[t], Bj[t]}];
cdrift = Expectation[x*y, {x, y} \[Distributed] cDist,
Assumptions ->
t > 0 && η > 0 && σ > 0 && -1 <= ρ <= 1]
![enter image description here][10]
The joint expectation of the forward OIS rate and the forward adjuster, taken on a relative basis, provides the adjustment for the process where the change of measure occurs. Since the martingale process for forward Libor has to be driftless, we adjust the forward rate by the negative of this quantity:
Aj = cdrift/(x0 y0) /. t -> -t
![enter image description here][11]
Returning to our original Libor adjustment formula, we observe:
![enter image description here][12]
FeAdj = Expectation[x, x \[Distributed] Fe[t],
   Assumptions -> t > 0 && σ > 0 && x0 > 0]*Aj /. x0 -> L[0]
![enter image description here][13]
This is the forward Libor rate under the OIS forward discounting measure. The adjustment is a function of (i) time, (ii) the volatility of Libor and (iii) the volatility of the OIS rate. A reader familiar with the exposition above can recognise here the parallel to the process drift adjustment in the foreign currency market known as the **'quanto adjustment'**. The similarity is obvious: we work with two curves and two sources of randomness, and switch the measure similarly to what we do in foreign currency markets.
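The closed-form joint expectation behind this adjustment can be checked by simulation. The Python sketch below (with illustrative parameters of my own, not the post's) draws correlated driftless lognormals and compares E[X*Y] against the closed form x0*y0*exp(ρσηt), from which the adjuster Aj = exp(-ρσηt) follows:

```python
import math, random

random.seed(42)
x0, y0, sigma, eta, rho, t = 1.0, 1.0, 0.2, 0.2, 0.75, 1.0

n, acc = 200_000, 0.0
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    # driftless GBM marginals: E[X] = x0 and E[Y] = y0
    x = x0 * math.exp(-0.5 * sigma**2 * t + sigma * math.sqrt(t) * z1)
    y = y0 * math.exp(-0.5 * eta**2 * t + eta * math.sqrt(t) * z2)
    acc += x * y
mc = acc / n

closed = x0 * y0 * math.exp(rho * sigma * eta * t)  # joint expectation ('cdrift')
adj = math.exp(-rho * sigma * eta * t)              # the adjuster Aj (t -> -t)
print(round(mc, 3), round(closed, 3), round(adj, 4))
```

The Monte Carlo estimate lands on the closed form to within sampling noise, confirming that only the correlation term ρσηt survives in the drift.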
##Forward rate agreement - FRA##
This is a simple contract that pays the difference between the forward Libor and a fixed rate:
![enter image description here][14]
where \[DoubleStruckCapitalC] is the nominal and $\delta[\tau]$ is a year fraction under the day count convention between the Libor tenor dates $T_1$ and $T_2$.
How much does the adjustment affect the FRA valuation? We look first at the **market volatilities**: (i) for the Libor rate and (ii) for the adjuster.
Assume: $\delta=0.25$, C=1 mil, $P_{OIS}[t,T_2] = 0.98$, t=1, L=0.0125,K=0.0125
fra = C*Pois*δ*(L*E^(-t*ρ*σ*η) - K) /. {C ->
1000000, L -> 0.0125, K -> 0.0125, t -> 1, δ -> 0.25,
Pois -> 0.98, ρ -> 0.75}
Plot3D[fra, {σ, 0.1, 0.3}, {η, 0.1, 0.3},
AxesLabel -> Automatic, PlotTheme -> "Marketing",
PlotLabel ->
Style["FRA valuation impact by market volatilities", Blue, 15]]
![enter image description here][15]
![enter image description here][16]
As we can see from the graph above, higher volatilities push the value of the forward rate lower, making the value of a long forward contract more negative. The opposite applies to a short FRA contract.
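The magnitude of the effect is easy to reproduce outside Mathematica; this Python sketch re-evaluates the same FRA expression at the parameters used for the plot:

```python
import math

# Parameters from the text: C = 1 mln, L = K = 1.25%, t = 1, delta = 0.25,
# P_OIS = 0.98, rho = 0.75
C, L, K, t, delta, Pois, rho = 1_000_000, 0.0125, 0.0125, 1.0, 0.25, 0.98, 0.75

def fra(sigma, eta):
    return C * Pois * delta * (L * math.exp(-t * rho * sigma * eta) - K)

print(round(fra(0.2, 0.2), 2))  # at-the-money FRA driven negative by the adjustment
```

At σ = η = 0.2 the at-the-money FRA is worth about -90.5, even though L = K: the whole value comes from the convexity-style adjustment.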
We can now look at correlation impact:
fra2 = C*Pois*δ*(L*E^(-t*ρ*σ*η) - K) /. {C ->
1000000, L -> 0.0125, K -> 0.0125, t -> 1, δ -> 0.25,
Pois -> 0.98, σ -> 0.2, η -> 0.2};
Plot[fra2, {ρ, -0.75, 0.75}, PlotStyle -> Red,
PlotLabel -> Style["Correlation impact on FRA valuation", Blue, 15]]
![enter image description here][17]
Negative correlation will increase the long FRA value, since the adjusted Libor will be higher. Positive correlation will drive the valuation into negative territory.
##Interest rate swap - IRS##
The payer IRS formula is determined from the same equation: fixed leg = float leg
fixedLeg = K Sum[δ[i] Subscript[P, OIS][i], {i, 1, m}];
floatLeg =
Sum[δ[i] Subscript[P, OIS][i] L[
i] Exp[-Subscript[t, i] ρ σ η], {i, 1, n}];
swapR = Solve[fixedLeg == floatLeg, K] // Simplify
![enter image description here][18]
This is the equilibrium swap rate that makes the present value at inception zero. The new formula differs from the single-curve swap rate formula in two respects:
- The discount factor P comes from a dedicated discounting curve, the OIS curve, and becomes Subscript[P, OIS][t, Subscript[T, i]]
- The numerator does not reduce to a simple difference of two discount factors, since the adjusted Libor rate L[t, Subscript[T, 1], Subscript[T, 2]] E^(-t \[Rho] \[Eta] \[Sigma]) is now estimated on a different curve, the so-called estimation curve
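To see the size of the adjustment at the swap level, here is a small numerical sketch in Python; the curve, the flat Libor level and all parameters are hypothetical, chosen only for illustration:

```python
import math

Pois = lambda t: math.exp(-0.02 * t)   # hypothetical OIS discount curve
L = lambda i: 0.025                    # hypothetical flat adjusted-forward Libors
rho, sigma, eta = 0.75, 0.2, 0.2
n = 5                                  # annual schedule, delta[i] = 1

num = sum(Pois(i) * L(i) * math.exp(-i * rho * sigma * eta) for i in range(1, n + 1))
den = sum(Pois(i) for i in range(1, n + 1))
swap_rate = num / den
print(round(swap_rate, 6))

# without the adjustment, a flat Libor curve would give back the flat level itself
unadj = sum(Pois(i) * L(i) for i in range(1, n + 1)) / den
```

The adjusted equilibrium rate sits below the flat Libor level, since each exponential factor is below 1 and grows with the payment time.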
##Caps and Floors##
Consider first a Caplet paying out at time Subscript[T, k]. A Caplet is essentially a call option on the forward Libor rate L[t; Subscript[T, k-1], Subscript[T, k]], with payoff \[Delta][\[Tau]]*(L[t; Subscript[T, k-1], Subscript[T, k]] - K)^+, where \[Delta][\[Tau]] is a year fraction between Subscript[T, k-1] and Subscript[T, k] and K is a fixed strike rate. The pricing formula is simply the discounted conditional expectation of the positive part of the payoff under certain distributional assumptions. So, to price a Caplet in the multi-curve framework, we proceed as in the mono-curve case, with the replacement: Libor mono-curve -> Libor multi-curve:
![enter image description here][19]
The pricing formula will differ depending on the choice of distributional assumptions for the forward Libor rate. For calculation purposes, we set the adjusted Libor rate Subscript[L, e][t; Subscript[T, 1], Subscript[T, 2]] E^(-t \[Rho] \[Eta] \[Sigma]) = x0. We choose the three processes that have become the most common in the market, i.e. (i) the Normal process, (ii) the LogNormal process and (iii) the mean-reverting Normal process.
- **Normal process:**
nProc = OrnsteinUhlenbeckProcess[0, σ, 0, x0];
nCplt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[x - k, 0],
x \[Distributed] nProc[t],
Assumptions -> σ > 0 && t > 0] // FullSimplify
![enter image description here][20]
We can now investigate the behaviour of the Caplet premium w.r.t. the Libor volatility and the strike:
Plot3D[nCplt /. {Subscript[P, OIS][t, i] -> 0.98, δ[i] -> 0.25,
x0 -> 0.0125, t -> 1}, {σ, 0.005, 0.015}, {k, 0.01,
0.0135}, PlotLabel ->
Style["Caplet Normal premium", Blue, {15, Bold}],
PlotLegends -> Automatic, AxesLabel -> Automatic,
ColorFunction -> "Rainbow"]
![enter image description here][21]
The premium increases as volatility goes up and the strike declines; volatility, however, is the more dominant factor.
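The closed form that `Expectation` returns in the Normal case should agree with the standard Bachelier formula. A Python transcription (my own sketch, assuming the rate at time t is distributed N(x0, σ²t), which is the zero-mean-reversion limit of the OU process used above):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bachelier_caplet(P, delta, F, K, sigma, t):
    """P*delta*E[(x - K)^+] for x ~ N(F, sigma^2 * t) (Bachelier / Normal model)."""
    s = sigma * math.sqrt(t)
    d = (F - K) / s
    return P * delta * ((F - K) * norm_cdf(d) + s * norm_pdf(d))

# at-the-money caplet with the plot's parameters
p = bachelier_caplet(0.98, 0.25, 0.0125, 0.0125, 0.01, 1.0)
print(round(p, 8))
```

At the money, the formula collapses to P·δ·σ·√t/√(2π), which makes the dominance of volatility over strike in the surface above easy to see.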
- **Lognormal process**
lProc = GeometricBrownianMotionProcess[0, σ, x0];
lCplt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[x - k, 0],
x \[Distributed] lProc[t],
Assumptions -> σ > 0 && t > 0 && k > 0 && x0 > 0] //
FullSimplify
![enter image description here][22]
A similar pattern is observed for other processes, such as LogNormal
Plot3D[lCplt /. {Subscript[P, OIS][t, i] -> 0.98, δ[i] -> 0.25,
x0 -> 0.0125, t -> 1}, {σ, 0.15, 0.5}, {k, 0.01, 0.0135},
PlotLabel -> Style["Caplet LogNormal premium", Blue, {15, Bold}],
PlotLegends -> Automatic, AxesLabel -> Automatic,
ColorFunction -> "TemperatureMap"]
![enter image description here][23]
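Likewise, the LogNormal closed form should reduce to the familiar Black formula. A Python transcription (my own sketch, evaluated at the plot's parameters):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_caplet(P, delta, F, K, sigma, t):
    """P*delta*(F*N(d1) - K*N(d2)) for a lognormal forward with volatility sigma."""
    s = sigma * math.sqrt(t)
    d1 = (math.log(F / K) + 0.5 * s * s) / s
    d2 = d1 - s
    return P * delta * (F * norm_cdf(d1) - K * norm_cdf(d2))

p = black_caplet(0.98, 0.25, 0.0125, 0.0125, 0.2, 1.0)
print(round(p, 8))
```

At the money the premium is P·δ·F·(N(σ√t/2) - N(-σ√t/2)), again driven almost entirely by the volatility input.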
- **Mean-reverting normal process**
mProc = OrnsteinUhlenbeckProcess[μ, σ, θ, x0];
mCplt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[x - k, 0],
x \[Distributed] NormalDistribution[a, b],
Assumptions -> b > 0 && t > 0];
mCplt = % /. {a -> mProc[t][[1]], b -> mProc[t][[2]]} // FullSimplify
![enter image description here][24]
###Floors###
Interest rate floors are essentially put options on the forward Libor rate, with payoff function:
![enter image description here][25]
- **Normal process**
nProc = OrnsteinUhlenbeckProcess[0, σ, 0, x0];
nFlrt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[k - x, 0],
x \[Distributed] nProc[t],
Assumptions -> σ > 0 && t > 0] // FullSimplify
![enter image description here][26]
- **LogNormal process**
lFlrt = Subscript[P, OIS][t, i] δ[i] Expectation[Max[k - x, 0],
x \[Distributed] lProc[t],
Assumptions -> σ > 0 && t > 0 && k > 0 && x0 > 0] //
FullSimplify
![enter image description here][27]
Plot3D[lFlrt /. {Subscript[P, OIS][t, i] -> 0.98, δ[i] -> 0.25,
x0 -> 0.0125, t -> 1}, {σ, 0.15, 0.5}, {k, 0.01, 0.0135},
PlotLabel -> Style["Floorlet LogNormal premium", Blue, {15, Bold}],
PlotLegends -> Automatic, AxesLabel -> Automatic,
ColorFunction -> "Pastel"]
![enter image description here][28]
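As a consistency check on the LogNormal caplet and floorlet formulas, the two should satisfy put-call parity: floorlet - caplet = P·δ·(K - F). A Python sketch (Black-style formulas, my own transcription, with illustrative inputs):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def d1(F, K, s):
    return (math.log(F / K) + 0.5 * s * s) / s

def black_caplet(P, delta, F, K, sigma, t):
    s = sigma * math.sqrt(t)
    return P * delta * (F * norm_cdf(d1(F, K, s)) - K * norm_cdf(d1(F, K, s) - s))

def black_floorlet(P, delta, F, K, sigma, t):
    s = sigma * math.sqrt(t)
    return P * delta * (K * norm_cdf(s - d1(F, K, s)) - F * norm_cdf(-d1(F, K, s)))

P, delta, F, K, sigma, t = 0.98, 0.25, 0.0125, 0.013, 0.3, 1.0
gap = black_floorlet(P, delta, F, K, sigma, t) - black_caplet(P, delta, F, K, sigma, t)
print(round(gap - P * delta * (K - F), 12))  # parity residual, should be ~0
```

The residual vanishes to machine precision, so the cap and floor legs are internally consistent under the same distributional assumption.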
##Swaptions##
Swaptions are options on the swap rate defined above. They exist in two formats: (i) Payer swaption = put option on the swap rate and (ii) Receiver swaption = call option on the swap rate. When we operate in the multi-curve framework, we deal with the same problem as in the Libor case, i.e. swap rate adjustment.
We develop the swap rate adjustment in the same way as for Libor. When LogNormal dynamics for the swap rate are envisaged, we arrive at the adjustment quantity through a joint expectation process:
![enter image description here][29]
The volatilities in the exponent are now swaption volatilities and correlation coefficient $\rho$ is the correlation between the swap rate and the adjuster.
###Receiver swaption###
This is the call option on the swap rate with payoff Rec_OSWP = Subscript[AF, OIS] (Subscript[S, ADJ][t; Subscript[T, 0], Subscript[T, n]] - K)^+ where:
Subscript[AF, OIS] = Sum[γ[i] Subscript[P, OIS][i], {i, 1, n}]
![enter image description here][30]
The option premium will depend on the modelling choice for the underlying swap rate. We look again at (i) Normal process, (ii) LogNormal process and (iii) Mean-reverting Normal process:
- **Normal process**
nSwpn = Subscript[AF, OIS]
Expectation[Max[x - k, 0], x \[Distributed] nProc[t],
Assumptions -> σ > 0 && t > 0] // FullSimplify
![enter image description here][31]
- **LogNormal process**
lSwpn = Subscript[AF, OIS]
Expectation[Max[x - k, 0], x \[Distributed] lProc[t],
Assumptions -> σ > 0 && t > 0 && k > 0 && x0 > 0] //
FullSimplify
![enter image description here][32]
- **Normal mean-reverting process**
mCplt = Subscript[AF, OIS]
Expectation[Max[x - k, 0],
x \[Distributed] NormalDistribution[a, b],
Assumptions -> b > 0 && t > 0];
mCplt = % /. {a -> mProc[t][[1]], b -> mProc[t][[2]]} // FullSimplify
![enter image description here][33]
###Payer swaption###
These are *put options* on the swap rate, which in the multi-curve environment is drift-adjusted. For example, if we assume a normal distribution for the swap rate, we get:
- **Normal process**
nSwpn2 = Subscript[AF, OIS]
Expectation[Max[k - x, 0], x \[Distributed] nProc[t],
Assumptions -> σ > 0 && t > 0] // FullSimplify
![enter image description here][34]
#Conclusion#
The multi-curve framework for interest rate derivatives brings a new paradigm that requires certain adjustments to the underlying rates. This is due to a measure change under which the expectation of the rates stops being a martingale. The introduction of a separate discounting curve, the OIS curve, requires an adjustment to the forward rate drift in order to preserve the no-arbitrage condition. A quanto-style adjustment, known from the foreign currency market, is used to derive a neat formula.
Pricing and valuation adjustment with Mathematica, as the demonstration above shows, is easy. The availability of stochastic processes and probabilistic functions leads to quick and elegant solutions.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Caplet.jpg&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.02.28.png&userId=20103
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.18.04.png&userId=20103
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.23.08.png&userId=20103
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.27.49.png&userId=20103
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.29.20.png&userId=20103
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.30.27.png&userId=20103
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.31.55.png&userId=20103
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15141.png&userId=20103
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=36372.png&userId=20103
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=57483.png&userId=20103
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at11.38.59.png&userId=20103
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10334.png&userId=20103
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=52615.png&userId=20103
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=64676.png&userId=20103
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=71507.png&userId=20103
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=103448.png&userId=20103
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=31589.png&userId=20103
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at12.05.36.png&userId=20103
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1092710.png&userId=20103
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=291011.png&userId=20103
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=446612.png&userId=20103
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=732913.png&userId=20103
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=570614.png&userId=20103
[25]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at12.14.26.png&userId=20103
[26]: http://community.wolfram.com//c/portal/getImageAttachment?filename=329415.png&userId=20103
[27]: http://community.wolfram.com//c/portal/getImageAttachment?filename=259916.png&userId=20103
[28]: http://community.wolfram.com//c/portal/getImageAttachment?filename=810317.png&userId=20103
[29]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-01-29at12.19.37.png&userId=20103
[30]: http://community.wolfram.com//c/portal/getImageAttachment?filename=240818.png&userId=20103
[31]: http://community.wolfram.com//c/portal/getImageAttachment?filename=934319.png&userId=20103
[32]: http://community.wolfram.com//c/portal/getImageAttachment?filename=797420.png&userId=20103
[33]: http://community.wolfram.com//c/portal/getImageAttachment?filename=881621.png&userId=20103
[34]: http://community.wolfram.com//c/portal/getImageAttachment?filename=498322.png&userId=20103
Igor Hlivka 2018-01-29T12:58:41Z
Set up SendMail from Mathematica?
http://community.wolfram.com/groups/-/m/t/1286756
I just worked through the online version of "An Elementary Introduction to the Wolfram Language" with Mathematica running on a Pi Zero W. Amazed to find most things worked on that tiny platform.
There were a few examples of using SendMail in the book that I wasn't able to make work. Yahoo didn't like the mail being relayed through the Wolfram Cloud, so I was attempting to configure the SMTP server settings, but I can't find the "Preferences > Internet & Mail > Mail Settings" menu suggested by the SendMail::cloudrelay message that appeared in my notebook.
Since it may be important, I am using
pi@raspberrypi:~ $ mathematica --version
11.2
Any help getting this set up would be appreciated.
-- Todd
Todd Kroeger 2018-02-17T07:40:08Z
Set Image acquisition ($ImagingDevice) on Unix?
http://community.wolfram.com/groups/-/m/t/1288310
When I do
$ImagingDevice
I get the following message
Message[$ImagingDevice::notsupported, "Unix"]
What can I do to fix it?
Santiago Hincapie 2018-02-19T23:04:51Z
Upload file to Wolfram Cloud?
http://community.wolfram.com/groups/-/m/t/1288289
I'm logged into my cloud account but I can't seem to save/open anything from MathematicaOnline's folders... Any suggestions?
![enter image description here][2]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-02-19at4.42.44PM.png&userId=900170
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Untitled.jpeg&userId=900170
Mike Sollami 2018-02-19T22:01:28Z
Rendering of RegionIntersection in 3D?
http://community.wolfram.com/groups/-/m/t/1283669
I am trying to visualize some region intersections in 3D.
## Example 1:
ra = 10;
ri = 5;
R1 = RegionDifference[Ball[{0, 0, 0}, ra], Ball[{0, 0, ri - 1/2}, ri]];
Show[R1 // Region, Axes -> True]
![rendered result][1]
The resulting rendered region has a hole, although it should not have one. Does anyone know a way to improve on this?
Another example.
## Example 2:
ra = 10;
ri = 5;
R1 = RegionDifference[Ball[{0, 0, 0}, ra], Ball[{0, 0, 0}, ri]];
R2 = Cylinder[{{-100, 0, 0}, {100, 0, 0}}, 5];
R = RegionIntersection[R1, R2] // Region
The resulting region is rendered with jagged edges.
![The rendered result of Example2][2]
How can this rendering be improved? I know that the rendered edges can not be infinitely sharp like in the mathematical world, but I think some improvement should be possible. Does anyone know how to achieve this? I am using Mathematica 11.1 on Windows.
Thanks for your help.
Maarten
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2018-02-1310_05_39-RegionIntersectionrenderingnotgood.nb_-WolframMathematica11.1.png&userId=307930
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2018-02-1310_06_27-RegionIntersectionrenderingnotgood.nb_-WolframMathematica11.1.png&userId=307930
Maarten van der Burgt 2018-02-13T09:17:38Z
Functions or packages to implement Belief propagation?
http://community.wolfram.com/groups/-/m/t/1288141
I am looking for an efficient way to implement [Belief Propagation][1] (more specifically, a parallel implementation). Does anyone know of any useful functions or packages?
[1]: https://en.wikipedia.org/wiki/Belief_propagation
Kiril Dan 2018-02-19T17:50:59Z