Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Wolfram Language sorted by active
[✓] Why doesn't Cases[{"a", "b"}, "a"~~___] return {"a"} instead of {}?
http://community.wolfram.com/groups/-/m/t/1037935
Why doesn't
Cases[{"a", "b"}, "a"~~___]
return {"a"} instead of {}?
I used three underscores. More specifically, why does In[62] work, but In[61] not?
In[61]:= Cases[{"abcde", "b"}, "a"~~__]
Out[61]= {}
In[62]:= StringMatchQ["abcde", "a" ~~ __]
Out[62]= True
Paul Nielan, 2017-03-23T01:50:49Z
[✓] Interpolate a 3D vector field?
http://community.wolfram.com/groups/-/m/t/1038013
Hello, so I'm trying to take a 3D magnetic field and then interpolate it so I can take the differentials and plot nice magnetic fields. I'm struggling to tell ListInterpolation how to interpolate my data; I get an error reading
> ListInterpolation::inhr: Requested order is too high; order has been reduced to {3,3,3,2}
Which I don't quite understand.
The data is fine because I can plot it correctly using ListVectorPlot3D
Minimum Working Example:
(*define the vectors for each dimension*)
vx = ConstantArray[1, {5, 5, 5}]
vz = ConstantArray[3, {5, 5, 5}]
vy = ConstantArray[2, {5, 5, 5}]
(*put each dimension together to get a 3D vector field - i.e. each point has a 3-vector at it*)
together =
ArrayReshape[Transpose[Flatten /@ {vx, vy, vz}], {5, 5, 5, 3}]
interpolation = ListInterpolation[together]
ListVectorPlot3D[together]
VectorPlot3D[interpolation[x, y, z], {x, 1, 5}, {y, 1, 5}, {z, 1, 5}]
Can ListInterpolation not work with such a complicated data structure, i.e., must the interpolated values be scalars? Would it then be possible to combine 3 interpolation functions?
Tomi Baikie, 2017-03-23T11:02:27Z
Get proper clipping at PlotRange edge?
http://community.wolfram.com/groups/-/m/t/1037863
I've noticed that in some situations the clipping is not done properly, despite the explicit setting PlotRangeClipping->True:
So I plot some complex function:
Plot[{my function}, {M5, 0, 400}, PlotRange -> {{0, 400}, {0, 1500}},
Frame -> True,
FrameTicks -> {{{0, 300, 600, 900, 1200, 1500},
None}, {{0, 100, 200, 300, 400}, None}},
FrameLabel -> {"Subscript[\[Mu], 5] (MeV)", "m (MeV)"},
PlotStyle -> {{Black, Thickness[.007]}, {Black, Dashed},
Black, {Black, Dashing[Tiny]}},
PlotLabels -> Placed[{"+", "\[Sigma]", "-"}, {Scaled[0.6], Above}],
ImagePadding -> None]
Then I save it as EPS, and I insert it in LaTeX:
\begin{figure}[!htb]
\centering
\includegraphics[scale=.7]{plot.eps}
\caption{Digraph.}
\label{fig:digraph}
\end{figure}
And after that I compile the PDF.
It displays as:
[![enter image description here][1]][2]
How may I solve this problem?
EDIT: SIMPLE EXAMPLE
Let
ListPlot[Table[{x, Sin[x]}, {x, -0.5, 6, 0.2}],
PlotRange -> {{0, 5.1}, {-1, 1}}, Frame -> True,
ImageSize -> {200, 150}, Joined -> True,
ImagePadding -> {{40, 5}, {40, 5}}, PlotRangeClipping -> True]
Export["zz.eps", %];
It displays fine as
[![enter image description here][3]][4]
Then I save it as EPS to insert in LaTeX:
\begin{figure}[!htb]
\centering
\includegraphics[scale=.7]{zz.eps}
\caption{Digraph.}
\label{fig:digraph}
\end{figure}
And after that I compile the PDF:
[![enter image description here][5]][6]
[![enter image description here][7]][8]
There are "whiskers". How may I remove them?
***connection [with ][9]***
[1]: https://i.stack.imgur.com/PKOVo.png
[2]: https://i.stack.imgur.com/PKOVo.png
[3]: https://i.stack.imgur.com/xpiJd.png
[4]: https://i.stack.imgur.com/xpiJd.png
[5]: https://i.stack.imgur.com/z8SMI.png
[6]: https://i.stack.imgur.com/z8SMI.png
[7]: https://i.stack.imgur.com/TFf5i.png
[8]: https://i.stack.imgur.com/TFf5i.png
[9]: https://mathematica.stackexchange.com/questions/140757/non-proper-clipping-at-plotrange-edge
Alex Yakubovich, 2017-03-23T10:53:53Z
[✓] NetTrain TargetDevice GPU error
http://community.wolfram.com/groups/-/m/t/902394
When I add the option `TargetDevice->"GPU"` to NetTrain in Mathematica 11.0.0.0, I get the error 'First argument to NetTrain should be a fully specified net'. Is this broken in 11.0.0.0?
Gijsbert Wiesenekker, 2016-08-09T21:51:12Z
Avoid crashing the Mathematica kernel while calling Audio[]?
http://community.wolfram.com/groups/-/m/t/1025102
I'm attaching a Notebook. The full contents are:
mySampleRate = 48000;
testSeries = {0, 1, 0}
testAudio =
Audio[{testSeries, testSeries}, SampleRate -> mySampleRate]
When I evaluate the Notebook, Mathematica beeps 4 times, and it reports:
> An unknown box name (ToBoxes) was sent as the BoxForm for the
> expression. Check the format rules for the expression.
and when I go to Help -> Why The Beep, Mathematica reports:
> The kernel Local has quit (exited) during the course of an evaluation.
Actually, Mathematica's behavior on this Notebook is non-deterministic. Sometimes it beeps 4 times, sometimes it beeps just once, and sometimes it gives me different messages when I ask it why the beep.
Joe Donaldson, 2017-03-04T06:11:02Z
Avoid the problem of "Recursion depth of 1024 exceeded during evaluation of ..."?
http://community.wolfram.com/groups/-/m/t/1037435
Hello, everybody! I am facing the error "Recursion depth of 1024 exceeded during evaluation of ...". I know that it occurs because I cannot obtain an explicit solution for y, but I do not know how to write the code to work around it. Can anyone help me solve it? Thank you. I post an attachment below.
EditProfile FillName, 2017-03-22T13:41:29Z
Glyph Frieze Patterns
http://community.wolfram.com/groups/-/m/t/1032650
Frieze patterns overlap art and math. The design of the base tile in a frieze pattern is artistic, while its repetition can be defined mathematically. This makes frieze patterns a good candidate for exploration with the Wolfram Language.
The bit of code below creates random frieze patterns from font glyphs. I chose sixteen asymmetric glyphs. Others would work, but they should be asymmetric to avoid double symmetries. Here is what the code does:
- randomly select one of the glyphs
- create a random dark color
- randomly rotate the glyph by 45° angles
- crop any excess background
- tile it according to one of the seven frieze patterns
There are 896 possible patterns, not counting the color variations. The results are often startling. Here are a few:
![\[screen shot\]][1]
![\[screen shot\]][2]
![\[screen shot\]][3]
![\[screen shot\]][4]
This suited my need as a small part of a larger project, a sort of school-house trivia game called *Chicken Scratch*. The questions must have a fair amount of randomness so the students reason rather than memorizing answers. For this question, the game presents the frieze pattern and the players choose from the names of four geometric definitions.
The Wolfram Demonstrations Project does have a half-dozen or so demonstrations for exploring frieze patterns. This is the first I've seen that uses glyphs for the base tile design. Though I could turn this into a demonstration, I need to focus on *Chicken Scratch*. Feel free to use this code however you want.
color1 = RGBColor[Table[RandomReal[.6], 3]];
symbol = RandomChoice[{9873, 9730, 38, 9816, 163, 9758, 8730, 8950,
11001, 10729, 10771, 9736, 10000, 9799, 9732, 8623}];
stamp = ImageCrop[
ImageRotate[
Rasterize[
Graphics[{color1,
Style[Text[FromCharacterCode[symbol]], 200]}]], (
RandomInteger[7] \[Pi])/8, Background -> White]];
width = ImageDimensions[stamp][[1]];
frieze = Switch[RandomInteger[{1, 7}],
1, ImageAssemble[Table[stamp, 12]],
2,
top = ImageAssemble[Table[stamp, 12]];
bot = ImageAssemble[Flatten[{
ImageRotate[ImageCrop[stamp, {width/2, Full}, Right], \[Pi]],
Table[ImageRotate[stamp, \[Pi]], 11],
ImageRotate[
ImageCrop[stamp, {width/2, Full}, Left], \[Pi]]}]];
imgLst = ConformImages[{top, bot}];
ImageAssemble[{{imgLst[[1]]}, {imgLst[[2]]}}],
3,
top = ImageAssemble[Table[stamp, 12]];
bot = ImageAssemble[Flatten[{
ImageReflect[ImageCrop[stamp, {width/2, Full}, Left]],
Table[ImageReflect[stamp], 11],
ImageReflect[ImageCrop[stamp, {width/2, Full}, Right]]}]];
imgLst = ConformImages[{top, bot}];
ImageAssemble[{{imgLst[[1]]}, {imgLst[[2]]}}],
4, ImageAssemble[
Riffle[Table[stamp, 6], Table[ImageReflect[stamp, Left], 6]]],
5, ImageAssemble[{Table[stamp, 12],
Table[ImageReflect[stamp], 12]}],
6, ImageAssemble[{Riffle[Table[stamp, 6],
Table[ImageReflect[stamp, Left], 6]],
Riffle[Table[ImageReflect[stamp, Left], 6], Table[stamp, 6]]}],
7, ImageAssemble[{Riffle[Table[stamp, 6],
Table[ImageReflect[stamp, Left], 6]],
Riffle[Table[stamp, 6], Table[ImageReflect[stamp, Left], 6]]}]];
pic = Image[frieze, ImageSize -> {{800}, {100}}]
You may have noticed that my code relies on procedural programming constructs like **Switch** and **If**. I have only been using the Wolfram Language for about a year. I'm grateful that the Wolfram Language allows me to use procedural techniques while I learn how to write more elegant function-based code.
Oh, there is a possibility that some of the glyphs won't work on your system because they rely on what fonts you have on your machine. If that's the case, replace the character codes with ones that you do have.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.24.33AM.png&userId=788861
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.27.58AM.png&userId=788861
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.27.20AM.png&userId=788861
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.25.47AM.png&userId=788861
Mark Greenberg, 2017-03-17T00:12:50Z
Performance tuning in Wolfram Language
http://community.wolfram.com/groups/-/m/t/1037730
*NOTE: Please see the original version of this post [**HERE**][1]. Cross-posted here per suggestion of [@Vitaliy Kaurov][at0].*
----------
Since Mathematica is a symbolic system, with a symbolic evaluator much more general than in numerically-based languages, it is not surprising that performance tuning can be trickier here. There are many techniques, but they can all be understood from a single main principle. It is:
*Avoid the full Mathematica symbolic evaluation process as much as possible.*
All techniques seem to reflect some facet of it. The main idea here is that, most of the time, a slow Mathematica program is slow because many Mathematica functions are very general. This generality is a great strength, since it enables the language to support better and more powerful abstractions, but in many places in a program such generality, used without care, can be a (huge) overkill.
I won't be able to give many illustrative examples in the limited space, but they can be found in several places, including some WRI technical reports (Daniel Lichtblau's one on efficient data structures in Mathematica comes to mind), a very good book by David Wagner on Mathematica programming, and, most notably, many MathGroup posts. I also discuss a limited subset of them in [my book][3]. I will supply more references soon.
Here are a few of the most common ones (I only list those available within the Mathematica language itself, not mentioning CUDA/OpenCL or links to other languages, which are of course also possibilities):
1. *Push as much work into the kernel at once as possible, work with as large
chunks of data at a time as possible, without breaking them into pieces*
1.1. Use built-in functions whenever possible. Since they are implemented
in the kernel, in a lower-level language (C), they are typically (but not always!)
much faster than user-defined ones solving the same problem. The more specialized
version of a built-in function you are able to use, the more chances you have for a speed-up.
1.2. Use functional programming (`Map`, `Apply`, and friends). Also, use pure functions in `#-&` notation when you can; they tend to be faster than `Function`s with named arguments or those based on patterns (especially for not computationally-intensive functions mapped over large lists).
1.3. Use structural and vectorized operations (`Transpose, Flatten,
Partition, Part` and friends), they are even faster than functional.
1.4. Avoid using procedural programming (loops etc.), because this programming style tends to break large structures into pieces (array indexing etc.). This pushes a larger part of the computation outside of the kernel and makes it slower.
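For illustration (a toy example, not from the original post and not benchmarked here), compare the same computation written procedurally and in the vectorized style advocated above:

```mathematica
(* Procedural style: builds the list element by element *)
squaresLoop[n_] := Module[{res = {}},
  Do[AppendTo[res, i^2], {i, n}];
  res]

(* Vectorized style: one structural operation handled in the kernel *)
squaresVec[n_] := Range[n]^2

squaresLoop[5] === squaresVec[5]  (* True *)
```

On large inputs the vectorized form is typically much faster, because the whole computation happens inside a single kernel call.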
2. *Use machine-precision whenever possible*
2.1. Be aware and use Listability of built-in numerical functions, applying them to
large lists of data rather than using `Map` or loops.
2.2. Use `Compile`, when you can. Use the new capabilities of `Compile`, such as `CompilationTarget->"C"`,
and making your compiled functions parallel and Listable.
2.3. Whenever possible, use vectorized operations (`UnitStep, Clip, Sign, Abs`, etc)
inside `Compile`, to realize "vectorized control flow" constructs such as `If`, so that
you can avoid explicit loops (at least as innermost loops) also inside `Compile`. This
can move you in speed from Mathematica byte-code to almost native C speed, in some cases.
2.4. When using `Compile`, make sure that the compiled function doesn't bail out to non-compiled evaluation. See examples [in this MathGroup thread][4].
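A minimal sketch of points 2.2-2.3 (a toy function of my own, assuming a version of Mathematica that supports these `Compile` options):

```mathematica
(* Ramp function: UnitStep replaces an explicit If,
   so the compiled body stays vectorized *)
ramp = Compile[{{x, _Real}},
   x*UnitStep[x],
   RuntimeAttributes -> {Listable}, Parallelization -> True];

ramp[{-2., -1., 0., 1., 2.}]  (* {0., 0., 0., 1., 2.} *)
```

Adding `CompilationTarget -> "C"` (when a C compiler is available) can push this further toward native speed.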
3. *Be aware that Lists are implemented as arrays in Mathematica*
3.1. Pre-allocate large lists
3.2. Avoid `Append, Prepend, AppendTo` and `PrependTo` in loops, for building
lists etc (because they copy entire list to add a single element, which leads
to quadratic rather than linear complexity for list-building)
3.3. Use linked lists (structures like `{1,{2,{3,{}}}}` ) instead of plain lists
for list accumulation in a program. The typical idiom is `a = {new element, a}`.
Because a is a reference, a single assignment is constant-time.
3.4. Be aware that pattern-matching for sequence patterns (BlankSequence,
BlankNullSequence) is also based on Sequences being arrays. Therefore, a rule
`{fst_,rest___}:>{f[fst],g[rest]}` will copy the entire list when applied. In particular, don't
use recursion in a way which may look natural in other languages. If you want to use recursion on lists, first convert your lists to linked lists.
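The linked-list idiom from 3.3 in a minimal form (my own toy example):

```mathematica
(* Each assignment prepends in constant time;
   flatten once at the end to recover a plain list *)
acc = {};
Do[acc = {i, acc}, {i, 5}];
Reverse[Flatten[acc]]  (* {1, 2, 3, 4, 5} *)
```

This avoids the quadratic cost of `AppendTo` in a loop: the single `Flatten` at the end does all the copying at once.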
4. *Avoid inefficient patterns, construct efficient patterns*
4.1. Rule-based programming can be both very fast and very slow, depending on how
you build your structures and rules, but in practice it is easier to inadvertently
make it slow. It will be slow for rules which force the pattern-matcher to make many
a priori doomed matching attempts, for example by under-utilizing each run of the
pattern-matcher through a long list (expression). Sorting elements is a good example:
`list//.{left___,x_,y_,right___}/;x>y:>{left,y,x,right}` - has a cubic complexity in the
size of the list (explanation is e.g. [here][5]).
4.2. Build efficient patterns, and corresponding structures to store your data, making the pattern-matcher waste as little time on false matching attempts as possible.
4.3. Avoid using patterns with computationally intensive conditions or tests. The
pattern-matcher will give you the most speed when patterns are mostly syntactic in
nature (test structure, heads, etc). Every time when condition `(/;)` or pattern test `(?)`
is used, for every potential match, the evaluator is invoked by the pattern-matcher,
and this slows it down.
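To illustrate 4.3 (toy patterns of my own; no timings claimed), the same selection can be written syntactically, or with a test that invokes the evaluator on every candidate:

```mathematica
data = RandomInteger[100, 10^5];

(* Mostly syntactic pattern: the pattern-matcher checks heads only *)
Cases[data, _Integer];

(* Same result here, but the pattern test calls the
   evaluator for every element considered *)
Cases[data, _?IntegerQ];
```

On purely numeric data the two give identical results, but the syntactic version gives the pattern-matcher far less work per element.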
5. *Be aware of immutable nature of most Mathematica built-in functions*
Most Mathematica built-in functions which process lists create a copy of an original list and
operate on that copy. Therefore, they may have a linear time (and space) complexity in the
size of the original list, even if they modify the list in only a few places. One universal built-in function that does not create a copy but modifies the original expression, and thus does not have this issue, is `Part` (on the left-hand side of an assignment).
5.1. Avoid using most list-modifying built-in functions for a large number of
small independent list modifications, which can not be formulated as a single step
(for example, `NestWhile[Drop[#, 1] &, Range[1000], First[#] < 500 &]`)
5.2. Use extended functionality of `Part` to extract and modify a large number of
list (or more general expression) elements at the same time. This is very fast,
and not just for packed numerical arrays (`Part` modifies the original list).
5.3. Use `Extract` to extract many elements at different levels at once, passing
to it a possibly large list of element positions.
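A small illustration of 5.2 and 5.3 (my own example):

```mathematica
lst = Range[10];
lst[[{2, 4, 6}]] = 0;     (* Part assignment modifies lst itself,
                             for many positions in one step *)
lst                        (* {1, 0, 3, 0, 5, 0, 7, 8, 9, 10} *)

Extract[lst, {{1}, {7}}]   (* {1, 7} *)
```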
6. *Use efficient built-in data structures*
The following internal data structures are very efficient and can be used in
many more situations than it may appear from their stated main purpose. Lots of such examples can be found by searching the Mathgroup archive, particularly contributions of Carl Woll.
6.1. Packed arrays
6.2. Sparse arrays
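A quick way to see both structures in action (a toy example; `Developer`PackedArrayQ` is a well-known utility in the `Developer`` context):

```mathematica
(* Range returns a packed array; mixing in non-numeric data unpacks it *)
Developer`PackedArrayQ[Range[10]]               (* True *)
Developer`PackedArrayQ[Append[Range[10], "x"]]  (* False *)

(* A mostly-zero vector stored sparsely *)
sa = SparseArray[{3 -> 1.}, 10];
sa[[3]]  (* 1. *)
```

Keeping numerical data packed is one of the cheapest large speed-ups available.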
7. *Use hash tables.*
Starting with version 10, immutable associative arrays (Associations) are available in Mathematica.
7.1. Associations
The fact that they are immutable does not prevent them from having efficient insertion and deletion of key-value pairs (cheap copies differing from the original association by the presence or absence of a given key-value pair). They represent the idiomatic associative arrays in Mathematica, and have very good performance characteristics.
For earlier versions, the following alternatives work pretty well, being based on Mathematica's internal hash tables:
7.2. Hash-tables based on `DownValues` or `SubValues`
7.3. `Dispatch`
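Minimal sketches of both options (toy examples of my own):

```mathematica
(* Version 10+: Associations *)
assoc = <|"a" -> 1|>;
AssociateTo[assoc, "b" -> 2];  (* efficient insertion *)
assoc["b"]  (* 2 *)

(* Earlier versions: a DownValues-based hash table *)
hash["a"] = 1;
hash["b"] = 2;
hash["b"]  (* 2 *)
```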
8. *Use element-position duality*
Often you can write faster functions to work with positions of elements rather than
elements themselves, since positions are integers (for flat lists). This can give you
up to an order of magnitude speed-up, even compared to generic built-in functions
(`Position` comes to mind as an example).
9. *Use Reap and Sow*
`Reap` and `Sow` provide an efficient way of collecting intermediate results, and generally
"tagging" parts you want to collect, during the computation. These commands also go well
with functional programming.
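For example (a toy use of `Reap`/`Sow` of my own, in place of `AppendTo` in a loop):

```mathematica
(* Collect primes found during a scan *)
Reap[Do[If[PrimeQ[i], Sow[i]], {i, 1, 20}]][[2, 1]]
(* {2, 3, 5, 7, 11, 13, 17, 19} *)
```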
10. *Use caching, dynamic programming, lazy evaluation*
10.1. Memoization is very easily implemented in Mathematica, and can save a lot of execution
time for certain problems.
10.2. In Mathematica, you can implement more complex versions of memoization, where you can
define functions (closures) at run-time, which will use some pre-computed parts in their
definitions and therefore will be faster.
10.3. Some problems can benefit from lazy evaluation. This seems more relevant to memory efficiency, but can also affect run-time efficiency. Mathematica's symbolic constructs make
it easy to implement.
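The classic memoization idiom mentioned in 10.1, in its standard minimal form:

```mathematica
(* Each computed value is cached as a more specific definition *)
fib[0] = 0; fib[1] = 1;
fib[n_Integer?Positive] := fib[n] = fib[n - 1] + fib[n - 2];

fib[30]  (* 832040 *)
```

Without the `fib[n] =` cache, this recursion would be exponential; with it, each value is computed only once.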
A successful performance-tuning process usually employs a combination of these techniques,
and you will need some practice to identify cases where each of them will be beneficial.
[1]: http://mathematica.stackexchange.com/a/29351
[2]: https://groups.google.com/d/topic/comp.soft-sys.math.mathematica/XOXapJm_Q1Q/discussion
[3]: http://www.mathprogramming-intro.org/
[4]: https://groups.google.com/d/topic/comp.soft-sys.math.mathematica/XOXapJm_Q1Q/discussion
[5]: http://www.mathprogramming-intro.org/book/node355.html
[at0]: http://community.wolfram.com/web/vitaliyk
Leonid Shifrin, 2017-03-22T20:05:20Z
Create a plot of vectors using a Do-Loop?
http://community.wolfram.com/groups/-/m/t/1036796
I am trying to create a plot of vectors using a Do-loop and
`Graphics[{Arrow[{...`. Within the Do loop I calculate
the beginning (X, Y) and ending (X1, Y1) coordinates for each vector.
Can I use the `Graphics[{Arrow[{` command within a Do-loop,
or do I first have to create a Table? Below is my code.
Terry
A = {
{.79, 1.36}, {.93, 1.38}, {.58, .38}, {.87, .87}, {.83, .79},
{.31, .99}, {.60, .48}, {.60, .87}, {1.64, .15}, {1.11, 1.30},
{.53, .97}, {1.26, .39}, {2.37, .00}, {1.17, 1.76}, {.96, 1.26},
{.56, .46}, {1.17, .20}, {.63, .26}, {1.01, .47}, {.81, .77}};
DI = {-.90, -1.20, 1., -.97, -1.08, -1.53, -.61, -.60,
1.24, -.69, -1.31, .92, 2.39, -.06, -.48, -.82, -.82,
1.11, .66, -.15, -1.08}
Do[MDISC = Sqrt[A[[i, 1]]^2 + A[[i, 2]]^2];
COSX = A[[i, 1]]/MDISC;
COSY = A[[i, 2]]/MDISC;
BIGD = -DI[[i]]/MDISC;
X = BIGD*COSX;
Y = BIGD*COSY;
X1 = (MDISC + BIGD)*COSX;
Y1 = (MDISC + BIGD)*COSY;
Graphics[{Arrow[{{X, Y}, {X1, Y1}}]}], {i, 20}]
Terry Ackerman, 2017-03-21T11:42:18Z
Correct a NDSolve approach when an argument contains an InverseFunction?
http://community.wolfram.com/groups/-/m/t/972466
We have a thin crystal of thickness d illuminated uniformly from the left at an intensity lgti[0, t] of unity. The crystal is composed of a photo reactive species *a* which absorbs light. Inside the crystal at location x and time t the photo chemical reaction leads to a local concentration of *a* given as a[x, t] in the code below.
Code to generate the concentration, a[x, t], and light intensity, lgti[x, t], within the crystal is straightforward and is shown.
The cause of the errors generated using the code is that in some cases a[x, t] does not return a number. These exceptional cases are near a known limit of the InverseFunction, and a[x, t] in these cases could be given the value unity if found. My problem is that using NumberQ inside a Module definition of a[x, t] always gives False, because lgti[x, t] is unevaluated.
The starting code is shown but see the attached notebook.
ClearAll[a, x, t, eqns, \[Sigma]N, lgti, soln, t0, d, tmax]
\[Sigma]N = 1.78*10^4; d = 0.001; tmax = 3000.0;
a[x_, t_] := InverseFunction[(1.8996253051848473`/lgti[x, t] *
(162.99559471365637` Log[1 + 8.98898678414097` (1 - #1)] -
172.98458149779734` Log[#1]) ) &
] [t]
eqns = {D[lgti[x, t], x ] == - \[Sigma]N a[x, t ] lgti[x, t],(* Beer's Law *)
lgti[0, t] == 1,
lgti[x, 0] == Exp[-\[Sigma]N x]
};
t0 = AbsoluteTime[];
soln = NDSolve[eqns, lgti, {x, 0, d}, {t, 0, tmax},
MaxStepFraction -> 0.01] [[1]];
Print[ToString[(AbsoluteTime[] - t0)/60] <> " minutes"]
Any advice on how to code a[x, t] so that lgti[x, t] appears as a number within the body of the code would be welcome.
An alternate approach would also be well received.
Mervin Hanson, 2016-12-01T00:35:14Z
Issue with label backgrounds in Graphics3D, Mathematica 11.1
http://community.wolfram.com/groups/-/m/t/1035985
A new, undesired "feature" in MMA 11.1 is that labels in Graphics3D seem to always be rendered against a white background. The screenshots below are from a recent [Demonstration by Izidor Hafner][1]
MMA 10.4
![enter image description here][2]
MMA 11.1
![enter image description here][3]
[1]: http://demonstrations.wolfram.com/FourTheoremsOnSphericalTriangles/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MMA10.4.png&userId=93385
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MMA11.1.png&userId=93385
Hans Milton, 2017-03-20T19:20:57Z
Code puzzles: turning docs into educational games
http://community.wolfram.com/groups/-/m/t/1032663
Teaching programming and assessing learning progress is often a very custom task. I wanted to create a completely automated, "practically" infinite stream of random puzzles that guide a learner towards improving programming skills. I think the major problem is content creation. To test whether the learner knows a programming concept, an exercise needs to be wisely designed. And it is better to have a randomized set of such exercises to definitively test the knowledge and exclude guessing, cheating, and so on. Often creating such educational materials is very tedious, time-consuming, and manual. Exactly like creating good documentation. I will explain one simple idea of using docs to make an educational game. This is just a barebone prototype, to clearly follow the inner workings (try it out & share: https://wolfr.am/bughunter ). Please comment with feedback on how we can develop this idea further.
[![enter image description here][1]][3]
# Introduction: efficient use of resources
The docs are the finest wealth and depth of information and should be explored beyond their regular usage. The manual, painstaking, time-consuming effort of creating good programming documentation should be used to its fullest potential. An automated game play would be a novel take on docs. We can use existing code examples in docs to randomly pull pieces of code and make programming exercises automatically. Being able to read code and find bugs is, in my experience, one of the most enlightening practices. The goal of the game linked above is to find a defect in the input code (a bug) and fix it. Hence, the "bug hunter". There are just 2 possible outcomes of a single game cycle, and after each you can "try again":
![enter image description here][4]
# Core game code: making puzzles
Wolfram Language (WL) documentation is one of the best I've seen. It has pages and pages of examples, starting from simple ones and going through all the details of the usage. Moreover, the docs are written in WL itself, and furthermore, WL can access the docs and even has internal self-knowledge of its structure via WolframLanguageData. For instance, this is how you can show a relationship community graph for symbols related to `GatherBy`:
WolframLanguageData["GatherBy", "RelationshipCommunityGraph"]
![enter image description here][5]
We can use `WolframLanguageData` to access docs examples and then drop some parts of the code. The puzzle is then for the learner to find what is missing. For the sake of clarity in designing a small working prototype, let's limit the tested WL functions and corresponding docs' pages to some small number. So out of ~5000 (and we [just released a new addition][6]):
WolframLanguageData[] // Length
`4838`
built in symbols I just take 30
functions = {"Append", "Apply", "Array", "Cases", "Delete", "DeleteCases", "Drop", "Except",
"Flatten", "FlattenAt", "Fold", "Inner", "Insert", "Join", "ListConvolve", "Map", "MapThread",
"Nest", "Outer", "Partition", "Prepend", "ReplacePart", "Reverse", "RotateLeft", "RotateRight",
"Select", "Sort", "Split", "Thread", "Transpose"};
functions // Length
30
that are listed on a [very old but neat animated page][7] of some essential core-language collection. I will also add some "sugar syntax" to the potentially removable parts of code:
sugar = {"@@", "@", "/@", "@@@", "#", "^", "&"};
So, for instance, out of the following [example in docs][8] we could remove a small part to make a puzzle:
![enter image description here][9]
Here is an example of "sugar syntax" removal, which for novice programmers would be harder to solve:
![enter image description here][10]
The next step is to define a function that can check whether a string is a built-in symbol (a function, out of all ~5000) or one of the sugar syntax elements we defined above:
ClearAll[ExampleHeads];
ExampleHeads[e_]:=
Select[
Cases[e,_String, Infinity],
(NameQ["System`"<>#]||MemberQ[sugar,#])&&#=!="Input"&
]
The next function essentially makes a single quiz question. First it randomly picks a function from the list of 30 symbols we defined. Then it goes to the doc page of that symbol, to the section called "Basic Examples". It finds a random example and removes a random part of it:
ranquiz[]:=Module[
{ranfun=RandomChoice[functions],ranexa,ranhead},
ranexa=RandomChoice[WolframLanguageData[ranfun,"DocumentationBasicExamples"]][[-2;;-1]];
ranhead=RandomChoice[ExampleHeads[ranexa[[1]]]];
{
ReplacePart[#,Position[#,ranhead]->""]&@ranexa[[1]],
ranexa[[2]],
ranhead,
ranfun
}
]
Now we will define a few simple variables and tools.
# Image variables
I keep marveling at how convenient it is that the Mathematica front end can make images part of code. This makes notebooks a great IDE:
![enter image description here][11]
# Databin for tracking stats
It is important to have statistics for your learning game: to understand how to improve it and where the education process should go. [Wolfram Datadrop][12] is an amazing tool for these purposes.
[![enter image description here][13]][14]
We define the databin as
bin = CreateDatabin[<|"Name" -> "BugHunter"|>]
# Deploy game to the web
To make an actual application usable by everyone with internet access, I will use the [Wolfram Development Platform][15] and [Wolfram Cloud][16]. First I define a function that will build the "result of the game" web page. It checks whether the answer is right or wrong and returns differently designed pages accordingly.
quiz[answer_String,check_String,fun_String]:=
(
DatabinAdd[Databin["kd3hO19q"],{answer,check,fun}];
Grid[{
{If[answer===check,
Grid[{{Style["Right! You got the bug!",40,Darker@Red,FontFamily->"Chalkduster"]},{First[imgs]}}],
Grid[{{Style["Wrong! The bug got you!",40,Darker@Red,FontFamily->"Chalkduster"]},{Last[imgs]}}]
]},
{Row[
{Hyperlink["Try again","https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter"],
"|",
hyperlink["Documentation","http://reference.wolfram.com/language/ref/"<>fun<>".html"],
"|",
hyperlink["Fun hint","http://reference.wolfram.com/legacy/flash/animations/"<>fun<>".html"]},
Spacer[10]
]},
{Style["===================================================="]},
{hyperlink["An Elementary Introduction to the Wolfram Language","https://www.wolfram.com/language/elementary-introduction"]},
{hyperlink["Fast introduction for programmers","http://www.wolfram.com/language/fast-introduction-for-programmers/en"]},
{logo}
}]
)
This function is used inside a `CloudDeploy[...FormFunction[...]...]` construct to actually deploy the application. `FormFunction` builds a query form, a web user interface to formulate a question and to get the user's answer. Note that for the random choices to work properly, `Delayed` is used as a wrapper for `FormFunction`.
CloudDeploy[Delayed[
quizloc=ranquiz[];
FormFunction[
{{"code",None} -> "String",
{"x",None}-><|
"Input"->StringRiffle[quizloc[[3;;4]],","],
"Interpreter"->DelimitedSequence["String"],
"Control"->Function[Annotation[InputField[##],{"class"->"sr-only"},"HTMLAttrs"]]|>},
quiz[#code,#x[[1]],#x[[2]]]&,
AppearanceRules-> <|
"Title" -> Grid[{{title}},Alignment->Center],
"MetaTitle"->"BUG HUNTER",
"Description"-> Grid[{
{Style["Type the missing part of input code",15, Darker@Red,FontFamily->"Ayuthaya"]},
{Rasterize@Grid[{
{"In[1]:=",quizloc[[1]]},
{"Out[1]=",quizloc[[2]]}},Alignment->Left]}
}]
|>]],
"bughunter",
Permissions->"Public"
]
The result of the deployment is a cloud object at a URL:
CloudObject[https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter]
with the short version:
URLShorten["https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter", "bughunter"]
https://wolfr.am/bughunter
And we are done! You can go to the above URL and play.
# Further thoughts
Here are some key points and further thoughts.
## Advantages:
- Automation of content: NO new manual resource development, use existing code bases.
- Automation of testing: NO manual labor of grading.
- Quality of testing: NO multiple choice, NO guessing.
- Quality of grading: almost 100% exact detection of mistakes and correct solutions.
- Fight cheating: the clearly identifiable question type "find the missing code part" helps to ban help from friendly forums (such as this one).
- Almost infinite variability of examples if the whole docs system is engaged.
- A wide range from very easy to very hard examples (exclusion of multiple functions and syntax elements can make this really hard).
## Improvements:
- Flexible scoring system based on function usage frequencies.
- Optional placeholder as hint where the code is missing.
- Using the network of related functions (see above) to move smoothly through the topical domains.
- Using function usage frequency to feed easier or harder exercises based on test progress.
***Please comment with your own thoughts and games and code!***
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-17at10.37.46AM.png&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at6.49.58PM.png&userId=11733
[3]: https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at6.58.45PM.png&userId=11733
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sgfq354ythwrgsf.png&userId=11733
[6]: http://blog.wolfram.com/2017/03/16/the-rd-pipeline-continues-launching-version-11-1/
[7]: http://reference.wolfram.com/legacy/flash/
[8]: http://reference.wolfram.com/language/ref/Except.html
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.21.05PM.png&userId=11733
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.25.52PM.png&userId=11733
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.34.59PM.png&userId=11733
[12]: https://datadrop.wolframcloud.com/
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.41.35PM.png&userId=11733
[14]: https://datadrop.wolframcloud.com/
[15]: https://www.wolfram.com/development-platform/
[16]: http://www.wolfram.com/cloud/Vitaliy Kaurov2017-03-17T01:13:24ZAlternative framework for stress testing routines - Value at Risk focus
http://community.wolfram.com/groups/-/m/t/458805
Stress testing in the Value-at-Risk context provides a complementary view of the firm’s risk profile when the market becomes extremely volatile and unstable. In this respect, stress testing generally replicates crisis scenarios in the current market setting. The traditional approach to stress testing emphasises qualitative factors. We propose an alternative arrangement: a purely quantitative approach in a probabilistic setting, where a particular stress test is modelled through the quantiles of a calibrated probability distribution.
![StVar image][1]
#Introduction
Value-at-Risk (VaR) as a standard risk measure is usually defined in terms of a time horizon, a confidence interval and the parameter settings that drive the calculation of the measure. There are a few theoretical assumptions underlying VaR, and one of them defines VaR as a measure under normal market conditions. This fundamental principle, by definition, ignores extreme market scenarios where market parameters - and the volatility in particular - undergo periods of excessive variation.
Stress testing plays a complementary role to VaR - it identifies the hidden vulnerability of the risk measure that results from its hidden assumptions, and thus provides risk and senior managers with a clear view of the loss when the market runs into crisis. Therefore, financial institutions these days are required to run stress tests of their VaR on a regular basis to quantify the additional loss based on stress-affected parameters.
To ease and streamline the stress testing exercise, we propose a new approach based on a probabilistic representation of the VaR variables, where a particular scenario is modelled through the quantile behaviour of a calibrated density. This approach is flexible, easy to implement and leads to an elegant and tractable solution that can be applied in various risk quantification settings.
#Stress definition
Stress testing is generally carried out in three settings:
- Historical scenario of particular crisis
- Stylised definition of particular scenarios
- Hypothetical events
The recent regulatory review strongly prefers the historical scenario approach, where a particular period covering 12 consecutive months of crisis observations has to be included in the stress testing exercise.
The objective of this paper is to propose a new parametric approach to stress testing where the underlying market parameters are defined probabilistically and the stress scenario is then expressed as a quantile of certain probability distribution. The solution brings a number of benefits - the primary being simplicity of implementation, ease of maintenance and ease of use.
#Problem setting
Consider the following case - we have built a portfolio of 5 UK stocks - Lloyds, Vodafone, Barclays, BP and HSBC - with £1 million invested in each of them. We want to compute **VaR** and **stressed VaR** for our portfolio with £5 million value.
##Getting market data
We get the market data for the 5 stocks, with history starting in January 2008:
stocks = {"LLOY.L", "VOD.L", "BARC.L", "BP.L", "HSBA.L"};
data = TimeSeries[FinancialData[#, "January 1, 2008"]] & /@ stocks;
The price history of each stock looks as follows:
DateListPlot[data, PlotLegends -> stocks]
![enter image description here][2]
## Processing market data
We compute the daily log-returns of each stock:
dret = TimeSeriesResample[MovingMap[Log[Last[#]] - Log[First[#]] &, data, {2}]];
![enter image description here][3]
In the same way, we calculate each stock's historical daily volatility:
qvol = MovingMap[StandardDeviation, dret, {60, "Day"}];
Evaluate@DateListPlot[qvol, PlotRange -> All, PlotLegends -> stocks]
![enter image description here][4]
Both the returns and their volatilities show high values for the financial stocks - Lloyds and Barclays in particular - with excessive movement in 2009, 2010 and partially in 2012.
![enter image description here][5]
These are the periods that represent the peak of the financial crisis and the stress to the market.
#Probabilistic definition of volatility
The VaR metric deals primarily with return volatility, and volatility behaviour is therefore the key object of the stress testing process. As the graph above suggests, the evolution of volatility over time is not constant: we observe periods of high and low values. If we take an abstract view of the volatility data, we can treat it as a 'random variable' with some probability distribution. The presence of both high and low values strongly supports this argument.
We propose the Johnson distribution - unbounded type - to model the distribution of volatility over the observed period of time. Recall that the Johnson distributions form a family of the form Y=\[Sigma] g((X-\[Gamma])/\[Delta])+\[Mu] where X~Normal[]. In the case of the 'unbounded' distribution, g=sinh. The Johnson family consists of well-behaved functions and, with four parameters to fit, is ideally suited to data with long, heavy tails. In this respect it can easily be used to model the distribution of volatility over time.
The probability density of the Johnson unbounded distribution is:
PDF[JohnsonDistribution["SU", \[Gamma], \[Delta], \[Mu], \[Sigma]], x]
![enter image description here][6]
and the density plot with varying shape factor \[Gamma] looks as follows:
Plot[Evaluate@
Table[PDF[JohnsonDistribution["SU", \[Gamma], 1.25, 0.007, 0.00341],
x], {\[Gamma], {-3, -4, -5}}], {x, 0, 0.15}, Filling -> Axis,
PlotRange -> All,
PlotLegends -> {"\[Gamma]=-3", "\[Gamma]=-4", "\[Gamma]=-5"}]
![enter image description here][7]
Johnson-type distributions are very flexible in terms of tail control.
## Historical distribution of volatility
We can visualise the volatility distribution by looking at each stock histogram:
size = Length[stocks];
Table[Histogram[qvol[[i, All, 2]], 30, "PDF",
PlotLabel -> stocks[[i]],
ColorFunction ->
Function[{height}, ColorData["Rainbow"][height]]], {i, size}]
![enter image description here][8]
We can use the historical data to fit the volatility to the Johnson distribution:
edist = Table[
EstimatedDistribution[qvol[[i, All, 2]],
JohnsonDistribution [
"SU", \[Gamma], \[Delta], \[Mu], \[Sigma]]], {i, size}]
How good is the fit? We can see this on the charts:
Table[Show[
Histogram[qvol[[i, All, 2]], 20, "PDF", PlotLabel -> stocks[[i]]],
Plot[PDF[edist[[i]], x], {x, 0, 0.1}, PlotRange -> All,
PlotStyle -> {Blue, Thick}]], {i, size}]
![enter image description here][9]
## Defining correlation matrix
Portfolio VaR requires a correlation structure amongst the VaR components. We can create it from the historical return data defined above. The only question is the time window from which we select the market data. For the standard VaR we can choose a one-year history:
tswin = {{2014, 3, 1}, {2015, 3, 1}};
retwin = TimeSeriesWindow[dret, tswin];
volwin = Table[
Mean[MovingMap[StandardDeviation, retwin[[i]], {60, "Day"}][[All,
2]]], {i, size}]
Table[retwin[[i, All, 2]], {i, size}];
wincorr = Correlation[Transpose[%]];
wincorr // MatrixForm
> {0.0117992, 0.0127976, 0.0155371, 0.011966, 0.00891589}
![enter image description here][10]
#Standard VaR calculation
Having defined the volatilities and the correlation matrix based on the past year's data, we can calculate the parametric VaR easily. Assuming a normal distribution for the stock returns and a 1-day VaR horizon, it is defined as follows:
##Individual stock VaR
indVar = -Sqrt[2] \[Pi] \[Sigma] InverseErfc[2 \[Alpha]]
where \[Pi] = value of the investment = £1 million, \[Sigma] = stock return volatility and \[Alpha] = confidence level.
ndinv = Refine[InverseCDF[NormalDistribution[0, \[Sigma]], \[Alpha]],
0 < \[Alpha] < 1]
> -Sqrt[2] \[Sigma] InverseErfc[2 \[Alpha]]
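As a quick sanity check of this formula (the 1.2% daily volatility here is an illustrative assumption, not one of the fitted values above), note that `-Sqrt[2] InverseErfc[2*0.99]` is the familiar 99% normal quantile factor of about 2.33:

```mathematica
(* illustrative 99% 1-day VaR on a £1m position with 1.2% daily volatility *)
With[{\[Sigma] = 0.012, \[Alpha] = 0.99, notional = 10^6},
 -Sqrt[2] \[Sigma] InverseErfc[2 \[Alpha]] notional]
(* ≈ 27916, i.e. about 2.33 \[Sigma] on the position *)
```

This is in line with the individual stock VaRs computed below.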
With 99% confidence and each stock's volatility calculated above:
indvar = Table[
10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] -> volwin[[i]]}, {i, size}]
Total[%]
BarChart[indvar, ChartStyle -> "Rainbow",
PlotLabel -> Style["Individual stock 1 day VaR", 16],
ChartLegends -> stocks]
> {27449.1, 29771.8, 36144.6, 27837.1, 20741.5}
> 141944.
![enter image description here][11]
We can see the highest individual VaR for Barclays and the lowest for HSBC. This is consistent with the individual stock volatilities observed above.
##Portfolio VaR
Formula-wise, this is equivalent to:
portVaR = Sqrt[indVar.\[CapitalSigma].indVar]
baseportVaR = Sqrt[indvar.wincorr.indvar]
> 104226.
One can see that the portfolio VaR is less than the sum of the individual VaRs due to the diversification effect: the portfolio structure reduces the sum of individual VaRs by almost £40,000.
The 1-day total portfolio VaR is £104k, or about 2.1% of the portfolio's value.
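The diversification effect can be illustrated with a hypothetical two-asset example (the VaR figures and correlation below are made up for illustration): for any correlation ρ < 1, the quadratic-form portfolio VaR is strictly below the simple sum of the individual VaRs.

```mathematica
(* two-asset illustration: Sqrt[v.C.v] < Total[v] whenever \[Rho] < 1 *)
With[{v = {30000., 40000.}, \[Rho] = 0.5},
 {Sqrt[v . {{1, \[Rho]}, {\[Rho], 1}} . v], Total[v]}]
(* → {60827.6, 70000.} *)
```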
#Stressed VaR
VaR is driven primarily by the volatility of returns. In the portfolio context there is another factor - correlation. When stressing VaR, one has to think about extending the stress to both parameters.
## Historical scenario of past crisis
This is the most frequently used method of stress testing. Having data available for the entire period makes the selection simple.
Looking at the historical volatility graph, it is obvious that the volatility peaked in 2009 - 2010. We therefore select this period for our stress testing.
tswin = {{2009, 3, 1}, {2010, 3, 1}};
retwin = TimeSeriesWindow[dret, tswin];
volwin = Table[
Mean[MovingMap[StandardDeviation, retwin[[i]], {60, "Day"}][[All,
2]]], {i, size}]
Table[retwin[[i, All, 2]], {i, size}];
wincorr = Correlation[Transpose[%]];
wincorr // MatrixForm
> {0.0412017, 0.013961, 0.0342192, 0.0138671, 0.0187612}
![enter image description here][12]
We can now see very different values for both the volatilities and the correlation matrix. To compute the stressed VaR, all we need to do is replace the standard VaR inputs with the stress-period data:
indSTvar =
Table[10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] -> volwin[[i]]}, {i,
size}]
Total[%]
BarChart[indSTvar, ChartStyle -> "Rainbow",
PlotLabel -> Style["Individual stock 1 day Stress VaR", 16],
ChartLegends -> stocks]
> {95849.5, 32478.1, 79605.9, 32259.8, 43645.}
>
> 283838.
![enter image description here][13]
Consequently, the individual stock stressed VaRs are very different from the standard VaRs. For example, the Lloyds stressed VaR is almost 3 times higher than the standard measure.
Portfolio-level stressed VaR:
stressPortVaR = Sqrt[indSTvar.wincorr.indSTvar]
> 218471.
BarChart[{baseportVaR, stressPortVaR}, ChartStyle -> {Blue, Red},
ChartLegends -> {"Std VaR", "Stress VaR"}]
![enter image description here][14]
The portfolio 1-day stressed VaR has **doubled** under the stress scenario and represents a £218k loss.
We can of course choose any other period to see how the VaR behaves under a different set of market parameters.
##Inverse CDF method for stress scenarios - individual case
The calibration of the historical volatility to the Johnson distribution enables us to explore an alternative route to stressed VaR generation. This can be described as the **Inverse CDF** method.
If we assume that the return distribution is normal and that its volatility is calibrated to the Johnson unbounded distribution, we can obtain the stressed VaR metric as a quantile of both distributions.
- Stressed volatility:
We are interested in a quantile value of the volatility that captures the stressed market sentiment. This can easily be achieved through the inverse CDF function:
invJD = Refine[
InverseCDF[
JohnsonDistribution [
"SU", \[Gamma], \[Beta], \[Kappa], \[Nu]], \[Lambda]],
0 < \[Lambda] < 1] // Simplify
![enter image description here][15]
- Combined stressed VaR:
The stressed VaR is then the composition of the VaR formula and the quantiled volatility measure:
stVaRNd = ndinv /. \[Sigma] -> invJD // Simplify
> ndinv
The above formula is the **parametric definition** of the stressed VaR. The function operates on two quantile parameters:
- VaR confidence level \[Alpha]
- Volatility 'stress' factor \[Lambda]
The stressed VaR parametric model will behave as follows:
Plot3D[stVaRNd /. {\[Kappa] -> 0.006583, \[Nu] ->
0.0002419, \[Gamma] -> -4.978, \[Beta] -> 1.07132}, {\[Alpha],
0.5, 0.9}, {\[Lambda], 0.3, 0.75},
ColorFunction -> "TemperatureMap", PlotLegends -> Automatic]
![enter image description here][16]
It is worth noting that the model is quite sensitive to the volatility stress factor \[Lambda].
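To make the composition concrete, here is a minimal sketch using the same illustrative Johnson parameters as the surface plot above (these are assumptions for illustration, not the fitted values for any particular stock): the stressed volatility is taken as the λ-quantile of the Johnson SU distribution and substituted for σ in the normal VaR formula.

```mathematica
(* stressed 1-day VaR on £1m: the \[Lambda]-quantile of the Johnson SU
   volatility density is substituted for \[Sigma] in the normal VaR formula *)
With[{jd = JohnsonDistribution["SU", -4.978, 1.07132, 0.006583, 0.0002419],
  \[Alpha] = 0.99, \[Lambda] = 0.9},
 -Sqrt[2] InverseCDF[jd, \[Lambda]] InverseErfc[2 \[Alpha]] 10^6]
```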
##Probabilistic approach to stress VaR - portfolio context
Apart from stressing the volatility parameter, we need to define a stressed correlation matrix. We propose a simple multiplicative-factor approach, where the original matrix is multiplied by a positive number that increases the correlation coefficients. This is consistent with market practice - in periods of crisis there is a strong tendency for financial assets in the same class to move together. The function below does exactly this:
stressCM[cm_, f_] :=
Table[If[cm[[i, j]] == 1, 1, Min[0.99, cm[[i, j]]*(1 + f)]], {i,
size}, {j, size}]
Applying a 20% increase to the correlations of the standard VaR produces the following correlation matrix:
stressCM[wincorr, 0.2] // MatrixForm
![enter image description here][17]
The matrix values are in line with the stressed matrix from the 2009-2010 historical scenario.
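As a toy check of the scaling behaviour, here is a size-independent variant of the function above (`stressCMToy` is a hypothetical rewrite so the snippet does not depend on the global `size`): off-diagonal entries scale by (1+f), capped at 0.99, while the unit diagonal is preserved.

```mathematica
(* off-diagonals scale by (1+f), capped at 0.99; the diagonal stays 1 *)
stressCMToy[cm_, f_] :=
  Map[If[# == 1, 1, Min[0.99, # (1 + f)]] &, cm, {2}];
stressCMToy[{{1, 0.6}, {0.6, 1}}, 0.2]
(* → {{1, 0.72}, {0.72, 1}} *)
```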
To execute the computation we first generate the individual stressed VaR metrics using the Inverse CDF method:
indstressvar =
Table[10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] ->
Mean[InverseCDF[edist[[i]], {0.5, 0.7, 0.9}]]}, {i, size}]
Total[%]
> {86608., 37365.8, 82986.8, 41072.8, 41535.8}
> 289569.
And then obtain the portfolio VaR in the same way as in the standard case but with the stressed correlation matrix:
portstvar = Sqrt[indstressvar.stressCM[wincorr, 0.2].indstressvar]
> 232010.
The parametric stressed VaR number is similar to what we obtained when we applied the historical scenario method. This shows that the alternative probabilistic stressed VaR approach works well and can easily be applied in a practical setting.
#Extension of the stressed VaR concept
##VaR with Student-T distribution
When we opt for the generalised Student-T distribution for stock returns, the standard VaR can be defined as:
StVaR = Refine[
InverseCDF[StudentTDistribution[0, \[Sigma], \[Nu]], \[Alpha]],
1/2 < \[Alpha] < 1] // Simplify
![enter image description here][18]
The extension to the stressed VaR is trivial:
StVaR /. \[Sigma] -> invJD
![enter image description here][19]
The formula is essentially the Student-T stressed VaR expression.
We need to calibrate the distribution to obtain the degrees-of-freedom value for each stock:
edist2 = Table[
EstimatedDistribution[dret[[i, All, 2]],
StudentTDistribution[0, \[Sigma], \[Nu]]], {i, size}]
> {StudentTDistribution[0, 0.0155062, 1.90416],
StudentTDistribution[0, 0.0093817, 2.82662],
StudentTDistribution[0, 0.0154996, 1.92631],
StudentTDistribution[0, 0.00850414, 2.12496],
StudentTDistribution[0, 0.00925811, 2.35383]}
stparam = List @@@ edist2
> {{0, 0.0155062, 1.90416}, {0, 0.0093817, 2.82662}, {0, 0.0154996,
1.92631}, {0, 0.00850414, 2.12496}, {0, 0.00925811, 2.35383}}
The individual stock VaRs are then:
ststressind =
Table[10^6 StVaR /. {\[Alpha] -> 0.99, \[Nu] ->
stparam[[i, 3]], \[Sigma] ->
Mean[InverseCDF[edist[[i]], {0.5, 0.7, 0.9}]]}, {i, size}]
Total[%]
> {98371.2, 33623.8, 96266.1, 44665.5, 41708.4}
>
> 314635.
and the total portfolio VaR equals:
Sqrt[ststressind.stressCM[wincorr, 0.2].ststressind]
> 253701.
The Student-T stressed VaR is higher than in the normal case, which is in line with expectations, especially when the degrees of freedom are < 4.
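The heavier tail is easy to see by comparing the 99% quantiles of the two return distributions directly (the 1.2% scale parameter here is an illustrative assumption, and note that in `StudentTDistribution` the second argument is a scale, not the standard deviation):

```mathematica
(* the Student-T 99% quantile exceeds the normal one for small \[Nu] *)
With[{\[Sigma] = 0.012, \[Alpha] = 0.99},
 {InverseCDF[NormalDistribution[0, \[Sigma]], \[Alpha]],
  InverseCDF[StudentTDistribution[0, \[Sigma], 2.5], \[Alpha]]}]
```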
#Conclusion
Parametric stressed VaR represents an elegant and practical extension to existing stress testing methods. The main features of this approach are its ease of use and simplicity of application once the calibration dataset is available.
The parametric method can be applied to other probability distributions if one wants to test the stressed VaR under different distributional assumptions; the Student-T approach is presented explicitly to demonstrate this. Extension to other distributions is straightforward, since the stressed volatility obtained via the inverse CDF can be applied in an arbitrary setting.
[1]: /c/portal/getImageAttachment?filename=StressVarImange.jpg&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.png&userId=95400
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.png&userId=95400
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=95400
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&userId=95400
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.png&userId=95400
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.png&userId=95400
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.png&userId=95400
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9.png&userId=95400
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.png&userId=95400
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.png&userId=95400
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12.png&userId=95400
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=13.png&userId=95400
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=14.png&userId=95400
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15.png&userId=95400
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16.png&userId=95400
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=17.png&userId=95400
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=18.png&userId=95400
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19.png&userId=95400Igor Hlivka2015-03-13T18:06:46Z[✓] Understand sequence behaviour?
http://community.wolfram.com/groups/-/m/t/1036760
Mathematica 11.0.1.0, Windows 64 bit
Can somebody explain why, in the first example, `Sequence` behaves as expected, whereas in the second we get an error?
First example:
m = {{1, 2}, {3, 4}};
m[[All, 2]] = Sequence[];
m[[2]] = Sequence[];
Second example:
m = {{1, 2}, {3, 4}};
m[[2]] = Sequence[];
m[[All, 2]] = Sequence[];Daniel Huber2017-03-21T09:47:14ZUse SPI protocol with Arduino & ADC?
http://community.wolfram.com/groups/-/m/t/1029674
Hi, I'm interested in how to use the SPI protocol.
The Mathematica documentation suggests that the SPI library on the Arduino can somehow be used.
I have an Arduino Uno, and I want to connect an ADC such as the MAX6675 via the SPI protocol.
The MAX6675 is an ADC for temperature measurement that uses the SPI protocol to share its readings, and there are several sketches available to gather information from it.
I cannot work out how to do this from Mathematica.
I am interested in whether someone has sample code to work with SPI protocol.Bohdan Romanchuk2017-03-12T23:08:59Z[GIF] Elaborating on Arrival's Alien Language, Part I., II. & III.
http://community.wolfram.com/groups/-/m/t/1034626
I recently watched "Arrival", and thought that some of the dialogue sounded Wolfram-esque. Later, I saw the following blog post:
[Quick, How Might the Alien Spacecraft Work?][1]
Along with many others, I enjoyed the movie. The underlying artistic concept for the alien language reminded me of decade-old memories of a book by Stephen Addiss, [Art of Zen][2]. Asian-influenced symbolism is an interesting place to start building a sci-fi concept, even for western audiences.
I also found Cristopher Wolfram's broadcast and the associated files:
[Youtube Broadcast][3]
[Github Files ( with image files ) ][4]
Thanks for sharing! More science fiction, yes!
I think the constraint of circular logograms could be loosened. This leads to interesting connections with theory of functions, which I think the Aliens would probably know about.
The following code takes an alien logogram as input and outputs a deformation according to a do-it-yourself formulation of the pendulum elliptic functions:
![Human Animation][5]
## $m=2$ Inversion Coefficients ##
MultiFactorial[n_, nDim_] := Times[n, If[n - nDim > 1, MultiFactorial[n - nDim, nDim], 1]]
GeneralT[n_, m_] := Table[(-m)^(-j) MultiFactorial[i + m (j - 1) + 1, m]/ MultiFactorial[i + 1, m], {i, 1, n}, {j, 1, i}]
a[n_] := With[{gt = GeneralT[2 n, 2]}, gt[[2 #, Range[#]]] & /@ Range[n] ]
## Pendulum Values : $2(1-\cos(x))$ Expansion Coefficients ##
c[n_ /; OddQ[n]] := c[n] = 0;
c[n_ /; EvenQ[n]] := c[n] = 2 (n!) (-2)^(n/2)/(n + 2)!;
## Partial Bell Polynomials ##
Note: These polynomials are essentially the same as the built-in "**BellY**" (hilarious naming convention), but optimized via recursion. See the timing tests below.
B2[0, 0] = 1;
B2[n_ /; n > 0, 0] := 0;
B2[0, k_ /; k > 0] := 0;
B2[n_ /; n > 0, k_ /; k > 0] := B2[n, k] = Total[
Binomial[n - 1, # - 1] c[#] B2[n - #, k - 1] & /@
Range[1, n - k + 1] ];
## Function Construction ##
BasisT[n_] := Table[B2[i, j]/(i!) Q^(i + 2 j), {i, 2, 2 n, 2}, {j, 1, i/2}]
PhaseSpaceExpansion[n_] := Times[Sqrt[2 \[Alpha]], 1 + Dot[MapThread[Dot, {BasisT[n], a[n]}], (2 \[Alpha])^Range[n]]];
AbsoluteTiming[CES50 = PhaseSpaceExpansion[50];] (* faster than 2(s) *)
Fast50 = Compile[{{\[Alpha], _Real}, {Q, _Real}}, Evaluate@CES50];
## Image Processing ##
Note: This method is a hack from ".jpg" to a sort-of vector drawing. I haven't tested the V11.1 vectorization functionality, but it seems like this could be a means to process all jpgs and output a file of vector polygons. Anyone?
LogogramData = Import["Human1.jpg"];
Logogram01 = ImageData[ColorNegate@Binarize[LogogramData, .9]];
ArrayPlot@Logogram01;
Positions1 =
Position[Logogram01[[5 Range[3300/5], 5 Range[3300/5]]], 1];
Graphics[{Disk[#, 1.5] & /@ Positions1, Red,
Disk[{3300/5/2, 3300/5/2}, 10]}];
onePosCentered =
N[With[{cent = {3300/5/2, 3300/5/2} }, # - cent & /@ Positions1]];
radii = Norm /@ onePosCentered;
maxR = Max@radii;
normRadii = radii/maxR;
angles = ArcTan[#[[2]], #[[1]]] & /@ onePosCentered;
Qs = Cos /@ angles;
## Constructing and Printing Image Frames ##
AlienWavefunction[R_, pixel_, normRad_, Qs_, angles_] := Module[{
deformedRadii = MapThread[Fast50, {R normRad, Qs}],
deformedVectors = Map[N[{Cos[#], Sin[#]}] &, angles],
deformedCoords
},
deformedCoords =
MapThread[Times, {deformedRadii, deformedVectors}];
Show[ PolarPlot[ Evaluate[
CES50 /. {Q -> Cos[\[Phi]], \[Alpha] -> #/10} & /@
Range[9]], {\[Phi], 0, 2 Pi}, Axes -> False,
PlotStyle -> Gray],
Graphics[Disk[#, pixel] & /@ deformedCoords], ImageSize -> 500]]
AbsoluteTiming[ OneFrame =
AlienWavefunction[1, (1 + 1)* 1.5/maxR, normRadii, Qs, angles]
](* about 2.5 (s)*)
![Alien Pendulum][6]
## Validation and Timing ##
In this code, we're using the magic algorithm to get up to about $100$ orders in the half energy, $50$ in the energy. I did prove that $m=1$ is equivalent to other published forms, but haven't found anything in the literature about $m=2$, and think that the proof will take more time, effort, and insight (?). For applications, we just race ahead without worrying too much, but do check against standard, known expansions:
EK50 = Normal@ Series[D[ Expand[CES50^2/2] /. Q^n_ :> (1/2)^n Binomial[n, n/2], \[Alpha]], {\[Alpha], 0, 50}];
SameQ[Normal@ Series[(2/Pi) EllipticK[\[Alpha]], {\[Alpha], 0, 50}], EK50]
Plot[{(2/Pi) EllipticK[\[Alpha]], EK50}, {\[Alpha], .9, 1}, ImageSize -> 500]
Out[]:= True
![Approximation Validity][7]
This plot gives an idea of approximation validity via the time integral over $2\pi$ radians in phase space. Essentially, even the time converges up to, say, $\alpha = 0.92$. Most of the divergence is tied up in the critical point, which is difficult to notice in the phase space drawings above.
Also compare the time of function evaluation:
tDIY = Mean[ AbsoluteTiming[Fast50[.9, RandomReal[{0, 1}]] ][[1]] & /@ Range[10000]];
tMma = Mean[AbsoluteTiming[JacobiSN[.9, RandomReal[{0, 1}]] ][[1]] & /@ Range[10000]];
tMma/tDIY
In the region of sufficient convergence, the Mathematica function **JacobiSN** is almost 20 times slower. The CES radius also requires a function call to **JacobiCN**, so an output-equivalent **AlienWavefunction** algorithm using built-in Mathematica functions would probably take at least 20 times as long to produce. When computing hundreds of images this is a noticeable slowdown - something to avoid!
Also compare time to evaluate the functional basis via the Bell Polynomials:
BasisT2[n_] := Table[BellY[i, j, c /@ Range[2 n]]/(i!) Q^(i + 2 j), {i, 2, 2 n, 2}, {j, 1, i/2}];
SameQ[BasisT2[20], BasisT[20]]
t1 = AbsoluteTiming[BasisT[#];][[1]] & /@ Range[100];
t2 = AbsoluteTiming[BasisT2[#];][[1]] & /@ Range[25];
ListLinePlot[{t1, t2}, ImageSize -> 500]
![Series Inverse][8]
The graph shows quite clearly that careful evaluation via the recursion relations changes the complexity of the inversion algorithm to polynomial time, $O(n^2)$, in one special example where the forward series expansion coefficients have known numeric values.
## Conclusion ##
We show proof-of-concept that alien logograms admit deformations that preserve the cycle topology. Furthermore we provide an example calculation where the "human" logogram couples to a surface. Deformation corresponds to scale transformation of the logogram along the surface. Each deformation associates with an energy.
Invoking the pendulum analogy gives the energy a physical meaning in terms of gravity, but we are not limited to classical examples alone. The idea extends to arbitrary surfaces in two, three or four dimensions, as long as the surfaces have local extrema. Around the extrema there will exist cycle contours, which we can inscribe with the alien logograms. This procedure leads readily to large-form compositions, especially if the surface has many extrema. Beyond Fourier methods, we might also apply spherical harmonics and hyperspherical harmonics to get around the limitation of planarity.
The missing proof... Maybe later. LOL! ~ ~ ~ ~ Brad
And in the Fanfiction Voice:
Physicist : "It should be no surprise that heptapod speech mechanism involves an arbitrary deformation of the spacetime manifold."
Linguist : "Space-traveling aliens, yes, of course they know math and physics, but Buddhist symbology, where'd they learn that?"
[1]: http://blog.stephenwolfram.com/2016/11/quick-how-might-the-alien-spacecraft-work/
[2]: https://books.google.com/books/about/Art_of_Zen.html?id=4jGEQgAACAAJ
[3]: https://www.youtube.com/watch?v=8N6HT8hzUCA&t=4992s
[4]: https://github.com/WolframResearch/Arrival-Movie-Live-Coding
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Deformation.gif&userId=234448
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AlienPendulum.png&userId=234448
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=EllipticK.png&userId=234448
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=BellPolynomial.png&userId=234448Brad Klee2017-03-18T20:23:59ZMathematica 11.1 Is Out Now!
http://community.wolfram.com/groups/-/m/t/1032274
Mathematica 11.1 is out now! With over 100 new functions, the latest version significantly expands Mathematica's already state-of-the-art capabilities in machine learning, neural networks, audio processing, robust descriptive statistics and more. Longtime users will also notice a major redesign to the appearance of our documentation.
**Machine Learning**
- New machine learning functionality including [SequencePredict][1], [ActiveClassification][2] and [ActivePrediction][3]
- [FeatureSpacePlot][4] and [FeatureNearest][5] to view and find objects in a feature space for not only numerical and textual data but also sounds, images and any combination thereof
**Neural Networks**
- Neural network functionality has been significantly enhanced, adding seamless support for variable-length sequences and recurrent networks, over 20 new layer types, powerful new network combinators and a growing library of pre-trained neural nets to use as building blocks
- Obtain a trained version of existing neural nets or specific properties of a given net with [NetModel][6]
**Audio Processing**
- Record audio from various connected devices through an interactive user interface using [AudioCapture][7]
- Integrated audio analysis functionality that computes loudness or plots the cepstrum of a recording
**More Cool New Functionality**
- [CurrentImage][8] has been significantly rewritten for faster dynamic image capturing
- New functions useful in microscope image processing: [BrightnessEqualize][9] and [Image3DProjection][10]
- The addition of new region properties and region constructors greatly increases geometric computation functionality: [SierpinskiMesh][11], [MengerMesh][12], [CantorMesh][13], [HilbertCurve][14], [PeanoCurve][15] and more
- Robust descriptive statistics functionality with applications in location, dispersion and shape characterization
- New functions for performing web searches ([WebSearch][16], [WebImageSearch][17]) and translating text between dozens of languages ([TextTranslation][18])
- Create customizable visualizations of geographic properties that can wrap entities with [GeoBubbleChart][19]
- Completely redesigned documentation with responsive layout
[1]: http://reference.wolfram.com/language/ref/SequencePredict.html
[2]: http://reference.wolfram.com/language/ref/ActiveClassification.html
[3]: http://reference.wolfram.com/language/ref/ActivePrediction.html
[4]: http://reference.wolfram.com/language/ref/FeatureSpacePlot.html
[5]: http://reference.wolfram.com/language/ref/FeatureNearest.html
[6]: http://reference.wolfram.com/language/ref/NetModel.html
[7]: http://reference.wolfram.com/language/ref/AudioCapture.html
[8]: http://reference.wolfram.com/language/ref/CurrentImage.html
[9]: http://reference.wolfram.com/language/ref/BrightnessEqualize.html
[10]: http://reference.wolfram.com/language/ref/Image3DProjection.html
[11]: http://reference.wolfram.com/language/ref/SierpinskiMesh.html
[12]: http://reference.wolfram.com/language/ref/MengerMesh.html
[13]: http://reference.wolfram.com/language/ref/CantorMesh.html
[14]: http://reference.wolfram.com/language/ref/HilbertCurve.html
[15]: http://reference.wolfram.com/language/ref/PeanoCurve.html
[16]: http://reference.wolfram.com/language/ref/WebSearch.html
[17]: http://reference.wolfram.com/language/ref/WebImageSearch.html
[18]: http://reference.wolfram.com/language/ref/TextTranslation.html
[19]: http://reference.wolfram.com/language/ref/GeoBubbleChart.htmlJohn Moore2017-03-16T18:29:37Z[✓] Use SetDelayed for a selected set of indices?
http://community.wolfram.com/groups/-/m/t/1036976
I'm trying to define some delayed values and let Mathematica do the simplification for me during the course of a computation. Here is a minimal working example:
x[i_] := y[i];
Mx = Table[x[i] + 2, {i, 1, 10}];
My = Table[y[i], {i, 1, 10}];
difference = Mx - My
The output is simply `{2, 2, 2, 2, 2, 2, 2, 2, 2, 2}`. What I'm eager to know is whether there is a way to define the delayed value **only** for x[i] with 2<=i<=10 (i.e. a selected subset of the indices), so that the final result would be something like `{x[1]-y[1]+2, 2, 2, 2, 2, 2, 2, 2, 2, 2}`. That is, every entry of the result is simplified using the delayed values except the first one. Thanks!!K X2017-03-21T17:20:56ZDownloading StarData that meets a certain qualification?
http://community.wolfram.com/groups/-/m/t/1036912
In this sample code, I first download subsets of the names of stars from multiple StarData classes.
A Union is used to combine all the target star names in a single list.
Using a parallel request across four CPUs, a certain set of metrics are retrieved for each star.
Not all stars have a complete set of parameter data.
To simplify plotting, Missing["NotAvailable"] entries are set to zero.
The same data is then normalized in relation to the sun, and units are removed.
A Cases function is used to remove all stars that have zero for a mass value.
This works, but in the real version of the code, all stars are first downloaded once, by StarData class, and saved separately as exports.
Then a second program imports each saved class export and graphs across classes and particular metrics.
Is there a way of just downloading StarData entries, where the Mass parameter is not equal to Missing["NotAvailable"] ?
Something like:
Take[StarData[EntityClass["Star", "ClassAStar"]], {1, 100}] where "Mass" != "Missing["NotAvailable"]"
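As far as I know there is no server-side predicate for this, but a minimal sketch of one possible workaround is to download a class once and then filter with `Select` and `MissingQ` before doing the expensive multi-property retrieval (the variable names here are made up):

```mathematica
(* hypothetical workaround: download one class, then drop entities whose Mass is missing *)
classA = Take[StarData[EntityClass["Star", "ClassAStar"]], {1, 100}];
classAWithMass = Select[classA, ! MissingQ[StarData[#, "Mass"]] &];
```

This still downloads the Mass values once per entity, but the later bulk retrieval then runs only over stars known to have a mass.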
Working sample code:
sunMass = StarData["Sun", "Mass"];
sunLuminosity = StarData["Sun", "Luminosity"];
sunTemperature = StarData["Sun", "EffectiveTemperature"];
sunDiameter = StarData["Sun", "Diameter"];
sunGravity = StarData["Sun", "Gravity"];
sunDensity = StarData["Sun", "Density"];
sunVolume = StarData["Sun", "Volume"];
listDataA =
Take[StarData[EntityClass["Star", "ClassAStar"]], {1, 100}];
listDataB =
Take[StarData[EntityClass["Star", "ClassBStar"]], {1, 100}];
listDataF =
Take[StarData[EntityClass["Star", "ClassFStar"]], {1, 100}];
listDataG =
Take[StarData[EntityClass["Star", "ClassGStar"]], {1, 100}];
listDataK =
Take[StarData[EntityClass["Star", "ClassKStar"]], {1, 100}];
listDataM =
Take[StarData[EntityClass["Star", "ClassMStar"]], {1, 100}];
listDataO =
Take[StarData[EntityClass["Star", "ClassOStar"]], {1, 100}];
listDataSuper =
Take[StarData[EntityClass["Star", "Supergiant"]], {1, 100}];
listDataGiant =
Take[StarData[EntityClass["Star", "NormalGiant"]], {1, 100}];
listDataSubgiant =
Take[StarData[EntityClass["Star", "Subgiant"]], {1, 100}];
listData =
Union[listDataA, listDataB, listDataF, listDataG, listDataK,
listDataM, listDataO, listDataSuper, listDataGiant,
listDataSubgiant];
CloseKernels[]; LaunchKernels[4]
AbsoluteTiming[
Length[
data =
Transpose[
ParallelMap[
StarData[listData, #] &,
{"Name", "Metallicity", "SpectralClass", "BVColorIndex",
"EffectiveTemperature",
"Mass", "Luminosity", "AbsoluteMagnitude", "Gravity", "Density",
"Diameter",
"DistanceFromEarth", "MainSequenceLifetime", "Parallax",
"RadialVelocity", "Radius", "StarEndState", "StarType",
"SurfaceArea",
"VariablePeriod", "Volume"}]]]]
zeroData = data /. {Missing["NotAvailable"] -> 0};
noUnitsData =
zeroData /. {c1_, c2_, c3_, c4_, c5_, c6_, c7_, c8_, c9_, c10_,
c11_, c12_, c13_, c14_, c15_, c16_, c17_, c18_, c19_, c20_,
c21_} -> {c1, c2, c3, c4, QuantityMagnitude[c5],
QuantityMagnitude[c6/sunMass],
QuantityMagnitude[c7/sunLuminosity], c8,
QuantityMagnitude[c9/sunGravity], QuantityMagnitude[c10],
QuantityMagnitude[c11/sunDiameter], QuantityMagnitude[c12],
QuantityMagnitude[c13], QuantityMagnitude[c14]
, QuantityMagnitude[c15], QuantityMagnitude[c16], c17, c18,
QuantityMagnitude[c19], QuantityMagnitude[c20],
QuantityMagnitude[c21/sunVolume]};
Length[unionAll =
Cases[
Sort[noUnitsData, #1[[6]] > #2[[6]] &],
{_, _, _, _, _, x_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _} /;
x != 0]
]
topTemp = Take[
Sort[unionAll, #1[[5]] > #2[[5]] &],
50];
lowTemp = Take[
Cases[
Sort[unionAll, #1[[5]] < #2[[5]] &],
{_, _, _, _, x_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _} /;
x != 0]
, 50];
topMass = Take[
Sort[unionAll, #1[[6]] > #2[[6]] &]
, 50];
lowMass = Take[
Cases[
Sort[unionAll, #1[[6]] < #2[[6]] &],
{_, _, _, _, _, x_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _} /;
x != 0]
, 50];
topLuminosity = Take[
Sort[unionAll, #1[[7]] > #2[[7]] &]
, 50];
lowLuminosity = Take[
Cases[
Sort[unionAll, #1[[7]] < #2[[7]] &],
{_, _, _, _, _, _, x_, _, _, _, _, _, _, _, _, _, _, _, _, _, _} /;
x != 0]
, 50];
Manipulate[
plotType[
With[{c1 = xAxis, c2 = yAxis, tooltip1 = tooltip},
Tooltip[{(xAxisReverse )* Slot[c1], (yAxisReverse) *
Slot[c2]}, {Slot[tooltip1], Slot[c1], Slot[c2]}] &] @@@
category,
PlotRange -> Automatic, ImageSize -> 800,
AxesLabel -> {xLabel, yLabel}, AxesOrigin -> {xOrigin, yOrigin},
PlotStyle -> Red],
{{xAxisReverse, 1}, {1, -1}},
{{yAxisReverse, 1}, {1, -1}},
{{xOrigin,
0}, {-100000, -60000, -50000, -40000, -30000, -20000, -10000, \
-5000, -1000, -100, -50, -40, -30, -25, -20, -15, -10, -9, -8, -7, \
-6, -5, -4, -3, -2, -1, -.0001, -.001, -.01, -.1, -.2, -.3, -.4, -.5,
0, .5, 1, 1.5, 2, 2.5, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 40,
50}},
{{yOrigin,
0}, {-30, -25, -20, -15, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, \
-.0001, -.001, -.01, -.1, -.2, -.3, -.4, -.5,
0, .1, .001, .0001, .00001, .5, 1, 1.5, 2, 2.5, 3, 4, 5, 6, 7, 8,
9, 10, 15, 20, 25, 30, 40, 50, 100, 1000, 5000, 10000, 20000,
30000, 40000, 50000, 60000}},
{{xAxis, 6}, {2 -> "Metallicity", 4 -> "BVColorIndex",
5 -> "Temperature", 6 -> "Mass", 7 -> "Luminosity",
8 -> "AbsoluteMagnitude", 9 -> "Gravity", 10 -> "Density",
11 -> "Diameter", 12 -> "DistanceFromEarth",
13 -> "MainSequenceLifetime", 14 -> "Parallax",
15 -> "RadialVelocity", 16 -> "Radius", 19 -> "SurfaceArea",
20 -> "VariablePeriod", 21 -> "Volume"}},
{{xLabel, "Mass"},
{"Metallicity", "BVColorIndex", "Temperature",
"Mass", "Luminosity", "AbsoluteMagnitude", "Gravity", "Density",
"Diameter", "DistanceFromEarth", "MainSequenceLifetime",
"Parallax", "RadialVelocity",
"Radius", "SurfaceArea", "VariablePeriod", "Volume"}
},
{{yAxis, 7}, {2 -> "Metallicity", 4 -> "BVColorIndex",
5 -> "Temperature", 6 -> "Mass", 7 -> "Luminosity",
8 -> "AbsoluteMagnitude", 9 -> "Gravity", 10 -> "Density",
11 -> "Diameter", 12 -> "DistanceFromEarth",
13 -> "MainSequenceLifetime", 14 -> "Parallax",
15 -> "RadialVelocity", 16 -> "Radius", 19 -> "SurfaceArea",
20 -> "VariablePeriod", 21 -> "Volume"}},
{{yLabel, "Luminosity"}, {"Metallicity", "BVColorIndex",
"Temperature",
"Mass", "Luminosity", "AbsoluteMagnitude", "Gravity", "Density",
"Diameter", "DistanceFromEarth", "MainSequenceLifetime",
"Parallax", "RadialVelocity",
"Radius", "SurfaceArea", "VariablePeriod", "Volume"}
},
{{tooltip, 3}, {1 -> "Name", 3 -> "SpectralClass",
2 -> "Metallicity", 4 -> "BVColorIndex", 5 -> "Temperature",
6 -> "Mass", 7 -> "Luminosity", 8 -> "AbsoluteMagnitude",
9 -> "Gravity", 10 -> "Density", 11 -> "Diameter",
12 -> "DistanceFromEarth", 13 -> "MainSequenceLifetime",
14 -> "Parallax", 15 -> "RadialVelocity", 16 -> "Radius",
19 -> "SurfaceArea", 20 -> "VariablePeriod", 21 -> "Volume"}},
{{plotType, ListLogLogPlot}, {ListPlot, ListLogPlot, ListLogLogPlot}},
{{category, unionAll}, {
topTemp -> "topTemp",
lowTemp -> "lowTemp",
topMass -> "topMass",
lowMass -> "lowMass",
topLuminosity -> "topLuminosity",
lowLuminosity -> "lowLuminosity",
unionAll -> "unionAll"
}}]
ListPicker[Dynamic[a],
{
topTemp -> "topTemp",
lowTemp -> "lowTemp",
topMass -> "topMass",
lowMass -> "lowMass",
topLuminosity -> "topLuminosity",
lowLuminosity -> "lowLuminosity",
unionAll -> "unionAll"
}
] ;Joseph Karpinski2017-03-21T15:38:51ZUse FindShortestTour to create dog walking route?
http://community.wolfram.com/groups/-/m/t/1036964
Hi:
I was wondering if anyone has a good .nb I could use to find the most efficient route to walk a dog in any city with stops at a pet supply store, a coffee shop, and a park from a starting point (home) that is returned to. Or if you have a .nb with an example from a specific city, that would be useful. Any help is appreciated!Swede White2017-03-21T16:51:03Z[GIF] Drug overdose trends in USA counties 1999 - 2014
http://community.wolfram.com/groups/-/m/t/837574
The data [Drug Poisoning Mortality: United States, 1999–2014][1] are published by USA government. In a few recent blogs ([1][2], [2][3], [3][4]) **static visualizations** of data were performed. **Here we show how to animate maps of geographical drug overdose spread in USA**. Below you can see 4 images, each reflecting upon ***Age-adjusted death rates for drug poisoning per 100,000 population by county and year***:
1. First static frame 1999
2. Last static frame 2014
3. Animated .GIF of the whole period with 1 frame per year
4. Range of rates versus time, USA average
Quoting NPR news [Obama Asks Congress For More Money To Fight Opioid Drug Abuse][5]:
> Every day in America more than 50 people die from an overdose of prescription pain medication. Some people who start out abusing pain pills later turn to heroin, which claims another 29 lives each day.
----------
**1999: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][6]
![enter image description here][7]
----------
**2014: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][8]
![enter image description here][9]
----------
**1999 - 2014 Animation: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][10]
![enter image description here][11]
----------
**Range of rates versus time: Age-adjusted death rates for drug poisoning per 100,000 for USA average over counties**
![enter image description here][12]
Getting the data
----------------
We can download data in .CSV format from [CDC web site][13]. I keep data file in the same as the notebook directory to shorten file-path strings.
SetDirectory[NotebookDirectory[]]
raw = SemanticImport["ops.csv"]
![enter image description here][14]
Making "interpreted" dataset
----------------------------
In [Wolfram Language][15] (WL), many kinds of imported data can be interpreted against built-in knowledge. For example, the USA counties can be interpreted as entities:
![enter image description here][16]
But I did not use `SemanticImport` to interpret automatically on import, because I would like to do this efficiently. The table has 50247 entries
Normal[raw[All, "County"]] // Length
> 50247
while there are only 3141 actual counties listed:
Normal[raw[All, "County"]] // Union // Length
> 3141
So instead of making 50247 calls to `Interpreter`, we will make just 3141, and then use the efficient `Dispatch` to distribute the replacement rules over all 50247 entries. Building the `Dispatch` took only about 100 seconds
countyRULEs = Dispatch[
Thread[# -> Interpreter["USCounty"][#]] &@
Union[Normal[raw[All, "County"]]]]; // AbsoluteTiming
> {108.124, Null}
And almost no time interpreting the dataset:
data = raw /. countyRULEs; // AbsoluteTiming
data
> {0.441731, Null}
![enter image description here][17]
Bounds of death-rates for future rescaling
------------------------------------------
Note the `StringReplace` trick used before applying `ToExpression`, here and throughout the rest of the post:
MinMax[ToExpression[StringReplace[Normal[
data[All, "Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2]
> {1, 20}
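To see what the trick does: a range like `"2-3.9"` becomes `"2+3.9"`, whose value halved is the midpoint of the range, and `">20"` becomes `"2*20"`, whose value halved is just the cap:

```mathematica
(* range strings become midpoints; ">" entries become the cutoff value itself *)
ToExpression[StringReplace[{"2-3.9", ">20"}, {"-" -> "+", ">" -> "2*"}]]/2
(* {2.95, 20} *)
```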
Testing color scheme
--------------------
Color schemes are important: they should blend properly with the native colors of maps and also express the data well. These are some tests with [Color Schemes][18] available in the Wolfram Language.
tmp = GeoNearest["City",
Entity["City", {"Atlanta", "Georgia", "UnitedStates"}], {All, Quantity[50, "Kilometers"]}];
Multicolumn[Table[
GeoRegionValuePlot[tmp -> "PopulationDensity", PlotLegends -> False,
ColorFunction -> (ColorData[{clmap, "Reverse"}][#] &), ImageSize -> 400]
, {clmap, {"CherryTones", "SolarColors", "SunsetColors",
"RustTones", "WatermelonColors", "Rainbow", "RoseColors",
"ThermometerColors", "BrownCyanTones"}}], 3]
![enter image description here][19]
Year 1999: a specific year GiS plot
---------
GeoRegionValuePlot[
Thread[Normal[data[Select[#Year == 1999 &], "County"]] ->
ToExpression[StringReplace[Normal[data[Select[#Year == 1999 &]][All,
"Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2],
GeoRange -> {{24, 50}, {-125, -66}},
GeoProjection -> "Mercator",
ColorFunctionScaling -> False,
ColorFunction -> (ColorData[{"CherryTones", "Reverse"}][
Rescale[#, {1, 20}]] &),
PlotLegends -> False,
ImageSize -> 1000] // Rasterize
Making animation
----------------
frames = ParallelTable[
GeoRegionValuePlot[
Thread[
Normal[data[Select[#Year == year &], "County"]] ->
ToExpression[StringReplace[Normal[data[Select[#Year == year &],
"Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2],
GeoRange -> {{24, 50}, {-125, -66}},
GeoProjection -> "Mercator",
ColorFunctionScaling -> False,
ColorFunction -> (ColorData[{"CherryTones", "Reverse"}][
Rescale[#, {1, 20}]] &),
PlotLegends -> False,
ImageSize -> 800],
{year, Range[1999, 2014]}];
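The resulting frames can then be written out as an animated GIF; a minimal sketch (the file name and the one-second frame duration are arbitrary choices, not from the original post):

```mathematica
(* assemble the per-year frames into an animated GIF, one frame per year *)
Export["overdose.gif", frames, "DisplayDurations" -> 1]
```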
Making legend
-------------
Panel@Grid[Transpose[{#, ColorData[{"CherryTones", "Reverse"}][Rescale[#, {1, 20}]]} & /@Range[1, 20]]]
Growth of death rates ranges vs time
------------------------------------
bandGrowth = Transpose[Table[N[Mean[ToExpression[
StringReplace[Normal[data[Select[#Year == y &]][All,
"Estimated Age-adjusted Death Rate, 11 Categories (in \
ranges)"]], {"-" -> "~List~", ">" -> "{#,#}&@"}]]]], {y, Range[1999, 2014]}]]
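Here the second `StringReplace` trick parses each range string into a `{low, high}` pair: `"-"` becomes the infix operator `~List~`, and `">"` wraps the number with `{#, #}&`, duplicating it. For example:

```mathematica
(* "2-3.9" -> 2~List~3.9 -> {2, 3.9};  ">20" -> {#, #}&@20 -> {20, 20} *)
ToExpression[StringReplace[{"2-3.9", ">20"}, {"-" -> "~List~", ">" -> "{#,#}&@"}]]
(* {{2, 3.9}, {20, 20}} *)
```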
BarChart[{#[[1]], #[[2]] - #[[1]]} & /@ Transpose[bandGrowth],
PlotTheme -> "Marketing", ChartLayout -> "Stacked",
ChartLabels -> {Range[1999, 2014], None}, ImageSize -> 850,
AspectRatio -> 1/3, ChartStyle -> {Yellow, Red}]
Another color scheme sample
------------------
In this color scheme, where low values are dark, a few white spots are easier to see. Those are the very few counties where data are missing.
----------
1999
----
![enter image description here][20]
----------
2014
----
![enter image description here][21]
[1]: https://data.cdc.gov/NCHS/NCHS-Drug-Poisoning-Mortality-County-Trends-United/pbkm-d27e?category=NCHS&view_name=NCHS-Drug-Poisoning-Mortality-County-Trends-United
[2]: http://blogs.cdc.gov/nchs-data-visualization/drug-poisoning-mortality/
[3]: https://evergreen.data.socrata.com/stories/s/b5gk-7v6a/
[4]: http://www.nytimes.com/interactive/2016/01/07/us/drug-overdose-deaths-in-the-us.html
[5]: http://www.npr.org/sections/thetwo-way/2016/02/02/465348441/obama-asks-congress-for-more-money-to-fight-opioid-drug-abuse
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10097figure1.png&userId=11733
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6105figure2.png&userId=11733
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif-41464821.gif&userId=11733
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=opoid.png&userId=11733
[13]: https://data.cdc.gov/NCHS/NCHS-Drug-Poisoning-Mortality-County-Trends-United/pbkm-d27e?category=NCHS&view_name=NCHS-Drug-Poisoning-Mortality-County-Trends-United
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdfregdavscr345y5t.png&userId=11733
[15]: https://www.wolfram.com/language/
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2016-04-11_05-01-09.png&userId=11733
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2016-04-11_05-05-57.png&userId=11733
[18]: http://reference.wolfram.com/language/guide/ColorSchemes.html
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdf45yhfhgsdy5uejtyhsgdf.png&userId=11733
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdafsfqegr.png&userId=11733
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dfghu6r7euyrtea.png&userId=11733Vitaliy Kaurov2016-04-11T10:21:01ZSolve an equation with a Piecewise function?
http://community.wolfram.com/groups/-/m/t/1035180
Hello, I'm new to Mathematica, though I know a little about it. I'm trying to find a pair of values for a Piecewise function changing with x, but I keep running into a problem with Solve producing a lot of errors. Can you tell me where my mistake is? The whole thing is in the attached file; basically it's:
F(x)=Piecewise[...](x,wl,wp)
Solve[MaxValue[F,x]/MinValue[F,x]==1.15,{wl,wp}] Or
CylindricalDecomposition[1.1<=MaxValue[F,x]/MinValue[F,x]<=1.2,{wl,wp}]
So I'm trying to get wl and wp values for optimization, but certainly I'm doing something wrong, please help.Pio Tyldens2017-03-19T15:24:41Z[✓] Manipulate of a 2-D plot?
http://community.wolfram.com/groups/-/m/t/1033865
I am trying to create a plot of a logistic function with parameters AHAT and MDIF. The values of these parameters depend upon
underlying parameters A and DIF and an underlying ability distribution with a covariance omega and mean vector mu. I have to go through several steps to determine AHAT and MDIF. I would like to create a plot where I can manipulate sigma, mu1 and mu2 to see how it affects the
logistic curve. I can't figure out how to create the plot. Leaving the variables sigma, mu1 and mu2 in the expressions on the outside
will not work. I have attached an MS Word document with the code. Does anyone have suggestions?
A={{1.2,.4},{.8,.5},{.9,1.0},{1.3,.5},{.6,.9}};u={{μ1,μ2}};
DIF={{.5},{.7},{1.0},{-1.},{2.0}};
Ω={{σ,1},{1,σ}}
L=CholeskyDecomposition[Ω]
W=Eigenvectors[Transpose[L].Transpose[A].A.L]
AM=List[A[[1]]]
W1M=List[W[[1]]]
W2M=List[W[[2]]]
AHAT=AM.Transpose[W1M]/Sqrt[2.89+AM.Transpose[W2M].W2M.Transpose[AM]]
MDIF=(DIF[[1]]-AM.Transpose[u])/AM.Transpose[W1M]
Manipulate[Plot[{(1/(1+Exp[-1.7*AHAT*(θ-MDIF)]))},{θ,-3,3},AxesLabel->{θ,p}],{σ,1,3},{μ1,-2,2},{μ2,-2,2}]Terry Ackerman2017-03-17T19:28:15ZExport a 3D graphics in dxf or stl not converted in triangles?
http://community.wolfram.com/groups/-/m/t/1036461
I need to export in dxf or stl format some 3D geometrical constructions I made in Mathematica. But when the exported files are examined in a CAD viewer I see only figures formed out of triangles. Sometimes some triangles appear in unexpected places. Here is an example.
tube = {CapForm[None], Tube[{{0, 0, 0}, {0, 0, 10}}, 5]};
cube = Cuboid[{10, 10, 0}, {20, 20, 10}];
G3D = Graphics3D[{tube, cube}];
Export["G3D.dxf", G3D];
The circle of the tube is decomposed into a polygon and the tube's surface is decomposed into rectangles and the rectangles into triangles. The cube looks better but I do not want it to be made out of triangles but out of rectangles.Petre Logofatu2017-03-21T11:46:39ZSubscript more than one character? a_(2)=3a_{2-1}
http://community.wolfram.com/groups/-/m/t/1036556
How do you get Wolfram to read the 2-1 completely subscripted, i.e. as a "base": a_(2)=3a_{2-1}? It only sends the 2 to the subscript of a at the end of the equation, never the -1. Please help!Rudy Ram2017-03-21T01:13:06ZMake blinking work for all percentages of the screens?
http://community.wolfram.com/groups/-/m/t/1036511
I'm building interactive material for my classes
I want it to include blinking effects, aids, questions, etc.
I am using two panels, one for the graphics (left panel ) and the other for the explanations (right panel).
I control the graph from the right panel.
In this example I present, the blinking does not run with all percentages of the screens. I do not know why.
For the example I present there is no blinking in 110% and 115% of the screen.
In general, in the examples I am developing, the flicker fails at different percentages.
I'm using 11.1 (the same happens with 11.0 and 10.x).
Hypothesis:
I am not using StringForm correctly.
My definitions are not yet appropriate.
Thanks in advance.Ernesto Espinosa2017-03-21T00:17:43Z[✓] Inconsistent language design MinMax - Solved in V11.1
http://community.wolfram.com/groups/-/m/t/816095
The new function MinMax is defined (first line in the documentation) as follows:
![enter image description here][1]
However this does not hold for an empty list
{Min[{}], Max[{}]}
MinMax[{}]
So it is self-contradicting and slightly confusing. I know MinMax says it returns {-infinity, infinity}, but that does not fit the definitions of Min, Max and MinMax. Like other functions such as ReIm and AbsArg, the function should just return {Min[..],Max[..]} (but more optimized), just as ReIm returns {Re[..],Im[..]} and AbsArg returns {Abs[..],Arg[..]}.
Take the following example: I want the minimum and maximum of those lists. It should give the same result to apply MinMax to each of them and then take the min of all the minima and the max of all the maxima, but the function doesn't handle empty lists in a coherent way (as Min and Max by themselves do).
SHMinMax[x___]:={Min[x],Max[x]}
num1={{4,7},{5,9},{11,13,2,8}};
num2=Flatten[num1];
(* all give same results as there is no 'empty' list *)
MinMax[num2]
{Min[#1],Max[#2]}&@@Transpose[MinMax/@num1]
{Min[#1],Max[#2]}&@@Transpose[SHMinMax/@num1]
(* empty list! built-in MinMax function is not coherent *)
num1={{4,7},{5,9},{11,13,2,8},{}};
num2=Flatten[num1];
MinMax[num2]
{Min[#1],Max[#2]}&@@Transpose[MinMax/@num1]
{Min[#1],Max[#2]}&@@Transpose[SHMinMax/@num1]
Note that just Min (or Max) is coherent:
num1 = {{4, 7}, {5, 9}, {11, 13, 2, 8}, {}};
num2 = Flatten[num1];
Min[num2]
Min@(Min /@ num1)
The very definition of Min[] to return infinity, has been done such that Min[3,Min[4],Min[-1,6]] or Min[3,4,-1,6] or Min[3,Min[],Min[4,-1],Min[6]] always give the same results, regardless of the order.
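That identity behavior is easy to check directly: `Min[]` evaluates to `Infinity`, the identity element for `Min`, so any way of splitting or nesting the arguments gives the same answer:

```mathematica
(* Min[] is the identity for Min, so nesting never changes the result *)
{Min[], Min[3, Min[], Min[4, -1], Min[6]], Min[3, 4, -1, 6]}
(* {Infinity, -1, -1} *)
```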
More confusing still is that it works differently for empty Interval objects than for empty lists.
(* built in MinMax IS coherent with empty intervals *)
{Min[Interval[]], Max[Interval[]]}
MinMax[Interval[]]
SHMinMax[Interval[]]
(* but not with empty lists *)
{Min[{}], Max[{}]}
MinMax[{}]
SHMinMax[{}]
To summarise, the choice of returning {-infinity, infinity} for MinMax[{}] is inconsistent and incoherent with the definitions of Min[], Max[] and the function MinMax itself. And, more puzzling, it also gives 'unusual' results for Interval objects.
I would like to know how this design choice was conceived. To me it seems very illogical and incoherent as it is now. While my own function SHMinMax which is the definition as given in the first definition of MinMax, does follow all the logic and is consistent throughout (for lists, empty lists, intervals and empty intervals!).
Please let me understand the logic!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-03-03at18.27.05.png&userId=73716Sander Huisman2016-03-03T14:44:17ZFindClusters versus ClusteringComponents?
http://community.wolfram.com/groups/-/m/t/960586
Dear all,
I just found out that there's a difference in the number of clusters retrieved when using FindClusters and ClusteringComponents on the same data set, even when exactly the same settings are used:
Do[
koppels = RandomReal[{0, 100}, 500];
cl = FindClusters[koppels, DistanceFunction -> EuclideanDistance, Method -> "Optimize"];
indices2 = ClusteringComponents[koppels, Automatic, 1, DistanceFunction -> EuclideanDistance, Method -> "Optimize"];
Print[{First@Dimensions[cl], Max[indices2]}];,
{i, 1, 6}]
{2,2}
{2,2}
{1,2}
{2,2}
{1,2}
{1,2}
This shouldn't be the case, because the same clusters should be found either way.
Does someone have an idea of what's going on here?
Thanks for the information!
JanJan Baetens2016-11-10T18:00:40ZCalculate (A*Nabla)B ?
http://community.wolfram.com/groups/-/m/t/1035972
Does anyone know how to write (A*Nabla)B in Mathematica?
As far as I understand, I can also write (A*Nabla)B = Nabla(A*Transpose[B]) - B(Nabla*A), but I can't get this to work either, as I don't know how to write A*Transpose[B].
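For reference, a direct sketch of the component formula ((A·∇)B)_i = Sum_j A_j ∂B_i/∂x_j, with a made-up helper name:

```mathematica
(* hypothetical helper: (A.∇)B for symbolic vector fields in the variables vars *)
ANablaB[A_List, B_List, vars_List] :=
  Table[Sum[A[[j]] D[B[[i]], vars[[j]]], {j, Length[vars]}], {i, Length[B]}]

ANablaB[{1, 0, 0}, {x^2, x y, z}, {x, y, z}]
(* {2 x, y, 0} *)
```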
Any ideas?mkssion2017-03-20T18:28:47ZSolve an Elliptic PDE for u[x,y] with the rhs given by numerical f[u]?
http://community.wolfram.com/groups/-/m/t/1035422
I have a somewhat unusual mathematical problem. I need to solve numerically a second-order elliptic PDE for u[x,y] with the right-hand side given by a numerical function F[u], that is, some given function of u, not of (x,y).
In 1D things work well
(* make interpolation of a linear function: *)
FofA = Interpolation[Table[{x, x}, {x, 0, 1, 10^-2}]];
(*construct a numerical function from the interpolation: *)
FofA1[x_] := FofA[z] /. z -> x
(* Find solutions for u’ = FofA1[u] *)
uofx = Flatten[
NDSolve[{Dt[u[x], x] == (FofA1[u[x]]), u[1] == 1},
u, {x, 0, 1}]].{1};
(* Plot it and compare with analytical *)
Show[{Plot[u[x] /. uofx, {x, 0, 1}, PlotRange -> All],
Plot[E^(-1 + x), {x, 0, 1}]}]
But in 2D it fails. One of the hints, I think is: if I try to solve Poisson equation in the form
D[u[x, y], {x, 2}] + D[u[x, y], {y, 2}] == u[x, y]
the NDSolveValue works OK, but it fails for
D[u[x, y], {x, 2}] + D[u[x, y], {y, 2}] == u[x, y]^{1.}
(when the rhs is numerically evaluated)
Thanks a lot for the insight.Maxim Lyutikov2017-03-19T22:24:02Z[✓] Avoid problems with Autorun in Manipulates, MMA 11.1?
http://community.wolfram.com/groups/-/m/t/1035529
Help! My own manipulated graphs that worked perfectly with autorun do not work now with 11.1. The autorun does not work despite the fact that I have ContinuousAction -> True. Is there perhaps a new version of mathplayer?
I do not know what happened, but it works now !!!Christos Papahristodoulou2017-03-20T11:10:50Z[✓] Locator constrained by polygon and NSolve
http://community.wolfram.com/groups/-/m/t/1035196
Dear Community,
I have a polygon with a constrained locator in it. I would like to draw a horizontal line from the locator towards the left boundary of the polygon, which is also given by a stepwise function called qgHSZ. I try to achieve this with NSolve, but I get some strange warnings, like
"Part 2 of {0.5,10.} does not exist" Why??
"NSolve was unable to solve the system with inexact coefficients, etc." Not clear either. If I test NSolve below the plot, it works fine.
What do I do wrong? Notebook attached.
Tx for the kind help in advance,
regards, AndrasAndras Gilicz2017-03-19T21:32:01Z[✓] Training loss goes to 0 when specifying TargetDevice = GPU
http://community.wolfram.com/groups/-/m/t/917616
Hi -- The neural network package is really cool. I'm learning a lot from experimenting with it. I have noticed one odd thing I can't explain. I have a fairly simple CNN that I train on some images. It trains (although super-slowly) on my CPU, with a more or less reasonable loss getting smaller on each training round. But when I set TargetDevice -> "GPU" it instantly reports 0 loss and finishes in just a couple of seconds. As one clue, when I had an Nvidia 970 with the latest Nvidia drivers, this didn't happen. I just upgraded to a new 1080 (partly for faster network training!), and that is when this started happening.
I've attached a Notebook that demonstrates this behavior and has SystemInformation[] in it. (It does use images that aren't included, but I don't think there is anything special about them, they are just a bunch of JPEGs). I'm running Nvidia driver 372.70. If there is a different driver I should be using instead, please let me know. Thanks!David Cardinal2016-09-01T21:46:08ZTurning anaglyph images into stereo pairs
http://community.wolfram.com/groups/-/m/t/1035318
A colleague sent this link to an article about the strangest solar system object I've ever seen, Saturn's moon Pan: https://saturn.jpl.nasa.gov/resources/7616 . The article contains an anaglyph stereo image:
![Pan anaglyph][1]
I love stereo images and hate anaglyph images. I prefer free-viewing side-by-side stereo pairs. The images are cleaner, and I don't have to hunt for the stereo glasses that I can never find when I need them. It occurred to me that all the information I need to turn the anaglyph image into a stereo pair is in the image. And the code turns out to be practically a one-liner.
A little poking around with the Get Pixel Color tool (in the right-click menu) revealed that apparently all of the information for the red image is in the red channel and all of the information for the cyan image is in the blue and green channels (surprise!). So producing a left-right pair is just a matter of picking out the red and blue (or green) channels, which is just what ColorSeparate does. For cross-eyed viewing, you want the blue channel on the left and red on the right, so I had to use Reverse to change the channel order. This is the complete code:
UnAnaglyph[image_] :=
Row[Reverse[Image[#, ImageSize -> 250] & /@ Most[ColorSeparate[image, "RGB"]]], " "]
It works beautifully:
UnAnaglyph[ImageTake[image, All, 450]]
![enter image description here][2]
UnAnaglyph[ImageTake[image, All, -450]]
![enter image description here][3]
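The opposite direction is a similarly short sketch, assuming you want a red/cyan anaglyph back from a stereo pair (the function name is made up): feed the left image into the red channel and the right image into both green and blue.

```mathematica
(* hypothetical inverse: left image becomes the red channel, right image green and blue *)
MakeAnaglyph[left_Image, right_Image] :=
  ColorCombine[{ColorConvert[left, "Grayscale"],
     ColorConvert[right, "Grayscale"],
     ColorConvert[right, "Grayscale"]}, "RGB"]
```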
I had fun applying UnAnaglyph to random anaglyph images I found in a Google image search.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5849PanAnaglyph.jpg&userId=32882
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=LeftStereoPair.png&userId=32882
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=RightStereoPair.png&userId=32882Christopher Carlson2017-03-19T23:53:42ZSolve definite integrals?
http://community.wolfram.com/groups/-/m/t/1034805
Hello,
Please, I am trying to solve a definite integral; it works, but the result is not simplified.
F = Integrate[(x^4*Exp[x])/(Exp[x] - 1)^2, {x, 0, 44}]
This is the result
(1/(15 (-1 + E^44)))4 (-\[Pi]^4 +
E^44 (\[Pi]^4 +
30 (-1874048 + 42592 Log[-1 + E^44] - 2904 PolyLog[2, 1/E^44] -
132 PolyLog[3, 1/E^44] - 3 PolyLog[4, 1/E^44])) +
30 (-42592 Log[-1 + E^44] +
3 (468512 + 968 PolyLog[2, 1/E^44] + 44 PolyLog[3, 1/E^44] +
PolyLog[4, 1/E^44])))studygroups 20002017-03-19T00:29:08ZVisualize links of Facebook pages, as leaves of a tree?
http://community.wolfram.com/groups/-/m/t/1033964
How would someone take a list of Facebook pages and represent them as leaves of a tree? I would like it to be interactive from Mathematica / Wolfram Language (as links), and printable so the leaves can be cut out as simple bulletin-board cutouts for, say, a classroom exercise, AND, most importantly (but not necessarily at first), to have it stand as the foundation of a more advanced program to be written in the future that will eventually become an interactive digital sign - which I can explain later once the simple pages-as-leaves question gets solved.Brian Woytovich2017-03-18T13:19:50Z[✓] Create a constrained locator inside a polygon?
http://community.wolfram.com/groups/-/m/t/1034164
Dear Community,
I have a closed polygon, and I would like to place a locator into it, so it remains constrained to the interior of the polygon, i.e. it never leaves the polygon when I drag it. How could I do it? My polygon is attached.
Tx for the kind help in advance,
AndrasAndras Gilicz2017-03-18T17:07:06ZBlanagrams
http://community.wolfram.com/groups/-/m/t/1024712
A blanagram is an anagram for another word except for the substitution of one letter. Think of Scrabble with a blank square (blank + anagram = blanagram). For example, *phyla* is a blanagram of *glyph*; replace the "g" with an "a" and find anagrams (see [blog on Anagrams][1]).
We will prototype with a simple word, glyph. We'll start by getting a list of the individual characters in glyph.
In[1]:= Characters["glyph"]
Out[1]= {"g", "l", "y", "p", "h"}
Start by replacing the first letter in glyph with an a and then finding all anagrams. The third argument to StringReplacePart is a list of beginning and ending positions for the replacement.
In[2]:= StringReplacePart["glyph", "a", {1, 1}]
Out[2]= "alyph"
In[3]:= Anagrams[%]
Out[3]= {"phyla", "haply"}
Now do the same for each character position in the word.
In[4]:= Map[StringReplacePart["glyph", "a", {#, #}] &, Range[StringLength["glyph"]]]
Out[4]= {"alyph", "gayph", "glaph", "glyah", "glypa"}
In[5]:= Flatten[Map[Anagrams, %]]
Out[5]= {"phyla", "haply"}
Having done this for the letter a, we now repeat for all other single characters.
In[6]:= CharacterRange["a", "z"]
Out[6]= {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"}
In[7]:= blana = Table[
Map[StringReplacePart["glyph", ch, {#, #}] &,
Range[StringLength["glyph"]]],
{ch, CharacterRange["a", "z"]}]
Out[7]= {{"alyph", "gayph", "glaph", "glyah", "glypa"}, {"blyph", "gbyph", "glbph", "glybh", "glypb"},
{"clyph", "gcyph", "glcph", "glych", "glypc"}, {"dlyph", "gdyph", "gldph", "glydh", "glypd"},
{"elyph", "geyph", "gleph", "glyeh", "glype"}, {"flyph", "gfyph", "glfph", "glyfh", "glypf"},
{"glyph", "ggyph", "glgph", "glygh", "glypg"}, {"hlyph", "ghyph", "glhph", "glyhh", "glyph"},
{"ilyph", "giyph", "gliph", "glyih", "glypi"}, {"jlyph", "gjyph", "gljph", "glyjh", "glypj"},
{"klyph", "gkyph", "glkph", "glykh", "glypk"}, {"llyph", "glyph", "gllph", "glylh", "glypl"},
{"mlyph", "gmyph", "glmph", "glymh", "glypm"}, {"nlyph", "gnyph", "glnph", "glynh", "glypn"},
{"olyph", "goyph", "gloph", "glyoh", "glypo"}, {"plyph", "gpyph", "glpph", "glyph", "glypp"},
{"qlyph", "gqyph", "glqph", "glyqh", "glypq"}, {"rlyph", "gryph", "glrph", "glyrh", "glypr"},
{"slyph", "gsyph", "glsph", "glysh", "glyps"}, {"tlyph", "gtyph", "gltph", "glyth", "glypt"},
{"ulyph", "guyph", "gluph", "glyuh", "glypu"}, {"vlyph", "gvyph", "glvph", "glyvh", "glypv"},
{"wlyph", "gwyph", "glwph", "glywh", "glypw"}, {"xlyph", "gxyph", "glxph", "glyxh", "glypx"},
{"ylyph", "gyyph", "glyph", "glyyh", "glypy"}, {"zlyph", "gzyph", "glzph", "glyzh", "glypz"}}
Because of the extra nesting of the output (`Table[Map[...]]`), we need to map `Anagrams` at a deeper level, then flatten the result and delete duplicates.
In[8]:= Flatten[Map[Anagrams, blana, {2}]] // DeleteDuplicates
Out[8]= {"phyla", "haply", "glyph", "lymph", "sylph"}
Finally, put all the pieces together to create a reusable function, `Blanagrams`.
In[9]:= Blanagrams[word_String] := Module[{blana},
blana = Table[
Map[StringReplacePart[word, ch, {#, #}] &,
Range[StringLength[word]]],
{ch, CharacterRange["a", "z"]}];
DeleteDuplicates[Flatten[Map[Anagrams, blana, {2}]]]
]
This turns out to be fairly quick for small words, but it bogs down for larger words.
In[10]:= Blanagrams["glyph"] //Timing
Out[10]= {1.25209, {"phyla", "haply", "glyph", "lymph", "sylph"}}
In[11]:= Blanagrams["string"] //Timing
Out[11]= {8.85717, {"taring", "rating", "gratin", "arsing", "grains", "sating", "giants", "grants",
"gratis", "strain", "trains", "brings", "grinds", "sering", "singer", "signer", "resign", "reigns",
"ingres", "signet", "tinges", "ingest", "tigers", "insert", "inters", "inerts", "string", "things",
"nights", "rights", "girths", "tiring", "siring", "rising", "siting", "tigris", "glints", "roting",
"soring", "signor", "groins", "ingots", "strong", "intros", "spring", "sprint", "prints", "stings",
"turing", "truing", "strung", "grunts", "wrings", "trying", "stying", "stingy"}}
Because we are doing similar computations for each of the letters of the alphabet, this is a good candidate for parallelizing. But which parts do we parallelize? One way to help determine where the computational bottlenecks are is to profile the code. Integrated development environments such as Wolfram Workbench have built-in profilers, but here we will create a simple set of profiling steps to determine where our bottlenecks are.
Here is a small auxiliary function that wraps [AbsoluteTiming][2] around an expression and adds a tag to make it easy to identify the various parts of what is reported.
In[12]:= timing[expr_, tag_] := Print[{NumberForm[First@AbsoluteTiming[expr], 10], tag}]
In[13]:= SetAttributes[timing, HoldAll];
In[14]:= word = "string";
timing[
tmp = Table[Map[StringReplacePart[word, ch, {#, #}] &,
Range[StringLength[word]]],{ch, CharacterRange["a", "z"]}];,
"table"
];
timing[
Flatten[tmp2 = Map[Anagrams, tmp, {2}]];,
"map Anagrams"
];
timing[
DeleteDuplicates[Flatten[tmp2]];,
"flatten and delete duplicates"
]
Out[15]= {0.000794,table}
Out[16]= {8.986847,map Anagrams}
Out[17]= {0.000050,flatten and delete duplicates}
Not surprisingly, creating the many possible letter combinations is very quick. The greatest part of this computation is spent with mapping Anagrams across the many word combinations. So we can simply try to parallelize that using [ParallelMap][3].
In[17]:= BlanagramsParallel[word_String]:=
Module[{blana},
blana = Table[Map[StringReplacePart[word, ch, {#, #}] &,
Range[StringLength[word]]], {ch, CharacterRange["a", "z"]}];
DeleteDuplicates@Flatten[ParallelMap[Anagrams,blana,{2}]]
]
Launch kernels on our current machine and compute.
In[18]:= LaunchKernels[]
Out[18]= {KernelObject[1,local],KernelObject[2,local]}
In[19]:= DistributeDefinitions[Anagrams]
Out[19]= {Anagrams,word}
In[20]:= BlanagramsParallel["strings"]//AbsoluteTiming
Out[20]= {37.792462,
{ratings,gratins,staring,strains,resting,stinger,singers,signers,
resigns,ingress,signets,ingests,tigress,inserts,strings,tirings,risings,
sitings,sorting,storing,signors,tossing,springs,sprints,rusting,tryings,stringy}}
For comparison, here is the computation done serially on one kernel.
In[21]:= Blanagrams["strings"]//AbsoluteTiming
Out[21]= {47.756149,
{ratings,gratins,staring,strains,resting,stinger,singers,signers,resigns,
ingress,signets,ingests,tigress,inserts,strings,tirings,risings,sitings,sorting,
storing,signors,tossing,springs,sprints,rusting,tryings,stringy}}
On the 2-kernel machine this was run on, we get a modest speedup. This particular computation parallelizes well, so on an 8-kernel machine we would see substantially faster compute times.
In[22]:= First[%]/First[%%]
Out[22]= 1.2636422
----------
*NOTE: This article was [originally posted][4] on Paul Wellin's blog PROGRAMMING WITH MATHEMATICA.*
[1]: http://programmingwithmathematica.blogspot.com/2011/12/anagrams.html
[2]: http://reference.wolfram.com/language/ref/AbsoluteTiming.html
[3]: http://reference.wolfram.com/language/ref/ParallelMap.html
[4]: http://programmingwithmathematica.blogspot.com/2011/12/blanagrams.html
Paul Wellin 2017-03-03T11:10:07Z
Mimic GenerateSymmetricKey["password"] in openssl?
http://community.wolfram.com/groups/-/m/t/1025247
What are the openssl command line parameters which would produce the same key as the GenerateSymmetricKey["password"] function?
For example,
key = GenerateSymmetricKey["password",
Method -> <| "InitializationVector" -> None, "Cipher" -> "AES256", "BlockMode" -> "CBC"|>] ;
key["Key"]
This produces the key "7wUOXiuJb4qDpJSmxq3QULZuLxcgUnWUwle4pLn53Kc=". How could I get openssl to produce the same key from the same password, in either hex or Base64 format?
O Mirza 2017-03-04T12:10:41Z
Using NetModel to "fine tune" models with new final layers?
http://community.wolfram.com/groups/-/m/t/1033923
Hi -- First, congrats on 11.1. Support for the 1080 GPU is enough for me to get excited. I also love that I can load pre-trained models using NetModel, as that is an increasingly obvious strategy for problem solving. However, I'm not clear on how I would go about keeping the weights from the lower (feature) layers while re-training the upper layers. Knowing how thorough you are, I'm sure it's possible; I'm just not sure how. NetChain lets me build layers, but I don't think it lets me operate on them. NetExtract lets me pull out layers, but I don't want just the model structure, I'd like to keep the pre-trained weights. Thanks! -- David
David Cardinal 2017-03-18T00:07:22Z
[✓] Use RegionMember properly?
http://community.wolfram.com/groups/-/m/t/1034606
Dear Community,
I tried to apply the RegionMember function to the polygon from my previous post (constrained locator). This function should give True if a point is inside a region, and False otherwise. Applied to my polygon, however, it gives True even though the investigated point is clearly outside the region :-/ Does anybody have an explanation?
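As a quick sanity check (a sketch with a placeholder polygon, not the attached one): RegionMember does behave as expected on a simple convex polygon, which suggests the behavior is specific to the attached polygon's data, e.g. self-intersecting or repeated vertices.

```
(* RegionMember on a unit square: a point inside gives True,
   a point outside gives False. *)
poly = Polygon[{{0, 0}, {1, 0}, {1, 1}, {0, 1}}];
RegionMember[poly, {0.5, 0.5}]  (* True *)
RegionMember[poly, {2, 2}]      (* False *)
```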
Tx in advance,
Andras
Andras Gilicz 2017-03-18T19:47:05Z