Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing discussions from all groups, sorted by activity

How do I put this function by parts
http://community.wolfram.com/groups/-/m/t/1039293
Hi, how do I enter this in Wolfram Mathematica?
![function][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=trozos_1_grafica.gif&userId=1039278
Juan Galindo, 2017-03-25T21:04:02Z

Saving data directly in notebooks
http://community.wolfram.com/groups/-/m/t/1039591
Do you ever wish you could save data directly into a notebook instead of having to export it? I sometimes do things like
mydata = {1,2,3, ...};
just to be able to use this data next time I open the notebook. This could be data I pasted from the web, or something that just took several minutes to generate.
But saving it like this is inconvenient, and often takes up a lot of space.
Today I published a blog post about a better method that I have been using recently. I thought people here may be interested:
* [Save data directly in Mathematica notebooks](http://szhorvat.net/pelican/save-data-in-notebooks.html)
The idea is to first `Compress` the data, then use `Interpretation` to create a compact display for it. Here's a small function that packages all of this up:
ClearAll[SaveToCell]
SaveToCell::usage =
  "SaveToCell[variable] creates an input cell that reassigns the current value of variable.\n" <>
  "SaveToCell[variable, display] shows 'display' on the right-hand side of the assignment.";
SetAttributes[SaveToCell, HoldFirst]
SaveToCell[var_, name : Except[_?OptionQ] : "data", opt : OptionsPattern[]] :=
 With[{data = Compress[var],
   panel = ToBoxes@Tooltip[Panel[name, FrameMargins -> Small], DateString[]]},
  CellPrint@Cell[
    BoxData@RowBox[{
       MakeBoxes[var],
       "=",
       InterpretationBox[panel, Uncompress[data]],
       ";"
       }],
    "Input",
    (* prevent deletion by Cell > Delete All Output: *)
    GeneratedCell -> False,
    (* CellLabel is special: the last occurrence takes precedence, so it comes before opt: *)
    CellLabel -> "(saved)",
    opt,
    CellLabelAutoDelete -> False
    ]
  ]
If you have your data in the variable `var`, simply run `SaveToCell[var]`, which will create an input cell that re-assigns the value of `var`. It looks like this:
![enter image description here][1]
We can also customize the display:
var = Range[1000];
SaveToCell[var, Short[var]]
![enter image description here][2]
Hovering over the display shows the date when the data was saved.
`SaveToCell` also takes arbitrary `Cell` options, and passes them down to the generated cell. Something strange I observed while writing this function is that with some options, such as `CellLabel`, [it is not the first but the last occurrence of the option that takes precedence][3]. Does anyone know why?
We can use this functionality to change the cell style, add a different label, or to protect the cell against accidental deletion: `SaveToCell[var, Deletable -> False]`.
I have been using this little function for a while, and I hope that others will find it useful too.
Do be careful though: notebooks are not designed for storing large data. I would avoid storing data as large as several tens of megabytes within notebooks.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=in1.png&userId=38370
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=in3.png&userId=38370
[3]: http://mathematica.stackexchange.com/q/140989/12
Szabolcs Horvát, 2017-03-25T21:56:48Z

Avoid problem fitting Complex part of equation with FindFit?
http://community.wolfram.com/groups/-/m/t/1038719
Hi, I am Prady and I am new to Mathematica. I have just started using it for my project work. I need to fit the Havriliak-Negami equation to my experimental results. When I take the real part and use FindFit, it fits correctly; I have included this in the file Real_part.nb.
However, when I include the complex part, it gives an error.
This error can be seen in Real+Imag_part.nb.
Kindly advise which function to use instead of FindFit, or provide a solution. Thank you.
Pradyotta Biswal, 2017-03-24T05:24:00Z

Performance tuning in Wolfram Language
http://community.wolfram.com/groups/-/m/t/1037730
*NOTE: Please see the original version of this post [**HERE**][1]. Cross-posted here per suggestion of [@Vitaliy Kaurov][at0].*
----------
Since Mathematica is a symbolic system, with a symbolic evaluator much more
general than in numerically based languages, it is not surprising that performance tuning can be
trickier here. There are many techniques, but they can all be understood
from a single main principle:
*Avoid the full Mathematica symbolic evaluation process as much as possible.*
All techniques seem to reflect some facet of it. The main idea here is that most of the
time, a slow Mathematica program is slow because many Mathematica functions are very general. This
generality is a great strength, since it enables the language to support better and more powerful abstractions, but in many places in a program such generality, used without care, can be (huge) overkill.
I won't be able to give many illustrative examples in the limited space, but they can be found in
several places, including some WRI technical reports (Daniel Lichtblau's
one on efficient data structures in Mathematica comes to mind), a very good
book of David Wagner on Mathematica programming, and most notably, many Mathgroup
posts. I also discuss a limited subset of them in [my book][3]. I will supply more references soon.
Here are a few of the most common ones (I only list those available within the Mathematica
language itself, not mentioning CUDA/OpenCL, or links to other languages, which are
of course also possibilities):
1. *Push as much work into the kernel at once as possible, work with as large
chunks of data at a time as possible, without breaking them into pieces*
1.1. Use built-in functions whenever possible. Since they are implemented
in the kernel, in a lower-level language (C), they are typically (but not always!)
much faster than user-defined ones solving the same problem. The more specialized
version of a built-in function you are able to use, the more chances you have for a speed-up.
1.2. Use functional programming (`Map`, `Apply`, and friends). Also, use pure functions
in `#`-`&` notation when you can; they tend to be faster than `Function`s with named
arguments or those based on patterns (especially for computationally light
functions mapped over large lists).
1.3. Use structural and vectorized operations (`Transpose, Flatten,
Partition, Part` and friends), they are even faster than functional.
1.4. Avoid procedural programming (loops, etc.), because this programming
style tends to break large structures into pieces (array indexing, etc.).
This pushes a larger part of the computation outside the kernel and makes it slower.
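A minimal sketch illustrating points 1.2-1.4 (these examples are mine, not from the original post; absolute timings will vary by machine and version, but the ordering is typical):

    data = RandomReal[1, 10^6];

    (* structural/vectorized: listable arithmetic plus Total, typically fastest *)
    AbsoluteTiming[Total[data^2]]

    (* functional: Map with a pure function, usually slower *)
    AbsoluteTiming[Total[Map[#^2 &, data]]]

    (* procedural: explicit loop with element indexing, usually slowest *)
    AbsoluteTiming[Module[{s = 0.}, Do[s += data[[i]]^2, {i, Length[data]}]; s]]

All three compute the same sum of squares; only the amount of work pushed into the kernel at once differs.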
2. *Use machine-precision whenever possible*
2.1. Be aware of and use the Listability of built-in numerical functions, applying them to
large lists of data rather than using `Map` or loops.
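For example (an illustrative sketch, not from the original post), a listable function like `Sin` can be applied to a whole packed array in one step:

    data = RandomReal[{0., Pi}, 10^6];

    Sin[data];        (* listable: applied to the entire packed array at once *)
    Map[Sin, data];   (* element-by-element: typically noticeably slower *)

Both produce the same result; the listable call avoids a per-element trip through the evaluator.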
2.2. Use `Compile` when you can. Use the newer capabilities of `Compile`, such as `CompilationTarget->"C"`,
and make your compiled functions parallel and `Listable`.
2.3. Whenever possible, use vectorized operations (`UnitStep, Clip, Sign, Abs`, etc)
inside `Compile`, to realize "vectorized control flow" constructs such as `If`, so that
you can avoid explicit loops (at least as innermost loops) also inside `Compile`. This
can move you in speed from Mathematica byte-code to almost native C speed, in some cases.
2.4. When using `Compile`, make sure that the compiled function doesn't bail out to non-compiled evaluation. See examples [in this MathGroup thread][4].
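A small sketch of points 2.2-2.3 (my example, not from the original post): replacing an element-wise `If` with `UnitStep` inside `Compile`. Note that `CompilationTarget -> "C"` requires a C compiler to be installed; omit that option otherwise.

    (* clip negative entries to 0: v*UnitStep[v] instead of mapping If[# > 0, #, 0.] & *)
    clipNeg = Compile[{{v, _Real, 1}},
       v*UnitStep[v],
       CompilationTarget -> "C",
       RuntimeAttributes -> {Listable}, Parallelization -> True];

    clipNeg[{-1., 2., -3., 4.}]  (* -> {0., 2., 0., 4.} *)

The arithmetic form stays vectorized all the way down, with no explicit inner loop.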
3. *Be aware that Lists are implemented as arrays in Mathematica*
3.1. Pre-allocate large lists
3.2. Avoid `Append`, `Prepend`, `AppendTo` and `PrependTo` in loops for building
lists, etc. (because they copy the entire list to add a single element, which leads
to quadratic rather than linear complexity for list-building)
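A sketch contrasting points 3.1-3.2 (my example, not from the original post): the `AppendTo` loop is quadratic, while pre-allocating (or simply using `Table`) is linear:

    (* quadratic: the whole list is copied on every AppendTo *)
    res1 = {}; Do[AppendTo[res1, i^2], {i, 10^4}];

    (* linear: pre-allocate, then assign in place with Part *)
    res2 = ConstantArray[0, 10^4]; Do[res2[[i]] = i^2, {i, 10^4}];

    (* linear and idiomatic: build the whole list at once *)
    res3 = Table[i^2, {i, 10^4}];

    res1 == res2 == res3  (* -> True *)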
3.3. Use linked lists (structures like `{1,{2,{3,{}}}}`) instead of plain lists
for list accumulation in a program. The typical idiom is `a = {new element, a}`.
Because `a` is a reference, a single assignment is constant-time.
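For instance (an illustrative sketch, not from the original post), accumulating squares into a linked list and flattening once at the end:

    a = {};
    Do[a = {i^2, a}, {i, 5}];
    a                    (* -> {25, {16, {9, {4, {1, {}}}}}} *)
    Reverse@Flatten[a]   (* -> {1, 4, 9, 16, 25} *)

Each step only wraps the previous structure, so the total cost is linear in the number of elements.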
3.4. Be aware that pattern matching for sequence patterns (`BlankSequence`,
`BlankNullSequence`) is also based on sequences being arrays. Therefore, a rule
`{fst_,rest___}:>{f[fst],g[rest]}` will copy the entire list when applied. In particular, don't
use recursion in a way that may look natural in other languages. If you want to use recursion on lists, first convert them to linked lists.
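As a sketch of that conversion (my example, not from the original post; `toLinked` and `sumLL` are hypothetical helper names):

    (* convert a flat list into a linked list, then recurse without copying *)
    toLinked[l_List] := Fold[{#2, #1} &, {}, Reverse[l]];

    sumLL[{}] = 0;
    sumLL[{first_, rest_}] := first + sumLL[rest];

    sumLL[toLinked[Range[100]]]  (* -> 5050 *)

Destructuring `{first_, rest_}` only binds two fixed parts, so no sequence pattern (and no copy of the tail) is involved. Keep `$RecursionLimit` in mind for long lists.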
4. *Avoid inefficient patterns, construct efficient patterns*
4.1. Rule-based programming can be both very fast and very slow, depending on how
you build your structures and rules, but in practice it is easier to inadvertently
make it slow. It will be slow for rules which force the pattern-matcher to make many
a priori doomed matching attempts, for example by under-utilizing each run of the
pattern-matcher through a long list (expression). Sorting elements is a good example:
`list//.{left___,x_,y_,right___}/;x>y:>{left,y,x,right}` has cubic complexity in the
size of the list (an explanation is given, e.g., [here][5]).
4.2. Build efficient patterns, and corresponding structures to store your data, so that
the pattern-matcher wastes as little time on false matching attempts as possible.
4.3. Avoid using patterns with computationally intensive conditions or tests. The
pattern-matcher will give you the most speed when patterns are mostly syntactic in
nature (testing structure, heads, etc.). Every time a condition (`/;`) or pattern test (`?`)
is used, the evaluator is invoked by the pattern-matcher for every potential match,
and this slows it down.
5. *Be aware of immutable nature of most Mathematica built-in functions*
Most Mathematica built-in functions which process lists create a copy of the original list and
operate on that copy. Therefore, they may have linear time (and space) complexity in the
size of the original list, even if they modify the list in only a few places. One universal
built-in function that does not create a copy, but instead modifies the original expression and so does not
have this issue, is `Part`.
5.1. Avoid using most list-modifying built-in functions for a large number of
small independent list modifications which cannot be formulated as a single step
(for example, `NestWhile[Drop[#,1]&,Range[1000],#<500&]`).
5.2. Use extended functionality of `Part` to extract and modify a large number of
list (or more general expression) elements at the same time. This is very fast,
and not just for packed numerical arrays (`Part` modifies the original list).
5.3. Use `Extract` to extract many elements at different levels at once, passing
to it a possibly large list of element positions.
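A quick sketch of points 5.2-5.3 (my example, not from the original post):

    (* Part assigns many positions at once, modifying the list in place *)
    lst = Range[10];
    lst[[{2, 4, 6}]] = 0;
    lst  (* -> {1, 0, 3, 0, 5, 0, 7, 8, 9, 10} *)

    (* Extract pulls elements from many positions (and levels) in one call *)
    Extract[{{a, b}, {c, d}}, {{1, 2}, {2, 1}}]  (* -> {b, c} *)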
6. *Use efficient built-in data structures*
The following internal data structures are very efficient and can be used in
many more situations than it may appear from their stated main purpose. Lots of such examples can be found by searching the Mathgroup archive, particularly contributions of Carl Woll.
6.1. Packed arrays
6.2. Sparse arrays
7. *Use hash tables.*
Starting with version 10, immutable associative arrays (Associations) are available in Mathematica.
7.1. Associations
The fact that they are immutable does not prevent them from having efficient insertion and deletion of key-value pairs (cheap copies differ from the original association by the presence, or absence, of a given key-value pair). They are the idiomatic associative arrays in Mathematica, and have very good performance characteristics.
For earlier versions, the following alternatives work pretty well, being based on Mathematica's internal hash tables:
7.2. Hash tables based on `DownValues` or `SubValues`
7.3. `Dispatch`
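A small sketch of both flavors (my example, not from the original post):

    (* Associations (v10+) as associative arrays *)
    assoc = <|"x" -> 1, "y" -> 2|>;
    assoc["z"] = 3;            (* insert a key-value pair *)
    KeyDropFrom[assoc, "x"];   (* delete a key *)
    assoc                      (* -> <|"y" -> 2, "z" -> 3|> *)

    (* pre-v10 alternative: a hash table built on DownValues *)
    ClearAll[h];
    h["x"] = 1; h["y"] = 2;
    h["y"]  (* -> 2 *)

Lookups in both cases go through Mathematica's internal hashing, so they are effectively constant-time.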
8. *Use element-position duality*
Often you can write faster functions by working with the positions of elements rather than
the elements themselves, since positions are integers (for flat lists). This can give you
up to an order-of-magnitude speed-up, even compared to generic built-in functions
(`Position` comes to mind as an example).
9. *Use Reap and Sow*
`Reap` and `Sow` provide an efficient way of collecting intermediate results, and generally
of "tagging" parts you want to collect during a computation. These commands also go well
with functional programming.
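For example (an illustrative sketch, not from the original post), collecting primes without ever appending to a list:

    Reap[Do[If[PrimeQ[i], Sow[i]], {i, 20}]][[2, 1]]
    (* -> {2, 3, 5, 7, 11, 13, 17, 19} *)

`Sow` tags values as they are produced, and `Reap` gathers them all at once at the end, avoiding the quadratic `AppendTo` pattern.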
10. *Use caching, dynamic programming, lazy evaluation*
10.1. Memoization is very easily implemented in Mathematica, and can save a lot of execution
time for certain problems.
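The classic idiom (a sketch, not from the original post) stores each computed value as a new DownValue:

    ClearAll[fib];
    fib[0] = 0; fib[1] = 1;
    fib[n_Integer?Positive] := fib[n] = fib[n - 1] + fib[n - 2];

    fib[100]  (* each value is computed once; naive recursion would be exponential *)

The assignment inside the definition (`fib[n] = ...`) is what caches the result for subsequent calls.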
10.2. In Mathematica, you can implement more complex versions of memoization, where you can
define functions (closures) at run-time, which will use some pre-computed parts in their
definitions and therefore will be faster.
10.3. Some problems can benefit from lazy evaluation. This seems more relevant to memory
efficiency, but can also affect run-time efficiency. Mathematica's symbolic constructs make
it easy to implement.
A successful performance-tuning process usually employs a combination of these techniques,
and you will need some practice to identify cases where each of them will be beneficial.
[1]: http://mathematica.stackexchange.com/a/29351
[3]: http://www.mathprogramming-intro.org/
[4]: https://groups.google.com/d/topic/comp.soft-sys.math.mathematica/XOXapJm_Q1Q/discussion
[5]: http://www.mathprogramming-intro.org/book/node355.html
[at0]: http://community.wolfram.com/web/vitaliyk
Leonid Shifrin, 2017-03-22T20:05:20Z

[EIWL] Get right outputs of these examples from the EIWL Book?
http://community.wolfram.com/groups/-/m/t/1038429
I've been reading *An Elementary Introduction to the Wolfram Language*.
Either I'm doing something wrong or something has changed.
Neither of these examples (page 213) seems to work:
WordCloud[TextCases[WikipediaData["EU"],"Country"]]
and
TextStructure["You can do so much with the Wolfram Language."]
1. Is there maybe something I'm missing? I'm a complete novice, so perhaps I'm overlooking a basic step.
2. Is there a place to get updates to the book?
3. Could someone please tell me the correct way to do these?
Thanks for your help!
Brad Edwards, 2017-03-23T22:27:30Z

Mathematica 11.0.1 now available for the Raspberry Pi
http://community.wolfram.com/groups/-/m/t/1028536
Hi all,
Mathematica 11.0.1 is now available for the Raspberry Pi on Raspbian. If you already have Mathematica installed on your Raspberry Pi, you can update with the following:
sudo apt-get update && sudo apt-get upgrade wolfram-engine
If you don't already have Mathematica installed you can run the following commands to install it:
sudo apt-get update && sudo apt-get install wolfram-engine
New features for the Raspberry Pi include:
- Neural network features, including constructing custom nets: http://reference.wolfram.com/language/guide/NeuralNetworks.html
- Audio processing features, including out-of-core streaming of large sounds as well as advanced audio processing: http://reference.wolfram.com/language/guide/AudioProcessing.html
- Travel-based path planning functions, including path finding from one city to another: http://reference.wolfram.com/language/guide/LocationsPathsAndRouting.html
- Channel-based communication for sending and receiving messages: http://reference.wolfram.com/language/guide/Channel-BasedCommunication.html
- Powerful and easy scripting through WolframScript: http://reference.wolfram.com/language/ref/program/wolframscript.html
- And many more: http://reference.wolfram.com/language/guide/SummaryOfNewFeaturesIn11.html
Additionally, with the new release of WolframScript on the Raspberry Pi, you can install WolframScript standalone and run it without a local kernel, against the cloud, using the `-cloud` option. This means you can use the Wolfram Language through WolframScript on the Raspberry Pi without having wolfram-engine installed. See the documentation page for WolframScript for more details.
Ian Johnson, 2017-03-09T21:02:49Z

Formal Concept Analysis with Mathematica
http://community.wolfram.com/groups/-/m/t/1039502
Hi, I'm looking for examples of Formal Concept Analysis (developed by Rudolf Wille in the 70s) coded with Mathematica. Such analysis provides hierarchical lattices for formal contexts. I appreciate your input. M.
Mariana Espinosa, 2017-03-25T13:24:13Z

Where Is the Center of the US?
http://community.wolfram.com/groups/-/m/t/1039049
Where is the geographic center of the US? According to the National Geodetic Survey, the center of the conterminous US is at 39°50'N 98°35'W, and the center of all US states (including Alaska and Hawaii) is at 44°58'N 103°46'W. The thing is, the center of a region on the globe is not entirely well defined. That is, there are multiple ways of characterizing the center of a region like the US.
With Projections
=======
Well how did the NGS do it? The NGS published their results for the center of the conterminous US in 1918. They took a map of the US, cut out a piece of cardboard in the shape of that map, and then found its center of mass by hanging it from various points. The center of mass of this piece of cardboard, they said, was analogous to the center of the conterminous US.
What they essentially measured is the centroid of the US in some projection. The key point is "in some projection". With another projection you could get a different result. The NGS didn't try many projections, but they did consider this problem. However, as Oscar S. Adams somewhat hilariously says in a [paper][1] presenting some of the NGS's methods,
> [...] almost any one of the methods already outlined will give a point that is accurate enough for all practical purposes. As a matter of fact, it is hardly conceivable that such a point should meet any 'practical purpose' in any case.
In 1918, checking this claim, that the projections will not make too big a difference, would have required a lot of cardboard and a lot of time, but now it is much easier to investigate this.
First, let's get a 2D MeshRegion for the US. To get a 2D region, we have to use some projection, so we will use the equirectangular projection, where the x-coordinate is the longitude and the y-coordinate is the latitude. This makes it easy to convert the region to a geo-polygon by just wrapping GeoPosition around each vertex, and also makes it easier to put it in all sorts of projections.
We need to get a geo-Polygon for the conterminous US and convert it into a MeshRegion. To do this, we can just call DiscretizeGraphics.
r = DiscretizeGraphics[Polygon/@Map[Reverse, Entity["Country", "UnitedStates"]["Polygon"][[1,1]],{2}]]
![conterminous US region][2]
RegionUnion[
DiscretizeGraphics@Map[Polygon]@Map[Reverse, Entity["Country", "UnitedStates"]["Polygon"][[1,1]],{2}],
DiscretizeGraphics@Map[Polygon]@Map[Reverse, Entity["AdministrativeDivision", {"Hawaii", "UnitedStates"}]["Polygon"][[1,1]],{2}],
DiscretizeGraphics@Map[Polygon]@Map[Reverse, Entity["AdministrativeDivision", {"Alaska", "UnitedStates"}]["Polygon"][[1,1]],{2}]
]
![US states region][3]
Everything we do with the conterminous US will also apply to all of the states, and in fact to any other region on the planet. We'll start by defining a function that projects a MeshRegion like r into whatever map projection we want.
projectMesh[r_, proj_] := MeshRegion[GeoGridPosition[GeoPosition[Reverse/@MeshCoordinates[r]], proj][[1]], MeshCells[r,2]]
projectMesh[r, "Robinson"]
![projected conterminous US region][4]
Now we can easily create a function that finds the centroid of this region, and then uses GeoGridPosition to take that centroid from the projection's coordinates to a GeoPosition.
meshCentroid[r_, proj_] := GeoPosition@GeoGridPosition[RegionCentroid[projectMesh[r, proj]], proj]
In[]:= meshCentroid[r, "Mercator"]
Out[]= GeoPosition[{40.1393, -99.3083}]
Now, we can try this with a few projections and put them on a map.
GeoListPlot[
Labeled[meshCentroid[r,#],#]&/@{"Equirectangular","Mercator","Albers","CylindricalEqualArea","Bonne","LambertAzimuthal"},
GeoRange->"AdministrativeDivision", ImageSize->700]
![projection centers][5]
Let's try this with lots of projections! This gets us all of the commonly named projections that can project the conterminous US.
projs =
Quiet@Select[
Complement[
GeoProjectionData[],
GeoProjectionData["UTMZone"],
GeoProjectionData["SPCS27"],
GeoProjectionData["SPCS83"]],
MeshRegionQ@projectMesh[r,{#,"Centering"->{0,0}}]&]
Out[]= {"Airy", "Aitoff", "Albers", "AmericanPolyconic", "ApianI", \
"ApianII", "ArdenClose", "Armadillo", "AugustEpicycloidal", \
"AzimuthalEquidistant", "BaconGlobular", "Balthasart", \
"BehrmannEqualArea", "BipolarObliqueConicConformal", \
"BoggsEumorphic", "Bonne", "Bottomley", "BraunConicStereographic", \
"BraunCylindrical", "BraunII", "BSAMCylindrical", "Cassini", \
"Collignon", "ConicEquidistant", "ConicPerspective", \
"ConicSatelliteTracking", "Craster", "CrasterCylindricalEqualArea", \
"CylindricalEqualArea", "CylindricalEquidistant", \
"CylindricalSatelliteTracking", "DenoyerSemielliptical", \
"EckertGreifendorff", "EckertI", "EckertII", "EckertIII", "EckertIV", \
"EckertV", "EckertVI", "EquatorialStereographic", "Equirectangular", \
"Euler", "FoucautEqualArea", "FoucautStereographic", \
"FournierGlobularI", "FournierII", "GallIsographic", \
"GallStereographic", "GinzburgI", "GinzburgII", "GinzburgIV", \
"GinzburgIX", "GinzburgPseudoCylindrical", "GinzburgV", "GinzburgVI",
"GoodeHomolosine", "GottElliptical", "GottMugnoloElliptical", \
"Hammer", "Hatano", "HerschelConicConformal", "HoboDyer", \
"Hyperelliptical", "KarchenkoShabanova", "KavrayskiyV", \
"KavrayskiyVII", "Lagrange", "LambertAzimuthal", \
"LambertConformalConic", "LambertConformalConicNGS", \
"LambertConicEqualArea", "LambertCylindrical", "Larrivee", \
"Loximuthal", "Maurer", "McBrydeThomasFlatPolarParabolic", \
"McBrydeThomasFlatPolarQuartic", "McBrydeThomasFlatPolarSinusoidal", \
"McBrydeThomasI", "McBrydeThomasII", "Mercator", "MillerCylindrical", \
"MillerCylindricalII", "MillerPerspective", "Mollweide", "MurdochI", \
"MurdochII", "MurdochIII", "NaturalEarth", "Nell", "NellHammer", \
"ObliqueMercator", "OrteliusOval", "PavlovCylindrical", \
"PeirceQuincuncial", "PutninsP1", "PutninsP1Prime", "PutninsP2", \
"PutninsP3", "PutninsP3Prime", "PutninsP4Prime", "PutninsP5", \
"PutninsP5Prime", "PutninsP6", "PutninsP6Prime", "QuarticAuthalic", \
"RectangularPolyconic", "Robinson", "Shield", "SinuMollweide", \
"Sinusoidal", "SnyderMinimumError", "SpaceObliqueMercator", \
"Stereographic", "Times", "TissotConicEqualArea", "ToblerI", \
"ToblerII", "TransverseMercator", "TrapezoidalMercator", \
"TrystanEdwards", "UPSNorth", "UPSSouth", "UrmayevCylindricalII", \
"UrmayevCylindricalIII", "UrmayevI", "UrmayevPseudoCylindrical", \
"VanDerGrinten", "VanDerGrintenII", "VanDerGrintenIII", \
"VanDerGrintenIV", "WagnerI", "WagnerII", "WagnerIII", "WagnerIV", \
"WagnerIX", "WagnerV", "WagnerVI", "WagnerVII", "WagnerVIII", \
"WerenskioldI", "Werner", "Wiechel", "WinkelI", "WinkelII", \
"WinkelSnyder", "WinkelTripel"}
Now many of these projections actually have a few parameters. Of particular importance here is a parameter called "Centering". The thing is, we don't want WL to try to be clever in picking some center for the projection if that center isn't going to be consistent. For now, we can just set the "Centering" for each projection to GeoPosition[{0,0}].
GeoListPlot[Labeled[meshCentroid[r, {#,"Centering"->{0,0}}],#]&/@projs, GeoRange->"AdministrativeDivision", ImageSize->1000]
![lots of projection centers][6]
So it looks like we have a bit of a spread. Most of the projections give the center of the US in Kansas, with a few in Nebraska, and just a couple of oddballs in Wyoming.
That "Centering" parameter is causing some silly things though. For some projections, "Centering"->GeoPosition[{0,0}] is fine. But for others, we get things like this.
projectMesh[r, {"LambertAzimuthal", "Centering"->GeoPosition[{0,0}]}]
![US with the LambertAzimuthal projection centered at {0,0}][7]
What we really want is to center the projection on the center of the US, but of course we don't know the center of the US (or at least there is no unique center). There are a few ways we could try to get around this. I like the following method. We start with "Centering"->GeoPosition[{0,0}] and find the center of the US with meshCentroid. Then we run it again, this time with "Centering" set to the output of that. Basically, we run this recursively where we set the "Centering" to the last output again and again until it stabilizes. This can be expressed nicely with NestWhile or NestWhileList.
geoPositionFilter[g:GeoPosition[{__?NumberQ}]] := g
geoPositionFilter[_] := Missing[]
centroidPaths =
AssociationMap[Function[proj,
NestWhileList[
Quiet@Check[geoPositionFilter@meshCentroid[r,{proj,"Centering"->#}],Missing[]]&,GeoPosition[{0,0}],
!MissingQ[#2]&&GeoDistance[#1,#2]>Quantity[0.1,"Miles"]&,
2,
10
]],
projs];
centroids = centroidPaths[[All, -1]];
We can take a look at the series of projections we get while it stabilizes for some projection.
projectMesh[r, {"LambertAzimuthal","Centering"->#}]&/@centroidPaths["LambertAzimuthal"]
![LambertAzimuthal projection stabilizing][8]
And we can look at the series of centers as it stabilizes.
GeoListPlot[centroidPaths["LambertAzimuthal"], Joined -> True, ImageSize -> 700]
![LambertAzimuthal centers while stabilizing][9]
It turns out some projections stabilize faster than others. This log-scaled plot shows the distances between subsequent centers for all of our projections.
ListLogPlot[DeleteCases[Values[GeoDistanceList /@ DeleteMissing[centroidPaths, 2]], {}], Joined -> True, AxesLabel -> {"iterations", "miles"}]
![plot of stabilization with all projections][10]
This is good, but we have to be a bit careful sometimes. For example, when including Alaska and Hawaii, a projection called "AugustEpicycloidal" never stabilizes. That's that purple line at the top of this next plot.
ListLogPlot[DeleteCases[Values[GeoDistanceList /@ DeleteMissing[centroidPaths, 2]], {}], Joined -> True, AxesLabel -> {"iterations", "miles"}]
![plot of stabilization with all projections for all 50 states][11]
It turns out this projection bounces between two points in the Pacific forever.
GeoListPlot[centroidPaths["AugustEpicycloidal"], Joined -> True, ImageSize -> 700]
![AugustEpicycloidal being unhelpful][12]
The points are far enough apart that we shouldn't just pick one, so we should just remove this projection when we include Alaska or Hawaii.
Finally, let's see the map for the centers calculated like this.
GeoListPlot[KeyValueMap[Labeled[#2, #1] &, DeleteMissing[centroids]], GeoRange -> "AdministrativeDivision", ImageSize -> 1000]
![all projection centers without "Centering"->{0,0}][13]
We get a much tighter clustering! This is nice, but it still doesn't give us a conclusive answer. There are ways we could find the "center" (again, not well defined) of these points, but such a point would be partly determined by which projections we consider. Instead, we can look at some projection-independent approaches.
Without Projections
=======
## 3D Centroids ##
The first projection-independent method that comes to mind for me involves looking at the region as an unprojected polygon in 3D. That is, getting the shape of the US like an orange peel wrapped around the Earth. Here is a handy function that uses GeoPositionXYZ to create such a shape.
countryRegion3D[meshI:(_MeshRegion|_BoundaryMeshRegion)] :=
Block[{mesh = Quiet@TriangulateMesh@DiscretizeRegion@meshI},
If[Head[mesh] =!= MeshRegion,
Missing[],
MeshRegion[GeoPositionXYZ[GeoPosition[Reverse/@MeshCoordinates[mesh]]][[1]], MeshCells[mesh,2]]
]
]
countryRegion3D[r]
![the conterminous US in 3D][14]
This is obviously projection independent. Now, to find the center of this, we can just find the centroid.
In[]:= RegionCentroid@countryRegion3D@r
Out[]= {-735688., -4.7425*10^6, 3.98579*10^6}
This centroid is in the coordinates of GeoPositionXYZ, so we can convert it back to a GeoPosition.
In[]:= GeoPosition@GeoPositionXYZ@RegionCentroid@countryRegion3D@r
Out[]= GeoPosition[{39.9031, -98.8178, -130866.}]
This point, though, is inside the Earth. When we converted it back to a GeoPosition, we could see it has a third parameter. This parameter is the height, which in this case is negative (because the point is inside the Earth). By ignoring that height, we are in essence drawing a line from the center of the Earth through that point and seeing where that line intersects the surface. This is what happens when we display this point with GeoListPlot.
GeoListPlot[GeoPosition@GeoPositionXYZ@RegionCentroid@countryRegion3D@r, GeoRange -> Entity["Country", "UnitedStates"], ImageSize -> 1000]
![center of the conterminous US with the unprojected centroid method][15]
## Median Coordinates ##
Oscar S. Adams suggests another approach before dismissing it. He suggests that you could find, essentially, the median latitude and longitude. That is, find the latitude such that if you cut the conterminous US along it, both resulting sides would have about the same area. The same operation can be performed with longitude.
We have to be careful about how we do this, because many projections don't preserve area, and many don't represent lines of latitude and longitude as straight horizontal and vertical lines. We can start, then, by projecting the region of interest into a projection that has both properties: "CylindricalEqualArea".
equalAreaR = projectMesh[r, "CylindricalEqualArea"]
![conterminous US in the CylindricalEqualArea projection][16]
We can now find the vertical and horizontal lines that cut this region into even halves.
clippedArea[r_, b_?NumberQ] := NIntegrate[Boole[x < b], {x, y} \[Element] r]
halfClipLocation[r_] := With[{targetArea = Area[r]/2}, NArgMin[Abs[clippedArea[r, b] - targetArea], {b, Sequence @@ RegionBounds[r][[1]]}]]
This finds the vertical line that cuts the conterminous US in half.
In[]:= medianX = halfClipLocation[equalAreaR]
Out[]= -98.0503
And this finds the horizontal line that cuts the conterminous US in half by essentially switching latitude and longitude and performing the same operation.
In[]:= medianY = halfClipLocation[MeshRegion[Reverse /@ MeshCoordinates@equalAreaR, MeshCells[equalAreaR, 2]]]
Out[]= 36.7212
Now, these numbers are in the coordinate system of the CylindricalEqualArea projection. We can use GeoGridPosition to convert them into a GeoPosition.
In[]:= GeoPosition@GeoGridPosition[{medianX, medianY}, "CylindricalEqualArea"]
Out[]= GeoPosition[{39.8594, -98.0503}]
GeoListPlot[GeoPosition@GeoGridPosition[{medianX, medianY}, "CylindricalEqualArea"], GeoRange -> Entity["Country", "UnitedStates"], ImageSize -> 1000]
![center of the conterminous US with the median coordinates method][17]
The problem with this, as Adams mentions, is that the result is not independent of the angle at which you cut. That is, if instead of cutting along a line that runs north-south and another that runs east-west, you cut along lines shifted 45\[Degree] from those, then you could get a different answer. Put differently, if the conterminous US were rotated 45\[Degree] and sitting in the middle of the Pacific, the distance from this center point to, say, Chicago would not necessarily be the same as it is now. This property seems undesirable because it depends on the orientation of the region on the Earth, when really the center should depend only on the shape of the region.
## Central Features ##
There is one final method we will consider. This involves finding the central feature of the US. The central feature of a collection of points, as computed by the conveniently named function CentralFeature, is the point which minimizes the sum of the distances from each of those points to a central point. That is, it tries to find a point that is close to all the specified points. This is in many ways similar to the Riemannian center of mass, which is defined in the same way but for the squares of the distances. Now, we have a region, not a collection of points, so we use RandomPoint to sample points that we can feed to CentralFeature. CentralFeature already knows how to deal with GeoPositions, so this is all we have to do.
In[]:= centralFeature = CentralFeature@GeoPosition[Reverse /@ RandomPoint[r, 1000]]
Out[]= GeoPosition[{40.2308, -99.2463}]
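As a sanity check of that definition (my addition, assuming I read the documentation correctly), CentralFeature over plain coordinates can be reproduced by hand with MinimalBy:

```mathematica
pts = RandomReal[{0, 1}, {200, 2}];
(* pick the sample point with minimal total distance to all the others *)
byHand = First@MinimalBy[pts, Function[p, Total[EuclideanDistance[p, #] & /@ pts]]];
byHand == CentralFeature[pts] (* should be True *)
```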
Arguably, we really want to do this with an integral over the whole region instead of with sampled points, no matter how numerous; but realistically we would perform such an integral with Method -> "MonteCarlo", so this is not all that different. Here, we sampled the US with 1000 points. When put on a map, 1000 points seem to cover everything pretty well. However, this actually doesn't give consistent results.
features = Table[CentralFeature@GeoPosition[Reverse /@ RandomPoint[r, 1000]], 100];
GeoListPlot[features, GeoRange -> Entity["Country", "UnitedStates"], ImageSize -> 1000]
![central features calculated with 1000 sample points][18]
With 2000 points it is more consistent, but the centers still spread over a rather large area.
features2 = Table[CentralFeature@GeoPosition[Reverse /@ RandomPoint[r, 2000]], 100];
GeoListPlot[features2, GeoRange -> Entity["Country", "UnitedStates"], ImageSize -> 1000]
![central features calculated with 2000 sample points][19]
Overnight, I ran a similar computation with 15000 points. The clustering here is tight enough that any one of these points is fairly precise, but just for fun, we can take the CentralFeature of these central points to get a final center of the US.
features3 = Table[Echo@n; CentralFeature@GeoPosition[Reverse /@ RandomPoint[r, 15000]], {n, 100}];
features3 >> "~/Desktop/features3.wl";
GeoListPlot[features3, GeoRange -> Entity["Country", "UnitedStates"], ImageSize -> 1000]
![central features calculated with 15000 sample points][20]
GeoListPlot[features3, GeoRange -> "AdministrativeDivision", ImageSize -> 1000]
![central features calculated with 15000 sample points closeup][21]
I personally find the method with the unprojected polygons most convincing: it is easy and efficient to compute a precise location, and that location is completely projection-independent. The runner-up for me is the CentralFeature method, but its complexity and the approximations it requires make it a bit harder to judge.
Finally, here is a map showing all of the centers of the conterminous US.
features3 = Get["~/Desktop/features3.wl"];
GeoListPlot[{
Values@DeleteMissing[centroids],
{GeoPosition@GeoPositionXYZ@RegionCentroid@countryRegion3D@r},
{GeoPosition@GeoGridPosition[{medianX,medianY},"CylindricalEqualArea"]},
{CentralFeature[features3]}},
GeoRange->"AdministrativeDivision",ImageSize->1000,
PlotLegends->{"projection centers","3D centroid","median coordinate","central feature"}]
![all the centers we considered for the conterminous US][22]
[1]: https://www.ngs.noaa.gov/PUBS_LIB/GeoCenter_USA1.pdf
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.05.25PM.png&userId=24497
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.12.01PM.png&userId=24497
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.10.50PM.png&userId=24497
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.15.45PM.png&userId=24497
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.20.10PM.png&userId=24497
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.21.39PM.png&userId=24497
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.26.32PM.png&userId=24497
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.27.26PM.png&userId=24497
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.28.54PM.png&userId=24497
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.31.53PM.png&userId=24497
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.33.03PM.png&userId=24497
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.34.16PM.png&userId=24497
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.37.22PM.png&userId=24497
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.39.46PM.png&userId=24497
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.41.02PM.png&userId=24497
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.44.29PM.png&userId=24497
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.48.42PM.png&userId=24497
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.49.49PM.png&userId=24497
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.51.44PM.png&userId=24497
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.52.40PM.png&userId=24497
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-24at8.55.33PM.png&userId=24497Christopher Wolfram2017-03-25T03:56:31ZHow to adjust an image by sculpting its gamut
http://community.wolfram.com/groups/-/m/t/1038958
Meet a colorful character, [Mr. Helmet Hornbill][1]:
![enter image description here][2]
Mr. Hornbill is likely pleased with how his pictures get adjusted day after day. Ah... artistically pleasant photo retouching. Well, sure, but not today, Mr. Hornbill.
This picture of Mr. Hornbill was already stretched so its Red, Green and Blue color values range from 0 to 1. Or maybe Mr. Hornbill did not need anything like that, because maybe he is naturally stretched:
img = Import@"https://cdn.pixabay.com/photo/2013/08/23/14/17/animal-175033_1280.jpg";
In[222]:= ImageMeasurements[img, {"Min", "Max"}]
Out[222]= {{0., 0., 0.}, {1., 1., 1.}}
Anyhow, things get nicer and sharper for Mr. Hornbill before they get worse. Compositing the image with its histogram-flattened version looks flattering:
ImageCompose[img, {HistogramTransform[img], .5}]
![enter image description here][3]
Switching colors with a rooster by transferring color distributions channel by channel: not so nice. (The images' backgrounds contribute to the distributions.)
HistogramTransform[ColorConvert[img, "LAB"], ColorConvert[rooster, "LAB"]]
![enter image description here][4]
And now... Mr. Hornbill will serve my purpose of demonstrating how to change the 3D distribution of his colors in the Lab space. The target 3D distribution? A non-pleasing one: the sRGB colorspace gamut, treated as if all of its colors were equiprobable.
This is where we start. In the final image, the colors will spread uniformly over the entire volume (the sRGB gamut).
ChromaticityPlot3D[{"sRGB", img}]
![enter image description here][5]
How can I do that to Mr. Hornbill? Step by step. Like a sculptor, I reshape the color distribution along each of the 3 dimensions, then rotate the distribution randomly and reshape again. And again, until convergence. Mr. Hornbill might say he is the victim of a multidimensional PDF transfer by iterative reshaping of random marginal histograms.
Before that, I hack my way to defining a 3D, empirical, uniform distribution for the sRGB gamut. Once I have grabbed points on the 6 "faces" of the gamut, I compute the convex hull, then sample the 3D space around the convex hull on a grid, retaining only the grid points that fall inside the hull.
c = ChromaticityPlot3D["sRGB", "LAB"];
points3D = c[[1, 1, All, 1]];
chm = ConvexHullMesh[Flatten[points3D, 1]];
ptsinside = {};
mf = RegionMember[chm];
(* pspec holds the grid iterators; it was omitted from the original post, so the
ranges and step below are a guess over the Lab bounding box *)
pspec = {{l, 0, 100, 2}, {a, -110, 110, 2}, {b, -110, 110, 2}};
Do[
If[mf[{l, a, b}],
AppendTo[ptsinside, {l, a, b}]
],
Evaluate[Sequence @@ pspec]
];
This gives about 62 thousand points. And now, the iterative sculpting:
multidimensionalReshape[source_, ref_, \[Epsilon]_: 10^-10, maxiterations_: 200] :=
FixedPoint[
With[{R = randomRotationMatrix[Length[source]]},
Transpose[R].HistogramTransform[R.#, R.ref]] &,
source, maxiterations,
SameTest -> (RootMeanSquare[Norm /@ Transpose[#1 - #2]] < \[Epsilon] &)];
randomRotationMatrix[n_] :=
If[Det@# == 1, #, randomRotationMatrix[n]] &[
Transpose@#.DiagonalMatrix[Sign@Diagonal@#2] & @@
QRDecomposition@RandomReal[NormalDistribution[], {n, n}]];
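A quick check of my own (not in the original post) that randomRotationMatrix really returns proper rotations, i.e. orthogonal matrices with determinant +1:

```mathematica
R = randomRotationMatrix[3];
(* orthogonality and a determinant of +1, so no reflections sneak in *)
{OrthogonalMatrixQ[R], Det[R] == 1} (* expect {True, True} *)
```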
pix = Flatten /@ ImageData[ColorConvert[img, "LAB"], Interleaving -> False];
new = multidimensionalReshape[pix, Transpose@ptsinside];
adjusted = Image[Partition[#, First@ImageDimensions@img] & /@ new, Interleaving -> False, ColorSpace -> "LAB"];
The result looks good enough: the pixels are spread all around inside the volume. That is, we have sculpted the initial convex hull into the shape of the sRGB gamut.
![enter image description here][6]
Mr. Hornbill stays cool under his new colors:
![enter image description here][7]
[1]: https://cdn.pixabay.com/photo/2013/08/23/14/17/animal-175033_1280.jpg
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3826animal-175033_1920.jpg&userId=24957
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-25at01.32.27.png&userId=24957
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-25at01.40.42.png&userId=24957
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-25at01.51.53.png&userId=24957
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-25at02.12.26.png&userId=24957
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-25at02.16.04.png&userId=24957Matthias Odisio2017-03-25T07:17:27Z[GIF] Elaborating on Arrival's Alien Language, Part I., II. & III.
http://community.wolfram.com/groups/-/m/t/1034626
I recently watched "Arrival", and thought that some of the dialogue sounded Wolfram-esque. Later, I saw the following blog post:
[Quick, How Might the Alien Spacecraft Work?][1]
Along with many others, I enjoyed the movie. The underlying artistic concept for the alien language reminded me of decade-old memories of a book by Stephen Addiss, [Art of Zen][2]. Asian-influenced symbolism is an interesting place to start building a sci-fi concept, even for western audiences.
I also found Christopher Wolfram's broadcast and the associated files:
[Youtube Broadcast][3]
[Github Files ( with image files ) ][4]
Thanks for sharing! More science fiction, yes!
I think the constraint of circular logograms could be loosened. This leads to interesting connections with the theory of functions, which I think the aliens would probably know about.
The following code takes an alien logogram as input and outputs a deformation according to a do-it-yourself formulation of the pendulum elliptic functions:
![Human Animation][5]
## $m=2$ Inversion Coefficients ##
MultiFactorial[n_, nDim_] := Times[n, If[n - nDim > 1, MultiFactorial[n - nDim, nDim], 1]]
GeneralT[n_, m_] := Table[(-m)^(-j) MultiFactorial[i + m (j - 1) + 1, m]/ MultiFactorial[i + 1, m], {i, 1, n}, {j, 1, i}]
a[n_] := With[{gt = GeneralT[2 n, 2]}, gt[[2 #, Range[#]]] & /@ Range[n] ]
## Pendulum Values : $2(1-\cos(x))$ Expansion Coefficients ##
c[n_ /; OddQ[n]] := c[n] = 0;
c[n_ /; EvenQ[n]] := c[n] = 2 (n!) (-2)^(n/2)/(n + 2)!;
## Partial Bell Polynomials ##
Note: These polynomials are essentially the same as **BellY** (a hilarious naming convention), but optimized via recursion. See the timing tests below.
B2[0, 0] = 1;
B2[n_ /; n > 0, 0] := 0;
B2[0, k_ /; k > 0] := 0;
B2[n_ /; n > 0, k_ /; k > 0] := B2[n, k] = Total[
Binomial[n - 1, # - 1] c[#] B2[n - #, k - 1] & /@
Range[1, n - k + 1] ];
## Function Construction ##
BasisT[n_] := Table[B2[i, j]/(i!) Q^(i + 2 j), {i, 2, 2 n, 2}, {j, 1, i/2}]
PhaseSpaceExpansion[n_] := Times[Sqrt[2 \[Alpha]], 1 + Dot[MapThread[Dot, {BasisT[n], a[n]}], (2 \[Alpha])^Range[n]]];
AbsoluteTiming[CES50 = PhaseSpaceExpansion[50];] (* faster than 2(s) *)
Fast50 = Compile[{{\[Alpha], _Real}, {Q, _Real}}, Evaluate@CES50];
## Image Processing ##
Note: This method is a hack from ".jpg" to a sort-of vector drawing. I haven't tested the V11.1 vectorization functionality, but it seems like this could be a way to process all JPGs and output a file of vector polygons. Anyone?
LogogramData = Import["Human1.jpg"];
Logogram01 = ImageData[ColorNegate@Binarize[LogogramData, .9]];
ArrayPlot@Logogram01;
Positions1 =
Position[Logogram01[[5 Range[3300/5], 5 Range[3300/5]]], 1];
Graphics[{Disk[#, 1.5] & /@ Positions1, Red,
Disk[{3300/5/2, 3300/5/2}, 10]}];
onePosCentered =
N[With[{cent = {3300/5/2, 3300/5/2} }, # - cent & /@ Positions1]];
radii = Norm /@ onePosCentered;
maxR = Max@radii;
normRadii = radii/maxR;
angles = ArcTan[#[[2]], #[[1]]] & /@ onePosCentered;
Qs = Cos /@ angles;
## Constructing and Printing Image Frames ##
AlienWavefunction[R_, pixel_, normRad_, Qs_, angles_] := Module[{
deformedRadii = MapThread[Fast50, {R normRad, Qs}],
deformedVectors = Map[N[{Cos[#], Sin[#]}] &, angles],
deformedCoords
},
deformedCoords =
MapThread[Times, {deformedRadii, deformedVectors}];
Show[ PolarPlot[ Evaluate[
CES50 /. {Q -> Cos[\[Phi]], \[Alpha] -> #/10} & /@
Range[9]], {\[Phi], 0, 2 Pi}, Axes -> False,
PlotStyle -> Gray],
Graphics[Disk[#, pixel] & /@ deformedCoords], ImageSize -> 500]]
AbsoluteTiming[ OneFrame =
AlienWavefunction[1, (1 + 1)* 1.5/maxR, normRadii, Qs, angles]
](* about 2.5 (s)*)
![Alien Pendulum][6]
## Validation and Timing ##
In this code, we're using the magic algorithm to get up to about order $100$ in the half energy, order $50$ in the energy. I did prove that $m=1$ is equivalent to other published forms, but I haven't found anything in the literature about $m=2$, and I think the proof will take more time, effort, and insight(?). For applications, we just race ahead without worrying too much, but we do check against standard, known expansions:
EK50 = Normal@ Series[D[ Expand[CES50^2/2] /. Q^n_ :> (1/2)^n Binomial[n, n/2], \[Alpha]], {\[Alpha], 0, 50}];
SameQ[Normal@ Series[(2/Pi) EllipticK[\[Alpha]], {\[Alpha], 0, 50}], EK50]
Plot[{(2/Pi) EllipticK[\[Alpha]], EK50}, {\[Alpha], .9, 1}, ImageSize -> 500]
Out[]= True
![Approximation Validity][7]
This plot gives an idea of the approximation's validity via the time integral over $2\pi$ radians in phase space. Essentially, even the time converges up to, say, $\alpha = 0.92$. Most of the divergence is tied up in the critical point, which is difficult to notice in the phase-space drawings above.
Also compare the timing of function evaluation:
tDIY = Mean[ AbsoluteTiming[Fast50[.9, RandomReal[{0, 1}]] ][[1]] & /@ Range[10000]];
tMma = Mean[AbsoluteTiming[JacobiSN[.9, RandomReal[{0, 1}]] ][[1]] & /@ Range[10000]];
tMma/tDIY
In the region of sufficient convergence, the Mathematica function **JacobiSN** is almost 20 times slower. The CES radius also requires a function call to **JacobiCN**, so an output-equivalent **AlienWavefunction** algorithm using built-in Mathematica functions would probably take at least 20 times as long to produce. When computing hundreds of images this is a noticeable slowdown, something to avoid!
Also compare the time to evaluate the functional basis via the Bell polynomials:
BasisT2[n_] := Table[BellY[i, j, c /@ Range[2 n]]/(i!) Q^(i + 2 j), {i, 2, 2 n, 2}, {j, 1, i/2}];
SameQ[BasisT2[20], BasisT[20]]
t1 = AbsoluteTiming[BasisT[#];][[1]] & /@ Range[100];
t2 = AbsoluteTiming[BasisT2[#];][[1]] & /@ Range[25];
ListLinePlot[{t1, t2}, ImageSize -> 500]
![Series Inverse][8]
The graph shows quite clearly that careful evaluation via the recursion relations reduces the complexity of the inversion algorithm to polynomial time, $O(n^2)$, in one special example where the forward series expansion coefficients have known numeric values.
## Conclusion ##
We show a proof of concept that alien logograms admit deformations that preserve the cycle topology. Furthermore, we provide an example calculation where the "human" logogram couples to a surface. Deformation corresponds to a scale transformation of the logogram along the surface, and each deformation associates with an energy.
Invoking the pendulum analogy gives the energy a physical meaning in terms of gravity, but we are not limited to classical examples alone. The idea extends to arbitrary surfaces in two, three or four dimensions, as long as the surfaces have local extrema. Around the extrema there will exist cycle contours, which we can inscribe with the alien logograms. This procedure leads readily to large-form compositions, especially if the surface has many extrema. Beyond Fourier methods, we might also apply spherical and hyperspherical harmonics to get around the limitation of planarity.
The missing proof... Maybe later. LOL! ~ ~ ~ ~ Brad
And in the Fanfiction Voice:
Physicist : "It should be no surprise that heptapod speech mechanism involves an arbitrary deformation of the spacetime manifold."
Linguist : "Space-traveling aliens, yes, of course they know math and physics, but Buddhist symbology, where'd they learn that?"
[1]: http://blog.stephenwolfram.com/2016/11/quick-how-might-the-alien-spacecraft-work/
[2]: https://books.google.com/books/about/Art_of_Zen.html?id=4jGEQgAACAAJ
[3]: https://www.youtube.com/watch?v=8N6HT8hzUCA&t=4992s
[4]: https://github.com/WolframResearch/Arrival-Movie-Live-Coding
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Deformation.gif&userId=234448
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=AlienPendulum.png&userId=234448
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=EllipticK.png&userId=234448
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=BellPolynomial.png&userId=234448Brad Klee2017-03-18T20:23:59ZThe Chaos Game - part II
http://community.wolfram.com/groups/-/m/t/1039030
![enter image description here][1]
A couple of weeks ago I posted my first [The Chaos Game post][2]. This is a continuation of that post, exploring some new ideas. Please make sure to read [the previous one][3] first.
## Colors ##
The first thing I wanted to try after my previous post was coloring. Let's color each of the points based on the vertex it jumps towards:
ClearAll[sequence]
sequence[n_,m_]:=RandomChoice[Range[n],m]
Table[
circlepoints=N@CirclePoints[n];
seq=sequence[n,50000];
pts=Rest[FoldList[(#1+circlepoints[[#2]])/2&,RandomChoice[circlepoints],seq]];
plotdata=Transpose[{pts,seq}];
plotdata=SortBy[GatherBy[plotdata,Last],#[[1,2]]&];
plotdata=plotdata[[All,All,1]];
colors=ColorData[97]/@Range[n];
Rasterize[Graphics[{PointSize[0.001],Riffle[colors,Point/@plotdata],FaceForm[],EdgeForm[Black],RegularPolygon[n],PointSize[0.01],Riffle[colors,Point/@circlepoints]},ImageSize->{400,400},PlotRange->1.1],"Image",ImageSize->{400,400},RasterSize->{400,400}]
,
{n,3,8}
] // Partition[#,3]& // ImageAssemble
![enter image description here][4]
This explains why the regular triangle and the square show such unique behavior: they do not 'blend'. To be more precise, the triangle only excludes space, and the square exactly covers the plane again. For higher-order regular polygons you can see that there is overlap, which creates high- and low-density regions and thereby a bunch of patterns.
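As a side note of my own (not in the original post): the square case can be understood coordinate-wise. CirclePoints[4] gives the vertices (±1/√2, ±1/√2), so each coordinate independently performs x -> (x + s)/2 with a random sign s = ±1/√2. The limit points are random binary expansions, whose distribution is exactly uniform, hence no visible pattern. A quick check:

```mathematica
(* one coordinate of the square chaos game; its stationary distribution is uniform *)
xs = NestList[(# + RandomChoice[{-1., 1.}/Sqrt[2]])/2 &, 0., 10^5];
Histogram[Rest[xs]] (* flat on (-1/Sqrt[2], 1/Sqrt[2]) *)
```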
For the case of restricted jumping, as we did last time, we can also apply the coloring; here is the modified code:
ClearAll[CreateSequence,CreateSequenceImage]
CreateSequence[n_,steps_,choices_]:=Mod[Accumulate[RandomChoice[choices,steps-1]~Prepend~1],n,1]
CreateSequenceImage[n_,m_,choices_]:=Module[{seq,circlepoints,pts,plotdata,colors},
seq=CreateSequence[n,m,choices];
circlepoints=N@CirclePoints[n];
pts=Rest@FoldList[(#1+circlepoints[[#2]])/2&,First[circlepoints],seq];
plotdata=Transpose[{pts,seq}];
plotdata=SortBy[GatherBy[plotdata,Last],#[[1,2]]&];
plotdata=plotdata[[All,All,1]];
colors=ColorData[97]/@Range[n];
Rasterize[Graphics[{FaceForm[],EdgeForm[Black],RegularPolygon[n],PointSize[0.001],Riffle[colors,Point/@plotdata],PointSize[0.03],Riffle[colors,Point/@circlepoints]},ImageSize->300,PlotRange->1.1],"Image",RasterSize->{300,300}]
]
Let's have a look at all the possible jumping-subsets for hexagons:
Grid[Join @@@
Partition[{#, CreateSequenceImage[6, 10^4, #]} & /@
Subsets[Range[6], {1, \[Infinity]}], UpTo[4]], Frame -> All]
![enter image description here][5]
Some really nice patterns are now created!
## Fractional jumping ##
Up to now we have always jumped halfway; let's change that and see what happens. I will introduce alpha, the fraction of the distance that we jump. Historically it has always been set to 0.5 (halfway). In my definition, 0 means not moving, and 1 means going all the way to the next point. The code is easily modified:
ClearAll[sequence]
sequence[n_,m_]:=RandomChoice[Range[n],m]
FactorJump[n_,m_,\[Alpha]_]:=Module[{circlepoints,seq,pts,counts,img,bg},
circlepoints=N@CirclePoints[n];
seq=sequence[n,m];
pts=FoldList[(1-\[Alpha])#1+\[Alpha] circlepoints[[#2]]&,First[circlepoints],seq];
counts=Transpose@BinCounts[pts,{-1.1,1.1,0.005},{-1.1,1.1,0.005}];
counts=Reverse[counts];
img=Image[1-counts/Max[counts]];
bg=Graphics[Graphics[{FaceForm[],EdgeForm[Black],RegularPolygon[n],Text[NumberForm[\[Alpha],{\[Infinity],2}],{0,1.05}]},ImageSize->500,PlotRange->1.1]];
bg=Rasterize[bg,"Image",RasterSize->ImageDimensions[img],ImageSize->ImageDimensions[img]];
ImageMultiply[img,bg]
]
Note that I also included plotting the density of the points using **BinCounts**, which gives smooth images. Let's try it out with a new alpha:
FactorJump[3, 10^7, 0.3]
![enter image description here][6]
Pretty nifty! Let's make a movie, changing alpha gradually from 0 to 1:
n = 3;
imgs = Table[FactorJump[n, 3 10^5, \[Alpha]], {\[Alpha], 0, 1, 0.01}];
Export[ToString[n] <> "factor.gif", imgs, "DisplayDurations" -> 1/25.0]
![enter image description here][7]
Now for squares:
![enter image description here][8]
pentagons:
![enter image description here][9]
Of course we are not limited to the range 0 to 1; we can go beyond. (Negative alpha means you run away, quickly leaving the screen, so that is not a good idea.) Here are pentagons, with alpha up to 1.8:
![enter image description here][10]
## Distance jumping ##
Rather than jumping a certain fraction, let's jump a specific distance in the direction of our point. Again we modify the code quite easily:
ClearAll[sequence,DistanceJump]
sequence[n_,m_]:=RandomChoice[Range[n],m]
DistanceJump[n_,m_,d_]:=Module[{circlepoints,seq,pts,counts,img,bg,reg,size},
circlepoints=N@CirclePoints[n];
seq=sequence[n,m];
pts=FoldList[#1+d Normalize[circlepoints[[#2]]-#1]&,First[circlepoints],seq];
size=3;
counts=Transpose@BinCounts[pts,{-size,size,size/250.0},{-size,size,size/250.0}];
counts=Reverse[counts];
reg=Quantile[Join@@counts,0.999];
img=Image[1- counts/reg];
bg=Graphics[Graphics[{FaceForm[],EdgeForm[Black],RegularPolygon[n],Text[NumberForm[d,{\[Infinity],2}],{0,-1.05}]},ImageSize->500,PlotRange->size]];
bg=Rasterize[bg,"Image",RasterSize->ImageDimensions[img],ImageSize->ImageDimensions[img]];
ImageMultiply[img,bg]
]
Let's try it out:
DistanceJump[5, 10^6, 0.5]
![enter image description here][11]
Again we see patterns emerge; let's make another movie, varying the distance d:
j=0;
Dynamic[j]
n=3;
CloseKernels[];
LaunchKernels[4];
DistributeDefinitions[DistanceJump,n]
SetSharedVariable[j];
imgs=ParallelTable[j++;DistanceJump[n,10^6,d],{d,0.1,3,0.01}];
Export[ToString[n]<>"distance.gif",imgs,"DisplayDurations"->1/25.0]
![enter image description here][12]
and for a pentagon:
![enter image description here][13]
Really nice visualizations, with very complicated patterns emerging from very simple equations! I hope you enjoyed this little exploration.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=26415distance.gif&userId=73716
[2]: http://community.wolfram.com/groups/-/m/t/1025180
[3]: http://community.wolfram.com/groups/-/m/t/1025180
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=colored.png&userId=73716
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6subsetcolored.png.png&userId=73716
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=factorjumptest1.png&userId=73716
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=13373factor.gif&userId=73716
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=93424factor.gif&userId=73716
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=102355factor.gif&userId=73716
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=107365Largefactor.gif&userId=73716
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=distancejumptest1.png&userId=73716
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=68463distance.gif&userId=73716
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=26415distance.gif&userId=73716Sander Huisman2017-03-24T23:06:24ZFeedforward network
http://community.wolfram.com/groups/-/m/t/1039118
Dear all,
I'm looking to build a feedforward neural network with 2 inputs and 1 output, 1 hidden layer, a sigmoid activation function, and backpropagation.
How can I do that?
Training and test data:
data = {{1657, 0.42} -> 2545, {1466, 0.80} -> 2545, {1410, 0.91} ->
1195, {1731, 0.27} -> 4995, {1647, 0.44} -> 3925, {1533, 0.66} ->
2765, {1438, 0.85} -> 2115, {1586, 0.56} -> 3670, {1569, 0.59} ->
2375, {1317, 1.09} -> 1545, {1560, 0.61} -> 2765, {1490, 0.75} ->
2205, {1377, 0.97} -> 1515, {1333, 1.06} -> 1300, {1355, 1.02} ->
1570, {1619, 0.49} -> 2935, {1455, 0.82} -> 1715, {1463, 0.80} ->
1770, {1296, 1.14} -> 1130};JNO jean2017-03-24T19:01:53ZTesting for beauty
http://community.wolfram.com/groups/-/m/t/1037946
What do you think of the idea of automatically judging whether a piece of data is beautiful? This could mean the data in an image (ImageData), or maybe the result of a computation (e.g. CellularAutomaton), or anything, although I am thinking primarily of a list or an array of numbers.
My first thought was that there are many filters for image processing, but I don't know which might be useful. The next thing I think of is mathematical transforms. For example, taking the Fourier or Hadamard transform you expect the coefficients to decay, and if they don't then that would not be nice.
This code deletes the constant term and computes a rough measure of the variance, using Mean as a shortcut for counting the 0's and 1's (the coefficients closer to the min and to the max, respectively) without knowing the length or dimension. (Note that Fourier does not assume the size is a power of 2, but Hadamard does.)
FourierBeauty[list_] := Mean[1. - Round[Rescale[Abs[Rest[Flatten[Fourier[list]]]]]]]
Maybe for an image this is not bad. Here is what it picks out of the ExampleData test images:
Grid[{#, ExampleData[#]} & /@
MaximalBy[ExampleData["TestImage"],
FourierBeauty[
ImageData[
Binarize[
ImageResize[
ColorConvert[ExampleData[#], "Grayscale"], {64, 64}]]]] &],
Frame -> All]
![enter image description here][1]
but here are the CAs it likes the most.
MaximalBy[Range[0, 255],
Sum[FourierBeauty[CellularAutomaton[#, RandomInteger[1, 2^8], {{0, 2^8 - 1}}]], {100}] &] -> {1, 3, 5, 17, 57, 87, 119, 127}
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fourier-beauty-image.jpg&userId=23275Todd Rowland2017-03-23T02:46:11ZUse a proper optimization function for a profit model?
http://community.wolfram.com/groups/-/m/t/1038414
Hello,
First-time user of Mathematica and I'm really enjoying it! I've created a profit-maximization model for a manufacturing firm with many different variables. I want to figure out a way to tell my model to maximize profit subject to a few constraints by changing only a select subset of the variables. What would be the best function for doing that?
I've tried searching online for an answer, but it's not immediately clear which function would allow me to specify constraints as well as the specific variables I want the optimizer to change in order to optimize. I've attached my model just in case - apologies in advance for the way it's constructed....it's my first one! ha
Thanks,
ScottScott T2017-03-23T20:57:06ZWhy is this integral different when the variable is substituted?
http://community.wolfram.com/groups/-/m/t/1038147
In Mathematica, typing
Integrate[Abs[-0.7 - y]*(1 - Abs[y]), {y, -1, 1}]
yields 0.709, but typing
Integrate[Abs[x - y]*(1 - Abs[y]), {y, -1, 1}] /. x -> -0.7
yields 1/3. I am not experienced with Mathematica but I was hoping someone would be able to explain to me why these give different answers.Alexander Dunlap2017-03-23T18:11:40ZMy “Today's Template” using Mathematica?
http://community.wolfram.com/groups/-/m/t/843579
Is there a way to recreate what I'm currently doing in Microsoft Word every day using the Wolfram Language and/or Mathematica, in a printable CDF format? It's fairly simple and straightforward. It is a template I use and print for a magnetic whiteboard that I use to do videos.
The template has six areas:
1. the year, month and day displayed as a single eight-digit number;
2. the day of the year / the days left in the year;
3. the week number with the day of the week, following the ISO standard;
4. (the middle area) the day of the week, the month and day, and the year;
5. the season and the number of days in that season / the days remaining (with the week of that season);
6. the current quarter and the days in that quarter / the days remaining (with the week of that quarter).
This "today's template" treats Monday as the first day of the week.
For obvious reasons I would like to have it generated automatically; for less obvious reasons I would like it to be formatted and displayed as in the image (if I am able to successfully upload/share it to the [Wolfram Community][1] site).
![Daily Templage][2]
Now I'm sure this is a simple thing to accomplish, but as someone trying to depend less on Microsoft technologies and more on Wolfram's, I would greatly appreciate any direction. While this is something I will eventually attempt to recreate on my own as I become more accomplished in Mathematica, and more specifically the Wolfram Language, I wonder: since the Computable Document Format is just that, computable, would it be possible to have the calculations update automatically within the document itself, triggered by the system clock on my computer?
Or is there some other method someone could suggest? Something I have not yet considered, like doing the calculations in Mathematica or Wolfram|Alpha and having them share the data to be displayed in a template I've already created in some other document type.
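For what it's worth, the raw date arithmetic for a few of the template's fields is short in the Wolfram Language. This is a hedged sketch of my own, not a full solution: the season/quarter week counts, ISO week number, and the exact layout are left out, and the component names passed to DateValue and DateString are the standard documented ones as far as I know.

```mathematica
d = Today;
(* area 1: year-month-day as one eight-digit string, e.g. "20170325" *)
eightDigit = DateString[d, {"Year", "Month", "Day"}];
(* area 2: day of the year / days left in the year *)
dayOfYear = QuantityMagnitude@
    DateDifference[DateObject[{DateValue[d, "Year"], 1, 1}], d] + 1;
daysLeftInYear = QuantityMagnitude@
    DateDifference[d, DateObject[{DateValue[d, "Year"], 12, 31}]];
(* area 6 (partial): the current quarter *)
quarter = DateValue[d, "Quarter"];
(* area 4: day of week, month and day, year *)
middleArea = DateString[d, {"DayName", ", ", "MonthName", " ", "Day", ", ", "Year"}];
Column[{eightDigit, Row[{dayOfYear, "/", daysLeftInYear}], quarter, middleArea}]
```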
[1]: http://community.wolfram.com/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20160421-0839.png&userId=340725Brian Woytovich2016-04-21T16:43:12Z[✓] Why doesn't Cases[{"a", "b"}, "a"~~___] return {"a"} instead of {}?
http://community.wolfram.com/groups/-/m/t/1037935
Why doesn't
Cases[{"a", "b"}, "a"~~___]
return {"a"} instead of {}?
I used three underscores. In more detail, why does In[62] work, but In[61] not?
In[61]:= Cases[{"abcde", "b"}, "a"~~__]
Out[61]= {}
In[62]:= StringMatchQ["abcde", "a" ~~ __]
Out[62]= TruePaul Nielan2017-03-23T01:50:49Z[✓] Interpolate a 3D vector field?
http://community.wolfram.com/groups/-/m/t/1038013
Hello! I'm trying to take a 3D magnetic field and then interpolate it, so I can take the differentials and plot nice magnetic fields. I'm struggling to tell ListInterpolation how to interpolate my data - I get an error reading
> ListInterpolation::inhr: Requested order is too high; order has been reduced to {3,3,3,2}
Which I don't quite understand.
The data is fine because I can plot it correctly using ListVectorPlot3D
Minimum Working Example:
(*define the vectors for each dimension*)
vx = ConstantArray[1, {5, 5, 5}]
vz = ConstantArray[3, {5, 5, 5}]
vy = ConstantArray[2, {5, 5, 5}]
(*put each dimension together to get a 3D vector field - i.e. each point has a 3-vector at it *)
together =
ArrayReshape[Transpose[Flatten /@ {vx, vy, vz}], {5, 5, 5, 3}]
interpolation = ListInterpolation[together]
ListVectorPlot3D[together]
VectorPlot3D[interpolation[x, y, z], {x, 1, 5}, {y, 1, 5}, {z, 1, 5}]
Can Interpolation not work with such a complicated data structure, i.e. must the interpolated values be scalars? Would it then be possible to combine 3 interpolation functions?
Tomi Baikie, 2017-03-23T11:02:27Z

Get proper clipping at PlotRange edge?
http://community.wolfram.com/groups/-/m/t/1037863
I've noticed that in some situations the clipping is not done properly, despite the explicit setting PlotRangeClipping->True:
So I plot some complex function:
Plot[{my function}, {M5, 0, 400}, PlotRange -> {{0, 400}, {0, 1500}},
Frame -> True,
FrameTicks -> {{{0, 300, 600, 900, 1200, 1500},
None}, {{0, 100, 200, 300, 400}, None}},
FrameLabel -> {"Subscript[\[Mu], 5] (MeV)", "m (MeV)"},
PlotStyle -> {{Black, Thickness[.007]}, {Black, Dashed},
Black, {Black, Dashing[Tiny]}},
PlotLabels -> Placed[{"+", "\[Sigma]", "-"}, {Scaled[0.6], Above}],
ImagePadding -> None]
Then I save it as EPS, and I include it in LaTeX:
\begin{figure}[!htb]
\centering
\includegraphics[scale=.7]{plot.eps}
\caption{Digraph.}
\label{fig:digraph}
\end{figure}
And after that I produce the PDF.
It displays as:
[![enter image description here][1]][2]
How may I solve this problem?
EDIT: SIMPLE EXAMPLE
Let
ListPlot[Table[{x, Sin[x]}, {x, -0.5, 6, 0.2}],
PlotRange -> {{0, 5.1}, {-1, 1}}, Frame -> True,
ImageSize -> {200, 150}, Joined -> True,
ImagePadding -> {{40, 5}, {40, 5}}, PlotRangeClipping -> True]
Export["zz.eps", %];
It displays fine as
[![enter image description here][3]][4]
Then I save it as EPS to insert in LaTeX:
\begin{figure}[!htb]
\centering
\includegraphics[scale=.7]{zz.eps}
\caption{Digraph.}
\label{fig:digraph}
\end{figure}
And after that I produce the PDF:
[![enter image description here][5]][6]
[![enter image description here][7]][8]
There are "whiskers". How can I remove them?
***Related [question on Mathematica StackExchange][9]***
[1]: https://i.stack.imgur.com/PKOVo.png
[2]: https://i.stack.imgur.com/PKOVo.png
[3]: https://i.stack.imgur.com/xpiJd.png
[4]: https://i.stack.imgur.com/xpiJd.png
[5]: https://i.stack.imgur.com/z8SMI.png
[6]: https://i.stack.imgur.com/z8SMI.png
[7]: https://i.stack.imgur.com/TFf5i.png
[8]: https://i.stack.imgur.com/TFf5i.png
[9]: https://mathematica.stackexchange.com/questions/140757/non-proper-clipping-at-plotrange-edge
Alex Yakubovich, 2017-03-23T10:53:53Z

Download and reinstall Mathematica 6.0.1?
http://community.wolfram.com/groups/-/m/t/1037975
Hello,
Today, through the USER PORTAL, as it is indicated there, I asked to 'orders@wolfram.co.uk' how to download and reinstall my Mathematica 6.0.1 (full commercial version).
However, my email:
Sirs,
I take it for granted that Mathematica 6.0.1 (full license) is still available for me to download and re-install. Am I wrong?
If it is not, could you please tell me how I can get it?
Thank you.
The email was rejected without any additional information or reason.
The USER PORTAL says that my license of Mathematica 6.0.1 is still active, and it preserves its license number.
Could anyone from WRI help?
RegardsDr Martin2017-03-23T09:15:07Z[✓] NetTrain TargetDevice GPU error
http://community.wolfram.com/groups/-/m/t/902394
When I add the option `TargetDevice->"GPU"` to NetTrain in Mathematica 11.0.0.0 I get the error 'First argument to NetTrain should be a fully specified net'. Is this broken in 11.0.0.0?
Gijsbert Wiesenekker, 2016-08-09T21:51:12Z

Avoid crashing the Mathematica kernel while calling Audio[]?
http://community.wolfram.com/groups/-/m/t/1025102
I'm attaching a Notebook. The full contents are:
mySampleRate = 48000;
testSeries = {0, 1, 0}
testAudio =
Audio[{testSeries, testSeries}, SampleRate -> mySampleRate]
When I evaluate the Notebook, Mathematica beeps 4 times, and it reports:
> An unknown box name (ToBoxes) was sent as the BoxForm for the
> expression. Check the format rules for the expression.
and when I go to Help -> Why The Beep, Mathematica reports:
> The kernel Local has quit (exited) during the course of an evaluation.
Actually, Mathematica's behavior on this Notebook is non-deterministic. Sometimes it beeps 4 times, sometimes it beeps just once, and sometimes it gives me different messages when I ask it why the beep.
Joe Donaldson, 2017-03-04T06:11:02Z

Avoid the problem of "Recursion depth of 1024 exceeded during evaluation of..."?
http://community.wolfram.com/groups/-/m/t/1037435
Hello, everybody! I am facing a problem called "Recursion depth of 1024 exceeded during evaluation of ...". I know that it occurs because I cannot obtain an explicit solution for y, but I do not know how to write the code to work around it. Can anyone help me solve it? Thank you. I post an attachment below.
EditProfile FillName, 2017-03-22T13:41:29Z

Glyph Frieze Patterns
http://community.wolfram.com/groups/-/m/t/1032650
Frieze patterns overlap art and math. The design of the base tile in a frieze pattern is artistic, while its repetition can be defined mathematically. This makes frieze patterns a good candidate for exploration with the Wolfram Language.
The bit of code below creates random frieze patterns from font glyphs. I chose sixteen asymmetric glyphs. Others would work, but they should be asymmetric to avoid double symmetries. Here is what the code does:
- randomly select one of the glyphs
- create a random dark color
- randomly rotate the glyph by 45° angles
- crop any excess background
- tile it according to one of the seven frieze patterns
There are 896 possible patterns, not counting the color variations. The results are often startling. Here are a few:
![\[screen shot\]][1]
![\[screen shot\]][2]
![\[screen shot\]][3]
![\[screen shot\]][4]
This suited my need as a small part of a larger project, a sort of school-house trivia game called *Chicken Scratch*. The questions must have a fair amount of randomness so the students reason rather than memorizing answers. For this question, the game presents the frieze pattern and the players choose from the names of four geometric definitions.
The Wolfram Demonstrations Project does have a half-dozen or so demonstrations for exploring frieze patterns. This is the first I've seen that uses glyphs for the base tile design. Though I could turn this into a demonstration, I need to focus on *Chicken Scratch*. Feel free to use this code however you want.
color1 = RGBColor[Table[RandomReal[.6], 3]];
symbol = RandomChoice[{9873, 9730, 38, 9816, 163, 9758, 8730, 8950,
11001, 10729, 10771, 9736, 10000, 9799, 9732, 8623}];
stamp = ImageCrop[
ImageRotate[
Rasterize[
Graphics[{color1,
Style[Text[FromCharacterCode[symbol]], 200]}]], (
RandomInteger[7] \[Pi])/8, Background -> White]];
width = ImageDimensions[stamp][[1]];
frieze = Switch[RandomInteger[{1, 7}],
1, ImageAssemble[Table[stamp, 12]],
2,
top = ImageAssemble[Table[stamp, 12]];
bot = ImageAssemble[Flatten[{
ImageRotate[ImageCrop[stamp, {width/2, Full}, Right], \[Pi]],
Table[ImageRotate[stamp, \[Pi]], 11],
ImageRotate[
ImageCrop[stamp, {width/2, Full}, Left], \[Pi]]}]];
imgLst = ConformImages[{top, bot}];
ImageAssemble[{{imgLst[[1]]}, {imgLst[[2]]}}],
3,
top = ImageAssemble[Table[stamp, 12]];
bot = ImageAssemble[Flatten[{
ImageReflect[ImageCrop[stamp, {width/2, Full}, Left]],
Table[ImageReflect[stamp], 11],
ImageReflect[ImageCrop[stamp, {width/2, Full}, Right]]}]];
imgLst = ConformImages[{top, bot}];
ImageAssemble[{{imgLst[[1]]}, {imgLst[[2]]}}],
4, ImageAssemble[
Riffle[Table[stamp, 6], Table[ImageReflect[stamp, Left], 6]]],
5, ImageAssemble[{Table[stamp, 12],
Table[ImageReflect[stamp], 12]}],
6, ImageAssemble[{Riffle[Table[stamp, 6],
Table[ImageReflect[stamp, Left], 6]],
Riffle[Table[ImageReflect[stamp, Left], 6], Table[stamp, 6]]}],
7, ImageAssemble[{Riffle[Table[stamp, 6],
Table[ImageReflect[stamp, Left], 6]],
Riffle[Table[stamp, 6], Table[ImageReflect[stamp, Left], 6]]}]];
pic = Image[frieze, ImageSize -> {{800}, {100}}]
You may have noticed that my code relies on procedural programming constructs like **Switch** and **If**. I have only been using the Wolfram Language for about a year. I'm grateful that the Wolfram Language allows me to use procedural techniques while I learn how to write more elegant function-based code.
Oh, there is a possibility that some of the glyphs won't work on your system because they rely on what fonts you have on your machine. If that's the case, replace the character codes with ones that you do have.
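As a quick way to see which of these character codes your fonts actually cover, you can render them all at once (an inspection snippet of my own, not part of the game code; the codes are the ones used above):

    (* preview the candidate glyphs; any that render as an empty box need replacing *)
    codes = {9873, 9730, 38, 9816, 163, 9758, 8730, 8950, 11001, 10729,
       10771, 9736, 10000, 9799, 9732, 8623};
    Row[FromCharacterCode /@ codes, "  "]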
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.24.33AM.png&userId=788861
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.27.58AM.png&userId=788861
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.27.20AM.png&userId=788861
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.25.47AM.png&userId=788861
Mark Greenberg, 2017-03-17T00:12:50Z

[✓] 10.4 upgrade broke customized context menu
http://community.wolfram.com/groups/-/m/t/823984
Under Mathematica 10.3 I had customized the context menu to include "Copy as Bitmap" for "GraphicsBox" and "Image" selections by inserting a line
MenuItem["Copy as &Bitmap", FrontEndExecute[ FrontEndToken[FrontEnd`SelectedNotebook[], "CopySpecial", "MGF"]]]
in the appropriate places in the file **$InstallationDirectory\SystemFiles\FrontEnd\TextResources\ContextMenus.tr**
The same no longer works under 10.4. What has changed? Has anybody else had this problem?
Martin Rommel, 2016-03-15T17:18:39Z

Create a plot of vectors using a Do-loop?
http://community.wolfram.com/groups/-/m/t/1036796
I am trying to create a plot of vectors using a Do-loop and `Graphics[{Arrow[{...`. Within the Do-loop I calculate the beginning (X, Y) and ending (X1, Y1) coordinates of each vector. Can I use the `Graphics[{Arrow[{` command within a Do-loop, or do I first have to create a Table? Below is my code.
Terry
A = {
{.79, 1.36}, {.93, 1.38}, {.58, .38}, {.87, .87}, {.83, .79},
{.31, .99}, {.60, .48}, {.60, .87}, {1.64, .15}, {1.11, 1.30},
{.53, .97}, {1.26, .39}, {2.37, .00}, {1.17, 1.76}, {.96, 1.26},
{.56, .46}, {1.17, .20}, {.63, .26}, {1.01, .47}, {.81, .77}};
DI = {-.90, -1.20, 1., -.97, -1.08, -1.53, -.61, -.60,
1.24, -.69, -1.31, .92, 2.39, -.06, -.48, -.82, -.82,
1.11, .66, -.15, -1.08}
Do[MDISC = Sqrt[A[[i, 1]]^2 + A[[i, 2]]^2];
COSX = A[[i, 1]]/MDISC;
COSY = A[[i, 2]]/MDISC;
BIGD = -DI[[i]]/MDISC;
X = BIGD*COSX;
Y = BIGD*COSY;
X1 = (MDISC + BIGD)*COSX;
Y1 = (MDISC + BIGD)*COSY;
 Graphics[{Arrow[{{X, Y}, {X1, Y1}}]}], {i, 20}]
Terry Ackerman, 2017-03-21T11:42:18Z

Why "Updating from Wolfram Research server"?
http://community.wolfram.com/groups/-/m/t/749400
Today I loaded and evaluated a simple notebook I saved yesterday. When the evaluation got to a Histogram statement it stopped and displayed "Updating from Wolfram Research server" for about a minute. I'm running 10.3 on Windows 7 x64. I thought Histogram was a built-in function. Why would Mathematica need to access the Wolfram server to execute it?
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=updatingfromserver.png&userId=98179
David Keith, 2015-12-03T17:23:52Z

Correct a NDSolve approach when an argument contains an InverseFunction?
http://community.wolfram.com/groups/-/m/t/972466
We have a thin crystal of thickness d illuminated uniformly from the left at an intensity lgti[0, t] of unity. The crystal is composed of a photo reactive species *a* which absorbs light. Inside the crystal at location x and time t the photo chemical reaction leads to a local concentration of *a* given as a[x, t] in the code below.
Code to generate the concentration, a[x, t], and light intensity, lgti[x, t], within the crystal is straightforward and is shown.
The cause of the errors generated using the code is that in some cases a[x, t] does not return a number. These exceptional cases are near a known limit of the InverseFunction, and a[x, t] could be given the value unity when they are detected. My problem is that using NumberQ inside a Module definition of a[x, t] always gives False, because lgti[x, t] is unevaluated.
The starting code is shown but see the attached notebook.
ClearAll[a, x, t, eqns, \[Sigma]N, lgti, soln, t0, d, tmax]
\[Sigma]N = 1.78*10^4; d = 0.001; tmax = 3000.0;
a[x_, t_] := InverseFunction[(1.8996253051848473`/lgti[x, t] *
(162.99559471365637` Log[1 + 8.98898678414097` (1 - #1)] -
172.98458149779734` Log[#1]) ) &
] [t]
eqns = {D[lgti[x, t], x ] == - \[Sigma]N a[x, t ] lgti[x, t],(* Beer's Law *)
lgti[0, t] == 1,
lgti[x, 0] == Exp[-\[Sigma]N x]
};
t0 = AbsoluteTime[];
soln = NDSolve[eqns, lgti, {x, 0, d}, {t, 0, tmax},
MaxStepFraction -> 0.01] [[1]];
Print[ToString[(AbsoluteTime[] - t0)/60] <> " minutes"]
Any advice on how to code a[x, t] so that lgti[x, t] appears as a number within the body of the code would be welcome.
An alternative approach would also be well received.
Mervin Hanson, 2016-12-01T00:35:14Z

Issue with label backgrounds in Graphics3D, Mathematica 11.1
http://community.wolfram.com/groups/-/m/t/1035985
A new, undesired "feature" in MMA 11.1 is that labels in Graphics3D seem to always be rendered against a white background. The screenshots below are from a recent [Demonstration by Izidor Hafner][1]:
MMA 10.4
![enter image description here][2]
MMA 11.1
![enter image description here][3]
[1]: http://demonstrations.wolfram.com/FourTheoremsOnSphericalTriangles/
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MMA10.4.png&userId=93385
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=MMA11.1.png&userId=93385
Hans Milton, 2017-03-20T19:20:57Z

Code puzzles: turning docs into educational games
http://community.wolfram.com/groups/-/m/t/1032663
Teaching programming and assessing learning progress is often a very custom task. I wanted to create a completely automated, practically infinite stream of random puzzles that guide a learner towards improving programming skills. I think the major problem is content creation. To test whether the learner knows a programming concept, an exercise needs to be wisely designed. And it is better to have a randomized set of such exercises to properly test the knowledge and exclude guessing, cheating and so on. Creating such educational materials is often very tedious, time-consuming and manual - exactly like creating good documentation. I will explain one simple idea for using the docs to make an educational game. This is just a barebone prototype whose inner workings are easy to follow (try it out & share: https://wolfr.am/bughunter ). Please comment with feedback on how we can develop this idea further.
[![enter image description here][1]][3]
# Introduction: efficient use of resources
The docs are a treasure of breadth and depth of information and should be explored beyond their regular usage. The painstaking, time-consuming manual effort of creating good programming documentation should be used to its fullest potential. Automated game play would be a novel take on the docs. We can use existing code examples in the docs to randomly pull pieces of code and make programming exercises automatically. Being able to read code and find bugs is, in my experience, one of the most enlightening practices. The goal of the game linked above is to find a defect in the input code (a bug) and fix it - hence "bug hunter". There are just 2 possible outcomes of a single game cycle, and after each you can "try again":
![enter image description here][4]
# Core game code: making puzzles
Wolfram Language (WL) documentation is one of the best I've seen. It has pages and pages of examples, starting from simple ones and going through all the details of usage. Moreover, the docs are written in WL itself, and furthermore WL can access the docs and even has internal self-knowledge of its structure via WolframLanguageData. For instance, this is how you can show a relationship community graph for symbols related to `GatherBy`:
WolframLanguageData["GatherBy", "RelationshipCommunityGraph"]
![enter image description here][5]
We can use `WolframLanguageData` to access the docs' examples and then drop some parts of the code. The puzzle for the learner is then to find what is missing. For the sake of clarity in designing a small working prototype, let's limit the test WL functions and corresponding docs pages to a small number. So out of ~5000 (and we [just released a new addition][6]):
WolframLanguageData[] // Length
`4838`
built-in symbols, I just take 30
functions = {"Append", "Apply", "Array", "Cases", "Delete", "DeleteCases", "Drop", "Except",
"Flatten", "FlattenAt", "Fold", "Inner", "Insert", "Join", "ListConvolve", "Map", "MapThread",
"Nest", "Outer", "Partition", "Prepend", "ReplacePart", "Reverse", "RotateLeft", "RotateRight",
"Select", "Sort", "Split", "Thread", "Transpose"};
functions // Length
30
that are listed on a [very old but neat animated page][7] as an essential core-language collection. I will also add some "sugar syntax" to the pool of potentially removable parts of code:
sugar = {"@@", "@", "/@", "@@@", "#", "^", "&"};
So, for instance, out of the following [example in docs][8] we could remove a small part to make a puzzle:
![enter image description here][9]
Here is an example of "sugar syntax" removal, which for novice programmers would be harder to solve:
![enter image description here][10]
The next step is to define a function that checks whether a string is one of the ~5000 built-in symbols or one of the sugar syntax tokens we defined above:
ClearAll[ExampleHeads];
ExampleHeads[e_]:=
Select[
Cases[e,_String, Infinity],
(NameQ["System`"<>#]||MemberQ[sugar,#])&&#=!="Input"&
]
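To see what `ExampleHeads` picks out, we can feed it a toy list of strings (assuming `sugar` is defined as above; note that "Input" is excluded explicitly because it is both a System` symbol and the cell-style string appearing in doc examples):

    ExampleHeads[{"Map", "/@", "xyz", "Input"}]
    (* {"Map", "/@"} - "Map" is a built-in symbol, "/@" is sugar, "xyz" is neither *)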
The next function essentially makes a single quiz question. First it randomly picks a function from the list of 30 symbols we defined. Then it goes to that symbol's doc page, to the section called "Basic Examples", finds a random example and removes a random part of it:
ranquiz[]:=Module[
{ranfun=RandomChoice[functions],ranexa,ranhead},
ranexa=RandomChoice[WolframLanguageData[ranfun,"DocumentationBasicExamples"]][[-2;;-1]];
ranhead=RandomChoice[ExampleHeads[ranexa[[1]]]];
{
ReplacePart[#,Position[#,ranhead]->""]&@ranexa[[1]],
ranexa[[2]],
ranhead,
ranfun
}
]
Now we will define a few simple variables and tools.
# Image variables
I keep marveling at how convenient it is that the Mathematica front end can make images part of code. This makes notebooks a great IDE:
![enter image description here][11]
# Databin for tracking stats
It is important to have statistics for your learning game: to understand how to improve it and where the education process should go. [Wolfram Datadrop][12] is an amazing tool for these purposes.
[![enter image description here][13]][14]
We define the databin as
bin = CreateDatabin[<|"Name" -> "BugHunter"|>]
# Deploy game to the web
To make an actual application usable by everyone with internet access, I will use the [Wolfram Development Platform][15] and [Wolfram Cloud][16]. First I define a function that builds the "result of the game" web page. It checks whether the answer is wrong or right and returns a differently designed page accordingly.
quiz[answer_String,check_String,fun_String]:=
(
DatabinAdd[Databin["kd3hO19q"],{answer,check,fun}];
Grid[{
{If[answer===check,
Grid[{{Style["Right! You got the bug!",40,Darker@Red,FontFamily->"Chalkduster"]},{First[imgs]}}],
Grid[{{Style["Wrong! The bug got you!",40,Darker@Red,FontFamily->"Chalkduster"]},{Last[imgs]}}]
]},
{Row[
{Hyperlink["Try again","https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter"],
"|",
hyperlink["Documentation","http://reference.wolfram.com/language/ref/"<>fun<>".html"],
"|",
hyperlink["Fun hint","http://reference.wolfram.com/legacy/flash/animations/"<>fun<>".html"]},
Spacer[10]
]},
{Style["===================================================="]},
{hyperlink["An Elementary Introduction to the Wolfram Language","https://www.wolfram.com/language/elementary-introduction"]},
{hyperlink["Fast introduction for programmers","http://www.wolfram.com/language/fast-introduction-for-programmers/en"]},
{logo}
}]
)
This function is used inside a `CloudDeploy[...FormFunction[...]...]` construct to actually deploy the application. `FormFunction` builds a query form, a web user interface that poses the question and collects the user's answer. Note that for the random variables to work properly, `Delayed` is used as a wrapper around `FormFunction`.
CloudDeploy[Delayed[
quizloc=ranquiz[];
FormFunction[
{{"code",None} -> "String",
{"x",None}-><|
"Input"->StringRiffle[quizloc[[3;;4]],","],
"Interpreter"->DelimitedSequence["String"],
"Control"->Function[Annotation[InputField[##],{"class"->"sr-only"},"HTMLAttrs"]]|>},
quiz[#code,#x[[1]],#x[[2]]]&,
AppearanceRules-> <|
"Title" -> Grid[{{title}},Alignment->Center],
"MetaTitle"->"BUG HUNTER",
"Description"-> Grid[{
{Style["Type the missing part of input code",15, Darker@Red,FontFamily->"Ayuthaya"]},
{Rasterize@Grid[{
{"In[1]:=",quizloc[[1]]},
{"Out[1]=",quizloc[[2]]}},Alignment->Left]}
}]
|>]],
"bughunter",
Permissions->"Public"
]
The result of the deployment is a cloud object at a URL:
CloudObject[https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter]
with the short version:
URLShorten["https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter", "bughunter"]
https://wolfr.am/bughunter
And we are done! You can go at the above URL and play.
# Further thoughts
Here are some key points and further thoughts.
## Advantages:
- Automation of content: NO new manual resource development, use existing code bases.
- Automation of testing: NO manual labor of grading.
- Quality of testing: NO multiple choice, NO guessing.
- Quality of grading: almost 100% exact detection of mistakes and correct solutions.
- Fight cheating: the clearly identifiable question type "find missing code part" makes it easy to ban help from friendly forums (such as this one).
- Almost infinite variability of examples if the whole docs system is engaged.
- High range from very easy to very hard examples (exclusion of multiple functions and syntax can make this really hard).
## Improvements:
- Flexible scoring system based on function usage frequencies.
- Optional placeholder as hint where the code is missing.
- Using network of related functions (see above) to move smoothly through the topical domains.
- Using functions frequency to feed easier or harder exercises based on test progress.
***Please comment with your own thoughts and games and code!***
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-17at10.37.46AM.png&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at6.49.58PM.png&userId=11733
[3]: https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at6.58.45PM.png&userId=11733
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sgfq354ythwrgsf.png&userId=11733
[6]: http://blog.wolfram.com/2017/03/16/the-rd-pipeline-continues-launching-version-11-1/
[7]: http://reference.wolfram.com/legacy/flash/
[8]: http://reference.wolfram.com/language/ref/Except.html
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.21.05PM.png&userId=11733
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.25.52PM.png&userId=11733
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.34.59PM.png&userId=11733
[12]: https://datadrop.wolframcloud.com/
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.41.35PM.png&userId=11733
[14]: https://datadrop.wolframcloud.com/
[15]: https://www.wolfram.com/development-platform/
[16]: http://www.wolfram.com/cloud/
Vitaliy Kaurov, 2017-03-17T01:13:24Z

Alternative framework for stress testing routines - Value at Risk focus
http://community.wolfram.com/groups/-/m/t/458805
Stress testing in the Value-at-Risk context provides a complementary view of the firm's risk profile when the market becomes extremely volatile and unstable. In this respect, stress testing generally replicates crisis scenarios in the current market setting. The traditional approach to stress testing emphasises qualitative factors. We propose an alternative arrangement: a purely quantitative approach in a probabilistic setting, where a particular stress test is modelled through the quantiles of a calibrated probability distribution.
![StVar image][1]
#Introduction
Value-at-Risk (VaR) as a standard risk measure is usually defined in terms of a time horizon, a confidence interval and the parameter settings that drive the calculation of the measure. There are a few theoretical assumptions behind VaR, and one of them defines VaR as a measure under normal market conditions. This fundamental principle, by definition, ignores extreme market scenarios where market parameters - and volatility in particular - undergo periods of excessive variation.
Stress testing plays a complementary role to VaR - it identifies the hidden vulnerability of the risk measure resulting from its hidden assumptions, and thus provides risk and senior managers with a clear view of the loss when the market runs into crisis. Therefore, financial institutions these days are required to run stress tests of their VaR on a regular basis to quantify the additional loss based on stress-affected parameters.
To ease and streamline the stress testing exercise we propose a new approach to stress testing based on a probabilistic representation of the VaR variables, where a particular scenario is modelled through the quantile behaviour of a calibrated density. This approach is flexible, easy to implement and leads to an elegant and tractable solution that can be applied in various risk quantification settings.
#Stress definition
Stress testing is generally carried out in three settings:
- Historical scenario of particular crisis
- Stylised definition of particular scenarios
- Hypothetical events
The recent regulatory review strongly prefers the historical scenario approach, where a particular period containing 12 consecutive months of observed crisis has to be included in the stress testing exercise.
The objective of this paper is to propose a new parametric approach to stress testing where the underlying market parameters are defined probabilistically and the stress scenario is then expressed as a quantile of certain probability distribution. The solution brings a number of benefits - the primary being simplicity of implementation, ease of maintenance and ease of use.
#Problem setting
Consider the following case - we have built a portfolio of 5 UK stocks - Lloyds, Vodafone, Barclays, BP and HSBC - with £1 million invested in each of them. We want to compute **VaR** and **stressed VaR** for our portfolio with £5 million value.
##Getting market data
We get the market data for the 5 stocks with a long history from Jan 2008
stocks = {"LLOY.L", "VOD.L", "BARC.L", "BP.L", "HSBA.L"};
data = TimeSeries[FinancialData[#, "January 1, 2008"]] & /@ stocks;
The price history of each stock looks as follows:
DateListPlot[data, PlotLegends -> stocks]
![enter image description here][2]
## Processing market data
First we compute the daily log-returns of each stock:

    dret = TimeSeriesResample[MovingMap[Log[Last[#]] - Log[First[#]] &, data, {2}]];
![enter image description here][3]
In the same way we calculate each stock's historical daily volatility:
qvol = MovingMap[StandardDeviation, dret, {60, "Day"}];
Evaluate@DateListPlot[qvol, PlotRange -> All, PlotLegends -> stocks]
![enter image description here][4]
Both the returns and their volatilities reveal high values for the financial stocks - Lloyds and Barclays in particular - with excessive movements in 2009, 2010 and partially in 2012.
![enter image description here][5]
These are the periods that represent the peak of the financial crisis and stress to the market.
#Probabilistic definition of volatility
The VaR metric deals primarily with return volatility, and therefore volatility behaviour is the key object of the stress testing process. As the graph above suggests, the volatility evolution over time is not constant: we observe periods of high and low values. If we take an abstract view of the volatility data, we can treat it as a random variable with some probability distribution. The presence of high and low values strongly supports this argument.
We propose the Johnson distribution - unbounded type - to define the distribution of the volatility density over the observed period of time. Recall that the Johnson distributions form a family of the form Y = \[Sigma] g((X-\[Gamma])/\[Delta]) + \[Mu], where X ~ Normal[]. In the case of the 'unbounded' distribution, the function is g = sinh. The Johnson distributions are well-behaved functions and, with four parameters to fit, are ideally suited for fitting data with long and patchy tails. In this respect they can easily be used to model the volatility distribution over time.
The probability density of the Johnson unbounded distribution is:
PDF[JohnsonDistribution["SU", \[Gamma], \[Delta], \[Mu], \[Sigma]], x]
![enter image description here][6]
and the density plot with varying shape factor \[Gamma] looks as follows:
Plot[Evaluate@
Table[PDF[JohnsonDistribution["SU", \[Gamma], 1.25, 0.007, 0.00341],
x], {\[Gamma], {-3, -4, -5}}], {x, 0, 0.15}, Filling -> Axis,
PlotRange -> All,
PlotLegends -> {"\[Gamma]=-3", "\[Gamma]=-4", "\[Gamma]=-5"}]
![enter image description here][7]
Johnson type of distributions are very flexible in terms of tail control.
## Historical distribution of volatility
We can visualise the volatility distribution by looking at each stock's histogram:
size = Length[stocks];
Table[Histogram[qvol[[i, All, 2]], 30, "PDF",
PlotLabel -> stocks[[i]],
ColorFunction ->
Function[{height}, ColorData["Rainbow"][height]]], {i, size}]
![enter image description here][8]
We can use the historical data to fit the volatility to the Johnson distribution:
edist = Table[
EstimatedDistribution[qvol[[i, All, 2]],
JohnsonDistribution [
"SU", \[Gamma], \[Delta], \[Mu], \[Sigma]]], {i, size}]
How good is the fit? We can observe this on the charts:
Table[Show[
Histogram[qvol[[i, All, 2]], 20, "PDF", PlotLabel -> stocks[[i]]],
Plot[PDF[edist[[i]], x], {x, 0, 0.1}, PlotRange -> All,
PlotStyle -> {Blue, Thick}]], {i, size}]
![enter image description here][9]
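With the fitted densities in hand, a stressed volatility for each stock can be read off directly as a high quantile of its calibrated Johnson distribution - this is the essence of the quantile-based stress definition proposed here (a sketch; the 99th percentile is an illustrative choice):

    stressvol = Table[Quantile[edist[[i]], 0.99], {i, size}]

These quantile volatilities can then stand in for the historical window volatilities in the VaR formulas that follow.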
## Defining correlation matrix
Portfolio VaR requires a correlation structure amongst the VaR components. We can create it from the historical return data defined above. The only problem is to define the time window from which we select the market data. For the standard VaR we can choose the one-year history
tswin = {{2014, 3, 1}, {2015, 3, 1}};
retwin = TimeSeriesWindow[dret, tswin];
volwin = Table[
Mean[MovingMap[StandardDeviation, retwin[[i]], {60, "Day"}][[All,
2]]], {i, size}]
Table[retwin[[i, All, 2]], {i, size}];
wincorr = Correlation[Transpose[%]];
wincorr // MatrixForm
> {0.0117992, 0.0127976, 0.0155371, 0.011966, 0.00891589}
![enter image description here][10]
#Standard VaR calculation
Having defined the volatility and the correlation matrix based on the past year's data, we can calculate the parametric VaR easily. Assuming a normal distribution for the stock returns and a 1-day VaR horizon, it can be defined as follows:
##Individual stock VaR
    indVaR = -Sqrt[2] \[Pi] \[Sigma] InverseErfc[2 \[Alpha]]
where \[Pi] = value of the investment = £1 million, \[Sigma] = stock return volatility and \[Alpha] = confidence level.
ndinv = Refine[InverseCDF[NormalDistribution[0, \[Sigma]], \[Alpha]],
0 < \[Alpha] < 1]
> -Sqrt[2] \[Sigma] InverseErfc[2 \[Alpha]]
With 99% confidence and each stock volatility calculated above
indvar = Table[
10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] -> volwin[[i]]}, {i, size}]
Total[%]
BarChart[indvar, ChartStyle -> "Rainbow",
PlotLabel -> Style["Individual stock 1 day VaR", 16],
ChartLegends -> stocks]
> {27449.1, 29771.8, 36144.6, 27837.1, 20741.5}
> 141944.
![enter image description here][11]
We can see the highest individual VaR for Barclays and the lowest for HSBC. This is consistent with the individual stock volatilities observed above.
## Portfolio VaR
Formula-wise this is equivalent to:
portVaR = Sqrt[indVaR^T . \[CapitalSigma] . indVaR]
baseportVaR = Sqrt[indvar.wincorr.indvar]
> 104226.
One can see that the portfolio VaR < \[Sum] individual VaRs due to the diversification effect. Diversification reduces the sum of the individual VaRs by almost £40,000.
The 1-day total portfolio VaR is £104k, or 2.2% of the portfolio's value.
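The diversification benefit can be read off with a one-line check, using the quantities computed above:

    Total[indvar] - baseportVaR

This comes to roughly £37.7k, a little under the £40,000 mentioned above.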
# Stressed VaR
VaR is driven primarily by the volatility of returns. In a portfolio context there is another factor - correlations. When stressing VaR, one has to consider stressing both parameters.
## Historical scenario of a past crisis
This is the most frequently used method of stress testing. Having data available for the entire period makes the selection simple.
Looking at the historical volatility graph, it is clear that volatility peaked in 2009 - 2010. We therefore select this period for our stress testing.
tswin = {{2009, 3, 1}, {2010, 3, 1}};
retwin = TimeSeriesWindow[dret, tswin];
volwin = Table[
Mean[MovingMap[StandardDeviation, retwin[[i]], {60, "Day"}][[All,
2]]], {i, size}]
Table[retwin[[i, All, 2]], {i, size}];
wincorr = Correlation[Transpose[%]];
wincorr // MatrixForm
> {0.0412017, 0.013961, 0.0342192, 0.0138671, 0.0187612}
![enter image description here][12]
We can now see very different values for both the volatilities and the correlation matrix. To compute the stressed VaR, all we need to do is replace the standard VaR data set with the stress-period data:
indSTvar =
Table[10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] -> volwin[[i]]}, {i,
size}]
Total[%]
BarChart[indSTvar, ChartStyle -> "Rainbow",
PlotLabel -> Style["Individual stock 1 day Stress VaR", 16],
ChartLegends -> stocks]
> {95849.5, 32478.1, 79605.9, 32259.8, 43645.}
>
> 283838.
![enter image description here][13]
Consequently, the individual stocks' stressed VaR is very different from the standard VaR. For example, the Lloyds stressed VaR is almost 3 times higher than the standard VaR measure.
Portfolio-level stressed VaR:
stressPortVaR = Sqrt[indSTvar.wincorr.indSTvar]
> 218471.
BarChart[{baseportVaR, stressPortVaR}, ChartStyle -> {Blue, Red},
ChartLegends -> {"Std VaR", "Stress VaR"}]
![enter image description here][14]
The portfolio 1-day stressed VaR has **doubled** under the stress scenario and represents a £218k loss.
We can equally choose any other period to see how the VaR behaves under a different set of market parameters.
## Inverse CDF method for stress scenarios - individual case
The calibration of historical volatility to the Johnson distribution enables us to explore an alternative route to stressed VaR generation. This can be described as the **Inverse CDF** method.
If we assume that the return distribution is normal and that the volatility of that return is calibrated to the Johnson unbounded distribution, we can obtain the stressed VaR metric by composing quantiles of the two distributions.
- Stressed volatility:
We are interested in a quantile value of the volatility that captures the stressed market sentiment. This is easily achieved through the Inverse CDF function:
invJD = Refine[
InverseCDF[
JohnsonDistribution [
"SU", \[Gamma], \[Beta], \[Kappa], \[Nu]], \[Lambda]],
0 < \[Lambda] < 1] // Simplify
![enter image description here][15]
- Combined stressed VaR:
The VaR is then the composition of the VaR formula with the quantiled volatility measure:
stVaRNd = ndinv /. \[Sigma] -> invJD // Simplify
> ndinv
The above formula is the **parametric definition** of the stressed VaR. The function operates on two quantile parameters:
- VaR confidence level \[Alpha]
- Volatility 'stress' factor \[Lambda]
The stressed VaR parametric model will behave as follows:
Plot3D[stVaRNd /. {\[Kappa] -> 0.006583, \[Nu] ->
0.0002419, \[Gamma] -> -4.978, \[Beta] -> 1.07132}, {\[Alpha],
0.5, 0.9}, {\[Lambda], 0.3, 0.75},
ColorFunction -> "TemperatureMap", PlotLegends -> Automatic]
![enter image description here][16]
It is worth noting that the model is quite sensitive to the volatility stress factor \[Lambda].
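This sensitivity can be tabulated directly. As a quick sketch, reusing the same calibrated Johnson parameters as in the plot above, one can list the stressed volatility quantile for several values of \[Lambda]:

    Table[{\[Lambda],
      InverseCDF[
       JohnsonDistribution["SU", -4.978, 1.07132, 0.006583,
        0.0002419], \[Lambda]]}, {\[Lambda], {0.5, 0.6, 0.7, 0.8, 0.9}}]

The gaps between successive quantiles widen rapidly, which is exactly the sensitivity visible in the surface plot.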
## Probabilistic approach to stress VaR - portfolio context
Apart from stressing the volatility parameter, we need to define the stressed correlation matrix. We propose a simple multiplicative-factor approach in which the original matrix is multiplied by a positive number that increases the correlation coefficients. This is consistent with market practice: in periods of crisis there is a strong tendency for financial assets in the same class to move together. The function below does exactly this:
stressCM[cm_, f_] :=
Table[If[cm[[i, j]] == 1, 1, Min[0.99, cm[[i, j]]*(1 + f)]], {i,
size}, {j, size}]
Applying a 20% increase to the standard VaR correlations produces the following correlation matrix:
stressCM[wincorr, 0.2] // MatrixForm
![enter image description here][17]
The matrix values are in line with the stressed matrix from the 2009 - 2010 historical scenario.
To execute the computation we first generate the individual stressed VaR metrics using the Inverse CDF method:
indstressvar =
Table[10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] ->
Mean[InverseCDF[edist[[i]], {0.5, 0.7, 0.9}]]}, {i, size}]
Total[%]
> {86608., 37365.8, 82986.8, 41072.8, 41535.8}
> 289569.
And then obtain the portfolio VaR in the same way as in the standard case but with the stressed correlation matrix:
portstvar = Sqrt[indstressvar.stressCM[wincorr, 0.2].indstressvar]
> 232010.
The parametric stressed VaR number is similar to what we obtained with the historical scenario method. This shows that the alternative probabilistic stressed VaR approach works well and can be easily applied in a practical setting.
# Extension of the stressed VaR concept
## VaR with Student-T distribution
When we opt for the generalised Student-T distribution for stock returns, the standard VaR can be defined as:
StVaR = Refine[
InverseCDF[StudentTDistribution[0, \[Sigma], \[Nu]], \[Alpha]],
1/2 < \[Alpha] < 1] // Simplify
![enter image description here][18]
The extension to the stressed VaR is trivial:
StVaR /. \[Sigma] -> invJD
![enter image description here][19]
The formula is essentially the Student-T stressed VaR expression.
We need to calibrate the distribution to obtain the degrees-of-freedom value for each stock:
edist2 = Table[
EstimatedDistribution[dret[[i, All, 2]],
StudentTDistribution[0, \[Sigma], \[Nu]]], {i, size}]
> {StudentTDistribution[0, 0.0155062, 1.90416],
StudentTDistribution[0, 0.0093817, 2.82662],
StudentTDistribution[0, 0.0154996, 1.92631],
StudentTDistribution[0, 0.00850414, 2.12496],
StudentTDistribution[0, 0.00925811, 2.35383]}
stparam = List @@@ edist2
> {{0, 0.0155062, 1.90416}, {0, 0.0093817, 2.82662}, {0, 0.0154996,
1.92631}, {0, 0.00850414, 2.12496}, {0, 0.00925811, 2.35383}}
The individual stock VaRs are then:
ststressind =
Table[10^6 StVaR /. {\[Alpha] -> 0.99, \[Nu] ->
stparam[[i, 3]], \[Sigma] ->
Mean[InverseCDF[edist[[i]], {0.5, 0.7, 0.9}]]}, {i, size}]
Total[%]
> {98371.2, 33623.8, 96266.1, 44665.5, 41708.4}
>
> 314635.
and the total portfolio VaR equals:
Sqrt[ststressind.stressCM[wincorr, 0.2].ststressind]
> 253701.
The Student-T stressed VaR is higher than in the normal case, which is in line with expectations, especially when the degrees of freedom are below 4.
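The tail effect can be checked directly on standardised quantiles, independently of the calibration above:

    InverseCDF[StudentTDistribution[0, 1, 2], 0.99] // N
    InverseCDF[NormalDistribution[0, 1], 0.99] // N

> 6.96456
>
> 2.32635

At the 99% level, the Student-T quantile with 2 degrees of freedom is roughly three times the normal quantile, which is why the Student-T stressed VaR comes out higher.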
# Conclusion
Parametric stressed VaR represents an elegant and practical extension to the existing methods for stress testing. The main features of this approach are ease of use and simplicity of application once the calibration dataset is available.
The parametric method can be applied to other probability distributions if one wants to test the parametric stressed VaR under different distributional assumptions. The Student-T approach is presented explicitly to demonstrate this case. Extension to other distributions is straightforward, since the stressed volatility obtained with the Inverse CDF can be applied in an arbitrary setting.
[1]: /c/portal/getImageAttachment?filename=StressVarImange.jpg&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.png&userId=95400
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.png&userId=95400
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=95400
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&userId=95400
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.png&userId=95400
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.png&userId=95400
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.png&userId=95400
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9.png&userId=95400
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.png&userId=95400
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.png&userId=95400
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12.png&userId=95400
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=13.png&userId=95400
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=14.png&userId=95400
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15.png&userId=95400
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16.png&userId=95400
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=17.png&userId=95400
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=18.png&userId=95400
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19.png&userId=95400

Igor Hlivka, 2015-03-13T18:06:46Z

[✓] Mathematica 11.1: no start by clicking on nb file in Windows Explorer
http://community.wolfram.com/groups/-/m/t/1037264
I installed Mathematica 11.1 the other day. After installation I cannot get Mathematica started by double clicking on a file in Windows Explorer.
Right click and selecting Open With and then Wolfram Mathematica 11.1 does open the file correctly.
I have seen this problem in the past but cannot remember how it was solved (maybe by a new release of Mathematica?).
The clean start, by pressing Ctrl+Shift when starting Mathematica
https://reference.wolfram.com/language/tutorial/TroubleshootingTheWolframSystem.htm
does not solve the problem. **Does anyone have a solution to this problem?**
Thanks a lot - Maarten

Maarten van der Burgt, 2017-03-22T08:46:21Z

[✓] Understand sequence behaviour?
http://community.wolfram.com/groups/-/m/t/1036760
Mathematica 11.0.1.0, Windows 64 bit
Can somebody explain why in the first example "Sequence" behaves as expected, whereas in the second we have an error:
First example:
m = {{1, 2}, {3, 4}};
m[[All, 2]] = Sequence[];
m[[2]] = Sequence[];
Second example:
m = {{1, 2}, {3, 4}};
m[[2]] = Sequence[];
m[[All, 2]] = Sequence[];

Daniel Huber, 2017-03-21T09:47:14Z

Use SPI protocol with Arduino & ADC?
http://community.wolfram.com/groups/-/m/t/1029674
Hi, I'm interested in how to use the SPI protocol.
The Mathematica documentation has information suggesting that the Arduino SPI library can somehow be used.
I have an Arduino Uno, and I want to connect an ADC such as the MAX 6675 via the SPI protocol.
The MAX 6675 is an ADC for temperature measurement that uses the SPI protocol to share information, and there are several sketches for gathering information from it.
I cannot imagine how.
I am interested in whether someone has sample code to work with the SPI protocol.

Bohdan Romanchuk, 2017-03-12T23:08:59Z

Access an Amazon Redshift account from Mathematica?
http://community.wolfram.com/groups/-/m/t/1037412
Hello, I am trying to access an Amazon Redshift account from Mathematica. Does anyone have experience with the setup?

bernd lamberts, 2017-03-22T00:17:10Z

Mathematica 11.1 Is Out Now!
http://community.wolfram.com/groups/-/m/t/1032274
Mathematica 11.1 is out now! With over 100 new functions, the latest version significantly expands Mathematica's already state-of-the-art capabilities in machine learning, neural networks, audio processing, robust descriptive statistics and more. Longtime users will also notice a major redesign to the appearance of our documentation.
**Machine Learning**
- New machine learning functionality including [SequencePredict][1], [ActiveClassification][2] and [ActivePrediction][3]
- [FeatureSpacePlot][4] and [FeatureNearest][5] to view and find objects in a feature space for not only numerical and textual data but also sounds, images and any combination thereof
**Neural Networks**
- Neural network functionality has been significantly enhanced, adding seamless support for variable-length sequences and recurrent networks, over 20 new layer types, powerful new network combinators and a growing library of pre-trained neural nets to use as building blocks
- Obtain a trained version of existing neural nets or specific properties of a given net with [NetModel][6]
**Audio Processing**
- Record audio from various connected devices through an interactive user interface using [AudioCapture][7]
- Integrated audio analysis functionality that computes loudness or plots the cepstrum of a recording
**More Cool New Functionality**
- [CurrentImage][8] has been significantly rewritten for faster dynamic image capturing
- New functions useful in microscope image processing: [BrightnessEqualize][9] and [Image3DProjection][10]
- The addition of new region properties and region constructors greatly increases geometric computation functionality: [SierpinskiMesh][11], [MengerMesh][12], [CantorMesh][13], [HilbertCurve][14], [PeanoCurve][15] and more
- Robust descriptive statistics functionality with applications in location, dispersion and shape characterization
- New functions for performing web searches ([WebSearch][16], [WebImageSearch][17]) and translating text between dozens of languages ([TextTranslation][18])
- Create customizable visualizations of geographic properties that can wrap entities with [GeoBubbleChart][19]
- Completely redesigned documentation with responsive layout
[1]: http://reference.wolfram.com/language/ref/SequencePredict.html
[2]: http://reference.wolfram.com/language/ref/ActiveClassification.html
[3]: http://reference.wolfram.com/language/ref/ActivePrediction.html
[4]: http://reference.wolfram.com/language/ref/FeatureSpacePlot.html
[5]: http://reference.wolfram.com/language/ref/FeatureNearest.html
[6]: http://reference.wolfram.com/language/ref/NetModel.html
[7]: http://reference.wolfram.com/language/ref/AudioCapture.html
[8]: http://reference.wolfram.com/language/ref/CurrentImage.html
[9]: http://reference.wolfram.com/language/ref/BrightnessEqualize.html
[10]: http://reference.wolfram.com/language/ref/Image3DProjection.html
[11]: http://reference.wolfram.com/language/ref/SierpinskiMesh.html
[12]: http://reference.wolfram.com/language/ref/MengerMesh.html
[13]: http://reference.wolfram.com/language/ref/CantorMesh.html
[14]: http://reference.wolfram.com/language/ref/HilbertCurve.html
[15]: http://reference.wolfram.com/language/ref/PeanoCurve.html
[16]: http://reference.wolfram.com/language/ref/WebSearch.html
[17]: http://reference.wolfram.com/language/ref/WebImageSearch.html
[18]: http://reference.wolfram.com/language/ref/TextTranslation.html
[19]: http://reference.wolfram.com/language/ref/GeoBubbleChart.html

John Moore, 2017-03-16T18:29:37Z

[✓] Use SetDelayed for a selected set of indices?
http://community.wolfram.com/groups/-/m/t/1036976
I'm trying to define some delayed values and let Mathematica do the simplification for me during the course of the computation. Here is a minimal working example:
x[i_] := y[i];
Mx = Table[x[i] + 2, {i, 1, 10}];
My = Table[y[i], {i, 1, 10}];
difference = Mx - My
The output is simply `{2, 2, 2, 2, 2, 2, 2, 2, 2, 2}`. What I'm eager to know is whether there is a way to define the delayed value **only** for x[i] with 2<=i<=10 (i.e. a selected subset of the indices), so that the final result would be something like `{x[1]-y[1]+2, 2, 2, 2, 2, 2, 2, 2, 2, 2}`. That is, every entry of the result is reduced by taking the delayed values except the first one. Thanks!!

K X, 2017-03-21T17:20:56Z

Downloading StarData that meets a certain qualification?
http://community.wolfram.com/groups/-/m/t/1036912
In this sample code, I first download subsets of the names of stars from multiple StarData classes.
A Union is used to combine all the target star names in a single list.
Using a parallel request across four CPUs, a certain set of metrics are retrieved for each star.
Not all stars have a complete set of parameter data.
To simplify plotting, Missing["NotAvailable"] entries are set to zero.
The same data is then normalized in relation to the sun, and units are removed.
A Cases function is used to remove all stars that have zero for a mass value.
This works, but in the real code version, all stars by StarData class are first downloaded once, and saved separately as exports.
Then a second program, imports each saved class export, and graphs across classes and particular metrics.
Is there a way of downloading only the StarData entries where the Mass parameter is not equal to Missing["NotAvailable"]?
Something like:
Take[StarData[EntityClass["Star", "ClassAStar"]], {1, 100}] where "Mass" != "Missing["NotAvailable"]"
Working sample code:
sunMass = StarData["Sun", "Mass"];
sunLuminosity = StarData["Sun", "Luminosity"];
sunTemperature = StarData["Sun", "EffectiveTemperature"];
sunDiameter = StarData["Sun", "Diameter"];
sunGravity = StarData["Sun", "Gravity"];
sunDensity = StarData["Sun", "Density"];
sunVolume = StarData["Sun", "Volume"];
listDataA =
Take[StarData[EntityClass["Star", "ClassAStar"]], {1, 100}];
listDataB =
Take[StarData[EntityClass["Star", "ClassBStar"]], {1, 100}];
listDataF =
Take[StarData[EntityClass["Star", "ClassFStar"]], {1, 100}];
listDataG =
Take[StarData[EntityClass["Star", "ClassGStar"]], {1, 100}];
listDataK =
Take[StarData[EntityClass["Star", "ClassKStar"]], {1, 100}];
listDataM =
Take[StarData[EntityClass["Star", "ClassMStar"]], {1, 100}];
listDataO =
Take[StarData[EntityClass["Star", "ClassOStar"]], {1, 100}];
listDataSuper =
Take[StarData[EntityClass["Star", "Supergiant"]], {1, 100}];
listDataGiant =
Take[StarData[EntityClass["Star", "NormalGiant"]], {1, 100}];
listDataSubgiant =
Take[StarData[EntityClass["Star", "Subgiant"]], {1, 100}];
listData =
Union[listDataA, listDataB, listDataF, listDataG, listDataK,
listDataM, listDataO, listDataSuper, listDataGiant,
listDataSubgiant];
CloseKernels[]; LaunchKernels[4]
AbsoluteTiming[
Length[
data =
Transpose[
ParallelMap[
StarData[listData, #] &,
{"Name", "Metallicity", "SpectralClass", "BVColorIndex",
"EffectiveTemperature",
"Mass", "Luminosity", "AbsoluteMagnitude", "Gravity", "Density",
"Diameter",
"DistanceFromEarth", "MainSequenceLifetime", "Parallax",
"RadialVelocity", "Radius", "StarEndState", "StarType",
"SurfaceArea",
"VariablePeriod", "Volume"}]]]]
zeroData = data /. {Missing["NotAvailable"] -> 0};
noUnitsData =
zeroData /. {c1_, c2_, c3_, c4_, c5_, c6_, c7_, c8_, c9_, c10_,
c11_, c12_, c13_, c14_, c15_, c16_, c17_, c18_, c19_, c20_,
c21_} -> {c1, c2, c3, c4, QuantityMagnitude[c5],
QuantityMagnitude[c6/sunMass],
QuantityMagnitude[c7/sunLuminosity], c8,
QuantityMagnitude[c9/sunGravity], QuantityMagnitude[c10],
QuantityMagnitude[c11/sunDiameter], QuantityMagnitude[c12],
QuantityMagnitude[c13], QuantityMagnitude[c14]
, QuantityMagnitude[c15], QuantityMagnitude[c16], c17, c18,
QuantityMagnitude[c19], QuantityMagnitude[c20],
QuantityMagnitude[c21/sunVolume]};
Length[unionAll =
Cases[
Sort[noUnitsData, #1[[6]] > #2[[6]] &],
{_, _, _, _, _, x_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _} /;
x != 0]
]
topTemp = Take[
Sort[unionAll, #1[[5]] > #2[[5]] &],
50];
lowTemp = Take[
Cases[
Sort[unionAll, #1[[5]] < #2[[5]] &],
{_, _, _, _, x_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _} /;
x != 0]
, 50];
topMass = Take[
Sort[unionAll, #1[[6]] > #2[[6]] &]
, 50];
lowMass = Take[
Cases[
Sort[unionAll, #1[[6]] < #2[[6]] &],
{_, _, _, _, _, x_, _, _, _, _, _, _, _, _, _, _, _, _, _, _, _} /;
x != 0]
, 50];
topLuminosity = Take[
Sort[unionAll, #1[[7]] > #2[[7]] &]
, 50];
lowLuminosity = Take[
Cases[
Sort[unionAll, #1[[7]] < #2[[7]] &],
{_, _, _, _, _, _, x_, _, _, _, _, _, _, _, _, _, _, _, _, _, _} /;
x != 0]
, 50];
Manipulate[
plotType[
With[{c1 = xAxis, c2 = yAxis, tooltip1 = tooltip},
Tooltip[{(xAxisReverse )* Slot[c1], (yAxisReverse) *
Slot[c2]}, {Slot[tooltip1], Slot[c1], Slot[c2]}] &] @@@
category,
PlotRange -> Automatic, ImageSize -> 800,
AxesLabel -> {xLabel, yLabel}, AxesOrigin -> {xOrigin, yOrigin},
PlotStyle -> Red],
{{xAxisReverse, 1}, {1, -1}},
{{yAxisReverse, 1}, {1, -1}},
{{xOrigin,
0}, {-100000, -60000, -50000, -40000, -30000, -20000, -10000, \
-5000, -1000, -100, -50, -40, -30, -25, -20, -15, -10, -9, -8, -7, \
-6, -5, -4, -3, -2, -1, -.0001, -.001, -.01, -.1, -.2, -.3, -.4, -.5,
0, .5, 1, 1.5, 2, 2.5, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25, 30, 40,
50}},
{{yOrigin,
0}, {-30, -25, -20, -15, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, \
-.0001, -.001, -.01, -.1, -.2, -.3, -.4, -.5,
0, .1, .001, .0001, .00001, .5, 1, 1.5, 2, 2.5, 3, 4, 5, 6, 7, 8,
9, 10, 15, 20, 25, 30, 40, 50, 100, 1000, 5000, 10000, 20000,
30000, 40000, 50000, 60000}},
{{xAxis, 6}, {2 -> "Metallicity", 4 -> "BVColorIndex",
5 -> "Temperature", 6 -> "Mass", 7 -> "Luminosity",
8 -> "AbsoluteMagnitude", 9 -> "Gravity", 10 -> "Density",
11 -> "Diameter", 12 -> "DistanceFromEarth",
13 -> "MainSequenceLifetime", 14 -> "Parallax",
15 -> "RadialVelocity", 16 -> "Radius", 19 -> "SurfaceArea",
20 -> "VariablePeriod", 21 -> "Volume"}},
{{xLabel, "Mass"},
{"Metallicity", "BVColorIndex", "Temperature",
"Mass", "Luminosity", "AbsoluteMagnitude", "Gravity", "Density",
"Diameter", "DistanceFromEarth", "MainSequenceLifetime",
"Parallax", "RadialVelocity",
"Radius", "SurfaceArea", "VariablePeriod", "Volume"}
},
{{yAxis, 7}, {2 -> "Metallicity", 4 -> "BVColorIndex",
5 -> "Temperature", 6 -> "Mass", 7 -> "Luminosity",
8 -> "AbsoluteMagnitude", 9 -> "Gravity", 10 -> "Density",
11 -> "Diameter", 12 -> "DistanceFromEarth",
13 -> "MainSequenceLifetime", 14 -> "Parallax",
15 -> "RadialVelocity", 16 -> "Radius", 19 -> "SurfaceArea",
20 -> "VariablePeriod", 21 -> "Volume"}},
{{yLabel, "Luminosity"}, {"Metallicity", "BVColorIndex",
"Temperature",
"Mass", "Luminosity", "AbsoluteMagnitude", "Gravity", "Density",
"Diameter", "DistanceFromEarth", "MainSequenceLifetime",
"Parallax", "RadialVelocity",
"Radius", "SurfaceArea", "VariablePeriod", "Volume"}
},
{{tooltip, 3}, {1 -> "Name", 3 -> "SpectralClass",
2 -> "Metallicity", 4 -> "BVColorIndex", 5 -> "Temperature",
6 -> "Mass", 7 -> "Luminosity", 8 -> "AbsoluteMagnitude",
9 -> "Gravity", 10 -> "Density", 11 -> "Diameter",
12 -> "DistanceFromEarth", 13 -> "MainSequenceLifetime",
14 -> "Parallax", 15 -> "RadialVelocity", 16 -> "Radius",
19 -> "SurfaceArea", 20 -> "VariablePeriod", 21 -> "Volume"}},
{{plotType, ListLogLogPlot}, {ListPlot, ListLogPlot, ListLogLogPlot}},
{{category, unionAll}, {
topTemp -> "topTemp",
lowTemp -> "lowTemp",
topMass -> "topMass",
lowMass -> "lowMass",
topLuminosity -> "topLuminosity",
lowLuminosity -> "lowLuminosity",
unionAll -> "unionAll"
}}]
ListPicker[Dynamic[a],
{
topTemp -> "topTemp",
lowTemp -> "lowTemp",
topMass -> "topMass",
lowMass -> "lowMass",
topLuminosity -> "topLuminosity",
lowLuminosity -> "lowLuminosity",
unionAll -> "unionAll"
}
];

Joseph Karpinski, 2017-03-21T15:38:51Z

Avoid error in Suggestions bar (Mathematica 11.1)?
http://community.wolfram.com/groups/-/m/t/1034002
I have just updated Mathematica to version 11.1 (Windows 10 Home 64-bit) and the suggestions bar is not working properly. I receive this error message when I use one of the options of one arrow of the suggestions bar:
![Error using the suggestions bar][1]
INTERNAL SELF-TEST ERROR: EventTracker|c|521
Click here to find out if this problem is known, and to help improve
the Wolfram System by reporting it to Wolfram Research.
The error happens after installing the software and running it, with the default configurations.
Does anyone know what the problem is?
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Errorsuggestionsbar.PNG&userId=410040
[2]: https://www.wolfram.com/support/error-report/?Version=11.1.0&SystemID=Windows-x86-64&LicenseID=L4968-1451&location=EventTracker-c-521

Rodrigo Sambade Saá, 2017-03-17T23:52:17Z

Use FindShortestTour to create dog walking route?
http://community.wolfram.com/groups/-/m/t/1036964
Hi:
I was wondering if anyone has a good .nb I could use to find the most efficient route to walk a dog in any city, with stops at a pet supply store, a coffee shop, and a park, from a starting point (home) that is returned to. Or if you have a .nb with an example from a specific city, that would be useful. Any help is appreciated!

Swede White, 2017-03-21T16:51:03Z

[GIF] Drug overdose trends in USA counties 1999 - 2014
http://community.wolfram.com/groups/-/m/t/837574
The data [Drug Poisoning Mortality: United States, 1999–2014][1] are published by the US government. A few recent blogs ([1][2], [2][3], [3][4]) gave **static visualizations** of the data. **Here we show how to animate maps of the geographical spread of drug overdoses in the USA**. Below you can see 4 images, each based on ***age-adjusted death rates for drug poisoning per 100,000 population by county and year***:
1. First static frame 1999
2. Last static frame 2014
3. Animated .GIF of the whole period with 1 frame per year
4. Range of rates versus time, USA average
Quoting NPR news [Obama Asks Congress For More Money To Fight Opioid Drug Abuse][5]:
> Every day in America more than 50 people die from an overdose of prescription pain medication. Some people who start out abusing pain pills later turn to heroin, which claims another 29 lives each day.
----------
**1999: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][6]
![enter image description here][7]
----------
**2014: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][8]
![enter image description here][9]
----------
**1999 - 2014 Animation: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][10]
![enter image description here][11]
----------
**Range of rates versus time: Age-adjusted death rates for drug poisoning per 100,000 for USA average over counties**
![enter image description here][12]
Getting the data
----------------
We can download the data in .CSV format from the [CDC web site][13]. I keep the data file in the same directory as the notebook to shorten file-path strings.
SetDirectory[NotebookDirectory[]]
raw = SemanticImport["ops.csv"]
![enter image description here][14]
Making "interpreted" dataset
----------------------------
In the [Wolfram Language][15] (WL), built-in knowledge allows many kinds of imported data to be interpreted. For example, the USA counties can be interpreted as entities:
![enter image description here][16]
But I did not use `SemanticImport` to interpret the counties automatically on import, because I wanted to do this efficiently. The table has 50247 entries
Normal[raw[All, "County"]] // Length
> 50247
while there are only 3141 actual counties listed:
Normal[raw[All, "County"]] // Union // Length
> 3141
So instead of making 50247 calls to `Interpreter` we will make just 3141, and then use the efficient `Dispatch` to distribute the replacement rules over all 50247 entries. Building the `Dispatch` took only about 108 seconds:
countyRULEs = Dispatch[
Thread[# -> Interpreter["USCounty"][#]] &@
Union[Normal[raw[All, "County"]]]]; // AbsoluteTiming
> {108.124, Null}
And almost no time to interpret the dataset:
data = raw /. countyRULEs; // AbsoluteTiming
data
> {0.441731, Null}
![enter image description here][17]
Bounds of death-rates for future rescaling
------------------------------------------
Note the `StringReplace` trick used before `ToExpression` here and throughout the rest of the post: it turns a range string like "1-3" into its midpoint and an open-ended string like ">20" into its boundary value:
MinMax[ToExpression[StringReplace[Normal[
data[All, "Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2]
> {1, 20}
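To see what the trick does, apply it to one range string and one open-ended string (a minimal illustration):

    ToExpression[StringReplace[{"1-3", ">20"}, {"-" -> "+", ">" -> "2*"}]]/2

> {2, 20}

"1-3" becomes "1+3", whose half is the midpoint 2, and ">20" becomes "2*20", whose half is the boundary value 20.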
Testing color scheme
--------------------
Color schemes are important: they must blend properly with the native colors of the maps and also express the data. Here are some tests with the [Color Schemes][18] available in the Wolfram Language.
tmp = GeoNearest["City",
Entity["City", {"Atlanta", "Georgia", "UnitedStates"}], {All, Quantity[50, "Kilometers"]}];
Multicolumn[Table[
GeoRegionValuePlot[tmp -> "PopulationDensity", PlotLegends -> False,
ColorFunction -> (ColorData[{clmap, "Reverse"}][#] &), ImageSize -> 400]
, {clmap, {"CherryTones", "SolarColors", "SunsetColors",
"RustTones", "WatermelonColors", "Rainbow", "RoseColors",
"ThermometerColors", "BrownCyanTones"}}], 3]
![enter image description here][19]
Year 1999: a single-year GIS plot
---------
GeoRegionValuePlot[
Thread[Normal[data[Select[#Year == 1999 &], "County"]] ->
ToExpression[StringReplace[Normal[data[Select[#Year == 1999 &]][All,
"Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2],
GeoRange -> {{24, 50}, {-125, -66}},
GeoProjection -> "Mercator",
ColorFunctionScaling -> False,
ColorFunction -> (ColorData[{"CherryTones", "Reverse"}][
Rescale[#, {1, 20}]] &),
PlotLegends -> False,
ImageSize -> 1000] // Rasterize
Making animation
----------------
frames = ParallelTable[
GeoRegionValuePlot[
Thread[
Normal[data[Select[#Year == year &], "County"]] ->
ToExpression[StringReplace[Normal[data[Select[#Year == year &],
"Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2],
GeoRange -> {{24, 50}, {-125, -66}},
GeoProjection -> "Mercator",
ColorFunctionScaling -> False,
ColorFunction -> (ColorData[{"CherryTones", "Reverse"}][
Rescale[#, {1, 20}]] &),
PlotLegends -> False,
ImageSize -> 800],
{year, Range[1999, 2014]}];
Making legend
-------------
Panel@Grid[Transpose[{#, ColorData[{"CherryTones", "Reverse"}][Rescale[#, {1, 20}]]} & /@Range[1, 20]]]
Growth of death rates ranges vs time
------------------------------------
bandGrowth = Transpose[Table[N[Mean[ToExpression[
StringReplace[Normal[data[Select[#Year == y &]][All,
"Estimated Age-adjusted Death Rate, 11 Categories (in \
ranges)"]], {"-" -> "~List~", ">" -> "{#,#}&@"}]]]], {y, Range[1999, 2014]}]]
BarChart[{#[[1]], #[[2]] - #[[1]]} & /@ Transpose[bandGrowth],
PlotTheme -> "Marketing", ChartLayout -> "Stacked",
ChartLabels -> {Range[1999, 2014], None}, ImageSize -> 850,
AspectRatio -> 1/3, ChartStyle -> {Yellow, Red}]
Another color scheme sample
------------------
In this color scheme, where low values are dark, you can see a few white spots more clearly. Those are the few counties where data are missing.
----------
1999
----
![enter image description here][20]
----------
2014
----
![enter image description here][21]
[1]: https://data.cdc.gov/NCHS/NCHS-Drug-Poisoning-Mortality-County-Trends-United/pbkm-d27e?category=NCHS&view_name=NCHS-Drug-Poisoning-Mortality-County-Trends-United
[2]: http://blogs.cdc.gov/nchs-data-visualization/drug-poisoning-mortality/
[3]: https://evergreen.data.socrata.com/stories/s/b5gk-7v6a/
[4]: http://www.nytimes.com/interactive/2016/01/07/us/drug-overdose-deaths-in-the-us.html
[5]: http://www.npr.org/sections/thetwo-way/2016/02/02/465348441/obama-asks-congress-for-more-money-to-fight-opioid-drug-abuse
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10097figure1.png&userId=11733
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6105figure2.png&userId=11733
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif-41464821.gif&userId=11733
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=opoid.png&userId=11733
[13]: https://data.cdc.gov/NCHS/NCHS-Drug-Poisoning-Mortality-County-Trends-United/pbkm-d27e?category=NCHS&view_name=NCHS-Drug-Poisoning-Mortality-County-Trends-United
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdfregdavscr345y5t.png&userId=11733
[15]: https://www.wolfram.com/language/
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2016-04-11_05-01-09.png&userId=11733
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2016-04-11_05-05-57.png&userId=11733
[18]: http://reference.wolfram.com/language/guide/ColorSchemes.html
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdf45yhfhgsdy5uejtyhsgdf.png&userId=11733
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdafsfqegr.png&userId=11733
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dfghu6r7euyrtea.png&userId=11733
Vitaliy Kaurov, 2016-04-11T10:21:01Z
Solve an equation with a Piecewise function?
http://community.wolfram.com/groups/-/m/t/1035180
Hello, I'm new to Mathematica, though I know a little about it. I'm trying to find a pair of values for a Piecewise function of x, but I keep having problems with Solve producing a lot of errors. Can you tell me where my mistake is? The whole thing is in the attached file; basically it's:
F(x)=Piecewise[...](x,wl,wp)
Solve[MaxValue[F,x]/MinValue[F,x]==1.15,{wl,wp}]
or
CylindricalDecomposition[1.1<=MaxValue[F,x]/MinValue[F,x]<=1.2,{wl,wp}]
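Since the actual F is in the attachment, here is only a sketch of a numerical route, using a made-up piecewise function and bounds purely for illustration: guard the ratio with NumericQ and hand it to NMinimize instead of the symbolic Solve, which generally cannot handle MaxValue/MinValue of a numeric Piecewise.

```mathematica
(* Toy stand-in for the poster's F: a piecewise function of x with
   parameters wl and wp (these definitions are illustrative only) *)
f[x_, wl_, wp_] := Piecewise[{{wl Sin[x] + 2, x < 0}, {wp Cos[x] + 2, x >= 0}}]

(* max/min ratio over x; NumericQ guards keep it from evaluating symbolically *)
ratio[wl_?NumericQ, wp_?NumericQ] :=
 NMaxValue[{f[x, wl, wp], -Pi <= x <= Pi}, x]/
  NMinValue[{f[x, wl, wp], -Pi <= x <= Pi}, x]

(* look for wl, wp with ratio near 1.15 numerically instead of with Solve *)
NMinimize[{(ratio[wl, wp] - 1.15)^2, 0 < wl < 1, 0 < wp < 1}, {wl, wp}]
```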
So I'm trying to get wl and wp values for optimization, but I'm certainly doing something wrong. Please help.
Pio Tyldens, 2017-03-19T15:24:41Z
[✓] Manipulate of a 2-D plot?
http://community.wolfram.com/groups/-/m/t/1033865
I am trying to create a plot of a logistic function with parameters AHAT and MDIF. The values of these parameters depend upon underlying parameters A and DIF and an underlying ability distribution with covariance omega and mean vector mu. I have to go through several steps to determine AHAT and MDIF. I would like to create a plot where I can manipulate sigma, mu1 and mu2 to see how they affect the logistic curve. I can't figure out how to create the plot; leaving the variables sigma, mu1 and mu2 in the expressions defined outside the Manipulate will not work. I have attached an MS Word document with the code. Does anyone have suggestions?
A={{1.2,.4},{.8,.5},{.9,1.0},{1.3,.5},{.6,.9}};u={{μ1,μ2}};
DIF={{.5},{.7},{1.0},{-1.},{2.0}};
Ω={{σ,1},{1,σ}}
L=CholeskyDecomposition[Ω]
W=Eigenvectors[Transpose[L].Transpose[A].A.L]
AM=List[A[[1]]]
W1M=List[W[[1]]]
W2M=List[W[[2]]]
AHAT=AM.Transpose[W1M]/Sqrt[2.89+AM.Transpose[W2M].W2M.Transpose[AM]]
MDIF=(DIF[[1]]-AM.Transpose[u])/AM.Transpose[W1M]
Manipulate[Plot[{(1/(1+Exp[-1.7*AHAT*(θ-MDIF)])),},{θ,-3,3},AxesLabel->{θ,p}],{σ,1,3},{μ1,-2,2},{μ2,-2,2}]
Terry Ackerman, 2017-03-17T19:28:15Z
Export a 3D graphics in dxf or stl not converted in triangles?
http://community.wolfram.com/groups/-/m/t/1036461
I need to export in dxf or stl format some 3D geometrical constructions I made in Mathematica. But when the exported files are examined in a CAD viewer I see only figures formed out of triangles. Sometimes some triangles appear in unexpected places. Here is an example.
tube = {CapForm[None], Tube[{{0, 0, 0}, {0, 0, 10}}, 5]};
cube = Cuboid[{10, 10, 0}, {20, 20, 10}];
G3D = Graphics3D[{tube, cube}];
Export["G3D.dxf", G3D];
The circle of the tube is decomposed into a polygon, the tube's surface into rectangles, and the rectangles into triangles. The cube looks better, but I do not want it made out of triangles; I want rectangles.
Petre Logofatu, 2017-03-21T11:46:39Z
Subscript more than one character? a_(2)=3a_{2-1}
http://community.wolfram.com/groups/-/m/t/1036556
How do you get Wolfram to read the 2-1 completely subscripted (as a "base"), as in a_(2)=3a_(2-1)? It only sends the 2 to the subscript of a at the end of the equation, never the -1. Please help.
Rudy Ram, 2017-03-21T01:13:06Z
Make blinking work for all percentages of the screens?
http://community.wolfram.com/groups/-/m/t/1036511
I'm building interactive material for my classes, and I want it to include blinking cues, aids, questions, etc.
I am using two panels: one for the graphics (left panel) and the other for the explanations (right panel). I control the graph from the right panel.
In the example I present, the blinking does not work at all screen magnification percentages, and I do not know why. For this example there is no blinking at 110% and 115% magnification; in general, in the examples I am developing, the blinking fails at various percentages.
I'm using 11.1 (the same happens with 11.0 and 10.x).
My hypotheses:
* I am not using StringForm correctly.
* My definitions are not yet appropriate.
Thanks in advance.
Ernesto Espinosa, 2017-03-21T00:17:43Z
[✓] Inconsistent language design MinMax - Solved in V11.1
http://community.wolfram.com/groups/-/m/t/816095
The new function MinMax is defined (first line in the documentation) as follows:
![enter image description here][1]
However this does not hold for an empty list
{Min[{}], Max[{}]}
MinMax[{}]
So it is self-contradictory and slightly confusing. I know the MinMax documentation says it returns {-Infinity, Infinity}, but that does not fit the definitions of Min, Max and MinMax. Like other functions such as ReIm and AbsArg, the function should just return {Min[..],Max[..]} (but more optimized), the way ReIm returns {Re[..],Im[..]} and AbsArg returns {Abs[..],Arg[..]}...
Take the following example: I want the minimum and maximum of those lists. It should be the same to take MinMax of each of them and then the min of all the minima and the max of all the maxima, but the function doesn't handle empty lists in a coherent way (as Min and Max by themselves do).
SHMinMax[x___]:={Min[x],Max[x]}
num1={{4,7},{5,9},{11,13,2,8}};
num2=Flatten[num1];
(* all give same results as there is no 'empty' list *)
MinMax[num2]
{Min[#1],Max[#2]}&@@Transpose[MinMax/@num1]
{Min[#1],Max[#2]}&@@Transpose[SHMinMax/@num1]
(* empty list! built-in MinMax function is not coherent *)
num1={{4,7},{5,9},{11,13,2,8},{}};
num2=Flatten[num1];
MinMax[num2]
{Min[#1],Max[#2]}&@@Transpose[MinMax/@num1]
{Min[#1],Max[#2]}&@@Transpose[SHMinMax/@num1]
Note that just Min (or Max) is coherent:
num1 = {{4, 7}, {5, 9}, {11, 13, 2, 8}, {}};
num2 = Flatten[num1];
Min[num2]
Min@(Min /@ num1)
The very definition of Min[] to return infinity, has been done such that Min[3,Min[4],Min[-1,6]] or Min[3,4,-1,6] or Min[3,Min[],Min[4,-1],Min[6]] always give the same results, regardless of the order.
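The identity-element behaviour described above is easy to check directly (a quick illustration, not part of the original post):

```mathematica
(* Min[] with no arguments returns Infinity, the identity element of Min,
   so splitting a minimisation in any way gives the same answer *)
Min[3, Min[4], Min[-1, 6]]          (* -1 *)
Min[3, 4, -1, 6]                    (* -1 *)
Min[3, Min[], Min[4, -1], Min[6]]   (* -1, since Min[] is Infinity *)
```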
More confusing still is that it works differently for empty Interval objects than for empty lists.
(* built in MinMax IS coherent with empty intervals *)
{Min[Interval[]], Max[Interval[]]}
MinMax[Interval[]]
SHMinMax[Interval[]]
(* but not with empty lists *)
{Min[{}], Max[{}]}
MinMax[{}]
SHMinMax[{}]
To summarise, the choice of returning {-Infinity, Infinity} for MinMax[{}] is inconsistent and incoherent with the definitions of Min[], Max[] and MinMax itself, and, more puzzlingly, it also gives 'unusual' results for Interval objects.
I would like to know how this design choice was conceived. To me it seems very illogical and incoherent as it is now, while my own function SHMinMax, which implements exactly the first definition given for MinMax, follows all the logic and is consistent throughout (for lists, empty lists, intervals and empty intervals!).
Please help me understand the logic!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2016-03-03at18.27.05.png&userId=73716
Sander Huisman, 2016-03-03T14:44:17Z
FindClusters versus ClusteringComponents?
http://community.wolfram.com/groups/-/m/t/960586
Dear all,
I just found out that there's a difference in the number of clusters retrieved when using FindClusters and ClusteringComponents on the same data set, even when exactly the same settings are used:
Do[
koppels = RandomReal[{0, 100}, 500];
cl = FindClusters[koppels, DistanceFunction -> EuclideanDistance, Method -> "Optimize"];
indices2 = ClusteringComponents[koppels, Automatic, 1, DistanceFunction -> EuclideanDistance, Method -> "Optimize"];
Print[{First@Dimensions[cl], Max[indices2]}];,
{i, 1, 6}]
{2,2}
{2,2}
{1,2}
{2,2}
{1,2}
{1,2}
This shouldn't be the case, because the same clusters should be found either way.
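One thing worth ruling out first (a sketch of a diagnostic, my suggestion rather than an explanation): both functions may involve random initialisation, so seeding the generator immediately before each call makes the comparison reproducible.

```mathematica
SeedRandom[1]; koppels = RandomReal[{0, 100}, 500];
SeedRandom[42];
cl = FindClusters[koppels, DistanceFunction -> EuclideanDistance,
   Method -> "Optimize"];
SeedRandom[42];
indices2 = ClusteringComponents[koppels, Automatic, 1,
   DistanceFunction -> EuclideanDistance, Method -> "Optimize"];
{Length[cl], Max[indices2]}  (* if these still differ, randomness is not the cause *)
```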
Does someone have an idea of what's going on here?
Thanks for the information!
Jan
Jan Baetens, 2016-11-10T18:00:40Z
Calculate (A*Nabla)B ?
http://community.wolfram.com/groups/-/m/t/1035972
Does anyone know how to write (A*Nabla)B in Mathematica?
As far as I understand, I can also write (A*Nabla)B = Nabla(A*Transpose[B]) - B(Nabla*A), but I can't get this to work either, as I don't know how to write A*Transpose[B].
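For what it's worth, here is a sketch of spelling the operator out component-wise in Cartesian coordinates (the field names a1..b3 are placeholders I made up): the i-th component of (A.Nabla)B is Sum_j A_j D[B_i, x_j].

```mathematica
vars = {x, y, z};
A = {a1 @@ vars, a2 @@ vars, a3 @@ vars};  (* placeholder vector fields *)
B = {b1 @@ vars, b2 @@ vars, b3 @@ vars};

(* (A . Nabla) B : the i-th component is A dotted with Grad of B_i *)
convective = Table[A . Grad[B[[i]], vars], {i, 3}]
```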
Any ideas?
mkssion, 2017-03-20T18:28:47Z
Solve an Elliptic PDE for u[x,y] with the rhs given by numerical f[u]?
http://community.wolfram.com/groups/-/m/t/1035422
I have a somewhat unusual mathematical problem. I need to solve numerically a second-order elliptic PDE for u[x,y] with the right-hand side given by a numerical function F[u], that is, some given function of u, not of (x,y).
In 1D things work well
(* make interpolation of a linear function: *)
FofA = Interpolation[Table[{x, x}, {x, 0, 1, 10^-2}]];
(*construct a numerical function from the interpolation: *)
FofA1[x_] := FofA[z] /. z -> x
(* Find solutions for u' = FofA1[u] *)
uofx = Flatten[
NDSolve[{Dt[u[x], x] == (FofA1[u[x]]), u[1] == 1},
u, {x, 0, 1}]].{1};
(* Plot it and compare with analytical *)
Show[{Plot[u[x] /. uofx, {x, 0, 1}, PlotRange -> All],
Plot[E^(-1 + x), {x, 0, 1}]}]
But in 2D it fails. One hint, I think, is this: if I try to solve the Poisson equation in the form
D[u[x, y], {x, 2}] + D[u[x, y], {y, 2}] == u[x, y]
NDSolveValue works OK, but it fails for
D[u[x, y], {x, 2}] + D[u[x, y], {y, 2}] == u[x, y]^{1.}
(when the rhs is numerically evaluated)
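One workaround sketch (my own suggestion, with convergence not guaranteed) is Picard iteration: each NDSolveValue call then sees only the previous iterate fed through the interpolation, never a symbolic F[u].

```mathematica
FofA = Interpolation[Table[{v, v}, {v, 0, 1, 10^-2}]];  (* F(u) = u, as in 1D *)
u0 = Function[{x, y}, 1.];  (* initial guess *)
Do[
 With[{uprev = u0},
  u0 = NDSolveValue[
    {D[u[x, y], {x, 2}] + D[u[x, y], {y, 2}] ==
      FofA[Clip[uprev[x, y], {0, 1}]],  (* Clip keeps arguments in the
                                           interpolation's domain *)
     DirichletCondition[u[x, y] == 1, True]},
    u, {x, 0, 1}, {y, 0, 1}]],
 {5}]  (* a few fixed-point sweeps; compare successive iterates to judge convergence *)
```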
Thanks a lot for the insight.
Maxim Lyutikov, 2017-03-19T22:24:02Z