Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Staff Picks sorted by active

Performance tuning in Wolfram Language
http://community.wolfram.com/groups/-/m/t/1037730
*NOTE: Please see the original version of this post [**HERE**][1]. Cross-posted here per suggestion of [@Vitaliy Kaurov][at0].*
----------
Since Mathematica is a symbolic system, with a symbolic evaluator much more
general than in numerically-based languages, it is not surprising that performance tuning can be
trickier here. There are many techniques, but they can all be understood
from a single main principle:
*Avoid the full Mathematica symbolic evaluation process as much as possible.*
All techniques seem to reflect some facet of it. The main idea here is that most of the
time, a slow Mathematica program is slow because many Mathematica functions are very general. This
generality is a great strength, since it enables the language to support better and more powerful abstractions, but in many places in a program such generality, used without care, can be a (huge) overkill.
I won't be able to give many illustrative examples in the limited space, but they can be found in
several places, including some WRI technical reports (Daniel Lichtblau's
one on efficient data structures in Mathematica comes to mind), a very good
book by David Wagner on Mathematica programming, and, most notably, many MathGroup
posts. I also discuss a limited subset of them in [my book][3]. I will supply more references soon.
Here are a few of the most common ones (I only list those available within the Mathematica
language itself, not mentioning CUDA/OpenCL or links to other languages, which are
of course also possibilities):
1. *Push as much work into the kernel at once as possible, work with as large
chunks of data at a time as possible, without breaking them into pieces*
1.1. Use built-in functions whenever possible. Since they are implemented
in the kernel, in a lower-level language (C), they are typically (but not always!)
much faster than user-defined ones solving the same problem. The more specialized
version of a built-in function you are able to use, the more chances you have for a speed-up.
1.2. Use functional programming (`Map`, `Apply`, and friends). Also, use pure functions
in `#-&` notation when you can; they tend to be faster than `Function`-s with named
arguments or those based on patterns (especially for not computationally-intensive
functions mapped onto large lists).
1.3. Use structural and vectorized operations (`Transpose, Flatten,
Partition, Part` and friends); they are even faster than functional ones.
1.4. Avoid using procedural programming (loops, etc.), because this programming
style tends to break large structures into pieces (array indexing, etc.).
This pushes a larger part of the computation outside of the kernel and makes it slower.
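As a hedged illustration of points 1.2-1.4 (timings are machine-dependent, and the function names are just illustrative), compare a procedural loop with a single vectorized kernel call on the same data:

```mathematica
(* procedural style: indexing breaks the array into pieces *)
sumOfSquaresProc[list_] :=
  Module[{s = 0.}, Do[s += list[[i]]^2, {i, Length[list]}]; s];

(* vectorized style: one Listable operation plus one structural one *)
sumOfSquaresVec[list_] := Total[list^2];

data = RandomReal[1, 10^6];
AbsoluteTiming[sumOfSquaresProc[data]]  (* typically far slower *)
AbsoluteTiming[sumOfSquaresVec[data]]
```

The point is that `list^2` and `Total` each process the whole packed array in one kernel call, instead of unpacking it element by element.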
2. *Use machine-precision whenever possible*
2.1. Be aware of, and use, the Listability of built-in numerical functions, applying them to
large lists of data rather than using `Map` or loops.
2.2. Use `Compile` when you can. Use the new capabilities of `Compile`, such as `CompilationTarget->"C"`,
and make your compiled functions parallel and Listable.
2.3. Whenever possible, use vectorized operations (`UnitStep, Clip, Sign, Abs`, etc)
inside `Compile`, to realize "vectorized control flow" constructs such as `If`, so that
you can avoid explicit loops (at least as innermost loops) also inside `Compile`. This
can move you in speed from Mathematica byte-code to almost native C speed, in some cases.
2.4. When using `Compile`, make sure that the compiled function doesn't bail out to non-compiled evaluation. See examples [in this MathGroup thread][4].
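A minimal sketch of points 2.2-2.3; note that `CompilationTarget -> "C"` assumes a C compiler is installed, and `Compile` falls back to the Mathematica virtual machine otherwise. The Newton iteration here is purely illustrative:

```mathematica
(* compiled, Listable, parallel Newton iteration for Sqrt[x] *)
cSqrt = Compile[{{x, _Real}},
   Module[{s = 1.}, Do[s = 0.5 (s + x/s), {20}]; s],
   CompilationTarget -> "C",
   RuntimeAttributes -> {Listable},
   Parallelization -> True];

cSqrt[Range[1., 5.]]  (* applied to the whole list at once, no Map needed *)
```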
3. *Be aware that Lists are implemented as arrays in Mathematica*
3.1. Pre-allocate large lists
3.2. Avoid `Append`, `Prepend`, `AppendTo` and `PrependTo` in loops, for building
lists, etc. (because they copy the entire list to add a single element, which leads
to quadratic rather than linear complexity for list-building).
3.3. Use linked lists (structures like `{1,{2,{3,{}}}}`) instead of plain lists
for list accumulation in a program. The typical idiom is `a = {new element, a}`.
Because `a` is a reference, a single assignment is constant-time.
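A minimal sketch of this accumulation idiom, including how to recover a flat list afterwards:

```mathematica
a = {};
Do[a = {i, a}, {i, 5}];  (* each step is O(1): the tail is shared, not copied *)
a          (* {5, {4, {3, {2, {1, {}}}}}} *)
Flatten[a] (* {5, 4, 3, 2, 1} *)
```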
3.4. Be aware that pattern-matching for sequence patterns (`BlankSequence`,
`BlankNullSequence`) is also based on Sequences being arrays. Therefore, a rule
`{fst_,rest___}:>{f[fst],g[rest]}` will copy the entire list when applied. In particular, don't
use recursion in a way which may look natural in other languages. If you want to use recursion on lists, first convert your lists to linked lists.
4. *Avoid inefficient patterns, construct efficient patterns*
4.1. Rule-based programming can be both very fast and very slow, depending on how
you build your structures and rules, but in practice it is easier to inadvertently
make it slow. It will be slow for rules which force the pattern-matcher to make many
a priori doomed matching attempts, for example by under-utilizing each run of the
pattern-matcher through a long list (expression). Sorting elements is a good example:
`list//.{left___,x_,y_,right___}/;x>y:>{left,y,x,right}` has cubic complexity in the
size of the list (an explanation is given e.g. [here][5]).
4.2. Build efficient patterns, and corresponding structures to store your data, making
the pattern-matcher waste as little time on false matching attempts as possible.
4.3. Avoid using patterns with computationally intensive conditions or tests. The
pattern-matcher will give you the most speed when patterns are mostly syntactic in
nature (testing structure, heads, etc.). Every time a condition (`/;`) or pattern test (`?`)
is used, the evaluator is invoked by the pattern-matcher for every potential match,
and this slows it down.
5. *Be aware of the immutable nature of most Mathematica built-in functions*
Most Mathematica built-in functions which process lists create a copy of the original list and
operate on that copy. Therefore, they may have linear time (and space) complexity in the
size of the original list, even if they modify the list in only a few places. One universal
built-in function that does not create a copy, modifies the original expression, and thus does not
have this issue, is `Part`.
5.1. Avoid using most list-modifying built-in functions for a large number of
small independent list modifications which cannot be formulated as a single step
(for example, `NestWhile[Drop[#,1]&,Range[1000],#<500&]`).
5.2. Use extended functionality of `Part` to extract and modify a large number of
list (or more general expression) elements at the same time. This is very fast,
and not just for packed numerical arrays (`Part` modifies the original list).
5.3. Use `Extract` to extract many elements at different levels at once, passing
to it a possibly large list of element positions.
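A short sketch of 5.2 and 5.3 (the expressions and positions here are just illustrative):

```mathematica
(* Part modifies the original list in place, for many positions at once *)
list = Range[10];
list[[{2, 4, 6}]] = 0;
list  (* {1, 0, 3, 0, 5, 0, 7, 8, 9, 10} *)

(* Extract pulls many elements from different levels in one call *)
expr = {{p, q}, {r, {s, t}}};
Extract[expr, {{1, 2}, {2, 2, 1}}]  (* {q, s} *)
```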
6. *Use efficient built-in data structures*
The following internal data structures are very efficient and can be used in
many more situations than it may appear from their stated main purpose. Lots of such examples can be found by searching the Mathgroup archive, particularly contributions of Carl Woll.
6.1. Packed arrays
6.2. Sparse arrays
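A hedged sketch of checking and enforcing packing; the functions used here live in the `Developer`` context rather than `System``:

```mathematica
packed = Range[10^6];
Developer`PackedArrayQ[packed]        (* True: Range produces a packed array *)

mixed = Append[Range[5], "x"];        (* a non-numeric element forces unpacking *)
Developer`PackedArrayQ[mixed]         (* False *)

Developer`ToPackedArray[N[Range[5]]]  (* repack a homogeneous numeric list *)
```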
7. *Use hash-tables.*
Starting with version 10, immutable associative arrays (Associations) are available in Mathematica.
7.1. Associations
The fact that they are immutable does not prevent them from having efficient insertion and deletion of key-value pairs (cheap copies differ from the original association by the presence, or absence, of a given key-value pair). They represent the idiomatic associative arrays in Mathematica, and have very good performance characteristics.
For earlier versions, the following alternatives work pretty well, being based on Mathematica's internal hash-tables:
7.2. Hash-tables based on `DownValues` or `SubValues`
7.3. `Dispatch`
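A minimal sketch of the pre-Association, `DownValues`-based hash-table idiom from 7.2:

```mathematica
ClearAll[hash];
hash[key_] := Missing["KeyAbsent", key];  (* general fallback for absent keys *)
hash["a"] = 1;   (* insertion adds a specific DownValue, found by hashing *)
hash["b"] = 2;
hash["a"]        (* 1: effectively constant-time lookup *)
Unset[hash["b"]] (* deletion *)
```

The specific definitions like `hash["a"] = 1` are matched before the general `hash[key_]` pattern, so lookups of stored keys never fall through to the fallback.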
8. *Use element-position duality*
Often you can write faster functions to work with positions of elements rather than
elements themselves, since positions are integers (for flat lists). This can give you
up to an order of magnitude speed-up, even compared to generic built-in functions
(`Position` comes to mind as an example).
9. *Use Reap-Sow*
`Reap` and `Sow` provide an efficient way of collecting intermediate results, and generally
"tagging" parts you want to collect, during the computation. These commands also go well
with functional programming.
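A short sketch of collecting intermediate results with `Reap` and `Sow` instead of appending to a list:

```mathematica
{result, {collected}} = Reap[
   Nest[(Sow[#]; # + 2) &, 1, 5]  (* Sow tags each intermediate value *)
];
result     (* 11 *)
collected  (* {1, 3, 5, 7, 9} *)
```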
10. *Use caching, dynamic programming, lazy evaluation*
10.1. Memoization is very easily implemented in Mathematica, and can save a lot of execution
time for certain problems.
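The classic memoization idiom from 10.1, where each computed value is stored as a new definition at run-time:

```mathematica
ClearAll[fib];
fib[0] = 0; fib[1] = 1;
fib[n_Integer?Positive] := fib[n] = fib[n - 1] + fib[n - 2];
fib[100]  (* fast; the plain recursion without caching is exponential *)
```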
10.2. In Mathematica, you can implement more complex versions of memoization, where you can
define functions (closures) at run-time, which will use some pre-computed parts in their
definitions and therefore will be faster.
10.3. Some problems can benefit from lazy evaluation. This seems more relevant to memory
efficiency, but can also affect run-time efficiency. Mathematica's symbolic constructs make
it easy to implement.
A successful performance-tuning process usually employs a combination of these techniques,
and you will need some practice to identify the cases where each of them will be beneficial.
[1]: http://mathematica.stackexchange.com/a/29351
[3]: http://www.mathprogramming-intro.org/
[4]: https://groups.google.com/d/topic/comp.soft-sys.math.mathematica/XOXapJm_Q1Q/discussion
[5]: http://www.mathprogramming-intro.org/book/node355.html
[at0]: http://community.wolfram.com/web/vitaliyk

Leonid Shifrin, 2017-03-22T20:05:20Z

Testing for beauty
http://community.wolfram.com/groups/-/m/t/1037946
What do you think of the idea of automatically judging if a piece of data was beautiful? This could mean the data in an image (ImageData) or maybe the result of a computation (e.g. CellularAutomaton), or anything, although I am thinking of a list or an array of numbers primarily.
My first thought was that there are many filters for image processing, but I don't know which might be useful. The next thing I think of is mathematical transforms. For example, taking the Fourier or Hadamard transform you expect the coefficients to decay, and if they don't then that would not be nice.
This code deletes the constant term and computes a measure of the variance, using `Mean` as a shortcut for counting the 0's and 1's (those closer to the min than to the max, respectively) without knowing the length or dimension. (Note that `Fourier` does not assume the size is a power of 2, but Hadamard does.)
FourierBeauty[list_] := Mean[1. - Round[Rescale[Abs[Rest[Flatten[Fourier[list]]]]]]]
Maybe for an image this might not be bad. Here is what it picks out of the ExampleData test images:
Grid[{#, ExampleData[#]} & /@
MaximalBy[ExampleData["TestImage"],
FourierBeauty[
ImageData[
Binarize[
ImageResize[
ColorConvert[ExampleData[#], "Grayscale"], {64, 64}]]]] &],
Frame -> All]
![enter image description here][1]
but here are the CAs it likes the most.
MaximalBy[Range[0, 255],
 Sum[FourierBeauty[CellularAutomaton[#, RandomInteger[1, 2^8], {{0, 2^8 - 1}}]], {100}] &]
which returns `{1, 3, 5, 17, 57, 87, 119, 127}`.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fourier-beauty-image.jpg&userId=23275

Todd Rowland, 2017-03-23T02:46:11Z

Glyph Frieze Patterns
http://community.wolfram.com/groups/-/m/t/1032650
Frieze patterns overlap art and math. The design of the base tile in a frieze pattern is artistic, while its repetition can be defined mathematically. This makes frieze patterns a good candidate for exploration with the Wolfram Language.
The bit of code below creates random frieze patterns from font glyphs. I chose sixteen asymmetric glyphs. Others would work, but they should be asymmetric to avoid double symmetries. Here is what the code does:
- randomly select one of the glyphs
- create a random dark color
- randomly rotate the glyph by 45° angles
- crop any excess background
- tile it according to one of the seven frieze patterns
There are 896 possible patterns, not counting the color variations. The results are often startling. Here are a few:
![\[screen shot\]][1]
![\[screen shot\]][2]
![\[screen shot\]][3]
![\[screen shot\]][4]
This suited my need as a small part of a larger project, a sort of school-house trivia game called *Chicken Scratch*. The questions must have a fair amount of randomness so the students reason rather than memorizing answers. For this question, the game presents the frieze pattern and the players choose from the names of four geometric definitions.
The Wolfram Demonstrations Project does have a half-dozen or so demonstrations for exploring frieze patterns. This is the first I've seen that uses glyphs for the base tile design. Though I could turn this into a demonstration, I need to focus on *Chicken Scratch*. Feel free to use this code however you want.
color1 = RGBColor[Table[RandomReal[.6], 3]];
symbol = RandomChoice[{9873, 9730, 38, 9816, 163, 9758, 8730, 8950,
11001, 10729, 10771, 9736, 10000, 9799, 9732, 8623}];
stamp = ImageCrop[
ImageRotate[
Rasterize[
Graphics[{color1,
Style[Text[FromCharacterCode[symbol]], 200]}]], (
RandomInteger[7] \[Pi])/8, Background -> White]];
width = ImageDimensions[stamp][[1]];
frieze = Switch[RandomInteger[{1, 7}],
1, ImageAssemble[Table[stamp, 12]],
2,
top = ImageAssemble[Table[stamp, 12]];
bot = ImageAssemble[Flatten[{
ImageRotate[ImageCrop[stamp, {width/2, Full}, Right], \[Pi]],
Table[ImageRotate[stamp, \[Pi]], 11],
ImageRotate[
ImageCrop[stamp, {width/2, Full}, Left], \[Pi]]}]];
imgLst = ConformImages[{top, bot}];
ImageAssemble[{{imgLst[[1]]}, {imgLst[[2]]}}],
3,
top = ImageAssemble[Table[stamp, 12]];
bot = ImageAssemble[Flatten[{
ImageReflect[ImageCrop[stamp, {width/2, Full}, Left]],
Table[ImageReflect[stamp], 11],
ImageReflect[ImageCrop[stamp, {width/2, Full}, Right]]}]];
imgLst = ConformImages[{top, bot}];
ImageAssemble[{{imgLst[[1]]}, {imgLst[[2]]}}],
4, ImageAssemble[
Riffle[Table[stamp, 6], Table[ImageReflect[stamp, Left], 6]]],
5, ImageAssemble[{Table[stamp, 12],
Table[ImageReflect[stamp], 12]}],
6, ImageAssemble[{Riffle[Table[stamp, 6],
Table[ImageReflect[stamp, Left], 6]],
Riffle[Table[ImageReflect[stamp, Left], 6], Table[stamp, 6]]}],
7, ImageAssemble[{Riffle[Table[stamp, 6],
Table[ImageReflect[stamp, Left], 6]],
Riffle[Table[stamp, 6], Table[ImageReflect[stamp, Left], 6]]}]];
pic = Image[frieze, ImageSize -> {{800}, {100}}]
You may have noticed that my code relies on procedural programming constructs like `Switch` and `If`. I have only been using the Wolfram Language for about a year. I'm grateful that the Wolfram Language allows me to use procedural techniques while I learn how to write more elegant function-based code.
Oh, there is a possibility that some of the glyphs won't work on your system because they rely on what fonts you have on your machine. If that's the case, replace the character codes with ones that you do have.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.24.33AM.png&userId=788861
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.27.58AM.png&userId=788861
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.27.20AM.png&userId=788861
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at9.25.47AM.png&userId=788861

Mark Greenberg, 2017-03-17T00:12:50Z

Code puzzles: turning docs into educational games
http://community.wolfram.com/groups/-/m/t/1032663
Teaching programming and assessing learning progress is often a very custom task. I wanted to create a completely automated, "practically" infinite stream of random puzzles that guide a learner towards improving programming skills. I think the major problem is content creation. To test whether the learner knows a programming concept, an exercise needs to be wisely designed. And it is better to have a randomized set of such exercises, to definitively test the knowledge and exclude guessing, cheating, and so on. Creating such educational materials is often very tedious, time-consuming, and manual. Exactly like creating good documentation. I will explain one simple idea of using docs to make an educational game. This is just a bare-bones prototype, to clearly follow the inner workings (try it out & share: https://wolfr.am/bughunter ). Please comment with feedback on how we can develop this idea further.
[![enter image description here][1]][3]
# Introduction: efficient use of resources
The docs are a trove of wealth and depth of information and should be explored beyond their regular usage. The painstaking, time-consuming manual effort of creating good programming documentation should be used to its fullest potential. An automated game play would be a novel take on docs. We can use the existing code examples in the docs to randomly pull pieces of code and make programming exercises automatically. Being able to read code and find bugs is, in my experience, one of the most enlightening practices. The goal of the game linked above is to find a defect in the input code (a bug) and fix it. Hence, the "bug hunter". There are just 2 possible outcomes of a single game cycle, and after each you can "try again":
![enter image description here][4]
# Core game code: making puzzles
Wolfram Language (WL) documentation is one of the best I've seen. It has pages and pages of examples, starting from simple ones and going through all the details of usage. Moreover, the docs are written in WL itself; furthermore, WL can access the docs and even has internal self-knowledge of its structure via `WolframLanguageData`. For instance, this is how you can show a relationship community graph for symbols related to `GatherBy`:
WolframLanguageData["GatherBy", "RelationshipCommunityGraph"]
![enter image description here][5]
We can use `WolframLanguageData` to access docs examples and then drop some parts of the code. The puzzle is then for the learner to find what is missing. For the sake of clarity in designing a small working prototype, let's limit the tested WL functions and corresponding docs' pages to some small number. So out of ~5000 (and we [just released a new addition][6]):
WolframLanguageData[] // Length
`4838`
built-in symbols, I just take 30
functions = {"Append", "Apply", "Array", "Cases", "Delete", "DeleteCases", "Drop", "Except",
"Flatten", "FlattenAt", "Fold", "Inner", "Insert", "Join", "ListConvolve", "Map", "MapThread",
"Nest", "Outer", "Partition", "Prepend", "ReplacePart", "Reverse", "RotateLeft", "RotateRight",
"Select", "Sort", "Split", "Thread", "Transpose"};
functions // Length
30
that are listed on a [very old but neat animated page][7] of some essential core-language collection. I will also add some "sugar syntax" to potential removable parts of code:
sugar = {"@@", "@", "/@", "@@@", "#", "^", "&"};
So, for instance, out of the following [example in docs][8] we could remove a small part to make a puzzle:
![enter image description here][9]
Here is an example of "sugar syntax" removal, which for novice programmers would be harder to solve:
![enter image description here][10]
The next step is to define a function that can check whether a string is a built-in symbol (a function; all ~5000 of them) or one of the sugar syntax elements we defined above:
ClearAll[ExampleHeads];
ExampleHeads[e_]:=
Select[
Cases[e,_String, Infinity],
(NameQ["System`"<>#]||MemberQ[sugar,#])&&#=!="Input"&
]
The next function essentially makes a single quiz question. First it randomly picks a function from the list of 30 symbols we defined. Then it goes to the doc page of that symbol, to the section called "Basic Examples". It finds a random example and removes a random part of it:
ranquiz[]:=Module[
{ranfun=RandomChoice[functions],ranexa,ranhead},
ranexa=RandomChoice[WolframLanguageData[ranfun,"DocumentationBasicExamples"]][[-2;;-1]];
ranhead=RandomChoice[ExampleHeads[ranexa[[1]]]];
{
ReplacePart[#,Position[#,ranhead]->""]&@ranexa[[1]],
ranexa[[2]],
ranhead,
ranfun
}
]
Now we will define a few simple variables and tools.
# Image variables
I keep marveling how convenient it is that Mathematica front end can make images to be part of code. This makes notebooks a great IDE:
![enter image description here][11]
# Databin for tracking stats
It is important to have statistics of your learning game: to understand how to improve it where the education process should go. [Wolfram Datadrop][12] is an amazing tool for these purposes.
[![enter image description here][13]][14]
We define the databin as
bin = CreateDatabin[<|"Name" -> "BugHunter"|>]
# Deploy game to the web
To make an actual application usable by everyone with internet access, I will use the [Wolfram Development Platform][15] and the [Wolfram Cloud][16]. First I define a function that will build the "result of the game" web page. It will check whether the answer is right or wrong and produce differently designed pages accordingly.
quiz[answer_String,check_String,fun_String]:=
(
DatabinAdd[Databin["kd3hO19q"],{answer,check,fun}];
Grid[{
{If[answer===check,
Grid[{{Style["Right! You got the bug!",40,Darker@Red,FontFamily->"Chalkduster"]},{First[imgs]}}],
Grid[{{Style["Wrong! The bug got you!",40,Darker@Red,FontFamily->"Chalkduster"]},{Last[imgs]}}]
]},
{Row[
{Hyperlink["Try again","https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter"],
"|",
hyperlink["Documentation","http://reference.wolfram.com/language/ref/"<>fun<>".html"],
"|",
hyperlink["Fun hint","http://reference.wolfram.com/legacy/flash/animations/"<>fun<>".html"]},
Spacer[10]
]},
{Style["===================================================="]},
{hyperlink["An Elementary Introduction to the Wolfram Language","https://www.wolfram.com/language/elementary-introduction"]},
{hyperlink["Fast introduction for programmers","http://www.wolfram.com/language/fast-introduction-for-programmers/en"]},
{logo}
}]
)
This function is used inside the `CloudDeploy[...FormFunction[...]...]` construct to actually deploy the application. `FormFunction` builds a query form, a web user interface, to formulate a question and to get the user's answer. Note that for the random variables to function properly, `Delayed` is used as a wrapper for `FormFunction`.
CloudDeploy[Delayed[
quizloc=ranquiz[];
FormFunction[
{{"code",None} -> "String",
{"x",None}-><|
"Input"->StringRiffle[quizloc[[3;;4]],","],
"Interpreter"->DelimitedSequence["String"],
"Control"->Function[Annotation[InputField[##],{"class"->"sr-only"},"HTMLAttrs"]]|>},
quiz[#code,#x[[1]],#x[[2]]]&,
AppearanceRules-> <|
"Title" -> Grid[{{title}},Alignment->Center],
"MetaTitle"->"BUG HUNTER",
"Description"-> Grid[{
{Style["Type the missing part of input code",15, Darker@Red,FontFamily->"Ayuthaya"]},
{Rasterize@Grid[{
{"In[1]:=",quizloc[[1]]},
{"Out[1]=",quizloc[[2]]}},Alignment->Left]}
}]
|>]],
"bughunter",
Permissions->"Public"
]
The result of the deployment is a cloud object at a URL:
CloudObject[https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter]
with the short version:
URLShorten["https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter", "bughunter"]
https://wolfr.am/bughunter
And we are done! You can go at the above URL and play.
# Further thoughts
Here are some key points and further thoughts.
## Advantages:
- Automation of content: NO new manual resource development, use existing code bases.
- Automation of testing: NO manual labor of grading.
- Quality of testing: NO multiple choice, NO guessing.
- Quality of grading: almost 100% exact detection of mistakes and correct solutions.
- Fight cheating: the clearly identifiable question type "find the missing code part" helps to ban help from friendly forums (such as this one).
- Almost infinite variability of examples if the whole docs system is engaged.
- High range from very easy to very hard examples (exclusion of multiple functions and syntax can make this really hard).
## Improvements:
- Flexible scoring system based on function usage frequencies.
- Optional placeholder as hint where the code is missing.
- Using network of related functions (see above) to move smoothly through the topical domains.
- Using functions frequency to feed easier or harder exercises based on test progress.
***Please comment with your own thoughts and games and code!***
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-17at10.37.46AM.png&userId=11733
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at6.49.58PM.png&userId=11733
[3]: https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at6.58.45PM.png&userId=11733
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sgfq354ythwrgsf.png&userId=11733
[6]: http://blog.wolfram.com/2017/03/16/the-rd-pipeline-continues-launching-version-11-1/
[7]: http://reference.wolfram.com/legacy/flash/
[8]: http://reference.wolfram.com/language/ref/Except.html
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.21.05PM.png&userId=11733
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.25.52PM.png&userId=11733
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.34.59PM.png&userId=11733
[12]: https://datadrop.wolframcloud.com/
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2017-03-16at7.41.35PM.png&userId=11733
[14]: https://datadrop.wolframcloud.com/
[15]: https://www.wolfram.com/development-platform/
[16]: http://www.wolfram.com/cloud/

Vitaliy Kaurov, 2017-03-17T01:13:24Z

Alternative framework for stress testing routines - Value at Risk focus
http://community.wolfram.com/groups/-/m/t/458805
Stress testing in the Value-at-Risk context provides a complementary view of the firm's risk profile when the market becomes extremely volatile and unstable. In this respect, stress testing generally replicates crisis scenarios in the current market setting. The traditional approach to stress testing emphasises qualitative factors. We propose an alternative arrangement: a purely quantitative approach with a probabilistic setting, where a particular stress test is modelled through the quantiles of a calibrated probability distribution.
![StVar image][1]
#Introduction
Value-at-Risk (VaR) as a standard risk measure is usually defined in terms of a time horizon, a confidence interval, and the parameter settings that drive the calculation of the measure. There are a few theoretical assumptions behind VaR, and one of them refers to VaR as a measure under normal market conditions. This fundamental principle, by definition, is meant to ignore extreme market scenarios where the market parameters, and the volatility in particular, undergo a period of excessive variation.
Stress testing plays a complementary role to VaR: it identifies the hidden vulnerability of the risk measure resulting from hidden assumptions, and thus provides risk and senior managers with a clear view of the loss when the market runs into a crisis. Therefore, financial institutions these days are required to run stress testing of their VaR on a regular basis, to quantify the additional loss based on stress-affected parameters.
To ease and streamline the stress-testing exercise, we propose a new approach to stress testing based on a probabilistic representation of the VaR variables, where a particular scenario is modelled through the quantile behaviour of a calibrated density. This approach is flexible, easy to implement, and leads to an elegant and tractable solution that can be applied in various risk quantification settings.
#Stress definition
Stress testing is generally carried out in three settings:
- Historical scenario of particular crisis
- Stylised definition of particular scenarios
- Hypothetical events
The recent regulatory review strongly prefers the historical scenario approach, where a particular period with 12 consecutive months of crisis observations has to be included in the stress-testing exercise.
The objective of this paper is to propose a new parametric approach to stress testing, where the underlying market parameters are defined probabilistically and the stress scenario is then expressed as a quantile of a certain probability distribution. The solution brings a number of benefits, the primary ones being simplicity of implementation, ease of maintenance and ease of use.
#Problem setting
Consider the following case - we have built a portfolio of 5 UK stocks - Lloyds, Vodafone, Barclays, BP and HSBC - with £1 million invested in each of them. We want to compute **VaR** and **stressed VaR** for our portfolio with £5 million value.
##Getting market data
We get the market data for the 5 stocks, with a long history starting from January 2008:
stocks = {"LLOY.L", "VOD.L", "BARC.L", "BP.L", "HSBA.L"};
data = TimeSeries[FinancialData[#, "January 1, 2008"]] & /@ stocks;
The price history of each stock looks as follows:
DateListPlot[data, PlotLegends -> stocks]
![enter image description here][2]
## Processing market data
We first compute the daily log-returns from consecutive prices:
dret = TimeSeriesResample[MovingMap[Log[Last[#]] - Log[First[#]] &, data, {2}]];
![enter image description here][3]
In the same way we calculate each stock's historical daily volatility:
qvol = MovingMap[StandardDeviation, dret, {60, "Day"}];
Evaluate@DateListPlot[qvol, PlotRange -> All, PlotLegends -> stocks]
![enter image description here][4]
Both the returns and their volatility reveal high values for the financial stocks, Lloyds and Barclays in particular, with excessive movements in 2009, 2010 and partially in 2012.
![enter image description here][5]
These are the periods that represent the peak of the financial crisis and the stress to the market.
#Probabilistic definition of volatility
VaR metrics deal primarily with the return volatility, and therefore the volatility behaviour is the key object of the stress-testing process. As the graph above suggests, the evolution of volatility over time is not constant, and we observe periods of high and low values. If we take an abstract view of the volatility data in general, we can treat it as a random variable with some probability distribution. The presence of high and low values strongly supports this argument.
We propose the Johnson distribution, unbounded type, to define the distribution of the volatility density over the observed period of time. Let's recall that the Johnson distributions form a family of the form Y = σ g((X − γ)/δ) + μ, where X ~ Normal[0, 1]. In the case of the 'unbounded' distribution, the function is g = sinh. The Johnson distribution family are well-behaved functions and, with four parameters to use, are therefore ideally suited for fitting data with long and patchy tails. In this respect they can easily be used to model the volatility distribution over time.
The probability density of the Johnson unbounded distribution is:
PDF[JohnsonDistribution["SU", \[Gamma], \[Delta], \[Mu], \[Sigma]], x]
![enter image description here][6]
and the density plot with varying shape factor γ looks as follows:
Plot[Evaluate@
Table[PDF[JohnsonDistribution["SU", \[Gamma], 1.25, 0.007, 0.00341],
x], {\[Gamma], {-3, -4, -5}}], {x, 0, 0.15}, Filling -> Axis,
PlotRange -> All,
PlotLegends -> {"\[Gamma]=-3", "\[Gamma]=-4", "\[Gamma]=-5"}]
![enter image description here][7]
Johnson-type distributions are very flexible in terms of tail control.
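In this framework, a stress scenario is simply a high quantile of the fitted density. A hedged sketch, using the illustrative parameter values from the plot above rather than a calibrated fit:

```mathematica
jd = JohnsonDistribution["SU", -4, 1.25, 0.007, 0.00341];
Quantile[jd, 0.99]  (* a stressed-volatility level at the 99th percentile *)
Mean[jd]            (* compare with the central volatility level *)
```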
## Historical distribution of volatility
We can visualise the volatility distribution by looking at each stock histogram:
size = Length[stocks];
Table[Histogram[qvol[[i, All, 2]], 30, "PDF",
PlotLabel -> stocks[[i]],
ColorFunction ->
Function[{height}, ColorData["Rainbow"][height]]], {i, size}]
![enter image description here][8]
We can use the historical data to fit the volatility to the Johnson distribution:
edist = Table[
EstimatedDistribution[qvol[[i, All, 2]],
JohnsonDistribution [
"SU", \[Gamma], \[Delta], \[Mu], \[Sigma]]], {i, size}]
How good is the fit? We can observe this on the charts:
Table[Show[
Histogram[qvol[[i, All, 2]], 20, "PDF", PlotLabel -> stocks[[i]]],
Plot[PDF[edist[[i]], x], {x, 0, 0.1}, PlotRange -> All,
PlotStyle -> {Blue, Thick}]], {i, size}]
![enter image description here][9]
## Defining correlation matrix
Portfolio VaR requires a correlation structure amongst the VaR components. We can create it from the historical return data defined above. The only decision is the time window from which we select the market data. For the standard VaR we choose one year of history:
tswin = {{2014, 3, 1}, {2015, 3, 1}};
retwin = TimeSeriesWindow[dret, tswin];
volwin = Table[
Mean[MovingMap[StandardDeviation, retwin[[i]], {60, "Day"}][[All,
2]]], {i, size}]
Table[retwin[[i, All, 2]], {i, size}];
wincorr = Correlation[Transpose[%]];
wincorr // MatrixForm
> {0.0117992, 0.0127976, 0.0155371, 0.011966, 0.00891589}
![enter image description here][10]
# Standard VaR calculation
Having defined the volatility and the correlation matrix from the past year of data, we can calculate the parametric VaR easily. Assuming normally distributed stock returns and a 1-day VaR horizon, it can be defined as follows:
## Individual stock VaR
indVaR = -Sqrt[2] \[Pi] \[Sigma] InverseErfc[2 \[Alpha]]
where \[Pi] = value of the investment = £1 million, \[Sigma] = stock return volatility and \[Alpha] = confidence level.
ndinv = Refine[InverseCDF[NormalDistribution[0, \[Sigma]], \[Alpha]],
0 < \[Alpha] < 1]
> -Sqrt[2] \[Sigma] InverseErfc[2 \[Alpha]]
With 99% confidence and each stock volatility calculated above:
indvar = Table[
10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] -> volwin[[i]]}, {i, size}]
Total[%]
BarChart[indvar, ChartStyle -> "Rainbow",
PlotLabel -> Style["Individual stock 1 day VaR", 16],
ChartLegends -> stocks]
> {27449.1, 29771.8, 36144.6, 27837.1, 20741.5}
> 141944.
![enter image description here][11]
We can see the highest individual VaR for Barclays and the lowest for HSBC. This is consistent with the individual stock volatilities observed above.
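As a quick cross-check outside Mathematica, the closed-form quantile can be reproduced in Python; feeding in the first volatility quoted above (0.0117992) recovers the first VaR figure to within rounding (the helper name is mine):

```python
from statistics import NormalDist

def parametric_var(position, sigma, alpha=0.99):
    # position * InverseCDF[NormalDistribution[0, sigma], alpha]
    return position * NormalDist(0, sigma).inv_cdf(alpha)

var_first_stock = parametric_var(10**6, 0.0117992)  # ~27449, matching the output above
```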
## Portfolio VaR
Formula-wise this is equivalent to (with \[CapitalSigma] the correlation matrix):
portVaR = Sqrt[indVaR . \[CapitalSigma] . indVaR]
baseportVaR = Sqrt[indvar.wincorr.indvar]
> 104226.
One can see that the portfolio VaR is lower than the sum of the individual VaRs, thanks to the diversification effect: diversification reduces the sum of individual VaRs by almost £40,000.
The 1-day total portfolio VaR is £104k, or about 2.1% of the portfolio's £5 million value.
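The quadratic-form aggregation used above can be mirrored in Python (illustrative two-asset numbers, not the post's data, since the correlation matrix is only shown as an image):

```python
import math

def portfolio_var(ind_var, corr):
    # Sqrt[v . C . v]: aggregate individual VaRs through the correlation matrix
    n = len(ind_var)
    return math.sqrt(sum(ind_var[i] * corr[i][j] * ind_var[j]
                         for i in range(n) for j in range(n)))
```

With an identity correlation matrix this reduces to the root-sum-of-squares; with perfect correlation it collapses to the plain sum, so the result always sits between the two - which is the diversification effect noted above.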
# Stressed VaR
VaR is driven primarily by the volatility of returns. In a portfolio context there is another factor - the correlations. When stressing VaR, one has to consider extending the stress to both parameters.
## Historical scenario of past crisis
This is the most frequently used method of stress testing. Having data available for the entire period makes the selection simple.
Looking at the historical volatility graph, it is obvious that the volatility peaked in 2009 - 2010. We therefore select this period for our stress testing.
tswin = {{2009, 3, 1}, {2010, 3, 1}};
retwin = TimeSeriesWindow[dret, tswin];
volwin = Table[
Mean[MovingMap[StandardDeviation, retwin[[i]], {60, "Day"}][[All,
2]]], {i, size}]
Table[retwin[[i, All, 2]], {i, size}];
wincorr = Correlation[Transpose[%]];
wincorr // MatrixForm
> {0.0412017, 0.013961, 0.0342192, 0.0138671, 0.0187612}
![enter image description here][12]
We can now see very different values for both the volatilities and the correlation matrix. To compute the stressed VaR, all we need to do is replace the standard VaR inputs with the stress-period data:
indSTvar =
Table[10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] -> volwin[[i]]}, {i,
size}]
Total[%]
BarChart[indSTvar, ChartStyle -> "Rainbow",
PlotLabel -> Style["Individual stock 1 day Stress VaR", 16],
ChartLegends -> stocks]
> {95849.5, 32478.1, 79605.9, 32259.8, 43645.}
>
> 283838.
![enter image description here][13]
As a consequence, the individual stock stressed VaRs are very different from the standard VaRs. For example, the Lloyds stressed VaR is almost 3 times higher than the standard measure.
Portfolio-level stressed VaR:
stressPortVaR = Sqrt[indSTvar.wincorr.indSTvar]
> 218471.
BarChart[{baseportVaR, stressPortVaR}, ChartStyle -> {Blue, Red},
ChartLegends -> {"Std VaR", "Stress VaR"}]
![enter image description here][14]
The portfolio 1-day stressed VaR has **doubled** under the stress scenario and represents a £218k loss.
We can of course choose any other period to see how the VaR behaves under a different set of market parameters.
## Inverse CDF method for stress scenarios - individual case
The calibration of historical volatility to the Johnson distribution lets us explore an alternative route to stressed VaR generation, which can be described as the **Inverse CDF** method.
If we assume that the return distribution is normal and the volatility of that return is calibrated to the Johnson unbounded distribution, we can obtain the stressed VaR metrics as a quantile of both distributions.
- Stressed volatility:
We are interested in a quantile of the volatility distribution that captures the stressed market sentiment. This is easily obtained with the inverse CDF function:
invJD = Refine[
InverseCDF[
JohnsonDistribution [
"SU", \[Gamma], \[Beta], \[Kappa], \[Nu]], \[Lambda]],
0 < \[Lambda] < 1] // Simplify
![enter image description here][15]
- Combined stressed VaR:
The VaR is then the composition of the VaR formula and the volatility quantile:
stVaRNd = ndinv /. \[Sigma] -> invJD // Simplify
The above formula is the **parametric definition** of the stressed VaR. The function operates on two quantile parameters:
- VaR confidence level \[Alpha]
- Volatility 'stress' factor \[Lambda]
The stressed VaR parametric model will behave as follows:
Plot3D[stVaRNd /. {\[Kappa] -> 0.006583, \[Nu] ->
0.0002419, \[Gamma] -> -4.978, \[Beta] -> 1.07132}, {\[Alpha],
0.5, 0.9}, {\[Lambda], 0.3, 0.75},
ColorFunction -> "TemperatureMap", PlotLegends -> Automatic]
![enter image description here][16]
It is worth noting that the model is quite sensitive to the volatility stress factor \[Lambda].
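The two-quantile composition can be checked in Python. The SU quantile has the closed form \[Mu] + \[Sigma] sinh((\[CapitalPhi]^-1(\[Lambda]) - \[Gamma])/\[Delta]); combining it with the normal VaR quantile gives the parametric stressed VaR (helper names are mine; the parameter values are the ones used in the plot above):

```python
import math
from statistics import NormalDist

def johnson_su_quantile(lam, gamma, delta, mu, sigma):
    # InverseCDF of JohnsonDistribution["SU", gamma, delta, mu, sigma]
    z = NormalDist().inv_cdf(lam)
    return mu + sigma * math.sinh((z - gamma) / delta)

def stressed_var(position, alpha, lam, gamma, delta, mu, sigma):
    # stressed volatility = SU quantile at level lam; normal VaR quantile at level alpha
    vol = johnson_su_quantile(lam, gamma, delta, mu, sigma)
    return position * NormalDist().inv_cdf(alpha) * vol

# Plot parameters: gamma = -4.978, delta (beta) = 1.07132, mu (kappa) = 0.006583, sigma (nu) = 0.0002419
params = (-4.978, 1.07132, 0.006583, 0.0002419)
```

Both quantile levels push the number up: raising \[Lambda] stresses the volatility, and raising \[Alpha] tightens the confidence level.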
## Probabilistic approach to stressed VaR - portfolio context
Apart from stressing the volatility parameter, we need to define a stressed correlation matrix. We propose a simple multiplicative-factor approach in which the original matrix is multiplied by a positive factor that increases the correlation coefficients. This is consistent with market practice: in periods of crisis there is a strong tendency for financial assets in the same class to move together. The function below does exactly this:
stressCM[cm_, f_] :=
Table[If[cm[[i, j]] == 1, 1, Min[0.99, cm[[i, j]]*(1 + f)]], {i,
size}, {j, size}]
Applying a 20% increase to the correlations of the standard VaR produces the following matrix:
stressCM[wincorr, 0.2] // MatrixForm
![enter image description here][17]
The matrix values are in line with the stressed matrix of the 2009-2010 historical scenario.
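The same capping logic can be mirrored in Python for quick testing (I key the diagonal off i == j rather than the entry being exactly 1, which is equivalent for a correlation matrix):

```python
def stress_cm(cm, f):
    # Scale off-diagonal correlations by (1 + f), cap at 0.99, keep unit diagonal.
    n = len(cm)
    return [[1.0 if i == j else min(0.99, cm[i][j] * (1 + f))
             for j in range(n)] for i in range(n)]
```

The cap at 0.99 keeps the matrix from degenerating into perfect correlation under large stress factors; note, though, that uniform scaling with a cap is not guaranteed to preserve positive-definiteness for larger matrices, so the stressed matrix is worth checking before use.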
To execute the computation we first generate the individual stressed VaR metrics using the Inverse CDF method:
indstressvar =
Table[10^6 ndinv /. {\[Alpha] -> 0.99, \[Sigma] ->
Mean[InverseCDF[edist[[i]], {0.5, 0.7, 0.9}]]}, {i, size}]
Total[%]
> {86608., 37365.8, 82986.8, 41072.8, 41535.8}
> 289569.
And then obtain the portfolio VaR in the same way as in the standard case but with the stressed correlation matrix:
portstvar = Sqrt[indstressvar.stressCM[wincorr, 0.2].indstressvar]
> 232010.
The parametric stressed VaR number is similar to the one obtained with the historical scenario method. This shows that the alternative probabilistic approach works well and can easily be applied in a practical setting.
# Extension of the stressed VaR concept
## VaR with Student-T distribution
If we opt for the generalised Student-T distribution for stock returns, the standard VaR can be defined as:
StVaR = Refine[
InverseCDF[StudentTDistribution[0, \[Sigma], \[Nu]], \[Alpha]],
1/2 < \[Alpha] < 1] // Simplify
![enter image description here][18]
The extension to the stressed VaR is trivial:
StVaR /. \[Sigma] -> invJD
![enter image description here][19]
The formula is essentially the Student-T stressed VaR expression.
We need to calibrate the distribution to obtain the degrees-of-freedom value for each stock:
edist2 = Table[
EstimatedDistribution[dret[[i, All, 2]],
StudentTDistribution[0, \[Sigma], \[Nu]]], {i, size}]
> {StudentTDistribution[0, 0.0155062, 1.90416],
StudentTDistribution[0, 0.0093817, 2.82662],
StudentTDistribution[0, 0.0154996, 1.92631],
StudentTDistribution[0, 0.00850414, 2.12496],
StudentTDistribution[0, 0.00925811, 2.35383]}
stparam = List @@@ edist2
> {{0, 0.0155062, 1.90416}, {0, 0.0093817, 2.82662}, {0, 0.0154996,
1.92631}, {0, 0.00850414, 2.12496}, {0, 0.00925811, 2.35383}}
The individual stock VaRs are then:
ststressind =
Table[10^6 StVaR /. {\[Alpha] -> 0.99, \[Nu] ->
stparam[[i, 3]], \[Sigma] ->
Mean[InverseCDF[edist[[i]], {0.5, 0.7, 0.9}]]}, {i, size}]
Total[%]
> {98371.2, 33623.8, 96266.1, 44665.5, 41708.4}
>
> 314635.
and the total portfolio VaR equals:
Sqrt[ststressind.stressCM[wincorr, 0.2].ststressind]
> 253701.
The Student-T stressed VaR is higher than in the normal case, which is in line with expectations, especially when the degrees of freedom are below 4.
# Conclusion
Parametric stressed VaR represents an elegant and practical extension to existing stress-testing methods. The main attraction of the approach is its ease of use and simplicity of application once the calibration dataset is available.
The parametric method can be applied to other probability distributions if one wants to test the stressed VaR under different distributional assumptions. The Student-T approach is presented explicitly to demonstrate this. Extension to other distributions is straightforward, since the Inverse CDF stressed volatility can be applied in an arbitrary setting.
[1]: /c/portal/getImageAttachment?filename=StressVarImange.jpg&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.png&userId=95400
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.png&userId=95400
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=95400
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&userId=95400
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.png&userId=95400
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.png&userId=95400
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.png&userId=95400
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9.png&userId=95400
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.png&userId=95400
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.png&userId=95400
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12.png&userId=95400
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=13.png&userId=95400
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=14.png&userId=95400
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15.png&userId=95400
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16.png&userId=95400
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=17.png&userId=95400
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=18.png&userId=95400
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19.png&userId=95400

Igor Hlivka, 2015-03-13T18:06:46Z

[GIF] Drug overdose trends in USA counties 1999 - 2014
http://community.wolfram.com/groups/-/m/t/837574
The data [Drug Poisoning Mortality: United States, 1999–2014][1] are published by the US government. A few recent blogs ([1][2], [2][3], [3][4]) presented **static visualizations** of the data. **Here we show how to animate maps of the geographical spread of drug overdoses in the USA**. Below you can see 4 images, each reflecting ***Age-adjusted death rates for drug poisoning per 100,000 population by county and year***:
1. First static frame 1999
2. Last static frame 2014
3. Animated .GIF of the whole period with 1 frame per year
4. Range of rates versus time, USA average
Quoting NPR news [Obama Asks Congress For More Money To Fight Opioid Drug Abuse][5]:
> Every day in America more than 50 people die from an overdose of prescription pain medication. Some people who start out abusing pain pills later turn to heroin, which claims another 29 lives each day.
----------
**1999: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][6]
![enter image description here][7]
----------
**2014: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][8]
![enter image description here][9]
----------
**1999 - 2014 Animation: Age-adjusted death rates for drug poisoning per 100,000 population by county and year**
![enter image description here][10]
![enter image description here][11]
----------
**Range of rates versus time: Age-adjusted death rates for drug poisoning per 100,000 for USA average over counties**
![enter image description here][12]
Getting the data
----------------
We can download the data in .CSV format from the [CDC web site][13]. I keep the data file in the same directory as the notebook to shorten file-path strings.
SetDirectory[NotebookDirectory[]]
raw = SemanticImport["ops.csv"]
![enter image description here][14]
Making "interpreted" dataset
----------------------------
In the [Wolfram Language][15] (WL), built-in knowledge allows many kinds of imported data to be interpreted. For example, USA counties can be interpreted as entities:
![enter image description here][16]
But I did not use `SemanticImport` to interpret the counties automatically on import, because I wanted to do this more efficiently. The table has 50247 entries
Normal[raw[All, "County"]] // Length
> 50247
while there are only 3141 actual counties listed:
Normal[raw[All, "County"]] // Union // Length
> 3141
So instead of making 50247 calls to `Interpreter`, we make just 3141 and then use the efficient `Dispatch` to distribute the replacement rules over all 50247 entries. Building the `Dispatch` took only about 100 seconds:
countyRULEs = Dispatch[
Thread[# -> Interpreter["USCounty"][#]] &@
Union[Normal[raw[All, "County"]]]]; // AbsoluteTiming
> {108.124, Null}
And almost no time at all to interpret the dataset:
data = raw /. countyRULEs; // AbsoluteTiming
data
> {0.441731, Null}
![enter image description here][17]
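The trick is language-independent: compute the expensive interpretation once per unique key, then map the cheap lookups over the full column. A Python analogue (names are mine):

```python
def interpret_all(values, expensive):
    # One expensive call per unique value (~3141 here), then cheap lookups (~50247).
    cache = {v: expensive(v) for v in set(values)}
    return [cache[v] for v in values]
```

This is essentially memoisation applied up front; the speed-up is the ratio of total entries to unique entries, about 16x in this dataset before even counting the cost difference between a `Dispatch` lookup and an `Interpreter` call.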
Bounds of death-rates for future rescaling
------------------------------------------
Note the `StringReplace` trick used before applying `ToExpression`, here and throughout the rest of the post:
MinMax[ToExpression[StringReplace[Normal[
data[All, "Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2]
> {1, 20}
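Decoded, the trick maps a range string "a-b" to (a+b)/2 (its midpoint) and ">c" to (2*c)/2 = c. A plain Python equivalent (my own helper, for illustration):

```python
def rate_midpoint(range_str):
    # "4-6" -> 5.0 ; ">20" -> 20.0, mirroring the StringReplace + ToExpression trick
    if range_str.startswith(">"):
        return float(range_str[1:])
    lo, hi = range_str.split("-")
    return (float(lo) + float(hi)) / 2
```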
Testing color scheme
--------------------
Color schemes are important: they must blend well with the native colors of the maps and also express the data clearly. Here are some tests with the [Color Schemes][18] available in the Wolfram Language.
tmp = GeoNearest["City",
Entity["City", {"Atlanta", "Georgia", "UnitedStates"}], {All, Quantity[50, "Kilometers"]}];
Multicolumn[Table[
GeoRegionValuePlot[tmp -> "PopulationDensity", PlotLegends -> False,
ColorFunction -> (ColorData[{clmap, "Reverse"}][#] &), ImageSize -> 400]
, {clmap, {"CherryTones", "SolarColors", "SunsetColors",
"RustTones", "WatermelonColors", "Rainbow", "RoseColors",
"ThermometerColors", "BrownCyanTones"}}], 3]
![enter image description here][19]
Year 1999: a single-year GIS plot
---------
GeoRegionValuePlot[
Thread[Normal[data[Select[#Year == 1999 &], "County"]] ->
ToExpression[StringReplace[Normal[data[Select[#Year == 1999 &]][All,
"Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2],
GeoRange -> {{24, 50}, {-125, -66}},
GeoProjection -> "Mercator",
ColorFunctionScaling -> False,
ColorFunction -> (ColorData[{"CherryTones", "Reverse"}][
Rescale[#, {1, 20}]] &),
PlotLegends -> False,
ImageSize -> 1000] // Rasterize
Making animation
----------------
frames = ParallelTable[
GeoRegionValuePlot[
Thread[
Normal[data[Select[#Year == year &], "County"]] ->
ToExpression[StringReplace[Normal[data[Select[#Year == year &],
"Estimated Age-adjusted Death Rate, 11 Categories (in ranges)"]], {"-" -> "+", ">" -> "2*"}]]/2],
GeoRange -> {{24, 50}, {-125, -66}},
GeoProjection -> "Mercator",
ColorFunctionScaling -> False,
ColorFunction -> (ColorData[{"CherryTones", "Reverse"}][
Rescale[#, {1, 20}]] &),
PlotLegends -> False,
ImageSize -> 800],
{year, Range[1999, 2014]}];
Making legend
-------------
Panel@Grid[Transpose[{#, ColorData[{"CherryTones", "Reverse"}][Rescale[#, {1, 20}]]} & /@Range[1, 20]]]
Growth of death rates ranges vs time
------------------------------------
bandGrowth = Transpose[Table[N[Mean[ToExpression[
StringReplace[Normal[data[Select[#Year == y &]][All,
"Estimated Age-adjusted Death Rate, 11 Categories (in \
ranges)"]], {"-" -> "~List~", ">" -> "{#,#}&@"}]]]], {y, Range[1999, 2014]}]]
BarChart[{#[[1]], #[[2]] - #[[1]]} & /@ Transpose[bandGrowth],
PlotTheme -> "Marketing", ChartLayout -> "Stacked",
ChartLabels -> {Range[1999, 2014], None}, ImageSize -> 850,
AspectRatio -> 1/3, ChartStyle -> {Yellow, Red}]
Another color scheme sample
------------------
In this color scheme, with low values rendered dark, you can see a few white spots more clearly. Those are the very few counties where data are missing.
----------
1999
----
![enter image description here][20]
----------
2014
----
![enter image description here][21]
[1]: https://data.cdc.gov/NCHS/NCHS-Drug-Poisoning-Mortality-County-Trends-United/pbkm-d27e?category=NCHS&view_name=NCHS-Drug-Poisoning-Mortality-County-Trends-United
[2]: http://blogs.cdc.gov/nchs-data-visualization/drug-poisoning-mortality/
[3]: https://evergreen.data.socrata.com/stories/s/b5gk-7v6a/
[4]: http://www.nytimes.com/interactive/2016/01/07/us/drug-overdose-deaths-in-the-us.html
[5]: http://www.npr.org/sections/thetwo-way/2016/02/02/465348441/obama-asks-congress-for-more-money-to-fight-opioid-drug-abuse
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10097figure1.png&userId=11733
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6105figure2.png&userId=11733
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=legend3245regfas.png&userId=11733
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif-41464821.gif&userId=11733
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=opoid.png&userId=11733
[13]: https://data.cdc.gov/NCHS/NCHS-Drug-Poisoning-Mortality-County-Trends-United/pbkm-d27e?category=NCHS&view_name=NCHS-Drug-Poisoning-Mortality-County-Trends-United
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdfregdavscr345y5t.png&userId=11733
[15]: https://www.wolfram.com/language/
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2016-04-11_05-01-09.png&userId=11733
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2016-04-11_05-05-57.png&userId=11733
[18]: http://reference.wolfram.com/language/guide/ColorSchemes.html
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdf45yhfhgsdy5uejtyhsgdf.png&userId=11733
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=sdafsfqegr.png&userId=11733
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=dfghu6r7euyrtea.png&userId=11733

Vitaliy Kaurov, 2016-04-11T10:21:01Z