Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing discussions in tag Wolfram Science, sorted by active.

Redirect temporary files to an external disk?
http://community.wolfram.com/groups/-/m/t/1341347
Many times Mathematica aborts because there is no more space available on disk. Presumably this is due to the temporary files that it creates and needs for its calculations. If I were to install a large disk on my Macintosh for all the space required by Mathematica, how can I tell it to address all its temporary files and swapping to that extra disk?
I do not wish to reinstall Mathematica on an external disk. I wish to keep the rest of Mathematica's files on my current disk and send only the temporary files to a secondary disk.
Juan José Basagoiti, 2018-05-17T23:36:11Z

Open my nb file in Mathematica Cloud?
http://community.wolfram.com/groups/-/m/t/1343203
Hey everyone!
I am in a bit of a panic. I have a student Mathematica online account that I have been using for over a year now. I have a template file from a class I am taking that has opened just fine this whole time, until last night. I have searched for an answer (to no avail) and have made several copies and used backups of the file... nothing I do seems to work.
When I attempt to open it in the cloud, I see a blue line traversing the top of the screen as if it is working on opening the file, although it never does, no matter how long I wait. It also only shows limited options such as "Open this file in Mathematica", File, and Help at the top of the screen. It never lets me get to the evaluation options or anything.
Any ideas? Does anyone else know what is happening?
Kasi Clark, 2018-05-22T02:42:48Z

Circle and ellipse touching point coordinates?
http://community.wolfram.com/groups/-/m/t/1343756
If the foci of an ellipse and the centre point and radius of a circle are given, how can one find the coordinates of the touching point of the circle and the ellipse? Assume that the ellipse touches the circle's perimeter and the circle is outside of the ellipse.
Raj Kuveju, 2018-05-22T23:59:07Z

Need help solving a problem
http://community.wolfram.com/groups/-/m/t/1343732
The problem is in the PNG picture below.
bilored alahmar, 2018-05-22T23:21:35Z

Need help with a moment problem
http://community.wolfram.com/groups/-/m/t/1343723
The problem is in the PNG picture below.
bilored alahmar, 2018-05-22T23:20:42Z

The Hippasus Primes
http://community.wolfram.com/groups/-/m/t/965609
According to legend, when Hippasus proved [the irrationality of $\sqrt2$](http://mathworld.wolfram.com/PythagorassConstant.html), he was thrown off a ship. Poor guy.
..
Gauss discovered that the numbers 1, 2, 3, 7, 11, 19, 43, 67, 163 led to unique factorization domains, and conjectured these were the only such numbers. Another person almost forgotten was Kurt Heegner, who proved Gauss's conjecture. But there was a small gap in his proof. Years later, Alan Baker and Harold Stark proved the result. But then they looked at Heegner's proof and announced it was pretty much correct, four years after Heegner's death. In his honor, 1, 2, 3, 7, 11, 19, 43, 67, 163 are known as [Heegner numbers](http://mathworld.wolfram.com/HeegnerNumber.html).
..
The $\mathbb{Q}(\sqrt{-1})$ numbers are known as [Gaussian integers](http://mathworld.wolfram.com/GaussianInteger.html).
The $\mathbb{Q}(\sqrt{-3})$ numbers are known as [Eisenstein integers](http://mathworld.wolfram.com/EisensteinInteger.html).
The $\mathbb{Q}(\sqrt{-7})$ numbers are known as [Kleinian integers](https://en.wikipedia.org/wiki/Kleinian_integer).
..
What about $\mathbb{Q}(\sqrt{-2})$? Why doesn't it have a name? I propose we call these **Hippasus integers**. He doesn't get much credit for his discoveries about $\sqrt{2}$, so may as well give him this to fill in the gap.
..
So what do the Hippasus primes look like? Here's some code based on the [Sieve of Eratosthenes](http://mathworld.wolfram.com/SieveofEratosthenes.html) that seems to work. I'm sure it can be vastly improved upon.
heeg = 2;
klein = RootReduce[
   Select[
    SortBy[
     Flatten[Table[a + b (Sqrt[heeg] I - 1)/2, {a, -50, 50}, {b, -70, 70}]],
     N[Norm[#]] &],
    1 < Norm[#] < 40 &]];
sieve = Take[#, -2] & /@ (Last /@ (Sort /@
      SplitBy[
       SortBy[{Norm[#]^2, 2 Re[#], 2 Im[#]/Sqrt[heeg]} & /@ klein, Abs[#] &],
       Abs[#] &]));
primes = {};
Module[{addedprime, remove},
While[Length[sieve] > 1,
addedprime = sieve[[1]];
primes = Append[primes, addedprime];
remove = Union[Join[Abs[{#[[1]], #[[2]]/Sqrt[heeg]}] & /@ (ReIm[2 (addedprime.{1, Sqrt[heeg] I}/2) (#.{1, Sqrt[heeg] I}/2)] & /@ sieve),
Abs[{#[[1]], #[[2]]/Sqrt[heeg]}] & /@ (ReIm[2 (addedprime.{1, -Sqrt[heeg] I}/2) (#.{1, Sqrt[heeg] I}/2)] & /@ sieve)]];
sieve = Select[Drop[sieve, 1], Not[MemberQ[remove, #]] &]]];
Graphics[Table[Point[{{1, 1}, {1, -1}, {-1, 1}, {-1, -1}}[[k]] ReIm[#]] & /@ (#.{1, Sqrt[heeg] I}/2 & /@ primes), {k, 1, 4}]]
![Hippasus primes][1]
With a change of the Heegner number at the top, the Gaussian primes, Hippasus primes, Eisenstein primes, and Kleinian primes can all be calculated:
![Heegner 1 2 3 7][2]
In case you were curious, we can also calculate the primes based on Heegner numbers 11, 19, 43, and 67.
![Heegner 11 19 43 67][3]
Those last two look pretty weird, so maybe I'm making a mistake somewhere. The primes based on 163 look even stranger.
![Heegner 163][4]
There are so many weird patterns that I almost didn't show this one. But then I remembered the [lucky numbers of Euler](http://mathworld.wolfram.com/LuckyNumberofEuler.html), which are based on Heegner numbers. The long line of primes is likely accurate. If anyone can improve/speed up the code and make a much larger picture, I'd love to see that.
..
The same goes for a bigger picture of the **Hippasus primes**. If there is another name for these, please let me know. If you agree this is a great name for them, also let me know.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=HippasusPrimes.gif&userId=21530
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Heegner1237.gif&userId=21530
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Heegner11194367.gif&userId=21530
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Heegner163.gif&userId=21530
Ed Pegg, 2016-11-17T23:30:46Z

Use index.html files in Wolfram Cloud sites?
http://community.wolfram.com/groups/-/m/t/1250045
### Cross post on StackExchange: https://mathematica.stackexchange.com/questions/162265/using-index-html-files-in-wolfram-cloud-sites
---
Partly as an exercise, and partly so I could write data-science blog posts, I built a website builder using Mathematica that sets up sites in the cloud.
As an example site, here is a paclet server website I set up: https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/main.html
Unfortunately, to get this to work I had to remap my site's index.html file to a main.html file, because when I try to view the site at index.html, either by explicitly routing there or by going to the implicit view, I am pushed back to the implicit view and given a 500 error.
Note that I cannot copy the index.html file to the site root, i.e.,
CopyFile[
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/index.html"],
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer", Permissions->"Public"]
]
as I get a `CloudObject::srverr` failure.
I can't even set up a permanent redirect like so:
CloudDeploy[
Delayed@HTTPRedirect[
"https://www.wolframcloud.com/objects/b3m2a1.paclets/PacletServer/main.html",
<|"StatusCode" -> 301|>
],
"server",
Permissions -> "Public"
]
CloudObject["https://www.wolframcloud.com/objects/b3m2a1.paclets/server"]
While this apparently worked, going to that site causes my browser to spin indefinitely before finally giving up.
Moreover, all of these possible hacks are ugly, and I'd much rather work with the standard website setup.
How can I do this?
b3m2a1, 2017-12-19T17:59:55Z

Text in maps
http://community.wolfram.com/groups/-/m/t/1343637
Friends, I want to insert text into a map. For example, in the following code I would like to put a label on the two towns I am highlighting in the map. I have not been able to do it, neither with Labeled nor with any other method. Ideally, I would like to use Inset or Text, to have full control of the text style.
Can somebody help me?
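(Editorial aside, a hedged sketch rather than the poster's attached code: GeoGraphics accepts ordinary graphics primitives such as Text placed at GeoPosition coordinates, which gives full control of the text style. The town names and coordinates below are purely illustrative.)

```mathematica
(* Label two hypothetical towns with styled Text at GeoPosition coordinates *)
towns = {{"Bogotá", {4.61, -74.08}}, {"Medellín", {6.25, -75.57}}};
GeoGraphics[
 {
    {Red, PointSize[Large], Point[GeoPosition[#[[2]]]]},
    Text[Style[#[[1]], 14, Bold], GeoPosition[#[[2]]], {0, -2}]
    } & /@ towns,
 GeoRange -> Quantity[400, "Kilometers"]]
```

The offset {0, -2} in Text nudges each label above its point; Inset works analogously if more layout control is needed.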
Attached is the example code. Thanks
Francisco Gutierrez, 2018-05-22T19:36:19Z

Convert 1040 Seconds Into 17 Minutes and 20 Seconds?
http://community.wolfram.com/groups/-/m/t/844216
Hello,
I have a basic question which I cannot find the answer to:
How do I make Mathematica convert 1040 seconds into 17 minutes and 20 seconds? I want Mathematica to give me the conversion with one command or cell input. I would also like for the output to have the words "minutes" and "seconds" if possible.
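(Editorial aside, a hedged sketch rather than an authoritative answer: UnitConvert with a MixedUnit target appears to perform exactly this split in a single input.)

```mathematica
(* Convert 1040 seconds into minutes and seconds in one step;
   the result is a mixed Quantity carrying the unit names *)
UnitConvert[Quantity[1040, "Seconds"], MixedUnit[{"Minutes", "Seconds"}]]
```

The output displays with the words "minutes" and "seconds" attached, as requested.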
Thanks.
Dan M, 2016-04-22T10:07:25Z

Make Wolfram|Alpha work in any language with TextTranslation
http://community.wolfram.com/groups/-/m/t/1342695
Note: This post is a translation of the post by Arnoud Buzing: [Using TextTranslation to make WolframAlpha work with ANY language][1].
One thing I don't much like about Wolfram|Alpha is that it only works well with English input and not very well (or not at all) in other languages.
When we included a new translation function, TextTranslation, in version 11.1, I immediately thought of using it with Wolfram|Alpha.
When I finally got around to investigating this, I found the results really surprising, so I want to share some of them here.
A very easy way to try this is to install the following paclet from the [following GitHub page][2]:
PacletInstall["https://github.com/arnoudbuzing/prototypes/releases/download/v0.2.3/Prototypes-0.2.3.paclet"]
Or, if you'd rather not install the whole paclet (because it also includes dozens of other functions), you can use the code definitions from here:
[https://github.com/arnoudbuzing/prototypes/blob/master/Prototypes/WolframAlpha.wl][3]
Let's look at some examples and how they compare with the untranslated WolframAlpha function. Suppose we want to ask Wolfram|Alpha what the capital of Japan is (Tokyo). Naturally, the English query works as follows:
WolframAlpha["what is the capital of Japan", "Result"]
(This returns Tokyo as the answer.) But now let's ask the question in Dutch:
WolframAlpha["wat is de hoofdstad van Japan", "Result"]
Now we get a very strange answer: 18.4 million vehicles (2004 estimate). This is clearly wrong, which makes Wolfram|Alpha completely useless for questions in Dutch.
So let's think about what it takes to improve this situation: first we must check whether we are dealing with non-English input and, if so, translate it to English and run it through the WolframAlpha function. Here is the Wolfram Language code that does exactly that. I called it WolframBeta to distinguish it from the original function:
WolframBeta[ input_String, args___ ] := Module[{language, translation},
language = LanguageIdentify[input];
translation = If[language =!= Entity["Language", "English"], TextTranslation[input, language -> "English"], input];
WolframAlpha[translation, args]
]
Now let's try this function:
WolframBeta["wat is de hoofdstad van Japan", "Result"]
It is a bit slower because of the call to the translation function, but it returns the correct result (Tokyo)!
And it immediately works for many languages:
WolframBeta["cual es la capital de japón?", "Result"]
WolframBeta["日本の首都は何ですか", "Result"]
WolframBeta["Was ist die Hauptstadt von Japan?", "Result"]
WolframBeta["什么是日本的首都", "Result"]
All of these questions return Tokyo as the result, while the WolframAlpha function fails in a different way for each language! (I won't include those results here, but they are embarrassing.)
(To obtain the queries in these languages I translated from English, so I hope they are correct.)
WolframAlpha can also return "spoken results", a simple English string as the answer. For example:
WolframAlpha["what is the capital of Japan?", "SpokenResult"]
This returns the English string: "The capital of Japan is Tokyo, Japan".
But TextTranslation works in both directions (in fact, it works between any two languages, but in this context we only care about translation to and from English).
Here is a modification that a) translates non-English input to English, b) performs the query, and c) translates the spoken result back into the original language:
WolframBeta[ input_String, "SpokenResult", args___ ] := Module[{language, translation,result},
language = LanguageIdentify[input];
translation = If[language =!= Entity["Language", "English"], TextTranslation[input, language -> "English"], input];
result = WolframAlpha[translation, "SpokenResult", args];
If[language =!= Entity["Language", "English"], TextTranslation[result, "English" -> language], result]
]
Now let's take a look at some examples in Spanish. As input we have "¿Qué distancia hay entre Amsterdam y Rotterdam en kilómetros?":
WolframBeta["¿Qué distancia hay entre Amsterdam y Rotterdam en kilómetros?", "SpokenResult"]
This (correctly) returns: "La respuesta es de 56,4 kilómetros" ("The answer is 56.4 kilometers").
And now let's ask how much vitamin C there is in a glass of orange juice:
WolframBeta["¿cuánta vitamina C hay en un vaso de jugo de naranja?", "SpokenResult"]
We get the following result: "La respuesta es aproximadamente 93 miligramos" ("The answer is approximately 93 milligrams").
Now let's try something with a slightly more complex answer (the gross domestic product of Mexico), at least grammatically (this time it really surprised me):
WolframBeta["cual es el producto interno bruto de México?", "SpokenResult"]
Answer: "El producto interno bruto de México es de $ 1,05 trillones por año" ("Mexico's gross domestic product is $1.05 trillion per year").
Sometimes, when asking about the weather, the country must be given in addition to the city:
WolframBeta["¿qué tan cálido es Monterrey, Mexico?", "SpokenResult"]
Answer (in Fahrenheit): "La temperatura en Monterrey, Nuevo León, México es de 86 grados Fahrenheit" ("The temperature in Monterrey, Nuevo León, Mexico is 86 degrees Fahrenheit").
We can also get answers to mathematical queries:
WolframBeta["¿Cuál es la derivada de seno de X?", "SpokenResult"]
Answer: "La respuesta es coseno de X" ("The answer is cosine of X").
And questions about famous people and their relationships to other people:
WolframBeta["¿Quiénes son los hijos del Príncipe William?","SpokenResult"]
Answer: "Los hijos de Prince William son Prince George de Cambridge; Carlota de Cambridge; y Luis de Cambridge" ("Prince William's children are Prince George of Cambridge, Charlotte of Cambridge, and Louis of Cambridge").
I hope this idea, or some version of it, can be officially added to Wolfram|Alpha at some point. I think it would help Wolfram|Alpha be used more widely around the world, helping people with their computational curiosity.
Let me know what you think. Comments, suggestions, and pull requests are welcome!
[1]: http://community.wolfram.com/groups/-/m/t/1337022
[2]: https://github.com/arnoudbuzing/prototypes/releases
[3]: https://github.com/arnoudbuzing/prototypes/blob/master/Prototypes/WolframAlpha.wl
Karla Santana, 2018-05-21T22:20:10Z

Neural Nets for time series prediction: Where to start?
http://community.wolfram.com/groups/-/m/t/1343037
Dear Members
I have used Mathematica to learn some things, such as regression for time series. There are plenty of models with functions at very high levels of abstraction, such as TimeSeriesForecast. Also, many detailed examples are given.
I would like to learn to use neural nets for time series forecasting, but it has not been easy at all. I have not found examples on the subject, and no related models are available in the Neural Net Repository.
Could any one help me to find a good place to start on the subject?
Best regards
Jesus
J Jesus Rico-Melgoza, 2018-05-22T01:14:25Z

Use RLink to run the quantile regression function from R?
http://community.wolfram.com/groups/-/m/t/1343508
I am running a Monte Carlo simulation to compare errors from least squares method and quantile regression.
I have generated the data as per below *(y = x Beta + error)* for three different betas. (\[Tau] is my quantile level.)
The data is ready for *LinearModelFit*.
But how can I apply the *rq* function in *R* from the *quantreg* library to my data?
I appreciate your help. This community is awesome.
Thanks in advance,
Thad
Set n, m and \[Tau]
n = 1000;
m = n;
\[Tau] = 0.9;
columns = 100;
Generate data
SeedRandom[1976];
xdata = Table[RandomVariate[NormalDistribution[], n], columns];
\[Epsilon]data = Table[RandomVariate[NormalDistribution[], n], columns];
\[Beta]data = {1/3, 1, 3};
num\[Beta] = Length[\[Beta]data];
ydata = Table[xdata \[Beta]data[[k]] + \[Epsilon]data, {k, num\[Beta]}];
data = Table[
Transpose[{ydata[[q, k]], xdata[[k]]}], {q, num\[Beta]}, {k, columns}];
Run Least Squares
lsFunc = Table[LinearModelFit[#, x, x] & /@ data[[q]], {q, num\[Beta]}];
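(Editorial aside, a hedged sketch of one possible route: it assumes RLink is working and the R installation linked to it has the quantreg package installed; the R-side variable names are illustrative.)

```mathematica
(* Load RLink, start R, and run quantreg's rq on one simulated data set.
   data[[1, 1]] is an n x 2 matrix: column 1 = y, column 2 = x. *)
Needs["RLink`"];
InstallR[];
REvaluate["library(quantreg)"];
RSet["df", data[[1, 1]]];
REvaluate["fit <- rq(df[,1] ~ df[,2], tau = 0.9)"];
coefs = REvaluate["coef(fit)"]   (* analogue of LinearModelFit's "BestFitParameters" *)
resid = REvaluate["resid(fit)"]  (* analogue of "FitResiduals" *)
```

Looping this over the betas and columns, as in the LinearModelFit table above, would give the Monte Carlo comparison.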
Moreover, I need the ability to extract the parameters from the quantile regression results, as I can do with *LinearModelFit*'s *["BestFitParameters"]* and *["FitResiduals"]*.
Thadeu Freitas Filho, 2018-05-22T10:49:59Z

Remove `InvisiblePrefixScriptBase`?
http://community.wolfram.com/groups/-/m/t/1342717
I accidentally deleted a working notebook A (A.nb), but have a paper printout A` of it.
I then took a similar notebook B (B.nb) and hand-edited it, using A`, to reproduce A; call the result C (C.nb).
C does not run properly, and I get an `InvisiblePrefixScriptBase` message.
How do I use the paper printout A` to (re)create a good A?
Thanks, Michael Caola
(I am as ignorant as I seem, and would appreciate any advice)
michael caola, 2018-05-21T12:59:21Z

Project a picture onto a specific geographic area?
http://community.wolfram.com/groups/-/m/t/1343159
How can I replace a world map with a picture and project it onto a specific geographic area?
I can already use shapefiles to draw satellite photos of specific administrative regions.
However, I have not been able to replace the original global satellite image with a picture of my own.
I would appreciate friendly expert guidance ~~~
data = Import["COUNTY201804300214.shp", "Data"];
picture = Import["https://upload.wikimedia.org/wikipedia/commons/thumb/4/41/Simple_world_map.svg/2000px-Simple_world_map.svg.png", "PNG"];
data[[All, 1]]
geometry = ("Geometry" /. data);
GeoGraphics[{GeoStyling["Satellite"], geometry[[12]]}, GeoBackground -> None]
GeoGraphics[{GeoStyling[{"Image", picture}], geometry[[12]]}, GeoBackground -> None]
Tsai Ming-Chou, 2018-05-22T09:19:35Z

Automatically sliding a conv net onto a larger image
http://community.wolfram.com/groups/-/m/t/1343104
**How to control the step size of the following conv net as it slides onto a larger image?**
See also: https://mathematica.stackexchange.com/questions/144060/sliding-fullyconvolutional-net-over-larger-images/148033
As a toy example, I'd like to slide a digit classifier trained on 28x28 images to classify each neighborhood of a larger image.
This is lenet with linear layers replaced by 1x1 convolutional layers.
trainingData = ResourceData["MNIST", "TrainingData"];
testData = ResourceData["MNIST", "TestData"];
lenetModel =
NetModel["LeNet Trained on MNIST Data",
"UninitializedEvaluationNet"];
newlenet = NetExtract[lenetModel, All];
newlenet[[7]] = ConvolutionLayer[500, {4, 4}];
newlenet[[8]] = ElementwiseLayer[Ramp];
newlenet[[9]] = ConvolutionLayer[10, 1];
newlenet[[10]] = SoftmaxLayer[1];
newlenet[[11]] = PartLayer[{All, 1, 1}];
newlenet =
NetChain[newlenet,
"Input" ->
NetEncoder[{"Image", {28, 28}, ColorSpace -> "Grayscale"}]]
Now train it:
newtd = First@# -> UnitVector[10, Last@# + 1] & /@ trainingData;
newvd = First@# -> UnitVector[10, Last@# + 1] & /@ testData;
ng = NetGraph[
<|"inference" -> newlenet,
"loss" -> CrossEntropyLossLayer["Probabilities", "Input" -> 10]
|>,
{
"inference" -> NetPort["loss", "Input"],
NetPort["Target"] -> NetPort["loss", "Target"]
}
]
tnew = NetTrain[ng, newtd, ValidationSet -> newvd,
TargetDevice -> "GPU"]
Now remove dimensions information (see stackexchange for the code definition of `removeInputInformation`):
removeInputInformation[layer_ConvolutionLayer] :=
With[{k = NetExtract[layer, "OutputChannels"],
kernelSize = NetExtract[layer, "KernelSize"],
weights = NetExtract[layer, "Weights"],
biases = NetExtract[layer, "Biases"],
padding = NetExtract[layer, "PaddingSize"],
stride = NetExtract[layer, "Stride"],
dilation = NetExtract[layer, "Dilation"]},
ConvolutionLayer[k, kernelSize, "Weights" -> weights,
"Biases" -> biases, "PaddingSize" -> padding, "Stride" -> stride,
"Dilation" -> dilation]]
removeInputInformation[layer_PoolingLayer] :=
With[{f = NetExtract[layer, "Function"],
kernelSize = NetExtract[layer, "KernelSize"],
padding = NetExtract[layer, "PaddingSize"],
stride = NetExtract[layer, "Stride"]},
PoolingLayer[kernelSize, stride, "PaddingSize" -> padding,
"Function" -> f]]
removeInputInformation[layer_ElementwiseLayer] :=
With[{f = NetExtract[layer, "Function"]}, ElementwiseLayer[f]]
removeInputInformation[x_] := x
tmp = NetExtract[NetExtract[tnew, "inference"], All];
n3 = removeInputInformation /@ tmp[[1 ;; -3]];
AppendTo[n3, SoftmaxLayer[1]];
n3 = NetChain@n3;
And the network `n3` slides onto any larger input. However, note that it seems to slide with steps of 4. How could I make it take steps of 1 instead?
In[358]:= n3[RandomReal[1, {1, 28*10, 28}]] // Dimensions
Out[358]= {10, 64, 1}
In[359]:= BlockMap[Length, Range[28*10], 28, 4] // Length
Out[359]= 64
Matthias Odisio, 2018-05-21T22:20:23Z

What's the hardest integral Mathematica running Rubi can find?
http://community.wolfram.com/groups/-/m/t/1343015
***Rubi*** (***Ru***le-***b***ased ***i***ntegrator) is an open source program written in ***Mathematica***'s powerful pattern-matching language. The recently released version 4.15 of ***Rubi*** at http://www.apmaths.uwo.ca/~arich/ requires ***Mathematica*** 7 or better to run. Among other improvements, ***Rubi*** 4.15 enhances the functionality of its integrate command as follows:
- Int[*expn*, *var*] returns the antiderivative (indefinite integral) of *expn* with respect to *var*.
- Int[*expn*, *var*, Step] displays the first step used to integrate *expn* with respect to *var*, and returns the intermediate result.
- Int[*expn*, *var*, Steps] displays all the steps used to integrate *expn* with respect to *var*, and returns the antiderivative.
- Int[*expn*, *var*, Stats], before returning the antiderivative of *expn* with respect to *var*, displays a list of statistics of the form {*a*, *b*, *c*, *d*, *e*} where
*a*) is the number of steps used to integrate *expn*,
*b*) is the number of distinct rules used to integrate *expn*,
*c*) is the leaf count size of *expn*,
*d*) is the leaf count size of the antiderivative of *expn*, and
*e*) is the rule-to-size ratio of the integration (i.e. the quotient of elements *b* and *c*).
The last element of the list of statistics displayed by ***Rubi***'s Int[*expn*, *var*, Stats] command is the number of distinct rules required to integrate *expn* divided by the size of *expn*. This rule-to-size ratio provides a normalized measure of the amount of mathematical knowledge ***Rubi*** uses to integrate expressions. In other words, this ratio can be used as a metric showing the difficulty of solving indefinite integration problems. For example, the hardest problem in ***Rubi***'s 70,000+ test suite is integrating (a+b ArcTanh[c/x^2])^2 which has a rule-to-size ratio of 2.5.
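For concreteness, the interface described above can be exercised on that hardest test-suite problem (a hedged sketch: it assumes Rubi 4.15 has been loaded into the session):

```mathematica
(* With Rubi loaded, Stats first displays {steps, rules, input leaf count,
   output leaf count, rule-to-size ratio}, then returns the antiderivative *)
Int[(a + b ArcTanh[c/x^2])^2, x, Stats]
```

Replacing Stats with Steps would instead print every rule application along the way.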
On ***Rubi***'s website are the terms of a challenge, with a substantial prize, for the user who finds the hardest problem ***Rubi*** can integrate.
Albert Rich, 2018-05-21T22:17:32Z

Control a replacement step in any step and get the number of steps?
http://community.wolfram.com/groups/-/m/t/1342594
Given the following sets with names
s1={x1,x2}
x1={y1,y2}
y1 ={z1,z2}
When s1 is entered, the names are eventually replaced by all the sets. That is:
Input: s1
Output: {{{z1,z2},y2},x2}
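(Editorial aside, a hedged sketch: it swaps the assignments for named rules, which is not quite the setup above, but it makes the individual steps explicit.)

```mathematica
(* Keep the definitions as rules and expand one level per step;
   each /. pass replaces a name without re-examining its replacement *)
rules = {s1 -> {x1, x2}, x1 -> {y1, y2}, y1 -> {z1, z2}};
step[expr_] := expr /. rules;
NestList[step, s1, 3]
(* {s1, {x1, x2}, {{y1, y2}, x2}, {{{z1, z2}, y2}, x2}} *)
```

The position in NestList's output is exactly the step number, so "stop at step 2" and "what do we have at step 2" both fall out directly.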
Question 1: Could we control the replacement at any step? For example, could we get
{{y1,y2},x2}, so that y1 is not replaced?
Question 2: Could we get the number of the replacement step? For example, could we know that at step 2 we get {{y1,y2},x2}?
Math Logic, 2018-05-21T20:20:58Z

Principle of RandomSearch method in NMinimize?
http://community.wolfram.com/groups/-/m/t/1314173
Hello everyone,
I am using the NMinimize procedure with the RandomSearch method explicitly chosen for optimization of a non-convex 6-dimensional problem. The 6 variables are non-negative and sum to one.
Can someone explain to me how the RandomSearch method in the Wolfram Language works? It is unclear from http://reference.wolfram.com/language/tutorial/ConstrainedOptimizationGlobalNumerical.html
For example: "... generating a population of random starting points..." - how are admissible solutions obtained? From which (multivariate) distribution are we sampling? A similar question may be asked for the remaining 3 methods, "NelderMead", "DifferentialEvolution", and "SimulatedAnnealing".
The method seems to be different from the one described at en.wikipedia.org/wiki/Random_search, where hypercubes are mentioned. Am I right?
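(Editorial aside, hedged: the tutorial does not spell out the sampling distribution, but the documented method suboptions at least make the population size explicit and the run reproducible.)

```mathematica
(* Explicit RandomSearch suboptions: number of random starting points and the seed *)
NMinimize[{(x - 1/3)^2 + (y - 1/2)^2, x >= 0 && y >= 0 && x + y <= 1},
 {x, y},
 Method -> {"RandomSearch", "SearchPoints" -> 50, "RandomSeed" -> 1}]
```

Varying "RandomSeed" and watching which local minima appear gives at least an empirical picture of how the starting population is spread over the feasible region.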
Thank you for your answers!
Lukas Vacek, 2018-04-04T16:46:52Z

Use While loop (run code until both conditions are satisfied)?
http://community.wolfram.com/groups/-/m/t/1342630
Hello,
I get an error while using a "While" loop (I want the code to run until both conditions are satisfied:
xx + zz == 2 && yy + ww == 0).
Any suggestions?
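(Editorial aside, a hedged toy sketch of the loop structure only: the head must be While with a capital W, the test needs == rather than =, and with machine reals a tolerance is safer than exact equality. Dummy updates stand in for the Maximize steps of the actual code below.)

```mathematica
(* Loop until both sums are within a tolerance of their targets *)
tol = 0.002;
xx = 0.; zz = 0.; yy = 0.3; ww = 0.;
While[Abs[xx + zz - 2] > tol || Abs[yy + ww] > tol,
 If[xx + zz < 2, xx += 0.001, xx -= 0.001];   (* stand-in for recomputing xx, zz *)
 If[yy + ww > 0, yy -= 0.001, yy += 0.001]];  (* stand-in for recomputing yy, ww *)
{xx + zz, yy + ww}
```

Note the tolerance is chosen larger than the 0.001 update step, so the test can actually be satisfied rather than oscillating forever.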
Here's the code:
ClearAll[Y ,X,z,w];
T0=1 ;
Hmu=0.8;
SA=10+Hmu ;
SB=10-Hmu;
Sigma=1.5;
K=10;
r=1.02;
gamma=0.1;
Vi=2;
Eye=2;
ZC=(K-SA)/Sigma;
ZB=(K-SB)/Sigma ;
S0=(SA-gamma(Sigma^2)(Vi /Eye ))/r
C0=((SA-gamma(Sigma^2)(Vi /Eye ) -K)(1-CDF[NormalDistribution[0, 1], ZC+gamma*Sigma*(Vi /Eye ) ])+(PDF[NormalDistribution[0, 1], ZC+gamma*Sigma*(Vi /Eye ) ])Sigma)/r
xx=0; zz=0;
while[xx +zz =2 && yy +ww =0]
A=(((Vi /Eye ) -X)S0-Y*C0)r+X*SA-(gamma/2)(X^2)(Sigma^2)
B=(((Vi /Eye ) -X)S0-Y*C0)r +(X +Y)SA -(gamma/2)((X +Y) ^2)(Sigma^2) -Y *K
Q=(((Vi /Eye ) -z)S0-w*C0)r+z*SB-(gamma/2)(z^2)(Sigma^2)
P=(((Vi /Eye ) -z)S0-w*C0)r +(z +w)SB -(gamma/2)(z +w) ^2(Sigma^2) -w *K
UA=-Exp[-gamma*A]*(CDF[NormalDistribution[0, 1], ZC+gamma*X *Sigma ]) -Exp[-gamma*B]*(1-CDF[NormalDistribution[0, 1], ZC+gamma*(X +Y )*Sigma ])
UB=-Exp[-gamma*Q ]*(CDF[NormalDistribution[0, 1], ZB+gamma*z *Sigma ]) -Exp[-gamma*P ]*(1-CDF[NormalDistribution[0, 1], ZB+gamma*(z+w )*Sigma ])
AMAX=Maximize[UA ,{X >=0,X<=2,Y>=-1,Y<=1},{X ,Y}]
BMAX=Maximize[UB ,{z >=0,z<=2,w>=-1,w<=1},{z ,w}]
xx=Replace[X,AMAX [[2,1]]]
yy=Replace[Y,AMAX [[2,2]]]
zz=Replace[z,BMAX [[2,1]]]
ww=Replace[w,BMAX [[2,2]]]
If[xx +zz<2,S0=S0-0.001,If[xx+zz>2,S0,S0+0.001,S0]]
If[yy +ww<0,C0=C0-0.001,If[yy+ww>0,C0,C0+0.001,C0]]
yossi sh, 2018-05-21T11:24:34Z

Learning to See in the Dark
http://community.wolfram.com/groups/-/m/t/1342609
spoiler alert: this is just a request...
Fresh out of the University of Illinois Urbana-Champaign (a few kilometers away from WRI headquarters) and Intel, there's a new NN conceived to improve ultra-low-light photography.
The article: [article][1]
Some examples: [video showing multiple examples][2]
![enter image description here][3]
And the request: **PLEASE PLEASE PLEASE PLEASE***
(also eagerly waiting for the [Deep Image Prior][4] to be made available on the Wolfram Neural Net Repository, or better, [this][5] more recent one from NVIDIA)
\* either integration as a function (or in a function), or availability within the Neural Net Repository framework.
[1]: https://arxiv.org/abs/1805.01934
[2]: https://www.youtube.com/watch?v=qWKUFK7MWvg&feature=youtu.be
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2018-05-21_11-07-56.gif&userId=26431
[4]: https://sites.skoltech.ru/app/data/uploads/sites/25/2017/12/deep_image_prior.pdf
[5]: https://www.youtube.com/watch?v=gg0F5JjKmhA
Pedro Fonseca, 2018-05-21T09:28:19Z

Minimum time optimal control: How to use NDSolve for unknown final time?
http://community.wolfram.com/groups/-/m/t/1342397
Hi, I am new to Mathematica and am trying to solve a minimum-time optimal control problem using NDSolve. Please see the attached notebook. I have 4 ODEs and 4 boundary conditions. The final time (tf) in this case is also unknown and needs to be determined by another constraint (the transversality condition for free final time):
1 + \[Lambda]1[tf]*x2[tf] - \[Lambda]2[tf]*Sign[\[Lambda]2[tf]] == 0
I am not sure how to add this equation to NDSolve and solve the whole problem, including tf. I am sure there is a better way to do it. I thought this problem was straightforward, but I couldn't find a solution online... so here I am asking for help. This problem can actually be solved by hand, but I want to learn how to solve it with Mathematica. Thank you so much for your help!
Danop Rajabhandharaks, 2018-05-21T05:20:46Z

Loop subdivision on triangle meshes
http://community.wolfram.com/groups/-/m/t/1338790
(Cross-posted from [Mathematica.StackExchange](https://mathematica.stackexchange.com/q/161331/38178))
Every now and then, the question pops up how a given geometric mesh (e.g. a `MeshRegion`) can be refined to produce a i.) finer and ii.) smoother mesh. For example, the following triangle mesh from the example database is pretty coarse.
R = ExampleData[{"Geometry3D", "Triceratops"}, "MeshRegion"]
MeshCellCount[R, 2]
[![enter image description here][4]][1]
> 5660
Well, we _could_ execute this
S = DiscretizeRegion[R, MaxCellMeasure -> {1 -> 0.01}]
MeshCellCount[S, 2]
[![enter image description here][4]][1]
> 1332378
only to learn that the visual appearance hasn't improved at all.
So, how can we refine in a smoothing way with Mathematica? There are several subdivision schemes known in geometry processing, e.g. [Loop subdivision](https://en.wikipedia.org/wiki/Loop_subdivision_surface) and [Catmull-Clark subdivision](https://en.wikipedia.org/wiki/Catmull-Clark_subdivision_surface) for general polyhedral meshes, but there seem to be no built-in methods for these.
Implementation
---
Let's see if we can do that with what Mathematica offers us. Still, we need quite a bit of preparation. First of all, we need methods to compute cell adjacency matrices from [here](https://mathematica.stackexchange.com/questions/160443/how-to-obtain-the-cell-adjacency-graph-of-a-mesh/160457#160457). I copied the code for completeness. The built-in `"ConnectivityMatrix"` properties for `MeshRegion`s return pattern arrays, so we start by converting them into numerical matrices.
SparseArrayFromPatternArray[A_SparseArray] := SparseArray @@ {
Automatic, Dimensions[A], A["Background"], {1, {
A["RowPointers"],
A["ColumnIndices"]
},
ConstantArray[1, Length[A["ColumnIndices"]]]
}
}
CellAdjacencyMatrix[R_MeshRegion, d_, 0] := If[MeshCellCount[R, d] > 0,
SparseArrayFromPatternArray[R["ConnectivityMatrix"[d, 0]]],
{}
];
CellAdjacencyMatrix[R_MeshRegion, 0, d_] := If[MeshCellCount[R, d] > 0,
SparseArrayFromPatternArray[R["ConnectivityMatrix"[0, d]]],
{}
];
CellAdjacencyMatrix[R_MeshRegion, 0, 0] :=
If[MeshCellCount[R, 1] > 0,
With[{A = CellAdjacencyMatrix[R, 0, 1]},
With[{B = A.Transpose[A]},
SparseArray[B - DiagonalMatrix[Diagonal[B]]]
]
],
{}
];
CellAdjacencyMatrix[R_MeshRegion, d1_, d2_] :=
If[(MeshCellCount[R, d1] > 0) && (MeshCellCount[R, d2] > 0),
With[{B = CellAdjacencyMatrix[R, d1, 0].CellAdjacencyMatrix[R, 0, d2]},
SparseArray[
If[d1 == d2,
UnitStep[B - DiagonalMatrix[Diagonal[B]] - d1],
UnitStep[B - (Min[d1, d2] + 1)]
]
]
],
{}
];
Alternatively to copying the code above, simply make sure that you have [IGraph/M](http://szhorvat.net/pelican/igraphm-a-mathematica-interface-for-igraph.html) version 0.3.93 or later installed and run
Needs["IGraphM`"];
CellAdjacencyMatrix = IGMeshCellAdjacencyMatrix;
Next is a `CompiledFunction` to compute the triangle faces for the new mesh:
getSubdividedTriangles =
Compile[{{ff, _Integer, 1}, {ee, _Integer, 1}},
{
{Compile`GetElement[ff, 1],Compile`GetElement[ee, 3],Compile`GetElement[ee, 2]},
{Compile`GetElement[ff, 2],Compile`GetElement[ee, 1],Compile`GetElement[ee, 3]},
{Compile`GetElement[ff, 3],Compile`GetElement[ee, 2],Compile`GetElement[ee, 1]},
ee
},
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
];
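A quick sanity check of the index convention (the vertex indices {1, 2, 3} and edge-midpoint indices {4, 5, 6} here are made up for illustration): each old triangle becomes three corner triangles plus the central triangle formed by the edge midpoints.

```
getSubdividedTriangles[{1, 2, 3}, {4, 5, 6}]
(* {{1, 6, 5}, {2, 4, 6}, {3, 5, 4}, {4, 5, 6}} *)
```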
Finally, the method that weaves everything together. It assembles the subdivision matrix (which maps the old vertex coordinates to the new ones), uses it to compute the new vertex positions, and calls `getSubdividedTriangles` to generate the new triangle faces.
ClearAll[LoopSubdivide];
Options[LoopSubdivide] = {
"VertexWeightFunction" -> Function[n, 5./8. - (3./8. + 1./4. Cos[(2. Pi)/n])^2],
"EdgeWeight" -> 3./8.,
"AverageBoundary" -> True
};
LoopSubdivide[R_MeshRegion, opts : OptionsPattern[]] := LoopSubdivide[{R, {{0}}}, opts][[1]];
LoopSubdivide[{R_MeshRegion, A_?MatrixQ}, OptionsPattern[]] :=
Module[{A00, A10, A12, A20, B00, B10, n, n0, n1, n2, βn, pts,
newpts, edges, faces, edgelookuptable, triangleneighedges,
newfaces, subdivisionmatrix, bndedgelist, bndedges, bndvertices,
bndedgeQ, intedgeQ, bndvertexQ,
intvertexQ, β, βbnd, η},
pts = MeshCoordinates[R];
A10 = CellAdjacencyMatrix[R, 1, 0];
A20 = CellAdjacencyMatrix[R, 2, 0];
A12 = CellAdjacencyMatrix[R, 1, 2];
edges = MeshCells[R, 1, "Multicells" -> True][[1, 1]];
faces = MeshCells[R, 2, "Multicells" -> True][[1, 1]];
n0 = Length[pts];
n1 = Length[edges];
n2 = Length[faces];
edgelookuptable = SparseArray[
Rule[
Join[edges, Transpose[Transpose[edges][[{2, 1}]]]],
Join[Range[1, Length[edges]], Range[1, Length[edges]]]
],
{n0, n0}];
(*A00=CellAdjacencyMatrix[R,0,0];*)
A00 = Unitize[edgelookuptable];
bndedgelist = Flatten[Position[Total[A12, {2}], 1]];
If[Length[bndedgelist] > 0, bndedges = edges[[bndedgelist]];
bndvertices = Sort[DeleteDuplicates[Flatten[bndedges]]];
bndedgeQ = SparseArray[Partition[bndedgelist, 1] -> 1, {n1}];
bndvertexQ = SparseArray[Partition[bndvertices, 1] -> 1, {n0}];
B00 = SparseArray[ Join[bndedges, Reverse /@ bndedges] -> 1, {n0, n0}];
B10 = SparseArray[ Transpose[{Join[bndedgelist, bndedgelist],
Join @@ Transpose[bndedges]}] -> 1, {n1, n0}];
,
bndedgeQ = SparseArray[{}, {Length[edges]}];
bndvertexQ = SparseArray[{}, {n0}];
B00 = SparseArray[{}, {n0, n0}];
B10 = SparseArray[{}, {n1, n0}];
];
intedgeQ = SparseArray[Subtract[1, Normal[bndedgeQ]]];
intvertexQ = SparseArray[Subtract[1, Normal[bndvertexQ]]];
n = Total[A10];
β = OptionValue["VertexWeightFunction"];
η = OptionValue["EdgeWeight"];
βn = β /@ n;
βbnd = If[TrueQ[OptionValue["AverageBoundary"]], 1./8., 0.];
subdivisionmatrix =
Join[Plus[
DiagonalMatrix[SparseArray[1. - βn] intvertexQ + (1. - 2. βbnd) bndvertexQ],
SparseArray[(βn/n intvertexQ)] A00, βbnd B00],
Plus @@ {((3. η - 1.) intedgeQ) (A10),
If[Abs[η - 0.5] < Sqrt[$MachineEpsilon],
Nothing, ((0.5 - η) intedgeQ) (A12.A20)], 0.5 B10}];
newpts = subdivisionmatrix.pts;
triangleneighedges = Module[{f1, f2, f3},
{f1, f2, f3} = Transpose[faces];
Partition[
Extract[
edgelookuptable,
Transpose[{Flatten[Transpose[{f2, f3, f1}]],
Flatten[Transpose[{f3, f1, f2}]]}]],
3]
];
newfaces =
Flatten[getSubdividedTriangles[faces, triangleneighedges + n0],
1];
{
MeshRegion[newpts, Polygon[newfaces]],
subdivisionmatrix
}
]
Test examples
---
So, let's test it. A classical example is subdividing an `"Icosahedron"`:
R = RegionBoundary@PolyhedronData["Icosahedron", "MeshRegion"];
regions = NestList[LoopSubdivide, R, 5]; // AbsoluteTiming // First
g = GraphicsGrid[Partition[regions, 3], ImageSize -> Full]
> 0.069731
[![enter image description here][1]][1]
Now, let's tackle the `"Triceratops"` from above:
R = ExampleData[{"Geometry3D", "Triceratops"}, "MeshRegion"];
regions = NestList[LoopSubdivide, R, 2]; // AbsoluteTiming // First
g = GraphicsGrid[Partition[regions, 3], ImageSize -> Full]
> 0.270776
[![enter image description here][2]][2]
The meshes so far had trivial boundary. As for an example with nontrivial boundary, I dug out the `"Vase"` from the example dataset:
R = ExampleData[{"Geometry3D", "Vase"}, "MeshRegion"];
regions = NestList[LoopSubdivide, R, 2]; // AbsoluteTiming // First
g = GraphicsRow[
Table[Show[S, ViewPoint -> {1.4, -2.1, -2.2},
ViewVertical -> {1.7, -0.6, 0.0}], {S, regions}],
ImageSize -> Full]
> 1.35325
[![enter image description here][3]][3]
Remarks and edits
---
Added some performance improvements and incorporated some ideas by [Chip Hurst](https://mathematica.stackexchange.com/users/4346) from [this post](https://mathematica.stackexchange.com/questions/160443/how-to-obtain-the-cell-adjacency-graph-of-a-mesh/166491#166491).
Added options for customization of the subdivision process, in particular for planar subdivision (see [this post](https://mathematica.stackexchange.com/a/170604/38178) for an application example).
Added a way to also return the subdivision matrix since it can be useful, e.g. for [geometric multigrid solvers](https://mathematica.stackexchange.com/a/173617/38178). Just call it with a matrix as second element of a list, e.g., `LoopSubdivide[{R, {{1}}}]`.
Fixed a bug that produced dense subdivision matrices in some two-dimensional examples due to not using `0` as `"Background"` value.
[4]: https://i.stack.imgur.com/nuWBd.png
[1]: https://i.stack.imgur.com/l1VcB.png
[2]: https://i.stack.imgur.com/qSbBh.png
[3]: https://i.stack.imgur.com/dp1BY.png

Henrik Schumacher, 2018-05-14T16:15:13Z

Embedding images in QR code
http://community.wolfram.com/groups/-/m/t/1341834
![manually edited][11]
I guess a story-telling type of post would attract more upvotes and probably give some insight into how to 'solve problems' using Mathematica, so I will go into detail and try to explain not only the code but also how I figured out how to write it.
To begin with, here are three QR codes generated with the code. Check them yourself: they are actually scannable~ It's also amazing that even very fine details of the image can be shown in the QR code. (Note that these QR codes look better if you view them with your glasses off XD)
![mma1][1] ![poa][2]
![mma2][3]
#How does this work?
In fact, this form of QR code is not my original idea; I came across this type of QR code on the internet but failed to find its origin. So I tried to figure out the principle by myself.
Observing the image carefully, one finds that there is something odd about this QR:
![weird behavior][4]
The markers in the corners are **three times coarser** than the majority of the QR. So I initially hypothesized that the QR recognition algorithm first averages the brightness of each segment, turning it into a normal QR code, and then recognizes it. The code I used is as follows:
Block[{img = Import["http://community.wolfram.com//c/portal/getImageAttachment?filename=mathematica1.png&userId=1340903"], dat, partitioned},
dat = ImageData@ImagePad[Binarize[ImageResize[img, Scaled[1/3]]], -4];
partitioned = Partition[dat, {3, 3}];
Grid[{ImageResize[#, Dimensions@dat], BarcodeRecognize@#} & /@
{Image@dat,Binarize@Image@Map[Mean@*Flatten, partitioned, {2}]}]
]
The result proved me wrong, as the averaged version cannot be properly recognized. Then, observing the QR code further, I found that there are mysterious dots even in places that should be purely white, and the dots are a bit *too* structured. So I suspected that a normal QR code recognition algorithm only takes the color of the center dot of each block, so I added this to the previous code:
Map[#[[2,2]]&,partitioned,{2}]
then it worked out properly!
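For reference, here is the center-pixel check condensed into one self-contained block (same image URL as before):

```
Block[{img = Import["http://community.wolfram.com//c/portal/getImageAttachment?filename=mathematica1.png&userId=1340903"], dat, partitioned},
 dat = ImageData@ImagePad[Binarize[ImageResize[img, Scaled[1/3]]], -4];
 partitioned = Partition[dat, {3, 3}];
 (* keep only the center pixel of each 3x3 block, then decode *)
 BarcodeRecognize@Image@Map[#[[2, 2]] &, partitioned, {2}]
]
```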
![theory][5]
As we've already cracked the theory, we can now generate some of our own.
#How to generate?
####QR code generation
First we can use `BarcodeImage` to generate a QR code, for example here I would use: `"This is a sample QR generated by Mathematica!"` as the content of the QR code:
text = "This is a sample QR generated by Mathematica!";
qrraw = BarcodeImage[text, {"QR", "H"}, 1]
BarcodeRecognize@qrraw
####Image processing
Then we create a black and white image to use as the background; for example, here we use the Wolfram wolf icon:
![wolfram wolf][6]
Import, convert to grayscale and adjust the grayscale a bit:
img=ColorConvert[Rasterize[Graphics[{
Inset[Import["http://community.wolfram.com//c/portal/getImageAttachment?filename=wolframwolf.png&userId=1340903"],{.6,.4},Automatic,.8],
Text[Style["WOLFRAM",Bold,14],{.5,.92}]
},PlotRange->{{0,1},{0,1}},ImageSize->3ImageDimensions@qrraw]],"Grayscale"]^.45
which returns:
![B&W image][7]
Note that in order to get enough resolution while keeping the QR code easy to scan, the dimension of the QR code is best kept in the range [25, 50]. One can check that using `ImageDimensions@img` and adjust it by changing the error correction level, setting `{"QR",lev}` where lev can be "L", "M", "Q", or "H".
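For instance, one can tabulate how the error correction level affects the module count (a small sketch; the exact sizes depend on the length of the encoded text):

```
text = "This is a sample QR generated by Mathematica!";
Table[lev -> ImageDimensions@BarcodeImage[text, {"QR", lev}, 1], {lev, {"L", "M", "Q", "H"}}]
```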
####Merging
Then we need to merge these two images. Here we use the technique of [dithering](https://en.wikipedia.org/wiki/Dither) to display a grayscale image using only black and white pixels. In the process of dithering, we must make sure that the center pixel of each 3*3 block (9 pixels) takes the value of the corresponding pixel in the QR image, or the QR code would become invalid. The code can easily be written as follows:
dithering[imgdat_, qrdat_] :=
Block[{imgdat1 = imgdat, dimx, dimy, tmp1, tmp2, f = UnitStep[# - .5] &},
{dimx, dimy} = Dimensions@imgdat;
Quiet@Do[
(*Rounding*)
tmp1 = If[Mod[{i, j}, 3] == {2, 2}, qrdat[[(i + 1)/3, (j + 1)/3]], f[imgdat1[[i, j]]]];
tmp2 = Clip[imgdat1[[i, j]] - tmp1, {-.5, .5}];
(*Diffuse Error*)
imgdat1[[i, j]] = tmp1;
imgdat1[[i, j + 1]] += 0.4375 tmp2;
If[j != 1, imgdat1[[i + 1, j - 1]] += 0.1875 tmp2];
imgdat1[[i + 1, j]] += 0.3125 tmp2;
imgdat1[[i + 1, j + 1]] += 0.0625 tmp2
, {i, dimx}, {j, dimy}];
imgdat1
]
Special attention should be paid to the handling of the key pixels of the QR code: the error created by forcing them should not be ignored, but its influence should be limited in range, so a `Clip` on the error is required here, while in a traditional dithering process it would be redundant.
Apply dithering to the image and we have:
Image[ditherdat=dithering[ImageData@img, ImageData@qrraw]]
![output 1][8]
####Refinement
One can see that the shape of the original image is quite well preserved and the key points of the QR code are properly dealt with. The final step is to restore the key features in the corners and on the edge of the QR code, which is quite trivial:
replicate = (Flatten[ConstantArray[#, {3, 3}], {{3, 1}, {4, 2}}] &);
refineqr[qrdat_] :=
Block[{qrd = qrdat, d = Length[qrdat]},
(*Corner*)
(qrd[[#1 ;; 24 #1 ;; #1, #2 ;; 24 #2 ;; #2]] = replicate[qrd[[2 #1 ;; 23 #1 ;; 3 #1, 2 #2 ;; 23 #2 ;; 3 #2]]]) & @@@ {{1, 1}, {1, -1}, {-1, 1}};
(*Edge*)
qrd[[22 ;; d - 21, 19 ;; 21]] = Transpose[qrd[[19 ;; 21, 22 ;; d - 21]] = replicate[{Mod[Range[(d + 1)/3 - 14], 2]}]];
qrd]
Then apply this to the previously obtained result, and we get the final result, which is scannable:
Image[final = refineqr@ditherdat]
BarcodeRecognize@%
It's usually favourable to have a 3x zoom of the image:
Image@replicate@final
![final result][9]
A fully packed version is shown in the attachment notebook file, where:
createqr[text,img]
would generate the same result.
Further optimizations could include using machine learning to further refine the display effect: sharper lines, less interfering key points, and more could be expected.
**ENJOY~**
----
#Update
@Henrik Schachner kindly reminded me that the previous QR code is not that easy to scan with average QR scanning software. So I made some tiny updates to make the QR more standardized and much easier to scan:
refineqr[qrdat_] :=
Block[{qrd = qrdat, d = Length[qrdat], temp = Fold[ArrayPad[#1, 1, #2] &, {{{0}}, 1, 0}], p},
p = Position[Round@ListCorrelate[temp, qrdat[[2 ;; ;; 3, 2 ;; ;; 3]], {{1, 1}, {-1, -1}}, 0, Abs@*Subtract], 0, 2];
(*Corner*)
(qrd[[#1 ;; 24 #1 ;; #1, #2 ;; 24 #2 ;; #2]] = replicate[qrd[[2 #1 ;; 23 #1 ;; 3 #1, 2 #2 ;; 23 #2 ;; 3 #2]]]) & @@@ {{1, 1}, {1, -1}, {-1, 1}};
(*Edge*)
qrd[[22 ;; d - 21, 19 ;; 21]] = Transpose[ qrd[[19 ;; 21, 22 ;; d - 21]] = replicate[{Mod[Range[(d + 1)/3 - 14], 2]}]];
(*Special*)
(qrd[[3 #1 - 2 ;; 3 #1 + 12, 3 #2 - 2 ;; 3 #2 + 12]] = replicate@temp) & @@@ p;
qrd]
after this update, the QR code would look like this:
![edited][10]
After minor manual editing, it could look like this:
![manually edited][11]
Maybe this is easier to scan due to the newly added correction block in the bottom-right corner.
Also, I think I found a better realization using the same basic design principle [here](http://vecg.cs.ucl.ac.uk/Projects/SmartGeometry/halftone_QR/paper_docs/halftoneQR_sigga13.pdf).
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mma1.png&userId=1340903
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5511POA.png&userId=1340903
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=mma2.png&userId=1340903
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=illus.png&userId=1340903
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6963illus.png&userId=1340903
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=wolframwolf.png&userId=1340903
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out0.png&userId=1340903
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out1.png&userId=1340903
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=final_big.png&userId=1340903
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=edited.png&userId=1340903
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ww_manual.png&userId=1340903

Jingxian Wang, 2018-05-18T19:38:43Z

Avoid getting $Aborted in FinancialData?
http://community.wolfram.com/groups/-/m/t/1342726
When using Mathematica Online, the `FinancialData` function often returns $Aborted.

Kianoosh Kassiri, 2018-05-21T13:20:07Z

TranslationCell - Instantly translate English text cells to your language!
http://community.wolfram.com/groups/-/m/t/1341205
Note: Please check out this [interesting post on a NotebookTranslate][1] function([direct link to cloud notebook][2]), by [Thomas Colignatus][3].
Starting with version 11.1 the Wolfram Language includes a new text translation function, aptly named [`TextTranslation`][4].
Recently I made a post which showed how you can use this function to [query Wolfram|Alpha in any language][5].
But there is so much more you can do with this function. In this post I will share the idea of a `TranslationCell` function, which
creates a regular text cell, with an attached button which lets you toggle from English to a specific language.
Let's start with a famous English quote from the recent past:
> We choose to go to the Moon! We choose to go to the Moon in this
> decade and do the other things, not because they are easy, but because
> they are hard; because that goal will serve to organize and measure
> the best of our energies and skills, because that challenge is one
> that we are willing to accept, one we are unwilling to postpone, and
> one we intend to win, and the others, too.
And let's assign this quote to a variable named `quote`:
```
quote = "We choose to go to the Moon! We choose to go to the Moon in \
this decade and do the other things, not because they are easy, but \
because they are hard; because that goal will serve to organize and \
measure the best of our energies and skills, because that challenge \
is one that we are willing to accept, one we are unwilling to \
postpone, and one we intend to win, and the others, too."
```
I don't want to get too deeply into the implementation details, but if you are interested in them I recommend perusing the code on my GitHub project:
https://github.com/arnoudbuzing/prototypes/blob/master/Prototypes/Notebook.wl#L148
And if you want to try this function, simply install the paclet that has this function included:
```
PacletInstall["https://github.com/arnoudbuzing/prototypes/releases/download/v0.2.5/Prototypes-0.2.5.paclet"]
```
So let's take a look at an example:
```
TranslationCell[ quote, "Spanish" ]
```
This creates the following cell:
![enter image description here][6]
And clicking on the button will translate the English text to Spanish (this may take 1-2 seconds since it is calling a translation service):
![enter image description here][7]
Clicking the button again reverts to English (this is fast, because it stored the original text in the cell as metadata):
![enter image description here][8]
And of course this works for many languages, like Russian:
```
TranslationCell[ quote, "Russian" ]
```
![enter image description here][9]
Or Swedish:
```
TranslationCell[ quote, "Swedish" ]
```
![enter image description here][10]
Or Arabic:
```
TranslationCell[ quote, "Arabic" ]
```
![enter image description here][11]
It might be useful to extend this idea to support translation between any two languages ( "LanguageA" -> "LanguageB" ), so I think this will be the next improvement.
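The rule form of `TextTranslation` should make that extension straightforward; per its documentation it accepts a `lang1 -> lang2` specification, e.g.:

```
TextTranslation["We choose to go to the Moon!", "English" -> "Russian"]
```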
Let me know what you think! I am interested in feedback and additional ideas on how to use `TextTranslation` in the Wolfram Language!
[1]: http://community.wolfram.com/groups/-/m/t/1313456
[2]: https://www.wolframcloud.com/objects/thomas-cool/Utilities/2018-04-02-NotebookTranslate.nb
[3]: http://community.wolfram.com/web/cool
[4]: http://reference.wolfram.com/language/ref/TextTranslation.html
[5]: http://community.wolfram.com/groups/-/m/t/1337022
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-04.png&userId=22112
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-05.png&userId=22112
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-04.png&userId=22112
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-06.png&userId=22112
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-07.png&userId=22112
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-08.png&userId=22112

Arnoud Buzing, 2018-05-17T16:14:18Z

Use NMaximize output as parameter?
http://community.wolfram.com/groups/-/m/t/1342618
Hello,
I'm running two maximization problems and I want to use their outputs in an additional condition inside an `If` function.
How can I use the outputs as parameters (e.g. X+z in this code)?
Here is the Code:
ClearAll[Y ,X,z,w]
T0=1
Hmu=0.2
SA=10+Hmu
SB=10-Hmu
Sigma=1.5
K=10
r=1.02
gamma=0.1
Vi=2
Eye=2
ZC=(K-SA)/Sigma
ZB=(K-SB)/Sigma
S0=(SA-gamma(Sigma^2)(Vi /Eye ))/r
C0=((SA-gamma(Sigma^2)(Vi /Eye ) -K)(1-CDF[NormalDistribution[0, 1], ZC+gamma*Sigma*(Vi /Eye ) ])+(PDF[NormalDistribution[0, 1], ZC+gamma*Sigma*(Vi /Eye ) ])Sigma)/r
A=(((Vi /Eye ) -X)S0-Y*C0)r+X*SA-(gamma/2)(X^2)(Sigma^2)
B=(((Vi /Eye ) -X)S0-Y*C0)r +(X +Y)SA -(gamma/2)(X +Y) ^2(Sigma^2) -Y *K
Q=(((Vi /Eye ) -z)S0-w*C0)r+z*SB-(gamma/2)(z^2)(Sigma^2)
P=(((Vi /Eye ) -z)S0-w*C0)r +(z +w)SB -(gamma/2)(z +w) ^2(Sigma^2) -w *K
UA=-Exp[-gamma*A]*(CDF[NormalDistribution[0, 1], ZC+gamma*X *Sigma ]) -Exp[-gamma*B]*(1-CDF[NormalDistribution[0, 1], ZC+gamma*(X +Y )*Sigma ])
UB=-Exp[-gamma*Q ]*(CDF[NormalDistribution[0, 1], ZB+gamma*z *Sigma ]) -Exp[-gamma*P ]*(1-CDF[NormalDistribution[0, 1], ZB+gamma*(z +w )*Sigma ])
NMaximize[UA ,{X >=0,X<=2,Y>=-1,Y<=1},{X ,Y}]
NMaximize[UB ,{z >=0,z<=2,w>=-1,w<=1},{z ,w}]
If[X+z<=2,1,0]

yossi sh, 2018-05-21T09:37:41Z

Get Stellate image using the PolyhedronOperations` package?
http://community.wolfram.com/groups/-/m/t/1341888
I tried using the PolyhedronOperations` package. The Needs command worked fine. I used an example from the Mathematica help file, but Stellate didn't produce the output shown there. Here is the output I got:
![enter image description here][1]
Here is the output in the help file
![enter image description here][2]
Any ideas about the reason for the difference? Thanks
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Q1.png&userId=764017
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Q1.1.png&userId=764017

fajad binj, 2018-05-19T17:08:26Z

Avoid problem with the Mathematica Trial edition installation?
http://community.wolfram.com/groups/-/m/t/1342279
I have a problem with the Mathematica installation.
I tried to install the Mathematica Trial edition.
But the Mathematica Trial edition cannot be installed; it stops with a message along the lines of
"This application can only run a single instance"
It's probably a problem with "a single instance".
What is the problem? What should I do?

Yoon Young Jin, 2018-05-21T04:32:36Z

Find formula of a Gamma[] product with complex conjugate pair of numbers?
http://community.wolfram.com/groups/-/m/t/1342337
The Wolfram Language knows some simplifications for the product of the Gamma function at complex conjugate numbers, i.e.
In[1]:= Gamma[n I] Gamma[-n I] // FullSimplify
Out[1]= (\[Pi] Csch[n \[Pi]])/n
In[2]:= Gamma[1 + n I] Gamma[1 - n I] // FullSimplify
Out[2]= n \[Pi] Csch[n \[Pi]]
In[3]:= Gamma[2 + n I] Gamma[2 - n I] // FullSimplify
Out[3]= n (1 + n^2) \[Pi] Csch[n \[Pi]]
In[4]:= Gamma[3 + n I] Gamma[3 - n I] // FullSimplify
Out[4]= n (4 + 5 n^2 + n^4) \[Pi] Csch[n \[Pi]]
In[5]:= Gamma[4 + n I] Gamma[4 - n I] // FullSimplify
Out[5]= Gamma[4 - I n] Gamma[4 + I n]
... but at some point it gets stuck. With the help of WL I found these identities for m = 4, 5 and 6.
In[8]:= N[Table[{Gamma[4 + n I] Gamma[
4 - n I] == (\[Pi] n (n^6 + 14 n^4 + 49 n^2 + 36))/
Sinh[n \[Pi]]}, {n, 1, 4}], 20]
Out[8]= {{True}, {True}, {True}, {True}}
In[9]:= N[
Table[{Gamma[5 + n I] Gamma[
5 - n I] == (\[Pi] n (n^8 + 30 n^6 + 273 n^4 + 820 n^2 + 576))/
Sinh[n \[Pi]]}, {n, 1, 4}], 20]
Out[9]= {{True}, {True}, {True}, {True}}
In[10]:= N[
Table[{Gamma[6 + n I] Gamma[
6 - n I] == (\[Pi] n (n^10 + 55 n^8 + 1023 n^6 + 7645 n^4 +
21076 n^2 + 14400))/Sinh[n \[Pi]]}, {n, 1, 4}], 20]
Out[10]= {{True}, {True}, {True}, {True}}
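In fact, a closed formula consistent with all of the examples above is $\Gamma(m+i n)\,\Gamma(m-i n)=\frac{\pi n}{\sinh(\pi n)}\prod_{k=1}^{m-1}(k^2+n^2)$ for natural $m\geq 1$, since repeated use of $\Gamma(z+1)=z\,\Gamma(z)$ gives $\Gamma(m\pm i n)=\Gamma(1\pm i n)\prod_{k=1}^{m-1}(k\pm i n)$, and the $m=1$ case is Out[2] above. A quick numerical sanity check (relative error should be near machine precision for every m if the formula is right):

```
closed[m_, n_] := Pi n/Sinh[Pi n] Product[k^2 + n^2, {k, 1, m - 1}]
Table[Abs[Gamma[m + I n] Gamma[m - I n]/closed[m, n] - 1] < 10^-8, {m, 1, 8}, {n, {0.5, 1.5, 2.5}}]
```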
And of course more identities for other natural numbers can be found with some effort. Maybe someone can find a closed formula for the case Gamma[m+I n]Gamma[m-I n], where m is a natural number.

Oliver Seipel, 2018-05-20T12:55:15Z

Clarify sequence-to-sequence learning with neural nets in WL?
http://community.wolfram.com/groups/-/m/t/1341622
I've been learning about recurrent neural networks lately and I think I'm starting to get the basic idea of how they work. I'm particularly interested in the sequence transformation capabilities of these nets for applications in both NLP and generative art. I've played with a few simple (non-recurrent) nets in Mathematica, but would like to learn more about how to implement recurrent sequence-to-sequence learning.
I've read the Wolfram tutorial [Sequence Learning and NLP with Neural Networks][1], and I'm particularly interested in the section titled **Integer Addition with Variable-Length Output**. If I understand correctly, sequence-to-sequence learning involves converting a sequence to a vector, and then converting that vector into another sequence. I understand (mostly) the "sequence-to-vector" parts with things like `SequenceLastLayer[]`. However, I'm still not entirely clear from the tutorial how the "vector-to-sequence" part of this works. Are there other, more descriptive examples somewhere?
[1]: http://reference.wolfram.com/language/tutorial/NeuralNetworksSequenceLearning.html

Andrew Campbell, 2018-05-18T14:00:11Z

Obtain the field intensity at a certain position of a maser interferometer?
http://community.wolfram.com/groups/-/m/t/1329310
Hey guys,
I would like to know how to normalize the figures given by this program to obtain the field intensity at an arbitrary off-center point, for example at x = 0.5a, in order to find the same values as given in the article "Resonant Modes in a Maser Interferometer" by A. G. Fox and Tingye Li (manuscript received October 20, 1960), using its equation 26:
Exp[I*0.25*Pi]/(2*Sqrt[d])*\[Integral](Exp[-I*k*Sqrt[b^2 + (x1 - x2)^2]]/ Sqrt[Sqrt[b^2 + (x1 - x2)^2]])*(1 + b/Sqrt[b^2 + (x1 - x2)^2])
(*==================================================================*)
(* lam=d; a=25d; b=100d; k=2 Pi/d - one trip // 0 < x2 < 1 a *)
(*==================================================================*)
d = 1; lam = d; a = 25*d; b = 100*d ; k = 2 Pi/d
x2 = Table[x2, {x2, 0, 1, 0.01}]*a
f1 = (Exp[-I k Sqrt[b^2 + (x1 - x2)^2]]/ Sqrt[Sqrt[b^2 + (x1 - x2)^2]])*(1 + b/Sqrt[b^2 + (x1 - x2)^2])
g1 = NIntegrate[f1, {x1, -a, a}]
fact = Exp[I*0.25Pi]/(2*Sqrt[d])
g2 = fact*g1
Abg = Abs[g2]
ListLinePlot[Abg]
Ag = Arg[g2]
ListLinePlot[Ag]
Please see my attachment for more details.
Thanks in advance.

MOUMA MIRAL, 2018-04-29T09:56:57Z

Prove a formula for a known convergent series with Resolve?
http://community.wolfram.com/groups/-/m/t/1342090
I ran across this behaviour that I do not understand. I wanted to prove a formula for a known convergent series:
$\sum_{k=-\infty}^\infty \frac{1}{(2k+1)(2q-2k-1)} = -\frac{\pi^2}{4} \delta_{q,0}\quad$ for $\quad q\in\mathbb{Z}$.
However, Mathematica says this is wrong:
Resolve[ForAll[q, Sum[1/((2 k + 1) (2 q - 2 k - 1)), {k, -Infinity, Infinity}] == -(\[Pi]^2/4) KroneckerDelta[q, 0]], Integers]
returns `False`. Moreover, if the values for $q$ are restricted, i.e. if I look separately at the cases $q=0$ and $q\neq 0$,
I get a `True` result: The two commands
Resolve[ForAll[q, q == 0, Sum[1/((2 k + 1) (2 q - 2 k - 1)), {k, -Infinity, Infinity}] == -(\[Pi]^2/4) KroneckerDelta[q, 0]], Integers]
Resolve[ForAll[q, q != 0, Sum[1/((2 k + 1) (2 q - 2 k - 1)), {k, -Infinity, Infinity}] == -(\[Pi]^2/4) KroneckerDelta[q, 0]], Integers]
both evaluate to `True`. Logically, this is a contradiction, so my question is whether I misunderstood the function of these commands or this is a bug.
Thanks in advance for clarification!
(Using Mathematica 10.0)

Julian Farnsteiner, 2018-05-19T19:29:14Z

Create Magic Cubes, moon level ( 9x9) ?
http://community.wolfram.com/groups/-/m/t/1341127
![Magic cube 9x9][1]
Hello Wolfram Community! My name is Serg and I'm wondering how this could possibly have been done.
Besides the fact that every horizontal, vertical and diagonal row sums to 369, it also gives 1,6,2,7,3,8,4,9,5 if you add the numbers in a cell,
which in one step gives the sequences 1,2,3,4,5 and 6,7,8,9.
Also, if you look at the second number in any cell, you'll see that it creates a perfect pattern that mirrors from both sides,
like 7,6,7,6,7,6,7,6,7 from the left side and 5,6,5,6,5,6,5,6,5 from the other; in the whole picture you can see that it sequentially creates a wonderful
pattern from both sides, with only 1,1,1,1,1,1,1,1,1 in the central row.
Also, if you look at the second numbers at the top and the bottom, you'll find an interesting mirror pattern, 7,8,9,0,1,2,3,4,5, which surely has some meaning;
plus, at the bottom, we find that the pattern is the same and applies to all vertical rows: all second numbers are the same.
So, my question is: how could this possibly have been done?
I've been learning math for years and really don't understand the key to making beautiful constructions like this one.
I have software like HypercubeGenerator, TesseractGenerator and CubeGenerator, but it gives me results far from this miracle.
Could you please help me understand how it was done?
Thank you a lot for your attention.
Here is the cube itself in the attachment
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=talisman_137a.jpg&userId=210822

Sergiy Skorin, 2018-05-17T16:06:29Z

Perform Community-specific search?
http://community.wolfram.com/groups/-/m/t/1342064
Perhaps I am missing something, but is there a Wolfram Community-specific search field where I can look for particular posts?
There is the overall WRI search in the upper right hand corner of the window, but as far as I can see, nothing to search the community only.
Have I missed it in my coffee-limited state of mind?

David Reiss, 2018-05-19T17:17:03Z

UNET image segmentation in stem cells research
http://community.wolfram.com/groups/-/m/t/1341081
For my research project I encountered a thorny problem. But before I get to the problem, I would like to briefly describe my research project. Basically, I am using embryonic stem cells that self-organize to form spheroids (balls of cells) to study gastrulation events. So as not to bog down the readers with technical jargon: “gastrulation” is a process where the stem cells start to form the different layers; each layer then goes on to form the various tissues/organs, in the process unraveling the developmental plan of the entire organism. I am using experimental techniques and quantitative principles from biophysics and engineering to understand some aspects of this crucial process.
Now, coming back to the problem at hand: the gastruloids (image below) are quite rough in their appearance and not as beautiful as one would like them to be (only a mother can love such an image). Any means of quantifying these gastruloids requires me to first segment them. When you look at time-lapse images of gastruloids it becomes apparent that they shed a lot of cells (for reasons I do not know yet). This adds considerable noise to the system, oftentimes to the point that, as a human, my eyes are fooled and have difficulty finding the right contours of the spheroids. Here comes the disclosure: classical operations in image processing (gradients and edge detection, filtering, morphological operations etc.) prove utterly futile for image segmentation in my case.
![enter image description here][1]
(A gastruloid – virtually a ball of cells with many shed around the periphery)
So what can you do to address a problem where even the best image-processing tool in existence, the human eye, fails? This is precisely where you take the help of neural networks. Neural networks have been selling like hotcakes in recent years and have added life and hope to the once-dead area of artificial intelligence. Again, to avoid the underlying technical details: a neural network is a paradigm by which a computer mimics the working of a human brain by taking into account the complex interactions between cells, but only digitally. There are many flavours of neural networks out there, each geared towards performing a specific task. With the advancements made in deep learning/artificial intelligence, neural nets have started to surpass humans in the tasks humans have been known to be best at, i.e. classification tasks. A few recent examples that come to mind include Google's AlphaGo beating the former world Go champion and an AI diagnosing skin cancer with unprecedented accuracy.
I utilized one such flavour of neural network (a deep convolutional network termed UNET) to solve my longstanding problem. I constructed the network in the Wolfram Language with external help from Alexey Golyshev. UNET is a deep convolutional network that has a series of convolution and pooling operations in the contraction phase of the net (wherein the features are extracted) and a sequence of deconvolution & convolution operations in the expansion phase, which then yields an output from the network. This output can be subjected to a threshold to ultimately generate a binarized mask (the image segmentation).
![enter image description here][2]
The architecture of UNET as provided by the author: https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/
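Before the package code itself, here is a hypothetical end-to-end usage sketch (the directory paths are placeholders; `dataPrep`, `UNET`, `trainNet` and `measureModelAccuracy` are all defined in the package that follows):

```
Needs["UNETSegmentation`"]
(* load images and masks, resized and encoded to 160x160 grayscale *)
{imgs, masks} = dataPrep["C:\\data\\images", "C:\\data\\masks"];
(* train; trainNet returns a NetTrainResultsObject *)
results = trainNet[UNET, imgs, masks];
trained = results["TrainedNet"];
(* fraction of correctly classified pixels, overall and per image *)
{meanAcc, accTable} = measureModelAccuracy[trained, imgs, masks];
```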
(* ::Package:: *)
BeginPackage["UNETSegmentation`"]
(* ::Section:: *)
(*Creating UNet*)
conv[n_]:=NetChain[
{
ConvolutionLayer[n,3,"PaddingSize"->{1,1}],
Ramp,
BatchNormalizationLayer[],
ConvolutionLayer[n,3,"PaddingSize"->{1,1}],
Ramp,
BatchNormalizationLayer[]
}
];
pool := PoolingLayer[{2,2},2];
dec[n_]:=NetGraph[
{
"deconv" -> DeconvolutionLayer[n,{2,2},"Stride"->{2,2}],
"cat" -> CatenateLayer[],
"conv" -> conv[n]
},
{
NetPort["Input1"]->"cat",
NetPort["Input2"]->"deconv"->"cat"->"conv"
}
];
nodeGraphMXNET[net_,opt: ("MXNetNodeGraph"|"MXNetNodeGraphPlot")]:= net~NetInformation~opt;
UNET := NetGraph[
<|
"enc_1"-> conv[64],
"enc_2"-> {pool,conv[128]},
"enc_3"-> {pool,conv[256]},
"enc_4"-> {pool,conv[512]},
"enc_5"-> {pool,conv[1024]},
"dec_1"-> dec[512],
"dec_2"-> dec[256],
"dec_3"-> dec[128],
"dec_4"-> dec[64],
"map"->{ConvolutionLayer[1,{1,1}],LogisticSigmoid}
|>,
{
NetPort["Input"]->"enc_1"->"enc_2"->"enc_3"->"enc_4"->"enc_5",
{"enc_4","enc_5"}->"dec_1",
{"enc_3","dec_1"}->"dec_2",
{"enc_2","dec_2"}->"dec_3",
{"enc_1","dec_3"}->"dec_4",
"dec_4"->"map"},
"Input"->NetEncoder[{"Image",{160,160},ColorSpace->"Grayscale"}]
]
(* ::Section:: *)
(*DataPrep*)
dataPrep[dirImage_,dirMask_]:=Module[{X, masks,imgfilenames, maskfilenames,ordering, fNames,func},
func[dir_] := (SetDirectory[dir];
fNames = FileNames[];
ordering = Flatten@StringCases[fNames,x_~~p:DigitCharacter.. :> ToExpression@p];
Part[fNames,Ordering@ordering]);
imgfilenames = func@dirImage;
X = ImageResize[Import[dirImage<>"\\"<>#],{160,160}]&/@imgfilenames;
maskfilenames = func@dirMask;
masks = Import[dirMask<>"\\"<>#]&/@maskfilenames;
{X, NetEncoder[{"Image",{160,160},ColorSpace->"Grayscale"}]/@masks}
]
(* ::Section:: *)
(*Training UNet*)
trainNetwithValidation[net_,dataset_,labeldataset_,validationset_,labelvalidationset_, batchsize_: 8, maxtrainRounds_: 100]:=Module[{},
SetDirectory[NotebookDirectory[]];
NetTrain[net, dataset->labeldataset,All, ValidationSet -> Thread[validationset-> labelvalidationset],
BatchSize->batchsize,MaxTrainingRounds->maxtrainRounds, TargetDevice->"GPU",
TrainingProgressCheckpointing->{"Directory","results","Interval"->Quantity[5,"Rounds"]}]
];
trainNet[net_,dataset_,labeldataset_, batchsize_:8, maxtrainRounds_: 10]:=Module[{},
SetDirectory[NotebookDirectory[]];
NetTrain[net, dataset->labeldataset,All,BatchSize->batchsize,MaxTrainingRounds->maxtrainRounds, TargetDevice->"GPU",
TrainingProgressCheckpointing->{"Directory","results","Interval"-> Quantity[5,"Rounds"]}]
];
(* ::Section:: *)
(*Measure Accuracy*)
measureModelAccuracy[net_,data_,groundTruth_]:= Module[{acc},
acc =Table[{i, 1.0 - HammingDistance[N@Round@Flatten@net[data[[i]],TargetDevice->"GPU"],
Flatten@groundTruth[[i]]]/(160*160)},{i,Length@data}
];
{Mean@Part[acc,All,2],TableForm@acc}
];
(* ::Section:: *)
(*Miscellaneous*)
saveNeuralNet[net_]:= Module[{dir = NotebookDirectory[]},
Export[dir<>"unet.wlnet",net]]/; Head[net]=== NetGraph;
saveInputs[data_,labels_,opt:("data"|"validation")]:=Module[{},
SetDirectory[NotebookDirectory[]];
Switch[opt,"data",
Export["X.mx",data];Export["Y.mx",labels],
"validation",
Export["Xval.mx",data];Export["Yval.mx",labels]
]
]
EndPackage[];
The above code can also be found in the repository @ [Wolfram-MXNET GITHUB][3]
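The pixel-accuracy metric used in `measureModelAccuracy` above is one minus the Hamming distance between the rounded network output and the ground-truth mask, normalized by the number of pixels. A minimal Python sketch of the same arithmetic, with made-up values rather than actual network output:

```python
def mask_accuracy(pred, truth):
    """Pixel accuracy: 1 - HammingDistance/total, as in measureModelAccuracy."""
    assert len(pred) == len(truth)
    mismatches = sum(1 for p, t in zip(pred, truth) if round(p) != t)
    return 1.0 - mismatches / len(pred)

pred = [0.9, 0.1, 0.8, 0.4]   # hypothetical sigmoid outputs, flattened
truth = [1, 0, 1, 1]          # hypothetical binary ground-truth mask
acc = mask_accuracy(pred, truth)
```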
I trained my network on my laptop GPU (Nvidia GTX 1050) by feeding it augmented data (a set of 300 images constructed from a small dataset). The training was done in under 3 minutes! The accuracy (computed from the Hamming distance between the two binary vectors) of the generated masks with respect to the ground truth (unseen data), over a set of 90 images, was 98.55%. And with this, a task that previously required me to painstakingly trace the contour of the gastruloids by hand can now be performed in a matter of milliseconds. All that saved time and perspiration can be put to use somewhere else.
![enter image description here][4]
Below are the results obtained by applying our trained net to one input:
![enter image description here][5]
The interesting aspect of the network for me was that, despite my gastruloids being highly dynamic (changing shape over time), I never had to state this explicitly to the network. All the necessary features were learned from the limited number of images I trained the network with. This is the beauty of neural networks.
![enter image description here][6]
Finally, the output of the net applied to a number of unseen images:
![enter image description here][7]
Note: I have a python MXNET version of UNET @ [python mxnet GITHUB][8]
The Wolfram version of UNET, however, seems to outperform the Python version, even though both use MXNet at the back end to implement the neural networks. This should not come as a surprise: my guess is that the people at Wolfram Research have made internal optimizations on top of the library.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=gastruloid.png&userId=942204
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=u-net-architecture-initial-authors-implementation.png&userId=942204
[3]: https://github.com/alihashmiii/UNet-Segmentation-Wolfram
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img1.png&userId=942204
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img2.png&userId=942204
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img3.png&userId=942204
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=img4.png&userId=942204
[8]: https://github.com/alihashmiii/blobsegmentationAli Hashmi2018-05-17T15:06:45ZReflect a plot on the other side of the axe y?
http://community.wolfram.com/groups/-/m/t/1341497
Hi guys!
How could I reflect the plot to the other side of the y-axis (like in the image) so I can work with two wheels?
Is there any method of Mathematica or...?
Quadrilatero[q_, xA_, yA_, Lc_, L2_, L3_, modo_, xD_, yD_] :=
Module[
{xB, yB, L5, \[Theta]5, c, \[Alpha], \[Theta]2, xC, yC, \[Theta]3},
xB = xA - Lc;
yB = yA;
L5 = Sqrt[(xD - xB)^2 + (yD - yB)^2];
\[Theta]5 = ArcTan[xD - xB, yD - yB];
c = (L5^2 + L2^2 - L3^2)/(2 L5 L2);
If[modo > 0, \[Alpha] = ArcCos[c], \[Alpha] = -ArcCos[c]];
\[Theta]2 = \[Theta]5 + \[Alpha];
xC = xB + L2 Cos[\[Theta]2];
yC = yB + L2 Sin[\[Theta]2];
\[Theta]3 = 0;
{\[Theta]3, \[Theta]2, {{xA, yA}, {xB, yB}, {xC, yC}, {xD, yD}}}
]
Manipulate[
Module [{sol, coordinate, qq, xD, xA, yD, yA, modo, q},
q = \[Pi];
xD = -0.75;
yD = 0;
xA = s;
yA = 0.30;
modo = 1;
sol = Quadrilatero[q, xA, yA, Lc, L2, L3, modo, xD, yD];
coordinate = sol[[3]];
Show[
(* Plot Traiettoria Nera *)
ParametricPlot[
Quadrilatero[qq, xA, yA, Lc, L2, L3, modo, xD, yD][[3]][[2]],
{qq, 0, q + 0.00001},
PlotRange -> {{-1, 1}, {-.2, .4}},
AspectRatio -> .5,
PlotStyle -> {Black, Dashed}],
(* Plot Traiettoria Blu *)
ParametricPlot[
Quadrilatero[qq, xA, yA, Lc, L2, L3, modo, xD, yD][[3]][[3]],
{qq, 0, q + 0.00001},
PlotRange -> {{-1, .2}, {-.2, .4}},
AspectRatio -> .5,
PlotStyle -> {Blue, Dashed}],
(* Plot Aste *)
ListLinePlot[coordinate,
PlotRange -> {{-1, .2}, {-.2, .4}},
AspectRatio -> .5,
PlotStyle -> Thick],
(* Plot Ruota *)
ListLinePlot[{coordinate[[3]], coordinate[[4]]},
PlotRange -> {{-1, .2}, {-.2, .4}},
AspectRatio -> .5,
PlotStyle -> {Thickness[0.1], Opacity[0.08], Red}
],
Graphics[
{
(*LightBlue,Opacity[0.2],Rectangle[{-0.75,-0.2},{0.75,0.4}],*)
Gray, Thick, Disk[{xA, yA}, .02], Disk[sol[[3]][[4]], .02],
Orange, Thick, Disk[sol[[3]][[2]], .02], Disk[sol[[3]][[3]], .02]
}
]
]
],
{{s, -0.035}, 0.04, -0.075, Appearance -> "Open"},
{{Lc, .1}, 0.04, 0.14, Appearance -> "Open"},
{{L2, 0.7}, 0, 2, Appearance -> "Open"},
{{L3, .1}, 0, .25, Appearance -> "Open"}
]
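As for the reflection itself: mirroring across the y-axis just negates every x-coordinate (in the Wolfram Language one could map `{-1, 1} # &` over the coordinates, or wrap the graphics in `GeometricTransformation` with `ReflectionTransform`). A minimal Python sketch of the coordinate operation, with hypothetical linkage points:

```python
def reflect_y_axis(points):
    """Mirror 2D points across the y-axis by negating each x-coordinate."""
    return [(-x, y) for x, y in points]

# Hypothetical joint coordinates of one wheel's linkage
linkage = [(0.5, 0.3), (-0.75, 0.0), (0.1, 0.2)]
mirrored = reflect_y_axis(linkage)  # the second wheel's linkage
```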
I also attached the .nb file, but I don't know why I get a lot of errors when I open the file, even though I haven't run "Evaluate Notebook" yet. Why?Lorenzo Cristofori2018-05-18T11:09:06ZSolving the Douglas-Plateau Problem Numerically
http://community.wolfram.com/groups/-/m/t/1341653
Cross-posted from [Mathematica.StackExchange](https://mathematica.stackexchange.com/a/158356/38178).
Douglas-Plateau Problem
---
Given a compact, two-dimensional smooth manifold $\varSigma$ with boundary $\partial\varSigma$ and an embedding $\gamma \colon \partial\varSigma \to \mathbb{R}^3$, find an immersion $f \colon \varSigma \to \mathbb{R}^3$ of minimal area $\mathcal{A}(f) \,\colon = \int_{f(\varSigma)} \operatorname{d} \mathcal{H}^2$ among those immersions satisfying $f|_{\partial \varSigma} = \gamma$.
The instance of this problem in which $\varSigma$ is the closed disk is called Plateau's problem. In the 1930s, [Radó](http://www.jstor.org/stable/1968237) and [Douglas](https://www.ams.org/tran/1931-033-01/S0002-9947-1931-1501590-9/S0002-9947-1931-1501590-9.pdf) showed independently that there is always at least one solution of Plateau's problem. (This is not true for manifolds $\varSigma$ in different topological classes.)
If $f$ is a local minimizer of the Douglas-Plateau problem that happens to be also an embedding, then $f(\varSigma)$ describes the shape of a soap film at rest that is spanned into the boundary curve $\gamma(\partial \varSigma)$.
There are several ways to treat this problem numerically, but the simplest method might be to discretize the boundary curve $\gamma(\partial \varSigma)$ by an inscribed polygonal line and a candidate surface $f(\varSigma)$ by an immersed triangle mesh. Then the surface area is merely a function of the coordinates of the (interior) vertices of the immersed mesh, so that one can apply numerical optimization methods in the search for minimizers. By the way, that is [precisely what Douglas did](https://www.jstor.org/stable/pdf/1967991) before he moved on to solve Plateau's problem theoretically. (The technique that Douglas used in his proof was also exploited by [Dziuk and Hutchinson](https://www.jstor.org/stable/2585097) to derive a numerical method for solving Plateau's problem.)
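Concretely, after discretization the area functional becomes a sum over the triangles of the immersed mesh: with vertex positions $p_0, p_1, p_2$ of a triangle $T$,

$$\mathcal{A} = \sum_{T} \tfrac{1}{2} \left\lVert (p_1 - p_0) \times (p_2 - p_0) \right\rVert,$$

and it is this function of the interior vertex coordinates that the optimization below actually minimizes.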
Background on the Algorithm
---
Here is a method that utilizes $H^1$-gradient flows. This is far quicker than the $L^2$-gradient flow (a.k.a. _[mean curvature flow](https://mathematica.stackexchange.com/a/172603/38178)_) or using `FindMinimum` and friends, in particular when dealing with finely discretized surfaces. The algorithm was originally developed by [Pinkall and Polthier](https://projecteuclid.org/download/pdf_1/euclid.em/1062620735).
For those who are interested: A major reason is the [Courant–Friedrichs–Lewy condition](https://en.wikipedia.org/wiki/Courant-Friedrichs-Lewy_condition), which forces the time step size in explicit integration schemes for parabolic PDEs to be proportional to the maximal cell diameter of the mesh. This leads to the need for _many_ time iterations for fine meshes. Another problem is that the Hessian of the surface area with respect to the surface positions is highly ill-conditioned (both in the continuous and in the discretized setting).
In order to compute $H^1$-gradients, we need the Laplace-Beltrami operator of an immersed surface $\varSigma$, or rather its associated bilinear form
$$ a_\varSigma(u,v) = \int_\varSigma \langle\operatorname{d} u, \operatorname{d} v \rangle \, \operatorname{vol}, \quad u,\,v\in H^1(\varSigma;\mathbb{R}^3).$$
The $H^1$-gradient $\operatorname{grad}^{H^1}_\varSigma(F) \in H^1_0(\varSigma;\mathbb{R}^3)$ of the area functional $F(\varSigma)$ solves the following Poisson problem
$$a_\varSigma(\operatorname{grad}^{H^1}_\varSigma(F),v) = DF(\varSigma) \, v \quad \text{for all $v\in H^1_0(\varSigma;\mathbb{R}^3)$}.$$
When the gradient at the surface configuration $\varSigma$ is known, we simply translate $\varSigma$ by $- \delta t \, \operatorname{grad}^{H^1}_\varSigma(F)$ with some step size $\delta t>0$.
Surprisingly, the differential $DF(\varSigma)$ is given by
$$ DF(\varSigma) \, v = \int_\varSigma \langle\operatorname{d} \operatorname{id}_\varSigma, \operatorname{d} v \rangle \, \operatorname{vol},$$
so, we can also use the discretized Laplace-Beltrami to compute it.
Implementation
---
Unfortunately, Mathematica's FEM tools cannot deal with finite elements on surfaces, yet. Therefore, I provide some code to assemble the Laplace-Beltrami operator of a triangular mesh.
getLaplacian = Quiet[Block[{xx, x, PP, P, UU, U, VV, V, f, Df, u, Du, v, Dv, g, integrant, quadraturepoints, quadratureweights},
xx = Table[x[[i]], {i, 1, 2}];
PP = Table[P[[i, j]], {i, 1, 3}, {j, 1, 3}];
UU = Table[U[[i]], {i, 1, 3}];
VV = Table[V[[i]], {i, 1, 3}];
(*local affine parameterization of the surface with respect to the "standard triangle"*)
f = x \[Function] PP[[1]] + x[[1]] (PP[[2]] - PP[[1]]) + x[[2]] (PP[[3]] - PP[[1]]);
Df = x \[Function] Evaluate[D[f[xx], {xx}]];
(*the Riemannian pullback metric with respect to f*)
g = x \[Function] Evaluate[Df[xx]\[Transpose].Df[xx]];
(*two affine functions u and v and their derivatives*)
u = x \[Function] UU[[1]] + x[[1]] (UU[[2]] - UU[[1]]) + x[[2]] (UU[[3]] - UU[[1]]);
Du = x \[Function] Evaluate[D[u[xx], {xx}]];
v = x \[Function] VV[[1]] + x[[1]] (VV[[2]] - VV[[1]]) + x[[2]] (VV[[3]] - VV[[1]]);
Dv = x \[Function] Evaluate[D[v[xx], {xx}]];
integrant = x \[Function] Evaluate[D[D[
Dv[xx].Inverse[g[xx]].Du[xx] Sqrt[Abs[Det[g[xx]]]],
{UU}, {VV}]]];
(*since the integrant is constant over each triangle, we use a one-
point Gauss quadrature rule (for the standard triangle)*)
quadraturepoints = {{1/3, 1/3}};
quadratureweights = {1/2};
With[{
code = N[quadratureweights.Map[integrant, quadraturepoints]] /. Part -> Compile`GetElement
},
Compile[{{P, _Real, 2}}, code,
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True]
]
]
];
getLaplacianCombinatorics = Quiet[Module[{ff},
With[{
code = Flatten[Table[Table[{ff[[i]], ff[[j]]}, {i, 1, 3}], {j, 1, 3}], 1] /. Part -> Compile`GetElement
},
Compile[{{ff, _Integer, 1}},
code,
CompilationTarget -> "C",
RuntimeAttributes -> {Listable},
Parallelization -> True
]
]]];
LaplaceBeltrami[pts_, flist_, pat_] := With[{
spopt = SystemOptions["SparseArrayOptions"],
vals = Flatten[getLaplacian[Partition[pts[[flist]], 3]]]
},
Internal`WithLocalSettings[
SetSystemOptions["SparseArrayOptions" -> {"TreatRepeatedEntries" -> Total}],
SparseArray[Rule[pat, vals], {Length[pts], Length[pts]}, 0.],
SetSystemOptions[spopt]]
];
Now we can minimize. We utilize the fact that the differential of the area with respect to the vertex positions `pts` equals `LaplaceBeltrami[pts, flist, pat].pts`. I use the constant step size `dt = 1`; this works surprisingly well. Of course, one may add a line search method of one's choice.
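In matrix form (with $A$ the assembled Laplace-Beltrami matrix and the subscript $\mathrm{int}$ denoting restriction to interior vertices), one descent step of `areaGradientDescent` below reads

$$p_{\mathrm{int}} \leftarrow p_{\mathrm{int}} - \delta t \, A_{\mathrm{int},\mathrm{int}}^{-1} \, (A\,p)_{\mathrm{int}},$$

which is exactly the line `pts[[intvertices]] -= stepsize solver[(A.pts)[[intvertices]]]`.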
areaGradientDescent[R_MeshRegion, stepsize_: 1., steps_: 10,
reassemble_: False] :=
Module[{method, faces, bndedges, bndvertices, pts, intvertices, pat,
flist, A, S, solver}, Print["Initial area = ", Area[R]];
method = If[reassemble, "Pardiso", "Multifrontal"];
pts = MeshCoordinates[R];
faces = MeshCells[R, 2, "Multicells" -> True][[1, 1]];
bndedges = Developer`ToPackedArray[Region`InternalBoundaryEdges[R][[All, 1]]];
bndvertices = Union @@ bndedges;
intvertices = Complement[Range[Length[pts]], bndvertices];
pat = Flatten[getLaplacianCombinatorics[faces], 1];
flist = Flatten[faces];
Do[A = LaplaceBeltrami[pts, flist, pat];
If[reassemble || i == 1,
solver = LinearSolve[A[[intvertices, intvertices]], Method -> method]];
pts[[intvertices]] -= stepsize solver[(A.pts)[[intvertices]]];, {i, 1, steps}];
S = MeshRegion[pts, MeshCells[R, 2], PlotTheme -> "LargeMesh"];
Print["Final area = ", Area[S]];
S
];
Example 1
---
We have to create some geometry. Any `MeshRegion` with triangular faces and nonempty boundary would do (although it is not guaranteed that an area minimizer exists).
h = 0.9;
R = DiscretizeRegion[
ImplicitRegion[{x^2 + y^2 + z^2 == 1}, {{x, -h, h}, {y, -h, h}, {z, -h, h}}],
MaxCellMeasure -> 0.00001,
PlotTheme -> "LargeMesh"
]
[![enter image description here][1]][1]
And this is all we have to do for minimization:
areaGradientDescent[R, 1., 20., False]
> Initial area = 8.79696
> Final area = 7.59329
[![enter image description here][2]][2]
Example 2
---
Since creating interesting boundary data along with suitable initial surfaces is a bit involved and since <s>I cannot upload `MeshRegions` here</s> it is just more fun, I decided to compress the initial surface for this example into these two images:
[![enter image description here][3]][3]
[![enter image description here][4]][4]
The surface can now be obtained with
R = MeshRegion[
Transpose[ImageData[Import["https://i.stack.imgur.com/aaJPM.png"]]],
Polygon[Round[#/Min[#]] &@ Transpose[ ImageData[Import["https://i.stack.imgur.com/WfjOL.png"]]]]
]
[![enter image description here][5]][5]
With the function `LoopSubdivide` [from this post](http://community.wolfram.com/groups/-/m/t/1338790), we can successively refine and minimize with
SList = NestList[areaGradientDescent@*LoopSubdivide, R, 4]
[![enter image description here][6]][6]
Here is the final minimizer in more detail:
[![enter image description here][7]][7]
Final Remarks
---
If huge deformations are expected during the gradient flow, it helps a lot to set `reassemble = True`. This always uses the Laplacian of the current surface for the gradient computation. However, it is considerably slower, since the Laplacian has to be refactorized in order to solve the linear equations for the gradient. Using `"Pardiso"` as `Method` helps a bit.
Of course, the best we can hope to obtain this way is a _local_ minimizer.
[1]: https://i.stack.imgur.com/KByfZ.png
[2]: https://i.stack.imgur.com/H7GCH.png
[3]: https://i.stack.imgur.com/aaJPM.png
[4]: https://i.stack.imgur.com/WfjOL.png
[5]: https://i.stack.imgur.com/Aabqj.png
[6]: https://i.stack.imgur.com/vZnFl.png
[7]: https://i.stack.imgur.com/UTjfT.pngHenrik Schumacher2018-05-18T18:30:21ZGet and plot Financial Data for NASDAQ - Price Works but Volume Does Not?
http://community.wolfram.com/groups/-/m/t/1320190
Hi,
I'm trying to plot NASDAQ price and volume using the following commands. Interestingly, the "Volume" property works for individual securities but does not work for market indices like the NASDAQ. Has anyone else observed this issue?
DateListPlot[FinancialData["NASDAQ",{{2018,1},{2018,4}}],PlotLabel ->"NASDAQ" ] --> this works
DateListPlot[FinancialData["NASDAQ","Volume",{{2018,1},{2018,3}}],PlotLabel ->"NASDAQ Volume" ] --> this does not work
ThanksRobert Stephens2018-04-14T22:27:37ZHow to realize the function Nest[] with two replaced variables?
http://community.wolfram.com/groups/-/m/t/1340863
How to realize Nest[{a,b,#1,#2} &, ]?
For example, #1 should be replaced by x and #2 by y, simultaneously or even respectively?Math Logic2018-05-17T06:01:04ZTry to beat these MRB constant records!
http://community.wolfram.com/groups/-/m/t/366628
POSTED BY: Marvin Ray Burns .
**MKB constant calculations have been moved to their own discussion at**
[Calculating the digits of the MKB constant][1] .
I think this important point got buried near the end.
When it comes to my passion, and that of a few other educated people, for calculating many digits – and the dislike possessed by a few other educated people – it all tells us that the human mind is multifaceted, giving passion to person A for one task and to person B for another!
The MRB constant is defined below. See http://mathworld.wolfram.com/MRBConstant.html
> ![enter image description here][2]
Here are some record computations. If you know of any others, let me know.
1. On or about Dec 31, 1998, I computed 1 digit of the (additive inverse of the) MRB constant with my TI-92, by adding 1-sqrt(2)+3^(1/3)-4^(1/4) as far as I could. That first digit, by the way, is just 0.
2. On Jan 11, 1999, I computed 3 digits of the MRB constant with the Inverse Symbolic Calculator.
3. In Jan of 1999 I computed 4 correct digits of the MRB constant using Mathcad 3.1 on a 50 MHz 80486 IBM personal computer running Windows 95.
4. Shortly afterwards I computed 9 correct digits of the MRB constant using Mathcad 7 Professional on the Pentium II mentioned below.
5. On Jan 23, 1999, I computed 500 digits of the MRB constant with the online tool called Sigma.
6. In September of 1999, I computed the first 5,000 digits of the MRB constant on a 350 MHz Pentium II with 64 MB of RAM, using the simple PARI commands \p 5000;sumalt(n=1,((-1)^n*(n^(1/n)-1))), after allocating enough memory.
7. On June 10-11, 2003, over a period of 10 hours, on a 450 MHz P3 with an available 512 MB of RAM, I computed 6,995 accurate digits of the MRB constant.
8. Using a Sony Vaio P4 2.66 GHz laptop computer with 960 MB of available RAM, at 2:04 PM on 3/25/2004, I finished computing 8,000 digits of the MRB constant.
9. On March 01, 2006, with a 3 GHz PD with 2 GB of RAM available, I computed the first 11,000 digits of the MRB constant.
10. On Nov 24, 2006, I computed 40,000 digits of the MRB constant in 33 hours and 26 min via my own program written in Mathematica 5.2. The computation was run on a 32-bit Windows 3 GHz PD desktop computer using 3.25 GB of RAM.
11. Finishing on July 29, 2007 at 11:57 PM EST, I computed 60,000 digits of the MRB constant, computed in 50.51 hours on a 2.6 GHz AMD Athlon with 64-bit Windows XP. Max memory used was 4.0 GB of RAM.
12. Finishing on Aug 3, 2007 at 12:40 AM EST, I computed 65,000 digits of the MRB constant, computed in only 50.50 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 5.0 GB of RAM.
13. Finishing on Aug 12, 2007 at 8:00 PM EST, I computed 100,000 digits of the MRB constant. They were computed in 170 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 11.3 GB of RAM. The median (typical) daily record of memory used was 8.5 GB of RAM.
14. Finishing on Sep 23, 2007 at 11:00 AM EST, I computed 150,000 digits of the MRB constant. They were computed in 330 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 22 GB of RAM. The median (typical) daily record of memory used was 17 GB of RAM.
15. Finishing on March 16, 2008 at 3:00 PM EST, I computed 200,000 digits of the MRB constant using Mathematica 5.2. They were computed in 845 hours on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 47 GB of RAM. The median (typical) daily record of memory used was 28 GB of RAM.
16. Washed away by Hurricane Ike: on September 13, 2008, sometime between 2:00 PM and 8:00 PM EST, an almost complete computation of 300,000 digits of the MRB constant was destroyed. It had computed for a long 4015 hours (23.899 weeks, or 1.4454*10^7 seconds) on a 2.66 GHz Core2Duo using 64-bit Windows XP. Max memory used was 91 GB of RAM. The Mathematica 6.0 code used follows:
Block[{$MaxExtraPrecision = 300000 + 8, a, b = -1, c = -1 - d,
d = (3 + Sqrt[8])^n, n = 131 Ceiling[300000/100], s = 0}, a[0] = 1;
d = (d + 1/d)/2; For[m = 1, m < n, a[m] = (1 + m)^(1/(1 + m)); m++];
For[k = 0, k < n, c = b - c;
b = b (k + n) (k - n)/((k + 1/2) (k + 1)); s = s + c*a[k]; k++];
N[1/2 - s/d, 300000]]
17. On September 18, 2008, a computation of 225,000 digits of the MRB constant was started with a 2.66 GHz Core2Duo using 64-bit Windows XP. It was completed in 1072 hours. Memory usage is recorded in the attachment pt 225000.xls, near the bottom of this post.
18. 250,000 digits was attempted but failed to complete due to a serious internal error which restarted the machine. The error occurred sometime on December 24, 2008 between 9:00 AM and 9:00 PM. The computation began on November 16, 2008 at 10:03 PM EST. Like the 300,000-digit computation, this one was almost complete when it failed. Max memory used was 60.5 GB.
19. On Jan 29, 2009, at 1:26:19 pm (UTC-0500) EST, I finished computing 250,000 digits of the MRB constant with a multiple-step Mathematica command running on a dedicated 64-bit XP machine with 4 GB of DDR2 RAM on board and 36 GB virtual. The computation took only 333.102 hours. The digits are at http://marvinrayburns.com/250KMRB.txt . The computation is completely documented in the attached 250000.pd at the bottom of this post.
20. On Sun 28 Mar 2010, 21:44:50 (UTC-0500) EST, I started a computation of 300,000 digits of the MRB constant using an i7 with 8.0 GB of DDR3 RAM on board, but it failed due to hardware problems.
21. I computed 299,998 digits of the MRB constant. The computation began Fri 13 Aug 2010 10:16:20 pm EDT and ended 2.23199*10^6 seconds later, on Wednesday, September 8, 2010. I used Mathematica 6.0 for Microsoft Windows (64-bit) (June 19, 2007). That is an average of 7.44 seconds per digit. I used my Dell Studio XPS 8100 i7 860 @ 2.80 GHz with 8 GB of physical DDR3 RAM. Windows 7 reserved an additional 48.929 GB of virtual RAM.
22. I computed exactly 300,000 digits to the right of the decimal point of the MRB constant from Sat 8 Oct 2011 23:50:40 to Sat 5 Nov 2011 19:53:42 (2.405*10^6 seconds later). This run was 0.5766 seconds per digit slower than the 299,998-digit computation, even though it used 16 GB of physical DDR3 RAM on the same machine. The working precision and accuracy goal combination were maximized for exactly 300,000 digits, and the result was automatically saved as a file instead of just being displayed on the front end. Windows reserved a total of 63 GB of working memory, of which 52 GB were recorded as being used. The 300,000 digits came from the Mathematica 7.0 command
Quit; DateString[]
digits = 300000; str = OpenWrite[]; SetOptions[str,
PageWidth -> 1000]; time = SessionTime[]; Write[str,
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> digits + 3, AccuracyGoal -> digits,
Method -> "AlternatingSigns"]]; timeused =
SessionTime[] - time; here = Close[str]
DateString[]
23. 314,159 digits of the constant took 3 tries due to hardware failure. Finishing on September 18, 2012, I computed 314,159 digits, taking 59 GB of RAM. The digits came from the Mathematica 8.0.4 code
DateString[]
NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> 314169, Method -> "AlternatingSigns"] // Timing
DateString[]
Where I have 10 digits to round off. (The command NSum[(-1)^n*(n^(1/n) - 1), {n, \[Infinity]},
WorkingPrecision -> big number, Method -> "AlternatingSigns"] tends to give about 3 digits of error to the right.)
**The following records are due to the work of Richard Crandall, found [here][3].**
24. Sam Noble of Apple computed 1,000,000 digits of the MRB constant in 18 days 9 hours 11 minutes 34.253417 seconds.
25. Finishing on Dec 11, 2012, Richard Crandall, an Apple scientist, computed 1,048,576 digits in a lightning-fast 76.4 hours. That was on a 2.93 GHz 8-core Nehalem.
26. I computed a little over 1,200,000 digits of the MRB constant in 11 days, 21 hours, 17 minutes, and 41 seconds (finishing on March 31, 2013). I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
27. On May 17, 2013, I finished a computation of 2,000,000 or more digits of the MRB constant, using only around 10 GB of RAM. It took 37 days 5 hours 6 minutes 47.1870579 seconds. I used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz.
28. Finally, I would like to announce a new unofficial world-record computation of the MRB constant, finished on Sun 21 Sep 2014 18:35:06. It took 1 month 27 days 2 hours 45 minutes 15 seconds. I computed 3,014,991 digits of the MRB constant with Mathematica 10.0. I used my new version of Richard Crandall's code, below, optimized for my platform and large computations. I also used a six-core Intel(R) Core(TM) i7-3930K CPU @ 3.20 GHz with 64 GB of RAM, of which only 16 GB was used. Can you beat it (in number of digits, memory used, or time taken)? This confirms that my previous "2,000,000 or more digit computation" was actually accurate to 2,009,993 digits. (They were used as MRBtest2M.)
(*Fastest (at MRB's end) as of 25 Jul 2014.*)
DateString[]
prec = 3000000;
(*Number of required decimals.*)ClearSystemCache[];
T0 = SessionTime[];
expM[pre_] :=
Module[{a, d, s, k, bb, c, n, end, iprec, xvals, x, pc, cores = 12,
tsize = 2^7, chunksize, start = 1, ll, ctab,
pr = Floor[1.005 pre]}, chunksize = cores*tsize;
n = Floor[1.32 pr];
end = Ceiling[n/chunksize];
Print["Iterations required: ", n];
Print["end ", end];
Print[end*chunksize]; d = ChebyshevT[n, 3];
{b, c, s} = {SetPrecision[-1, 1.1*n], -d, 0};
iprec = Ceiling[pr/27];
Do[xvals = Flatten[ParallelTable[Table[ll = start + j*tsize + l;
x = N[E^(Log[ll]/(ll)), iprec];
pc = iprec;
While[pc < pr, pc = Min[3 pc, pr];
x = SetPrecision[x, pc];
y = x^ll - ll;
x = x (1 - 2 y/((ll + 1) y + 2 ll ll));];(**N[Exp[Log[ll]/ll], pr]**)x, {l, 0, tsize - 1}], {j, 0, cores - 1},
Method -> "EvaluationsPerKernel" -> 4]];
ctab = ParallelTable[Table[c = b - c;
ll = start + l - 2;
b *= 2 (ll + n) (ll - n)/((ll + 1) (2 ll + 1));
c, {l, chunksize}], Method -> "EvaluationsPerKernel" -> 2];
s += ctab.(xvals - 1);
start += chunksize;
Print["done iter ", k*chunksize, " ", SessionTime[] - T0];, {k, 0,
end - 1}];
N[-s/d, pr]];
t2 = Timing[MRBtest2 = expM[prec];]; DateString[]
Print[MRBtest2]
MRBtest2 - MRBtest2M
t2
From the computation, t2 was {1.961004112059*10^6, Null}.
Here are a couple of graphs of my record computations in max digits/year:
![enter image description here][4]![enter image description here][5]
[1]: http://community.wolfram.com/groups/-/m/t/1323951?p_p_auth=W3TxvEwH
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=68115.JPG&userId=366611
[3]: http://www.marvinrayburns.com/UniversalTOC25.pdf
[4]: /c/portal/getImageAttachment?filename=7559mrbrecord1.JPG&userId=366611
[5]: /c/portal/getImageAttachment?filename=mrbrecord3.JPG&userId=366611Marvin Ray Burns2014-10-09T18:08:49ZMathEd: Major update of 2 notebooks & packages on the place value system
http://community.wolfram.com/groups/-/m/t/1341254
These 2 notebooks with packages have major updates, at their original locations (otherwise some users might miss out on the update):
(1) Pronunciation of the integers with full use of the place value system
http://community.wolfram.com/web/community/groups/-/m/t/1334793
(2) Tables for addition and subtraction with better use of the place value system
http://community.wolfram.com/groups/-/m/t/1313408
PM. I noticed that attaching a notebook to a Posting might cause that this notebook is transformed and included into the html of the Posting. This happened in (2) above. Let me ask Staff not to do this. I do not want to sound ungrateful, when someone has done some effort to make this happen, but it is better not to do so. An attachment is not the same as a Posting. Their titles differ. The layouts differ. While I checked the notebook, I did not check the html transcription. I might not want a html layout when I made an interactive notebook. While I updated the notebook at the original location, I have no access to the html that was created there, and it gives the old text. It is okay to let bygones be bygones, but it would help to know for future submissions that attachments are such only, with perhaps another button that asks for conversion to html. It is unclear to me whether version management would be a more general issue for these Postings.Thomas Colignatus2018-05-17T20:50:31ZPlot regional data from by-country indicators?
http://community.wolfram.com/groups/-/m/t/1337822
Hi,
I have a csv containing columns for countries, indicator name (e.g. population, fertility), and several columns for values on different years, such as a column for 1960, another for 1961, etc.
How do I create a ListPlot for regional data (grouping countries), taking the indicator of each country and calculating a weighted average by population?
Below is the example.
Any help would be highly appreciated, as my table is huge and I have to create dozens of similar charts.
s = Import[
"C:\\Users\\Jesus Enrique\\Documents\\Wolfram \
Desktop\\Sample1.csv"];Enrique Vargas2018-05-13T01:44:32ZAvoid backticks on the right of matrix elements?
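For what it's worth, the population-weighted average itself is simple arithmetic; here is a minimal Python sketch with hypothetical fertility and population values, just to illustrate the computation the eventual Wolfram Language code needs to perform for each region and year:

```python
def weighted_average(values, weights):
    """Population-weighted mean: sum(w_i * v_i) / sum(w_i)."""
    total_w = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_w

# Hypothetical: fertility rates and populations of two countries in one region
fertility = [2.0, 3.0]
population = [100, 300]
regional = weighted_average(fertility, population)
```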
http://community.wolfram.com/groups/-/m/t/1337479
Hi, I have a problem with the output of an evaluated function.
I am doing a school project that involves matrices and the logistic sigmoid. I've created a function that takes four matrices as parameters and should give back a tensor from which I'm supposed to extract a matrix. The problem is this: when I evaluate the function, the result is a tensor where each element is an expression with a backtick (**this one: `**) on the right that doesn't allow any further calculation. To evaluate the expressions in the tensor, and get what I want, I have to copy the output into another input cell and remove all the backticks manually.
Here is the code:
(* Matrices *)
A = {{.2, .8}}
W = {{.3}, {.6}}
Q = {{.4, .6 , .2}}
T = {{.5, .1, .1}}
(* Functions *)
Output[a_, w_, q_] := LogisticSigmoid[LogisticSigmoid[a.w].q]
Error[a_, w_, q_, t_] := (t - Output[a, w, q])^2
Derror[a_, w_, q_, t_] = D[Error[a, w, q, t], q]
(* And until here it's all right *)
(* Here come the troubles *)
Derror[A, W, Q, T]
The output when executing Derror is:
{{0.0309271 {{0.631812}}.1,
0.238167 {{0.631812}}.1,
0.214915 {{0.631812}}.1}}
And if I paste that into an input cell (without executing it) I get:
{{0.030927085534456576` {{0.6318124177361016`}}.1,
0.23816694971195315` {{0.6318124177361016`}}.1,
0.21491527050353018` {{0.6318124177361016`}}.1}}
The expected result is:
{{{{0.00195401}}, {{0.0150477}}, {{0.0135786}}}}
But to get to that I have to input:
(* The output of Derror but with backticks removed *)
{{0.030927085534456576 {{0.6318124177361016}}.1, 0.23816694971195315 {{0.6318124177361016}}.1, 0.21491527050353018 {{0.6318124177361016}}.1}}
Why is this happening? I'm not very experienced with Wolfram Mathematica, so I really don't know what to do. I searched the web but found nothing. I tried converting the output to a string, replacing "`" with "", and then converting the string back to an expression, but it's useless. I look forward to finding some answer here, and sorry for my English (I'm not from an English-speaking country).Paolo Galfano2018-05-12T19:55:33ZUse Mathematica Excel-Link with a 64bit Windows Server 2016?
http://community.wolfram.com/groups/-/m/t/1332166
Hi,
I'm trying to replace VBA with Mathematica in Excel spreadsheets.
I understand that Excel-Link is the right tool, but reading the documentation on the [Wolfram website][1] it seems that the product is not compatible with 64-bit Excel.
> "Mathematica Link for Excel 3.7.1 requires Mathematica 8 or greater
> and Microsoft Excel 2010 or newer. It is available for Windows 7, 8,
> 8.1 or 10. Earlier software and operating systems may be supported with a few limitations. ***The ExcelLink package for Mathematica is
> compatible with both 32-bit and 64-bit versions of Mathematica and
> Excel. The Mathematica Link add-in for Excel is compatible only with
> the 32-bit version of Excel***. "
My platform is 64-bit Windows Server 2016 with 64-bit Excel. This means that I can use the ExcelLink package to drive 64-bit Excel from a Mathematica notebook, but I cannot create Mathematica macros because of the 32/64-bit incompatibility.
Can someone using Excel-Link help me better understand the situation?
Thank you so much for your collaboration
Massimo
[1]: https://www.wolfram.com/products/applications/excel_link/Massimo Salese2018-05-04T07:02:11ZPhase unwrapping
http://community.wolfram.com/groups/-/m/t/1340126
A common function to 'unwrap' a list of data on which a modulus operation has acted is still absent from the Wolfram Language. This quite commonly happens when you measure something in the lab, for example an angle that jumps back to '0' after every rotation. To solve this, I wrote my own function; hopefully it is helpful for you. Here it is:
ClearAll[Unwrap]
Unwrap[lst_List]:=Unwrap[lst,2Pi] (* phase jumps of 2Pi is the default because of trigonometric functions *)
Unwrap[lst_List,\[CapitalDelta]_]:=Unwrap[lst,\[CapitalDelta],Scaled[0.5]] (* default tolerance is half the phase jump \[CapitalDelta] *)
Unwrap[lst_List,\[CapitalDelta]_,tolerance_]:=Module[{tol,jumps},
tol=If[Head[tolerance]===Scaled,
\[CapitalDelta] tolerance[[1]]
,
tolerance
];
jumps=Differences[lst];
jumps=-Sign[jumps]Unitize[Chop[Abs[jumps],tol]];
jumps=\[CapitalDelta] Prepend[Accumulate[jumps],0];
jumps+lst
]
When a list is given, the default period is assumed to be 2Pi and the tolerance Pi, but one can specify any values one likes with the second and third arguments.
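As a minimal sanity check of the definitions above: wrapping a linear ramp with Mod and unwrapping it again should recover the original values up to floating-point error.

    line = Table[0.3 x, {x, 0., 60.}];
    wrapped = Mod[line, 2 Pi];
    Max[Abs[Unwrap[wrapped] - line]] (* a residual at machine-precision scale *)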
So let's create some data and plot it:
dat=Table[Sin[0.2x]+4Sin[0.05x],{x,0,200}];
ListPlot[dat,AspectRatio->1/4,ImageSize->600,PlotMarkers->Automatic]
![enter image description here][1]
Now, let's take the modulus of the data and plot it:
mod=Mod[dat,4,-2];
ListPlot[mod,AspectRatio->1/4,ImageSize->600,PlotMarkers->Automatic]
![enter image description here][2]
Now we indeed have many sharp jumps, but with the above function we can undo this:
unmod=Unwrap[mod,4];
ListPlot[unmod,AspectRatio->1/4,ImageSize->600,PlotMarkers->Automatic]
![enter image description here][3]
So we return now to our original data; great!
With some tricks we can also do it with 2D data; here I create some data, plot it, mod it (what a mess!), plot it, unmod it, and plot it:
dat=Table[Sin[0.2x]+4Sin[0.05x+0.05y]+Sin[0.1y],{x,0,200},{y,0,200}];
ListPlot3D[dat]
mod=Mod[dat,3];
ListPlot3D[mod]
unmod=Unwrap[#,3]&/@mod; (* unwrap each row *)
tmp=Unwrap[#,3]-#&[unmod[[All,1]]]; (* per-row offsets that unwrap the first column *)
unmod=unmod+tmp; (* shift each row by its offset *)
ListPlot3D[unmod]
![enter image description here][4]
Hope you enjoy it and find it useful!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-05-15at18.56.11.png&userId=73716
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-05-15at18.56.15.png&userId=73716
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-05-15at18.56.18.png&userId=73716
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-05-15at18.59.33.png&userId=73716Sander Huisman2018-05-15T17:04:40ZA monad for classification workflows
http://community.wolfram.com/groups/-/m/t/1340015
# Introduction
In this document we describe the design and implementation of a (software programming) monad for the specification and execution of classification workflows. The design and implementation are done with Mathematica / Wolfram Language (WL).
The goal of the monad design is to make the specification of classification workflows (relatively) easy and straightforward, by following a certain main scenario and specifying variations over that scenario.
The monad is named ClCon and it is based on the State monad package ["StateMonadCodeGenerator.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m), [[AAp1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m), [AA1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Monad-code-generation-and-extension.md)], the classifier ensembles package ["ClassifierEnsembles.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m), [[AAp4](https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m), [AA2](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/ROC-for-Classifier-Ensembles-Bootstrapping-Damaging-and-Interpolation.md)], and the package for [Receiver Operating Characteristic (ROC)](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) functions calculation and plotting ["ROCFunctions.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/ROCFunctions.m), [[AAp5](https://github.com/antononcube/MathematicaForPrediction/blob/master/ROCFunctions.m), [AA2](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/ROC-for-Classifier-Ensembles-Bootstrapping-Damaging-and-Interpolation.md), [Wk2](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)].
The data for this document is read from WL's repository using the package ["GetMachineLearningDataset.m"](https://github.com/antononcube/MathematicaVsR/blob/master/Projects/ProgressiveMachineLearning/Mathematica/GetMachineLearningDataset.m), [[AAp10](https://github.com/antononcube/MathematicaVsR/blob/master/Projects/ProgressiveMachineLearning/Mathematica/GetMachineLearningDataset.m)].
The monadic programming design is used as a [Software Design Pattern](https://en.wikipedia.org/wiki/Software_design_pattern). The `ClCon` monad can be also seen as a [Domain Specific Language](https://en.wikipedia.org/wiki/Domain-specific_language) (DSL) for the specification and programming of machine learning classification workflows.
Here is an example of using the `ClCon` monad over the Titanic data:
![ClCon-simple-dsTitanic-pipeline](https://imgur.com/zwjBynL.png)
The table above is produced with the package ["MonadicTracing.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m), [[AAp2](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m), [AA1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Monad-code-generation-and-extension.md)], and some of the explanations below also utilize that package.
As mentioned above, the monad `ClCon` can be seen as a DSL. Because of this, the monad pipelines made with `ClCon` are sometimes called "specifications".
## Contents description
The document has the following structure.
- The sections "Package load" and "Data load" obtain the needed code and data.
(Needed and put upfront from the
["Reproducible research"](https://en.wikipedia.org/wiki/Reproducibility#Reproducible_research)
point of view.)
- The sections "Design consideration" and "Monad design" provide motivation and design decisions rationale.
- The sections "ClCon overview" and "Monad elements" provide technical description of the `ClCon` monad
needed to utilize it.
(Using a fair amount of examples.)
- The section "Example use cases" gives several more elaborated examples of `ClCon` that have "real life" flavor.
(But still didactic and concise enough.)
- The section "Unit test" describes the tests used in the development of the `ClCon` monad.
(The random pipelines unit tests are especially interesting.)
- The section "Future plans" outlines future directions of development.
(The most interesting and important one is the
["conversational agent"](https://github.com/antononcube/ConversationalAgents/tree/master/Projects/ClassficationWorkflowsAgent)
direction.)
- The section "Implementation notes" has (i) a diagram outlining the `ClCon` development process,
and (ii) a list of observations and morals.
(Some fairly obvious, but deemed fairly significant and hence stated explicitly.)
**Remark:** One can read only the sections "Introduction", "Design consideration", "Monad design", and "ClCon overview".
That set of sections provides a fairly good, programming-language-agnostic exposition of the substance and the novel ideas of this document.
# Package load
The following commands load the packages [[AAp1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m)--AAp10, AAp12]:
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicContextualClassification.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/MonadicProgramming/MonadicTracing.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaVsR/master/Projects/ProgressiveMachineLearning/Mathematica/GetMachineLearningDataset.m"]
Import["https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m"]
(*
Importing from GitHub: MathematicaForPredictionUtilities.m
Importing from GitHub: MosaicPlot.m
Importing from GitHub: CrossTabulate.m
Importing from GitHub: StateMonadCodeGenerator.m
Importing from GitHub: ClassifierEnsembles.m
Importing from GitHub: ROCFunctions.m
Importing from GitHub: VariableImportanceByClassifiers.m
Importing from GitHub: SSparseMatrix.m
Importing from GitHub: OutlierIdentifiers.m
*)
# Data load
In this section we load data that is used in the rest of the document. The "quick" data is created in order to specify quick, illustrative computations.
**Remark:** In all datasets the classification labels are in the last column.
The summarization of the data is done through `ClCon`, which in turn uses the function `RecordsSummary` from the package ["MathematicaForPredictionUtilities.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m), [[AAp7](https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m)].
## WL resources data
The following commands produce datasets using the package [[AAp10](https://github.com/antononcube/MathematicaVsR/blob/master/Projects/ProgressiveMachineLearning/Mathematica/GetMachineLearningDataset.m)] (that utilizes `ExampleData`):
dsTitanic = GetMachineLearningDataset["Titanic"];
dsMushroom = GetMachineLearningDataset["Mushroom"];
dsWineQuality = GetMachineLearningDataset["WineQuality"];
Here are the dimensions of the datasets:
Dataset[Dataset[Map[Prepend[Dimensions[ToExpression[#]], #] &, {"dsTitanic", "dsMushroom", "dsWineQuality"}]][All, AssociationThread[{"name", "rows", "columns"}, #] &]]
![ClCon-datasets-dimensions](https://imgur.com/vevAWAh.png)
Here is the summary of `dsTitanic`:
ClConUnit[dsTitanic]⟹ClConSummarizeData["MaxTallies" -> 12];
![ClCon-dsTitanic-summary](https://imgur.com/mr6q8M9.png)
Here is the summary of `dsMushroom` in long form:
ClConUnit[dsMushroom]⟹ClConSummarizeDataLongForm["MaxTallies" -> 12];
![ClCon-dsMushroom-summary](https://imgur.com/Lhwr3Ht.png)
Here is the summary of `dsWineQuality` in long form:
ClConUnit[dsWineQuality]⟹ClConSummarizeDataLongForm["MaxTallies" -> 12];
![ClCon-dsWineQuality-summary](https://imgur.com/FETQehj.png)
## "Quick" data
In this subsection we make up some data that is used for illustrative purposes.
SeedRandom[212]
dsData = RandomInteger[{0, 1000}, {100}];
dsData = Dataset[
Transpose[{dsData, Mod[dsData, 3], Last@*IntegerDigits /@ dsData, ToString[Mod[#, 3]] & /@ dsData}]];
dsData = Dataset[dsData[All, AssociationThread[{"number", "feature1", "feature2", "label"}, #] &]];
Dimensions[dsData]
(* {100, 4} *)
Here is a sample of the data:
RandomSample[dsData, 6]
![ClCon-quick-data-sample](https://imgur.com/dDhN9NG.png)
Here is a summary of the data:
ClConUnit[dsData]⟹ClConSummarizeData;
![ClCon-quick-data-summary-ds](https://imgur.com/e0hzJjE.png)
Here we convert the data into a list of record-label rules (and show the summary):
mlrData = ClConToNormalClassifierData[dsData];
ClConUnit[mlrData]⟹ClConSummarizeData;
![ClCon-quick-data-summary-mlr](https://imgur.com/8AZ4uPi.png)
Finally, we make the array version of the dataset:
arrData = Normal[dsData[All, Values]];
# Design considerations
The steps of the main classification workflow addressed in this document follow.
1. Retrieving data from a data repository.
2. Optionally, transform the data.
3. Split data into training and test parts.
1. Optionally, split training data into training and validation parts.
4. Make a classifier with the training data.
5. Test the classifier over the test data.
1. Computation of different measures including ROC.
The following diagram shows the steps.
[![Classification-workflow-horizontal-layout](https://imgur.com/OT5Qkqil.png)](https://github.com/antononcube/MathematicaForPrediction/raw/master/MarkdownDocuments/Diagrams/A-monad-for-classification-workflows/Classification-workflow-horizontal-layout.jpg)
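For comparison, the steps above can be sketched outside of any monad with plain `Classify` calls on synthetic data (a minimal illustration only; the data here is made up):

    SeedRandom[7];
    pts = RandomReal[1, {200, 2}];
    examples = # -> If[Total[#] > 1, "above", "below"] & /@ pts; (* 1-2: obtain (and transform) data *)
    {train, test} = TakeDrop[RandomSample[examples], 150];       (* 3: split into training and test parts *)
    cf = Classify[train];                                        (* 4: make a classifier *)
    ClassifierMeasurements[cf, test, "Accuracy"]                 (* 5: test the classifier *)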
Very often the workflow above is too simple in real situations. Often when making "real world" classifiers we have to experiment with different transformations, different classifier algorithms, and parameters for both transformations and classifiers. Examine the following mind-map that outlines the activities in making competition classifiers.
[![Making-competitions-classifiers-mind-map.png](https://imgur.com/RTvPsKk.png)](https://github.com/antononcube/MathematicaForPrediction/raw/master/MarkdownDocuments/Diagrams/A-monad-for-classification-workflows/Making-competitions-classifiers-mind-map.png)
In view of the mind-map above we can come up with the following flow-chart that is an elaboration on the main, simple workflow flow-chart.
[![Classification-workflow-extended.jpg](https://imgur.com/SB9eP1K.png)](https://github.com/antononcube/MathematicaForPrediction/raw/master/MarkdownDocuments/Diagrams/A-monad-for-classification-workflows/Classification-workflow-extended.jpg)
In order to address:
+ the introduction of new elements into classification workflows,
+ variability of workflow elements, and
+ iterative changes and refinement of workflows,
it is beneficial to have a DSL for classification workflows. We choose to make such a DSL through a [functional programming monad][1], \[[Wk1][2], [AA1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Monad-code-generation-and-extension.md)].
Here is a quote from \[[Wk1][3]\] that fairly well describes why we choose to make a classification workflow monad and hints on the desired properties of such a monad.
> [...] The monad represents computations with a sequential structure: a monad defines what it means to chain operations together. This enables the programmer to build pipelines that process data in a series of steps (i.e. a series of actions applied to the data), in which each action is decorated with the additional processing rules provided by the monad. [...]
> Monads allow a programming style where programs are written by putting together highly composable parts, combining in flexible ways the possible actions that can work on a particular type of data. [...]
**Remark:** Note that the quote from \[Wk1\] refers to chained monadic operations as "pipelines". We use the terms "monad pipeline" and "pipeline" below.
# Monad design
The monad we consider is designed to speed-up the programming of classification workflows outlined in the previous section. The monad is named **ClCon** for "**Cl**assification with **Con**text".
We want to be able to construct monad pipelines of the general form:
![ClCon-generic-monad-formula](https://imgur.com/oUlLxtm.png)
ClCon is based on the [State monad][4], \[Wk1, [AA1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Monad-code-generation-and-extension.md)\], so the monad pipeline form (1) has the following more specific form:
![ClCon-State-monad-formula](https://imgur.com/TLX1D6B.png)
This means that some monad operations will not just change the pipeline value but they will also change the pipeline context.
In the monad pipelines of `ClCon` we store different objects in the contexts for at least one of the following two reasons.
1. The object will be needed later on in the pipeline, or
2. The object is hard to compute.
Such objects are training data, ROC data, and classifiers.
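This threading of a {value, context} pair can be illustrated with a toy sketch (the names here are made up for illustration only; the actual `ClCon` code is generated with "StateMonadCodeGenerator.m", [AAp1]):

    ToyUnit[v_] := {v, <||>};         (* lift a value with an empty context *)
    ToyBind[{v_, c_}, f_] := f[v, c]; (* thread {value, context} through an operation *)
    ToySplit[v_, c_] := With[{n = Floor[0.7 Length[v]]},
       {Take[v, n], Append[c, "testData" -> Drop[v, n]]}];
    ToyBind[ToyUnit[Range[10]], ToySplit]
    (* {{1, 2, 3, 4, 5, 6, 7}, <|"testData" -> {8, 9, 10}|>} *)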
Let us list the desired properties of the monad.
+ Rapid specification of non-trivial classification workflows.
+ The monad works with different data types: Dataset, lists of machine learning rules, full arrays.
+ The pipeline values can be of different types. Most monad functions modify the pipeline value; some modify the context; some just echo results.
+ The monad works with single classifier objects and with classifier ensembles.
+ This means support of different classifier measures and ROC plots for both single classifiers and classifier ensembles.
+ The monad allows cursory examination and summarization of the data.
+ For insight and in order to verify assumptions.
+ The monad has operations to compute importance of variables.
+ We can easily obtain the pipeline value, context, and different context objects for manipulation outside of the monad.
+ We can calculate classification measures using a specified ROC parameter and a class label.
+ We can easily plot different combinations of ROC functions.
The `ClCon` components and their interaction are given in the following diagram. (The components correspond to the main workflow given in the previous section.)
[![ClCon-components-interaction.jpg](https://imgur.com/Iv6e1Byl.png)](https://github.com/antononcube/MathematicaForPrediction/raw/master/MarkdownDocuments/Diagrams/A-monad-for-classification-workflows/ClCon-components-interaction.jpg)
In the diagram above the operations are given in rectangles. Data objects are given in round corner rectangles and classifier objects are given in round corner squares.
The main ClCon operations implicitly put into the context, or retrieve from it, the following objects:
+ training data,
+ test data,
+ validation data,
+ classifier (a classifier function or an association of classifier functions),
+ ROC data,
+ variable names list.
Note that the set of types of `ClCon` pipeline values is fairly heterogeneous, and certain awareness of "the current pipeline value" is assumed when composing ClCon pipelines.
Obviously, we can put in the context any object through the generic operations of the State monad of the package ["StateMonadGenerator.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Monad-code-generation-and-extension.md), [[AAp1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Monad-code-generation-and-extension.md)].
# ClCon overview
When using a monad we lift certain data into the "monad space", use the monad's operations to navigate computations in that space, and at some point take results from it.
With the approach taken in this document the "lifting" into the `ClCon` monad is done with the function `ClConUnit`. Results from the monad can be obtained with the functions `ClConTakeValue`, `ClConTakeContext`, or with the other `ClCon` functions with the prefix "ClConTake" (see below).
Here is a corresponding diagram of a generic computation with the `ClCon` monad:
[![ClCon-pipeline](https://imgur.com/GtinWpu.png)](https://github.com/antononcube/MathematicaForPrediction/raw/master/MarkdownDocuments/Diagrams/A-monad-for-classification-workflows/ClCon-pipeline.jpg)
**Remark:** It is a good idea to compare the diagram with formulas (1) and (2).
Let us examine a concrete `ClCon` pipeline that corresponds to the diagram above. In the following table each pipeline operation is combined together with a short explanation and the context keys after its execution.
![ClCon-pipeline-TraceMonad-table](https://imgur.com/igxc6LC.png)
Here is the output of the pipeline:
![ClCon-pipeline-TraceMonad-Echo-output](https://imgur.com/pea8fPo.png)
In the specified pipeline computation the last column of the dataset is assumed to be the one with the class labels.
The ClCon functions are separated into four groups:
+ operations,
+ setters,
+ takers,
+ State Monad generic functions.
An overview of those functions is given in the tables in the next two sub-sections. The next section, "Monad elements", gives details and examples for the usage of the `ClCon` operations.
## Monad functions interaction with the pipeline value and context
The following table gives an overview of the interaction of the ClCon monad functions with the pipeline value and context.
![ClCon-table-of-operations-setters-takers](https://imgur.com/nLiccok.png)
Several functions that use ROC data have two rows in the table because they calculate the needed ROC data if it is not available in the monad context.
## State monad functions
Here are the `ClCon` State Monad functions (generated using the prefix "ClCon", [AAp1, AA1]):
![ClCon-StateMonad-functions-table](https://imgur.com/4v7CGFD.png)
# Monad elements
In this section we show that `ClCon` has all of the properties listed in the previous section.
## The monad head
The monad head is `ClCon`. Anything wrapped in `ClCon` can serve as monad's pipeline value. It is better though to use the constructor `ClConUnit`. (Which adheres to the definition in [Wk1].)
ClCon[{{1, "a"}, {2, "b"}}, <||>]⟹ClConSummarizeData;
![ClCon-monad-head-example](https://imgur.com/tCn9Ee1.png)
## Lifting data to the monad
The function lifting the data into the monad `ClCon` is `ClConUnit`.
The lifting to the monad marks the beginning of the monadic pipeline. It can be done with data or without data. Examples follow.
ClConUnit[dsData]⟹ClConSummarizeData;
![ClCon-lifting-data-example-1](https://imgur.com/HQoqo34.png)
ClConUnit[]⟹ClConSetTrainingData[dsData]⟹ClConSummarizeData;
![ClCon-lifting-data-example-2](https://imgur.com/IIo6Ctk.png)
(See the sub-section "Setters and takers" for more details of setting and taking values in `ClCon` contexts.)
Currently the monad can deal with data in the following forms:
+ datasets,
+ matrices,
+ lists of example->label rules.
The ClCon monad also has the non-monadic function `ClConToNormalClassifierData` which can be used to convert datasets and matrices to lists of example->label rules. Here is an example:
Short[ClConToNormalClassifierData[dsData], 3]
(*
{{639, 0, 9} -> "0", {121, 1, 1} -> "1", {309, 0, 9} -> "0", {648, 0, 8} -> "0", {995, 2, 5} -> "2", {127, 1, 7} -> "1", {908, 2, 8} -> "2", {564, 0, 4} -> "0", {380, 2, 0} -> "2", {860, 2, 0} -> "2",
<<80>>,
{464, 2, 4} -> "2", {449, 2, 9} -> "2", {522, 0, 2} -> "0", {288, 0, 8} -> "0", {51, 0, 1} -> "0", {108, 0, 8} -> "0", {76, 1, 6} -> "1", {706, 1, 6} -> "1", {765, 0, 5} -> "0", {195, 0, 5} -> "0"}
*)
When the data lifted to the monad is a dataset or a matrix, it is assumed that the last column holds the class labels. WL makes it easy to rearrange columns so that any column of a dataset or a matrix becomes the last one.
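For instance, moving a given column to the last position can be done with a small helper (a hypothetical helper, not part of ClCon; it assumes the dataset rows are associations with string keys):

    moveColumnToLast[ds_Dataset, col_] :=
      ds[All, Append[DeleteCases[Normal@Keys@First[ds], col], col]]
    (* e.g. moveColumnToLast[dsTitanic, "passengerSurvival"] *)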
## Data splitting
The splitting is made with `ClConSplitData`, which takes up to two arguments and options. The first argument specifies the fraction of training data. The second argument -- if given -- specifies the fraction of the validation part of the training data. If the value of option `Method` is "LabelsProportional", then the splitting is done in correspondence of the class labels tallies. ("LabelsProportional" is the default value.) Data splitting demonstration examples follow.
Here are the dimensions of the dataset dsData:
Dimensions[dsData]
(* {100, 4} *)
Here we split the data into $70$% for training and $30$% for testing, and then verify that the corresponding numbers of rows add up to the number of rows of dsData:
val = ClConUnit[dsData]⟹ClConSplitData[0.7]⟹ClConTakeValue;
Map[Dimensions, val]
Total[First /@ %]
(*
<|"trainingData" -> {69, 4}, "testData" -> {31, 4}|>
100
*)
Note that if Method is not "LabelsProportional" we get slightly different results.
val = ClConUnit[dsData]⟹ClConSplitData[0.7, Method -> "Random"]⟹ClConTakeValue;
Map[Dimensions, val]
Total[First /@ %]
(*
<|"trainingData" -> {70, 4}, "testData" -> {30, 4}|>
100
*)
In the following code we split the data into $70$% for training and $30$% for testing, then the training data is further split into $90$% for training and $10$% for classifier training validation; then we verify that the number of rows add up.
val = ClConUnit[dsData]⟹ClConSplitData[0.7, 0.1]⟹ClConTakeValue;
Map[Dimensions, val]
Total[First /@ %]
(*
<|"trainingData" -> {61, 4}, "testData" -> {31, 4}, "validationData" -> {8, 4}|>
100
*)
## Classifier training
The monad ClCon supports both single classifiers obtained with Classify and classifier ensembles obtained with Classify and managed with the package ["ClassifierEnsembles.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m), [[AAp4](https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m)].
### Single classifier training
With the following pipeline we take the Titanic data, split it into 75/25 % parts, train a Logistic Regression classifier, and finally take that classifier from the monad.
cf =
ClConUnit[dsTitanic]⟹
ClConSplitData[0.75]⟹
ClConMakeClassifier["LogisticRegression"]⟹
ClConTakeClassifier;
Here is information about the obtained classifier:
ClassifierInformation[cf, "TrainingTime"]
(* Quantity[3.84008, "Seconds"] *)
If we want to pass parameters to the classifier training we can use the `Method` option. Here we train a Random Forest classifier with $400$ trees:
cf =
ClConUnit[dsTitanic]⟹
ClConSplitData[0.75]⟹
ClConMakeClassifier[Method -> {"RandomForest", "TreeNumber" -> 400}]⟹
ClConTakeClassifier;
ClassifierInformation[cf, "TreeNumber"]
(* 400 *)
### Classifier ensemble training
With the following pipeline we take the Titanic data, split it into 75/25 % parts, train a classifier ensemble of three Logistic Regression classifiers and two Nearest Neighbors classifiers using random sampling of $90$% of the training data, and finally take that classifier ensemble from the monad.
ensemble =
ClConUnit[dsTitanic]⟹
ClConSplitData[0.75]⟹
ClConMakeClassifier[{{"LogisticRegression", 0.9, 3}, {"NearestNeighbors", 0.9, 2}}]⟹
ClConTakeClassifier;
The classifier ensemble is simply an association whose keys are automatically assigned names and whose values are classifiers.
ensemble
![ClCon-ensemble-classifier-example-1](https://imgur.com/HHwLTTW.png)
Here are the training times of the classifiers in the obtained ensemble:
ClassifierInformation[#, "TrainingTime"] & /@ ensemble
(*
<|"LogisticRegression[1,0.9]" -> Quantity[3.47836, "Seconds"],
"LogisticRegression[2,0.9]" -> Quantity[3.47681, "Seconds"],
"LogisticRegression[3,0.9]" -> Quantity[3.4808, "Seconds"],
"NearestNeighbors[1,0.9]" -> Quantity[1.82454, "Seconds"],
"NearestNeighbors[2,0.9]" -> Quantity[1.83804, "Seconds"]|>
*)
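Because the ensemble is just an association of `ClassifierFunction` objects, a simple majority vote over its members can be sketched directly (an illustration only; the package "ClassifierEnsembles.m", [AAp4], provides the proper ensemble classification functions):

    (* most frequent label among the ensemble members' predictions *)
    vote[ens_Association, input_] :=
      First@Keys@ReverseSort@Counts[Through[Values[ens][input]]]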
A more precise specification can be given using associations. The specification
<|"method" -> "LogisticRegression", "sampleFraction" -> 0.9, "numberOfClassifiers" -> 3, "samplingFunction" -> RandomChoice|>
says "make three Logistic Regression classifiers, for each taking $90$% of the training data using the function `RandomChoice`."
Here is a pipeline specification equivalent to the pipeline specification above:
ensemble2 =
ClConUnit[dsTitanic]⟹
ClConSplitData[0.75]⟹
ClConMakeClassifier[{
<|"method" -> "LogisticRegression",
"sampleFraction" -> 0.9,
"numberOfClassifiers" -> 3,
"samplingFunction" -> RandomSample|>,
<|"method" -> "NearestNeighbors",
"sampleFraction" -> 0.9,
"numberOfClassifiers" -> 2,
"samplingFunction" -> RandomSample|>}]⟹
ClConTakeClassifier;
ensemble2
![ClCon-ensemble-classifier-example-2](https://imgur.com/H8xdoFu.png)
## Classifier testing
Classifier testing is done with the testing data in the context.
Here is a pipeline that takes the Titanic data, splits it, and trains a classifier:
p =
ClConUnit[dsTitanic]⟹
ClConSplitData[0.75]⟹
ClConMakeClassifier["DecisionTree"];
Here is how we compute selected classifier measures:
p⟹
ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}]⟹
ClConTakeValue
(*
<|"Accuracy" -> 0.792683,
"Precision" -> <|"died" -> 0.802691, "survived" -> 0.771429|>,
"Recall" -> <|"died" -> 0.881773, "survived" -> 0.648|>,
"FalsePositiveRate" -> <|"died" -> 0.352, "survived" -> 0.118227|>|>
*)
(The measures are listed in the function page of [ClassifierMeasurements](http://reference.wolfram.com/language/ref/ClassifierMeasurements.html).)
Here we show the confusion matrix plot:
p⟹ClConClassifierMeasurements["ConfusionMatrixPlot"]⟹ClConEchoValue;
![ClCon-classifier-testing-ConfusionMatrixPlot-echo](https://imgur.com/QNXUh5H.png)
Here is how we plot ROC curves by specifying the ROC parameter range and the image size:
p⟹ClConROCPlot["FPR", "TPR", "ROCRange" -> Range[0, 1, 0.1], ImageSize -> 200];
![ClCon-classifier-testing-ROCPlot-echo](https://imgur.com/stclBvw.png)
**Remark:** ClCon uses the package [ROCFunctions.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/ROCFunctions.m), [[AAp5](https://github.com/antononcube/MathematicaForPrediction/blob/master/ROCFunctions.m)], which implements all functions defined in [[Wk2](https://en.wikipedia.org/wiki/Receiver_operating_characteristic)].
Here we plot ROC functions values ($y$-axis) over the ROC parameter ($x$-axis):
p⟹ClConROCListLinePlot[{"ACC", "TPR", "FPR", "SPC"}];
![ClCon-classifier-testing-ROCListLinePlot-echo](https://imgur.com/WNdgi6J.png)
Note that the "ClConROC*Plot" functions automatically echo the plots. The plots are also made to be the pipeline value. Using the option specification "Echo"->False the automatic echoing of plots can be suppressed. With the option "ClassLabels" we can focus on specific class labels.
p⟹
ClConROCListLinePlot[{"ACC", "TPR", "FPR", "SPC"}, "Echo" -> False, "ClassLabels" -> "survived", ImageSize -> Medium]⟹
ClConEchoValue;
![ClCon-classifier-testing-ROCListLinePlot-survived-echo](https://imgur.com/hZzXsT7.png)
## Variable importance finding
Using the pipeline constructed above let us find the most decisive variables using systematic random shuffling (as explained in [[AA3](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Importance-of-variables-investigation-guide.md)]):
p⟹
ClConAccuracyByVariableShuffling⟹
ClConTakeValue
(*
<|None -> 0.792683, "id" -> 0.664634, "passengerClass" -> 0.75, "passengerAge" -> 0.777439, "passengerSex" -> 0.612805|>
*)
We deduce that "passengerSex" is the most decisive variable because its corresponding classification success rate is the smallest. (See [[AA3](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Importance-of-variables-investigation-guide.md)] for more details.)
Using the option "ClassLabels" we can focus on specific class labels:
p⟹ClConAccuracyByVariableShuffling["ClassLabels" -> "survived"]⟹ClConTakeValue
(*
<|None -> {0.771429}, "id" -> {0.595506}, "passengerClass" -> {0.731959}, "passengerAge" -> {0.71028}, "passengerSex" -> {0.414414}|>
*)
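The deduction can be made programmatically: drop the baseline entry (key `None`) from the shuffled-accuracy association and take the key with the smallest value. (The association below is copied from the output above; the variable name `shuffledAcc` is just for illustration.)

```
(* Find the most decisive variable from the shuffled-accuracy association. *)
shuffledAcc = <|None -> 0.792683, "id" -> 0.664634, "passengerClass" -> 0.75,
   "passengerAge" -> 0.777439, "passengerSex" -> 0.612805|>;
First@Keys@TakeSmallest[KeyDrop[shuffledAcc, {None}], 1]
(* "passengerSex" *)
```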
## Setters and takers
The values from the monad context can be set or obtained with the corresponding "setter" and "taker" functions, as summarized in the previous section.
For example:
p⟹ClConTakeClassifier
(* ClassifierFunction[__] *)
Short[Normal[p⟹ClConTakeTrainingData]]
(*
{<|"id" -> 858, "passengerClass" -> "3rd", "passengerAge" -> 30, "passengerSex" -> "male", "passengerSurvival" -> "survived"|>, <<979>> }
*)
Short[Normal[p⟹ClConTakeTestData]]
(* {<|"id" -> 285, "passengerClass" -> "1st", "passengerAge" -> 60, "passengerSex" -> "female", "passengerSurvival" -> "survived"|> , <<327>> }
*)
p⟹ClConTakeVariableNames
(* {"id", "passengerClass", "passengerAge", "passengerSex", "passengerSurvival"} *)
If other values are put in the context they can be obtained through the (generic) function `ClConTakeContext`, [AAp1]:
p = ClConUnit[RandomReal[1, {2, 2}]]⟹ClConAddToContext["data"];
(p⟹ClConTakeContext)["data"]
(* {{0.815836, 0.191562}, {0.396868, 0.284587}} *)
Another generic function from [AAp1] is `ClConTakeValue` (used many times above).
# Example use cases
## Classification with MNIST data
Here we show an example of using ClCon with MNIST, [[YL1](http://yann.lecun.com/exdb/mnist/)], a reasonably large dataset of images.
mnistData = ExampleData[{"MachineLearning", "MNIST"}, "Data"];
SeedRandom[3423]
p =
ClConUnit[RandomSample[mnistData, 20000]]⟹
ClConSplitData[0.7]⟹
ClConSummarizeData⟹
ClConMakeClassifier["NearestNeighbors"]⟹
ClConClassifierMeasurements[{"Accuracy", "ConfusionMatrixPlot"}]⟹
ClConEchoValue;
![ClCon-MNIST-example-output](https://imgur.com/2GZE0wJ.png)
Here we plot the ROC curve for a specified digit:
p⟹ClConROCPlot["ClassLabels" -> 5];
## Conditional continuation
In this sub-section we show how the computations in a ClCon pipeline can be stopped or continued based on a certain condition.
The pipeline below makes a simple classifier ("LogisticRegression") for the WineQuality data; if the recall for the important label ("high") is not large enough, it makes a more complicated classifier ("RandomForest"). The pipeline marks intermediate steps by echoing outcomes and messages.
SeedRandom[267]
res =
ClConUnit[dsWineQuality[All, Join[#, <|"wineQuality" -> If[#wineQuality >= 7, "high", "low"]|>] &]]⟹
ClConSplitData[0.75, 0.2]⟹
ClConSummarizeData(* summarize the data *)⟹
ClConMakeClassifier[Method -> "LogisticRegression"](* training a simple classifier *)⟹
ClConROCPlot["FPR", "TPR", "ROCPointCallouts" -> False]⟹
ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}]⟹
ClConEchoValue⟹
ClConIfElse[#["Recall", "high"] > 0.70 & (* criteria based on the recall for "high" *),
ClConEcho["Good recall for \"high\"!", "Success:"],
ClConUnit[##]⟹
ClConEcho[Style["Recall for \"high\" not good enough... making a large random forest.", Darker[Red]], "Info:"]⟹
ClConMakeClassifier[Method -> {"RandomForest", "TreeNumber" -> 400}](* training a complicated classifier *)⟹
ClConROCPlot["FPR", "TPR", "ROCPointCallouts" -> False]⟹
ClConClassifierMeasurements[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}]⟹
ClConEchoValue &];
![ClCon-conditional-continuation-example-output](https://imgur.com/wpakjS6.png)
We can see that the recall with the more complicated classifier is higher. Also, the ROC plots of the second classifier are visibly closer to the ideal one. Still, the recall is not good enough; we have to find a threshold that is better than the default one. (See the next sub-section.)
## Classification with custom thresholds
(In this sub-section we use the monad from the previous sub-section.)
Here we compute classification measures using the threshold $0.3$ for the important class label ("high"):
res⟹
ClConClassifierMeasurementsByThreshold[{"Accuracy", "Precision", "Recall", "FalsePositiveRate"}, "high" -> 0.3]⟹
ClConTakeValue
(* <|"Accuracy" -> 0.782857, "Precision" -> <|"high" -> 0.498871, "low" -> 0.943734|>,
"Recall" -> <|"high" -> 0.833962, "low" -> 0.76875|>,
"FalsePositiveRate" -> <|"high" -> 0.23125, "low" -> 0.166038|>|> *)
We can see that the recall for "high" is fairly large and the rest of the measures have satisfactory values. (The accuracy did not drop that much, and the false positive rate is not that large.)
Here we compute suggestions for the best thresholds:
res (* start with a previous monad *)⟹
ClConROCPlot[ImageSize -> 300] (* make ROC plots *)⟹
ClConSuggestROCThresholds[3] (* find the best 3 thresholds per class label *)⟹
ClConEchoValue (* echo the result *);
![ClCon-best-thresholds-example-output](https://imgur.com/NIkYzVA.png)
The suggestions are the ROC points closest to the point $\{0,1\}$ (which corresponds to the ideal classifier).
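That selection can be sketched directly. Assuming we have a list of ROC points (as {FPR, TPR} pairs) and the thresholds that produced them -- the data below is made up for illustration -- the suggested thresholds are those whose points have the smallest Euclidean distance to {0, 1}:

```
(* Made-up illustration data: ROC points {FPR, TPR} and their thresholds. *)
rocPoints = {{0.05, 0.55}, {0.12, 0.74}, {0.23, 0.83}, {0.45, 0.91}};
thresholds = {0.65, 0.5, 0.35, 0.2};
(* Positions of the 3 points closest to the ideal classifier point {0, 1}: *)
inds = Ordering[EuclideanDistance[#, {0, 1}] & /@ rocPoints, 3];
thresholds[[inds]]
(* {0.35, 0.5, 0.65} *)
```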
Here is a way to use threshold suggestions within the monad pipeline:
res⟹
ClConSuggestROCThresholds⟹
ClConEchoValue⟹
(ClConUnit[##]⟹
ClConClassifierMeasurementsByThreshold[{"Accuracy", "Precision", "Recall"}, "high" -> First[#1["high"]]] &)⟹
ClConEchoValue;
(*
value: <|high->{0.35},low->{0.65}|>
value: <|Accuracy->0.825306,Precision-><|high->0.571831,low->0.928736|>,Recall-><|high->0.766038,low->0.841667|>|>
*)
# Unit tests
The development of `ClCon` was done with two types of unit tests: (1) directly specified tests, [[AAp11](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassification-Unit-Tests.wlt)], and (2) tests based on randomly generated pipelines, [[AAp12](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m)].
Both unit test packages should be further extended in order to provide better coverage of the functionalities and illustrate -- and postulate -- pipeline behavior.
## Directly specified tests
Here we run the unit tests file ["MonadicContextualClassification-Unit-Tests.wlt"](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassification-Unit-Tests.wlt), [[AAp11](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassification-Unit-Tests.wlt)]:
AbsoluteTiming[
testObject = TestReport["~/MathematicaForPrediction/UnitTests/MonadicContextualClassification-Unit-Tests.wlt"]
]
![ClCon-direct-unit-tests-TestReport-icon](https://imgur.com/tzbkNyg.png)
The natural-language-derived test IDs should give a fairly good idea of the functionalities covered in [AAp11].
Values[Map[#["TestID"] &, testObject["TestResults"]]]
(* {"LoadPackage", "EvenOddDataset", "EvenOddDataMLRules", \
"DataToContext-no-[]", "DataToContext-with-[]", \
"ClassifierMaking-with-Dataset-1", "ClassifierMaking-with-MLRules-1", \
"AccuracyByVariableShuffling-1", "ROCData-1", \
"ClassifierEnsemble-different-methods-1", \
"ClassifierEnsemble-different-methods-2-cont", \
"ClassifierEnsemble-different-methods-3-cont", \
"ClassifierEnsemble-one-method-1", "ClassifierEnsemble-one-method-2", \
"ClassifierEnsemble-one-method-3-cont", \
"ClassifierEnsemble-one-method-4-cont", "AssignVariableNames-1", \
"AssignVariableNames-2", "AssignVariableNames-3", "SplitData-1", \
"Set-and-take-training-data", "Set-and-take-test-data", \
"Set-and-take-validation-data", "Partial-data-summaries-1", \
"Assign-variable-names-1", "Split-data-100-pct", \
"MakeClassifier-with-empty-unit-1", \
"No-rocData-after-second-MakeClassifier-1"} *)
## Random pipelines tests
Since the monad `ClCon` is a DSL it is natural to test it with a large number of randomly generated "sentences" of that DSL. For the `ClCon` DSL the sentences are `ClCon` pipelines. The package ["MonadicContextualClassificationRandomPipelinesUnitTests.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m), [[AAp12](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m)], has functions for generating random `ClCon` pipelines and running them as verification tests. A short example follows.
Generate pipelines:
SeedRandom[234]
pipelines = MakeClConRandomPipelines[300];
Length[pipelines]
(* 300 *)
Here is a sample of the generated pipelines:
Block[{DoubleLongRightArrow, pipelines = RandomSample[pipelines, 6]},
Clear[DoubleLongRightArrow];
pipelines = pipelines /. {_Dataset -> "ds", _?DataRulesForClassifyQ -> "mlrData"};
GridTableForm[
Map[List@ToString[DoubleLongRightArrow @@ #, FormatType -> StandardForm] &, pipelines],
TableHeadings -> {"pipeline"}]
]
AutoCollapse[]
![ClCon-random-pipelines-tests-sample-table](https://imgur.com/t4rCT5r.png)
Here we run the pipelines as unit tests:
AbsoluteTiming[
res = TestRunClConPipelines[pipelines, "Echo" -> True];
]
(* {350.083, Null} *)
From the test report results we see that a dozen tests failed with messages; all of the rest passed.
rpTRObj = TestReport[res]
![ClCon-random-pipelines-TestReport-icon](https://imgur.com/rr4vXUX.png)
(The message failures, of course, have to be examined -- some bugs were found in that way. Currently the actual test messages are expected.)
# Future plans
## Workflow operations
### Outliers
Better incorporation of outlier finding and manipulation in `ClCon`. Currently only outlier finding is surfaced in [AAp3]. (The package internally has other related functions.)
ClConUnit[dsTitanic[Select[#passengerSex == "female" &]]]⟹
ClConOutlierPosition⟹
ClConTakeValue
(* {4, 17, 21, 22, 25, 29, 38, 39, 41, 59} *)
### Dimension reduction
Support for dimension reduction -- quick construction of pipelines that allow applying different dimension reduction methods.
Currently with `ClCon` dimension reduction is applied only to data whose non-label parts can be easily converted into numerical matrices.
ClConUnit[dsWineQuality]⟹
ClConSplitData[0.7]⟹
ClConReduceDimension[2, "Echo" -> True]⟹
ClConRetrieveFromContext["svdRes"]⟹
ClConEchoFunctionValue["SVD dimensions:", Dimensions /@ # &]⟹
ClConSummarizeData;
![ClCon-dimension-reduction-example-echo](https://imgur.com/nEwoySa.png)
## Conversational agent
Using the packages [AAp13, AAp15] we can generate `ClCon` pipelines with natural language commands. The plan is to develop and document those functionalities further.
# Implementation notes
The ClCon package, [MonadicContextualClassification.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m), [[AAp3](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m)], is based on the packages [[AAp1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m), AAp4-AAp9]. It was developed using Mathematica and the [Mathematica plug-in for IntelliJ IDEA](https://github.com/halirutan/Mathematica-IntelliJ-Plugin), by Patrick Scheibe, [[PS1](https://github.com/halirutan/Mathematica-IntelliJ-Plugin)]. The following diagram shows the development workflow.
[![ClCon-development-cycle](https://imgur.com/hmMPfCrl.png)](https://github.com/antononcube/MathematicaForPrediction/raw/master/MarkdownDocuments/Diagrams/A-monad-for-classification-workflows/ClCon-development-cycle.jpg)
Some observations and morals follow.
+ Making the unit tests [AAp11] made the final implementation stage much more comfortable.
+ Of course, in retrospect that is obvious.
+ Initially ["MonadicContextualClassification.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m) was not a real package, just a collection of global context functions with the prefix "ClCon". This made some programming design decisions harder, slower, and more cumbersome. By making a proper package the development became much easier because of the "peace of mind" brought by the context feature encapsulation.
+ The explanation for this is that the initial versions of ["MonadicContextualClassification.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m) were made to illustrate the monad programming described in [[AA1](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Monad-code-generation-and-extension.md)] using the package ["StateMonadCodeGenerator.m"](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m).
+ The making of random pipeline tests, [AAp12], helped catch a fair amount of inconvenient "features" and bugs.
+ (Both tests sets [AAp11, AAp12] can be made to be more comprehensive.)
+ The design of a conversational agent for producing ClCon pipelines with natural language commands brought a very fruitful viewpoint on the overall functionalities and the determination and limits of the ClCon development goals. See [AAp13, AAp14, AAp15].
+ ["Eat your own dog food"](https://en.wikipedia.org/wiki/Eating_your_own_dog_food), or in this case: "use ClCon functionalities to implement ClCon functionalities."
+ Since we are developing a DSL it is natural to use that DSL for its own advancement.
+ Again, in retrospect that is obvious. It probably should also be seen as a consequence of practicing a certain code refactoring discipline.
+ The reason to list that moral is that often it is somewhat "easier" to implement functionalities thinking locally, ad-hoc, forgetting or not reviewing other, already implemented functions.
+ In order to come up with a better design and find inconsistencies: write many pipelines and discuss with co-workers.
+ This is obvious. I would like to mention that a somewhat good alternative to discussions is (i) writing this document and related ones, and (ii) making, running, and examining the random pipelines tests.
# References
## Packages
\[AAp1\] Anton Antonov, [State monad code generator Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/StateMonadCodeGenerator.m) .
\[AAp2\] Anton Antonov, [Monadic tracing Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicTracing.m) .
\[AAp3\] Anton Antonov, [Monadic contextual classification Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MonadicProgramming/MonadicContextualClassification.m) .
\[AAp4\] Anton Antonov, [Classifier ensembles functions Mathematica package](https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/ClassifierEnsembles.m), (2016), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/ClassifierEnsembles.m) .
\[AAp5\] Anton Antonov, [Receiver operating characteristic functions Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/ROCFunctions.m), (2016), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/ROCFunctions.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/ROCFunctions.m) .
\[AAp6\] Anton Antonov, [Variable importance determination by classifiers implementation in Mathematica](https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/VariableImportanceByClassifiers.m), (2015), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/VariableImportanceByClassifiers.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/VariableImportanceByClassifiers.m) .
\[AAp7\] Anton Antonov, [MathematicaForPrediction utilities](https://github.com/antononcube/MathematicaForPrediction/raw/master/MathematicaForPredictionUtilities.m), (2014), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/MathematicaForPredictionUtilities.m) .
\[AAp8\] Anton Antonov, [Cross tabulation implementation in Mathematica](https://raw.githubusercontent.com/antononcube/MathematicaForPrediction/master/CrossTabulate.m), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/CrossTabulate.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/CrossTabulate.m) .
\[AAp9\] Anton Antonov, [SSparseMatrix Mathematica package](https://github.com/antononcube/MathematicaForPrediction/blob/master/SSparseMatrix.m), (2018), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
\[AAp10\] Anton Antonov, [Obtain and transform Mathematica machine learning data-sets](https://github.com/antononcube/MathematicaVsR/blob/master/Projects/ProgressiveMachineLearning/Mathematica/GetMachineLearningDataset.m), (2018), [MathematicaVsR at GitHub](https://github.com/antononcube/MathematicaVsR).
\[AAp11\] Anton Antonov, [Monadic contextual classification Mathematica unit tests](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassification-Unit-Tests.wlt), (2018), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassification-Unit-Tests.wlt](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassification-Unit-Tests.wlt) .
\[AAp12\] Anton Antonov, [Monadic contextual classification random pipelines Mathematica unit tests](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m), (2018), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m](https://github.com/antononcube/MathematicaForPrediction/blob/master/UnitTests/MonadicContextualClassificationRandomPipelinesUnitTests.m) .
## ConversationalAgents Packages
\[AAp13\] Anton Antonov, [Classifier workflows grammar in EBNF](https://github.com/antononcube/ConversationalAgents/blob/master/EBNF/ClassifierWorkflowsGrammar.ebnf), (2018), [ConversationalAgents at GitHub](https://github.com/antononcube/ConversationalAgents), [https://github.com/antononcube/ConversationalAgents](https://github.com/antononcube/ConversationalAgents).
\[AAp14\] Anton Antonov, Classifier workflows grammar Mathematica unit tests, (2018), [ConversationalAgents at GitHub](https://github.com/antononcube/ConversationalAgents), [https://github.com/antononcube/ConversationalAgents](https://github.com/antononcube/ConversationalAgents).
\[AAp15\] Anton Antonov, [ClCon translator Mathematica package](https://github.com/antononcube/ConversationalAgents/blob/master/Projects/ClassficationWorkflowsAgent/Mathematica/ClConTranslator.m), (2018), [ConversationalAgents at GitHub](https://github.com/antononcube/ConversationalAgents), [https://github.com/antononcube/ConversationalAgents](https://github.com/antononcube/ConversationalAgents).
## MathematicaForPrediction articles
\[AA1\] Anton Antonov, [Monad code generation and extension](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Monad-code-generation-and-extension.md), (2017), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction), [https://github.com/antononcube/MathematicaForPrediction](https://github.com/antononcube/MathematicaForPrediction).
\[AA2\] Anton Antonov, ["ROC for classifier ensembles, bootstrapping, damaging, and interpolation"](https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/), (2016), [MathematicaForPrediction at WordPress](https://mathematicaforprediction.wordpress.com).
URL: [https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/](https://mathematicaforprediction.wordpress.com/2016/10/15/roc-for-classifier-ensembles-bootstrapping-damaging-and-interpolation/) .
\[AA3\] Anton Antonov, ["Importance of variables investigation guide"](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Importance-of-variables-investigation-guide.md), (2016), [MathematicaForPrediction at GitHub](https://github.com/antononcube/MathematicaForPrediction).
URL: [https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Importance-of-variables-investigation-guide.md](https://github.com/antononcube/MathematicaForPrediction/blob/master/MarkdownDocuments/Importance-of-variables-investigation-guide.md) .
## Other
\[Wk1\] Wikipedia entry, [Monad](https://en.wikipedia.org/wiki/Monad_(functional_programming)),
URL: [https://en.wikipedia.org/wiki/Monad_(functional_programming)](https://en.wikipedia.org/wiki/Monad_(functional_programming)) .
\[Wk2\] Wikipedia entry, [Receiver operating characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic),
URL: [https://en.wikipedia.org/wiki/Receiver_operating_characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) .
\[YL1\] Yann LeCun et al., [MNIST database site](http://yann.lecun.com/exdb/mnist/).
URL: [http://yann.lecun.com/exdb/mnist/](http://yann.lecun.com/exdb/mnist/) .
\[PS1\] Patrick Scheibe, [Mathematica (Wolfram Language) support for IntelliJ IDEA](https://github.com/halirutan/Mathematica-IntelliJ-Plugin), (2013-2018), [Mathematica-IntelliJ-Plugin at GitHub](https://github.com/halirutan/Mathematica-IntelliJ-Plugin).
URL: [https://github.com/halirutan/Mathematica-IntelliJ-Plugin](https://github.com/halirutan/Mathematica-IntelliJ-Plugin) .
[1]: https://en.wikipedia.org/wiki/Monad_%28functional_programming%29
[2]: https://en.wikipedia.org/wiki/Monad_%28functional_programming%29
[3]: https://en.wikipedia.org/wiki/Monad_%28functional_programming%29
[4]: https://en.wikipedia.org/wiki/Monad_%28functional_programming%29#State_monads

Anton Antonov, 2018-05-15T21:18:28Z

Yanny or Laurel? You decide!
http://community.wolfram.com/groups/-/m/t/1340318
Recommendation: If you like this post, be sure to also check out [this, even better, post][1]!
The [new gold or blue internet debate][2] is about [a new audio clip][3] which sounds like <em>Yanny</em> to some people and <em>Laurel</em> to others. Various explanations have been given such as the audio quality of your headset or speakers and the age of the person listening (with the hypothesis that older people have trouble hearing the higher frequencies of the recording and therefore hear something else than younger people).
So let's take a listen to the audio in question. Here it is: [yanny-laurel.mp4][4]
When I play this on my desktop computer with my headset, I hear <em>Yanny</em>. However, listening to the same audio on my phone speaker I hear <em>Laurel</em>, so the audio characteristics of the rendering device do play a role.
Now let's experiment in the [Wolfram Language][5] with this audio file. This will [`Import`][6] the file:
```
audio = Import["yanny-laurel.mp4"]
```
The output is an interactive audio control, which lets you play the sound:
![enter image description here][7]
You can take a look at the [`Spectrogram`][8] of the audio, to see the frequency distribution over time:
```
Spectrogram[audio]
```
![enter image description here][9]
This makes it visually clear that the utterance (<em>Yanny</em> or <em>Laurel</em>) is repeated once in the file.
You can change the pitch of the audio with [`AudioPitchShift`][10], which either compresses or stretches the frequencies by a factor 'r':
```
audios = Table[AudioPitchShift[audio, r], {r, 0.5, 1.5, .1}]
```
![enter image description here][11]
If you listen to the compressed audio, you end up hearing <em>Yanny</em>, but the stretched audio sounds like <em>Laurel</em>.
We can join the audio snippets together with [`AudioJoin`][12], so we can listen to them in sequence (forward or in reverse):
```
a1 = AudioJoin[audios];
a2 = AudioJoin[Reverse[audios]];
```
Here are the two files: [a1.wav][13] and [a2.wav][14]
Interestingly enough, if you listen to [a1.wav][15] (compressed to stretched) it keeps sounding like <em>Yanny</em> almost to the end, but if you listen to [a2.wav][16] (stretched to compressed) it keeps sounding like <em>Laurel</em> almost to the end! The human brain seems to want to cling to what it heard just previously, so the context of what you hear matters (a lot)!
[1]: http://community.wolfram.com/groups/-/m/t/1340356
[2]: https://en.wikipedia.org/wiki/The_dress
[3]: https://www.cnn.com/2018/05/15/health/yanny-laurel-audio-social-media-trnd/index.html
[4]: https://www.wolframcloud.com/objects/arnoudb/yanny-laurel.mp4
[5]: http://www.wolfram.com/language/fast-introduction-for-programmers/en/
[6]: http://reference.wolfram.com/language/ref/Import.html
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-01.png&userId=22112
[8]: http://reference.wolfram.com/language/ref/Spectrogram.html
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-02.png&userId=22112
[10]: http://reference.wolfram.com/language/ref/AudioPitchShift.html
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=out-03.png&userId=22112
[12]: http://reference.wolfram.com/language/ref/AudioJoin.html
[13]: https://www.wolframcloud.com/objects/arnoudb/a1.wav
[14]: https://www.wolframcloud.com/objects/arnoudb/a2.wav
[15]: https://www.wolframcloud.com/objects/arnoudb/a1.wav
[16]: https://www.wolframcloud.com/objects/arnoudb/a2.wav

Arnoud Buzing, 2018-05-16T14:32:20Z

Perfect Pitch - Yanny or Laurel?
http://community.wolfram.com/groups/-/m/t/1340356
The [Yanny/Laurel clip][1] is the [internet meme *du jour*][2], so I thought I'd take a quick look at it. If you haven't heard of it yet (what, have you been under a rock for the last five hours?), some people hear "Yanny", and some people hear "Laurel". But why? And what's the deciding factor?
The original clip repeats the phrase twice, so I simply took one of them using `AudioTrim`.
aud = AudioTrim[
  Import["https://video.twimg.com/ext_tw_video/996218345631318016/pu/vid/180x320/E4YbExw0wX1ACH5v.mp4"],
  {Quantity[1, "Seconds"], Quantity[1.8, "Seconds"]}]
First I decided to take a look at this clip, and compare with just the words "Yanny" and "Laurel". Thankfully WL has `SpeechSynthesize` built in, so it was easy to compare the spectrogram.
![spectrogram comparisons][3]
I think this is quite interesting - it looks like our original clip is much closer to "Laurel" than "Yanny".
In the New York Times article above, Twitter user @xvv mentioned that pitch shifting the clip has an impact on what you hear. A bit of playing around in a `Manipulate` gave me the same impression!
Manipulate[
Module[{a = AudioPitchShift[aud, x]},
Audio[a]
],
{x, 0.7, 1.3, 0.005}]
If I move the slider around, I hear "Yanny" on the left and "Laurel" on the right. If I move the slider around the middle, what I hear tends to change based on which of the two I heard previously, the result of some weird "echo" in my brain that seems to say that "a sound I heard previously is similar to the sound I'm hearing now, thus they are the same sound".
Anyway, this gave me an idea. I was wondering if it was possible to see exactly where the sound changes from Yanny to Laurel. I didn't have time to listen to every small pitch shift, as well as that "echo" influencing which sound I heard. Thankfully, computers these days are able to do all kinds of things, so I used the (quite recently added) [Deep Speech 2 Neural Network][4] to do this for me.
First initialising the network (simple as usual):
nm = NetModel["Deep Speech 2 Trained on Baidu English Data"]
Then I used a simple `Manipulate` to see whether the network actually hears the sound differently.
Manipulate[
MapAt[
StringJoin[#] &,
nm[AudioPitchShift[aud, x], {"TopNegativeLogLikelihoods", 5}
], {All, 1}],
{x, 0.8, 1.1, 0.1}
]
And yes, in fact it does! When pitch shifted downwards, the network hears "yeary","yearly","yeay" and so on, and when shifted upwards, it hears "laurel","loural","lourl" and the like.
Since we _can_ plot this data, we _should_. The following code is a bit awkward - I hadn't even had a coffee at this point. Essentially, if the network hears a word that starts with a "y", we take it that a human would hear the word "Yanny", and if the network hears a word that starts with an "l", a human would hear the word "Laurel". I throw away other words since it doesn't appear that regular humans are hearing "mento" or something else weird.
First, we'll get the first character of the prediction and its score for every 0.001 interval of pitch shift between 0.7 and 1.3.
tbl = ParallelTable[
x -> Normal[
KeyMap[First,
Association@
NetModel["Deep Speech 2 Trained on Baidu English Data"][
AudioPitchShift[aud, x], {"TopNegativeLogLikelihoods", 1}]]],
{x, 0.7, 1.3, 0.001}]
That data is pretty easy to plot, so we do so, dropping letters that aren't "y" or "l".
ListPlot[KeyValueMap[
{#1,
KeyValueMap[
Function[{k, v}, Switch[k,
"y", {k, v},
"l", {k, -v},
_, Nothing
]],
Association@#2]}
&,
Association@tbl] /. {x_, {{l_, y_}}} -> Tooltip[{x, y}, l],
PlotRange -> All, PlotMarkers -> Automatic, Axes -> {True, False},
Frame -> {{False, False}, {True, False}},
FrameLabel -> {{"", None}, {"Pitch Shifted By", False}},
FrameTicks -> All,
PlotLabel -> "Above the line is Yanny, below the line is Laurel"]
![neural network seeing yanny or laurel][5]
In the above image, the distance from the centre line is the uncertainty. Above the line is where the network heard "Yanny" or something like it, and below the line is where the network heard "Laurel" or something like it.
We can see that right in the centre, at the original pitch, is where the network switches between "Yanny" and "Laurel". This suggests that even small variations in the pitch of the clip can cause regular non-computers to hear it differently too.
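For what it's worth, you can pull that crossover point straight out of the data (a sketch, assuming `tbl` from above, where each entry has the form `x -> {letter -> score}`):

```mathematica
(* Scan upward in pitch-shift factor and find the first entry whose
   top prediction starts with "l" rather than "y". This is a rough
   estimate of where the network flips from "Yanny" to "Laurel". *)
crossover = SelectFirst[tbl, Keys[Last[#]] === {"l"} &]
```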
Finally, I wanted to make a small animation displaying what's happening when you pitch shift, and the impact that it has on what you (or the network) hears.
Animate[
Module[{a = AudioPitchShift[aud, x]},
GraphicsGrid[{{
Spectrogram[a, ImageSize -> Large],
Rasterize@StringJoin@nm[a]
}, {
Rasterize@StringJoin["Pitch shifted by ", ToString@x]
}}, ImageSize -> Large]
],
{x, 0.7, 1.3, 0.005}]
![animation of spectrogram and network output][6]
In conclusion, I think that tiny differences in the pitch of what you hear can make a big difference to what your brain parses as a word. These differences in pitch might come from the speakers you're using, the shape of your ear, your age - who knows. If nothing else, it's pretty cool that we can use a neural network as an analogue for humans, who are susceptible to echoes of sounds they've recently heard.
This is my first post to the forums so I hope you found it interesting! Please consider checking out [Tough Soles][7] - it's got nothing whatsoever to do with this but it is my life's work! :)
[1]: https://video.twimg.com/ext_tw_video/996218345631318016/pu/vid/%5C%20180x320/E4YbExw0wX1ACH5v.mp4
[2]: https://www.nytimes.com/2018/05/15/science/yanny-laurel.html?module=WatchingPortal&region=c-column-middle-span-region&pgType=Homepage&action=click&mediaId=thumb_square&state=standard&contentPlacement=6&version=internal&contentCollection=www.nytimes.com&contentId=https://www.nytimes.com/2018/05/15/science/yanny-laurel.html&eventName=Watching-article-click
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-05-16at16.56.55.png&userId=1340333
[4]: https://resources.wolframcloud.com/NeuralNetRepository/resources/Deep-Speech-2-Trained-on-Baidu-English-Data
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-05-16at17.19.55.png&userId=1340333
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=yannylaurel.gif&userId=1340333
  [7]: https://youtube.com/toughsoles "Tough Soles"

Carl Lange 2018-05-16T16:38:18Z

Use "Plot" command inside "For" loop and superpose in same figure?
http://community.wolfram.com/groups/-/m/t/1340259
I wish to include a "Plot" command within a "For" loop. The loop numerically solves a differential equation over the intervals (0, τ), (τ, 2τ) and so on, with the initial condition changing each time (the solution at τ at the end of the first iteration becomes the initial condition for the second). In the attached file, the function definitions are all fine, but the trouble comes when running the For loop: it produces the correct plots, but each in a separate figure. Can I superpose all these plots in the same figure, so as to get a single curve over the full range? Thanks in advance.

Sourabh Lahiri 2018-05-16T09:37:21Z

Different results solving the same integral?
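A common pattern for this (a sketch with a hypothetical ODE y'[t] == -y[t] and interval length τ = 1, not the equation from the attached file) is to collect the per-interval plots in a list inside the loop and combine them afterwards with `Show`:

```mathematica
(* Solve piecewise over (0, τ), (τ, 2τ), ..., chaining the initial
   condition, and collect one Plot per interval in a list. *)
plots = {};
y0 = 1;   (* hypothetical initial condition *)
τ = 1;    (* hypothetical interval length *)
Do[
  sol = First@NDSolve[{y'[t] == -y[t], y[(k - 1) τ] == y0},
     y, {t, (k - 1) τ, k τ}];
  AppendTo[plots, Plot[Evaluate[y[t] /. sol], {t, (k - 1) τ, k τ}]];
  y0 = y[k τ] /. sol,  (* endpoint becomes the next initial condition *)
  {k, 3}];
(* Superpose all the pieces in a single figure. *)
Show[plots, PlotRange -> All]
```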
http://community.wolfram.com/groups/-/m/t/1339067
I need to evaluate an integral as in the attached file, for general parameters a, b and c. I obtain a conditional 0 as the result using the Integrate function.

However, when I evaluate the integral again (numerically and analytically), substituting numbers for the parameters a, b and c (numbers which fulfil the conditions obtained before), the result is not zero.

Should I trust the result 0 for general parameters a, b and c, or not? What could be happening?

Thanks so much

Enrique Rodriguez 2018-05-14T13:52:45Z
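A general way to sanity-check this kind of discrepancy (using a hypothetical integrand, not the one in the attached file) is to compare three things side by side: the symbolic result under assumptions, that result with numbers substituted in, and a direct numeric evaluation:

```mathematica
(* Hypothetical integrand standing in for the one in the attached notebook. *)
general = Integrate[Exp[-a x] Sin[b x], {x, 0, Infinity},
   Assumptions -> a > 0];
general /. {a -> 2, b -> 3}                       (* substitute numbers into the symbolic result *)
NIntegrate[Exp[-2 x] Sin[3 x], {x, 0, Infinity}]  (* evaluate the same integral numerically *)
(* If these two disagree, the conditions attached to the symbolic result
   may not actually cover the chosen parameter values, or Integrate may
   have picked a branch that doesn't apply to them. *)
```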