Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community, showing discussions in tag Wolfram Science, sorted by activity.

Why empty DensityPlot3D?
http://community.wolfram.com/groups/-/m/t/1387903
Manipulate[
DensityPlot3D[(a*(x)^((-1.0034*\[Delta]) +
0.5*Sqrt[(2.0068*\[Delta])^2 -
4*(\[Delta])^2 ((\[Gamma]*G*x)^2 + (10.75026))]) +
a*(x)^((-1.0034*\[Delta]) -
0.5*Sqrt[(2.0068*\[Delta])^2 -
4*(\[Delta])^2 ((\[Gamma]*G*
x)^2 + (10.75026))]) + (\[Chi]*\[Beta]*\[Gamma]*(\
\[Delta])^2*G*
x)/((\[Delta])^2*1.15736*10^(-6)*((\[Gamma]*G*
x)^2 + (10.75026)))), {x, 50, 500}, {\[Delta],
0.000000001, 2}, {\[Chi], 0.00002, 2}], {{\[Gamma], 2.2571*10^(8),
"\[Gamma]"}, 2.2571*10^(-8), 2.2571*10^(8)}, {{G, 0.02, "G"},
0.002, 0.02}, {{\[Beta], 0.2, "\[Beta]"}, 0.2,
14}, {{a, 1*10^8, "a"}, 0.02, 1000000000}, ControlPlacement -> Left]
Hi. I typed this code in Mathematica 11.1, but it only plots an empty cube. Does anybody know what the problem is?
Thank you very much!
raymond confidence
2018-07-20T01:34:53Z

Can GeoHistogram deal with large data sets?
http://community.wolfram.com/groups/-/m/t/1350329
I have been trying to run GeoHistogram on a data set of around 1.3 million coordinate pairs, but I keep getting errors (different ones depending on the Mathematica version; tried 11.1 and 11.3 on Mac and Win). The "geos" data set consists of a list of pairs like this:
{37.4404,-121.87}
with some pairs repeated more or less often within the data set. The GeoHistogram call is simply this:
GeoHistogram[geos, PlotTheme -> "Scientific", ImageSize -> Full]
Has anybody else been having problems with this?
Thanks,
Johan
Johan Lammens
2018-06-01T15:44:29Z

[WSC18] Modeling the growth or reduction of crime from 2016-2018 in Chicago
http://community.wolfram.com/groups/-/m/t/1382866
![Title Picture][1]
Introduction
============
Crime problems in the city of Chicago have received much exposure in the media. Many online resources, such as local newspapers (The Chicago Tribune), government agencies (the Chicago Police Department), and academic institutions (the University of Chicago Crime Lab), have used data stories and visualizations to illustrate the problem. We do not know, however, of any attempts to use Wolfram Mathematica's powerful tools to visualize crime in Chicago.
Abstract
========
Crime rates remain an important issue in the national dialogue, and many communities are looking for new approaches to understand the problem and to best use their resources to address crime. There is therefore much opportunity to use advances in big-data technology to give policy makers and citizens more insight into crime patterns, so that they can focus resources on the areas with the highest concentrations of crime. The city of Chicago is at the forefront of the national consciousness regarding crime. In my project, a user inputs a date and receives an output of all the crimes that occurred on that date in Chicago, in chronological order. Markers appear on the map as each crime occurs in a time lapse, colored according to zip code.
Data Collection
=======
I started my project by obtaining very accurate crime data from the city of Chicago, showing types of crimes, when they were reported, and where they happened.
![enter image description here][2]
I then imported this information into Mathematica, split the crimes into Property Crime, Crime on a Person, and Violations, and used the TogglerBar control so that the user can select which crimes to include.
{propertycrime, {"ARSON", "BURGLARY", "THEFT", "ROBBERY", "MOTOR VEHICLE THEFT", "CRIMINAL DAMAGE", "CRIMINAL TRESPASS", "HOMICIDE"}, ControlType -> TogglerBar, ControlPlacement -> Top},
{personcrime, {"ASSAULT", "BATTERY", "HUMAN TRAFFICKING", "CRIM SEXUAL ASSAULT", "DECEPTIVE PRACTICE", "SEX OFFENSE", "STALKING", "PROSTITUTION", "KIDNAPPING", "INTIMIDATION", "OFFENSE INVOLVING CHILDREN"}, ControlType -> TogglerBar,
 ControlPlacement -> Top},
{violations, {"WEAPONS VIOLATION", "OTHER NARCOTIC VIOLATION", "LIQUOR LAW VIOLATION"}, ControlType -> TogglerBar,
 ControlPlacement -> Top},
Manipulate
=======
Manipulate[
Module[{crimetype},
crimetype = Join[propertycrime, personcrime, violations];
Quiet@GeoGraphics[
GeoMarker[
Select[data,
MemberQ[crimetype, #[[-1]]] && (#[[1]] == {year, month, day,
hour}) &][[All, {2, 3}]], "Color" -> Blue],
GeoCenter ->
Entity["City", {"Chicago", "Illinois", "UnitedStates"}],
GeoRange -> Quantity[20, "Miles"],
PlotLabel -> DateString[DateObject[{year, month, day, hour}]
]
]
],
Delimiter,
{propertycrime, {"ARSON", "BURGLARY", "THEFT", "ROBBERY",
"MOTOR VEHICLE THEFT", "CRIMINAL DAMAGE", "CRIMINAL TRESPASS",
"HOMICIDE"}, ControlType -> TogglerBar, ControlPlacement -> Top},
{personcrime, {"ASSAULT", "BATTERY", "HUMAN TRAFFICKING",
   "CRIM SEXUAL ASSAULT", "DECEPTIVE PRACTICE", "SEX OFFENSE",
"STALKING", "PROSTITUTION", "KIDNAPPING", "INTIMIDATION",
"OFFENSE INVOLVING CHILDREN"}, ControlType -> TogglerBar,
ControlPlacement -> Top},
 {violations, {"WEAPONS VIOLATION", "OTHER NARCOTIC VIOLATION",
"LIQUOR LAW VIOLATION"}, ControlType -> TogglerBar,
ControlPlacement -> Top},
Delimiter,
{{month, 6}, If[year == 2016, 6, 1], If[year == 2018, 6, 12], 1,
Appearance -> "Labeled", ControlPlacement -> Left},
{{day, If[year == 2016 && month == 6, 29,
If[year == 2017, 5, If[year == 2018 && month == 6, 5]]]},
If[month == 6 && year == 2016, 29, 1],
If [month == 1 || month == 3 || month == 5 || month == 7 ||
month == 8 || month == 10 || month == 12, 31,
If[month == 9 || month == 4 || month == 6 || month == 11,
If[month == 6 && year == 2018, 26, 30], 28]], 1,
Appearance -> "Labeled", ControlPlacement -> Left},
{{year, 2016}, 2016, 2018, 1, Appearance -> "Labeled",
ControlPlacement -> Left},
{{hour, 1}, 1, 23, 1, Appearance -> "Labeled",
ControlType -> Trigger, AnimationRate -> .5,
ControlPlacement -> Left},
Button["Include all crimes?", (propertycrime = {"ARSON", "BURGLARY",
"THEFT", "ROBBERY", "MOTOR VEHICLE THEFT", "CRIMINAL DAMAGE",
"CRIMINAL TRESPASS", "HOMICIDE"};
personcrime = {"ASSAULT", "BATTERY", "HUMAN TRAFFICKING",
    "CRIM SEXUAL ASSAULT", "DECEPTIVE PRACTICE", "SEX OFFENSE",
"STALKING", "PROSTITUTION", "KIDNAPPING", "INTIMIDATION",
"OFFENSE INVOLVING CHILDREN"};
   violations = {"WEAPONS VIOLATION", "OTHER NARCOTIC VIOLATION",
"LIQUOR LAW VIOLATION"};)],
Button["Remove all crimes?", (propertycrime = {}; personcrime = {};
violations = {};)],
ControlPlacement -> Left]
Results
=======
The results show selected crimes and times when the crimes were reported correlated with the Chicago location where the crime occurred. The map can show the seasonality and geographic pattern of certain crimes in certain neighborhoods and test assumptions regarding how crime occurs in the city. For example, we can test the assumption that crimes like assault primarily happen during the summer because in the cold Chicago winter many people do not go outside and meet other people. The map also can be used to test assumptions about safety in certain parts of the city versus others, and if there are any seasonal or time variations that correlate.
![enter image description here][3]
Future Work
===========
In the future I would like to expand my project to vary colors on the map like a heat map: over a set time period, areas of the city where crimes happen most often would be dark red, while areas with few crimes would be a cooler color. I would also like to build in probability, enabling the user to choose a part of the city, and perhaps a season or time, and see the probability of a crime happening in that location. If demographic data regarding criminals and victims is available, I would like to add functionality where a user could choose a certain demographic profile and see when and where people matching it are most likely to commit a crime or be the victim of one. Police could use such data to decide where to deploy patrol units, and social-services providers could use it to learn where and when to focus their resources to prevent crime. I would also like to expand the data set to other cities, although not all police agencies collect or format their data in the same way. Looking at crime in border areas (e.g., the Illinois/Indiana border at the very south end of Chicago) would be quite interesting, but the data challenges would be considerable.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture.PNG&userId=1371928
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture2.PNG&userId=1371928
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Capture3.PNG&userId=1371928
Sachi Figliolini
2018-07-13T19:12:23Z

Fast curve building with linear solving method
http://community.wolfram.com/groups/-/m/t/1386853
Introduction
------------
We discuss the mechanics of yield curve bootstrapping and demonstrate how the use of a linear solver speeds up the process. Once the market data for curve construction is available, the bootstrapping is quick and the entire yield curve is obtained in a single step. This optimises the bootstrapping procedure and leads to an elegant and transparent solution. The auxiliary outputs - the so-called yield curve derivatives, such as zero and forward rates - can be generated on the fly.
![enter image description here][1]
The yield curve methodology
---------------------------
Yield curves are one of the most fundamental concepts in finance. Curves are basic building blocks for many financial instruments, and they are key determinants of value creation due to the discounting effect. Yield curves - when built from derivative instruments - are essential for forward rate calculation. As such, they are a critical component of the entire market for interest rate derivatives.
Yield curves are created through bootstrapping - a technique that converts market observable rates into 'zero-coupon' instruments. To be consistent and correct, the yield curve has to satisfy the following condition:
Sum[c[[i]] g[[i]] DF[[i]], {i, 1, n - 1}] + (1 + c[[n]] g[[n]]) DF[[n]] == 1
where **c** is the fixed market rate, **g** is the year fraction and **DF** stands for the discount factor that we are trying to bootstrap.
When we choose the 'fixed leg' method, the above expression guarantees that all instruments on the yield curve will price to par. This ensures that discount factors are correctly calculated.
The curve equation above ensures consistency for a single curve pillar - i.e. one maturity point on the curve. When $m$ such instruments are lined up together, we obtain a matrix of curve equations that we can solve with a linear solver:
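Written out in the notation used here ($c_i$ for the fixed rates, $g_i$ for the year fractions, $DF_i$ for the discount factors), the stacked system of $m$ curve equations is:

$$\begin{pmatrix} 1+c_1 g_1 & 0 & \cdots & 0 \\ c_1 g_1 & 1+c_2 g_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ c_1 g_1 & c_2 g_2 & \cdots & 1+c_m g_m \end{pmatrix} \begin{pmatrix} DF_1 \\ DF_2 \\ \vdots \\ DF_m \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}$$

Because the matrix is lower triangular, the linear solve amounts to a single forward substitution, which is why the whole curve comes out in one step.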
Curve bootstrapping - example
-----------------------------
We demonstrate the curve building principles with the following simple example:
- Randomize market data for the curve construction
- Visualise the data as the curve input
dim=30;
t=Table[0.5,{dim}];
act=Accumulate[t];
r=Table[1,{dim}];
c=0.0055+RandomReal[{0.005,0.009}] Sqrt[act];
td=TemporalData[c,{act}];
ListLinePlot[td,PlotStyle->{Blue, Thickness[0.008]},GridLines->{Automatic,Automatic},PlotLabel->Style["Yield curve market quotes",Purple,15]]
![enter image description here][2]
Using the market data above, we construct the matrix of cash flows and use the **LinearSolve** method to obtain the required discount factors.
z=Table[If[i<j,c[[i]]*t[[i]],If[i==j,r[[i]]+c[[i]]*t[[i]],0]],{j,1,dim},{i,1,dim}];
df=LinearSolve[z,r];
td2=TemporalData[df,{act}];
ListLinePlot[td2,PlotLabel->Style["Discount factors",Blue,15],GridLines->{Automatic, Automatic},PlotStyle->{Red,Thickness[0.009]}]
![enter image description here][3]
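Since the cash-flow matrix is lower triangular, the linear solve reduces to forward substitution. Here is a minimal, language-agnostic sketch in plain Python; the par rates and accrual fractions below are made up for illustration and are not the randomized data above:

```python
# Bootstrap discount factors from par rates by forward substitution,
# following the curve equation used in this post:
#   sum_{i<j} c_i g_i DF_i + (1 + c_j g_j) DF_j == 1
def bootstrap(c, g):
    n = len(c)
    df = [0.0] * n
    for j in range(n):
        coupon_leg = sum(c[i] * g[i] * df[i] for i in range(j))
        df[j] = (1.0 - coupon_leg) / (1.0 + c[j] * g[j])
    return df

rates = [0.010, 0.012, 0.014, 0.016, 0.018, 0.020]  # hypothetical par rates
accruals = [0.5] * len(rates)                        # semi-annual year fractions
dfs = bootstrap(rates, accruals)
# Discount factors come out unique and monotonically decreasing:
assert all(a > b for a, b in zip(dfs, dfs[1:]))
```

With an upward-sloping set of par rates, the resulting factors decrease monotonically from just below 1, matching the behaviour of the LinearSolve output shown above.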
The obtained discount factors are unique, monotonically decreasing and in the right order. Having the discount factors computed allows us to obtain all kinds of measures that depend on them:
1. **Zero-coupon rates**
These are the unique rates that pay a single cash flow at maturity and are essentially 'inverted' discount factors:
$zeroRate = \log(1/DF)/T$
zc=Log[1/df]/act;
td3=TemporalData[zc,{act}];
ListLinePlot[td3,PlotLabel->Style["Zero rates",Blue,15],GridLines->{Automatic, Automatic},PlotStyle->{Magenta,Thickness[0.009]}]
![enter image description here][4]
2. **Forward rates**
Forward rates are 'special' in that they are observed today but apply to a period starting in the future. They are market expectations of future short-term rates and therefore important indicators of the future level of interest rates in the economy.
We calculate a series of 3-monthly forward rates from the discount factors as follows:
$forwardRate(t_1, t_2) = (DF(t_1)/DF(t_2) - 1)/g(t_1, t_2)$
where $t_1$ and $t_2$ are times in the future, $DF(t_1)$ and $DF(t_2)$ are the discount factors at those time points, and $g(t_1, t_2)$ is the year fraction between $t_1$ and $t_2$, with $t_2 > t_1$.
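The zero- and forward-rate formulas can be sketched numerically. The discount factors in this Python snippet are made-up values, chosen only to illustrate the upward-sloping case:

```python
import math

# Zero rate: continuously compounded 'inverted' discount factor.
def zero_rate(df, t):
    return math.log(1.0 / df) / t

# Forward rate: simple rate implied between t1 and t2 by the two DFs,
# with g the year fraction between them.
def forward_rate(df1, df2, g):
    return (df1 / df2 - 1.0) / g

df1, df2 = 0.98, 0.95          # hypothetical DF(1y) and DF(2y)
z2 = zero_rate(df2, 2.0)       # 2y zero rate
f12 = forward_rate(df1, df2, 1.0)  # 1y->2y forward rate
# On an upward-sloping curve the forward sits above the zero rate:
assert f12 > z2
```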
In order to compute forward rates, we need to introduce a simple function that finds the next business day. That can easily be done using built-in functional components:
OpenDay[start_,incr_,itype_]:=NestWhile[DatePlus[#,1]&,DatePlus[start,{incr,itype} ],Not[BusinessDayQ[#]]&]
and we can test it
OpenDay[Today,3,"Day"]
and get the correct day - Monday, 23rd July 2018.
To compute forward rates, we first generate forward dates, calculate the time interval, interpolate DF from the DF object and then use the forward rate formula to get the rate:
intdf=Interpolation[td2,Method->"Spline"];
fwdates=Table[OpenDay[Today,i,"Month"],{i,3,60,3}];
dd=DateDifference[fwdates,"Year",DayCountConvention->"Actual360"];
kd=Mean[Differences[dd]][[1]];
fwrates=Table[(intdf[i]/intdf[i+kd]-1)/kd,{i,kd,60 kd, kd}]//Quiet;
td4=TemporalData[fwrates,{Range[kd,60 kd, kd]}];
ListLinePlot[td4,PlotTheme->"Web",PlotLabel->Style["Smooth Forward rates",Blue,15]]
![enter image description here][5]
We have obtained smooth forward rates from the generated discount factors - this is evidence that the discount factors were properly calculated and that the choice of interpolation was appropriate.
When the yield curve is upward sloping, the mathematics of forward rates causes the forward rates to be higher than the zero rates. We can see that this is the case:
ListLinePlot[{td3,td4},PlotTheme->"Web",PlotLabel->Style["Zero and Forward rates",Blue,15], PlotLegends->{"Zero", "Forward"}]
![enter image description here][6]
Real-case example
-----------------
Having explained the yield curve bootstrapping methodology, we can now move to a real-case scenario where we use actual market data to construct the Australian Dollar (AUD) curve. We use deposit rates at the short end and swap rates in the mid-to-long end of the curve, with maturities up to 30 years. The swaps pay out semi-annually.
pmfreq=2;
pildates={OpenDay[Today,1, "Month"],OpenDay[Today,2, "Month"], Table[OpenDay[Today, 3 i, "Month"],{i,4}],OpenDay[Today,18, "Month"], Table[OpenDay[Today, i, "Year"],{i,2,10 }], OpenDay[Today,12, "Year"],OpenDay[Today,15, "Year"],OpenDay[Today,20, "Year"],OpenDay[Today,25, "Year"],OpenDay[Today,30, "Year"]}//Flatten;
pilyrs=DateDifference[pildates ,"Year",DayCountConvention->"Actual365"][[All,1]] ;
crvQ={0.0195,0.0197,0.0201,0.02034,0.02045,0.02056,0.0206,0.0208,0.0212,0.0224,0.0245,0.0259,0.0267,0.0274,0.0279,0.0284,0.0292,0.0305,0.03097,0.0312,0.0323};
tdQ=TemporalData[crvQ,{pilyrs}];
intQ=Interpolation[tdQ ,Method->"Spline"];
crvD=Length[crvQ];
cfdates={Today,OpenDay[Today,1, "Month"],OpenDay[Today,2, "Month"], Table[OpenDay[Today, 3 i, "Month"],{i,4}],Table[OpenDay[Today, (12/pmfreq ) i, "Month"],{i,3,30 pmfreq }]}//Flatten;
cfyrs=Drop[DateDifference[cfdates ,"Year",DayCountConvention->"Actual365"][[All,1]],1];
cfyD=Length[cfyrs ];
intrates=intQ [cfyrs ];
ListLinePlot[intrates,PlotLabel->Style["AUD yield curve term structure",Blue,{15,Bold}],PlotStyle->{Magenta, Thick}]
![enter image description here][7]
We first split the market data into (i) deposits and (ii) swap segments:
depoyrs={Take[ cfyrs ,3],Differences[Take[cfyrs,{3,6}]]}//Flatten;
fraAv=Take[depoyrs,-4]//Mean;
swapData=Inner[{#1,#2}&,Drop[cfyrs,6] ,Drop[intrates ,6],List];
swapyrs={0,cfyrs[[4]],cfyrs[[6]] ,Take[cfyrs,{7,cfyD}]}//Flatten;
swyfrac=Differences[swapyrs ];
swyAv=Mean[swyfrac];
swapD=Length[swapData];
and build the matrix of linear equations that we solve for the discount factors:
zTab1=Table[If[i==j,1+intrates[[j]] If[i<=2,depoyrs [[i]],fraAv ],0],{j,6},{i,cfyD}];
zTab2=Table[If[MemberQ[swapyrs,cfyrs[[i]]],If[cfyrs[[i]]<swapData[[j,1]],swapData[[j,2]] swyAv ,If[cfyrs[[i]]==swapData[[j,1]],1+swapData[[j,2]] swyAv ,0]],0],{j,swapD},{i,1,cfyD}];
zTab0=Join[zTab1,zTab2];
b1Vec=ConstantArray[1,cfyD];
dfact=LinearSolve[zTab0 ,b1Vec ];
tdDf=TemporalData[dfact,{cfyrs}]//Quiet;
ListLinePlot[tdDf ,PlotLabel->Style["Generated Discount factors",Blue,15],PlotStyle->{Thickness[0.01],Purple},PlotRange->{Full,{1,0.3}},InterpolationOrder->2]
![enter image description here][8]
This is the actual AUD yield curve built from derivative instruments that can be used to value various AUD instruments.
Curve smoothing
---------------
Since we use actual market data, the generated discount factors are not as smooth as in our first example. This is particularly visible at the short end of the curve. The kinks exist for several reasons - different segments of the curve are traded by different desks, and equilibrium market rates are obtained through different price-discovery processes. If curve kinks and peaks are undesirable, several techniques exist to smooth them away; we look at *linear filters* that can accomplish this task. The built-in Mathematica **GaussianFilter** is particularly useful, as it provides the required level of control to manipulate 'noisy' data.
For example, applying GaussianFilter with radius $r = 3$ and standard deviation $\sigma = 2$ provides a nice and smooth output for the 'kinky' segment of the curve:
iDF=Interpolation[tdDf,Method->"Spline"];
tdDF2=TemporalData[Map[iDF,Range[0.25,30,0.25]] ,{Range[0.25,30,0.25]}];
orD=iDF/@Range[0.25,30,0.25];
flD=GaussianFilter[orD,{3,2}];
ListLinePlot[{Take[orD ,10],Take[flD,10]},PlotLabel->Style["Curve correction with Gaussian filter",Blue,15],PlotLegends->{"Original", "Filtered"},PlotStyle->{Thickness[0.008],Thickness[0.008]},InterpolationOrder->3]
![enter image description here][9]
The filtered output produces a decent curve with a smoother short end that, if needed, can be used to construct all types of derivatives with appropriate rates. Note that the filter can be applied to (i) discount factors and/or (ii) forward rates, depending on the objective. Generally, the larger the radius, the smoother the output.
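For intuition, a hand-rolled version of the filter can be sketched in plain Python. This truncated-kernel, clamped-edge implementation only approximates the built-in **GaussianFilter**, and the input values are made up:

```python
import math

# Smooth a 1-D series with a truncated Gaussian kernel of radius r and
# standard deviation sigma, clamping indices at the edges.
def gaussian_filter(xs, r, sigma):
    kernel = [math.exp(-(k * k) / (2.0 * sigma * sigma)) for k in range(-r, r + 1)]
    s = sum(kernel)
    kernel = [w / s for w in kernel]          # normalise weights to sum to 1
    out = []
    for i in range(len(xs)):
        acc = 0.0
        for k in range(-r, r + 1):
            j = min(max(i + k, 0), len(xs) - 1)  # clamp at the boundaries
            acc += kernel[k + r] * xs[j]
        out.append(acc)
    return out

kinky = [1.0, 0.99, 0.93, 0.96, 0.90, 0.91, 0.86, 0.87]  # illustrative DFs
smooth = gaussian_filter(kinky, 3, 2.0)
```

Each output point is a convex combination of the inputs, so the smoothed values always stay within the range of the original series while the local oscillations are damped.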
Conclusion
----------
The use of a linear solver in curve bootstrapping leads to fast and accurate curve construction with unique discount factors. The method can easily be applied in any environment where the term structure of market data is duly defined. We have also demonstrated the use of a filtering technique for curve smoothing with built-in Mathematica filters, which provide an excellent tool for output manipulation if curve kinks become an issue.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=G10.png&userId=387433
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=G1.png&userId=387433
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=G2.png&userId=387433
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=G3.png&userId=387433
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=G4.png&userId=387433
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=G5.png&userId=387433
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=G6.png&userId=387433
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=G7.png&userId=387433
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=G9.png&userId=387433
Igor Hlivka
2018-07-18T14:58:28Z

Can you index an arbitrary array in NDSolve to simulate white noise?
http://community.wolfram.com/groups/-/m/t/1387699
I'm trying to introduce a randomized element into a simulation, using NDSolve to do the integration. The random element is supposed to simulate a random thermal force. I have tried to implement this by creating an array of random values and then selecting values from the array during each time step. The following is a snippet of my current code.
(*Initialising some time values that will allow me to select particular values from my random array*)
Tfinal = 10;
SampleFreq = 2500;
NoT = Tfinal*SampleFreq + 1;
dt = Tfinal/(NoT - 1);
(*Array of random values and the function I'll be using in NDSolve*)
WN = RandomVariate[NormalDistribution[0, 1], NoT];
WNoise[t_] := Indexed[WN, Floor[Floor[t/dt]] + 1]
(*NDSolve function using white noise function*)
sol = x[t] /.
NDSolve[{D[x[t], t, t] == -(w^2) x[t] + WNoise[t], x[0] == 0,
x'[0] == 2 }, x, {t, 0, 10}]
The function above (without white noise) is the classic **F = -kx** equation. I've tried to add a white-noise driving force to it as well. Unfortunately, it doesn't work when the WNoise function is added. The error messages I typically get are the following:
Indexed::ind: The index 1/2 is not a nonzero integer.
NDSolve::ndnum: Encountered non-numerical value for a derivative at t == 0.`.
I'm just wondering if anyone has faced a similar problem, and whether this method of introducing white noise is possible or I'm just being silly with this implementation.
Thank you for your help.
Sandip Roy
2018-07-19T20:18:37Z

FindFit for a specific line from a ContourPlot?
http://community.wolfram.com/groups/-/m/t/1374074
Hi!
I have been trying to find an expression u(r) for a specific line of a ContourPlot (the one that looks like ln(x) in the last image of the notebook).
Is there a way to:
Extract data from a ContourPlot line that is not one-to-one (meaning that for one x there are at least two y satisfying the equation)?
Use FindFit on a contour plot?
I have been trying hard and searching everywhere on Google with no success. Any help is appreciated, since I'm on a timetable.
VASILEIOS MPISKETZIS
2018-07-10T19:23:45Z

Unexplained behavior of N function displayed precision
http://community.wolfram.com/groups/-/m/t/1387524
I noticed an anomaly when using different ways to calculate square roots. For good or bad, over the years I have fallen into the habit of calculating square roots by raising a number to the power .5. Doing this in Mathematica has caused an unexplained behavior.
If I use Sqrt[5] or 5^(1/2), I get the same result and displayed precision.
If I use 5^.5, I get the same result but less displayed precision.
See the example below. For phi1 and phi2 I can see 20 decimal places. For phi3 I can see at most 5 decimal places. However, phi1 - phi3 is zero, so I am getting the same result.
Why am I only seeing 5 decimal places for phi3?
ClearAll
$Version
phi1 = (1 + Sqrt[5])/2;
N[phi1, 20] (* <——— Shows 20 decimal places*)
phi2 = (1 + 5^(1/2))/2;
N[phi2, 20] (* <——— Shows 20 decimal places*)
phi3 = (1 + 5^(.5))/2;
N[phi3, 20] (* <——— Shows only 5 decimal places*)
phi1-phi3
ClearAll
"11.3.0 for Mac OS X x86 (64-bit) (March 7, 2018)"
1.6180339887498948482
1.6180339887498948482
1.61803
0.
Albert Cattani
2018-07-19T17:12:36Z

[WSC18] Phone Gesture Recognition with Machine Learning
http://community.wolfram.com/groups/-/m/t/1386392
# Accelerometer-based Gesture Recognition with Machine Learning
![A GIF of gesture recognition in progress][1]
## Introduction
This is Part 2 of a 2-part community post - Part 1 (Streaming Live Phone Sensor Data to the Wolfram Language) is available here: http://community.wolfram.com/groups/-/m/t/1386358
As technology advances, we are constantly seeking new, more intuitive methods of interfacing with our devices and the digital world; one such method is gesture recognition. Although touchless human interface devices (kinetic user interfaces) exist and are in development, the cost and configuration required for these sometimes makes them impractical, particularly for mobile applications. A simpler method would be to use devices the user already has on their person - such as a phone or a smartwatch - to detect basic gestures, taking advantage of the wide array of sensors included in such devices. In an attempt to assess the feasibility of such a method, methods of asynchronous communication between a mobile device and the Wolfram Language are investigated, and a gesture recognition system based around an accelerometer sensor is implemented, using a machine learning model to classify a few simple gestures from mobile accelerometer data.
## Investigating Methods of Asynchronous Communication
To implement an accelerometer-based gesture recognition system, we must devise a suitable means for a mobile device to transmit accelerometer data to a computer running the Wolfram Language (WL). On a high level, the WL has baked-in support for a variety of devices - specifically the Raspberry Pi, Vernier Go!Link compatible sensors, Arduino microcontrollers, webcams and devices using the RS-232 or RS-422 serial protocol (http://reference.wolfram.com/language/guide/UsingConnectedDevices.html); unfortunately, there is no easy way to access sensor data from Android or iOS mobile devices.
On a low level, the WL natively supports TCP and ZMQ socket functionality, as well as receipt and transmission of HTTP requests and Pub-Sub channel communication. We investigate the feasibility of both methods for transmission of accelerometer data in Part 1 of this community post (http://community.wolfram.com/groups/-/m/t/1386358).
## Gesture Classification using Neural Networks
Now that we are able to stream accelerometer data to the WL, we may proceed to implement gesture recognition / classification. Due to limited time at camp, we used the UDP socket method to do this - in the future, we hope to move the system over to the (more user-friendly) channel interface.
We first configure the sensor stream, allowing live accelerometer data to be sent to the Wolfram Language:
### Configure the Sensor Stream
1. Install the "Sensorstream IMU+GPS" app ([https://play.google.com/store/apps/details?id=de.lorenz_fenster.sensorstreamgps][2])
2. Ensure the sensors you want to stream to Wolfram are ticked on the 'Toggle Sensors' page. (If you want to stream other sensors besides 'Accelerometer', 'Gyroscope' and 'Magnetic Field', ensure the 'Include User-Checked Sensor Data in Stream' box is ticked. Beware, though - the more sensors are ticked, the more latency the sensor stream will have.)
3. On the "Preferences" tab:
a. Change the target IP address in the app to the IP address of your computer (ensure your computer and phone are connected to the same local network)
b. Set the target port to 5555
c. Set the sensor update frequency to 'Fastest'
d. Select the 'UDP stream' radio box
e. Tick 'Run in background'
4. Switch stream ON **before** executing code. (nb. ensure your phone does not fall asleep during streaming - perhaps use the 'Caffeinate' app ([https://play.google.com/store/apps/details?id=xyz.omnicron.caffeinate&hl=en_US][3]) to ensure this.)
5. Execute the following WL code:
(in part from http://community.wolfram.com/groups/-/m/t/344278)
QuitJava[];
Needs["JLink`"];
InstallJava[];
udpSocket=JavaNew["java.net.DatagramSocket",5555];
readSocket[sock_,size_]:=JavaBlock@Block[{datagramPacket=JavaNew["java.net.DatagramPacket",Table[0,size],size]},sock@receive[datagramPacket];
datagramPacket@getData[]];
listen[]:=record=DeleteCases[readSocket[udpSocket,1200],0]//FromCharacterCode//Sow;
results={};
RunScheduledTask[AppendTo[results,Quiet[Reap[listen[]]]];If[Length[results]>700,results=Drop[results,150]],0.01];
stream:=Refresh[ToExpression[StringSplit[#[[1]],","]]& /@ Select[results[[-500;;]],Head[#]==List&],UpdateInterval-> 0.01]
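Outside the JLink setup, the receiving side is just a UDP socket plus comma splitting, as the `StringSplit` call above suggests. A hedged Python sketch follows; the sample payload layout is an assumption for illustration, not the app's documented format:

```python
import socket

# Parse one comma-separated sensor datagram into a list of floats,
# mirroring the StringSplit[#, ","] step in the Wolfram code above.
def parse_packet(raw):
    return [float(tok) for tok in raw.decode("ascii").split(",")]

# Receive a few datagrams on the same port (5555) and buffer size (1200)
# used by the JLink socket above.
def receive_loop(port=5555, max_packets=1):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    for _ in range(max_packets):
        data, _addr = sock.recvfrom(1200)
        yield parse_packet(data)

values = parse_packet(b"123.45, 3, 0.012, 9.806, 0.034")  # illustrative payload
```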
### Detecting Gestures
On a technical level, the problem of gesture classification is as follows: given a continuous stream of accelerometer data (or similar),
1. distinguish periods during which the user is performing a given gesture from other activities / noise and
2. identify / classify a particular gesture based on accelerometer data during that period.

This essentially boils down to classification of a time series dataset, in which we can observe a series of emissions (accelerometer data) but not the states generating the emissions (gestures).
A relatively straightforward solution to (1) is to approximate the gradients of the moving averages of the x, y and z values of the data and take the Euclidean norm of these - whenever these increase above a threshold, a gesture has been made.
movingAvg[start_,end_,index_]:=Total[stream[[start;;end,index]]]/(end-start+1);
^ Takes the average of the x, y or z values of data (specified by *index* - x-->3, y-->4, z-->5) from the index *start* to the index *end*.
normAvg[start_,middle_,end_]:=((movingAvg[middle,end,3]-movingAvg[start,middle,3])/(middle-start))^2+((movingAvg[middle,end,4]-movingAvg[start,middle,4])/(middle-start))^2+((movingAvg[middle,end,5]-movingAvg[start,middle,5])/(middle-start))^2;
^ (Assuming difference from start to middle is equal to difference from middle to end:) Approximates the gradient at index *middle* using the average from *start* to *middle* and from *middle* to *end* for x, y and z values, and then takes the sum of the squares of these values. Note that we do not need to take the square root of the final answer (to find the Euclidean norm), as doing so and comparing it to some threshold *x* would be equivalent to not doing so and comparing it to the threshold *x^2* (square root is a computationally expensive operation).
Thus
Dynamic[normAvg[-155,-150,-146]]
will yield the square of the Euclidean norm of the gradients of the x, y and z values of the data (approximated by calculating the averages from the 155th most recent to 150th most recent and 150th most recent to 146th most recent values). As accelerometer data is sent to the Wolfram Language, this value will update.
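The *movingAvg* / *normAvg* logic can be sketched language-agnostically. This Python version mirrors the idea on a toy stream whose rows hold x, y, z readings at indices 0-2 (the Wolfram code above reads indices 3-5 of each packet); the stream data is made up:

```python
# Mean of one axis of the stream over the half-open window [start, end).
def moving_avg(stream, start, end, axis):
    window = [row[axis] for row in stream[start:end]]
    return sum(window) / len(window)

# Approximate the gradient at 'middle' per axis from the two window means,
# then sum the squares - the squared Euclidean norm, so no sqrt is needed
# before comparing against a (squared) threshold.
def norm_avg_sq(stream, start, middle, end):
    total = 0.0
    for axis in range(3):
        grad = (moving_avg(stream, middle, end, axis)
                - moving_avg(stream, start, middle, axis)) / (middle - start)
        total += grad * grad
    return total

flat = [[0.0, 0.0, 0.0]] * 10                          # resting phone
jolt = [[0.0, 0.0, 0.0]] * 5 + [[3.0, 1.0, 2.0]] * 5   # sudden movement
assert norm_avg_sq(flat, 0, 5, 10) == 0.0
assert norm_avg_sq(jolt, 0, 5, 10) > 0.1               # crosses a threshold
```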
### Data Collection
To train the network, we must collect gesture data. To do this, we have a variety of options - we can either represent the gesture as a tensor of 3 dimensional vectors (x,y,z accelerometer data points) and perform time series classification on these sequences of vectors using hidden Markov models or recurrent neural networks, or we can represent the gesture as a rasterised image of a graph much like the one below:
![Rasterised image of a gesture][4]
and perform image classification on the image of the graph.
Since the latter has had some degree of success (e.g. in http://community.wolfram.com/groups/-/m/t/1142260), we attempt a similar method:
PrepareDataIMU[dat_]:=Rasterize@ListLinePlot[{dat[[All,1]],dat[[All,2]],dat[[All,3]]},PlotRange->All,Axes->None,AxesLabel->None,PlotStyle->{Red, Green, Blue}];
^ Plots the data points in *dat* with no axes or axis labels, and with x coordinates in red, y coordinates in green, z coordinates in blue (this makes processing easier as Wolfram operates in RGB colours).
threshold = 0.8;
trainlist={};
appendToSample[n_,step_,x_]:=AppendTo [trainlist,PrepareDataIMU[Part[x,n;;step]]];
Dynamic[If[normAvg[-155,-150,-146]>threshold,appendToSample[-210,-70,stream],False],UpdateInterval->0.1]
^ Every 0.1 seconds, checks whether or not the normed average of the gradient of accelerometer data at the 150th most recent data point (using the *normAvg* function) is greater than the threshold - if it is, it will create a rasterised image of a graph of accelerometer data from the 210th most recent data point to the 70th most recent data point and append it to *trainlist* - a list of graphs of gestures. Patience is recommended here - there can be up to ~5 seconds' lag before a gesture appears. Ensure gestures are made reasonably vigorously.
As a first test, we attempted to generate 30 samples of each of the digits 1 to 5 drawn in the air with a phone; the images of these graphs were stored in *trainlist*. Then we labelled them as 1, 2, 3, 4 or 5, converting *trainlist* into an association with key <image of graph> and value <number>.
We split the data into training data (25 samples from each category) and test data (all remaining data):
TrainingTest[data_,number_]:=Module[{maxindex,sets,trainingdata,testdata},
maxindex=Max[Values[data]];
sets = Table[Select[data,#[[2]]==x&],{x,1,maxindex}];
sets=Map[RandomSample,sets];
trainingdata =Flatten[Map[#[[1;;number]]&,sets]];
testdata =Flatten[Map[#[[number+1;;-1]]&,sets]];
Return[{trainingdata,testdata}]
]
^ Randomly selects *number* training elements and *Length[data]-number* test elements for each value in the list *data*
gestTrainingTest=TrainingTest[gesture1to5w30,25];
gestTraining=gestTrainingTest[[1]];
gestTest=gestTrainingTest[[2]];
*gestTraining* and *gestTest* now contain key-value pairs like those below:
![Key-value pairs of accelerometer graphs and labels][5]
##### Machine Learning
To train a model on these images, we first attempt a basic *Classify*:
TrainWithClassify = Classify[gestTraining]
ClassifierInformation[TrainWithClassify]
![A poor result in ClassifierInformation][6]
Evidently, this gives a very poor training accuracy of 24.4% - given that there are 5 classes, this is only marginally better than random chance (20%).
As the input data consists of images, we try transfer learning on an image identification neural network (specifically the VGG-16 network):
net = NetModel["VGG-16 Trained on ImageNet Competition Data"]
We remove the last few layers from the network (which classify images into the classes the network was trained on), leaving the earlier layers which perform more general image feature extraction:
featureFunction = Take[net,{1,"fc6"}]
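As a quick sanity check (a sketch, not in the original post), the truncated network should now return a feature vector rather than a class label:

```mathematica
(* Apply the feature extractor to one training image; the result should be a
   plain numeric vector (the fc6 layer of VGG-16 has 4096 units). *)
features = featureFunction[First[Keys[gestTraining]]];
Dimensions[features]
```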
We train a classifier using this neural net as a feature extractor:
NetGestClassifier = Classify[gestTraining,FeatureExtractor->featureFunction]
We now test the classifier using the data in *gestTest:*
NetGestTest = ClassifierMeasurements[NetGestClassifier,gestTest]
We check the training accuracy:
ClassifierInformation[NetGestClassifier]
![A better classifier information result][7]
NetGestTest["Accuracy"]
1.
NetGestTest["ConfusionMatrixPlot"]
![A good confusion matrix plot][8]
This method appears to be promising, as a training accuracy of 93.1% and a test accuracy of 100% were achieved.
## Implementation of Machine Learning Model
To use the model with live data, we use the same method of identifying gestures as before in 'Detecting Gestures' (detecting 'spikes' in the data using moving averages), but when a gesture is identified, instead of being appended to a list it is sent through the classifier:
results = {""};
ClassGestIMU[n_,step_,x_]:=Module[{aa,xa,ya},
aa = Part[x,n;;step];
xa=PrepareDataIMU[aa];
ya=NetGestClassifier[xa];
AppendTo[results,{Length@aa,xa,ya}]
];
Dynamic[If[normAvg[-155,-150,-146]>threshold,ClassGestIMU[-210,-70,stream],False],UpdateInterval->0.1]
Real time results (albeit with significant lag) can be seen by running
Dynamic@results[[-1]]
## Conclusions and Further Work
On the whole, this project was successful, with gesture detection and classification using rasterised graph images proving a viable method. However, the system as-is is impractical and unreliable, with a significant lag, training bias (trained to recognise the digits 1 to 5 the way they are drawn by one person only) and small sample size: these are problems that can be solved, given more time.
Further extensions to this project include:
- Serious code optimisation to reduce / eliminate lag
- An improved training interface to allow users to create their own gestures
- Integration of the gesture classification system with the Channel interface (as described earlier) and deployment of this to the cloud.
- Investigation of the feasibility of using an RNN or an LSTM for gesture classification - using a time series of raw gesture data rather than relying on rasterised images (which, although accurate, can be quite laggy). Alternatively, hidden Markov models could be used in an attempt to recover the states (gestures) that generate the observed data (accelerometer readings).
- Adding an API to trigger actions based on gestures and deployment of gesture recognition technology as a native app on smartphones / smartwatches.
- Improvement of gesture detection. At the moment, the code takes a predefined 1-2 second 'window' of data after a spike is detected - an improvement would be to detect when the gesture has ended and 'crop' the data values to include only the gesture made.
- Exploration of other applications of gesture recognition (e.g. walking style, safety, sign language recognition). Beyond limited UI navigation, a similar concept to what is currently implemented could be used with, say, a phone kept in the pocket, to analyse walking styles / gaits and, for instance, to predict or detect an elderly user falling and notify emergency services. Alternatively, with suitable methods for detecting finger motions, like flex sensors, such a system could be trained to recognise / transcribe sign language.
- Just for fun - training the model on Harry Potter-esque spell gestures, to use your phone as a wand...
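As an illustration of the RNN/LSTM idea above (a sketch only: the layer sizes and the variable `trainingSequences`, a list of rules from n x 3 accelerometer sample lists to class labels, are assumptions and do not appear in the original code):

```mathematica
(* Sketch: a recurrent classifier over variable-length sequences of 3-vectors. *)
gestureNet = NetChain[{
    GatedRecurrentLayer[32],   (* summarise the time series *)
    SequenceLastLayer[],       (* keep only the final hidden state *)
    LinearLayer[5],            (* one output per gesture class *)
    SoftmaxLayer[]
  },
  "Input" -> {"Varying", 3},
  "Output" -> NetDecoder[{"Class", Range[5]}]
];
(* trainedNet = NetTrain[gestureNet, trainingSequences] *)
```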
A notebook version of this post is attached, along with a full version of the computational essay.
## Acknowledgements
We thank the mentors at the 2018 Wolfram High School Summer Camp - Andrea Griffin, Chip Hurst, Rick Hennigan, Michael Kaminsky, Robert Morris, Katie Orenstein, Christian Pasquel, Dariia Porechna and Douglas Smith - for their help and support during this project.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SensorDemo2.gif&userId=1371970
[2]: https://play.google.com/store/apps/details?id=de.lorenz_fenster.sensorstreamgps
[3]: https://play.google.com/store/apps/details?id=xyz.omnicron.caffeinate&hl=en_US
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=109724.png&userId=1371970
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12475.png&userId=1371970
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19276.png&userId=1371970
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=77697.png&userId=1371970
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=96898.png&userId=1371970Euan Ong2018-07-17T21:16:07ZLocalization of variables in For loop
http://community.wolfram.com/groups/-/m/t/1387329
Version: Mathematica 11.3.0.0
Platform: Windows 64
If I am not mistaken, there is a bug in the For loop implementation.
If I am wrong, please correct me.
It looks like the loop variable is not localized properly.
Consider:
`test[n_] := Module[{},
If[n > 2, Return[]];
Do[Print[{i,n}];test[n+1],{i,2}]
];
test[1] `
This works o.k., the output is: {1,1} {1,2} {2,2} {2,1} {1,2} {2,2}
Now consider:
`test[n_] := Module[{},
If[n > 2, Return[]];
For[i = 1, i <= 2, i++, Print[{i, n}]; test[n + 1]]
];
test[1]`
Now the output is wrong: {1,1} {1,2} {2,2}
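This is consistent with the loop variable `i` being a global symbol rather than one local to `For` (unlike `Do`, `For` does not localize its variable); a minimal check, as a sketch to run in a fresh kernel, is to inspect `i` after the call:

```mathematica
(* After running the For-loop version of test[1], the global symbol i still
   holds the value left by the recursion: *)
test[1];
i  (* a leftover value greater than the loop bound 2, confirming no localization *)
```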
However, if one localizes the loop variable i in the module, it again works o.k.:
`test[n_] := Module[{i},
If[n > 2, Return[]];
For[i = 1, i <= 2, i++, Print[{i, n}]; test[n + 1]]
];
test[1]`
Now the output is again: {1,1} {1,2} {2,2} {2,1} {1,2} {2,2}Daniel Huber2018-07-19T10:07:23ZNew Mac and Mathematica
http://community.wolfram.com/groups/-/m/t/1387191
Does Mathematica use the six cores of the new MacBook Pro 15?André Dauphiné2018-07-19T09:25:01ZHow to show a simple graphic in slide presentation?
http://community.wolfram.com/groups/-/m/t/1387295
Newbie question here: I'm using the new presenter mode in 11.3. I want a slide to show a simple graphic, which I am importing using Import[]. I can get the image to show fine, but I don't want the input cell displayed during the presentation. How do I do this? SideCode hides the code but seems to require a separate step of evaluation during the presentation.
Bottom line: I want to click, see a graphic, and move on.
ThanksKevin Gue2018-07-19T13:19:06ZHow do I deploy part of a notebook to a web page?
http://community.wolfram.com/groups/-/m/t/1386969
I am a beginner and have subscribed to Mathematica Online. I would like to deploy part of a notebook to a web page but it seems that the menu option "Publish" only deploys an entire notebook without an option to select only part of the notebook.
Any help appreciatedLawrence Berry2018-07-18T16:02:14ZA quick question about PaneSelector
http://community.wolfram.com/groups/-/m/t/1387320
PaneSelector looks like a pretty idiomatic way to toggle displayed content e.g.:
PaneSelector[
{True -> progressBar, False -> button}
, Dynamic @ processing
]
Analogous If version:
Dynamic[ If[processing, progressBar, button] ]
The difference is that PaneSelector's content is converted to boxes up front, so each switch can be faster than Dynamic, whose contents are sent for typesetting every time it refreshes. In short, for PaneSelector only the value of processing makes the round trip, while for Dynamic[If[...]] the full content does.
Sounds good, does not work. I regret each time I use it.
Problem 1
DynamicModule[{processing = False, longProcedure},
longProcedure[] := Pause[2];
With[{
progressBar = ProgressIndicator[Appearance -> "Indeterminate"],
button = Button["Run", processing = True; longProcedure[]; processing = False, Method -> "Queued"],
buttonAsync = Button["Run async", processing = True; RunScheduledTask[longProcedure[], {.1}, "EpilogFunction" :> (processing = False)], Method -> "Queued"]
},
Grid[{
{"processing:", Style[Dynamic @ {processing}, Bold], SpanFromLeft},
{"PS:", PaneSelector[{True -> progressBar, False -> button}, Dynamic@processing, ImageSize -> All]},
{"If:", Dynamic[If[TrueQ@processing, progressBar, button]]},
{},
{"PS:", PaneSelector[{True -> progressBar, False -> buttonAsync}, Dynamic@processing, ImageSize -> All]},
{"If:", Dynamic[If[TrueQ@processing, progressBar, buttonAsync]]},
{},
{"Just toggle:", Button["toggle", processing = ! processing;, Method -> "Queued"]},
{"Toggle and pause:", Button["toggle2", processing = ! processing; Pause[2];, Method -> "Queued"]}
}, Alignment -> Left]
]
]
enter image description here
Pressing Run affects only If based solutions and not PS.
Pressing Run async shows both progress bars but when it is done only the If based solution shows the button back.
Simple toggle works as expected
Toggle and pause triggers PS based solution only after Pause is finished.
Using FinishDynamic is overkill here.
Problem 2
It does undocumented caching, so you need to inject things like random numbers into your view:
https://mathematica.stackexchange.com/q/166011/5478
https://mathematica.stackexchange.com/q/140739/5478
Do you fancy debugging by inserting RandomReal[] here and there? I do not.
Problem 3
Since it usually has PaneSelector[{__}, Dynamic[_Symbol]] form, you don't know when this [tag:bug] will hit you:
https://mathematica.stackexchange.com/q/100828/5478
Question
Did I miss the point of this function? What are the basic rules to follow to avoid problems like this? Or is it better to just ignore this function, which is what I would suggest?KadenAra CVBV2018-07-19T07:02:46Z[WSC2018] Relativistic Star Field Demonstration
http://community.wolfram.com/groups/-/m/t/1387351
##Introduction##
The purpose of this project was to demonstrate the effects of Special Relativity when one travels at a velocity close to the speed of light (around 99.5% of it). Using mathematics such as the Lorentz transformations and vector calculus, an accurate depiction was developed of time dilation and length contraction in a three-dimensional star field. An equation was also developed to express the Doppler effect, which represents the change in frequency of light waves as seen by the viewer and also exhibits the specific effects of time dilation and length contraction.
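For reference (not the project's exact formula, which appears in the screenshots), the standard longitudinal relativistic Doppler relation is:

```latex
% Longitudinal relativistic Doppler shift (source receding from the observer):
f_{\mathrm{obs}} = f_{\mathrm{src}} \sqrt{\frac{1-\beta}{1+\beta}}, \qquad \beta = \frac{v}{c}
% For an approaching source the ratio inverts, giving a blue shift.
```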
##Visualizing the Demonstration##
Several sources, such as Einstein's On the Electrodynamics of Moving Bodies, were consulted in order to gather the information required to correctly represent Special Relativity. Though the research was never fully complete, a sufficient basis was developed to begin coding the first steps of the project. Beginning with a simple star field, this initial code provided a base on which to build the final demonstration.
![enter image description here][1]
After coding the first star field, a simple proportionality was developed that proved useful in programming the later demonstrations.
![enter image description here][2]
Next, a function was developed for the Doppler effect. It was used in a Manipulate of a sphere situated on x, y, and z axes, with two sliders for change in y position and change in velocity, and it allowed the sphere to shift from red to blue according to the Doppler equation.
![Initial equation for the Doppler effect][3]
![enter image description here][4]
With a function for the Doppler shift written, it was adjusted to work for multiple points - in this case, multiple stars - and was then applied to an adapted version of the original star field code. This was not yet the final product: when the change of color was initially applied, it did not follow the proportionality developed earlier, and therefore the velocity and time were not relative.
![enter image description here][5]
![enter image description here][6]
For the final demonstration, the Doppler shift was modified so that stars began blue-shifted but, after passing the viewer (represented by a cuboid), became red-shifted. The same rule applied to stars traveling laterally. Three buttons were created representing the front, back, and side, and an example of the result is shown below:
![enter image description here][7]
##Conclusion##
When finished, this demonstration successfully illustrated the effects of Special Relativity when traveling at 99.5% of the speed of light through the Lorentz transformations, time dilation, and length contraction. Thus, it allows us to view Special Relativity in action if, one day, we could travel through a star field at a velocity close to that of the speed of light. In the future, the "Searchlight Effect", otherwise known as "Relativistic Aberration", would be added to improve accuracy and further explore the realms and impacts of Special Relativity.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at11.29.18AM.png&userId=1381443
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at1.12.04PM.png&userId=1381443
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at1.22.30PM.png&userId=1381443
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at1.22.37PM.png&userId=1381443
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at1.23.22PM.png&userId=1381443
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at1.23.16PM.png&userId=1381443
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-19at9.24.25AM.png&userId=1381443Macy Maurer Levin2018-07-19T13:26:42ZUsing an Association to contain different equations within a Module
http://community.wolfram.com/groups/-/m/t/1384899
After much head scratching and experimenting, I've solved a particular problem - but I can't see why my first attempts did not work. So there's clearly something I'm not understanding here. Simplified code below:
myFuncA[xAssoc_] := Module[{},
tmpFuncA[x_] = xAssoc[a];
tmpFuncA[10]
]
myFuncB[xAssoc_] := Module[{},
tmpFuncStr = xAssoc[a];
tmpFuncB[x_] = tmpFuncStr;
tmpFuncB[10]
]
myFuncC[xAssoc_] := Module[{tmpFuncStr},
tmpFuncStr = xAssoc[a];
tmpFuncC[x_] = tmpFuncStr;
tmpFuncC[10]
]
myAssoc = <|a -> 1 + x, b -> 2 + x|>;
func[x_] = myAssoc[a];
func[10]
myFuncA[myAssoc]
myFuncB[myAssoc]
myFuncC[myAssoc]
I have an association with 2 simple equations in x. I can assign one of these 'a' to a function and then pass a value to it - func[10]. The output is as expected - 11.
However, I wanted this to be used inside another function, hence myFuncA. This contains essentially the same code; however, the output is now 1+x.
To get the correct result, I had to use an intermediate variable as in myFuncB, which gives the correct output of 11.
But being a mindful programmer, I like to keep the scope of temporary variables as limited as possible, so I wrote myFuncC. The output from this is again 1+x.
myFuncB works but myFuncA does not.
And myFuncC breaks it again.
What am I missing here? There must be something about variable scope I am not getting. Can anyone enlighten me perhaps.
[Edit: Mathematica 11.2 on Raspberry Pi]Paul Scanlon2018-07-16T18:39:07ZMathematica Paclet/Package Development Tutorial
http://community.wolfram.com/groups/-/m/t/1385719
Figured I'd put it out here that I'm working on/have partially developed a tutorial on packages and paclets in Mathematica as part of a broader tutorial I'm writing/have written.
You can check it out [here](https://www.wolframcloud.com/objects/b3m2a1/tutorial/package-usage-and-development/basics/packages-in-mathematica.html):
[![tut][1]](https://www.wolframcloud.com/objects/b3m2a1/tutorial/package-usage-and-development/basics/packages-in-mathematica.html)
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2791bah.png&userId=1186441b3m2a1 2018-07-17T10:16:59ZMonday morning quiz: Do you know your Wolfram L vocabulary?
http://community.wolfram.com/groups/-/m/t/1387403
The Wolfram language tries to make distinctions where John Doe wouldn't see the need/distinction. The result is a wealth of similar(?) vocab items which can be confusing to [beginners][1].
**Quiz question:** What do the following vocab items have in common, and why does *one* of them not belong in that list?
- [Blank\[ \]][2]
- [Cancel\[ \]][3]
- [Chop\[ \]][4]
- [Clear\[ \]][5]
- [Close\[ \]][6]
- Closed
- [Closing\[ \]][7]
- [Deletable][8]
- [Delete\[ \]][9]
- [Eliminate\[ \]][10]
- Empty
- [Except\[ \]][11]
- [Exclusions][12]
- [Exit\[ \]][13]
- Fail
- [Failure\[ \]][14]
- [False][15]
- Gone
- [Inactivate\[ \]][16]
- [Inactive\[ \]][17]
- [Indeterminate][18]
- [Interrupt\[ \]][19]
- [Invisible\[ \]][20]
- [Less][21]
- [Locked][22]
- [Missing\[ \]][23]
- [Negative\[ \]][24]
- [None][25]
- [Not][26]
- [Null][27]
- [Off\[ \]][28]
- [Placeholder\[ \]][29]
- [Quit\[ \]][30]
- [Remove\[ \]][31]
- Removed
- [Residue\[ \]][32]
- [Temporary][33]
- [Tiny][34]
- [Transparent][35]
- [Undefined][36]
- [Unique\[ \]][37]
- [Verbatim\[ \]][38]
The quiz question may be meh, but I hope you find the list as intriguing as I do. Did you know all the items (aka passive vocabulary)? And which ones have you ever used in your own code (aka active vocabulary)? I am new to the language, so I am still figuring out which ones to use when in my future programs.
[1]: http://community.wolfram.com/groups/-/m/t/1070946
[2]: http://reference.wolfram.com/language/ref/Blank.html
[3]: http://reference.wolfram.com/language/ref/Cancel.html
[4]: http://reference.wolfram.com/language/ref/Chop.html
[5]: http://reference.wolfram.com/language/ref/Clear.html
[6]: http://reference.wolfram.com/language/ref/Close.html
[7]: http://reference.wolfram.com/language/ref/Closing.html
[8]: http://reference.wolfram.com/language/ref/Deletable.html
[9]: http://reference.wolfram.com/language/ref/Delete.html
[10]: http://reference.wolfram.com/language/ref/Eliminate.html
[11]: http://reference.wolfram.com/language/ref/Except.html
[12]: http://reference.wolfram.com/language/ref/Exclusions.html
[13]: http://reference.wolfram.com/language/ref/Exit.html
[14]: http://reference.wolfram.com/language/ref/Failure.html
[15]: http://reference.wolfram.com/language/ref/False.html
[16]: http://reference.wolfram.com/language/ref/Inactivate.html
[17]: http://reference.wolfram.com/language/ref/Inactive.html
[18]: http://reference.wolfram.com/language/ref/Indeterminate.html
[19]: http://reference.wolfram.com/language/ref/Interrupt.html
[20]: http://reference.wolfram.com/language/ref/Invisible.html
[21]: http://reference.wolfram.com/language/ref/Less.html
[22]: http://reference.wolfram.com/language/ref/Locked.html
[23]: http://reference.wolfram.com/language/ref/Missing.html
[24]: http://reference.wolfram.com/language/ref/Negative.html
[25]: http://reference.wolfram.com/language/ref/None.html
[26]: http://reference.wolfram.com/language/ref/Not.html
[27]: http://reference.wolfram.com/language/ref/Null.html
[28]: http://reference.wolfram.com/language/ref/Off.html
[29]: http://reference.wolfram.com/language/ref/Placeholder.html
[30]: http://reference.wolfram.com/language/ref/Quit.html
[31]: http://reference.wolfram.com/language/ref/Remove.html
[32]: http://reference.wolfram.com/language/ref/Residue.html
[33]: http://reference.wolfram.com/language/ref/Temporary.html
[34]: http://reference.wolfram.com/language/ref/Tiny.html
[35]: http://reference.wolfram.com/language/ref/Transparent.html
[36]: http://reference.wolfram.com/language/ref/Undefined.html
[37]: http://reference.wolfram.com/language/ref/Unique.html
[38]: http://reference.wolfram.com/language/ref/Verbatim.htmlRaspi Rascal2018-07-19T09:18:29ZEquation numbering by chapters and sections
http://community.wolfram.com/groups/-/m/t/1384133
Hi,
I would like to write a report which includes chapters, sections, and subsections.
So, How do I number equations by chapter/section in Mathematica notebooks?
For example, this equation is the 4th equation in chapter 2 section 1 :
x+y=1 (2.1.4)
Kind regards,
OmarOmar Alsuhaimi2018-07-14T18:42:39ZPercentile of beta distribution
http://community.wolfram.com/groups/-/m/t/1386231
Hello. I want to find a percentile of the beta distribution, and I should get an answer like 0.0023.
Clear["Global`*"]
a = 0.848;
b = 117.46;
NSolve[PDF[BetaDistribution[a, b], p] == 0.05 && 1 >= p >= 0, p]
But it takes infinite time to be solved, so what should I correct in the code?Alex Graham2018-07-17T18:39:07ZGraphics3d: how to make object glow inside semitransparent cylinder?
http://community.wolfram.com/groups/-/m/t/1385468
I would like to reproduce the left image (created by Blender) using Mathematica and Graphics3D (code attached). Somehow I was not able to make the little sphere inside the central cylinder as shiny as in the Blender image (see my Mathematica images on the right), even though I used Glow for the colour of the sphere. Does anyone know how I could make the sphere much shinier so that it actually glows and does not look as dull as it does now? Is there any possibility with Overlay? Thanks.![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=comparison.png&userId=439307Markus Schmidt2018-07-16T20:35:05ZRecurrenceTables stopped working.
http://community.wolfram.com/groups/-/m/t/1384538
Hello - I started having errors in notebooks making use of RecurrenceTable. I am running Mathematica Version - "11.3.0 for Mac OS X x86 (64-bit) (March 7, 2018)",
I have several notebooks making use of RecurrenceTable. These have been working for many months. All of a sudden they stopped working, maybe after an update?
I have replicated the error by copy/pasting a RecurrenceTable example from the online help into a new and empty Notebook
The example is
RecurrenceTable[{a[n + 1] == 3 a[n], a[1] == 7}, a, {n, 1, 10}]
The error is - RecurrenceTable::dsvar: 1 cannot be used as a variable.
This is happening for all notebooks new and old that use RecurrenceTables
Any ideas what is going on?
thank you
AlAlbert Cattani2018-07-16T01:35:59ZCreating Dialog Box of 3D Plot (Rotation)
http://community.wolfram.com/groups/-/m/t/1386255
I'm currently working on creating a dialog box that holds a 3D plotted graph, and am wondering: is there any way to have it rotate as it does in the Mathematica notebook, instead of just a single frame of the plotted graph appearing in the dialog box? Any ideas? Not sure if it is possible, but it would be interesting to see if there is a way to do this.Megan Lahm2018-07-17T18:44:07ZHelp with solving an equation
http://community.wolfram.com/groups/-/m/t/1386369
I'm a beginner to Mathematica. I had the following code:
Solve[-0.45 - 8./(1 + 3 E^(-x/30)) + (80 E^(-x/30) (0.65 - 0.01 x))/(1 + 3 E^(-x/30))^2==0,x]
This error appeared:
"Solve::inex: Solve was unable to solve the system with inexact coefficients or the system obtained by direct rationalization of inexact numbers present in the system. Since many of the methods used by Solve require exact input, providing Solve with an exact version of the system may help."
I suppose it is because E is inexact. How would I go about solving this equation then?Sabrina Lau2018-07-17T20:23:33ZLive code templates
http://community.wolfram.com/groups/-/m/t/1273720
## Background
I enjoy coding in the FrontEnd (except it crashes and lookup across files does not exist), but I often miss 'hands on keyboard', customizable code templates.
E.g. I often forget to wrap an option name with quotes (`"_"`), or I'm starting a new function and would like to avoid retyping `Attributes/Options`, `Catch/Check`, etc.
I don't like palettes for something that I need to do quickly and frequently.
So I created a little stylesheet, for 11+ (v10 on todo list) and in a *beta* stage at the moment.
Should work on Win/MacOs. Do not use on pre V11 as it may crash the FE.
https://github.com/kubaPod/DevTools
In case you are interested and/or have any ideas about this or similar features, let me know here or create an Issue on GitHub.
Topic cross posted on Mathematica.stackexchange: https://mathematica.stackexchange.com/q/164653/5478
## Setup
(*additional package I use to install github assets' paclets,
you can download .paclet manually if you want
*)
Import["https://raw.githubusercontent.com/kubapod/mpm/master/install.m"]
Needs["MPM`"]
(*installing the package*)
MPMInstall["kubapod", "devtools"]
(*changing default .m stylesheet to a dev's stylesheet*)
CurrentValue[$FrontEnd, "DefaultPackageStyleDefinitions"] =
FrontEnd`FileName[{"DevTools", "DevPackage.nb"}]
(*test*)
FrontEndTokenExecute["NewPackage"]
## How to:
- <kbd>Ctrl</kbd>+<kbd>1</kbd> to open a menu
- navigate with arrows and hit enter/return or hit a shortkey like <kbd>n</kbd>
/ <kbd>{</kbd> / <kbd>[</kbd>
## Customization
Once you set up the new stylesheet, the package should have an additional toolbar with an 'Edit code templates' button on the top right. Click on it and a user templates file should open.
It is just a .m file with a header that should explain everything. It will be improved in future.
## Example
[![enter image description here][1]][1]
There is also a dark one based on the built-in ReversedColors.nb stylesheet:
CurrentValue[$FrontEnd, "DefaultPackageStyleDefinitions"
] = FrontEnd`FileName[{"DevTools", "DevPackageDark.nb"}]
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/v81cV.gif
[2]: https://i.stack.imgur.com/g96TY.gifKuba Podkalicki2018-01-28T18:00:23ZAnalysis of Axon Expression Intensity from Images
http://community.wolfram.com/groups/-/m/t/1386698
An axon, also known as a nerve fiber, is a long, slender projection of a nerve cell, or neuron, that conducts electrical impulses, known as action potentials, away from the nerve cell body. Although the connectivity of axons is crucial to signal transfer between neurons, their structural assembly and trend across the cortex have not yet been widely investigated. By utilizing image processing techniques, we look at the structural trend and are able to quantify axon expression, providing valuable data for further investigation of neuronal activity.
Background: What are Axons?
---------------------------
The neocortex, also called the neopallium and isocortex, is the part of the mammalian brain involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, spatial reasoning and language. The neocortex is the largest part of the cerebral cortex - outer layer of the cerebrum - in human brain.
![enter image description here][1]
The neocortex is made up of six layers, labelled from the outermost inwards, I to VI. Since different layers specialize in different activities, analysis of changing trend in axon density across the layers is crucial to understanding brain activity. Plasticity over longer distances means that a larger number of neural circuits can be achieved and implies a larger memory capacity per synapse (Fawcett and Geller, 1998, Chen et al., 2002, Papadopoulos, 2002).
Axons span many millimeters of cortical territory, and individual axons target diverse areas (Zhang and Deschenes, 1998). Thus, understanding the repertoire of axonal structural changes is fundamental to evaluating mechanisms of functional rewiring in the brain.
Images acquired from distinct axon arbors in adult barrel cortex of GFP transgenic mice were used in this project. Two-photon microscopy techniques were used along with SBEM(Serial Block-face scanning Electron Microscopy) techniques. Images were obtained after a series of surface scanning throughout the entire sample, then stacked for segmentation and quantification. Due to the size of the data, only the first section of layer 1 was analyzed in this project.
## Import Images ##
In order to carry out an image analysis with electron-microscope images, image data (real-valued pixel sizes corresponding to each pixel) is needed.
First we want to assign pixel sizes corresponding to real image sizes:
xpixelsize = 512Quantity[1,"Micrometers"];
ypixelsize = 512 Quantity[1,"Micrometers"];
zstepsize = 293Quantity[1,"Micrometers"];
Then we import the TIF dataset from directory.
image=Import@URLDownload["https://github.com/JihyeonJe/JJ-WSS18/raw/master/axon.tif"];
## Image Processing ##
After images are imported, 3D mesh is created for visualization of general structural trend throughout the entire stack. Maximum intensity projection is also generated from the stacks to aid the understanding of overall axon distribution.
To carry out density and volume calculations, images were binarized with given thresholds.
To get a general idea of the structure, we create a 3D mesh with the dataset:
image3D=Image3D[image,ColorFunction->"GrayLevelOpacity",BoxRatios->{1,1,1/3}];
resize = ImageResize[image3D, 170]
![enter image description here][2]
Then we binarize all the images with a set threshold:
binarized = Map[MorphologicalBinarize[#, {0.10, 0.4}]&,ImageAdjust/@image];
Now let's create maximum intensity projection from the previously created binarized images:
MIP = Image3DProjection[Image3D[binarized]]
![enter image description here][3]
Display mesh and temporal interpolation side by side for convenient analysis:
{Labeled[image3D,Text@"3D mesh"],Labeled[MIP,Text@"Maximum Intensity Projection"]}
![enter image description here][4]
## Calculate volume and density ##
With processed images, we are now able to calculate volume and density of the axons expressed in the images. This step is crucial in analyzing the volumetric intensity of axon expression in the sample.
The total volume of expressed axons was calculated by counting all non-zero elements in the binarized images and multiplying the count by the pixel sizes.
expressedvol = Count[Flatten[ImageData /@ binarized],1]*xpixelsize*ypixelsize*zstepsize
Then we get the value of 12353445560320 um^3.
Now calculate the volume of the entire image stack by counting image dimensions and multiplying it with pixel sizes:
totalvol = First[ImageDimensions[First[image]]]*xpixelsize*Last[ImageDimensions[First[image]]]*ypixelsize*Length[image]*zstepsize
This results in another volumetric quantity of 1208088401018880 um^3.
Using these two values, calculate the average axon volume:
density = N[expressedvol/totalvol]Quantity[1,"Micrometers"]
Then we can get the output of average axon volume, 0.0102256 um^3.
By casting a sliding window across the entire image stack from top to bottom, the general trend of axon density was observed. Since connectivity between axons is crucial in understanding brain activity, the sliding window allows an analysis of the amount of shared data between axons across the brain.
slidingwindow[x_] := Count[Flatten[ImageData[x]],1]
SetAttributes[slidingwindow,Listable];
window = Total /@ MovingMap[slidingwindow, binarized, 2];
Now plot the graph:
Show[ListPlot[window/(First[ImageDimensions[First[image]]]* Last[ImageDimensions[First[image]]]*3), Joined -> True, PlotStyle->Thick, PlotLabel-> "Axon Density Across Z",LabelStyle->Directive[Bold], AxesLabel->"Axon Density (\!\(\*TemplateBox[{InterpretationBox[\"\[InvisibleSpace]\", 1],RowBox[{SuperscriptBox[\"\\\"\[Micro]m\\\"\", \"3\"]}],\"micrometers cubed\",SuperscriptBox[\"\\\"Micrometers\\\"\", \"3\"]},\n\"Quantity\"]\))" ]]
![enter image description here][5]
From the plotted graph we can see the general trend of axon density across the brain. For example, a local maximum (peak) at the region corresponding to the line of Gennari would indicate that the specific area is responsible for active neuronal signal transfer. When analyzed across the entire brain, such data can provide a novel understanding of axon expression and structures.
## References ##
Image data acquired from Diadem Challenge - Neocortical Layer 1 Axons: http://www.diademchallenge.org/neocortical_layer_1_axons_readme.html
De Paola V et al. (2006) Cell type-specific structural plasticity of axonal branches and boutons in the adult neocortex. Neuron 49, 861-875. DOI: 10.1016/j.neuron.2006.02.017
Author Information:
Jihyeon Je (Western Reserve Academy, jej19@wra.net)
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-18at4.41.29PM.png&userId=1352003
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-18at4.44.56PM.png&userId=1352003
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-18at4.47.25PM.png&userId=1352003
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-18at4.48.15PM.png&userId=1352003
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-18at4.52.59PM.png&userId=1352003

Jihyeon Je, 2018-07-18T07:55:40Z

Cloud Deploy broken with neural net (new?)
http://community.wolfram.com/groups/-/m/t/1382601
Over the last few weeks I routinely deployed neural nets to the cloud for computation. Today, even the simplest example does not work anymore:
(*initialise a random neural net that takes a 200x200 image as input*)
scrnet = NetInitialize@
NetChain[{ConvolutionLayer[1, 1], PartLayer[1]},
"Input" -> {1, 200, 200}]
(*Deploy to cloud*)
cnet = CloudExport[scrnet, "MX", Permissions -> "Public"];
(*test on example image *WORKS**)
img = Import["ExampleData/ocelot.jpg"];
With[{net = CloudImport@cnet}, Image@net@{ImageData@img}]
(*Deploy as cloud form page *)
CloudDeploy[
FormPage[{"image" -> "Image"},
With[{net = CloudImport@cnet}, Image@net@{ImageData@#image}] &],
Permissions -> "Public"]
The last piece of code generates a cloud object. If I try to upload the ocelot image I get Image[Failed] as the sole output. If I do not use a net the form works fine:
CloudDeploy[FormPage[{"image" -> "Image"}, Image@ImageData@#image &],
Permissions -> "Public"]
I contacted Wolfram support already and they told me to post here. Am I doing something wrong? I have ~1000 cloud credits left on a free account, so that should not be the issue. Frankly, I am having a lot of issues with the documentation for cloud computing in general and am starting to wonder if it is worth the hassle.
Best, Max

Maximilian Jakobs, 2018-07-13T14:19:54Z

[WSC18] Streaming Live Phone Sensor Data to the Wolfram Language
http://community.wolfram.com/groups/-/m/t/1386358
# Streaming Live Phone Sensor Data to the Wolfram Language
(This forms Part 1 of a 2-part community post: "Using Machine Learning Models for Accelerometer-based Gesture Recognition" - part 2 is available at http://community.wolfram.com/groups/-/m/t/1386392)
![Live demo of sensor streaming][1]
Not only are smartphones wonderful ways to stay connected to the digital world, they also contain an astonishing array of sensors making them ideal for scientific and computational experimentation.
The Wolfram Language (WL) has extraordinary data processing and scientific computing abilities - the only sensors from which it can read data, however, are specialised, and either somewhat expensive or in need of a significant amount of setup. At a high level, the WL has baked-in support for a variety of devices - specifically the Raspberry Pi, Vernier Go!Link compatible sensors, Arduino microcontrollers, webcams and devices using the RS-232 or RS-422 serial protocol (http://reference.wolfram.com/language/guide/UsingConnectedDevices.html); unfortunately, there is no easy way to access sensor data from Android or iOS mobile devices.
In this post, I will attempt to combine the two, demonstrating
1. A UDP socket-based method for transmission of (general) sensor data from an Android phone to the Wolfram Language (based on this excellent community post: http://community.wolfram.com/groups/-/m/t/344278 which does the same for iPhones)
2. A web-based, platform-agnostic method for transmission of IMU / inertial motion unit data (i.e. accelerometer and gyroscope data) from a phone to the Wolfram Language.
# Socket Transmission
On the Google Play Store, there exist a number of Android apps which can transmit live sensor data to a computer over UDP sockets - for instance, "Sensorstream IMU+GPS" ([https://play.google.com/store/apps/details?id=de.lorenz_fenster.sensorstreamgps][1]). Unfortunately, the WL does not support receipt and transmission of data over UDP sockets - while there exists a *Socket* library, as of 2018 this is only capable of dealing with TCP. Thus, to use UDP sockets in the WL, we must implement our own library using JLink to access Java socket packages from the WL. (Credit is due to http://community.wolfram.com/groups/-/m/t/344278 - the code here was slightly outdated so had to be modified.)
## Instructions
To send accelerometer (or other sensor) data from your phone to Wolfram over UDP sockets:
1. Install the "Sensorstream IMU+GPS" app
2. Ensure the sensors you want to stream to Wolfram are ticked on the 'Toggle Sensors' page. (If you want to stream other sensors besides 'Accelerometer', 'Gyroscope' and 'Magnetic Field', ensure the 'Include User-Checked Sensor Data in Stream' box is ticked. Beware, though - the more sensors are ticked, the more latency the sensor stream will have.)
3. On the "Preferences" tab:
a. Change the target IP address in the app to the IP address of your computer (ensure your computer and phone are connected to the same local network)
b. Set the target port to 5555
c. Set the sensor update frequency to 'Fastest'
d. Select the 'UDP stream' radio box
e. Tick 'Run in background'
4. Switch stream ON **before** executing code. (nb. ensure your phone does not fall asleep during streaming - perhaps use the 'Caffeinate' app (https://play.google.com/store/apps/details?id=xyz.omnicron.caffeinate&hl=en_US) to ensure this.)
5. Execute the following WL code (in part from http://community.wolfram.com/groups/-/m/t/344278):
Initialise JLink
QuitJava[];
Needs["JLink`"];
InstallJava[];
Initialise a socket connection - ensure *5555* is the target port set
udpSocket=JavaNew["java.net.DatagramSocket",5555];
Function that reads *size* bytes from a socket.
readSocket[sock_,size_]:=JavaBlock@Block[{datagramPacket=JavaNew["java.net.DatagramPacket",Table[0,size],size]},sock@receive[datagramPacket];
datagramPacket@getData[]]
Function that reads from the socket, processes data and 'sows' it to be collected later
listen[]:=record=DeleteCases[readSocket[udpSocket,1200],0]//FromCharacterCode//Sow;
Initialises the results list and repeatedly appends accelerometer data to it every 0.01 seconds - if the list is over 700 elements long, the 150 oldest elements (at start of list) are removed.
results={};RunScheduledTask[AppendTo[results,Quiet[Reap[listen[]]]];If[Length[results]>700,results=Drop[results,150]],0.01];
Initialises the stream list to be refreshed every 0.01 seconds with the most recent 500 elements of results. Each element of results is a string of transmitted socket data (e.g. "225585.00455, 3, -1.591, 8.624, 5.106, 4, -0.193, -0.690, -0.072") - this is split into a list of strings {"225585.00455", "3", "-1.591"...} and each string is converted to a numerical expression.
stream:=Refresh[ToExpression[StringSplit[#[[1]],","]]& /@ Select[results[[-500;;]],Head[#]==List&],UpdateInterval-> 0.01]
*stream* now contains the 500 most recent accelerometer readings, stored in an array. The values of *stream* will be updated whenever the variable is used within a *Dynamic*. (Note that, with the default sensors enabled - the first three boxes ticked on the *Toggle Sensors* tab - the x, y and z coordinates of the accelerometer can be accessed at elements 3, 4 and 5 in each list in the array; e.g. to access the most recent accelerometer reading, run stream[[-1,3;;5]].)
The accelerometer data can then be visualised using a *ListLinePlot*:
While[Length[results]<500,Pause[2]];Dynamic[Refresh[ListLinePlot[{stream[[All,3]],stream[[All,4]],stream[[All,5]]},PlotRange->All],UpdateInterval->0.1]]
![A list line plot of accelerometer data][2]
The 'pulses' (i.e. shaking the phone) were carried out every second; from this it is evident that the frequency of data transmission is 50 Hz (i.e. data is sent every 0.02 seconds).
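The transmission rate can also be estimated directly from the data, under the assumption (based on the sample reading shown earlier) that the first column of each reading is the app's timestamp in seconds:

```mathematica
(* Estimated sample rate in Hz: reciprocal of the mean gap between
   consecutive timestamps. Assumes stream[[All, 1]] holds timestamps
   in seconds, as in the sample reading above. *)
1/Mean[Differences[stream[[All, 1]]]]
```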
To get the most recent accelerometer data, run
Dynamic[stream[[-1,3;;5]]]
To end socket transmission, turn off the stream on the app, run
RemoveScheduledTask[ScheduledTasks[]];
udpSocket@close[];
QuitJava[];
and ensure the process 'JLink' is quit in Task Manager / Activity Monitor etc - if it is not closed properly, you will be unable to create another socket from that port.
# Channel Transmission
An alternative way to send data from a phone to the Wolfram Cloud is by using the Channel framework. Introduced in version 11 of the Wolfram Language, the Channel framework allows asynchronous communication between Wolfram sessions as well as external systems, with communication being brokered in the Wolfram Cloud. A key point to note about the Channel framework is that it is based on a publish-subscribe model, allowing messages to be sent and received through a 'channel' rather than pairing specific senders and receivers.
## Instructions
To transmit accelerometer data, run the following code: (for other sensors, see the bottom of the page)
ChannelDeploySensorPage[func_]:=Module[{listener,listenerurl,SensorHTML,c,url,u},
CloudConnect[];
listener=ChannelListen["Sensors",func[#Message]&,Permissions->"Public"];
listenerurl = listener["URL"];
SensorHTML="<!DOCTYPE html><html lang=en><meta charset=UTF-8><title>Sensors</title><script src=https://cdn.jsdelivr.net/npm/gyronorm@2.0.6/dist/gyronorm.complete.min.js></script><script>function makeXHR(n,t,o){var e=Date.now(),r=(Math.random(),new XMLHttpRequest);r.withCredentials=!0;var i=\""<>listenerurl<>"?operation=send&time=\"+e.toString()+\"&x=\"+n.toString()+\"&y=\"+t.toString()+\"&z=\"+o.toString();r.open(\"GET\",i,!0),r.send()}function init(){var n={frequency:100,gravityNormalized:!0,orientationBase:GyroNorm.WORLD,decimalCount:2,logger:null,screenAdjusted:!1},t=new GyroNorm;t.init(n).then(function(){t.start(function(n){makeXHR(n.dm.x,n.dm.y,n.dm.z)})})}window.onload=init</script>";
c = CloudExport[SensorHTML,"HTML",Permissions->"Public"];
u=URLShorten[c[[1]]];
Return[{u,BarcodeImage[u,"QR"],listener}]
]
Then run
c = ChannelDeploySensorPage[Func]
![Output - a QR code][3]
(where the argument *func* is some function to be called whenever the channel receives a new point of data from the phone - the argument given to *func* is an association such as the one below:)
<|"x" -> 3, "y" -> 4, "z" -> 1|>
Now, simply scan the QR code generated with your phone, and sensor data will be streamed from your phone to the computer.
The data transmitted can be viewed as a time series as follows:
c[[3]]["TimeSeries"]
The accelerometer data can also be plotted with the following Dynamic: (red --> x, green --> y, blue --> z):
Dynamic[ListLinePlot[ToExpression/@Reverse[Take[Reverse[#["Values"]],UpTo[100]]]&/@c[[3]]["TimeSeries"][[2;;4]],PlotRange->{All,{-50,50}},PlotStyle->{Red, Green, Blue}]]
When you're done, delete the channel by running
RemoveChannelListener[c[[3]]]
## Explanation
Setting up a channel is as easy as connecting to the Wolfram Cloud
CloudConnect[];
and typing
current="";
Func[x_]:=current=x;
listener=ChannelListen["NameOfChannel",Func[#Message]&, Permissions->"Public"]
![A channel listener][4]
Here, *Func* is a function that will be called each time the channel receives a message (the message is supplied as an argument to the function) - it simply sets the variable 'current' to the data last sent to the channel, in the form of key-value pairs, e.g. <|"x" -> 3, "y" -> 4, "z" -> 1|>. To make the channel accessible to other users, ensure the channel has Permissions set to "Public".
To delete the channel (useful when debugging), call
RemoveChannelListener[listener];
One particularly useful feature of the Channel is that it has built-in support for receiving and parsing HTTP requests - simply send a GET request to the channel URL (given by *listener["URL"]*) and the WL will automatically parse the parameters and make the data available to the user:
For instance, if we send an HTTP GET request to *https://channelbroker.wolframcloud.com/users/<your Wolfram Cloud email address>/NameOfChannel* and append the parameters "*operation=send*" (indicates data is being sent to the channel) and "*test=5*":
BaseURL=listener["URL"]
Params = "?operation=send&test=5";
URLRead[HTTPRequest[BaseURL<>Params,<|Method->"Get"|>]]
The variable 'current' has now been updated and contains the key-value pair 'test->5' which we just sent to the channel.
current
<|"test" -> "5"|>
current[["test"]]
5
An alternative way of viewing the data from the channel is to call
listener["TimeSeries"]
This allows the data sent to the channel to be stored as a time series, which can be useful in applications such as collecting time-based sensor data.
### Transmission of Sensor Data over Channels
As demonstrated earlier, a nice feature of Channels is that data can be sent to the Wolfram Language over HTTP - instead of fiddling with JLink and sockets (which tend to be laggy and break easily), one can simply create a web page that streams sensor data to a channel.
For Android devices (running Google Chrome), there exist a range of built-in sensor APIs giving a web page access to raw accelerometer, gyroscope, light sensor and magnetometer data, and processed linear acceleration (i.e. total acceleration experienced by a device disregarding that produced by gravity), absolute orientation and relative orientation sensors. Documentation for these sensors exists online at https://developers.google.com/web/updates/2017/09/sensors-for-the-web.
Unfortunately, for iOS devices there does not exist an easy way to access sensors from the web - although one can use DeviceMotion events (https://developers.google.com/web/fundamentals/native-hardware/device-orientation/), the data these give can vary significantly from browser to browser (e.g. different browsers might use different coordinate systems), so training a machine learning model on gesture data produced by this method would require either retraining a model for each browser or significant processing of data based on browser.
However, there is another solution - namely, the gyronorm.js API (https://github.com/dorukeker/gyronorm.js), which claims to return 'consistent [gyroscope and accelerometer] values across different devices'. Using this, we construct a simple web page to transmit accelerometer data to a Wolfram Language channel called 'Sensors': (While the following code focuses on extracting accelerometer data, it is a trivial task to change the sensor being polled to read, for instance, gyroscope data in the Wolfram Language instead.)
<!DOCTYPE html>
<html lang=en>
<meta charset=UTF-8>
<title>Sensors</title>
<script src=https://cdn.jsdelivr.net/npm/gyronorm@2.0.6/dist/gyronorm.complete.min.js></script>
<script>
function makeXHR(x,y,z){
var t=Date.now();
r=new XMLHttpRequest;
r.withCredentials=true;
var i="https://channelbroker.wolframcloud.com/users/euan.l.y.ong@gmail.com/Sensors?operation=send&time="+t.toString()+"&x="+x.toString()+"&y="+y.toString()+"&z="+z.toString();
r.open("GET",i,!0);
r.send()
}
function init(){
//Explanations are from the GyroNorm GitHub page. (https://github.com/dorukeker/gyronorm.js/)
var n={
frequency:100, //send values every 100 milliseconds
gravityNormalized:!0, // Whether or not to normalise gravity-related values
orientationBase:GyroNorm.WORLD, // ( Can be Gyronorm.GAME or GyroNorm.WORLD. gn.GAME returns orientation values with respect to the head direction of the device. gn.WORLD returns the orientation values with respect to the actual north direction of the world. )
decimalCount:2, // How many digits after the decimal point to return for each value
logger:null,
screenAdjusted:!1
};
t=new GyroNorm;
t.init(n).then(function(){
t.start(function(data){
makeXHR(data.dm.x,data.dm.y,data.dm.z)
//Other possible values to substitute for data.dm.x, data.dm.y, data.dm.z are:
// data.do.alpha ( deviceorientation event alpha value )
// data.do.beta ( deviceorientation event beta value )
// data.do.gamma ( deviceorientation event gamma value )
// data.do.absolute ( deviceorientation event absolute value )
// data.dm.x ( devicemotion event acceleration x value )
// data.dm.y ( devicemotion event acceleration y value )
// data.dm.z ( devicemotion event acceleration z value )
// data.dm.gx ( devicemotion event accelerationIncludingGravity x value )
// data.dm.gy ( devicemotion event accelerationIncludingGravity y value )
// data.dm.gz ( devicemotion event accelerationIncludingGravity z value )
// data.dm.alpha ( devicemotion event rotationRate alpha value )
// data.dm.beta ( devicemotion event rotationRate beta value )
// data.dm.gamma ( devicemotion event rotationRate gamma value )
})
})
}
window.onload=init;
</script>
</html>
This webpage, when opened on an Android or iOS phone, will stream data to the 'Sensors' channel, sending a new HTTP request every 100 milliseconds. (Decreasing the 'frequency' leads to more frequent results, but can cause atrocious levels of lag.)
### Producing the ChannelDeploySensorPage function
Although this webpage allows accelerometer data to be transmitted from a phone to a computer, for it to be used it must be deployed on a server. To change, for instance, the channel name, one would need to edit the file on the server itself, which can quickly become a tiresome process. Thus, we developed a function which autogenerates the required HTML code and stores it in the Wolfram Cloud as a CloudObject where it can easily be accessed. The function also outputs a QR code, to allow mobile users to quickly navigate to the web page. (The argument *func* is simply the function to be called whenever the channel receives a new point of data from the phone.)
## Alternative Sensors
ChannelDeploySensorPage functions for accessing sensors other than the accelerometer can be found below: (For more information about sensors and readings, check out https://developers.google.com/web/fundamentals/native-hardware/device-orientation/)
### Device Orientation (alpha, beta, gamma, absolute):
ChannelDeploySensorPageDeviceOrientation[func_]:=Module[{listener,listenerurl,SensorHTML,c,url,u},
CloudConnect[];
listener=ChannelListen["Sensors",func[#Message]&,Permissions->"Public"];
listenerurl = listener["URL"];
SensorHTML="<!DOCTYPE html><html lang=en><meta charset=UTF-8><title>Sensors</title><script src=https://cdn.jsdelivr.net/npm/gyronorm@2.0.6/dist/gyronorm.complete.min.js></script><script>function makeXHR(n,t,o,abs){var e=Date.now(),r=(Math.random(),new XMLHttpRequest);r.withCredentials=!0;var i=\""<>listenerurl<>"?operation=send&time=\"+e.toString()+\"&alpha=\"+n.toString()+\"&beta=\"+t.toString()+\"&gamma=\"+o.toString()+\"&absolute=\"+abs.toString();r.open(\"GET\",i,!0),r.send()}function init(){var n={frequency:100,gravityNormalized:!0,orientationBase:GyroNorm.WORLD,decimalCount:2,logger:null,screenAdjusted:!1},t=new GyroNorm;t.init(n).then(function(){t.start(function(n){makeXHR(n.do.alpha,n.do.beta,n.do.gamma,n.do.absolute)})})}window.onload=init</script>";
c = CloudExport[SensorHTML,"HTML",Permissions->"Public"];
u=URLShorten[c[[1]]];
Return[{u,BarcodeImage[u,"QR"],listener}]
]
### Device Motion - Acceleration Including Gravity (x, y, z):
ChannelDeploySensorPageAccelerationGravity[func_]:=Module[{listener,listenerurl,SensorHTML,c,url,u},
CloudConnect[];
listener=ChannelListen["Sensors",func[#Message]&,Permissions->"Public"];
listenerurl = listener["URL"];
SensorHTML="<!DOCTYPE html><html lang=en><meta charset=UTF-8><title>Sensors</title><script src=https://cdn.jsdelivr.net/npm/gyronorm@2.0.6/dist/gyronorm.complete.min.js></script><script>function makeXHR(n,t,o){var e=Date.now(),r=(Math.random(),new XMLHttpRequest);r.withCredentials=!0;var i=\""<>listenerurl<>"?operation=send&time=\"+e.toString()+\"&x=\"+n.toString()+\"&y=\"+t.toString()+\"&z=\"+o.toString();r.open(\"GET\",i,!0),r.send()}function init(){var n={frequency:100,gravityNormalized:!0,orientationBase:GyroNorm.WORLD,decimalCount:2,logger:null,screenAdjusted:!1},t=new GyroNorm;t.init(n).then(function(){t.start(function(n){makeXHR(n.dm.gx,n.dm.gy,n.dm.gz)})})}window.onload=init</script>";
c = CloudExport[SensorHTML,"HTML",Permissions->"Public"];
u=URLShorten[c[[1]]];
Return[{u,BarcodeImage[u,"QR"],listener}]
]
### Device Motion - Rotation Rate (alpha, beta, gamma):
ChannelDeploySensorPageRotationRate[func_]:=Module[{listener,listenerurl,SensorHTML,c,url,u},
CloudConnect[];
listener=ChannelListen["Sensors",func[#Message]&,Permissions->"Public"];
listenerurl = listener["URL"];
SensorHTML="<!DOCTYPE html><html lang=en><meta charset=UTF-8><title>Sensors</title><script src=https://cdn.jsdelivr.net/npm/gyronorm@2.0.6/dist/gyronorm.complete.min.js></script><script>function makeXHR(n,t,o){var e=Date.now(),r=(Math.random(),new XMLHttpRequest);r.withCredentials=!0;var i=\""<>listenerurl<>"?operation=send&time=\"+e.toString()+\"&x=\"+n.toString()+\"&y=\"+t.toString()+\"&z=\"+o.toString();r.open(\"GET\",i,!0),r.send()}function init(){var n={frequency:100,gravityNormalized:!0,orientationBase:GyroNorm.WORLD,decimalCount:2,logger:null,screenAdjusted:!1},t=new GyroNorm;t.init(n).then(function(){t.start(function(n){makeXHR(n.dm.alpha,n.dm.beta,n.dm.gamma)})})}window.onload=init</script>";
c = CloudExport[SensorHTML,"HTML",Permissions->"Public"];
u=URLShorten[c[[1]]];
Return[{u,BarcodeImage[u,"QR"],listener}]
]
# Applications
Real world applications of this sensor data abound - aside from the gesture recognition system described in post 2 (http://community.wolfram.com/groups/-/m/t/1386392), you could use this sort of data to make a pocket seismometer, a fall detector, electronic dice, investigate centrifugal motion, investigate friction... More examples available here: http://www.gcdataconcepts.com/examples.html. If you make a sensor-based project in Wolfram, or think of a new / innovative / interesting way to use this data, or if the code above is buggy / incomplete, please do share it in the comments below!
A Wolfram Notebook version of this post is attached.
-- Euan Ong
# References
"Using Connected Devices": http://reference.wolfram.com/language/guide/UsingConnectedDevices.html
"Using your smart phone as the ultimate sensor array for Mathematica": http://community.wolfram.com/groups/-/m/t/344278
"Capturing Data from an Android Phone using Wolfram Data Drop": http://community.wolfram.com/groups/-/m/t/461190
"Sensorstream IMU+GPS": https://play.google.com/store/apps/details?id=de.lorenz_fenster.sensorstreamgps
"Sensors For The Web!": https://developers.google.com/web/updates/2017/09/sensors-for-the-web
"gyronorm.js": https://github.com/dorukeker/gyronorm.js
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ezgif-3-207848dda6.gif&userId=1371970
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=60691.png&userId=1371970
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=42762.png&userId=1371970
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=37343.png&userId=1371970

Euan Ong, 2018-07-17T20:01:00Z

Hackathon project: social network analysis
http://community.wolfram.com/groups/-/m/t/1385976
Introduction
===
We are living in an era of technology in which everything is fast and quickly changing. Sometimes we don't have time to "clean up" our networks. Some might say that we are lazy, but if we can make technology and our knowledge work for us, why not?
Our purpose was to analyse part of the popular Russian social networking service - "Вконтакте" (VKontakte, meaning InContact). It is a standard situation when network administrators block pages of those people who break the social network rules. Some users in one day can decide that they don't want to "spend their lives on the Internet" and delete their pages.
And here we are. One day you check your list of friends and understand that some of them have been blocked or deleted. Do you want to spend your time cleaning up your list of friends? We don't think so. But our programme can help you with this.
Description of our work
-----------------------
First of all, you need to get a list of the user's friends' IDs, which you can retrieve with a simple GET request.
Import["https://api.vk.com/method/friends.get?user_id=" <> userID <>
"&access_token=" <> token <> "&v=5.78", "RawJSON"] // Short
userID - user identification.
token - the access token authorising the API request.
Receive the output in the form of an Association.
<|response -> <|count -> 209, items -> {92070, 119121, 238683, 1266876, 1292342,<<199>>, 323263490, 328646963, 341263802, 395195137, 433885152}|>|>
From this, select the list of IDs.
friendsResponse =
Import["https://api.vk.com/method/friends.get?user_id=" <> userID <>
"&access_token=" <> token <> "&v=5.78", "RawJSON"];
friendIds = friendsResponse["response"]["items"];
Then we get additional information about each user with the help of users.get.
It is worth noting that the user_ids parameter accepts a list of IDs, which should be comma-separated.
informationResponse =
Import["https://api.vk.com/method/users.get?user_ids=" <>
StringRiffle[friendIds, ","] <> "&access_token=" <> token <>
"&v=5.78&fields=photo_max", "RawJSON"];
friends = informationResponse["response"];
Information about each user is returned in the following form:
<|"id" -> 219746518, "first_name" -> "Олег", "last_name" -> "Дудник",
"photo_max" ->
"https://pp.userapi.com/c847219/v847219990/54e38/33d5t5h5bQ4.jpg?\
ava=1"|>
If a user has been deleted, their record will contain the field deactivated with the value "deleted":
<|"id" -> 207385, "first_name" -> "DELETED", "last_name" -> "",
"deactivated" -> "deleted",
"photo_max" -> "https://vk.com/images/deactivated_200.png"|>
If a user has been blocked, their record will contain the field deactivated with the value "banned":
<|"id" -> 10789003, "first_name" -> "Вита",
"last_name" -> "Свиридова", "deactivated" -> "banned",
"photo_max" -> "https://vk.com/images/deactivated_200.png"|>
Thus, we should remove from the list of friends everyone whose record contains the deactivated field.
To delete the user with the particular ID, it is necessary to execute GET request:
"https://api.vk.com/method/friends.delete?user_id=" <> id <> "&access_token=" <> token <> "&v=5.78"
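Putting the pieces together, here is a minimal sketch of the clean-up step. It assumes `friends` and `token` are defined as above, and that the token belongs to the account whose friend list is being cleaned (friends.delete only works on your own friends):

```mathematica
(* Collect the IDs of all deactivated (deleted or banned) friends *)
deactivatedIds = #["id"] & /@ Select[friends, KeyExistsQ[#, "deactivated"] &];

(* Issue a friends.delete request for each one *)
Import["https://api.vk.com/method/friends.delete?user_id=" <> ToString[#] <>
    "&access_token=" <> token <> "&v=5.78", "RawJSON"] & /@ deactivatedIds
```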
Graphical part of the work
===
Receive IDs of friends of the chosen user and transform it into the edge list.
Get the graph:
![enter image description here][1]
This graph doesn't convey much valuable information and is hard to read. Let's do the following:
1. Receive the number of messages between users. They are going to be weights of edges.
2. To make the graph more readable, we will change the length of each edge according to its weight, then rotate each edge by a random angle.
This is done with the help of a rotation matrix.
Clear[friendsVisuo]
friendsVisuo[edgeListVisuo_, edgeWeightsVisuo_] :=
 Module[{coords, g, coord1},
  (* place each vertex at a distance given by its edge weight,
     rotated by a pseudo-random angle via a rotation matrix *)
  coords =
   Table[{0, edgeWeightsVisuo[[i]]}.{{Sin@i, Cos@i}, {Cos@i, -Sin@i}},
    {i, Length@edgeListVisuo}];
  (* note: ed, the global edge list described below, is used here *)
  g = Graph[ed, VertexCoordinates -> coords, VertexLabels -> "Name"];
  coord1 = PropertyValue[{g, #}, VertexCoordinates] & /@ VertexList[g];
  Show[
   (* concentric translucent disks centred on the first vertex *)
   Table[Graphics[{Pink, Opacity[0.3], EdgeForm[{Thick, Blue}],
      Disk[First@coord1, i^2]}], {i, 5}],
   g,
   ImageSize -> Full]]
ed - the list of edges. Concentric translucent disks are also drawn for better visualization.
This is the result:
![enter image description here][2]
If we use data from a social network, we can get a more pleasant look:
![enter image description here][3]
Conclusion
====
It was our first hackathon, and we are pleased that we had the opportunity to participate in it, and to work and communicate with people from Wolfram Research. We had all the facilities to work in comfort: the venue, Wi-Fi, power sockets, food. No doubt we will participate next time!
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=g1.png&userId=1385959
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=g2.png&userId=1385959
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=g3.png&userId=1385959

Oleg Dudnik, 2018-07-17T16:58:55Z

Need help?
http://community.wolfram.com/groups/-/m/t/1385882
I've been attempting to solve a difference equation similar to an ARMA process. The equation, however, contains a random disturbance / stochastic term that I'm not sure how to input. I've tried \epsilon and I've tried e[t]. Does anybody know how to add this correctly?
I've managed to get the proper solutions using RSolveValue; but now just need to correctly specify the error term.
Thanks

tramismile32, 2018-07-17T11:44:27Z

Specify a variable as random error?
http://community.wolfram.com/groups/-/m/t/1363477
I've been attempting to solve a difference equation similar to an ARMA process. The equation however contains a random disturbance / stochastic term that I'm not sure how to input. I've tried $\epsilon$ and I've tried $e[t]$. Anybody know of how to add this correctly?
I've managed to get the proper solutions using RSolveValue; but now just need to correctly specify the error term.
Thanks

Ali Sheikhpour, 2018-06-25T23:16:57Z

Proof of a Diophantine Equation that outputs Fibonacci Numbers
http://community.wolfram.com/groups/-/m/t/1383267
#Introduction
In Stephen Wolfram's book A New Kind of Science, he presents a function that he conjectures to have some very special properties: for positive integer inputs, all of the positive outputs are Fibonacci numbers - the sequence in which each term from the third onward is the sum of the two preceding terms.
The function which he discovered follows as such:
(2-(x^2-y^2-xy)^2)x
![The first Fibonacci Numbers up to 1000, produced by the Diophantine Equation][1]
This post will go through a proof that this specific Diophantine equation produces Fibonacci numbers for positive inputs and outputs. The goal of this post is to show a method that can be used to prove Dr Wolfram's conjecture.
#Proof
Consider the quantity K = x^2-y^2-xy. Since the output of the equation is (2-K^2)x, for a positive input x the output is positive only when K^2 < 2. The only integers whose squares are less than 2 are 1, 0, and -1 (negative values are allowed because K is squared), so we need to find the x and y values for which K takes one of these values.
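The claim that the only qualifying integers are -1, 0 and 1 can be confirmed directly in Mathematica:

```mathematica
(* all integers k with k^2 < 2 *)
Reduce[k^2 < 2, k, Integers]
(* k == -1 || k == 0 || k == 1 *)
```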
So, using Mathematica, we can find the x and y values satisfying the conditions:
First setting x^2-y^2-xy equal to 0 and solving:
![Quadratic = 0][2]
Now setting x^2-y^2-xy equal to 1 and solving:
![enter image description here][3]
Now setting x^2-y^2-xy equal to -1 and solving:
![enter image description here][4]
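The screenshots above presumably come from Reduce calls along these lines (a sketch; the exact code is not shown in the post):

```
(* integer solutions for each admissible value of x^2 - y^2 - x y *)
Reduce[x^2 - y^2 - x*y == 0, {x, y}, Integers]
Reduce[x^2 - y^2 - x*y == 1, {x, y}, Integers]
Reduce[x^2 - y^2 - x*y == -1, {x, y}, Integers]
```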
We know that the first 100 Fibonacci Numbers are {1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597,
2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418,
317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465,
14930352, 24157817, 39088169, 63245986, 102334155, 165580141,
267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073,
4807526976, 7778742049, 12586269025, 20365011074, 32951280099,
53316291173, 86267571272, 139583862445, 225851433717, 365435296162,
591286729879, 956722026041, 1548008755920, 2504730781961,
4052739537881, 6557470319842, 10610209857723, 17167680177565,
27777890035288, 44945570212853, 72723460248141, 117669030460994,
190392490709135, 308061521170129, 498454011879264, 806515533049393,
1304969544928657, 2111485077978050, 3416454622906707,
5527939700884757, 8944394323791464, 14472334024676221,
23416728348467685, 37889062373143906, 61305790721611591,
99194853094755497, 160500643816367088, 259695496911122585,
420196140727489673, 679891637638612258, 1100087778366101931,
1779979416004714189, 2880067194370816120, 4660046610375530309,
7540113804746346429, 12200160415121876738, 19740274219868223167,
31940434634990099905, 51680708854858323072, 83621143489848422977,
135301852344706746049, 218922995834555169026, 354224848179261915075}
Now if we solve the first equation for the first 50 outputs we get:
![enter image description here][5]
It can be seen that the equation produces every sixth Fibonacci number starting from the third element. So, using Binet's formula to produce every nth Fibonacci number and setting these two expressions equal to each other, it can be proven that the equation (left) produces only Fibonacci numbers.
![enter image description here][6]
We must show that the above equation is true and takes on the form ![enter image description here][7], and that ![enter image description here][8]
Setting ![enter image description here][9], we can do the following:
![enter image description here][10]
Following that logic, the proof can be extended to the other 11 equations as shown below:
![enter image description here][11]
![enter image description here][12]
The code for the remaining proofs has been attached to this post.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=fibonacci.PNG&userId=1372005
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=reduce0.PNG&userId=1372005
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=reduce1.PNG&userId=1372005
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=reduce-1.PNG&userId=1372005
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=eqn1.PNG&userId=1372005
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=eqn11.PNG&userId=1372005
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=proof.PNG&userId=1372005
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=proof2.PNG&userId=1372005
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=proof3.PNG&userId=1372005
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=eqn111.PNG&userId=1372005
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=eqn2.PNG&userId=1372005
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=eqn22.PNG&userId=1372005Suhaas Kataria2018-07-13T21:19:26ZA quick question about entity queries
http://community.wolfram.com/groups/-/m/t/1385577
Mathematica provides access to a huge amount of curated data. But most of this is so slow and so inconvenient to retrieve that it is literally next to useless.
Take this simple query as an example:
t = AbsoluteTime[];
Entity["Plant", "Species:GlycineMax"]["TaxonomyGraph"] // AbsoluteTiming
AbsoluteTime[] - t
This took 3.5 minutes (!!!) on my machine, despite AbsoluteTiming reporting merely 9 seconds. On a second try, after restart, it took 4.5 minutes.
This is a typical problem whenever trying to retrieve curated data. Even the 9 seconds would be much too slow for anything else than a one-time interactive query. Use in a program (loop) is out of the question.
Is there a fix for this kind of problem?
Given the great amount of effort Wolfram put into developing this functionality, why are these basic usability issues not being fixed? I am a bit puzzled because being "knowledge-based" is the main marketing point of the Wolfram Language.
Does anyone on this forum seriously use these functions? If yes, how can you manage the terrible performance?fagg CHFH2018-07-17T07:01:15ZUse SetCellGroupOpen with Dynamic
http://community.wolfram.com/groups/-/m/t/1385338
The new inline cell openers work via a packet called ``FEPrivate`SetCellGroupOpen``. The usage for this is something like:
MathLink`CallFrontEnd@
FrontEnd`Value@
FEPrivate`SetCellGroupOpen[
cell,
Open|Close|True|False
]
Now I want to get this to work like the little orange clickers in the docs which have a `Dynamic` state in the second arg to their ``CellGroupData``. Is there a way to make this function do that or any method that *does not involve `NotebookWrite/Put`*?b3m2a1 2018-07-16T23:18:45ZReading Specific Dimensions of a Multidimensional Time Series?
http://community.wolfram.com/groups/-/m/t/1385043
I'm processing natural language signals. One of the more useful stats is obtained with this function:
formants = AudioLocalMeasurements[sample, "Formants"];
It creates a time series object that has a 5-dimensional output. In other words, if I plotted the data in the time series, I'd get five plots, as shown below.
![enter image description here][1]
However, I'd like to get rid of some of the data and keep only the first two or three formants (lower two or three lines). I've tried to adjust the parameters of the ListLinePlot function and also to alter the data itself with replacement rules, both with no success.
If anyone knows how I can either reduce the data in the time series or display only part of the data, I would appreciate the help.
Thanks in advance,
Mark
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-16at9.17.07AM.png&userId=788861Mark Greenberg2018-07-16T16:27:44ZHow do I generalize this function for ratio of a list element?
http://community.wolfram.com/groups/-/m/t/1385132
How do I generalize this function so it can take any matrix and, for each row, take the ratio of each element to the nth element of that row? Mapping works if I want to use the default position I set, but not if I want to specify a position.
Function:
rationormalize4[list_, pos_ : 4] := list/list[[pos]]
Here's what works and what doesn't...
Apply to a list with the default position of 4 - works:
In[76]:= rationormalize4[{1, 4, 2, 6}]
Out[76]= {1/6, 2/3, 1/3, 1}
Map over a matrix a.k.a. list of lists with the default position of 4 - works:
In[78]:= Map[rationormalize4, {{1, 3, 1, 6}, {2, 5, 7, 9}}]
Out[78]= {{1/6, 1/2, 1/6, 1}, {2/9, 5/9, 7/9, 1}}
Apply to a list of lists with a non-default position - doesn't work, thinks pos_ is the second list instead of the second element. Is there a way to write in a level specification?:
In[80]:= rationormalize4[{{1, 3, 1, 6}, {2, 5, 7, 9}}, 2]
During evaluation of In[80]:= Thread::tdlen: Objects of unequal length in {{1,3,1,6},{2,5,7,9}} {1/2,1/5,1/7,1/9} cannot be combined.
Out[80]= {{1, 3, 1, 6}, {2, 5, 7, 9}} {1/2, 1/5, 1/7, 1/9}
Setting attributes to Listable - doesn't work because then the function thinks it has to take the second element of each element and gets confused:
In[81]:= SetAttributes[rationormalize4, Listable]
In[82]:= rationormalize4[{{1, 3, 1, 6}, {2, 5, 7, 9}}, 2]
During evaluation of In[82]:= Part::partd: Part specification 1[[2]] is longer than depth of object.
During evaluation of In[82]:= Part::partd: Part specification 3[[2]] is longer than depth of object.
During evaluation of In[82]:= Part::partd: Part specification 1[[2]] is longer than depth of object.
During evaluation of In[82]:= General::stop: Further output of Part::partd will be suppressed during this calculation.
Out[82]= {{1/1[[2]], 3/3[[2]], 1/1[[2]], 6/6[[2]]}, {2/2[[2]], 5/
5[[2]], 7/7[[2]], 9/9[[2]]}}
Any help is appreciated - I'm making a bunch of data analysis functions for chemistry.Madeleine Sutherland2018-07-16T16:17:21ZProgrammatic control of the Completions Menu
http://community.wolfram.com/groups/-/m/t/1385329
I've been playing around with the autocomplete system for a while now, digging up what I can, which hasn't been much.
In particular I want to be able to better control the autocompletion menu and use it for my own purposes.
I found the ``"FE`"`` level symbols and their corresponding front-end packets, like ``FE`AC``, ``FE`OC``, ``FE`FC``, etc., but can't figure out what they do.
When I use them (e.g. by ``FE`FC["Print"]``) all that happens is I get an "INTERNAL SELF-TEST ERROR", which I naïvely suspect is because the `AttachedCell` popup window that houses the completions isn't available.
I understand this is undocumented functionality and using it is morally equivalent to taking candy from little kids, but I'd still like to know how it works and don't think anyone outside of the WRI can steer me in a useful direction.
So can anyone help me out here?b3m2a1 2018-07-16T23:14:46ZVBA optional arguments using NETLink
http://community.wolfram.com/groups/-/m/t/1385086
I'm using NETLink to modify an Excel file. I'd like to copy a sheet and put it after a specified sheet. The Microsoft documentation for the copy function is [here][1]. I can't figure out how to set an optional argument for the copy function.
example code:
Needs["NETLink`"]
InstallNET["Force32Bit"->True]
excelObj=CreateCOMObject["Excel.Application"]
wb=excelObj@Workbooks@Open["~/example.xlsx"]
sheets=wb@Worksheets
sheets[1]@Copy[(*what do I put here?*),sheets[1]]
In this example code I'm trying to copy the first sheet and insert the new sheet after the first sheet. There are workarounds, but in general, how do I deal with optional arguments like this using NETLink?
[1]: https://msdn.microsoft.com/en-us/VBA/Excel-VBA/articles/sheets-copy-method-excelEric Smith2018-07-16T19:41:24Z[WSC18] Implementation of Common Axiom Systems and Proof Generation
http://community.wolfram.com/groups/-/m/t/1382544
# Exploration of Fundamental Mathematics, via Implementation of Common Axiom Systems and Proof Generation
Utilizing the function FindEquationalProof, introduced in version 11.3 of Mathematica, I implemented common algebraic and logical axiom systems and proved conjectures within them, as well as hypothetical but plausible combinations of axioms and conjectures, as introduced in Chapter 12, Section 9 of Stephen Wolfram's book, *A New Kind of Science*. **The Full Computational Essay is attached at the bottom.**
This post is divided into the following 4 sections:
1. **Implementation of Modern Axiom Systems**
2. **Proof Generation**
3. **Alternative Axiom Systems**
4. **Implications**
## Implementation of Modern Axiom Systems
### Axioms, Conjectures, Theorems and Proofs
Most common math curricula prioritize the introduction of theorems, but few take the time to properly prove each of them. A mathematical proof is a logical argument for the validity of a particular statement. A statement without proof is called a conjecture; a statement that has been proved is a theorem.
Such proofs build upon each other: a proof might require another theorem, which might in turn require yet another theorem for its proof. When we continue this chain of proofs and theorems, we reach a point at which a statement is so obviously true that it needs no proof. This, informally, is an axiom.
An example of an axiom may be `a + b == b + a`, or `Not[True] == False`. Despite the simplicity of these statements, without a consistent set of enough axioms, it is impossible to build up a viable system of mathematics.
A more complete definition, as well as the history of the investigation of proofs, can be found in the following Wikipedia article:
[Wikipedia - Proof Theory](https://en.wikipedia.org/wiki/Proof_theory)
### The FindEquationalProof Function
This newly-implemented function, though rather lacking in practicality, is conceptually an interesting product, as it introduces a method of computationally generating human-readable (but possibly lengthy) proofs. It is also interesting in that a system of axioms can be specified, rather than implied, which allows for the possibility of limiting the range of axioms a proof can use, or providing a completely new system of axioms, separate from that of our mathematics.
The function, in essence, generates such proofs by simple substitution—it replaces a part of an expression by the rules of the axiom, or lemmas (also generated via the same method). The following diagram excerpted from Wolfram's book visualizes the algorithm:
![nks-extract][1]
[1]
Axioms listed in the bottom left are used in the proofs of theorems. The validity of a particular statement is therefore proved by investigating if a certain set of substitutions to one expression can be transformed into another.
This process is equivalent to the proofs of conjectures of mathematics. Every proof is merely a series of substitution of parts of statements with axioms (or lemmas derived from axioms). Given enough computational power, and more importantly, a required set of consistent axioms, a theorem can be proved.
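As a minimal illustration of this substitution process (a toy example of ours, not from the book): proving a == b from the axioms a == c and c == b amounts to chaining two substitutions.

```
(* a two-step toy proof: rewrite a to c, then c to b *)
proof = FindEquationalProof[a == b, {a == c, c == b}];
proof["ProofGraph"]
```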
### Providing definitions and axioms
The only statements the FindEquationalProof function can understand, according to the documentation, are:
- lhs==rhs — equational logic
- ForAll[vars,lhs==rhs] — universal quantifiers over equational logic identities
Implementing even the simplest axioms was, therefore, a notoriously time-consuming task.
```
definitions = {
ForAll[{a, b}, ex[a, b] == not[ForAll[{a}, not[b]]]],
ForAll[{a, b}, im[a, b] == or[not[a], b]],
ForAll[{a}, ueq[a, a] == not[eq[a, a]]],
ForAll[{a, b}, eqv[a, b] == and[im[a, b], im[b, a]]],
ForAll[{a, b}, nand[a, b] == not[and[a, b]]],
ForAll[{a, b}, xor[a, b] == and[or[a, not[b]], or[not[a], b]]],
ForAll[{a, b}, or[or[a, b], not[b]] == "T"],
not["T"] == "F",
ForAll[{a}, eq[a, a] == "T"],
ForAll[{a, b}, im[a, b] == or[not[a], b]]
};
```
```
booleanLogic = {
ForAll[{a, b}, and[a, b] == and[b, a]],
ForAll[{a, b}, or[a, b] == or[b, a]],
ForAll[{a, b}, and[a, or[b, not[b]]] == a],
ForAll[{a, b}, or[a, and[b, not[b]]] == a],
ForAll[{a, b, c}, and[a, or[b, c]] == or[and[a, b], and[a, c]]],
ForAll[{a, b, c}, or[a, and[b, c]] == and[or[a, b], or[a, c]]]
};
```
Simple Boolean algebra axioms are specified, enabling the system to compute `And`, `Or`, and `Not` operations. Operations not provided, like `Exists`, `Implies`, !=, or <->, must be defined using only these logical operators. Notable implementations include:
```
ForAll[{a, b}, ex[a, b] == not[ForAll[{a}, not[b]]]] (*Definition of Existence*)
ForAll[{a, b}, im[a, b] == or[not[a], b]] (*Definition of Implication*)
ForAll[{a, b}, or[ or[a,b], not[b] ] == "T"] (*Definition of Truth*)
ForAll[{a}, eq[a, a] == "T"] (*Functional Definition of Equality*)
```
More complicated operations such as NAND or XOR are defined as combinations of these operations.
Using these definitions and axioms, we can prove simple logical theorems, such as De Morgan's law or modus ponens. The output proof graphs show the axioms in green and the lemmas in red and orange, revealing the order and steps of the proof:
```
modusP = ForAll[{p, q}, im[and[im[p, q], p], q] == "T"];
deMorgan = ForAll[{a, b}, not[and[a, b]] == or[not[a], not[b]]];
axioms = Join[definitions, booleanLogic];
FindEquationalProof[modusP, axioms]["ProofGraph"]
FindEquationalProof[deMorgan, axioms]["ProofGraph"]
```
![modusP][2]
![demorgan][3]
Note that even such simple theorems can take over 200 steps to prove from these axiom systems. More complicated expressions, such as the Wolfram Axiom, can also be shown to follow from these axioms:
```
wolframLogic =
ForAll[{a, b, c},
nand[nand[nand[a, b], c], nand[a, nand[nand[a, c], a]]] == c];
FindEquationalProof[wolframLogic, axioms]["ProofGraph"]
```
![wolframLogic][4]
Another notable fact is that, despite the length of the Wolfram Axiom compared to De Morgan's law, it takes a similar number of steps to prove. A possible explanation may be related to the Principle of Computational Equivalence.
### Implementing Arithmetic
The first attempted implementation of arithmetic used a modified version of Peano's axioms. His system is simple to implement, but slightly harder to express: 0 is defined, and all natural numbers are built using a successor function s[x], which is equivalent to adding 1 to x, e.g. 1 == s[0], 2 == s[s[0]], and so on. Addition is defined as a recursive function of s, and multiplication as a recursive function of addition. Note that the distributive and commutative properties of addition and multiplication are not part of Peano's original axioms.
```
arithmetic = {
(*Addition*)
ForAll[{x, y}, add[x, s[y]] == s[add[x, y]]],
ForAll[{x}, add[0, x] == x],
ForAll[{x, y}, add[x, y] == add[y, x]],
ForAll[{x, y, z}, add[x, add[y, z]] == add[add[x, y], z]],
(*Multiplication*)
ForAll[{x, y}, times[x, s[y]] == add[times[x, y], x]],
ForAll[{x}, times[0, x] == 0],
ForAll[{x}, times[x, s[0]] == x],
ForAll[{x, y}, times[x, y] == times[y, x]],
ForAll[{x, y, z}, times[x, times[y, z]] == times[times[x, y], z]],
ForAll[{x, y, z},
add[times[x, z], times[y, z]] == times[add[x, y], z]]
};
```
With these in hand, many simple algebraic properties can be proved:
1+2x+x*x == (1+x)(1+x)
```
distribution =
ForAll[{x},
add[s[0], add[times[s[s[0]], x], times[x, x]]] ==
times[add[s[0], x], add[s[0], x]]];
FindEquationalProof[distribution, arithmetic]["ProofGraph"]
```
![distribution][5]
We can further extend this system by defining powers.
```
powers = {
ForAll[{x}, pow[x, s[0]] == x],
ForAll[{x}, pow[x, 0] == 1],(*Note: 0^0 is undef*)
ForAll[{x, y}, pow[x, s[y]] == times[pow[x, y], x]],
ForAll[{x, y, z}, pow[pow[x, y], z] == pow[x, times[y, z]]
]};
arithmetic = Join[arithmetic, powers];
```
However, if we introduce negative numbers, the system can prove a false statement:
```
negatives = {
ForAll[{x}, add[x, neg[x]] == 0],
ForAll[{x, y}, sub[x, y] == add[x, neg[y]]]
};
arithmetic = Join[arithmetic, negatives];
zeroEqualsOne = FindEquationalProof[0 == s[0], arithmetic]
zeroEqualsOne["ProofNotebook"];
(*Remove semicolon and evaluate to view proof notebook*)
```
Inspection of the Proof Notebook reveals that the problem is due to the undefined power 0^0. As x^0 is defined as 1, but 0^x is defined as 0, the inevitable conclusion is that 0 is equal to 1.
This problem showcases the importance of defining a sturdy set of axioms. Although it can be eliminated by the use of the "\[Implies]" operator defined in Boolean logic, it is wiser to follow the steps of previous mathematicians rather than to continue building a new set of axioms, as unpredictable inconsistencies and contradictions may occur.
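Given the 0^0 issue diagnosed above, one possible patch (a sketch of ours, not from the original axiom set) is to assert x^0 == 1 only when the base is a successor, so that pow[0, 0] is simply left undefined:

```
powersFixed = {
   ForAll[{x}, pow[x, s[0]] == x],
   (* x^0 == s[0] only for bases of the form s[x], leaving 0^0 undefined *)
   ForAll[{x}, pow[s[x], 0] == s[0]],
   ForAll[{x, y}, pow[x, s[y]] == times[pow[x, y], x]],
   ForAll[{x, y, z}, pow[pow[x, y], z] == pow[x, times[y, z]]]
   };
```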
### Real Algebra and Tarski Axioms
To implement real algebra, a more robust set of axioms need to be defined. Tarski's axioms of real arithmetic are what many consider to be the basis of current algebra:
![tarski][6]
[1]
Before we do so, however, axioms from predicate logic, such as implication or equivalence, are also required.
```
predicateLogic = {
im[ForAll[{a}, im[b, c]], im[ForAll[{a}, b], ForAll[{b}, c]]] ==
"T",
im[fq[a, b], im[a, ForAll[{b}, a]]] == "T",
im[fq[b, a], ex[a, eq[a, b]]] == "T"
};
tarskiAxioms = {
ForAll[{x, y, z}, add[x, add[y, z]] == add[add[x, y], z]],
ForAll[{a}, add[a, 0] == a],
ForAll[{a}, add[a, neg[a]] == 0],
ForAll[{a, b}, add[a, b] == add[b, a]],
ForAll[{x, y, z}, times[x, times[y, z]] == times[times[x, y], z]],
ForAll[{x, y, z},
add[times[x, z], times[y, z]] == times[add[x, y], z]],
ForAll[{x, y}, times[x, y] == times[y, x]],
ForAll[{a}, times[a, 1] == a],
ForAll[{a}, im[ueq[a, 0], eq[times[a, rec[a]], 1]] == "T"],
ForAll[{a, b, c}, im[and[gr[a, b], gr[b, c]], gr[a, c]] == "T"],
ForAll[{a, b}, im[gr[a, b], ueq[a, b]] == "T"],
ForAll[{a, b}, or[gr[a, b], or[eq[a, b], gr[b, a]]] == "T"],
ForAll[{a, b, c}, im[gr[a, b], gr[add[a, c], add[b, c]]] == "T"],
ForAll[{a, b, c},
im[and[gr[a, b], gr[c, 0]], gr[times[a, c], times[b, c]]] ==
"T"],
gr[1, 0] == "T"
};
axioms = Join[definitions, booleanLogic, predicateLogic,
tarskiAxioms];
```
With these, we can compute with the set of all real numbers. Proving the distribution law again, and investigating the Proof Notebook, reveals the use of reciprocals, negative numbers, and even fractions in the proof.
```
distribution =
ForAll[{a, b, x, y},
times[add[x, y], add[x, b]] ==
add[add[times[x, x], times[b, x]], add[times[x, y], times[b, y]]]];
FindEquationalProof[distribution, axioms]["ProofNotebook"];
(*remove semicolon and evaluate to inspect ProofNotebook*)
```
It is surreal to see that the whole of algebra can be defined in these few axioms. Stephen Wolfram, in his book, remarks on this fact as well, stating after the two-page list of mathematical axioms that "It is from these axiom systems [...] that most of the millions of theorems in the literature of mathematics have ultimately been derived."
## Proof Generation
### Elimination of Axioms
A natural extension of such an investigation is to computationally determine how many axioms can be eliminated from a particular proof. A grid is ideal for viewing such data, so a wrapper function was written to automate this process.
`generateProof` accepts a list of theorems and axioms and returns a 2-D array of their proof objects. `generateGrid` accepts a list of theorems and axioms, as well as a few options about its display, and outputs a grid revealing the number of steps of each proof.
```
generateProof[thms_?ListQ, axms_?ListQ, perTimeConstraint_?IntegerQ] :=
Module[{},
Table[Table[
FindEquationalProof[ForAll[{a, b, c}, thm], axm,
TimeConstraint -> perTimeConstraint], {thm, thms}], {axm, axms}]]
```
```
generateGrid[thms_?ListQ, axms_?ListQ, AxiomLength_?IntegerQ,
TimeConstraint_?IntegerQ, scheme_?StringQ, SimpleLabels_?BooleanQ] :=
Module[{proofs, steps, labels, viewThems, viewSteps, colorRules},
proofs = generateProof[thms, axms, TimeConstraint];
steps =
Table[StringCount[ToString@#["ProofFunction"], ";"] & /@
proofs[[n]], {n, Length[proofs]}];
labels = (Rest@Subsets@Range[AxiomLength]);
viewThems = {"Theorems"}~Join~thms;
viewSteps =
Table[{If[SimpleLabels, labels[[n]], axms[[n]]]}~Join~
steps[[n]], {n, Length[proofs]}];
{Rotate[#, \[Pi]/2, Baseline, {1, 1}] & /@ viewThems}~Join~steps;
colorRules = {None, None}~
Join~{Flatten@
Table[Table[{y, x} ->
ColorData[scheme,
Rescale[steps[[y - 1]][[x - 1]], {0,
Max@Flatten@steps}]], {x, 2, Length[steps[[1]]] + 1}], {y,
2, Length[steps] + 1}]};
Grid[{Rotate[#, \[Pi]/2] & /@ viewThems}~Join~viewSteps,
ItemSize -> {{Automatic, Table[1, Length[steps] - 1]}, {Automatic,
Table[1, Length[axms] - 1]}}, Spacings -> Automatic,
Frame -> All, Alignment -> Left, Background -> colorRules]]
```
For example, a set of randomly-generated axioms proving a set of equally randomly-generated theorems might look like the following grid—the top row are the theorems, and the leftmost column are the axioms. Each square is numbered and colored according to the length of its proof:
![exampleGrid][7]
In our case, the intent is to investigate how many axioms of Boolean logic we can remove while still proving particular theorems. Therefore, we can provide the generateGrid function a list of subsets of the Boolean logic axioms. Each subset still requires definitions for things such as implication or equality, so the definitions list is prepended to each subset.
```
subsets = Rest@Subsets@booleanLogic;
Short[subsets, 5]
```
`Out:`
$$\left\{\left\{\forall _{\{a,b\}}\text{and}(a,b)=\text{and}(b,a)\right\},\left\{\forall _{\{a,b\}}\text{or}(a,b)=\text{or}(b,a)\right\},\langle\langle 60\rangle\rangle ,\left\{\forall _{\{a,b\}}\text{and}(a,b)=\text{and}(b,a),\forall _{\{a,b\}}\text{or}(a,b)=\text{or}(b,a),\forall _{\{a,b\}}\text{and}(a,\text{or}(b,\text{not}(b)))=a,\forall _{\{a,b\}}\text{or}(a,\text{and}(b,\text{not}(b)))=a,\forall _{\{a,b,c\}}\text{and}(a,\text{or}(b,c))=\text{or}(\text{and}(a,b),\text{and}(a,c)),\forall _{\{a,b,c\}}\text{or}(a,\text{and}(b,c))=\text{and}(\text{or}(a,b),\text{or}(a,c))\right\}\right\}$$
```
subsetAxioms = Join[definitions, #] & /@ (Rest@Subsets@booleanLogic);
Short[subsetAxioms, 6]
```
$$\left\{\left\{\forall _{\{a,b\}}\text{ex}(a,b)=\text{not}(\text{not}(b)),\forall _{\{a,b\}}\text{im}(a,b)=\text{or}(\text{not}(a),b),\forall _a\text{ueq}(a,a)=\text{not}(\text{eq}(a,a)),\forall _{\{a,b\}}\text{eqv}(a,b)=\text{and}(\text{im}(a,b),\text{im}(b,a)),\forall _{\{a,b\}}\text{nand}(a,b)=\text{not}(\text{and}(a,b)),\forall _{\{a,b\}}\text{xor}(a,b)=\text{and}(\text{or}(a,\text{not}(b)),\text{or}(\text{not}(a),b)),\forall _{\{a,b\}}\text{or}(\text{or}(a,b),\text{not}(b))=\text{T},\text{not}(\text{T})=\text{F},\forall _a\text{eq}(a,a)=\text{T},\forall _{\{a,b\}}\text{im}(a,b)=\text{or}(\text{not}(a),b),\forall _{\{a,b\}}\text{and}(a,b)=\text{and}(b,a)\right\},\langle\langle 61\rangle\rangle ,\left\{\forall _{\{a,b\}}\text{ex}(a,b)=\text{not}(\text{not}(b)),\langle\langle 14\rangle\rangle ,\forall _{\{a,b,c\}}\text{or}(a,\text{and}(b,c))=\text{and}(\text{or}(a,b),\text{or}(a,c))\right\}\right\}$$
After such formatting, the list subsetAxioms contains every combination of axioms, each with the list of definitions. As displaying such long expressions on the grid is infeasible, we can number each axiom of Boolean logic, and only show its index on the grid. In the following example we investigate the provability and the number of steps required for proving De Morgan's theorem, using subsets of Boolean logic. The output grid is edited for easy viewing.
```
booleanLogic = {
(*1*) ForAll[{a, b}, and[a, b] == and[b, a]],
(*2*) ForAll[{a, b}, or[a, b] == or[b, a]],
(*3*) ForAll[{a, b}, and[a, or[b, not[b]]] == a],
(*4*) ForAll[{a, b}, or[a, and[b, not[b]]] == a],
(*5*) ForAll[{a, b, c}, and[a, or[b, c]] == or[and[a, b], and[a, c]]],
(*6*) ForAll[{a, b, c}, or[a, and[b, c]] == and[or[a, b], or[a, c]]]
};
generateGrid[{deMorgan}, subsetAxioms, 6, 10, "Rainbow", True]
```
![gridA][8]
### Generation of Random Boolean Expressions and Proofs
In order to see what proofs are required for particular theorems, we need to be able to generate these theorems repeatedly. The first function, replacer, translates the Wolfram Language's native heads (e.g. Or, And, Nand, True) into ones defined within the axiom set (e.g. or, and, nand, "T"). The second function, randomBoolExp, generates a single logical statement in the axiom set's language.
```
replacer[expr_, reverse_?BooleanQ] :=
Module[{dictionary, reverseDictionary},
dictionary = {"and" -> "And", "or" -> "Or", "not" -> "Not",
"ex" -> "Exists", "im" -> "Implies", "ueq" -> "Unequal",
"eq" -> "Equal", "eqv" -> "Equivalent", "nand" -> "Nand",
"T" -> "True", "F" -> "False", "x" -> "X", "add" -> "Plus",
"times" -> "Times"};
Needs["GeneralUtilities`"];
reverseDictionary =
Association[dictionary] // AssociationInvert // Normal;
ToExpression@
StringReplace[ToString@expr,
If[reverse, reverseDictionary, dictionary]]]
```
```
randomBoolExp[numb_?IntegerQ, form_?StringQ] :=
Module[{tripleDictionary, replBoolExp, shortBoolExp},
tripleDictionary = {or[x_, y_, z_] -> or[x, or[y, z]],
and[x_, y_, z_] -> and[x, and[y, z]],
nand[x_, y_, z_] -> not[and[x, and[y, z]]],
xor[x_, y_, z_] -> and[and[a, b, c], not[and[a, b, c]]]};
replBoolExp =
replacer[
FullForm@(randBoolExp =
BooleanFunction[RandomInteger[1, 2^numb],
ToExpression /@ Alphabet[][[;; numb]]]), True] //.
tripleDictionary;
shortBoolExp =
replacer[FullForm@BooleanMinimize[randBoolExp, form], True] //.
tripleDictionary;
replBoolExp == shortBoolExp]
```
```
randomBoolExp[3, "NAND"](*Repeatedly evaluate for different output*)
```
`Out:`
```
or[and[a,and[not[b],c]],and[not[a],and[not[b],not[c]]]]==nand[not[and[a,and[not[b],c]]],not[and[not[a],and[not[b],not[c]]]]]
```
The function generates the expression via a list of random integers that is used as a truth table for the built-in BooleanFunction. Each variable in the expression is a letter of the alphabet, and the number of variables is set by the parameter numb (e.g. numb = 3 gives the variables {a, b, c}). The expression is then simplified into a specific form, e.g. "NAND", "ANF", etc., and equated with the original expression. Finally, the statement is translated into the axiom set's language.
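The first two steps of that pipeline can be seen in isolation with a fixed truth vector in place of the random one (a sketch; the vector {1, 0, 0, 1} is an arbitrary choice, mirroring the RandomInteger call above):

```
(* a fixed truth table over two variables, in the same form randomBoolExp builds *)
randBool = BooleanFunction[{1, 0, 0, 1}, {a, b}];
BooleanMinimize[randBool, "NAND"]
```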
```
expressions = {ForAll[{a, b, c},
or[and[not[a], not[b]], or[and[not[a], c], and[not[b], not[c]]]] ==
or[and[not[a], c], and[not[b], not[c]]]],
ForAll[{a, b, c},
or[and[a, not[b]], or[and[not[a], b], and[b, not[c]]]] ==
or[and[a, not[b]], or[and[a, not[c]], and[not[a], b]]]],
ForAll[{a, b, c},
or[and[a, b], or[and[not[a], not[b]], and[b, c]]] ==
or[and[a, b], or[and[not[a], not[b]], and[not[a], c]]]],
ForAll[{a, b, c},
or[and[not[a], not[b]], or[and[b, c], and[not[b], not[c]]]] ==
or[and[not[a], c], or[and[b, c], and[not[b], not[c]]]]]
};
defAxioms = Join[definitions, #] & /@ (Rest@Subsets@booleanLogic);
grid = generateGrid[expressions, defAxioms, 6, 20, "Rainbow", True]
```
![gridB][9]
The empty columns for the 7th theorem are believed to be a computational issue; the limited computing power available necessitated a time constraint of 20 seconds per proof.
An interesting fact to note is that most of the generated expressions were provable without the full set of axioms. One would normally expect only a complete axiom system to be able to prove or falsify a statement. Despite the simplicity of these theorems compared to the complex mathematical problems researched in the real world, we can see that it is very plausible to generate proofs without a complete axiom system. In fact, there can be no complete set of axioms, as an arbitrary number of axioms with arbitrary operators can be generated (as explored in the following section). It is also possible to draw a relation to Gödel's incompleteness theorem, which suggests that no consistent, complete set of axioms exists that can prove all possible truths describable within the system. A civilization which only considers axioms {1,2,3,5,6}, without the 4th axiom, might be able to express the 3rd theorem, but would not be able to prove it without expanding its axiom system.
## Alternative Axiom Systems
### A new operator
Our common sense tells us that our system of axioms, i.e., logic, is the only possible system of axioms that exists. However, this is not necessarily the case. An expression can be created with an arbitrary operator, and those expressions can be tried as axioms or theorems, despite not being representable with our system.
Let us imagine a new operator, "$\cdot$", the CenterDot. We can then introduce variables, a and b, to build up expressions like the following:
$$((b\cdot (a\cdot a))\cdot a)\cdot b=a$$
This is impossible to make sense of within our system of logic, but still is a valid expression that describes an equation. Whether it is understandable or not is an unrequited issue; a more interesting question would be if these systems are viable, which we will explore by using them to prove other theorems.
The following list defines some of these arbitrary equalities, each of which constitutes a valid expression:
$$\text{imaginaryAxioms}=\{\\a=a,b=a,a\cdot a=a,a\cdot b=a,\\b\cdot a=a,b\cdot b=a,a\cdot b=b\cdot a,a\cdot (a\cdot a)=a,\\(a\cdot a)\cdot a=a,a\cdot (a\cdot b)=a,a\cdot (b\cdot b)=a,b\cdot (a\cdot a)=a,\\b\cdot (a\cdot b)=a,b\cdot (b\cdot a)=a,(a\cdot a)\cdot b=a,(a\cdot b)\cdot a=a,\\(a\cdot b)\cdot b=a,(b\cdot a)\cdot a=a,(b\cdot a)\cdot b=a,(b\cdot b)\cdot a=a,\\b\cdot (b\cdot b)=a,(b\cdot b)\cdot b=a,(a\cdot a)\cdot (a\cdot a)=a,a\cdot ((a\cdot a)\cdot b)=a,\\(a\cdot a)\cdot (a\cdot b)=a,(a\cdot b)\cdot (b\cdot c)=a,(a\cdot b)\cdot c=a\cdot (b\cdot c),((b\cdot b)\cdot a)\cdot (a\cdot b)=a,\\((b\cdot (a\cdot a))\cdot a)\cdot b=a,b\cdot (c\cdot (a\cdot (b\cdot c)))=a,(a\cdot b)\cdot (a\cdot (b\cdot c))=a,(((b\cdot a)\cdot c)\cdot a)\cdot (a\cdot c)=a,(((b\cdot c)\cdot d)\cdot a)\cdot (a\cdot d)=a,\\\{(a\cdot b)\cdot a=a,a\cdot a=b\cdot b\},\{(a\cdot a)\cdot (a\cdot a)=a,a\cdot b=b\cdot a\},\\(b\cdot (b\cdot (a\cdot a)))\cdot (a\cdot (b\cdot c))=a\};$$
As we did in the previous section with Boolean algebra, we can generate theorems to evaluate what these axioms can prove. This task is done with the following function, randThms, and its related functions:
[2]
```
canonicalize[list_] :=
DeleteDuplicates[
DeleteDuplicates[Sort /@ list, #1 === (#2 /. {1 -> 0, 0 -> 1}) &]]
canonicalize[list_, vars_] :=
DeleteDuplicates[
DeleteDuplicates[Sort /@ list,
Function[{x, y},
MatchQ[x,
Alternatives @@ ((y /. (Rule @@@
Partition[Append[#, First[#]], 2, 1])) & /@
Permutations[vars])]]]]
revariable[expr_] := (expr /. {0 -> a, 1 -> b, 2 -> c})
```
```
randThms[length_?IntegerQ] :=
 Module[{thms, ns},
  thms = Cases[
    Apply[Equal, #] & /@
     Flatten[Groupings[#, CenterDot -> 2] & /@
       Table[Rest[IntegerDigits[ns, 2]], {ns, length}]], _Equal];
  (* keep only expressions that are tautologies under the Nand interpretation *)
  Select[revariable[canonicalize[thms]],
   TautologyQ[Equivalent @@ (# /. CenterDot -> Nand)] &]
  ]
```
For example,
```
randThms[50] // Column
```
`Out:`
$$\begin{array}{l}
a\cdot b=b\cdot a \\
a=(a\cdot a)\cdot (a\cdot a) \\
a=(a\cdot a)\cdot (a\cdot b) \\
a=(a\cdot a)\cdot (b\cdot a) \\
a=(a\cdot b)\cdot (a\cdot a) \\
a=(b\cdot a)\cdot (a\cdot a) \\
\end{array}$$
### Arbitrary Proofs
Using the above-mentioned proofGrid function, we can visualize which axioms yield which theorems. As in the previous run, the axioms and theorems are numbered for visual clarity. Again, although it is not possible in general to determine whether a given theorem holds, the simplicity of the ones used here allows for an acceptable level of confidence.
$$\text{imaginaryTheorems}=\{a\cdot b=b\cdot a,a=(a\cdot a)\cdot (a\cdot a),a=(a\cdot a)\cdot (a\cdot b),a=(a\cdot a)\cdot (b\cdot a),a=(a\cdot b)\cdot (a\cdot a),a=(b\cdot a)\cdot (a\cdot a),a=((a\cdot a)\cdot a)\cdot (a\cdot a),a=(a\cdot a)\cdot ((a\cdot a)\cdot a),a=(a\cdot (a\cdot a))\cdot (a\cdot a),a=(a\cdot a)\cdot (a\cdot (a\cdot a)),a\cdot a=((a\cdot a)\cdot a)\cdot a,a\cdot a=a\cdot ((a\cdot a)\cdot a),a\cdot a=(a\cdot (a\cdot a))\cdot a,a\cdot a=a\cdot (a\cdot (a\cdot a)),a\cdot (a\cdot a)=(a\cdot a)\cdot a,a=(a\cdot a)\cdot (a\cdot (a\cdot b)),a\cdot a=a\cdot ((a\cdot a)\cdot b),a=(a\cdot a)\cdot ((a\cdot b)\cdot a),a=(a\cdot a)\cdot (a\cdot (b\cdot a)),a\cdot a=((a\cdot a)\cdot b)\cdot a,a=(a\cdot a)\cdot (a\cdot (b\cdot b)),a\cdot a=a\cdot ((a\cdot b)\cdot b),a=(a\cdot a)\cdot ((b\cdot a)\cdot a),a=(a\cdot (a\cdot b))\cdot (a\cdot a),a\cdot a=a\cdot (b\cdot (a\cdot a)),a=(a\cdot (a\cdot b))\cdot (a\cdot b),a\cdot a=a\cdot ((b\cdot a)\cdot b),a\cdot b=a\cdot (a\cdot (a\cdot b)),a\cdot a=a\cdot (b\cdot (a\cdot b)),a=(a\cdot a)\cdot ((b\cdot b)\cdot a),a=(a\cdot (a\cdot b))\cdot (b\cdot a),a\cdot a=((a\cdot b)\cdot b)\cdot a,a\cdot (a\cdot (a\cdot b))=b\cdot a,a\cdot a=a\cdot (b\cdot (b\cdot a)),b=((a\cdot a)\cdot a)\cdot (b\cdot b),a=(a\cdot a)\cdot ((b\cdot b)\cdot b),b=(a\cdot (a\cdot a))\cdot (b\cdot b),a=(a\cdot a)\cdot (b\cdot (b\cdot b)),b\cdot b=((a\cdot a)\cdot a)\cdot b,a\cdot a=a\cdot ((b\cdot b)\cdot b),b\cdot b=(a\cdot (a\cdot a))\cdot b,a\cdot a=a\cdot (b\cdot (b\cdot b)),(a\cdot a)\cdot a=(b\cdot b)\cdot b,b\cdot (b\cdot b)=(a\cdot a)\cdot a,a\cdot (a\cdot a)=b\cdot (b\cdot b),a=((a\cdot b)\cdot a)\cdot (a\cdot a),a=(a\cdot (b\cdot a))\cdot (a\cdot a),a\cdot a=(b\cdot (a\cdot a))\cdot a,a=((a\cdot b)\cdot a)\cdot (a\cdot b),a=(a\cdot (b\cdot a))\cdot (a\cdot b),a=(a\cdot b)\cdot (a\cdot (a\cdot b)),a\cdot b=a\cdot ((a\cdot b)\cdot a),a\cdot b=(a\cdot (a\cdot b))\cdot a,a\cdot b=a\cdot (a\cdot (b\cdot a)),a=((a\cdot b)\cdot a)\cdot (b\cdot 
a),a=(a\cdot b)\cdot ((a\cdot b)\cdot a),a=(a\cdot (b\cdot a))\cdot (b\cdot a),a=(a\cdot b)\cdot (a\cdot (b\cdot a)),a\cdot a=((b\cdot a)\cdot b)\cdot a,a\cdot ((a\cdot b)\cdot a)=b\cdot a,b\cdot a=(a\cdot (a\cdot b))\cdot a,a\cdot a=(b\cdot (a\cdot b))\cdot a,a\cdot (a\cdot (b\cdot a))=b\cdot a,a\cdot (a\cdot b)=(a\cdot b)\cdot a,a\cdot (a\cdot b)=a\cdot (b\cdot a),b=((a\cdot a)\cdot b)\cdot (a\cdot b),a=(a\cdot b)\cdot (a\cdot (b\cdot b)),(a\cdot a)\cdot b=(a\cdot b)\cdot b,a\cdot (a\cdot b)=a\cdot (b\cdot b),a=(a\cdot b)\cdot ((b\cdot a)\cdot a),a=(a\cdot (b\cdot b))\cdot (a\cdot a),a\cdot a=(b\cdot (b\cdot a))\cdot a,b\cdot (a\cdot a)=(a\cdot a)\cdot b,a\cdot (a\cdot b)=(b\cdot a)\cdot a,b=((a\cdot a)\cdot b)\cdot (b\cdot a),a=(a\cdot (b\cdot b))\cdot (a\cdot b),a\cdot b=((a\cdot a)\cdot b)\cdot b,a\cdot b=a\cdot (a\cdot (b\cdot b)),(a\cdot a)\cdot b=(b\cdot a)\cdot b,b\cdot (a\cdot b)=(a\cdot a)\cdot b,a=(a\cdot b)\cdot ((b\cdot b)\cdot a),a=(a\cdot (b\cdot b))\cdot (b\cdot a),b\cdot a=((a\cdot a)\cdot b)\cdot b,a\cdot (a\cdot (b\cdot b))=b\cdot a,b\cdot (b\cdot a)=(a\cdot a)\cdot b,b=((a\cdot a)\cdot b)\cdot (b\cdot b),a=((b\cdot a)\cdot a)\cdot (a\cdot a),a=((b\cdot a)\cdot a)\cdot (a\cdot b),a=(b\cdot a)\cdot (a\cdot (a\cdot b)),a\cdot b=((a\cdot b)\cdot a)\cdot a,a\cdot b=a\cdot ((b\cdot a)\cdot a),a\cdot b=(a\cdot (b\cdot a))\cdot a,a=((b\cdot a)\cdot a)\cdot (b\cdot a),a=(b\cdot a)\cdot ((a\cdot b)\cdot a),a=(b\cdot a)\cdot (a\cdot (b\cdot a)),b\cdot a=((a\cdot b)\cdot a)\cdot a,a\cdot ((b\cdot a)\cdot a)=b\cdot a,b\cdot a=(a\cdot (b\cdot a))\cdot a,a\cdot (b\cdot a)=(a\cdot b)\cdot a,b=(a\cdot b)\cdot ((a\cdot a)\cdot b),a=(b\cdot a)\cdot (a\cdot (b\cdot b)),a\cdot (b\cdot b)=(a\cdot b)\cdot a,a\cdot (b\cdot a)=a\cdot (b\cdot b),a=(b\cdot a)\cdot ((b\cdot a)\cdot a),(a\cdot b)\cdot a=(b\cdot a)\cdot a,a\cdot (b\cdot a)=(b\cdot a)\cdot a,a\cdot b=a\cdot ((b\cdot b)\cdot a),a\cdot b=(a\cdot (b\cdot b))\cdot a,(a\cdot b)\cdot a=(b\cdot b)\cdot a,a\cdot 
b=((a\cdot b)\cdot b)\cdot b,a\cdot b=((b\cdot a)\cdot a)\cdot a,a\cdot b=b\cdot ((a\cdot a)\cdot b),a\cdot b=(b\cdot (a\cdot a))\cdot b,a\cdot ((b\cdot b)\cdot a)=b\cdot a,a\cdot b=b\cdot ((a\cdot b)\cdot b),a\cdot b=b\cdot (b\cdot (a\cdot a)),b\cdot (a\cdot a)=(a\cdot b)\cdot b,a\cdot b=b\cdot ((b\cdot a)\cdot b),a\cdot b=b\cdot (b\cdot (a\cdot b)),(a\cdot b)\cdot b=(b\cdot a)\cdot b,a\cdot b=b\cdot (b\cdot (b\cdot a))\};$$
```
generateGrid[imaginaryTheorems, imaginaryAxioms, 6, 20, "Rainbow", \
True]
```
![gridD][10]
Note that axiom number 27, $(a\cdot b)\cdot c=a\cdot (b\cdot c)$ (associativity), is part of our own axiom system (group theory). The significant observation is that this axiom does not particularly stand out, the implication being that there is nothing special about our choice of axiom system. If another axiom had been chosen, and the resulting system were universal, it would be just as plausible a foundation, neither more nor less interesting than ours. The concept of computational universality implies that these alternative systems are translatable into our system, yet they may hold keys to solving problems that can be expressed, but not solved, with our current set of axioms.
## Implications
This project was neither a mathematically rigorous nor a practical exploration. Rather, the results point to some simple but fundamental implications about how our mathematics is conducted. Axioms are the foundations of mathematics and, to an arguable extent, of our universe. Therefore, exploring the significance, or rather the insignificance, of our choice of axioms is an important step in augmenting our view of mathematics: not just manipulation based on one particular system of logic, but a system of implication built on a specific set of simple rules that may differ from what we would expect.
### Footnote
[1] Wolfram, Stephen. A New Kind of Science. 2018.
[2] Partially developed with code written by Jonathan Gorard, Wolfram Research
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1-2-1.nks_multiway_diagram.png&userId=1371718
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1-3-1.modusP.png&userId=1371718
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1-3-2.deMorgan.png&userId=1371718
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1-3-3.wolframLogic.png&userId=1371718
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1-4-4.distribution.png&userId=1371718
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1-5-1.tarskiAxioms.png&userId=1371718
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2-1-1.exampleGrid.png&userId=1371718
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=grid4.png&userId=1371718
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=grid1.png&userId=1371718
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=grid3.png&userId=1371718Pyokyeong Son2018-07-13T15:06:33ZFinding quickly numbers with the same cross sum (sum of digits)
http://community.wolfram.com/groups/-/m/t/1383775
Hi guys! I hope all of you are fine :) Can someone tell me how I can find, with Wolfram Alpha or Mathematica, numbers that have the same sum of digits? For example, the digit sum of 9999 is 36; now I need a method (or a search criterion for Wolfram Alpha) to quickly find other numbers with the same cross sum. I hope someone can help me here. Kind regards and best wishes.Nural I.2018-07-14T14:25:43ZHow to analyze storm event data
http://community.wolfram.com/groups/-/m/t/1384995
I have developed a database of storm events that contains a number of attributes for each storm event, e.g. the total storm rainfall depth plus the antecedent total depth of rainfall for a number of different days before the event (i.e. 60, 30, 7, 3 and 1 day before). I also have a response variable: the water level (stage) at a water level gauge. When I plot just total rainfall versus stage response, it's all over the map (no real correlation), because I believe that the antecedent rainfall has a big effect on the water level stage reading (i.e. if it rains a lot when it's dry, there isn't much runoff, so the water level gauge doesn't go up, but a smaller rain when it's been wet has a big impact on the water level reading and hence flooding).
I am not sure how to analyze this data that has multiple attributes, i.e. each data point contains values for several parameters (depths of rainfall at several different discrete times) versus a single response variable. I would like to develop equations that give the best fit of total storm rainfall and antecedent that best explains the water level at the downstream end of the watershed. In other words, if I could have a prediction of the upcoming storm, what combination of antecedent rainfall best predicts flooding.
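For concreteness, the kind of multi-predictor fit described above might be sketched like this in the Wolfram Language; the file name, sheet layout, and column order (total rain, the five antecedent depths, then stage) are all assumptions:

```
(* hypothetical layout: one row per storm, {total, a60, a30, a7, a3, a1, stage} *)
storms = Import["storms.xlsx", {"Data", 1}][[2 ;;]]; (* drop the header row *)

(* linear fit of stage against total rainfall and the antecedent depths *)
lm = LinearModelFit[storms, {r, a60, a30, a7, a3, a1},
   {r, a60, a30, a7, a3, a1}];
lm["ParameterTable"] (* indicates which antecedent windows matter most *)
```

From there, nonlinear terms or NonlinearModelFit could be substituted once the influential predictors are identified.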
I am hoping someone might suggest an approach and the related commands that would help me analyze the data. I have about 400 storm events and they are all in Excel, so I would have to import from Excel into Mathematica and then analyze. I hope this isn't too general a question, but I am not sure how to get started. Thank you for any thoughts, RogerRoger Leventhal2018-07-16T15:04:16Z[WSC18] Shortest Path Between Two Points on a Rough Terrain
http://community.wolfram.com/groups/-/m/t/1384972
This project seeks to find the shortest distance between two coordinate locations on the surface of the Earth given a maximum possible slope, or threshold. I created a function called shortestDistance that takes four parameters: the first coordinate location (or city), the second coordinate location (or city), the image resolution (quality), and the slope threshold. Using a module, I declared all of my local variables:
shortestDistance[c1_, c2_, quality_, threshold_] :=
Module[{pos1, pos2, dist, bounds, br, center, geoRange, realPixelSize,
data, terrainImage, gradient, d, m, rel, xn, yn, edges, graph, v,
bin, del, pg, path, pix1, pix2, fullTerrain, mag, mesh, newPath,
dropOne, testPath, scaledLength, actualLength, $size},
After that, I stored the geodetic positions ({latitude, longitude}) of the two user-entered locations in the variables "pos1" and "pos2":
pos1 = GeoPosition[c1];
pos2 = GeoPosition[c2];
The bounds of the image were adjusted to fit the entire area of interest. In this example, c1 represents Denver, CO and c2 represents Aspen, CO, and the adjusted terrain graph is shown below with the locations of the two cities and the midpoint between the cities.
bounds = CoordinateBounds[{pos1[[1]], pos2[[1]]}];
br = Max[#2 - #1 & @@@ bounds];
center = Mean[{pos1[[1]], pos2[[1]]}];
geoRange = Transpose[{center - br, center + br}];
![In this example, c1 represents Denver, CO and c2 represents Aspen, CO][1]
The elevation data was then stored as a matrix in the variable "data".
data = GeoElevationData[Transpose@geoRange];
I created four possible pixel dimensions for the size of the newly scaled image and assigned the value to the variable "$size" (e.g., choice 2 -> {200,200} corresponds to a 200 by 200 pixel resolution). The different resolutions were added to the menu because higher resolutions mean slower runtime: the user has the option to choose between speed and image quality.
$size = Replace[
quality, {1 -> {50, 50}, 2 -> {200, 200}, 3 -> {300, 300},
4 -> {400, 400}}];
Next, I created a function "pixelSize" in order to determine the real distance of the path, in miles, given the distance of the path in pixels, and the definition of the function is shown below. Here, the function takes in two arguments: the range of the pixel values for the x and y coordinates as a single list "geoRange" and a "resolution" argument representing the pixel dimensions specified by the user, which was stored in $size in the previous step.
pixelSize[geoRange_, {resolution_, _}] :=
Module[{x1, x2, y1, y2},
{{x1, x2}, {y1, y2}} = geoRange;
GeoDistance[
GeoPosition[{x1, y1}],
GeoPosition[{x2, y1}]
]/resolution
];
In the function call below, realPixelSize represents the $\frac{actual}{scaled}$ ratio of pixel lengths, and the original terrain image is rescaled to an image of the user-entered pixel dimensions:
realPixelSize = pixelSize[geoRange, $size];
terrainImage = ImageResize[Image[QuantityMagnitude[data]], $size];
The image below represents the resized terrain image, with lighter areas representing higher elevations while darker areas representing lower elevations:
![enter image description here][2]
In one of the most important steps, the gradient filter is applied to the scaled terrain image:
gradient = GradientFilter[terrainImage, 1];
Now for the gradient image...the image is lighter in areas with greater **slope** (unlike elevation of the terrain image) and darker in relatively flat terrain with little to no steepness.
![enter image description here][3]
In the next block of code, each pixel of the gradient image is connected to its four adjacent pixels, and the duplicates are deleted using the DeleteDuplicatesBy function, since every pair of adjacent pixels (e.g., pixel A and pixel B) is otherwise connected twice (A->B and B->A).
d = #2 - #1 & @@@ geoRange;
m = #1 & @@@ geoRange;
rel = Round[Reverse[$size*(#[[1]] - m)/d] & /@ {pos1, pos2}];
{xn, yn} = ImageDimensions[gradient];
edges = DeleteDuplicatesBy[Flatten@Table[{
Pixel[i, j] \[UndirectedEdge] Pixel[i - 1, j],
Pixel[i, j] \[UndirectedEdge] Pixel[i + 1, j],
Pixel[i, j] \[UndirectedEdge] Pixel[i, j - 1],
Pixel[i, j] \[UndirectedEdge] Pixel[i, j + 1]
}, {i, 2, xn - 1}, {j, 2, yn - 1}], Sort];
graph = Graph[edges];
v = VertexList[graph];
Here is a visualization of the connected pixels after duplicates are deleted.
![enter image description here][4]
The Binarize function is now applied to the gradient image, and all the pixel values greater than the user-inputted threshold will be replaced with a white pixel while all the pixel values less than the threshold will be replaced with a black pixel. The resulting binarized image is stored in the variable "bin."
bin = Binarize[gradient, threshold];
![enter image description here][5]
All the white pixels (pixels with gradient above threshold) will be deleted using a separate function called FastVertexDelete, for which the definition and function call are down below:
----------
**Side Note:**
*Why wasn't **VertexDelete**, a predefined function in Mathematica, used?*
*VertexDelete took a long time to run, due to a possible bug in the function...to cut down runtime, a new function "FastVertexDelete" was created.*
----------
**Definition of FastVertexDelete:**
FastVertexDelete[ graph_Graph, delete_List ] :=
With[ { dropQ = AssociationMap[ True &, delete ] },
Graph[ Complement[ VertexList @ graph, delete ],
DeleteCases[ EdgeList @ graph, _[ _?dropQ, _ ] | _[ _, _?dropQ ] ]
]
];
FastVertexDelete[ graph_Graph, delete_ ] :=
FastVertexDelete[ graph, { delete } ];
**Function Call:**
del = Pixel @@@ PixelValuePositions[bin, White];
{pix1, pix2} = Pixel @@@ rel;
pg = FastVertexDelete[graph, Complement[Intersection[v, del], {pix1, pix2}]];
The Mathematica function FindShortestPath is now implemented to find the minimum-distance path between the two user-entered coordinates:
path = Replace[FindShortestPath[pg, pix1, pix2], Except[_List] -> {}];
![enter image description here][6]
Now for a more in-depth analysis of the path finding mechanism...
## Visualization of FindShortestPath ##
----------
The connected "pixel plane" looks like this:
![enter image description here][7]
If a pixel is deleted using VertexDelete,
g = VertexDelete[graph, {Pixel[5, 5]}]
the resulting pixel plane looks like this:
![enter image description here][8]
When FindShortestPath is applied to two randomly chosen pixels on the plane,
FindShortestPath[g, Pixel[5, 3], Pixel[5, 8]]
the red dots show the shortest path.
![enter image description here][9]
----------
----------
However, if there is no path, an error message is produced:
If[path === {},
bin = bin;
"Error: There is no path.",
Otherwise, a function "dropOne" is used to shorten the path even further: dropOne removes points from the path (excluding the starting and ending points) whenever the remaining path is still disjoint from the white pixels that have been deleted.
bin = ImageMultiply[bin,
Erosion[ReplacePixelValue[ConstantImage[White, $size],
List @@@ path -> Black], 1]];
fullTerrain = ImageAdjust@Image[QuantityMagnitude[data]];
mag = ImageDimensions[fullTerrain]/$size;
mesh = ImageMesh[bin, Method -> "Exact",
DataRange -> Transpose[{{1, 1}, ImageDimensions[bin]}]];
newPath = List @@@ path[[2 ;; -2]];
**Definition:**
dropOne =
Function[path,
Replace[
SelectFirst[Range[2, Length[path] - 1],
RegionDisjoint[mesh, Line[Delete[path, #]]] &],
{d_Integer :> Delete[path, d], _ :> path}
]];
The new, improved path, achieved using the dropOne function, is named "testPath":
testPath = FixedPoint[dropOne, newPath];
Notice how testPath improves the distance by "smoothing out" the path.
![enter image description here][10]
The scaled length of the path (aka the path length in pixels) is now determined by using the EuclideanDistance. The scaled distance is converted to actual distance in miles by using the "realPixelSize" value computed at the beginning using the pixelSize function:
scaledLength =
N@Total[EuclideanDistance @@@
Partition[testPath, 2, 1]] (*with dropOne function*);
actualLength = scaledLength*realPixelSize (*in miles*);
**In this example with Denver and Aspen, the length of the path before dropOne was implemented was 128 miles while the length of the path after dropOne was implemented was 117.2 miles.**
A grid is created to output a list of lists, specifically, a list of images: the raw graph representing the original path, the smoothened graph representing the improved graph after the dropOne function is implemented, the gradient graph showing the shortest path, and the full terrain image with relative elevations.
Grid[{
{"Raw Graph: ",
ImageResize[HighlightImage[bin, Line[List @@@ path]], 500]},
{"Smoothened Graph: ",
ImageResize[HighlightImage[bin, Line[testPath]], 500]},
{"Gradient: ",
ImageResize[
HighlightImage[
ImageAdjust@
GradientFilter[terrainImage,
1], {Line[{##} - 0.5 & @@@ testPath]}], 500]},
{"Terrain with path marked in red: ",
HighlightImage[fullTerrain, {Line[mag*{##} & @@@ testPath]}]},
{"Distance: ", actualLength}}, Alignment -> Left]
]]
Further Work
-------
A possible improvement would be a quicker runtime: in fact, the image-quality option was added to my microsite precisely to compensate for slow runtime. Another problem on the microsite is that the user must tune the threshold by hand to obtain the best possible graph: there should be a fixed numerical correspondence between actual gradient values and the threshold, since the threshold is currently just a relative measure of slope.
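On that last point, one way to tie the threshold to a physical slope, sketched under the assumption that GeoElevationData returns meters and that a gradient value approximates the elevation change per pixel (the helper name slopeDegrees is mine):

```
(* hypothetical helper: elevation change per pixel (meters) -> slope angle in
   degrees, given the real pixel size in miles computed by pixelSize *)
slopeDegrees[gradVal_, pixelSizeMiles_] :=
 ArcTan[gradVal/
    QuantityMagnitude[
     UnitConvert[Quantity[pixelSizeMiles, "Miles"], "Meters"]]]/Degree
```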
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-15at11.27.45PM.png&userId=1372113
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-15at11.32.54PM.png&userId=1372113
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-15at11.37.32PM.png&userId=1372113
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-15at11.40.08PM.png&userId=1372113
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-16at8.49.25AM.png&userId=1372113
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-16at8.57.29AM.png&userId=1372113
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-15at11.40.08PM.png&userId=1372113
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-16at12.23.21PM.png&userId=1372113
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-16at12.28.15PM.png&userId=1372113
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-16at8.59.46AM.png&userId=1372113Hee Jae Hong2018-07-16T14:05:19Zknot theory package
http://community.wolfram.com/groups/-/m/t/1384652
Dear All,
How can I write a Mathematica file which uses the Mathematica package "KnotTheory`" and creates a list of as many different non-alternating knots and links as possible (more precisely, minimal knot and link projections) with at least 12 crossings (better: 16 or 18 crossings, or even more) which satisfy the following condition: each component of a link has as many positive crossings as negative crossings.
Best wishes,
Stephan Rosebrockstephan.rosebrock2018-07-16T11:57:29ZTabbed Mathematica interface
http://community.wolfram.com/groups/-/m/t/1384750
What is your take on the idea of having a tabbed interface for notebook editing?Marek Suplata2018-07-15T17:34:29Z[GIF] The Great Mystery
http://community.wolfram.com/groups/-/m/t/1363867
![The Great Mystery][1]
The image consists of multiple layers of circles in a hexagonal grid arrangement. The layer scaling factors are integer powers of the golden ratio. Seamless infinite zoom is achieved by gradually blending the fine- and large-scale layers in and out. The *Draw* function draws a single animation frame:
v1 = {Cos[30 \[Degree]], Sin[30 \[Degree]]};
v2 = {Cos[90 \[Degree]], Sin[90 \[Degree]]};
n = 2;
g ={
White,
Table[
Circle[i v1 + j v2, 1/2],
{i, -3 n, 3 n, 1/2}, {j, -3 n, 3 n, 1/2}
]
};
m = 2;
Draw[ds_] :=
Graphics[
{AbsoluteThickness[5/3],
Table[
{Opacity[2/3 (m - Abs[i + ds])/m],
Scale[g, GoldenRatio^(i + ds), {0, 0}]},
{i, -m, m}
]
},
PlotRange -> {{-n, n}, {-n, n}},
Background -> Black,
ImageSize -> 800
];
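The frames can then be assembled into the looping animation; a minimal sketch (the output file name is arbitrary):

```
(* one full zoom cycle; because each layer is rescaled by GoldenRatio and the
   opacities blend accordingly, Draw[1] matches Draw[0], so stopping one step
   short of 1 makes the loop seamless *)
frames = Table[Draw[ds], {ds, 0, 1 - 1/30, 1/30}];
Export["great_mystery.gif", frames, "DisplayDurations" -> 1/30]
```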
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Circles_Hex_Scale_3.gif&userId=1362358Peter Karpov2018-06-25T17:50:33ZParameter in Manipulate command?
http://community.wolfram.com/groups/-/m/t/1384436
I want to visually evaluate the effect of changing the parameter "A" in the following van der Pol equation.
Could anyone tell the way to link the parameter "A" in the Manipulate command to the original "A" in the vdp equation?
Thank you in advance.
vdp = NDSolve[{x'[t] == y[t], y'[t] == A (1 - x[t]^2) y[t] - x[t],
x[0] == 0.1, y[0] == 1}, {x[t], y[t]}, {t, 0, 100}]
Manipulate[
Plot[Evaluate[{x[t], y[t]} /. vdp], {t, 0, 100}],
{A, 0, 1}]Tsukasa Oikawa2018-07-15T02:57:37Z[WSS18] A Discourse Language for Card Games
http://community.wolfram.com/groups/-/m/t/1379335
![enter image description here][1]
# A Discourse Language for Card Games
This project aims at constructing a platform to define any card game in Mathematica and then process, play, and analyze the game.
We have implemented a simple structure to introduce game state, perform actions on the game state based on defined rules, play and visualize the game automatically, and construct the game tree of possible moves.
The full code notebook, including examples can be found [here][2]!
![enter image description here][3]
## Game Design
In our data structure, there are four main parts of game design: game state, action, rule, and goal
### Game State
A game state is an association holding all the information needed at one step of the game. It maps players and other hands (such as the stock) to lists of cards, and also contains a control state showing who controls the game at that step. A game state can be made using the implemented functions and can be tailored to a specific game:
playersList = {"Riccardo", "Matteo", "Sebastian", "Vitaliy"};
handsN = 5;
initialState = CreateGameState[playersList, handsN];
initialState // VisualizeCardGame
![enter image description here][4]
### Actions
Actions are functions that map a game state to another one. A simple implemented action is CardMove:
CardMove[hand1_->hand2_, card_][gameState_] := gameState//CardTake[hand1, card]//CardDrop[hand2];
### Rules
Rules have a "rule delayed" structure (LHS /; conditions :> RHS), in which the LHS is an association pattern, the conditions are the game rules, and the RHS is an action. As an example, imagine this rule: the player in control of the game drops one of his cards onto the stock if the card has the same suit or the same number as the card on top of the stock. This rule can be implemented as follows:
rule1 = gameState:<|___,player_->{___,card1_,___},___,"stock"->{___,card2_},___|> /; (PlayerTurnQ[player][gameState] && SameSuitOrNumberQ[card1,card2]) :> (gameState//CardMove[player->"stock", card1]//NextPlayer);
This structure makes it easy to produce the next step using the Replace function. A list of all possible moves in a game with three rules can be produced by:
rules = {rule1, rule2, rule3}
possibleMovesList = ReplaceList[gameState, rules]
### Goals
The goal part computes the score and checks for the end of the game. The function EndGameQ takes a game state as input and outputs True if it is an end-of-game state. For instance, if the game ends when a player runs out of cards, this function can be coded as:
EndGameQ[gameState_]:= AnyTrue[gameState/@playersList, (#=={})&];
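Under the same end condition, a winner function might be sketched as follows; the function name and the tie-breaking behavior (first player found) are assumptions:

```
(* hypothetical: the winner is the first player with an empty hand *)
Winner[gameState_] := SelectFirst[playersList, gameState[#] === {} &, None]
```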
Furthermore, scores and winners can be defined as functions on the game state, or they might be updated during the game.
## Game Play
### Dynamics
After the game is designed, it can be played by Mathematica. In a random play, the next game state is selected with equal probability from all allowed game states. A list of game states is then created, representing the game play. The function "PlayRandomList" creates this list until the game ends or a maximum number of steps is reached, whichever comes first!
playList = PlayRandomList[initialState, rules, maxsteps];
Using this function, statistical analyses of random play can be generated, such as the probability of winning the game for each player. Here is an example for a game similar to Crazy Eights. In this game, each player at his turn should drop a card on the stock with the same suit or the same number as the stock top. Otherwise, the player draws a card from the stack and the game passes to the next player. The goal is to run out of cards as soon as possible.
initialStateList = Table[CreateGameState[playersList, handsN], 200];
WinnersList = WinnerofRandomPlay[#, rules] & /@ initialStateList;
PieChart[AssociationMap[Count[WinnersList, #] &, playersList], ChartLabels -> Automatic]
![enter image description here][5]
### Game Tree
Finally, a game tree of all possible moves from an initial state can be generated. The "GameTree" function creates a directed graph of a certain depth from an initial state, the rules, and the depth of the graph. In this graph, each vertex is a game state, and two vertices are connected if one is reachable from the other by the game rules. PlotGameTree plots the graph with vertex labels showing the needed information; here, it labels the vertices with the last card in the stock, in a tree of depth 4 for the game described above.
graph = GameTree[initialState, rules, 4];
PlotGameTree[graph]
![enter image description here][6]
## Future Directions
There are several other functions that could be implemented. One of them is a function that outputs the information each player has at any game state. With this add-on, and also a reward or score function, a neural network could be trained on the game and a professional computer game player could be developed.
Also, more general games can be implemented using the same method.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9036gameplay3.gif&userId=1358195
[2]: https://github.com/AghilZadeh/Summer2018Starter/blob/master/StudentDeliverables/A%20Discourse%20Language%20for%20Card%20Games.nb
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2018-07-1114-08-07.png&userId=1358195
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2018-07-1114-19-49.png&userId=1358195
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2018-07-1114-58-51.png&userId=1358195
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2018-07-1115-08-38.png&userId=1358195Aghil Abed Zadeh2018-07-11T19:18:24ZError fitting function to data using NIntegrate and ' (differentiation).
http://community.wolfram.com/groups/-/m/t/1383008
I've been trying to fit a function to a data set. It works with functions that contain no integrals (NIntegrate) or derivatives ( ' ); now I'm trying to get it to work using both integrals and derivatives:
deltaData = {{1.00, 0}, {0.96, 0.3416}, {0.92, 0.4749}, {0.88, 0.5715}, {0.84, 0.648}, {0.80, 0.711}, {0.76, 0.764}, {0.72, 0.8089}, {0.66, 0.864}, {0.62, 0.8939}, {0.58, 0.919}, {0.54, 0.9399}, {0.50, 0.9569}, {0.46, 0.9704}, {0.42, 0.9809}, {0.38, 0.9885}, {0.34, 0.9938}, {0.30, 0.9971}, {0.26, 0.9989}, {0.22, 0.9997}, {0.16, 1}, {0.14, 1}};
deltaFit =
NonlinearModelFit[
deltaData, (1 + a*T + c*T^2 + e*T^3 + g*T^4 + i*T^5 + k*T^6)/(1 +
b*T + d*T^2 + f*T^3 + h*T^4 + j*T^5 + l*T^6), {a, b, c, d, e, f,
g, h, i, j, k, l}, T];
Plot[deltaFit[T], {T, 0, 1}]
data = {{0.203, 0.031}, {0.203, 0.030}, {0.246, 0.055}, {0.267, 0.072}, {0.300, 0.105}, {0.300, 0.105}, {0.330, 0.147}, {0.373, 0.214}, {0.397, 0.255}, {0.415, 0.293}, {0.445, 0.351}, {0.477, 0.430}, {0.493, 0.463}, {0.520, 0.538}, {0.541, 0.590}, {0.582, 0.717}, {0.589, 0.733}, {0.625, 0.847}, {0.645, 0.911}, {0.685, 1.029}, {0.688, 1.043}, {0.730, 1.181}, {0.751, 1.247}, {0.782, 1.338}, {0.793, 1.375}, {0.830, 1.472}, {0.856, 1.531}, {0.878, 1.585}};
AlphaBCS = 1.746;
delta[T_, p_, alpha_] := p*deltaFit[T] + (1 - p)*deltaFit[T]/alpha;
fx[T_, x_, p_, alpha_] := (Exp[AlphaBCS/T*(x^2 + delta[T, p, alpha]^2)^(1/2)] + 1)^(-1);
Ses[T_, p_, alpha_] := -6*AlphaBCS/Pi^2* NIntegrate[fx[T, x, p, alpha]*Log[fx[T, x, p, alpha]] + (1 - fx[T, x, p, alpha])*Log[1 - fx[T, x, p, alpha]], {x, 0,100}];
Ces[T_, p_, alpha_] := T*Ses'[T, p, alpha];
cesFit = NonlinearModelFit[data, Ces[T, p, alpha], {{p, 0.5}, {alpha, 1.764}}, T];
I don't know why I'm getting an error, as the first fit in the code works fine. This is the error message:
The function value ... a bunch of numbers ... Ses' ... more numbers ... is not a list of real numbers with dimensions {28} at {p,alpha} = {0.5`,1.764`}.
Does anybody know what I'm doing wrong? Is there anything I can replace the fit or integral with to make this work?

Christoph Bernkopf, 2018-07-13T18:31:50Z

[WSC18] Predicting the Halting Problem with Machine Learning
http://community.wolfram.com/groups/-/m/t/1384403
# A Machine Learning Analysis of the Halting Problem over the SKI Combinator Calculus
![A rasterised SK combinator with length 50, evaluated to 5 steps][16]
## Abstract
Much of machine learning is driven by the question: can we learn what we cannot compute? Lathrop proved that the halting problem, the canonical undecidable problem, is learnable to arbitrarily high accuracy for Turing machines. The SKI combinator calculus can be seen as a reduced form of the untyped lambda calculus, which is Turing-complete; hence, the SKI combinator calculus forms a universal model of computation. In this vein, the growth and halting times of SKI combinator expressions are analysed, and the feasibility of a machine learning approach to predicting whether a given SKI combinator expression is likely to halt is investigated.
## 1. SK Combinators
What we will refer to as 'SK Combinators' are expressions in the SKI combinator calculus, a simple Turing-complete language introduced by Schönfinkel (1924) and Curry (1930). In the same way that NAND gates can be used to construct any expression in Boolean logic, SK combinators were posed as a way to construct any expression in predicate logic, and being a reduced form of the untyped lambda calculus, any functional programming language can be implemented by a machine that implements SK combinators. While implementations of this language exist, these serve little functional purpose - instead, this language, a simple idealisation of transformations on symbolic expressions, provides a useful tool for studying complex computational systems.
### 1.1 Rules and Expressions
Formally, SK combinator expressions are binary trees whose leaves are labelled either '*S*', '*K*' or '*I*': each tree *(x y)* represents a function *x* applied to an argument *y*. When the expression is evaluated (i.e. when the function is applied to the argument), the tree is transformed into another tree, the 'value'. The basic 'rules' for evaluating combinator expressions are given below:
*k[x][y] := x*
The K combinator or 'constant function': when applied to *x*, returns the function *k[x]*, which when applied to some *y* will return *x*.
*s[x][y][z] := x[z][y[z]]*
The S combinator or 'fusion function': when applied to *x, y, z*, returns *x* applied to *z*, which is in turn applied to the result of *y* applied to *z*.
*i[x] := x*
The I combinator or 'identity function': when applied to *x*, returns *x*.
Note that the I combinator *I[x]* is equivalent to the function *S[K][a][x]*, as the latter will evaluate to the former in two steps:
*S[K][a][x]*
*= K[x][a[x]]*
*= x*
Thus the I combinator is redundant as it is simply 'syntactic sugar' - for the purposes of this exploration it will be ignored.
These rules can be expressed in the Wolfram Language as follows:
SKRules={k[x_][y_]:> x,s[x_][y_][z_]:> x[z][y[z]]}
### 1.2 Evaluation
The result of applying these rules to a given expression is given by the following functions:
SKNext[expr_]:=expr/.SKRules;
Returns the next 'step' of evaluation of the expression *expr* - evaluating all functions in *expr* according to the rules above without evaluating any 'new'/transformed functions.
SKEvaluate[expr_,n_]:=NestList[#1/.SKRules&,expr,n];
Returns the next *n* steps of evaluation of the expression *expr*
SKEvaluateUntilHalt[expr_,n_] := FixedPointList[SKNext,expr,n+1];
Returns the steps of evaluation of *expr* until either it reaches a fixed point or it has been evaluated for n steps, whichever comes first.
Note that, due to the Church-Rosser theorem (Church and Rosser, 1936), the order in which the rules are applied does not affect the final result, as long as the combinator evaluates to a fixed point / 'halts'. For combinators with no fixed point, which do not halt, the behaviour demonstrated as they evaluate could change based on the order of application of the rules - this is not explored here and is a topic for potential future investigation.
### 1.3 Examples
The functions above can be used to evaluate a number of interesting SK combinator expressions:
Column[SKEvaluateUntilHalt[s[k][a][x],10][[1;;-2]]]
The *I* combinator
Column[SKEvaluateUntilHalt[s[k[s[i]]][k][a][b],10][[1;;-2]]]
The reversal expression - *s[k[s[i]]][k][a][b]* takes two terms, *a* and *b*, and returns *b[a]*.
## 2. Growth and Halting
### 2.1 Halting and Related Works
We will define a combinator expression to have halted if it has reached a fixed point - i.e. if no combinators in the expression can be evaluated, or if evaluating any of the combinators in the expression returns the original expression. As SK combinators are Turing-complete and so computationally universal, it is evident that the halting problem - determining whether or not a given SK combinator expression will halt - is undecidable for SK combinators. There are, however, patterns and trends in the growth of SK combinators, and it is arguably possible to speak of the probability of a given SK combinator expression halting.
Some investigations have been made into probabilistically determining the halting time of Turing machines (Lathrop, 1996; Calude and Dumitrescu, 2018), with Lathrop proving that it is possible to compute some value K where, for some arbitrary predetermined confidence *(1-δ)* and accuracy *(1-ε),* a program that does
A. Input a Turing machine M and program I.
B. Simulate M on I for K steps.
C. If M has halted then print 1, else print 0.
D. Halt.
has a probability greater than *(1-δ)* of having an accuracy (when predicting whether or not a program will halt) greater than *(1-ε).* The key result of this is that, in some cases 'we can learn what we cannot compute' - 'learning' referring to Valiant's formal analysis as 'the phenomenon of knowledge acquisition in the absence of specific programming' (Valiant 1984).
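Steps A-D can be sketched directly for SK expressions in the Wolfram Language (a minimal sketch under our own naming; *predictHalts* is a hypothetical name, and the rules are the S and K rules introduced in section 1.1):

```mathematica
SKRules = {k[x_][y_] :> x, s[x_][y_][z_] :> x[z][y[z]]};

(* Simulate expr for at most kSteps steps; return 1 if it reached a
   fixed point (halted) within the bound, else 0 *)
predictHalts[expr_, kSteps_] := Module[{trace},
  trace = FixedPointList[# /. SKRules &, expr, kSteps + 1];
  (* FixedPointList stops early exactly when a fixed point is reached,
     in which case its last two elements are identical *)
  If[Length[trace] >= 2 && trace[[-1]] === trace[[-2]], 1, 0]]
```

With confidence and accuracy guarantees as in Lathrop's result, the whole scheme then reduces to choosing kSteps = K large enough.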
### 2.2 Definitions and Functions
The size of a combinator expression can either be measured by its length (total number of characters including brackets) or by its leaf size (number of 's' and 'k' characters). We use the former in most cases, and the latter when randomly generating combinator expressions.
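For concreteness, both measures can be computed as follows (a small sketch; the helper names are our own):

```mathematica
(* string length of an expression, brackets included *)
skStringLength[expr_] := StringLength[ToString[expr]];

(* leaf size: number of s and k symbols, counting heads at every level *)
skLeafCount[expr_] := Count[expr, s | k, {0, Infinity}, Heads -> True];

skStringLength[s[k][s]]  (* "s[k][s]" has 7 characters *)
skLeafCount[s[k][s]]     (* 3 leaves *)
```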
The number of possible combinator expressions with leaf size *n* is given by
SKPossibleExpressions[n_]:=(2^n)*Binomial[2*(n-1),n-1]/n
(Wolfram, 2002): a binary tree with *n* leaves can take a Catalan number C(n-1) of shapes, and each of its leaves can be labelled 's' or 'k', so this count equals 2^n C(n-1) and grows exponentially.
#### 2.2.1 Visualisation
We define a function to visualise the growth of a combinator, *SKRasterize*:
SKArray[expr_,n_]:=Characters/@ToString/@SKEvaluate[expr,n];
SKArray[expr_]:=SKArray[expr,10];
Generates a list of the steps in the growth of a combinator, where each expression is itself a list of characters ('s', 'k', '[', ']')
SKGrid[exp_,n_]:=ArrayPlot[SKArray[exp,n],{ColorRules->{"s"->RGBColor[1,0,0],"k"->RGBColor[0,1,0],"["->RGBColor[0,0,1],"]"->RGBColor[0,0,0]},PixelConstrained->True,Frame->False,ImageSize->1000}];
SKGrid[exp_]:=SKGrid[exp,10];
Generates an ArrayPlot of a list given by SKArray, representing the growth of a combinator in a similar manner to that of cellular automata up to step n. The y axis represents time - each row is the next expression in the evaluation of an SK combinator. Red squares indicate 'S', green squares indicate 'K', blue squares indicate '[' and black squares indicate ']'.
SKRasterize[func_,n_]:=Image[SKGrid[func,n][[1]]];
SKRasterize[func_]:=SKRasterize[func,10];
Generates a rasterized version of the ArrayPlot.
A visualisation of a given combinator can easily be produced, as follows:
SKRasterize[s[s[s]][s][s][s][k],15]
![The longest running halting expression with leaf size 7, halting in 12 steps (Wolfram, 2002)][1]
The longest running halting expression with leaf size 7, halting in 12 steps (Wolfram, 2002)
#### 2.2.2 Halting graphs
We can create a table of the length (string length) of successive combinator expressions as they evaluate as follows:
SKLengths[exp_,n_]:=StringLength/@ToString/@SKEvaluate[exp,n];
Returns a list of the lengths of successive expressions until step *n*
These can be plotted as a graph (x axis number of steps, y axis length of expression):
SKPlot[expr_,limit_]:=ListLinePlot[SKLengths[expr,limit],AxesOrigin->{1,0},AxesLabel->{"Number of steps","Length of expression"}];
Thus, a graph of the above combinator can be produced:
SKPlot[s[s[s]][s][s][s][k],15]
![A graph of the above combinator][2]
It is evident from the graph that this combinator halts at 12 steps.
#### 2.2.3 Random SK combinators
To empirically study SK combinators, we need a function to randomly generate them. Two methods to do this were found:
RecursiveRandomSKExpr[0,current_]:=current;
RecursiveRandomSKExpr[depth_,current_]:=
RecursiveRandomSKExpr[depth-1,
RandomChoice[{
RandomChoice[{s,k}][current],
current[RecursiveRandomSKExpr[depth-1,RandomChoice[{s,k}]]]
}]
];
RecursiveRandomSKExpr[depth_Integer]:=RecursiveRandomSKExpr[depth,RandomChoice[{s,k}]];
A recursive method, repeatedly appending either a combinator to the 'head' of the expression or a randomly generated combinator expression to the 'tail' of the expression. (Hennigan)
replaceWithList[expr_,pattern_,replaceWith_]:=ReplacePart[expr,Thread[Position[expr,pattern]->replaceWith]];
treeToFunctions[tree_]:=ReplaceRepeated[tree,{x_,y_}:>x[y]];
randomTree[leafCount_]:=Nest[ReplacePart[#,RandomChoice[Position[#,x]]->{x,x}]&,{x,x},leafCount-2];
RandomSKExpr[leafCount_]:=treeToFunctions[replaceWithList[randomTree[leafCount],x,RandomChoice[{s,k},leafCount]]];
Random combinator generation based on generation of binary trees - each combinator can be expressed as a binary tree with leaves 'S' or 'K'. (Parfitt, 2017)
While the first method gives a large spread of combinators with a variety of lengths, and is potentially more efficient, for the purposes of this exploration the second is more useful, as it limits the combinators generated to a smaller, more controllable sample space (for a given leaf size).
### 2.3 Halting Graphs
All combinators of leaf sizes up to size 6 evolve to fixed points (Wolfram, 2002):
exprs = Table[RandomSKExpr[6],10];
ImageCollage[Table[ListLinePlot[SKLengths[exprs[[n]],40]],{n,10}],Background->White]
![10 randomly generated combinators of size 6, with their lengths plotted until n=40.][3]
10 randomly generated combinators of size 6, with their lengths plotted until n=40.
As the leaf size increases, combinators take longer to halt, and some show exponential growth:
exprs = Table[RandomSKExpr[10],10];
ImageCollage[Table[ListLinePlot[SKLengths[exprs[[n]],40]],{n,10}],Background->White]
![10 randomly generated combinators of size 10, with their lengths plotted until n=40.][4]
10 randomly generated combinators of size 10, with their lengths plotted until n=40.
exprs = Table[RandomSKExpr[30],10];
ImageCollage[Table[ListLinePlot[SKLengths[exprs[[n]],40]],{n,10}],Background->White]
![10 randomly generated combinators of size 30, with their lengths plotted until n=40.][5]
10 randomly generated combinators of size 30, with their lengths plotted until n=40.
CloudEvaluate[exprs = Table[RandomSKExpr[50],10];
ImageCollage[Table[ListLinePlot[SKLengths[exprs[[n]],40]],{n,10}],Background->White]]
![10 randomly generated combinators of size 50, with their lengths plotted until n=40.][6]
10 randomly generated combinators of size 50, with their lengths plotted until n=40.
After evaluating a number of these combinators, it appears that they tend either to halt or to grow exponentially - some sources (Parfitt, 2017) reference linear-growth combinators; however, none of these have been encountered here.
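One rough way to separate these behaviours automatically is to examine the ratios of successive expression lengths, using the *SKLengths* helper defined above (a heuristic sketch only; the window of 5 and the threshold 1.05 are arbitrary choices of ours):

```mathematica
(* heuristic growth classifier: 'halted' if the length stops changing,
   'exponential' if the last few length ratios stay well above 1 *)
growthType[expr_, n_] := Module[{lens, ratios},
  lens = SKLengths[expr, n];
  ratios = Ratios[lens[[-5 ;;]]];
  Which[
   lens[[-1]] === lens[[-2]], "halted (length fixed)",
   Mean[N[ratios]] > 1.05, "growing, roughly exponentially",
   True, "growing slowly"]]
```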
### 2.4 Halting Times
With a random sample of combinators, we can plot a cumulative frequency graph of the number of combinators that have halted at a given number of steps:
SKHaltLength[expr_,n_]:=Module[{x},
x=Length[SKEvaluateUntilHalt[expr,n+1]];
If[x>n,False,x]
]
Returns the number of steps it takes the combinator *expr* to halt; if *expr* does not halt within n steps, returns *False*.
GenerateHaltByTable[depth_,iterations_,number_]:=Module[{exprs,lengths},
exprs = Monitor[Table[RandomSKExpr[depth],{n,number}],n];
lengths = Monitor[Table[SKHaltLength[exprs[[n]],iterations],{n,number}],n];
Return[lengths]
]
Generates a table of the halt lengths of *number* random combinator expressions (*False* if they do not halt within *iterations* steps) with leaf size *depth*.
GenerateHaltData[depth_,iterations_,number_]:=Module[{haltbytable,vals},
haltbytable = GenerateHaltByTable[depth,iterations,number];
vals = BinCounts[Sort[haltbytable],{1,iterations+1,1}];
Table[Total[vals[[1;;n]]],{n,1,Length[vals]}]
]
Generates a cumulative table: for each step count up to *iterations*, the number of the *number* random combinator expressions with leaf size *depth* that have halted by that many steps.
GenerateHaltGraph[depth_,iterations_,number_]:=Module[{cumulative,f},
cumulative=GenerateHaltData[depth,iterations,number];
f=Interpolation[cumulative];
{ListLinePlot[cumulative,PlotRange->{Automatic,{0,number}},GridLines->{{},{number}},Epilog-> {Red,Dashed,Line[{{0,cumulative[[-1]]},{number,cumulative[[-1]]}}]},AxesOrigin->{1,0},AxesLabel->{"Number of steps","Number of combinators halted"}],cumulative[[-1]]}
]
Plots a graph of the above data.
#### 2.4.1 Halting Graphs
We analyse halting graphs of random samples of 1000 combinators, each evaluated for up to 30 steps, for leaf sizes from 10 to 50:
CloudEvaluate[GenerateHaltGraph[10,30,1000]]
![Leaf size 10: almost all combinators in the sample (997) have halted (99.7%).][7]
Leaf size 10: almost all combinators in the sample (997) have halted (99.7%).
CloudEvaluate[GenerateHaltGraph[20,30,1000]]
![Leaf size 20: 979 combinators in the sample have halted (97.9%).][8]
Leaf size 20: 979 combinators in the sample have halted (97.9%).
CloudEvaluate[GenerateHaltGraph[30,30,1000]]
![Leaf size 30: 962 combinators in the sample have halted (96.2%).][9]
Leaf size 30: 962 combinators in the sample have halted (96.2%).
CloudEvaluate[GenerateHaltGraph[40,30,1000]]
![Leaf size 40: 944 combinators in the sample have halted (94.4%).][10]
Leaf size 40: 944 combinators in the sample have halted (94.4%).
CloudEvaluate[GenerateHaltGraph[50,30,1000]]
![Leaf size 50: 889 combinators in the sample have halted (88.9%).][11]
Leaf size 50: 889 combinators in the sample have halted (88.9%).
Evidently, the rate at which combinators in the sample halt decreases as the number of steps increases - the gradient of the graph is decreasing. As the graph levels out at around 30 steps, we will assume that the number of halted combinators will not increase significantly beyond this point.
As the leaf size increases, fewer combinators in the sample have halted by 30 steps - however, the graph still levels out, suggesting most of the combinators which have not halted by this point will never halt.
#### 2.4.2 Halting Times and Leaf Size
We can plot a graph of the number of halted combinators against leaf size:
CloudEvaluate[ListLinePlot[Table[{n,GenerateHaltGraph[n,30,1000][[2]]},{n,5,50,1}]]]
![A graph to show the number of combinators which halt within 30 steps in each of 45 random samples of 1000 combinators, with leaf size varying from 5 to 50.][12]
A graph to show the number of combinators which halt within 30 steps in each of 45 random samples of 1000 combinators, with leaf size varying from 5 to 50.
This graph shows that, despite random variation, the number of halted combinators decreases as the leaf size increases: curve fitting suggests that this follows a downward-opening quadratic function.
FitData[data_,func_]:=Module[{fitd},fitd={Fit[data[[1,2,3,4,1]],func,x]};{fitd,Show[ListPlot[data[[1,2,3,4,1]],PlotStyle->Red],Plot[fitd,{x,5,50}]]}]
A curve-fitting function: extracts the point data from the plot *data* (via the Part specification [[1,2,3,4,1]], which digs the coordinates out of the Graphics expression) and plots the curve of best fit with some combination of basis functions *func*.
FitData[%,{1,x,x^2}]
![Curve-fitting on the data with a quadratic function][13]
{1012.07 - 1.18915 x - 0.0209805 x^2}
Curve-fitting on the data with a quadratic function yields a reasonably accurate curve of best fit.
## 3. Machine Learning Analysis of SK Combinators
The graphs above suggest that the majority of halting SK combinators with leaf size <= 50 will halt before ~30 steps. Thus we can state that, for a randomly chosen combinator, if it does not halt within 40 steps, it is likely that it will never halt. Unfortunately, a lack of time prevented a formal analysis of this in the vein of Lathrop's work - this is an area for future research.
We attempt to use modern machine learning methods to predict the likelihood of a given SK combinator expression halting before 40 steps:
### 3.1 Dataset Generation
We implement a function *GenerateTable* to produce tables of random SK expressions:
SKHaltLength[expr_,n_]:=Module[{x},
x=Length[SKEvaluateUntilHalt[expr,n+1]];
If[x>n,False,x]
]
Returns the number of steps *expr* takes to halt if it halts within the given limit *n*; otherwise returns *False*
GenerateTable[depth_,iterations_,number_]:=Module[{exprs,lengths},
exprs = Monitor[Table[RandomSKExpr[depth],{n,number}],n];
lengths = Monitor[Table[exprs[[n]]-> SKHaltLength[exprs[[n]],iterations],{n,number}],n];
lengths = DeleteDuplicates[lengths];
Return[lengths]
]
Returns a list of *number* expressions with leaf size *depth*, whose elements are rules from *expression* to the *number of steps taken to halt* if the expression halts within *iterations* steps, and to *False* otherwise.
*GenerateTable* simply returns tables of random SK expressions - as seen earlier, these form heavily skewed datasets, as around 90% of the random expressions generated will halt. We must therefore process this dataset to create a balanced training dataset: this is done with the function *CreateTrainingData*:
CreateTrainingData[var_]:=Module[{NoHalt,Halt,HaltTrain,Train},
NoHalt = Select[var,#[[2]]==False&];
Halt = Select[var,#[[2]]==True&];
HaltTrain = RandomSample[Halt,Length[NoHalt]];
Train = Join[HaltTrain,NoHalt];
Return[Train]
];
Counts the number of non-halting combinators in *var* (assumed to be fewer than the halting ones), selects a random sample of halting combinators of the same length, and concatenates the two lists.
ConvertSKTableToString[sktable_]:=Table[ToString[sktable[[n,1]]]-> sktable[[n,2]],{n,1,Length[sktable]}];
Converts SK expressions in a table generated with *GenerateTable* to strings
We also implement a function to create rasterised training data (where instead of an individual SK combinator associated with either True or False, an image of the first 5 steps of evaluation of the combinator is associated with either True or False):
CreateRasterizedTrainingData[var_]:=Module[{NoHalt,Halt,HaltTrain,HaltTrainRaster,NoHaltTrainRaster,RasterTrain},
NoHalt = Select[var,#[[2]]==False&];
Halt = Select[var,#[[2]]==True&];
HaltTrain = RandomSample[Halt,Length[NoHalt]];
HaltTrainRaster=Monitor[Table[SKRasterize[HaltTrain[[x,1]],5]-> HaltTrain[[x,2]],{x,1,Length[HaltTrain]}],x];
NoHaltTrainRaster=Monitor[Table[SKRasterize[NoHalt[[x,1]],5]-> NoHalt[[x,2]],{x,1,Length[NoHalt]}],x];
RasterTrain = Join[HaltTrainRaster,NoHaltTrainRaster];
Return[RasterTrain]
];
Counts the number of non-halting combinators in *var* (assumed to be fewer than the halting ones), selects a random sample of halting combinators of the same length, evaluates and generates images of both halting and non-halting combinators, and processes them into training data (image -> True/False).
### 3.2 Markov Classification
#### 3.2.1 Training
As a first attempt, we generate 2000 random SK expressions with leaf size 5, 2000 expressions with leaf size 10 ... 2000 expressions with leaf size 50, evaluated up to 40 steps:
lengths = Flatten[Table[GenerateTable[n,40,2000],{n,5,50,5}]]
We convert all non-False halt lengths to 'True':
lengths = lengths/.(a_->b_)/;!(b===False):> (a->True);
We process the data and train a classifier using the Markov method:
TrainingData = CreateTrainingData[lengths];
TrainingData2 = ConvertSKTableToString[TrainingData];
HaltClassifier1 = Classify[TrainingData2,Method->"Markov"]
#### 3.2.2 Testing
We must now generate test data, using the same parameters for generating random combinators:
testlengths = Flatten[Table[GenerateTable[n,40,2000],{n,5,50,5}]]
testlengths = testlengths/.(a_->b_)/;!(b===False):> (a->True);
TestData = CreateTrainingData[testlengths];
TestData2 = ConvertSKTableToString[TestData];
The classifier can now be assessed for accuracy using this data:
TestClassifier1 = ClassifierMeasurements[HaltClassifier1,TestData2]
#### 3.2.3 Evaluation
A machine learning solution to this problem is only useful if the accuracy is greater than 0.5 (i.e. more accurate than a random coin flip). We test the accuracy of the classifier:
TestClassifier1["Accuracy"]
0.755158
This, while not outstanding, is passable for a first attempt. We find the training accuracy:
ClassifierInformation[HaltClassifier1]
![Classifier Information][14]
The training accuracy (71.3%) is slightly lower than the testing accuracy (75.5%) - this is surprising, and is probably due to a 'lucky' choice of testing dataset.
We calculate some statistics from a confusion matrix plot:
TestClassifier1["ConfusionMatrixPlot"]
![Confusion Matrix Plot][15]
Accuracy: 0.76
Misclassification rate: 0.24
Precision (halt): 0.722 (when 'halt' is predicted, how often is it correct?)
True Positive Rate: 0.83 (when the combinator halts, how often is it classified as halting?)
False Positive Rate: 0.32 (when the combinator doesn't halt, how often is it classified as halting?)
Precision (non-halt): 0.799 (when 'non halt' is predicted, how often is it correct?)
True Negative Rate: 0.68 (when the combinator doesn't halt, how often is it classified as not halting?)
False Negative Rate: 0.17 (when the combinator halts, how often is it classified as not halting?)
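All of the statistics above follow from the four confusion-matrix counts. As a generic sketch (the function name is our own; the arguments are the true/false positive/negative counts read off the plot, not hard-coded values):

```mathematica
(* derive classifier statistics from confusion-matrix counts *)
confusionStats[tp_, fp_, fn_, tn_] := N@<|
   "Accuracy" -> (tp + tn)/(tp + fp + fn + tn),
   "Misclassification" -> (fp + fn)/(tp + fp + fn + tn),
   "PrecisionHalt" -> tp/(tp + fp),
   "TruePositiveRate" -> tp/(tp + fn),
   "FalsePositiveRate" -> fp/(fp + tn),
   "PrecisionNonHalt" -> tn/(tn + fn),
   "TrueNegativeRate" -> tn/(tn + fp),
   "FalseNegativeRate" -> fn/(tp + fn)|>
```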
A confusion matrix plot shows that the true positive rate is larger than the true negative rate - this would suggest that it is easier for the model to tell when an expression halts than when an expression does not halt. This could be due to the model detecting features suggesting very short run time in the initial string - for instance, a combinator k[k][<expression>] would evaluate immediately to k and halt - however, these 'obvious' features are very rare.
### 3.3 Random Forest Classification on Rasterised Expression Images
Analysing strings alone, without any information about how they are actually structured or how they might evaluate, could well be a flawed method - one might argue that, in order to predict halting, one would need more information about how the program runs. Hence, another possible method is to generate a dataset of visualisations of the first 5 steps of a combinator's evaluation as follows:
SKRasterize[RandomSKExpr[50],5]
![A rasterised SK combinator with length 50, evaluated to 5 steps][16]
and feed these into a machine learning model. It might seem that this method is pointless: we are already evaluating the combinators to 5 steps, and we are training a model on a database of combinators evaluated to 40 steps to predict whether a combinator will halt in <= 40 steps. However, the point of the exercise is less to create a useful resource than to investigate the feasibility of applying machine learning to this type of problem. If more computational power were available, a dataset of combinators evaluated to 100 steps (by which point even more combinators will have halted) could be created: in such a case, a machine learning model to predict whether or not a combinator will halt in <= 100 steps would be a practical approach, as the time taken to evaluate a combinator to 100 steps can be exponentially longer than that taken to evaluate it to 5 steps.
#### 3.3.1 Training
We generate a dataset of 2000 random SK expressions with leaf size 5, 2000 expressions with leaf size 10 ... 2000 expressions with leaf size 50, evaluated up to 40 steps:
rasterizedlengths = Flatten[Table[GenerateTable[n,40,2000],{n,5,50,5}]];
rasterizedlengths = rasterizedlengths/.(a_->b_)/;!(b===False):> (a->True);
In order to train a model on rasterised images, we must evaluate all SK expressions in the dataset to 5 steps and generate rasterised images of these:
RasterizedTrainingData = CreateRasterizedTrainingData[rasterizedlengths];
We then train a classifier on this data:
RasterizeClassifier=Classify[RasterizedTrainingData,Method->"RandomForest"]
#### 3.3.2 Testing
We must now generate test data, using the same parameters for generating random training data:
testrasterizedlengths = Flatten[Table[GenerateTable[n,40,2000],{n,5,50,5}]];
testrasterizedlengths = testrasterizedlengths/.(a_->b_)/;!(b===False):> (a->True);
TestRasterizedData = CreateRasterizedTrainingData[testrasterizedlengths];
The classifier can now be assessed for accuracy using this data:
TestRasterizeClassifier=ClassifierMeasurements[RasterizeClassifier,TestRasterizedData]
#### 3.3.3 Evaluation
A machine learning solution to this problem is only useful if the accuracy is greater than 0.5 (i.e. more accurate than a random coin flip). We test the accuracy of the classifier:
TestRasterizeClassifier["Accuracy"]
0.876891
This is significantly better than the Markov approach (75.5%). We find the training accuracy:
ClassifierInformation[RasterizeClassifier]
![enter image description here][17]
Again, the training accuracy (85.5%) is slightly lower than the testing accuracy (87.7%).
We calculate some statistics from a confusion matrix plot:
TestRasterizeClassifier["ConfusionMatrixPlot"]
![Confusion Matrix Plot][18]
Accuracy: 0.88
Misclassification rate: 0.12
Precision (halt): 0.911 (when 'halt' is predicted, how often is it correct?)
True Positive Rate: 0.83 (when the combinator halts, how often is it classified as halting?)
False Positive Rate: 0.08 (when the combinator doesn't halt, how often is it classified as halting?)
Precision (non-halt): 0.848 (when 'non halt' is predicted, how often is it correct?)
True Negative Rate: 0.92 (when the combinator doesn't halt, how often is it classified as not halting?)
False Negative Rate: 0.17 (when the combinator halts, how often is it classified as not halting?)
A confusion matrix plot shows that the false negative rate is larger than the false positive rate - this would suggest that it is easier for the model to tell when an expression does not halt than when it halts. The precision for halting, however, is much higher than the precision for non-halting, indicating that when the model suggests a program will halt, this is much more likely to be correct than when it suggests that the program will not halt. An (oversimplified) way to look at this intuitively is to examine some graphs of lengths of random combinators:
![Random combinator length graphs][19]
Looking at combinators that halt (combinators for which the graph flattens out), some combinators 'definitely halt' - their length decreases until the graph flattens out:
!['definitely halts'][20]
'definitely halts' (1)
Some combinators have length that increases exponentially :
![exponentially increasing combinator length graph][21]
'possibly non-halting' (2)
And some combinators appear to have increasing length but suddenly decrease:
![increasing then decreasing combinator length graph][22]
'possibly non-halting' (3)
We do not know which features of the rasterised graphic the machine learning model extracts to make its prediction, but if, say, it were classifying based purely on the length of the graphic, it would identify combinators like (1) as 'definitely halting', but would not necessarily be able to distinguish between combinators like (2) and combinators like (3), which both appear non-halting initially.
On a similar note, some functional programming languages (e.g. Agda; Mu) can classify a function as 'definitely halting' or 'possibly non-halting', just like our classifier, whose training dataset consists of functions that either 'definitely halt' (halt in <= 40 steps) or are 'possibly non-halting' (do not halt in <= 40 steps, but might halt later).
### 3.4 Table of Comparison
![A table comparing statistics for Markov and Random Forest models][23]
## 4. Conclusions and Further Work
### 4.1 Conclusions
The results of this exploration were somewhat surprising, in that a machine learning approach to determining whether or not a program will terminate appears to some extent viable - out of all the methods attempted, the random forest classifier applied to a rasterised image of the first five steps of the evaluation of a combinator achieved the highest accuracy of 0.88 on a test dataset of 1454 random SK combinator expressions. Note, though, that what is actually being determined here is whether or not a combinator will halt before some n steps (here, n=40) - we are classifying between combinators that 'definitely halt' and combinators which are 'possibly non-halting'.
### 4.2 Microsite
As an extension to this project, a Wolfram microsite was created and is accessible at [https://www.wolframcloud.com/objects/euan.l.y.ong/SKCombinators](https://www.wolframcloud.com/objects/euan.l.y.ong/SKCombinators) - within this microsite, a user can view a rasterised image of a combinator, a graph of the length of the combinator as it is evaluated, a statistical analysis of halting time relative to other combinators with the same leaf size and a machine learning prediction of whether or not the combinator will halt within 40 steps.
![Microsite Screenshot][24]
A screenshot of the microsite evaluating a random SK combinator expression
### 4.3 Implications, Limitations and Further Work
Although the halting problem is undecidable, the field of termination analysis - attempting to determine whether or not a given program will eventually terminate - has a variety of applications, for instance in program verification. Machine learning approaches to this problem would not only help explore this field in new ways but could also be implemented in, for instance, software debuggers.
The principal limitations of this method are that we are only predicting whether or not a combinator will halt in a finite number *k* of steps - while this could be a sensible idea if *k* is large, at present this system is very impractical due to small datasets and the small value of *k* used to train the classifier (*k* = 40). Another issue with the machine learning technique used is that the visualisations have different dimensions (longer combinators generate longer images), and when the images are preprocessed and resized before being fed into the random forest model, downsampling/upsampling can lead to loss of data, decreasing the accuracy of the model.
From a machine learning perspective, attempts at analysis of the rasterised images with a neural network could well prove fruitful, as would an implementation of a vector representation of syntax trees to allow the structure of SK combinators (nesting combinators) to be accurately extracted by a machine learning model.
Future theoretical research could include a deeper exploration of Lathrop's probabilistic method of determining *k*, an investigation of the 'halting' features the machine learning model is looking for within the rasterised images, a more general analysis of SK combinators (proofs of halting / non-halting for certain expressions, for instance) to uncover deeper patterns, or even an extension of the analysis carried out in the microsite to lambda calculus expressions (which can be transformed to an 'equivalent' SK combinator expression).
## Acknowledgements
We thank the mentors at the 2018 Wolfram High School Summer Camp - Andrea Griffin, Chip Hurst, Rick Hennigan, Michael Kaminsky, Robert Morris, Katie Orenstein, Christian Pasquel, Dariia Porechna and Douglas Smith - for their help and support writing this paper.
## References
A. Church and J. B. Rosser: Some properties of conversion. Transactions of the American Mathematical Society, 39 (3): 472-482, 1936.
R. H. Lathrop: On the learnability of the uncomputable. ICML 1996: 302-309.
C. S. Calude and M. Dumitrescu: A probabilistic anytime algorithm for the halting problem. Computability 7(2-3): 259-271, 2018.
L. G. Valiant: A theory of the learnable. Communications of the Association for Computing Machinery 27, (11): 1134-1142, 1984.
S. Wolfram: A New Kind of Science. 1121-1122, 2002.
E. Parfitt: Ways that combinators evaluate. From Wolfram Community, a Wolfram Web Resource, 2017.
http://community.wolfram.com/groups/-/m/t/965400
S-C Mu: Agda.Termination.Termination http://www.iis.sinica.edu.tw/~scm/Agda/Agda-Termination-Termination.html
Attached is a Wolfram Notebook (.nb) version of this computational essay.
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=1.png&userId=1371970
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=2.png&userId=1371970
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=3.png&userId=1371970
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=4.png&userId=1371970
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=5.png&userId=1371970
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=6.png&userId=1371970
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=7.png&userId=1371970
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=8.png&userId=1371970
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=9.png&userId=1371970
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=10.png&userId=1371970
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=11.png&userId=1371970
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=12.png&userId=1371970
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=13.png&userId=1371970
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Classify1.png&userId=1371970
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=15.png&userId=1371970
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=16.png&userId=1371970
[17]: http://community.wolfram.com//c/portal/getImageAttachment?filename=17.png&userId=1371970
[18]: http://community.wolfram.com//c/portal/getImageAttachment?filename=18.png&userId=1371970
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=19.png&userId=1371970
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=20.png&userId=1371970
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=21.png&userId=1371970
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=22.png&userId=1371970
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=23.png&userId=1371970
[24]: http://community.wolfram.com//c/portal/getImageAttachment?filename=24.png&userId=1371970Euan Ong2018-07-14T23:56:25Z[WSC18] Analysing Image Dithering
http://community.wolfram.com/groups/-/m/t/1383824
![image dithering cover][1]
This summer, as part of the Wolfram High School Summer Camp program, I decided to do a project analysing image dithering, and the various ways to do it. Over the course of two weeks, I learnt and understood how the algorithms work. Since resources about this process are sparse on the internet, *in this post, I not only implement the algorithms, but additionally describe in detail what image dithering is and how it works.*
*The second part of my project was to use machine learning to classify images that originate from different image dithering algorithms.*
After failing to get high accuracy with a large number of machine learning techniques, I finally came up with one that has an **accuracy of about 90%**.
Note that this community post is an adapted version of my computational essay. *My computational essay is attached at the bottom of the post.*
# What is image dithering?
![image-dithering-FS-example][2]
*Can you tell the difference between the two images above?*
The image on the right uses **just 24 colours**.
Yet, using Floyd-Steinberg dithering, it manages to look as detailed and beautiful as the one on the left, which uses more than 16 million colours!
Let's formally define the aim of image dithering:
Given an RGB image with $256^3$ colours, reduce the colour space of the image so that it only contains colours from a given palette.
![chromaticity-plot comparison][3]
*A comparison of the chromaticity plots of the above images*
# Color Palette
First, let us tackle the easier problem at hand. Say that we are given the option to choose our color palette, the only restriction being the number of colours in the palette. What would be the best way to obtain a color palette that is the most appropriate for an image?
Thankfully, the Wolfram Language makes this task very easy for us, as we can simply use the inbuilt function `DominantColors[image, n]`. For example, regarding the image above:
![dominant-colors of the image][4]
would be the most appropriate color palette with 12 colours.
Here are some visualisations of the process of choosing the color palette in 3D RGB space.
The color palette of the original image:
![original color palette][5]
The color palette of the dithered image:
![final color palette][6]
*Notice how the final color palette is quite close to points on the diagonal of the cube. I go into a lot more detail about this in my computational essay.*
**Now, let us try to solve the main part of the problem, actually figuring out the mapping of a pixel colour to its new colour.**
# Naive idea
```
colorPalette = {0, 32, 64, 96, 128, 159, 191, 223};
pixel = {{34, 100, 222},
         {200, 50, 150}};
result = pixel;
(* round each channel of each pixel to the nearest palette value *)
Do[
  result[[x, y]] = First[Nearest[colorPalette, pixel[[x, y]]]],
  {x, Length[pixel]},
  {y, Length[First[pixel]]}
];
Grid[result, Frame -> All, Spacings -> {1, 1}]
```
![grid naive idea][7]
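For readers without a Wolfram kernel, the same naive rounding can be sketched in Python (an illustrative translation of the snippet above, not the author's code; the palette is applied per channel, and the helper name `nearest` is invented here):

```python
# Naive quantisation: round every channel of every pixel to the nearest
# palette value, with no error diffusion. Illustrative sketch only.

palette = [0, 32, 64, 96, 128, 159, 191, 223]

def nearest(value, palette):
    """Return the palette entry closest to value."""
    return min(palette, key=lambda p: abs(p - value))

pixels = [[(34, 100, 222)],
          [(200, 50, 150)]]

quantised = [[tuple(nearest(c, palette) for c in px) for px in row]
             for row in pixels]
```

Every pixel is mapped independently, which is exactly why the result looks banded: all the rounding error is simply thrown away.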
## Extra!
I had to implement my own (slower) version of the Wolfram function `Nearest` for the final functions, since pre-compiling Wolfram code to C does not support `Nearest` natively.
However, at the time of writing, I have heard that there is an internal project at Wolfram to enable compiling all inbuilt Wolfram functions to C natively.
# Better idea
As one can guess, this idea can be improved a lot. One of the important ideas is that of "smoothing": we want the transition between two objects/colours to look smoother, and one way to do that is to make a gradient as the transition occurs.
However, how do you formalise a "gradient"? And how do you make a smooth one when all you have are 24 colours?
Dithering essentially attempts to solve these problems.
To solve our questions, let's think about a pixel: what's the best way to make it look closest to its original colour?
In the naive idea, we created some error by rounding to some nearby values.
For each pixel, let's formalise the error as:
![error in pixel[1, 1] screenshot][8]
## Error diffusion
It is clear that the error should somehow be transmitted to the neighbouring elements so we can account for the error in a pixel in its neighbours. To maintain an even order of processing, let us assume that we will traverse the 2D array of pixels from the top-left corner, row-wise until we reach the bottom-right corner.
Therefore, it never makes sense to "push" the effects of an error to a cell we've already processed. Finally, let us see some ways to actually diffuse the error across the image.
# Floyd - Steinberg Dithering
In 1976, Robert Floyd and Louis Steinberg published the most popular dithering algorithm<sup>1</sup>. The pattern for error diffusion can be described as:
```
diffusionFormula = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPosition = {1, 2};
```
## What does this mean?
`diffusionFormula` is simply a way to encode the diffusion from a pixel.
`diffusionPosition` refers to the position of the pixel, relative to the `diffusionFormula` encoding.
So, for example, an error of `2` at pixel `{1, 1}` translates to the following additions:
```
pixel[[1, 2]] += error*(7/16);
pixel[[2, 1]] += error*(3/16);
pixel[[2, 2]] += error*(5/16);
pixel[[2, 3]] += error*(1/16);
```
```
![grid floyd steinberg error diffusion][9]
![floyd Steinberg dithering][10]
## How does one come up with these weird constants?
Notice how the numerator constants are the first 4 odd numbers.
The pattern is chosen specifically to create an even checkerboard pattern for perfect grey images using black and white.
![grayscale floyd steinberg dithering example][11]
*Example Grayscale image dithered with Floyd Steinberg*
![grayscale picture in picture thing][12]
*Note the checkerboard pattern in the image above.*
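Putting the pieces together, the whole Floyd-Steinberg scheme fits in a few lines. The following is a grayscale-only Python sketch (an illustration, not the post's compiled Wolfram implementation; the name `fs_dither` is invented here):

```python
# Floyd-Steinberg error diffusion on a 2D grayscale image (values in [0, 1]).
# Quantise each pixel to the nearest palette value, then push the quantisation
# error right and down with the 7/16, 3/16, 5/16, 1/16 weights.

def fs_dither(img, palette):
    h, w = len(img), len(img[0])
    a = [row[:] for row in img]          # working copy accumulates diffused error
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = a[y][x]
            new = min(palette, key=lambda p: abs(p - old))
            out[y][x] = new
            err = old - new
            for dy, dx, wgt in ((0, 1, 7/16), (1, -1, 3/16),
                                (1, 0, 5/16), (1, 1, 1/16)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    a[ny][nx] += err * wgt
    return out

# A flat mid-grey image dithered to black and white yields roughly the
# checkerboard pattern described above.
grey = [[0.5] * 8 for _ in range(8)]
bw = fs_dither(grey, [0.0, 1.0])
```

Note that only error falling off the right and bottom edges is lost; everywhere else the average intensity of the output tracks the input, which is what makes the dithered image look correct from a distance.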
# Atkinson Dithering
Relative to the other dithering algorithms here, Atkinson's algorithm diffuses much less of the error to its surroundings (only 6/8 of it). It tends to preserve detail well, but large sections of continuous colour can appear blown out.
It was devised by Bill Atkinson<sup>2</sup>, an Apple employee. The pattern for error diffusion is as below:
```
diffusionFormula = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8;
diffusionPosition = {1, 2};
```
![atkinson dithering example][13]
# Jarvis, Judice, and Ninke Dithering
This algorithm<sup>3</sup> spreads the error over more rows and columns; therefore, images should (in theory) be softer.
The pattern for error diffusion is as below:
```
diffusionFormula = { {0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPosition = {1, 3};
```
![JJN algorithm example][14]
#### The final two dithering algorithms come from Frankie Sierra, who published the Sierra and Sierra Lite matrices<sup>4</sup> in 1989 and 1990 respectively.
# Sierra Dithering
Sierra dithering is based on Jarvis dithering, so it gives similar results, but is marginally faster.
```
diffusionFormula = { {0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPosition = {1, 3};
```
![sierra dithering example][15]
# Sierra Lite Dithering
This yields results similar to Floyd-Steinberg dithering, but is faster.
```
diffusionFormula = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPosition = {1, 2};
```
![Sierra Lite dithering][16]
# Comparison
Here's an interactive comparison of the algorithms on different images:
```
Manipulate[
Dither[im, c,
StringDelete[StringDelete[StringDelete[algo, " "], ","],
"-"]], {{im, image, "Image"}, {im1, im2, im3}}, {{c, 12,
"Number of colours"}, 2, 1024, 1},
{{algo, "Floyd-Steinberg", "Algorithm"}, {"Floyd-Steinberg",
"Jarvis, Judice, Ninke", "Atkinson", "Sierra", "Sierra Lite"}}]
```
Download my computational essay to see it in action. Alternatively, use the functions in the "Implementation" section to evaluate the code. `im1`, `im2`, `im3` can be replaced by images. I have submitted the comparison to [Wolfram Demonstrations][17] as well, so it should be available online soon.
# Side-note
Notice how the denominator in the `diffusionFormula` of a number of these algorithms is a power of $2$?
This is because division by a power of 2 is equivalent to [bit-shifting][18] the number to the right by $\log_{2}(divisor)$ bits, making it much faster than division by any other number.
Given the improvements in computer hardware, this is no longer a major concern.
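The equivalence is easy to check (a Python sketch; it holds for non-negative integers whenever the divisor is a power of two):

```python
# For non-negative integers, dividing by 2**k equals shifting right by k bits.
# Floyd-Steinberg's and Sierra Lite's divisors (16 and 4) are powers of two,
# so their error terms can be computed with shifts instead of divisions.

def div_pow2(x, divisor):
    k = divisor.bit_length() - 1          # k = log2(divisor)
    assert divisor == 1 << k, "divisor must be a power of two"
    return x >> k

# Check against ordinary integer division for all byte-sized errors:
checks = all(div_pow2(err * 7, 16) == (err * 7) // 16 for err in range(256))
```

Jarvis dithering's divisor of 48 cannot be handled this way, which is part of why the Sierra matrices (with divisor 32) were once attractive.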
## Come up with your own dithering algorithm!
I noticed that the dithering algorithms are almost the same as each other (especially in their `diffusionPosition`).
I have therefore written my functions so that you can simply tweak the input arguments `diffusionFormula` and `diffusionPosition` to test out your own algorithms!
Here's one I tried:
![my own dithering algorithm][19]
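The same tweak-your-own design can be sketched in Python: one routine parameterised by a weight matrix and the pixel's position within it, mirroring the `diffusionFormula` / `diffusionPosition` arguments of the post's Wolfram functions (grayscale-only illustration; the names below are invented here):

```python
# Generic error diffusion: `formula` holds already-divided weights and
# `position` is the (1-indexed) location of the current pixel within it,
# matching the conventions used throughout this post.

def dither(img, palette, formula, position):
    h, w = len(img), len(img[0])
    py, px = position[0] - 1, position[1] - 1   # convert to 0-indexed
    a = [row[:] for row in img]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            new = min(palette, key=lambda p: abs(p - a[y][x]))
            err = a[y][x] - new
            out[y][x] = new
            for dy, row_weights in enumerate(formula):
                for dx, wgt in enumerate(row_weights):
                    ny, nx = y + dy - py, x + dx - px
                    if wgt and 0 <= ny < h and 0 <= nx < w:
                        a[ny][nx] += err * wgt
    return out

# Floyd-Steinberg expressed in this scheme:
fs_formula = [[v / 16 for v in row] for row in [[0, 0, 7], [3, 5, 1]]]
result = dither([[0.4] * 8 for _ in range(8)], [0.0, 1.0], fs_formula, (1, 2))
```

Any kernel whose weights sum to 1 and whose non-zero entries lie to the right of or below `position` is a valid candidate, which is exactly the constraint discussed in the "Error diffusion" section.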
# Implementation
In this section, I will discuss the Wolfram implementation, and some of the features and functions I used in my code.
## applyDithering
Even though it's probably the least interesting part conceptually, actually applying the dithering is the most important part of the algorithm, and the one that gave me the most trouble.
I started off by writing it in the functional paradigm. With little knowledge of Wolfram, I stumbled through the docs to assemble pieces of the code. Finally, I had a "working" version of the algorithm, but there was a major problem: a $512\cdot512$ RGB image took 700 seconds to process!
That is far too long for an algorithm whose time complexity is linear in the size of the input.
### Fixes
Some of the trivial fixes involved making more use of inbuilt functions (for example, `Nearest`).
The largest problem is that the Wolfram notebook is an interpreter, not a compiler. It interprets code every time it's run.
So the obvious step to optimising performance was using the `Compile` function in Wolfram.
But, there's a catch!
```
T (R1) 18 = MainEvaluate[ Hold[List][ I27, I28, T (R1) 16, R6]]
95 Element[ T (R2) 17, I20] = T (R1) 18
```
If you see something like the above in the printout of your compiled function, your code is likely to be slow.
`MainEvaluate` means that the compiled function is calling back into the kernel, which is itself a slow process.
To view the human readable form of your compiled Wolfram function, you can use:
```
Needs["CompiledFunctionTools`"];
CompilePrint[yourFunction]
```
To fix this, you basically need to write everything in a procedural form, using loops and similar constructs.
The final step was `RuntimeOptions -> "Speed"`, which trades off some integer overflow checks etc. for a faster runtime.
Find the complete code for the function below:
```
applyDithering =
Compile[{{data, _Real, 3}, {diffusionList, _Real,
2}, {diffusionPos, _Integer, 1}, {colors, _Real, 2}},
Module[{lenx, leny, lenz, lenxdiff, lenydiff, error, val,
realLoc, closestColor, closestColordiff, res, a, diff = data,
diffusionFormula = diffusionList, xx, yy, curxx, curyy,
colorAvailable = colors, tmp, diffusionPosition = diffusionPos,
idx},
{lenx, leny, lenz} = Dimensions[data];
{lenxdiff, lenydiff} = Dimensions[diffusionFormula];
a = data;
res = data;
Do[
val = a[[x, y]];
realLoc = {x - diffusionPos[[1]] + 1,
y - diffusionPos[[2]] + 1};
closestColor = {1000000000., 1000000000., 1000000000.};
closestColordiff = 1000000000.;
Do[
tmp = N[Total[Table[(i[[idx]] - val[[idx]])^2, {idx, 3}]]];
If[tmp < closestColordiff,
closestColordiff = tmp;
closestColor = i;
];,
{i, colorAvailable}
];
error = val - closestColor;
res[[x, y]] = closestColor;
Do[
curxx = realLoc[[1]] + xx - 1;
curyy = realLoc[[2]] + yy - 1;
If[curxx > 0 && curxx <= lenx && curyy > 0 && curyy <= leny,
a[[curxx, curyy, z]] += error[[z]]*diffusionFormula[[xx, yy]]];,
{xx, lenxdiff},
{yy, lenydiff},
{z, 3}
];,
{x, lenx},
{y, leny}
];
Round[res]
],
CompilationTarget -> "C",
RuntimeOptions -> "Speed"
];
```
## Dither
This is the main function that uses `applyDithering`. There are multiple definitions of the function: one with the hardcoded values, and another that allows you to easily implement your own dithering algorithm.
```
(* This is the implementation that takes the algorithm name and applies it *)
Dither[img_Image, colorCount_Integer, algorithm : ("FloydSteinberg" | "JarvisJudiceNinke" |
    "Atkinson" | "Sierra" | "SierraLite")] :=
Module[{diffusionFormulaFS, diffusionPositionFS, diffusionFormulaJJN, diffusionPositionJJN, diffusionFormulaA,
diffusionPositionA, diffusionFormulaS, diffusionPositionS, diffusionFormulaSL, diffusionPositionSL},
(* Floyd Steinberg algorithm constants *)
diffusionFormulaFS = {{0, 0, 7},
{3, 5, 1}} / 16;
diffusionPositionFS = {1, 2};
(* Jarvis, Judice, and Ninke algorithm constants *)
diffusionFormulaJJN = {{0, 0, 0, 7, 5},
{3, 5, 7, 5, 3},
{1, 3, 5, 3, 1}} / 48;
diffusionPositionJJN = {1, 3};
(* Atkinson algorithm constants *)
diffusionFormulaA = {{0, 0, 1, 1},
{1, 1, 1, 0},
{0, 1, 0, 0}} / 8 ;
diffusionPositionA = {1, 2};
(* Sierra algorithm constants *)
diffusionFormulaS = {{0, 0, 0, 5, 3},
{2, 4, 5, 4, 2},
{0, 2, 3, 2, 0}} / 32 ;
diffusionPositionS = {1, 3};
(* Sierra Lite algorithm constants*)
diffusionFormulaSL = {{0, 0, 2},
{0, 1, 1}} / 4;
diffusionPositionSL = {1, 2};
colorAvailable =
Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Switch[algorithm,
"FloydSteinberg",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaFS, diffusionPositionFS, colorAvailable],
"Byte"],
"JarvisJudiceNinke",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaJJN, diffusionPositionJJN, colorAvailable],
"Byte"],
"Atkinson",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaA, diffusionPositionA, colorAvailable],
"Byte"],
"Sierra",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaS, diffusionPositionS, colorAvailable],
"Byte"],
"SierraLite",
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormulaSL, diffusionPositionSL, colorAvailable],
"Byte"]
]
];
(* This is the function that makes it easy to make your own dithering
algorithm *)
Dither[img_Image, colorCount_Integer, diffusionFormula_List, diffusionPosition_List] := Module[{},
colorAvailable = Round[List @@@ ColorConvert[DominantColors[img, colorCount], "RGB"] * 255];
Image[
applyDithering[ImageData[RemoveAlphaChannel[img], "Byte"], diffusionFormula, diffusionPosition, colorAvailable],
"Byte"]
];
```
# **Classifying images**
The second part of my project involves classifying dithered images and mapping them to the algorithm they were obtained from.
This sounds like a relatively easy task for machine learning, but it turned out to be much harder. Moreover, I am not aware of any prior research on classifying this kind of image "metadata", which made the task more rewarding.
I ended up creating a model which has an **accuracy of more than 90%**, which is reasonably good for machine learning.
If you are uninterested in the failures encountered and the details of the dataset used, please skip ahead to the section on "ResNet-50 with preprocessing".
# Dataset
To obtain data, I did a web search for images with a keyword chosen randomly from a dictionary of common words.
The images obtained are then run through the five algorithms I implemented and are stored as the training data. This is to ensure that the images aren't distinguishable by their actual contents, since that would interfere with learning about the dithering algorithm used in each image.
The images were allowed to use up to 24 colours, auto-selected as described in the section on "Color Palette".
Here is the code for downloading, applying the dithering, and re-storing the images. Note that it is not designed with re-usability in mind, these are just snippets coded at the speed of thought:
```
(* This function scrapes random images from the internet and stores \
them to my computer *)
getImages[out_Integer, folderTo_String] := Module[{},
res = {};
Do [
Echo[x];
l = RemoveAlphaChannel /@
Map[ImageResize[#, {512, 512}] &,
Select[WebImageSearch[RandomChoice[WordList["CommonWords"]],
"Images"],
Min[ImageDimensions[#]] >= 512 &]];
AppendTo[res, Take[RandomSample[l], Min[Length[l], 2]]];
Pause[10];,
{x, out}
];
MapIndexed[Export[
StringJoin[folderTo, ToString[97 + #2[[1]]], "-",
ToString[#2[[2]]], ".png"], #1] &, res, {2}]
]
(* This function applies the dithering and stores the image *)
applyAndStore[folderFrom_String, folderTo_String] := Module[{},
images = FileNames["*.png", folderFrom];
origImages = Map[{RemoveAlphaChannel[Import[#]], #} &, images];
Map[Export[StringJoin[folderTo, StringTake[#[[2]], {66, -1}]],
Dither[#[[1]], 24, "Sierra"]] &, origImages]
];
```
Here are some more variable definitions and metadata about the dataset that is referenced in the following sections.
![dataset metadata][20]
# Plain ResNet-50
My first attempt was to use a popular neural net, "ResNet-50 Trained on ImageNet Competition Data",<sup>5</sup> and retrain it on my training data.
One of the major reasons for choosing this architecture was that it identifies the main object in an image very accurately, and is quite deep. Both these properties seemed very suitable for my use case.
However, the results turned out to be very poor. When I noticed this during the training session, I stopped the process early on. It can be speculated that the poor results were because it was trying to infer relations between the colours in the image.
# Border classification
Since the borders of an image are least affected by the dithering algorithm, being closest to a simple rounding to the nearest colours, it should be easier to learn the constants of the `diffusionFormula` from them.
Therefore, we can pre-process an image and use only its border pixels for classification.
## borderNet
Observing the aforementioned fact, I implemented a neural net which tried to work with just the borders of the image. This decreased the size of the data to $512 \cdot 4$ per image.
## borderNet with just left and top border
Since my implementation of the dithering algorithms starts by applying the algorithm from the top-left corner, the pattern in the left and top borders should be even easier for the net to learn. However, this decreased the size of the data even more to $512 \cdot 2$ per image.
Both nets failed to work very well, with **accuracies of around 20%**. This was probably because each image provided too little data for the nets to train well.
Wolfram code for the net follows:
```
borderNet = NetChain[{
LongShortTermMemoryLayer[100],
SequenceLastLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
},
"Input" -> {"Varying", 3},
"Output" -> NetDecoder[{"Class", classes}]
]
```
# Row and column specific classification
The aim with this approach was to first make the neural net infer patterns in the columns of the image, then combine that information and observe patterns in the rows of the image.
This didn't work very well either. The major reason for the failure was probably that the diffusion is not really as independent as the net might assume it to be.
# Row and column combined classification
Building on the results of the previous approach, the right method seemed to be to use two separate chains and a `CatenateLayer` to combine the results.
For understanding the architecture, observe the `NetGraph` object below:
![netgraph-lstmNet branchy thingy][21]
The Wolfram language code for the net is as follows:
```
lstmNet = NetGraph[{
TransposeLayer[1 <-> 2],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 3}], "Input" -> {512, 512, 3}],
NetMapOperator[
NetBidirectionalOperator[LongShortTermMemoryLayer[25],
"Input" -> {512, 50}], "Input" -> {512, 512, 50}],
TransposeLayer[1 <-> 2],
SequenceLastLayer[],
SequenceLastLayer[],
LongShortTermMemoryLayer[25],
LongShortTermMemoryLayer[25],
SequenceLastLayer[],
SequenceLastLayer[],
CatenateLayer[],
DropoutLayer[0.3],
LinearLayer[Length@classes],
SoftmaxLayer[]
}, {
NetPort["Input"] -> 2 -> 3 -> 7 -> 9 -> 11,
NetPort["Input"] -> 1 -> 4 -> 5 -> 6 -> 8 -> 10 -> 12,
{11, 12} -> 13 -> 14 -> 15 -> 16
},
"Input" -> {512, 512, 3},
"Output" -> NetDecoder[{"Class", classes}]
];
```
However, this net didn't work very well either.
The net had a somewhat unconventional architecture, and its excessive parameter count crashed the Wolfram kernel, so the number of parameters had to be cut down.
Ultimately, it only managed to get an **accuracy rate of around 25-30%**.
# ResNet-50 with preprocessing
The final idea was to use pre-processing to our advantage. Dithering, in its essence, shifts the error downward and towards the right. Therefore, one way to filter the image is to pad it with one row of pixels at the top and one column at the left, and subtract the padded image from the original.
Here's an example of what that looks like:
![FS image preprocessing for net][22]
The code for doing this to an image is as simple as:
```
img - ImageTake[ImagePad[img, 1], {1, 512}, {1, 512}]
```
*Notice how the image (right side, after processing) resembles the parts with the "checkerboard" pattern described in the section "How does one come up with these weird constants?" under "Floyd-Steinberg Dithering".*
The main reason this net works well is that, even with the same colour palette, the gradients of images produced by different dithering algorithms are quite different. This is because of the differences in the error diffusion; by subtracting the padded image from the original, we obtain a filtered version of the dithering patterns, making it easy for the neural net to spot them.
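The shift-and-subtract filter is easy to express outside the Wolfram Language too. Here is a plain-Python sketch of the same operation on a 2D grayscale array (the padded border is assumed to be zero; the name `shift_diff` is invented here):

```python
# Difference filter: subtract the image shifted one pixel down and one pixel
# right (zero-padded) from the original, exposing the high-frequency
# dithering pattern while suppressing smooth image content.

def shift_diff(img):
    h, w = len(img), len(img[0])
    return [[img[y][x] - (img[y - 1][x - 1] if y > 0 and x > 0 else 0)
             for x in range(w)]
            for y in range(h)]

filtered = shift_diff([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])
```

On a smooth gradient like the example, the interior collapses to a near-constant value, while a dithered image would leave behind its characteristic error-diffusion texture.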
The net was trained on AWS for more than 7 hours, on a larger dataset of 1500 images.
The results outperformed my expectations, and on a test-set of more than 700 images, 300 of which were part of the original training data, it showed an **accuracy rate of nearly 91%**.
![classifier measurement object][23]
Here is a code of the net with details:
```
baseModel =
NetTake[NetModel["ResNet-50 Trained on ImageNet Competition Data",
"UninitializedEvaluationNet"], 23]
net = NetChain[{
NetReplacePart[baseModel,
"Input" -> NetEncoder[{"Image", {512, 512}}]],
LinearLayer[Length@classes],
SoftmaxLayer[]},
"Input" -> NetEncoder[{"Image", {512, 512}, ColorSpace -> "RGB"}],
"Output" -> NetDecoder[{"Class", classes}]
]
```
So, it's just **ResNet-50**, modified to work with $512 \cdot 512$ images.
# Future Work
- Look into using machine learning for un-dithering an image.
- Look into creating new dithering algorithms that perform faster or better than the existing ones.
# Notes
All images and visualisations in this post were generated in Wolfram. Their code may be seen in the computational essay attached below.
I would like to thank all the mentors, especially Greg "Chip" Hurst, Michael Kaminsky, Christian Pasquel and Matteo Salvarezza, for their help throughout the project.
Further, I would like to thank Pyokyeong Son and Colin Weller for their help during the project, and refining the essay.
The original, high resolution copies of the images are credited to [Robert Lukeman][24], [Teddy Kelley][25], and [Sebastian Unrau][26] on [Unsplash][27].
# References
[1] : R.W. Floyd, L. Steinberg, An adaptive algorithm for spatial grey scale. Proceedings of the Society of Information Display 17, 75-77 (1976).
[2] : Bill Atkinson, private correspondence with John Balestrieri, January 2003 (unpublished)
[3] : J. F. Jarvis, C. N. Judice and W. H. Ninke, A Survey of Techniques for the Display of Continuous Tone Pictures on Bi-level Displays. Computer Graphics and Image Processing, 5 13-40, 1976
[4] : Frankie Sierra, in LIB 17 (Developer's Den), CIS Graphics Support Forum (unpublished)
[5] : K. He, X. Zhang, S. Ren, J. Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385 (2015)
# [Link to my computational essay][28]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-dithering-cover.gif&userId=1371661
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at3.50.44PM.png&userId=1371661
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.11.46PM.png&userId=1371661
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.15.27PM.png&userId=1371661
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-og.gif&userId=1371661
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=color-palette-final.gif&userId=1371661
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.19.57PM.png&userId=1371661
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.23.38PM.png&userId=1371661
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.28.38PM.png&userId=1371661
[10]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.29.39PM.png&userId=1371661
[11]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.31.40PM.png&userId=1371661
[12]: http://community.wolfram.com//c/portal/getImageAttachment?filename=image-1.png&userId=1371661
[13]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.38.01PM.png&userId=1371661
[14]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.39.37PM.png&userId=1371661
[15]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.41.39PM.png&userId=1371661
[16]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at4.43.33PM.png&userId=1371661
[17]: http://demonstrations.wolfram.com
[18]: https://stackoverflow.com/a/141873
[19]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-13at5.03.37PM.png&userId=1371661
[20]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.11.09AM.png&userId=1371661
[21]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.21.55AM.png&userId=1371661
[22]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.24.56AM.png&userId=1371661
[23]: http://community.wolfram.com//c/portal/getImageAttachment?filename=ScreenShot2018-07-14at1.28.45AM.png&userId=1371661
[24]: https://unsplash.com/photos/zNN6ubHmruI?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[25]: https://unsplash.com/photos/_4Ib-a8g9aA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[26]: https://unsplash.com/photos/CoD2Q92UaEg?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[27]: https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[28]: https://www.dropbox.com/s/kmtzq6x4xkdn9y8/computational-essay.cdf?dl=0Nalin Bhardwaj2018-07-14T15:17:17ZHow to combine Styled Characters?
http://community.wolfram.com/groups/-/m/t/1383808
The end result should be the PNG below, but when I use StringJoin to join styled characters, an error pops up: StringJoin::string: String expected at position 1 in StringJoin[M]. A possible issue is that StringJoin works only with explicit strings, while TreeForm[Style["M",Red]] shows that a styled character is not a String and thus cannot be passed to StringJoin. How could I achieve this?
StringJoin@Style[#, RandomColor[]] & /@ Characters["Mathematica"]
![enter image description here][1]
[1]: http://community.wolfram.com//c/portal/getImageAttachment?filename=O_1.png&userId=1146855freemanwangyl2018-07-14T13:42:43Z