Community RSS Feed
http://community.wolfram.com
RSS Feed for Wolfram Community showing any discussions in tag Import and Export sorted by active
Thoughts on a Python interface, and why ExternalEvaluate is just not enough
http://community.wolfram.com/groups/-/m/t/1185247
`ExternalEvaluate`, introduced in M11.2, is a nice initiative. It enables limited communication with multiple languages, including Python, and appears to be designed to be relatively easily extensible (see ``ExternalEvaluate`AddHeuristic`` if you want to investigate, though I wouldn't invest in this until it becomes documented).
**My great fear, however, is that with `ExternalEvaluate` Wolfram will consider the question of a Python interface settled.**
This would be a big mistake. A *general* framework, like `ExternalEvaluate`, that aims to work with *any* language and relies on passing code (contained in a string) to an evaluator and getting JSON back, will never be fast enough or flexible enough for *practical scientific computing*.
Consider a task as simple as computing the inverse of a $100\times100$ Mathematica matrix using Python (using [`numpy.linalg.inv`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html)).
I challenge people to implement this with `ExternalEvaluate`. It's not possible to do it *in a practically useful way*. The matrix has to be sent *as code*, and piecing together code from strings just can't replace structured communication. The result will need to be received as something encodable to JSON. This has terrible performance due to multiple conversions, and even risks losing numerical precision.
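To make the contrast concrete, here is a minimal Python-side sketch (assuming numpy is available; the `code` string is built only to illustrate what would be shipped across the link, it is never executed here). The actual inversion is a one-liner once the matrix arrives as a numpy array; the code-string-plus-JSON route instead pastes the entire matrix into source text and re-parses the result as text:

```python
import json
import numpy as np

# Direct, structured transfer: the matrix is already a numpy array.
a = np.eye(100) + 0.001 * np.random.rand(100, 100)
inv = np.linalg.inv(a)
assert np.allclose(a @ inv, np.eye(100))

# What the code-string route forces instead: the whole matrix is
# pasted into Python source text on the way in...
code = "numpy.linalg.inv(numpy.array(%s)).tolist()" % json.dumps(a.tolist())
# ...and the result comes back as JSON text, re-parsed on the other side.
roundtrip = np.array(json.loads(json.dumps(inv.tolist())))
# One 100x100 matrix alone expands to well over 100 kB of source text.
```

Every number makes at least two binary-to-text-to-binary conversions per direction, which is where both the overhead and the precision risk come from.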
Just sending and receiving a tiny list of 10000 integers takes half a second (!):
In[6]:= ExternalEvaluate[py, "range(10000)"]; // AbsoluteTiming
Out[6]= {0.52292, Null}
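The transfer path behind that half second can be sketched in plain Python (stdlib only; the timing itself is on the Mathematica side):

```python
import json

# The JSON wire format an ExternalEvaluate-style transport uses
# for a list of 10000 integers:
payload = list(range(10000))
encoded = json.dumps(payload)   # every integer becomes decimal text
decoded = json.loads(encoded)   # ...and is parsed back on the other side
assert decoded == payload
# A binary protocol (e.g. MathLink's packed-array transfer) would ship
# the raw machine integers with no per-element text conversion.
```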
Since I am primarily interested in scientific and numerical computing (as I believe most M users are), I simply won't use `ExternalEvaluate` much, as it's not suitable for this purpose. What if we need to do a [mesh transformation](https://mathematica.stackexchange.com/q/155484/12) that Mathematica can't currently handle, but there's a Python package for it? That is exactly the kind of problem I am looking to apply Python to. I have in fact done mesh transformations using MATLAB toolboxes directly from within Mathematica, using [MATLink][1], while doing the rest of the processing in Mathematica. But I couldn't do this with `ExternalEvaluate`/Python in a reasonable way.
In 2017, any scientific computing system *needs* to have a Python interface to be taken seriously. [MATLAB has one][2], and it *is* practically usable for numerical/scientific problems.
----
## A Python interface
I envision a Python interface which works like this:
- The MathLink/WSTP API is exposed to Python, and serves as the basis of the system. MathLink is good at transferring large numerical arrays efficiently.
- Fundamental data types (lists, dictionaries, bignums, etc.) as well as datatypes critical for numerical computing (numpy arrays) can be transferred *efficiently* and *bidirectionally*. Numpy arrays in particular must translate to/from packed arrays in Mathematica with the lowest possible overhead.
- Python functions can be set up to be called from within Mathematica, with automatic argument translation and return type translation. E.g.,
PyFun["myfun"][ (* myfun is a function defined in Python *)
{1,2,3} (* a list *),
PyNum[{1,2,3}] (* cast to numpy array, since the interpretation of {1,2,3} is ambiguous *),
PySet[{1,2,3}] (* cast to a set *)
]
- The system should be user-extensible to add translations for new datatypes, e.g. a Python class that is needed frequently for some application.
- The primary mode of operation should be that Python is run as a slave (subprocess) of Mathematica. But there should be a second mode of operation where both Mathematica and Python are being used interactively, and they are able to send/receive structured data to/from each other on demand.
- As a bonus: Python can also call back to Mathematica, so e.g. we can use a numerical optimizer available in Python to find the minimum of a function defined in Mathematica
- An interface whose primary purpose is to call Mathematica from Python is a different topic, but can be built on the same data translation framework described above.
The development of such an interface should be driven by real use cases. Ideally, Wolfram should talk to users who use Mathematica for more than fun and games, and do scientific computing as part of their daily work, with multiple tools (not just M). Start with a number of realistic problems, and make sure the interface can help in solving them. As a non-trivial test case for the datatype-extension framework, make sure people can set up auto-translation for [SymPy objects][3], or a [Pandas dataframe][4], or a [networkx graph][5]. Run `FindMinimum` on a Python function and make sure it performs well. (In a practical scenario this could be a function implementing a physics simulation rather than a simple formula.) As a performance stress test, run `Plot3D` (which triggers a very high number of evaluations) on a Python function. Performance and usability problems will be exposed by such testing early, and then the interface can be *designed* in such a way as to make these problems at least solvable (if not immediately solved in the first version). I do not believe that they are solvable with the `ExternalEvaluate` design.
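As a toy illustration of why the `FindMinimum`/`Plot3D` stress tests matter (a self-contained sketch, not any actual Wolfram API): an optimizer drives a large number of objective evaluations, and in the cross-language setting each one is a full round trip:

```python
import math

def golden_minimize(f, lo, hi, tol=1e-8):
    """Minimal golden-section search. In the Mathematica <-> Python
    scenario, every call to f would be one cross-language round trip."""
    invphi = (math.sqrt(5) - 1) / 2
    calls = 0
    while hi - lo > tol:
        a = hi - invphi * (hi - lo)
        b = lo + invphi * (hi - lo)
        calls += 2
        if f(a) < f(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2, calls

# Stand-in for a "physics simulation" objective: minimum at x = 2.
x, n_calls = golden_minimize(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

Even this toy 1-D problem makes dozens of objective calls; with a half-second round trip per transfer, per-call overhead would dominate any real optimization or plotting run, which is why the interface has to be designed around it.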
Of course, this is not the only possible design for an interface. J/Link works differently: it has handles to Java-side objects. But it also has a different goal. Based on my experience with MATLink and RLink, I believe that *for practical scientific/numerical computing*, the right approach is what I outlined above, and that the performance of data structure translation is critical.
----
## ExternalEvaluate
Don't get me wrong, I do think that the `ExternalEvaluate` framework is a very useful initiative, and it has its place. I say this because I looked at its source code and it appears to be easily extensible. R has ZeroMQ and JSON capabilities, and it looks like one could set it up to work with `ExternalEvaluate` in a day or so. So does Perl; anyone want to give it a try? `ExternalEvaluate` is great because it is simple to use and works (or can be made to work) with just about any interpreted language that speaks JSON and ZeroMQ. But it is also, in essence, a quick and dirty hack (one that's extensible in a quick and dirty way), and it won't be able to scale to the types of problems I mentioned above.
----
## MathLink/WSTP
Let me finally say a few words about why MathLink/WSTP are critical for Mathematica, and what should be improved about them.
I believe that any serious interface should be built on top of MathLink. Mathematica already has a good inter-process communication layer that is designed to work well with Mathematica and to handle numerical and symbolic data efficiently, so use it!
Two things are missing:
- Better documentation and example programs, so more people will learn MathLink
- If the MathLink library (not Mathematica!) were open source, people would be able to use it to link to libraries [which are licensed under the GPL][6]. Even a separate open source implementation that only supports shared-memory passing would be sufficient; there is no need to publish the currently used code in full. Many scientific libraries are licensed under the GPL, often without their authors realizing that this practically prevents the libraries from being used from closed-source systems like Mathematica (because of the need to link against the MathLink libraries). To be precise, GPL-licensed code *can* be linked with Mathematica, but the result cannot be shared with anyone. I have personally asked the author of one library to grant an exception for linking to Mathematica, and they did not grant it. Even worse, I am not sure they understood the issue. The authors of other libraries *cannot* grant such permission, because they themselves are using yet other GPL'd libraries.
[MathLink already has a more permissive license than Mathematica.][7] Why not go all the way and publish an open source implementation?
I am hoping that Wolfram will fix these two problems, and encourage people to create MathLink-based interfaces to other systems. (However, I also hope that Wolfram will create a high-quality Python link themselves instead of relying on the community.)
I have talked about the potential of Mathematica as a glue language at some Wolfram events in France, and I believe that the capability to easily interface with external libraries/systems is critical for Mathematica's future, and so is a healthy third-party package ecosystem.
[1]: http://matlink.org/
[2]: https://www.mathworks.com/help/matlab/matlab-engine-for-python.html
[3]: http://www.sympy.org/
[4]: http://pandas.pydata.org/
[5]: https://networkx.github.io/
[6]: https://en.wikipedia.org/wiki/Copyleft
[7]: https://www.wolfram.com/legal/agreements/mathlink.html

Szabolcs Horvát, 2017-09-15T12:33:04Z

Avoid truncated tweet texts when using ServiceExecute?
http://community.wolfram.com/groups/-/m/t/1289820
I use the Twitter service connection to extract tweets and do some analysis:
twitter = ServiceConnect["Twitter", "New"]
listTws = twitter["TweetSearch", "Query" -> "Trump", MaxItems -> 10];
For example, the text of the first tweet can be extracted with
listTws[[1]]["Text"]
whose output is
"RT @BreitbartNews: \"Let\[CloseCurlyQuote]s turn a negative into a
positive... I want them to say, \[OpenCurlyQuote]Look, we are not
profiting off the deaths of children. We..."
The tweet is truncated after "We...". If I look at the [original tweet][1] (ID 966454631361589249), the text is complete.
How can I extract the whole text?
Is there a way to avoid importing retweets? They are the majority and look like "RT @name blabla".
[Here][2] I read that I need to set "tweet_mode=extended". How can I do that with Mathematica?
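At the raw REST level this is just one extra query parameter on the v1.1 search endpoint; whether `ServiceExecute` lets you inject it is exactly the open question here. A stdlib-only sketch of the request URL (OAuth authentication is still required and is omitted):

```python
from urllib.parse import urlencode

# Ask the v1.1 search endpoint for the full 280-character text,
# which then arrives in the "full_text" field instead of "text".
params = {
    "q": "Trump",
    "count": 10,
    "tweet_mode": "extended",   # without this, "text" is cut at 140 chars
}
url = "https://api.twitter.com/1.1/search/tweets.json?" + urlencode(params)
```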
**UPDATE:** I have noticed that the truncated tweets' length is 140, whereas the full length is 280. This may be related to Twitter's new 280-character limit. Is there a way to fix it?
[2]: https://twittercommunity.com/t/retrieve-full-tweet-when-truncated-non-retweet/75542/3
[1]: https://twitter.com/nigel_trump/status/966454631361589249

Francesco Sgarlata, 2018-02-21T23:57:16Z

Analyzing historical State of the Union address data
http://community.wolfram.com/groups/-/m/t/1275556
When I heard the State of the Union was coming up, I knew it was going to call for a Mathematica data deep dive! With the help of the data from [this site][1] and some archived Wikipedia data, I was able to do just that. Mathematica made it easy to compare and contrast the State of the Union addresses given by each president.
**Creating a CSV file of all the relevant data**
With all the data from the UCSB site, I was well on my way to doing something interesting. I was able to copy most of this information as plain text into an Excel document. However, since some dates fall outside Excel's normal date range, I had to be careful that none of them were converted into invalid dates. I also had to paste in by hand all of the URLs to the SOTU addresses I found on Wikipedia, as well as the time lengths of the spoken speeches. I've linked the CSV file [here][2] as well as in the Mathematica notebook, so you can avoid the same hassle!
**Importing the data and speeches**
Using SemanticImport, this process was relatively simple. The dates, time lengths, and URLs were automatically recognized as entities, which saved me quite a bit of programming time.
fileLocation =
"https://amoeba.wolfram.com/index.php/s/8RtPmIDGwzrr0FR/download";
SOTU = SemanticImport[fileLocation, Automatic, "Rows"]
I was then able to use Import to pull all of the speeches from the URLs obtained in that SemanticImport. These were all imported as lists of strings automatically.
speechList = Import[SOTU[[#]][[6]]] & /@ Range[Length[SOTU]]
Using the AppendTo functionality, I was easily able to add these imported speeches to the original dataset using an iterative loop.
i = 1;
While[i < Length[speechList] + 1,
AppendTo[SOTU[[i]], speechList[[i]]];
i++;
]
Finally, using the GroupBy function, I was able to group this list into lists associated with specific presidents. This was especially helpful for some of the Manipulate functions used later to create WordClouds.
SOTUgrouped = GroupBy[SOTU, First]
**Using WordCloud to visualize each speech**
WordCloud is one of my favorite functions to use in analyzing text and speeches. This function makes it extremely easy to quickly visualize word frequencies in textual data. In this case, I analyzed all of President Trump's State of the Union addresses, excluding the "applause" that was included in the transcript (via WordSelectionFunction). Considering there was only one, this was relatively easy, but I wanted to create a skeleton for the next portion that would join all of his speeches together with the StringJoin function and Slot functionality.
string = StringJoin[
SOTUgrouped["Donald Trump"][[#]][[7]] & /@
Range[Length[SOTUgrouped["Donald Trump"]]]];
WordCloud[string, WordSelectionFunction -> (# != "applause" &)]
![enter image description here][3]
I will say this time and time again: one of the most interesting functions in Mathematica is Manipulate. Using this, I was able to do the same WordCloud for every single one of the presidents and give the user the ability to flip through different presidents and compare. Using the Keys function, I was able to quickly create a list of presidents for the variable President from my grouped list created earlier.
totalsWC = Manipulate[
string =
StringJoin[
SOTUgrouped[President][[#]][[7]] & /@
Range[Length[SOTUgrouped[President]]]];
WordCloud[string, WordSelectionFunction -> (# != "applause" &)],
{President, Keys[SOTUgrouped]}
]
![enter image description here][4]
To take this a step further, I thought it would be even more interesting to have the option to select either the president's speeches as a whole or particular speeches to see how the focus changed throughout the presidency. I first programmed the Speech variable to update dynamically based on the number of speeches under each president. Unfortunately, this would cause some issues when flipping between presidents if a Speech value was out of range. I used some simple If statements with the MemberQ function to make this a bit more user-friendly when jumping around between presidents. Overall though, I followed a pretty similar skeleton to the previous example.
bySpeechWC = Manipulate[
speeches :=
Join[{0 -> "All Speeches"}, # -> "Speech #" <> ToString[#] & /@
Range[Length[SOTUgrouped[President]]]];
wc := If[
MemberQ[Keys[speeches], Speech],
If[
Speech == 0,
WordCloud[string,
WordSelectionFunction -> (# != "Applause" && # != "applause" &),
ImageSize -> Large],
WordCloud[SOTUgrouped[President][[Speech]][[7]],
WordSelectionFunction -> (# != "Applause" && # != "applause" &),
ImageSize -> Large]
],
"Word Cloud cannot be generated until you choose a speech # \
within range"];
date := If[
MemberQ[Keys[speeches], Speech],
If[
Speech == 0,
"N/A",
SOTUgrouped[President][[Speech]][[2]]
],
"Date cannot be generated until you choose a speech # within \
range"];
string =
StringJoin[
SOTUgrouped[President][[#]][[7]] & /@
Range[Length[SOTUgrouped[President]]]];
Column[{
Row[{"President: " <> President}],
Row[{"Date: ", date}],
Row[{wc}]
}],
{President, Keys[SOTUgrouped]},
{Speech, Dynamic[speeches]},
ControlType -> PopupMenu,
Initialization :> (Speech = 0)
]
![enter image description here][5]
**Exploring word usage across different eras**
I decided to take this textual analysis a step further and compare the usage of the most common State of the Union words over time. I started by creating a list of the top 100 most-used words across all of the speeches using WordCounts. I also used StringJoin to combine all of the speeches and DeleteStopwords to avoid counting words like "the", "and", "with", etc.
top100PresTerms =
Take[WordCounts[DeleteStopwords[StringJoin[speechList]],
IgnoreCase -> True], 100]
I knew I would need to tally these within each speech, so I started by testing this with George Washington's original State of the Union, then used the Keys function to find each of the Keys in the top 100 within this speech. This list was put into a list of Associations. Missing[] terms were used as placeholders for top 100 words not found.
gw1 = WordCounts[DeleteStopwords[SOTU[[1]][[7]]], IgnoreCase -> True]
Association[# -> gw1[#] & /@ Keys[top100PresTerms]]
Using this same methodology, I was able to iterate through every speech in order to pair a date with their respective lists of associations of the top 100 words.
i = 1;
wordUse = {};
While[i < Length[SOTU] + 1,
allWords =
WordCounts[DeleteStopwords[SOTU[[i]][[7]]], IgnoreCase -> True];
compList = Association[# -> allWords[#] & /@ Keys[top100PresTerms]];
AppendTo[wordUse, SOTU[[i]][[2]] -> compList];
i++;
]
To explore how this list could be used, I picked a specific word, "government", from my top 100 list. I used this to create a time series of all of the "government" mentions in the speeches. This list was then usable for a DateListPlot to show the change in the frequency of this word over time.
govCount = {wordUse[[#]][[1]], wordUse[[#]][[2]]["government"]} & /@
Range[Length[wordUse]]
DateListPlot[govCount, ImageSize -> Large]
![enter image description here][6]
Using the same methodology, I was able to create similar time series for each of the top 100 words and store them in a list of associations, which would prove to be beneficial for the Manipulate function.
i = 1;
timeSeries = <||>;
While[i < Length[Keys[top100PresTerms]] + 1,
key = Keys[top100PresTerms][[i]];
keyList =
Association[
wordUse[[#]][[1]] -> wordUse[[#]][[2]][key] & /@
Range[Length[wordUse]]];
AppendTo[timeSeries, key -> DeleteMissing[keyList]];
i++;
]
This list of associations made it easy to pull the specific time series for each of the words in the top 100 list. You can see an example of this with the word "state".
timeSeries["state"]
Using this same methodology, I was able to use the Keys function to again pull specific time series from my new list of associations based on the selected top 100 word. The time series was then plotted on a DateListPlot. I allowed the user to select and compare two different keywords. A nice added feature is the legend, which shows each word along with its position in the top 100 list. This was made possible with the Position function.
Manipulate[
DateListPlot[{timeSeries[keyWord1], timeSeries[keyWord2]},
Filling -> Bottom,
ImageSize -> Full,
PlotRange -> Full,
PlotStyle -> {Red, Blue},
GridLines -> {Range[DateObject[{1790}], DateObject[{2015}],
Quantity[5, "Years"]], Range[0, 200, 5]},
PlotLegends ->
Placed[{keyWord1 <> " " <>
ToString[Flatten[Position[Keys[top100PresTerms], keyWord1]]],
keyWord2 <> " " <>
ToString[Flatten[Position[Keys[top100PresTerms], keyWord2]]]},
Above]],
{keyWord1, Keys[top100PresTerms]},
{keyWord2, Keys[top100PresTerms]}
]
![enter image description here][7]
**Observing the trend of spoken vs. written speeches**
I noticed that there were also ebbs and flows of written vs. spoken speeches. To look at this, I tallied the spoken and written speeches of each president and put them into yet another list of associations.
i = 1;
presKeys = Keys[SOTUgrouped];
sORwTally = <||>;
While[i < Length[presKeys] + 1,
indivTally =
Tally[SOTUgrouped[presKeys[[i]]][[#]][[3]] & /@
Range[Length[SOTUgrouped[presKeys[[i]]]]]];
AppendTo[sORwTally, presKeys[[i]] -> indivTally];
i++;
]
I decided that I wanted to visualize these in a PairedBarChart, so I placed them into two lists, spoken and written. I used a series of If statements to test for "spoken" and "written" tallies in my original list, and used 0 as a placeholder in the respective list if one or the other was not present for a specific president.
i = 1;
spoken = {};
written = {};
While[i < Length[presKeys] + 1,
If[Length[sORwTally[presKeys[[i]]]] > 1,
AppendTo[spoken, sORwTally[presKeys[[i]]][[1]][[2]]];
AppendTo[written, sORwTally[presKeys[[i]]][[2]][[2]]];
,
If[sORwTally[presKeys[[i]]][[1]][[1]] == "spoken",
AppendTo[spoken, sORwTally[presKeys[[i]]][[1]][[2]]];
AppendTo[written, 0];
,
AppendTo[spoken, 0];
AppendTo[written, sORwTally[presKeys[[i]]][[1]][[2]]];
]
];
i++;
]
Using this compiled list, it was simple to use PairedBarChart. I experimented with ChartLabels to make the chart a little more user-friendly and ChartStyle to give it a little extra color.
PairedBarChart[spoken, written,
ChartLabels -> {Placed[{"Spoken", "Written"}, Above], None,
presKeys}, ImageSize -> Full, ChartStyle -> "Rainbow"]
![enter image description here][8]
**Comparing the average word count of each president (written, spoken, and total)**
On a similar note, I thought it might be interesting to look at how the word count of both spoken and written State of the Union addresses varied among presidents. I created a list of associations of each president's spoken, written, and total word-count averages using a series of While loops. I added some If statements to set the average to 0 instead of calling the Mean function on an empty list, as I anticipated that some presidents would not have either an instance of "spoken" or "written", per what I found in my previous analysis.
i = 1;
presWordMeans = <||>;
While[i < Length[presKeys] + 1,
j = 1;
indS = {};
indW = {};
indAll = {};
sORwList =
SOTUgrouped[presKeys[[i]]][[#]][[3]] & /@
Range[Length[SOTUgrouped[presKeys[[i]]]]];
While[j < Length[sORwList] + 1,
If[sORwList[[j]] == "spoken",
AppendTo[indS, SOTUgrouped[presKeys[[i]]][[j]][[4]]],
AppendTo[indW, SOTUgrouped[presKeys[[i]]][[j]][[4]]]
];
AppendTo[indAll, SOTUgrouped[presKeys[[i]]][[j]][[4]]];
j++;
];
If[Length[indS] == 0,
sMean = 0;,
sMean = N[Mean[indS]];
];
If[Length[indW] == 0,
wMean = 0;,
wMean = N[Mean[indW]];
];
allMean = N[Mean[indAll]];
AppendTo[presWordMeans, presKeys[[i]] -> {sMean, wMean, allMean}];
i++;
]
Again, using the Manipulate function, I was able to make an interesting dynamic BarChart for users to compare this data across five different presidents. The Keys function again made it incredibly simple to create lists of presidents for the user to select from. ChartLegends were added to distinguish the different averages and ChartLabels were added to clarify which group corresponded to which president.
Manipulate[
BarChart[{presWordMeans[pres1], presWordMeans[pres2],
presWordMeans[pres3], presWordMeans[pres4], presWordMeans[pres5]},
ImageSize -> Full, BarSpacing -> {0, 1},
ChartLabels -> {{pres1, pres2, pres3, pres4, pres5}, None},
ChartStyle -> {Red, Blue, Gray},
ChartLegends ->
Placed[{"Average Spoken Words", "Average Written Words",
"Average Words"}, Above]],
{pres1, presKeys},
{pres2, presKeys},
{pres3, presKeys},
{pres4, presKeys},
{pres5, presKeys}
]
![enter image description here][9]
**Conclusion**
I hope this has proven to be an interesting exploration of textual data analysis. I rarely get a chance to dive into the social sciences with Mathematica, so it was certainly a fun experience for me. This should be a good example of all the different input capabilities as well as the several types of data visualization tools that can be used with the Manipulate function.
Download the full notebook [here][10] or via the attached file! (Sorry guys, but you'll have to upload this year's data on your own!)
[1]: http://www.presidency.ucsb.edu/sou.php
[2]: https://amoeba.wolfram.com/index.php/s/8RtPmIDGwzrr0FR
[3]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUtrumpwc.JPG&userId=1161398
[4]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUpreswc.JPG&userId=1161398
[5]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUbyspeechman.JPG&userId=1161398
[6]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUgovtdateplot.JPG&userId=1161398
[7]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUwordman.JPG&userId=1161398
[8]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUpairedbc.JPG&userId=1161398
[9]: http://community.wolfram.com//c/portal/getImageAttachment?filename=SOTUbcman.JPG&userId=1161398
[10]: https://amoeba.wolfram.com/index.php/s/gS3Z8hcgAmIwlXn

Sam Tone, 2018-01-31T15:41:28Z

Load a function from cubin, ptx or library file using CUDAFunctionLoad?
http://community.wolfram.com/groups/-/m/t/1289656
According to the documentation of `CUDAFunctionLoad` it should be easy to specify a compiled file (cubin, ptx, dll should all work) as the source for loading a `CUDAFunction`. Unfortunately it does not for me. Compiling from source works fine, but as soon as I try to load the `CUDAFunction` from the compiled file (I tried cubin, ptx and a dll) things fail.
Here is a very simple example that does not work for me, no matter what combination I try:
Let's create a cubin file first from a very simple CUDA kernel:
Needs["CUDALink`"];
code = "
__global__ void addTwo(int * in, int * out, int length) {
int index = threadIdx.x + blockIdx.x*blockDim.x;
if (index < length)
out[index] = in[index] + 2;
}";
cubinFile = CreateExecutable[code, "test", "Compiler" -> NVCCCompiler,
"CreateCUBIN" -> True];
This successfully creates `test.cubin`.
Unfortunately loading the function `addTwo` fails:
cudaFun = CUDAFunctionLoad[File[cubinFile],
"addTwo", {{_Integer, _, "Input"}, {_Integer, _,
"Output"}, _Integer}, 256, "ShellCommandFunction" :> Print,
"ShellOutputFunction" -> Print];
> CUDAFunctionLoad::invsrc: CUDALink encountered invalid source input.
> The source input must be either a string containing the program, or a
> list of one element indicating the file containing the program.
The input file should be valid, but maybe I am missing something obvious here. Interestingly enough, going the same route and creating a .ptx file yields a different error:
ptxFile =
CreateExecutable[code, "test", "Compiler" -> NVCCCompiler,
"CreatePTX" -> True];
cudaFun =
CUDAFunctionLoad[File[ptxFile],
"addTwo", {{_Integer, _, "Input"}, {_Integer, _,
"Output"}, _Integer}, 256, "ShellCommandFunction" :> Print,
"ShellOutputFunction" -> Print];
> CUDAFunctionLoad::notfnd: CUDALink resource not found.
In addition to cubin and ptx files I tried compiling a library file using `CreateLibrary`, which was created fine but also could not be loaded using `CUDAFunctionLoad`.
Any ideas on what is going wrong here and how I can actually load a CUDAFunction from a compiled file?
You can just copy and paste the code above into Mathematica and run it, as long as you have CUDA set up properly. Can you reproduce the behavior?
------------------
Additional Information:<br>
I am running Mathematica 11.2 on Windows 10.<br>
CUDA is set up properly and I can do all CUDA computations in Mathematica.

Wizard, 2018-02-22T02:06:27Z

Set Image acquisition ($ImagingDevice) on Unix?
http://community.wolfram.com/groups/-/m/t/1288310
When I do<br>
$ImagingDevice
I get the following message
Message[$ImagingDevice::notsupported, "Unix"]
What can I do to fix it?

Santiago Hincapie, 2018-02-19T23:04:51Z

Upload file to Wolfram Cloud?
http://community.wolfram.com/groups/-/m/t/1288289
I'm logged into my cloud account, but I can't seem to save/open anything from Mathematica Online's folders... Any suggestions?
![enter image description here][2]
[2]: http://community.wolfram.com//c/portal/getImageAttachment?filename=Untitled.jpeg&userId=900170

Mike Sollami, 2018-02-19T22:01:28Z