I have some packages written in Mathematica, some of which use CUDA. These packages are part of a complete application. Now I want to use the Wolfram Cloud as part of the total solution (web forms for the user interface). This leads to the question: how can the Wolfram Cloud kernel call my local, internet-connected kernel, have it do some (CUDA) calculations, and deliver the results back to the Cloud kernel in charge? Any thoughts on how to do this?
Thanks for your suggestion on how to approach this problem. I hadn't thought of it that way.
When I was writing the question I was thinking along the lines of creating a link with the local (sub)kernel, as in:
localkernel = LinkLaunch[First[$CommandLine] <> " -mathlink"] (I read this in a Yu-Chang paper)
but then I would need a URLFetch or something in there to call my local (sub)kernel.
Setting up the link requires MathLink (or WSTP); I don't know whether such a link can run between a cloud kernel and a local kernel.
The last step would be to issue LinkWrite and LinkRead calls from the cloud kernel to get my functions to evaluate.
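For reference, this is roughly the pattern I had in mind. It works between two kernels on the same machine; whether it can reach a cloud kernel is exactly my question. (Note: after LinkLaunch, the subkernel first sends an InputNamePacket that has to be read before the first result.)

```wolfram
(* Launch a local subkernel and talk to it over MathLink/WSTP *)
localkernel = LinkLaunch[First[$CommandLine] <> " -mathlink"];

LinkRead[localkernel]; (* drain the initial InputNamePacket *)

(* Send an expression for evaluation and read the answer back *)
LinkWrite[localkernel, Unevaluated[EvaluatePacket[2 + 2]]];
LinkRead[localkernel]  (* a ReturnPacket containing the result *)
```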
Your first statement that you cannot directly "call" a local kernel might mean that a WSTP link is not possible. If so, please confirm, and I can drop my own approach and focus on implementing yours. Many thanks for your thoughts!
It's not possible to directly "call" a local kernel from the Cloud.
But a local kernel could poll the Cloud, do any calculations desired, and send the results back to the Cloud. The Cloud computation could create a cloud object for every pending computation, in a dedicated directory (using CloudPut[expr, "dir/some-id"]). The local kernel would periodically check for new cloud objects in that directory (using CloudObjects["dir"]) and process them.
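A minimal sketch of that polling pattern, assuming each task is stored as an association carrying its own id (myCudaComputation and the "tasks"/"results" directory names are placeholders):

```wolfram
(* Cloud side: enqueue a pending computation, tagged with an id *)
id = CreateUUID[];
CloudPut[<|"id" -> id, "expr" -> Hold[myCudaComputation[data]]|>,
  "tasks/" <> id];

(* Local side: poll the directory, evaluate each task, publish the result *)
Scan[
  Function[obj,
    task = CloudGet[obj];
    result = ReleaseHold[task["expr"]];          (* runs locally, e.g. with CUDALink *)
    CloudPut[result, "results/" <> task["id"]];  (* make the result visible to the Cloud *)
    DeleteObject[obj]                            (* remove the processed task *)
  ],
  CloudObjects["tasks"]
]
```

Holding the expression with Hold keeps it from evaluating in the Cloud; ReleaseHold then triggers the evaluation on the local machine, where the GPU is.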
When the local kernel is done, it could either (1) put something back into the Cloud in another directory (which the Cloud kernel would poll), or (2) call an APIFunction, passing in the result. The advantage of (2) is that no polling is necessary on the Cloud side; the disadvantage is that the APIFunction might be executed in a different kernel than the original computation, which might be a problem depending on the application.
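Option (2) could look something like this sketch (names are illustrative; Permissions -> "Public" is only for demonstration — in practice you would restrict access):

```wolfram
(* Cloud side: deploy an API that receives results from the local kernel *)
api = CloudDeploy[
  APIFunction[{"id" -> "String", "result" -> "String"},
    (CloudPut[ToExpression[#result], "results/" <> #id]; "ok") &],
  "result-callback",
  Permissions -> "Public"];

(* Local side: after computing, push the result back through the API *)
URLExecute[api, {"id" -> taskId, "result" -> ToString[result, InputForm]}]
```

Round-tripping the result through ToString[..., InputForm]/ToExpression is a simple way to ship an arbitrary expression through a string-typed API parameter.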
MetaInformation might be a useful feature for this, e.g. to store a timestamp, an ID, or a "done" flag for every computation. You can pass MetaInformation as an option to CloudPut, and you can use SetOptions on an existing cloud object. We're actively working on a mechanism to query for objects with certain metadata, but for now, polling a specific directory is probably the best option.
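For example (a sketch; the particular keys "created" and "done" are just a convention you would pick yourself):

```wolfram
(* Attach metadata when creating the cloud object *)
obj = CloudPut[expr, "tasks/" <> id,
  MetaInformation -> {"created" -> DateString[], "done" -> False}];

(* Later, mark the existing object as done *)
SetOptions[obj, MetaInformation -> {"done" -> True}];

(* Read the metadata back *)
Options[obj, MetaInformation]
```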
The polling could be done using scheduled tasks.
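On the local side, a scheduled task might be set up along these lines (processTask is a placeholder for whatever per-object handling you implement):

```wolfram
(* Local kernel: check the task directory every 30 seconds *)
RunScheduledTask[
  Scan[processTask, CloudObjects["tasks"]],
  30]
```

A similar ScheduledTask could be deployed to the Cloud to poll the results directory if you go with option (1).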
Sorry this might not be as easy as you would like it to be, but at least these are some ideas. If someone has a more concrete implementation, please share.