It's not possible to directly "call" a local kernel from the Cloud.
But a local kernel could poll the Cloud, do whatever calculations are desired, and send the results back to the Cloud. The Cloud computation could create a cloud object for every pending computation in a dedicated directory (using CloudPut[expr, "dir/some-id"]). The local kernel would periodically check that directory for new cloud objects (using CloudObjects["dir"]) and process them.
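For example, a rough sketch of that pattern might look like this. The directory names "tasks" and "results", the Hold/ReleaseHold wrapping to keep the work from evaluating in the Cloud, and the way the id is recovered from the object's URL are all just illustrative choices, not the only way to do it:

    (* Cloud side: enqueue a pending computation under a unique id;
       Hold keeps it from evaluating in the Cloud kernel *)
    id = CreateUUID[];
    CloudPut[Hold[Integrate[Sin[x]^2, x]], "tasks/" <> id]

    (* local kernel: evaluate each pending task, put the result back
       under the same id, then delete the consumed task object *)
    processPending[] :=
     Do[
      With[{taskId = Last[URLParse[First[obj], "Path"]]},
       CloudPut[ReleaseHold[CloudGet[obj]], "results/" <> taskId];
       DeleteObject[obj]],
      {obj, CloudObjects["tasks"]}]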
When the local kernel is done, it could either (1) put something back into the Cloud in another directory (which the Cloud kernel would poll), or (2) call an APIFunction, passing in the result. The advantage of (2) is that no polling is necessary on the Cloud side; the disadvantage is that the APIFunction might be executed in a different kernel than the original computation, which could be a problem depending on the application.
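Here is what option (2) could look like. The endpoint path "api/task-result" and the parameter names are made up for illustration, and the local kernel would need to be authenticated (CloudConnect) to call a private endpoint:

    (* Cloud side: deploy an endpoint that stores incoming results *)
    CloudDeploy[
     APIFunction[{"id" -> "String", "result" -> "String"},
      (CloudPut[ToExpression[#result], "results/" <> #id]; "ok") &],
     "api/task-result", Permissions -> "Private"]

    (* local kernel: push a finished result to that endpoint *)
    URLExecute[CloudObject["api/task-result"],
     {"id" -> taskId, "result" -> ToString[result, InputForm]}]

Note that ToExpression will evaluate whatever string arrives, so the private permission setting (or some input validation) matters here.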
MetaInformation might be a useful feature for this, e.g. to store a timestamp, an ID, or a "done" flag for every computation. You can pass MetaInformation as an option to CloudPut, and you can use SetOptions on an existing cloud object. We're actively working on a mechanism to query for objects with certain metadata, but for now, polling a specific directory is probably the best option.
The polling could be done using scheduled tasks.
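For instance, in the local kernel (using the processPending function sketched above; SessionSubmit and ScheduledTask assume a recent version, while older versions would use RunScheduledTask instead):

    (* poll for new tasks every 30 seconds *)
    task = SessionSubmit[
      ScheduledTask[processPending[], Quantity[30, "Seconds"]]];

    (* stop polling when it's no longer needed *)
    TaskRemove[task]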
Sorry this might not be as easy as you would like it to be, but at least these are some ideas. If someone has a more concrete implementation, please share.