Is anyone aware of Wolfram working on adding support for federated learning for model training or any external packages for this purpose?
Given Wolfram's strong cloud support and the ease with which one can move data between client and cloud, the ability to aggregate parameters from multiple independently trained models would be a killer application for Wolfram, particularly when moving local datasets is inefficient, or in highly regulated environments where data sharing is problematic.
I do not think anything is specifically built in, but there is support for spinning up (or connecting to existing) AWS instances. It would probably be a largely manual process: shard your data across whatever servers you want to use, then train on the data local to each server.
There is an interesting open-source project called MathematicaForPrediction that supports training of ensembles of different types of models. But it does not really help with distributing your training to multiple nodes, as far as I know.
Thanks Alec! I'll take a look at that project.
What I was hoping for was some functionality in the Wolfram Language for aggregating the optimized parameters from multiple copies of the same model that have been trained independently on separate datasets.
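For a toy model, that kind of aggregation can be approximated today with built-in functions: train identical copies of a network on separate datasets, extract the learned arrays, and average them element-wise (the "FedAvg" idea). Below is a minimal sketch; `clientData1` through `clientData3` are hypothetical placeholders for each site's local training data, and a single `LinearLayer` stands in for a real architecture:

```
(* Shared architecture, trained independently on each data silo *)
net = NetChain[{LinearLayer[1]}, "Input" -> 1];
trained = NetTrain[net, #] & /@ {clientData1, clientData2, clientData3};

(* Extract the learned weight and bias arrays from each trained copy *)
weights = Normal[NetExtract[#, {1, "Weights"}]] & /@ trained;
biases  = Normal[NetExtract[#, {1, "Biases"}]] & /@ trained;

(* Build an aggregate model whose parameters are the element-wise means *)
globalNet = NetReplacePart[net,
  {{1, "Weights"} -> Mean[weights],
   {1, "Biases"}  -> Mean[biases]}];
```

This is plain parameter averaging over equally weighted clients, not a full federated protocol (no secure aggregation, no per-round client sampling, no weighting by local dataset size), and for deeper networks you would need to walk every layer's arrays rather than hard-code `{1, "Weights"}`.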