Are you sure? Benchmarks I have seen for the new chips would indicate otherwise. Granted, these numbers are not direct comparisons of neural net computations, but they probably indicate what is possible.
As I see it, the main sticking point is that the open-source software Wolfram uses for GPU acceleration of neural net computations is dominated by NVIDIA. The trick, which I think could be done by a small group of people, would be to make a function-call-equivalent library that uses the Apple technology.
This should be within the capabilities of Wolfram or even people in this community. At the very least, macOS and iOS (iPadOS) users would be able to do computations significantly faster than they can now. Even if it can't match NVIDIA, which is debatable, it would be orders of magnitude faster than just using the CPU.
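To give a feel for what such a library would wrap, here is a minimal sketch (my own illustration, not anything Wolfram ships) of a GPU matrix multiply using Apple's Metal Performance Shaders — the kind of primitive a CUDA-equivalent layer would need to expose. The matrix values and sizes are just for demonstration.

```swift
import Metal
import MetalPerformanceShaders

// Illustrative only: a 2x2 single-precision GEMM on the Apple GPU.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("No Metal device available")
}

let n = 2
let rowBytes = n * MemoryLayout<Float>.stride
let desc = MPSMatrixDescriptor(rows: n, columns: n,
                               rowBytes: rowBytes, dataType: .float32)

// On Apple Silicon the CPU and GPU share memory, so shared-mode
// buffers need no explicit host-to-device copies.
var a: [Float] = [1, 2, 3, 4]   // row-major [[1,2],[3,4]]
var b: [Float] = [5, 6, 7, 8]   // row-major [[5,6],[7,8]]
let bufA = device.makeBuffer(bytes: &a, length: n * rowBytes,
                             options: .storageModeShared)!
let bufB = device.makeBuffer(bytes: &b, length: n * rowBytes,
                             options: .storageModeShared)!
let bufC = device.makeBuffer(length: n * rowBytes,
                             options: .storageModeShared)!

let matA = MPSMatrix(buffer: bufA, descriptor: desc)
let matB = MPSMatrix(buffer: bufB, descriptor: desc)
let matC = MPSMatrix(buffer: bufC, descriptor: desc)

// C = alpha * A * B + beta * C
let gemm = MPSMatrixMultiplication(device: device,
                                   transposeLeft: false, transposeRight: false,
                                   resultRows: n, resultColumns: n,
                                   interiorColumns: n, alpha: 1.0, beta: 0.0)

let cmd = queue.makeCommandBuffer()!
gemm.encode(commandBuffer: cmd, leftMatrix: matA,
            rightMatrix: matB, resultMatrix: matC)
cmd.commit()
cmd.waitUntilCompleted()

let result = bufC.contents().bindMemory(to: Float.self, capacity: n * n)
print((0..<n*n).map { result[$0] })   // [19.0, 22.0, 43.0, 50.0]
```

A shim library would map calls like this onto the same entry points the existing NVIDIA-backed code expects, which is why I think a small, focused effort could get surprisingly far.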
Note that the M1 chip does not support eGPUs. It is possible that the chips that Apple ends up putting in the Mac Pro or iMac Pro will support these add-ons. Most of us don't have the deep pockets (or ample grant money) to invest in this solution, though.
I fooled around with this stuff back when Apple used NVIDIA. What was frustrating was that NVIDIA would change which cards were supported, so you could never depend on the resource being available. This is certainly the case now for people using Windows or Linux systems. The nice thing about Apple is that their APIs are abstracted from the hardware, so if the hardware changes (which it just did in a big way), the APIs do not.
I can understand the desire on the part of Wolfram to use cross-platform solutions whenever possible. However, Wolfram has, in the past, made use of specific hardware and OS functionality to fully exploit any advantages. Remember that the Notebook paradigm was available on the Mac (and NeXT) long before Windows because versions of Windows before Win 95 were not very capable. In addition, Mathematica users on Macs could make full use of the 32-bit (and 64-bit) architecture while Windows computers were still hobbled by 16-bit CPUs. Now that we have a glimpse of what Apple Silicon can do, I hope that Wolfram Research will once again take advantage of hardware and OS opportunities as it has in the past.
I have not done this type of coding for some time, or I would be tempted to take on the task myself.