I must apologize for getting this thread off on a tangent. Ultimately, the issue is not whether Apple's Neural Engine and GPU are better than NVIDIA's, but whether Mathematica will make full use of the hardware available to macOS (and probably iPadOS) users.
After what was, in my opinion, a slow start, Mathematica is making use of Metal (rather than relying on OpenGL), and the results are very good. This has permitted expansion into new areas, such as the very experimental ray tracing in the 12.2 beta.
All I am looking for, as a long-time Mathematica-on-Macintosh user, is that the software make full use of the available hardware. I don't care if NVIDIA is faster on different hardware, as long as I get decent hardware acceleration on mine.
Mathematica has been characterized as a Swiss Army knife. The analogy is not exact, since software is not subject to the same design constraints as a physical object. However, the analogy is apt in the following sense: Mathematica meets the needs of the vast majority of users who need to apply mathematical techniques or computer-science magic without having to deal with the messy details. I look at the developers of Mathematica's functions as collaborators in a very real sense. Back in the dark ages, I filled that role when I worked in a research lab, but those days are long gone.
In the case of machine learning, I have no doubt that there are dedicated tools that do a better job, just as there are better tools for audio and image processing -- although there are some things Mathematica can do that dedicated programs cannot, simply because of its access to mathematical functionality created for other purposes.
I am not concerned about the availability of optimized libraries similar to those for Intel (and NVIDIA). Worst case, Rosetta 2 should be able to handle that functionality and still be faster than running on Intel. However, Apple has the resources to do what you suggest -- and they have already done so with previous transitions. Hardware evolves all the time. It can be painful, but that is what it is, living on the frontier. Perhaps having coded for nearly 50 years gives me a different perspective.
Bottom line
What I would like to see is that, in the not-too-distant future, any Wolfram Language function that offers a GPU option (such as TargetDevice -> "GPU") actually uses the GPU on my Mac. I don't care that I could get faster results by investing in different hardware (and spending the time and money to make it work). For almost everything else I need my computer to do, Mathematica and the Wolfram Language give me the tools to do a much better job than I could on my own. It is only the lack of support for the GPU and related hardware technologies (the Neural Engine, etc.) that is an issue.
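To be concrete, here is a minimal sketch of the kind of call I have in mind, using the neural-net framework. The toy network and random dummy data are purely illustrative, and as far as I know the TargetDevice -> "GPU" option still requires an NVIDIA/CUDA card today, which is exactly the limitation I am describing.

(* Dummy data: 1000 random 28x28 "images" labeled 0-9, purely for illustration *)
trainingData = Table[RandomReal[1, {28, 28}] -> RandomInteger[{0, 9}], 1000];

(* A toy classifier network *)
net = NetChain[
  {FlattenLayer[], LinearLayer[100], Ramp, LinearLayer[10], SoftmaxLayer[]},
  "Input" -> {28, 28},
  "Output" -> NetDecoder[{"Class", Range[0, 9]}]];

(* The wish: this call should run on the Mac's GPU rather than falling back to the CPU *)
trained = NetTrain[net, trainingData, TargetDevice -> "GPU"]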
I have spoken with several Mathematica users who use macOS, both within Wolfram Research and elsewhere, and this is something that all of them would like.