One more thing: Wolfram Research, and Stephen personally, have done a lot towards the democratization of computation. That is, WL has made the tools needed to do computation available for a much wider audience.
Fully supporting Apple hardware should be a critical step in this process. Fully supporting Metal could yield a two- to three-order-of-magnitude speedup for many operations. While a MacBook Air is always going to be slower than a Mac Pro, it is likely that a MacBook Air with full GPU support would be faster than a Mac Pro without it.
The key idea is that anyone with a low-end Apple product (including the iPad) could do significant work using Mathematica.
I'm not pretending that Apple is creating the GPU APIs for scientific computing. Games have been the driver for GPU development and use for some time now.
Wolfram has already gone far in making advanced computing and data science accessible to pretty much anyone who can afford a computer (and broadband). Fully supporting Metal (etc.) on these devices would lower the threshold to exploration and use for AI-type computations, as well as any other computations that can benefit from the GPU.
I am not an expert on Windows or Linux hardware, but my guess is that NVIDIA GPUs are restricted to high-end machines. The hardware vendors have essentially written off all but the most dedicated users (with deep pockets), so there is little incentive, for example, for the groups maintaining the open source GPU tools to support anything but a narrow range of hardware.
As I have written previously, Apple has already done the heavy lifting by providing a set of APIs that work across their entire product range, and that will continue to work for the foreseeable future -- including across the rumored switch to ARM. I realize that maintaining platform-specific code in a cross-platform program raises difficulties. However, Wolfram Research is already doing that, more so in recent years as the challenges of maintaining a common code base have grown.
While the benefits of machine learning have been badly over-hyped (again), we have seen that in a lot of specialized domains it is very useful, so hardware acceleration via the GPU would benefit almost all Mathematica users, not just those specializing in neural nets. Being able to do a computation in seconds rather than hours will let a large number of people try things that they might not otherwise attempt.
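To make the stakes concrete: in current Wolfram Language, requesting GPU training is a one-option change, but that option only works with a CUDA-capable NVIDIA card, which is exactly the limitation this proposal addresses. A minimal sketch (the network shape and `trainingData` are placeholders for illustration):

```
(* A small classifier; layer sizes are illustrative *)
net = NetChain[{LinearLayer[100], Ramp, LinearLayer[10], SoftmaxLayer[]}];

(* TargetDevice -> "GPU" currently assumes NVIDIA/CUDA hardware;
   on a Mac without such a card this falls back to (or fails on) the CPU *)
trained = NetTrain[net, trainingData, TargetDevice -> "GPU"]
```

If `TargetDevice -> "GPU"` were backed by Metal, the same one-line option would light up the GPU in every current Mac and iPad, with no change to user code.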
Providing full GPU support in WL for the entire Apple range would benefit a far greater number of current and potential users of Mathematica than practically any other software initiative.