I like this post, although it is quite old, and I have my own definition of "better", or at least of what is missing, so I created this new post. I am also new to this planet, so pardon me if I don't know much about Mathematica either; these are a few first impressions.
In general, automated theorem proving and computer-checked proofs are not accepted because, on one side, people have no clue what is going on, and on the other side, they may ultimately lose their jobs because they don't have enough knowledge to keep up.
The first problem can be improved with better features in "the software", in this case Mathematica; the second cannot, because it is essentially a cultural issue, which for the layman can be summarized as "not invented here" syndrome.
The same problem applies more or less to the Wolfram Language, and I would like to hear more recent feelings from the community, given that the original post is now 7 years old.
When you invoke NDSolve or similar functions, quite a lot happens under the hood, and the user has no way to understand what is going on. This is a bit like the problem of explainable AI: people will not socially accept an AI algorithm as long as it cannot reasonably explain what the neural network does internally.
The same applies here: when you run a command, you would like some form of printout of all the internal computational steps, one that is human-readable and usable, ideally even fit for a publication, whether the command is a numerical solver or just a piece of algebra.
All these internal steps can then be verified by humans with enough time and patience, and it is no longer debatable whether the extracted result is correct, or whether the error lies in the engine of the car or in the driver.
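To illustrate the kind of printout I mean, here is a minimal sketch in Python. The tuple encoding of expressions and the rule names are my own toy assumptions, not anything Mathematica actually exposes: the point is only that every rewrite the engine performs is recorded as a human-readable line that could be audited later.

```python
# Toy traceable term-rewriting: an expression is an atom (str/int) or a
# tuple (op, left, right), e.g. ("*", ("+", "x", 0), 1). Every applied
# rule is logged so a human can verify each step of the simplification.

RULES = [
    ("x + 0 -> x", lambda e: e[1] if e[0] == "+" and e[2] == 0 else None),
    ("x * 1 -> x", lambda e: e[1] if e[0] == "*" and e[2] == 1 else None),
    ("x * 0 -> 0", lambda e: 0 if e[0] == "*" and e[2] == 0 else None),
]

def show(e):
    """Render an expression tree as an infix string."""
    if not isinstance(e, tuple):
        return str(e)
    return f"({show(e[1])} {e[0]} {show(e[2])})"

def rewrite(e, steps):
    """Simplify bottom-up, appending one human-readable line per step."""
    if not isinstance(e, tuple):
        return e
    e = (e[0], rewrite(e[1], steps), rewrite(e[2], steps))
    for name, rule in RULES:
        new = rule(e)
        if new is not None:
            steps.append(f"{show(e)}  --[{name}]-->  {show(new)}")
            return rewrite(new, steps)
    return e

steps = []
result = rewrite(("*", ("+", "x", 0), 1), steps)
for line in steps:
    print(line)          # prints each rewrite as an auditable line
print("result:", result) # prints: result: x
```

Of course a real CAS applies thousands of far subtler steps; the question is whether the engine can expose them in this auditable form at all.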
From a language implementation perspective, this is more or less the opposite of the current philosophy of the Wolfram Language, i.e. black-box high-level routines.
My old professor used to say that you always want to know what is going on: when you add even a small layer of math (in his case it was scattering), you can quickly lose track of what is happening, and if that small layer becomes a black-box high-level routine in a computer program, it is even worse. You therefore want a traceable process; in his case that meant stepping up from geometry to ray tracing to full scattering, so that if you are not happy with one part of the process you can fine-tune it in detail. This means that algorithms should be not just explainable but also tunable. Ultimately this is part of the scientific process, and it is somewhat absent from the Wolfram Language.
From an IT perspective, the commercial business model does not help here: Python notebooks nowadays have quite a healthy community around them, though I cannot currently judge the Wolfram Language's. Commoditizing part of the low-level routines (e.g. basic algebra), i.e. providing them for free or as open source, would go a long way toward getting buy-in from the youngsters. On the other hand, the community is probably a bit siloed with respect to the rest of academia, and a post like this may even look like helping the evil empire, which is really not the intention.
... And yes, of course, better documentation!
Still on the subject of features, another thing I would love to see is some form of automated proof / computer-checking support. I think this could go in the direction of AST manipulation support, i.e. where a proof is essentially an abstract syntax tree, and you can manipulate the tree to add further pieces to the proof. You can then go "from and to" the AST to reach more standard representations such as LaTeX or the Wolfram Language.
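To make the round trip concrete, here is a minimal Python sketch using the standard-library `ast` module: a formula is parsed into a tree, the tree is manipulated (a redundant "+ 0" is rewritten away), and the same tree is then printed back both as source code and as LaTeX. The `to_latex` helper is my own toy printer covering only a couple of node types, not a standard API.

```python
import ast

class DropPlusZero(ast.NodeTransformer):
    """Rewrite sub-trees of the form (expr + 0) into expr."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # rewrite children first (bottom-up)
        if (isinstance(node.op, ast.Add)
                and isinstance(node.right, ast.Constant)
                and node.right.value == 0):
            return node.left
        return node

def to_latex(node):
    """Tiny, partial AST -> LaTeX printer (only *, +, names, numbers)."""
    if isinstance(node, ast.Expression):
        return to_latex(node.body)
    if isinstance(node, ast.BinOp):
        op = r" \cdot " if isinstance(node.op, ast.Mult) else " + "
        return to_latex(node.left) + op + to_latex(node.right)
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return str(node.value)
    raise NotImplementedError(type(node))

tree = ast.parse("x * (y + 0)", mode="eval")
tree = ast.fix_missing_locations(DropPlusZero().visit(tree))
print(ast.unparse(tree))  # back to source form: x * y
print(to_latex(tree))     # and to LaTeX: x \cdot y
```

A proof assistant would of course need a far richer tree than Python expressions, but the workflow (parse, transform, pretty-print to several targets) is exactly the one I have in mind.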
Finally, I don't believe much in the selling mantra of functional programming, and it should not be heavily adopted in the Wolfram scope either, given that it is non-standard and has essentially never scaled to the full industry in the last 80 years. There are beautiful ideas from LISP, but why the heck are we still at the level of a macro language / pure term-rewriting language? Why is there no type checking? Have you ever seen a large-scale functional program? It can quickly become quite a monster.
The Wolfram Language itself tries to be better than that, but I cannot comment on its level of success.
I heard of a team that wanted to move away from Mathematica due to "speed" concerns; is this true?
The whole industry is moving to containers; from that perspective I see Mathematica as yet another environment, and I don't see the urgency to move to more old-fashioned C++.
On the other hand there is technology change: are there really people who want to move from Mathematica to C++ because they need, for example, CUDA or MPI?
I imagine there are already ways to integrate external libraries; here I would name Cactus (e.g. https://einsteintoolkit.org/guidelines.html ), but in essence one would like more features for a kind of headless Mathematica, which is somewhat already there with the interpreter support.
Given that this post contains a few cool ideas that may lead to some form of IP, I am also interested in discussing it further if that helps.
Kindly, Diego