In this post, I am seeking comments on the purpose and value of Mathematica and similar software for society at large.
I am in the process of reading Iain McGilchrist's book The Matter with Things: Our Brains, Our Delusions, and the Unmaking of the World. I think it is an important book, and worth reading. He makes considerable reference to his earlier (and shorter) book, The Master and His Emissary, which I also recommend. For those who want a quick idea of what these books are about, McGilchrist has a lot of videos on YouTube, including a TEDx talk.
The main subject of these books is the way we as humans are connected to the world, and how the two hemispheres of the brain look at reality in different ways. I cannot hope to summarize a 1400+ page book (or even his earlier 500+ page book) in a few words here, but to a very rough approximation, the right hemisphere (the Master) deals with gestalt, metaphor and the "big" picture, while the left hemisphere represents reality by using models, and values certainty, order and algorithms. We need both hemispheres to be fully human.
Computation, which is a component of maths -- but by no means the most important part in most cases -- is largely a left-hemisphere activity. McGilchrist's thesis is that, in our current way of looking at reality, the left hemisphere has usurped the role that properly belongs to the right, to the detriment of society.
What does this have to do with Mathematica?
Wolfram Language is being promoted as "the only FULL-SCALE COMPUTATIONAL LANGUAGE", and as a solution to many types of problems beyond maths and physics. I think that this is mistaken.
I have several examples.
Wolfram Language has functionality to do automated proofs. As Stephen himself has pointed out, these proofs are quite different from proofs that any human would ever provide. There is no narrative, no attempt at conveying understanding, in these proofs. None of them would ever be considered 'beautiful', which has been the hallmark of an elegant proof for millennia. REAL proofs, if I may be so bold, rely on the right hemisphere's way of looking at the world. I think that the reason we do proofs in the first place is to understand the world, to get insight, to make connections, and WL (or any system that does similar 'computer proofs') is simply incapable of this. The issue is not just that computer proofs are different from human proofs; it is that they represent a way of looking at reality that is, in a very important sense, deficient. This issue with mathematical proof may only affect a small number of people, but the way of thinking that would value computer proofs over human ones is becoming pervasive today.
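For concreteness, here is what such a proof looks like in practice. This is the example Stephen has written about -- deriving commutativity from his single axiom for Boolean algebra -- using the built-in FindEquationalProof (the call can take a while, and the exact number of steps may vary):

    (* ask WL to prove commutativity of \[CenterDot] from Wolfram's
       single axiom for Boolean algebra *)
    proof = FindEquationalProof[
       p\[CenterDot]q == q\[CenterDot]p,
       ForAll[{a, b, c},
        ((a\[CenterDot]b)\[CenterDot]c)\[CenterDot](a\[CenterDot]((a\[CenterDot]c)\[CenterDot]a)) == c]];

    (* tabulate the steps: a long list of lemmas, each a bare
       substitution, none carrying any narrative *)
    proof["ProofDataset"]

Scan that dataset and you will find nothing that tells you WHY the theorem is true -- which is exactly my point.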
The term "Natural Language", as it us used in WL (and other similar systems), is essentially a redefinition of an everyday expression to be used in a technical sense. As the term is used in computer science, it differentiates itself from programming languages, such as c or Swift. However, it is a gross simplification of the language that people use every day -- or at least used until they were trained by chat-bots and phone trees to use a subset of language to navigate modern help systems. Real language is fluid, ambiguous, full of metaphor, and dependent on history, not only of the speaker, but of the listener. "Natural Language" is the left hemisphere's procrustean simplification of language to fit its world-view. It does have some uses, but any of us caught in chat-bot hell can attest that it is over used, and inappropriately employed in situations where it should not be.
Stephen has a thing about "Computational X", where "X" is taken to be pretty much any human endeavor. I think that this is simply wrong. What is true is that some fields, like biology, can make better use of mathematics now that some of the computational tedium has been eliminated. I was involved in some basic research in physical biochemistry in the 1970s and got to participate in this process. However, the computations that the computer made possible were always in service to human insight and intuition.
However, I doubt that there will ever be "computational poetry".
I also doubt that there will ever be any meaningful "computational music". This is a field in which I have some small experience. I am fully aware of Cope's work using AI to synthesize ersatz Bach and Chopin. McGilchrist goes into this in some detail, but the bottom line is that with the entire corpus of Bach's work as input, making something that sounds like Bach is not that hard. The fact that the pieces presented as examples of computer-generated pseudo-Bach had to be hand curated indicates that there is a gap between what the computer can do and what Bach did. As several people have pointed out, the computer never came anywhere close to something like the Goldberg Variations or the Sonatas and Partitas for unaccompanied violin.
Certainly, it is unlikely that a computer would be able to make convincing music in the style of Varèse, Cage, or even Wagner, to name three composers off the top of my head.
Now, I am sure that pop-music record executives have algorithms that will indicate whether a song is likely to be a hit -- but that is a business assessment, not a musical one.
It is also important to distinguish the use of computers in music technology from their use in the compositional process. Music has always used technology. Pipe organs, for example, were the most complex devices made before the industrial revolution. Böhm used a mathematical schema to design the scale for his 1847 flute. Composers today are making interesting uses of computers as instruments and as performance aids.
I could make a similar case for "computational art". The reason for my contention is, or should be, obvious: all of the arts rely on emotions, and emotions are not reducible to an algorithm.
I find things like renderings of the Mandelbrot set interesting, and even decorative, but they are, in the end, unfulfilling. To illustrate my point: what Constable did in his landscapes was to let you see something that, before, you had only looked at.
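(For what it is worth, producing such a rendering is a one-liner, since MandelbrotSetPlot is built in -- which rather underlines how cheap the decoration is:

    (* the familiar decorative image, in one line *)
    MandelbrotSetPlot[{-2 - 1.25 I, 0.75 + 1.25 I}]

Constable needed rather more than one line.)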
It may appear that I am restricting my examples to fine arts or art music, but I am pretty sure that most people can get more satisfaction from a seven-year-old's drawing than from any syntho-art.
SO, what's the point?
People are misusing computation in significant ways. This has always been the case, of course; today, it is simply easier and cheaper to do so. Computation cannot deal with people as people, and when companies and governments treat people as things, bad things happen, as we should all be aware. Computation cannot deal with ideas that cannot be quantified, which means that the most important aspects of reality are either ignored or dealt with imperfectly through proxies.

As some of you may know, I developed and implemented a process control system for a large clinical lab. It was hugely successful (the lab personnel using the tool reduced their error rate by four orders of magnitude). It was successful largely because I worked with the lab so that I could use computation to present information to the technologists that would help them improve the process. Management completely misunderstood the process, of course. They saw it as a QC system, one that could be extended to eliminate technologists entirely. They also pressured me to turn the tool into a program that would evaluate employees, which was simply impossible, because the most important traits of a good employee were not quantifiable: helpfulness, support, creativity in problem solving, and so on.
Judging by news articles, management at a lot of companies is making the same mistake. This is having a negative impact on workers far beyond what might be expected.
Do we, as users of Wolfram technologies, or Wolfram Research itself, have any moral responsibility here?
American (probably global) business, as McGilchrist points out, is firmly locked in the left-hemisphere mode of thought, and as such cannot see the problem. It may be a difficult path for Wolfram Research to market its tools as the greatest thing since sliced bread (which I think they are) while acknowledging that there are practical and ethical limits on how they should be used.
Perhaps we (people who write software) need some sort of professional association.
I gave a talk at the WTC in 2018 on Hermeneutics. If I were to revisit that topic, the talk would be done differently, and would be more along the lines of this post. I think that this is an important matter, and it needs more thought.