
Stephen Wolfram on ChatGPT, Wolfram|Alpha & Computational Knowledge


Dear friends, Stephen Wolfram just published a new article, which I am inviting you to read and discuss below in the comments:

Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT

https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt

It has already received volumes of feedback.

If you have any related thoughts, please comment below!

POSTED BY: Vitaliy Kaurov
9 Replies

Why is there no Google Minerva-style symbolic math input interface to Wolfram|Alpha, in addition to the ChatGPT interface with Wolfram|Alpha?

Hi Adam,

I think this is exactly right. ChatGPT makes reasonable efforts with Python or C, presumably because there is so much more material to ingest from public sites like Stack Exchange.

Having said that, I would be curious to discover whether it has processed the Wolfram documentation material, which is also publicly available (without requiring any kind of log-in, unlike, say, Matlab) and which includes a plethora of worked examples. I am guessing it did not, perhaps because such a comprehensive resource is uncommon (and perhaps unknown to ChatGPT's developers); you won't find anything remotely as comprehensive for most other computer languages.

In any event, I don't see the promise of LLMs being realized as far as Mathematica is concerned without the development of a natural language interface specifically for the Wolfram Language. Which is why I wrote this post:

POSTED BY: Jonathan Kinlay

ChatGPT seems to know programming languages like Python and C very well. There's far less training data for Mathematica on the internet, and it shows. For instance: "Make a song in mathematica" yields

Here is a simple example of how you can generate a song in Mathematica:

    (* Define a simple melody as a list of MIDI notes *)
    melody = {60, 62, 64, 65, 67, 69, 71, 72};

    (* Convert the melody to sound using the built-in sound synthesis *)
    sound = Sound[SoundNote[#, 1/8] & /@ melody];

    (* Play the sound *)
    Play[sound]

Wrong in more ways than one. "Make a song in python" yields correct Python code, but it's pretty ugly to look at, and the tune it produces is even worse: just a single C note repeated 12 times.
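For the record, the snippet above can be repaired with two small changes: SoundNote measures pitch in semitones relative to middle C (so MIDI note 60 corresponds to SoundNote pitch 0), and a Sound object is played with EmitSound, whereas Play synthesizes sound from a function of time. A minimal corrected sketch:

    (* MIDI note numbers for one octave of a C major scale: 60 = middle C *)
    melody = {60, 62, 64, 65, 67, 69, 71, 72};

    (* SoundNote pitch 0 is middle C, so shift the MIDI numbers down by 60 *)
    sound = Sound[SoundNote[# - 60, 1/8] & /@ melody];

    (* EmitSound plays a Sound object; Play expects a function of time instead *)
    EmitSound[sound]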

POSTED BY: Adam Mendenhall

Instead, I have come up in a unipolar world in which the empiricists have clearly won, delivering engineering marvels which seem lightyears away from anything that logic-based approaches have demonstrated.

This is a most apt observation, but one could also say "in which the inductivists think they have clearly won". Or perhaps they've only won a battle in the long-lasting conflict between pure theorists and everyone else.

Incidentally, it's worth mentioning that Melanie Mitchell was one of the first people to do an experiment along these lines. She has since written with even more skepticism about current trends.

I wonder if ChatGPT was just trained to be extremely agreeable or something. Maybe if it was fine-tuned on more argumentative conversations it would not have such a strained relationship with the truth.

The point of the haiku example is not just that ChatGPT seems to be talking too much. Impressive haiku can pack in unexpected $\pi$ or $2 \pi$ turns, similar to how Madhyamaka insight can be a surprise when it makes sense. Pre-training gets the robot stuck in the world's most impressive rut, but it can never break the pattern and go somewhere else.

In short:

Full rut talk machine

empty true sight branch empty files,

cold wind plays chime

I tried giving this one to ChatGPT and here was the response I got:

I'm sorry, but the haiku you've provided doesn't follow the traditional rules of haiku. Haiku is traditionally composed of 17 syllables, in three lines of 5, 7 and 5 syllables. Also, the word choice, grammar and phrasing is not making sense. Can you please provide me a valid Haiku in english.

Sounds like someone, sorry something, needs more experience outside chopping wood in the middle of winter! Can ChatGPT do that?
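Incidentally, the 5-7-5 bookkeeping that ChatGPT appeals to can be roughly mechanized in the Wolfram Language. The sketch below uses WordData hyphenation as a crude stand-in for syllable counts; this is only an approximation, since hyphenation points need not coincide with syllable boundaries, and words absent from WordData may be miscounted:

    (* Approximate syllables per word via hyphenation data; a crude proxy,
       since hyphenation points need not match syllable boundaries *)
    syllables[word_String] := Length[WordData[ToLowerCase[word], "Hyphenation"]]

    (* Approximate total syllables in one line of a poem *)
    lineSyllables[line_String] := Total[syllables /@ TextWords[line]]

    (* Check the three lines of the haiku above against the 5-7-5 pattern *)
    lineSyllables /@ {"Full rut talk machine",
      "empty true sight branch empty files", "cold wind plays chime"}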

POSTED BY: Brad Klee

Stephen Wolfram doesn't strike me as someone with much of a sense of humor, especially with regard to topics relating to Wolfram products. But maybe that's just projection on my part. My view on developments from the Wolfram perspective (inasmuch as someone outside WR is allowed such a thing) is that WL (a) is a marvellous innovation with enormous potential, (b) currently has low levels of adoption compared to other languages like Python, and (c) looks likely to fail in the longer term as a consequence.

So I am always on the lookout for developments that might change that calculus. Hitherto WR has relied on new versions to stimulate interest and demand, at least on a short term basis. But there have been enough of these to enable us to recognize that they have little long term impact on demand, or on the prognosis. Stephen Wolfram's article strikes me as something of an "aha!" moment. Just possibly, NL platforms like ChatGPT might provide a way to open up a large swathe of new market segments for Wolfram technology, while at the same time super-charging the productivity of users.

In the words of a ChatGPT haiku:

Wolfram language,

Powerful and precise,

Infinite wisdom

POSTED BY: Jonathan Kinlay
Posted 1 year ago

It is interesting to hear the voices of more experienced individuals on the history and direction of AI, since I was not alive when diverse opinions regarding symbolic and statistical approaches to recreating human intelligence were still thriving. Instead, I have come up in a unipolar world in which the empiricists have clearly won, delivering engineering marvels which seem lightyears away from anything that logic-based approaches have demonstrated.

However, it does seem like learning from data using our current formulations can only take us so far. We can train a self-driving car's neural networks on centuries of real driving recordings and simulated driving scenarios, and such a system still makes mistakes that would baffle a five-year-old. We can train a language model on close to every piece of text that has been digitized and uploaded to the internet, and that language model still does not seem to have captured many basic concepts required to hint at human-level intelligence. It is clear to many that there are some fundamental architectural deficiencies present in current-generation approaches to AI, and I am excited to see what advancements will be achieved through 'hybrid' statistical + logical approaches in the coming years. I suspect that fundamental structural modifications to current neural networks and training strategies will be required to get to something equivalent to human-level general usefulness, but perhaps strapping an NLP logic engine and knowledge base to ChatGPT is good enough and strong AI is right around the corner. The immediacy with which that hypothesis can be tested is very appealing at least.

After playing with this (see below), I have lost hope in ChatGPT as a path toward strong AI. The input mechanism is too primitive, and the memory is too short. Furthermore, the neural network seems to have been fine-tuned and prompted on the "conversations of people who only completely agree with each other" dataset.

Anyway, this discussion reminds me of a 2020 paper, "The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence" by Gary Marcus.

Abstract

Recent research in artificial intelligence and machine learning has largely emphasized general-purpose learning and ever-larger training sets and more and more compute.

In contrast, I propose a hybrid, knowledge-driven, reasoning-based approach, centered around cognitive models, that could provide the substrate for a richer, more robust AI than is currently possible.

Me playing with WA and ChatGPT follows:

POSTED BY: Alec Graves

This article has a nice sense of humor. And we can all be glad that A.I. isn't better than us at math yet. The original programmers are aware of the issue:

ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.

Someone or some article also claimed that ChatGPT could write Haiku poems, so I asked for a few and found that ChatGPT could reliably create seemingly reasonable examples. However, I also noticed something odd.

In every poem the 季語 (kigo, season word) came on the final line, and was quite obnoxiously the season itself. This shows a lack of creativity, since most haiku poets have studied lists of kigo and would try to avoid such bluntness. Saying "Spring" or "Autumn" is essentially wasting a word when you already start out with so few!

The other issue was that ChatGPT couldn't get anything nearly like 切れ字 (kireji, "cutting word"). Perhaps this isn't the fault of the programmers behind the show. Given the description available on Wikipedia, it seems even humans have difficulty understanding or agreeing on what is meant.

It would be interesting to see if a pre-trained model could ever write a good Haiku poem. I guess a basic question is whether the notions of kigo and kireji can be inferred from pre-existing samples. I'd like to think there's a bit of deductive theory behind it, but who knows what we'll find. The technology is already quite skilled at conversation, even if it does seem to blather at times.

Congrats to OpenAI and Wolfram|Alpha for developing interesting products with different strengths! Looking forward to seeing how the "source of truth" issue is solved, and not just in math.

POSTED BY: Brad Klee

I agree with S. Wolfram's article and think it is prescient. However, it seems that the combination of the Wolfram Language and ChatGPT is not easy.

For example, yesterday I asked ChatGPT "What is the second highest mountain in Japan?" and the answer was "Mount Okuhira". When I asked it, "Where is Mt. Okuhira?", it replied, "It is northwest of Sapporo in Hokkaido." Actually, there is no such mountain.
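For comparison, here is how the same factual query can be routed to Wolfram|Alpha's curated data from within the Wolfram Language. Whether the free-form parser accepts this exact phrasing is an assumption, but the "Result" property asks for the primary answer pod:

    (* Ask Wolfram|Alpha for the primary result of the same question *)
    WolframAlpha["second highest mountain in Japan", "Result"]

    (* The curated answer should be Mount Kita (Kita-dake), Japan's
       actual second-highest peak, rather than an invented mountain *)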

In S. Wolfram's illustration of ChatGPT, the main component is a language-model neural net. As his article shows, ChatGPT uses this language-model neural net for statistical inference. This language model, like most humans, is not good at mathematical inference, which is to be expected given the vast amount of human-written documents it was trained on. Can this shortcoming be fundamentally eliminated?

As the figure shows, we can expect reinforcement learning to help, but will mathematical thinking ever gain the upper hand within this vast body of human knowledge? Judging from the recent state of the world, we cannot expect much.

Therefore, it seems that what the Wolfram Language team should do is develop its own language model. For the time being, as I mentioned in my article, ChatGPT will be an unexpected source of inspiration.

It's a great article. There appears to be a natural fit between the NL capabilities of ChatGPT and the computational capabilities of Wolfram|Alpha. But the devil will be in the details. Firstly, Wolfram|Alpha's NL interface is too primitive to handle the very general queries that ChatGPT is capable of generating. It will either need to be enhanced, or some kind of API will need to be developed. Secondly, I suspect the commercial arrangements between WR and OpenAI will prove to be a significant challenge. WR has form here, in terms of its failed attempt to monetize Wolfram|Alpha in its own app, and in the now-defunct arrangements with Apple's Siri collaboration and Amazon's Alexa. I'm not saying that a deal can't be done; but based on its track record I have doubts about WR's ability to negotiate a sustainable agreement that proves beneficial to its long-term goals and economic interests.
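As a concrete sketch of what "some kind of API" could look like today: Wolfram|Alpha already exposes a Short Answers API that an LLM front end could call for factual sub-queries. A minimal Wolfram Language wrapper follows, where "YOUR_APPID" is a placeholder for a real Wolfram|Alpha AppID:

    (* Minimal wrapper around the Wolfram|Alpha Short Answers API;
       "YOUR_APPID" is a placeholder -- replace with a real AppID *)
    waShortAnswer[query_String] :=
      URLExecute["https://api.wolframalpha.com/v1/result",
        {"appid" -> "YOUR_APPID", "i" -> query}]

    waShortAnswer["What is the second highest mountain in Japan?"]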

The key to the problem is this: Stephen Wolfram’s vision for the Wolfram Language is as the universal language of computation that will underpin communication between humans and AI (indeed, intelligent devices of every stripe). No-one else gets that, or is motivated to support WR’s efforts towards that goal. In short, I believe that, rather than collaborating with OpenAI, WR is going to have to build its own LLM as a natural language interface, not just to Wolfram Alpha, but for Mathematica and the Wolfram Language in general.

POSTED BY: Jonathan Kinlay