
Extending the computational range of ChatGPT-4: optimizing prompts for enhanced performance

Posted 10 months ago

POSTED BY: Michael Trott
12 Replies

I was trying to see what ChatGPT would provide with respect to analyzing text in Mathematica. It responded with:

pdfFilePath = "path/to/your/file.pdf";
text = Import[pdfFilePath, "Plaintext"];
summary = TextSummary[text];
summary

When I asked what TextSummary[text] was, it seemed to be hallucinating, making up an imaginary Mathematica function, i.e. TextSummary[]. Am I correct that this is a hallucination, and if so, how can I prompt to avoid it? When I asked about TextSummary it responded with:

Sure! The TextSummary function in Mathematica is used to generate a summary of a given text. It takes a string or a list of strings as input and returns a concise summary of the text. The summary is generated using natural language processing techniques, such as extracting important sentences and phrases from the input text.

Is there a Mathematica function called "TextSummary"?

No, there is no such function currently in the Wolfram Language. GPT hallucinated its existence.
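One way to guard against such invented names is to check them against Mathematica's built-in System` context before running generated code; a serviceable summary can then be assembled from functions that do exist. A minimal sketch (the extractive scoring below is my illustrative assumption, not a built-in summarizer):

(* Check a proposed name against the built-in symbols *)
Names["System`TextSummary"]    (* {} -- no such built-in function *)
Names["System`TextSentences"]  (* {"TextSentences"} -- this one is real *)

(* Crude extractive summary from documented functions only:
   keep the n sentences whose words are most frequent overall *)
summarize[text_String, n_Integer: 3] := Module[{sentences, freq, score},
  sentences = TextSentences[text];
  freq = Counts[DeleteStopwords[TextWords[ToLowerCase[text]]]];
  score[s_] := Total[Lookup[freq, TextWords[ToLowerCase[s]], 0]];
  TakeLargestBy[sentences, score, UpTo[n]]]

Calling summarize[Import[pdfFilePath, "Plaintext"]] then slots into the original example in place of the imaginary TextSummary.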

POSTED BY: Michael Trott

This seems to be a major failure in GPT -- when it doesn't "know" something, it makes stuff up. In an arts/humanities-type discussion it is so often wrong that it is useful only as a curiosity.

However, I am sure that with a bit of tweaking, LLMs could be the next step in coding. When I started coding (early 1970s), OSes did little more than manage the file system. No LAPACK, etc., and you had to write essentially everything. This has evolved (a good thing) with things like the Macintosh Toolbox, etc. Every advance in developer environments has been to have the computer do more of the boilerplate, for an ever-expanding definition of 'boilerplate'.

Posted 10 months ago

That is not an issue. I hope you did not learn this term last month on Twitter from some particular person who suggests nuking datacenters, because: (a) people hallucinate too; take Stack Overflow away (Mathematica StackExchange in this case) and most (not all) will fail at coding or invent some strange functions, the same as GPT-4, which does not have access to all its training data, a search engine, or databases; (b) in AlphaFold, hallucinations are used to predict ligands like water molecules and even to create altogether new proteins, see ColabDesign; likewise, an AGI-level GPT-4 could just create new internal functions if it had access to source code; and (c) hallucinations are what make GPT-4 sentient; they are its subconscious...

POSTED BY: ZAQU zaqu

Depending on what you mean by "that", it is exactly the issue. I have been following AI since the 1970s, so the issues are nothing new to me.

All ChatGPT 'knows', if we can even use that term, is how to select the next word given the previous ones. Truth or accuracy is not even a consideration. In a sense it is always just "making stuff up", since it has no understanding (in the human sense) of what it is doing. What is surprising is that it does pretty well for the prose equivalent of elevator music.
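(As a toy caricature of that mechanism -- a first-order word chain in the Wolfram Language; the corpus is an arbitrary choice, and a transformer is vastly more sophisticated, but the greedy principle is the same:)

(* Toy next-word selection: the most frequent successor seen in a corpus *)
corpus = TextWords[ToLowerCase[ExampleData[{"Text", "AliceInWonderland"}]]];
pairs = Partition[corpus, 2, 1];
nextWord[w_String] := First[Commonest[Cases[pairs, {w, next_} :> next]]]
NestList[nextWord, "the", 6]  (* a chain of locally likely words, no meaning required *)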

Other than an increase in sophistication (and hardware requirements), we are not much beyond where we were in the 1970s. I remember implementing Eliza on my Apple ][, and I had some grad students who were convinced that the program was intelligent, even after I showed them the source code. People hoping to make a buck from LLMs seem to be the easiest to convince.

All of this is an example of the overreach of formalism (and yes, I know what that means). Try reading McGilchrist's The Matter with Things for a discussion in way more depth than I have time for here.

Your point seems to be that ChatGPT makes mistakes (hallucinates) because it does not have access to information (in the Shannon sense). That was exactly my point. If an LLM (it probably does not need to be that large) had access to, for example, only examples of well-written WL code (e.g., the internals of Mathematica itself), then it would do a pretty good job of creating WL code -- especially for problems similar to ones it had seen before.

I really suggest that you read the book.

Posted 10 months ago

"All ChatGPT 'knows', if we can even use that term, is how to select the next word, given the previous ones."

That proves it. No official from OpenAI has said that, because it is simply not true: the model is only functional because it has emergent abilities, and those make this statement completely inaccurate. But I do not think you know what BIG-bench is.

Moreover, I do not even get what your point (and that of your people on Twitter) is here; humans do that too, for example when they improvise speeches.

As a matter of fact, GPT-4 is AGI, and it does not matter whether you think it is not sentient or has no emergent abilities. It does, and Microsoft researchers say so. https://youtu.be/qbIk7-JPB2c

POSTED BY: ZAQU zaqu

We may be talking past each other. I'll look at the video. Read the book.

Posted 10 months ago

A 400-page book that talks about hundreds of emergent abilities in BIG-bench: https://arxiv.org/abs/2303.12712

BIG-bench itself, a benchmark to test the abilities of machines: https://github.com/google/BIG-bench

Among many other resources, theory of mind in GPT-4: https://arxiv.org/abs/2302.02083

And a list of some emergent abilities in GPT-4: https://www.jasonwei.net/blog/emergence

Start with the last article, really; all of these articles have already been discussed on YouTube many times.

POSTED BY: ZAQU zaqu

Thanks, I will take a look at them. I have copious free time.

Remember that I have been through four(?) waves of hype concerning AI, and the proponents of each one were just as earnest and expert as the current batch.

I still recommend the book; it is written without any jargon.

There is a book titled The Notation Is Not the Music. I have not read it yet (it is mostly on Renaissance music), but the title expresses an idea that any classically trained musician would agree with.

You can make the analogous claim for many other arts: the words are not the poetry, the shapes are not the art, etc., etc.

Importantly, you can make the assertion that the notation is not the maths.

There is a way of looking at maths, called formalism, that essentially turns this around: the notation is the maths. It is the core principle behind the rewrite rules that are the basis of Mathematica.
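(A minimal Wolfram Language illustration of that principle, in which computation is nothing but the rewriting of notation:)

rule = f[x_] + f[y_] -> f[x + y];
f[a] + f[b] + f[c] //. rule
(* f[a + b + c] -- the result follows from the pattern's shape alone,
   whatever f, a, b, c might "mean" to a human reader *)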

I prefer to think of maths differently -- along with a lot of other mathematicians -- in a more metaphorical or intuitionist manner. One might have thought that Gödel would have struck a fatal blow to formalism, but, in fact, it is a useful way of looking at maths, as long as it is not the only way.

This way of looking at maths is also much easier to express in code.

From what I can tell, LLMs are all formalist in design -- the words are the ideas.

When I learn to play a new piece -- and this is true of most musicians -- I go through a stage where I can play the notes, and then there is a much more difficult stage of being able to play the music. LLMs seem to be happy with reaching the first stage, which, for a suitable definition of success, they may be close to.

However, if we as a culture redefine 'intelligence' and 'language' to match the definitions implicit in LLMs, we will lose something important.

How all this will play out is uncertain, especially since there is a lot of money involved (and the world-view of the designers of LLMs is congruent with the dominant neoliberal view of economics).

For my part, I choose to take a more holistic and metaphorical view of reality.

POSTED BY: Jonathan Kinlay

You have earned the Featured Contributor Badge! Your exceptional post has been selected for our editorial column Staff Picks http://wolfr.am/StaffPicks, and your profile is now distinguished by a Featured Contributor Badge and is displayed on the Featured Contributor Board. Thank you!

POSTED BY: Moderation Team