
OpenAIMode: a paclet for interaction with OpenAI's GPT and DALL-E via OpenAILink

POSTED BY: Anton Antonov
26 Replies

I see OpenAILink has a WL wrapper for the embedding API. I'm wondering if there is any WL code to use Pinecone.io or a similar vector database to store embeddings.

POSTED BY: Tamas Simon

It is interesting -- one might consider having a package that "just" finds embeddings from different services.

POSTED BY: Anton Antonov

Interestingly, the post at reddit/r/OpenAI I made about "OpenAIMode" was removed:

[screenshot of the removed Reddit post]

Any insights into why that was done would be appreciated! :)

POSTED BY: Anton Antonov

Maybe they think you're promoting yourself?

POSTED BY: Tamas Simon

Hmm... self-promotion is not excluded by their stated list of rules. :) And I would say the video is "very relevant" to OpenAI.

POSTED BY: Anton Antonov

Yesterday -- in version 0.1.9 -- I added another type of cell, a chat completion cell, which has a pale yellow background.

That cell has the evaluator function OpenAIInputExecuteToChat, which uses the "OpenAILink" function OpenAIChatComplete.

OpenAIInputExecuteToChat takes all options of OpenAIChatComplete, plus the option Epilog.
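For illustration, here is a hedged sketch of how that evaluator might be called programmatically; the exact signature and the form of the Epilog value are my assumptions, not paclet documentation:

(* Sketch only: assumes OpenAIInputExecuteToChat takes the prompt string
   as its first argument and passes options through to OpenAIChatComplete. *)
Needs["AntonAntonov`OpenAIMode`"];

OpenAIInputExecuteToChat[
 "Summarize this thread in one sentence.",
 OpenAIModel -> "gpt-3.5-turbo",
 Epilog -> (Print["Finished at ", Now] &) (* hypothetical post-processing hook *)
]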

Attachments:
POSTED BY: Anton Antonov

Thanks. My interest is in creating a new kind of online learning experience; see here. Actually, the notebook interface would be a better fit. Maybe we could merge our efforts?

POSTED BY: Tamas Simon

Hi,

This mode is awesome. I'm wondering if we could improve it so that it keeps the chat session, i.e. does not lose context. Maybe a session could be grouped together into a cell group.

POSTED BY: Tamas Simon

This mode is awesome.

Thank you, good to hear! Please start using the CellPrint* functions (if you have not already).

I'm wondering if we could improve it so that it keeps the chat session, i.e. does not lose context. Maybe a session could be grouped together into a cell group.

That is a good idea.

I think we can do the following:

  1. Facilitate persistent storage of session queries.
    • For example, using suitable function(s) given to the option Epilog (a sketch follows below).
  2. Restore those messages in a new session and assemble them into a chat-completion message sequence.
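A minimal sketch of how that could be done, assuming hypothetical helper names and a session file (none of this is part of "OpenAIMode"):

(* Hypothetical helpers: persist each query via the Epilog option,
   then read the stored expressions back for a new chat session. *)
$sessionFile = FileNameJoin[{$TemporaryDirectory, "openai-session.wl"}];

(* E.g. given to the cell evaluator as Epilog -> saveToSession. *)
saveToSession[query_] := PutAppend[query, $sessionFile];

restoreSession[] := ReadList[$sessionFile, Expression];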

Remark: Yesterday, in version 0.1.9, I introduced a third cell type, a chat completion cell, in "OpenAIMode".

POSTED BY: Anton Antonov

Here is a related discussion on streamlining code generation and execution: "OpenAIMode code generation demo".

POSTED BY: Anton Antonov

Nice job.

I would like to set the model to "gpt-3.5-turbo", but I get this error:

Request to the OpenAI API failed with message: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?

It seems to be a problem with "OpenAILink"; however, that code appears to check whether the proper API endpoint is being used. Any suggestions?

POSTED BY: Luc Barthelet

Thank you for your interest in this paclet!

OpenAI completions are of two kinds: text completions and chat completions.

From my (recent) experiments, I know that text completions cannot be done with chat models, and vice versa.

The problem you have -- with "OpenAILink" -- is that OpenAIMode`OpenAIInputExecuteToText uses ChristopherWolfram`OpenAILink`OpenAITextComplete.

In order to "fix" this in "OpenAIMode", I can make OpenAIInputExecuteToText take an evaluator function. (Which, by the way, the underlying function OpenAIInputExecute already does.) To a large extent, I introduced that level of indirection to resolve these kinds of problems.

I have to think about a proper design, though. OpenAIChatComplete takes chat message objects, so they have to be specified too.
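For example, with "OpenAILink" a chat completion is specified with message objects rather than a bare prompt string. A sketch (check the "OpenAILink" documentation for the exact forms):

(* Sketch: chat completion via "OpenAILink" message objects. *)
Needs["ChristopherWolfram`OpenAILink`"];

OpenAIChatComplete[
 {OpenAIChatMessageObject["user", "What is the capital of France?"]},
 OpenAIModel -> "gpt-3.5-turbo"
]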

Please see the attached notebook.


Attachments:
POSTED BY: Anton Antonov

Anton, thank you. That works! Of course it takes a long time to get the answer, because it has to finish the whole completion before returning anything. Has anyone looked into implementing "stream"? Maybe we could manage it with a Dynamic until we get the [DONE] marker...

https://platform.openai.com/docs/api-reference/completions/create#completions/create-stream

POSTED BY: Luc Barthelet

Well, I experimented with the "stream" parameter, but using the Raku package "WWW::OpenAI", not the WL paclet "OpenAILink".

With "stream" I get a string that is not "correct" JSON -- it is a string of concatenated JSON fragments. It can be worked with, of course, but the implementation in "OpenAILink" might take a non-trivial effort.

Here is a screenshot showing one of my Raku experiments:

POSTED BY: Anton Antonov

I got streaming working using URLSubmit. I can share a code sample if you want.

POSTED BY: Tamas Simon

Yeah, please, do share.

POSTED BY: Anton Antonov

Something like this.

callOpenAIStreaming[messages_List, model_String : $OpenAIModel] :=
 Module[{requestBody},
  With[{endpoint = "https://api.openai.com/v1/chat/completions"},
   (* Accumulate the streamed response in a global variable. *)
   $OpenAIChatCompletionResponse = "";
   requestBody =
    ExportString[{"model" -> model, "messages" -> messages,
      "temperature" -> 0.3, "stop" -> Null, "stream" -> True}, "JSON"];
   TaskWait[
    URLSubmit[
     HTTPRequest[endpoint,
      Association["Method" -> "POST",
       "Headers" -> {"Content-Type" -> "application/json",
         "Authorization" -> "Bearer " <> $OpenAISecretAPIKey},
       "Body" -> requestBody]],
     HandlerFunctions -> <|"BodyChunkReceived" -> processChatCompletionChunk|>,
     HandlerFunctionsKeys -> {"BodyChunk"}
     ]];
   Return[$OpenAIChatCompletionResponse]
   ]
  ]
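The handler processChatCompletionChunk is not shown above. Here is a minimal sketch of what it could look like, assuming the OpenAI server-sent-events format and that each chunk holds whole "data: ..." lines (my guess, not the original handler):

processChatCompletionChunk[assoc_Association] :=
 Module[{payloads},
  (* Keep the "data: {...}" lines and drop the "data:" prefix. *)
  payloads = StringTrim[StringDrop[#, 5]] & /@
    Select[StringSplit[Lookup[assoc, "BodyChunk", ""], "\n"],
     StringStartsQ[#, "data:"] &];
  (* Append each streamed delta to the global response string. *)
  Do[
   If[p =!= "[DONE]",
    $OpenAIChatCompletionResponse = $OpenAIChatCompletionResponse <>
     Lookup[ImportString[p, "RawJSON"]["choices"][[1, "delta"]],
      "content", ""]],
   {p, payloads}]
  ]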

Set up a global variable. Not nice, but we're already using globals for the model and the API secret key. You can use this to display the response as it is streamed:

Dynamic[StringRiffle[TextSentences[$OpenAIChatCompletionResponse], "\n"]]

The trick is to use URLSubmit[] and write a handler for "BodyChunkReceived". You really don't want async execution here: you can use TaskWait[] to wait for the entire response to come back from ChatGPT and then return the global variable.

I parsed the JSON and it works, but I think my code is ugly... I'm interested in how others do it.

POSTED BY: Tamas Simon

Thank you very much, Anton. This is great! I'm having a problem installing the paclet. After step 3:

PacletSymbol["ChristopherWolfram/OpenAILink", 
       "ChristopherWolfram`OpenAILink`$OpenAIKey"] =  "KEY";

I get the following error:

""Tag PacletSymbol in \!(PacletSymbol[\"ChristopherWolfram/OpenAILink\ \", \"ChristopherWolframOpenAILink$OpenAIKey\"]) is Protected."

You seem to have a problem with setting up "OpenAILink". Maybe you have to restart the kernel and re-evaluate?

POSTED BY: Anton Antonov

You have earned the Featured Contributor Badge! Your exceptional post has been selected for our editorial column Staff Picks http://wolfr.am/StaffPicks and your profile is now distinguished by a Featured Contributor Badge and is displayed on the Featured Contributor Board. Thank you!

POSTED BY: EDITORIAL BOARD

Thank you for the recognition, Moderation Team!

POSTED BY: Anton Antonov

Do I need to give it my OpenAI API? If so, how?

POSTED BY: Kathryn Cramer

Thank you for your interest in this paclet!

Do I need to give it my OpenAI API?

I assume you mean your OpenAI API authorization token. Then the answer is yes.

Generally, it is assumed that the installation and setup steps of "OpenAILink" have been completed before using "OpenAIMode".

If so, how?

Here is one way (to do the complete setup):

PacletInstall["ChristopherWolfram/OpenAILink"]

Needs["ChristopherWolfram`OpenAILink`"]

PacletSymbol["ChristopherWolfram/OpenAILink",
  "ChristopherWolfram`OpenAILink`$OpenAIKey"] = "<YOUR API KEY>";

SystemCredential["OpenAIAPI"] = "<YOUR API KEY>";

PacletInstall["AntonAntonov/OpenAIMode"]

Needs["AntonAntonov`OpenAIMode`"]
POSTED BY: Anton Antonov

I've tried this out in the Wolfram Cloud on my iPad and some of the features described here do not work:

  • Although Shift-| b gives a text completion cell, there is no cell icon to the left (I assume an issue with stylesheets for cloud notebooks).
  • Setting OpenAIInputExecuteToImage does not generate images.
  • ResourceFunction["MermaidJS"] is not on the WFR.
POSTED BY: Paul Abbott

Thanks for trying this paclet out!

I will respond to your observations in the same order:

  • I have seen this "behavior" for some of my other "mode" notebook styles (on iPadOS).

  • I can generate images with that cell on macOS. I will have to experiment with it on iPadOS. (Later today or this week...)

  • ResourceFunction["MermaidJS"] uses the command-line interface of Mermaid-JS (which has to be installed on the hosting OS). Because of this, I am considering making a paclet for MermaidJS.

    • The Mermaid diagram spec (in the notebook) can be pasted into https://mermaid.live -- that will produce the corresponding image(s).

    • There is a resource function submission "MermaidInk" that uses the online interface. (It is much more lightweight than "MermaidJS".)

      • For some reason(s) it has not been published yet -- I will follow up on the status. (Again, later today or this week.)
      • Here is a resource object that can be used in the meantime: "MermaidInk".
ResourceFunction[
  CloudObject["https://www.wolframcloud.com/obj/antononcube/DeployedResources/Function/MermaidInk"]
 ]["
graph TD
    WL --> |ZMQ|Python --> |ZMQ|WL
"]
POSTED BY: Anton Antonov

See the:

POSTED BY: Anton Antonov