
Chat Notebook under the hood

Posted 11 months ago

I have some questions about Chat Notebooks:

  1. Does LLMPrompt set up a system message? There seem to be additional system prompts beyond the one I set up with my prompt.

  2. How can I trace these, without resorting to Wireshark-style network tracing?

  3. There seem to be some undocumented LLM tools in play even when I use my own prompt and tools, e.g. Web Image Search.

  4. Can we please have a new group here in the community for everything LLM related?

  5. Setting the Model to gpt-4 as part of the LLMConfiguration for a prompt (resource) does not seem to work. I may be doing something wrong. What's the right syntax?

  6. How do I add versioning to prompt resources? What if I don't want these in the public repository - can I still version them?

  7. When will Wolfram support other models, such as those from Anthropic and Google, or perhaps adopt the Poe wrapper API?

  8. There could be an LLMTool that supports existing plugins and makes the web service call. Then plugin creators could, if they want, register their plugins with Wolfram the same way they register them with OpenAI (and Microsoft in the future). How about that?

  9. How could we attach documents to the prompt - and images in the future?

  10. Who's the product guy for all these? Theodore Gray?

  11. How does licensing and deployment work if one wants to offer chat notebooks in the browser ("in the cloud") to customers? We need to talk about DEPLOYMENT!
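Regarding item 5, here is a minimal sketch of how I would expect model selection to be passed, assuming the OpenAI service connection is already authorized. The exact association keys (and whether the model needs to be given as `{"OpenAI", "gpt-4"}`) may vary by version, so treat this as a guess at the intended syntax rather than a confirmed answer:

```
(* Sketch: request gpt-4 via LLMConfiguration, then use it as the evaluator *)
config = LLMConfiguration[<|"Model" -> "gpt-4"|>];

(* Pass the configuration through the LLMEvaluator option *)
LLMSynthesize["Summarize the Riemann hypothesis in one sentence.",
  LLMEvaluator -> config]
```

Setting `$LLMEvaluator = config` should make the configuration the session-wide default, if that is what the prompt resource is picking up.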

Thanks!

POSTED BY: Tamas Simon

One needs to be careful with token counts. It looks to me like "Automatic Result Analysis" is on by default, so everything is sent to OpenAI, leading to a high token count. Also, a tool's output can be a Wolfram Language expression; for an image or diagram I think this again adds to the token count if one is not careful!

POSTED BY: Tamas Simon