
Running a local LLM using llamafile and Wolfram Language

Posted 1 year ago

Attachments:
POSTED BY: Jon McLoone
8 Replies

I understand that there is a project to do this. It will also call the library directly rather than via the server as I did here, for better efficiency. I don't know when to expect that to be available though, so be patient for now!
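
For anyone who wants to experiment with the server approach in the meantime, a minimal sketch of calling a llamafile's OpenAI-compatible chat endpoint from Wolfram Language might look like the following (this assumes the llamafile is already running on its default port 8080; localLLMChat and the "local-model" label are just illustrative names, not part of any built-in API):

(* Send a chat prompt to a locally running llamafile server and return the reply text *)
localLLMChat[prompt_String] := Module[{request, response, json},
  request = HTTPRequest[
    "http://localhost:8080/v1/chat/completions",
    <|
      "Method" -> "POST",
      "ContentType" -> "application/json",
      "Body" -> ExportString[
        <|
          "model" -> "local-model", (* placeholder; llamafile serves whatever model it was started with *)
          "messages" -> {<|"role" -> "user", "content" -> prompt|>},
          "temperature" -> 0.7
        |>, "RawJSON"]
    |>];
  response = URLRead[request, "Body"];
  json = ImportString[response, "RawJSON"];
  json["choices"][[1]]["message"]["content"]]

localLLMChat["Summarize the Wolfram Language in one sentence."]

The request and response shapes follow the OpenAI chat-completions convention that llamafile's built-in server mimics, so the same helper should work against other local servers that speak that protocol.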

POSTED BY: Jon McLoone

Indeed, it was suggested at the last Tech Conference that 14.2 would have local LLM support, but this doesn't seem to be reflected in the documentation. (I haven't had the chance to upgrade yet.)

POSTED BY: Joshua Schrier
Posted 3 months ago

Is anything useful along those lines available in 14.2? Thank you.

POSTED BY: Francesco S
Posted 1 year ago

This is great news. I would love to "play" with Llama 3 locally using the built-in LLM functions I've already been using for some of my use cases.

POSTED BY: Jacob Evans

Interesting, that's good to know! To be honest, calling the server is interesting in its own right, since many systems now expose their API through a local server (like Ollama on the Mac).
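
For instance, a rough sketch of calling Ollama's local REST API from Wolfram Language (assuming Ollama is running on its default port 11434 and a model such as "llama3" has already been pulled; ollamaGenerate is just an illustrative name):

(* Send a prompt to a local Ollama server and return the generated text *)
ollamaGenerate[prompt_String, model_String : "llama3"] := Module[{response},
  response = URLRead[
    HTTPRequest["http://localhost:11434/api/generate",
      <|"Method" -> "POST",
        "ContentType" -> "application/json",
        "Body" -> ExportString[
          <|"model" -> model, "prompt" -> prompt, "stream" -> False|>,
          "RawJSON"]|>],
    "Body"];
  ImportString[response, "RawJSON"]["response"]]

ollamaGenerate["Why is the sky blue?"]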

I guess I'll have to give my bucks to OpenAI for a while longer, then!

POSTED BY: Ettore Mariotti