Running a local LLM using llamafile and Wolfram Language

Posted 1 year ago

POSTED BY: Jon McLoone
8 Replies

Hi Jon! This is very interesting.

I was wondering if it's possible to configure a local language model as the default large language model (LLM) that the system uses when dealing with features like LLMFunction, etc.

It would be great if it were possible to leverage all the Wolfram Language technology already developed for LLMs, such as the prompt repository, and so on.

Does anyone have any idea about the feasibility of this?

POSTED BY: Ettore Mariotti

I understand that there is a project to do this. It will also call the library directly, rather than going via the server as I did here, for better efficiency. I don't know when to expect it to be available, though, so be patient for now!

POSTED BY: Jon McLoone
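
For reference, the server-based approach mentioned above can be sketched in Wolfram Language. This is a minimal, hypothetical example (not Jon's actual code): it assumes a llamafile server is already running locally on its default port 8080 and exposing the OpenAI-compatible chat-completions endpoint; the helper name `localLLM` and the `"model"` value are placeholders.

```wolfram
(* Minimal sketch: query a local llamafile server's OpenAI-compatible
   chat endpoint from Wolfram Language. Assumes the server is running
   on localhost:8080 (llamafile's default). *)
localLLM[prompt_String] :=
 Module[{response},
  response = URLExecute[
    HTTPRequest["http://localhost:8080/v1/chat/completions",
     <|"Method" -> "POST",
       "ContentType" -> "application/json",
       "Body" -> ExportString[
         <|"model" -> "local",
           "messages" -> {<|"role" -> "user", "content" -> prompt|>}|>,
         "JSON"]|>],
    "RawJSON"];
  (* Extract the assistant's reply from the JSON response *)
  response["choices"][[1]]["message"]["content"]]

localLLM["Write a haiku about Mathematica."]
```

Because the endpoint follows the OpenAI wire format, the same request shape should work against any server that implements it; only the base URL and model name change.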

Interesting, that's good to know! To be honest, calling the server was interesting in itself, since many systems now expose their APIs through local servers (like Ollama on the Mac).

I guess I'll have to give my bucks to OpenAI for a while then!!

POSTED BY: Ettore Mariotti
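
As a side note on the Ollama comparison above: Ollama also exposes an OpenAI-compatible API, so the same kind of request works against it. A hedged sketch, assuming Ollama is running locally on its default port 11434 and a model such as `llama3` has already been pulled (the helper name `ollamaChat` is made up for illustration):

```wolfram
(* Hypothetical sketch: the same chat-completions request, pointed at
   Ollama's OpenAI-compatible endpoint (default port 11434).
   Assumes `ollama pull llama3` has been run beforehand. *)
ollamaChat[prompt_String] :=
 URLExecute[
   HTTPRequest["http://localhost:11434/v1/chat/completions",
    <|"Method" -> "POST",
      "ContentType" -> "application/json",
      "Body" -> ExportString[
        <|"model" -> "llama3",
          "messages" -> {<|"role" -> "user", "content" -> prompt|>}|>,
        "JSON"]|>],
   "RawJSON"]["choices"][[1]]["message"]["content"]
```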
Posted 1 year ago

This is great news. I would love to "play" with Llama 3 locally using the built-in LLM functions I've already used for some of my use cases.

POSTED BY: Jacob Evans
Posted 5 months ago

Anything useful along those lines available in 14.2? Thank you.

POSTED BY: Francesco S
POSTED BY: Joshua Schrier
POSTED BY: Dave Middleton

You have earned a Featured Contributor Badge! Your exceptional post has been selected for our editorial column Staff Picks http://wolfr.am/StaffPicks and your profile is now distinguished by a Featured Contributor Badge and is displayed on the Featured Contributor Board. Thank you!

POSTED BY: EDITORIAL BOARD