I understand that there is a project to do this. It will also call the library directly rather than via the server as I did here, for better efficiency. I don't know when to expect that to be available though, so be patient for now!
Indeed, it was suggested at the last Tech Conference that 14.2 would have local LLM support, but this doesn't seem to be reflected in the documentation. (I haven't had the chance to upgrade yet...)
Is anything useful along those lines available in 14.2? Thank you.
This is great news. I would love to "play" with Llama 3 locally using the built-in LLM functions I've already used for some of my use cases.
Interesting, that's good to know! To be honest, calling the server was worthwhile anyway, since many systems now expose their API through a local server (like Ollama on the Mac).
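For anyone who wants to try the server route themselves, here's a minimal sketch of what that call looks like against Ollama's local HTTP API. The helper function name, model name, and prompt are just placeholders, and it assumes Ollama is running on its default port with the model already pulled (`ollama pull llama3`):

```python
# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes Ollama is listening on the default port 11434 and that a model
# named "llama3" has already been pulled.
import requests

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    # Ollama's /api/generate endpoint returns a single JSON object
    # when streaming is disabled.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_llm("In one sentence, what is a local LLM server?"))
```

Switching to a different local model is just a matter of changing the `model` field once that model has been pulled.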
I guess I'll have to give my bucks to OpenAI for a while then!!