Thanks for posting the code and examples of hooking up LLaMA with WL!
Since the LLaMA documentation says that its completions API is compatible with OpenAI's completions API (the one behind ChatGPT), how easy is it (or would it be) to make access configurations for LLaMA that can be used with WL's LLM* functions, like LLMSynthesize and LLMFunction?
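For concreteness, here is roughly the kind of hookup I have in mind, done by hand: a minimal sketch that posts to the OpenAI-compatible /v1/chat/completions endpoint of a locally running LLaMA server (e.g. llama.cpp's server). The URL, port, and model name are assumptions on my part, and llamaSynthesize is just a hypothetical helper name, not an existing WL function:

```wl
(* Hypothetical helper: call a local LLaMA server's OpenAI-compatible
   chat completions endpoint. The URL, port, and model name below are
   assumptions and should be adjusted to the actual server setup. *)
llamaSynthesize[prompt_String,
  OptionsPattern[{"Model" -> "llama", "Temperature" -> 0.7}]] :=
 Module[{request, response, body},
  (* Build an OpenAI-style chat completion request *)
  request = HTTPRequest[
    "http://localhost:8080/v1/chat/completions",
    <|
     "Method" -> "POST",
     "ContentType" -> "application/json",
     "Body" -> ExportString[
       <|
        "model" -> OptionValue["Model"],
        "temperature" -> OptionValue["Temperature"],
        "messages" -> {<|"role" -> "user", "content" -> prompt|>}
       |>, "JSON"]
    |>];
  response = URLRead[request];
  (* Parse the JSON response and extract the assistant's reply text *)
  body = ImportString[response["Body"], "RawJSON"];
  body[["choices", 1, "message", "content"]]
 ]
```

Calling it would then look like llamaSynthesize["Summarize the plot of Hamlet in one sentence."]. The question is whether this plumbing can be packaged as a proper service or access configuration so that LLMSynthesize and LLMFunction can use the LLaMA endpoint directly instead of going through a helper like this.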