Get up and running with your own LLM provider
Wave AI can be configured to work with your own LLM provider by setting the `aibaseurl` and `aimodel` parameters. Additionally, cloud-based LLMs will most likely require an API Key, which can be easily set with the `aiapitoken` parameter.
If the full endpoint URL (e.g., http://localhost:8080/v1/chat/completions) doesn't work, try removing the /chat/completions path from the end of the URL or using just the hostname and port (e.g., http://localhost:8080). This often resolves compatibility issues and allows Wave AI to communicate with your LLM provider successfully.
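For example, if your provider listens on port 8080 (the port here is only an illustration), these are the base URL variants to try, from most to least specific:

```
http://localhost:8080/v1/chat/completions   # full endpoint path
http://localhost:8080/v1                    # /chat/completions removed
http://localhost:8080                       # hostname and port only
```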
Configure the AI Base URL, AI Model, and AI Token (if required) parameters as described in the previous section.
Alternatively, you can set the `aibaseurl`, `aimodel`, and `aiapitoken` (if required) parameters using the /client:set command, as shown in the example below.
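For example, to point Wave AI at a local provider (a minimal sketch: the base URL, model name, and API token are placeholders for your own provider's values, and the key=value quoting assumes the usual /client:set form):

```
/client:set aibaseurl="http://localhost:8080/v1"
/client:set aimodel="llama2"
/client:set aiapitoken="<your-api-token>"
```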
Once configured, you can open Wave AI with the ctrl + space shortcut. This will open an interactive chat session where you can have a continuous conversation with the AI assistant powered by your LLM provider's model.

If Wave AI isn't responding as expected, check the following:

- Make sure the `aiapitoken` parameter is set to use the correct API Token for the configured service.
- If requests time out, set the `aitimeout` parameter to a higher value. This will give your LLM provider more time to process and respond to your requests, especially if you are running it on a system with limited hardware resources.
- Verify that the `aibaseurl` parameter points to the correct URL and port number where your LLM provider is running. If you have changed the default port or are running your own LLM provider on a remote server, update the URL accordingly.
- Set the `aimodel` parameter to the specific model you want to use.
- If all else fails, try resetting the `aibaseurl`, `aimodel`, and `aiapitoken` parameters to their default values and reconfiguring your LLM provider from scratch.

You can reset the `aibaseurl`, `aimodel`, and `aiapitoken` parameters to their default state by using the following commands.
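A minimal sketch, assuming that setting each parameter to an empty value restores its default:

```
/client:set aibaseurl=""
/client:set aimodel=""
/client:set aiapitoken=""
```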