Get Ollama up and running in Wave AI
To point Wave AI at your Ollama instance, you need to configure two parameters: `aibaseurl` and `aimodel`. These parameters can be set either through the UI or from the command line, but note that the parameter names differ slightly depending on the method you choose.
The default Ollama port is `11434`, but yours may be different depending on your specific installation. For remote Ollama instances, replace `localhost` with the appropriate hostname or IP address of the server where Ollama is running. If the port number is different from the default `11434`, update it accordingly in the URL. You can find the exact name of the model you want to use by running the `ollama list` command in your terminal.
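As a quick sanity check before configuring Wave, you can confirm that the server is reachable and see which models are available. The commands below assume a default local install on port `11434`:

```bash
# The Ollama server answers "Ollama is running" on its root endpoint
curl http://localhost:11434

# List locally pulled models; use one of these names for the model setting
ollama list
```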
To configure Ollama through the UI, open the Wave settings and fill in the **AI Base URL** and **AI Model** parameters as described in the previous section.
To configure Ollama from the command line, set the `aibaseurl` and `aimodel` parameters using the `/client:set` command, as shown in the example below.
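A minimal sketch, assuming Ollama is listening on the default local port and that you have already pulled the `llama2` model; substitute your own host, port, and model name, and note that the exact base URL format expected by your Wave version is an assumption here:

```
/client:set aibaseurl="http://localhost:11434"
/client:set aimodel="llama2"
```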
Once both parameters are set, you can start talking to your local model using the `ctrl + space` shortcut. This will open an interactive chat session where you can have a continuous conversation with the AI assistant powered by your Ollama model.

If Wave AI is not responding as expected, try the following:

- Set the `aitimeout` parameter to a higher value (see the sketch after this list). This will give Ollama more time to process and respond to your requests, especially if you are running it on a system with limited hardware resources.
- Check that the `aibaseurl` parameter points to the correct URL and port number where Ollama is running. If you have changed the default port or are running Ollama on a remote server, update the URL accordingly.
- Make sure the `aimodel` parameter is set to the specific model you want to use. You can list available models using the `ollama list` command in your terminal.
- As a last resort, try resetting the `aibaseurl` and `aimodel` parameters to their default values and reconfiguring Ollama from scratch. This can help rule out any configuration issues that might be causing problems.
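For example, a hypothetical timeout bump, assuming `aitimeout` is set through the same `/client:set` mechanism and takes a value in seconds (check your Wave version for the exact unit):

```
/client:set aitimeout=30
```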
You can reset the `aibaseurl` and `aimodel` parameters to their default state by using the following commands.
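A sketch of the reset, assuming that assigning an empty value restores a parameter's default (verify this behavior against your Wave version):

```
/client:set aibaseurl=""
/client:set aimodel=""
```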