Installation
Please visit Ollama’s GitHub page for instructions on downloading and installing Ollama, as well as a quickstart guide and a full list of supported models.
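Once the installer has finished, it can help to confirm that Ollama is available before moving on. The sketch below uses standard Ollama CLI commands; on most desktop installs the server already runs in the background, so `ollama serve` is only needed if it is not.

```
# Confirm the Ollama CLI is on your PATH
ollama --version

# Start the Ollama server manually if it isn't already running in the background
ollama serve
```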
Configuration
After installing and configuring Ollama, you can start using it in Wave by setting two parameters: `aibaseurl` and `aimodel`. These parameters can be set either through the UI or from the command line, but please note that the parameter names are slightly different depending on the method you choose.
Parameters
- AI Base URL: Set this parameter to the base URL or endpoint that Wave AI should query. For Ollama running locally, use `http://localhost:11434/v1`. Please note that the port number `11434` may be different depending on your specific installation. For remote Ollama instances, replace `localhost` with the appropriate hostname or IP address of the server where Ollama is running. If the port number is different from the default `11434`, update it accordingly in the URL.
- AI Model: Specify the Ollama model you want to use. This can be any “pulled” model in Ollama and doesn’t need to be actively running. To discover available models, use the `ollama list` command in your terminal, as shown below.
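For example, the Ollama CLI can show which models you already have locally and pull new ones. These are standard Ollama commands; `llama3` is only a placeholder model name, so substitute any model you actually want to use.

```
# Show the models already pulled on this machine; the NAME column is the value to use for the AI Model setting
ollama list

# Pull a model if you don't have one yet (llama3 is just an example)
ollama pull llama3
```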
Configuring via the UI
To configure Ollama from Wave’s user interface, navigate to the “Settings” menu and set the AI Base URL and AI Model parameters as described in the previous section.
Configuring via the CLI
To configure Ollama using the command line, set the `aibaseurl` and `aimodel` parameters using the `/client:set` command, as shown in the example below.
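A minimal sketch, assuming Ollama is running locally on the default port and that `llama3` is a placeholder for a model you have already pulled:

```
/client:set aibaseurl="http://localhost:11434/v1"
/client:set aimodel="llama3"
```

If Ollama is running on another host or port, adjust the URL accordingly, as described in the Parameters section.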
Usage
Once you have installed and configured Ollama, you can start using it in Wave. There are two primary ways to interact with your newly configured LLM: Interactive Mode and the `/chat` command.
- Interactive Mode: To enter Interactive Mode, click the “Wave AI” button in the command box or use the `ctrl + space` shortcut. This will open an interactive chat session where you can have a continuous conversation with the AI assistant powered by your Ollama model.
- `/chat`: Alternatively, you can use the `/chat` command followed by your question to get a quick answer from your Ollama model directly in the terminal, as shown below.
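For example (the question here is arbitrary; any prompt works the same way):

```
/chat how do I find the largest files in the current directory?
```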
Troubleshooting
If you encounter issues while using Ollama with Wave AI, consider the following troubleshooting steps:
- Connection failures: If Wave AI fails to connect to Ollama or returns an error message, verify that Ollama is running and accessible from the system where Wave is installed (a quick check is shown below the list). Check the Ollama logs for any error messages or indications of why the connection might be failing.
- Timeouts: If you’re unable to complete a query or incur frequent timeouts, try adjusting the `aitimeout` parameter to a higher value, as shown below the list. This will give Ollama more time to process and respond to your requests, especially if you are running it on a system with limited hardware resources.
- Incorrect base URL or port: Ensure that the `aibaseurl` parameter points to the correct URL and port number where Ollama is running. If you have changed the default port or are running Ollama on a remote server, update the URL accordingly.
- Incorrect model selection: If you have multiple Ollama models installed, make sure to set the `aimodel` parameter to the specific model you want to use. You can list available models using the `ollama list` command in your terminal.
- Unexpected behavior or inconsistent results: If you encounter unexpected behavior or inconsistent results when using Ollama with Wave AI, try resetting the `aibaseurl` and `aimodel` parameters to their default values and reconfiguring Ollama from scratch. This can help rule out any configuration issues that might be causing problems.
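The sketch below illustrates the first two steps. The curl check hits Ollama’s root endpoint, which normally replies with “Ollama is running”; the `aitimeout` value of 60 is purely illustrative, so consult the Wave documentation for the exact units before relying on it.

```
# Quick connectivity check: Ollama's root endpoint should reply with "Ollama is running"
curl http://localhost:11434

# Give Ollama more time to respond (60 is an illustrative value; check the Wave docs for the units)
/client:set aitimeout=60
```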
Reset Wave AI
If at any time you wish to return to the default Wave AI experience, you can reset the `aibaseurl` and `aimodel` parameters to their default state by using the following commands.
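A hedged sketch, assuming that clearing both values causes Wave AI to fall back to its built-in defaults:

```
/client:set aibaseurl=""
/client:set aimodel=""
```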

