Get Ollama up and running in Wave AI
As of September 2024, this version of Wave Terminal is deprecated. To learn more about our new version (>=v0.8.0), check out www.waveterm.dev. To find documentation for our new version, check out docs.waveterm.dev.
Ollama is an open-source tool for running large language models that offers a powerful and flexible alternative to proprietary LLMs, allowing you to run models locally or on your own server infrastructure. Ollama provides high-quality language generation and understanding capabilities while giving you full control over your data and privacy.
To see a full list of supported LLM providers, please visit the Third-Party LLM Support section on the Wave AI features page.
Please visit Ollama’s GitHub page for instructions on downloading and installing Ollama, as well as a quickstart guide and a full list of supported models.
After installing and configuring Ollama, you can start using it in Wave by setting two parameters: aibaseurl and aimodel. These parameters can be set either through the UI or from the command line, but please note that the parameter names are slightly different depending on the method you choose.
Set the aibaseurl parameter to the URL where Ollama is listening, typically http://localhost:11434. Note that the port 11434 may be different depending on your specific installation. For remote Ollama instances, replace localhost with the appropriate hostname or IP address of the server where Ollama is running. If the port number is different from the default 11434, update it accordingly in the URL.

Set the aimodel parameter to the specific model you want to use. You can list the models available to your installation by running the ollama list command in your terminal.

To configure Ollama from Wave’s user interface, navigate to the “Settings” menu and set the AI Base URL and AI Model parameters as described in the previous section.
To configure Ollama using the command line, set the aibaseurl and aimodel parameters using the /client:set command, as shown in the example below.
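The commands below are a minimal sketch: they assume Ollama is listening on the default local port 11434, and llama3 is only a placeholder model name; substitute one of the models reported by ollama list.

```
# Point Wave AI at a local Ollama instance (adjust host/port for your setup)
/client:set aibaseurl="http://localhost:11434"

# llama3 is a placeholder; use a model name from `ollama list`
/client:set aimodel="llama3"
```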
Once you have installed and configured Ollama, you can start using it in Wave. There are two primary ways to interact with your newly configured LLM: Interactive Mode and the /chat command.

Interactive Mode is opened with the ctrl + space shortcut. This opens an interactive chat session where you can have a continuous conversation with the AI assistant powered by your Ollama model. You can also send a one-off prompt with the /chat command, as shown in the example below.
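The following is a minimal sketch of the /chat command; it assumes /chat accepts the prompt text directly as its argument.

```
# Ask the Ollama-backed assistant a single question
/chat how do I find the largest files in the current directory?
```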
If you encounter issues while using Ollama with Wave AI, consider the following troubleshooting steps:

- If requests are slow or failing, set the aitimeout parameter to a higher value (see the sketch below). This will give Ollama more time to process and respond to your requests, especially if you are running it on a system with limited hardware resources.
- Check that the aibaseurl parameter points to the correct URL and port number where Ollama is running. If you have changed the default port or are running Ollama on a remote server, update the URL accordingly.
- Make sure the aimodel parameter is set to the specific model you want to use. You can list available models using the ollama list command in your terminal.
- Try resetting the aibaseurl and aimodel parameters to their default values and reconfiguring Ollama from scratch. This can help rule out any configuration issues that might be causing problems.

If you continue to face issues after trying these troubleshooting steps, please see the Additional Resources section below for further assistance, or feel free to reach out to us on Discord.
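For the first troubleshooting step above, the sketch below assumes aitimeout is set through the same /client:set command as the other AI parameters; the value shown is purely illustrative, and the expected units depend on your Wave version.

```
# Allow more time for Ollama to respond (illustrative value;
# confirm the expected units for aitimeout in your Wave version)
/client:set aitimeout=60
```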
If at any time you wish to return to the default Wave AI experience, you can reset the aibaseurl and aimodel parameters to their default state by using the following commands.
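The commands below are a sketch of that reset; they assume that clearing each parameter restores its default value, so adjust them if your Wave version uses a different reset syntax.

```
# Clear the Ollama overrides so Wave AI falls back to its defaults
/client:set aibaseurl=""
/client:set aimodel=""
```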
Note: This can also be done in the UI, as described in the previous steps.