Ollama configuration
Ollama is a self-hosted LLM provider. The only required option is the model name. If your Ollama instance is not running locally, or is not listening on the default port, you will also need to provide the endpoint URL. See here for a full list of supported models.
```yaml
siren-ai:
  provider: 'ollama'
  providerConfig:
    ollama:
      connection:
        endpoint: 'http://localhost:11434' # Optional. Defaults to the URL of an unconfigured local Ollama instance
      parameters:
        model: 'llama2'
```
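
For a remote or non-default-port instance, point `endpoint` at the host where Ollama is reachable. A minimal sketch, assuming a hypothetical host name and the standard Ollama port (replace both with your own values):

```yaml
siren-ai:
  provider: 'ollama'
  providerConfig:
    ollama:
      connection:
        # Example only: substitute the host and port of your Ollama instance
        endpoint: 'http://ollama.internal.example.com:11434'
      parameters:
        model: 'llama2'
```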