# Configuration reference
The following table describes all configuration options available for the Siren AI plugin. Required options are shown in bold; provider-specific options marked as required are only required when the associated provider is in use.
| Option | Description | Type | Default |
|---|---|---|---|
| | Whether the plugin is enabled. | boolean | |
| | The label of the model to use, as defined in the list of model configurations. | string | |
| | Make reasoning content that the LLM produces visible in the UI. | boolean | |
| | A list of the model configurations, identified by their label. | ModelConfig[] | |
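For orientation, the sketch below shows one plausible shape for this configuration. It is a hypothetical illustration only: the YAML layout and every key name (`siren_ai`, `enabled`, `model`, `showReasoning`, `models`, `label`) are assumptions rather than confirmed option names.

```yaml
# Hypothetical sketch; all key names are illustrative assumptions.
siren_ai:
  enabled: true          # boolean: whether the plugin is enabled
  model: "my-model"      # string: label of the model to use, from the list below
  showReasoning: false   # boolean: show the LLM's reasoning content in the UI
  models:                # ModelConfig[]: model configurations, identified by label
    - label: "my-model"
      # ...provider-specific ModelConfig options, described in the next section
```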
## ModelConfig types

### OpenAI ModelConfig
| Option | Description | Type | Default |
|---|---|---|---|
| | Model provider. Must be set to the OpenAI provider identifier. | | |
| | OpenAI API key. This can be found on the OpenAI API keys page. | string | |
| | OpenAI organization ID. | string | |
| | LLM timeout in milliseconds. | integer (>0) | |
| | The OpenAI model to use. For a full list of options, see the OpenAI models documentation. | string | |
| | See Temperature. | float (0.0-2.0) | |
| | See TopP. | float (0.0-1.0) | |
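A hypothetical OpenAI entry in the models list might look as follows. The key names and the provider value are illustrative assumptions, not confirmed option names; only the shape of the entry is meant to be indicative.

```yaml
# Hypothetical OpenAI ModelConfig; key names and provider value are assumptions.
- label: "openai-model"
  provider: "openAI"          # the OpenAI provider identifier
  apiKey: "sk-..."            # from the OpenAI API keys page
  organizationId: "org-..."   # OpenAI organization ID
  timeout: 60000              # LLM timeout in milliseconds (> 0)
  modelName: "gpt-4o"         # the OpenAI model to use
  temperature: 0.7            # float, 0.0-2.0; see Temperature
  topP: 0.9                   # float, 0.0-1.0; see TopP
```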
### Azure OpenAI ModelConfig
| Option | Description | Type | Default |
|---|---|---|---|
| | Model provider. Must be set to the Azure OpenAI provider identifier. | | |
| | Azure OpenAI endpoint. This can be found on the deployed Azure resource's Keys and Endpoint page. | string | |
| | Azure OpenAI deployment name. This deployment determines the model used. | string | |
| | Azure OpenAI API key. This can be found on the deployed Azure resource's Keys and Endpoint page. | string | |
| | LLM timeout in milliseconds. | integer (>0) | |
| | See Temperature. | float (0.0-2.0) | |
| | See TopP. | float (0.0-1.0) | |
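An Azure OpenAI entry might take the following hypothetical shape; the key names, the provider value, and the example endpoint are illustrative assumptions.

```yaml
# Hypothetical Azure OpenAI ModelConfig; key names and values are assumptions.
- label: "azure-model"
  provider: "azureOpenAI"                            # the Azure OpenAI provider identifier
  endpoint: "https://my-resource.openai.azure.com/"  # from the resource's Keys and Endpoint page
  deploymentName: "my-deployment"                    # the deployment determines the model used
  apiKey: "..."                                      # from the resource's Keys and Endpoint page
  timeout: 60000                                     # LLM timeout in milliseconds (> 0)
  temperature: 0.7                                   # float, 0.0-2.0; see Temperature
  topP: 0.9                                          # float, 0.0-1.0; see TopP
```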
### OpenAI-compatible provider ModelConfig
| Option | Description | Type | Default |
|---|---|---|---|
| | Model provider. Must be set to the OpenAI-compatible provider identifier. | | |
| | The URL used to access the model provider. Typically ends in `/v1`. | string | |
| | API key required by the provider. | string | |
| | LLM timeout in milliseconds. | integer (>0) | |
| | Model to use. | string | |
| | See Temperature. | float (0.0-2.0) | |
| | See TopP. | float (0.0-1.0) | |
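For a locally hosted OpenAI-compatible provider, an entry might look like the sketch below. The key names, the provider value, and the example URL (a local server exposing an OpenAI-compatible `/v1` endpoint) are illustrative assumptions.

```yaml
# Hypothetical OpenAI-compatible ModelConfig; key names and values are assumptions.
- label: "local-model"
  provider: "openAICompatible"          # the OpenAI-compatible provider identifier
  baseURL: "http://localhost:11434/v1"  # provider URL, typically ending in /v1
  apiKey: "..."                         # API key required by the provider
  timeout: 60000                        # LLM timeout in milliseconds (> 0)
  modelName: "llama3"                   # model to use
  temperature: 0.7                      # float, 0.0-2.0; see Temperature
  topP: 0.9                             # float, 0.0-1.0; see TopP
```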
### AWS Bedrock ModelConfig
| Option | Description | Type | Default |
|---|---|---|---|
| | Model provider. Must be set to the AWS Bedrock provider identifier. | | |
| | AWS region. | string | |
| | AWS profile created locally. | string | |
| | AWS access key ID. Can also be specified using the `AWS_ACCESS_KEY_ID` environment variable. | string | |
| | AWS secret access key. Can also be specified using the `AWS_SECRET_ACCESS_KEY` environment variable. | string | |
| | A security or session token to use with these credentials. Usually present for temporary credentials. Can also be specified using the `AWS_SESSION_TOKEN` environment variable. | string | |
| | AWS credential scope for this set of credentials. | string | |
| | AWS account ID. | string | |
| | LLM timeout in milliseconds. | integer (>0) | |
| | The model to use. See the AWS Bedrock documentation for a full list of supported models. | string | |
| | See Temperature. | float (0.0-2.0) | |
| | See TopP. | float (0.0-1.0) | |
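An AWS Bedrock entry might take the hypothetical shape below. The key names and the provider value are illustrative assumptions; the `AWS_*` names in the comments are the standard AWS SDK environment variables.

```yaml
# Hypothetical AWS Bedrock ModelConfig; key names and provider value are assumptions.
- label: "bedrock-model"
  provider: "awsBedrock"       # the AWS Bedrock provider identifier
  region: "us-east-1"          # AWS region
  profile: "default"           # AWS profile created locally
  accessKeyId: "..."           # or the AWS_ACCESS_KEY_ID environment variable
  secretAccessKey: "..."       # or the AWS_SECRET_ACCESS_KEY environment variable
  sessionToken: "..."          # or AWS_SESSION_TOKEN; usual for temporary credentials
  credentialScope: "..."       # AWS credential scope for these credentials
  accountId: "123456789012"    # AWS account ID
  timeout: 60000               # LLM timeout in milliseconds (> 0)
  model: "anthropic.claude-3-5-sonnet-20240620-v1:0"  # a supported Bedrock model
  temperature: 0.7             # float, 0.0-2.0; see Temperature
  topP: 0.9                    # float, 0.0-1.0; see TopP
```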
## LLM parameters

### Temperature
The temperature parameter controls the randomness and creativity of the model’s output by adjusting the probability distribution used when selecting the next token.
A higher temperature value makes the model’s output more diverse and creative by giving less probable words a higher chance of being selected. Conversely, a lower temperature value makes the output more focused and predictable by favoring the most probable words. This parameter allows users to fine-tune the balance between creativity and coherence in the model’s responses, depending on the desired application.
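Concretely, most implementations divide the model's logits by the temperature before the softmax that turns them into token probabilities, so that `p_i ∝ exp(z_i / T)`, where `z_i` is a token's logit and `T` is the temperature. Values of `T` above 1 flatten the distribution and boost unlikely tokens, while values approaching 0 concentrate probability on the single most likely token.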
Note: The configuration accepts values from 0 to 2, but the valid range for temperature depends on the provider or model you are using: some providers accept values between 0 and 1, while others support the wider 0 to 2 range. Always choose a temperature value within the range supported by your selected provider. If this parameter is not defined, the provider's default is used.
### TopP
The topP parameter, also known as nucleus sampling, is used to control the diversity of the output generated by an LLM. It works by considering only the smallest set of top probable tokens whose cumulative probability exceeds the value of topP.
For example, if topP is set to 0.9, the model samples only from the smallest set of most probable tokens whose cumulative probability reaches 0.9, effectively filtering out the less likely options. This results in more diverse and creative output when topP is set closer to 1, as the model has a wider range of tokens to choose from. Conversely, setting topP closer to 0 makes the output more predictable and focused, as it limits the model to a smaller set of highly probable tokens.
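As a concrete illustration, suppose the candidate tokens have probabilities 0.5, 0.3, 0.15, and 0.05. With topP set to 0.9, sampling is restricted to the first three tokens, because 0.5 + 0.3 = 0.8 does not yet reach 0.9 while 0.5 + 0.3 + 0.15 = 0.95 does; the 0.05 token is filtered out.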
If this parameter is not defined, the provider's default is used.