@wildix/wim-knowledge-base-client &gt; GenerateSearchAnswerInput

## GenerateSearchAnswerInput interface

**Signature:**

```typescript
export interface GenerateSearchAnswerInput
```
## Properties

| Property | Modifiers | Type | Description |
|---|---|---|---|
|  |  | string \| undefined | _(Optional)_ The unique identifier of the tenant when a service token is used. |
|  |  | number \| undefined | _(Optional)_ Maximum number of tokens in the generated answer (1-10000). Higher values allow longer, more detailed answers; lower values force concise responses. Defaults to 1000 tokens. Note: higher values increase latency and cost. |
|  |  | string \| undefined | _(Optional)_ The specific provider and model identifier, e.g. 'openai://gpt-4o' or 'openai://gpt-4o-mini' for OpenAI, or 'mistral://mistral-small-2506' for Mistral. Check the provider documentation for available models. |
|  |  | string | The question or query to be answered using the knowledge base. The LLM searches the knowledge base and generates an answer based on the results found. Example: 'What is the product knowledge base?' |
|  |  |  | The search results the LLM uses to generate the answer. |
|  |  | string \| undefined | _(Optional)_ System prompt to customize LLM behavior and context. Sets instructions for how the LLM should answer questions. Example: 'You are a helpful technical support assistant. Answer only based on the provided knowledge base. If information is not found, say "I don't have this information in the knowledge base."' |
|  |  | number \| undefined | _(Optional)_ Controls answer randomness and creativity (0.0-1.0). Low values (0.0-0.3) produce consistent, deterministic answers; high values (0.7-1.0) produce more diverse and creative responses. Defaults to 0.3 for factual consistency. |
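As a sketch of how such an input object might be assembled, the example below declares a local stand-in for the interface. The property names (`tenantId`, `maxTokens`, `model`, `query`, `searchResults`, `systemPrompt`, `temperature`) are hypothetical, inferred from the descriptions above; consult the package's generated typings for the actual names and the concrete search-result type.

```typescript
// Hypothetical local sketch of GenerateSearchAnswerInput; real property
// names and the search-result element type may differ in the package.
interface GenerateSearchAnswerInputSketch {
  tenantId?: string;        // tenant identifier when a service token is used
  maxTokens?: number;       // 1-10000; defaults to 1000
  model?: string;           // e.g. 'openai://gpt-4o-mini'
  query: string;            // the question to answer
  searchResults: unknown[]; // results the LLM grounds its answer on
  systemPrompt?: string;    // instructions shaping how the LLM answers
  temperature?: number;     // 0.0-1.0; defaults to 0.3
}

const input: GenerateSearchAnswerInputSketch = {
  query: 'What is the product knowledge base?',
  searchResults: [], // would come from a prior knowledge-base search
  maxTokens: 500,    // keep the answer concise and cheap
  temperature: 0.0,  // deterministic, strictly factual output
};
```

Only `query` and `searchResults` are required here; every optional field falls back to the defaults described in the table (1000 tokens, temperature 0.3).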