It's important to acknowledge that no vendor can guarantee the complete absence of hallucinations; they are an inherent risk of current large language model (LLM) technology, stemming from the generative nature of these models. However, we have taken every precaution to mitigate this risk and ensure trustworthy outputs:
- Carefully crafting prompts that provide clear guidance to the LLM to base its responses solely on the applicable content within Otter.
- Providing users with a mechanism to report incorrect responses, creating a feedback loop that continuously improves the models based on real-world data.
- Leveraging LLMs' reasoning capabilities to generate valid inferences that go beyond merely restating their training data.
In addition to the measures listed above, we provide traceability: each response includes the specific references or sources used to generate it, so users can validate the response themselves.
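To make the grounding and traceability points concrete, the following is a minimal, hypothetical sketch of the general pattern: the prompt restricts the model to retrieved passages and asks it to cite them, so each answer can be traced back to its sources. Names such as `Passage` and `call_llm` are illustrative placeholders, not Otter's actual implementation or API.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str  # e.g. a transcript or document identifier
    text: str

def build_grounded_prompt(question: str, passages: list[Passage]) -> str:
    """Assemble a prompt that restricts the model to the retrieved passages."""
    context = "\n".join(f"[{p.source_id}] {p.text}" for p in passages)
    return (
        "Answer the question using ONLY the passages below. "
        "Cite the passage IDs you relied on in square brackets. "
        "If the passages do not contain the answer, say you don't know.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for the model call; swap in a real client here."""
    return "The weekly sync was moved to Thursday at 10am [doc-42]."

if __name__ == "__main__":
    passages = [Passage("doc-42", "The weekly sync was rescheduled to Thursday at 10am.")]
    answer = call_llm(build_grounded_prompt("When is the weekly sync?", passages))
    print(answer)
```

Because the cited passage IDs appear in the answer, a user (or an automated check) can verify each claim against the underlying source rather than trusting the model's output alone.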
Ultimately, we have robust processes to ground Otter's LLM outputs in applicable content and enable continual enhancement through user feedback. Maintaining accuracy and trustworthiness remains our top priority.