I don't understand. The documentation has to be embedded, so the LLM starts by loading the embedded documentation into its context... then you ask your question...
It can't do that; it's an LLM limitation. If you use GPT-4o or Sonnet 3.5 you will get fewer hallucinations.
But it's even better if you add good reference documentation... hallucinations decrease because the LLM can use it to correct itself (see the sketch below)...
The devs are close to letting us add our own API key. You'll pay for that, but you'll be able to use a better LLM...
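To illustrate the "good reference documentation" point, here is a minimal sketch of grounding the model in retrieved documentation. It assumes an OpenAI-style client; the chunking, model names, and prompt wording are illustrative assumptions, not how this tool actually works internally.

```python
# Minimal retrieval-augmented sketch: embed the documentation once, then at
# question time put only the closest chunks into the model's context.
# Assumes an OpenAI-style API; everything here is illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

doc_chunks = [
    "Chunk 1 of the reference documentation...",
    "Chunk 2 of the reference documentation...",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vectors = embed(doc_chunks)  # done once, typically stored in a vector index

def answer(question, k=2):
    q_vec = embed([question])[0]
    # cosine similarity between the question and every documentation chunk
    sims = chunk_vectors @ q_vec / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n\n".join(doc_chunks[i] for i in np.argsort(sims)[-k:])
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided documentation. "
                        "Say 'not in the docs' if the answer is missing.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```

The strict system prompt is what lets the model "correct itself" against the docs instead of guessing.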
Actually, there are techniques they can use to increase search salience and decrease what are popularly called hallucinations. I shared an article with them recently in a feature request; I'll share it again here since it's relevant, and some of the techniques can be done by the end user (though it's tedious and costly to do in a chat). A sketch of one such technique follows below.
https://decodingml.substack.com/p/your-rag-is-wrong-heres-how-to-fix
And I will also share that the book this article comes from is currently on sale in the excellent Machine Learning Engineers Humble Bundle. https://www.humblebundle.com/books/machine-learning-engineer-masterclass-packt-books
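For a concrete flavour of this kind of technique, here is a sketch of rewriting the query before retrieval, a common RAG fix and roughly what an end user imitates by rephrasing their question in chat. I'm not claiming it's a specific method from the linked article; the client, model name, and `search_fn` below are illustrative assumptions.

```python
# Sketch of one common retrieval fix: rewrite the user question into several
# search queries before retrieving, then merge the results. Assumes an
# OpenAI-style client; search_fn stands in for whatever vector search the
# tool already has (hypothetical here).
from openai import OpenAI

client = OpenAI()

def rewrite_queries(question, n=3):
    """Ask the model for n alternative phrasings of the question."""
    prompt = (
        f"Rewrite the following question into {n} different search queries, "
        f"one per line, that together cover its possible meanings:\n{question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return [q.strip() for q in resp.choices[0].message.content.splitlines() if q.strip()]

def retrieve_with_expansion(question, search_fn, k=5):
    """Run the original question plus its rewrites through search_fn and de-duplicate."""
    seen, merged = set(), []
    for query in [question] + rewrite_queries(question):
        for chunk in search_fn(query, k=k):
            if chunk not in seen:
                seen.add(chunk)
                merged.append(chunk)
    return merged
```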
If it's true (and I don't know that it is) that there is a high hallucination rate, I strongly vote this one up.
One possibility would be (perhaps as an option) to automatically add a validation prompt.
So you'd enter your prompt, get an answer, and the tool would automatically be prompted to verify the answer against the source material. This would, and should, use up additional credits.
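Roughly, that automatic second pass might look like the sketch below. It assumes an OpenAI-style API; the function name, model name, and prompt wording are illustrative, not the tool's actual implementation, and the extra call is exactly where the additional credits would go.

```python
# Sketch of the "validation prompt" idea: after the tool answers, issue a
# second call that checks the answer against the retrieved source material.
# Assumes an OpenAI-style client; names and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

def validate_answer(question, answer, source_material, model="gpt-4o"):
    """Return the model's verdict on whether the answer is supported by the sources."""
    check_prompt = (
        "You are verifying an answer against source material.\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        f"Source material:\n{source_material}\n\n"
        "List every claim in the answer that is not supported by the source "
        "material, or reply 'SUPPORTED' if everything is."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": check_prompt}],
    )
    return resp.choices[0].message.content
```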