RAGAS
The RAGAS metric is the harmonic mean of four distinct metrics:
- RAGASAnswerRelevancyMetric
- RAGASFaithfulnessMetric
- RAGASContextualPrecisionMetric
- RAGASContextualRecallMetric
It provides a single score to holistically evaluate your RAG pipeline's generator and retriever.
Note: Since RAGAS is the harmonic mean of these four distinct metrics, a value of 0 in any one of them will result in a final RAGAS score of 0.
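To see why a single 0 is enough to zero out the overall score, here is a minimal sketch of how a harmonic mean behaves (illustrative only, not part of deepeval's API):

def harmonic_mean(scores):
    # The harmonic mean collapses to 0 as soon as any component score is 0
    if any(score == 0 for score in scores):
        return 0.0
    return len(scores) / sum(1.0 / score for score in scores)

print(harmonic_mean([0.9, 0.8, 0.7, 0.6]))  # roughly 0.73
print(harmonic_mean([0.9, 0.8, 0.7, 0.0]))  # 0.0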
Required Arguments
To use the RagasMetric, you'll have to provide the following arguments when creating an LLMTestCase:
- input
- actual_output
- expected_output
- retrieval_context
Example
from deepeval import evaluate
from deepeval.metrics import RagasMetric
from deepeval.test_case import LLMTestCase
# Replace this with the actual output from your LLM application
actual_output = "We offer a 30-day full refund at no extra cost."
# Replace this with the expected output from your RAG generator
expected_output = "You are eligible for a 30 day full refund at no extra cost."
# Replace this with the actual retrieved context from your RAG pipeline
retrieval_context = ["All customers are eligible for a 30 day full refund at no extra cost."]
metric = RagasMetric(threshold=0.5, model="gpt-3.5-turbo")
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    actual_output=actual_output,
    expected_output=expected_output,
    retrieval_context=retrieval_context
)
metric.measure(test_case)
print(metric.score)
# or evaluate test cases in bulk
evaluate([test_case], [metric])
There are two optional parameters when creating a RagasMetric:
- [Optional] threshold: a float representing the minimum passing threshold, defaulted to 0.5.
- [Optional] model: a string specifying which of OpenAI's GPT models to use, OR any one of langchain's chat models of type BaseChatModel. Defaulted to 'gpt-3.5-turbo'.
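Because model also accepts any of langchain's chat models, you can pass a custom evaluation model instead of a model-name string. A minimal sketch, assuming the langchain-openai package is installed (the import path and model name are assumptions; adjust them for your setup):

from langchain_openai import ChatOpenAI  # assumed: langchain-openai is installed
from deepeval.metrics import RagasMetric

# Any langchain chat model of type BaseChatModel can be passed as `model`
custom_model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
metric = RagasMetric(threshold=0.7, model=custom_model)

The threshold determines the minimum score a test case must reach for the metric to count as passing.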