Evaluating information retrieval

scroll ↓ to Resources

Contents

Note

  • end-to-end model evaluation is challenging unless we expect a single short answer. Evaluate separately:
    • information extraction (did the system find the correct information?)
    • reasoning (given correct information, did the system draw the right conclusions?)
    • output generation (was the final response clear and actionable?)
    • some domains are easier to evaluate than others
      • coding: does the code pass tests?
    • user feedback, or the way users interact with the results, can be the ultimate metric
  • when evaluating the performance of the system you have, don’t forget to record what is missing
    • Inventory issues - lack of data to fulfill certain user requests; a better algorithm can’t help with that
    • Capability issues - functionality gaps where the system can’t perform certain types of queries or filters
  • the impact of RAG depends on the quality of the retrieved documents, which in turn is evaluated by:
    • relevance: how good the system is at ranking relevant documents higher and irrelevant documents lower
    • information density: if two documents are equally relevant, we should prefer the one that’s more concise and has fewer extraneous details
    • level of detail: how much of the specific detail needed to answer the query the document actually contains
  • Separate retrieval evals from generation evals and focus on the retrieval part first
    • retrieval is cheap to evaluate, generation is expensive
    • generation comes later in the pipeline and assumes the retrieval is correct
  • Group your evaluation-set queries by difficulty into N groups (e.g. 5 groups of 20) and only start evaluating the next group once you reach the desired accuracy or recall on the simpler questions (see the gating sketch after this list).
  • Build your own relevance dataset
    • Better data is better than better models
    • public benchmarks like MTEB are rarely as relevant as an application-specific dataset
      • too generic, even when specialized for a topic
      • too clean and correct compared to real user queries
      • data may have been seen by the embedding model during pre-training
  • statistically validate potential improvements to quantify confidence in performance differences and avoid investing in unreliable ones (see the sketch after this list)
    • create a @dataclass ExperimentConfig plus functions to sample from the available data and calculate metrics
    • bootstrap N samples for each RAG configuration and calculate confidence intervals
      • plot Recall@k for different k for pairs of ExperimentConfigs
        • if confidence intervals are too wide - increase N
        • if confidence intervals for two different configurations overlap - it is possible that the difference in performance was due to chance
    • a t-test is another way to tell whether the difference in the means of two configurations is due to chance
      • use the distributions of means from bootstrapping, not the means themselves
      • a high p-value and low t-statistic point to NO statistical significance
  • if you have a number of tools for different search use cases (somewhat similar to intent recognition), evaluate them independently
    • ask the model to make a plan for which tools to use; track plan acceptance rates by users
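
A minimal sketch of the difficulty-gated evaluation loop mentioned above. The group labels, the 0.8 recall target and the evaluate() callback are illustrative assumptions, not a fixed API.

```python
# Difficulty-gated evaluation: only move on to the next (harder) group once the
# current group reaches the target recall. Threshold and group names are
# illustrative assumptions.
from typing import Callable

TARGET_RECALL = 0.8  # assumed quality gate


def gated_evaluation(groups: dict[str, list[dict]],
                     evaluate: Callable[[list[dict]], float]) -> dict[str, float]:
    """groups: difficulty label -> list of eval queries, ordered easy to hard.
    evaluate() returns an aggregate score such as mean Recall@k."""
    results: dict[str, float] = {}
    for difficulty, queries in groups.items():
        score = evaluate(queries)
        results[difficulty] = score
        if score < TARGET_RECALL:
            # fix the easier group before spending time on harder questions
            print(f"stopping at '{difficulty}': recall {score:.2f} < {TARGET_RECALL}")
            break
    return results
```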
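
A sketch of the bootstrap-plus-t-test validation described in the list above, assuming a retrieve(query, config) function that returns ranked document IDs and an eval set with labelled relevant IDs; the ExperimentConfig fields are placeholders for whatever knobs your pipeline actually has.

```python
# Bootstrap comparison of two RAG configurations on Recall@k, with a t-test on
# the bootstrapped means. retrieve() and the config fields are assumptions.
import random
from dataclasses import dataclass

import numpy as np
from scipy import stats


@dataclass
class ExperimentConfig:
    name: str
    embedding_model: str  # placeholder fields - adapt to your pipeline
    chunk_size: int
    top_k: int


def recall_at_k(retrieved_ids: list[str], relevant_ids: set[str], k: int) -> float:
    if not relevant_ids:
        return 0.0
    return len(set(retrieved_ids[:k]) & relevant_ids) / len(relevant_ids)


def bootstrap_means(eval_set: list[dict], config: ExperimentConfig,
                    retrieve, n_bootstrap: int = 1000) -> np.ndarray:
    """eval_set items: {"query": str, "relevant_ids": set[str]}.
    Returns the bootstrap distribution of mean Recall@k."""
    per_query = [
        recall_at_k(retrieve(item["query"], config), item["relevant_ids"], config.top_k)
        for item in eval_set
    ]
    means = []
    for _ in range(n_bootstrap):
        resample = random.choices(per_query, k=len(per_query))  # with replacement
        means.append(float(np.mean(resample)))
    return np.array(means)


def compare(eval_set, config_a, config_b, retrieve):
    dist_a = bootstrap_means(eval_set, config_a, retrieve)
    dist_b = bootstrap_means(eval_set, config_b, retrieve)
    ci_a = np.percentile(dist_a, [2.5, 97.5])  # 95% confidence interval
    ci_b = np.percentile(dist_b, [2.5, 97.5])
    # t-test on the bootstrap distributions of the means, not the raw means
    t_stat, p_value = stats.ttest_ind(dist_a, dist_b)
    print(f"{config_a.name}: CI {np.round(ci_a, 3)}  {config_b.name}: CI {np.round(ci_b, 3)}")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f} (high p / low |t| -> no significance)")
```

If the intervals overlap or p is high, treat the two configurations as indistinguishable and spend the effort elsewhere.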

Build your own relevance dataset

Real data

  • If available, sample user queries and their outputs from a production RAG system and put in the time to rank the results yourself or with the help of an LLM
    • fancy way - use tracing tools like LangSmith or Logfire
    • simple way - save user/session info, query-answer pairs and retrieved chunks to Postgres (see the sketch below)
  • collect unstructured feedback (comments, issue reports)
    • use hierarchical clustering to identify patterns and create a taxonomy of categories (see the sketch below)
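
A minimal logging sketch for the Postgres option above, assuming psycopg 3; the rag_logs table and its columns are illustrative, not a prescribed schema.

```python
# Log user/session info, query-answer pairs and retrieved chunks to Postgres.
# Table and column names are illustrative assumptions. Requires psycopg 3.
import psycopg
from psycopg.types.json import Jsonb

DDL = """
CREATE TABLE IF NOT EXISTS rag_logs (
    id         BIGSERIAL PRIMARY KEY,
    session_id TEXT,
    user_id    TEXT,
    query      TEXT,
    answer     TEXT,
    chunks     JSONB,                      -- retrieved chunks with scores
    created_at TIMESTAMPTZ DEFAULT now()
);
"""


def log_interaction(conn_str: str, session_id: str, user_id: str,
                    query: str, answer: str, chunks: list[dict]) -> None:
    with psycopg.connect(conn_str) as conn, conn.cursor() as cur:
        cur.execute(DDL)  # idempotent thanks to IF NOT EXISTS
        cur.execute(
            "INSERT INTO rag_logs (session_id, user_id, query, answer, chunks) "
            "VALUES (%s, %s, %s, %s, %s)",
            (session_id, user_id, query, answer, Jsonb(chunks)),
        )
```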
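
A sketch of clustering unstructured feedback into a rough taxonomy, assuming sentence-transformers for embeddings and scikit-learn's agglomerative (hierarchical) clustering; the model name and distance threshold are assumptions to tune.

```python
# Hierarchical clustering of feedback comments to surface candidate categories.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering


def cluster_feedback(comments: list[str], distance_threshold: float = 1.0) -> dict[int, list[str]]:
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    embeddings = model.encode(comments)
    clustering = AgglomerativeClustering(
        n_clusters=None,                      # let the distance threshold decide
        distance_threshold=distance_threshold,
        linkage="ward",
    ).fit(embeddings)
    # Group comments per cluster; name the clusters manually or with an LLM
    clusters: dict[int, list[str]] = {}
    for label, comment in zip(clustering.labels_, comments):
        clusters.setdefault(int(label), []).append(comment)
    return clusters
```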

Synthetic data

Types of experiments

  • Prioritize experiments based on potential impact and available resources; log everything and present it in a tidy format

System architecture decisions

Other

Metrics

ML metric

Technical

Use-case defined

  • end-user engagement: clicks, adds, dwell time
  • evaluation involving humans who are not end users:
    • for instance, an AI system generates emails to prospective buyers with several price options, conditional discounts and other upselling tricks. Before sending, these emails are reviewed by salespeople. If they make corrections/edits to an email, we consider that something went wrong in the model’s reasoning and analyze the pitfall.
  • satisfaction feedback
  • ratio of FAQ requests forwarded to a live agent
  • Revenue
  • Multi-objective ranking, not just optimizing relevance.

Resources


table file.inlinks, file.outlinks from [[]] and !outgoing([[]])  AND -"Changelog"