CLEF Lab Overview Paper Available

Our CLEF lab overview paper is now available online.

Abstract
In this paper we report on the first Living Labs for Information Retrieval Evaluation (LL4IR) CLEF Lab. Our main goal with the lab is to provide a benchmarking platform for researchers to evaluate their ranking systems in a live setting with real users in their natural task environments. For this first edition of the challenge we focused on two specific use-cases: product search and web search. Ranking systems submitted by participants were experimentally compared using interleaved comparisons to the production system from the corresponding use-case. In this paper we describe how these experiments were performed, what the resulting outcomes are, and conclude with some lessons learned.
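
For context, an interleaved comparison merges the result lists of two rankers into a single list that is shown to the user, and credits each click to the ranker that contributed the clicked result. The sketch below (in Python) shows team-draft interleaving, one common variant; it is an illustration only and not necessarily the exact method used in the lab, which is described in the paper.

import random

def team_draft_interleave(ranking_a, ranking_b):
    """Merge two rankings, remembering which ranker contributed each document."""
    interleaved, team = [], {}
    count_a = count_b = 0

    def remaining(ranking):
        return [doc for doc in ranking if doc not in interleaved]

    while remaining(ranking_a) or remaining(ranking_b):
        # The ranker with fewer contributions picks next (random tie-break),
        # falling back to the other ranker when its list is exhausted.
        pick_a = count_a < count_b or (count_a == count_b and random.random() < 0.5)
        if pick_a and not remaining(ranking_a):
            pick_a = False
        elif not pick_a and not remaining(ranking_b):
            pick_a = True
        doc = (remaining(ranking_a) if pick_a else remaining(ranking_b))[0]
        interleaved.append(doc)
        team[doc] = "A" if pick_a else "B"
        if pick_a:
            count_a += 1
        else:
            count_b += 1
    return interleaved, team

def interleaved_outcome(team, clicked_docs):
    """Credit each click to the contributing ranker; more clicks wins the impression."""
    clicks_a = sum(1 for doc in clicked_docs if team.get(doc) == "A")
    clicks_b = sum(1 for doc in clicked_docs if team.get(doc) == "B")
    return "A" if clicks_a > clicks_b else "B" if clicks_b > clicks_a else "tie"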

Bibtex
@inproceedings{schuth_2015_overview,
  title     = {Overview of the Living Labs for Information Retrieval Evaluation (LL4IR) CLEF Lab 2015},
  author    = {Anne Schuth and Krisztian Balog and Liadh Kelly},
  year      = {2015},
  date      = {2015-09-08},
  booktitle = {CLEF 2015},
  publisher = {Springer},
}

Updates to API and Website

We’ve just made some improvements to our API and website:

  • We now deploy the API using Tornado, which should give a performance boost (see the sketch after this list)
  • Errors and warnings are now tracked using Rollbar, which allows us to act faster when something goes wrong
  • If an error occurs, you will receive a traceback with a link to the code on Bitbucket, making it easier for you to understand what went wrong
  • Our website, documentation, and dashboard now each have their own color scheme, making it easier to navigate between them
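
To illustrate the first three points, here is a minimal sketch (in Python) of what a Tornado request handler with Rollbar error reporting can look like. It is not our production code; the access token, endpoint path, and port are placeholders.

import rollbar
import tornado.ioloop
import tornado.web

# Placeholder access token; in practice this would come from configuration.
rollbar.init("YOUR_ROLLBAR_ACCESS_TOKEN", environment="production")

class StatusHandler(tornado.web.RequestHandler):
    def get(self):
        # Tornado serialises a dict to JSON automatically.
        self.write({"status": "OK"})

    def write_error(self, status_code, **kwargs):
        # Report the traceback of any unhandled exception to Rollbar,
        # then return a minimal JSON error response to the client.
        if "exc_info" in kwargs:
            rollbar.report_exc_info(kwargs["exc_info"])
        self.finish({"error": status_code})

if __name__ == "__main__":
    # The endpoint path and port are illustrative only.
    app = tornado.web.Application([(r"/api/status", StatusHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()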

Please let us know what you think!