Research in personalized information retrieval (PIR) has generally been evaluated using small-scale user studies. This approach greatly limits the scope for comparative evaluation of alternative methods of exploiting information about users and their behaviour to adapt search to their needs.
The primary aim of the PIR-CLEF 2018 laboratory is to use the framework for the evaluation of personalised information retrieval developed within the PIR-CLEF 2017 workshop to run a benchmark task for researchers responding to an open call for participation.
The PIR-CLEF 2017 pilot workshop at CLEF 2017 created an initial PIR task consisting of a test collection built using the methodology that we developed and described in (Sanvitto et al., 2016).
For PIR-CLEF 2018 we will build on the pilot collection developed in PIR-CLEF 2017 to provide participants with a new set of search data for the comparative evaluation of alternative PIR methods. Participants in PIR-CLEF 2018 will receive the PIR-CLEF 2017 pilot collection in order to perform development runs on it.
Camilla Sanvitto, Debasis Ganguly, Gareth J. F. Jones, and Gabriella Pasi. A Laboratory-Based Method for the Evaluation of Personalised Search. In Proceedings of the Seventh International Workshop on Evaluating Information Access (EVIA 2016), a Satellite Workshop of the NTCIR-12 Conference, National Center of Sciences, Tokyo, Japan, June 7, 2016.