The 1st STELLA Community Workshop was held on 20 June 2022 at TH Köln. Eleven invited participants from DIPF Frankfurt, TIB Hannover, and ZBW Kiel attended the workshop to learn about the outcomes of the DFG-funded STELLA project and to discuss future directions and research questions for the evaluation of academic search systems.
One shortcoming of the previous measures (wins, losses, ties) derived from interleaving experiments is their simplified interpretation of click interactions. By weighting clicks differently, it is possible to account for the meaning of the corresponding elements on the search engine result page (SERP). In this blog post, we share our results on weighting clicks on different SERP elements.
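As a minimal sketch of the idea, clicks can be credited to the system that contributed the clicked result, with the credit scaled by the SERP element that was clicked. The element types, weight values, and function names below are illustrative assumptions, not the STELLA implementation:

```python
# Hypothetical per-click weights by SERP element type.
# Values are assumptions for illustration only.
WEIGHTS = {"title": 1.0, "abstract": 0.5, "export": 0.25}

def weighted_scores(clicks, assignment):
    """Score an interleaved ranking with element-weighted clicks.

    clicks: list of (rank, element_type) tuples for clicked results.
    assignment: dict mapping rank -> "exp" or "base", i.e. which
                system contributed the result at that rank.
    Returns the accumulated credit per system.
    """
    scores = {"exp": 0.0, "base": 0.0}
    for rank, element in clicks:
        scores[assignment[rank]] += WEIGHTS.get(element, 0.0)
    return scores

# Example session: a title click on rank 1 (experimental system)
# and an export click on rank 3 (baseline system).
clicks = [(1, "title"), (3, "export")]
assignment = {1: "exp", 2: "base", 3: "base"}
print(weighted_scores(clicks, assignment))  # {'exp': 1.0, 'base': 0.25}
```

A win, loss, or tie for the session would then be decided by comparing the two weighted totals instead of raw click counts.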
A novelty of our living lab implementation is the use of fully-fledged systems that run within a Docker container. Previous living labs were based on pre-computed results only. In our experiments, we validated both approaches and share the results in this blog post.
In this blog post, we share some general information about the [Living Labs for Academic Search (LiLAS)](https://clef-lilas.github.io/) at [CLEF2021](https://clef2021.clef-initiative.eu/). More specifically, we give an overview of the schedule, the participants, and the volume of the logged user interaction data.
We have released a public documentation that covers introductory guides for participants, sites, and organizers, as well as REST endpoints and other technical details. The documentation is available at: https://stella-project.org/stella-documentation/
We have presented STELLA at [ISI 2021 - 16th International Symposium for Information Science](https://isi2021.net/)! In our talk, we provided an updated overview of the STELLA infrastructure and outlined how participants can contribute to the living lab experiments.
The talk was held before our CLEF2021 lab in order to promote the infrastructure and discuss it with an expert community.
You will find the corresponding paper here: https://epub.uni-regensburg.de/44953
You will find a record of our talk here: https://av.
In one of our [earlier blog posts](../posts/STELLA-participant-systems-in-STELLA), we outlined how to participate. The corresponding repository of the template can be found at [GitHub](https://github.com/stella-project/stella-micro-template).
For those who prefer videos instead of reading guides, we have prepared an introduction that is now available on YouTube!
STELLA enables researchers and coders to deploy and evaluate search and recommendation algorithms in a real-world scenario. These so-called "Living Labs" provide a user-focused and thus more realistic evaluation approach.
What are the crucial steps when implementing a participant-system in STELLA?
In academia, we often rely on standard test collections with static relevance judgments that abstract away the specific information needs of individual users. Evaluating search engine algorithms with real-world users is therefore a desideratum: ultimately, we want to know how users actually benefit from innovative search engines. The STELLA project tries to bridge this gap with an infrastructure that makes it possible to evaluate experimental search algorithms in live environments. In this post, we outline the general idea of our infrastructure.