Objectives

Advancing the state-of-the-art in large-scale semantic indexing and QA.

BioASQ organises challenge tasks that measure the ability of participating systems to semantically annotate (index) biomedical documents in real time, as they become available online, and to retrieve and synthesise relevant information from both textual and structured data, in order to provide appropriate responses to natural language questions in real time. The participating systems are pushed to their limits in terms of scalability, efficiency, accuracy, coverage, and conciseness of responses, as the underlying information sources contain large volumes of data of different types, and the systems’ answers are compared against gold standard (reference) answers prepared by biomedical experts. To obtain good results, participants need to combine and improve state-of-the-art methods from information retrieval, machine learning, natural language processing, and text mining. This, in turn, can advance the state of the art in these areas, and in semantic indexing and QA in particular.
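To make the comparison against gold standard annotations more concrete, the following is a minimal sketch, not the official BioASQ evaluation code, of how flat micro-averaged precision, recall and F1 of predicted index terms against expert-assigned labels might be computed; the function and variable names (evaluate_indexing, gold, predicted) and the example labels are illustrative assumptions.

```python
# Minimal sketch (not the official BioASQ evaluation code): flat micro-averaged
# precision/recall/F1 of predicted index terms against gold-standard annotations.
# Names and example labels are illustrative.

def evaluate_indexing(gold, predicted):
    """gold, predicted: dicts mapping document id -> set of labels (e.g. MeSH headings)."""
    tp = fp = fn = 0
    for doc_id, gold_labels in gold.items():
        pred_labels = predicted.get(doc_id, set())
        tp += len(gold_labels & pred_labels)   # correctly assigned labels
        fp += len(pred_labels - gold_labels)   # spurious labels
        fn += len(gold_labels - pred_labels)   # missed labels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

if __name__ == "__main__":
    gold = {"doc1": {"Neoplasms", "Apoptosis"}, "doc2": {"Alzheimer Disease"}}
    predicted = {"doc1": {"Neoplasms"}, "doc2": {"Alzheimer Disease", "Dementia"}}
    print(evaluate_indexing(gold, predicted))
```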

Establishment of BioASQ as a reference point in biomedical QA.

Although QA has a long history in artificial intelligence and computational linguistics, biomedical QA in particular is a relatively new field with distinct characteristics, most notably the heterogeneity and exponential growth of the underlying information sources, as well as the very extensive use of domain-specific terminology. To the best of our knowledge, BioASQ is the first international challenge for generic biomedical QA, and it aims to establish itself as a reference point in the field. Among other activities, BioASQ will establish a social network of biomedical experts, along with regulations and mechanisms to accredit user contributions. This network of experts will continue to exist after the end of the project, providing a platform for maintaining and extending the BioASQ benchmarks, based on contributions and evaluation by peers.

Building foundations for further competitions.

Beyond the social network of experts, BioASQ’s legacy will include a reusable infrastructure for biomedical semantic indexing and QA competitions. A large computational infrastructure (a cluster of around 5,000 cores) has already been made available to BioASQ. Building upon experience from the Large Scale Hierarchical Text Classification (LSHTC) challenges, the infrastructure to support BioASQ’s challenge tasks has been developed (e.g., benchmark repositories, evaluation Web services, and services allowing biomedical experts to add questions and gold reference answers). More general functionality will also be included, for example defining a new task along with its benchmarks and evaluation processes, as sketched below. The BioASQ infrastructure will be useful not only during the project, but also in follow-up biomedical QA challenges, possibly based on new benchmarks produced by the BioASQ expert network and/or extended with additional challenge tasks.
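The sketch below illustrates, under stated assumptions, what the "define a new task" functionality could look like: a task bundles a benchmark with an evaluation process, and new tasks are registered with the infrastructure. The class, field, and function names (Task, register_task, score_submission) are hypothetical and do not describe the actual BioASQ codebase.

```python
# Hypothetical sketch of "defining a new task, its benchmarks and evaluation
# processes". Names are illustrative, not the actual BioASQ infrastructure code.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Task:
    name: str                               # e.g. "Large-scale semantic indexing"
    benchmark_path: str                     # location of the gold-standard benchmark
    evaluate: Callable[[dict, dict], dict]  # (gold, submission) -> metric scores

REGISTRY: Dict[str, Task] = {}

def register_task(task: Task) -> None:
    """Add a new challenge task, with its benchmark and evaluation process."""
    REGISTRY[task.name] = task

def score_submission(task_name: str, gold: dict, submission: dict) -> dict:
    """Run the registered task's evaluation process on a participant's submission."""
    task = REGISTRY[task_name]
    return task.evaluate(gold, submission)
```

A registry of this kind would let follow-up challenges plug in new benchmarks or additional tasks without changing the surrounding evaluation services.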