Challenges - Tasks 11b, Synergy11, MedProcNER - Year 11

This year the BioASQ challenge will comprise the following tasks. Participants may choose to participate in any or all of the tasks and their subtasks.

BioASQ Task Synergy: Biomedical Semantic QA for developing issues

Task Synergy will use benchmark datasets of biomedical test questions on developing issues, such as COVID-19, in English. Participants will have to respond to each test question with relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). No dedicated training questions are available for this task; instead, expert feedback will be provided incrementally, based on participant responses in each round. Using this feedback, participants can improve their systems and provide better answers for persisting and/or new questions. Participants may also train their systems on the training datasets from previous versions of task Synergy, as well as of task b, both available in the BioASQ Participants Area. All questions are constructed and assessed by biomedical experts from around Europe. Participation in the task can be partial, i.e. participants may enter the task in any of the rounds.

The Synergy task will run in four rounds, starting with an initial set of questions on developing issues, such as COVID-19, in the first round on January 9, 2023. Questions will persist in later rounds until fully answered. In addition, new versions of existing questions or entirely new questions may be added in later rounds. Separate winners will be announced for each round. Participation in the task can be partial; for example, it is acceptable to participate in only some of the rounds, to return only relevant articles (or only article snippets), or to return only exact answers (or only 'ideal' answers). System responses will be assessed manually, and feedback on the responses will be provided at the end of each round.
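As a minimal sketch of how a system might use the round-by-round expert feedback for a persisting question: articles already judged in an earlier round would not be resubmitted, and only new candidates would be returned. The feedback structure and the PMID identifiers below are illustrative assumptions, not the official Synergy feedback format.

```python
# Illustrative only: the feedback dictionary shape and PMID strings are
# assumptions for this sketch, not the official BioASQ Synergy format.
def next_round_articles(candidates, feedback):
    """Keep only candidate articles that have not yet been judged by the experts."""
    judged = set(feedback.get("relevant", [])) | set(feedback.get("irrelevant", []))
    return [doc for doc in candidates if doc not in judged]

feedback = {"relevant": ["PMID:111"], "irrelevant": ["PMID:222"]}
candidates = ["PMID:111", "PMID:222", "PMID:333"]
print(next_round_articles(candidates, feedback))  # prints ['PMID:333']
```

The same filtering idea applies to snippets: anything the experts have already assessed for a question should not be returned again in a later round.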

BioASQ Task 11b: Biomedical Semantic QA (involves IR, QA, summarization)

Task 11b will use benchmark datasets containing training and test biomedical questions, in English, along with gold standard (reference) answers. Participants will have to respond to each test question with relevant concepts (from designated terminologies and ontologies), relevant articles (in English, from designated article repositories), relevant snippets (from the relevant articles), relevant RDF triples (from designated ontologies), exact answers (e.g., named entities in the case of factoid questions) and 'ideal' answers (English paragraph-sized summaries). More than 4,000 training questions (used as dry-run or test questions in previous years) are already available, along with their gold standard answers (relevant concepts, articles, snippets, exact answers, summaries). About 500 new test questions will be used this year. All questions are constructed by biomedical experts from around Europe.
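To make the required response elements concrete, the sketch below assembles one hypothetical system response for a factoid question. The field names, URLs, and ontology identifiers are illustrative assumptions based on the description above, not the official BioASQ submission schema.

```python
import json

# Hypothetical sketch of one Task b system response for a factoid question.
# All field names, URLs, and identifiers are illustrative assumptions,
# not the official BioASQ submission schema.
response = {
    "id": "question-001",            # test question identifier
    "type": "factoid",               # question type
    "documents": [                   # relevant articles
        "http://www.ncbi.nlm.nih.gov/pubmed/12345678",
    ],
    "snippets": [                    # relevant snippets from those articles
        {
            "document": "http://www.ncbi.nlm.nih.gov/pubmed/12345678",
            "text": "BRCA1 mutations are associated with hereditary breast cancer.",
        }
    ],
    "concepts": [],                  # relevant concepts from designated terminologies
    "triples": [],                   # relevant RDF triples from designated ontologies
    "exact_answer": [["BRCA1"]],     # named entity answering the factoid question
    "ideal_answer": "BRCA1 is the gene most commonly mutated in hereditary breast cancer.",
}

print(json.dumps(response, indent=2))
```

Since participation can be partial, a system could leave any of these elements empty (as with "concepts" and "triples" here) and still be evaluated on the elements it does return.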

The test dataset of Task 11b will be released in batches, each containing approximately 100 questions. The task will start in March 2023. Separate winners will be announced for each batch. Participation in the task can be partial; for example, it is acceptable to participate in only some of the batches, to return only relevant articles (and no concepts, triples, or article snippets), or to return only exact answers (or only 'ideal' answers). System responses will be evaluated both automatically and manually.

BioASQ Task MedProcNER: Medical procedure detection and entity linking

This task focuses on the detection, normalization and indexing of clinical procedures in medical documents in Spanish. Participants are challenged to build systems for three subtasks: (a) clinical procedure detection, (b) entity normalization to SNOMED CT and (c) whole-document indexing using SNOMED CT codes.
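The three subtasks can be pictured as stages over the same annotation. The sketch below uses a hypothetical data model; the character offsets, field names, and the SNOMED CT code shown are illustrative assumptions, not a verified gold-standard annotation.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical data model for the three MedProcNER subtasks. The SNOMED CT
# code below is illustrative, not a verified mapping.
@dataclass
class ProcedureMention:
    start: int                      # character offset where the mention begins
    end: int                        # character offset where the mention ends
    text: str                       # surface form as it appears in the document
    snomed_ct: Optional[str] = None # code assigned in the normalization subtask

doc = "Se realizó una apendicectomía laparoscópica sin complicaciones."

# (a) detection: locate the procedure mention by its character span
mention = ProcedureMention(start=15, end=43, text=doc[15:43])

# (b) normalization: link the mention to a SNOMED CT concept
mention.snomed_ct = "6025007"  # illustrative code for a laparoscopic appendectomy

# (c) indexing: the document is indexed with the set of assigned codes
document_codes = {m.snomed_ct for m in [mention]}
print(document_codes)  # prints {'6025007'}
```

Detection is evaluated on spans, normalization on the mention-to-code links, and indexing on the document-level code set, so a pipeline like this one naturally feeds each subtask's output into the next.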

The BioASQ Task MedProcNER is co-organized with the Barcelona Supercomputing Center.