Wednesday, June 28, 2017

Systematic Annotation of Literary Texts, A Shared Task

Posted to the Humanist Discussion Group:
Dear all,

We would like to draw your attention to a community-oriented initiative that will introduce a new format of collaboration into the field of the Humanities: the 1st shared task on the analysis of narrative levels through annotation. It is an extension of the established shared task format from the field of Computational Linguistics to Literary Studies and will commence this fall. The goal of the first stage of the (two-stage) shared task is the *collaborative creation of annotation guidelines*, which will in turn serve as a basis for the second round, an automatisation-oriented shared task. The 1st call for participation will be sent in August 2017. The audience for the first round of the shared task is researchers interested in the (manual) analysis of narrative.

We are sending this pre-call in order to a) make you aware of this activity and b) give you the opportunity to coordinate possible participation with your teaching or research activities in fall and winter.

Please check out our web page and feel free to point other colleagues to it. If you have questions or comments, please do not hesitate to contact us.

Best regards,
Evelyn Gius, Nils Reiter, Jannik Strötgen and Marcus Willand

Overview

FAQ

Leaflet
From the overview:
In this talk, we would like to outline a proposal for a shared task (ST) in and for the digital humanities. In general, shared tasks are highly productive frameworks for bringing together different researchers and research groups and, if done in a sensible way, foster interdisciplinary collaboration. They have a long tradition in natural language processing (NLP), where organizers define research tasks and settings. In order to accommodate the particularities of DH research, we propose a ST that works in two phases, with two distinct target audiences and sets of possible participants.

Generally, this setup allows both “sides” of the DH community to bring in what they do best: Humanities scholars focus on conceptual issues, their description and definition, while computer science researchers focus on technical issues and work towards automatisation (cf. Kuhn & Reiter, 2015). The ideal situation, in which both “sides” of DH contribute to the work in both areas, is challenging to achieve in practice. The shared task scenario takes this into account and encourages Humanities scholars without access to programming “resources” to contribute to the conceptual phase (Phase 1), while software engineers without an interest in literature per se can contribute to the automatisation phase (Phase 2). We believe that this setup can actually lower the bar to entry for DH research. Decoupling, however, does not imply strict, uncrossable boundaries: there needs to be interaction between the two phases, which is also ensured by our mixed organisation team. In particular, this setup does allow mixed teams to participate in both phases (and it will be interesting to see how they fare).

In Phase 1 of the shared task, participants with a strong understanding of a specific literary phenomenon (literary studies scholars) work on the creation of annotation guidelines. This allows them to bring in their expertise without worrying about the feasibility of automatisation or struggling with technical issues. We will compare the different annotation guidelines both qualitatively, through an in-depth discussion during a workshop, and quantitatively, by measuring inter-annotator agreement, resulting in a community-guided selection of annotation guidelines for a set of phenomena. The involvement of the research community in this process guarantees that heterogeneous points of view are taken into account.
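The overview does not say how inter-annotator agreement would be measured. A common choice for categorical annotations is Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (the narrative-level labels and the two annotators are invented for illustration):

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Chance agreement, from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical narrative-level labels from two annotators:
a = ["frame", "embedded", "frame", "frame", "embedded"]
b = ["frame", "embedded", "frame", "embedded", "embedded"]
print(round(cohens_kappa(a, b), 2))  # prints 0.62
```

Values near 1 indicate agreement well above chance; values near 0 suggest the guidelines leave too much room for interpretation.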

The guidelines then enter Phase 2, where they are used to produce annotations on a moderately large scale. These annotations then feed into a “classical” shared task as it is established in the NLP community: various teams competitively contribute systems, whose performance is evaluated quantitatively.
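The announcement does not name the Phase 2 evaluation metric, but NLP shared tasks conventionally report precision, recall, and F1 of system output against a gold standard. A hedged sketch, with invented character-offset spans standing in for annotated narrative levels:

```python
def precision_recall_f1(gold, predicted):
    """Span-level scores: a predicted span counts only on exact match."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                      # true positives
    p = tp / len(predicted) if predicted else 0.0   # precision
    r = tp / len(gold) if gold else 0.0             # recall
    f1 = 2 * p * r / (p + r) if p + r else 0.0      # harmonic mean
    return p, r, f1

# Hypothetical gold vs. system annotations, as (start, end) offsets:
gold = {(0, 12), (13, 40), (41, 60)}
pred = {(0, 12), (13, 39), (41, 60)}  # one boundary off by one
p, r, f1 = precision_recall_f1(gold, pred)
```

Exact-match scoring is deliberately strict; shared tasks often also report a relaxed (overlap-based) variant so that near misses like the off-by-one boundary above are partially credited.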

Given the complexity of many phenomena in literature, we expect the automatisation of such annotations to be an interesting challenge from an engineering perspective. On the other hand, it is an excellent opportunity to initiate the development of tools tailored to the detection of specific phenomena that are relevant for computational literary studies.
Note that while Phase 1, the creation of guidelines, must be done by experts, the application of those guidelines on a large scale could be done by "student assistants". Perhaps there could be a Phase 3 that opens the task out to the public: people with a strong interest in literature. Anyone who's spent a fair amount of time cruising the web looking for literary resources knows that some high-quality work is being done by people with no academic affiliation. We're now talking about so-called "citizen science", or is that citizen humanities?
