Accepted Papers

The official workshop proceedings will be available after the workshop. Please find the paper abstracts, preliminary paper versions and additional information below.

Full Papers

Bistìris Ontology: Towards a Structured Representation of Sardinian Traditional Female Costumes

by Giorgio Corona, Dario Guidotti, Laura Pandolfo and Luca Pulina

In the field of cultural heritage, several ontologies have been proposed to model the vast and diverse artistic and historical heritage, facilitating the interconnection and deeper understanding of the complex relationships among the cultural assets that comprise it. In this paper, we propose Bistìris, an ontology designed for describing a specific category of cultural objects: Sardinian traditional female costumes. Bistìris contributes to this context by aiming to provide a schema for representing this particular kind of cultural heritage asset. We detail the methodology behind the Bistìris ontology and outline its practical application. Inspired by the work of domain experts, Bistìris incorporates a range of parameters. Its main objective is to highlight the distinctions between different local traditions, including variations among costumes from the same town. Bistìris achieves this goal because it is tailored to the analytical description of garments and costumes.

Exploring Prosopographical Information in the Virtual Record Treasury of Ireland’s Knowledge Graph for Irish History

by Beyza Yaman, Lucy McKenna, Alex Randles, Lynn Kilgallon, Peter Crooks and Declan O'Sullivan

The Virtual Record Treasury of Ireland (VRTI), a virtual archive currently in the third phase of its research programme (2023-2025), places a key emphasis on utilising Semantic Web technologies to further construct a comprehensive knowledge graph (KG) of historical Irish bibliographic data. To this end, computer scientists and historians are collaborating closely to enhance the VRTI Knowledge Graph for Irish History (VRTI-KG) in a multidisciplinary manner. As a result of this collaboration, biographical data has been uplifted to RDF and out-of-the-box tools have been employed to explore the KG. The prosopographical graph currently contains 8,807 prominent men and 965 women from Irish history. This paper presents the technical architecture of the KG and discusses the tools being used to interact with and explore the VRTI-KG, as demonstrated through a use case of a notable person represented in the graph.
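
As a rough illustration of the kind of RDF uplift described in this abstract, the sketch below converts one biographical record to RDF with rdflib. The vocabulary (schema.org terms) and example IRIs are assumptions made for illustration and do not reflect the actual VRTI-KG model.

# Minimal sketch: uplifting a biographical record to RDF with rdflib.
# Class/property choices (schema.org) and IRIs are illustrative only,
# not the VRTI-KG's actual data model.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("https://example.org/vrti/")

g = Graph()
g.bind("schema", SCHEMA)

person = EX["person/0001"]
g.add((person, RDF.type, SCHEMA.Person))
g.add((person, SCHEMA.name, Literal("Jane Example")))
g.add((person, SCHEMA.birthDate, Literal("1701")))
g.add((person, SCHEMA.jobTitle, Literal("Clerk of the Exchequer")))

print(g.serialize(format="turtle"))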

Publishing Numismatic Public Finds on the Semantic Web for Digital Humanities Research -- CoinSampo Linked Open Data Service and Semantic Portal

by Heikki Rantala, Eljas Oksanen, Frida Ehrnsten and Eero Hyvönen

This paper presents the new web demonstrator and Linked Open Data (LOD) service CoinSampo. The data service is based on over 18,000 numismatic citizen-science finds reported to the National Museum of Finland between 2013 and 2023, which have been enriched by linking to external data sources. The data has been converted to LOD using lightweight facet ontologies. The CoinSampo web application offers users faceted semantic search and various integrated data-analytic visualization options. The application is aimed at a broad range of audiences, including scientific researchers, heritage professionals, citizen scientists, amateur archaeologists, educators, and anyone interested in learning about the past.
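
As a toy illustration of the faceted filtering such a portal offers, the sketch below counts and filters coin-find records by facet values in plain Python. The field names and records are invented and do not correspond to CoinSampo's actual facet ontologies.

# Toy sketch of faceted filtering over invented coin-find records.
from collections import Counter

finds = [
    {"material": "silver", "period": "medieval", "municipality": "Turku"},
    {"material": "copper", "period": "early modern", "municipality": "Turku"},
    {"material": "silver", "period": "medieval", "municipality": "Raisio"},
]

def facet_counts(records, facet):
    # How many records fall under each value of a facet.
    return Counter(r[facet] for r in records)

def apply_facets(records, selections):
    # Keep only records matching every selected facet value.
    return [r for r in records if all(r[f] == v for f, v in selections.items())]

print(facet_counts(finds, "material"))   # Counter({'silver': 2, 'copper': 1})
print(apply_facets(finds, {"material": "silver", "period": "medieval"}))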

Towards a semantic representation of Egyptian demonology: requirements and benchmark study

by Bruno Sartini and Rita Lucarelli

This work proposes a first step towards the semantic representation of Egyptian demonology, a complex domain within Egyptology. We first use insights from (i) a literature review and (ii) an empirical analysis of structured descriptions of Egyptian demons extracted from the DemonThings database to formulate functional and non-functional requirements, along with competency questions. We then assess the coverage of existing ontologies on the topic by testing the competency questions on linked open data about Egyptian demons generated following the structure of those ontologies. Although certain aspects, such as symbolism, were adequately addressed, deficiencies were identified in areas such as iconographic interpretations, linguistic relationships (between names and their transliterations), and specific conceptualizations of demon roles, their appearance, and the events to which they are connected. The study highlights the need for a specialized ontology tailored to the specific characteristics of Egyptian demons. Future work will focus on the development of such an ontology, with the potential integration of Semantic Web technologies into current digitization projects related to Egyptology.
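
The benchmark procedure of turning a competency question into a query over sample linked data can be sketched as follows; the tiny graph and property names are hypothetical and only illustrate the evaluation step, not the ontologies assessed in the paper.

# Sketch: test a competency question ("Which demons ward off snakes?")
# as a SPARQL query over a small hand-made graph. The vocabulary is hypothetical.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <https://example.org/demons/> .
ex:Bes  ex:hasRole ex:Protector ; ex:wardsOff ex:Snakes .
ex:Tutu ex:hasRole ex:Protector .
""", format="turtle")

competency_question = """
PREFIX ex: <https://example.org/demons/>
SELECT ?demon WHERE { ?demon ex:wardsOff ex:Snakes . }
"""

answers = [str(row.demon) for row in g.query(competency_question)]
print(answers)   # a non-empty result means this question is covered by the model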

eXtreme Design for Ontological Engineering in the Digital Humanities with Viewsari, a Knowledge Graph of Giorgio Vasari's The Lives

by Sarah Rebecca Ondraszek, Grischka Petri, Ulrike Blumenthal, Lisa Dieckmann, Etienne Posthumus and Harald Sack

Knowledge graphs (KGs) and ontologies have become valuable tools in the digital humanities (DH) for integrating interdisciplinary, diverse datasets and applying linked data and FAIR principles. Through a use case, this paper emphasizes the practical value of employing established best practices in ontological engineering, such as eXtreme Design (XD) and Ontology Design Patterns (ODPs), in the DH. As part of a broader project, it presents the design and evaluation of the Viewsari KG, a knowledge graph built upon Giorgio Vasari's seminal Renaissance text, Lives of the Most Excellent Painters, Sculptors, and Architects, a collection of artist biographies. In this way, the application of an ontology engineering methodology in a real-world DH project serves as a guide for future projects in the domain, emphasizing the importance of thoughtful ontology design for representing DH content.

A Corpus of Biblical Names in the Greek New Testament to Study the Additions, Omissions, and Variations across different Manuscripts

by Christoph Werner, Zacharias Shoukry, Soham Al-Suadi and Frank Krüger

The analysis of textual variants of verses in the New Testament across different manuscripts has mainly been carried out through manual close reading. With the increasing number of transcriptions of the different manuscripts, quantitative analyses (so-called distant reading) can be used to search for patterns of omission, addition, or other variation, and to formulate novel hypotheses to be investigated by close reading. In this work, we present a corpus of biblical names, including spelling variants and inflections, and their mentions in the transcriptions of the New Testament. By integrating and semantically enriching data collected from different sources, we established a corpus that can be used for the quantitative study of the omission, addition, and variation of such biblical names. To illustrate the corpus, we implement several use cases and show that well-known cases can be reproduced quantitatively. The corpus and all code are published under open licenses to enable reproduction, updating, and maintenance.
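
A minimal sketch of the kind of distant-reading check such a corpus enables is given below; the manuscript sigla, verse texts, and name variants are made up for illustration and are not taken from the corpus itself.

# Sketch: detect omission of a name across manuscript transcriptions of one verse.
# Sigla, texts, and variant spellings are invented.
name_variants = {"Σιμων", "Σιμωνα", "Σιμωνος"}   # spelling/inflection variants

transcriptions = {
    "P45": "και ειδεν Σιμωνα και Ανδρεαν",
    "01":  "και ειδεν και Ανδρεαν",               # name omitted
    "03":  "και ειδεν Σιμωνος και Ανδρεαν",       # different inflection
}

def mentions_name(text, variants):
    # A real pipeline would normalize diacritics; here the tokens are already bare.
    return bool(set(text.split()) & variants)

for siglum, text in transcriptions.items():
    status = "present" if mentions_name(text, name_variants) else "omitted"
    print(siglum, status)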

Short/Position Papers

PaleOrdia: Semantically describing (cuneiform) paleography using paleographic linked open data

by Timo Homburg

This publication describes PaleOrdia, a web application developed to visualize (cuneiform) paleographic sign variants in Wikidata, along with the data model developed in Wikidata to represent paleography. Modeling paleographic sign variants of (ancient) scripts in linked open data is a relatively new development; it will enable richer descriptions of digital scholarly editions with paleographic annotations, supported by established Web Annotation Data Model vocabularies. As a use case showcasing the capabilities of PaleOrdia, the cuneiform annotation tool Cuneur is presented as one way to harness paleographic linked open data for digital scholarly editions.

Towards LLM-based semantic analysis of historical legal documents

by Tania Litaina, Andreas Soularidis, Georgios Bouchouras, Konstantinos Kotis and Evangelia Kavakli

The preservation of legal documents such as notarial records is of vital importance, as they are evidence of legal transactions between the involved entities through the years and serve as historical legal knowledge bases. The emergence of Large Language Models (LLMs) and their ability to analyze big data and generate content (much faster and relatively better than humans alone) has created new perspectives in many fields, including law. Motivated by the significant potential of LLMs, we investigate their capabilities and limitations in semantically analyzing legal documents through experimentation with two of the most prevalent LLMs, i.e., ChatGPT-3.5 and Gemini/Bard. The goal is to enable automated and faster semantic analysis of documents by posing questions (prompts) concerning the type and subject of contracts, the recognition of the named entities involved, and their relationship(s), e.g., landlord-tenant or family relationships. The experiments were conducted with digitized contract documents that had been converted from handwritten Greek originals into plain text (the LLM input) using Transkribus, an AI-powered platform for text recognition and transcription. The LLM responses were evaluated against the results obtained from a human expert, with the LLMs performing better in terms of precision but not in recall.
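
The evaluation step mentioned above, comparing LLM output against a human expert, reduces to precision and recall over the extracted items; a generic sketch with invented example entities follows.

# Sketch: score LLM-extracted entities against expert annotations.
# The entity strings are invented examples, not data from the paper.
expert = {"Georgios Example", "Maria Example", "lease contract", "landlord-tenant"}
llm    = {"Georgios Example", "Maria Example", "sale contract", "landlord-tenant"}

true_positives = llm & expert
precision = len(true_positives) / len(llm)      # correct among what the LLM returned
recall    = len(true_positives) / len(expert)   # correct among what the expert found

print(f"precision={precision:.2f} recall={recall:.2f}")   # precision=0.75 recall=0.75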

Sustainable Semantics for Sustainable Research Data

by Steffen Hennicke, Pascal Belouin, Hassan El Hajj, Matthew Fielding, Robert Casties and Kim Pham

In view of the steadily growing volume of digital output from Humanities research projects in recent decades, the question of the long-term and sustainable preservation of this research data is becoming increasingly urgent. To meet this challenge, we are establishing the Central Knowledge Graph (CKG) as a key element of our documentation and publication strategy for research data. In this paper, we present two of the cornerstones of this strategy: the newly developed Project Description Layer Model (PDLM) provides the means to document the required contextual metadata about research projects and their digital outputs, while the Zellij Semantic Documentation Protocol systematically documents the modeling patterns used to create CIDOC CRM representations of project data in a transparent and reusable way.

Non-Canonical Acts and their Topical Distribution

by Christian Vrangbæk, Eva Vrangbæk, Márton Kardos, Kristoffer Nielbo and Jacob Mortensen

This paper investigates how topic modelling can be used to characterize and place four apocryphal, i.e. non-canonical, “Acts stories” within a corpus of ancient Greek texts. In the research field of New Testament Apocrypha, there remains uncertainty concerning the classification of apocryphal texts. The analysis serves the purpose of creating a structured ontology to be used in classifying New Testament Apocrypha. We attempt to show that topic modelling can be a viable tool for classifying and characterizing these texts. The results show that (a) our four target texts of non-canonical “Acts stories” are ambiguous and multifaceted in their topical distribution compared to other texts in the corpus, and (b) topic modelling is a viable tool for this analysis.
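
As a generic sketch of the topic-modelling step (the toy documents and parameters are illustrative; the paper's actual pipeline for ancient Greek texts will differ), LDA over a small corpus can look like this:

# Toy sketch of topic modelling with LDA; documents and parameters are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "apostle journey city preach baptize",
    "miracle healing blind crowd temple",
    "journey ship storm island preach",
    "temple sacrifice priest crowd healing",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)                 # per-document topic mixture

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}:", top)
print(doc_topics.round(2))                        # ambiguous texts mix several topics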

Digitalisation Workflows in the age of Transformer Models: A Case Study in Digital Cultural Heritage

by Mahsa Vafaie, Mary Ann Tan and Harald Sack

The advance of the transformer architecture has caused a stir in the field of Artificial Intelligence (AI) and its various applications. The digitalisation of cultural heritage data, which depends heavily on AI, has not been immune to this impact. This paper presents case studies from two different digitalisation projects to explore the integration of transformer-based technologies into digitalisation workflows for cultural heritage data. The transformative effects of these models on such workflows are showcased, and the benefits and drawbacks of this paradigm shift are briefly discussed.

FAIR Paper: Applying FAIR to Academic Publishing

by Wouter Beek, Rick Maurits and Auke Rijpma

The FAIR principles have had a significant and lasting impact on the way in which research is performed in the Digital Humanities. However, they have not yet significantly impacted the way in which research papers are published and disseminated. This paper describes a new approach to academic publishing called the 'FAIR Paper'. A FAIR Paper is an academic publication that lives on the Web, uses open standards, and is completely reproducible. We report on our findings from an early proof-of-concept implementation of the FAIR Paper concept.