Results of AI Experimentation for Cataloging at the Library of Congress

Publisher

International Federation of Library Associations and Institutions (IFLA)

Abstract

This presentation details the Library of Congress's (LOC) experimentation with artificial intelligence (AI) for cataloging, conducted under the Exploring Computational Description (ECD) project. The experiments aim to enhance efficiency while maintaining high-quality records and supporting catalogers in their work. They tested multiple AI models, including GPT models and open-source large language models such as MistralAI, on eBook data and prototyped human-in-the-loop (HITL) workflows. Results show promising performance on structured fields such as title and author (up to 99% accuracy) but markedly lower accuracy on complex fields such as subject and genre (below 50%). Overall, model performance does not yet meet the 95% quality threshold required for full automation. These findings underscore the importance of HITL workflows and inform LOC's next steps, including evaluating BIBFRAME versus MARC and addressing policy challenges such as copyright and training-data bias to modernize cataloging practices. (Presented on 15 August 2025 at the "Pushing Boundaries to Next Generation Cataloguing: Experiments at the Edge of AI and Metadata" session.)
