World Library and Information Congress (WLIC) Papers and Presentations

Permanent URI for this collection: https://repository.ifla.org/handle/20.500.14598/1941

Recent Submissions

Now showing 1 - 20 of 2002
  • AI and Copyright Literacy in the UK Policy Context
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Secker, Jane; Morrison, Chris
    This presentation addresses the rapidly evolving legal and practical challenges arising from the intersection of Artificial Intelligence (AI) and copyright law, with a specific focus on the United Kingdom's public policy context. The session examines the UK government's official consultations and proposed regulatory paths for AI, setting the high-level framework that will govern future practice. The core of the discussion explores the practical effects of this uncertainty, particularly where "copyright anxiety meets AI anxiety" within the academic sector. It examines the real-world questions and challenges currently facing library practitioners and staff in teaching and research environments. Ultimately, the presentation advocates for a focus on practical strategies and highlights the crucial need for enhanced Copyright and AI Literacy to empower the sector in navigating this new regulatory and technological future. (presented on 15 August 2025 at "Copyright and Other Legal Implications of AI" session)
  • The U.S. Copyright Office’s Report on AI and Copyright
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Weston, Chris
    This presentation offers an essential overview of the U.S. Copyright Office's (USCO) comprehensive multi-part report on Artificial Intelligence (AI) and its complex implications for copyright law and policy. Drawing on extensive public consultation, including a Notice of Inquiry that yielded thousands of comments, the USCO has analyzed the most critical legal tensions at the intersection of creative rights and technological development. The session highlights the authoritative source and timely nature of the US government’s key findings regarding the future legal landscape. It frames the discussion around three major axes: the use of copyrighted works for AI training data, the determination of copyrightability for AI-generated output, and the challenging regulatory and licensing solutions being considered. It gives insight into the policy direction that will shape how libraries, creators, and AI developers interact with content in the digital age. (presented on 15 August 2025 at "Copyright and Other Legal Implications of AI" session)
  • Choose Carefully: AI Regulation, Copyright, and the Recent IFLA Statement
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Wyber, Stephen
    This presentation examines the critical policy and regulatory tensions surrounding the rise of Artificial Intelligence (AI), framing the debate around the core question: What is the desired information environment of the future? It explores four key axes of debate—copyright, safety, regulation, and politics—highlighting the conflict between compensation models and fair use principles for text and data mining (TDM). The session makes the case for libraries to be central actors in this debate, advocating for their roles as information fiduciaries who keep users safe and as champions of knowledge inclusion that enable all people to access and create information. Finally, the presentation outlines the key tenets of the IFLA Statement on Copyright and Artificial Intelligence (April 2025), setting forth specific recommendations for libraries, governments, and rightsholders to ensure a responsible and ethical path forward for AI development. (presented on 15 August 2025 at "Copyright and Other Legal Implications of AI" session)
  • Catalysing Change: An AI Roadmap for Cataloguers at the National Library Board of Singapore (NLB)
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Jailani, Haliza
    This presentation outlines the National Library Board Singapore’s (NLB) comprehensive roadmap for integrating Artificial Intelligence (AI) into cataloging workflows to enhance efficiency, accuracy, and metadata enrichment. With more than three million bibliographic records and over a million name headings to manage, NLB is leveraging machine learning (ML) and AI tools to modernize cataloging processes while maintaining data quality and precision. The AI roadmap combines staff upskilling and experimentation through six proof-of-concept (POC) projects. By cultivating AI literacy among cataloguers and refining human–AI collaboration, NLB is building a sustainable and scalable model for metadata innovation. The presentation shares key learnings and outlines next steps toward integrating AI tools responsibly and effectively into national cataloging practices. (presented on 15 August 2025 at "Institutional Responses to AI: Libraries, Standards Bodies and Bibliographic Agencies in Transition" session)
  • Accessibility and Digital Repositories: Describing Digital Collections using LLMs
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Schlaack, Anna; Luke, Stephanie; Stein Kenfield, Ayla
    In April 2024, the United States Department of Justice announced web accessibility regulations that require all state and local governments to make their websites, mobile apps, and content accessible as prescribed by U.S. law. This prompted the University of Illinois Urbana-Champaign (U of I) Library to evaluate both the accessibility of our repository systems and the content we steward, and to review how digital assets are created. Broadly, there are three areas of focus for digital accessibility in our repositories: user interfaces, existing digital assets, and future digital asset creation. The University of Illinois boasts one of the largest physical library collections in the United States and hosts more than 3 million digital assets across our repository services, including digitized newspapers, special collections, scholarship, and digitized books. The content creation, ingest pipelines, and homegrown repositories have not been designed with accessibility first, making adherence to the web accessibility requirements a daunting challenge. U of I librarians and staff are investigating whether multimodal large language models (LLMs) can be leveraged to meet the technical accessibility requirements for description of digital assets. The authors designed a pilot project to generate alternative text (i.e., alt text) using a local installation of Meta’s pre-trained Llama 3.2-Vision. Our preliminary findings suggest that, while it’s a viable tool to describe some types of images, there are certain technical issues that need to be addressed before it can be implemented in daily workflows. The ethical challenges of using LLMs also need to be addressed, including environmental impact, copyright issues with LLM training, and bias inherent in alt text descriptions of cultural heritage materials. Our presentation shares the challenges and successes of the pilot project and discusses our institution’s possible approach to alt text creation based on the findings of this pilot. (presented on 15 August 2025 at "Pushing Boundaries to Next Generation Cataloguing: Experiments at the Edge of AI and Metadata" session)
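
The pilot's code is not included in the abstract; the following is only a minimal sketch of the general approach, assuming a local Ollama installation with the llama3.2-vision model already pulled, the ollama Python client, and a hypothetical image file name. It is not the authors' pilot implementation.

```python
# Minimal sketch: drafting alt text for a digitized image with a locally hosted
# Llama 3.2-Vision model via Ollama. Assumes `ollama pull llama3.2-vision` has
# been run and the `ollama` Python package is installed. The output is a draft
# only and would need human review before being stored as alt text.
import ollama

PROMPT = (
    "Write concise alternative text (one or two sentences) describing this "
    "digitized library image for a screen-reader user. Describe only what is "
    "visible; do not speculate about people, places, or dates."
)

def draft_alt_text(image_path: str) -> str:
    response = ollama.chat(
        model="llama3.2-vision",
        messages=[{"role": "user", "content": PROMPT, "images": [image_path]}],
    )
    return response["message"]["content"].strip()

if __name__ == "__main__":
    print(draft_alt_text("sample_newspaper_page.jpg"))  # hypothetical file name
```

A human-in-the-loop review step, as the abstract implies, would sit between this draft and anything written back into repository metadata.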
  • Improving Performance in AI-based Automatic Classification through Feature Augmentation: A Case Study of KDC
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Chul, Jung; Soo-Sang, Lee; Jee-Hyun, Rho
    The objective of this study is to empirically examine how the performance of an AI-based Korean Decimal Classification (KDC) automatic classification model varies as classification features are augmented, aiming to identify strategies that improve the consistency and accuracy of automated classification-number assignment in subject cataloguing. Experiments were conducted using 5,882 bibliographic records, where metadata from the library domain were supplemented with publishing metadata by integrating independent attributes from both sources. Core features (title, author) and KDC numbers extracted from the National Library of Korea’s database were enriched with external features (keywords, book summary, tables of contents) collected from the Korea Publication Industry Promotion Agency’s BNK database. Feature composition was organized into three sets: Feature Set A (title, author), Feature Set B (title, author, keywords), and Feature Set C (title, author, keywords, book summary, tables of contents). Multi-class classification models based on KLUE-BERT were developed for each set, and their performance variations were systematically analyzed. The findings demonstrate that feature enrichment resulted in progressive improvements across all KDC main classes. The Arts (6XX) class exhibited the most substantial improvement, with a 124.24% increase in the F1-score from Feature Set A to Feature Set C. Significant gains were also observed in several other classes, including Science and Technology (57.14%), Social Sciences (40.00%), History (34.04%), and Literature (25.37%). Further analysis across the 61 divisions revealed that 28 divisions demonstrated continuous improvement, 20 showed limited improvement, 7 exhibited performance degradation, and 6 showed no significant change. These findings underscore the critical importance of feature augmentation in enhancing the performance of the KDC automatic classification model, while indicating that its effectiveness may vary depending on the interaction between classification divisions and feature attributes. To improve classification performance further, it is necessary to adopt not only feature enrichment but also more advanced strategies, including hierarchical classification structures, data refinement techniques, and sophisticated data augmentation methods. (presented on 15 August 2025 at "Pushing Boundaries to Next Generation Cataloguing: Experiments at the Edge of AI and Metadata" session)
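
As a rough sketch of the feature-augmentation setup (not the study's code), the following assumes Hugging Face Transformers, the public klue/bert-base checkpoint, invented record field names, and ten labels corresponding to the KDC main classes; the study's actual preprocessing, label set, and training loop may differ.

```python
# Illustrative sketch: progressively richer bibliographic fields are concatenated
# into the input text of a KLUE-BERT sequence classifier that predicts a KDC
# main class. Field names and num_labels=10 are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

FEATURE_SETS = {
    "A": ["title", "author"],
    "B": ["title", "author", "keywords"],
    "C": ["title", "author", "keywords", "summary", "table_of_contents"],
}

def build_input(record: dict, feature_set: str) -> str:
    # Join the selected fields into one text sequence for the classifier.
    return " [SEP] ".join(str(record.get(f, "")) for f in FEATURE_SETS[feature_set])

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=10  # ten KDC main classes (0XX-9XX)
)

# Hypothetical record purely for illustration.
record = {"title": "한국 현대미술의 흐름", "author": "홍길동", "keywords": "미술; 현대미술"}
inputs = tokenizer(build_input(record, "B"), truncation=True, return_tensors="pt")
logits = model(**inputs).logits  # untrained head; fine-tuning (e.g. with Trainer) is still required
```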
  • Metadata for the Margins: Cyberpunk Cataloging with OpenAI
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Andres, Amy; Louis, Liya
    Standardized taxonomies and controlled vocabularies form the foundation of universal bibliographic control. Through our practice-based case study, we demonstrate how artificial intelligence bridges the gap between global standards and local needs for specialized and complementary collections that do not fit neatly into conventional cataloging taxonomies. We present a real-world application of an open-access, efficient, cost-effective, and context-sensitive AI-driven solution for cataloging non-bibliographic resources, with a focus on customizable templates tailored to specific item types. The templates we present are seamlessly integrated with AI to import descriptive metadata, including images and videos, into a single record. In addition to streamlining metadata creation, the system supports multilingual input and output, enabling the generation of accurate metadata in different languages for culturally specific or regionally unique collections. We will show examples that highlight the flexibility and adaptability of the system in saving data in different formats, emphasizing its capacity to support locally contextualized subject headings and to streamline workflows. The use of web-based, AI-populated templates also simplifies staff training by enabling non-experts to create structured, reliable cataloging records, fostering capacity-building in institutions lacking traditional cataloging expertise. Finally, we will discuss the potential applications and technical requirements necessary for implementation, ensuring that libraries across diverse institutional landscapes can adopt this solution regardless of staffing or funding constraints. This presentation is ideal for librarians seeking innovative, cost-effective tools to enhance the discoverability of their diverse collections. We contribute a replicable, AI-driven, and human-centered metadata tool that promotes access, equity, and sustainability, ensuring that every collection, no matter how small, unconventional, or underfunded, has a place in the global metadata network, thereby strengthening the foundation of universal bibliographic control through reliable data and making libraries stronger. (presented on 15 August 2025 at "Pushing Boundaries to Next Generation Cataloguing: Experiments at the Edge of AI and Metadata" session)
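
The presenters' templates and tooling are not detailed in the abstract; the snippet below is only an illustrative sketch of the AI-populated-template idea, assuming the OpenAI Python client, an invented field list, an invented model name, and an example item description, with multilingual output requested in the prompt.

```python
# Sketch: a locally defined metadata template is sent to a chat model together
# with a free-text item description, and the model returns the filled template
# as JSON (optionally in another language). All names here are illustrative.
import json
from openai import OpenAI

TEMPLATE_FIELDS = ["title", "creator", "date", "format", "subjects", "description"]

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def fill_template(item_description: str, language: str = "English") -> dict:
    prompt = (
        f"Fill this metadata template as a JSON object with keys {TEMPLATE_FIELDS}. "
        f"Write all values in {language}. Use only information present in the "
        f"description; leave unknown fields empty.\n\nItem description:\n{item_description}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)

# Hypothetical non-bibliographic item; the output is a draft for human review.
record = fill_template("Hand-painted protest poster, acrylic on board, 1993.", "French")
print(json.dumps(record, ensure_ascii=False, indent=2))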
  • Results of AI Experimentation for Cataloging at the Library of Congress
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Saccucci, Caroline; Potter, Abigail
    This presentation details the Library of Congress’s (LOC) experimentation with Artificial Intelligence (AI) for cataloging, conducted under the Exploring Computational Description (ECD) project. The experimentation aims to enhance efficiency while maintaining high-quality records and supporting catalogers in their work. The experiments tested multiple AI models, including GPTs and open-source large language models such as MistralAI, using eBook data and prototyping human-in-the-loop (HITL) workflows. Results show promising performance for structured fields such as title and author (up to 99% accuracy) but significantly lower accuracy for complex fields like subject and genre (below 50%). Overall, model performance does not yet meet the 95% quality threshold required for full automation. These findings underscore the importance of HITL workflows and inform LOC’s next steps, including evaluating BIBFRAME versus MARC and addressing policy challenges such as copyright and training data bias to modernize cataloging practices. (presented on 15 August 2025 at "Pushing Boundaries to Next Generation Cataloguing: Experiments at the Edge of AI and Metadata" session)
  • Unexpected AI in the Collections: A Collection Development Case Study on Navigating a New Academic Information Ecosystem
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Chau, Selena
    Use of generative AI in scholarly publishing continues to be experimental, and library metadata standards are not being developed fast enough to deal with the variety of machine-generated content that is emerging. A new type of ebook was found in our library catalog, a human-mediated repackaging of AI-summarized content, without prior awareness or selection by staff. In response to this example, librarians across the University of California Libraries held discussions around patron needs, vendor relations needs, and collection development needs to address considerations for new collections processes and workflows. Librarians noted that they are frustrated by this repackaging of existing materials to be resold back to libraries, which requires more work and effort for staff to mediate or circumvent. In conversations with this global academic publisher and our books marketplace vendor, we advocate for better notification and identification tools for machine-generated content, to prepare for a future where permutations of machine-human content are likely to arise. Although AI-generated content is now mentioned in several public library collection policies, it has not yet been adopted in many academic library guidelines. As AI literacy remains a main focus for library services, it is important to ensure that our library collections practices are aligned with this need and that the authenticity of the information we offer is assured. This case study shares ways in which our academic library plans to work with the machine-generated content in our collections as the scholarly publishing landscape evolves. (presented on 15 August 2025 at "Curating in the Age of Generative AI: Global Perspectives on Collections, Ethics, Ownership, and Cultural Responsibility" session)
  • Reframing Bibliographic Control in the Age of Generative AI: Toward Inclusive Metadata Policies and Practices
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Al-Suqri, Mohammed; Al-Subhi, Nuha
    The integration of Artificial Intelligence (AI) and, more recently, generative AI into library and information science practices has catalyzed a paradigm shift in how bibliographic control, metadata creation, and legal deposit are conceptualized and operationalized. This study examines the multifaceted implications of generative AI on bibliographic practices with a specific focus on national bibliographies, metadata schemas, and policy frameworks. Drawing from the emerging experiences of national and academic libraries in the Arab Gulf region, particularly Oman, this study presents a hybrid analytical model that rethinks metadata policies in light of the increasing production of AI-generated content. The study highlights three key areas of transformation: (1) the reconceptualization of metadata to accommodate non-human authorship and novel forms of digital content, (2) the ethical and legal ambiguities surrounding copyright, authorship, and the inclusion of AI-generated works in national bibliographic repositories, and (3) the evolving role of information professionals whose skills and responsibilities must adapt to new AI-assisted workflows. Through case-based policy analysis and comparative examples, the study advocates for the development of inclusive national frameworks that recognize the cultural and informational value of generative AI outputs, while maintaining bibliographic integrity and legal compliance. Ultimately, this study contributes to the global discourse by proposing a roadmap for libraries to engage critically and constructively with AI technologies, ensuring their missions remain relevant, ethical, and forward-looking in an increasingly automated knowledge ecosystem. (presented on 15 August 2025 at "Curating in the Age of Generative AI: Global Perspectives on Collections, Ethics, Ownership, and Cultural Responsibility" session)
  • Generative Artificial Intelligence, Copyrights and Licenses from Latin America Approach
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Peña, Juan Miguel Palma
    At a global level, governments, institutions, and universities are debating how Generative Artificial Intelligence (GenAI) uses data, research outputs, and publications available in open access for its training. The specialized literature and various sources of study reveal challenges regarding the use of, permission for, and attribution to the authors of those informational products by GenAI, a situation that directly affects the copyright status of the products that such technology uses and produces. Based on the above, the aim of this study is to identify the copyright regulations and licenses for open data and research outputs that have been developed in Latin America to govern the information used and produced by GenAI. The methodology combines a bibliographic review with quantitative and qualitative methods. For the exploratory analysis, the sources consulted were the WIPO Lex database; the observatory of national AI policies; the Latin American Artificial Intelligence Index; the Flexibilities to Copyright in Latin America database; and Creative Commons chapters in Latin America. Search and retrieval of information were based on five defined categories. The general findings of the analysis of fifteen countries are presented in detail in the presentation. A general conclusion is that GenAI is an information activity that must be addressed through copyright regulation and library collaboration in order to avoid unfavorable outcomes. (presented on 15 August 2025 at "Curating in the Age of Generative AI: Global Perspectives on Collections, Ethics, Ownership, and Cultural Responsibility" session)
  • Cultural Rhythms at Risk: Libraries, Copyright, and AI in the Preservation of African Music
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Bouaamri, Asmaa; Otike, Frederick
    African music has long been central to the continent’s diverse cultures and everyday life. As an essential component of oral traditions, it functions as entertainment and a powerful medium of expression, storytelling, and communication (Stone, 2008). The cultural and social value attributed to African musical heritage is profound and often unparalleled. Yet, much of this heritage is at risk of disappearing due to insufficient preservation strategies. This study examines the significance of libraries, copyright law, and artificial intelligence (AI) in safeguarding African musical heritage as a form of indigenous knowledge and communication. It explores how libraries and archives can serve as custodians of musical heritage, in the face of legal and ethical limitations when digitizing and sharing culturally significant works. With the advent of Generative AI, many traditional forms of music are at risk, as they lack proper archival infrastructure and comprehensive preservation policies. While open-source digital repositories present viable solutions, their effectiveness remains limited by several factors, including the absence of specialized musical archives, inadequate institutional investment, and weak copyright frameworks. These vulnerabilities are especially alarming given the growing risks of AI-generated cultural appropriation and misuse. The study explores why African libraries have not been at the forefront of preserving musical heritage and examines how they can actively participate in protecting it from copyright violations and AI-driven exploitation. Drawing on case studies and policy analysis, the paper argues for a reimagined framework that balances intellectual property rights with equitable access and cultural sensitivity. It calls for collaborative strategies among cultural institutions, legal bodies, technologists, and local communities to ensure that AI enhances, rather than endangers, the living legacy of African musical traditions. (presented on 15 August 2025 at "Curating in the Age of Generative AI: Global Perspectives on Collections, Ethics, Ownership, and Cultural Responsibility" session)
  • From PDF to Prompt: Toward Universal Bibliographic Control Through Machine-Readable Cataloguing Rules
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Lowagie, Hannes
    This presentation introduces a pioneering approach to modernizing cataloguing practices through the transformation of cataloguing guidelines into a fully machine-readable format, aiming to contribute meaningfully to the long-term vision of Universal Bibliographic Control (UBC). In an era where AI technologies, particularly generative AI, are rapidly reshaping information management, this project reimagines how local cataloguing rules can be authored, maintained, and deployed. We present a real-world implementation from KBR, the national library of Belgium, which has progressed through several stages: from printed cataloguing manuals, to PDFs, to static HTML, and finally to a dynamic HTML interface that fetches and renders data from a structured machine-readable file, in this case a JSON file. This JSON serves as the backbone of our cataloguing guidelines. Uniquely, this same JSON is sent as prompt input to generative AI tools to ensure that the AI adheres to our institution's specific cataloguing rules, establishing a triangular relationship: the JSON acts as the core schema, which can be used to generate HTML for the human cataloguer and can also be sent as AI prompt input, ensuring consistency across both human-readable and machine-readable platforms. This proposal fits two subtopics: 1. AI and Metadata: our model directly enhances metadata creation and workflows by making cataloguing rules interoperable with AI tools, streamlining operations and minimizing human error. 2. Generative AI Outputs: our structured, prompt-ready rule set allows generative AI to produce cataloguing outputs that are both compliant and aligned with institutional standards, resolving bibliographic inconsistencies at scale. We conclude with a call to action: if every institution transforms its rules into machine-readable formats, we open the path to interoperability and to the unification of cataloguing practices. True Universal Bibliographic Control is no longer just about standardization — it is about convergence of logic and content. (presented on 15 August 2025 at "Metadata's New Frontiers: AI-Driven Systems and Standards" session)
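
The abstract does not reproduce KBR's JSON schema; the snippet below is a minimal sketch of the triangular idea, with an invented file name and rule structure, showing how one machine-readable rules file could feed both an HTML rendering for cataloguers and a prompt for a generative AI tool.

```python
# Sketch of the "one JSON file, two consumers" idea: the same machine-readable
# rules are (1) rendered to HTML for human cataloguers and (2) prepended to a
# generative-AI prompt so outputs follow local practice. The file name and rule
# structure are illustrative assumptions, not KBR's actual schema.
import json

rules = json.load(open("cataloguing_rules.json", encoding="utf-8"))
# e.g. {"fields": [{"tag": "245", "name": "Title statement",
#                   "instruction": "Transcribe the title from the title page."}]}

# Consumer 1: human-readable HTML rendering of the same rules.
html = "<ul>" + "".join(
    f"<li><b>{f['tag']} {f['name']}</b>: {f['instruction']}</li>" for f in rules["fields"]
) + "</ul>"

# Consumer 2: the same JSON embedded verbatim in a generative-AI prompt.
prompt = (
    "You are a cataloguing assistant. Follow these institutional rules exactly:\n"
    + json.dumps(rules, ensure_ascii=False, indent=2)
    + "\n\nDraft a record for the following item:\n..."
)
```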
  • SGCAT: Using AI to Facilitate Cataloguing
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Ling, Ng Hui; Goh, Jeremy
    Cataloguing is traditionally a manual act of metadata creation that takes substantial focused man-hours. Conventionally, libraries have relied on several approaches to increase the efficiency of metadata creation, such as sharing records via the Z39.50 protocol or via cooperatives like OCLC. However, cataloguing an item from scratch is still needed when it is new to library databases. Librarians must wade through stacks of new items individually to create valuable metadata according to comprehensive standards (e.g. MARC21, RDA) so that resources can be discovered by users. This is effortful and time-consuming, often taking anywhere from 30 minutes to hours depending on the item’s complexity. We developed a custom GPT prototype (“SGCAT”) to streamline bibliographic metadata creation and enhance efficiency in delivering library materials to patrons. Powered by OpenAI GPT, SGCAT is customised to follow specific cataloguing rules and local library practices, effectively serving as a smart cataloguing assistant. To ground it in fact, SGCAT pulls relevant bibliographic data from trusted sources such as NLB’s vendor-provided order information and the Google Books and Open Library APIs. Through rigorous prompt engineering, SGCAT is currently able to draft a MARC record from a single ISBN in seconds, potentially speeding up cataloguing by at least 2x in combination with human review. SGCAT can follow instructions, maintaining consistency with cataloguing syntax and standards. By automatically generating informative abstracts, SGCAT takes over the workload of time-consuming transcription and summarisation tasks. These assistive capabilities help transform the cataloguer’s role from creator to reviewer, freeing them to focus more on cerebral tasks like subject cataloguing and record refinement. SGCAT has the potential to be an assistant that elevates metadata quality and efficiency, speeding up time-to-shelf and improving discovery for patrons. The team intends to explore extending this prototype by enriching its knowledge base with more API sources, resolving , and enhancing SGCAT with multilanguage capability and multimodal input features to process cover images in the input prompts. (presented on 15 August 2025 at "Metadata's New Frontiers: AI-Driven Systems and Standards" session)
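
NLB's SGCAT is a custom GPT whose prompts and configuration are not published; the following is only a rough, generic sketch of the ISBN-to-draft-record idea, assuming the public Open Library API and the OpenAI Python client, with an illustrative prompt, model name, and example ISBN rather than NLB's actual setup.

```python
# Simplified sketch: pull bibliographic data for an ISBN from the public
# Open Library API, then ask a model to draft MARC 21 fields grounded only in
# that data. Illustrative only; the output is a draft needing human review.
import requests
from openai import OpenAI

def fetch_openlibrary(isbn: str) -> dict:
    resp = requests.get(f"https://openlibrary.org/isbn/{isbn}.json", timeout=10)
    resp.raise_for_status()
    return resp.json()

def draft_marc(isbn: str) -> str:
    data = fetch_openlibrary(isbn)
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    prompt = (
        "Draft MARC 21 fields (245, 264, 300, 520) for this item using ONLY the "
        f"data below. If a value is missing, omit the field.\n\n{data}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_marc("9780262046305"))  # example ISBN; result is a draft for review
```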
  • AI-Powered Bibliographic Control: Automating Cataloging, Standardization, and Data Capture
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Diaz, Claudio Daniel Henriquez; Vergara, María Paz Rioseco
    Artificial Intelligence (AI) optimizes bibliographic control by automating cataloging workflows, capturing data via mobile OCR, and normalizing records to established standards. Universal Bibliographic Control demands solutions that address the exponential growth of documentary output and the requirement for immediate access. AI accelerates record creation and review while maintaining consistency, even in resource-constrained environments. A human-in-the-loop system was implemented: librarians validate and correct AI-generated entries, reinforcing accountability and accuracy. Algorithmic transparency is ensured by documenting the AI’s criteria and decisions, creating an audit trail that facilitates suggestion evaluation and bias detection. Three use cases demonstrate the impact:
    1. Automated cataloging of theses and monographs: AI models segment PDF documents, extract essential fields, and generate MARC 21/RDA records. Average accuracy exceeded 93%, and cataloging time was reduced from 30 minutes to under 5 minutes per item.
    2. Mobile application for image-assisted cataloging: Android devices capture cover pages, and AI employs OCR and computer vision to propose normalized metadata. Provisional records are integrated into the bibliographic system via REST API.
    3. Correction and normalization of historical records: models trained on RDA rules and authority lists detect inconsistencies and propose automatic or semi-automatic corrections. Over 40,000 records were updated, with a 58% improvement in access-point consistency and a reduction in authority collisions.
    Results include precision metrics, labor-hour savings, and seamless REST API integration with library management systems. This combination of advanced technology, governance, and measurable outcomes charts a roadmap for libraries to navigate new horizons of automation and governance effectively and securely. (presented on 15 August 2025 at "Metadata's New Frontiers: AI-Driven Systems and Standards" session)
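
The implementation behind the first use case is not shown in the abstract; as a hedged sketch of the general pipeline (extract text, have a model propose core fields, log the proposal for human validation), the following assumes pypdf, the OpenAI Python client, an invented prompt, and an invented audit-log format rather than the presenters' actual system.

```python
# Sketch of a thesis-cataloging step: extract the first page of a PDF, have a
# model propose core fields as JSON, and log the proposal so a librarian can
# validate or correct it (human in the loop, audit trail). Prompt, model name,
# and audit structure are illustrative assumptions.
import json, datetime
from pypdf import PdfReader
from openai import OpenAI

def propose_fields(pdf_path: str) -> dict:
    first_page = PdfReader(pdf_path).pages[0].extract_text() or ""
    client = OpenAI()
    prompt = (
        "From this thesis title page, return JSON with keys title, author, year. "
        "Use null for anything not present.\n\n" + first_page[:4000]
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    proposal = json.loads(response.choices[0].message.content)
    audit_entry = {               # audit trail: what was suggested, by what, when
        "source_pdf": pdf_path,
        "model": "gpt-4o-mini",
        "suggested": proposal,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "validated_by": None,     # filled in by the reviewing librarian
    }
    with open("ai_catalog_audit.jsonl", "a", encoding="utf-8") as log:
        log.write(json.dumps(audit_entry, ensure_ascii=False) + "\n")
    return proposal
```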
  • Linguistic Barriers in Academic Research: Can AI Create a More Inclusive Future?
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Opdahl, Frode; Marmion, Aislinn
    Language plays a critical role in shaping access to knowledge, yet linguistic barriers continue to limit equity in academic research. This presentation explores how artificial intelligence can be harnessed to overcome these challenges and foster more inclusive, multilingual access to scholarly literature. Drawing on six years of experience developing AI tools designed to support language understanding, we will examine how AI technologies—initially created to help users navigate the gap between everyday and academic language—can now address broader systemic issues related to language in research and information seeking. While English remains the dominant language of scholarly communication, this predominance often marginalizes non-English speakers and restricts global collaboration. At the same time, native English speakers may also face barriers in understanding academic discourse. Through real-world examples and insights from our work, we will demonstrate how AI-powered tools can support users in discovering and interpreting research across languages, promote equitable participation in global academic conversations, and improve the visibility of research produced in underrepresented languages. By the end of this session, participants will be able to: - Recognize the impact of linguistic barriers on equity in academic research. - Discover how AI technologies can facilitate access to scholarly literature across different languages. - Consider practical ways to leverage AI tools to promote language inclusivity in academic libraries. This talk offers a timely look at the intersection of language, technology, and inclusion—highlighting how advances in AI can help academic institutions better support multilingual communities and ensure more equitable access to knowledge for all. (presented on 14 August 2025 at "AI in the Academic Field: Supporting Research and Learning" session)
  • A Comparative Analysis of AI Tools for Research Support: ChatGPT, Google Gemini, and Microsoft Copilot
    (International Federation of Library Associations and Institutions (IFLA), 2025-10) Yang, Sharon Q.; Whitfield, Sharon
    This presentation is based on a study that compared the public ChatGPT 3.5, the commercial ChatGPT 4, Google Gemini, and Microsoft Copilot in answering reference questions from the Rider University Library and Learning Commons in 2024. A total of 28 user-initiated questions were extracted from the chat reference transaction log during the 2024 academic year and input into the four AI applications as prompts. The responses from each AI tool were rated on a scale of 1 to 10 based on relevance, accuracy, friendliness, and the quality of information literacy instruction. The purpose of the study was to determine which AI application best supports research and information literacy. ANOVA (Analysis of Variance), a statistical test used to analyze differences between the means of more than two groups, was employed in the analysis. The results indicate that Google Gemini provides stronger support for research and information literacy education. In contrast, Microsoft Copilot tends to deliver brief and cursory responses. All of the AI tools attempt to manage hallucinations by avoiding citations. However, both ChatGPT and Google Gemini outperformed librarians in educating users about Boolean searches and recommending resources. In sum, AI tools have made considerable progress in their responses when this study is compared with findings from 2023. (presented on 14 August 2025 at "AI in the Academic Field: Supporting Research and Learning" session)
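
The study's ratings are not reproduced in the abstract; purely as a sketch of the kind of one-way ANOVA described, the snippet below uses SciPy's f_oneway with made-up placeholder ratings that are not the study's data.

```python
# One-way ANOVA across the four tools' ratings, as described above, using
# scipy.stats.f_oneway. The numbers here are fabricated placeholders solely to
# show the mechanics; they are NOT the study's data.
from scipy.stats import f_oneway

chatgpt35 = [6, 7, 5, 8, 6]
chatgpt4  = [8, 7, 9, 8, 7]
gemini    = [9, 8, 9, 7, 8]
copilot   = [5, 6, 5, 6, 4]

f_stat, p_value = f_oneway(chatgpt35, chatgpt4, gemini, copilot)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests the group means differ
```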
  • Artificial Intelligence and the Evolution of Learning Resources Collections at Prince Mohammed Bin Fahd University
    (International Federation of Library Associations and Institutions (IFLA), 2025-10-06) Mohamed, Abdulla Mohamed
    As technological advancements accelerate, their impact on education—particularly regarding learning resources—becomes more pronounced. Artificial intelligence (AI) has become a key driver of change across various industries, including academic libraries and learning support centers. At Prince Mohammed Bin Fahd University (PMU), the Learning Resources Center (LRC) plays a crucial role in supporting students and faculty by offering a diverse collection of educational materials, fostering literacy within the academic community. The center provides personalized tutoring for different types of assignments and discipline-specific coursework, empowering students with essential skills in communication, presentation, research, and other competencies necessary for success both within and beyond the university environment. Additionally, it facilitates access to reliable online primary and secondary sources, research materials, writing guides, and digital tools that support students’ research and writing efforts. This study aims to investigate how AI can influence and enhance the management, accessibility, and customization of the LRC’s collections, ensuring they stay relevant and effective in an increasingly digital world. The findings will guide PMU in modernizing its Learning Resources Center, making collections more user-friendly, tailored, and efficient. Furthermore, the research will contribute to the broader conversation about AI’s role in academic institutions, offering insights that could assist other universities seeking to innovate their library and resource services. Finally, this study seeks to integrate emerging AI technologies with the practical demands of managing academic resources at PMU. By analyzing current developments and proposing strategic approaches, it aims to position the university’s Learning Resources Center as a pioneer in innovative, AI-driven collection management. (presented on 14 August 2025 at "AI in the Academic Field: Supporting Research and Learning" session)
  • Presentation - It Takes a Village: How Library Contributions to Open Infrastructure Shape Global Open Science
    (International Federation of Library Associations and Institutions (IFLA), 2025-10-15) Maistrovskaya, Mariya
    In the pursuit of a truly equitable and sustainable Open Science ecosystem, libraries are not just participants - they are foundational builders of the infrastructure that enables global knowledge sharing. This presentation explores how libraries, through community-governed initiatives and cross-sector partnerships, are actively shaping open infrastructure to reflect local priorities while contributing to global goals. Drawing on real-world case studies, we will highlight how library-led contributions to open platforms and collaborative support structures are helping grow a sustainable, scalable, and interoperable scholarly landscape. By recognizing and amplifying these contributions, we can better support a future where Open Science is not only collaborative and inclusive, but truly powered by the global community it serves.
  • It Takes a Village: How Library Contributions to Open Infrastructure Shape Global Open Science
    (International Federation of Library Associations and Institutions (IFLA), 2025) Maistrovskaya, Mariya
    In the pursuit of a truly equitable and sustainable Open Science ecosystem, libraries are not just participants - they are foundational builders of the infrastructure that enables global knowledge sharing. This presentation explores how libraries, through community-governed initiatives and cross-sector partnerships, are actively shaping open infrastructure to reflect local priorities while contributing to global goals. Drawing on real-world case studies, we will highlight how library-led contributions to open platforms and collaborative support structures are helping grow a sustainable, scalable, and interoperative scholarly landscape. By recognizing and amplifying these contributions, we can better support a future where Open Science is not only collaborative and inclusive, but truly powered by the global community it serves.