Reference
- Montesi, M.; Álvarez-Bornstein, B.; Bautista-Puig, N.; Blázquez-Ochando, M.; Sánchez-Díez, A. (2025). AILIS 1.0: A new framework to measure AI literacy in library and information science (LIS). The Journal of Academic Librarianship, 51(5), 103118. https://doi.org/10.1016/j.acalib.2025.103118
Comment
The increasing presence of artificial intelligence in higher education poses significant challenges, particularly in a sector that European legislation classifies as "high-risk" because of the impact educational decisions can have on people's lives. AI applications in this field are broad and promising, with particular relevance in supporting research and innovation in teaching and assessment methodologies. However, AI adoption also entails notable risks: plagiarism, new forms of academic dishonesty, anxiety, the introduction of biases, inequalities, misinformation, and reduced human interaction.
In the specific field of Library and Information Science, available educational guidelines prioritize AI's capabilities in data analysis and processing, bibliometric and scientometric research, and its role in scientific publishing and peer review. However, integrating AI into the classroom requires students not only to be familiar with practical applications but also to understand the ethical implications of its use and the potentially disruptive impact of AI on professional environments.
In academic libraries, AI is increasingly used for tasks such as cataloging, information literacy, marketing, and infrastructure management. However, adoption remains limited due to a lack of strategic planning, insufficient training, algorithmic illiteracy among librarians, and ethical concerns such as privacy. Although librarians show potential to enhance their competencies, the full benefits of AI can only be realized through well-structured training programs that equip them with the necessary skills to innovate and integrate AI into library routines.
Purpose of the Research
This research presents a questionnaire specifically developed for higher education in Library and Information Science with the purpose of measuring students’ AI literacy and guiding the identification of areas where additional training may be needed. The questionnaire, named AILIS 1.0 (Artificial Intelligence Literacy in Library and Information Science), was developed based on prior instruments designed for higher education and adapted to the field of Library and Information Science through a review of relevant literature and expert judgment.
The questionnaire was administered to a sample of 100 students from various programs at the Faculty of Documentation Sciences of the Complutense University of Madrid and to 63 academic librarians from the same institution. The results aim to provide insights into the core themes of AI education and identify gaps in knowledge and competencies among both students and librarians. Additionally, in the case of librarians, the questionnaire seeks to provide evidence regarding their preparedness as instructors and trainers in AI.
AI Literacy: Conceptual Framework
According to Long and Magerko (2020), AI literacy encompasses "a set of competencies that enable individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace." Various authors agree that AI literacy is closely linked to other forms of literacy, such as information, digital, and data literacy.
In library contexts, information literacy is understood to enable critical, reflective, and ethical engagement with AI systems, making it a fundamental component of AI literacy. Integrating "algorithmic literacy"—the capacity to understand, interpret, create, and critically evaluate algorithms and their social impacts—into information literacy can provide the awareness, understanding, and skills necessary for meaningful use of AI.
AI literacy is understood as a complex and multidimensional construct requiring students to comprehend both the human and technological dimensions of AI in order to achieve "safe, ethical, and meaningful use of AI in education." Despite various theoretical approaches, there remains no clear consensus on its definition, and research in higher education has primarily focused on evaluating the impact of AI training programs, with general population AI literacy scales being more common than those tailored for higher education students.
Methodology: Instrument Development
The development of AILIS 1.0 followed a rigorous validation process based on multiple sources of evidence.
- Identification of Competencies. The questionnaire was developed on the basis of a literature review and previous instruments used with university students. An exploratory review conducted in Scopus in April 2024 yielded 17 selected studies, nine of which had developed specific questionnaires to measure AI literacy. Among the scales reviewed, two (Hornberger et al., 2023; Lee & Park, 2024) were considered to present the most robust evidence in the context of higher education.
- Selection of Dimensions and Competencies. A group of seven experts in Library and Information Science (six professors and the director of a specialized library) initially identified five dimensions: What is AI, Functioning, Use, Ethics, and Evaluation. Through online forms, the experts rated the importance of the 71 competencies collected under these dimensions from the 17 selected studies for Library and Information Science training, on a scale from 1 (not important) to 5 (very important). As a result of these consultations, 38 competencies were selected based on the Content Validity Index and the Coefficient of Variation of each item (an illustrative screening computation appears after this list).
- Disciplinary adaptation. The disciplinary adaptation process included drafting specific questions for each competency, reviewing the disciplinary literature on AI, and incorporating data-related competencies—which account for 22.90% of the final instrument—distinguishing AILIS 1.0 from previous instruments. Additionally, competencies addressing the environmental impact of AI and its effects on individuals with functional diversity were included, in alignment with UNESCO’s recommendations on inclusion, equity, and diversity.
- Validation with students. The preliminary set of 39 competencies and their corresponding questions was evaluated by seven students via an online questionnaire. Students completed the test and provided modification suggestions, expressed doubts, and assessed the relevance of each item. The overall student feedback was used to filter out four competencies, resulting in a final questionnaire of 35 questions distributed across five dimensions.
- Scoring System. The scoring system combines multiple-choice questions with a single correct answer and three self-assessment items rated on a 1 to 5 scale. All self-assessment items belong to the Use dimension, on the understanding that there may be multiple valid uses of AI. The final weighted score is calculated by assigning each dimension a weight proportional to the number of items it contains (a formalization of this rule is sketched after this list).
- Reliability. To assess the questionnaire's reliability, two administrations were conducted with a group of 17 students one week apart. The unweighted Cohen's Kappa coefficient was applied to dichotomous responses, and the weighted Kappa coefficient to ordinal-scale questions. Additionally, the Intraclass Correlation Coefficient was calculated for the overall score and for individual scores per dimension (these computations are sketched below).
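As an illustration of the expert-screening step, the sketch below computes an item-level Content Validity Index (CVI) and Coefficient of Variation (CV) from a matrix of importance ratings. The endorsement rule (a rating of 4 or 5 counts as "important") and the cut-offs are assumptions for illustration; the paper does not report its exact thresholds.

```python
import numpy as np

def screen_items(ratings, cvi_min=0.78, cv_max=0.30):
    """Flag competencies to retain based on expert importance ratings.

    ratings: (n_experts, n_items) array of scores on a 1-5 scale.
    cvi_min, cv_max: illustrative cut-offs (0.78 is Lynn's common CVI
    criterion for panels of 6-10 experts; the paper's values are not given).
    """
    ratings = np.asarray(ratings, dtype=float)
    # Item-level CVI: share of experts rating the item 4 or 5 ("important").
    cvi = (ratings >= 4).mean(axis=0)
    # Coefficient of variation: rating dispersion relative to the mean.
    cv = ratings.std(axis=0, ddof=1) / ratings.mean(axis=0)
    keep = (cvi >= cvi_min) & (cv <= cv_max)
    return cvi, cv, keep

# Toy example: 7 experts rating 3 competencies.
ratings = [[5, 4, 2], [4, 5, 3], [5, 4, 1], [4, 3, 2],
           [5, 5, 3], [4, 4, 2], [5, 4, 1]]
cvi, cv, keep = screen_items(ratings)
print(cvi.round(2), cv.round(2), keep)  # third item fails both criteria
```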
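The weighted overall score also admits a straightforward formalization; the notation below is mine, not the paper's, and assumes each dimension score is first normalized to a common range:

```latex
S = \sum_{d=1}^{5} w_d \, s_d ,
\qquad
w_d = \frac{n_d}{\sum_{k=1}^{5} n_k}
```

where $n_d$ is the number of items in dimension $d$ and $s_d$ is the respondent's normalized score in that dimension (the proportion of correct answers for multiple-choice items; the rescaled 1-5 ratings for the self-assessed Use items).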
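Finally, the reliability statistics can be reproduced with standard tooling. The sketch below is a minimal illustration, not the authors' code: the data are invented, the library choices (scikit-learn, pingouin) are mine, and the quadratic weighting is an assumption, since the paper specifies only "weighted" Kappa.

```python
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Hypothetical test-retest answers from 17 students, one week apart.
# Dichotomous items (1 = correct, 0 = incorrect): unweighted kappa.
t1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
t2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0]
print("unweighted kappa:", cohen_kappa_score(t1, t2))

# Ordinal 1-5 self-assessment items: weighted kappa.
s1 = [4, 3, 5, 2, 4, 4, 3, 5, 2, 3, 4, 5, 3, 2, 4, 3, 5]
s2 = [4, 4, 5, 2, 3, 4, 3, 4, 2, 3, 5, 5, 3, 3, 4, 3, 4]
print("weighted kappa:", cohen_kappa_score(s1, s2, weights="quadratic"))

# Test-retest ICC on overall scores (long format: one row per
# student x session).
scores = pd.DataFrame({
    "student": list(range(17)) * 2,
    "session": ["t1"] * 17 + ["t2"] * 17,
    "score": [62, 71, 55, 80, 67, 59, 74, 69, 63, 77, 58, 66, 72,
              61, 70, 64, 75,
              65, 70, 57, 78, 66, 61, 72, 71, 60, 79, 60, 68, 70,
              63, 71, 62, 77],
})
icc = pg.intraclass_corr(data=scores, targets="student",
                         raters="session", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```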
Administration and Analysis
The questionnaire was administered between November 2024 and early March 2025. Student responses were collected during class hours, while academic librarians were invited via email. In both cases, the survey was completed online and allowed participants to review their scores and obtain a copy of their responses. The final sample included 100 students (63 undergraduate and 37 master’s) and 63 academic librarians from the Complutense University of Madrid, representing approximately 28.6% of undergraduate and master’s students in Library and Information Science at UCM and 18.9% of active librarians at the same institution.
In addition to questionnaire administration, three focus groups were conducted with 24 of the participating students, providing additional evidence on the validity of AILIS 1.0 and suggesting potential improvements.
Results
- Reliability. The average Cohen’s Kappa value for the entire instrument was 0.35 (95% confidence interval: 0.26–0.43), indicating low reliability. However, ICC values for each dimension and the overall score suggest a moderate level of agreement. The dimensions with the lowest reliability were Use and Ethics, which may reflect participants’ uncertainty or lack of clarity regarding best practices in these aspects of AI.
- Overall results. The highest scores were obtained in the Ethics and Evaluation dimensions. Correlation analysis revealed that scores for Functioning are positively correlated with all other dimensions except self-assessed Use, while Ethics correlates significantly and positively with all dimensions except self-assessed Use, with which it shows a weak but significant negative correlation (an illustrative version of this analysis appears after this list). This suggests that knowledge of AI's technical functioning is an important predictor of literacy in other areas, whereas self-perceived competence does not necessarily align with actual performance.
- Comparison between groups. Librarians significantly outperformed students in the Use and Ethics dimensions, while both librarians and advanced students (fourth-year and master's) achieved higher scores than first-year students in Functioning and Evaluation. Regarding self-assessed Use, first-year students exhibited significantly higher self-efficacy despite obtaining lower performance-test scores, indicating a tendency to overestimate their AI competencies—a finding corroborated by the focus groups.
- Literacy levels. Based on the weighted overall score, 49.7% of participants were classified as having intermediate literacy, 32.5% high literacy, and 17.8% low literacy. Librarians showed the highest proportion of high literacy (49%), followed by advanced students (25%) and first-year students (21%).
- Gender differences. Contrary to expectations based on prior research on attitudes toward technology, no significant differences in performance scores were found between female and male participants. However, greater variability in scores was observed within the female group, indicating a higher diversity in levels of AI literacy among women.
- Prior training. Regardless of role, the majority of participants had not received prior training in AI. Participants who had received some form of training—whether in the classroom, through formal courses, or via self-directed learning—scored significantly higher on the overall score and on the Functioning and self-assessed Use dimensions, suggesting that training has a greater impact on self-efficacy than on performance-based measures of AI literacy.
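For concreteness, the between-dimension correlation analysis reported under "Overall results" could be run along these lines; everything here is illustrative (invented data, made-up column names, and Spearman rank correlation as an assumed choice, since the paper does not name the coefficient):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical dimension scores for 8 participants ("use_self" stands
# for the self-assessed Use items; all values are invented).
df = pd.DataFrame({
    "functioning": [7, 4, 8, 5, 6, 9, 3, 7],
    "ethics":      [8, 5, 9, 6, 6, 9, 4, 8],
    "evaluation":  [6, 4, 8, 5, 5, 8, 3, 6],
    "use_self":    [3, 5, 2, 4, 4, 2, 5, 3],
})
# spearmanr on a 2-D input returns pairwise rank-correlation and
# p-value matrices, one row/column per dimension.
rho, p = spearmanr(df)
print(pd.DataFrame(rho, index=df.columns, columns=df.columns).round(2))
print(pd.DataFrame(p, index=df.columns, columns=df.columns).round(3))
```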
Emergent Themes from Focus Groups
The analysis of focus groups revealed four main themes:
- Human-machine communication. Students are aware that communicating with AI requires the ability to construct meaningful prompts.
- Non-problematic aspects of AI use. Curiosity to explore and better understand AI emerges in some students, reflecting a positive and optimistic perspective. This attitude typically arises in connection with successful experiences using AI, particularly for tasks related to text processing and code generation.
- Problematic aspects of AI use and negative reactions. The growing presence of AI in students’ academic and professional lives brings with it a series of problematic aspects and associated emotional responses. Uncertainty, alongside frustration, confusion, and guilt, characterizes this theme. Coexistence with humanized or human-like agents challenges students’ personal and professional identities, particularly given the constant need to adapt to rapidly evolving technologies.
- Generational inequalities. Students recognize that AI impacts all generations but understand that the burden is unequally distributed, with younger individuals bearing the greatest cost. They fear intellectual and professional impoverishment and perceive that older generations are not doing enough to address these challenges.
Discussion
Core areas of AI literacy. Both the analysis of the questionnaire results and the focus groups indicate that Functioning, Ethics, and Evaluation are core areas of AI literacy in Library and Information Science. Functioning has emerged in prior research as a dimension in which higher education students perceive themselves as least prepared. In this study, it gained prominence through the design of a questionnaire specifically targeted at a Library and Information Science audience, with particular emphasis on data-related competencies, distinguishing AILIS 1.0 from other frameworks designed to measure AI literacy in this field.
The results revealed that higher scores in Functioning were associated with higher scores in all other dimensions, particularly Ethics and Evaluation. This suggests that a genuine understanding of how AI functions may be a strong predictor of successful AI use, contrary to the traditional belief that self-efficacy in technology use is the best predictor. The ability to explain how AI works is an essential skill for human-AI co-creation activities.
Differences between groups. The comparison between participant groups reveals the impact of training on AI literacy. Librarians were found to be better prepared across all dimensions than first-year students and demonstrated a more realistic perception of their actual Use competencies. Since nearly half of the librarians achieved high scores on the test, their level of literacy is substantially higher than that of the students. This finding should encourage library professionals to assume a more active role in delivering AI training to higher education students, particularly during their early years.
Overestimation of competencies. Focus groups confirmed a tendency to overestimate actual AI literacy, particularly among less advanced students, who reported the highest self-assessed competencies but achieved the lowest scores on performance-based measures. This may be due to the user-friendly, intuitive interfaces of AI tools, whose apparent ease of use can create a false sense of mastery.
Conclusions
AILIS 1.0 emerges as a promising tool not only for assessing current levels of AI literacy but also for identifying blind spots in curricula and guiding the design of targeted educational interventions. The study also calls for institutional strategies that recognize and formally support training in AI literacy. Librarians, as demonstrated by their higher performance and more balanced perception, can act as mentors, bridging the gap between academic training and professional practice.
Overall, AI literacy in Library and Information Science emerges as a construct that should not be conceived merely as an extension of other forms of literacy, lest its impact on higher education and on the professions in this field be underestimated. It must also go beyond ethical awareness and traditional information literacy practices, and be complemented by stronger, more data-related technical competencies.