
Competency N

Evaluate programs and services using measurable criteria.

Introduction

Effective evaluation depends on frequent assessment of programs and services for their efficacy. Assessment in libraries involves collaborative evaluation of learning outcomes to enhance accountability and improve programs, focusing on shared goals with faculty and community partners while measuring student success in areas like information literacy and critical thinking (Hernon et al., 2015). Criteria for assessment can range from broad to specific and from standardized to self-determined. This flexibility allows libraries to tailor their evaluation processes to the unique goals, values, and needs of their communities and stakeholders. A thorough evaluation process can help libraries identify areas for improvement, set strategic goals, and reveal opportunities for growth and development. By analyzing the data collected through assessment, libraries can make informed decisions about resource allocation and program enhancements. This continuous cycle of evaluation allows libraries to adapt to changing needs and ensure they are providing the most effective services to their patrons.

It is important to continually assess programs and services throughout their life cycles. Summative assessment evaluates the overall effectiveness of a program at the end of a session, while formative assessment occurs during the program to allow real-time adjustments (Saunders & Wong, 2020). An example of summative assessment in library programming might involve collecting end-of-course feedback from participants that measures their confidence and skill levels. With formative assessment during the program's sessions, instructors would ask participants how they feel about specific tasks; if participants express difficulty with the material, instructors can adjust the curriculum to provide additional support in that area. Both types of assessment gather the insights needed for measurable evaluation, ensuring continuous improvement and alignment with a program or service's intended goals.

Engaging patrons in the evaluation process can also foster a collaborative environment that enhances the relevance and effectiveness of library services. User-centered information organizations prioritize ongoing evaluation of their resources and services, actively involving patrons in the co-creation of services (McDonald, 2022). This approach is essential for the sustainability and competency of modern information professionals. By involving users in the evaluation process, libraries can gain valuable insights into their needs and preferences, allowing them to tailor services accordingly. This participatory approach can foster a sense of ownership among users and also ensure that the services provided are relevant to the information communities being served.

Measurability

To ensure that library services and programs are effective and responsive to user needs, it is essential for librarians to adopt a measurable approach to evaluation. Performance measurement and evaluation are used to make comparative assessments against standards and targets, as well as to make judgments about past performance in order to set goals for the future (Appleton, 2017). Measurability allows librarians to collect data on the impact of their services, leading to evidence-based decision-making and continuous improvement. By setting clear objectives and measuring outcomes, librarians can not only improve and adjust their services but also demonstrate their value to stakeholders.

One effective way to enhance measurability is to establish and apply clear, standardized guidelines. One broad, overarching set of standardized guidelines is the International Organization for Standardization’s (ISO) library performance indicators. ISO’s (2023) performance indicators are meant to span all types and functions of library services and programs. Guidelines can also be domain-specific, such as the Reference and User Services Association (RUSA) standards. RUSA (2023) provides detailed guidelines for the behavioral performance of reference work, which describe criteria for measuring inclusion, approachability, engagement, searching, evaluation, and closure in reference interactions. These comprehensive guidelines are meant to ensure that reference librarians provide the best possible service to patrons.

Measurable Criteria

In order to base assessments on truly measurable criteria, it is important that there be concrete and reliable evidence to support evaluation. Measurable criteria in research are established by operationalizing the research question, which entails defining each variable based on the methods required for measurement (Luo et al., 2017). This includes differentiating between simple variables, which are directly observable attributes such as age group, and composite variables, which are more complex constructs like community engagement that consist of several indicators. In this context, evidence-based library and information practice (EBLIP) encourages librarians to integrate user-reported, librarian-observed, and research-derived evidence, aiming to improve professional judgment and decision-making (Luo et al., 2017). Utilizing diverse forms of evidence allows librarians to make informed judgments about the effectiveness of their programs and services. For instance, user-reported evidence, such as feedback from surveys and focus groups, provides direct insights into user experiences and satisfaction. Librarian-observed evidence, such as usage statistics and service interactions, offers a quantitative perspective on service performance. Research-derived evidence, drawn from academic studies and best practices, provides a broader context for understanding trends and challenges in the field.
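
To make the distinction concrete, the short sketch below shows one way a composite variable such as community engagement might be operationalized from simple, directly observable indicators. It is a minimal illustration in Python; the indicator names, normalization ranges, and weights are my own assumptions, not values drawn from Luo et al. (2017).

    # Hypothetical sketch: operationalizing a composite variable.
    # Indicator names, normalization ranges, and weights are illustrative
    # assumptions, not values taken from Luo et al. (2017).
    def community_engagement_score(attendance_per_1000: float,
                                   survey_satisfaction: float,
                                   volunteer_hours_per_1000: float) -> float:
        """Combine simple, observable indicators into one 0-100 composite score."""
        # Normalize each simple variable to a 0-1 range using assumed maximums.
        attendance = min(attendance_per_1000 / 50.0, 1.0)
        satisfaction = (survey_satisfaction - 1.0) / 4.0   # 1-5 survey scale
        volunteering = min(volunteer_hours_per_1000 / 20.0, 1.0)
        # Weight and scale the indicators; the weights are also assumptions.
        return 100.0 * (0.4 * attendance + 0.4 * satisfaction + 0.2 * volunteering)

    print(community_engagement_score(32.0, 4.2, 8.5))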

To further strengthen the evaluation process, it is crucial to leverage comprehensive data sources that provide a holistic view of library performance. The Public Library Association's (PLA, n.d.) Benchmark tool, which covers roughly 9,000 public libraries, combines U.S. census data, PLA survey data, historic Public Library Data Service (PLDS) data, and Institute of Museum and Library Services Public Libraries Survey data to create benchmarks against which libraries can compare metrics and assess performance. By providing a comprehensive overview of library services and usage, data-driven tools like PLA's benchmarks help libraries measure their impact more accurately and make informed decisions based on quantitative evidence.
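
As a simple illustration of how such benchmarks support comparison, the sketch below computes a percentile rank for one metric against a peer cohort. It assumes a hypothetical CSV export of peer-library data; the file and column names are illustrative and do not reflect the actual format of PLA's Benchmark tool.

    import pandas as pd

    # Hypothetical sketch: comparing one library's metric to a peer cohort.
    # "peer_libraries.csv" and its columns are illustrative assumptions,
    # not an actual PLA Benchmark export.
    peers = pd.read_csv("peer_libraries.csv")   # one row per peer library
    our_visits_per_capita = 4.8                 # this library's own figure

    # Percentile rank: the share of peers at or below our value.
    percentile = (peers["visits_per_capita"] <= our_visits_per_capita).mean() * 100
    print(f"Visits per capita fall at the {percentile:.0f}th percentile of the cohort")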

Competency Development

My primary introduction to evaluation and assessment in libraries came in my previous role as a research services assistant at an academic library. Many of my responsibilities, such as compiling monthly desk metrics and instruction quiz scores, centered on processing and reporting assessment data. That foundational experience has carried into my current role at a public library, where I help compile branch metrics for the branch manager to include in quarterly reports. I have also recently been asked to serve as a team lead on the Quantitative Assessment Committee, with the goal of making the library system more data-driven at all levels.

As for the MLIS coursework that has prepared me for competency in evaluation, most if not all of the courses in the Data Analytics and Data-Driven Decision Making pathway of the Advanced Certificate in Strategic Management of Digital Assets and Services involved assessment. This is especially true of both sections of INFO 246: Data Mining and Big Data Analytics each taught me a wide range of tools, methods, and skills to apply in quantitative evaluation and assessment.

Evidence

In my data mining exercises focused on building a neural network, I aimed to analyze public library service data from the 2021 Institute of Museum and Library Services Public Libraries Survey, specifically for Ohio. I began by cleaning the dataset to include only relevant real and integer values, opting for per-thousand metrics to ensure comparability across libraries of different sizes. I then randomly split the data into two sample sets to train and test my neural network, using Library Journal’s 2021 Star ratings as the classification benchmark. After defining the attributes and setting up the neural network with two hidden layers, I trained the model and tested it against the remaining data. The results indicated that while the model labeled 98% of libraries as unranked, it achieved 64% accuracy overall, with notable misclassifications of well-known libraries. This exercise not only deepened my understanding of data modeling and machine learning but also highlighted areas for improvement, such as incorporating historical ranking data to enhance the model's predictive capabilities.
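
The sketch below outlines the kind of pipeline described above using scikit-learn in Python. It is not the code from the exercise itself; the file name, feature columns, and layer sizes are illustrative assumptions standing in for the actual per-thousand metrics and model settings.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    # Hypothetical sketch of the pipeline described above; the file and
    # column names are illustrative, not the actual exercise data.
    df = pd.read_csv("ohio_pls_2021_cleaned.csv")

    # Per-thousand metrics as features; Library Journal Star rating as the label.
    features = ["visits_per_1000", "circulation_per_1000",
                "program_attendance_per_1000", "pc_sessions_per_1000"]
    X = df[features]
    y = df["lj_star_rating"]   # e.g., "unranked", "3-star", "4-star", "5-star"

    # Split into training and test sets, then standardize the features.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)
    scaler = StandardScaler().fit(X_train)

    # A small feed-forward network with two hidden layers, as described above.
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=42)
    model.fit(scaler.transform(X_train), y_train)

    predictions = model.predict(scaler.transform(X_test))
    print(f"Test accuracy: {accuracy_score(y_test, predictions):.2f}")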

This data mining exercise serves as evidence for Competency N, as it involved the systematic analysis of public library survey data to construct and assess a neural network model for its ability to predict Library Journal Star Ratings. By selecting relevant metrics and employing a structured approach to training and testing the model, I demonstrated the ability to evaluate the effectiveness of a data-driven solution in classifying libraries based on their performance ratings. The results provided measurable insights into the model's accuracy and areas for improvement, showcasing how data analysis can inform decision-making and enhance the evaluation of library services. This process exemplified the importance of using quantifiable metrics to assess and refine programs, ultimately contributing to more informed and effective library management. Although the neural network fell short of its intended goals, the experience taught me what I would change if I returned to the model: adding more hidden layers and potentially using a more powerful computer.

In my exploration of American Public University’s Trefry Library’s LibAnswers, I assessed its effectiveness as a resource for finding data. I found that the system aligned well with RUSA guidelines for remote reference by responding to queries promptly and allowing patrons to find information quickly while also offering the option to consult a librarian for more complex questions. I noted the feedback feature at the bottom of each results page, which likely helps librarians evaluate the usefulness of answers. I also recognized that if services like LibAnswers became more prevalent, the role of reference librarians might shift toward anticipating common questions and fostering new collaborations between reference, technical services, and technology vendors.

This discussion post about evaluating LibAnswers serves as evidence for Competency N, as I looked at how the system effectively meets RUSA guidelines for remote reference by responding to queries promptly and acknowledging user questions in a timely manner. By evaluating the features of LibAnswers—such as the auto-complete search function, the availability of librarian support, and the user feedback mechanism—I demonstrated how these elements align with measurable criteria for assessing service effectiveness. This comparison not only highlights LibAnswers' strengths but also illustrates how adherence to established standards can guide libraries in evaluating and improving their programs and services to better serve patrons.

In Learning Activity 2, I reviewed instructional design examples, focusing on Kase Scenarios' "Sinister Obsession" and the CTRL-F Contemporary Verification Skills Workshop. The scenarios are aimed at adults interested in open-source intelligence (OSINT), with instructional goals centered on skills like username enumeration and geolocation. I noted that the web-based format made it feasible to integrate some course materials into my library, although the subscription costs for platforms like Thinkific could be prohibitive. I appreciated that the technologies used, such as Zoom and YouTube, were accessible for the lesson plan I was developing, allowing for potential implementation of their methodologies. Overall, this activity deepened my understanding of how different instructional designs can meet specific learning outcomes and of the importance of evaluating the technologies and accessibility of these resources.

This learning activity serves as evidence for Competency N as it involved a thorough analysis of instructional design examples against established educational frameworks. By assessing Kase Scenarios and the CTRL-F Contemporary Verification Skills Workshop, I evaluated their alignment with cognitivist principles and their effectiveness in achieving specific learning outcomes. I examined the technologies used, their accessibility, and the overall design quality, which provided measurable criteria for determining the suitability of these resources for my library context. This critical evaluation not only highlighted the strengths and limitations of each example but also demonstrated how such assessments can inform the selection and implementation of effective instructional programs and services.

In response to the decision to remove four PCs from another branch, I compiled and reported on the PC usage at my branch to demonstrate the need for maintaining our current resources. I gathered usage statistics, including total sessions and hours of use by month, to create a compelling visualization that clearly illustrated our branch's demand for computer access. The visualization was designed to highlight trends and patterns in PC usage, making it evident that our branch was experiencing high demand and could benefit from receiving the newly displaced PCs. I incorporated it into a larger PowerPoint presentation, which a colleague and I then sent to the head of library IT, advocating for the retention of our PCs and emphasizing the importance of supporting our patrons' needs for technology access.
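
The sketch below shows one way such a chart could be produced with matplotlib; the monthly figures are placeholder values, not my branch's actual statistics.

    import matplotlib.pyplot as plt

    # Hypothetical sketch of the visualization described above; the monthly
    # figures are placeholders, not the branch's actual usage data.
    months = ["Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
    sessions = [410, 455, 430, 470, 440, 395]   # total PC sessions per month
    hours = [620, 700, 655, 720, 680, 600]      # total hours of use per month

    fig, ax_sessions = plt.subplots(figsize=(8, 4))
    ax_sessions.bar(months, sessions, color="steelblue", label="Sessions")
    ax_sessions.set_ylabel("PC sessions")

    # Plot hours of use on a secondary axis to show both trends together.
    ax_hours = ax_sessions.twinx()
    ax_hours.plot(months, hours, color="darkorange", marker="o", label="Hours of use")
    ax_hours.set_ylabel("Hours of use")

    ax_sessions.set_title("Monthly public PC usage")
    fig.tight_layout()
    fig.savefig("pc_usage.png")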

This work activity serves as evidence for Competency N, as it involved the systematic collection and analysis of data to assess PC usage at my branch. By compiling usage statistics and creating a visual representation of the data, I was able to provide measurable evidence of the demand for computer resources, thereby justifying the need to retain our PCs in light of the planned removals from another branch. This approach not only demonstrated my ability to evaluate library services based on quantifiable metrics but also highlighted the importance of data-driven decision-making in advocating for resources that meet the needs of our patrons.

Conclusion

The evaluation and assessment of library programs and services is vital to ensuring their effectiveness and relevance to the communities they serve. By adopting a collaborative and user-centered approach, libraries can engage patrons in the evaluation process, fostering a sense of ownership and enhancing the overall quality of services. The implementation of measurable criteria, supported by standardized guidelines and evidence-based practices, allows librarians to make informed decisions that drive continuous improvement and strategic growth. This is especially important given the increasing complexity of the contemporary information landscape. A commitment to ongoing assessment not only demonstrates accountability but also highlights the value of library services to stakeholders, and a robust evaluation framework empowers libraries to adapt, innovate, and thrive while helping to secure funding and support. As my career in libraries progresses, I look forward to exploring emerging and novel data-driven forms of assessment and evaluation.

References

Appleton, L. (2017). Libraries and key performance indicators: A framework for practitioners. Elsevier Science & Technology.

Hernon, P., Altman, E., & Dugan, R. E. (2015). Assessing service quality: Satisfying the expectations of library customers (3rd ed.). ALA Editions, an imprint of the American Library Association.

International Organization for Standardization. (2023). Information and documentation - Library performance indicators (ISO 11620:2023). https://www.iso.org/standard/83126.html

Luo, L., Brancolini, K. R., & Kennedy, M. R. (2017). Enhancing library and information research skills: A guide for academic librarians. Bloomsbury Publishing USA.

McDonald, C. (2022). User experience. In S. Hirsh (Ed.), Information Services Today (3rd ed., pp. 192-202). Rowman & Littlefield.

Public Library Association. (n.d.). Benchmark: Library metrics and trends. Retrieved March 21, 2025, from https://www.ala.org/pla/data/benchmark

Reference and User Services Association. (2023). Guidelines for behavioral performance of reference and information service providers. Retrieved March 21, 2025, from https://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral
