Competency N
Evaluate programs and services using measurable criteria.
Introduction
As with the other competencies discussed throughout this e-portfolio, evaluation skills are necessary for all information professionals entering the workplace. More often than not, the organizations these individuals work for are called upon to answer the ever-present question about accountability: Why do we need you? This is where evaluation using measurable criteria comes in. With measurable criteria, organizations can use data to demonstrate the results, outcomes, impacts, and successes of their programs and services (Matthews, 2018).
Why Evaluate?
A significant body of research shows that evaluation is grounded in good management and leadership. Evaluating programs and services supports evidence-based decision-making and provides valuable data for those who plan and deliver them (Matthews, 2018). Evaluations can highlight areas for improvement or new possibilities for services and programs, pinpoint where staff training is needed, and encourage the organization to consider how its activities affect short-term outcomes and long-term impacts for its users. Conducting evaluations also allows an organization to determine whether its activities align with its mission and goals and to demonstrate its added value to stakeholders.
What Should We Use to Evaluate Programs and Services?
When conducting an evaluation study, the first step is to frame the purpose and scope of the evaluation. Once those details are established, the next phase is determining the measurable criteria that will highlight aspects of the service or program being evaluated. The criteria create a normative framework that keeps the study consistent, and they establish a common language through which desired service or program attributes can be assigned values (numerical, descriptive, etc.).
Evaluators can create these criteria based on the purpose of the study, or they can adopt standards set by organizations such as the World Wide Web Consortium (W3C) or the Reference and User Services Association (RUSA). The RUSA Guidelines for Behavioral Performance of Reference and Information Service Providers describe expectations for adult reference interactions, both in person and remotely (Reference and User Services Association, 2013). The W3C's Web Content Accessibility Guidelines are a set of recommendations for making web content more accessible to a broader range of people, including those with physical, cognitive, and learning disabilities (World Wide Web Consortium, 2008). The goal is a web where all content is perceivable, operable, understandable, and robust for everyone, regardless of ability (World Wide Web Consortium, 2016).
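To make such guidelines measurable in practice, evaluators often pair them with simple automated checks. Below is a minimal sketch, not part of the W3C guidelines themselves, of one such check: flagging images on a page that lack alternative text, which WCAG treats as a baseline requirement for non-text content. The URL and the requests/BeautifulSoup libraries are assumptions for illustration, and a full accessibility review still requires human judgment.

```python
# A minimal, illustrative WCAG-style check: list images on a page that lack
# alternative text (WCAG 2.0 success criterion 1.1.1, "Non-text Content").
# The URL and the requests/BeautifulSoup libraries are assumptions for this
# sketch; flagged images still need human review, since purely decorative
# images may legitimately use an empty alt attribute.
import requests
from bs4 import BeautifulSoup

def images_missing_alt(url: str) -> list[str]:
    """Return the src of every <img> element without a non-empty alt attribute."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        img.get("src", "(no src)")
        for img in soup.find_all("img")
        if not img.get("alt", "").strip()
    ]

if __name__ == "__main__":
    for src in images_missing_alt("https://example.org"):  # hypothetical site
        print(f"Missing or empty alt text: {src}")
```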
According to the Organisation for Economic Co-operation and Development (OECD, 2021) and Matthews (2018), these are the main questions evaluators should ask when creating criteria for a service or program evaluation (a brief sketch of how such questions can be translated into numerical ratings follows the list):
Extensiveness: To what extent is the service or program made available to the community?
Relevance: Is the service or program benefiting members of the community? Is it useful?
Coherence: How well does the service or program fit within the organization's current roster of offerings?
Effectiveness: Does the service or program achieve its goals?
Efficiency: Does the service or program use resources responsibly?
Impact: Does the service or program make a notable difference in the community?
Quality: How well does the staff provide the service or program?
Sustainability: How long can the organization keep the service or program going? Do the benefits gained from the service or program last?
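As a minimal sketch of how these questions can become measurable criteria, the hypothetical Python rubric below assigns each criterion a 1 to 5 rating and a weight. The criterion names come from the list above, but the weights and scores are invented purely for illustration.

```python
# A hypothetical scoring rubric built from the questions above. The criterion
# names come from the list; the weights and 1-5 ratings are invented solely
# to illustrate how qualitative judgments can be expressed as numbers.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # relative importance; weights should sum to 1.0
    score: int     # evaluator's rating on a 1-5 scale

def weighted_total(criteria: list[Criterion]) -> float:
    """Combine the individual ratings into a single weighted score out of 5."""
    return sum(c.weight * c.score for c in criteria)

rubric = [
    Criterion("Relevance", 0.30, 4),
    Criterion("Effectiveness", 0.30, 3),
    Criterion("Efficiency", 0.20, 5),
    Criterion("Impact", 0.20, 4),
]

print(f"Overall rating: {weighted_total(rubric):.1f} / 5")
```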
Throughout this program, I have been interested in usability and user experience and have been fortunate to take many courses focused on these topics, such as INFO 287 (User Experience), INFO 251 (Web Usability), INFO 287 (Design Thinking), and INFO 282 (Project Management). In each of these courses, much of the work involved evaluating library services. For many of these evaluations, we were required to use critical thinking to determine the relevant criteria based on the project brief. While completing them, I found that I could use established criteria as the baseline for the study and then add criteria based on the specifics of the research question. The W3C's Web Content Accessibility Guidelines and Steve Krug's book Don't Make Me Think were very useful publications for this purpose.
What Does An Evaluation Look Like?
As I learned in the INFO 287 courses Design Thinking and User Experience, an evaluation study can take many forms, depending on the desired information and the available resources (time, money, staff). Quantitative methods, including counts, measurements, experiments, and statistical analyses, generate numerical data. Qualitative methods, including ethnography, observation, interviews, and focus groups, generate text-based data such as observations and commentary. Understanding the different methods that can be used to evaluate a service or program is essential because, to interpret the resulting data effectively, a researcher should be aware of its quality and limitations. One crucial concern is data validity: whether the methods are measuring what we think they are measuring.
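For instance, quantitative survey data is usually summarized before it is interpreted. The brief sketch below, using invented satisfaction ratings, shows why reporting the sample size and spread alongside the mean matters when judging how much weight the numbers can bear.

```python
# Invented satisfaction ratings (1-5 scale) from a hypothetical program survey.
# Reporting the sample size and spread alongside the mean keeps the summary
# honest about how much the numbers can actually support.
import statistics

ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

print(f"n       = {len(ratings)}")
print(f"mean    = {statistics.mean(ratings):.2f}")
print(f"median  = {statistics.median(ratings)}")
print(f"std dev = {statistics.stdev(ratings):.2f}")
```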
Reporting the Results
Once all the data has been collected and analyzed, the information must be presented to stakeholders. Below is a list of the main sections of a formal evaluation report. It is important to note that this type of formal report is optional; the key is that the information from the evaluation is presented concisely. Evaluations can also be presented as quick memos, a simple presentation highlighting the key facts, or notes for a conversation or meeting.
An executive summary (one to two pages that include the study objectives, key findings, and recommendations)
Introduction/focus of the study
A literature review
The methods of data collection
Analysis of the data collected*
Practical recommendations*
Conclusions
Commentary on the limitations of the study
References (bibliography) and appendices (this is where the data can be displayed)
*These sections should include visuals (tables, charts, and graphics) that highlight important information; a minimal charting example follows.
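As one hypothetical example of preparing such a visual, the short sketch below uses the matplotlib library (an assumed tool choice, not a requirement of any reporting standard) to turn invented monthly attendance counts into a bar chart that could be embedded in the analysis or recommendations section.

```python
# Invented monthly attendance counts turned into a bar chart for the report.
# matplotlib is an assumed tool choice here, not a requirement of any
# evaluation standard; any charting tool would serve the same purpose.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
attendance = [42, 55, 61, 48, 70, 66]  # hypothetical data

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(months, attendance)
ax.set_title("Monthly program attendance (sample data)")
ax.set_ylabel("Attendees")
fig.tight_layout()
fig.savefig("attendance.png", dpi=150)  # image can be embedded in the report
```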
Evidence
Website Usability Assessment
INFO 251 Web Usability with Diane Kovacs
Description
For this assessment, we were asked to select a library website and, acting as usability consultants, review its usability. The study report consisted of an environmental scan, the results of the assessment, and a presentation detailing the report's main points to the library. We also conducted several usability exercises, including creating user personas and stories, card sorting, and storyboarding.
Justification
This assessment is a strong demonstration of Competency N because I conducted a detailed evaluation of a library service (its website). I demonstrated that I could identify the criteria to focus on and judge how well the library website fulfilled those requirements.
UX Customer Journey Map
INFO 287 User Experience with Aaron Schmidt
Description
For this UX assignment, we broke down a library service into its constituent steps to identify areas where it excelled or needed improvement. Using an app called Lucidchart, I generated a flowchart that mapped a customer's journey, from home to the local library, to use the 3D printer. Creating this layout enabled me to brainstorm and better visualize all the potential steps a customer could take.
Using this initial process flowchart as a guide, along with usability standards and guidelines, I assessed the different steps and documented the areas needing improvement. The standards and guidelines were outlined in Krug's Don't Make Me Think, an essential text for INFO 287 that addresses web usability. The highlighted areas for improvement were used to create a second flowchart displaying the phases of an improved customer journey.
Justification
I selected this assignment as evidence for Competency N because it demonstrated how I could assess a library service. Using a set of defined criteria and an information visualization app, I was able to detail different pain points in the process and suggest alternative options for improvement. Additionally, my design skills helped me display this information in a comprehensive and easy-to-read manner.
Evaluating and Designing Websites Project
INFO 202 Information Retrieval System Design with Virginia Tucker
Group Members: Lisa Danes, Brayden Kelley, Lydia Lopez, Sabrina Weegar & Bailey Wells
Description
For this project, we were asked to lead an evaluation of an organization’s website to see if a redesign was necessary. The report generated from this evaluation presented recommendations for improving the website’s structure, organization and labelling to achieve a better user experience when navigating the site.
Justification
I chose this project to demonstrate my fulfilment of Competency N because the report contents show that I can gather qualitative data about a website's usability. They also show that I can analyze that data to generate an evaluation report that includes recommendations for a potential redesign. Finally, the project report demonstrates my ability to distill information into high-quality tables and graphics, producing a concise and comprehensive report.
Conclusion
As I have shown throughout this statement, evaluation, data analysis, and visualization are vital skills for all information professionals. The evaluation process emphasizes collaboration and iteration. When evaluations are done regularly, staff can build trend data over time for library services, programs, and resources, which enables them to learn more about their organization and develop their evaluation and analysis skills.
Through my MLIS courses and work experiences, I have had many opportunities to conduct evaluation studies, some of which are evidenced in this competency. As an analytical and organized individual, I find the process intuitive and exciting, and I plan to build on these skills as I progress in my career. Evaluations that can be used to improve user experiences in public libraries are of particular interest to me.
References
Cervone, H. F. (2021). Data Management, Analysis, and Visualization [Print]. In S. Hirsh (Ed.), Information Services Today (3rd ed., pp. 358–373). Rowman & Littlefield.
Harlow, J. (2022, September 20). What is a Key Performance Indicator (KPI)? KPI.org. https://www.kpi.org/kpi-basics/
Matthews, J. R. (2018). Evaluation: An Introduction to a Crucial Skill [Print]. In K. Haycock & M.-J. Romaniuk (Eds.), The Portable MLIS (2nd ed., pp. 255–264). Libraries Unlimited.
McDonald, C. (2021). User Experience [Print]. In S. Hirsh (Ed.), Information Services Today (3rd ed., pp. 192–202). Rowman & Littlefield.
Organisation for Economic Co-operation and Development. (2021). Applying Evaluation Criteria Thoughtfully. OECD Publishing.
Project Outcome. (2018). Resources. Public Library Association. https://www.projectoutcome.org/surveys-resources
Reference and User Services Association. (2013, May 28). Guidelines for Behavioral Performance of Reference and Information Service Providers. https://www.ala.org/rusa/resources/guidelines/guidelinesbehavioral
World Wide Web Consortium. (2008, December 11). Web Content Accessibility Guidelines (WCAG) 2.0. https://www.w3.org/TR/WCAG20/
World Wide Web Consortium. (2016). Introduction to Understanding WCAG 2.0. https://www.w3.org/TR/UNDERSTANDING-WCAG20/intro.html