AI Usage in Libraries
Instructions
This track explores the sociotechnical aspects of human-centered design and computing in a topic of interest, connecting it to existing research literature, particularly in areas like online communities. The final deliverable will be a traditional research paper.
- Who benefits from AI systems in libraries, and who may be harmed or excluded?
- How do users experience AI-based cataloging, chatbots, or recommendation systems?
- What happens when AI makes mistakes or reinforces bias?
- When do people prefer a human librarian over an AI system?
- How should technology be designed so that it supports rather than replaces human judgment?
For example:
- AI cataloging tools might save time, but they could reproduce biased subject headings or misclassify materials about marginalized groups.
- AI chatbots for reference services may provide fast answers, but users may find them frustrating, impersonal, or less trustworthy than a librarian.
- Reader’s advisory algorithms might recommend books efficiently, but could narrow users into “filter bubbles” rather than encouraging exploration.
These are all human-centered design issues because they involve usability, trust, fairness, transparency, accessibility, and the relationship between humans and technology.
Your paper could frame the issue as: how can libraries design and use AI tools in ways that remain centered on human needs and library values? That lets you connect directly to HCD principles such as:
- user needs and usability
- transparency and explainability
- accessibility
- trust
- bias and fairness
- human oversight
- designing technology to augment rather than replace people