What questions should AI prompt library professionals to ask about their sources of evidence?
Following thoughts on the potential for stronger partnership in library evidence-based practice, and my previous writing on values and evidence, I also explored where ‘values, partnership, and evidence’ in libraries might sit within conversations on artificial intelligence (AI).
AI has already prompted conversations for LIS professionals across information retrieval, evidence synthesis, risk, governance, copyright, ethics, information and digital literacy, and prompt engineering. As many discussions turn to how we prompt AI tools, what questions should AI developments prompt us to ask about the sources of evidence we use in our professional practice?
This especially matters in the narratives – the stories – produced from our evidence base in libraries.
- Do we recognise and disrupt bias/deficit narratives in evidence collection, interpretation, and communication?
- Does our evidence (and use of AI) misrepresent any communities or replicate and reinforce marginalisation (e.g., by reinforcing low expectations)?
- How might library partnerships prevent misleading narratives?
- How are we (and when should we be) partnering when producing library stories about value and impact?
The potential for AI to inform library evidence-based practice is not just a prompt for partnership but a push to strengthen our understanding of what partnership means.
Listening
Here is an excellent place to set my library writing aside and find time to listen to the AI Australia Podcast episode ‘Indigenous Data Rights with Distinguished Professor Maggie Walter’.
Professor Maggie Walter speaks on data as a human artifact (“people create data”) and the importance of understanding the purpose behind why we collect data.
Existing datasets, built on harmful methodologies and data practices and used by AI, can present deficit narratives. Walter and Suina describe the need for a data ecosystem where “Indigenous decision-making is a prerequisite for ensuring Indigenous data reflects Indigenous priorities, values, culture, lifeworlds and diversity.”1
Values, evidence, & decisions
In Evidence-Based Medicine (EBM), healthcare, and research more broadly, discussion continues around the potential for AI tools to support evidence synthesis, and around whether AI tools will undermine the core values and principles shaping these professions.
There are also explorations of whether ‘value-flexible’ AI might support shared decision-making, especially where multiple stakeholders hold conflicting values. Here, value-sensitive design in information systems raises questions about who the affected stakeholders are and which values are articulated and supported.
This has led to further questions on whether trust and patient-centered decision-making are compromised by black-box algorithms, and the need to shift the conversation to how humans interact with these systems. Opacity (the inner workings of how a system operates being hidden) and transparency (offering interpretable reasoning and justification, building trust) are key to this conversation.2
In much of this writing, while AI systems are recognised as providing input for decision-making, professional knowledge and patient values enable good decision-making.
A local evidence ecosystem
While EBLIP shares EBM’s purpose (decision-making), sources of evidence in libraries have evolved a little differently. This is especially apparent in how we engage with local evidence and professional knowledge. These add a contextual and locally situated understanding of our decisions alongside research evidence.
If we use AI in EBLIP and decision-making, we should consider how the local context of our evidence and knowledge is reflected. This means considering how AI engages with the (potentially conflicting) values embedded in the evidence we collect.
Engagement with our evidence base relies on professional knowledge that is situated. While much of our professional knowledge develops from wider professional engagement, our knowledge will still be place-based and localised.
We may have a national or global outlook, recognising the impact of shared challenges and relationships, but many stories of library impact remain situated with local communities. These stories speak to the value of the library as a social and cultural infrastructure and our partnerships within communities.
From evidence to stories
The evidence we collect is recast as stories. From evidence, we create narratives that place our underlying values and principles on display. We negotiate these values across our profession, institutions, and with library users and communities.
We need to tell these stories well. We also need to consider who should be involved in this process from the outset, beginning with the evidence we build value and impact stories around.
Incorporating AI into any stage of our evidence-based practice should then mean understanding how AI does or does not replicate the underlying (potentially conflicting) values that library users, our profession, a community, or an organisation may expect our decisions and narratives to reflect.
Decisions we claim are built on specific values will require situated knowledge, accountability, and transparency that may not be immediately apparent or communicable with AI. Our decision-making requires an added layer of negotiation to understand whose values, and which values, get included and why.
Engaging in meaningful partnerships across evidence-based practice may help mediate this negotiation of values. Partnerships ensure our evidence base and decisions have local input (reflecting the experiences of those impacted by decisions), helping to achieve meaningful outcomes.
Knowledge as situated
Partnership approaches to evidence-based practice bring a unique understanding of how we engage with sources of evidence. AI can inform and complement a partnership approach (at various stages), but this does not mean it replicates the breadth of place-based knowledge in which that evidence is situated.
Situated knowledge allows for serendipitous and unexpected directions. We see this in the experiential knowledge that library users and LIS professionals contribute to the questions and priorities determined as significant.
The situated knowledge we bring to producing evidence requires reflexivity towards library data and insights. Evidence in this context requires that we engage, to some extent, with positionality. That is:
“… self-presence, self-knowledge, and self-identity must be intentional and practiced in order to best answer questions like What should I be looking for? Who should I be looking with? or What instrument should I be using to look?” (Cambo & Gergle, 2022).
With these questions and this reflexivity, Cambo & Gergle suggest going beyond “evaluating [AI] models in terms of accuracy to evaluating models in terms of who the model is accurate for.”
Values, AI, and decision-making
Understanding the values and principles aligned to AI (and their selection) presents opportunities and challenges for engaging in evidence-based decision-making. When using AI in evidence-based practice and decision-making:
- How are we achieving (and communicating) transparency?
- What role can AI play if we also claim our decisions are values-based?
- What values, and whose values?
Here, partnership provides opportunities to collect evidence with (not just for) those impacted by our decisions. We also return to the source of our evidence – people.
This means considering how partnership is incorporated into EBLIP and how it might facilitate locally situated knowledge that centers users. By recognising experiential knowledge as a type of expertise, we can bring new perspectives to interpreting evidence and presenting library stories, while still facilitating structured approaches to analysis and finding opportunities to increase transparency (and trust) in decision-making.
We know AI can inform our engagement with research and contribute to analysis and decision-making. Yet evidence and decisions will still need to be mediated by our local, practice-based, and place-based approaches to evidence.
And so we start with questions. From the outset, before we start collecting or analysing evidence for a ‘problem’ or decision, we can understand what questions are important to ask by drawing on experiential knowledge from partnership.
Sources of evidence
Centering partnership as part of the EBLIP process means we don’t solely rely on library professionals or AI to define priorities. Partnerships contribute knowledge and evidence that can inform the questions we ask to guide evidence-based practice.
Current conversations around AI should prompt us to consider whether we are asking the right questions and maintaining transparency and authenticity in our decision-making. This includes the questions we ask in our approaches to collecting, analysing, interpreting, and communicating evidence (e.g., What should I be looking for, and who should I be looking with?).
As we ask questions about how AI can analyse and interpret data/evidence, support information retrieval, provide new insights for decision-making, and contribute innovative ideas for the stories we tell in libraries, we might also take this as an opportunity to strengthen our understanding of underlying values and partnerships in shared decision-making.
Both irrespective of AI and alongside it, I’m interested in “whose voices, perspectives, and values are reflected in and contribute to the LIS profession’s evidence base?”3
That is to say, we should be asking: Who contributes to our sources of evidence, and who do our stories and decisions impact or empower?
1. Walter & Suina, 2019, p. 234
2. Durán & Jongsma, 2021
3. Bell, 2022, p. 128