Recently, Clare O’Hanlon shared an article by Cat Lockmiller titled False Positive: Transphobic Regimes, Ableist Abandonment, and Evidence-Based Practice. (Thanks, Clare!). As someone who adopts evidence-based decision-making and has previously called for critical approaches within EBLIP, I was excited to see a critical perspective.
While reading, I did wonder whether some additional notes on EBLIP’s history and contemporary developments might have added depth to the library context in which it sits. Regardless, the paper very astutely captured the problematic nature of values- and evidence-neutrality in EBP.
The paper has been on my mind since, and I keep returning to it to jot down notes. The following quote captures the overarching sentiment:
“EBP protects and uplifts “evidence neutrality,” but as with “journalist neutrality,” and as with “library neutrality,” the retreat to what is neutral will inevitably re/produce class dominance among the normative and the naturalized” (Lockmiller, 2025).
For me, this highlighted where positionality, the potential for harm, lived experience, and community values, amongst other nuances, need to be factored into EBLIP.
Some of these considerations look like questions we’re already asking, though perhaps in a different light:
- What counts as evidence? What is an accepted methodology or type of evidence within a community or profession? What happens when these contradict one another? How do values and positionality influence what counts as evidence?
- How are we scaling evidence? Is the outcome of a decision impacting an entire community, or is it focused on an individual? How does this affect the type of evidence or approaches that guide us? What is the reach of the outcome and its impact?
For the first question, a recent article comes to mind as an example. The evidence presented was, in some spaces, called into question by the communities it spoke about. The timing of the publication was also critiqued, given its proximity to dates such as Neurodiversity Celebration Week. On the whole, its rhetoric was received as harmful or out of touch by those most impacted.
For the second question, the scale and scope of evidence is important. We include local evidence in EBLIP because we recognise that local contexts matter for decision-making within specific organisational settings. Likewise, the type of evidence and its validity may look different depending on whether it’s applied in an individual, team, community, or whole-of-organisation context. Speaking broadly, it’s partially why individual values are included in evidence-based practice in healthcare contexts, alongside clinical expertise and research evidence.
Context, values, and impact matter in how we situate evidence-based decision-making and the stories we tell with it.
A risk with EBLIP, however, is that we take the language of evidence-based practice for granted, along with the impact of evidence. The rhetoric of a decision or policy being evidence-based can be used to rationalise outcomes without considering the validity of the evidence in a specific context, one’s positionality, or the communities most impacted.
Without critical literacy on evidence-based practice, we risk skewed conclusions. It means missing community values and perspectives when interpreting evidence, and potentially introducing new, unrecognised harms or risks to what may have begun as an endeavour to reduce specific risks.
Who actually carries the impact of evidence-based decisions is an important question if we’re considering risk and harm.
The language of EBLIP can’t be treated as an unquestionable good without considering the groundwork and positionality guiding it. EBLIP ‘done well’ requires constant questioning, community understanding (not just engagement), and consideration of positionality when ‘weighing up’ evidence.
The credibility and rigour that evidence lends to our work heightens the need for care and critical literacy around it. We need to understand how the language and framing of evidence empowers particular narratives and norms (and which it does not).