
For more than a decade working in civil society, governance, and research evaluation, I’ve seen how excellent research can fail to shift policy or practice, and how policy can struggle without credible, trusted evidence. Communities that are most affected often remain at the periphery. Bridging the science–policy–society gap is not a communications afterthought; it is a design problem in how we produce, translate, and use knowledge. This blog draws on my practical experience and on a body of scholarship that explains why gaps persist and how to close them.
Why the Gap Persists
Classic work on research utilisation shows that evidence rarely travels a straight line from study to statute. Instead, it is used instrumentally, conceptually, or symbolically, depending on the politics, timing, and problem framing in play (see Weiss, 1979). Policymaking is not a technocratic funnel; it is a contested, multi-actor process under uncertainty and time pressure. That is why nuance is essential: “evidence-based policymaking” is not the same as evidence-based medicine—governments must blend science with values, negotiation, and narrative (see Cairney & Oliver, 2017; Cairney, 2020).
Moreover, knowledge is not neutral. It is produced and legitimated within institutions and cultures. The co-production tradition in Science and Technology Studies shows how science and social order shape each other (Jasanoff, 2004). When we ignore these dynamics, we mistake publication for impact.
From Dissemination to Co-Production
In my early projects, we “disseminated” findings (policy briefs, presentations, and reports) and waited for uptake. Results were mixed. The turning point came through citizen-based monitoring (CBM) and participatory evaluation. When communities help define problems, collect data, and interpret findings, evidence becomes salient to local priorities and trusted by those who must act on it (Matlala, 2024). South Africa’s institutionalisation of CBM, approved by Cabinet in 2013, offers a policy foundation for this shift (DPME CBM Framework; see also the DPME CBM hub and policy page). The problem statement in that framework is blunt: monitoring data is too often gathered upwards for compliance, not outwards to drive solutions.
Co-production is not a slogan. It requires intentional spaces, methods, and incentives. In one evaluation of township schooling, community insights reframed our assumptions about teacher absenteeism by surfacing commuting safety and overcrowding factors not visible in administrative datasets. This is exactly the kind of boundary-spanning that strengthens salience, credibility, and legitimacy, the three attributes at the heart of effective knowledge systems (Cash et al., 2003).
Making Evidence Actionable: Translation, Mediation, Timing
Even the best evidence dies in the inbox if it isn’t actionable. The “knowledge translation” and “evidence use” literatures emphasise brokers, formats, and fit to decision cycles. Nutley, Walter, and Davies synthesise decades of practice: evidence informs public services when users are engaged early, presentation aligns with policy calendars, and intermediaries help convert “what we know” into “what we can do” (Nutley, Walter & Davies, 2007).
In my practice, the format that unlocked action was often a one-page visual that mapped bottlenecks and pointed to specific fixes. A dense 100-page report persuaded few; a single flow diagram showing how interdepartmental communication delays produced a three-month repair backlog prompted a cross-silo meeting that the report alone never did. Translation is not dumbing down; it is policy design in miniature.
Boundary organisations (or functions), units that sit between science and policy, are crucial here. They mediate, translate, and convene across communities of practice. Cash and colleagues show how such mechanisms increase the odds that knowledge will be used because they manage the boundaries rather than denying them (Cash et al., 2003). South Africa’s broader Monitoring & Evaluation architecture, the Government-Wide M&E System, is intended to institutionalise these learning loops across departments (Policy Framework for the GWM&E System).
Trust, Accountability, and the Social Life of Data
Trust is the currency of evidence use. Policymakers may distrust research they see as detached from implementation realities; communities may distrust evaluations that extract stories and offer little in return; researchers may distrust anecdotal knowledge that complicates clean models. Co-production and CBM help rebuild trust by sharing authorship of problem definitions, methods, and recommendations. DPME’s CBM documentation explicitly recognises the need for feedback loops in which citizen-generated evidence changes how programmes are run (DPME Framework PDF; progress notes 2013 and 2014). The Open Government Partnership similarly frames CBM as a means to improve accountability and responsiveness (OGP Commitment ZA0016).
Practically, this means returning to communities with results; validating interpretations jointly; and co-designing remedies that are administratively feasible and socially legitimate. In my projects, this step was often skipped under time pressure, and its absence proved decisive: recommendations that were never validated with communities rarely survived implementation.
Digital Tools: Opportunity and Obligation
Digital platforms can close gaps by enabling real-time feedback, open data access, and direct dialogue between citizens and the state. My own research on social media in citizen-based monitoring has shown how posts and threads can flag service shortfalls quickly and publicly, creating pressure to respond. But technology is no panacea: without inclusion by design, digital tools risk amplifying inequalities. The policy question is not “digital or not,” but what blend of online and offline channels best protects equity while accelerating accountability. This aligns with the governance lesson that systems work when they enhance salience, credibility, and legitimacy simultaneously—not one at the expense of the others (Cash et al., 2003).
Five Design Principles for Bridging the Gap
Drawing on practice and scholarship, I have found that five design principles consistently improve impact:
- Co-define the problem – Start with a joint scoping workshop that includes researchers, implementers, and community representatives. This operationalises co-production as described by Jasanoff (2004) and aligns with South Africa’s CBM stance on citizen involvement (Framework).
- Plan for translation from day one – Build policy-friendly outputs (briefs, logic maps, decision trees) into the project plan. Treat “knowledge translation” as a workstream, not an afterthought (Nutley et al., 2007).
- Use boundary functions – Assign people or units to broker across communities, curating dialogue, reframing evidence in context, and protecting credibility/legitimacy (Cash et al., 2003).
- Respect policy realities – Policymaking is political and time-bound. Engage at the right windows with actionable options; avoid presenting evidence as if it alone dictates the answer (Cairney & Oliver, 2017).
- Institutionalise feedback loops – Embed community validation and response tracking in programme cycles so evaluation serves learning, not just compliance. South Africa’s GWM&E and CBM frameworks offer scaffolding for this (GWM&E; CBM framework).
What This Looks Like in Practice
Participatory indicator reviews
In municipal governance assessments, we co-reviewed indicators with ward committees and frontline staff. Community members flagged measures that were easy to report yet meaningless on the ground (e.g., “number of meetings held”). The team then aligned indicators to citizen-valued outcomes (e.g., repair times, complaint resolution). This echoes evidence-use guidance to prioritise fit for purpose over data availability (Nutley et al., 2007).
Decision-ready products
Rather than one omnibus report, we delivered a portfolio: a two-page executive note with options and trade-offs, a one-page visual of system bottlenecks, and a technical annex for analysts. This mirrors boundary-management advice to tailor outputs for credibility with experts and usability for decision-makers (Cash et al., 2003).
Community validation
Before finalising recommendations, we hosted a feedback session in the affected wards, adjusting interpretations where lived experience contradicted desk assumptions. This practice is embedded in the DPME CBM Framework and improves trust and implementation prospects (OGP ZA0016).
Capacity and Culture: The Hard Part
Evidence rarely fails on quality alone; it fails on capacity and culture. Policymakers need analytical support and time to absorb evidence; researchers need incentives to engage beyond publication; communities need resources to participate meaningfully. Building this capacity is slow, but there are promising models: training that develops “bilingual” professionals fluent in both science and policy (see reflections from an international training effort in Chanvillard, 2024), and networks that normalise collaboration across sectors (Khomsi, 2024).
Culturally, universities must reward engagement and impact, not only journal metrics. Governments must value learning alongside compliance, an aim built into South Africa’s M&E reforms (GWM&E Policy Framework) but unevenly realised in practice. Civil society can help by brokering and by holding institutions accountable to their own evidence standards.
A Note on Ethics and Power
Finally, bridging is political. Decisions about what counts as “evidence,” whose voices are included, and what success looks like are power-laden. Co-production scholarship warns against treating participation as a checkbox; instead, we must redistribute epistemic authority and be transparent about trade-offs (Jasanoff, 2004). In practical terms, that means budgeting for community time, ensuring data return, and sharing credit for insights.
Conclusion: Building Bridges that Last
The science–policy–society gap will not close by itself. It narrows when we co-produce problems and solutions; when we translate evidence into decision-ready options; when boundary functions mediate across worlds; when institutions value learning; and when communities own the changes they seek. The South African experience, with a formal CBM framework and a government-wide M&E system, offers scaffolding for this work, but practice must match policy (CBM Framework; GWM&E).
After more than ten years in this space, my conviction is simple: knowledge matters when it is shared, contextualised, and co-owned. That is the bridge worth building—between scientists, policymakers, and the communities we serve.
About the author: Dr Lesedi Senamele Matlala is a multifaceted leader and expert in public policy, governance, and social entrepreneurship. With a Master’s in Public Policy, Monitoring, and Evaluation, and a PhD in Public Management and Governance, Dr Lesedi’s academic foundation is complemented by extensive experience in Monitoring and Evaluation (M&E), User Experience (UX) research, and social entrepreneurship.
As a lecturer, researcher, and executive director of Go-Getters Brand, a research consultancy, Dr Lesedi imparts knowledge, inspires future leaders, and drives evidence-based solutions to complex societal issues. His research focuses on critical areas, including policy impact assessment, socio-economic studies, and digital governance, with a commitment to informing policy formulation and driving positive societal outcomes.
Dr Lesedi is deeply engaged in civil society, holding leadership positions in organisations such as the Association of Southern African Schools & Departments of Public Administration & Management (ASSADPAM), the South African Association of Public Administration and Management (SAAPAM), and the African Evaluation Association (AfrEA).
Dr Lesedi’s commitment to youth leadership and empowerment is exemplified through his involvement in numerous initiatives and organisations. As an Ambassador at the South African BRICS Youth Association (SABYA) and the World Literacy Foundation (WLF), he champions the rights and aspirations of young people, advocating for inclusive development and equitable opportunities. Additionally, Lesedi serves as a Sustainability Coordinator at the YALI Regional Leadership Centre, where he develops leadership skills and promotes sustainable development practices among emerging African leaders. He is also a Board Member for the Accessed ZA programme.
Recognised for his outstanding contributions, Dr Lesedi has received prestigious awards, including being named one of the Mail & Guardian 200 Young South Africans and a finalist for the TransUnion Rising Star Award. His commitment to evidence-informed decision-making, social activism, and entrepreneurial acumen makes him a transformative leader dedicated to driving positive change and building a brighter future for generations to come.
Acknowledgements: The author is solely responsible for the content of this article, including all errors or omissions; acknowledgements do not imply endorsement of the content. The author is grateful to Charity Chisoro for her guidance in preparing and finalising this article, as well as her editorial support.
Disclaimer: The views expressed in published blog posts, as well as any errors or omissions, are the sole responsibility of the author/s and do not represent the views of the Africa Evidence Network, its secretariat, advisory or reference groups, or its funders; nor do they imply endorsement by the aforementioned parties.
Suggested citation: Matlala LS (2025) Bridging Science, Policy, and Communities: Making Evidence Matter in Practice. Blog posting on 20 October 2025. Available at: https://africaevidencenetwork.org/bridging-science-policy-and-communities-making-evidence-matter-in-practice/2025/10/20/



