Wrestling with the complexity of evaluation for organizations at the boundary of science, policy and practice

PITT, R., et al.


PITT, R., et al. Wrestling with the complexity of evaluation for organizations at the boundary of science, policy and practice. Conservation Biology, 2018, vol. 32, no. 5, p. 998-1006

DOI: 10.1111/cobi.13118

Available at: http://archive-ouverte.unige.ch/unige:113625

Disclaimer: layout of this document may differ from the published version.


This article has been accepted for publication and undergone full peer review but has not been through the copyediting, typesetting, pagination and proofreading process, which may lead to differences between this version and the Version of Record. Please cite this article as doi: 10.1111/cobi.13118.

This article is protected by copyright. All rights reserved.

Wrestling with the complexity of evaluation for organizations at the boundary of science, policy and practice

Pitt, R. (a), Wyborn, C. (b), Page, G. (c), Hutton, J. (b), Virah-Sawmy, M. (b), Ryan, M. (b) and Gallagher, L. (b)

a) University of Hawaii Manoa; 1960 East-West Road, Honolulu, Hawaii, 96848, USA.

ruthpitt@hawaii.edu

b) Luc Hoffmann Institute; WWF International, 1196 Gland, Switzerland

C. Wyborn: cwyborn@wwfint.org

J. Hutton: jhutton@wwfint.org

M. Virah-Sawmy: mvirahsawmy@wwfint.org

M. Ryan: melryan@wwfint.org

L. Gallagher: lgallagher@wwfint.org

c) SustainaMetrix, LLC; 502 Deering Avenue, Portland, ME 04103, USA.

gpage@sustainametrix.com

Corresponding author: R. Pitt

Running head: Evaluation in boundary organizations

Keywords: boundary organization; monitoring, evaluation and learning; science-policy-practice interface; credibility; salience; legitimacy; organizational learning

Article Impact statement: Appropriate evaluation for boundary organizations enhances their efforts to build relationships among conservation science, policy and practice.

Abstract

Boundary organizations have been promoted as a measure to improve the effectiveness of conservation efforts by building stronger relationships between scientists, policy makers, industry and practitioners (Cook et al. 2013). While their promise has been discussed in theory, the work of and expectations for boundary organizations are less defined in practice.

Biodiversity conservation is characterized by complexity, uncertainty, dissent and tight budgets, so boundary organizations face the challenging task of demonstrating their value to diverse stakeholders. This paper examines the challenges that boundary organizations face when seeking to evaluate their work. While no ‘off-the-shelf’ solution is available for a given boundary organization, many lessons can be learned from the evaluation literature. This paper synthesizes key areas of decision making to underpin the choice of evaluation approaches, with the aim of encouraging more productive conversations about evaluation of boundary organizations and the projects they deliver.

Introduction

“Boundary organizations” aim to support conservation by building stronger relationships between scientists, policy makers, industry and practitioners (Cook et al. 2013; Young et al. 2014; Bednarek et al. 2016). As the conservation community contemplates an increased role for evaluation, especially increased use of experimental and quasi-experimental evaluation methods (see for example Ferraro & Pattanayak 2006; Baylis et al. 2016), we have also noticed increased interest in evaluation of boundary organizations. Such interest comes from donors demanding accountability, skeptics questioning the need for such organizations, partners wanting to know their time is well-spent, and boundary organizations themselves wishing to improve their work and demonstrate their value to conservation.

Demonstrating impact is not easy for any conservation organization but, as will be explored in this paper, appropriate evaluation approaches for boundary organizations may differ from those suitable for other areas of conservation. Boundary organizations face specific challenges associated with the nature of their role mediating between practitioners, researchers and decision-makers. Such challenges include negotiating diverse perspectives on evaluation and ensuring the evaluation approach is compatible with the assumptions underpinning boundary work. Few of the existing studies on evaluating boundary organizations have engaged with the evaluation literature, drawing instead on scholarship on boundary objects and boundary organizations (see for example Clark et al. 2016), or on research into how boundary organizations function in practice (Leith et al. 2015). While such research is useful, practitioners working in boundary organizations need practical advice on evaluation (see Bednarek et al. 2016). This paper aims to progress these discussions by providing principles to guide the selection of evaluation approaches based on conditions particular to boundary organizations.


The principles were identified based on a review of the challenges faced by boundary organizations and on experience working in boundary organizations within conservation. We suggest that boundary organizations can benefit from insights from fields facing similar challenges, such as research impact evaluation and evaluation under conditions of complexity, and we draw on this literature in our discussion. The resulting principles provide a starting point for future conversations about evaluation for boundary organizations in conservation.

Understanding boundary organizations

The boundary metaphor is used in various ways to articulate relationships between “science” and “non-science”, highlighting differences in language, approaches, cultures and worldviews. Boundary work focuses on the objects (Star & Griesemer 1989), organizations (Guston 2001), and practices (Cash et al. 2003) that navigate the boundaries between science and policy. Studies of boundary work can be described as having two foci (Wyborn 2015): those seeking to understand how scientific credibility is constructed through practices that differentiate science from non-science (after Gieryn 1983); and those seeking to manage the boundaries between science, policy and practice to aid the diffusion and uptake of scientific knowledge (after Guston 2001). Boundary organizations are conceptualized as entities situated between science, policy and practice with three primary characteristics: mediation between domains; accountability to both sides of the ‘boundary’; and the use of boundary objects to support communication and collaboration (Guston 2001; Carr & Wilkinson 2005). Sometimes referred to as bridging organizations (Nel et al. 2016), these entities help researchers align their efforts with critical questions for policy and practice, and assist policy makers and practitioners to commission research that is aligned with their challenges (Gustafsson 2014; Nel et al. 2016; Beier et al. 2017). While such organizations exist in many fields (although different labels may be used), this paper focuses on boundary organizations in conservation.


Boundary organizations in conservation are highly diverse (van Enst et al. 2016). They may be philanthropic, academic, governmental or non-governmental organizations, and may be independently governed or embedded within a parent organization. They may operate at a local, national or international scale, or operate across scales. Their activities may include convening and facilitating dialog; informing and analyzing policy; conducting research; supporting lobbying and advocacy; providing technical support, training and capacity building; and supporting knowledge co-production and facilitating knowledge exchange or brokering (see Gustafsson 2014; Bednarek et al. 2016; Clapp et al. 2016; Nel et al. 2016).

Boundary organizations generate, coordinate and sustain collaborations, including providing space (whether physical or institutional) for collaboration to take place (Cash et al. 2003; Young et al. 2014). In doing so, they support debate that is grounded in relevant science and encourage the development of science that serves biodiversity and social development outcomes (Bednarek et al. 2016; Clapp et al. 2016). There is a rich literature on boundary organizations both inside and outside of conservation. In practice, those working in and partnering with boundary organizations often grapple with defining roles, choosing appropriate strategies and goals, and setting realistic expectations.

Understanding why evaluation is challenging for boundary organizations in conservation

Evaluation is the systematic collection and analysis of data to provide feedback or to assess merit, worth or significance (Patton et al. 2014). While it is simple to state what evaluation is, explaining how to implement evaluation is far more complicated. The term ‘evaluation approach’ is used to refer to a guiding framework for what ‘good’ evaluation is and how it should be conducted (Alkin 2004). Accepted approaches to evaluation vary greatly across fields.

Boundary organizations, working with numerous groups and with researchers from multiple disciplines (Cook et al. 2013), often need to navigate conflicting perspectives about how evaluation should be conducted. This presents many challenges, including building a shared understanding of what is meant by key terms, reaching agreement on evaluation design, and addressing the conceptual challenges involved in evaluating projects where the linkages between action and change are complex, dynamic and indirect. Throughout this paper, ‘stakeholders’ will be used to refer to any party with an interest in the findings of an evaluation, including, but not limited to, funders, staff, partner organizations and project beneficiaries.

Unclear definitions

A starting point is to understand what stakeholders actually mean by the term ‘evaluation’. Ferraro and Pattanayak (2006) use the term ‘program evaluation’ when arguing for greater use of experimental and quasi-experimental evaluation methods in conservation (methods that quantify causal effects in comparison to a counterfactual through the use of randomized controlled trials or statistical methods). Many evaluators, however, would interpret the term ‘program evaluation’ as requesting work that focuses on a program (rather than a policy, product or personnel), but still encompassing a wide range of possible evaluation questions and designs (Posavac 2011; Patton et al. 2014). Quantitative approaches to attribution (per Ferraro and Pattanayak 2006) are referred to as ‘impact evaluation’ by international organizations such as the World Bank and 3ie (the International Initiative for Impact Evaluation). However, this has caused confusion for those in international development who use ‘impact evaluation’ to refer to evaluations focused on the long-term outcomes of an intervention (White 2010). As White (2010) points out, these definitions are not wrong or contradictory, just different. Indeed, many terms used in evaluation (such as outcome, indicators, theory of change or impact) require explicit conversations with key stakeholders to build the shared understanding necessary for effective collaboration across conservation science, policy and practice interfaces. For example, Hearn and Buffardi (2016) show how the term ‘impact’ is used differently across sectors and disciplines, with definitions ranging from specific technical interpretations to general descriptions of change (see Table 1). These different definitions mean that a diverse group of stakeholders can embark on an evaluation believing they have similar expectations when in fact they hold different understandings of what the evaluation is and what it will look like.
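To illustrate the counterfactual sense of ‘impact evaluation’ described above, the following sketch computes a difference in mean outcomes between sites that received a hypothetical intervention and randomly assigned comparison sites. The outcome variable, the site values and the sample sizes are invented for illustration only; a real experimental or quasi-experimental design would involve far more careful sampling, matching and statistical inference.

```python
# Hypothetical illustration of the counterfactual ("impact evaluation") logic:
# the estimated effect is the change in a pre-specified outcome relative to
# what would have happened without the intervention, here approximated by
# randomly assigned control sites. All numbers are invented for illustration.
from statistics import mean, stdev
from math import sqrt

# Outcome: e.g. change in forest cover (%) at sites with and without the program.
treated_sites = [4.1, 3.6, 5.0, 2.8, 4.4, 3.9]   # received the intervention
control_sites = [1.2, 2.0, 0.8, 1.5, 1.9, 1.1]   # counterfactual comparison

effect = mean(treated_sites) - mean(control_sites)          # difference in means
se = sqrt(stdev(treated_sites) ** 2 / len(treated_sites)
          + stdev(control_sites) ** 2 / len(control_sites))  # rough standard error

print(f"Estimated effect: {effect:.2f} percentage points (SE ~ {se:.2f})")
```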

Table 1 Definitions of impact (adapted from Hearn and Buffardi 2016)

“Paradigm wars”

Choices about evaluation approaches in a given sector are influenced by prevailing trends in research approaches, particularly attitudes to the merits of qualitative versus quantitative methods. To some, only experimental evaluation is considered ‘rigorous’ or ‘best practice’ (see Ferraro & Pattanayak 2006; Baylis et al. 2016). In sectors such as education and international development, these debates have (largely) given way to recognition of the strengths and limitations of such methods, with more nuanced discussion about when this kind of evaluation design is (and is not) appropriate (Patton 2008; White 2010). Understanding the impact of research on policy, for example, does not lend itself to experimental approaches and is more suited to qualitative approaches, including stakeholder mapping, case studies and mapping drivers of change (White 2010; Hansson & Polk 2018). Most evaluators agree that evaluations should be tailored to the nature of the questions being asked, the organization or program being evaluated, and the needs of its stakeholders (Margoluis et al. 2009). This flexible approach does, however, create new challenges: it may not yet be accepted in some of the specific sectors that a boundary organization operates within or across, and methodological flexibility necessitates a range of decisions about the evaluation. An overview of such decisions is presented in Table 2.

Table 2 Overview of decisions underpinning choice of evaluation approach

Multiple accountabilities

Answering the questions involved in evaluation design is “not value-free or without consequences” (Moser 2009, p. 19), and inherently involves power dynamics (for example, “how much input will be sought from the intended beneficiaries of a program, as opposed to the funders?”). Evaluation questions are often politically charged for organizations with multiple accountabilities to diverse stakeholders who may hold conflicting views and expectations. This is invariably true for boundary organizations in conservation which, by their nature, are “accountable and responsive to opposing, external authorities” (Guston 2001, p. 402), making evaluation of ‘what works’ highly subjective (Walton 2014).

As an example of how this affects evaluation, Cash et al. (2003) argue that research needs to be credible (scientifically adequate, authoritative and believable), salient (relevant and timely) and legitimate (the result of a fair and inclusive process) to be translated into action. These criteria have been used to evaluate the effectiveness of the engagement processes that underpin boundary work (White et al. 2010; Clark et al. 2016). Understanding the perceptions of key stakeholders regarding the credibility, legitimacy and salience of activities may therefore be a useful component of an evaluation. However, attempts to improve in one of these areas may undermine another, and different audiences judge the criteria differently (Cash et al. 2003). Therefore, to balance multiple accountabilities, evaluation would need to acknowledge divergent perspectives on what these criteria mean, how they should be applied, and whose perspective on them takes precedence. White et al. (2010) seek to address the challenge of multiple accountabilities through a framework for assessing how different stakeholders perceive credibility, legitimacy and salience. They find that legitimacy for one group of stakeholders comes at the expense of legitimacy for others, highlighting the need for a nuanced approach rather than a single quantitative performance indicator.
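As a purely hypothetical illustration of why a single aggregate indicator can hide these trade-offs, the sketch below records perception scores for credibility, salience and legitimacy separately for each stakeholder group and reports them disaggregated. The groups, the 1-5 scale and the scores are invented; White et al. (2010) do not prescribe this particular scoring scheme.

```python
# Hypothetical perception scores (1-5) of a boundary organization's work,
# collected separately for each stakeholder group. Averaging everything into
# one number hides the disagreement the text warns about, so this sketch
# reports scores per group and per criterion instead.
from statistics import mean

scores = {
    "funders":       {"credibility": 4, "salience": 5, "legitimacy": 3},
    "researchers":   {"credibility": 5, "salience": 3, "legitimacy": 4},
    "practitioners": {"credibility": 3, "salience": 4, "legitimacy": 2},
}

overall = mean(v for group in scores.values() for v in group.values())
print(f"Single aggregate score: {overall:.1f}  (masks disagreement)")

for group, ratings in scores.items():
    summary = ", ".join(f"{criterion}={score}" for criterion, score in ratings.items())
    print(f"{group:>13}: {summary}")
```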

Unclear or contested definitions of problems and success

Many evaluation approaches start by defining the problem and the intended change, which also helps to clarify the evaluand (the focus and subject of the evaluation). In conservation, however, problem definitions are often contested. For example, some may see the problem from an economic perspective, while others see it from an ecological perspective (Leith et al. 2015).

Shared definitions of success are also difficult to establish for boundary organizations, due to conflicting perspectives on whether the primary objective of the organization and its projects should be producing cutting-edge peer-reviewed science, supporting conservation objectives with applied science, creating tangible changes in policy or practice, or enabling and informing the policy-making process (Bednarek et al. 2015; Leith et al. 2015).

It is therefore unsurprising that the boundary organization literature lacks clear guidance on indicators of success (van Enst et al. 2016). An organization that focuses on supporting or improving decision-making processes may consider indicators such as improved communication, stronger relationships, increased individual or institutional capacity, or one group having a greater awareness of the other’s needs (Hegger et al. 2012; Bednarek et al. 2015). Yet such indicators would not be seen as markers of success for stakeholders expecting demonstrable changes in policy, practice or conservation outcomes.

Complex relationships between science and policy

The rapidly proliferating field of research impact assessment offers potentially useful approaches for evaluating boundary organizations. However, as Greenhalgh et al. (2016) demonstrate, different approaches to assessing research impact make different assumptions about the mechanism through which impact is achieved. They argue that approaches assuming a direct influence of research on practice and policy may be appropriate where stakeholders agree on what the problem is and what a solution would look like, but in other fields (such as public policy and public health) the links between research and impact are ‘complex, indirect and hard to attribute’ (Greenhalgh et al. 2016, p. 2). In addition, benefits resulting from partnerships or improved research infrastructure may occur in the longer term, and may be unexpected (Greenhalgh et al. 2016).

These findings are highly relevant for boundary organizations, which require evaluation approaches that are compatible with their assumptions about the relationships between science, practice and policy. Approaches assuming direct, linear pathways are less likely to be appropriate than approaches that acknowledge the importance of context, interactions and partnership building. As in other areas of conservation, boundary organizations grapple with the ‘mismatched timeframes’ between donor reporting requirements and the lengthy time needed to achieve conservation objectives (Wahlén 2014). The long and complex pathways through which boundary organizations have an impact on policy and practice also make it difficult to provide rapid feedback to support internal program improvement (Bednarek et al. 2015).

Collective efforts for change

Boundary organizations often seek to contribute to conservation by facilitating collective efforts or building the capacity of others to make change (Bednarek et al. 2016). Defining the extent to which a given intervention or a single boundary organization working with many collaborators can be given credit for successful outcomes is therefore challenging, and can undermine collective efforts in contexts where shared credit helps to build trust (Mayne 2012). Given the complex and collaborative nature of boundary organization work, attempts to quantitatively measure how a program caused desired outcomes are less useful than attempts to assess whether it is reasonable to assume that the program contributed to desired outcomes (Mayne 2001).

Evolving programs and organizations

Case studies of boundary organizations have emphasized their evolving nature (Parker & Crona 2012; Leith et al. 2015). Such evolution may be a response to external pressures, or the result of programs developing and adapting over time. Parker and Crona (2012) note that the work of boundary organizations is an ongoing process of negotiation between stakeholders, which may unfold in changing and unpredictable ways. Moreover, the work of boundary organizations often spans years, if not decades, and in that time the context in which they are acting will likely change. This makes it difficult to collect appropriate baseline data, and requires a flexible evaluation framework.

Practical challenges

Evaluation also presents many practical challenges. It requires resources, including money, staff capacity and time. Organizations may not budget appropriately for evaluation, or may not be able to justify the use of resources for purposes other than program implementation.


Available resources may need to meet a range of needs, including external requirements for accountability and internal needs for learning. Staff new to evaluation may find it difficult to adapt recommended approaches to available resources, and may be overwhelmed by the diversity of perspectives and tools available (Bamberger et al. 2012).

Choosing evaluation approaches for boundary organizations

Boundary organizations work across a ‘border of diverse purposes, incongruent values, and potential mutual incomprehension’ (Crona & Parker, 2012, p. 4). Given this complexity, the diversity of evaluation approaches available, and the challenges identified above, it is unsurprising that many boundary organizations struggle with evaluation.

The process of evaluation often involves clarifying roles and assumptions, defining success, and gaining agreement on intended and intermediate outcomes. Utilization-focused evaluation (among other approaches) involves discussing these issues with the organization’s stakeholders, so evaluation may itself be a useful mechanism for boundary organizations to coordinate action and build cooperation. While continued investment may hinge on evaluation findings, the evaluation process may also facilitate organizational learning and improve outcomes (Bell et al. 2011).

It is also important to recognize potential pitfalls of evaluation. For example, a focus on short-term, easily measured outcomes may create a ‘perverse incentive’ against more complex activities where outcomes are longer term and harder to measure (Greenhalgh et al. 2016). Staff may feel pressure to reduce program ambition so that outcomes can be met, or may skew programs to meet poorly designed performance indicators, rather than remaining open to learning, seizing opportunities and recognizing unexpected benefits. Moreover, Greenhalgh et al. (2016) found that more theoretically sophisticated approaches for assessing research impact were labor-intensive, expensive and less feasible, and therefore less likely to be adopted in practice. Conducting fewer, well-designed evaluations of specific programs to inform key decisions may be a better starting point for boundary organizations new to evaluation than investing in expensive or unwieldy evaluation systems.

This brief overview shows that there is no single ‘correct’ way to conduct an evaluation for boundary organizations. In assessing approaches to meet their needs, we suggest that boundary organizations consider the following principles:

• Engage diverse stakeholders in the selection of evaluation objectives and methods
• Support learning and reflection in complex evolving projects
• Assess contribution to change rather than attribution of cause and effect
• Align with the assumptions, values and context of boundary organizations

We discuss these in more depth below; details on specific approaches mentioned are provided in Table 3.

Engage diverse stakeholders in selection of evaluation objectives and methods

The evaluation literature has explored the factors that support meaningful use of evaluations, and found that the key drivers are engagement, interaction and communication between evaluators and those who will use the findings (Johnson et al. 2009). This will resonate with boundary organizations, which argue that engagement, interaction and communication between scientists and decision-makers will increase the use of research findings. Boundary organizations and evaluators have faced the same question (how can we ensure that findings from research or evaluation are used to inform practice?) and have arrived at similar answers (by ensuring that those who will use the findings are involved in design).

Therefore, when selecting an evaluation approach, boundary organizations should consider the credibility, legitimacy and salience of evaluation from the perspective of their diverse stakeholders. These stakeholders could include funders, peer organizations, beneficiaries and boundary organization staff. Approaches that support stakeholder engagement in evaluation include participatory and utilization-focused approaches.

Support learning and reflection in complex evolving projects

Case studies of boundary organizations have emphasized their evolving nature, and the need for the projects they undertake to be flexible and adaptive to deal with changing circumstances (Parker & Crona 2012; Leith et al., 2015). Boundary organizations will therefore benefit from the evaluation literature’s engagement with complexity theory (Walton 2014), and the development of evaluation approaches suitable for complex (that is, uncertain and emergent) organizations and/or interventions (Rogers 2008). For newer organizations, developmental evaluation may support the adaptation and iteration of their work (Patton 2011). For more established organizations, theory-based evaluation approaches may be suitable to support assessment of intermediate outcomes as indicators of progress toward intended impact, but attention will need to be paid to how program theory captures the complex aspects of their work (Rogers 2008).

Evolving boundary organizations would also benefit more from evaluation approaches that encourage learning and reflection than those that focus on judgment (Schneider 2009).

Developmental evaluation is well suited to this need, as the approach is intended to support innovation, experimentation and learning, with evaluation integrated into the design and delivery of projects and programs (Patton 2011). This is particularly true for efforts designed to fundamentally change social, ecological or governance systems.

Assess contribution to change rather than attribution of cause and effect

Boundary organizations work in collaboration with a number of different actors in complex and changing environments. In these circumstances it is difficult to clearly delineate which actor or initiative caused a specific change (Walton 2014). Attempts to quantitatively measure how a program caused desired outcomes are less useful than attempts to assess whether it is reasonable to assume that the program contributed to desired outcomes (Mayne 2001).


Boundary organizations should look to ‘theory-based’ evaluation approaches which start by mapping a theory of change and gathering evidence to test the assumptions therein (Riché 2012). Articulating the difference between where a program has direct control (such as activities for which it can be held accountable for successful implementation) and where a program has influence (where change is desired but involves many other partners and external influences) can also help to clarify expectations when communicating with other stakeholders, particularly funders.
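As a loose sketch of what such a mapping might look like, the fragment below represents a theory of change as an ordered chain of results, marking whether each sits within the program’s sphere of control or its sphere of influence, and recording the assumptions to be tested with evidence. The activities, outcomes and assumptions are hypothetical; any real theory of change would be negotiated with stakeholders rather than hard-coded.

```python
# A hypothetical theory-of-change fragment for a boundary organization,
# distinguishing results it directly controls from those it can only
# influence alongside other actors. Entries are illustrative only.
from dataclasses import dataclass

@dataclass
class Result:
    description: str
    sphere: str        # "control" (accountable for delivery) or "influence"
    assumptions: list  # what must hold for the next step in the chain

theory_of_change = [
    Result("Convene scientists and water managers in joint workshops",
           "control", ["relevant actors are willing to participate"]),
    Result("Shared understanding of evidence needs for the river basin plan",
           "control", ["workshops build trust across the boundary"]),
    Result("Commissioned research addresses managers' priority questions",
           "influence", ["funders support the co-designed research agenda"]),
    Result("Basin plan revised using the new evidence",
           "influence", ["political window for revising the plan stays open"]),
]

for step in theory_of_change:
    print(f"[{step.sphere:^9}] {step.description}")
    for assumption in step.assumptions:
        print(f"            assumption to test: {assumption}")
```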

Align with the assumptions, values and context of boundary organizations

Boundary organizations should ensure that their evaluation choices align with the assumptions and values underpinning their work (Schneider 2009). Narrowly focusing evaluation on the ‘end goal’ of changes in conservation status can result in an evaluation missing a significant amount of the work undertaken by boundary organizations, such as relationship building, creating trust and supporting communication, and mediating and translating between different perspectives (Cash et al. 2003; Bednarek et al. 2015; Leith et al. 2015; Hansson & Polk 2018).

While knowledge products are a common focus of work at the science-policy interface (and may therefore be an appropriate evaluand), the processes in which scientists, policymakers, and other stakeholders interact are critical to developing useful knowledge (Clark et al. 2016) and to other intermediate outcomes, like capacity development or network strengthening, that contribute to desired conservation outcomes. Consequently, boundary organizations should adopt evaluation approaches that can accommodate the long timeframes between action and impact (such as theory-based evaluation), and the process-oriented work involved in supporting collaboration, building relationships and partnerships (such as participatory evaluation and evaluation of collaboration).

As discussed above, boundary organizations argue for co-production and co-design, so they should adopt similar approaches to evaluation. Similarly, boundary organizations question a simplistic or linear relationship between science and policy, so they can learn from the research impact evaluation literature and avoid approaches that are based on this assumption, drawing instead on societal impact assessment or realist evaluation approaches (Greenhalgh et al. 2016).

Progressing discussions about evaluation approaches for boundary organizations

Situated between sectors and disciplines, boundary organizations need to navigate conflicting perspectives on evaluation to match their needs and overall purpose. This paper has provided an overview of the kinds of decisions involved in selecting an evaluation approach, and of where confusion and contention are likely to occur. Once selected, the evaluation approach will inform decisions about evaluation design, evaluation process, data collection and analysis, and budget. Choices about who decides and what to value are inherently political, so we recommend that practitioners be conscious that there are choices to make, and be explicit and transparent about the choices that are made. We hope this paper’s articulation of why evaluation is challenging is useful to the growing number of boundary organizations grappling with this challenge. Many evaluation reports are not publicly released or shared (a problem not unique to boundary organizations), but boundary organizations share common challenges and questions. Therefore, sharing lessons learned about evaluation with the broader conservation community will help the field to develop.

Acknowledgments: Funding to support this manuscript was provided by the MAVA Foundation. We thank three anonymous reviewers and the journal editor for their helpful comments that led to a much improved manuscript.

References

Alkin, M. C. 2004. Evaluation Roots: Tracing Theorists’ Views and Influences. Thousand Oaks, California: Sage Publications.


Bamberger, M., Rugh, J., & Mabry, L. 2012. RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints (2nd ed). Thousand Oaks, California: Sage Publications.

Baylis, K., et al. 2016. Mainstreaming impact evaluation in nature conservation. Conservation Letters, 9(1), 58–64.

Bednarek, A., Shouse, B., Hudson, C. G., & Goldburg, R. 2015. Science-policy intermediaries from a practitioner’s perspective: The Lenfest Ocean Program experience. Science and Public Policy, 2(1), 291-300.

Bednarek, A., Wyborn, C., Meyer, R., Parris, A., Leith, P., McGreavy, B., & Ryan, M. 2016. Practice at the Boundaries: Summary of a workshop of practitioners working at the interfaces of science, policy and society for environmental outcomes. Retrieved from http://www.pewtrusts.org/~/media/assets/2016/07/practiceattheboundariessummaryofaworkshopofpractitioners.pdf

Beier, P., Hansen, L. J., Helbrecht, L., & Behar, D. 2017. A How-to Guide for Coproduction of Actionable Science. Conservation Letters, 10(3), 288-296.

Bell, S., Shaw, B., & Boaz, A. 2011. Real-world approaches to assessing the impact of environmental research on policy. Research Evaluation, 20(3), 227–237.

Brisolara, S. 1998. The history of participatory evaluation and current debates in the field, New Directions for Evaluation, 80 (Winter), 25-41.

Carr, A. & Wilkinson, R. 2005. Beyond participation: Boundary organisations as a new space for farmers and scientists to interact. Society and Natural Resources 18:255-265

Cash, D. W., et al. 2003. Knowledge systems for sustainable development. Proceedings of the National Academy of Sciences, 100(14), 8086–8091.


Clapp, A., Hayter, R., Affolderbach, J., & Guzman, L. 2016. Institutional thickening and innovation: reflections on the remapping of the Great Bear Rainforest. Transactions of the Institute of British Geographers, 41(3), 244–257.

Clark, W. C., Tomich, T. P., Noordwijk, M. van, Guston, D., Catacutan, D., Dickson, N. M., & McNie, E. 2016. Boundary work for sustainable development: Natural resource management at the Consultative Group on International Agricultural Research (CGIAR). Proceedings of the National Academy of Sciences, 113(17), 4615–4622.

Cook, C. N., Mascia, M. B., Schwartz, M. W., Possingham, H. P., & Fuller, R. A. 2013. Achieving conservation science that bridges the knowledge–action boundary. Conservation Biology, 27(4), 669–678.

Crona, B. I., & Parker, J. N. 2012. Learning in support of governance: theories, methods, and a framework to assess how bridging organizations contribute to adaptive resource governance. Ecology and Society, 17(1).

Ferraro PJ, Pattanayak SK 2006. Money for nothing? A call for empirical evaluation of biodiversity conservation investments. PLOS Biology 4(4): e105.

Gieryn, T.F. 1983. Boundary-Work and the Demarcation of Science from Non- Science: Strains and Interests in Professional Ideologies of Scientists. American Sociological Review 48(6), pp.781-795.

Greenhalgh, T., Raftery, J., Hanney, S., & Glover, M. 2016. Research impact: a narrative review. BMC Medicine, 14:78.

Guthrie, S., Wamae, W., Diepeveen, S., Wooding, S., & Grant, J. 2013. Measuring research: a guide to research evaluation frameworks and tools. RAND monograph. http://www.rand.org/pubs/monographs/MG1217.html


Guston, D. 2001. Boundary organizations in environmental science and policy: an introduction. Science, Technology & Human Values, 26(4), 399–408.

Gustafsson, K. M. 2014. Biological diversity under development: A study of the co-production that is biological diversity. Journal of Integrative Environmental Sciences. 11(2):109-124.

Hansson, S., & Polk, M. 2018. Assessing the impact of transdisciplinary research: The usefulness of relevance, credibility, and legitimacy for understanding the link between process and impact. Research Evaluation, 0(0), 1-18.

Hearn, S., & Buffardi, A. L. 2016. What is impact? (A Methods Lab publication). London: Overseas Development Institute. Retrieved from https://www.odi.org/sites/odi.org.uk/files/resource-documents/10352.pdf

Hegger, D., Lamers, M., Van Zeijl-Rozema, A., & Dieperink, C. 2012. Conceptualising joint knowledge production in regional climate change adaptation projects: success conditions and levers for action. Environmental Science & Policy, 18, 52–65.

Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. 2009. Research on evaluation use: a review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377–410.

Leith, P., Haward, M., Rees, C., & Ogier, E. 2015. Success and evolution of a boundary organization. Science, Technology & Human Values, 41(3), 375–401.

Margoluis, R., Stem, C., Salafsky, N., & Brown, M. 2009. Design alternatives for evaluating the impact of conservation projects. In M. Birnbaum & P. Mickwitz (Eds.), Environmental program and policy evaluation: Addressing methodological challenges. New Directions for Evaluation, 122:85–96.


Mayne, J. 2001. Addressing attribution through contribution analysis: using performance measures sensibly. The Canadian Journal of Program Evaluation, 16(1):1.

Mayne, J. 2012. Contribution analysis: Coming of age? Evaluation, 18(3):270–280.

Moser, S. 2009. Making a difference on the ground: the challenge of demonstrating the effectiveness of decision support. Climatic Change, 95(1–2), 11–21.

Nel, J., et al. 2016. Knowledge co-production and boundary work to promote implementation of conservation plans. Conservation Biology, 30(1):176–88.

Parker, J. N., & Crona, B. I. 2012. All things to all people: boundary organizations & the contemporary research university. Retrieved from http://www.stockholmresiliencecentre.org/download/18.2a759bb41277b00e3c380001363/1381790382011/ALLTHINGSParker_Crona_in+review.pdf

Patton, M.Q. 2008. Utilization-Focused Evaluation (4th ed). Thousand Oaks: Sage Publications.

Patton, M.Q. 2011. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press.

Patton, M. Q., Asibey, E., Dean-Coffey, J., Kelley, R., Miranda, R., Parker, S., & Newman, G. F. 2014. What is Evaluation? Statement from the American Evaluation Association. Retrieved March 1, 2017, from http://www.eval.org/p/bl/et/blogid=2&blogaid=4

Posavac, E. J. 2011. Program Evaluation: Methods and Case Studies. New Jersey: Pearson.

Riché, M. 2012. Theory Based Evaluation: A wealth of approaches and an untapped potential. http://ec.europa.eu/regional_policy/impact/evaluation/conf_doc/helsinki_mri_2012.pdf


Rogers, P.J 2008. Using programme theory to evaluate complicated and complex aspects of interventions, Evaluation, 14(1): 29–48.

Star, S., & Griesemer, J. 1989. Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39. Social Studies of Science, 19:387–420.

Schneider, A. 2009. Why do some boundary organizations result in new ideas and practices and others only meet resistance? Examples from juvenile justice. The American Review of Public Administration, 39(1):60–79.

Shadish, W., Cook, T., & Leviton, L. 1991. Foundations of Program Evaluation. Newbury Park, California: Sage Publications.

Spaapen, J., & Drooge, L. van. 2011. Introducing “productive interactions” in social impact assessment. Research Evaluation, 20(3), 211–218.

van Enst, W., Runhaar, H., & Driessen, P. 2016. Boundary organisations and their strategies: Three cases in the Wadden Sea. Environmental Science & Policy, 55(3):416–423.

Wahlén, C. 2014. Constructing conservation impact: understanding monitoring and evaluation in conservation NGOs. Conservation and Society, 12(1), 77.

Walton, M., 2014. Applying complexity theory: a review to inform evaluation design. Evaluation and program planning, 45:119–126.

White, D., Wutich, A., Larson, K., Gober, P., Lant, T., & Senneville, C. 2010. Credibility, salience, and legitimacy of boundary objects: water managers’ assessment of a simulation model in an immersive decision theatre. Science & Public Policy (SPP), 37(3), 219–232.


White, H. 2010. A contribution to current debates in impact evaluation. Evaluation, 16(2), 153–164.

Wyborn, C. 2015. Connectivity conservation: Boundary objects, science narratives and the co- production of science and practice. Environmental Science and Policy, 51:292–303.

Young, J., et al. 2014. Improving the science-policy dialogue to meet the challenges of biodiversity conservation: having conversations rather than talking at one-another. Biodiversity and Conservation, 23(2), 387–404.

TABLES

Table 1 Definitions of impact (adapted from Hearn and Buffardi 2016)

Type of use, and the definition of “impact” implied by that use:

Colloquial use: the general effect of an action or program (used interchangeably with terms such as result, outcome, effect or difference).

Boundless use: a broad definition covering all positive and negative, primary and secondary, direct or indirect, intended or unintended effects of a program.

Counterfactual use: a technical definition; a measurable change in a pre-specified variable, in comparison to the value the variable would have had in the absence of the intervention.

Results-chain use: a long-term change that may be beyond the timeframe or direct influence of the program in question; the final step in a causal chain from the program’s activities through to the desired change (as used in the monitoring and learning component of the Open Standards for the Practice of Conservation).


Table 2 Overview of decisions underpinning choice of evaluation approach

Each area for decision making is listed with associated questions to lead decision making.

Evaluand: what is the subject of the evaluation?
Are all stakeholders clear about what is being evaluated? Is the focus the boundary organization as a whole? The outcomes of a specific project? The usefulness of a specific boundary object? Or are we instead looking for generalizable research to assess boundary organizations as a strategy for linking science and decision making?

Evaluation purpose: why is the evaluation being conducted?
Is the purpose of the evaluation: advocacy (to demonstrate the value and use of a program)? Allocation (to inform funding allocations across potential programs)? Analysis (to inform learning and continuous improvement, and future program design)? Accountability (to provide assurance to funders about implementation and progress, or for internal accountability, especially within larger organizations)? (Guthrie et al. 2013)

Knowledge construction: what counts as acceptable knowledge and evidence about the program and its effects, and what kinds of knowledge will be prioritized?
What methods will produce credible evidence? Is the focus on program effects (short term? long term?) or implementation? How important is internal vs external validity? Should the focus be on producing context-specific knowledge or generalizable knowledge?

Knowledge use: how can evaluators produce results that are useful for the program, and is this an important consideration?
How will the results be used? How quickly? How important is usefulness? Useful to whom?

Valuing: what role will values and the process of valuing play in this evaluation?
Is this a good program? What is meant by good? What justifies these answers? Who gets to contribute to answering these questions?

Social programming: what do we believe about the nature of programs and their role in solving social problems?
How are programs improved and changed? How do programs respond to external constraints and pressures? How sensitive should the evaluation be to variation in implementation and local context?

Evaluation practice: what is the ‘correct’ role of the evaluator? What issues and constraints will shape the evaluation?
Given time, resource, budget and political constraints, what would be a feasible evaluation? What is the role of the evaluator: scientist? Judge? Advocate? Advisor? What questions should be asked and what methods should be used?

Source: Unless otherwise cited, this table draws on Shadish, Cook and Leviton 1991.

Table 3 Evaluation approaches useful for boundary organizations to consider

Utilization-focused evaluation
Utilization-focused evaluation is based on the principle that the key criterion for a ‘good’ evaluation is whether it is useful to its intended users. Evaluations using this approach therefore aim for both the evaluation findings and the process itself to inform decisions and improve performance, and this guides decisions about how the evaluation is conducted. For example, decisions about how to frame evaluation questions and how to collect data are driven by discussions about what kind of action the resulting knowledge would enable. Utilization-focused evaluation typically starts by mapping the different users of the evaluation and their accountabilities, to ensure the right people become active participants in the evaluation (Patton 2008).

Developmental evaluation
Developmental evaluation is intended for use in complex or uncertain environments. This approach focuses on rapid feedback, with real-time data collection, analysis and reflection on what is working, to support innovation, adaptation and learning. This ensures the evaluation is designed to produce data that will help to evolve the program to meet its objectives, and it commits organizations to purposeful evolution, allowing an experimental organizational culture (Patton 2011).

Societal impact assessment and related approaches
Boundary organizations may draw on the current proliferation of evaluation approaches to research impact, including social impact assessment approaches. One example is SIAMPI, which stands for Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions between science and society. SIAMPI uses a mixed-methods case study approach to map ‘productive interactions’, defined as exchanges between researchers and stakeholders in which knowledge is produced that is both scientifically robust and socially relevant. SIAMPI emphasizes learning rather than judging, so it may suit emerging boundary organizations (Spaapen & van Drooge 2011), but it can be complex and resource intensive (Greenhalgh et al. 2016).

Theory-based evaluation approaches
Theory-based evaluation refers to a variety of ways of developing a causal model that links the program’s activities to a chain of intended or observed outcomes, and then using this model to guide the evaluation. Theory-based evaluation allows for assessment of short-term or intermediate outcomes, which can provide early evidence as to whether a program is creating change that contributes to the desired long-term outcome, even if that long-term outcome will occur in the distant future. Theory-based evaluation can also support conclusions about whether a program was unsuccessful due to a failure of theory or a failure of implementation. Boundary organizations wishing to use theory-based evaluation are likely to benefit from working with evaluators experienced in adapting the approach for programs with complex and complicated elements (Rogers 2008). Approaches include theory-based evaluation (Riché 2012) and contribution analysis (Mayne 2001).

Participatory evaluation
Participatory evaluation approaches involve the stakeholders of a program or policy in the evaluation process, with particular emphasis on involving program participants or beneficiaries. The term covers a wide range of different types of participation, but the emphasis is often on going beyond seeking the views of participants to include them in the evaluation design process. Participatory approaches contrast sharply with approaches that prioritize an objective, external evaluator, and focus instead on empowerment and the democratization of knowledge (Brisolara 1998).

