A methodology for systematic mapping in environmental sciences

The last decade has seen increasing concerns that scientific research was not being used to underpin policy and practice in the fields of conservation and environmental science [1–7], with decisions generally being experience-based rather than evidence-based [2, 8]. Methods for evidence-based decision-making are more developed in disciplines such as medicine and social science. In these sectors a suite of ‘systematic evidence synthesis’ methodologies has been developed to gather and collate evidence, and sometimes to appraise studies and synthesise study results e.g. [9–11]. Evidence synthesis methods follow rigorous, objective and transparent processes that, unlike traditional literature reviews, aim to reduce reviewer selection bias and publication bias, and enable the reader to view all the decisions made for inclusion and appraisal of research, and how conclusions have been reached. Evidence syntheses are now receiving significant interest in environmental sciences, gaining increasing recognition from research funders e.g. [12, 13]. One of the most recognised evidence synthesis methods is systematic review, which is often regarded as the gold standard [2, 3, 8, 14, 15].

Systematic reviews use existing primary research to, where possible, answer a specific question by combining suitable data from multiple studies, either quantitatively (e.g. using meta-analysis) or qualitatively (e.g. using meta-ethnography) [11, 16]. In environmental sciences ‘meta-analysis’, a powerful statistical tool, is often used in quantitative reviews to combine the results of multiple studies [17]. This improves precision and power through increased effective sample size, and allows additional sources of variability across studies to be investigated [18]. This process of combining the results of multiple studies to answer a question is often called ‘synthesis’ [11]. However, ‘synthesis’ can also be used to describe the methodological process used to gather and collate evidence, which may or may not include extraction of results and combining of study results to answer a question. Here we use the term ‘evidence synthesis’ to describe the whole methodology used to gather and collate evidence (e.g. systematic review, systematic mapping) and the term ‘synthesis of results’ to describe the combining of results from multiple studies either quantitatively or qualitatively to answer a question.

Questions suitable for systematic review are structured to contain a number of key elements: explicit components that specify the essential aspects of a primary research study needed to answer the review question [19]. In environmental evidence, the most common question type relates to the effects of an intervention or exposure and generally has four key elements that need to be specified: population (P), intervention (I) or exposure (E), comparator (C) and outcome (O), commonly referred to as the PICO or PECO elements [17]. Other types of question structure exist [20] and may be developed for particular circumstances. For example, the European Food Safety Authority (EFSA) is often interested in questions related to the accuracy of a test method for detection or diagnosis, in which case the population (P), index test (I) and target condition (T) must be specified. This structure is often called a ‘PIT’ question type. For questions regarding the prevalence of a condition, or occurrence of an outcome for a particular population, the key elements are the population (P) and outcome (O), often referred to as ‘PO’ question types [12, 19]. Some examples of PICO, PECO, PIT and PO question types are given in Box 1.
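The question structures described above can be thought of as mappings from a question type to its required key elements. The following sketch illustrates this in Python; the element labels follow the text, but the sample question and all field values are hypothetical illustrations, not taken from Box 1.

```python
# Sketch: review-question types as mappings to their required key elements.
# Element labels (P, I/E, C, O, T) follow the PICO/PECO/PIT/PO structures
# described in the text; the example question below is hypothetical.

QUESTION_TYPES = {
    "PICO": ["Population", "Intervention", "Comparator", "Outcome"],
    "PECO": ["Population", "Exposure", "Comparator", "Outcome"],
    "PIT":  ["Population", "Index test", "Target condition"],
    "PO":   ["Population", "Outcome"],
}

def is_closed_framed(question: dict, qtype: str) -> bool:
    """A question is 'closed-framed' when every key element is specified."""
    return all(question.get(element) for element in QUESTION_TYPES[qtype])

# Hypothetical PECO-type question: the effect of an exposure on an outcome.
question = {
    "Population": "freshwater fish",
    "Exposure": "pesticide runoff",
    "Comparator": "unexposed sites",
    "Outcome": "population abundance",
}
print(is_closed_framed(question, "PECO"))  # True: all four elements specified
```

A question lacking one or more elements for its type (e.g. no comparator) would return False here, corresponding to the ‘open-framed’ questions discussed below.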

Questions in which all the key elements are clearly specified are termed ‘closed-framed’ [19] and help systematic review teams to envisage the type of primary research study designs and settings that would be included [12]. Sometimes not all elements of the question are explicit in PICO- or PECO-type questions, because the intervention or exposure and comparator elements are considered together, for example when comparing the effects of different levels of exposure to a chemical on the outcome; such questions are still considered closed-framed [19].

Despite being ‘gold standards’ in evidence synthesis, systematic reviews are not always feasible. The ability of systematic reviews to produce a quantitative answer to a review question using meta-analysis can be hampered by data availability [21]. High-quality quantitative data are not always abundant in environmental science [22], and methodological detail and results are often poorly reported, unreported and/or unrecorded [23–25].

Often, multiple options for key question elements (e.g. multiple populations, interventions or exposures) are needed to answer questions. Also, policy-makers frequently ask questions relating to barriers to effectiveness of interventions (e.g. cost of implementation; lack of awareness of intervention) and how these can be overcome. The studies collated for these types of question are often highly heterogeneous (mixed), including different methodologies and outcomes or a mixture of quantitative and qualitative research. This may make synthesising the results of individual studies (e.g. via meta-analysis) to answer the question challenging or impossible. In these cases, a means of collating the evidence to identify sub-sets of evidence or questions suitable for systematic review would be beneficial, particularly where the evidence base is extensive [11, 16].

Questions posed by user groups in policy and practice are sometimes ‘open-framed’ (questions that lack specification of some key elements) and may not readily translate into closed-framed questions suitable for systematic review. Decision makers often ask questions relating to the state of evidence on a topic: How much evidence is there? Where is the evidence? What interventions or exposures have been studied? Which outcomes have been studied? How have the studies been undertaken? An example question relevant to environmental sciences might be: ‘What are ‘integrated landscape approaches’ and where and how have they been implemented in the tropics?’ (adapted from [26]). For this type of question it is difficult to define inclusion criteria for specific key elements (to decide what studies are relevant) and an iterative approach may have to be taken. The evidence gathered may be used to inform the development of new theories, conceptualisations or understandings [11, 16, 26]. In environmental sciences, a method of collating studies to address these types of question is often needed.

Sometimes the aim of collating evidence may be to inform secondary synthesis other than systematic review, for example to gather data for modelling [27]. Stakeholders may also be interested in research activity already captured in existing systematic reviews, either to ask questions about the nature of the research field or to identify primary research that could be used in further secondary synthesis [11]. Again, this highlights the need for a means of cataloguing all the available evidence in a comprehensive, transparent and objective manner to describe the state of knowledge, identify sub-sets of evidence or topics suitable for further secondary synthesis, or identify where there is a lack of evidence.

In the social sciences, ‘systematic mapping’ methodology was developed in response to the need to adapt existing systematic review methodology for a broader range of circumstances including some of those mentioned above [10, 28–30].

Systematic mapping does not aim to answer a specific question as a systematic review does, but instead collates, describes and catalogues available evidence (e.g. primary, secondary, quantitative or qualitative) relating to a topic of interest [10]. The included studies can be used to develop a greater understanding of concepts, identify evidence for policy-relevant questions, knowledge gaps (topics that are underrepresented in the literature and would benefit from primary research), and knowledge clusters (sub-sets of evidence that may be suitable for secondary research, for example using systematic review) [10, 11, 30–32].

Systematic mapping follows the same rigorous, objective and transparent processes as do systematic reviews to capture evidence that is relevant to a particular topic, thus avoiding the potential pitfalls of traditional literature reviews (e.g. reviewer and publication bias). However, since systematic mapping is not restricted by having to include fully specified and defined key elements, it can be used to address open-framed or closed-framed questions on broad or narrow topics. Systematic mapping is particularly valuable for broad, multi-faceted questions relating to a topic of interest that may not be suitable for systematic review due to the inclusion of multiple interventions, populations or outcomes or evidence not limited to primary research. Systematic maps play an important role in evidence syntheses because they are able to cover the breadth of science often needed for policy-based questions [33].

In systematic mapping, the evidence collated is catalogued, usually in the form of a database, providing detailed ‘meta-data’ (a set of data that describes and gives information about other data) about each study (e.g. study setting, design, intervention/s, population/s) and the article it appears in (e.g. author, title, year, peer-reviewed journal, conference proceedings). These meta-data are used to describe the quantity and nature of research in a particular area: for example, the number of articles published in journals, books and conference proceedings; the number of publications per year; the number of studies from each country of origin; the type and number of interventions; and the type and number of different study designs (e.g. survey, randomised controlled trial (RCT), cohort study) or population types (e.g. species studied). As systematic maps may include multiple populations, interventions or exposures, or outcomes (e.g. the number of studies investigating the effectiveness of a specific intervention, for a particular outcome in a specific population), more complex cross-tabulations can also be carried out. By interrogating the meta-data it is possible to identify trends, knowledge gaps and clusters. In further contrast with systematic reviews, systematic maps are unlikely to include extraction of study results or synthesis of results. To date, those published within social science disciplines also exclude critical appraisal of included studies [10]. Table 1 outlines the key differences between systematic review and systematic mapping.
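The cataloguing and cross-tabulation described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the field names and study records are invented, and a real systematic map database would hold many more meta-data fields per study.

```python
from collections import Counter

# Sketch: a systematic-map 'database' as a list of per-study meta-data
# records. All field names and records here are hypothetical examples.
studies = [
    {"intervention": "hedgerow planting", "outcome": "bird abundance", "design": "survey"},
    {"intervention": "hedgerow planting", "outcome": "bird abundance", "design": "cohort study"},
    {"intervention": "buffer strips",     "outcome": "water quality",  "design": "RCT"},
    {"intervention": "buffer strips",     "outcome": "bird abundance", "design": "survey"},
]

# Cross-tabulate intervention x outcome: large cell counts suggest
# knowledge clusters; empty cells suggest knowledge gaps.
crosstab = Counter((s["intervention"], s["outcome"]) for s in studies)

print(crosstab[("hedgerow planting", "bird abundance")])  # 2: a small cluster
print(crosstab[("hedgerow planting", "water quality")])   # 0: a knowledge gap
```

Interrogating such a table across any pair of meta-data fields (intervention by outcome, design by population, year by country, and so on) is what allows a map to describe the quantity and nature of research and to surface gaps and clusters.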

Table 1

Differences between a systematic map and systematic review