Minutes from sub-group 3A webinar on corporate data inputs

8 July 2019

Aligning Biodiversity Measures for Business project

Attendees:

Leon Bennun (The Biodiversity Consultancy), Marion Hammerl (Global Nature Fund), Shane Sparg (BirdLife International), Joshua Berger (CDC Biodiversité), Eliette Verdier (CDC Biodiversité), Mark Goedkoop (PRé), Anne Malecot (AFD), Jihae Kwon (FAO), Annelisa Grigg (UNEP-WCMC), Katie Leach (UNEP-WCMC), Lars Mueller (European Commission), Arjan Ruijs (Actiam), Laure Berling (FAO), and Ana Deligny (FRB).

Minutes:

UNEP-WCMC began the webinar with a reminder of the context and objectives of the Aligning Biodiversity Measures for Business project, and an overview of the sub-groups. CDC Biodiversité, who are chairing sub-group 3A on corporate data inputs, then presented the objectives and expected outputs of the sub-group.

The objectives of this first webinar were to review the sub-group 3A working paper (providing feedback so that it can develop into a position paper for the Brazil workshop) and to build consensus on three outputs: the common framework, the input data mapping, and the common nomenclature. These minutes focus on the discussions and decisions taken. Please refer to the presentation slides for more details on the content presented during the webinar.

General feedback on the working paper:

In general, the group felt the working paper was quite ambitious, notably regarding the database of existing biodiversity-related datasets and indicators. While no “super tool” currently exists to access all of the data listed in the database, this mapping would highlight overlaps between input data and could form the basis for consensus on a common data request to businesses.

Proposed output 1: Common framework:

Quality tiers are usually used for impact factors¹ and not for data inputs. The proposal here is to extend the definition to also cover data which requires certain impact factors to be used (e.g. if only wheat production is provided, an average global yield needs to be applied, which would correspond to data quality tier 1). The group decided that further detail on the linkages between quality tiers and data inputs was necessary, and members were encouraged to provide feedback on the tiers following the webinar. PRé will also share work from the Life Cycle Assessment community on this topic.
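As a rough illustration of the tier extension described above (the function, the tier numbering, and the yield figure are hypothetical assumptions for this sketch, not taken from the working paper): if a company reports only wheat production in tonnes, a global average yield must be assumed to derive the land footprint, flagging the result as tier 1; a region-specific yield avoids that assumption.

```python
# Hypothetical sketch: deriving land use from corporate production data.
# When only production (tonnes) is reported, a global average yield must
# be assumed -- corresponding to data quality tier 1 in this sketch.
# The yield figure is illustrative, not an official statistic.

GLOBAL_AVG_WHEAT_YIELD_T_PER_HA = 3.5  # assumed global average yield

def land_use_ha(production_t, yield_t_per_ha=None):
    """Return (hectares, data quality tier) for a production figure.

    If no region- or site-specific yield is supplied, fall back to the
    assumed global average and flag the result as tier 1.
    """
    if yield_t_per_ha is None:
        return production_t / GLOBAL_AVG_WHEAT_YIELD_T_PER_HA, 1
    return production_t / yield_t_per_ha, 2  # region-specific yield: tier 2

# 700 t of wheat with no yield data -> 200 ha, flagged as tier 1
area, tier = land_use_ha(700.0)
```

The point of returning the tier alongside the value is that the same input (production tonnage) can yield results of different quality depending on which supporting data had to be assumed.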

Proposed output 2: Data mapping:

The mapping of the ‘primary’ (i.e. company) and ‘secondary’ (i.e. global dataset) input data used by each of the different indicator approaches will draw on the survey of accounting approaches led by the EU B@B platform (survey by Johan Lammerant). The group discussed the links between primary and secondary data: for example, it is necessary to know the economic activity of a company (primary data) in order to use EXIOBASE’s environmental extension data on wheat production (secondary data), but if a business’s primary data on wheat production is not available, making this linkage may be challenging. The group suggested clarifying the primary versus secondary terminology, possibly by including two columns of primary data and/or renaming ‘primary’ and ‘secondary’. The group were asked to provide alternative suggestions.

¹ Coefficients which can be used to calculate impacts, e.g. the biodiversity loss per ton of CO2 emitted is an impact factor for CO2 emissions.

The group also discussed the linkage between data quality tiers and primary versus secondary input data, noting that the concept of primary (company data) and secondary (usually global dataset) input data concerns the source of the data rather than its quality. For example, when CDC Biodiversité worked with Michelin on their supply chain, the company provided the yield of the raw material it sourced in Indonesia. While this was primary data from the company, it would only be tier 2 in terms of data quality (i.e. region- or country-specific). It was decided that the linkages between data quality and input data types will be clarified in the next version of the working paper.
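The distinction the group drew can be sketched as two independent axes (the tier scheme and field names below are illustrative assumptions, not the sub-group's definitions): the source axis says where the data came from, while the quality tier says how specific it is.

```python
# Hypothetical sketch: data source and data quality tier are independent.
# "primary"/"secondary" describes where the data comes from; the tier
# describes its specificity. Tier scheme assumed for illustration:
# global average -> 1, region/country-specific -> 2, site-specific -> 3.

def quality_tier(resolution):
    return {"global": 1, "region": 2, "site": 3}[resolution]

# Michelin-style case from the minutes: a yield provided by the company
# (primary source) but only at country resolution (Indonesia) -> tier 2.
michelin_yield = {"source": "primary", "resolution": "region"}
tier = quality_tier(michelin_yield["resolution"])
```

Keeping the two attributes separate in a data mapping avoids the conflation the group warned about, where "primary" is read as implying high quality.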

Related to the slide on ‘Data Categories – State’, it was clarified that ‘Priority areas’ means all datasets used to prioritize areas for protection.

Proposed output 3: Common nomenclature:

For this output the objective is to come up with a proposal for the data that companies should collect (e.g. “In 2018, on all my sites, I have xx ha of natural forests”) and which they should also push their suppliers to collect. The list might include some data which is not directly useful for some of the existing measurement approaches but which might be useful to others. By providing a common list of the data required across all the measurement approaches, businesses will hopefully be reassured by the consensus being sought and can collect data accordingly.
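The kind of common data request sketched above could be modelled as a simple record, following the “xx ha of natural forests” example (all field names here are hypothetical, not the sub-group's agreed nomenclature):

```python
# Hypothetical sketch of one entry in a common corporate data request,
# modelled on "In 2018, on all my sites, I have xx ha of natural forests".
from dataclasses import dataclass

@dataclass
class LandCoverRecord:
    year: int        # reporting year
    scope: str       # e.g. "all sites" or a named site
    land_cover: str  # class from an agreed common nomenclature
    area_ha: float   # extent in hectares

record = LandCoverRecord(year=2018, scope="all sites",
                         land_cover="natural forest", area_ha=120.0)
```

A shared schema like this is what would let several measurement approaches consume one data request from a business rather than each asking in a different format.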

According to a poll conducted during the workshop in Brussels in March 2019, companies’ priority data for alignment is land cover data. It was decided that this would be the sub-group’s initial focus for convergence.

Next steps:

• Sub-group 3A members to provide further feedback on the working paper following the webinar. The final deadline to comment and propose changes is the end of August.

• Sub-group 3A members to complete the Doodle poll for the next webinar in September: https://doodle.com/poll/2hrknrb3n6g7udh2.
