The Big Climate Database compares the climate footprint of 540 common food products

In January 2024 it was updated and expanded to 503 foods and launched in a version for the UK market.

Now, we’re adding results for the Netherlands, France, and Spain and expanding the database with 37 new food items.

Regionalized data

Version 1.2 of The Big Climate Database includes regionalized data at the retail level for the Danish, British, Dutch, French, and Spanish markets and now contains the average climate impact of a total of 540 foods.

Regionalized means that the climate impact of a French baguette or a Spanish Iberico ham reflects the specific climate impact of the food as purchased in a French or Spanish supermarket, including all emissions up to the point of purchase. This gives the data unique accuracy and flexibility.
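To make the idea concrete, here is a minimal sketch in Python of what "all emissions up to the point of purchase" means in practice; the stage names, structure, and numbers are invented placeholders, not values from the database.

```python
# Minimal sketch, not actual database values: a regionalized footprint is the
# sum of the emissions from every life-cycle stage up to and including retail
# in the market where the food is purchased.
from typing import Dict, List

Stage = Dict[str, float]  # kg CO2-eq per kg of food, keyed by market

def footprint_at_retail(stages: List[Stage], market: str) -> float:
    """Sum all stage emissions up to the point of purchase in one market."""
    return sum(stage[market] for stage in stages)

# Placeholder numbers for illustration only.
baguette_stages = [
    {"FR": 0.30, "ES": 0.32},  # agriculture (wheat growing)
    {"FR": 0.10, "ES": 0.11},  # processing (milling and baking)
    {"FR": 0.05, "ES": 0.08},  # transport and distribution
    {"FR": 0.02, "ES": 0.02},  # retail
]
print(footprint_at_retail(baguette_stages, "FR"))  # 0.47 (placeholder)
```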

Data for all included countries, the technical details behind the database, and the more than 17,000 underlying datasets are freely available at the new website:

www.thebigclimatedatabase.com.

9% average increase in emissions

The most significant changes compared to version 1.1 for the Danish and British markets are due to an updated and more detailed model of fertilizer production and of greenhouse gas emissions from indirect land-use change.

These changes (among many others) mean that the climate impact of all products has increased by an average of about 9%. For 33 foods, the climate impact has increased by more than 20%, while for 176 foods it has decreased from version 1.1 to 1.2.
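As a sketch of how such summary statistics can be derived (hypothetical code, not part of the database tooling), one could compare the two versions food by food:

```python
# Minimal sketch, assuming v11 and v12 map food names to climate impact
# (kg CO2-eq per kg) in versions 1.1 and 1.2 of the database.
def summarise_changes(v11: dict, v12: dict) -> dict:
    rel = {f: (v12[f] - v11[f]) / v11[f] for f in v11 if f in v12}
    return {
        "average_relative_change": sum(rel.values()) / len(rel),
        "increased_more_than_20_pct": sum(1 for c in rel.values() if c > 0.20),
        "decreased": sum(1 for c in rel.values() if c < 0.0),
    }
```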

Future updates

This update of The Big Climate Database was carried out and financed by us at 2.-0 LCA consultants. We will continue to improve the datasets behind it and add more countries and products in the coming years.

Version 1.2 for the Danish market is available as a downloadable version at denstoreklimadatabase.dk, which is managed by CONCITO with targeted information for Danish users of the climate database. The web table will be updated shortly.

Facts about The Big Climate Database

The Big Climate Database was launched in 2021 by CONCITO in collaboration with 2.-0 LCA consultants and with support from the Salling Foundations.

The Big Climate Database was awarded the Nordic Council's Environmental Prize in 2021 for the project's significant changemaking potential.

Version 1.1 was launched in January 2024 with updates and corrections, including the addition of average climate footprints for beef, pork, and chicken, as well as a version for the UK market. The update was released by CONCITO in collaboration with 2.-0 LCA consultants and funded by the prize money from the Nordic Council's Environmental Prize. The UK market version was financed by Zedible.

Version 1.2, with results for Denmark, the UK, the Netherlands, France, and Spain, and 540 food products, was carried out and funded by 2.-0 LCA consultants. The results, the underlying datasets, and the methodology report are available at thebigclimatedatabase.com.

Our CEO, Jannick Schmidt, was invited on stage together with programme manager Michael Minter at yesterday’s prize show in Copenhagen’s Skuespilhuset to accept the Nordic Council Environment Prize 2021 for "The Big Climate Database"! It is not often that great data attracts such limelight – but that happened last night.


The Big Climate Database was developed by us for the Danish think tank CONCITO with funding from the Salling Foundations; it catalogues 500 of the most common foods in Denmark and calculates their CO2 footprint. We are so pleased that CONCITO was honoured for this forward-thinking project.

In 2019 the award was won by Greta Thunberg for calling us to action.
Now it is time to get the right data out to people to substantiate their actions.

You can read the jury's motivation in English here:
https://www.norden.org/en/news/concito-denmarks-green-think-tank-wins-2021-nordic-council-environment-prize

In May this year we proudly presented the third version of our wastewater inventory model, WW LCI, at SETAC Europe’s 29th Annual Meeting in Helsinki, Finland (see the presentation). This is the 5th consecutive platform presentation about WW LCI at a SETAC conference, which is a good sign of the scientific interest our model has received so far from the LCA community. We take this milestone as an opportunity to look back at the story behind our model.

The development of WW LCI started in 2015 as one of our crowdfunded projects, together with the three companies Henkel, Procter & Gamble and Unilever. Our goal was to develop a model and Excel tool to calculate life cycle inventories (LCIs) of chemicals discharged in wastewater. The choice of partners for this project (consumer goods companies) was not a coincidence: after use, many of their products, such as shampoos and washing detergents, end up discharged in wastewater around the globe, which makes wastewater LCI modelling a necessity for these companies when carrying out cradle-to-grave LCA studies. Yet the only commonly available LCI model covering this aspect at the time was the one by my good friend Gabor Doka, developed for version 2 of the ecoinvent database. With our project, we aimed to overcome several limitations of this model. First, we wanted our tool to describe wastewater as a mixture of individual chemical substances rather than a set of generic descriptors such as chemical oxygen demand (COD). Second, we wanted to cover several sludge disposal routes, namely landfarming, landfilling and incineration. Last but not least, we aimed to include the environmental burdens of untreated discharges, which are unfortunately still very common in developing countries. Before the end of 2015, the first version of WW LCI was ready, as well as an article that would ultimately be published the next year in the International Journal of LCA.
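As an illustration of what this first design implied for the model’s inputs, here is a minimal sketch; the class, the field names and the example substance are hypothetical and do not reproduce the actual tool.

```python
# Minimal sketch, not the actual WW LCI tool: a discharge is described as a
# mixture of individual substances, part treated in a WWTP and part released
# untreated, with the resulting sludge split over several disposal routes.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class WastewaterDischarge:
    volume_m3: float
    substances_g_per_m3: Dict[str, float]   # individual chemicals, not just COD
    fraction_treated: float = 0.8            # the remainder is discharged untreated
    sludge_routes: Dict[str, float] = field(
        default_factory=lambda: {"landfarming": 0.3, "landfill": 0.4, "incineration": 0.3}
    )

# Hypothetical example: a shampoo ingredient discharged after use.
discharge = WastewaterDischarge(volume_m3=1.0, substances_g_per_m3={"surfactant": 5.0})
```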

Shortly after the development of our model, we got in touch with Prof. Morten Birkved from the Technical University of Denmark (currently at the University of Southern Denmark), who was involved in the development of SewageLCI, an inventory model to calculate emissions of chemicals through WWTPs. We decided to join forces and integrate the two models, eventually giving rise to the second version of WW LCI, thanks to the hard work of Pradip Kalbar, currently an Assistant Professor at the Centre for Urban Science & Engineering (CUSE) at IIT Bombay. Pradip’s work led to key improvements in WW LCI, such as the inclusion of wastewater treatment by means of septic tanks, tertiary treatment of wastewater with sand filtration, treatment of wastewater in WWTPs with primary treatment only, treatment of sludge by composting, as well as the integration into the tool of a database containing wastewater and sludge statistics for 56 countries. Pradip was also responsible for our second peer-reviewed publication, this time in the journal Science of the Total Environment.

After some quiet time, in 2018 I decided to get to grips with several remaining limitations of the model, such as the fact that it did not support discharges of metals in wastewater. More importantly, I realized that by describing wastewater as a mixture of individual chemicals, as in a list of ingredients in a shampoo formulation, I was closing the door to many LCA practitioners, who can typically only describe the pollution content of wastewater with the very generic descriptors I had rejected in the first place, COD among them. Thus, I adapted the model to support metals as well as the characterization of wastewater based on the four parameters COD, N-total, P-total and suspended solids. On top of this, many additional features were implemented, mainly aimed at improved regionalization, that is, at making LCIs more country-specific. The improvements included: emissions of methane from open, stagnant sewers; climate-dependent calculation of the heat balance in WWTPs; capacity-dependent calculation of electricity consumption in WWTPs; the inclusion of uncontrolled landfilling of sludge; the specification of effluent discharges to sea water or inland water; and, last but not least, an expansion of the geographical coverage of the statistics database from 56 to (currently) 86 countries, representing 90% of the world’s population (Figure 1). The result of this effort, in short, is our third and latest version of WW LCI, presented in May at the SETAC conference.
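To illustrate this alternative entry point (a sketch only; the function and field names are hypothetical, not the actual Excel tool interface), a wastewater can now be characterised by the four aggregate parameters plus a country, from which the statistics database then supplies country-specific treatment conditions:

```python
# Minimal sketch, not the actual tool: characterising wastewater by the four
# aggregate parameters supported in version 3, tied to a country so that the
# statistics database (currently 86 countries) can supply the treatment shares.
def characterise_wastewater(country: str, cod_mg_l: float, n_total_mg_l: float,
                            p_total_mg_l: float, ss_mg_l: float) -> dict:
    """Bundle the aggregate description used when individual chemicals are unknown."""
    return {"country": country, "COD": cod_mg_l, "N-total": n_total_mg_l,
            "P-total": p_total_mg_l, "suspended solids": ss_mg_l}

# Composition of the typical urban wastewater used in Figure 2 below
# (the country code is chosen arbitrarily for this example).
typical_urban = characterise_wastewater("ES", cod_mg_l=500, n_total_mg_l=30,
                                         p_total_mg_l=6, ss_mg_l=250)
```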

Figure 1. Geographical coverage of the country database in WW LCI.

As an example of the current tool’s capabilities, the figure below (taken from the SETAC presentation) shows the carbon footprint of discharging 1 m3 of a typical urban wastewater in 81 countries. As can be seen, there is wide variability between countries (up to a factor of 6), with the highest emissions in those countries where methane from open and stagnant sewers is expected to occur. On the other hand, emissions are substantially lower in countries where wastewater is properly collected and treated in centralized WWTPs. Obviously, the carbon footprint is not the only relevant metric, and WW LCI can support others just as well, including eco-toxicity.

Figure 2. Country-specific carbon footprint of discharging 1 m3 of wastewater with a composition of 500 ppm COD, 30 ppm N, 6 ppm P and 250 ppm SS. Global warming potential for 100 years. Impact assessment calculations in SimaPro 8.5. Biogenic CO2 emissions are considered to have a global warming potential of zero.
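For readers who want to follow the impact-assessment step conceptually, here is a minimal sketch. It assumes IPCC AR5 GWP100 factors (CH4 = 28, N2O = 265); the actual factors depend on the method selected in SimaPro, and the example inventory numbers are invented.

```python
# Minimal sketch of a GWP100 calculation, assuming IPCC AR5 characterisation
# factors; biogenic CO2 is given a factor of zero, as in Figure 2.
GWP100 = {"CO2, fossil": 1.0, "CO2, biogenic": 0.0, "CH4": 28.0, "N2O": 265.0}

def carbon_footprint(inventory_kg: dict) -> float:
    """kg CO2-eq for an inventory of greenhouse gas emissions (kg per m3)."""
    return sum(GWP100.get(gas, 0.0) * amount for gas, amount in inventory_kg.items())

# Invented example inventory for discharging 1 m3 of wastewater in one country.
example = {"CH4": 0.02, "N2O": 0.001, "CO2, fossil": 0.05, "CO2, biogenic": 0.20}
print(carbon_footprint(example))  # 0.875 kg CO2-eq (illustrative only)
```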

Needless to say, WW LCI is not perfect. Its main limitations are that it does not address uncertainty, that it is data-demanding when used to model specific chemicals, that the Excel tool is not particularly easy to operate, and that the export of LCIs is currently limited to the SimaPro software. In spite of this, to our knowledge it is the most complete, flexible and regionalized inventory tool for modelling urban wastewater discharges in LCA studies, and we expect it will eventually become the preferred approach for professional LCA practitioners. We are just a few SETAC presentations away from it.

In a previous blog post, I used this picture to illustrate the gap between the available information (the large circles) and how much of this information is typically used by current LCA practice (the smaller circles within each large one):

At 2.-0 LCA consultants we have been working hard to ensure that we use the most recent, transparent, reviewed, and spatially detailed data. We model economic activities with data from globally complete and physically balanced IO-databases, including rebound effects based on marginal consumer behaviour. We model impacts with a social footprinting method that covers the total global annual loss of natural habitats, human health, and social wellbeing. And we model values with data from welfare economics on market prices and representative population surveys, including equity-weighting and science-based discounting.
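As a generic illustration of the last two ingredients, equity-weighting and discounting, here is a sketch of the standard welfare-economics formulations; it is not our specific parameterisation, and the names are hypothetical.

```python
# Generic welfare-economics sketch, not 2.-0's actual parameterisation.
def equity_weight(income: float, reference_income: float, elasticity: float = 1.0) -> float:
    """Weight impacts by the marginal utility of income: damages borne by
    lower-income populations count more than the same monetary damage elsewhere."""
    return (reference_income / income) ** elasticity

def present_value(annual_damages: list, discount_rate: float) -> float:
    """Discount a stream of future damages back to today."""
    return sum(d / (1.0 + discount_rate) ** t for t, d in enumerate(annual_damages))
```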

But staying on top of the current exponential growth of data requires the use of new social and digital technologies: To make efficient use of the options offered by what Klaus Schwab has called the fourth industrial revolution we need to cooperate as a community and use the automated tools of artificial intelligence to create and use linked open data. This is the reason that 2.-0 LCA consultants have decided to sponsor the work of BONSAI – The Big Open Network of Sustainability Assessment Information.

We see how other scientific communities, from astronomy to deep earth seismology, struggle with the same problems of managing the unprecedented amount of data needed to provide increasingly precise understanding and predictions within their fields. We see how these scientific communities have embraced open data as fundamental for the advancement of their research, and have started to cooperate across scientific disciplines.

Last month, I attended – together with BONSAI executive Michele De Rosa – the 9th plenary meeting of the Research Data Alliance (RDA). RDA is a community-driven organization with more than 5000 members from 123 countries, building the social and technical infrastructure to enable open sharing of big data. It was a vibrant, overwhelming experience that confirmed to us that the time is now ripe for open science. With an interdisciplinary perspective, the many Interest Groups (IG) and Working Groups (WG) in RDA address common problems such as how to harmonize metadata structures, how to address data classification issues, how to credit scientists for sharing data, and how to address the legal issues concerning the sharing and harvesting of data. If you are curious to know more, a full report on our activities at the RDA plenary is now available.

The scientific domains of LCA, industrial ecology, and IO-economics were, to our knowledge, represented at RDA for the first time by us, which made us feel more like representatives of a laggard community than pioneers.

We believe that the LCA community needs to come up to speed and engage more intimately with the data science community, and we therefore intend to maintain a constant presence in RDA. Plenaries are held twice a year and the next plenary will be in September in Montreal. We are working on a session proposal to create an Interest Group within RDA to target the needs of our scientific domain and invite the LCA community to join us with contributions and suggestions.
