Published quarterly by the University of Borås, Sweden

Vol. 27, Special issue, October 2022

Proceedings of the 11th International Conference on Conceptions of Library and Information Science, Oslo Metropolitan University, May 29 - June 1, 2022

Algorithmically embodied emissions: the environmental harm of everyday life information in digital culture

Jutta Haider, Malte Rödl, and Sofie Joosse

Introduction. This conceptual paper introduces the notion of algorithmically embodied emissions to highlight how everyday choices facilitated by commercial algorithmic information systems, such as search engines, social media and recommender systems, contribute to the climate crisis and other forms of environmental destruction.
Method/Analysis. The proposed concept is developed by integrating terminology from the fields of information studies, critical algorithm studies and environmental impact assessment, and by examining a strategic selection of examples.
Results. Through the examples, the authors show that semantic interpretation of queries as well as the information architecture involve normative dimensions with implications for the climate crisis and other forms of environmental destruction.
Conclusions. The paper proposes a terminological framework that integrates conceptual considerations from environmental impact assessment, environmental communication, information studies and critical algorithm studies to articulate how algorithmic information systems are co-constitutive of environmental harm. The paper further suggests extending environmental impact assessment to include algorithmic harms, in order to take into account how responsibility and accountability are distributed among different actors with profoundly different conditions and opportunities to exercise them.

DOI: https://doi.org/10.47989/colis2224


This conceptual paper introduces the notion of algorithmically embodied emissions to help us understand how the various everyday choices facilitated by algorithmic information systems, as they are employed by multi-sided platforms such as commercial search engines or various social media and recommender systems, contribute to the climate crisis and other forms of environmental destruction. The concept merges notions of embodied emissions from environmental impact assessment (EIA) with the idea of algorithmic harms (e.g. Marjanovic et al., 2021; Moss et al., 2021). We suggest that algorithmically embodied emissions are brought about by widely available, commercial general-purpose information systems, and are particularly important when different forms of relevance collide, for instance when topical relevance, subject relevance, and societal relevance are in conflict. These colliding orders of relevance create what might be called second-order algorithmic harm; that is, they result from societal meaning-making processes that have implications for discourses and social norms, which in turn influence climate change and other environmental crises.

Such conceptual and terminological integration allows us to consider the contribution of algorithms — by which we here mean rules for filtering, selecting, sorting, and returning information, for example as part of a search engine — to collective meaning-making about everyday information and information practices in ways that normalise carbon-intensive activities as a form of algorithmic harm. With the notion of algorithmically embodied emissions, then, we propose a concept that helps articulate how emissions and their environmental harms are not limited to the actual operation or even production, use and recycling of material artefacts. Instead, in the case of information systems that employ “gatekeeper algorithms” (Tufekci, 2015) to curate and select content, emissions are furthermore co-constituted by the algorithmic options offered and excluded.


In its 2021 environmental report, Google (2021) lays out its sustainability strategy, focusing exclusively on what it considers to be the company's positive impacts on the environment and the climate. In particular, the company celebrates the carbon neutrality of its operations, which it says it has maintained since 2007. It is not alone in such reporting. Many multi-sided platform businesses, such as Amazon, Facebook/Meta, Microsoft, Twitter and others, submit annual environmental reports in which they increasingly declare that their activities are green, sustainable and carbon-neutral, or at least as good as carbon-neutral (e.g. Facebook, 2021; Amazon, 2021; Google, 2021; Twitter, 2020). When doing so, these companies often emphasise how they reduce their operational emissions, for instance through various offsetting schemes or, as in the case of Google, also by generating their own energy through wind or solar plants near their data centres or office buildings. Google, in particular, even claims to have compensated for its operations’ entire “legacy carbon footprint” by purchasing offsets for the operational carbon emissions accrued throughout the company's existence (Google, 2021, p. 2; see also Shankland, 2020-09-14).

However, operational emissions are only a part of the emissions caused by these companies. Emissions also occur during production and at the end of the life cycle of their operational equipment, e.g. during mining, manufacturing, shipping, repair or recycling. Such emissions are of course much more difficult to measure, reduce and capture, but their inclusion is essential for assessing the overall impact on the environment (Masanet et al., 2013), and corporations increasingly include some of these in their environmental reports. Nevertheless, and this is the argument made in this paper, there are additional emissions beyond those just mentioned that arise from the way information is selected and presented as a result of algorithmic curation. Moreover, the way these informational emissions are conceptualised has implications for how accountability can be discussed and how a harm framework can be developed.

In the remainder of the article, we first provide brief definitions and context for operational and embodied emissions. Second, we reflect on the role of commercial search engines, social media and different types of recommender systems in this context, using Google’s marketing as an example. Third, we illustrate how different assumptions about (pro-)environmental behaviour allow for different levels of critique of algorithmic information systems. We then flesh out our concept of algorithmically embodied emissions as a form of informational algorithmic harm by means of strategically selected examples, before concluding with general remarks that point to possible future research in the nascent field of environmental information studies and related areas to better understand algorithmically induced ecological harm.

Classifying (carbon) emissions

Operational and embodied emissions are both concepts that originated in environmental impact assessment (EIA), which identifies the environmental impacts of projects, products, materials or infrastructure before an action is taken (Wood, 2002). Methodologically (in life cycle analysis and similar methods), environmental impacts are usually divided into two categories: operational impacts on the one hand and embodied impacts on the other, with greenhouse gas emissions being a particularly important impact in the context of climate change.

Operational impacts or emissions occur during the use of specific equipment and materials, and in the case of server farms or data centres would include, for example, power generation, cooling, repair or building maintenance (Masanet et al., 2013). As mentioned above, most multi-sided platform businesses publish annual sustainability or environmental reports in which they discuss and account for their operational emissions (e.g. Facebook, 2021; Amazon, 2021; Google, 2021). In contrast, embodied emissions are those emissions that become inscribed into an item or process throughout the life cycle; that is, an item ‘embodies’ these impacts. Embodied emissions concern all life stages not covered by operational impacts, such as the collection of raw materials, manufacturing and installation, but also disassembly, recycling or disposal (Masanet et al., 2013). The idea of embodied emissions is central to the allocation of greenhouse gas emissions away from producer countries and towards consumer countries (Hertwich and Peters, 2009), thereby introducing an idea of responsibility for global environmental impacts (Mittiga, 2019). Even though studies suggest that in commercially run data centres the embodied greenhouse gas emissions are an order of magnitude smaller than operational emissions (Masanet et al., 2013), the operation of large, multi-sided platforms still requires huge investments in data centres. Yet embodied emissions are only beginning to be accounted for in the sustainability reports issued by multi-sided platforms, Google being somewhat of an exception (within limits), as we will reflect on shortly.
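The division described above can be summarised in a simple accounting identity. The notation is ours, a hedged sketch rather than a formula taken from the EIA literature:

```latex
% Illustrative sketch (our notation, not taken from the EIA literature):
% an item's total life-cycle emissions split into an operational term
% and an embodied term summed over the non-operational life stages.
E_{\mathrm{total}} = E_{\mathrm{operational}} + E_{\mathrm{embodied}},
\qquad
E_{\mathrm{embodied}} = \sum_{s \in S} E_{s}
```

where $S$ denotes the non-operational life stages (raw-material collection, manufacturing, installation, disassembly, recycling, disposal). The paper's argument can then be read as claiming that such accounts omit a further, informational component that is co-constituted by algorithmic curation and that resists measurement in these terms.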

The sustainable choice?

As we will exemplify and discuss later, our argument does not only apply to Google and its products. However, the extreme dominance of Google, and in particular the integral position of its search engine in the everyday information practices of people in different walks of life in large parts of the world (Haider and Sundin, 2019), makes it a good point of reference. If nothing else, it offers a relatable tool with which to start thinking. And Google refers frequently to its sustainability and environmental responsibility: in the five-year sustainability strategy charted in Google’s 2021 environmental report mentioned earlier, there is also a section called “empowering users with technology”. It includes brief descriptions of a number of features implemented in the publicly available algorithmic information systems, most notably Google Maps and Nest. In a promotional video entitled “Helping every day be more sustainable with Google”, released on YouTube in October 2021 (Google, 2021-10-06), high-profile company representatives showcase these together with several other features being incorporated into their products, including Google Search, to help people make what they call "the sustainable choice".

The question, however, is how these "sustainable choices" are constituted and what (future) emissions are embodied in the practices afforded and the information conveyed. Examples mentioned in the video include "eco-friendly routing" for drivers of motor vehicles, improvements in the display of bicycle routes on Google Maps, flight options that show carbon dioxide emissions down to seat level, or search filters when shopping for low-energy household appliances or electric cars. Also brought up in the video is a carefully curated search engine results page (SERP) in response to entering the query "climate change" into Google’s search engine. In addition to facts about the causes and effects of climate change, it includes a tab called "Actions" under which appears a list compiled by the UN Campaign for Individual Action. Of the ten activities suggested by the UN’s campaign, three appear on the SERP: “save energy at home”, “walk, cycle or take public transport”, and “eat more vegetables” (UN, 2021). This is in line with a general move by Google to explicitly consider some extent of societal interest when calculating the relevance of search results for selected queries in particularly important markets (Sundin et al., 2022).

It is not explicitly stated, but the understanding that unites these examples is that Google conceives of itself as a facilitator of information to support ‘sustainable’ consumer choices. Accordingly, the emissions and other impacts generated by the choices of the 'empowered users' of Google products are considered part of the environmental impact and carbon footprint of those individual users and thus their responsibility — and not that of Google. Yet, given that the design of Google's products, and indeed that of other similar algorithmic information systems, is beyond the control of end-users or even democratic or other social institutions, and that the workings of the algorithms involved are almost completely opaque, it might be equally appropriate to attribute the emissions generated to the providers, the platform companies. While such an approach may have its limitations as a precise calculation tool, it is nonetheless a useful means of making the environmental damage that is (potentially) caused by invisible algorithms the subject of enquiry. In this sense, it is a means to demand accountability, but also to reflect on the way accountability is distributed among different actors who have vastly different conditions and opportunities to enact it (Moss et al., 2021).

(Pro-)Environmental Behaviour

Both in society and in academia, certain ideas exist about what constitutes 'pro-environmental behaviour'. However, as part of a neoliberal consumer culture that values 'choice', regulation is often frowned upon, and instead consumers are expected to do things differently. Keller et al. (2016) identify three dominant ideas in social science thinking — albeit with very different ontological positions — on behaviour and behaviour change in which information and communication play a central role, as we will discuss below in relation to Google's commitment to the 'sustainable choice' and its self-conception as an 'enabler' of this.

First, theories of planned behaviour assume that people are rational actors, and that accordingly the ‘right’ information or other stimuli cause them to change their attitudes and, as a result, their behaviour. Here, Google’s position as the world’s most popular search engine gives it both an advantage and a responsibility to deliver the ‘right’ information to people, which is then best placed at the top of the first search engine results page, ideally without a diversity of potential interpretations of a search term. In this thinking, Google could be conceived of as the actor who defines what the ‘sustainable choice’ is.

Next, behavioural economics assumes that people are not rational actors and do not always make the ‘right’ choices, but that they can be helped to think and do right by ‘choice architecture’, i.e. the material or virtual environment and its (default) settings. The famous (and notorious) 'nudging' approaches are part of this paradigm. In this understanding, Google’s information layout and additional information encourage certain choices and discourage others. Recommending the most eco-friendly route or displaying the carbon emissions associated with flights fits into this paradigm.

Finally, social practice theory conceives of people as habitual and thus as acting in accordance with the existing meanings, skills and things (Shove et al., 2012) associated with a given practice and its routines. Social practice theory helps us to see the interaction between people's abilities to type in the right search terms and Google's abilities to return results. This leads to shared learning and, in a sense, binds people to a particular search engine because they know how to get the results they want. It also helps us to see that the results that Google and other search engines return are a particular interpretation of the search term, as they are an (often) limited set of search results that, as we will show below, lack diversity and real alternatives in terms of sustainability considerations.

Regardless of which of these models of behaviour change we start from, Google acts as an invisible and ubiquitous intermediary between people and information, discourses, skills, sources and other commercial platforms. Google could thus reasonably conceive of itself as being able to induce sustainable choices, as it does in the aforementioned video. However, Google is not a neutral information provider; instead it crawls, filters, selects and orders data based on certain, mostly undisclosed or implicit, assumptions. This makes the opposite scenario equally plausible, namely that algorithms can also conceal information or discourses and thereby inhibit the behavioural change (and, of course, systemic change) needed to reduce greenhouse gas emissions and mitigate climate change. For example, if the 'normal' and highly visible state of carbon-intensive practices is reproduced, alternative ways of being become less visible, and thereby less available for action.
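The feedback dynamic sketched in this paragraph can be illustrated with a deliberately simplistic toy model. The code below is entirely our construction and does not describe any platform's actual ranking; it merely shows how ordering results purely by past engagement keeps reproducing whichever option is already dominant:

```python
# Toy illustration (our construction, not any platform's actual ranking):
# a ranker that orders options purely by historical click counts will
# keep reproducing whichever option is already dominant.
from collections import Counter

def rank(results, clicks):
    """Order results by historical click counts, descending."""
    return sorted(results, key=lambda r: clicks[r], reverse=True)

# Hypothetical starting data: flights already dominate past engagement.
clicks = Counter({"flights": 900, "trains": 80, "buses": 20})
results = ["trains", "buses", "flights"]

# Simulated feedback loop: users tend to click the top-ranked option,
# which feeds back into the click counts used for the next ranking.
for _ in range(3):
    ordered = rank(results, clicks)
    clicks[ordered[0]] += 100

print(rank(results, clicks))  # the dominant option stays on top; the gap widens
```

In this toy model the initially dominant option can never be displaced, however relevant the alternatives might be on other grounds, such as societal relevance; displacing it would require the ranking function itself to weigh additional signals.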

It is generally suggested that algorithms incorporate the biases and prejudices of the societies they originate from, through design and data (Airoldi, 2022; Draude et al., 2020). In our world, where the accelerating climate crisis is continuously met with a deferral of responsibility to individuals (in their limited role as ‘consumers’) and with lobbying against, and delays to, required legislation, it appears rather unlikely that a commercial search engine such as Google, or any other dominant platform company, would operate against its own interests to enable a true ‘sustainable choice’.

In the next section, we present a series of strategically selected examples to illustrate our argument and situate it within a larger problem space of how contemporary everyday life practices and information are inextricably interwoven with and co-constituted by the algorithmic decisions of multi-sided platform companies. This, we argue, has largely unnoticed implications for societal carbon emissions and far-reaching consequences for the conditions under which meaning-making can take place.

Examples for algorithmically embodied emissions

We illustrate algorithmically embodied emissions with examples of common (Northern European, middle class, everyday life) practices such as travelling or searching for information about consumer goods, such as clothes, books or other media.

The following examples are based on test searches conducted in different parts of Sweden in November 2021 and repeated in January 2022. We were logged out of our user accounts at the respective platform or service when conducting the searches. We also accessed the platform services from different browsers and devices, but the differences we found were not relevant to our argument. As our reasoning is conceptual, we are more interested in capturing broader trends than in producing exact data that can be replicated, and therefore an exploratory approach that approximates 'everyday' experiences is sufficient (Rogers, 2019).

Example one: Places

Google Search, often through the integration of Google Maps, is also able to make travel suggestions, and a search for the names of two major cities, such as "Stockholm Paris" or "Stockholm Copenhagen", usually provides exactly that: travel recommendations. It could return other information about these places. However, the search engine (reasonably) assumes, based on previous search data, that the query concerns travel interests and accordingly returns results that reflect this. People have likely also learnt throughout their engagement with the search engine that it is sufficient to enter two place names into Google Search to obtain information about travel options. Another assumption by the search engine is that the most relevant results for most such place queries are flight options. Only for shorter distances is there a map at the top of the page showing the estimated time and route by car or train. For longer distances between two countries, you will usually find a flight planner, links to different airlines and airfares before the same map is shown for the car journey. In the case of a journey from the Swedish capital Stockholm to the French capital Paris, information about train connections or journey times is not visible, neither in the organic results nor in the paid content. If Google does not suggest train travel as an immediate option for this route, it cannot alert users to this possibility, and accordingly cannot draw attention to the fact that train travel might be more environmentally friendly.

The SERP generated in response to the query "Copenhagen Stockholm" looks slightly different and can provide some insights into how the algorithms work. If you enter the names of the cities in English, the SERP shows a more balanced result in terms of flight and train options (but does not point out their unequal environmental footprints). This may be due to the shorter distance between these locations. In contrast, entering the search query with the Swedish place names "Köpenhamn Stockholm" results in a SERP that strongly favours flight options in both paid and organic results, including an embedded price comparison tool and links to airlines and their associated emissions. This could indicate that data was missing (Golebiewski and Boyd, 2019), i.e. that fewer non-flying transport options or less web content translated into Swedish was available and indexed at the time of the search, and that less paid content about trains was placed on the platform. Again, the user may find it difficult to identify the most sustainable alternative, because it is neither visible, nor are emissions compared across modes of transport.

If you enter the same search queries in alternative search engines such as Bing, DuckDuckGo or Ecosia, you get largely the same results with minor variations. Notably, the embedding of maps is inconsistent, and none has a built-in flight comparison tool.

Example two: Things

Google Search makes other semantic interpretations of search terms. For example, the Swedish word "barnkläder" (the English equivalent “children’s clothing” returns analogous results) will yield a list of nearby shops, followed by web links to various online and offline chains. This suggests that a person searching for this ‘thing’, according to Google, seeks to buy children’s clothing. There is no difference in the type of content between paid and organic results. Also, the autosuggest terms that automatically appear in the search box for the term all produce results that in turn list shops to buy new clothes (e.g. sale, different brands, different Swedish cities, boy or girl), while the ‘People also ask’ section includes similar questions that directly link to shops. Links to content about other more or less common practices in relation to children’s clothing, such as buying second-hand clothes, swapping, mending or sewing clothes are not included in the list and require adding search terms to indicate a 'special interest'. As in the example about ‘places’, the relevance assessment concerning ‘things’ involves assumptions at the semantic level that have value-based and ideological implications. Again, the search engine’s suggestions likely reproduce an underlying consumerist discourse on the internet that emphasises practices with a comparatively higher environmental impact compared to alternatives.

The search results on Bing and DuckDuckGo are virtually identical. However, there are signs that Ecosia is weighting the signals differently and giving some preference to fair-trade products, although it is still heavily weighted towards buying new items.

Example three: Content

Another example is Amazon.com, where a search for "climate change books" with the following settings, "location: USA, sorted by: featured", brings up a book commonly described as promoting a denialist agenda as one of the editor's picks (in the "best history" category) and thus first on the list. This is followed by a selection of books of which about half represent the scientific consensus and the rest promote business as usual or express doubts about climate change. The books highlighted in the "Editor's Picks" category change frequently, as does the order in which the titles are displayed. This also happens when we enter the same search terms over the course of a few months, but always with the same tendency for one or two books questioning anthropogenic climate change to appear in the first row, often as "Editor's Pick". As the name suggests, the "Editor's Pick" category includes an element of human selection. Nevertheless, the order in which these titles rise to the top of the list is an algorithmic decision, which in turn has implications that feed into future algorithmic decisions through machine learning processes. If you tick the box "kindle unlimited eligible", i.e. if you limit your search to items included in Amazon's own e-book subscription service that can be downloaded to the company's e-reading device, and again select the order "sorted: by featured", denialist books dominate at first glance. This time no editor’s picks or bestsellers are highlighted. The first three books on the list outright deny the existence of anthropogenic climate change. The rest of the list contains a mix of denialist and evidence-based content for different audiences and in different genres. Even in this list, the titles and their order change over time, but the general tendency towards denialist content is constant.

For comparison, a search on Amazon.se (Sweden) at the same time showed that books by climate deniers are not at the top of the results page. A search on Amazon.de (Germany) resulted in a list containing two climate-denial books among the top five results, though not in first place.

Although the overall results in this example are not as uniformly against the best interest of the environment as in the other examples, the potential impact on society is possibly even more profound. First, the disproportionate presence and prominence of denialist books in a mainstream (book)store over-represents fringe arguments and makes it more likely that someone seeking genuine information about the climate crisis will be reluctant to engage with a perspective that leads to greater care and respect for the environment. Second, the difference in an individual's social impact and, accordingly, ecological footprint is not limited to a single transaction, but extends to an entire lifestyle of climate denial and potential ripple effects throughout society as a whole. The point we want to highlight is not about excluding climate change denial content from this platform, or any other platform, but rather about how it is weighted and presented in relation to other content that thereby slides further down the list.

Concluding remarks

In all three examples, the information architecture does not make it immediately obvious what a "sustainable choice" is, nor is it apparent from the outset that the search and its interpretation by the algorithm may have environmental implications. Yet, what is generally considered a more "sustainable choice" is not the default setting. Instead, it is a few clicks or keystrokes away, as Google, Amazon and most likely other similar platform services often default to options that have more severe environmental impacts, as does the widespread public discourse on which their data is based. This means that either prior knowledge is required or there must be curiosity about the sustainability of a particular search and the environmental implications of following a specific algorithmic suggestion. In other words, any searcher must already have the intent or curiosity to learn about sustainability. It is certainly not as effortless as, for instance, Google suggests in its promotional material.

Although direct effects between search results or feed displays and people's decisions are elusive at this stage of conceptual development, the impact of information architecture on societal meaning-making regarding environmental issues is clear: the semantic interpretation of search terms and other interactions with these systems assumes a default action that most people are presumed to follow. When additional information about the environmental impact or environmental orientation of particular search results or queries is hidden, people must rely on their own understanding and potentially critical thinking about a topic to either refine their queries or explore all results in depth.

When considering operational and embodied emissions, the complex ways in which technology and infrastructure interact with and shape human behaviour are often neglected (for exceptions, see e.g. Giddings, 2015; Chester et al., 2013). Accordingly, Masanet et al.'s (2013) study of low-carbon data centres focuses on how changes in electricity generation or manufacturing can reduce operational and embodied emissions from data centres, but not on how data centres are used and how they enable and constrain social practices. We argue that a lack of consideration of how multi-sided platforms (or any technical infrastructure for that matter) interact with people's everyday practices is a significant omission in the interplay between environmental impact assessment and algorithmic impact assessment.

The multi-sided business model that underpins these systems has implications for the way relevance decisions are made and the order in which knowledge and information are displayed (Sundin et al., 2022). Different user groups (e.g. advertisers, content producers, searchers, etc.) are served by different forms of relevance, but all within the same consumer logic, which offers little incentive to consider the specific collective needs that societal relevance requires. Accordingly, the possibilities for making a 'sustainable choice', as Google suggested, are limited to choices within the late-liberal, capitalist logic of accountability (Juhila and Raitakari, 2016), which individualises social and environmental problems but socialises the harm. Anything that runs counter to this logic requires additional human thought and input into the algorithm.

Identifying and making visible this algorithmic social pollution or algorithmic harm (Marjanovic et al., 2021) as a second-order effect of algorithmic decision making that affects current and future generations through its constitutive role in societal discourse and cultural norms is the first and most important goal of our proof-of-concept. The myriad ways in which algorithmic information systems permeate everyday life and society at almost every level make it not only futile to decouple the environmental impact of practices from their informational design, but also problematic in terms of who can be held accountable for their decisions and by whom. Accordingly, we have proposed a terminological framework that integrates conceptual considerations from environmental impact assessment, environmental communication, information studies and critical algorithm studies to better articulate this problem and make its manifestation discernible.

Our point is not that content denying climate change, information about flights or shopping should be excluded from these or other platforms, but rather to draw attention to and question the weight given to this content and its associated discourses and practices in comparison to other possible content, which is thereby downgraded, made less visible and thus rendered less normal and actionable. Further research needs to explore the implications of our argument for the conceptualisation of other algorithmic systems and commercial platforms as contributors to environmental harm through algorithmic embodiment of emissions.


We are grateful to three anonymous reviewers for their suggestions and to the participants at Data & Society’s “The Social Life of Algorithmic Harms Academic Workshop” in March 2022 for their thoughtful feedback and suggestions, but most of all for their open-mindedness and intellectual generosity.
This research has been supported by Mistra, the Swedish Foundation for Strategic Environmental Research, through the research programme Mistra Environmental Communication.

About the authors

Jutta Haider is Professor at the Swedish School of Library and Information Science, University of Borås. She received her PhD from City, University of London, UK, and is reader in Information Studies at the Department of Arts and Cultural Sciences, Lund University, Sweden. She can be contacted at: jutta.haider@hb.se.
Malte Rödl is researcher in Environmental Communication at the Department of Urban and Rural Development, at the Swedish University of Agricultural Sciences in Uppsala. He received his PhD from the Sustainable Consumption Institute and the Alliance Manchester Business School, The University of Manchester, UK. He can be contacted at: malte.rodl@slu.se.
Sofie Joosse is researcher in Environmental Communication at the Department of Urban and Rural Development, at the Swedish University of Agricultural Sciences in Uppsala. She received her PhD in Human Geography from Uppsala University, Sweden. She can be contacted at: sofie.joosse@slu.se.


How to cite this paper

Haider, J., Rödl, M., & Joosse, S. (2022). Algorithmically embodied emissions: the environmental harm of everyday life information in digital culture. In Proceedings of CoLIS, the 11th International Conference on Conceptions of Library and Information Science, Oslo, Norway, May 29 - June 1, 2022. Information Research, 27(Special issue), paper colis2224. Retrieved from http://InformationR.net/ir/27-SpIssue/CoLIS2022/colis2224.html https://doi.org/10.47989/colis2224
