This is the first part of a three-part series about driverless car development and meteorology. Part 2 and part 3 are also available to read. I adapted these posts from my writing in John Durham Peters’ Elemental Media seminar.
Introduction
Tech journalist Alex Davies (2021) begins the history of driverless cars with an epigraph from The Odyssey:
Phaeacians have no need of men at helm
Nor rudders, as in other ships. Our boats
Intuit what is in the minds of men
And know all human towns and fertile fields.
They rush at full tilt, right across the gulf
Of salty sea, concealed in mist and clouds
The classic myth anticipates a key challenge for autonomous driving today: the inevitability of hazardous weather. Like the Phaeacians’ rudderless ships, driverless cars must traverse mist and clouds—and snow, ice, rain, and dust. Bad weather may bring detour, delay, or death; just as it can end a voyage in shipwreck, it can end a drive in car-wreck. The US Department of Transportation reports approximately 5.7 million vehicle crashes a year, of which about 22 percent are weather-related, killing 6,000 people and injuring 445,000 more. Various kinds of weather increase the risk of accidents: fog, dust, and rain reduce visibility; snow and ice inhibit traction; and strong winds can veer a car off course. These hazards challenge human and machine drivers alike. To safely operate beyond the clearest of days, autonomous driving systems must contend with the extremes and uncertainties of road weather.
Here, and in subsequent posts, I examine the weathering, so to speak, of driverless cars developed by companies like Waymo. Given its influence on driving, weather informs how driverless car developers understand their work. As their cars leave the testing grounds of the sunny Southwest, they come to know US cities by their climatic hazards: “snowy Novi, Michigan, rainy Kirkland, Washington, foggy San Francisco, and of course those dusty haboobs in Phoenix, Arizona.” In these places, they learn to recognize weather’s influence over where and when their cars can operate. In the process, they attune themselves to weather’s patterning of daily life more broadly: “Weather shapes our lives,” writes the Waymo Weather Team, “from dictating what we wear, to how we commute, to whether school is in.” To “solve for weather,” as they say, they design systems to manage rain, fog, and snow. Their systems filter these weather conditions from data collected by the cars’ sensors, rendering them invisible in the cars’ models of the world. However, there is more to “solving for weather” than mere data manipulation. Though filtered from the cars’ world models, weather leaves lasting imprints on the cars’ material design and modes of perception.
Navigating the Road
A brief overview of autonomous driving will clarify driverless cars’ relationship with weather. This series analyzes cars that are fully autonomous, meaning that they can control all aspects of navigation—acceleration, braking, steering, and signaling—without human intervention. Companies like Waymo build and run these cars as ridesharing fleets, offering on-demand service to riders rather than selling the cars directly to consumers. Today’s fully autonomous cars employ a variety of sensors and on-board computers. Waymo operates Chrysler Pacifica hybrid minivans and Jaguar I-PACE electric SUVs. The Waymo “Driver,” as the company calls its autonomous driving system, collects data about its surroundings with camera, radar, and lidar sensors. In a process called “sensor fusion,” an on-board computer retrieves and overlays the data from each sensor. The central computer processes the data using a machine learning model to identify objects in its surroundings, locate itself among them, and predict their trajectories. With this information, it calculates the optimal, and ideally safest, route towards its human rider’s destination.
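To make this pipeline concrete, here is a minimal, schematic sketch in Python of the sense, fuse, perceive, and plan loop described above. Every name in it (Detection, fuse, plan_route, the confidence weights) is an illustrative placeholder of my own, not Waymo’s software.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str                     # e.g. "stop_sign", "pedestrian"
    position: Tuple[float, float]  # (x, y) in the car's frame, meters
    confidence: float              # 0.0 - 1.0

def _near(a, b, tol=1.0):
    return abs(a[0] - b[0]) < tol and abs(a[1] - b[1]) < tol

def fuse(camera: List[Detection], lidar: List[Detection], radar: List[Detection]) -> List[Detection]:
    """Overlay detections from each sensor ("sensor fusion"): detections
    corroborated by a second modality keep their confidence; uncorroborated
    ones are kept but down-weighted."""
    fused = []
    for det in camera:
        corroborated = any(_near(det.position, o.position) for o in lidar + radar)
        weight = 1.0 if corroborated else 0.6
        fused.append(Detection(det.label, det.position, det.confidence * weight))
    return fused

def predict_trajectory(det: Detection):
    # Placeholder: treat objects as stationary; a real system uses a
    # learned motion model to predict where each object is headed.
    return [det.position]

def plan_route(world_model, trajectories, destination):
    # Placeholder: a real planner searches candidate paths, trading off
    # obstacle probabilities against progress toward the destination.
    return ("proceed_toward", destination)

def drive_cycle(camera, lidar, radar, destination):
    """One perception-planning cycle: fuse sensors, predict, plan."""
    world_model = fuse(camera, lidar, radar)
    trajectories = [predict_trajectory(d) for d in world_model]
    return plan_route(world_model, trajectories, destination)

# Toy usage: a stop sign seen by the camera and corroborated by lidar
print(drive_cycle(
    camera=[Detection("stop_sign", (12.0, 1.5), 0.9)],
    lidar=[Detection("surface", (12.2, 1.4), 0.8)],
    radar=[],
    destination="downtown",
))
```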
Media theorist Florian Sprenger describes driverless cars as “media complexes” whose systems produce “world models” with a contextually rich perception of their surroundings (2022, 623). Sensor fusion is central to this process because it helps the cars’ computers attribute meaning to raw sensor data. When a car’s camera detects a stop sign, for example, the sign may correspond with an actual stop sign, a reflection in a window or puddle, or an image in a roadside advertisement. Each possibility requires a different response. To determine which kind of sign appears in the camera, the car’s computer triangulates the image with surface data from the lidar sensor. It then decides whether to stop for the real stop sign or drive past the image. The world models produced from such overlays are not complete representations of the world, but rather probabilistic renderings of potential obstacles. A potential obstacle whose presence is deemed unlikely is dropped from the world model. As Sprenger explains, “from the point of view of an autonomous system, the environment appears as an independent source of perturbation” (2022, 630). By reproducing their environment in this way, he argues, driverless cars create a world they can navigate through computational “microdecisions”—assessing probabilities and tradeoffs to generate a viable account of the road and a safe route along it.
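A minimal sketch of one such microdecision, with names and thresholds that are my own assumptions: a camera detection of a stop sign is cross-checked against lidar, and candidate obstacles whose estimated probability falls below a cutoff are dropped from the world model.

```python
def is_physical_sign(camera_confidence: float, lidar_has_surface: bool) -> float:
    """Estimate the probability that a camera-detected stop sign is a real,
    physical sign rather than a reflection or an image in an advertisement.

    Heuristic: lidar corroboration (a planar surface at the expected location)
    greatly raises the probability; its absence suggests a reflection or picture.
    The likelihood values are illustrative, not measured."""
    prior = camera_confidence
    likelihood = 0.95 if lidar_has_surface else 0.15
    return prior * likelihood

def update_world_model(candidates, threshold=0.5):
    """Keep only candidates deemed likely enough to matter for planning.

    Unlikely candidates are dropped: the world model is a probabilistic
    rendering of potential obstacles, not a complete picture of the world."""
    return [c for c in candidates if c["p_real"] >= threshold]

# Example: a sign seen confidently by the camera (0.9) but with no lidar surface behind it
p = is_physical_sign(0.9, lidar_has_surface=False)                  # ~0.14: likely a reflection
model = update_world_model([{"label": "stop_sign", "p_real": p}])   # dropped from the model
print(p, model)
```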
The resulting world model forms a protective envelope around the car. Originally developed by John Law in the context of ocean navigation (1984), a ship’s protective envelope includes “the set of knowledges embodied by mariners, media, and ships to ensure the durability of ships and the dominance of the sea” (Shiga 2013, 361). Throughout its history, the protective envelope evolved in response to changing relationships between ships and their environments, and its advancements opened access to new environments in turn. A protective envelope suited for coastal and Mediterranean travel could not support navigation up and down the Atlantic and Indian Oceans, so in the 15th century, Portuguese navigators added new measurement and logging techniques to their ships’ envelopes (Law 1984). By the early 20th century, ships traveled fast enough that visual scanning could no longer detect obstacles in time for navigators to dodge them, a limit that capped safe speeds and contributed to wrecks like the Titanic disaster. In this context, sonar promised object detection at longer distances and thus increased the speed at which ships could safely travel (Shiga 2013).
The protective envelopes of driverless cars mediate their environmental relationship similarly. The kinds of obstacles their sensors can detect and the sophistication of their processing techniques determine the range of environments through which they can safely navigate. In driverless car development, this range is formalized as the “operational design domain,” or ODD, which enumerates the geographic areas, times of day, traffic conditions, and environmental conditions within which the cars can legally operate (SAE International 2021). Like all vehicles or vessels, a driverless car’s ability to navigate is limited by its ability to maintain control within its environment. It achieves this control by accurately perceiving and responding to potential obstacles.
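As a rough illustration of how an ODD can function as a machine-checkable specification, the sketch below encodes a hypothetical, simplified ODD along some of the categories SAE J3016 names (geography, time of day, environmental conditions) and tests whether current conditions fall inside it. All values are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class OperationalDesignDomain:
    """A simplified ODD: the conditions under which autonomous operation is
    permitted. Every value here is illustrative, not any company's policy."""
    service_areas: set = field(default_factory=lambda: {"phoenix", "san_francisco"})
    hours: range = range(6, 22)                # 6 AM to 10 PM
    max_precipitation_mm_per_hr: float = 2.5   # light rain only
    min_visibility_m: float = 500.0            # excludes dense fog

    def permits(self, area: str, hour: int, precip: float, visibility: float) -> bool:
        return (
            area in self.service_areas
            and hour in self.hours
            and precip <= self.max_precipitation_mm_per_hr
            and visibility >= self.min_visibility_m
        )

odd = OperationalDesignDomain()
print(odd.permits("san_francisco", hour=14, precip=0.0, visibility=8000))  # True
print(odd.permits("san_francisco", hour=14, precip=8.0, visibility=300))   # False: heavy rain and fog
```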

Weathering Driverless Cars
Weather figures centrally as an environmental constraint in driverless cars’ ODDs because of its capacity to disrupt their sensor perception and pierce their protective envelopes. Fog, rain, dust, and snow can obscure cameras’ fields of vision, preventing them from detecting signs and obstacles. These weather conditions also reduce lidar resolution, and snow can even scatter lasers and create phantom obstacles in lidar fields. Studies of these impacts suggest that heavy rain and fog may increase camera and lidar perception error rates by up to 40% and that snow can increase lidar error rates by up to 50% (Zhang et al. 2023). As a result, even today’s most advanced driverless cars’ ODDs exclude weather conditions like heavy rain and fog; when cars encounter such weather, they are pre-programmed to pull over. In driverless cars’ world models, weather figures as noise obstructing the road’s signals—the markings and objects the developers deem navigationally relevant. Too much noise prevents driving just as stormy seas delay voyages.
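One way to read these figures is as weather-dependent multipliers on each sensor’s baseline error rate. The toy model below is an assumption-laden sketch rather than any vendor’s logic: it applies such multipliers and triggers the pre-programmed pull-over once the expected error for a primary sensor exceeds a tolerated bound.

```python
# Illustrative multipliers loosely based on the ranges reported in Zhang et al.
# (2023): heavy rain/fog up to +40% camera/lidar error, snow up to +50% lidar
# error. The exact numbers below are assumptions for demonstration.
WEATHER_ERROR_MULTIPLIER = {
    "clear":      {"camera": 1.0, "lidar": 1.0, "radar": 1.0},
    "heavy_rain": {"camera": 1.4, "lidar": 1.4, "radar": 1.05},
    "fog":        {"camera": 1.4, "lidar": 1.4, "radar": 1.05},
    "snow":       {"camera": 1.3, "lidar": 1.5, "radar": 1.1},
}

BASELINE_ERROR = {"camera": 0.02, "lidar": 0.01, "radar": 0.03}  # hypothetical
MAX_TOLERATED_ERROR = 0.025                                       # hypothetical bound

def should_pull_over(weather: str) -> bool:
    """Pull over if any primary sensor's expected error rate under the
    current weather exceeds the tolerated bound."""
    multipliers = WEATHER_ERROR_MULTIPLIER[weather]
    return any(
        BASELINE_ERROR[sensor] * multipliers[sensor] > MAX_TOLERATED_ERROR
        for sensor in ("camera", "lidar")
    )

print(should_pull_over("clear"))       # False
print(should_pull_over("heavy_rain"))  # True: camera error 0.028 > 0.025
```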

Rather than integrate weather into their cars’ world models and pursue more realistic renderings of their environments, developers instead attempt to filter the weather and remove its effects on sensor perception. As Waymo’s engineers explained in 2016, “we have to teach our cars to see through the raindrops and clouds…to properly detect objects.” The simplest method for “seeing through” weather employs physical design interventions like hydrophobic films that prevent the accumulation of water droplets on sensor lenses. A second method leverages weather’s differential effects on the car’s sensors: conditions like fog may obscure or disrupt camera and lidar detections, but they impact radar to a much lesser extent. During sensor fusion, the car’s computer can therefore overlay its camera and lidar views with radar to neutralize the effects of meteorological phenomena. A third method involves computationally de-noising sensor data. During data processing, a machine learning algorithm can detect and remove streaks of falling rain, thereby “de-raining” the data. It can similarly “de-fog” by recognizing and subtracting the waveforms characteristic of fog and haze (Zhang et al. 2023). Combined, these processes pursue a world model free of weather, homogenizing the road across space and time. In theory, this enables driving systems developed for clear weather to navigate foggy, rainy, and snowy environments. Driverless cars learn to “see” a world with perennially clear skies.
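Learned de-raining models are too large for a short example, so the sketch below uses a classical stand-in with the same logic: rain streaks are transient, so a per-pixel temporal median over a few consecutive frames suppresses them while the stable scene survives. A minimal sketch assuming grayscale frames and NumPy; it is not Waymo’s method.

```python
import numpy as np

def derain_temporal_median(frames: np.ndarray) -> np.ndarray:
    """Suppress transient noise such as rain streaks or snowflakes.

    frames: array of shape (T, H, W), a short window of consecutive grayscale
    frames. Rain streaks occupy any given pixel for only a frame or two, so
    the temporal median recovers the stable background. (A learned de-raining
    network plays this role in practice.)"""
    return np.median(frames, axis=0)

# Toy example: a static gray scene with sparse bright streaks added per frame
rng = np.random.default_rng(0)
scene = np.full((5, 64, 64), 0.3)            # 5 frames of a uniform gray scene
streaks = rng.random(scene.shape) > 0.97     # sparse transient "rain"
noisy = np.where(streaks, 1.0, scene)
clean = derain_temporal_median(noisy)        # close to the 0.3 background
print(float(np.abs(clean - 0.3).mean()))     # small residual error
```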
These attempts to control the weather for road navigation follow a history of similar efforts in the air and sea. In the early 20th century, for example, the US and Japanese militaries sought to clear the atmospheres around airports and battlefields to improve and expand their warplane operations. Yuriko Furuhata (2022) recounts how researchers in both countries conducted extensive observations of clouds, fog, and snow by sending balloons and cameras into the atmosphere. They analyzed their observations in search of the natural atmospheric conditions that form and dissipate such weather in hopes that by artificially reproducing such conditions, they could control it. As Howard T. Orville, chairman of Eisenhower’s Advisory Committee on Weather Control, explained, “before we can hope to control the weather, we must learn what causes weather” (Furuhata 2022, 35). In other words, controlling the weather requires knowledge of it.
Driverless cars’ filter processes may only control the weather in virtual renderings, but they too rely on extensive knowledge of weather to “see through” it. Their various filters all leverage some property of weather—water’s polarity, rain’s characteristic streaks, and fog’s typical waveform, for example. In part, developers’ knowledge of such properties comes from external meteorological research. Other knowledge is produced during the driverless car development process itself. Taking the place of weather stations and balloons, the cars encounter the weather and record its effects on their sensors. These effects typically appear as indexical signs: a sudden drop in the camera’s detection distance, for example, or changes in traction control dynamics, or even the mere activation of the car’s windshield wipers. Driverless cars, like human drivers, learn to interpret such signs through experience, but they get far more of it: whereas the average American adult drives 13,476 miles per year, Waymo cars drove 2.3 million miles in 2021. As Waymo’s Chief Safety Officer explained, “experience is the best teacher, and at Waymo, we are working to build the world’s most experienced driver.” As Waymo cars traverse American roads, their sensor and driving data informs machine learning models that continually grow, expanding the interpretive grounds upon which autonomous driving systems perceive their surroundings.
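A hedged sketch of how such indexical signs might be combined into a weather estimate. The signals (wiper state, traction-control activity, a drop in effective camera range) and thresholds are hypothetical stand-ins for whatever classifier a developer actually trains on fleet data.

```python
from dataclasses import dataclass

@dataclass
class IndexicalSigns:
    wipers_on: bool                # windshield wipers activated
    traction_events_per_km: float  # traction-control interventions per km
    camera_range_drop: float       # fractional drop in effective detection range

def infer_weather(signs: IndexicalSigns) -> str:
    """Read the weather off its traces in the car's own telemetry.
    A simple rule set standing in for a learned model."""
    if signs.traction_events_per_km > 0.5 and signs.wipers_on:
        return "snow_or_ice"
    if signs.wipers_on or signs.camera_range_drop > 0.3:
        return "rain_or_fog"
    return "clear"

print(infer_weather(IndexicalSigns(True, 0.0, 0.4)))   # rain_or_fog
print(infer_weather(IndexicalSigns(True, 1.2, 0.2)))   # snow_or_ice
```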
To learn about weather through experience, driverless cars must encounter it first-hand. As a result, climate structures Waymo’s training process and geographic expansion. In the US, the first driverless car testing sites were concentrated in the Southwest, namely in Phoenix, where aside from the occasional dust storm, sunny weather provided little meteorological noise. Only after the original systems were thoroughly developed could driverless car companies obtain permits in wetter locations like “snowy Novi, Michigan, rainy Kirkland, Washington, [and Waymo’s own] foggy San Francisco.” Testing across such a climatic variety is not simply a matter of demonstrating the cars’ capacities to a skeptical public; it is, more fundamentally, a matter of “teaching” the cars to drive across a wider range of conditions, broadening the technology’s operational design domain. Though rendered transparent in driverless cars’ world models, weather nonetheless remains visible in their histories and trajectories, determining where, when, and for whom the technology is available.
Weather also imprints itself in the cars’ physical and computational designs. As Paul Kockelman observes, filters “have to take on (and not just take in) features of the substances they sieve, if only as ‘inverses’ of them…By necessity, they exhibit a radical kind of intimacy” (2013, 36). Searching for indexical signs of weather, driverless cars’ filters index the weather in turn, even in its absence. Their hydrophobic films signify rain via its polarity. Likewise, their computational filters encode rain’s characteristic streak pattern just as they encode fog’s characteristic waveforms. When these inverse features are matched with their corresponding opposites, they create real-time indicators of weather’s presence. These filters do not simply destroy the noisy weather data; rather, they trap it at the interface between system and environment, holding it, as I will describe later, for non-navigational applications. Here, an important lesson: attending to autonomous systems’ world models alone risks missing key environmental entanglements.
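To make the trapping point concrete, a filter can be written to return not only the cleaned signal but also the weather it removed, holding that residue for later, non-navigational uses. A schematic sketch building on the de-raining example above; the decomposition and the crude “rain intensity” index are illustrative assumptions.

```python
import numpy as np

def filter_and_trap(frames: np.ndarray):
    """Split a short frame window into a weather-free background and the
    transient 'weather residue' removed from it.

    Rather than discarding the residue, the filter returns it: noise to the
    planner, but a record of the weather for other applications."""
    background = np.median(frames, axis=0)   # weather-free input to the world model
    residue = frames - background            # what the filter caught
    return background, residue

rng = np.random.default_rng(1)
frames = np.full((5, 32, 32), 0.3) + (rng.random((5, 32, 32)) > 0.97) * 0.7
clean, trapped = filter_and_trap(frames)
rain_intensity = float(np.abs(trapped).mean())  # a crude index of how hard it "rained"
print(rain_intensity)
```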
References
- Davies, Alex. 2021. Driven: The Race to Create the Autonomous Car. Simon and Schuster.
- Furuhata, Yuriko. 2022. Climatic Media: Transpacific Experiments in Atmospheric Control. Elements. Durham, NC: Duke University Press.
- Gell, Alfred. 1999. “Vogel’s Net: Traps as Artworks and Artworks as Traps.” In The Art of Anthropology: Essays and Diagrams, repr. ed., 187–214. London School of Economics Monographs on Social Anthropology 67. Oxford: Berg.
- Kockelman, Paul. 2013. “The Anthropology of an Equation: Sieves, Spam Filters, Agentive Algorithms, and Ontologies of Transformation.” HAU: Journal of Ethnographic Theory 3 (3): 33–61.
- Law, John. 1984. “On the Methods of Long-Distance Control: Vessels, Navigation and the Portuguese Route to India.” The Sociological Review 32 (S1): 234–63.
- SAE International. 2021. “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles.” J3016.
- Shiga, John. 2013. “Sonar: Empire, Media, and the Politics of Underwater Sound.” Canadian Journal of Communication 38 (3): 357–78.
- Sprenger, Florian. 2022. “Microdecisions and Autonomy in Self-Driving Cars: Virtual Probabilities.” AI & SOCIETY 37 (2): 619–34.
- Von Uexküll, Jakob. 1982. “The Theory of Meaning.” Semiotica 42 (1).
- Zhang, Yuxiao, Alexander Carballo, Hanting Yang, and Kazuya Takeda. 2023. “Perception and Sensing for Autonomous Vehicles under Adverse Weather Conditions: A Survey.” ISPRS Journal of Photogrammetry and Remote Sensing 196 (February): 146–77.