Spencer Kaplan

Weathering Driverless Cars (Part 2)

This post compares driverless cars’ classification systems to centuries-old cloud atlases. Despite filtering clouds out of their perception of the road, driverless cars still sense the road atmospherically: that is, in the same manner that meteorologists read the clouds.

This is the second part of a three-part series about driverless car development and meteorology. If you haven’t yet, read part 1 first. I adapted these posts from my writing in John Durham Peters’ Elemental Media seminar.

Atmospheric Roads

For historian of science Lorraine Daston, clouds belong to “a world of particulars so particular that no categories can parse them and no regularity tame them,” but as she demonstrates, cloud atlases produce a stable taxonomy by directing observations toward a small set of distinguishable details (2016, 52). For example, the 1896 International Cloud Atlas isolated certain details corresponding to different cloud types and labelled them in exemplary photographs. It created “grids for the calibration of perception” that could train observers to perceive clouds in a coordinated manner regardless of their position in time and space (Daston 2016, 59). Driverless cars learn to perceive clouds and other weather conditions in much the same way. From innumerable observations, their models parse characteristic features—streaks, scattering patterns, and waveforms—from which they can abduce the presence of corresponding weather. Driverless cars’ weather perception, like meteorological cloud observation, employs a procedure of dramatic feature reduction according to a standard set of “observation protocols” (Daston 2016, 63).

Daston’s account of cloud observation resonates with driverless cars’ perception processes more broadly. As Daston writes, “a classification depends on some degree of abstraction from the blooming, buzzing world of particulars, accentuating some significant features and muting others” (2016, 48). Autonomous driving systems apply this approach to all elements of the road. They perceive road surfaces, for example, according to their characteristic line markings. Likewise, they perceive cars, bikes, trucks, and pedestrians according to a small set of distinguishing details—arrangements of wheels or limbs, for instance. And the smaller the set of details, the better: less data to compute means quicker reaction times—as long as the resulting inferences are correct.
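
To make that reduction concrete, here is a deliberately toy sketch of how a small set of distinguishing details might suffice to sort road users. The features, thresholds, and function name are my own illustrative inventions, not any production system’s logic:

```python
# Illustrative only: abduce an object's category from a handful of
# distinguishing details, in the spirit of a cloud atlas's reduced
# observation protocol. Features and thresholds are hypothetical.

def classify_road_user(wheels: int, height_m: float, speed_mps: float) -> str:
    """Classify from a deliberately small feature set."""
    if wheels == 0:
        return "pedestrian"
    if wheels == 2:
        return "bicycle" if speed_mps < 10 else "motorcycle"
    if wheels >= 4:
        return "truck" if height_m > 2.5 else "car"
    return "unknown"

print(classify_road_user(wheels=2, height_m=1.7, speed_mps=5.0))  # bicycle
```

Fewer features mean less computation per frame, which is where the quicker reaction times come from; the cost, as above, is that every inference rides on the adequacy of those few details.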

This approach applies not just to elements of the road but to the entire road environment, which is reduced in the cars’ world models to the subset of features deemed relevant to navigation. As Sprenger describes these models, “the worlds that are modeled are not representations but, rather, they are necessarily constructions of what may be relevant to operate under given conditions of uncertainty” (2022, 629). As the visualization below shows, weather only features when it creates an obstacle like a snowbank; in all other cases it is reduced from view. Similar treatment might reasonably apply to other elements in the environment whose physical presence may not significantly impede driving—say, falling leaves or a floating shopping bag. Driverless cars perceive the road just as observers trained by cloud atlases perceive the heavens.

(Left) Waymo’s road visualization system; (Right) “Altocumulus” in the 1930 International Atlas of Clouds and of States of the Sky
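
A toy sketch of that relevance filtering might look like the following; the object categories, and the rule that snow counts only once it obstructs a lane, are my own illustrative assumptions, not any vendor’s actual logic:

```python
# Hypothetical sketch: a world model keeps only what is deemed
# navigation-relevant; weather enters only once it becomes an obstacle.

NAVIGATION_RELEVANT = {"car", "truck", "bicycle", "pedestrian", "lane_marking"}

def build_world_model(detections):
    """Reduce raw detections to the subset relevant for planning."""
    model = []
    for det in detections:
        if det["kind"] in NAVIGATION_RELEVANT:
            model.append(det)
        elif det["kind"] == "snow" and det.get("obstructs_lane"):
            # A snowbank blocking the lane counts as an obstacle...
            model.append({**det, "kind": "obstacle"})
        # ...while ambient snowfall, falling leaves, and drifting bags
        # are reduced from view entirely.
    return model

scene = [
    {"kind": "car", "pos": (12.0, 3.1)},
    {"kind": "snow", "pos": (8.0, 0.5), "obstructs_lane": True},
    {"kind": "snow", "pos": (5.0, 2.0)},         # falling snow: filtered out
    {"kind": "plastic_bag", "pos": (6.5, 1.0)},  # filtered out
]
print(build_world_model(scene))  # keeps only the car and the snowbank
```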

The cloudiness, so to speak, of driverless car sensor perception is further reflected in its technological lexicon. Here I refer specifically to lidar, which emits laser pulses and records them as pointillist maps of its surroundings: point clouds, or “lidar clouds.” These clouds resemble their aqueous counterparts and function as similar kinds of signs. Besides signaling the presence of moisture, aqueous clouds index otherwise invisible dynamics in the atmosphere. “In meteorology,” writes Hsin-Yuan Peng, “cryptic signal appears as noise” (2022, 103).

Lidar clouds signify in the same way, indexing the road’s dynamic manifold of surfaces—surfaces that otherwise elude perception through other means like image recognition. Just as lidar is cloudy, the road is atmospheric: dynamic and perceived indexically. It, like the air, can only be read technologically and via reductions of some kind. In turn, the system’s filtering and perception algorithms may very well be cloud atlases. They instruct the car on how, quite literally, to read the lidar clouds through reduction, including, perhaps ironically, the reduction of aqueous clouds. We can modify Daston’s writing to describe lidar clouds: “observers had to learn to see the [road] in the same way, to divide up the continuum of [lidar] cloud forms at the same points, to connect the same words to the same things…descriptions of [lidar] cloud types functioned as templates and frames for observation” (2016, 52). Driverless cars render the road anew, giving new valence to McLuhan’s assertion: “Depending on the type of the vehicle-medium the nature of the road medium alters greatly” (McLuhan 1960; quoted by Peters 2015, 104). Driverless cars may de-weather the road, but they nonetheless render it atmospherically.
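
For readers curious what reading lidar clouds through reduction can look like computationally, here is a minimal sketch of radius-based outlier removal, the family of filters (a dynamic variant of which has been proposed in lidar de-snowing research) that discards sparse, isolated returns such as falling snow. The thresholds and synthetic data are illustrative assumptions, not any deployed system’s parameters:

```python
import numpy as np

def remove_sparse_returns(points: np.ndarray, radius: float = 0.5,
                          min_neighbors: int = 3) -> np.ndarray:
    """Keep only points with enough neighbors within `radius`.

    Isolated returns (falling snow, spray) tend to have few neighbors;
    solid surfaces (cars, curbs) return dense clusters. Brute-force
    O(n^2) distances for clarity; real systems use spatial indexes.
    """
    diffs = points[:, None, :] - points[None, :, :]     # (n, n, 3)
    dists = np.linalg.norm(diffs, axis=-1)              # pairwise distances
    neighbor_counts = (dists < radius).sum(axis=1) - 1  # exclude self
    return points[neighbor_counts >= min_neighbors]

rng = np.random.default_rng(0)
car = rng.normal(loc=(10.0, 0.0, 1.0), scale=0.3, size=(200, 3))  # dense cluster
snow = rng.uniform(low=0.0, high=20.0, size=(100, 3))             # sparse returns
cloud = np.vstack([car, snow])
filtered = remove_sparse_returns(cloud)
print(len(cloud), "->", len(filtered))  # mostly the dense "car" survives
```

The point, for our purposes, is that the filter reads the lidar cloud exactly as an atlas-trained observer reads the sky: by deciding in advance which particulars count.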


References

  • Daston, Lorraine. 2016. “Cloud Physiognomy.” Representations 135 (1): 45–71.
  • Peng, Hsin-Yuan. 2022. “Intervals in Relief: Abe Masanao’s Stereoscopic Clouds.” Representations 159 (1): 90–121. https://doi.org/10.1525/rep.2022.159.4.90.
  • Peters, John Durham. 2015. The Marvelous Clouds: Toward a Philosophy of Elemental Media. Chicago: University of Chicago Press.
