The Brains Behind How Digital Infrastructure Solves Chronic Airport Congestion, Part Deux

In a previous post, we described our pioneering analysis of a real-time digital transportation intervention at one of the US's most prominent airports: Seattle-Tacoma International (SEA). Their landside team operates a digital messaging signboard that encourages incoming drivers to divert from Arrivals to Departures, or vice versa, when either is heavily congested. Currently the process is a manual one: the human operator of the signboard decides whether to recommend a diversion based on a video feed of the roadways.

When asked to investigate the efficacy of this practice, we found that it did divert enough traffic to reduce congestion (saving an estimated five vehicle-hours of travel time per day), and that the level of success varied by time of day and the direction of the diversion. (Our comprehensive findings were published in the Transport Findings journal last fall.)

At this point, one might be tempted to declare SEA’s intervention modestly successful and call it a day. But there’s more to the story.

During our analysis, we noticed several time periods where either the Arrivals or Departures roadway was congested, but there was no record of a diversion. Why?

Congestion tends to build up and dissipate over long stretches of time (rather than instantaneously, or due to the presence or behavior of a handful of individual vehicles), which makes judging it from a video feed error-prone and slow. Even when the signboard is actively and successfully reducing congestion, the effect may not be apparent to a human observer.

The Opportunity

Given that underlying numerical data reflects the waxing and waning of congestion far more reliably than a video feed, could we design a data-driven decision-making algorithm that suggests diversions closer to optimally, where a human might understandably struggle?

The answer is a resounding yes!

With access to data from traffic sensors, an algorithm can detect more subtle indicators of congestion at any given moment. Moreover, by analyzing large amounts of historical traffic data, it can then predict the onset of congestion and intervene before it occurs.

The same interdisciplinary team crafted an approach with three components, all built upon foundational concepts from statistics and optimization:

1. Measurement
A model that infers the level of congestion

2. Forecasting
A model that predicts the state of traffic

3. Optimization
A framework for decision-making that leverages both of the above models to compute the best sequence of diversions for any given scenario

1. Measurement: A Model That Infers the Level of Congestion

First, our algorithm needed to understand when the Arrivals or Departures roadway is congested, and to what extent. To do so, we needed to define “congestion” precisely and mathematically.

We started with the Fundamental Diagram of Traffic Flow, a commonly used statistical relationship which, at a high level, states that for any given number of vehicles moving in and out of an area, traffic moves at a corresponding range of normal speeds; if the speed drops below that range, it crosses the “critical speed threshold”.

But statistical guidelines like this don’t work in a vacuum—they must be applied to real-world places in order to achieve real-world results. In other words, we needed to establish a critical speed threshold specific to SEA’s Arrivals roadway, and another for its Departures roadway. In the absence of ground truth, we leaned on “unsupervised learning”, a body of statistical techniques that enables us to glean patterns from unlabeled data. We applied a particular method called cluster analysis to the fundamental diagrams of Arrivals and Departures, and were able to infer the critical speed threshold of each roadway.

The model then compares each roadway's current traffic speed against its threshold. If the current speed falls below the threshold, the roadway is considered congested, and the larger the gap, the more congested it is. This metric, the ratio of current speed to critical speed, is called the critical speed ratio: a numerical tipping point that defines what is and is not congestion.
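To make the idea concrete, here is a minimal sketch in Python. The speed readings, cluster count, and units are all invented for illustration; SEA's actual thresholds came from cluster analysis of its own sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical speed readings (mph) mixing two regimes: free-flowing
# traffic and congestion. These numbers are illustrative only.
speeds = np.concatenate([rng.normal(28.0, 3.0, 200),   # free-flow regime
                         rng.normal(9.0, 2.0, 80)])    # congested regime

# Simple 1-D k-means with k=2: alternate assignments and centroid updates.
centers = np.array([speeds.min(), speeds.max()])
for _ in range(20):
    labels = np.abs(speeds[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([speeds[labels == k].mean() for k in range(2)])

# The critical speed threshold separates the two inferred clusters.
critical_speed = centers.mean()

def critical_speed_ratio(current_speed):
    """Below 1.0 means congested; the lower the ratio, the worse the congestion."""
    return current_speed / critical_speed
```

With unlabeled data like this, the two cluster centers land near the free-flow and congested regimes, and the threshold between them plays the role of each roadway's critical speed.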

2. Forecasting: A Model That Predicts the State of Traffic

Of course, no forecasting model is perfect; our aim was to create one that enables our algorithm to make the correct decision often enough to meaningfully decrease congestion in ever-evolving conditions, and never to make congestion worse on either roadway.

This challenge is precisely one of system identification, the field that constructs mathematical models of complex, time-varying systems (such as airport roadways) from observed data. One of the most popular and time-tested model classes is the linear dynamical system. The details can be quite involved, but the key idea is that the values of what we measure in the system (in this case, vehicle speed and throughput) in the next time period depend linearly on the values in the current time period. In other words, the more you understand about what's already happening, the better you can predict what happens next.

We applied a fundamental method called least squares estimation to historical data at SEA and learned the parameters of the system: when congestion levels changed on the Arrivals and Departures roadways, what conditions typically led to those changes, and what tended to happen as a result.
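As an illustration of this step, the sketch below simulates a two-variable linear dynamical system (speed and throughput) and recovers its dynamics matrix with least squares. The dynamics matrix and noise level are invented; SEA's parameters were fit to its own historical sensor data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical state: [speed, throughput] for one roadway. This dynamics
# matrix is made up for the example, not SEA's learned model.
A_true = np.array([[0.90, 0.05],
                   [0.10, 0.85]])

# Simulate 60 time periods of noisy observations: x[t+1] = A x[t] + noise.
x = np.zeros((60, 2))
x[0] = [25.0, 40.0]
for t in range(59):
    x[t + 1] = A_true @ x[t] + rng.normal(0.0, 0.1, size=2)

# Least squares estimation: find A_hat minimizing ||X_next - X_curr @ A^T||.
X_curr, X_next = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X_curr, X_next, rcond=None)[0].T

print(np.round(A_hat, 2))  # should be close to A_true
```

Once `A_hat` is learned, forecasting is just repeated multiplication: applying it once predicts the next time period, applying it twice predicts two periods ahead, and so on.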

3. Optimization: A Framework for Decision-making

Finally, once our algorithm was armed with parameters for current and predicted congestion, it would need to compute the best-possible sequence of diversions.

We started with model predictive control, a fundamental technique from the sequential decision-making literature that’s been used for decades by chemical plants, power systems, industrial robots, and the like. Model predictive control enabled us to optimize for the current time interval while taking into account future time intervals (through our forecasting model).

We also needed to establish optimization criteria to steer the algorithm’s decisions toward our ultimate goals. The criteria were two-fold:

  1. Minimize congestion across both roadways, so that reducing congestion on one roadway doesn't increase it on the other.

  2. Divert traffic only for as long as required, and no longer, to preserve the signboard's influence on driver behavior. If drivers think diversions are unwarranted given the level of congestion, they're less likely to follow them.
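For intuition, here is a toy controller in the spirit of model predictive control: it enumerates every diversion sequence over a short horizon, simulates each with a simple linear model, and picks the sequence minimizing total congestion (criterion 1) plus a small penalty per diversion (criterion 2). The dynamics, costs, and one-directional diversion are simplifications, not SEA's fitted model.

```python
import itertools
import numpy as np

# Toy two-roadway model: state = [Arrivals congestion, Departures congestion].
# All numbers here are illustrative.
A = np.array([[0.95, 0.00],
              [0.00, 0.90]])

def step(x, divert):
    """Advance one time period; a diversion shifts load from Arrivals to Departures."""
    x = A @ x
    if divert:
        shift = 0.2 * x[0]
        x = x + np.array([-shift, shift])
    return x

def mpc(x0, horizon=4, divert_cost=0.1):
    """Enumerate all 2^horizon diversion sequences and return the cheapest."""
    best_seq, best_cost = None, np.inf
    for seq in itertools.product([0, 1], repeat=horizon):
        x, cost = x0.copy(), 0.0
        for u in seq:
            x = step(x, u)
            # Total congestion on both roadways, plus a penalty per diversion
            # so the signboard isn't used longer than needed.
            cost += x.sum() + divert_cost * u
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq

# Arrivals heavily congested: the controller begins diverting immediately.
plan = mpc(np.array([10.0, 1.0]))
print(plan)  # only the first action is executed; then we re-plan with fresh data
```

As in full model predictive control, only the first action of the best sequence is executed; the controller then re-solves at the next time period with updated measurements, which is what lets it adapt to ever-evolving conditions.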

Our three algorithm components could then come together as a virtuous cycle:

[Diagram: SEA signage decision cycle]

Taking the Algorithm for a Test-Drive

Once we put the three components together, we needed to determine if our algorithm was any good. We presented it with 50 real-world congestion scenarios from SEA's data in which no diversion had been recorded, and evaluated its decisions.

These simulations showed the algorithm could have reduced travel times by up to two-thirds and avoided up to 360 kilograms of CO2 per hour in idle-vehicle emissions, a win-win for both efficiency and sustainability goals.

With such encouraging results in hand, we recently submitted our second paper to the 2023 IEEE Intelligent Vehicles Symposium.

What’s Next: A Real-World Experiment?

For the moment, these counterfactual simulations are the closest we can come to estimating how our algorithm would work in the real world, and the degree to which it's more effective than a noisy video feed and subjective human judgment. But critically, they've validated our approach and, more importantly, are building trust and confidence among the airport staff who might use digital infrastructure like this to help manage a complex physical infrastructure through which tens of thousands of vehicles pass every day.

The latest news: the landside team at SEA is excited about our work, and we're currently exploring a pilot project.

Thank you to our wonderful collaborators from Pacific Northwest National Lab for their partnership, and to the landside team at SEA for this opportunity.

Shushman Choudhury, Lead Research Scientist

Shushman Choudhury is Lacuna’s Lead Research Scientist and specializes in Artificial Intelligence techniques for real-time digital policy. He has a Ph.D. in Computer Science from Stanford University, where he developed optimization and decision-making algorithms for intelligent transportation systems. He also has an MS in Robotics from Carnegie Mellon University.

https://www.linkedin.com/in/shushman-choudhury-b29049139/