
Say you work in an airline’s revenue management department.  For key flights departing on each of the past three Fridays, high fare bookings came in 25%, 30%, and 45% above the model forecast, respectively.  How many more seats will the model hold for full fare bookings next Friday?

  1. Up 33% (the average of the demand increases over the prior three Fridays)
  2. Up 10% (the 45% is considered an outlier by the model and the model factors in more history)
  3. No change (the fare premium over the next higher fare is too small to save inventory for)
  4. Down (the model needs a longer time horizon to adjust demand but picked up on higher volatility – which drives lower inventory)



Answer:  Each is a possible model solution, depending upon various assumptions and parameters.

Dynamic pricing is highly complex – and sometimes not intuitive.  A significant increase in observed demand for a period may not actually drive a significant increase in forecast demand.  And even a large increase in forecast demand for a higher fare often has little or no impact on the seats set aside for that demand.  Such results are examples of why airline dynamic pricing (“revenue management”) is often called a “black box.”
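To see why a higher forecast does not automatically mean more protected seats, consider a minimal two-fare sketch based on Littlewood’s rule, the classic single-leg heuristic.  This is an illustration only: the fares, demand figures, and the normal-demand assumption below are made up, and real RM systems are far more elaborate.

```python
from statistics import NormalDist

def littlewood_protection(f_high, f_low, mu, sigma):
    """Seats to protect for the high fare under Littlewood's rule.

    Protect Q* seats where P(high-fare demand > Q*) = f_low / f_high,
    assuming normally distributed high-fare demand with mean mu and
    standard deviation sigma.  (Illustrative sketch, not a production RM model.)
    """
    ratio = f_low / f_high
    q = NormalDist(mu, sigma).inv_cdf(1 - ratio)
    return max(0, round(q))

# Small fare premium plus uncertain demand: no seats protected at all...
print(littlewood_protection(400, 380, mu=20, sigma=15))  # -> 0
# ...and a 10% increase in forecast demand still changes nothing:
print(littlewood_protection(400, 380, mu=22, sigma=15))  # -> 0
# A larger fare premium, by contrast, does justify protecting seats:
print(littlewood_protection(400, 250, mu=20, sigma=15))  # -> 15
```

Under these (hypothetical) numbers, the fare premium and demand uncertainty, not the size of the forecast increase, decide whether any seats are held back – which is exactly the kind of non-intuitive outcome the quiz above illustrates.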

Airlines could publish all of the model assumptions, equations and coefficients.  But even then it is not likely to be clear what the model is doing.  Rather than a listing of “assumptions” or “regression model results”, analysts benefit more from “what-ifs”: a more fundamental understanding of the implications of those assumptions, regression models and optimization routines.

To evaluate macro strategies – for example, O&D revenue management versus leg-based control, or the implications of matching competitive fares – the airline industry can consult PODS (“Passenger Origin-Destination Simulation”), a simulation tool developed by Boeing and MIT researchers.  PODS has been used to measure the revenue value of RM in general (4-6%) and the incremental value of O&D RM (2-3%).  It has also shown that simply matching a competitor’s fares – ignoring a sophisticated RM model – is revenue negative.  PODS is useful for testing various RM models and enhancements.

But airlines would better understand their own systems if they could apply the same “what-if” approach to more micro, airline- or market-specific factors.

For example, analysts may intervene in the models when demand appears particularly strong or weak.  They may raise demand somewhat arbitrarily by 10%, without fully understanding the implications of doing so.  Instead of “flying blind,” analysts can gain insight by asking “what if”:

  • What if demand across all fare classes is 10% lower?  Will that change the current model seat allocations for the highest fares (would it, for example, change a model recommendation to hold 15% of the cabin for high fare passengers)?
  • What if demand for the higher fare classes increases by 10%?  How much more inventory will be set aside for high fare classes?
  • What if observed demand increases by 10%?  How quickly will the model capture the increase and build it into the forecasts? Then, in turn, how quickly will that impact inventory allocations?
  • What if the Marketing department attempts to stimulate demand with a low sale fare?  What buy-down is expected based on the model’s estimate of price elasticity?
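The third question – how quickly a real demand increase flows into the forecast – can be made concrete.  If one assumes the forecaster uses simple exponential smoothing (a common building block, though any given RM system may differ), the speed of adjustment is governed entirely by the smoothing constant.  A minimal sketch, with an illustrative alpha and made-up demand numbers:

```python
def smoothed_forecast(observations, alpha, initial):
    """Simple exponential smoothing: each new observation moves the
    forecast a fraction alpha of the way toward what was observed."""
    f = initial
    history = []
    for obs in observations:
        f = f + alpha * (obs - f)
        history.append(f)
    return history

# Demand steps up 10% (from 100 to 110 bookings) and stays there.
fcsts = smoothed_forecast([110.0] * 8, alpha=0.2, initial=100.0)
# After one period the forecast has closed only 20% of the gap (102.0);
# even after eight periods, roughly 17% of the step remains unabsorbed.
print([round(f, 1) for f in fcsts])
```

With alpha = 0.2, a sustained 10% demand increase takes many booking periods to be fully reflected – and only then does it begin to move inventory allocations, which is why observed and forecast demand can diverge for weeks.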

Another way to look at “what if” is with respect to assumptions inherent in the model.

  • What are the implications of the existing fare class structure?  Are the existing fare classes sufficiently distinct that the model is able to forecast each reasonably accurately?  Is the fare differential between classes large enough to justify setting aside seats for higher classes?  Or does the model collapse the fare classes into far fewer actual groupings?
  • What if there were fewer fare classes?  Does forecast accuracy increase dramatically with fewer classes?  How does that change inventory allocations?
  • What are the implications of existing competitive fare management?  How often do competitive fare responses override the model recommendations?  What is the estimated revenue gained or lost through competitive fare actions?
  • What is the implication of existing elasticity assumptions (projected buy-down)?  How much demand for the lower classes is moved to higher classes based on the buy-down assumption?  How does this impact the model’s allocation of seats for high fare demand?
  • How does the model respond to sudden spikes in demand?  What constitutes “outliers” that are ignored by the model?  Or, alternatively, is demand uncertainty increased so much that demand is deemed “unforecastable”?
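The last question can be probed directly.  One common screening approach – shown here purely as an illustration, not as any particular system’s method – is a z-score test: an observation too many standard deviations from recent history is treated as an outlier and replaced by the historical mean.  Under that assumption, the 45% spike from the opening quiz would simply be ignored (all numbers below are invented):

```python
from statistics import mean, stdev

def screen_outlier(history, obs, z_max=2.0):
    """Return obs if it lies within z_max standard deviations of the
    historical mean; otherwise fall back to the mean, i.e. the spike
    is treated as an outlier and does not enter the forecast."""
    mu, sd = mean(history), stdev(history)
    return obs if abs(obs - mu) <= z_max * sd else mu

history = [100, 104, 98, 102, 101, 99, 103, 97]   # recent Friday bookings
print(screen_outlier(history, 145))  # 45% spike rejected -> 100.5 (the mean)
print(screen_outlier(history, 104))  # ordinary variation kept -> 104
```

Note the flip side: a tight threshold keeps the forecast stable but can also make genuine demand surges look “unforecastable” – exactly the trade-off the bullet above asks analysts to interrogate.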

“What if” should be a standard tool within RM systems.  It can help analysts better understand the models, and it can help guide effective intervention.

Learn more about the tools that enable your airline to develop business rules for dynamic pricing.

Read the blog