• Published: 11 Jun 2018

  • Filed under: vuca, economy, mexico, elections

VUCA index for Mexico

There is an acronym coined by the U.S. Army War College to refer to post-Cold-War conditions: VUCA (Volatile, Uncertain, Complex and Ambiguous). It has stuck in the business world because that is increasingly how the world is seen, especially since the election of Trump.

As VUCA becomes the norm, so do ways of adapting to such a world. Increasingly, business books are being updated with “anti-VUCA” strategies built around a military-style framework.

But even though it seems pretty straightforward, I have found that there is no real quantitative method to measure VUCA conditions.

This is a first attempt to pin down some sort of numeric value for VUCA conditions. I build the index for Mexico, since I understand the available data much better than that of any other country.

The index

Predictably, I divided the index into four subindices, one for each dimension we want to measure. The code follows this structure.

First let’s load some packages (the entire script can be found on GitHub):

library(banxicoR)  # Bank of Mexico API
library(inegiR)    # INEGI API
token_inegi <- "xxxx" # your own API token
library(dplyr)     # data wrangling (left_join, mutate, etc.)
library(tidyquant) # tq_mutate for the running mean
library(ggplot2)
library(eem)
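
If you don't have the packages yet, something along these lines should work (a sketch: banxicoR, inegiR, dplyr, tidyquant and ggplot2 come from CRAN; I assume eem is installed from its GitHub repository, so adjust the repository name if yours differs):

# CRAN packages
install.packages(c("banxicoR", "inegiR", "dplyr", "tidyquant", "ggplot2"))

# eem (ggplot2 themes and colors); assumed to be installed from GitHub
devtools::install_github("eflores89/eem")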

Volatility

I define volatility as the range of the Mexican stock market, expressed as a percentage of the minimum close. This means that when the range is large, there is more perceived volatility. A better measure would be the daily standard deviation, but I have not been able to find a programmable, R-friendly way to obtain that data. Here, I obtain the data from INEGI.

vuca_v <- function(token_inegi){
  
  # monthly lows and highs of the Mexican stock market, from INEGI
  lows <- inegi_series(series = inegi_code("15321"), 
                       token = token_inegi)
  highs <- inegi_series(series = inegi_code("15322"), 
                        token = token_inegi)
  
  # range (high minus low) as a percentage of the low
  d <- data.frame("v" = (highs$Values - lows$Values)/lows$Values*100, 
                  "dates" = highs$Dates)
  d
}

Uncertainty

For uncertainty, I posit that when financial analysts have a hard time pinning down a prediction for next year, uncertainty must be high. So I take the monthly survey of financial experts conducted by the Bank of Mexico and look at the standard deviation of their exchange rate estimates. In other words, when financial experts show a large standard deviation in their exchange rate (peso versus dollar) predictions, uncertainty is high. The standard deviation is measured as a percentage of the closing exchange rate that month.

vuca_u <- function(){
  # standard deviation of analysts' exchange rate forecasts (Banxico survey)
  std_dev <- banxico_series(series = "SR14880")
  # peso-dollar exchange rate
  fx <- banxico_series(series = "SF17909")
  
  names(fx) <- c("dates", "fx")
  names(std_dev) <- c("dates", "std")
  
  # standard deviation as a percentage of the exchange rate that month
  std_dev <- std_dev %>% 
    left_join(., fx) %>%
    mutate("u" = std/fx*100) %>%
    select(c("dates", "u"))
  
  std_dev
}

Complexity

Complexity is, well, complex to measure. I looked to the Observatory of Economic Complexity, which does a great job measuring macroeconomic complexity, but its merchandise trade data is reported too slowly for my needs. For example, Mexico is more than a year behind in reporting merchandise trade at a disaggregated level.

So I thought about the basic needs of a firm.

Intuitively, a firm faces a complex world when its inputs are not easily predictable and it has to constantly adapt. This can be measured in many ways, but it seems reasonable to say that prices are one of the most important components. So, I took data from INEGI's National Producer Price Index to measure complexity. I define complexity as the standard deviation of monthly inflation across all the components of producer prices. In simpler terms, if all the components of producer prices move by the same amount, complexity is low; if there are large swings across components, complexity is high.

There are 15 components, so the code downloads each one in a loop:

vuca_c <- function(token_inegi) {
  
  # INEGI series codes for the 15 components of the producer price index
  codes <- c("364705", "364710", "364711", "364714", "364717",
             "364739", "364749", "364755", "364758", "364760",
             "364763", "364765", "364769", "364772", "364775")
  
  # download each component and name its value column s1, s2, ..., s15
  series <- lapply(seq_along(codes), function(i) {
    s <- inegiR::inegi_series(series = inegi_code(codes[i]), 
                              token = token_inegi)
    names(s) <- c(paste0("s", i), "dates")
    s
  })
  
  # join all components by date and take the standard deviation
  # across components for each month
  d <- Reduce(function(x, y) left_join(x, y, by = "dates"), series) %>% 
    rowwise() %>%
    mutate("c" = sd(c(s1, s2, s3, s4, s5, s6, s7, s8,
                      s9, s10, s11, s12, s13, s14, s15))) %>%
    select(c("dates", "c"))
  
  d
}

Ambiguity

I define this component as simply not being sure whether to take a decision. Not knowing whether to invest is different from being sure not to invest. So I look at the same survey of financial experts conducted by the Bank of Mexico, take the question on investing in the future, and extract only the percentage of “not sure” responses.

vuca_a <- function(){
  # percentage of "not sure" responses about investing (Banxico analyst survey)
  not_sure <- banxico_series("SR15035")
  names(not_sure) <- c("dates", "a")
  not_sure
}

Putting it all together

Each part of the index is standardized to 100 at the first data point and given the same weight. I am aware that this is controversial, but I could not justify doing it any other way (i.e. why give more weight to uncertainty than to complexity?).

vuca <- function(token_inegi, scales = c(0.25, 0.25, 0.25, 0.25)){
  # join the four components by date
  df <- vuca_v(token_inegi = token_inegi) %>% 
    left_join(., vuca_u()) %>%
    left_join(., vuca_c(token_inegi = token_inegi)) %>%
    left_join(., vuca_a()) 

  # keep only the months where all four components are available
  df <- df[complete.cases(df), ]
  
  df <- as_tibble(df) %>% 
    # standardize each component to 100 at the first data point
    mutate("v_ind" = v/first(v)*100, 
           "u_ind" = u/first(u)*100, 
           "c_ind" = c/first(c)*100, 
           "a_ind" = a/first(a)*100) %>%
    # weighted sum of the four sub-indices (equal weights by default)
    mutate("vuca" = v_ind*scales[1] + u_ind*scales[2] + c_ind*scales[3] + a_ind*scales[4]) %>%
    # running 12-month average of the aggregate index
    tq_mutate(select = vuca, 
              mutate_fun = runMean, 
              n = 12, col_rename = "vuca_12m")
  
  df
}

# Downloading the vuca index
d <- vuca(token_inegi = token_inegi)

# graph
ggplot(d, aes(x = dates, y = vuca)) + 
  geom_line(color = eem_colors[1]) + 
  geom_smooth(color = eem_colors[3]) +
  eem::theme_eem() + 
  labs(title = "VUCA Index for Mexico", 
       x = "Dates" , y = "Index (100 = 2010/07)")

VUCA Index for Mexico

Interestingly enough, once we graph this index, we find that there have been two large spikes (January 2015 and November 2011) and a recent sustained decrease.

What can be said about these events? In January 2015 the big driver was complexity, which means there might have been a large correction in producer prices, with a standard deviation almost four times larger than the initial reading. As for November 2011, the large shift occurred in the stock market, as volatility flared, probably due to the death of Interior Minister Francisco Blake Mora.
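
To check which component drives a given spike, you can compare the sub-indices for the months in question. A minimal sketch, assuming the d data frame returned by vuca() above and that dates is a Date column:

# compare the four sub-indices around the January 2015 spike
d %>%
  filter(format(dates, "%Y-%m") %in% c("2014-12", "2015-01", "2015-02")) %>%
  select(dates, v_ind, u_ind, c_ind, a_ind, vuca)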

As for the recent decrease, it might seem counterintuitive considering the daily flurry of news about Trump and the NAFTA negotiations, but it also makes sense.

A low VUCA does not mean a thriving economy. The economy can be in the midst of a recession, but if pretty much everyone understands the situation, VUCA is low. In fact, what has been driving the index down these past months is a low ambiguity component, which means analysts know whether or not they should invest.
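
A quick way to check this is to plot the four sub-indices together and see which one has been falling. A sketch, again assuming the d data frame from above and reshaping with tidyr's gather():

library(tidyr)

# reshape the sub-indices to long format and plot each component over time
d %>%
  select(dates, v_ind, u_ind, c_ind, a_ind) %>%
  gather(key = "component", value = "index", -dates) %>%
  ggplot(aes(x = dates, y = index, color = component)) + 
  geom_line() + 
  eem::theme_eem() + 
  labs(title = "VUCA sub-indices for Mexico", 
       x = "Dates", y = "Index (100 = first data point)")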

Thus, the relatively low VUCA we are witnessing might seem to suggest that the market has come to a consensus around who will be the next President and what will happen to NAFTA.