The last six months have, more than ever, emphasized the importance of knowing what is coming. In this article, we take a closer look at forecasting. Forecasting can be applied to a range of HR-related topics. We will specifically examine how forecasting models can be deployed in R and end with an example analysis of the rise in popularity of the term “people analytics”.

The goal is to know what’s coming…

Predictions come in different shapes and sizes. There are many Supervised Machine Learning algorithms that can generate predictions for outcomes such as flight risk, safety incidents, performance, engagement, and personnel selection. These examples represent the highly popular realm of “Predictive Analytics”.

However, a less mainstream topic in the realm of prediction is that of “Forecasting”, often referred to as Time Series Analysis. In a nutshell, Forecasting uses values observed over time (e.g., the closing price of a stock over 120 days) to predict the likely value in the future.

The main difference between predictive analytics and forecasting is best characterized by the data used. Generally, forecasting relies solely upon the historical values of a measure, and the patterns identified therein, to predict that measure's future values.

An HR-related example would be using historical rates of attrition in a business or geography to forecast future rates of attrition. In contrast, predictive analytics uses a variety of additional variables, such as company performance metrics, economic indicators, employment data, and so on, to predict future rates of turnover. Depending upon the use case, there is a time and a place for both approaches. 

In the current article, we focus on forecasting and highlight a new library in the R ecosystem called ModelTime. ModelTime enables the application of multiple forecasting models quickly and easily while employing a tidy framework (if you are not familiar with R, don't worry about this).
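For readers who want to follow along, all of the packages used in this article are available on CRAN and can be installed in one step:

# One-time setup: install the packages used in this article
install.packages(c("gtrendsR", "tidymodels", "modeltime",
                   "tidyverse", "timetk", "lubridate", "flextable"))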

To illustrate the ease of using ModelTime, we forecast the future level of interest in the domain of People Analytics using Google Trends data (code included). From there, we will discuss potential applications of forecasting supply and demand in the context of HR.

Data Collection

The time-series data we will use for our example comes directly from Google Trends. Google Trends is an online tool that enables users to discover trends in search behavior within Google Search, Google News, Google Images, Google Shopping, and YouTube.

To do so, users are required to specify the following:

  1. A search term (up to four additional comparison search terms are optional), 
  2. A geography (i.e., where the Google Searches were performed),
  3. A time period, and
  4. A Google search source (e.g., Web Search, Image Search, News Search, Google Shopping, or YouTube Search).

It is important to note that the search data returned does NOT represent the actual search volume in numbers, but rather a normalized index ranging from 0 to 100. The values returned represent the search interest relative to the highest search interest during the time period selected. A value of 100 is the peak popularity for the term. A value of 50 means that the term is half as popular at that point in time. A score of 0 means there was not enough data for the term.
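To make that scaling concrete, here is a simplified illustration with invented numbers (Google's actual methodology also normalizes by total search volume and samples the data, so this shows only the basic rescaling idea):

# Hypothetical raw monthly search counts (invented numbers)
raw_counts <- c(120, 300, 150, 600, 450)

# Rescale so that the peak month becomes 100
index <- round(100 * raw_counts / max(raw_counts))
index
#> [1]  20  50  25 100  75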

# Libraries
library(gtrendsR)   # pull data from Google Trends
library(tidymodels) # modeling framework (includes rsample and parsnip)
library(modeltime)  # time series modeling and forecasting
library(tidyverse)  # data wrangling and visualization
library(timetk)     # time series plotting and utilities
library(lubridate)  # date handling
library(flextable)  # table formatting


# Google Trends Parameters
search_term   <- "people analytics"
location      <- "" # global
time          <- "2010-01-01 2020-08-01" # uses date format "Y-m-d Y-m-d"
gprop         <- "web" # one of "web", "news", "images", "froogle", "youtube"

# Google Trends Data Request
gtrends_result_list <- gtrendsR::gtrends(
    keyword = search_term,
    geo     = location,
    time    = time,
    gprop   = gprop
)

# Data Cleaning
gtrends_search_tbl <- gtrends_result_list %>%
    pluck("interest_over_time") %>% # extract the interest-over-time element
    as_tibble() %>%
    select(date, hits) %>%
    mutate(date = ymd(date)) %>% # convert to Date class
    rename(value = hits)

# Visualization of Google Trends Data
gtrends_search_tbl %>%
    timetk::plot_time_series(date, value)

Time series plot

We can see from the visualization that the term “people analytics” has trended upwards in Google web searches from January 2010 through to August 2020. The blue trend line, established using a LOESS smoother (i.e., a non-parametric technique that finds a curve of best fit without assuming the data follows a specific distribution), illustrates a continual rise in interest. The raw data also indicates that the Google search term “people analytics”, perhaps unsurprisingly, peaked in June of 2020.

This peak may relate to the impact of COVID-19, specifically the requirement for organizations to deliver targeted ad-hoc reporting on personnel topics during this time. Irrespective, the future for People Analytics seems to be of increasing importance.
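For the curious, timetk's plot_time_series() adds that LOESS smoother automatically, but the same trend line can be reproduced manually with ggplot2. A minimal sketch (the span value here is an assumption; timetk chooses its own smoothing parameters):

# Reproduce the trend line manually with ggplot2
gtrends_search_tbl %>%
    ggplot(aes(x = date, y = value)) +
    geom_line() +
    geom_smooth(method = "loess", se = FALSE, span = 0.75) + # LOESS curve of best fit
    labs(x = NULL, y = "Relative search interest (0-100)",
         title = "Google web searches for 'people analytics'")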

Modeling

Let’s move into some Forecasting! The process employed using ModelTime is as follows:

  1. We separate our dataset into “Training” and “Test” datasets. The Training data represents the data from January 2010 to January 2019, while the Test data represents the last 18 months of data (i.e., February 2019 – August 2020). A visual representation of this split is shown in the image below this list.
  2. The Training data is used to generate an 18-month forecast using several different models. In this article, we have chosen the following models: Exponential Smoothing, ARIMA, ARIMA Boost, Prophet, and Prophet Boost.
  3. The forecasts generated are then compared to the Test data (i.e., actual data) to determine the accuracy of the different models.
  4. Based on the accuracy of the different models, one or more models are then applied to the entire dataset (i.e., Jan 2010 – August 2020) to provide a forecast into 2021. 
Training test data

We present the R code for step 1 below; a sketch of steps 2 through 4 follows after it.

# Train/Test
k <- 18 # hold out the last 18 months for testing

# number of whole months spanned by the data
no_of_months <- lubridate::interval(
    base::min(gtrends_search_tbl$date),
    base::max(gtrends_search_tbl$date)
) %/% base::months(1)

# proportion of observations kept in the training set
# e.g., Jan 2010 - Aug 2020 spans 127 months, so prop = (127 - 18) / 127, roughly 0.86
prop <- (no_of_months - k) / no_of_months

# remove the last 18 months (i.e. k) of data from the training set so that we can determine the model accuracy
splits <- rsample::initial_time_split(gtrends_search_tbl, prop = prop)

# visualize the training data (i.e., black line) and test data (i.e., red line)
splits %>%
    tk_time_series_cv_plan() %>%
    plot_time_series_cv_plan(date, value)
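
Steps 2 through 4 can be implemented along the lines below. Treat this as a minimal sketch rather than a tuned analysis: the engine names are modeltime's defaults for each model, and the extra calendar features fed to the two boosted models (a numeric version of the date and the month of the year) are illustrative choices, not the only option.

# Step 2: fit the five forecasting models on the training data
model_ets <- exp_smoothing() %>%
    set_engine("ets") %>%
    fit(value ~ date, data = training(splits))

model_arima <- arima_reg() %>%
    set_engine("auto_arima") %>%
    fit(value ~ date, data = training(splits))

model_arima_boost <- arima_boost() %>%
    set_engine("auto_arima_xgboost") %>%
    fit(value ~ date + as.numeric(date) + factor(month(date, label = TRUE), ordered = FALSE),
        data = training(splits))

model_prophet <- prophet_reg() %>%
    set_engine("prophet") %>%
    fit(value ~ date, data = training(splits))

model_prophet_boost <- prophet_boost() %>%
    set_engine("prophet_xgboost") %>%
    fit(value ~ date + as.numeric(date) + factor(month(date, label = TRUE), ordered = FALSE),
        data = training(splits))

# Step 3: calibrate on the test data and compare model accuracy
models_tbl <- modeltime_table(
    model_ets, model_arima, model_arima_boost,
    model_prophet, model_prophet_boost
)

calibration_tbl <- models_tbl %>%
    modeltime_calibrate(new_data = testing(splits))

calibration_tbl %>% modeltime_accuracy() # MAE, RMSE, MAPE, etc. per model

calibration_tbl %>%
    modeltime_forecast(new_data = testing(splits),
                       actual_data = gtrends_search_tbl) %>%
    plot_modeltime_forecast()

# Step 4: refit on the full dataset and forecast 12 months into 2021
refit_tbl <- calibration_tbl %>%
    modeltime_refit(data = gtrends_search_tbl)

refit_tbl %>%
    modeltime_forecast(h = "1 year", actual_data = gtrends_search_tbl) %>%
    plot_modeltime_forecast()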
