This spring, many businesses found themselves transformed overnight as work-from-home suddenly became the new normal. With in-office strategies for managing employees no longer viable, some firms are turning to technological innovation to track and manage performance—a field known as “people analytics.” This technology promises managers greater insight into how employees are actually working. To realize that potential, however, managers must also be conscious of the risks inherent in this technology, says Columbia University’s Bo Cowgill, winner of an Ideas Worth Teaching Award for his course People Analytics and Strategy. We spoke with Bo about the boom in people analytics during the pandemic, and why more data on employees does not liberate companies from making value judgments.
From increased remote work to furloughs and layoffs, the coronavirus pandemic has reshaped business in a matter of weeks. What role are people analytics playing in these unfolding events?
As this article in the Economist notes, the pandemic has increased the need for people analytics. This is obvious when you think about it. Managers can no longer monitor and assess workers by walking around the office. Data is one of the few management tools they have left.
From the beginning, people analytics has tried to enable management to go beyond the “eyeball test.” This was already evident in Michael Lewis’s book Moneyball, in which professional baseball scouts assessed players based on the physical traits they could see at a glance (rather than analyzing on-field performance).
The eyeball approach has limitations, but at least it was easy to pull off. Now COVID-19 has taken that advantage off the table. No manager can drop into every worker’s house to see how they’re performing at their desks at home. The eyeball test doesn’t work well over Zoom, either.
As a result, most managers are blind to what’s going on (in both a literal and a figurative sense). This creates an opportunity to use the pandemic to improve how data is used in operations, strategy and management.
One of the learning objectives for your course is to: “Assess black-box algorithms for quality, effectiveness and bias.” Can you explain the real-world implications of this challenge for our readers—and what it entails in your course?
The most natural way to assess black-box algorithms (systems whose inner workings are opaque to outside scrutiny; see We Need to Open Algorithms’ Black Box Before It’s Too Late, as well as my article Algorithmic Fairness and Economics) is to obtain and inspect the code and weights the algorithm uses. We call this “input regulation.” Input regulation is intuitive and popular, and is therefore the first instinct of many students and professionals.
Unfortunately, this approach is highly misleading and can leave blind spots about an algorithm’s effects. The most common proposed form of input regulation is to check whether algorithms use variables labeled “gender,” “race,” “age” or “ethnicity.” However, we already know that any decision process (human or algorithmic) can effectively discriminate without directly using these demographic variables, simply by relying on variables that are correlated with them.
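As a minimal illustration of this proxy effect (entirely synthetic data and invented variable names, not anything from the course or article): a scoring rule that never receives a demographic attribute can still produce very different selection rates across groups, because its single input is correlated with group membership.

```python
# Hypothetical sketch: proxy discrimination with synthetic data.
# The "group" attribute is never given to the scoring rule, yet outcomes
# differ sharply by group because the proxy variable correlates with it.
import random

random.seed(0)

# Synthetic applicants: "proxy" is distributed differently across groups
# (that difference IS the correlation the text describes).
applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    proxy = random.gauss(0.7 if group == "A" else 0.4, 0.1)
    applicants.append({"group": group, "proxy": proxy})

def black_box_score(applicant):
    # The algorithm's only input is the proxy -- no demographic variables.
    return applicant["proxy"]

hired = [a for a in applicants if black_box_score(a) > 0.55]

rate_a = sum(a["group"] == "A" for a in hired) / sum(a["group"] == "A" for a in applicants)
rate_b = sum(a["group"] == "B" for a in hired) / sum(a["group"] == "B" for a in applicants)
print(f"Hire rate, group A: {rate_a:.2f}")
print(f"Hire rate, group B: {rate_b:.2f}")  # much lower, despite no group input
```

An input audit of `black_box_score` would find no demographic variables and declare it clean; only looking at its outputs reveals the disparity.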
The focus should instead be on outputs rather than inputs. This does not require that the black box be opened and inspected. It does, however, require that the algorithm be queried so its outputs can be assessed. Analysts should also compare an algorithm to a realistic benchmark (such as an alternative decision process), rather than to a standard of perfection. If algorithms don’t make these decisions, then something else (probably humans) will, and human biases can be more damaging and more difficult to eliminate than algorithmic ones.
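One way such an output audit might look in practice (a sketch with made-up candidates, thresholds and field names, not a method from the interview): query the decision process, tabulate selection rates per group, and compare the algorithm’s disparity to that of an alternative human process rather than to perfection.

```python
# Hypothetical output audit: treat the decision process as a black box,
# query it, and compare per-group selection rates against a benchmark.

def audit_selection_rates(decide, candidates):
    """Query a decision process and return the selection rate per group."""
    rates = {}
    for group in {c["group"] for c in candidates}:
        members = [c for c in candidates if c["group"] == group]
        rates[group] = sum(decide(c) for c in members) / len(members)
    return rates

# Synthetic candidate pool (invented scores for illustration).
candidates = (
    [{"group": "A", "score": s} for s in (55, 62, 70, 81, 90)]
    + [{"group": "B", "score": s} for s in (50, 58, 69, 80, 88)]
)

# The black box under audit, and a (deliberately biased) human benchmark.
algorithm = lambda c: c["score"] >= 60
human_baseline = lambda c: c["score"] >= 65 if c["group"] == "B" else c["score"] >= 55

algo_rates = audit_selection_rates(algorithm, candidates)
human_rates = audit_selection_rates(human_baseline, candidates)

def impact_ratio(rates):
    # Ratio of the lowest group selection rate to the highest; the EEOC's
    # "four-fifths rule" flags ratios below 0.8 as potential adverse impact.
    return min(rates.values()) / max(rates.values())

print("algorithm impact ratio:", round(impact_ratio(algo_rates), 2))
print("human benchmark ratio: ", round(impact_ratio(human_rates), 2))
```

Here the algorithm still shows disparity, but less than the human benchmark, which is exactly the kind of realistic comparison the argument above calls for.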
You mentioned that you are in the process of updating your course. What factors will have the most impact on the future of people analytics, and how do you plan to incorporate the new Covid reality?
My updates introduce additional in-class simulations about people analytics and a new case about salary history decisions. I currently have no plans to teach a dedicated session on the coronavirus outbreak, although I reference the topic often. I generally think COVID-19 will accelerate trends that are already happening. The decline of the “eyeball test” (above) is an example of this. The course was oriented toward those trends even before the outbreak.
As alumni from your course go into leadership positions across industries and sectors, what is the one lesson that you hope will stick with them throughout their careers?
Better statistics do not liberate companies from making value judgments. To the contrary: better data makes value judgments more important. In today’s world, we can measure the tradeoffs in any decision with more granularity than ever before. This puts a burden on leaders to develop a philosophy of acceptable (versus unacceptable) tradeoffs, and then to integrate that philosophy into a coherent strategy and set of tactics. Better data does not free leaders from these burdens; it increases them.
Interested in more innovative insights for business education? Browse our complete collection of interviews with outstanding educators, and subscribe to our weekly Ideas Worth Teaching digest!