Discussions about data privacy tend to focus on the consumer-seller dynamic. What personal information do companies have a right to collect, and how should they be expected to use and care for it? But another dynamic, between employer and worker, raises even thornier questions.
For years people analytics — the science of using data to manage employees — drew its insights from details such as age, gender, and tenure, along with ratings from performance reviews. But that paltry harvest limited its usefulness.
More recently, sensor technology and real-time data collection have produced bumper crops of employee information for companies. Now managers can access second-by-second feedback on what a worker is doing and, to some extent, what a worker is feeling. Data from emails, chats, and calendar systems can be analyzed alongside traditional HR data.
Sensors can gather incredibly granular data on workers’ habits — everything from who speaks with whom and how often people interrupt one another to where they spend their time and even their stress levels. And as ID badges and office furniture join the internet of things, the information that companies have on their workers will expand by orders of magnitude. HR departments now have the potential to know nearly everything about employees.
Already, the new measurement tools have had an immensely positive impact — when deployed correctly and ethically. Companies have used data from wearable sensors and digital communication to quantify and reduce gender bias at work, increase alertness and reduce fatigue, significantly lift performance, and lower attrition, in industries from railways to finance to quick-service restaurants.
And we are just beginning to tap the potential of these new technologies.
For workers, though, the value of all this data gathering isn’t as clear. Advanced people analytics may even hinder employees’ ability to freely manage their time and experiment. The numbers might suggest, for instance, that a new way of working isn’t productive, even though it could eventually lead to long-term company gains. Worse still, analytical tools open up the risk of abuse through Tayloristic overmonitoring.
Just because you can measure something doesn’t mean you should. Workers’ advocates worry that data-based surveillance gives employers unreasonable power over employees, and they aren’t sure companies can be trusted not to lose or abuse sensitive personal information.
After all, companies’ systems are frequently breached. And it’s not a long leap from monitoring employees’ stress to using health care data to predict medical conditions and take preemptive action.
Data also gives a false sense of validity. That is, it can make certain conclusions seem true (employee X is not productive because he generates 10% less output) even if there are legitimate alternative points of view (employee X is productive in a different way — by, say, reducing errors or training others).
Given this new reality, managers now face challenging questions: Should they use analytical tools that examine employees’ worktime habits to assess their performance? What data should firms have access to? Should they share their analyses with employees? Should they look at individual data? What about using data to determine the risk that an employee will develop a mental illness?
Companies, lawmakers, and regulators are already starting to grapple with rules for the use of monitoring tools in the workplace.
In the meantime, managers need guidance on how to run effective and ethical people analytics programs that will avoid an employee backlash or a heavy-handed legislative response. Through my work at MIT with Sandy Pentland and in designing products and services for my own analytics company, I have identified several scientifically backed ground rules for the use of monitoring technology. I’ve seen these techniques effectively mitigate potential issues, and I’ve seen serious problems arise when they weren’t used.
In general, a successful rollout of people analytics technology takes four to six weeks. Faster implementation may be possible in some organizations, but it’s important to work through each step of the playbook below. Doing so will show employees that management is being thoughtful about thorny ethical issues and will ensure that the findings’ validity is respected. Blowing off any one of these steps can cause opt-in rates to plummet and undermine a program for years.
Here’s your playbook for the ethical, smart use of employee data:
Opt in. It starts with one of the simplest and oldest privacy guidelines: If you launch a program that collects new kinds of data, require employees to opt in to it (and leave out everyone who doesn’t). Forcing people to give up data about themselves at work may be strictly legal in the United States and several other countries, but that is not the case globally. Regulations such as GDPR, while not explicitly focused on the workplace, spell out restrictions that would make compelled data collection difficult for a multinational organization.
But even in jurisdictions that permit it, coerced monitoring or requiring employees to opt out (especially if the choice is obscured by, say, being buried in the fine print during onboarding) opens many ethical and business concerns.
First and foremost, it may backfire from a purely economic perspective. Groundbreaking research by Harvard Business School’s Ethan Bernstein has shown that when employees feel that everything they do is completely transparent, the result is often reduced performance. And when competition for talent is intense, workers may leave companies that compel them to give up their data.
Beyond that, firms face reputational risk. For example, Amazon, Tesco, and the Daily Telegraph all experienced weeks of negative media coverage for their proposed or poorly executed monitoring efforts.
Some of those programs were very well intended: The Telegraph’s was aimed at improving energy efficiency — something few employees would probably object to — through the use of desk sensors. But the media company rushed the rollout and gave employees little information before foisting the sensors on them. It was forced to withdraw the sensors quickly after fierce internal pushback and a skewering in the media.
Setting up an opt-in program is challenging and time-intensive in the short term. The program must include strong protections for employees who choose not to participate so that they don’t feel coerced or penalized.
Chief among those protections is data aggregation to prevent individuals’ behavior from being identified. But I also advise further precautions, such as consent forms and data anonymization at the source of collection (so overeager, curious-to-a-fault managers can’t snoop on the minute-by-minute activities of employees).
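To make those two safeguards concrete, here is a minimal sketch of what anonymization at the source and group-level aggregation can look like in code. It is illustrative only, not a description of any particular vendor’s system: the event fields, the salt handling, and the minimum group size are assumptions chosen for the example.

```
# A minimal sketch of two safeguards discussed above: anonymizing identifiers
# at the point of collection and reporting only aggregated, group-level behavior.
# Field names, salt handling, and thresholds are illustrative assumptions.

import hashlib
from collections import defaultdict
from statistics import mean

# Assumption: the salt is stored outside the analytics system, so analysts
# holding the data alone cannot reverse the pseudonyms.
SALT = "rotate-this-secret-and-keep-it-outside-the-analytics-system"

MIN_GROUP_SIZE = 5  # assumption: suppress groups too small to hide an individual


def anonymize_id(badge_id: str) -> str:
    """Replace a raw badge ID with a salted hash before the record is stored."""
    return hashlib.sha256((SALT + badge_id).encode()).hexdigest()[:12]


def collect_event(badge_id: str, team: str, interaction_minutes: float) -> dict:
    """Build a record at the sensor/collector, keeping no raw identifier."""
    return {
        "member": anonymize_id(badge_id),  # pseudonymous from the moment of capture
        "team": team,
        "interaction_minutes": interaction_minutes,
    }


def aggregate_by_team(events: list[dict]) -> dict[str, float]:
    """Report only team-level averages, and only for sufficiently large teams."""
    minutes_by_team: dict[str, list[float]] = defaultdict(list)
    members_by_team: dict[str, set[str]] = defaultdict(set)
    for e in events:
        minutes_by_team[e["team"]].append(e["interaction_minutes"])
        members_by_team[e["team"]].add(e["member"])
    return {
        team: round(mean(values), 1)
        for team, values in minutes_by_team.items()
        if len(members_by_team[team]) >= MIN_GROUP_SIZE
    }
```

The design choice the sketch illustrates is that protection happens before the data ever reaches a manager’s dashboard: identifiers are replaced at collection, and any group too small to mask an individual is simply never reported.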
To design opt-in consent forms that are clear, concise, and easy to understand, companies should take their cue from institutional review boards (IRBs) at universities, which have stringent procedures for how researchers interact with human subjects.
On IRB forms, researchers must clearly specify what data will be collected and how it will be used; companies should do the same, providing employees with appendices that spell out the specific database tables that will be populated, so they can see exactly what kind of information will be stored. Finally, companies need to sign the forms, creating legally binding contracts with employees. (For an example, see the consent form we use at my company.)
Communication and transparency. Blindly sending out consent forms to all employees and hoping for high opt-in rates isn’t a winning strategy. The rollout of ethical people analytics involves lots of communication and constant transparency.