Using Artificial Intelligence to Track a Growing Remote Workforce, and the Related Litigation Implications

By: Lewis Brisbois' Labor & Employment Team

The COVID-19 pandemic has changed the workplace, with the most notable change being the acceptance of long-term remote work. An April 2020 Gallup survey found that whereas 31% of U.S. workers surveyed reported working remotely between March 16 and March 19, that number grew to 62% between March 30 and April 2. By April 2, 59% of respondents stated that they desired to work from home as much as possible in the future.

In the year since, that trend has increased and solidified. Although the jury is still out, history may be set to record the shift to remote work as a success. A March 2021 report by the McKinsey Global Institute estimates that the adaptations companies made in response to the pandemic, measures such as a shift to remote and online work channels, automation, and the like, present foundational elements that could accelerate annual productivity growth by roughly 1% through 2024. That figure would be more than double the rate of annual productivity growth observed before the pandemic and the economic crisis it triggered. The Congressional Budget Office’s recent forecast likewise predicts 1.5% productivity growth per year from 2021 to 2025, up from the 1.2% average from 2008 to 2020. The National Bureau of Economic Research estimates that 79% of employees in the management industry are capable of working from home, with even higher figures in areas such as the education sector. The remarkable nature of these findings comes into clear relief when one recalls that in the “before times,” a request to work from home full-time often carried detrimental consequences for an employee’s career.

Much of this success stems from the adjustments companies made in the areas of employee management and relations in response to the pandemic. The key mode of change came from employers leveraging technology to ensure continuity and the safety of their workforce and customers. But what was once a series of stopgap measures in immediate response to the pandemic is now becoming a permanent feature, forcing employers to re-assess how to manage employees over the long term in a market characterized by enduring remote-only work arrangements. At its core, an employee’s interactions with the employer’s digital infrastructure at a remote workstation, whether at home or otherwise, are datapoints that inform matters such as attendance, tardiness, compliance with meal and rest period requirements, and productivity. The volume of this data requires automation to digest and evaluate effectively. Artificial intelligence (AI) is increasingly being looked to in this context as a way to track productivity and monitor performance. AI is well suited to this task because it can learn from the various datapoints. For example, AI can ingest login times and other activity datapoints to help track work behaviors, understand employee work patterns, and identify non-productive time or other aberrations in productivity.
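For illustration only, below is a minimal sketch of how such pattern analysis might work, using a simple statistical baseline. The data, field names, and the two-standard-deviation cutoff are hypothetical assumptions, not a depiction of any particular monitoring product.

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical activity log for one employee: (login, logout) timestamps.
# Real monitoring tools ingest far richer signals; this sketch uses only
# session length to illustrate the idea of flagging aberrations.
SESSIONS = [
    ("2021-04-05 09:02", "2021-04-05 17:10"),
    ("2021-04-06 09:00", "2021-04-06 16:55"),
    ("2021-04-07 08:58", "2021-04-07 17:05"),
    ("2021-04-08 11:45", "2021-04-08 13:20"),  # unusually short day
    ("2021-04-09 09:05", "2021-04-09 17:00"),
    ("2021-04-12 09:00", "2021-04-12 17:00"),
    ("2021-04-13 08:55", "2021-04-13 17:00"),
]

FMT = "%Y-%m-%d %H:%M"

def session_hours(login: str, logout: str) -> float:
    """Length of a login session, in hours."""
    delta = datetime.strptime(logout, FMT) - datetime.strptime(login, FMT)
    return delta.total_seconds() / 3600

def flag_aberrations(sessions, threshold=2.0):
    """Flag days whose session length deviates from the sample average
    by more than `threshold` standard deviations (an assumed cutoff)."""
    durations = [session_hours(a, b) for a, b in sessions]
    avg, sd = mean(durations), stdev(durations)
    return [
        (login[:10], round(d, 2))
        for (login, _), d in zip(sessions, durations)
        if sd and abs(d - avg) > threshold * sd
    ]

print(flag_aberrations(SESSIONS))  # [('2021-04-08', 1.58)]
```

Even this toy example surfaces the legal point developed below: the “aberration” it flags could just as easily reflect a medical appointment or protected leave, which is one reason human review of automated flags matters.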

The distance from AI helping to manage employee productivity to AI augmenting, or even performing, decision-making in employment events such as compensation reviews and disciplinary actions is not far. And in some cases, litigation can follow. Take, for example, the 2020 lawsuit brought by Uber drivers in the Netherlands. In that case, four Uber drivers (three from the United Kingdom and one from Portugal) challenged the legality of their terminations, which resulted from Uber’s algorithms detecting irregular trips suggesting fraudulent activity. Based on that suspected fraud, Uber’s algorithm automatically deactivated the app accounts linked to the plaintiffs, an act the plaintiffs described as algorithmic “robo-firing.” The claims were brought under Article 22 of the General Data Protection Regulation (GDPR), which embodies the right not to be subject to a “decision based solely on automated processing.” In April 2021, the court concluded that Uber’s determinations that the plaintiffs had violated the terms of use of Uber’s app, its findings of alleged fraud, and the robo-terminations themselves could be regarded as decisions “based solely on automated processing.” Accordingly, the court ordered Uber to reinstate the drivers, among other remedies.

To be sure, none of the foregoing applies directly within the United States. Whereas Europe tends to have highly developed regulatory schemes in this area, the U.S. does not. But what Europe lacks, the U.S. has in the form of an active plaintiffs’ bar and various statutes that effectively provide bounties for private plaintiffs to pursue collective, class, and representative actions. Theories such as a “common algorithm” basis for class certification pose real risks for companies and employers in this area.

Moreover, with the election of President Biden, the federal government is expected to increase enforcement of federal laws in AI-based scenarios, such as claims involving alleged bias in AI systems. In an April 19, 2021 piece with the telegraphing title “Aiming for truth, fairness, and equity in your company’s use of AI,” the Federal Trade Commission (FTC) took the position that a company whose use of AI inadvertently introduces bias or other unfair outcomes through automated decision-making could be prosecuted under existing federal laws, such as Section 5 of the Federal Trade Commission Act. The FTC offered the example of “digital redlining” in algorithmic advertising, i.e., using an algorithm to target sub-populations based on factors such as race, sex, or religion. To ensure no one mistook its stance, the FTC stated, “If you don’t hold yourself accountable, the FTC may do it for you.” These types of public enforcement actions tend to trigger follow-on private class action lawsuits.

Conclusion

It is a new era. The pandemic changed the nature of workplaces, and the workplace is likely to be characterized by a growing and permanent shift toward remote work. Staffing and managing a workforce in that environment will drive the further rise and implementation of AI. As that happens, there will be a concomitant rise in litigation risks arising out of the use of automated decision-making algorithms as applied to employees.

Companies and employers should look to coordinate the implementation of AI in employee workspaces and in customer-facing scenarios with the aid of counsel. A key theme in the FTC’s piece was transparency. In the employment context, “transparency” may involve a company policy that provides employees with a “right to explainability” regarding how AI automation is being used to assess their performance. Companies can adopt policies specific to “high-risk AI,” such as AI that affects an employee’s terms and conditions of employment, and different policies for AI that simply augments employee work performance. Other steps can include auditing algorithms and developing strategies designed to foster employee and customer trust in AI processes.
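How an algorithm audit is conducted will vary, but purely as an illustrative sketch, one common starting point is comparing outcome rates across groups against the “four-fifths” benchmark familiar from U.S. employment-selection analysis. The data, group labels, and threshold below are hypothetical assumptions, not statements of any legal standard.

```python
from collections import Counter

# Hypothetical outcomes from an automated decision tool:
# (group, received_favorable_outcome). Groups and data are invented
# for illustration.
DECISIONS = [
    ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", False),
]

def selection_rates(decisions):
    """Favorable-outcome rate per group."""
    totals, favorable = Counter(), Counter()
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def impact_ratios(decisions):
    """Each group's rate relative to the highest-rate group."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items()}

# The 0.8 cutoff mirrors the "four-fifths rule" used as a rough
# screen for adverse impact; falling below it warrants review.
for group, ratio in impact_ratios(DECISIONS).items():
    print(group, ratio, "review" if ratio < 0.8 else "ok")
# group_a 1.0 ok
# group_b 0.33 review
```

Running such checks periodically, documenting the results, and escalating anything below the benchmark for human and legal review is one concrete way to operationalize the accountability the FTC describes.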

For more information on this topic, contact the author of this post. Subscribe to this blog to receive email alerts when new posts go up.
