For research scientists, the Hawthorne effect is a well-known problem of behavioural reactivity, where individuals modify an aspect of their behaviour in response to their awareness of being observed. We see a similar effect where workers are monitored remotely by employers’ technology to track their activity, as discussed in a recent Guardian article: workers avoid meetings during monitored hours or pause the tracker when taking written notes, because these count as 0% activity and will bring down their score.
Putting aside any ethical viewpoint on the use of such applications, employers should consider carefully the legal limitations on the use of such technology in the workplace. Data protection, fair processing, and human rights issues are perhaps the obvious ones, but where AI is being used to support decisions affecting workers based on AI-driven data points, such as appraisal results or flexible working requests, employers should also consider the scope for bias in the results. On the face of it, it is easy to think that AI removes the possibility of bias because it focuses purely on data; however, time after time we have seen algorithmic bias emerge in practice.
It is also important to remember that, in contrast to proving direct discrimination, a claimant alleging indirect discrimination need only show that the disadvantage was caused by the AI; they do not need to show why the AI caused it. In other words, the cause of the discrimination must be proved, but the reason for it need not be.
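To illustrate how such group-level disadvantage might be surfaced without knowing why the algorithm behaves as it does, the sketch below compares hypothetical favourable-outcome rates between two groups using the "four-fifths" rule of thumb. All names, figures, and thresholds here are illustrative assumptions for this example only; the four-fifths rule is a US-derived statistical heuristic, not the UK legal test for indirect discrimination.

```python
# A minimal, hypothetical sketch: checking whether an AI scoring tool
# produces markedly different favourable-outcome rates between groups.
# All data and figures below are invented for illustration.

def selection_rate(outcomes: list[bool]) -> float:
    """Proportion of a group receiving the favourable outcome."""
    return sum(outcomes) / len(outcomes)

# Hypothetical appraisal outcomes produced by an AI scoring tool:
# True = rated "satisfactory or above", False = rated below.
group_a = [True] * 18 + [False] * 2   # 90% favourable
group_b = [True] * 12 + [False] * 8   # 60% favourable

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Four-fifths rule of thumb: a ratio below 0.8 is often treated as an
# indicator of potential adverse impact worth investigating further.
impact_ratio = rate_b / rate_a
print(f"Group A rate: {rate_a:.0%}, Group B rate: {rate_b:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}"
      + (" -- potential adverse impact" if impact_ratio < 0.8 else ""))
```

Note that a disparity like this shows only that the tool is producing a group disadvantage; it says nothing about the mechanism inside the model that causes it, which mirrors the evidential point above.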
AI is increasingly being used in the workplace and, as with any new technology, it brings benefits as well as risks that employers must actively manage.