This article is about how AI (Artificial Intelligence) affects U (you) and can potentially cause OI (Occupational Illnesses).
Yes, AI can cause OI. How? Let us understand…
AI is more than 60 years old. In the early 1950s, John von Neumann and Alan Turing laid the foundations of the technology behind AI. The term 'AI' itself is attributed to John McCarthy of MIT (Massachusetts Institute of Technology, USA).
Herbert Simon, economist and sociologist, prophesied in 1957 that AI would beat a human at chess within 10 years. His vision proved right, though 30 years late: in May 1997, Deep Blue (IBM's expert system) defeated Garry Kasparov at chess.
However, there was little support thereafter for financing and developing this form of AI.
Deep Blue operated on a systematic brute-force algorithm in which all possible moves were accounted for. The human defeat was symbolically historic, but Deep Blue had mastered only the narrow, well-defined rules of chess, a far cry from the complexities of the real world.
The new boom in AI started around 2010, driven by massive volumes of data and the very high efficiency of graphics card processors (GPUs).
From virtually no support in 1997, research teams all over the world turned, almost overnight around 2010, to this technology, with benefits mankind had never imagined.
With regard to AI at the workplace, the Covid pandemic accelerated the deployment of AI-based tools. One such area was employee monitoring.
The Covid pandemic forced many to work remotely. To monitor their wellbeing and work output, many employers used technological tools that included internet-access monitoring, webcams, keystroke logging and time-tracking devices.
That was acceptable during the pandemic, when it was a question of survival for everyone. Now that the pandemic is over, there are concerns that such monitoring is intrusive, excessive and overwhelming.
And where AI is used to assess workloads, it may push workers to perform more tasks, faster, than a line manager would demand, because the background data says such workloads are theoretically possible. This can harm worker health, especially if workers' physical and mental capacity has not been assessed.
AI improves safety, but without a 'prevention through design' approach, including for the automatic algorithms themselves, it can create unsafe working conditions, leading to work-related (occupational) illnesses and accidents.
In August 2022, the European Agency for Safety and Health at Work (EU-OSHA) released a report examining the risks and opportunities presented by AI-based worker management systems and their impact.
The report found that AI can enable better monitoring of hazards and mental health of workers.
AI also contributes to improving workplace safety – by reducing human error, automating dangerous tasks, monitoring for harassment, checking equipment integrity, and detecting and preventing crime in offices and factories.
The report also listed the dangers AI can pose, briefly summarized as follows:
- AI usage can ‘dehumanise’ workers by giving the sense that they have very limited control over their jobs.
- Usage of AI at the workplace can also create an unhealthy, pressured environment with little transparency about how decisions are made or can be challenged.
- AI usage can create mistrust, limit worker participation and lead to work-life balance issues.
- AI can also cause serious mental and physical harm, including anxiety, musculoskeletal and cardiovascular disorders.
The report suggested that employers pursue a 'prevention through design' approach from the start. This design covers, among other things, the automatic algorithms used to assign work and rate worker performance.
In the summer of 2021, food delivery company Deliveroo was fined by the Garante, the Italian data protection authority. The Garante found that Deliveroo had collected a disproportionate amount of personal data from its riders, in violation of the EU General Data Protection Regulation. This data was used for the automated rating of each rider's performance and for the assignment of work, and Deliveroo was not transparent enough about how these algorithms worked.
Foodinho, another Italian food delivery company, was fined because the workings of its automatic algorithm system, used to evaluate worker performance, were not sufficiently transparent and did not ensure accurate results.
In the UK, in response to concerns over staff safety and data protection, the Information Commissioner’s Office (ICO) issued draft guidance in October 2022 to help ensure employers’ monitoring of staff performance doesn’t turn into surveillance or harassment.
The ICO reminds companies they must make workers aware of the nature, extent and reasons for monitoring, and ensure it’s proportionate. It says, ‘Just because a form of monitoring is available, it does not mean it is the best way to achieve your aims.’
Anurag Bana, a senior project lawyer in the IBA’s Legal Policy & Research Unit, says ‘there needs to be an appropriate level of human oversight for any AI worker management system to protect employees’ and that ‘there should also be an algorithmic impact assessment procedure before any system is installed.’ He believes that a human rights due diligence exercise in respect of AI systems is essential in order that ‘automated decision-making does not produce harmful outcomes and workers can challenge how decisions are made to ensure transparency and accountability.’
The risks of AI are not only ethical and compliance-related but legal and reputational as well.
Bana says that employers need to demonstrate a duty of care to employees regarding AI use. ‘Providing information to employees about how and why AI is being used is not enough,’ he explains. ‘There needs to be consultation with staff about the business reasons for using AI and how it will positively impact them. You need to have employees’ buy-in before you start monitoring their performance in this way. You also should have an ethical framework in place that protects employees’ health and safety – it may be a good idea to conduct an assessment/check compliance against the ISO 45003 guidelines, which look at employees’ psychological health and safety at work.’
Johan Hübner, Chair of the IBA Artificial Intelligence and Robotics Subcommittee and a partner at Swedish law firm Delphi, says that ‘excessive monitoring can lead to higher levels of employee stress and increased ill health among employees.’
Where AI is used to allocate tasks, it’s ‘important to ensure that all dimensions of the allocated tasks are included in the AI-generated decision,’ adds Hübner.
The AI needs to consider the number of tasks allocated to each employee, their difficulty and how long each task will take. ‘Otherwise, the risk is that some employees become overworked while other employees are underworked, which could lead to ill-health in either scenario,’ says Hübner.
Preventing such a mismatch of work requires that the AI be trained on adequate data and that humans (including workers, their line managers and even occupational health physicians) be involved in decision-making.
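Hübner's point can be sketched in code. The following is a minimal, hypothetical illustration, not any vendor's actual system: the Task fields, the capacity constant and the greedy allocator are all assumptions. It shows an allocator that weights each task by difficulty and duration, balances load across employees, and escalates to a human instead of silently exceeding a human-set capacity.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration_hours: float   # estimated time to complete
    difficulty: float       # 1.0 = routine; higher = more demanding

# Hypothetical per-shift capacity in routine-difficulty hours. In practice
# this should come from a human assessment of workers' physical and mental
# capacity, not from the algorithm's own historical data.
SHIFT_CAPACITY_HOURS = 8.0

def effort(task: Task) -> float:
    """Weight a task's duration by its difficulty."""
    return task.duration_hours * task.difficulty

def allocate(tasks: list[Task], employees: list[str]) -> dict[str, list[Task]]:
    """Greedy allocation: give each task (biggest first) to the currently
    least-loaded employee, and flag overload rather than overworking anyone."""
    load = {e: 0.0 for e in employees}
    assignment: dict[str, list[Task]] = {e: [] for e in employees}
    for task in sorted(tasks, key=effort, reverse=True):
        e = min(load, key=load.get)  # least-loaded employee
        if load[e] + effort(task) > SHIFT_CAPACITY_HOURS:
            # Escalate to a line manager instead of exceeding safe capacity.
            raise RuntimeError(f"No safe capacity left for task {task.name!r}")
        load[e] += effort(task)
        assignment[e].append(task)
    return assignment
```

Balancing by weighted effort, rather than by raw task count, is what keeps some employees from being overworked while others are underworked; the explicit capacity check is where the 'prevention through design' principle enters the algorithm itself.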
Ida Nordmark, an associate at Delphi, says organisations could face fines and damages claims for employee injury or sickness caused by AI use.
For example, under Swedish labour law a company that causes ill health or injury in an employee through its use of AI is responsible for bearing the costs of that employee's rehabilitation. In more serious cases, a company may be fined by the regulator for causing an employee's illness in the workplace, or be required to pay damages due to discrimination. 'The most obvious risk is not legal, but reputational,' says Nordmark.
This is the scene in some parts of Europe.
As of now, India does not have specific laws for data protection, but personal information is safeguarded under Sections 43A and 72A of the Information Technology Act. These give a right to compensation for improper disclosure of personal information, similar to the GDPR (General Data Protection Regulation). In 2017, the Supreme Court of India declared the right to privacy a fundamental right protected under the Indian Constitution.
However, unlike European governments, the Indian government and many others are not yet considering the effect of AI on worker health and the compensation involved.
To reiterate: AI improves safety, but without a 'prevention through design' approach, including for the automatic algorithms themselves, it can create unsafe working conditions, leading to work-related (occupational) illnesses and accidents.
For more info, contact ____________________________________________________________________________________________
Dr Ajay Sati is an Occupational Health physician who prefers to describe himself as an Occupationist, to denote 'an expert in diseases and other concerns of occupations'. Dr Sati has managed health and wellness programs in the industries he has worked in, such as atomic energy and energy (oil & gas), in India and overseas. An experienced virtual consultation expert, he has been involved in many greenfield and brownfield projects, providing inputs from a health point of view.