Controlling the rise of robots

3rd Sep 2020

Published in PharmaTimes magazine - September 2020

Charlotte Walker-Osborn considers whether the law will be able to keep up with the rise of robots, drones and automated processes

As COVID-19 continues to have a significant impact on lives and commerce across the globe, the world is turning ever more to technology and automation to allow businesses to endure. Many are asking: is this the true rise of the robots, and are we legally equipped to deal with the ramifications?

Robots and drones have been used to automate certain processes for a number of years. Prior to the pandemic, reluctance to remove the human element seemed to be a key barrier to wider adoption of robots. However, social distancing and lockdowns have undoubtedly prompted a re-evaluation and changed consumer preferences, as everyone has been forced to adapt to increased use of technology in everyday life.

COVID-19 has shown that humans and supply chains are vulnerable. Robots, drones and artificial intelligence (AI) have been deployed to assist in a number of ways. For example, Starship Technologies recently expanded its robot shopping delivery service in Milton Keynes, UK, to include free delivery of groceries to NHS workers during the pandemic. In South Korea, robots have been distributing hand sanitiser and measuring temperatures. Walmart has also used robot cleaners during the crisis. In the pharma and healthcare sectors, AI is being heavily utilised in pattern analysis and drug discovery around COVID-19. Use of chatbots – software which communicates information through text or voice interaction – has increased during the pandemic to disseminate health information, and they are likely to become more prevalent in the healthcare and pharma industries when the pandemic recedes. Moreover, use of AI and robotics is now likely to be accelerated where it can assist with patient matching, testing and drug discovery, as well as with delivering medications and sanitising equipment to reduce human exposure to viruses.

Legal considerations

However, is the law ready to address the legal ramifications?

There are a vast number of legal considerations when building, adopting or using AI and automation. Presently, it is still largely the case that industry must 'shoehorn' these technologies into laws that were not written with them in mind. This is particularly true of laws around product liability, employment, intellectual property and data, as well as usage in public spaces. Where liability lies, if these solutions go wrong, is pivotal. It should not be forgotten that this liability may be set out in contracts between businesses and with consumers, as well as by legislation and case law.

There is a vast amount of work being done by legislators to bring the law into the modern age, with a large focus on laws relating to the use of autonomous vehicles and robots in public spaces. A number of countries have adopted or are putting in place laws in this arena, including the US, the UK, Germany and Singapore. Such laws are essential to enabling adoption; their absence has undoubtedly hampered widespread uptake.

Significant progress has been made in the EU and UK in relation to the use of personal data in these technologies. In these jurisdictions, privacy regulators are building a vast body of guidance and consultation around the use of robots and AI, which should be carefully applied.

How should humans treat machines that can think and act autonomously?

There is a lot of focus on robots and AI making errors or acting in a rogue manner. Arguably, not many current public AI solutions have the capability to 'go rogue' in a way that will harm humans, but more will come. If a robot has the potential to damage a person by its actions, should it be treated and punished in the same way as a human? Should there be a 'kill switch'? Should we hold the machine accountable, or its creator or programmer, or the person who applies its intelligence to an improper use?
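To make the 'kill switch' question concrete, below is a minimal sketch of how such a switch might work in software: a stop flag that gates a robot's control loop. This is purely illustrative and assumes a simple threaded design; the KillSwitch class and control_loop function are invented for this example and are not drawn from any deployed system or proposed law.

```python
import threading
import time

class KillSwitch:
    """A thread-safe stop flag that an operator (or a mandated
    watchdog) can set to halt an autonomous system."""

    def __init__(self) -> None:
        self._stop = threading.Event()

    def trigger(self) -> None:
        self._stop.set()

    def triggered(self) -> bool:
        return self._stop.is_set()

def control_loop(switch: KillSwitch) -> None:
    """Run autonomous task steps until the kill switch is triggered."""
    while not switch.triggered():
        # One autonomous step: navigate, deliver, sanitise, etc.
        print("robot: performing task step")
        time.sleep(0.5)
    # Enter a safe state rather than stopping dead: park, cut motors, log.
    print("robot: kill switch triggered, entering safe shutdown")

if __name__ == "__main__":
    switch = KillSwitch()
    # Simulate an operator triggering the switch after two seconds.
    threading.Timer(2.0, switch.trigger).start()
    control_loop(switch)
```

Even in this toy version, the design choice matters legally: who is permitted to trigger the switch, and what counts as a 'safe' shutdown, are exactly the questions the law has yet to settle.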

What happens when a robot is given its own citizenship (as has happened in at least one country) and truly begins to think for itself? There is work to be done here: liability is complex because the issue could lie with the build, the training data, the actual training of the AI, software updates not being applied on time, tampering via a cyber threat, and more. In many countries, existing law can be applied to fit each circumstance, but specific laws in this area will continue to come.

Focus on ethics, transparency and trustworthy automation

Guidance, consultations and new laws focusing on ethics, transparency and trust in the use of AI and automation are proliferating across the world. A seminal document is the EU's AI White Paper, which outlines the European Commission's proposals for how AI should be brought into use in Europe.

The AI White Paper builds on the AI Strategy published in 2018 and the respective High-Level Expert Groups’ reports on trustworthy AI and liability for AI, both published in 2019.

The proposed framework for AI regulation contemplates both the changes to existing legislation required to ensure that AI is covered and the AI-specific requirements and new laws which may be needed, with a key focus on extending product safety legislation. The White Paper highlights the need to consider the changing functionality of AI systems: existing legislation focuses on safety concerns at the time a product is placed on the market and does not anticipate the dynamic nature of many AI-infused solutions, where modifications, software updates and machine learning may give rise to new risks not present when the product was initially put on the market.

It also focuses on the need to update the allocation of responsibility in the supply chain: the current regime does not adequately address scenarios where AI is added to products after they have been placed on the market.

The White Paper also notes that AI could give rise to risks that are simply not anticipated under current legislation. These may be linked to cyber risks, personal security threats, risks arising from loss of connectivity and so on. For the reasons mentioned above, such risks may evolve over time and are not necessarily present when the product or solution is first placed on the market. The Paper also focuses on the need for accuracy and correct treatment of data, and on ensuring that bias is not introduced into AI solutions. This is a much-discussed area for AI: getting it wrong can lead to errors in results and potential discrimination.

Importantly, it is clear that the future EU regulatory framework is intended to have extraterritorial effect: it would apply to all operators offering AI-enabled solutions in the EU, regardless of whether they are established in the region. It is likely that many countries, including the UK, will follow the EU's laws to some degree.

Conclusions

The law is partially ready, but much more is to come. Companies need to keep a watching brief on new laws as they come into force and carefully apply current laws, as best they can, to their automation projects. Laws will come rapidly over the next few years, and I anticipate increased focus on employment and tax laws and their implications as automation truly begins to reduce the need for workers. In the absence of perfect laws in this area, both now and moving forwards, liability and risk will need to be well managed by governments and companies in their contracts with one another, with their end-customers and with us as consumers.

Charlotte Walker-Osborn is partner and international head of artificial intelligence, international head of technology sector, at Eversheds Sutherland (International) LLP
