Innovation is challenging just how far we trust technology – and tech providers – to play an increasingly central role in daily life: can the law keep up?
By Dan Tozer, Partner at Harbottle & Lewis
The march of the robots into daily life is now seemingly inevitable.
Almost a third of London jobs are expected to be in the hands of machines within the next twenty years, according to the think tank Centre for London, while the Royal Society of Arts predicts that 4 million UK private sector jobs could be taken over by robots within the decade. From data use (see the Cambridge Analytica scandal) to robot liability, innovations are bringing the issue of trust and responsibility in technology to the fore.
The recent and much anticipated House of Lords report on AI highlighted this issue. While rightly underlining the UK’s leading role in AI advancement, peers also noted the urgent need for legal clarity over AI, particularly with regard to liability. Law and regulation, based to such a large extent on precedent, traditionally struggle to keep up with technological advancements at the best of times, and we are now at a significant turning point.
With machines playing an increasingly central role in daily life, a degree of surety over where responsibility for AI lies, and where public trust resides, is required. As it stands, it is difficult to find this surety in our current legal frameworks as we head into the ‘AI age’ – particularly when it comes to ‘ownership’, in its broadest sense, of ideas and actions.
Can a robot own its own creations?
With machines now being used to create content largely independently, recent years have seen the emergence of a debate, particularly in the creative and arts industries, over the extent to which robots can own the intellectual property inherent in their creations.
In 2016, Gaetan Hadjeres and Francois Pachet at the Sony Computer Science Laboratories in Paris developed a machine learning neural network that has learned to produce choral cantatas of its own in the style of Bach. More recently, Shun Matsuzaka, creative planner for Japanese ad agency McCann, created the world’s first AI creative director, capable of directing a TV commercial. We are now left wondering how far these artistic creations belong to the machine, or belong to the human behind the machine.
In this context, it is worth considering to what extent a ‘non-human’ can own IP at all…
For example, in 2016, after a protracted legal battle revolving around a “selfie” taken by Naruto, a macaque monkey from the jungles of Indonesia, using a camera owned by photographer David Slater, a federal judge in San Francisco ruled that a monkey could not be declared a copyright owner. While of course not a robot issue, the case highlighted the difficulty of ascribing ownership to a non-human ‘being’.
Crucially, to what extent and where ownership is attributed is central to the question of who retains responsibility, and where public trust ultimately needs to lie – particularly as machines grow in ‘intelligence’, ‘learn’ and begin to take independent action…
Earlier in 2018, an Amazon Alexa personal assistant placed an order for cat food on its own, after being ‘triggered’ by an advert for a cat food brand on a nearby TV. The case was dealt with by the Advertising Standards Authority, which ruled that the mistake was not the advertiser’s fault. This was an isolated case but, as robot assistants grow in sophistication, questions remain around who takes the blame, and who takes ownership, when a robot’s actions go wrong.
The recent tragedies involving driverless cars, including in Arizona during an Uber autonomous vehicle test, place a clear focus on the issue of liability for AI decisions. Many questions remain in the autonomous vehicles space. For instance, if a vehicle’s algorithm is programmed to ‘preserve itself’ – i.e. to limit any physical damage to the vehicle and its occupants – are other road-users, or indeed those occupants, potentially more at risk? Further, how far liability can be assigned to the manufacturer, the software developer, and the machine itself will continue to be debated.
How far would you trust a robot?
Of course, there may be situations where we are more likely to trust machines to perform tasks and services. One of the arguments for driverless vehicles is that they are not susceptible to issues such as drink-driving, while opinion is still divided as to how far, for instance, patients would trust a robot surgeon.
Niels Bohr said, “prediction is very difficult, especially if it’s about the future.” However, we can safely predict that the robots are coming – indeed, to a large extent they are already here, driving many of the systems and processes we often take for granted. As AI becomes ever more embedded within our daily lives, clarity over issues including ownership and responsibility will need to be sought, and solutions developed, so we can venture into the brave new world of AI with trust in the innovation.