Parliamentarians in Britain have called on the Government to consider laws that would ensure artificial intelligence is never given the power to "hurt, destroy or deceive human beings".
The House of Lords committee said it wants to protect the country from "passively accepting" the consequences of a robot uprising by putting ethics at the centre of the technology's development.
Researchers behind the parliamentary study say that because Britain is poised to become a world leader in this 'controversial technological field', safeguards need to be set in place.
The report insists AI needs to be developed for the common good, and that the "autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence".
Committee chairman Lord Clement-Jones said: "[We have] a unique opportunity to shape AI positively for the public's benefit and to lead the international community in its ethical development – rather than passively accept its consequences."
"The #LordsAIreport out today recognises that artificial intelligence should be developed for the common good and benefit of humanity. Find out more about how AI and #machinelearning shape the world around us in our interactive infographics https://t.co/ca2gdmTklN," The Royal Society (@royalsociety) tweeted on April 16, 2018.
"AI is not without its risks and the adoption of the principles proposed by the committee will help to mitigate these. An ethical approach ensures the public trusts this technology," he explains.
"It will also prepare them to challenge its misuse."
He says AI must not be allowed to weaken human rights, and that people "should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence".
"After 5 #CCWUN meetings I'd say a common understanding is emerging that killer robots are weapons systems capable of selecting & attacking targets without some form of human control. See the sample in this table compiled by @NoelSharkey for the AI committee," Mary Wareham (@marywareham) tweeted on April 17, 2018.
The report also stresses the importance of transparency, recommending that the Government establish a mechanism to inform consumers when AI is being used to make significant and sensitive decisions.
"It is not currently clear whether existing liability law will be sufficient when AI systems malfunction or cause harm to users," it states, "and clarity in this area is needed."
"The committee recommends that the Law Commission investigate this issue."
The concerns come in the same month HBO airs season two of Westworld, a show which sees "hosts" (perfect robotic replicas of humans) embark on a violent reckoning to gain independence.