Rogue robots spark legal fears

Robots going rogue have been the bread and butter of science fiction for decades, but artificial intelligence is now becoming an everyday part of our lives.

And now politicians are wrestling with a big question: if a robot commits a crime, who is to blame - the robot, its master, or its maker?

Programmers at Zora Bots in Belgium are teaching robots to make decisions and operate as receptionists, carers and companions.

But some fear robots could be used for more sinister purposes.

"We must be really careful with what the robots can and cannot do," co-CEO Tommy Deblieck says.

The European Parliament is calling for clear laws to decide who is to blame if a robot is used to commit a crime - accidentally or not.

And they'll cover more than just humanoid bots. Medical robots, drones, and driverless cars pose tricky questions too.

"It is clear that this has to be transparent and clear to the customer because otherwise these robots... there will be no trust, and trust is what we need," says Mady Delvaux, Member of the European Parliament.

The EU is looking to set up an agency for robotics law and to give robots a special legal status as "electronic persons".

Lawmakers are also concerned about the amount of data robots will have access to.

If the European legislation goes through, it could include a rule requiring all robots to have a "kill switch" in case something goes wrong.