Companies working with artificial intelligence need to put accountability mechanisms in place to prevent its misuse, the European Commission said on Monday, under new ethical guidelines for a technology open to abuse.
AI projects should be transparent, have human oversight and secure and reliable algorithms, and they must be subject to privacy and data protection rules, the commission said, among other recommendations.
The European Union initiative taps into a global debate about when, or whether, companies should put ethical concerns before business interests, and how tough a line regulators can afford to take on new projects without risking killing off innovation.
“The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies,” the Commission digital chief, Andrus Ansip, said in a statement.
AI can help detect fraud and cyber security threats, improve healthcare and financial risk management and cope with climate change. But it can also be used to support unscrupulous business practices and authoritarian governments.
The EU executive last year enlisted the help of 52 experts from academia, industry bodies and companies including Google, SAP, Santander and Bayer to help it draft the principles.
Companies and organisations can sign up to a pilot phase in June, after which the experts will review the results and the Commission will decide on the next steps.
IBM Europe Chairman Martin Jetter, who was part of the group of experts, said the guidelines “set a global standard for efforts to advance AI that is ethical and responsible.”
The guidelines should not hold Europe back, said Achim Berg, president of BITKOM, Germany’s Federal Association of Information Technology, Telecommunications, and New Media.
“We must ensure in Germany and Europe that we do not only discuss AI but also make AI,” he said.