Europe published a code of ethics on artificial intelligence (AI) on 18 December 2018. With it, the European Union is pushing companies and researchers to develop AI in an ethical and transparent way.
The guidelines set out in the EU code of ethics move in this direction and distinguish Europe's approach to the technology from that of the United States or China. The old continent is, in fact, lagging behind Chinese and Californian multinationals, which have access to far more data than Europe, which lacks global digital platforms of its own.
The code of ethics developed by the EU contains guidelines illustrating technical and non-technical solutions to ensure transparency and accountability in AI, and to encourage companies and researchers to build, from an early stage, solutions that protect privacy, eliminate discrimination and preserve human oversight in their projects.
Human dignity is placed at the center of the code, which underlines that artificial intelligence should never harm human beings or nature. This protection must guarantee:
- physical security
- psychological security
- financial security.
Furthermore, intelligent machines must always be designed to operate in support of human autonomy.
The draft of the document was prepared on behalf of the European Commission by 52 experts selected from academia, industry and civil society, who considered it "necessary" that AI systems be transparent, accessible and, in principle, understandable to all.
In this regard, the EU last year set itself the target of investing at least 20 billion euros (22.8 billion dollars) in artificial intelligence research by 2020, and the same amount per year over the following decade.
The EU is betting that this document will help generate trust among users by setting rules that apply to everyone and that protect both consumers and businesses.