Understanding artificial intelligence

11-01-2018

Artificial intelligence (AI) systems already permeate daily life: they drive cars, decide on mortgage applications, translate texts, recognise faces on social networks, identify spam emails, create artworks, play games, and intervene in conflict zones. The AI revolution that began in the 2000s emerged from the combination of machine-learning techniques and 'big data'. The algorithms behind these systems work by identifying statistical correlations in the data they analyse, enabling them to perform tasks that would require intelligence if a human performed them. Nevertheless, data-driven AI can only perform one task at a time, and cannot transfer what it has learned to another task. 'Strong AI', able to display human-like intelligence and common sense, and perhaps to set its own goals, is not yet within reach. Despite the fears portrayed in film and TV entertainment, the idea of a 'superintelligence' able to improve itself and dominate humans remains a remote possibility: strong AI is not predicted for several decades at least, if it is ever achieved at all.

Even so, the development of data-driven AI systems calls for adapting legal frameworks on the collection, use and storage of data, on privacy and other grounds. Bias in the data supplied to AI systems can also be reproduced or amplified in the decisions they make. The key issue, however, remains the level of autonomy given to AI systems to make decisions that could be life-changing, bearing in mind that such systems only provide recommendations, do not understand the tasks they perform, and often cannot explain how they reach their conclusions. AI systems are expected to affect society, especially the job market, and could increase inequalities. To counter the abuse of probabilistic prediction and the risks to privacy, in April 2016 the European Parliament and the Council of the EU adopted the General Data Protection Regulation.
The European Parliament also requested an update of the Union legal framework on robotics and AI in February 2017.
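To make the idea of 'identifying statistical correlations' concrete, the sketch below shows a minimal spam classifier of the kind mentioned above: a naive Bayes model that simply counts how often each word appears in spam and non-spam messages, then labels new text by which word statistics it matches better. The training messages are invented here purely for illustration; a real system would learn from millions of examples, but the principle is the same.

```python
from collections import Counter
import math

# Hypothetical toy training data: each message is labelled spam (1) or not (0).
train = [
    ("win money now", 1),
    ("cheap money offer", 1),
    ("meeting at noon", 0),
    ("lunch at noon tomorrow", 0),
]

def fit(data):
    """Count word frequencies per class -- the 'statistical correlations'."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in data:
        counts[label].update(text.split())
        priors[label] += 1
    return counts, priors

def predict(text, counts, priors):
    """Naive Bayes with add-one smoothing; returns the more likely label."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label in (0, 1):
        total = sum(counts[label].values())
        score = math.log(priors[label] / sum(priors.values()))
        for word in text.split():
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

counts, priors = fit(train)
print(predict("cheap money", counts, priors))   # labelled spam (1)
print(predict("noon meeting", counts, priors))  # labelled not spam (0)
```

Note that the model has no understanding of money or meetings; it only exploits word-frequency correlations in its training data. This also illustrates why data-driven AI performs one task at a time: the same counts are useless for, say, translation.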
