European Parliament legislative resolution of 20 October 2020 on the proposal for a regulation of the European Parliament and of the Council amending Regulation (EU) No 168/2013 as regards specific measures on L-category end-of-series vehicles in response to the COVID-19 outbreak (COM(2020)0491 – C9-0285/2020 – 2020/0251(COD))
– having regard to the Commission proposal to Parliament and the Council (COM(2020)0491),
– having regard to Article 294(2) and Article 114 of the Treaty on the Functioning of the European Union, pursuant to which the Commission submitted the proposal to Parliament (C9‑0285/2020),
– having regard to Article 294(3) of the Treaty on the Functioning of the European Union,
– after consulting the European Economic and Social Committee,
– having regard to the undertaking given by the Council representative by letter of 14 October 2020 to approve Parliament’s position, in accordance with Article 294(4) of the Treaty on the Functioning of the European Union,
– having regard to Rules 59 and 52(1) of its Rules of Procedure,
– having regard to the report of the Committee on the Internal Market and Consumer Protection (A9-0190/2020),
1. Adopts its position at first reading hereinafter set out;
2. Calls on the Commission to refer the matter to Parliament again if it replaces, substantially amends or intends to substantially amend its proposal;
3. Instructs its President to forward its position to the Council, the Commission and the national parliaments.
Position of the European Parliament adopted at first reading on 20 October 2020 with a view to the adoption of Regulation (EU) 2020/… of the European Parliament and of the Council amending Regulation (EU) No 168/2013 as regards specific measures on L-category end-of-series vehicles in response to the COVID-19 pandemic
(As an agreement was reached between Parliament and Council, Parliament's position corresponds to the final legislative act, Regulation (EU) 2020/1694.)
Mobilisation of the European Globalisation Adjustment Fund: application EGF/2020/001 ES/Galicia shipbuilding ancillary sectors
European Parliament resolution of 20 October 2020 on the proposal for a decision of the European Parliament and of the Council on the mobilisation of the European Globalisation Adjustment Fund following an application from Spain - EGF/2020/001 ES/Galicia shipbuilding ancillary sectors (COM(2020)0485 – C9-0294/2020 – 2020/1996(BUD))
– having regard to the Commission proposal to the European Parliament and the Council (COM(2020)0485 – C9‑0294/2020),
– having regard to Regulation (EU) No 1309/2013 of the European Parliament and of the Council of 17 December 2013 on the European Globalisation Adjustment Fund (2014-2020) and repealing Regulation (EC) No 1927/2006(1) (EGF Regulation),
– having regard to Council Regulation (EU, Euratom) No 1311/2013 of 2 December 2013 laying down the multiannual financial framework for the years 2014-2020(2), and in particular Article 12 thereof,
– having regard to the Interinstitutional Agreement of 2 December 2013 between the European Parliament, the Council and the Commission on budgetary discipline, on cooperation in budgetary matters and on sound financial management(3) (IIA of 2 December 2013), and in particular point 13 thereof,
– having regard to the trilogue procedure provided for in point 13 of the IIA of 2 December 2013,
– having regard to the letters from the Committee on Employment and Social Affairs and from the Committee on Regional Development,
– having regard to the report of the Committee on Budgets (A9-0192/2020),
A. whereas the Union has set up legislative and budgetary instruments to provide additional support to workers who are suffering from the consequences of major structural changes in world trade patterns or of the global financial and economic crisis, and to assist their reintegration into the labour market; whereas this assistance is provided through financial support given to workers and the companies for which they worked;
B. whereas Spain submitted application EGF/2020/001 ES/Galicia shipbuilding ancillary sectors for a financial contribution from the EGF, following 960 redundancies(4) in the economic sectors classified under the NACE Revision 2 Division 24 (Manufacture of basic metals), 25 (Manufacture of fabricated metal products, except machinery and equipment), 30 (Manufacture of other transport equipment), 32 (Other manufacturing), 33 (Repair and installation of machinery and equipment) and 43 (Specialised construction activities) in the NUTS level 2 region of Galicia (ES11) in Spain;
C. whereas the application is based on the intervention criteria of point (a) of Article 4(2) of the EGF Regulation, which allows that a collective application involving SMEs located in one region may cover SMEs operating in different economic sectors as defined at NACE Revision 2 division level, provided that SMEs are the main or the only type of business in that region;
D. whereas SMEs are the backbone of the region’s economy, in which more than 95 % of enterprises have fewer than 250 workers, and whereas the 38 enterprises concerned by this application are SMEs; whereas Galicia is part of the Atlantic Axis association and its economy considerably relies on cross-border companies and workers;
E. whereas the Galician shipbuilding sector follows the subcontracting pattern of the European shipbuilding sector, which is mostly made up of small and medium-sized shipyards, with a very high percentage of subcontracting in terms of value and employment;
F. whereas Spain argues that, since 2004, Europe has been losing its merchant shipbuilding industry(5) to East-Asia and that the economic and financial crisis that started in 2008 resulted in a significant decline in orders, the expansion of shipbuilding in Asia and intensive global competition(6);
G. whereas subsidy policies and preferential fiscal treatment, such as State aid, together with lower labour costs in East-Asian countries, have resulted in market losses for Union shipbuilders;
H. whereas shipyards in Galicia build technologically advanced military ships, oil and chemical tankers, offshore vessels, oceanographic and seismic research vessels, tugboats, passenger ships and fishing vessels;
I. whereas the closure of Factorias Vulcano shipyard in July 2019, and the request for creditors pre-bankruptcy by HJ Barreras shipyard in October 2019 have led to the redundancies, since half of the dismissals that are subject to this application occurred in companies that are creditors of HJ Barreras;
J. whereas the subcontractors of Factorias Vulcano have a high level of specialisation and therefore a high degree of dependence on the main shipyard with interdependencies and the same consequences on employment in the ancillary shipbuilding industry as if the companies were within a single NACE economic sector.
1. Agrees with the Commission that the conditions set out in Article 4(2) of the EGF Regulation are met and that Spain is entitled to a financial contribution of EUR 2 054 400 under that Regulation, which represents 60 % of the total cost of EUR 3 424 000, comprising expenditure for personalised services of EUR 3 274 000 and expenditure for preparatory, management, information and publicity, control and reporting activities of EUR 150 000;
2. Notes that the Spanish authorities submitted the application on 13 May 2020, and that, following the provision of additional information by Spain, the Commission finalised its assessment on 11 September 2020 and notified it to Parliament on the same day;
3. Notes that Spain started providing personalised services to the targeted beneficiaries on 13 August 2020 and that the period of eligibility for a financial contribution from the EGF will therefore be from 13 August 2020 to 13 August 2022;
4. Notes that Spain started incurring administrative expenditure for the implementation of the EGF on 8 June 2020 and that expenditure on preparatory, management, information and publicity, control and reporting activities from 8 June 2020 to 13 February 2023 will therefore be eligible for a financial contribution from the EGF;
5. Welcomes the fact that the coordinated package of personalised services was drawn up by Spain in consultation with the social partners and that through a collaboration agreement, the social partners will also be involved in the implementation of the services;
6. Welcomes the involvement of ASIME and the trade unions CCOO(7) and UGT(8), the social partners that participate in the Social Dialogue in Galicia, in the drawing-up of the coordinated package of personalised services and in the implementation of the services; stresses that the social partners should also be involved in the monitoring of the measures;
7. Takes into consideration that the shipyards sector and ancillary industries in Galicia had an annual turnover of around EUR 2 000 million in 2018, and that 10 000 direct jobs and 25 000 indirect jobs depended on shipbuilding, whereas last year the sector’s turnover dropped by 11 % and the number of jobs by 20,8 % (about 2 000);
8. Underlines that those redundancies took place in a context of a high level of unemployment (11,7 % in 2019) in the region of Galicia; welcomes, therefore, the re-skilling and up-skilling measures provided by this EGF support to make the regional shipbuilding sector, the cross-border economy and the overall labour market more resilient and competitive in the future;
9. Emphasises that effective research, specialisation and technological innovation are key to strengthening the European shipbuilding industry and enabling it to compete on a global scale with countries relying on lower labour costs, subsidy policies and preferential fiscal treatment;
10. Stresses that the personalised services to be provided to workers made redundant have to be tailor-made for each profile;
11. Notes that 94 % of targeted beneficiaries are men and 78,2 % of them are between 30 and 54 years old; notes that the personalised services to be provided to redundant workers include: information sessions and preparatory workshops, occupational guidance towards employment or self-employment, training (including training towards entrepreneurship for those aiming at self-employment), tutoring after reintegration into work, intensive job-search assistance and a variety of incentives;
12. Welcomes the inclusion of participation incentives of up to EUR 400, of contributions to commuting expenses of EUR 0,19/kilometre plus additional costs such as tolls and parking costs, of contributions to expenses for carers of dependent persons of up to EUR 20 per day of participation, and of outplacement incentives whereby dependent workers or the self-employed will receive EUR 200 per month, for a maximum period of six months, in order to support the targeted beneficiaries in job-search or training activities, conditional on active participation in the measures;
13. Recalls that the proposed actions constitute active labour market measures within the eligible actions set out in Article 7 of the EGF Regulation and do not substitute passive social protection measures;
14. Notes that the financial contribution will be managed and controlled by the same bodies that manage and control the European Social Fund and that the Xunta de Galicia(9) will be the intermediate body for the managing authority;
15. Stresses that the Spanish authorities have confirmed that the eligible actions do not receive assistance from other Union funds or financial instruments;
16. Reiterates that assistance from the EGF must not replace actions which are the responsibility of companies, by virtue of national law or collective agreements;
17. Recalls that, under the current rules, the EGF could be mobilised to support permanently dismissed workers and the self-employed in the context of the global crisis caused by COVID-19 without amending the EGF Regulation, as Spain is one of the Member States severely affected by the pandemic;
18. Approves the decision annexed to this resolution;
19. Instructs its President to sign the decision with the President of the Council and arrange for its publication in the Official Journal of the European Union;
20. Instructs its President to forward this resolution, including its annex, to the Council and the Commission.
ANNEX
DECISION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
on the mobilisation of the European Globalisation Adjustment Fund following an application from Spain – EGF/2020/001 ES/Galicia shipbuilding ancillary sectors
(The text of this annex is not reproduced here since it corresponds to the final act, Decision (EU) 2020/1598.)
By 2018, China (35,5 %) had become the leader, followed by Japan (23,4 %) and South Korea (22,7 %), whilst Europe's market share had fallen to only 6,8 %. In terms of the order book in 2019, China as market leader had a share of 34 %, South Korea 26 % and Japan 15 %.
Xunta de Galicia and in particular the Consellería de Facenda –Dirección General de política financiera, tesoro y fondos europeos / Servicio de inspección y control de fondos comunitarios in cooperation with the Consellería de Economía, Emprego e Industria –Secretaría Xeral de Emprego/Subdirección Xeral de Relacións Laborais will be the intermediate body for the managing authority.
Discharge 2018: General budget of the EU – European Economic and Social Committee
1. European Parliament decision of 20 October 2020 on discharge in respect of the implementation of the general budget of the European Union for the financial year 2018, Section VI – European Economic and Social Committee (2019/2060(DEC))
– having regard to the general budget of the European Union for the financial year 2018(1),
– having regard to the consolidated annual accounts of the European Union for the financial year 2018 (COM(2019)0316 – C9‑0055/2019)(2),
– having regard to the European Economic and Social Committee’s annual report to the discharge authority on internal audits carried out in 2018,
– having regard to the Court of Auditors’ annual report on the implementation of the budget concerning the financial year 2018, together with the institutions’ replies(3),
– having regard to the statement of assurance(4) as to the reliability of the accounts and the legality and regularity of the underlying transactions provided by the Court of Auditors for the financial year 2018, pursuant to Article 287 of the Treaty on the Functioning of the European Union,
– having regard to its decision of 13 May 2020(5) postponing the discharge decision for the financial year 2018, and the accompanying resolution,
– having regard to Article 314(10) and Articles 317, 318 and 319 of the Treaty on the Functioning of the European Union,
– having regard to Regulation (EU, Euratom) No 966/2012 of the European Parliament and of the Council of 25 October 2012 on the financial rules applicable to the general budget of the Union and repealing Council Regulation (EC, Euratom) No 1605/2002(6), and in particular Articles 55, 99, 164, 165 and 166 thereof,
– having regard to Regulation (EU, Euratom) 2018/1046 of the European Parliament and of the Council of 18 July 2018 on the financial rules applicable to the general budget of the Union, amending Regulations (EU) No 1296/2013, (EU) No 1301/2013, (EU) No 1303/2013, (EU) No 1304/2013, (EU) No 1309/2013, (EU) No 1316/2013, (EU) No 223/2014, (EU) No 283/2014, and Decision No 541/2014/EU and repealing Regulation (EU, Euratom) No 966/2012(7), and in particular Articles 59, 118, 260, 261 and 262 thereof,
– having regard to Rule 100 of and Annex V to its Rules of Procedure,
– having regard to the second report of the Committee on Budgetary Control (A9-0188/2020),
1. Refuses to grant the Secretary-General of the European Economic and Social Committee discharge in respect of the implementation of the budget of the European Economic and Social Committee for the financial year 2018;
2. Sets out its observations in the resolution below;
3. Instructs its President to forward this decision and the resolution forming an integral part of it to the European Economic and Social Committee, the European Council, the Council, the Commission and the Court of Auditors, and to arrange for their publication in the Official Journal of the European Union (L series).
2. European Parliament resolution of 20 October 2020 with observations forming an integral part of the decision on discharge in respect of the implementation of the general budget of the European Union for the financial year 2018 Section VI – European Economic and Social Committee (2019/2060(DEC))
The European Parliament,
– having regard to its decision on discharge in respect of the implementation of the general budget of the European Union for the financial year 2018, Section VI – European Economic and Social Committee,
– having regard to Rule 100 of and Annex V to its Rules of Procedure,
– having regard to the second report of the Committee on Budgetary Control (A9-0188/2020),
A. whereas in the context of the discharge procedure, the discharge authority wishes to stress the particular importance of further strengthening the democratic legitimacy of the Union bodies by improving transparency and accountability, and implementing the concept of performance-based budgeting and good governance of human resources;
B. whereas based on the outcome and recommendations of the investigation by the European Anti-Fraud Office (OLAF) Parliament's Committee on Budgetary Control expects to be informed by the European Economic and Social Committee (the Committee) on the measures taken in order to rectify the wrongdoings;
1. Welcomes the improvements made to the carry-over situation related to the budget line 'Members of the institution and delegates' by setting a deadline of six weeks for submitting the reimbursement claims; appreciates that since 1 January 2019 a reduction of carry-overs has been achieved;
2. Notes that, due to a higher number of opinions and reports issued, which required greater involvement of members in their preparation, higher costs for travel and other reimbursements were incurred;
3. Appreciates that the Committee plans to increase significantly its budget for IT in order to catch up, close the gap vis-a-vis the other Union bodies and further implement the Digital Strategy for the Committee adopted in June 2019; takes note of the efforts necessary to reinforce network capacities and end user equipment in order to allow 100 % of the staff to telework;
4. Notes that the Committee’s new structure, in place since 1 January 2020, attached the legal service directly to the secretary-general with the declared objectives of increasing the visibility and impact of the legal service and enabling it to provide legal support on a horizontal basis; takes note of the justification provided by the Committee but is concerned that the autonomy and full independence of the legal service might be affected; calls on the Committee to ensure that the legal service is officially and systematically involved in the most important matters of the Committee without leaving the decision on whether to consult it up to the different services; welcomes that the legal capacity was reinforced in the members’ working conditions unit to allow for the treatment of specific issues in relation to the statute of members; notes the reflections on exempting specialised staff, including staff belonging to the legal service, from the Committee’s mobility policy and calls on the Committee to report on the conclusions of this process to the discharge authority;
5. Confirms that the Committee received an asbestos-safe certificate for the VMA building without risk for normal use of the building; notes, however, that a limited amount of asbestos was present which was confirmed by further analysis; recognises that few materials containing asbestos fibres are located outside the office area of the VMA building and that it is planned to remove all those materials during the execution period of the renovation works;
6. Supports the request of the Committee to strengthen all efforts in respecting the content of the cooperation agreement between Parliament and the Committee; recalls, however, that under the 2014 agreement the Committee transferred a total of 36 translators to Parliament and only obtained the access to the European Parliamentary Research Service in exchange; notes that consequently the Committee had to hire contract staff and outsource its translation service; notes with concern that, to compensate the reduction of the translation staff, the Parliament has provided additional funds to the Committee for the outsourcing of the translation and that the Committee can reallocate these funds to other policy areas if they are not fully used for outsourced translation, which has happened in the previous years; is of the opinion that this stipulation is not in line with the principles of prudent and sound financial management and should be reviewed in the future;
State of play
7. Recalls that in its report of January 2020, OLAF concludes that the then president of Group I of the Committee was responsible for acts of harassment towards two members of staff, of inappropriate behaviour (serious misconduct) towards a Committee member and a staff member, and of misconduct towards other staff members working in the Group I Secretariat;
8. Recalls that OLAF concludes that the then president of Group I committed breaches of the obligations deriving from the Committee’s rules of procedure and its code of conduct; recalls that OLAF recommends that the Committee initiates the appropriate procedures with respect to the member concerned, as provided for in rule 8, part four, of the Committee’s rules of procedure, and takes all necessary steps to prevent any further cases of harassment by the member concerned at the workplace;
9. Deplores that several members of staff have suffered acts of psychological harassment by the then president of Group I for an unjustifiably long period of time; regrets that the anti-harassment measures in place in the Committee failed to tackle and to remedy this case sooner because of the senior position of the member concerned; regrets that the measures taken to protect the victims until the end of the investigation by OLAF were arguably improvised and insufficient, especially in light of the judgement in Case F-50/15(8), FS v European Economic and Social Committee (EESC), which should have served as a lesson for the Committee; notes with concern that shortcomings in the internal proceedings resulted in the inaction by the Committee's administration which translated into a breach of the duty of care and of the obligation to report to OLAF; calls on the Committee to take notice of this in the framework of the undertaken revision of the relevant decisions;
10. Notes that the Committee’s president received the OLAF report and recommendations on 17 January 2020; notes that the case was referred to the Committee’s advisory committee on the conduct of members on 23 January 2020; further notes that the advisory committee presented its conclusions on 28 April 2020, that the member concerned was invited to present his observations and that the Committee's president invited the Committee’s enlarged presidency to comment;
11. Notes that the Committee’s bureau by majority took the decisions to ask the member concerned to resign from his duties as president of Group I and to withdraw his application for the position of president of the Committee; notes that the bureau discharged the member concerned from all activities involving the management or administration of staff; notes that the bureau tasked the secretary-general with taking the necessary steps to ensure that, should proceedings be initiated by the public prosecutor against the member concerned, the Committee shall join those proceedings as a civil party; notes that the bureau tasked the secretary-general with communicating this decision to OLAF and Parliament; notes that this decision may, as appropriate, also be communicated to other institutions or bodies of the Member States;
12. Notes with concern that the decision of the Committee's bureau regarding the then president of Group I could not be fully enforced via the Committee's internal proceedings; notes that the member concerned decided to withdraw his candidacy for the position of the president of the Committee almost four months after the bureau's decision and then only on his own initiative; notes with concern that despite OLAF's findings and the bureau's decision the member concerned is able to impose his will and remain the president of Group I until the end of his term; calls on the Committee to carry on the revision of the Committee's rules of procedure and code of conduct to avoid such situations in the future;
13. Notes that OLAF submitted the case to the Belgian authorities and that the Belgian prosecutor is launching legal proceedings against the member concerned as psychological harassment can be prosecuted under Belgian law; notes that the plenary of the Committee decided to waive the immunity of the concerned member in its meeting of July 2020 in order to allow the Belgian prosecutor to continue the legal proceedings;
14. Points out that the Committee’s wrongdoings in this case have resulted in a material loss of public funds with respect to legal services, sick leave, victim protection, reduced productivity, meetings of the bureau and other bodies, etc.; considers it thus a case of concern regarding accountability, budgetary control and good governance of human resources in the Union institutions, bodies, offices and agencies; in that sense recalls that the Court of Auditors states in its Special Report 13/2019, ‘The ethical frameworks of the audited EU institutions: scope for improvement’, that ethical conduct in public affairs contributes to sounder financial management and increased public trust, and that any unethical behaviour by staff and members of the Union institutions and bodies attracts high levels of public interest and reduces trust in Union institutions;
15. Is astonished that the Committee's website features a statement by the member concerned in his capacity as president of Group I that is in reality a personal self-defence testimony and with the aggravating factor that cases are either already pending or expected before the Union judicial authorities and the Belgian authorities; deeply regrets that the disagreement between the presidency of the Committee and the presidency of Group I has been made public in this fashion at a great cost for the reputation and credibility of the Union institutions, bodies, offices and agencies;
16. Welcomes that the Committee initiated an in-depth assessment and reflection with respect to the overall existing framework supporting its zero-tolerance policy towards any behaviour which is likely to undermine human dignity; notes that this process aims to identify potential gaps and searches for further improvements in the interest of its staff and members;
17. Asks the Committee to keep the discharge authority informed about any currently ongoing OLAF investigations and the opening of new OLAF investigations concerning the Committee's members or staff with respect to harassment or any other concern;
18. Notes that the provisions of the Staff Regulations are not applicable to the Committee’s members, as they are not employees but are appointed as members of the Committee; observes that this circumstance has not prevented other Union institutions, bodies, offices and agencies from having specific, adequate and useful rules applicable to their members; in this sense recalls, for example, that Article 8, part 4, of the code of conduct of the Committee of the Regions prohibits the infringing member from being elected as office holder of the Committee and, if the member already holds such posts, entails dismissal from them; welcomes that the Committee is ready to consider further improvements to its system after a reflection that has now lasted more than two years; considers this to be an unreasonably long period; regrets that after the aforementioned period the Committee can only suggest awareness-raising and training measures for members despite the clear need for further measures as set out in the report of the European Ombudsman on dignity at work in the EU institutions and agencies (SI/2/2018/AMF) and Parliament's recommendations;
19. Asks the Committee to inform the discharge authority on the procedures and processes the Committee has rolled out or intends to roll out on how cases of harassment or similar issues concerning staff will be avoided in the future so as to ensure that comparable regrettable developments which have caused negative publicity and damaged the reputation of the Committee will not be repeated;
20. Welcomes the increase of the number of confidential counsellors in order to improve the informal procedure and the possibility for staff to share their concerns on any perceived situation of harassment;
21. Warmly welcomes the Committee’s reflections, which will result in a detailed action plan to strengthen the zero-tolerance policy towards harassment at the Committee to ensure that such behaviour can never be tolerated; welcomes and supports the current revision package concerning harassment, whistleblowing and disciplinary procedures that will further improve the mechanisms allowing staff to make formal harassment complaints and improve the robustness of the relevant legal structures; recalls, however, that this process has been reported by the Committee to Parliament for years and that only now concrete measures seem to be taken; welcomes the setting-up of a working group that includes representatives from the administration and the staff committee with the aim of collecting the widest possible input for improvements; is disappointed that the Committee has achieved minimal progress over the last years despite the precise recommendations of Parliament urging the Committee to introduce rules and procedures concerning members involved in harassment cases;
22. Welcomes the continuation of various awareness-raising initiatives in order to inform staff accordingly on the follow-up to the 'Respect@work campaign'; welcomes the organisation of training activities meant to ensure that staff is aware of relevant ethical and organisational values and the associated rules and procedures.
Judgment of the Civil Service Tribunal (Third Chamber) of 12 May 2016, FS v European Economic and Social Committee (EESC), F-50/15, ECLI:EU:F:2016:119.
Discharge 2018: General budget of the EU – European Council and Council
1. European Parliament decision of 20 October 2020 on discharge in respect of the implementation of the general budget of the European Union for the financial year 2018, Section II – European Council and Council (2019/2057(DEC))
– having regard to the general budget of the European Union for the financial year 2018(1),
– having regard to the consolidated annual accounts of the European Union for the financial year 2018 (COM(2019)0316 – C9‑0052/2019)(2),
– having regard to the Council’s annual report to the discharge authority on internal audits carried out in 2018,
– having regard to the Court of Auditors’ annual report on the implementation of the budget concerning the financial year 2018, together with the institutions’ replies(3),
– having regard to the statement of assurance(4) as to the reliability of the accounts and the legality and regularity of the underlying transactions provided by the Court of Auditors for the financial year 2018, pursuant to Article 287 of the Treaty on the Functioning of the European Union,
– having regard to its decision of 13 May 2020(5) postponing the discharge decision for the financial year 2018, and the accompanying resolution,
– having regard to Article 314(10) and Articles 317, 318 and 319 of the Treaty on the Functioning of the European Union,
– having regard to Regulation (EU, Euratom) No 966/2012 of the European Parliament and of the Council of 25 October 2012 on the financial rules applicable to the general budget of the Union and repealing Council Regulation (EC, Euratom) No 1605/2002(6), and in particular Articles 55, 99, 164, 165 and 166 thereof,
– having regard to Regulation (EU, Euratom) 2018/1046 of the European Parliament and of the Council of 18 July 2018 on the financial rules applicable to the general budget of the Union, amending Regulations (EU) No 1296/2013, (EU) No 1301/2013, (EU) No 1303/2013, (EU) No 1304/2013, (EU) No 1309/2013, (EU) No 1316/2013, (EU) No 223/2014, (EU) No 283/2014, and Decision No 541/2014/EU and repealing Regulation (EU, Euratom) No 966/2012(7), and in particular Articles 59, 118, 260, 261 and 262 thereof,
– having regard to Rule 100 of and Annex V to its Rules of Procedure,
– having regard to the second report of the Committee on Budgetary Control (A9-0189/2020),
1. Refuses to grant the Secretary-General of the Council discharge in respect of the implementation of the budget of the European Council and of the Council for the financial year 2018;
2. Sets out its observations in the resolution below;
3. Instructs its President to forward this decision and the resolution forming an integral part of it to the European Council, the Council, the Commission and the Court of Auditors, and to arrange for their publication in the Official Journal of the European Union (L series).
2. European Parliament resolution of 20 October 2020 with observations forming an integral part of the decision on discharge in respect of the implementation of the general budget of the European Union for the financial year 2018, Section II – European Council and Council (2019/2057(DEC))
The European Parliament,
– having regard to its decision on discharge in respect of the implementation of the general budget of the European Union for the financial year 2018, Section II – European Council and Council,
– having regard to Rule 100 of and Annex V to its Rules of Procedure,
– having regard to the second report of the Committee on Budgetary Control (A9-0189/2020),
A. whereas the European Council and the Council, as Union institutions, should be democratically accountable to all the citizens of the Union for the funds entrusted to them to perform their duties;
B. whereas Parliament is the sole directly elected body among the Union institutions, with the responsibility to grant discharge in respect of the implementation of the general budget of the European Union;
C. whereas an open and transparent discharge procedure is required to protect the Union’s financial interests, to pursue the necessary fight against fraud and to guarantee transparency and democratic accountability towards Union citizens whereby every Union institution is accountable for the budget which it executes;
1. Stresses that for 10 consecutive years the Council has refused to cooperate in the discharge procedure, thus forcing Parliament to refuse to grant discharge; notes that, as in previous years, the decision whether to grant discharge for 2018 was postponed in May 2020;
2. Underlines that this state of affairs is not tenable for either institution: for the Council because no positive decision on the implementation of its budget has been granted since 2009, and for Parliament because it shows a lack of respect for Parliament's role as discharge authority and guarantor of the transparency and democratic accountability of the budget of the Union;
3. States that this situation damages the public trust in the financial management of the Union institutions; considers a continuation of the current situation as detrimental to the accountability of the Union and its institutions;
4. Recalls that, pursuant to the Treaty on the Functioning of the European Union (TFEU) and the Financial Regulation, Parliament is the only discharge authority of the Union, while fully acknowledging the Council’s role as an institution giving recommendations on the discharge procedure; in this regard, asks the Council to give discharge recommendations with respect to the other Union institutions;
5. Recalls that according to TFEU the institutions enjoy administrative autonomy, their expenditure is set out in separate parts of the budget, and they are individually responsible for the implementation of their budgets;
6. Recalls that Parliament grants discharge to all Union institutions and bodies, based on the provision of technical documents, replies to parliamentary questions and hearings; regrets that Parliament repeatedly encounters difficulties in receiving answers from the Council due to a lack of cooperation, resulting in the refusal to grant discharge for more than 10 years;
7. Recalls that effective control of the Union’s budget implementation requires loyal cooperation among the institutions; recalls the wish of Parliament to start negotiations with the Council with a view to reaching a mutually satisfactory agreement in order to finally overcome this deadlock;
8. Highlights the letter sent by Parliament’s Committee on Budgetary Control on 25 May 2020 to the Secretary-General of the Council, informing him that Parliament’s Committee on Budgetary Control has been mandated by Parliament’s Conference of Presidents to re-open negotiations with the Council on the discharge procedure;
9. Informs that the Parliament's negotiating team consists of the Chair of Parliament's Committee on Budgetary Control, Ms Monika Hohlmeier; the Rapporteur for the Council 2018 discharge, Mr Tomáš Zdechovský; and the 1st Vice-Chair of Parliament’s Committee on Budgetary Control, Ms Isabel García Muñoz;
10. Informs that an updated version of the ‘non-paper’ on cooperation between Parliament and the Council during the annual discharge procedure, proposed by Parliament’s negotiating team on 20 February 2020, was annexed to the letter mentioned in point 8; notes that Parliament considers the ‘non-paper’ to be the starting point for the negotiations;
11. Informs that the 'non-paper' recognises the respective but different roles of both institutions in the discharge procedure by concluding that Parliament and the Council need a similar factual basis to deliver a recommendation (Council) or take a decision (Parliament);
12. Explains that the letter mentioned in point 8 invites the Council to suggest an appropriate date to start negotiations; notes that the positive trend of this process was interrupted by the COVID-19 pandemic;
13. Points out that as long as no negotiations are taking place between the parties, Parliament’s views stand, and that negotiations between the parties are a precondition for resolving the issue;
14. Insists that the budget of the European Council and the budget of the Council be separated in order to contribute to more transparency, accountability and efficiency with respect to the expenditure for both institutions as recommended by Parliament in many of its discharge resolutions over the last years;
15. Insists that combined efforts to achieve an inter-institutional agreement on a mandatory transparency register for lobbyists, accessible in a machine-readable format, are indispensable for enhancing the openness of the Union decision-making process and the accountability of the Union institutions; again strongly regrets that the Council has not joined the transparency register scheme; calls on the Council to continue taking part in the discussions on the establishment of a common register together with Parliament, which agreed to restart the negotiations in March 2020, and the Commission, in order to make registration de facto mandatory for lobbyists who want to meet Union decision-makers; calls again on all Member State presidency teams to lead by example by refusing meetings with unregistered lobbyists;
16. Welcomes the Council’s positive response to the European Ombudsman’s recommendation in case 1069/2019/MIG on sponsorship of the Presidency of the Council of the European Union; takes note of the draft guidance sent by the General Secretariat of the Council to the Member State delegations on 29 June 2020; reiterates that any actual or perceived conflict of interest jeopardises the reputation of the Council and of the Union as a whole; calls on the Council to reflect on the non-binding character of the guidance; urges the Council to follow up on the issue without delay;
17. Emphasises the importance of allowing citizens to follow the Union legislative process easily; reminds the Council to align its working methods with the standards of a parliamentary democracy, as required by the Treaties; reminds the Council to systematically follow up on all the recommendations contained in the European Ombudsman’s decision in strategic inquiry OI/2/2017/TE on the transparency of the Council legislative process; recalls that Parliament encouraged the European Ombudsman to continue to follow up on her inquiry;
18. Calls on the Council to step up its transparency efforts by, inter alia, publishing Council legislative documents including minutes of working group meetings and trilogues and other milestone working documents in line with the European Ombudsman’s recommendations; welcomes the improvements on the Council’s website, in particular related to transparency and access to documents; welcomes the clear chapters on legislative transparency, agendas and calendar of Council meetings and minutes and voting lists; acknowledges that the Council has been taking steps to foster a stronger culture of transparency;
19. Reiterates its deep concerns about the corporate sponsorship of Member States hosting the Union Presidency and echoes the concerns expressed by Union citizens and Members on the matter; is highly concerned by the possible reputational damage and the risk of loss of trust that this practice might cause the Union, its institutions and especially the Council in the eyes of Union citizens; moreover, strongly recommends that the Council envisage the budgetisation of Presidencies; requests the Council to forward this concern to the Member States, in particular to the current Presidency trio, and calls on the current Presidency trio to take these recommendations into serious consideration and to report back to Parliament;
20. Reiterates its deep concern over the alleged conflicts of interest of a number of Member State representatives involved in policy and budget decision-making processes; asks the Council to ensure that Member State representatives who personally benefit from Union subsidies do not participate in the related policy or budgetary discussions and votes.
European Parliament resolution of 20 October 2020 with recommendations to the Commission on the Digital Services Act: Improving the functioning of the Single Market (2020/2018(INL))
– having regard to Article 225 of the Treaty on the Functioning of the European Union,
– having regard to Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘Directive on electronic commerce’)(1),
– having regard to Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services(2),
– having regard to Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services(3),
– having regard to Directive (EU) 2019/771 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the sale of goods, amending Regulation (EU) 2017/2394 and Directive 2009/22/EC, and repealing Directive 1999/44/EC(4),
– having regard to Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council (“Unfair Commercial Practices Directive”)(5),
– having regard to Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011(6),
– having regard to Directive 2006/123/EC of the European Parliament and of the Council of 12 December 2006 on services in the internal market(7),
– having regard to its resolution of 21 September 2010 on completing the internal market for e-commerce(8),
– having regard to its resolution of 15 June 2017 on online platforms and the digital single market(9),
– having regard to the Communication from the Commission of 11 January 2012, entitled “A coherent framework for building trust in the Digital Single Market for e-commerce and online services” (COM(2011)0942),
– having regard to the Commission Recommendation (EU) 2018/334 of 1 March 2018 on measures to effectively tackle illegal content online(10) and the Communication from the Commission of 28 September 2017, entitled “Tackling Illegal Content Online: Towards an enhanced responsibility of online platforms” (COM(2017)0555),
– having regard to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 26 April 2018 on Tackling online disinformation: a European Approach (COM(2018)0236), which covers false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm,
– having regard to the Memorandum of Understanding on the sale of counterfeit goods via the internet of 21 June 2016 and its review in the Communication from the Commission to the European Parliament, the Council and the European Economic and Social Committee of 29 November 2017, entitled “A balanced IP enforcement system responding to today’s societal challenges” (COM(2017)0707),
– having regard to the opinion of the Committee of the Regions (ECON-VI/048) of 5 December 2019 on “A European framework for regulatory responses to the collaborative economy”,
– having regard to Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)(11),
– having regard to Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC(12),
– having regard to Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications)(13),
– having regard to Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases(14), Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society(15) and Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive)(16),
– having regard to the Communication from the Commission of 10 March 2020, entitled “An SME Strategy for a sustainable and digital Europe” (COM(2020)0103),
– having regard to the White Paper of 19 February 2020, entitled “On Artificial Intelligence – A European approach to excellence and trust” (COM(2020)0065),
– having regard to the communication from the Commission of 19 February 2020, entitled “Shaping Europe’s digital future” (COM(2020)0067),
– having regard to the commitments made by the Commission in its “Political Guidelines for the next European Commission 2019-2024”,
– having regard to the study by the European Parliamentary Research Service, entitled “Mapping the cost of Non-Europe 2019-2024” that shows that the potential gain of completing the Digital Single Market for services could be up to €100 billion,
– having regard to the study by the European Parliament’s Policy Department for Economic, Scientific and Quality of Life Policies, entitled “The e-commerce Directive as the cornerstone of the Internal Market” that highlights four priorities for improving the e-Commerce Directive,
– having regard to the studies provided by the Policy Department for Economic, Scientific and Quality of Life Policies for the workshop on “E-commerce rules, fit for the digital age” organised by the Internal Market and Consumer Protection (IMCO) committee,
– having regard to the European added value assessment study carried out by the European Parliamentary Research Service, entitled “Digital Services Act: European added value assessment”(17),
– having regard to the Vade-Mecum to Directive 98/48/EC, which introduces a mechanism for the transparency of regulations on information society services,
– having regard to Rules 47 and 54 of its Rules of Procedure,
– having regard to the opinions of the Committee on Transport and Tourism, Committee on Culture and Education, Committee on Legal Affairs and Committee on Civil Liberties, Justice and Home Affairs,
– having regard to the report of the Committee on the Internal Market and Consumer Protection (A9-0181/2020),
A. whereas e-commerce influences the everyday lives of people, businesses and consumers in the Union, and when operated in a fair and regulated level playing field, may contribute positively to unlocking the potential of the Digital Single Market, enhance consumer trust and provide newcomers, including micro, small and medium enterprises, with new market opportunities for sustainable growth and jobs;
B. whereas Directive 2000/31/EC (“the E-Commerce Directive”) has been one of the most successful pieces of Union legislation and has shaped the Digital Single Market as we know it today; whereas, given that the E-Commerce Directive was adopted 20 years ago, the Digital Services Act package (“DSA”) should take into account the rapid transformation and expansion of e-commerce in all its forms, with its multitude of emerging services, products, providers and challenges, as well as the various pieces of sector-specific legislation; whereas, since the adoption of the E-Commerce Directive, the European Court of Justice (“the Court”) has issued a number of judgments in relation to it;
C. whereas currently Member States have a fragmented approach to tackling illegal content online; whereas, as a consequence, the service providers concerned can be subject to a range of different legal requirements which are diverging as to their content and scope; whereas there seems to be a lack of enforcement and cooperation between Member States, and challenges with the existing legal framework;
D. whereas digital services need to fully comply with rules related to fundamental rights, especially privacy, the protection of personal data, non-discrimination and the freedom of expression and information, as well as media pluralism and cultural diversity and the rights of the child, as enshrined in the Treaties and the Charter of Fundamental rights of the European Union (“the Charter”);
E. whereas in its Communication “Shaping Europe’s digital future”, the Commission committed itself to adopting, as part of the DSA, new and revised rules for online platforms and information service providers, to reinforcing the oversight over platforms’ content policies in the Union, and to looking into ex ante rules;
F. whereas the COVID-19 pandemic has brought new social and economic challenges that deeply affect citizens and the economy; whereas, at the same time, the COVID-19 pandemic is showing the resilience of the e-commerce sector and its potential as a driver for relaunching the European economy; whereas the pandemic has also exposed shortcomings of the current regulatory framework in particular with regard to consumer protection acquis; whereas that calls for action at Union level to have a more coherent and coordinated approach to address the difficulties identified and to prevent them from happening in the future;
G. whereas the COVID-19 pandemic has also shown how vulnerable EU consumers are to misleading trading practices by dishonest traders selling illegal products online that do not comply with Union safety rules, or imposing other unfair conditions on consumers; whereas the COVID-19 pandemic has shown in particular that platforms and online intermediation services need to improve their efforts to detect and take down false claims and to tackle the misleading practices of rogue traders in a consistent and coordinated manner, in particular of those selling false medical equipment or dangerous products online; whereas the Commission welcomed the platforms’ approach after sending them letters on 23 March 2020; whereas there is a need for action at Union level to achieve a more coherent and coordinated approach to combating these misleading practices and protecting consumers;
H. whereas the DSA should ensure a comprehensive protection of the rights of consumers and users in the Union and therefore, its territorial scope should cover the activities of information society service providers established in third countries when their services, falling within the scope of the DSA, are directed at consumers or users in the Union;
I. whereas the DSA should clarify the nature of the digital services, falling within its scope, while maintaining the horizontal nature of the E-Commerce Directive, applying not only to online platforms, but to all providers of information society services as defined in Union law;
J. whereas the DSA should be without prejudice to Regulation (EU) 2016/679 (“GDPR”) setting out a legal framework to protect personal data, Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market, Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services, and Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector;
K. whereas the DSA should not affect Directive 2005/29/EC as amended by Directive (EU) 2019/2161, as well as Directives (EU) 2019/770 and (EU) 2019/771 on certain aspects concerning contracts for the supply of digital content and digital services and contracts for the sale of goods, and Regulation (EU) 2019/1150 on promoting fairness and transparency for business users of online intermediation services;
L. whereas the DSA should be without prejudice to the framework set out by Directive 2006/123/EC on services in the internal market;
M. whereas certain types of illegal content, constituting a major cause for concern, have already been defined in national and Union law, such as illegal hate speech, and should not be redefined in the DSA;
N. whereas enhancing transparency and helping citizens to acquire media and digital literacy regarding the dissemination of harmful content, hate speech and disinformation, as well as to develop critical thinking, and strengthening independent professional journalism and quality media will help promote diverse and quality content;
O. whereas the WHOIS database is a publicly accessible database that has been a useful instrument for finding the owner of a particular domain name on the internet, as well as the registration details and contact person for each domain name;
P. whereas the DSA should aim at ensuring legal certainty and clarity, including in the short-term rental market and mobility services, by promoting transparency and clearer information obligations;
Q. whereas the Commission’s agreement with certain platforms of the short-term rental sector on data sharing, reached in March 2020, will enable local authorities to better understand the development of the collaborative economy and will allow for reliable and continuous data sharing and evidence-based policy making; whereas further steps to initiate a more comprehensive data-sharing framework for short-term rental online platforms are needed;
R. whereas the COVID-19 pandemic had a serious impact on the Union tourism sector and showed the need to continue supporting cooperation on green corridors in order to ensure the smooth functioning of Union supply chains and movement of goods across the Union transport network;
S. whereas the evolving development and use of internet platforms for a wide set of activities, including commercial activities, transport, tourism and the sharing of goods and services, have changed the ways in which users and companies interact with content providers, traders and other individuals offering goods and services; whereas the Digital Single Market cannot succeed without users’ trust in online platforms that respect all applicable legislation and users’ legitimate interests; whereas any future regulatory framework should also address intrusive business models, including behavioural manipulation and discriminatory practices, which significantly harm the functioning of the Single Market and users’ fundamental rights;
T. whereas Member States should make efforts to improve access to, and the efficiency of, their justice and law enforcement systems in relation to determining the illegality of online content and in relation to dispute resolution concerning removal of content or disabling access;
U. whereas the DSA requirements should be easy to implement in practice by providers of information society services; whereas online intermediaries might encrypt or otherwise prevent access to content by third parties, including the hosting intermediaries storing the content itself;
V. whereas an effective way to decrease illegal activities is to allow new innovative business models to flourish and to strengthen the Digital Single Market by removing unjustified barriers to the free movement of digital content; whereas barriers which create fragmented national markets help create a demand for illegal content;
W. whereas digital services should provide consumers with direct and efficient means of user-friendly, easily identifiable and accessible communication, such as email addresses, electronic contact forms, chatbots, instant messaging or telephone callback, and should provide for the information relating to those means of communication to be accessible to consumers in a clear, comprehensible and, where possible, uniform manner, and for consumers’ requests to be directed between the different underlying digital services of the digital service provider;
X. whereas the DSA should guarantee the right of consumers to be informed if a service is enabled by artificial intelligence (“AI”) or makes use of automated decision-making, machine learning tools or automated content recognition tools; whereas the DSA should offer the possibility to opt out of, limit or personalise the use of any automated personalisation features, especially in view of rankings, and, more specifically, offer the possibility to see content in a non-curated order and give users more control over the way content is ranked;
Y. whereas the protection of personal data, subject to automated decision-making processes, is already covered, among others, by the GDPR and the DSA should not seek to repeat or amend such measures;
Z. whereas the Commission should ensure that the DSA preserves the human centric approach to AI, in line with the existing rules on free movement of AI enabled services, while respecting the fundamental values and rights as enshrined in the Treaties;
AA. whereas the national supervisory authorities, where allowed by Union law, should have access to the software documentation and data sets of algorithms under review;
AB. whereas the concepts of transparency and explainability of algorithms should be understood as requiring that the information provided for the user is presented in a concise, transparent, intelligible and easily accessible form, using clear and plain language;
AC. whereas it is important to lay down measures to ensure effective enforcement and supervision; whereas compliance with the provisions should be reinforced with effective, proportionate and dissuasive penalties, including the imposition of proportionate fines;
AD. whereas the DSA should balance the rights of all users and ensure that its measures are not drafted to favour one legitimate interest over another and to prevent the use of measures as offensive tools in any conflicts between businesses or sectors;
AE. whereas the ex ante internal market mechanism should apply where competition law alone is insufficient to adequately address identified market failures;
AF. whereas the legislative measures proposed as part of the DSA should be evidence-based; whereas the Commission should carry out a thorough impact assessment, based on relevant data, statistics, analyses and studies of the different options available; whereas that impact assessment should also assess and analyse unsafe and dangerous products sold through online marketplaces; whereas the impact assessment should also take into account the lessons learned from the COVID-19 pandemic and the resolutions of the European Parliament; whereas the DSA should be accompanied by implementation guidelines;
General principles
1. Welcomes the Commission’s commitment to submit a proposal for a Digital Services Act package (“DSA”), which should consist of a proposal amending the E-Commerce Directive and a proposal for ex ante rules on systemic operators with a gatekeeper role, on the basis of Article 225 of the Treaty on the Functioning of the European Union (TFEU); calls on the Commission to submit such a package on the basis of Articles 53(1), 62 and 114 TFEU, following the recommendations set out in the Annex to this resolution, on the basis of a thorough impact assessment which should include information on the financial implications of the proposals and be based on relevant data, statistics and analyses;
2. Recognises the importance of the legal framework set out by the E-Commerce Directive in the development of online services in the Union and believes that the principles that governed the legislators when regulating information society services providers in the late 90s are still valid and should be used when drafting any future proposals; highlights that the legal certainty brought by the E-Commerce Directive has provided small and medium enterprises (SMEs) with the opportunity to expand their business and to operate more easily across borders;
3. Is of the opinion that all providers of digital services established outside the Union must adhere to the rules of the DSA when directing services to the Union, in order to ensure a level playing field between European and third-country digital service providers; asks the Commission to evaluate, in addition, whether there is a risk of retaliatory measures by third countries, while raising awareness of how Union law applies to service providers from third countries targeting the Union market;
4. Underlines the central role that the internal market clause, establishing the home country control and the obligation on Member States to ensure the free movement of information society services, has played in the development of the Digital Single Market; stresses the need to address the remaining unjustified and disproportionate barriers to the provision of digital services, such as complex administrative procedures, costly cross-border disputes settlements and access to information on the relevant regulatory requirements, including on taxation, as well as to ensure that no new unjustified and disproportionate barriers are created;
5. Notes that under the Union rules on free movement of services, Member States may take measures to protect legitimate public interest objectives, such as protection of public policy, public health, public security, consumer protection, combating the rental housing shortage, and prevention of tax evasion and avoidance, provided that those measures comply with the principles of non-discrimination and proportionality;
6. Considers that the main principles of the E-Commerce Directive, such as the internal market clause, freedom of establishment, the freedom to provide services and the prohibition on imposing a general monitoring obligation should be maintained; underlines that the principle of “what is illegal offline is also illegal online”, as well as the principles of consumer protection and user safety, should also become guiding principles of the future regulatory framework;
7. Highlights the importance of collaborative economy platforms, including in the transport and tourism sectors, on which services are provided by both individuals and professionals; calls on the Commission, following a consultation with all relevant stakeholders, to initiate a more comprehensible framework for the sharing of non-personal data and for coordination between platforms and national, regional and local authorities, aiming especially at sharing best practices and establishing a set of information obligations, in line with the EU Data Strategy;
8. Notes that the data protection regime has been significantly updated since the adoption of the E-Commerce Directive and emphasises that the rapid development of digital services requires a strong, future-proof legislative framework to protect personal data and privacy; stresses in this regard that digital service providers need to comply with the requirements of Union data protection law, namely the GDPR and Directive 2002/58/EC (“the e-Privacy Directive”), currently under revision, and with the broad framework of fundamental rights, including the freedom of expression, dignity and non-discrimination and the right to an effective judicial remedy, and to ensure the security and safety of their systems and services;
9. Believes that the DSA should ensure consumer trust and clearly establish that consumer law and product safety requirements are complied with, in order to ensure legal certainty; points out that the DSA should pay special attention to users with disabilities and guarantee the accessibility of information society services; asks the Commission to encourage service providers to develop technical tools that allow persons with disabilities to effectively access, use and benefit from information society services;
10. Stresses the importance of maintaining the horizontal approach of the E-Commerce Directive; stresses that a “one-size-fits-all” approach is not suitable to address all the new challenges in today’s digital landscape and that the diversity of actors and services offered online needs a tailored regulatory approach; recommends distinguishing between economic and non-economic activities, and between the different types of digital services hosted by platforms, rather than focusing on the type of the platform; considers, in this context, that any future legislative proposals should seek to ensure that new Union obligations on information society service providers are proportionate and clear in nature;
11. Recalls that a large number of legislative and administrative decisions and contractual relationships use the definitions and the rules of the E-Commerce Directive, and that any change to them will, therefore, have important consequences;
12. Stresses that a predictable, future-proof, clear and comprehensive Union-level framework and fair competition are crucial in order to promote the growth of all European businesses, including small-scale platforms, SMEs (including micro companies), entrepreneurs and start-ups, to increase the cross-border provision of information society services, to remove market fragmentation and to provide European businesses with a level playing field that enables them to take full advantage of the digital services market and to be globally competitive;
13. Underlines that the future internal market instrument on ex ante rules on systemic platforms and the announced new Competition Tool aiming at addressing gaps in competition law should be kept as separate legal instruments;
14. Recalls that the E-Commerce Directive was drafted in a technologically neutral manner to ensure that it is not rendered obsolete by technological developments arising from the fast pace of innovation in the IT sector and stresses that the DSA should continue to be future-proof and applicable to the emergence of new technologies with an impact on the digital single market; asks the Commission to ensure that any revisions continue to be technology-neutral in order to guarantee long-lasting benefits to businesses and consumers;
15. Takes the view that a level playing field in the internal market between the platform economy and the offline economy, based on the same rights and obligations for all interested parties - consumers and businesses - is needed; considers that the DSA should not tackle the issue of platform workers; believes therefore that social protection and social rights of workers, including of platform or collaborative economy workers, should be properly addressed in a separate instrument, in order to provide an adequate and comprehensive response to the challenges of today’s digital economy;
16. Considers that the DSA should be based on the common values of the Union that protect citizens’ rights and should aim to foster the creation of a rich and diverse online ecosystem with a wide range of online services, a competitive digital environment, transparency and legal certainty to unlock the full potential of the Digital Single Market;
17. Considers that the DSA provides an opportunity for the Union to shape the digital economy, not only at Union level, but also to be a standard-setter for the rest of the world;
Fundamental rights and freedoms
18. Notes that information society services providers, and in particular online platforms, including social networking sites, have a far-reaching ability to reach and influence broader audiences, behaviour, opinions and practices, including vulnerable groups such as minors, and should comply with Union law on protecting users, their data and society at large;
19. Recalls that recent scandals regarding data harvesting and selling, such as Cambridge Analytica, fake news, disinformation, voter manipulation and a host of other online harms (from hate speech to the broadcast of terrorism) have shown the need to work on better enforcement and closer cooperation among Member States in order to understand the advantages and shortcomings of the existing rules and to reinforce the protection of fundamental rights online;
20. Recalls in this respect that certain established self-regulatory and co-regulatory schemes, such as the Union’s Code of Practice on Disinformation, have helped to structure a dialogue with platforms and regulators; suggests that online platforms should put in place effective and appropriate safeguards, in particular to ensure that they act in a diligent, proportionate and non-discriminatory manner, and to prevent the unintended removal of content which is not illegal; such measures should not lead to any mandatory ‘upload-filtering’ of content, which would not comply with the prohibition of general monitoring obligations; suggests that measures to combat harmful content, hate speech and disinformation should be regularly evaluated and developed further;
21. Reiterates the importance of guaranteeing freedom of expression, information and opinion, and of having a free and diverse press and media landscape, also in view of the protection of independent journalism; insists on the protection and promotion of freedom of expression and on the importance of having a diversity of opinions, information, the press, media and artistic and cultural expressions;
22. Stresses that the DSA should strengthen the internal market freedoms and guarantee the fundamental rights and principles set out in the Charter; stresses that consumers’ and users’ fundamental rights, including those of minors, should be protected from harmful online business models, including those conducting digital advertising, as well as from behavioural manipulation and discriminatory practices;
23. Emphasises the importance of user empowerment with regard to the enforcement of their own fundamental rights online; reiterates that digital service providers must respect and enable their users’ right to data portability as laid down in Union law;
24. Points out that biometric data is considered to be a special category of personal data with specific rules for processing; notes that biometrics can be, and increasingly are, used for the identification and authentication of individuals, which, regardless of its potential advantages, entails significant risks to, and serious interferences with, the rights to privacy and data protection, particularly when carried out without the consent of the data subject, as well as enabling identity fraud; calls for the DSA to ensure that digital service providers store biometric data only on the device itself, unless central storage is permitted by law, always give users of digital services an alternative to using biometric data set by default for the functioning of a service, and are obliged to clearly inform customers of the risks of using biometric data;
25. Stresses that, in the spirit of the case-law on communications metadata, public authorities shall be given access to a user’s subscriber data and metadata only to investigate suspects of serious crimes with prior judicial authorisation; is convinced, however, that digital service providers must not retain data for law enforcement purposes unless a targeted retention of an individual user’s data is directly ordered by an independent competent public authority, in line with Union law;
26. Stresses the importance of applying effective end-to-end encryption to data, as it is essential for trust in and security on the internet, and effectively prevents unauthorised third-party access;
Transparency and consumer protection
27. Notes that the COVID-19 pandemic has shown the importance and resilience of the e-commerce sector and its potential as a driver for relaunching the European economy, but at the same time how vulnerable EU consumers are to misleading trading practices by dishonest traders selling counterfeit, illegal or unsafe products, and providing services online that are not compliant with Union safety rules or who impose unjustified and abusive price increases or other unfair conditions on consumers; stresses the urgent need to step up enforcement of Union rules and to enhance consumer protection;
28. Stresses that this problem is aggravated by difficulties in establishing the identity of fraudulent business users, thus making it difficult for consumers to seek compensation for the damages and losses experienced;
29. Considers that the current transparency and information requirements set out in the E-Commerce Directive on information society services providers and their business customers, and the minimum information requirements on commercial communications, should be strengthened in parallel with measures to increase compliance with existing rules, without harming the competitiveness of SMEs;
30. Calls on the Commission to reinforce the information requirements set out in Article 5 of the E-Commerce Directive and to require hosting providers to compare the information and identity of the business users with whom they have a direct commercial relationship with the identification data in the relevant existing and available Union databases, in compliance with data protection legislation; hosting providers should ask their business users to ensure that the information they provide is accurate, complete and up to date, and should be entitled and obliged to refuse or cease to provide their services to the latter if the information about the identity of their business users is false or misleading; business users should be the ones in charge of notifying the service provider of any change in their business activity (for example, cessation of business activity);
31. Calls on the Commission to introduce enforceable obligations on information society service providers aiming at increasing transparency, information and accountability; calls on the Commission to ensure that enforcement measures are targeted in a way that takes into account the different services and does not inevitably lead to a breach of privacy and legal process; considers that those obligations should be proportionate and enforced by appropriate, effective, proportionate and dissuasive penalties;
32. Stresses that existing obligations, set out in the E-Commerce Directive and the Unfair Commercial Practices Directive on transparency of commercial communications and digital advertising, should be strengthened; points out that pressing consumer protection concerns about profiling, targeting and personalised pricing should be addressed, among others, by clear transparency obligations and information requirements;
33. Stresses that online consumers find themselves in an unbalanced relationship with service providers and traders offering services supported by advertising revenue and advertisements that directly target individual consumers, based on information collected through big data and AI mechanisms; notes the potential negative impact of personalised advertising, in particular micro-targeted and behavioural advertising; calls, therefore, on the Commission to introduce additional rules on targeted advertising and micro-targeting based on the collection of personal data, and to consider regulating micro-targeted and behavioural advertising more strictly in favour of less intrusive forms of advertising that do not require extensive tracking of user interaction with content; urges the Commission also to consider introducing legislative measures to make online advertising more transparent;
34. Underlines the importance, in view of the development of digital services, of the obligation for Member States to ensure that their legal system allows for contracts to be concluded by electronic means, while ensuring a high level of consumer protection; invites the Commission to review the existing requirements on contracts concluded by electronic means, including as regards notifications by Member States, and to update them if necessary; notes, in that context, the rise of “smart contracts”, such as those based on distributed ledger technologies, and asks the Commission to assess the development and use of distributed ledger technologies, including “smart contracts”, in particular as regards questions of the validity and enforcement of smart contracts in cross-border situations, to provide guidance thereon in order to ensure legal certainty for businesses and consumers, and to take legislative initiatives only if concrete gaps are identified following that assessment;
35. Calls on the Commission to introduce minimum standards for contract terms and general conditions, in particular with regard to transparency, accessibility, fairness and non-discriminatory measures, and to further review the practice of pre-formulated standard clauses in contract terms and conditions, which have not been individually negotiated in advance, including End-User Licensing Agreements, to seek ways of making them fairer and to ensure compliance with Union law, in order to allow easier engagement for consumers, including in the choice of clauses, to make it possible to obtain better informed consent;
36. Stresses the need to improve the efficiency of electronic interactions between businesses and consumers in light of the development of virtual identification technologies; considers that, in order to ensure the effectiveness of the DSA, the Commission should also update the regulatory framework on digital identification, namely Regulation (EU) No 910/2014(18) (“the eIDAS Regulation”); considers that the creation of a universally accepted, trusted digital identity and trusted authentication systems would be a useful tool, allowing the individual identities of natural persons, legal entities and machines to be securely established in order to protect against the use of fake profiles; notes, in this context, the importance for consumers of being able to securely use or purchase products and services online without having to use unrelated platforms and unnecessarily share data, including personal data, which is collected by those platforms; calls on the Commission to carry out a thorough impact assessment with regard to the creation of a universally accepted public electronic identity as an alternative to private single sign-in systems, and underlines that this service should be developed so that the data gathered is kept to an absolute minimum; considers that the Commission should assess the possibility of creating an age verification system for users of digital services, especially in order to protect minors;
37. Stresses that the DSA should not affect the principle of data minimisation established by the GDPR and that, unless otherwise required by specific legislation, intermediaries of digital services should enable the anonymous use of their services to the maximum extent possible and process only the data necessary for the identification of the user; stresses that such collected data should not be used for any digital services other than those that require personal identification, authentication or age verification, that it should only be used for a legitimate purpose, and in no way to restrain general access to the internet;
AI and machine learning
38. Stresses that while AI-driven services or services making use of automated decision-making tools or machine learning tools, currently governed by the E-Commerce Directive, have the enormous potential to deliver benefits to consumers and service providers, the DSA should address the concrete challenges they pose in terms of ensuring non-discrimination, transparency, including on the datasets used and on targeted outputs, and understandable explanation of algorithms, as well as liability, which are not addressed in existing legislation;
39. Stresses furthermore that underlying algorithms need to fully comply with requirements on fundamental rights, especially privacy, the protection of personal data, the freedom of expression and information, the right to an effective judicial remedy, and the rights of the child, as enshrined in the Treaties and the Charter;
40. Considers that it is essential to ensure the use of high quality, non-discriminatory and unbiased underlying datasets, as well as to help individuals acquire access to diverse content, opinions, high quality products and services;
41. Calls on the Commission to introduce transparency and accountability requirements regarding automated decision-making processes, while ensuring compliance with requirements on user privacy and trade secrets; points out the need to allow for external regulatory audits, case-by-case oversight and recurrent risk assessments by competent authorities and to assess associated risks, in particular risks to consumers or third parties, and considers that measures taken to prevent those risks should be justified and proportionate, and should not hamper innovation; believes that the ‘human in command’ principle must be respected, inter alia, to prevent the rise of health and safety risks, discrimination, undue surveillance, or abuses, or to prevent potential threats to fundamental rights and freedoms;
42. Considers that consumers and users should have the right to be properly informed in a timely, concise and easily understandable and accessible manner, and that their rights should be effectively guaranteed when they interact with automated decision-making systems and other innovative digital services or applications; expresses concerns with regard to the existing lack of transparency as to the use of virtual assistants or chatbots, which may be particularly harmful to vulnerable consumers and underlines that digital service providers should not exclusively use automated decision-making systems for consumer support;
43. Believes, in that context, that it should be possible for consumers to be clearly informed when interacting with automated decision-making, and about how to reach a human with decision-making powers, how to request checks and corrections of possible mistakes resulting from automated decisions, as well as to seek redress for any damage related to the use of automated decision-making systems;
44. Underlines the importance of strengthening consumer choice, consumer control and consumer trust in AI services and applications; believes, therefore, that the set of rights of consumers should be expanded to better protect them in the digital world, and calls on the Commission to consider in particular accountability and fairness criteria, control, and the right to non-discrimination and unbiased AI datasets; considers that consumers and users should have more control over how AI is used and the possibility to refuse, limit or personalise the use of any AI-enabled personalisation features;
45. Notes that automated content moderation tools are incapable of effectively understanding the subtlety of context and meaning in human communication, which is necessary to determine whether assessed content may be considered to violate the law or terms of service; stresses therefore that the use of such tools should not be imposed by the DSA;
Tackling Illegal Content and Activities Online
46. Stresses that the existence and spread of illegal content and activities online is a severe threat that undermines citizens’ trust and confidence in the digital environment, harms the development of healthy digital ecosystems, and may also have serious and long-lasting consequences for the safety and fundamental rights of individuals; notes that, at the same time, illegal content and activities can be proliferated easily and their negative impact amplified within a very short period of time;
47. Notes that there is no ‘one size fits all’ solution to all types of illegal content and activities; stresses that content that might be illegal in some Member States, may not be ‘illegal’ in others, as only some types of illegal content are harmonised in the Union; calls for a strict distinction to be made between illegal content, punishable acts and illegally shared content on the one hand, and harmful content, hate speech and disinformation on the other, which are not always illegal and cover many different aspects, approaches and rules applicable in each case; takes the position that the legal liability regime should concern illegal content only as defined in Union or national law;
48. Believes, however, that, without prejudice to the broad framework of fundamental rights and existing sector-specific legislation, a more aligned and coordinated approach at Union level, taking into account the different types of illegal content and activities and based on cooperation and exchange of best practices between the Member States, will help address illegal content more effectively; underlines also the need to adapt the severity of the measures that need to be taken by service providers to the seriousness of the infringement and calls for improved cooperation and exchange of information between competent authorities and hosting service providers;
49. Considers that voluntary actions and self-regulation by online platforms across Europe have brought some benefits, but that a clear legal framework for the removal of illegal content and activities is needed in order to ensure the swift notification and removal of such content online; underlines the need to avoid imposing on digital service providers a general obligation to monitor the information which they transmit or store, and to prevent them from being required, whether de jure or de facto, to actively seek, moderate or filter all content and activities; underlines that illegal content should be removed where it is hosted, and that access providers shall not be required to block access to content;
50. Calls on the Commission to ensure that online intermediaries who, on their own initiative, take allegedly illegal content offline do so in a diligent, proportionate and non-discriminatory manner, and with due regard in all circumstances to the fundamental rights and freedoms of users; underlines that any such measures should be accompanied by robust procedural safeguards and meaningful transparency and accountability requirements; asks that, where any doubt exists as to the ‘illegal’ nature of content, that content be subject to human review and not be removed without further investigation;
51. Asks the Commission to present a study on the removal of content and data before and during the COVID-19 pandemic by automated decision-making processes and on the level of removals in error (false positives) that were included in the number of items removed;
52. Calls on the Commission to address the increasing differences and fragmentation of national rules in the Member States and to adopt clear and predictable harmonised rules and a transparent, effective and proportionate notice-and-action mechanism; such a mechanism should provide sufficient safeguards, empower users to notify online intermediaries of the existence of potentially illegal online content or activities, and help online intermediaries to react quickly and to be more transparent about the actions taken on potentially illegal content; is of the opinion that such measures should be technology-neutral and easily accessible to all actors in order to guarantee a high level of protection of users and consumers;
53. Stresses that such a ‘notice-and-action’ mechanism must be human-centric; underlines that safeguards against the abuse of the system should be introduced, including against repeated false flagging, unfair commercial practices and other schemes; urges the Commission to ensure access to transparent, effective, fair, and expeditious counter-notice and complaint mechanisms and out-of-court dispute settlement mechanisms and to guarantee the possibility to seek judicial redress against content removal to satisfy the right to effective remedy;
54. Welcomes efforts to bring transparency to content removal; calls on the Commission to ensure that reports with information about the notice-and-action mechanisms, such as the number of notices, the type of entities notifying content, the nature of the content subject to a complaint, the response time of the intermediary, the number of appeals and the number of cases where content was misidentified as illegal or as illegally shared, are made publicly available;
55. Notes the challenges concerning the enforcement of legal injunctions issued within Member States other than the country of origin of a service provider and stresses the need to investigate this issue further; maintains that hosting service providers shall not be required to remove or disable access to information that is legal in their country of origin;
56. Stresses that the responsibility for enforcing the law, for deciding on the legality of online activities and content, and for ordering hosting service providers to remove or disable access to illegal content, as well as for ensuring that those orders are accurate, well-founded and respect fundamental rights, rests with independent competent public authorities;
57. Stresses that maintaining the safeguards of the legal liability regime for online intermediaries set out in Articles 12, 13 and 14 of the E-Commerce Directive and the general monitoring prohibition set out in Article 15 of the E-Commerce Directive is pivotal for facilitating the free movement of digital services, for ensuring the availability of content online and for protecting the fundamental rights of users, and that they need to be preserved; underlines, in this context, that the legal liability regime and the ban on general monitoring should not be weakened via a possible new piece of legislation or the amendment of other sections of the E-Commerce Directive;
58. Acknowledges the principle that digital services playing a neutral and passive role, such as backend and infrastructure services, are not responsible for the content transmitted over their services because they have no control over that content, have no active interaction with it or do not optimise it; stresses however, that further clarification regarding active and passive role by taking into account the case-law of the Court on the matter is needed;
59. Calls on the Commission to consider a requirement for hosting service providers to report to the competent law enforcement authority illegal content which may constitute a serious crime, upon becoming aware of it;
Online marketplaces
60. Notes that, while the emergence of online service providers, such as online marketplaces, has benefited both consumers and traders, notably by improving choice, reducing costs and lowering prices, it has also made consumers more vulnerable to misleading trading practices by an increasing number of sellers, including from third countries, who are able to offer online illegal, unsafe or counterfeit products and services which often do not comply with Union rules and standards on product safety, and do not sufficiently guarantee consumer rights;
61. Stresses that consumers should be equally safe when shopping online or in stores; stresses that it is unacceptable that Union consumers are exposed to illegal, counterfeit and unsafe products, containing dangerous chemicals, as well as other safety hazards that pose risks to human health; insists on the necessity to introduce appropriate safeguards and measures for product safety and consumer protection in order to prevent the sale of non-compliant products or services on online marketplaces, and calls on the Commission to reinforce the liability regime on online marketplaces;
62. Stresses the importance of the rules of Regulation (EU) 2019/1020 on market surveillance and compliance of products concerning the conformity of products entering the Union from third countries; calls on the Commission to take measures to improve compliance with legislation by sellers established outside the Union where there is no manufacturer, importer or distributor established in the Union, and to remedy any current legal loophole which allows suppliers established outside the Union to sell online to European consumers products which do not comply with Union rules on safety and consumer protection, without being sanctioned or liable for their actions and leaving consumers with no legal means to enforce their rights or to be compensated for any damage; stresses, in this context, the need for it to always be possible to identify manufacturers and sellers of products from third countries;
63. Emphasises the need for online marketplaces to inform consumers promptly once a product they have purchased has been removed from the marketplace following a notification on its non-compliance with Union product safety or consumer protection rules;
64. Stresses the need to ensure that the providers of online marketplaces consult RAPEX and notify competent authorities as soon as they become aware of illegal, unsafe and counterfeit products on their platforms;
65. Considers that the providers of online marketplaces should enhance their cooperation with market surveillance authorities and the customs authorities, including by exchanging information on the seller of illegal, unsafe and counterfeit products;
66. Calls on the Commission to urge Member States to undertake more joint market surveillance actions and to step up collaboration with customs authorities in order to check the safety of products sold online before they reach consumers; asks the Commission to explore the possibility of the creation of an international network of consumer centres to help EU consumers in handling disputes with traders based in non-EU countries;
67. Asks the Commission to ensure that where online marketplaces offer professional services, a sufficient level of consumer protection is achieved through adequate safeguards and information requirements;
68. Believes that, in the tourism and transport market, the DSA should aim at ensuring legal certainty and clarity by creating a governance framework formalising the cooperation between platforms and national, regional and local authorities, aiming especially at sharing best practices and establishing a set of information obligations for short-term rental and mobility platforms vis-à-vis their service providers concerning relevant national, regional and local legislation; calls on the Commission to further remove unjustified barriers by devising a sector-specific, EU-coordinated effort involving all stakeholders to agree on sets of criteria, such as permits, licences or, where applicable, a local or national registration number of a service provider, in line with Single Market rules, necessary to offer a service on a short-term rental or mobility platform; stresses the importance of avoiding the imposition of disproportionate information obligations and unnecessary administrative burdens on all providers of services, with particular emphasis on peer-to-peer service providers and SMEs;
69. Calls for the DSA, in line with the European Green Deal, to promote the sustainable growth and sustainability of e-commerce; stresses the importance of online marketplaces for promoting sustainable products and services and for encouraging sustainable consumption; calls for measures to tackle misleading practices and disinformation regarding products and services offered online, including false ‘environmental claims’, and calls on the providers of online marketplaces to promote the sustainability of e-commerce by providing consumers with clear and easily understandable information on the environmental impact of the products or services they buy online;
70. Invites the Commission to examine thoroughly the clarity and consistency of the existing legal framework applying to the online sale of products and services in order to identify possible gaps and contradictions and any lack of effective enforcement; asks the Commission to conduct a thorough analysis of the interaction between the DSA and Union product safety and chemicals legislation; asks the Commission to ensure consistency between the new rules on online marketplaces and the revision of Directive 2001/95/EC(19) (“the General Product Safety Directive”) and Directive 85/374/EEC(20) (“the Product Liability Directive”);
71. Notes the continued issues of the abuse or wrong application of selective distribution agreements to limit the availability of products and services across borders within the Single Market and between platforms; asks the Commission to act on this issue within any wider review of Vertical Block Exemptions and other policies under Article 101 TFEU, while refraining from including it in the DSA;
Ex ante regulation of systemic operators
72. Notes that, today, some markets are characterised by large operators with significant network effects which are able to act as de facto “online gatekeepers” of the digital economy (“systemic operators”); stresses the importance of fair and effective competition between online operators with a significant digital presence and other providers in order to promote consumer welfare; asks the Commission to conduct a thorough analysis of the different issues observed in the market so far and their consequences, including for consumers, SMEs and the internal market;
73. Considers that by reducing barriers to market entry and by regulating systemic operators, an internal market instrument imposing ex ante regulatory remedies on those systemic operators with significant market power has the potential to open up markets to new entrants, including SMEs, entrepreneurs, and start-ups, thereby promoting consumer choice and driving innovation beyond what can be achieved by competition law enforcement alone;
74. Welcomes the Commission’s public consultation on the possibility of introducing, as part of the future DSA, a targeted ex ante regulation to tackle systemic issues which are specific to digital markets; stresses the intrinsic complementarity between internal market regulation and competition policy, as emphasised in the report by the Commission’s special advisers entitled “Competition Policy for the Digital Era”;
75. Calls on the Commission to define ‘systemic operators’ on the basis of clear indicators;
76. Considers that the ex ante regulation should build upon Regulation (EU) 2019/1150 (“the Platform to Business Regulation”) and that its measures should be in line with the Union’s antitrust rules and fit within the Union’s competition policy, which is currently under revision to better address the challenges of the digital age; the ex ante regulation should ensure fair trading conditions applicable to all operators, including possible additional requirements and a closed list of the positive and negative actions such operators are required to comply with and/or are forbidden to engage in;
77. Calls on the Commission to analyse in particular the lack of transparency of the recommendation systems of systemic operators, including the rules and criteria for the functioning of such systems, and to assess whether additional transparency obligations and information requirements need to be imposed;
78. Highlights that the imposition of ex ante regulatory remedies in other sectors has improved competition in those sectors; notes that a similar framework could be developed for identifying systemic operators with a “gatekeeper” role taking into account the specificities of the digital sector;
79. Draws attention to the fact that the size of business users of systemic operators varies from multinationals to micro-enterprises; underlines that ex ante regulation on systemic operators should not lead to the “trickling down” of additional requirements for the businesses that use them;
80. Underlines that the accumulation and harvesting of vast amounts of data, and the use of such data by systemic operators to expand from one market into another, as well as the further possibility of pushing users to use a single operator’s e-identification for multiple platforms, can create imbalances in bargaining power and thus lead to the distortion of competition in the Single Market; considers that increased transparency and data sharing between systemic operators and competent authorities is crucial with a view to guaranteeing the functioning of ex ante regulation;
81. Underlines that interoperability is key to enabling a competitive market, users’ choice and innovative services, and to limiting the risk of lock-in effects for users and consumers; calls on the Commission to ensure appropriate levels of interoperability for systemic operators and to explore different technologies and open standards and protocols, including the possibility of a technical interface (Application Programming Interface);
Supervision, cooperation and enforcement
82. Believes that, in view of the cross-border nature of digital services, effective supervision and cooperation between Member States, including the exchange of information and best practices, is key to ensuring the proper enforcement of the DSA; stresses that the imperfect transposition, implementation and enforcement of Union legislation by Member States creates unjustified barriers in the digital single market; calls on the Commission to address those barriers in close cooperation with Member States;
83. Asks the Commission to ensure that Member States provide national supervisory authorities with adequate financial means, human resources and enforcement powers to carry out their functions effectively and to contribute to their respective work;
84. Stresses that cooperation between national authorities, the authorities of other Member States, civil society and consumer organisations is of the utmost importance for achieving effective enforcement of the DSA; proposes to strengthen the country-of-origin principle through increased cooperation between Member States in order to improve the regulatory oversight of digital services and to achieve effective law enforcement in cross-border cases; encourages Member States to pool and share best practices and data between national regulators, and to provide regulators and legal authorities with secure, interoperable ways to communicate with each other;
85. Calls on the Commission to assess the most appropriate supervision and enforcement model for the application of the provisions of the DSA, and to consider the setting up of a hybrid system, based on coordination and cooperation between national and Union authorities, for the effective enforcement oversight and implementation of the DSA; considers that such a supervisory system should be responsible for the oversight, compliance, monitoring and application of the DSA and should have supplementary powers to undertake cross-border initiatives and investigations, and be entrusted with enforcement and auditing powers;
86. Takes the view that EU coordination in cooperation with the network of national authorities should prioritise addressing complex cross-border issues;
87. Recalls the importance of facilitating the sharing of non-personal data and promoting stakeholder dialogue; encourages the creation and maintenance of a European research repository to facilitate the sharing of such data with public institutions, researchers, NGOs and universities for research purposes; calls on the Commission to build such a tool upon existing best practices and initiatives, such as the Platform Observatory or the EU Blockchain Observatory;
88. Believes that the Commission, through the Joint Research Centre, should be empowered to provide expert assistance to the Member States, upon request, for the analysis of technological, administrative or other matters in relation to the enforcement of Digital Single Market legislation; calls on national regulators and the Commission to provide further advice and assistance to Union SMEs about their rights;
89. Calls on the Commission to strengthen and modernise the existing Union framework for out-of-court settlement under the E-Commerce Directive, taking into account developments under Directive 2013/11/EU(21), as well as court actions, to allow for effective enforcement and consumer redress; underlines the need to support consumers in using the court system; believes that any revision should not weaken the legal protections for small businesses and traders that national legal systems provide;
Final aspects
90. Considers that any financial implications of the requested proposal should be covered by appropriate budgetary allocations;
o o o
91. Instructs its President to forward this resolution and the accompanying detailed recommendations to the Commission, the Council, and to the parliaments and governments of the Member States.
ANNEX TO THE RESOLUTION:
RECOMMENDATIONS AS TO THE CONTENT OF THE PROPOSAL REQUESTED
I. GENERAL PRINCIPLES
The Digital Services Act package (“DSA”) should contribute to the strengthening of the internal market by ensuring the free movement of digital services and the freedom to conduct a business, while at the same time guaranteeing a high level of consumer protection, and the improvement of users’ rights, trust and safety online.
The DSA should guarantee that online and offline economic activities are treated equally and that they are on a level playing field, which fully reflects the principle according to which “what is illegal offline is also illegal online”, taking into account the specific nature of the online environment.
The DSA should provide consumers and economic operators, especially micro, small and medium-sized enterprises, with legal certainty and transparency. The DSA should contribute to supporting innovation and removing unjustified and disproportionate barriers and restrictions to the provision of digital services.
The DSA should be without prejudice to the broad framework of fundamental rights and freedoms of users and consumers, such as the protection of private life and the protection of personal data, non-discrimination, dignity, the freedom of expression and the right to effective judicial remedy.
The DSA should build upon the rules currently applicable to online platforms, namely the E-Commerce Directive and the Platform to Business Regulation.
The DSA should include:
— a comprehensive revision of the E-Commerce Directive, based on Articles 53(1), 62 and 114 TFEU, consisting of:
— a revised framework with clear obligations with regard to transparency and information;
— clear and detailed procedures and measures related to effectively tackling and removing illegal content online, including a harmonised legally-binding European notice-and-action mechanism;
— effective supervision, cooperation and proportionate, effective and dissuasive sanctions;
— an internal market legal instrument based on Article 114 TFEU, imposing ex ante obligations on large platforms with a gatekeeper role in the digital ecosystem (“systemic operators”), complemented by an effective institutional enforcement mechanism.
II. SCOPE
In the interest of legal certainty, the DSA should clarify which digital services fall within its scope. The DSA should follow the horizontal nature of the E-Commerce Directive and apply not only to online platforms, but to all providers of information society services as defined in Union law.
A one-size-fits-all approach should be avoided. Different measures might be necessary for digital services offered in a purely business-to-business relationship, services which only have limited or no access to third parties or general public, and services which are targeted directly to consumers and the general public.
The territorial scope of the DSA should be extended to cover also the activities of companies, service providers and information society services established in third countries, when their activities are related to the offer of services or goods to consumers or users in the Union and directed at them.
If the Commission, following its review, considers that the DSA should amend the Annex of the E-Commerce Directive in respect of the derogations set out therein, it should not amend in particular the derogation of contractual obligations concerning consumer contracts.
The DSA should ensure that the Union and the Member States maintain a high level of consumer protection and that Member States can pursue legitimate public interest objectives, where it is necessary, proportionate and in accordance with Union law.
The DSA should define in a coherent way how its provisions interact with other legal instruments, aiming at facilitating free movement of services, in order to clarify the legal regime applicable to professional and non-professional services in all sectors, including activities related to transport services and short-term rentals, where clarification is needed.
The DSA should also clarify in a coherent way how its provisions interact with recently adopted rules on geo-blocking, product safety, market surveillance, platforms to business relations, consumer protection, sale of goods and supply of digital content and digital services(22), among others, and other announced initiatives such as the AI regulatory framework.
The DSA should apply without prejudice to the rules set out in other instruments, such as the GDPR, Directive (EU) 2019/790 (“the Copyright Directive”) and Directive 2010/13/EU (“the Audiovisual Media Services Directive”).
III. DEFINITIONS
In the definitions to be included therein, the DSA should:
— clarify to what extent new digital services, such as social media networks, collaborative economy services, search engines, WiFi hotspots, online advertising, cloud services, web hosting, messaging services, app stores, comparison tools, AI driven services, content delivery networks, and domain name services fall within its scope;
— clarify the nature of content hosting intermediaries (text, images, video, or audio content) on the one hand, and commercial online marketplaces (selling goods, including goods with digital elements, or services) on the other;
— clarify the difference between economic activities and content or transactions provided against remuneration, as defined by the Court, which also cover advertising and marketing practices on the one hand, and non-economic activities and content on the other;
— clarify what falls within the remit of the “illegal content” definition by making it clear that a violation of Union rules on consumer protection, product safety or the offer or sale of food or tobacco products, cosmetics and counterfeit medicines, or wildlife products also falls within the definition of illegal content;
— define the term “systemic operator” by establishing a set of clear indicators that allow regulatory authorities to identify platforms which enjoy a significant market position with a “gatekeeper” role, thereby playing a systemic role in the online economy; such indicators could include considerations such as whether the undertaking is active to a significant extent on multi-sided markets or has the ability to lock in users and consumers, the size of its network (number of users), the presence of network effects, barriers to entry, its financial strength, the ability to access data, the accumulation and combination of data from different sources, vertical integration, its role as an unavoidable partner and the importance of its activity for third parties’ access to supply and markets, etc.;
— seek to codify the decisions of the Court, where needed, and having due regard to the many different pieces of legislation which use those definitions.
IV. TRANSPARENCY AND INFORMATION OBLIGATIONS
The DSA should introduce clear and proportionate transparency and information obligations; those obligations should not create any derogations or new exemptions to the current liability regime set out under Articles 12, 13, and 14 of the E-Commerce Directive and should cover the aspects described below:
1. General information requirements
The revised provisions of the E-Commerce Directive should strengthen the general information requirements with the following obligations:
— the information requirements in Articles 5, 6 and 10 of the E-Commerce Directive should be reinforced;
— the “Know Your Business Customer” principle, limited to the direct commercial relationships of the hosting provider, should be introduced for business users; hosting providers should compare the identification data provided by their business users against the EU VAT and Economic Operator Identification and Registration (“EORI”) databases, where a VAT or EORI number exists; where a business is exempt from VAT or EORI registration, proof of identification should be provided; when a business user is acting as an agent for other businesses, it should declare itself as such; hosting providers should ask their business users to ensure that all information provided is accurate and up-to-date, and updated in the event of any change, and hosting providers should not be allowed to provide services to business users when that information is incomplete or when the hosting provider has been informed by the competent authorities that the identity of their business user is false, misleading or otherwise invalid;
— the measure of exclusion from services referred to above should apply only to contractual business-to-business relationships and should be without prejudice to the rights of data subjects under the GDPR. That measure should be without prejudice to the protection of online anonymity for users, other than business users. The new general information requirements should further enhance Articles 5, 6 and 10 of the E-Commerce Directive in order to align those measures with the information requirements established in recently adopted legislation, in particular Directive 93/13/EEC(23) (“the Unfair Contract Terms Directive”), Directive 2011/83/EU(24) (“the Consumer Rights Directive”) and the Platform to Business Regulation;
— Article 5 of the E-Commerce Directive should be further modernised by requiring digital service providers to provide consumers with direct and efficient means of communication such as electronic contact forms, chatbots, instant messaging or telephone callback, provided that the information relating to those means of communication is accessible to consumers in a clear and comprehensible manner;
2. Fair contract terms and general conditions
The DSA should establish minimum standards for service providers to adopt fair, accessible, non-discriminatory and transparent contract terms and general conditions in compliance with at least the following requirements:
— to define clear and unambiguous contract terms and general conditions in a plain and intelligible language;
— to explicitly indicate in the contract terms and general conditions what is to be understood as illegal content or behaviour according to Union or national law and to explain the legal consequences to be faced by users for knowingly storing or uploading illegal content;
— to notify users whenever a significant change that can affect users’ rights is made to the contract terms and general conditions and to provide an explanation thereof;
— to ensure that pre-formulated standard clauses in contract terms and general conditions, which have not been individually negotiated in advance, including in End-User Licensing Agreements, start with a summary statement based on a harmonised template, to be set out by the Commission;
— to ensure that the cancellation process is as effortless as the sign-up process (with no “dark patterns” or other influence on consumer decision);
— where automated systems are used, to specify clearly and unambiguously in their contract terms and general conditions the inputs and targeted outputs of their automated systems, and the main parameters determining ranking, as well as the reasons for the relative importance of those main parameters as compared to other parameters, while ensuring consistency with the Platform to Business Regulation;
— to ensure that the requirements on contract terms and general conditions are consistent with and complement information requirements established by Union law, including those set out in the Unfair Contract Terms Directive, the Unfair Commercial Practices Directive, the Consumer Rights Directive, as amended by Directive (EU) 2019/2161, and with the GDPR;
3. Transparency requirements on commercial communications
— The revised provisions of the E-Commerce Directive should strengthen the current transparency requirements regarding commercial communications by establishing the principles of transparency-by-design and transparency-by-default.
— Building upon Articles 6 and 7 of the E-Commerce Directive, the measures to be proposed should establish a new framework for platform-to-consumer relations on transparency as regards online advertising, digital nudging, micro-targeting, recommendation systems for advertising and preferential treatment; those measures should:
— include the obligation to disclose clearly defined types of information about online advertisement to enable effective auditing and control, such as information on the identity of the advertiser and the direct and indirect payments or any other remuneration received by service providers; that should also enable consumers and public authorities to identify who should be held accountable in case of, for example, false or misleading advertisement; the measures should also contribute to ensuring that illegal activities cannot be funded via advertising services;
— clearly distinguish between commercial and political online advertising and ensure transparency regarding the criteria for profiling targeted groups and optimising advertising campaigns; give consumers a by-default option of not being tracked or micro-targeted and the possibility of opting in to the use of behavioural data for advertising purposes, as well as an opt-in option for political advertising and ads;
— provide consumers with access to their dynamic marketing profiles, so that they are informed on whether and for what purposes they are tracked and if the information they receive is for advertising purposes, and guarantee their right to contest decisions that undermine their rights;
— ensure that paid advertisements or paid placement in a ranking of search results should be identified in a clear, concise and intelligible manner, in line with Directive 2005/29/EC, as amended by Directive (EU) 2019/2161;
— ensure compliance with the principle of non-discrimination and with minimum diversification requirements, and identify practices constituting aggressive advertising, whilst encouraging consumer-friendly AI-technologies;
— introduce accountability and fairness criteria for algorithms used for targeted advertising and advertisement optimisation, and allow for external regulatory audits by competent authorities and for the verification of algorithmic design choices that involve information about individuals, without the risk of violating user privacy or trade secrets;
— provide access to advertising delivery data and information about the exposure of advertisers, when it comes to where and when advertisements are placed, and the performance of paid vs unpaid advertising;
4. Artificial Intelligence and machine learning
The revised provisions should follow the principles listed below regarding the provision of information society services which are enabled by AI or make use of automated decision-making tools or machine learning tools, by:
— ensuring that consumers have the right to be informed if a service is enabled by AI, makes use of automated decision-making or machine learning tools or automated content recognition tools, in addition to the right not to be subject to a decision based solely on automated processing and to the possibility to refuse, limit or personalise the use of any AI-enabled personalisation features, especially in view of ranking of services;
— establishing comprehensive rules on non-discrimination and transparency of algorithms and data sets;
— ensuring that algorithms are explainable to competent authorities who can check when they have reasons to believe that there is an algorithmic bias;
— providing for case-by-case oversight and recurrent risk assessment of algorithms by competent authorities, as well as human control over decision-making, in order to guarantee a higher level of consumer protection; such requirements should be consistent with the human control mechanisms and risk assessment obligations for automating services set out in existing rules, such as Directive (EU) 2018/958(25) (“the Proportionality Test Directive”), and should not constitute an unjustified or disproportionate restriction on the free movement of services;
— establishing clear accountability, liability and redress mechanisms to deal with potential harms resulting from the use of AI applications, automated decision-making and machine learning tools;
— establishing the principle of safety, security by design and by default and setting out effective and efficient rights and procedures for AI developers in instances where the algorithms produce sensitive decisions about individuals, and by properly addressing and exploiting the impact of upcoming technological developments;
— ensuring consistency with confidentiality, user privacy and trade secrets;
— ensuring that, when AI technologies introduced at the workplace have a direct impact on the employment conditions of workers using digital services, comprehensive information is provided to those workers;
5. Penalties
Compliance with those provisions should be reinforced with effective, proportionate and dissuasive penalties, including the imposition of proportionate fines.
V. MEASURES RELATED TO TACKLING ILLEGAL CONTENT ONLINE
The DSA should provide clarity and guidance regarding how online intermediaries should tackle illegal content online. The revised rules of the E-Commerce Directive should:
— clarify that any removal or disabling access to illegal content should not affect the fundamental rights and the legitimate interests of users and consumers and that legal content should stay online;
— improve the legal framework taking into account the central role played by online intermediaries and the internet in facilitating the public debate and the free dissemination of facts, opinions, and ideas;
— preserve the underlying legal principle that online intermediaries should not be held directly liable for the acts of their users and that online intermediaries can continue moderating content under fair, accessible, non-discriminatory and transparent terms and conditions of service;
— clarify that a decision made by online intermediaries as to whether content uploaded by users is legal should be provisional, and that online intermediaries should not be held liable for it, as only courts of law should decide in the final instance what is illegal content;
— ensure that the ability of Member States to decide which content is illegal under national law is not affected;
— ensure that the measures online intermediaries are called upon to adopt are proportionate, effective and adequate in order to tackle illegal content online effectively;
— adapt the severity of the measures that need to be taken by service providers to the seriousness of the infringement;
— ensure that the blocking of access to, and the removal of, illegal content does not require blocking the access to an entire platform and services which are otherwise legal;
— introduce new transparency and independent oversight of the content moderation procedures and tools related to the removal of illegal content online; such systems and procedures should be accompanied by robust safeguards for transparency and accountability and be available for auditing and testing by competent authorities.
1. A notice-and-action mechanism
The DSA should establish a harmonised and legally enforceable notice-and-action mechanism based on a set of clear processes and precise timeframes for each step of the notice-and-action procedure. That notice-and-action mechanism should:
— apply to illegal online content or behaviour;
— differentiate among different types of providers, sectors and/or illegal content and the seriousness of the infringement;
— create easily accessible, reliable and user-friendly procedures tailored to the type of content;
— allow users to easily notify by electronic means potentially illegal online content or behaviour to online intermediaries;
— clarify, in an intelligible way, existing concepts and processes such as “expeditious action”, “actual knowledge and awareness”, “targeted actions”, “notices’ formats”, and “validity of notices”;
— guarantee that notices do not automatically trigger legal liability and do not impose any removal requirement for specific pieces of content or for the legality assessment;
— require notices to be sufficiently precise and adequately substantiated so as to allow the service provider receiving them to take an informed and diligent decision as regards the effect to be given to the notice, and specify the requirements necessary to ensure that notices contain all the information necessary for the swift removal of illegal content;
— notices should include the location (URL and timestamp, where appropriate) of the allegedly illegal content in question, an indication of the time and date when the alleged wrongdoing was committed, the stated reason for the claim, including an explanation of the reasons why the notice provider considers the content to be illegal, and if necessary, depending on the type of content, additional evidence for the claim, and a declaration of good faith that the information provided is accurate;
— notice providers should have the possibility, but not be required, to include their contact details in a notice; where they decide to do so, their anonymity should be ensured towards the content provider; if no contact details are provided, the IP address or other equivalent can be used; anonymous notices should not be permitted when they concern the violation of personality rights or intellectual property rights;
— set up safeguards to prevent abusive behaviour by users who systematically, repeatedly and in bad faith submit wrongful or abusive notices;
— create an obligation for online intermediaries to verify the notified content and reply in a timely manner to the notice provider and to the content uploader with a reasoned decision; such a reply should include the reasoning behind the decision, how the decision was made, whether the decision was made by a human or by an automated decision agent, and information about the possibility for either party to appeal the decision with the intermediary, courts or other entities;
— provide information and remedies to contest the decision via a counter-notice, including if the content has been removed via automated solutions, unless such a counter-notice would conflict with an ongoing investigation by law enforcement authorities;
— ensure that judicial injunctions issued in a Member State other than that of the online intermediary are not handled within the notice-and-action mechanism.
The DSA notice-and-action mechanism should be binding only for illegal content. That, however, should not prevent online intermediaries from being able to adopt a similar notice-and-action mechanism for other content.
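The substantiation requirements listed above (location and timestamp, stated reason, good-faith declaration, optional contact details, and the bar on anonymous notices for personality-rights or intellectual-property claims) could be sketched as a minimal validator. This is purely illustrative: the field and category names below are assumptions made for the example, not terms defined in the DSA.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Notice:
    url: str                       # location of the allegedly illegal content
    timestamp: str                 # time and date of the alleged wrongdoing
    reason: str                    # why the notice provider considers it illegal
    good_faith: bool               # declaration that the information is accurate
    category: str                  # e.g. "defamation", "ip", "personality-rights"
    contact: Optional[str] = None  # optional; kept confidential towards uploader

# Anonymous notices are not permitted for these categories (see above).
ANONYMITY_EXCLUDED = {"personality-rights", "ip"}

def is_valid(notice: Notice) -> bool:
    """Check the minimum substantiation requirements sketched above."""
    if not (notice.url and notice.timestamp and notice.reason and notice.good_faith):
        return False
    # violations of personality or IP rights require identifiable notice providers
    if notice.contact is None and notice.category in ANONYMITY_EXCLUDED:
        return False
    return True
```

A notice failing these checks would not have to be acted upon, consistent with the requirement above that insufficiently substantiated notices do not trigger liability.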
2. Out-of-court dispute settlement related to the notice-and-action mechanism
— The decision taken by the online intermediary on whether or not to act upon content flagged as illegal should contain a clear justification on the actions undertaken regarding that specific content. The notice provider should receive a confirmation of receipt and a communication indicating the follow-up given to the notification;
— The providers of the content that is being flagged as illegal should be immediately informed of the notice and, where applicable, of the reasons and decisions taken to remove, suspend or disable access to the content; all parties should be duly informed of all available legal options and mechanisms to challenge that decision;
— All interested parties should have the right to contest the decision through a counter-notice which must be subject to clear requirements and accompanied by an explanation; interested parties should also have recourse to out-of-court dispute settlement mechanisms;
— The right of a user to be notified, and the right to issue a counter-notice, before a decision to remove content is taken shall only be restricted or waived where:
(a) online intermediaries are subject to a national legal requirement to terminate the provision of the whole of their online intermediation services to a given user in a manner which does not allow them to respect that notice-and-action mechanism; or,
(b) the notification or counter-notice would impede an ongoing criminal investigation that requires the decision to suspend or remove access to the content to be kept secret.
— The rules of Article 17 of the E-Commerce Directive should be revised to ensure that independent out-of-court dispute settlement mechanisms are put in place and are available to users in the event of disputes over the disabling of access to, or the removal of, works or other subject matter uploaded by them;
— The out-of-court dispute settlement mechanism should meet certain standards, in particular in terms of procedural fairness, independence, impartiality, transparency and effectiveness; such mechanisms shall enable disputes to be settled impartially and shall not deprive the user of legal protection afforded by national law, without prejudice to the rights of users to have recourse to efficient judicial remedies;
— If the redress and counter-notice have established that the notified activity or information is not illegal, the online intermediary should restore the content that was removed or suspended without undue delay or allow for re-upload by the user;
— When issuing, contesting or receiving a notice, all interested parties should be notified of both the possibility of making use of an alternative dispute resolution mechanism and of the right to recourse to a competent national court;
— The out-of-court dispute settlement mechanisms should in no way affect the rights of the parties involved to initiate legal proceedings.
3. Transparency of the notice-and-action mechanism
The notice-and-action mechanisms should be transparent and publicly available; to that end, online intermediaries should be obliged to publish annual reports, which should be standardised and contain information on:
— the number of all notices received under the notice-and-action mechanism and the types of content they relate to;
— the average response time per type of content;
— the number of erroneous takedowns;
— the type of entities that issued the notices (private individuals, organisations, corporations, trusted flaggers, etc.) and the total number of their notices;
— information about the nature of the content’s illegality or the type of infringement for which it was removed;
— the number of contested decisions received by online intermediaries and how they were handled;
— the description of the content moderation model applied by the hosting intermediary, as well as of any automated tools, including meaningful information about the logic involved;
— the measures they adopt with regard to repeat offenders to ensure that those measures are effective in tackling such systemic abusive behaviour.
The obligation to publish that report and the detail it requires should take into account the size or the scale on which online intermediaries operate and whether they have only limited resources and expertise. Microenterprises and start-ups should be required to update this report only where there is significant change from one year to the next.
Online intermediaries should also publish information about their procedures and timeframes for intervention by interested parties, such as the time for the content uploader to respond with a counter-notification, the time at which the intermediary will inform both parties about the result of the procedure, and the time for different forms of appeal against any decision.
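The standardised annual report described above could take, for instance, the following shape. The keys and per-notice field names ("content_type", "issuer_type", "response_hours") are assumptions for the sketch, not a format prescribed by the DSA.

```python
def annual_report(notices, erroneous_takedowns, contested_decisions):
    """Aggregate notice-and-action statistics into a standardised report.

    Each notice is a dict with keys "content_type", "issuer_type" and
    "response_hours" (illustrative field names, not DSA terminology).
    """
    notices = list(notices)
    by_type, by_issuer, hours = {}, {}, {}
    for n in notices:
        by_type[n["content_type"]] = by_type.get(n["content_type"], 0) + 1
        by_issuer[n["issuer_type"]] = by_issuer.get(n["issuer_type"], 0) + 1
        hours.setdefault(n["content_type"], []).append(n["response_hours"])
    return {
        "total_notices": len(notices),
        "notices_by_content_type": by_type,
        "average_response_hours_by_type": {
            t: sum(h) / len(h) for t, h in hours.items()
        },
        "erroneous_takedowns": erroneous_takedowns,
        "notices_by_issuer_type": by_issuer,
        "contested_decisions": contested_decisions,
    }
```

Because every intermediary would publish the same keys, reports of different platforms would be directly comparable, which is the point of standardisation; a microenterprise could simply republish the previous year's dictionary absent significant change.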
4. Safe harbour provisions in Articles 12, 13 and 14 of the E-Commerce Directive
The DSA should protect and uphold the current limited exemptions from liability for information society service providers (online intermediaries) provided for in Articles 12, 13 and 14 of the E-Commerce Directive.
5. Active and Passive hosts
The DSA should maintain the derogations in the E-Commerce Directive for intermediaries playing a neutral and passive role and address the lack of legal certainty regarding the concept of “active role” by codifying the case-law of the Court on that matter. It should also clarify that hosting providers play an active role when they create the content, contribute to a certain degree to the illegality of the content, or adopt third-party content as their own, as judged by average users or consumers.
It should ensure that voluntary measures taken by online intermediaries to address illegal content do not lead to them being considered as having an active role solely on the basis of those measures. However, the deployment of any such measures should be accompanied by appropriate safeguards, and content moderation practices should be fair, accessible, non-discriminatory and transparent.
The DSA should maintain the exemptions from liability for backend and infrastructure services, which are not party to the contractual relations between online intermediaries and their customers and which merely implement decisions taken by the online intermediaries or their customers.
6. Ban on General Monitoring - Article 15 of the E-Commerce Directive
The DSA should maintain the ban on a general monitoring obligation under Article 15 of the E-Commerce Directive. Online intermediaries should not be subject to general monitoring obligations.
VI. ONLINE MARKETPLACES
The DSA should propose specific new rules for online marketplaces, for the online sale, promotion or supply of products and for the provision of services to consumers.
Those new rules should:
— be consistent with, and complementary to, a reform of the General Product Safety Directive;
— cover all entities that offer and direct services and/or products to consumers in the Union, including if they are established outside the Union;
— distinguish online marketplaces from other types of service providers, including other ancillary intermediation activities within the same company activity; if one of the services provided by a company fulfils the criteria necessary to be considered as a marketplace, the rules should fully apply to that part of the business regardless of the internal organisation of that company;
— ensure that online marketplaces make it clear from which country the products are sold or services are being provided, regardless of whether they are provided or sold by that marketplace, a third party or a seller established inside or outside the Union;
— ensure that online marketplaces remove quickly any known misleading information given by the supplier, including misleading implicit guarantees and statements made by the supplier;
— ensure that online marketplaces, offering professional services, indicate when a profession is regulated within the meaning of Directive 2005/36/EC, in order to enable consumers to make both an informed choice and to verify, where necessary, with the relevant competent authority if a professional meets the requirements for a specific professional qualification;
— ensure that online marketplaces are transparent and accountable and cooperate with the competent authorities of the Member States in order to identify where serious risks of dangerous products exist, and that they alert those authorities as soon as they become aware of such products on their platforms;
— ensure that online marketplaces consult the Union Rapid Alert System for dangerous non-food products (RAPEX) and carry out random checks on recalled and dangerous products and, wherever possible, take appropriate action in respect to products concerned;
— ensure that, once products have been identified as unsafe and/or counterfeit by the Union’s rapid alert systems, by national market surveillance authorities, by customs authorities or by consumer protection authorities, it is compulsory to remove those products from the marketplace expeditiously, and at the latest within two working days of receiving notification;
— ensure that online marketplaces inform consumers once a product they bought therein has been removed from their platform following a notification on its non-compliance with Union product safety and consumer protection rules; they should also inform consumers of any safety issues and of any action required to ensure that recalls are carried out effectively;
— ensure that online marketplaces put in place measures to act against repeat offenders who offer dangerous products, in cooperation with authorities in line with the Platform to Business Regulation, and that they adopt measures aimed at preventing the reappearance of dangerous products that have already been removed;
— consider the option of requiring suppliers which are established in a third country to set up a branch in the Union or to designate a legal representative established in the Union, who can be held accountable for the sale to European consumers of products or services that do not comply with Union safety rules;
— address the liability of online marketplaces for consumer damages and for failure to take adequate measures to remove illegal products after obtaining the actual knowledge of such illegal products;
— address the liability of online marketplaces when those platforms have predominant influence over suppliers and essential elements of economic transactions, such as payment means, prices, default terms and conditions, or conduct aimed at facilitating the sale of goods to a consumer in the Union market, and there is no manufacturer, importer, or distributor established in the Union that can be held liable;
— address the liability of online marketplaces if the online marketplace has not informed the consumer that a third party is the actual supplier of the goods or services, thus making the marketplace contractually liable vis-à-vis the consumer; liability should also be considered in case the marketplace knowingly provides misleading information;
— guarantee that online marketplaces have the right to redress towards a supplier or producer at fault;
— explore expanding the commitment made by some e-commerce retailers and the Commission to respectively remove dangerous or counterfeit products from sale more rapidly under the voluntary commitment schemes called “Product Safety Pledge” and "Memorandum of Understanding on the sale of counterfeit goods via the internet" and indicate which of those commitments could become mandatory.
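The random checks on recalled and dangerous products called for above could be sketched as follows. Here `recalled_ids` stands in for identifiers derived from the Union Rapid Alert System (RAPEX) notifications; nothing in the sketch assumes a particular RAPEX interface, and the function names are illustrative.

```python
import random

def random_recall_check(listing_ids, recalled_ids, sample_size, seed=None):
    """Sample marketplace listings at random and return those that match
    the recall set, i.e. candidates for expeditious removal."""
    rng = random.Random(seed)  # seeded only to make the sketch reproducible
    listings = list(listing_ids)
    sample = rng.sample(listings, min(sample_size, len(listings)))
    recalled = set(recalled_ids)
    return [pid for pid in sample if pid in recalled]
```

Any listing returned would then enter the removal workflow described above (takedown at the latest within two working days, consumer notification, and measures against repeat offenders).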
VII. EX ANTE REGULATION OF SYSTEMIC OPERATORS
The DSA should put forward a proposal for a new separate instrument aiming at ensuring that the systemic role of specific online platforms will not endanger the internal market by unfairly excluding innovative new entrants, including SMEs, entrepreneurs and start-ups, thereby reducing consumer choice;
To that end, the DSA should, in particular:
— set up an ex ante mechanism to prevent (instead of merely remedy) market failures caused by “systemic operators” in the digital world, building on the Platform to Business Regulation; such a mechanism should allow regulatory authorities to impose remedies on systemic operators in order to address market failures, without the establishment of a breach of competition rules;
— empower regulatory authorities to impose proportionate and well-defined remedies on those companies which have been identified as “systemic operators”, based on criteria set out within the DSA and a closed list of the positive and negative actions those companies are required to comply with and/ or are prohibited from engaging in; in its impact assessment, the Commission should make a thorough analysis of the different issues observed on the market so far, such as:
— the lack of interoperability and appropriate tools, data, expertise, and resources deployed by systemic operators to allow consumers to switch or connect and interoperate between digital platforms or internet ecosystems;
— the systematic preferential display, which allows systemic operators to provide their own downstream services with better visibility;
— data envelopment used to expand market power from one market into adjacent markets, engaging in self-preferencing of their own products and services and in practices aimed at locking in consumers;
— the widespread practice of banning third-party business users from steering consumers to their own website through the imposition of contractual clauses;
— the lack of transparency of recommendation systems used by systemic operators, including of the rules and criteria for the functioning of such systems;
— ensure that systemic operators are given the possibility to demonstrate that the behaviour in question is justified;
— clarify that some regulatory remedies should be imposed on all “systemic operators”, such as transparency obligations in the way they conduct business, in particular how they collect and use data, and a prohibition for “systemic operators” to engage in any practices aimed at making it more difficult for consumers to switch or use services across different suppliers, or other forms of unjustified discrimination that exclude or disadvantage other businesses;
— empower regulatory authorities to adopt interim measures and to impose penalties on “systemic operators” that fail to respect the different regulatory obligations imposed on them;
— reserve the power to ultimately decide whether an information society service provider is a “systemic operator” to the Commission, based on the conditions set out in the ex ante mechanism;
— ensure that users of “systemic operators” are informed about, are able to deactivate, and can effectively control and decide what kind of content they want to see; users should also be properly informed of all the reasons why specific content is suggested to them;
— ensure that the rights, obligations and principles of the GDPR – including data minimisation, purpose limitation, data protection by design and by default, legal grounds for processing – are observed;
— ensure appropriate levels of interoperability by requiring “systemic operators” to share appropriate tools, data, expertise and resources deployed, in order to limit the risks of locking in users and consumers and of artificially binding users to one systemic operator with no realistic possibility or incentive to switch between digital platforms or internet ecosystems; as part of those measures, the Commission should explore different technologies and open standards and protocols, including the possibility of a technical interface (Application Programming Interface) that allows users of competing platforms to dock on to the systemic operators and exchange information with them; systemic operators may not make commercial use of any of the data received from third parties during interoperability activities for purposes other than enabling those activities; interoperability obligations should not limit, hinder or delay the ability of intermediaries to patch vulnerabilities;
— ensure that the new ex ante mechanism is without prejudice to the application of competition rules, including on self-preferencing and overall vertical integration, and ensure that both policy tools are completely independent.
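The purpose limitation attached to the interoperability obligation above (data received from third parties may only be used to enable interoperability itself) can be illustrated as an access rule enforced at the point of retrieval. The class and method names are assumptions for the sketch, not terms from the DSA.

```python
class InteropExchange:
    """Sketch: third-party data received via the interoperability interface
    is tagged with its sole permitted purpose and refused for any other use."""

    def __init__(self):
        self._records = []  # (payload, permitted_purpose) pairs

    def receive(self, payload):
        # data arriving through the interop interface carries a fixed purpose
        self._records.append((payload, "interoperability"))

    def read(self, purpose):
        """Return stored payloads, refusing any non-interoperability purpose
        (e.g. commercial reuse such as advertising)."""
        if purpose != "interoperability":
            raise PermissionError(
                "interop data may not be used for purposes other than "
                "enabling interoperability")
        return [payload for payload, _ in self._records]
```

In practice such a rule would be backed by audit and enforcement rather than code alone, but the sketch shows the shape of the constraint: the permitted purpose travels with the data.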
VIII. SUPERVISION, COOPERATION AND ENFORCEMENT
The DSA should improve supervision and enforcement of the existing rules and strengthen the internal market clause as the cornerstone of the Digital Single Market, by complementing it with a new cooperation mechanism aimed at improving the exchange of information, the cooperation and mutual trust and, upon request, mutual assistance between Member States, in particular between the authorities in the home country where the service provider is established and the authorities in the host country where the provider is offering its services.
The Commission should conduct a thorough impact assessment to assess the most appropriate supervision and enforcement model for the application of the provisions regarding the DSA, while respecting the principles of subsidiarity and proportionality.
In its impact assessment, the Commission should look into existing models, such as the Consumer Protection Cooperation (CPC) Network, the European Regulators Group for Audiovisual Media Services (ERGA), the European Data Protection Board (EDPB) and the European Competition Network (ECN), and consider the adoption of a hybrid system of supervision.
That hybrid system of supervision, based on EU coordination in cooperation with a network of national authorities, should improve the monitoring and application of the DSA, enforce compliance, including enforcing regulatory fines, other sanctions or measures, and should be able to carry out auditing of intermediaries and platforms. It should also settle, where needed, cross-border disputes between the national authorities, address complex cross-border issues, provide advice and guidance and approve Union-wide codes and decisions, and, together with the national authorities, it should be able to launch initiatives and investigations into cross-border issues. The ultimate oversight of the Member States’ obligations should remain with the Commission.
The Commission should report to the European Parliament and the Council, and, together with the national authorities, maintain a public ‘Platform Scoreboard’ with relevant information on the compliance with the DSA. The Commission should facilitate and support the creation and maintenance of a European research repository tool to facilitate the sharing of data with public institutions, researchers, NGOs and universities for research purposes.
The DSA should also introduce new enforcement elements into Article 16 of the E-Commerce Directive as regards self-regulation.
Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC (OJ L 257, 28.8.2014, p. 73).
Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (OJ L 210, 7.8.1985, p. 29).
Directive 2013/11/EU of the European Parliament and of the Council of 21 May 2013 on alternative dispute resolution for consumer disputes and amending Regulation (EC) No 2006/2004 and Directive 2009/22/EC (Directive on consumer ADR) (OJ L 165, 18.6.2013, p. 63).
Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts, most recently amended by Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules (OJ L 328, 18.12.2019, p. 7).
Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the European Parliament and of the Council and repealing Council Directive 85/577/EEC and Directive 97/7/EC of the European Parliament and of the Council (OJ L 304, 22.11.2011, p. 64).
Directive (EU) 2018/958 of the European Parliament and of the Council of 28 June 2018 on a proportionality test before adoption of new regulation of professions (OJ L 173, 9.7.2018, p. 25).
Digital Services Act: adapting commercial and civil law rules for commercial entities operating online
European Parliament resolution of 20 October 2020 with recommendations to the Commission on a Digital Services Act: adapting commercial and civil law rules for commercial entities operating online (2020/2019(INL))
– having regard to Article 225 of the Treaty on the Functioning of the European Union,
– having regard to Article 11 of the Charter of Fundamental Rights of the European Union and Article 10 of the European Convention on Human Rights,
– having regard to Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services(1),
– having regard to Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC(2),
– having regard to Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)(3) (hereinafter referred to as the “General Data Protection Regulation”),
– having regard to Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive)(4),
– having regard to Directive 2008/52/EC of the European Parliament and of the Council of 21 May 2008 on certain aspects of mediation in civil and commercial matters(5),
– having regard to the proposal for a Regulation of the European Parliament and of the Council of 6 June 2018 establishing the Digital Europe Programme for the period 2021-2027 (COM(2018)0434),
– having regard to the Commission Recommendation (EU) 2018/334 of 1 March 2018 on measures to effectively tackle illegal content online(6),
– having regard to the Convention on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters(7) and the Convention on the Recognition and Enforcement of Foreign Arbitral Awards, signed on 10 June 1958 in New York,
– having regard to its resolution of 3 October 2018 on distributed ledger technologies and blockchains: building trust with disintermediation(8),
– having regard to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 19 February 2020 on A European strategy for data (COM(2020)0066),
– having regard to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 19 February 2020 on Shaping Europe’s digital future (COM(2020)0067),
– having regard to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 25 May 2016 on Online Platforms and the Digital Single Market - Opportunities and Challenges for Europe (COM(2016)0288),
– having regard to the European added value assessment study carried out by the European Parliamentary Research Service, entitled ‘Digital Services Act: European added value assessment’(9),
– having regard to Rules 47 and 54 of its Rules of Procedure,
– having regard to the opinions of the Committee on the Internal Market and Consumer Protection and of the Committee on Culture and Education,
– having regard to the report of the Committee on Legal Affairs (A9-0177/2020),
A. whereas digital services, being a cornerstone of the Union’s economy and the livelihood of a large number of its citizens, need to be regulated in a way that guarantees fundamental rights and other rights of citizens while supporting development and economic progress, the digital environment and fostering trust online, taking into account the interests of users and all market participants, including SMEs and start-ups;
B. whereas, while some rules regarding online content-sharing providers and audiovisual media services have recently been updated, notably by Directive (EU) 2018/1808 and Directive (EU) 2019/790, a number of key civil and commercial law aspects have not been addressed satisfactorily in Union or national law; whereas the importance of this issue has been accentuated by rapid and accelerating development over the last decades in the field of digital services, in particular the emergence of new business models, technologies and social realities; whereas, in this context, a comprehensive updating of the essential provisions of civil and commercial law applicable to online commercial entities is required;
C. whereas some businesses offering digital services enjoy, due to strong data-driven network effects, significant market power that enables them to impose their business practices on users and makes it increasingly difficult for other players, especially start-ups and SMEs, to compete and for new businesses to even enter the market;
D. whereas ex-post competition law enforcement alone cannot effectively address the impact of the market power of certain online platforms, including on fair competition in the Digital Single Market;
E. whereas content hosting platforms evolved from involving the mere display of content into sophisticated organisms and market players, in particular social networks that harvest and exploit usage data; whereas users have legitimate grounds to expect fair terms with respect to access, transparency, pricing and conflict resolution for the usage of such platforms and for the use that platforms make of the users’ data; whereas transparency can contribute to significantly increasing trust in digital services;
F. whereas content hosting platforms may determine what content is shown to their users, thereby profoundly influencing the way we obtain and communicate information, to the point that content hosting platforms have de facto become public spaces in the digital sphere; whereas public spaces must be managed in a manner that protects public interests and respects the fundamental rights and the civil law rights of the users, in particular the right to freedom of expression and information;
G. whereas upholding the law in the digital world involves not only the effective enforcement of fundamental rights, in particular freedom of expression and information, privacy, safety and security, non-discrimination, and respect for property and intellectual property rights, but also access to justice and due process; whereas delegating decisions regarding the legality of content or law enforcement powers to private companies undermines transparency and due process, leading to a fragmented approach; whereas a fast-track legal procedure with adequate guarantees is therefore required to ensure that effective remedies exist;
H. whereas automated tools are currently unable to reliably differentiate illegal content from content that is legal in a given context, and mechanisms for the automatic detection and removal of content can therefore raise legitimate legal concerns, in particular as regards possible restrictions of freedom of expression and information, protected under Article 11 of the Charter of Fundamental Rights of the European Union; whereas the use of automated mechanisms should, therefore, be proportionate, covering only justified cases and following transparent procedures;
I. whereas Article 11 of the Charter of Fundamental Rights of the European Union also protects the freedom and pluralism of the media, which are increasingly dependent on online platforms to reach their audiences;
J. whereas digital services are used by the majority of Europeans on a daily basis, but are subject to an increasingly wide set of rules across the Union leading to significant fragmentation on the market and consequently legal uncertainty for European users and services operating across borders; whereas the civil law regimes governing content hosting platforms’ practices in content moderation are based on certain sector-specific provisions at Union and national level with notable differences in the obligations imposed and in the enforcement mechanisms of the various civil law regimes deployed; whereas this situation has led to a fragmented set of rules for the Digital Single Market, which requires a response at Union level;
K. whereas the current business model of certain content hosting platforms is to promote content that is likely to attract the attention of users and therefore generate more profiling data in order to offer more effective targeted advertisements and thereby increase profit; whereas this profiling coupled with targeted advertisement can lead to the amplification of content geared towards exploiting emotions, often encouraging and facilitating sensationalism in news feed and recommendation systems, resulting in the possible manipulation of users;
L. whereas offering users contextual advertisements requires less user data than targeted behavioural advertising and is thus less intrusive;
M. whereas the choice of algorithmic logic behind recommendation systems, comparison services, content curation or advertisement placements remains at the discretion of the content hosting platforms with little possibility for public oversight, which raises accountability and transparency concerns;
N. whereas content hosting platforms with significant market power make it possible for their users to use their profiles to log into third-party websites, thereby allowing those platforms to track users’ activities even outside their own platform environment, which constitutes a competitive advantage in access to data for content curation algorithms;
O. whereas so-called smart contracts, which are based on distributed ledger technologies, including blockchains, that enable decentralised, fully traceable record-keeping and self-execution, are being used in a number of areas without a proper legal framework; whereas there is uncertainty concerning the legality of such contracts and their enforceability in cross-border situations;
P. whereas the non-negotiable terms and conditions of platforms often indicate both applicable law and competent courts outside the Union, which may impede access to justice; whereas Regulation (EU) No 1215/2012 of the European Parliament and of the Council of 12 December 2012 on jurisdiction and the recognition and enforcement of judgments in civil and commercial matters(10) lays down rules on jurisdiction; whereas the General Data Protection Regulation clarifies the data subject’s right to private enforcement action directly against the controller or processor, regardless of whether the processing takes place in the Union or not and regardless of whether the controller is established in the Union or not; whereas Article 79 of the General Data Protection Regulation stipulates that proceedings shall be brought before the courts of the Member State where the controller or processor has an establishment or, alternatively, where the data subject has his or her habitual residence;
Q. whereas access to and mining of non-personal data is an important factor in the growth of the digital economy; whereas appropriate legal standards and data protection safeguards regarding the interoperability of data can, by removing lock-in effects, play an important part in ensuring fair market conditions;
R. whereas it is important to assess the possibility of tasking a European entity with the responsibility of ensuring a harmonised approach to the implementation of the Digital Services Act across the Union, facilitating coordination at national level as well as addressing the new opportunities and challenges, in particular those of a cross-border nature, arising from ongoing technological developments;
Digital Services Act
1. Requests that the Commission submit without undue delay a set of legislative proposals constituting a Digital Services Act with an adequate material, personal and territorial scope, defining key concepts and including the recommendations as set out in the Annex to this resolution; is of the view that, without prejudice to detailed aspects of the future legislative proposals, Article 114 of the Treaty on the Functioning of the European Union should be the legal basis;
2. Proposes that the Digital Services Act include a regulation that establishes contractual rights as regards content management, lays down transparent, fair, binding and uniform standards and procedures for content moderation, and guarantees accessible and independent recourse to judicial redress; stresses that legislative proposals should be evidence-based and seek to remove current and prevent potentially new unjustified barriers in the supply of digital services by online platforms while enhancing the protection of consumers and citizens; believes that the legislative proposals should aim at achieving sustainable and smart growth, address technological challenges, and ensure that the Digital Single Market is fair and safe for everyone;
3. Further suggests that the measures proposed for content moderation only apply to illegal content rather than content that is merely harmful; suggests, to this end, that the regulation include universal criteria to determine the market power of platforms in order to provide a clear definition of what constitutes a platform with significant market power and thereby determine whether certain content hosting platforms that do not hold significant market power can be exempted from certain provisions; underlines that the framework established by the Digital Services Act should be manageable for small businesses, SMEs and start-ups and should therefore include proportionate obligations for all sectors;
4. Proposes that the Digital Services Act impose an obligation on digital service providers who are established outside the Union to designate a legal representative for the interests of users within the Union, to whom requests could be addressed in order, for example, to allow for consumer redress in the case of false or misleading advertisements, and to make the contact information of that representative visible and accessible on the website of the digital service provider;
Rights as regards content moderation
5. Stresses that the responsibility for enforcing the law must rest with public authorities; considers that the final decision on the legality of user-generated content must be made by an independent judiciary and not a private commercial entity;
6. Insists that the regulation must prohibit content moderation practices that are discriminatory or entail exploitation and exclusion, especially towards the most vulnerable, and must always respect the fundamental rights and freedoms of users, in particular their freedom of expression;
7. Stresses the necessity to better protect consumers by providing reliable and transparent information on examples of malpractice, such as the making of misleading claims and scams;
8. Recommends that the application of the regulation should be closely monitored by a European entity tasked with ensuring compliance by content hosting platforms with the provisions of the regulation, in particular by monitoring compliance with the standards laid down for content management on the basis of transparency reports and monitoring algorithms employed by content hosting platforms for the purpose of content management; calls on the Commission to assess the options of appointing an existing or new European Agency or European body or of coordinating itself a network of national authorities to carry out these tasks (hereinafter referred to as “the European entity”);
9. Suggests that content hosting platforms regularly submit comprehensive transparency reports based on a consistent methodology and assessed on the basis of relevant performance indicators, including on their content policies and the compliance of their terms and conditions with the provisions of the Digital Services Act, to the European entity; further suggests that content hosting platforms publish and make available in an easy and accessible manner those reports as well as their content management policies on a publicly accessible database;
10. Calls for content hosting platforms with significant market power to evaluate the risk that their content management policies regarding legal content pose to society, in particular with regard to their impact on fundamental rights, and to engage in a biannual dialogue with the European entity and the relevant national authorities on the basis of a presentation of transparency reports;
11. Recommends that the Member States provide for independent dispute settlement bodies, tasked with settling disputes regarding content moderation; takes the view that in order to protect anonymous publications and the general interest, not only the user who uploaded the content that is the subject of a dispute but also a third party, such as an ombudsperson, with a legitimate interest in acting should be able to challenge content moderation decisions; affirms the right of users to further recourse to justice;
12. Takes the firm position that the Digital Services Act must not oblige content hosting platforms to employ any form of fully automated ex-ante controls of content unless otherwise specified in existing Union law, and considers that mechanisms voluntarily employed by platforms must not lead to ex-ante control measures based on automated tools or upload-filtering of content and must be subject to audits by the European entity to ensure that there is compliance with the Digital Services Act;
13. Stresses that content hosting platforms must be transparent in the processing of algorithms and of the data used to train them;
Rights as regards content curation, data and online advertisements
14. Considers that the user-targeted amplification of content based on the views or positions presented in such content is one of the most detrimental practices in the digital society, especially in cases where the visibility of such content is increased on the basis of previous user interaction with other amplified content and with the purpose of optimising user profiles for targeted advertisements; is concerned that such practices rely on pervasive tracking and data mining; calls on the Commission to analyse the impact of such practices and take appropriate legislative measures;
15. Is of the view that the use of targeted advertising must be regulated more strictly in favour of less intrusive forms of advertising that do not require any tracking of user interaction with content and that being shown behavioural advertising should be conditional on users’ freely given, specific, informed and unambiguous consent;
16. Notes the existing provisions addressing targeted advertising in the General Data Protection Regulation and Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications)(11);
17. Recommends, therefore, that the Digital Services Act set clear boundaries and introduce transparency rules as regards the terms for the accumulation of data for the purpose of offering targeted advertising, as well as with regard to the functioning and accountability of such targeted advertising, especially when data are tracked on third-party websites; maintains that new measures establishing a framework for Platform-to-Consumers relations are needed as regards transparency provisions on advertising, digital nudging and preferential treatment; invites the Commission to assess options for regulating targeted advertising, including a phase-out leading to a prohibition;
18. Stresses that in line with the principle of data minimisation and in order to prevent unauthorised disclosure, identity theft and other forms of abuse of personal data, the Digital Services Act should provide for the right to use digital services anonymously wherever technically possible; calls on the Commission to require content hosting platforms to verify the identity of those advertisers with which they have a commercial relationship, in order to ensure accountability of advertisers in the event that promoted content is found to be illegal; recommends, therefore, that the Digital Services Act include legal provisions preventing platforms from commercially exploiting third-party data in situations of competition with those third parties;
19. Regrets the existing information asymmetry between content hosting platforms and public authorities and calls for a streamlined exchange of necessary information; stresses that in line with the case law on communications metadata, public authorities must be given access to a user’s metadata only to investigate persons suspected of serious crime and with prior judicial authorisation;
20. Recommends that providers with significant market power which support a single sign-on service should be required to also support at least one open and decentralised identity system based on a non-proprietary framework; asks the Commission to propose common Union standards for national systems provided by Member States, especially as regards data protection standards and cross-border interoperability;
21. Calls on the Commission to assess the possibility of defining fair contractual conditions to facilitate data sharing and increase transparency with the aim of addressing imbalances in market power; suggests, to this end, exploring options to facilitate the interoperability, interconnectivity and portability of data; points out that data sharing should be accompanied by adequate and appropriate safeguards including effective anonymisation of personal data;
22. Recommends that the Digital Services Act require platforms with significant market power to provide an application programming interface, through which third-party platforms and their users can interoperate with the main functionalities and users of the platform providing the application programming interface, including third-party services designed to enhance and customise the user experience, especially through services that customise privacy settings as well as content curation preferences; suggests that platforms publicly document all application programming interfaces they make available for the purpose of allowing for the interoperability and interconnectivity of services;
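By way of illustration only, and not as any API prescribed by this resolution, the interoperability obligation described above can be sketched as a documented programming interface through which third-party software reaches a platform's main functionalities, with user-chosen curation plugged in (all names below are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str

@dataclass
class PlatformAPI:
    """Illustrative facade a platform with significant market power might expose."""
    # Posts held by the platform, oldest first (chronological order)
    _posts: list = field(default_factory=list)

    def publish(self, post: Post) -> None:
        # Third-party clients interoperate through the documented interface
        self._posts.append(post)

    def feed(self, curator=None) -> list:
        # Users may have content curated by software of their choice;
        # the default is plain chronological order
        posts = list(self._posts)
        return curator(posts) if curator else posts

api = PlatformAPI()
api.publish(Post("bob", "world"))
api.publish(Post("alice", "hello"))

# A user-chosen curator: alphabetical by author instead of chronological
curated = api.feed(curator=lambda ps: sorted(ps, key=lambda p: p.author))
```

Publicly documenting such an interface, as the paragraph above suggests, is what allows third-party services to customise privacy settings and curation preferences without the platform's involvement.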
23. Is strongly of the view, on the other hand, that platforms with significant market power providing an application programming interface must not share, retain, monetise or use any of the data they receive from third-party services;
24. Stresses that interoperability and interconnectivity obligations must not limit, hinder or delay the ability of content hosting platforms to fix security issues, nor should the need to fix security issues lead to an undue suspension of the application programming interface providing interoperability and interconnectivity;
25. Recalls that the provisions on interoperability and interconnectivity must respect all relevant data protection laws; recommends, in this respect, that platforms be required by the Digital Services Act to ensure the technical feasibility of the data portability provisions laid down in Article 20(2) of the General Data Protection Regulation;
26. Calls for content hosting platforms to give users a real choice as to whether or not to give prior consent to being shown targeted advertising based on the user’s prior interaction with content on the same content hosting platform or on third-party websites; underlines that this choice must be presented in a clear and understandable way and its refusal must not lead to access to the functionalities of the platform being disabled; stresses that consent to targeted advertising must not be considered as freely given and valid if access to the service is made conditional on data processing; reconfirms that Directive 2002/58/EC makes targeted advertising subject to an opt-in decision and that it is otherwise prohibited; notes that since the online activities of an individual allow for deep insights into their behaviour and make it possible to manipulate them, the general and indiscriminate collection of personal data concerning every use of a digital service interferes disproportionately with the right to privacy; confirms that users have a right not to be subject to pervasive tracking when using digital services;
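The consent rule set out above can be sketched, purely as an illustration and not as any prescribed implementation, as a decision function in which behavioural targeting runs only after opt-in consent and refusal falls back to contextual advertising (which, as recital L notes, requires no tracking) without disabling the service:

```python
from typing import Optional

def select_ad(user_consented: bool, page_context: str,
              profile: Optional[dict] = None) -> str:
    # Behavioural targeting only after freely given, specific, informed
    # and unambiguous opt-in consent
    if user_consented and profile:
        return f"behavioural ad based on {profile['interest']}"
    # Refusing consent must not disable the service: fall back to a
    # contextual ad, selected from the page content alone
    return f"contextual ad matching '{page_context}'"
```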
27. Asks the Commission to ensure that, in the same spirit, consumers can still use a connected device for all its functions, even if consumers withdraw or do not give their consent to share non-operational data with the device manufacturer or third parties; reiterates the need for transparency in contract terms and conditions regarding the possibility and scope of data sharing with third parties;
28. Further calls for users to be guaranteed an appropriate degree of transparency and influence over the criteria according to which content is curated and made visible for them; affirms that this should also include the option to opt out from any content curation other than chronological order; points out that application programming interfaces provided by platforms should allow users to have content curated by software or services of their choice;
29. Underlines the importance for the Digital Services Act to provide legally sound and effective protection of children in the online environment, whilst refraining from imposing general monitoring or filtering obligations and ensuring full coordination and avoiding duplication with the General Data Protection Regulation and with the Audiovisual Media Services Directive;
30. Recalls that paid advertisements or paid placement of sponsored content should be identified in a clear, concise and intelligible manner; suggests that platforms should disclose the origin of paid advertisements and sponsored content; suggests, to this end, that content hosting platforms publish all sponsored content and advertisements and make them clearly visible to their users in an advertising archive that is publicly accessible, indicating who has paid for them, and, if applicable, on behalf of whom; stresses that this includes both direct and indirect payments or any other remuneration received by service providers;
31. Believes that, if relevant data show a significant gap in misleading advertising practices and enforcement between Union-based platforms and platforms based in third countries, it is reasonable to consider further options to ensure compliance with the laws in force within the Union; stresses the need for a level playing field between advertisers from the Union and advertisers from third countries;
Provisions regarding terms and conditions, smart contracts and blockchains, and private international law
32. Notes the rise of so-called smart contracts such as those based on distributed ledger technologies without a clear legal framework;
33. Calls on the Commission to assess the development and use of distributed ledger technologies, including blockchains and, in particular, smart contracts, to provide guidance to ensure legal certainty for businesses and consumers, in particular regarding questions of legality, the enforcement of smart contracts in cross-border situations and notarisation requirements where applicable, and to make proposals for the appropriate legal framework;
34. Underlines that the fairness and compliance with fundamental rights standards of terms and conditions imposed by intermediaries on the users of their services must be subject to judicial review; stresses that terms and conditions unduly restricting users’ fundamental rights, such as the right to privacy and to freedom of expression, should not be binding;
35. Requests that the Commission examine modalities to ensure appropriate balance and equality between the parties to smart contracts by taking into account the private concerns of the weaker party or public concerns such as those related to cartel agreements; emphasises the need to ensure that the rights of creditors in insolvency and restructuring procedures are respected; strongly recommends that smart contracts include mechanisms that can halt and reverse their execution and related payments;
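As a purely illustrative model, not tied to any real blockchain platform, the halt-and-reverse mechanism recommended above can be sketched as a small state machine in which an executed self-executing agreement can be suspended (for instance by a court or dispute settlement body) and then unwound through compensating payments:

```python
from enum import Enum, auto

class State(Enum):
    ACTIVE = auto()
    HALTED = auto()
    REVERSED = auto()

class ReversibleContract:
    """Illustrative self-executing agreement whose effects can be halted and undone."""

    def __init__(self, payer: str, payee: str, amount: int):
        self.state = State.ACTIVE
        self.ledger = []  # executed payments as (payer, payee, amount)
        self.payer, self.payee, self.amount = payer, payee, amount

    def execute(self) -> None:
        if self.state is not State.ACTIVE:
            raise RuntimeError("contract is not active")
        self.ledger.append((self.payer, self.payee, self.amount))

    def halt(self) -> None:
        # e.g. ordered by a court or an independent dispute settlement body
        self.state = State.HALTED

    def reverse(self) -> None:
        if self.state is not State.HALTED:
            raise RuntimeError("halt the contract before reversing it")
        # Compensating payments undo each executed transfer, newest first
        for payer, payee, amount in list(reversed(self.ledger)):
            self.ledger.append((payee, payer, amount))
        self.state = State.REVERSED

contract = ReversibleContract("buyer", "seller", 100)
contract.execute()
contract.halt()
contract.reverse()
```

Because distributed ledgers are append-only, a real implementation would likewise reverse execution through compensating transactions rather than by deleting records, leaving the full audit trail intact.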
36. Requests the Commission to, in particular, update its existing guidance document on Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights(12) in order to clarify whether it considers smart contracts to fall within the exemption in point (l) of Article 3(3) of that Directive, and, if so, under which circumstances, and to clarify the issue of the right of withdrawal;
37. Stresses the need for blockchain technologies, and smart contracts in particular, to be utilised in accordance with antitrust rules and requirements, including those prohibiting cartel agreements or concerted practices;
38. Considers that standard terms and conditions should not prevent effective access to justice in Union courts or disenfranchise Union citizens or businesses; calls on the Commission to assess whether the protection of access rights to data under private international law is uncertain and leads to disadvantages for Union citizens and businesses;
39. Emphasises the importance of ensuring that the use of digital services in the Union is fully governed by Union law under the jurisdiction of Union courts;
40. Concludes further that legislative solutions to these issues ought to be found at Union level if action at the international level does not seem feasible, or if there is a risk of such action taking too long to come to fruition;
41. Stresses that service providers established in the Union must not be required to remove or disable access to information that is legal in their country of origin;
o o o
42. Instructs its President to forward this resolution and the accompanying detailed recommendations to the Commission and the Council.
ANNEX TO THE RESOLUTION:
DETAILED RECOMMENDATIONS AS TO THE CONTENT OF THE PROPOSAL REQUESTED
A. PRINCIPLES AND AIMS OF THE PROPOSAL REQUESTED
THE KEY PRINCIPLES AND AIMS OF THE PROPOSAL:
— The proposal sets out both acts that should be included in the Digital Services Act and acts that are ancillary to the Digital Services Act.
— The proposal aims to strengthen civil and commercial law rules applicable to commercial entities operating online with respect to digital services.
— The proposal aims to strengthen and bring clarity on the contractual rights of users in relation to content moderation and curation.
— The proposal aims to further address inadmissible and unfair terms and conditions used for the purpose of digital services.
— The proposal addresses the issue of aspects of data collection being in contravention of fair contractual rights of users as well as data protection and online confidentiality rules.
— The proposal addresses the importance of fair implementation of the rights of users as regards interoperability and portability.
— The proposal raises the importance of private international law rules that provide legal clarity on the non-negotiable terms and conditions used by online platforms, as well as of ensuring the right to access data and guaranteeing access to justice.
— The proposal does not address aspects related to the regulation of online marketplaces, which should nevertheless be considered by the Digital Services Act Package to be proposed by the Commission.
— The proposal raises the need for assessment of the necessity of proper regulation of civil and commercial law aspects in the field of distributed ledger technologies, including blockchains and, in particular, addresses the necessity of the proper regulation of civil and commercial law aspects of smart contracts.
I. PROPOSALS TO BE INCLUDED IN THE DIGITAL SERVICES ACT
The key elements of the proposals to be included in the Digital Services Act should be:
A regulation on contractual rights as regards content management and that contains the following elements:
— It should apply to content management, including content moderation and curation, with regard to content accessible in the Union.
— It should provide proportionate principles for content moderation.
— It should provide formal and procedural standards for a notice and action mechanism, which are proportionate to the platform and the nature and impact of the harm, effective and future-proof.
— It should provide for an independent dispute settlement mechanism in the Member States without limiting access to judicial redress.
— It should indicate a set of clear indicators to define the market power of content hosting platforms, in order to determine whether certain content hosting platforms that do not hold significant market power can be exempted from certain provisions. Such indicators could include the size of its network (number of users), its financial strength, access to data, the degree of vertical integration, or the presence of lock-in effect.
— It should provide rules regarding the responsibility of content hosting platforms for goods sold or advertised on them, taking into account supporting activities for SMEs in order to minimise their burden when adapting to this responsibility.
— It should make a clear distinction between illegal and harmful content when it comes to applying the appropriate policy options. In this regard, any measure in the Digital Services Act should concern only illegal content as defined in Union and national law.
— It should be based upon established principles as regards determining the law applicable to compliance with administrative law, and should, in light of the increasing convergence of user rights, clearly state that all aspects within its scope are governed by those principles.
— It should fully respect the Charter of Fundamental Rights of the European Union and Union rules protecting users and their safety, privacy and personal data, as well as other fundamental rights.
— It should provide for a dialogue between content hosting platforms with significant market power and the European entity on the risk management of content management of legal content.
The Commission should consider options for a European entity tasked with ensuring compliance with the provisions of the proposal through the following measures:
— regular monitoring of the algorithms employed by content hosting platforms for the purpose of content management;
— regular review of the compliance of content hosting platforms with the provisions of the regulation, on the basis of transparency reports provided by the content-hosting platforms and the public database of decisions on removal of content to be established by the Digital Services Act;
— working with content hosting platforms on best practices to meet the transparency and accountability requirements for terms and conditions, as well as best practices in content moderation and implementing notice-and-action procedures;
— cooperating and coordinating with the national authorities of Member States as regards the implementation of the Digital Services Act;
— managing a dedicated fund to assist the Member States in financing the operating costs of the independent dispute settlement bodies described in the regulation, funded by fines imposed on content hosting platforms for non-compliance with the provisions of the Digital Services Act as well as a contribution by content hosting platforms with significant market power;
— imposing fines for non-compliance with the Digital Services Act. The fines should contribute to the special dedicated fund intended to assist the Member States in financing the operating costs of the dispute settlement bodies described in the regulation. Instances of non-compliance should include:
– failure to implement the provisions of the regulation;
– failure to provide transparent, accessible, fair and non-discriminatory terms and conditions;
– failure to provide the European entity with access to content management algorithms for review;
– failure to submit transparency reports to the European entity;
— publishing biannual reports on all of its activities and reporting to Union institutions.
Transparency reports regarding content management should be established as follows:
The Digital Services Act should contain provisions requiring content hosting platforms to regularly publish and provide transparency reports to the European entity. Such reports should be comprehensive, following a consistent methodology, and should include in particular:
— information on notices processed by the content hosting platform, including the following:
– the total number of notices received, for which types of content, and the action taken accordingly;
– the number of notices received per category of submitting entity, such as private individuals, public authorities or private undertakings;
– the total number of removal requests complied with and the total number of referrals of content to competent authorities;
– the total number of counter-notices or appeals received, as well as information on how they were resolved;
– the average lapse of time between publication, notice, counter-notice and action;
— information on the number of staff employed for content moderation, their location, education and language skills, as well as any algorithms used to take decisions;
— information on requests for information by public authorities, such as those responsible for law enforcement, including the numbers of fully complied with requests and requests that were not or only partially complied with;
— information on the enforcement of terms and conditions and information on the court decisions ordering the annulment and/or modification of terms and conditions considered illegal by a Member State.
Content hosting platforms should, in addition, publish their decisions on content removal on a publicly accessible database to increase transparency for users.
The independent dispute settlement bodies to be established by the regulation should issue reports on the number of referrals brought before them, including the number of referrals acted upon.
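As a hypothetical sketch only, the reporting fields listed above could be structured as machine-readable records ready for publication; every field and value name below is illustrative, not prescribed by the proposal:

```python
from dataclasses import dataclass, asdict

@dataclass
class NoticeStatistics:
    total_received: int
    per_submitter_category: dict      # e.g. {"private individuals": 900, ...}
    removals_complied_with: int
    referrals_to_authorities: int
    counter_notices_received: int
    average_hours_notice_to_action: float

@dataclass
class TransparencyReport:
    platform: str
    period: str
    notices: NoticeStatistics
    moderation_staff_count: int
    authority_information_requests: dict  # {"complied": n, "partial": n, ...}

report = TransparencyReport(
    platform="example-platform",
    period="2021-H1",
    notices=NoticeStatistics(
        total_received=1200,
        per_submitter_category={"private individuals": 900,
                                "public authorities": 200,
                                "private undertakings": 100},
        removals_complied_with=650,
        referrals_to_authorities=40,
        counter_notices_received=75,
        average_hours_notice_to_action=36.5,
    ),
    moderation_staff_count=250,
    authority_information_requests={"complied": 30, "partial": 5, "refused": 2},
)

# A plain, machine-readable record suitable for a publicly accessible database
record = asdict(report)
```

A consistent schema of this kind is what would let the European entity assess reports from different platforms against the same performance indicators.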
II. PROPOSALS ANCILLARY TO THE DIGITAL SERVICES ACT
Measures regarding content curation, data and online advertisements in breach of fair contractual rights of users should include:
— Measures to minimise the data collected by content hosting platforms, based on interactions of users with content hosted on content hosting platforms, for the purpose of completing targeted advertising profiles, in particular by imposing strict conditions for the use of targeted personal advertisements and by requiring freely given, specific, informed and unambiguous prior consent of the user. Consent to targeted advertising shall not be considered as freely given and valid if access to the service is made conditional on data processing.
— Users of content hosting platforms shall be informed if they are subject to targeted advertising, given access to their profile built by content hosting platforms and the possibility to modify it, and given the choice to opt in or out and withdraw their consent to be subject to targeted advertisements.
— Content hosting platforms should make available an archive of sponsored content and advertisements that were shown to their users, including the following:
– whether the sponsored content or sponsorship is currently active or inactive;
– the timespan during which the sponsored content or advertisement was active;
– the name and contact details of the sponsor or advertiser, and, if different, on behalf of whom the sponsored content or advertisement was placed;
– the total number of users reached;
– information on the group of users targeted.
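The archive entries listed above can be sketched, purely as an illustrative record layout with assumed field names, as follows:

```python
from dataclasses import dataclass, asdict
from datetime import date
from typing import Optional

@dataclass
class ArchivedAdvertisement:
    active: bool                        # currently active or inactive
    shown_from: date                    # timespan during which it ran
    shown_until: date
    sponsor_name: str                   # name and contact details of the sponsor
    sponsor_contact: str
    placed_on_behalf_of: Optional[str]  # if different from the sponsor
    users_reached: int                  # total number of users reached
    targeting_criteria: list            # information on the targeted group

entry = ArchivedAdvertisement(
    active=False,
    shown_from=date(2020, 6, 1),
    shown_until=date(2020, 6, 30),
    sponsor_name="Example Sponsor Ltd",
    sponsor_contact="contact@example.test",
    placed_on_behalf_of=None,
    users_reached=15000,
    targeting_criteria=["age 18-35", "interest: cycling"],
)

archive = [asdict(entry)]  # the publicly accessible archive
```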
The path to fair implementation of the rights of users as regards interoperability, interconnectivity and portability should include:
— an assessment of the possibility of defining fair contractual conditions to facilitate data sharing with the aim of addressing imbalances in market power, in particular through the interoperability, interconnectivity and portability of data.
— a requirement for platforms with significant market power to provide an application programming interface, through which third-party platforms and their users can interoperate with the main functionalities and users of the platform providing the application programming interface, including third-party services designed to enhance and customise the user experience, especially through services that customise privacy settings as well as content curation preferences;
— provisions ensuring that platforms with significant market power providing an application programming interface may not share, retain, monetise or use any of the data they receive from third-party services;
— provisions ensuring that the interoperability and interconnectivity obligations may not limit, hinder or delay the ability of content hosting platforms to fix security issues, nor should the need to fix security issues lead to an undue suspension of the application programming interface providing interoperability and interconnectivity;
— provisions ensuring that platforms be required by the Digital Services Act to ensure the technical feasibility of the data portability provisions laid down in Article 20(2) of the General Data Protection Regulation;
— provisions ensuring that content hosting platforms with significant market power publicly document all application programming interfaces they make available for the purpose of allowing for the interoperability and interconnectivity of services.
The path to the proper regulation of civil and commercial law aspects of distributed ledger technologies, including blockchains and, in particular, smart contracts, should comprise:
— measures ensuring that the proper legislative framework is in place for the development and deployment of digital services including distributed ledger technologies, such as blockchains and smart contracts;
— measures ensuring that smart contracts are fitted with mechanisms that can halt and reverse their execution, in particular given private concerns of the weaker party or public concerns such as those related to cartel agreements and in respect for the rights of creditors in insolvency and restructuring procedures;
— measures to ensure appropriate balance and equality between the parties to smart contracts, taking into account, in particular, the interest of small businesses and SMEs, for which the Commission should examine possible modalities;
— an update of the existing guidance document on Directive 2011/83/EU in order to clarify whether smart contracts fall within the exemption in point (i) of Article 3(3) of that Directive, as well as issues related to cross-border transactions, notarisation requirements and the right of withdrawal.
The path to equitable private international law rules that do not deprive users of access to justice should:
— ensure that standard terms and conditions do not include provisions regulating private international law matters to the detriment of access to justice, in particular through the effective enforcement of existing measures in this regard;
— include measures clarifying private international law rules concerning the activities of platforms regarding data, so that they are not detrimental to Union subjects;
— build on multilateralism and, if possible, be agreed in the appropriate international fora.
Only where it proves impossible to achieve a solution based on multilateralism in reasonable time, should measures applied within the Union be proposed, in order to ensure that the use of digital services in the Union is fully governed by Union law under the jurisdiction of Union courts.
B. TEXT OF THE LEGISLATIVE PROPOSAL REQUESTED
Proposal for a
REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
on contractual rights as regards content management
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Article 114 thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Economic and Social Committee,
Acting in accordance with the ordinary legislative procedure,
Whereas:
(1) The terms and conditions that digital service providers apply in relations with users are often non-negotiable and can be unilaterally amended by those providers. Action at a legislative level is needed to put in place minimum standards for such terms and conditions, in particular as regards procedural standards for content management.
(2) The civil law regimes governing the practices of content hosting platforms as regards content moderation are based on certain sector-specific provisions at Union level as well as on laws passed by Member States at national level, and there are notable differences in the obligations imposed by those civil law regimes on content hosting platforms and in their enforcement mechanisms.
(3) The resulting fragmentation of civil law regimes governing content moderation by content hosting platforms not only creates legal uncertainties, which might lead such platforms to adopt stricter practices than necessary in order to minimise the risks brought about by the use of their service, but also leads to a fragmentation of the Digital Single Market, which hinders growth and innovation and the development of European businesses in the Digital Single Market.
(4) Given the detrimental effects of the fragmentation of the Digital Single Market, and the resulting legal uncertainty for businesses and consumers, the international character of content hosting, the vast amount of content requiring moderation, and the significant market power of a few content hosting platforms located outside the Union, the various issues that arise in respect of content hosting need to be regulated in a manner that entails full harmonisation and therefore by means of a regulation.
(5) Concerning relations with users, this Regulation should lay down minimum standards for the fairness, transparency and accountability of terms and conditions of content hosting platforms. Terms and conditions should be clear, accessible, intelligible and unambiguous and include fair, transparent, binding and uniform standards and procedures for content moderation, which should guarantee accessible and independent recourse to judicial redress and comply with fundamental rights.
(6) User-targeted amplification of content based on the views or positions presented in such content is one of the most detrimental practices in the digital society, especially in cases where the visibility of such content is increased on the basis of previous user interaction with other amplified content and with the purpose of optimising user profiles for targeted advertisements.
(7) Algorithms that decide on the ranking of search results influence individual and social communications and interactions and can be opinion-forming, especially in the case of media content.
(8) In order to ensure, inter alia, that users can assert their rights, they should be given an appropriate degree of transparency and influence over the curation of content made visible to them, including the possibility to opt out of any content curation other than chronological order altogether. In particular, users should not be subject to curation without freely given, specific, informed and unambiguous prior consent. Consent to targeted advertising should not be considered as freely given and valid if access to the service is made conditional on data processing.
(9) Consent given in a general manner by a user to the terms and conditions of content hosting platforms or to any other general description of the rules relating to content management by content hosting platforms should not be taken as sufficient consent for the display of automatically curated content to the user.
(10) This Regulation does not oblige content hosting platforms to employ any form of automated ex-ante control of content, unless otherwise specified in existing Union law, and provides that content moderation procedures used voluntarily by platforms are not to lead to ex-ante control measures based on automated tools or upload-filtering of content.
(11) This Regulation should also include provisions against discriminatory practices, exploitation or exclusion for the purposes of content moderation, especially when user-created content is removed based on appearance, ethnic origin, gender, sexual orientation, religion or belief, disability, age, pregnancy or upbringing of children, language or social class.
(12) The right to issue a notice pursuant to this Regulation should remain with any natural or legal person, including public bodies, to which content is provided through a website or application.
(13) After a notice has been issued, the uploader should be informed thereof by the content hosting platform and in particular about the reason for the notice and for the action to be taken, and should be provided information about the procedure, including about appeal and referral to independent dispute settlement bodies, and about available remedies in the event of false notices. Such information should, however, not be given if the content hosting platform has been informed by public authorities about ongoing law enforcement investigations. In such case, it should be for the relevant authorities to inform the uploader about the issue of a notice, in accordance with applicable rules.
(14) All concerned parties should be informed about a decision as regards a notice. The information provided to concerned parties should also include, apart from the outcome of the decision, at least the reason for the decision and whether the decision was made solely by a human, as well as relevant information regarding review or redress.
(15) Content should be considered as manifestly illegal if it is unmistakably, and without requiring in-depth examination, in breach of legal provisions regulating the legality of content on the internet.
(16) Given the immediate nature of content hosting and the often ephemeral purpose of content uploading, it is necessary to provide independent dispute settlement bodies for the purpose of providing quick and efficient extra-judicial recourse. Such bodies should be competent to adjudicate disputes concerning the legality of user-uploaded content and the correct application of terms and conditions. However, that process should not prevent the user from having the right of access to justice and further judicial redress.
(17) The establishment of independent dispute settlement bodies could relieve the burden on courts, by providing a fast resolution of disputes over content management decisions, without prejudice to the right to judicial redress before a court. Given that content hosting platforms which enjoy significant market power can particularly gain from the introduction of independent dispute settlement bodies, it is appropriate that they contribute to the financing of such bodies through a dedicated fund. That fund should be independently managed by the European entity in order to assist the Member States in financing the running costs of the independent dispute settlement bodies. Member States should ensure that such bodies are provided with adequate resources to ensure their competence and independence.
(18) Users should have the right of referral to a fair and independent dispute settlement body, as an alternative dispute settlement mechanism, to contest a decision taken by a content hosting platform following a notice concerning content they uploaded. Notifiers should have that right if they would have legal standing in a civil procedure regarding the content in question.
(19) As regards jurisdiction, the competent independent dispute settlement body should be that located in the Member State in which the content forming the subject of the dispute has been uploaded. It should always be possible for natural persons to bring complaints to the independent dispute settlement body of their Member State of residence.
(20) Whistleblowing helps to prevent breaches of law and detect threats or harm to the general interest that would otherwise remain undetected. Providing protection for whistleblowers plays an important role in protecting freedom of expression, media freedom and the public’s right to access information. Directive (EU) 2019/1937 of the European Parliament and of the Council(13) should therefore apply to the relevant breaches of this Regulation. Accordingly, that Directive should be amended.
(21) This Regulation should include obligations to report on its implementation and to review it within a reasonable time. For this purpose, the independent dispute settlement bodies provided for by Member States under this Regulation should submit reports on the number of referrals brought before them, the decisions taken – anonymising personal data as appropriate – including the number of referrals dealt with, data on systemic problems, trends and the identification of platforms not complying with decisions of independent dispute settlement bodies.
(22) Since the objective of this Regulation, namely to establish a regulatory framework for contractual rights as regards content management in the Union, cannot be sufficiently achieved by the Member States but can rather, by reason of its scale and effects, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union. In accordance with the principle of proportionality, as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.
(23) Action at Union level as set out in this Regulation would be substantially enhanced by a European entity tasked with appropriate monitoring and ensuring compliance by content hosting platforms with the provisions of this Regulation. For this purpose, the Commission should consider the options of appointing an existing or new European Agency or European body or coordinating a network of national authorities, in order to review compliance with the standards laid down for content management on the basis of transparency reports and the monitoring of algorithms employed by content hosting platforms for the purpose of content management (hereinafter referred to as ‘the European entity’).
(24) In order to ensure that the risks presented by content amplification are evaluated, a biannual dialogue on the impact on fundamental rights of the policies for the management of legal content should be established between content hosting platforms with significant market power and the European entity, together with relevant national authorities.
(25) This Regulation respects all fundamental rights and observes the freedoms and principles recognised in the Charter of Fundamental Rights of the European Union as enshrined in the Treaties, in particular the freedom of expression and information, and the right to an effective remedy and to a fair trial,
HAVE ADOPTED THIS REGULATION:
Article 1
Purpose
The purpose of this Regulation is to contribute to the proper functioning of the internal market by laying down rules to ensure that fair contractual rights exist as regards content management and to provide independent dispute settlement mechanisms for disputes regarding content management.
Article 2
Scope of application
1. This Regulation applies to content hosting platforms that host and manage content that is accessible to the public on websites or through applications in the Union, irrespective of the place of establishment or registration, or principal place of business of the content hosting platform.
2. This Regulation does not apply to content hosting platforms that:
Article 3
Definitions
For the purposes of this Regulation, the following definitions apply:
(1) ‘content hosting platform’ means an information society service within the meaning of point (b) of Article 1(1) of Directive (EU) 2015/1535 of the European Parliament and of the Council(15) of which the main or one of the main purposes is to allow signed-up or non-signed-up users to upload content for display on a publicly accessible website or application;
(2) ‘content hosting platform with significant market power’ means a content hosting platform with at least two of the following characteristics:
(a) the capacity to develop or preserve its user base because of network effects which lock in a significant part of its users, or because its positioning in the downstream market allows it to create economic dependency;
(b) a considerable size in the market, measured either by the number of active users or by the annual global turnover of the platform;
(c) integration into a business or network environment controlled by its group or parent company, which allows for market power to be leveraged from one market into an adjacent market;
(d) a gatekeeper role for a whole category of content or information;
(e) access to large amounts of high-quality personal data, either provided by users or inferred about users on the basis of monitoring their online behaviour, such data being indispensable for providing and improving a similar service and difficult for potential competitors to access or replicate;
(3) ‘content’ means any concept, idea, form of expression or information in any format such as text, images, audio and video;
(4) ‘illegal content’ means any content which is not in compliance with Union law or the law of a Member State in which it is hosted;
(5) ‘content management’ means the moderation and curation of content on content hosting platforms;
(6) ‘content moderation’ means the practice of monitoring and applying a pre-determined set of rules and guidelines to content generated, published or shared by users, in order to ensure that the content complies with legal and regulatory requirements, community guidelines and terms and conditions, as well as any resulting measure taken by the platform, such as removal of content or the deletion or suspension of the user’s account, be it through automated means or human operators;
(7) ‘content curation’ means the practice of selecting, optimising, prioritising and recommending content based on individual user profiles for the purpose of its display on a website or application;
(8) ‘terms and conditions’ means all terms, conditions or specifications, irrespective of their name or form, which govern the contractual relationship between the content hosting platform and its users and which are unilaterally determined by the content hosting platform;
(9) ‘user’ means a natural or legal person that uses the services provided by a content hosting platform or interacts with content hosted on such a platform;
(10) ‘uploader’ means a natural or legal person that adds content to a content hosting platform irrespective of its visibility to other users;
(11) ‘notice’ means a formalised notification contesting the compliance of content with legal and regulatory requirements, community guidelines and terms and conditions.
Article 4
Principles for content management
1. Content management shall be conducted in a fair, lawful and transparent manner. Content management practices shall be appropriate, proportionate to the type and volume of content, relevant and limited to what is necessary in relation to the purposes for which the content is managed. Content hosting platforms shall be accountable for ensuring that their content management practices are fair, transparent and proportionate.
2. Users shall not be subjected to discriminatory practices, exploitation or exclusion for the purposes of content moderation by content hosting platforms, such as removal of user-generated content based on appearance, ethnic origin, gender, sexual orientation, religion or belief, disability, age, pregnancy or upbringing of children, language or social class.
3. Content hosting platforms shall provide the users with sufficient information on their content curation profiles and the individual criteria according to which content hosting platforms curate content for them, including information as to whether algorithms are used and their objectives.
4. Content hosting platforms shall provide users with an appropriate degree of influence over the curation of content made visible to them, including the choice of opting out of content curation altogether. In particular, users shall not be subject to content curation without their freely given, specific, informed and unambiguous prior consent.
Article 5
Structured risk dialogue on content management
As part of a structured risk dialogue with the European entity together with the relevant national authorities, content hosting platforms with significant market power shall present a biannual report to the European entity on the impact of their content management policies on fundamental rights, on their management of the related risks, and on how they mitigate those risks.
Article 6
Transparency obligation
1. Digital service providers shall take the measures necessary to enable the disclosure of the funding of any interest groups with which the users of the providers’ digital services are associated, and of details of the nature of the relationship between such interest groups and users. Such disclosure shall enable the person who is legally responsible to be identified.
2. Commercial digital service providers who are established outside the Union shall designate a legal representative for the purposes of user interests within the Union and make the contact information of that representative visible and accessible on their online platforms.
Article 7
Eligibility for issuing notices
1. Any natural or legal person or public body to which content is provided through a website, application, or other form of software, shall have the right to issue a notice pursuant to this Regulation.
2. Member States shall provide for penalties where a person acting for purposes relating to their trade, business, craft or profession systematically and repeatedly submits wrongful notices. Such penalties shall be effective, proportionate and dissuasive.
Article 8
Notice procedures
Content hosting platforms shall include in their terms and conditions clear, accessible, intelligible and unambiguous information regarding notice procedures, in particular:
(a) the maximum period within which the uploader of the content in question is to be informed about a notice procedure;
(b) the period within which the uploader can launch an appeal;
(c) the deadline for the content hosting platform to expeditiously treat a notice and take a decision;
(d) the deadline for the content hosting platform to inform both parties about the outcome of the decision including a justification for the action taken.
Article 9
Content of notices
1. A notice regarding content shall include at least the following information:
(a) a link to the content in question and, where appropriate, such as regarding video content, a timestamp;
(b) the reason for the notice;
(c) evidence supporting the claim made in the notice;
(d) a declaration of good faith from the notifier; and
(e) in the event of a violation of personality rights or intellectual property rights, the identity of the notifier.
2. In the event of violations referred to in point (e) of paragraph 1, the notifier shall be the person concerned by the violation of personality rights, or the holder of the intellectual property rights that were violated, or someone acting on behalf of that person.
Article 10
Information to the uploader
1. Upon a notice being issued, and before any decision on the content has been made, the uploader of the content in question shall receive the following information:
(a) the reason for the notice and for the action the content hosting platform might take;
(b) sufficient information about the procedure to follow;
(c) information on the right of reply laid down in paragraph 3; and
(d) information on the available remedies in relation to false notices.
2. The information required under paragraph 1 shall not be provided if the content hosting platform has been informed by public authorities about ongoing law enforcement investigations.
3. The uploader shall have the right to reply to the content hosting platform in the form of a counter-notice. The content hosting platform shall consider the uploader’s reply when taking a decision on the action to be taken.
Article 11
Decisions on notices
1. Content hosting platforms shall ensure that decisions on notices are taken by qualified staff without undue delay following the necessary investigations.
2. Following a notice, content hosting platforms shall, without delay, decide whether to remove, take down or disable access to content that was the subject of a notice, if such content does not comply with legal requirements. Without prejudice to Article 14(2), the fact that a content hosting platform has deemed specific content to be non-compliant shall in no case automatically lead to content by another user being removed, taken down or being made inaccessible.
Article 12
Information about decisions
Once a content hosting platform has taken a decision, it shall inform all parties involved in the notice procedure about the outcome of the decision, providing the following information in a clear and simple manner:
(a) the reasons for the decision taken;
(b) whether the decision was made solely by a human or supported by an algorithm;
(c) information about the possibility for review as referred to in Article 13 and judicial redress for either party.
Article 13
Review of decisions
1. Content hosting platforms may provide a mechanism allowing users to request a review of decisions they take.
2. Content hosting platforms with significant market power shall provide the review mechanism referred to in paragraph 1.
3. In all cases, the final decision of the review shall be taken by a human.
Article 14
Removal of content
1. Without prejudice to judicial or administrative orders regarding content online, content that has been the subject of a notice shall remain visible while the assessment of its legality is still pending.
2. Content hosting platforms shall act expeditiously to make unavailable or remove content which is manifestly illegal.
Article 15
Independent dispute settlement
1. Member States shall provide independent dispute settlement bodies for the purpose of providing quick and efficient extra-judicial recourse when decisions on content moderation are appealed against.
2. The independent dispute settlement bodies shall be composed of independent legal experts with the mandate to adjudicate disputes between content hosting platforms and users concerning the compliance of the content in question with legal and regulatory requirements, community guidelines and terms and conditions.
3. The referral of a dispute regarding content moderation to an independent dispute settlement body shall not preclude a user from being able to have further recourse in the courts unless the dispute has been settled by common agreement.
4. Content hosting platforms with significant market power shall contribute financially to the operating costs of the independent dispute settlement bodies through a dedicated fund managed by the European entity, in order to assist the Member States in financing those bodies. Member States shall ensure that the independent dispute settlement bodies are provided with adequate resources to ensure their competence and independence.
Article 16
Procedural rules for independent dispute settlement
1. The uploader as well as a third party, such as an ombudsperson with a legitimate interest in acting, shall have the right to refer a case of content moderation to the competent independent dispute settlement body in the event that a content hosting platform has decided to remove, take down or disable access to content, or otherwise to act in a manner that is contrary to the action preferred by the uploader as expressed by the uploader or constitutes an infringement of fundamental rights.
2. Where the content hosting platform has decided not to take down content that is the subject of a notification, the notifier shall have a right to refer the matter to the competent independent dispute settlement body, provided that the notifier would have legal standing in a civil procedure regarding the content in question.
3. As regards jurisdiction, the competent independent dispute settlement body shall be that located in the Member State in which the content that is the subject of the dispute has been uploaded. Natural persons shall be allowed in all cases to bring complaints to the independent dispute settlement body of their Member State of residence.
4. Where the notifier has the right to refer a case of content moderation to an independent dispute settlement body in accordance with paragraph 2, the notifier may refer the case to the independent dispute settlement body located in the Member State of habitual residence of the notifier or the uploader, if the latter is using the service for non-commercial purposes.
5. Where a case of content moderation relating to the same question is the subject of a referral to another independent dispute settlement body, the independent dispute settlement body may suspend the procedure as regards the referral. Where a question of content moderation has been the subject of recommendations by an independent dispute settlement body, that body may decline to treat a referral.
6. The Member States shall lay down all other necessary rules and procedures for the independent dispute settlement bodies within their jurisdiction.
Article 17
Personal data
Any processing of personal data carried out pursuant to this Regulation shall be carried out in accordance with Regulation (EU) 2016/679 of the European Parliament and of the Council(16) and Directive 2002/58/EC of the European Parliament and of the Council(17).
Article 18
Reporting of breaches and protection of reporting persons
Directive (EU) 2019/1937 shall apply to the reporting of breaches of this Regulation and to the persons reporting such breaches.
Article 19
Amendments to Directive (EU) 2019/1937
Directive (EU) 2019/1937 is amended as follows:
(1) in point (a) of Article 2(1), the following point is added:
“(xi) online content management;”;
(2) in Part I of the Annex, the following point is added:
“K. Point (a)(xi) of Article 2(1) - online content management.
Regulation [XXX] of the European Parliament and of the Council on contractual rights as regards content management.”.
Article 20
Reporting, evaluation and review
1. Member States shall provide the Commission with all relevant information regarding the implementation and application of this Regulation. On the basis of the information provided and of public consultation, the Commission shall, by ... [three years after entry into force of this Regulation], submit a report to the European Parliament and to the Council on the implementation and application of this Regulation and consider the need for additional measures, including, where appropriate, amendments to this Regulation.
2. Without prejudice to reporting obligations laid down in other Union legal acts, Member States shall, on an annual basis, submit the following statistics to the Commission:
(a) the number of disputes referred to independent dispute settlement bodies and the types of content that were the subject of disputes;
(b) the number of cases settled by the independent dispute settlement bodies, categorised according to outcome.
Article 21
Entry into force
This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.
It shall apply from XX.
This Regulation shall be binding in its entirety and directly applicable in all Member States.
Directive (EU) 2019/1937 of the European Parliament and of the Council of 23 October 2019 on the protection of persons who report breaches of Union law (OJ L 305, 26.11.2019, p. 17).
Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (OJ L 241, 17.9.2015, p. 1).
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).
Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ L 201, 31.7.2002, p. 37).
Digital Services Act and fundamental rights issues posed
European Parliament resolution of 20 October 2020 on the Digital Services Act and fundamental rights issues posed (2020/2022(INI))
– having regard to the Treaty on European Union (TEU), in particular Article 2 thereof,
– having regard to the Treaty on the Functioning of the European Union (TFEU), in particular Article 16 and Article 114 thereof,
– having regard to the Charter of Fundamental Rights of the European Union, in particular Article 6, Article 7, Article 8, Article 11, Article 13, Article 21, Article 22, Article 23, Article 24, Article 26, Article 38, and Article 47 thereof,
– having regard to Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (‘e-Commerce Directive’)(1),
– having regard to Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (‘General Data Protection Regulation’, (GDPR))(2),
– having regard to Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (‘Directive on privacy and electronic communications’)(3),
– having regard to Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities(4),
– having regard to Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC(5) (‘Copyright Directive’),
– having regard to the Commission Recommendation (EU) 2018/334 of 1 March 2018 on measures to effectively tackle illegal content online(6),
– having regard to the Europol Internet Organised Crime Threat Assessment (IOCTA) of 18 September 2018,
– having regard to the relevant case law of the Court of Justice of the European Union,
– having regard to Rule 54 of its Rules of Procedure,
– having regard to the opinions of the Committee on the Internal Market and Consumer Protection and the Committee on Culture and Education,
– having regard to the report of the Committee on Civil Liberties, Justice and Home Affairs (A9-0172/2020),
A. whereas fundamental rights, such as the protection of privacy and personal data, the principle of non-discrimination, as well as freedom of expression and information, need to be ingrained at the core of a successful and durable EU policy on digital services; whereas these rights need to be upheld both in the letter of the law and in the spirit of their implementation;
B. whereas the types of digital services and the roles of digital service providers have drastically changed since the adoption of the e-Commerce Directive 20 years ago;
C. whereas the trust of users can only be gained by digital services that respect users’ fundamental rights, which would not only increase the uptake of services, but would also offer a competitive advantage and stable business model for companies;
D. whereas the data protection rules applicable to all providers offering digital services in the EU’s territory were recently updated and harmonised across the EU with the General Data Protection Regulation; whereas privacy rules for electronic communications, which are a subset of digital services, are covered by the Directive on privacy and electronic communications and are currently under revision;
E. whereas the amount of all types of user-generated content shared and services provided via online platforms, including cloud services, has increased exponentially and at an unprecedented pace facilitated by advanced technologies; whereas this includes illegal content such as images depicting child sexual abuse material (CSAM) online and content that is legal but that may be harmful for society and democracy, such as disinformation on COVID-19 remedies;
F. whereas online hate speech and disinformation have become increasingly widespread in recent years as individuals and disruptive actors make use of online platforms to increase polarisation, which, in turn, is used for political purposes; whereas women, persons of colour, persons belonging to or perceived as belonging to ethnic or linguistic minorities and LGBTIQ persons are often targeted by discriminatory hate speech, bullying, threats and scapegoating online;
G. whereas this trend has been aided by online platforms whose business model is based on the collection and analysis of user data with a view to generating more traffic and ‘clicks’, and, in turn, more profiling data and thus more profit; whereas this leads to the amplification of sensationalist content; whereas hate speech and disinformation harm the public interest by undermining respectful and honest public discourse and pose threats to public security since they can incite real-world violence; whereas combating such content is key in order to ensure respect for fundamental rights and to defend the rule of law and democracy in the EU;
H. whereas social media and other content distribution platforms utilise profiling techniques to target and distribute their content as well as advertisements; whereas data collected from the digital traces of individuals can be mined in ways that allow for a highly accurate inference of very intimate personal information, especially when such data is merged with other data sets; whereas the Cambridge Analytica and Facebook scandals showed the risks associated with opaque data processing operations of online platforms by revealing that certain voters had been micro-targeted with political advertising and, at times, even with targeted disinformation;
I. whereas the automated algorithms that decide how to handle, prioritise, distribute and delete third-party content on online platforms, including during political and electoral campaigns, often reproduce existing discriminatory patterns in society, thereby leading to a high risk of discrimination for persons already affected; whereas the widespread use of algorithms for content removal or blocking also raises concerns over the rule of law and questions related to legality, legitimacy and proportionality;
J. whereas a small number of mostly non-European service providers have significant market power and exert influence on the rights and freedoms of individuals, our societies and democracies by controlling how information, services and products are presented, thereby having a significant impact on the functioning of the Member States and on their citizens; whereas the decisions of these platforms can have far-reaching consequences for the freedom of expression and information and for media freedom and pluralism;
K. whereas the policy approach to tackle illegal content online in the EU has mainly focused on voluntary cooperation and court-order-mandated takedowns thus far, but a growing number of Member States are adopting further national legislation addressing illegal content in a non-harmonised manner; whereas provisions to address certain types of illegal content were included in recent sectoral legislation at EU level;
L. whereas a pure self-regulatory approach of platforms does not provide adequate transparency, accountability and oversight; whereas such an approach neither provides proper information to public authorities, civil society and users on how platforms address illegal content and activities and content that violates their terms and conditions, nor on how they curate content in general;
M. whereas such an approach may not guarantee compliance with fundamental rights and creates a situation where judicial responsibilities are partially handed over to private parties, which poses the risk of interference with the right to freedom of expression;
N. whereas regulatory oversight and supervision is sector-specific in the EU; whereas further and more comprehensive coordination between the different oversight bodies across the EU would be beneficial;
O. whereas the lack of robust, comparable public data on the prevalence of illegal and harmful content online, on notices and the court-mandated and self-regulatory removal thereof, and on the follow-up by competent authorities creates a deficit of transparency and accountability, both in the private and public sector; whereas there is a lack of information regarding the algorithms used by platforms and websites and the way platforms address the erroneous removal of content;
P. whereas child sexual exploitation online is one of the forms of illegal content that is facilitated by technological developments; whereas the vast amount of CSAM circulating online poses serious challenges for detection, investigation and, most of all, victim identification efforts; whereas, according to Europol, reports of online sharing of CSAM made to the US-based National Center for Missing and Exploited Children (NCMEC) increased by 106 % within the last year;
Q. whereas according to the Court of Justice of the European Union (CJEU) jurisprudence, content should be removed following a court order from a Member State; whereas host providers may have recourse to automated search tools and technologies to detect and remove content that is equivalent to content previously declared unlawful, but should not be obliged to monitor generally the information that they store, or to actively seek facts or circumstances indicating illegal activity, as provided for in Article 15(1) of Directive 2000/31/EC;
R. whereas trusted electronic identification is elementary in order to ensure secure access to digital services and to carry out electronic transactions in a safer way; whereas currently only 15 Member States have notified the Commission of their electronic identity scheme for cross-border recognition in the framework of Regulation (EU) No 910/2014(7) (‘eIDAS Regulation’);
S. whereas the internet and internet platforms are still a key location for terrorist groups’ activities, which use them as a tool for spreading propaganda and for the recruitment and promotion of their activities;
1. Believes in the clear societal and economic benefits of a functioning digital single market for the EU and its Member States; welcomes these benefits, in particular improved access to information and the strengthening of the freedom of expression; stresses the important obligation to ensure a fair digital ecosystem in which fundamental rights as enshrined in the Treaties and the Charter of Fundamental Rights of the European Union, especially freedom of expression and information, non-discrimination, media freedom and pluralism, privacy and data protection, are respected and user-safety is ensured online; emphasises the fact that legislative and other regulatory interventions in the digital single market aiming to ensure compliance with this obligation should be strictly limited to what is necessary; recalls that content removal mechanisms used outside the guarantees of a due process contravene Article 10 of the European Convention on Human Rights;
2. Urges the Commission to adopt a tailored regulatory approach in order to address the differences that still persist between online and offline worlds and the challenges raised by the diversity of actors and services offered online; considers, in this regard, it essential to apply different regulatory approaches to illegal and legal content; stresses that illegal content online and cyber-enabled crimes should be tackled with the same rigour and on the basis of the same legal principles as illegal content and criminal behaviour offline, and with the same guarantees for citizens; recalls that the e-Commerce Directive is the legal framework for online services in the internal market that regulates content management;
3. Deems it necessary that illegal content is removed swiftly and consistently in order to address crimes and fundamental rights violations; considers that voluntary codes of conduct only partially address the issue;
4. Calls on digital service providers to take content offline in a diligent, proportionate and non-discriminatory manner, and with due regard in all circumstances to the fundamental rights of the users and to take into account the fundamental importance of the freedom of expression and information in an open and democratic society, with a view to avoiding the removal of content that is not illegal; requests that digital service providers which, on their own initiative, want to restrict certain legal content of their users explore the possibility of labelling that content, rather than taking it offline, thus giving users the chance to choose to access that content on their own responsibility;
5. Takes the position that any legally mandated content take-down measures in the Digital Services Act should concern illegal content only, as defined in EU and national law, and that the legislation should not include any undefined concepts and terms as this would create legal uncertainty for online platforms and put fundamental rights and freedom of speech at risk;
6. Acknowledges, however, that the current digital ecosystem also encourages problematic behaviour, such as micro-targeting based on characteristics exposing physical or psychological vulnerabilities, the spreading of hate speech, racist content and disinformation, emerging issues such as the organised abuse of multiple platforms, and the creation of accounts or manipulation of online content by algorithms; notes with concern that some business models are based on showing sensational and polarising content to users in order to increase their screen time and thereby the profits of the online platforms; underlines the negative effects of such business models on the fundamental rights of individuals and for society as a whole; calls for transparency on the monetisation policies of online platforms;
7. Emphasises, therefore, that the spreading of such harmful content should be contained; firmly believes that media literacy skills, user control over content proposed to them and public access to high-quality content and education are crucial in this regard; welcomes, therefore, the Commission initiative to create a European Digital Media Observatory to support independent fact-checking services, increase public knowledge on online disinformation and support public authorities in charge of monitoring digital media;
8. Calls on the Commission and the Member States to support independent and public service media and educational initiatives on media literacy and targeted awareness-raising campaigns within civil society; points out that special attention should be paid to harmful content in the context of minors using the internet, especially as regards their exposure to cyberbullying, sexual harassment, pornography, violence and self-harm;
9. Notes that since the online activities of an individual allow for deep insights into their personality and make it possible to manipulate them, the general and indiscriminate collection of personal data concerning every use of a digital service interferes disproportionately with the right to privacy and the protection of personal data; highlights, in particular, the potential negative impact of micro-targeted and behavioural advertisements and of assessments of individuals, especially minors and vulnerable groups, by interfering in the private life of individuals, posing questions as to the collection and use of the data used to personalise advertising, offer products or services or set prices; confirms that the right of users not to be subject to pervasive tracking when using digital services has been included in GDPR and should be properly enforced across the EU; notes that the Commission has proposed to make targeted content curation subject to an opt-in decision in its proposal for a new regulation concerning the respect for private life and the protection of personal data in electronic communications (2017/0003(COD));
10. Deems that misleading or obscure political advertising is a special class of online threat because it influences the core mechanisms that enable the functioning of our democratic society, especially when such content is sponsored by third-parties, including foreign actors; underlines that when profiling is deployed at scale for political micro-targeting to manipulate voting behaviour, it can seriously undermine the foundations of democracy; calls, therefore, on digital service providers to take the necessary measures to identify and label content uploaded by social bots and expects the Commission to provide guidelines on the use of such persuasive digital technologies in electoral campaigns and political advertising policy; calls, in this regard, for the establishment of strict transparency requirements for the display of paid political advertisement;
11. Deems it necessary that illegal content is removed consistently and without undue delay in order to address infringements, especially those relating to children and terrorist content, and fundamental rights violations with the necessary safeguards in place, such as the transparency of the process, the right to appeal and access to effective judicial redress; considers that voluntary codes of conduct and standard contractual terms of service lack adequate enforcement and have proven to only partially address the issue; stresses that the ultimate responsibility for enforcing the law, deciding on the legality of online activities and ordering hosting service providers to remove or disable access to illegal content rests with independent competent authorities;
12. Acknowledges the fact that, while the illegal nature of certain types of content can be easily established, the decision is more difficult for other types of content as it requires contextualisation; warns that current automated tools are not capable of critical analysis and of adequately grasping the importance of context for specific pieces of content, which could lead to unnecessary takedowns and harm the freedom of expression and the access to diverse information, including on political views, thus resulting in censorship; highlights that human review of automated reports by service providers or their contractors does not fully solve this problem, especially if it is outsourced to private staff that lack sufficient independence, qualification and accountability;
13. Notes with concern that illegal content online can easily and quickly be multiplied and its negative impact therefore amplified within a very short period of time; nevertheless believes that the Digital Services Act should not contain any obligation for hosting service providers or other technical intermediaries to use automated tools in content moderation;
14. Recalls that illegal content online should not only be removed by online platforms, but should also be followed up by law enforcement and the judiciary where criminal acts are concerned; calls on the Commission to consider obliging online platforms to report serious crime to the competent authority when they have received knowledge of such a crime; finds, in this regard, that a key issue in some Member States is that they have not only unresolved cases but also unopened ones; calls for barriers to filing complaints with the competent authorities to be removed; is convinced that, given the borderless nature of the internet and the fast dissemination of illegal content online, cooperation between service providers and national competent authorities, as well as cross-border cooperation between national competent authorities, should be improved and based on the principles of necessity and proportionality; stresses, in this regard, the need to respect the legal order of the EU and the established principles of cross-border cooperation and mutual trust; calls on Member States to equip their law enforcement and judicial authorities with the necessary expertise, resources and tools to allow them to effectively and efficiently deal with the increasing number of cases involving illegal content online and with dispute resolution concerning the taking offline of content, and to improve access to justice in the area of digital services;
15. Underlines that a specific piece of content may be deemed illegal in one Member State but is covered by the right to freedom of expression in another; highlights that in order to protect freedom of speech, to avoid conflicts of laws, to avert unjustified and ineffective geo-blocking and to aim for a harmonised digital single market, hosting service providers should not be required to remove or disable access to information that is legal in the Member State that they are established in, or where their designated legal representative resides or is established; recalls that national authorities can only enforce removal orders by independent competent authorities addressed to service providers established in their territory; considers it necessary to strengthen the mechanisms of cooperation between the Member States with the support of the Commission and relevant Union agencies; calls for a structured dialogue between Member States in order to determine the risk of specific types of content and to identify potential differences in assessment of such risks between Member States;
16. Underlines that illegal content should be removed where it is hosted, and that mere conduit intermediaries should not be required to block access to content;
17. Strongly believes that the current EU legal framework governing digital services should be updated with a view to addressing the challenges posed by the fragmentation between the Member States and new technologies, such as the prevalence of profiling and algorithmic decision-making that permeates all areas of life, as well as ensuring legal clarity and respect for fundamental rights, in particular the freedom of expression and the right to privacy in a futureproof manner given the rapid development of technology;
18. Welcomes the Commission’s commitment to introducing a harmonised approach addressing obligations for digital service providers, including online intermediaries, in order to avoid fragmentation of the internal market and inconsistent enforcement of regulations; calls on the Commission to propose the most efficient and effective solutions for the internal market as a whole, while avoiding new unnecessary administrative burdens and keeping the digital single market open, fair, safe and competitive for all its participants; stresses that the liability regime for digital service providers must be proportionate, must not disadvantage small and medium-sized providers and must not unreasonably limit innovation and access to information;
19. Considers that the reform should build on the solid foundation of and full compliance with existing EU law, especially the General Data Protection Regulation and the Directive on privacy and electronic communications, which is currently under revision, and respect the primacy of other sector-specific instruments such as the Audiovisual Media Services Directive; underlines that the modernisation of the e-commerce rules can affect fundamental rights; therefore urges the Commission to be extremely vigilant in its approach and to also integrate international human rights standards into its revision;
20. Highlights that the practical capacity of individual users to understand and navigate the complexity of the data ecosystems is extremely limited, as is their ability to identify whether the information they receive and services they use are made available to them on the same terms as to other users; calls, therefore, on the Commission to place transparency and non-discrimination at the heart of the Digital Services Act;
21. Insists that the Digital Services Act must aim to ensure a high level of transparency as regards the functioning of online services and a digital environment free of discrimination; stresses that, besides the existing strong regulatory framework that protects privacy and personal data, an obligation for online platforms is needed to ensure the legitimate use of algorithms; calls, therefore, on the Commission to develop a regime based on the e-Commerce Directive that clearly frames the responsibility of service providers to address the risks faced by their users and to protect their rights and to provide for an obligation of transparency and explainability of algorithms, penalties to enforce such obligations, the possibility of human intervention and other measures such as annual independent audits and specific stress tests to assist and enforce compliance;
22. Stresses that some digital service providers have to be able to identify users unambiguously in an equivalent manner to offline services; notes an unnecessary collection of personal data, such as mobile phone numbers, by online platforms at the point of registration for a service, often caused by the use of single sign-in possibilities; underlines that the GDPR clearly describes the data minimisation principle, thereby limiting the collected data to only that strictly necessary for the purpose; recommends that online platforms that support a single sign-in service with a dominant market share should be required to also support at least one open identity system based on a non-proprietary, decentralised and interoperable framework;
23. Underlines that where a certain type of official identification is needed offline, an equivalent secure online electronic identification system needs to be created; believes that online identification can be improved by enforcing the eIDAS Regulation’s cross-border interoperability of electronic identifications across the European Union; asks the Commission to explore the creation of a single European sign-in system as an alternative to private single sign-in systems and to introduce an obligation for digital services to always also offer a manual sign-in option, set by default; underlines that this service should be developed so that the collection of identifiable sign-in data by the sign-in service provider is technically impossible and data gathered is kept to an absolute minimum; recommends, therefore, that the Commission also explore the creation of a verification system for users of digital services, in order to ensure the protection of personal data and age verification, especially for minors, which should not be used commercially or to track users cross-site; stresses that these sign-in and verification systems should apply only to digital services that require personal identification, authentication or age verification; recalls that the Member States and Union institutions have to guarantee that electronic identifications are secure and transparent, process only the data necessary for the identification of the user, are used for a legitimate purpose only, are not used commercially, and are not used to restrain general access to the internet or to track users cross-site;
24. Deems it indispensable to have the full harmonisation and clarification of rules on liability at EU level to guarantee the respect of fundamental rights and the freedoms of users across the EU; believes that such rules should maintain liability exemptions for intermediaries that do not have actual knowledge of the illegal activity or information on their platforms; expresses its concern that recent national laws to tackle hate speech and disinformation lead to an increasing fragmentation of rules and to a lower level of fundamental rights protection in the EU;
25. Calls, to this end, for legislative proposals that keep the digital single market open and competitive by providing harmonised requirements for digital service providers to apply effective, coherent, transparent and fair procedures and procedural safeguards to address illegal content in line with national and European law, including via a harmonised notice-and-action procedure;
26. Believes, in this regard, that it is crucial for online platforms to be provided with clear rules, requirements and safeguards with regard to liability for third-party content; proposes that a common regulatory framework be put in place in order to efficiently identify and remove illegal content;
27. Highlights that rules on notice-and-action mechanisms should be complemented by requirements for platforms to take specific measures that are proportionate to their scale of reach as well as their technical and operational capacities in order to effectively address the appearance of illegal content on their services; recognises, therefore, where technologically feasible, on the basis of sufficiently substantiated orders by independent competent public authorities, and taking full account of the specific context of the content, that digital service providers may be required to execute periodic searches for distinct pieces of content that a court had already declared unlawful, provided that the monitoring of and search for the information concerned by such an injunction are limited to information conveying a message whose content remains essentially unchanged compared with the content which gave rise to the finding of illegality and containing the elements specified in the injunction, which, in line with the judgment of the Court of Justice of 3 October 2019 in case C-18/18(8), are identical or equivalent to the extent that would not require the host provider to carry out an independent assessment of that content;
28. Maintains that the choice of the concrete measures should be left to the platforms; supports a balanced approach based on a dialogue with stakeholders and an assessment of the risks incurred by the platforms, as well as a clear chain of responsibility to avoid unnecessary regulatory burdens for the platforms and unnecessary and disproportionate restrictions on fundamental rights, in particular the freedom of expression, access to information, including on political ideas, and the right to privacy; stresses that certain obligations can be further specified by sectoral legislation; emphasises that any measure put in place to this end cannot constitute, either de jure or de facto, a general monitoring requirement;
29. Stresses the need for appropriate safeguards and due process obligations, including a requirement for human oversight and verification, in addition to counter notice procedures, to allow content owners and uploaders to defend their rights adequately and in a timely manner, and to ensure that removal or blocking decisions are legal, accurate, well-founded, protect users and respect fundamental rights; highlights that persons who systematically and repeatedly submit wrongful or abusive notices should be sanctioned; recalls that besides counter-notice procedures and out-of-court dispute settlements by platforms in accordance with the internal complaints system, the possibility of effective judicial redress should remain available to satisfy the right to effective remedy;
30. Supports the preservation of the current framework on the limited liability for content and the country of origin principle, but considers improved coordination for removal requests between national competent authorities to be essential; underlines that illegal content should be removed where it is hosted; emphasises that such requests should be subject to legal safeguards in order to prevent abuse and ensure full respect of fundamental rights; highlights that removal requests from competent authorities should be specific and clearly state the legal basis for removal; stresses that an effective oversight and enforcement mechanism, including proportionate sanctions taking into account their technical and operational capacities, should apply to those service providers that fail to comply with lawful orders;
31. Recalls that digital service providers must not be legally required to retain personal data of their users or subscribers for law enforcement purposes, unless a targeted retention is ordered by an independent competent authority in full respect of Union law and CJEU jurisprudence; further recalls that such retention of data should be limited to what is strictly necessary with respect to the categories of data to be retained, the means of communication affected, the persons concerned and the retention period adopted;
32. Believes that in order to protect fundamental rights, the Digital Services Act should introduce rules aiming to ensure that the terms of service of digital service providers are clear, transparent, fair and made available in an easy and accessible manner to users; deplores the fact that the terms of service of some content platforms force law enforcement officers to use personal accounts to investigate certain complaints, which poses a threat both to these investigations and to personal safety; calls for more efficient coordination between Member States regarding the follow-up of law enforcement on flagged illegal content; recalls that take-down orders from independent competent authorities must always be based on law, not on the terms of service of the service providers;
33. Calls on the Commission to ensure that users have access to diverse and quality content online as a means towards ensuring that citizens are adequately informed; expects the Digital Services Act to ensure that quality media content is easy to access and easy to find on third-party platforms and that removals of content are in line with human rights standards and limited to content that is manifestly illegal or has been found illegal by an independent competent authority; stresses that legal content should not be subject to any legal removal or blocking obligations;
34. Supports greater dialogue between the Member States, competent authorities and relevant stakeholders with the aim of developing, evaluating and improving soft law approaches, such as the EU Code of Practice on Disinformation, in order to further address categories of legal content, including disinformation; expects the Commission to issue guidelines including increased transparency rules on content moderation and advertising policy in a specific instrument accompanying the Digital Services Act to ensure that the removal and the blocking of legal content on the basis of terms and conditions are limited to the absolute minimum; calls, further, on the Commission to establish a framework that prohibits platforms from exercising a second layer of control over content that is provided under a media service provider’s responsibility and that is subject to specific standards and oversight;
35. Emphasises, moreover, that users should be given more choice and control with regard to the content that they see, including more options on the way content is ranked to them and the possibility to opt-out from any content curation; strongly believes that the design and performance of recommendation systems should be user-friendly and subject to full transparency;
36. Deems that accountability, both in the private and public sector, and evidence-based policy making require robust data on the incidence and the tackling of illegal activity and the removal of illegal content online, as well as robust data on the content curation algorithms of online platforms;
37. Calls, in this regard, for an annual, comprehensive and consistent public reporting obligation for platforms, proportionate to their scale of reach and operational capacities, more specifically on their content moderation procedures, including information on adopted measures against illegal activities online and standardised data on the amount of content removed and the underlying legal reasons and bases, the type and justification of removal requests received, the number of requests whose execution was refused and the reasons therefor; stresses that such reports, covering actions taken in a given year, should be submitted by the end of the first quarter of the following year;
38. Calls, moreover, for an annual public reporting obligation for national authorities, including standardised data on the number of removal requests and their legal bases, on the number of removal requests that were subject to administrative or judicial remedies, on the outcome of these proceedings, with a mention of the outcomes that specified content or activities wrongly identified as illegal, and on the total number of decisions imposing penalties, including a description of the type of penalty imposed;
39. Expresses its concern regarding the fragmentation and the documented lack of financial and human resources for the supervision and oversight bodies; calls for increased cooperation between the Member States with regard to regulatory oversight of digital services;
40. Considers that in order to guarantee proper enforcement of the Digital Services Act, the oversight of compliance with procedures, procedural safeguards and transparency obligations laid down in this act should be harmonised within the digital single market; supports, in this regard, strong and rigorous enforcement by an independent EU oversight structure that has the competence to impose fines on the basis of an assessment of a clearly defined set of factors, such as proportionality, technical and organisational measures, and negligence; believes that this should include the possibility for fines to be based on a percentage of the annual global turnover of the company;
41. Stresses that audits of digital service providers’ internal policies and algorithms should be made with due regard to Union law, in particular to the fundamental rights of the services’ users, taking into account the importance of non-discrimination and the freedom of expression and information in an open and democratic society, and without publishing commercially sensitive data; stresses the need to assess, upon complaint or upon the initiative of the oversight bodies, whether and how digital service providers amplify content, for example through recommendation engines and optimisation features such as autocomplete and trending;
42. Considers that the transparency reports drawn up by platforms and national competent authorities should be made publicly available and analysed for structural trends in removal, detection and blocking at EU level;
43. Underlines the importance of empowering users to enforce their own fundamental rights online, including by means of easily accessible, impartial, transparent, efficient and free complaint procedures, reporting mechanisms for illegal content and criminal behaviour for individuals and companies, legal remedies, educational measures and awareness-raising on data protection issues and child online safety;
44. Believes that past experience has proved the effectiveness of allowing innovative business models to flourish and of strengthening the digital single market by removing barriers to the free movement of digital services and preventing the introduction of new, unjustified national barriers, and that the continuation of this approach would reduce the fragmentation of the internal market; considers, furthermore, that the Digital Services Act can offer opportunities to develop citizens’ knowledge and skills in the field of digitalisation, while at the same time guaranteeing a high level of consumer protection, including by safeguarding online safety;
45. Emphasises the indispensability of agreed standards of essential security in cyberspace in order for digital services to provide their full benefits to citizens; notes, therefore, the urgent need for the Member States to take coordinated action to ensure basic cyber hygiene and to prevent avoidable dangers in cyberspace, including through legislative measures;
46. Instructs its President to forward this resolution to the Council and the Commission.
Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC (OJ L 257, 28.8.2014, p. 73).
European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL))
– having regard to Article 225 of the Treaty on the Functioning of the European Union,
– having regard to Article 114 of the Treaty on the Functioning of the European Union,
– having regard to the Charter of Fundamental Rights of the European Union,
– having regard to Council Regulation (EU) 2018/1488 of 28 September 2018 establishing the European High Performance Computing Joint Undertaking(1),
– having regard to Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin(2) (Racial Equality Directive),
– having regard to Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation(3) (Equal Treatment in Employment Directive),
– having regard to Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)(4) (GDPR), and to Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA(5),
– having regard to the Interinstitutional Agreement of 13 April 2016 on Better Law-Making(6),
– having regard to the proposal for a regulation of the European Parliament and of the Council of 6 June 2018 establishing the Digital Europe Programme for the period 2021-2027 (COM(2018)0434),
– having regard to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 11 December 2019 on The European Green Deal (COM(2019)0640),
– having regard to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 19 February 2020 on Artificial Intelligence - A European approach to excellence and trust (COM(2020)0065),
– having regard to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 19 February 2020 on A European strategy for data (COM(2020)0066),
– having regard to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 19 February 2020 on Shaping Europe’s digital future (COM(2020)0067),
– having regard to the Council of the European Union’s conclusions on Shaping Europe’s Digital Future of June 2020,
– having regard to its resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics(7),
– having regard to its resolution of 1 June 2017 on digitising European industry(8),
– having regard to its resolution of 12 September 2018 on autonomous weapon systems(9),
– having regard to its resolution of 11 September 2018 on language equality in the digital age(10),
– having regard to its resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics(11),
– having regard to the report of 8 April 2019 of the High-Level Expert Group on Artificial Intelligence set up by the Commission entitled ‘Ethics Guidelines for Trustworthy AI’,
– having regard to the European Added Value Assessment study carried out by the European Parliamentary Research Service, entitled 'European framework on ethical aspects of artificial intelligence, robotics and related technologies: European Added Value Assessment'(12),
– having regard to the briefings and studies prepared at the request of the Panel for the Future of Science and Technology (STOA), managed by the Scientific Foresight Unit within the European Parliamentary Research Service, entitled ‘What if algorithms could abide by ethical principles?’, ‘Artificial Intelligence ante portas: Legal & ethical reflections’, ‘A governance framework for algorithmic accountability and transparency’, ‘Should we fear artificial intelligence?’ and ‘The ethics of artificial intelligence: Issues and initiatives’,
– having regard to the Council of Europe’s Framework Convention for the Protection of National Minorities, Protocol No 12 to the Convention for the Protection of Human Rights and Fundamental Freedoms, and the European Charter for Regional or Minority Languages,
– having regard to the OECD Council Recommendation on Artificial Intelligence adopted on 22 May 2019,
– having regard to Rules 47 and 54 of its Rules of Procedure,
– having regard to the opinions of the Committee on Foreign Affairs, the Committee on the Internal Market and Consumer Protection, the Committee on Transport and Tourism, the Committee on Civil Liberties, Justice and Home Affairs, the Committee on Employment and Social Affairs, the Committee on the Environment, Public Health and Food Safety and the Committee on Culture and Education,
– having regard to the report of the Committee on Legal Affairs (A9-0186/2020),
Introduction
A. whereas the development, deployment and use of artificial intelligence (also referred to as ‘AI’), robotics and related technologies is carried out by humans, and their choices determine the potential of such technologies to benefit society;
B. whereas artificial intelligence, robotics and related technologies that have the potential to generate opportunities for businesses and benefits for citizens and that can directly impact all aspects of our societies, including fundamental rights and social and economic principles and values, as well as have a lasting influence on all areas of activity, are being promoted and developed quickly;
C. whereas artificial intelligence, robotics and related technologies will lead to substantial changes to the labour market and in the workplace; whereas they can potentially replace workers performing repetitive activities, facilitate human-machine collaborative working systems, increase competitiveness and prosperity and create new job opportunities for qualified workers, while at the same time posing a serious challenge in terms of reorganisation of the workforce;
D. whereas the development of artificial intelligence, robotics and related technologies can also contribute to reaching the sustainability goals of the European Green Deal in many different sectors; whereas digital technologies can boost the impact of policies as regards environmental protection; whereas they can also contribute to reducing traffic congestion and emissions of greenhouse gases and air pollutants;
E. whereas, for sectors like public transport, AI-supported intelligent transport systems can be used to minimise queuing, optimise routing, enable persons with disabilities to be more independent, and increase energy efficiency thereby enhancing decarbonisation efforts and reducing the environmental footprint;
F. whereas these technologies bring about new business opportunities which can contribute to the recovery of Union industry after the current health and economic crisis if greater use is made of them, for instance, in the transport industry; whereas such opportunities can create new jobs, as the uptake of these technologies has the potential to increase businesses' productivity levels and contribute to efficiency gains; whereas innovation programmes in this area can enable regional clusters to thrive;
G. whereas the Union and its Member States have a particular responsibility to harness, promote and enhance the added value of artificial intelligence and make sure that AI technologies are safe and contribute to the well-being and general interest of their citizens as they can make a huge contribution to reaching the common goal of improving the lives of citizens and fostering prosperity within the Union, by contributing to the development of better strategies and innovation in a number of areas and sectors; whereas, in order to exploit the full potential of artificial intelligence and make users aware of the benefits and challenges that AI technologies bring, it is necessary to include AI or digital literacy in education and training, including in terms of promoting digital inclusion, and to conduct information campaigns at Union level that give an accurate representation of all aspects of AI development;
H. whereas a common Union regulatory framework for the development, deployment and use of artificial intelligence, robotics and related technologies (‘regulatory framework for AI’) should allow citizens to share the benefits drawn from their potential, while protecting citizens from the potential risks of such technologies and promoting the trustworthiness of such technologies in the Union and elsewhere; whereas that framework should be based on Union law and values and guided by the principles of transparency, explainability, fairness, accountability and responsibility;
I. whereas such a regulatory framework is of key importance in avoiding the fragmentation of the Internal Market resulting from differing national legislation and will help foster much needed investment, develop data infrastructure and support research; whereas it should consist of common legal obligations and ethical principles as laid down in the proposal for a Regulation requested in the annex to this resolution; whereas it should be established according to the better regulation guidelines;
J. whereas the Union has a strict legal framework in place to ensure, inter alia, the protection of personal data and privacy and non-discrimination, to promote gender equality, environmental protection and consumers’ rights; whereas such a legal framework consisting of an extensive body of horizontal and sectorial legislation, including the existing rules on product safety and liability, will continue to apply in relation to artificial intelligence, robotics and related technologies, although certain adjustments of specific legal instruments may be necessary to reflect the digital transformation and address new challenges posed by the use of artificial intelligence;
K. whereas there are concerns that the current Union legal framework, including the consumer law and employment and social acquis, data protection legislation, product safety and market surveillance legislation, as well as antidiscrimination legislation, may no longer be fit for purpose to effectively tackle the risks created by artificial intelligence, robotics and related technologies;
L. whereas in addition to adjustments to existing legislation, legal and ethical questions relating to AI technologies should be addressed through an effective, comprehensive and future-proof regulatory framework of Union law reflecting the Union’s principles and values as enshrined in the Treaties and the Charter of Fundamental Rights of the European Union (‘Charter’) that should refrain from over-regulation, by only closing existing legal loopholes, and increase legal certainty for businesses and citizens alike, namely by including mandatory measures to prevent practices that would undoubtedly undermine fundamental rights;
M. whereas any new regulatory framework needs to take into consideration all the interests at stake; whereas careful examination of the consequences of any new regulatory framework on all actors in an impact assessment should be a prerequisite for further legislative steps; whereas the crucial role of small and medium-sized enterprises (SMEs) and start-ups especially in the Union economy justifies a strictly proportionate approach to enable them to develop and innovate;
N. whereas artificial intelligence, robotics and related technologies can have serious implications for the material and immaterial integrity of individuals, groups, and society as a whole, and potential individual and collective harm must be addressed with legislative responses;
O. whereas, in order to respect a Union regulatory framework for AI, specific rules for the Union’s transport sector may need to be adopted;
P. whereas AI technologies are of strategic importance for the transport sector, including due to them raising the safety and accessibility of all modes of transport, and creating new employment opportunities and more sustainable business models; whereas a Union approach to the development of artificial intelligence, robotics and related technologies in transport has the potential to increase the global competitiveness and strategic autonomy of the Union economy;
Q. whereas human error is still involved in about 95% of all road traffic accidents in the Union; whereas the Union aimed to reduce annual road fatalities in the Union by 50% by 2020 compared to 2010, but, in view of stagnating progress, renewed its efforts in its EU Road Safety Policy Framework 2021-2030 - Next steps towards "Vision Zero"; whereas in this regard, AI, automation and other new technologies have great potential and vital importance for increasing road safety by reducing the possibilities for human error;
R. whereas the Union’s regulatory framework for AI should also reflect the need to ensure that workers’ rights are respected; whereas regard should be had to the European Social Partners Framework Agreement on Digitalisation of June 2020;
S. whereas the scope of the Union’s regulatory framework for AI should be adequate, proportionate and thoroughly assessed; whereas, while it should cover a wide range of technologies and their components, including algorithms, software and data used or produced by them, a targeted risk-based approach is necessary to avoid hampering future innovation and the creation of unnecessary burdens, especially for SMEs; whereas the diversity of applications driven by artificial intelligence, robotics and related technologies complicates finding a single solution suitable for the entire spectrum of risks;
T. whereas data analysis and AI increasingly impact on the information made accessible to citizens; whereas such technologies, if misused, may endanger fundamental rights to freedom of expression and information as well as media freedom and pluralism;
U. whereas the geographical scope of the Union’s regulatory framework for AI should cover all the components of artificial intelligence, robotics and related technologies developed, deployed or used in the Union, including in cases where part of the technologies might be located outside the Union or not have a specific location;
V. whereas the Union’s regulatory framework for AI should encompass all relevant stages, namely the development, the deployment and the use of the relevant technologies and their components, requiring due consideration of the relevant legal obligations and ethical principles and should set the conditions to make sure that developers, deployers and users are fully compliant with such obligations and principles;
W. whereas a harmonised approach to ethical principles relating to artificial intelligence, robotics and related technologies requires a common understanding in the Union of the concepts that form the basis of the technologies such as algorithms, software, data or biometric recognition;
X. whereas action at Union level is justified by the need to avoid regulatory fragmentation or a series of national regulatory provisions with no common denominator, and to ensure a homogenous application of common ethical principles enshrined in law when developing, deploying and using high-risk artificial intelligence, robotics and related technologies; whereas clear rules are needed where the risks are significant;
Y. whereas common ethical principles are only efficient where they are also enshrined in law, and those responsible for ensuring, assessing and monitoring compliance are identified;
Z. whereas ethical guidance, such as the principles adopted by the High-Level Expert Group on Artificial Intelligence, provides a good starting point but cannot ensure that developers, deployers and users act fairly and guarantee the effective protection of individuals; whereas such guidance is all the more relevant with regard to high-risk artificial intelligence, robotics and related technologies;
AA. whereas each Member State should designate a national supervisory authority responsible for ensuring, assessing and monitoring the compliance of the development, deployment and use of high-risk artificial intelligence, robotics and related technologies with the Union’s regulatory framework for AI, and for allowing discussions and exchanges of views in close cooperation with relevant stakeholders and civil society; whereas national supervisory authorities should cooperate with each other;
AB. whereas in order to ensure a harmonised approach across the Union and the optimal functioning of the Digital Single Market, coordination at Union level by the Commission, and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context, should be assessed as regards the new opportunities and challenges, in particular those of a cross-border nature, arising from ongoing technological developments; whereas, to this end, the Commission should be tasked with finding an appropriate solution to structure such coordination at Union level;
Human-centric and human-made artificial intelligence
1. Takes the view that, without prejudice to sector-specific legislation, an effective and harmonised regulatory framework based on Union law, the Charter and international human rights law, and applicable, in particular, to high-risk technologies, is necessary in order to establish equal standards throughout the Union and effectively protect Union values;
2. Believes that any new regulatory framework for AI consisting of legal obligations and ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies should fully respect the Charter and thereby respect human dignity, autonomy and self-determination of the individual, prevent harm, promote fairness, inclusion and transparency, eliminate biases and discrimination, including as regards minority groups, and respect and comply with the principles of limiting the negative externalities of technology used, of ensuring explainability of technologies, and of guaranteeing that the technologies are there to serve people and not replace or decide for them, with the ultimate aim of increasing every human being’s well-being;
3. Emphasises the asymmetry between those who employ AI technologies and those who interact and are subject to them; in this context, stresses that citizens’ trust in AI can only be built on an ethics-by-default and ethics-by-design regulatory framework which ensures that any AI put into operation fully respects and complies with the Treaties, the Charter and secondary Union law; considers that building on such an approach should be in line with the precautionary principle that guides Union legislation and should be at the heart of any regulatory framework for AI; calls, in this regard, for a clear and coherent governance model that allows companies and innovators to further develop artificial intelligence, robotics and related technologies;
4. Believes that any legislative action related to artificial intelligence, robotics and related technologies should be in line with the principles of necessity and proportionality;
5. Considers that such an approach will allow companies to introduce innovative products onto the market and create new opportunities, while ensuring the protection of Union values by leading to the development of AI systems which incorporate Union ethical principles by design; considers that such a values-based regulatory framework would represent added value by providing the Union with a unique competitive advantage and make a significant contribution to the well-being and prosperity of Union citizens and businesses by boosting the internal market; underlines that such a regulatory framework for AI will also represent added value as regards promoting innovation in the internal market; believes that for example, in the transport sector, this approach presents Union businesses with the opportunity to become global leaders in this area;
6. Notes that the Union’s legal framework should apply to artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies;
7. Notes that the opportunities based on artificial intelligence, robotics and related technologies rely on ‘Big Data’, with a need for a critical mass of data to train algorithms and refine results; welcomes in this regard the Commission’s proposal for the creation of a common data space in the Union to strengthen data exchange and support research in full respect of European data protection rules;
8. Considers that the current Union legal framework, in particular on the protection of privacy and personal data, will need to fully apply to AI, robotics and related technologies, and needs to be reviewed and scrutinised on a regular basis and updated where necessary, in order to effectively tackle the risks created by these technologies, and, in this regard, could benefit from being supplemented with robust guiding ethical principles; points out that, where it would be premature to adopt legal acts, a soft law framework should be used;
9. Expects the Commission to integrate a strong ethical approach into the legislative proposal requested in the annex to this resolution as a follow up to the White Paper on Artificial Intelligence, including on safety, liability and fundamental rights, which maximises the opportunities and minimises the risks of AI technologies; expects that the legislative proposal requested will include policy solutions to the major recognised risks of artificial intelligence including, amongst others, on the ethical collection and use of Big Data, the issue of algorithmic transparency and algorithmic bias; calls on the Commission to develop criteria and indicators to label AI technology, in order to stimulate transparency, explainability and accountability and incentivise the taking of additional precautions by developers; stresses the need to invest in integrating non-technical disciplines in AI study and research, taking into account the social context;
10. Considers that artificial intelligence, robotics and related technologies must be tailored to human needs in line with the principle whereby their development, deployment and use should always be at the service of human beings and never the other way round, and should seek to enhance well-being and individual freedom, as well as preserve peace, prevent conflicts and strengthen international security, while at the same time maximising the benefits offered and preventing and reducing its risks;
11. Declares that the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including but not exclusively by human beings, should always be ethically guided, and designed to respect and allow for human agency and democratic oversight, as well as allow the retrieval of human control when needed by implementing appropriate control measures;
Risk assessment
12. Stresses that any future regulation should follow a differentiated and future oriented risk-based approach to regulating artificial intelligence, robotics and related technologies, including technology-neutral standards across all sectors, with sector-specific standards where appropriate; notes that, in order to ensure uniform implementation of the system of risk assessment and that there is compliance with related legal obligations to ensure a level-playing field among the Member States and to prevent fragmentation of the internal market, an exhaustive and cumulative list of high-risk sectors and high-risk uses or purposes is needed; stresses that such a list must be the subject of regular re-evaluation and notes that, given the evolving nature of these technologies, the way in which their risk assessment is carried out may need to be reassessed in the future;
13. Considers that the determination of whether artificial intelligence, robotics and related technologies should be considered high-risk, and thus subject to mandatory compliance with legal obligations and ethical principles as laid down in the regulatory framework for AI, should always follow from an impartial, regulated and external ex-ante assessment based on concrete and defined criteria;
14. Considers, in that regard, that artificial intelligence, robotics and related technologies should be considered high-risk when their development, deployment and use entail a significant risk of causing injury or harm to individuals or society, in breach of fundamental rights and safety rules as laid down in Union law; considers that, for the purposes of assessing whether AI technologies entail such a risk, the sector where they are developed, deployed or used, their specific use or purpose and the severity of the injury or harm that can be expected to occur should be taken into account; the first and second criteria, namely the sector and the specific use or purpose, should be considered cumulatively;
15. Underlines that the risk assessment of these technologies should be done on the basis of an exhaustive and cumulative list of high-risk sectors and high-risk uses and purposes; strongly believes that there should be coherence within the Union when it comes to the risk assessment of these technologies, especially when they are assessed both in light of their compliance with the regulatory framework for AI and in accordance with any other applicable sector-specific legislation;
16. Considers that this risk-based approach should be developed in a way that limits the administrative burden for companies, and SMEs in particular, as much as possible by using existing tools; such tools include but are not limited to the Data Protection Impact Assessment list as provided for in Regulation (EU) 2016/679;
Safety features, transparency and accountability
17. Recalls that the right to information of consumers is anchored as a key principle under Union law and underlines that it therefore should be fully implemented in relation to artificial intelligence, robotics and related technologies; is of the opinion that it should especially encompass transparency regarding interaction with artificial intelligence systems, including automation processes, and regarding their mode of functioning, their capabilities (for example, how information is filtered and presented), and their accuracy and limitations; considers that such information should be provided to the national supervisory authorities and national consumer protection authorities;
18. Underlines that consumers’ trust is essential for the development and implementation of these technologies, which can carry inherent risks when they are based on opaque algorithms and biased data sets; believes that consumers should have the right to be adequately informed in an understandable, timely, standardised, accurate and accessible manner about the existence, reasoning, possible outcome and impacts for consumers of algorithmic systems, about how to reach a human with decision-making powers, and about how the system’s decisions can be checked, meaningfully contested and corrected; underlines, in this regard, the need to consider and respect the principles of information and disclosure on which the consumer law acquis has been built; considers it necessary to provide detailed information to end-users regarding the operation of transport systems and AI-supported vehicles;
19. Notes that it is essential that the algorithms and data sets used or produced by artificial intelligence, robotics, and related technologies are explainable and, where strictly necessary and in full respect of Union legislation on data protection, privacy and intellectual property rights and trade secrets, accessible by public authorities such as national supervisory authorities and market surveillance authorities; further notes that, in accordance with the highest possible and applicable industry standards, documentation should be stored by those who are involved in the different stages of the development of high-risk technologies; notes the possibility that market surveillance authorities may have additional prerogatives in that respect; stresses in this respect the role of lawful reverse-engineering; considers that an examination of the current market surveillance legislation might be necessary to ensure that it responds ethically to the emergence of artificial intelligence, robotics and related technologies;
20. Calls for a requirement for developers and deployers of high-risk technologies to, where a risk assessment so indicates, provide public authorities with the relevant documentation on the use and design and safety instructions, including, when strictly necessary and in full respect of Union legislation on data protection, privacy, intellectual property rights and trade secrets, source code, development tools and data used by the system; notes that such an obligation would allow for the assessment of their compliance with Union law and ethical principles and notes, in that respect, the example provided by the legal deposit of publications of a national library; notes the important distinction between transparency of algorithms and transparency of the use of algorithms;
21. Further notes that, in order to respect human dignity, autonomy and safety, due consideration should be given to vital and advanced medical appliances and the need for independent trusted authorities to retain the means necessary to provide services to persons carrying these appliances, where the original developer or deployer no longer provides them; such services would include, for example, maintenance, repairs and enhancements, including software updates that fix malfunctions and vulnerabilities;
22. Maintains that high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, regardless of the field in which they are developed, deployed and used, should be developed by design in a secure, traceable, technically robust, reliable, ethical and legally binding manner and be subject to independent control and oversight; considers especially that all players throughout the development and supply chains of artificial intelligence products and services should be legally accountable and highlights the need for mechanisms to ensure liability and accountability;
23. Underlines that regulation and guidelines concerning explainability, auditability, traceability and transparency, as well as, where so required by a risk assessment and strictly necessary and while fully respecting Union law such as that concerning data protection, privacy, intellectual property rights and trade secrets, access by public authorities to technology, data and computing systems underlying such technologies, are essential to ensuring citizens’ trust in those technologies, even if the degree of explainability is relative to the complexity of the technologies; points out that it is not always possible to explain why a model has led to a particular result or decision, black box algorithms being a case in point; considers, therefore, that the respect of these principles is a precondition to guarantee accountability;
24. Considers that citizens, including consumers, should be informed when they are interacting with a system that uses artificial intelligence, in particular one that personalises a product or service for its users, as well as whether and how they can switch off or limit such personalisation;
25. Points out in this regard that, if they are to be trustworthy, artificial intelligence, robotics and their related technologies must be technically robust and accurate;
26. Stresses that the protection of networks of interconnected AI and robotics is important and strong measures must be taken to prevent security breaches, data leaks, data poisoning, cyber-attacks and the misuse of personal data, and that this will require the relevant agencies, bodies and institutions both at Union and national level to work together and in cooperation with end users of these technologies; calls on the Commission and Member States to ensure that Union values and respect for fundamental rights are observed at all times when developing and deploying AI technology in order to ensure the security and resilience of the Union’s digital infrastructure;
Non-bias and non-discrimination
27. Recalls that artificial intelligence, depending on how it is developed and used, has the potential to create and reinforce biases, including through inherent biases in the underlying datasets, and therefore create various forms of automated discrimination, including indirect discrimination, concerning in particular groups of people with similar characteristics; calls on the Commission and the Member States to take any possible measure to avoid such biases and to ensure the full protection of fundamental rights;
28. Is concerned by the risks of biases and discrimination in the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies; recalls that, in all circumstances, they should respect Union law, as well as human rights and dignity, and autonomy and self-determination of the individual, and ensure equal treatment and non-discrimination for all;
29. Stresses that AI technologies should be designed to respect, serve and protect Union values and physical and mental integrity, uphold the Union’s cultural and linguistic diversity and help satisfy essential needs; underlines the need to avoid any use that might lead to inadmissible direct or indirect coercion, threaten to undermine psychological autonomy and mental health or lead to unjustified surveillance, deception or inadmissible manipulation;
30. Firmly believes that the fundamental human rights enshrined in the Charter should be strictly respected so as to ensure that these emerging technologies do not create gaps in terms of protection;
31. Affirms that possible bias in and discrimination by software, algorithms and data can cause manifest harm to individuals and to society, therefore they should be addressed by encouraging the development and sharing of strategies to counter these, such as de-biasing datasets in research and development, and by the development of rules on data processing; considers this approach to have the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change;
32. Maintains that ethical values of fairness, accuracy, confidentiality and transparency should be the basis of these technologies, which in this context entails that their operations should be such that they do not generate biased outputs;
33. Underlines the importance of the quality of data sets used for artificial intelligence, robotics and related technologies depending on their context, especially regarding the representativeness of the training data, on the de-biasing of data sets, on the algorithms used, and on data and aggregation standards; stresses that those data sets should be auditable by national supervisory authorities whenever called upon to ensure their conformity with the previously referenced principles;
34. Highlights that, in the context of the widespread disinformation war, particularly driven by non-European actors, AI technologies might have ethically adverse effects by exploiting biases in data and algorithms or through the deliberate alteration of training data by a third country, and could also be exposed to other forms of dangerous malign manipulation in unpredictable ways and with incalculable consequences; there is therefore an increased need for the Union to continue investment in research, analysis, innovation and cross-border and cross-sector knowledge transfer in order to develop AI technologies that would be clearly free of any sort of profiling, bias and discrimination, and could effectively contribute to combating fake news and disinformation, while at the same time respecting data privacy and the Union’s legal framework;
35. Recalls the importance of ensuring effective remedies for individuals and calls on the Member States to ensure that accessible, affordable, independent and effective procedures and review mechanisms are available to guarantee an impartial human review of all claims of violations of citizens’ rights, such as consumer or civil rights, through the use of algorithmic systems, whether stemming from public or private sector actors; underlines the importance of the draft Directive of the European Parliament and of the Council on representative actions for the protection of the collective interests of consumers and repealing Directive 2009/22/EC, on which a political agreement was reached on 22 June 2020, as regards future cases challenging the introduction or ongoing use of an AI system entailing a risk of violating consumer rights, or seeking remedies for a violation of rights; asks the Commission and the Member States to ensure that national and Union consumer organisations have sufficient funding to assist consumers in exercising their right to a remedy in cases where their rights have been violated;
36. Considers therefore that any natural or legal person should be able to seek redress for a decision made by artificial intelligence, robotics or related technology to his or her detriment in breach of Union or national law;
37. Considers that, as a first point of contact in cases of suspected breaches of the Union’s regulatory framework in this context, national supervisory authorities could equally be addressed by consumers with requests for redress in view of ensuring the effective enforcement of the aforementioned framework;
Social responsibility and gender balance
38. Emphasises that socially responsible artificial intelligence, robotics and related technologies have a role to play in contributing to finding solutions that safeguard and promote fundamental rights and values of our society such as democracy, the rule of law, diverse and independent media and objective and freely available information, health and economic prosperity, equality of opportunity, workers’ and social rights, quality education, protection of children, cultural and linguistic diversity, gender equality, digital literacy, innovation and creativity; recalls the need to ensure that the interests of all citizens, including those who are marginalised or in vulnerable situations, such as persons with disabilities, are adequately taken into account and represented;
39. Underlines the importance of achieving a high level of overall digital literacy and training highly skilled professionals in this area as well as ensuring the mutual recognition of such qualifications throughout the Union; highlights the need of having diverse teams of developers and engineers working alongside key societal actors to prevent gender and cultural biases being inadvertently included in AI algorithms, systems and applications; supports the creation of educational curricula and public-awareness activities concerning the societal, legal, and ethical impact of artificial intelligence;
40. Stresses the vital importance of guaranteeing freedom of thought and expression, thus ensuring that these technologies do not promote hate speech or violence; thus considers hindering or restricting freedom of expression exercised digitally to be unlawful under the fundamental principles of the Union, except where the exercise of this fundamental right entails illegal acts;
41. Stresses that artificial intelligence, robotics and related technologies can contribute to reducing social inequalities and asserts that the European model for their development must be based on citizens’ trust and greater social cohesion;
42. Stresses that the deployment of any artificial intelligence system should not unduly restrict users’ access to public services such as social security; therefore calls on the Commission to assess how this objective can be achieved;
43. Stresses the importance of responsible research and development aiming at maximising the full potential of artificial intelligence, robotics and related technologies for citizens and the public good; calls for mobilisation of resources by the Union and its Member States in order to develop and support responsible innovation;
44. Stresses that technological expertise will be increasingly important and it will therefore be necessary to update continuously training courses, in particular for future generations, and to promote the vocational retraining of those already in the labour market; maintains, in this regard, that innovation and training should be promoted not only in the private sector but also in the public sector;
45. Insists that the development, deployment and use of these technologies should not cause injury or harm of any kind to individuals or society or the environment and that, accordingly, developers, deployers and users of these technologies should be held responsible for such injury or harm in accordance with the relevant Union and national liability rules;
46. Calls on Member States to assess whether job losses resulting from the deployment of these technologies should lead to appropriate public policies such as a reduction of working time;
47. Maintains that a design approach based on Union values and ethical principles is strongly needed to create the conditions for widespread social acceptance of artificial intelligence, robotics and related technologies; considers this approach, aimed at developing trustworthy, ethically responsible and technically robust artificial intelligence, to be an important enabler for sustainable and smart mobility that is safe and accessible;
48. Draws attention to the high added value provided by autonomous vehicles for persons with reduced mobility, as such vehicles allow such persons to participate more effectively in individual road transport and thereby facilitate their daily lives; stresses the importance of accessibility, especially when designing Mobility as a Service (MaaS) systems;
49. Calls on the Commission to further support the development of trustworthy AI systems in order to render transport safer, more efficient, accessible, affordable and inclusive, including for persons with reduced mobility, particularly persons with disabilities, taking account of Directive (EU) 2019/882 of the European Parliament and of the Council(13) and of Union law on passenger rights;
50. Considers that AI can help to better utilise the skills and competences of people with disabilities and that the application of AI in the workplace can contribute to inclusive labour markets and higher employment rates for people with disabilities;
Environment and sustainability
51. States that artificial intelligence, robotics and related technologies should be used by governments and businesses to benefit people and the planet, contribute to the achievement of sustainable development, the preservation of the environment, climate neutrality and circular economy goals; the development, deployment and use of these technologies should contribute to the green transition, preserve the environment, and minimise and remedy any harm caused to the environment during their lifecycle and across their entire supply chain in line with Union law;
52. Given their significant environmental impact, for the purposes of the previous paragraph, the environmental impact of developing, deploying and using artificial intelligence, robotics and related technologies could, where relevant and appropriate, be evaluated throughout their lifetime by sector-specific authorities; such evaluation could include an estimate of the impact of the extraction of the materials needed, and of the energy consumption and greenhouse gas emissions caused by their development, deployment and use;
53. Proposes that for the purpose of developing responsible cutting-edge artificial intelligence solutions, the potential of artificial intelligence, robotics and related technologies should be explored, stimulated and maximised through responsible research and development that requires the mobilisation of resources by the Union and its Member States;
54. Highlights the fact that the development, deployment and use of these technologies provide opportunities for promotion of the Sustainable Development Goals outlined by the United Nations, global energy transition and decarbonisation;
55. Considers that the objectives of social responsibility, gender equality, environmental protection and sustainability should be without prejudice to existing general and sectorial obligations within these fields; believes that non-binding implementation guidelines for developers, deployers and users, especially of high-risk technologies, regarding the methodology for assessing their compliance with this Regulation and the achievement of those objectives should be established;
56. Calls on the Union to promote and fund the development of human-centric artificial intelligence, robotics and related technologies that address environment and climate challenges and that ensure the respect for fundamental rights through the use of tax, procurement or other incentives;
57. Stresses that, despite the current high carbon footprint of development, deployment and use of artificial intelligence, robotics and related technologies, including automated decisions and machine learning, those technologies can contribute to the reduction of the current environmental footprint of the ICT sector; underlines that these and other properly regulated related technologies should be critical enablers for attaining the goals of the Green Deal, the UN Sustainable Development Goals and the Paris Agreement in many different sectors and should boost the impact of policies delivering environmental protection, for example policies concerning waste reduction and environmental degradation;
58. Calls on the Commission to carry out a study on the impact of AI technology’s carbon footprint and the positive and negative impacts of the transition to the use of AI technology by consumers;
59. Notes that, given the increasing development of AI applications, which require computational, storage and energy resources, the environmental impact of AI systems should be considered throughout their lifecycle;
60. Considers that in areas such as health, liability must ultimately lie with a natural or legal person; emphasises the need for traceable and publicly available training data for algorithms;
61. Strongly supports the creation of a European Health Data Space proposed by the Commission in its Communication on a European strategy for data which aims at promoting health-data exchange and at supporting research in full respect of data protection, including processing data with AI technology, and which strengthens and extends the use and re-use of health data; encourages the upscaling of cross-border exchange of health data, the linking and use of such data through secure, federated repositories, specific kinds of health information, such as European Health Records (EHRs), genomic information, and digital health images to facilitate Union-wide interoperable registers or databases in areas such as research, science and health sectors;
62. Highlights the benefits of AI for disease prevention, treatment and control, exemplified by AI predicting the COVID-19 epidemic before the WHO did; urges the Commission to adequately equip the ECDC with the regulatory framework and resources for gathering the necessary anonymised real-time global health data independently, in conjunction with the Member States, so as, among other purposes, to address issues revealed by the COVID-19 crisis;
Privacy and biometric recognition
63. Observes that data production and use, including personal data such as biometric data, resulting from the development, deployment and use of artificial intelligence, robotics and related technologies are rapidly increasing, thereby underlining the need to respect and enforce the rights of citizens to privacy and protection of personal data in line with Union law;
64. Points out that the possibility provided by these technologies for using personal and non-personal data to categorise and micro-target people, identify vulnerabilities of individuals, or exploit accurate predictive knowledge, has to be counterweighed by effectively enforced data protection and privacy principles such as data minimisation, the right to object to profiling and control the use of one’s data, the right to obtain an explanation of a decision based on automated processing and privacy by design, as well as those of proportionality, necessity and limitation based on strictly identified purposes in compliance with GDPR;
65. Emphasises that when remote recognition technologies, such as recognition of biometric features, notably facial recognition, are used by public authorities, for substantial public interest purposes, their use should always be disclosed, proportionate, targeted and limited to specific objectives, restricted in time in accordance with Union law and have due regard for human dignity and autonomy and the fundamental rights set out in the Charter. Criteria for and limits to that use should be subject to judicial review and democratic scrutiny and should take into account its psychological and sociocultural impact on civil society;
66. Points out that while deploying artificial intelligence, robotics and related technologies within the framework of public power decisions has benefits, it can result in grave misuse, such as mass surveillance, predictive policing and breaches of due process rights;
67. Considers that technologies which can produce automated decisions, thus replacing decisions taken by public authorities, should be treated with the utmost precaution, notably in the area of justice and law enforcement;
68. Believes that Member States should have recourse to such technologies only if there is thorough evidence of their trustworthiness and if meaningful human intervention and review are possible or systematic in cases where fundamental liberties are at stake; underlines the importance for national authorities to undertake a strict fundamental rights impact assessment for artificial intelligence systems deployed in these cases, especially following the assessment of those technologies as high-risk;
69. Is of the opinion that any decision taken by artificial intelligence, robotics or related technologies within the framework of prerogatives of public power should be subject to meaningful human intervention and due process, especially following the assessment of those technologies as high-risk;
70. Believes that technological advancement should not lead to the use of artificial intelligence, robotics and related technologies to autonomously take public sector decisions which have a direct and significant impact on citizens’ rights and obligations;
71. Notes that AI, robotics and related technologies in the area of law enforcement and border control could enhance public safety and security, but also need extensive and rigorous public scrutiny and the highest possible level of transparency, both as regards the risk assessment of individual applications and as regards a general overview of the use of AI, robotics and related technologies in the area of law enforcement and border control; considers that such technologies bear significant ethical risks that must be adequately addressed, considering the possible adverse effects on individuals, in particular as regards their rights to privacy, data protection and non-discrimination; stresses that their misuse can become a direct threat to democracy and that their deployment and use must respect the principles of proportionality and necessity, the Charter of Fundamental Rights, as well as the relevant secondary Union law, such as data protection rules; stresses that AI should never replace humans in issuing judgments; considers that decisions, such as granting bail or probation, that are heard in court, or decisions based solely on automated processing which produce a legal effect concerning individuals or significantly affect them, must always involve meaningful assessment and human judgement;
Good governance
72. Stresses that appropriate governance of the development, deployment and use of artificial intelligence, robotics and related technologies, especially high-risk technologies, by having measures in place focusing on accountability and addressing potential risks of bias and discrimination, can increase citizens’ safety and trust in those technologies;
73. Considers that a common framework for the governance of these technologies, coordinated by the Commission and/or any relevant institutions, bodies, offices or agencies of the Union that may be designated for this task in this context, to be implemented by national supervisory authorities in each Member State, would ensure a coherent Union approach and prevent a fragmentation of the single market;
74. Observes that data are used in large volumes in the development of artificial intelligence, robotics and related technologies and that the processing, sharing of, access to and use of such data must be governed in accordance with the law and the requirements of quality, integrity, interoperability, transparency, security, privacy and control set out therein;
75. Recalls that access to data is an essential component in the growth of the digital economy; points out in this regard that interoperability of data, by limiting lock-in effects, plays a key role in ensuring fair market conditions and promoting a level playing field in the Digital Single Market;
76. Underlines the need to ensure that personal data are protected adequately, especially data on, or stemming from, vulnerable groups, such as people with disabilities, patients, children, the elderly, minorities, migrants and other groups at risk of exclusion;
77. Notes that the development, deployment and use of artificial intelligence, robotics and related technologies by public authorities are often outsourced to private parties; considers that this should not compromise the protection of public values and fundamental rights in any way; considers that public procurement terms and conditions should reflect the ethical standards imposed on public authorities, when applicable;
Consumers and the internal market
78. Underlines the importance of a regulatory framework for AI being applicable where consumers within the Union are users of, subject to, targeted by, or directed towards an algorithmic system, irrespective of the place of establishment of the entities that develop, sell or employ the system; furthermore, believes that, in the interest of legal certainty, the rules set out in such a framework should apply to all developers and across the value chain, namely the development, deployment and use of the relevant technologies and their components, and should guarantee a high level of consumer protection;
79. Notes the intrinsic link between artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, and fields such as the internet of things, machine learning, rule-based systems or automated and assisted decision making processes; further notes that standardised icons could be developed to help explain such systems to consumers whenever those systems are characterised by complexity or are enabled to make decisions that impact the lives of consumers significantly;
80. Recalls that the Commission should examine the existing legal framework and its application, including the consumer law acquis, product liability legislation, product safety legislation and market surveillance legislation, in order to identify legal gaps, as well as existing regulatory obligations; considers that this is necessary in order to ascertain whether it is able to respond to the new challenges posed by the emergence of artificial intelligence, robotics and related technologies and ensure a high level of consumer protection;
81. Stresses the need to effectively address the challenges created by artificial intelligence, robotics and related technologies and to ensure that consumers are empowered and properly protected; underlines the need to look beyond the traditional principles of information and disclosure on which the consumer law acquis has been built, as stronger consumer rights and clear limitations regarding the development, deployment and use of artificial intelligence, robotics and related technologies will be necessary to ensure such technology contributes to making consumers’ lives better and evolves in a way that respects fundamental and consumer rights and Union values;
82. Points out that the legislative framework introduced by Decision No 768/2008/EC(14) provides for a harmonised list of obligations for producers, importers and distributors, encourages the use of standards and provides for several levels of control depending on the dangerousness of the product; considers that that framework should also apply to AI-embedded products;
83. Notes that for the purpose of analysing the impacts of artificial intelligence, robotics and related technologies on consumers, access to data could, when in full respect of Union law, such as that concerning data protection, privacy and trade secrets, be extended to national competent authorities; recalls the importance of educating consumers to be more informed and skilled when dealing with artificial intelligence, robotics and related technologies, in order to protect them from potential risks and uphold their rights;
84. Calls on the Commission to propose measures for data traceability, having in mind both the legality of data acquisition and the protection of consumer rights and fundamental rights, while fully respecting Union law such as that concerning data protection, privacy, intellectual property rights and trade secrets;
85. Notes that these technologies should be user-centric and designed in a way that allows everyone to use AI products or services, regardless of their age, gender, abilities or characteristics; notes that their accessibility for persons with disabilities is of particular importance; notes that there should not be a one-size-fits-all approach, and that universal design principles addressing the widest possible range of users and following relevant accessibility standards should be considered; stresses that this will enable individuals to have equitable access to and to actively participate in existing and emerging computer-mediated human activities and assistive technologies;
86. Stresses that where money originating from public sources significantly contributes to the development, deployment or use of artificial intelligence, robotics and related technologies, in addition to open procurement and open contracting standards, consideration could be given to the possibility of having the code, the generated data (as far as they are non-personal) and the trained model made public by default upon agreement with the developer, in order to guarantee transparency, enhance cybersecurity and enable the reuse thereof so as to foster innovation; stresses that, in this way, the full potential of the single market can be unlocked, avoiding market fragmentation;
87. Considers that AI, robotics and related technologies have enormous potential to deliver opportunities for consumers to have access to several amenities in many aspects of their lives alongside better products and services, as well as to benefit from better market surveillance, as long as all applicable principles, conditions, including transparency and auditability, and regulations continue to apply;
Security and defence
88. Highlights that the security and defence policies of the European Union and its Member States are guided by the principles enshrined in the Charter and by those of the United Nations Charter, and by a common understanding of the universal values of respect for the inviolable and inalienable rights of the human person, human dignity, of freedom, of democracy, of equality and of the rule of law; stresses that all defence-related efforts within the Union framework must respect those universal values whilst promoting peace, security and progress in Europe and in the world;
89. Welcomes the endorsement, by the 2019 Meeting of High Contracting Parties to the United Nations Convention on Certain Conventional Weapons (CCW), of 11 Guiding Principles for the development and use of autonomous weapons systems; regrets however the failure to agree on a legally binding instrument regulating lethal autonomous weapons (LAWS), with an effective enforcement mechanism; welcomes and supports the report by the Commission’s High-Level Expert Group on Artificial Intelligence entitled ‘Ethics Guidelines for Trustworthy AI’ published on 9 April 2019 and its position on lethal autonomous weapon systems (LAWS); urges Member States to develop national strategies for the definition and status of lethal autonomous weapons (LAWS) towards a comprehensive strategy at Union level and to promote, together with the Union’s High Representative/Vice-President of the Commission (‘HR/VP’) and the Council, the discussion on LAWS in the UN CCW framework and other relevant fora and the establishment of international norms regarding the ethical and legal parameters for the development and use of fully autonomous, semi-autonomous and remotely operated lethal weapons systems; recalls in this respect its resolution on autonomous weapon systems of 12 September 2018 and calls once again for the urgent development and adoption of a common position on lethal autonomous weapon systems, for an international ban on the development, production and use of lethal autonomous weapon systems enabling strikes to be carried out without meaningful human control and without respect for the human-in-the-loop principle, in line with the statement of the world’s most prominent AI researchers in their open letter from 2015; welcomes the agreement of Council and Parliament to exclude lethal autonomous weapons ‘without the possibility for meaningful human control over the selection and engagement decisions when carrying out strikes’ from actions funded under the European Defence Fund; believes that 
ethical aspects of other AI-applications in defence, such as intelligence, surveillance and reconnaissance (ISR) or cyber operations must not be overlooked, and special attention must be paid to the development and deployment of drones in military operations;
90. Underlines that emerging technologies in the defence and security sector not covered by international law should be judged taking account of the principle of respect for humanity and the dictates of public conscience;
91. Recommends that any European framework regulating the use of AI-enabled systems in defence, both in combat and non-combat situations, must respect all applicable legal regimes, in particular international humanitarian law and international human rights law, and it must be in compliance with Union law, principles and values, keeping in mind the disparities in terms of technical and security infrastructure throughout the Union;
92. Recognises that, unlike traditional defence industrial bases, critical AI innovations could come from small Member States, and that a CSDP-standardised approach should therefore ensure that smaller Member States and SMEs are not crowded out; stresses that a set of common EU AI capabilities matched to Member States’ operating concepts can bridge the technical gaps that could leave out Member States lacking the relevant technology, industry expertise or the ability to implement AI systems in their defence ministries;
93. Considers that current and future security and defence-related activities within the Union framework will draw on AI, on robotics and autonomy, and on related technologies and that reliable, robust and trustworthy AI could contribute to a modern and effective military; the Union must therefore assume a leading role in research and development of AI systems in the security and defence field; believes that the use of AI-enabled applications in security and defence could offer a number of direct benefits to the operation commander, such as higher quality collected data, greater situational awareness, increased speed for decision-making, reduced risk of collateral damage thanks to better targeting, protection of forces on the ground, as well as greater reliability of military equipment and hence reduced risk for humans and of human casualties; stresses that the development of reliable AI in the field of defence is essential for ensuring European strategic autonomy in capability and operational areas; recalls that AI systems are also becoming key elements in countering emerging security threats, such as cyber and hybrid warfare both in the online and offline spheres; underlines at the same time all the risks and challenges of unregulated use of AI; notes that AI could be exposed to manipulation, to errors and inaccuracies;
94. Stresses that AI technologies are, in essence, of dual use, and the development of AI in defence-related activities benefits from exchanges between military and civil technologies; highlights that AI in defence-related activities is a transverse disruptive technology, the development of which may provide opportunities for the competitiveness and the strategic autonomy of the Union;
95. Recognises, in the hybrid and advanced warfare context of today, that the volume and velocity of information during the early phases of a crisis might be overwhelming for human analysts and that an AI system could process the information to ensure that human decision-makers are tracking the full spectrum of information within an appropriate timeframe for a speedy response;
96. Underlines the importance of investing in the development of human capital for artificial intelligence, fostering the necessary skills and education in the field of security and defence AI technologies with particular focus on ethics of semi-autonomous and autonomous operational systems based on human accountability in an AI-enabled world; stresses in particular the importance of ensuring that ethicists in this field have appropriate skills and receive proper training; calls on the Commission to present as soon as possible its ‘Reinforcement of the Skills Agenda’, announced in the White Paper on Artificial Intelligence on 19 February 2020;
97. Stresses that quantum computing could represent the most revolutionary change in conflict since the advent of atomic weaponry and thus urges that the further development of quantum computing technologies be a priority for the Union and Member States; recognises that acts of aggression, including attacks on critical infrastructure, aided by quantum computing will create a conflict environment in which the time available to make decisions will be compressed dramatically from days and hours to minutes and seconds, forcing Member States to develop capabilities to protect themselves and to train both their decision-makers and military personnel to respond effectively within such timeframes;
98. Calls for increased investment in European AI for defence and in the critical infrastructure that sustains it;
99. Recalls that most of the current military powers worldwide have already engaged in significant R&D efforts related to the military dimension of artificial intelligence; considers that the Union must ensure that it does not lag behind in this regard;
100. Calls on the Commission to embed cybersecurity capacity-building in its industrial policy in order to ensure the development and deployment of safe, resilient and robust AI-enabled and robotic systems; calls on the Commission to explore the use of blockchain-based cybersecurity protocols and applications to improve the resilience, trustworthiness and robustness of AI infrastructures through disintermediated models of data encryption; encourages European stakeholders to research and engineer advanced features that would facilitate the detection of corrupt and malicious AI-enabled and robotic systems which could undermine the security of the Union and of citizens;
101. Stresses that all AI-systems in defence must have a concrete and well-defined mission framework, whereby humans retain the agency to detect and disengage or deactivate deployed systems should they move beyond the mission framework defined and assigned by a human commander, or should they engage in any escalatory or unintended action; considers that AI-enabled systems, products and technology intended for military use should be equipped with a ‘black box’ to record every data transaction carried out by the machine;
102. Underlines that the entire responsibility and accountability for the decision to design, develop, deploy and use AI-systems must rest on human operators, as there must be meaningful human monitoring and control over any weapon system and human intent in the decision to use force in the execution of any decision of AI-enabled weapons systems that might have lethal consequences; underlines that human control should remain effective for the command and control of AI-enabled systems, following the human-in-the-loop, human-on-the-loop and human-in-command principles at the military leadership level; stresses that AI-enabled systems must allow the military leadership of armies to assume its full responsibility and accountability for the use of lethal force and exercise the necessary level of judgment, which machines cannot be endowed with as such judgment must be based on distinction, proportionality and precaution, for taking lethal or large-scale destructive action by means of such systems; stresses the need to establish clear and traceable authorisation and accountability frameworks for the deployment of smart weapons and other AI-enabled systems, using unique user characteristics like biometric specifications to enable deployment exclusively by authorised personnel;
Transport
103. Highlights the potential of using artificial intelligence, robotics and related technologies for all autonomous means of road, rail, waterborne and air transport, and also for boosting the modal shift and intermodality, as such technologies can contribute to finding the optimal combination of modes of transport for the transport of goods and passengers; furthermore, stresses their potential to make transport, logistics and traffic flows more efficient and to make all modes of transport safer, smarter, and more environmentally friendly; points out that an ethical approach to AI can also be seen as an early warning system, in particular as regards the safety and efficiency of transport;
104. Highlights the fact that the global competition between companies and economic regions means that the Union needs to promote investments and strengthen the international competitiveness of companies operating in the transport sector, by establishing an environment favourable for the development and application of AI solutions and further innovations, in which Union-based undertakings can become world leaders in the development of AI technologies;
105. Stresses that the Union’s transport sector needs an update of the regulatory framework concerning such emerging technologies and their use in the transport sector and a clear ethical framework for achieving trustworthy AI, including safety, security, the respect of human autonomy, oversight and liability aspects, which will increase benefits that are shared by all and will be key to boosting investment in research and innovation, development of skills and the uptake of AI by public services, SMEs, start-ups and businesses and at the same time ensuring data protection as well as interoperability, without imposing an unnecessary administrative burden on businesses and consumers;
106. Notes that the development and implementation of AI in the transport sector will not be possible without modern infrastructure, which is an essential part of intelligent transport systems; stresses that the persistent divergences in the level of development between Member States create the risk of depriving the least developed regions and their inhabitants of the benefits brought by the development of autonomous mobility; calls for the modernisation of transport infrastructure in the Union, including its integration into the 5G network, to be adequately funded;
107. Recommends the development of Union-wide trustworthy AI standards for all modes of transport, including the automotive industry, and for testing of AI-enabled vehicles and related products and services;
108. Notes that AI systems could help to reduce the number of road fatalities significantly, for instance through better reaction times and better compliance with rules; considers, however, that the use of autonomous vehicles cannot eliminate all accidents, and underlines that this makes the explainability of AI decisions increasingly important in order to account for the shortcomings and unintended consequences of AI decisions;
Employment, workers’ rights, digital skills and the workplace
109. Notes that the application of artificial intelligence, robotics and related technologies in the workplace can contribute to inclusive labour markets and impact occupational health and safety, while it can also be used to monitor, evaluate, predict and guide the performance of workers, with direct and indirect consequences for their careers; stresses that AI should be human centric, enhance the well-being of people and society and contribute to a fair and just transition; such technologies should therefore have a positive impact on working conditions, guided by respect for human rights as well as the fundamental rights and values of the Union;
110. Highlights the need for competence development through training and education for workers and their representatives with regard to AI in the workplace, to better understand the implications of AI solutions; stresses that applicants and workers should be duly informed in writing when AI is used in the course of recruitment procedures and other human resource decisions, and of how, in such cases, a human review can be requested in order to have an automated decision reversed;
111. Stresses the need to ensure that productivity gains due to the development and use of AI and robotics do not only benefit company owners and shareholders, but also benefit companies and the workforce, through better working and employment conditions, including wages, economic growth and development, and also serve society at large, especially where such gains come at the expense of jobs; calls on the Member States to carefully study the potential impact of AI on the labour market and social security systems and to develop strategies as to how to ensure long-term stability by reforming taxes and contributions as well as other measures in the event of lower public revenues;
112. Underlines the importance of corporate investment in formal and informal training and life-long learning in order to support the just transition towards the digital economy; stresses in this context that companies deploying AI have the responsibility of providing adequate re-skilling and up-skilling for all employees concerned, in order for them to learn how to use digital tools and to work with co-bots and other new technologies, thereby adapting to changing needs of the labour market and staying in employment;
113. Considers that special attention should be paid to new forms of work, such as gig and platform work, resulting from the application of new technologies in this context; stresses that regulating telework conditions across the Union and ensuring decent working and employment conditions in the digital economy must likewise take the impact of AI into account; calls on the Commission to consult with social partners, AI-developers, researchers and other stakeholders in this regard;
114. Underlines that artificial intelligence, robotics and related technologies must not in any way affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States, in accordance with national law and/or practice, or affect the right to negotiate, to conclude and enforce collective agreements, or to take collective action in accordance with national law and/or practice;
115. Reiterates the importance of education and continuous learning to develop the qualifications necessary in the digital age and to tackle digital exclusion; calls on the Member States to invest in high quality, responsive and inclusive education, vocational training and life-long learning systems as well as re-skilling and up-skilling policies for workers in sectors that are potentially severely affected by AI; highlights the need to provide the current and future workforce with the necessary literacy, numeracy and digital skills as well as competences in science, technology, engineering and mathematics (STEM) and cross-cutting soft skills, such as critical thinking, creativity and entrepreneurship; underlines that special attention must be paid to the inclusion of disadvantaged groups in this regard;
116. Recalls that artificial intelligence, robotics and related technologies used at the workplace must be accessible for all, based on the design for all principle;
Education and culture
117. Stresses the need to develop criteria for the development, the deployment and the use of AI bearing in mind their impact on education, media, youth, research, sports and the cultural and creative sectors, by developing benchmarks for and defining principles of ethically responsible and accepted uses of AI technologies that can be appropriately applied in these areas, including a clear liability regime for products resulting from AI use;
118. Notes that every child enjoys the right to public education of quality at all levels; calls, therefore, for the development, the deployment and the use of quality AI systems that facilitate and provide quality educational tools for all at all levels and stresses that the deployment of new AI systems in schools should not lead to a wider digital gap being created in society; recognises the enormous potential contribution that AI and robotics can make to education; notes that AI personalised learning systems should not replace educational relationships involving teachers and that traditional forms of education should not be left behind, while at the same time pointing out that financial, technological and educational support, including specialised training in information and communications technology, must be provided for teachers seeking to acquire appropriate skills so as to adapt to technological changes and not only harness the potential of AI but also understand its limitations; calls for a strategy to be developed at Union level in order to help transform and update our educational systems, prepare our educational institutions at all levels and equip teachers and pupils with the necessary skills and abilities;
119. Emphasises that educational institutions should aim to use AI systems for educational purposes that have received a European certificate of ethical compliance;
120. Emphasises that opportunities provided by digitisation and new technologies must not result in an overall loss of jobs in the cultural and creative sectors, the neglect of the conservation of originals or in the downplaying of traditional access to cultural heritage, which should equally be encouraged; notes that AI systems developed, deployed and used in the Union should reflect its cultural diversity and its multilingualism;
121. Acknowledges the growing potential of AI in the areas of information, media and online platforms, including as a tool to fight disinformation in accordance with Union law; underlines that, if not regulated, it might also have ethically adverse effects by exploiting biases in data and algorithms that may lead to disseminating disinformation and creating information bubbles; emphasises the importance of transparency and accountability of algorithms used by video-sharing platforms (VSP) as well as streaming platforms, in order to ensure access to culturally and linguistically diverse content;
National supervisory authorities
122. Notes the added value of having designated national supervisory authorities in each Member State, responsible for ensuring, assessing and monitoring compliance with legal obligations and ethical principles for the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, thus contributing to the legal and ethical compliance of these technologies;
123. Believes that these authorities must be required to, without duplicating their tasks, cooperate with the authorities responsible for implementing sectorial legislation in order to identify technologies which are high-risk from an ethical perspective and in order to supervise the implementation of required and appropriate measures where such technologies are identified;
124. Indicates that such authorities should liaise not only among themselves but also with the European Commission and other relevant institutions, bodies, offices and agencies of the Union in order to guarantee coherent cross-border action;
125. Suggests that, in the context of such cooperation, common criteria and an application process be developed for the granting of a European certificate of ethical compliance, including following a request by any developer, deployer or user of technologies not considered as high-risk seeking to certify the positive assessment of compliance carried out by the respective national supervisory authority;
126. Calls for such authorities to be tasked with promoting regular exchanges with civil society and innovation within the Union by providing assistance to researchers, developers and other relevant stakeholders, as well as to less digitally mature companies, in particular small and medium-sized enterprises and start-ups, notably as regards awareness-raising and support for development, deployment, training and talent acquisition, in order to ensure efficient technology transfer and access to technologies, projects, results and networks;
127. Calls for sufficient funding by each Member State of their designated national supervisory authorities and stresses the need for national market surveillance authorities to be reinforced in terms of capacity, skills and competences, as well as knowledge about the specific risks of artificial intelligence, robotics and related technologies;
Coordination at Union level
128. Underlines the importance of coordination at Union level as carried out by the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context, in order to avoid fragmentation, and of ensuring a harmonised approach across the Union; considers that coordination should focus on the mandates and actions of the national supervisory authorities in each Member State as referred to in the previous sub-section, as well as on sharing of best practices among those authorities and contributing to the cooperation as regards research and development in the field throughout the Union; calls on the Commission to assess and find the most appropriate solution to structure such coordination; examples of relevant existing institutions, bodies, offices and agencies of the Union are ENISA, the EDPS and the European Ombudsman;
129. Believes that such coordination, as well as a European certification of ethical compliance, would not only benefit the development of Union industry and innovation in that context but also increase the awareness of citizens regarding the opportunities and risks inherent to these technologies;
130. Suggests a centre of expertise be created, bringing together academia, research, industry, and individual experts at Union level, to foster exchange of knowledge and technical expertise, and to facilitate collaboration throughout the Union and beyond; further calls for this centre of expertise to involve stakeholder organisations, such as consumer protection organisations, in order to ensure wide consumer representation; considers that due to the possible disproportionate impact of algorithmic systems on women and minorities, the decision levels of such a structure should be diverse and ensure gender equality; emphasises that Member States must develop risk-management strategies for AI in the context of their national market surveillance strategies;
131. Proposes that the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context provide any necessary assistance to national supervisory authorities concerning their role as first points of contact in cases of suspected breaches of the legal obligations and ethical principles set out in the Union’s regulatory framework for AI, including the principle of non-discrimination; it should also provide any necessary assistance to national supervisory authorities in cases where the latter carry out compliance assessments in view of supporting the right of citizens to contest and to seek redress, namely by supporting, when applicable, the consultation of other competent authorities in the Union, in particular the Consumer Protection Cooperation Network and national consumer protection bodies, civil society organisations and social partners located in other Member States;
132. Acknowledges the valuable output of the High-Level Expert Group on Artificial Intelligence, comprising representatives from academia, civil society and industry, as well as the European AI Alliance, particularly ‘The Ethics Guidelines for Trustworthy Artificial Intelligence’, and suggests that it might provide expertise to the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context;
133. Notes the inclusion of AI-related projects under the European Defence Industrial Development Programme (EDIDP); believes that the future European Defence Fund (EDF) and the Permanent Structured Cooperation (PESCO) may also offer frameworks for future AI-related projects that could help to better streamline Union efforts in this field, and promote at the same time the Union’s objective of strengthening human rights, international law, and multilateral solutions; stresses that AI-related projects should be synchronised with the wider Union civilian programmes devoted to AI; notes that, in line with the Commission’s White Paper of 19 February 2020 on Artificial Intelligence, excellence and testing centres concentrating on research and development of AI in the field of security and defence should be established with rigorous specifications underpinning the participation of and investment from private stakeholders;
134. Takes note of the Commission’s White Paper of 19 February 2020 on Artificial Intelligence and regrets that military aspects were not taken into account; calls on the Commission and on the HR/VP to present, also as part of an overall approach, a sectorial AI strategy for defence-related activities within the Union framework, that ensures both respect for citizens’ rights and the Union’s strategic interests, and that is based on a consistent approach spanning from the inception of AI-enabled systems to their military uses, and to establish a working group on security and defence within the High-Level Expert Group on Artificial Intelligence that should specifically deal with policy and investment questions as well as ethical aspects of AI in the field of security and defence; calls on the Council, the Commission and on the HR/VP to enter into a structured dialogue with Parliament to that end;
European certification of ethical compliance
135. Suggests that common criteria and an application process relating to the granting of a European certificate of ethical compliance be developed in the context of coordination at Union level, including following a request by any developer, deployer or user of technologies not considered as high-risk seeking to certify the positive assessment of compliance carried out by the respective national supervisory authority;
136. Believes that such a European certificate of ethical compliance would foster ethics by design throughout the supply chain of artificial intelligence ecosystems; suggests, therefore, that this certification could be, in the case of high-risk technologies, a mandatory prerequisite for eligibility for public procurement procedures on artificial intelligence, robotics and related technologies;
International cooperation
137. Is of the opinion that effective cross-border cooperation and ethical standards can be achieved only if all stakeholders commit to ensuring human agency and oversight, technical robustness and safety, transparency and accountability, diversity, non-discrimination and fairness, societal and environmental well-being, and respect the established principles of privacy, data governance and data protection, specifically those enshrined in Regulation (EU) 2016/679;
138. Stresses that the Union’s legal obligations and ethical principles for the development, deployment and use of these technologies could make Europe a world leader in the artificial intelligence sector and should therefore be promoted worldwide by cooperating with international partners while continuing the critical and ethics-based dialogue with third countries that have alternative models of artificial intelligence regulation, development and deployment;
139. Recalls that the opportunities and risks inherent to these technologies have a global dimension, as the software and data they use are frequently imported into and exported out of the Union, and therefore there is a need for a consistent cooperation approach at international level; calls on the Commission to take the initiative to assess which bilateral and multilateral treaties and agreements should be adjusted to ensure a consistent approach and promote the European model of ethical compliance globally;
140. Points out the added value of coordination at Union level as referred to above in this context as well;
141. Calls for synergies and networks to be established between the various European research centres on AI, as well as with other multilateral fora such as the Council of Europe, the United Nations Educational, Scientific and Cultural Organization (UNESCO), the Organisation for Economic Co-operation and Development (OECD), the World Trade Organisation and the International Telecommunication Union (ITU), in order to align their efforts and to better coordinate the development of artificial intelligence, robotics and related technologies;
142. Underlines that the Union must be at the forefront of supporting multilateral efforts to discuss, in the framework of the UN CCW Governmental Expert Group and other relevant fora, an effective international regulatory framework that ensures meaningful human control over autonomous weapon systems in order to master those technologies by establishing well defined, benchmark-based processes and adopting legislation for their ethical use, in consultation with military, industry, law enforcement, academic and civil society stakeholders, to understand the related ethical aspects and to mitigate the inherent risks of such technologies and prevent use for malicious purposes;
143. Recognises the role of NATO in promoting Euro-Atlantic security and calls for cooperation within NATO for the establishment of common standards and interoperability of AI systems in defence; stresses that the transatlantic relationship is important for the preservation of shared values and for countering future and emerging threats;
144. Stresses the importance of the creation of an ethical code of conduct underpinning the deployment of weaponised AI-enabled systems in military operations, similar to the existing regulatory framework prohibiting the deployment of chemical and biological weapons; is of the opinion that the Commission should initiate the creation of standards on the use of AI-enabled weapons systems in warfare in accordance with international humanitarian law, and that the Union should pursue the international adoption of such standards; considers that the Union should engage in AI diplomacy in international fora with like-minded partners like the G7, the G20 and the OECD;
Final aspects
145. Concludes, following the above reflections on aspects related to the ethical dimension of artificial intelligence, robotics and related technologies, that the legal and ethical dimensions should be enshrined in an effective, forward-looking and comprehensive regulatory framework at Union level, supported by national competent authorities, coordinated and enhanced by the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context, regularly supported by the aforementioned centre of expertise and duly respected and certified within the internal market;
146. In accordance with the procedure laid down in Article 225 of the Treaty on the Functioning of the European Union, requests the Commission to submit a proposal for a Regulation on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies on the basis of Article 114 of the Treaty on the Functioning of the European Union and based on the detailed recommendations set out in the annex hereto; points out that the proposal should not undermine sector-specific legislation but should only cover identified loopholes;
147. Recommends that the European Commission, after consulting with all the relevant stakeholders, review, if necessary, existing Union law applicable to artificial intelligence, robotics and related technologies in order to address the rapidity of their development in line with the recommendations set out in the annex hereto, avoiding over-regulation, including for SMEs;
148. Believes that a periodical assessment and review, when necessary, of the Union regulatory framework related to artificial intelligence, robotics and related technologies will be essential to ensure that the applicable legislation is up to date with the rapid pace of technological progress;
149. Considers that the legislative proposal requested would have financial implications if any European body were entrusted with the above-mentioned coordination functions and the necessary technical means and human resources to fulfil its newly attributed tasks were provided;
o o o
150. Instructs its President to forward this resolution and the accompanying detailed recommendations to the Commission and the Council.
ANNEX TO THE RESOLUTION:
DETAILED RECOMMENDATIONS AS TO THE CONTENT OF THE PROPOSAL REQUESTED
A. PRINCIPLES AND AIMS OF THE PROPOSAL REQUESTED
I. The main principles and aims of the proposal are:
˗ to build trust at all levels of involved stakeholders and of society in artificial intelligence, robotics and related technologies, especially when they are considered high-risk;
˗ to support the development of artificial intelligence, robotics and related technologies in the Union, including by helping businesses, start-ups and small and medium-sized enterprises to assess and address with certainty current and future regulatory requirements and risks during the innovation and business development process, and, during the subsequent phase of use by professionals and private individuals, by minimising burdens and red tape;
˗ to support deployment of artificial intelligence, robotics and related technologies in the Union by providing the appropriate and proportionate regulatory framework which should apply without prejudice to existing or future sectorial legislation, with the aim of encouraging regulatory certainty and innovation while guaranteeing fundamental rights and consumer protection;
˗ to support use of artificial intelligence, robotics and related technologies in the Union by ensuring that they are developed, deployed and used in a manner that is compliant with ethical principles;
˗ to require transparency and better information flows among citizens and within organisations developing, deploying or using artificial intelligence, robotics and related technologies, as a means of ensuring that these technologies are compliant with Union law, fundamental rights and values, and with the ethical principles of the proposal for Regulation requested.
II. The proposal consists of the following elements:
˗ a ‘Regulation on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies’;
˗ the coordination role at Union level by the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context and a European certification of ethical compliance;
˗ the support role of the European Commission;
˗ the role of the ‘Supervisory Authority’ in each Member State to ensure that ethical principles are applied to artificial intelligence, robotics and related technologies;
˗ the involvement and consultation of, as well as provision of support to, relevant research and development projects and concerned stakeholders, including start-ups, small and medium-sized enterprises, businesses, social partners, and other representatives of civil society;
˗ an annex establishing an exhaustive and cumulative list of high-risk sectors and high-risk uses and purposes.
III. The ‘Regulation on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies’ builds on the following principles:
˗ human-centric, human-made and human-controlled artificial intelligence, robotics and related technologies;
˗ mandatory compliance assessment of high-risk artificial intelligence, robotics and related technologies;
˗ safety, transparency and accountability;
˗ safeguards and remedies against bias and discrimination;
˗ right to redress;
˗ social responsibility and gender equality in artificial intelligence, robotics and related technologies;
˗ environmentally sustainable artificial intelligence, robotics and related technologies;
˗ respect for privacy and limitations on the use of biometric recognition;
˗ good governance relating to artificial intelligence, robotics and related technologies, including the data used or produced by such technologies.
IV. For the purposes of coordination at Union level, the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context should carry out the following main tasks:
˗ cooperating in monitoring the implementation of the proposal for a Regulation requested and relevant sectorial Union law;
˗ cooperating regarding the issuing of guidance concerning the consistent application of the proposal for a Regulation requested, namely the application of the criteria for artificial intelligence, robotics and related technologies to be considered high-risk and the list of high-risk sectors and high-risk uses and purposes set out in the annex to the Regulation;
˗ cooperating with the ‘Supervisory Authority’ in each Member State regarding the development of a European certificate of compliance with ethical principles and legal obligations as laid down in the proposal for a Regulation requested and relevant Union law, as well as the development of an application process for any developer, deployer or user of technologies not considered as high-risk seeking to certify their compliance with the proposal for a Regulation requested;
˗ cooperating regarding the supporting of cross-sector and cross-border cooperation through regular exchanges with concerned stakeholders and civil society, in the EU and in the world, notably with businesses, social partners, researchers and competent authorities, including as regards the development of technical standards at international level;
˗ cooperating with the ‘Supervisory Authority’ in each Member State regarding the establishing of binding guidelines on the methodology to be followed for the compliance assessment to be carried out by each ‘Supervisory Authority’;
˗ cooperating regarding the liaising with the ‘Supervisory Authority’ in each Member State and the coordinating of their mandate and tasks;
˗ cooperating on raising awareness, providing information and engaging in exchanges with developers, deployers and users throughout the Union;
˗ cooperating on raising awareness, providing information, promoting digital literacy, training and skills and engaging in exchanges with designers, developers, deployers, citizens, users and institutional bodies throughout the Union and internationally;
˗ cooperating regarding the coordination of a common framework for the governance of the development, deployment and use of artificial intelligence, robotics and related technologies to be implemented by the ‘Supervisory Authority’ in each Member State;
˗ cooperating regarding serving as a centre for expertise by promoting the exchange of information and supporting the development of a common understanding in the Single Market;
˗ cooperating regarding the hosting of a Working Group on Security and Defence.
V. Additionally, the Commission should carry out the following tasks:
˗ drawing up and subsequently updating, by means of delegated acts, a common list of high-risk technologies identified within the Union in cooperation with the ‘Supervisory Authority’ in each Member State;
˗ updating, by means of delegated acts, the list provided for in the Annex to the Regulation.
VI. The ‘Supervisory Authority’ in each Member State should carry out the following main tasks:
˗ contributing to the consistent application of the regulatory framework established in the proposal for a Regulation requested in cooperation with the ‘Supervisory Authority’ in the other Member States, as well as other authorities responsible for implementing sectorial legislation, the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context, namely regarding the application of the risk assessment criteria provided for in the proposal for a Regulation requested and of the list of high-risk sectors and of high-risk uses or purposes set out in its annex, and the following supervision of the implementation of required and appropriate measures where high-risk technologies are identified as a result of such application;
˗ assessing whether artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed and used in the Union are to be considered high-risk technologies in accordance with the risk assessment criteria provided for in the proposal for a Regulation requested and in the list set out in its annex;
˗ issuing a European certificate of compliance with ethical principles and legal obligations as laid down in the proposal for Regulation requested and relevant Union law, including when resulting from an application process for any developer, deployer or user of technologies not considered as high-risk seeking to certify their compliance with the proposal for a Regulation requested, as developed by the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context;
˗ assessing and monitoring their compliance with ethical principles and legal obligations as laid down in the proposal for a Regulation requested and relevant Union law;
˗ being responsible for establishing and implementing standards for the governance of artificial intelligence, robotics and related technologies, including by liaising and sustaining a regular dialogue with all relevant stakeholders and civil society representatives; to that end, cooperating with the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context regarding the coordination of a common framework at Union level;
˗ raising awareness, providing information on artificial intelligence, robotics and related technologies to the public, and supporting the training of relevant professions, including in the judiciary, thereby empowering citizens and workers with the digital literacy, skills and tools necessary for a fair transition;
˗ serving as a first point of contact in cases of a suspected breach of the legal obligations and ethical principles set out in the proposal for a Regulation requested and carrying out a compliance assessment in such cases; in the context of this compliance assessment, it may consult and/or inform other competent authorities in the Union, notably the Consumer Protection Cooperation Network, national consumer protection bodies, civil society organisations and social partners.
VII. The key role of stakeholders should be to engage with the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context and the ‘Supervisory Authority’ in each Member State.
B. TEXT OF THE LEGISLATIVE PROPOSAL REQUESTED
Proposal for a
REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Article 114 thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Economic and Social Committee,
Acting in accordance with the ordinary legislative procedure,
Whereas:
(1) The development, deployment and use of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should be based on a desire to serve society. Such technologies can entail opportunities and risks, which should be addressed and regulated by a comprehensive regulatory framework at Union level, reflecting ethical principles, to be complied with from the moment of the development and deployment of such technologies to their use.
(2) Compliance with such a regulatory framework regarding the development, deployment and use of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, in the Union should be of a level that is equivalent in all Member States, in order to efficiently seize the opportunities and consistently address the risks of such technologies, as well as avoid regulatory fragmentation. It should be ensured that the application of the rules set out in this Regulation throughout the Union is homogenous.
(3) In this context, the current diversity of the rules and practices to be followed across the Union poses a significant risk of fragmentation of the Single Market and to the protection of the well-being and prosperity of individuals and society alike, as well as to the coherent exploration of the full potential that artificial intelligence, robotics and related technologies have for promoting innovation and preserving that well-being and prosperity. Differences in the degree of consideration on the part of developers, deployers and users of the ethical dimension inherent to these technologies can prevent them from being freely developed, deployed or used within the Union and such differences can constitute an obstacle to a level playing field and to the pursuit of technological progress and economic activities at Union level, distort competition and impede authorities in the fulfilment of their obligations under Union law. In addition, the absence of a common regulatory framework, reflecting ethical principles, for the development, deployment and use of artificial intelligence, robotics and related technologies results in legal uncertainty for all those involved, namely developers, deployers and users.
(4) Nevertheless, while contributing to a coherent approach at Union level and within the limits set by it, this Regulation should provide a margin for implementation by Member States, including with regard to how the mandate of their respective national supervisory authority is to be carried out, in view of the objective it is to achieve as set out herein.
(5) This Regulation is without prejudice to existing or future sectorial legislation. It should be proportionate with regard to its objective so as not to unduly hamper innovation in the Union and be in accordance with a risk-based approach.
(6) The geographical scope of application of such a framework should cover all the components of artificial intelligence, robotics and related technologies throughout their development, deployment and use in the Union, including in cases where part of the technologies might be located outside the Union or not have a specific or single location, such as in the case of cloud computing services.
(7) A common understanding in the Union of notions such as artificial intelligence, robotics, related technologies and biometric recognition is required in order to allow for a unified regulatory approach and thus legal certainty for citizens and companies alike. They should be technologically neutral and subject to review whenever necessary.
(8) In addition, the fact that there are technologies related to artificial intelligence and robotics that enable software to control physical or virtual processes, at a varying degree of autonomy(15), needs to be considered. For example, for automated driving of vehicles, six levels of driving automation have been proposed by SAE International standard J3016.
(9) The development, deployment and use of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should complement human capabilities, not substitute them, and ensure that their execution does not run against the best interests of citizens and that it complies with Union law, fundamental rights as set out in the Charter of Fundamental Rights of the European Union (the ‘Charter’), settled case-law of the Court of Justice of the European Union, and other European and international instruments which apply in the Union.
(10) Decisions made or informed by artificial intelligence, robotics and related technologies should remain subject to meaningful human review, judgment, intervention and control. The technical and operational complexity of such technologies should never prevent their deployer or user from being able to, at the very least, trigger a fail-safe shutdown, alter or halt their operation, or revert to a previous state restoring safe functionalities in cases where the compliance with Union law and the ethical principles and legal obligations laid down in this Regulation is at risk.
(11) Artificial intelligence, robotics and related technologies whose development, deployment and use entail a significant risk of causing injury or harm to individuals or society in breach of fundamental rights and safety rules as laid down in Union law, should be considered as high-risk technologies. For the purposes of assessing them as such, the sector where they are developed, deployed or used, their specific use or purpose and the severity of the injury or harm that can be expected to occur should be considered. The degree of severity should be determined based on the extent of the potential injury or harm, the number of affected persons, the total value of damage caused and the harm to society as a whole. Severe types of injury and harm are, for instance, violations of children’s, consumers’ or workers’ rights that, due to their extent, the number of children, consumers or workers affected or their impact on society as a whole entail a significant risk to breach fundamental rights and safety rules as laid down in Union law. This Regulation should provide an exhaustive and cumulative list of high-risk sectors, and high-risk uses and purposes.
(12) The obligations laid down in this Regulation, specifically those regarding high-risk technologies, should only apply to artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union, which, following the risk assessment provided for in this Regulation, are considered as high-risk. Such obligations are to be complied with without prejudice to the general obligation that any artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, should be developed, deployed and used in the Union in a human-centric manner and based on the principles of human autonomy and human safety in accordance with Union law and in full respect of fundamental rights such as human dignity, right to liberty and security and right to the integrity of the person.
(13) High-risk technologies should respect the principles of safety, transparency, accountability, non-bias or non-discrimination, social responsibility and gender equality, right to redress, environmental sustainability, privacy and good governance, following an impartial, objective and external risk assessment by the national supervisory authority in accordance with the criteria provided for in this Regulation and in the list set out in its annex. This assessment should take into account the views and any self-assessment made by the developer or deployer.
(14) The Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated for this purpose should prepare non-binding implementation guidelines for developers, deployers and users on the methodology for compliance with this Regulation. In doing so, they should consult relevant stakeholders.
(15) There should be coherence within the Union when it comes to the risk assessment of these technologies, especially in the event they are assessed both in light of this Regulation and in accordance with any applicable sector-specific legislation. Accordingly, national supervisory authorities should inform other authorities carrying out risk assessments in accordance with any sector-specific legislation when these technologies are assessed as high-risk following the risk assessment provided for in this Regulation.
(16) To be trustworthy, high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should be developed, deployed and used in a safe, transparent and accountable manner in accordance with the safety features of robustness, resilience, security, accuracy and error identification, explainability, interpretability, auditability, transparency and identifiability, and in a manner that makes it possible to disable the functionalities concerned or to revert to a previous state restoring safe functionalities, in cases of non-compliance with those features. Transparency should be ensured by allowing access by public authorities, when strictly necessary, to the technology, data and computing systems underlying such technologies.
(17) Developers, deployers and users of artificial intelligence, robotics and related technologies, especially high-risk technologies, are responsible to varying degrees for compliance with the safety, transparency and accountability principles to the extent of their involvement with the technologies concerned, including the software, algorithms and data used or produced by such technologies. Developers should ensure that the technologies concerned are designed and built in line with the safety features set out in this Regulation, whereas deployers and users should deploy and use the technologies concerned in full observance of those features. To this end, developers of high-risk technologies should evaluate and anticipate the risks of misuse that can reasonably be expected in respect of the technologies they develop. They must also ensure that the systems they develop indicate, to the extent possible and through appropriate means such as disclaimer messages, the likelihood of errors or inaccuracies.
(18) Developers and deployers should make available to users any subsequent updates of the technologies concerned, namely in terms of software, as stipulated by contract or laid down in Union or national law. In addition, where a risk assessment so indicates, developers and deployers should provide public authorities with the relevant documentation on the use of the technologies concerned and safety instructions in that regard, including, when strictly necessary and in full respect of Union law on data protection, privacy and intellectual property rights and trade secrets, the source code, development tools and data used by the system.
(19) Individuals have a right to expect the technology they use to perform in a reasonable manner and to respect their trust. The trust placed by citizens in artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, depends on the understanding and comprehension of the technical processes. The degree of explainability of such processes should depend on the context of those technical processes, and on the severity of the consequences of an erroneous or inaccurate output, and needs to be sufficient for challenging them and for seeking redress. Auditability, traceability, and transparency should address any possible unintelligibility of such technologies.
(20) Society’s trust in artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, depends on the degree to which their assessment, auditability and traceability are enabled in the technologies concerned. Where the extent of their involvement so requires, developers should ensure that such technologies are designed and built in a manner that enables such an assessment, auditing and traceability. Within the limits of what is technically possible, developers, deployers and users should ensure that artificial intelligence, robotics and related technologies are deployed and used in full respect of transparency requirements, and allowing auditing and traceability.
(21) In order to ensure transparency and accountability, citizens should be informed when a system uses artificial intelligence, when artificial intelligence systems personalise a product or service for its users, whether they can switch off or limit the personalisation and when they are faced with an automated decision making technology. Furthermore, transparency measures should be accompanied, as far as this is technically possible, by clear and understandable explanations of the data used and of the algorithm, its purpose, its outcomes and its potential dangers.
(22) Bias in and discrimination by software, algorithms and data is unlawful and should be addressed by regulating the processes through which they are designed and deployed. Bias can originate both from decisions informed or made by an automated system as well as from data sets on which such decision making is based or with which the system is trained.
(23) Software, algorithms and data used or produced by artificial intelligence, robotics and related technologies should be considered biased where, for example, they display suboptimal results in relation to any person or group of persons, on the basis of a prejudiced personal or social perception and subsequent processing of data relating to their traits.
(24) In line with Union law, software, algorithms and data used or produced by artificial intelligence, robotics and related technologies should be considered discriminatory where they produce outcomes that have disproportionate negative effects and result in different treatment of a person or group of persons, including by putting them at a disadvantage when compared to others, based on grounds such as their personal traits, without objective or reasonable justification and regardless of any claims of neutrality of the technologies.
(25) In line with Union law, legitimate aims that could under this Regulation be considered to objectively justify any differential treatment between persons or group of persons are the protection of public safety, security and health, the prevention of criminal offences, the protection of fundamental rights and freedoms, fair representation and objective requirements for holding a professional occupation.
(26) Artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, should contribute to sustainable progress. Such technologies should not run counter to the cause of preservation of the environment or the green transition. They could play an important role in achieving the Sustainable Development Goals outlined by the United Nations with a view to enabling future generations to flourish. Such technologies can support the monitoring of adequate progress on the basis of sustainability and social cohesion indicators, and by using responsible research and innovation tools requiring the mobilisation of resources by the Union and its Member States to support and invest in projects addressing those goals.
(27) The development, deployment and use of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should in no way purposefully cause or accept by design injury or harm of any kind to individuals or society. Accordingly, high-risk technologies in particular should be developed, deployed and used in a socially responsible manner.
(28) Therefore, developers, deployers and users should be held responsible, to the extent of their involvement in the artificial intelligence, robotics and related technologies concerned, and in accordance with Union and national liability rules, for any injury or harm inflicted upon individuals and society.
(29) In particular, the developers who take decisions that determine and control the course or manner of the development of artificial intelligence, robotics and related technologies, as well as the deployers who are involved in their deployment by taking decisions regarding such deployment and by exercising control over the associated risks or benefiting from such deployment, with a controlling or managing function, should be generally considered responsible for avoiding the occurrence of any such injury or harm, by putting adequate measures in place during the development process and thoroughly respecting such measures during the deployment phase, respectively.
(30) Socially responsible artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, can be defined as technologies which contribute to finding solutions that safeguard and promote different aims regarding society, most notably democracy, health and economic prosperity, equality of opportunity, workers’ and social rights, diverse and independent media and objective and freely available information, allowing for public debate, quality education, cultural and linguistic diversity, gender balance, digital literacy, innovation and creativity. They are also those that are developed, deployed and used having due regard for their ultimate impact on the physical and mental well-being of citizens and that do not promote hate speech or violence. Such aims should be achieved in particular by means of high-risk technologies.
(31) Artificial intelligence, robotics and related technologies should also be developed, deployed and used with a view to supporting social inclusion, democracy, plurality, solidarity, fairness, equality and cooperation and their potential in that context should be maximised and explored through research and innovation projects. The Union and its Member States should therefore mobilise their communication, administrative and financial resources for the purpose of supporting and investing in such projects.
(32) Projects relating to the potential of artificial intelligence, robotics and related technologies to deal with the question of social well-being should be carried out on the basis of responsible research and innovation tools so as to guarantee the compliance with ethical principles of those projects from the outset.
(33) The development, deployment and use of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should take into consideration their environmental footprint. In line with obligations laid down in applicable Union law, such technologies should not cause harm to the environment during their lifecycle and across their entire supply chain and should be developed, deployed and used in a manner that preserves the environment, mitigates and remedies their environmental footprint, contributes to the green transition and supports the achievement of climate neutrality and circular economy goals.
(34) For the purposes of this Regulation, developers, deployers and users should be held responsible, to the extent of their respective involvement in the development, deployment or use of any artificial intelligence, robotics and related technologies considered as high-risk, for any harm caused to the environment in accordance with the applicable environmental liability rules.
(35) These technologies should also be developed, deployed and used with a view to supporting the achievement of environmental goals in line with the obligations laid down in applicable Union law, such as reducing waste production, diminishing the carbon footprint, combating climate change and preserving the environment, and their potential in that context should be maximised and explored through research and innovation projects. The Union and the Member States should therefore mobilise their communication, administrative and financial resources for the purpose of supporting and investing in such projects.
(36) Projects relating to the potential of artificial intelligence, robotics and related technologies in addressing environmental concerns should be carried out on the basis of responsible research and innovation tools so as to guarantee from the outset the compliance of those projects with ethical principles.
(37) Any artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed and used in the Union should fully respect Union citizens’ rights to privacy and protection of personal data. In particular, their development, deployment and use should be in accordance with Regulation (EU) 2016/679 of the European Parliament and of the Council(16) and Directive 2002/58/EC of the European Parliament and of the Council(17).
(38) In particular, the ethical boundaries of the use of artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, should be duly considered when using remote recognition technologies, such as recognition of biometric features, notably facial recognition, to automatically identify individuals. When these technologies are used by public authorities for reasons of substantial public interest, namely to guarantee the security of individuals and to address national emergencies, and not to guarantee the security of properties, the use should always be disclosed, proportionate, targeted and limited to specific objectives and restricted in time in accordance with Union law and having due regard to human dignity and autonomy and the fundamental rights set out in the Charter. Criteria for and limits to that use should be subject to judicial review and submitted to democratic scrutiny and debate involving civil society.
(39) Governance that is based on relevant standards enhances safety and promotes the increase of citizens’ trust in the development, deployment and use of artificial intelligence, robotics and related technologies including software, algorithms and data used or produced by such technologies.
(40) Public authorities should conduct impact assessments regarding fundamental rights before deploying high-risk technologies which provide support for decisions that are taken in the public sector and that have a direct and significant impact on citizens’ rights and obligations.
(41) Among the existing relevant governance standards are, for example, the ‘Ethics Guidelines for Trustworthy AI’ drafted by the High-Level Expert Group on Artificial Intelligence set up by the European Commission, and any other technical standards such as those adopted by the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI), at European level, the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE), at international level.
(42) Sharing and use of data by multiple participants is sensitive and therefore the development, deployment and use of artificial intelligence, robotics and related technologies should be governed by relevant rules, standards and protocols reflecting the requirements of quality, integrity, security, reliability, privacy and control. The data governance strategy should focus on the processing, sharing of and access to such data, including its proper management, auditability and traceability, and guarantee the adequate protection of data belonging to vulnerable groups, including people with disabilities, patients, children, minorities and migrants or other groups at risk of exclusion. In addition, developers, deployers and users should be able, where relevant, to rely on key performance indicators in the assessment of the datasets they use for the purposes of enhancing the trustworthiness of the technologies they develop, deploy and use.
(43) Member States should appoint an independent administrative authority to act as a supervisory authority. In particular, each national supervisory authority should be responsible for identifying artificial intelligence, robotics and related technologies considered as high-risk in the light of the risk assessment criteria provided for in this Regulation and for assessing and monitoring the compliance of these technologies with the obligations laid down in this Regulation.
(44) Each national supervisory authority should also bear responsibility for the good governance of these technologies under the coordination of the Commission and/or any relevant institutions, bodies, offices or agencies of the Union that may be designated for this purpose. They therefore have an important role to play in promoting the trust and safety of Union citizens, as well as in enabling a democratic, pluralistic and equitable society.
(45) For the purposes of assessing technologies which are high-risk in accordance with this Regulation and monitoring their compliance with it, national supervisory authorities should, where applicable, cooperate with the authorities responsible for assessing and monitoring these technologies and enforcing their compliance with sectorial legislation.
(46) National supervisory authorities should engage in substantial and regular cooperation with each other, as well as with the European Commission and other relevant institutions, bodies, offices and agencies of the Union, in order to guarantee a coherent cross-border action, and allow for consistent development, deployment and use of these technologies within the Union in compliance with the ethical principles and legal obligations laid down in this Regulation.
(47) In the context of such cooperation, and in view of achieving full harmonisation at Union level, national supervisory authorities should assist the Commission in drawing up a common and exhaustive list of high-risk artificial intelligence, robotics and related technologies in line with the criteria provided for in this Regulation and its Annex. Furthermore, a granting process should be developed for the issuing of a European certificate of ethical compliance, including a voluntary application process for any developer, deployer or user of technologies not considered high-risk seeking to certify their compliance with this Regulation.
(48) National supervisory authorities should ensure the gathering of a maximum number of stakeholders such as industry, businesses, social partners, researchers, consumers and civil society organisations, and provide a pluralistic forum for reflection and exchange of views so as to achieve comprehensible and accurate conclusions for the purpose of guiding how governance is regulated.
(49) National supervisory authorities should ensure the gathering of a maximum number of stakeholders such as industry, businesses, social partners, researchers, consumers and civil society organisations, and provide a pluralistic forum for reflection and exchange of views, to facilitate cooperation with and collaboration between stakeholders, in particular from academia, research, industry, civil society and individual experts, so as to achieve comprehensible and accurate conclusions for the purpose of guiding how governance is regulated.
(50) Additionally, these national supervisory authorities should provide professional administrative guidance and support to developers, deployers and users, particularly small and medium-sized enterprises or start-ups, encountering challenges as regards complying with the ethical principles and legal obligations laid down in this Regulation.
(51) The Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated for this purpose should establish binding guidelines on the methodology to be used by the national supervisory authorities when conducting their compliance assessment.
(52) Whistle-blowing brings potential and actual breaches of Union law to the attention of authorities with a view to preventing injury, harm or damage that would otherwise occur. In addition, reporting procedures ameliorate the information flow within companies and organisations, thus mitigating the risk of flawed or erroneous products or services being developed. Companies and organisations developing, deploying or using artificial intelligence, robotics and related technologies, including data used or produced by those technologies, should set up reporting channels and persons reporting breaches should be protected from retaliation.
(53) The rapid development of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, as well as of the technical machine learning, reasoning processes and other technologies underlying that development, is unpredictable. As such, it is both appropriate and necessary to establish a review mechanism in accordance with which, in addition to its reporting on the application of the Regulation, the Commission is to regularly submit a report concerning the possible modification of the scope of application of this Regulation.
(54) Since the objective of this Regulation, namely to establish a common regulatory framework of ethical principles and legal obligations for the development, deployment and use of artificial intelligence, robotics and related technologies in the Union, cannot be sufficiently achieved by the Member States, but can rather, by reason of its scale and effects, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union. In accordance with the principle of proportionality, as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.
(55) Coordination at Union level as set out in this Regulation would be best achieved by the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context in order to avoid fragmentation and ensure the consistent application of this Regulation. The Commission should therefore be tasked with finding an appropriate solution to structure such coordination at Union level in view of coordinating the mandates and actions of the national supervisory authorities in each Member State, namely regarding the risk assessment of artificial intelligence, robotics and related technologies, the establishment of a common framework for the governance of the development, deployment and use of these technologies, the developing and issuing of a certification of compliance with the ethical principles and legal obligations laid down in this Regulation, supporting regular exchanges with concerned stakeholders and civil society and creating a centre of expertise, bringing together academia, research, industry, and individual experts at Union level to foster exchange of knowledge and technical expertise, and promoting the Union’s approach through international cooperation and ensuring a consistent reply worldwide to the opportunities and risks inherent in these technologies.
HAVE ADOPTED THIS REGULATION:
Chapter I
General provisions
Article 1
Purpose
The purpose of this Regulation is to establish a comprehensive and future-proof Union regulatory framework of ethical principles and legal obligations for the development, deployment and use of artificial intelligence, robotics and related technologies in the Union.
Article 2
Scope
This Regulation applies to artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union.
Article 3
Geographical scope
This Regulation applies to artificial intelligence, robotics and related technologies where any part thereof is developed, deployed or used in the Union, regardless of whether the software, algorithms or data used or produced by such technologies are located outside of the Union or do not have a specific geographical location.
Article 4
Definitions
For the purposes of this Regulation, the following definitions apply:
(a) ‘artificial intelligence’ means a system that is either software-based or embedded in hardware devices, and that displays intelligent behaviour by, inter alia, collecting, processing, analysing, and interpreting its environment, and by taking action, with some degree of autonomy, to achieve specific goals(18);
(b) ‘autonomy’ means the capacity of an AI system to operate by interpreting certain input and using a set of pre-determined instructions, without being limited to such instructions, even though the system’s behaviour is constrained by, and targeted at fulfilling, the goal it was given and other relevant design choices made by its developer;
(c) ‘robotics’ means technologies that enable automatically controlled, reprogrammable, multi-purpose machines(19) to perform actions in the physical world traditionally performed or initiated by human beings, including by way of artificial intelligence or related technologies;
(d) ‘related technologies’ means technologies that enable software to control with a partial or full degree of autonomy a physical or virtual process, technologies capable of detecting biometric, genetic or other data, and technologies that copy or otherwise make use of human traits;
(e) ‘high risk’ means a significant risk entailed by the development, deployment and use of artificial intelligence, robotics and related technologies to cause injury or harm to individuals or society in breach of fundamental rights and safety rules as laid down in Union law, considering their specific use or purpose, the sector where they are developed, deployed or used and the severity of injury or harm that can be expected to occur;
(f) ‘development’ means the construction and design of algorithms, the writing and design of software or the collection, storing and management of data for the purpose of creating or training artificial intelligence, robotics and related technologies or for the purpose of creating a new application for existing artificial intelligence, robotics and related technologies;
(g) ‘developer’ means any natural or legal person who takes decisions that determine and control the course or manner of the development of artificial intelligence, robotics and related technologies;
(h) ‘deployment’ means the operation and management of artificial intelligence, robotics and related technologies, as well as their placement on the market or otherwise making them available to users;
(i) ‘deployer’ means any natural or legal person who is involved in the specific deployment of artificial intelligence, robotics and related technologies with a controlling or managing function by taking decisions, exercising control over the risk and benefiting from such deployment;
(j) ‘use’ means any action relating to artificial intelligence, robotics and related technologies other than development or deployment;
(k) ‘user’ means any natural or legal person who uses artificial intelligence, robotics and related technologies other than for the purposes of development or deployment;
(l) ‘bias’ means any prejudiced personal or social perception of a person or group of persons on the basis of their personal traits;
(m) ‘discrimination’ means any differential treatment of a person or group of persons based on a ground which has no objective or reasonable justification and is therefore prohibited by Union law;
(n) ‘injury or harm’ means, including where caused by hate speech, bias, discrimination or stigmatisation, physical or mental injury, material or immaterial harm such as financial or economic loss, loss of employment or educational opportunity, undue restriction of freedom of choice or expression or loss of privacy, and any infringement of Union law that is detrimental to a person;
(o) ‘good governance’ means the manner of ensuring that the appropriate and reasonable standards and protocols of behaviour are adopted and observed by developers, deployers and users, based on a formal set of rules, procedures and values, and which allows them to deal appropriately with ethical matters as or before they arise.
Article 5
Ethical principles of artificial intelligence, robotics and related technologies
1. Any artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in the Union in accordance with Union law and in full respect of human dignity, autonomy and safety and other fundamental rights set out in the Charter.
2. Any processing of personal data carried out in the development, deployment and use of artificial intelligence, robotics and related technologies, including personal data derived from non-personal data and biometric data, shall be carried out in accordance with Regulation (EU) 2016/679 and Directive 2002/58/EC.
3. The Union and its Member States shall encourage research projects intended to provide solutions, based on artificial intelligence, robotics and related technologies, that seek to promote social inclusion, democracy, plurality, solidarity, fairness, equality and cooperation.
Chapter II
Obligations for high-risk technologies
Article 6
Obligations for high-risk technologies
1. The provisions in this Chapter shall only apply to artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union which are considered high-risk.
2. Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in a manner that ensures that they do not breach the ethical principles set out in this Regulation.
Article 7
Human-centric and human-made artificial intelligence
1. Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in a manner that guarantees full human oversight at any time.
2. The technologies referred to in paragraph 1 shall be developed, deployed and used in a manner that allows full human control to be regained when needed, including through the altering or halting of those technologies.
Article 8
Safety, transparency and accountability
1. Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in a manner that ensures that they are:
(a) developed, deployed and used in a resilient manner so that they ensure an adequate level of security by adhering to minimum cybersecurity baselines proportionate to identified risk, and one that prevents any technical vulnerabilities from being exploited for malicious or unlawful purposes;
(b) developed, deployed and used in a secure manner that ensures there are safeguards that include a fall-back plan and action in case of a safety or security risk;
(c) developed, deployed and used in a manner that ensures a reliable performance as reasonably expected by the user regarding reaching the aims and carrying out the activities they have been conceived for, including by ensuring that all operations are reproducible;
(d) developed, deployed and used in a manner that ensures that the performance of the aims and activities of the particular technologies is accurate; if occasional inaccuracies cannot be avoided, the system shall indicate, to the extent possible, the likeliness of errors and inaccuracies to deployers and users through appropriate means;
(e) developed, deployed and used in an easily explainable manner so as to ensure that there can be a review of the technical processes of the technologies;
(f) developed, deployed and used in a manner such that they inform users that they are interacting with artificial intelligence systems, duly and comprehensively disclosing their capabilities, accuracy and limitations to artificial intelligence developers, deployers and users;
(g) in accordance with Article 6, developed, deployed and used in a manner that makes it possible, in the event of non-compliance with the safety features set out in points (a) to (f), for the functionalities concerned to be temporarily disabled and to revert to a previous state restoring safe functionalities.
2. In accordance with Article 6(1), the technologies mentioned in paragraph 1 of this Article, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in a transparent and traceable manner so that their elements, processes and phases are documented to the highest possible and applicable standards, and so that it is possible for the national supervisory authorities referred to in Article 18 to assess the compliance of such technologies with the obligations laid down in this Regulation. In particular, the developer, deployer or user of those technologies shall be responsible for, and be able to demonstrate, compliance with the safety features set out in paragraph 1.
3. The developer, deployer or user of the technologies mentioned in paragraph 1 shall ensure that the measures taken to ensure compliance with the safety features set out in paragraph 1 can be audited by the national supervisory authorities referred to in Article 18 or, where applicable, other national or European sectorial supervisory bodies.
Article 9
Non-bias and non-discrimination
1. Any software, algorithm or data used or produced by high-risk artificial intelligence, robotics and related technologies developed, deployed or used in the Union shall be unbiased and, without prejudice to paragraph 2, shall not discriminate on grounds such as race, gender, sexual orientation, pregnancy, disability, physical or genetic features, age, national minority, ethnicity or social origin, language, religion or belief, political views or civic participation, citizenship, civil or economic status, education, or criminal record.
2. By way of derogation from paragraph 1, and without prejudice to Union law governing unlawful discrimination, any differential treatment between persons or groups of persons may be justified only where there is an objective, reasonable and legitimate aim that is both proportionate and necessary insofar as no alternative exists which would cause less interference with the principle of equal treatment.
Article 10
Social responsibility and gender equality
Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed and used in the Union shall be developed, deployed and used in compliance with relevant Union law, principles and values, in a manner that does not interfere in elections or contribute to the dissemination of disinformation, respects workers’ rights, promotes quality education and digital literacy, does not increase the gender gap by preventing equal opportunities for all and does not disrespect intellectual property rights or any limitations or exceptions thereto.
Article 11
Environmental sustainability
Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be assessed as to their environmental sustainability by the national supervisory authorities referred to in Article 18 or, where applicable, other national or European sectorial supervisory bodies, ensuring that measures are put in place to mitigate and remedy their general impact as regards natural resources, energy consumption, waste production, the carbon footprint, climate change emergency and environmental degradation in order to ensure compliance with the applicable Union or national law, as well as any other international environmental commitments the Union has undertaken.
Article 12
Respect for privacy and protection of personal data
The use and gathering of biometric data for remote identification purposes in public areas, such as biometric or facial recognition, carries specific risks for fundamental rights and shall be deployed or used only by Member States’ public authorities for substantial public interest purposes. Those authorities shall ensure that such deployment or use is disclosed to the public, proportionate, targeted and limited to specific objectives and location and restricted in time, in accordance with Union and national law, in particular Regulation (EU) 2016/679 and Directive 2002/58/EC, and with due regard for human dignity and autonomy and the fundamental rights set out in the Charter, namely the rights to respect for privacy and protection of personal data.
Article 13
Right to redress
Any natural or legal person shall have the right to seek redress for injury or harm caused by the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, in breach of Union law and the obligations set out in this Regulation.
Article 14
Risk assessment
1. For the purposes of this Regulation, artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be considered high-risk technologies when, following a risk assessment based on objective criteria such as their specific use or purpose, the sector where they are developed, deployed or used and the severity of the possible injury or harm caused, their development, deployment or use entails a significant risk of causing injury or harm that can be expected to occur to individuals or society in breach of fundamental rights and safety rules as laid down in Union law.
2. Without prejudice to applicable sectorial legislation, the risk assessment of artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be carried out, in accordance with the objective criteria provided for in paragraph 1 of this Article and in the exhaustive and cumulative list set out in the Annex to this Regulation, by the national supervisory authorities referred to in Article 18 under the coordination of the Commission and/or any other relevant institutions, bodies, offices and agencies of the Union that may be designated for this purpose in the context of their cooperation.
3. In cooperation with the national supervisory authorities referred to in paragraph 2, the Commission shall, by means of delegated acts in accordance with Article 20, draw up and subsequently update a common list of high-risk technologies identified within the Union.
4. The Commission shall also, by means of delegated acts in accordance with Article 20, regularly update the list provided for in the Annex to this Regulation.
Article 15
Compliance assessment
1. High-risk artificial intelligence, robotics and related technologies shall be subject to an assessment of compliance with the obligations set out in Articles 6 to 12 of this Regulation, as well as to subsequent monitoring, both of which shall be carried out by the national supervisory authorities referred to in Article 18 under the coordination of the Commission and/or any other relevant institutions, bodies, offices and agencies of the Union that may be designated for this purpose.
2. The software, algorithms and data used or produced by high-risk technologies which have been assessed as compliant with the obligations set out in this Regulation pursuant to paragraph 1 shall also be considered to comply with those obligations, unless the relevant national supervisory authority decides to conduct an assessment on its own initiative or at the request of the developer, the deployer or the user.
3. Without prejudice to sectorial legislation, the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be specifically designated for this purpose shall prepare binding guidelines on the methodology to be used by the national supervisory authorities for the compliance assessment referred to in paragraph 1 by the date of entry into force of this Regulation.
Article 16
European certificate of ethical compliance
1. Where there has been a positive assessment of compliance of high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, carried out in line with Article 15, the respective national supervisory authority shall issue a European certificate of ethical compliance.
2. Any developer, deployer or user of artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, that are not considered as high-risk and that are therefore not subject to the obligations laid down in Articles 6 to 12 and to the risk assessment and compliance assessment provided for in Articles 14 and 15, may also seek to certify the compliance with the obligations laid down in this Regulation, or part of them where so justified by the nature of the technology in question as decided by the national supervisory authorities. A certificate shall only be issued if an assessment of compliance has been carried out by the relevant national supervisory authority and that assessment is positive.
3. For the purposes of issuing the certificate referred to in paragraph 2, an application process shall be developed by the Commission and/or any other relevant institutions, bodies, offices and agencies of the Union that may be designated for this purpose.
Chapter III
Institutional oversight
Article 17
Governance standards and implementation guidance
1. Artificial intelligence, robotics and related technologies developed, deployed or used in the Union shall comply with relevant governance standards established in accordance with Union law, principles and values by the national supervisory authorities referred to in Article 18, under the coordination of the Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated for this purpose and in consultation with relevant stakeholders.
2. The standards referred to in paragraph 1 shall include non-binding implementation guidelines on the methodology for compliance with this Regulation by developers, deployers and users and shall be published by the date of entry into force of this Regulation.
3. Data used or produced by artificial intelligence, robotics and related technologies developed, deployed or used in the Union shall be managed by developers, deployers and users in accordance with relevant national, Union, other European organisations’ and international rules and standards, as well as with relevant industry and business protocols. In particular, developers and deployers shall carry out, where feasible, quality checks of the external sources of data used by artificial intelligence, robotics and related technologies, and shall put oversight mechanisms in place regarding their collection, storage, processing and use.
4. Without prejudice to portability rights and rights of persons whose usage of artificial intelligence, robotics and related technologies has generated data, the collection, storage, processing, sharing of and access to data used or produced by artificial intelligence, robotics and related technologies developed, deployed or used in the Union shall comply with the relevant national, Union, other European organisations’ and international rules and standards, as well as with relevant industry and business protocols. In particular, developers and deployers shall ensure those protocols are applied during the development and deployment of artificial intelligence, robotics and related technologies, by clearly defining the requirements for processing and granting access to data used or produced by these technologies, as well as the purpose, scope and addressees of the processing and the granting of access to such data, all of which shall at all times be auditable and traceable.
Article 18
Supervisory authorities
1. Each Member State shall designate an independent public authority to be responsible for monitoring the application of this Regulation (‘supervisory authority’), and for carrying out the risk and compliance assessments and the certification provided for in Articles 14, 15 and 16, without prejudice to sectorial legislation.
2. Each national supervisory authority shall contribute to the consistent application of this Regulation throughout the Union. For that purpose, the supervisory authorities in each Member State shall cooperate with each other, the Commission and/or other relevant institutions, bodies, offices and agencies of the Union that may be designated for this purpose.
3. Each national supervisory authority shall serve as a first point of contact in cases of suspected breach of the ethical principles and legal obligations laid down in this Regulation, including discriminatory treatment or violation of other rights, as a result of the development, deployment or use of artificial intelligence, robotics and related technologies. In such cases, the respective national supervisory authority shall carry out a compliance assessment with a view to supporting the right of citizens to contest and seek redress.
4. Each national supervisory authority shall be responsible for supervising the application of the relevant national, European and international governance rules and standards referred to in Article 17 to artificial intelligence, robotics and related technologies, including by liaising with the maximum possible number of relevant stakeholders. For that purpose, the supervisory authorities in each Member State shall provide a forum for regular exchange with and among stakeholders from academia, research, industry and civil society.
5. Each national supervisory authority shall provide professional and administrative guidance and support concerning the general implementation of Union law applicable to artificial intelligence, robotics and related technologies and the ethical principles set out in this Regulation, especially to relevant research and development organisations and small and medium-sized enterprises or start-ups.
6. Each Member State shall notify to the European Commission the legal provisions which it adopts pursuant to this Article by ... [OJ: please enter the date one year after entry into force] and, without delay, any subsequent amendment affecting them.
7. Member States shall take all measures necessary to ensure the implementation of the ethical principles and legal obligations laid down in this Regulation. Member States shall support relevant stakeholders and civil society, at both Union and national level, in their efforts to ensure a timely, ethical and well-informed response to the new opportunities and challenges, in particular those of a cross-border nature, arising from technological developments relating to artificial intelligence, robotics and related technologies.
Article 19
Reporting of breaches and protection of reporting persons
Directive (EU) 2019/1937 of the European Parliament and of the Council(20) shall apply to the reporting of breaches of this Regulation and the protection of persons reporting such breaches.
Article 20
Coordination at Union level
1. The Commission and/or any relevant institutions, bodies, offices and agencies of the Union that may be designated in this context shall have the following tasks:
— ensuring a consistent risk assessment of artificial intelligence, robotics and related technologies referred to in Article 14 to be carried out by the national supervisory authorities referred to in Article 18 on the basis of the common objective criteria provided for in Article 14(1) and in the list of high-risk sectors and of high-risk uses or purposes set out in the Annex to this Regulation;
— taking note of the compliance assessment and subsequent monitoring of high-risk artificial intelligence, robotics and related technologies referred to in Article 15 to be carried out by the national supervisory authorities referred to in Article 18;
— developing the application process for the certificate referred to in Article 16 to be issued by the national supervisory authorities referred to in Article 18;
— without prejudice to sectorial legislation, preparing the binding guidelines referred to in Article 17(4) on the methodology to be used by the national supervisory authorities referred to in Article 18;
— coordinating the establishment of the relevant governance standards referred to in Article 17 by the national supervisory authorities referred to in Article 18, including non-binding implementation guidelines for developers, deployers and users on the methodology for compliance with this Regulation;
— cooperating with the national supervisory authorities referred to in Article 18 regarding their contribution to the consistent application of this Regulation throughout the Union pursuant to Article 18(2);
— serving as a centre for expertise by promoting the exchange of information related to artificial intelligence, robotics and related technologies and supporting the development of a common understanding in the Single Market, issuing additional guidance, opinions and expertise to the national supervisory authorities referred to in Article 18, monitoring the implementation of relevant Union law, identifying standards for best practice and, where appropriate, making recommendations for regulatory measures; in doing so, it should liaise with the maximum possible number of relevant stakeholders and ensure that the composition of its decision levels is diverse and ensures gender equality;
— hosting a Working Group on Security and Defence aimed at looking into policy and investment questions specifically related to the ethical use of artificial intelligence, robotics and related technologies in the field of security and defence.
Article 21
Exercise of delegation
1. The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article.
2. The power to adopt delegated acts referred to in Article 14(3) and (4) shall be conferred on the Commission for a period of five years from ... [date of entry into force of this Regulation].
3. The delegation of power referred to in Article 14(3) and (4) may be revoked at any time by the European Parliament or by the Council. A decision to revoke shall put an end to the delegation of the power specified in that decision. It shall take effect the day following the publication of the decision in the Official Journal of the European Union or a later date specified therein. It shall not affect the validity of any delegated act already in force.
4. Before adopting a delegated act, the Commission shall consult experts designated by each Member State in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law Making.
5. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.
6. A delegated act adopted pursuant to Article 14(3) and (4) shall enter into force only if no objection has been expressed either by the European Parliament or by the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.
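As an illustrative reading only, the scrutiny window in paragraph 6 can be sketched as simple date arithmetic: three months from notification, extendable to six at the initiative of either institution. The function names and the simplified month arithmetic below are assumptions for illustration, not anything prescribed by the Regulation:

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole months, clamping to the month's last day."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def objection_deadline(notified: date, extended: bool = False) -> date:
    """Last day on which Parliament or Council may still object
    (three months from notification, or six if the period is extended)."""
    return add_months(notified, 6 if extended else 3)

# Example: an act notified on 15 January 2021
print(objection_deadline(date(2021, 1, 15)))                 # 2021-04-15
print(objection_deadline(date(2021, 1, 15), extended=True))  # 2021-07-15
```

Note that real legal deadline computation follows Regulation (EEC, Euratom) No 1182/71 on periods and dates, which this sketch does not attempt to reproduce.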
Article 22
Amendment to Directive (EU) 2019/1937
Directive (EU) 2019/1937 is amended as follows:
(1) In Article 2(1), the following point is added:
‘(xi) development, deployment and use of artificial intelligence, robotics and related technologies.’
(2) In Part I of the Annex, the following point is added:
‘K. Point (a)(xi) of Article 2(1) - development, deployment and use of artificial intelligence, robotics and related technologies.
“(xxi) Regulation [XXX] of the European Parliament and of the Council on ethical principles for the development, deployment and use of artificial intelligence, robotics and related technologies”.’
Article 23
Review
The Commission shall keep under regular review the development of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, and shall by ... [OJ: please enter the date three years after entry into force], and every three years thereafter, submit to the European Parliament, the Council and the European Economic and Social Committee a report on the application of this Regulation, including an assessment of the possible modification of the scope of application of this Regulation.
Article 24
Entry into force
This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.
It shall apply from XX.
This Regulation shall be binding in its entirety and directly applicable in all Member States.
Done at ...,
For the European Parliament For the Council
The President The President
ANNEX
Exhaustive and cumulative list of high-risk sectors and of high-risk uses or purposes that entail a risk of breach of fundamental rights and safety rules.
High-risk sectors
— Employment
— Education
— Healthcare
— Transport
— Energy
— Public sector (asylum, migration, border controls, judiciary and social security services)
— Defence and security
— Finance, banking, insurance
High-risk uses or purposes
— Recruitment
— Grading and assessment of students
— Allocation of public funds
— Granting loans
— Trading, brokering, taxation, etc.
— Medical treatments and procedures
— Electoral processes and political campaigns
— Public sector decisions that have a significant and direct impact on the rights and obligations of natural or legal persons
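The cumulative logic of Article 14(2), read against the two lists above, can be sketched in code: a technology is flagged as high-risk only when both its sector and its use or purpose appear in the Annex. The set names, string keys and the `is_high_risk` helper are purely illustrative assumptions, not a method prescribed by the Regulation:

```python
# Illustrative sketch of the Annex's cumulative criterion (Article 14(2)).
# Entries are shortened forms of the Annex lists; matching by lowercase
# string is an assumption made for this example only.

HIGH_RISK_SECTORS = {
    "employment", "education", "healthcare", "transport", "energy",
    "public sector", "defence and security", "finance, banking, insurance",
}

HIGH_RISK_USES = {
    "recruitment", "grading and assessment of students",
    "allocation of public funds", "granting loans",
    "trading, brokering, taxation", "medical treatments and procedures",
    "electoral processes and political campaigns",
    "public sector decisions with significant and direct impact",
}

def is_high_risk(sector: str, use: str) -> bool:
    """Cumulative test: both the sector AND the use/purpose must be listed."""
    return sector.lower() in HIGH_RISK_SECTORS and use.lower() in HIGH_RISK_USES

# A recruitment tool deployed in the employment sector meets both limbs;
# the same tool in an unlisted sector does not.
print(is_high_risk("Employment", "Recruitment"))  # True
print(is_high_risk("Retail", "Recruitment"))      # False
```

In the Regulation itself this assessment is of course carried out by the national supervisory authorities against objective criteria, not by list lookup; the sketch only shows why the list is described as "cumulative".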
Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services (OJ L 151, 7.6.2019, p. 70).
Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82).
For automated driving of vehicles, six levels of driving automation have been proposed by SAE International standard J3016, last updated in 2018 to J3016_201806. https://www.sae.org/standards/content/j3016_201806/
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).
Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ L 201, 31.7.2002, p. 37).
Directive (EU) 2019/1937 of the European Parliament and of the Council of 23 October 2019 on the protection of persons who report breaches of Union law (OJ L 305, 26.11.2019, p. 17).
Civil liability regime for artificial intelligence
European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL))
– having regard to Article 225 of the Treaty on the Functioning of the European Union,
– having regard to Articles 114 and 169 of the Treaty on the Functioning of the European Union,
– having regard to Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products(1) (‘Product Liability Directive’),
– having regard to Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market (‘Unfair Commercial Practices Directive’)(2) and Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights(3), as well as other consumer protection rules,
– having regard to Regulation (EU) 2017/745 of the European Parliament and the Council of 5 April 2017 on medical devices(4),
– having regard to Council Regulation (EU) 2018/1488 of 28 September 2018 establishing the European High Performance Computing Joint Undertaking(5),
– having regard to Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services(6),
– having regard to the Interinstitutional Agreement of 13 April 2016 on Better Law-Making and the Better Regulation Guidelines(7),
– having regard to the proposal for a regulation of the European Parliament and of the Council of 6 June 2018 establishing the Digital Europe programme for the period 2021-2027 (COM(2018)0434),
– having regard to the Commission communication of 25 April 2018 on Artificial Intelligence for Europe (COM(2018)0237),
– having regard to the Commission communication of 7 December 2018 on a Coordinated Plan on Artificial Intelligence (COM(2018)0795),
– having regard to the Commission communication of 8 April 2019 on Building Trust in Human-Centric Artificial Intelligence (COM(2019)0168),
– having regard to the Commission report of 19 February 2020 to the European Parliament, the Council and the European Economic and Social Committee on safety and liability implications of Artificial Intelligence, the Internet of Things and robotics (COM(2020)0064),
– having regard to the Commission White Paper of 19 February 2020 on Artificial Intelligence - A European approach to excellence and trust (COM(2020)0065),
– having regard to its resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics(8),
– having regard to its resolution of 1 June 2017 on digitising European industry(9),
– having regard to its resolution of 12 September 2018 on autonomous weapon systems(10),
– having regard to its resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics(11),
– having regard to its resolution of 12 February 2020 on automated decision-making processes: ensuring consumer protection and free movement of goods and services(12),
– having regard to the report of 8 April 2019 of the High-Level Expert Group on Artificial Intelligence entitled “Ethics Guidelines for trustworthy AI”,
– having regard to the report of 8 April 2019 of the High-Level Expert Group on Artificial Intelligence entitled “A definition of AI: Main Capabilities and Disciplines”,
– having regard to the report of 26 June 2019 of the High-Level Expert Group on Artificial Intelligence entitled “Policy and investment recommendations for trustworthy AI”,
– having regard to the report of 21 November 2019 of the Expert Group on Liability and New Technologies – New Technologies Formation entitled “Liability for Artificial Intelligence and other emerging digital technologies”,
– having regard to the European added value assessment study carried out by the European Parliamentary Research Service, entitled 'Civil liability regime for artificial intelligence: European added value assessment'(13),
– having regard to the European Parliamentary Research Service STOA Policy Briefing of June 2016 on legal and ethical reflections concerning robotics(14),
– having regard to the Study of the Directorate General for internal policies of the European Parliament of October 2016 for the Legal Affairs Committee entitled “European Civil Law Rules in Robotics”(15),
– having regard to Rules 47 and 54 of its Rules of Procedure,
– having regard to the opinions of the Committee on the Internal Market and Consumer Protection and the Committee on Transport and Tourism,
– having regard to the report of the Committee on Legal Affairs (A9-0178/2020),
A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim and receive compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for natural and legal persons to avoid causing harm or damage in the first place or price into their behaviour the risk of having to pay compensation;
B. whereas any future-oriented civil liability legal framework has to instil confidence in the safety, reliability and consistency of products and services, including in digital technology, in order to strike a balance between efficiently and fairly protecting potential victims of harm or damage and, at the same time, providing enough leeway to make it possible for enterprises, and particularly small and medium-sized enterprises, to develop new technologies, products or services; whereas this will help build confidence and create stability for investment; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the operator, the affected person or any other third party;
C. whereas the legal system of a Member State can adjust its liability rules for certain actors or can make them stricter for certain activities; whereas strict liability means that a party can be held liable despite the absence of fault; whereas in many national tort laws, the defendant is held strictly liable if a risk which that defendant has created for the public, such as in the form of cars or hazardous activities, or a risk which he cannot control, like animals, results in harm or damage being caused;
D. whereas any future Union legislation, having as a goal the explicit assignment of liability as regards Artificial Intelligence (AI) - systems, should be preceded by analysis and consultation with the Member States on the compliance of the proposed legislative act with economic, legal and social conditions;
E. whereas the issue of a civil liability regime for AI should be the subject of a broad public debate, taking into consideration all the interests at stake, especially the ethical, legal, economic and social aspects, to avoid misunderstandings and unjustified fears that such technology may cause among citizens; whereas careful examination of the consequences of any new regulatory framework on all actors in an impact assessment should be a prerequisite for further legislative steps;
F. whereas the notion of AI-systems comprises a large group of different technologies, including simple statistics, machine learning and deep learning;
G. whereas using the term “automated decision-making” could avoid the possible ambiguity of the term AI; whereas “automated decision-making” involves a user delegating initially a decision, partly or completely, to an entity by way of using software or a service; whereas that entity then in turn uses automatically executed decision-making models to perform an action on behalf of a user, or to inform the user’s decisions in performing an action;
H. whereas certain AI-systems present significant legal challenges for the existing liability framework and could lead to situations in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system, or which code, input or data have ultimately caused the harmful operation; whereas this factor could make it harder to identify the link between harm or damage and the behaviour causing it, with the result that victims might not receive adequate compensation;
I. whereas the legal challenges also result from the connectivity between an AI-system and other AI-systems and non-AI-systems, their dependency on external data, their vulnerability to cybersecurity breaches as well as from the design of increasingly autonomous AI-systems using, inter alia, machine-learning and deep-learning techniques;
J. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges and eliminate the risk of users being less willing to accept emerging technology; whereas fair compensation procedures mean that each person who suffers harm caused by AI-systems or whose property damage is caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system; whereas the user needs to be sure that potential damage caused by systems using AI is covered by adequate insurance and that there is a defined legal route for redress;
K. whereas legal certainty is also an essential condition for the dynamic development and innovation of AI-based technology, in particular for start-ups, micro, small and medium-size enterprises, and its practical application in everyday life; whereas the crucial role of start-ups, micro, small and medium-size enterprises, especially in the European economy, justifies a strictly proportionate approach to enable them to develop and innovate;
L. whereas the diversity of AI-systems and the diverse range of risks the technology poses complicate efforts to find a single solution, suitable for the entire spectrum of risks; whereas, in this respect, an approach should be adopted in which experiments, pilots and regulatory sandboxes are used to come up with proportionate and evidence-based solutions that address specific situations and sectors, where needed;
Introduction
1. Considers that the challenge related to the introduction of AI-systems into society, the workplace and the economy is one of the most important questions on the current political agenda; whereas technologies based on AI could and should endeavour to improve our lives in almost every sector, from the personal sphere, for example the transport sector, personalised education, assistance to vulnerable persons, fitness programs, and credit provisions, to the working environment, for example alleviation from tedious and repetitive tasks, and to global challenges such as climate change, healthcare, nutrition and logistics;
2. Firmly believes that in order to efficiently exploit the advantages and prevent potential misuses of AI-systems and to avoid regulatory fragmentation in the Union, uniform, principle-based and future-proof legislation across the Union for all AI-systems is crucial; is of the opinion that, while sector-specific regulations for the broad range of possible applications are preferable, a horizontal and harmonised legal framework based on common principles seems necessary to ensure legal clarity, to establish equal standards across the Union and to effectively protect our European values and citizens’ rights;
3. States that the Digital Single Market needs to be fully harmonised, since the digital sphere is characterised by rapid cross-border dynamics and international data flows; considers that the Union will only achieve the objectives of maintaining the Union’s digital sovereignty and of boosting digital innovation in Europe with consistent and common rules in line with a culture of innovation;
4. Notes that the global AI race is already underway and that the Union should play a leading role in it, by exploiting its scientific and technological potential; strongly emphasises that technology development must not undermine the protection of users from damage that can be caused by devices and systems using AI; encourages the promotion of the Union standards on civil liability at an international level;
5. Firmly believes that the new common rules for AI-systems should only take the form of a regulation; considers that the question of liability in cases of harm or damage caused by an AI-system is one of the key aspects to address within this framework;
Liability and Artificial Intelligence
6. Believes that there is no need for a complete revision of the well-functioning liability regimes, but that the complexity, connectivity, opacity, vulnerability, the capacity of being modified through updates, the capacity for self-learning and the potential autonomy of AI-systems, as well as the multitude of actors involved represent nevertheless a significant challenge to the effectiveness of Union and national liability framework provisions; considers that specific and coordinated adjustments to the liability regimes are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;
7. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are nearly always the result of someone building, deploying or interfering with the systems; notes in this respect that it is not necessary to give legal personality to AI-systems; is of the opinion that the opacity, connectivity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the different persons in the whole value chain who create, maintain or control the risk associated with the AI-system liable;
8. Considers that the Product Liability Directive (PLD) has, for over 30 years, proven to be an effective means of getting compensation for harm triggered by a defective product, but should nevertheless be revised to adapt it to the digital world and to address the challenges posed by emerging digital technologies, ensuring, thereby, a high level of effective consumer protection, as well as legal certainty for consumers and businesses, while avoiding high costs and risks for SMEs and start-ups; urges the Commission to assess whether the PLD should be transformed into a regulation, to clarify the definition of ‘products’ by determining whether digital content and digital services fall under its scope and to consider adapting concepts such as ‘damage’, ‘defect’ and ‘producer’; is of the opinion that, for the purpose of legal certainty throughout the Union, following the review of the PLD, the concept of ‘producer’ should incorporate manufacturers, developers, programmers, service providers as well as backend operators; calls on the Commission to consider reversing the rules governing the burden of proof for harm caused by emerging digital technologies in clearly defined cases, and after a proper assessment; points out the importance of ensuring that the updated Union act remains limited to clearly identified problems for which feasible solutions already exist and at the same time allows future technological developments to be covered, including developments based on free and open source software; notes that the PLD should continue to be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; highlights that any update of the product liability framework should go hand in hand with the update of Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety(16) in order to ensure that AI-systems integrate safety and security by design principles;
9. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third party like a hacker or for persons whose property is damaged by such a third party, as the interference regularly constitutes a fault-based action; notes that only for specific cases, including those where the third party is untraceable or impecunious, does the addition of liability rules to complement existing national tort law seem necessary;
10. Considers it, therefore, appropriate for this report to focus on civil liability claims against the operator of an AI-system; affirms that the operator’s liability is justified by the fact that he or she is controlling a risk associated with the AI-system, comparable to an owner of a car; considers that due to the AI-system’s complexity and connectivity, the operator will be in many cases the first visible contact point for the affected person;
Liability of the operator
11. Opines that liability rules involving the operator should cover all operations of AI-systems, irrespective of where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly would not have contractual liability claims against the operator; notes that when harm or damage materialises, such persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the operator of the AI-system and thus, corresponding liability claims might fail;
12. Considers it appropriate to understand ‘operator’ to cover both the frontend and backend operator, as long as the latter is not covered by the PLD; notes that the frontend operator should be defined as the natural or legal person who exercises a degree of control over a risk connected with the operation and functioning of the AI-system and benefits from its operation; states that the backend operator should be defined as the natural or legal person who, on a continuous basis, defines the features of the technology, provides data and essential backend support service and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system; considers that exercising control means any action of the operator that influences the operation of the AI-system and thus the extent to which it exposes third parties to its potential risks; considers that such actions could impact the operation of an AI-system from start to finish, by determining the input, output or results, or could change specific functions or processes within the AI-system;
13. Notes that there could be situations in which there is more than one operator, for example a backend and frontend operator; considers that, in that event, all operators should be jointly and severally liable while having the right to recourse proportionately against each other; is of the opinion that the proportions of liability should be determined by the respective degrees of control the operators had over the risk connected with the operation and functioning of the AI-system; considers that product traceability should be improved in order to better identify those involved in the different stages;
Different liability rules for different risks
14. Recognises that the type of AI-system the operator is exercising control over is a determining factor regarding liability; notes that an AI-system that entails an inherent high risk and acts autonomously potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing civil liability regimes, it seems reasonable to set up a common strict liability regime for those high-risk autonomous AI-systems; underlines that such a risk-based approach, that might encompass several levels of risk, should be based on clear criteria and an appropriate definition of high risk and provide for legal certainty;
15. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and goes beyond what can reasonably be expected; considers that when determining whether an AI-system is high-risk, the sector in which significant risks can be expected to arise and the nature of the activities undertaken must also be taken into account; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk causes harm or damage and the manner in which the AI-system is being used;
16. Recommends that all high-risk AI-systems be exhaustively listed in an Annex to the proposed Regulation; recognises that, given the rapid technological developments and the required technical expertise, the Commission should review that Annex without undue delay, but at least every six months, and if necessary, amend it through a delegated act; believes that the Commission should closely cooperate with a newly formed standing committee, similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which includes national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders, with the addition of ethics experts and anthropologists, sociologists and mental health specialists; is also of the opinion that the European Parliament should appoint consultative experts to advise the newly established standing committee;
17. Notes that the development of technologies based on AI is hugely dynamic and continuously accelerating; stresses that, to ensure adequate protection for users, a fast-track approach is needed to analyse the potential risks of new devices and systems using AI-systems that emerge on the European market; recommends that all procedures in this regard should be simplified as much as possible; further suggests that the assessment by the Commission of whether an AI-system poses a high-risk should start at the same time as the product safety assessment, in order to prevent a situation in which a high-risk AI-system is already approved for the market but not yet classified as high-risk and thus operates without mandatory insurance cover;
18. Notes that the Commission should assess how the data collected, recorded or stored on high-risk AI-systems for the purposes of gathering evidence in case of harm or damage caused by that AI-system could be accessed and used by the investigating authority and how the traceability and auditability of such data could be improved, while taking into account fundamental and privacy rights;
19. States that in line with strict liability systems of the Member States, the proposed Regulation should cover violations of the important legally protected rights to life, health, physical integrity and property, and should set out the amounts and extent of compensation, as well as the limitation period; is of the opinion that the proposed Regulation should also incorporate significant immaterial harm that results in a verifiable economic loss above a threshold harmonised in Union liability law, that balances the access to justice of affected persons and the interests of other involved persons; urges the Commission to re-evaluate and to align the thresholds for damages in Union law; is of the opinion that the Commission should analyse in depth the legal traditions in all Member States and their existing national laws that grant compensation for immaterial harm, in order to evaluate if the inclusion of immaterial harm in AI-specific legislative acts is necessary and if it contradicts the existing Union legal framework or undermines the national law of the Member States;
20. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault on the part of the operator, who should be able to exculpate itself by proving it has abided by its duty of care;
21. Considers that an AI system that has not yet been assessed by the Commission and the newly-formed standing committee and, thus, is not yet classified as high-risk and not included in the list set out in the Annex to the proposed Regulation, should nevertheless, by way of exception to the system provided for in paragraph 20, be subject to strict liability if it caused repeated incidents resulting in serious harm or damage; notes that if that is the case, the Commission should also assess, without undue delay, the need to revise that Annex to add the AI-system in question to the list; is of the opinion that, if, following that assessment, the Commission decides to include that AI-system on the list, that inclusion should have retroactive effect from the time of the first proven incident caused by that AI-system, which resulted in serious harm or damage;
22. Requests the Commission to evaluate the need for legal provisions at Union level on contracts to prevent contractual non-liability clauses, including in Business-to-Business and Business-to-Administration relationships;
Insurance and AI-systems
23. Considers liability coverage to be one of the key factors that defines the success of new technologies, products and services; observes that proper liability coverage is also essential for assuring the public that it can trust the new technology despite the potential for suffering harm or for facing legal claims by affected persons; notes at the same time that this regulatory system focuses on the need to exploit and enhance the advantages of AI-systems, while putting in place robust safeguards;
24. Is of the opinion that, based on the significant potential to cause harm or damage and by taking Directive 2009/103/EC of the European Parliament and of the Council of 16 September 2009 relating to insurance against civil liability in respect of the use of motor vehicles, and the enforcement of the obligation to insure against such liability(17) into account, all operators of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance; considers that such a mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation laid down by the proposed Regulation; is mindful of the fact that such technology is currently still very rare, since it presupposes a high degree of autonomous decision making and that, as a result, the current discussions are mostly future-oriented; believes, nevertheless, that uncertainty regarding risks should not make insurance premiums prohibitively high and thereby an obstacle to research and innovation;
25. Believes that a compensation mechanism at Union level, funded with public money, is not the right way to fill potential insurance gaps; considers that a lack of data on the risks associated with AI-systems, combined with an uncertainty regarding developments in the future, make it difficult for the insurance sector to come up with adapted or new insurance products; considers that leaving the development of mandatory insurance entirely to the market is likely to result in a one-size-fits-all approach with disproportionately high premiums and the wrong incentives, stimulating operators to opt for the cheapest insurance rather than for the best coverage, and could become an obstacle to research and innovation; considers that the Commission should work closely with the insurance sector to see how data and innovative models can be used to create insurance policies that offer adequate coverage for an affordable price;
Final aspects
26. Requests the Commission to submit, on the basis of Article 225 of the Treaty on the Functioning of the European Union, a proposal for a Regulation on liability for the operation of Artificial Intelligence-systems, following the recommendations set out in the Annex hereto;
27. Considers that the requested proposal will not have financial implications;
o o o
28. Instructs its President to forward this resolution and the accompanying recommendations to the Commission and the Council.
ANNEX TO THE RESOLUTION:
DETAILED RECOMMENDATIONS FOR DRAWING UP A EUROPEAN PARLIAMENT AND COUNCIL REGULATION ON LIABILITY FOR THE OPERATION OF ARTIFICIAL INTELLIGENCE-SYSTEMS
A. PRINCIPLES AND AIMS OF THE PROPOSAL
This Report addresses an important aspect of digitisation, which itself is shaped by cross-border activities, global competition and core societal considerations. The following principles should serve as guidance:
1. A genuine Digital Single Market requires full harmonisation by means of a Regulation.
2. New legal challenges posed by the development of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty throughout the liability chain, including for the producer, the operator, the affected person and any other third party.
3. There should be no over-regulation and red tape must be prevented as this would hamper European innovation in AI, especially in the case of technology, products or services developed by SMEs or start-ups.
4. Civil liability rules for AI should seek to strike a balance between the protection of the public, on the one hand, and business incentives to invest in innovation, especially AI systems, on the other.
5. Instead of replacing the well-functioning existing liability regimes, a few necessary adjustments should be made by introducing new and future-oriented ideas.
6. The future proposal for a Regulation and the Product Liability Directive are two pillars of a common liability framework for AI-systems and require close coordination and alignment between all political actors, at Union and national levels.
7. Citizens should be entitled to the same level of protection and rights, irrespective of whether the harm is caused by an AI-system or not, or if it takes place physically or virtually, so that their confidence in the new technology is strengthened.
8. Both material and immaterial harm should be taken into account in the future proposal for a Regulation. Based on, among other documents, its Communication of 19 February 2020 on the safety and liability implications of AI and robotics, the European Commission is called upon to profoundly analyse the legal traditions in all Member States as well as the existing legislative provisions that grant compensation for immaterial harm in order to evaluate if the inclusion of immaterial harm in the future proposal for a Regulation is legally sound and necessary from the perspective of the affected person. Based on the currently available information, Parliament believes that significant immaterial harm should be included if the affected person suffers a noticeable, meaning a verifiable, economic loss.
B. TEXT OF THE PROPOSAL REQUESTED
Proposal for a
REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
on liability for the operation of Artificial Intelligence-systems
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Article 114 thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Economic and Social Committee(18),
Acting in accordance with the ordinary legislative procedure(19),
Whereas:
(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party held liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to instil confidence in the safety, reliability and consistency of products and services, including emerging digital technologies, such as artificial intelligence (“AI”), the Internet of Things (IoT) or robotics, in order to strike a balance between efficiently protecting potential victims of harm or damage and at the same time providing enough leeway to make the development of new technologies, products or services possible.
(2) Especially at the beginning of the life cycle of new products and services, after being pre-tested, there is a certain degree of risk for the user as well as for third persons that something will not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the risks accompanying new products and services have been properly mitigated by strong product safety legislation and liability rules.
(3) The rise of AI, however, presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element) and the multitude of actors who intervene in their life-cycle make it extremely expensive or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input caused the harmful operation. That difficulty is compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches, as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. In addition to these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising human dignity and European values and freedoms, by tracking individuals against their will, by introducing social credit systems, by taking biased decisions in matters of health insurance, credit provision, court orders, recruitment or employment or by constructing lethal autonomous weapon systems.
(4) It is important to point out that the advantages of deploying AI-systems will by far outweigh the disadvantages. They will help to fight climate change more effectively, to improve medical examinations as well as working conditions, to better integrate disabled and ageing persons into society and to provide tailor-made education courses for all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation procedures are the best way forward.
(5) An adequate liability regime is also necessary to counterweigh the breach of safety rules. However, the liability regime laid down in this Regulation needs to take into consideration all interests at stake. A careful examination of the consequences of any new regulatory framework on small and medium-sized enterprises (SMEs) and start-ups is a prerequisite for further legislative action. The crucial role that such enterprises play in the European economy justifies a strictly proportionate approach in order to enable them to develop and innovate. On the other hand, the victims of harm or damage caused by AI-systems need to have a right to redress and to full compensation for the harm or damage that they have suffered.
(6) Any required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the vast majority of AI-systems are used for handling trivial tasks with minimal or no risks for society. By using the term “automated decision-making”, the possible ambiguity of the term AI could be avoided. That term describes a situation in which a user initially delegates a decision, partly or completely, to an entity, by means of software or a service. That entity, in turn, uses automatically executed decision-making models to perform an action on behalf of a user, or to inform the user’s decision in performing an action.
(7) There are however also AI-systems that are developed and deployed in a critical manner and are based on technologies such as neural networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. An operator of such an AI-system might, for instance, argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. Moreover, the mere operation of an autonomous AI-system should not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which the allocation of liability could be unfair or inefficient, or in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the operator and ends up without compensation.
(8) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system, should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice, according to which the person that creates or maintains a risk for the public is liable if that risk causes harm or damage, and thus should ex-ante minimise or ex-post compensate that risk. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments to the existing legislation and the introduction of well-assessed and targeted new provisions would be sufficient to accommodate the AI-related challenges, with a view to preventing regulatory fragmentation and ensuring the harmonisation of civil liability legislation throughout the Union in connection with AI.
(9) Council Directive 85/374/EEC(20) (‘the Product Liability Directive’) has proven for over 30 years to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during the necessary review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damage caused by an interfering third person, as that interference regularly constitutes a fault-based action, where the third-party uses the AI system to cause harm. Consequently, this Regulation should focus on claims against the operator of an AI-system.
(10) The liability of the operator under this Regulation is based on the fact that he or she exercises a degree of control over a risk connected with the operation and functioning of an AI-system, which is comparable to that of an owner of a car. The more sophisticated and more autonomous a system is, the greater the impact of defining and influencing the algorithms, for example by continuous updates, becomes. As there is often more than one person who could, in a meaningful way, be considered as ‘operating’ the AI-system, under this Regulation ‘operator’ should be understood to cover both the frontend and the backend operator. Although in general, the frontend operator appears as the person who ‘primarily’ decides on the use of the AI-system, the backend operator could in fact have a higher degree of control over the operational risks. If the backend operator also qualifies as ‘producer’ as defined in Article 3 of the Product Liability Directive, that Directive should apply to him or her. If there is only one operator and that operator is also the producer of the AI-system, this Regulation should prevail over the Product Liability Directive.
(11) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as an operator. If not, the extent of the user’s grossly negligent or intentional contribution to the risk might lead to the user’s fault-based liability to the claimant. Applicable consumer rights of the user should remain unaffected.
(12) This Regulation should enable the affected person to bring forward liability claims throughout the liability chain and throughout the lifecycle of an AI-system. It should also cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should, however, address cases of third party liability, where an AI-system operates in a public space and exposes many persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the operator. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they only have fault-based liability claims against the operator of the AI-system, while facing severe difficulties to prove fault on the part of the operator.
(13) The type of AI-system the operator is exercising control over is a determining factor. An AI-system that entails a high risk potentially endangers the user or the public to a much higher degree and in a manner that is random and goes beyond what can reasonably be expected. This means that at the start of the autonomous operation of the AI-system, the majority of the potentially affected persons are unknown and not identifiable, for example persons on a public square or in a neighbouring house, compared to the operation of an AI-system that involves specific persons, who have regularly consented to its deployment before, for example surgery in a hospital or a sales demonstration in a small shop. Determining how significant the potential is of a high-risk AI-system to cause harm or damage is dependent on the interplay between the purpose of use for which the AI system is put on the market, the manner in which the AI-system is being used, the severity of the potential harm or damage, the degree of autonomy of decision-making that can result in harm and the likelihood that the risk materialises. The degree of severity should be determined based on relevant factors such as the extent of the potential harm resulting from the operation on affected persons, including in particular effects on fundamental rights, the number of affected persons, the total value for the potential damage, as well as the harm to society as a whole. The likelihood for the harm or damage to occur should be determined based on relevant factors such as the role of the algorithmic calculations in the decision-making process, the complexity of the decision and the reversibility of the effects. Ultimately, the manner of usage should depend on relevant factors such as the context and sector in which the AI-system operates, if it could have legal or factual effects on important legally protected rights of the affected person, and whether the effects can reasonably be avoided.
(14) All AI-systems with a high risk should be exhaustively listed in an Annex to this Regulation. Given the rapid technical and market developments worldwide, as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex without undue delay, but at least every six months, and, if necessary, amend it by means of delegated acts. The assessment by the Commission of whether an AI-system poses a high-risk should start at the same time as the product safety assessment, in order to prevent a situation in which a high-risk AI-system is already approved for the market but not yet classified as high-risk and thus operates without mandatory insurance cover. To give businesses and research organisations enough planning and investment security, changes to the critical sectors should only be made every twelve months. Operators should be called upon to notify the Commission if they are working on new technology, products or services that fall under one of the existing critical sectors provided for in the Annex and which later could qualify as a high-risk AI-system.
(15) It is of particular importance that the Commission carry out appropriate consultations with the relevant stakeholders during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making(21). A standing committee called 'Technical Committee – high-risk AI-systems' (TCRAI) should support the Commission in its regular review under this Regulation. That standing committee should comprise representatives of the Member States, as well as a balanced selection of stakeholders, including consumer organisations, associations representing affected persons, business representatives from different sectors and sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States' experts, and their experts systematically have access to meetings of Commission expert groups as well as the standing TCRAI-committee, when dealing with the preparation of delegated acts.
(16) This Regulation should cover harm or damage to life, health, physical integrity, property and significant immaterial harm that results in a verifiable economic loss above a threshold, harmonised in Union liability law, that balances the access to justice of affected persons with the interests of other involved persons. The Commission should re-evaluate and align the thresholds for damages in Union law. Significant immaterial harm should be understood as meaning harm as a result of which the affected person suffers considerable detriment, an objective and demonstrable impairment of his or her personal interests and an economic loss calculated having regard, for example, to annual average figures of past revenues and other relevant circumstances. This Regulation should also determine the amount and extent of compensation, as well as the limitation period for bringing forward liability claims. This Regulation should set out a significantly lower ceiling for compensation than that provided for in the Product Liability Directive, as this Regulation only refers to the harm or damage of a single person resulting from a single operation of an AI-system, while the former refers to a number of products or even a product line with the same defect.
(17) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability, unless stricter national laws and consumer protection legislation are in force. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation, as well as the limitation period, should continue to apply. A person who suffers harm or damage caused by an AI-system not listed as a high-risk AI-system should benefit from the presumption of fault of the operator.
(18) The diligence which can be expected from an operator should be commensurate with (i) the nature of the AI-system; (ii) the legally protected right potentially affected; (iii) the potential harm or damage the AI-system could cause; and (iv) the likelihood of such damage. Thereby, it should be taken into account that the operator might have limited knowledge of the algorithms and data used in the AI-system. It should be presumed that the operator has observed the due care that can reasonably be expected from him or her in selecting a suitable AI-system, if the operator has selected an AI-system which has been certified under a scheme similar to the voluntary certification scheme envisaged by the Commission(22). It should be presumed that the operator has observed the due care that can reasonably be expected from him or her during the operation of the AI-system, if the operator can prove that he or she actually and regularly monitored the AI-system during its operation and that he or she notified the manufacturer about potential irregularities during the operation. It should be presumed that the operator has observed the due care that can reasonably be expected from him or her as regards maintaining the operational reliability, if the operator installed all available updates provided by the producer of the AI-system. Since the level of sophistication of operators can vary depending on whether they are mere consumers or professionals, the duties of care should be adapted accordingly.
(19) In order to enable the operator to prove that he or she was not at fault, or to enable the affected person to prove the existence of fault, producers should have the duty to cooperate with both parties concerned, including by providing well-documented information. Both producers established within and outside the Union should furthermore have the obligation to designate an AI-liability representative within the Union as a contact point for replying to all requests from operators, in a manner similar to the data protection officers as set out in Article 37 of Regulation (EU) 2016/679 of the European Parliament and of the Council(23), to the manufacturer's representative as set out in Articles 3(41) and 13(4) of Regulation (EU) 2018/858 of the European Parliament and of the Council(24) or to the authorised representative as set out in Articles 4(2) and 5 of Regulation (EU) 2019/1020 of the European Parliament and of the Council(25).
(20) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life, including waste and recycling management. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for SMEs, as well as for start-ups, in relation to insuring and financing their research and development projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals, but also to determine whether businesses, especially SMEs and start-ups, are able to raise capital, innovate, research, and ultimately offer new products and services, as well as whether consumers trust such products and services and are willing to use them despite the potential risks and legal claims being brought in respect of such products or services.
(21) Insurance can help guarantee that victims receive effective compensation and pool the risks of all insured persons. One of the factors on which insurance companies base their offer of insurance products and services is risk assessment, based on access to sufficient historical claims data. A lack of access to, or an insufficient quantity of, high quality data could be a reason why creating insurance products for new and emerging technologies is difficult at the beginning. However, greater access to, and optimising the use of, data generated by new technologies, coupled with an obligation to provide well-documented information, would enhance insurers’ ability to model emerging risk and to foster the development of more innovative cover.
(22) Given that historical claims data are missing, how and under which conditions liability is insurable should be investigated, with a view to linking insurance to the product and not to the responsible person. There are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. However, a “one-size-fits-all” solution is difficult to envisage and the insurance market will need time to adapt. The Commission should work closely with the insurance market to develop innovative insurance products that could close the insurance gap. In exceptional cases, such as an event incurring collective damages, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund, for a limited period of time, that addresses the specific needs of those cases. Special compensation funds could also be set up to cover those exceptional cases in which an AI-system that is not yet classified as a high-risk AI-system, and is thus not yet insured, causes harm or damage. In order to ensure legal certainty and to fulfil the obligation to inform all potentially affected persons, the existence of the special compensation fund as well as the conditions to benefit from it should be made public in a clear and comprehensive manner.
(23) It is of utmost importance that any future changes to this Regulation go hand in hand with the necessary review of the Product Liability Directive, in order to revise it in a comprehensive and consistent manner and to guarantee the rights and obligations of all parties concerned throughout the liability chain. The introduction of a new liability regime for the operator of AI-systems requires that the provisions of this Regulation and the review of the Product Liability Directive be closely coordinated in terms of substance as well as approach so that they together constitute a consistent liability framework for AI-systems, balancing the interests of producer, operator, consumer and the affected person, as regards the liability risk and the relevant compensation arrangements. Adapting and streamlining the definitions of AI-system, frontend and backend operator, producer, defect, product and service throughout all pieces of legislation is therefore necessary and should be envisaged in parallel.
(24) The objectives of this Regulation, namely to create a future-oriented and unified approach at Union level that sets common European standards for European citizens and businesses, to ensure the consistency of rights and legal certainty throughout the Union and to avoid fragmentation of the Digital Single Market, which would hamper the goals of maintaining digital sovereignty, of fostering digital innovation in Europe and of ensuring a high level of protection of citizen and consumer rights, require the liability regimes for AI-systems to be fully harmonised. Those objectives cannot be sufficiently achieved by the Member States, due to rapid technological change, the cross-border development and usage of AI-systems and, eventually, the conflicting legislative approaches across the Union, but can rather, by reason of the scale or effects of the action, be better achieved at Union level. The Union may therefore adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve those objectives,
HAVE ADOPTED THIS REGULATION:
Chapter I
General provisions
Article 1
Subject matter
This Regulation sets out rules for the civil liability claims of natural and legal persons against operators of AI-systems.
Article 2
Scope
1. This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health, physical integrity of a natural person, to the property of a natural or legal person or has caused significant immaterial harm resulting in a verifiable economic loss.
2. Any agreement between an operator of an AI-system and a natural or legal person who suffers harm or damage because of the AI-system, which circumvents or limits the rights and obligations set out in this Regulation, concluded before or after the harm or damage occurred, shall be deemed null and void as regards the rights and obligations laid down in this Regulation.
3. This Regulation is without prejudice to any additional liability claims resulting from contractual relationships, as well as from regulations on product liability, consumer protection, anti-discrimination, labour and environmental protection between the operator and the natural or legal person who suffered harm or damage because of the AI-system and that may be brought against the operator under Union or national law.
Article 3
Definitions
For the purposes of this Regulation, the following definitions apply:
(a) ‘AI-system’ means a system that is either software-based or embedded in hardware devices, and that displays behaviour simulating intelligence by, inter alia, collecting and processing data, analysing and interpreting its environment, and by taking action, with some degree of autonomy, to achieve specific goals;
(b) 'autonomous’ means an AI-system that operates by interpreting certain input and by using a set of pre-determined instructions, without being limited to such instructions, despite the system’s behaviour being constrained by, and targeted at, fulfilling the goal it was given and other relevant design choices made by its developer;
(c) ‘high risk’ means a significant potential in an autonomously operating AI-system to cause harm or damage to one or more persons in a manner that is random and goes beyond what can reasonably be expected; the significance of the potential depends on the interplay between the severity of possible harm or damage, the degree of autonomy of decision-making, the likelihood that the risk materializes and the manner and the context in which the AI-system is being used;
(d) ‘operator’ means both the frontend and the backend operator as long as the latter’s liability is not already covered by Directive 85/374/EEC;
(e) ‘frontend operator’ means any natural or legal person who exercises a degree of control over a risk connected with the operation and functioning of the AI-system and benefits from its operation;
(f) ‘backend operator’ means any natural or legal person who, on a continuous basis, defines the features of the technology and provides data and an essential backend support service and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system;
(g) 'control' means any action of an operator that influences the operation of an AI-system and thus the extent to which the operator exposes third parties to the potential risks associated with the operation and functioning of the AI-system; such actions can impact the operation at any stage by determining the input, output or results, or can change specific functions or processes within the AI-system; the degree to which those aspects of the operation of the AI-system are determined by the action depends on the level of influence the operator has over the risk connected with the operation and functioning of the AI-system;
(h) ‘affected person’ means any person who suffers harm or damage caused by a physical or virtual activity, device or process driven by an AI-system, and who is not its operator;
(i) ‘harm or damage’ means an adverse impact affecting the life, health, physical integrity of a natural person, the property of a natural or legal person or causing significant immaterial harm that results in a verifiable economic loss;
(j) ‘producer’ means the producer as defined in Article 3 of Directive 85/374/EEC.
Chapter II
High-risk AI-systems
Article 4
Strict liability for high-risk AI-systems
1. The operator of a high-risk AI-system shall be strictly liable for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system.
2. All high-risk AI-systems and all critical sectors where they are used shall be listed in the Annex to this Regulation. The Commission is empowered to adopt delegated acts in accordance with Article 13, to amend that exhaustive list, by:
(a) including new types of high-risk AI-systems and critical sectors in which they are deployed;
(b) deleting types of AI-systems that can no longer be considered to pose a high risk; and/or
(c) changing the critical sectors for existing high-risk AI-systems.
Any delegated act amending the Annex shall enter into force six months after its adoption. When determining new high-risk AI-systems and/or critical sectors to be inserted by means of delegated acts in the Annex, the Commission shall take full account of the criteria set out in this Regulation, in particular those referred to in Article 3(c).
3. Operators of high-risk AI-systems shall not be able to exonerate themselves from liability by arguing that they acted with due diligence or that the harm or damage was caused by an autonomous activity, device or process driven by their AI-system. Operators shall not be held liable if the harm or damage was caused by force majeure.
4. The frontend operator of a high-risk AI-system shall ensure that operations of that AI-system are covered by liability insurance that is adequate in relation to the amounts and extent of compensation provided for in Articles 5 and 6 of this Regulation. The backend operator shall ensure that its services are covered by business liability or product liability insurance that is adequate in relation to the amounts and extent of compensation provided for in Articles 5 and 6 of this Regulation. If compulsory insurance regimes of the frontend or backend operator already in force pursuant to other Union or national law or existing voluntary corporate insurance funds are considered to cover the operation of the AI-system or the provided service, the obligation to take out insurance for the AI-system or the provided service pursuant to this Regulation shall be deemed fulfilled, as long as the relevant existing compulsory insurance or the voluntary corporate insurance funds cover the amounts and the extent of compensation provided for in Articles 5 and 6 of this Regulation.
5. This Regulation shall prevail over national liability regimes in the event of conflicting strict liability classification of AI-systems.
Article 5
Amount of compensation
1. An operator of a high-risk AI-system that has been held liable for harm or damage under this Regulation shall compensate:
(a) up to a maximum amount of EUR two million in the event of the death of, or in the event of harm caused to the health or physical integrity of, an affected person, resulting from an operation of a high-risk AI-system;
(b) up to a maximum amount of EUR one million in the event of significant immaterial harm that results in a verifiable economic loss or of damage caused to property, including when several items of property of an affected person were damaged as a result of a single operation of a single high-risk AI-system; where the affected person also holds a contractual liability claim against the operator, no compensation shall be paid under this Regulation, if the total amount of the damage to property or the significant immaterial harm is of a value that falls below [EUR 500](26).
2. Where the combined compensation to be paid to several persons who suffer harm or damage caused by the same operation of the same high-risk AI-system exceeds the maximum total amounts provided for in paragraph 1, the amounts to be paid to each person shall be reduced pro-rata so that the combined compensation does not exceed the maximum amounts set out in paragraph 1.
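The pro-rata reduction in paragraph 2 amounts to scaling each claim by the ratio of the ceiling to the claim total. The following sketch is purely illustrative and not part of the legislative text; the function name and the euro figures are invented for the example:

```python
def apportion(claims, ceiling):
    """Reduce individual compensation amounts pro-rata so that their
    combined total does not exceed the ceiling (cf. Article 5(2));
    amounts are returned unchanged when the ceiling is not exceeded."""
    total = sum(claims)
    if total <= ceiling:
        return list(claims)
    factor = ceiling / total
    return [claim * factor for claim in claims]

# Three affected persons harmed by a single operation, against the
# EUR 2 million ceiling of Article 5(1)(a): each claim is scaled by 2/3.
print(apportion([1_500_000, 900_000, 600_000], 2_000_000))
```

Each claimant bears the same proportional reduction, so the ordering of claims by size is preserved while the combined payout equals the ceiling exactly.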
Article 6
Extent of compensation
1. Within the amount set out in Article 5(1)(a), compensation to be paid by the operator held liable in the event of physical harm followed by the death of the affected person, shall be calculated based on the costs of the medical treatment that the affected person underwent prior to his or her death, and of the pecuniary prejudice sustained prior to death caused by the cessation or reduction of the earning capacity or the increase in his or her needs for the duration of the harm prior to death. The operator held liable shall furthermore reimburse the funeral costs for the deceased affected person to the party who is responsible for defraying those expenses.
If, at the time of the incident that caused the harm leading to his or her death, the affected person was in a relationship with a third party and had a legal obligation to support that third party, the operator held liable shall indemnify the third party by paying maintenance to the extent to which the affected person would have been obliged to pay, for the period corresponding to an average life expectancy for a person of his or her age and general description. The operator shall also indemnify the third party if, at the time of the incident that caused the death, the third party had been conceived but had not yet been born.
2. Within the amount set out in Article 5(1)(b), compensation to be paid by the operator held liable in the event of harm to the health or the physical integrity of the affected person shall include the reimbursement of the costs of the related medical treatment as well as the payment for any pecuniary prejudice sustained by the affected person, as a result of the temporary suspension, reduction or permanent cessation of his or her earning capacity or the consequent, medically certified increase in his or her needs.
Article 7
Limitation period
1. Civil liability claims, brought in accordance with Article 4(1), concerning harm to life, health or physical integrity, shall be subject to a special limitation period of 30 years from the date on which the harm occurred.
2. Civil liability claims, brought in accordance with Article 4(1), concerning damage to property or significant immaterial harm that results in a verifiable economic loss shall be subject to a special limitation period of:
(a) 10 years from the date on which the property damage or the verifiable economic loss resulting from the significant immaterial harm, respectively, occurred, or
(b) 30 years from the date on which the operation of the high-risk AI-system that subsequently caused the property damage or the immaterial harm took place.
Of the periods referred to in the first subparagraph, the period that ends first shall be applicable.
3. This Article shall be without prejudice to national law regulating the suspension or interruption of limitation periods.
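Where both limitation periods of Article 7(2) run, the one ending first applies. A minimal sketch of that rule follows; it is illustrative only, the dates are invented, and the whole-year date arithmetic deliberately ignores the national rules on suspension and interruption preserved by paragraph 3:

```python
from datetime import date

def limitation_expiry(operation_date, damage_date):
    """Article 7(2): the claim is time-barred at the earlier of
    10 years from the damage and 30 years from the operation of the
    high-risk AI-system. Years are added by replacing the year
    component (a simplification that assumes non-leap-day dates)."""
    ten_years = damage_date.replace(year=damage_date.year + 10)
    thirty_years = operation_date.replace(year=operation_date.year + 30)
    return min(ten_years, thirty_years)

# Damage surfaces 25 years after the operation: the 30-year period
# running from the operation ends before the 10-year period running
# from the damage, so the earlier date governs.
print(limitation_expiry(date(2025, 1, 1), date(2050, 6, 1)))  # → 2055-01-01
```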
Chapter III
Other AI-systems
Article 8
Fault-based liability for other AI-systems
1. The operator of an AI-system that does not constitute a high-risk AI-system as laid down in Articles 3(c) and 4(2) and, as a result, is not listed in the Annex to this Regulation, shall be subject to fault-based liability for any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system.
2. The operator shall not be liable if he or she can prove that the harm or damage was caused without his or her fault, relying on either of the following grounds:
(a) the AI-system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken, or
(b) due diligence was observed by performing all the following actions: selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities and maintaining the operational reliability by regularly installing all available updates.
The operator shall not be able to escape liability by arguing that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The operator shall not be liable if the harm or damage was caused by force majeure.
3. Where the harm or damage was caused by a third party that interfered with the AI-system by modifying its functioning or its effects, the operator shall nonetheless be liable for the payment of compensation if such third party is untraceable or impecunious.
4. At the request of the operator or the affected person, the producer of an AI-system shall have the duty of cooperating with, and providing information to, them to the extent warranted by the significance of the claim, in order to allow for the identification of the liabilities.
Article 9
National provisions on compensation and limitation period
Civil liability claims brought in accordance with Article 8(1) shall be subject, in relation to limitation periods as well as the amounts and the extent of compensation, to the laws of the Member State in which the harm or damage occurred.
Chapter IV
Apportionment of liability
Article 10
Contributory negligence
1. If the harm or damage is caused both by a physical or virtual activity, device or process driven by an AI-system and by the actions of an affected person or of any person for whom the affected person is responsible, the extent of liability of the operator under this Regulation shall be reduced accordingly. The operator shall not be liable if the affected person or the person for whom he or she is responsible is solely to blame for the harm or damage caused.
2. An operator held liable may use the data generated by the AI-system to prove contributory negligence on the part of the affected person, in accordance with Regulation (EU) 2016/679 and other relevant data protection laws. The affected person may also use such data as a means of proof or clarification in the liability claim.
Article 11
Joint and several liability
If there is more than one operator of an AI-system, they shall be jointly and severally liable. If a frontend operator is also the producer of the AI-system, this Regulation shall prevail over the Product Liability Directive. If the backend operator also qualifies as a producer as defined in Article 3 of the Product Liability Directive, that Directive shall apply to him or her. If there is only one operator and that operator is also the producer of the AI-system, this Regulation shall prevail over the Product Liability Directive.
Article 12
Recourse for compensation
1. The operator shall not be entitled to pursue a recourse action unless the affected person has been paid in full any compensation which that person is entitled to receive under this Regulation.
2. In the event that the operator is held jointly and severally liable with other operators in respect of an affected person and has fully compensated that affected person, in accordance with Article 4(1) or 8(1), that operator may recover part of the compensation from the other operators, in proportion to his or her liability.
The proportions of liability shall be based on the respective degrees of control the operators had over the risk connected with the operation and functioning of the AI-system. If the contribution attributable to a jointly and severally liable operator cannot be obtained from him or her, the shortfall shall be borne by the other operators. To the extent that a jointly and severally liable operator compensates the affected person and demands adjustment of advance payments from the other liable operators, the claim of the affected person against the other operators shall be subrogated to the operator. The subrogation of claims shall not be asserted to the disadvantage of the original claim.
3. In the event that the operator of a defective AI-system fully indemnifies the affected person for harm or damages in accordance with Article 4(1) or 8(1) of this Regulation, he or she may take action for redress against the producer of the defective AI-system in accordance with Directive 85/374/EEC and with national provisions concerning liability for defective products.
4. In the event that the insurer of the operator indemnifies the affected person for harm or damage in accordance with Article 4(1) or 8(1), any civil liability claim of the affected person against another person for the same damage shall be subrogated to the insurer of the operator to the extent of the amount the insurer of the operator has compensated the affected person.
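The recourse rule in Article 12(2) apportions contribution among jointly and severally liable operators in proportion to their degrees of control, with the share of an operator from whom contribution cannot be obtained borne by the others. That arithmetic can be sketched as follows; the sketch is illustrative only, and the operator names and control weights are invented for the example:

```python
def recourse_shares(control_weights, compensation, insolvent=()):
    """Apportion the compensation among jointly and severally liable
    operators in proportion to their degrees of control over the risk
    (cf. Article 12(2)); the share attributable to any operator from
    whom contribution cannot be obtained is redistributed pro-rata
    over the remaining operators."""
    solvent = {op: w for op, w in control_weights.items()
               if op not in insolvent}
    total = sum(solvent.values())
    return {op: compensation * w / total for op, w in solvent.items()}

# Three operators with control weights 2:1:1; operator C cannot
# contribute, so its share falls pro-rata on A and B.
print(recourse_shares({"A": 2, "B": 1, "C": 1}, 300_000,
                      insolvent={"C"}))  # → {'A': 200000.0, 'B': 100000.0}
```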
Chapter V
Final provisions
Article 13
Exercise of the delegation
1. The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article.
2. The power to adopt delegated acts referred to in Article 4(2) shall be conferred on the Commission for a period of five years from [date of application of this Regulation].
3. The delegation of power referred to in Article 4(2) may be revoked at any time by the European Parliament or by the Council. A decision to revoke shall put an end to the delegation of the power specified in that decision. It shall take effect the day following the publication of the decision in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
4. Before adopting a delegated act, the Commission shall consult the standing Technical Committee for high-risk AI-systems (TCRAI-committee) in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making.
5. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.
6. A delegated act adopted pursuant to Article 4(2) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of two months of notification or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by two months at the initiative of the European Parliament or of the Council.
Article 14
Review
By 1 January 202X [3 years after the date of application of this Regulation], and every three years thereafter, the Commission shall present to the European Parliament, the Council and the European Economic and Social Committee a detailed report reviewing this Regulation in the light of the further development of artificial intelligence.
When preparing the report referred to in the first subparagraph, the Commission shall request relevant information from Member States relating to case law, court settlements as well as accident statistics, such as the number of accidents, damage suffered, AI applications involved, compensation paid by insurance companies, as well as an assessment of the number of claims brought by affected persons, either individually or collectively, and of the time frames in which those claims are dealt with in court.
The Commission’s report shall be accompanied, where appropriate, by legislative proposals, intended to address any gaps identified in the report.
Article 15
Entry into force
This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.
It shall apply from 1 January 202X.
This Regulation shall be binding in its entirety and directly applicable in all Member States.
Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (OJ L 210, 7.8.1985, p. 29).
Please refer to page 24 of Commission White Paper of 19 February 2020 on Artificial Intelligence - A European approach to excellence and trust (COM(2020)0065).
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).
Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).
To be revised by the Commission as set out in paragraph 19 of the resolution.
Intellectual property rights for the development of artificial intelligence technologies
European Parliament resolution of 20 October 2020 on intellectual property rights for the development of artificial intelligence technologies (2020/2015(INI))
– having regard to the Treaty on the Functioning of the European Union (TFEU), in particular Articles 4, 16, 26, 114 and 118 thereof,
– having regard to the Berne Convention for the Protection of Literary and Artistic Works,
– having regard to the Interinstitutional Agreement of 13 April 2016 on Better Law-Making(1) and the Commission’s Better Regulations Guidelines (COM(2015)0215),
– having regard to the World Intellectual Property Organisation (WIPO) Copyright Treaty, the WIPO Performances and Phonograms Treaty and the WIPO revised Issues Paper of 29 May 2020 on Intellectual Property Policy and Artificial Intelligence,
– having regard to Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC(2),
– having regard to Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases(3),
– having regard to Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 on the legal protection of computer programs(4),
– having regard to Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure(5),
– having regard to Directive (EU) 2019/1024 of the European Parliament and of the Council of 20 June 2019 on open data and the re-use of public sector information(6),
– having regard to Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (‘General Data Protection Regulation’)(7),
– having regard to Regulation (EU) 2018/1807 of the European Parliament and of the Council of 14 November 2018 on a framework for the free flow of non-personal data in the European Union(8),
– having regard to Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services(9),
– having regard to the Commission White Paper of 19 February 2020 entitled ‘Artificial Intelligence - A European approach to excellence and trust’ (COM(2020)0065),
– having regard to the work of the High-Level Expert Group on Artificial Intelligence set up by the Commission,
– having regard to the Commission communications entitled ‘A European Data Strategy’ (COM(2020)0066) and ‘A New Industrial Strategy for Europe’ (COM(2020)0102),
– having regard to the Guidelines for Examination in the European Patent Office of November 2019,
– having regard to the digital economy working paper 2016/05 of the Commission’s Joint Research Centre and its Institute for Prospective Technological Studies entitled ‘An Economic Policy Perspective on Online Platforms’,
– having regard to the political guidelines for the next European Commission 2019-2024 entitled ‘A Union that strives for more: my agenda for Europe’,
– having regard to its resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics(10),
– having regard to Rule 54 of its Rules of Procedure,
– having regard to the opinions of the Committee on the Internal Market and Consumer Protection, the Committee on Transport and Tourism and the Committee on Culture and Education,
– having regard to the report of the Committee on Legal Affairs (A9-0176/2020),
A. whereas the Union’s legal framework for intellectual property aims to promote innovation, creativity and access to knowledge and information;
B. whereas Article 118 TFEU stipulates that the Union legislator must establish measures for the creation of European intellectual property rights (IPRs) to provide uniform protection of those rights throughout the Union; whereas the single market is conducive to the stronger economic growth needed to ensure the prosperity of Union citizens;
C. whereas recent developments in artificial intelligence (AI) and similar emerging technologies represent a significant technological advance that is generating opportunities and challenges for Union citizens, businesses, public administrations, creators and the defence sector;
D. whereas AI technologies may render the traceability of IPRs and their application to AI-generated output difficult, thus preventing human creators whose original work is used to power such technologies from being fairly remunerated;
E. whereas the aim of making the Union the world leader in AI technologies must encompass efforts to regain and safeguard the Union’s digital and industrial sovereignty, ensure its competitiveness and promote and protect innovation, and must require a structural reform of the Union’s industrial policy to allow it to be at the forefront of AI technologies while respecting cultural diversity; whereas the Union's global leadership in AI calls for an effective intellectual property system which is fit for the digital age, enabling innovators to bring new products to the market; whereas strong safeguards are crucial to protect the Union’s patent system against abuse, which is detrimental to innovative AI developers; whereas a human-centred approach to AI that is compliant with ethical principles and human rights is needed if the technology is to remain a tool that serves people and the common good;
F. whereas the Union is the appropriate level at which to regulate AI technologies in order to avoid fragmentation of the single market and differing national provisions and guidelines; whereas a fully harmonised Union regulatory framework in the field of AI will have the potential to become a legislative benchmark at international level; whereas new common rules for AI systems should take the form of a regulation in order to establish equal standards across the Union and whereas legislation must be future-proofed to ensure it can keep pace with the fast development of this technology, and must be followed up on through thorough impact assessments; whereas legal certainty fosters technological development, and whereas public confidence in new technologies is essential for the development of this sector, as it strengthens the Union’s competitive advantage; whereas the regulatory framework governing AI should therefore inspire confidence in the safety and reliability of AI and strike a balance between public protection and business incentives for investment in innovation;
G. whereas AI and related technologies are based on computational models and algorithms, which are regarded as mathematical methods within the meaning of the European Patent Convention (EPC) and are therefore not patentable as such; whereas mathematical methods and computer programs may be protected by patents under Article 52(3) of the EPC when they are used as part of an AI system that contributes to producing a further technical effect; whereas the impact of such potential patent protection should be thoroughly assessed;
H. whereas AI and related technologies are based on the creation and execution of computer programs which, as such, are subject to a specific copyright protection regime, whereby only the expression of a computer program may be protected, and not the ideas, methods and principles which underlie any element of it;
I. whereas an increasing number of AI-related patents are being granted;
J. whereas the development of AI and related technologies raises questions about the protection of innovation itself and the application of IPRs to materials, content or data generated by AI and related technologies, which can be of an industrial or artistic nature and which create various commercial opportunities; whereas in this regard it is important to distinguish between AI-assisted human creations and creations autonomously generated by AI;
K. whereas AI and related technologies are heavily dependent on pre-existing content and large volumes of data; whereas increased transparent and open access to certain non-personal data and databases in the Union, especially for SMEs and start-ups, as well as interoperability of data, which limits lock-in effects, will play a crucial role in advancing the development of European AI and supporting the competitiveness of European companies at the global level; whereas the collection of personal data must respect fundamental rights and data protection rules and requires tailored governance, namely in terms of data management and the transparency of data used in developing and deploying AI technologies, throughout the entire lifecycle of an AI-enabled system;
1. Takes note of the Commission White Paper on ‘Artificial Intelligence - A European approach to excellence and trust’ and the European Data Strategy; stresses that the approaches outlined therein are likely to contribute to unlocking the potential of human-centred AI in the EU; notes, however, that the issue of the protection of IPRs in the context of the development of AI and related technologies has not been addressed by the Commission, despite the key importance of these rights; highlights the necessity of creating a single European data space and believes that the use thereof will play an important role in innovation and creativity in the Union economy, which should be incentivised; stresses that the Union should play an essential role in laying down basic principles on the development, deployment and use of AI, without hindering its advancement or impeding competition;
2. Highlights the fact that the development of AI and related technologies in the transport and tourism sectors will bring innovation, research, the mobilisation of investment and considerable economic, societal, environmental, public and safety benefits, while making these sectors more attractive to new generations and creating new employment opportunities and more sustainable business models, but stresses that it should not cause harm or damage to people or society;
3. Stresses the importance of creating an operational and fully harmonised regulatory framework in the field of AI technologies; suggests that such a framework should take the form of a regulation rather than a directive in order to avoid fragmentation of the European digital single market and promote innovation;
4. Calls on the Commission to take into account the seven key requirements identified in the Guidelines of the High-Level Expert Group, as welcomed by it in its communication of 8 April 2019(11), and properly implement them in all legislation dealing with AI;
5. Stresses that the development, deployment and use of AI technologies and the growth of the global data economy make it necessary to address significant technical, social, economic, ethical and legal issues in a variety of policy areas, including IPRs and their impact on these policy areas; highlights that in order to unlock the potential of AI technologies, it is necessary to remove unnecessary legal barriers, so as not to hamper the growth of or innovation in the Union’s developing data economy; calls for an impact assessment to be conducted with regard to the protection of IPRs in the context of the development of AI technologies;
6. Stresses the key importance of balanced IPR protection in relation to AI technologies, and of the multidimensional nature of such protection, and, at the same time, stresses the importance of ensuring a high level of protection of IPRs, of creating legal certainty and of building the trust needed to encourage investment in these technologies and ensure their long-term viability and use by consumers; considers that the Union has the potential to become the frontrunner in the creation of AI technologies by adopting an operational regulatory framework that is regularly assessed in the light of technological developments and by implementing proactive public policies, particularly as regards training programmes and financial support for research and public-private sector cooperation; reiterates the need to ensure sufficient leeway for the development of new technologies, products and services; emphasises that creating an environment conducive to creativity and innovation by encouraging the use of AI technologies by creators must not come at the expense of the interests of human creators, nor the Union’s ethical principles;
7. Considers also that the Union must address the various aspects of AI by means of definitions that are technologically neutral and sufficiently flexible to encompass future technological developments as well as subsequent uses; considers it necessary to continue to reflect on interactions between AI and IPRs, from the perspective of both intellectual property offices and users; believes that the challenge of assessing AI applications creates a need for some transparency requirements and the development of new methods as, for instance, adaptive learning systems may recalibrate following each input, making certain ex ante disclosures ineffective;
8. Stresses the importance of streaming services being transparent and responsible in their use of algorithms, so that access to cultural and creative content in various forms and different languages as well as impartial access to European works can be better guaranteed;
9. Recommends that priority be given to an assessment, by sector and type of IPR, of the implications of AI technologies; considers that such an approach should take into account, for example, the degree of human intervention, the autonomy of AI, the importance of the role and the origin of the data and copyright-protected material used and the possible involvement of other relevant factors; recalls that any approach must strike the right balance between the need to protect investments of both resources and effort and the need to incentivise creation and sharing; takes the view that more thorough research is necessary for the purposes of evaluating human input regarding AI algorithmic data; believes that disruptive technologies such as AI offer both small and large companies the opportunity to develop market-leading products; considers that all companies should benefit from equally efficient and effective IPR protection; therefore calls on the Commission and the Member States to offer support to start-ups and SMEs via the Single Market Programme and Digital Innovation Hubs in protecting their products;
10. Suggests that this assessment focus on the impact and implications of AI and related technologies under the current system of patent law, trademark and design protection, copyright and related rights, including the applicability of the legal protection of databases and computer programs, and the protection of undisclosed know-how and business information (‘trade secrets’) against their unlawful acquisition, use and disclosure; acknowledges the potential of AI technologies to improve the enforcement of IPRs, notwithstanding the need for human verification and review, especially where legal consequences are concerned; emphasises, further, the need to assess whether contract law ought to be updated in order to best protect consumers and whether competition rules need to be adapted in order to address market failures and abuses in the digital economy, the need to create a more comprehensive legal framework for the economic sectors in which AI plays a part, thus enabling European companies and relevant stakeholders to scale up, and the need to create legal certainty; stresses that the protection of intellectual property must always be reconciled with other fundamental rights and freedoms;
11. Points out that mathematical methods as such are excluded from patentability unless they are used for a technical purpose in the context of technical inventions, which are themselves patentable only if the applicable criteria relating to inventions are met; points out, further, that if an invention relates either to a method involving technical means or to a technical device, its purpose, considered as a whole, is in fact technical in nature and is therefore not excluded from patentability; underlines, in this regard, the role of the patent protection framework in incentivising AI inventions and promoting their dissemination, as well as the need to create opportunities for European companies and start-ups to foster the development and uptake of AI in Europe; points out that standard essential patents play a key role in the development and dissemination of new AI and related technologies and in ensuring interoperability; calls on the Commission to support the establishment of industry standards and encourage formal standardisation;
12. Notes that patent protection can be granted provided that the invention is new and not self-evident and involves an inventive step; notes, further, that patent law requires a comprehensive description of the underlying technology, which may pose challenges for certain AI technologies in view of the complexity of the reasoning; stresses also the legal challenges of reverse engineering, which is an exception to the copyright protection of computer programs and the protection of trade secrets, which are in turn of crucial importance for innovation and research and which should be duly taken into account in the context of the development of AI technologies; calls on the Commission to assess possibilities for products to be adequately tested, for example in a modular way, without creating risks for IPR holders or trade secrets due to extensive disclosure of easily replicated products; stresses that AI technologies should be openly available for educational and research purposes, such as more effective learning methods;
13. Notes that the autonomisation of the creative process of generating content of an artistic nature can raise issues relating to the ownership of IPRs covering that content; considers, in this connection, that it would not be appropriate to seek to impart legal personality to AI technologies and points out the negative impact of such a possibility on incentives for human creators;
14. Points out the difference between AI-assisted human creations and AI-generated creations, with the latter creating new regulatory challenges for IPR protection, such as questions of ownership, inventorship and appropriate remuneration, as well as issues related to potential market concentration; further considers that IPRs for the development of AI technologies should be distinguished from IPRs potentially granted for creations generated by AI; stresses that where AI is used only as a tool to assist an author in the process of creation, the current IP framework remains applicable;
15. Takes the view that technical creations generated by AI technology must be protected under the IPR legal framework in order to encourage investment in this form of creation and improve legal certainty for citizens, businesses and, since they are among the main users of AI technologies for the time being, inventors; considers that works autonomously produced by artificial agents and robots might not be eligible for copyright protection, in order to observe the principle of originality, which is linked to a natural person, and since the concept of ‘intellectual creation’ addresses the author’s personality; calls on the Commission to support a horizontal, evidence-based and technologically neutral approach to common, uniform copyright provisions applicable to AI-generated works in the Union, if it is considered that such works could be eligible for copyright protection; recommends that ownership of rights, if any, should only be assigned to natural or legal persons that created the work lawfully and only if authorisation has been granted by the copyright holder if copyright-protected material is being used, unless copyright exceptions or limitations apply; stresses the importance of facilitating access to data and data sharing, open standards and open source technology, while encouraging investment and boosting innovation;
16. Notes that AI makes it possible to process a large quantity of data relating to the state of the art or the existence of IPRs; notes, at the same time, that AI or related technologies used for the registration procedure to grant IPRs and for the determination of liability for infringements of IPRs cannot be a substitute for human review carried out on a case-by-case basis, in order to ensure the quality and fairness of decisions; notes that AI is progressively gaining the ability to perform tasks typically carried out by humans and stresses, therefore, the need to establish adequate safeguards, including designing systems with human-in-the-loop control and review processes, transparency, accountability and verification of AI decision-making;
17. Notes, with regard to the use of non-personal data by AI technologies, that the lawful use of copyrighted works and other subject matter and associated data, including pre-existing content, high-quality datasets and metadata, needs to be assessed in the light of the existing rules on limitations and exceptions to copyright protection, such as the text and data mining exception, as provided for by the Directive on copyright and related rights in the Digital Single Market; calls for further clarification as regards the protection of data under copyright law and potential trademark and industrial design protection for works generated autonomously through AI applications; considers that voluntary non-personal data sharing between businesses and sectors should be promoted and based on fair contractual agreements, including licensing agreements; highlights the IPR issues arising from the creation of deep fakes on the basis of misleading, manipulated or simply low-quality data, irrespective of whether such deep fakes contain data which may be subject to copyright; is worried about the possibility of mass manipulation of citizens being used to destabilise democracies and calls for increased awareness-raising and media literacy as well as for urgently needed AI technologies to be made available to verify facts and information; considers that non-personal auditable records of data used throughout the life cycles of AI-enabled technologies in compliance with data protection rules could facilitate the tracing of the use of copyright-protected works and thereby better protect right-holders and contribute to the protection of privacy; stresses that AI technologies could be useful in the context of IPR enforcement, but would require human review and a guarantee that any AI-driven decision-making systems are fully transparent; stresses that any future AI regime may not circumvent possible requirements for open source technology in public tenders or prevent the interconnectivity of digital services; notes that AI systems are software-based and rely on statistical models, which may include errors; stresses that AI-generated output must not be discriminatory and that one of the most efficient ways of reducing bias in AI systems is to ensure – to the extent possible under Union law – that the maximum amount of non-personal data is available for training purposes and machine learning; calls on the Commission to reflect on the use of public domain data for such purposes;
18. Stresses the importance of full implementation of the Digital Single Market Strategy in order to improve the accessibility and interoperability of non-personal data in the EU; stresses that the European Data Strategy must ensure a balance between promoting the flow of, wider access to and the use and sharing of data on the one hand, and the protection of IPRs and trade secrets on the other, while respecting data protection and privacy rules; highlights the need to assess in that connection whether Union rules on intellectual property are an adequate tool to protect data, including sectoral data needed for the development of AI, recalling that structured data, such as databases, when enjoying IP protection, may not usually be considered to be data; considers that comprehensive information should be provided on the use of data protected by IPRs, in particular in the context of platform-to-business relationships; welcomes the Commission’s intention to create a single European data space;
19. Notes that the Commission is considering the desirability of legislation on issues that have an impact on relationships between economic operators whose purpose is to make use of non-personal data and welcomes a possible revision of the Database Directive and a possible clarification of the application of the directive on the protection of trade secrets as a generic framework; looks forward to the results of the public consultation procedure launched by the Commission on the European Data Strategy;
20. Stresses the need for the Commission to aim to provide balanced and innovation-driven protection of intellectual property, for the benefit of European AI developers, to strengthen the international competitiveness of European companies, including against possible abusive litigation tactics, and to ensure maximum legal certainty for users, notably in international negotiations, in particular as regards the ongoing discussions on AI and data revolution under the auspices of WIPO; welcomes the Commission’s recent submissions of the Union’s views to the WIPO public consultation on the WIPO draft Issues Paper on Intellectual Property Policy and Artificial Intelligence; recalls in this regard the Union’s ethical duty to support development around the world by facilitating cross-border cooperation on AI, including through limitations and exceptions for cross-border research and text and data mining, as provided for by the Directive on copyright and related rights in the Digital Single Market;
21. Is fully aware that progress in AI will have to be paired with public investment in infrastructure, training in digital skills and major improvements in connectivity and interoperability in order to come to full fruition; highlights, therefore, the importance of secure and sustainable 5G networks for the full deployment of AI technologies but, more importantly, of necessary work on the level of infrastructure and security thereof throughout the Union; takes note of the intensive patenting activity taking place in the transport sector when it comes to AI; expresses its concern that this may result in massive litigation that will be detrimental to the industry as a whole and may also affect traffic safety if we do not legislate on the development of AI-related technologies at Union level without further delay;
22. Endorses the Commission’s willingness to invite the key players from the manufacturing sector – transport manufacturers, AI and connectivity innovators, service providers from the tourism sector and other players in the automotive value chain – to agree on the conditions under which they would be ready to share their data;
23. Instructs its President to forward this resolution to the Council and the Commission as well as to the parliaments and the governments of the Member States.
‘Building trust in human-centric artificial intelligence’ (COM(2019)0168).
Recommendation to the Council and the VP/HR concerning the implementation and governance of Permanent Structured Cooperation (PESCO)
European Parliament recommendation of 20 October 2020 to the Council and the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy concerning the implementation and governance of Permanent Structured Cooperation (PESCO) (2020/2080(INI))
– having regard to the Treaty on European Union (TEU) and in particular its Article 36, Article 42(6), Article 46, and its Protocol (No 10) on permanent structured cooperation,
– having regard to Council Decision (CFSP) 2017/2315 of 11 December 2017 establishing permanent structured cooperation (PESCO) and determining the list of participating Member States(1),
– having regard to Council Decision (CFSP) 2018/340 of 6 March 2018 establishing the list of projects to be developed under PESCO(2),
– having regard to Council Decision (CFSP) 2018/909 of 25 June 2018 establishing a common set of governance rules for PESCO projects(3),
– having regard to Council Decision (CFSP) 2018/1797 of 19 November 2018 amending and updating Decision (CFSP) 2018/340 establishing the list of projects to be developed under PESCO(4),
– having regard to Council Decision (CFSP) 2019/1909 of 12 November 2019 amending and updating Decision (CFSP) 2018/340 establishing the list of projects to be developed under PESCO(5),
– having regard to the Council conclusions of 13 November 2017 on security and defence in the context of the EU Global Strategy,
– having regard to the Council conclusions of 19 November 2018 on Security and Defence in the context of the EU Global Strategy,
– having regard to the Council conclusions of 17 June 2019 on Security and Defence in the context of the EU Global Strategy,
– having regard to the Council Recommendation of 15 October 2018 concerning the sequencing of the fulfilment of the more binding commitments undertaken in the framework of permanent structured cooperation (PESCO) and specifying more precise objectives (2018/C 374/01)(6),
– having regard to its resolution of 16 March 2017 on constitutional, legal and institutional implications of a common security and defence policy: possibilities offered by the Lisbon Treaty(7),
– having regard to the Arms Trade Treaty, which entered into force in December 2014,
– having regard to European Court of Auditors Review No 09/2019 of September 2019 on European defence,
– having regard to Rule 118 of its Rules of Procedure,
– having regard to the report of the Committee on Foreign Affairs (A9-0165/2020),
A. whereas in accordance with Article 42(2) TEU, the common security and defence policy (CSDP) includes the progressive framing of a common EU defence policy, which will lead to a common defence being put in place when the European Council, acting unanimously, so decides; whereas PESCO constitutes an important step towards achieving this objective;
B. whereas PESCO should be used to further operationalise and develop the obligation laid out in Article 42(7) TEU to provide mutual aid and assistance, as recalled in the joint notification by Member States to the Council and to the High Representative of the Union for Foreign Affairs and Security Policy on PESCO, signed by 23 Member States on 13 November 2017, in order to improve the readiness of the Member States to provide solidarity to a fellow Member State if it becomes the victim of an armed aggression on its territory;
C. whereas according to Article 1(a) of the Protocol (No 10) on permanent structured cooperation established by Article 42 TEU, one of the objectives of PESCO is for the Member States to develop their defence capabilities more intensively by furthering their national contributions and participation, where appropriate, in multinational forces, the main European equipment programmes, and in the European Defence Agency’s activities;
D. whereas Article 1(b) of Protocol No 10 states that the Member States are to ‘have the capacity to supply by 2010 at the latest either at national level or as a component of multinational force groups, targeted combat units for the missions planned, structured at a tactical level as a battle group, with support elements including transport and logistics, capable of carrying out the tasks referred to in Article 43 TEU, within a period of five to 30 days, in particular in response to requests from the United Nations Organisation, and which can be sustained for an initial period of 30 days and be extended up to at least 120 days’; whereas Article 1(b) needs to be revised in order to adequately respond to the challenging geopolitical environment; whereas the Member States are still far from achieving this goal;
E. whereas the establishment of an EU common defence strategy is needed now more than ever in the context of multiple and growing threats;
F. whereas the level of ambition under the EU Global Strategy in the field of security and defence covers crisis management and capacity building in partner countries with the aim of protecting Europe and its citizens; whereas no Member State can protect itself alone, given that the security and defence threats the EU faces, and which are targeted against its citizens, territories and infrastructures, are common multi-faceted threats that cannot be addressed by a single Member State on its own; whereas an effective EU system for efficient, coherent, strategic and joint use of resources would be advantageous for the EU’s overall level of security and defence and is more than ever necessary in a fast-deteriorating security environment; whereas increased efforts at cooperation on cyber defence, such as information sharing, training and operational support, are needed in order to better counter hybrid threats;
G. whereas the main actors of PESCO are the participating Member States (pMS), which provide the capabilities for implementing CSDP (Article 42(1) and Article 42(3) TEU), and which deploy them in EU operations and missions where the Council entrusts them with the execution of a task, within the Union framework (Article 42(1), (4) and (5), Article 43 and Article 44 TEU), and which develop their defence capabilities, inter alia, when appropriate within the framework of the European Defence Agency (Article 42(3) and Article 45 TEU);
H. whereas PESCO’s long-term vision is to provide the Union with operational capacity drawing on military assets which are complemented by civilian means, to achieve a coherent full-spectrum force package available to the Member States for military CSDP; whereas PESCO should enhance the EU’s capacity to act as an international security provider in order to contribute effectively and credibly to international, regional and European security, including by preventing the importation of insecurity, and to enhance interoperability in order to protect EU citizens and maximise the effectiveness of defence spending by reducing duplication, overcapacity and uncoordinated procurement;
I. whereas according to Council decision (CFSP) 2017/2315 establishing PESCO, enhanced defence capabilities of the Member States will also benefit NATO, following the single set of forces principle, provided that duplication is avoided and interoperability is prioritised, while strengthening the European pillar within the alliance and responding to repeated calls for more balanced transatlantic burden-sharing; whereas NATO remains the cornerstone of the security architecture of many Member States;
J. whereas PESCO creates a binding framework between the pMS, which have committed themselves to jointly investing in, planning, developing and operating defence capabilities within the Union framework in a permanent and structured manner by subscribing to 20 binding commitments in five areas set by the TEU; whereas these commitments should constitute a move from mere defence cooperation towards full interoperability, as well as the enhancement of Member States’ defence forces through bilateral, mutually beneficial partnerships; whereas these binding commitments are evaluated annually, on the basis of the national implementation plans, by the PESCO secretariat, which can be consulted by the participating Member States; whereas despite these binding commitments, no effective compliance mechanism for PESCO is in place; whereas PESCO projects should be implemented in a manner that reflects the industrial capacity, duplication concerns and budgetary constraints of pMS; whereas the compliance mechanism for PESCO should be improved;
K. whereas the pMS must show full political engagement with the 20 binding commitments to which they have subscribed; whereas military capacity planning cycles usually take longer than three years; whereas the current national military capacity planning cycles are mostly driven by the previously established NATO Defence Planning Process; whereas more progress should be achieved with regard to significantly embedding PESCO into national defence planning processes in order to ensure the capacity of pMS to finalise PESCO projects;
L. whereas PESCO was originally conceived as an avant-garde, comprising the Member States willing and able to upgrade their cooperation in defence to a new level of ambition; whereas the fact that there are 25 pMS must not lead PESCO to be constrained by the ‘lowest common denominator’ approach; whereas the number of pMS indicates a willingness for closer cooperation in security and defence;
M. whereas work on the first three waves of PESCO projects has led to the establishment and adoption of 47 projects; whereas to date, none has come to fruition; whereas the projects in the first wave are mainly capability-building projects involving as many Member States as possible; whereas the inclusive nature of PESCO projects should not lead the pMS to water down their ambitions; whereas it is essential that PESCO focus on projects that deliver genuine added value;
N. whereas there seems to be no overarching common logic between the 47 PESCO projects; whereas the current list of projects lacks coherence, scope and strategic ambition, with the result that the most obvious capability gaps will not be filled, and does not adequately or fully address the critical shortfalls identified by the Headline Goal Process through the Capability Development Plan (CDP) and the Coordinated Annual Review on Defence (CARD); whereas one of these projects has been stopped in order to avoid unnecessary duplication; whereas other projects did not make sufficient progress or are at risk of being stopped, and around 30 projects are still in the conceptual development and preparatory phase; whereas the development of ambitious military capacity projects can take up to 10 years; whereas the vast majority of PESCO projects coincide with European Defence Fund (EDF) and NATO shortfalls;
O. whereas the second phase of PESCO is to start in 2021; whereas this second phase will deliver concrete and significant results, which means that a prioritisation of projects is necessary;
P. whereas certain PESCO projects are focussed on operational deployment, such as EUFOR Crisis Response Operation Core (EUFOR CROC), Military Mobility and Network of Logistic Hubs, while others are more focussed on the development of military capacities, such as Cyber Rapid Response Teams and Mutual Assistance in Cyber Security (CRRTs); whereas both approaches are needed to decisively contribute to the evolution towards an EU common integrated security and defence strategy;
Q. whereas some of the most strategic PESCO projects have the potential to decisively contribute to the Union’s strategic autonomy and to decisively contribute to the creation of a coherent full-spectrum force package;
R. whereas major European defence projects such as the Future Air Combat System (FCAS) and the Main Ground Combat System (MGCS) currently remain outside the scope of PESCO;
S. whereas it is crucial to prioritise and address the capability gaps identified in the CDP, and to build on the CARD with the aim of increasing Europe’s strategic autonomy;
T. whereas only some of the current PESCO projects sufficiently address the capability shortcomings identified under the CDP and CARD, or already take sufficient account of the High Impact Capacity Goals deriving from the CDP, and whereas such projects should be considered a priority;
U. whereas the consistency, coherence and mutual reinforcement between PESCO, CARD, national implementation plans (NIPs) and the CDP has to be further improved;
V. whereas the NATO Defence Planning Process (NDPP) contributes to national defence planning processes in 21 pMS which are members of NATO;
W. whereas interactions between Member States’ national priorities, EU priorities and NATO priorities should take place at the earliest opportunity, where appropriate and relevant; whereas EU and NATO priorities should be better harmonised in order to achieve EU capability targets;
X. whereas while taking into account the different nature of the two organisations and their respective responsibilities, PESCO should be an effective and complementary tool to address the capability development priorities and provide the military capabilities identified in the EU and may make a contribution to the NATO objectives;
Y. whereas in conjunction with the EU Global Strategy, a specific defence and security strategy such as the EU Security and Defence White Book suggested in numerous Parliament reports could facilitate a shared understanding of current and future challenges and provide important guidance to PESCO and the CDP deriving from an understanding of strategic ambitions and actions to be taken in the long run;
Z. whereas currently, PESCO projects are dependent on the 25 participating Member States’ financial contributions; whereas it is expected that, as a result of the COVID-19 pandemic, national defence budgets will suffer reductions; whereas paradoxically, several of the current 47 PESCO projects, if funded accordingly, could strengthen Member States’ preparedness, should another massive public health crisis occur: Military Mobility, the European Medical Command and many other projects in areas related to logistics and transportation, healthcare, disaster relief, preparedness against chemical, biological, radiological and nuclear (CBRN) weapons and the fight against malicious cyber activities and hostile disinformation campaigns; whereas cutting funding for the strategic capabilities that the EU and its Member States currently lack would also weaken their ability to jointly act against future pandemics, CBRN threats and other unpredictable risks with major international impacts;
AA. whereas funding dual-use transport infrastructure will benefit both civilian and military mobility, and whereas implementing harmonised administrative procedures could lead to resources being moved through proper supply routes across the EU and help in building a common security and defence environment;
AB. whereas PESCO and the future EDF must be mutually reinforcing and whereas interlinkages between them must be further developed in order to deliver critical capabilities identified under the CDP;
AC. whereas the prospect of receiving co-financing for the research and development capacities deriving from certain PESCO projects via the future EDF has led pMS to multiply their proposals and has encouraged exchanges and cooperation; whereas all proposals must have the EU’s best common strategic interest in mind;
AD. whereas in some specific cases, the participation of third countries, provided they meet an agreed set of political, substantive and legal conditions, in individual PESCO projects might be in the strategic interest of the Union, particularly when it comes to the provision of technical expertise or additional capabilities, and in case of strategic partners; whereas any third country participation in PESCO projects should not undermine the objective of fostering the EU CSDP;
AE. whereas third country participation can only be exceptional, decided on a case-by-case basis and at the invitation of the EU Member States; whereas any such participation should provide added value to certain projects, and contribute to strengthening PESCO and the CSDP and to meeting more demanding commitments, subject to very strict conditions and on the basis of established and effective reciprocity;
AF. whereas an agreement on third country participation in PESCO projects is long overdue;
AG. whereas, with regard to the current role of the Political and Security Committee (PSC) in the context of PESCO and capability development, Parliament has already requested that ‘the mandate of the PSC referred to in Article 38 TEU needs to be interpreted narrowly’;
AH. whereas the governance of PESCO is led by pMS; whereas the PESCO secretariat should continue to facilitate liaison with other EU actors as regards possible synergies with other EU instruments and initiatives to ensure transparency and inclusiveness and avoid unnecessary duplications;
AI. whereas the deepening of defence cooperation among Member States at EU level should go hand in hand with the strengthening of the powers of scrutiny of Member States’ parliaments and the European Parliament;
AJ. whereas the Connecting Europe Facility should focus on projects related to military mobility and interoperability, which are crucial when it comes to unexpected conflict and crisis; whereas PESCO should contribute to the creation of an effective Schengen area for military mobility, with the aim of reducing procedures at borders and keeping infrastructure burdens to a minimum; whereas the Rail Baltica project, which is vital for the integration of the Baltic countries into the European rail network, should be welcomed in this regard, and its full effectiveness should be assured;
AK. whereas PESCO can in this respect contribute to greater coherence, coordination and interoperability in security and defence, and to consolidating solidarity, cohesion and the resilience of the Union;
AL. whereas Parliament should, jointly with the Council, exercise legislative and budgetary functions, as well as functions of political control and consultation as laid down in the Treaties;
AM. whereas Parliament calls on the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy to forward his annual report on the implementation of PESCO;
AN. whereas the combined research and development efforts of pMS under PESCO will give rise to significant technological breakthroughs, in turn providing the Union with a competitive edge in the area of modern defence capabilities;
1. Recommends that the Council and the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy:
(a)
inform and consult Parliament on the review of PESCO, and ensure that Parliament’s views are duly taken into consideration, in line with Article 36 TEU, especially in the context of the current strategic review of the first PESCO phase, which ends in 2020, in order to ensure reinforced accountability, transparency and scrutiny;
(b)
stress the importance of pursuing conflict resolution as a priority;
(c)
implement the Union’s strategic vision and define common threats by, inter alia, implementing the level of ambition defined by the 2016 EU Global Strategy, including through the ongoing work of the Strategic Compass, which needs to be carried out in cooperation with all relevant stakeholders and institutions, and strengthen PESCO’s operational dimension;
(d)
prepare, as soon as possible, on the basis of the results of the discussion on the Strategic Compass, a fully-fledged EU Security and Defence White Book; take note of the fact that the first results of the Strategic Compass are expected in the first half of 2022;
(e)
ensure synergy effects and coherence between different EU defence initiatives and operations;
(f)
encourage the pMS through focused proposals and adequate communication to evolve from a strictly national focus on defence to a stronger European one and to undertake structured efforts to increase the use of a European collaborative approach as a priority, as no individual pMS has the potential to address identified capacity shortfalls alone; encourage pMS and the Member States more generally not to reduce their defence spending in the coming years, and especially not their financial involvement in European cooperative projects;
(g)
increase the EU’s budgetary ambition for the strengthening of defence capabilities, notably through the sufficient financing of the future EDF and Military Mobility in the upcoming multiannual financial framework (MFF);
(h)
ensure that PESCO is effectively used as an instrument towards sustainable and efficient EU defence cooperation, improving the defence capabilities of pMS and interoperability as a common goal, especially in terms of availability, interoperability, flexibility and deployability of forces in line with the ambition for greater EU strategic autonomy, while maintaining close cooperation between willing pMS, increasing EU-NATO cooperation as regards EU-NATO members and maintaining close cooperation with other international partners;
(i)
ensure that the funding of capacities derived from PESCO projects by the EDF is focused on a set of strategic key projects, in line with the priorities of the CDP, in order to maximise its impact; ensure that the selection of PESCO projects is in line with the High Impact Capacity Goals of the CDP;
(j)
recognise that Parliament, jointly with the Council, exercises legislative and budgetary functions, as well as functions of political control and consultation as laid down in the Treaties;
(k)
incorporate directly into the PESCO project cycle the link between PESCO and the European Defence Industrial Development Programme (EDIDP) and the EDF with the aim of contributing more effectively to the achievement of the Union’s ambitions in the area of security and defence; require the documentation of each project before selection on the budgetary side;
(l)
focus PESCO efforts on projects aimed at systematically strengthening military CSDP,
(i)
which contribute to remedying significant capability shortfalls with a more operational focus, in direct response to the needs of European armed forces engaged in operations,
(ii)
with a strategic and integrative dimension, such as EUFOR CROC, Military Mobility, Network of Logistic Hubs or CRRT, or
(iii)
that create additional synergies and effects of scale, where appropriate;
(m)
focus PESCO on constructive projects with a genuine European strategic dimension, thereby strengthening Europe’s defence industrial and technological base;
(n)
underline the importance of a small number of strategic projects, in particular strategic enablers (command and control, transport, intelligence), which should be prioritised as they lay down the foundations of a more integrated European defence;
(o)
take note of the fact that the creation of PESCO in the framework of the Lisbon Treaty was seen as the establishment of an avant-garde of Member States willing to pool resources and capabilities to achieve ambitious common objectives in the field of security and defence; consider the need for the Union to progressively develop a common framework under the responsibility of the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy, within which the Member States would conduct their own national defence policy reviews, share results and pool intelligence as a means of establishing the foundation of a genuine European defence;
(p)
recognise the value, in this regard, of the political guidelines of the Commission regarding defence policy, and in particular regarding the need for bold steps towards a genuine European Defence Union, and for an integrated and comprehensive approach to the EU’s security; take the view that the creation of a new Commission Directorate-General for Defence Industry and Space should serve as a catalyst for enhanced coherence, fair cooperation and integrated coordination in the creation of defence capabilities across the Member States, as well as for strengthening EU military infrastructure and improving the efficiency of EU industry and the internal market;
(q)
recognise that Parliament should play a prominent role in the scrutiny and supervision of the implementation and evaluation of the CSDP; keep Parliament fully informed and consulted in the context of the current strategic review of the first PESCO phase, which ends in 2020; take the view that increasing defence cooperation among Member States at EU level should go hand in hand with the strengthening of Parliament’s power of scrutiny;
(r)
strive to ensure that key capabilities such as future key land, sea, air, cyber and other platforms for the armed forces of the Member States be brought under PESCO or at least be closely connected to it, as appropriate, in order
(i)
to increase the operational readiness of military CSDP, and
(ii)
to ensure that PESCO efforts are complementary to existing capabilities and are used in a manner that resolves existing shortfalls and offsets overhead expenses;
(s)
formulate innovative incentives to improve the interoperability and deployment of CSDP missions and operations;
(t)
increase investment in interconnecting civilian transport infrastructure that is compatible with planning for military mobility;
(u)
study, as part of the reform of the EU Battlegroup (EU BG) system, whether to bring it under PESCO in order to increase its operational capacity, modularity and agility, by establishing standing multinational units dedicated to fulfilling military tasks as specified in Article 43 TEU and to enhancing the EU’s ability to conduct crisis management operations, including the most demanding ones such as peace-making, and to use it as a strategic over-the-horizon force;
(v)
support and promote, where relevant, the grouping of PESCO projects into capability clusters and assess their strategic relevance, keeping in mind the objective of achieving a full-spectrum force package, and concentrate efforts on those that have the highest potential to deliver European strategic autonomy; review the current list of 47 projects and either cluster or cancel projects, at the discretion of pMS, which are making insufficient progress or present insufficient mutually beneficial gain to the EU;
(w)
promote compliance with the 20 PESCO commitments by establishing a clear and simple definition of compliance benchmarks, and by ensuring that future project proposals address a specific EU Capability Development Priority; ensure that any reviews of project progress are based on clear and transparent criteria including when co-financed in the framework of EDIDP/future EDF; ensure that such criteria serve as indicators for all Member States participating in PESCO projects; ensure that the pMS further increase the quality and the granularity of the information provided in their National Implementation Plans, in which they outline how they intend to meet the 20 PESCO commitments;
(x)
enhance the coherence of EU defence planning and development tools and initiatives; use the synergies between the PESCO project cycle and other defence capability processes such as the EU Headline Goal Process, the CDP and CARD in order to enable more focused, mature, better developed and structured projects to be submitted; make sure the submission cycle enables the synchronised implementation of several European initiatives, including the EDF;
(y)
encourage pMS to embed CDP into their national defence planning processes with a view to helping them to overcome capability shortcomings;
(z)
reaffirm the central role of the PESCO secretariat as a single point of contact for all projects, and invite the secretariat to provide regular updates to Parliament, and for the benefit of all stakeholders, on the progress of projects, using information collected from the Member State(s) in charge of project coordination; encourage pMS to continue to engage in a more effective dialogue with the PESCO secretariat regarding the review and update of their National Implementation Plans;
(aa)
call on the pMS to ensure tangible progress in the achievement of the current PESCO projects;
(ab)
clarify the role of the Political and Security Committee in the PESCO process, which is not provided for by the TEU, and ensure, in this context, the important role played by the European Union Military Committee (EUMC) in the provision of ad hoc military advice to the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy;
(ac)
involve the EUMC in the work of defining a full-spectrum force package;
(ad)
examine the establishment of an EU Council on Defence based on the existing Foreign Affairs Council in defence ministers format, which also serves as the EDA Ministerial Steering Board and the PESCO format of EU Defence Ministers, in order to guarantee the prioritisation of resources and effective cooperation and integration among the Member States, as appropriate;
(ae)
clarify or define the link between the governance of PESCO and that of the EDF and inform Parliament in the ex-post control process when it comes to EDF funding of PESCO projects;
(af)
consider, as requested by some pMS, changing the cycle of submission of PESCO projects with the aim of increasing the focus and maturity and improving the structure of these projects;
(ag)
clarify the rules governing third-party participation in PESCO, taking into consideration the importance of EU decision-making autonomy and full reciprocity and understanding that a case-by-case approach is most beneficial for the EU, taking into account
(i)
the need to prepare and adopt a comprehensive and fundamental document to regulate future cooperation with third-party participation in PESCO projects, and
(ii)
the fact that the decision-making process regarding the involvement of a third party should be taken at the level of each PESCO project;
(ah)
encourage ‘future threats’ to be used as the basis of future PESCO project proposals; strengthen partnerships with NATO, the UN, the African Union and beyond; ensure that the involvement and inclusion of SMEs is considered in all relevant aspects of PESCO projects;
(ai)
ensure that PESCO projects further develop and increase the industrial capacity of pMS in the fields of nanotechnologies, super-computers, artificial intelligence, drone technology, robotics and others, in turn securing European self-reliance and independence from foreign importers in these areas, as well as facilitating the creation of new jobs;
(aj)
take note of the fact that the COVID-19 pandemic has shown that the Union does not have enough competence when it comes to healthcare; recognise that in parallel, an EU common defence strategy needs to be established to respond in the event of an attack on the EU’s borders and territories, and that PESCO is a positive step towards this objective;
(ak)
acknowledge the crucial role played by the European armed forces in addressing the challenges posed by the COVID-19 pandemic, both in terms of the management of the health emergency and support to civilian missions and operations, and the fact that they also have a cross-border dimension and solidarity function; see the potential benefits of new ambitious PESCO projects for the development of common European capabilities in this field, expanding on the work of previous projects, notably the Deployable Military Disaster Relief Capability Package and the European Medical Command;
(al)
call for the Council and the participating Member States to focus on cyber resilience and prepare a collective strategy and procedures to respond to cyber incidents through PESCO projects in order to create a more resilient environment within the Member States;
(am)
take note of Parliament’s position on the Conference on the Future of Europe as expressed in its resolution of 15 January 2020(8), namely that security and the role of the EU in the world should be identified among pre-defined but non-exhaustive policy priorities, and recognise that this would be an opportunity to involve citizens in the debate on strengthening PESCO as a way of making progress towards an autonomous common security and defence policy for our Union;
2. Instructs its President to forward this recommendation to the Council and the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy.
– having regard to Article 8 and to Title V, notably Articles 21, 22, 36 and 37, of the Treaty on European Union (TEU), as well as to Part Five of the Treaty on the Functioning of the European Union (TFEU),
– having regard to the Association Agreement between the European Union and the European Atomic Energy Community and their Member States, of the one part, and the Republic of Moldova, of the other part (AA), which includes a Deep and Comprehensive Free Trade Area (DCFTA) and fully entered into force on 1 July 2016,
– having regard to the establishment of a visa-free regime for citizens of the Republic of Moldova in March 2014, as a result of the amendments to Council Regulation (EC) No 539/2001(1) made by the European Parliament and the Council,
– having regard to the signature in November 2017 of a Memorandum of Understanding, a Loan Facility Agreement and a Grant Agreement on micro-financial assistance worth EUR 100 million for the period 2017-2018,
– having regard to the Moldovan National Action Plan on the Implementation of the Republic of Moldova-European Union Association Agreement (NAPIAA) for 2017-2019,
– having regard to its previous resolutions relating to the Republic of Moldova, in particular its resolutions of 14 November 2018 on the implementation of the EU-Moldova Association Agreement(2), of 5 July 2018 on the political crisis in Moldova following the invalidation of the mayoral elections in Chișinău(3), of 15 November 2017 on the Eastern Partnership in the run-up to the November 2017 Summit(4), of 4 July 2017 on providing macro-financial assistance to the Republic of Moldova(5), and of 21 January 2016 on Association Agreements / Deep and Comprehensive Free Trade Areas with Georgia, Moldova and Ukraine(6),
– having regard to the EU’s decision of July 2018 to freeze the disbursement of the first instalment of macro-financial assistance, following the Supreme Court ruling on the Chișinău mayoral election, and its decision of November 2018 to cut its financial assistance, following concerns about the rule of law and the democratic backsliding of the country,
– having regard to the subsequent EU decision of July 2019 to resume budget support disbursements in light of the Republic of Moldova’s commitment to reform the justice system,
– having regard to the EU decision of October 2019 to disburse a first instalment of macro-financial assistance worth EUR 30 million, as a result of the implementation of key reforms to improve democratic standards and protect the rule of law,
– having regard to the Commission and European External Action Service (EEAS) joint staff working document on the Association Implementation Report on the Republic of Moldova of 11 September 2019,
– having regard to the outcome of the fifth Association Council meeting between the EU and the Republic of Moldova of 30 September 2019,
– having regard to the Joint Declarations of the Eastern Partnership Summits, most recently that of 24 November 2017 in Brussels,
– having regard to the conclusions of the Foreign Affairs Council on the Republic of Moldova of 26 February 2018,
– having regard to Resolution 2308 of the Parliamentary Assembly of the Council of Europe (PACE) of 3 October 2019 on ‘The functioning of democratic institutions in the Republic of Moldova’,
– having regard to the 2019 Transparency International Corruption Perceptions Index, which ranks the Republic of Moldova 120th out of 180 countries and territories assessed (first place being the best), while on the 2018 index the Republic of Moldova ranked 117th,
– having regard to the Democracy Index 2019 of The Economist Intelligence Unit, which classifies the Republic of Moldova as a “Hybrid Regime”,
– having regard to Freedom House’s 2020 “Freedom in the World” report, which gives the Republic of Moldova a “partly free” assessment, and its 2020 “Nations in Transit” report, which assesses the Republic of Moldova as a “Transitional or Hybrid Regime”,
– having regard to the Moldovan National Action Plan on the Implementation of the EU-Moldova Association Agreement, the National Action Plan on Human Rights 2018-2022 and the National Strategy on Preventing and Combating Violence against Women and Domestic Violence 2018-2023, which explicitly mentions the ratification of the Istanbul Convention,
– having regard to the analyses and recommendations issued by the Organisation for Economic Co-operation and Development (OECD), in particular of 8 March 2018 on Young Moldova: Problems, Values and Aspirations; and of 20 April 2018 on Youth Well-being Policy Review of Moldova,
– having regard to the opinions and recommendations of the Organisation for Security and Cooperation in Europe Office for Democratic Institutions and Human Rights (ODIHR) and of the Council of Europe’s Venice Commission, in particular of 15 March 2018 on electoral reform in Moldova, of 24 June 2019 on the constitutional situation with particular reference to the possibility of dissolving parliament, and of 14 October 2019 on the draft law on the reform of the Supreme Court of Justice and the Prosecutor’s Office,
– having regard to the Commission’s Joint Communication on the “Eastern Partnership policy beyond 2020 - Reinforcing Resilience, an Eastern Partnership that delivers for all” of 18 March 2020,
– having regard to the Council Conclusions of 11 May 2020 on the Eastern Partnership policy beyond 2020,
– having regard to the Joint Communication of the Commission and the High Representative of the Union for Foreign Affairs and Security Policy to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 8 April 2020 on the Global EU response to COVID-19 and the Decision (EU) 2020/701 of the European Parliament and of the Council of 25 May 2020 on providing macro-financial assistance to enlargement and neighbourhood partners in the context of the COVID-19 pandemic(7),
– having regard to the Commission’s Report to the European Parliament and the Council, Third report under the visa suspension mechanism, and the accompanying Staff Working Document, published on 10 July 2020,
– having regard to the European Parliament’s recommendation to the Council, the Commission and the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy on the Eastern Partnership, in the run-up to the June 2020 Summit,
– having regard to the recommendations and activities of the EU-Moldova Parliamentary Association Committee, the Euronest Parliamentary Assembly, the Eastern Partnership Civil Society Forum, the EU-Moldova Civil Society Platform and of other representatives of civil society in the Republic of Moldova,
– having regard to the Statement and Recommendations adopted on the occasion of the 7th Meeting of the EU-Moldova Parliamentary Association Committee, held in Strasbourg on 18-19 December 2019,
– having regard to the conclusions of the European Parliament election observation mission to the Republic of Moldova parliamentary elections of 24 February 2019 integrated in the international election observation mission led by the OSCE/ODIHR,
– having regard to the Commission's economic aid package adopted on 29 March 2020 to help the Republic of Moldova, among other countries, in its fight against the COVID-19 pandemic, which included the redirection of existing instruments to mitigate the socioeconomic impact of the crisis,
– having regard to Rule 54 of its Rules of Procedure, as well as Article 1(1)(e) of, and Annex 3 to, the decision of the Conference of Presidents of 12 December 2002 on the procedure for granting authorisation to draw up own-initiative reports,
– having regard to the opinion of the Committee on International Trade,
– having regard to the report of the Committee on Foreign Affairs (A9-0166/2020),
A. whereas through the AA/DCFTA the EU and the Republic of Moldova committed to promote political association and achieve economic integration, and the Republic of Moldova committed to incorporating the EU acquis into its own laws and practices in a large number of areas; whereas, in order to support these efforts, the Union committed to provide substantial financial and budgetary assistance to the Republic of Moldova, on condition that core European values and principles, such as the rule of law, human rights and democratic rights, are respected and that the fight against corruption, organised crime, money laundering, oligarchic structures and nepotism is effectively pursued; whereas, in serious cases of backsliding, cooperation can be reversed;
B. whereas on 13 September 2017 Parliament and the Council adopted a decision to provide macro-financial assistance to the Republic of Moldova worth EUR 100 million in the context of the IMF programme to support the country’s economic and financial reforms;
C. whereas the EU repeatedly expressed concerns about the rule of law, the lack of progress in the prosecution of those responsible for the bank fraud exposed in 2014, and continued breaches of human rights;
D. whereas Transparency International's 2018 Corruption Perceptions Index and Freedom House's 2020 reports show slight progress in the Republic of Moldova in the most recent past, while the overall trend in those indexes, as well as in the Democracy Index, points to a long-term deterioration in the state of democracy, corruption, political rights and civil liberties in the Republic of Moldova;
E. whereas, despite changes in government, the Republic of Moldova’s State institutions remain weak and the Republic of Moldova continues to struggle with the problem of State capture, as the concentration of power and control over all important sectors and institutions at the highest level of government has not significantly decreased;
F. whereas, due to serious violations of the rule of law and of the democratic process, in 2018 the EU suspended the disbursement of the last two instalments under the budget support programme for justice sector reforms;
G. whereas on 11 June 2019 the European Court of Human Rights (in the case of Ozdil and Others v. the Republic of Moldova) found that the Republic of Moldova had violated the rights to liberty, security, privacy and family life when in September 2018 its Intelligence and Security Service (SIS) detained and forcibly returned to Turkey five Turkish citizens who had been seeking asylum; whereas this disguised extradition is just one example of a systematic pattern of enforced and involuntary disappearance, illegal detention and deportation to Turkey of Turkish nationals in dozens of countries around the world;
H. whereas, following the formation in June 2019 of a government committed to carrying out ambitious reforms with a programme focused on reform of the judiciary, the Commission disbursed the first instalment of Macro-Financial Assistance and resumed disbursements for sector budget support programmes, while declaring that it would continue to apply strict conditionality; whereas on 10 July 2020 the Commission approved the disbursement of a second and final instalment of EUR 30 million from its Macro-Financial Assistance (MFA) programme;
I. whereas, however, the Republic of Moldova has not been able to access the rest of the funds available under this programme, which expired in July 2020; whereas this assistance remains conditional upon the implementation of previously agreed reforms, in particular those aimed at strengthening the rule of law and democratic standards and delivering tangible results for citizens;
J. whereas in November 2019 the Moldovan parliament adopted a motion of no confidence in the government formed in June 2019, and a minority government and subsequently a new coalition government were formed; whereas representatives of the Union institutions have expressed concern at the way in which the previous government was replaced and with regard to the reform process undertaken by the Republic of Moldova under the AA/DCFTA;
K. whereas the new coalition government’s majority in the Parliament of the Republic of Moldova has been constantly shrinking as deputies have defected from the ruling alliance; whereas the Republic of Moldova will hold presidential elections in the autumn and is currently facing a period of acute political instability; whereas President Igor Dodon has underlined that the Parliament must be dissolved and early elections must be held as soon as possible; whereas on 7 July 2020 the Constitutional Court of the Republic of Moldova ruled that early elections could be held only after the presidential elections;
L. whereas on 17 April 2020 the Russian and Moldovan governments signed an agreement, negotiated by the presidents of both countries, for a loan of EUR 200 million to be provided by the Russian Federation to the Republic of Moldova at a preferential 2 % interest rate; whereas this agreement was ratified on 23 April 2020 and, on the same day, following an appeal lodged by members of the parliamentary opposition, the Constitutional Court (CC) of the Republic of Moldova suspended the law ratifying the loan agreement pending completion of its examination of the agreement's compatibility with the Constitution; whereas on 6 May 2020 the President of the CC reported pressure from the Moldovan authorities on the CC, as well as attempts to discredit its judges; whereas on 7 May 2020 the CC declared the loan agreement unconstitutional; whereas a new loan agreement with the Russian Federation is currently under negotiation;
M. whereas the COVID-19 pandemic has demonstrated the growing need for coordination between the Union and neighbouring countries in tackling common threats; whereas the Union has responded to that need with, among other tools, the provision of a financial assistance package to its neighbours;
N. whereas, during the COVID-19 crisis, solidarity with the Eastern Partnership countries is of paramount importance and the Union provided substantial support to address the impact of the outbreak in the region; whereas, in this context, the Republic of Moldova will benefit from EUR 87 million in redirected bilateral funding;
O. whereas the Union is also making further MFA loans of EUR 100 million available to the Republic of Moldova, as part of the decision to provide MFA to ten partner countries in the neighbourhood in order to help them limit the economic fallout from the coronavirus pandemic; whereas the first instalment of the exceptional MFA package will be disbursed as swiftly as possible, given that the Memorandum of Understanding (MoU) with the Republic of Moldova has been ratified; whereas, in order to receive the second instalment, to be disbursed within one year of the signing of the MoU, the country will have to respect certain conditionalities; whereas an important precondition for the granting of this MFA is that the country respect effective democratic mechanisms, including a multi-party parliamentary system and the rule of law, and guarantee respect for human rights; whereas, while the conclusion of the MoU is to be welcomed, the implementation of the commitments undertaken should be guaranteed;
P. whereas the Republic of Moldova has made international and national commitments to promote gender equality and the empowerment of women; whereas the country has adopted measures to promote the political representation of women, including through adoption of a mandatory 40 % gender quota for the political parties’ electoral lists; whereas further efforts are needed in order to advance the objectives of the 2017-2021 National Gender Equality Strategy, including appropriate funding and stronger implementation mechanisms;
Q. whereas, despite all economic progress, the social impact of the financial assistance and the reform efforts has been rather marginal; whereas the Republic of Moldova remains one of the poorest countries in Europe, facing a detrimental social situation with deserted villages and extreme poverty; whereas in 2018 38,5 % of workers in the Republic of Moldova were informally employed, with no access to any form of social protection;
R. whereas, since 1989, the population of the Republic of Moldova has shrunk by almost a third; whereas in demographic terms these are the worst figures in the whole of Europe; whereas Moldovans leave for higher pay and better education and facilities; whereas such a development has deep and long-lasting political, economic and social consequences; whereas the Republic of Moldova is confronted with labour shortages and a lack of professionals such as nurses and doctors; whereas the elderly, a large proportion of whom rely on remittances, are the Republic of Moldova’s most vulnerable group and the most prone to poverty;
S. whereas the Republic of Moldova’s problems cannot be solved from outside the country and ownership by the Moldovan people of the effort to meet the country’s challenges needs to be enhanced; whereas it remains important to address the main challenges, such as the fight against corruption and oligarchic structures, adherence to democratic standards, the need to find solutions to multifaceted social problems, ensuring media plurality, and tackling poverty and emigration;
Common values and general principles
1. Recalls that the common values on which the Union is built, namely democracy, respect for human rights and fundamental freedoms, and the rule of law, lie also at the heart of the political association and the economic integration between the Union and the Republic of Moldova; reaffirms the Union's commitment to support the Republic of Moldova’s European path through political association, economic integration and the respective reforms; notes that the AA/DCFTA remains of primary importance for the development of the Republic of Moldova, especially in the current exceptional times, and commends the sustained engagement in this process of Moldovan society and authorities; recalls, however, that further progress must be achieved in its implementation in order to deliver its full potential and benefits, particularly by focusing on the independence of State institutions, their resilience against oligarchic influence, the fight against corruption, justice, the strengthening of the rule of law and the improvement of citizens’ living conditions; underlines that the AA/DCFTA was the main vector encouraging and supporting the process of structural reforms, democracy and the rule of law;
2. Welcomes all intentions towards a closer political, human, and economic integration with the Union in line with the principle of differentiation and based on the performance, results, and aspirations of the Republic of Moldova's authorities and society;
3. Notes the conclusions of the IMF’s March 2020 Article IV consultation and the IMF Board’s sixth and final review of the Republic of Moldova’s economic performance under the Extended Credit Facility and Extended Fund Facility arrangements, with particular regard to the rehabilitation of the Moldovan banking system and the strengthening of financial sector governance;
4. Welcomes the disbursement of the second instalment of the Union MFA; acknowledges the reform efforts pursued by the Republic of Moldova in areas including the fight against corruption, the strengthening of the anti-money laundering framework and the adoption of a new law on the activities of NGOs, and notes that the Republic of Moldova has joined the OECD anti-corruption peer-review programme (Istanbul Action Plan);
5. Is of the opinion that the disbursement of the second instalment of the EU macro-financial assistance programme for 2017-2020 should be followed by efforts by the Moldovan authorities to fulfil the relevant conditions in the areas of strengthening the anti-money-laundering framework, in respect of which tangible and lasting results should be delivered, and of strengthening the independence of the national bank;
6. Calls on the Moldovan Government and the EU to cooperate to overcome the negative impact of the COVID-19 crisis on social and economic development;
7. Welcomes the outcome of the negotiations on the Memorandum of Understanding on the new exceptional EU MFA program aimed at countering the negative economic impact of the COVID-19 pandemic;
Importance of AA implementation in the ongoing political developments and the run-up to the presidential elections of 1 November
8. Notes that the November 2019 activity programme of the Government of the Republic of Moldova is less ambitious than the previous government’s 2030 Global Agenda, and is concerned that political instability and frequent changes of government are affecting the implementation of AA/DCFTA provisions and limiting the pace of reforms; supports linking the next Association Agenda to the new NAPIAA and emphasises the importance of a rapid adoption of the new Agenda as an instrument to accelerate the implementation of the Association Agreement and update its priorities, with active parliamentary participation and input from civil society and other stakeholders in the EU and the Republic of Moldova; insists that the continuation of EU political and financial support remains conditional on the delivery of tangible reforms, in particular concerning the rule of law and the judiciary; reiterates, in this regard, the importance of implementing all the priority reforms agreed in the Association Agenda and of fulfilling the conditionalities agreed for the disbursement of the second and third tranches of the MFA;
9. Welcomes the Republic of Moldova’s constructive contribution to cooperation within the Eastern Partnership and encourages a permanent and intensified political exchange between the countries that are party to AA/DCFTAs and the Commission on association-related reforms; calls on the Commission to make proper use of the existing mechanisms to continue monitoring the concrete implementation of reforms and to develop a conditionality mechanism, including clear benchmarks, with the meaningful involvement of civil society, particularly at local level; deems it essential, in this context, to step up financial support for CSOs, which play a critical role in fostering participation in public debates and in monitoring both the action of the Moldovan authorities and the effectiveness of the Union’s policies towards the country; suggests, in addition, using the experience of the Support Group for Ukraine to create a similar structure for the Republic of Moldova, in order to increase the effectiveness and visibility of the Union’s support;
10. Underlines that the situation in the Republic of Moldova should be closely monitored in the long term, including during the pre-electoral period, in accordance with the normal OSCE/ODIHR practices and standards, particularly in the current period of crisis, as the forthcoming presidential elections will be a test for democracy and the rule of law in the country;
11. Calls, in this respect, on the Moldovan authorities to ensure free and fair presidential elections, scheduled for 1 November 2020, and urges them to further improve the electoral legislation in order to ensure the effectiveness of the right to vote, the fairness of electoral campaigns, the transparency of the legislative process and democratic oversight, so as to allow adequate public scrutiny of the activity of the government and parliament; demands that the Moldovan authorities refrain from altering rules and regulations for political gain, as this invariably ends in political unrest and instability affecting the commitment to structural reforms; underlines, with a view to future elections, the importance of the democratic legitimacy of the government, of transparency in coalition building, of respect for the will of voters and of the government majority reflecting the vote of the people;
12. Calls on the Moldovan authorities to strengthen democratic mechanisms, including a multi-party parliamentary system, and to ensure free, independent and pluralistic media as well as fair access to finance and the media; demands, in this context, that the Moldovan authorities strengthen resilience against disinformation and information manipulation by domestic and foreign actors, both online and offline, and implement measures to address the even more urgent need to tackle vote buying, the intimidation of election observers, electoral bribery and other corrupt practices, as well as the misuse of State resources, as such practices undermine and destroy all the democratic efforts made by political actors in the Republic of Moldova;
13. Emphasises the need for strong and fair political rivalry among presidential candidates, which will not be possible without a healthy and transparent system of party financing and presidential campaign financing;
14. Urges the Moldovan government to put in place all necessary measures to ensure that the citizens of the Republic of Moldova living in the Transnistrian region as well as outside of the Republic of Moldova can participate in elections in an inclusive, transparent and fair way, free from foreign interference;
Reforms and institutional framework
15. Welcomes the reforms that led to the introduction of a visa-free regime with the Union; notes that the regime has been used extensively by the citizens of the Republic of Moldova and represents a very good example of how the implementation of the AA/DCFTA touches upon the lives of citizens by fostering people-to-people contacts with fellow Europeans; calls on the Union and the Republic of Moldova to further improve people-to-people contacts and exchanges in order to build mutually positive images of each other among their populations;
16. Welcomes the fact that since 2014 more than 2,3 million Moldovan citizens have benefited from the visa-free regime and notes that according to the latest Commission report, the Republic of Moldova continues to meet the visa liberalisation requirements and that visa-free movement continues to bring positive economic, social and cultural benefits both to the Union and the Republic of Moldova; encourages both sides to uphold the free movement of people also during crises;
17. Acknowledges the efforts undertaken by the Moldovan authorities in implementing the recommendations outlined in the annual visa suspension mechanism reports; recommends the continued implementation of the benchmarks related to the visa liberalisation policy and calls on the authorities to continue their efforts to satisfy those benchmarks, in particular in the area of anti-corruption, to strengthen the judiciary, to apply the anti-money laundering legislation and to take concrete action to address the increase in unfounded asylum applications; is concerned, in this respect, about the rise in the number of Moldovan nationals found to be staying illegally in the Schengen+ area (a 47 % rise) and about the rise in asylum applications (a 48 % rise); urges the Moldovan authorities to further implement the commitments made in the context of the liberalised visa regime for the Schengen area as regards effective migration management, and to ensure asylum rights for third-country applicants in the Republic of Moldova;
18. Welcomes the adoption by the Parliament of the Republic of Moldova of numerous legislative acts in line with the country’s commitments enshrined in the AA, namely related to public administration, public financial management and justice system reforms; underlines the importance of a full implementation of these acts, including by adopting secondary legislation;
19. Welcomes the progress achieved on public financial management and calls on the Moldovan authorities to accelerate the implementation of other AA/DCFTA reforms based on an improvement of the rule of law;
20. Acknowledges the essential steps taken by the Republic of Moldova to increase the performance of its public administration; calls, to this end, on the Moldovan government to ensure the full implementation of the public administration reform for 2016-2020 in line with the OECD/SIGMA principles of public administration; encourages, furthermore, the Moldovan authorities to increase transparency, to combat widespread corruption in the public administration and to establish a national school of public administration;
21. Underscores that a more efficient and sustainable AA implementation stems from an impartial and professional administration of State institutions and agencies; in this regard, reiterates its concern regarding the lack of a constant commitment to improvements in the public sector, which discourages competent people from pursuing a career in public administration, and stresses the need for the development of a professional public administration and the encouragement of young people to take up a career in the public sector, so as to achieve a more transparent administration in which nepotism and favouritism do not lead to chronic politicisation;
22. Urges the Moldovan authorities to start a more comprehensive decentralisation reform as soon as possible, including the reform of the Republic of Moldova’s administrative-territorial system, regional development and administrative decentralisation, with the possibility of generating local taxes; underlines, in this respect, the need for deeper and broader cooperation between local authorities, for a reduced number of local administrations and for additional measures to ensure their greater independence and to decrease their operating costs; calls on the Moldovan authorities to uphold the principles of local democracy and local autonomy in accordance with the European Charter of Local Self-Government by providing proper competencies and sufficient funding for local governments and by ensuring their effectiveness;
23. Is concerned by the high level of concentration and politicisation of the media and advertising sectors, leading to a low level of public confidence in the media; calls on the Moldovan authorities to continue the reform of the media sector with stronger involvement of civil society in the process; calls on the Republic of Moldova, in particular, to review the audiovisual code and liberalise the advertising market in line with European standards of media freedom and pluralism, as recommended by the Commission and the Venice Commission, so as to ensure full transparency of ownership in the media and the advertising market;
24. Takes the view that strengthening media pluralism and independence should be a priority for the Union and the Republic of Moldova in their partnership relations and that this should also be appropriately reflected in financial allocations; calls on the Commission to increase support for independent media, including in the regions; urges the Moldovan authorities to refrain from exploiting the COVID-19 pandemic to adopt measures curtailing freedom of speech and limiting the media’s ability to report on the full dimension of the impact of the COVID-19 crisis on society in an independent and unbiased way; expresses concern at the spread of fake news and disinformation in the Republic of Moldova during the coronavirus crisis and points out the need for both the local authorities and the Union to develop specific programmes that promote media literacy, combat disinformation and support quality, fact-checked media content;
25. Urges the Moldovan authorities to foster free and independent media, including by conducting an independent audit to ensure the effectiveness of the Audiovisual Council as an independent regulator, and to combat the ongoing intimidation of journalists, the politicisation and lack of transparency of public and regulatory institutions and the lack of public access to information and quality media content, as well as to ensure the transparency of media ownership;
26. Underlines that the Union is the biggest provider of aid to the Republic of Moldova; observes with great concern the continuous propaganda, disinformation campaigns and denigrating messages directed against the Union by governing politicians, which paint a distorted and unrealistic picture of it on public television and in the media; regrets such public attacks on the Union’s aid and image, as they undermine the implementation of the AA and EU-Republic of Moldova relations; calls on the Moldovan authorities to put an end to the disinformation and anti-EU propaganda campaigns to which the citizens of the Republic of Moldova in general are exposed and to step up support for the fight against fake news, hybrid warfare in communication, targeted disinformation campaigns and the degradation of media programmes; underlines that political involvement in mass media structurally undermines fundamental freedoms and access to information;
27. Deplores the progressive distancing of the current government in Chișinău from the European path, to the detriment of the country's democratic aspirations, and urges all pro-European political parties to find solutions through dialogue in order to ensure the continuity of the European integration process of the Republic of Moldova and to benefit fully from all the advantages offered by the AA/DCFTA;
28. Calls on the Moldovan authorities to step up their efforts to ensure that the opportunities of the AA/DCFTA and EU assistance and programmes reach the local level, including in the remote parts of the country, in particular rural areas, so as to enable inhabitants to push for positive changes in their communities, particularly those more vulnerable to post-Soviet sentiments and Russian manipulation;
29. Takes the view that the authorities should provide transparent information on the external assistance that they intend to seek and that financing from the Russian Federation should be discussed openly in parliament and with experts and civil society, including as regards the geostrategic conditionalities and the long-term impact on the economy of this type of financing; considers that, when it comes to the conditionalities attached to EU financial assistance, the authorities should also provide the necessary explanations to the public; underlines that EU conditionalities are to be seen as opportunities to carry out the necessary reforms;
30. Highlights the need to counter Russian disinformation through fact-based and accessible quality information, as well as through public campaigns aimed at increasing public awareness; encourages the authorities of the Republic of Moldova to seek more in-depth collaboration with the Union and its Member States in order to enhance the implementation of good practices and solutions for countering disinformation, propaganda, manipulation and hostile influencing carried out by external forces with the aim of dividing, destabilising and undermining the integrity of internal political processes and relations with the Union;
31. Acknowledges the progress made with the adoption by the Moldovan parliament of the new law on non-commercial organisations, as part of the conditionality requirements for obtaining EU MFA; expects that its swift and effective implementation will ensure that the rights and freedoms of civil society and non-governmental organisations, as well as the freedom of association, are fully respected, and calls for more support from the Moldovan government for the development of civil society; points out the central role that NGOs play in any democratic society and expresses the hope that the new legislation will improve the transparency of public decision-making and provide a modernised framework for the functioning of civil society in the country; urges the Moldovan authorities to refrain from exerting any pressure on NGOs and other civic actors; regrets the distrust and hostility with which political officials approach civil society in general; urges a more meaningful and active involvement of civil society in policymaking and implementation processes, particularly as regards human rights and fundamental freedoms, in respect of which NGOs could act as a watchdog and hold the respective State institutions accountable; calls, with that in mind, on the Commission and the Member States to provide political, technical and financial support to civil society, and urges the EU institutions to establish clear rules helping to avoid the provision of grants to ‘GONGOs’ (NGOs established and financed by governments through informal channels);
32. Calls on the Moldovan authorities to promote transparency in public decision-making and to ensure proper involvement and consultation of stakeholders and civil society at all stages, which will also increase public scrutiny and the social acceptability of the reforms conducted;
33. Welcomes the amendments to the electoral legislation adopted in August 2019, and the ruling of the Moldovan Constitutional Court of February 2020 on the territorial requirements for establishing political parties;
34. Points out that the COVID-19 crisis has brought to light the fact that the health system of the Republic of Moldova is underdeveloped and struggling to cope with the recent surge in the number of cases; urges the Commission, the Member States and the Republic of Moldova to increase cooperation on public health resilience, to exchange best practices and to work with civil society and the business and SME communities on establishing epidemic strategies focusing on the most vulnerable groups in society; calls on the Moldovan government to strengthen the healthcare system, to improve sanitation standards, especially in hospitals, and to provide its population with all relevant information about the pandemic in a transparent and inclusive manner;
Cooperation in the field of common foreign and security policy (CFSP) and progress on resolving the Transnistria conflict
35. Welcomes the Republic of Moldova’s participation in common security and defence policy (CSDP) missions and operations and in cooperation on cybersecurity and cybercrime investigations, as well as the Republic of Moldova’s cooperation with NATO and its alignment with the EU’s CFSP declarations; calls on the EU institutions to include the Republic of Moldova in new formats of cooperation on cybersecurity, hybrid threats and cybercrime investigations;
36. Recognises the importance of the European Union Border Assistance Mission to Moldova and Ukraine (EUBAM) in harmonising the border management and customs regime with that of the Union, also with regard to the solution of the Transnistrian issue;
37. Acknowledges the Republic of Moldova’s unique experience and expertise and the contribution it can make to the Union’s collective security and defence policy, and encourages deeper cooperation in EU-related defence policies, including participation in PESCO once the issue of the involvement of third countries is clarified;
38. Reiterates the EU’s support for the sovereignty and territorial integrity of the Republic of Moldova and for the efforts made in the framework of the 5+2 negotiation process to reach a peaceful, lasting and comprehensive political settlement of the Transnistrian conflict, based on respect for the sovereignty and territorial integrity of the Republic of Moldova within its internationally recognized borders, with a special status for Transnistria that would ensure the protection of human rights also in the territories currently not controlled by the constitutional authorities; recalls that on 22 June 2018 the UN General Assembly adopted a resolution urging the Russian Federation to withdraw its troops and armaments unconditionally from the territory of the Republic of Moldova, and reaffirms its support for the immediate implementation of that resolution;
39. Encourages the Moldovan Government to continue promoting an environment favourable to the settlement of conflicts and supporting activities that increase confidence and people-to-people contacts across conflict-divided communities;
40. Acknowledges the increased security interdependence between the Republic of Moldova and its Transnistrian region, and regards the stability of both as the main factor in preventing and resolving security challenges such as hybrid threats, cyberattacks, electoral cyber-meddling, disinformation and propaganda campaigns, and third-party interference in political, electoral and other democratic processes;
41. Welcomes the Moldovan government’s efforts to extend the benefits of the DCFTA and the visa-free regime to the Transnistrian region, which has enabled significant growth in mobility and trade with the region, as well as all activities that enhance economic collaboration and increase the exchange of goods and services between the Republic of Moldova and Transnistria;
42. Considers that, by guaranteeing tariff-free access to EU markets for Transnistrian businesses registered on the west bank of the Dniester and subject to customs checks by Moldovan officials, the DCFTA brought about a massive shift of trade from the Eurasian Economic Union towards the Union; encourages the Moldovan authorities to advance further towards trade and engagement with EU markets in order to enhance market access, transparency and good business practices, and to reduce the capacity of oligarchs for market manipulation and monopolisation;
43. Underlines that any resolution to the Transnistrian issue must respect the Republic of Moldova’s sovereign right to choose its own defence and foreign policy orientation;
44. Urges the authorities of the Republic of Moldova to consider developing and implementing the package of laws in the fields of conflict prevention and crisis management that formed part of NAPIAA for 2017-2019;
Rule of law and good governance
45. Is concerned by the slow pace of reforms concerning the rule of law and democratic institutions; urges the government of the Republic of Moldova to complete judicial reforms without delay, so as to guarantee the independence, impartiality and effectiveness of the judiciary and specialised anti-corruption institutions; calls, in that connection, on the Moldovan government to ensure a transparent process for drafting the amendments to the Moldovan Constitution concerning the Supreme Council of Magistrates (SCM), and for their subsequent adoption, drawing on international precedents and good practices, in line with the recommendations of the Venice Commission and in consultation with Council of Europe and EU experts, civil society and other interested actors; regrets that the amendments regarding the appointment of SCM members were rushed through Parliament; underlines the need to guarantee the independence of the SCM and calls on the Moldovan authorities to ensure the merit-based selection and promotion of judges;
46. Calls on the authorities to continue effective consultations in order to adopt a concept and action plan for justice reform based on a comprehensive diagnosis, ensuring a broad consensus among stakeholders and strictly adhering to the Moldovan Constitution and European standards;
47. Is concerned by the low level of trust in the integrity and effectiveness of the judiciary and by the susceptibility of the judicial branch to political pressure which hampers its independence; calls on the authorities of the Republic of Moldova to ensure transparency in the judicial appointment processes and that the Prosecutor General, his staff, and public prosecutors in general, work independently and abide by the highest standards of professionalism and integrity;
48. Points out, in this regard, that a lack of resources and a lack of knowledge about good governance, the rule of law and human rights pervade the Moldovan administration and adversely affect its effective functioning, and calls on the Commission to increase funding through the available budget support and technical assistance instruments aimed at strengthening the capacity and efficiency of the justice and law enforcement authorities, taking into account progress in the implementation of the reforms;
49. Urges the authorities of the Republic of Moldova to strengthen the complete independence of the Constitutional Court and to ensure that it is not subject to any form of political interference; firmly rejects any attempts to intimidate or put pressure on the judges of the Constitutional Court, and condemns the enormous pressure, blackmail and harassment to which the Court’s judges were subjected before delivering the decision on the Russian loan; deeply regrets the attempts to politicize the Constitutional Court and the inactivity of the prosecutors and of the anti-corruption centre in defending its independence;
50. Is concerned by the persistent lack of progress in tackling corruption in the Republic of Moldova, and therefore urges the Government to step up the fight against corruption and state capture, as well as money laundering, smuggling and organised crime, including human trafficking; calls on the government of the Republic of Moldova to adopt concrete measures to reinforce the independence, integrity and effectiveness of the National Anticorruption Centre and the Anti-Corruption Prosecutor’s Office, as well as to ensure the de-politicisation of public anti-corruption institutions and law enforcement agencies; points out the need for sustained and consistent efforts to prevent and prosecute high-level corruption and organised crime; takes the view that this is the only way to re-establish the trust of Moldovan citizens and ensure the enactment of lasting reforms in the Republic of Moldova; calls on the Commission to provide much more consistent support to civil society organisations monitoring fraud and money laundering activities;
51. Urges the authorities to step up efforts to fight organized crime and dismantle criminal schemes;
52. Welcomes the adoption of the law on anti-money laundering sanctions on 21 May 2020 and calls for the swift elaboration of guidelines on the application of the new legislation, as well as for specialised training for the authorities concerned; calls on all interested parties to maintain consistent efforts in combating smuggling and money laundering, dismantling criminal networks and reducing the influence of oligarchs; calls for enhanced cooperation with Europol, Interpol and customs organisations such as the World Customs Organization (WCO) and the OECD’s anti-corruption networks;
53. Notes with concern the conclusions of the Commission’s and EEAS’ 2019 Association Implementation Report on Moldova according to which the establishment of instruments and bodies that aim to prevent fraud and money laundering has been slow; expects the new Government to build upon the recent steps taken by the previous government as regards fighting corruption and unravelling criminal and money-laundering schemes;
54. Notes the action taken to pursue the prosecution of the massive banking fraud exposed in 2014 and of other money laundering cases; reiterates, however, its concern at the persistent failure to conduct a transparent prosecution of all those responsible for the bank fraud exposed in 2014, as well as at the slow recovery of stolen assets; reiterates its concern that no substantial recovery of assets has been made so far and stresses that further steps need to be taken in this direction; calls on the Moldovan authorities to speed up the prosecution process, to bring all those responsible to justice without further delay and to recover misappropriated funds; invites the Member States to offer the authorities of the Republic of Moldova substantial support in the investigation of the case, should any requests for it be made;
55. Welcomes the adoption, on 18 June 2020, of the new law to abolish the Citizenship for Investment Programme as of 1 September 2020, at the end of the existing moratorium; takes the view that this is an essential move in order to reduce the risks of corruption, tax evasion and money laundering in the Republic of Moldova; notes that until the programme is cancelled only existing applications will continue to be processed and calls on the Commission to monitor carefully how this will be done;
56. Urges the authorities of the Republic of Moldova to increase transparency regarding the funding of political parties and to investigate all irregularities in a fair and unbiased way; stresses the need to fight corruption within the Moldovan political class; is deeply concerned by recent allegations that Members of Parliament were bought off in order to change their political affiliation, as well as by allegations of the kidnapping, intimidation and pressuring of elected representatives; points out that these allegations must be investigated and that such behaviour is incompatible with the values at the core of the AA with the Republic of Moldova; draws attention also to the responsibility of political parties to fight corruption within their own ranks; calls, in addition, on the authorities to ensure that no funds from charitable foundations are used in electoral campaigning; urges the authorities to prohibit the use of administrative resources in favour of the governing political class during election campaigns;
Human rights and fundamental freedoms
57. Acknowledges the improvement of legislation on the protection of human rights, notably as a result of the new 2018-2022 Human Rights Action Plan; calls on the Moldovan authorities to significantly increase their efforts and to adopt implementation measures and secondary legislation so as to uphold those rights and fundamental freedoms, in particular for minorities and vulnerable groups, such as women and children exploited by human traffickers, linguistic minorities, people with disabilities, Roma and LGBT+ persons, thereby recognising respect for human rights as a critical criterion and a vital condition for a democratic society; is concerned that significant human rights problems remain unresolved and unpunished, such as pressure and politically motivated prosecutions and detentions, torture, arbitrary detention, harsh and life-threatening prison conditions, arbitrary or unlawful interference with privacy, and the use of forced or compulsory child labour;
58. Expresses deep concern at the situation of Moldovans stranded in EU Member States due to the COVID-19 crisis without social protection; calls on the Commission and the Member States to ensure, in the context of COVID-19, the equal treatment of third-country seasonal workers with EU nationals, as stated in Directive 2014/36/EU(8), recalling that such workers have the same labour and social rights as EU citizens; calls on the Member States to ensure quality housing for cross-border and seasonal workers, which should be decoupled from their remuneration, and ensure decent facilities, tenant privacy and written tenancy contracts enforced by labour inspectorates, and to establish standards in this regard;
59. Notes with concern that the implementation of the commitments stemming from the Association Agreement in the social sphere, especially in the fields of labour inspection, anti-discrimination measures and social dialogue, is limited; is concerned that progress in addressing macro-financial vulnerabilities remains insufficient to significantly boost living standards and is now endangered by the consequences of the COVID-19 crisis; insists on the mandatory involvement of trade unions, as well as civil society organisations, in the implementation of the Association Agreement;
60. Underlines that the Union must hold the Republic of Moldova accountable for its commitments with regard to the social dimension of the AA; calls on the Commission to provide detailed annual progress reports on the implementation of the social and labour-related aspects of the Association Agreement which analyse not only the transposition of relevant Union directives and norms, but also their actual implementation; calls on the Commission to embrace the proposals of labour experts to introduce a mechanism to sanction violations of the agreed standards; suggests using the disbursement of MFA as leverage or conditionality to encourage the Republic of Moldova to improve the labour conditions of its workforce;
61. Expresses its concern regarding respect for human rights in the Transnistrian region, especially against the background of the COVID-19 pandemic;
62. Calls on the Commission to upgrade neglected areas of AAs, which include important policy areas such as gender, the European Green Deal and the prevention of health crises;
63. Underlines that gender equality is a key precondition for sustainable and inclusive development; urges the Moldovan government and authorities to implement measures to further improve women’s representation and equal treatment at all levels of political and societal life; requests that the Commission mainstream gender equality in all its policies, programmes and activities in relation to the Republic of Moldova, and encourages the authorities of the Republic of Moldova to promote programmes that include a consistent gender equality dimension, to offer more support to the most disadvantaged and vulnerable groups in society, and to implement legislation to fight hate speech and physical violence against those more vulnerable groups;
64. Urges the Moldovan authorities to ratify the Istanbul Convention, which the Republic of Moldova signed on 6 February 2017 but whose ratification is lagging behind, despite being mentioned as an explicit objective of the National Action Plan on Human Rights 2018-2022 and of the National Strategy on Preventing and Combating Violence against Women and Domestic Violence 2018-2023; recalls that violence against women and girls is prevalent in the Republic of Moldova, with two in five women having experienced physical and/or sexual violence at the hands of a partner or non-partner since the age of 15;
65. Calls for further steps in implementing the national legislation for preventing and combating trafficking in human beings, for a substantial increase in the quality of the services provided to victims, and for more protection, assistance and support for victims of crime, especially children, during investigations and after the judicial process; calls, in addition, for more support during the social reintegration of victims; calls for increased cooperation between the judicial authorities and enforcement agencies of the Republic of Moldova and of the Member States in order to reduce cross-border crime, in particular human trafficking and illegal drug trafficking;
66. Calls on the authorities to guarantee the right to a fair trial and the respect of human rights in detention and correctional facilities, including by addressing inadequate healthcare provisions; in this regard, stresses the need to provide a safe environment for prisoners; additionally, calls for measures to avoid selective and politically motivated justice;
67. Reiterates its call to the Moldovan authorities to ensure that any extradition requests coming from third countries are processed in a transparent manner while following judicial procedures fully in line with European principles and standards;
68. Calls for more concrete measures to improve detention conditions and to eliminate the detention of people with disabilities in psychiatric hospitals against their will; calls for the complete elimination of torture and ill-treatment in prisons as a method of pressure on imprisoned or detained political opponents;
69. Acknowledges the measures taken at national level to prevent and fight torture, but underlines that the Republic of Moldova continues to be condemned frequently at the European Court of Human Rights for torture and ill-treatment; urges therefore the establishment of a fully independent agency specifically for the investigation of allegations of torture and other human rights violations committed by police and other law enforcement officers;
70. Is concerned by the continued presence in public debate of instances of hate speech, originating from politicians as well as religious and community leaders; underlines, in this regard, that women and LGBTI+ people in particular have been targeted; calls on public officials to refrain from engaging in hate speech and to publicly disavow hate speech whenever it occurs, and calls on the authorities to fine-tune the legal and institutional framework to combat hate speech in order to counter the phenomenon with all available mechanisms;
71. Recalls that a draft bill establishing Magnitsky-type legislation has already been introduced in the Moldovan Parliament; encourages the legislative body to move forward with its examination of the bill, which, if adopted, would contribute to fighting human rights abuses, corruption and money laundering;
Trade and economic cooperation
72. Is of the opinion that the EU assistance to the Republic of Moldova should continue to prioritize improving the living standards of the citizens, targeting areas such as facilitating the development of SMEs, helping the youth, as well as the general reform of the education and health sectors;
73. Welcomes entrepreneurial initiatives that aim to develop the Moldovan start-up scene; recognises, however, that further public sector reforms and financial assistance are needed in order to create additional employment opportunities that will entice young people and skilled workers to return to their country of origin;
74. Calls on the Commission to contribute to addressing the economic challenges faced by young people in the Republic of Moldova by investing in programmes favouring youth and social entrepreneurship and to strengthen the connection between the education system’s reform and labour market demands; stresses the need to invest in programmes which aim at young people coming from rural areas, since this category is one of the most vulnerable and lacks socio-economic opportunities, compared to young people from urban areas;
75. Acknowledges that the “brain drain” phenomenon, which is often caused by a lack of trust in the judiciary, nepotism and the absence of proper reforms in the country, represents a serious threat to the future of the Republic of Moldova, and is concerned by the large-scale emigration of Moldovan citizens, which accentuates negative demographic trends; encourages the Moldovan government to implement further measures to prevent and counter this phenomenon, in particular by creating opportunities and improving conditions and wages for young workers in their home country so that they can return home after studying or training abroad, and by supporting youth entrepreneurship; calls on the Commission to focus on this issue in its programmes;
76. Welcomes the diversification of the Moldovan economy and the significant increase in trade between the Republic of Moldova and the Union, as well as the fact that the Union is the largest investor in the country; welcomes the fact that, in 2018, the Union accounted for 70 % of the Republic of Moldova’s total exports and 56 % of its total trade; encourages further progress in areas such as the customs code, the protection of intellectual property rights including geographical indications, the improvement of sanitary and phytosanitary standards, the improvement of market conditions in the field of energy, public procurement, and access to finance for SMEs;
77. Encourages the full implementation of the DCFTA in order to further increase the EU – Republic of Moldova bilateral trade and investment relationship, including by removing non-tariff barriers to trade, facilitating access to the single market and making progress when it comes to integrating into it; recalls that the DCFTA with the Republic of Moldova must respect the rules set out in the sustainable development chapters, in line with international commitments, in particular the Paris Agreement, and WTO rules;
78. Commends the adoption by the Moldovan Parliament of the European LEADER approach as the basis for its national rural policy; however, encourages the Republic of Moldova, including through dedicated measures in the next National Strategy for Agriculture and Rural Development, to make full use of preferential export opportunities into the Union through more efficient and sustainable cultivation of farmlands, as well as more democratic access and use of land, thus generating agricultural products that would amplify the Republic of Moldova’s relative agricultural advantages;
79. Welcomes the regulatory approximation with the EU acquis and encourages the Commission to provide the Moldovan institutions and public administration with technical and financial assistance for this endeavour and the subsequent implementation; stresses that such assistance should be used to increase knowledge of human rights and the rule of law, and calls on the Moldovan authorities to progress more rapidly on approximation to the AA/DCFTA, including on animal health and food safety standards;
80. Welcomes the National Strategy ‘Digital Moldova 2020’, but calls on the Commission to support and assist programmes and reforms concerning media and information literacy to reflect the current digital age, as well as to upgrade sectoral cooperation in the digital economy; calls on the Republic of Moldova to build a reliable digital market economy, addressing the need for progress on open data, expanding access to digital systems and increasing citizens’ access to electronic services and various communication solutions;
81. Calls on the Commission to support investment in sectors with potential for development, growth and competitiveness in the EU, notably in the three sectors of strategic significance (i.e. sustainable energy and climate, the digital single market and cyber security, and transport);
82. Calls on the Moldovan government also to focus on the social dimension of trade and sustainable development by respecting and enforcing labour standards, ratifying and fully implementing all ILO conventions, and addressing the remaining limitations and shortcomings of the labour inspection system and the problems of the judicial system, which have a negative impact on the enforcement of labour standards;
83. Calls on the Moldovan authorities to adopt and implement policies aimed at restricting the participation of entities from jurisdictions that do not implement international transparency standards (offshore jurisdictions), and of businesses directly or indirectly controlled by such companies, in dealings with public authorities (public procurement, privatisation, concessions and public-private partnerships);
84. Calls on the EU to consider the possibility for countries having an AA/DCFTA with the EU to accede to the Single Euro Payments Area (SEPA), since it would be beneficial for citizens and provide new opportunities for SMEs to develop;
Energy, environment and climate change
85. Calls on the Moldovan Government to further reform the energy sector in order to increase resilience, transparency in costs and contracts in the sector, and to improve energy independence and efficiency, particularly by increasing energy interconnections with the Union, diversifying energy sources, including renewable energy, and reducing dependence on fossil fuels; stresses that all these aspects are of paramount importance for enhancing the country’s energy security;
86. Welcomes actions to strengthen the institutional capacity and independence of the energy regulator, and encourages the necessary urgent action to implement the Third Energy Package, in particular in the field of natural gas, and to ensure full compliance with the Energy Community acquis; calls, in particular, on the National Agency for Energy Regulation of the Republic of Moldova to approve energy market rules based on fair competition and to ensure compliance by all market participants, including state-owned traders;
87. Stresses the importance of increasing infrastructure cooperation in the region, also with a view to diversifying the Republic of Moldova’s energy supplies, and of improving the connectivity of the Republic of Moldova’s energy sector while ensuring environmental sustainability;
88. Underlines the importance of the diversification of the Republic of Moldova’s electricity system; urges the Moldovan authorities to ensure the timely implementation of the project for interconnection of the Republic of Moldova-Romania electricity systems by providing the necessary support and resources;
89. Encourages the Moldovan authorities to continue their efforts to reinforce the country’s energy security and commends the finalisation of the Ungheni-Chișinău gas pipeline by the end of 2020; invites the Commission, furthermore, to include the Republic of Moldova in the stress tests conducted for the internal energy market;
90. Commends the arrangements agreed between the Republic of Moldova, Ukraine and Romania in December 2019 to enable gas transfers to Ukraine and the Republic of Moldova via the Trans-Balkan pipeline, and the February 2020 Action Plan to ensure the independence of the transmission system operator Moldovatransgaz;
91. Welcomes the steps undertaken to interconnect the electricity system of the Republic of Moldova asynchronously with the EU via Romania, a significant milestone on the way to strengthening and diversifying the Republic of Moldova’s energy infrastructure; calls on all the authorities to fulfil, with the support of the EU, the objective of connecting the Republic of Moldova to Romania’s power grid by 2024;
92. Welcomes the Republic of Moldova’s February 2019 package on climate and the environment and its national response, whereby it became the fourth country in the world to submit its updated nationally determined contribution (NDC2), including increased ambition to reduce greenhouse gas emissions, to the Secretariat of the UN Framework Convention on Climate Change (UNFCCC); calls for increased efforts regarding the national commitments under the 2015 Paris Agreement to fight climate change, as well as for the mainstreaming of climate change in all areas of policy-making;
93. Calls on the Republic of Moldova to further enhance its engagement in the fight against climate change, in particular in waste management and the management of the water of the Nistru river, and calls on the Commission to facilitate the Republic of Moldova’s participation in the European Green Deal and to ensure that the DCFTA does not contradict the environmental objectives and initiatives laid down therein;
94. Acknowledges the importance of further modernising the education system in the Republic of Moldova, as well as the growing role of youth in all sectors of life, and invites the EU to offer further support, especially in the area of vocational education and training (VET), in order to meet labour market needs; stresses the need to promote opportunities for volunteering and civic engagement for young people and to invest more in young people by expanding funding for, and increasing the participation of Moldovan representatives in, existing mobility programmes, such as Erasmus+, Creative Europe and Horizon 2020;
95. Encourages the Commission to conduct consultations and to prepare and create tailored programmes for citizens, including direct contact with beneficiaries, through an online platform for applying for, and reporting on the use of, the funds made available by those programmes; calls, in this regard, for account to be taken of the objectives of the Green Deal as well as the day-to-day needs of citizens of the Republic of Moldova;
Institutional provisions
96. Stresses that without the sincere determination of the political class to reform the country and to genuinely implement the AA with the Union, no true and lasting development can be achieved; encourages, in this regard, all political actors and forces in the country to contribute to and initiate multi-party formats and collaboration in good faith on the Republic of Moldova’s strategic goals, thereby contributing to the quality of democracy and to the improvement of people’s living conditions; with that in mind, encourages the Moldovan authorities to make use of a Jean Monnet Dialogue to support inter-party dialogue and parliamentary capacity-building;
97. Calls on all EU institutions and the Member States, in close cooperation with the authorities of the Republic of Moldova, to better communicate the benefits of the AA/DCFTA and of EU assistance to the citizens of the Republic of Moldova;
98. Calls on the Commission to strengthen the Delegation of the European Union to the Republic of Moldova, to reinforce monitoring and to strengthen the project team in Chișinău, so as to help the Republic of Moldova effectively communicate its approximation to EU law, to fight disinformation and to promote a positive image of the EU and of the Republic of Moldova to all parties concerned;
o o o
99. Instructs its President to forward this resolution to the Council, the European Commission and the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy, and to the President, Government and Parliament of the Republic of Moldova.
Council Regulation (EC) No 539/2001 of 15 March 2001 listing the third countries whose nationals must be in possession of visas when crossing the external borders and those whose nationals are exempt from that requirement (OJ L 81, 21.3.2001, p. 1).
Directive 2014/36/EU of the European Parliament and of the Council of 26 February 2014 on the conditions of entry and stay of third-country nationals for the purpose of employment as seasonal workers (OJ L 94, 28.3.2014, p. 375).