Report on Artificial Intelligence: Part I – the existing regulatory landscape

Artificial intelligence (AI) has been placed front and centre in many countriesʼ economic strategies1, which is unsurprising given that AI is one of the defining technologies of the Fourth Industrial Revolution2. Nascent AI regulation around the world today is characterised by soft approaches, aimed either at incentivising innovation in the manufacturing or digital sectors or at encouraging breakthrough research. The ethical implications of AI are regulated, if at all, through specific AI codes adopted by companies concerned with good corporate social responsibility or by research institutes (private or public) concerned with ethical research and innovation. These AI ethical codes are not formally scrutinised by any public administration, nor are they legislatively required, so it is difficult to assess their quality and effectiveness in minimising the negative implications of AI. The purpose of this howtoregulate report is to examine the existing AI regulatory landscape (Part I) and to present regulatory approaches for minimising the harms of AI (Part II – outline for future regulation of AI), evidently without putting into question the utility of AI.

At the November 2017 Web Summit technology conference in Portugal, physicist Stephen Hawking said:

[AI], could be the biggest event in the history of our civilisation. Or the worst. We just donʼt know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it…Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy…I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.3

The howtoregulate article “Regulating Research and Technology Risks: Part I – Research Risks” presents regulatory tools which can help to contain the kinds of research-phase risks to which Hawking alludes. It asks two fundamental questions:

  • Are authorities informed on what is going on in the field of research?
  • Are authorities empowered to investigate risks linked to research and take measures to contain the risks?

In relation to the first question, regulators should consider the February 2018 report “The Malicious Use of AI”, written by 26 experts on the security implications of emerging technologies such as AI. The report provides a good summary of the landscape of potential security threats from malicious uses of AI, focusing on the kinds of attacks we are likely to see soon if adequate defences are not developed. The security-relevant properties of AI include:

  • its dual-use nature (military or non-military);
  • its efficiency and scalability;
  • its capacity to exceed human capabilities;
  • the anonymity and psychological distance it affords;
  • its novel, unresolved vulnerabilities4.

The threats posed by malicious uses of AI could affect our digital security, physical security and political security. Examples include5:

  • Digital: criminals training machines to hack or socially engineer victims at scales beyond current human capability. For example, a malicious actor can manage a number of fake email accounts (perhaps tens or hundreds, not thousands) to extort money from victims, whereas an AI could manage up to millions of email accounts and extort far more victims.
  • Physical: non-state actors weaponising consumer drones or consumer robots.
  • Political: privacy-eliminating surveillance, profiling and repression, or automated and targeted disinformation campaigns, such as those covered in the howtoregulate article “Countering ʻfake newsʼ ”.

The second question, concerning the empowerment of authorities, is considered in Part II – outline for future regulation of AI.

A. Questions of substance for regulating artificial intelligence

I. What is artificial intelligence?

1. There is no agreed definition of artificial intelligence among AI experts in industry, law or policy-making. Certainly, there are common elements in the various definitions in circulation, such as digital, technology, intelligence, systems, learning, reasoning and computer programme. What counts as AI is also contentious: what was considered AI 10-15 years ago is often not considered AI today, which shows how quickly the field evolves. Although there is no widely accepted definition of AI, its dual-use character is widely recognised, with one commentator observing that “AI advances for the good guys are also advances for the bad guys”6.

2. Clearly, the regulatorʼs first task is to define AI and from there determine the scope of regulation. Developing both a definition and a scope requires technological knowledge of AI, so AI experts and regulators should work together. The regulator must have a good understanding of the current technological state of AI so as not to underestimate present AI capabilities or overestimate future ones. Given this reportʼs focus on regulatory approaches for minimising the harms of AI, the definition in the “Malicious Use of AI” report will be used:

AI refers to the use of digital technology to create systems that are capable of performing tasks commonly thought to require intelligence. Machine learning is variously characterised as either a subfield of AI or a separate field, and refers to the development of digital systems that improve their performance on a given task over time through experience.7

Other definitions used at international and national level will, however, also be included throughout this article.
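Since the reportʼs definition turns on systems that improve their performance “over time through experience”, a minimal sketch may help non-specialist readers see what that means in practice. The following toy example (in Python, with invented numbers) starts with an uninformed estimate of a simple rule and refines it with each observed example; it is an illustration only, not specific to any particular AI system.

```python
# A toy machine-learning loop: the system estimates the rule y = 2x from
# examples, improving its guess with each observation ("experience").
def learn(examples, learning_rate=0.05, epochs=20):
    w = 0.0                              # initial, uninformed guess for the slope
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y            # how wrong the current guess is
            w -= learning_rate * error * x  # nudge the guess to reduce the error
    return w

data = [(1, 2), (2, 4), (3, 6)]          # observations generated by y = 2x
print(learn(data))                       # approaches 2.0 as experience accumulates
```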

II. Should some decisions always be made by a human, and should some tasks not be left to AI?

3. In seeking to define AI, the regulator, and more importantly the public generally, should think about what it wants AI to do. AI promises to solve many of the worldʼs toughest problems because it has the capacity to analyse vast amounts of data to arrive at a well-reasoned solution. Japan, which has a declining birth rate and an ageing population, is investing heavily in robots, which need AI to operate, to solve problems around caring for the aged, mobility and so on. This raises the question of what AI should not do: should AI be allowed to kill (this is happening in a limited form now, although the human operator makes the final decision to kill via the drone)? Should AI look after our children and the elderly? The UKʼs main agency for funding research in engineering and the physical sciences, the Engineering and Physical Sciences Research Council, developed the Principles of Robotics; principle 4 provides that “Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent”8.

III. Should AI systems be regulated by ethics or law?

4. Generally, industry prefers a deregulated landscape, and when it comes to technology it is widely believed that too much regulation stifles innovation. The AI industry operates in a largely unregulated landscape, with much of the formal government regulation focused on funding to promote innovation and take-up of AI applications. Of course, some government legislation around privacy, data protection, cybersecurity, liability and product safety, while not necessarily developed with AI in mind, does regulate AI development to a degree. The ethical development and application of AI is mostly regulated in the form of codes of conduct and corporate governance-type documents. Certainly, the AI industry is to be commended for its ethical approach to AI development: many of the leaders in AI research and industry seem alert to the ethical issues, perhaps in part a response to high levels of consumer distrust in AI. Time will tell whether the proliferation of codes of ethics and standards will be sufficient to mitigate the risks posed by AI, but some legal experts believe that where AI applications impact on human rights, legislation is required to protect those human rights9.

5. A report commissioned by the Wellcome Trust about AI in health, released in April 2018, explores how long-standing principles of medical ethics apply in this new world of technological innovation10. The report raises important ethical, social and political challenges, some of which require regulatory action, such as: what does ʻduty of careʼ mean when applied to those developing algorithms for use in healthcare and medical research? This challenge underscores the importance of a final medical decision being a human one, which may have AI input; regulations, or perhaps professional medical codes, should require that medical practitioners understand the reasoning behind the AI recommendation.

6. Concerning the safety of mobile AI products, some legally enforceable regulations should exist to ensure a level of cybersecurity that prevents hacking and commandeering of the product for malicious purposes. Singapore has recently completed updating its cybersecurity laws and has designated certain entities (known as critical infrastructure institutions) as requiring a higher level of cybersecurity and standards for testing their systems11.

IV. Are existing laws sufficient to regulate artificial intelligence?

7. As mentioned in Section III above, legislation already exists that bears on the malicious use of AI, such as data privacy laws, cybersecurity laws, product liability laws and international humanitarian law concerning the targeting of combatants. These laws were not made with AI in mind, so the extent to which they are sufficient to regulate the negative implications of AI is difficult to gauge without reviewing them. Japan and Singapore are examples of countries that are reviewing their existing laws to determine if they are AI-adequate (see Part D, National Regulations, Sections I and V).

8. Another issue with existing laws is their definitions: are present legal definitions of responsibility and liability sufficient for all applications of AI? While the extent to which courts have adjudicated, if at all, on questions of liability for AI systemsʼ autonomous decisions was not researched, it would be an unsatisfactory state of affairs to approach AI liability case by case in this way, not to mention the delays for victims harmed12. There is also the problem of the AI “black box”, whereby the reasoning behind an AIʼs decisions is unknown or difficult to discern, which has implications for foreseeability when deciding liability claims. However, this problem may resolve itself in the future, as international researchers recently taught AI to justify its reasoning and point to evidence when it makes a decision13.
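To illustrate what “pointing to evidence” can mean in practice, the sketch below shows one generic explainability technique, not the method used in the research cited above: with a simple linear scoring model, the contribution of each input can be reported alongside the decision itself. The feature names and weights are invented for illustration.

```python
# Hypothetical risk-scoring model: feature names and weights are invented.
weights = {"prior_offences": 0.8, "unemployed": 0.5, "age_over_40": -0.3}
subject = {"prior_offences": 2, "unemployed": 1, "age_over_40": 1}

# Each feature's contribution to the score is individually inspectable,
# so the decision can be justified by pointing to the inputs that drove it.
contributions = {f: weights[f] * subject[f] for f in weights}
score = sum(contributions.values())

print("decision:", "high risk" if score > 0 else "low risk")
print("evidence:", contributions)
# decision: high risk
# evidence: {'prior_offences': 1.6, 'unemployed': 0.5, 'age_over_40': -0.3}
```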

9. In the howtoregulate article “Robots: no regulatory race against the machine yet”, the question of what the safety threshold should be prior to a robotʼs introduction onto the market was considered at Part IV, Paragraph 1; the same question applies to AI outside robot applications. This safety threshold should also be one set by law.

V. Should some AI research not be funded?

10. The UK House of Lords Select Committee on AI asked the Leverhulme Centre for the Future of Intelligence at Cambridge University whether some AI research should not be funded. It was suggested that a very small percentage (around 1%) of AI research, regarding applications with a high risk of misuse, should not be published, on the grounds that the risks outweighed the benefits. The Committee recommended that universities and research councils providing grants and funding to AI researchers must insist that applications for such money demonstrate an awareness of the implications of the research and how it might be misused, and include details of the steps that will be taken to prevent such misuse, before any funding is provided. This recommendation is certainly useful for public money but could be difficult to implement with respect to private money.

B. International and supranational regulatory framework

1. At the international and supranational level, AI is regulated through a system of norms and standards as opposed to a formal framework of regulations. A recent example of the effectiveness of these norms occurred when AI researchers from nearly 30 countries boycotted the South Korean university KAIST (Korea Advanced Institute of Science and Technology) over a lab opened with its partner, defence manufacturer Hanwha Systems. The researchers said they would not collaborate with the university or host visitors from the lab because of a Korea Times article describing the lab as “joining the global competition to develop autonomous arms”14. Following the boycott, the universityʼs president reaffirmed that “As an academic institution, we value human rights and ethical standards to a very high degree…I reaffirm once again that KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control”15.

I. United Nations Centre for Artificial Intelligence and Robotics

2. The United Nations (UN) Centre for Artificial Intelligence and Robotics (the Centre) will soon open a facility in The Hague; the UN Interregional Crime and Justice Research Institute (UNICRI), which is responsible for the Centre, launched its AI and robotics programme in 2015. The aim of the Centre is to enhance understanding of the risk-benefit duality of AI and robotics through improved coordination, knowledge collection and dissemination, awareness-raising and outreach activities16. The Centreʼs approach is to build consensus among concerned communities (national, regional, international, public and private) in a balanced and comprehensive manner to progress AI and robotics governance17. Although no ʻUN-endorsed definitionʼ of AI could be found, an AI for Good Global Summit was convened in June 2017 where AI was viewed as a set of associated technologies and techniques that can be used to complement traditional approaches, human intelligence and analytics and/or other techniques18. Once the Centre is fully established, it will presumably settle on a definition of AI to focus its work.

3. Since 2014, under the aegis of the Convention on Certain Conventional Weapons (CCW), experts have been meeting annually to discuss questions related to emerging technologies in the area of lethal autonomous weapons systems (LAWS). The CCW contains the rules for the protection of civilians from injury by weapons used in armed conflicts and also protects combatants from unnecessary suffering19. The CCW includes the following protocols: I (Non-detectable Fragments), II (Use of mines, booby-traps and other devices), III (Use of incendiary weapons), IV (Blinding Laser Weapons) and V (Explosive remnants of war). The Report of the 2017 Group of Governmental Experts on LAWS affirms that:

  • CCW offers an appropriate framework for dealing with the issue of emerging technologies in the area of lethal autonomous weapons systems;
  • International humanitarian law continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems;
  • Responsibility for the deployment of any weapons system in armed conflict remains with States. The human element in the use of lethal force should be further considered;
  • Acknowledging the dual nature of technologies in the area of intelligent autonomous systems that continue to develop rapidly, the Group’s efforts in the context of its mandate should not hamper progress in or access to civilian research and development and use of these technologies; and
  • Given the pace of technology development and uncertainty regarding the pathways for the emergence of increased autonomy, there would be a need to keep potential military applications of related technologies under review in the context of the Group’s work20.

II. European Union

4. On 10 April 2018, a Declaration of cooperation on AI was signed by 25 countries to “work together on the most important issues raised by AI, from ensuring Europeʼs competitiveness in the research and deployment of AI, to dealing with social, economic, ethical and legal questions”21. On 25 April 2018, the European Commission presented its approach to boosting investment and setting ethical guidelines: increasing public and private investment in AI, preparing for socio-economic changes, and ensuring an appropriate ethical and legal framework22. The Commission aims to present ethical guidelines on AI development by the end of 2018, based on the EUʼs Charter of Fundamental Rights and building on the March 2018 report of the European Group on Ethics in Science and New Technologies. A European AI Alliance will be created: the call for applications closed in April 2018 and the groupʼs set-up is anticipated for May 2018. The Commission will also issue guidance by mid-2019 on the interpretation of the Product Liability Directive, to ensure legal clarity for consumers and producers in the case of defective products applying AI technology.

5. In terms of specific completed AI regulation, the EUʼs algorithmic regulation in financial markets is the most advanced. Since 3 January 2018, Article 26 of the EU Markets in Financial Instruments Directive 2 (MiFID 2) framework has required investment firms to include details of the computer algorithms responsible for the investment decision and for executing the transaction. The regulatory body responsible for receiving such details, the European Securities and Markets Authority, has also released a Question and Answer document, which sets out strict guidelines governing the timestamping and recordkeeping of algorithmic trading events, to within 100 microseconds of accuracy, measured against Coordinated Universal Time23.
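As a concrete illustration, the sketch below shows how a firm might record an algorithmic trading event with microsecond-precision UTC timestamps. It is a minimal sketch only: the field names are hypothetical rather than the regulatory reporting schema, and meeting a 100-microsecond accuracy requirement in production additionally demands clocks disciplined to UTC (e.g. via the Precision Time Protocol), which is outside this sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical record structure: field names are illustrative, not the
# regulatory reporting schema itself.
@dataclass
class AlgoTradingEvent:
    timestamp_utc: str   # ISO 8601 with microsecond precision, in UTC
    decision_algo: str   # algorithm responsible for the investment decision
    execution_algo: str  # algorithm responsible for executing the transaction
    instrument: str
    action: str

def record_event(decision_algo: str, execution_algo: str,
                 instrument: str, action: str) -> AlgoTradingEvent:
    # datetime.now(timezone.utc) offers microsecond *resolution*; genuine
    # accuracy against UTC also requires a disciplined clock source.
    now = datetime.now(timezone.utc)
    return AlgoTradingEvent(
        timestamp_utc=now.isoformat(timespec="microseconds"),
        decision_algo=decision_algo,
        execution_algo=execution_algo,
        instrument=instrument,
        action=action,
    )

event = record_event("ALGO-42", "EXEC-7", "DE0001102333", "BUY")
print(event.timestamp_utc)  # e.g. 2018-05-04T09:30:15.123456+00:00
```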

6. The EU is also in the process of updating its regulations on unmanned aircraft systems (drones), its rules on motor insurance third-party liability (Directive 2009/103/EC) and its rules on liability for defective products in light of automated vehicle technology (e.g. various degrees of driverless cars).

III. Organisation for Economic Cooperation and Development

7. The Organisation for Economic Cooperation and Development (OECD) is encouraging policy debate about AI, particularly around the impact AI could have in areas of OECD policy focus such as the economic and social well-being of people around the world. By understanding digital transformation and its effects on economies and societies, the OECD seeks to articulate recommendations for pro-active – rather than reactive – policies that will help drive growth and societal well-being. In 2016 the OECD devoted its Technology Foresight Forum, a biennial meeting begun in 2005 to identify the opportunities and challenges that technical developments pose for the Internet economy, to AI. The 2016 Forum participants defined AI as the capability of a computer programme to perform functions usually associated with intelligence in human beings, such as learning, understanding, reasoning and interacting, in other words to “do the right thing at the right time”24. For example, machines understanding human speech, competing in strategic game systems, driving cars autonomously or interpreting complex data are currently considered to be AI applications25. Following the 2016 Forum, the OECD convened a multi-stakeholder discussion in October 2017, “AI: Intelligent Machines, Smart Policies”. Outcomes of interest for regulators include:

  • The need to identify the degree to which policies are sufficient to address challenges presented by AI.
  • The need to ensure human-centric AI that maximises benefits and minimises risks.
  • The need for a cross-stakeholder collaboration such as a CERN-like (the European Organisation for Nuclear Research where physicists and engineers probe the fundamental structure of the universe) research agency for AI or a data commons (a shared virtual space where scientists can work with the digital objects of biomedical research such as data and analytical tools).
  • Reviewing existing safety and liability regulations and standards based on existing AI-embedded connected products, and not AI-science fiction. There is no one size fits all solution, however, reviewing liability rules should help to identify whether and when a fault-based or strict liability regime may be applied.
  • Holding national and international debates about AI-embedded connected products to explore whether AI will bring safer products.
  • The need to understand the safety benefits that AI may bring, and match them up with possible risks.

8. During 2018, the OECD may consider developing guidance or a draft Council Recommendation on AI, building on work underway in countries like Japan, the United States and the United Kingdom, as well as on private sector and research initiatives such as the Partnership on AI.

C. Standards and Industry initiatives

I. The Institute of Electrical and Electronics Engineers

The Institute of Electrical and Electronics Engineers (IEEE) is the worldʼs largest technical professional organisation dedicated to advancing technology for the benefit of humanity through developing standards and encouraging global debate by convening conferences and writing articles. The IEEE is currently seeking public comments on its Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, which encourages technologists to prioritise ethical considerations in the creation of such systems. The purpose of Ethically Aligned Design is to:

  • Advance a public discussion about how to establish ethical and social implementations for intelligent and autonomous systems and technologies, aligning them to defined values and ethical principles that prioritise human well-being in a given cultural context.
  • Inspire the creation of Standards (IEEE P7000™ series and beyond) and associated certification programs.
  • Facilitate the emergence of national and global policies that align with these principles.26

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) operates to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritise ethical considerations so that these technologies are advanced for the benefit of humanity. Its general principles include:

Principle 1 – Human Rights: How can we ensure that A/IS do not infringe upon human rights?

Principle 2 – Prioritise Well-being: Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.

Principle 3 – Accountability: How can we assure that designers, manufacturers, owners, and operators of A/IS are responsible and accountable?

Principle 4 – Transparency: How can we ensure that A/IS are transparent?

Principle 5 – A/IS Technology Misuse and Awareness of It: How can we extend the benefits and minimise the risks of A/IS technology being misused?

II. International Organization for Standardization

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), acting through their joint technical committee, announced in January 2018 that they were seeking national experts to join a new committee on AI. The new committee is intended to serve as the focus for the ISOʼs standardisation programme on AI and will provide guidance to other ISO committees that are developing AI applications. The first meeting of the ISO AI committee was held in April 2018 in Beijing, China. The following existing ISO/IEC subcommittees develop standards that have AI applications or work with AI:

  • ISO/IEC JTC 1/SC 41: develops International Standards for the internet of things, making connectivity possible;
  • ISO/IEC JTC 1/SC 38: addresses the standardisation of cloud computing for the storage and retrieval of data.27

AI technologies rely upon IEC Standards for hardware components such as:

  • touchscreens (IEC TC 110);
  • audio, video and multimedia systems and equipment (IEC TC 100);
  • generic biometric technologies, including voice recognition (ISO/IEC JTC 1/SC 37), with IEC TC 100/TA 16 addressing voice recognition within the context of active assisted living.28

III. Partnership on AI

Partnership on AI is an industry-led non-profit consortium set up by Google, Apple, Facebook, Amazon, IBM and Microsoft in September 2016 to develop ethical standards for researchers in AI in cooperation with academics and specialists in policy and ethics. Partnership on AI is still in its development phase of recruiting personnel, having recruited its Founding Executive Director in October 2017. The diverse, international multidisciplinary consortium has grown to over 50 Partner organisations in support of a mission to advance AI research and understanding and to minimise the risks and maximise the benefits associated with AI technologies, including machine perception, learning, and automated reasoning. The work of the Partnership on AI covers the following 7 thematic pillars:

  1. Safety-critical AI;
  2. Fair, transparent, and accountable AI;
  3. Collaborations between people and AI systems;
  4. AI, labour, and the economy;
  5. Social and societal influences of AI;
  6. AI and social good; and
  7. Special initiatives29.

IV. Code of ethics for data scientists

At the 2017 Data for Good Exchange, an annual conference exploring how data science can help solve problems for social good, it was announced that a code of ethics for data scientists called the “Community Principles on Ethical Data Sharing” (CPEDS) would be developed. CPEDS will provide a set of guidelines about responsible data sharing and collaboration, including a code of conduct around data sharing, so that data scientists can be thoughtful, responsible and ethical agents of change in their respective organisations. The aim of CPEDS is not to propose comprehensive solutions to questions like how to minimise algorithmic bias, but to define priorities for overall ethical behaviour related to data sharing. Preliminary work to date has focused on framing a larger discussion and arranging it into a framework of themes of concern, including:

  • Data itself: Overall practices surrounding the collection, storage, and distribution of data and understanding and minimising intrinsic bias in collected data;
  • Questions and problems: Identifying valuable and relevant problems to work on and working with pre-existing resources and parties in those fields;
  • Algorithms and models: Understanding and minimising bias in algorithms/models and responsibly dealing with black-box algorithms;
  • Technological products and applications: Responsibility for how one’s research is applied; identifying and guarding against the potential for misuse;
  • Community: Fostering a data science community culture that is actively welcoming to people from diverse backgrounds and deliberately promoting equity and representation, and finding ethical, non-invasive ways to track progress.30

D. National regulations

I. Japan

1. Japanʼs approach to the regulation of AI is focused on its application in robotics, as outlined in its Robot Strategy. Japanʼs Robot Strategy aims to create a society where robots are an important part of everyday human life, and the Robot Revolution Initiative outlined in that strategy is the private-led organisational platform to promote the “Robot Revolution”. The howtoregulate article “Robots: no regulatory race against the machine yet” summarises Japanʼs regulatory approach to robots, which is focused on consumer safety. The Robot Strategy is also linked to Japanʼs Society 5.0, directly managed by Japanʼs Cabinet Office. Society 5.0 is a “human-centered society that balances economic advancement with the resolution of social problems by a system that highly integrates cyberspace and physical space”31 and is a society that Japan aspires to become. AI is a critical enabler in Society 5.0, where “a huge amount of information from sensors in physical space is accumulated in cyberspace. In cyberspace, this big data is analysed by AI, and the analysis results are fed back to humans in physical space in various forms”32. Japanʼs regulatory approach to AI facilitates innovation promoted through public-private partnership, mainly through the Robot Revolution Initiative, which matches users and manufacturers. Japanʼs AI regulation facilitates the take-up of AI: for example, Japan has updated its privacy laws to mitigate the challenges posed by AI and is reviewing its product liability laws.

2. In terms of global robot density Japan ranks fourth33, but in terms of countries with the most robots, Japan ranks first34. Japan is an active proponent of robotic and AI international norms, governance and standards. At the G7 Information and Communication Ministers Meeting in April 2016, Japan proposed that G7 countries take the lead in international discussions on AI research and development (R&D) guidelines, beginning with the OECD. The G7 countries agreed, and Japanʼs draft “AI R&D Guidelines” was used as the basis for discussions of a non-regulatory and non-binding international framework for AI R&D35. The purpose of the AI R&D Guidelines is to increase the benefits and mitigate the risks of AI systems through the sound progress of AI networks, (1) protecting usersʼ interests by deterring the spread of risks and (2) realising a human-centred society. The AI R&D Guidelines are technology-neutral, ensure a balance between benefits and risks, should be reviewed regularly and revised flexibly, and are made up of 9 principles36:

  1. Collaboration: Pay attention to the interconnectivity and interoperability of AI systems.
  2. Transparency: Pay attention to the verifiability of inputs/outputs of AI systems and explainability of their decisions.
  3. Controllability: Pay attention to the controllability of AI systems.
  4. Safety: Take it into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.
  5. Security: Pay attention to the security of AI systems.
  6. Privacy: Take it into consideration that AI systems will not infringe the privacy of users or third parties.
  7. Ethics: Respect human dignity and individual autonomy in R&D of AI systems.
  8. User Assistance: Take it into consideration that AI systems will support users and make it possible to give them opportunities for choice in appropriate manners.
  9. Accountability: Make efforts to fulfill their accountability to stakeholders including users of AI systems.

3. As part of the Robot Strategy and working towards Society 5.0, Japan amended its Act on the Protection of Personal Information (APPI) in 2015 to regulate the appropriate use of big data, including personal data; the amendments came into full effect in May 2017. The amended APPI seeks to balance the protection of an individualʼs rights and interests against the utility of personal information, and creates obligations that a personal information handling business operator shall fulfil. The APPI established the Personal Information Protection Commission, which aggregated the supervising authorities of various regulatory bodies, clarified the definition of personal information and established regulations concerning “anonymously processed information”, meaning information that has been produced by processing personal information in a way that makes a specific individual unidentifiable and hence disallows reconstruction of the personal information. For further details of the amended APPI, see the presentation “The Amended Act on the APPI” at slide 88. With regard to Japanʼs approach to product liability issues relating to AI, discussions on the creation of new legislation to cover these issues have only just begun. Under Japanʼs existing product liability laws, difficulties arise around who bears legal liability for an AIʼs autonomous acts that infringe the rights of others, and around the claimantʼs burden of proof in identifying the defendantʼs negligent act37. For example, it might be difficult for the claimant to establish the defendantʼs negligent act in relation to the autonomous act of the AI, as the defendant might not have been able to foresee the AIʼs autonomous act at the time of putting the product onto the market.
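To make the concept of “anonymously processed information” concrete, the sketch below shows the kind of one-way processing the APPI contemplates: direct identifiers are dropped and quasi-identifiers generalised so that a specific individual can no longer be identified and the original record cannot be reconstructed. The record structure and field names are invented for illustration; the actual processing standards are set out in the Commissionʼs rules.

```python
# Illustrative only: record structure and field names are hypothetical.
record = {
    "name": "Taro Yamada",          # direct identifier
    "email": "taro@example.jp",     # direct identifier
    "age": 47,                      # quasi-identifier
    "city": "Osaka",
    "purchase": "wheelchair",
}

DIRECT_IDENTIFIERS = {"name", "email"}

def anonymise(rec: dict) -> dict:
    """Drop direct identifiers and generalise quasi-identifiers. The
    transformation is one-way (deletion and coarsening, not encryption),
    so the original personal information cannot be reconstructed."""
    out = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
    out["age_band"] = f"{(out.pop('age') // 10) * 10}s"  # 47 -> "40s"
    return out

print(anonymise(record))
# {'city': 'Osaka', 'purchase': 'wheelchair', 'age_band': '40s'}
```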

II. The United States of America

4. The USʼs approach to AI regulation has been incremental, focused on specific pieces of legislation related to particular technologies such as drones and automated vehicles. In December 2017, Congress introduced 3 bills related to AI. The first bill, the “FUTURE of Artificial Intelligence Act of 2017” (AI Act), establishes a Federal Advisory Committee to analyse and report on the impact and growth of AI technology within 540 days following enactment of the AI Act. The AI Act defines AI as including the following:

   (A) Any artificial systems that perform tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance. Such systems may be developed in computer software, physical hardware, or other contexts not yet contemplated. They may solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action. In general, the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.

(B) Systems that think like humans, such as cognitive architectures and neural networks.

(C) Systems that act like humans, such as systems that can pass the Turing test or other comparable test via natural language processing, knowledge representation, automated reasoning, and learning.

(D) A set of techniques, including machine learning, that seek to approximate some cognitive task.

(E) Systems that act rationally, such as intelligent software agents and embodied robots that achieve goals via perception, planning, reasoning, learning, communicating, decision making, and acting.

(2) ARTIFICIAL GENERAL INTELLIGENCE.—The term “artificial general intelligence” means a notional future artificial intelligence system that exhibits apparently intelligent behavior at least as advanced as a person across the range of cognitive, emotional, and social behaviors.

(3) NARROW ARTIFICIAL INTELLIGENCE.—The term “narrow artificial intelligence” means an artificial intelligence system that addresses specific application areas such as playing strategic games, language translation, self-driving vehicles, and image recognition.38

Once established, the Advisory Committee is to provide the Secretary of Commerce with advice on the following topics:

(A) The competitiveness of the United States, including matters relating to the promotion of public and private sector investment and innovation into the development of artificial intelligence.

(B) Workforce, including matters relating to the potential for using artificial intelligence for rapid retraining of workers, due to the possible effect of technological displacement.

(C) Education, including matters relating to science, technology, engineering, and mathematics education to prepare the United States workforce as the needs of employers change.

(D) Ethics training and development for technologists working on artificial intelligence.

(E) Matters relating to open sharing of data and the open sharing of research on artificial intelligence.

(F) International cooperation and competitiveness, including matters relating to the competitive international landscape for artificial intelligence-related industries.

(G) Accountability and legal rights, including matters relating to the responsibility for any violations of laws by an artificial intelligence system and the compatibility of international regulations.

(H) Matters relating to machine learning bias through core cultural and societal norms.

(I) Matters relating to how artificial intelligence can serve or enhance opportunities in rural communities.

(J) Government efficiency, including matters relating to how to promote cost saving and streamline operations.39

5. The second bill, the “AI Jobs Act of 2018”, once enacted will require the Secretary of Labor, in collaboration with others outlined in the bill, to prepare and submit to the relevant House of Representatives committee a report on AI and its impact on the workforce, including:

   (1) Outline the specific data, and the availability of such data, necessary to properly analyze the impact and growth of artificial intelligence.

(2) Identification of industries that are projected to have the most growth in artificial intelligence use, and whether the technology will result in the enhancement of workers’ capabilities or their replacement.

(3) Analysis of the expertise and education (including computer science literacy) needed to develop, operate, or work alongside artificial intelligence over the next two decades, as compared to the levels of such expertise and education among the workforce as of the date of enactment of this Act.

(4) Analysis of which demographics (including ethnic, gender, economic, age, and regional) may experience expanded career opportunities, and which such demographics may be vulnerable to career displacement, due to artificial intelligence.

(5) Any recommendations to alleviate workforce displacement, prepare future workforce members for the artificial-intelligence economy, and any other relevant observations or recommendations within the field of artificial intelligence.40

III. United Kingdom (UK)

6. The UKʼs regulatory focus on AI has to date consisted of a series of new AI-related bodies such as the AI Council, the Government Office for AI (OAI), the Centre for Data Ethics and Innovation (CDEI), and a National Institute for AI (which is the Alan Turing Institute). The establishment of the AI Council and the OAI is underway, starting with the recruitment of a leader for the OAI41. It is not yet clear how these two new bodies will be constituted, or how they might function beyond their broad remit of research and innovation, stimulating AI demand and accelerating uptake across all sectors of the economy, raising awareness of the advantages of advanced data analytic technologies and promoting greater diversity in the AI workforce. The UK Government is also recruiting the Chair of the CDEI, who is required to “start work on key issues straight away”, with the findings to be “used to inform the final design and work programme of the permanent CDEI”, which may see the future CDEI given a “statutory footing in due course”42. At this stage, the objective of the CDEI is to advise the Government on ethical, safe and innovative uses of data, including AI; the CDEI will not be a regulatory body, but it will provide the leadership that shapes how AI is used. Lastly, the National Institute for AI, which will be the responsibility of the Alan Turing Institute, also the centre for the study of data science, will be the “champion of AI research”. Most recently, in April 2018, the UK House of Lords Select Committee on AI released its report “AI in the UK: ready, willing and able?”, which recommended that:

Blanket AI-specific regulation, at this stage, would be inappropriate. We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI.43

And with respect to the issues of liability arising from harm caused by the application of AI in a product, the House of Lords stated:

317. In our opinion, it is possible to foresee a scenario where AI systems may malfunction, underperform or otherwise make erroneous decisions which cause harm. In particular, this might happen when an algorithm learns and evolves of its own accord. It was not clear to us, nor to our witnesses, whether new mechanisms for legal liability and redress in such situations are required, or whether existing mechanisms are sufficient.

318. Clarity is required. We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area. At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.44

IV. China

7. In the last two years China has identified AI as a priority in its Five-Year Plan for National Science and Technology Innovation. After the US, China is the second most prolific AI researcher in the world, producing 23% of the papers related to AI45. China plans soon to publish a guideline and detailed regulations on AI development, safety and ethics; the specifics of Chinaʼs AI regulation are therefore presently unknown to the public, but the announcement is definitely one to monitor given Chinaʼs AI prominence46. However, it must be noted that “China seems to have a low level of engagement with Western countries and institutions on discussions of AI safety across private, public, and academic sectors”47.

8. As a one-party state, China can quickly translate policy goals into AI innovation and develop regulations to facilitate the application of AI. Chinaʼs “techno-utilitarian” approach to regulation means that it iterates as it goes: launch technology quickly and, if there is a problem, fix it, including by changing laws48. This approach to regulation is best seen in the area of autonomous vehicles, which were identified as a key sector for development in 2015. In December 2017, Beijing became the first Chinese city to allow autonomous vehicle road tests, and in January 2018 it completed the first draft of national rules for driverless vehicle road testing.

V. Singapore

9. Singaporeʼs strategic AI policy objective is to build its AI ecosystem through initiatives that connect businesses with AI solution providers, build AI talent through a special AI Singapore Apprenticeship Programme, create AI libraries (the first is a speech recognition engine) and provide regulatory certainty49. Singapore has adopted a “light-touch” regulatory approach to AI, given the nascent state of development, to further encourage market adoption and AI development. Like Japan, Singapore recognises the importance of data in AI development, and in July 2017 it released a Guide to Data Sharing which outlines approaches for data sharing in compliance with Singaporeʼs Personal Data Protection Act. The Guide also articulates a data sharing arrangement framework within a regulatory sandbox, which exempts enterprises from certain obligations in order to trial and support innovative uses of personal data, supported with examples of the types of exemptions50. The purpose of Singaporeʼs regulatory sandbox initiative is to create a safe space for trials and experimentation51.

10. Other regulatory approaches in Singapore have developed along sector-specific lines. For example, the Monetary Authority of Singapore (MAS) has brought industry together to work on guidelines for the “responsible and ethical” use of AI and data analytics in the financial sector52. It aims to complete the guide, which will cover all segments of the financial sector, including fintech firms, by the end of 2018. The MAS approach to determining whether to regulate financial technologies is based on three principles:

  1. getting the timing right: regulation must not front-run innovation, but it is important for the regulator to keep pace with the evolution of technology;
  2. using a materiality and proportionality test: regulation is introduced when the risks cross a certain threshold, and the regulation must be proportionate to the risk posed; and
  3. taking a holistic view of the risks posed by new technologies or solutions: the regulatory approach is to incentivise the risk-mitigating aspects while restraining the new risks53.

MAS has developed guidelines to promote secure cloud computing by financial institutions, which remain directly responsible for security, and is developing regulations that enable customers to benefit from a greater choice of financial advice, at lower cost, from automated robo-advisers54.

VI. Estonia

11. In March 2018, Estonia announced that it will prepare a bill to allow the use of fully autonomous information systems in all areas of life and to ensure clarity in the law around responsibility for the decisions made by such systems, as well as the level of supervision55. An expert group made up of state authorities, universities, companies and independent experts will prepare the bill and the AI strategy, which is to include the development of a test environment in Estonia. Of the jurisdictions researched, Estonia is the first to embark on the ambitious plan of regulating AI in general. The Digital Advisor to the Estonian Prime Minister said:

We decided not to work on traffic laws only, but also the legalisation of artificial intelligence in general. We want to have a discussion within the society so that we agree on the rules of engagement of AI liability. We want to have everyone who is a nonspecialist understand the issue better.56

Given that the Estonian approach is about the general use of AI, its regulation will be one to watch.

VII. Regulation of autonomous vehicles

12. Regulations concerning autonomous vehicles (AVs) are among the more sophisticated regulations of AI systems and, as such, are worth examining for regulatory approaches to the issues of cybersecurity and liability. Cybersecurity is a concern where a hacker successfully hacks an AV and controls it remotely; liability is a concern because, when the AV is in autonomous mode, it is unclear who is responsible when the AV causes harm (the responsible entities under traditional vehicle laws being the driver or the manufacturer). Only time will tell how well consumers take to AVs when the mass roll-out begins. In 2017 Volvo released in Gothenburg, Sweden, 100 of its XC90s with Volvoʼs IntelliSafe Auto Pilot system, which drive families and commuters around autonomously at 50 kilometres per hour on selected roads; Volvo has accepted “full liability whenever one of its cars is in autonomous mode”57. Meanwhile, various surveys about consumer trust in AVs suggest that distrust is high58.

13. Germanyʼs Federal Ministry of Transport and Digital Infrastructure has released its Ethics Commissionʼs complete report on Automated and Connected Driving. The report observes that technological advances toward increasing automation in cars make them safer and reduce accidents, but it adds:

Nevertheless, at the level of what is technologically possible today […] it will not be possible to prevent accidents completely. This makes it essential that decisions be taken when programming the software of conditionally and highly automated driving systems.

[…] At the fundamental level, it all comes down to the following questions. How much dependence on technologically complex systems – which in the future will be based on artificial intelligence, possibly with machine learning capabilities – are we willing to accept in order to achieve, in return, more safety, mobility and convenience? What precautions need to be taken to ensure controllability, transparency and data autonomy? What technological development guidelines are required to ensure that we do not blur the contours of a human society that places individuals, their freedom of development, their physical and intellectual integrity and their entitlement to social respect at the heart of its legal regime?59

The report lists 20 guidelines for the motor industry to consider in the development of automated driving systems. The German transport minister said that the cabinet had adopted the guidelines, making Germany the first government in the world to do so, and recommended that regulators around the world follow a similar approach60. The key elements of the 20 guidelines are:

  • Automated and connected driving is an ethical imperative if the systems cause fewer accidents than human drivers (positive balance of risk).
  • In hazardous situations, the protection of human life must always have top priority: damage to property is to be accepted in preference to personal injury.
  • In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible.
  • In every driving situation, it must be clearly regulated and apparent who is responsible for the driving task: the human or the computer.
  • It must be documented and stored who is driving (to resolve possible issues of liability, among other things).
  • Drivers must always be able to decide themselves whether their vehicle data are to be forwarded and used (data sovereignty).61

14. The UK expects to allow AVs on British roads by 2021 and has developed a Bill that addresses questions over liability for damage caused when the AV is in automated mode. The UK transport minister said the Bill creates “a new compulsory insurance framework that covers the use of AVs and when the driver has legitimately handed control to the vehicle. This will ensure that victims have quick and easy access to compensation”. Specifically, the Bill states that where an AV is insured, the insurer is liable for damage caused by the AV when driving itself, and where the AV is not insured, the owner of the AV is liable for the damage62.

15. The US, federally and at state level, has the most advanced and complex system of AV regulation, which is based around the Society of Automotive Engineersʼ (SAE, standard J3016) international taxonomy of six levels of autonomy (automated driving systems – ADS). Level 0 is no automation; Level 1 is driver assistance; Level 2 is partial automation, such as combined driver assistance in steering and acceleration/deceleration; Level 3 is conditional automation, where the human driver will respond appropriately to a request to intervene; and Levels 4 and 5 are high and full automation, where the system performs the driving task even if a human driver does not respond to a request to intervene (4) or no human driver is required at all (5). For the full taxonomy of levels, see the end of this article. This list outlines US statesʼ enacted self-driving vehicle legislation. Most US states require the driver to be in the driverʼs seat of an AV and to have US$5 million in liability insurance. Some US statesʼ regulations require a human operator to have one hand on the steering wheel at all times, which means that AVs with Level 3 to 5 capability may not engage those levels of ADS. The US state of Arizona has regulated for vehicles that have autonomous vehicle technology added by a third party, which releases the original manufacturer from liability unless the original non-driverless vehicle was manufactured defective63. The US National Highway Traffic Safety Administration (NHTSA) of the federal Department of Transportation has set Federal Motor Vehicle Safety Standards (FMVSSs) for new motor vehicles and motor vehicle equipment (with which manufacturers must certify compliance before they sell their vehicles)64. For AVs, a safety evaluation report must be submitted that explains how the manufacturer addresses: system safety, data recording, cybersecurity, human-machine interface, crashworthiness, documentation of capabilities, post-crash behaviour, accounting for applicable laws, and automation function65. Many US statesʼ regulations enable test driving of Level 5 ADS through a permit system.
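For regulators working with these categories, the distinctions can be encoded directly. The sketch below is a minimal illustration (in Python, with invented names) of the J3016 taxonomy and of why a “one hand on the wheel at all times” rule effectively caps operation at Level 2; it is not drawn from any stateʼs statute.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def human_monitors_environment(level: SAELevel) -> bool:
    # Under J3016, the human driver monitors the driving environment at
    # Levels 0-2; the automated driving system monitors it at Levels 3-5.
    return level <= SAELevel.PARTIAL_AUTOMATION

def engageable_under_hands_on_rule(level: SAELevel) -> bool:
    # A rule requiring one hand on the wheel presupposes a human performing
    # the driving task, so Levels 3-5 cannot lawfully be engaged.
    return level <= SAELevel.PARTIAL_AUTOMATION

for lvl in SAELevel:
    print(lvl.value, lvl.name,
          "| human monitors:", human_monitors_environment(lvl),
          "| engageable under hands-on rule:", engageable_under_hands_on_rule(lvl))
```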

VIII. Further links

The following additional resources, not referenced in this article but reviewed as part of the research, are recommended for those interested in the subject of AI regulation:

Japan has a detailed site in English that explains its laws and policies in the amended Act on the Protection of Personal Information: https://www.ppc.go.jp/en/legal/.

The European AI Landscape Workshop Report (January 2018) outlines the current AI ecosystem of the EU Member States and associated countries: https://ec.europa.eu/digital-single-market/en/news/european-artificial-intelligence-landscape.

World Economic Forum page on AI is very useful: https://www.weforum.org/center-for-the-fourth-industrial-revolution/areas-of-focus.

This article was written by Valerie Thomas, on behalf of the Regulatory Institute, Brussels and Lisbon.

Annex 1 – Society of Automotive Engineers “Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems” (standard J3016)

Human driver monitors the driving environment

  • Level 0 – No Automation: the full-time performance by the human driver of all aspects of the dynamic driving task, even when “enhanced by warning or intervention systems”. (Execution of steering and acceleration/deceleration: human driver; monitoring of driving environment: human driver; fallback performance of dynamic driving task: human driver; system capability: n/a.)
  • Level 1 – Driver Assistance: the driving mode-specific execution by a driver assistance system of “either steering or acceleration/deceleration” using information about the driving environment, with the expectation that the human driver performs all remaining aspects of the dynamic driving task. (Execution: human driver and system; monitoring: human driver; fallback: human driver; system capability: some driving modes.)
  • Level 2 – Partial Automation: the driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment, with the expectation that the human driver performs all remaining aspects of the dynamic driving task. (Execution: system; monitoring: human driver; fallback: human driver; system capability: some driving modes.)

Automated driving system monitors the driving environment

  • Level 3 – Conditional Automation: the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, with the expectation that the human driver will respond appropriately to a request to intervene. (Execution: system; monitoring: system; fallback: human driver; system capability: some driving modes.)
  • Level 4 – High Automation: the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene. (Execution: system; monitoring: system; fallback: system; system capability: many driving modes.)
  • Level 5 – Full Automation: the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver. (Execution: system; monitoring: system; fallback: system; system capability: all driving modes.)

1Japan, China and the UK have all made AI a central part of their economic strategies, see Part III of this article for the details.

2Term coined by World Economic Forum founder Klaus Schwab.

3Crofts, A., “Notes from #WebSummit: Opening Address from Stephen Hawking ʻThe impending impact of AI on humanity: for better, or for worse´”, Medium, 7 November 2017, https://medium.com/web-summelier/notes-from-websummit-opening-address-from-stephen-hawking-442bb4305ff4.

4Brundage, M., Avin, S. et al, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, pp. 16-17, https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf.

5Ibid.

6Gershgorn, D., “AI experts list the real dangers of artificial intelligence”, Quartz, 22 February 2018, https://qz.com/1213524/ai-experts-list-the-real-dangers-of-artificial-intelligence/.

7See n. 4, p. 9.

8Engineering and Physical Science Research Council, Principles of Robotics, https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/.

9The Alan Turing Institute, AI, ethics and the law: What challenges and what opportunities, 18 January 2018, p. 3, https://aticdn.s3-eu-west-1.amazonaws.com/2018/03/140318-Ai-ethics-and-the-law-public-panel-report.pdf.

10Wellcome Trust, Ethical, Social and Political Challenges of AI in Health, p. 4, https://wellcome.ac.uk/sites/default/files/ai-in-health-ethical-social-political-challenges.pdf.

11“Singapore finalises new Cybersecurity Act”, Out-Law.com, February 2018, https://www.out-law.com/en/articles/2018/february/singapore-finalises-new-cybersecurity-act/.

12If the regulator is interested in reading about liability for AI decisions, this article concerns machine bias in the use of algorithms in US courts to assess the likelihood of an individual re-offending: Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016), “Machine Bias”, ProPublica, available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

13Greene, T., “Bye bye black box: Researchers teach AI to explain itself”, TNW News, https://thenextweb.com/artificial-intelligence/2018/02/27/bye-bye-black-box-researchers-teach-ai-to-explain-itself/.

14Haas, B., “ʻKiller robotsʼ: AI experts call for boycott over lab at South Korea university”, The Guardian, 5 April 2018, https://www.theguardian.com/technology/2018/apr/05/killer-robots-south-korea-university-boycott-artifical-intelligence-hanwha.

15Ibid.

16UN Interregional Crime and Justice Research Institute, Artificial Intelligence and Robotics, http://www.unicri.it/topics/ai_robotics/.

17Ibid.

18UN International Telecommunication Union, AI for Good Global Summit Report 2017, p. 13, https://www.itu.int/en/ITU-T/AI/Documents/Report/AI_for_Good_Global_Summit_Report_2017.pdf.

19International Committee of the Red Cross, Factsheet on the 1980 Convention on Certain Conventional Weapons, https://www.icrc.org/en/document/1980-convention-certain-conventional-weapons#.VKkpP2SG-rY.

20UN Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, Report of the 2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems, 20 November 2017, pp. 4-5, https://www.unog.ch/80256EDD006B8954/(httpAssets)/B5B99A4D2F8BADF4C12581DF0048E7D0/$file/2017_CCW_GGE.1_2017_CRP.1_Advanced_+corrected.pdf.

21European Commission Digital Single Market, EU Member States sign up to cooperate on AI, 10 April 2018, https://ec.europa.eu/digital-single-market/en/news/eu-member-states-sign-cooperate-artificial-intelligence.

22European Commission, Press Release: Artificial intelligence: Commission outlines a European approach to boost investment and set ethical guidelines, 25 April 2018, http://europa.eu/rapid/press-release_IP-18-3362_en.htm.

23European Securities and Markets Authority, Questions and Answers on MiFID II and MiFIR market structures topics, 28 March 2018, p. 22, https://www.esma.europa.eu/system/files_force/library/esma70-872942901-38_qas_markets_structures_issues.pdf.

24OECD, “The Economic and Social Implications of Artificial Intelligence Summary”, Technology Foresight Forum 2016, p. 2, http://www.oecd.org/internet/ieconomy/DSTI-CDEP(2016)17-ENG.pdf.

25Ibid.

26The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2, IEEE, 2017, http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.

27Mouyal, N., “AI is listening to you: Recent advances in voice recognition bring AI technology to the home”, e-tech News & views from the IEC, Issue 01/2018, https://iecetech.org/Technology-Focus/2018-01/AI-is-listening-to-you.

28Ibid.

29Partnership on AI, Thematic pillars, https://www.partnershiponai.org/thematic-pillars/.

30Data for Democracy, Code of Ethics, http://datafordemocracy.org/projects/ethics.html.

31Japan Cabinet Office, Society 5.0, http://www8.cao.go.jp/cstp/english/society5_0/index.html.

32Ibid.

33International Federation of Robotics, Media Release: Robot Density Rises Globally, https://ifr.org/ifr-press-releases/news/robot-density-rises-globally.

34World Economic Forum, “This map shows the countries with the most robots”, https://www.weforum.org/agenda/2016/03/this-map-shows-the-countries-with-the-most-robots.

35Conference Toward AI Network Society, AI R&D Guidelines, p. 2, http://www.soumu.go.jp/main_content/000507517.pdf.

36OECD Conference, “Session 5. AI Policy Landscape: AI R&D Guidelines”, AI: Intelligent Machines, Smart Policies, Paris 26-27 October 2017, http://www.oecd.org/going-digital/ai-intelligent-machines-smart-policies/conference-agenda/ai-intelligent-machines-smart-policies-hirano.pdf.

37Ikeda, J. et al, “Product liability and safety in Japan: Overview”, Thomson Reuters Practical Law, as at 1 January 2018, https://uk.practicallaw.thomsonreuters.com/w-012-7145?transitionType=Default&contextData=(sc.Default)&firstPage=true&bhcp=1.

38Section 3 of H.R. 4625 – FUTURE of Artificial Intelligence Act of 2017, https://www.congress.gov/bill/115th-congress/house-bill/4625/titles.

39JD Supra, https://www.jdsupra.com/legalnews/multiple-artificial-intelligence-bills-22422/.

40Section 3 of H.R. 4829 – AI JOBS Act of 2018 https://www.congress.gov/bill/115th-congress/house-bill/4829/text.

41Trendall, S., “Government recruits for ʻpublic faceʼ of AI”, Civil Service World, https://www.civilserviceworld.com/articles/news/government-recruits-%E2%80%98public-face%E2%80%99-ai.

42UK Government, Search for Leader of Centre for Data Ethics and Innovation launched, 25 January 2018, https://www.gov.uk/government/news/search-for-leader-of-centre-for-data-ethics-and-innovation-launched.

43UK House of Lords Select Committee on AI, “AI in the UK: ready, willing and able?”, Report of Session 2017-19, 16 April 2018, p. 118, paragraph 386, https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2018/04/AI-in-the-UK-ReadyWillingAndAble-April-2018.pdf.

44Ibid., p. 100, paragraphs 317 and 318.

45Zhihao, Z., “Minister: Plan to boost AI research”, China Daily, 12 March 2018, http://www.chinadaily.com.cn/a/201803/12/WS5aa5bdd4a3106e7dcc140ed0.html.

46Ibid.

47Ding, J., Deciphering Chinaʼs AI Dream: the context, components, and consequences of Chinaʼs strategy to lead the world in AI, Future of Humanity Institute, University of Oxford, March 2018, https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf.

48Parasol, M., “Chinaʼs preferential regulations can help it win the self-driving race”, Tech in Asia, 9 March 2018, https://www.techinasia.com/talk/china-win-self-driving.

49Info-communications Media Development Authority Singapore, Fact Sheet: AI Industry Initiatives, https://www.imda.gov.sg/-/media/imda/files/about/media-releases/2017/annex-a—ai-industry-initiatives.pdf?la=en.

50Personal Data Protection Commission Singapore, Guide to Data Sharing, published July 2017, pp. 13-17, https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Other-Guides/guide-to-data-sharing-(270717).pdf.

51Lung, N., “Minister Vivian Balakrishnan on 7 major tech trends and Singaporeʼs regulatory approach”, OpenGov Asia, 19 April 2018, https://www.opengovasia.com/articles/minister-vivian-balakrishnan-on-7-major-tech-trends-and-singapores-regulatory-approach.

52Monetary Authority of Singapore, MAS and financial industry to develop guidance on responsible use of data analytics, 2 April 2018, http://www.mas.gov.sg/News-and-Publications/Media-Releases/2018/MAS-and-financial-industry-to-develop-guidance-on-responsible-use-of-data-analytics.aspx.

53Menon, R., Speech by Managing Director of the Monetary Authority of Singapore, Singaporeʼs FinTech Journey – Where we are, What is next, 16 November 2016, http://www.mas.gov.sg/News-and-Publications/Speeches-and-Monetary-Policy-Statements/Speeches/2016/Singapore-FinTech-Journey.aspx.

54Ibid.

55Republic of Estonia Government Office, Estonia will have an artificial intelligence strategy, 27 March 2018, https://riigikantselei.ee/en/news/estonia-will-have-artificial-intelligence-strategy.

56Niiler, E., “In Estonia, planning for life alongside robots”, Medium, 22 November 2017, https://medium.com/cxo-magazine/estonia-is-teaching-the-world-how-to-live-with-robots-da9145f9f170.

57Gorzelany, J., “Volvo will accept liability for its self-driving cars”, Forbes Online, 9 October 2015, https://www.forbes.com/sites/jimgorzelany/2015/10/09/volvo-will-accept-liability-for-its-self-driving-cars/#51607b8e72c5.

58Hutson, M., “People donʼt trust driverless cars. Researchers are trying to change that”, Science, 14 December 2017, http://www.sciencemag.org/news/2017/12/people-don-t-trust-driverless-cars-researchers-are-trying-change.

59Ethics Commission of the German Federal Ministry of Transport and Digital Infrastructure, Ethics Commissionʼs complete report on automated and connected driving, 28 August 2017, p. 6, http://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.pdf?__blob=publicationFile.

60German Federal Ministry of Transport and Digital Infrastructure, Press release: Ethics Commission on Automated Driving presents report – Dobrindt: First guidelines in the world for self-driving computers, 28 August 2017, https://www.bmvi.de/SharedDocs/EN/PressRelease/2017/084-ethic-commission-report-automated-driving.html.

61See n. 42, pp. 11-12.

62Section 2 of the Vehicle Technology and Aviation Bill (HC Bill 143), https://publications.parliament.uk/pa/bills/cbill/2016-2017/0143/cbill_2016-20170143_en_2.htm#pt2-pb1-l1g8.

63Arizona House Bill 2167.

64US National Highway Traffic Safety Administration, “Section 2: Technical assistance to states: Best practices for legislatures regarding automated driving systems”, p. 20, https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/automated-driving-systems-2.0-best-practices-for-state-legislatures.pdf.

65National Conference of State Legislatures Office of State-Federal Relations, Info Alert: Senate Releases Bi-Partisan Autonomous Vehicle Legislation that Pre-empts States, p. 2, http://www.ncsl.org//Portals/1/Documents/standcomm/scnri/senate_commerce_ads_1_25672.pdf.
