Report on Artificial Intelligence: Part II – outline of future regulation of AI

This Part II of the howtoregulate Report on Artificial Intelligence presents regulatory approaches for minimising the harms of artificial intelligence (AI), evidently without calling into question the utility of AI. What should regulation of Artificial Intelligence look like? The answer depends on the goals of the regulator. As outlined in Part I, the goals of states today focus largely on incentivising innovative applications of AI or encouraging breakthrough AI research. We could imagine, however, that the average regulator might also consider goals such as avoiding the risk that AI research or technology leads to the eradication of humankind, and reducing other major risks for human beings to the extent that the expected positive effect of AI is not disproportionately hampered. Furthermore, regulators might feel compelled to deal with particular risks linked to specific technological uses.

The goals agreed upon will need to be operationalised and incentives for compliance created. The four-part howtoregulate series of articles on the regulation of research and technology risks in general (see here) presented regulatory tools, including a prototype regulation, which can help to contain risks linked to research and technology. A methodology for developing concrete regulations is outlined in the Handbook “How to regulate?”. In this report, we omit the intermediate steps and present only the result of the exercise outlined in the Handbook.

A. Requirements for minimising the risks of artificial intelligence

1. In this section we look at the requirements or obligations that could be placed on companies, research institutions, researchers and developers of Artificial Intelligence research and technology to minimise the risks of AI. Section I outlines some general requirements based on the type of AI activity, research or technology. These general requirements help the regulator focus limited resources on higher-risk activities, or perhaps prohibit activities that cannot be regulated to a satisfactory level of safety, for example due to insufficient resources. Sections II to VIII examine specific issues of concern peculiar to AI, such as algorithms and data, to name a few.

I. Requirements based on artificial intelligence activity: research or technology

2. Two distinct areas of AI merit the development of requirements:

  • Research: there appears to be general agreement in the Artificial Intelligence community that AI research should be conducted for the benefit of human well-being. The corollary is that where AI research has no tangible benefit to human well-being, endangers humankind or carries a high risk of misuse in its application, that research should not occur [1].

  • Technology: noting that many applications of AI can be dual-use, what requirements should be developed to encourage positive uses or dissemination of AI technology while prohibiting negative uses? Safety may require rules about how algorithms work and about the data making up the AI's datasets. What data can the Artificial Intelligence access and how does it learn from it? How is tampering prevented? How is the AI protected from hacking? What standards should be applied?

3. Entities such as companies, research institutions and individual researchers undertaking AI research could for example be required to:

  • assess risks prior to starting their research undertaking, with or without application of a relevant risk management standard;

  • reduce risks linked to the research undertaking, to the extent possible, if the risk reduction does not endanger the purpose of their research, with or without application of a relevant risk management standard;

  • refrain from research undertakings which trigger disproportionate / high risks;

  • inform the responsible authority of risks for health and life linked to a research undertaking;

  • request an authorisation from the responsible authority if the research undertaking triggers risks for the health and life of a larger number of individuals;

  • request an authorisation from the responsible authority if the research undertaking triggers risks for the economic or ecological survival of the society in the jurisdiction in question or of societies in other jurisdictions; or

  • register research projects in a database with two levels: public information and information only accessible to the authority in charge.

4. For companies and their staff developing or using risky AI technologies, they could for example be required to:

  • assess risks prior to using and disseminating technologies, with or without application of a relevant risk management standard;

  • reduce risks linked to the technologies to the extent possible, if the risk reduction does not endanger the prevailing utility of the technologies, if any; this can be done with or without application of a relevant risk management standard;

  • refrain from using or disseminating technologies which trigger disproportionate / high risks;

  • inform the responsible authority of risks for health and life linked to use of these technologies;

  • request an authorisation from the responsible authority if the technologies trigger risks for the health and life of a larger number of individuals;

  • request an authorisation from the responsible authority if the technologies trigger risks for the economic or ecological survival of the society in the jurisdiction in question or of societies in other jurisdictions; or

  • register the use and the dissemination of risky technologies in a database with two levels: public information and information only accessible to the authority in charge (a minimal sketch of such a two-tier record follows this list).
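To make the two-tier registration database more concrete, the following minimal sketch (in Python, purely for illustration) shows what a registration record with a public layer and an authority-only layer might look like. All field names, the risk classes and the public_view() helper are assumptions introduced here; they are not drawn from any existing regulation.

```python
from dataclasses import dataclass, field, asdict
from datetime import date


@dataclass
class RiskyAIRegistration:
    """Illustrative two-tier registration record for risky AI research or technology.

    The public-tier fields could be published in an open register; the
    restricted-tier fields would be accessible only to the authority in charge.
    """

    # Public tier
    registrant: str        # company or research institution
    activity_type: str     # "research" or "technology"
    summary: str           # non-confidential description of the undertaking
    risk_class: str        # e.g. "low", "medium", "high" (assumed classes)
    registered_on: date

    # Restricted tier (authority access only)
    detailed_risk_assessment: str = field(repr=False, default="")
    datasets_used: list = field(repr=False, default_factory=list)
    contact_of_responsible_person: str = field(repr=False, default="")

    def public_view(self) -> dict:
        """Return only the fields meant for the public register."""
        public_fields = {"registrant", "activity_type", "summary",
                         "risk_class", "registered_on"}
        return {k: v for k, v in asdict(self).items() if k in public_fields}


# Illustrative use: the public view omits the confidential material.
record = RiskyAIRegistration(
    registrant="Example Robotics Ltd",
    activity_type="technology",
    summary="Autonomous delivery robot pilot in a city centre",
    risk_class="medium",
    registered_on=date(2018, 7, 1),
    detailed_risk_assessment="(confidential)",
)
print(record.public_view())
```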

II. Algorithms

5. Algorithms are the basic building blocks of AI. Algorithmic decision making (ADM) is used to determine who is hired, who receives a good credit rating, where to drive and what news or advertisements pop up in our social media feeds, and to help judges determine a defendantʼs risk of recidivism. Underpinning ADM is complex mathematics, and this mathematics influences peopleʼs lives in significant ways; one mathematician called such systems “weapons of math destruction” [2]. These algorithms are mostly developed by private companies. They are complex and also protected by trade secrecy. As a citizen or customer you want to know why a decision was made and what information was used, but you probably do not have the right to know or to access the particular information used. Given the profound effect ADM can have on our lives, it is paradoxical that there is not more supervision or accountability.

6. In establishing algorithmic requirements, regulators should differentiate between different algorithmic applications. For example, algorithms used in determinations in the justice system (see the example of Compas, a risk-assessment tool about recidivism developed by a company and used by the Department of Corrections of the US state of Wisconsin) may warrant more stringent requirements than those used in cleaning or waste disposal. Algorithms that affect individuals directly could be required to be verified as having no biases (this may not be possible, noting that humans are evidently not without bias), or the biases contained in the algorithms could be required to be disclosed. By analogy, all medicines sold include information about side effects, such as “one in 10 people may experience headaches” or “one in 100 people may experience vomiting”. Where biases of algorithms are disclosed, particularly in ADM systems that affect people significantly, regulators may want to consider what level of human intervention is required to mitigate these biases; a minimal illustration of how a disclosed bias could be measured follows this paragraph. Although the IEEE [3] (the worldʼs largest technical professional organisation for the advancement of technology) is still developing its standards on algorithms, regulators should watch this space when developing algorithmic requirements and consider whether algorithms already in use should be audited against the IEEE standard, once available, to establish their risks.
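As a purely illustrative aid, the following Python sketch shows one way a disclosed bias could be quantified before an ADM system is deployed: a demographic parity difference computed over a test set. The metric, the 0.2 threshold and the example data are assumptions made for this sketch; they are not drawn from the IEEE work in progress or from any existing legal requirement.

```python
from collections import defaultdict


def demographic_parity_difference(decisions, groups):
    """Largest gap in favourable-decision rates between demographic groups.

    decisions: iterable of booleans (True = favourable outcome, e.g. loan granted)
    groups:    iterable of group labels aligned with decisions
    """
    favourable = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        favourable[group] += int(decision)
    rates = {g: favourable[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates


# Illustrative audit: a regulator or self-regulating body could require the
# disclosed gap to stay below an agreed threshold, or require human review above it.
decisions = [True, False, True, True, False, False, False, False]
groups    = ["A",  "A",   "A",  "B",  "B",   "B",   "B",   "A"]

gap, rates = demographic_parity_difference(decisions, groups)
print(f"favourable-decision rates per group: {rates}")
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.2:  # 0.2 is an assumed, illustrative tolerance
    print("Gap exceeds the disclosed tolerance - human intervention required.")
```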

7. Designers of algorithms should also be required to disclose the ethical decisions or conditions the AI is programmed to make. Alternatively, regulators could require algorithmic decisions to be made in particular ways. This is the German approach adopted in its report on Automated and Connected Driving, developed by the Ethics Commission of Germanyʼs Federal Ministry of Transport and Digital Infrastructure. The key elements of the approach are:

  • Automated and connected driving is an ethical imperative if the systems cause fewer accidents than human drivers (positive balance of risk);

  • Damage to property must be accepted in preference to personal injury: in hazardous situations, the protection of human life must always have top priority;

  • In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible (concerns the classic trolley problem);

  • In every driving situation, it must be clearly regulated and apparent who is responsible for the driving task: the human or the computer;

  • It must be documented and stored who is driving (to resolve possible issues of liability, among other things); and

  • Drivers must always be able to decide themselves whether their vehicle data are to be forwarded and used (data sovereignty).

Most of these basic principles could be applied by analogy to other AI applications as well.
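To illustrate how a regulator-imposed decision rule could be reflected in code, the sketch below encodes two of the principles above: property damage is accepted in preference to personal injury, and no distinction between individuals based on personal features is made in unavoidable accident situations. The data structures and scoring are simplifying assumptions for illustration, not an implementation of the German guidelines.

```python
from dataclasses import dataclass


@dataclass
class ManoeuvreOption:
    name: str
    expected_personal_injuries: int   # number of people likely to be harmed
    expected_property_damage: float   # monetary estimate, in euros
    # Deliberately NO fields for age, gender or other personal features:
    # the rule against distinguishing individuals is enforced by omission.


def choose_manoeuvre(options):
    """Pick the option that first minimises personal injury, then property damage."""
    return min(
        options,
        key=lambda o: (o.expected_personal_injuries, o.expected_property_damage),
    )


# Illustrative use: swerving damages a parked car but avoids injuring a pedestrian.
options = [
    ManoeuvreOption("brake only", expected_personal_injuries=1, expected_property_damage=0.0),
    ManoeuvreOption("swerve into parked car", expected_personal_injuries=0, expected_property_damage=8000.0),
]
print(choose_manoeuvre(options).name)   # -> "swerve into parked car"
```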

III. Data

8. If the algorithm is the building block of an AI, then data is its lifeblood. Machine learning used in AI involves algorithms that progressively improve themselves by consuming vast amounts of data to better spot patterns. Getting better at spotting patterns means that autonomous vehicles better recognise objects on a road, or that a bankʼs AI systems better understand customer behaviour and so better spot fraud. However, it is not always clear which ethical rules companies and research institutes apply in sourcing the data, how the veracity of the data is established, how the data is protected, whether consent to use the data was obtained, or which data the AI will be able to access in future. Certainly, national data protection laws exist, but many of these laws were not designed for the scale of data used in machine learning.

9. Regulators could consider the following requirements for data used in AI research and technology:

  • Consent to data used: it seems obvious that where personal data is used the necessary consent should be obtained, but what about anonymised data sets used alongside other anonymised data sets, with a machine looking for patterns? How difficult is it for such anonymous data to become re-identifiable? (A minimal sketch of a re-identification check follows this list.)

  • Veracity of data: efforts should be made to ensure the accuracy of the data used in AI, to reduce the effect of any “data poisoning”, and requirements could be set around the inferences drawn, to ensure that profiling or discrimination does not form part of the AI.

  • Access to data: if the AI has been designed with the capacity to search for data, requirements around the types of data the AI can access should be set, for example checking or asking for consent and adhering to intellectual property laws.
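The re-identification concern raised in the first bullet can be illustrated with a simple k-anonymity check over quasi-identifiers, sketched below in Python. The column names, the example records and the threshold k = 5 are assumptions chosen for illustration only, not a legal or technical standard.

```python
from collections import Counter

# Each record is "anonymised" (no name), yet the combination of quasi-identifiers
# may still single out an individual once data sets are joined.
records = [
    {"postcode": "1050", "birth_year": 1984, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "1050", "birth_year": 1984, "gender": "F", "diagnosis": "diabetes"},
    {"postcode": "1050", "birth_year": 1991, "gender": "M", "diagnosis": "asthma"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")
K = 5  # assumed minimum group size; the appropriate value is a regulatory choice


def smallest_group_size(rows, quasi_identifiers):
    """Return the size of the rarest combination of quasi-identifier values."""
    combos = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(combos.values())


if smallest_group_size(records, QUASI_IDENTIFIERS) < K:
    print("Data set is not k-anonymous: some individuals may be re-identifiable.")
```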

IV. Safety requirements for AI products and services

10. In addition to the safety requirements for products and services contained in paragraph 10 of Part IV of “Robots: no regulatory race against the machine yet”, it is worth investigating the utility of a “kill switch” as a precautionary risk control mechanism that either enables a human override or relies on risk indicators which automatically stop an AI system once a risk is detected. Manufacturers could also be required to make their robots identifiable by including identification chips that are protected from alteration. For AI applications other than robots, regulators could develop standards for secure hardware that would prevent copying a trained AI model off a chip without the original copy first being deleted [4]. Other safety requirements could also be developed under the following subjects:

  • Hardware: in addition to those listed above (tamper-proof design, kill switch, identification chip), permanent localisation to ensure retrievability in case of loss or theft.

  • Software: built-in risk indicators to detect risks and start corrective measures or shut down.

  • Access to data: in addition to the consent requirements above, data connection requirements could also be made to determine how the AI should operate once the data connection is lost.

  • Additional requirements in safety-critical contexts such as AI use in aviation, electricity or surgery.

  • Fixing a limit for acceptable risks: either qualitatively or – at least in theory – quantitatively, i.e. requiring that a certain hazard does not become reality in more than one out of 1,000 or one out of 1,000,000 cases. Evidently, the acceptability limits should depend on the severity of the harm / hazard (a minimal sketch combining such a quantitative limit with a kill switch follows this list).

  • AI technology sold as tuning kits, whether robots or software designed to perform a task or service, for example autonomous vehicle software and sensor systems bought to make an ordinary vehicle more automated.
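As a purely illustrative sketch of how the “kill switch”, the built-in software risk indicator and a quantitative acceptability limit could interact, the Python wrapper below stops an AI component once its observed failure rate exceeds a fixed threshold (here one in 1,000 decisions). The threshold, the notion of “failure” and the class design are assumptions made for this sketch, not requirements drawn from any existing standard.

```python
class MonitoredAISystem:
    """Wraps an AI decision function with a built-in risk indicator and kill switch."""

    def __init__(self, decide, max_failure_rate=1 / 1000, min_observations=100):
        self._decide = decide                    # the underlying AI decision function
        self._max_failure_rate = max_failure_rate
        self._min_observations = min_observations
        self._decisions = 0
        self._failures = 0
        self.stopped = False

    def record_outcome(self, failed: bool) -> None:
        """Feed back whether the last decision caused a hazard; may trip the kill switch."""
        self._decisions += 1
        self._failures += int(failed)
        enough_data = self._decisions >= self._min_observations
        if enough_data and self._failures / self._decisions > self._max_failure_rate:
            self.stop("observed failure rate above the acceptable limit")

    def stop(self, reason: str) -> None:
        """Kill switch: callable automatically (risk indicator) or by a human override."""
        self.stopped = True
        print(f"AI system stopped: {reason}")

    def decide(self, *args, **kwargs):
        if self.stopped:
            raise RuntimeError("AI system has been stopped and requires human review")
        return self._decide(*args, **kwargs)
```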

V. Cybersecurity

11. Although cybersecurity laws exist in most countries (see the howtoregulate article “Cybersecurity: regulating the virtual world” for details), such laws should be reviewed to ensure that the requirements for AI products and services are robust. It may be beneficial to specify cybersecurity levels for different AI products and services. For example, the cybersecurity requirements in aviation are understandably stringent (even so, vulnerabilities still exist that require constant vigilance by regulators; see the report “Cyber-Security, a new challenge for the aviation and automotive industries”), whereas for cleaning robots the cybersecurity requirements could perhaps be less strict. The “Malicious use of AI” report suggests exploring the following cybersecurity requirements:

  • Regular red team exercises: an exercise where a “red team” made up of suitably qualified experts deliberately plans and carries out attacks against the systems and practices of the organisation (evidently with limits to prevent damage), to explore any vulnerabilities with a view to improving the organisationʼs systems and practices;

  • Formal verification: verifying that the AIʼs internal processes do in fact attain the goals specified for the system, that its goals remain constant in the face of attempts by adversaries to change them, and that AI actions based on deception from adversarial inputs can be bounded in some way (a rough illustrative test of such bounding follows this list). Although formal verification provides high levels of consumer protection, designing appropriate tests is not an easy task and may be prohibitively expensive;

  • Responsible disclosure of AI vulnerabilities;

  • Security tools; and

  • Secure hardware: requiring a level of secure or tamper-proof hardware for dual-use AI applications may be useful to protect the embedded models and knowledge from reverse engineering by malevolent actors [5].
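As a rough illustration of the idea that “AI actions based on deception from adversarial inputs can be bounded in some way”, the Python sketch below checks empirically that small perturbations of an input do not change a classifier's decision. This is a sampling-based test rather than formal verification proper; the toy model, the perturbation size and the number of trials are assumptions chosen for illustration.

```python
import random


def decisions_bounded_under_perturbation(classify, base_input, epsilon=0.05, trials=200):
    """Check that the decision stays stable for sampled perturbations within +/- epsilon.

    classify:   function mapping a list of floats to a discrete decision
    base_input: the nominal input vector (list of floats)
    """
    reference = classify(base_input)
    for _ in range(trials):
        perturbed = [x + random.uniform(-epsilon, epsilon) for x in base_input]
        if classify(perturbed) != reference:
            return False  # an adversarial-style input flipped the decision
    return True


# Illustrative toy classifier: approves if the weighted score clears a threshold.
def toy_classifier(features):
    score = 0.6 * features[0] + 0.4 * features[1]
    return "approve" if score > 0.5 else "reject"


print(decisions_bounded_under_perturbation(toy_classifier, [0.9, 0.8]))    # well away from the boundary
print(decisions_bounded_under_perturbation(toy_classifier, [0.52, 0.5]))   # near the decision boundary
```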

VI. Employees working with artificial intelligence

12. As previously mentioned, it is paradoxical that there is not more supervision or accountability of ADM given the effect it has on our lives. This also applies to the question of who develops and who operates AI. What are the qualifications of those developing the algorithms? Undoubtedly, employees developing algorithms in companies or research institutes are experts in their field (science, engineering, mathematics, information technology), but given the social and cultural implications of ADM, other qualifications from the humanities may be equally important. Regulators may wish to consider specific training or education requirements for employees in particularly sensitive sectors of Artificial Intelligence, such as health, justice or defence, to ensure that employees designing algorithms understand how they will be used and that the data points training the algorithms are representative. It is also important to prioritise continuous professional training and possibly to require employees to demonstrate an ethical understanding of their work.

13. Noting the dual-use nature of AI research and technology, it may be prudent to consider security requirements for employees working in research institutions and companies. For example, strict regulations govern access to biological materials (bacteria and toxins) by employees in private and public biological facilities, and yet breaches occur and evidence suggests that biosecurity education among life scientists is poor [6]. Companies typically have a business interest in ensuring that threats posed by their employees are managed through background checks, monitoring of employee behaviour, the principle of least-privilege access, control of user access, monitoring of user actions and education of employees. Regulators could consider such precautionary requirements for all dual-use Artificial Intelligence research and technology (see “Insider Threats as the Main Security Threat in 2017” for how these requirements operate).

VII. Social and cultural aspects

14. Many have commented on the ʻdisruptionʼ we can expect from wider use of Artificial Intelligence in our society and lives. States are investigating policy responses, through the welfare, health, education and tax systems, for those left behind by this disruption [7]. If the regulator opts for a risk management approach to regulating the use and dissemination of AI technologies, companies could be required to assess risks of human labour displacement, risks to the economic sustainability of cities or towns where the AI technology will be used, any likely increases in demands on social welfare, or the effect on the environment. Companies could be required to reduce such risks, for example by re-training the displaced human labour. It is reasonable, and indeed encouraged, that a company seeks to innovate and automate to maximise efficiency dividends. However, such “efficiency dividends” for the company do not reflect the costs placed on the health and welfare system by any displacement of human labour. In the example of products put on the market such as robo cleaners, it could be expected that there would be less demand for human cleaners, and cleaning services companies may well replace human cleaners with robo cleaners. It may not be appropriate to require manufacturers or distributors to mitigate risks of human labour displacement caused by a consumer choice. However, staying with the robo cleaner example, fragile demographics such as low-wage workers, people with limited education, women and migrants are over-represented among human cleaners, so the wider use of robo cleaners would have a significant impact on this group. Companies putting Artificial Intelligence products on the market could be required to examine the societal and environmental impacts of their products, at least to serve as a trigger for the state to develop mitigation strategies in the social welfare system, should that be required.

VIII. Liability and insurance

15. Whilst some authors have already mentioned the need for liability provisions applicable to designers / manufacturers / owners / users of robots, little has been said about the necessity of requiring insurance covering liability claims. Ultimately, this will depend on how the insurance industry views the profit-versus-risk equation; if the insurance industry does not create insurance products for AI risk, the state may wish to consider a public liability scheme funded by revenue from AI technology. However, it goes without saying that liability provisions are worth nothing in case of insolvency unless they are backed by an insurance or subsidiary state liability.

B. Ensuring compliance

In this section we look at specific measures to ensure compliance with the requirements outlined in the previous section.

I. Enforcement powers of authorities

1. The countries researched in Part IV of “Report on AI: Part I – the existing regulatory landscape” had a dedicated policy centre for AI with close links to AI research institutes. Countries committed to direct public investment in AI research and technology take-up seemed to monitor closely the recipients of public funding. Although no explicit evidence of enforcement of safety standards was found, there are most likely government criteria, based on good standards for research and product development, for deciding who receives funding. However, many countries did not have a central policy centre for Artificial Intelligence, and those countries with an AI focus did not have authorities empowered to enforce standards in AI research, save for research reliant on public funding.

2. To enable authorities to act, it is necessary that they are informed in advance, or as soon as possible, about potentially risky research projects or technologies. The information obligations contained in the previous section might not be sufficient to ensure this result. The authorities might need access, on request, to the work programmes of companies or the documentation of universities and other research institutes. They should therefore have comprehensive investigative powers. In cases of high-risk undertakings, prior authorisation procedures should be mandatory. For details, see paragraphs 16 to 19 of “Regulating Research and Technology Risks: Part II – Technology Risks”.

3. In addition, authorities should have the means and, even more importantly, the scientific competence to assess the risks linked to research or technologies. To increase their effectiveness, authorities should be empowered to cooperate with their peers in the same and in other jurisdictions, and this empowerment should include the right to transmit information on persons and confidential information relating to the research institutions or private companies undertaking research and using or disseminating technologies. Regulators should consider developing and promulgating a list of research areas that should not be funded and of technologies that should be closely monitored.

4. Once authorities have identified noteworthy risks, they need legal empowerments and work capacity to monitor and react to AI research projects and the use or dissemination of AI technologies which pose a particular risk. These measures might have a temporary or a definitive character. The range of measures to be taken by the authority should be as generic as legally possible in the respective jurisdiction, because many different types of measures might be needed in a given case. However, the respective empowerments contained in the regulation should explicitly mention the most far-reaching measures, such as the confiscation of objects, including computers and documents, the sealing of facilities and the destruction of harmful objects. In jurisdictions which require extremely precise and delimited empowerments, regulators might appreciate studying the comprehensive empowerments in Singaporeʼs Air Navigation (Amendment) Act 2014 at Section 4 or in the Ugandan Anti-Money Laundering Act 2013. See also the “Empowerments checklist” which is expected to be published on howtoregulate.org within two months after publication of this report. Regulators would of course also need to consider how to monitor and enforce secure hardware requirements, particularly where supply chain vulnerabilities exist.

5. In cases of extremely high risks, empowerments to supervise electronic communications and telecommunications might be considered appropriate, whilst a limitation of the individual’s right to keep communications confidential might not be justified in cases of minor risks (principle of proportionality, applied at constitutional level in some jurisdictions).

6. When measures have been taken, the authority should have the legal power to communicate its decision to peers and to enforce such decisions in its own jurisdiction, as well as in other jurisdictions in which regulated entities operate, evidently with the agreement of the latter. This is necessary because nothing is gained if risky research is simply relocated to another jurisdiction. Communicating measures to peer authorities might also stop a downward competitive spiral in terms of control intensity. Evidently, administrations are expected to be research-friendly. Accordingly, there might be a political objective to maintain a research- and innovation-friendly environment and, consequently, to ignore associated risks. Individual agents of administrations who prefer to counter this pressure are in a better position if they can refer to measures taken in other jurisdictions. The exchange of information on measures considered or taken is thus very important for maintaining a climate which is open to justified risk limitation.

II. Self-regulating bodies and certification

7. The installation of self-regulating bodies is a very useful technique complementing other compliance measures. Self-regulating bodies could provide basic certification with regard to ethical procedures for AI development, around algorithms, data and cybersecurity processes for example. Certification could be based on the to-be-released IEEE standard on ethical design and other “best practices”. Whilst it might go too far to require external quality system certification, certification by a self-regulating body could be a proportionate way of verifying that suitable internal processes have been established. This type of certification is to be distinguished from certification against legal requirements by certification bodies entrusted by the state, which is regarded as an alternative to state authorisation procedures.

8. To ensure conformity with the major legal obligations, regulators might consider establishing procedural obligations for research institutions, such as undergoing certification to prove that the institutions’ internal processes ensure the fulfilment of these obligations. For companies using or disseminating technologies, certification could also be used to ensure that internal processes fulfil their legal obligations. Risk management certification would be particularly helpful. Procedural obligations should be proportionate to the risks. To that end, it might be useful to establish risk classes and assign sets of (proportionate) procedural obligations to them (a minimal sketch of such a mapping follows this paragraph). See the howtoregulate article about risk classification, “Research and Technology Risks: Part III – Risk Classification”.
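A minimal sketch, assuming three invented risk classes, of how sets of proportionate procedural obligations could be expressed as a simple configuration; the class names and obligations are illustrative and do not reproduce the classification proposed in the cited article.

```python
# Illustrative mapping of risk classes to proportionate procedural obligations.
RISK_CLASS_OBLIGATIONS = {
    "class I (low risk)": [
        "internal risk assessment kept on file",
    ],
    "class II (medium risk)": [
        "internal risk assessment kept on file",
        "registration in the two-tier database",
        "certification of internal processes by a self-regulating body",
    ],
    "class III (high risk)": [
        "internal risk assessment kept on file",
        "registration in the two-tier database",
        "risk management certification against a recognised standard",
        "prior authorisation by the responsible authority",
    ],
}


def obligations_for(risk_class: str) -> list:
    """Look up the procedural obligations attached to a given risk class."""
    return RISK_CLASS_OBLIGATIONS[risk_class]


print(obligations_for("class III (high risk)"))
```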

III. Penal and administrative sanctions

9. Whilst penal sanctions are mostly on the radar of regulators, we recommend considering an additional administrative sanction against the legal bodies (the economic operators or research institutions) which do not ensure compliance. The administrative sanction does not need to be based on the negligence of an individual – negligence of the entire organisation or bad management might suffice. Hence it is easier to prove the conditions for the sanction, which ensures a higher degree of efficiency. The German Network Enforcement Act (NetzDG) imposes regulatory fines on any person who, intentionally or negligently, fails to produce the reports outlined in the NetzDG, fails to provide a procedure for complaints, fails to monitor complaints or fails to offer training and support, amongst other infringements listed at Section 4, subsection 1 of the NetzDG. Regulatory fines may be up to five million euros (Section 4, subsection 2 NetzDG). Similarly, sanctions could be imposed on AI research institutes and companies that fail, for example, to have ethical codes, to train employees in such codes, to recruit employees with the competencies necessary to operate ethically or to run exercises to test cybersecurity protocols.

IV. Professional bodies for AI industry

10. Much like the professional bodies that regulate the conduct of medical practitioners or lawyers, there may be value in regulating for the development of such a professional body, with membership for data scientists and engineers who work in AI. Regulators could perhaps start by regulating the professional standards of data scientists and engineers working on publicly funded AI projects. Regulating the types of research and innovations developed at one end, without also enforcing standards of ethical behaviour of the employees in such projects at the other, is counter-productive.

11. Bias has been raised as an issue that produces undesirable results from AI, because of bias in the data the AI is trained with but also because of the lack of diversity of the AI creators, who are predominantly male. One must consider what kind of AI is developed when job advertisements for AI engineers state “strong men wanted” [8]. A professional AI body could be empowered to monitor this kind of recruitment practice in the field, publish material on the demographics of the profession, and introduce quotas or incentives that encourage more diversity. Diversity might pay off by reducing the likelihood of a risk-neglecting work culture developing.

V. Whistle-blowing, alert portals, AI-specific exploit bounties

12. Regulators should promote whistle-blowing mechanisms or alert portals to encourage those working in AI research or technology to report instances of unethical behaviour. Consumers could report instances of bias in algorithms to a specific alert portal. For more information about whistle-blowing schemes, see the howtoregulate article “Whistleblowers: protection, incentives and reporting channels as a safeguard to the public interest”. To complement the norm of responsible disclosure of vulnerabilities, which relies on social incentives and goodwill, some software vendors offer financial incentives (cash bounties) to anyone who detects and responsibly discloses a vulnerability in their products. However, not all AI operators offer financial incentives for vulnerability disclosures in their products, and so the regulator may deem it appropriate to set up a government financial incentive scheme.

VI. Labelling and rating

13. Regulators could consider implementing a labelling and rating scheme. A legal basis makes the labelling or rating more reliable, trustworthy and easier to execute, in particular with regard to legal concerns around liability in the event of a product failure. Labelling could inform consumers about how the algorithms work, the kinds of data the AI was trained with, the risk of the product not performing, the cybersecurity protocols of the AI, etc. (a minimal sketch of what such a label could contain follows this paragraph). Research from the Ponemon Institute shows that, in the case of autonomous vehicles, consumers rated highly manufacturer information about the security precautions taken to prevent the vehicle from being hacked and about how the vehicle owner can protect their privacy and security within the vehicle [9]. This suggests that labelling of AI products and services is likely to be very useful.
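A minimal sketch of what a machine-readable label for an AI product could contain, assuming the informational elements mentioned above; the field names and example values are invented for illustration and are not part of any existing labelling scheme.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class AIProductLabel:
    """Illustrative consumer-facing label for an AI product or service."""

    product_name: str
    intended_purpose: str
    training_data_summary: str        # kinds of data the AI was trained with
    known_limitations: str            # e.g. disclosed biases or failure modes
    estimated_failure_rate: str       # risk of the product not performing as intended
    cybersecurity_measures: str       # protocols protecting the product from hacking
    data_collected_from_user: str     # privacy-relevant information


label = AIProductLabel(
    product_name="Example robo cleaner",
    intended_purpose="autonomous floor cleaning in private homes",
    training_data_summary="simulated floor plans and anonymised sensor logs",
    known_limitations="may not detect dark objects on dark carpets",
    estimated_failure_rate="less than 1 navigation failure per 1,000 hours (manufacturer estimate)",
    cybersecurity_measures="signed firmware updates; no open network ports by default",
    data_collected_from_user="room maps stored locally, never uploaded",
)

print(json.dumps(asdict(label), indent=2))
```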

C. Concluding remarks

1. In the first part, the “Report on Artificial Intelligence: Part I – the existing regulatory landscape”, the current regulatory landscape was examined, beginning with questions of substance for regulating AI, such as a definition of AI and the decisions to be made by humans as opposed to AI. At the international and supranational level, AI is regulated through a system of norms and standards as opposed to a formal framework of regulations, much of which is still at the development phase. At the national level, Japan is an active proponent of AI governance and standards, followed by the United States. In terms of detail on specific AI issues, Germany has guidelines about the ethical decisions automated vehicles ought to be programmed to make, and Singapore is using a risk-based approach to AI in the financial industry to incentivise risk mitigation while restraining new risks. However, there are still more questions than answers in the regulatory landscape.

2. This second part, the “Report on Artificial Intelligence: Part II – outline of future regulation of AI”, outlines regulatory approaches for minimising the harms of AI, evidently without calling into question the utility of AI. Much of the focus of administrations today is on incentivising innovative applications of AI or encouraging breakthrough AI research. However, other noteworthy goals include avoiding the risk that AI research or technology leads to the eradication of humankind and reducing other major risks for humans to the extent that the expected positive effect of AI is not disproportionately hampered. In a risk-based approach to regulating AI, entities could be required to assess risks prior to undertaking an AI activity, to reduce risks or refrain from AI activities which trigger high risks, and to seek authorisation to engage in certain high-risk AI activities. Given the ethical implications of AI activities, authorities should be empowered to act, to monitor and to enforce requirements using the regulatory techniques listed in this article, such as enforcement powers, certification, administrative and penal sanctions, whistle-blowing and labelling, to name just a few.

3. AI research and the use and dissemination of AI technology are on the rise, with generally few regulatory constraints. Given the profound effect of AI technology on our lives, and particularly of algorithmic decision-making on our human rights, it is paradoxical that there is not more regulatory supervision or accountability.

D. Further links

The following sites and articles concern AI regulatory issues:

Machine Intelligence Research Institute (MIRI): the aim of MIRI is to ensure that the creation of smarter-than-human intelligence has a positive impact. Much of their research concerns ethical issues and possible technical solutions for navigating or thinking about these concerns. https://intelligence.org/research/

Simons, T., “The Big Question for the Legal Ecosystem: Can Artificial Intelligence Be Trusted”, Thomson Reuters, 3 April 2018, http://www.legalexecutiveinstitute.com/justice-ecosystem-big-question-artificial-intelligence/

Algorithm Watch: AlgorithmWatch is a non-profit research and advocacy organisation that evaluates and sheds light on algorithmic decision making processes that have a social relevance, meaning they are used either to predict or prescribe human action or to make decisions automatically. https://algorithmwatch.org/en/

This site (http://stanford.edu/~jesszhao/ServiceRobotsInJapan/) was developed by students at Stanford University and provides an interesting ethical, social and policy analysis of service robots, which are experiencing exponential growth in Japan and are constantly presented with ethical dilemmas.

AI Policy Landscape Summary found on Medium: https://medium.com/artificial-intelligence-policy-laws-and-ethics/the-ai-landscape-ea8a8b3c3d5d

This article was written by Valerie Thomas, on behalf of the Regulatory Institute, Lisbon and Brussels.

[1] UK House of Lords Select Committee on AI, “AI in the UK: ready, willing and able?”, Report of Session 2017-19, 16 April 2018, page 99, paragraph 326, https://g8fip1kplyr33r3krz5b97d1-wpengine.netdna-ssl.com/wp-content/uploads/2018/04/AI-in-the-UK-ReadyWillingAndAble-April-2018.pdf.

[2] OʼNeil, Cathy, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, 2017, Penguin Books.

[3] IEEE is the abbreviation for the Institute of Electrical and Electronics Engineers.

[4] Brundage, M., Avin, S. et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, p. 85, https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf.

[5] Ibid., pp. 7, 52-53, 80-83.

[6] Novossiolova, T. and Sture, J., “Towards the Responsible Conduct of Scientific Research: Is Ethics Education Enough?”, Medicine, Conflict and Survival 28.1 (2012): 73–84, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3378933/.

[7] The US Congress passed the “AI Jobs Act” in December 2017 to prepare a report on Artificial Intelligence and its impact on the workforce; see Part D, Paragraph 4 of the howtoregulate “Report on AI: Part I – the existing regulatory landscape”.

[8] Huang, E., “Strong men wanted: tech hiring in China is rife with blatantly sexist job ads”, Quartz, 23 April 2018, https://qz.com/1257486/tech-hiring-in-china-is-rife-with-blatantly-sexist-job-ads/.

[9] Ponemon Institute, “Will Security & Privacy Concerns Stall the Adoption of Autonomous Automobiles?”, November 2017, https://www.ponemon.org/local/upload/file/Autonomous%20Car%20Survey%20Presentation%20V2.pdf.
