This Part II of the howtoregulate Report on Artificial Intelligence presents regulatory approaches for minimising the harms of artificial intelligence (AI), without putting the utility of AI into question. What should regulation of AI look like? The answer depends on the goals of the regulator. As outlined in Part I, most states today focus on incentivising innovative applications of AI or encouraging breakthrough AI research. We could imagine, however, that the average regulator might also pursue goals such as avoiding the risk that AI research or technology leads to the eradication of humankind, and reducing other major risks for human beings to the extent that the expected positive effects of AI are not disproportionately hampered. Furthermore, regulators might feel compelled to address particular risks linked to specific technological uses. Continue reading Report on Artificial Intelligence: Part II – outline of future regulation of AI
Artificial intelligence (AI) has been placed front and centre in many countriesʼ economic strategies1, which is probably unsurprising, as AI is one of the defining technologies of the Fourth Industrial Revolution2. Nascent AI regulation around the world today is characterised by soft approaches aimed either at incentivising innovation in the manufacturing and digital sectors or at encouraging breakthrough research. The ethical implications of AI are regulated either through specific AI codes (in companies concerned with good corporate social responsibility, or in research institutes, private or public, concerned with ethical research and innovation) or not at all. These AI ethical codes are not formally scrutinised by any public administration, nor are they legislatively required, so it is difficult to assess their quality and effectiveness in minimising the negative implications of AI. The purpose of this howtoregulate report is to examine the existing AI regulatory landscape (Part I) and to present regulatory approaches for minimising the harms of AI (Part II – outline for future regulation of AI), without putting the utility of AI into question. Continue reading Report on Artificial Intelligence: Part I – the existing regulatory landscape
This article describes how research and technology risks could be classified. This risk classification is the basis for attributing appropriate and proportionate legal obligations in the prototype regulation presented in the following blogpost. Continue reading Research and Technology Risks: Part III – Risk Classification
This article presents regulatory tools which can help to contain risks linked to technologies.
This article presents regulatory tools which can help to contain risks linked to research.