The Model Law on Artificial Intelligence is a continuation of the Regulatory Institute’s popular series of model laws. The Model Law on AI applies to the development, operation and use of software that constitutes artificial intelligence, and of items that use artificial intelligence. Artificial intelligence is a relatively new topic of regulation and presents a good opportunity for lawmakers to regulate comprehensively, free from any legacy legislation. Continue reading Model Law on Artificial Intelligence
Digital government: regulating the automation of public administration
Governments are increasingly looking towards automated decision-making systems (ADS), including algorithms, to improve the delivery of public administration. This raises issues in administrative law around legality, transparency, accountability, procedural fairness and natural justice. The provision of public services and government decision-making are regulated by legislation that protects administrative (public) law principles and permits affected persons to seek judicial review of those decisions. However, in many jurisdictions, the government use and deployment of ADS has preceded any prudent analysis of how ADS fit within the broader administrative legal framework. This howtoregulate article outlines a regulatory framework for the automation of public administration. Continue reading Digital government: regulating the automation of public administration
Report on Artificial Intelligence: Part II – outline of future regulation of AI
This Part II of the howtoregulate Report on Artificial Intelligence presents regulatory approaches for minimising the harms of artificial intelligence (AI), evidently without putting into question the utility of AI. What should regulation of artificial intelligence look like? The answer to this question depends on the goals of the regulator. As outlined in Part I, most states today focus on incentivising innovative applications of AI or encouraging breakthrough AI research. We could imagine, however, that the average regulator might also pursue goals such as avoiding the risk that AI research or technology leads to the eradication of humankind, and reducing other major risks to human beings to the extent that the expected positive effects of AI are not disproportionately hampered. Furthermore, regulators might feel compelled to address particular risks linked to specific technological uses. Continue reading Report on Artificial Intelligence: Part II – outline of future regulation of AI
Report on Artificial Intelligence: Part I – the existing regulatory landscape
Artificial intelligence (AI) has been placed front and centre in many countriesʼ economic strategies1, which is unsurprising as AI is one of the defining technologies of the Fourth Industrial Revolution2. Nascent AI regulation around the world is characterised by soft approaches aimed either at incentivising innovation in the manufacturing or digital sectors or at encouraging breakthrough research. The ethical implications of AI are regulated, if at all, through specific AI codes in companies concerned with good corporate social responsibility or in research institutes (private or public) concerned with ethical research and innovation. These AI ethical codes are not formally scrutinised by any public administration, nor are they legislatively required, so it is difficult to assess their quality and effectiveness in minimising the negative implications of AI. The purpose of this howtoregulate report is to examine the existing AI regulatory landscape (Part I) and present regulatory approaches for minimising the harms of AI (Part II – outline for future regulation of AI), evidently without putting into question the utility of AI. Continue reading Report on Artificial Intelligence: Part I – the existing regulatory landscape
Research and Technology Risks: Part III – Risk Classification
This article describes how research and technology risks could be classified. This risk classification is the basis for the attribution of appropriate and proportionate legal obligations in the prototype regulation presented in the following blogpost. Continue reading Research and Technology Risks: Part III – Risk Classification
Regulating Research and Technology Risks: Part II – Technology Risks
This article presents regulatory tools that can help contain risks linked to technologies.
Continue reading Regulating Research and Technology Risks: Part II – Technology Risks
Regulating Research and Technology Risks: Part I – Research Risks
This article presents regulatory tools which can help contain risks linked to research.
Continue reading Regulating Research and Technology Risks: Part I – Research Risks