This Part II of the howtoregulate Report on Artificial Intelligence presents regulatory approaches for minimizing the harms of artificial intelligence (AI), without calling into question the utility of AI. What should regulation of AI look like? The answer depends on the goals of the regulator. As outlined in Part I, the goals of most states today focus on incentivizing innovative applications of AI or encouraging breakthrough AI research. We could imagine, however, that the average regulator might also pursue goals such as avoiding the risk that AI research or technology leads to the eradication of humankind, and reducing other major risks to human beings to the extent that the expected positive effects of AI are not disproportionately hampered. Furthermore, regulators might feel compelled to address particular risks linked to specific technological uses.
This article presents regulatory tools that can help to contain risks linked to specific technologies, as well as tools that can help to contain risks linked to research.