This Part II of the howtoregulate Report on Artificial Intelligence presents regulatory approaches for minimising the harms of artificial intelligence (AI), without calling the utility of AI into question. What should regulation of AI look like? The answer depends on the goals of the regulator. As outlined in Part I, most states today focus on incentivising innovative applications of AI or encouraging breakthrough AI research. We could imagine, however, that the average regulator might also pursue goals such as avoiding the risk that AI research or technology leads to the eradication of humankind, and reducing other major risks for human beings to the extent that the expected positive effects of AI are not disproportionately hampered. Furthermore, regulators might feel compelled to address particular risks linked to specific technological uses. Continue reading Report on Artificial Intelligence: Part II – outline of future regulation of AI
This article describes the potential utility of meta-regulatory work: the preparatory work that lays the ground for proper regulatory activity.
Continue reading Why we need meta-regulatory work
Regulators who wish to develop or amend regulation for their respective jurisdiction might prefer not to start from scratch, but to learn from other jurisdictions. We therefore present here, as models, legislation on medicines / pharmaceuticals / drugs from different jurisdictions. The models vary in complexity: we start with rather simple models and progress to quite complex ones. Continue reading Reference legislation on medicines