Tag Archives: AI

Regulating online safety and tackling online harms

When Tim Berners-Lee invented the World Wide Web in 1990, he envisaged a decentralised environment for the free exchange of ideas and information. Fast forward to 2019, almost 30 years later, and that online environment has been polluted by disinformation, manipulation, harassment and privacy breaches. The growth of online pollution has prompted various regulatory responses, such as the European Union’s General Data Protection Regulation[1], Germany’s Network Enforcement Act[2], Australia’s Abhorrent Violent Material Bill[3] and California’s Consumer Privacy Act[4], each responding to an online safety problem. In a world first, however, the UK has signalled that it will regulate online safety in a single, coherent way, including by creating a statutory duty of care for online safety. This howtoregulate article analyses the UK’s regulatory approach as outlined in its April 2019 Online Harms White Paper, which is open for public consultation until July 2019, and proposes ways to improve the regulatory enforcement of online safety. Continue reading Regulating online safety and tackling online harms

Report on Artificial Intelligence: Part II – outline of future regulation of AI

This Part II of the howtoregulate Report on Artificial Intelligence presents regulatory approaches for minimising the harms of artificial intelligence (AI), without, of course, calling into question the utility of AI. What should regulation of AI look like? The answer depends on the goals of the regulator. As outlined in Part I, most states today focus on incentivising innovative applications of AI or encouraging breakthrough AI research. We could imagine, however, that the average regulator might also pursue goals such as avoiding the risk that AI research or technology leads to the eradication of humankind, and reducing other major risks to human beings to the extent that the expected positive effects of AI are not disproportionately hampered. Furthermore, regulators might feel compelled to address particular risks linked to specific technological uses. Continue reading Report on Artificial Intelligence: Part II – outline of future regulation of AI

Report on Artificial Intelligence: Part I – the existing regulatory landscape

Artificial intelligence (AI) has been placed front and centre in many countriesʼ economic strategies[1], which is unsurprising given that AI is one of the defining technologies of the Fourth Industrial Revolution[2]. Nascent AI regulation around the world today is characterised by soft approaches, aimed either at incentivising innovation in the manufacturing or digital sectors or at encouraging breakthrough research. The ethical implications of AI are either addressed through specific AI codes, adopted by companies concerned with good corporate social responsibility or by research institutes (private or public) committed to ethical research and innovation, or not regulated at all. These AI ethical codes are not formally scrutinised by any public administration, nor are they legislatively required, so it is difficult to assess their quality and effectiveness in minimising the negative implications of AI. The purpose of this howtoregulate report is to examine the existing AI regulatory landscape (Part I) and to present regulatory approaches for minimising the harms of AI (Part II – outline of future regulation of AI), without, of course, calling into question the utility of AI. Continue reading Report on Artificial Intelligence: Part I – the existing regulatory landscape