Regulating Research and Technology Risks: Part II – Technology Risks

This article presents regulatory tools that can help to contain the risks linked to technologies.

1. In our previous blogpost, we outlined how regulation could contain risks linked to research projects. We will do the same here for the risks, in particular the existential risks, linked to the use of technologies.

2. Technologies merit regulation under more aspects than just the risks. E.g., many technologies consume natural resources without any price being paid for them. As a result, natural resources dwindle or are damaged, both of which can run against the public interest. Hence it might be worthwhile to consider making the use of natural resources subject to conditions, or to establish a financial compensation mechanism or, even better, a nature compensation mechanism.

Furthermore, technologies can have negative side-effects below the level of (existential) risk. They can have impacts, positive and negative alike, on the income of different persons. They can have a disruptive effect on social structures.

Whilst these are interesting regulatory aspects of technologies, we focus here on the angle of risks for human beings. We take our previous analysis of research risk as a starting point.

3. We have seen in the previous blogpost that the risk or even destructive potential of certain forms of research goes far beyond classic nuclear risks such as the melt-down of a nuclear plant. We mentioned the damage that geo-engineering, synthetic biology and self-replicating artificial intelligence could cause. Evidently, the risks are not necessarily reduced, but sometimes even increased, when research findings are shifted into “technologies (ready to be) applied”. Whilst the standardised procedures of technologies might reduce some risks, the special research environment might also sometimes provide safeguards which cannot be maintained in a large-scale roll-out of technologies. E.g., it is easier to control one site containing potentially dangerous bacteria or viruses than 1,000 sites. For this reason alone, one cannot simply transfer the risk assessment made for research to the sphere of applied technologies.

4. The second reason why technologies need to be assessed separately is that some negative effects only materialise at a larger scale. Take the example of toxins, certain forms of pollution or radiation, which become dangerous, if only gradually, beyond a certain threshold. Or take the reduction in the numbers of a certain species caused by technologies: in some cases this has no negative effects on other species, but beyond a certain threshold it can endanger other species which might be crucial for the survival of mankind, see e.g. the bees. History shows that entire civilisations have disappeared because they underestimated the negative effects arising from the large-scale use of a technology, e.g. the use of wood.

5. The third reason why technologies need to be assessed separately is the fact that technologies, once they leave the research environment, are used in a large variety of external conditions and by a large variety of different persons. Risks of technologies depend on the situations in which they are used. They also depend on the culture, the habits, the knowledge, and the training of the technology users and the interaction of these factors with the situations and the technologies themselves.

6. Whilst we have so far stressed the differences between the assessment of research risks and the assessment of technology risks, we have to admit that there are also plenty of similarities. Accordingly, the following text is to a very large extent identical to sections 6 onwards of the previous blogpost on research risks. Again, we could imagine that the average regulator might have at least two goals:

A. Avoid the risk that technologies lead to the eradication of mankind;

B. Reduce other major risks for human beings to the extent that the expected positive effect of technologies is not disproportionately hampered.

Subject to the values of the regulator in question, further goals might be pursued, such as:

C. Protection of animals;

D. Protection of nature.

7. Which obligations could be imposed on companies using risky technologies? These companies and their staff could, for instance, be obliged:

(a) to assess risks prior to using and disseminating technologies, with or without application of a relevant risk management standard;

(b) to reduce risks linked to the technologies to the extent possible, provided the risk reduction does not endanger the prevailing utility of the technologies, if any; this can be done with or without application of a relevant risk management standard;

(c) to refrain from using or disseminating technologies which trigger disproportionate / high risks;

(d) to inform the responsible authority of risks for health and life linked to the use of these technologies;

(e) to request an authorisation from the responsible authority if the technologies trigger risks for the health and life of a larger number of individuals;

(f) to request an authorisation from the responsible authority if the technologies trigger risks for the economic or ecological survival of the society in the jurisdiction in question or of societies in other jurisdictions;

(g) to register the use and the dissemination of risky technologies in a database with two levels: public information and information accessible only to the authority in charge (see the sketch below).
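To illustrate obligation (g), here is a minimal sketch of how a record in such a two-level register might be modelled. The class and all field names (TechnologyRecord, public_view, etc.) are our own assumptions for illustration, not part of any existing register.

```python
from dataclasses import dataclass, asdict

@dataclass
class TechnologyRecord:
    # Public level: visible to everybody.
    technology_name: str
    company: str
    risk_class: str            # hypothetical classification, e.g. "high"
    jurisdictions: list[str]
    # Restricted level: accessible only to the authority in charge.
    site_locations: list[str]
    risk_assessment: str

    # Fields released at the public level (not a dataclass field).
    PUBLIC_FIELDS = ("technology_name", "company", "risk_class", "jurisdictions")

    def public_view(self) -> dict:
        """Return only the publicly accessible part of the record."""
        full = asdict(self)
        return {k: full[k] for k in self.PUBLIC_FIELDS}

    def authority_view(self) -> dict:
        """Return the complete record, for the authority in charge."""
        return asdict(self)

record = TechnologyRecord(
    technology_name="Synthetic biology platform",
    company="Example Biotech Ltd",
    risk_class="high",
    jurisdictions=["EU"],
    site_locations=["Laboratory A, City B"],
    risk_assessment="See confidential annex",
)
print(record.public_view())  # safe to publish; restricted fields are omitted
```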

8. To ensure conformity with the major legal obligations, regulators might consider establishing procedural obligations for companies using or disseminating technologies, such as undergoing certification to prove that the companies’ internal processes ensure the fulfilment of these obligations. Certification could be regarded as helpful in particular for the aspect of risk management.

9. To enable authorities to act, they need to be informed, in advance or as soon as possible, about potentially risky use or dissemination of technologies. The information obligations contained in the previous paragraph might not be sufficient to ensure this result. The authorities might need access to business reports and plans on request; they should therefore have comprehensive investigation powers. In addition, they should have the means, and in particular the scientific competence, to assess the risks linked to technologies. To increase their effectiveness, authorities should be empowered to cooperate with their peers in the same and in other jurisdictions, and this empowerment should include the right to transmit information on persons and confidential information relating to the companies using or disseminating technologies.

10. Once authorities have identified noteworthy risks, they need legal empowerments and working capacity to react to the use or dissemination of technologies which pose a particular risk. These measures might have a temporary or a definitive character, so two types of empowerment might be necessary. The range of measures available to the authority should be as generic as legally possible in the respective jurisdiction, because many different types of measures might be needed in a given case. However, the respective empowerments contained in the regulation should explicitly mention the most far-reaching measures, such as the confiscation of objects, including computers and documents, the sealing of facilities and the destruction of harmful objects. In jurisdictions which require extremely precise and delimited empowerments, regulators might appreciate studying, as reference or inspiration, the Singapore Air Navigation (Amendment) Act 2014, which contains a comprehensive empowerment in its Section 4.

In cases of extremely high risks, empowerments to supervise electronic communication and telecommunication might be deemed appropriate, whilst a limitation of individuals’ right to keep communication confidential might not be justified in cases of minor risks (principle of proportionality, applied at constitutional level in quite a few jurisdictions).

11. When measures have been taken, the authority should have the legal power to communicate its decision to peers in the same and in other jurisdictions. This is necessary because nothing is won if risky technologies are merely relocated from one jurisdiction to the next. Communicating measures to peers might also stop a downward competitive spiral in terms of control intensity. Evidently, administrations are expected to be technology-friendly. Accordingly, there might be a political wish to maintain a technology-friendly environment and consequently to disregard risks. Individual agents of administrations who wish to counter this pressure are in a better position if they can refer to measures taken in other jurisdictions. The exchange of information on measures considered or taken is thus very important for maintaining a climate which is open to justified risk limitation.

12. In the case of technologies, a particularly deterrent empowerment could be considered: the confiscation / transfer of patents and other intellectual property rights. The confiscation should also take effect at the level of private law, so that the usual means for pursuing private rights in other jurisdictions can be used.

13. Evidently, a range of accompanying legal measures can be conceived. Regulators frequently consider penal sanctions against individuals. They should also consider administrative sanctions against companies infringing legal obligations.

14. Legal enforcement and sanctions should be preceded by information campaigns, both in terms of fairness and because information alone might already change the behaviour of well-intentioned operators using technologies.

15. An often underestimated regulatory tool is the establishment of liability provisions. Once liability has been established, the institutions concerned often start a kind of self-control process or look for insurance. Insurers will estimate the risks and may limit their cover to cases where risk management is applied or other conditions are fulfilled. Evidently, this positive effect of the involvement of insurers can be obtained more systematically by making liability insurance mandatory.

16. For high-risk technologies, it is advisable that even the start of the use or dissemination be subject to an authorisation. The authorisation mechanism ensures that projects cannot start before administrations have screened them. To avoid administrations taking too long, it is possible to stipulate that the authorisation is deemed granted if the administration has not reacted within a certain time, e.g. two or three months (see the sketch below).
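A minimal sketch of such a tacit-approval rule; the 90-day deadline and the function name are assumptions chosen for illustration:

```python
from datetime import date, timedelta

# Assumed statutory period after which authorisation is deemed granted.
DEADLINE = timedelta(days=90)  # roughly three months; a policy choice

def authorisation_status(filed_on: date, decided: bool, today: date) -> str:
    """Classify an application under a hypothetical tacit-approval rule."""
    if decided:
        return "decided by the authority"
    if today - filed_on > DEADLINE:
        return "deemed granted (deadline expired without reaction)"
    return "pending (authority may still react)"

print(authorisation_status(date(2024, 1, 10), decided=False, today=date(2024, 5, 1)))
# -> deemed granted (deadline expired without reaction)
```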

17. The authorisation mechanism should provide some flexibility. E.g., regulators should ascertain that the authority is empowered to attach conditions when authorising the use or dissemination of technologies, as a simple yes-or-no decision might not be appropriate in all situations. These conditions might need to cover the way the technology use or dissemination is executed. The conditions might also be of a purely procedural nature, e.g. obliging the companies to report periodically or to undergo a certain risk management certification based on the application of a generally recognised risk management standard. Authorities should also have the power to limit their authorisation in time, so that a new application is needed after a certain period. Finally, authorities should be empowered to withdraw authorisations with retroactive effect in case of fraudulent or erroneous applications, and with effect for the future in case new information or newly discovered information leads to a negative assessment of the technology use or dissemination. A sketch of this lifecycle follows.
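The authorisation lifecycle just described could be modelled roughly as follows; the class, its fields and the example conditions are illustrative assumptions, not a reference implementation:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Authorisation:
    holder: str
    conditions: list[str] = field(default_factory=list)  # e.g. periodic reporting
    valid_until: date | None = None   # None means no time limit
    withdrawn: bool = False
    withdrawal_retroactive: bool = False

    def withdraw(self, fraudulent_or_erroneous: bool) -> None:
        """Withdraw retroactively for fraudulent/erroneous applications,
        otherwise with effect for the future only."""
        self.withdrawn = True
        self.withdrawal_retroactive = fraudulent_or_erroneous

    def is_valid(self, on: date) -> bool:
        """An authorisation is valid if not withdrawn and not expired."""
        if self.withdrawn:
            return False
        return self.valid_until is None or on <= self.valid_until

auth = Authorisation(
    holder="Example Biotech Ltd",
    conditions=["periodic risk report", "risk management certification"],
    valid_until=date(2027, 12, 31),
)
print(auth.is_valid(date(2025, 6, 1)))   # True
auth.withdraw(fraudulent_or_erroneous=True)
print(auth.is_valid(date(2025, 6, 1)))   # False: withdrawn retroactively
```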

18. Regulators sometimes think that the authorisation mechanism makes the classic empowerments described under paragraphs 9 to 15 superfluous. Experience shows that this is not the case. It should be checked which of the empowerments listed in paragraphs 9 to 15 are still needed.

19. Whether or not authorisation requirements have been established, regulators should take special care not to impose an unrealistic burden of proof on either of the two sides. Companies will not be able to provide full proof of safety, nor can authorities be expected to provide such full proof of risk as justification for their measures. Hence it will be necessary to use unpleasantly vague formulas such as the following:

– “Technology use or dissemination shall be authorised if it is very unlikely to trigger a major risk in the meaning of Article … . To demonstrate that their technology use or dissemination project is very unlikely to trigger a major risk, companies shall submit scientific literature dealing directly or indirectly with the risks of similar research or technology projects.”

– “Authorities may take measures against planned or ongoing technology use or dissemination projects when there is either complete uncertainty regarding the risks triggered by the project or when, based on first evidence or on findings regarding similar projects, it is not completely unlikely that the technology project will trigger (major) risks in the meaning of Article … .”

20. Further side measures that could indirectly help the authorities include the following:

– Obligation of the authority to investigate risky technology use or dissemination projects: such a legal obligation might help the authority to defend its interests when it comes to the annual budgeting exercise. Mandatory tasks can be defended more easily against budget cuts.

– Requirements on independence and control of independence of the authority.

– Requirements on minimum resources of the authority (ideally with a clear indication of minimum full-time equivalents, not just vague clauses like “appropriate number of staff”, as the latter is difficult to enforce);

– Establishment of a technology register (with partly public and partly confidential information);

– Establishment of a research and technology risks observatory, ideally working for groups of jurisdictions (e.g. for several countries on one continent or for several states within the same federal state; by “federal state” we mean states like the U.S., India, Nigeria, Brazil or Germany, where a good part of the state’s power is decentralised to sub-divisions).

– Establishment of a scientific advisory board.

– Obligation for research institutions to make available their expertise to assess the risks of technologies.

– Creation of a central alert portal on which everybody may inform authorities about potentially problematic technology use or dissemination projects.

– Whistle-blowing protection mechanisms protecting those who report in good faith to the authorities.

– Confidentiality provisions (assured confidentiality of information will increase readiness to cooperate with the authority).

– Cooperation agreements with other jurisdictions on information exchange, mutual advice, cooperation on enforcement etc.

21. Tools of self-regulation might be helpful as well. We could imagine the following:

– Voluntary mutual control through the analysis of technology projects by an expert panel set up by a specialised umbrella organisation;

– Development of guidance and minimum standards by specialised umbrella organisations or expert panels set up by them;

– Control of the fulfilment of the minimum standards by the specialised umbrella organisations or by third bodies acting on their behalf.

22. Evidently, there are specific technologies for which certain jurisdictions have adopted regulation. To provide a few examples, we refer to Part 5 and, to a lesser extent, Part 2 of our directory of reference regulation. For jurisdictions which wish to start regulating technologies, we nonetheless recommend an approach which covers technologies in general terms, or at least a range of technologies in one go, as it is extremely cumbersome to regulate individual technologies one by one. Furthermore, ever new technologies emerge, so a generic approach avoids the need for ever new regulation.

23. However, it has to be admitted that the alternative approach, regulating each technology separately, permits much more fine-tuning. To strike a good compromise between the need for fine-tuning and the generic approach, we recommend establishing technology risk classes and standardising obligations and procedures for each risk class (see the sketch below). We will probably see in two future blogposts what risk classes and prototype regulation based on risk classes could look like.
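As a rough illustration of the risk-class approach, here is a minimal sketch; the class labels and the obligations attached to them are invented for illustration and anticipate nothing about the future blogposts:

```python
# Hypothetical mapping from technology risk classes to standardised
# obligations and procedures; labels and content are illustrative only.
RISK_CLASS_OBLIGATIONS = {
    "class 1 (minor risk)": ["risk self-assessment"],
    "class 2 (substantial risk)": ["risk assessment", "registration",
                                   "notification of risks to the authority"],
    "class 3 (major risk)": ["risk assessment", "registration",
                             "prior authorisation", "mandatory liability insurance"],
    "class 4 (potentially existential risk)": ["prohibition or case-by-case "
                                               "authorisation at the highest level"],
}

def obligations_for(risk_class: str) -> list[str]:
    """Look up the standardised obligations attached to a risk class."""
    return RISK_CLASS_OBLIGATIONS[risk_class]

for cls, obligations in RISK_CLASS_OBLIGATIONS.items():
    print(f"{cls}: {', '.join(obligations)}")
```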
