GESDA. Hot off the press. “We explore the feasibility of a science self-regulation mechanism”
ARTIFICIAL REGULATION
Renowned scientists have recently expressed their fears about the development of artificial intelligence. They published a letter in which they write: “Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening.”
What is GESDA doing in this field, given that the Foundation has set itself the objective of anticipating major technological breakthroughs?
With our Breakthrough Radar, we anticipate technological developments and, acting as an honest broker, submit them for discussion to a broader public of scientists, diplomats, businesspeople, and academics. This allows everyone to collectively anticipate the important questions about security, safety, and even identity that must be asked before problems arise. That is a first step, but we go one step further.
What do you propose?
GESDA is currently exploring whether a science self-regulation mechanism on critical matters is feasible. The idea is to focus on researchers, since it is the responsibility of scientists themselves to ensure that they work for the benefit of humanity, and therefore to self-regulate their research.
What concrete solutions are you considering?
There is currently no structure through which the scientific community can organize such discussions. The result is case-by-case reactions once policy attention becomes critical. Putting a mechanism in place so that these discussions can happen before concerns become alarming would make it possible to address them early, and to accelerate the adoption of scientific developments instead of blocking them.
This would not be about telling scientists what they are not allowed to do, but about providing a setting in which the scientific community can take responsibility for guiding the development of emerging technologies and of science at the boundaries of human knowledge.
Are scientists ready to take on these responsibilities?
Not only are they ready to do so, but they are in the best position to understand the safety issues raised by certain technological developments. At the Asilomar Conference in 1975, it was the scientists themselves who raised the safety issues of recombinant DNA research. They are the first to see the red flags and are thus in the best position to be at the origin of the solution.
When can we expect concrete proposals from GESDA?
We are organizing a series of small workshops with experts in 2023 to explore the “technical” feasibility of self-regulation by science, looking at mechanisms that are already in place and others that would need to be created. We will start with two approaches: arbitration and incentives. Discussions with specialists from the world of sports arbitration will help us understand whether a model such as the Court of Arbitration for Sport (CAS) could be applied to science, resolving disputes through a “private” procedure within the scientific community instead of bringing a topic to court. Since the situations are very different, we need to study which lessons from sport might be transposable to science.
Looking at it from a very different angle, we will discuss with other experts whether the incentives that drive science, such as patents, publications, funding, and access to data, could technically become a driver for behavioral change by restricting access to them in identified no-go zones. The outcome of the workshops will be presented and discussed at the GESDA 2023 Summit, October 11 to 13 in Geneva, with a view to launching a possible pilot in 2024.