I would rather say that we should have been asking ourselves these questions a long time ago. The digital sector has been structured by private, non-European players, the Gafam companies (Google, Apple, Facebook, Amazon and Microsoft), which have developed at full speed, on an untrodden path. Facebook, Google and Waze users have been providing an incredible amount of information for a long time, without asking any questions. Who bothers to uninstall an app that comes preinstalled on a newly bought smartphone and counts the number of steps you take during the day? No one.
What strikes me as utterly irrational is that public opinion today refuses to do the same when the government proposes using an app to track Covid-19 cases. Due to a lack of knowledge and education, the general public is completely oblivious to what is done with the data so willingly given up to the private sector. At the same time, there is great distrust of public authorities, which is extremely worrying. A lot of fantastical ideas have developed around these issues.
What is needed is for every citizen, starting with the youngest, to understand what digital technology and artificial intelligence (AI) are really about. For the Institut Montaigne, I contributed to the development of an online training course, free and open to all, called “Objectif IA”.
The aim is to train at least 1% of the French population, i.e. 670,000 people, in the fundamentals of artificial intelligence. Let’s stop talking nonsense, learn the basics, and then talk again about these ethical issues!
Maybe, but the Chinese context is very specific: China is by no means a democratic regime and the “social credit” system, under which the government collects absolutely all its citizens’ data from the Chinese equivalents of Google, Amazon or Facebook, has become widely accepted there. In addition, Chinese people have a relationship with innovation, an appetite for novelty, which meant that they accepted these tracking applications in a completely natural way.
Regarding these deep issues, which touch on the foundations of our social organization, Europeans do not have the same attitude at all. The European Union defends the principle of AI for good and for all. The ethical stance is clear: digital technology must contribute to social cohesion, not to the control of individuals.
Even if the Gafam companies are, naturally, not in favor of regulation, I am convinced that we can act at the European level. The High-Level Expert Group on Artificial Intelligence, of which I am a member, submitted its “Ethics Guidelines for Trustworthy AI” to the European Commission in April 2019. We specifically chose to view ethics as a tool for building trust. In this document, we start from the idea that citizens and consumers need reassurance about how their data is used. They need a clear contract: I will give or sell you my data, but only if I am told why, how and for how long.
We believe that the idea of “trustworthy” AI can even be a competitive advantage. We want to offer companies that develop products incorporating AI a list of questions that they can use to assess themselves. In the long term, it may even be that these assessments can be left to companies’ discretion for most products or services, but become mandatory for “risky” applications, such as those linked to driverless vehicles for example, or the release of welfare payments. For these applications, robustness and ethics must indeed be irreproachable.
It is true that they are powerful and well established. But the Wild West and the Gold Rush won’t last forever. At some point regulation takes hold. Neither the United States nor China is investing in the field of ethics, so it is an opportunity for Europe, which is at the right scale to address these subjects.
We have clearly shown with the General Data Protection Regulation (GDPR) that we can set an example and position ourselves as leaders in ethical and cutting-edge technology. The introduction of the GDPR sparked an outcry at the time: it was said that the regulation would stifle competitiveness. Yet we can now see that the United States is following suit, and California is moving towards even more restrictive regulations. So, it is possible!
Data collection and tracing are at the heart of dealing with the Covid-19 health crisis, with more or less coercive approaches depending on the country concerned.
South Korea and China have made tracing apps mandatory, transforming smartphones into real “snitches”.
With HaMagen (“the shield”), Israel has opted for voluntary tracking, via an app sponsored by the Ministry of Health and downloaded by those who want it. In the United States, researchers at Carnegie Mellon University have drawn on the power of social media to build a dynamic map of the spread of the virus in the country. They offered a questionnaire for Facebook users to enter their symptoms and cross-referenced this feedback with analysis of Google searches. In France, the TousAntiCovid app (formerly StopCovid), which sparked a lively debate when launched on 2 June, was updated in October and is now enriched with access to factual and health information on the epidemic.
By the end of October, it had been downloaded more than 7 million times since its release.