Do you trust artificial intelligence?

Be it car navigation systems, chatbots or image recognition, artificial intelligence (AI) has the potential to change society profoundly. Yet trust in AI and its regulation still lack a solid foundation. The AI with Trust conference, co-organised by the FDFA and held in Geneva, will explore these issues, serving as a platform for exchanges on the most pressing AI challenges. Switzerland can add value to the debate thanks to its credibility, neutrality and status as a powerhouse of innovation.

16.05.2022

Artificial intelligence harbours great potential. Switzerland can make valuable contributions to the debate on trust in AI and on its regulation. © Markus Winkler/Unsplash

Invisible to us, artificial intelligence (AI) is already embedded in many domains of our everyday lives. First thing in the morning, when we're still tired and dishevelled and look at our smartphone, AI unlocks it using image recognition software. Then, when we check our email, AI intercepts unwanted advertising and phishing emails and sends them straight to the spam folder. Afterwards, when we drive to work, the navigation system's AI warns us of traffic jams and finds us the fastest alternative route. In the evening, Netflix's AI tells us which films we might like.

A dangerous black box?

Computer chips' processing capabilities have grown enormously over recent decades, huge quantities of data have become available, and machine learning has advanced rapidly. All this has moulded AI into its present form. Thanks to AI, computer-powered machines can now be assigned tasks that previously only humans could perform, and that has the potential to change economies and societies profoundly. Machine translation, self-driving cars, chatbots and image recognition are just a few examples. But for many, AI remains a potentially dangerous black box. AI's lack of decision-making transparency, as well as its use for mass surveillance, election manipulation and autonomous weapons systems, influences the extent to which people trust it. According to a survey conducted for the World Economic Forum, 60% of adults worldwide expect AI-powered products and services to make their lives easier. Yet at the same time, only 50% say they trust companies that use AI as much as they trust companies that do not.

What does it take to build up trust in AI systems?

AI systems, developed worldwide, facilitate human endeavours in many fields. Yet they also raise fundamental ethical, legal and standardisation issues that we must grapple with, because technology is not value-neutral: the developers of AI are part of a culture and a social environment whose values find their way into how AI is applied. Internationally, these questions are being debated in many forums, but the discussions are dispersed, sector-specific and poorly coordinated. Switzerland's foreign policy aims to promote exchanges between the major players in AI. Based on its Foreign Policy Strategy 2020–23, Switzerland is taking an active role in shaping an international regulatory framework for AI that will lay the foundation for trust in AI systems. This framework must set out how the various actors are to cooperate, define clearly who is supposed to do what, and lay down clear rules on the application of AI. These rules must also leave the private sector enough leeway for innovation.

The international debate: Switzerland's added value

Compared to other countries, Switzerland is ahead of the field in AI research, development and innovation. Numerous globally active companies in the medical technology, pharmaceutical and machinery industries offshore their production but keep their product and service development in innovation-friendly Switzerland. As a host state, Switzerland brings together in International Geneva numerous actors and organisations considered centres of normative power, i.e. states, multilateral organisations and private companies that play a key role in shaping globally applicable norms and standards for AI systems.

Another actor in Geneva is GESDA (Geneva Science and Diplomacy Anticipator), a foundation that aims to anticipate technological and scientific revolutions – including AI – and analyse their impact on humanity. International discussions of AI are also highly geopolitical. Thanks to its neutrality and political stability, Switzerland can add value in this area, facilitating compromises as a credible mediator.

Designing an international regulatory framework

The two-day AI with Trust conference in Geneva will bring together experts and key players in AI, providing a platform for exchanging ideas on the most pressing AI challenges. In global debates so far, an international regulatory framework has been emerging across five levels, which should be better interlinked:

International law

International law already includes a large number of legal norms that are key for AI. These include fundamental human rights standards such as non-discrimination, the prohibition of arbitrary procedures, and freedom of expression. The same applies to international humanitarian law in armed conflicts, for example in contexts involving autonomous weapons systems. At this level, dialogue between key actors in AI should help to adapt and, where necessary, further develop existing norms of international law to address issues arising from the application of AI.

Soft law

Various non-legally binding soft law instruments containing AI rules are already in place. In 2019, for example, the G7 and G20 agreed on principles for dealing with AI. These include using AI to achieve inclusive and sustainable growth and promoting a human rights-centred approach to AI. The principles are closely aligned with the OECD AI Principles.

National law with international significance

Through their national legislation, countries such as the US and China, which are driving forces in AI development, indirectly influence how AI is developed and dealt with in other countries. In such cases, national legal norms acquire international significance.

The industry's technical standards and self-binding rules

Standards organisations such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU) set technical standards. In AI, this means, for example, standardising how AI-controlled robotic nutritional advisers and smart home devices are supposed to work. For industry, standardisation processes play a key role and are, in some cases, declared nationally or internationally binding. Numerous technology companies also publish self-binding statements on ethical issues in dealing with AI, often referring to international law in doing so.

Technological advances create facts

By designing their products, tech giants create facts that call for legal and technical standardisation. One example was Apple's planned roll-out of an AI-enabled content filter to detect child sexual abuse material (CSAM). The filter, however, raised privacy and data protection questions that were ahead of the existing rules, and its roll-out was postponed. Even so, the CSAM detection system could be seen as setting a precedent that could influence future AI rules and standards.

AI with Trust – a conference to harmonise international AI efforts

Building trust in AI has become a key concern for the industry, the scientific community, lawmakers, standards developers and compliance assessors. In light of this, the Federal Department of Foreign Affairs (FDFA), together with the International Electrotechnical Commission (IEC), is organising a conference for experts – AI with Trust – in Geneva on 16 and 17 May 2022. The aim of the conference is to analyse the interplay between legislation, standardisation and conformity assessment, and to define steps towards harmonising international efforts for AI. 
