A.I. - Connecting the Dots
Our posters on A.I. basics are online: take a look! For the newest updates on our A.I. project, please visit the project page. We are currently working on a policy paper. If you would like to get involved, feel free to contact Jannes Jegminat.
The Series: A.I. – Connecting the Dots
From self-driving cars to cancer detection, the recent success of A.I. is ubiquitously present in the media. Its impact is starting to become noticeable in our everyday lives through targeted advertising, conversational chatbots, and optimized search engines. But what's behind the buzzword «A.I.» and how does it really work?
At this two-day exhibition in the ETH Main Hall, you will have the chance to learn about the key components of modern A.I. technology directly from the people who build and use it. Starting with the basics and working up to exciting real-world applications by companies and startups, we will show you how to connect the dots – how it all fits together to enable the technology we see around us. Check out the posters here.
Stephen Hawking, Bill Gates, and Elon Musk share the concern that a future artificial Superintelligence might be «the last invention we will ever make». Much of the current research on the existential risks of A.I. is focused on the idea of Superintelligence (SI). Brought to popular attention by Professor Nick Bostrom's 2014 book of the same name, the SI thesis suggests that the greatest threat will come not from robots but simply from A.I. systems with a substantially higher level of intellect than humans will ever achieve.
At this panel discussion in the ETH Main Building (HG E7), you will have the chance to experience a philosophical debate on both the risks and opportunities of SI. The discussion centers on the potential impact of SI on human society and on how current research could be positively influenced.
For more information, please visit the event page.
Language and creativity are considered fundamental features of human intelligence, and they pose a significant challenge to artificial systems. In recent years, however, the success of deep and recurrent neural networks has revolutionized artificial intelligence, as rule-based systems were replaced by networks that learn through experience.
A.I. competes with human intelligence more than ever, yet creativity and language are still dominated by humans. Why – and for how long?
The third evening of the series comprises three short talks and a panel discussion. For more information, visit the event page.
Artificial intelligence has reached our legal system: algorithms determine the duration of prison sentences, calculate the likelihood of crimes in a certain city district and draft legal documents for lawyers. How will technology shape the legal professions of the future? Will our legal system change from the rule of law to the rule of algorithms? Are algorithms the automated judges of tomorrow? And can algorithmic justice ever be just?
Robots are entering ever more areas of our everyday life – as coaches, care workers, or waiters. They exist to help, to please, and to serve us, and they will become increasingly human-like to facilitate our interaction with them. As we grow used to having these servants around us, how will our attitudes towards real humans change? In an afternoon series of short talks, we set out to find possible answers.
A.I. puts democracy under pressure, but conversely it also opens up new possibilities. Deep fakes, fake news, troll bots, and the (justified) fear of a surveillance state provide ample material for editorials, yet every technology also creates opportunities. We want to explore what these are in collaboration with the 100 Ways of Thinking Festival at the Kunsthalle Zurich.
From personal assistants on our smartphones to self-driving cars, artificial intelligence (A.I.) is poised to play a major role in our lives in the coming decades. However, because these technologies are developing so rapidly, our understanding of the future impact of A.I. is limited – and amid the hype over recent developments, actual knowledge of what these technologies can do is in danger of becoming distorted.
This poor understanding prevents citizens and politicians from reaching acceptable trade-offs between the benefits that A.I. promises and the risks it entails. As a result, we face ill-founded fears of beneficial technologies such as autonomous vehicles, risks taken out of ignorance in the context of automation, and inflated expectations fuelled by science fiction that will eventually be disappointed.