NoBIAS Summer School and Datathon

September 5 – 7, 2022


The second NoBIAS Summer School will be held in September 2022, hosted at the University of Southampton, UK. A datathon will also be held as part of the Summer School on day 3.

With the aim of making participation as broad as possible and facilitating sustainable academic engagement, the Summer School will take place in a hybrid format, that is, both in person – at the University of Southampton – and online.

We invite everyone to attend the sessions and to listen to and engage with our inspiring speakers from across a range of disciplines.

Registration

The Summer School will take place on 5 and 6 September; you can register for one or both days, either in person or online.

NoBIAS Datathon (7 September 2022)

The Datathon will take place on 7 September and is open only to members of the NoBIAS network and the speakers contributing to the Summer School.

The detailed schedule, with information on the speakers, their short biographies, and outlines of their talks, can be found below.

Schedule


*All times are UK local time (BST)

Monday, September 5, 2022 (09.00-16.30)

09.00 – 09.15   Opening and Welcome (In-person)

  • Dr. Stephanie Law, Associate Professor, School of Law, University of Southampton

09.15 – 10.45   Addressing Algorithmic Bias in Healthcare: The Regulatory Challenge and Some Possible Responses (In-person talk)

  • Lecture by Dr. Michael Da Silva, Lecturer, School of Law, University of Southampton

10.45 – 11.00   Coffee break

  • Room: 85/2207

11.00 – 12.30   Bias and University Recruitment (Online talk)

  • Keynote talk by Dr. Ben Wagner, Assistant Professor, TPM, TU Delft

12.30 – 13.30   Lunch

13.30 – 16.30   Gender and Intersectional Inequalities (In-person workshop)

  • Workshop by Dr. Lena Holzer, Lecturer, School of Law, Goldsmiths

Tuesday, September 6, 2022 (09.00-16.30)

09.00 – 09.15   Welcome to Day 2 of the Summer School (In-person)

  • Dr. Stephanie Law, Associate Professor, School of Law, University of Southampton

09.15 – 10.45   Assessing Biases, Relaxing Moralism: On Ground-truthing Practices in Machine Learning Design and Application (Online talk)

  • Lecture by Dr. Florian Jaton, Postdoctoral Researcher and Lecturer, STS Lab – Institute of Social Sciences, University of Lausanne

10.45 – 11.00   Coffee break

  • Room: 85/2207

11.00 – 12.30   Human-machine inter-agencies (In-person talk)

  • Keynote talk by Dr. Dave Murray-Rust, Associate Professor of Human-Algorithm Interaction Design and Director of the AI Futures Lab, Industrial Design Engineering, TU Delft

12.30 – 13.30   Lunch

13.30 – 16.30   Much Ado about Socio-technical Systems – The Hybrid Structure of Content Moderation (In-person workshop)

  • Workshop by Marie-Therese Sekwenz, Researcher, Sustainable Computing Lab and WU Vienna

Speakers


Dr. Michael Da Silva

Date and Time:

  • Monday, September 5, 2022, 09.15 – 10.45 (In-person talk)

Title:

Addressing Algorithmic Bias in Healthcare: The Regulatory Challenge and Some Possible Responses

Abstract:

Artificial Intelligence (AI) offers the promise of addressing identified wrongful biases in existing healthcare systems. Yet familiar concerns about algorithmic bias also apply in the healthcare sector: many AI tools are not designed for, and do not perform equally well across, all populations, creating risks of unsafe use on populations for whom the AI was not designed, distributional injustices in the benefits of health-related AI, and other problematic outcomes. Further challenges arise from some populations’ understandable reluctance to share data with AI developers (making it hard to create ‘representative’ datasets that would make it easier to produce AI benefiting broader populations) and from the need for some AI tools to be ‘biased’ in a non-wrongful sense to account for clinically relevant differences. This session will provide an overview of the regulatory challenges posed by algorithmic bias and the pros and cons of some possible regulatory responses.

Bio:

Dr. Michael Da Silva is a Lecturer at the University of Southampton School of Law. He is a member of the New York bar. Dr. Da Silva was previously the Alex Trebek / CIHR Postdoctoral Fellow in AI and Health Care at the University of Ottawa Faculty of Law and remains affiliated with Ottawa’s Centre for Health Law, Policy and Ethics and Centre for Law, Technology, and Society. He is widely published in law, philosophy, and bioethics, including pieces on AI in Philosophy & Technology, Healthcare Policy, and the University of British Columbia Law Review. He recently served on a Health Canada External Reference Group on the development of regulatory requirements for adaptive machine learning-enabled medical devices.

Back to the schedule

Dr. Ben Wagner

Date and Time:

  • Monday, September 5, 2022, 11.00 – 12.30 (Online talk)

Title:

Bias and University Recruitment

Abstract:

Coming soon...  

Bio:

Ben Wagner is Assistant Professor at the Faculty of Technology, Policy and Management and Director of the AI Futures Lab at TU Delft. He is also Professor of Media, Technology and Society at Inholland. His research focuses on the governance of socio-legal systems, in particular human rights in digital technologies, and designing more accountable decision-support systems.

He is a visiting researcher at the Human Centred Computing Group at Oxford University, an advisory board member of the data science journal Patterns, and a member of the International Scientific Committee of the UKRI Trustworthy Autonomous Systems Hub.

Previously, Ben served as founding Director of the Center for Internet & Human Rights at European University Viadrina, Director of the Sustainable Computing Lab at the Vienna University of Economics and Business, and a member of the Advisory Group of the European Union Agency for Network and Information Security (ENISA). He holds a PhD in Political and Social Sciences from the European University Institute in Florence.

Back to the schedule

Dr. Florian Jaton

Date and Time:

  • Tuesday, September 6, 2022, 09.15 – 10.45 (Online talk)

Title:

Assessing Biases, Relaxing Moralism: On Ground-truthing Practices in Machine Learning Design and Application

Abstract:

When one documents the manufacture of machine learning (or artificial intelligence) algorithms using the analytical genre of laboratory ethnography – among other possible ones – one notices that many of them rely upon referential databases called “ground truths” that pair sets of input data with their manually designed output targets. One also quickly realizes that the collective processes leading to the definition of these ground truths heavily shape the algorithms they help constitute, evaluate, and compare. In this talk, I will first discuss some of the whys and wherefores of these ground-truthing processes, with an emphasis on supervised and unsupervised learning for computer vision. Then, building upon the presented elements and the concept of “genuine option” developed by the pragmatist philosopher William James, I will critically discuss the notion of bias and propose an alternative way to consider the morality of machine learning algorithms.

Bio:

Dr. Florian Jaton is a Postdoctoral Researcher at the STS Lab, University of Lausanne. He studied Philosophy, Mathematics, Literature, and Political Sciences before receiving his PhD in Social Sciences at the University of Lausanne, in partnership with the School of Computer and Communication Sciences at EPFL. His research interests are the sociology of algorithms, the philosophy of mathematics, and the history of computing. He is the author of The Constitution of Algorithms, published by MIT Press in 2021. Florian Jaton has also worked at the Donald Bren School of Information and Computer Sciences at the University of California, Irvine, and at the Center for the Sociology of Innovation at Mines Paris (PSL Research University).

Back to the schedule

Dr. Dave Murray-Rust

Date and Time:

  • Tuesday, September 6, 2022, 11.00 – 12.30 (In-person talk)

Title:

Human-machine inter-agencies

Abstract:

In this talk, I will discuss questions of how humans and computational systems understand each other, with a particular eye to how we can design both the systems and the mutual understandings that arise. I look at this through the lens of inter-agency: how do we co-shape behaviours in complex situations? I’ll touch on some emerging concepts such as multi-intentionality, look at how the metaphors used around AI systems can be tricky, and consider how concepts such as ‘respect’ can help us to create better things.

Bio:

Dr. Dave Murray-Rust is an Associate Professor at TU Delft, working in human-algorithm interaction – exploring the messy terrain between people, data and things through a combination of making and thinking. Current research questions include: How can we understand the algorithmically mediated society that we are heading towards? How can we ensure that there is space for people within computational systems, preserving privacy, choice, identity and humanity while making use of the possibilities of emerging technology? How can we work with things that have an increasing sense of agency, from sensing to responding to shaping the world around them?

In creative practice, Dr. Murray-Rust engages with interactions between people and technology. This includes electronic music making (especially with laptop trio Raw Green Rust), building software for different kinds of musicking and a collection of technology based artworks.

Back to the schedule

Workshop instructors


Dr. Lena Holzer

Date and Time:

  • Monday, September 5, 2022, 13.30 – 16.30 (In-person workshop)

Title:

Gender and Intersectional Inequalities

Abstract:

This interdisciplinary and participatory workshop investigates diverse approaches to addressing gender and intersectional inequalities in the context of the participants’ own research projects. The speaker will draw particular attention to diverse feminist and queer theories for conceptualizing inequalities shaped by multiple axes of power, including gender, race, class, (dis)ability, religion, and geographical location. The first part of the workshop focuses on analyzing different ways of conceptualizing and tackling (gender) inequalities across disciplines, especially law and computer science. The second part delves into concrete debates on gender justice that are also of relevance to the NoBIAS project, including discussions on “gender blindness” and the (re)production of the gender binary through law and technology. This part also involves analyzing, through the lens of feminist and queer theories, specific case studies that participants research in their own projects. The workshop provides participants with conceptual tools to grasp the complexity of intersectional gender inequalities and to make gender a core category of analysis in their current and future research.

Bio:

Dr. Lena Holzer is a researcher in the field of international law and international relations. They are passionate about finding ways to challenge systems of inequality, such as gender and racial inequalities, through legal and other means. In addition to their experience in academic research and teaching, they have worked for several human rights NGOs and been involved in civil society movements. Lena completed their doctoral research at the Graduate Institute and has been a Lecturer at Goldsmiths, University of London, since September 2022.

Back to the schedule

Marie-Therese Sekwenz

Date and Time:

  • Tuesday, September 6, 2022, 13.30 – 16.30 (In-person workshop)

Title:

Much Ado about Socio-technical Systems – The Hybrid Structure of Content Moderation

Abstract:

Governing speech is a difficult art, not only because of the sheer mass of content produced daily by users, but also because of the different forms of content (visual, acoustic, textual, gestural) that are moderated based on platform-specific community standards. Online platforms therefore use technical means, such as Artificial Intelligence (AI), as well as human content moderators to reduce the harms and risks posed by content uploaded to their environments, such as hate speech, electoral interference, or disinformation. The moderation of online expression is furthermore addressed by European regulation such as the General Data Protection Regulation, the Digital Services Act, and the Artificial Intelligence Act. This session will give insight into the socio-technical process of moderation and the challenges arising in its design, as well as an overview of the relevant regulations.

Bio:

Marie-Therese Sekwenz is a PhD candidate at TU Delft’s Faculty of Technology, Policy and Management and a member of the AI Futures Lab, a researcher at the Sustainable Computing Lab at the Vienna University of Economics and Business, and a journalist for the Austrian Broadcasting Corporation (ORF). Her research focuses on platform governance and regulation, Artificial Intelligence (AI), and socio-technical system design. Previously, Marie-Therese worked on research projects for the Leibniz Institute for Media Research – Hans-Bredow-Institut (HBI) and the Alexander von Humboldt Institute for Internet and Society. Marie-Therese studied at the Vienna University of Economics and Business and the Graduate School of Management in St Petersburg. She has a background in law, information systems, and economics.

Back to the schedule

Organization and Contact


For questions and issues regarding the NoBIAS Summer School, please contact: