Professor Bernd Stahl

Job: Director of the Centre for Computing and Social Responsibility

Faculty: Computing, Engineering and Media

School/department: School of Computer Science and Informatics

Research group(s): Centre for Computing and Social Responsibility (CCSR)

Address: De Montfort University, The Gateway, Leicester, LE1 9BH, United Kingdom

T: +44 (0)116 207 8252

E: bstahl@dmu.ac.uk

W: https://orcid.org/0000-0002-4058-4456

 

Personal profile

Bernd Carsten Stahl is Professor of Critical Research in Technology and Director of the Centre for Computing and Social Responsibility at De Montfort University, Leicester, UK. His interests cover philosophical issues arising from the intersections of business, technology, and information. These include the ethics of ICT and critical approaches to information systems.

Publications and outputs

  • Brain simulation as a cloud service: The Virtual Brain on EBRAINS
    Schirner, Michael, et al.; Akintoye, Simisola; Stahl, Bernd Carsten. The Virtual Brain (TVB) is now available as open-source services on the cloud research platform EBRAINS (ebrains.eu). It offers software for constructing, simulating and analysing brain network models, including the TVB simulator; magnetic resonance imaging (MRI) processing pipelines to extract structural and functional brain networks; combined simulation of large-scale brain networks with small-scale spiking networks; automatic conversion of user-specified model equations into fast simulation code; simulation-ready brain models of patients and healthy volunteers; Bayesian parameter optimization in epilepsy patient models; data and software for mouse brain simulation; and extensive educational material. TVB cloud services facilitate reproducible online collaboration and discovery of data assets, models, and software embedded in scalable and secure workflows, a precondition for research on large cohort data sets, better generalizability, and clinical translation. (An illustrative simulation sketch follows this publication list.) Open access article: Schirner, M., Domide, L., Perdikis, D., Triebkorn, P., Stefanovski, L., Pai, R., Prodan, P., Valean, B., Palmer, J., Langford, C., Blickensdörfer, A., van der Vlag, M., Diaz-Pier, S., Peyser, A., Klijn, W., Pleiter, D., Nahm, A., Schmid, O., Woodman, M., Zehl, L., Fousek, J., Petkoski, S., Kusch, L., Hashemi, M., Marinazzo, D., Mangin, J.-F., Flöel, A., Akintoye, S., Stahl, B.C., Cepic, M., Johnson, E., Deco, G., McIntosh, A.R., Hilgetag, C.C., Morgan, M., Schuller, B., Upton, A., McMurtrie, C., Dickscheid, T., Bjaalie, J.G., Amunts, K., Mersmann, J., Jirsa, V., Ritter, P. (2022) Brain simulation as a cloud service: The Virtual Brain on EBRAINS. NeuroImage, 251, 118973.
  • Responsible Innovation Ecosystems - Ethical implications of the application of the ecosystem concept to artificial intelligence
    Stahl, Bernd Carsten. The concept of innovation ecosystems has become prominent due to its explanatory power. It offers a convincing account of innovation, explaining how and why innovation pathways change and evolve, and it has been adopted to explain, predict, and steer innovation. The increasing importance of innovation for most aspects of human life calls for the inclusion of ethical and social rights aspects into the innovation ecosystems discourse. The current innovation ecosystems literature does not provide guidance on how the integration of ethical and social concerns into innovation ecosystems can be realised. One way to achieve this is to draw on the discussion of responsible research and innovation (RRI). This paper applies RRI to the innovation ecosystems discourse and proposes the concept of responsible innovation ecosystems. It draws on the discussion of the ethics of artificial intelligence (AI) to explore how responsible AI innovation ecosystems can be shaped and realised. Open access article: Stahl, B. C. (2022) Responsible innovation ecosystems: Ethical implications of the application of the ecosystem concept to artificial intelligence. International Journal of Information Management, 62, 102441.
  • Pseudonymization of neuroimages and data protection: Increasing access to data while retaining scientific utility
    Eke, Damian; Stahl, Bernd Carsten; Ogoh, George; Knight, William; Akintoye, Simisola; Ochang, Paschal. For a number of years, facial feature removal techniques such as 'defacing', 'skull stripping' and 'face masking/blurring' were considered adequate privacy-preserving tools to openly share brain images. Scientifically, these measures were already a compromise between data protection requirements and the research impact of such data. Now, recent advances in machine learning and deep learning that indicate an increased possibility of re-identification from defaced neuroimages have increased the tension between open science and data protection requirements. Researchers are left pondering how best to comply with the different jurisdictional requirements of anonymization, pseudonymization or de-identification without compromising the scientific utility of neuroimages even further. In this paper, we present perspectives intended to clarify the meaning and scope of these concepts and highlight the privacy limitations of available pseudonymization and de-identification techniques. We also discuss possible technical and organizational measures and safeguards that can facilitate sharing of pseudonymized neuroimages without causing further reductions to the utility of the data. (An illustrative pseudonymization sketch follows this publication list.) Open access article: Eke, D., Aasebø, I.E.J., Akintoye, S., Knight, W., Karakasidis, A., Mikulan, E., Ochang, P., Ogoh, G., Oostenveld, R., Pigorini, A., Stahl, B.C., White, T., Zehl, L. (2021) Pseudonymization of neuroimages and data protection: Increasing access to data while retaining scientific utility. Neuroimage Reports, 1(4), 100053.
  • From Responsible Research and Innovation to Responsibility by Design
    Stahl, Bernd Carsten; Akintoye, Simisola; Bitsch, Lise; Bringedal, Berit; Eke, Damian; Farisco, Michele; Grasenick, Karin; Guerrero, Manuel; Knight, William; Leach, Antonia; Nyholm, Sven; Ogoh, George; Rosemann, Achim; Salles, Arleen; Trattnig, Julia; Ulnicane, Inga. Drawing on more than eight years of work to implement Responsible Research and Innovation (RRI) in the Human Brain Project, a large EU-funded research project that brings together neuroscience, computing, social sciences, and the humanities, and one of the largest investments in RRI in a single project, this article offers insights on RRI and explores its possible future. We focus on the question of how RRI can have long-lasting impact and persist beyond the time horizon of funded projects. For this purpose, we suggest the concept of "responsibility by design", which is intended to encapsulate the idea of embedding RRI in research and innovation in a way that makes it part of the fabric of the resulting outcomes, in our case a distributed European Research Infrastructure. Open access article: Stahl, B. C., Akintoye, S., Bitsch, L., Bringedal, B., Eke, D., Farisco, M., Grasenick, K., Guerrero, M., Knight, W., Leach, A., Nyholm, S., Ogoh, G., Rosemann, A., Salles, A., Trattnig, J., Ulnicane, I. (2021) From Responsible Research and Innovation to Responsibility by Design. Journal of Responsible Innovation.
  • From Computer Ethics and the Ethics of AI towards an Ethics of Digital Ecosystems
    Stahl, Bernd Carsten. Ethical, social and human rights aspects of computing technologies have been discussed since the inception of these technologies. In the 1980s this led to the development of a discourse often referred to as computer ethics. More recently, since the middle of the 2010s, a highly visible discourse on the ethics of artificial intelligence (AI) has developed. This paper discusses the relationship between these two discourses and compares their scopes, the topics and issues they cover, their theoretical basis and reference disciplines, the solutions and mitigation options they propose, and their societal impact. The paper argues that an understanding of the similarities and differences of the discourses can benefit the respective discourses individually. More importantly, by reviewing them, one can draw conclusions about relevant features of the next discourse, the one we can reasonably expect to follow after the ethics of AI. The paper suggests that instead of focusing on a technical artefact such as computers or AI, one should focus on the fact that ethical and related issues arise in the context of socio-technical systems. Drawing on the metaphor of ecosystems, which is widely applied to digital technologies, it suggests preparing for a discussion of the ethics of digital ecosystems. Such a discussion can build on and benefit from a more detailed understanding of its predecessors in computer ethics and the ethics of AI. Open access article: Stahl, B. C. (2021) From Computer Ethics and the Ethics of AI towards an Ethics of Digital Ecosystems. AI and Ethics.
  • Good governance as a response to discontents? Déjà vu, or lessons for AI from other emerging technologies
    Ulnicane, Inga; Eke, Damian; Knight, William; Ogoh, George; Stahl, Bernd Carsten. Recent advances in Artificial Intelligence (AI) have led to intense debates about the benefits and concerns associated with this powerful technology. These concerns and debates have similarities with developments in other emerging technologies characterized by prominent impacts and uncertainties. Against this background, this paper asks: what can AI governance, policy and ethics learn from other emerging technologies to address concerns and ensure that AI develops in a socially beneficial way? From recent literature on the governance, policy and ethics of emerging technologies, six lessons are derived, focusing on inclusive governance with balanced and transparent involvement of government, civil society and the private sector; diverse roles of the state, including mitigating risks, enabling public participation and mediating diverse interests; objectives of technology development that prioritize societal benefits; international collaboration supported by science diplomacy; and learning from computing ethics and Responsible Innovation. Open access article: Ulnicane, I., Okaibedi Eke, D., Knight, W., Ogoh, G. and Stahl, B.C. (2021) Good governance as a response to discontents? Déjà vu, or lessons for AI from other emerging technologies. Interdisciplinary Science Reviews, 46(1-2), pp. 71-93.
  • From PAPA to PAPAS and Beyond: Dealing with Ethics in Big Data, AI and other Emerging Technologies
    Stahl, Bernd Carsten. The acronym PAPA, which stands for privacy, accuracy, property, and accessibility, has long been part of the discussion of ethical issues in information systems. While all four constituent components remain relevant, technical progress and the integration of technology in organisations and society in the intervening almost 40 years call for a reconsideration of the acronym. In response to Richardson et al.'s proposal to add the term "society", this paper suggests that an extension of the acronym in more than one dimension would be useful. This includes the dimension of the stakeholder, which can be individuals, organisations or society. It could include the stage of systems use, covering input, processing and output. The third dimension is the ethical issue, which still includes PAPA but can be supplemented with others, such as bias and power distribution. The paper therefore suggests that we not only need to extend PAPA to PAPAS but also need to go beyond a list of ethical issues to capture the richness and complexity with which ethics and information systems interact. Stahl, B. C. (2021) From PAPA to PAPAS and Beyond: Dealing with Ethics in Big Data, AI and other Emerging Technologies. Communications of the AIS.
  • Artificial Intelligence for a Better Future
    Stahl, Bernd Carsten. This open access book proposes a novel approach to Artificial Intelligence (AI) ethics. AI offers many advantages: better and faster medical diagnoses, improved business processes and efficiency, and the automation of boring work. But undesirable and ethically problematic consequences are possible too: biases and discrimination, breaches of privacy and security, and societal distortions such as unemployment, economic exploitation and weakened democratic processes. There is even a prospect, ultimately, of super-intelligent machines replacing humans. The key question, then, is: how can we benefit from AI while addressing its ethical problems? This book presents an innovative answer by offering a different perspective on AI and its ethical consequences. Instead of looking at individual AI techniques, applications or ethical issues, we can understand AI as a system of ecosystems, consisting of numerous interdependent technologies, applications and stakeholders. Developing this idea, the book explores how AI ecosystems can be shaped to foster human flourishing. Drawing on rich empirical insights and detailed conceptual analysis, it suggests practical measures to ensure that AI is used to make the world a better place. Open access book: Stahl, B.C. (2021) Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies. Cham: Springer International Publishing.
  • Organisational Responses to the Ethical Issues of Artificial Intelligence.
    Stahl, Bernd Carsten; Antoniou, Josephine; Ryan, Mark; Macnish, Kevin; Jiya, Tilimbe. The ethics of artificial intelligence (AI) is a widely discussed topic. There are numerous initiatives that aim to develop principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current understanding of how organisations deal with AI ethics by presenting empirical findings collected using a set of 10 case studies and providing an account of the cross-case analysis. The paper reviews the discussion of ethical issues of AI as well as mitigation strategies that have been proposed in the literature. Using this background, the cross-case analysis categorises the organisational responses that were observed in practice. The discussion shows that organisations are highly aware of the AI ethics debate and keen to engage with ethical issues proactively. However, they make use of only a relatively small subsection of the mitigation strategies proposed in the literature. These insights are of importance to organisations deploying or using AI and to the academic AI ethics debate, but perhaps most valuable to policymakers involved in the current debate about suitable policy developments to address the ethical issues raised by AI. Stahl, B.C., Antoniou, J., Ryan, M., Macnish, K., Jiya, T. (2021) Organisational Responses to the Ethical Issues of Artificial Intelligence. AI & Society.
  • Research and Practice of AI Ethics: A case study approach juxtaposing academic discourse with organisational reality
    Ryan, Mark; Antoniou, Josephina; Brooks, Laurence; Jiya, Tilimbe; Macnish, Kevin; Stahl, Bernd Carsten. This study investigates the ethical use of Big Data and Artificial Intelligence (AI) technologies (BD+AI) using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues that uses qualitative tools to analyse findings from ten targeted case studies from a range of domains. The analysis coalesces identified singular ethical issues (from the literature) into clusters to offer a comparison with the proposed classification in the literature. The results show that, despite the variety of different social domains, fields, and applications of AI, there is overlap and correlation between the organisations' ethical concerns. This more detailed understanding of ethics in AI+BD is required to ensure that the multitude of suggested ways of addressing them can be targeted and succeed in mitigating the pertinent ethical issues that are often discussed in the literature. Ryan, M., Antoniou, J., Brooks, L., Jiya, T., Macnish, K., Stahl, B.C. (2021) Research and Practice of AI Ethics: A case study approach juxtaposing academic discourse with organisational reality. Science and Engineering Ethics, 27(2).
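
To illustrate the kind of large-scale brain network modelling described in the Virtual Brain entry above, the following Python sketch integrates a toy network of linearly coupled oscillators. It is a minimal illustration only, not TVB's actual API: the connectivity matrix, coupling strength and node dynamics are all placeholders chosen for readability.

```python
import numpy as np

# Toy "brain network model": N regions, each a damped oscillator,
# linearly coupled through a (random, placeholder) connectivity matrix.
# Illustration of the general idea only; not the TVB simulator's API.
rng = np.random.default_rng(seed=0)

N = 76                                 # number of brain regions (placeholder)
W = rng.random((N, N))                 # structural connectivity weights (placeholder)
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)      # row-normalise the coupling

dt = 0.1                               # integration step in ms
steps = 10_000                         # 1 second of simulated activity
G = 0.05                               # global coupling strength (placeholder)

x = rng.standard_normal(N) * 0.1       # regional state
v = np.zeros(N)                        # regional rate of change
trace = np.empty((steps, N))

for t in range(steps):
    coupling = G * (W @ x)             # input each region receives from the network
    a = -x - 0.5 * v + coupling        # damped oscillator driven by network input
    v += dt * a                        # explicit Euler integration
    x += dt * v
    trace[t] = x

print("simulated activity shape:", trace.shape)   # (steps, N)
```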
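
The pseudonymization entry above discusses defacing and pseudonym keys for sharing neuroimages. The sketch below is a minimal, hypothetical illustration of two of the ideas involved: deriving a pseudonym from a subject identifier via a salted hash, and blanking a crude "face" region of a NIfTI volume with the nibabel library. Real defacing tools fit a face mask through image registration rather than using a hard-coded box, and the file names, salt and box coordinates here are assumptions.

```python
import hashlib
import numpy as np
import nibabel as nib  # common neuroimaging I/O library


def pseudonymise_id(subject_id: str, project_salt: str) -> str:
    """Derive a pseudonym from a subject ID via a salted hash.
    The salt (the re-identification key) must be stored separately
    under organisational safeguards, as the paper discusses."""
    return hashlib.sha256(f"{project_salt}:{subject_id}".encode()).hexdigest()[:12]


def crude_deface(in_path: str, out_path: str) -> None:
    """Zero out a hard-coded anterior/inferior box as a stand-in for
    proper defacing (real tools fit a face mask via registration)."""
    img = nib.load(in_path)
    data = np.asarray(img.get_fdata())
    x_dim, y_dim, z_dim = data.shape
    data[:, : y_dim // 3, : z_dim // 3] = 0          # placeholder "face" region
    nib.save(nib.Nifti1Image(data, img.affine, img.header), out_path)


# Hypothetical usage; the input path is a placeholder and must exist:
pseudonym = pseudonymise_id("sub-01", project_salt="example-secret-salt")
print("pseudonym:", pseudonym)
# crude_deface("sub-01_T1w.nii.gz", f"{pseudonym}_T1w.nii.gz")
```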

Click here to view a full listing of Bernd Stahl's publications and outputs.

Research interests/expertise

Computer and information ethics

Emerging Technologies

Responsible Research and Innovation

Areas of teaching

Ethics and ICT

Responsible Research and Innovation (RRI) in ICT

Critical approaches to information systems

Privacy

Qualifications

DSc: De Montfort University, UK

PhD: University Witten/Herdecke, Germany

MSc in Industrial Engineering: University of the German Armed Forces, Hamburg, Germany

MA in Philosophy and Economics: University of Hagen, Germany

MPhil in Philosophy: University Michel de Montaigne, Bordeaux III, France

LLM in Business Law: De Montfort University, Leicester, UK

Membership of professional associations and societies

Fellow of the British Computer Society (BCS) (since 04/2010, Member from 2003)

Fellow of the International Information Management Association (IIMA) (from 2005)

President of the IIMA, 2005-2006

Fellow of the Higher Education Academy (from March 2007), registered practitioner since April 2006

Member of the International Federation for Information Processing (IFIP), Working Group 8.2

Member of the Association for Information Systems (AIS)

Member of the United Kingdom Academy for Information Systems (UKAIS)

Member of the Information Resource Management Association (IRMA)

Member of the International Center for Information Ethics (ICIE)

Member of the European Business Ethics Network (EBEN)

Projects

I hold or have held significant leadership positions in the following projects:

SHERPA - Shaping the Ethical Dimensions of Smart Information Systems - a European Perspective (EU, SwafS, 2018-2021)

Human Brain Project (EU, FET Flagship, 2013-2023)
(report and video by Bloomberg and euronews)

ORBIT - The Observatory for RRI in ICT (EPSRC, 2017-2022, then spin-off company)

Responsible-Industry - Responsible Research and Innovation in Business and Industry in the Domain of ICT for Health, Demographic Change and Wellbeing (EU, SiS, 2014-2017)

CONSIDER - Civil Society Organisations in Designing Research Governance (EU, SiS, 2012-2015)

Framework for RRI in ICT (EPSRC, 2011-2013, predecessor of ORBIT)

ETICA - Ethical Issues of Emerging ICT Applications (EU, SiS, 2009-2011)

Consultancy work

Ethics

Responsible Research and Innovation

Privacy Impact Assessment

Professional esteem indicators

 

Editorial work

Editor in Chief of the Journal of Responsible Technology (successor of the ORBIT journal)

PhD student supervision

I am interested in supervising high-quality, motivated PhD students in my areas of interest, including:

  • Responsible research and innovation in ICT, notably questions such as:
    • Industrial realisation of RRI
    • Success measures, impact of RRI
  • Responsible data governance
  • Ethical issues of emerging technologies, e.g.:
    • Brain-computer interfaces
    • Converging technologies (neuro, cognitive, ICT) 