RAI Summit workshops

Responsible AI in Education

Chair: Professor Riri Fitri Sari (University of Indonesia, Depok, Jakarta)
Audience: University leaders, Rectors/Vice Rectors/Institutional Research and ICT Directors, lecturers

This workshop’s purpose is to create a shared global baseline for how universities assess, govern, and advance responsible AI, using the RIDAN Charter’s eight Responsible AI indicators. Speakers will share the results of their institutional evaluations and the lessons learned from implementing responsible AI at their universities.

Key questions

1. Where is your university today on each of the eight RIDAN Charter indicators?

  • What maturity level have you achieved?
  • What governance, policy, and accountability structures are in place?

2. What are the specific enabling conditions and barriers?

  • Institutional culture
  • Leadership support
  • Resource allocation
  • Interdisciplinary engagement

3. How are ethical AI principles translated into operational practice?

  • Teaching curricula
  • Responsible research frameworks
  • Procurement and deployment of AI systems

4. What actionable priorities does your institution commit to for the next 12–24 months?

  • Internal targets
  • Collaborative initiatives
  • External partnerships

5. What global commonalities and contextual differences emerge from these reports?

  • Best practices that can be generalised
  • Context-specific adaptations (e.g., regulatory environments, cultural norms)

Focused outcome: Capture insights that can be consolidated into a global Responsible AI University Rating Framework based on empirical self-evaluations.

Generative AI in Policing

Chair: Dr Neil McBride (De Montfort University)
Audience: Lecturers and researchers in Policing, chief constables, police officers, ICT leaders in Policing, community organisers

The recent use of generative AI by West Midlands Police to support the case for banning Israeli football fans brought concerns about large language models in policing into sharp focus. While much has been researched and written at the corporate and management level concerning checklists for AI tool development, high-level decision-making, and efficiency applications in evidence-based policing, predictive policing and data analytics, less attention has been given to the benefits and risks of local use of generative AI in investigations and situation management, and to the practical use of models such as ChatGPT, Gemini and Claude in general, everyday policing. This workshop aims to develop officers’ ideas and understanding of the potential value and responsible use of generative AI in everyday policing activities.

Key questions

  • What are large language models and how do they work?
  • Can we identify suitable and unsuitable use cases for the application of generative AI in policing practice and investigations?
  • Where do we set the guardrails?
  • What skillsets do officers need to apply generative AI in police work?
  • How do we ensure accountability and auditability in generative AI across the board, from officer to chief constable?
  • How can we position generative AI in police procedure and practice?
  • What reassurance does the public need?
  • How can we apply the ethical policing principles of courage, respect and empathy, and public service to the embedding of generative AI in police work?
  • What does ethical use of generative AI in the pursuance of general police duty look like?

Focused outcome: Gather insights and ideas towards practical guidelines for officers using generative AI in police practice, leading to a definition of responsible generative AI usage and an outline of the skills training needed.

Middle Rage: Extremism, invisibility and social media

Chair: Dr Sara Wilford (DMU)
Audience: University leaders, Rectors/Vice Rectors/Institutional Research and ICT Directors, lecturers, policy makers

This workshop’s purpose is to understand the vulnerability of the middle-aged to extremist content online, and how AI, deepfakes and manipulation are used to target this group. The middle-aged are both powerful (decision makers, politicians, CEOs, etc.) and powerless (invisible, left behind, hard to reach). This workshop will present counter-narratives aimed specifically at this group and then consider how targeted policy interventions can prevent the fall down the social media rabbit hole towards extremism.

Key questions

  • Middle age – the invisible generation (the powerful and the powerless): What policy approaches would enable online counter-narratives to better reach middle-aged people?
  • Radicalising the middle-aged: What other factors do you think may lead to radicalisation specifically within this age group?
  • Who are the influencers?: Online ‘friends’ and content creators – who are they?
  • Are they the usual suspects?: How do we disrupt the influencers who are spreading disinformation?
  • As ‘Prevent’ chiefly targets younger people, how can we address the gap in identifying and addressing extremism in the middle-aged?
  • Next steps for policy making: With limited resources, how do we include this group in existing initiatives?
  • What approaches in policy making will ensure that this group is not ignored in the future?

Focused outcome: Capture insights and recommendations for future policy making and targeted approaches to foster effective counter-initiatives against extremism and misinformation online, focusing on the middle-aged group.

Digital and AI Transformation for Business – Strategy, Implementation and Responsible Practice

Chairs: Dr Adebowale and Dr Abiodun Egbetokun (DFI and BAL)
Audience: Academics interested in digital transformation and AI adoption; industry practitioners, business leaders, innovation managers, policy stakeholders, and SMEs engaged in or planning digital and AI-enabled change initiatives.

This workshop examines digital and AI transformation as an organisational, cultural, and socio-technical process rather than a purely technological exercise. Moving beyond tools and systems, it explores how (responsible) transformation reshapes business models, decision-making structures, workforce capabilities, and stakeholder relationships. 

Participants will explore how digital technologies and AI can be strategically aligned with business goals, operational processes, and stakeholder needs to deliver measurable value and sustainable impact.

Key questions

  • What organisational and cultural changes are required to support effective transformation?
  • What capabilities and skills are needed to support workforce readiness in AI-enabled environments?
  • How can SMEs and resource-constrained organisations approach digital and AI transformation pragmatically?
  • What frameworks or models best support the implementation and evaluation of transformation initiatives?

Focused outcome: A concise set of actionable insights outlining key organisational enablers, workforce capabilities, and pragmatic frameworks to support effective and responsible digital and AI transformation, particularly within SMEs and resource-constrained contexts.

The impact of GenAI on academic practices: information searching, writing, and feedback

Chairs: Arina Cirstea, Bev Hancock-Smith, Jason Eyre, Jenny Coombs (Library Learning Services)
Audience: academics, researchers, academic librarians, student support professionals

This workshop will reflect on the presenters' research, teaching, and resource-development experience.

Participants will have the opportunity to interact with selected AI tools available to DMU students and staff, find out about the guides and support available from LLS, and engage in a critical discussion on the potential implications of AI-assisted academic practices for their own contexts.

Key questions

  • How does the emergence of AI-powered tools change information searching practices?
  • AI as co-author versus AI as assistant: can academic writers effectively tell the difference?
  • How can AI be effectively integrated within peer review and writing feedback practices?

Focused outcome: Capture insights that can be shared in the event proceedings (a 1–2 page summary of the findings).

AI and Sustainability in Higher Education

Chair: Dr Andrew Reeves (De Montfort University)
Presenters: Prof Simon Kemp (University of Southampton) and Charlotte Bonner (CEO of Environmental Association of Universities and Colleges)
Audience: University Leaders, Academic Staff and Professional Services Staff

This session will present:

  • Evidence-based insights on the environmental impacts of a range of AI systems and tools
  • Current policies and practices adopted at universities on environmental aspects of AI use
  • Key issues to consider when linking environmental factors to socio-economic impacts and benefits related to AI adoption.

Focused outcome: The session will enable participants to discuss the issues raised and identify relevance to their own contexts, including taught courses, research processes and institutional policies.

Responsible AI in Digital Health: Opportunities, risks, and responsibilities in healthcare

Chairs: Chris Alvey & Dr Eleni Karasouli
Audience: Health care practitioners, policy makers, patients & public, researchers, digital transformation professionals

Artificial intelligence and digital health technologies are increasingly being adopted across healthcare systems to support clinical decision-making, service delivery and access to health information. While these innovations offer significant opportunities to enhance efficiency, improve patient engagement, and expand access to health knowledge, they also raise complex ethical and governance questions around responsibility, trust, transparency, and equity. This interactive workshop will explore how AI-enabled digital health technologies are being integrated into healthcare practice and what responsible adoption should look like. Through facilitated discussion and case-based scenarios, participants will examine both opportunities and risks, and contribute to identifying principles for the ethical and responsible use of AI in healthcare.

Participants will:

  • Examine how AI and digital health technologies are being adopted in healthcare practice, including their potential benefits and limitations.
  • Explore ethical and governance challenges associated with integrating AI into healthcare decision-making and service delivery.
  • Discuss how digital health tools may reshape relationships between healthcare professionals, patients, and health knowledge.
  • Identify principles for the responsible and equitable implementation of AI-enabled digital health technologies.

Key questions

  • How is AI currently being adopted within healthcare systems and clinical practice?
  • What opportunities do AI-enabled digital health tools offer for improving healthcare delivery and access to health information?
  • What risks arise when AI is integrated into healthcare decision-making and patient support?
  • How should responsibility be distributed between AI systems, healthcare professionals, and organisations?
  • How can healthcare systems ensure ethical, trustworthy, and equitable adoption of AI technologies?

Focused outcome: The workshop will function as an interdisciplinary consultation bringing together perspectives from health, technology, and behavioural sciences. Participants will collectively identify key challenges and guiding principles for the responsible adoption of AI in healthcare. Insights generated during the session will inform a short reflection or discussion paper for the summit proceedings on responsible AI and digital health.