Generative AI: Verification, Responsibility, and Interdisciplinary Insights

By The University of Manchester

Generative AI Under the Lens: Rigorous Checks for Safety and Reliability

Date and time

Dec 10 · 9:00 AM - 4:30 PM GMT

Location

The University of Manchester

Oxford Road, Manchester M13 9PL, United Kingdom

Agenda

Dec 10 (Manchester)
Dec 11 (Liverpool)

9:00 AM - 9:30 AM
Opening & Welcome
Objectives of the workshop and technical scope. Introductory remarks from the organizers.

9:30 AM - 10:30 AM
Keynote
AI-assisted formal verification – bridging speed, quality, and trust.

10:30 AM - 12:00 PM
Session I – SE4GenAI & GenAI4SE Foundations
AI-assisted formal verification frameworks. Perspectives on SE4GenAI and GenAI4SE. Safety-critical systems and US/DoD experience.

12:00 PM - 1:30 PM
Lunch and Networking Break

1:30 PM - 3:00 PM
Session II – Technical Demonstrations & Case Studies
Demonstrations of LLMs and AI agents in verification workflows.

3:00 PM - 4:00 PM
Panel Discussion
Theme: Balancing speed, quality, and trust in verification pipelines. Panelists: the keynote speaker and academic/industry experts.

4:00 PM - 4:30 PM
Closing
Technical outcomes and research questions for Day 2.

Good to know

Highlights

  • 7 hours 30 minutes
  • All ages
  • In person
  • Paid parking
  • Doors at 09:00

About this event

Generative AI: Verification, Responsibility, and Interdisciplinary Insights

Generative AI is advancing at an unprecedented pace, enabling breakthroughs in software engineering, robotics, autonomous systems, healthcare, education, and beyond. These rapid developments create extraordinary opportunities for innovation but also raise urgent challenges around safety, reliability, ethics, and societal impact. Addressing these challenges requires both technical rigor and interdisciplinary insight.

This two-day workshop brings together researchers, practitioners, and experts from computer science and across diverse domains to explore how generative AI can be developed, verified, and deployed responsibly.

  • Day 1 (Dec 10, Manchester) will focus on the computer science and AI foundations of generative AI, with an emphasis on verification methods—both formal and statistical—that can ensure the safety, security, and reliability of AI-driven systems.
  • Day 2 (Dec 11, Liverpool) will broaden the conversation by incorporating interdisciplinary perspectives from fields such as healthcare, law, education, and the social sciences. These discussions will highlight how insights from other disciplines can directly inform computer science research, contributing to the design of generative AI that is more robust, ethical, and aligned with societal needs.

Together, the two days will provide a comprehensive view of the challenges and opportunities in generative AI, bridging technical innovation with cross-domain collaboration to advance both scientific rigor and responsible practice.


Day 1 (this event)

Title: Generative AI Under the Lens: Rigorous Checks for Safety and Reliability

Generative AI has accelerated development across various domains—including software, robotics, autonomous systems, and industrial applications—enabling rapid innovation and productivity gains. However, deploying generative AI-driven systems can introduce safety, security, and reliability vulnerabilities, highlighting the urgent need for rigorous evaluation of their outputs before deployment. Among many evaluation techniques, this workshop will focus on formal and statistical verification approaches.

The need for careful verification is concrete. In software development, for example, 45% of organizations prioritize speed over quality, and 63% deploy code without full testing. And while 80% view generative AI as enhancing both speed and quality, studies reveal significant flaws in AI-generated outputs that could compromise safety and security, underscoring the need for verification methods that are both efficient and thorough across all AI-driven systems.

Generative AI can also support verification in multiple ways, including:

  • Automating specification generation from system requirements to accelerate verification workflows.
  • Enhancing formal verification tools by guiding proofs, generating counterexamples, and analyzing high-risk areas.
  • Optimizing verification productivity and coverage in complex systems, including hardware and software co-design.
  • Integrating with statistical methods to improve reliability, uncertainty quantification, and diagnostics.

These AI-assisted techniques reduce human effort, scale verification to complex systems, and enable previously infeasible tasks, all while maintaining rigorous guarantees. A minimal sketch of the statistical angle follows.
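
As a rough illustration of that statistical angle, the sketch below certifies a lower bound on the rate at which generated artifacts pass a verification oracle. It is a minimal sketch under stated assumptions: generate_candidate stands in for an LLM call and oracle_accepts for a formal or test-based check (both names are invented for this example), with a one-sided Hoeffding bound supplying the confidence guarantee.

    import math
    import random

    # Hypothetical stand-ins: in practice, generate_candidate would wrap an
    # LLM call and oracle_accepts a formal or test-based verification check.
    def generate_candidate(seed: int) -> str:
        rng = random.Random(seed)
        return "ok" if rng.random() < 0.9 else "buggy"

    def oracle_accepts(candidate: str) -> bool:
        return candidate == "ok"

    def certified_pass_rate(n: int = 1000, delta: float = 0.01) -> float:
        """Lower-bound the true pass rate with confidence 1 - delta."""
        passes = sum(oracle_accepts(generate_candidate(i)) for i in range(n))
        empirical = passes / n
        # One-sided Hoeffding: P(true rate < empirical - eps) <= exp(-2*n*eps^2),
        # so setting the right-hand side to delta gives the margin eps below.
        eps = math.sqrt(math.log(1 / delta) / (2 * n))
        return max(0.0, empirical - eps)

    print(f"Pass rate is at least {certified_pass_rate():.3f} (99% confidence)")

The same pattern extends to broader uncertainty quantification: swap the oracle for any measurable property of the output and the statistical guarantee carries over unchanged.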

Participants will explore cutting-edge verification techniques and collaborative opportunities to advance trustworthy and reliable generative AI deployment.


Day 2 (separate registration)

Title: Generative AI in Perspective: Cross-Domain Guidance for Robust and Ethical Systems

Generative AI is reshaping fields far beyond computer science—transforming healthcare, education, law, media, and the arts. Each of these domains brings unique challenges and insights around ethics, accountability, fairness, and human impact. The second day of the workshop will focus on how interdisciplinary perspectives can directly inform and strengthen computer science research, ensuring that generative AI systems are not only powerful but also responsible, reliable, and aligned with societal needs.

Technical innovations alone cannot resolve issues such as bias, misuse, or ethical ambiguity. By engaging with healthcare, we learn how to embed safety and reliability standards into system design. From law and policy, we gain frameworks for accountability and verifiable compliance. Social sciences highlight risks around bias, equity, and trust that can guide model evaluation. The arts and humanities challenge us to rethink creativity, authorship, and the boundaries of human–AI collaboration. These perspectives are not external to CS—they are essential inputs that shape more rigorous and ethical AI development.

Interdisciplinary insights can support CS research in several concrete ways:

  • Embedding Ethical Constraints: Translating values like fairness and accountability into measurable, testable system properties (a sketch follows this list).
  • Guiding Verification Benchmarks: Using policy and domain-specific requirements to set new standards for evaluation and robustness.
  • Improving Human–AI Interaction: Drawing on social science and design principles to create systems that are usable, trustworthy, and equitable.
  • Anticipating Risks: Leveraging cross-domain expertise to surface vulnerabilities (bias, misinformation, cultural harm) that technical testing might miss.
  • Shaping Research Agendas: Using interdisciplinary priorities to guide CS researchers toward problems with the greatest societal relevance.
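
As a toy version of the first point above, the sketch below expresses demographic parity as a testable property that a verification pipeline could assert before deployment. The audit data, group labels, and the 0.1 threshold are illustrative assumptions rather than established standards.

    from collections import defaultdict

    def demographic_parity_gap(decisions):
        """decisions: iterable of (group, positive_outcome) pairs. Returns the
        largest difference in positive-outcome rates between any two groups."""
        totals = defaultdict(int)
        positives = defaultdict(int)
        for group, outcome in decisions:
            totals[group] += 1
            positives[group] += int(outcome)
        rates = [positives[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

    # Toy audit data from a hypothetical model: an 80% positive rate for
    # group A versus 72% for group B gives a gap of 0.08, under threshold.
    decisions = ([("A", True)] * 80 + [("A", False)] * 20
                 + [("B", True)] * 72 + [("B", False)] * 28)
    assert demographic_parity_gap(decisions) <= 0.1, "parity gap exceeds threshold"

Encoded this way, an ethical requirement becomes a regression test: it can gate deployment exactly like any functional property.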

By reframing interdisciplinary dialogue as an essential input to technical progress, this workshop will highlight how cross-domain expertise can directly strengthen computer science research, ensuring that future systems are both technically sound and socially aligned.

Organised by

The University of Manchester

Free
Dec 10 · 09:00 GMT