Generative AI Under the Lens: Rigorous Checks for Safety and Reliability

By The University of Manchester

At Generative AI Under the Lens, we'll dive deep into ensuring that AI systems are safe and reliable through rigorous checks and discussion!

Date and time

Dec 10 · 09:00 GMT

Location

The University of Manchester

Oxford Road, Manchester M13 9PL, United Kingdom

Agenda

Dec 10 (Manchester)
Dec 11 (Liverpool)

9:00 AM - 9:30 AM

Opening & Welcome


Objectives of the workshop and technical scope. Introductory remarks from organizers.

9:30 AM - 10:30 AM

Keynote


AI-assisted formal verification – bridging speed, quality, and trust.

10:30 AM - 12:00 PM

Session I – SE4GenAI & GenAI4SE Foundations


AI-assisted formal verification frameworks. Perspectives on SE4GenAI and GenAI4SE. Safety-critical systems and US/DoD experience.

12:00 PM - 1:30 PM

Lunch and Networking Break

1:30 PM - 3:00 PM

Session II – Technical Demonstrations & Case Studies


Demos of LLMs and AI agents in verification workflows.

3:00 PM - 4:00 PM

Panel Discussion


Theme: Balancing speed, quality, and trust in verification pipelines. Panelists: the keynote speaker plus academic and industry experts.

4:00 PM - 4:30 PM

Closing


Technical outcomes and research questions for Day 2.

Good to know

Highlights

  • 7 hours 30 minutes
  • all ages
  • In person
  • Paid parking
  • Doors at 09:00

About this event

Science & Tech • Science

Generative AI has accelerated development across various domains - including software, robotics, autonomous systems, and industrial applications - enabling rapid innovation and productivity gains. However, deploying generative AI-driven systems can introduce safety, security, and reliability vulnerabilities, highlighting the urgent need for rigorous evaluation of their outputs before deployment. Among many evaluation techniques, this workshop will focus on formal and statistical verification approaches.

Generative AI outputs require careful verification. In software development, for example, 45% of organizations prioritize speed over quality and 63% deploy code without full testing. While 80% view generative AI as enhancing both speed and quality, studies reveal significant flaws in generated code that could compromise safety and security, underscoring the need for efficient yet thorough verification methods across all AI-driven systems.

Generative AI can also support verification in multiple ways, including:

  • Automating specification generation from system requirements to accelerate verification workflows.
  • Enhancing formal verification tools by guiding proofs, generating counterexamples, and analyzing high-risk areas.
  • Optimizing verification productivity and coverage in complex systems, including hardware and software co-design.
  • Integrating with statistical methods to improve reliability, uncertainty quantification, and diagnostics.

These AI-assisted techniques reduce human effort, scale verification to complex systems, and make previously infeasible tasks tractable while maintaining rigorous guarantees; the sketch below illustrates the statistical angle.
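To make the last point above concrete, here is a minimal illustrative sketch of statistical verification: sample outputs from a generative model, check each against an automated oracle, and compute a one-sided Clopper-Pearson lower confidence bound on the pass rate. It assumes Python with SciPy; the check_output oracle, the sample data, and the 0.95 reliability target are hypothetical placeholders rather than material from the workshop.

```python
# Minimal sketch: statistical reliability bound for sampled generative-AI outputs.
# All names (check_output, the sample data, the 0.95 target) are illustrative placeholders.
from scipy.stats import beta

def clopper_pearson_lower(passes: int, n: int, alpha: float = 0.05) -> float:
    """One-sided (1 - alpha) lower confidence bound on the true pass probability."""
    if passes == 0:
        return 0.0
    return float(beta.ppf(alpha, passes, n - passes + 1))

def check_output(candidate: str) -> bool:
    """Stand-in oracle for a formal or automated check (e.g. a model checker or test suite)."""
    return "unsafe" not in candidate  # toy property: output must not contain a forbidden token

# Hypothetical batch of generated outputs; in practice these come from the model under test.
samples = ["plan A", "plan B", "unsafe plan C"] * 100
passes = sum(check_output(s) for s in samples)

lower = clopper_pearson_lower(passes, len(samples))
print(f"{passes}/{len(samples)} passed; 95% lower bound on pass rate = {lower:.3f}")
print("meets 0.95 reliability target" if lower >= 0.95 else "insufficient evidence at this sample size")
```

The same pattern extends to formal checks: whenever each sampled output can be accepted or rejected mechanically, a binomial bound of this kind turns a batch of pass/fail results into a quantified reliability claim.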

The workshop will convene researchers, practitioners, and industry experts to explore the following research question: How can generative AI-assisted verification techniques, including formal and statistical approaches, ensure that AI-driven systems are rigorously checked for safety, reliability, and security across diverse domains?

Participants will explore cutting-edge methods and collaborative opportunities to advance trustworthy and reliable generative AI deployment.

Organised by

The University of Manchester

Free
Dec 10 · 09:00 GMT