To the report on this excellent AI event, I add some notes [and my comments, in square brackets], grouped by theme rather than in chronological order:
AI is increasingly used/useful for science: chemistry/biology, industrial manufacturing, computational fluid dynamics (eg airflow, floods) etc.
- Certain weather/climate AI modelling took 10 minutes for work that would otherwise have taken 6.5 years!
- Nurse-driven rather than IT-led, an NHS trust [probably this one or this one?] created an AI tool to prioritise patient complaints; it was extended to identify incidents and systemic issues and to help respond to complaints more effectively, and it's now embedded in the NHS Federated Data Platform.
- Coding, of course: eg at Uber, AI agents drive about 11% of coding, freeing staff for more important work.
AI democratises expertise globally, and can help with repetitive work and mundane tasks (eg for lawyers), freeing time to focus on more challenging work.
Domain expertise (chemistry, law etc) was said to be "more valuable" than the rush to data; AI should complement human experts [for me, search results a few months back indicated that a certain provision came from the Data Protection Directive - but it didn't; it appeared only in the GDPR, and you'd need data protection expertise to spot the error]
More interactivity with AI gives scientists back their flow: "What if I change this or that?" - and they get the results in a minute. But some software engineers initially excited about AI's speed turned it off after a few months because they'd stopped learning!
Physical AI (robotics, autonomous vehicles, AI interacting with/reasoning about the physical world [see eg TechUK's events on physical AI]) is garnering more investment.
There's increasing work on scientific ML and on ways to encode physical laws into AI models: neural physics (physics-based, AI-enabled [PINNs]), and training data-driven surrogates - faster, less energy-hungry and platform-agnostic, but black-box and less generalisable - on data generated by slow, expensive, platform-specific rules-based computational/physics-based models (a toy sketch follows the next bullet).
- One such surrogate simulated airflow and airspeed over Greater London a thousand times faster than real time!
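[For readers unfamiliar with these techniques, here's a minimal toy sketch contrasting the two approaches - a physics-informed loss vs a data-driven surrogate fitted to solver output. It's my own illustration, assuming PyTorch; nothing like this was shown at the event:]

```python
# Toy sketch only (my illustration): contrasting a physics-informed loss with a
# data-driven surrogate, on the trivial problem du/dx = -u, u(0) = 1.
import torch
import torch.nn as nn

# (1) Physics-informed (PINN-style): the loss penalises violation of the law.
pinn = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(pinn.parameters(), lr=1e-3)
x = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)
for _ in range(2000):
    opt.zero_grad()
    u = pinn(x)
    # du/dx via autograd - this is where the physical law enters the loss
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()                       # residual of du/dx = -u
    boundary_loss = ((pinn(torch.zeros(1, 1)) - 1.0) ** 2).mean()  # u(0) = 1
    (physics_loss + boundary_loss).backward()
    opt.step()

# (2) Data-driven surrogate: fit a fast model to input/output pairs produced by
# a slow, expensive solver (stood in for here by the exact solution exp(-x)),
# then query the surrogate instead of re-running the solver.
x_train = torch.linspace(0.0, 2.0, 200).reshape(-1, 1)
y_train = torch.exp(-x_train)                  # pretend this came from the solver
surrogate = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt2 = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    opt2.zero_grad()
    ((surrogate(x_train) - y_train) ** 2).mean().backward()   # plain MSE to data
    opt2.step()
```

[The point of the contrast: the first bakes the governing equation into training, while the second just imitates whatever the expensive solver produced - hence fast and platform-agnostic, but black-box and less generalisable.]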
Weather: there's evidence that models are learning underlying physical principles, eg atmospheric dynamics, rather than just pattern matching/stochastic parroting! Much work is being undertaken in the insurance sector on extreme events (previously too energy-intensive to model with traditional approaches).
Energy: AI could help improve energy efficiency (eg Singapore is doing this).
- For UK and especially London data centres, the electricity issue isn't power generation but the distribution network - the cables etc needed to get power to the data centres - where there's a multi-year queue of 5-10 years!
LLMs for legal research? Does the model provide the best supporting citations? Is its answer appropriate to the context? Does it avoid assumptions on important points? Thomson Reuters is continuing to research all this, but we're not quite there yet.
Bias: an obvious concern. Lawyers think of bias as relating to equality/fairness to humans, eg racial discrimination, but scientists seem to take a broader view, ie non-representativeness due to insufficient data, data not known to be missing, provenance issues, or failed data. Ongoing data is needed too, to help correct/remove bias.
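[To make the scientists' sense of bias concrete, here's a toy check of whether a dataset's category shares match a known reference distribution - again my own illustration in Python, not anything presented at the event:]

```python
# Toy illustration (my own) of bias as non-representativeness: compare the
# category shares in a sample against a known reference distribution.
from collections import Counter

def representativeness_gap(sample, reference_shares):
    """Per-category gap between the sample's shares and a reference
    distribution (eg census shares); large gaps flag non-representativeness."""
    counts = Counter(sample)
    total = len(sample)
    return {cat: counts.get(cat, 0) / total - share
            for cat, share in reference_shares.items()}

# Hypothetical example: a training set that over-samples one group.
sample = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
print(representativeness_gap(sample, {"A": 0.5, "B": 0.3, "C": 0.2}))
# {'A': 0.3, 'B': -0.15, 'C': -0.15}  -> group A over-represented
```

[Of course, the harder problem flagged above is the data you don't know is missing, which no frequency check like this will reveal.]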
Transparency is important: including on how a model was fine-tuned and how it's embedded in the pipeline [are the EU AI Act's transparency requirements too narrow?]
AI regulation: is a regulator needed per sector? Do regulators have enough resources? [regulatory resource has been raised as an issue in an amendment tabled to the Cyber Security & Resilience (CSR) etc Bill too!] Domain expertise is needed, but the private sector pays more... Do regulators need to change how they work, eg the safety mindset in the energy sector vs innovation? Should/could there be an international AI regulator? We need common principles to be agreed worldwide, eg on how to decide whether what AI is doing is good or bad, cf current laws/regulations being too strict or too restrictive.
- [But it's always been tough, if not impossible, to secure international agreement on anything beyond very (too?) high-level principles, given different countries' very different cultures and values. We do have the G7 Hiroshima AI Process, various AI Summits have been held, and the UN is currently considering global AI governance (see its 2024 report, Governing AI for Humanity). And some standards have been agreed regionally, eg from the UK NCSC and ETSI on the cybersecurity of generative AI]
Views on AI tend to be at either extreme: it's considered a magic panacea, or it's feared. What we need is collaborative work to improve precision and confidence regarding what an AI system can do, to improve verifiability and robustness/trustworthiness, and to develop AI systems more intelligently.
Proportionality: we don't need precision for all use cases - the severity of the consequences will vary (eg an electricity blackout) - but we do need responsible deployment and a systematic approach to risk management.
Risks are many, eg market risks from AI colluding with AI! More interconnectivity increases risks. Kids thinking AI is human is also not desirable! There are risks that AI is deskilling students [interesting post on AI deskilling, eg linking to research showing AI-assisted colonoscopy causes deskilling]. Many small businesses etc will use and trust AI without knowing how to check/verify the output, eg website code.
Approach: people think something either can't be done or must be done fully first, but we really need the middle ground, emphasising that the aim is not to increase risk but to lower risk, or to get to the desired risk threshold faster.
How to teach critical thinking? As with search engines, humans need to know not only how to critically evaluate and verify AI results, but also when not to use AI (eg if existing domain knowledge and algorithms/tools/solvers suffice).
Human society: increasingly embedding AI in society without understanding how it works can lead to complacency and erode societal values, so that we end up allowing behaviours previously not considered acceptable - cf the experience with social media.