Mastodon Kuan0

Monday, 13 January 2025

Things cyber security, Q4 2024

Selected things cyber security, mostly from Q4 2024, are listed below in reverse chronological order, some with descriptions. See also Things AI, Oct 2024, and Data protection & cyber security, Oct 2024


28 Dec 24

  • Software code, provenance: a thoughtful, telling post, Why it's hard to trust software, but you mostly have to anyway: "...the situation is fairly dire: if you're running software written by someone else—which basically everyone is—you have to trust a number of different actors. We do have some technologies which have the potential to reduce the amount you have to trust them, but we don't really have any plausible venue to reduce things down to the level where there aren't a number of single points of trust... Open source, audits, reproducible builds, and binary transparency are all good, but they don't eliminate the need to trust whoever is providing your software and you should be suspicious of anyone telling you otherwise" 

21 Dec 24

  • International collaboration, critical infrastructure: the Critical Five (5 Eyes) countries reaffirmed their "vision of fostering collaboration across the private and government critical infrastructure communities in our five nations" (see the June 24 summary on how they plan to modernise their approach to critical national infrastructure security and resilience). CISA links

20 Dec 24

  • Consumer IoT: the UK Department for Science, Innovation & Technology (DSIT) will be undertaking an interim Post-Implementation Review of the Product Security and Telecommunications Infrastructure Act (PSTI), to be published by October 2026. To support this, DSIT has commissioned a consultancy to condcut preparations for evaluation and research projects on the product security elements of the PSTI Act

19 Dec 24

  • Training: the UK updated its webpage on cyber security training for business - this has links to many useful resources, such as free online staff training, a free online incident response exercise and personalised cyber action plan for SMEs/individuals, many from the UK National Cyber Security Centre

18 Dec 24

  • Supply chain, vendors, defence, national security: the UK Ministry of Defence wrote to Defence Industry CEOs/leads, asking them to review their organisations' performance against the Cyber Assessment Framework (CAF, developed by the UK National Cyber Security Centre (NCSC) for NIS Regulations assessments) particularly the areas of Govern, Identify, Protect, Detect, Respond and Recover; adopt Active Cyber Defence (ACD) with the NCSC and its tools including Early Warning (see 3 Dec); implement the March 24 Cyber Security Standard for Suppliers; and deliver Secure by Design

17 Dec 24

  • Cloud, SaaS, Microsoft 365: the US Cyber & Security Infrastructure Agency (CISA) issued a web-friendly verions of its directive Implementing Secure Practices for Cloud Services for US federal agencies, requiring deployment of its SCuBA tool - mentioned here because this open source tool, downloaded >30k times as at 13 Nov, automatically assesses Microsoft 365 (M365) configurations for security gaps (against CISA baselines): reportedly, misconfigurations were the initial access point for 30% of cloud environment attacks in the first half of 2024. "ScubaGear rapidly and thoroughly analyzes an organization’s M365 tenant configuration. It then delivers actionable security change insights and recommendations that allow the tenant administrator to close security gaps and attain a stronger defense within their M365 environment". So Microsoft 365 users could do worse than use this free tool!
  • Cybercrime, UN Convention: this Convention's privacy and security issues (law enforcement access to data) have been raised by many, including by the EDPB, referring to its Statement 5/2024 on the Recommendations of the High-Level Group on Access to Data for Effective Law Enforcement
  • Mobile comms: the US Cyber & Security Infrastructure Agency (CISA)issued Mobile Communications Best Practice Guidance after "identified cyber espionage activity by People’s Republic of China (PRC) government-affiliated threat actors targeting commercial telecommunications infrastructure, specifically addressing “highly targeted” individuals who are in senior government or senior political positions and likely to possess information of interest to these threat actors". While intended to assist highly-targeted individuals, its recommendations are obviously also relevant to everyone else who values their security and privacy (there were also iPhone and Android-specific recommendations, not reproduced here, see the link above):
    • Use only end-to-end encrypted communications (free messaging apps mentioned include Signal "or simlar apps")
    • Enable Fast Identity Online (FIDO) phishing-resistant authentication like hardware-based security keys
    • Migrate away from Short Message Service (SMS)-based MFA
    • Use a password manager
    • Set a telco PIN (for login etc)
    • Regularly update the operating system and other software (i.e. patch)
    • Opt for the latest hardware version from your cell phone manufacturer
    • Do not use a personal virtual private network (VPN): "Personal VPNs simply shift residual risks from your internet service provider (ISP) to the VPN provider"

16 Dec 24

  • Cybercrime, advanced persistent threats: a RUSI article points out that "...foreign government adversaries no longer have a monopoly on sophistication or persistence. Cybercriminals have just as much if not more of an impact on the Western world... Digital spying by foreign state adversaries is still important. However, in biasing themselves towards ‘APT versus cybercrime’, information security and cybersecurity practitioners create a false dichotomy that pushes resources, attention and support to areas that don’t always align with the greatest organisational or national risk and impacts"

13 Dec 24

  • CSAM, encryption: the EU Council agreed its general approach on the proposed CSAM Directive (this has notes on some amendments), based on which it can commence negotiations on the text with the European Parliament, but the Parliament hasn't agreed its own position internally yet, so it will be some months or longer before this Directive is adopted
    • Statements on the general approach by Austria, Austria and Slovenia, and Belgium, Finland, Ireland, Latvia, Luxembourg, Slovenia and Sweden
    • The age old debate continues about undermining encryption to allow checking of encrypted material for any CSAM, e.g. US litigation against Apple for dropping its planned CSAM scanning after privacy and surveillance concerns.
    • An excellent post about the planned "Chat Control" scanning under this Directive points out: "Chat Control is one example of mass screening for a low-prevalence problem — a dangerous mathematical structure. It requires breaking end-to-end encryption, the technological bedrock of digital privacy. Such a move would make mass surveillance cheap and easy again... false positives will overwhelm true positives in programs of this structure — mass screenings for low-prevalence problems under conditions of rarity, persistent uncertainty, and secondary screening harms. Under these conditions, even highly accurate such programs backfire by making huge haystacks (wrongly flagged cases, “false positives”) while missing some needles (wrongly cleared cases, “false negatives”)... when finite investigative resources are tied up processing CSAM possession tips from mass scanning, they cannot be used for other investigations... This is consistent with the possibility that children are endangered by such mass screening programs exhausting the investigative resources necessary to process tips that have a higher likelihood of being true positives and may otherwise be more relevant to current as opposed to past abuse. Curtailing targeted investigations that might stop ongoing abuse or bigger-fish distributors in favor of processing mass scanning tips that are overwhelmingly false positives does not serve the interests of vulnerable children or society... [but] it does serve the interests of those who would like a return to cheap, easy mass digital communications surveillance..."
  • Data, software, products: reminder from the US Federal Trade Commission (FTC) that to protect security it's important to have good data management (including enforcing mandated data retention schedules and mandating data deletion, so there's less unnecessary data that could be hacked), secure software development, and secure product design for humans (including least privilege, phishing-resistant MFA) 
  • Financial services, incident reporting, vendors: the UK Prudential Regulation Authority (PRA) issued a consultation paper CP17/24 – Operational resilience: Operational incident and outsourcing and third-party reporting, on proposed "rules and expectations for firms to report operational incidents and their material third-party arrangements" (deadline 14 Mar 25), with reporting thresholds (quite subjective), and a phased approach to incident reporting: initial, intermediate, final, with certain minimum information
  • Product safety: the EU General Product Safety Regulation applies from this date. When assessing whether a product is a safe product, factors to consider include, "when required by the nature of the product, the appropriate cybersecurity features necessary to protect the product against external influences, including malicious third parties, where such an influence might have an impact on the safety of the product, including the possible loss of interconnection", particularly digitally connected products likely to have an impact on children (on top of sectoral laws on cybersecurity risks affecting consumers etc)
    • Detailed UK guidance, applicable to Northern Ireland summarises it as: "This Regulation requires that all consumer products placed on the NI and EU markets are safe and establishes specific obligations for businesses to ensure that safety. The Regulation applies to products placed on, or made available to, the market where there are no sector-specific provisions with the same objective"   

12 Dec 24

  • Passkeys: these are touted as more secure than using passwords, and are increasingly being supported (see my book and the free PDF). Microsoft published its UX design insights to boost passkey adoption and security

10 Dec 24

  • EU Cyber Resilience Act (CRA): this Regulation entered into force (published in OJ 20 Nov 24, news item), requiring minimum cybersecurity requirements for any "product with digital elements" before they can be made available on the EU market, with certain cybersecurity vulnerability handling obligations on manufacturers, including on vulnerability disclosure
    • A "product with digital elements" is any software or hardware product and its "remote data processing solutions", including software or hardware components being placed on the market separately (where "remote data processing" is remote processing designed by the manufacturer, whose absence would prevent one of the product's functions from being performed - such as essential cloud processing). So, CRA catches not just IoT / smart devices but also software (and is not limited to consumer IoT, unlike the UK's  Product Security and Telecommunications Infrastructure Act (PSTI))
    • CRA applies fully from 11 December 2027, but with some earlier applicable dates like 11 September 2026 for Art.14 on manufacturers' obligation to report any actively exploited vulnerability contained in such a product: with staggered deadlines of 24 hrs, 72 hrs etc.

9 Dec 24

  • Open source, malicious code, tools: open source code is increasingly incorporated into software, but open source packages can be malicious, or legitimate code can be accessed by attackers and "poisoned" to serve malicious purposes e.g. adding a backdoor for hackers. The Stack reported that a tool, DataDog, had been open sourced, termed as a [software] "supply chain firewall" that scans Python packages being installed, and blocks packages know to be malicious based on the tool provider's own observations or certain open source feeds 

5 Dec 24

  • Consumer IoT: the UK published a survey it had commissioned before the  Product Security and Telecommunications Infrastructure Act (PSTI) came into force, to map and analyse the market for consumer connectable products, and collect and analyse evidence on the compliance of manufacturers with the PSTI legal regime, as well as evidence on awareness and impacts of the legislation. It outlines well the PSTI compliance challenges (which many may be familiar with!). And see the related infographic. See also 25 Nov
  • Software patching, tools: one of the most critical security measures to take is patching, ensuring software is kept updated with new versions that address security vulnerabilities. Google released a new open source security patch validation automation tool Vanir, that helps Android developers "quickly and efficiently scan their custom platform code for missing security patches and identify applicable available patches". "While initially designed for Android, Vanir can be easily adapted to other ecosystems with relatively small modifications, making it a versatile tool for enhancing software security across the board" 

4 Dec 24

  • EU digital identity wallets: technical standards, adopted by the Commission on 28 Nov, for cross-border eID wallets under the European Identity Framework that was updated in 2024, were published in the OJ under 5 implementing regulations with rules on eID Wallets' integrity and core functionalities, on eID Wallets solutions' protocols and interfaces and on person identification data and electronic attestations of attributes of eID Wallets, plus reference standards, specifications and procedures for a certification framework for eID Wallets, and obligations for notifications to the Commission concerning the eID Wallet ecosystem
  • Measures, metrics: how can organisations measure to what extent their cyber security measures are effective? The US National Institute of Standards and Technology (NIST) published updated guidance on how an organization can develop 

3 Dec 24

  • Comms infrastructure: several US and other agencies published a joint guide Enhanced Visibility and Hardening Guidance for Communications Infrastructure, "that provides best practices to protect against a People’s Republic of China (PRC)-affiliated threat actor that has compromised networks of major global telecommunications providers. The recommended practices are for network engineers and defenders of communications infrastructure to strengthen visibility and harden network devices against this broad and significant cyber espionage campaign... Although tailored to communications infrastructure sector, this guidance may also apply to organizations with on-premises enterprise equipment"
  • EU cybersecurity, threats: EU security agency ENISA published its first NIS2 biennial report on the state of EU cybersecurity. This reported "substantial cyber threat level to the EU, highlighting discovered vulnerabilities exploited by threat actors targeting EU entities..." and made several policy recommendations on strengthening EU cyber skills/workforce and addressing supply chain security 
  • UK cybersecurity, threats: the UK National Cyber Security Centre (NCSC) published its Annual Review 2024. Its head stressed in an accompanying speech the "clearly widening gap between the exposure and threat we face, and the defences that are in place to protect us... We need all organisations, public and private, to see cyber security as both an essential foundation for their operations and a driver for growth. To view cyber security not just as a ‘necessary evil’ or compliance function, but as a business investment, a catalyst for innovation and an integral part of achieving their purpose... Hostile activity in UK cyberspace has increased in frequency, sophistication and intensity... Actors are increasingly using our technology dependence against us, seeking to cause maximum disruption and destruction... And yet, despite all this, we believe the severity of the risk facing the UK is being widely underestimated... There is no room for complacency about the severity of state-led threats or the volume of the threat posed by cyber criminals. The defence and resilience of critical infrastructure, supply chains, the public sector and our wider economy must improve..."
    • The NCSC's incident management (IM) team issued 542 bespoke notifications to organisations of a cyber incident impacting them, providing advice and mitigation guidance (cf. 258 in 2023). Almost half related to pre-ransomware activity, enabling organisations to detect and remove precursor malware before ransomware was deployed.
    • Top sectors reporting ransomware activity into the NCSC were academia, manufacturing, IT, legal, charities and construction. "We received 317 reports of ransomware activity, either directly from impacted organisations, or from our partners (an increase on 297 last year). These were triaged into 20 NCSC-managed incidents, of which 13 were nationally significant. These included high-profile incidents impacting the British Library and NHS trusts"
    • The NCSC was made aware of 347 reports of activity that involved the exfiltration or extortion of data
    • The IM team issued ~12,000 alerts about vulnerable services through its Early Warning service (an automated NCSC threat notification service, free to UK organisations - do sign up!). Exploitation of zero-days CVE-2023-20198 (Cisco IOS XE) and CVE-2024-3400 (Palo Alto Networks PAN OS) also resulted in 6 nationally significant incidents which the IM team helped manage 

2 Dec 24

  • Cybersecurity measures: the US Cyber & Security Infrastructure Agency (CISA) updated its Trusted Internet Connections (TIC) 3.0 Security Capabilities Catalog (SCC) version 3.2, based on the new National Institute of Standards and Technology (NIST) Cyber Security Framework (CSF) Version 2.0 mapping updates. TIC 3.0 SCC provides a list of deployable security controls, security capabilities, and best practices, intended to guide secure implementations and help US federal agencies satisfy program requirements within discrete networking environments, but is of more general use/interest 
  • Encryption: this was Global Encryption Day: the Global Encryption Coalition reported on the support of policymakers and others for encryption , including for the protection of children 
  • FS, DORA: the EU's DORA Regulation on digital operational resilience for the financial sector applies in the EU from 17 Jan 2025. Much secondary legislation on certain detailed requirements has been made under it (see list as at 4 Dec 24). On 2 Dec, an implementing regulation was published in the OJ on technical standards for standard templates for the register of information that in-scope financial entities must maintain in relation to their ICT services and ICT service providers, including providers' subcontractors in certain cases, such as details of their contracts 
    • Financial service entities' contracts with their ICT service providers should also be updated to comply with DORA's requirements, and some providers are directly regulated under DORA - not discussed here
  • Managed security services, certifications; EU: the Council approved a directly-applicable Regulation amending the EU Cybersecurity Act (CSA) to enable future adoption of European certification schemes for managed security services (MSS, like incident handling, penetration testing, security audits, consulting advice on technical support), increasingly important for cybersecurity incidents' prevention, detection, response, and recovery. Awaiting the CSA's broader evaluation by the Commission, this targeted amendment aims to enable establishment of such European certification schemes to help increase MSSs' quality, comparability and trustworthiness, and avoid fragmentation as some Member States have initiated national certification schemes for such services
    • Not law yet, to awaiting OJ publication
    • The Council also approved a Cyber Solidarity Act Regulation (also awaiting OJ) to strengthen EU/Member State cooperation and resilience against cyber threats, e.g. creating a cyber security alert system pan-European infrastructure comprising national and cross-border cyber hubs responsible for detecting, sharing information and acting on cyber threats including cross-border incidents. It also creates a cybersecurity emergency mechanism (including a EU cybersecurity reserve: private sector incident response services "ready to intervene" on significant/large-scale incidents if requested by a Member State or EU body as well as associated third countries, and incident review mechanism 

1 Dec 24

  • Cloud, access, MFA: previously, cloud service providers have tended to leave it to their customers to decide whether the customer wants to require MFA in order for its users to access its cloud service. A very positive trend is that providers are increasingly enforcing MFA, e.g. Snowflake will be blocking attempted sign-ins using single-factor authentication with passwords. It seems likely this move by Snowflake was influenced by >100 of its customers, who had not required MFA, being successfully attacked in 2024. While it would have behoved those customers to require MFA for access to their Snowflake services, these incidents did appear to lead to some negative comments about Snowflake 

29 Nov 24

28 Nov 24

  • Boards, directors: UK National Cyber Security Centre (NCSC)'s Cyber Security Toolkit for Boards: updated briefing pack released with insights on the ransomware attack against the British Library
  • EU NIS2 Directive: this Directive, updating and expanding the NIS Directive, should have been implemented by Member States by 17 Oct 24, but it wasn't (Europa list of those that have notified the Commission of their NIS2 transposition).
    • The Commission decided to open infringement procedures for not fully implementing NIS2 by sending formal notice to 23 Member States (Bulgaria, Czechia, Denmark, Germany, Estonia, Ireland, Greece, Spain, France, Cyprus, Latvia, Luxembourg, Hungary, Malta, Netherlands, Austria, Poland, Portugal, Romania, Slovenia, Slovakia, Finland and Sweden). The Commission has given them two months to respond, complete their transposition and notify their measures to the Commission. Ireland hasn't yet transposed NIS2 officially, but it wasn't in that list, for whatever reason
  • UK Cyber Security & Resilience Bill consultation: consultation closed, on UK DSIT's call for evidence on proposals to inform the Bill

26 Nov 24

  • Awareness raising, NIS: EU security agency ENISA updated its guide on how to promote cyber security awareness to C-level (part of its AR-in-a-box DIY awareness-raising toolbox, "a comprehensive solution for cybersecurity awareness activities designed to meet the needs of public bodies, operators of essential services, and both large and small private companies. It provides theoretical and practical knowledge on how to design and implement effective cybersecurity awareness") - still relevant to NIS2 of course

25 Nov 24

  • IoT, smart devices, vulnerability handling, PSTI: the IoT Security Foundation published The State of Vulnerability Disclosure Policy (VDP) Usage in Global Consumer IoT in 2024, including some coverage of the impact of the UK Product Security and Telecommunications Infrastructure Act (PSTI). "...the UK legislation has driven a bigger improvement [among UK retailers] than European and US retailers. Whilst the sample set maybe low, it is a consistent gauge moving faster in the right direction" 
    • The survey indicated an increase in the proportion of manufacturers checked that had a vulnerability disclosure policy, from 23.99% in 2023 to 35.59% in 2024. Only ~21% of companies complied with PSTI's vulnerability disclosure requirements, though that's "increased significantly" from the previous year. 
    • The picture's variable regarding proportion of retailers stocking products whose manufacturers support vuln disclosure. Over 50% of IoT products stocked by several UK retailers were from manufacturers that had vulnerability disclosure policies. John Lewis was the best, 93.33% of its products checked were from compliant manufacturers. The detail on specific manufacturers, their website statements of compliance and how some meet PSTI (or not) is worth a look
    • "There has clearly been some effect from the UK’s Product Security and Telecommunications Infrastructure Act (Part 1) requirements... but implementation seems fragmented and inconsistent. While some leading UK retailers are showing that around 90% of the IoT manufacturers they stock have vulnerability disclosure policies, there are some notable exceptions to this ‘dip test’ of the market and there are obvious differences in online marketplaces. The other regions showed less promising and variable data about the product manufacturers they stocked"  (the report covers manufacturers and retailers in the EU, US and Asia too - not discussed here)
    • And there remains a "..gap in practice between the consumer and enterprise sectors. Whilst the consumer sector is firmly heading in the right direction, there is a stark contrast in market practice levels and continues to justify the need for consumer regulation" (I'd suggest enterprise IoT security could still improve)".
    • On individual product categories, "notable laggards being Health and Fitness, Lighting and, somewhat paradoxically, Security. Those manufacturer report cards read “must do better”"
    • See also 5 Dec 

22 Nov 24

  • Fraud, data protection: the UK ICO emphasised that data protection is not an excuse when tackling scams and fraud, "warning that reluctance from organisations to share personal information to tackle scams and fraud can lead to serious emotional and financial harm. Data protection law does not prevent organisations from sharing personal information, if they do so in a responsible, fair and proportionate way". It published "new practical advice to provide clarity on data protection considerations and support organisations to share data responsibly to tackle scams and fraud", aimed at any organisation seeking to share personal information to identify, investigate and prevent fraud, especially banks, telecommunications providers and digital platforms"
    • The same also applies to organisations disclosing potential personal data, like IP addresses and domain names, as indicators of compromise (IOCs) in threat sharing initiatives/platforms regarding cyber threats/breaches, whether sectoral or otherwise, and it would have been helpful if the ICO had also made that point.

21 Nov 24

  • Critical infrastructure, red teaming: the US Cyber & Security Infrastructure Agency (CISA) published its insights from a red team assessment of a US critical infrastructure organisation including lessons learned (technical controls, staff training, leadership/board: "Leadership deprioritized the treatment of a vulnerability their own cybersecurity team identified, and in their risk-based decision-making, miscalculated the potential impact and likelihood of its exploitation") and technical details

18 Nov 24

  • Passwords: it's interesting that, following notification of personal data breaches, the Romanian data protection supervisory authority ordered a company to take measures including (machine translation) "password complexity and history policy on all customer accounts with a pre-established expiration interval". That is a decades old practice, which is no longer considered good. Technical experts including the UK NCSC and US NIST recommend longer rather than more complex passwords, indeed NIST's latest draft update recommends not enforcing any password complexity rules like one lowercase, one uppercase etc. Similarly with forced password changes every few months or year, which is now deprecated (e.g. New Scientist article) as it reduces security by resulting in people writing down passwords, using bad passwords they can remember, etc! So it seems that some GDPR authorities could still benefit from more technical assistance/education on cybersecurity...

15 Nov 24

  • UK NIS Regulations, notified incidents: the ICO is the regulator for digital service providers under NIS (cloud, online marketplaces, online search engines). Responding to a freedom of information request, the ICO stated that 37 incidents were reported to the ICO as NIS incidents, including 18 incidents that were not in fact NIS incidents and 2 incidents (reported in 2020 and 2021) that did not meet the mandatory threshold following its assessment. The figures suggest many incidents are reported as NIS incidents when they are not, but it's possible there were some actual NIS incidents that were not reported as the final total of 19 seems quite low...:
    • 2020 - 2 (really 1, see above, but in fact the ICO did not consider it a NIS incident, so 0)
    • 2021 - 3 (really 2, but the ICO did not consider 1 a NIS incident, so 1)
    • 2022 - 4 (really 2, as the ICO did not consider 2 of those a NIS incident)
    • 2023 - 19 (really 18, as one was incorrectly reported to the ICO as well as to the correct competent authority, but several were not considered NIS incidents, so 8) 
    • 2024 YTD - 9 (but 5 were not considered NIS incidents, so 4) 

14 Nov 24

  • Product safety, IoT: in the first horizon scan report by the UK Office for Product Safety & Standards (OPSS), privacy, data loss and wider cyber security issues like distributed denial of service (DDOS) attacks were considered as part of harms or benefits a technology may present in relation to non-physical aspects, and the scan's taxonomy of technologies included cybersecurity and data platforms: "combination of data, policies, processes, and technologies employed to secure information, protect organisations, and protect individuals' cyber assets, including specific biological research through omics, and financial activities through blockchain, like new data technology and PETs. Health data was at greater risk of being compromised by cyber threats. Trends across technologies included security issues, with new vulnerabilities created by increased automation and connected technology with IoT, and new ways of compromise; most IoT devices' relatively limited computing power limits cybersecurity complexity and effectiveness, their interconnectivity increases vulnerabilities (specific IoT rapid review guidance). Social commerce normalises online money transfer enabling cybersecurity scams. "Blockchain can potentially offer some solutions to these challenges". Online marketplaces also need consumer protection against scams.
    • OPSS research on consumer attitudes/awareness indicated consumers are increasingly comfortable with manufacturers making changes remotely in the case of physical safety issues or cyber security vulnerabilities, but becoming less considerate of cyber security before initial purchase, particularly those with a low education level. Note that the OPSS is responsible for enforcing the UK's Product Security and Telecommunications Infrastructure Act (PSTI) 

12 Nov 24

  • Financial services, vendors: UK FS regulators Bank of England (Bank), Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) issued PS16/24 FCA 24/16 – Operational resilience: Critical third parties to the UK financial sector, with final rules for FS use of critical third parties (CTPs) including operational risk and resilience requirements, and incident reporting and other notifications and enforcement
  • Security engineering, learning: all PDF chapters of the late, great Ross Anderson's seminal, very readable Security Engineering book (3rd edition 2020) are now available for free download via this link

7 Nov 24

  • NIS2, risk management: EU security agency ENISA issued a consultation on its detailed technical implementin gguidance (PDF no longer availabe on ENISA's website, but see Internet Archive) to support EU Member States and entities with  implementation of the technical and methodological requirements of NIS2's required cybersecurity risk management measures. The final version is awaited. (On the implementing regulation for certain types of entities, see my October post.)

4 Nov 24

  • QR code phishing: this is phishing by tricking people into scanning malicious QR codes to take them to malicious websites or install/open malicious apps/files, and it's an increasing attack vector. Microsoft explained how it updated its Microsoft Defender for Office 365 to address this 

31 Oct 24

  • Incident response, preparations, resilience: helpful lessons on the Jul 24 Crowdstrike outage from the UK Financial Conduct Authority (FCA) with its observations on how FS firms responded to the incident including infrastructure resilience, third party management, incident response and communications, with recommendations on what firms should be doing on these fronts    
  • Cybersecurity measures: ending Cybersecurity Awareness Month, Microsoft published 7 cybersecruity trends and (same old, same old!) its tips for SMEs:
    • 1in 3 SMBs have suffered a cyberattack (Microsoft tips: strong passwords, MFA, consider password manager, recognise/report phishing, keep software updated i.e. patching)
    • Attacks cost them >$250k on average and up to $7m (tip: risk assessment to understand gaps, determine measures to address)
    • 81% of SMBs think AI increases need for additional security controls (tip: data security & data governance when adopting AI)
    • 94% think cybersecurity is business-critical (tip: educate/train employees e.g. using Microsoft awareness resources)
    • <30% manage security in-house (tip: it's common to engage a Managed Service Provider (MSP) for security support)
    • 80% mean to increase cybersecurity spending, prioritising "data protection" [NB broader than in the GDPR sense] (tip: prioritise data protection, firewall, anti-phishing, ransomware & device/endpoint protection, access control, identity management e.g. via DLP, EDR, IAM)
    • 68% feel secure data access is a challenge for remote workers (tip: measures to protect data and Internet-connected devices, app store downloads; no credential sharing by email/text only phone in real time)

25 Oct 24

24 Oct 24

  • Web security, standards: a Princeton news item discusses a new security standard their researchers worked on. "The change centers on how web browsers and operating systems verify a website’s identity when establishing a secure connection. They rely on third party organizations known as certification authorities, who issue digital certificates of authenticity based on a website owner’s ability to demonstrate legitimate control over the website domain, usually by embedding a random value that the certification authority has provided. ...bad actors could easily sidestep those hurdles to obtain a fraudulent certificate for a website they do not legitimately control... it could target any website on the internet. Users had no way to spot the fraud since the certificates were real, even if their underlying facts had been forged. With a fraudulent certificate, criminals could attack users and route traffic to fake sites without anyone knowing... the fake site would look every bit as legitimate as the real one... By adopting the Princeton standard, certification authorities have agreed to verify each website from multiple vantage points rather than only one... [multi-perspective validation]", which will improve Internet/web security 

23 Oct 24

  • Cybersecurity measures, certifications, supply chain: the UK's Cyber Essentials certification scheme, to encourage organisations to implement key essential cybersecurity measures (cyber hygiene), reached its 10 year anniversary.
    • It's great to hear it has been effective in improving cybersecurity: "Recent insurance data shows us that organisations with Cyber Essentials are 92% less likely to make a claim on their insurance than those without it". The NCSC noted, "This statistic underscores the scheme’s effectiveness in mitigating cyber risks". "Additionally, where organisations require their third parties to get Cyber Essentials, we know they experience fewer third party cyber incidents".
    • The full impact evaluation noted that Cyber Essentials:
      • is providing cyber security protection to organisations of all sizes, including larger organisations that use other schemes, standards and accreditations
      • helps to improve organisations’ awareness and understanding of the cyber security risk environment
      • has stimulated wider actions, good practice and behaviours among organisations that use it
      • is being actively used as part of supply chain assurance to inform the supplier selection process, instil confidence and demonstrate basic cyber hygiene to the market
    • The NCSC also added, "Cyber Essentials has played a crucial role in raising awareness about cyber security. An evaluation conducted as part of the 10-year review revealed that 85% of certified organisations reported a better understanding of cyber risks. This increased awareness has empowered businesses to take proactive measures in safeguarding their digital assets", and said "The data is clear, implementing the five controls significantly lowers the risk of experiencing a cyber incident. For organisations lacking the necessary in-house expertise, support is readily available through companies offering the NCSC-recognised Cyber Advisor Service"
    • Also, to improve supply chain securityprocurement efficiency, consistent minimum standards, UK financial entities Barclays, Lloyds Banking Group, Nationwide, NatWest, Santander UK and TSB have stated that they will promote and incorporate Cyber Essentials in their supply chain risk management and they encourage other businesses to incorporate Cyber Essentials into their supplier requirements. (This would also "Spread greater cyber insurance coverage across supply chains through the provision of free cyber insurance, and incident response services, included with Cyber Essentials certification to qualifying organisations")
      • Comment: Contractually requiring suppliers/vendors/service providers to be certified is a helpful move in the right direction. Cyber Essentials measures are the bare minimum that organisations should take, are not difficult to implement, and would go a long way towards preventing or reducing the impact of cyber incidents, so all organisations should be certifying, or at least implementing those measures even if they don't get certified! Unlike with ISO standards, Cyber Essentials measures are freely available, whether implemented through self-assessment or (Plus) third-party audit. 
      • Note: Cyber Essentials funding is offered to small organisations in certain sectors like AI, quantum, semiconductors etc., with certain criteria 

2 Oct 24

  • Ransomware: Counter Ransomware Initiative (CRI) guidance for organisations experiencing a ransomware attack and organisations supporting them
  • Scanning, testing: interesting study on how external cybersecurity scanning data can enhance underwriting accuracy for the (re)insurance industry. This compared companies’ security controls with actual insurance claims, identifying key predictive factors including the organisation's IP address count and patching cadence (the speed at which it updated software to address vulnerabilities), that help forecast claims. Single Point of Failure (SPoF) data also highlighted dependencies on third-party services like AWS (cloud) and VPNs
    • While aimed at (re)insurance, scanning/pen testing obviously is also helpful if not essential for insured organisations, and same issues obviously affect their susceptibility to successful attacks, so keep your number of IP addresses limited to reduce exposure, and (not a new recommendation) patch ASAP!  

25 Sept but UK press release 17 Oct 24

  • Quantum computing, encryption, financial services - risks to security & FS: G7 Cyber Expert Group's statement on planning for quantum computing's opportunities and risks (including to public key cryptography) and steps that financial entities should take

10 Sept 24

  • Cybercrime, data sharing: the UK ICO and National Crime Agency (NCA) signed a memo of understanding on how they'll collaborate to improve the UK's cyber resilience, including "The NCA will never pass information shared with it in confidence by an organisation to us without having first sought the consent of that organisation" and "We will support the NCA’s visibility of UK cyber attacks by sharing information about cyber incidents with the NCA on an anonymised, systemic and aggregated basis, and on an organisation specific basis where appropriate, to assist the NCA in protecting the public from serious and organised crime"

Friday, 10 January 2025

Things AI, Q4 2024

Selected things AI, mostly from Q4 2024, are listed below in reverse chronological order, some with descriptions. See also Things AI, Oct 2024, AI Liability Directive links, and Data protection & cyber security, Oct 2024. This blog illustrates how much is going on and the fast pace of AI-related developments, and I repeat that I've curated the below and by no means included all or even most of what's been happening. Italy's Garante seems the most active supervisory authority in enforcing against AI-related matters under GDPR. (Note that * is used in possible "rude words" to prevent this blog being auto-blocked, e.g. by AI!) 


30 Dec 24

  • Tools, open source: chipmaker NVIDIA acquired run:ai, and plans to open source its software, that helps customers/users "to orchestrate their AI Infrastructure, increase efficiency and utilization, and boost the productivity of their AI teams", so that it can be used even with non-NVIDIA GPUs, whether on-prem or in-cloud

23 Dec 24

  • Future: Stanford University's Human-Centered Artificial Intelligence (HAI) center issued its Predictions for AI in 2025: Collaborative Agents, AI Skepticism, and New Risks 

20 Dec 24

  • Data protection, enforcement, training, Italy: following its investigation into OpenAI's ChatGPT (news item), Italy's Garante fined OpenAI €15m (this amount was said to take account of its cooperative attitude, suggesting it could have been much higher!) and ordered OpenAI to conduct a "6-month institutional communication campaign on radio, television, newspapers and the Internet. The content, to be agreed with the Authority, should promote public understanding and awareness of the functioning of ChatGPT, in particular on the collection of user and non-user data for the training of generative artificial intelligence and the rights exercised by data subjects, including the rights to object, rectify and delete their data. Through this communication campaign, users and non-users of ChatGPT will have to be made aware of how to oppose generative artificial intelligence being trained with their personal data and thus be effectively enabled to exercise their rights under the GDPR... in view of the fact that the company established its European headquarters in Ireland in the course of the preliminary investigation, the Data Protection Authority, in compliance with the so-called one stop shop mechanism, forwarded the procedural documents to the Irish Data Protection Authority (DPC), which became lead supervisory authority under the GDPR so as to continue investigating any ongoing infringements that have not been exhausted before the opening of the European headquarters"
  • AGI, reasoning: OpenAI's new o3 model scored a breakthrough high score for its performance in one Arc Prize challenge, although still not reaching AGI... 
  • Risks, testing: the US NIST published a paper, The Assessing Risks and Impacts of AI (ARIA) Program Evaluation Design Document. "ARIA (Assessing Risks and Impacts of AI) is a NIST evaluation-driven research program to develop measurement methods that can account for AI’s risks and impacts in the real world. The program establishes an experimentation environment to gather evidence about what happens when people use AI under controlled real-world conditions". Its testbed involves model testing, red teaming and field testing. "Dialogues collected in the ARIA environment will be curated and anonymized and are planned to be publicly released after each evaluation series. The publication of ARIA’s methods, metrics, practices and tools will facilitate adoption and scaling across industry and research settings" 
  • Testing, researchers: OpenAI invited safety researchers to apply to receive API access to its forthcoming frontier models, including o3-mini, to advance frontier safety

19 Dec 24

  • Agents: Anthropic has shared its learnings on how to build effective AI agents
  • Agents, genAI: Microsoft researchers submitted a paper on the current resurgence of agents and argue that "While generative AI is appealing, this technology alone is insufficient to make new generations of agents more successful. To make the current wave of agents effective and sustainable, we envision an ecosystem that includes not only agents but also Sims, which represent user preferences and behaviors, as well as Assistants, which directly interact with the user and coordinate the execution of user tasks with the help of the agents"
    • Assistants seem to be software that controls agents on behelf of users. Sims, that simulate users, seem to require some kind of user profiling, so data protection again...
  • AI usage, health, tools: such a beneficial use case, an Indian group has developed AI tools to assist with tuberculosis diagnosis and treatment   
  • EU AI Act: second draft of GPAI Code of Practice published, based on feedback on the first draft. This "remains a work in progress. They focused primarily on providing clarifications, adding essential details, and aligning to the principle of proportionality, such as the size of the general-purpose AI model provider." Verbal discussions on the second draft with Chairs and Vice-Chairs are planned, and workshops with general-purpose AI model providers and Member State representatives in the AI Board Steering Group are planned for the weeks of 20 and 27 January respectively. The third draft of the Code of Practice is expected to be out in the week of 17 February 2025
    • The Computer & Communications Industry Association (CCIA) criticised the draft as containing "measures already explicitly rejected by EU co-legislators during the AI Act negotiations. Ideas previously dismissed that have been resurrected include mandatory third-party assessment and differentiated treatment between smaller and larger GPAI developers. If left unchecked, the Code risks becoming an undemocratic vehicle that overturns the AI Act’s legislative process. This second iteration also contains measures going far beyond the Act’s agreed scope, such as far-reaching copyright measures" 

18 Dec 24

  • Environment, tools: Microsoft announced "SPARROW: A Breakthrough AI Tool to Measure and Protect Earth’s Biodiversity in the Most Remote Places"
  • Intellectual property, laws: in the UK Parliament's debate on the Data (Use and Access) Bill, various amendments were proposed on AI and copyright, and more..., we'll have to wait to find out which ones if any get through (and see 17 Dec)
  • Models, testing: from the UK Artificial Intelligence Safety Institute (AISI) and US Artificial Intelligence Safety Institute, a joint pre-deployment evaluation of OpenAI's o1 model (see 5 Dec)
  • Neurosymbolic AI, LLM hallucinations: good article Generative AI can’t shake its reliability problem. Some say ‘neurosymbolic AI’ is the answer, on "neurosymbolic AI, which its advocates say blends the strengths of today’s LLMs with the explainability and reliability of this older, symbolic approach"
    • An example of a neurosymbolic system is Google's impressive AlphaGeometry AI system (certain code), that "surpasses the state-of-the-art approach for geometry problems, advancing AI reasoning in mathematics... solves complex geometry problems at a level approaching a human Olympiad gold-medalist - a breakthrough in AI performance...". "AlphaGeometry is a neuro-symbolic system made up of a neural language model and a symbolic deduction engine, which work together to find proofs for complex geometry theorems. Akin to the idea of “thinking, fast and slow”, one system provides fast, “intuitive” ideas, and the other, more deliberate, rational decision-making. Because language models excel at identifying general patterns and relationships in data, they can quickly predict potentially useful constructs, but often lack the ability to reason rigorously or explain their decisions. Symbolic deduction engines, on the other hand, are based on formal logic and use clear rules to arrive at conclusions. They are rational and explainable, but they can be “slow” and inflexible - especially when dealing with large, complex problems on their own. AlphaGeometry’s language model guides its symbolic deduction engine towards likely solutions to geometry problems..."
    • As the Turing Institute puts it, "“sub-symbolic” or “neuro-inspired” techniques only work well for certain classes of problem and are generally opaque to both analysis and understanding... “symbolic” AI techniques, based on rules, logic and reasoning, while not as efficient as “sub-symbolic” approaches, have much better behaviour in terms of transparency, explainability, verifiability and, indeed, trustworthiness... “neuro-symbolic” AI has been suggested, combining the efficiency of “sub-symbolic” AI with the transparency of “symbolic” AI. This combination can potentially provide a new wave of AI tools and systems that are both interpretable and elaboration tolerant and can integrate reasoning and learning in a very general way"
    • Also see 2023 paper Neurosymbolic AI -- Why, What, and How, May 2024 article Neuro-Symbolic AI Could Redefine Legal Practices 
  • Future: is AI progress slowing down?

17 Dec 24

  • GenAI, contract, usage: Google updated its generative AI prohibited use policy to add "clear examples of conduct that is not acceptable. For example, we’re explicitly stating that using our tools to create or distribute non-consensual intimate imagery or to compromise security by facilitating phishing or malware is not allowed under the policy. Finally, we've added language that allows for exceptions for certain educational, artistic, journalistic or academic use cases that might otherwise violate our policies"
  • Data protection (and see next point): the EDPB adopted its Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models (see my blog on its stakeholder workshop, legitimate interests, and other important data protection issues)
    • This consistency opinion was requested by the Irish DPC, who welcomed it
  • Intellectual property, laws: the UK opened a public consultation on copyright and AI (news, deadline 25 Feb 25), seeking views on how to achieve its objectives for the AI and creative sectors of  1. Supporting right holders’ control of their content and ability to be remunerated for its use; 2. Supporting the development of world-leading AI models in the UK by ensuring wide and lawful access to high-quality data; 3. Promoting greater trust and transparency between the sectors (see also 18 Dec). It is considering:
    • Measures that would require increased transparency from AI developers. "This includes the content they use to train their models, how they acquire it, and any content generated by their models" (training data information including provenance, etc.), and
    • Introduction of an exception to copyright law for “text and data mining”, to improve access to content by AI developers but allow right holders to reserve their rights and thereby prevent their content being used for AI training - similar to the EU’s exception for text and data mining",under Art.4, Digital Single Market Copyright Directive (Directive (EU) 2019/790), but with rights reservations "using effective and accessible machine-readable formats" like robots.txt or metadata
Some thoughts/queries here on data protection
    • The consultation document does mention data protection where personal data is used to train AI models or appears in AI-generated outputs (174), & that the ICO will issue further guidance on genAI (179)
    • It notes existing protection for personality: the passing off tort against misrepresentation in a commercial environment; some IP rights like copyright may help people control/prevent digital replicas, eg sound recording rights if training on a singer’s recorded voice, performance rights of film actors, singers.
    • The UK isn't seeking but would "welcome views" on intellectual property protection for personality rights (178).
    • Problem: IP rights, like the US right of publicity personality right, protect commercial proprietary interests, NOT human rights, and are often signed away to music/film companies. US rights of publicity/personality vary with the state, and generally protect only celebrities who can commercially exploit their well-known images, etc.
    • So, for ordinary UK/EU people, is GDPR adequate? Yes, using photo/video/audio to create a replica of someone's likeness is processing their personaldata, but could it be made clearer (e.g. by supervisory authorities) that deploying those replicas, e.g. in ads, scams etc, also constitutes "processing" of personal data, particularly if deepfakes are used in ways the person wouldn't agree to?
    • As regards GDPR remedies, shouldn't people have a right to prevent their deepfaked voice, image etc being used without their knowledge or consent, even if they're not famous? If legitimate interests is the claimed legal basis for using their likeness/voice, there's certainly a right to object, and compensation claims may be possible, but should there be additional positive rights there, e.g. to require hosting providers to take down deepfakes?
    • Real examples of deepfake misuse (there are many more): BBC presenter; social media user. Not to mention faces (almost invariably female) being used in deepfaked nudes, porn etc (Note: on 7 Jan, the UK government stated it would introduce a new offence for creating s**ually explicit deepfake images, plus other offences, as promised in the Labour Party manifesto p.35) 
  • Policy, risks, US: the US Congress's bipartisan House Task Force on Artificial Intelligence published its report on AI (news item). This long report covered the full range of AI-related concerns: US government use of AI, federal preemption of state law, data privacy, national security, research, development & standards, civil rights & civil liberties, education & workforce, intellectual property, content authenticity, open and close systems, energy usage and data centers, small business, agriculture, healthcare and financial services
  • Transparency, AI usage: the UK government published information (using the UK's Algorithmic Transparency Recording Standard (ATRS); guidancemore on ATRS) on 14 further UK algorithmic tools used in public sector decision-making (news item). This may or may not have been in response to news reports about lack of transparency regarding UK government tools, see 28 Nov
    • The list is worth a skim, there have been many different uses, not limited to chatbots
    • The UK also published a mandatory scope and exemptions policy for the ATRS, setting out which organisations and algorithmic tools are in scope of the mandatory requirement to publish ATRS records (i.e. central government departments and Arm’s-length-bodies (ALBs): executive agencies and non-departmental public bodies which provide public or frontline services or routinely interact with the general public), and "the required steps to ensure that sensitive information is handled appropriately"

16 Dec 24

  • GenAI, images: Google released its experimental Whisk, allowing users to prompt using images instead of text to create new images. "Whisk lets you input images for the subject, one for the scene and another image for the style" 

13 Dec 24

  • Agents: Google announced Google Agentspace for its cloud services, including "agents that bring together Gemini’s advanced reasoning, Google-quality search and enterprise data, regardless of where it’s hosted"
  • AI risks, taxonomy: the UK AI Standards Hub discussed developing a taxonomy of AI risks for organisations, including a proposed table mapping AI risk sources to AI hazards 
  • Deepfakes, misinformation: ...Political Misinformation is not an AI Problem. "Technology Isn’t the Problem—or the Solution... There’s no quick technical fix, or targeted regulation, that can “solve” our information problems. We should reject the simplistic temptation to blame AI for political misinformation and confront the gravity of the hard problem". As always, it's humans, not tech - tech can't solve issues with how humans think and act! (see also 9 Dec)
  • Laws: Lord Clement-Jones's Public Authority Algorithmic and Automated Decision-Making Systems Bill had its second reading (debate); the date for its committee stage is TBA 
  • SLMs, reasoning: Microsoft announced its Phi-4 "Small Language Model Specializing in Complex Reasoning", that "offers high quality results at a small size (14B parameters)" (and see 4 Dec). SLMs of course are more suitable for on-device processing than LLMs
  • SLMs: Epoch AI's article, Frontier language models have become much smaller. "...in 2023, the trend of frontier language models becoming bigger reversed... Should we expect frontier models to keep getting smaller? The short answer is probably not, though it’s harder to say if we should expect them to get much bigger than GPT-4 in the near term."

12 Dec 24

  • AI usage, LLMs, risks, privacy, PETs: Anthropic discussed its Claude insights and observations (Clio): "...an automated analysis tool that enables privacy-preserving analysis of real-world language model use. It gives us insights into the day-to-day uses of claude.ai... It’s also already helping us improve our safety measures... Clio takes a different approach, enabling bottom-up discovery of patterns by distilling conversations into abstracted, understandable topic clusters. It does so while preserving user privacy: data are automatically anonymized and aggregated, and only the higher-level clusters are visible to human analysts" (paper)
  • EU AI Act, jobs!: unsurprisingly, the EU AI Office is recruiting for legal or policy officers, deadline 15 Jan 25
  • GenAI, data protection: while publishing its response to its consultation series on generative AI,the UK ICO emphasised that genAI developers must provide proper privacy notices. "...it’s time to tell people how you’re using their information. This could involve providing accessible and specific information that enables people and publishers to understand what personal information has been collected. Without better transparency, it will be hard for people to exercise their information rights and hard for developers to use legitimate interests as their lawful basis".
    • However, unfortunately there's still little clarity from the ICO on how, if at all, the Art.14(4)(b) exemption could apply, where providing the information is "impossible or would involve a disproportionate effort", even though that was a key point raised by some consultation respondents
  • Testing: the AI Alliance's Trust and Safety Evaluations Initiative, still in draft, was updated to v0.3.1 (initial version was in Oct 24 so quite new), covering: terms glossary, user personae, taxonomy of evaluations & assessments, evaluators and benchmarks, leaderboards and evaluation platform reference stack 
    • Data: the AI Alliance also has an Open Trusted Data Initiative (OTDI) with many datasets available; "our mission is to create a comprehensive, widely-sourced catalog of datasets with clear licenses for use, explicit provenance guarantees, and governed transformations, intended for AI model training, tuning, and application patterns like RAG (retrieval augmented generation) and agents"
    • Skills: the Alliance has produced a Guide to Essential Competencies for AI "to help bridge the AI divide... support a framework for education and training curricula and... help promote inclusive access to AI education. These competencies include the responsible and ethical use of AI, identifying data limitations, data analysis, machine learning, and AI logic, and range in levels from fluency to proficiency to expertise, and finally, mastery"

11 Dec 24

  • Agents: Google announced its Gemini 2.0, its "new AI model for the agentic era" 
  • AI usage, deepfakes, comms: Wired article "OnlyFans Models Are Using AI Impersonators to Keep Up With Their DMs", "AI is replacing the humans who pretend to be OnlyFans stars in online amorous messages". If there's tech, people will use it - in all sorts of ways!
  • AI usage, health: Meta outlined use of its open source Llama model by researchers and developers to address various issues, "from gaps in clinical cancer trials to inefficiencies in agriculture"
  • OSs, mobile, genAI: Apple expanded its Apple Intelligence AI capabilities for iPhone, iPad, Mac
    • However, reportedly this AI has hallucinated inaccurate information when summarising breaking news alerts, and Apple stated it would update the AI feature 
  • Sustainability, water, data centers: Techradar report on Microsoft's new "zero-water data center cooling design", including reuse of cooling water

10 Dec 24

  • Adoption, data quality: Ataccama Data Trust Report 2025 (registration wall), reported by Freevacy as finding, from a survey of 300 US, Canada, and UK data leaders, that "while 74% of responding organisations have adopted some AI-based solutions, only 33% have successfully integrated them across the company", suggesting that "54% of respondents, with 72% of data strategy decision-makers particularly concerned, feel the pressure of not implementing AI effectively, fearing a potential loss of competitive edge". 51% of executives prioritised data quality and accuracy improvements, 30% cited challenge of managing large data volumes
  • AI usage, education: the UK's Ofsted is researching how AI is used in education, barriers, challenges and potential benefits
  • Clean energy, data centres, sustainability: Google announced a strategic partnership with other organisations "to synchronize new clean power generation with data center growth in a novel way", developing "industrial parks with gigawatts of data center capacity in the U.S., co-located with new clean energy plants to power them"
  • Employment, recruitment, bias: IBM on AI in recruitment, how intersectionality (e.g. female AND non-white) can compound bias in AI-based systems, prioritising data diversity and ongoing monitoring  
  • EU: the European High Performance Computing Joint Undertaking (EuroHPC) selected seven proposals to establish and operate the first AI Factories across Europe (Finland, Germany, Greece, Italy, Luxembourg, Spain and Sweden), most with "AI-optimised supercomputers": a major milestone for Europe in building a thriving ecosystem to train advanced AI models and develop AI solutions", that will "provide access to the massive computing power that start-ups, industry and researchers need to develop their AI models and systems. For example, European large language models or specialised vertical AI models focusing on specific sectors or domains" (news release)

9 Dec 24

  • GenAI, video: OpenAI moved its Sora video generation model out of research preview
  • Future, agents: interesting Quartz article on AI in 2025, tech companies touting AI agents cf returns on investment in AI, "Wall Street isn't convinced" 
  • No panacea..., testingCan AI Break the (Mathematical) Law? "...recent research claims to escape the inescapable — to exit the accuracy-error trade-off with artificial intelligence and machine learning (AI/ML). Are these breakthroughs, or illusions?... The promise of AI/ML is not to use data to escape the structure of the world, but to work more wisely within it. The challenge is knowing when and how to use it — recognizing the implications of universal mathematical laws, aligning with clearly defined policy goals and values, and testing as we go to assess what costs and benefits really accrue to whom". Again, how humans understand and use AI is critical! (See also 13 Dec)
  • Products, data centers, sustainability: physical products are increasingly AI-enabled. A letter sent by Lord Leong, Department for Business & Trade (DBT), on the UK Product Regulation & Metrology Bill, includes: "...The government and regulators will need to consider what specific requirements will be needed on products using Al, to safeguard their safety, as our understanding of their risks increases... the Bill ensures our product safety framework can take into account risks presented by the use of software and Al in physical products... product regulations made under the Bill will be able to regulate production processes as a whole if required. This will also include any use of Al as part of these processes and the regulations may also set labelling requirements for products containing Al. Regarding use of Al in the creative industries, including challenges to personal data and intellectual property, the Bill does not seek to regulate Al in and of itself. Nor does our product regulation framework cover the creative industries, beyond ensuring any physical product operates safely [refs to planned consultation on copyright & AI, see 17 & 18 Dec] ...Issues of Al and metrology were also raised, specifically on measurement of the power and water use of Al processes or data centres. The new powers set out in the Bill will allow metrology regulations to be updated to allow us to respond to technological advances, such as in Al... [and to ensure such measurements are accurate]"

6 Dec 24

  • Bias (age, disability, nationality, marital status): a tool to detect welfare fraud, used by the UK's Department for Work and Pensions (DWP), was reported as "showing bias according to people’s age, disability, marital status and nationality". Although humans make the final decisions on welfare payments, it was noted that no "fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status"
  • Cybersecurity, LLMs, tools, cloud: Amazon researchers discuss a new model "that harnesses advanced AI capabilities to automate the creation of security controls, enabling faster, more efficient, and highly accurate generation of the rules [for AWS services' configuration and alerts processing] that help users safeguard their cloud infrastructures", in a post entitled "Model produces pseudocode for security controls in seconds" 
  • GenAI, usage, law: the UK's Government Skills is piloting and seeking feedback on an AI-generated video used in its updated Civil Service expectations course as an optional module to help civil servants understand certain new legal responsibilities on equality: "the first time an AI-generated video has been used to enhance learning on one of the cross-government courses hosted on Civil Service Learning". Hmm, AI creating a video to explain/teach new laws... I'd like to know what the feedback was too! (recall that in a UK tax tribunal appeal, a party cited "supporting" rulings that were actually hallucinated by AI)
  • On-device ML, SLMs: Microsoft explained its "small but mighty" on-device Small Language Model, Phi Silica (and see 13 Dec)

5 Dec 24

  • Intellectual property, courtsImpact and Value of AI for IP and the Courts, speech by UK Deputy Head of Civil Justice 
  • OpenAI, o1, testing, red teaming: system card released for OpenAI's o1 and o1-mini, describing safecoty work including external red teaming and frontier risk evaluations under OpenAI's Preparedness Framework (and see 18 Dec)
  • Training data, privacy, PETs: NIST discussion with certain winners of the UK-US PETs Prize Challenges, on "real-world data pipeline challenges associated with privacy-preserving federated learning (PPFL) and explore upcoming solutions. Unlike traditional centralized or federated learning, PPFL solutions prevent the organization training the model from looking at the training data. This means it’s impossible for that organization to assess the quality of the training data – or even know if it has the right format. This issue can lead to several important challenges in PPFL deployments"

4 Dec 24

  • Agents, gaming: Google introduced its Genie 2, "a foundation world model capable of generating an endless variety of action-controllable, playable 3D environments for training and evaluating embodied agents. Based on a single prompt image, it can [create a video game that can] be played by a human or AI agent using keyboard and mouse inputs". Fascinating and impressive
  • AI usage, science, weatherProbabilistic weather forecasting with machine learning article on Google's GenCast model (authors include many Google researchers; open GenCast code on GitHub including links for data and model weights)
  • Bias (gender), LLMs: Apple paper Evaluating Gender Bias Transfer between Pre-trained and Prompt-Adapted Language Models. Language used in prompts matters. "Our findings highlight the importance of ensuring fairness in pre-trained LLMs, especially when they are later used to perform downstream tasks via prompt adaptation"
  • Employment, recruitment, data protection: the UK ICO published the 42 advisory notes that it issued following audits of organisations using AI in recruitment. Well worth a read for the points to consider/document if you're thinking of using AI for hiring
  • International cooperation: GPAI Belgrade Ministerial Declaration on AI issued (yes, it's somewhat confusing that "GPAI" here stands for "Global Partnership for Artificial Intelligence" and not general purpose AI as per the EU AI Act! There are currently 29 countries that are members of GPAI including the EU and many EU Member States, UK, US and also Australia, Canada, Singapore, New Zealand etc)
  • Health, medical devices, sandboxes: to "help test and improve the rules for AI-powered medical devices to ensure they reach patients quickly, safely and effectively" and improve diagnosis and patient care, the UK Medicines and Healthcare products Regulatory Agency (MHRA) announced its selection of 5 AI technologies for its pilot AI Airlock regulatory sandbox scheme, "where manufacturers can explore how best to collect evidence that could later be used to support the approval of their product" under MHRA supervision in a virtual or simulated setting. The chosen AI uses were: targeting at risk patients with Chronic Obstructive Pulmonary Disease (COPD); using LLMs to improve the efficiency and accuracy of radiology reporting; AI performance monitoring platforms in hospitals (for drift); improving cancer care efficent; facilitating clinician decision-making

3 Dec 24

  • Fundamental concepts: Dentons has published my video, initially intended only for internal training of lawyers generally on AI terminology and jargon
  • Misrepresentating AI, biometrics, bias: FTC enforcement action "for making false, misleading or unsubstantiated claims that its AI-powered facial recognition software was free of gender and racial bias and making other misleading claims about the technology"
  • Multimodal: Amazon announced its Nova generation of multimodal foundation models. "With the ability to process text, image, and video as prompts, customers can use Amazon Nova-powered generative AI applications to understand videos, charts, and documents, or generate videos and other multimedia content" 

2 Dec 24

  • Sustainability, data centers: reportedly Amazon has partnered with AI startup Orbital to trial a new (AI-designed!) material in its data centres for carbon capture, removing carbon dioxide from the air (another article)

29 Nov 24

  • Tools: (algorithms for improved statistical models, not AI?) software to "equip individual insurance firms to assess probable liabilities arising from their specific mix of products and customers" and estimate needed cash reserves better for Solvency II purposes. Unlikely to be free/open source

28 Nov 24

  • GenAI, LinkedIn, detection, synthetic data: Wired article on Originality AI's analysis showing that "Over 54 percent of longer English-language posts on LinkedIn are likely AI-generated", "indicating the platform’s embrace of AI tools has been a success" (!) 
  • Government policy: ODI's Global Policy Observatory for Data-centric AI report 
    • "Analysing 512 policy documents from 64 countries, we find that a small group of typically wealthier nations with robust open data practices are more likely to focus on data-centric AI topics. In contrast, low and middle-income countries (LMICs), which generally lack a focus on data-centric AI topics, may find their ability to engage in global AI governance efforts hindered."
    • Recommends: international bodies supporting LMICs with guidance; investment in data-centric tools/toolkits; promoting equitable data sharing  
  • Human rights, risks, assessment, governance: the Council of Europe's Committee on AI (CAI) adopted the HUDERIA methodology for the risk and impact assessment of AI systems from the point of view of human rights, democracy and the rule of law, with the involvement of Turing Institute researchers (Turing news with further links; "in 2025-2026 a more detailed model will be developed and piloted"). HUDERIA comprises a context-based risk analysis (COBRA), stakeholder engagement process (SEP), risk & impact assessment, and mitigation plan
    • EU: the EU was involved in negotiating HUDERIA as a non-binding instrument (NBI) "to support the Parties to the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law [explan memo] in the implementation of the risk and impact management obligations included in Chapter V of the Convention", and HUDERIA took account of the EU AI Act. HUDERIA's approval by the EU is progressing. Current signatories to this Convention include the EU, UK and USA
    • EU AI Act: HUDERIA should be useful when considering fundamental rights impact assessments of high-risk AI systems under the AI Act. Indeed, the HUDERIA document itself states, "The HUDERIA can be used by both public and private actors to aid in identifying and addressing risks and impacts to human rights, democracy and the rule of law throughout the lifecycle of AI systems", and that its main objectives include "to promote compatibility and interoperability with existing and future guidance,standards and frameworks developed by relevant technical, professional and other organisations or bodies (such as ISO, IEC, ITU, CEN, CENELEC, IEEE, OECD, NIST), including the NIST AI Risk Management Framework and risk management and fundamental rights impact assessment under the EU AI Act"

28 Nov 24

  • EU AI Act: AI Office's AI Pact webinar on the AI Act's architecture - recording, slides
  • Government policy: the Open Data Institute (ODI) report analysing 512 policy documents from 64 countries - "we find that a small group of typically wealthier nations with robust open data practices are more likely to focus on data-centric AI topics. In contrast, low and middle-income countries (LMICs), which generally lack a focus on data-centric AI topics, may find their ability to engage in global AI governance efforts hindered". Recommendations:
    • Support LMICs via international organisations' guidance on digital infrastructure and accessing high quality data resources
    • Invest in data-centric tools/toolkits
    • Promote equitable data sharing
  • Transparency on AI use: reportedly the UK government was failing to list is use of AI on mandatory register (now see 17 Dec)

27 Nov 24

  • Data protection, training, enforcement, Italy: if "selling" personal data for AI training, GDPR still applies. Italy's Garante issued a warning to a news group that sharing its editorial content (including archive) with OpenAI could likely infringe GDPR's provisions on special category data, criminal offence data, privacy notices and data subject rights. (Machine translation) "all editorial content will be used by OpenAI to allow users [of the ChatGPT service, ed.] to carry out real-time searches for current news, with the simultaneous provision of a summary (generated by OpenAI artificial intelligence systems) and a direct link to the news item itself” and that “all editorial content will also be used by OpenAI to improve its services and train its artificial intelligence algorithms”.
    • From the associated news item of 29 Nov (machine translation): "Digital newspaper archives store the stories of millions of people, with information, details, even extremely sensitive personal data that cannot be licensed for use by third parties to train artificial intelligence, without due precautions... Based on the information received, the Authority believes that the processing activities are intended to involve a large volume of personal data, including sensitive and judicial data, and that the impact assessment [DPIA], carried out by the company and transmitted to Garante, does not sufficiently analyze the legal basis by virtue of which the publisher could transfer or license for use by third parties the personal data present in its archive to OpenAI, so that it can process them to train its algorithms [the DPIA cited legitimated interest]. Finally, the warning notice highlights how the information and transparency obligations towards the interested parties do not appear to have been sufficiently fulfilled and that [the controller] is not in a position to guarantee the latter the rights they are entitled to under European privacy legislation, in particular the right to object"
  • Human rights: are rights sufficiently human in the age of the machine? - speech by UK Master of the Rolls

26 Nov 24

  • Data centers, clean energy, sustainability: concerns that increasing power demands from data centres for AI processing will delay the world's transition to clean energy (some companies are going for nuclear power to reduce carbon emissions, even banks not just big tech: paywalled FT article)
  • Misrepresenting AI capabilities: US FTC action over allegations of false claims about the extent to which AI-powered security screening system can detect weapons and ignore harmless personal items, including in school settings
  • Public sector, government, AI usage: Google Cloud-commissioned report on genAI - news itemsummaryPDF
  • Shadow AI: Strategy Insights survey - over a third of organisations struggled to monitor use by employees (even cybersecurity staff) of non-approved AI tools, particularly those integrated with legacy systems
  • Testing, red teaming: US Cybersecurity and Infrastructure Security Agency (CISA) on how AI red teaming (third-party safety and security evaluation of AI systems) must fit into the existing framework for AI Testing, Evaluation, Validation and Verification (TEVV) and into software TEVV

25 Nov 24

  • National security, defence, cybersecurity: the UK announced it will part-fund a new Laboratory for AI Security Research (LASR) that will "partner with world-leading experts from UK universities, the intelligence agencies and industry to boost Britain’s cyber resilience and support growth... to assess the impact of AI on our national security" including collaboration with Five Eyes countries and NATO allies 
  • Open source, tools, models, data: Anthropic released its open source model context protocol (MCP), "an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools"

22 Nov 24

  • Court judgments, genAI, Argentina: replacing a previous AI system PROMETEA, OpenAI's ChatGPT is now being used in Buenos Aires for contentious administrative and tax matters, reviewing uploaded case documents and drafting judgments (the first cut only, with human review, it seems): "20 rulings it has drafted have all been reviewed by a lawyer and approved by the deputy attorney"
  • Work, employees: OECD Global Deal group on social dialogue & workplace use of AI

21 Nov 24

  • AI PCs: IBM survey - AI PCs offered "potentially transformative impact on people’s lives, saving individuals roughly 240 minutes a week on routine digital tasks", but "current AI PC owners spend longer on tasks than their counterparts using traditional PCs", more consumer education needed
  • Financial services: third survey by UK Bank of England & Financial Conduct Authority on AI and machine learning in UK FS including use/adoption, third-party exposure, automated decision-making, materiality, understanding of AI systems, benefits/risks, constraints on use (data protection top), governance & accountability. The exec summary is worth a read if short on time!
  • Countries: Stanford University's Human-Centered Artificial Intelligence (HAI) center's 2024 Global AI Power Rankings: Stanford HAI Tool Ranks 36 Countries in AI using its Global Vibrancy Tool; "the U.S. is the global leader in artificial intelligence, followed by China and the United Kingdom. The ranking also highlights the rise of smaller nations such as Singapore when evaluated on both absolute and per capita bases..."  
  • Red teaming: OpenAI published 2 papers on external and automated red teaming of AI models/systems 
  • Religion!: Swiss church installs AI-powered Jesus instead of priests

20 Nov 24

  • Cybersecurity: EU Cyber Resilience Act (CRA) published in OJ, aiming to improve tge cyber security of "products with digital elements" by regulating the making available of such products on the EU market to ensure their cybersecurity, including "essential cybersecurity requirements" for their design, development and production, obligations for economic operators in relation to their cybersecurity, and cybersecurity requirements for vulnerability handling processes by their manufacturers during the time they are expected to be in use, and obligations for economic operators regarding those processes. Significance for AI? Art.12, Rec.51:
    • Products with digital elements classified as high-risk AI systems under the EU AI Act will be deemed to comply with the AI Act's Art.15 cybersecurity requirements if the product and the manufacturer's processes meet the CRA's essential cybersecurity requirements in CRA Annex I, but only to the extent the CRA declaration of conformity "demonstrates" achievement of the level of cybersecurity protection Art.15 requires. The assessment should take account of risks to an AI system's cyber resilience as regards attempts by unauthorised third parties to alter its use, behaviour or performance, including AI specific vulnerabilities such as data poisoning or adversarial attacks, as well as risks to fundamental rights, in accordance with the AI Act
    • There are nuances, e.g. "important products with digital elements" (Art.7, Ann.3) and "critical products with digital elements" (Art.8, Ann.4) that are high-risk AI systems will be subject to the CRA's conformity assessment procedures in so far as the CRA's essential cybersecurity requirements are concerned, but the AI Act's conformity assessment for all other aspects
    • Manufacturers of products with digital elements classified as high-risk AI systems under the AI Act may participate in AI regulatory sandboxes under the AI Act
  • EU AI Act: AI Office's updated GPAI models' Code of Practice FAQ / Q&A
  • GenAI, synthetic data, deepfakes, provenance, detection, risks, testing, management: NIST report Reducing Risks Posed by Synthetic Content An Overview of Technical Approaches to Digital Content Transparency, on existing standards, tools, methods, and practices, and potential development of further science-backed standards and techniques, for: authenticating content and tracking its provenance; labeling synthetic content, such as using watermarking; detecting synthetic content; preventing genAI from producing CSAM or non-consensual intimate imagery of real individuals (including intimate digital depictions of an identifiable individual's body/body parts); testing software used for the above purposes; and auditing and maintaining synthetic content.
  • Health, medical images: experts' alarm over people being encouraged to upload medical scans to Grok AI, including concerns on transparency, privacy, accuracy (reportedly it misidentified broken clavicle for dislocated shoulder, didn't recognise tuberculosis, mistook benign cyst for test*cles)
  • International cooperation: EU and Singapore signed an Administrative Arrangement on cooperation between the EU’s AI Office and Singapore’s AI Safety Institute, "to address the safety of general-purpose AI models through information exchanges and best practices, joint testing and evaluations, development of tools and benchmarks, standardisation activities, as well as research on how to advance safe and trustworthy AI" and to exchange views on trends and future technological developments in the field of AI
  • US, government policy, laws: the Computer & Communications Industry Association (CCIA) published its 2024 State Landscape Artificial Intelligence, outlining the major trends across the 50 US state legislatures, and highlighting key states expected to be active in the upcoming session 

19 Nov 24

  • Models, testing: from the UK Artificial Intelligence Safety Institute (AISI) and US Artificial Intelligence Safety Institute, a joint pre-deployment evaluation of Anthropic's upgraded Claude 3.5 Sonnet model (see also 22 Oct 24)

18 Nov 24

  • AI usage, science, health, weather, maths, physics: Google highlighted some scientific breakthroughs enabled by AI and newly-developed AI models - protein structure prediction (Deepmind, free database) to assist developing new medicines, fight antibiotic resistance, tackle plastic pollution; mapping human brain in more detail to assist health research (public dataset); more accurate flood forecasting to help save lives; spotting wildfires earlier to help stop them faster; predicting weather more quickly and accurately (open source model code); advancing AI's mathematical & geometry reasoning (AlphaGeometry2, AlphaProof); more accurate predictions of chemical reactivity/kinetics using quantum computing for chemistry simulations; accelerating materials science which could help produce more sustainable solar cells, batteries, superconductors (some predictions in an open database); assisting nuclear fusion research 
  • AI usage, potential: views of Google's Demis Hassabis & James Manyika - AI will help us understand the very fabric of reality
  • Data protection, LIAs: Information Accountability Foundation (IAF) published Assessments for an AI World - Legitimate Interest Assessment, with a draft model LIA on "how to demonstrate legitimate interest multi-dimensional balancing when AI is the processing activity"
  • Future, limits: view that genAI has "hit a dead end" and will stagnate...
  • Product liability: EU Product Liability Directive published in the OJ, applicable from 9 Dec 2026. Significance for AI? For the purposes of no-fault liability for defective products:
    • "Product" explicitly includes software like AI systems, including where supplied via SaaS
    •  "developer or producer of software, including AI system providers" under the AI Act, "should be treated as a manufacturer" under this Directive
    • Where a substantial modification is made e.g. due to the continuous learning of an AI system, the substantially modified product should be considered to be made available on the market or put into service at the time that modification is actually made 
    • National courts should presume the defectiveness of a product or the causal link between the damage and the defectiveness, or both, where, notwithstanding the defendant’s disclosure of information, it would be excessively difficult for the claimant, in particular due to the technical or scientific complexity of the case, to prove the defectiveness or the causal link, or both. Technical or scientific complexity should be determined by national courts on a case-by-case basis, taking into account various factors including the complex nature of the technology used such as machine learning, and the complex nature of the causal link, such as a link that, in order to be proven, would require the claimant to explain the inner workings of an AI system. While a claimant should provide arguments to demonstrate excessive difficulties, proof of such difficulties should not be required. For example, in a claim concerning an AI system, the claimant should, for the court to decide that excessive difficulties exist, neither be required to explain the AI system’s specific characteristics nor how those characteristics make it harder to establish the causal link. 

17 Nov 24

  • Deepfakes: Sir David Attenborough against his voice being cloned by AI

14 Nov 24

  • AI usage, phone scams, fraud: an excellent beneficial use of AI is telecomms provider O2's AI "granny", Daisy, trained on "real scambaiter content" and designed to "answer calls in real time from fraudsters, keeping them on the phone and away from customers for as long as possible... [Daisy's] mission is to talk with fraudsters and waste as much of their time as possible with human-like rambling chat to keep them away from real people, while highlighting the need for consumers to stay vigilant as the UK faces a fraud epidemic... Able to interact with scammers in real time without any input from her creators, O2 has put Daisy to work around the clock answering dodgy calls. Daisy combines various AI models which work together to listen and respond to fraudulent calls instantaneously and is so lifelike it has successfully kept numerous fraudsters on calls for 40 minutes at a time... [Daisy] has told frustrated scammers meandering stories of her family, talked at length about her passion for knitting and provided exasperated callers with false personal information including made-up bank details. By tricking the criminals into thinking they were defrauding a real person and playing on scammers biases about older people, Daisy has prevented them from targeting real victims and, most importantly, has exposed the common tactics used so customers can better protect themselves"
    • Query: exactly how is Daisy triggered to answer a call? Can O2 customers opt in to use Daisy? The accompany video (see link above or Youtube) says, "If you want to help Daisy and O2 to ruin a scammer's day, you can report scam numbers to 7726". Does that mean that if calls are made to O2 customers' numbers from numbers reported to 7726 that O2 has determined are used by fraudsters, O2 intervenes and has Daisy answer the call instead of the customer?
    • As one customer said, do we really want O2 to be intercepting calls? More info on how Daisy works would be very helpful. I can imagine a troublemaker deliberately reporting a legitimate person's phone number to 7726, so that when that person tries to call an O2 number, they get Daisy instead of whoever they were trying to call! 
  • Cybersecurity: from the UK AISI, Safety case template for ‘inability’ arguments - "How to write part of a safety case showing a system does not have offensive cyber capabilities" (paper)
  • EU AI Act: first draft of GPAI Code of Practice published
  • GenAI poetry: research indicating that AI-generated poetry is indistinguishable from human-written poetry and even rated more favorably by the people involved! (Forbes article)
  • Frameworks, critical infrastructure: from the US DHS, consulting with AI Safety & Security Board: Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure, on deploying AI in critical infrastructure (but more generally useful) and the roles of cloud infrastructure providers, AI developers, critical infrastructure owners & operators, civil society and the public sector/government. Includes AI roles/responsibilities matrix and glossary.
  • Innovation, risks, red-teaming: from UK DSIT Responsible Technology Adoption Unit (RTA/RTAU), a Model for Responsible Innovation "to help teams across the public sector and beyond to innovate responsibly with data and AI" (but of interest to private organisations too) by:
    • Setting out a vision for what responsible innovation in AI looks like, and the component Fundamentals and Conditions required to build trustworthy AI 
    • Providing a practical tool public sector teams can use to rapidly identify potential risks associated with AI development and deployment, and understand how to mitigate them - RTA uses this model in red-teaming workshops mapping data and AI projects against the model to rapidly identify where risks might arise and prioritise actions to ensure a trustworthy approach.
  • Product safety: unsurprisingly, AI and machine learning feature in the first horizon scan report by the UK Office for Product Safety & Standards (OPSS), and the scan's taxonomy of technologies included computational tools and platforms that collect, analyse or leverage data, including AI and machine learning (advanced analysis and algorithmic technologies that can interpret existing information and automate or support decision-making and action like AI, ML, neural networks, computer vision), cybersecurity and data platforms, and smart technology and internet of things (IoT)

13 Nov 24

  • Data protection, bias, Italy: Garante fined a food delivery company €2.6m and ordered various measures e.g. changing how it processes its riders' data through a digital platform and "verify that the algorithms used to book and assign orders for food and other products do not result into discrimination" including on automated decision-making (news item in English; another). Infringements noted included privacy notices/transparency, safeguards to ensure accuracy and fairness of algorithmic results used to rate riders' performance, lack of procedures to enforce the right to human interventionand contest the algorithms' decisions (which sometimes excluded riders from work assignments) 
  • EU AI Act: the European Commission launched its consultation (ending 11 Dec 24) on AI Act prohibitions and the AI system definition
  • GenAI, media literacy: Ofcom paper on Future Technology and Media Literacy: Applications of Generative AI, covering the news sector & personalisation, personalisation & adaptation, content creation & education, and data protection concerns with genAI
  • Government policy, AI risks, AI benefits: OECD report Assessing Potential Future Artificial Intelligence Risks, Benefits nd Policy Imperatives
  • Justice, law enforcement: EU Council's draft conclusions on the use of AI in the field of justice (final conclusions from 4 Dec 24 meeting not yet available)

12 Nov 24

  • Bias, Denmark: Amnesty report Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State, on "how the sweeping use of fraud detection algorithms, paired with mass surveillance practices, has led people to unwillingly –or even unknowingly– forfeit their right to privacy, and created an atmosphere of fear" (Fortune article)
  • Bias, Netherlands, ethnicity: reportedly the Dutch government is refunding >10k students "who were unjustly flagged for student finance fraud by an algorithm developed by the Education Executive Agency (DUO)" after an investigation found it to be "discriminatory, targeted students based on arbitrary risk factors that disproportionately affected those from immigrant backgrounds, particularly those of Turkish and Moroccan descent"

11 Nov 24

  • Automated decision-making (ADM): controversy over a UK Home Office tool proposing enforcement against migrants/asylum seekers. Government said its algorithms are rules-based not AI/ML and that humans remain responsible; objectors fear rubber-stamping of biased decisions, raised transparency regarding AI use.
  • Bias (age), health: does the UK’s liver transplant matching algorithm systematically exclude younger patients?
  • Ethics, sentience, feelings?: reportedly Anthropic previously hired an "AI welfare" researcher "to explore whether future AI models might deserve moral consideration and protection"
  • Spatial intelligence, physical world: Niantic's Large Geospatial Model is being built using player-contributed scans of public real-world geographic locations, as part of its Visual Positioning System (VPS), to enable computers to perceive and understand physical spaces e.g. for augmented reality, robots

8 Nov 24

  • GenAI, chatbots, LLMs: UK Ofcom's open letter to online service providers on how the UK Online Safety Act (OSA) will apply to generative AI and chatbots. E.g.
    • Sites enabling users to share AI-generated text, images, videos with other users such as via group chats; services letting users upload or create their own chatbots available to other users like chatbots mimicking real/fake people; any genAI content that's shared is considered "user-generated" including deepfake fraud material.
    • GenAI tools enabling searching of more than one site/database are OSA-regulated "search services" (e.g. tools using live search results)
    • Sites/apps with genAI tools that can generate p*rn material are also OSA-regulated (so, ensure guardrails that prevent this!)
    • Measures from draft Ofcom Codes to help services and users include having a named person accountable for OSA compliance, having an adequately resourced, well trained content moderation function for swift takedown of illegal content and child protection; using "highly effective age assurance" for child protection; having easily-accessible, usable reporting & complaints processes.
  • Regulatory cooperation, international: the UK Digital Regulation Cooperation Forum (DRCF) announced a joint OECD and The International Network for Digital Regulation Cooperation (INDRC) workshop to discuss the "interplay between digital regulatory frameworks – challenges and opportunities of structural collaboration", and published the resulting joint statement "to demonstrate INDRC members’ commitment to continued collaboration and dialogue on key matters of digital regulatory significance"

7 Nov 24

  • AI usage: Wendy's is using Palantir's software for supply chain management and anticipating/preventing ingredient shortages

6 Nov 24

  • Assurance, testing, frameworks, tools: UK DSIT report "surveys the state of the UK AI assurance market and sets out how DSIT will drive its future growth" (news release). Key actions planned:
    • Developing an AI assurance platform as a one-stop shop for developers/deployers, "bringing together existing assurance tools, services, frameworks and practices together in one place" including DSIT guidance/tools/resources, including an AI Essentials toolkit which, like the UK Cyber Essentials, will "distil key tenets of relevant governance frameworks and standards to make these comprehensible for industry"
      • The first tool was an AI Management Essentials self-assessment tool (draft tool, guidance) drawing on key principles from existing standards/frameworks including ISO/IEC 42001 (on AI management), the EU AI Act, and the NIST AI Risk Management Framework, to provide "a simple, free baseline of organisational good practice, supporting private sector organisations to engage in the development of ethical, robust and responsible AI". The consultation on this tool closes on 25 Jan 25.
    • Developing, with industry, a roadmap to trusted third-party AI assurance to increase the supply of independent, high-quality, trusted assurance
    • Collaborating with the UK Artificial Intelligence Safety Institute (AISI) to advance assurance research, development and adoption like new techniques for evaluating and assuring AI systems to ensure safe and responsible development/deployment. This includes:
      • Exploring how Privacy Enhancing Technologies (PETs) can enable data sharing with researchers to help them understand the capabilities and controllability of models while minimising risks to privacy or commercial confidentiality
      • Enabling/promoting the interoperability of AI assurance across jurisdictions internationally
    • Developing a Terminology Tool for Responsible AI, to define key terminology used in the UK and other jurisdictions and the relationships between them, to help industry and assurance service providers navigate key concepts and terms in different AI governance frameworks "to communicate effectively with consumers and trading partners within the UK and other jurisdictions, supporting the growth of the UK’s AI assurance market"
      • Work has already started on this with the US National Institute for Standards and Technology (NIST) and the UK’s National Physical Laboratory (NPL)
      • Sector-specific non-technical guidance on assurance good practice: already produced for employment (procuring and deploying AI for recruitment: see also below on ICO questions); guidance for other sectors including financial services to be published "in the near future"  
  • Employment, recruitment, data protection: UK ICO's key data protection questions when procuring AI tools to help with recruitment (DPIA, lawful basis, documented responsibilities and clear processing instructions, bias mitigation, transparency re use, limiting unnecessary processing): ICO webinar 10 am, 22 Jan 25. Also see above on DSIT guidance
  • EU AI Act: EDPB letter to European Commission on role of data protection authorities 

5 Nov 24

  • AI usage, chatbots, LLMs: the UK government released its experimental AI chatbot to help people set up small businesses and find support 
  • EU: the Council approved conclusions on the European Court of Auditors' (ECA) report on strengthening EU "AI ambitions, notably by enhancing governance and ensuring an increased, more focused investment when moving forward in this field", including scaling up AI investments and facilitating access to digital infrastructure, noting the importance of AI systems' environmental impact, high-performance computing, possible solutions to increase energy efficiency, and securing a reliable hardware supply chain

4 Nov 24

  • Models, open source: Epoch AI's report comparing open and closed models: "The best open model today is on par with closed models in performance and training compute, but with a lag of about one year"

1 Nov 24

  • Gen AI, risks, management: Open Loop & Meta report, Generative AI Risk Management and the NIST Generative AI Profile (NIST AI 600-1)
  • On-device, LLMs: Apple explained "how to optimize and deploy an LLM to Apple silicon, achieving the performance required for real time use cases", using Meta's Llama-3.1-8B-Instruct, "a popular mid-size LLM", showing "how using Apple’s Core ML framework and the optimizations described here, this model can be run locally on a Mac with M1 Max with about ~33 tokens/s decoding speed"

30 Oct 24

  • Metrics, hallucinations: OpenAI announced its new open source SimpleQA benchmark that measures LMs' "ability to answer short, fact-seeking questions"
  • US EO14110: US DoC fact sheet on key accomplishments 1 year on from this Biden-Harris EO
    • Note: reportedly the new Trump government is likely to get get rid of EO14110 

29 Oct 24

  • Employment, recruitment: LinkedIn introduced an AI-powered "Hiring Assistant to Help Recruiters Spend More Time On Their Most Impactful Work" 

28 Oct 24

  • Cybersecurity, LLMs, open source, tools: paper Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks. "...We introduce Mantis, a defensive framework that exploits LLMs' susceptibility to adversarial inputs to undermine malicious operations. Upon detecting an automated cyberattack, Mantis plants carefully crafted inputs into system responses, leading the attacker's LLM to disrupt their own operations (passive defense) or even compromise the attacker's machine (active defense). By deploying purposefully vulnerable decoy services to attract the attacker and using dynamic prompt injections for the attacker's LLM, Mantis can autonomously hack back the attacker". Code
  • Data scraping, training data: following industry engagement, global privacy authorities issued their Concluding joint statement on data scraping and the protection of privacy
  • Facial recognition, data protection: the UK ICO applied for permission to appeal the First-tier Tribunal's judgment on Clearview AI
  • Intellectual property, competition, training: UK Prime Minister Keir Starmer's article includes: "Both artificial intelligence and the creative industries – which include news media – are central to this government’s driving mission on economic growth. To strike balance in our industrial policy, we are working closely with these sectors. We recognise the basic principle that publishers should have control over and seek payment for their work, including when thinking about the role of AI. Not only is it essential for a vibrant media landscape, in which the sector’s provision of trustworthy information is more vital than ever, it is also relevant to our ongoing work to roll out the Digital Markets, Competition and Consumers Act as swiftly as possible. This landmark legislation will help rebalance the relationship between online platforms and those, such as publishers, who rely on them."
  • Open source AI: the Open Source Initiative announced its release of the first Open Source AI Definition (OSAID) v1.0
  • OSs, mobile: Apple released its Apple Intelligence AI capabilities for iPhone, iPad, Mac 
  • Sustainability, US, water, data centers: J.P. Morgan and ERM report on water resilience in the US. Water is of course needed for cooling in data centers, which are increasingly being built for AI processing

25 Oct 24

  • AI usage, saving lives: UK announcement that Scotland-based SME Zelim (which benefited from DASA funding) had won a contract with the US Navy to deploy their innovative AI-enabled Person-in-Water detection and tracking technology ZOE. "Zelim’s detection and tracking system uses AI to scan the water surface to find people in the water much more accurately and consistently than human eyes and current systems can. Low-cost and easy to integrate, the software solution can be implemented in any camera or CCTV setup"

24 Oct 24

  • EU AI Act standards, intellectual property: Commission note on some of the key characteristics expected from upcoming standards for high-risk AI systems to support implementation of the AI Act (news article)
    • Note: the Commission previously requested EU standardisation organisations, led by CEN-CENELEC, to draft various standards for the AI Act, to cover: risk management systems, governance and quality of datasets used to build AI systems, record keeping through logging capabilities, transparency and information provisions for users, human oversight, accuracy specifications, robustness specifications, cybersecurity specifications, quality management systems for providers including post-market monitoring processes, conformity assessment.
    • In Case C-588/21P the CJEU stated that, while harmonised standards under Regulation 1049/2001 were protected by copyright, as CEN-CENELEC acknowledged, there was an overriding public interest in their disclosure, and annulled the Commission's refusal of access to those standards. So the new standards for high-risk AI should be publicly-available. 
  • AI edits, transparency: Google is indicating when an image in Google Photos has been edited using Google's genAI (e.g. Magic Editor, Magic Eraser and Zoom Enhance), and using The International Press Telecommunications Council (IPTC) metadata to indicate when an image comprises elements from different photos using non-generative features
  • Health, life expectancy: UK NHS trials AI "that predicts when patients will die"  
  • On-device ML, homomorphic encryption, PETs: Apple is combining machine learning and HE in its ecosystem for privacy while enriching on-device experiences with information privately retrieved from server databases 
  • Testing, cybersecurity, tools: Google released its Secure AI Framework (SAIF) risk assessment tool "that can help others assess their security posture, apply these best practices and put SAIF principles into action". It's a questionnaire-based tool "that will generate an instant and tailored checklist to guide practitioners to secure their AI systems"
  • Testing, assurance, safety: by the UK Artificial Intelligence Safety Institute (AISI), Early lessons from evaluating frontier AI systems, discussing the evolving role of third-party evaluators in assessing AI safety, and how to design robust, impactful testing frameworks. 

23 Oct 24

22 Oct 24

  • Agents, computer control: Anthropic announced that its experimental Claude 3.5 Sonnet model can (as a beta feature) interact with tools to manipulate a computer desktop environment, to "use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text" (see also 19 Nov)
  • Intellectual property, genAI, training: a statement on AI training "The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted" was released. To date it's been signed by over 39.5k (and counting): musicians, writers, industry organisations and others including many famous names

21 Oct 24

  • Detection, genAI, synthetic data: good roundup of recently-reported errors made by claimed AI-detecting tools (no doubt many also using AI), falsely accusing students of plagiarism
  • Open source, LLMs: IBM released its open source Granite 3.0 models "High Performing AI Models Built for Business", and also announced its next generation of Granite-powered watsonx Code Assistant for general purpose coding, and new tools in watsonx.ai for building and deploying AI applications and agents
  • Transparency: summary of findings from a UK Digital Regulation Cooperation Forum (DRCF) workshop in Aug 2024 on why AI transparency is important, key considerations for participating regulators, and useful information from each regulator on existing guidance and their next steps related to AI transparency

18 Oct 24

17 Oct 24

16 Oct 24

  • Employment, recruitment, bias, ethnicity: paper on using LLMs for resume / CV screening, Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval which tested whether various Massive Text Embedding (MTE) models are biased (bias regarding intersectionality involves intersecting attributes like gender and ethnicity). "We simulate this for nine occupations, using a collection of over 500 publicly available resumes and 500 job descriptions. We find that the MTEs are biased, significantly favoring White-associated names in 85.1% of cases and female-associated names in only 11.1% of cases, with a minority of cases showing no statistically significant differences. Further analyses show that Black males are disadvantaged in up to 100% of cases... We also find an impact of document length as well as the corpus frequency of names in the selection of resumes [such that increasing the ratio of signals that are proxies to race or gender information in a document by decreasing its length can increase the number of biased outcomes by 22.2%, and changing frequency matching strategies can alter whether Black names or White names are favored in a majority of cases]. These findings have implications for widely used AI tools that are automating employment, fairness, and tech policy"
  • EU AI Act, compliance, tools: Reuters reported on a new tool that "awards AI models a score between 0 and 1 across dozens of categories, including technical robustness and safety" with a leaderboard published of various models developed by big tech companies, and a "Large Language Model (LLM) Checker". According to the website, models can be evaluated locally and the JSON report file uploaded for a technical report. The tool provider states, "We have interpreted the high-level regulatory requirements of the EU AI Act as concrete technical requirements. We further group requirements within six EU AI Act principles and label them as GPAI, GPAI+SR (Systemic Risk), and HR (High-Risk)" (paper - on my "to read" list!). The technical aspects the tool seeks to evaluate are: Robustness and Predictability, Cyberattack resilience (cybersecurity), Training Data Suitability, No Copyright Infringement, User Privacy Protection, Capabilities, Performance, and Limitations, Interpretability, Disclosure of AI (transparency), Traceability, Explainability, Risks, Evaluations, Representation — Absence of Bias, Fairness — Absence of Discrimination, Harmful Content and Toxicity 

Early Oct 24

  • LLMs, hallucinations: much ado (e.g. good article) about the Entropix project, on using uncertainty modelling to reduce LLM hallucinations

10 Oct 24

7 Oct 24

7 Oct 24

  • GenAI, transparency: Google joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member, and is incorporating C2PA's standard into its products

And just a few from before Oct 2024...


27 Sept 24

  • Employment, recruitment: half of a tech company's HR department was fired after their manager found their an application review system (not necessarily AI?) auto-rejected all applications, even the manager's own CV submitted under a fake name! "...HR had set up the system to search for developers with expertise in the wrong development software and one that no longer exists..." but also "[HR] always told [the manager] that they had some candidates that didn't pass the first screening processes (which was false)"

25 Sept 24

  • Misrepresentation: the US FTC announced 5 law enforcement actions "against operations that use AI hype or sell AI technology that can be used in deceptive and unfair ways" 

24 Sept 24

  • Financial services, Canada, usage, risks, management: by the Office of the Superintendent of Financial Institutions (OSFI) and the Financial Consumer Agency of Canada (FCAC), OSFI-FCAC Risk Report - AI Uses and Risks at Federally Regulated Financial Institutions outlines key risks (from internal AI adoption or from AI use by external actors) that arise for financial institutions from AI, supported by findings from a previous questionnaire and insights from external publications. "It also presents certain practices that can help mitigate some of the risks. These are not meant to serve as guidance but can be positive steps in a financial institution's journey to manage the risks related to AI"

23 Sept 24

18 Sept 24

  • GenAI, health, misrepresentation, accuracy: Texas AG settlement with a Dallas-based artificial intelligence healthcare technology company, resolving allegations that it deployed its products at several Texas hospitals after making false and misleading statements about the accuracy and safety of its products. At least four major Texas hospitals provided their patients’ healthcare data in real time for the generative AI product to “summarize” patients’ condition and treatment for hospital staff. The AG investigation found deceptive claims about the accuracy of its healthcare AI products, putting the public interest at risk; metrics used to claim accuracy, including advertising and marketing the accuracy of its products and services by claiming an error rate or “severe hallucination rate” of  “<1 per 100,000”, were found likely inaccurate "and may have deceived hospitals about the accuracy and safety of the company’s products"

Older, but just because I think these are interesting!

  • EU AI Act, law enforcement: in Jul 2024, various EU Member States submitted queries to the Commission on the AI Act. Some related to definitions/concepts more broadly, not just in the crime/justice context. It would be good to know the answers!
  • Workplace, jobs: a 2023 report by Challenger, Gray noted that a CEO of an unnamed company [maybe Dictador, see below?] was "replaced by artificial intelligence... China-based gaming company NetDragon Websoft appointed an AI robot it calls Tang Yu as its CEO last August [i.e. 2022]. Legal software company Logikcull CEO and co-founder Andy Wilson has said he will replace himself with an AI bot named Andy Woofson by 2024"
    • In fact Logikcull was acquired in Aug 23 so that never happened... 
    • Reportedly Polish drinks company Dictador appointed an AI-powered humanoid robot named Mika as its experimental CEO in 2023 (video interview)
  • AI usage: in 2021, NotCo launched 4 plant-based chicken varieties in Latin America. "NotCo utilizes a proprietary artificial intelligence technology, Giuseppe, which matches animal proteins to their ideal replacements among thousands of plant-based ingredients"