Mastodon Kuan0

Sunday, 3 May 2026

AI in science & industry: Imperial AI Collider, Apr 2026

To the report on this excellent AI event, I add some notes [with my comments in square brackets], organised by theme rather than chronologically:

AI is increasingly used/useful for science: chemistry/biology, industrial manufacturing, computational fluid dynamics (eg airflow, floods) etc.

  • Certain weather/climate AI modelling took 10 minutes that would otherwise have taken 6.5 years!
  • Nurse-driven rather than IT-led, an NHS trust [probably this one or this one?] created an AI tool to prioritise patient complaints; it was extended to identify incidents and systemic issues and to respond to complaints more effectively, and is now embedded in the NHS federated data platform
  • Coding, of course: eg at Uber, AI agents drive about 11% of coding, freeing staff for more important work.

AI democratises expertise globally and can help with repetitive, mundane tasks (eg for lawyers), freeing time to focus on more challenging work.

Domain expertise (chemistry, law etc) was said to be "more valuable" than the rush to data; AI should complement human experts [for me: search results a few months back indicated that a provision came from the Data Protection Directive - but it didn't, it was only in the GDPR. You'd need data protection expertise to spot the error]

More interactivity with AI gives scientists back their flow: "What if I change this or that?", with the results in a minute. But some software engineers, initially excited about AI's speed, turned it off after a few months because they'd stopped learning!

Physical AI (robotics, autonomous vehicles, AI interacting with/reasoning about physical world [see eg TechUK's events on physical AI]) is garnering more investment.

There's increasing work on scientific ML and on ways to encode physical laws into AI models: neural physics (physics-based, AI-enabled [PINNs]); and using data generated by slow, expensive, platform-specific rules-based computational/physics-based models to train data-driven surrogates that are faster, less energy-intensive and platform-agnostic (but black-box and less generalisable).

  • One simulated airflow and airspeed over Greater London a thousand times faster than realtime!
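The surrogate idea can be sketched in a few lines: run the expensive physics-based solver offline to generate training data, then fit a cheap data-driven stand-in that answers new queries near-instantly. A toy illustration with a hypothetical 1-D "solver" (real surrogates use neural networks and far richer inputs):

```python
import math

def physics_solver(x):
    """Stand-in for a slow, expensive rules-based simulation (hypothetical)."""
    return math.sin(x) * math.exp(-0.1 * x)

# Offline: sample the expensive solver on a grid to build training data
STEP = 0.1
xs = [i * STEP for i in range(101)]      # inputs 0.0 .. 10.0
ys = [physics_solver(x) for x in xs]     # expensive calls, done once

def surrogate(x):
    """Cheap data-driven stand-in: linear interpolation over the samples."""
    i = min(int(x / STEP), len(xs) - 2)
    t = (x - xs[i]) / STEP
    return ys[i] * (1 - t) + ys[i + 1] * t

# Online: the surrogate answers new queries without re-running the solver,
# trading a little accuracy (and generalisability) for speed
print(abs(surrogate(3.14) - physics_solver(3.14)) < 1e-3)  # → True
```

The trade-off in the talk is visible even here: the surrogate is only trustworthy within the range of inputs it was trained on, which is why the black-box/generalisability caveat matters.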

Weather: there's evidence that models are learning underlying physical principles, eg atmospheric dynamics, rather than just pattern matching/stochastic parroting! Much insurance-sector work is being undertaken on extreme events (previously too energy-intensive with traditional models).

Energy: AI could help improve energy efficiency (eg Singapore is doing this).

  • For UK and especially London data centres, the electricity issue isn't power generation but the distribution network, cables etc to get power to data centres - there's a multiyear queue, 5-10 years!

LLMs for legal research?: does the model provide the best supporting citations, is its answer appropriate to the context, does it avoid assumptions on important points? Thomson Reuters is continuing to research all this, but we're not quite there yet.

Bias: an obvious concern. Lawyers think of bias as relating to equality/fairness to humans, e.g. racial discrimination, but scientists seem to take a broader view, ie non-representativeness due to insufficient data, data not known to be missing, provenance issues, failed data. Ongoing data is needed too, to help correct/remove bias.

Transparency is important: including on how models are finetuned and how they're embedded in the pipeline [are the EU AI Act's transparency requirements too narrow?]

AI regulation: is a regulator needed per sector? Do regulators have enough resources? [regulatory resource has been raised as an issue in an amendment tabled to the Cyber Security & Resilience (CSR) etc Bill too!] Domain expertise is needed, but the private sector pays more... Do regulators need to change how they work, eg safety mindset in the energy sector vs innovation? Should/could there be an international AI regulator? We need common principles to be agreed worldwide, eg how to decide whether what AI is doing is good or bad, and whether current laws/regulations are too strict or restrictive.

Views on AI tend to sit at either extreme: a magic panacea, or something to be feared. What we need is collaborative work to improve precision and confidence regarding what an AI system can do, verifiability and robustness/trustworthiness, and to develop AI systems more intelligently.

Proportionality: we don't need precision for all use cases, as the severity of the consequences will vary (eg electricity blackout), but we do need responsible deployment and a systematic approach to risk management.

Risks are many, eg market risks from AI colluding with AI! More interconnectivity increases risks. Kids thinking AI is human is also not desirable! There are risks that AI is deskilling students [interesting post on AI deskilling eg linking to research showing AI-assisted colonoscopy creates deskilling]. Many small businesses etc will use and trust AI, without knowing how to check/verify eg website code. 

Approach: people tend to think something either can't be done at all or must be done fully first, but we really need the middle ground, emphasising that the aim is not to increase risk but to lower it, or to reach the desired risk threshold faster.

How to teach critical thinking? As with search engines, humans need to know not only how to critically evaluate and verify AI results, but also when not to use AI (e.g. if existing domain knowledge and algorithms/tools/solvers suffice).

Human society. Increasingly embedding AI in society without understanding how it works can lead to complacency and erode societal values, so that we end up allowing behaviours previously not considered acceptable, cf the experience with social media.

Monday, 13 April 2026

View CJEU judgments by ECLI number

This form lets you directly open the Eur-Lex document for a CJEU judgment or Advocate-General opinion using the document's ECLI number e.g. ECLI:EU:C:2020:559. (Or: use this form instead to view text of judgments etc by C- or T- case number.) 

Instructions:

  • C is pre-selected. Change the dropdown to T only if it's a T case (General Court), e.g. if looking for ECLI:EU:T:2025:831.
  • Enter the ECLI:EU year and number after the C: or T: in the boxes below; for convenience the cursor is already in the Year box so you can start entering the year immediately.
    • Tip: press the Tab key to move quickly from dropdown to Year box to Number box.
  • Then, press Enter or click the View... button to view the document in another tab.
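Under the hood, all the form does is assemble a Eur-Lex address from the three inputs and open it. A rough sketch of the idea, using the example ECLI from above (the exact Europa URL pattern here is my assumption for illustration and may not match what the form actually uses):

```python
def eurlex_ecli_url(court, year, number):
    """Build a Eur-Lex URL from an ECLI's parts.

    The endpoint/parameter format is an assumption for illustration;
    the real form may use a different Europa URL structure.
    """
    if court not in ("C", "T"):
        raise ValueError("court must be 'C' (ECJ) or 'T' (General Court)")
    ecli = f"ECLI:EU:{court}:{year}:{number}"
    return f"https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=ecli:{ecli}"

# e.g. the judgment cited above, ECLI:EU:C:2020:559
print(eurlex_ecli_url("C", 2020, 559))
```

This also shows why the disclaimer below matters: if Europa changes its URL structure, a fixed template like this stops resolving.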

[interactive form here: ECLI:EU: (C/T) : (year) : (number)]

To view another judgment etc. by its ECLI number, just refresh/reload this page.

Notes:

  • Created because I couldn't find anything relevant when trying to search by entering the ECLI number in the official ECLI search form, although the Eur-Lex quick search is better
  • Unlike case numbers, ECLI numbers differ between the judgment and the A-G opinion etc in the same case
  • F cases (Civil Service Tribunal) are not catered for above
  • Only works for CJEU case documents, not any Member State national cases.

Disclaimer: if the Europa URL structure changes in future, this may stop working.

Wednesday, 1 April 2026

View CURIA judgments in EUR-LEX

This form lets you directly open the Eur-Lex document for a CJEU judgment or Advocate-General opinion using the document's case number (updated 13 Apr 2026). (Or: use this form instead to view text of judgments etc by ECLI number.)

Instructions:

  • C- is pre-selected, for ECJ cases. Just select T- instead if the case number starts with T-, for General Court cases (like T-325/23). Or, select A-G if you want to see the Advocate-General's opinion for that case.
  • You can leave the checkbox blank, ticking it only if the number ends with a P (e.g. C-703/25 P), which is rare.
  • Enter the case number in the box in the format firstnumber/secondnumber. Just start typing the number if it's a C- case, the cursor is already in the box. Then, click View... or press Enter on your keyboard.
  • This will open a new browser tab showing the judgment's EUR-LEX webpage.


To view another judgment or opinion's EUR-LEX webpage, just refresh/reload this page.

Notes:

Disclaimer: if the Europa URL structure changes in future, this may stop working.

Saturday, 21 March 2026

EU AI Act resources

My key EU AI Act resources:

  1. Linkable EU AI Act, where you can visit/link directly to the text of specific Articles/Recitals etc: bit.ly/eu-aiact (see that link for instructions)

  2. EU AI Act scope - diagrams on roles bit.ly/eu-aiactscope

  3. Timeline bit.ly/eu-aiacttimeline and commencement dates bit.ly/eu-aiactdates

  4. My jargon-busting video demystifying basic technical AI/machine learning concepts, to help AI literacy bit.ly/aijargon
  5. Key supply chain/value chain terms/relationships (AI model, AI system, provider, deployer) under the AI Act: bit.ly/eu-aiact-supplychain

  6. Flowchart on automated decision-making (ADM) under the EU GDPR (NB pre-the UK Data (Use & Access) Act 2025)

Obviously the Digital Omnibus on AI, under the EU Digital Simplification package (press release), proposes to amend the AI Act.

But generally it won't change the above overviews, except commencement timings (and AI literacy obligations). And of course it's not been passed yet, though it's getting there. For ease of ref:

  1. European Parliament's news release 18 Mar 2026 and its MEP-approved text (added: European Parliament's 26 Mar 26 text)

  2. Council of the European Union's press release 13 Mar 2026 and Council's text

  3. European Commission's original draft amendments to the AI Act: Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI), COM/2025/836 final

👀Let's see how the negotiations go!

Recall: even UK and other non-EU providers and deployers (users) of AI systems may be caught by the AI Act. May I repeat my AI Act jurisdiction song...

🧑‍🌾Old MacDonald had a thought, EU AI O!
Could AI Act apply or not? EU AI O!
If his output's used in the EU, yes!
Many other aspects extraterritorial
Old MacDonald, check your role(s)! EU AI O!

Sunday, 16 March 2025

Canon MF Toolbox - solution if it's not working!

Here's the fix for Canon multi-function devices' scanning software, MF Toolbox, not working since late 2024/2025. For me it opened, but nothing happened whichever button I clicked: PDF, scan, save...


 
I tried many online suggestions; none worked. The only effective one, finally, was from 2012! For a Windows 11 (and probably 10) computer, add this to your path: C:\Windows\twain_32\MF4100 (or whatever subfolder is in the Windows twain_32 folder for your device - it's likely to be MF4100, just look in that folder).

How to add something to your path in Windows?
  • Press a Win key on your keyboard
  • Type: env
  • You should see something like this:



  • Click on "Edit the system environment variables" (left or right one, it doesn't matter)
  • You should see something like this, entitled System Properties:



  • Click the Environment Variables button (bottom right)
  • You'll see something like this, entitled Environment Variables, with a list of items under Variable. One is named Path, highlighted below (I've blanked out the rest):


    • Click on Path. Then click the first Edit button just below its box (or just double-click on Path)
    • You'll see something like this, entitled Edit environment variables, with a list under it (again I've blanked out some info):


    • Click on New, then type in or paste the path to the twain_32 MF4100 subfolder which, for me, was as above:
      C:\Windows\twain_32\MF4100
    • You'll see something like this, with the added info now at the bottom of the list on the left:



    • Now, click OK and just keep clicking OK (I counted 3 OK clicks altogether) till you come out of the Environment Variables box
    • For me, that was it! MF Toolbox then started working properly again, I didn't even need to restart my computer.
    I hope this helps others, it's been driving me mad for months!
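If you want to double-check that the folder really made it onto your PATH, a small cross-platform check (run it in a new terminal/session so it sees the updated variable; the MF4100 path below is just the example from this post):

```python
import os

def dir_on_path(directory):
    """Return True if `directory` appears as an entry in the PATH variable."""
    target = os.path.normcase(os.path.normpath(directory))
    return any(
        os.path.normcase(os.path.normpath(entry)) == target
        for entry in os.environ.get("PATH", "").split(os.pathsep)
        if entry
    )

# Check the folder added above (adjust for your device's subfolder)
print(dir_on_path(r"C:\Windows\twain_32\MF4100"))
```

Note that programs only pick up the new PATH when they're started fresh, which is why MF Toolbox may need to be closed and reopened (though in my case no restart of the computer was needed).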



    Sunday, 9 February 2025

    AI literacy: EU AI Act

    The EU AI Act's "AI literacy" obligation applied from 2 Feb 25, alongside its prohibition on certain AI uses (commencement dates 1-pager). But what, if anything, should you do about it? Points to consider:

    • Who's caught? This obligation applies to "providers" and, especially, "deployers" (i.e. users) of AI systems
      • For non-compliance, you can't be fined yet (fining provisions don't kick in till 2 Aug 2025), or maybe even not at all (Art.4 isn't listed in Ch.12 on penalties, and it's unclear whether individual EU Member States can or will penalise breach of this obligation - we'll find out by 2 Aug 2026!)
    • But... train anyway? However, if you use any AI system caught by the EU AI Act, an AI upskilling/training and awareness program for staff is good practice and should help to boost your business's competitiveness as well as legal compliance, so you may want to roll it out anyway - if not yet, then ideally by 2 Aug 2026
      • Who? Train at least staff/contractors developing/adapting/integrating, operating/maintaining or using any high-risk AI systems (including third-party AI systems), and also train staff providing human oversight of AI; and ensure they have appropriate authority to perform their tasks properly. Train them similarly even if your AI systems aren't high-risk (the Art.4 AI literacy obligation applies to all AI systems)
      • What on? Train them (as appropriate to their role, technical knowledge, experience, education and training) on AI technicalities, use/safeguards, and interpretation of output, as well as raising their awareness about AI's opportunities, risks and possible harms, taking into account the context the relevant AI system is to be used in, and the persons or groups of persons on whom your AI system is to be used
        • The "AI literacy" definition (below) mentions skills, knowledge and understanding, "taking into account" AI Act rights and obligations, to make an informed deployment of AI systems. This might imply that relevant staff should also be trained, at least to a basic level, on your obligations under the AI Act as deployer/provider - even engineers who aren't in the Legal/Regulatory/Compliance teams
      • How?
        • Others' experiences. To see what other organisations are doing on AI literacy, you can review the Commission's "living repository" compilation of AI literacy practices of many organisations (15 currently) from different sectors and of different sizes (direct link). Added: on 2 Apr 25 the Commission conducted a survey to gather practices for the repository.
          Also consider attending/viewing the AI Pact webinar on AI literacy on 20 Feb 25 (YouTube livestream). If you don't have enough internal resources/expertise to train your staff, external third-party resources are available, but do check that whoever you engage is sufficiently knowledgeable. There are now many out there who offer AI training (nice market for that since ChatGPT, and it can only get bigger!) - but how well qualified or expert are they? A lot of big well-known AI companies already provide online AI training (typically tailored to their own services but covering the basics too), those are often free, so it's worth checking them out. 
        • AI jargon. My free YouTube video demystifying key AI jargon/terminology may also be of use😉, do incorporate it if you wish
    • What else? Consider contributing to any sectoral/industry initiatives on training/awareness of people ("affected persons") who may be affected by your use of AI systems, and/or of other actors in the AI value chain
      • Surely "affected person" won't be making any "deployment" of AI systems, so the "AI literacy" definition doesn't work very well in relation to them... it seems to be more awareness-raising on AI opportunities and risks/harms rather than training, there
    • What to monitor for?
      • Monitor relevant Member States' national laws for any local penalties that might be imposed for infringement of this obligation (seems unlikely, but you never know)
      • Watch out for any voluntary codes of practice on promoting AI literacy "facilitated" by the EU AI Office/relevant Member States under Art.95(2)(c), and take on board anything from them if you can
      • The AI Board is supposed to support the Commission in promoting AI literacy, public awareness and understanding of the benefits, risks, safeguards and rights and obligations in relation to the use of AI systems. If and when they put anything out, again see whether what they produce can usefully be incorporated into your own AI literacy program.
    • (Added) Update - more resources

    Key background info for ease of reference

    • Art.4 AI literacy obligation: Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
      • Rec.91: ... deployers should ensure that the persons assigned to implement the instructions for use [of high-risk AI systems] and human oversight as set out in this Regulation have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks...

    • "AI literacy" definition: skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause
      • Rec.20: In order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety and to enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems. Those notions may vary with regard to the relevant context and can include understanding the correct application of technical elements during the AI system’s development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system’s output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them. In the context of the application of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement. Furthermore, the wide implementation of AI literacy measures and the introduction of appropriate follow-up actions could contribute to improving working conditions and ultimately sustain the consolidation, and innovation path of trustworthy AI in the Union. The European Artificial Intelligence Board (the ‘Board’) should support the Commission, to promote AI literacy tools, public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. In cooperation with the relevant stakeholders, the Commission and the Member States should facilitate the drawing up of voluntary codes of conduct to advance AI literacy among persons dealing with the development, operation and use of AI.

    Monday, 13 January 2025

    Things cyber security, Q4 2024

    Selected things cyber security, mostly from Q4 2024, are listed below in reverse chronological order, some with descriptions. See also Things AI, Oct 2024, and Data protection & cyber security, Oct 2024


    28 Dec 24

    • Software code, provenance: a thoughtful, telling post, Why it's hard to trust software, but you mostly have to anyway: "...the situation is fairly dire: if you're running software written by someone else—which basically everyone is—you have to trust a number of different actors. We do have some technologies which have the potential to reduce the amount you have to trust them, but we don't really have any plausible venue to reduce things down to the level where there aren't a number of single points of trust... Open source, audits, reproducible builds, and binary transparency are all good, but they don't eliminate the need to trust whoever is providing your software and you should be suspicious of anyone telling you otherwise" 

    21 Dec 24

    • International collaboration, critical infrastructure: the Critical Five (5 Eyes) countries reaffirmed their "vision of fostering collaboration across the private and government critical infrastructure communities in our five nations" (see the June 24 summary on how they plan to modernise their approach to critical national infrastructure security and resilience). CISA links

    20 Dec 24

    • Consumer IoT: the UK Department for Science, Innovation & Technology (DSIT) will be undertaking an interim Post-Implementation Review of the Product Security and Telecommunications Infrastructure Act (PSTI), to be published by October 2026. To support this, DSIT has commissioned a consultancy to conduct preparations for evaluation and research projects on the product security elements of the PSTI Act

    19 Dec 24

    • Training: the UK updated its webpage on cyber security training for business - this has links to many useful resources, such as free online staff training, a free online incident response exercise and personalised cyber action plan for SMEs/individuals, many from the UK National Cyber Security Centre

    18 Dec 24

    • Supply chain, vendors, defence, national security: the UK Ministry of Defence wrote to Defence Industry CEOs/leads, asking them to review their organisations' performance against the Cyber Assessment Framework (CAF, developed by the UK National Cyber Security Centre (NCSC) for NIS Regulations assessments) particularly the areas of Govern, Identify, Protect, Detect, Respond and Recover; adopt Active Cyber Defence (ACD) with the NCSC and its tools including Early Warning (see 3 Dec); implement the March 24 Cyber Security Standard for Suppliers; and deliver Secure by Design

    17 Dec 24

    • Cloud, SaaS, Microsoft 365: the US Cybersecurity and Infrastructure Security Agency (CISA) issued a web-friendly version of its directive Implementing Secure Practices for Cloud Services for US federal agencies, requiring deployment of its SCuBA tool - mentioned here because this open source tool, downloaded >30k times as at 13 Nov, automatically assesses Microsoft 365 (M365) configurations for security gaps (against CISA baselines): reportedly, misconfigurations were the initial access point for 30% of cloud environment attacks in the first half of 2024. "ScubaGear rapidly and thoroughly analyzes an organization’s M365 tenant configuration. It then delivers actionable security change insights and recommendations that allow the tenant administrator to close security gaps and attain a stronger defense within their M365 environment". So Microsoft 365 users could do worse than use this free tool!
    • Cybercrime, UN Convention: this Convention's privacy and security issues (law enforcement access to data) have been raised by many, including by the EDPB, referring to its Statement 5/2024 on the Recommendations of the High-Level Group on Access to Data for Effective Law Enforcement
    • Mobile comms: the US Cybersecurity and Infrastructure Security Agency (CISA) issued Mobile Communications Best Practice Guidance after it "identified cyber espionage activity by People’s Republic of China (PRC) government-affiliated threat actors targeting commercial telecommunications infrastructure, specifically addressing “highly targeted” individuals who are in senior government or senior political positions and likely to possess information of interest to these threat actors". While intended to assist highly-targeted individuals, its recommendations are obviously also relevant to everyone else who values their security and privacy (there were also iPhone and Android-specific recommendations, not reproduced here, see the link above):
      • Use only end-to-end encrypted communications (free messaging apps mentioned include Signal "or similar apps")
      • Enable Fast Identity Online (FIDO) phishing-resistant authentication like hardware-based security keys
      • Migrate away from Short Message Service (SMS)-based MFA
      • Use a password manager
      • Set a telco PIN (for login etc)
      • Regularly update the operating system and other software (i.e. patch)
      • Opt for the latest hardware version from your cell phone manufacturer
      • Do not use a personal virtual private network (VPN): "Personal VPNs simply shift residual risks from your internet service provider (ISP) to the VPN provider"

    16 Dec 24

    • Cybercrime, advanced persistent threats: a RUSI article points out that "...foreign government adversaries no longer have a monopoly on sophistication or persistence. Cybercriminals have just as much if not more of an impact on the Western world... Digital spying by foreign state adversaries is still important. However, in biasing themselves towards ‘APT versus cybercrime’, information security and cybersecurity practitioners create a false dichotomy that pushes resources, attention and support to areas that don’t always align with the greatest organisational or national risk and impacts"

    13 Dec 24

    • CSAM, encryption: the EU Council agreed its general approach on the proposed CSAM Directive (this has notes on some amendments), based on which it can commence negotiations on the text with the European Parliament, but the Parliament hasn't agreed its own position internally yet, so it will be some months or longer before this Directive is adopted
      • Statements on the general approach by Austria, Austria and Slovenia, and Belgium, Finland, Ireland, Latvia, Luxembourg, Slovenia and Sweden
      • The age old debate continues about undermining encryption to allow checking of encrypted material for any CSAM, e.g. US litigation against Apple for dropping its planned CSAM scanning after privacy and surveillance concerns.
      • An excellent post about the planned "Chat Control" scanning under this Directive points out: "Chat Control is one example of mass screening for a low-prevalence problem — a dangerous mathematical structure. It requires breaking end-to-end encryption, the technological bedrock of digital privacy. Such a move would make mass surveillance cheap and easy again... false positives will overwhelm true positives in programs of this structure — mass screenings for low-prevalence problems under conditions of rarity, persistent uncertainty, and secondary screening harms. Under these conditions, even highly accurate such programs backfire by making huge haystacks (wrongly flagged cases, “false positives”) while missing some needles (wrongly cleared cases, “false negatives”)... when finite investigative resources are tied up processing CSAM possession tips from mass scanning, they cannot be used for other investigations... This is consistent with the possibility that children are endangered by such mass screening programs exhausting the investigative resources necessary to process tips that have a higher likelihood of being true positives and may otherwise be more relevant to current as opposed to past abuse. Curtailing targeted investigations that might stop ongoing abuse or bigger-fish distributors in favor of processing mass scanning tips that are overwhelmingly false positives does not serve the interests of vulnerable children or society... [but] it does serve the interests of those who would like a return to cheap, easy mass digital communications surveillance..."
    • Data, software, products: reminder from the US Federal Trade Commission (FTC) that to protect security it's important to have good data management (including enforcing mandated data retention schedules and mandating data deletion, so there's less unnecessary data that could be hacked), secure software development, and secure product design for humans (including least privilege, phishing-resistant MFA) 
    • Financial services, incident reporting, vendors: the UK Prudential Regulation Authority (PRA) issued a consultation paper CP17/24 – Operational resilience: Operational incident and outsourcing and third-party reporting, on proposed "rules and expectations for firms to report operational incidents and their material third-party arrangements" (deadline 14 Mar 25), with reporting thresholds (quite subjective), and a phased approach to incident reporting: initial, intermediate, final, with certain minimum information
    • Product safety: the EU General Product Safety Regulation applies from this date. When assessing whether a product is a safe product, factors to consider include, "when required by the nature of the product, the appropriate cybersecurity features necessary to protect the product against external influences, including malicious third parties, where such an influence might have an impact on the safety of the product, including the possible loss of interconnection", particularly digitally connected products likely to have an impact on children (on top of sectoral laws on cybersecurity risks affecting consumers etc)
      • Detailed UK guidance, applicable to Northern Ireland summarises it as: "This Regulation requires that all consumer products placed on the NI and EU markets are safe and establishes specific obligations for businesses to ensure that safety. The Regulation applies to products placed on, or made available to, the market where there are no sector-specific provisions with the same objective"   
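The "huge haystacks" point in the Chat Control post quoted above is base-rate arithmetic (Bayes' rule): when the prevalence of what you're screening for is very low, even a very accurate screen flags mostly innocent material. A quick worked sketch (the 99% accuracy and 1-in-10,000 prevalence figures are illustrative assumptions of mine, not numbers from the post):

```python
prevalence = 1 / 10_000   # assumed share of scanned content that is truly illicit
sensitivity = 0.99        # assumed true-positive rate of the scanner
specificity = 0.99        # assumed true-negative rate of the scanner

# Overall probability an item gets flagged (true positives + false positives)
p_flag = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Positive predictive value: probability a flagged item is a true positive
ppv = sensitivity * prevalence / p_flag
print(f"{ppv:.1%} of flags are true positives")  # ~1% - the rest is haystack
```

So with these numbers roughly 99 in 100 flags are false positives, which is the post's point about investigative resources being consumed by the haystack rather than the needles.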

    12 Dec 24

    • Passkeys: these are touted as more secure than using passwords, and are increasingly being supported (see my book and the free PDF). Microsoft published its UX design insights to boost passkey adoption and security

    10 Dec 24

    • EU Cyber Resilience Act (CRA): this Regulation entered into force (published in OJ 20 Nov 24, news item), requiring minimum cybersecurity requirements for any "product with digital elements" before they can be made available on the EU market, with certain cybersecurity vulnerability handling obligations on manufacturers, including on vulnerability disclosure
      • A "product with digital elements" is any software or hardware product and its "remote data processing solutions", including software or hardware components being placed on the market separately (where "remote data processing" is remote processing designed by the manufacturer, whose absence would prevent one of the product's functions from being performed - such as essential cloud processing). So, CRA catches not just IoT / smart devices but also software (and is not limited to consumer IoT, unlike the UK's  Product Security and Telecommunications Infrastructure Act (PSTI))
      • CRA applies fully from 11 December 2027, but with some earlier applicable dates like 11 September 2026 for Art.14 on manufacturers' obligation to report any actively exploited vulnerability contained in such a product: with staggered deadlines of 24 hrs, 72 hrs etc.

    9 Dec 24

    • Open source, malicious code, tools: open source code is increasingly incorporated into software, but open source packages can be malicious, or legitimate code can be accessed by attackers and "poisoned" to serve malicious purposes e.g. adding a backdoor for hackers. The Stack reported that Datadog had open sourced a tool, termed a [software] "supply chain firewall", that scans Python packages being installed and blocks packages known to be malicious, based on the tool provider's own observations or certain open source feeds

    5 Dec 24

    • Consumer IoT: the UK published a survey it had commissioned before the  Product Security and Telecommunications Infrastructure Act (PSTI) came into force, to map and analyse the market for consumer connectable products, and collect and analyse evidence on the compliance of manufacturers with the PSTI legal regime, as well as evidence on awareness and impacts of the legislation. It outlines well the PSTI compliance challenges (which many may be familiar with!). And see the related infographic. See also 25 Nov
    • Software patching, tools: one of the most critical security measures to take is patching, ensuring software is kept updated with new versions that address security vulnerabilities. Google released a new open source security patch validation automation tool Vanir, that helps Android developers "quickly and efficiently scan their custom platform code for missing security patches and identify applicable available patches". "While initially designed for Android, Vanir can be easily adapted to other ecosystems with relatively small modifications, making it a versatile tool for enhancing software security across the board" 

    4 Dec 24

    • EU digital identity wallets: technical standards, adopted by the Commission on 28 Nov, for cross-border eID wallets under the European Identity Framework that was updated in 2024, were published in the OJ under 5 implementing regulations with rules on eID Wallets' integrity and core functionalities, on eID Wallets solutions' protocols and interfaces and on person identification data and electronic attestations of attributes of eID Wallets, plus reference standards, specifications and procedures for a certification framework for eID Wallets, and obligations for notifications to the Commission concerning the eID Wallet ecosystem
    • Measures, metrics: how can organisations measure to what extent their cyber security measures are effective? The US National Institute of Standards and Technology (NIST) published updated guidance on how an organization can develop an information security measurement program to assess this 

    3 Dec 24

    • Comms infrastructure: several US and other agencies published a joint guide Enhanced Visibility and Hardening Guidance for Communications Infrastructure, "that provides best practices to protect against a People’s Republic of China (PRC)-affiliated threat actor that has compromised networks of major global telecommunications providers. The recommended practices are for network engineers and defenders of communications infrastructure to strengthen visibility and harden network devices against this broad and significant cyber espionage campaign... Although tailored to communications infrastructure sector, this guidance may also apply to organizations with on-premises enterprise equipment"
    • EU cybersecurity, threats: EU security agency ENISA published its first NIS2 biennial report on the state of EU cybersecurity. This reported "substantial cyber threat level to the EU, highlighting discovered vulnerabilities exploited by threat actors targeting EU entities..." and made several policy recommendations on strengthening EU cyber skills/workforce and addressing supply chain security 
    • UK cybersecurity, threats: the UK National Cyber Security Centre (NCSC) published its Annual Review 2024. Its head stressed in an accompanying speech the "clearly widening gap between the exposure and threat we face, and the defences that are in place to protect us... We need all organisations, public and private, to see cyber security as both an essential foundation for their operations and a driver for growth. To view cyber security not just as a ‘necessary evil’ or compliance function, but as a business investment, a catalyst for innovation and an integral part of achieving their purpose... Hostile activity in UK cyberspace has increased in frequency, sophistication and intensity... Actors are increasingly using our technology dependence against us, seeking to cause maximum disruption and destruction... And yet, despite all this, we believe the severity of the risk facing the UK is being widely underestimated... There is no room for complacency about the severity of state-led threats or the volume of the threat posed by cyber criminals. The defence and resilience of critical infrastructure, supply chains, the public sector and our wider economy must improve..."
      • The NCSC's incident management (IM) team issued 542 bespoke notifications to organisations of a cyber incident impacting them, providing advice and mitigation guidance (cf. 258 in 2023). Almost half related to pre-ransomware activity, enabling organisations to detect and remove precursor malware before ransomware was deployed.
      • Top sectors reporting ransomware activity into the NCSC were academia, manufacturing, IT, legal, charities and construction. "We received 317 reports of ransomware activity, either directly from impacted organisations, or from our partners (an increase on 297 last year). These were triaged into 20 NCSC-managed incidents, of which 13 were nationally significant. These included high-profile incidents impacting the British Library and NHS trusts"
      • The NCSC was made aware of 347 reports of activity that involved the exfiltration or extortion of data
      • The IM team issued ~12,000 alerts about vulnerable services through its Early Warning service (an automated NCSC threat notification service, free to UK organisations - do sign up!). Exploitation of zero-days CVE-2023-20198 (Cisco IOS XE) and CVE-2024-3400 (Palo Alto Networks PAN OS) also resulted in 6 nationally significant incidents which the IM team helped manage 

    2 Dec 24

    • Cybersecurity measures: the US Cybersecurity & Infrastructure Security Agency (CISA) updated its Trusted Internet Connections (TIC) 3.0 Security Capabilities Catalog (SCC) version 3.2, based on the new National Institute of Standards and Technology (NIST) Cyber Security Framework (CSF) Version 2.0 mapping updates. TIC 3.0 SCC provides a list of deployable security controls, security capabilities, and best practices, intended to guide secure implementations and help US federal agencies satisfy program requirements within discrete networking environments, but is of more general use/interest 
    • Encryption: this was Global Encryption Day: the Global Encryption Coalition reported on the support of policymakers and others for encryption, including for the protection of children 
    • FS, DORA: the EU's DORA Regulation on digital operational resilience for the financial sector applies in the EU from 17 Jan 2025. Much secondary legislation on certain detailed requirements has been made under it (see list as at 4 Dec 24). On 2 Dec, an implementing regulation was published in the OJ on technical standards for standard templates for the register of information that in-scope financial entities must maintain in relation to their ICT services and ICT service providers, including providers' subcontractors in certain cases, such as details of their contracts 
      • Financial service entities' contracts with their ICT service providers should also be updated to comply with DORA's requirements, and some providers are directly regulated under DORA - not discussed here
    • Managed security services, certifications; EU: the Council approved a directly-applicable Regulation amending the EU Cybersecurity Act (CSA) to enable future adoption of European certification schemes for managed security services (MSS, like incident handling, penetration testing, security audits, consulting advice on technical support), increasingly important for cybersecurity incidents' prevention, detection, response, and recovery. Pending the CSA's broader evaluation by the Commission, this targeted amendment aims to enable establishment of such European certification schemes to help increase MSSs' quality, comparability and trustworthiness, and avoid fragmentation as some Member States have initiated national certification schemes for such services
      • Not law yet; awaiting OJ publication
      • The Council also approved a Cyber Solidarity Act Regulation (also awaiting OJ) to strengthen EU/Member State cooperation and resilience against cyber threats, e.g. creating a pan-European cybersecurity alert system infrastructure comprising national and cross-border cyber hubs responsible for detecting, sharing information on and acting against cyber threats including cross-border incidents. It also creates a cybersecurity emergency mechanism (including an EU cybersecurity reserve: private sector incident response services "ready to intervene" on significant/large-scale incidents if requested by a Member State or EU body as well as associated third countries) and an incident review mechanism 

    1 Dec 24

    • Cloud, access, MFA: previously, cloud service providers have tended to leave it to their customers to decide whether the customer wants to require MFA in order for its users to access its cloud service. A very positive trend is that providers are increasingly enforcing MFA, e.g. Snowflake will be blocking attempted sign-ins using single-factor authentication with passwords. It seems likely this move by Snowflake was influenced by >100 of its customers, who had not required MFA, being successfully attacked in 2024. While it would have behoved those customers to require MFA for access to their Snowflake services, these incidents did appear to lead to some negative comments about Snowflake 

    28 Nov 24

    • Boards, directors: UK National Cyber Security Centre (NCSC)'s Cyber Security Toolkit for Boards: updated briefing pack released with insights on the ransomware attack against the British Library
    • EU NIS2 Directive: this Directive, updating and expanding the NIS Directive, should have been implemented by Member States by 17 Oct 24, but it wasn't (Europa list of those that have notified the Commission of their NIS2 transposition).
      • The Commission decided to open infringement procedures for not fully implementing NIS2 by sending formal notice to 23 Member States (Bulgaria, Czechia, Denmark, Germany, Estonia, Ireland, Greece, Spain, France, Cyprus, Latvia, Luxembourg, Hungary, Malta, Netherlands, Austria, Poland, Portugal, Romania, Slovenia, Slovakia, Finland and Sweden). The Commission has given them two months to respond, complete their transposition and notify their measures to the Commission. Ireland hasn't yet transposed NIS2 officially, and indeed is among the 23 listed
    • UK Cyber Security & Resilience Bill consultation: consultation closed, on UK DSIT's call for evidence on proposals to inform the Bill

    26 Nov 24

    • Awareness raising, NIS: EU security agency ENISA updated its guide on how to promote cyber security awareness to C-level (part of its AR-in-a-box DIY awareness-raising toolbox, "a comprehensive solution for cybersecurity awareness activities designed to meet the needs of public bodies, operators of essential services, and both large and small private companies. It provides theoretical and practical knowledge on how to design and implement effective cybersecurity awareness") - still relevant to NIS2 of course

    25 Nov 24

    • IoT, smart devices, vulnerability handling, PSTI: the IoT Security Foundation published The State of Vulnerability Disclosure Policy (VDP) Usage in Global Consumer IoT in 2024, including some coverage of the impact of the UK Product Security and Telecommunications Infrastructure Act (PSTI). "...the UK legislation has driven a bigger improvement [among UK retailers] than European and US retailers. Whilst the sample set maybe low, it is a consistent gauge moving faster in the right direction" 
      • The survey indicated an increase in the proportion of manufacturers checked that had a vulnerability disclosure policy, from 23.99% in 2023 to 35.59% in 2024. Only ~21% of companies complied with PSTI's vulnerability disclosure requirements, though that's "increased significantly" from the previous year. 
      • The picture's variable regarding proportion of retailers stocking products whose manufacturers support vuln disclosure. Over 50% of IoT products stocked by several UK retailers were from manufacturers that had vulnerability disclosure policies. John Lewis was the best, 93.33% of its products checked were from compliant manufacturers. The detail on specific manufacturers, their website statements of compliance and how some meet PSTI (or not) is worth a look
      • "There has clearly been some effect from the UK’s Product Security and Telecommunications Infrastructure Act (Part 1) requirements... but implementation seems fragmented and inconsistent. While some leading UK retailers are showing that around 90% of the IoT manufacturers they stock have vulnerability disclosure policies, there are some notable exceptions to this ‘dip test’ of the market and there are obvious differences in online marketplaces. The other regions showed less promising and variable data about the product manufacturers they stocked"  (the report covers manufacturers and retailers in the EU, US and Asia too - not discussed here)
      • And there remains a "..gap in practice between the consumer and enterprise sectors. Whilst the consumer sector is firmly heading in the right direction, there is a stark contrast in market practice levels and continues to justify the need for consumer regulation" (I'd suggest enterprise IoT security could still improve).
      • On individual product categories, "notable laggards being Health and Fitness, Lighting and, somewhat paradoxically, Security. Those manufacturer report cards read “must do better”"
      • See also 5 Dec 

    22 Nov 24

    • Fraud, data protection: the UK ICO emphasised that data protection is not an excuse when tackling scams and fraud, "warning that reluctance from organisations to share personal information to tackle scams and fraud can lead to serious emotional and financial harm. Data protection law does not prevent organisations from sharing personal information, if they do so in a responsible, fair and proportionate way". It published "new practical advice to provide clarity on data protection considerations and support organisations to share data responsibly to tackle scams and fraud", aimed at any organisation seeking to share personal information to identify, investigate and prevent fraud, especially banks, telecommunications providers and digital platforms
      • The same also applies to organisations disclosing potential personal data, like IP addresses and domain names, as indicators of compromise (IOCs) in threat sharing initiatives/platforms regarding cyber threats/breaches, whether sectoral or otherwise, and it would have been helpful if the ICO had also made that point.

    21 Nov 24

    • Critical infrastructure, red teaming: the US Cybersecurity & Infrastructure Security Agency (CISA) published its insights from a red team assessment of a US critical infrastructure organisation including lessons learned (technical controls, staff training, leadership/board: "Leadership deprioritized the treatment of a vulnerability their own cybersecurity team identified, and in their risk-based decision-making, miscalculated the potential impact and likelihood of its exploitation") and technical details

    18 Nov 24

    • Passwords: it's interesting that, following notification of personal data breaches, the Romanian data protection supervisory authority ordered a company to take measures including (machine translation) "password complexity and history policy on all customer accounts with a pre-established expiration interval". That is a decades-old practice which is no longer considered good. Technical experts including the UK NCSC and US NIST recommend longer rather than more complex passwords; indeed, NIST's latest draft update recommends not enforcing any password complexity rules like one lowercase, one uppercase etc. Similarly, forced password changes every few months or every year are now deprecated (e.g. New Scientist article) as they reduce security by leading people to write down passwords, use bad passwords they can remember, etc.! So it seems that some GDPR authorities could still benefit from more technical assistance/education on cybersecurity...
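A minimal sketch of the NCSC/NIST-style approach discussed above: enforce a minimum length and screen against known common/breached passwords, rather than composition rules or forced expiry. The minimum length and the tiny blocklist here are illustrative assumptions, not values taken from either body's guidance:

```python
# Tiny illustrative stand-in for a real breached/common-password list.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def password_acceptable(pw: str, min_length: int = 12) -> bool:
    """Length-based check in the spirit of current NCSC/NIST advice:
    no character-class (complexity) rules, no expiry tracking."""
    if len(pw) < min_length:
        return False
    if pw.lower() in COMMON_PASSWORDS:
        return False
    return True

assert password_acceptable("correct horse battery staple")  # long passphrase
assert not password_acceptable("P@ssw0rd!")  # "complex" but short
```

The point: a long memorable passphrase passes, while a short password stuffed with symbols fails, which is the opposite of what legacy complexity policies reward.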

    15 Nov 24

    • UK NIS Regulations, notified incidents: the ICO is the regulator for digital service providers under NIS (cloud, online marketplaces, online search engines). Responding to a freedom of information request, the ICO stated that 37 incidents were reported to the ICO as NIS incidents, including 18 incidents that were not in fact NIS incidents and 2 incidents (reported in 2020 and 2021) that did not meet the mandatory threshold following its assessment. The figures suggest many incidents are reported as NIS incidents when they are not, but it's possible there were some actual NIS incidents that were not reported as the final total of 19 seems quite low...:
      • 2020 - 2 (really 1, see above, but in fact the ICO did not consider it a NIS incident, so 0)
      • 2021 - 3 (really 2, but the ICO did not consider 1 a NIS incident, so 1)
      • 2022 - 4 (really 2, as the ICO did not consider 2 of those a NIS incident)
      • 2023 - 19 (really 18, as one was incorrectly reported to the ICO as well as to the correct competent authority, but several were not considered NIS incidents, so 8) 
      • 2024 YTD - 9 (but 5 were not considered NIS incidents, so 4) 

    14 Nov 24

    • Product safety, IoT: in the first horizon scan report by the UK Office for Product Safety & Standards (OPSS), privacy, data loss and wider cyber security issues like distributed denial of service (DDoS) attacks were considered among the harms or benefits a technology may present in relation to non-physical aspects. The scan's taxonomy of technologies included cybersecurity and data platforms: the "combination of data, policies, processes, and technologies employed to secure information, protect organisations, and protect individuals' cyber assets", including specific biological research through omics and financial activities through blockchain, like new data technology and PETs. Health data was at greater risk of being compromised by cyber threats. Trends across technologies included security issues, with new vulnerabilities created by increased automation and connected technology with IoT, and new ways of compromise: most IoT devices' relatively limited computing power limits cybersecurity complexity and effectiveness, while their interconnectivity increases vulnerabilities (see the specific IoT rapid review guidance). Social commerce normalises online money transfer, enabling cybersecurity scams. "Blockchain can potentially offer some solutions to these challenges". Online marketplaces also need consumer protection against scams.
      • OPSS research on consumer attitudes/awareness indicated consumers are increasingly comfortable with manufacturers making changes remotely in the case of physical safety issues or cyber security vulnerabilities, but are less likely to consider cyber security before initial purchase, particularly those with a low education level. Note that the OPSS is responsible for enforcing the UK's Product Security and Telecommunications Infrastructure Act (PSTI) 

    12 Nov 24

    • Financial services, vendors: UK FS regulators Bank of England (Bank), Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) issued PS16/24 FCA 24/16 – Operational resilience: Critical third parties to the UK financial sector, with final rules for FS use of critical third parties (CTPs) including operational risk and resilience requirements, and incident reporting and other notifications and enforcement
    • Security engineering, learning: all PDF chapters of the late, great Ross Anderson's seminal, very readable Security Engineering book (3rd edition 2020) are now available for free download via this link

    7 Nov 24

    • NIS2, risk management: EU security agency ENISA issued a consultation on its detailed technical implementing guidance (PDF no longer available on ENISA's website, but see the Internet Archive) to support EU Member States and entities with implementation of the technical and methodological requirements of NIS2's required cybersecurity risk management measures. The final version is awaited. (On the implementing regulation for certain types of entities, see my October post.)

    4 Nov 24

    • QR code phishing: this is phishing by tricking people into scanning malicious QR codes to take them to malicious websites or install/open malicious apps/files, and it's an increasing attack vector. Microsoft explained how it updated its Microsoft Defender for Office 365 to address this 

    31 Oct 24

    • Incident response, preparations, resilience: helpful lessons on the Jul 24 Crowdstrike outage from the UK Financial Conduct Authority (FCA) with its observations on how FS firms responded to the incident including infrastructure resilience, third party management, incident response and communications, with recommendations on what firms should be doing on these fronts    
    • Cybersecurity measures: ending Cybersecurity Awareness Month, Microsoft published 7 cybersecurity trends and (same old, same old!) its tips for SMEs:
      • 1 in 3 SMBs have suffered a cyberattack (Microsoft tips: strong passwords, MFA, consider password manager, recognise/report phishing, keep software updated i.e. patching)
      • Attacks cost them >$250k on average and up to $7m (tip: risk assessment to understand gaps, determine measures to address)
      • 81% of SMBs think AI increases need for additional security controls (tip: data security & data governance when adopting AI)
      • 94% think cybersecurity is business-critical (tip: educate/train employees e.g. using Microsoft awareness resources)
      • <30% manage security in-house (tip: it's common to engage a Managed Service Provider (MSP) for security support)
      • 80% mean to increase cybersecurity spending, prioritising "data protection" [NB broader than in the GDPR sense] (tip: prioritise data protection, firewall, anti-phishing, ransomware & device/endpoint protection, access control, identity management e.g. via DLP, EDR, IAM)
      • 68% feel secure data access is a challenge for remote workers (tip: measures to protect data and Internet-connected devices, control app store downloads; no credential sharing by email/text, only by phone in real time)

    24 Oct 24

    • Web security, standards: a Princeton news item discusses a new security standard their researchers worked on. "The change centers on how web browsers and operating systems verify a website’s identity when establishing a secure connection. They rely on third party organizations known as certification authorities, who issue digital certificates of authenticity based on a website owner’s ability to demonstrate legitimate control over the website domain, usually by embedding a random value that the certification authority has provided. ...bad actors could easily sidestep those hurdles to obtain a fraudulent certificate for a website they do not legitimately control... it could target any website on the internet. Users had no way to spot the fraud since the certificates were real, even if their underlying facts had been forged. With a fraudulent certificate, criminals could attack users and route traffic to fake sites without anyone knowing... the fake site would look every bit as legitimate as the real one... By adopting the Princeton standard, certification authorities have agreed to verify each website from multiple vantage points rather than only one... [multi-perspective validation]", which will improve Internet/web security 
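The quorum logic behind multi-perspective validation can be sketched as follows. The vantage-point names, quorum size and "all responders must agree" rule are illustrative assumptions, not the standard's exact parameters:

```python
def validation_passes(observations: dict[str, str],
                      expected_token: str,
                      quorum: int = 3) -> bool:
    """observations maps vantage-point name -> the challenge token that
    vantage point retrieved from the domain being validated.

    Only issue the certificate if a quorum of independent network
    perspectives saw the expected token AND none disagreed: a single
    mismatching vantage point may indicate BGP/DNS hijacking near one
    viewpoint, which single-perspective validation cannot detect.
    """
    agreeing = sum(1 for token in observations.values()
                   if token == expected_token)
    return agreeing >= quorum and agreeing == len(observations)

obs = {"us-east": "tok123", "eu-west": "tok123", "ap-south": "tok123"}
assert validation_passes(obs, "tok123")      # all perspectives agree
obs["eu-west"] = "hijacked"
assert not validation_passes(obs, "tok123")  # one disagreeing view blocks issuance
```

This is why an attacker who can divert traffic near one vantage point (the single-perspective failure mode described above) can no longer obtain a fraudulent certificate.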

    23 Oct 24

    • Cybersecurity measures, certifications, supply chain: the UK's Cyber Essentials certification scheme, to encourage organisations to implement key essential cybersecurity measures (cyber hygiene), reached its 10 year anniversary.
      • It's great to hear it has been effective in improving cybersecurity: "Recent insurance data shows us that organisations with Cyber Essentials are 92% less likely to make a claim on their insurance than those without it". The NCSC noted, "This statistic underscores the scheme’s effectiveness in mitigating cyber risks". "Additionally, where organisations require their third parties to get Cyber Essentials, we know they experience fewer third party cyber incidents".
      • The full impact evaluation noted that Cyber Essentials:
        • is providing cyber security protection to organisations of all sizes, including larger organisations that use other schemes, standards and accreditations
        • helps to improve organisations’ awareness and understanding of the cyber security risk environment
        • has stimulated wider actions, good practice and behaviours among organisations that use it
        • is being actively used as part of supply chain assurance to inform the supplier selection process, instil confidence and demonstrate basic cyber hygiene to the market
      • The NCSC also added, "Cyber Essentials has played a crucial role in raising awareness about cyber security. An evaluation conducted as part of the 10-year review revealed that 85% of certified organisations reported a better understanding of cyber risks. This increased awareness has empowered businesses to take proactive measures in safeguarding their digital assets", and said "The data is clear, implementing the five controls significantly lowers the risk of experiencing a cyber incident. For organisations lacking the necessary in-house expertise, support is readily available through companies offering the NCSC-recognised Cyber Advisor Service"
      • Also, to improve supply chain security, procurement efficiency and consistent minimum standards, UK financial entities Barclays, Lloyds Banking Group, Nationwide, NatWest, Santander UK and TSB have stated that they will promote and incorporate Cyber Essentials in their supply chain risk management and they encourage other businesses to incorporate Cyber Essentials into their supplier requirements. (This would also "Spread greater cyber insurance coverage across supply chains through the provision of free cyber insurance, and incident response services, included with Cyber Essentials certification to qualifying organisations")
        • Comment: Contractually requiring suppliers/vendors/service providers to be certified is a helpful move in the right direction. Cyber Essentials measures are the bare minimum that organisations should take, are not difficult to implement, and would go a long way towards preventing or reducing the impact of cyber incidents, so all organisations should be certifying, or at least implementing those measures even if they don't get certified! Unlike with ISO standards, Cyber Essentials measures are freely available, whether implemented through self-assessment or (Plus) third-party audit. 
        • Note: Cyber Essentials funding is offered to small organisations in certain sectors like AI, quantum, semiconductors etc., with certain criteria 

    2 Oct 24

    • Ransomware: Counter Ransomware Initiative (CRI) guidance for organisations experiencing a ransomware attack and organisations supporting them
    • Scanning, testing: interesting study on how external cybersecurity scanning data can enhance underwriting accuracy for the (re)insurance industry. This compared companies’ security controls with actual insurance claims, identifying key predictive factors including the organisation's IP address count and patching cadence (the speed at which it updated software to address vulnerabilities), that help forecast claims. Single Point of Failure (SPoF) data also highlighted dependencies on third-party services like AWS (cloud) and VPNs
      • While aimed at (re)insurance, scanning/pen testing obviously is also helpful if not essential for insured organisations, and same issues obviously affect their susceptibility to successful attacks, so keep your number of IP addresses limited to reduce exposure, and (not a new recommendation) patch ASAP!  

    25 Sept but UK press release 17 Oct 24

    • Quantum computing, encryption, financial services - risks to security & FS: G7 Cyber Expert Group's statement on planning for quantum computing's opportunities and risks (including to public key cryptography) and steps that financial entities should take

    10 Sept 24

    • Cybercrime, data sharing: the UK ICO and National Crime Agency (NCA) signed a memo of understanding on how they'll collaborate to improve the UK's cyber resilience, including "The NCA will never pass information shared with it in confidence by an organisation to us without having first sought the consent of that organisation" and "We will support the NCA’s visibility of UK cyber attacks by sharing information about cyber incidents with the NCA on an anonymised, systemic and aggregated basis, and on an organisation specific basis where appropriate, to assist the NCA in protecting the public from serious and organised crime"