Mastodon Kuan0: 2024

Saturday, 19 October 2024

Things AI, Oct 2024

AI tool for meeting recordings, taking notes, creating draft documents: the ICO says that if it's not used for a new purpose, you can rely on the previous legal basis; for any new processing activity/purpose, identify a lawful basis! NB. update the privacy notice, and consider accuracy, ADM, profiling, and any data sharing with the tool provider (the ICO's "last updated" date still says April, but this Q&A is new since Sept).

EU AI Act contractual clauses drafted by SCL (I've not reviewed them myself). And the Commission seeks feedback on a draft implementing regulation for a scientific panel of AI experts to assist the AI Office.

EU algorithms regulation: don't forget the EU Platform Work Directive, just approved by the Council, with a 2-year transposition deadline. It aims to improve working conditions and the protection of personal data in platform work (i.e. gig-economy workers like drivers) by, among other things, promoting transparency, fairness, human oversight, safety and accountability in algorithmic management in "platform work". It will require measures on the algorithmic management of people performing platform work in the EU, including those with no employment contract/relationship. Chapter III on algorithmic management limits certain processing of personal data by means of automated monitoring systems or automated decision-making systems, such as personal data on emotional or psychological state. Similarly, where "digital labour platforms" use automated systems taking or supporting decisions that affect persons performing platform work, personal data processing by a digital labour platform by means of automated monitoring or decision-making systems is deemed high risk, requiring a DPIA under GDPR and more, as well as detailed transparency requirements on those systems and obligations regarding human oversight and human review, etc. There's certainly overlap with both the GDPR and the AI Act.

US EO14110: NIST 1-pg summary of progress to date & next steps.

Open source AI: a draft definition 1.0-RC1 is open for comment. FAQs; and must all training data be made available for openness?

Federated learning: scalability challenges in privacy-preserving federated learning (UK RTAU & US NIST collaboration). (For an explanation of federated learning, please see my book)
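To make the privacy angle concrete, here's a toy sketch of federated averaging (FedAvg), the core idea behind federated learning: clients train locally and share only model parameters, never raw data, and the server combines them weighted by each client's data volume. All names and the single-round setup here are my own simplification, not from the RTAU/NIST report.

```python
# Toy federated averaging (FedAvg) sketch: raw data never leaves a client;
# only locally-updated model parameters are sent to the server.

def local_update(weights: list[float], data: list[tuple[float, float]],
                 lr: float = 0.1) -> list[float]:
    """One gradient-descent step for y = w0 + w1*x on the client's own data."""
    w0, w1 = weights
    n = len(data)
    g0 = sum((w0 + w1 * x - y) for x, y in data) / n
    g1 = sum((w0 + w1 * x - y) * x for x, y in data) / n
    return [w0 - lr * g0, w1 - lr * g1]

def fed_avg(client_weights: list[list[float]], client_sizes: list[int]) -> list[float]:
    """Server-side aggregation: weighted mean of client parameters."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two clients, each holding a private dataset for y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
global_w = [0.0, 0.0]
updates = [local_update(global_w, d) for d in clients]  # runs on each client
global_w = fed_avg(updates, [len(d) for d in clients])  # runs on the server
print(global_w)
```

The scalability challenges the report discusses arise when the number of clients, the model size, or the added privacy layers (e.g. secure aggregation) grow far beyond this toy scale.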

UK AI Safety events: the Nov 2023 summit cost £27.7m; plus info on the Nov 2024 event incl. criteria for invites (names of invitees were withheld for data protection reasons, but names of their organisations were also withheld, not clear why): from FOI requests.

Financial services/finance/securities:

Training data collection, not just by web scraping!: certain robot vacuums were found to collect photos and audio to train AI, so there are big security and privacy risks with some robotic hoovers. Reportedly the privacy notice was suitably expansive (but who reads those?!), covering wholesale data collection for research including: device-generated 2D/3D maps of users' houses, voice recordings, photos and videos! Talk about hoovering up data for AI training...😉🙄

LLMs still can't do maths or reasoning (Apple researchers)

G7 Hiroshima AI Process (recall the Code of Conduct etc.) progresses:

  • G7 ministerial declaration 
  • Overview of the OECD pilot of the Hiroshima artificial intelligence process reporting framework (for the international code of conduct for organizations developing advanced AI systems, like foundation models/GPAI) - summary by the Italian presidency (pilot phase); G7 joint statement 
  • G7 toolkit for AI in the public sector - "a comprehensive guide designed to help policymakers and public sector leaders translate principles for safe, secure, and trustworthy Artificial Intelligence (AI) into actionable policies" - of interest/use to the private sector too. And see the Ada Lovelace Institute's Buying AI: is the public sector equipped to procure technology in the public interest?

Adtech: IAB Tech Lab's AI in advertising primer.

Recommender systems: seem to be particularly targeted, e.g. under the EU Digital Services Act (DSA) (and see ICO brief consultation re using children's data for recommender systems).

AI in healthcare: increasing focus e.g. by Google, Microsoft. See below on the new UK RIO.

LinkedIn & AI: LinkedIn may have agreed not to train AI using UK users' data, but it plans in its new user agreement to put all responsibility for AI-generated content on users - even though, when a user wants to start a new post, it encourages them to "try writing with AI"!


Fairness: evaluating first-person fairness in chatbots (PDF)

AI hype, costs vs. productivity (is AI making work worse?) and environmental impact (is nuclear the answer?) vs. examples of AI uses: detecting that UK family court judges used victim-blaming language in domestic abuse cases; stymying mobile phone thieves; cancer detection (UKRI, gov news); pollen & allergies; UK Royal Navy predictive maintenance; helping sustainable cities; fertilisation treatment

UK AI research programs: include wearable tech to help drug addicts; building resilience against AI risks like deepfakes, misinformation, and cyber-attacks.

UK Regulatory Innovation Office: the RIO promised in the Labour manifesto has been launched within DSIT, "to reduce the burden of red tape and speed up access to new technologies... like AI training software for surgeons to deliver more accurate surgical treatments for patients and drones which can improve business efficiency", with the 4 initial areas including AI and digital in healthcare, and connected and autonomous technology. The RIO "will support regulators to update regulation, speeding up approvals, and ensuring different regulatory bodies work together smoothly. It will work to continuously inform the government of regulatory barriers to innovation, set priorities for regulators which align with the government’s broader ambitions and support regulators to develop the capability they need to meet them and grow the economy... The new office will also bring regulators together, working to remove obstacles and outdated regulations to the benefit of businesses and the public, unlocking the power of innovation". But the RIO's first Chair, working 4-5 days a month, has yet to be appointed (apply!). FT article (paywall).

(See also my blog on data protection & cyber security)

Data protection & cyber security, Oct 2024

Cookies: is consent or pay OK in the UK? The ICO says it's a business decision for the organisation, and it holds no information on this! (FOI).

EU NIS2 Directive: applies from 18 Oct 2024 (news): see the Commission implementing regulation on requirements for digital services incl. cloud, CDNs, online marketplaces, social networks; too few Member States have transposed it into national law (published Commission list, so far just Belgium, Croatia, Italy, Latvia, Lithuania). Not being listed doesn't mean "not implemented": a country might not have notified the Commission yet, or the Commission might not have added it to that list yet. But it's clear some Member States have missed the deadline, like Ireland (draft heads of Bill). Microsoft has been quick off the mark to tout how Azure can help NIS2 compliance.

EU Cyber Resilience Act (CRA): adopted by the Council in Oct 24, on security requirements for "products with digital elements" (software or hardware products and their remote data processing solutions, including software or hardware components placed on the market separately). NB "remote data processing" as defined could catch some cloud services. Applicable 36 months after the CRA becomes effective (it should be published in the OJ in a few weeks), with some transitional provisions. Views that the CRA is an "accidental European alien torts statute"! Separately, the US CISA/FBI have published for consultation draft guidance on product security bad practices.

Revised EU Product Liability Directive: adopted by the Council in Oct 24; see some previous blog commentary on software/SaaS being caught, and on defects including cybersecurity issues. Liability extends to repairers, compensation claims become easier for claimants, and importers/EU representatives can be liable for the products of non-EU manufacturers. 2-year transposition period after it becomes effective (should be published in the OJ soon).

EU CSAM Regulation: recently revived by the Council's Hungarian presidency which suggested the amended compromise text. Remember, this would catch online service providers, such as providers of hosting services and interpersonal communications services. Currently this would apply 24 months from its effective date. (The previous temporary derogation from the ePrivacy Directive to allow scanning for CSAM was extended to 3 Apr 2026, in Apr 24.)

UK Product & Metrology Bill: the Delegated Powers and Regulatory Reform Committee has reservations, see my previous comments on LinkedIn including that things are mostly left to delegated legislation.

Backdoors?: note that any encryption or other backdoors into apps/products/networks, or special keys "only" for government access, will threaten everyone's security (as noted regarding Global Encryption Day, 21 Oct 2024!). Example: it seems Chinese hackers got into US broadband providers' networks and acquired information "from systems the federal government uses for court-authorized wiretapping".

Passkeys: more secure than passwords (see my book's free PDF!), so it's great that this "passwordless" option is increasingly being adopted, and increasingly interoperable cross-platform: see passkeys on Windows, and Google's passkey syncing.

Ransomware, sanctions: individuals with links to the Russian state and other prolific ransomware groups, including LockBit, have been found and sanctioned. NCA news; history of Evil Corp (not on technical matters).

Software bill of materials (SBOM): more from the US NIST, e.g. on framing software component transparency (what's an SBOM? CISA FAQ, resources, SBOM in SaaS/cloud, SBOM for an assembled group of products; SBOM is explained in my book). I do feel contracts should include SBOM provisions.
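For readers new to SBOMs, here's a minimal sketch of reading a CycloneDX-style SBOM (one of the common SBOM formats, in JSON) to inventory the components shipped in a piece of software, which is the kind of transparency the NIST/CISA documents discuss. The field names follow the CycloneDX JSON format, but the SBOM content itself is invented for illustration.

```python
# Parse a (fabricated) CycloneDX-style SBOM and list its components.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "openssl", "version": "3.0.13",
     "purl": "pkg:generic/openssl@3.0.13"},
    {"type": "library", "name": "zlib", "version": "1.3.1",
     "purl": "pkg:generic/zlib@1.3.1"}
  ]
}
"""

sbom = json.loads(sbom_json)
assert sbom["bomFormat"] == "CycloneDX"  # sanity-check the format marker

# The component inventory is what you'd diff against vulnerability feeds.
inventory = [(c["name"], c["version"]) for c in sbom.get("components", [])]
for name, version in inventory:
    print(f"{name} {version}")
```

An inventory like this is also what an SBOM contract clause would typically require a supplier to deliver and keep current.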

IoT:

UK NCSC guidance:

Microsoft, Cybersecurity and Infrastructure Security Agency (CISA) and the National Cybersecurity Alliance (NCA) Be Cybersmart Kit for Cybersecurity Awareness Month (which is October) also focuses on the basics: use strong passwords and consider a password manager; turn on MFA; learn to recognize and report phishing; keep software updated.

Quantum tech: ICO views; UK government response on regulating quantum applications; cybersecurity risks from quantum computing and steps for financial authorities and institutions (see the G7 Cyber Expert Group statement on planning for the opportunities and risks of quantum computing)

US & transfers: the Commission's report on the first periodic review of the functioning of the adequacy decision on the EU-US Data Privacy Framework (DPF). Separately, industry body CCIA's comments on digital trade barriers affecting US companies include, for the EU (detailed PDF), data and infrastructure localization mandates and restrictions on cloud services (citing e.g. the EUCS, NIS2, Data Act), and restrictions on cross-border data flows (under not just the GDPR but also the Data Act and Data Governance Act)

Other ICO:

  • Levales solicitors reprimand: "A threat actor accessed Levales’ cloud-based server using legitimate credentials and subsequently published data on the dark web". Levales "did not have Multi-Factor Authentication (MFA) in place for the affected domain account. Levales relied on computer prompts for the management and strength of password and did not have a password policy in place at the time of the incident. The threat actor was able to gain access to the administrator level account via compromised account credentials. Levales Solicitors LLP have not been able to confirm how these were obtained." And see above, NCSC and cybersecurity awareness month guidance reiterating the importance of using MFA, especially for cloud!
  • New data protection audit framework launched, including toolkits (on areas like security, personal data breach detection/prevention, and AI), framework trackers (similar areas), resources, case studies
  • From 11 Oct 24, businesses must try online resources "Instead of first calling our phone line..." - will the expected increase in the data protection fee change this?
  • Children's data: ICO's further short consultation on its Children's Code (on use of children’s personal information in recommender systems, use of PD of children <13) has closed, sorry I didn't have time to blog it earlier this month
  • Cyber investigations/incidents: latest datasets, for Q1 24/25 published
  • ICO DPIA for its use of Canva - interestingly, here as in some other FOI responses, the ICO redacted internal tech info like, in this case, detailed links: "The disclosure of extended links reveals the ‘make up’ of our SharePoint system. Due to the nature of information this reveals, this information increases our vulnerability to cyber attacks."
    • Is security by obscurity really the best approach here? Previously, when asked for a "list of all the variable names in the database, together with any descriptive/user guides of the variable names in the database" for the ICO's database of data security incident trends, the ICO refused, saying "if disclosed, such information could be used by malicious actors seeking criminal access to our information and systems". It even took the view that "The size of our internal security team is exempt from disclosure to you under section 31(1)(a) of the FOIA, as it could make the ICO more vulnerable to crime".
  • Facial recognition:
  • One court order for winding-up (liquidation) on ICO petition in Q2 24/25, wonder who?

Cyber Security Breaches Survey (UK, annual): how could this be developed and improved? DSIT call for views (survey questions), deadline 23:59, 4 Nov 24. 

Cloud: NIST's A Data Protection Approach for Cloud-Native Applications (note: here "data protection" means protecting all types of data, not just personal data), and see NCSC on MFA and cloud

UN Cybercrime Convention: concerns continue to be raised (see other critiques summarised in my book and free PDF).

Adtech: the IAB has published its Repository of European IAB’s Initiatives for Responsible Digital Advertising, with helpful links to its key docs on data protection, the DSA etc. It also published, for consultation, a proposed privacy-centric Attribution Data Matching Protocol (ADMaP), a data clean room interoperability protocol for attribution measurement (tech specs) "that enables advertisers and publishers to measure attributions using Privacy Enhancing Technologies (PETs) in a Data Clean Room (DCR) and protecting their user’s Personal Identifiable Information".

GDPR non-material damage: a CJEU case reiterating that a mere GDPR infringement isn't damage, but an apology can be sufficient compensation if the previous position can't be restored, as long as it amounts to full compensation; the controller's attitude/motivation is irrelevant when awarding compensation smaller than the damage suffered. (I'd add, an apology is not full compensation without a binding promise not to do something similar again in future!)

GDPR Procedural Regulation: EDPB statement; the Council's Data Protection Working Party will be discussing the draft Regulation on 24 Oct 24.

Digital identity:

Other EDPB:

  • Adopted a raft of docs including
    • Opinion 22/2024 on certain obligations following from the reliance on processor(s) and sub-processor(s), produced on the Danish SA's request (industry association BSA has raised concerns that these requirements are at odds with market practice, supply chain relationships, etc.)
    • For consultation, Guidelines 1/2024 on processing of personal data based on Article 6(1)(f) GDPR, deadline 20 Nov 24
      • Note: I've not read the guidelines properly yet, but there's at least one oddity. The cases the EDPB relied on to argue that personalised advertising is "direct marketing" don't actually say that. "However, CJEU case law suggests that personalised advertising could be considered a form of direct marketing" - well no, the para referenced stated that processing for direct marketing may be for legitimate interests, not that personalised ads are direct marketing! Similarly, arguments about "communications" being for direct marketing skate over the fact that the case cited is clearly about "electronic mail" as defined in the ePrivacy Directive. I think we'd all agree that ads in emails are direct marketing, but the EDPB seems to be arguing that, under that case, all commercial communications like personalised ads are direct marketing. That can't follow from a case clearly confined to "communications covered by Article 13(1)" of the ePrivacy Directive, such as email.
    • Work programme 24-25
    • Granting Kosovan Information and Privacy Agency observer status for the EDPB's activities (contrast the polite No post-Brexit to the UK's then Information Commissioner, in a letter whose reference, coincidentally or not, was "OUT2020-0110"!)
    • Next coordinated enforcement action in 2025 will be on erasure (right to be forgotten, RTBF)
  • Final Guidelines 2/2023 on Technical Scope of Art. 5(3) of ePrivacy Directive i.e. "cookie" consent but much more; local processing, like on-device processing for AI/machine learning, is still caught according to the EDPB, if anything is sent to the "entity producing the client-side code". Small AI models that can "fit" on user devices are emerging, and may represent the only way forward for users who want AI applications on their phones, at this rate!
  • Response to the European Commission concerning the EDPB work on the interplay between EU data protection and competition law (DMA etc.: still working on it!)

For amusement value only: ICO FOI response, non!

(See also blog on AI and, just because, UK Attorney-General's speech on the rule of law in an age of populism, Commission webinars on development of model or standard contractual terms for data sharing and switching between data processing services i.e. cloud services under the EU Data Act, and EU Digital Services Act DSA transparency database researchers' workshop)

Sunday, 6 October 2024

Things data protection / privacy (some AI), Sept/Oct 2024

GDPR Procedural Regulation: the Council seems to be progressing this, in October 2024.

CJEU cases: there have been several lately that others have covered, such as on commercial interests possibly being legitimate interests, so I won't for now. I just want to highlight a case from a few months back, which is relevant to employee policies and training/awareness-raising, and possible strict liability to pay compensation to data subjects, at least for infringements arising from employee action/inaction.

Adtech: IAB Tech Lab has launched, for public consultation, its PAIR protocol 1.0 for a "privacy-centric approach for advertisers and publishers to match and activate their first-party audiences for advertising use cases without relying on third-party cookies". Initially donated by Google, PAIR has been developed into "an open standard that enables interoperability between data clean rooms and allows all DSPs to adopt the protocol for enhanced privacy-safe audience targeting".

Equality, AI: The public sector equality duty and data protection, Sept 2024, UK EHRC guidance (with ICO input), including helpful examples of proxy data for protected characteristics under the UK Equality Act 2010, and a short section on proxy analysis of AI models, with a case study on the Dutch benefit fraud scandal that led to unlawful discrimination (from using biased predictive algorithms).

Open-source AI: from UK ICO's previously-asked questions, this Q&A was added recently even though currently the "Last updated" date indicates 11 April 2024.
Q: We want to develop a speech transcription service for use in our organisation, using an open-source artificial intelligence (AI) model. Can we do this even though we don’t have detailed information about how the model was trained? (See the answer! It seems call transcription is a popular use of AI; see other Q&As on that topic on the same webpage, e.g. this and this. Also, compare a Danish SA decision from June 2024 on the use of AI to analyse recordings of phone calls.)

Oral disclosures?: talking of contrasting approaches, compare a Polish SA decision holding that oral disclosure of personal data during a press conference was not in breach of GDPR, whereas an Icelandic SA decision ruled that oral disclosures by police under the Law Enforcement Directive infringed that Directive. Yes, these are different laws, but they ought to be interpreted consistently. And I don't get how oral statements amount to "processing" wholly or partly by automated means under EU data protection laws, just as I don't get how there have been so many fines in the EU/UK regarding paper records without first holding that they form part of a "filing system" as defined.

ICO big PSNI fine: well-known by now (news release, MPN), but it underlines the point that many surnames can be unique, and can indicate religion and/or ethnicity (see Equality above on proxy data).

ICO: selected recent ICO disclosures, which the ICO decided to publish following FOI requests to it:

  • How the ICO assesses incidents / possible personal data breaches: ICO internal guidance (request, PDB assessment methodology as of June 2023); seems to be based on ENISA's risk assessment for PDBs, which is unsurprising as that has been endorsed by both EDPB and ICO
  • Territorial scope under UK GDPR, DPA 2018: ICO internal guidance (request, copy)
  • What's a restricted transfer outside the UK: ICO internal guidance (request, copy); taking the outdated and misguided view that "transfer" is based on transfer of personal data's physical location, which is at odds with the ICO's own public guidance on transfers!
  • How does ICO decide whether to publicise its intention to fine (request, emails on decision, more info)? This was on one concrete situation, but it's helpful to know the factors, again unsurprising, which I summarise below:
    • The ICO has a default posture of transparency, although it considers the circumstances of each case.
    • Publication is consistent with, and fair to, other similar cases where the ICO has publicised the information at this stage.
    • For deterrence regarding perceived central provider issues: "We are seeing a pattern of central providers having security issues with consequences for patients, publishing this will act as a learning/ deterrent for other processors with large central contracts, including the provisional fine will help clarify the seriousness of these issues".
    • "The case has been extremely well reported and is well known, so this reduces the potential additional impact on the organisation and there is limited dispute about the facts of the attack."
    • "Publishing the NOI [notice of intention to fine] and the provisional fine will help improve information rights practice and compliance among those we regulate."
    • While it is possible that the fine value will change, as it is "provisional and subject to reps", this was balanced: "the possible criticism of the ICO for changing the fine amount as the process concludes vs. the benefit of being transparent about the process... demonstrating that, if it does change, that is proof that the ICO does consider reps carefully and takes action based upon reps. This can serve to increase confidence in and awareness of our processes. I am comfortable that, subject to including suitable language to make clear it is provisional, that this risk is managed and the benefit is greater."
    • "in this case, I have decided that publicity at this point allows for improved public protection from threat and hence is overridingly in the public interest. It is also already in the public domain."
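On the breach-assessment methodology mentioned above: ENISA's 2013 severity methodology scores a breach as SE = DPC × EI + CB (data processing context, scaled by ease of identification, plus circumstances of breach). A toy Python sketch follows; the band thresholds are as I recall them from the ENISA paper, so treat the specific numbers as illustrative rather than authoritative.

```python
# Toy sketch of the ENISA personal-data-breach severity formula:
#   SE = DPC x EI + CB
# DPC = data processing context score, EI = ease of identification,
# CB = circumstances of breach. Band cut-offs below are illustrative.

def breach_severity(dpc: float, ei: float, cb: float) -> tuple[float, str]:
    se = dpc * ei + cb
    if se < 2:
        band = "low"
    elif se < 3:
        band = "medium"
    elif se < 4:
        band = "high"
    else:
        band = "very high"
    return se, band

# e.g. sensitive data (DPC=3), directly identifying (EI=1),
# malicious intent (CB=0.5):
print(breach_severity(3, 1, 0.5))
```

The point of a formula like this is consistency: two case officers scoring the same incident should land in the same band, which is presumably why both the EDPB and the ICO endorsed the approach.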

DRCF: UK regulators the Digital Regulation Cooperation Forum are seeking input on their 2025/26 workplan by 8 Nov 2024. Unsurprisingly, the work includes AI, but also bilateral work on data protection and online safety, competition and data protection, and illegal online financial promotions, plus risks and opportunities of emerging technologies like digital identity, digital assets and synthetic media.

Data protection fee: The consultation on increasing the UK data protection fee has closed. The ICO's own response supported the increase, but didn't advocate for any change in the bases for charging the fee, although the government was open to views on that, so it seems there will just be an increase in fee levels but no substantive changes to the bases.

Dark patterns: while not limited to data protection, see the OECD on dark patterns in online shopping: countdown timers, hidden information, nagging, subscription traps, forced registration and privacy intrusions, cancellation hurdles. Not dissimilar to the issues previously raised by UK regulators ICO and CMA on online choice architecture, control over personal data and harmful designs in digital markets.

Data transfers under the UN Digital Compact ("a comprehensive framework for global governance of digital technology and artificial intelligence"): the text is a bit vague and general on cross-border data flows, and 2030 is not exactly near-term!:

46. Cross-border data flows are a critical driver of the digital economy. We recognize the potential social, economic and development benefits of secure and trusted cross-border data flows, in particular for micro-, small and medium-sized enterprises. We will identify innovative, interoperable and inclusive mechanisms to enable data to flow with trust within and between countries to mutual benefit, while respecting relevant data protection and privacy safeguards and applicable legal frameworks (SDG 17).

47. We commit, by 2030, to advance consultations among all relevant stakeholders to better understand commonalities, complementarities, convergence and divergence between regulatory approaches on how to facilitate cross-border data flows with trust so as to develop publicly available knowledge and best practices (SDG 17)...

...We encourage the working group to report on its progress to the General Assembly, by no later than the eighty-first session, including on follow-up recommendations towards equitable and interoperable data governance arrangements, which may include fundamental principles of data governance at all levels as relevant for development; proposals to support interoperability between national, regional and international data systems; considerations of sharing the benefits of data; and options to facilitate safe, secure and trusted data flows, including cross-border data flows as relevant for development (all SDGs).

But on data protection more broadly, Objective 4. Advance responsible, equitable and interoperable data governance approaches, data privacy and security:

"We recognize that responsible and interoperable data governance is essential to advance development objectives, protect human rights, foster innovation and promote economic growth. The increasing collection, sharing and processing of data, including in artificial intelligence systems, may amplify risks in the absence of effective personal data protection and privacy norms...

...We commit, by 2030, to: (a) Draw on existing international and regional guidelines on the protection of privacy in the development of data governance frameworks (all SDGs); (b) Strengthen support to all countries to develop effective and interoperable national data governance frameworks (all SDGs); (c) Empower individuals and groups with the ability to consider, give and withdraw their consent to the use of their data and the ability to choose how those data are used, including through legally mandated protections for data privacy and intellectual property (SDGs 10 and 16); (d) Ensure that data collection, access, sharing, transfer, storage and processing practices are safe, secure and proportionate for necessary, explicit and legitimate purposes, in compliance with international law (all SDGs); (e) Develop skilled workforces capable of collecting, processing, analysing, storing and transferring data safely in ways that protect privacy (SDGs 8 and 9).

Survey on attitudes and awareness of emerging technologies, data protection, and digital products: a recent government survey of the UK public covered the adoption and awareness of blockchain and immersive virtual worlds, attitudes towards pricing on digital platforms, and behaviours regarding control of personal data. But I can't yet find a summary of its outcomes, just the raw data.

Hungary: the Commission's decision to refer Hungary to the CJEU argues that Hungary's national law on the Defence of Sovereignty is in breach of EU law, including the e-Commerce Directive, the Services Directive, and EU data protection legislation.

Canada: if an attacker accesses and encrypts data for ransom purposes, without exfiltration, that is still considered a breach that must be notified to affected individuals under Ontario’s Personal Health Information Protection Act (PHIPA) and the Child, Youth and Family Services Act (CYFSA).

Facial recognition & privacy / personal data: interesting and scary, students managed to adapt smart glasses to look up info on strangers in real-time, including parents' names!

(Also please see my blogs last week on security and AI: both have also been updated with more Sept links.)

GDPR compensation: strict liability? employee training / awareness

Case C‑741/21, GP v juris GmbH is not a recent judgment, but it still bugs me. Yes, it clarifies that mere infringement of GDPR provisions giving data subjects rights doesn't in itself necessarily constitute non-material damage, and that factors for determining fines, including when the same processing infringes multiple provisions, don't apply when determining damages for Art.82 compensation purposes.

However, what concerns me is this: the court also said, "it is not sufficient for the controller, in order to be exempted from liability under paragraph 3 of that article [Art.82], to claim that the damage in question was caused by the failure of a person acting under his or her authority, within the meaning of Article 29 of that regulation." And:

"...it cannot be sufficient for him or her to demonstrate that he or she had given instructions to persons acting under its authority, within the meaning of Article 29 of that regulation, and that one of those persons failed in his or her obligation to follow those instructions, with the result that that person contributed to the occurrence of the damage in question.

53      If it were accepted that the controller may be exempted from liability merely by relying on the failure of a person acting under his or her authority, that would undermine the effectiveness of the right to compensation enshrined in Article 82(1) of the GDPR, as the referring court noted, in essence, and would not be consistent with the objective of that regulation, which is to ensure a high level of protection for individuals with regard to the processing of their personal data."

Where should the line be drawn, then? It seems that, at least in the UK, a controller is not responsible for the acts of a rogue employee, who clearly becomes a controller in their own right. But if, despite an employer giving clear instructions to its employees, providing them with training and implementing awareness-raising measures, a careless, mistaken or ignorant employee does something they shouldn't have (or fails to do something they should have), and that results in the employer infringing GDPR, the employer is still liable to compensate affected data subjects for the damage, including non-material damage, that they suffer arising from the infringement.

It had generally been thought that proving the organisation conducted training and awareness-raising measures would help it, at least perhaps in relation to potential fines for security breaches or the amount of fines, and some national regulators have taken post-breach training/awareness-raising measures into account there. Indeed, regulators generally consider that employee training/awareness measures are essential to comply with Art.32. However, it looks like such measures will not help employers to reduce or avoid compensation claims, at least under the EU GDPR.

Hopefully, given that regulators expect employee training/awareness-raising, this case won't result in organisations deciding to stop providing clear instructions/policies, training and awareness-raising measures for their employees, whether on security or other GDPR requirements. But, it doesn't exactly incentivise such measures... though it will certainly incentivise data subjects to claim compensation, including perhaps collective action lawsuits directly or through representatives, in cases where infringements were caused by the controller's employee(s) not following instructions or their training. Proving that a controller "is not in any way responsible for the event giving rise to the damage" under Art.82(3) is a tough ask, but Art.82(3) says what it says. Effectively, this seems to create strict liability for compensation, unless the controller can disprove causation. Talk about a rock and a hard place...

Tuesday, 1 October 2024

Things cyber security, summer / Sept 2024

Software acquisition: procurement teams acquiring third-party software may find useful NIST's list of questions (PDF) to ask and security considerations relevant before, during and after procurement; e.g. some of those questions could be included in contractual warranties and/or due diligence questionnaires. See also CISA's related Software Acquisition Guide for Government Enterprise Consumers: Software Assurance in the Cyber-Supply Chain Risk Management (C-SCRM) Lifecycle (PDF, spreadsheet), again useful for private sector organisations too.

Personal data breaches/PDBs: an SA is not required to fine/enforce for a PDB if that's "not appropriate, necessary or proportionate to remedy the shortcoming found and to ensure that that regulation is fully enforced" (Case C‑768/21, TR v Land Hessen).

Revised EU Product Liability Directive: the new EU Parliament has approved the text (Eur-Lex), so it just remains for the Council to adopt it (although Estonia is against the procedural rules); when published in the OJ thereafter, it will become law. Significance? For the purposes of no-fault liability for defective products, "product" will explicitly include software including that supplied via SaaS. Note the emphasis on safety and cyber vulnerabilities:

Art.7(2): "In assessing the defectiveness of a product, all circumstances shall be taken into account, including... (f) relevant product safety requirements, including safety-relevant cybersecurity requirements..."

Also see the Recitals:"A product can also be found to be defective on account of its cybersecurity vulnerability, for example where the product does not fulfil safety-relevant cybersecurity requirements... relevant product safety requirements, including safety-relevant cybersecurity requirements, and interventions by competent authorities, such as issuing product recalls, or by economic operators themselves, should be taken into account in the assessment of defectiveness. Such interventions should, however, not in themselves create a presumption of defectiveness...The possibility for economic operators to avoid liability by proving that the defectiveness came into being after they placed the product on the market or put it into service should be restricted when a product’s defectiveness consists in the lack of software updates or upgrades necessary to address cybersecurity vulnerabilities and maintain the safety of the product... manufacturers should also not be exempted from liability for damage caused by their defective products when the defectiveness results from their failure to supply the software security updates or upgrades that are necessary to address those products’ vulnerabilities in response to evolving cybersecurity risks [unless not in their control e.g. owner fails to install it; yet, no obligation under this law to provide updates/upgrades but see CRA below]... a third party exploiting a cybersecurity vulnerability of a product. In the interests of consumer protection, where a product is defective, for example due to a vulnerability that makes the product less safe than the public at large is entitled to expect, the liability of the economic operator should not be reduced or disallowed as a result of such acts or omissions by a third party. 
However, it should be possible to reduce or disallow the economic operator’s liability where injured persons themselves have negligently contributed to the cause of the damage, for example where the injured person negligently failed to install updates or upgrades provided by the economic operator that would have mitigated or avoided the damage."

EU Cyber Resilience Act (CRA) on "horizontal cybersecurity requirements for products with digital elements": the new EU Parliament has approved the text (Eur-Lex), so it just remains for the Council to adopt it; when published in the OJ thereafter, it will become law. Note, this aims to "set the boundary conditions for the development of secure products with digital elements by ensuring that hardware and software products are placed on the market with fewer vulnerabilities and that manufacturers take security seriously throughout a product’s lifecycle".

EU DORA Regulation, financial entities: there are corrections in the versions for FR, RO, SL [sic, SI?]

UK Cyber Security and Resilience Bill: while the UK recently designated data centres as Critical National Infrastructure (CNI), the CPNI list doesn't seem to have been updated accordingly yet. Note, this is not the same as extending the UK NIS Regulations to cover data centres (as the EU NIS2 Directive will do, though it's inapplicable in the UK post-Brexit). However, DSIT has indicated in its Sept newsletter (updated: now on gov.uk) that the Bill will strengthen the UK’s cyber resilience and ensure that critical infrastructure and essential services are more secure, by "strengthening the UK’s only cross-sector cyber legislation – the Network and Information Systems (NIS) Regulations 2018. Measures will include expanding the remit of the regulation to protect more digital services and supply chains". And just out: a DSIT webpage on this Bill. Currently it says little more about the Bill than what was in the King's Speech background PDF, but it does indicate that this Bill will be introduced to Parliament in 2025. (On ransomware under the Bill, please see below.)

Ransomware: in late 2023, Interpol and 50 countries including the UK signed a Counter Ransomware Initiative (CRI) joint statement on ransomware payments (US press release). The European Commission has now been authorised to negotiate, on behalf of the EU, the International Counter Ransomware Initiative 2024 Joint Statement (background on CRI). UPDATED: now see the full CRI guidance for organisations during ransomware incidents (news release).

(In May 2024, the UK NCSC with insurance industry bodies had issued Guidance for organisations considering payment in ransomware incidents, and the King's Speech detailed PDF in July 2024 stated that the forthcoming Cyber Security and Resilience Bill will be, among other things, "mandating increased incident reporting to give government better data on cyber attacks, including where a company has been held to ransom".)

UK communications providers & security: Ofcom updated its Network and Service Resilience Guidance for Communications Providers for telcos in early Sept 2024, following consultation. Ofcom said, "Specifically, we are making clear that we expect them to: ensure networks are designed to avoid or reduce single points of failure; make sure key infrastructure points have automatic failover functionality built in, so traffic is immediately diverted to another device or site when equipment fails; and set out the processes, tools and training that should be considered to support the requirements on resilience".

Proposed EU CSAM Regulation: the Global Encryption Coalition is concerned about the Hungarian Presidency's 9 Sept 2024 compromise text, which would still require scanning of encrypted messaging services, undermining encryption and accordingly security and privacy. The Presidency is pushing for a partial general approach at the Council by as soon as 10 Oct 2024! (Good encryption FAQ).

Passwords: NIST's latest draft Digital Identity Guidelines: Authentication and Authenticator Management now states, among other things, that passwords:

  • Minimum - "shall" be required to be 8 characters minimum, and "should" be required to be 15 characters minimum
  • Maximum - "should" accept 64 characters (to enable passphrases)
  • Types of characters - "should" accept ASCII, space, Unicode; but "shall" NOT require other composition rules like a mix of different character types - unlike what most organisations currently require!
  • Change - "shall not" be required to be changed by users periodically (again unlike what too many organisations do), but change "shall" be required if there's evidence the "authenticator" was compromised (cf. that the password itself was compromised)
  • No storage of password hints accessible to unauthenticated people (e.g. not logged in), and no prompts for knowledge-based authentication (like first pet's name) or security questions when choosing passwords
(Added: security guru Bruce Schneier approves of these changes!)
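The draft rules above boil down to a surprisingly short acceptance check. A minimal sketch in Python, assuming a hypothetical compromised-password blocklist (NIST also expects candidates to be screened against known-breached lists; the set below is illustrative only):

```python
# Sketch of a password-acceptance check loosely following the draft
# NIST SP 800-63B guidance summarised above. BREACHED_PASSWORDS is a
# hypothetical stand-in for a real compromised-credential list.
BREACHED_PASSWORDS = {"password", "12345678", "qwertyuiop"}  # illustrative only

MIN_LEN = 8    # "shall" floor; the draft says verifiers "should" prefer 15
MAX_LEN = 64   # "should" accept at least 64 characters, to allow passphrases

def check_password(candidate: str) -> tuple[bool, str]:
    """Return (accepted, reason). Deliberately NO composition rules
    (no required mix of character types) and NO periodic-expiry logic,
    in line with the draft: length plus a blocklist check only."""
    if len(candidate) < MIN_LEN:
        return False, f"shorter than {MIN_LEN} characters"
    if len(candidate) > MAX_LEN:
        return False, f"longer than {MAX_LEN} characters"
    if candidate.lower() in BREACHED_PASSWORDS:
        return False, "appears in a known-compromised list"
    return True, "ok"

print(check_password("correct horse battery staple"))  # long passphrase, accepted
print(check_password("Pa55!"))  # rejected on length alone, despite mixed characters
```

Note how "Pa55!" fails despite satisfying the character-mix rules many organisations still impose: under this guidance, length and breach-screening do the work, not composition rules.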

Payment webpages: fines have been imposed on companies under GDPR because their payment webpages got hacked, directly or indirectly, enabling criminals to capture customers' payment card details for fraud. The recent Frame Watch feature of ReportURI (helmed by noted security expert Scott Helme, if you'll forgive the pun!), alongside its existing Script Watch and Data Watch features, looks helpful for monitoring and alerting on suspicious activity on payment pages.
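Monitoring services of this kind typically rely on browsers' Content Security Policy violation reporting: a report-only policy makes the browser report (without blocking) any script or frame loaded from an unexpected origin on a payment page. A minimal sketch of building such a header; the payment-provider hosts and reporting endpoint below are made-up placeholders, not real configuration:

```python
# Sketch of a report-only CSP for a payment page. Any script or frame
# loaded from an origin not listed here triggers a violation report to
# the monitoring endpoint, without breaking the page for the customer.
def build_csp_report_only(report_endpoint: str) -> str:
    directives = [
        "default-src 'self'",
        # Hypothetical payment-service-provider hosts for illustration:
        "script-src 'self' https://js.example-psp.com",
        "frame-src 'self' https://checkout.example-psp.com",
        f"report-uri {report_endpoint}",
    ]
    return "; ".join(directives)

# Sent as: Content-Security-Policy-Report-Only: <header_value>
header_value = build_csp_report_only("https://example.report-uri.com/r/d/csp/reportOnly")
print(header_value)
```

Starting in report-only mode is the usual approach: it surfaces injected Magecart-style scripts or rogue payment frames without the risk of an over-strict policy blocking legitimate checkout traffic.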

Cloud forensics: post-data breach forensics on cloud services isn't easy. NIST's Cloud Computing Forensic Reference Architecture document, from July 2024, suggests ways to implement cloud architecture to facilitate forensics.

Aligning US federal agencies' cyber defence: CISA's priority areas aren't surprising: asset management, vulnerability management, defensible architecture, cyber supply chain risk management, and incident detection and response. The tricky bit is, of course, aligning systems/processes accordingly, e.g. by increasing operational visibility of assets, managing the attack surface of Internet-accessible assets, securing cloud applications etc., under its Federal Civilian Executive Branch (FCEB) Operational Cybersecurity Alignment (FOCAL) Plan. Again, much of this is of use to the private sector too.

Also of interest:

Sunday, 29 September 2024

Things AI, Sept 2024

Open-source AI models: from ICO's previously-asked questions, this Q&A was added recently even though currently the "Last updated" date indicates 11 April 2024.
Q: We want to develop a speech transcription service for use in our organisation, using an open-source artificial intelligence (AI) model. Can we do this even though we don’t have detailed information about how the model was trained? (see the answer!)

AI Act: from Deloitte's AI Act Survey 2024, not many companies surveyed have started prep, nearly half feel partially/poorly prepared, over half think the Act constrains their innovation capabilities in AI, there were mixed views on legal certainty and on the Act's impact on trust in AI, and almost half thought the Act's more of a hindrance to AI-based applications! But, over 100 companies have signed the Commission's voluntary AI Pledge under its AI Pact, which seeks to encourage organisations to implement AI Act measures ahead of its formal application dates.

Beyond the AI Act, see more generally:

Revised EU Product Liability Directive: the new EU Parliament has approved the text (Eur-Lex), so it just remains for the Council to adopt it (although Estonia is against the procedural rules); when published in the OJ thereafter it will become law. Significance? For the purposes of no-fault liability for defective products, "product" will explicitly include software including that supplied via SaaS. The text also mentions software as including AI systems. Also:

"A developer or producer of software, including AI system providers within [AI Act] should be treated as a manufacturer"... "Where a substantial modification is made through a software update or upgrade, or due to the continuous learning of an AI system, the substantially modified product should be considered to be made available on the market or put into service at the time that modification is actually made."

"National courts should presume the defectiveness of a product or the causal link between the damage and the defectiveness, or both, where, notwithstanding the defendant’s disclosure of information, it would be excessively difficult for the claimant, in particular due to the technical or scientific complexity of the case, to prove the defectiveness or the causal link, or both... Technical or scientific complexity should be determined by national courts on a case-by-case basis, taking into account various factors. Those factors should include...the complex nature of the causal link,  such as... a link that, in order to be proven, would require the claimant to explain the inner workings of an AI system...  ...in a claim concerning an AI system, the claimant should, for the court to decide that excessive difficulties exist, neither be required to explain the AI system’s specific characteristics nor how those characteristics make it harder to establish the causal link." 

EU Cyber Resilience Act (CRA) on "horizontal cybersecurity requirements for products with digital elements": the new EU Parliament has approved the text (Eur-Lex), so it just remains for the Council to adopt it; when published in the OJ thereafter, it will become law. Note, this aims to "set the boundary conditions for the development of secure products with digital elements by ensuring that hardware and software products are placed on the market with fewer vulnerabilities and that manufacturers take security seriously throughout a product’s lifecycle". Also note, "Products with digital elements classified as high-risk AI systems pursuant to Article 6 of [AI Act] which fall within the scope of this Regulation should comply with the essential cybersecurity requirements set out in this Regulation..." (see much more in Art.12 and Rec.51 which specifically cover high-risk AI systems, and Art.52(14)).  BTW, the Commission is inviting cybersecurity experts to apply to join its CRA Expert Group. Various criticisms of the CRA have been mentioned in my book/free companion PDF; here's another critique.

EU AI Liability Directive: added - Proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence: Complementary impact assessment from the EPRS (as requested by a Europarl committee) "proposes that the AILD should extend its scope to include general-purpose and other 'high-impact AI systems', as well as software. It also discusses a mixed liability framework that balances fault-based and strict liability. Notably, the study recommends transitioning from an AI-focused directive to a software liability regulation, to prevent market fragmentation and enhance clarity across the EU" (PDF).

UK: the AI Act doesn't apply in the UK post-Brexit, so perhaps there are indeed more AI opportunities in the UK, on which Google has published a blog and fuller paper. The UK will make the AI Safety Institute (AISI) a statutory body as well as "identifying and realising the massive opportunities of AI" including for government/public services. (Here, the UK's not alone: a study for the European Commission emphasises AI's "significant potential" to improve EU public sector services.) AISI work includes assessing AI capabilities, e.g. Early Insights from Developing Question-Answer Evaluations for Frontier AI.

But, the GDPR still applies in the UK: ICO statement on LinkedIn's changes to its AI policy, so it is no longer training genAI models using UK users' data (opt-out link for others). There was separately an AI opt-out hoax that fooled a lot of people!



The recently-published UK MoD's annual analysis of future global strategic trends 2024 mentions cyber and AI, of course. UK civil servants (but not the rest of us!) are being offered free AI-related training courses, covering various aspects of AI and illustrating what's considered most important: Fundamentals, Understanding AI Ethics, The business value of AI, Gen AI Tools and Applications, Working with Large Language Models, Machine Learning and Deep Learning, Natural Language Processing and Speech Recognition, Computer Vision, and a Technical Curriculum.

Separately, case studies summarised based on the DSIT AI assurance techniques have been boosted by the addition of more products/platforms, on areas from governance, facial recognition e.g. for verification/identification, compliance management and bias assessment (even for NIST AI RMF, ISO, and NYC 144 bias audit with synthetic data!) to AI monitoring/audit. If you're planning to offer AI products to government (or beyond), it wouldn't be a bad idea to get your own products assured and listed similarly.

AI uses in the UK: a great use is autonomous robots to maintain fusion facilities. On health, a "novel ... AI tool, validated using NHS eye imaging datasets... could transform the efficiency of screening for Diabetic Retinopathy (DR)", while the MHRA is calling for applications from manufacturers and developers of AI medical devices to join its AI Airlock regulatory sandbox; and, Reflections on building the AI and Digital Regulations Service. Added: AI platform via QR code for citizen science info on bathing water quality in Devon and Cornwall.

Collaboration on cybersecurity and AI research announced between the UK, US and Canada, to support defence and security.

Equality, AI: The public sector equality duty and data protection, Sept 2024, UK EHRC guidance (with ICO input), including helpful examples of proxy data for protected characteristics under the UK Equality Act 2010, and a short section on proxy analysis of AI models, with a case study on the Dutch benefit fraud scandal that led to unlawful discrimination (from using biased predictive algorithms)

United Nations: much activity on AI, such as the final Governing AI for Humanity report on global AI governance, gaps, and international cooperation.

The recently (and almost simultaneously) promulgated UN Digital Compact is "a comprehensive framework for global governance of digital technology and artificial intelligence":

  • Objectives agreed included: "Enhance international governance of artificial intelligence for the benefit of humanity"
  • Principles agreed included: "Safe, secure and trustworthy emerging technologies, including artificial intelligence, offer new opportunities to turbocharge development. Our cooperation will advance a responsible, accountable, transparent and human-centric approach to the life cycle of digital and emerging technologies, which includes the pre-design, design, development, evaluation, testing, deployment, use, sale, procurement, operation and decommissioning stages, with effective human oversight"
  • On Digital public goods and digital public infrastructure: "We recognize that digital public goods, which include open-source software, open data, open artificial intelligence models, open standards and open content that adhere to privacy and other applicable international laws, standards and best practices and do no harm, empower societies and individuals to direct digital technologies to their development needs and can facilitate digital cooperation and investment... ...We commit, by 2030, to: (a) Develop, disseminate and maintain, through multi-stakeholder cooperation, safe and secure open-source software, open data, open artificial intelligence models and open standards that benefit society as a whole (SDGs [Sustainable Development Goals] 8, 9 and 10)"
  • On Objective 3. Foster an inclusive, open, safe and secure digital space that respects, protects and promotes human rights, they "urgently... Call on digital technology companies and developers to continue to develop solutions and publicly communicate actions to counter potential harms, including hate speech and discrimination, from artificial intelligence-enabled content. Such measures include incorporation of safeguards into artificial intelligence model training processes, identification of artificial intelligence-generated material, authenticity certification for content and origins, labelling, watermarking and other techniques (SDGs 10, 16 and 17)."
  • On Objective 4. Advance responsible, equitable and interoperable data governance approaches, data privacy and security, "We recognize that responsible and interoperable data governance is essential to advance development objectives, protect human rights, foster innovation and promote economic growth. The increasing collection, sharing and processing of data, including in artificial intelligence systems, may amplify risks in the absence of effective personal data protection and privacy norms...
    ...We commit, by 2030, to: (a) Draw on existing international and regional guidelines on the protection of privacy in the development of data governance frameworks (all SDGs); (b) Strengthen support to all countries to develop effective and interoperable national data governance frameworks (all SDGs); (c) Empower individuals and groups with the ability to consider, give and withdraw their consent to the use of their data and the ability to choose how those data are used, including through legally mandated protections for data privacy and intellectual property (SDGs 10 and 16); (d) Ensure that data collection, access, sharing, transfer, storage and processing practices are safe, secure and proportionate for necessary, explicit and legitimate purposes, in compliance with international law (all SDGs); (e) Develop skilled workforces capable of collecting, processing, analysing, storing and transferring data safely in ways that protect privacy (SDGs 8 and 9)
  • And Objective 5 was all about AI governance, not quoted in full here but
    "We will: (a) Assess the future directions and implications of artificial intelligence systems and promote scientific understanding (all SDGs); (b) Support interoperability and compatibility of artificial intelligence governance approaches through sharing best practices and promoting common understanding (all SDGs); (c) Help to build capacities, especially in developing countries, to access, develop, use and govern artificial intelligence systems and direct them towards the pursuit of sustainable development (all SDGs); (d) Promote transparency, accountability and robust human oversight of artificial intelligence systems in compliance with international law (all SDGs)." (Also see UNESCO's consultation from Aug-Sept 2024 with a policy brief summarising emerging regulatory approaches to AI.)
  • "We therefore commit to: (a) Establish, within the United Nations, a multidisciplinary Independent International Scientific Panel on AI with balanced geographic representation to promote scientific understanding through evidence-based impact, risk and opportunity assessments, drawing on existing national, regional and international initiatives and research networks (SDG 17); (b) Initiate, within the United Nations, a Global Dialogue on AI Governance involving Governments and all relevant stakeholders which will take place in the margins of existing relevant United Nations conferences and meetings (SDG 17)."

US: global AI research agenda; proposed Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters (i.e. cloud providers); "This includes reporting about developmental activities, cybersecurity measures, and outcomes from red-teaming efforts, which involve testing for dangerous capabilities like the ability to assist in cyberattacks or lower the barriers to entry for non-experts to develop chemical, biological, radiological, or nuclear weapons." One I missed earlier: the IAF's paper on Risk/Data Protection Assessment (for AI) as Required by U.S. State Privacy Laws.

US FTC: action against "multiple companies that have relied on artificial intelligence as a way to supercharge deceptive or unfair conduct that harms consumers... include actions against a company promoting an AI tool that enabled its customers to create fake reviews, a company claiming to sell “AI Lawyer” services, and multiple companies claiming that they could use AI to help consumers make money through online storefronts." 

And some miscellaneous things...

Hallucination issues with LLMs remain: a recent egregious example.

Comparing chatbots: interesting open-source tool to compare different (anonymized) chatbots by asking them the same questions, then choosing the best answer. See its leaderboard; currently OpenAI's o1-preview is top!

Cognitive bias: humans tend to think fluent content (e.g. LLM-generated) is more truthful/useful than less fluent content, which can produce systematic errors. Of course, this tendency is why even hallucinatory genAI output can be trusted and believed by humans! AWS scientists argue that "human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws upon insights from disciplines such as user experience research and human behavioral psychology".

AI users: apparently have a healthier relationship with work than colleagues who don't use AI! Although of course AI has been the reason for some job cuts.

Interesting article on AI hype and another on the importance of human thought and judgment when using AI.

ADDED:
(Also see my separate blogs on privacy / data protection and on security: links now added.)