Pages

Monday 16 September 2024

Browser cookie settings & consumer preferences - UK study

"Evaluating browser-based cookie setting options to help the UK public optimise online privacy behaviours" (PDF), a study for the UK Department for Science, Innovation and Technology on consumer preferences, conducted between August and December 2023, concluded that:

"...We recommend that any future cookie setting option should be interactive and detailed to a sufficient level that participants understand the real-world impact of accepting or declining a number of different options, e.g. that ‘functional’ cookies include login details, website preferences (language, currency), see Appendix 2, Figure 5. These setting designs secure stronger engagement by breaking participants out of the habit of automatically accepting all cookies purely for the sake of expedient access to the browser; furthermore, participants are satisfied after such a process of critical engagement...

...People remain divided over the idea of browser-based cookies. To improve sentiment, any future browser-based cookie settings should include features that will enhance web users’ feelings of control over their data (e.g. frequent prompts for updates, options to adapt preferences by types of websites, or for specific websites).

Participant engagement and satisfaction improved when they had access to more functionality details, an interactive interface to select their preferences, and timely prompting about privacy. As a result, should browser based cookie management systems replace the website level settings, we recommend that browser-based cookie setting design should attempt to disrupt users’ habits of automatically accepting through novel designs to create a dissonance with what they are used to seeing. Furthermore, any cookie settings that encourage participants to make a privacy-protective choice will lead to higher satisfaction regardless of initial preferences."

But if there's too much detail in the expanded info, users may simply ignore it. And I'm not so sure about frequent prompts to users... that doesn't make for a great user experience. Strictly, once users have accepted cookies for a site, they should probably be reminded of their right to withdraw consent to cookies every time they return to the site, but that doesn't really happen, at least not in a "disruptive" way, presumably because that's not great for UX either (popups continuing to appear even if you've accepted cookies previously?!).
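To make the study's recommendation more concrete, here's a minimal sketch (purely hypothetical, not any real browser API) of what a browser-level cookie preference model with the features the study suggests might look like: per-category defaults, overrides by type of website and for specific websites, and a way to withdraw consent for a given site. All names here are my own illustrative assumptions.

```python
# Hypothetical sketch only - not any actual browser's settings model.
from dataclasses import dataclass, field

CATEGORIES = ("necessary", "functional", "analytics", "advertising")

@dataclass
class CookiePreferences:
    # Per-category defaults; only strictly-necessary cookies allowed by default
    defaults: dict = field(
        default_factory=lambda: {c: (c == "necessary") for c in CATEGORIES}
    )
    # Overrides keyed by type of website, e.g. "news", "shopping"
    by_site_type: dict = field(default_factory=dict)
    # Overrides for specific websites, e.g. "example.com"
    by_site: dict = field(default_factory=dict)

    def allows(self, category: str, site: str = None, site_type: str = None) -> bool:
        if category == "necessary":
            return True  # strictly-necessary cookies need no consent
        # Site-specific overrides win over site-type overrides, which win over defaults
        for overrides in (self.by_site.get(site, {}), self.by_site_type.get(site_type, {})):
            if category in overrides:
                return overrides[category]
        return self.defaults.get(category, False)

    def withdraw(self, site: str):
        # Withdrawing consent for a site: decline all non-necessary categories
        self.by_site[site] = {c: False for c in CATEGORIES if c != "necessary"}
```

Even a toy model like this shows where the UX tension lies: the more granular the overrides, the more "interactive and detailed" the settings become, but also the more occasions there are to prompt the user.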

AI and GPAI developments/info, Sept 2024

Some AI-related links, which I hope will be of use:

Final text of the AI Pact pledges, promulgated by the European Commission to encourage tech companies to comply voluntarily with (at least some of) the AI Act before its formal application dates. Unsurprisingly, the text is not dissimilar to that of the EU AI Act, and indeed the G7 Principles from the Hiroshima Process


AI Act briefing for European Parliament, 2 Sept 24

✨BSA (The Software Alliance) Best Practices for Information Sharing Along the General Purpose AI Value Chain. There's some overlap with the EU AI Act's GPAI requirements, 3 Sept 24

✨Computer & Communications Industry Association's recommendations on GPAI code of practice, Aug 24

✨Note that the Council of Europe's AI Treaty, signed by the UK, EU and others, will come into force only on the first day of the month following the 3-month period after 5 signatories, including at least 3 Council of Europe member states, have ratified it. (On treaties/conventions, see the differences between signing versus ratification versus accession).

To support the Treaty's implementation, the COE's HUDERIA is a "legally non-binding methodology" for Risk & Impact Assessment of AI Systems for Human Rights, Democracy and Rule of Law (good summary). The UK's Alan Turing Institute is assisting on HUDERIA. The COE Committee on AI is considering HUDERIA soon. The European Commission & Council of the European Union are also involved. What's the betting as to how much HUDERIA will influence what is going to be required in Fundamental Rights Assessments for certain high-risk AI systems under the AI Act?

(Compare the US Department of State's risk management profile for AI and human rights, July 2024)


✨A data scientist has written a great outline of a practical approach in the face of the pressure to AI-ify everything ASAP. Consider: is using AI always the best solution? Especially given that AI systems will increasingly be subject to more onerous obligations than non-AI systems (e.g. under the AI Act), is it really best to use AI when non-AI approaches/methods could work equally well, or perhaps better?

✨What's up with the planned AI Liability Directive? It won't automatically be brought into force, but a June Parliamentary Committee report stated that "the legislative work under the leadership of the Committee on Legal Affairs... will continue under the new Parliament. In the meantime, the Committee... requested an additional impact assessment from the European Parliament Research Service (EPRS). It is to primarily deal with the compatibility of the three legal acts mentioned [AI Act, product liability etc.] and the risk-based concept and is expected to be completed by the end of 2024". Note the date for the additional impact assessment: end of 2024. So it's very unlikely this Directive will be passed this year, perhaps ever.

(I had tried to post the above on LinkedIn last Thursday, please see this AI developments post, but LinkedIn saw fit to demote it and not show it in people's feeds, so I think it's best to blog here instead, where at least there's space for me to flesh things out a bit more.)