AI in the UK: ready, willing and able?

House of Lords Select Committee on Artificial Intelligence

There is something slightly disconcerting about reading a robust and comprehensive account of public policy issues in relation to artificial intelligence in the stately prose style of a parliamentary report. But the slightly antique structure shouldn’t get in the way of seeing this as a very useful and systematic compendium.

The strength of this approach is that it covers the ground systematically and is very open about the sources of the opinions and evidence it uses. The drawback, oddly, is that the result is a curiously unpolitical document – mostly sensible recommendations are fired off in all directions, but there is little recognition, still less assessment, of the forces in play which might result in the recommendations being acted on. The question of what needs to be done is important, but the question of what it would take to get it done is in some ways even more important – and is one a House of Lords committee might be expected to be well placed to answer.

One of the more interesting chapters is a case study of the use of AI in the NHS. What comes through very clearly is that there is a fundamental misalignment between the current organisational structure of the NHS and any kind of sensible and coherent use – or even understanding – of the data it holds and of the range of uses, from helpful to dangerous, to which it could be put. That’s important not just in its own right, but as an illustration of a much wider issue of institutional design noted by Geoff Mulgan.

Deeply intertwingled laws

John Sheridan

Beyond even the bonus points for talking about laws being ‘intertwingled’, this is an important and interesting post at the intersection of law, policy and automation. It neatly illustrates why the goal of machine-interpretable legislation, such as the recent work by the New Zealand government, is a much harder challenge than it first appears – law can have tacit external interpretation rules, which means that the highly structured interpretation which is normal, and indeed necessary, for software just doesn’t work. Which is why legal systems have judges and programming languages generally don’t – and why the New Zealand project is so interesting.

LabPlus: Better Rules for Government Discovery Report

Nadia Webster – NZ Digital Government

The rather dry title of this post belies the importance and interest of its content. Lots of people have spotted that laws are systems of rules, computer code is systems of rules and that somehow these two facts should illuminate each other. Quite how that should happen is much less clear. Ideas have ranged from developing systems to turn law into code to adapting software testing tools to check legislative compliance. This post records an experiment with a different approach again, exploring the possibility of creating legislative rules in a way which is designed to make them machine consumable. That’s an approach with some really interesting possibilities, but also some very deep challenges. As John Sheridan has put it, law is deeply intertwingled: the meaning of legislation is only partly conveyed by the words of a specific measure, which means that transcoding the literal letter of the law will never be enough. And beyond that again, the process of delivering and experiencing a service based on a particular set of legal rules will include a whole set of rules and norms which are not themselves captured in law.
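
To make the challenge concrete, here is a minimal sketch of what a literal transcoding of a legislative rule might look like. Everything in it – the rule, the names, the threshold – is invented for illustration and is not drawn from the New Zealand work; the point is how much meaning the code silently discards.

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    weekly_income: float
    resident: bool

def eligible_for_supplement(p: Person) -> bool:
    """A literal transcoding of a hypothetical provision:
    'A resident aged 65 or over whose weekly income is below
    200 is entitled to the supplement.'
    """
    return p.resident and p.age >= 65 and p.weekly_income < 200.0

# What the code cannot capture: what counts as 'resident'? Is income
# assessed before or after tax? Does a one-off payment count as weekly
# income? Legal systems resolve such questions through external
# interpretation rules and case law; the function above hard-codes one
# reading and discards the rest.
print(eligible_for_supplement(Person(age=67, weekly_income=150.0, resident=True)))  # True
```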

That makes it sensible to start, as the work by the New Zealand government reported here has done, with exploratory thinking, rather than jumping too quickly to assumptions about the best approach.  The recommendations for areas to investigate further set out in their full report are an excellent set of questions, which will be of interest to governments round the world.

Usability of Key Distribution in BlockChain Backed Electronic Voting

Terence Eden

This is a good post on the very practical difficulties in establishing secure digital identity, in this case for the purpose of voting in elections. It’s included here mainly as a timely but inadvertent illustration of the point in the previous post that even technology fixes are harder than they look. Implementing some form of online voting wouldn’t be too difficult; implementing a secure and trustworthy electoral system would be very hard indeed.

A New Approach to Digital Identity

Chris Yiu and Harvey Redgrave – Institute for Global Change

Digital identity (like digital voting) sounds as though it ought to be a problem with a reasonably straightforward solution, but which looks a lot more complicated when it comes to actually doing it. Like everything with the word ‘digital’ attached to it, that’s partly a problem of technical implementation. But also like everything with the word ‘digital’ attached to it, particularly in the public and political space, it’s a problem with many social aspects too.

This post makes a brave attempt at offering a solution to some of the technical challenges. But the reason why the introduction of identity cards has been highly politically contentious in the UK, but not in other countries, has a lot to do with history and politics and very little to do with technology. So better technology may indeed be better, but that doesn’t in itself constitute a new approach to identity. Even if the better technology is in fact better (and as Paul Clarke spotted, ‘attestation’ is doing a lot more work as a word than it first appears), there are some much wider issues (some flagged by Peter Wells) which would also need to be addressed as part of an overall approach.

What do we mean when we talk about services?

Stephanie Marsh – GDS

A service is not an interaction on a website; it is not an immediate transaction. A service has a beginning, a middle and an end. The problem is that the service designer is at risk of only seeing the middle, and while a well designed middle is a good thing, it is not the whole thing. From the point of view of the person who has a need they want to resolve, the starting point may come much earlier and the resolution much later.

So it’s very encouraging to see GDS recognising this and making it clear that service design should be seen broadly, not narrowly. There’s room for debate about where the lines are drawn from the supply side perspective (the difference between ‘supporting content’ and ‘things which support’ is lost on me, for example) and perhaps more significantly a definition of a user journey which is too producer focused. But the underlying approach is very much the right one.

The perils of bad strategy

Richard Rumelt – McKinsey Quarterly

If we want to create a good strategy, there is some value in understanding what makes a bad one. This paper sets out to do exactly that and ends even more helpfully by reversing that into three key characteristics of a good strategy – understanding the problem; describing a guiding approach to addressing it; and setting out a coherent set of actions to deliver the approach. This is a classic article – which is a way of saying both that it’s a few years old, while also being pretty timeless. It derives from a book, but as is not uncommon, the book is very much longer without adding value in proportion.

The Risk of Machine-Learning Bias (and How to Prevent It)

Chris DeBrusk – MIT Sloan Management Review

This article is a good complement to the previous post, providing some pragmatic rigour on the risk of bias in machine learning and ways of countering it. Perhaps the most important point is one of the simplest:

It is safe to assume that bias exists in all data. The question is how to identify it and remove it from the model.

There is some good practical advice on how to do just that. But there is an obvious corollary: if human bias is endemic in data, it risks being no less endemic in attempts to remove it. That’s not a counsel of despair: this is an area where good intentions really do count for something. But it does underline the importance of being alert to the opposite risk, that unless it is clear that bias has been thought about and countered, the probability is high that it still remains. And of course it will be hard to calibrate the residual risk, whatever its level might be, particularly for the individual on the receiving end of the computer saying ‘no’.
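
As a concrete illustration of the ‘identify it’ half of that advice, here is a minimal sketch of one common first check: comparing favourable outcome rates across groups in the training data. The records and the threshold are invented, and real bias auditing goes far beyond a single disparity ratio.

```python
from collections import defaultdict

# Hypothetical training records: (group, outcome) pairs, where
# outcome=1 means a favourable decision (e.g. a loan approved).
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

totals, favourable = defaultdict(int), defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    favourable[group] += outcome

rates = {g: favourable[g] / totals[g] for g in totals}
print(rates)  # {'A': 0.75, 'B': 0.25}

# A crude disparity test, loosely modelled on the 'four-fifths rule'
# used in US employment law: flag any group whose favourable rate is
# below 80% of the highest group's rate.
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("groups to investigate:", flagged)  # ['B']
```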

Computer Says No: Part 1 Algorithmic Bias and Part 2 Explainability

These two (of a planned three) posts take an interesting approach to the ethical problems of algorithmic decision making, resulting in a much more optimistic view than most who write on this. It’s very much worth reading even though the arguments don’t seem quite as strong as they are made to appear.

Part 1 essentially sidesteps the problem of bias in decision making by asserting that automated decision systems don’t actually make decisions (humans still mostly do that), but should instead be thought of as prediction systems – and the test of a prediction system is in the quality of its predictions, not in the operations of its black box. The human dimension is a bit of a red herring, as it’s not hard to think of examples where in practice the prediction outputs are all the decision maker has to go on, even if in theory the system is advisory. More subtly, there is an assumption that prediction quality can easily be assessed and an assertion that machine predictions can be made independent of the biases of those who create them, both of which are harder problems than the post implies.

The second post goes on to address explainability, with the core argument being that it is a red herring (an argument Ed Felten has developed more systematically): we don’t really care whether a decision can be explained, we care whether it can be justified, and the source of justification is in its predictive power, not in the detail of its generation. There are two very different problems with that. One is that not all individual decisions are testable in that way: if I am turned down for a mortgage, it’s hard to falsify the prediction that I wouldn’t have kept up the payments. The second is that the thing in need of explanation may be different for AI decisions from that for human decisions. The recent killing of a pedestrian by an autonomous Uber car illustrates the point: it is alarming precisely because it is inexplicable (or at least so far unexplained), but whatever went wrong, it seems most unlikely that a generally low propensity to kill people will be thought sufficiently reassuring.
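
To make that first problem concrete, here is a minimal sketch of a calibration check – the standard way of testing predictive power – using invented figures; the closing comment is where the argument runs into trouble.

```python
# Hypothetical (predicted_probability, actual_outcome) pairs for
# cases where the outcome was eventually observed.
predictions = [
    (0.9, 1), (0.8, 1), (0.85, 0), (0.2, 0),
    (0.3, 0), (0.25, 1), (0.7, 1), (0.75, 1),
]

# Split predictions into low and high bands, then compare the average
# predicted probability with the observed outcome rate in each band.
bands = {"low (<0.5)": [], "high (>=0.5)": []}
for prob, outcome in predictions:
    key = "high (>=0.5)" if prob >= 0.5 else "low (<0.5)"
    bands[key].append((prob, outcome))

for name, items in bands.items():
    mean_predicted = sum(p for p, _ in items) / len(items)
    observed_rate = sum(o for _, o in items) / len(items)
    print(f"{name}: predicted {mean_predicted:.2f}, observed {observed_rate:.2f}")

# The catch: this only works where outcomes are observed for everyone.
# A rejected mortgage applicant generates no repayment history, so the
# prediction that they would have defaulted can never be tested.
```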

None of that should be taken as a reason for not reading these posts. Quite the opposite: the different perspective is a good challenge to the emerging conventional wisdom on this and is well worth reflecting on.

Crossing the ‘Valley of Death’ – how we can bridge the gap between policy creation and delivery

Tony Meggs – Civil Service Quarterly

A policy which cannot be – or is not – implemented is a pretty pointless thing. The value in policy and strategy is not in the creation of documents or legislation (essential though that might be), but in making something, somewhere better for someone. Good policy is designed with delivery very much in mind. Good delivery can trace a clear and direct line to the policy intention it is supporting.

That’s easily said, but as we all know, there is no shortage of examples where that’s not what has happened in practice. More positively, there is also no shortage of people and organisations focused on making it work better. Much of that has been catalysed – more or less directly – through digital and service design, with the idea now widely accepted (albeit still sometimes more in principle than in practice) that teams should be formed from the outset by bringing together a range of disciplines and perspectives. But as this post reminds us, there is another way of thinking about how to bring design and delivery together, focusing on implementation and programme management.

But perhaps most importantly, the post stresses the need to recognise and manage the pressures in a political system to express delivery confidence at an earlier stage and with greater precision than can be fully justified. Paradoxically (it might appear), embracing uncertainty is a powerful way of enhancing delivery confidence.

Why it’s never a good time for service design

Lou Downe

It’s really hard to do things as well when responding to a crisis as when they are properly planned. It’s really hard to do proper planning if all your time and energy is taken up by responding to crises. Service design is one of the leading indicators of that problem: there’s no (perceived) time to do it when it’s urgent; but there’s no urgency to do it when there’s time.

The solution to that conundrum argued here is very simple: slow degradation over time has to be recognised as being as bad as the catastrophic failure which occurs when the degradation hits a tipping point – “we need to make doing nothing as risky as change.”

Simple in concept is, of course, a very long way from being simple to realise, and the lack of attention given to fixing things before they actually break is a problem not limited to service design – slightly more terrifyingly it applies just as much to nuclear weapons (and in another example from that post, to apparently simple services which cross organisational boundaries and which it isn’t quite anybody’s responsibility to fix). Changing that won’t be easy, but that doesn’t make it any less important.

Data as photography

Ansel Adams, adapted by Wesley Goatley

“A visualisation is usually looked at – seldom looked into.”
“The sheer ease with which we can produce a superficial visualisation often leads to creative disaster.”
“There’s nothing worse than a sharp visualisation of a fuzzy concept.”
“You don’t collect a data set, you make it.”
“There are always two people in every data visualisation: the creator and the viewer.”
“To make art with data truthfully and effectively is to see beneath the surfaces.”
“A great data visualisation is a full expression of what one feels about what is being visualised in the deepest sense, and is, thereby, a true expression of what one feels about life in its entirety.”
“Data visualisation is more than a medium for factual communication of ideas. It is a creative art.”
“We must remember that a data set can hold just as much as we put into it, and no one has ever approached the full possibilities of the medium.”
“Data art, as a powerful medium...offers an infinite variety of perception, interpretation and execution.”
“Twelve significant data points in any one year is a good crop.”

The idea that the camera does not lie is as old as photography. It has been untrue for just as long.

The exposure of film or sensor to light may be an objective process, but everything which happens before and after that is malleable and uncertain. There are some interesting parallels with data in that: the same appearance – and assertion – of accurately representing the real world, the same issues of both deliberate and unwitting distortion.

This tweet simply takes some of the things Ansel Adams, the great photographer of American landscapes, has written about photography and adapts them to be about data. It’s neatly done and provides good food for thought.

Designing digital services that are accountable, understood, and trusted

Richard Pope

This is a couple of years old, but is not in any way the worse for that. It’s an essay (originally a conference presentation), addressed to software developers, seeking to persuade them that in working in software or design, they are inescapably working in politics.

He’s right about that, but the implications for those on the other end of the connection are just as important. If the design of software is not neutral in political or policy terms, then people concerned with politics and policy need to understand this just as much. Thanks to Tom Loosemore for the enthusiastic reminder of its existence.

Are we still talking about digital transformation?

Gavin Beckett – Perform Green

Apparently we still are. Whether we should be is another matter. There is certainly a strong case against ‘digital’, my version of which was made in a blog post a couple of years ago, which stated firmly

Digital transformation is important. But it’s important because digital is a means of doing transformation, not because transformation enables digital.

That leaves us with ‘transformation’. Is that a word with enough problems of its own that we should avoid it as well? The case against is clear, and is well articulated in this post: transformation carries implications of one massive co-ordinated effort, of starting with stability, applying the intended change, and then returning to a new and better stability – and none of that happens in the real world. Instead, it’s better to see change from a more agile perspective, neatly summarised in a line quoted in the post

Approaching change in a more evolutionary way may be the best way of making effective progress.  Small steps towards a bigger picture, with wiggle room to alter the path.

Sometimes, though, that bigger picture is big enough to deserve being called transformational. Sometimes the first step is possible only when there is some sense of direction and of scale of ambition. Sometimes radical change is what’s needed – it’s not hard to look around and see systems and organisations crying out for transformation. We should be cautious about discarding the ambition just because, too often, the means deployed to achieve it have fallen short.

Indeed, perhaps the real problem with ‘transformation’ as a word is that it has been applied far too casually to things which haven’t been nearly transformational enough in their ambition. If digital transformation is to mean anything, it has to be more than technology supported process improvement.

Looking at historical parallels to inform digital rights policy

Justine Leblanc – IF

Past performance, it is often said, is not a guide to future performance. That may be sound advice in some circumstances, but is more often than not a sign that people are paying too little attention to history, over too short a period, rather than that there is in fact nothing to learn from the past. To take a random but real example, there are powerful insights to be had on contemporary digital policy from looking at the deployment of telephones and carrier pigeons in the trenches of the first world war.

That may be an extreme example, but it’s a reason why the idea of explicitly looking for historical parallels for current digital policy questions is a good one. This post introduces a project to do exactly that, which promises to be well worth keeping an eye on.

The value of understanding history, in part to avoid having to repeat it, is not limited to digital policy, of course. That’s a reason for remembering the value of the History and Policy group, which is based on “the belief that history can and should improve public policy making, helping to avoid reinventing the wheel and repeating past mistakes.”

Don’t believe the hype about AI in business

Vivek Wadhwa – VentureBeat

If you want to know why artificial intelligence is like teenage sex, this is the post to read. After opening with that arresting comparison, the article goes on to make a couple of simple but important points. Most real world activities are not games with pre-defined rules and spaces. And for businesses – and arguably still more so for governments – it is critically important to be able to explain and account for decisions and outcomes. More pragmatically, it also argues that competitive advantage in the deployment of AI goes to those who can integrate many sets of disparate data to form a coherent set to which AI can be applied. Most companies – and, again, perhaps even more so most governments – are not very good at that. That might be the biggest challenge of all.


…which way I ought to go from here?

Dave Snowden – Cognitive Edge

This is close to the beginning of what is billed as a series of indefinite length on agility and Agility, which we are promised will at times be polemical and curmudgeonly, and which is tangentially illustrated with references to Alice (the one in Wonderland, not the cryptographic exemplar). The first post in the series set some context; this second one focuses on the question of whether short-cycle software production techniques translate to business strategy. In particular, the argument is that scrum-based approaches to agile work best when the problem space is reasonably well understood, and that this will be the case to different extents at different stages of an overall development cycle.

Dave Snowden is best known as the originator of the Cynefin framework, which is probably enough to guarantee that this series will be thought provoking. He positions scrum approaches within the Cynefin complex domain and as a powerful approach – but not the only or uniquely appropriate one. It will be well worth watching his arguments develop.

Eight things I’ve learnt from designing services for colleagues

Steve Borthwick – DWP Digital

Civil servants are users too. Indeed, as Steph Gray more radically claims, civil servants are people too. And as users, and even more so as people, they have needs. Some of those needs are for purely internal systems and processes, others are as users of systems supporting customer services.

In the second category, the needs of the internal user are superficially similar to the needs of the external user – to gather and record the information necessary to move the service forward. That for a time led to a school of thought that the service for internal and external users should be identical, to the greatest possible extent. But as this post recognises, there is a critical difference between somebody who completes a transaction once a year or once in a blue moon and somebody who completes that same transaction many times a day.

That shouldn’t be an excuse for complexity and confusion: just because people on the payroll can learn to fight their way through doesn’t mean it’s a good idea to make them. But it is one good reason for thinking about internal user needs in their own right – and this excellent post provides seven more reasons why that’s a good thing to do.

Meanwhile, the cartoon here remains timeless – it prompted a blog post almost exactly ten years ago arguing that there is a vital difference between supporting expert users (good) and requiring users to be expert (bad). We need to keep that difference clearly in sight.

YouTube, the Great Radicalizer

Zeynep Tufekci – New York Times

This article has been getting extensive and well-deserved coverage over the last few days. Essentially, it is demonstrating that the YouTube recommendation engine tends to lead to more extreme material, more or less whatever your starting point. In short, “YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.”

The reason for including it here is not the specific algorithm or the specific behaviour it generates. It is because it’s a very clear example of a wider phenomenon. It’s a pretty safe assumption that the observed behaviour is not the result of a cabal of fringe conspirators deep in the secret basements of Google setting out a trail to recruit people into extremist groups or attitudes. The much more obvious explanation is that they are trying to tempt people into spending as long as possible watching YouTube videos, because that’s how they can put the most advertising in front of the most eyeballs.

In other words, algorithmic tools can have radically unintended consequences. That’s made worse in this case because the unintended consequences are not a sign of the intended goal not being achieved; on the contrary, they are the very means by which that intended goal is being achieved. So it is not just the case that YouTube has some strong incentives not to fix the problem; the problem may not be obvious to them in the first place.
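
As a minimal sketch of that mechanism – invented data, and emphatically not YouTube’s actual system – consider a recommender that greedily maximises predicted watch time. Nothing in the objective mentions extremeness, yet if extreme content happens to correlate with watch time, extreme content is exactly what gets served.

```python
# Hypothetical catalogue: predicted watch time happens to correlate
# with how extreme the content is; the optimiser only sees the former.
catalogue = [
    {"title": "measured explainer",   "predicted_minutes": 4.0,  "extremeness": 0.1},
    {"title": "heated argument",      "predicted_minutes": 7.5,  "extremeness": 0.5},
    {"title": "conspiracy deep-dive", "predicted_minutes": 11.0, "extremeness": 0.9},
]

def recommend(already_watched):
    # Objective: maximise expected watch time. No engineer anywhere
    # wrote 'prefer extreme content'.
    unwatched = [v for v in catalogue if v["title"] not in already_watched]
    return max(unwatched, key=lambda v: v["predicted_minutes"])

watched = []
while len(watched) < len(catalogue):
    watched.append(recommend(watched)["title"])

print(watched)
# ['conspiracy deep-dive', 'heated argument', 'measured explainer']
# The drift towards extremes is an emergent property of the objective,
# not anyone's intention - which is why it may not even be visible as
# a problem from the inside.
```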

This is a clear example. But we need to keep asking the same questions about other systems: what are the second order effects, will we recognise them when we see them, and will we be ready to – and able to – address them?

Strategic thinking with blog posts and stickers

Giles Turnbull

Strategic thinking is best done by thinking out loud, on your blog, over a long period of time.

As someone clocking in with over a thousand blog posts of various shapes and sizes since 2005, that feels like a box well and truly ticked. Whether that adds up to something which might be called strategic thinking is a rather different question – but that may be because all those blog posts have not yet generated a single sticker.

There’s an important point being made here. Even in a more traditional approach to strategy development, the final document is never the thing which carries the real value: it’s the process of development, and the engagement and debate that that entails, which makes the difference. The test of a good strategy is that it helps solve problems, so as the problems change, so should the strategy. Whether that makes blog posts and stickers a sufficient approach to strategy development is a slightly different question. There might be a blog post in that.