This is a good post in both form and function: a complex and important policy area, neatly summarised in a set of well chosen charts, mixing objective and attitudinal data, and quietly prompting some very big strategic questions.
This is an artful piece – the first impression is of a slightly unstructured stream of consciousness, but underneath the beguilingly casual style, some great insights are pulled out, as if effortlessly. Halfway down, we are promised ‘three big ideas’, and the fulfilment does not disappoint. The one which struck home most strongly is that we design institutions not to change (or, going further still, the purpose of institutions is not to change). There is value in that – stability and persistence bring real benefits – but it’s then less surprising that those same institutions struggle to adapt to rapidly changing environments. A hint of an answer comes with the next idea: if everything is the product of a design choice, albeit sometimes an unspoken and unacknowledged one, then it is within the power of designers to make things differently.
Do you best transform government by importing disruption and disruptors to overwhelm the status quo, or by nurturing and encouraging deeper but slower change which more gradually displaces the status quo? Or do both methods fail, leaving government – and the civil service – to stagger on to the next crisis, all set to try again and fail again?
The argument of this post is that those attempts are doomed to failure because the civil service is not willing to acknowledge the depth of the crisis it faces, and until it is, it will never take the steps necessary to fix things. It’s a good and thought provoking polemic – and the questions above are very real ones. But it underplays two important factors. The first is the framing of the problem as being about the civil service. Arguably, that’s too narrow a view: if you want to change the system, you have to change the system: the civil service is the way that it is in large part because of the wider political system of which it is part. The second is one the article rightly identifies, but then does not really pursue. One reason disruptive outsiders tend to fail is that by definition they are brought in at a time when they enjoy the strongest possible patronage – and it’s an understandable temptation to see that as a normal state of affairs. But the reality is that such patronage always fades. Disruptors tend to sprint; they might do better if they planned for a relay – and that is as true for those attempting to disrupt from within as for those brought in to disrupt from without.
The problem with good policies, badly implemented, is not primarily the bad implementation; it is that the bad implementation strongly suggests they weren’t good policies to start with. That’s the proposition advanced by this post (and one which is interesting to read in parallel with The Blunders of our Government).
There are few examples of good but badly implemented policies because, in this approach, policy making is not – or not just – the grand sweep of a speech, but is the grinding detail of working through real world implications. Failure of implementation is therefore a strong indicator of a bad policy – akin, perhaps, to the idea that if you can’t explain a complicated thing simply, you probably don’t understand it.
A remarkable proportion of the infrastructure of a modern state is there to compensate for the absence of trust. We need to establish identity, establish creditworthiness, have a legal system to deal with broken promises, employ police officers, security guards and locksmiths, all because we don’t know whether we can trust one another. Most of us, as it happens, are pretty trustworthy. But a few of us aren’t, and it’s really hard to work out which category each of us falls into (to say nothing of the fact that it’s partly situational, so people don’t stay neatly in one or the other).
There are some pretty obvious opportunities for big data to be brought to bear on all that, and this article focuses on a startup trying to do exactly that. That could be a tremendous way of removing friction from the way in which strangers interact, or it could be the occasion for a form of intrusive and unregulated social control (it’s not enough actually to be trustworthy, it’s essential to be able to demonstrate trustworthiness to an algorithm, with all the potential biases that brings with it) – or it could, of course, be both.
This is an interesting variant of the idea of a universal basic income – to provide universal basic services instead. The argument is that taking a services-based approach makes it much easier to manage the fiscal impact, with the value of the services being only a fifth of the cost of even a pretty modest universal basic income.
The idea forces us to think a bit differently about the role of government. In the UK, healthcare and schools are provided as universal public services, and to most people that seems almost self-evidently right. But in the USA, that isn’t true at all – state-provided schooling is universal, state-provided healthcare emphatically isn’t. It’s not obvious, to put it mildly, that there is a set of services which intrinsically should be state provided and free at the point of use and another set which shouldn’t (there is also a question about whether all of the services proposed are in any normal sense ‘universal’).
But it’s important that questions such as these are asked. It’s easy to slide into the assumption that the way things are is the only way they can be. If patterns of work are changing, patterns of support to complement work will need to change too, and options less radical than full universal basic incomes need to be considered in that context.
Civil servants and civil services have ethical responsibilities for what they do. They can and must take account of the democratic mandate of the governments they serve, but that is a factor in the ethical judgements they make, it is not an exemption from the obligation to make them.
Usually when questions of this kind are discussed it is in the context of the limits beyond which government officials should not act, given that by strong implication politically neutral civil servants value the political, legal and civil system more highly than all the specific outcomes of that system. But this post is doubly interesting because it comes from the opposite direction: are there countervailing obligations, are there circumstances in which officials should continue in government despite fundamental disagreements with the policy and ethics of the political leaders they serve? The assertion here is that commitment to the values of public service can – and arguably should – lead people to stay in government both to sustain services to those who depend on them and to mitigate the worst consequences of bad policies and decisions. It’s a powerful argument. But it has the potential to be a profoundly undemocratic one. Democracies undoubtedly need checks and balances and in extremis, civil services can be such a check. But we should be worried – much more as citizens than as bureaucrats – when political and legal checks are insufficient to create an ethical balance.
Making things a bit less rubbish may sound a pale and uninspiring ambition. Set against the grand rhetoric of strategic change, it is certainly unassuming. But this post shouldn’t be overlooked because of its asserted modesty, for at least two important reasons.
The first is that making things a bit less rubbish is no small thing. Continuous attention to making things a bit less rubbish starts to make them a lot less rubbish and ultimately perhaps not rubbish at all. The second is that this is a wonderfully clear account of why mapping services from the perspective of users, rather than providers, is so powerful and so important. The ambition has been there for a long time, but the reality of actually doing it has lagged years behind, so it’s good to see real progress being made.
A few years ago, it looked as though joined up information might be as far as we would get: joined up information wouldn’t and couldn’t deliver a joined up government. With hindsight, that looks a little pessimistic in terms of the possibility of delivering better and more joined up services. But the thirty parts of government described in the post as being relevant to exports are all still there, and the question of how far service design can in reality transcend those boundaries is still a very real one.
This is a really interesting perspective on when it is – and isn’t – sensible to bring digital approaches and expertise into government policy making. Digital has much to offer policy making, but the value of that offer is massively increased if it is made with some humility, recognising the need to understand and add value to the policy making process. There is a refreshing recognition that not all service design (and so even more so, not all policy) is digital and that the contexts and constraints of policy making can be very different from those assumed in agile development and delivery. That isn’t – and shouldn’t be – a return to the view that policy making is so arcane an art that only true initiates should be allowed to do it, or have an opinion on it. It is though a very welcome recognition of the value of an almost anthropological approach – the idea of sending product managers to be participant observers of the policy making world is a particularly good one.
Ellen Broad – Medium
Facial recognition is the next big area where questions about data ownership, data accuracy and algorithmic bias will arise – and indeed are arising. Some of those questions have very close parallels with their equivalents in other areas of personal data, others are more distinctive – for example, discrimination against black people is endemic in poor algorithm design, but there are some very specific ways in which that manifests itself in facial recognition. This short, sharp post uses the example of a decision just made in Australia to pool driving licence pictures to create a national face recognition database to explore some of the issues around ownership, control and accountability which are of much wider relevance.
This is a long and detailed post, making two central points, one more radical and surprising than the other. The less surprising – though it certainly bears repeating – is that qualitative understanding, and particularly ethnographic understanding, is vitally important in understanding people and thus in designing systems and services. The more distinctive point is that qualitative and quantitative data are not independent of each other and more particularly that quantitative data is not neutral. Or, in the line quoted by Leisa Reichelt which led me to read the article, ‘behind every quantitative measure is a qualitative judgement imbued with a set of situated agendae’. Behind the slightly tortured language of that statement there are some important insights. One is that the interpretation of data is always something we project onto it, it is never wholly latent within it. Another – in part a corollary to the first – is that data cannot be disentangled from ethics. Taken together, that’s a reminder that the spectrum from data to knowledge is one to be traversed carefully and consciously.
This is a beguiling timeline which has won a fair bit of attention for itself. It’s challenging stuff, particularly the point around 2060 when “all human tasks” will apparently be capable of being done by machines. But drawing an apparently precise timeline such as this obscures two massive sources of uncertainty. The first is the implication that people working on artificial intelligence have expertise in predicting the future of artificial intelligence. Their track record suggests that that is far from the case: like nuclear fusion, full blown AI has been twenty years in the future for decades (and the study underlying this short article strongly implies, though without ever acknowledging, that the results are as much driven by social context as by technical precision). The second is the implication that the nature of human tasks has been understood, and thus that we have some idea of what the automation of all human tasks might actually mean. There are some huge issues barely understood about that (though also something of a no true Scotsman argument – something is AI until it is achieved, at which point it is merely automation). Even if the details can be challenged, though, the trend looks clear: more activities will be more automated – and that has some critical implications, regardless of whether we choose to see it as beating humans.
Politicians make decisions, legitimised by their democratic mandate. Bureaucrats implement those decisions, based on the objective and standardised application of rules.
So at least goes the standard caricature. Reality is, of course, more complicated than that. That simplistic model breaks down for at least two kinds of reasons. First, the real world is just too big and varied to make it possible – or sensible – to specify everything in minute detail. Systems which attempt to do that tend to break. Secondly, bureaucrats and the consumers of public services are human beings (as indeed are politicians), and their interactions will inevitably be influenced by emotional as well as rational responses. Bureaucrats always have faces, even if those faces are not always visible.
This article is a thoughtful exploration of the place of bureaucracy and bureaucrats in wider political systems, including the psychological toll which can be exacted on those trying to manage those intrinsic tensions between rules, complex reality, and humanity.
“Transformation” is a dangerous word. It is bold in ambition, but often very uncertain in precision. Instead of attempting yet another definition, as part of yet another attempt to tie the concept down, this post sets out eight powerful design principles which, if applied, would result in something which pretty unarguably would have delivered transformation. Perhaps transformation isn’t what you do, it’s how you tell what you’ve done.
But whatever the level of ambition, there is a lot in these apparently simple principles – well worth keeping close to hand.
It is increasingly obvious that ways of regulating and controlling digital technologies struggle to keep pace with the technologies themselves. Not only are they ever more pervasive, but their control is ever more consolidated. Regulations – such as the EU cookie consent rules – deal with real problems, but in ways which somehow fail to get to the heart of the issue, and which are circumvented or superseded by fast-moving developments.
This post takes a radical approach to the problem: rather than focusing on specific regulations, might we get to a better place if we take a systems approach, identifying (and nurturing) a number of approaches, rather than relying on a single, brittle, rules based approach? Optimistically, that’s a good way of creating a more flexible and responsive way of integrating technology regulation into wider social and political change. More pessimistically, the coalition of approaches required may be hard to sustain, and is itself very vulnerable to the influence of the technology providers. So this isn’t a panacea – but it is a welcome attempt to ask some of the right questions.
By happy – one might almost say curious – coincidence, this is another mapping of policy interventions, but this time ranked by democratic power. The result may feel a little painful to user researchers, but is a powerful complement to the Policy Lab perspectives.
But this post is about much more than a neat diagram. The core argument is that policy making is intrinsically political, and that being political should mean being democratic, not – or at least not just – because democracy is intrinsically good, but because there is already clear evidence that bad things happen when design, and particularly digital design, happens in a democratic vacuum. ‘Working in the open’ is one of the mantras of GDS. This post takes that thought to a level I suspect few of its proponents have ever imagined.
The first part is a typology of government interventions (click on the image for a larger and more legible version), which prompts more rigorous thought about the nature of the design challenge in relation to the nature of the intended impact.
Slightly curiously, the vertical categories are described as being on a scale from ‘Low level intervention’ (stewardship) to ‘Large scale intervention’ (legislation). That’s a little simplistic. Some legislation is intended to have a very narrow effect; some attempts to influence – which look as though they belong in the ‘leader’ line – can have huge effects. But that’s a minor quibble, particularly as it is described as being still work in progress.
The second dimension is about the scale of design, from micro to macro. Thinking about it that way has the rather helpful effect of cutting off what has become a rather sterile debate about the place of service design in government. Service design is, of course, critically important, but it’s a dimension of a wider model of policy and design which doesn’t entail conflict between the layers.
This new report from the RSA takes a more balanced view than most on the impact of automation on work, and particularly on low-skill work. This is neither a story of a displaced workforce condemned to penury as the robots take over, nor one of a blithe assumption that everything will muddle through. Much of the underlying analysis is now fairly familiar – certainly to regular readers here; what is distinctive and valuable is the focus on the quality as well as the quantity of work, and the ways in which automation can enhance human work rather than displace it.
Impressively, the authors have put some of their approach into practice by partly automating the process of reading it. Traditional manual readers can work through the full eighty page report; automation maximalists need only skim the eight key takeaways; and those with intermediate ambitions can focus on extracts from the main report summarising the main arguments or focusing on the impact of automation on the quality of work.
Being agile in a small agile organisation is one thing. Being a pocket of agility in a large and not necessarily very agile organisation is quite another. One of the points of friction is between conventional approaches to budget setting, typically with a strong focus on detailed advance planning, and agile approaches which make a virtue of early uncertainty and an exploratory approach. It’s clear that that’s not an ideal state of affairs; it’s less clear what the best way is of moving on. This post puts forward the radical approach of not funding projects at all, but funding teams instead.
The thought behind it makes a lot of sense, with the approval process becoming some version of managing a high-level backlog and there being a real efficiency gain from sustained team activity rather than fragmented project team formation. But in focusing on funding as the key tension to be resolved, the post slightly skates over what might be the larger issue of planning, where the gap between the aspiration to be precise and accurate and the reality of underlying uncertainty tends to be large. It may be that following the approach suggested here moves, rather than resolves, the friction. But it may also be that that is a useful and necessary next step.