Much is being written about how robots and automation either will or won’t displace lots of employment, often with breathless excitement as a substitute for thoughtful analysis. This report brings a more measured approach, in every sense. Its focus, as seems increasingly sensible, is less on the end point of change (which can’t be known in any case) and much more on the pace and direction of change. It also pays as much attention to the jobs which will be created as to those which might be displaced, which must be right as it is the net effect which really matters. The conclusion is that up to 2030, jobs will be created in sufficient number to offset the effects of automation – but that overall stability may nonetheless involve 375 million people around the world and 6.6 million in the UK being displaced from their current occupation, in part because an estimated 8 or 9% of jobs in 2030 will be in occupations which haven’t existed before.
Geoff Mulgan comes at the power of collective intelligence in this article from an interestingly different direction from that taken by Tim Harford. The underlying thought is the same: that individuals are subject to false confidence and confirmation bias, and that tempering that through more collective approaches leads to better results. This article though is more interested in the systems which embody that intelligence than in diluting individuality through diverse teams. Regulation and audit are examples of mechanisms intended to discourage aberrant behaviour by encapsulating shared wisdom about how things should be done, in ways which are both efficient and effective in themselves and which also counter illusion and self-deception.
This is an extract from Geoff’s new book, Big Mind: How Collective Intelligence Can Change Our World.
Teams with diverse capabilities perform better than teams which are too homogeneous. That much isn’t – or shouldn’t be – controversial. But this post adds two succinct insights to that starting point. The first is that despite the known value of diversity, recruitment and team formation tend to optimise for convergence rather than divergence – and that’s got a lot to do with the fact that diversity is a property of teams, not of individuals. So the more people are recruited in groups, the easier it should be to ensure that between them the successful candidates cover the full range of the needed skills and experience. The second is that homogeneous teams tend to think they are performing better while actually performing worse than teams which include a divergent outsider. A degree of social discomfort is a price which turns out to be well worth paying for better performance.
This is one of two articles worth reading together – the other is Geoff Mulgan’s on collective intelligence – as they cover some closely related ground from quite a different starting point.
Starting with user needs has become the axiomatically correct way of framing almost any government design problem. That’s a great deal better than not starting with user needs, but it also carries some very real risks and problems. One is that it tends towards a very individualistic approach: the user is a lone individual, whose only relevant relationship is with the service under consideration. The wider social network, within which we are all nodes, doesn’t get much of a look in. Another is that we risk prioritising the completion of a process over the achievement of an outcome. Both of those are addressed in this post, which directly challenges what has become the conventional starting point.
But perhaps what most distinguishes public services (in the widest sense) from other kinds of service is that there are often social needs which don’t always align with individual needs. The post refers to moral and collective needs, though it’s not entirely clear either whether ‘moral’ is a helpful label in this context or whether in practice moral and collective are being used as synonyms.
The question which this article tries to answer is a critically important one. Sometimes – often – it matters not just that a decision has been made, but that it has been made correctly and appropriately, taking proper account of the factors which are relevant and no account of factors which are not.
That need is particularly obvious in, but not limited to, government decisions, even more so where a legal entitlement is at stake. But machine learning doesn’t work that way: decisions are emergent properties of systems, and the route to the conclusion may be neither known nor, in any ordinary sense, knowable.
The article introduces a new name for a challenge which has faced the discipline from its earliest days, “explainable AI” – with a matching three-letter acronym, XAI. The approach is engagingly recursive. The problem of explaining the decision produced by an AI may itself be a problem of the type susceptible to analysis by AIs. Even if that works, it isn’t of course the end of it. We may have to wonder whether we need a third AI system which assures us that the explanation given by the second AI system of the decision made by the first AI system is accurate. And more prosaically, we would need to understand whether any such explanation is even capable of meeting the new GDPR standards.
But AI isn’t going away. And given that, XAI or something like it is going to be essential.
If we can’t get discoveries right, we won’t get anything else right that builds on their findings. That becomes ever more important as the language – if not always the rigour – of agile expands beyond its original boundaries. This short post introduces three others which look at planning, starting and finishing a discovery. They aren’t a guide to the tasks and activities of a discovery; they are instead a very powerful and practical guide to thinking about how to make a discovery work. There is a lot here for people who know they are doing discoveries; there may be even more for people who don’t necessarily think of that as what they are doing at all.
It is also, not at all incidentally, beautifully written with not a word wasted. These things matter.
Posts generally appear on Strategic Reading because they make powerful and interesting arguments or bring thought-provoking information to bear. This 45 minute discussion is in a rather different category. Its appearance here is to illustrate the alarmingly low level of thought being applied to some critically important questions. In part, it’s a classic two cultures problem: technologists who don’t seem to see the social and political implications of their work in a hopeless discourse with people who don’t seem to grasp the basics of the technology, in a discussion chaired by somebody capable of introducing the topic by referring to ‘computer algorithms – whatever they are.’ Matthew Taylor stands out among the participants for his ability to comment intelligently on both sides of the divide, while Michael Portillo is at least fluently pessimistic about the intrinsic imperfection of humanity.
Why then mention it at all? Partly to illustrate the scale and complexity of some of the policy questions prompted by artificial intelligence, which are necessarily beyond the scope of the technology itself. Partly also because the current state of maturity of AI makes it hard to get traction on the real problems. Everybody can project their hopes and fears on hypothetical AI developments – it’s not clear that people are agreeing on enough to have meaningful disagreements.
So despite everything, there is some value in listening to this – but with an almost anthropological cast of mind, to get some insight into the lack of sophistication on an important and difficult topic of debate.
This is the back story to one of yesterday’s budget announcements – £40 million a year for two years to give UK small businesses access to Ordnance Survey data. If you are interested in that you will find it gripping. But even if you are not, it’s well worth reading as a perceptive – if necessarily speculative – account of how policy gets made.
There are people lobbying for change – some outside government, some within. What they want done has a cost, but more importantly entails changing the way that the problem is thought about, not just in the bit of government which owns the policy, but in the Treasury, which is going to have to pay for it. A decision is made, but not one which is as clear cut or all embracing as the advocates would have liked. They have won, in a sense, but what they have won isn’t really what they wanted.
It’s also a good example of why policy making is hard. What seems at first to be a simple issue about releasing data quickly expands into wider questions of industrial and social strategy – is it a good idea to subsidise mapping data, even if the first order beneficiaries are large non-UK multinationals whose reputation for paying taxes is not the most positive? Is time limited pump-priming funding the right stimulus, or does it risk creating a surge of activity which then dies away? And, of course, this is a policy with no service design in sight.
IT projects used to be about – or at least were perceived to be about – building things. That determined not just how the work was done, but also how it was managed and accounted for. It led to a focus on the production of assets, which in turn depreciate. And treating software as a capital asset has consequences not just in arcane accounting treatments, but in how digital can be measured and managed – and those ways are, this post argues, counter-productive if we want to see sustained continuous agile improvement.
That’s not just an interesting argument in its own right, it’s also a great example of how understanding ‘the way things are done round here’ requires several layers of digging and goes well beyond ‘culture’ as some amorphous driver of perverse behaviour.
Sometimes the best way of thinking about something completely familiar is to treat it as wholly alien. If you had to explain a smartphone to somebody recently arrived from the 1990s, how would you describe what it is and, even more importantly, what it does?
In a way, that’s what this article is doing, painstakingly describing both the very familiar, and the aspects of its circumstances we prefer not to know – cheap phones have a high human and environmental price. An arresting starting point is to consider what people routinely carried around with them in 2005, and how much of that is now subsumed in a single ubiquitous device.
That’s fascinating in its own right, but it’s also an essential perspective for any kind of strategic thinking about government (or any other) services, for reasons pithily explained by Benedict Evans:
Periodic reminder: maybe 100 million people use any kind of pro PC app. 3 billion people have a smart phone, and that will rise to 5 billion people in the next few years https://t.co/NUtiAoOfS6
— Benedict Evans (@BenedictEvans) November 18, 2017
Anything that you can't do on mobile/tablet and can do on a PC is something that 90%+ of people couldn't actually do on a PC either.
— Benedict Evans (@BenedictEvans) July 14, 2017
Smartphones are technological marvels. But they are also powerful instruments of sociological change. Understanding them as both is fundamental to understanding them at all.
This post works at two entirely different levels. It is a bold claim of the right to take on the challenges of digital archiving, based on the longevity of the National Archives as an organisation, the trust it has earned and its commitment to its core mission – calling on a splendidly Bayesian historiography.
But it can be read another way, as an extended metaphor for government as a whole. There is the same challenge of managing modernity in long established institutions, the same need to sustain confidence during rapid systemic change. And there is the same need to grow new government services on the foundations of the old ones, drawing on the strength of old capabilities even as new ones are developed.
And that, of course, should be an unsurprising reading. Archival record keeping is changing because government itself is changing, and because archives and government both need to keep pace with the changing world.
It’s interesting to read this Economist editorial alongside Zeynep Tufekci’s TED talk. It focuses on the polarisation of political discourse driven by the persuasion architecture Tufekci describes, resulting in the politics of contempt. The argument is interesting, but perhaps doubly so when the Economist, which is not known for its histrionic rhetoric, concludes that ‘the stakes for liberal democracy could hardly be higher.’
That has implications well beyond politics and persuasion and supports the wider conclusion that algorithmic decision making needs to be understood, not just assumed to be neutral.
This TED talk is a little slow to get going, but increasingly catches fire. The power of algorithmically driven media may start with the crude presentation of adverts for the thing we have already just bought, but the same powers of tracking and micro-segmentation create the potential for social and political manipulation. Advertising-based social media platforms are based on persuasion architectures, and those architectures make no distinction between persuasion to buy and persuasion to vote.
That analysis leads – among other things – to a very different perception of the central risk of artificial intelligence: it is not that technology will develop a will of its own, but that it will embody, almost undetectably, the will of those in a position to use it. The technology itself may, in some senses, be neutral; the business models it supports may well not be.
The idea of digital disruption is familiar enough. Usually that’s seen as a consequence of rapid technological change. Clearly that’s part of the story, but this post argues that the more important challenge is not so much adopting the technology as adapting the people and organisations which use it – and that that is messier and harder to do well. It follows that to be successful digitally, organisations need to be effective at managing organisational change.
This wide ranging and fast moving report hits the Strategic Reading jackpot. It provides a bravura tour of more of the topics covered here than is plausible in a single document, ticking almost every category box along the way. It moves at considerable speed, but without sacrificing coherence or clarity. That sets the context for a set of radical recommendations to government, based on the premise established at the outset that incremental change is a route to mediocrity, that ‘status quo plus’ is a grave mistake.
Not many people could pull that off with such aplomb. The pace and fluency sweep the reader along through the recommendations, which range from the almost obvious to the distinctly unexpected. There is a debate to be had about whether they are the best (or the right) ways forward, but it’s a debate well worth having, for which this is an excellent provocation.
An intriguing natural experiment on the impact of a universal income has been going on in North Carolina for the last twenty years. Nobody intended it – or even noticed it – to begin with; it’s a slightly accidental by-product of a profitable casino. It’s not a universal basic income in the normal sense, in that the amounts involved aren’t enough actually to live on and so it’s not a substitute for employment. But that makes the observed effects even more interesting. Even relatively small amounts can have significant behavioural consequences, including improving health and education outcomes and reducing crime. Larger lump sums at key stages, such as supporting tertiary education, can be more dramatically life changing.
Service design in government is hard not because it is intrinsically more complicated than any other kind of service design (though there are plenty in government who like to think it is), but partly because it is universal (we can’t design to exclude difficult or expensive to serve customers) and partly because often the need for a service comes at a time of crisis (which also means that those difficult or expensive to serve customers are those whose need is greatest).
This post makes a powerful case for that to underpin the whole approach to service design in government, and so to ‘aim not just for seamlessness, but for kindness’. And in an interesting gem of synchronicity, there are strong parallels with Kit Collingwood’s post on why civil servants should become experts on empathy, also published this morning.
The question in the title of this piece can be answered very simply: yes, overwhelmingly bureaucrats do care. The fact that such an answer is not obvious, or not credible, to many people who are not bureaucrats suggests that the better question might be, how is it that uncaringness is an emergent property of systems populated by caring people?
Two rather different groups of bureaucrats are considered here. The first is those furthest from the delivery of services, particularly policy makers, and of them particularly those who learned their penmanship while studying classics at Oxford. There are rather fewer of those than there once were. But there is overwhelming evidence that even those who do not neatly fit the stereotype can be far too distant from the people whose needs their policies are intended to address. The second group is those who deliver services directly to the people who use them, described drawing on the work of Bernardo Zacka, covered here a few weeks ago. They are not rules-applying automata, but subtle observers, judges and influencers of what is going on – and incorporating those perspectives and insights into policy making enhances it immeasurably. That is increasingly happening, but this post is a good reminder that too often the gap remains a wide one.
Product owners play a vital pivotal role in agile delivery, a role which is simple and clear (which is not at all to say easy) in some ways, but still much less clear in others, particularly in thinking about government services. This post uses the differences between public and private sector contexts to illustrate the complex balancing act that is required of product owners in government. That matters not just to the product owners themselves, but to the other players in the wider systems of which they are a part. The underlying intent – the purpose for which a service is being developed – won’t always be a straightforward response to a user need, and the articulation of goals and priorities needs to reflect that. This is a useful step towards building and sharing a common understanding.
Some simple but very powerful thoughts on the intersection of automation and design. The complexity of AI, as with any other kind of complexity, cannot be allowed to get in the way of making the experience of a service simple and comprehensible. Designers have an important role to play in avoiding that risk, reinforced as the post notes by the requirement under GDPR for people to be able to understand and challenge decisions which affect them.
There is a particularly important point – often overlooked – about the need to ensure that transparency and comprehension are attributes of wider social and community networks, not just of individuals’ interaction with automated systems.