Reflections 6 months into my work at NHS Digital – part 1

Matt Edgar

There is a sweet spot in any job, or more generally in understanding any organisation, when you still retain a sense of surprise that anything could quite work that way, but have acquired an understanding of why it does, and of the local application of the general rule that all organisations are perfectly designed to get the results they get. Matt has reached the six-month mark working at NHS Digital, and has some good thoughts, which are partly about the specifics of the NHS, but are mostly about humans and service design. This is part 1; there is also a second post, on creating a design team to address those issues.

Where does Product Management sit in Government? Ponderings on ‘ownership’ & organisational design.

Zoe G – Medium

Some further reflections on the place of product management, building on Zoe’s post from a couple of months ago. This time the focus is on where product managers best sit organisationally – are they essentially digital, operational or policy people? The answer, of course, is that that’s not a terribly good question – not because it doesn’t matter, but because what matters doesn’t map uniquely onto any single organisational structure. Indeed, the question about where product managers (or, indeed, a number of other people) belong might better be asked as a question about whether the organisational structures of the last decade are optimal for the next. In the current way of doing things, the risk of losing strategic or policy intent feels like the one to be most concerned about – but, as so often, where you stand depends heavily on where you sit.

What we talk about when we talk about fair AI

Fionntán O’Donnell – BBC News Labs


This is an exceptionally good non-technical overview of fairness, accountability and transparency in AI. Each issue in turn is systematically disassembled and examined.  It is particularly strong on accountability, bringing out clearly that it can only rest on human agency and social and legal context. ‘My algorithm made me do it’ has roughly the same moral and intellectual depth as ‘a big boy made me do it’.

I have one minor, but not unimportant, quibble about the section on fairness. The first item on the suggested checklist is ‘Does the system fit within the company’s ethics?’ That is altogether too narrow a formulation, both in principle and in practice. It’s wrong in practice because there is no particular reason to suppose that a company’s (or any other organisation’s) ethics can be relied on to impose any meaningful standards. But it’s also wrong in principle: the relevant scope of ethical standards is not the producers of an algorithm, but the much larger set of people who use it or have it applied to them.

But that’s a detail. Overall, the combination of clear thinking and practical application makes this well worth reading.

Future Historians Probably Won’t Understand Our Internet

Alexis Madrigal – The Atlantic

Archiving documents is easy. You choose which ones to keep and put them somewhere safe. Archiving the digital equivalents of those documents throws up different practical problems, but is conceptually not very different. But often, and increasingly, our individual and collective digital footprints don’t fit neatly into that model. The relationships between things and the experience of consuming them become very different, less tangible and less stable. As this article discusses, there is an archive of Twitter in theory, but not in any practical sense, and not one of Facebook at all. And even if there were, the constant tweaking of interfaces and algorithms and increasingly finely tuned individualisation make it next to impossible to get hold of in any meaningful way.

So in this new world, perhaps archivists need to map, monitor and even create both views of the content and records of what it means to experience it. And that will be true not just of social media but increasingly of knowledge management in government and other organisations.

Machine learning, defined

Sarah Jamie Lewis

There’s a whole emerging literature summarised in those words. But it underlines how much of the current debate is still as much about what machine learning is as about what it does.

The invention of jobs

Stian Westlake

It’s often tempting – because it’s easy – to think that the way things currently are is the necessary and natural way of their being. That can be a useful and pragmatic assumption. Until it isn’t.

The impossibility of intelligence explosion

François Chollet – Medium

Last week, there was another flurry of media coverage for AI, as Google’s AlphaZero went from knowing nothing of chess beyond the rules to beating the current (computer) world champion after less than a day of training. And that inevitably prompts assumptions that very specific domain expertise will somehow translate into ever accelerating general intelligence, until humans become pets of the AI, if they are suffered to live at all.

This timely article systematically debunks that line of thought, demonstrating that intelligence is a social construct and arguing that it is in many ways a property of our civilization, not of each of us as individuals within it. Human IQ (however flawed a measure that is) does not correlate with achievement, let alone with world domination, beyond a fairly narrow range – raw cognition, it seems, is far from being the only relevant component of intelligence.

The same thought is distilled into a splendid tweet-length dig at those waiting expectantly for the singularity.

Ten Year Futures

Benedict Evans

Interesting ideas on how to think about the future seem to come in clumps. So alongside Ben Hammersley’s reflections, it’s well worth watching and listening to this presentation of a ten year view of emerging technologies and their implications. The approaches of the two talks are very different, but interestingly, they share the simple but powerful technique of looking backwards as a good way of understanding what we might be seeing when we look forwards.

They also both talk about the multiplier effect of innovation: the power of steam engines is not that they replace one horse, it is that each one replaces many horses, and in doing so makes it possible to do things which would be impossible for any number of horses. In the same way, machine learning is a substitute for human learning, but one operating at a scale and pace which any number of humans could not imitate.

This one is particularly good at distinguishing between the maturity of the technology and the maturity of the use and impact of the technology. Machine learning, and especially the way it allows computers to ‘see’ as well as to ‘learn’ and ‘count’, is well along a technology development S-curve, but at a much earlier point of the very different technology deployment S-curve, and the same broad pattern applies to other emerging technologies.


Thinking about the future

Ben Hammersley

This is a video of Ben Hammersley talking about the future for 20 minutes, contrasting the rate of growth of digital technologies with the much slower growth in effectiveness of all previous technologies – and the implications that has for social and economic change. It’s easy to do techno gee-whizzery, but Ben goes well beyond that in reflecting on the wider implications of technology change, and how that links to thinking about organisational strategies. He is clear that predicting the future for more than the very short term is impossible, suggesting a useful outer limit of two years. But even being in the present is pretty challenging for most organisations, prompting the question: when you go to work, what year are you living in?

His recipe for then getting to and staying in the future is disarmingly simple. For every task and activity, ask what problem you are solving, and then ask yourself: if I were to solve this problem today, for the first time, using today’s modern technologies, how would I do it? And that question scales: how can new technologies make entire organisations, sectors and countries work better?

It’s worth hanging on for the ten minutes of conversation which follow the talk, in which Ben makes the arresting assertion that the problem is not that organisations which can change have to make an effort to change; it is that organisations which can’t or won’t change must be making a concerted effort to prevent change.

It’s also well worth watching Benedict Evans’ different approach to thinking about some very similar questions – the two are interestingly different and complementary.

Teaching Digital at HKS: A Roadmap

David Eaves – digitalHKS

This is the entry page for a series of posts about teaching digital at the Harvard Kennedy School (of government). This isn’t the course itself, but a series of reflections on designing and delivering it. It is, though, filled with insights about what it is useful to know and to think about, and how the various components fit together and reinforce each other to meet the needs of students with different backgrounds and interests in government.

The Problem With Finding Answers

Paul Taylor

There are some who argue that the only test of progress is delivery and that the only thing which can be iterated is a live service. That is a horribly misguided approach. There is no point in producing a good answer to a bad question, and lots to be gained from investing time and energy in understanding the question before attempting to answer it. Even for pretty simple problems, badly formed initial questions can generate an endless – and expensive – chain of solutions which would never have needed to exist if that first question had been a better one. Characteristically, Paul Taylor asks some better questions about asking better questions.

Digital government? Sort of.

Laurence – Global Village Governance

Nothing ever quite beats the description of a service by somebody who has just used it – or tried to use it. This is a good example of the genre – applying for ‘National Super’ (or state pension) in New Zealand. As turns out to be the case surprisingly often, even if all or most of the steps work well enough individually, that’s still a very long way from the end to end service working well. And where, as in this case, one step in the process fails, the process as a whole goes down with it. One common problem, which we may also be seeing in this example, is that service providers are at constant risk of defining their service more narrowly than their service users do.

What the future of work will mean for jobs, skills, and wages

James Manyika, Susan Lund, Michael Chui, Jacques Bughin, Jonathan Woetzel, Parul Batra, Ryan Ko, and Saurabh Sanghvi – McKinsey

Much is being written about how robots and automation either will or won’t displace lots of employment, often with breathless excitement as a substitute for thoughtful analysis. This report brings a more measured approach, in every sense. Its focus, as seems increasingly sensible, is less on the end point of change (which can’t be known in any case) and much more on the pace and direction of change. It also pays as much attention to the jobs which will be created as to those which might be displaced, which must be right, as it is the net effect which really matters. The conclusion is that, up to 2030, jobs will be created in sufficient numbers to offset the effects of automation – but that this overall stability may involve 375 million people around the world, 6.6 million of them in the UK, being displaced from their current occupations, in part because an estimated 8 or 9% of jobs in 2030 will be in occupations which have not existed before.

Collective Intelligence Can Change the World

Geoff Mulgan – Bloomberg

Geoff Mulgan comes at the power of collective intelligence in this article from an interestingly different direction from that taken by Tim Harford. The underlying thought is the same: that individuals are subject to false confidence and confirmation bias, and that tempering that through more collective approaches leads to better results. This article, though, is more interested in the systems which embody that intelligence than in diluting individuality through diverse teams. Regulation and audit, for example, are intended to discourage aberrant behaviour by encapsulating shared wisdom about how things should be done, in ways which are both efficient and effective in themselves and which also counter illusion and self-deception.

This is an extract from Geoff’s new book, Big Mind: How Collective Intelligence Can Change Our World.

True diversity means looking for the knife in a drawer of spoons

Tim Harford – The Undercover Economist

Teams with diverse capabilities perform better than teams which are too homogeneous. That much isn’t – or shouldn’t be – controversial. But this post adds two succinct insights to that starting point. The first is that despite the known value of diversity, recruitment and team formation tend to optimise for convergence rather than divergence – and that’s got a lot to do with the fact that diversity is a property of teams, not of individuals. So the more people are recruited in groups, the easier it should be to ensure that between them the successful candidates cover the full range of the needed skills and experience. The second is that homogeneous teams tend to think they are performing better, but actually perform worse, than teams which include a divergent outsider. A degree of social discomfort is a price which turns out to be well worth paying for better performance.

This is one of two articles worth reading together – the other is Geoff Mulgan’s on collective intelligence – as they cover some closely related ground from quite a different starting point.

Putting users first is not the answer to everything

Cassie Robinson – Medium

Starting with user needs has become the axiomatically correct way of framing almost any government design problem. That’s a great deal better than not starting with user needs, but it also carries some very real risks and problems. One is that it tends towards a very individualistic approach: the user is a lone individual, whose only relevant relationship is with the service under consideration. The wider social network, within which we are all nodes, doesn’t get much of a look in. Another is that we risk prioritising the completion of a process over the achievement of an outcome. Both of those are addressed in this post, which directly challenges what has become the conventional starting point.

But perhaps what most distinguishes public services (in the widest sense) from other kinds of service is that there are often social needs which don’t align with individual needs. The post refers to moral and collective needs, though it’s not entirely clear either whether ‘moral’ is a helpful label in this context or whether in practice moral and collective are being used as synonyms.

Can A.I. Be Taught to Explain Itself?

Cliff Kuang – New York Times

The question which this article tries to answer is a critically important one. Sometimes – often – it matters not just that a decision has been made, but that it has been made correctly and appropriately, taking proper account of the factors which are relevant and no account of factors which are not.

That need is particularly obvious in, but not limited to, government decisions, even more so where a legal entitlement is at stake. But machine learning doesn’t work that way: decisions are emergent properties of systems, and the route to the conclusion may be neither known nor, in any ordinary sense, knowable.

The article introduces a new name for a challenge which has faced the discipline from its earliest days – “explainable AI” – with a matching three letter acronym, XAI. The approach is engagingly recursive. The problem of explaining the decision produced by an AI may itself be a problem of the type susceptible to analysis by AIs. Even if that works, it isn’t of course the end of it. We may have to wonder whether we need a third AI system which assures us that the explanation given by the second AI system of the decision made by the first AI system is accurate. And more prosaically, we would need to understand whether any such explanation is even capable of meeting the new GDPR standards.
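To make the ‘second AI explains the first’ idea concrete, here is a minimal sketch of one common technique in this space, a global surrogate model: a deliberately simple, human-readable model is trained to mimic the predictions of an opaque one. This is an illustration of the general idea, not the method discussed in the article; it assumes Python with scikit-learn, and the data and names in it are invented for the sketch.

```python
# A global surrogate: train an interpretable model to approximate an
# opaque one, then read the simple model's rules as an "explanation".
# Illustrative sketch only - assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for real decision data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "first AI": accurate, but its reasoning is hard to inspect.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The "second AI": a shallow tree trained on the black box's *predictions*,
# so its rules approximate how the black box behaves, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the explanation agrees with the model it explains.
# (This is where the recursion bites: who checks this number?)
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

The recursion the article worries about is visible in the fidelity score: the surrogate’s rules are only trustworthy to the extent that it faithfully tracks the original model, and that faithfulness itself has to be measured and believed.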

But AI isn’t going away. And given that, XAI or something like it is going to be essential.

Three ways to run better discoveries

Will Myddelton

If we can’t get discoveries right, we won’t get anything else right that builds on their findings. That becomes ever more important as the language – if not always the rigour – of agile expands beyond its original boundaries. This short post introduces three others which look at planning, starting and finishing a discovery. They aren’t a guide to the tasks and activities of a discovery; they are instead a very powerful and practical guide to thinking about how to make a discovery work. There is a lot here for people who know they are doing discoveries; there may be even more for people who don’t necessarily think of that as what they are doing at all.

It is also, not at all incidentally, beautifully written with not a word wasted. These things matter.

The morality of artificial intelligence

Moral Maze – BBC

Posts generally appear on Strategic Reading because they make powerful and interesting arguments or bring thought provoking information to bear. This 45 minute discussion is in a rather different category. Its appearance here is to illustrate the alarmingly low level of thought being applied to some critically important questions. In part, it’s a classic two cultures problem: technologists who don’t seem to see the social and political implications of their work in a hopeless discourse with people who don’t seem to grasp the basics of the technology, in a discussion chaired by somebody capable of introducing the topic by referring to ‘computer algorithms – whatever they are.’ Matthew Taylor stands out among the participants for his ability to comment intelligently on both sides of the divide, while Michael Portillo is at least fluently pessimistic about the intrinsic imperfection of humanity.

Why then mention it at all? Partly to illustrate the scale and complexity of some of the policy questions prompted by artificial intelligence, which are necessarily beyond the scope of the technology itself. Partly also because the current state of maturity of AI makes it hard to get traction on the real problems. Everybody can project their hopes and fears on hypothetical AI developments – it’s not clear that people are agreeing on enough to have meaningful disagreements.

So despite everything, there is some value in listening to this – but with an almost anthropological cast of mind, to get some insight into the lack of sophistication on an important and difficult topic of debate.


We’ll need more than £40m* a year to get free maps – specifically politicians willing to share

Ed Parkes

This is the back story to one of yesterday’s budget announcements – £40 million a year for two years to give UK small businesses access to Ordnance Survey data. If you are interested in that you will find it gripping. But even if you are not, it’s well worth reading as a perceptive – if necessarily speculative – account of how policy gets made.

There are people lobbying for change – some outside government, some within. What they want done has a cost, but more importantly entails changing the way that the problem is thought about, not just in the bit of government which owns the policy, but in the Treasury, which is going to have to pay for it. A decision is made, but not one which is as clear-cut or all-embracing as the advocates would have liked. They have won, in a sense, but what they have won isn’t really what they wanted.

It’s also a good example of why policy making is hard. What seems at first to be a simple issue about releasing data quickly expands into wider questions of industrial and social strategy – is it a good idea to subsidise mapping data, even if the first-order beneficiaries are large non-UK multinationals whose reputation for paying taxes is not the most positive? Is time-limited pump-priming funding the right stimulus, or does it risk creating a surge of activity which then dies away? And, of course, this is a policy with no service design in sight.