There is a sweet spot in any job, or more generally in understanding any organisation, when you still retain a sense of surprise that anything could quite work that way, but have acquired an understanding of why it does, and of the local application of the general rule that all organisations are perfectly designed to get the results they get. Matt has reached the six-month mark working in NHS Digital, and has some good thoughts, which are partly about the specifics of the NHS, but are mostly about humans and service design. This is part 1; there is also a second post on creating a design team to address those issues.
Some further reflections on the place of product management, building on Zoe’s post from a couple of months ago. This time the focus is on where product managers best sit organisationally – are they essential, digital, operational or policy people? The answer, of course, is that that’s not a terribly good question – not because it doesn’t matter, but because what matters doesn’t uniquely map to single organisational structures. Indeed, the question about where product managers (or, indeed, a number of other people) belong might better be asked as a question about whether the organisational structures of the last decade are optimal for the next. In the current way of doing things, the risk of losing strategic or policy intents feels like the one to be most concerned about – but, as so often, where you stand depends heavily on where you sit.
This is an exceptionally good non-technical overview of fairness, accountability and transparency in AI. Each issue in turn is systematically disassembled and examined. It is particularly strong on accountability, bringing out clearly that it can only rest on human agency and social and legal context. ‘My algorithm made me do it’ has roughly the same moral and intellectual depth as ‘a big boy made me do it’.
I have one minor, but not unimportant, quibble about the section on fairness. The first item on the suggested checklist is ‘Does the system fit within the company’s ethics?’ That is altogether too narrow a formulation, both in principle and in practice. It’s wrong in practice because there is no particular reason to suppose that a company’s (or any other organisation’s) ethics can be relied on to impose any meaningful standards. But it’s also wrong in principle: the relevant scope of ethical standards is not the producers of an algorithm, but the much larger set of people who use it or have it applied to them.
But that’s a detail. Overall, the combination of clear thinking and practical application makes this well worth reading.
Archiving documents is easy. You choose which ones to keep and put them somewhere safe. Archiving the digital equivalents of those documents throws up different practical problems, but is conceptually not very different. But often, and increasingly, our individual and collective digital footprints don’t fit neatly into that model. The relationships between things and the experience of consuming them become very different, less tangible and less stable. As this article discusses, there is an archive of Twitter in theory, but not in any practical sense, and not one of Facebook at all. And even if there were, the constant tweaking of interfaces and algorithms and increasingly finely tuned individualisation make it next to impossible to get hold of in any meaningful way.
So in this new world, perhaps archivists need to map, monitor and even create both views of the content and records of what it means to experience it. And that will be true not just of social media but increasingly of knowledge management in government and other organisations.
Machine Learning: Everything you love about the complex & intuitive biases of people combined with the cold, hard, low-cost and efficient decision making of a computer.
— Sarah Jamie Lewis (@SarahJamieLewis) December 11, 2017
There’s a whole emerging literature summarised in those words. But it underlines how much of the current debate is still as much about what machine learning is as what it does.
It’s often tempting – because it’s easy – to think that the way things currently are is the necessary and natural way of their being. That can be a useful and pragmatic assumption. Until it isn’t.
Last week, there was another flurry of media coverage for AI, as Google’s AlphaZero went from no knowledge of the rules of chess to beating the current (computer) world champion in less than a day. And that inevitably prompts assumptions that very specific domain expertise will somehow translate into ever accelerating general intelligence, until humans become pets of the AI, if they are suffered to live at all.
This timely article systematically debunks that line of thought, demonstrating that intelligence is a social construct and arguing that it is in many ways a property of our civilization, not of each of us as individuals within it. Human IQ (however flawed a measure that is) does not correlate with achievement, let alone with world domination, beyond a fairly narrow range – raw cognition, it seems, is far from being the only relevant component of intelligence.
Or in a splendid tweet-length dig at those waiting expectantly for the singularity:
Dolphins achieve super-human swimming ability in < 24 hours! It’s just a matter of time before they surpass humans in every task. #DolphinZero
— Daniel Lowd @ NIPS 2017 (@dlowd) December 7, 2017
Interesting ideas on how to think about the future seem to come in clumps. So alongside Ben Hammersley’s reflections, it’s well worth watching and listening to this presentation of a ten year view of emerging technologies and their implications. The approaches of the two talks are very different, but interestingly, they share the simple but powerful technique of looking backwards as a good way of understanding what we might be seeing when we look forwards.
They also both talk about the multiplier effect of innovation: the power of steam engines is not that they replace one horse, it is that each one replaces many horses, and in doing so makes it possible to do things which would be impossible for any number of horses. In the same way, machine learning is a substitute for human learning, but operating at a scale and pace which any number of humans could not imitate.
This one is particularly good at distinguishing between the maturity of the technology and the maturity of the use and impact of the technology. Machine learning, and especially the way it allows computers to ‘see’ as well as to ‘learn’ and ‘count’, is well along a technology development S-curve, but at a much earlier point of the very different technology deployment S-curve, and the same broad pattern applies to other emerging technologies.
This is a video of Ben Hammersley talking about the future for 20 minutes, contrasting the rate of growth of digital technologies with the much slower growth in effectiveness of all previous technologies – and the implications that has for social and economic change. It’s easy to do techno gee-whizzery, but Ben goes well beyond that in reflecting about the wider implications of technology change, and how that links to thinking about organisational strategies. He is clear that predicting the future for more than the very short term is impossible, suggesting a useful outer limit of two years. But even being in the present is pretty challenging for most organisations, prompting the question, when you go to work, what year are you living in?
His recipe for then getting to and staying in the future is disarmingly simple. For every task and activity, ask what problem you are solving, and then ask yourself this question: if I were to solve this problem today, for the first time, using today’s modern technologies, how would I do it? And that question scales: how can new technologies make entire organisations, sectors and countries work better?
It’s worth hanging on for the ten minutes of conversation which follows the talk, in which Ben makes the arresting assertion that the problem is not that organisations which can change have to make an effort to change, it is that organisations which can’t or won’t change must be making a concerted effort to prevent the change.
It’s also well worth watching Ben Evans’s different approach to thinking about some very similar questions – the two are interestingly different and complementary.
This is the entry page for a series of posts about teaching digital at the Harvard Kennedy School (of government). This isn’t the course itself, but a series of reflections on designing and delivering it. It is, though, filled with insights about what it is useful to know and to think about, and how the various components fit together and reinforce each other to meet the needs of students with different backgrounds and interests in government.
There are some who argue that the only test of progress is delivery and that the only thing which can be iterated is a live service. That is a horribly misguided approach. There is no point in producing a good answer to a bad question, and lots to be gained from investing time and energy in understanding the question before attempting to answer it. Even for pretty simple problems, badly formed initial questions can generate an endless – and expensive – chain of solutions which would never have needed to exist if that first question had been a better one. Characteristically, Paul Taylor asks some better questions about asking better questions.