Don’t believe the hype: work, robots, history

Michael Weatherburn – Resolution Foundation

This post introduces a longer paper which takes the idea of understanding the future by reflecting on the past to a new level. The central argument is that digital technologies have been shaping the industry sectors the paper examines for a long time, and that this experience strongly suggests that the more dramatic current forecasts about the impact of technology on work are overblown.

The paper’s strengths come from its historical perspective – and, unusually for this topic, from being written by a historian. It is very good on the underlying trends driving changing patterns of work and service delivery, and on distinguishing those trends from their visible manifestations in web services. It does, though, sweep a lot of things together under the general heading of ‘the internet’ in a way which doesn’t always add to understanding – the transformation of global logistics driven by ERP systems is very different from the creation of the gig economy, in both cause and effect.

The paper is less good at providing support for its main conclusion strong enough to justify making it the report’s title. It is true that the impacts of previous technology-driven disruptions have been slower to manifest themselves, and less dramatic, than contemporary hype expected. But the fact that hype is premature does not indicate that the underlying change is insubstantial – the railway mania of the 1840s was not a sign that the impact of railways had peaked. It is also worth considering seriously whether this time it’s different – not because it necessarily is, but because the fact that it hasn’t been in the past is a reason to be cautious, not a reason to be dismissive.

What we talk about when we talk about fair AI

Fionntán O’Donnell – BBC News Labs


This is an exceptionally good non-technical overview of fairness, accountability and transparency in AI. Each issue in turn is systematically disassembled and examined. It is particularly strong on accountability, bringing out clearly that it can only rest on human agency and social and legal context. ‘My algorithm made me do it’ has roughly the same moral and intellectual depth as ‘a big boy made me do it’.

I have one minor, but not unimportant, quibble about the section on fairness. The first item on the suggested checklist is ‘Does the system fit within the company’s ethics?’ That is altogether too narrow a formulation, both in principle and in practice. It’s wrong in practice because there is no particular reason to suppose that a company’s (or any other organisation’s) ethics can be relied on to impose any meaningful standards. But it’s also wrong in principle: the relevant scope of ethical standards is not the producers of an algorithm, but the much larger set of people who use it or have it applied to them.

But that’s a detail. Overall, the combination of clear thinking and practical application makes this well worth reading.
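
For readers who want a sense of what a testable notion of fairness can look like in practice, here is a minimal sketch, in Python, of one standard definition from the fairness literature the post surveys – demographic parity. The data is randomly generated purely for illustration, and this is only one of several competing (and mutually incompatible) definitions.

```python
# A minimal sketch of a "demographic parity" check: do two groups receive
# positive decisions at similar rates? The decisions and group labels below
# are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(1)

decisions = rng.integers(0, 2, size=1000)              # 1 = positive decision
group = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()

print(f"positive rate, group A: {rate_a:.2%}")
print(f"positive rate, group B: {rate_b:.2%}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.2%}")
```

A gap near zero satisfies this particular definition; the harder question, which the post is right to press, is who decides which definition applies, and to whom.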

Machine learning, defined

Sarah Jamie Lewis

There’s a whole emerging literature summarised in those words. But it underlines how much of the current debate is still as much about what machine learning is as what it does.

The impossibility of intelligence explosion

François Chollet – Medium

Last week, there was another flurry of media coverage for AI, as Google’s AlphaZero went from knowing nothing about chess beyond the rules to beating the current (computer) world champion in less than a day. And that inevitably prompts assumptions that very specific domain expertise will somehow translate into ever accelerating general intelligence, until humans become pets of the AI, if they are suffered to live at all.

This timely article systematically debunks that line of thought, demonstrating that intelligence is a social construct and arguing that it is in many ways a property of our civilization, not of each of us as individuals within it. Human IQ (however flawed a measure that is) does not correlate with achievement, let alone with world domination, beyond a fairly narrow range – raw cognition, it seems, is far from being the only relevant component of intelligence.

Or, in a splendid tweet-length dig at those waiting expectantly for the singularity:

Thinking about the future

Ben Hammersley

This is a video of Ben Hammersley talking about the future for 20 minutes, contrasting the rate of growth of digital technologies with the much slower growth in effectiveness of all previous technologies – and the implications that has for social and economic change. It’s easy to do techno gee-whizzery, but Ben goes well beyond that in reflecting on the wider implications of technology change, and how that links to thinking about organisational strategies. He is clear that predicting the future for more than the very short term is impossible, suggesting a useful outer limit of two years. But even being in the present is pretty challenging for most organisations, prompting the question: when you go to work, what year are you living in?

His recipe for then getting to and staying in the future is disarmingly simple. For every task and activity, ask what problem you are solving, and then ask yourself: if I were to solve this problem today, for the first time, using today’s modern technologies, how would I do it? And that question scales: how can new technologies make entire organisations, sectors and countries work better?

It’s worth hanging on for the ten minutes of conversation which follows the talk, in which Ben makes the arresting assertion that the problem is not that organisations which can change have to make an effort to change; it is that organisations which can’t or won’t change must be making a concerted effort to prevent the change.

It’s also well worth watching Ben Evans’s different approach to thinking about some very similar questions – the two are interestingly different and complementary.

Can A.I. Be Taught to Explain Itself?

Cliff Kuang – New York Times

The question which this article tries to answer is a critically important one. Sometimes – often – it matters not just that a decision has been made, but that it has been made correctly and appropriately, taking proper account of the factors which are relevant and no account of factors which are not.

That need is particularly obvious in, but not limited to, government decisions, even more so where a legal entitlement is at stake. But machine learning doesn’t work that way: decisions are emergent properties of systems, and the route to the conclusion may be neither known nor, in any ordinary sense, knowable.

The article introduces a new name for a challenge the discipline has faced from its earliest days, “explainable AI” – with a matching three-letter acronym, XAI. The approach is engagingly recursive: the problem of explaining the decision produced by an AI may itself be a problem of the type susceptible to analysis by AIs. Even if that works, it isn’t, of course, the end of it. We may have to wonder whether we need a third AI system which assures us that the explanation given by the second AI system of the decision made by the first AI system is accurate. And, more prosaically, we would need to understand whether any such explanation is even capable of meeting the new GDPR standards.
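
To make that recursion slightly more concrete, here is a minimal sketch in Python of the simplest version of the idea – a ‘global surrogate’, an interpretable model trained to mimic a black-box one. Everything here (the models, the data, the notion of fidelity) is an illustrative assumption, not anything drawn from the article.

```python
# A minimal sketch of one model explaining another: a shallow decision tree
# (the "explainer") is fitted to the *predictions* of a black-box model.
# Models and data are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The opaque decision-maker.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The explainer: trained on what the black box *does*, not on the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity": how often the explanation agrees with the system it explains.
# A low score would mean the explanation itself cannot be trusted - which is
# exactly the regress worried about above.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))
```

The fidelity score is, in effect, the ‘third system’ question in miniature: a number telling you how far to trust the explanation of the decision.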

But AI isn’t going away. And given that, XAI or something like it is going to be essential.

The morality of artificial intelligence

Moral Maze – BBC

Posts generally appear on Strategic Reading because they make powerful and interesting arguments or bring thought-provoking information to bear. This 45-minute discussion is in a rather different category. Its appearance here is to illustrate the alarmingly low level of thought being applied to some critically important questions. In part, it’s a classic two cultures problem: technologists who don’t seem to see the social and political implications of their work in a hopeless discourse with people who don’t seem to grasp the basics of the technology, in a discussion chaired by somebody capable of introducing the topic by referring to ‘computer algorithms – whatever they are.’ Matthew Taylor stands out among the participants for his ability to comment intelligently on both sides of the divide, while Michael Portillo is at least fluently pessimistic about the intrinsic imperfection of humanity.

Why then mention it at all? Partly to illustrate the scale and complexity of some of the policy questions prompted by artificial intelligence, which are necessarily beyond the scope of the technology itself. Partly also because the current state of maturity of AI makes it hard to get traction on the real problems. Everybody can project their hopes and fears on hypothetical AI developments – it’s not clear that people are agreeing on enough to have meaningful disagreements.

So despite everything, there is some value in listening to this – but with an almost anthropological cast of mind, to get some insight into the lack of sophistication on an important and difficult topic of debate.


We’ll need more than £40m* a year to get free maps – specifically politicians willing to share

Ed Parkes

This is the back story to one of yesterday’s budget announcements – £40 million a year for two years to give UK small businesses access to Ordnance Survey data. If you are interested in that you will find it gripping. But even if you are not, it’s well worth reading as a perceptive – if necessarily speculative – account of how policy gets made.

There are people lobbying for change – some outside government, some within. What they want done has a cost, but more importantly entails changing the way that the problem is thought about, not just in the bit of government which owns the policy, but in the Treasury, which is going to have to pay for it. A decision is made, but not one which is as clear-cut or all-embracing as the advocates would have liked. They have won, in a sense, but what they have won isn’t really what they wanted.

It’s also a good example of why policy making is hard. What seems at first to be a simple issue about releasing data quickly expands into wider questions of industrial and social strategy – is it a good idea to subsidise mapping data, even if the first-order beneficiaries are large non-UK multinationals whose reputation for paying taxes is not the most positive? Is time-limited pump-priming funding the right stimulus, or does it risk creating a surge of activity which then dies away? And, of course, this is a policy with no service design in sight.

Digital archiving: disrupt or be disrupted?

John Sheridan – The National Archives blog

This post works at two entirely different levels. It is a bold claim of right to the challenges of digital archiving, based on the longevity of the National Archives as an organisation, the trust it has earned and its commitment to its core mission – calling on a splendidly Bayesian historiography.

But it can be read another way, as an extended metaphor for government as a whole. There is the same challenge of managing modernity in long established institutions, the same need to sustain confidence during rapid systemic change. And there is the same need to grow new government services on the foundations of the old ones, drawing on the strength of old capabilities even as new ones are developed.

And that, of course, should be an unsurprising reading. Archival record keeping is changing because government itself is changing, and because archives and government both need to keep pace with the changing world.

Do social media threaten democracy? – Scandal, outrage and politics

The Economist

It’s interesting to read this Economist editorial alongside Zeynep Tufekci’s TED talk. It focuses on the polarisation of political discourse driven by the persuasion architectures Tufekci describes, resulting in the politics of contempt. The argument is interesting, but perhaps doubly so when the Economist, which is not known for its histrionic rhetoric, concludes that ‘the stakes for liberal democracy could hardly be higher.’

That has implications well beyond politics and persuasion and supports the wider conclusion that algorithmic decision making needs to be understood, not just assumed to be neutral.

We’re building a dystopia just to make people click on ads

Zeynep Tufekci – TED

This TED talk is a little slow to get going, but increasingly catches fire. The power of algorithmically driven media may start with the crude presentation of adverts for the thing we have just bought, but the same powers of tracking and micro-segmentation create the potential for social and political manipulation. Advertising-based social media platforms are based on persuasion architectures, and those architectures make no distinction between persuasion to buy and persuasion to vote.

That analysis leads – among other things – to a very different perception of the central risk of artificial intelligence: it is not that technology will develop a will of its own, but that it will embody, almost undetectably, the will of those in a position to use it. The technology itself may, in some senses, be neutral; the business models it supports may well not be.

Technology for the Many: A Public Policy Platform for a Better, Fairer Future

Chris Yiu – Institute for Global Change

This wide ranging and fast moving report hits the Strategic Reading jackpot. It provides a bravura tour of more of the topics covered here than is plausible in a single document, ticking almost every category box along the way. It moves at considerable speed, but without sacrificing coherence or clarity. That sets the context for a set of radical recommendations to government, based on the premise established at the outset that incremental change is a route to mediocrity, that ‘status quo plus’ is a grave mistake.

Not many people could pull that off with such aplomb. The pace and fluency sweep the reader along through the recommendations, which range from the almost obvious to the distinctly unexpected. There is a debate to be had about whether they are the best (or the right) ways forward, but it’s a debate well worth having, for which this is an excellent provocation.


Five thoughts on design and AI

Richard Pope – IF

Some simple but very powerful thoughts on the intersection of automation and design. The complexity of AI, as with any other kind of complexity, cannot be allowed to get in the way of making the experience of a service simple and comprehensible. Designers have an important role to play in avoiding that risk, reinforced, as the post notes, by the requirement under GDPR for people to be able to understand and challenge decisions which affect them.

There is a particularly important point – often overlooked – about the need to ensure that transparency and comprehension are attributes of wider social and community networks, not just of individuals’ interaction with automated systems.

Your Data is Being Manipulated

danah boyd – Point

This is the transcript of a conference address, less about the weaknesses of big data and machine learning and more about their vulnerability to attack and to the encoding of systematic biases – and how everything is going to get worse. There are some worrying case studies – how easy will it turn out to be to game the software behind self-driving cars to confuse one road sign with another? – but also some hope, from turning the strength of machine learning against itself, using adversarial testing for models to probe each other’s limits (see the sketch at the end of this entry). Her conclusion though is stark:

We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.

(The preamble promises a link to a video of the whole thing, but what’s there is only one section of the piece; the rest is behind a paywall.)
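
As a purely hypothetical illustration of why the road-sign worry is taken seriously, here is a minimal numpy sketch of the classic fast-gradient-sign attack (Goodfellow et al.) against a toy linear classifier. The model, sizes and epsilon are all invented for the example; the point is only the mechanism – many tiny, individually invisible nudges that add up to a flipped decision.

```python
# A toy fast-gradient-sign attack on a logistic-regression "classifier".
# All weights and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d = 784                                  # a 28x28 "image", flattened
w = rng.normal(scale=0.05, size=d)       # stand-in for trained weights
x = rng.uniform(size=d)                  # pixels in [0, 1]
b = 3.0 - w @ x                          # rig the bias so the model is confident

print(f"confidence before attack: {sigmoid(w @ x + b):.3f}")

# Nudge every pixel by at most eps in whichever direction increases the
# loss for the true label - the sign of the cross-entropy gradient wrt x.
y = 1.0
grad_x = (sigmoid(w @ x + b) - y) * w
eps = 0.2
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

print(f"confidence after attack:  {sigmoid(w @ x_adv + b):.3f}")
print(f"largest pixel change:     {np.max(np.abs(x_adv - x)):.2f}")
```

In high dimensions the many small per-pixel changes compound, which is why a perturbation invisible to a human can be decisive for the model – and why her call to think adversarially is not optional.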

Tales from three disruption “sherpas”

Martin Stewart-Weeks – Public Purpose

This is an artful piece – the first impression is of a slightly unstructured stream of consciousness, but underneath the beguilingly casual style, some great insights are pulled out, as if effortlessly. Halfway down, we are promised ‘three big ideas’, and the fulfilment does not disappoint. The one which struck home most strongly is that we design institutions not to change (or, going further still, the purpose of institutions is not to change). There is value in that – stability and persistence bring real benefits – but it’s then less surprising that those same institutions struggle to adapt to rapidly changing environments. A hint of an answer comes with the next idea: if everything is the product of a design choice, albeit sometimes an unspoken and unacknowledged one, then it is within the power of designers to make things differently.

Who do you trust? How data is helping us decide

Rachel Botsman – The Guardian

A remarkable proportion of the infrastructure of a modern state is there to compensate for the absence of trust. We need to establish identity, establish creditworthiness, have a legal system to deal with broken promises, and employ police officers, security guards and locksmiths, all because we don’t know whether we can trust one another. Most of us, as it happens, are pretty trustworthy. But a few of us aren’t, and it’s really hard to work out which category each of us falls into (to say nothing of the fact that it’s partly situational, so people don’t stay neatly in one or the other).

There are some pretty obvious opportunities for big data to be brought to bear on all that, and this article focuses on a startup trying to do exactly that. That could be a tremendous way of removing friction from the way in which strangers interact, or it could be the occasion for a form of intrusive and unregulated social control (it’s not enough actually to be trustworthy, it’s essential to be able to demonstrate trustworthiness to an algorithm, with all the potential biases that brings with it) – or it could, of course, be both.

Who gets held accountable when a facial recognition algorithm fails? And how?

Ellen Broad – Medium

Facial recognition is the next big area where questions about data ownership, data accuracy and algorithmic bias will arise – and indeed are arising. Some of those questions have very close parallels with their equivalents in other areas of personal data, others are more distinctive – for example, discrimination against black people is endemic in poor algorithm design, but there are some very specific ways in which that manifests itself in facial recognition. This short, sharp post uses the example of a decision just made in Australia to pool driving licence pictures to create a national face recognition database to explore some of the issues around ownership, control and accountability which are of much wider relevance.

The Ethnographic Lens: Perspectives and Opportunities for New Data Dialects

Elizabeth Churchill – EPIC

This is a long and detailed post, making two central points, one more radical and surprising than the other. The less surprising – though it certainly bears repeating – is that qualitative understanding, and particularly ethnographic understanding, is vitally important in understanding people and thus in designing systems and services. The more distinctive point is that qualitative and quantitative data are not independent of each other, and more particularly that quantitative data is not neutral. Or, in the line quoted by Leisa Reichelt which led me to read the article, ‘behind every quantitative measure is a qualitative judgement imbued with a set of situated agendae’. Behind the slightly tortured language of that statement there are some important insights. One is that the interpretation of data is always something we project onto it; it is never wholly latent within it. Another – in part a corollary of the first – is that data cannot be disentangled from ethics. Taken together, that’s a reminder that the spectrum from data to knowledge is one to be traversed carefully and consciously.

This is when robots will start beating humans at every task

Chris Weller – World Economic Forum

This is a beguiling timeline which has won a fair bit of attention for itself. It’s challenging stuff, particularly the point around 2060 when “all human tasks” will apparently be capable of being done by machines. But drawing an apparently precise timeline such as this obscures two massive sources of uncertainty. The first is the implication that people working on artificial intelligence have expertise in predicting the future of artificial intelligence. Their track record suggests that that is far from the case: like nuclear fusion, full-blown AI has been twenty years in the future for decades (and the study underlying this short article strongly implies, though without ever acknowledging, that the results are as much driven by social context as by technical precision). The second is the implication that the nature of human tasks has been understood, and thus that we have some idea of what the automation of all human tasks might actually mean. There are some huge issues barely understood about that (though also something of a no true Scotsman argument – something is AI until it is achieved, at which point it is merely automation). Even if the details can be challenged, though, the trend looks clear: more activities will be more automated – and that has some critical implications, regardless of whether we choose to see it as beating humans.