How to Build Self-Conscious Artificial Intelligence

Hugh Howey – Wired

The title of this article is a bit of a false flag, since it could easily have continued ‘… and why that would be a really bad idea’. It is an interesting – though considerably longer – complement to the argument that the idea of general artificial intelligence is based on a false analogy between human brains and computers. This article takes the related but distinct approach that self-consciousness exists as compensation for structural faults in human brains – particularly that having a sophisticated theory of mind is a useful evolutionary trait – and that it would be pointless (rather than impossible) to replicate it, because perhaps the most notable thing about human introspection about consciousness is how riddled it is with error and self-contradiction. That being so, AI will continue to get more powerful and sophisticated. But it won’t become more human, because it makes no sense to make it so.

 

204 examples of artificial intelligence in action

Chris Yiu

This is a page of links which provides over two hundred examples of artificial intelligence in action – ranging from mowing the lawn, through managing hedge funds and sorting cucumbers all the way to writing AI software. Without clicking a single one of the links, it provides a powerful visual indicator of how pervasive AI has already become. There is inevitably a bit of a sense of never mind the quality, feel the width – but the width is itself impressive, and the quality is often racing up as well.

There is a linked Twitter account which retweets AI-related material – though in a pleasing inversion, it shows every sign of being human-curated.

Mind the gap

Charlotte Augst – Kaleidoscope Health

One ever-present risk in thinking strategically is to be too strategic. Or rather, to be too abstract, losing sight of the messiness of today in the excitement of the far tomorrows. Convincing strategies address recognisable problems (even if making the problems recognisable is part of the strategic process) and, perhaps most importantly, convincing strategies get to the future by starting in the present. There is no value in the most glorious of futures if you can’t get there from here.

This post is a brilliant example of why that is. How, it asks, with the clear-sighted perspective of very personal experience, can we hope to deliver a future strategy without understanding and addressing the gap between where we are and where we want to be?

Ethics and ethicists

Ellen Broad

This is a short tweet thread making the point that ethics in AI – and in technology generally – needs to be informed by ethical thinking developed in other contexts (and over several millennia). That should be so obvious as to be hardly worth saying, but it has often become one of those questions which people doing new things fall into the trap of believing themselves to be solving for the first time.

How do we embed digital transformation in government?

Matthew Cain – Medium

There are increasing numbers of government services which are digital. But that doesn’t make for a digital government. This post is a challenge to set a greater ambition, to make government itself digitally transformed. As a manifesto or a call to arms, there’s a lot here: a government with the characteristics envisaged here would be a better government. But in general, the problem with transforming government has not been with describing how government might work better, but with navigating the route to get there – and that makes the question in the title critically important. Ultimately though, the digital bit may be a critical catalyst but is not the goal – and we need to be clear both about the nature of that goal and about the fact that digital is a means of transforming; not that transforming is a means to be digital. This post describes powerful tools for realising an ambition for better government – but they will have effect only if both ambition and opportunity are there to use them. On that, it’s well worth reading this alongside Matthew’s own post earlier this year commenting on the government’s digital strategy.

Government Services Look Radically Different in the Customer’s Eyes

Peter Jackson – IDEO Stories

Not so many years ago, this would have been a very radical post. It is a measure of progress that the core message – services should be designed with an understanding of customers – now seems obvious. But it’s still well worth reading both for the overall clarity with which the case is made, and for some neat turns of phrase. Governments tend to start with a policy which may eventually be expressed as a service; customers experience a service and will discern dimly – if at all – the policy which ultimately drives it. And those two things are not only different in themselves, they can also have different cycle times: ‘just because a major new policy only comes around once in a lifetime, doesn’t mean you only have one chance to implement it.’

Why the robot boost is yet to arrive

Tim Harford – the Undercover Economist

One of the problems with predicting the future is working out when it’s going to happen. That’s not quite as silly as it sounds: there is an easy assumption that the impact of change follows closely on the change itself, but that assumption is often wrong. That in turn can lead to the equally wrong assumption that because there has been limited impact in the short term, the impact will be equally limited in the long term. As Robert Solow famously put it in 1987, ‘You can see the computer age everywhere but in the productivity statistics.’ In this post, Tim Harford updates the thought from computers to robots. The robot takeover isn’t obviously consistent with high employment and low productivity growth, but that is what we can currently observe. The conclusion – and the resolution of the paradox – is disarmingly simple, if rather frustrating: wait and see.

Don’t believe the hype: work, robots, history

Michael Weatherburn – Resolution Foundation

This post introduces a longer paper which takes the idea of understanding the future by reflecting on the past to a new level. The central argument is that digital technologies have been influencing and shaping the industry sectors the paper examines for a long time, and that that experience strongly suggests that the more dramatic current forecasts about the impact of technology on work are overblown.

The paper’s strengths come from its historical perspective – and, unusually for this topic, from being written by a historian. It is very good on the underlying trends driving changing patterns of work and service delivery and distinguishing them from the visible emanations of them in web services. It does though sweep a lot of things together under the general heading of ‘the internet’ in a way which doesn’t always add to understanding – the transformation of global logistics driven by ERP systems is very different from the creation of the gig economy in both cause and effect.

The paper is less good at providing support for its main conclusion strong enough to justify making it the report’s title. It is true that the impacts of previous technology-driven disruptions have been slower to manifest themselves, and less dramatic, than contemporary hype expected. But the fact that hype is premature does not indicate that the underlying change is insubstantial – the railway mania of the 1840s was not a sign that the impact of railways had peaked. It is also worth considering seriously whether this time it’s different – not because it necessarily is, but because the fact that it hasn’t been in the past is a reason to be cautious, not a reason to be dismissive.

Reflections 6 months into my work at NHS Digital – part 1

Matt Edgar

There is a sweet spot in any job, or more generally in understanding any organisation, when you still retain a sense of surprise that anything could quite work that way, but have acquired an understanding of why it does, and of the local application of the general rule that all organisations are perfectly designed to get the results they get. Matt has reached the six-month mark working in NHS Digital, and has some good thoughts, which are partly about the specifics of the NHS, but are mostly about humans and service design. This is part 1; there is also a second post on creating a design team to address those issues.

Where does Product Management sit in Government? Ponderings on ‘ownership’ & organisational design.

Zoe G – Medium

Some further reflections on the place of product management, building on Zoe’s post from a couple of months ago. This time the focus is on where product managers best sit organisationally – are they essentially digital, operational or policy people? The answer, of course, is that that’s not a terribly good question – not because it doesn’t matter, but because what matters doesn’t uniquely map to single organisational structures. Indeed, the question about where product managers (or, indeed, a number of other people) belong might better be asked as a question about whether the organisational structures of the last decade are optimal for the next. In the current way of doing things, the risk of losing strategic or policy intents feels like the one to be most concerned about – but, as so often, where you stand depends heavily on where you sit.

What we talk about when we talk about fair AI

Fionntán O’Donnell – BBC News Labs

This is an exceptionally good non-technical overview of fairness, accountability and transparency in AI. Each issue in turn is systematically disassembled and examined. It is particularly strong on accountability, bringing out clearly that it can only rest on human agency and social and legal context. ‘My algorithm made me do it’ has roughly the same moral and intellectual depth as ‘a big boy made me do it’.

I have one minor, but not unimportant, quibble about the section on fairness. The first item on the suggested checklist is ‘Does the system fit within the company’s ethics?’ That is altogether too narrow a formulation, both in principle and in practice. It’s wrong in practice because there is no particular reason to suppose that a company’s (or any other organisation’s) ethics can be relied on to impose any meaningful standards. But it’s also wrong in principle: the relevant scope of ethical standards is not the producers of an algorithm, but the much larger set of people who use it or have it applied to them.

But that’s a detail. Overall, the combination of clear thinking and practical application makes this well worth reading.

Future Historians Probably Won’t Understand Our Internet

Alexis Madrigal – The Atlantic

Archiving documents is easy. You choose which ones to keep and put them somewhere safe. Archiving the digital equivalents of those documents throws up different practical problems, but is conceptually not very different. But often, and increasingly, our individual and collective digital footprints don’t fit neatly into that model. The relationships between things and the experience of consuming them become very different, less tangible and less stable. As this article discusses, there is an archive of Twitter in theory, but not in any practical sense, and not one of Facebook at all. And even if there were, the constant tweaking of interfaces and algorithms and increasingly finely tuned individualisation make it next to impossible to get hold of in any meaningful way.

So in this new world, perhaps archivists need to map, monitor and even create both views of the content and records of what it means to experience it. And that will be true not just of social media but increasingly of knowledge management in government and other organisations.

Machine learning, defined

Sarah Jamie Lewis

There’s a whole emerging literature summarised in those words. But it underlines how much of the current debate is still as much about what machine learning is as what it does.

The invention of jobs

Stian Westlake

It’s often tempting – because it’s easy – to think that the way things currently are is the necessary and natural way of their being. That can be a useful and pragmatic assumption. Until it isn’t.

The impossibility of intelligence explosion

François Chollet – Medium

Last week, there was another flurry of media coverage for AI, as Google’s AlphaZero went from no knowledge of the rules of chess to beating the current (computer) world champion in less than a day. And that inevitably prompts assumptions that very specific domain expertise will somehow translate into ever accelerating general intelligence, until humans become pets of the AI, if they are suffered to live at all.

This timely article systematically debunks that line of thought, demonstrating that intelligence is a social construct and arguing that it is in many ways a property of our civilisation, not of each of us as individuals within it. Human IQ (however flawed a measure that is) does not correlate with achievement, let alone with world domination, beyond a fairly narrow range – raw cognition, it seems, is far from being the only relevant component of intelligence.

The article closes with a splendid tweet-length dig at those waiting expectantly for the singularity.

Ten Year Futures

Benedict Evans

Interesting ideas on how to think about the future seem to come in clumps. So alongside Ben Hammersley’s reflections, it’s well worth watching and listening to this presentation of a ten year view of emerging technologies and their implications. The approaches of the two talks are very different, but interestingly, they share the simple but powerful technique of looking backwards as a good way of understanding what we might be seeing when we look forwards.

They also both talk about the multiplier effect of innovation: the power of steam engines is not that they replace one horse, it is that each one replaces many horses, and in doing so makes it possible to do things which would be impossible for any number of horses. In the same way, machine learning is a substitute for human learning, but operating at a scale and pace which any number of humans could not imitate.

This one is particularly good at distinguishing between the maturity of the technology and the maturity of the use and impact of the technology. Machine learning, and especially the way it allows computers to ‘see’ as well as to ‘learn’ and ‘count’, is well along a technology development S-curve, but at a much earlier point of the very different technology deployment S-curve, and the same broad pattern applies to other emerging technologies.

 

Thinking about the future

Ben Hammersley

This is a video of Ben Hammersley talking about the future for 20 minutes, contrasting the rate of growth of digital technologies with the much slower growth in effectiveness of all previous technologies – and the implications that has for social and economic change. It’s easy to do techno gee-whizzery, but Ben goes well beyond that in reflecting about the wider implications of technology change, and how that links to thinking about organisational strategies. He is clear that predicting the future for more than the very short term is impossible, suggesting a useful outer limit of two years. But even being in the present is pretty challenging for most organisations, prompting the question, when you go to work, what year are you living in?

His recipe for then getting to and staying in the future is disarmingly simple. For every task and activity, ask what problem you are solving, and then ask yourself: if I were to solve this problem today, for the first time, using today’s modern technologies, how would I do it? And that question scales: how can new technologies make entire organisations, sectors and countries work better?

It’s worth hanging on for the ten minutes of conversation which follows the talk, in which Ben makes the arresting assertion that the problem is not that organisations which can change have to make an effort to change, it is that organisations which can’t or won’t change must be making a concerted effort to prevent the change.

It’s also well worth watching Benedict Evans’s different approach to thinking about some very similar questions – the two are interestingly different and complementary.

Teaching Digital at HKS: A Roadmap

David Eaves – digitalHKS

This is the entry page for a series of posts about teaching digital at the Harvard Kennedy School (of government). This isn’t the course itself, but a series of reflections on designing and delivering it. It is though filled with insights about what it is useful to know and to think about, and how the various components fit together and reinforce each other to meet the needs of students with different backgrounds and interests in government.

The Problem With Finding Answers

Paul Taylor

There are some who argue that the only test of progress is delivery and that the only thing which can be iterated is a live service. That is a horribly misguided approach. There is no point in producing a good answer to a bad question, and lots to be gained from investing time and energy in understanding the question before attempting to answer it. Even for pretty simple problems, badly formed initial questions can generate an endless – and expensive – chain of solutions which would never have needed to exist if that first question had been a better one. Characteristically, Paul Taylor asks some better questions about asking better questions.

Digital government? Sort of.

Laurence – Global Village Governance

Nothing ever quite beats the description of a service by somebody who has just used it – or tried to use it. This is a good example of the genre – applying for ‘National Super’ (or state pension) in New Zealand. As turns out to be the case surprisingly often, even if all or most of the steps work well enough individually, that’s still a very long way from the end to end service working well. And where, as in this case, one step in the process fails, the process as a whole goes down with it. One common problem, which we may also be seeing in this example, is that service providers are at constant risk of defining their service more narrowly than their service users do.