My phone connects to the car’s Bluetooth system just fine, as you’d expect. Not long ago it began to give me helpful information as soon as it connected: “15 mins to get home, take Potter Row. Traffic is light.” Ok, it’s figured out where “home” is and that when I start the car somewhere else, I usually want to go there. It’s not that useful, frankly, but it’s ok. Sometimes it makes bizarre but obvious gaffes, and I just laugh and ignore them. All moderately smart and quite benign, occasionally useful. Probably some algorithms are involved somewhere, but that’s ok.

About a year ago I was doing some market research and ran a really productive Google search that identified two of the things I was looking for, from just a simple three-word search. I happened to have done that on my tablet, so I went across to my laptop to take it further with a decent keyboard and screen, and I ran the same three-word search. Neither of the things I was expecting came up. Bizarre.

Uh oh?

I eventually traced the problem to a factor I didn’t expect. I’d used Safari on my tablet, and Chrome on my laptop, and I’d logged into my Gmail account in that Chrome browser. As soon as I logged out and ran the three-word search again, up came the two things I’d found with Safari on the tablet. Some algorithm connected to my Gmail account had steered the search away from the results I was after. I’d never have known of those two items if I hadn’t happened to run the search elsewhere. It’s widely understood that Google and others steer you towards the things they want you to see, and that’s ok. At least, it’s clearly understood and the rationale seems acceptable.
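If you want to try the comparison yourself, it’s easy to sketch. Below is a minimal illustration in Python of the experiment I stumbled into: run the same query twice, once “anonymously” and once with a logged-in session cookie, and compare which links come back. Everything here is a placeholder assumption on my part – the cookie name, the headers, the crude link-scraping regex – and real Google results pages are heavily scripted and often behave differently for automated clients, so treat this as a sketch of the idea, not a working scraper.

```python
# Sketch: does the same query return different links when the engine
# thinks it knows who you are? The cookie name "SID", the headers and
# the regex parsing are illustrative assumptions, not Google's real API.
import re
import requests

QUERY = "example three word"  # stand-in for the actual search terms
URL = "https://www.google.com/search"
HEADERS = {"User-Agent": "Mozilla/5.0"}  # bare requests are often rejected

def result_links(cookies=None):
    """Fetch a results page and pull out outbound hrefs with a crude regex."""
    resp = requests.get(URL, params={"q": QUERY}, headers=HEADERS,
                        cookies=cookies, timeout=10)
    return set(re.findall(r'href="(https?://[^"]+)"', resp.text))

anonymous = result_links()
# A real logged-in session involves several cookies; this is a placeholder.
logged_in = result_links(cookies={"SID": "<your-session-cookie>"})

print("Only shown when anonymous:", anonymous - logged_in)
print("Only shown when logged in:", logged_in - anonymous)
```

The point isn’t the scraping; it’s that the two sets can differ, and the same three words can yield different answers depending on who the engine believes is asking.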

You don’t know what you don’t know

Is it generally understood that Google is deciding what you shouldn’t see as well as what you should? Their preference for presenting some things to you means that they will deny you knowledge of things they would prefer you not to see. I’m ok with them pushing advertising at me – I can live with that because I can choose to ignore it or not. I guess it feels inevitable to discover that Amazon steers search results to favour their own products – they would, wouldn’t they? These seem like rather crass commercial actions.


I have a real problem when this extends to them deciding more widely what I shouldn’t know about, especially because all of it is unseen – I don’t know what I don’t know. I don’t know what they don’t want me to know about, or when. It’s behind a two-way mirror – they can see me but I can’t see them.

The quiet advance to a mechanised world

How many algorithms are adjusting things around us, what we see and don’t see, who does what, who assumes what?  Algorithms are getting everywhere and are gradually mechanising our world, and not all are visible to us.

Of course, we can assume that pretty much anything we do online will have an algorithm behind it somewhere. Anything with a search is a potential interaction where you can be denied without realising it. That may be purchase options on Amazon, or content on Roku, iPlayer and Netflix, no doubt. Certainly anything that we go to for “information”. Inevitably, this also applies to others looking for information and guidance, including those in positions of authority.

My SatNav is connected for live traffic and routing information; who decides what “information” it should be sending me? I do know that the routing presented on my friend’s dashboard is different from that given to me for the same journey and timing. Something somewhere has determined what “truth” is fed to whom.

Connected algorithms are penetrating quietly into our world; they have been for longer than I had realised, and have reached further than I’d understood until I stepped back and brought things together. Conspiracies may exist but the certainty is that steady and largely unseen mechanisation is happening around us.

The loss of privacy

Hand in hand with the growing use of algorithms to decide what to show us is the steadily advancing use of algorithms to work out who we are. That is, which specific individual person we are. Facial recognition is now being discussed prominently in the West, along with the issues involved.

As I noted elsewhere, protesters in Hong Kong and others have been devising ingenious ways of thwarting such systems, and some of these are really complex and targeted. It is something of a cat-and-mouse chase, though, and facial recognition is on policy makers’ agendas in Western countries, with bans beginning to be considered and enacted (NYTimes). The cat and mouse is moving on, too, with new mechanisms being devised to make recognition systems work better and be harder to detect (Wired). We must assume that algorithms are going to get better at knowing which person they’re interacting with, as well as better at deciding about us, for us.

The loss of agency

All this means a steady erosion in the power to decide – to decide what I want to decide and what I’m ok to have decided for me. While that’s about whether I click on an advert or use a particular route home, it’s ok by me. As soon as it strays into decisions about what I should know about, then I’ve got a much bigger problem.

The problem goes three ways. First, if I don’t know then I can’t decide. Second, someone else does know and is deciding for me, and I’m not ok with that if I’ve got no choice in the matter. Third, what “truth” isn’t true? The new BBC drama series, The Capture, is exploring this whole area. There are only two episodes on iPlayer so far, so I don’t know where it’s going to end up, and I’m not going to spoil the beginning for anyone who’s not already watching, but I recommend it if you’re interested in this whole subject area.

It’s here and now

Much of the discussion about AI, machine learning, robots and the like has made all this seem like a future world that we need to think about so that we’re ready when it arrives. It’s already here and affecting all of us now to some degree. It’s not going to go away and its impact isn’t going to reduce. The challenge for almost every one of us is that we have next to no practical control over it. That’s especially true of things we don’t know about. At least we can start to be alert, identify where the things we don’t know about might be, and see what problems result.

The iPhone brings it all together

Not surprisingly, that darling device brings everything together: facial recognition to recognise you (and unlock the device), Siri to hear your commands (and other things), location technology to enhance maps and such (and know where you are), always-on connectivity to integrate all your activities (and bring all that data back to Cupertino), and the trusted brand to give you confidence that you’re safe. All that’s missing is the will to hook these things up, I imagine.

Or have I got an over-active imagination?

Actually, this is simply going back to where we were when our parents warned us about talking to strangers, isn’t it? Back in the day, if you were in an unfamiliar city and trying to find your way around, you might ask someone for directions, but you’d choose who you asked, and you’d treat with caution anything they told you. You might even choose to ask someone in a trusted role, like the police, say. That same instinctive, circumspect caution is needed in the way we now ask algorithms, often inadvertently, and listen to their guidance.

The Pew Research Center published a valuable paper last December that examines Artificial Intelligence and the Future of Humans, and it’s a thought-provoking read. The Economist has a special report this week with a group of articles called Chips with Everything, and that is also worth reading.

As a starting point, I’m now circumspect about anything that might have a connected algorithm, and I try to be alert to what I might not know as a result. I’m much more curious about what I might not know. I dare say that I’ll identify other vulnerabilities in time, but this is where I’m starting. At least there are sometimes practical steps you can choose to take if you identify a vulnerability, like not using Chrome and logging out of Gmail when you’re done. I now use Firefox, and I reluctantly swallow the implications of using MS Outlook. So far, so obvious, you might say. What’s clear is that this issue is here and now, with an insidious and secretive nature.

The danger is to think that all this AI is some way off, as yet. It isn’t. The future is already here.

Also:
Blog: Can entrepreneurialism be taught?
Blog: Human engagement is a big opportunity
Blog: The Good, The Bad, and The Surreptitious
Blog: We need to talk about losses
Blog: WhatsApp scuppers the B2B market


Peter is chairman of Flexiion and has a number of other business interests. (c) 2019, Peter Osborn.