It’s not about the technology

There is a well-meaning saying being uttered in meeting rooms and conference calls throughout the world. But it is misguided. This is a post about why the idea of “It’s not about the technology” is a dangerous one. For the sake of my fingers we can call it INATT.

It is usually muttered in large groups when considering the implementation or the implications of technology, and there usually follows a ripple of appreciative agreement. Sage wisdom has been imparted.

Technology and culture are one

But in reality, it really is about the technology, and about how we consider technology as separate from us. INATT is a symptom of how we think about the world and our place in it as we pass through time. Technology is seen as an alien, independent factor of change, like the weather or the tides. We feel powerless as it washes over us. It is not. It is the creation of humankind, and moreover the creation of one part of humanity, thrust upon the rest. It is, in other words, political.

The reason that we need to break away from INATT, particularly within organisations, is the underlying idea that if we just organise our thoughts on what we really want or need, the technology choice will somehow become easier. Quasi-religious incantations follow INATT, after it has set the overall mood of wisdom. Closely related is the sentiment that “We just need to work out what we want to achieve”, or perhaps, “Let’s focus on the business outcome”.

These are fine questions, but why must there be a wall between the objective and the method? Now I’m going to ask you to concentrate for this bit: technology is a branch of politics and culture, and like those things it affects what it means to be human, which in turn affects society. Human values are created by technology, because it forces us to create values. What was not considered important before is made important. So we debate it.

Forks and IVF

Before forks we ate with a knife and our hands. The existence of forks prompted, one assumes, a degree of debate about whether eating with the hands was polite or not [I checked on Wikipedia: yes it did, and forks were seen as ungodly!]. We no longer regard forks as technology; they have passed over into the invisible. Before the advent of IVF and the freezing of human eggs, there was little debate: you could have children or not. We are still grappling with the impact of these technologies. In what circumstances is it not OK to proceed to pregnancy against the will of the other party, or if one of the parents has subsequently died? We didn’t have the values to deal with these sorts of quandaries before, and now the ethics committees, politicians and the popular press are working their way through them, with our mewlings as civilisation’s focus group. Ancient philosophy didn’t have to grapple with human nature when it was this complex.

Blackberries and dads that aren’t really there

Before the advent of Blackberries we didn’t worry so much about work-life balance. You were either working at the office or you were not, with all of the familial fallout that would ensue if you were “working late” or “too hard”. When Blackberries and other forms of easily accessible email arrived, company by company, there arose a crisis of human values amongst the participants about availability and presence in a mindful sense. Dad is in the room, but is Dad in the Dad? No, he is in next week’s sales figures.

The smartphone continues to confound our values. A new crisis occurs and is debated seemingly weekly. It has yet to pass into the invisible. AirBnB, Uber, Tinder, Snapchat, Revenge porn, Google, Facebook and the NSA. These are all value crises we are working through at this very minute. What is fun? What is love? What is privacy? Do we want to maintain our disrupted industries? Is it better to suck value out and hope for the best and be thankful? How does one conduct oneself without becoming a victim or a douche?

These crises of values arrive in the wake of the technology. They are in the main perceived as the wash of change. It makes some technology uncomfortable and newsworthy, in a way that other technology is not.

Technology is the driver and the solution

So we are in the workshop. The attendees and the consultants are dressed in their ceremonial robes and the incantation of INATT begins. At that moment, all thinking about that value-crisis stops, because the group can now only think about what will happen in the future in terms of the values of today. All nuance is lost: that they are building the future with tools that could be BOTH technological and cultural. The only reason that they are sitting there is because of the technology. It is the driver and the solution. It has constructed every modern business phenomenon in recent memory.

And of course “it” doesn’t exist: there is no “technology”. There are a series of different technologies. Some become invisible and are disregarded because they are humdrum. Then there is the variety of tools which are in various states of landing, each creating its own value-crisis. Even more than this, the efficacy of a new technology is defined by the size of the value-crisis it creates. Is it disruptive? Is it going to change fundamentally how you work? If not, why bother? In fact we perhaps need to face the fact, particularly in the deployment of some tools such as Enterprise Social Networks, that the reason adoption (whatever that means) is struggling is that they have not created a value-crisis at all. The difference between no email and email was huge. The difference between the desk phone and the mobile phone was huge. The difference between using a computer to communicate with a group of people in two slightly different ways? Compared to the previous examples, it is potentially not big enough to cause the bloody cultural revolution some seek.

Using value-crises for fun and profit

But ignoring the value-crisis that we are attempting to create or mitigate, or putting it down to an unintended consequence or by-product, is, on reflection, insane. Let’s consider an alternative world where our role is to see technology as intimately bound to us as time and culture are. New technology will be coming down the pipe. Imagine the hype cycle graph with technologies tumbling down it like rocks towards you. You need to choose the ones to ignore and the ones to engage with. For each technology there will be an advantage to someone; that is, after all, what it is for. But of course all your competitors have access to this as well, so time is short. It is our role to see which technologies are inevitable and where we will perhaps need to mitigate the coming value-crisis. With others, we will intend to create a value-crisis and use it to change the organisation for the better.


A machine way of thinking — the coming algorithmic apocalypse

“The target of the Jihad was a machine-attitude as much as the machines,” Leto said. “Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments. Naturally, the machines were destroyed.” — Frank Herbert, “Dune” [the Butlerian Jihad]

We are at a point in computing when the sum total of all communication that goes on within organisations is recorded. The mathematics that would allow a machine to start to do things with that communication-as-data has been around for a long time, and innovations in semantic computing have come along.

At the same time we are seeing a long-term squeeze on costs within organisations under the banner of maximising shareholder value and increasing organic cash flow. One of the easiest ways to do this is to reduce your headcount, or to put easily described work, such as help desks, support, operations, manufacturing and programming, outside of the organisation, quite often where the workers are cheaper to employ.

I am constantly giving recommendations to intranet teams that fundamentally recognise that they don’t have enough people to do the quality and quantity of work that they need to. There is usually a lack of resources to, for example:

  • Optimise content for search
  • Work on projects while maintaining operations
  • Help people make sure their content is relevant and is up to date
  • Eat lunch or go to the loo

Meanwhile the perceived value in the corpus of that communication-as-data is building up: implicitly, as in the enterprise social network; and explicitly, where the employee is expected to fill in their personal profile so people can find their expertise should it be required.

The loose field of “social analytics” seeks to unlock this potential, creating connections and insights between people, places and things with mathematics. Some examples (a crude sketch of the last one follows the list):

  • Consider automating classifications and information architecture using aggregate usage and semantic analysis.
  • Consider automating the extraction of people’s expertise from what they say and do, rather than what they say they do.
  • Consider improving search results by retrieving context from the user’s history of interest and discussions with people.
  • Consider automatically measuring people’s sentiments about a range of topics. Are people positive or negative about things?
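
To make that last example concrete, here is a minimal sketch of what a lexicon-based sentiment measure can look like. Everything in it is an assumption on my part: the tiny word lists, the scoring rule and the sample messages are invented for illustration, and real “social analytics” products use far more elaborate, and far less inspectable, models.

```python
# A crude, lexicon-based sentiment scorer -- a toy illustration only.
# The word lists and the scoring rule are invented for this sketch.
import re

POSITIVE = {"great", "love", "helpful", "thanks", "works"}
NEGATIVE = {"broken", "hate", "useless", "slow", "confusing"}

def sentiment(message: str) -> float:
    """Return a score between -1 (negative) and +1 (positive)."""
    words = re.findall(r"[a-z']+", message.lower())
    pos = sum(1 for w in words if w in POSITIVE)
    neg = sum(1 for w in words if w in NEGATIVE)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

if __name__ == "__main__":
    messages = [
        "The new intranet search is great, thanks!",
        "Search is broken again and the results page is useless",
        "Meeting moved to Thursday",
    ]
    for m in messages:
        print(f"{sentiment(m):+.2f}  {m}")
```

Even at this toy scale the judgement calls are visible: someone chose those word lists, and someone decided what counts as neutral.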

This all sounds great; I’ll buy a pile of it. If this works I can make the right content appear automagically in front of the right people. What’s not to like? Well, the important bit is “if it works”. The problem is that it might make things worse.

Search as a minor example

In the intranet world we already have a chaotic algorithm making our lives a misery: enterprise search. It sounds like a simple thing to do. Here is the corpus; index it and make it available so that when someone wants something you show it to them. This is a classic wicked problem. The corpus, of course, is created and maintained by fallible, disparate people who are doing their own thing. They wish to publish a piece of content for immediate reasons, and discount its value to the corpus. Metadata is not set, and context such as titles is purely local. When someone searches for something they get poor results. The intranet manager says: “We are getting rubbish results, let’s try tidying things up a bit.” That doesn’t really work, so they try creating best bets manually and mucking about with weighting. That is only partially successful because the number of potential queries is vast (although the number of common queries is few), the number of texts in the corpus is massive, and at the heart of it is an algorithm that nobody apart from its developer really understands. At the core of a search engine is a complex bit of mathematics that produces the results. It is chaotic because, for the layperson, changes to the content or the search engine do not have direct consequences for the results. The results change a little, and in ways that can have negative effects on quite different queries.
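
To illustrate why tuning feels so indirect, here is a toy ranking function in the spirit of TF-IDF. It is emphatically not how Autonomy or the Google Search Appliance actually score documents (those are proprietary); it is a sketch under my own simplifying assumptions, meant only to show that even a tiny, readable ranker is a pile of interacting weights, and that adding one document quietly changes the scores of documents nobody touched.

```python
# A toy TF-IDF-style ranker -- an illustration, not any vendor's real algorithm.
# Adding one new document changes the IDF weights, and therefore the scores of
# every other document, which is part of why tuning feels chaotic.
import math
from collections import Counter

def rank(query: str, docs: dict[str, str]) -> list[tuple[str, float]]:
    terms = query.lower().split()
    n = len(docs)
    # Document frequency of each query term across the corpus.
    df = {t: sum(1 for text in docs.values() if t in text.lower().split())
          for t in terms}
    scores = {}
    for name, text in docs.items():
        words = text.lower().split()
        tf = Counter(words)
        score = 0.0
        for t in terms:
            if df[t]:
                idf = math.log((n + 1) / (df[t] + 1)) + 1
                score += (tf[t] / len(words)) * idf
        scores[name] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    corpus = {
        "pensions-page": "your pension scheme contributions and pensions advice",
        "hr-news": "pensions update buried in a long hr news round-up of other items",
        "canteen-menu": "this week the canteen serves soup",
    }
    print(rank("pensions", corpus))
```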

Any attempt to make the algorithms better has only increased the pain. Go find an intranet manager who has had to wrestle with Autonomy and ask them about it. It will be described in robust Anglo-Saxon. Semantic it might be, but getting the pensions page to come up when the query is “pensions” is a non-trivial task. Intranet managers lumbered with the Google Search Appliance have virtually no access to the mechanism of search, and are given little indication as to how to influence it. It is an algorithmic black box shrouded in IP. “Make it work like Google” is the cry of the stakeholder, ignoring the fact that there is a billion-dollar business in Search Engine Optimisation attempting to reverse engineer Google and other search engines, and game the system end to end.

The problem is that the mathematics within these engines cannot be communicated to those charged with looking after them. Borked search is a minor example – it only hurts organisations a little bit. But this isn’t natural mathematics. It doesn’t work like physics, determined by the characteristics of the natural world. Algorithms are programs written by people. People are not objective – they bring all their biases about how the world works to work with them. Algorithms are as much works of art as a news article on the intranet. All people are subjective and biased. All algorithms are subjective and biased. The thing is, you can argue with a person.

Let’s consider what may happen when we start unleashing this sort of maths on the populace.

Consider a piece of software that is designed to find “experts” in a field within an organisation. For the sake of argument let’s ignore any privacy or data protection concerns and say in the requirements that it is access-all-areas: intranet, SharePoint team sites, enterprise social network, email, instant messaging – the works. This will necessarily be software provided by an external vendor, as it is far above the sort of quotidian development that most organisations have lying about. The programmer will make an array of assumptions about what “expertise” means and look for proxies available within the corpus: the amount someone talks about a concept; to whom; whether they contribute initially or answer questions; how many people read the information they provide; and so on. I should point out I made those up. They may or may not be good proxies for expertise. Will the programmer be able to ask a social scientist to prove the alleged link between the proxy and expertise? Possibly, but probably not. Even if some psychometry is employed, there is no proof that that is, well, real. Will a social scientist be able to verify the weighting given to each source? Again, no.
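
To make the guesswork visible, here is a minimal sketch of the kind of scoring such a product might perform, using the made-up proxies above. The proxies, the weights and the formula are all arbitrary, which is precisely the point: nothing in it has been validated against anything.

```python
# A hypothetical "expertise" score built from the made-up proxies in the text.
# The proxies, the weights and the formula are all arbitrary -- that is the point.
from dataclasses import dataclass

@dataclass
class ActivityStats:
    mentions_of_topic: int    # how often they talk about the concept
    threads_started: int      # do they contribute initially...
    questions_answered: int   # ...or answer questions
    readers_reached: int      # how many people read what they wrote

def expertise_score(a: ActivityStats) -> float:
    # Arbitrary weights: no social scientist has verified any of these.
    return (0.4 * a.mentions_of_topic
            + 0.1 * a.threads_started
            + 0.3 * a.questions_answered
            + 0.2 * (a.readers_reached / 100))

if __name__ == "__main__":
    chatty_bob = ActivityStats(mentions_of_topic=80, threads_started=5,
                               questions_answered=40, readers_reached=1200)
    quiet_expert = ActivityStats(mentions_of_topic=6, threads_started=1,
                                 questions_answered=4, readers_reached=90)
    print("Chatty Bob:", expertise_score(chatty_bob))
    print("Quiet expert:", expertise_score(quiet_expert))
```

Note that under these invented weights the chatty contributor comfortably outscores the quiet one, whatever either of them actually knows.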

This is a work of fiction. It is no more than the guesswork of a clever-clogs.

And it will spit out a number. It might be based on something that sounds proper clever, like the “k-nearest neighbour algorithm”, but it is the complexity of the real world boiled down into a reductive sticky goo. Search for experts in C++ and it will give you a list. Bravo. But unlike our search engine algorithm it will have real-world effects and feedbacks.
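
For a flavour of what the “proper clever” bit might look like, here is a minimal k-nearest-neighbour lookup over invented activity vectors. The features, the distance measure and the value of k are all my assumptions; a real product would bury far more such choices inside the box.

```python
# A toy k-nearest-neighbour lookup: people as vectors of topic mentions.
# The features, distance metric and k are arbitrary choices hidden inside the box.
import math

# Each person as (mentions of "c++", mentions of "python", mentions of "pensions").
people = {
    "alice": (120, 10, 0),
    "bob":   (15, 5, 2),
    "carol": (90, 80, 1),
    "dave":  (0, 2, 200),
}

def nearest(query: tuple[float, ...], k: int = 2) -> list[str]:
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query, v)))
    return sorted(people, key=lambda name: dist(people[name]))[:k]

if __name__ == "__main__":
    # "Search for experts in C++": a query vector heavy on the c++ dimension.
    print(nearest((100, 0, 0)))  # it spits out a list -- bravo
```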

If the proxies are wrong, the inferences will be too. It is easy to mistake helpfulness or enthusiasm for expertise. If your algorithm starts crowning experts based on people being merely chatty on Yammer, you could be in for some fun. Bob from the IT help desk is now crowned an expert in something because he is helpful and interested in it. This validates his learning and encourages him. Increased findability of his “knowledge” results in more conversations, which drives a positive feedback mechanism. Bob is now, according to the system, an expert.
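
The feedback mechanism is easy to caricature in a few lines. The constants below are invented; the only thing this sketch shows is the shape of the loop: score drives visibility, visibility drives activity, and activity drives the score back up.

```python
# A caricature of the positive feedback loop around "Bob the expert".
# All constants are invented; the point is the shape of the curve, not the values.

score = 1.0        # Bob's initial "expertise" score
activity = 10.0    # messages per week

for week in range(1, 9):
    visibility = score * 2.0      # higher score -> surfaced to more people
    activity += visibility * 0.5  # being found leads to more conversations
    score = activity * 0.3        # more conversations push the score up again
    print(f"week {week}: activity={activity:.0f}, score={score:.1f}")
```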

What could possibly go wrong?

Imagine a bizarre world where that expertise system is repeated for every individual and every speciality within an organisation. This is a chaotic system. We have lost the ability to connect, on human terms, our inputs with our outputs. Perverse incentives and unintended consequences will become abundant. Now imagine that these mathematical loaded guns are deployed in lots of places throughout your organisation and its digital workplace, in places that you couldn’t even imagine, from choosing which projects get funding to which emails get responded to.

Bang.

We are careening into what Taleb calls the fourth quadrant – a place of disproportionate disaster where black swans abound.

I might be catastrophising. Please, someone who knows what they are actually talking about, persuade me that this isn’t the case. Software like this is renowned for being brittle. In the lab, it works. Out of the lab, it falls flat on its face. But this data is growing exponentially; so is the amount of storage, processing power and (I’m sure) VC money being thrown at this. As is people’s belief that it should be so.

Value people

Which brings me back to the quote from Dune at the top. In Dune they had got rid of intelligent machines as well as the machine-attitude. It is this machine way of thinking that I find so dangerous. We trust ourselves so little in the world of human affairs that we want machines to do it for us. We hate paying people to be human so much we are willing to swap them for machines that spit out answers; not real answers that are in fact true, but constantly available answers at zero marginal cost.

When a man comes to sell you a machine that understands humans, tell him that he is mistaken. He has caught this machine-attitude. Trust humans to do the work of the heart, and value the work of humans enough to have them around to do the work.

[With thanks to @shbib for reintroducing me to the Butlerian Jihad]