This is a story about the future.
This is a story about the future. Our near future. A story about how the very real applications of quantum computers and the utopian ideals of the Silicon Valley pioneers of the 1990s may combine to fundamentally change the trust we have in technology.
If that sounds pretty scary, it is. But don’t worry, I’ll try not to be all doom and gloom — I’m a designer after all and we’re here to design for problems, even the biggest ones!
The future doesn’t have to be all ‘Black Mirror’ so I’ve laid out what I believe can be done, in our own little ways, to fight against any harmful possibilities and ensure our work embodies and promotes trust above all else.
What are quantum computers?
Quantum theory was pioneered in the 1920s and its crazy-ass-ness is far reaching, incredibly complex and not that easy to understand. Most people’s understanding of ‘quantum’ extends about as far as the parallel universe theory, and that’s it.
Fun fact: did you know the guy who came up with the parallel universe (many-worlds) theory, Hugh Everett III, is the father of ‘E’ (Mark Oliver Everett), lead singer of US alt-rock band Eels? Anyway, I digress.
Traditional computers are simultaneously growing ever smaller and ever more powerful. In fact, they are approaching their physical and computational limits: their components are approaching the size of an ATOM, which is just crazy. Soon, then, the seemingly unstoppable increase in computing power is going to come grinding to a halt.
Transistors — the simplest form of computing component — either block or let through information (in the form of electrons), giving a binary system in which ‘bits’ are set to either 0 or 1.
However, quantum theory dictates that a single particle can effectively be in two states at the same time. This is the thinking that’s now being applied to computing.
Quantum computers use ‘qubits’ (quantum bits). Qubits start from the same 0/1 idea as your usual bits, but a qubit can be in any proportion of both states at once — known as superposition. Once observed, it must collapse into one of the two states: 0 or 1.
In terms of processing, the difference is massive. With standard bits, n bits can represent 2^n possible configurations, but the register holds only one of them at any moment. So 20 bits give you 1,048,576 (2^20) possible configurations, of which only one can be set at a time. However, 20 qubits can be in a superposition of all 1,048,576 of those configurations at the same time. Bejesus! That’s incredible!
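To make those numbers concrete, here’s a minimal Python sketch — my own illustration, not a real quantum simulator, and the variable names are mine:

```python
n = 20

# A classical register of n bits holds exactly ONE of 2**n possible
# configurations at any moment.
classical_states = 2 ** n
print(classical_states)  # 1048576

# An n-qubit register is described by a vector of 2**n amplitudes --
# it can be in a superposition of ALL of those configurations at once
# (until measured, when it collapses to just one).
equal_superposition = [1 / (2 ** (n / 2))] * (2 ** n)

# The probabilities of all outcomes still sum to 1.
total_probability = sum(a * a for a in equal_superposition)
print(round(total_probability, 6))  # 1.0
```

The point of the sketch: the classical machine has a million possibilities but only ever occupies one; the quantum register carries weight across all of them simultaneously.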
Just FYI, if you have in-depth questions on quantum theory, go find a physicist cos I’m no expert. If you know that I’ve interpreted things incorrectly, do let me know — it’s hard to explain!!!
All I know is that Michael Crichton said in ‘Timeline’ that you can time travel to medieval France with a quantum computer, so that sold me on these machines. lol 😆😉
What use are quantum computers?
Seriously though, what’s incredible to think is that chances are — in my lifetime at least — we’ll all have an opportunity to interact with a quantum computer directly. What’s definite is that we’ll all end up using them indirectly very soon. In fact, we may already have — we just don’t know about it.
Before I move on from QCs, I’d like to highlight some of the possible uses of these amazing machines:
- Highly complex database searches
- Calculating systems simulations
- Machine learning and AI
- Facial, image and speech recognition
- Analysing medical research
- Creating safer encryption
- Predicting risk in financial models
Now, combine those uses with who has already funded or ordered these QCs — governments and tech companies — and the possible reasons WHY they’ll be used start to look more than a little suspicious.
The Californian Ideology
Fractured narrative time! Time to leave the future behind and jump into Michael Crichton’s ‘Timeline’ quantum computer. Let’s go back through the mists of time to the distant past… 1991.
An experiment was carried out by programmer Loren Carpenter in Silicon Valley. Hundreds of people were gathered in a large barn with a huge screen, and on every seat there was what looked like a table-tennis bat with a green side and a red side. Nobody knew what this meant and they weren’t given instructions. As the room filled up and people started to play with these random paddle thingys, they gradually realised that the paddles were affecting what was on the screen. When everyone twigged, the place erupted with joy — everyone loved it.
Then, out of nowhere, Pong appeared. Now, if you don’t know Pong then you need to brush up on your gaming history. It’s a super-simple computer game that was popular in the ’70s and looked like this:
Looks boring to you millennials? Trust me, when I first played this in the ’80s it was ah-mazing.
What the participants in the barn didn’t know at first is that people on the left side of the room controlled the left paddle, and those on the right side the right paddle. On the bat-thingys, showing green made your paddle move up; showing red made it move down.
The key thing was that, with no easy way of communicating with each other, they had to work as a team to move the paddle to the right spot on the screen so they didn’t miss the ball and lose the point. The group self-regulated enough to create equilibrium and play the game in a controlled and meaningful way. I appreciate that’s quite a convoluted explanation, so here’s a demonstration from the BBC’s ‘Bang Goes The Theory’…
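For the curious, the crowd’s ‘self-regulation’ can be sketched as simple vote-averaging: each paddle shows a vote, and the on-screen paddle follows the crowd’s average. This toy Python model is my own assumption about how the signals combined, not the experiment’s actual code:

```python
import random

def paddle_velocity(votes):
    """Each vote is +1 (green side up, move the paddle up) or -1
    (red side, move it down); the on-screen paddle moves with the
    crowd's average. A toy model, not Carpenter's real system."""
    return sum(votes) / len(votes)

# 500 people on one side of the room. If the ball is above the paddle,
# most (but not all) flip to green -- the average still drifts upward.
random.seed(1)
votes = [1 if random.random() < 0.7 else -1 for _ in range(500)]
v = paddle_velocity(votes)
print(v > 0)  # True: the noisy crowd still steers the right way
```

No individual controls the paddle, yet the aggregate behaves purposefully — which is exactly the ‘order from feedback’ idea the Californian Ideology generalised from.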
This ‘self-governing’ way of thinking about social order became known as the ‘Californian Ideology’. It was a guiding principle held by Silicon Valley’s big players during their rise to prominence in the ’70s, ’80s and ’90s.
To quote Adam Curtis:
“Ever since the 1970’s, computer utopians in California believed that if human beings were linked by webs of computers then together, they could create their own kind of order.
It was a cybernetic dream that said that the feedback of information between all the individuals, connected as nodes in the network, would work to create a self-stabilising system.
The world would be stable yet everyone would be completely free to follow their desires.”
This theory, informed by the objectivist ideas of Ayn Rand, has been entrenched in the valley for so long that it’s unwittingly become part of our lives through personal, everyday tech and the products created for it.
So does being free to follow our own desires give us more freedom and create a natural order or are we shackled by this ‘idea’ of freedom? What have been the consequences?
The online landscape we now know
We are all involved in social networks of some description — online or offline, but with the rise of the digital industry we have become ever more immersed in the world of cyberspace. As our social circles have migrated here, huge and unforeseen problems have arisen.
The ideals of the Californian Ideology, combined with the ideas of online freedom written into 1996’s Declaration of the Independence of Cyberspace, have not proved as robust in the real world as we’d like.
They did not account for the rise of online trolls, and they certainly did not consider how groups like ISIS would use these platforms. The computer utopians thought that if we give people the freedom to do and say what they like, a natural equilibrium will prevail. This idealism is all well and good, but it cannot account for every aspect of human nature.
It’s clear that social platforms — like Facebook, YouTube, Twitter — were all created with no regulation in mind. Baked into their MVPs (minimum viable products) was the belief that people will naturally self-regulate. This is not happening. Instead, because of algorithms built around behaviour and advertising, these platforms have become echo chambers that social groups hide in and whip themselves up into a self-perpetuating frenzy.
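The feedback loop is easy to illustrate. This is a deliberately crude toy model of my own — not any real platform’s algorithm — showing how ‘serve more of what got engagement’ narrows a feed instead of balancing it:

```python
from collections import Counter

def recommend(history, catalogue):
    """Toy engagement-based recommender: always serve more of whatever
    the user engaged with most. Illustrative only."""
    top_topic, _ = Counter(history).most_common(1)[0]
    return [item for item in catalogue if item == top_topic]

catalogue = ["politics_a", "politics_b", "cats"]
history = ["politics_a"]  # one early click...

# Each recommendation feeds back into the next one: the feed narrows
# rather than self-regulating toward variety.
for _ in range(3):
    history += recommend(history, catalogue)

print(set(history))  # {'politics_a'} -- an echo chamber
```

A real recommender is vastly more sophisticated, but the structural problem is the same: the optimisation target is engagement, and nothing in the loop pushes back toward equilibrium.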
So when someone stumbles across something from someone else’s echo chamber — from political blowhards to terrorist propaganda — there’s OUTRAGE!
The response? Journalists, politicians, social commentators, dagnabit even ‘everyday folk’ are beginning to believe that these tech giants need to do something to stop the spread of offensive and potentially dangerous content. For example: this article, this article and this article.
Quantum in the 21st century security landscape
I mentioned above that the two main buyers of quantum computers are governments and tech companies. Tech firms — yeah, makes sense, right? But governments? Well, like I intimated, I’d be very surprised if that doesn’t make you feel just a little suspicious. Snoopers’ Charter, anyone? And that’s just in the UK.
Quantum computers can process an incredible amount of information in a fraction of the time a traditional computer would take. With that kind of processing power, and the sheer amount of data that government bodies like GCHQ and the NSA hold on us, it’s easy to feel paranoid about how it could be used.
Going even further, in light of the Cambridge Analytica scandal — where the private company has been accused of data-grabbing 50 million Facebook accounts to help political campaigns — it can feel like anything we put online is open to abuse and misuse.
And we’ve not even mentioned hackers or rogue governments. Needless to say, our government’s quantum security game will have to be tight to ensure all that juicy data isn’t stolen and lives ruined. Fun stuff eh?
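To put rough numbers on why the ‘quantum security game’ matters: Grover’s algorithm speeds up brute-force search quadratically, roughly halving a symmetric key’s effective strength, while Shor’s algorithm breaks RSA-style public-key crypto outright. A back-of-envelope sketch in Python — my own, ignoring constants and the enormous engineering gap between today’s hardware and such attacks:

```python
def effective_symmetric_bits(key_bits):
    """Grover's algorithm searches N items in roughly sqrt(N) steps,
    so a k-bit symmetric key offers only about k/2 bits of security
    against a quantum attacker (back-of-envelope estimate)."""
    return key_bits // 2

print(effective_symmetric_bits(128))  # 64  -- no longer comfortable
print(effective_symmetric_bits(256))  # 128 -- why AES-256 gets recommended

# Public-key crypto fares worse: Shor's algorithm factors an RSA
# modulus in polynomial time, so no practical RSA key size survives
# a large, fault-tolerant quantum computer.
```

This is why ‘post-quantum’ cryptography is being standardised now, long before such machines exist: data harvested today could be decrypted later.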
The current political and security landscape in the West can feel like an unstable, fragile thing, and abuses of power are the lifeblood of major historical events. So yeah, it is scary.
Trust — there’s a problem
It’s all about trust. Our trust in what quantum computers will be used for.
On one side, trust in tech’s ability to regulate itself is low… and the only people willing to regulate them (if they can) seem to be governments.
On the other hand, trust in the government to use technology responsibly is also low. Combine that with the fact that most of the general public don’t yet know what quantum computers make possible, and just imagine what it’ll be like when Johnny McPublic-Pants understands what QCs can do! Trust in the authorities could evaporate — just like that.
Where will that leave us? What to do…?
It’s time for users and tech to align
I feel it’s time for tech companies and political figures to live in the real world. They need to learn what’s actually going on around them instead of trying to create social order from a set of ideals and experiences forged during the Cold War.
The Californian Ideology demonstrates tech’s attitude to regulation, which has led to rampant, unbridled use of its social channels. Although that freedom is exactly what they intended, much of its effect has been perceived as abuse, breeding mistrust and creating an eagerness for change. Thing is, politicians will always reach for a blunt instrument — enforced regulation.
No-one wants that. So as a first step, I think it’s key for tech executives, entrepreneurs and politicians to start listening to the concerns of real people en masse (not through third parties). I feel that much could be solved if these key figures made a real effort to understand needs, frustrations, behaviours and attitudes. They need to be interested in people in as meaningful a way as we UXers are.
At the moment of a product or policy’s inception, they need to think of people and their everyday needs, not ‘changing the world’, making money or getting ahead in their careers. In fact, I’d argue that properly considering human behaviours and pain points would lead to all those things anyway.
I believe there’s a LOT to be said for changing the world incrementally, working Lean, instead of trying to ‘change the world’ in one fell swoop. Let’s start with trust…
How UX could help now
As a UX designer, I am designing for trust every time I approach a project. One of my personal design principles is ‘Honesty is the best policy’, and I fully believe that. Here are my five tips for embodying that honesty in UX:
1 — Create design principles.
Whether it be a marketing website, native app or VR platform, starting any project with a set of guiding principles can help align both stakeholders and designers. And I for one haven’t seen a set of principles that hasn’t included something about being transparent. The challenge is sticking to that high ideal.
Essentially, create clear principles — including being transparent and building trust — that any project, big or small, will adhere to.
Not sure what design principles are? Check https://principles.design/ out.
2 — Be honest in your content.
Sometimes your product just isn’t as good as those of your competitors in the one way that matters. Be real with yourself and own up to your weaknesses.
A great example of what I mean here is Monzo’s blogs. They admit their weaknesses and concentrate on what’s important — what’s best for their customers. Being open and utterly honest engenders trust because admitting your weaknesses is as human-centred as you can get.
This approach could be a very very hard sell to traditional business minds but stay strong, fight your corner and push the trust envelope for them as much as your users. After all, keeping them happy will result in profits for the business 😉😉😉
3 — Be welcoming and inclusive
Anyone who knows me knows how much I love being an inclusivity (and, by extension, accessibility) advocate. It’s time to make the effort to bring more minorities into our process and industry.
Different communities, people who don’t use tech, older people (see my previous blog on that subject) — let’s diversify as much as we possibly can. We designers need to work harder and make that extra effort for people with accessibility needs.
We need to cast the net as far as we possibly can when we deal with users — both in user interviews and testing. Talking of which…
4 — Push for more user interviews, more testing
HOLD THE FRONT PAGE! A UX DESIGNER ADVOCATES MORE USER TESTING! Yeah yeah yeah, big wow, but trust me — sometimes the interviews or information you get on your users is frustratingly thin.
Ensure your new-business people learn — through project retrospectives and, even better, by getting involved in proposal creation — how many people you should have interviewed and tested. If we don’t do this, our new-business friends will never know to push for more user interaction.
5 — Never, NEVER, employ dark patterns
Again, not a big surprise, but one that needs repeating. Ever since ‘dark patterns’ got highlighted in the industry a few years ago, I think there’s been a decrease in them (correct me if I’m wrong in the comments section).
However, there are some I’ve seen that fill me with anger, and only the complete demise of such trickery will suffice.
Tricking your users will not create trust. Users who do not trust your product do not make good customers. No customers = no business. Common sense, simple common sense.
The pitfalls of current tech thinking and future developments could completely erode users’ trust in technology and its associated apps, programs, websites, Alexa skills etc. And that’s OUR work.
To me, it all amounts to this equation:
Quantum computers making surveillance easier
+ outdated ideas of tech self-regulation
= erosion of trust in technology as a whole.
It’s a big subject, and one that won’t be solved easily or without pain. But I believe that for technology to continue as we know it, we need to do everything we can to embody transparency, honesty and inclusivity.
But I want to leave you with a tough question:
If, as I’m suggesting, we can encourage all users to trust all forms of future tech — like quantum computing — will that be a good thing? Will we be helping governments to delve into our lives even more?
What do you think?