

Envisioning the future of financial technologies

Published on November 24, 2012 by mz

Technology and finance advance in lockstep. Our very notions of money, and of currency in particular, are themselves technological creations that have evolved and mutated over millennia, proving resilient against repeated efforts to displace them. Ours is a monetary culture, and the nature of scarce natural (and human) resources requires money to trade hands in order for progress to continue.

Our new visualization, produced exclusively for Innotribe at SWIFT, is an exercise in speculating about which individual technologies are likely to disrupt the future of finance. We truly hope you like it.

The future of education (at LEGO)

Published on November 22, 2012 by mz

Annotated keynote presentation discussing the future of education technologies at LEGO. (Best viewed fullscreen)

September 2012 keynotes

Published on September 28, 2012 by mz

This has been a busy month, traveling to present in São Paulo, Berlin, Rome and Amsterdam. Below are the slides from my most recent talks.

Global Futures Forum (Rome)

PICNIC (Amsterdam)

The Next Web (São Paulo)

The Next Web (São Paulo) video

Envisioning the future of health

Published on September 20, 2012 by mz

Technology is the ultimate democratizing force in society. Over time, technology raises lowest common denominators by reducing costs and connecting people across the world. Medical technology is no exception to this trend: previously siloed repositories of information and expensive diagnostic methods are rapidly finding a global reach and enabling both patients and practitioners to make better use of information.

Our new visualization is an exercise in speculating about which individual technologies are likely to shape the landscape of health in the coming decades. Arranged in six broad areas, the forecast covers a multitude of research efforts and developments that are likely to disrupt the future of healthcare.

We truly hope you enjoy it.

Keynote at Campus Party (Berlin)

Published on August 25, 2012 by mz

Here’s the full 40-minute talk from Campus Party in Berlin (30 minutes speaking + 10 minutes of questions from the audience). I go over a couple of technological imperatives and describe three plausible sci-fi scaffolds related to photography, surveillance and mobile connectivity. I hope you enjoy it.

Video:

Just the slides:

Sitting at the intersection between personal computers and mobile phones, the current crop of smartphones now represents more than half the mobiles used in the U.S., while numbers for the rest of the world are quickly catching up. These touch-screen, application-oriented devices have achieved staggering market penetration in under a decade, and serve as an excellent pointer to what's to come. The sector is a fundamental driver of technological innovation, with essentially all major players working hard to outpace one another in ever-accelerating product cycles.

It is therefore fitting to look ahead another half-decade and attempt to predict what the average consumer-grade smartphone might be like in 2017.

The point isn't to anticipate the relative market share eventual players and devices are likely to hold in 2017. Predicting sales for the very near future is better left to experts like John Gruber and Horace Dediu. Instead, the exercise is to extrapolate from current technological trends and focus on the imperatives that seem to drive research and consumer expectations.

Moore's law suggests that we can expect the 2017 crop of smartphones to be roughly 10× more powerful than today's: the same order of magnitude as a current mid-range laptop. We can also expect storage in the vicinity of 256 GB (32 GB today) with 4 or 8 GB of RAM (1 GB today). But that explains only a fraction of what to expect from said future handset.
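
As a back-of-the-envelope check, assuming the popular 18-month doubling period:

    \[ \text{performance multiple} = 2^{\,t/T} = 2^{\,5/1.5} \approx 10 \]

Three-and-a-third doublings over five years is where the 10× figure comes from; the storage and RAM guesses follow the same logic (32 GB × 2³ = 256 GB, 1 GB × 2³ = 8 GB).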

Excuse the Apple-centrism in the following diagram, but their limited product line facilitates comparison:

Comparing smartphone and laptop speed over time

In a post-Megahertz-myth world, megapixels, processor speed and storage capacity hold little relevance for end users. They serve merely as an expanding enabler for our pocket computers to swallow previously disparate technologies into the void that has already assimilated cameras, music players, video games, business cards, books, GPS devices, credit cards, newspapers and paper maps.

This article is an effort to speculate about a few of these potential future scenarios.

Ergonomics

The upper bounds of weight, volume and screen size have already been well defined, and until devices let us unbundle aspects like power storage to external wireless batteries, or screens to glass-like head-up displays, the size of pockets/purses/hands will remain a limiting factor for portability. This, coupled with the discernment capacity of the human eye, implies that screen resolution is unlikely to increase much beyond the low hundreds of pixels per inch at ~4 diagonal inches. Manufacturers will instead focus on power efficiency, color saturation, brightness, thickness and, eventually, volumetric glasses-free 3D.

Screens

Considering the pervasiveness and multiplicity of both personal devices and ambient communal screens (like televisions), our 2017 smartphone will most likely build considerably on existing screen-sharing and media-streaming functionality. Transitioning conversations, video conferencing, gaming sessions, media playback, installed applications and user preferences seamlessly between devices will be taken for granted, and possible with almost every screen that surrounds us.

This merging of screens will come either through deep interoperability between manufacturers (unlikely) or by expanding on web standards in future versions of HTML (likely).

Consider Gmail: the messages in your inbox today are the same across every web browser you log in from. If HTML extends toward persistent content states, you can expect the same behavior from immersive video games (pausing a game on your phone and resuming it on your TV), media playback, video conferencing and other computationally and bandwidth-intensive applications. In effect, Chrome OS represents the first iteration of a truly web-based operating system, and the success of such systems by 2017 is still a gamble.
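
To make the idea of persistent content states concrete, here is a minimal sketch in TypeScript, assuming a hypothetical shared backend (no such web standard existed when this was written): each device reads and writes a small session object, so playback paused on one screen resumes on another.

    // Minimal sketch of cross-device session state: pause on the phone,
    // resume on the TV. The /session endpoint is hypothetical.

    interface PlaybackState {
      mediaId: string;
      positionSeconds: number;
      updatedAt: number;
    }

    const SYNC_URL = "https://example.com/session"; // hypothetical backend

    async function saveState(state: PlaybackState): Promise<void> {
      await fetch(SYNC_URL, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(state),
      });
    }

    async function resumeState(): Promise<PlaybackState> {
      const response = await fetch(SYNC_URL);
      return (await response.json()) as PlaybackState;
    }

    async function demo(): Promise<void> {
      // On the phone: the user hits pause 42 minutes into a film.
      await saveState({ mediaId: "film-123", positionSeconds: 2520, updatedAt: Date.now() });

      // On the TV: pick up exactly where the phone left off.
      const state = await resumeState();
      console.log(`Resuming ${state.mediaId} at ${state.positionSeconds}s`);
    }

    demo();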

Expect much deeper integration with other wearables like glasses, wrist-watches, sensors and clothing. While not expected to represent a massive shift by 2017, the concept of a personal input/output info-stream that follows you around will start to manifest itself.

Interface

We're likely to see foldable and flexible screens on consumption-oriented devices (think Kindle, not iPad) very soon, but computationally intensive devices like smartphones will still require high-performing hardware. Haptics (simulated textures) are likely to make inroads on top-end devices by 2017, as are transflective screens, which will further reduce energy consumption.

Definitely not a transflective screen

Texting in the sun will get better. Soon.

Gesture recognition, coupled with rapidly advancing speech recognition and context awareness, should allow for very fast, multi-layered and complex interfacing with our smartphones.

If Siri and Google Now represent the current state of the art for invoking on-demand voice recognition, the 2017 smartphone ought to take on a more active role by "listening in" on conversations. Given permission, the device will take note of that article you mentioned you were going to send your friend when you last had coffee, and it will notify your spouse that you'll probably be early for dinner (before you've even left the office).

MIT Sixth Sense demo

We might see the implementation of pico projectors and augmented-reality projection mapping (by incorporating Kinect-style depth perception), allowing the device screen to bleed into real life. While technologically feasible (see MIT's Sixth Sense), AR mapping is computationally intensive and power-hungry. The device would be running on all engines, having to activate the video camera, depth sensor and positioning systems as well as a high-lumen pico projector, meaning said pseudo-AR would be used only under exceptional circumstances like gaming or wayfinding.

Sensors

Deployment of NFC/contactless payments is already well underway, and should become commonplace not only at coffee-shop franchises but in supermarkets and eventually your local corner shop. Discount cards, loyalty cards, boarding passes and bus tickets will take more than half a decade to be technologically supplanted, but we'll see swathes of already-digital transactions and identification processes that today take place on specialized media (like printed boarding passes) being absorbed into apps.

We should see Indoor Positioning take off, allowing for centimeter-resolution navigation in many public buildings. Contextual awareness should grow exponentially, allowing apps to know not only your geo-coordinates, but also to factor in your schedule and intent, social graph and recent history when making decisions for you. We will retain the final word regarding our whereabouts, but will start trusting the system for "planned spontaneity".

Indoor navigation

Identification

By 2017 we'll see the first steps toward having our devices bridge the identification gap between online services and ourselves. Today we rely on a multitude of logins, passwords and email addresses, plus a barrage of services attempting to simplify the problem by siloing your personal data and sharing only authorization tokens. Allowing the device to biologically identify its owner (through biometric sensors, like fingerprint scanners) has the potential to solve this issue by the end of the decade. After doing away with passwords, the next natural step would be keys, allowing the device to open your car or bicycle lock with your authorization. Given a few more years, even the door to your house will be unlocked from your smartphone.

Biometrics

The smartphone will be a lot more knowledgeable about its owner. Biometric readings such as body temperature, blood pressure, insulin levels, heart rate and all sorts of activity tracking should allow the device to extrapolate a comprehensive picture of our health. Coupled with external sensors for ambient CO2, illumination, air quality and pressure, the device moves into Tricorder territory.

This could, in effect, outsource part of the triage doctors currently perform to specialized applications, leaving said qualified professionals with more time to deal with actual problems.

Power

Koomey's law holds that the energy efficiency of our devices doubles approximately every 1.5 years, implying that our 2017 smartphone should perform some 7-8× more computations per joule. But the cumulative effect of power-hungrier CPUs, GPUs, sensors and screens has kept the usable life of our devices essentially stable, at around 8-10 hours of effective use. Considering that battery efficiency hasn't kept pace with Moore's observations and consumers' greed for speed, it is likely that improvements will come from outside battery technology itself. Inductive charging, screen-embedded transparent photovoltaic panels and piezoelectric power generation are the most likely contenders in the race to keep batteries from running out before your lunch break.
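
The same doubling arithmetic as before applies here. Note that a strict 1.5-year doubling over five years would actually yield about 10×; the 7-8× figure above corresponds to a doubling period closer to 1.7 years:

    \[ \frac{E_{2017}}{E_{2012}} = 2^{\,t/T}, \qquad 2^{\,5/1.5} \approx 10, \qquad 2^{\,5/1.7} \approx 7.7 \]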

Networking

Networking will be somewhat faster and more predictive, with lower latency. Considering the timeline for infrastructure and protocol engineering, IP, Wi-Fi, 3G and 4G (LTE/WiMax) will still be predominant in 2017. Fifth-generation networking technologies will be on the visible horizon, but ultimately only interesting if they deliver on the promises of a single global standard, pervasive networking and femtocell transitioning. Software-defined radios might be feasible in this timeframe, delivering on the promise of ubiquitous global roaming, mesh networking and higher network throughput.

In five years' time, mobile networking infrastructure should also increase transmission speeds and reduce latency by approximately one order of magnitude, allowing additional operations to be offloaded to the cloud. In gaming terms, don't expect OnLive while riding the bus, but the state of the art will be a far cry from Game Center. As for translation, thanks to Google's efforts coupled with low latency and high processing power, we'll be closer to the Babel Fish than ever before.

We'll see more network-access sharing between devices and, hopefully, the death of paying for half a dozen data plans. This will be induced either by mobile operators offering true multi-device plans (unlikely) or by personal area networks that aggregate local traffic onto the internet (likely).

We should also see a surge in devices feeding information back into the network in order to "smarten" the infrastructure: for example, by sharing users' intent to drive to a different neighborhood later in the day and having the network allocate resources accordingly. Or by switching off all non-pre-programmed appliances at home, automatically, when you leave. Or by notifying emergency services when a device that was previously traveling at 120 km/h on the freeway comes to an abrupt halt.
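
That last example is, at its core, a threshold check on deceleration. A minimal TypeScript sketch (the 120 km/h figure comes from the paragraph above; the thresholds and alert hook are assumptions):

    // Sketch of crash detection from speed samples: if speed drops from
    // freeway pace to near zero within a couple of seconds, raise an alert.
    // The thresholds here are illustrative assumptions.

    interface SpeedSample { kmh: number; timestamp: number; } // timestamp in ms

    function detectAbruptHalt(prev: SpeedSample, curr: SpeedSample): boolean {
      const dtSeconds = (curr.timestamp - prev.timestamp) / 1000;
      if (dtSeconds <= 0) return false;
      const decelKmhPerSec = (prev.kmh - curr.kmh) / dtSeconds;
      // Shedding ~50 km/h per second is far beyond normal braking.
      return prev.kmh >= 100 && curr.kmh < 10 && decelKmhPerSec > 50;
    }

    const before = { kmh: 120, timestamp: 0 };
    const after = { kmh: 0, timestamp: 2000 }; // two seconds later

    if (detectAbruptHalt(before, after)) {
      console.log("Abrupt halt detected: notifying emergency services.");
    }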


Apologies to Mondo 2000

R.U.Cyberpunk?


In conclusion

The smartphone of 2017 will still be readily identifiable as part of the same genus as today's devices. It will remain a slab of polished glass with lit pixels underneath. It will still buzz for incoming notifications and allow you to contact practically anyone, anywhere in the world, on command.

It will inevitably incorporate a subset of the aforementioned technologies, but also surprise us with unimaginable functionalities that lie far beyond mere extrapolation.

We'll stop worrying about battery life, storage, computation and even devices. Almost everything will be processed and stored online, with the smartphone serving as a temporary buffer for information, and as a constantly uploading sensor for ambient data.

It will hide an explosive wealth of possibilities behind the screen. It will not only react, but predict. Figuratively speaking, the device will allow you to see through walls and at times seem to be reading your mind. But it will not feel indistinguishable from magic. It will, like every device before it, at every step of the way, become part of our expectations and our everyday, and become the new normal.


Thanks to Ricardo, Simon, Alex, Arthur, Diana and /r/Futurology for reading drafts of this.

B.F.

Published on August 14, 2012 by mz

We must do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest. The youth of today are absolutely right in recognizing this nonsense of earning a living. We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian-Darwinian theory, he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors. The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living.

– Buckminster Fuller.

GFF Keynote + Video

Published on August 8, 2012 by mz

Video to accompany my slides from GFF earlier this year.

A symphony, delivered

Published on July 18, 2012 by mz

This is the first in a series of Sci-Fi Scaffolds: brief science fiction inspired stories extrapolating on emerging technologies in order to create plausible scenarios for the near future.


Glancing toward the far end of the bedroom, you spot the parcel. Unremarkable in size, like yesterday’s and all of those before it, a parcel adorned only by today’s date. Inside, an elaborate outfit concocted for today’s schedule: a pair of branded purplish-blue jeans sewn to your size, a tight-fitting black faux-cotton shirt (yesterday you tweeted about longing for one), a weather-padded green-tinted leather jacket because a cold front is bringing autumn around earlier than expected. Sneakers with elaborate polygonal dark-gray camo print and a matching scarf. Socks and underwear are packed: high-quality, millimetrically crafted to your shape.


You don’t remember missing a wardrobe. The reliable daily parcels freed up valuable space in the flat, along with the mental bandwidth once spent deciding what to wear.

When the newest Valley billions were being made from efficient shipping of socks, shampoo and soda to time-pressed bachelors, the retail industry finally paid attention. T-shirts-as-a-Service. Vegetables-as-a-Service. Sneakers-as-a-Service. Subscriptions were low-tech, but with swarms of autonomous, road-ready, shoebox-sized drones hauling boxes from remote warehouses to your doorstep overnight, everything changed. Until then, transport had been a remnant of the post-industrial landscape. Manned by underpaid drivers following computerized schedules and placemarks on a map.


The cloud had caught up with the real.

After solving small-scale transport, efforts were put into automating production and disassembly of fabric and plastics. From-scratch ad-hoc manufacturing of individual pieces of garment with their multitude of intricately sewn materials was deemed too complex, so brainpower was invested into autonomously disassembling, chemically cleaning and repurposing clothes.

A solution driven by rapidly accelerating fashion cycles asymptoting toward the limits of discernment. Years ago, the fast-forwarding revolutions had made trends last only weeks, pushing fast-fashion manufacturers into a corner of tight margins. But fine-controlled robotic needles driven by software pattern detection, trend-dispersion algorithms and unflinching artificial eyes scaled the loops down to breakthrough overnight sensations for unwitting consumers at the end of the pipeline, silently trusting the contents of their dutifully delivered parcels.

The death of the wardrobe had also made luggage redundant. Much like the old habit of preemptively locating Wi-Fi hotspots, you’d now confirm your destination was serviced by the physical embodiment of the Cloud. Not giving a second thought to weather conditions, regional nanotrends or even the length of your trip meant trading the agency of aesthetics and practicality for ever-increasing convenience.

You still paid for speed and exclusivity in your garment selection, but instead of scouring crowded high-street outlets you gave in to a Nairobi whizkid who’d written a clever algo factoring in trending colors in São Paulo’s Itaim neighborhood while correlating your friends’ public garment history before packing that low-cut orange-red dress.


Or you’d opt out from the trend-nonsense altogether and employ an open-source algo for extrapolating near-term fabric availability based on the cotton commodity boom of last month, delivering you only perfectly fitting outfits dyed gray & black for pennies a day.

You gave no more consideration to your daily outfit than to whether the watts powering your tablet were sourced from a wind farm in Flanders or a solar array in Kinshasa. You trusted a system carefully attuned to your budget, expectations and personality. You told the system what it needed to know, and the system tapped into signals spread deep throughout the fabric of reality to deliver goods you barely knew you wanted. Of course you’d trust it.


[Illustrations by the very talented Julia Scheele]

Envisioning the future of education

Published on June 25, 2012 by mz

Education, in all facets, is an issue that lies close at heart for me.

Models of teaching worldwide are being revolutionized and reconsidered in real time, and it seems everybody is looking for the holy grail of how to future-proof their classrooms. Advancing technology is leaving old schools of thought in its wake, and teachers are waking up to the fact that things will only speed up further in the foreseeable future.

After spending time with the wonderful people at TFE Research in Dublin earlier this year, we produced a new visualization: a concise overview of technologies that have the potential to disrupt and improve teaching at all levels.

Along with a few dozen emerging techs, we identified six key trends that link and contextualize said technologies, including classroom digitization, gamification and disintermediation.

We really hope you enjoy it.

[This was written as a guest post for the Institute of Ethics & Emerging Technologies.]


The merits of literacy are self-evident to the point of no longer being questioned in society. The very concept of reading and writing is a tenet of social compatibility for most cultures, having embedded itself into our social fabric to the degree where even debating whether "we should teach our kids how to read and write" is preposterous. But one doesn't have to trace far back into our history before encountering an era when literacy was a rare skill held by a very distinct minority.


Literacy

In the Middle Ages, writing was largely a religious technology: a skill relegated to scribes, who needed it mostly to replicate Bibles. Fast-forward past Gutenberg and movable type, through the industrial revolution and the information age, and today there's no debating that a literate populace has direct economic and intellectual leverage over an illiterate one.

The literacy debate in most Western countries today focuses on the importance of teaching programming as a fundamental life skill for future economic gain. Having already gone through a boom-bust cycle in many education systems in the 1980s and 1990s, programming went from being a crucial computing skill to being relegated to a select group of specialists as UIs grew easier to manipulate. Today, learning C, PHP or Assembly isn't of much value for a non-developer, mostly as a function of its inherent complexity (programming is hard) and lack of broad necessity (programming is a means to an end).

I find myself in the camp of technologists who see a fundamental shift happening around the role of programming in the post-information age. Increasingly, the skill will transcend occupation and embed itself more deeply in aspects of daily life.


Why programming

Programming skills are regarded as crucial to developing a thriving economy (Silicon Valley being the prime proponent of said argument), but on a more fundamental level programming teaches us skills that underlie the contemporary condition. For starters, code is about recursive thinking: using logic to break complex problems down into smaller ones, and solving these with specific tools. Code is also about heuristic thinking: making sense of large swathes of data and information, and throwing computing power at issues our brains are not optimized for solving. And finally, code is about abstract thinking: detaching oneself from the problem at hand and analyzing it from another level. It's about demystifying the world and making it seem less magical (in response to Arthur C. Clarke's famous quip about advanced technologies being indistinguishable from magic).

"If you can code, you start to see the computer as a machine that can do anything you want, instead of just the things some app store makes available to you. That freedom is addictive. You start demanding it."

Dennis Peterson, in the comments.


The business of code

The case for coding skills being directly tied to prospective job opportunities (as an employee) or potential capital (as a startup founder) has been made to exhaustion, and the global "Silicon Geographies" prove the point. To further the argument, I want to make the case that algorithmic skills are reaching an ever wider base.

Take IFTTT, a free tool that lets you create if-then clauses for any combination of web-enabled services, such as: "If I am tagged in a Facebook album, then save that photo to my Dropbox" or "If it is going to rain tomorrow, then send me an email first thing in the morning". These examples are simple but broadly appealing, and the general trend points toward increasing complexity and possibility.
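
The underlying pattern is easy to sketch. The following TypeScript is purely illustrative (IFTTT is configured through its website, not through code like this), but it captures the trigger-and-action shape of these rules:

    // A minimal sketch of the if-this-then-that pattern described above.
    // The Rule type and the example rule are invented for illustration.

    type Trigger = () => Promise<boolean>;
    type Action = () => Promise<void>;

    interface Rule {
      name: string;
      if: Trigger;
      then: Action;
    }

    const rules: Rule[] = [
      {
        name: "Rain tomorrow -> morning email",
        if: async () => {
          // Hypothetical weather check; a real rule would call a forecast API.
          const chanceOfRain = 0.8;
          return chanceOfRain > 0.5;
        },
        then: async () => {
          console.log("Sending email: bring an umbrella tomorrow.");
        },
      },
    ];

    // Evaluate every rule once; a real service would run this on a schedule
    // or in response to incoming events.
    async function runRules(all: Rule[]): Promise<void> {
      for (const rule of all) {
        if (await rule.if()) {
          await rule.then();
        }
      }
    }

    runRules(rules).catch(console.error);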

Only a couple of days ago, Microsoft announced a peculiar tool called on{x}, enabling Android users with basic JavaScript knowledge to extend the IFTTT paradigm of triggers and actions to the real world. on{x} allows your phone to, for example, automatically reply to a text message from your wife with your current location (based on prior permissions), or remind you to visit the gym if you haven't been in three days (based on your location history). Just like with IFTTT, the applications are still rudimentary, but part of a larger trend.
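
For flavor, here is roughly what such a rule could look like in code. This is a hypothetical sketch in the spirit of on{x}, not its actual API; the device object and its methods are invented for illustration:

    // Hypothetical on{x}-style rule: remind me to visit the gym if I
    // haven't been there in three days. The `device` object stands in
    // for the platform's real trigger-and-action API.

    interface GeofenceEvent { placeName: string; timestamp: number; }

    const device = {
      // Fires the callback whenever the phone enters a named region.
      onEnterRegion(place: string, cb: (e: GeofenceEvent) => void): void {
        /* platform wiring would go here */
      },
      notify(message: string): void {
        console.log(`[notification] ${message}`);
      },
    };

    const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;
    let lastGymVisit = Date.now();

    device.onEnterRegion("gym", (e) => {
      lastGymVisit = e.timestamp; // reset the clock on every gym visit
    });

    // A periodic check, as a background task might run it.
    setInterval(() => {
      if (Date.now() - lastGymVisit > THREE_DAYS_MS) {
        device.notify("It's been three days: time to hit the gym.");
      }
    }, 60 * 60 * 1000); // check hourly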

Mojang, the company behind smash hit Minecraft, recently announced that their next game title would essentially be a programming language. Entitled 0x10c, the game is a spaceship simulator where the player controls a virtual 16-bit CPU using actual code. The abilities of your ship, such as its speed, cloaking capacity and maneuverability, are a direct function of how well you manage to utilize the simulated processor through code. Evidently, the intent isn't for the title to have broad appeal among gamers, but rather to reach a very particular niche. But again, the very concept of embedding actual programming in a video game is indicative of bigger changes.

Not to mention the meteoric success of the Raspberry Pi, a British initiative for mass-producing extremely affordable, hackable computers. A tremendous hit, the $35 pocket-sized Linux machine sold its initial batch of 10,000 units in minutes, proving that the maker ethos is alive and well.

“The web is becoming the world’s second language, and a vital 21st century skill — as important as reading, writing and arithmetic. It’s crucial that we give people the skills they need to understand, shape and actively participate in that world, instead of just passively consuming it.”

Mark Surman, Executive Director, Mozilla


Algorithmic & parametric design

Revisiting the fundamentals behind the importance of coding skills, it becomes clear that being educated to think along the capacities of machines is bound to have positive consequences in areas not directly related to code. Take medicine, where the requirement for bug-free engineering is paramount, or pharmaceutics, where big-data analysts are being hired in droves to algorithmically identify potential new drugs.

With the advent of real-time analytics, even geo-economies like real estate and global shipping are rapidly turning algorithmic, drawing upon a wealth of data to optimize global routing and predict booming hotspots.

Some claim that 70% of trading on Wall Street is already algorithmic. But what about something as subjective as fashion forecasting? Pimkie Color Forecast automatically analyses webcam feeds in Paris, Milan and Antwerp, observing the predominant new colors worn on the street in order to extrapolate potential future trends.
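
Stripped to its core, such color forecasting starts with something as simple as counting which hues dominate a frame. A toy TypeScript sketch, with made-up pixel data standing in for real webcam frames:

    // Sketch of the color-forecasting idea: bucket pixels by hue and report
    // the dominant bucket. A real system would pull frames from street
    // webcams and isolate clothing regions first.

    type RGB = [number, number, number];

    function dominantHueBucket(pixels: RGB[], buckets = 12): number {
      const counts = new Array(buckets).fill(0);
      for (const [r, g, b] of pixels) {
        const max = Math.max(r, g, b), min = Math.min(r, g, b);
        if (max === min) continue; // skip grays: no hue
        let hue: number;
        if (max === r) hue = ((g - b) / (max - min) + 6) % 6;
        else if (max === g) hue = (b - r) / (max - min) + 2;
        else hue = (r - g) / (max - min) + 4;
        const degrees = hue * 60; // in [0, 360)
        counts[Math.floor((degrees / 360) * buckets) % buckets]++;
      }
      return counts.indexOf(Math.max(...counts));
    }

    // Three reddish pixels (hues near 360°) outvote one green pixel,
    // so the last bucket (index 11) wins.
    console.log(dominantHueBucket([[200, 30, 40], [210, 25, 35], [190, 40, 50], [30, 200, 40]]));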

Not even intellectual skills like journalism are safe from the onslaught of computer mediation. Companies like Narrative Science are already mass-producing sports and finance news reports based on real-time statistics, and some predict that the vast majority of news writing will be intermediated by algorithms by the end of the next decade. Architecture, too, is rapidly moving toward so-called Parametric Design, where buildings, floor plans and façades are designed according to a delicate interplay between artist and code.


Tools for making tools

In short, programming is the simplest tool-making tool. It allows for rapidly improving on existing solutions as well as solving problems previously thought to be irreducible.

Technology has long since become our external memory, allowing us to entirely forget rote information such as phone numbers and addresses. Increasingly, our tools are assuming the responsibility of agency, becoming better than we are at telling us what to do (think Siri, and fast-forward). If current trends are indicative of our future, the next generation of tools will allow us to forget how to do things. Take the U.S. financial market, with its flash crashes: our best minds are being applied to developing systems whose actions we no longer understand, and we are rapidly becoming a species that builds tools whose output it cannot explain.

Knowing how these systems act, and determining how they ought to behave in the future is our responsibility.

Published on June 12, 2012 by mz

Melvin Kranzberg’s six laws of technology state:

  1. Technology is neither good nor bad; nor is it neutral.
  2. Invention is the mother of necessity.
  3. Technology comes in packages, big and small.
  4. Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.
  5. All history is relevant, but the history of technology is the most relevant.
  6. Technology is a very human activity – and so is the history of technology.

Hybrid Reality

Published on June 12, 2012 by mz

A couple of months ago, I was invited by Parag & Ayesha Khanna to help illustrate a couple of visual concepts for their upcoming TED e-book Hybrid Reality: Thriving in the Emerging Human-Technology Civilization.

The collaboration worked out beautifully, and I produced a handful of diagrams which help convey some of the book’s key messages. If you have the time, I highly recommend reading the whole thing — it’s a mesmerizing panorama of the next five minutes, elaborating on a wealth of topics, from geopolitics to the singularity.




B.C.

Published on June 3, 2012 by mz

We might ask ourselves: Is it appropriate to allow our definition of our own uniqueness to be, in some sense, reactive to the advancing front of technology? And why is it that we are so compelled to feel unique in the first place?

Brian Christian

Jaguar Land Rover keynote

Published on June 1, 2012 by mz

A brief presentation for the technology people at Jaguar Land Rover. The core concept was to explain my view on technological imperatives, Sci-Fi Scaffolding, and attempt to extrapolate a couple of potential scenarios for their industry.

London Futurists keynote

Published on May 27, 2012 by mz

My keynote presentation for the London Futurists Meetup group. The theme of the talk was to explain my “Sci-Fi Scaffolding” methodology. The ideas herein will be fleshed out into a long piece for the blog soon.

Download PDF

H.E.M.

Published on May 27, 2012 by mz

The future is always ideal: The fridge is stocked, the weather clear, the trains run on schedule and meetings end on time. Today, well, stuff happens.

Hara Estroff Marano, Psychology Today

The future nauseous

Published on May 24, 2012 by mz

Welcome to the Future Nauseous is the most inquisitive take on near-futurism I’ve come across in recent memory. The whole piece is highly recommended reading, and hard to summarize in a single quote (but I’ll try):

At a more human level, I find that I am unable to relate to people who are deeply into any sort of cyberculture or other future-obsessed edge zone. There is a certain extreme banality to my thoughts when I think about the future. Futurists as a subculture seem to organize their lives as future-experience theaters. These theaters are perhaps entertaining and interesting in their own right, as a sort of performance art, but are not of much interest or value to people who are interested in the future in the form it might actually arise.

D.C.

Published on May 7, 2012 by mz

Aliens didn’t come down to Earth and give us technology. We invented it ourselves. Therefore it can never be alienating; it can only be an expression of our humanity.

Douglas Coupland

Interview by Caleb Prewitt

Published on May 2, 2012 by mz

[This is an interview I did with Kansas City-born artist Caleb Prewitt for his thesis at Montclair State University]


Michell Zappa is a UK-based futurist. He is the founder of Envisioning Technology, a forecasting firm that crafts reports on emerging technologies across a variety of industries. His projections for the next 40 years are available on the Envisioning Technology homepage, www.envisioningtech.com.


CP: So to begin, I was wondering if you could just talk about technology forecasting as an idea: what purpose does it serve?

MZ: Well, it’s important because technology underlies everything we as a society do. I see technology as the sole differentiator between us today and us in medieval times. We haven’t changed biologically in the last 500 years or in the last 10,000 years. We’re still essentially the same people, only we live twice or three times as long, we explore the planet and the atmosphere. Everything we do, every enhancement, not only biological but social, is driven by technology. So we separate ourselves from the past through technology, which I find fascinating. And I think we’re at a point where the rate of change, which is already mind-blowing, is going to start visibly increasing in front of our eyes. And if we’re already sort of worn out or in a state of constant future shock from the speed at which things change or are replaced, it’s just going to get worse. Or better, depending on your point of view.


CP: That’s Kurzweil’s idea.

MZ: That’s Kurzweil, exactly.


CP: So you think we’re nearing a tipping point?

MZ: I do. I mean, mathematically there’s no distinction between where we were 100 years ago and where we’ll be ten years from now, because it’s all part of the same curve. But the tipping point is something very human: we will start noticing the change with more clarity than now. It’s going to look like a wall to us, even if it’s still that same growing curve.


CP: You know, it’s funny, I was rewatching some old sci-fi shows from the ‘90s not long ago, some of the Star Trek spin-offs and things like that, and the thing that struck me was how old-fashioned it all seemed. It was set 300 years in the future or something, and yet it just seemed so antiquated, as though technology had changed but culture hadn’t. They were envisioning the future from what they knew, but in ten years the world had changed radically and their vision was already out of date.

MZ: Well some of the early cell phones were explicitly based off Star Trek designs, and there’s an X Prize now to come up with essentially a tricorder, a handheld medical device. But what you’re saying is true of course, and that’s an inevitability of any forecasting. We’re living in today, and we can only extrapolate from today and imagine what could change, but it’s the unforeseen things that make it interesting. Star Trek plays it safe, but other sci-fi writers are more daring, like William Gibson, because they uproot everything we take for granted. You have to do that. If you ignore all the givens but still hold to the curve of technology, that’s the only way to truly imagine what the future will be like, because over time all the givens start rotting away, as they do in any society. But I agree that it’s a very hard exercise and very few people do it well, mostly sci-fi writers. Corporations, for instance, hire futurists, but that’s tricky because you’re essentially bound to envision a future where your company’s products remain relevant.


CP: So that brings up the question, how much of this is predicting the future and how much of it is coming up with creative ways to shape it?

MZ: That’s sort of where my personal interests lie. Shaping is hard; predicting less so. That’s relatively easy, mostly a matter of reading a lot and putting things together. Actually making a difference is very difficult, because that’s where you have massive institutions– governments, businesses– and it’s in their best interest to perpetuate a vision of the future that is most suitable to them. You have to look down the line and shape society in the direction that’s best for you. Very few companies can pull that off. Steve Jobs did. You could say he was the ultimate futurist because he undermined his own business a couple of times because he saw bigger change on the horizon. Not many people have the guts to do that.


CP: I was looking over the list of sources you provide on your site, and it’s mostly non-fiction, works by other futurists like Ray Kurzweil, but I noticed you included Warren Ellis’ Transmetropolitan. And on the one hand it seemed odd, being the one work of fiction, but it occurred to me that the reason why those books are so satisfying is that they don’t think about technology changing so much as society changing, getting stranger and stranger.

MZ: Yes. I’m trying to find a way to point that out. If you imagine being cryogenically frozen 50 years ago and woken up today, society would seem very strange. And someone from the ‘20s even more so. I think most people sort of see the future as a more high-tech version of the present, but it’s really not that. It’s people using technology to do all sorts of unimaginable things, for better or worse. And I don’t think there’s enough forecasting that accepts that as a given.


CP: What do you see as the primary driving force behind innovation or change?

MZ: I think it varies tremendously depending on where you find yourself in terms of Maslow’s hierarchy. On the bottom tier you see tons of new technology, from fighting disease to clean drinking water and infrastructure. That’s where the megascale projects are crucial, things only governments or transnational institutions like the UN can implement. But at the top of the pyramid, that’s where things get interesting. Things like personal expression and radical individuality start driving technology, things like being able to drastically alter your body. Being able to say, replace a hand or a leg with a better, artificial one. I’m sure 15 years from now people will be upgrading their limbs for kicks, or their skin or eyes.


CP: We’re talking about some pretty substantial shifts. How do you think we go about preparing ourselves for a future that’s stranger than we can imagine?

MZ: That’s the million-dollar question. I don’t have an answer for you, necessarily. This is our biggest challenge as a society, keeping up to speed with what everyone else is doing. Especially since technology and innovation are not evenly distributed, and the rate of change is so steep. It’s already tough just keeping up with what’s happening; incorporating it into our institutions, like our education system, that’s going to be extremely challenging.


CP: Carl Sagan used to write about that, about the perils of falling behind our technology. There was a time once when everyone essentially understood how most of the technology in their lives worked. And then as science progresses and people become more specialized, we reach a point where technology just becomes like magic to us, incomprehensible.

MZ: I think that’s the best metaphor, magic. When that happens, you have two fronts to fight on. One is understanding the actual groundwork behind the technologies, and the other is understanding what to do with them. I’m personally okay accepting that technology might seem a little magical, because I’m more concerned with how it’s used. I know a lot of people disagree, but I think it’s fine not knowing exactly how the thing works. I’d rather focus on using it; I think that’s where human creativity really comes into play. But there’s another side to that, and that’s programming, which has been the focus of a debate on education here in the UK. Programming should be the new math or biology. Schools will have to include it, sooner rather than later, because it’s becoming essential. There’s a book called Now You See It that talks about the role scribes played in medieval times and how programmers are the 21st-century scribes. Only a very few people know how to do it, and everyone needs to know, because–


CP: Well because we run the risk of creating an elevated class. And the rest of us are stuck lower down on the totem pole because we lack a vital skill set.

MZ: Exactly. And I think it’s only a matter of time before schools and governments realize that. It hasn’t quite caught on, but I think the UK is fortunate to be having that conversation now. Imagine where we’d be as a society if only a few people still knew how to read and write.


CP: One last question. I want to talk about the idea of crossover between disciplines, say between science and art or technology and literature. How important is that exchange of ideas between fields?

MZ: Take computers as an example. That was something when I was in high school that was taught as a separate course, a computing course or a typing course. And that’s something that will eventually fall by the wayside, because now that technology is in everything. Computers are a part of every field now, they’re interconnected with everything. I think there are fundamental reasons to view technology as overlapping other fields. That’s where the creative frontier lies.