The Big Nine: Before It's Too Late

June 8, 2019

By Amy Webb

Artificial intelligence is already here, but it didn’t show up as we all expected. It is the quiet backbone of our financial systems, the power grid, and the retail supply chain. It is the invisible infrastructure that directs us through traffic, finds the right meaning in our mistyped words, and determines what we should buy, watch, listen to, and read. It is technology upon which our future is being built because it intersects with every aspect of our lives: health and medicine, housing, agriculture, transportation, sports, and even love, sex, and death.


AI isn’t a tech trend, a buzzword, or a temporary distraction—it is the third era of computing. We are in the midst of significant transformation, not unlike the generation who lived through the Industrial Revolution. At the beginning, no one recognized the transition they were in because the change happened gradually, relative to their lifespans. By the end, the world looked different: Great Britain and the United States had become the world’s two dominant powers, with enough industrial, military, and political capital to shape the course of the next century.


Everyone is debating AI and what it will mean for our futures ad nauseam. You’re already familiar with the usual arguments: the robots are coming to take our jobs, the robots will upend the economy, the robots will end up killing humans. Substitute “machine” for “robot,” and we’re cycling back to the same debates people had 200 years ago. It’s natural to think about the impact of new technology on our jobs and our ability to earn money, since we’ve seen disruption across so many industries. It’s understandable that when thinking about AI, our minds inevitably wander to HAL 9000 from 2001: A Space Odyssey, WOPR from WarGames, Skynet from The Terminator, Rosie from The Jetsons, Dolores from Westworld, or any of the other hundreds of anthropomorphized AIs from popular culture. If you’re not working directly inside the AI ecosystem, the future seems either fantastical or frightening, and for all the wrong reasons.
 

Those who aren’t steeped in the day-to-day research and development of AI can’t see signals clearly, which is why public debate about AI references the robot overlords you’ve seen in recent movies. Or it reflects a kind of manic, unbridled optimism. The lack of nuance is one part of AI’s genesis problem: some dramatically overestimate the applicability of AI, while others argue it will become an unstoppable weapon.


I know this because I’ve spent much of the past decade researching AI and meeting with people and organizations both inside and outside of the AI ecosystem. I’ve advised a wide variety of companies at the epicenter of artificial intelligence, including Microsoft and IBM. I’ve met with and advised stakeholders on the outside: venture capitalists and private equity managers, leaders within the Department of Defense and State Department, and various lawmakers who think regulation is the only way forward. I’ve also had hundreds of meetings with academic researchers and technologists working directly in the trenches. Rarely do those working directly in AI share the extreme apocalyptic or utopian visions of the future we tend to hear about in the news.


That’s because, like researchers in other areas of science, those actually building the future of AI want to temper expectations. Achieving huge milestones takes patience, time, money, and resilience—this is something we repeatedly forget. They are slogging away, working bit by bit on wildly complicated problems, sometimes making very little progress. These people are smart, worldly, and, in my experience, compassionate and thoughtful.


Overwhelmingly, they work at nine tech giants—Google, Amazon, Apple, IBM, Microsoft, and Facebook in the United States and Baidu, Alibaba, and Tencent in China—that are building AI in order to usher in a better, brighter future for us all. I firmly believe that the leaders of these nine companies are driven by a profound sense of altruism and a desire to serve the greater good: they clearly see the potential of AI to improve health care and longevity, to solve our impending climate issues, and to lift millions of people out of poverty. We are already seeing the positive and tangible benefits of their work across all industries and everyday life.


The problem is that external forces pressuring the Big Nine tech giants—and by extension, those working inside the ecosystem—are conspiring against their best intentions for our futures. There’s a lot of blame to go around.


In the US, relentless market demands and unrealistic expectations for new products and services have made long-term planning impossible. We expect Google, Amazon, Apple, Facebook, Microsoft, and IBM to make bold new AI product announcements at their annual conferences, as though R&D breakthroughs can be scheduled. If these companies don’t present us with shinier products than the previous year, we talk about them as if they’re failures. Or we question whether AI is over. Or we question their leadership. Not once have we given these companies a few years to hunker down and work without requiring them to dazzle us at regular intervals. God forbid one of these companies decides not to make any official announcements for a few months—we assume that their silence implies a skunkworks project that will invariably upset us.


The US government has no grand strategy for AI nor for our longer-term futures. So in place of coordinated national strategies to build organizational capacity inside the government, to build and strengthen our international alliances, and to prepare our military for the future of warfare, the United States has subjugated AI to the revolving door of politics. Instead of funding basic research into AI, the federal government has effectively outsourced R&D to the commercial sector and the whims of Wall Street. Rather than treating AI as an opportunity for new job creation and growth, American lawmakers see only widespread technological unemployment. In turn they blame US tech giants, when they could invite these companies to participate in the uppermost levels of strategic planning (such as it exists) within the government. Our AI pioneers have no choice but to constantly compete with each other for a trusted, direct connection with you, me, our schools, our hospitals, our cities, and our businesses.


In the United States, we suffer from a tragic lack of foresight. We operate with a “nowist” mindset, planning for the next few years of our lives more than any other timeframe. Nowist thinking champions short-term technological achievements, but it absolves us from taking responsibility for how technology might evolve and for the next-order implications and outcomes of our actions. We too easily forget that what we do in the present could have serious consequences in the future. Is it any wonder, therefore, that we’ve effectively outsourced the future development of AI to six publicly traded companies whose achievements are remarkable but whose financial interests do not always align with what’s best for our individual liberties, our communities, and our democratic ideals?
 

Meanwhile, in China, AI’s developmental track is tethered to the grand ambitions of government. China is quickly laying the groundwork to become the world’s unchallenged AI hegemon. In July 2017, the Chinese government unveiled its Next Generation Artificial Intelligence Development Plan, which aims to make China the global leader in AI by 2030 with a domestic industry worth at least $150 billion. The plan involves devoting part of the country’s sovereign wealth fund to new labs and startups, as well as to new schools launched specifically to train China’s next generation of AI talent. In October of that same year, China’s President Xi Jinping explained his plans for AI and big data during a detailed speech to thousands of party officials. AI, he said, would help China transition into one of the most advanced economies in the world. Already, China’s economy is 30 times larger than it was just three decades ago. Baidu, Tencent, and Alibaba may be publicly traded giants, but, as is typical of large Chinese companies, they must bend to the will of Beijing.


The future of AI is currently moving along two developmental tracks that are often at odds with what’s best for humanity. China’s AI push is part of a coordinated attempt to create a new world order led by President Xi, while market forces and consumerism are the primary drivers in America. This dichotomy is a serious blind spot for us all. Resolving it is the crux of our looming AI problem. The Big Nine companies may be after the same noble goals—cracking the code of machine intelligence to build systems capable of humanlike thought—but the eventual outcome of that work could irrevocably harm humanity. Fundamentally, I believe that AI is a positive force, one that will elevate the next generations of humankind and help us to achieve our most idealistic visions of the future.


But I’m a pragmatist. We all know that even the best-intentioned people can inadvertently cause great harm. Within technology, and especially when it comes to AI, we must continually remember to plan for both intended use and unintended misuse. This is especially important today and for the foreseeable future, as AI intersects with everything: the global economy, the workforce, agriculture, transportation, banking, environmental monitoring, education, the military, and national security. This is why, if AI stays on its current developmental tracks in the United States and China, the world of 2069 could look vastly different from the world of 2019. As the structures and systems that govern society come to rely on AI, we will find that decisions being made on our behalf make perfect sense to machines—just not to us.


We humans are rapidly losing our awareness just as machines are waking up. We’ve started to pass some major milestones in the technical and geopolitical development of AI, yet with every new advancement, AI becomes more invisible to us. The ways in which our data is mined and refined are less obvious, while the ways autonomous systems make decisions grow less transparent. There is, therefore, a chasm in our understanding of how AI is impacting daily life in the present, one growing exponentially as we move years and decades into the future. Shrinking that distance as much as possible, through a critique of the developmental track that AI is currently on, is my mission for this book.

 

My goal is to democratize the conversations about artificial intelligence and make you smarter about what’s ahead—and to make the real-world future implications of AI tangible and relevant to you personally, before it’s too late.

Humanity is facing an existential crisis in a very literal sense, because no one is addressing a simple question that has been fundamental to AI since its very inception: What happens to society when we transfer power to a system built by a small group of people that is designed to make decisions for everyone? What happens when those decisions are biased toward market forces or an ambitious political party? The answer is reflected in the future opportunities we have, the ways in which we are denied access, the social conventions within our societies, the rules by which our economies operate, and even the way we relate to other people.
                                    * * *
Every person alive today can play a critical role in the future of artificial intelligence. The decisions we make about AI now—even the seemingly small ones—will forever change the course of human history. As the machines awaken, we may realize that in spite of our hopes and altruistic ambitions, our AI systems turned out to be catastrophically bad for humanity. But they don’t have to be. The Big Nine aren’t the villains in this story. In fact, they are our best hope for the future. Turn the page. We can’t sit around waiting for whatever might come next. AI is already here.

Credit line:
From THE BIG NINE: How the Tech Titans and Their Thinking Machines Could Warp Humanity, by Amy Webb. Reprinted with permission from Public Affairs, a division of the Hachette Book Group.

Amy Webb is a professor of strategic foresight at the NYU Stern School of Business and the founder of the Future Today Institute, a leading foresight and strategy firm. Named by Forbes as one of the five women changing the world, Webb was selected for the Thinkers50 Radar list of the 30 management thinkers most likely to shape the future of how organizations are managed and led, and she won the 2017 Thinkers50 Radar Award. She is the tech columnist and a contributing editor at Inc. Magazine, where she writes about the future of technology and business. Her TED Talk has been viewed more than seven million times, and she was a featured speaker at the 2019 SXSW conference. You can see video and learn more about her at her website: amywebb.io


© 2019 by Weston Magazine Inc.