Nine years ago, I managed to get into a space normally sealed off to the world: the Stanford Artificial Intelligence Laboratory. I wanted to understand this phenomenon that was supposed to change everything. I reported on a meeting among A.I. researchers and venture capitalists looking for the Next Big Thing. As it happened, the topic of their discussion was how to use A.I. to, and I quote, “replace all the writers.” I was there, a writer on the wall.
“This was some major disruption,” I wrote shortly thereafter in the dispatch below, “a bunch of non-writers debating how to replace all the writers. I was taking careful notes, so that the replaced writers of the future would have some record of how the purge went down.”
Well, here it is.
The dispatch was not published at the time, because it didn’t fit my book then in progress. I recently revisited it and was struck by how, in retrospect, our present was being hatched there.
So I’m publishing it now, a glimpse into the past where the future was being foretold. It’s a long read, so dive in — or save it for a moment when you have time. I hope you enjoy it.
By Anand Giridharadas
The Gates Building at Stanford, home to the A.I. lab.
While out in the Bay Area, I spent a few days at the Stanford Artificial Intelligence Laboratory. The lab occupied two floors of the Gates Computer Science Building. It was a dull gray hive of offices and conference rooms. If, for some strange reason, someone blindfolded you and deposited you in its midst, and you somehow failed to notice all the robots and equations, you might guess you were in a regional sales department of a midsized manufacturer of lower-school sports trophies.
Nothing in the atmosphere suggested power. Nothing told you that this was the place that had spawned Google. And yet it was said by very intelligent people that the future they were concocting here could change the face of human civilization. Some thought that their work would bring heaven down to earth; others feared this was the closest we had come to hell.
The heaven scenario saw a human existence made effortless, seamless, healthy to the possible point of immortality, efficient, leisurely, cornucopian, creative. A.I. already guessed what you were seeking when you looked things up, and in the future it would know all your needs in every area of life. A.I. already decided when to tell new parents that a newborn might not be breathing, and in the future disease-curing nanobots and big-data-crunching supercomputers could end aging and even dying as we know them. A.I. already traded half of all stocks on the American exchanges, and in the future it might free all of us from the burden of work, and allow us to paint and write sonnets and dance. By giving human beings such mastery over their health and environment, A.I. could, it had been argued, make us the first species to avoid extinction itself.
And yet Elon Musk — builder of electric cars and rocket ships, booster of all things technological — had called A.I. the world’s “biggest existential threat” and declared that “with artificial intelligence we’re summoning the demon.” This was the hell scenario. It was less precise, less sure, because it focused on what human beings might not foresee as they built the tools of their replacement. Reid Hoffman, of LinkedIn, compared A.I. to the development of an unknown species that could have major effects on the planet. There was also the humanist worry that an artificially intelligent future would essentially be a future without work for most people — except, of course, for the builders of A.I. and its algorithms. Pope Francis had warned that robotics and related advances could, left alone, “lead to the destruction of the human person — to be replaced by a soulless machine — or to the transformation of our planet into an empty garden for the enjoyment of a chosen few.” The most dire visions had A.I., on its own or in the hands of bad people, speeding up our extinction date.
It was a lot of pressure to work on such things. Such was the fate of the researchers of the Stanford A.I. Lab who were drifting into the second-floor lounge this evening. This, sometimes, is how civilization gets remade: by often highly socially awkward people who do not see themselves as remaking it — by a squad of robot-like humans chosen to make robots more human-like.
What stayed with me most from several days spent at the lab was this meeting, lasting a little more than an hour. For in that meeting, I was able to see, as I hadn’t so clearly before, how Silicon Valley’s rhetoric of prediction works: how a strange cocktail of futurism and cynicism could be used to justify a world that will be devastating for vast numbers of people and great for its predictors. And how cultivating and believing in the idea of your own powerlessness had become an essential tool for seizing power.
***
Tonight was the biweekly meeting of the lab’s eClub, which described itself as “the first official coalition between the Stanford Artificial Intelligence Laboratory (SAIL) and a cluster of corporate partners to foster discussions between artificial intelligence researchers and venture capitalists interested in real world AI applications.” The techies got to meet the money men of Silicon Valley, who worked a few blocks and a world away on Sand Hill Road. The money men, who were also ThoughtLeaders, got a glimpse into cutting-edge technologies that just might become their next unicorn.
The topic of today’s meeting was journalism and writing. They were trying to figure out whether and how to “replace all the writers,” as one of them put it.
The perks of coalition-building with venture capitalists: a table to the side was covered with pizza from Pronto, a bottle of Pinot Noir, and some beers. The pizza vanished at the rate of several slices a minute. The Pinot Noir would remain unopened, but some beers were being sipped.
About twenty students began to take their seats. Lots of jeans, lots of wrist activity trackers, lots of waifish legs crossed at the knee, lots of genius, lots of zealous and impatient male energy unleavened by social awareness or social grace. There was one woman in the room. Over the next ninety minutes, she would not speak.
In one corner of the room sat a pair of venture capitalists. There was a man I will call Marty, a partner at a preeminent venture capital firm nearby, who possessed, especially in this room full of immigrants and immigrants’ children, the special force of the Old White Man who has seen it all, is faintly bored by everything, thinks his first ideas are his best ideas, and has a lot of money. Beside him was a man I will call Ashish, a partner at another top venture capital firm in the Bay, who offered a more realistic ideal for the people in this room. He was Indian, handsome enough not to need to be rich, but rich all the same, dressed in perfectly fitting dark clothes that were at once sporty and formal, broadcasting a vibe of “I was the youngest partner in the history of my firm.” Which he had been. When you searched his name in Google, the first additional query suggested (by A.I.) was “Ashish ______ net worth.” You could just picture Stanford students looking him up late at night, intimidated and amazed: He studied here, too! He flies microlight airplanes! He is on leave from the Stanford Medical School! Together, Marty and Ashish represented several billions of dollars longing to be invested in kids like these techies.
I took a seat beside a student named Manoush. He was unkempt, earnest, slightly hostile. I asked what drew him to A.I. He spoke of wanting to free people from the drudgery of work. Let the machines, the algorithms, do the repetitive things. Free people to think big strategic thoughts.
“The biggest factor that leads to increased quality of life is efficiency of workforce,” he said.
Without intending to, I must have looked skeptical. Manoush told me to look up the citation myself.
There was some tension over Manoush’s vision in A.I. circles. A handful of A.I.’s founding fathers, some of whom were present at the 1956 Dartmouth meeting that was the field’s constitutional convention, lamented that their original project — using computers to seek to understand and mimic human beings — had given way to the more prosaic and lucrative goal of raising productivity. An irascible old-timer like Pat Langley could mourn the days when the “intelligence” in “artificial intelligence” was defined as “the ability to carry out complex, multi-step reasoning, understand the meaning of natural language, design innovative artifacts, generate plans that achieve goals, and even reason about their own reasoning.” But the privatizing drive of the age of markets had reached A.I., too. Things now had to justify themselves in the marketplace. The “commercial successes of ‘niche’ A.I.” and an “obsession with quantitative metrics” had reoriented the field, Langley wrote. A.I. labs had “abandoned the field’s original goal. Rather than creating intelligent systems with the same breadth and flexibility as humans, most recent research has produced impressive but narrow idiot savants.”
Manoush believed deeply in the idiot savants. Those bots could free up much human energy. But, I asked Manoush, what about all the people who would be beached, temporarily or even permanently?
“We have people who are going to get shafted,” Manoush said. “But in the long term, we are going to have a higher quality of life for the whole.”
This was an important article of faith around the Bay these days. These men and women knew their inventions could be frightening. Their promise was that this was the storm before the calm, the shafting before the emancipation.
Of course, there were those, even within the lab, who questioned this vision. Juan Carlos Niebles, a Colombian researcher, laughed off the pop-culture imagery of robots killing off and eating their human masters. But he worried about other threats that to him seemed realistic. He wondered: Would the A.I. agents nurtured by his lab create mass unemployment? Would people need to be paid a minimum income when complex machines are doing so many of the old jobs? How would we occupy people’s time and energy and imagination? Niebles didn’t worry about apocalypses. He worried, with more self-awareness than many winners of the age, that he was participating in the creation of a new world that would be rewarding and fulfilling only for people like him.
But he hastened to add that he had no time to think about such matters. “Day to day, we think of: what are the barriers to achieving new things?” he said. The technical problems were so overwhelming that they crowded out reflection. The A.I. researcher’s self-conception was of an unblocker of the blockages standing between us and progress, he said. It was not their habit to muse about consequences.
***
Presently, an Italian postdoc named Roberto called the meeting to order and introduced some questions to frame the conversation. How could A.I. help to personalize the news for each person’s interests? How could it mine oceans of data and discover stories hidden in the numbers and patterns? Could it copy the style of particular writers and produce fresh content in their voices?
It should be noted that there were no journalists participating in this conversation. (I was a silent observer.) It is far less awkward to reimagine people’s lives in their absence.
Some gatherings begin with problems in need of solution. Others begin with solutions seeking a problem. This was a meeting of the latter type. Journalism, of course, had plenty of problems. But no one in the room seemed to know much about those problems; and if they did, those problems weren’t their motivating spur. They were here because they were inventing technologies whose spread they believed was inevitable, and they wanted to see what those technologies could do for — or perhaps to — journalism.
An important self-belief in the room seemed to be this: they were extrapolators of the Curves, the seers of forces. It was not their role to say what world they wanted. Their job was to get what they wanted by saying it would happen anyway.
Manoush got things rolling with an idea about who should produce the news in the future. The Curve was driving more and more of the world’s Internet traffic and advertising dollars to the big Internet portals. In the quarter in which Manoush spoke, 85 percent of new money spent on online ads was captured by just two companies, Facebook and Google, according to the for-now-still-existing New York Times. (Both companies happened to be major recruiters at the lab.)
“It seems pretty obvious to me that news should be moving toward distribution by people who can do advertising better than, like, New York Times and Washington Post, because they just don’t have enough data on you.”
It was a modish idea in tech circles: that tech should “eat” the media, just like it should “eat” everything else. In the future that Manoush envisioned, the most powerful entities on earth would also serve as the checks on their own power. But he didn’t propose this idea out of any belief in the world it would imply. It just seemed obvious to him that news should move toward wherever the Curve of advertising revenue is going.
A meek but protesting “Well…” shot out a few seats down from Manoush. It was Elek, who looked like a blend of Bjorn Borg and Jesus. “I’ll contest that to some extent,” he said faintly.
By the way, just so you’re not alarmed, this was nothing untoward, because disagreements in the lab tended to be devoid of the E.Q. niceties of the business world: “I think that’s a really interesting point, and the only place I’d push back…”; “Just to build on that and take it in a slightly different direction…”; “I think that’s mostly true, but…” Here when you disagreed with a comment in progress, you leaned forward, and your neck stiffened, sometimes to the point of your chin mildly vibrating, and perhaps called up a fake smile that did not mask the contempt you felt, and then you launched.
Some people went with the straightforward “No no no no no no no no no.”
Others favored the more gentle but still direct “Yeah, I mean, but…”
Or, on one occasion, just: “The reason I don’t like this idea…”
“Well…” said Elek. “I’ll contest that to some extent.”
Manoush turned toward Elek, both necks now stiff, both fake smiles in force: “O.K…”
“There’s one of two cases,” Elek said. “Either there’s a lot of money in news, and The New York Times is being greedy and then, yes, Facebook should take a greater share of that. Or there’s not a lot of money in news, and The New York Times is scrambling. And if Facebook takes a bigger share of that, what’ll happen is not the world becomes a better place but all the writers get fired. And then there’s no news for anyone.”
Elek, you will notice, reasoned differently from Manoush. Manoush saw a Curve and prophesied-advocated the future that it implied. Elek saw the Curve but didn’t think we were doomed to follow it. He thought we had choices. It would turn out that he wasn’t alone in this view in the room, though he was in a tiny minority. And that minority consisted entirely of Europeans. They, having some history under their belts, perhaps heard alarm bells when people spoke of a writer-free society.
Yet, Elek and his fellow E.U. delegates aside, the ThoughtLeaders and their disciples tended to gravitate to Manoush’s view. If we lived in the best time of times, in an endlessly self-improving world, who needed the kind of critical press for which Elek seemed nostalgic?
“I look forward to the time when the press covers all the hard work and toil and not the doom and gloom or shame of companies that hit bumps,” a V.C. partner named Josh Elman tweeted. When the darling startup Theranos was the subject of a Wall Street Journal investigation that questioned the basic veracity of its blood-testing business, young founders were incensed: “Sadden by witch hunt against @theranos. Yes, more transparency needed but innovation will have mis-steps. But why burn effort on a cross?” When Mark Zuckerberg pledged to give away ninety-nine percent of his Facebook shares, but to do so through a for-profit company with little oversight or accountability, many raised questions in the press. Sam Altman — Paul Graham’s successor as president of Y Combinator — tweeted: “It’s fine to wait to congratulate until they share more specifics on the recipients, but outright hostility in the mean time makes no sense.” Graham replied: “I think the reason you’re surprised is that not being a loser yourself you underestimate the power of envy.” Many ThoughtLeaders would hardly have minded Google and Facebook “eating” the news, as they liked to call it.
***
Yet tonight Elek had an unlikely ally. Marty, sitting in the corner, was becoming irritated by all the Facebook talk. He had not driven over to hear some techie tell him the future of news lay in companies guys like him had already built.
“If we get back to the context of these meetings,” Marty said, pleasantly but with great authority, “we’re trying to think of ways that you can create interesting new businesses.” He offered some kindling: “If Uber wants to replace all the drivers by robots, do we want to replace all the writers by A.I.? I’ll pause there. It strikes me that those are the kinds of things we should be talking about here.”
Now we were talking. This was some major disruption: a bunch of non-writers debating how to replace all the writers. I was taking careful notes, so that the replaced writers of the future would have some record of how the purge went down.
The other V.C., Ashish, gave Marty a bit of an assist, suggesting they discuss “an algorithmic approach towards content creation.” He praised the news site Buzzfeed, whose tautological purpose was to get the most eyeballs for the things most likely to attract the most eyeballs. The site was putting A.I. to work already, although for now it still involved humans in the process.
“A lot of the listicles are often completely curated, or suggested, using this tool they have in-house that pulls together various links being shared across Twitter, Facebook, and so on,” Ashish said. The tool scans the Web for viral outbreaks. Perhaps it detects an upswing in posts about cupcakes. It analyzes them for patterns. “Basic classification techniques like string-matching can tell you that there’s some similarity between these several links that all have to do with how good the cupcakes look.” Then an editor can assign it, a writer can stick a headline on it and choose fourteen of the best examples, and now what was already beginning to trend on its own is unleashed to trend on Buzzfeed.
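(For the technically curious, here is a minimal sketch, in Python, of the kind of string-matching Ashish was gesturing at. It is my own illustration with made-up inputs, not Buzzfeed’s actual tool: it groups shared link titles by word overlap and treats the largest group as the budding trend.)

```python
# A hypothetical illustration (mine, not Buzzfeed's tool) of "basic
# classification techniques like string-matching": group link titles
# by word overlap and surface the largest group as a candidate trend.

def tokens(title):
    # Split a title into a set of lowercase words.
    return set(title.lower().split())

def jaccard(a, b):
    # Similarity between two token sets: shared words over total words.
    return len(a & b) / len(a | b)

def biggest_cluster(titles, threshold=0.2):
    clusters = []  # each cluster is a list of similar titles
    for title in titles:
        for cluster in clusters:
            # Compare against the cluster's first title; join if similar enough.
            if jaccard(tokens(title), tokens(cluster[0])) >= threshold:
                cluster.append(title)
                break
        else:
            clusters.append([title])
    return max(clusters, key=len)  # the budding trend

shared_links = [
    "27 cupcakes that look too good to eat",
    "These cupcakes look unbelievably good",
    "You won't believe how good these cupcakes look",
    "Local election results announced",
]
print(biggest_cluster(shared_links))  # prints the three cupcake links
```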
“It turns out people really like that content,” Ashish said. “So maybe it means we’re staring at a future where you do have A.I. helping to create content; it just looks more like Buzzfeed than a New York op-ed.” Laughter filled the room. “And that’s what maybe we all actually secretly want to read.”
Ashish had just shown off an important ThoughtLeader move: the faux-populism of claiming to give the people what they want, which just so happens to be rewarding for people like you.
A European neck stiffened across the room. It was Roberto’s.
“But how far can that go?” he said. “Because at the end of the day, someone needs to go out there and take a picture of that news. And someone needs to sit down and write the original thing that, with A.I., you’re gonna morph. But the original content was paid by someone.”
Here, again, one of the European guys was entering the debate and offering some wide-eyed idealism. It was idealistic in this room because it elevated a vision that would require choosing, that was different from what the Curve might bring.
Ashish quickly put Roberto in his place by reminding him of the power of the Curve: “I would argue that as long as the Internet’s free, there’s going to be enough user-generated content that will allow folks to compile the most interesting things out there.” This was a common Valley refrain: in the future, the news would just be a greatest-hits collection of photos and videos and pieces of text posted by ordinary people.
But if writers wanted to save themselves, Ashish said, there were ways. They could, for instance, join Patreon, a platform that allowed artists to crowdsource patronage — to find your own small-dollar Medicis. In other words, in the future the entrepreneurs were building, the way to survive was to become an entrepreneur. The rise of entrepreneurship was, after all, another Curve on which the Valley was gambling.
Now another Euro guy, with two-tone brown and blond hair and more of that Euro-humanism, stiffened his neck and wanted in. He didn’t buy this patronage idea, which assumed that people would pay for higher-quality writing. “I mean, if no one cares about good op-eds and they only care about speaking about feelings, then no one’s gonna pay for it,” he said. Once again, a Euro was drawing a distinction between what the Curve would tend toward and what would be good.
Ashish would have none of it. “What is the value of journalism?” he asked, laughing as he said it.
Two-Tone Euro was still gloomy: “Once you tell people you gotta pay five dollars, or you could get a very shitty version that has a similar title and it’s made by Buzzfeed, they might not pay five dollars anymore.”
Ashish didn’t want to be a downer. Besides patronage, there was another bright spot he knew of in journalism. A site called The Information had recently taken Silicon Valley by storm, and its subscriptions weren’t cheap. Here’s why The Information was good, according to Ashish. Because it helped people make money, instead of spouting some vague Euro ideals about democracy and citizenship.
“What’s great is their subscriber base are the people they often write about,” he said. “It’s a lot of folks on Sand Hill Road. A lot of people who are in executive positions at tech companies. And they’re willing to pay for that content, because they’re a necessity almost. Business information. You’re not reading, sort of, news. It’s critical now to your business to know.”
***
As the conversation progressed, the future of journalism was revealing itself: unpaid user-generated content about cupcakes, auto-selected by bots for curation into listicles; journalist-entrepreneurs raising their own patronage; premium content on the society-magazine model of covering the great and good for consumption by the great and good — journalism of and for them.
But now here came Roberto with his Euro-sentimentalism, delicately stated though it was.
“Journalism — I’m trying to think — is more like the intersection between objective delivering the news and something that’s artistic in the way you write, inspires the person that’s reading, moving the person to a feeling, probably. It’s not so much to have a concrete goal of producing an outcome that would be monetizable.”
Again, the Euro-defiance of the Curve. Listen to the words Roberto was condemning: concrete, goal, producing, outcome, monetizable. These were the words that made the Curve curve. What words did he offer instead? Artistic, inspires, moving, feeling. These were the kind of words you depended on when you sought, mostly in vain, to overrule the Curve.
Before long, one of the Americans was helping to bring things back to the Curve. He had an idea for how to disrupt journalism. “Can we continue to distill the content collector, the reporter themselves?” he asked, a little inscrutably. “Instead of The New York Times employing a few hundred reporters, could this turn into a model where you’ve got individual freelancers or individual bloggers just out there taking pictures and writing about things, and A.I. aggregates this information for some kind of distribution?”
Reporters at The Baltimore Afro-American, 1958
But this only served to rev up Two-Tone’s Euro-sentimentalism once more. He didn’t want to live in a listicular world. And he believed there were many others like him — people who wanted to be elevated by the writing they read and the art they experienced, who, yes, might give in to clickbait in the moment, but who desired to rise above the immediacy and instinct. He wondered aloud: Could A.I. help to build “a new website that keeps you away from cat videos, away from Buzzfeed articles”?
This was perhaps a bridge too far — an A.I. tool built to counteract the Curve? So extreme was this Euro-humanism that it now caused a Euro-schism. Roberto, though by the standards of the room a quite committed humanist, couldn’t take it.
“Yeah, but sorry,” he said. “Facebook is not a conscience. The fact that you are hooked to Facebook — there’s a reason. And, yeah, it would be great to do something that keeps you away from partying or all these other things. But for some reason you end up going. It’s very hard to change the behavior of someone.”
It is hard to change the behavior of someone: an important idea for the winners working in A.I. For those winners to win to the fullest, just with regard to media, algorithms would have to do more and writers, less; layoffs would have to happen; the quality of public discourse would have to drop; the press as an institution would have to rot; writers would have to become eternal fundraisers, dependent on the whims and opinions of their backers; the technology firms that recruited heavily in the Stanford lab would have to control ever more of the society’s information. The architects of A.I. knew that this could become an unpleasant future for many — as a subset of Euro-humanists in the room seemed to think it would.
If you were intelligent, as these techies certainly were, you understood that things could grow tense as you built the future of your dreams — a future in which people with your specific skill set would gain an enormous amount of power, even as other people’s lives and many cherished institutions suffered. And so it was far more prudent, if you could pull it off, to present what was happening as inevitable — and, more important, to cast oneself as powerless over these changes.
Here, in this laboratory, one saw the banality of disruption. Here you disrupted things because what you knew how to do was disrupt things. You optimized for variables because those happened to be variables that you knew how to optimize for. You could imagine away whole swaths of society, without asking the human questions, because the overwhelming technical questions crowded them out. You amassed what others would experience as great power, while insisting on your impotence. You mused about your tools being used to disrupt things, instead of asking what problems needed you. And you did all this by convincing yourself that your own role was minimal, that you were merely riding atop the Curve.
***
The handful of Euro-humanists — now excluding Roberto, perhaps — wanted the room to own up to the real choices that they and the world faced. They wanted their colleagues to own up to the “moral character” of their work, to borrow a phrase from Phillip Rogaway, a cryptographer. Rogaway once wrote an essay criticizing his own colleagues for denying the social implications of their work. “Cryptography rearranges power: it configures who can do what, from what,” he said. “This makes cryptography an inherently political tool, and it confers on the field an intrinsically moral dimension.”
What he wrote of cryptography perhaps applied to A.I., too. A.I. types could cast their field as “fun, deep, and politically neutral.” Their shallow optimism about the Curve undercut the need for bigger-picture questioning: “a normative need vanishes if, in the garden of forking paths, all paths lead to good (or, for that matter, to bad).” Technologists, Rogaway wrote, prefer to deny that their inventions can either benefit or harm the weak, depending on choices we make together. Technologists were, you could say, a bit like ostriches.
Roberto, having traveled to the Americanish side in the ongoing Euro-schism, was in full ostrich mode. “Facebook is not a conscience,” he had said. “It’s very hard to change the behavior of someone.” Then he brought up broccoli. It would be great if people wanted to buy pieces of broccoli at fast-food restaurants. But they don’t. So we have McDonald’s. The world is what it is. They were powerless to change it.
This gave Two-Tone Euro the opening he needed. People do want broccoli nowadays! And if such change was coming to food, why not to other things?
“Before McDonald’s, there used to be organic farmers,” Two-Tone said. “Then everyone wanted to step away from the old to McDonald’s, and now they’re going back. So in a similar fashion, people were like ‘O.K., let’s go for New York Times,’ now they’re going Buzzfeed, but they’re gonna come back.”
Manoush, that champion of efficiency, had been following the back-and-forth and now tried to turn the conversation in a new direction.
“There’s a problem here that we’re not tackling, which is: how do you identify an atom of content, right?” he said. “So right now we’re dealing with articles’ being one atom of content. So I wonder if you can break that up further and further, and maybe you can figure out how much of that content to give to each person.”
(One might note, as an aside, that even this style of diction aided the Curve view. Manoush, like many in the Valley, began a great many of his sentences with a declamatory “so” and ended a significant fraction with a faux-interrogative “right?” To speak this way was to leave no space for doubt, for choices that might resist forces, for the thwarting of inevitability. This way of speaking reinforced a view of the world’s problems as purely technical — the view that there was, in every situation, a right answer. “So…right?” was the opposite of “From where I sit,” or “but maybe that’s just me.” It rejected the idea that people have different interests and needs and ideals. It rejected the very premise of politics. It dismissed the notion that there are competing values in tension in any situation, and that those values must be weighed and negotiated. It saw a world in which there was always a right answer, and technologists like Manoush had special access to those answers, and the rest of us should speak now or forever be quiet. So when I spoke, it made sense to cajole your agreement, right?)
***
So Manoush had been talking about how to identify an atom of content, right?
A neck stiffened just to Manoush’s right. Mahesh, an Indian techie in a white T-shirt, seemed perplexed by this idea of breaking up news into bits and algorithmically distributing the packages. “I don’t know,” he said, seeming a little lost. “It’s like, what is the goal here? What are you trying to optimize on?”
Now this was a great question — perhaps even the question with which the session should have begun. What problem were they actually trying to solve?
But here was the problem with starting with problems. To start with the solution was easy: you looked at the tools you had invented and the Curves that were in progress and you imagined where the future would lead: If Uber wants to replace all the drivers by robots, do we want to replace all the writers by A.I.? To start with a problem was trickier, because not everyone agreed on what was problematic. Starting with a problem, your focus had to be on the society’s needs, not on your tools. Solving that kind of problem tended to involve democracy — collective action, contending values, the making of choices.
What was most striking about the meeting was what hadn’t been discussed.
No one had spoken of democracy and of the place of a press within it.
No one had dwelled on what happens to art in an era of free everything.
No one had reflected on the extraordinary market power of Amazon and the effect of that power on books and ideas.
No one had asked whether the society could protect itself against the Facebook News Feed’s tinkerers slipping their own biases into the algorithm.
No one asked these things, for to ask these things was to admit one’s own power and reveal to others their power, and to suggest that you and those others could decide what kind of future it would be, the forces and the Curves be damned.
Here these bearers of great power over the future seemed in denial of that power. The world would be what it would be.
Before the meeting ended, Two-Tone Euro got up, picked up what appeared to be a homemade hoverboard from the corner — a skateboard-sized platform with a cantaloupe-sized ball in its middle — and rolled away. Others mingled over the remaining pizza and drinks. Just outside, a man retrieving his bicycle from the rack was savoring what he had just imbibed upstairs. That room, he said, wonder filling his eyes, had collided some of the smartest minds in all of Stanford.
Some names and identifying details have been changed. All dialogue is quoted verbatim.
Photos: Eric Sander/Getty; Andriy Onufriyenko/Getty; Kimberly White/Getty