4
The Children of Fogg

It’s not because anyone is evil or has bad intentions. It’s because the game is getting attention at all costs. —TRISTAN HARRIS

On April 9, 2017, onetime Google design ethicist Tristan Harris appeared on 60 Minutes with Anderson Cooper to discuss the techniques that internet platforms like Facebook, Twitter, YouTube, Instagram, and Snapchat use to prey on the emotions of their users. He talked about the battle for attention among media, how smartphones transformed that battle, and how internet platforms profit from that transformation at the expense of their users. The platforms prey on weaknesses in human psychology, using ideas from propaganda, public relations, and slot machines to create habits, then addiction. Tristan called it “brain hacking.”

By the time I saw Tristan’s interview, I had spent three months unsuccessfully trying to persuade Facebook that its business model and algorithms were a threat to users and also to its own brand. I realized I couldn’t do it alone—I needed someone who could help me understand what I had observed over the course of 2016. Tristan’s vision explained so much of what I had seen. His focus was on public health, but I saw immediately the implications for elections and economics.

I got Tristan’s contact information and called him the next day. He told me that he had been trying to get engineers at technology companies like Google to understand brain hacking for more than three years. We decided to join forces. Our goal was to make the world aware of the dark side of social media. Our focus would be on Tristan’s public health framework, but we would look for opportunities to address political issues, such as elections, and economic issues like innovation and entrepreneurship. Our mission might be quixotic, but we were determined to give it a try.

Tristan was born in 1984, the year of the Macintosh. He grew up as the only child of a single mother in Santa Rosa, California, an hour or so north of the Golden Gate Bridge and San Francisco. When Tristan got his first computer at age five, he fell in love. As a child, Tristan showed particular interest in magic, going to a special camp where his unusual skills led to mentoring by several professional magicians. As performed by magicians, magic tricks exploit the evolutionary foundations of human attention. Just as all humans smile more or less the same way, we also respond to certain visual stimuli in predictable ways. Magicians know a lot about how attention works, and they structure their tricks to take advantage. That’s how a magician’s coin appears to fly from one hand to the other and then disappears. Or how a magician can make a coin disappear and then reappear from a child’s ear. When a magician tells you to “pick a card, any card,” they do so after a series of steps designed to cause you to pick a very specific card. All these tricks work on nearly every human because they play on our most basic wiring. We cannot help but be astonished because our attention has been manipulated in an unexpected manner. Language, culture, and even education level do not matter to a magician. The vast majority of humans react the same way.

For young Tristan, magic gave way to computers in the transition from elementary to middle school. Computers enabled Tristan to build stuff, which seemed like magic. He embraced programming languages the way some boys embrace baseball stats, making games and applications of increasing sophistication. It was the late nineties, and Apple was just emerging from a slump that had lasted more than a decade. Tristan fell in love with his Mac and with Apple, dreaming of working there one day. It didn’t take long, thanks to the admissions department at Stanford.

Stanford University is the academic hub of Silicon Valley. Located less than two hours south of Tristan’s home in Santa Rosa, Stanford has given birth to many of the most successful technology companies in history, including Google. When Tristan arrived at Stanford in the fall of 2002, he focused on computer science. Less than a month into his freshman year, he followed through on his dream and applied for a summer internship at Apple, which he got, working mostly on design projects. Some of the code and the user interfaces he created over the course of three summer jobs remain in Apple products today.

After graduation, Tristan enrolled in the computer science master’s program at Stanford. In his first term, he took a class in persuasive technology with Professor B. J. Fogg, whose textbook, Persuasive Technology, is the standard in the field. Professors at other universities teach the subject, but being at Stanford gave Fogg outsized influence in Silicon Valley. His insight was that computing devices allow programmers to combine psychology and persuasion concepts from the early twentieth century, like propaganda, with techniques from slot machines, like variable rewards, and tie them to the human social need for approval and validation in ways that few users can resist. Like a magician doing a card trick, the computer designer can create the illusion of user control when it is the system that guides every action. Fogg’s textbook lays out a formula for persuasion that clever programmers can exploit more effectively on each new generation of technology to hijack users’ minds. Prior to smartphones like the iPhone and Android, the danger was limited. After the transition to smartphones, users did not stand a chance. Fogg did not help. As described in his textbook, Fogg taught ethics by having students “work in small teams to develop a conceptual design for an ethically questionable persuasive technology—the more unethical the better.” He thought this was the best way to get students to think about the consequences of their work.

Disclosures that the techniques he taught may have contributed to undermining democracy and public health have led to criticism of Professor Fogg himself. After reading Fogg’s textbook and a Medium post he wrote, I developed a sense that he is a technology optimist who embraced Silicon Valley’s value system, never imagining that his insights might lead to material harm. I eventually had an opportunity to speak to Fogg. He is a thoughtful and friendly man who feels he is being unfairly blamed for the consequences of persuasive technology on internet platforms. He told me that he made several attempts to call attention to the dangers of persuasive technology, but that Silicon Valley paid no attention.

In companies like Facebook and Google, Fogg’s disciples often work in what is called the Growth group, the growth hackers charged with increasing the number of users, time on site, and engagement with ads. They have been very successful. When we humans interact with internet platforms, we think we are looking at cat videos and posts from friends in a simple news feed. What few people know is that behind the news feed is a large and advanced artificial intelligence. When we check a news feed, we are playing multidimensional chess against massive artificial intelligences that have nearly perfect information about us. The goal of the AI is to figure out which content will keep each of us highly engaged and monetizable. Success leads the AI to show us more content like whatever engaged us in the past. For the 1.47 billion users who check Facebook every day, reinforcement of beliefs, every day for a year or two, will have an effect. Not on every user in every case, but on enough users in enough situations to be both effective for advertising and harmful to democracy.
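
To make that feedback loop concrete, here is a minimal sketch in Python. It is not Facebook’s code: the post “topics,” the scoring rule, and the learning step are illustrative assumptions only, but they capture the dynamic described above—every engagement teaches the model to serve more of the same.

```python
# A minimal, hypothetical sketch of the engagement feedback loop described above.
# It is NOT Facebook's code; the topic features, scoring rule, and learning step
# are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topics: dict                      # topic -> strength, e.g. {"outrage": 0.9}

@dataclass
class UserModel:
    interests: dict = field(default_factory=dict)   # topic -> learned weight

    def predict_engagement(self, post):
        # Predicted engagement grows with overlap between a post's topics and
        # whatever this user engaged with in the past.
        return sum(self.interests.get(t, 0.0) * w for t, w in post.topics.items())

    def record_engagement(self, post, strength=0.1):
        # Every click, like, share, or comment nudges the model toward more of the same.
        for t, w in post.topics.items():
            self.interests[t] = self.interests.get(t, 0.0) + strength * w

def rank_feed(user, candidates, k=3):
    # Show the posts predicted to keep this particular user engaged the longest.
    return sorted(candidates, key=user.predict_engagement, reverse=True)[:k]

if __name__ == "__main__":
    user = UserModel()
    posts = [
        Post("p1", {"puppies": 1.0}),
        Post("p2", {"outrage": 1.0}),
        Post("p3", {"outrage": 0.7, "politics": 0.5}),
    ]
    user.record_engagement(posts[1])                     # one angry click...
    print([p.post_id for p in rank_feed(user, posts)])   # ...and similar content floats up
```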

The artificial intelligences of companies like Facebook (and Google) now include behavioral prediction engines that anticipate our thoughts and emotions, based on patterns found in the reservoir of data they have accumulated about users. Years of Likes, posts, shares, comments, and Groups have taught Facebook’s AI how to monopolize our attention. Thanks to all this data, Facebook can offer advertisers exceptionally high-quality targeting. The challenge has been to create ad products that extract maximum value from that targeting.

The battle for attention requires constant innovation. As the industry learned with banner ads in the early days of the internet, users adapt to predictable ad layouts, skipping over them without registering any of the content. When it comes to online ads, there’s a tradeoff. On the one hand, it is a lot easier to make sure the right person is seeing your ad. On the other hand, it is a lot harder to make sure that person is paying attention to the ad. For the tech platforms, the solution to the latter problem is to maximize the time users spend on the platform. If they devote only a small percentage of their attention to the ads they see, then the key is to monopolize as much of their attention as possible. So Facebook (and other platforms) add new content formats and products in the hope of stimulating more engagement. In the beginning, text was enough. Then photos took over. Then mobile. Video is the new frontier. In addition to new formats, Facebook also introduces new products, such as Messenger and a dating service. To maximize profits, internet platforms, including Facebook, hide the ball on the effectiveness of ads.

Platforms provide less-than-industry-standard visibility to advertisers, preventing traditional audit practices. The effectiveness of advertising has always been notoriously difficult to assess—hence the aphorism “I know half my ad spending is wasted; I just don’t know which half”—and platform ads work well enough that advertisers generally spend more every year. Search ads on Google offer the clearest payback; brand ads on other platforms are much harder to measure. What matters, though, is that advertisers need to put their message in front of prospective customers, no matter where they may be. As users gravitate from traditional media to the internet, the ad dollars follow them. Until they come up with an ad format that is truly compelling, platforms will do whatever they can to maximize daily users and time on site. So long as the user is on the site, the platform will get paid for ads.

INTERNET PLATFORMS HAVE EMBRACED B. J. Fogg’s approach to persuasive technology, applying it in every way imaginable on their sites. Autoplay and endless feeds eliminate cues to stop. Unpredictable, variable rewards stimulate behavioral addiction. Tagging, Like buttons, and notifications trigger social validation loops. As users, we do not stand a chance. Humans have evolved a common set of responses to certain stimuli—“fight or flight” would be an example—that can be exploited by technology. When confronted with visual stimuli, such as vivid colors—red is a trigger color—or a vibration against the skin near our pocket that signals a possible enticing reward, the body responds in predictable ways: a faster heartbeat and the release of a neurotransmitter, dopamine. In human biology, a faster heartbeat and the release of dopamine are meant to be momentary responses that increase the odds of survival in a life-or-death situation. Too much of that kind of stimulus is a bad thing for any human, but the effects are particularly dangerous in children and adolescents. The first wave of consequences includes lower sleep quality, an increase in stress, anxiety, depression, an inability to concentrate, irritability, and insomnia. That is just the beginning. Many of us develop nomophobia, which is the fear of being separated from one’s phone. We are conditioned to check our phones constantly, craving ever more stimulation from our platforms of choice. Many of us develop problems relating to and interacting with other people. Kids get hooked on games, texting, Instagram, and Snapchat in ways that change the nature of human experience. Cyberbullying becomes easy over texting and social media because when technology mediates human relationships, the social cues and feedback loops that would normally cause a bully to experience shunning or disgust by their peers are not present. Adults get locked into filter bubbles, which Wikipedia defines as “a state of intellectual isolation that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior and search history.” Filter bubbles promote engagement, which makes them central to the business models of Facebook and Google. But filter bubbles are not unique to internet platforms. They can also be found on any journalistic medium that reinforces the preexisting beliefs of its audience, while suppressing any stories that might contradict them. Partisan TV channels like Fox News and MSNBC maintain powerful filter bubbles, but they cannot match the impact of Facebook and Google because television is a one-way, broadcast medium. It does not allow for personalization, interactivity, sharing, or groups.

In the context of Facebook, filter bubbles have several elements. In the endless pursuit of engagement, Facebook’s AI and algorithms feed each of us a steady diet of content similar to what has engaged us most in the past. Usually that is content we “like.” Every click, share, and comment helps Facebook refine its algorithms just a little bit. With 2.4 billion people clicking, sharing, and commenting every month—1.58 billion every day—Facebook’s AI knows more about users than they can imagine. All that data in one place would be a target for bad actors, even if it were well-protected. But Facebook’s business model is to give the opportunity to exploit that data to just about anyone who is willing to pay for the privilege.

Tristan makes the case that platforms compete in a race to the bottom of the brain stem—where the AIs present content that appeals to the low-level emotions of the lizard brain, things like immediate rewards, outrage, and fear. Short videos perform better than longer ones. Animated GIFs work better than static photos. Sensational headlines work better than calm descriptions of events. As Tristan says, the space of true things is fixed, while the space of falsehoods can expand freely in any direction—false outcompetes true. From an evolutionary perspective, that is a huge advantage. People say they prefer puppy photos and facts—and that may be true for many—but inflammatory posts work better at reaching huge audiences within Facebook and other platforms.

Getting a user outraged, anxious, or afraid is a powerful way to increase engagement. Anxious and fearful users check the site more frequently. Outraged users share more content to let other people know what they should also be outraged about. Best of all from Facebook’s perspective, outraged or fearful users in an emotionally hijacked state become more reactive to further emotionally charged content. It is easy to imagine how inflammatory content would accelerate the heart rate and trigger dopamine hits. Facebook knows so much about each user that they can often tune News Feed to promote emotional responses. They cannot do this all the time to every user, but they do it far more than users realize. And they do it subtly, in very small increments. On a platform like Facebook, where most users check the site every day, small daily nudges over long periods of time can eventually produce big changes. In 2014, Facebook published a study called “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks,” where they manipulated the balance of positive and negative messages in the News Feeds of nearly seven hundred thousand users to measure the influence of social networks on mood. In the published report, Facebook claimed the experiment provided evidence that emotions can spread over its platform. Without getting prior informed consent or providing any warning, Facebook made people sad just to see if it could be done. Confronted with a tsunami of criticism, Sheryl Sandberg said this: “This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated. And for that communication we apologize. We never meant to upset you.” She did not apologize for running a giant psychological experiment on users. She claimed that experiments like this are normal “for companies.” And she concluded by apologizing only for Facebook’s poor communication. If Sheryl’s comments are any indication, running experiments on users without prior consent is a standard practice at Facebook.

It turns out that connecting 2.4 billion people on a single network does not naturally produce happiness for all. It puts pressure on users, first to present a desirable image, then to command attention in the form of Likes or shares from others. In such an environment, the loudest voices dominate, which can be intimidating. As a result, we follow the human instinct to organize ourselves into clusters or tribes. This starts with people who share our beliefs, most often family, friends, and Facebook Groups to which we belong. Facebook’s News Feed enables every user to surround him- or herself with like-minded people. While Facebook notionally allows us to extend our friend network to include a highly diverse community, in practice, many users stop following people with whom they disagree. When someone provokes us, it feels good to cut them off, so lots of people do that. The result is that friends lists become more homogeneous over time, an effect that Facebook amplifies with its approach to curating News Feed. When content is coming from like-minded family, friends, or Groups, we tend to relax our vigilance, which is one of the reasons why disinformation spreads so effectively on Facebook.

Giving users what they want sounds like a great idea, but it has at least one unfortunate by-product: filter bubbles. There is a high correlation between the presence of filter bubbles and polarization. To be clear, I am not suggesting that filter bubbles create polarization, but I believe they have a negative impact on public discourse and politics because filter bubbles isolate the people stuck in them. Filter bubbles exist outside Facebook and Google, but gains in attention for Facebook and Google are increasing the influence of their filter bubbles relative to others.

Everyone on Facebook has friends and family, but many are also members of Groups. Facebook allows Groups on just about anything, including hobbies, entertainment, teams, communities, churches, and celebrities. There are many Groups devoted to politics, across the full spectrum. Facebook loves Groups because they enable easy targeting by advertisers. Bad actors like them for the same reason. Research by Cass Sunstein, who was the administrator of the White House Office of Information and Regulatory Affairs for the first Obama administration, indicates that when like-minded people discuss issues, their views tend to get more extreme over time.

Groups of politically engaged users who share a common set of beliefs reinforce each other, provoking shared outrage at perceived enemies, which, as I previously noted, makes them vulnerable to manipulation. Jonathon Morgan of Data for Democracy has observed that as few as 1 to 2 percent of a group can steer the conversation if they are well-coordinated. That means a human troll with a small army of digital bots—software robots—can control a large, emotionally engaged Group, which is what the Russians did when they persuaded Groups on opposite sides of the same issue—like pro-Muslim groups and anti-Muslim groups—to simultaneously host Facebook events in the same place at the same time, hoping for a confrontation.
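
As a rough illustration of Morgan’s 1-to-2-percent observation, the toy simulation below (not based on any platform’s real ranking code) shows how a small set of coordinated accounts that amplify one another can occupy the top of an engagement-sorted conversation, even though organic members vastly outnumber them.

```python
# A toy simulation, not any platform's real ranking code. It illustrates how a
# small, well-coordinated minority can dominate an engagement-sorted discussion.
import random

random.seed(42)

GROUP_SIZE = 1000
COORDINATED = 20                      # 2 percent of the group acting in concert

posts = []

# Ordinary members each post once and collect a handful of organic reactions.
for i in range(GROUP_SIZE - COORDINATED):
    posts.append({"author": f"member_{i}", "reactions": random.randint(0, 10)})

# The coordinated accounts all push the same talking point and react to one
# another's copies, so every copy gets roughly (COORDINATED - 1) reactions.
for i in range(COORDINATED):
    posts.append({"author": f"coordinated_{i}", "reactions": COORDINATED - 1})

# An engagement-sorted view of the conversation: the campaign owns the top slots.
for post in sorted(posts, key=lambda p: p["reactions"], reverse=True)[:5]:
    print(post["author"], post["reactions"])
```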

Facebook wants us to believe that it is merely a platform on which others act and that it is not responsible for what those third parties do. Both assertions warrant debate. In reality, Facebook created and operates a complex system built around a value system that increasingly conflicts with the values of the users it is supposed to serve. Where Facebook asserts that users control their experience by picking the friends and sources that populate their News Feed, in reality an artificial intelligence, algorithms, and menus created by Facebook engineers control every aspect of that experience. With nearly as many monthly users as there are notional Christians in the world, and nearly as many daily users as there are notional Muslims, Facebook cannot pretend its business model and design choices do not have a profound effect. Facebook’s notion that a platform with more than two billion users can and should police itself also seems both naïve and self-serving, especially given the now plentiful evidence to the contrary. Even if it were “just a platform,” Facebook has a responsibility for protecting users from harm. Deflection of responsibility has serious consequences.

THE COMPETITION FOR ATTENTION across the media and technology spectrum rewards the worst social behavior. Extreme views attract more attention, so platforms recommend them. News Feeds with filter bubbles do better at holding attention than News Feeds that don’t have them. If the worst thing that happened with filter bubbles was that they reinforced preexisting beliefs, they would be no worse than many other things in society. Unfortunately, people in a filter bubble become increasingly tribal, isolated, and extreme. They seek out people and ideas that make them comfortable.

Social media has enabled personal views that had previously been kept in check by social pressure—white nationalism is an example—to find an outlet. Before the platforms arrived, extreme views were often moderated because it was hard for adherents to find one another. Expressing extreme views in the real world can lead to social stigma, which also keeps them in check. By enabling anonymity and/or private Groups, the platforms removed the stigma, enabling like-minded people, including extremists, to find one another, communicate, and, eventually, to lose the fear of social stigma.

On the internet, even the most socially unacceptable ideas can find an outlet. As a proponent of free speech, I believe every person is entitled to speak his or her mind. Unfortunately, anonymity, the ability to form Groups in private, and the hands-off attitude of platforms have altered the normal balance of free speech, often giving an advantage to extreme voices over reasonable ones. In the absence of limits imposed by the platform, hate speech, for example, can become contagious. The fact that there are no viable alternatives to Facebook and Google in their respective markets places a special burden on those platforms with respect to content moderation. They have an obligation to address the unique free-speech challenges posed by their scale and monopoly position. It is a hard problem to solve, made harder by continuing efforts to deflect responsibility. The platforms have also muddied the waters by frequently using free-speech arguments as a defense against attacks on their business practices.

Whether by design or by accident, platforms empower extreme views in a variety of ways. The ease with which like-minded extremists can find one another creates the illusion of legitimacy. Protected from real-world stigma, communication among extreme voices over internet platforms generally evolves to more dangerous language. Normalization lowers a barrier for the curious; algorithmic reinforcement leads some users to increasingly extreme positions. Recommendation engines can and do exploit that. For example, former YouTube algorithm engineer Guillaume Chaslot created a program to take snapshots of what YouTube would recommend to users. He learned that when a user watches a regular 9/11 news video, YouTube will then recommend 9/11 conspiracies; if a teenage girl watches a video about dieting, YouTube will recommend videos that promote anorexia-related behaviors. It is not for nothing that the industry jokes about YouTube’s “three degrees of Alex Jones,” referring to the notion that no matter where you start, YouTube’s algorithms will often surface a Jones conspiracy theory video within three recommendations. In an op-ed in Wired, my colleague Renée DiResta quoted YouTube chief product officer Neal Mohan as saying that 70 percent of the views on his platform are from recommendations. In the absence of a commitment to civic responsibility, the recommendation engine will be programmed to do the things that generate the most profit. Conspiracy theories cause users to spend more time on the site.
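
The sketch below illustrates the kind of snapshotting Chaslot describes: start from a seed video, repeatedly follow the top recommendation, and record the chain. It is not his actual program; get_recommendations is a placeholder the reader would have to supply (for example, by scraping watch pages), and the canned graph exists only to make the example runnable.

```python
# A hedged sketch of the kind of snapshotting Chaslot describes: start from a
# seed video, repeatedly follow the top recommendation, and record the chain.
# This is not his program. `get_recommendations` is a placeholder the reader
# would supply; the canned graph below exists only to make the example run.
from typing import Callable, List

def follow_recommendations(seed: str,
                           get_recommendations: Callable[[str], List[str]],
                           depth: int = 3) -> List[str]:
    """Record which videos sit one, two, and three hops down the 'Up next' chain."""
    chain = [seed]
    current = seed
    for _ in range(depth):
        recs = get_recommendations(current)
        if not recs:
            break
        current = recs[0]             # always take the top recommendation
        chain.append(current)
    return chain

# Stand-in recommendation graph (purely illustrative labels).
fake_graph = {
    "regular_9_11_news": ["9_11_documentary", "sports_clip"],
    "9_11_documentary": ["9_11_conspiracy"],
    "9_11_conspiracy": ["alex_jones_clip"],
}

print(follow_recommendations("regular_9_11_news", lambda v: fake_graph.get(v, [])))
# -> ['regular_9_11_news', '9_11_documentary', '9_11_conspiracy', 'alex_jones_clip']
```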

Once a person identifies with an extreme position on an internet platform, he or she will be subject to both filter bubbles and human nature. A steady flow of ideas that confirm beliefs will lead many users to make choices that exclude other ideas both online and off. As I learned from Clint Watts, a national security consultant for the FBI, the self-imposed blocking of ideas is called a preference bubble. Filter bubbles are imposed by others, while a preference bubble is a choice. By definition, a preference bubble takes users to a bad place, and they may not even be conscious of the change.

Preference bubbles can be all-encompassing, especially if a platform like Facebook or Google amplifies them with a steady diet of reinforcing content. Like filter bubbles, preference bubbles increase time on site, which is a driver of revenue. In a preference bubble, users create an alternative reality, built around values shared with a tribe, which can focus on politics, religion, or something else. They stop interacting with people with whom they disagree, reinforcing the power of the bubble. They go to war against any threat to their bubble, which for some users means going to war against democracy and legal norms. They disregard expertise in favor of voices from their tribe. They refuse to accept uncomfortable facts, even ones that are incontrovertible. This is how a large minority of Americans abandoned newspapers in favor of talk radio and websites that peddle conspiracy theories. Filter bubbles and preference bubbles undermine democracy by eliminating the last vestiges of common ground among a huge percentage of Americans. The tribe is all that matters, and anything that advances the tribe is legitimate. You see this effect today among people whose embrace of Donald Trump has required them to abandon beliefs they held deeply only a few years earlier. Once again, this is a problem that internet platforms did not invent. Existing fissures in society created a business opportunity that platforms exploited. They created a feedback loop that reinforces and amplifies ideas with a speed and at a scale that are unprecedented.

In his book, Messing with the Enemy, Clint Watts makes the case that in a preference bubble, facts and expertise can be the core of a hostile system, an enemy that must be defeated. As Watts wrote, “Whoever gets the most likes is in charge; whoever gets the most shares is an expert. Preference bubbles, once they’ve destroyed the core, seek to use their preference to create a core more to their liking, specially selecting information, sources, and experts that support their preferred alternative reality rather than the real, physical world.” The shared values that form the foundation of our democracy proved to be powerless against the preference bubbles that have evolved over the past decade. Facebook does not create preference bubbles, but it is the ideal incubator for them. The algorithms ensure that users who like one piece of disinformation will be fed more disinformation. Fed enough disinformation, users will eventually wind up first in a filter bubble and then in a preference bubble. If you are a bad actor and you want to manipulate people in a preference bubble, all you have to do is infiltrate the tribe, deploy the appropriate dog whistles, and you are good to go. That is what the Russians did in 2016 and what many are doing now.

THE SAD TRUTH IS that Facebook and the other platforms are real-time systems with powerful tools optimized for behavior modification. As users, we sometimes adopt an idea suggested by the platform or by other users on the platform as our own. For example, if I am active in a Facebook Group associated with a conspiracy theory and then stop using the platform for a time, Facebook will do something surprising when I return. It may suggest other conspiracy theory Groups to join because they share members with the first conspiracy Group. And because conspiracy theory Groups are highly engaging, they are very likely to encourage reengagement with the platform. If you join the Group, the choice appears to be yours, but the reality is that Facebook planted the seed. It does so not because conspiracy theories are good for you but because conspiracy theories are good for them.
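
A hypothetical sketch of the “shared members” dynamic described above: rank other Groups by how much their membership overlaps with a Group the user already joined. Facebook’s real system is proprietary and far more complex; this is only meant to show why co-membership alone can funnel a user from one conspiracy Group to the next.

```python
# An illustrative sketch, not Facebook's actual logic: suggest Groups by how much
# their membership overlaps with a Group the user has already joined.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

group_members = {
    "conspiracy_group_a": {"u1", "u2", "u3", "u4"},
    "conspiracy_group_b": {"u2", "u3", "u4", "u5"},   # heavy member overlap
    "conspiracy_group_c": {"u3", "u4", "u6"},
    "local_gardening":    {"u7", "u8", "u9"},         # no overlap
}

def suggest_groups(joined: str, top_n: int = 2):
    base = group_members[joined]
    scored = ((g, jaccard(base, members))
              for g, members in group_members.items() if g != joined)
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

print(suggest_groups("conspiracy_group_a"))
# -> [('conspiracy_group_b', 0.6), ('conspiracy_group_c', 0.4)]
```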

Research suggests that people who accept one conspiracy theory have a high likelihood of accepting a second one. The same is true of inflammatory disinformation. None of this was known to me when I joined forces with Tristan. In combination with the events I had observed in 2016, Tristan’s insights jolted me, forcing me to accept the fact that Facebook, YouTube, and Twitter had created systems that modify user behavior. They should have realized that global scale would have an impact on the way people used their products and would raise the stakes for society. They should have anticipated violations of their terms of service and taken steps to prevent them. Once made aware of the interference, they should have cooperated with investigators. I could no longer pretend that Facebook was a victim. I cannot overstate my disappointment. The situation was much worse than I realized.

The people at Facebook live in their own preference bubble. Convinced of the nobility of their mission, Zuck and his employees reject criticism. They respond to every problem with the same approach that created the problem in the first place: more AI, more code, more short-term fixes. They do not do this because they are bad people. They do this because success has warped their perception of reality. To them, connecting 2.4 billion people is so obviously a good thing, and continued growth so important, that they cannot imagine that the problems that have resulted could be in any way linked to their designs or business decisions. It would never occur to them to listen to critics—how many billion people have the critics connected?—much less to reconsider the way they do business. As a result, when confronted with evidence that disinformation and fake news spread over Facebook influenced the Brexit referendum in the United Kingdom and a presidential election in the United States, Facebook took steps that spoke volumes about the company’s world view. They demoted publishers in favor of family, friends, and Groups on the theory that information from those sources would be more trustworthy. The problem is that family, friends, and Groups are the foundational elements of filter and preference bubbles. Whether by design or by accident, they share the very disinformation and fake news that Facebook should want to suppress.
