With Tonya Riley
Exciting news: We just launched The Technology 202 Network, an invite-only panel of more than 100 technology experts who will vote in regular surveys on the most pressing issues in the field. Their responses will be featured in the newsletter — starting today! You’ll see familiar names on our interactive masthead, including AOL co-founder Steve Case, Cowboy Ventures founding partner Aileen Lee, Cloudflare chief executive Matthew Prince and many more. Sign up to get The Technology 202 newsletter in your inbox every weekday.
Ctrl + N
Social networks haven’t done enough to prevent manipulation of voters on their platforms in 2020, according to an overwhelming majority of tech experts surveyed by The Technology 202.
Companies including Facebook, Google’s YouTube and Twitter are under immense political pressure to fight disinformation after foreign interference in the last presidential election. But a whopping 89 percent of experts in The Technology 202 Network say their responses so far do not inspire confidence.
“Fighting misinformation and online voter manipulation is not a one-time effort, it is a continuous game of cat-and-mouse,” said Hadi Partovi, chief executive of Code.org and an early investor in Facebook, Uber and Airbnb. “The leaders of all the major online platforms would agree their job is far from ‘done.’ ”
The Technology 202 Network is a panel of more than 100 experts from across government, industry and the consumer advocacy community invited to vote in our ongoing surveys. (You can see the full list of experts here. Some were granted anonymity in exchange for their participation.)
The responsibility to tackle disinformation and other threats shouldn’t just be on tech companies, some industry experts argued. “Social media companies should be doing more, but are we comfortable with [Facebook chief executive] Mark Zuckerberg or [Twitter’s] Jack Dorsey as the arbiters of truth?” said Glenn Kelman, the chief executive of the real estate service Redfin.
But Kelman says that policymakers aren’t really equipped to handle the problem, either. “We elect governments, not corporations, to regulate speech, run fair elections and deter foreign interference. But most government folks lack the technical expertise to regulate the Internet, and many actually prefer a wide-open field for partisan warfare.”
Rep. Ro Khanna (D-Calif.), who represents Silicon Valley, insists that “technology companies should be investing in authentication tools, both for users and for content, to ensure that the news Americans are seeing online this year is honest and real.” Yet Khanna also says it’s time for Congress to finally pass legislation to force companies to take action: “Congress should also provide a basic regulatory framework so that social media companies remove blatant disinformation and hate speech that goes viral from their platforms.”
Many experts said tech companies’ lack of transparency about interference attempts makes it virtually impossible for voters to know whether adversaries are trying to influence them.
“The first step in preventing manipulation for individuals is knowing that it may be afoot,” said Danielle Keats Citron, a Boston University law professor and 2019 MacArthur fellow, commonly known as a “genius grant” recipient. “In short, so much is hidden from voters that we cannot tell the extent of the manipulation, let alone what companies are doing about it.”
Stewart Butterfield, the chief executive of Slack, said he’s “skeptical” that tech companies have done enough — “but the truth is we have no real way to know.”
Tech companies have promised big investments to address potential foreign interference on their platforms, including rooting out fake accounts and labeling posts that fact-checkers have deemed false. But the Network experts say these changes are just the tip of the iceberg. “Small initiatives with great fanfare won’t do the job,” said Tom Wheeler, chairman of the Federal Communications Commission during the Obama administration.
The approach to tackling disinformation and other threats has been inconsistent across Big Tech, said Falon Fatemi, the chief executive of start-up Node.io. “While some companies have invested heavily in this area and taken a proactive approach to this challenge, it is evident that some have not yet employed all that artificial intelligence can do to combat this problem,” she wrote. “All tech companies have a moral, and furthermore, business rationale to do better here, and to employ the latest and greatest AI for good in this uncertain time.”
Some respondents singled out Facebook’s policies as being particularly problematic: The company has said it will permit politicians to lie in ads and allow campaigns to target narrow slices of voters with political messages based on highly personalized data amassed by the company. (Google has put tougher restrictions on how political ads can be targeted.)
If Facebook wants to be serious about election integrity, it should follow Twitter’s lead and do away with political ads altogether, said Karla Monterroso, the head of Code 2040, a nonprofit group advocating for diversity in the industry. “Tying money and targeted ad data to freedom of speech is ridiculous,” Monterroso said. “No one is entitled to that amount of data tied to a microphone in exchange for money. Especially if they are spreading lies.”
“Facebook won’t even stop running political ads. It’s hard to fathom, and Zuckerberg’s reasoning is either based on extremely ulterior motives or is just hardheaded,” said Bradley Tusk, founder of Tusk Ventures.
Several Network participants noted that the companies don’t have the right incentives to fix the problems with disinformation because of their business models. “Truly fixing Facebook’s threat to free and fair elections would require fundamental changes to its business model, but Facebook is not going to risk its billion dollars a week in targeted digital advertising revenue unless it’s forced to do so,” said Sally Hubbard, director of enforcement strategy at Open Markets Institute.
“The actions of Facebook and Google since 2016 raise serious doubts about the sincerity of their commitment to protecting democracy,” said Roger McNamee, a Silicon Valley investor and author of “Zucked: Waking Up to the Facebook Catastrophe.” “Both companies employ business models and algorithms that amplify hate speech, disinformation, and conspiracy theories because such content is unusually profitable. Rather than compromise their business models, both companies have made only cosmetic changes to appease policymakers.”
And rampant disinformation on these platforms disproportionately affects minority groups, warned Rashad Robinson, the president of the civil rights group Color of Change. “Make no mistake, these platforms know they have a problem — they acknowledged it in 2016 and they are aware of it today,” he said. “Yet these companies are making an active choice not to do more to end misinformation online, and that choice will disproportionately harm the communities most in need of the resources determined by elections and the upcoming census count.”
Just 11 percent of The Network — which includes executives from most major social networks — said that the companies are doing enough.
Kevin Martin, Facebook’s vice president for public policy and FCC chairman during the George W. Bush administration, sought to highlight the company’s progress.
“We have made strides in improving our security efforts through massive investments in people and technology to increase transparency, combat abuse and protect election integrity across the world,” Martin said. “While we know our work will never be complete, we are committed to combating these threats.”
Jesse Blumenthal, vice president of technology and innovation policy for the Koch network, argued it’s up to each individual to discern fact from fiction. “Social media companies are in a ‘cat-and-mouse’ game with all sorts of bad actors. It is easy to focus only on the challenges that exist and lose sight of the ways that social media platforms can and do help bring important information to light,” he said. “Ultimately, it is up to each of us to sort true information from falsehoods. Individuals are responsible for their actions. No company can or should think for you.”
— More responses from members of our Network panel about whether social media companies are prepared for the 2020 elections:
- NO: “Google and Facebook must take more affirmative steps to prevent platform algorithmic bias against conservative candidates and thought leaders. Takedown policies for disinformation and deep fakes lack much needed transparency. Without this clarity and consistency, voters cannot trust that platforms are objectively filtering news sources and search results, absent any political agenda or censorship.” – Sen. Marsha Blackburn (R-Tenn.), who leads the Senate Judiciary Committee’s “Tech Task Force.”
- NO: “After the lessons we learned in 2016, I think that social media companies should be focusing on several fronts in the fight against voter misinformation: 1) going beyond publishing policies to focusing on effective enforcement and rapid responses against misinformation from political candidates and unreliable sources deliberately targeting voters and vulnerable populations; 2) strengthening anti-harassment mechanisms and enforcement on their platforms; and 3) investing more in content moderation and improving working conditions for content moderators.” – Y-Vonne Hutchinson, the founder and chief executive of ReadySet, a diversity strategy firm
- NO: “Social media companies profit from the higher engagement that results from controversial and divisive content so they’re disinclined to make any real impact. Otherwise you’d see more efforts to actually reduce exposure resulting from the commercial amplification of blatantly erroneous information in politics. You hear about all the money that candidates are raising for the campaigns – that is directly for advertising, including on social networks.” – Ashkan Soltani, former Federal Trade Commission chief technologist and independent researcher
- NO: “There is definitely more work that can and should be done, but getting this right is incredibly complicated and anyone who believes there are easy answers here is not really paying attention.” – Julie Samuels, the executive director and founder of Tech:NYC, a tech sector advocacy group in New York
- NO: “Social media platforms have a chance to elevate the quality of paid political advertising by candidates. Imagine a simple change like ‘no attack ads naming opponents – focus on your own candidacy.’ Rather than focus on minimal compliance, I’d love to see the thought exercise by these platforms on how can we improve political discourse and information.” – Hunter Walk, co-founder of seed-stage venture fund Homebrew
- NO: “Social media companies like Facebook, YouTube and Twitter have demonstrated they can move to correct disinformation in response to the fast-moving coronavirus outbreak, yet have failed to do the same in response to political disinformation. The stakes are equally high for a rapidly spreading disease and a rapidly eroding democracy. These companies have proven they have both means and opportunity to combat disinformation; what they lack is the will.” – Malkia Devich-Cyril, senior fellow and co-founder of MediaJustice
- NO: “We know that different vectors of attack, including automation of accounts and micro-targeted advertising, have significant loopholes across platforms. Further, policies on manipulated media do not require platform companies to disclose when users are targeted, so many of us will never know if we saw content pushed by manipulators.” – Joan Donovan, the director of the technology and social change research project at the Shorenstein Center on Media, Politics and Public Policy at Harvard University’s Kennedy School of Government
- NO: “While social media companies have gotten more serious about combating misinformation and disinformation since 2016, their adversaries have also gotten more sophisticated. This isn’t just about a specific vulnerability, it’s an ongoing arms race at an enormous scale.” – Zach Graves, the head of policy at the Lincoln Network
- NO: “I checked ‘no,’ but really this question has no answer. To answer, we would need to agree on what behavior counts as improper voter ‘manipulation’ – as opposed to campaigning, canvassing, persuading, arguing, and other normal activities that are at the heart of campaigns and citizen political engagement. Until we agree on that, the platforms will always be doing it wrong by someone’s measure.” – Daphne Keller, the director of intermediary liability at Stanford University’s Center for Internet and Society
- YES: “We should not put online platforms in the position of arbitrating the truth of content that is not of their own creation. We don’t hold the phone company responsible for policing what I say on the line.” – A Network participant who chose to remain anonymous
Update: This section has been updated to reflect Devich-Cyril’s current job title.
BITS, NIBBLES AND BYTES
BITS: Tech companies and their workers are struggling to keep up with coronavirus as U.S. cases grow:
— Listings for face masks — for prices ranging from 75 cents to $1,000 apiece — were still surfacing yesterday on Facebook Marketplace, the Verge’s Makena Kelly found. That’s despite Facebook’s announcement over the weekend it would ban all mask listings “in the days ahead” out of concern sellers would exploit the public health crisis.
— Google will prohibit visitors to its offices in Mountain View, Calif., San Francisco, and New York, Jennifer Elias at CNBC reported yesterday. The company is also now instructing its New York employees to work from home, signaling that tech company concerns over the virus have spread to the East Coast. Amazon is also telling its New York and New Jersey-based employees to work from home. (Amazon CEO Jeff Bezos owns The Washington Post).
— Employees should be prepared to brew their own coffee, as well. An employee at South Park Cafe, a hot spot for venture capitalists, tested positive for coronavirus. From Recode’s Teddy Schleifer:
aaaaaaaannnnd there goes VC Twitter https://t.co/NrY7INw06O
— Teddy Schleifer (@teddyschleifer) March 9, 2020
NIBBLES: The Trump campaign sent a letter to Twitter executives yesterday, calling on the social network to apply its new “manipulated media” label to an edited video that Joe Biden shared. But the company says it won’t flag videos that were posted before its new media policy took effect on March 5 — and the video the letter appears to describe was published on March 3.
That could escalate tensions between the company and Trump after Twitter applied its first “manipulated media” flag to a video retweeted by Trump on Sunday.
“In order for American elections to remain free and fair, it is critical that the Biden campaign be held to the same standard it is demanding apply to others,” wrote Michael Glassner, chief operating officer of Donald J. Trump for President in the letter. Glassner said the video Biden shared “manipulates audio and video of President Trump to mislead Americans and give a false impression.” The campaign video splices together many clips of Trump speaking that Glassner says were “deceptively edited” and taken out of context.
“We’ve received the letter and intend to respond,” said Katie Rosborough, a Twitter spokeswoman. The Biden campaign did not immediately respond to a request for comment.
BYTES: Whisper, a once-popular app to anonymously share secrets, left the intimate confessions and messages of hundreds of millions of users exposed online, my colleague Drew Harwell reports. Bad actors could have used the exposed data to tie users to their personal posts, potentially opening them to blackmail or harassment, researchers say.
The confessions included deeply personal information, such as sexual orientation, confessions of adultery and posts about suicide. Millions of the records belonged to children, researchers found.
“This has very much violated the societal and ethical norms we have around the protection of children online,” researcher Dan Ehrlich told Drew. “No matter what happens from here on out, the data has been exposed for years. [People could] have their lives ruined and their families blackmailed because of this.”
The data, which was publicly available online, included age, gender, ethnicity and location coordinates from the users’ last submitted post. Many of the coordinates traced to specific schools, workplaces, military bases and residential neighborhoods. The researchers were also able to access user accounts, including private messages.
The researchers alerted federal law enforcement officials and the company to the vulnerability. The researchers said they also shared their findings with human rights groups for fear the exposed data could have been abused by government officials or spies.
Shortly after researchers and The Post contacted the company on Monday, access to the data was removed. The company told Drew in a statement that much of the data was meant to be public to users from within the Whisper app. The database found by the researchers, however, was “not designed to be queried directly,” a company official said.
(This item has been updated to include a statement from Whisper.)
Twitter reached a deal with the activist hedge fund Elliott Management that will leave chief executive Jack Dorsey in place, Corrie Driebusch reports for the Wall Street Journal.
The agreement stops one of the most significant efforts to oust a famous tech founder, and it will lead to changes on the social network’s board. Twitter will have to appoint two new board members under the agreement, expanding what was an eight-person board. The company also had to agree to search for a third independent director, Twitter said.
— More news from the private sector:
Facebook Inc. named two new directors, appointing Tracey Travis, chief financial officer of Estée Lauder Cos., and longtime McKinsey & Co. executive Nancy Killefer at a time when the board’s role at the social media company is under intense scrutiny.
— The coronavirus outbreak also prevented Yelp CEO Jeremy Stoppelman from testifying at a Senate antitrust hearing today. Yelp executive Luther Lowe will testify in his place while Stoppelman deals with “operational changes” at the company resulting from the outbreak, a spokeswoman said.
Lowe will argue that Google favors itself in its own rating and reviews features, and that this “poses a dire threat to our market, the local search market, as well as many other specialized search markets.” It’s an argument that Yelp has made for nearly a decade, and now the company is betting that lawmakers in Washington are ready to listen as scrutiny of the tech giants grows. Antitrust laws can provide a remedy to Google’s anticompetitive behavior that hurts Yelp, other search engines and consumers, Lowe will argue.
“This self-serving bias matches unwitting consumers with objectively lower-quality information,” Lowe will say in his testimony, provided to The Technology 202.
Google has denied Yelp’s accusations.
“We build Google Search for our users. Our users tell us they want quick access to information, and we’re constantly innovating Search to help people easily find what they’re looking for — whether it’s information on a web page, directions on a map, products for sale or a translation,” Google spokeswoman Julie Tarallo McAlister wrote in a statement.
— More news from the public sector:
— Tech news generating buzz around the Web:
- The Senate Judiciary Subcommittee on Antitrust will host a hearing on “Competition in Digital Technology Markets: Examining Self-Preferencing by Digital Platforms” on Tuesday at 10 a.m.
- The House Judiciary Committee will host a hearing on copyright laws in foreign jurisdictions at 2:30 p.m.
- The Senate Judiciary Committee will hold a hearing on “The EARN IT Act: Holding the Tech Industry Accountable in the Fight Against Online Child Sexual Exploitation” on Wednesday at 10 a.m.
- FCC Commissioner Geoffrey Starks and FTC Commissioner Rebecca Kelly Slaughter will jointly host a field hearing in Detroit on 5G technology and big data on March 16 at 1 p.m.
- The Game Developers Conference will take place in San Francisco March 16-20.
—Postponed: Nava Public Benefit Corporation has postponed tonight’s event “Impact at Scale: From Big Tech to Civic Tech.”