Inside The Newsroom — The Newsletter For Journalists

#61 — Rachel Botsman (Trust Issues)

Hello! Welcome to another edition of Inside The Newsroom. Today’s guest is… Rachel Botsman, an author, podcast host, Trust Fellow at the University of Oxford’s Saïd Business School, and an all-round trust expert. Whether we like it or not, we live in an age where growing numbers of people believe the opinions of strangers over facts. Rachel has studied trust in the media for several years, so we dug deep into why believing facts isn’t popular anymore, and why correcting the vulnerabilities in our brains could be the solution. Below is a post-game of everything we discussed and more. But first, here are some quick links to stories I enjoyed this past week. Enjoy 🧠

  1. Saudi Arabia Hacked Jeff Bezos — Scoop of the year so far, as Saudi Arabia’s crown prince is alleged to have hacked the phone of Amazon’s Jeff Bezos

  2. Instagram Face — How social media and plastic surgery have created a single, cyborgian look

  3. New York Times’ Dual Endorsement — The NYT broke tradition by endorsing two candidates, one of whom is polling at three percent and the other of whom is currently the number one enemy of the progressive left

Oh, and if you like what you read, how about clicking the ❤️ up top? I’ll be very grateful. 😘

Rachel 👇

What Is a Trust Expert?

For more than a decade, Rachel has explored what trust is, how it works, and what its future looks like. In this TED Talk, she explains her work in more detail and how today we prefer to trust strangers online over facts and experts.


The History of Fake News

Misinformation, spin and lies have been around forever, but the power and reach of the internet have allowed false information to spread at speeds never seen before. The small Macedonian town of Veles is arguably the home of fake news: in 2016, a cluster of fake news websites based there began spreading false headlines on Facebook, such as “Pope Francis Shocks World, Endorses Donald Trump for President” and “FBI Agent Suspected in Hillary Email Leaks Found Dead in Apparent Murder-Suicide”. The scammers saw the potential to make large amounts of money by gaming Facebook’s algorithms and exploiting the platform’s failure to check the validity of the information posted on it.

The Macedonian group exposed a sleeping giant, and of course when another giant began his bid for the White House, we were powerless to stop the vast networks already in place. Once mainstream politics entered the fray, fake news predictably exploded into a tidal wave of falsehoods, and not just because we have a Liar-In-Chief occupying 1600 Pennsylvania Avenue. How do we stop the poisonous cycle? Rachel has some ideas…

Mike Wendling, BBC




How Your Brain Tricks You Into Believing Fake News

No matter how conscious we are of false information, it’s almost guaranteed that we’ve all fallen for something that’s blatantly untrue. What’s even more remarkable is that in the Twitter age of retweets and likes, almost 60 percent of people will retweet a link without actually clicking on it. One of the best examples is from 2016, when the satirical news website The Science Post published a block of lorem ipsum text under the provocative headline “Study: 70% of Facebook users only read the headline of science stories before commenting”. The post was shared 125,000 times, buttressing the studies suggesting that around 60 percent of people don’t read an article before retweeting it.

Aside from laziness, our failure to verify articles before sharing them comes down to something called ‘confirmation bias’: we want something to be true so badly that we only accept information that supports our existing view. In her work, Rachel now sets out to disprove her theories rather than simply telling herself they’re true; if she can’t disprove something, she knows she’s onto something. I’ve started to operate in a similar manner, especially when working on projects to do with climate change, where there is often too much material to read. Even a handful of checks per story has caught several errors and saved bundles of time.

Katy Steinmetz, Time


Most of the Internet Isn’t Even Real

The Macedonian scammers were but a tiny part of an internet problem that has grown far worse since 2016. According to a 2017 report by the security firm Imperva, bot activity has consistently been greater than human activity, eclipsing 60 percent of all web traffic in 2013. Fake YouTube views and anonymous egg avatars on Twitter are incredibly concerning, if not just plain annoying, but you can easily live your online life without worrying about petty trolls if you want to. Let’s call those the ‘good bots’. What’s darker and outright dangerous are the ‘bad bots’ — the malware that will try to hack your personal data if it’s the last thing it does. Case in point: Jeff Bezos (see above). What can we do about it? I haven’t a freaking clue. According to the market-research firm CB Insights, more than a dozen bot startups have raised first rounds of funding, so we may just be seeing the tip of the iceberg. 😳

Adrienne LaFrance, The Atlantic


Deep Fakes: People Don’t Care What’s Real

Just as we try to contain the epidemic of good and bad bots, we now have another headache. Deepfakes are videos that use artificial intelligence to make a person appear to say or do something they never said or did. Put more simply: anyone can make another person appear to say anything they like. Take this deepfake of Mark Zuckerberg, originally posted to Instagram no less. Imagine how many people believed it was actually Zuckerberg before sharing it to their networks…

In terms of legal protection, the consequences will inevitably depend on who interprets the law and how. On one hand, deepfakes can be treated as parody, which is incredibly hard to prosecute. But the Electronic Frontier Foundation sees things differently. Per civil liberties director David Greene:

Fortunately, existing laws should be able to provide acceptable remedies for anyone harmed by deepfake videos. In fact, this area isn’t entirely new when it comes to how our legal framework addresses it. The US legal system has been dealing with the harm caused by photo-manipulation and false information in general for a long time, and the principles so developed should apply equally to deepfakes.

If a deepfake is used for criminal purposes, then criminal laws will apply. For example, if a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. And for any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in either of these situations.

On the tort side, the best fit is probably the tort of False Light invasion of privacy. False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes. Deepfakes fit into those areas quite easily.

Rachel Botsman for Wired




Tech Companies Are Not On Our Side

I’ve been trying to find this Katie Couric podcast for absolutely ages, and finally writing this newsletter triggered my memory. Tristan Harris used to work at Google as a design ethicist, but he became horrified at what he saw: teams whose sole purpose was to make us, the users, addicted to Google’s products and technology. This isn’t limited to Google; it takes place throughout Silicon Valley. I can’t recommend the podcast enough, and below is a sneak peek…


Related Episodes…

#58 — Art Markman (University of Texas)

#52 — Katie Notopoulos (BuzzFeed News)

#43 — Kashmir Hill (New York Times)

#41 — Jessica Lessin (The Information)

#30 — Art Markman (University of Texas)


Next Week…

We’ll have Krystal Ball on to talk about the upcoming launch of her new book on populism, as well as the New York Times’ dual endorsement of Elizabeth Warren and Amy Klobuchar.

Last Week…

#60 — Michael Mann (Penn State University) on witnessing the Australian wildfires in person and the country’s climate policy record


Job Corner

Each week I’ll feature a selection of new journalism jobs. This week, I’ve listed a range of openings at Insider Inc/Business Insider covering technology.

INTERNSHIPS (scroll down)

Associate Editor, Tech

Emerging Technology Reporter

Internet and Digital Culture Reporter

News Reporter (London)

Tech Billionaires Reporter

Tech Deals Reporter

Tech Editor

Tech Editor, Enterprise

Tech Ideas and Innovation Reporter

Tech Reporter (London)

Teen Digital Culture Reporter

Senior Tech Reporter

Sports Reporter

Visual Features Reporter, Tech
