
Your Parent’s Internet - How to Mitigate Misinformation

By Lane Wagner on Jul 31, 2020

The age of information is not what we all hoped it would be. We successfully digitized the majority of human knowledge, and we even made it freely accessible to most. Now the problem is different: we have too much information. Answers to most questions can be found in thousands of distinct places online, and the new problem is “whose information can we trust?”

What platforms think they should do about fake news

Twitter and Facebook have recently come under scrutiny for their censorship of coronavirus-related misinformation. For example, a video claiming that hydroxychloroquine is a cure for COVID-19 recently went viral on Facebook, and it keeps getting taken down. The video contains some wild assertions from Stella Immanuel, who also happens to believe that gynecological problems are the result of spiritual relationships.

By removing content they believe to be dubious, Twitter and Facebook have made themselves arbiters of truth. Anecdotally, all the posts I’ve seen them remove HAVE contained misinformation, but the fact remains… these platforms have become self-appointed authorities on the veracity of our information.

This is a problem.


So we can’t censor?

We certainly can, and we certainly should in some cases. Let’s get some obvious ones out of the way:

There may be some other clear examples where censoring is unquestionably the right choice, though I doubt there are many. Let’s look at some more controversial examples:

I would posit that the answer here is contingent on who is doing the censoring. While hate speech and misinformation are disgusting, I don’t want a government deciding what counts as hate speech, or deciding what is true.

“War is peace.”

- George Orwell, 1984

That said, I certainly want an online system where hate speech and misinformation are effectively filtered out of the conversation. Ideally, every online participant would be a virtuous, educated, and concerned conversationalist. If that were the case, undesirable posts would simply be ignored, never receiving the likes, shares, upvotes, and comments they need to spread.

In reality, we can’t take such a pacifistic approach. We need to protect our gardens a bit more.

What should social networks do about misinformation?

What platforms do matters more than what individual users do to combat misinformation. Through combinations of UI, UX, tooling, and even moderation, platforms can directly shape the behavior patterns that determine which information circulates and how accurate it is. As we’ll see in the next section, all individual users can do is try to account for their own biases and look to primary sources.

All online platforms are responsible for the tools they provide for moderation, if not for the moderation itself.

Social platforms should:

Social platforms should not be eager to:

By removing misleading content, platforms risk fueling an argumentum ad martyrdom mentality: taking information down can backfire, leading people to suspect there is a nefarious reason for removing it.

But the fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.

- Carl Sagan, Probably
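
To make the “tooling over deletion” idea a bit more concrete, here’s a minimal sketch of what downranking-plus-labeling could look like inside a feed-ranking function. Everything in it is hypothetical: the field names, the weights, and the 0.1 multiplier are made up for illustration and don’t reflect how any real platform works. It’s just one way disputed content could lose its amplification without being erased.

```go
package main

import "fmt"

// Post is a hypothetical representation of a piece of content.
// None of these fields reflect any real platform's data model.
type Post struct {
	Likes, Shares, Comments int
	FlaggedByFactCheckers   bool
}

// reachScore estimates how widely a post gets distributed.
// Engagement boosts reach; a fact-check flag attaches a label and
// dampens distribution instead of deleting the post outright.
func reachScore(p Post) (score float64, label string) {
	score = float64(p.Likes + 2*p.Shares + p.Comments)
	if p.FlaggedByFactCheckers {
		score *= 0.1 // heavily downrank, but don't remove
		label = "Disputed: see fact-check before sharing"
	}
	return score, label
}

func main() {
	viral := Post{Likes: 5000, Shares: 3000, Comments: 800, FlaggedByFactCheckers: true}
	score, label := reachScore(viral)
	fmt.Printf("reach: %.0f, label: %q\n", score, label)
}
```

The specific numbers aren’t the point. The point is that the post stays up and inspectable, so nobody gets handed a martyrdom story, while it loses the algorithmic amplification it would otherwise enjoy.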


What should users do about misinformation?

As I mentioned before, this largely comes down to educating the userbase. The behaviors below are just good rules of thumb for any consumer of information. That said, platforms should do more to overtly teach these kinds of critical thinking skills, and even encourage users to put them into practice via reminders or tactful in-app messaging.

Users should:

Users should not:

