
Watch This Incredibly Important Speech: Tulsi Gabbard Testifies on the Weaponization of Federal Government

Source: Tulsi Gabbard

Tulsi Gabbard delivers a brilliant speech regarding the Weaponization of the Federal Government during a House Subcommittee meeting.




The Age of Intolerance: Cancel Culture’s War on Free Speech | John W. Whitehead & Nisha Whitehead

Source: The Rutherford Institute  

“Political correctness is fascism pretending to be manners.”—George Carlin

Cancel culture—political correctness amped up on steroids, the self-righteousness of a narcissistic age, and a mass-marketed pseudo-morality that is little more than fascism disguised as tolerance—has shifted us into an Age of Intolerance, policed by techno-censors, social media bullies, and government watchdogs.

Everything is now fair game for censorship if it can be construed as hateful, hurtful, bigoted, or offensive, provided that it runs counter to the established viewpoint.

In this way, the most controversial issues of our day—race, religion, sex, sexuality, politics, science, health, government corruption, police brutality, etc.—have become battlegrounds for those who claim to believe in freedom of speech but only when it favors the views and positions they support.

“Free speech for me but not for thee” is how my good friend and free speech purist Nat Hentoff used to sum up this double standard.

This tendency to censor, silence, delete, label as “hateful,” and demonize viewpoints that run counter to the cultural elite is being embraced with near-fanatical zealotry by a cult-like establishment that values conformity and group-think over individuality.

For instance, are you skeptical about the efficacy of the COVID-19 vaccines? Do you have concerns about the outcome of the 2020 presidential election? Do you subscribe to religious beliefs that shape your views on sexuality, marriage, and gender? Do you, deliberately or inadvertently, engage in misgendering (using the wrong pronouns or otherwise identifying a person’s gender incorrectly) or deadnaming (referring to a transgender person by their birth name)?

Say yes to any of those questions and then dare to voice those views in anything louder than a whisper and you might find yourself suspended on Twitter, shut out of Facebook, and banned across various social media platforms.

This authoritarian intolerance masquerading as tolerance, civility, and love (what comedian George Carlin referred to as “fascism pretending to be manners”) is the end result of a politically correct culture that has become radicalized, institutionalized, and tyrannical.

In the past few years, for example, prominent social media voices have been censored, silenced, and made to disappear from Facebook, Twitter, YouTube, and Instagram for voicing ideas that were deemed politically incorrect, hateful, dangerous, or conspiratorial.

Most recently, Twitter suspended conservative podcaster Matt Walsh for violating its hate speech policy by sharing his views about transgender individuals. “The greatest female Jeopardy champion of all time is a man. The top female college swimmer is a man. The first female four-star admiral in the Public Health Service is a man. Men have dominated the female high school track and the female MMA circuit. The patriarchy wins in the end,” Walsh tweeted on Dec. 30, 2021.

J.K. Rowling, the author of the popular Harry Potter series, has found herself denounced as transphobic and widely shunned for daring to criticize efforts by transgender activists to erode the legal definition of sex and replace it with gender. Rowling’s essay explaining her views is a powerful, articulate, well-researched piece that not only stresses the importance of free speech and women’s rights while denouncing efforts by trans activists to demonize those who subscribe to “wrongthink,” but also recognizes that while the struggle over gender dysphoria is real, concerns about safeguarding natal women and girls from abuse are also legitimate.

Ironically enough, Rowling’s shunning included literal book burning. Yet as Ray Bradbury once warned, “There is more than one way to burn a book. And the world is full of people running about with lit matches.”

Indeed, the First Amendment is going up in flames before our eyes, but those first sparks were lit long ago and have been fed by intolerance all along the political spectrum.

Consider some of the kinds of speech being targeted for censorship or outright elimination.

Offensive, politically incorrect, and “unsafe” speech: Political correctness has resulted in the chilling of free speech and growing hostility to those who exercise their rights to speak freely. Where this has become painfully evident is on college campuses, which have become hotbeds of student-led censorship, trigger warnings, microaggressions, and “red light” speech policies targeting anything that might cause someone to feel uncomfortable, unsafe, or offended.

Bullying, intimidating speech: Warning that “school bullies become tomorrow’s hate crimes defendants,” the Justice Department has led the way in urging schools to curtail bullying, going so far as to classify “teasing” as a form of “bullying,” and “rude” or “hurtful” “text messages” as “cyberbullying.”

Hateful speech: Hate speech—speech that attacks a person or group on the basis of attributes such as gender, ethnic origin, religion, race, disability, or sexual orientation—is the primary candidate for online censorship. Corporate internet giants Google, Twitter, and Facebook continue to redefine what kinds of speech will be permitted online and what will be deleted.

Dangerous, anti-government speech: As part of its ongoing war on “extremism,” the government has partnered with the tech industry to counter online “propaganda” by terrorists hoping to recruit support or plan attacks. In this way, anyone who criticizes the government online can be considered an extremist and will have their content reported to government agencies for further investigation or deleted. In fact, the Justice Department is planning to form a new domestic terrorism unit to ferret out individuals “who seek to commit violent criminal acts in furtherance of domestic social or political goals.” What this will mean is more surveillance, more pre-crime programs, and more targeting of individuals whose speech may qualify as “dangerous.”

The upshot of all of this editing, parsing, banning, and silencing is the emergence of a new language, what George Orwell referred to as Newspeak, which places the power to control language in the hands of the totalitarian state.

Under such a system, language becomes a weapon to change the way people think by changing the words they use.

The end result is mind control and a sleepwalking populace.

In totalitarian regimes—a.k.a. police states—where conformity and compliance are enforced at the end of a loaded gun, the government dictates what words can and cannot be used.

In countries where the police state hides behind a benevolent mask and disguises itself as tolerance, the citizens censor themselves, policing their words and thoughts to conform to the dictates of the mass mind lest they find themselves ostracized or placed under surveillance.

Even when the motives behind this rigidly calibrated reorientation of societal language appear well-intentioned—discouraging racism, condemning violence, denouncing discrimination and hatred—inevitably, the end result is the same: intolerance, indoctrination, and infantilism.

The social shunning favored by activists and corporations borrows heavily from the mind control tactics authoritarian cults use to control their members. As Dr. Steven Hassan writes in Psychology Today: “By ordering members to be cut off, they can no longer participate. Information and sharing of thoughts, feelings, and experiences are stifled. Thought-stopping and use of loaded terms keep a person constrained into a black-and-white, all-or-nothing world. This controls members through fear and guilt.”

This mind control can take many forms, but the end result is an enslaved, compliant populace incapable of challenging tyranny.

As Rod Serling, creator of The Twilight Zone, once observed, “We’re developing a new citizenry, one that will be very selective about cereals and automobiles, but won’t be able to think.”

The problem as I see it is that we’ve allowed ourselves to be persuaded that we need someone else to think and speak for us. And we’ve bought into the idea that we need the government and its corporate partners to shield us from that which is ugly or upsetting or mean. The result is a society in which we’ve stopped debating among ourselves, stopped thinking for ourselves, and stopped believing that we can fix our own problems and resolve our own differences.

In short, we have reduced ourselves to a largely silent, passive, polarized populace incapable of working through our own problems and reliant on the government to protect us from our fears.

As Nat Hentoff, that inveterate champion of the First Amendment, once observed, “The quintessential difference between a free nation, as we profess to be, and a totalitarian state, is that here everyone, including a foe of democracy, has the right to speak his mind.”

What this means is opening the door to more speech not less, even if that speech is offensive to some.

Understanding that freedom for those in the unpopular minority constitutes the ultimate tolerance in a free society, James Madison, the author of the Bill of Rights, fought for a First Amendment that protected the “minority” against the majority, ensuring that even in the face of overwhelming pressure, a minority of one—even one who espouses distasteful viewpoints—would still have the right to speak freely, pray freely, assemble freely, challenge the government freely, and broadcast his views in the press freely.

We haven’t done ourselves—or the nation—any favors by becoming so fearfully polite, careful to avoid offense, and largely unwilling to be labeled intolerant, hateful, or closed-minded that we’ve eliminated words, phrases, and symbols from public discourse.

We have allowed our fears—fear for our safety, fear of each other, fear of being labeled racist or hateful or prejudiced, etc.—to trump our freedom of speech and muzzle us far more effectively than any government edict could.

Ultimately the war on free speech—and that’s exactly what it is: a war being waged by Americans against other Americans—is a war that is driven by fear.

By bottling up dissent, we have created a pressure cooker of stifled misery and discontent that is now bubbling over and fomenting even more hate, distrust, and paranoia among portions of the populace.

By muzzling free speech, we are contributing to a growing underclass of Americans who are being told that they can’t take part in American public life unless they “fit in.”

The First Amendment is a steam valve. It allows people to speak their minds, air their grievances and contribute to a larger dialogue that hopefully results in a more just world. When there is no steam valve to release the pressure, frustration builds, anger grows, and people become more volatile and desperate to force a conversation.

Be warned: whatever we tolerate now—whatever we turn a blind eye to—whatever we rationalize when it is inflicted on others will eventually come back to imprison us, one and all.

Eventually, “we the people” will be the ones in the crosshairs.

At some point or another, depending on how the government and its corporate allies define what constitutes “hate” or “extremism,” “we the people” might all be considered guilty of some thought crime or other.

When that time comes, there may be no one left to speak out or speak up in our defense.

After all, it’s a slippery slope from censoring so-called illegitimate ideas to silencing the truth. Eventually, as George Orwell predicted, telling the truth will become a revolutionary act.

We are on a fast-moving trajectory.

In other words, whatever powers you allow the government and its corporate operatives to claim now, for the sake of the greater good or because you like or trust those in charge, will eventually be abused and used against you by tyrants of your own making.

This is the tyranny of the majority against the minority marching in lockstep with technofascism.

If Americans don’t vociferously defend the right of a minority of one to subscribe to, let alone voice, ideas and opinions that may be offensive, hateful, intolerant, or merely different, then we’re going to soon find that we have no rights whatsoever (to speak, assemble, agree, disagree, protest, opt in, opt out, or forge our own paths as individuals).

No matter what our numbers might be, no matter what our views might be, no matter what party we might belong to, it will not be long before “we the people” constitute a powerless minority in the eyes of a power-fueled fascist state driven to maintain its power at all costs.

We are almost at that point now.

Free speech is no longer free.

On paper—at least according to the U.S. Constitution—we are technically free to speak.

In reality, however, we are only as free to speak as a government official—or corporate entities such as Facebook, Google, or YouTube—may allow.

The steady, pervasive censorship creep that is being inflicted on us by corporate tech giants with the blessing of the powers-that-be threatens to bring about a restructuring of reality straight out of Orwell’s 1984, where the Ministry of Truth polices speech and ensures that facts conform to whatever version of reality the government propagandists embrace.

Orwell intended 1984 as a warning. Instead, as I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, it is being used as a dystopian instruction manual for socially engineering a populace that is compliant, conformist, and obedient to Big Brother.

The police state could not ask for a better citizenry than one that carries out its own censorship, spying, and policing.


ABOUT JOHN W. WHITEHEAD

Constitutional attorney and author John W. Whitehead is the founder and president of The Rutherford Institute. His books Battlefield America: The War on the American People and A Government of Wolves: The Emerging American Police State are available at www.amazon.com. He can be contacted at johnw@rutherford.org. Nisha Whitehead is the Executive Director of The Rutherford Institute. Information about The Rutherford Institute is available at www.rutherford.org.




In Court, Facebook Admits ‘Fact Checks’ Are Pure Opinion | Dr. Joseph Mercola

Source: mercola.com  

Story at-a-glance

  • “Fact checks” are nothing but a biased censoring mechanism, and now we have proof of this fact, thanks to a lawsuit brought against Facebook by journalist John Stossel
  • In court documents, Facebook admits that fact checks are “statements of opinion” and not factual assertions
  • Facebook recently censored a whistleblower report published by The British Medical Journal (BMJ), one of the oldest and most respected peer-reviewed medical journals in the world, variably labeling the article as “False,” “Partly false” or “Missing context.” Some users reported they could not share the article at all
  • The fact check inaccurately referred to The BMJ as a “news blog,” failed to specify any assertions of fact that The BMJ article got wrong, and published the fact check under a URL containing the phrase “hoax-alert”
  • The BMJ calls the fact check “inaccurate, incompetent and irresponsible.” In an open letter addressed to Mark Zuckerberg, The BMJ urges Zuckerberg to “act swiftly” to correct the erroneous fact check, review the processes that allowed it to occur in the first place, and “generally to reconsider your investment in and approach to fact-checking overall”

We’ve long suspected that fact-checking organizations are nothing but a biased censoring mechanism more interested in manipulating opinion than establishing actual facts, but now we have absolute proof, thanks to a lawsuit brought against Facebook by journalist John Stossel.1,2

In 2020, a Facebook fact-checker called Science Feedback slapped “False” and “Lacking context” labels on two videos posted by Stossel. The videos featured Stossel’s interviews with experts who discussed the negligible role of climate change in the 2020 California forest fires. While they did not deny that climate change is real, they proposed that other factors, such as poor forest management, likely contributed more.

Why were his videos flagged as misinformation? According to Facebook fact-checkers, Stossel was “misleading” people when he claimed that “forest fires are caused by poor forest management, not climate change.” But according to Stossel, he never actually made that claim.

According to Stossel, the labels damaged his reputation as an investigative journalist and resulted in a loss of followers. Interestingly, when Stossel contacted Science Feedback about its fact checks, two reviewers agreed to be interviewed. With regard to the first video that got flagged, they admitted they’d never even watched it. In the case of the second video, a reviewer explained that they “didn’t like [his] tone.” As noted by The New York Post:3

“That is, you can’t write anything about climate change unless you say it’s the worst disaster in the history of humanity and we must spend trillions to fight it.”

“The problem is the omission of contextual information rather than specific ‘facts’ being wrong,” the fact-checker told Stossel, who says:4

“What? It’s fine if people don’t like my tone. But Facebook declares my post ‘partly false,’ a term it defines on its website as including ‘factual inaccuracies.’ My video does not contain factual inaccuracies … I want Facebook to learn that censorship — especially sloppy, malicious censorship, censorship without any meaningful appeal process — is NOT the way to go. The world needs more freedom to discuss things, not less.”

Facebook Claims Fact Checks Are ‘Protected Opinion’

So, Stossel sued for defamation, and this is where it gets good because to defend Facebook, its lawyers had to at least temporarily resort to telling the truth. In their legal brief, they argue that fact checks are protected under the First Amendment because they are OPINIONS, not assertions of facts! Commenting on the case, climate change blogger Anthony Watts writes:5

“Facebook just blew the ‘fact check’ claim right out of the water in court. In its response to Stossel’s defamation claim, Facebook responds on Page 2, Line 8 in the court document that Facebook cannot be sued for defamation (which is making a false and harmful assertion) because its ‘fact checks’ are mere statements of opinion rather than factual assertions.

Opinions are not subject to defamation claims, while false assertions of fact can be subject to defamation … So, in a court of law, in a legal filing, Facebook admits that its ‘fact checks’ are not really ‘fact’ checks at all, but merely ‘opinion assertions.’

This strikes me as a public relations disaster, and possibly a looming legal disaster for Facebook, PolitiFact, Climate Feedback and other left-leaning entities that engage in biased ‘fact checking.’

Such ‘fact checks’ are now shown to be simply an agenda to suppress free speech and the open discussion of science by disguising liberal media activism as something supposedly factual, noble, neutral, trustworthy, and based on science. It is none of those.”

Facebook Censors The British Medical Journal

Stossel is far from alone in being censored these days. In the video above, he points out other noteworthy experts who have been censored for their opinions and educated stances, such as environmentalist Michael Shellenberger, once hailed by Time Magazine as a “hero of the environment,” statistician and environmentalist Bjorn Lomborg, once declared “one of the most influential people of the 21st century,” and science writer John Tierney.

Of course, I am no stranger to censorship either, having been falsely labeled as one of the “biggest misinformation agents” on the entire internet when it comes to the COVID jab. In these times of Orwellian Doublespeak, I consider this one of the most significant achievements of my career.

Think about it for a moment. The entire mainstream media has agreed that I am the most influential spreader of the truth about COVID on the internet. Even my friend and major freedom fighter, Bobby Kennedy, was only No. 2. I couldn’t be more delighted with their award. I might even have it inscribed on my tombstone.

Most recently, Facebook even censored The British Medical Journal (BMJ) over an article that highlighted potential problems with Pfizer’s COVID jab trial, and The BMJ is one of the oldest and most respected peer-reviewed medical journals in the world!

In early November 2021, The BMJ published a whistleblower report6 that claimed there were serious data integrity issues in the Pfizer COVID jab trial. The article was censored by Facebook and labeled variably as either “False,” “Partly false” or “Missing context.” Some users reported the article could not be shared at all.

The Facebook fact check of The BMJ article was done by Lead Stories, a Facebook contractor. The headline of its “fact check” rebuttal read: “Fact Check: The British Medical Journal Did NOT Reveal Disqualifying and Ignored Reports of Flaws in Pfizer’s COVID-19 Vaccine Trials.”7

‘Inaccurate, Incompetent and Irresponsible’ Fact-Checking

In response, The BMJ has slammed the fact check, calling it “inaccurate, incompetent and irresponsible.”8,9,10 In an open letter11 addressed to Facebook’s Mark Zuckerberg, The BMJ urges Zuckerberg to “act swiftly” to correct the erroneous fact check, review the processes that allowed it to occur in the first place, and “generally to reconsider your investment in and approach to fact-checking overall.” As noted by The BMJ in its letter, the Lead Stories’ fact check:12

  • Inaccurately referred to The BMJ as a “news blog”
  • Failed to specify any assertions of fact that The BMJ article got wrong
  • Published the fact check on the Lead Stories’ website under a URL that contains the phrase “hoax-alert”

Lead Stories refused to address the inaccuracies when contacted by The BMJ directly. The BMJ also raises “a wider concern” in its letter:

“We are aware that The BMJ is not the only high quality information provider to have been affected by the incompetence of Meta’s fact checking regime. To give one other example, we would highlight the treatment by Instagram (also owned by Meta) of Cochrane, the international provider of high quality systematic reviews of the medical evidence.

Rather than investing a proportion of Meta’s substantial profits to help ensure the accuracy of medical information shared through social media, you have apparently delegated responsibility to people incompetent in carrying out this crucial task.

Fact checking has been a staple of good journalism for decades. What has happened in this instance should be of concern to anyone who values and relies on sources such as The BMJ.”

Fact Checkers Are as Biased as They Come

When it comes to fact-checking, it’s high time everyone understood that fact checks are not done by independent, unbiased parties who are sifting through facts to make sure a given piece is accurate.

As Facebook has now admitted in court, these so-called fact checks are nothing more than a declaration of preferred opinion. They’re statements of approved narrative. They have nothing to do with the verification of facts. As reported by the New York Post:13

“The Post has faced this same gauntlet too many times. In February 2020, we published a column by Steven W. Mosher asking if COVID-19 leaked from the Wuhan Lab. This was labeled ‘false’ by Facebook’s fact-checkers.

Of course, those supposed ‘independent’ scientific reviewers relied on a group of experts who had a vested interest in dismissing that theory — including EcoHealth, which had funded the Wuhan lab.

When Twitter ‘fact checked’ and blocked The Post’s stories about Hunter Biden’s laptop as ‘hacked materials,’ what was the basis? Nothing. It wasn’t hacked; the company’s staff just wanted an excuse. Guess they didn’t like our tone. In both these cases, our ‘fact checks’ were lifted, but only after it no longer mattered.”

The New York Post also points out that “The fact-check industry is funded by liberal moguls such as George Soros, government-funded nonprofits and the tech giants themselves.”14 Science Feedback, for example, received seed funding from Google.15

Journalism’s icon, the Poynter Institute — which runs the International Fact-Checking Network (IFCN) — also funded Science Feedback to build what Poynter describes as “a database of fact checks and of websites that spread misinformation the most.”

In a round-robin of circular funding, IFCN’s revenues come from the Bill & Melinda Gates Foundation, Google, Facebook, and government entities such as the U.S. Department of State.16 To top it off, Science Feedback’s crowdfunding is run through the University of California, Merced, so they can avoid taxes in the United States.17

Fact Checkers Protect the Technocratic Agenda

One of the primary funders of the fake fact-checking industry that The Post failed to mention is the drug industry. NewsGuard and other fact-checking organizations are loaded with Big Pharma conflicts of interest, and their bias in favor of the drug industry is undeniable.

Fact-checking organizations are also clearly influenced by technocratic organizations such as the World Economic Forum, which is leading the call for a Great Reset. NewsGuard, for example, is partnered with Publicis,18 one of the world’s largest PR companies that has a huge roster of Big Pharma clients, and Publicis in turn is a partner of the World Economic Forum.

NewsGuard also received a large chunk of its startup capital from Publicis. No doubt, Big Pharma and The Great Reset are tightly intertwined and work together toward the same goal, which is nothing less than world domination and the enslavement of the global population under a biomedical police state.

PR Posing as Free Press Has Unleashed Fake News Pandemic

Publicis actually appears to be coordinating the global effort to suppress information that runs counter to the technocratic narrative about COVID-19, its origin, prevention, and treatment — suppression and censorship that has been repeatedly aimed at this website specifically.

It is part of an enormous network that includes international drug companies, fact-checkers and credibility raters like NewsGuard, Google and other search engines, Microsoft, antivirus software companies like Trend Micro, public libraries, schools, the banking industry, the U.S. State Department, and Department of Defense, the World Health Organization and the World Economic Forum.

Mind you, this is not a comprehensive list of links. It’s merely a sampling of entities to give you an idea of the breadth of connections, which when taken together explain how certain views — such as information about COVID-19 and vaccines — can be so effectively suppressed and erased from public discourse.

To understand the power that PR companies such as Publicis wield, you also need to realize that PR has, by and large, replaced the free press. This has had a devastating effect, and I don’t think I’m overstating the matter when I say that it is PR masquerading as news that gave birth to the whole “fake news” phenomenon.

However, in true Orwellian double-speak, these same fake PR-news pushers claim everyone else is peddling fake news. They want us to believe their PR is the truth, even though it’s typically devoid of data and flies in the face of verifiable facts.

China’s Hidden Influence

In addition to fact-checkers doing the bidding of Big Pharma and the technocratic elite, the public is also being deceived and manipulated by Chinese propaganda. In a December 20, 2021, New York Times article,19 Muyi Xiao, Paul Mozur, and Gray Beltran detail how China manipulates Facebook and Twitter to further its own authoritarian aspirations.

According to Xiao, Mozur, and Beltran, China’s government has “unleashed a global online campaign” to bolster its image and suppress accusations of human rights abuses. To that end, it hires companies to flood social media with fake accounts that are then used to advance China’s agenda worldwide.

This includes creating content on-demand, identifying and tracking critics that live outside of China, running bot networks to flood social media with tailored propaganda messages to steer the discussion, and more — strategies referred to as “public opinion management.”

Disturbingly, while the Chinese government has long hunted down dissenting voices inside the country and forced them to recant, they’re now hunting Chinese dissenters worldwide.

Any user who has connections to the mainland can find themselves in a situation where their family members in China are detained or threatened until or unless they delete the offending post or account. People of Chinese descent who live in other countries may also be detained by police if they return to mainland China, based on the opinions they’ve shared online.

China Aims for More Sophisticated Propaganda

According to the documents the trio obtained, the Chinese police are also working on more sophisticated propaganda maneuvers. For example, rather than relying on bot farms and fake troll profiles to create an appearance of public opinion, they’re looking to grow popular accounts that have an organic following, so that these accounts can later be taken over by the government to push whatever propaganda is desired at that time.

These are known as “profiles for hire.” As explained in the article, “The deeper engagement lends the fake personas credibility at a time when social media companies are increasingly taking down accounts that seem inauthentic or coordinated.”

Facebook Itself Is an Opinion Management Tool

Of course, Facebook and Twitter lend themselves to this kind of manipulation because they are essentially “public opinion management” tools. Even if they didn’t start out that way (and that’s a big if), they’ve certainly grown into it. There can be no denying that both platforms have been instrumental in censoring information about COVID-19 on behalf of the drug industry and global technocracy.

As reported by The National Pulse,20 email correspondence between Dr. Anthony Fauci and Facebook CEO Mark Zuckerberg reveals Zuckerberg even agreed to send Fauci reports on Facebook users’ sentiments to “facilitate decisions” about COVID-19 lockdowns. An April 8, 2020, email from Zuckerberg reads in part:21

“… If we’re looking at a prolonged period of tightening and loosening shelter restrictions around the country, then if there are aggregate, anonymized data reports that Facebook can generate to facilitate these decisions, for example, we’d be happy to do this …

We’ve kicked off a symptom survey, which will hopefully give a county-by-county leading indicator of cases to inform public health decisions. If there are other aggregate data resources that you think would be helpful, let me know …”

As noted by The National Pulse, this is a “stark example” of how Big Tech corporations and government agencies collude and use user data to restrict our freedoms and liberties.22

Government Colludes With Big Tech to Circumvent Constitution

Indeed, aside from this, we’ve also had clear examples of politicians colluding with Big Tech to censor on behalf of the government, in clear violation of the U.S. Constitution. This is why I sued U.S. Sen. Elizabeth Warren.

In early September 2021, Warren sent a letter23 to Andy Jassy, chief executive officer of Amazon.com, demanding an “immediate review” of Amazon’s algorithms to weed out books peddling “COVID misinformation.”24,25,26

Warren specifically singled out my book, “The Truth About COVID-19,” co-written with Ronnie Cummins, founder and director of the Organic Consumers Association (OCA), as a prime example of “highly-ranked and favorably-tagged books based on falsehoods about COVID-19 vaccines and cures” that she wanted banned.

As a government official, it is illegal for her to violate the U.S. Constitution, and pressuring private businesses to do it for her is not a legal workaround. Since she willfully ignored the law, Cummins and I, along with our publisher, Chelsea Green Publishing, and Robert F. Kennedy Jr., who wrote our foreword, sued Warren,27 both in her official and personal capacities, for violating our First Amendment rights.

The federal lawsuit, in which Warren is listed as the sole defendant, was filed on November 8, 2021, in the state of Washington.

‘Fact Checks’ Are Brainwashing Attempts

Is there a fact-checking organization you can rely on? The simple and direct answer is no. They all exist for a single purpose — to metaphorically “shout over” anyone whose views differ from the officially sanctioned narrative on a given topic and suppress the truth that interferes with the implementation of their agenda.

It’s like two people trying to have a conversation while a third person keeps interjecting, screaming at the top of their lungs, “THINK THIS! SAY THIS!”

Who needs that? They’re useless. By reading them and giving them any credence, all you’re doing is filling your head with propaganda and increasing your likelihood of falling into the pervasive mass delusional psychosis we’re seeing all around us. It’s just one big brainwashing attempt. With any amount of luck, Facebook’s court admission that fact checks are mere opinion pieces will end up triggering the fact blockers’ demise.

Sources and References



Facebook Admits Its Fact Checks Are Just Opinions

In a court filing, attorneys for Meta, formerly Facebook, admit that their fact check labels aren’t based on facts at all — they’re actually just opinions

The “fact checks” that Facebook, now known as Meta, has used to silence and censor throughout the pandemic are actually just “opinions.” The stunning admission came from Meta’s own attorneys, who stated in a court filing, “The labels themselves are neither false nor defamatory; to the contrary, they constitute protected opinion.”[1]

The court filing came in response to a lawsuit filed by television journalist John Stossel, who claims the social media giant’s fact-checking amounted to defamation when it flagged his content as false, causing all would-be viewers to doubt its integrity.[2] Stossel wrote in October 2021:[3]

“I just sued Facebook. I didn’t want to sue. I hate lawsuits. I tried for a year to reach someone at Facebook to fix things, but Facebook wouldn’t. Here’s the problem: Facebook uses ‘independent fact-checkers’ to try to reduce fake news on their site. That’s a noble goal.

Unfortunately, at least one Facebook ‘fact-checker’ is a climate-alarmist group that cleverly uses its Facebook connections to stop debate. Facebook is a private company. It has every right to cut me off. But Facebook does not have the right to just lie about me, yet that’s exactly what Facebook and its ‘fact-checker’ did. That’s defamation, and it’s just wrong.”

‘Fact Checks’ Are Actually Opinions

Stossel’s lawsuit relates to a video he published on Facebook about the 2020 California wildfires. He suggested that government mismanagement, not climate change, was probably the greatest factor in causing the fires.

It was the downplaying of climate change that got Stossel’s video flagged as “misleading” in this case, but during recent years virtually any mention of vaccine side effects or ineffectiveness, or questioning of lockdown procedures or mask mandates, yields a similar fate.

The farce is that Meta and their fact-checkers have their own agendas, and it’s now been admitted in a court filing that they’re actively censoring information based not on facts but on their own opinions.

“So-called ‘fact checking’ is a fraud used to cover up the censorship of opinions that differ from those of the powerful Silicon Valley oligarchy. And now we have proof attested to in a court filing by one of the richest companies in the world, represented by some of the most elite lawyers in the world,” ZeroHedge reported, adding that Meta’s attorneys come from “Wilmer Cutler Pickering Hale and Dorr, with over a thousand attorneys and more than a billion dollars a year in revenue.”[4]

Fact Checkers Don’t Check Facts

It’s dangerous to allow a private company to dictate what you see and don’t see online. By labeling valid information as misleading, they’re controlling the narrative and suppressing scientific debate. With each person who doesn’t click, or who is left believing that truth is false, their narrative becomes more deeply embedded in our collective psyche.

Meta CEO Mark Zuckerberg has stated that when a post is identified as misinformation, meaning it’s given a warning label like Stossel’s video received, it results in users not clicking through 95% of the time.[5] But as Stossel tweeted December 10, 2021, “Court filing: Facebook admits its ‘fact checkers’ don’t check facts!”[6]

Not only does the court filing state this, but Stossel found this out personally when he contacted Climate Feedback, the group that fact-checked his video. Two of the three scientists who are reviewers for the group spoke with Stossel. Neither had seen the video, but when one of them finally did, he agreed with Stossel that the misleading label wasn’t necessarily fair. Despite this, Stossel wrote, “neither Climate Feedback nor Facebook will change their smear.” He continued:[7]

“Climate Feedback and its parent group, Science Feedback, use Facebook to censor lots of responsible people, such as science writers John Tierney, Michael Shellenberger and Bjorn Lomborg. Facebook has every right to choose who can use its platform. But Facebook does not have a legal right to knowingly and recklessly lie about what I say. That’s defamation.”

Considering the admission that fact checks are merely opinions, Anthony Watts of Watts Up With That? further noted, “[I]n a court of law, in a legal filing, Facebook admits that its ‘fact checks’ are not really ‘fact’ checks at all, but merely ‘opinion assertions.’ This strikes me as [a] public relations disaster, and possibly a looming legal disaster for Facebook … [and other] entities that engage in biased ‘fact-checking.'”[8]

Ironically, Meta’s lawyers also point to Section 230 of the U.S. Communications Decency Act, which protects the company from liability for material posted by third parties.[9] Yet this Act, and the immunity it provides to social media companies, also negates the need for “fact checks” in the first place[10] — from a legal perspective, at least, if not from other, perhaps more nefarious, motivations.

Who’s really in control of dictating what’s flagged or “passed” on social media? Follow the money; several of Meta’s fact-checking partners, including Africa Check[11] and The Poynter Institute,[12] receive funding from the Bill and Melinda Gates Foundation, for instance. This means news flagged as “false” or “misleading” by fact-checkers isn’t necessarily untrue, incorrect or deceitful; it may simply go against the funder’s agenda.

And oftentimes, the opposite is true in that news items flagged as false may be the ones you should delve into more deeply in your search for real, unbiased news. Hopefully, lawsuits like Stossel’s will help to uncover the censorship and defamation taking place at the hands of social media, but if not, understand that when it comes to fact checks, you can’t just take their word as fact, because it’s actually their opinion.

Facebook “fact-checked” the Greenmedinfo.com page into oblivion, deleting it earlier this year and effectively preventing half a million of our followers from receiving our updates. Censorship has never been more concerning, given that independent sources of information on the benefits of natural health approaches and the harms of pharmaceutically-driven medicine are extremely hard to find. Moreover, without this information, informed consent when it comes to medical decisions is not possible. If you believe that our information (and your access to it) is important, please consider becoming a member, making a one-time donation, following us on our censorship-free Telegram channel, and registering as a member of our own media platform, BeSovereign.com, launching very soon!

References

[1] John Stossel v. Facebook, Inc., Court Filing, Page 2, Line 8, https://wattsupwiththat.com/wp-content/uploads/2021/12/Facebook-admits-its-fact-check-is-opinion-page-2.pdf
[2] John Stossel v. Facebook, Inc., filed September 22, 2021, https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=3543&context=historical
[3] The Sun, October 3, 2021, https://www.lowellsun.com/2021/10/03/smeared-by-facebook/
[4] ZeroHedge, December 10, 2021, https://www.zerohedge.com/political/stunning-facebook-court-filing-admits-fact-checks-are-just-matter-opinion
[5] Reclaimthenet.org, May 21, 2020, https://reclaimthenet.org/zuckerberg-defends-censoring/
[6] Twitter, John Stossel, December 10, 2021, https://twitter.com/JohnStossel/status/1469438439816929288
[7] The Sun, October 3, 2021, https://www.lowellsun.com/2021/10/03/smeared-by-facebook/
[8] Watts Up With That?, December 9, 2021, https://wattsupwiththat.com/2021/12/09/bombshell-in-court-filing-facebook-admits-fact-checks-are-nothing-more-than-opinion/
[9] John Stossel v. Facebook, Inc., Court Filing, Page 2, Line 8, https://wattsupwiththat.com/wp-content/uploads/2021/12/Facebook-admits-its-fact-check-is-opinion-page-2.pdf
[10] Khmer Times, December 17, 2021, https://www.khmertimeskh.com/50990032/in-facebooks-virtual-universe-opinion-substitutes-for-fact/
[11] Bill & Melinda Gates Foundation, Grants, Africa Check, https://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database/Grants/2019/08/OPP1214960
[12] Bill & Melinda Gates Foundation, Grants, The Poynter Institute for Media Studies, https://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database/Grants/2015/11/OPP1138320

Disclaimer: This article is not intended to provide medical advice, diagnosis, or treatment. Views expressed here do not necessarily reflect those of GreenMedInfo or its staff.



“People’s Lives are Being Endangered” by “Fact Checkers” Censoring Vaccine Critics as Facebook Fact Checker owns $1.8 BILLION Stock in Vaccine Company

By Brian Shilhavy | Health Impact News

Congressman Thomas Massie is concerned that the “Fact Checker” company being used by Facebook to squelch any dissent on COVID-19 vaccines owns $1.8 BILLION in stock in a vaccine company, which also employs a former Director of the CDC.

It’s a shame that RT.com had to interview someone who blames all of this on “leftists,” as pro-vaccine views dominate both political parties today.

This report from RT.com is on our Bitchute channel.




The Metaverse Is Big Brother in Disguise: Freedom Meted Out by Technological Tyrants | John W. Whitehead & Nisha Whitehead

“The term metaverse, like the term meritocracy, was coined in a sci-fi dystopian novel written as a cautionary tale. Then techies took metaverse, and technocrats took meritocracy, and enthusiastically adopted what was meant to inspire horror.”—Antonio García Martínez

Welcome to the Matrix (i.e. the metaverse), where reality is virtual, freedom is only as free as one’s technological overlords allow, and artificial intelligence is slowly rendering humanity unnecessary, inferior, and obsolete.

Mark Zuckerberg, the CEO of Facebook, sees this digital universe—the metaverse—as the next step in our evolutionary transformation from a human-driven society to a technological one.

Yet while Zuckerberg’s vision for this digital frontier has been met with a certain degree of skepticism, the truth—as journalist Antonio García Martínez concludes—is that we’re already living in the metaverse.

The metaverse is, in turn, a dystopian meritocracy, where freedom is a conditional construct based on one’s worthiness and compliance.

In a meritocracy, rights are privileges, afforded to those who have earned them. There can be no tolerance for independence or individuality in a meritocracy, where political correctness is formalized, legalized, and institutionalized. Likewise, there can be no true freedom when the ability to express oneself, move about, engage in commerce, and function in society is predicated on the extent to which you’re willing to “fit in.”

We are almost at that stage now.

Consider that in our present virtue-signaling world where fascism disguises itself as tolerance, the only way to enjoy even a semblance of freedom is by opting to voluntarily censor yourself, comply, conform and march in lockstep with whatever prevailing views dominate.

Fail to do so—by daring to espouse “dangerous” ideas or support unpopular political movements—and you will find yourself shut out of commerce, employment, and society: Facebook will ban you, Twitter will shut you down, Instagram will de-platform you, and your employer will issue ultimatums that force you to choose between your so-called freedoms and economic survival.

This is exactly how Corporate America plans to groom us for a world in which “we the people” are unthinking, unresistant, slavishly obedient automatons in bondage to a Deep State policed by computer algorithms.

Science fiction has become fact.

Twenty-some years after the Wachowskis’ iconic film The Matrix introduced us to a futuristic world in which humans exist in a computer-simulated non-reality powered by authoritarian machines—a world where the choice between existing in a denial-ridden virtual dream-state or facing up to the harsh, difficult realities of life comes down to a blue pill or a red pill—we stand at the precipice of a technologically-dominated matrix of our own making.

We are living the prequel to The Matrix with each passing day, falling further under the spell of technologically-driven virtual communities, virtual realities, and virtual conveniences managed by artificially intelligent machines that are on a fast track to replacing human beings and eventually dominating every aspect of our lives.

In The Matrix, computer programmer Thomas Anderson, a.k.a. hacker Neo, is awakened from a virtual slumber by Morpheus, a freedom fighter seeking to liberate humanity from a lifelong hibernation state imposed by hyper-advanced artificial intelligence machines that rely on humans as an organic power source. With their minds plugged into a perfectly crafted virtual reality, few humans ever realize they are living in an artificial dream world.

Neo is given a choice: to take the red pill, wake up and join the resistance, or take the blue pill, remain asleep and serve as fodder for the powers-that-be.

Most people opt for the blue pill.

In our case, the blue pill—a one-way ticket to a life sentence in an electronic concentration camp—has been honey-coated to hide the bitter aftertaste, sold to us in the name of expediency, and delivered by way of blazingly fast Internet, cell phone signals that never drop a call, thermostats that keep us at the perfect temperature without our having to raise a finger, and entertainment that can be simultaneously streamed to our TVs, tablets and cell phones.

Yet we are not merely in thrall with these technologies that were intended to make our lives easier. We have become enslaved by them.

Look around you. Everywhere you turn, people are so addicted to their internet-connected screen devices—smartphones, tablets, computers, televisions—that they can go for hours at a time submerged in a virtual world where human interaction is filtered through the medium of technology.

This is not freedom. This is not even progress.

This is technological tyranny and iron-fisted control delivered by way of the surveillance state, corporate giants such as Google and Facebook, and government spy agencies such as the National Security Agency.

So consumed are we with availing ourselves of all the latest technologies that we have spared barely a thought for the ramifications of our heedless, headlong stumble towards a world in which our abject reliance on internet-connected gadgets and gizmos is grooming us for a future in which freedom is an illusion.

Yet it’s not just freedom that hangs in the balance. Humanity itself is on the line.

If ever Americans find themselves in bondage to technological tyrants, we will have only ourselves to blame for having forged the chains through our own lassitude, laziness, and abject reliance on internet-connected gadgets and gizmos that render us wholly irrelevant.

Indeed, we’re fast approaching Philip K. Dick’s vision of the future as depicted in the film Minority Report. There, police agencies apprehend criminals before they can commit a crime, driverless cars populate the highways, and a person’s biometrics are constantly scanned and used to track their movements, target them for advertising, and keep them under perpetual surveillance.

Cue the dawning of the Age of the Internet of Things (IoT), in which internet-connected “things” monitor your home, your health, and your habits in order to keep your pantry stocked, your utilities regulated, and your life under control and relatively worry-free.

The keyword here, however, is control.

In the not-too-distant future, “just about every device you have—and even products like chairs, that you don’t normally expect to see technology in—will be connected and talking to each other.”

By the end of 2018, “there [were] an estimated 22 billion internet of things connected devices in use around the world… Forecasts suggest that by 2030 around 50 billion of these IoT devices will be in use around the world, creating a massive web of interconnected devices spanning everything from smartphones to kitchen appliances.”

As the technologies powering these devices have become increasingly sophisticated, they have also become increasingly widespread, encompassing everything from toothbrushes and lightbulbs to cars, smart meters, and medical equipment.

It is estimated that 127 new IoT devices are connected to the web every second.
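These figures come from different estimates, but a quick back-of-the-envelope calculation shows they are at least mutually plausible: the per-second figure counts gross new connections, while the 22-to-50-billion trajectory is net growth after retired devices drop off. A minimal sketch of the arithmetic:

```python
# Sanity-check the cited IoT figures. Different estimates, so only rough
# agreement is expected: 127 new devices per second implies about 4 billion
# gross connections a year, while growing from 22 billion (2018) to
# 50 billion (2030) is a NET gain of roughly 2.3 billion a year.
per_second = 127
gross_per_year = per_second * 60 * 60 * 24 * 365   # seconds in a year
print(f"gross new connections/year: {gross_per_year / 1e9:.1f} billion")

net_per_year = (50e9 - 22e9) / (2030 - 2018)       # net growth per year
print(f"net growth/year:            {net_per_year / 1e9:.1f} billion")
```

The gap between the two rates is consistent once device churn is taken into account, though neither source states its methodology.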

This “connected” industry has become the next big societal transformation, right up there with the Industrial Revolution, a watershed moment in technology and culture.

Between driverless cars that completely lack a steering wheel, accelerator, or brake pedal, and smart pills embedded with computer chips, sensors, cameras, and robots, we are poised to outpace the imaginations of science fiction writers such as Philip K. Dick and Isaac Asimov. (By the way, there is no such thing as a driverless car. Someone or something will be driving, but it won’t be you.)

These Internet-connected techno-gadgets include smart light bulbs that discourage burglars by making your house look occupied, smart thermostats that regulate the temperature of your home based on your activities, and smart doorbells that let you see who is at your front door without leaving the comfort of your couch.

Nest, Google’s suite of smart home products, has been at the forefront of the “connected” industry, with such technologically savvy conveniences as a smart lock that tells your thermostat who is home, what temperatures they like, and when your home is unoccupied; a home phone service system that interacts with your connected devices to “learn when you come and go” and alert you if your kids don’t come home; and a sleep system that will monitor when you fall asleep and when you wake up, and keep the house noises and temperature in a sleep-conducive state.

The aim of these internet-connected devices, as Nest proclaims, is to make “your house a more thoughtful and conscious home.” For example, your car can signal ahead that you’re on your way home, while Hue lights can flash on and off to get your attention if Nest Protect senses something’s wrong. Your coffeemaker, relying on data from fitness and sleep sensors, will brew a stronger pot of coffee for you if you’ve had a restless night.

Yet given the speed and trajectory at which these technologies are developing, it won’t be long before these devices are operating entirely independent of their human creators, which poses a whole new set of worries. As technology expert Nicholas Carr notes, “As soon as you allow robots, or software programs, to act freely in the world, they’re going to run up against ethically fraught situations and face hard choices that can’t be resolved through statistical models. That will be true of self-driving cars, self-flying drones, and battlefield robots, just as it’s already true, on a lesser scale, with automated vacuum cleaners and lawnmowers.”

For instance, just as the robotic vacuum, Roomba, “makes no distinction between a dust bunny and an insect,” weaponized drones will be incapable of distinguishing between a fleeing criminal and someone merely jogging down a street. For that matter, how do you defend yourself against a robotic cop—such as the Atlas android being developed by the Pentagon—that has been programmed to respond to any perceived threat with violence?

Moreover, it’s not just our homes and personal devices that are being reordered and reimagined in this connected age: it’s our workplaces, our health systems, our government, our bodies, and our innermost thoughts that are being plugged into a matrix over which we have no real control.

It is expected that by 2030, we will all experience The Internet of Senses (IoS), enabled by Artificial Intelligence (AI), Virtual Reality (VR), Augmented Reality (AR), 5G, and automation. The Internet of Senses relies on connected technology interacting with our senses of sight, sound, taste, smell, and touch by way of the brain as the user interface. As journalist Susan Fourtane explains:

Many predict that by 2030, the lines between thinking and doing will blur. Fifty-nine percent of consumers believe that we will be able to see map routes on VR glasses by simply thinking of a destination… By 2030, technology is set to respond to our thoughts, and even share them with others… Using the brain as an interface could mean the end of keyboards, mice, game controllers, and ultimately user interfaces for any digital device. The user needs to only think about the commands, and they will just happen. Smartphones could even function without touch screens.

In other words, the IoS will rely on technology being able to access and act on your thoughts.

Fourtane outlines several trends related to the IoS that are expected to become a reality by 2030:

1: Thoughts become action: using the brain as the interface, for example, users will be able to see map routes on VR glasses by simply thinking of a destination.

2: Sounds will become an extension of the devised virtual reality: users could mimic anyone’s voice realistically enough to fool even family members.

3: Real food will become secondary to imagined tastes. A sensory device for your mouth could digitally enhance anything you eat, so that any food can taste like your favorite treat.

4: Smells will become a projection of this virtual reality so that virtual visits, to forests or the countryside for instance, would include experiencing all the natural smells of those places.

5: Total touch: smartphone screens will convey the shape and texture of the digital icons and buttons users press.

6: Merged reality: VR game worlds will become indistinguishable from physical reality by 2030.

This is the metaverse, wrapped up in the siren-song of convenience and sold to us as the secret to success, entertainment, and happiness.

It’s a false promise, a wicked trap to snare us, with a single objective: total control.

George Orwell understood this.

Orwell’s masterpiece, 1984, portrays a global society of total control in which people are not allowed to have thoughts that in any way disagree with the corporate state. There is no personal freedom, and advanced technology has become the driving force behind a surveillance-driven society. Snitches and cameras are everywhere. And people are subject to the Thought Police, who deal with anyone guilty of thought crimes. The government, or “Party,” is headed by Big Brother, who appears on posters everywhere with the words: “Big Brother is watching you.”

As I make clear in my book Battlefield America: The War on the American People and in its fictional counterpart The Erik Blair Diaries, total control over every aspect of our lives, right down to our inner thoughts, is the objective of any totalitarian regime.

The Metaverse is just Big Brother in disguise.

ABOUT JOHN W. WHITEHEAD

Constitutional attorney and author John W. Whitehead is the founder and president of The Rutherford Institute. His books Battlefield America: The War on the American People and A Government of Wolves: The Emerging American Police State are available at www.amazon.com. He can be contacted at johnw@rutherford.org. Nisha Whitehead is the Executive Director of The Rutherford Institute. Information about The Rutherford Institute is available at www.rutherford.org.




Nobel Peace Prize for Journalists Serves As Reminder that Freedom of the Press is Under Threat from Strongmen and Social Media

Thirty-two years ago next month, I was in Germany reporting on the fall of the Berlin Wall, an event then heralded as a triumph of Western democratic liberalism and even “the end of history.”

But democracy isn’t doing so well across the globe now. Nothing underscores how far we have come from that moment of irrational exuberance more than the powerful warning the Nobel Prize Committee felt compelled to issue on Oct. 8, 2021, in awarding its coveted Peace Prize to two reporters.

“They are representative for all journalists,” Berit Reiss-Andersen, the chair of the Norwegian Nobel Committee, said in announcing the award to Maria Ressa and Dmitry Muratov, “in a world in which democracy and freedom of the press face increasingly adverse conditions.”

The honor for Muratov, the co-founder of Russia’s Novaya Gazeta, and Ressa, the CEO of the Philippine news site Rappler, is enormously important. In part that’s because of the protection that global attention may afford two journalists under imminent and relentless threat from the strongmen who run their respective countries. “The world is watching,” Reiss-Andersen pointedly noted in an interview after making the announcement.

Equally important is the larger message the committee wanted to deliver. “Without media, you cannot have a strong democracy,” Reiss-Andersen said.

Global political threats

The two laureates’ cases highlight an emergency for civil society: Muratov, editor of what the Nobel Prize Committee described as “the most independent paper in Russia today,” has seen six of his colleagues slain for their work criticizing Russian leader Vladimir Putin.

Ressa, a former CNN reporter, is under a de facto travel ban because the government of Rodrigo Duterte, in an obvious attempt to bankrupt Rappler, has filed so many legal cases against the website that Ressa must go from judge to judge to ask permission any time she wants to leave the country.

Inevitably, Ressa told me recently, one of them says “no.” Maybe that will change now that she has a date in Oslo. But Ressa probably knows better than to hold her breath.

Last year, when I – a long-time journalist turned professor of journalism – helped organize a group of fellow Princeton alumni to sign a letter of support for Ressa, more than 400 responded. They included members of Congress and state legislatures and former diplomats who served presidents of both parties. One of them was former Secretary of State George P. Shultz, who died several months later, making a show of solidarity with Maria Ressa one of his last public acts. This show of support is a sign of what’s at stake.

Three decades after the downfall of totalitarian regimes in Eastern Europe, forces of darkness and intolerance are on the march. Journalists are the canaries down the noxious mine shaft. Attacks on them are becoming more brazen: whether it is the grisly dismemberment of Saudi dissident and writer Jamal Khashoggi, the grounding of a commercial airplane to snatch a Belarusian journalist or the infamous graffiti “Murder the Media” scrawled onto a door of the U.S. Capitol during the Jan. 6 insurrection.

This irrational hatred of purveyors of facts knows no ideology. Former U.S. President Donald Trump’s disdain for the press is at least equaled by that of leftist Nicaraguan leader Daniel Ortega, whose response to his critics in the media has been to, well, lock ‘em up.

Digital menace

What makes today’s threats to free expression especially insidious is that they don’t come just from the usual suspects – thuggish government censors.

They are amplified and weaponized by social media networks that claim the privilege of free speech protection while they allow themselves to be hijacked by slanderers and propagandists.

No one has done more to expose the complicity of these platforms in the attack on democracy than Ressa, a tech enthusiast who built her publication’s website to interface with Facebook and now accuses the company of endangering her own freedom with its laissez-faire approach to the slander being propagated on its site.

“Freedom of expression is full of paradoxes,” the Nobel Committee’s Reiss-Andersen observed, in an interview after awarding the Peace Prize. She made it clear that the award to Ressa and Muratov was intended to tackle those paradoxes too.

Asked why the Peace Prize went to two individual journalists – rather than to one of the press freedom organizations, such as the Committee to Protect Journalists, that have represented Ressa, Muratov and so many of their endangered colleagues – Reiss-Andersen said the Nobel Committee deliberately chose working reporters.

Ressa and Muratov represent “a golden standard,” she said, of “journalism of high quality.” In other words, they are fact-finders and truth-seekers, not purveyors of clickbait.

That golden standard is increasingly endangered, in large part because of the digital revolution that shattered the business model for public service journalism.

“Free, independent and fact-based journalism serves to protect against abuse of power,” Reiss-Andersen said in the prize announcement. But it is increasingly being undermined and supplanted by what’s called “content,” served up algorithmically from sources that are not transparent in ways that are designed to addict and that drive partisanship, tribalism, and division.

This poses a challenge for public policymakers and the democracies they represent. How to regulate digital media and still protect free speech? How to support the labor-intensive work of journalism and still protect its independence?

Answering those questions won’t be easy. But democracy may be at a tipping point. With its recognition of two investigative journalists and the crucial – and dangerous – work they do to support democracy, the Nobel Committee has invited us to begin the debate.

Correction: This story has been updated to state the correct place, Oslo, where the Nobel Peace Prize is awarded.

Editor’s note: Naomi Schalit, senior politics editor at The Conversation, signed the open letter “In defense of press freedom” organized by author Kathy Kiely in July 2020.

By Kathy Kiely, Professor and Lee Hills Chair of Free Press Studies, University of Missouri-Columbia

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Whistleblower Says Facebook Chooses ‘Profit Over Safety’

Source: France 24 English

Frances Haugen, a Facebook whistleblower who shared a trove of Facebook documents alleging the social media giant knew its products were fueling hate and harming children’s mental health, revealed her identity Sunday in a televised interview on 60 Minutes and accused the company of choosing “profit over safety.” Haugen revealed internal papers proving that Facebook is lying about making progress against hate, violence and misinformation – which brings them more profits. Here are a couple of key quotes from the interview:

“The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimize for its own interests, like making more money.” ~ Frances Haugen

“The version of Facebook that exists today is tearing our societies apart and causing ethnic violence around the world.” ~ Frances Haugen

Here’s a more detailed report:




Facebook Lied — It’s Reading Your Private WhatsApp Messages

By Peter Elkind, Jack Gillum and Craig Silverman | ProPublica | The Defender

When Mark Zuckerberg unveiled a new “privacy-focused vision” for Facebook in March 2019, he cited the company’s global messaging service, WhatsApp, as a model.

Acknowledging that “we don’t currently have a strong reputation for building privacy-protective services,” the Facebook CEO wrote that “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever. This is the future I hope we will help bring about. We plan to build this the way we’ve developed WhatsApp.”

Zuckerberg’s vision centered on WhatsApp’s signature feature, which he said the company was planning to apply to Instagram and Facebook Messenger: end-to-end encryption, which converts all messages into an unreadable format that is only unlocked when they reach their intended destinations.

WhatsApp messages are so secure, he said, that nobody else — not even the company — can read a word. As Zuckerberg had put it earlier, in testimony to the U.S. Senate in 2018, “We don’t see any of the content in WhatsApp.”

WhatsApp emphasizes this point so consistently that a flag with a similar assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”

Those assurances are not true. WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin, and Singapore, where they examine millions of pieces of users’ content. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images, and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems.

These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute.

Policing users while assuring them that their privacy is sacrosanct makes for an awkward mission at WhatsApp. A 49-slide internal company marketing presentation from December, obtained by ProPublica, emphasizes the “fierce” promotion of WhatsApp’s “privacy narrative.”

It compares its “brand character” to “the Immigrant Mother” and displays a photo of Malala Yousafzai, who survived a shooting by the Taliban and became a Nobel Peace Prize winner, in a slide titled “Brand tone parameters.” The presentation does not mention the company’s content moderation efforts.

WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers. But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.” The company declined to make executives available for interviews for this article but responded to questions with written comments.

“WhatsApp is a lifeline for millions of people around the world,” the company said. “The decisions we make around how we build our app are focused around the privacy of our users, maintaining a high degree of reliability and preventing abuse.”

WhatsApp’s denial that it moderates content is noticeably different from what Facebook Inc. says about WhatsApp’s corporate siblings, Instagram and Facebook. The company has said that some 15,000 moderators examine content on Facebook and Instagram, neither of which is encrypted. It releases quarterly transparency reports that detail how many accounts Facebook and Instagram have “actioned” for various categories of abusive content. There is no such report for WhatsApp.

Deploying an army of content reviewers is just one of the ways that Facebook Inc. has compromised the privacy of WhatsApp users. Together, the company’s actions have left WhatsApp — the largest messaging app in the world, with two billion users — far less private than its users likely understand or expect.

A ProPublica investigation, drawing on data, documents, and dozens of interviews with current and former employees and contractors, reveals how, since purchasing WhatsApp in 2014, Facebook has quietly undermined its sweeping security assurances in multiple ways. (Two articles this summer noted the existence of WhatsApp’s moderators but focused on their working conditions and pay rather than their effect on users’ privacy. This article is the first to reveal the details and extent of the company’s ability to scrutinize messages and user data — and to examine what the company does with that information.)

Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems, and account information to examine user messages, images, and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.

Facebook Inc. has also downplayed how much data it collects from WhatsApp users, what it does with it and how much it shares with law enforcement authorities. For example, WhatsApp shares metadata, unencrypted records that can reveal a lot about a user’s activity, with law enforcement agencies such as the Department of Justice.

Some rivals, such as Signal, intentionally gather much less metadata to avoid incursions on their users’ privacy and thus share far less with law enforcement. (“WhatsApp responds to valid legal requests,” the company spokesperson said, “including orders that require us to provide on a real-time going forward basis who a specific person is messaging.”)

WhatsApp user data, ProPublica has learned, helped prosecutors build a high-profile case against a Treasury Department employee who leaked confidential documents to BuzzFeed News that exposed how dirty money flows through U.S. banks.

Like other social media and communications platforms, WhatsApp is caught between users who expect privacy and law enforcement entities that effectively demand the opposite: that WhatsApp turn over information that will help combat crime and online abuse.

WhatsApp has responded to this dilemma by asserting that it’s no dilemma at all. “I think we absolutely can have security and safety for people through end-to-end encryption and work with law enforcement to solve crimes,” said Will Cathcart, whose title is Head of WhatsApp, in a YouTube interview with an Australian think tank in July.

The tension between privacy and disseminating information to law enforcement is exacerbated by a second pressure: Facebook’s need to make money from WhatsApp. Since paying $22 billion to buy WhatsApp in 2014, Facebook has been trying to figure out how to generate profits from a service that doesn’t charge its users a penny.

That conundrum has periodically led to moves that anger users, regulators, or both. The goal of monetizing the app was part of the company’s 2016 decision to start sharing WhatsApp user data with Facebook, something the company had told EU regulators was technologically impossible.

The same impulse spurred a controversial plan, abandoned in late 2019, to sell advertising on WhatsApp. And the profit-seeking mandate was behind another botched initiative in January: the introduction of a new privacy policy for user interactions with businesses on WhatsApp, allowing businesses to use customer data in new ways. That announcement triggered a user exodus to competing apps.

WhatsApp’s increasingly aggressive business plan is focused on charging companies for an array of services — letting users make payments via WhatsApp and managing customer service chats — that offer convenience but fewer privacy protections. The result is a confusing two-tiered privacy system within the same app, in which the protections of end-to-end encryption are further eroded when WhatsApp users employ the service to communicate with businesses.

The company’s December marketing presentation captures WhatsApp’s diverging imperatives. It states that “privacy will remain important.” But it also conveys what seems to be a more urgent mission: the need to “open the aperture of the brand to encompass our future business objectives.”

I. “Content moderation associates”

In many ways, the experience of being a content moderator for WhatsApp in Austin is identical to being a moderator for Facebook or Instagram, according to interviews with 29 current and former moderators. Mostly in their 20s and 30s, many with past experience as store clerks, grocery checkers and baristas, the moderators are hired and employed by Accenture, a huge corporate contractor that works for Facebook and other Fortune 500 behemoths.

The job listings advertise “Content Review” positions and make no mention of Facebook or WhatsApp. Employment documents list the workers’ initial title as “content moderation associate.” Pay starts at around $16.50 an hour. Moderators are instructed to tell anyone who asks that they work for Accenture, and are required to sign sweeping non-disclosure agreements.

Citing the NDAs, almost all the current and former moderators interviewed by ProPublica insisted on anonymity. (An Accenture spokesperson declined to comment, referring all questions about content moderation to WhatsApp.)

When the WhatsApp team was assembled in Austin in 2019, Facebook moderators already occupied the fourth floor of an office tower on Sixth Street, adjacent to the city’s famous bar-and-music scene. The WhatsApp team was installed on the floor above, with new glass-enclosed work pods and nicer bathrooms that sparked a tinge of envy in a few members of the Facebook team.

Most of the WhatsApp team scattered to work from home during the pandemic. Whether in the office or at home, they spend their days in front of screens, using a Facebook software tool to examine a stream of “tickets,” organized by subject into “reactive” and “proactive” queues.

Collectively, the workers scrutinize millions of pieces of WhatsApp content each week. Each reviewer handles upwards of 600 tickets a day, which gives them less than a minute per ticket. WhatsApp declined to reveal how many contract workers are employed for content review, but a partial staffing list reviewed by ProPublica suggests that, at Accenture alone, it’s more than 1,000. WhatsApp moderators, like their Facebook and Instagram counterparts, are expected to meet performance metrics for speed and accuracy, which are audited by Accenture.

Their jobs differ in other ways. Because WhatsApp’s content is encrypted, artificial intelligence systems can’t automatically scan all chats, images, and videos, as they do on Facebook and Instagram. Instead, WhatsApp reviewers gain access to private content when users hit the “report” button on the app, identifying a message as allegedly violating the platform’s terms of service.

This forwards five messages — the allegedly offending one along with the four previous ones in the exchange, including any images or videos — to WhatsApp in unscrambled form, according to former WhatsApp engineers and moderators. Automated systems then feed these tickets into “reactive” queues for contract workers to assess.
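The five-message window described above can be sketched as a simple slice over a chat transcript. This is only an illustration of the reported behavior, not WhatsApp's actual code; the function name and parameters are invented:

```python
def report_window(messages, reported_index, context=4):
    """Return the reported message plus up to `context` preceding
    messages from the same exchange, mirroring the article's account
    of what a user report forwards in unscrambled form."""
    start = max(0, reported_index - context)
    return messages[start:reported_index + 1]

# A report on the 7th message of a thread yields that message
# plus the four before it (fewer if the thread is shorter).
chat = [f"msg-{i}" for i in range(10)]
window = report_window(chat, 6)
```

Note that a report near the start of a conversation simply forwards whatever prior messages exist, so the window can be smaller than five.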

Artificial intelligence initiates the second set of queues — so-called proactive ones — by scanning unencrypted data that WhatsApp collects about its users and comparing it against suspicious account information and messaging patterns (a new account rapidly sending out a high volume of chats is evidence of spam), as well as terms and images that have previously been deemed abusive.

The unencrypted data available for scrutiny is extensive. It includes the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength, and phone operating system, as well as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app, and any previous history of violations.
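As a rough illustration of the kind of pattern matching the article describes for the "proactive" queues (for example, a brand-new account rapidly sending a high volume of chats being treated as likely spam), here is a toy heuristic. Every field name and threshold below is hypothetical, not drawn from WhatsApp's systems:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Illustrative unencrypted account signals; names are invented."""
    account_age_hours: float   # how long ago the account was created
    messages_last_hour: int    # outbound message volume
    prior_violations: int      # previous terms-of-service strikes

def flag_for_proactive_queue(sig, rate_threshold=100, new_account_hours=24):
    """Toy version of the described heuristic: a new account blasting
    out chats at high volume, or any prior violation history, creates
    a ticket for human review."""
    if sig.account_age_hours < new_account_hours and sig.messages_last_hour > rate_threshold:
        return True
    return sig.prior_violations > 0
```

In the real system, flagged accounts would feed a review queue rather than be banned outright; a contractor makes the final call.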

The WhatsApp reviewers have three choices when presented with a ticket for either type of queue: Do nothing, place the user on “watch” for further scrutiny, or ban the account. (Facebook and Instagram content moderators have more options, including removing individual postings. It’s that distinction — the fact that WhatsApp reviewers can’t delete individual items — that the company cites as its basis for asserting that WhatsApp reviewers are not “content moderators.”)

WhatsApp moderators must make subjective, sensitive, and subtle judgments, interviews and documents examined by ProPublica show. They examine a wide range of categories, including “Spam Report,” “Civic Bad Actor” (political hate speech and disinformation), “Terrorism Global Credible Threat,” “CEI” (child exploitative imagery), and “CP” (child pornography).

Another set of categories addresses the messaging and conduct of millions of small and large businesses that use WhatsApp to chat with customers and sell their wares. These queues have such titles as “business impersonation prevalence,” “commerce policy probable violators” and “business verification.”

Moderators say the guidance they get from WhatsApp and Accenture relies on standards that can be simultaneously arcane and disturbingly graphic. Decisions about abusive sexual imagery, for example, can rest on an assessment of whether a naked child in an image appears adolescent or prepubescent, based on a comparison of hip bones and pubic hair to a medical index chart.

One reviewer recalled a grainy video in a political-speech queue that depicted a machete-wielding man holding up what appeared to be a severed head: “We had to watch and say, ‘Is this a real dead body or a fake dead body?’”

In late 2020, moderators were informed of a new queue for alleged “sextortion.” It was defined in an explanatory memo as “a form of sexual exploitation where people are blackmailed with a nude image of themselves which have been shared by them or someone else on the Internet.” The memo said workers would review messages reported by users that “include predefined keywords typically used in sextortion/blackmail messages.”

WhatsApp’s review system is hampered by impediments, including buggy language translation. The service has users in 180 countries, with the vast majority located outside the U.S. Even though Accenture hires workers who speak a variety of languages, for messages in some languages there’s often no native speaker on-site to assess abuse complaints.

That means using Facebook’s language-translation tool, which reviewers said could be so inaccurate that it sometimes labeled messages in Arabic as being in Spanish. The tool also offered little guidance on local slang, political context, or sexual innuendo. “In the three years I’ve been there,” one moderator said, “it’s always been horrible.”

The process can be rife with errors and misunderstandings. Companies have been flagged for offering weapons for sale when they’re selling straight shaving razors. Bras can be sold, but if the marketing language registers as “adult,” the seller can be labeled a forbidden “sexually oriented business.” And a flawed translation tool set off an alarm when it detected kids for sale and slaughter, which, upon closer scrutiny, turned out to involve young goats intended to be cooked and eaten in halal meals.

The system is also undercut by the human failings of the people who instigate reports. Complaints are frequently filed to punish, harass or prank someone, according to moderators. In messages from Brazil and Mexico, one moderator explained, “we had a couple of months where AI was banning groups left and right because people were messing with their friends by changing their group names” and then reporting them. “At the worst of it, we were probably getting tens of thousands of those. They figured out some words the algorithm did not like.”

Other reports fail to meet WhatsApp standards for an account ban. “Most of it is not violating,” one of the moderators said. “It’s content that is already on the internet, and it’s just people trying to mess with users.” Still, each case can reveal up to five unencrypted messages, which are then examined by moderators.

The judgment of WhatsApp’s AI is less than perfect, moderators say. “There were a lot of innocent photos on there that were not allowed to be on there,” said Carlos Sauceda, who left Accenture last year after nine months. “It might have been a photo of a child taking a bath, and there was nothing wrong with it.” As another WhatsApp moderator put it, “A lot of the time, the artificial intelligence is not that intelligent.”

Facebook’s written guidance to WhatsApp moderators acknowledges many problems, noting “we have made mistakes and our policies have been weaponized by bad actors to get good actors banned. When users write inquiries pertaining to abusive matters like these, it is up to WhatsApp to respond and act (if necessary) accordingly in a timely and pleasant manner.” If a user appeals a ban that was prompted by a user report, according to one moderator, a second moderator examines the user’s content.

II. “Industry leaders” in detecting bad behavior

In public statements and on the company’s websites, Facebook Inc. is noticeably vague about WhatsApp’s monitoring process. The company does not provide a regular accounting of how WhatsApp polices the platform. WhatsApp’s FAQ page and online complaint form note that it will receive “the most recent messages” from a user who has been flagged.

They do not, however, disclose how many unencrypted messages are revealed when a report is filed, or that those messages are examined by outside contractors. (WhatsApp told ProPublica it limits that disclosure to keep violators from “gaming” the system.)

By contrast, both Facebook and Instagram post lengthy “Community Standards” documents detailing the criteria its moderators use to police content, along with articles and videos about “the unrecognized heroes who keep Facebook safe” and announcements on new content-review sites. Facebook’s transparency reports detail how many pieces of content are “actioned” for each type of violation. WhatsApp is not included in this report.

When dealing with legislators, Facebook Inc. officials also offer few details — but are eager to assure them that they don’t let encryption stand in the way of protecting users from images of child sexual abuse and exploitation. For example, when members of the Senate Judiciary Committee grilled Facebook about the impact of encrypting its platforms, the company, in written follow-up questions in January 2020, cited WhatsApp in boasting that it would remain responsive to law enforcement.

“Even within an encrypted system,” one respondent noted, “we will still be able to respond to lawful requests for metadata, including the potentially critical location or account information… We already have an encrypted messaging service, WhatsApp, that — in contrast to some other encrypted services — provides a simple way for people to report abuse or safety concerns.”

Sure enough, WhatsApp reported 400,000 instances of possible child-exploitation imagery to the National Center for Missing and Exploited Children in 2020, according to its head, Cathcart. That was ten times as many as in 2019. “We are by far the industry leaders in finding and detecting that behavior in an end-to-end encrypted service,” he said.

During his YouTube interview with the Australian think tank, Cathcart also described WhatsApp’s reliance on user reporting and its AI systems’ ability to examine account information that isn’t subject to encryption. Asked how many staffers WhatsApp employed to investigate abuse complaints from an app with more than two billion users, Cathcart didn’t mention content moderators or their access to encrypted content.

“There’s a lot of people across Facebook who help with WhatsApp,” he explained. “If you look at people who work full time on WhatsApp, it’s above a thousand. I won’t get into the full breakdown of customer service, user reports, engineering, etc. But it’s a lot of that.”

In written responses for this article, the company spokesperson said: “We build WhatsApp in a manner that limits the data we collect while providing us tools to prevent spam, investigate threats, and ban those engaged in abuse, including based on user reports we receive. This work takes extraordinary effort from security experts and a valued trust and safety team that works tirelessly to help provide the world with private communication.”

The spokesperson noted that WhatsApp has released new privacy features, including “more controls about how people’s messages can disappear” or be viewed only once. He added, “Based on the feedback we’ve received from users, we’re confident people understand when they make reports to WhatsApp we receive the content they send us.”

III. “Deceiving users” about personal privacy

Since the moment Facebook announced plans to buy WhatsApp in 2014, observers wondered how the service, known for its fervent commitment to privacy, would fare inside a corporation known for the opposite.

Zuckerberg had become one of the wealthiest people on the planet by using a “surveillance capitalism” approach: collecting and exploiting reams of user data to sell targeted digital ads. Facebook’s relentless pursuit of growth and profits has generated a series of privacy scandals in which it was accused of deceiving customers and regulators.

By contrast, WhatsApp knew little about its users apart from their phone numbers and shared none of that information with third parties. WhatsApp ran no ads, and its co-founders, Jan Koum and Brian Acton, both former Yahoo engineers, were hostile to them.

“At every company that sells ads,” they wrote in 2012, “a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data, and making sure it’s all being logged and collated and sliced and packed and shipped out,” adding: “Remember when advertising is involved you the user are the product.” At WhatsApp, they noted, “your data isn’t even in the picture. We are simply not interested in any of it.”

Zuckerberg publicly vowed in a 2014 keynote speech that he would keep WhatsApp “exactly the same.” He declared, “We are absolutely not going to change plans around WhatsApp and the way it uses user data. WhatsApp is going to operate completely autonomously.”

In April 2016, WhatsApp completed its long-planned adoption of end-to-end encryption, which helped establish the app as a prized communications platform in 180 countries, including many where text messages and phone calls are cost-prohibitive. International dissidents, whistleblowers, and journalists also turned to WhatsApp to escape government eavesdropping.

Four months later, however, WhatsApp disclosed it would begin sharing user data with Facebook — precisely what Zuckerberg had said would not happen — a move that cleared the way for an array of future revenue-generating plans.

The new WhatsApp terms of service said the app would share information such as users’ phone numbers, profile photos, status messages, and IP addresses for the purposes of ad targeting, fighting spam and abuse, and gathering metrics. “By connecting your phone number with Facebook’s systems,” WhatsApp explained, “Facebook can offer better friend suggestions and show you more relevant ads if you have an account with them.”

Such actions were increasingly bringing Facebook into the crosshairs of regulators. In May 2017, EU antitrust regulators fined the company 110 million euros (about $122 million) for falsely claiming three years earlier that it would be impossible to link the user information between WhatsApp and the Facebook family of apps. The EU concluded that Facebook had “intentionally or negligently” deceived regulators. Facebook insisted its false statements in 2014 were not intentional but didn’t contest the fine.

By the spring of 2018, the WhatsApp co-founders, now both billionaires, were gone. Acton, in what he later described as an act of “penance” for the “crime” of selling WhatsApp to Facebook, gave $50 million to a foundation backing Signal, a free encrypted messaging app that would emerge as a WhatsApp rival. (Acton’s donor-advised fund has also given money to ProPublica.)

Meanwhile, Facebook was under fire for its security and privacy failures as never before. The pressure culminated in a landmark $5 billion fine by the Federal Trade Commission in July 2019 for violating a previous agreement to protect user privacy. The fine was almost 20 times greater than any previous privacy-related penalty, according to the FTC, and Facebook’s transgressions included “deceiving users about their ability to control the privacy of their personal information.”

The FTC announced that it was ordering Facebook to take steps to protect privacy going forward, including for WhatsApp users: “As part of Facebook’s order-mandated privacy program, which covers WhatsApp and Instagram, Facebook must conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy.” Compliance officers would be required to generate a “quarterly privacy review report” and share it with the company and, upon request, the FTC.

Facebook agreed to the FTC’s fine and order. Indeed, the negotiations for that agreement were the backdrop, just four months earlier, for Zuckerberg’s announcement of his new commitment to privacy.

By that point, WhatsApp had begun using Accenture and other outside contractors to hire hundreds of content reviewers. But the company was eager not to step on its larger privacy message — or spook its global user base. It said nothing publicly about its hiring of contractors to review content.

IV. “We kill people based on metadata”

Even as Zuckerberg was touting Facebook Inc.’s new commitment to privacy in 2019, he didn’t mention that his company was apparently sharing more of its WhatsApp users’ metadata than ever with the parent company — and with law enforcement.

To the lay ear, the term “metadata” can sound abstract, a word that evokes the intersection of literary criticism and statistics. To use an old, pre-digital analogy, metadata is the equivalent of what’s written on the outside of an envelope — the names and addresses of the sender and recipient and the postmark reflecting where and when it was mailed — while the “content” is what’s written on the letter sealed inside the envelope. So it is with WhatsApp messages: The content is protected, but the envelope reveals a multitude of telling details (as noted: timestamps, phone numbers, and much more).
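The envelope analogy can be made concrete with a toy message record in which the body is an opaque blob while the envelope fields stay readable without any key. Everything here (phone numbers, field names, the stand-in for ciphertext) is invented for illustration:

```python
import hashlib

# Illustrative message record: the body is an opaque blob standing in
# for end-to-end-encrypted content, while the "envelope" fields remain
# readable to the carrier.
message = {
    "ciphertext": hashlib.sha256(b"hello").digest(),  # stand-in, not real encryption
    "sender": "+1-555-0100",       # hypothetical numbers
    "recipient": "+1-555-0199",
    "timestamp": "2021-08-01T12:33:00Z",
}

def visible_metadata(msg):
    """Everything readable without a decryption key: who talked to
    whom, and when -- the digital postmark on the envelope."""
    return {k: v for k, v in msg.items() if k != "ciphertext"}
```

Even with the body unreadable, enough such envelopes reveal a communication pattern, which is exactly what made the metadata in the Edwards case described below so useful to investigators.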

Those in the information and intelligence fields understand how crucial this information can be. It was metadata, after all, that the National Security Agency was gathering about millions of Americans not suspected of a crime, prompting a global outcry when it was exposed in 2013 by former NSA contractor Edward Snowden.

“Metadata absolutely tells you everything about somebody’s life,” former NSA general counsel Stewart Baker once said. “If you have enough metadata, you don’t really need content.” In a symposium at Johns Hopkins University in 2014, Gen. Michael Hayden, former director of both the CIA and NSA, went even further: “We kill people based on metadata.”

U.S. law enforcement has used WhatsApp metadata to help put people in jail. ProPublica found more than a dozen instances in which the Justice Department sought court orders for the platform’s metadata since 2017. These represent a fraction of overall requests, known as pen register orders (a phrase borrowed from the technology used to track numbers dialed by landline telephones), as many more are kept from public view by court order.

U.S. government requests for data on outgoing and incoming messages from all Facebook platforms increased by 276% from the first half of 2017 to the second half of 2020, according to Facebook Inc. statistics (which don’t break out the numbers by platform). The company’s rate of handing over at least some data in response to such requests has risen from 84% to 95% during that period.

It’s not clear exactly what government investigators have been able to gather from WhatsApp, as the results of those orders, too, are often kept from public view. Internally, WhatsApp calls such requests for information about users “prospective message pairs,” or PMPs.

These provide data on a user’s messaging patterns in response to requests from U.S. law enforcement agencies, as well as those in at least three other countries — the UK, Brazil, and India — according to a person familiar with the matter who shared this information on the condition of anonymity. Law enforcement requests from other countries might only receive basic subscriber profile information.

WhatsApp metadata was pivotal in the arrest and conviction of Natalie “May” Edwards, a former Treasury Department official with the Financial Crimes Enforcement Network, for leaking confidential banking reports about suspicious transactions to BuzzFeed News. The FBI’s criminal complaint detailed hundreds of messages between Edwards and a BuzzFeed reporter using an “encrypted application,” which interviews and court records confirmed was WhatsApp.

“On or about August 1, 2018, within approximately six hours of the Edwards pen becoming operative — and the day after the July 2018 Buzzfeed article was published — the Edwards cellphone exchanged approximately 70 messages via the encrypted application with the Reporter-1 cellphone during an approximately 20-minute time span between 12:33 a.m. and 12:54 a.m.,” FBI Special Agent Emily Eckstut wrote in her October 2018 complaint. Edwards and the reporter used WhatsApp because Edwards believed the platform to be secure, according to a person familiar with the matter.

Edwards was sentenced on June 3 to six months in prison after pleading guilty to a conspiracy charge and reported to prison last week. Edwards’ attorney declined to comment, as did representatives from the FBI and the Justice Department.

WhatsApp has for years downplayed how much unencrypted information it shares with law enforcement, largely limiting mentions of the practice to boilerplate language buried deep in its terms of service. It does not routinely keep permanent logs of who users are communicating with and how often, but company officials confirmed they do turn on such tracking at their own discretion — even for internal Facebook leak investigations — or in response to law enforcement requests. The company declined to tell ProPublica how frequently it does so.

The privacy page for WhatsApp assures users that they have total control over their own metadata. It says users can “decide if only contacts, everyone, or nobody can see your profile photo,” when they last opened their status updates, or when they last opened the app. Regardless of the settings a user chooses, WhatsApp collects and analyzes all of that data, a fact not mentioned anywhere on the page.

V. “Opening the aperture to encompass business objectives”

The conflict between privacy and security on encrypted platforms seems to be only intensifying. Law enforcement and child safety advocates have urged Zuckerberg to abandon his plan to encrypt all of Facebook’s messaging platforms.

In June 2020, three Republican senators introduced the “Lawful Access to Encrypted Data Act,” which would require tech companies to assist in providing access to even encrypted content in response to law enforcement warrants. For its part, WhatsApp recently sued the Indian government to block its requirement that encrypted apps provide “traceability” — a method to identify the sender of any message deemed relevant to law enforcement. WhatsApp has fought similar demands in other countries.

Other encrypted platforms take a vastly different approach to monitoring their users than WhatsApp. Signal employs no content moderators, collects far less user and group data, allows no cloud backups, and generally rejects the notion that it should be policing user activities. It submits no child exploitation reports to NCMEC.

Apple has touted its commitment to privacy as a selling point. It has no “report” button on its iMessage system, and the company has made just a few hundred annual reports to NCMEC, all of them originating from scanning outgoing email, which is unencrypted.

But Apple recently took a new tack and appeared to stumble along the way. Amid intensifying pressure from Congress, in August the company announced a complex new system for identifying child-exploitative imagery on users’ iCloud backups.

Apple insisted the new system poses no threat to private content, but privacy advocates accused the company of creating a backdoor that potentially allows authoritarian governments to demand broader content searches, which could result in the targeting of dissidents, journalists, or other critics of the state. On Sept. 3, Apple announced it would delay the implementation of the new system.

Still, it’s Facebook that seems to face the most constant skepticism among major tech platforms. It is using encryption to market itself as privacy-friendly while saying little about the other ways it collects data, according to Lloyd Richardson, the director of IT at the Canadian Centre for Child Protection.

“This whole idea that they’re doing it for personal protection of people is completely ludicrous,” Richardson said. “You’re trusting an app owned and written by Facebook to do exactly what they’re saying. Do you trust that entity to do that?” (On Sept. 2, Irish authorities announced that they are fining WhatsApp 225 million euros, about $267 million, for failing to properly disclose how the company shares user information with other Facebook platforms. WhatsApp is contesting the finding.)

Facebook’s emphasis on promoting WhatsApp as a paragon of privacy is evident in the December marketing document obtained by ProPublica. The “Brand Foundations” presentation says it was the product of a 21-member global team across all of Facebook, involving a half-dozen workshops, quantitative research, “stakeholder interviews” and “endless brainstorms.”

Its aim: to offer “an emotional articulation” of WhatsApp’s benefits, “an inspirational toolkit that helps us tell our story,” and a “brand purpose to champion the deep human connection that leads to progress.” The marketing deck identifies a feeling of “closeness” as WhatsApp’s “ownable emotional territory,” saying the app delivers “the closest thing to an in-person conversation.”

WhatsApp should portray itself as “courageous,” according to another slide, because it’s “taking a strong, public stance that is not financially motivated on things we care about,” such as defending encryption and fighting misinformation. But the presentation also speaks of the need to “open the aperture of the brand to encompass our future business objectives. While privacy will remain important, we must accommodate for future innovations.”

WhatsApp is now in the midst of a major drive to make money. It has experienced a rocky start, in part because of broad suspicions of how WhatsApp will balance privacy and profits. An announced plan to begin running ads inside the app didn’t help — it was abandoned in late 2019, just days before it was set to launch.

Early this January, WhatsApp unveiled a change in its privacy policy — accompanied by a one-month deadline to accept the policy or get cut off from the app. The move sparked a revolt, impelling tens of millions of users to flee to rivals such as Signal and Telegram.

The policy change focused on how messages and data would be handled when users communicate with a business in the ever-expanding array of WhatsApp Business offerings. Companies now could store their chats with users and use information about users for marketing purposes, including targeting them with ads on Facebook or Instagram.

Elon Musk tweeted “Use Signal,” and WhatsApp users rebelled. Facebook delayed for three months the requirement for users to approve the policy update. In the meantime, it struggled to convince users that the change would have no effect on the privacy protections for their personal communications, with a slightly modified version of its usual assurance: “WhatsApp cannot see your personal messages or hear your calls and neither can Facebook.” Just as when the company first bought WhatsApp years before, the message was the same: Trust us.

Originally published by ProPublica.

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of Children’s Health Defense.




The Panic Pandemic: How Media Fearmongering Led to ‘Unprecedented’ Censorship of Scientific Research

Story at-a-glance:

  • John Tierney, a former reporter for The New York Times, looks back over the pandemic, providing a timeline of the media-induced viral panic that led to censorship and suppression of scientific research on an unprecedented scale.
  • Experts who spoke out against the official narrative were attacked and accused of endangering lives by questioning lockdowns.
  • Numerous research journals refused to publish the results of studies that featured data questioning lockdowns, masks, and other COVID policies.
  • Certain states have stood out for their refusal to buy into the draconian public health measures that were adopted throughout much of the U.S. — Florida is chief among them and has a COVID mortality rate that’s lower than the national average.
  • The “crisis crisis,” or the ‘incessant state of alarm fomented by journalists and politicians,’ is one reason why so many government, academic and policy leaders could support rampant censorship and suppress scientific debate for so long, all while propagating panic.

Now that we’re more than a year into the pandemic, it’s crystal clear that the panic that ensued was unnecessary and the draconian measures put into place for public health were unwarranted and harmful.

John Tierney, a former reporter for The New York Times, looked back over the pandemic, providing a timeline of the media-induced viral panic that led to censorship and suppression of scientific research on an unprecedented scale.

In his article for City Journal, where he is a contributing editor, he explained that the “moral panic that swept the nation’s guiding institutions” during the pandemic was far more catastrophic than the viral pandemic itself.

Media-induced panic set off in March 2020

The panic was started by journalists beginning in March 2020, when the Imperial College COVID-19 Response Team released “Report 9” on the impact of nonpharmaceutical interventions (NPIs) to reduce deaths and health care demand from COVID-19.

The report’s computer model projected that intensive care units in the U.S. would be overrun, with 30 COVID-19 patients for every available bed, and 2.2 million dead by summer. They concluded that “epidemic suppression is the only viable strategy at the current time,” which led to lockdowns, business and school closures, and population-wide social distancing. But as Tierney noted:

“What had originally been a limited lockdown — ‘15 days to slow the spread’ — became long-term policy across much of the United States and the world.

“A few scientists and public-health experts objected, noting that an extended lockdown was a novel strategy of unknown effectiveness that had been rejected in previous plans for a pandemic. It was a dangerous experiment being conducted without knowing the answer to the most basic question: Just how lethal is this virus?”

John Ioannidis, an epidemiologist at Stanford, was an early critic of the response who argued that long-term lockdowns could cause more harm than good. Ioannidis came under intense fire after he and colleagues revealed that the COVID-19 fatality rate for those under the age of 45 is “almost zero,” and between the ages of 45 and 70, it’s somewhere between 0.05% and 0.3%.

In Santa Clara County, in particular, he and colleagues estimated that in late March 2020, the local COVID infection fatality rate was just 0.17%. “But merely by reporting data that didn’t fit the official panic narrative, they became targets,” Tierney explained. “… Mainstream journalists piled on with hit pieces quoting critics and accusing the researchers of endangering lives by questioning lockdowns.”

Journals refused to publish solid, anti-narrative research

The discrediting and censorship of researchers who spoke out against the official narrative — even if they included supportive data — became a common and alarming theme over the last year, one that extended to virtually every aspect of pandemic-related policy, including masks.

The “Danmask-19 Trial,” published Nov. 18, 2020, in the Annals of Internal Medicine, found that among mask wearers 1.8% (42 participants) ended up testing positive for SARS-CoV-2, compared to 2.1% (53) among controls. When they removed the people who reported not adhering to the recommendations for use, the results remained the same — 1.8% (40 people), which suggests adherence makes no significant difference.
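Whether the reported gap (1.8% vs. 2.1%) is statistically meaningful can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the group sizes (roughly 2,392 mask wearers and 2,470 controls) are assumptions inferred from the reported counts and percentages, as the passage above does not state the denominators.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal survival function
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Danmask-19 figures from the text: 42 positives among mask wearers,
# 53 among controls. Group sizes below are assumed, not quoted.
z, p = two_proportion_z(42, 2392, 53, 2470)
print(f"z = {z:.2f}, p = {p:.2f}")
```

With these assumed denominators the z-statistic stays well inside the conventional 1.96 threshold and the p-value well above 0.05, consistent with the trial's finding of no statistically significant protective effect for the wearer.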

Initially, numerous research journals refused to publish the results, which called widespread mask mandates into question. Tierney said:

“When Thomas Benfield, one of the researchers in Denmark conducting the first large randomized controlled trial of mask efficacy against COVID, was asked why they were taking so long to publish the much-anticipated findings, he promised them ‘as soon as a journal is brave enough to accept the paper.’

“After being rejected by The Lancet, The New England Journal of Medicine and JAMA, the study finally appeared in the Annals of Internal Medicine, and the reason for the editors’ reluctance became clear: the study showed that a mask did not protect the wearer, which contradicted claims by the Centers for Disease Control and other health authorities.”

Dr. Stefan Baral, a Johns Hopkins epidemiologist with 350 publications, had a similar experience when he wanted to publish a critique of lockdowns. It became the “first time in my career that I could not get a piece placed anywhere,” he told Tierney.

Harvard epidemiologist Martin Kulldorff also wrote a paper against lockdowns and couldn’t get it published, noting that most other scientists he spoke to were also against them but were afraid to speak up.

Kulldorff and colleagues soon banded together to write the Great Barrington Declaration, which calls for “focused protection” of the elderly and those in nursing homes and hospitals, while allowing businesses and schools to remain open. Soon after, they too were attacked:

“They managed to attract attention but not the kind they hoped for. Though tens of thousands of other scientists and doctors went on to sign the declaration, the press caricatured it as a deadly ‘let it rip’ strategy and an ‘ethical nightmare’ from ‘COVID deniers’ and ‘agents of misinformation.’”

Physicians targeted, labeled heretics

Dr. Scott Atlas of Stanford’s Hoover Institution was another common target, as he also suggested that protections should be focused on nursing homes and lockdowns would take more lives than COVID-19. According to Tierney:

“When he joined the White House coronavirus task force, Bill Gates derided him as ‘this Stanford guy with no background’ promoting ‘crackpot theories.’ Nearly 100 members of Stanford’s faculty signed a letter denouncing his ‘falsehoods and misrepresentations of science,’ and an editorial in the Stanford Daily urged the university to sever its ties to Hoover.

“The Stanford faculty senate overwhelmingly voted to condemn Atlas’s actions as ‘anathema to our community, our values and our belief that we should use knowledge for good.’”

Similarly, the College of Physicians and Surgeons of Ontario, which regulates the practice of medicine in Ontario, issued a statement in May prohibiting physicians from making comments or providing advice that goes against the official narrative.

Actor Clifton Duncan shared the Orwellian message on Twitter, urging his followers to “Read this. Now. And then share it as much as you can.”

Equally disturbing as the notion of publicly dictating to physicians what they’re allowed to say is the fact that, as Duncan noted, the statement has a glaring omission: “The health and well-being of the patient.”

Florida’s mortality rate from COVID is lower than average

Certain states have stood out for their refusal to buy into the draconian public health measures that were adopted throughout much of the U.S. Florida is chief among them. After a spring 2020 lockdown, Florida businesses, schools, and restaurants reopened, while mask mandates were rejected.

“If Florida had simply done no worse than the rest of the country during the pandemic, that would have been enough to discredit the lockdown strategy,” Tierney said, noting that the state acted as the control group in a natural experiment. The results speak for themselves:

“Florida’s mortality rate from COVID is lower than the national average among those over 65 and also among younger people so that the state’s age-adjusted COVID mortality rate is lower than that of all but ten other states. And by the most important measure, the overall rate of ‘excess mortality’ (the number of deaths above normal), Florida has also done better than the national average.

“Its rate of excess mortality is significantly lower than that of the most restrictive state, California, particularly among younger adults, many of whom died not from COVID but from causes related to the lockdowns: cancer screenings and treatments were delayed, and there were sharp increases in deaths from drug overdoses and from heart attacks not treated promptly.”

The crisis crisis

It defies reason how so many government, academic and policy leaders could support rampant censorship and suppress scientific debate for so long, all while propagating panic. One of Tierney’s explanations is what he calls “the crisis crisis,” or the “incessant state of alarm fomented by journalists and politicians”:

“It’s a longstanding problem — humanity was supposedly doomed in the last century by the ‘population crisis’ and the ‘energy crisis’ — that has dramatically worsened with the cable and digital competition for ratings, clicks, and retweets.

“To keep audiences frightened around the clock, journalists seek out Cassandras with their own incentives for fearmongering: politicians, bureaucrats, activists, academics, and assorted experts who gain publicity, prestige, funding, and power during a crisis.

“Unlike many proclaimed crises, an epidemic is a genuine threat, but the crisis industry can’t resist exaggerating the danger, and doomsaying is rarely penalized. Journalists kept highlighting the most alarming warnings, presented without context. They needed to keep their audience scared, and they succeeded.”

The politicization of research is another major issue that contributes to groupthink and the suppression of scientific debate in order to support one agenda. Meanwhile, although the media advertised that we’re all in this pandemic together, some were clearly more affected than others — namely the poor and less educated, who lost jobs while professionals were mostly able to keep working from the “safety” of their homes.

Children from disadvantaged families also suffered the most from year-long school closures. “The brunt was borne by the most vulnerable in America and the poorest countries of the world,” Tierney wrote, while many of the elite got richer. The reality is, lockdowns have caused a great deal of harm, from delays in medical treatment and disrupted education to joblessness and drug overdoses, and for little, if any, benefit.

Data compiled by Pandemics ~ Data & Analytics (PANDA) also found no relationship between lockdowns and COVID-19 deaths per million people. The disease followed a trajectory of linear decline regardless of whether or not lockdowns were imposed. Yet, this is the type of information that has been censored from the beginning. As Tierney put it:

“This experience should be a lesson in what not to do, and whom not to trust. Do not assume that the media’s version of a crisis resembles reality. Do not count on mainstream journalists and their favorite doomsayers to put risks in perspective. Do not expect those who follow ‘the science’ to know what they’re talking about.”

Originally published by Mercola.

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of Children’s Health Defense.




This is Fascism: White House and Facebook Merge to Censor ‘Problematic Posts’

By Matt Agorist | The Free Thought Project

If we look back throughout history, every society whose government attempted to control, or actually succeeded in controlling, the speech of its citizens has been a totalitarian nightmare. For this reason, the founders crafted the first and most important Amendment to the Constitution, barring the government from doing exactly that.

Aside from a few constitutionally illiterate politicians over the past couple of decades and the horrid atrocities of the 19th and 20th centuries, Americans have in recent times been able to express their protected speech in any manner they see fit. Over the last several years, however, tech giants and social media companies have brought down the hammer in the name of protecting society from “disinformation.”

Many have argued — although incorrectly — that companies like Facebook and Twitter are private entities and can therefore censor whatever speech they want on their own platforms. As TFTP has been reporting for years, however, this censorship was anything but private.

While there has been a grey area as to the relationship between social media and government, the White House made sure to clear up any doubt on Thursday. During a press briefing, Jen Psaki removed any uncertainty that Facebook is a wholly private entity by claiming that the United States government will now dictate to the social media behemoth exactly what is and isn’t allowed on its platform.

“We are in regular touch with the social media platforms,” said Psaki, adding, “we’re flagging problematic posts for Facebook.”

The implications of such a declaration are utterly mind-boggling. For the last four years, and justifiably, the left has been screaming from the rooftops, marching in the streets, and taking to protests outside the White House to demand an end to fascism. Now we have the merger of corporate and state entities — creating de facto fascism — and they are not only silent but behind it!

This entire insidious move seems to be a push to either convince or otherwise trick the “vaccine-hesitant” Americans into taking the jab. However, announcing that the government is merging with Big Tech to silence critics of the vaccine on social media is hardly a way to build trust, which is why it must be the latter.

Instead of building trust through transparency, the state is attempting to silence anything that doesn’t wholly enforce its narrative to trick others into believing there is only a single consensus.

The government thinks that by creating an endless stream of completely unchallenged information and “news” that confirms its claims, people will eventually be convinced, as any contrary information will be deemed “problematic” and erased from memory. This is an incredibly slippery slope, and it needs to be put to an end immediately.

As stated above, this announcement is the definition of fascism — a move that would have made Benito Mussolini proud — but that is happening in the ostensible land of the free.

To those who have been paying attention, this merger between the state and social media was inevitable. It has been taking place via proxy since 2018 and the results of such a move have been utterly disastrous. As the state and big tech attempt to control the narrative, they suppress the truth and aid in the spread of actual disinformation.

One example happened last year when anyone who shared information on social media about anything related to COVID-19 and the lab in Wuhan, China, or that mentioned the possibility that COVID-19 was man-made, saw their post removed and may have even been banned. Facebook, Twitter, Google, the establishment media, and many in the government made it their primary mission to “dispel misinformation” over the origins of the COVID-19 virus.

The arbiters of truth in Big Tech claimed and vehemently pushed the idea — based only on theories — that the COVID-19 virus originated in nature, and anyone who challenged or questioned this view was a dangerous conspiracy theorist.

It was established. The fact-checkers were correct and anyone who challenged them was a danger to society. But the fact-checkers who dismissed this information did not do so with “facts” at all. Instead, they simply promoted one theory over another.

As the world found out in May, the fact-checkers, the government, big tech, and social media were all dead wrong.

Make no mistake, there are definitely some asinine and utterly stupid conspiracy theories out there on just about everything, including COVID-19. But does society need handlers to hide these things from them by censoring those who engage with them?

Stupid ideas didn’t use to go extremely viral. Even in the furthest corners of the conspiracy theory realm, verifiably false claims were easily proven wrong and swiftly dismissed. Thanks to the censors, that no longer happens.

If the ideas of the censors are so grand, why not allow them to compete with other ideas? Censoring ideas doesn’t stop them, it only allows very bad ideas to go unchallenged in the public arena, thereby granting them credence. This is extremely dangerous.

This new merger of corporate and state cannot go unchecked. Free speech does not come with terms and conditions and those who claim it does will eventually be silenced by the very monster they helped to create.

About the Author

Matt Agorist is an honorably discharged veteran of the USMC and former intelligence operator directly tasked by the NSA. This prior experience gives him unique insight into the world of government corruption and the American police state. Agorist has been an independent journalist for over a decade and has been featured on mainstream networks around the world. Agorist is also the Editor at Large at the Free Thought Project. Follow @MattAgorist on Twitter, Steemit, and now on Minds.




Facebook Wants To Know If You’ve Been Exposed to Extremism | Ben Swann

Source: ISE Media

“Is this about going beyond just policing speech and thought in this country to now creating vigilantes of speech and thought?” ~ Ben Swann on Facebook asking users to report extremist content

Facebook wants to know if you think you’ve been exposed to extremist content. Not only that, but it is warning users that they “may” have been exposed to extremist content if they have watched the wrong thing. So what is this about? Is this actually about using Facebook as a way to report speech and thought that Facebook disagrees with? Ben Swann reports.




Bubbles of Hate: How Social Media Keeps Users Addicted, Alone, & Ill-Informed

By Dr. Tim Coles | New Dawn

Internet communication has gone from emails, messaging boards, and chatrooms, to sophisticated, all-pervasive networking. Social media companies build addictiveness into their products. The longer you spend on their sites and apps, the more data they generate. The more data, the more accurately they anticipate what you’ll do next and for how long. The better their predictions, the more money they make by selling your attention to advertisers.

Depressed and insecure about their value as human beings, the younger generations grow up knowing only digital imprisonment. Older users are trapped in polarised bubbles of political hate. As usual, the rich and powerful are the beneficiaries.

Masters of Manipulation

Humans are social animals. But big business wants us isolated, distracted, and susceptible to marketing. Using techniques based on classical conditioning, social media programmers bridge the gap between corporate profits and our need to communicate by keeping us simultaneously isolated and networked.

The Russian psychologist, Ivan Pavlov (1849–1936), pioneered research into conditioned reflexes, arguing that behavior is rooted in the environment. His work was followed by the Americans John B. Watson (1878–1958) and B.F. Skinner (1904–90). Their often cruel conditioning experiments, conducted on animals and infants, laid the basis for gambling and advertising design. As early as the 1900s, slot machines were designed to make noises, like bell sounds, to elicit conditioned responses to keep the gambler fixed on the machine: just as Pavlov used a bell to condition his dogs to salivate. By the 1980s, slot machines had incorporated electronics to advantage particular symbols whilst giving the gambler the impression that they are near victory. “Stop buttons” gave the gambler the illusion of control. Sandy Parakilas, former Platform Operations Manager at Facebook, says: “Social media is very similar to a slot machine.”
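The slot-machine comparison rests on what behaviorists call a variable-ratio reinforcement schedule: rewards arrive unpredictably, which conditions the subject to keep pulling the lever (or refreshing the feed). A toy simulation can make the mechanism concrete; the 10% payoff probability and the 100-pull session below are invented for illustration, not drawn from the article.

```python
import random

def variable_ratio_rewards(pulls, reward_prob, seed=42):
    """Simulate a variable-ratio schedule: each pull pays off with a
    fixed probability, so the gaps between rewards are irregular."""
    rng = random.Random(seed)  # seeded for reproducibility
    reward_pulls = [i for i in range(1, pulls + 1) if rng.random() < reward_prob]
    # The irregular gaps between payoffs are what make the schedule compulsive:
    # the next reward always feels like it could be one pull away.
    gaps = [b - a for a, b in zip(reward_pulls, reward_pulls[1:])]
    return reward_pulls, gaps

rewards, gaps = variable_ratio_rewards(pulls=100, reward_prob=0.10)
print(f"{len(rewards)} rewards in 100 pulls; gaps between rewards: {gaps}")
```

A fixed-ratio schedule (a payoff every Nth pull) would produce uniform gaps and is known to extinguish behavior quickly once rewards stop; the unpredictable gaps above are the design choice that slot machines, and by analogy notification systems, exploit.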

Psychologist Watson’s experiments “set into motion industry-wide change” in TV, radio, billboard, and print advertising “that continued to develop until the present,” says historian Abby Bartholomew. Topics included emotional arousal in audiences (e.g., sexy actress → buy the product), brand loyalty (e.g., Disney is your family), and motivational studies (e.g., buy the product → look as good as this guy).

Many of these techniques involve stimulating so-called “feel good” chemicals like dopamine, endorphins, oxytocin, and serotonin. These are released when eating, exercising, having sex, and engaging in positive social interactions. Software designers learned that their release can be triggered by simple and unexpected things, like getting an email, being “friended,” seeing a retweet, and getting a like. The billionaire co-founder of Facebook and Napster, Sean Parker, said that the aim is to “give you a little dopamine hit every once in a while because someone liked or commented on a photo or a post.” But Parker also said of his company: “God only knows what it’s doing to our children’s brains.”

Facebook’s former Vice President of User Growth, Chamath Palihapitiya, doesn’t allow his children to use Facebook and says “we have created tools that are ripping apart the social fabric.” Tim Cook, the CEO of the world’s first trillion-dollar company Apple, on whose iPhones the addictions mainly occur, bluntly said of his young relatives: “I don’t want them on a social network.”

With the understanding that “the biggest companies in Silicon Valley have been in the business of selling their users” (technology investor, Roger McNamee), social media designers built upon the history of behaviorism and game addiction to keep users hooked. For example: In the good ol’ days, sites including the BBC and YouTube had page numbers (“pagination”), which gave users a sense of where they were in their search for an article or video. If the search results were poor, the user knew to skip to the last page and work backward. But pages were phased out and replaced with “infinite scroll,” a feature designed in 2006 by Aza Raskin of Jawbone and Mozilla. Pagination gave the user a stopping cue, and designers have systematically removed stopping cues. Likening infinite scroll to “behavioral cocaine,” Raskin said: “If you don’t give your brain time to catch up with your impulses, you just keep scrolling.”
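The difference between the two designs can be sketched abstractly: a paginated feed is a finite sequence that runs out, while an infinite feed produces content for as long as the user keeps asking. The page size and post names below are invented for illustration; no real API is implied.

```python
def paginated_feed(items, page_size=10):
    """Paginated design: a finite sequence of pages ends naturally,
    giving the user a stopping cue (the last page)."""
    for start in range(0, len(items), page_size):
        yield items[start:start + page_size]
    # The generator is exhausted here — there is nothing left to show.

def infinite_feed(recommend):
    """Infinite-scroll design: there is no last page and no stopping cue."""
    while True:
        yield recommend()  # always another batch, as long as the user scrolls

posts = [f"post-{i}" for i in range(25)]
pages = list(paginated_feed(posts))  # finite: the list comprehension terminates
print(f"{len(pages)} pages, last page has {len(pages[-1])} posts")

feed = infinite_feed(lambda: "fresh-batch")
first_three = [next(feed) for _ in range(3)]  # could be continued forever
```

Note that `list(infinite_feed(...))` would never return: the stopping condition has been moved out of the design and onto the user's willpower, which is exactly Raskin's point.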

How They Do It & How It Hurts

Users think that they have control over their social media habits and that the information being fed to them, including news and suggested webpages, is coming to them organically. But, unbeknownst to them, the framework is calculated. The US Deep State, for instance, helped to develop social networks. Sergey Brin and Larry Page developed their web crawling software, which they later turned into Google, with money from the US Defense Advanced Research Projects Agency. Referring to the Massive Digital Data Systems, the CIA-funded Dr. Bhavani Thuraisingham confirmed that “[t]he intelligence community’s MDDS program essentially provided Brin seed-funding.”

Consider how the technologies were commercialized. “Growth” means advertising money accrued from sites visited, content browsed, links clicked, pages shared, etc. “Growth hackers” are described by former Google design ethicist Tristan Harris as “engineers whose job is to hack people’s psychology so they can get more growth.” Designers build applications into software that manipulate users’ unconscious behavioral cues to lead them in certain directions.

To give an example: The feel-good chemical oxytocin is released during positive social interactions. It is likely stimulated when social media companies send an email alert that a family has shared a new photo. Other human foibles include novelty-seeking (for potential rewards) and temptation (fear of missing out or FOMO). These are linked to the feel-good chemical dopamine. Rather than including the new family photo in the email, the email is designed with a URL feature to tempt the user to click the link which directs them to the social media site in order to see the new photo. The chemical-reward response chain is as follows: family (oxytocin) → novelty/new photo (dopamine), temptation to click/FOMO → reward from positive social interaction after clicking and seeing the new photo (oxytocin-dopamine stimulation).

This convoluted chain of events is designed to sell the user’s attention to advertisers. The more time spent doing these things, the more adverts can be directed at the user and the more money for the social media company. Harris says “you are being programmed at a deeper level.”

In addition, tailored psychological profiles of users are secretly built, bought from, and sold to data brokers, like Experian. User behavioral patterns feed deep learning programs which aim to predict the user’s next online move according to their personal tastes and previous browsing patterns. The more accurate the prediction, the more likely their attention is drawn to an advert, and the more money social media firms accrue. Says Raskin, formerly of Mozilla: “They’re competing for your attention.” He asks: “How much of your life can we get you to give to us?”

Instagram, a photo and video sharing service launched in 2010, was acquired by Facebook in 2012. It is used by a billion people globally and, unlike the teen-loving Snapchat, is used mainly by 18-44-year-olds. Instagram falls into the so-called “painkiller app” category. One designer explains that such apps “typically generate a stimulus, which usually revolves around negative emotions such as loneliness or boredom.”

Snapchat is a messaging app designed in 2011 that stores pictures (“Snaps”) for a short period of time. The app is used by 240 million people per day. Unlike YouTube, most of whose users are male, the majority of Snapchat users are female. Only 17 percent of users are over 35. Its model is Snapstreak: a tracker that counts the days since the user replied to the Snap. Designers built FOMO (noted above) into Snapchat. The longer the user’s non-reply, the greater their credit score decline. This can lead to addiction because, unlike Facebook, Snapchat tags are “strong ties” (e.g., close friends, family), so the pressure to reply is greater.

In addition to the harmful content of social media – sexualized children, impossible and ever-changing beauty standards, cyberbullying, gaming addiction, loss of sleep, etc. – the very design of social media hurts young users. We all need to love ourselves and to feel loved by a small circle of others: friends, family, and partners. Young people are particularly susceptible to self-loathing and questioning whether someone loves them.

The introduction of social media has been devastating. A third of teens who spend at least two hours a day on social media (as the majority now do) have at least one suicide risk factor; the proportion rises to nearly half among those who spend five hours or more. A study of 14-year-olds found that those who received fewer social media likes than their peers experienced depressive symptoms. Teens already victimized at school or within their peer group were the worst affected.

Divided & Conquered

Another feature built into social media is the polarization of users along political lines, a phenomenon that mainly concerns people of voting age. One of the many human foibles exploited by social media designers is homophily: our love of things and people similar and familiar to us. Homophily makes us feel safe, understood, validated, and positively reinforced. It stimulates feel-good chemicals and, in social media contexts, is exploited to keep us inside an echo chamber so that our biases are constantly reinforced and we stay online for longer. But is this healthy?

Referring to Usenet group discussions, the lawyer Mike Godwin formulated the Rule of Hitler Analogies (Godwin’s Law), which posits that the longer an online discussion continues, the higher the probability that a user will compare another to Hitler. The observation reflected users’ lack of tolerance toward the views of others.
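Godwin’s Law can be read probabilistically: if each successive comment carries some small, independent chance of invoking a Hitler comparison, the chance that at least one such comparison has appeared approaches certainty as the thread grows. A minimal sketch (the per-comment probability is a made-up illustration, not a measured value):

```python
def prob_hitler_comparison(n_comments: int, p: float = 0.01) -> float:
    """P(at least one comparison after n comments) = 1 - (1 - p)^n.

    p is a hypothetical per-comment probability, chosen only to
    illustrate the limiting behavior Godwin described.
    """
    return 1 - (1 - p) ** n_comments

# The probability climbs toward 1 as the discussion lengthens:
for n in (10, 100, 1000):
    print(n, prob_hitler_comparison(n))
```

Whatever the true per-comment probability, as long as it is nonzero the limit is the same, which is the substance of Godwin’s observation.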

A set of expert projections published in 2008 asked whether the internet would make people more tolerant. Nearly six in 10 participants disagreed, compared to just three in 10 who agreed. In many ways the industry specialists were fatalistic. Internet architect Fred Baker of Cisco Systems said: “Human nature will not have changed. There will be a wider understanding of viewpoints, but tolerance of fundamental disagreement will not have improved.” Philip Lu of Wells Fargo Bank Internet Services said: “Just as social networking has allowed people to become more interconnected, this will also allow those with extreme views… to connect to their ‘kindred’ spirits.” Dan Larson of the PKD Foundation said: “The more open and free people are to pass on their inner feelings about things/people, especially under the anonymity of the Internet – will only foster more and more vitriol and bigotry.”

Users can artificially inflate their importance and the strength of their arguments by creating multiple accounts with different names (“sock puppets”). Some websites sell “followers” to boost users’ profiles. It is estimated that half of the Twitter followers of celebrities and politicians are bots. Gibberish-spewing algorithms have been programmed to write fake reviews on Amazon to hurt competitors’ sales. In at least one case, a pro-Israeli troll was unmasked posing as an anti-Semite in order to give the impression that anti-Semitism is rampant online and thus users should have more sympathy with Israel. Content creators increasingly find themselves de-platformed because of their political views while others’ social media accounts are suppressed by design (“shadow-banning”).

In the age of COVID, misinformation is spread on both sides: about the severity of the disease, the efficacy of vaccines, the necessity of lockdowns, and so on. As with US politics, Brexit, climate change, and other polarized issues, neither side wants to talk rationally and open-mindedly with the other, and the very design of social media makes doing so difficult.

It should be emphasized that some social media platforms are designed in ways that create echo chambers, and others are not. Cinelli et al. studied conversations about emotive subjects, abortion and vaccines, and found that while Facebook and Twitter show clear evidence of the echo-chamber effect, Reddit and Gab do not. Sasahara et al. demonstrate that, because users seek validation and withdraw likes and friendships from those who disagree, a network tends to descend into an echo chamber.
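A toy simulation can illustrate the dynamic Sasahara et al. describe. This sketch is a simplified stand-in for their model, not a reimplementation: agents nudge their opinions toward like-minded contacts and unfollow dissimilar ones, so the followed network drifts toward agreement. All parameters (tolerance, step size, network size) are arbitrary illustrations:

```python
import random

def simulate_echo_chamber(n_users=50, steps=2000, tolerance=0.4, seed=1):
    """Toy influence-and-unfollow model, loosely inspired by
    Sasahara et al.: opinions live in [-1, 1]; like-minded contacts
    pull each other closer; dissimilar contacts get unfollowed."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n_users)]
    # Each user starts by following 5 random others.
    follows = {i: rng.sample([j for j in range(n_users) if j != i], 5)
               for i in range(n_users)}
    for _ in range(steps):
        i = rng.randrange(n_users)
        j = rng.choice(follows[i])
        if abs(opinions[i] - opinions[j]) < tolerance:
            # Social influence: move toward the like-minded contact.
            opinions[i] += 0.1 * (opinions[j] - opinions[i])
        else:
            # Unfollow and replace with a random new contact.
            follows[i].remove(j)
            candidates = [k for k in range(n_users)
                          if k != i and k not in follows[i]]
            follows[i].append(rng.choice(candidates))
    return opinions, follows

opinions, follows = simulate_echo_chamber()
# After rewiring, followed contacts tend to be like-minded:
gaps = [abs(opinions[i] - opinions[j]) for i in follows for j in follows[i]]
print("mean opinion gap to followed users:", sum(gaps) / len(gaps))
```

For random strangers the expected opinion gap here would be about 0.67; after the influence-and-unfollow dynamic runs, the average gap to followed users is markedly smaller, which is the echo chamber in miniature.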

Conclusion: What Can We Do?

Noted above is Google’s seed-funding from the Deep State. More recently, the ex-NSA contractor Edward Snowden revealed that Apple, Facebook, Google, Microsoft, and others were passing user data to his former employer. Government and big tech became “the left hand and the right hand of the same body.” In the UK, the NSA worked with Government Communications Headquarters on the Joint Threat Research Intelligence Group. Leaks revealed an unprecedented, real-time surveillance and disruption operation that included hacking users’ social media accounts, posting content in their name, deleting their accounts, luring them into honey traps, planting incriminating evidence on them, and more.

To beat the antisocial social network, we need to remember who we are and what real communication is. We need to protect the young from the all-pervasive clutches of “social media” and to realize that we are being sold.

Ask yourself: Do you use social media solely to organize protests, alert friends to alternative healing products, and spread anti-war messages? Or do you use it to send irrelevant information about your day-to-day habits in anticipation that an emoji or “like” will appear?

Taking a step back can allow us to see outside and indeed prick the bubble of digital hatred in which the Deep State and corporate sectors have imprisoned us.

About the Author

Dr. Tim Coles’s new book The War on You can be obtained from online booksellers & www.amazon.com/exec/obidos/ASIN/B08HB68N97

This article was published in New Dawn Special Issue Vol 14 No 6.



Facebook Insider Blows Whistle on Vaccine Censorship

By Dr. Joseph Mercola | mercola.com

STORY AT-A-GLANCE

  • Two Facebook insiders — a data center technician and a data center facility engineer — have come forward with internal documents showing how the social media platform is suppressing science and medical facts in the name of combating “vaccine hesitancy”
  • Documents prove Facebook is working on behalf of Big Pharma and in coordination with the U.S. Centers for Disease Control and the World Health Organization to protect and promulgate the false narrative that COVID-19 vaccines are safe and effective for everyone
  • Facebook is beta testing a new algorithm that classifies users who post counternarrative information about vaccines into “vaccine hesitancy” tiers. The beta group comprises 1.5% of the total user base
  • The users are secretly assigned a “VH score” that dictates whether their posts and comments will be removed, demoted, or left alone, regardless of whether they’re factually accurate
  • Facebook’s suppression strategy is currently reducing “vaccine-hesitant” comments by 42.5% within the test group

On May 24, 2021, Project Veritas released a video interview1 with two Facebook insider whistleblowers — a data center technician and a data center facility engineer — who have come forward with internal documents showing how the social media platform is suppressing science and medical facts in the name of combating “vaccine hesitancy.”

Facebook recently rolled out a beta test designed to censor negative vaccine information, regardless of its veracity, with the goal of eventually rolling this censorship program out to all nations, in as many languages as possible.

The documents prove Facebook is working on behalf of Big Pharma and in coordination with the U.S. Centers for Disease Control and the World Health Organization to protect and promulgate the false narrative that COVID-19 vaccines are safe and effective for everyone. The platform is even hiding posts in which people who dutifully got the shots talk about their adverse effects.

Vaccine Hesitancy Comment Demotion

According to the internal documents, Facebook is beta testing a new algorithm that classifies users who post counternarrative information about vaccines into “vaccine hesitancy” (VH) tiers. The users are secretly assigned a “VH score” that dictates whether their posts and comments will be removed, demoted, or left alone — regardless of whether they’re factually accurate. According to Project Veritas:2

“The insider … revealed the tech giant was running the ‘test’ on 1.5% of its 3.8 billion users with the focus on the comments sections on ‘authoritative health pages.’ ‘They’re trying to control this content before it even makes it onto your page, before you even see it,’ the insider [said] …

The stated goal of this feature is to ‘drastically reduce user exposure’ to VH comments. Another aim of the program is to force a ‘decrease in other engagement of VH comments including create, likes, reports [and] replies.'”

Two-Tiered Rating System for Vaccine Content

Vaccine content is rated based on its perceived ability to “discourage vaccination in certain contexts, thereby contributing to vaccine hesitancy or refusal.” According to a “Borderline Vaccine Framework” document, vaccine content is “tiered … by potential harm and how much context is required to evaluate the harm.” The ratings are divided into three primary tiers:3

  1. Explicit discouragement of COVID vaccination
  2. Alarmism, criticism
  3. Indirect vaccine discouragement — This includes congratulating people who have refused the vaccine, “shocking stories” that may deter people from getting the vaccine, promoting alternatives to vaccination or “suggesting natural immunity is better versus getting the vaccine,” minimizing the risks of natural COVID-19 infection, voicing personal objections to or skepticism about the vaccine, and even “neutral discussion or debate”
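The leaked documents describe tiers and a hidden “VH score” but not the code itself, so any implementation detail is conjecture. The following is a purely hypothetical sketch of how tier-based comment demotion of this kind could work; the tier weights, threshold, and function names are all assumptions, not Facebook’s actual system:

```python
# Hypothetical sketch only: the leaked documents describe tiers and a
# "VH score", but the real implementation is not public. All numbers
# and names below are illustrative assumptions.
TIER_ACTIONS = {
    1: "remove",    # explicit discouragement of COVID vaccination
    2: "demote",    # alarmism, criticism (even if factually accurate)
    3: "demote",    # indirect discouragement, incl. neutral debate
    None: "leave",  # no tier assigned
}

def score_user(comment_tiers):
    """Assumed behavior: a user's VH score accumulates from the tiers
    of their past comments; more severe tiers weigh more."""
    weights = {1: 3, 2: 2, 3: 1}
    return sum(weights.get(t, 0) for t in comment_tiers)

def moderate(comment_tier, vh_score, demotion_threshold=2):
    """Decide an action from the tier and the user's hidden score.
    Note that factual accuracy plays no role in this pipeline."""
    action = TIER_ACTIONS.get(comment_tier, "leave")
    if action == "demote" and vh_score < demotion_threshold:
        action = "leave"  # borderline comments from low-score users pass
    return action
```

The striking property the documents emphasize survives even in this toy version: the tier, not the truth of the comment, determines the action.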

Depending on where your comment falls within these tiers, your post or comment will be either removed or “demoted” to varying degrees. As noted by investigative journalist and founder of Project Veritas, James O’Keefe, in a Fox News interview:4

“What’s remarkable about these private documents … is that ‘Tier 2’ [violation] says even if the facts are true … you will be targeted and demoted — your comments will be targeted and demoted.”

While it’s unclear who approved this beta test, the listed authors of the “vaccine hesitancy comment demotion” program are senior software engineer Joo Ho Yeo;5 data scientist Nick Gibian,6 who, according to LinkedIn, works on health misinformation and civic harassment; software engineer Hendrick Townley, who states his primary interests are in “harnessing technology and technical understanding towards strengthening our democratic institutions and solving pressing policy issues”;7 machine learning and data scientist Amit Bahl;8 and product manager Matt Gilles.9

A New Form of Shadow Banning

The comment demotion strategy that is currently being beta tested is very similar to shadow banning, where a user has been secretly banned — which means none of their followers can actually see their posts — yet they continue posting because they’re unaware that the content is not being disseminated.

Under this two-tier information suppression system, you will have no idea whether, or to what degree, your posts or comments are being suppressed and hidden from other users. In general, however, the internal documents reveal that this suppression strategy is currently reducing “vaccine-hesitant” comments by 42.5% within the test group.

Facebook Is Actively Suppressing Life-Saving Science

Now, an example of a “vaccine-hesitant” comment is not just “I don’t know if I want the vaccine.” It also includes comments like, “I saw a study that said someone died who got the vaccine,” and personal experiences such as “Excruciating pain after my second vaccine! Shaking so bad, almost to convulsions.”

Facebook is even censoring and putting “fake news” labels on data obtained directly from the Vaccine Adverse Event Reporting System (VAERS), which is jointly run by the U.S. Centers for Disease Control and Prevention and the U.S. Food and Drug Administration.

This despite having a public policy to “remove content that repeats … false health information … that is widely debunked by leading health organizations such as the World Health Organization and the CDC.”

They justify this by stating that VAERS data and other study findings cannot be communicated unless “full context” is provided. But as noted by the whistleblower, that’s a highly ambiguous term. What is the full context? Do you have to post an entire study for it to be contextual?

In the final analysis, it’s clear that Facebook is actively suppressing and censoring science, medical facts, and first-hand personal experiences, and in so doing, they are putting the whole world in harm’s way. By suppressing crucial information about vaccine risks, they eliminate any possibility of informed consent, because consent means nothing when the risks cannot be known.

They are promoting ignorance that can, and I firmly believe will, literally kill many of their users. And, since Facebook openly admits coordinating its censorship with the CDC and WHO, the same can probably be said for both of those organizations. As one of the whistleblowers tells O’Keefe:

“[Zuckerberg wants to] build a community where everyone complies — not where people can have an open discourse and dialogue about the most personal and private and intimate decisions. The narrative [is] get the vaccine, the vaccine is good for you, everyone should get it. If you don’t, you will be singled out as an enemy of society.”

Facebook Has Turned From Digital Town Square to Digital Jail

The second whistleblower, a data center facility engineer, says Facebook is now “prohibiting people from having an open dialogue about issues that affect their personal security.” He likens the platform to an abusive partner who doesn’t allow their spouse to speak to friends and family about what’s going on behind closed doors.

Ironically, a leaked video from the same whistleblower shows Facebook CEO Mark Zuckerberg, back in mid-July 2020, expressing his own vaccine hesitancy during a video conference.

“I do just want to make sure that I share some caution on this because we just don’t know the long-term side effects of basically modifying people’s DNA and RNA,” Zuckerberg told his team, referring to COVID-19 vaccines under development.

As noted by O’Keefe, Zuckerberg’s own words would now violate his company’s public policy and rules of expression.

Children’s Health Defense Sues Facebook Over Censorship

In related news, Children’s Health Defense (CHD) sued Facebook in August 2020, charging the company, its CEO, Zuckerberg, and several fact-checking organizations with “censoring truthful public health posts and for fraudulently misrepresenting and defaming the children’s health organization.”10 As reported by The Defender, May 25, 2021:11

“The complaint12 alleges Facebook has ‘insidious conflicts’ with the pharmaceutical industry and health agencies, and details factual allegations regarding the CDC, CDC Foundation and the World Health Organization’s extensive relationships and collaborations with Facebook and Zuckerberg, calling into question Facebook’s collaboration with the government in a censorship campaign.

Facebook censors CHD’s page, targeting factual information about vaccines, 5G and public health agencies. Facebook-owned Instagram deplatformed CHD Chairman Robert F. Kennedy, Jr. on Feb. 10 without notice or explanation.

Lawyers for Children’s Health Defense are awaiting the ruling of Judge Susan Illston after defendants filed a motion to dismiss in the CHD lawsuit alleging government-sponsored censorship, false disparagement and wire fraud.”

Florida Governor Signs Law to Crack Down on Censorship

It seems legal action may be the only way to rein in censorship that has spiraled out of control, and Florida, my home state, is paving the way with brand-new legislation, SB 7072,13 to hold social media companies liable for their censorship. As reported by NBC News, May 24, 2021:14

“Florida Gov. Ron DeSantis … said the bill … cracks down on … social media ‘censorship’ while safeguarding Floridians’ ability to access social media platforms. ‘One of their major missions seems to be suppressing ideas that are either inconvenient to the narrative or which they personally disagree with,’ DeSantis said …

DeSantis … and others have accused social media companies of censoring conservative thought by removing posts or using algorithms that reduce the visibility of posts …

The bill also imposes hefty financial penalties against social media platforms that suspend the accounts of political candidates. The bill would fine companies $250,000 a day for doing so …

Florida’s attorney general can bring action against technology companies that violate the law, under Florida’s Unfair and Deceptive Trade Practices Act, and social media platforms found to have violated antitrust law will be restricted from contracting with any public entity, DeSantis said.”

The bill also allows private users to sue for certain violations, with statutory damages totaling up to $100,000 per proven claim or actual damages, plus punitive damages “if aggravating factors are present.”15

Facebook Harms Users in Other Ways Too

As detailed in “Harvard Professor Exposes Surveillance Capitalism,” which features an interview with Shoshana Zuboff, author of the book, “The Age of Surveillance Capitalism,” free social media platforms aren’t free. You pay with your personal data.

So, not only are Facebook and other social media companies suppressing your freedom of speech — often at the request of government officials, which is illegal — they’re also stealing your personal data and using it to control and manipulate you.

Their primary function isn’t actually to allow you to communicate with others. Their primary function is surveillance, data collection, and social engineering. In other words, you are the commodity, not the other way around. They need you far more than you need them.

Companies like Facebook, Google, and third parties of all kinds have the power, and use that power, to target your personal inner demons, to trigger you, and to take advantage of you when you’re at your most vulnerable, enticing you into action that serves them, commercially or politically.

Your entire existence — even your shifting moods, deciphered by facial recognition software — has become a source of revenue for corporate entities as you’re being cleverly maneuvered into doing (and typically buying) or thinking something you may not have done, bought, or thought otherwise.

Facebook’s massive experiments, in which they used subliminal cues to see if they could make people happier or sadder and affect real-world behavior offline, have proved that — by manipulating language and inserting subliminal cues in the online context — they can change real-world behavior and real-world emotion, and that these methods and powers can be exercised “while bypassing user awareness.”

Other technologies, such as digital security systems, employ hidden microphones to spy on your private conversations. All of these data streams, from cell phones, computers, “smart” appliances, and video cameras around public areas add to ever-expanding predictive modeling capabilities that, ultimately, are used to control and manipulate you.

We Need New Laws

As noted by Zuboff, the reason we’re in this creepy situation is that there are no laws in place to curtail this brand-new type of surveillance capitalism. Indeed, the only reason it has been able to flourish over the past 20 years is that there’s been an absence of laws against it, primarily because it has never previously existed.

Google and Facebook were the only ones who knew what they were doing. The surveillance network grew in the shadows, unbeknownst to the public or lawmakers. The good news is, it’s not too late to take back both our privacy and our freedom of speech online, but we need legislation that addresses the entire breadth and depth of these systems. As noted by Zuboff:16

“The choice to turn any aspect of one’s life into data must belong to individuals by virtue of their rights in a democratic society. This means, for example, that companies cannot claim the right to your face, or use your face as free raw material for analysis, or own and sell any computational products that derive from your face …

Anything made by humans can be unmade by humans. Surveillance capitalism is young, barely 20 years in the making, but democracy is old, rooted in generations of hope and contest.

Surveillance capitalists are rich and powerful, but they are not invulnerable. They have an Achilles heel: fear. They fear lawmakers who do not fear them. They fear citizens who demand a new road forward as they insist on new answers to old questions: Who will know? Who will decide who knows? Who will decide who decides? Who will write the music, and who will dance?”

How to Protect Your Online Privacy

While there’s no doubt we need a whole new legislative framework to curtail surveillance capitalism and censorship alike, in the meantime, there are ways you can protect your privacy online and limit the “behavioral surplus data” collected about you. (As of yet, there’s not much you can do about online censorship, other than encouraging your state legislators to address it, as Florida just began to do.) To protect your privacy, consider taking the following steps:17

  • Ditch Facebook, Twitter, and other social media platforms that siphon your personal data and censor content. Today there are free-speech alternatives that do neither of those things.
  • Use a virtual private network (VPN) to mask the true identity of your computer.
  • Do not use Gmail, as every email you write is permanently stored, becomes part of your profile, and is used to build digital models of you that allow predictions about your line of thinking and every want and desire. Many other older email systems, such as AOL and Yahoo, are likewise used as surveillance platforms. ProtonMail.com, which uses end-to-end encryption, is a great alternative, and the basic account is free.
  • Don’t use Google’s Chrome browser, as everything you do in it is surveilled, including keystrokes and every webpage you’ve ever visited. Brave is a great alternative that takes privacy seriously. Brave is also faster than Chrome and suppresses ads, and it’s based on Chromium, the same software infrastructure Chrome is based on, so you can easily transfer your extensions, favorites, and bookmarks.
  • Don’t use Google as your search engine, or engines that repackage others’ results, such as Yahoo, whose search results are powered by Bing. The same goes for the iPhone’s personal assistant Siri, which draws its answers from Google. Alternative search engines include SwissCows, DuckDuckGo, and Qwant. Avoid StartPage, as it has been bought by an aggressive online marketing company that, like Google, depends on surveillance.
  • Don’t use an Android cellphone, as Android phones are always listening and recording your conversations.
  • Don’t use Google Home devices, which record everything that occurs in your home, both speech and sounds such as brushing your teeth and boiling water, even when they appear to be inactive, and send that information back to Google.
  • Regularly clear your cache and cookies.
  • Don’t use Fitbit, as it has been acquired by Google and will provide them with all your physiological information and activity levels, in addition to everything else Google already has on you.



Social Engineering Via Media 101 – How to Normalize the Absurd

By Sigmund Fraud | Waking Times

Ever pay attention to trends in the media? Some stories and narratives rise and fall in cycles, along with your awareness of them. It’s kind of like a shell game, where the street hustler directs your attention to one shell as a distraction while he shuffles aside the nut with the goods in it. A ‘now you see it, now you don’t,’ kind of thing.

When you see the same story arise frequently in the mainstream media, you can bet that it’s something you’re supposed to be looking at.

You see, the major corporate media operates from talking points and top-down directives. A mere 6 corporations own some 90% of all the major media outlets, and as corporations do, they rule by memos from up high.

Late-night host Conan O’Brien knows this, and he rips on the media for the insane homogenization of local news. He does a bit in which his team edits together actual footage of local newscasters from around the country saying the exact same thing, word for word, each anchor personalizing it with their own inflection, pausing, intonation, and so on. It’s hilarious, but at the same time disturbing, because it demonstrates how ideas are forced into the mainstream of today’s corporate culture.

Have a look. This always cracks me up. Not in a ‘ha ha’ sort of way, though, more like in a ‘haha, aren’t we gullible,’ kind of way. Big difference.

The point is, when you see a story being played over and again on various news outlets, you have good reason to believe that the information isn’t coming to you organically. It’s not something you really need to know or something that is genuinely relevant to day-to-day life in your community. It’s the execution of an agenda. The information is being deliberately disseminated to manufacture awareness and recalibrate the standard for normal. It’s something the corporate media wants you to focus on. Like in the shell game.

When you understand this fundamental of corporate media, the landscape of information today looks totally different. You’re able to see narratives unfold and evolve, and able to recognize when your attention is deliberately being drawn towards an issue. Or away from an issue.

Here are a few examples from the present that when taken as everyday happenstance may seem benign, but have serious implications for the future of society and for the human race at large. The fact that these issues are being presented with noticeable frequency these days is a red flag that there is some larger agenda in the works. The norms, values, and standards in our culture are being tweaked, or twerked, and attacked by the repetition of such information.

Vaccines – This is perhaps one of the most common issues thrust on the public in order to fabricate widespread public support for a questionable and very profitable practice. The one-sidedness of the debate on this sensitive issue has successfully created a society where people now openly demand forced medical procedures on others to alleviate a perceived fear.

Gender Neutrality – This is the idea that a person’s biological gender is somehow fluid against their opinion of themselves. There is an apparent effort to make us believe that those with confusion over their gender are horribly oppressed and in danger and that they need to be protected with censorship and speech laws. The aim here is to promote the virtues of censorship and to develop a generation of people who don’t value procreation and the advancement of the human race, but rather shallow social issues and a perceived sense of justice.

Sex Robots – Robot sex toys are increasingly being put in front of the public and lauded as the future of companionship. News stories on the latest advancements in robot sex dolls are ubiquitous these days. We are being told they make great life partners and that they sufficiently synthesize the experience of being with a real woman (or man). The end game here is to further disconnect people from each other, and perhaps also to assist in a broader depopulation agenda by persuading us that sex with plastic and electronics is as good as or better than the real thing. Look for birth rates to decline further as these creepy sex toys become more popular.

Microchipping – Some call this the ‘mark of the beast,’ but the idea of microchipping people for their supposed convenience is being pushed out onto all the major media channels as a great way to take part in our technological future. Issues of privacy, tyranny, and the abuse of power are hardly examined. Feature stories on acquiescent corporate employees who willingly take the chip make it seem as though chipping is fashionable.

These are just a few examples, but the technique in play here is a fundamental method of social engineering via media.

Among the regular flow of info, certain topics or subjects are thrust into public consciousness with regularity. The issues are never quite framed as critically important, but rather positioned as matter-of-fact, a sign of the times. Opposing arguments or viewpoints are never fully explored. The recipe: frame it in such a way that it seems exciting and cutting edge; normalize it by mixing it in with everyday things and repeating it; make it seem like the future is here now, and that there is a bandwagon you need to get on in order to be part of the gang.

This method works. It’s called conditioning. An idea as reprehensible as exchanging human-on-human love for sex with elaborate robots would have been shocking and totally unacceptable a few generations ago. But, slowly raise awareness of the wonders of this new technology over time, and people become curious rather than repulsed. It becomes normalized.

There really is nothing you can do about living in such a changing world, except to opt out of the insanity, stupidity, and self-destructive tendencies being framed as wholesome cultural advances. To make good decisions in this regard, it’s imperative to be able to process information in a way that acknowledges the true nature of corporate/government propaganda.

Social engineering is real. It’s happening all around you. Are you paying attention?

About the Author

Sigmund Fraud is a survivor of modern psychiatry and a dedicated mental activist. He is a staff writer for WakingTimes.com where he indulges in the possibility of a massive shift towards a more psychologically aware future for humankind.

This article (Social Engineering Via Media 101 – How to Normalize the Absurd) was originally created and published by Waking Times and is published here under a Creative Commons license with attribution to Sigmund Fraud and WakingTimes.com