Perspectives; Thoughts; Comments; Opinions; Discussions

Posts tagged ‘CENSORSHIP COMPLEX’

Censorship-Industrial Complex Enlists U.K. ‘Misinformation’ Group Logically.AI To Meddle In 2024 Election


BY: LEE FANG | JANUARY 29, 2024

Read more at https://thefederalist.com/2024/01/29/censorship-industrial-complex-enlists-u-k-misinformation-group-logically-ai-to-meddle-in-2024-election/


Brian Murphy, a former FBI agent who once led the intelligence wing of the Department of Homeland Security, reflected last summer on the failures of the Disinformation Governance Board — the panel formed to actively police misinformation. The board, proposed in April 2022 after he left DHS, was shelved by the Biden administration within a few months in the face of criticism that it would be an Orwellian, state-sponsored “Ministry of Truth.”

In a July podcast, Murphy said the threat of state-sponsored disinformation meant the executive branch has an “ethical responsibility” to rein in the social media companies. American citizens, he said, must give up “some of your freedoms that you need and deserve so that you get security back.”

The legal problems and public backlash to the Disinformation Governance Board also demonstrated to him that “the government has a major role to play, but they cannot be out in front.”

Murphy, who made headlines late in the Trump administration for improperly building dossiers on journalists, has spent the last few years trying to help the government find ways to suppress and censor speech it doesn’t like without being so “out in front” that it runs afoul of the Constitution. He has proposed that law enforcement and intelligence agencies formalize the process of sharing tips with private sector actors — a “hybrid constellation” including the press, academia, researchers, nonpartisan organizations, and social media companies — to dismantle “misinformation” campaigns before they take hold.

More recently, Murphy has worked to make his vision of countering misinformation a reality by joining a United Kingdom-based tech firm, Logically.AI, whose eponymous product identifies and removes content from social media. Since joining the firm, Murphy has met with military and other government officials in the U.S., many of whom have gone on to contract or pilot Logically’s platform.

Logically says it uses artificial intelligence to keep tabs on over 1 million conversations. It also maintains a public-facing editorial team that produces viral content and liaises with the traditional news media. It differs from other players in this industry by actively deploying what it calls “countermeasures” to dispute or remove problematic content from social media platforms.
 
The business is even experimenting with natural language models, according to one corporate disclosure, “to generate effective counter speech outputs that can be leveraged to deliver novel solutions for content moderation and fact-checking.” In other words, artificial intelligence-powered bots that produce, in real-time, original arguments to dispute content labeled as misinformation.

In many respects, Logically is fulfilling the role Murphy has articulated for a vast public-private partnership to shape social media content decisions. Its technology has already become a key player in a much larger movement that seeks to clamp down on what the government and others deem misinformation or disinformation. A raft of developing evidence — including the “Twitter Files,” the Moderna Reports, the proposed Government Disinformation Panel, and other reports — has shown how governments and industry are determined to monitor, delegitimize, and sometimes censor protected speech. The story of Logically.AI illustrates how sophisticated this effort has become and its global reach. The use of its technology in Britain and Canada raises red flags as it seeks a stronger foothold in the United States.

Logically was founded in 2017 by a then-22-year-old British entrepreneur named Lyric Jain, who was inspired to form the company to combat what he believed were the lies that pushed the U.K. into voting in favor of Brexit, or leaving the European Union. The once-minor startup now has broad contracts across Europe and India, and has worked closely with Microsoft, Google, PwC, TikTok, and other major firms. Meta contracts with Logically to help the company fact-check content on all of its platforms: WhatsApp, Instagram, and Facebook.

The close ties to Silicon Valley provide unusual reach. “When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” Meta and Logically announced in a 2021 press release on the partnership.

Meta and Logically did not respond to repeated requests for comment.

During the 2021 local elections in the U.K., Logically monitored up to “one million pieces of harmful content,” some of which it relayed to government officials, according to a document reviewed by RealClearInvestigations. The firm claimed to spot coordinated activity to manipulate narratives around the election, information it reported to tech giants for takedowns.

The following year, the state of Oregon negotiated with Logically for a wide-ranging effort to monitor campaign-related content during the 2022 midterm elections. In a redacted proposal for the project, Logically noted that it would check claims against its “single source of truth database,” which relied on government data, and would also crack down on “malinformation” — a term of art that refers to accurate information that fuels dangerous narratives. The firm similarly sold Oregon on its ability to pressure social media platforms for content removal.

Oregon state Rep. Ed Diehl has led a push to stop the state from renewing its work with Logically for this year’s election. The company, he said in an interview, violates “our constitutional rights to free speech and privacy” by “flagging true information as false, claiming legitimate dissent is a threat, and then promoting ‘counter-narratives’ against valid forms of public debate.”

In response, the Oregon secretary of state’s office, which initiated the contract with Logically, claimed “no authority, ability, or desire to censor speech.” Diehl disputes this. He pointed out that the original proposal with Logically clearly states that its service “enables the opportunity for unlimited takedown attempts” of alleged misinformation content and the ability for the Oregon secretary of state’s office to “flag for removal” any “problematic narratives and content.” The contract document touts Logically as a “trusted entity within the social media community” that gives it “preferred status that enables us to support our client’s needs at a moment’s notice.”

Diehl, who shared a copy of the Logically contract with RCI, called the issue a vital “civil rights” fight, and noted that in an ironic twist, the state’s anti-misinformation speech suppression work further inflames distrust in “election systems and government institutions in general.”

Logically’s reach into the U.S. market is quickly growing. The company has piloted programs for the Chicago Police Department that use artificial intelligence to analyze local rap music and generate predictions about violence in the community, according to a confidential proposal obtained by RCI. Pentagon records show that the firm is a subcontractor to a program run by the U.S. Army’s elite Special Operations Command for work conducted in 2022 and 2023. With funding from DHS, Logically also conducts research on gamer culture and radicalization.

The company has claimed in its ethics statements that it will not employ any person who holds “a salaried or prominent position” in government. But records show deep entanglement with the state. For instance, Kevin Gross, a director of the U.S. Navy NAVAIR division, was embedded within Logically’s team during a 2022 fellowship program. The exchange program supported Logically’s efforts to assist NATO with the analysis of Russian social media.

Other contracts in the U.S. may be shrouded in secrecy. Logically partners with ThunderCat Technologies, a contracting firm that assists tech companies when competing for government work. Such arrangements have helped tech giants conceal secretive work in the past. Google previously attempted to hide its artificial intelligence drone-targeting contracts with the Defense Department through a similar third-party contracting vendor.

But questions swirl over the methods and reach of the firm as it entrenches itself into American life, especially as Logically angles to play a prominent role in the 2024 presidential election. 

Pandemic Policing

In March 2020, as Britain confronted the spread of Covid-19, the government convened a new task force, the Counter Disinformation Unit (CDU). The secretive task force was created with little fanfare but was advertised as a public health measure to protect against dangerous misinformation. Caroline Dinenage, the member of Parliament overseeing media issues, later explained that the unit’s purpose was to provide authoritative sources of information and to “take action to remove misinformation” relating to “misleading narratives related to COVID-19.”

The CDU, it later emerged, had largely outsourced its work to private contractors such as Logically. In January 2021, the company received its first contract from the agency overseeing the CDU, for £400,000, to monitor “potentially harmful disinformation online.” The contracts later swelled, with the U.K. agency responsible for media issues eventually providing contracts with a combined value of £1.2 million and the Department of Health providing another £1.3 million, for a total of roughly $3.2 million.

That money went into far-reaching surveillance that monitored journalists, activists, and lawmakers who criticized pandemic policies. Logically, according to an investigation last year in the Telegraph, recorded comments from activist Silkie Carlo criticizing vaccine passports in its “Mis/Disinformation” reports.

Logically’s reports similarly collected information on Dr. Alexandre de Figueiredo, a research fellow at the London School of Hygiene and Tropical Medicine. Figueiredo had published reports on how vaccine passports could undermine vaccine confidence and had publicly criticized policies aimed at the mass vaccination of children. Despite his expertise, his tweet was filed by Logically in a disinformation report to the government. While some of the reports were categorized as evidence of terms-of-service violations, many were, in fact, routine forms of dissent aired by prominent voices in the U.K. on policies hotly contested by expert opinion.

The documents showing Logically’s role were later uncovered by Carlo’s watchdog group, Big Brother Watch, which produced a detailed report on the surveillance effort. The CDU reports targeted a former judge who argued against coercive lockdowns as a violation of civil liberties and journalists criticizing government corruption. Some of the surveillance documents suggest a mission creep for the unit, as media monitoring emails show that the agency targeted anti-war groups that were vocal against NATO’s policies.

Carlo was surprised to even find her name on posts closely monitored and flagged by Logically. “We found that the company exploits millions of online posts to monitor, record and flag online political dissent to the central government under the banner of countering ‘disinformation,’” she noted in a statement to RCI.

Marketing materials published by Logically suggest its view of Covid-19 went well beyond fact-checking and veered into suppressing dissenting opinions. A case study published by the firm claimed that the #KBF hashtag, referring to Keep Britain Free, an activist group opposed to school and business shutdowns, was a dangerous “anti-vax” narrative. The case study also claimed the suggestion that “the virus was created in a Chinese laboratory” was one of the “conspiracy theories” that “have received government support” in the U.S. — despite the fact that a preponderance of evidence now points to a likely lab leak from the Wuhan Institute of Virology as the origin of the pandemic.

Logically was also involved in pandemic work that blurred the line with traditional fact-checking operations. In India, the firm helped actively persuade patients to take the vaccine. In 2021, Jain, the founder and CEO of the company, said in an interview with an Indian news outlet that his company worked “closely with communities that are today vaccine hesitant.” The company, he said, recruited “advocates and evangelists” to shape local opinion.

Questionable Fact-Checking

In 2022, Logically used its technology on behalf of Canadian law enforcement to target the trucker-led “Freedom Convoy” against Covid-19 mandates, according to government records. Logically’s team floated theories that the truckers were “likely influenced by foreign adversaries,” a widely repeated claim used to denigrate the protests as inauthentic.

The push to discredit the Canadian protests showed the overlapping power of Logically’s multiple arms. While its social media surveillance wing fed reports to the Canadian government, its editorial team worked to influence opinion through the news media. When the Financial Times reported on the protest phenomenon, the outlet quoted Murphy, the former FBI man who now works for Logically, who asserted that the truckers were influenced by coordinated “conspiracy theorist groups” in the U.S. and Canada. Vice similarly quoted Joe Ondrak, Logically’s head of investigations, to report that the “Freedom Convoy” had generated excitement among global conspiracy theorists. Neither outlet disclosed Logically’s work for Canadian law enforcement at the time.

Other targets of Logically are quick to point out that the firm has taken liberties with what it classifies as misinformation.

Will Jones, the editor of the Daily Sceptic, a British news outlet with a libertarian bent, has detailed an unusual fact-check from Logically Facts, the company’s editorial site. Jones said the site targeted him for pointing out that data in 2022 showed 71 percent of patients hospitalized for Covid-19 were vaccinated. Logically’s fact-check acknowledged Jones had accurately used statistics from the U.K. Health Security Agency, but tried to undermine him by asserting that he was still misleading by suggesting that “vaccines are ineffective.”

But Jones, in a reply, noted that he never made that argument and that Logically was batting away at a straw man. In fact, his original piece plainly took issue with a Guardian article that incorrectly claimed that “COVID-19 has largely become a disease of the unvaccinated.”

Other Logically fact-checks have bizarrely targeted the Daily Sceptic for reporting on news in January 2022 that vaccine mandates might soon be lifted. The site dinged the Daily Sceptic for challenging the evidence behind the vaccine policy and declared, “COVID-19 vaccines have been proven effective in fighting the pandemic.” And yet, at the end of that month, the mandate was lifted for health care workers, and the following month, all other pandemic restrictions were revoked, just as the Daily Sceptic had reported.

“As far as I can work out, it’s a grift,” said Daily Sceptic founder Toby Young, of Logically. “A group of shysters offer to help the government censor any criticism of its policies under the pretense that they’re not silencing dissent — God forbid! — but merely ‘cleansing’ social media of misinformation, disinformation and hate speech.”

Jones was similarly dismissive of the company, which he said disputes anything that runs contrary to popular consensus. “The consensus of course is that set by the people who pay Logically for their services,” Jones added. “The company claims to protect democratic debate by providing access to ‘reliable information,’ but in reality, it is paid to bark and savage on command whenever genuine free speech makes an inconvenient appearance.”

In some cases, Logically has piled on to news stories to help discredit voices of dissent. Last September, the anti-misinformation site leaped into action after British news outlets published reports about sexual misconduct allegations surrounding comedian and online broadcaster Russell Brand — one of the outspoken critics of government policy in Britain, who has been compared to Joe Rogan for his heterodox views and large audience.

Brand, a vocal opponent of pandemic policies, had been targeted by Logically in the past for airing opinions critical of the U.S. and U.K. response to the virus outbreak, and in other moments for criticizing new laws in the European Union that compel social media platforms to take down content.

But the site took dramatic action when the sexual allegations, none of which have been proved in court, were published in the media. Ondrak, Logically’s investigations head, provided different quotes to nearly half a dozen news outlets — including Vice, Wired, the BBC, and two separate articles in The Times — that depicted Brand as a dangerous purveyor of misinformation who had finally been held to account.

“He follows a lot of the ostensibly health yoga retreat, kind of left-leaning, anti-capitalist figures, who got really suckered into Covid skepticism, Covid denialism, and anti-vax, and then spat out of the Great Reset at the other end,” Ondrak told Wired. In one of the articles published by The Times, Ondrak aired frustration over the obstacles to demonetizing Brand on the Rumble streaming network. In an interview with the BBC, Ondrak gave a curious condemnation, noting Brand stops short of airing any actual conspiracy theories or falsehoods but is guilty of giving audiences “the ingredients to make the disinformation themselves.”

Dinenage, the member of Parliament who spearheaded the CDU anti-misinformation push with Logically during the pandemic, also leapt into action. In the immediate aftermath of the scandal, she sent nearly identical letters to Rumble, TikTok, and Meta to demand that the platforms follow YouTube’s lead in demonetizing Brand. Dinenage couched her official request to censor Brand as a part of a public interest inquiry, to protect the “welfare of victims of inappropriate and potentially illegal behaviour.”

Logically’s editorial team went a step further. In its report on the Brand allegations published on Logically Facts, it claimed that social media accounts “trotting out the ‘innocent until proven guilty’ refrain” for the comedian were among those perpetuating “common myths about sexual assault.” The site published a follow-up video reiterating the claim that those seeking the presumption of innocence for Brand, a principle dating back to the Magna Carta, were spreading a dangerous “myth.”

The unusual advocacy campaign against Brand represented a typical approach for a company that has long touted itself as a hammer against spreaders of misinformation. The opportunity to remove Brand from the media ecosystem meant throwing as much at him as possible, despite the absence of any clear misinformation or disinformation angle in the sexual assault allegations. Rather, he was a leading critic of government censorship and pandemic policy, so the scandal represented a weakness to be exploited.

Such heavy-handed tactics may be on the horizon for American voters. The firm is now a member of the U.S. Election Infrastructure Information Sharing & Analysis Center, the group managed by the Center for Internet Security that helps facilitate misinformation reports on behalf of election officials across the country. Logically has been in talks with Oregon and other states, as well as DHS, to expand its social media surveillance role for the presidential election later this year.

Previous targets of the company, though, are issuing a warning. 

“It appears that Logically’s lucrative and frankly sinister business effectively produced multi-million pound misinformation for the government that may have played a role in the censorship of citizens’ lawful speech,” said Carlo of Big Brother Watch.

“Politicians and senior officials happily pay these grifters millions of pounds to wield the red pen, telling themselves that they’re ‘protecting’ democracy rather than undermining it,” said Young of the Daily Sceptic. “It’s a boondoggle and it should be against the law.”

This article was originally published by RealClearInvestigations and LeeFang.com.


Lee Fang is an investigative reporter who publishes on Substack.

Biden’s FTC Punished Twitter For Seceding From The Censorship Complex


BY: MARGOT CLEVELAND | JULY 17, 2023

Read more at https://thefederalist.com/2023/07/17/bidens-ftc-punished-twitter-for-seceding-from-the-censorship-complex/


The Federal Trade Commission inappropriately pressured an independent third-party auditing firm to find Twitter had violated the terms of its settlement agreement with the FTC, a motion filed last week in federal court reveals. That misconduct and the FTC’s own repudiation of the terms of the settlement agreement entitle Twitter to have the consent order vacated, its lawyers maintain. This latest development holds significance beyond Twitter’s fight with the FTC, however, with the details providing further evidence that the Biden administration targeted Twitter because of its owner Elon Musk’s support for free speech on his platform.

I “felt as if the FTC was trying to influence the outcome of the engagement before it had started,” a CPA with nearly 30 years of experience with the Big Four accounting firm Ernst & Young (EY) testified last month. The FTC’s pressure campaign left EY partner David Roque so unsettled that he sought guidance from another partner concerning controlling ethical standards for CPAs to assess whether his independence had been compromised by the federal agency. Roque’s testimony prompted attorneys for Twitter to seek documents from the FTC to assess whether the federal agency had repeated its pressure campaign with EY’s successors, but the agency refused to provide any details to the social media giant. Twitter responded last week by filing a “Motion for a Protective Order and Relief From Consent Order.” 

That motion and its accompanying exhibits provide shocking details of an abusive agency targeting Twitter. When those facts are coupled with the report on the FTC issued earlier this year by the House Weaponization Subcommittee, it seems clear the Biden administration is targeting Twitter because Musk seceded from the Censorship-Industrial Complex.

FTC’s Pre-Musk Enforcement Actions

Thursday’s motion began with the background necessary to appreciate the gravity of the FTC’s scorched-earth campaign against Twitter. 

More than a decade ago, the FTC entered into a settlement agreement with Twitter after finding Twitter had violated the Federal Trade Commission Act by misrepresenting the extent to which it protected user information from unauthorized access. That 2011 settlement agreement resulted in a consent order that required Twitter to establish a “comprehensive information security program” that met specific parameters. The 2011 consent order also required Twitter to obtain an assessment from an independent third-party professional confirming compliance with the terms of the settlement agreement.

From 2011 to 2019, Twitter operated under the 2011 consent order and received about 10 “demand letters” from the FTC seeking additional information. Then in October 2019, Twitter informed the FTC that “some email addresses and phone numbers provided for account security may have been used unintentionally for advertising purposes.” In investigating that report, the FTC sent Twitter another 15 or so demand letters over a two-year period before filing a complaint in a California federal court on May 25, 2022, alleging Twitter had violated the 2011 consent order and Section 5 of the FTC Act by misrepresenting the extent to which Twitter maintained and protected the privacy of nonpublic consumer information. 

The next day, the court entered a “Stipulated Order” — meaning Twitter and the FTC had agreed to the terms of that order — “for Civil Penalty, Monetary Judgment, and Injunctive Relief.” That stipulated order allowed the FTC to reopen the 2011 proceeding and enter an updated consent order, which created a new “compliance structure.”

Under the 2022 order, Twitter was required to establish and maintain a “comprehensive privacy and information security program” to “protect[] the privacy, security, confidentiality, and integrity” of certain user information by Nov. 22, 2022. The 2022 consent order also required Twitter to obtain an assessment of its compliance with the terms of the court order by “qualified, objective, independent third-party professionals.”

Musk Makes Waves

Musk agreed on April 25, 2022, to purchase Twitter, a deal that closed on Oct. 27, 2022, and one must wonder whether that April agreement prompted Twitter’s then-management to enter the May 2022 consent decree, thereby handcuffing Musk to the terms forged with the FTC. Either way, the May 2022 consent order governed Twitter’s operations under Musk’s new management.

While the 2022 consent decree remained unchanged after Musk’s purchase became final, the FTC’s posture toward Twitter changed drastically. As Twitter’s Thursday motion detailed, “in the five months between the signing of the Consent Order on May 25, 2022, and Mr. Musk’s acquisition of Twitter, Inc. on October 27, 2022, the FTC sent Twitter only three demand letters.”

All three letters concerned a whistleblower’s claims that Twitter had violated the Federal Trade Commission Act and the 2011 consent order by making false and misleading statements about its security, privacy, and integrity. The FTC waited nearly two months after receiving the whistleblower’s complaint before serving its first demand letter on Twitter.

FTC Goes Scorched Earth

According to Twitter’s motion for relief from the 2022 consent order, “Musk’s acquisition of Twitter produced a sudden and drastic change in the tone and intensity of the FTC’s investigation into the company.” Among other things, the FTC publicly stated it was “tracking recent developments at Twitter with deep concern.” The FTC also stressed that the revised consent order provided the agency with “new tools to ensure compliance,” and it was “prepared to use them.”

And use them the FTC did: The agency immediately issued two demand letters to Twitter seeking information about workforce reductions and the launch of Twitter Blue. Those demand letters came before Twitter was even required under the 2022 consent decree to have its new programs in place. Since then, Twitter’s attorneys note, the FTC has pummeled Twitter’s corporate owner, X Corp., with “burdensome demand letters” — more than 17 separate demand letters containing some 200 individual demands for information and documents, which works out to a new demand letter roughly every two weeks.

FTC Starts Drilling Former Employees

In addition to the FTC’s flurry of demand letters, it began deposing former Twitter employees — five to date — and is currently seeking to question Musk. The FTC also deposed Roque on June 21, 2023, but the questioning backfired. Twitter learned from that deposition, as its lawyers put it in Thursday’s motion, “that the FTC’s harassment campaign was even more extreme and far-reaching than it had imagined.”

Roque was the Ernst & Young partner overseeing the assessment the firm was hired by Twitter to perform — an assessment mandated by the May 2022 consent decree. Twitter’s previous management retained EY in July 2022 to issue the assessment report of its security measures.

In late February 2023, EY withdrew from the engagement. Many of the FTC’s questions to Roque probed the reasoning for the withdrawal, including the high number of personnel changes and EY’s difficulty in starting the assessment because of the upheaval at Twitter caused by Musk’s changes.

Deposition Backfires Big Time 

During the FTC’s questioning of Roque about EY’s withdrawal from the engagement and various emails exchanged by partners, the longtime CPA dropped a bombshell: The FTC had so pressured Roque to reach its preconceived conclusion that Twitter had violated the consent decree that Roque sought help researching the ethical standards that govern CPAs to assess whether EY’s independence had been compromised.

Roque revealed that detail when the FTC’s lawyer quizzed him on the meaning of a chat message exchange he had with fellow EY partner Paul Penler on the evening of Feb. 21, 2023, shortly before the Big Four firm announced it was withdrawing from its engagement to assess Twitter’s compliance with the 2022 consent order. 

While the actual chat message was filed under seal as Exhibit 16 in support of Twitter’s motion, the transcript of Roque’s questioning was provided to the court, revealing the pertinent aspects of the conversation.

Roque began by asking Penler, “Where is the best place to confirm independence consideration for attest engagement?” About 15 minutes later, Roque followed up by asking whether specific language about an “adverse interest threat” “could work for Twitter?” Roque then commented to Penler that “EY interests are not aligned with Twitter anymore because of the FTC.”

Mild-Mannered CPA Drops Bombshell 

After showing Roque a copy of his chat exchange with Penler, the FTC attorney quizzed the EY partner on why he had sent the note and what he meant by the various lines. That’s when the bomb exploded, with Roque explaining he had contacted Penler — who was with EY’s professional practice group, the internal group that was responsible for ensuring the firm adequately followed professional standards — because Roque had concerns about whether the FTC had threatened his independence.

“As we were moving forward with this engagement, we had ongoing discussions with the FTC,” Roque explained. “[D]uring those discussions,” Roque continued, “the FTC kept expressing their opinion more and more adamantly about the extent of procedures Ernst & Young would need to perform based on their expectations. And there was also expectations around the results they would expect us to find based on the information Twitter had already provided to the FTC and the FTC had reviewed.” 

Those conversations, Roque testified, made him feel “as if the FTC was trying to influence the outcome of the engagement before it had started,” so he was attempting to assess whether EY had an “adverse threat,” meaning “somebody outside of the arrangement we had with Twitter trying to influence the outcome of our results.” 

FTC Spin Falls Flat

After Roque revealed his concerns about the FTC’s conduct, the lawyer for the federal agency pushed him to backtrack by asking leading questions. Rather than hedge, Roque stood firm, as these exchanges show:

FTC Attorney: “To be clear, no one from the FTC directed you to reach a particular conclusion about Twitter’s 22 program, correct?”

Roque: “There was suggestions of what they would expect the outcome to be.”

* * *

FTC Attorney: “No one from the FTC actually told you what EY’s report should say in its conclusions, correct?” 

Roque: “There was a conversation where it was conveyed that the FTC would be surprised if there was areas on our report that didn’t have findings based on information the FTC was already aware of, and if Ernst & Young didn’t have findings in those areas, we should expect the FTC would follow up very significantly to understand why we didn’t have similar conclusions.”

Twitter’s Lawyer Pounces

After two fails, the FTC moved on to other questions, but Twitter’s lawyer, Daniel Koffmann, returned to the topic when it was his turn to question Roque. Koffmann asked Roque whether there was a particular meeting with the FTC in which the agency had given him the impression that it “was expecting a certain outcome in the assessment that Ernst & Young was conducting relative to Twitter’s compliance with the consent order.” 

Roque mentioned two meetings. He described the first, which was in December 2022, as “interesting” and “surprising” because when EY noted that Twitter, under its new ownership, might opt to terminate its contract with the firm, the FTC was “very adamant about this is absolutely what you will do and this is going to occur, and you’ll produce a report at the end of the day.” Roque found the FTC’s stance “a bit surprising,” since the report was not due for another six to seven months and the federal agency would not know what might transpire during that time period.

Roque further explained that he found the December 2022 meeting “unusual” because the FTC provided “specificity on the execution of very specific types of procedures that they expected to be performed.”

“It was almost as if they were giving us components of our audit program to execute,” Roque said. While EY could perform such a review, it would be a different type of engagement than the one it had entered with Twitter. Rather, EY’s engagement was to assess, for instance, how security operates and how the user administration process is managed. In conducting that assessment, the firm would look at specific controls. But the FTC was giving EY very specific tests to run, which was inconsistent with a typical audit, Roque explained.

It was the second meeting, which took place in January 2023, that raised real concerns for Roque. It was then, Roque said, that the FTC “started providing areas that they were expecting us to look at.” Roque testified that the FTC “communicated that they would expect Ernst & Young to have findings or exceptions or negative results in certain areas based on what they already understood from an operational standpoint, based on information Twitter had provided, and that if we ended up producing a report that didn’t have findings in those areas, that they would be surprised, and they would be definitely following up with us to understand why we didn’t — why we reached the conclusions we did if they were sort of not reflecting gaps in the controls.”

Roque would go on to agree with Twitter’s attorney that during the January 2023 meeting, “the representatives from the FTC expressed that they believed Ernst & Young’s assessment would lead to findings or exceptions about Twitter’s compliance with the consent order.”

Twitter Takes FTC to Task

A little over a week after Roque’s deposition, Twitter’s legal team wrote the FTC a scathing letter noting that Roque’s alarming testimony “demonstrates that the FTC has resorted to bullying tactics, intimidation, and threats to potential witnesses.”

“It strongly suggests that the FTC has attempted to exert improper influence over witnesses in order to manufacture evidence damaging to X Corp. and Mr. Musk,” the letter continued, adding that Roque’s testimony also raised serious questions about whether the FTC’s bias would render any future enforcement action unconstitutional.

The Twitter letter ended by requesting documents and information from the FTC “to evaluate the nature and scope of the FTC’s misconduct and the remedial measures that will be necessary.” Among other things, Twitter asked for communications between FTC personnel and the company that succeeded EY as Twitter’s independent assessor, as well as another company Twitter considered but did not select to replace EY.

The FTC refused Twitter’s request. In its letter denying Musk access to any documents, Reenah L. Kim, the same attorney who allegedly made the statements to Roque, claimed Twitter’s accusations of so-called “bullying tactics, intimidation, and threats to potential witnesses” by the FTC “are completely unfounded.” 

Lots of Legal Implications

Following the FTC’s refusal to provide Twitter the requested documents, Musk’s legal team filed its “Motion for a Protective Order and Relief From Consent Order” with the California federal court where the 2022 consent decree had been entered. In this recently filed motion, Musk’s attorneys argue the FTC “breached” the consent order when it attempted “to dictate and influence the content, procedures, and outcome” of the court-ordered assessment, which the consent decree required to be both “objective” and “independent.”

To support its argument, Twitter highlighted the FTC’s own language in an earlier letter the agency had sent to Twitter’s prior management team discussing the importance of the same “independence” requirement from the first consent decree. That order was clear, the FTC wrote, that “Twitter must obtain periodic security assessments ‘from a qualified, objective, independent third-party professional.’”

The “assessor must be an independent third party — not an employee or agent of either Twitter or the FTC,” the letter continued, adding that if the auditor were indeed an agent of Twitter, “Twitter would be in violation of the Order’s requirement that it obtain a security assessment from an ‘independent third-party’ professional.” The FTC then stressed: “The very purpose of a security or privacy order’s assessment provision is to ensure that evaluation of a respondent’s security or privacy program is truly objective — i.e., unaffected by the interests (or litigation positions) of either the respondent or the FTC.” 

The FTC’s interference with EY’s independence thus constituted a violation of the 2022 consent decree, Twitter’s legal team argued, justifying the court vacating that order — or at a minimum modifying it. Twitter also argued in its motion that as a matter of fairness, the consent decree should be set aside given the FTC’s outrageously aggressive demands for documents, compared to its posture toward Twitter prior to Musk’s purchase. 

That motion remains pending before federal Magistrate Judge Thomas Hixson, with a hearing set for next month.

Connection to the Censorship Complex

While Twitter’s Thursday motion does not directly connect to the Censorship-Industrial Complex, the FTC’s posture toward Twitter changed following news that Musk intended to purchase the tech giant to make it a free-speech zone. And when Roque’s testimony is considered against the backdrop of evidence previously exposed by the House Subcommittee on the Weaponization of the Federal Government, it seems clear the Biden administration sought to punish Twitter for exiting from the government’s whole-of-society plan to censor supposed misinformation.

The House subcommittee’s March 2023 report, titled “The Weaponization of the Federal Trade Commission: An Agency’s Overreach to Harass Elon Musk’s Twitter,” established that the FTC had requested the names of every journalist to whom Musk had provided access to internal communications, access that led to the earth-shattering revelations contained in the “Twitter Files.” Many of the FTC’s other demands, the House report concluded, also “had little to no nexus to users’ privacy and information.” The report thus concluded that the “strong inference” is that “Twitter’s rediscovered focus on free speech [was] being met with politically motivated attempts to thwart Elon Musk’s goals.”

Know-Nothing Khan

House Judiciary Chair Jim Jordan, R-Ohio, attempted to question FTC Chair Lina Khan on Thursday about the agency’s apparent interference with EY’s independence and its connection to the federal government’s efforts to silence speech.

“The FTC has engaged in conduct so irregular and improper that Ernst & Young (‘EY’) — the independent assessor designated under a consent order between Twitter and the FTC to evaluate the company’s privacy, data protection, and information security program — ‘felt as if the FTC was trying to influence the outcome of the engagement before it had started,’” Jordan said.

But Khan claimed she knew nothing about Roque or his deposition testimony. 

That doesn’t change the fact that the FTC has been laser-focused on Twitter since Musk revolted against the Censorship-Industrial Complex. Whether Twitter will convince the California federal court that the FTC’s conduct justifies tearing up the consent decree, however, remains to be seen.


Margot Cleveland is The Federalist’s senior legal correspondent. She is also a contributor to National Review Online, the Washington Examiner, Aleteia, and Townhall.com, and has been published in the Wall Street Journal and USA Today. Cleveland is a lawyer and a graduate of the Notre Dame Law School, where she earned the Hoynes Prize—the law school’s highest honor. She later served for nearly 25 years as a permanent law clerk for a federal appellate judge on the Seventh Circuit Court of Appeals. Cleveland is a former full-time university faculty member and now teaches as an adjunct from time to time. As a stay-at-home homeschooling mom of a young son with cystic fibrosis, Cleveland frequently writes on cultural issues related to parenting and special-needs children. Cleveland is on Twitter at @ProfMJCleveland. The views expressed here are those of Cleveland in her private capacity.

Time Is Running Out to Speak Freely About Free Speech in America


BY: MARGOT CLEVELAND | MARCH 20, 2023

Read more at https://thefederalist.com/2023/03/20/time-is-running-out-to-speak-freely-about-free-speech-in-america/

Americans need to have an important discussion about free speech now — before the Censorship Complex makes it impossible to do so. 


The Censorship Complex — whereby Big Tech censorship is induced by the government, media, and media-rating businesses — threatens the future of free speech in this country. To understand how and why, Americans need to talk about speech — and the government’s motive to deceive the public. 

To frame this discussion, consider these hypotheticals:

  • Two American soldiers training Ukrainian soldiers in Poland cross into the war zone, ambushing and killing five Russian soldiers. Unbeknownst to the American soldiers, a Ukrainian soldier films the incident and provides the footage to an independent journalist, who authors an article on Substack with a link to the video.
  • Russia uses its intelligence service and “bots” to flood social media with claims that the Ukrainians are misusing 90 percent of American tax dollars. In truth, “only” 40 percent of American tax dollars are being wasted or corruptly usurped — a fact that an independent journalist learns when a government source leaks a Department of Defense report detailing the misappropriation of the funds sent to Ukraine.
  • A third of Americans disagree with the continued funding of the war in Ukraine and organically prompt #NoMoreMoola to trend. After this organic hashtag trend begins, Russian operatives amplify the hashtag while the Russian-run state media outlet, Russia Today, reports on the hashtag trend. 
  • Following the collapse of Silicon Valley Bank, the communist Chinese government uses social media to create the false narrative that 10 specifically named financial institutions are bordering on collapse. In reality, only Bank A1 is financially troubled, but a bank run on any of the 10 banks would cause those banks to collapse too.

In each of these scenarios — and countless others — the government has an incentive to deceive the country. Americans need to recognize this reality to understand the danger posed by the voluntary censorship of speech.

Our government will always seek to quash certain true stories and seed certain false stories: sometimes to protect human life, sometimes to protect our national defense or the economy or public health, sometimes to obtain the upper hand against a foreign adversary, and sometimes to protect the self-interests of its leaders, preferred policy perspectives, and political and personal friends.

Since the founding, America’s free press has provided a check on a government seeking to bury the truth, peddle a lie, or promote its leaders’ self-interest. At times, the legacy press may have buried a story or delayed its reporting to protect national security interests, but historically those examples were few and far between.

Even after the left-leaning slant of legacy media outlets took hold and “journalists” became more open to burying (or spinning) stories to protect their favored politicians or policies, new media provided a stronger check and a way for Americans to learn the truth. The rise of social media, citizen journalists, Substack, and blogs added further roadblocks to both government abuse and biased and false reporting. 

Donald Trump’s rise, his successful use of social media, and new media’s refusal to join the crusade against Trump caused a fatal case of Stockholm Syndrome, with Big Tech and legacy media outlets welcoming government requests for censorship. With support from both for-profit and nonprofit organizations and academic institutions, a Censorship Complex emerged, embracing the government’s definition of “truth” and seeking to silence any who challenged it, whether it be new media or individual Americans — even experts. 

The search for truth suffered as a result, and Americans were deprived of valuable information necessary for self-governance. 

We know this because notwithstanding the massive efforts to silence speech, a ragtag group of muckrakers persisted and exposed several official dictates as lies: The Hunter Biden laptop was not Russian disinformation, Covid very well may have escaped from a Wuhan lab, and Trump did not collude with Putin. 

But if the Censorship Complex succeeds and silences the few journalists and outlets still willing to challenge the government, Americans will no longer have the means to learn the truth. 

Consider again the above hypotheticals. In each of those scenarios, the government — or at least some in the government — has an incentive to bury the truth. In each, it could frame the truth as a foreign disinformation campaign and offer Americans a countervailing lie as the truth. 

A populace voluntarily acquiescing in the censorship of speech because it is purportedly foreign misinformation or disinformation will soon face a government that lies, protected by complicit media outlets that repeat those lies as truth, social media websites that ban or censor reporting that challenges the official government narrative, hosting services that deplatform dissenting media outlets, advertisers that starve journalists of compensation, and search engines that hide the results of disfavored viewpoints.

The window is quickly closing on free speech in America, so before it is locked and the curtain thrown shut, we must talk about speech. We need to discuss the circumstances, if any, in which the government should alert reporters and media outlets to supposed foreign disinformation and how. We need to discuss the circumstances, if any, under which Big Tech should censor speech.

Americans need to have this discussion now — before the Censorship Complex makes it impossible to do so. 



The Censorship Complex Isn’t A ‘Tinfoil Hat’ Conspiracy, And The ‘Twitter Files’ Just Dropped More Proof


BY: MARGOT CLEVELAND | MARCH 10, 2023

Read more at https://thefederalist.com/2023/03/10/the-censorship-complex-isnt-a-tinfoil-hat-conspiracy-and-the-twitter-files-just-dropped-more-proof/

Sometimes there is a vast conspiracy at play, and the problem isn’t that someone is donning a tinfoil hat but that he’s buried his head in the sand.


“It may be possible — if we can take off the tinfoil hat — that there is not a vast conspiracy,” Democrat Colin Allred of Texas scoffed at independent journalist Matt Taibbi during Thursday’s House Judiciary subcommittee hearing. But while Allred was busy deriding Taibbi and fellow witness, journalist Michael Shellenberger, the public was digesting the latest installment of the “Twitter Files” — which contained yet further proof that the government funds and leads a sprawling Censorship Complex.

Taibbi dropped the Twitter thread about an hour before the House Judiciary’s Select Subcommittee on the Weaponization of the Federal Government hearing began. And notwithstanding the breadth and depth of the players revealed in the 17-or-so earlier installments of the “Twitter Files,” Thursday’s reporting exposed even more government-funded organizations pushing Twitter to censor speech. 

But yesterday’s thread, titled “The Censorship-Industrial Complex,” did more than merely expand the knowledge base of the various actors: It revealed that government-funded organizations sought the censorship of truthful speech by ordinary Americans. 

In his prepared testimony for the subcommittee, Shellenberger spoke of the censorship slide he saw in reviewing the internal Twitter communications. “The bar for bringing in military-grade government monitoring and speech-countering techniques has moved from ‘countering terrorism’ to ‘countering extremism’ to ‘countering simple misinformation.’ Otherwise known as being wrong on the internet,” Shellenberger testified.

“The government no longer needs the predicate of calling you a terrorist or an extremist to deploy government resources to counter your political activity,” Shellenberger continued. “The only predicate it needs is the assertion that the opinion you expressed on social media is wrong.”

Being “wrong” isn’t even a prerequisite for censorship requests, however, with the Virality Project headed out of the Stanford Internet Observatory reportedly pushing “multiple platforms” to censor “true content which might promote vaccine hesitancy.” 

An excerpt showed this verboten category included “viral posts of individuals expressing vaccine hesitancy, or stories of true vaccine side effects,” which the so-called disinformation experts acknowledged might “not clearly” be “mis or disinformation, but it may be malinformation (exaggerated or misleading).” 

Silencing such speech is bad enough, but the Virality Project “added to this bucket” of “true content” worthy of censorship: “true posts which could fuel hesitancy, such as individual countries banning certain vaccines.” 

Let that sink in for a minute. The Virality Project — more on that shortly — pushed “multiple platforms” to take action against individuals posting true news reports of countries banning certain vaccines. And why? Because it might make individuals “hesitant” to receive a Covid shot.

So who is this overlord of information, the Virality Project?

The Stanford Internet Observatory reports that it launched the Virality Project in response to the coronavirus, to conduct “a global study aimed at understanding the disinformation dynamics specific to the COVID-19 crisis.” Stanford expanded the project in January 2021, “with colleagues at New York University, the University of Washington, the National Council on Citizenship, and Graphika.”

Beyond collaboration with state-funded universities, the Virality Project, in its own words, “built strong ties with several federal government agencies, most notably the Office of the Surgeon General (OSG) and the CDC, to facilitate bidirectional situational awareness around emerging narratives.” According to the Virality Project’s 2022 report, “Memes, Magnets, and Microchips: Narrative Dynamics Around COVID-19 Vaccines,” “the CDC’s biweekly ‘COVID-19 State of Vaccine Confidence Insights’ reports provided visibility into widespread anti-vaccine and vaccine hesitancy narratives observed by other research efforts.”

The Virality Project’s report also championed its success in engaging six Big Tech platforms — Facebook (including Instagram), Twitter, Google (including YouTube), TikTok, Medium, and Pinterest — using a “ticket” system. The social media platforms would “review and act on” reports from the Virality Project, “in accordance with their policies.” 

With the Virality Project working closely with the surgeon general and the CDC, which provided “vaccine hesitancy narratives” to the Stanford team, and the Stanford team then providing censorship requests to the tech giants, the government censorship loop was closed. 

Censorship requests were not limited to Covid-19, however, with the Stanford Internet Observatory’s Election Integrity Partnership playing a similar role in providing Twitter — and presumably other Big Tech companies — requests to remove supposed election disinformation. 

Earlier “Twitter Files” established that the Election Integrity Partnership was a conduit through which other government-funded entities, such as the Center for Internet Security, relayed censorship requests to Twitter. And in addition to receiving millions in government grants, during the 2020 election, the Center for Internet Security partnered with the Cybersecurity and Infrastructure Security Agency at the Department of Homeland Security — again completing the circle of government censorship we saw at play during the 2020 election cycle.

The groups involved in both the Election Integrity Partnership and the Virality Project are also connected by government funding. The Election Integrity Partnership boasted that it “brought together misinformation researchers” from across four organizations: the Stanford Internet Observatory, the University of Washington’s Center for an Informed Public, Graphika, and the Atlantic Council’s Digital Forensic Research Lab. Both Graphika and the University of Washington also partnered with Stanford for the Virality Project, along with individuals from New York University and the National Council on Citizenship.

Beyond the taxpayer-funded state universities involved in the projects, Graphika received numerous Department of Defense contracts and a $3 million grant from the DOD for a 2021-2022 research project related to “Research on Cross-Platform Detection to Counter Malign Influence.” Graphika also received a nearly $2 million grant from the DOD for research on “Co-Citation Network Mapping” and had previously researched “network mapping,” or the tracking of how Covid “disinformation” spreads through social media.

The Atlantic Council likewise receives federal funding, including a grant from the State Department’s Global Engagement Center awarded to its Digital Forensic Research Lab. And Stanford rakes in millions in federal grants as well.

The government funding of these censorship conduits is not the only scandal exposed by the “Twitter Files.” Rather, the internal communications of the social media giant also revealed that several censorship requests rested on bogus research. 

But really, that is nothing compared to what Thursday’s “Twitter Files” revealed: a request for the censorship of truthful information, including news that certain Covid shots had been banned in some countries. And that censorship request came from a group of so-called disinformation experts closely coordinating with the government and with several partners funded with government grants — just as was the case during the 2020 election.

This all goes to show that sometimes there is a vast conspiracy at play and that the problem is not that someone is donning a tinfoil hat, but that he’s buried his head in the sand.


