Perspectives; Thoughts; Comments; Opinions; Discussions

Posts tagged ‘artificial intelligence’

Today’s Politically INCORRECT Cartoon by A.F. Branco


Branco Cartoon – Humans Beware

A.F. Branco | on August 22, 2025 | https://comicallyincorrect.com/branco-cartoon-humans-beware/

AI Robot vs Humans
A Political Cartoon by A.F. Branco 2025


A.F. Branco Cartoon – People are being warned that AI could render human beings obsolete at some point. The Borg, anyone?


Job-Killing AI: Dreamworks’ Katzenberg Says Artificial Intelligence Will Replace 90% Of the Human Artists Needed to Make an Animation Movie

By Paul Serran – The Gateway Pundit – Nov 11, 2023

Long gone are the days when animators worked ink-on-paper to produce frame after frame of an animated film.
The digital revolution has long shaped the making of animation. However, we may be about to see the digital transformation of this art form accelerate by a factor of 10 – according to one of the most qualified voices in the industry.
DreamWorks founder Jeffrey Katzenberg has predicted that generative artificial intelligence ‘will cut the cost of animated films by 90 percent’ – that is, the ‘human cost’… READ MORE

DONATE to A.F. Branco Cartoons – Tips accepted and appreciated – $1.00 – $5.00 – $25.00 – $50.00 – it all helps to fund this website and keep the cartoons coming. Also, Venmo @AFBranco – THANK YOU!

A.F. Branco has taken his two greatest passions (art and politics) and translated them into cartoons that have been popular all over the country in various news outlets, including NewsMax, Fox News, MSNBC, CBS, ABC, and “The Washington Post.” He has been recognized by such personalities as Rep. Devin Nunes, Dinesh D’Souza, James Woods, Chris Salcedo, Sarah Palin, Larry Elder, Lars Larson, Rush Limbaugh, Elon Musk, and President Trump.

Censorship-Industrial Complex Enlists U.K. ‘Misinformation’ Group Logically.AI To Meddle In 2024 Election


BY: LEE FANG | JANUARY 29, 2024

Read more at https://thefederalist.com/2024/01/29/censorship-industrial-complex-enlists-u-k-misinformation-group-logically-ai-to-meddle-in-2024-election/


Brian Murphy, a former FBI agent who once led the intelligence wing of the Department of Homeland Security, reflected last summer on the failures of the Disinformation Governance Board, the panel formed to actively police misinformation. The board, proposed in April 2022 after he left DHS, was shelved by the Biden administration within a few months in the face of criticism that it would be an Orwellian, state-sponsored “Ministry of Truth.”

In a July podcast, Murphy said the threat of state-sponsored disinformation meant the executive branch has an “ethical responsibility” to rein in the social media companies. American citizens, he said, must give up “some of your freedoms that you need and deserve so that you get security back.”

The legal problems and public backlash to the Disinformation Governance Board also demonstrated to him that “the government has a major role to play, but they cannot be out in front.”

Murphy, who made headlines late in the Trump administration for improperly building dossiers on journalists, has spent the last few years trying to help the government find ways to suppress and censor speech it doesn’t like without being so “out in front” that it runs afoul of the Constitution. He has proposed that law enforcement and intelligence agencies formalize the process of sharing tips with private sector actors — a “hybrid constellation” including the press, academia, researchers, nonpartisan organizations, and social media companies — to dismantle “misinformation” campaigns before they take hold.

More recently, Murphy has worked to make his vision of countering misinformation a reality by joining a United Kingdom-based tech firm, Logically.AI, whose eponymous product identifies and removes content from social media. Since joining the firm, Murphy has met with military and other government officials in the U.S., many of whom have gone on to contract or pilot Logically’s platform.

Logically says it uses artificial intelligence to keep tabs on over 1 million conversations. It also maintains a public-facing editorial team that produces viral content and liaises with the traditional news media. It differs from other players in this industry by actively deploying what it calls “countermeasures” to dispute or remove problematic content from social media platforms.
 
The business is even experimenting with natural language models, according to one corporate disclosure, “to generate effective counter speech outputs that can be leveraged to deliver novel solutions for content moderation and fact-checking.” In other words, artificial intelligence-powered bots that produce, in real-time, original arguments to dispute content labeled as misinformation.

In many respects, Logically is fulfilling the role Murphy has articulated for a vast public-private partnership to shape social media content decisions. Its technology has already become a key player in a much larger movement that seeks to clamp down on what the government and others deem misinformation or disinformation. A raft of developing evidence — including the “Twitter Files,” the Moderna Reports, the proposed Government Disinformation Panel, and other reports — has shown how governments and industry are determined to monitor, delegitimize, and sometimes censor protected speech. The story of Logically.AI illustrates how sophisticated this effort has become and its global reach. The use of its technology in Britain and Canada raises red flags as it seeks a stronger foothold in the United States.

Logically was founded in 2017 by a then-22-year-old British entrepreneur named Lyric Jain, who was inspired to form the company to combat what he believed were the lies that pushed the U.K. into voting in favor of Brexit, or leaving the European Union. The once-minor startup now has broad contracts across Europe and India, and has worked closely with Microsoft, Google, PwC, TikTok, and other major firms. Meta contracts with Logically to help the company fact-check content on all of its platforms: WhatsApp, Instagram, and Facebook.

The close ties to Silicon Valley provide unusual reach. “When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” Meta and Logically announced in a 2021 press release on the partnership.

Meta and Logically did not respond to repeated requests for comment.

During the 2021 local elections in the U.K., Logically monitored up to “one million pieces of harmful content,” some of which they relayed to government officials, according to a document reviewed by RealClearInvestigations. The firm claimed to spot coordinated activity to manipulate narratives around the election, information they reported to tech giants for takedowns.

The following year, the state of Oregon negotiated with Logically for a wide-ranging effort to monitor campaign-related content during the 2022 midterm elections. In a redacted proposal for the project, Logically noted that it would check claims against its “single source of truth database,” which relied on government data, and would also crack down on “malinformation” — a term of art that refers to accurate information that fuels dangerous narratives. The firm similarly sold Oregon on its ability to pressure social media platforms for content removal.

Oregon state Rep. Ed Diehl has led a push to stop the state from renewing its work with Logically for the election this year. The company, he said in an interview, violates “our constitutional rights to free speech and privacy” by “flagging true information as false, claiming legitimate dissent is a threat, and then promoting ‘counter-narratives’ against valid forms of public debate.”

In response, the Oregon secretary of state’s office, which initiated the contract with Logically, claimed “no authority, ability, or desire to censor speech.” Diehl disputes this. He pointed out that the original proposal with Logically clearly states that its service “enables the opportunity for unlimited takedown attempts” of alleged misinformation content and the ability for the Oregon secretary of state’s office to “flag for removal” any “problematic narratives and content.” The contract document touts Logically as a “trusted entity within the social media community” that gives it “preferred status that enables us to support our client’s needs at a moment’s notice.”

Diehl, who shared a copy of the Logically contract with RCI, called the issue a vital “civil rights” fight, and noted that in an ironic twist, the state’s anti-misinformation speech suppression work further inflames distrust in “election systems and government institutions in general.”

Logically’s reach into the U.S. market is quickly growing. The company has piloted programs for the Chicago Police Department to use artificial intelligence to analyze local rap music and deploy predictions on violence in the community, according to a confidential proposal obtained by RCI. Pentagon records show that the firm is a subcontractor to a program run by the U.S. Army’s elite Special Operations Command for work conducted in 2022 and 2023. Via funding from DHS, Logically also conducts research on gamer culture and radicalization.

The company has claimed in its ethics statements that it will not employ any person who holds “a salaried or prominent position” in government. But records show closely entrenched state influence. For instance, Kevin Gross, a director of the U.S. Navy NAVAIR division, was previously embedded within Logically’s team during a 2022 fellowship program. The exchange program supported Logically’s efforts to assist NATO on the analysis of Russian social media.

Other contracts in the U.S. may be shrouded in secrecy. Logically partners with ThunderCat Technologies, a contracting firm that assists tech companies when competing for government work. Such arrangements have helped tech giants conceal secretive work in the past. Google previously attempted to hide its artificial intelligence drone-targeting contracts with the Defense Department through a similar third-party contracting vendor.

But questions swirl over the methods and reach of the firm as it entrenches itself into American life, especially as Logically angles to play a prominent role in the 2024 presidential election. 

Pandemic Policing

In March 2020, as Britain confronted the spread of Covid-19, the government convened a new task force, the Counter Disinformation Unit (CDU). The secretive task force was created with little fanfare but was advertised as a public health measure to protect against dangerous misinformation. Caroline Dinenage, the member of Parliament overseeing media issues, later explained that the unit’s purpose was to provide authoritative sources of information and to “take action to remove misinformation” relating to “misleading narratives related to COVID-19.”

The CDU, it later emerged, had largely outsourced its work to private contractors such as Logically. In January 2021, the company received its first contract from the agency overseeing the CDU, for £400,000, to monitor “potentially harmful disinformation online.” The contracts later swelled, with the U.K. agency responsible for media issues eventually providing contracts with a combined value of £1.2 million and the Department of Health providing another £1.3 million, for a total of roughly $3.2 million.

That money went into far-reaching surveillance that monitored journalists, activists, and lawmakers who criticized pandemic policies. Logically, according to an investigation last year in the Telegraph, recorded comments from activist Silkie Carlo criticizing vaccine passports in its “Mis/Disinformation” reports.

Logically’s reports similarly collected information on Dr. Alexandre de Figueiredo, a research fellow at the London School of Hygiene and Tropical Medicine. Figueiredo had published reports on how vaccine passports could undermine vaccine confidence and had publicly criticized policies aimed at the mass vaccination of children. Despite his expertise, Logically filed his tweet in a disinformation report to the government. While some of the reports were categorized as evidence of terms of service violations, many were, in fact, routine forms of dissent aired by prominent voices in the U.K. on policies hotly contested by expert opinion.

The documents showing Logically’s role were later uncovered by Carlo’s watchdog group, Big Brother Watch, which produced a detailed report on the surveillance effort. The CDU reports targeted a former judge who argued against coercive lockdowns as a violation of civil liberties and journalists criticizing government corruption. Some of the surveillance documents suggest a mission creep for the unit, as media monitoring emails show that the agency targeted anti-war groups that were vocal against NATO’s policies.

Carlo was surprised to even find her name on posts closely monitored and flagged by Logically. “We found that the company exploits millions of online posts to monitor, record and flag online political dissent to the central government under the banner of countering ‘disinformation,’” she noted in a statement to RCI.

Marketing materials published by Logically suggest its view of Covid-19 went well beyond fact-checking and veered into suppressing dissenting opinions. A case study published by the firm claimed that the #KBF hashtag, referring to Keep Britain Free, an activist group against school and business shutdowns, was a dangerous “anti-vax” narrative. The case study also claimed the suggestion that “the virus was created in a Chinese laboratory” was one of the “conspiracy theories” that “have received government support” in the U.S. — despite the fact that a preponderance of evidence now points to a likely lab leak from the Wuhan Institute of Virology as the origin of the pandemic.

Logically was also involved in pandemic work that blurred the line with traditional fact-checking operations. In India, the firm helped actively persuade patients to take the vaccine. In 2021, Jain, the founder and CEO of the company, said in an interview with an Indian news outlet that his company worked “closely with communities that are today vaccine hesitant.” The company, he said, recruited “advocates and evangelists” to shape local opinion.

Questionable Fact-Checking

In 2022, Logically used its technology on behalf of Canadian law enforcement to target the trucker-led “Freedom Convoy” against Covid-19 mandates, according to government records. Logically’s team floated theories that the truckers were “likely influenced by foreign adversaries,” a widely repeated claim used to denigrate the protests as inauthentic.

The push to discredit the Canadian protests showed the overlapping power of Logically’s multiple arms. While its social media surveillance wing fed reports to the Canadian government, its editorial team worked to influence opinion through the news media. When the Financial Times reported on the protest phenomenon, the outlet quoted Murphy, the former FBI man who now works for Logically, who asserted that the truckers were influenced by coordinated “conspiracy theorist groups” in the U.S. and Canada. Vice similarly quoted Joe Ondrak, Logically’s head of investigations, to report that the “Freedom Convoy” had generated excitement among global conspiracy theorists. Neither outlet disclosed Logically’s work for Canadian law enforcement at the time.

Other targets of Logically are quick to point out that the firm has taken liberties with what it classifies as misinformation.

Will Jones, the editor of the Daily Sceptic, a British news outlet with a libertarian bent, has detailed an unusual fact-check from Logically Facts, the company’s editorial site. Jones said the site targeted him for pointing out that data in 2022 showed 71 percent of patients hospitalized for Covid-19 were vaccinated. Logically’s fact-check acknowledged Jones had accurately used statistics from the U.K. Health Security Agency, but tried to undermine him by asserting that he was still being misleading by suggesting that “vaccines are ineffective.”

But Jones, in a reply, noted that he never made that argument and that Logically was batting away at a straw man. In fact, his original piece plainly took issue with a Guardian article that incorrectly claimed that “COVID-19 has largely become a disease of the unvaccinated.”

Other Logically fact-checks have bizarrely targeted the Daily Sceptic for reporting on news in January 2022 that vaccine mandates might soon be lifted. The site dinged the Daily Sceptic for challenging the evidence behind the vaccine policy and declared, “COVID-19 vaccines have been proven effective in fighting the pandemic.” And yet, at the end of that month, the mandate was lifted for health care workers, and the following month, all other pandemic restrictions were revoked, just as the Daily Sceptic had reported.

“As far as I can work out, it’s a grift,” said Daily Sceptic founder Toby Young, of Logically. “A group of shysters offer to help the government censor any criticism of its policies under the pretense that they’re not silencing dissent — God forbid! — but merely ‘cleansing’ social media of misinformation, disinformation and hate speech.”

Jones was similarly dismissive of the company, which he said disputes anything that runs contrary to popular consensus. “The consensus of course is that set by the people who pay Logically for their services,” Jones added. “The company claims to protect democratic debate by providing access to ‘reliable information,’ but in reality, it is paid to bark and savage on command whenever genuine free speech makes an inconvenient appearance.”

In some cases, Logically has piled on to news stories to help discredit voices of dissent. Last September, the anti-misinformation site leaped into action after British news outlets published reports about sexual misconduct allegations surrounding comedian and online broadcaster Russell Brand — one of the outspoken critics of government policy in Britain, who has been compared to Joe Rogan for his heterodox views and large audience.

Brand, a vocal opponent of pandemic policies, had been targeted by Logically in the past for airing opinions critical of the U.S. and U.K. response to the virus outbreak, and in other moments for criticizing new laws in the European Union that compel social media platforms to take down content.

But the site took dramatic action when the sexual allegations, none of which have been proved in court, were published in the media. Ondrak, Logically’s investigations head, provided different quotes to nearly half a dozen news outlets — including Vice, Wired, the BBC, and two separate articles in The Times — that depicted Brand as a dangerous purveyor of misinformation who had finally been held to account.

“He follows a lot of the ostensibly health yoga retreat, kind of left-leaning, anti-capitalist figures, who got really suckered into Covid skepticism, Covid denialism, and anti-vax, and then spat out of the Great Reset at the other end,” Ondrak told Wired. In one of the articles published by The Times, Ondrak aired frustration over the obstacles to demonetizing Brand on the Rumble streaming network. In an interview with the BBC, Ondrak gave a curious condemnation, noting Brand stops short of airing any actual conspiracy theories or falsehoods but is guilty of giving audiences “the ingredients to make the disinformation themselves.”

Dinenage, the member of Parliament who spearheaded the CDU anti-misinformation push with Logically during the pandemic, also leapt into action. In the immediate aftermath of the scandal, she sent nearly identical letters to Rumble, TikTok, and Meta to demand that the platforms follow YouTube’s lead in demonetizing Brand. Dinenage couched her official request to censor Brand as a part of a public interest inquiry, to protect the “welfare of victims of inappropriate and potentially illegal behaviour.”

Logically’s editorial team went a step further. In its report on the Brand allegations published on Logically Facts, it claimed that social media accounts “trotting out the ‘innocent until proven guilty’ refrain” for the comedian were among those perpetuating “common myths about sexual assault.” The site published a follow-up video reiterating the claim that those seeking the presumption of innocence for Brand, a principle dating back to the Magna Carta, were spreading a dangerous “myth.”

The unusual advocacy campaign against Brand represented a typical approach for a company that has long touted itself as a hammer against spreaders of misinformation. The opportunity to remove Brand from the media ecosystem meant throwing as much at him as possible, despite the absence of any clear misinformation or disinformation angle in the sexual assault allegations. Rather, he was a leading critic of government censorship and pandemic policy, so the scandal represented a weakness to be exploited.

Such heavy-handed tactics may be on the horizon for American voters. The firm is now a member of the U.S. Election Infrastructure Information Sharing & Analysis Center, the group managed by the Center for Internet Security that helps facilitate misinformation reports on behalf of election officials across the country. Logically has been in talks with Oregon and other states, as well as DHS, to expand its social media surveillance role for the presidential election later this year.

Previous targets of the company, though, are issuing a warning. 

“It appears that Logically’s lucrative and frankly sinister business effectively produced multi-million pound misinformation for the government that may have played a role in the censorship of citizens’ lawful speech,” said Carlo of Big Brother Watch.

“Politicians and senior officials happily pay these grifters millions of pounds to wield the red pen, telling themselves that they’re ‘protecting’ democracy rather than undermining it,” said Young of the Daily Sceptic. “It’s a boondoggle and it should be against the law.”

This article was originally published by RealClearInvestigations and LeeFang.com.


Lee Fang is an investigative reporter. Find his Substack here.

Biden Fell for China’s Empty-Promises Playbook at The San Francisco Summit


BY: HELEN RALEIGH | NOVEMBER 17, 2023

Read more at https://thefederalist.com/2023/11/17/biden-fell-for-chinas-empty-promises-playbook-at-the-san-francisco-summit/


President Joe Biden and China’s leader Xi Jinping met this week on the sidelines of the Asia-Pacific Economic Cooperation (APEC) summit in San Francisco. The Biden administration touted the meeting as a significant foreign policy achievement, even though it accomplished little but photo ops.

Leading up to the APEC meeting, Xi had a weak hand, while the leverage was on the U.S. side. China’s economic growth has slowed down significantly. Its once high-flying property sector crashed and exports dropped. The youth unemployment rate reached 22 percent in June 2023 before Beijing stopped publishing the data altogether out of fear of causing public panic. Foreign firms have pulled billions out of China, concerned over its weak economy, hostile regulatory environment, and geopolitical tensions with the U.S. China risks prolonged economic stagnation as consumers are unwilling to spend money due to financial and political uncertainty.

In contrast, the U.S. economy grew almost 5 percent in the third quarter despite facing its own challenges, such as inflation and ballooning national debt. Since China needs American companies’ investments and technologies to revive its weak economy, Biden could have waited for Xi to plead for a summit and used it as leverage to demand some behavioral changes from China. Preconditions could have been that China’s military stop its harassment of Taiwan and the Philippines in the South China Sea, or that China stop funding Russia’s and Iran’s geopolitical aggression by purchasing their oil.

Sadly, Biden and his foreign policy team are known to turn U.S. leverage into weakness by focusing on the wrong priorities. For example, they continue to believe that climate change is the world’s biggest challenge and that the U.S. needs China’s cooperation to save the planet, even though China remains the world’s biggest polluter after signing the Paris Climate Agreement.

Biden’s green initiatives have only deepened the U.S. economy’s dependency on China since the communist regime dominates the global supply chain for solar panels, wind turbines, and batteries for electric vehicles due to its willingness to exploit slave laborers (most are ethnic minorities) and the nation’s abundant supply of coal.

Driven by these misguided priorities, Biden sent several cabinet-level officials to China, including Secretary of State Antony Blinken and Treasury Secretary Janet Yellen. They practically begged Xi for a meeting. Xi, of course, played hard to get. Rather than reciprocating senior U.S. government officials’ multiple visits, Xi waited until last month to send Wang Yi, China’s minister of foreign affairs, to visit the U.S. and only recently agreed to a meeting with Biden.

A few days before the summit, China’s People’s Daily, the mouthpiece of the Chinese Communist Party (CCP), faulted the U.S. for the deterioration of the Sino-U.S. relationship and demanded the U.S. “abandon its aggressive Cold War and aggressive mindset, fix the ‘action deficit’ with practical actions and concrete policies,” even though China is the one who has an action deficit as wide as the Grand Canyon. Remember when Xi promised President Barack Obama not to militarize artificial islands in the South China Sea and then armed those islands anyway and claimed 90 percent of the international waterway is Chinese territory?

Communist Party’s Playbook

What Xi has been doing is following the typical CCP playbook. Miles Yu, a former senior adviser to Secretary of State Mike Pompeo, points out that CCP leaders, starting with Mao, love to “use international gatherings to lend legitimacy to [a] beleaguered regime at home.” By playing hard to get with the Biden administration, Xi hid his weakened hand behind a strongman image. He “seeks to send the message to his caged people — aided by the CCP’s relentless propaganda machine — that their supreme leader is respected, even revered, on the global stage,” according to Yu. Another CCP go-to tactic is to make vague and unenforceable pledges for the distant future in exchange for concessions from the other side now.

Unfortunately, the Biden team fell for the CCP’s trick. In a post-summit press conference, Biden put on a brave face and claimed the summit was “among the most constructive and productive we’ve had,” with three key agreements: to restart cooperation on controlling fentanyl, to resume direct (high-level) military-to-military contact, and to set up expert exchanges on risks and safety issues in artificial intelligence (AI).

But none of these represent any meaningful achievement, since the CCP is known for making empty promises, and the joke is on whoever believes them. On the fentanyl issue, many China observers, including Kelley Currie, a former diplomat, quickly pointed out on X, “Don’t forget that China agreed to do this exact thing in 2019 and dramatically reduced the flow of fentanyl out of China, only to switch tactics and instead supply mass amounts of precursor chemicals to Mexican cartels.”

On the AI issue, China promised nothing. Xi has made enhancing the People’s Liberation Army’s capabilities through AI a national priority and has already committed plenty of resources to AI research and development. Xi will not change course because of some expert exchanges on AI with Americans. If such a discussion occurs, China will exploit it to identify which American AI experts to poach and which cutting-edge AI technologies to steal.

Military Aggression

Biden clearly believes that resuming direct, high-level military-to-military contact between the U.S. and China was a significant accomplishment. He tweeted, “Clear and open communication between our defense establishments is vital to avoid miscalculation by either side and prevent conflict.” But it was Chinese military leaders who refused to pick up phone calls from the U.S. side, and they did so under Xi’s order.

Chinese pilots frequently made dangerous maneuvers near the U.S. and its allies’ military assets in the South China Sea, not because of a lack of communication but because of Xi’s deliberate policy decision: China regards the international waters as its territory and tries to block the U.S. and its allies from accessing them through intimidation. According to Jacob Stokes, a senior fellow at the Center for a New American Security, “China wants the United States and its partners to feel worried about rising military and security risks in East Asia.”

It’s unlikely the Chinese military’s aggressive behavior in and above the South China Sea will stop after the Biden-Xi summit. Furthermore, Elbridge Colby, a former Pentagon official, points out that military-to-military communication is “not vital. It’s not the key issue.” Responding to Biden’s self-congratulatory tweet, Colby wrote, “The key issue is China undertaking a historic military buildup and increasingly using that military to get ready for a war, as your own appointees and generals point out. Just really nowhere near the seriousness we need.”

President Biden and his foreign policy team want Americans to believe that his meeting with Xi in San Francisco was successful. But in truth, the U.S. gained nothing from the Biden-Xi summit. Don’t expect Xi to fulfill any promises or change his policies. The U.S.-China Economic and Security Review Commission’s recent 753-page report to Congress presented evidence that Xi is preparing his military forces and the rest of the country for war and treats diplomacy with the United States, such as the most recent Biden-Xi summit, “primarily as a tool for forestalling and delaying U.S. pressure over a period of years while China moves ever further down the path of developing its own economic, military, and technological capabilities.”

If anything, the world is becoming more dangerous after the Biden-Xi summit, and we are on Xi’s timeline.


Helen Raleigh, CFA, is an American entrepreneur, writer, and speaker. She’s a senior contributor at The Federalist. Her writings appear in other national media, including The Wall Street Journal and Fox News. Helen is the author of several books, including “Confucius Never Said” and “Backlash: How Communist China’s Aggression Has Backfired.” Her latest book is the 2nd edition of “The Broken Welcome Mat: America’s UnAmerican immigration policy, and how we should fix it.” Follow her on Parler and Twitter: @HRaleighspeaks.

U.S. Government Gave $1 Million To AI Startup That Helped Blacklist Companies Spreading ‘Disinformation’


BY: SAMUEL MANGOLD-LENETT | NOVEMBER 13, 2023

Read more at https://thefederalist.com/2023/11/13/u-s-government-gave-1-million-to-ai-startup-that-helped-blacklist-companies-spreading-disinformation/


The National Science Foundation’s Directorate for Technology, Innovation, and Partnerships (TIP) is helping tech developers build artificial intelligence programs that suppress digital speech by starving online companies of ad revenue and isolating them from the financial system. 

As part of the Small Business Innovation Research (SBIR) program’s second phase, the Massachusetts-based Automated Controversy Detection, Inc. (AuCoDe) received just over $940,000 for a project titled “A Controversy Detection Signal for Finance.” The company received $225,000 during the first phase of the program for the same project, for a total of just under $1.2 million. AuCoDe received this money over a span of four years, from 2018 to 2022.


According to LinkedIn, AuCoDe is an “NSF backed company that aims to make online communication more productive and less dangerous.” Its now-defunct website states that AuCoDe “use[d] state-of-the-art machine learning algorithms to stop the spread of misinformation online.”

Let’s all say it together: “Who is responsible for determining what is, and is not, misinformation or disinformation?” The wrong answer is Socialism.

The company developed artificial intelligence programs to identify “opposing sentiment,” “misinformation and disinformation,” “fairness and bias issues,” and “bot activity and its correlation with disinformation campaigns.” It used similar methods to “gain insight into sentiment and beliefs.”

Along these lines, the NSF-funded project’s goal was to develop technology that can “automatically detect controversy and disinformation, providing a means for financial institutions to reduce risk exposure” amid the increase of “public attention and political concern” being paid to disinformation.

Second-phase SBIR grant money funded the “development of novel algorithms that automatically detect controversy in social media, news, and other outlets.” AuCoDe used this money to attempt to create “artificial intelligence and machine learning” programs that combat “the growing noise of controversy, mis- and dis-information, and toxic speech.”

According to the grant’s project outcomes report, AuCoDe developed several such “technologies.” The company created Squint, a controversy detection dashboard, and Squabble, a proprietary controversy detection model.

Squint and Squabble “enable users to learn the controversy and toxicity levels of social media content, together with the stance score of an individual or company.” AuCoDe also created a free Chrome extension called “DETOXIFY” that enables users to blacklist and blur topics from their social media feeds.

Squint and Squabble are unavailable for public use.

AuCoDe also used this grant money to launch a YouTube channel where company members discuss “current controversies.” The channel boasts three total subscribers, and the most recent of its nine videos was uploaded eight months ago.

A paper co-authored by AuCoDe staff members Shiri Dori-Hacohen, Keen Sung, Jengyu Chou, and Julian Lustig-Gonzalez, produced as a result of this grant, detailed how “detecting information disorders and deploying novel, real-world content moderation tools is crucial in promoting empathy in social networks” like Parler and Reddit.

A supplemental video provided by the authors discussed the “cost of disinformation” both before and after Covid — partially AI-generated results “conservatively” estimated to be upward of $230 billion — and relied upon a report from the Global Disinformation Index to substantiate that brands like Amazon, Petco, and UPS “inadvertently funded disinformation stories leading up to the 2020 election.”

Below is a teacher (yes, a schoolteacher, teaching children as we speak). It is safe to assume that a person like this would be used to determine what is, and is not, misinformation, disinformation, and so on.

The Global Disinformation Index, of course, is a formerly State Department-backed British organization that provided advertising companies with blacklists to starve companies accused of spreading disinformation of revenue. AuCoDe’s research was aimed at helping the federal government further this goal through the algorithmic curation of digital speech.

[Read: Meet The Shadowy Group That Ran The Federal Government’s Censorship Scheme]

In January 2021, using research gathered from these grants, the company published a piece titled “Misinformation drives calls for action on Parler: preliminary insights into 672k comments from 291k Parler users.” The company said it was “investigating the nature of accounts on alt-tech networks, with an eye toward who is spreading misinformation” and suggested that the platform’s very nature enabled users to circulate and engage with “mis- and dis-information.”

“In conclusion, our first look at our collection of Parler data finds a plethora of misinformation driving a desire for action,” the company wrote. “We also discovered that in addition to highly permissive content moderation, there is a lack of moderation around bots, leaving enormous potential for disinformation campaigns to be carried out on these networks — something we will be keenly exploring in the coming weeks.”

The reality is that AuCoDe interfered with Americans’ right to free speech because it didn’t align with the left-wing consensus and used federal tax dollars to run cover for Big Tech oligarchs. If the company was actually dogmatically concerned with “misinformation,” it would have gone after Facebook, which played a much larger role in hosting Jan. 6 discourse.

[Read: Court Docs Show Facebook Played Much Bigger Part In Capitol Riot Than Parler, Yet No Consequences]

A source close to the company told The Federalist that AuCoDe closed in May 2023. More than $1 million in taxpayer money went to a government-backed start-up specifically focused on attacking the First Amendment rights of Americans and sabotaging businesses that deviate from left-wing orthodoxy.


Samuel Mangold-Lenett is a staff editor at The Federalist. His writing has been featured in the Daily Wire, Townhall, The American Spectator, and other outlets. He is a 2022 Claremont Institute Publius Fellow. Follow him on Twitter @smlenett.

Kamala Harris has an artificial intelligence problem


 By Jason Chaffetz | Fox News | Published May 9, 2023 2:00am EDT

Read more at https://www.foxnews.com/opinion/kamala-harris-artificial-intelligence-problem

The jokes seemed to write themselves last week after the Biden administration announced Vice President Kamala Harris, known for her vapid word salad speeches and obvious gaslighting, would now run point on artificial intelligence. Even I jumped in on the action, noting on FOX Business that Harris was more associated with the word “artificial” than the word “intelligence.”  

All joking aside, the future of AI technology is a serious issue. With her approval ratings in the toilet and President Biden showing obvious signs of age-related decline, Kamala Harris (and by that I mean the Democratic Party) urgently needs a way to rehabilitate her historically unpopular image ahead of the 2024 presidential race. This is not the way.

On this issue, like so many before it, Harris is out of her depth. Her past attempts to talk about complicated policy issues often sound like they’ve been dumbed down for a kindergarten audience. Her incoherent speeches have repeatedly gone viral. It’s not just Greg Gutfeld getting mileage out of Harris’ viral gaffes and ramblings.

Kamala Harris urgently needs a way to rehabilitate her historically unpopular image ahead of the 2024 presidential race. (Reuters/Hannah Beier)

Her poll numbers reflect voter concerns that she simply hasn’t performed well in her job. Having already fared poorly in the 2020 Democratic presidential primary, earning zero delegate support, Harris is even less popular now. She has a net negative favorability rating as vice president. Her home state newspaper, the Los Angeles Times, reported last week that 53% of voters have an unfavorable opinion of Harris, putting her net favorability at minus 12 percentage points.


Beyond concerns about her lack of depth are even more serious questions about her integrity. The American people simply do not trust her. Beyond the revolving door of unhappy Harris staffers and allegations of a negative work environment, Harris’ dishonest assessment of the border problem is still fresh on voters’ minds.  


In September 2022, as a record 2 million people were crossing our borders and drug cartels were expanding their profitable trafficking and fentanyl operations, Harris twice told an incredulous Chuck Todd on NBC that, “the border is secure.” Of course, at that point, she hadn’t bothered to even go there.  

Border security is a problem that has gotten exponentially worse on her watch. But given that a Biden victory may very well depend on raising Harris’ poll numbers, it’s safe to assume this latest assignment is simply a political move intended to boost her popularity.

Vice President Kamala Harris speaks during the Democratic National Committee Women’s Leadership Forum in Washington, D.C., on Sept. 30, 2022. (Leigh Vogel/Abaca/Bloomberg via Getty Images)

Just last month, Bloomberg’s Julianna Goldman offered a helpful suggestion published in the administration-friendly Washington Post. “One way to boost Harris would be through her policy portfolio, to put her in charge of an important issue beyond immigration or abortion,” Goldman wrote, referencing Democratic strategists who suggested Harris would need to “own it” and “show some progress.”


It looks like the Biden administration reached the same conclusion. They seem to believe all of Harris’ problems with the public are simply a reflection of voters’ inherent racism and sexism, as former Biden chief of staff Ron Klain has claimed. Or that she just hasn’t gotten credit for the things she’s done.  


But Biden may rue the day he tapped Harris for this important responsibility. Like the albatross of her failure as the nation’s border czar, this assignment is fraught with risk, not just for voters, but for the administration.  

The complexity and the stakes involved in this rapidly advancing technology call for a deep thinker, not a party loyalist. The president needs to treat this like the important issue that it is. The American people deserve more than the perfunctory lip service and agenda-driven gaslighting Harris is likely to give it.

Joe Biden and Kamala Harris during a campaign event on Aug. 12, 2020, at Alexis Dupont High School in Wilmington, Delaware. (AP Photo/Carolyn Kaster, File)

Artificial intelligence technology poses serious risks to the economy. It is a threat to cybersecurity. It may force fundamental change to our business models and job markets. It’s not an artificial election-year ornament to be draped around the person whose poll numbers need a boost.  

By shining this light on Harris, the administration hopes to convince a skeptical public that Harris is ready to take over for the oldest president in history if needed. But if these attempts to make Harris look intelligent actually are artificial, they risk proving just the opposite.


Jason Chaffetz is a FOX News (FNC) contributor and the host of the Jason In The House podcast on FOX News Radio. He joined the network in 2017.

Google’s ‘godfather of AI’ quits to spread word about dangers of AI, warns it will lead to ‘bad things’


By Anders Hagstrom | FOXBusiness | Published May 1, 2023 2:21pm EDT

Read more at https://www.foxbusiness.com/technology/googles-godfather-ai-quits-spread-word-dangers-ai-warns-will-lead-bad-things

Geoffrey Hinton, a Google engineer widely considered the godfather of artificial intelligence, has quit his job and is now warning of the dangers of further AI development. Hinton worked at Google for more than a decade and is responsible for a 2012 tech breakthrough that serves as the foundation of current AIs like ChatGPT. He announced his resignation from Google in a statement to the New York Times, saying he now regrets his work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told the paper.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton went on to say of AI.


Geoffrey Hinton worked on early AI development and made a major breakthrough in 2012, but he now says AI is too dangerous. (Getty Images)
AIs like OpenAI’s ChatGPT are partly based on breakthroughs made by Geoffrey Hinton, who says the technology is likely to be misused by bad actors. (NurPhoto via Getty Images / File / Getty Images)

Hinton’s major AI breakthrough came when working with two graduate students in Toronto in 2012. The trio was able to successfully create an algorithm that could analyze photos and identify common elements, such as dogs and cars, according to the NYT. The algorithm was a rudimentary beginning to what current AIs like OpenAI’s ChatGPT and Google’s Bard AI are capable of. Google purchased the company Hinton started around the algorithm for $44 million shortly after the breakthrough.


One of the graduate students who worked on the project with Hinton, Ilya Sutskever, now works as OpenAI’s chief scientist. Hinton said the progression seen since 2012 is astonishing but is likely just the tip of the iceberg.

“Look at how it was five years ago and how it is now,” he said of the industry. “Take the difference and propagate it forwards. That’s scary.”

Google’s Bard AI is an advanced chatbot capable of holding conversations and producing its own work. (Rafael Henrique / SOPA Images / LightRocket via Getty Images / File / Getty Images)

Hinton’s fears echo those expressed by more than 1,000 tech leaders earlier this year in a public letter that called for a brief halt to AI development. Hinton did not sign the letter at the time, and he now says that he did not want to criticize Google while he was with the company. Hinton has since ended his employment there and had a phone call with Google CEO Sundar Pichai on Thursday.

“We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly,” Google’s chief scientist, Jeff Dean, told the Times.
