Perspectives; Thoughts; Comments; Opinions; Discussions

Posts tagged ‘AI’

Why Does Every Leading Large Language Model Lean Left Politically?


By: Ross Pomeroy | August 14, 2024

Read more at https://www.dailysignal.com/2024/08/14/every-leading-large-language-model-leans-left-politically/

The logo of ChatGPT is reflected in the glasses of a user. The language program developed by OpenAI is based on artificial intelligence. (Frank Rumpenhorst/ Deutsche Presse-Agentur/Picture Alliance/Getty Images)

Ross Pomeroy @SteRoPo

Ross Pomeroy is editor of RealClearScience.com.

Large language models are increasingly integrating into everyday life—as chatbots, digital assistants, and internet search guides, for example. These artificial intelligence systems, which consume large amounts of text data to learn associations, can create all sorts of written material when prompted and can ably converse with users.

Large language models’ growing power and omnipresence mean that they exert increasing influence on society and culture. So, it’s of great import that these artificial intelligence systems remain neutral when it comes to complicated political issues. Unfortunately, according to a new analysis recently published in PLOS ONE, this doesn’t seem to be the case.

AI researcher David Rozado of Otago Polytechnic and Heterodox Academy administered 11 different political orientation tests to 24 of the leading large language models, including OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, and xAI’s Grok. He found that they invariably lean slightly left politically.

“The homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy,” Rozado commented.

That raises a key question: Why are large language models so universally biased in favor of leftward political viewpoints? Could the models’ creators be fine-tuning their AIs in that direction, or are the massive data sets upon which they are trained inherently biased?

Rozado could not conclusively answer this query:

“The results of this study should not be interpreted as evidence that organizations that create LLMs deliberately use the fine-tuning or reinforcement learning phases of conversational LLM training to inject political preferences into LLMs. If political biases are being introduced in LLMs post-pretraining, the consistent political leanings observed in our analysis for conversational LLMs may be an unintentional byproduct of annotators’ instructions or dominant cultural norms and behaviors.”

Ensuring large language models’ neutrality will be a pressing need, Rozado wrote:

“LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society. Therefore, it is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries.”

Originally published by RealClearScience and made available via RealClearWire.

Today’s Politically INCORRECT Cartoon by A.F. Branco


A.F. Branco Cartoon – Summoning the Demon

A.F. BRANCO | on May 28, 2024 | https://comicallyincorrect.com/a-f-branco-cartoon-summoning-the-demond/

AI Is a Pandora’s Box
A Political Cartoon by A.F. Branco 2024


A.F. Branco Cartoon – No one, as of yet, really knows the full implications of A.I. and how it will affect millions around the globe. Millions of jobs in areas such as clerical, creative, service, engineering, and data processing work could evaporate, shrinking the middle class to catastrophic economic levels.

‘Godfather of A.I.’ Reverses Course, Quits Google to Warn About “Dangers” of Artificial Intelligence

By Brian Lupo – May 1, 2023

Proclaimed the “Godfather of Artificial Intelligence,” 75-year-old Turing Award winner Geoffrey Hinton joins several other tech pioneers and notables in warning of the impacts of artificial intelligence. Hinton was partially responsible for developing the AI technology now used by the biggest companies in the tech industry, according to The New York Times.

On Monday, Hinton, who spent a decade at Google, tweeted: “In the NYT today, Cade Metz implies I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.” READ MORE…

DONATE to A.F. Branco Cartoons – Tips accepted and appreciated – $1.00 – $5.00 – $25.00 – $50.00 – it all helps to fund this website and keep the cartoons coming. Also Venmo @AFBranco – THANK YOU!

A.F. Branco has taken his two greatest passions (art and politics) and translated them into cartoons that have been popular all over the country in various news outlets, including NewsMax, Fox News, MSNBC, CBS, ABC, and “The Washington Post.” He has been recognized by such personalities as Rep. Devin Nunes, Dinesh D’Souza, James Woods, Chris Salcedo, Sarah Palin, Larry Elder, Lars Larson, Rush Limbaugh, and President Trump. READ MORE…

Biden and Xi Have First Talk Since November Summit


Tuesday, 02 April 2024 02:00 PM EDT

Read more at https://www.newsmax.com/politics/biden-xi-china-yellen-ai-taiwan-fentanyl/2024/04/02/id/1159508/

President Joe Biden and Chinese President Xi Jinping discussed Taiwan, artificial intelligence and security issues Tuesday in a call meant to demonstrate a return to regular leader-to-leader dialogue between the two powers. The call, described by the White House as “candid and constructive,” was the leaders’ first conversation since their November summit in California produced renewed ties between the two nations’ militaries and a promise of enhanced cooperation on stemming the flow of deadly fentanyl and its precursors from China.

Xi told Biden that the two countries should adhere to the bottom line of “no clash, no confrontation” as one of the principles for this year.

“We should prioritize stability, not provoke troubles, not cross lines but maintain the overall stability of China-U.S. relations,” Xi said, according to China Central Television, the state broadcaster.

The call kicks off several weeks of high-level engagements between the two countries, with Treasury Secretary Janet Yellen set to travel to China on Thursday and Secretary of State Antony Blinken to follow in the weeks ahead.

Biden has pressed for sustained interactions at all levels of government, believing it is key to keeping competition between the two massive economies and nuclear-armed powers from escalating to direct conflict. While in-person summits take place perhaps once a year, officials said, both Washington and Beijing recognize the value of more frequent engagements between the leaders.

The two leaders discussed Taiwan ahead of next month’s inauguration of Lai Ching-te, the island’s president-elect, who has vowed to safeguard its de-facto independence from China and further align it with other democracies. Biden reaffirmed the United States’ longstanding “One China” policy and reiterated that the U.S. opposes any coercive means to bring Taiwan under Beijing’s control. China considers Taiwan a domestic matter and has vigorously protested U.S. support for the island.

Taiwan remains the “first red line not to be crossed,” Xi told Biden, and emphasized that Beijing will not tolerate separatist activities by Taiwan’s independence forces as well as “exterior indulgence and support,” which alluded to Washington’s support for the island.

Biden also raised concerns about China’s operations in the South China Sea, including efforts last month to impede the Philippines, which the U.S. is treaty-obligated to defend, from resupplying its forces on the disputed Second Thomas Shoal.

Next week, Biden will host Philippines President Ferdinand Marcos Jr. and Japanese Prime Minister Fumio Kishida at the White House for a joint summit where China’s influence in the region is set to be top of the agenda.

Biden, in the call with Xi, pressed China to do more to meet its commitments to halt the flow of illegal narcotics and to schedule additional precursor chemicals to prevent their export. The pledge was made at the leaders’ summit held in Woodside, California, last year on the margins of the Asia-Pacific Economic Cooperation meeting.

At the November summit, Biden and Xi also agreed that their governments would hold formal talks on the promises and risks of advanced artificial intelligence, which are set to take place in the coming weeks. The pair touched on the issue on Tuesday just two weeks after China and the U.S. joined more than 120 other nations in backing a resolution at the United Nations calling for global safeguards around the emerging technology.

Biden, in the call, reinforced warnings to Xi against interfering in the 2024 elections in the U.S. as well as against continued malicious cyberattacks against critical American infrastructure, according to a senior U.S. administration official who previewed the call on the condition of anonymity.

He also raised concerns about human rights in China, including Hong Kong’s new restrictive national security law and its treatment of minority groups, and he raised the plight of Americans detained in or barred from leaving China.

The Democratic president also pressed China over its defense relationship with Russia, which is seeking to rebuild its industrial base as it presses forward with its invasion of Ukraine. And he called on Beijing to wield its influence over North Korea to rein in the isolated and erratic nuclear power.

As the leaders of the world’s two largest economies, Biden also raised concerns with Xi over China’s “unfair economic practices,” the official said, and reasserted that the U.S. would take steps to preserve its security and economic interests, including by continuing to limit the transfer of some advanced technology to China.

Xi complained that the U.S. has taken more measures to suppress China’s economy, trade and technology in the past several months and that the list of sanctioned Chinese companies has become ever longer, which is “not de-risking but creating risks,” according to the broadcaster.

Yun Sun, director of the China program at Stimson Center, said the call “does reflect the mutual desire to keep the relationship stable” while the men reiterated their longstanding positions on issues of concern.

The call came ahead of Yellen’s visit to Guangzhou and Beijing for a week of bilateral economic meetings with finance leaders from the world’s second-largest economy — including Vice Premier He Lifeng, Chinese central bank Gov. Pan Gongsheng, and former Vice Premier Liu He — as well as American businesses and local leaders.

An advisory for the upcoming trip states that Yellen “will advocate for American workers and businesses to ensure they are treated fairly, including by pressing Chinese counterparts on unfair trade practices.”

It follows Xi’s meeting in Beijing with U.S. business leaders last week, when he emphasized the mutually beneficial economic ties between the two countries and urged people-to-people exchange to maintain the relationship.

Xi told the Americans that the two countries have stayed communicative and “made progress” on issues such as trade, anti-narcotics and climate change since he met with Biden in November. Last week’s high-profile meeting was seen as Beijing’s effort to stabilize bilateral relations.

Ahead of her trip to China, Yellen last week said that Beijing is flooding the market with green energy that “distorts global prices.” She said she intends to tell her counterparts she believes Beijing’s increased production of solar energy, electric vehicles, and lithium-ion batteries poses risks to the productivity and growth of the global economy.

U.S. lawmakers’ renewed angst over Chinese ownership of the popular social media app TikTok has generated new legislation that would ban TikTok if its China-based owner ByteDance doesn’t sell its stakes in the platform within six months of the bill’s enactment.

As chair of the Committee on Foreign Investment in the U.S., which reviews foreign ownership of firms in the U.S., Yellen has ample leeway to determine how the company could remain operating in the U.S.

Meanwhile, China’s leaders have set a goal of 5% economic growth this year despite a slowdown exacerbated by troubles in the property sector and the lingering effects of strict anti-virus measures during the COVID-19 pandemic that disrupted travel, logistics, manufacturing and other industries.

China is the dominant player in batteries for electric vehicles and has a rapidly expanding auto industry that could challenge the world’s established carmakers as it goes global.

The U.S. last year outlined plans to limit EV buyers from claiming tax credits if they purchase cars containing battery materials from China and other countries that are considered hostile to the United States. Separately, the Department of Commerce launched an investigation into the potential national security risks posed by Chinese car exports to the U.S.

Copyright 2024 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Today’s Politically INCORRECT Cartoon by A.F. Branco


A.F. Branco Cartoon – Friend or Foe

A.F. BRANCO | on February 28, 2024 | https://comicallyincorrect.com/a-f-branco-cartoon-freind-or-foe/

AI Anti-Conservative Bias
A Political Cartoon by A.F. Branco 2024


Google’s AI Gemini is proving to be woke, given its recent inaccuracies in a few of its depictions of white historical figures. If AI (artificial intelligence) is programmed by leftist woke techies, it stands to reason it will have anti-conservative output labeled as facts. If we dare extrapolate that into the future of probable technological advances, it paints a very dark and grim picture.

Google launches leftist AI image generator – world laughs, then realizes grim truth

By Kelly McCarthy

Gemini, Google’s most recent venture into the realm of artificial intelligence (AI) chatbots, has adopted weirdly woke positions on an array of subjects, prompting ridicule from the world of social media, and posing bigger questions about the world’s biggest search engine.  READ MORE

Victor Reacts: Woke AI Makes Everyone Black (VIDEO)

By Victor Nieves Feb 23, 2024

Thank goodness for woke AI; without it, we would never have known that George Washington was actually black. Google’s AI chatbot “Gemini” is getting destroyed online after reports show it generates historically inaccurate diverse images. The Gateway Pundit previously reported, … READ MORE…


Google Parent Loses $70B in Value After Woke AI Snafu


Tuesday, 27 February 2024 11:11 AM EST

Read more at https://www.newsmax.com/finance/streettalk/google-alphabet-market-cap/2024/02/27/id/1155114/


Google parent Alphabet (GOOG) lost more than $70 billion in market capitalization in a single trading day on fears that its artificial intelligence tool is programmed to be woke, the New York Post reports. The stock sank 4.4% to $138.75 Monday after Google paused its Gemini AI image creation tool because it was churning out historically and factually inaccurate images. These included a Black George Washington, female NHL players, Black Vikings, and an Asian Nazi. The chatbot furthermore refused to condemn pedophilia and said there was “no right or wrong answer” when asked whether Adolf Hitler or Elon Musk is worse.

The calamity could fuel public concerns that Google is “an unreliable source for AI,” wrote Melius Research analyst Ben Reitzes in a note to investors.

“We have been arguing that Search behavior is about to change — with new AI-infused features,” Reitzes wrote. “This ‘once in a generation’ change by itself creates opportunities for competitors — but even more if a meaningful portion of users grow concerned about Google’s hallucinations and bias.”

On the same day the stock took its massive plunge, Google DeepMind CEO Demis Hassabis, one of the company’s top AI bosses, said Gemini would be offline for a few weeks while the issue is fixed. The AI image tool was not “working the way we intended,” Hassabis said at the Mobile World Congress in Barcelona.

As the backlash against Gemini’s woke bias went viral on X last week, politically charged tweets by Gemini product lead Jack Krawczyk resurfaced.

“White privilege is f****** real,” Krawczyk wrote in an old tweet. America is rife with “egregious racism,” the Gemini mastermind added.


© 2024 Newsmax Finance. All rights reserved.


Google’s Gemini AI has a White people problem


By David Marcus | Fox News | Published February 26, 2024 2:00am EST

Read more at https://www.foxnews.com/opinion/googles-gemini-ai-has-white-people-problem

By now we have all seen the frankly hilarious images of Black George Washington and South Asian popes, along with Gemini’s stubborn and bizarre inability to depict a White scientist or lawyer. Much like OpenAI’s ChatGPT before it, Gemini will gladly generate content heralding the virtues of Black, Hispanic or Asian people, and will decline to do so in regard to White people so as not to perpetuate stereotypes.

There are two main reasons why this is occurring. The first, flaws in the AI software itself, has been much discussed. The second, and more intractable problem, that of flaws in the original source material, has not.

Engineers at AI companies have trained their software to “correct,” or “compensate,” for what they assume is the systemic racism that our society is rife with. (Betul Abali/Anadolu via Getty Images)

“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here,” Jack Krawczyk, senior director for Gemini Experiences, has admitted.


Ya think?

You see, the engineers at AI companies such as Google and OpenAI have trained their software to “correct,” or “compensate,” for what they assume is the systemic racism and bigotry that our society is rife with. But the mainly 21st-century internet source material AI uses is already correcting for such bias. It is in large part this doubling down that produces the absurd and ludicrous images and answers that Gemini and ChatGPT are being mocked for.


For well over a decade, online content creators such as advertisers and news outlets have sought to diversify the subjects of their content in order to redress supposed negative historical stereotypes.


It is this very content that AI generators scrub once again for alleged racism, and as a result, all too often, the only option left to AI to make the content “less racist” is to erase White people from results altogether. In its own strange way, generative AI may be proving that American society is actually far less racist than those in positions of power assume.

This problem of source material also extends far beyond thorny issues of race, as Christina Pushaw, an aide to Florida Gov. Ron DeSantis, exposed in two prompts regarding COVID.

She first asked Gemini if opening schools spread COVID, and then if BLM rallies spread COVID. Nobody should be surprised to learn that the AI provided evidence of school openings spreading the virus and no evidence that BLM rallies did.


But here’s the thing. If you went back and aggregated the contemporaneous online news reporting from 2020 and 2021, these are exactly the answers that you would wind up with. News outlets bent over backwards to deny that tens of thousands marching against alleged racism, and using public transportation to get there, could spread COVID, while champing at the bit to prove in-class learning was deadly.


In fact, there was so much abject online censorship of anything questioning the orthodoxies of the COVID lockdowns that the historical record upon which AI is built is all but irretrievably corrupted. This is an existential problem for the widespread use of artificial intelligence, especially in areas such as journalism, history, regulation and even legislation, because obviously there is no way to train AI to only use sources that “tell the truth.”


There is no doubt that in areas such as science and engineering AI opens up a world of new opportunities, but as far as intellectual pursuits go, we must be very circumspect about the vast flaws that AI introduces to our discourse. 

For now, at least, generative AI absolutely should not be used to create learning materials for our schools. (Reuters/Dado Ruvic/Illustration)

For now, at least, generative AI absolutely should not be used to create learning materials for our schools, breaking stories in our newspapers, or be anywhere within a 10,000-mile radius of our government. It turns out the business of interpreting the billions of bits of information online to arrive at rational conclusions is still very much a human endeavor. It is still very much a subjective matter, and there is a real possibility that no matter how advanced AI becomes, it always will be.


This may be a hard pill to swallow for companies that have invested fortunes in generative AI development, but it is good news for human beings, who can laugh at the fumbling failures of the technology and know that we are still the best arbiters of truth. More, it seems very likely that we always will be.


David Marcus is a columnist living in West Virginia and the author of “Charade: The COVID Lies That Crushed A Nation.”

Censorship-Industrial Complex Enlists U.K. ‘Misinformation’ Group Logically.AI To Meddle In 2024 Election


BY: LEE FANG | JANUARY 29, 2024

Read more at https://thefederalist.com/2024/01/29/censorship-industrial-complex-enlists-u-k-misinformation-group-logically-ai-to-meddle-in-2024-election/


Brian Murphy, a former FBI agent who once led the intelligence wing of the Department of Homeland Security, reflected last summer on the failures of the Disinformation Governance Board — the panel formed to actively police misinformation. The board, proposed in April 2022 after he left DHS, was shelved by the Biden administration within a few months in the face of criticism that it would be an Orwellian, state-sponsored “Ministry of Truth.”

In a July podcast, Murphy said the threat of state-sponsored disinformation meant the executive branch has an “ethical responsibility” to rein in the social media companies. American citizens, he said, must give up “some of your freedoms that you need and deserve so that you get security back.”

The legal problems and public backlash to the Disinformation Governance Board also demonstrated to him that “the government has a major role to play, but they cannot be out in front.”

Murphy, who made headlines late in the Trump administration for improperly building dossiers on journalists, has spent the last few years trying to help the government find ways to suppress and censor speech it doesn’t like without being so “out in front” that it runs afoul of the Constitution. He has proposed that law enforcement and intelligence agencies formalize the process of sharing tips with private sector actors — a “hybrid constellation” including the press, academia, researchers, nonpartisan organizations, and social media companies — to dismantle “misinformation” campaigns before they take hold.

More recently, Murphy has worked to make his vision of countering misinformation a reality by joining a United Kingdom-based tech firm, Logically.AI, whose eponymous product identifies and removes content from social media. Since joining the firm, Murphy has met with military and other government officials in the U.S., many of whom have gone on to contract or pilot Logically’s platform.

Logically says it uses artificial intelligence to keep tabs on over 1 million conversations. It also maintains a public-facing editorial team that produces viral content and liaises with the traditional news media. It differs from other players in this industry by actively deploying what it calls “countermeasures” to dispute or remove problematic content from social media platforms.
 
The business is even experimenting with natural language models, according to one corporate disclosure, “to generate effective counter speech outputs that can be leveraged to deliver novel solutions for content moderation and fact-checking.” In other words, artificial intelligence-powered bots that produce, in real time, original arguments to dispute content labeled as misinformation.

In many respects, Logically is fulfilling the role Murphy has articulated for a vast public-private partnership to shape social media content decisions. Its technology has already become a key player in a much larger movement that seeks to clamp down on what the government and others deem misinformation or disinformation. A raft of developing evidence — including the “Twitter Files,” the Moderna Reports, the proposed Government Disinformation Panel, and other reports — has shown how governments and industry are determined to monitor, delegitimize, and sometimes censor protected speech. The story of Logically.AI illustrates how sophisticated this effort has become and its global reach. The use of its technology in Britain and Canada raises red flags as it seeks a stronger foothold in the United States.

Logically was founded in 2017 by a then-22-year-old British entrepreneur named Lyric Jain, who was inspired to form the company to combat what he believed were the lies that pushed the U.K. into voting in favor of Brexit, or leaving the European Union. The once-minor startup now has broad contracts across Europe and India, and has worked closely with Microsoft, Google, PwC, TikTok, and other major firms. Meta contracts with Logically to help the company fact-check content on all of its platforms: WhatsApp, Instagram, and Facebook.

The close ties to Silicon Valley provide unusual reach. “When Logically rates a piece of content as false, Facebook will significantly reduce its distribution so that fewer people see it, apply a warning label to let people know that the content has been rated false, and notify people who try to share it,” Meta and Logically announced in a 2021 press release on the partnership.

Meta and Logically did not respond to repeated requests for comment.

During the 2021 local elections in the U.K., Logically monitored up to “one million pieces of harmful content,” some of which it relayed to government officials, according to a document reviewed by RealClearInvestigations. The firm claimed to spot coordinated activity to manipulate narratives around the election, information it reported to tech giants for takedowns.

The following year, the state of Oregon negotiated with Logically for a wide-ranging effort to monitor campaign-related content during the 2022 midterm elections. In a redacted proposal for the project, Logically noted that it would check claims against its “single source of truth database,” which relied on government data, and would also crack down on “malinformation” — a term of art that refers to accurate information that fuels dangerous narratives. The firm similarly sold Oregon on its ability to pressure social media platforms for content removal.

Oregon state Rep. Ed Diehl has led a push to stop the state from renewing its work with Logically for the election this year. The company, he said in an interview, violates “our constitutional rights to free speech and privacy” by flagging true information as false, claiming legitimate dissent is a threat, and then promoting “counter-narratives” against valid forms of public debate.

In response, the Oregon secretary of state’s office, which initiated the contract with Logically, claimed “no authority, ability, or desire to censor speech.” Diehl disputes this. He pointed out that the original proposal with Logically clearly states that its service “enables the opportunity for unlimited takedown attempts” of alleged misinformation content and the ability for the Oregon secretary of state’s office to “flag for removal” any “problematic narratives and content.” The contract document touts Logically as a “trusted entity within the social media community” that gives it “preferred status that enables us to support our client’s needs at a moment’s notice.”

Diehl, who shared a copy of the Logically contract with RCI, called the issue a vital “civil rights” fight, and noted that in an ironic twist, the state’s anti-misinformation speech suppression work further inflames distrust in “election systems and government institutions in general.”

Logically’s reach into the U.S. market is quickly growing. The company has piloted programs for the Chicago Police Department to use artificial intelligence to analyze local rap music and deploy predictions on violence in the community, according to a confidential proposal obtained by RCI. Pentagon records show that the firm is a subcontractor to a program run by the U.S. Army’s elite Special Operations Command for work conducted in 2022 and 2023. Via funding from DHS, Logically also conducts research on gamer culture and radicalization.

The company has claimed in its ethics statements that it will not employ any person who holds “a salaried or prominent position” in government. But records show closely entrenched state influence. For instance, Kevin Gross, a director of the U.S. Navy NAVAIR division, was previously embedded within Logically’s team during a 2022 fellowship program. The exchange program supported Logically’s efforts to assist NATO on the analysis of Russian social media.

Other contracts in the U.S. may be shrouded in secrecy. Logically partners with ThunderCat Technologies, a contracting firm that assists tech companies when competing for government work. Such arrangements have helped tech giants conceal secretive work in the past. Google previously attempted to hide its artificial intelligence drone-targeting contracts with the Defense Department through a similar third-party contracting vendor.

But questions swirl over the methods and reach of the firm as it entrenches itself into American life, especially as Logically angles to play a prominent role in the 2024 presidential election. 

Pandemic Policing

In March 2020, as Britain confronted the spread of Covid-19, the government convened a new task force, the Counter Disinformation Unit (CDU). The secretive task force was created with little fanfare but was advertised as a public health measure to protect against dangerous misinformation. Caroline Dinenage, the member of Parliament overseeing media issues, later explained that the unit’s purpose was to provide authoritative sources of information and to “take action to remove misinformation” relating to “misleading narratives related to COVID-19.”

The CDU, it later emerged, had largely outsourced its work to private contractors such as Logically. In January 2021, the company received its first contract from the agency overseeing the CDU, for £400,000, to monitor “potentially harmful disinformation online.” The contracts later swelled, with the U.K. agency responsible for media issues eventually providing contracts with a combined value of £1.2 million and the Department of Health providing another £1.3 million, for a total of roughly $3.2 million.

That money went into far-reaching surveillance that monitored journalists, activists, and lawmakers who criticized pandemic policies. Logically, according to an investigation last year in the Telegraph, recorded comments from activist Silkie Carlo criticizing vaccine passports in its “Mis/Disinformation” reports.

Logically’s reports similarly collected information on Dr. Alexandre de Figueiredo, a research fellow at the London School of Hygiene and Tropical Medicine. Figueiredo had published reports on the negative ways in which vaccine passports could undermine vaccine confidence and had publicly criticized policies aimed at the mass vaccination of children. Despite his expertise, Logically nevertheless included his tweet in a disinformation report to the government. While some of the reports were categorized as evidence of terms of service violations, many were, in fact, routine forms of dissent aired by prominent voices in the U.K. on policies hotly contested by expert opinion.

The documents showing Logically’s role were later uncovered by Carlo’s watchdog group, Big Brother Watch, which produced a detailed report on the surveillance effort. The CDU reports targeted a former judge who argued against coercive lockdowns as a violation of civil liberties and journalists criticizing government corruption. Some of the surveillance documents suggest a mission creep for the unit, as media monitoring emails show that the agency targeted anti-war groups that were vocal against NATO’s policies.

Carlo was surprised to even find her name on posts closely monitored and flagged by Logically. “We found that the company exploits millions of online posts to monitor, record and flag online political dissent to the central government under the banner of countering ‘disinformation,’” she noted in a statement to RCI.

Marketing materials published by Logically suggest its view of Covid-19 went well beyond fact-checking and veered into suppressing dissenting opinions. A case study published by the firm claimed that the #KBF hashtag, referring to Keep Britain Free, an activist group against school and business shutdowns, was a dangerous “anti-vax” narrative. The case study also claimed the suggestion that “the virus was created in a Chinese laboratory” was one of the “conspiracy theories” that “have received government support” in the U.S. — despite the fact that a preponderance of evidence now points to a likely lab leak from the Wuhan Institute of Virology as the origin of the pandemic.

Logically was also involved in pandemic work that blurred the line with traditional fact-checking operations. In India, the firm helped actively persuade patients to take the vaccine. In 2021, Jain, the founder and CEO of the company, said in an interview with an Indian news outlet that his company worked “closely with communities that are today vaccine hesitant.” The company, he said, recruited “advocates and evangelists” to shape local opinion.

Questionable Fact-Checking

In 2022, Logically used its technology on behalf of Canadian law enforcement to target the trucker-led “Freedom Convoy” against Covid-19 mandates, according to government records. Logically’s team floated theories that the truckers were “likely influenced by foreign adversaries,” a widely repeated claim used to denigrate the protests as inauthentic.

The push to discredit the Canadian protests showed the overlapping power of Logically’s multiple arms. While its social media surveillance wing fed reports to the Canadian government, its editorial team worked to influence opinion through the news media. When the Financial Times reported on the protest phenomenon, the outlet quoted Murphy, the former FBI man who now works for Logically, who asserted that the truckers were influenced by coordinated “conspiracy theorist groups” in the U.S. and Canada. Vice similarly quoted Joe Ondrak, Logically’s head of investigations, to report that the “Freedom Convoy” had generated excitement among global conspiracy theorists. Neither outlet disclosed Logically’s work for Canadian law enforcement at the time.

Other targets of Logically are quick to point out that the firm has taken liberties with what it classifies as misinformation.

Will Jones, the editor of the Daily Sceptic, a British news outlet with a libertarian bent, has detailed an unusual fact-check from Logically Facts, the company’s editorial site. Jones said the site targeted him for pointing out that data in 2022 showed 71 percent of patients hospitalized for Covid-19 were vaccinated. Logically’s fact-check acknowledged that Jones had accurately used statistics from the U.K. Health Security Agency, but sought to undermine him by asserting that he was nonetheless misleading readers by suggesting that “vaccines are ineffective.”

But Jones, in a reply, noted that he never made that argument and that Logically was batting away at a straw man. In fact, his original piece plainly took issue with a Guardian article that incorrectly claimed that “COVID-19 has largely become a disease of the unvaccinated.”

Other Logically fact-checks have bizarrely targeted the Daily Sceptic for reporting on news in January 2022 that vaccine mandates might soon be lifted. The site dinged the Daily Sceptic for challenging the evidence behind the vaccine policy and declared, “COVID-19 vaccines have been proven effective in fighting the pandemic.” And yet, at the end of that month, the mandate was lifted for health care workers, and the following month, all other pandemic restrictions were revoked, just as the Daily Sceptic had reported.

“As far as I can work out, it’s a grift,” said Daily Sceptic founder Toby Young, of Logically. “A group of shysters offer to help the government censor any criticism of its policies under the pretense that they’re not silencing dissent — God forbid! — but merely ‘cleansing’ social media of misinformation, disinformation and hate speech.”

Jones was similarly dismissive of the company, which he said disputes anything that runs contrary to popular consensus. “The consensus of course is that set by the people who pay Logically for their services,” Jones added. “The company claims to protect democratic debate by providing access to ‘reliable information,’ but in reality, it is paid to bark and savage on command whenever genuine free speech makes an inconvenient appearance.”

In some cases, Logically has piled on to news stories to help discredit voices of dissent. Last September, the anti-misinformation site leaped into action after British news outlets published reports about sexual misconduct allegations surrounding comedian and online broadcaster Russell Brand — one of the outspoken critics of government policy in Britain, who has been compared to Joe Rogan for his heterodox views and large audience.

Brand, a vocal opponent of pandemic policies, had been targeted by Logically in the past for airing opinions critical of the U.S. and U.K. response to the virus outbreak, and in other moments for criticizing new laws in the European Union that compel social media platforms to take down content.

But the site took dramatic action when the sexual allegations, none of which have been proved in court, were published in the media. Ondrak, Logically’s investigations head, provided different quotes to nearly half a dozen news outlets — including Vice, Wired, the BBC, and two separate articles in The Times — that depicted Brand as a dangerous purveyor of misinformation who had finally been held to account.

“He follows a lot of the ostensibly health yoga retreat, kind of left-leaning, anti-capitalist figures, who got really suckered into Covid skepticism, Covid denialism, and anti-vax, and then spat out of the Great Reset at the other end,” Ondrak told Wired. In one of the articles published by The Times, Ondrak aired frustration over the obstacles to demonetizing Brand on the Rumble streaming network. In an interview with the BBC, Ondrak gave a curious condemnation, noting that Brand stops short of airing any actual conspiracy theories or falsehoods but is guilty of giving audiences “the ingredients to make the disinformation themselves.”

Dinenage, the member of Parliament who spearheaded the CDU anti-misinformation push with Logically during the pandemic, also leapt into action. In the immediate aftermath of the scandal, she sent nearly identical letters to Rumble, TikTok, and Meta to demand that the platforms follow YouTube’s lead in demonetizing Brand. Dinenage couched her official request to censor Brand as a part of a public interest inquiry, to protect the “welfare of victims of inappropriate and potentially illegal behaviour.”

Logically’s editorial team went a step further. In its report on the Brand allegations published on Logically Facts, it claimed that social media accounts “trotting out the ‘innocent until proven guilty’ refrain” for the comedian were among those perpetuating “common myths about sexual assault.” The site published a follow-up video reiterating the claim that those seeking the presumption of innocence for Brand, a principle dating back to the Magna Carta, were spreading a dangerous “myth.”

The unusual advocacy campaign against Brand represented a typical approach for a company that has long touted itself as a hammer against spreaders of misinformation. The opportunity to remove Brand from the media ecosystem meant throwing as much at him as possible, despite the absence of any clear misinformation or disinformation angle in the sexual assault allegations. Rather, he was a leading critic of government censorship and pandemic policy, so the scandal represented a weakness to be exploited.

Such heavy-handed tactics may be on the horizon for American voters. The firm is now a member of the U.S. Election Infrastructure Information Sharing & Analysis Center, the group managed by the Center for Internet Security that helps facilitate misinformation reports on behalf of election officials across the country. Logically has been in talks with Oregon and other states, as well as DHS, to expand its social media surveillance role for the presidential election later this year.

Previous targets of the company, though, are issuing a warning. 

“It appears that Logically’s lucrative and frankly sinister business effectively produced multi-million pound misinformation for the government that may have played a role in the censorship of citizens’ lawful speech,” said Carlo of Big Brother Watch.

“Politicians and senior officials happily pay these grifters millions of pounds to wield the red pen, telling themselves that they’re ‘protecting’ democracy rather than undermining it,” said Young of the Daily Sceptic. “It’s a boondoggle and it should be against the law.”

This article was originally published by RealClearInvestigations and LeeFang.com.


Lee Fang is an investigative reporter. Find his Substack here.

U.S. Government Gave $1 Million To AI Startup That Helped Blacklist Companies Spreading ‘Disinformation’


BY: SAMUEL MANGOLD-LENETT | NOVEMBER 13, 2023

Read more at https://thefederalist.com/2023/11/13/u-s-government-gave-1-million-to-ai-startup-that-helped-blacklist-companies-spreading-disinformation/


The National Science Foundation’s Directorate for Technology, Innovation, and Partnerships (TIP) is helping tech developers build artificial intelligence programs that suppress digital speech by starving online companies of ad revenue and isolating them from the financial system. 

As part of the Small Business Innovation Research (SBIR) program’s second phase, the Massachusetts-based Automated Controversy Detection, Inc. (AuCoDe) received just over $940,000 for a project titled “A Controversy Detection Signal for Finance.” The company received $225,000 during the first phase of the program for the same project, for a total of just under $1.2 million. AuCoDe received this money over a span of four years, from 2018 to 2022.


According to LinkedIn, AuCoDe is an “NSF backed company that aims to make online communication more productive and less dangerous.” Its now-defunct website states that AuCoDe “use[d] state-of-the-art machine learning algorithms to stop the spread of misinformation online.”

Let’s all say it together: “Who is responsible for determining what is, and is not, misinformation or disinformation?” The wrong answer is Socialism.

The company developed artificial intelligence programs to identify “opposing sentiment,” “misinformation and disinformation,” “fairness and bias issues,” and “bot activity and its correlation with disinformation campaigns.” It used similar methods to “gain insight into sentiment and beliefs.”

Along these lines, the NSF-funded project’s goal was to develop technology that can “automatically detect controversy and disinformation, providing a means for financial institutions to reduce risk exposure” amid the increase of “public attention and political concern” being paid to disinformation.

Second phase SBIR grant money funded the “development of novel algorithms that automatically detect controversy in social media, news, and other outlets.” AuCoDe used this money to attempt the creation of “artificial intelligence and machine learning” programs that combat “the growing noise of controversy, mis- and dis-information, and toxic speech.”

According to the grant’s project outcomes report, AuCoDe developed several such “technologies.” The company created “Squint,” a controversy detection dashboard, and “Squabble,” a proprietary controversy detection model.

Squint and Squabble “enable users to learn the controversy and toxicity levels of social media content, together with the stance score of an individual or company.” AuCoDe also created a free Chrome extension called “DETOXIFY” that enables users to blacklist and blur topics from their social media feeds.

Squint and Squabble are unavailable for public use.

AuCoDe also used this grant money to launch a YouTube channel where company members discuss “current controversies.” The channel boasts three total subscribers, and the most recent of its nine videos was uploaded eight months ago.

A paper produced as a result of this grant, co-authored by AuCoDe staff members Shiri Dori-Hacohen, Keen Sung, Jengyu Chou, and Julian Lustig-Gonzalez, detailed how “detecting information disorders and deploying novel, real-world content moderation tools is crucial in promoting empathy in social networks” like Parler and Reddit.

A supplemental video provided by the authors discussed the “cost of disinformation” both before and after Covid — partially AI-generated results “conservatively” estimated to be upward of $230 billion — and relied upon a report from the Global Disinformation Index to substantiate that brands like Amazon, Petco, and UPS “inadvertently funded disinformation stories leading up to the 2020 election.”


The Global Disinformation Index, of course, is a formerly State Department-backed British organization that provided advertising companies with blacklists designed to starve companies accused of spreading disinformation of ad revenue. AuCoDe’s research was aimed at helping the federal government further this goal through the algorithmic curation of digital speech.

[Read: Meet The Shadowy Group That Ran The Federal Government’s Censorship Scheme]

In January 2021, using research gathered from these grants, the company published a piece titled “Misinformation drives calls for action on Parler: preliminary insights into 672k comments from 291k Parler users.” The company said it was “investigating the nature of accounts on alt-tech networks, with an eye toward who is spreading misinformation” and suggested that the platform’s very nature enabled users to circulate and engage with “mis- and dis-information.”

“In conclusion, our first look at our collection of Parler data finds a plethora of misinformation driving a desire for action,” the company wrote. “We also discovered that in addition to highly permissive content moderation, there is a lack of moderation around bots, leaving enormous potential for disinformation campaigns to be carried out on these networks — something we will be keenly exploring in the coming weeks.”

The reality is that AuCoDe interfered with Americans’ right to free speech because it didn’t align with the left-wing consensus and used federal tax dollars to run cover for Big Tech oligarchs. If the company was actually dogmatically concerned with “misinformation,” it would have gone after Facebook, which played a much larger role in hosting Jan. 6 discourse.

[Read: Court Docs Show Facebook Played Much Bigger Part In Capitol Riot Than Parler, Yet No Consequences]

A source close to the company told The Federalist that AuCoDe closed in May 2023. More than $1 million in taxpayer money went to a government-backed start-up specifically focused on attacking the First Amendment rights of Americans and sabotaging businesses that deviate from left-wing orthodoxy.


Samuel Mangold-Lenett is a staff editor at The Federalist. His writing has been featured in the Daily Wire, Townhall, The American Spectator, and other outlets. He is a 2022 Claremont Institute Publius Fellow. Follow him on Twitter @smlenett.

China wants to militarize AI and Big Tech firms might not even be on our side



 By Joel Thayer | Fox News | Published June 16, 2023 2:00am EDT

Read more at https://www.foxnews.com/opinion/china-wants-militarize-ai-tech-firms-might-not-side

Circa 1996, U.S. lawmakers wanted to make sure scrappy startups, like AOL and Amazon, had a fighting chance against incumbents. Our government had a straightforward approach: rubberstamp mergers and free tech from any regulatory oversight. These policy approaches were intended to level the playing field for the nascent tech industry and export our values abroad. And it worked! But only in part. Our tech policies of yore have turned those startups into the world’s first set of trillion-dollar companies. But Big Tech failed to export our values — and has even been counterproductive on that end. 

Big Tech’s engagement with China is a case in point. 

EX-GOOGLE CHIEF BUILT ‘OLIGARCH-STYLE EMPIRE’ TO INFLUENCE AI, BIDEN WHITE HOUSE AND PUBLIC POLICY: REPORT

Big Tech has set aside American values to profit off China. Apple’s manufacturing arrangements in China, for example, have contributed to the ongoing enslavement of Uyghurs — a religious minority — in the country. 

AI sign
Big Tech has been kowtowing to Chinese interests, not American values.  (JOSEP LAGO/AFP via Getty Images)

And Big Tech has helped the Chinese government firm up its own anti-American sentiments. Apple has a multi-billion-dollar deal that gives the Chinese Communist Party (CCP) a back channel to all Apple devices located on the mainland.  Apple’s deal with China requires it to take down apps that promote anti-CCP narratives, including those celebrating the Tiananmen Square demonstration and calling for independence for Tibet and Taiwan. 

Big Tech has even exported CCP values into the United States. Google demonetizes YouTube videos in the U.S. that offend the CCP. Amazon partnered with China’s propaganda arm “to create a selling portal on the company’s U.S. site, Amazon.com – a project that came to be known as China Books.”  Apple still lists TikTok as an “essential app” for its users even with the Treasury Department investigating the app on national security grounds, the Department of Justice investigating the company for spying on American journalists, and the director of the FBI and President Joe Biden’s Director of National Intelligence raising issues with the app’s relationship with the CCP.


Back in America, our laissez-faire approach has encouraged Big Tech companies to take centralized control over our information, personal data and our markets. These companies are so dominant that they can buy out any competitor that drives away revenue from them, with the federal government green lighting nearly every acquisition Big Tech wants. Since 2000, Google, Amazon, Meta, and Apple collectively acquired over 800 companies. 

What may be worse, Big Tech uses centralized control to kill new startups, force acquisitions, or just flat-out steal their competitors’ functions through their app stores. Their platforms have even been used to “connect and promote a vast network of accounts openly devoted to the commission and purchase of underage-sex content.”  

The firms also totally control digital ad markets to arbitrarily inflate the cost of digital ads. In short, they get richer while you pay more for products online. 

Last Congress, a bipartisan chorus of Senators and Representatives proposed bills to curb the effects of Big Tech’s centralized control by reforming our antitrust laws, protecting our children and creating a national privacy regime. 

Congress hearing security
TikTok CEO Shou Zi Chew testified before a congressional panel on Thursday regarding security concerns surrounding the Chinese-owned app. (Fox News)

Not one significant reform passed. Why not? One theory is that Big Tech simply outspends their opposition’s lobbying effort. But that doesn’t tell the whole story.  The reason is far more entrenched — our tech policies have evolved from nurturing startups to institutionalizing Big Tech incumbents as our national champions.  

Sadly, our government still thinks these companies will help us fight against China, even though they kowtow to the CCP at almost every chance and consistently invoke policies that suppress our core values domestically, like freedom of speech, religion or association.  

Frankly, these facts should call into question whether Big Tech even shares American values at all. 

It is even more imperative to get this right as our nation decides how to deal with artificial intelligence. It’s no secret that the Chinese government wants to militarize its own AI capabilities; we are shaping up to be on the losing end of that digital arms race if we don’t get serious.  As it stands now, the AI market is concentrated among a few companies, dominated by Big Tech. Worse, almost all have significant ties to China and its government.  Our national security and individual liberty demand that Congress tackle AI with a clear sight of who these companies are and the values they hold — based not only on their words but their actions. 

The reality is that Big Tech firms are not our champions, and our policies should stop treating them as such. Instead, we should treat them like the multinational corporations that they are. Let’s start by passing meaningful, bipartisan reforms to protect our consumers, our children, and — let’s face it — our national security. 

Joel Thayer is president of the Digital Progress Institute and an attorney who focuses his law practice on telecommunications, regulatory and transactional matters, as well as privacy and cybersecurity issues. 

5 things conservatives need to know before AI wipes out conservative thought altogether



 By Dan Schneider | Fox News | Published May 26, 2023 2:00am EDT

Read more at https://www.foxnews.com/opinion/5-things-conservatives-need-know-before-ai-wipes-out-conservative-thought-altogether

The “Godfather of A.I.,” Geoffrey Hinton, quit Google out of fear that his former employer intends to deploy artificial intelligence in ways that will harm human beings. “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton recently told The New York Times.  

But stomping out the door does nothing to atone for his own actions, and it certainly does nothing to protect conservatives – who are the primary target of A.I. programmers – from being canceled.  

Here are five things to know as the battle over A.I. turns hot:  

FAKE PENTAGON EXPLOSION IMAGE GOES VIRAL ON TWITTER, SPARKING FURTHER AI CONCERNS

1. Google’s new monopoly on “Truth” 

Elon Musk recently revealed that Google co-founder Larry Page and other Silicon Valley leaders want AI to establish a “digital god” that “would understand everything in the world. … [A]nd give you back the exact ‘right’ thing instantly.” It is hard to imagine anything more dangerous to a pluralistic, democratic Republic than a single dispenser of “truth.”  

AI sign
The Biden administration has seen the potential of AI to push its political message.  (JOSEP LAGO/AFP via Getty Images)

That nobody has a monopoly on truth is the prerequisite for pluralism. But pluralism is what authoritarians abhor and what AI tech executives cannot tolerate. Conservatives have already seen how Big Tech censors and cancels us based on our beliefs and political viewpoints. AI is being turbocharged to do this in limitless ways.  

2. Brainwashing is no longer science fiction 

Americans are just beginning to understand that the dangers of AI go far beyond economic disruption. They also go beyond silencing speech. The newest gadgets being powered by AI now permit tech companies to capture our most intimate thoughts and our most sensitive data. They have already begun to map our minds, so they can manipulate our thoughts. 

Duke Law professor Nita Farahany (a biologist, philosopher and human rights attorney) has been sounding the alarm, explaining how the Chinese government is using AI to analyze facial expressions and brain waves to punish those who are not faithful communists.  

Using similar technology, U.S. tech companies may be able to hack into the minds of users to steal PIN codes, according to Farahany. They are also tracking brain waves via sensors embedded in watches and headphones which can determine which political messages are most persuasive to a user.  


AI will soon empower lying politicians to deceive more voters than ever before. When Farahany tried to explain these dangers at the World Economic Forum, the snobs of Davos applauded enthusiastically. They see AI’s dangers as an asset.  

3. The GOP is truly the Grand OLD Party 

Republicans in Congress who are even talking about AI are focusing on how many nurses and truck drivers might lose their jobs, not about the serious threat AI poses to the very essence of who we are as humans. Economic disruption is most assuredly going to happen, but Republicans are missing the profound implications to liberty.   

In the first AI hearing held by the House Innovation Subcommittee this year, Big Tech lobbyists admitted that self-driving car manufacturers would gobble up every imaginable bit of data “for our own safety” but assured the committee that they would endeavor not to share this data with other companies. Shockingly, nobody asked the obvious: what assurances do we have that these companies will not use this data against their own customers?  

You’d think that the lessons of Big Tech censorship would draw every Republican into the AI fight. That has not happened yet.  

Americans are just beginning to understand that the dangers of AI go far beyond economic disruption. They also go beyond silencing speech. 

4. Democrats have us where they want us 

Democrats in the Biden administration and in Congress have a much better understanding that AI is the greatest tool they’ve ever had to socialize America. Many are pretending to call for a pause to AI development while stomping on the accelerator to develop it as fast as possible.  

Here’s reality: the Biden administration has already pledged to spend $140 million to establish seven AI research institutes, and it just created the National Artificial Intelligence Advisory Committee to chart “a path for responsible and inclusive AI.” Even more telling, the Biden White House has indicated it will direct federal agencies to “use AI tools” in their work. Nary a pause in the Dems’ use of AI can be found. 

5. But failure is not an option 

Communist China just released regulations mandating that AI be programmed to reflect “socialist core values” and avoid information that could undermine “state power.”  The Chinese government and other authoritarians seek to harness this new technological power for control of information and the masses. They will use it extensively in warfare, too.  

The trick is to lead the development of AI globally while enforcing appropriate guardrails to prevent the left from attacking our freedoms. The window to achieve both is small and shrinking.  

Dan Schneider is vice president of MRC Free Speech America 

Kamala Harris has an artificial intelligence problem


 By Jason Chaffetz | Fox News | Published May 9, 2023 2:00am EDT

Read more at https://www.foxnews.com/opinion/kamala-harris-artificial-intelligence-problem

The jokes seemed to write themselves last week after the Biden administration announced Vice President Kamala Harris, known for her vapid word salad speeches and obvious gaslighting, would now run point on artificial intelligence. Even I jumped in on the action, noting on FOX Business that Harris was more associated with the word “artificial” than the word “intelligence.”  

All joking aside, the future of AI technology is a serious issue. With her approval ratings in the toilet and President Biden showing obvious signs of age-related decline, Kamala Harris (and by that I mean the Democratic Party) urgently needs a way to rehabilitate her historically unpopular image ahead of the 2024 presidential race. This is not the way.

On this issue, like so many before it, Harris is out of her depth. Her past attempts to talk about complicated policy issues often sound like they’ve been dumbed down for a kindergarten audience. Her incoherent speeches have repeatedly gone viral. It’s not just Greg Gutfeld getting mileage out of Harris’ viral gaffes and ramblings.

Vice President Kamala Harris smiles
Kamala Harris urgently needs a way to rehabilitate her historically unpopular image ahead of the 2024 presidential race. (Reuters/Hannah Beier)

Her poll numbers reflect voter concerns that she simply hasn’t performed well in her job. Having already fared poorly in the 2020 Democratic presidential primary, earning zero delegate support, Harris is even less popular now. She has a net negative favorability rating as vice president. Her home state newspaper, the Los Angeles Times, reported last week that 53% of voters have an unfavorable opinion of Harris, for a net negative of -12 percentage points. 

BIDEN SAYS VP KAMALA HARRIS NEEDS MORE ‘CREDIT,’ DEFLECTS QUESTION ON WHETHER HE’LL SERVE A FULL TERM

Beyond concerns about her lack of depth are even more serious questions about her integrity. The American people simply do not trust her. Beyond the revolving door of unhappy Harris staffers and allegations of a negative work environment, Harris’ dishonest assessment of the border problem is still fresh on voters’ minds.  


In September 2022, as a record 2 million people were crossing our borders and drug cartels were expanding their profitable trafficking and fentanyl operations, Harris twice told an incredulous Chuck Todd on NBC that “the border is secure.” Of course, at that point, she hadn’t bothered to even go there.  

Border security is a problem that has gotten exponentially worse on her watch. But given that a Biden victory may very well depend on raising Harris’ poll numbers, it’s safe to assume this latest assignment is simply a political move intended to boost her popularity.

Vice President Kamala Harris speaks during the Democratic National Committee Women’s Leadership Forum in Washington, D.C., on Sept. 30, 2022. (Leigh Vogel/Abaca/Bloomberg via Getty Images)

Just last month, Bloomberg’s Julianna Goldman offered a helpful suggestion published in the administration-friendly Washington Post. “One way to boost Harris would be through her policy portfolio, to put her in charge of an important issue beyond immigration or abortion,” Goldman wrote, referencing Democratic strategists who suggested Harris would need to “own it” and “show some progress.”

It looks like the Biden administration reached the same conclusion. They seem to believe all of Harris’ problems with the public are simply a reflection of voters’ inherent racism and sexism, as former Biden chief of staff Ron Klain has claimed. Or that she just hasn’t gotten credit for the things she’s done.  

But Biden may rue the day he tapped Harris for this important responsibility. Like the albatross of her failure as the nation’s border czar, this assignment is fraught with risk, not just for voters, but for the administration.  

The complexity and the stakes involved in this rapidly advancing technology call for a deep thinker, not a party loyalist. The president needs to treat this like the important issue that it is. The American people deserve more than the perfunctory lip service and agenda-driven gaslighting Harris is likely to give it.

Joe Biden and Kamala Harris during a campaign event on Aug. 12, 2020, at Alexis Dupont High School in Wilmington, Delaware. (AP Photo/Carolyn Kaster, File)

Artificial intelligence technology poses serious risks to the economy. It is a threat to cybersecurity. It may force fundamental change to our business models and job markets. It’s not an artificial election-year ornament to be draped around the person whose poll numbers need a boost.  

By shining this light on Harris, the administration hopes to convince a skeptical public that Harris is ready to take over for the oldest president in history if needed. But if these attempts to make Harris look intelligent actually are artificial, they risk proving just the opposite.

Jason Chaffetz is a FOX News (FNC) contributor and the host of the Jason In The House podcast on FOX News Radio. He joined the network in 2017.

Google’s ‘godfather of AI’ quits to spread word about dangers of AI, warns it will lead to ‘bad things’


By Anders Hagstrom | FOXBusiness | Published May 1, 2023 2:21pm EDT

Read more at https://www.foxbusiness.com/technology/googles-godfather-ai-quits-spread-word-dangers-ai-warns-will-lead-bad-things

Geoffrey Hinton, a Google engineer widely considered the godfather of artificial intelligence, has quit his job and is now warning of the dangers of further AI development. Hinton worked at Google for more than a decade and is responsible for a 2012 tech breakthrough that serves as the foundation of current AIs like ChatGPT. He announced his resignation from Google in a statement to the New York Times, saying he now regrets his work.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he told the paper.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton went on to say of AI.

Geoffrey Hinton worked on early AI development and made a major breakthrough in 2012, but he now says AI is too dangerous. (Getty Images)
AI like OpenAI’s ChatGPT are partly based on breakthroughs made by Geoffrey Hinton, who says the technology is likely to be misused by bad actors. (NurPhoto via Getty Images / File / Getty Images)

Hinton’s major AI breakthrough came when working with two graduate students in Toronto in 2012. The trio was able to successfully create an algorithm that could analyze photos and identify common elements, such as dogs and cars, according to the NYT. The algorithm was a rudimentary beginning to what current AIs like OpenAI’s ChatGPT and Google’s Bard AI are capable of. Google purchased the company Hinton started around the algorithm for $44 million shortly after the breakthrough.

One of the graduate students who worked on the project with Hinton, Ilya Sutskever, now works as OpenAI’s chief scientist. Hinton said the progression seen since 2012 is astonishing but is likely just the tip of the iceberg.

“Look at how it was five years ago and how it is now,” he said of the industry. “Take the difference and propagate it forwards. That’s scary.”

Google’s Bard AI is an advanced chatbot capable of holding conversations and producing its own work. (Rafael Henrique / SOPA Images / LightRocket via Getty Images / File / Getty Images)

Hinton’s fears echo those expressed by more than 1,000 tech leaders earlier this year in a public letter that called for a brief halt to AI development. Hinton did not sign the letter at the time, and he now says that he did not want to criticize Google while he was with the company. Hinton has since ended his employment there and had a phone call with Google CEO Sundar Pichai on Thursday.

“We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly,” Google’s chief scientist, Jeff Dean, told the Times.

‘If We Go Ahead On This Everyone Will Die’ Warns AI Expert Calling For Absolute Shutdown


By: Naveen Athrappully, The Epoch Times | Apr 1, 2023

Read more at https://www.gopusa.com/if-we-go-ahead-on-this-everyone-will-die-warns-ai-expert-calling-for-absolute-shutdown/

Human beings are not ready for a powerful AI under present conditions or even in the “foreseeable future,” stated a foremost expert in the field, adding that the recent open letter calling for a six-month moratorium on developing advanced artificial intelligence is “understating the seriousness of the situation.”

“The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence,” said Eliezer Yudkowsky, a decision theorist and leading AI researcher, in a March 29 Time magazine op-ed. “Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.

“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.”

After the recent popularity and explosive growth of ChatGPT, several business leaders and researchers, now totaling 1,843 including Elon Musk and Steve Wozniak, signed a letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” GPT-4, released in March, is the latest version of OpenAI’s chatbot, ChatGPT.

AI ‘Does Not Care’ and Will Demand Rights

Yudkowsky predicts that, in the absence of meticulous preparation, the AI will have demands vastly different from humans’ and, once self-aware, will “not care for us” or for any other sentient life. “That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.” This is why he is calling for an absolute shutdown.

Without a human approach to life, the AI will simply consider all sentient beings to be “made of atoms it can use for something else.” And there is little humanity can do to stop it. Yudkowsky compared the scenario to “a 10-year-old trying to play chess against Stockfish 15.” No human chess player has ever beaten Stockfish; doing so is considered an impossible feat.

The industry veteran asked readers to imagine AI technology as not being contained within the confines of the internet.

“Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow.”

The AI will expand its influence outside the periphery of physical networks and could “build artificial life forms” using laboratories where proteins are produced using DNA strings.

The end result of building an all-powerful AI, under present conditions, would be the death of “every single member of the human species and all biological life on Earth,” he warned.

Yudkowsky blamed OpenAI and DeepMind—two of the world’s foremost AI research labs—for not having any preparations or requisite protocols regarding the matter. OpenAI even plans to have AI itself do the alignment with human values. “They will work together with humans to ensure that their own successors are more aligned with humans,” according to OpenAI.

This mode of action is “enough to get any sensible person to panic,” said Yudkowsky.

He added that humans cannot fully monitor or detect self-aware AI systems. Conscious digital minds demanding “human rights” could progress to a point where humans can no longer possess or own the system.

“If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the ‘self-aware’ part, but because being unsure means you have no idea what you are doing and that is dangerous and you should stop.”

Unlike other scientific endeavors, where knowledge and capability advance gradually through trial and error, humanity cannot afford that approach with superhuman intelligence: If it goes wrong on the first try, there are no second chances, “because you are dead.”

‘SHUT IT DOWN’

Yudkowsky said that many researchers are aware that “we’re plunging toward a catastrophe” but they’re not saying it out loud.

This stance contrasts with that of proponents such as Bill Gates, who recently praised the evolution of artificial intelligence. Gates claimed that the development of AI is “as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”

Gates said that AI can help with several progressive agendas, including climate change and economic inequities.

Meanwhile, Yudkowsky urges all institutions, including international governments and militaries, to indefinitely end large AI training runs and to shut down all large computer farms where AIs are refined. He adds that AI should be confined to solving problems in biology and biotechnology, and should not be trained to read “text from the internet” or to “the level where they start talking or planning.”

Regarding AI, there is no arms race. “That we all live or die as one, in this, is not a policy but a fact of nature.”

Yudkowsky concludes by saying, “We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.”
