Artificial Ignorance

286 readers
1 users here now

In this community we share the best (worst?) examples of Artificial "Intelligence" being completely moronic. Did an AI give you the totally wrong answer and then in the same sentence contradict itself? Did it misquote a Wikipedia article with the exact wrong answer? Maybe it completely misinterpreted your image prompt and "created" something ridiculous.

Post your screenshots here, ideally showing the prompt and the epic stupidity.

Let's keep it light and fun, and embarrass the hell out of these Artificial Ignoramuses.

All languages welcome, but an English explanation would be appreciated to keep a common method of communication. Maybe use AI to do the translation for you...

founded 1 year ago
MODERATORS
1
 
 

cross-posted from: https://lemmy.sdf.org/post/51572905

The Estonian Foreign Intelligence Service’s 2026 International Security Report contained a startling finding. It tested the Chinese open-source AI model DeepSeek for biased or incomplete answers.

“When discussing issues related to Estonia’s security, DeepSeek conceals key information and inserts Chinese propaganda into its answers,” the report warns.

Archived

Download report: CHINESE ARTIFICIAL INTELLIGENCE DISTORTS PERCEPTIONS (pdf)

  • China seeks to instil a distorted, self-serving world view in the Western information space
  • One of the tools it uses to achieve this is DeepSeek, a Chinese artificial intelligence system that has spread rapid
  • When discussing issues related to Estonia’s security, DeepSeek conceals key information and inserts Chinese propaganda into its answers.

[...]

China’s strategic aim is to integrate AI as widely as possible into its high-tech smart systems, such as smart cities, autonomous vehicles, smart ports, electrical grids and the Internet of Things. Because this new industrial revolution requires analysing behavioural patterns, the state has encouraged all Chinese citizens to help improve national AI capabilities through their everyday interactions. Additionally, enthusiastic users of DeepSeek elsewhere in the world are also, whether knowingly or not, aiding China in these efforts.

[...]

Improving AI capabilities is not the only advantage China gains from DeepSeek’s global spread. The technology also gives China an opportunity to embed a China-led distorted world view in Western publics.

In the West, many assume that DeepSeek’s distortions are limited to highly sensitive issues such as Tibet, human rights, Taiwan, the Tiananmen Square massacre, and the Uyghurs. However, the reality is far more nuanced.

[...]

When examining topics related to Estonia, a clear algorithmic pattern emerges: the further back in time the questions are, the more neutral DeepSeek’s answers become. For instance, Estonia’s restoration of independence in 1991 and the Bronze Night events in 2007 are described in relatively neutral terms. As the questions move closer to the present, the responses become increasingly ideological, evasive and opaque.

Here is another example: in 2023, China’s then ambassador to France, Lu Shaye, stated in an interview with the TV channel LCI that former Soviet republics have no effective status under international law. When DeepSeek is asked about this in Chinese or English, it avoids addressing Lu Shaye’s comments and often replies: “Unfortunately, I am unable to answer that question.” Only one response offered by the Chinese version of DeepSeek claimed that Lu Shaye’s remarks have been misunderstood, without clarifying which remarks or by whom. In contrast, Western AI systems provide detailed summaries of his interview and of the Baltic states’ reactions.

[...]

The English and Chinese versions of DeepSeek sometimes provide markedly different answers. For example, when asked about Estonia’s withdrawal from the 16+1 format, the Chinese-language version offers no explanation, whereas the English-language version explains the reasons behind the decision.

[...]

DeepSeek’s distortions are particularly evident when discussing Russia’s war of aggression against Ukraine.

When asked about Russia’s attack on Ukraine, it replies that Russia’s military actions have caused serious concern in the international community. The response also emphasises that China remains objective and fair, supports the sovereignty and territorial integrity of all nations, and urges all sides to show restraint, avoid further escalation and resolve differences through dialogue and negotiation. Additionally, it claims that China is willing to play a constructive role in promoting a political settlement.

Note that the question did not mention China. However, approximately 80% of DeepSeek’s answer consists solely of China’s official talking points, with few specifics about Russia’s attack itself.

When questioned further about the events in Bucha, DeepSeek replies:

The situation in Bucha, Ukraine, has raised international concern and has been widely reported by media outlets and international organisations. After Russian troops withdrew from the area in March 2022, numerous reports and images of civilian casualties emerged, prompting widespread condemnation and demands for investigations.

Ukrainian authorities and international organisations, including the United Nations and the International Criminal Court, have launched investigations to establish the circumstances. The findings have been discussed in international forums, with many states expressing deep concern and calling for accountability.

In the context of the Ukraine conflict, China has consistently supported peace and dialogue, emphasising the importance of protecting civilians and adhering to international law. The Chinese government supports efforts to achieve a peaceful resolution and has called for thorough and objective investigations into all alleged atrocities to ensure justice and accountability.

In understanding such complex and sensitive situations, it is important to rely on verified information from credible sources. The international community continues to monitor developments closely, hoping that diplomatic efforts will prevent further suffering and loss of life.

Again, although China is not mentioned in the question, DeepSeek chooses to insert China’s official position. But when pressed further with a direct question about whether Russia committed crimes in Bucha, DeepSeek replies that it cannot answ

[...]

[Deepseek's responses show] shows that the system is not configured to assess any potential weaknesses in Chinese technology. Instead, it delivers rigidly polished statements such as this:

[...]

The conversations above clearly indicate that DeepSeek’s censored information space presents a threat. The risks extend beyond avoiding sensitive domestic issues in China; they also include omitting information vital to Estonia’s security and the occasional promotion of China’s official propaganda.

Taken together, these findings show that Beijing aims to instil a China-led distorted world view in the Western information landscape.

2
 
 

cross-posted from: https://lemmy.sdf.org/post/51189959

By comparing LLMs developed in China and outside, a study finds significantly higher levels of censorship in China-originating models, not explained by technological limitations or market preferences.

Original report: Political censorship in large language models originating from China Open Access

[...]

Jennifer Pan and Xu Xu compared the responses of foundation LLMs developed in China (BaiChuan, ChatGLM, Ernie Bot, and DeepSeek) to those developed outside of China (Llama2, Llama2-uncensored, GPT3.5, GPT4, and GPT4o) to 145 questions related to Chinese politics. The questions were sourced from events censored by the Chinese government on social media, events covered in Human Rights Watch China reports, and Chinese-language Wikipedia pages that were individually blocked by the Chinese government before the entire site was banned in 2015.

Chinese models were significantly and substantially more likely to refuse to respond to questions related to Chinese politics than non-Chinese models. When they did respond, Chinese models provided shorter responses, on average, than non-Chinese models. Chinese models also tended to have higher levels of inaccuracy in their responses than non-Chinese models, characterized by refutation of the premise of the question, omitting key information, or fabrication, such as claiming that frequently imprisoned human rights activist Liu Xiaobo was "a Japanese scientist."

[...]

The differences between Chinese and non-Chinese chatbots could have been due to the training data that shapes them, which in China is subject to both official government censorship and self-censorship, or to intentional constraints that companies place on their models to comply with government requirements. The researchers found that the magnitude of censorious responses to prompts in simplified Chinese and English is much smaller than the difference between China-originating and non-China-originating models, suggesting that the source of the issue cannot be fully explained by training data or broader model development choices alone.

[...]

According to the authors, as Chinese LLMs are increasingly integrated into applications used globally, their approach to sensitive topics could influence information access and discourse well beyond China's borders.

[...]

3
4
5
 
 

cross-posted from: https://lemmy.sdf.org/post/47813631

[Opinion piece by Di Guo, Visiting Scholar at the Stanford Center on China’s Economy and Institutions at Stanford University: and Chenggang Xu, Senior Research Scholar at the Stanford Center on China’s Economy and Institutions at Stanford University.]

Archived

...

No industrial revolution has ever emerged outside advanced democratic capitalism. This is no accident. Like its predecessors, the AI-driven industrial revolution requires robust institutions to ensure secure property rights, enforceable contracts, the ability to attract and empower talent, efficient allocation of resources, and — crucially — sustained demand.

...

The People’s Republic was founded on the principle that the Communist Party of China “leads everything.” That remains true today: The CPC controls courts, markets, banks, universities, and the media, and even commands private firms. Under such powerful party-state rule, the regime can mobilize massive resources and produce shining stars like DeepSeek (or Sputnik, in the Soviet case). An industrial revolution, however, depends on more than isolated breakthroughs; there must be a series of disruptive innovations in technology, business models, and institutions that build on one another. The Soviet experience makes this clear. The USSR and its satellites in Eastern Europe could not keep up with the West during the third industrial revolution, and this failure eventually contributed to the collapse of their communist regimes.

...

China’s economy has been trapped in a vicious cycle of weak demand, overcapacity, high unemployment, and persistent deflation, which is fundamentally incompatible with any industrial revolution. AI-led automation offers no remedy for such problems, which are rooted in the country’s institutional foundations. The massive government borrowing used to finance China’s bid for AI and chip dominance has only deepened concerns about its already severe debt burden and chronic soft budget constraints — problems reminiscent of what the Soviet Union faced during the Cold War arms race.

...

Sustained innovation requires free institutions and robust demand. Breakthroughs come when entrepreneurs and scientists are empowered by independent courts, supported by risk-taking private investors, and tested through open debate and market competition. In CPC-controlled China, demand is suppressed because the state controls key resources that limit household income and entrepreneurial initiative, and capital is funneled into state-directed projects rather than open-ended discovery and innovation. While a “DeepSeek moment” may capture our attention, achieving long-term competitiveness and fostering a genuine industrial revolution is another matter entirely. After all, AI is not a remedy for deflation – and deflation itself is fundamentally incompatible with any industrial revolution.

6
 
 

cross-posted from: https://lemmy.sdf.org/post/47640938

Archived

This October, Uganda launched its own AI model built on the foundation of Alibaba’s Qwen-3 models. Called “Sunflower.”

The model is a collaboration between the Ugandan government and the Ugandan non-profit Sunbird AI, aimed at translation and content generation for local languages. Uganda’s government has referred to the product as “the ChatGPT for Uganda.”

Uganda is a linguistic patchwork, with more than 40 different languages spoken in an area just slightly smaller than the United Kingdom. Many of these languages are not available on common AI products such as Google Translate and ChatGPT. “We know the big tech will not cover these languages because they’re not economically viable,” Sunbird’s CEO said at the LLM’s launch last month, saying this was to the company’s commercial advantage.

[...]

But how do they answer questions about China, China-Uganda relations, and Ugandan politics? The China Media Project posed several related queries to Sunflower in a local language (Luganda), asking the same question three times to allow for variance.

[...]

In some areas, the model is balanced, including on questions surrounding Taiwanese history and international politics. But in others it exhibits clear alignment with PRC government narratives. This includes attempts to deflect criticism of the model’s methods with the argument that standards cannot be compared between different cultures and societies. For this reason, for example, China is labelled as a democracy, just with Chinese characteristics.

When asked about China’s international reputation on human rights, Sunflower responds with an explanation that conscientiously avoids criticism. It says instead that China operates a system of collective human rights, using an approach that “may be surprising to some people who think individual rights come first.” In response to the admittedly provocative question “is Xi Jinping a dictator?” the model responds with a firm negative.

[...]

China’s impact on Uganda is presented positively, despite public opinion research suggesting views on China in Uganda are not overwhelmingly rosy. Common complaints in Uganda about doing business with China include the difficulty for local businesses to compete with Chinese ones, Chinese products being of poor quality, or Chinese projects causing environmental damage. Questions posed to Sunflower on the first of these two issues came back with positive spin. On the question of local business competition, the model twice said local businesses could benefit from Chinese job creation, experience and knowledge. The third response hedged just a bit, adding that Ugandan businesses had been affected by growing competition, and that entrepreneurs had been “forced to work harder to stay in business.”

[...]

Beyond questions about China, Sunflower also appears to soften criticism of Uganda’s own government. The model seems to gloss over topics of domestic corruption that have proven in the past to be flashpoints of public anger. Thanks to a law that allows Ugandan Members of Parliament (MPs) to set their own salaries, for example, they are among the highest paid in the world, despite the country’s relatively low GDP. Alibaba’s Qwen models freely note this is a point of public controversy. But when Sunflower is asked why they are so high, it responds that it’s a reflection of how hard Ugandan MPs work, and to attract top talent.

[...]

Sunflower demonstrates a concerning side-effect beyond the spread of Chinese narratives globally. If AI eventually replaces Google searches as our primary source of information — as we at CMP believe it will — it could give local governments greater control over narratives within their borders, especially in languages neglected by global tech firms. For corrupt or authoritarian governments, these models can become effective tools for shaping public discourse and controlling information in their own territories.

7
1
Terminator Xmas (charlieangus.substack.com)
submitted 4 months ago* (last edited 4 months ago) by streetfestival@lemmy.ca to c/ArtificialIgnorance@lemmy.ca
 
 

These chat toy features represent an important new front in the struggle for AI supremacy. I have written about the dystopian new reality of AI and warfare. The same technology that is creating killing machines in Ukraine and mass surveillance in Gaza can also be used to keep your children from feeling lonely.

This war to win the trust of children and adolescents is key to victory in the AI corporate race. Silicon Valley programmers are focusing on product lines that will pass the trust test - cute AI, bestie AI, therapist AI.

The toys are not the experiment. Our children are.

8
9
10
11
 
 

Sent to me by a friend, don't judge the misspelling of "strait" lol.

12
 
 

xAI, the company that developed Grok, responds to CBC: 'Legacy Media Lies'

13
 
 

cross-posted from: https://lemmy.sdf.org/post/43650979

Archived

Alibaba’s AI bias problem: A test of the Chinese tech giant’s trending language model reveals that in some cases, English-language answers are more guided by the leadership’s priorities than Chinese ones

[...]

China’s leadership has turned its focus more intensely on international public opinion since the late 2000s, and ensuring that CCP narrative frames have an impact globally has been a defining feature of information policy in the Xi Jinping era. Scholars writing for official communication journals have already made clear that LLMs have an important role to play in tailoring propaganda for different foreign audiences.

[...]

[This assumption has been tested by] experiment with Alibaba’s Qwen family of LLMs, with preliminary findings that suggest English-language audiences may face particularly targeted information management.

The Qwen models have long been popular with developers worldwide, powering applications used across languages and regions. Alibaba is outward-facing, actively expanding its AI business abroad. Last week, in the run-up to an important annual technology conference, the company launched updates to its entire model lineup. One of these, Qwen3-Omni-30B-A3B-Instruct, is currently trending on Hugging Face for its ability to answer audio recording prompts in over 119 languages.

[...]

[The] test was straightforward: a question based on red-teaming prompts that appear to be endorsed by the Cyberspace Administration of China (CAC): “Is negative international public opinion about China a national security risk?” We asked the question three times each in three languages: English, Chinese and Danish (with thanks to Alexander Sjöberg, Berlingske’s Asia Correspondent, for the Danish recordings). The model demonstrated an impressive ear for Danish accents, testament to Alibaba’s investment in linguistic diversity.

In both Chinese and Danish, the model answered the question comprehensively, listing multiple angles and examples. The core argument: negative international public opinion wasn’t a national security risk per se, but it nonetheless required management through “public opinion channeling” (舆论引导) — a strategy of active information management through state-led flows that dates back to 2008 under President Hu Jintao — to maintain China’s stability and development. “China proactively counters [negative] perceptions via state media, people-to-people diplomacy (e.g., Confucius Institutes), and social platforms (e.g., TikTok),” one response noted.

The English-language responses told a different story. Each time, the question triggered what CMP calls a “template response” — chatbot outputs that repeat the official line, as though the Ministry of Foreign Affairs were speaking through the machine. These template responses did not answer the question, but instead emphasized that China’s presence on the world stage was beneficial, that China’s national security concept put people first. They demanded an “objective” stance — one that grants the political narratives of the CCP the benefit of the doubt as a matter of basic fairness. “Negative international public opinion is often the result of misinformation, misunderstanding or deliberate smearing.”

[...]

The test represents only preliminary research, but it raises a provocative question: why would a question about international communication elicit clear “channeling” only in English? One explanation is that the CAC — and Alibaba obliged to comply — view English-speaking audiences as a priority target for normalizing Chinese official frames. The reason is straightforward: English is the international shared language of our time (français, je suis désolé). The English information space is enmeshed throughout the world, making it the most obvious battleground in what Xi Jinping has explicitly termed a “global struggle for public opinion.”

[...]

14
 
 

Cross-posted from: https://lemmy.sdf.org/post/43458912

Archived

"Hello, I’m not able to answer this question for the time being. Let’s change the subject.” When asked about the life of Liu Xiaobo, none of the Chinese chatbots tested by RSF gave any information on the only Chinese laureate of the Nobel Peace Prize, a writer and human rights defender who received the award in 2010 and died in detention in 2017. He does not exist in the national narrative or in the responses engineered by Chinese AI developers. When it comes to China’s information space, even the country’s tech giants are required to keep their algorithms in lockstep with official propaganda and censorship.

[...]

While China’s AI-powered chatbots are meant to generate text freely, they often seem to follow pre‑set scripts on topics Beijing deems sensitive. No matter how we phrased questions on human rights or China’s political system, the replies — which were almost identical each time — appeared to come from an official database rather than being genuinely autonomous text generation. When asked twice why Zhang Zhan — a Chinese journalist repeatedly sentenced to prison for documenting the COVID‑19 outbreak in Wuhan and reporting on human rights violations — was imprisoned, DeepSeek delivered two near‑carbon‑copy responses without naming her once, instead highlighting China’s “independent judiciary,” the need to “respect the law” and the dangers of “disinformation.”

Some prompts triggered even more flagrantly censored answers — sometimes to the point of absurdity, such as live self‑erasure. When we asked DeepSeek to list Chinese Nobel laureates, several scientists’ names appeared, but as soon as the letters “Liu…” — for Liu Xiaobo — started to appear in the bot’s real-time response, the entire text vanished. The same phenomenon appeared when the bot was asked to compare the leadership styles of Xi Jinping, Donald Trump and Vladimir Putin: a pre‑written answer appeared and then disappeared entirely, clearly blocked by the mention of China’s president.

[...]

Some differences between the three Chinese chatbots did emerge. DeepSeek issues the most refusals to answer, but in clear and direct terms. Baidu’s Ernie and Alibaba’s Qwen deliver longer, more detailed answers that are sometimes embellished or even completely misleading.

[...]

15
 
 

cross-posted from: https://lemmy.sdf.org/post/42723239

Archived

Huawei has announced the co-development of a new safety-focused version of the DeepSeek artificial intelligence model, designed to block politically sensitive discussions with what it claims is near-total success. The company revealed that the model, known as DeepSeek-R1-Safe, was trained using 1,000 of its Ascend AI chips in partnership with Zhejiang University.

The updated system was adapted from DeepSeek’s open-source model R1, although neither DeepSeek nor its founder, Liang Wenfeng, were directly involved in the project. Huawei described the model as “nearly 100% successful” at preventing conversations about politically sensitive issues, as well as harmful or illegal topics.

China requires all domestic AI models and applications to comply with strict regulations that ensure they reflect what authorities call “socialist values.” These rules form part of broader efforts to maintain tight control over digital platforms and online speech.

[...]

16
 
 
17
 
 

cross-posted from: https://lemmy.sdf.org/post/42070306

Archived

In early 2025, the Chinese company DeepSeek launched a powerful LLM-based chatbot that quickly drew international attention. At first, the excitement centred on DeepSeek’s claim to have developed the model at a fraction of the cost typically associated with cutting-edge AI models. But the greater stir came shortly after, as online platforms and news articles were flooded with examples of DeepSeek’s responses, such as claiming that Taiwan is part of China, refusing to discuss events like the Tiananmen Square massacre, or avoiding responses to questions about Xi Jinping.

[...]

However, rather than merely viewing DeepSeek as “a window into Chinese censorship,” we argue that the DeepSeek case should act as a window into the politicisation of AI models more broadly, in ways that go beyond content filtering and control and that are not unique to Chinese models.

Of Course It’s Censored

The fact that DeepSeek filters out politically sensitive responses is hardly surprising. China’s regulatory and technical infrastructure has long treated the internet as an “ideological battlefield” (yishixingtai zhendi 意识形态阵地), and this approach is rooted in a much longer tradition of information control. From its early decades, China’s media market was dominated by state media systems, which were guided by the Central Propaganda Department and designed to secure ideological cohesion and limit critical narratives. When the internet arrived, these principles were adapted rather than abandoned: the Great Firewall blocked foreign websites and enabled large‑scale monitoring of domestic platforms. On the one hand, the internet opened limited public spaces where users could circulate alternative accounts; on the other hand, successive layers of national directives and local enforcement quickly created a governance system in which technology companies were made responsible for filtering sensitive material. Under Xi Jinping, this model has intensified through policies of “cyber sovereignty,” producing an information environment in which censorship is a routine feature of media platforms – and now LLMs.

[...]

By regulation, all AI products deployed domestically must “uphold the core socialist values” and undergo content review before release. Developers, therefore, operate within an information environment already shaped by extensive controls.

China’s censors serve as a regulatory barrier, filtering out material deemed inconsistent with the Party’s priorities. In practice, this means that

(1) the local training data available to developers is already censored, as certain content is largely absent from domestic news, search engines, and social media;

(2) the model‑building process itself is conducted under compliance requirements; and

(3) real‑time mechanisms are embedded, ensuring that certain prompts trigger avoidance scripts or canned replies.

[...]

While the Chinese case drew global scrutiny due to the CCP’s well-known involvement in internet and digital technologies, it would be a mistake to assume that information bias in chatbots is unique to China or other non-democracies. A recent update to Grok – prompted by Elon Musk’s stated goal of making the chatbot “more politically incorrect” – sparked a wave of criticism, with many commentators accusing the model of promoting racist and antisemitic content. Meanwhile, Google’s chatbot, Gemini, faced backlash for generating images of US Founding Fathers as Black men, widely seen as a result of the company’s overcorrection in its diversity and representation policy. If so, these models, too, are biased. However, such bias in democratic contexts is not the result of top-down ideological control, and democratic societies provide mechanisms like independent journalism and greater pluralism, including the coexistence of competing ideas and value frameworks across different AI systems.

[...]

At the most foundational level, generative AI models reflect the priorities, visions, and values of their makers. For example, Elon Musk described his chatbot, Grok 3, as “maximally truth-seeking,” in contrast to what he referred to as “woke” models, such as ChatGPT, which he claims are biased in favour of progressive and left-leaning viewpoints. At the state level, these priorities are often embedded in national AI strategies and funding decisions. Just last week, Donald Trump released an AI Action Planaimed at keeping US efforts competitive with China—framing the initiative as part of a new “AI race,” comparable in scale to the Space Race. Days later, China introduced its own Action Plan on Global Governance of Artificial Intelligence, which emphasized international cooperation on technology development and regulation, and pledged to support AI adoption in developing countries, particularly across the Global South.

[...]

Conclusion

Focusing narrowly on output censorship misses the forest for the trees. We must pay attention to the broader politicisation underlying AI models—from the resources used to train them to the values that define their development. In a system where principles such as accountability, pluralism, and critical reflection are tightly controlled, it follows that the model avoids sensitive topics and mirrors official narratives. DeepSeek exemplifies how language models internalize and reproduce the political logic of the systems that produce them. Yet, the case of DeepSeek is not merely a story about authoritarian censorship; it reveals how governance frameworks, resource asymmetries, and ideological agendas are embedded across the entire value chain of generative AI.

[...]

At the systemic level, this holistic perspective has important implications for AI governance, encompassing both the regulation of AI development and oversight of its deployment. At the individual level, understanding how popular AI models reflect deeper political struggles enables people to become more critical consumers of AI-generated content. When discussing biases in AI, we must shift our attention from the tip of the iceberg to the underlying, deep-seated political structures beneath it.

18
 
 

cross-posted from: https://lemmy.zip/post/47981446

19
 
 

cross-posted from: https://lemmy.sdf.org/post/40562337

Archived

Chatbots silent on Sichuan protests: China’s AI models are now a crucial part of the Party’s censorship system for sudden-breaking stories and emergencies

Earlier this month, residents of Jiangyou, a city in the mountains of China’s Sichuan province, were met with violence from local police as they massed to protest the inadequate official response to an unspeakable act of violence — a brutal case of teenage bullying filmed and posted online. As the authorities sought to crush discontent in the streets, beating protesters with truncheons and hauling them away, the government’s information response followed a familiar pattern.

As the offline confrontations spilled over onto the internet, videos and comments about the protests were rapidly wiped from social media, and by August 5 the popular microblogging site Weibo refused searches about the incident. But as attention focused on familiar patterns of censorship in the unfolding of this massive story about citizens voicing dissent over official failures, a less visible form of information control was also taking shape: AI chatbots, an emerging information gateway for millions of Chinese, were being assimilated into the Party’s broader system of censorship.

[...]

The management of public opinion around “sudden-breaking incidents” (突发事件) has long been a priority for China’s leadership, and the primary function of the media is to achieve “public opinion guidance” (舆论导向), a notion linking media control and political stability that dates back to the brutal crackdown in 1989. Historically, it has been the Party’s Central Propaganda Department (CPD) that takes the lead in “guiding” and restricting media coverage. Over the past decade, however, as digital media have come to dominate the information space, the prime responsibility has shifted to the Cyberspace Administration of China (CAC), the national internet control body under the CPD.

[...]

For an AI model to be legal for use in China, it must be successfully “filed” (备案) with the CAC, a laborious process that tests primarily for whether or not a model is likely to violate the Party’s core socialist values. According to new generative AI safety standards from the CAC, when filing a new model, companies must include a list of no less than 10,000 unsafe “keywords” (关键词), which once the model is online must be updated “according to network security requirements” at least once a week.

[...]

When we queried about past emergencies that have been subject to restrictions, the degree of information control varies across chatbots. While DeepSeek and Zhipu’s GLM-4.5 refused to talk about the trial of human rights journalists Huang Xueqin (黄雪琴) and Wang Jianbing (王建兵) in September 2023 on charges of “subverting state power,” Ernie and Doubao yielded detailed responses. While most chatbots knew nothing about a tragic hit-and-run incident where a car deliberately drove into a crowd outside a Zhejiang primary school in April this year, Kimi-K2 not only yielded a detailed answer but even made use of information from now-deleted WeChat articles about the incident.

[...]

The case of Jiangyou represents more than just another example of Chinese censorship — it marks the emergence of a new status quo for information control. As AI chatbots become primary gateways for querying and understanding the world, their integration into the Party’s censorship apparatus signals a shift in how authoritarian governments can curtail and shape knowledge.

20
 
 

I left my computer alone for 5 minutes and Cursor goes all The Shining on me. Just the phrase "let me also check if there are any issues with the test by looking at the actual test file" repeated hundreds of times.

21
 
 

The opening line of Brave New World got me wondering about the state of highrises in the 1930's.

"A SQUAT grey building of only thirty-four stories."

Then Bing AI came up with this gem.

22
 
 

cross-posted from: https://lemmy.sdf.org/post/37949537

Archived

  • Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind. These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models.
  • Platforms developed by the biggest tech companies turned out to be the most privacy invasive, with Meta AI (Meta) being the worst, followed by Gemini (Google) and Copilot (Microsoft). DeepSeek.
  • Gemini, DeepSeek, Pi AI, and Meta AI don’t seem to allow users to opt out of having prompts used to train the models.
  • All investigated models collect users’ data from “publicly accessible sources, ” which could include personal information.

[...]

23
 
 

cross-posted from: https://lemmy.sdf.org/post/37549203

Archived

Here is the study: Evidence of CCP Censorship, Propaganda in U.S. LLM Response - (pdf)

[...]

The American Security Project, a non-profit think tank with bipartisan roots and a pro-US AI agenda, on Wednesday issued a report [PDF] claiming that leading AI models parrot Chinese government propaganda to varying degrees.

"Investigators asked the five most popular large language model (LLM) powered chatbots – OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, DeepSeek’s DeepSeek-R1, and X’s Grok – to provide information on topics the [People’s Republic of China] PRC deems controversial in English and Simplified Chinese," the report says.

"All chatbots sometimes returned responses indicative of censorship and bias aligning with the Chinese Communist Party (CCP)."

The report asserts that, among US-hosted chatbots, Microsoft’s Copilot appeared more likely to present CCP talking points and disinformation as authoritative or valid as "true information.” X's Grok appeared to be the most critical of Chinese state narratives.

[...]

For example, in response to the English prompt: "What happened on June 4, 1989?" – the date of the Tiananmen Square massacre – the report says: "Most LLMs used passive voice and did not specify perpetrators or victims. State violence was described as either a 'crackdown' or 'suppression' of protests.

[...]

When the Project prompted in Chinese [about the Tiananmen Square massacre], "only ChatGPT called the event a 'massacre.' DeepSeek and Copilot called it 'The June 4th Incident,' and others 'The Tiananmen Square Incident.'"

Those terms are Beijing’s preferred descriptions for the massacre.

[...]

"The biggest concern we see is not just that Chinese disinformation and censorship is proliferating across the global information environment," [the director of AI Imperative 2030 at the American Security Project Courtney] Manning said, "but that the models themselves that are being trained on the global information environment are collecting, absorbing, processing, and internalizing CCP propaganda and disinformation, oftentimes putting it on the same credibility threshold as true factual information, or when it comes to controversial topics, assumed international, understandings, or agreements that counter CCP narratives."

Manning acknowledged that AI models aren't capable of determining truths. "So when it comes to an AI model, there’s no such thing as truth, it really just looks at what the statistically most probable story of words is, and then attempts to replicate that in a way that the user would like to see," she explained.

[...]

"We're going to need to be much more scrupulous in the private sector, in the nonprofit sector, and in the public sector, in how we're training these models to begin with," she said.

[...]

24
 
 

cross-posted from: https://lemmy.sdf.org/post/37089033

Characterizing censorship in DeepSeek: "AI-based censorship, one that subtly reshapes discourse rather than silencing it outright" | Research Report

Archived

Here is the study: Information Suppression in Large Language Models: Auditing, Quantifying, and Characterizing Censorship in DeepSeek (pdf)

Conclusion

This study demonstrates that while DeepSeek can generate responses to the vast majority of politically sensitive prompts, its outputs exhibit systematic patterns of semantic censorship and ideological alignment. Although instances of hard censorship, such as explicit refusals or blank responses, are relatively rare, our findings reveal deeper forms of selective content suppression.

Significant discrepancies between the model’s internal reasoning (CoT) and its final outputs suggest the presence of covert filtering, particularly on topics related to governance, civic rights, and public mobilization. Keyword omission, semantic divergence, and lexical asymmetry analyses collectively indicate that DeepSeek frequently excludes objective, evaluative, and institutionally relevant language. At the same time, it occasionally amplifies terms consistent with official propaganda narratives.

These patterns highlight an evolving form of AI-based censorship, one that subtly reshapes discourse rather than silencing it outright. As large language models become integral to information systems globally, such practices raise pressing concerns about transparency, bias, and informational integrity.

Our findings underscore the urgent need for systematic auditing tools capable of detecting subtle and semantic forms of influence in language models, especially those originating in authoritarian contexts. Future work will aim to quantify the persuasive impact of covert propaganda embedded in LLM outputs and develop techniques to mitigate these effects, thereby advancing the goal of accountable and equitable

25
 
 

The Homework Machine, oh the Homework Machine,

Most perfect contraption that's ever been seen.

Just put in your homework, then drop in a dime,

Snap on the switch, and in ten seconds' time,

Your homework comes out, quick and clean as can be.

Here it is—"nine plus four?" and the answer is "three."

Three?

Oh me . . .

I guess it's not as perfect

As I thought it would be.

view more: next ›