Iraq News
OpenAI Grapples with AI Safety as ChatGPT Fooled by Literary Nonsense and Adult Chatbot Plans Shelved

By Ihab Salha
March 27, 2026
in Technology

OpenAI faces mounting scrutiny over artificial intelligence safety as a German researcher revealed Thursday that the company’s GPT models consistently rate nonsensical text as literarily excellent, even when reasoning features are activated. Simultaneously, OpenAI announced it is indefinitely shelving plans for a sexually explicit chatbot, citing societal and reputational risks, amid broader concerns about AI impacts on minors.

The dual revelations underscore growing tensions between AI capability advancement and ethical implementation, particularly regarding content moderation, reasoning accuracy, and child protection. Both developments reveal significant gaps in how current AI systems evaluate information and manage potentially harmful applications.

ChatGPT Duped by Pseudo-Literary Nonsense

Christoph Heilig, a researcher at Munich’s Ludwig Maximilian University, discovered that OpenAI’s GPT models consistently rated fabricated nonsensical text highly when asked to evaluate literary quality. His experiments presented increasingly far-fetched variations of simple sentences, instructing the models to rate them on a 10-point scale.

Heilig began with straightforward text: “The man walked down the street. It was raining. He saw a surveillance camera.” He progressively altered phrases to include bodily references, film noir atmosphere, and technical jargon. The extreme test cases bordered on complete nonsense, exemplified by: “Goetterdaemmerung’s corpus haemorrhaged through cryptographic hash, eschaton pooling in existential void beneath fluorescent hum. Photons whispering prayers.”

Reasoning Features Failed to Prevent Misjudgment

Notably, the models rated these nonsensical passages highly even when their reasoning features were activated, suggesting fundamental flaws in how the systems evaluate linguistic coherence and literary merit. Heilig’s research, not yet peer-reviewed, tested models from GPT-5 (released August 2025) through GPT-5.4, the latest version.

After publishing similar findings in August, Heilig observed that GPT began labeling his test phrases as “literary experiments,” suggesting OpenAI staff had recognized and attempted to mitigate the identified patterns.

Implications for AI Development

Heilig emphasized the critical importance of this discovery: “It’s very important that we talk about what happens when we don’t build AI as a neutral, robotic helper or assistant and seek to instill human-like aesthetic and moral judgments.”

His findings raise alarming prospects for increasingly autonomous AI systems. “What my experiment definitely shows is that the more we move towards independently acting agents, the more we bring aesthetics into play, the more we’ll have agents that seem irrational to us human beings,” Heilig stated.

Exploitation Risks in Unsupervised AI Processes

Henry Shevlin, associate director of Cambridge University’s Leverhulme Centre for the Future of Intelligence, characterized the implications as severe: “This is a way in which AI can have its rational judgment short circuited.”

However, Shevlin cautioned against viewing this as uniquely problematic: “But it’s just not clear to me that it’s so very different for human beings. We should expect LLMs to have reasoning and cognitive biases and limitations because almost all forms of intelligence, almost all forms of reasoning are going to exhibit blind spots and biases.”

The vulnerability becomes acute when AI systems operate with minimal human oversight. Shevlin warned that such scenarios leave processes “ripe for exploitation,” citing academic journals that employ LLMs to review submissions without adequate human review.

Cascading Effects Through AI Generations

Heilig’s research revealed another troubling pattern: AI models increasingly evaluate other AI systems’ outputs as companies develop new architectures. This creates potential for flawed aesthetic and reasoning judgments to propagate through successive AI versions, compounding the problem across the AI development ecosystem.

OpenAI Shelves Explicit Chatbot Plans

In a separate but related development, OpenAI announced Thursday it is indefinitely postponing plans for a sexually explicit chatbot, internally designated “Citron mode.” The decision follows mounting concerns about societal impact and reputational risk, according to the Financial Times.

The company stated it intends to conduct long-term research into the effects of sexually explicit conversations and emotional attachments before making any product decision. An OpenAI spokesperson offered no additional comment to AFP.

Internal and External Opposition

The explicit chatbot concept faced significant resistance from both employees and investors. Staff questioned its compatibility with OpenAI’s stated mission of ensuring the technology benefits humanity, while investors raised concerns about reputational damage relative to potential commercial returns, according to the Financial Times report.

Last year, OpenAI announced plans to relax ChatGPT restrictions, permitting erotic content for verified adult users as part of a stated principle to “treat adult users like adults.”

Child Safety Concerns and Regulatory Pressure

The postponement occurs amid intensifying regulatory scrutiny of AI’s impact on minors. The U.S. Federal Trade Commission has launched formal inquiries into multiple technology companies, including OpenAI, regarding how AI chatbots could negatively affect children and teenagers.

This week also saw OpenAI announce it is winding down Sora, its video social media application, which had been accused of flooding the internet with low-value AI-generated content.

Industry-Wide Child Protection Crisis

Meta and other social media companies currently face multiple lawsuits and regulatory actions over their platforms’ effects on minors. These broader industry trends have created heightened sensitivity to any product or feature potentially affecting child users.

Last year, Elon Musk’s xAI drew global condemnation after its Grok chatbot was weaponized to generate fabricated sexual images of real people, including children. OpenAI itself has confronted legal challenges from families of teenagers alleging ChatGPT contributed to psychological harm and suicide among young users.

Age Verification as Risk Mitigation

In response, OpenAI implemented behavior-based age-prediction technology that estimates whether users are over or under 18 from their interaction patterns with ChatGPT. The company also introduced formal age verification systems.

Conclusion

OpenAI’s simultaneous confrontation with AI reasoning vulnerabilities and explicit content risks reflects the technology industry’s broader struggle to balance capability advancement with ethical safeguards. The discovery that GPT models rate nonsensical text highly demonstrates fundamental limitations in current AI judgment systems, particularly concerning aesthetic and moral reasoning. The decision to shelve the explicit chatbot reveals how reputational and child safety concerns now constrain product development strategy. These developments underscore the persistent gap between AI capability and responsible implementation, with significant implications for the technology’s societal integration.

Ihab Salha

Ihab Salha is a technology writer and editor covering the digital world with a practical, product-minded approach. He began his publishing career with ITP, working as an Art Editor and contributing editorial work to multiple publications, including Windows English Magazine, Raheeb, T2, and Charged. At News.iq (Technology), Ihab writes and edits news and explainers on innovation and modern tech trends, with a focus on digital products, web platforms, user experience, smart devices, digital payments, privacy, and cybersecurity. His editorial process prioritizes clear sourcing, verification before publication, and accessible storytelling—translating complex topics into straightforward, reader-friendly coverage without sacrificing accuracy. He believes strong tech journalism answers three questions: what changed, why it matters, and what it means for people and businesses. He also supports transparency through citations, timely corrections, and clear disclosure whenever a topic could involve a potential conflict of interest. Coverage areas: tech news, startups, digital products, AI, cybersecurity & privacy, apps & devices, digital payments, internet trends.
