News

Zurich University AI Researchers Ran Secret Test On Reddit Users

The undisclosed experiment prompted backlash from moderators and a firm response from Reddit’s legal team.


Researchers from the University of Zurich quietly ran a months-long experiment on Reddit’s r/changemyview (CMV), using AI-generated comments to test how persuasive large language models (LLMs) could be. The subreddit, home to 3.8 million users, invites debate on controversial opinions — but moderators say the AI replies crossed ethical lines.

“The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users,” moderators wrote, calling the study “psychological manipulation”.

The experiment involved AI-generated responses written under fake identities — including a trauma counselor, a sexual assault survivor, and a “Black man opposed to Black Lives Matter”. These posts were crafted to sound human and emotionally resonant. Some remain accessible via an archive maintained by 404 Media.

The researchers went further, tailoring replies to individual users based on personal details inferred from their Reddit histories, such as age, gender, and political leaning, with the profiling itself performed by a separate AI model.

Moderators say this violated subreddit rules, including bans on undisclosed AI use and bots. They’ve filed a formal complaint and asked the university to halt publication of the research.

Reddit responded strongly. Chief Legal Officer Ben Lee called the study “deeply wrong on both a moral and legal level”. Reddit has since banned the accounts involved and says it is strengthening its detection of fake and AI-generated content.


“We have banned all accounts associated with the University of Zurich research effort,” Lee said. “We’re also working with the moderation team to ensure all related content has been removed”.

The researchers maintain the study was approved by their university’s ethics board and claim it offers valuable insight into how AI could be misused at scale. “We believe the potential benefits of this research substantially outweigh its risks,” they wrote in a Reddit comment.

But CMV moderators pushed back. “People do not come here to discuss their views with AI or to be experimented upon,” they wrote. “People who visit our sub deserve a space free from this type of intrusion”.


News

Nano Banana 2 Arrives In MENA For Google Gemini Users

Google brings its latest image model to Gemini and Search, adding 4K output and tighter text control for regional users.


Google has opened access to Nano Banana 2 across the Middle East and North Africa, pushing its newest image model into everyday tools rather than keeping it inside the exclusive (and expensive) Pro tier.

The rollout spans the Google Gemini desktop and mobile apps, and extends to Google Search through Lens and AI Mode. Developers can also test it in preview via AI Studio and the Gemini API.

Nano Banana 2 runs on Gemini Flash, Google’s fast inference layer. The focus is speed, but also control. Users can export visuals from 512px up to 4K, adjusting aspect ratios for everything from vertical social posts to widescreen displays.

The model maintains character likeness across up to five figures and preserves fidelity for as many as 14 objects within a single workflow. This enables visual continuity across scenes, iterations, or edits — supporting projects like short films, storyboards, and multi-scene narratives. Text rendering has also been improved, delivering legible typography in mockups and greeting cards, with built-in translation and localization directly within images.


Under the hood, the system taps Gemini’s broader knowledge base and pulls in real-time information and imagery from web search to render specific subjects more accurately. Lighting and fine detail have been upgraded, without slowing output.

By embedding the model inside Gemini and Search, Google is normalizing advanced image generation for a mass audience. In MENA, where startups and marketing teams are leaning heavily on AI to scale content across languages and borders, that shift lands at a practical moment.

The move also folds creative tooling deeper into search itself, so that image generation is no longer a separate workflow. It now sits right next to the query box.
