Dubai Police Use Futuristic Technology To Read Murder Suspect’s Mind

A recently solved murder case in Dubai shows that science fiction movies have become a reality. Instead of traditional methods, the Dubai Police solved the case using a new technology developed by Brainwave Science, Inc., which makes it possible to literally read the minds of crime suspects.

This technology is called iCognative, but those familiar with it often call it “memory print” or “brain fingerprinting.” The science behind it is fairly easy to understand. When the human brain recognizes a known object, image, or piece of information, it involuntarily emits the so-called P300 wave.

The P300 wave is an event-related brain potential that can be measured using electroencephalography (EEG), and that’s exactly what iCognative does.
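To make the idea concrete, here is a minimal, purely illustrative Python sketch — it is not Brainwave Science's actual system or data. It simulates EEG epochs recorded after crime-relevant "probe" images and after irrelevant ones, averages the trials, and compares amplitude in the typical P300 latency window. All signal shapes, amplitudes, and the detection threshold are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)  # one-second epoch after each stimulus

def simulate_epoch(recognized: bool) -> np.ndarray:
    """Simulate one EEG epoch; recognized stimuli get an extra P300-like bump."""
    noise = rng.normal(0, 5.0, t.size)  # background EEG activity (microvolts)
    p300 = 10.0 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2)) if recognized else 0.0
    return noise + p300

# 40 trials each of "probe" (crime-relevant) and "irrelevant" images
probe_epochs = np.array([simulate_epoch(True) for _ in range(40)])
irrelevant_epochs = np.array([simulate_epoch(False) for _ in range(40)])

# Averaging across trials suppresses random noise, leaving the event-related potential
window = (t >= 0.3) & (t <= 0.6)  # typical P300 latency window
probe_amp = probe_epochs.mean(axis=0)[window].mean()
irrelevant_amp = irrelevant_epochs.mean(axis=0)[window].mean()

print(f"probe window amplitude:      {probe_amp:.2f} µV")
print(f"irrelevant window amplitude: {irrelevant_amp:.2f} µV")
print("recognition response detected" if probe_amp - irrelevant_amp > 2.0
      else "no clear response")
```

The point of the toy example is simply that a larger averaged response to probe images than to irrelevant ones suggests the brain recognized the crime-related material.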

“We used the technology in a murder at a warehouse. Experts showed [the workers] pictures related to the crime, which only the person who committed it would know,” said Lt Colonel Mohammad Al Hammadi, Director of Criminology for Dubai Police. “After the session, the [brain mapping] device helped identify the main suspect who then admitted to having committed the murder.”

Lt Colonel Mohammad Al Hammadi confirmed that the Dubai Police will continue using iCognative to solve future crimes. Law enforcement agencies around the globe are also trialing the technology, and some, such as India's police, have been using it for years.

Also Read: Hyperloop Video Provides A Peek At The Future Of Transportation

The technique for the detection of concealed information with event-related brain potentials was pioneered by American neuroscientist Lawrence A. Farwell, who described its potential for lie detection in his 2012 research paper.

If you would like to see a real convicted murderer, Steven Avery, undergo brain fingerprinting by Lawrence A. Farwell, you can watch the second season of Netflix's "Making a Murderer". If this isn't a good use of science fiction, then I don't know what is.

Google Releases Veo 2 AI Video Tool To MENA Users

The state-of-the-art video generation model is now available in Gemini, offering realistic AI-generated videos with better physics, motion, and detail.

Starting today, users of Gemini Advanced in the MENA region — and globally — can tap into Veo 2, Google’s next-generation video model.

Originally unveiled in 2024, Veo 2 is now fully integrated into Gemini and supports multiple languages, including Arabic and English. The rollout puts Google's most advanced video AI directly into the hands of everyday users.

Veo 2 builds on the foundations of its predecessor with a more sophisticated understanding of the physical world. It’s designed to produce high-fidelity video content with cinematic detail, realistic motion, and greater visual consistency across a wide range of subjects and styles. Whether recreating natural landscapes, human interactions, or stylized environments, the model is capable of interpreting and translating written prompts into eight-second 720p videos that feel almost handcrafted.

Users can generate content directly through the Gemini platform — either via the web or mobile apps. The experience is pretty straightforward: users enter a text-based prompt, and Veo 2 returns a video in 16:9 landscape format, delivered as an MP4 file. These aren’t just generic clips — they can reflect creative, abstract, or highly specific scenarios, making the tool especially useful for content creators, marketers, or anyone experimenting with visual storytelling.
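Beyond the Gemini app, Google has also been making Veo 2 available to developers through the Gemini API. The snippet below is a rough sketch of that prompt-to-MP4 flow using the google-genai Python SDK; the model identifier, config fields, and polling pattern are assumptions based on Google's published examples and may differ from the current API.

```python
# Rough sketch of prompt-to-MP4 generation with the google-genai SDK.
# The model name, config fields, and polling pattern are assumptions and may
# differ from the current API -- check Google's documentation before relying on this.
import time

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

operation = client.models.generate_videos(
    model="veo-2.0-generate-001",              # assumed Veo 2 model identifier
    prompt="A falcon gliding over Dubai Marina at golden hour, cinematic",
    config=types.GenerateVideosConfig(
        aspect_ratio="16:9",                   # matches the landscape output described above
        number_of_videos=1,
    ),
)

# Video generation is asynchronous, so poll the long-running operation.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save each generated clip as an MP4 file.
for n, generated in enumerate(operation.response.generated_videos):
    client.files.download(file=generated.video)
    generated.video.save(f"veo2_clip_{n}.mp4")
```

In the Gemini app itself, none of this is needed: you type a prompt and receive the finished MP4 directly.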

Also Read: Getting Started With Google Gemini: A Beginner’s Guide

To ensure transparency, each video is embedded with SynthID, a digital watermark developed by Google DeepMind. The watermark is invisible to the human eye but persists across editing, compression, and sharing. It identifies the video as AI-generated, addressing concerns around misinformation and media authenticity.

While Veo 2 is still in its early phases of public rollout, the technology is part of a broader push by Google to democratize advanced AI tools. With text-to-image, code generation, and now video creation integrated into Gemini, Google is positioning the platform as a full-spectrum creative assistant.

Access to Veo 2 starts today and will continue expanding in the coming weeks. Interested users can try it out at gemini.google.com or through the Gemini app on Android and iOS.
