Every single piece of work generated by an AI will soon come with its own unique fingerprint. You won't see it, but it will be there, baked right into the content. That's what the world's top AI companies have promised to our sugar-chomping, war-mongering global overlords in the White House.

The next time you try to impress a chick with poetry spat out by ChatGPT, or use it to make quick work of an overdue assignment, you will be caught.

No blessing is eternal. Every boon has a curse to it!

It's actually a great move, even though it might sound like doom for the perpetually lazy humans among us. Marvel, the studio that publishes manchild-pleasing superhero comics and earns billions of dollars from shit films based on those comics, drew heated backlash from fans after it used AI-generated art in the opening credits of its TV show “Secret Invasion.”

A lawyer in New York was sanctioned and publicly shamed after he used ChatGPT for legal research and the chatbot regurgitated imaginary cases, which he then cited in a court filing. Teachers worldwide are fed up and ranting online about students using ChatGPT to cheat on their exams and assignments.

A human checking computer content. Credit: DALL-E / OpenAI

To sum it up, generative AI tools that vomit walls of words, conjure fancy images, and churn out audio clips are fun and helpful, but they are also being abused in myriad ways. Heck, Hollywood actors and writers are protesting against the use of AI by production studios because their jobs are on the line.

The safest way forward?


Making all AI-generated content clearly disclose that it is, well, AI-generated would be a step in the right direction. Thankfully, the White House just secured a commitment from the world’s leading AI developers that they will watermark content generated by their AI programs.

So, the next time you ask ChatGPT to write an essay for you, your teacher will catch yo sneaky ass using a detection tool offered by these companies.

“The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system,” says the official White House press release.

Think of it as DNA fingerprinting, but for AI content. You won’t see it, but it will be very much there, embedded in the output itself. Be it text, AI-generated images, or audio, the watermark gets baked in at generation time and can later be picked up accurately by a detection tool.
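How would a watermark hide inside plain text? The companies haven't said, but researchers have floated schemes in which the model quietly prefers words from a secret “green list” and a detector later counts how often those words show up. Below is a minimal Python sketch of that idea; the key, the word-level green/red split, and the 0.7 threshold are all invented for illustration, not anything OpenAI, Google, or the White House has actually announced.

```python
# Toy sketch of statistical text watermarking, loosely in the spirit of the
# "green list" schemes from academic research (e.g., Kirchenbauer et al., 2023).
# Nothing here reflects the companies' real (unpublished) watermark: the secret
# key, the green/red split, and the threshold are all made up for illustration.

import hashlib

SECRET_KEY = "not-actually-secret"  # hypothetical key shared by generator and detector


def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all words to a 'green list'
    that depends on the secret key and the preceding word."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0


def pick_word(prev_word: str, candidates: list[str]) -> str:
    """Generator side: among equally acceptable candidate words, prefer one from
    the green list, so the finished text quietly carries the statistical mark."""
    for word in candidates:
        if is_green(prev_word, word):
            return word
    return candidates[0]


def green_fraction(text: str) -> float:
    """Detector side: fraction of word transitions that land on the green list."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(words[i - 1], words[i]) for i in range(1, len(words)))
    return hits / (len(words) - 1)


def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    """Ordinary human text hovers around 0.5 green by chance; a generator that
    keeps preferring green words pushes the fraction well above that."""
    return green_fraction(text) > threshold


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    verdict = "flagged as watermarked" if looks_watermarked(sample) else "looks human"
    print(f"green fraction: {green_fraction(sample):.2f} -> {verdict}")
```

Real schemes operate on model tokens and probabilities rather than whole words, and image and audio watermarks hide the signal in pixels or waveform samples instead, but the principle is the same: invisible to the reader, obvious to anyone holding the key.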

There are already tools out there that claim to detect whether a piece of work was generated by AI, but they are not foolproof. That’s extremely risky. One student at the University of California, Davis failed her exam because an AI-checker tool used by her professor erroneously flagged her answers as AI-generated.

To put it simply, the stakes are extremely high, even if the error rate for such an AI tool is extremely low. For example, a checker that gets just 1% of its calls wrong will, across a class of 100 students, more likely than not wrongly flag at least one innocent kid as a cheat, ruining their career for good (quick math below).
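A quick sanity check on that claim, assuming a 1 percent false-positive rate and independent checks (both numbers are mine for illustration, not anyone's published figures):

```python
# Back-of-the-envelope odds that an "extremely accurate" detector still burns
# an innocent student. Assumes a 1% false-positive rate and independent checks;
# both figures are illustrative, not published stats for any real tool.
false_positive_rate = 0.01
class_size = 100

p_no_false_flags = (1 - false_positive_rate) ** class_size
p_at_least_one = 1 - p_no_false_flags

print(f"P(at least one innocent student flagged): {p_at_least_one:.0%}")  # roughly 63%
```

Scale that across every class in every school running these checkers and false accusations stop being a freak edge case.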

However, if the makers of AI tools like ChatGPT, Bard, and Bing Chat make good on their commitment, they will offer an AI fingerprinting tool that is far more accurate. Plus, watermarking would go a long way toward bolstering transparency.

At the moment, not many details about this AI watermarking tool are available. But we do know that it is being developed at a fundamental level by the world’s top AI labs: Microsoft, Google, OpenAI, Amazon, Anthropic, Meta, and Inflection. Because these companies control the models themselves, they can bake the watermark in at the source instead of trying to guess after the fact, which should make it far more reliable than the third-party checkers we have today.
