Thankfully, the stakes for viral supercentenarian cake decorators are low. Which is good, since as generative AI becomes better and better, the days of looking for tell-tale signs to spot a fake are nearly over. And that's created a race against time: can we work out other ways to spot fakes, before the fakes become indistinguishable from reality?

"We're running out of time of still being able to do manual detection," said Mike Speirs, of AI consultancy Faculty, where he leads the company's work on counter-disinformation. "The models are developing at a speed and pace that is, well, incredible from a technical point of view, and quite alarming.

"There are all kinds of manual techniques to spot fake images, from misspelled words to incongruously smooth or wrinkly skin. Hands are a classic one, and then eyes are also quite a good tell. But even today, it is time-consuming: it's not something you can truly scale up. And time is running out – the models are getting better and better."

Since 2021, OpenAI's image generator, Dall-E, has released three versions, each radically more capable than the previous. Indie competitor Midjourney has released six in the same period, while the free and open source Stable Diffusion model has hit its third version, and Google's Gemini has joined the fray. As the technology has become more powerful, it's also become easier to use: the latest version of Dall-E is built into ChatGPT and Bing, while Google is offering its own tools for free to users.

Tech companies have started to react to the oncoming flood of generated media. The Coalition for Content Provenance and Authenticity, which counts the BBC, Google, Microsoft and Sony among its members, has produced standards for watermarking and labelling, and in February OpenAI announced it would adopt them for Dall-E 3. Now, images generated by the tool have a visible label and a machine-readable watermark. At the distribution end, Meta has started adding its own labels to AI-generated content and says it will remove posts that aren't labelled.

Those policies might help tackle some of the most viral forms of misinformation, like in-jokes or satire that spreads outside its original context. But they can also create a false sense of security, says Speirs. "If the public get used to seeing AI-generated images with a watermark on it, does that mean they implicitly trust any without watermarking?" That's a problem, since labelling is by no means universal – nor is it likely to be. Big companies like OpenAI might agree to label their creations, but startups such as Midjourney don't have the capacity to devote extra engineering time to the problem.
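For the curious, the "machine-readable watermark" side of this is concrete: C2PA provenance data is embedded in the image file itself, inside a JUMBF metadata container whose boxes carry ASCII type and label strings. The sketch below is a deliberately naive illustration of that idea, not real verification tooling – the function name is made up for this example, and a genuine check would parse the container structure and validate cryptographic signatures rather than scan raw bytes.

```python
def may_contain_c2pa(data: bytes) -> bool:
    """Crude heuristic, for illustration only: C2PA manifests are
    stored in a JUMBF box, so files carrying one typically contain
    the ASCII byte strings 'jumb' (the box type) and 'c2pa' (the
    manifest label). A real verifier parses the box tree and checks
    the signatures; this just scans the raw bytes."""
    return b"jumb" in data and b"c2pa" in data


# Synthetic byte strings standing in for real image files:
with_manifest = b"\x00\x00\x00\x1cjumb....c2pa...."
without_manifest = b"ordinary JPEG payload"

print(may_contain_c2pa(with_manifest))     # True
print(may_contain_c2pa(without_manifest))  # False
```

The point of the sketch is the asymmetry Speirs worries about: presence of the marker is easy to advertise, but absence proves nothing, since a stripped or never-labelled image looks identical to an honest one.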