Our first product, an anti-AI watermark, applies a small amount of visual noise to any images you upload. This noise is imperceptible or only minimally perceptible to humans, yet it confuses machine learning models: the perturbation prevents a model from properly minimizing its loss function, so it can no longer reliably “read” your images.
Users can customize how much noise is added on a per-image basis; the tradeoff is that the less noise an image carries, the more of its basic visual details an AI model can still discern.
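For readers curious about the underlying technique, here is a minimal sketch of the general idea behind this kind of protective noise: an FGSM-style adversarial perturbation that nudges pixels in the direction that increases a surrogate model’s loss, scaled by a user-chosen strength. The surrogate model, function names, and epsilon values below are illustrative assumptions, not Sanative’s actual implementation.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

def protect(image: Image.Image, epsilon: float = 4 / 255) -> Image.Image:
    """Return a copy of `image` carrying adversarial noise of strength `epsilon` (illustrative sketch)."""
    # Stand-in surrogate model; any differentiable vision model could play this role.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    x = TF.to_tensor(image).unsqueeze(0)  # shape (1, 3, H, W), values in [0, 1]
    x.requires_grad_(True)

    # Treat the model's own most-confident class as the label to move away from.
    logits = model(x)
    loss = torch.nn.functional.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()

    # FGSM step: shift every pixel in the direction that *increases* the loss,
    # bounded by epsilon so the change stays (nearly) imperceptible.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
    return TF.to_pil_image(x_adv.squeeze(0))

# Higher epsilon = stronger protection, but more visible noise.
# protected = protect(Image.open("artwork.png").convert("RGB"), epsilon=8 / 255)
```

In this sketch, the `epsilon` parameter plays the role of the per-image noise setting described above: turning it up makes the perturbation harder for models to ignore, at the cost of visibility.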
Each watermark requires a significant amount of computing power to generate, so uploaded images enter a queue and may take some time to process. To ensure that Sanative remains accessible to all, we maintain a number of servers that users can take advantage of for free, regardless of their own hardware. For those who wish to support our mission, we offer a paid membership that allows users to bypass the queue and process even higher-resolution images.
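As a rough illustration of how a processing queue with a paid “skip the queue” tier might be organized, a priority queue can order jobs by membership tier first and arrival time second. The class and field names below are hypothetical, not our actual backend.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class WatermarkJob:
    priority: int                           # 0 = paid member, 1 = free tier
    sequence: int                           # arrival order, breaks ties within a tier
    image_path: str = field(compare=False)  # excluded from ordering
    noise_strength: float = field(compare=False, default=4 / 255)

class JobQueue:
    """Hypothetical in-memory queue: paid jobs are popped first, then the oldest free jobs."""

    def __init__(self) -> None:
        self._heap: list[WatermarkJob] = []
        self._counter = itertools.count()

    def submit(self, image_path: str, paid_member: bool, noise_strength: float = 4 / 255) -> None:
        job = WatermarkJob(
            priority=0 if paid_member else 1,
            sequence=next(self._counter),
            image_path=image_path,
            noise_strength=noise_strength,
        )
        heapq.heappush(self._heap, job)

    def next_job(self) -> WatermarkJob | None:
        # Workers call this in a loop to fetch the next image to watermark.
        return heapq.heappop(self._heap) if self._heap else None
```

A production system would of course persist jobs somewhere durable rather than keeping them in memory, but the ordering idea is the same.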
The topic of AI-generated artwork is hotly debated, and individuals who disagree with Sanative’s mission may attempt to attack our site. Though we’ve implemented various safeguards against this, there’s currently no better protection than requiring users to sign in with an account before using Sanative. We hope that you’ll understand this decision.
The idea that protection is a lost cause because tools like Sanative didn’t exist when Stable Diffusion and Midjourney became popular is a sunk cost fallacy. By dwelling on what we’ve already lost, we stay stuck in a past that can’t be changed. The fallacy lies in weighing losses that can’t be recovered instead of looking to the future, where positive change is still possible in spite of the harmful developments that have already taken place. Harm has undoubtedly been done, and it’s understandable that artists would feel discouraged and disempowered given this unfair reality. But if there are low-effort, risk-free ways to protect new images moving forward, we believe artists stand to lose nothing by using them. Our goal at Sanative is to give people choices. The dilemma of AI-generated artwork cannot be solved overnight, but we can start laying the foundation for a better future now if we resist falling into despair.
Watermarking is a temporary solution, but our team is confident in our ability to respond with agility to advances in AI art tools. At the top of our roadmap is a personal gallery, which will let users easily keep track of their protected images and re-apply the latest version of Sanative’s watermark with the click of a button. We also aim to market Sanative as a base technology that other large image-hosting platforms can adopt, so that the onus of keeping images protected with the latest watermark isn’t placed on users.
Making AI-generated art for personal and private use isn’t a bad thing, per se. These tools become problematic when AI-generated art is shared widely on social media without crediting the original artwork it’s based on, something that’s impractical, if not impossible, when large databases of artwork from many artists are utilized. Worse still is when AI-generated art is not only shared widely on social media, but the people sharing it are also profiting from it. Given that there’s currently no way for artists to verify which use case their artwork is serving, many in the art community have been left fearful of the future. Our hope is to level the playing field again by providing artists with tools that let them better regulate when, how, and by whom their artwork is used, until better mechanisms exist for separating personal and private use from public or commercial use.
Given the sheer volume of artwork in the public domain, the impact of anti-AI tools is unlikely to be felt by individuals unless they are generating pieces based on the work of one particular artist who has opted to use such a tool. That is exactly the use case we aim to safeguard against. There’s an argument that preventing individuals from generating AI artwork in the style of one particular artist is itself a form of creative limitation, but we believe this is a lesser evil than artists self-limiting their creative expression out of fear. If AI art remains unregulated and artists are made to compete with generated artwork explicitly based on their own work, some artists may choose to stop sharing artwork altogether.