Now anyone can build apps that use DALL-E 2 to generate images

At long last, DALL-E 2, OpenAI’s image-generating AI system, is available as an API, meaning developers can build the system into their apps, websites and services. In a blog post today, OpenAI announced that any developer can start tapping the power of DALL-E 2 — which more than three million people are now using to produce over four million images a day — once they create an OpenAI API account as part of the public beta.

Pricing for the DALL-E 2 API varies by resolution. For 1024×1024 images, the cost is $0.02 per image; 512×512 images are $0.018 per image; and 256×256 images are $0.016 per image. Volume discounts are available to companies working with OpenAI’s enterprise team.
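For a rough sense of what those list prices mean at scale, here is a back-of-the-envelope estimate (a hypothetical sketch; the function and volumes below are illustrative and not from OpenAI, and enterprise volume discounts would change the math):

```python
# List prices per generated image at launch, keyed by resolution.
PRICE_PER_IMAGE = {
    "1024x1024": 0.020,
    "512x512": 0.018,
    "256x256": 0.016,
}

def monthly_cost(images_per_day: int, size: str = "1024x1024", days: int = 30) -> float:
    """Estimate a month's spend for a steady daily volume at one resolution."""
    return images_per_day * days * PRICE_PER_IMAGE[size]

# e.g. 1,000 full-resolution images a day comes to about $600 a month at list price.
print(f"${monthly_cost(1000):,.2f}")
```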

As with the DALL-E 2 beta, the API will allow users to generate new images from text prompts (e.g. “a fluffy bunny hopping through a field of flowers”) or edit existing images. Microsoft, a close OpenAI partner, is leveraging it in Bing and Microsoft Edge with its Image Creator tool, which lets users create images if web results don’t return what they’re looking for. Fashion design app CALA is using the DALL-E 2 API for a tool that allows customers to refine design ideas from text descriptions or images, while photo startup Mixtiles is bringing it to an artwork-creating flow for its users.
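For developers curious what a call looks like, here is a minimal sketch of a text-to-image request, assuming an `OPENAI_API_KEY` environment variable and the beta's documented `/v1/images/generations` REST endpoint (the parameter names follow OpenAI's published request format, but check the current docs before relying on them):

```python
import os
import requests

API_URL = "https://api.openai.com/v1/images/generations"

def generate_image(prompt: str, size: str = "1024x1024") -> str:
    """Request one image for `prompt` and return the hosted image URL."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"prompt": prompt, "n": 1, "size": size},
        timeout=60,
    )
    response.raise_for_status()  # surfaces rejections, e.g. content-policy violations
    return response.json()["data"][0]["url"]

if __name__ == "__main__":
    print(generate_image("a fluffy bunny hopping through a field of flowers"))
```

The image-editing flow goes through a separate `/v1/images/edits` endpoint, which takes an uploaded image and mask alongside the prompt; the rest of the request is broadly similar.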

Not much in terms of policy is changing with the API launch, which is likely to disappoint those who fear that generative AI systems like DALL-E 2 are being released without sufficient consideration for the ethical and legal issues that they pose. As before, users are bound by OpenAI’s terms of service, which prohibit using DALL-E 2 to generate overtly violent, sexual or hateful content. OpenAI is also continuing to block users from uploading pictures of people without their consent or images that they don’t have the rights to, employing a mix of automated and human monitoring systems to enforce this.

One slight tweak is that images generated with the API won’t be required to contain a watermark. OpenAI introduced watermarking during the DALL-E 2 beta as a way to indicate which images originated from the system, but has chosen to make it optional with the launch of the API.

“We encourage developers to disclose that images are AI-generated, but do not require that they include the DALL-E 2 signature,” Luke Miller, the product manager at OpenAI overseeing DALL-E 2’s development, told TechCrunch via email.

Microsoft’s Designer tool, powered by the DALL-E 2 API.

OpenAI also employs prompt- and image-level filters with DALL-E 2, albeit filters that some customers have complained are overzealous and inaccurate. And the company has focused a portion of its research efforts on diversifying the types of images that DALL-E 2 generates, aiming to combat the biases to which text-to-image AI systems are known to fall victim (e.g. generating mostly images of white men when prompted with text like “examples of CEOs”).

But these steps haven’t allayed every critic. In August, Getty Images banned the upload and sale of illustrations generated using DALL-E 2 and other such tools, following similar decisions by sites including Newgrounds, PurplePort and FurAffinity. Getty Images CEO Craig Peters told The Verge that the ban was prompted by concerns about “unaddressed rights issues,” as the training data sets for systems like DALL-E 2 contain copyrighted images scraped from the web.

Many critics say it’s not merely the infringement of copyrighted work that concerns them about DALL-E 2. The system threatens the livelihood of artists whose styles can now be replicated with a few strings of text, they argue, including artists who didn’t consent to their work being used for DALL-E 2’s training. (To be fair to OpenAI, the company has licensed a portion of the images in DALL-E 2’s training dataset, which is more than can be said of some of its rivals.)

Attempting to find a middle ground, Getty Images rival Shutterstock recently announced that it would begin using DALL-E 2 to generate content but simultaneously launch a “contributor fund” to reimburse creators when the company sells their work to train text-to-image AI systems. It’s also banning AI art uploaded by third parties to minimize the risk that copyrighted work makes its way onto the platform.

Technologists Mat Dryhurst and Holly Herndon are spearheading an effort called Source+ that lets people opt out of having their work or likeness used for AI training purposes. But participation is voluntary. OpenAI hasn’t said whether it’ll take part — or indeed, whether it’ll ever introduce a self-service tool to allow rightsholders to exclude their work from training or content generation.

Mixtiles is among the early adopters of the DALL-E 2 API.

In an interview, Miller revealed little in the way of specifics regarding new mitigation measures, save that OpenAI has been improving its techniques to prevent the system from generating biased, toxic or otherwise objectionable content. He described the open API beta as an “iterative” process, one that’ll involve work with “users and artists” over the next few months as OpenAI scales the infrastructure powering DALL-E 2.

Certainly, if the DALL-E 2 beta is any indication, the API program will evolve with time. Early on, OpenAI disabled the ability to edit people’s faces with DALL-E 2, but later enabled the capability after making improvements to its safety system.

“We’ve done a lot of work on that side of things — both through the images that you upload and the prompts that you send as far as aligning that with our content policy and baking in different mitigations to filter at the prompt level and at the image level to make sure that aligns with our content policy. So, for example, if somebody were to upload an image that contains hate symbols or gore — like very, very, very violent content — that would be rejected,” Miller said. “We’re always thinking about how we can improve the system.”

But while OpenAI appears eager to avoid the controversy that surrounds Stable Diffusion, the open source equivalent of DALL-E 2 that’s been used to create porn, gore and celebrity deepfakes, it’s leaving it up to API users to choose exactly how and where to deploy its technology. Some, like Microsoft, will no doubt take a measured approach, rolling out DALL-E 2-powered products slowly to gather feedback. Others will dive headfirst, embracing both the technology and the ethical dilemmas that come along with it.

If there’s one thing for certain, it’s that there’s pent-up demand for generative AI — consequences be damned. Even before the API was officially available, developers were publishing workarounds to integrate DALL-E 2 into apps, services, websites and even video games. With the public beta launch, fueled by OpenAI’s formidable marketing muscle, synthetic images are poised to truly enter the mainstream.
