OpenAI Says DALL-E Is Generating Over 2 Million Images a Day, and that's just the beginning.

OpenAI has taken down the wait list for its text-to-image system DALL-E 2, which means that anyone can sign up to use the AI art generator right away.

In January 2021, the company showed off the first version of DALL-E. AI experts and the general public were impressed by the tool’s ability to turn any text description (or prompt) into a unique image.

Since then, many other text-to-image systems have appeared that can match or beat DALL-E's speed and quality. Other systems, like Midjourney and Stable Diffusion, are much easier for anyone to access, which has drawn attention away from OpenAI's own system.

OpenAI has been cautious about releasing DALL-E to the public, in part because tech giant Microsoft has invested heavily in the company. Experts warn that text-to-image systems' ability to produce nonconsensual nude imagery and photorealistic pictures makes them dangerous tools for harassment, propaganda, misinformation, and more.

There are also problems with bias. Text-to-image systems are trained on huge sets of images scraped from the internet, and as a result they reproduce the same societal prejudices. Ask a system to draw a CEO, for example, and it will usually produce a picture of a white man.

OpenAI has taken a number of steps to counter these effects, such as removing sexual and violent images from its training data and refusing to generate images from similarly explicit prompts. But the company has also been criticized for what some see as an overly strict or clumsy approach to reducing harm.

Emad Mostaque, who worked on a competing text-to-image AI called Stable Diffusion, called OpenAI's refusal to generate images from prompts containing words like "Ukraine" and "Odesa" an "asshole move." (These words are likely banned because of their potential for wartime misinformation.) Others have called the company's bias-mitigation efforts "hacky": for example, DALL-E quietly appends phrases like "Black man" and "Asian woman" to user prompts that don't specify gender or race.

This keeps the system from defaulting to images of white people. The Verge confirmed with OpenAI that it uses this method. The approach does make DALL-E's output less biased, but some users say it also produces images that don't match what they asked for.
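The prompt-rewriting technique described above can be illustrated with a short sketch. Everything here — the function name, the phrase list, and the keyword check — is a hypothetical reconstruction for illustration, not OpenAI's actual implementation.

```python
import random

# Hypothetical sketch of the prompt-augmentation technique described above.
# The phrase list and demographic-term check are illustrative assumptions,
# not OpenAI's actual code or vocabulary.
DIVERSITY_PHRASES = ["Black man", "Asian woman", "Hispanic woman", "white man"]
DEMOGRAPHIC_TERMS = {"man", "woman", "black", "asian", "white", "hispanic"}

def augment_prompt(prompt: str, rng: random.Random) -> str:
    """Append a demographic phrase when the prompt specifies none."""
    words = {w.strip(".,").lower() for w in prompt.split()}
    if words & DEMOGRAPHIC_TERMS:
        # The user already specified demographics; leave the prompt alone.
        return prompt
    return f"{prompt}, {rng.choice(DIVERSITY_PHRASES)}"

rng = random.Random(0)
print(augment_prompt("a portrait of a CEO", rng))
print(augment_prompt("a portrait of a woman CEO", rng))
```

The mismatch users complain about follows directly from this design: the appended phrase changes the prompt the model actually sees, so the generated image can drift from the user's request.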

OpenAI said in a blog post today that it was confident in the improvements it had made to its safety systems, and that these would help offset potential harms as DALL-E becomes more widely available.

“Over the past few months, we’ve made our filters more effective at blocking attempts to make sexual, violent, or other content that goes against our content policy. We’ve also built new ways to find misuse and stop it,” the company said.

The company also said that it is testing an API for DALL-E that would let other companies build their own apps and plug-ins on top of the system's output. This would make it much easier for OpenAI to commercialize DALL-E's work; the system could be integrated into tools used by illustrators and designers, for example.
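To give a sense of what such an integration might look like, here is a minimal sketch of the request body a third-party app might send to an image-generation endpoint. The endpoint path and field names follow OpenAI's published Images API conventions, but the API the article describes was still in testing, so treat the specifics as assumptions.

```python
import json

# Minimal sketch of a request to a text-to-image generation endpoint.
# The URL and field names mirror OpenAI's published Images API, but are
# assumptions here — the DALL-E API in the article was still in testing.
ENDPOINT = "https://api.openai.com/v1/images/generations"

def build_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Assemble the JSON body for an image-generation request."""
    return {"prompt": prompt, "n": n, "size": size}

body = build_request("an illustration for a magazine cover", n=2)
print(json.dumps(body))
```

An app would POST this body to the endpoint with an API key and receive image URLs or data back; the point of the API is that this one call replaces the whole web interface.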

When you sign up for DALL-E, you get 50 credits for free, and every month after that, you get 15 more credits for free. Each credit can be used to make a single image, a variation of an image, or for “inpainting” and “outpainting” (editing the contents of an image or extending an image beyond its existing boundaries).

You can buy 115 more credits at a time for $15. OpenAI says that over 2 million images are made every day by 1.5 million DALL-E users.
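The pricing above works out to roughly 13 cents per generated image for paying users. A quick arithmetic check, using only the figures quoted in the article:

```python
# Quick arithmetic on the credit pricing described above.
PACK_PRICE_USD = 15   # price of one credit pack
PACK_CREDITS = 115    # credits per pack; one credit = one generation or edit

cost_per_image = PACK_PRICE_USD / PACK_CREDITS
print(round(cost_per_image, 3))  # dollars per image
```

The free monthly allowance (15 credits) effectively gives every user about $2 worth of generations at this rate.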
