OpenAI Images 2.0: text in images, UI generation, and extended thinking

OpenAI Images 2.0 can render text, generate UI mockups, and pull fresh web data; extended thinking mode lets the model invent its own concept from a short prompt.

Author: Michael Kokin

What's new

Three notable improvements over the previous generation:

- accurate text rendering in images;
- UI mockup generation;
- pulling fresh data from the web.

There's also extended thinking mode: give it a short prompt and the model comes up with a concept on its own and delivers a finished image — no detailed instructions needed.

Non-standard formats

OpenAI also showed how the model handles arbitrary layouts — ad banners, multi-column spreads, full newspaper pages with headlines and body copy:

![](/media/posts/openai-images-2-test-2.jpg)
![](/media/posts/openai-images-2-test-3.jpg)
![](/media/posts/openai-images-2-test-4.jpg)
![](/media/posts/openai-images-2-test-5.jpg)
![](/media/posts/openai-images-2-test-6.jpg)
![](/media/posts/openai-images-2-test-7.jpg)

Why it matters

Accurate text in images is what kept designers from using generative models for real work: logos, banners, UI mockups. Now it works. All in all, a genuinely solid release from Altman and team; nothing to be embarrassed about.

Expecting GPT-5.5 or a new model called Spud within the next couple of weeks.

- OpenAI: try it via "try in ChatGPT" (works in the browser version, not in the app yet)
- TechCrunch