We built a tool to generate thousands of marketing banners automatically

Hey,

I’m one of the people behind Pixelixe, and I wanted to share a project that started from a very practical frustration rather than a startup idea.

For context, I’ve been working for years in marketing software. One thing kept repeating itself across companies, teams, and products:

Creating visuals at scale is still weirdly manual.

Everyone talks about creative automation, APIs, AI, growth systems — but when it comes to graphics, teams are still duplicating files in Figma or Photoshop at 2am before a campaign launch.


The problem we kept seeing

A typical marketing request looks innocent:

“We need banners for the campaign.”

But in reality it means:

  • 6 social networks

  • 5 ad formats each

  • multiple languages

  • product variations

  • A/B testing versions

  • localized pricing

One campaign quickly becomes 200–1,000 images.

What actually happens inside companies is usually one of these:

  1. Designers duplicate files manually.

  2. Someone writes scripts generating images from templates.

  3. The scripts break as soon as marketing changes something.

  4. Everyone promises to “fix the workflow later”.

Later never comes.


Why existing tools didn’t fully solve it

There are good API-based image generation tools out there. We tested many.

But we noticed a recurring gap:

  • Developer tools were powerful but inaccessible to marketing teams.

  • Design tools were flexible but impossible to automate reliably.

So teams ended up creating a strange hybrid process involving exports, spreadsheets, and Slack messages like:

“Can someone regenerate all sizes with the new CTA?”

That message alone can cost hours.


The idea we started exploring

Instead of generating images directly, we asked:

What if banners behaved more like UI components?

Meaning:

  • design once

  • define dynamic variables

  • enforce layout constraints

  • generate variations deterministically

Not AI magic — more like a rendering system with rules.
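To make the "banners as components" idea concrete, here is a rough sketch of the mental model. The names and API below are invented for illustration, not our actual code: a template declares its size, its dynamic variables, and a layout constraint once, and variants come out of a deterministic product of the inputs.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class BannerTemplate:
    name: str
    size: tuple[int, int]          # (width, height) in pixels
    variables: tuple[str, ...]     # dynamic slots, e.g. "headline", "cta"
    max_headline_chars: int = 40   # a layout constraint enforced at render time

    def render(self, **values: str) -> dict:
        missing = set(self.variables) - values.keys()
        if missing:
            raise ValueError(f"missing variables: {missing}")
        if len(values.get("headline", "")) > self.max_headline_chars:
            raise ValueError("headline violates layout constraint")
        # A real system would rasterize here; we return a render spec instead.
        return {"template": self.name, "size": self.size, **values}

def generate_variants(template: BannerTemplate,
                      options: dict[str, list[str]]) -> list[dict]:
    """Deterministic cartesian product of the variable options."""
    keys = sorted(options)  # sorted -> identical ordering on every run
    return [template.render(**dict(zip(keys, combo)))
            for combo in product(*(options[k] for k in keys))]
```

With this shape, "6 networks x 5 formats x N languages" is just one options dict per template, and the same inputs always yield the same variants in the same order.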


The unexpected technical rabbit holes

I originally assumed image generation would be the easy part.

It wasn’t.

1. Text destroys layouts

Dynamic text is chaos.

Examples we hit constantly:

  • German translations 40% longer than English

  • product names longer than expected

  • emojis breaking line height

  • font rendering differences between environments

We ended up building logic closer to a browser layout engine than an image renderer.

Handling overflow without breaking design consistency became a core problem.
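For the curious, the core overflow strategy can be sketched like this. A real system measures glyphs with an actual font engine (Pillow's ImageFont, HarfBuzz, or a headless browser); the average-character-width heuristic below is a stand-in just to show the control flow of "shrink until it fits":

```python
def fit_text(text: str, box_width: int, max_size: int = 48,
             min_size: int = 18, avg_char_ratio: float = 0.6) -> int:
    """Return the largest font size (px) at which `text` fits `box_width`,
    approximating each glyph as avg_char_ratio * font_size wide.
    Raises if even the minimum size overflows."""
    for size in range(max_size, min_size - 1, -1):
        if len(text) * size * avg_char_ratio <= box_width:
            return size
    raise ValueError(f"text overflows even at {min_size}px: {text!r}")

# Longer German copy forces a smaller size than the English original:
en = fit_text("Buy now", box_width=300)
de = fit_text("Jetzt kaufen", box_width=300)
assert de < en
```

The interesting part is the failure mode: when even `min_size` overflows, you have to decide between truncation, wrapping, or rejecting the variant, and that decision is a design policy, not a rendering detail.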


2. Batch generation changes everything

Generating one image via API is trivial.

Generating 5,000 reliably is a completely different system.

We had to rethink:

  • queue orchestration

  • rendering concurrency

  • predictable execution times

  • retry strategies

  • caching identical assets

Marketing teams don’t accept “sometimes it fails”. Campaigns have deadlines.
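Three of those bullets (concurrency, retries, caching identical assets) fit together in one small pattern. A toy sketch, with invented names rather than our production code, and an in-memory dict standing in for a real content-addressed cache:

```python
import hashlib
import json
from concurrent.futures import ThreadPoolExecutor

def cache_key(spec: dict) -> str:
    """Identical inputs -> identical key -> render exactly once."""
    return hashlib.sha256(json.dumps(spec, sort_keys=True).encode()).hexdigest()

def render_with_retry(spec: dict, render, cache: dict, retries: int = 3):
    key = cache_key(spec)
    if key in cache:
        return cache[key]
    last_error = None
    for _ in range(retries):
        try:
            cache[key] = render(spec)
            return cache[key]
        except Exception as e:  # a real system would narrow this
            last_error = e
    raise RuntimeError(f"giving up on variant {key[:8]}") from last_error

def render_batch(specs: list[dict], render, max_workers: int = 8) -> list:
    cache: dict = {}  # a real system would use a shared, locked, or external cache
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda s: render_with_retry(s, render, cache), specs))
```

The point of the hash key is that a campaign with 1,000 requested images but only 300 unique variants does 300 renders, and a transient failure costs a retry instead of the whole batch.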


3. Designers and developers think differently

This might have been the hardest lesson.

Developers want:

  • structured inputs

  • predictable outputs

  • automation

Designers want:

  • visual control

  • freedom to tweak layouts

  • immediate feedback

Building something both sides could use without friction forced us to rethink product decisions multiple times.


What surprised me the most

The biggest insight was this:

Creative automation is not primarily an AI problem.

It’s a constraints problem.

AI can generate ideas, text, or images, but production workflows need:

  • determinism

  • repeatability

  • brand consistency

  • predictable layouts

In other words: engineering problems disguised as design problems.
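One concrete way those constraints show up in practice is a brand lint step that rejects off-palette output before anything renders. A minimal sketch, with a made-up palette and font list:

```python
# Hypothetical brand rules; real ones would come from a brand config.
APPROVED_COLORS = {"#1A1A2E", "#E94560", "#FFFFFF"}
APPROVED_FONTS = {"Inter", "Inter Bold"}

def brand_violations(spec: dict) -> list[str]:
    """Return human-readable problems; an empty list means on-brand."""
    problems = []
    if spec.get("color") not in APPROVED_COLORS:
        problems.append(f"off-brand color: {spec.get('color')}")
    if spec.get("font") not in APPROVED_FONTS:
        problems.append(f"off-brand font: {spec.get('font')}")
    return problems
```

A gate like this is boring, deterministic engineering, which is exactly why it works where "just ask the model to stay on brand" does not.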


Where AI actually fits

We’re now seeing AI used more as an upstream step:

  • generate campaign concepts

  • propose copy variations

  • create image assets

But the final production layer still needs a system that behaves reliably like software infrastructure.

That realization changed how we think about “AI design tools”.


Current state

We built Pixelixe around this idea — templates acting like programmable visual components that can generate large volumes of marketing visuals automatically.

It’s now used mostly for ecommerce visuals, automated campaigns, and SaaS workflows where graphics are generated dynamically.

We’re still learning a lot, especially around how companies scale creative production internally.


I’m curious how others have solved this

If you’ve worked on internal tooling or automation pipelines:

  • Did your team build custom image generators?

  • What broke first when you tried scaling visual production?

  • How do you handle localization + layout issues?

Happy to answer technical questions or share more lessons learned. By the way, we’re currently building an AI Designer agent; it should be live in a couple of weeks at most.