Image Generation Models: A Detailed Guide

This platform offers several models for generating, editing, and stylizing images. Below is a detailed description of each model and its features.

🎨 Kandinsky 4.1 / 3.1 / 3.0 / 2.2

Purpose: Creating images from text with a focus on the Russian cultural code.

General:

All Kandinsky versions support text-to-image generation and also offer:

✏️ Inpainting – redrawing selected regions inside the image.

🌄 Outpainting – extending the image beyond its original borders.

Differences between versions:

Recommended:

– Choose 4.1 for complex prompts and modern trends.

– Use 2.2 for painterly and graphic work in an authorial style.
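The recommendation above can be encoded as a tiny chooser; the task labels are illustrative, not part of the platform's API.

```python
def recommend_kandinsky_version(task: str) -> str:
    """Pick a Kandinsky version from a rough task label (illustrative only)."""
    painterly_tasks = {"painterly", "graphic", "authorial"}
    if task in painterly_tasks:
        return "2.2"
    # Default to the newest version for complex prompts and modern trends.
    return "4.1"
```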

Generation examples

🖼️ ControlNet: Image-to-Image

Purpose: Changing the style of the original image.

🎨 Preserves structure and pose

💬 Changes the style via a text prompt (e.g. "in pixel art style")

Workflow:

Image (input image) + Text (style) → Image (output image)

Usage example: Upload a photo → Write “in anime style” → Get a result with the same poses and composition.
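The Image + Text → Image workflow could be wrapped as follows; the field names and the `strength` parameter are assumptions for illustration, not the platform's documented API.

```python
import base64


def build_image_to_image_request(image_bytes: bytes, style_prompt: str,
                                 strength: float = 0.8) -> dict:
    """Keep the structure and pose of the input, restyle it per the prompt.

    `strength` (0..1, hypothetical parameter) controls how far the output
    may drift from the input image: low values preserve more of the original.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return {
        "task": "image-to-image",
        "prompt": style_prompt,
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "strength": strength,
    }
```

For the example above, the call would look like `build_image_to_image_request(photo_bytes, "in anime style")`.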

🌐 Flux

Purpose: Creating images from text with a focus on modern trends and creativity.

💡 Ideal for unconventional, futuristic, and trendy concepts

🧠 Has a strong grasp of fashionable visual styles

Excellent for creating complex, detailed textures in special effects and design.

Workflow:

Text → Image

Example prompts:

“3D avatar in metaverse style”

“Surreal city made of light”
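The Text → Image workflow reduces to a single prompt plus output dimensions. A minimal sketch of such a request, with parameter names that are assumptions rather than a documented API:

```python
def build_text_to_image_request(prompt: str, width: int = 1024, height: int = 1024) -> dict:
    """Build a text-to-image payload (hypothetical schema).

    Width and height defaults are illustrative; real endpoints usually
    constrain dimensions to supported resolutions.
    """
    if not prompt.strip():
        raise ValueError("prompt must not be empty")
    return {
        "model": "flux",
        "task": "text-to-image",
        "prompt": prompt,
        "width": width,
        "height": height,
    }
```

For example, `build_text_to_image_request("Surreal city made of light")` would produce the payload for the second example prompt above.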

📌 How to choose a model?
