May 3–4, 2026 · Tenerife
How I Built a Streetwear Brand with AI — DIOCANE
From a folder of reference images to a complete AI logo-generation pipeline, a web UI, and a print-ready brand identity. All in one session.
diocane · ai · branding · streetwear · flux · ideogram · print-on-demand · 369 · pipeline
Start · May 3, 2026 · intention
"help me with tools and workflow to create this from scratch since claude design failed... i want to sell print on demand so customers can buy it and i want to set this store up asap"
The brief in one sentence: sell streetwear online, zero upfront inventory, start now. Claude's built-in design tool had already failed — the output was too clean, too corporate. DIOCANE needed to look like it was designed in a warez scene NFO file at 3AM, not in a Canva template. That gap was the whole problem to solve.
05:00 · aesthetic
23 reference images in ~/Desktop/diocane/. Razor 1911 NFOs. ACiD Productions ANSI art. Dot-matrix rhinestone heraldry. Guanyin + DVD logo. SYS_BOOT terminal text over Buddhist fine-line illustration. Black wolves. Gothic blackletter. Keygen UI serial number readouts.
Before touching any tool, the visual language had to be understood. This wasn't "gothic streetwear" — it was a very specific collision: demoscene hacker culture from the early 2000s, medieval heraldry rendered in pixel dots, sacred/profane imagery, and the visual grammar of underground software distribution. The 23 images were a fully coherent aesthetic. The challenge was making an AI see it.
Session start · build
"try to integrate good ascii in the design i love the keygen crack hacker code aesthetic mixed with the goth spiky tribals metal fonts and snowboarding and skate core core core culture"
This sentence is the complete creative brief. Keygen cracktro. Gothic spiky metal. Skate/snow core. The triple "core core core" is not a typo — it's emphasis. That's the energy. Every generation decision after this point was measured against it.
Phase 1 · technical
Local inference attempt: ComfyUI + FLUX.1-dev Q4_K_S GGUF (6.81GB). Apple M1 Pro 16GB unified memory. MPS backend. Result: 480 seconds per inference step.
480 seconds per step. With 28 steps for FLUX dev, that's 224 minutes per image. Nearly 4 hours for a single logo. The M1 Pro is fast at many things — FLUX inference at scale is not one of them. MPS doesn't have the memory bandwidth CUDA does for this model size. The local approach was dead on arrival. Pivot immediately.
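The back-of-envelope math that killed the local approach, as a quick sketch (step count and per-step timing are the numbers measured above):

```python
# Local FLUX dev inference cost on the M1 Pro, per the timings above.
SECONDS_PER_STEP = 480   # measured on MPS with the Q4_K_S GGUF
STEPS_PER_IMAGE = 28     # standard step count for FLUX dev

seconds_per_image = SECONDS_PER_STEP * STEPS_PER_IMAGE
minutes_per_image = seconds_per_image / 60

print(f"{minutes_per_image:.0f} minutes per image")  # 224 minutes
```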
Pivot · pivot
"its taking a long time should we use a cloud API image generation instead? find the best quality to price ratio"
The right call at exactly the right moment. No attachment to the local setup. The goal is logos, not infrastructure. fal.ai FLUX dev: $0.025/image, ~6 seconds. Ideogram v2: $0.08/image, best text rendering in the industry. Built the cloud generator in one pass — multiple backends, three style modes, style reference support, retry logic. First cloud image: 18 seconds. Game changed.
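The quality-to-price comparison can be captured in a tiny backend table. A minimal sketch, assuming a registry keyed by backend name (the per-image prices are the ones quoted in the session; the structure and function names are hypothetical, not the actual generator):

```python
# Hypothetical backend registry; per-image prices from the session.
COST_PER_IMAGE = {
    "flux-dev": 0.025,
    "flux-schnell": 0.003,
    "ideogram-v2": 0.08,
}

def cheapest_backend() -> str:
    """Draft mode: pick the backend with the lowest cost per image."""
    return min(COST_PER_IMAGE, key=COST_PER_IMAGE.get)

def images_per_dollar(name: str) -> int:
    """How many generations one dollar buys on a given backend."""
    return round(1 / COST_PER_IMAGE[name])
```

A draft/finalize split falls out of this for free: iterate on the cheapest backend, then re-render confirmed compositions on the expensive one.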
The text problem · discovery
Generated: "DIOCAINE", "DIOCAME", "DIO·ANE", garbled glyphs. Background code elements rendered as readable nonsense phrases. AI text hallucination across all FLUX models.
Every image generation model hallucinates text. FLUX is better than most but still unreliable on specific brand names. "DIOCANE" kept acquiring phantom letters, losing letters, or splitting around a dog illustration placed in the middle of the word. The background "hacker code" came out as readable English garbage. Both problems needed different fixes.
Fix · breakthrough
Switched to Ideogram v3. Prompt constraint: "only readable text is DIOCANE and 369. All other code elements must be abstract unreadable characters." Text accuracy: ~40% → ~95%.
Ideogram was built specifically for typography. v3 is their best model. The key wasn't just switching models — it was the negative constraint in the prompt. Telling the model what NOT to render as readable text is as important as telling it what to render. After this fix, "DIOCANE" and "369" came out clean in nearly every generation. The background code stayed abstract. Problem solved.
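The constraint is easy to lose between iterations, so it can live in a tiny prompt builder that appends it to every generation. A sketch (the wording mirrors the constraint above; the function itself is hypothetical):

```python
READABLE = ["DIOCANE", "369"]  # the only strings allowed to render as text

def build_prompt(scene: str) -> str:
    """Append the negative text constraint to every generation prompt."""
    allowed = " and ".join(READABLE)
    return (
        f"{scene}. The only readable text is {allowed}. "
        "All other code elements must be abstract unreadable characters."
    )

prompt = build_prompt("gothic blackletter wolf logo, NFO border")
```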
The collage trick · breakthrough
Built make_collage.py — stitches all 23 reference images into a single 1536×1536 grid. Uploads to fal.ai CDN. Passes as style_image_url to Ideogram. Quality jump: massive.
This was the biggest unlock of the session. Instead of describing the aesthetic in words — which always produces something adjacent to, but not quite, the vision — you hand the model all 23 reference images at once as a visual input. The Ideogram v3 style reference parameter absorbs the entire aesthetic in one pass: dot-matrix texture, NFO border style, sacred geometry, blackletter weight. The difference between prompted style and referenced style is the difference between a cover version and the original.
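The grid math behind a make_collage.py-style stitcher is simple. A sketch of the layout logic only (the real script presumably uses Pillow to resize and paste each image into its cell):

```python
import math

def grid_layout(n_images: int, canvas: int = 1536):
    """Square grid that fits n images: returns (cols, rows, cell_px)."""
    cols = math.ceil(math.sqrt(n_images))  # 23 images -> 5 columns
    rows = math.ceil(n_images / cols)      # -> 5 rows
    cell = canvas // cols                  # -> 307 px per cell
    return cols, rows, cell

def cell_box(index: int, cols: int, cell: int):
    """Top-left pixel coordinates of the index-th cell, row-major."""
    return (index % cols) * cell, (index // cols) * cell
```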
Direction locked · aesthetic
Winning formula after 100+ generations: spiky cathedral gothic metal blackletter DIOCANE + third-eye divine wolf + friendly snake companion + 369 + abstract binary code columns + ESP wallhack UI corner bracket overlays.
A hundred images to find a formula. The wolf was always there — DIOCANE means god-dog, the wolf is the archetype. The all-seeing eye came from the sacred geometry references. The snake landed when the prompt asked for a "friendly companion" rather than a threatening element — the snake became the caduceus, the healer, the keeper of wisdom. The 369 is Tesla's number. The ESP wallhack UI is the keygen cracktro layer. Everything clicked into place.
The credit problem · human
"you said around 200 images with 5 but im already at 9.50"
$10 of fal.ai credits spent faster than expected because the iteration happened almost entirely on Ideogram v3 at $0.08/image. 125 images × $0.08 = $10. The math was right — the assumption was wrong. FLUX schnell at $0.003 would have given 3,300 drafts for the same budget. Lesson: use schnell for direction-finding, save Ideogram v3 for confirmed compositions. Always measure the tool cost against the iteration speed you actually need.
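The budget lesson as exact arithmetic (prices from the session; working in thousandths of a dollar avoids float rounding):

```python
# Prices in thousandths of a dollar, so the arithmetic is exact.
IDEOGRAM_V3 = 80    # $0.08 per image
FLUX_SCHNELL = 3    # $0.003 per image
BUDGET = 10_000     # $10.00 of credits

spent = 125 * IDEOGRAM_V3           # the 125 Ideogram images -> full budget
drafts = BUDGET // FLUX_SCHNELL     # schnell images for the same money
```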
Free tier found · pivot
Pollinations.ai: completely free, no API key, FLUX-based. HTTP GET request with encoded prompt. Added as --backend pollinations. First free generation: wolf, gothic frame, fine detail. Working.
When the paid credits ran out, the pipeline didn't stop. Pollinations.ai exists, works, and requires nothing. The quality is different from Ideogram — more painterly, less typographically precise — but for direction-finding and composition testing it's perfectly usable. Always have a free fallback. The generator now defaults to Pollinations when no API key is present.
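The free backend really is just a GET request with the prompt in the URL path. A sketch of the URL construction (the base URL matches Pollinations' public prompt endpoint; any extra query parameters would need checking against their docs):

```python
from urllib.parse import quote

POLLINATIONS_BASE = "https://image.pollinations.ai/prompt/"

def pollinations_url(prompt: str) -> str:
    """Percent-encode the prompt into the path of the image endpoint."""
    return POLLINATIONS_BASE + quote(prompt, safe="")

url = pollinations_url("DIOCANE gothic wolf, NFO border, 369")
```

Fetching that URL returns the generated image bytes directly, which is why it slots into a generator as just another `--backend` option.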
Web UI · build
FastAPI + vanilla JS. http://localhost:7369. Live job queue. Gallery browser with folder navigation. Favorites tab (~/Desktop/doggod curated picks). Drag-and-drop style ref upload. One-click "use as style ref" on any image. Hot reload.
The command-line generator was efficient but opaque. Building the web UI turned the pipeline into a creative tool — you can see everything, click anything, iterate visually. The port is 7369. Of course it is. The favorites tab pulls from the curated doggod folder — 56 hand-picked results that now serve as the living style reference for all future generations. The best collage of those 56 is one button click away as a style input.
The curated best · discovery
~/Desktop/doggod — 56 manually selected generations. New collage: diocane_best.jpg — 56 images, 1536×3678px. Used as style reference for final round. Result: most consistent output of the session.
The breakthrough moment of the whole pipeline: use the best outputs as inputs. Your own generations, filtered by human taste, become the style reference for the next generation. The model learns from what you've already approved. This is how you converge on a brand identity — not by describing it better, but by showing it progressively more of itself.
Final logo direction · aesthetic
"DIOCANE" spiky razor cathedral gothic blackletter. Wolf third eye halo. Snake companion. 369. Matrix binary columns. ESP wallhack corner brackets. Black on white. Screen print ready.
After 27 finals, 18 iterations, 9 style-ref batches, and a full web UI — the logo direction is locked. Wolf with all-seeing eye, snake companion, DIOCANE in extreme gothic metal, 369 below, binary code and ESP UI as the code layer. Every element means something. Nothing is decoration. The next step is vectorization and Printful integration. The brand is real.
Stack summary · build
fal.ai FLUX dev ($0.025) · FLUX schnell ($0.003) · Ideogram v3 ($0.08) · Pollinations.ai (free) · ComfyUI + GGUF (abandoned) · FastAPI + uvicorn · Printful · Printify · Etsy · Vectorizer.ai
The full tool stack of one session. Two pivots: local → cloud, paid → free. Three style modes, four backends, one collage builder, one web UI. The project is now a repeatable pipeline, not a one-off experiment. Every future product design — back prints, sleeve graphics, cap embroidery, rhinestone patterns — runs through the same system with the same style references.