---
name: local-anime
description: Manage local ComfyUI instance for anime image generation using Animagine XL 3.1 (SDXL). Handles ComfyUI API integration for queuing prompts, workflow management, and anime-specific prompt engineering with Danbooru-style tags.
---

# Local Anime Generation Skill

## Quick Generate

Copy/paste this in PowerShell/cmd to generate a character:

```cmd
cd C:\Users\fbmor\broken-spire-comparison
python automated_consistency_workflow_local.py --generate "Ash 2"
```

Available characters: Ash, "Ash 2", Far-Future Ash, Nova Human, Nova Devil, Éva Moreau, Everly, Lin Weishan, TC-23, Jonas, Violet humaine, Violet Devil, Esper, TK, Seraphine Vale

After generation, images appear in: C:\AI\ComfyUI\output

## Overview

This skill manages a local ComfyUI instance running **Animagine XL 3.1** (SDXL-based anime checkpoint) on a **gaming PC with NVIDIA GPU**. The ComfyUI server runs locally and is accessible via HTTP API.

## Instance Details

- **ComfyUI URL:** `http://localhost:8188`
- **GPU:** Local gaming PC (NVIDIA GPU)
- **Storage:** Local filesystem

## Local Paths

Models and outputs are stored locally on the Windows filesystem:

- **Outputs:** `C:/AI/ComfyUI/output`
- **Checkpoints:** `C:/AI/ComfyUI/models/checkpoints`
- **VAE:** `C:/AI/ComfyUI/models/vae`
- **CLIP Vision:** `C:/AI/ComfyUI/models/clip_vision`

Current checkpoint: `animagine-xl-3.1.safetensors`

## ComfyUI API Integration

### Check System Status
```bash
curl -s "http://localhost:8188/api/system_stats"
```

### Queue a Prompt (Text-to-Image)

POST to `/api/prompt` with the following JSON structure:

```json
{
  "prompt": {
    "1": {
      "class_type": "CheckpointLoaderSimple",
      "inputs": {"ckpt_name": "animagine-xl-3.1.safetensors"}
    },
    "2": {
      "class_type": "CLIPTextEncode",
      "inputs": {
        "text": "<POSITIVE_PROMPT>",
        "clip": ["1", 1]
      }
    },
    "3": {
      "class_type": "CLIPTextEncode",
      "inputs": {
        "text": "<NEGATIVE_PROMPT>",
        "clip": ["1", 1]
      }
    },
    "4": {
      "class_type": "EmptyLatentImage",
      "inputs": {"width": 832, "height": 1216, "batch_size": 1}
    },
    "5": {
      "class_type": "KSampler",
      "inputs": {
        "seed": 42,
        "steps": 28,
        "cfg": 7,
        "sampler_name": "euler_ancestral",
        "scheduler": "normal",
        "denoise": 1.0,
        "model": ["1", 0],
        "positive": ["2", 0],
        "negative": ["3", 0],
        "latent_image": ["4", 0]
      }
    },
    "6": {
      "class_type": "VAEDecode",
      "inputs": {"samples": ["5", 0], "vae": ["1", 2]}
    },
    "7": {
      "class_type": "SaveImage",
      "inputs": {"filename_prefix": "AnimagineXL_", "images": ["6", 0]}
    }
  }
}
```

Example curl command (for a full graph, save the payload to a file to avoid shell-quoting issues):
```bash
# Put the complete {"prompt": {...}} payload in a file, e.g. workflow_api.json
curl -s -X POST "http://localhost:8188/api/prompt" \
  -H "Content-Type: application/json" \
  -d @workflow_api.json
```

**Response:** `{"prompt_id": "<uuid>", "number": 1, "node_errors": {}}` on success.

### Check Queue Status
```bash
curl -s "http://localhost:8188/api/queue"
```

### Get Generation History
```bash
curl -s "http://localhost:8188/api/history"
```

### View Generated Images
```bash
curl -s "http://localhost:8188/api/view?filename=AnimagineXL_00001_.png&subfolder=&type=output"
```
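The endpoints above can be combined into a small client. This is a minimal sketch using only the Python standard library: it rebuilds the seven-node API graph from the previous section and POSTs it to `/api/prompt`, assuming the documented response shape. It is illustrative, not the project's actual script.

```python
import json
import urllib.request

COMFY = "http://localhost:8188"  # local ComfyUI instance

def build_txt2img_workflow(positive, negative, seed=42, width=832, height=1216):
    """Build the API-format graph shown above (node ids 1-7)."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "animagine-xl-3.1.safetensors"}},
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"text": positive, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": width, "height": height, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"seed": seed, "steps": 28, "cfg": 7,
                         "sampler_name": "euler_ancestral", "scheduler": "normal",
                         "denoise": 1.0, "model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0]}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"filename_prefix": "AnimagineXL_", "images": ["6", 0]}},
    }

def queue_prompt(workflow):
    """POST the graph to /api/prompt; return the prompt_id from the response."""
    req = urllib.request.Request(
        f"{COMFY}/api/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["prompt_id"]
```

After queuing, poll `/api/history` (as in the curl example above) until the returned `prompt_id` appears with its output filenames.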

## KSampler Settings (Optimized for Animagine XL 3.1)

| Parameter | Value | Notes |
|---|---|---|
| steps | 28 | High quality; can reduce to 20 for speed |
| cfg | 7 | Sweet spot for Animagine XL |
| sampler_name | euler_ancestral | Best for anime style, adds natural variation |
| scheduler | normal | Also works: karras |
| denoise | 1.0 | Full generation (reduce for img2img) |
| seed | randomize or fixed | Use fixed for reproducibility |

## Supported Resolutions (SDXL Native)

| Aspect | Width | Height | Use Case |
|---|---|---|---|
| Portrait | 832 | 1216 | Characters, full body |
| Landscape | 1216 | 832 | Scenes, environments |
| Square | 1024 | 1024 | Headshots, icons |
| Wide | 1344 | 768 | Cinematic, panoramic |
| Tall | 768 | 1344 | Vertical scenes |

**Important:** Always use SDXL-native resolutions. Non-standard sizes cause artifacts.
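The table above can be encoded as a lookup so non-native sizes are rejected before anything is queued. A small illustrative helper (not part of the existing scripts):

```python
# SDXL-native resolutions from the table above
SDXL_RESOLUTIONS = {
    "portrait": (832, 1216),
    "landscape": (1216, 832),
    "square": (1024, 1024),
    "wide": (1344, 768),
    "tall": (768, 1344),
}

def resolution(aspect):
    """Return (width, height) for an SDXL-native aspect, or raise."""
    try:
        return SDXL_RESOLUTIONS[aspect]
    except KeyError:
        raise ValueError(
            f"non-native aspect {aspect!r}; use one of {sorted(SDXL_RESOLUTIONS)}"
        )
```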

## Anime Prompt Engineering (Animagine XL 3.1)

### Prompt Format

Animagine XL uses **Danbooru-style tags** (comma-separated), NOT natural language sentences.

### Quality Tags (Always Include)

Start every positive prompt with quality boosters:
```
masterpiece, best quality, very aesthetic, absurdres
```

### Prompt Structure

```
<quality tags>, <character description>, <scene/action>, <background>, <style tags>
```
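The structure can be assembled with a small helper so the quality tags always land first. `build_prompt` is a hypothetical convenience function for illustration, not part of the existing scripts:

```python
# Quality boosters from the section above; always placed first
QUALITY_TAGS = "masterpiece, best quality, very aesthetic, absurdres"

def build_prompt(character, scene, background="detailed background", style=None):
    """Join tag groups in the documented order:
    quality, character, scene/action, background, style."""
    parts = [QUALITY_TAGS, character, scene, background]
    if style:
        parts.append(style)
    return ", ".join(p.strip() for p in parts if p)
```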

### Example Positive Prompts

**Character portrait:**
```
masterpiece, best quality, very aesthetic, absurdres, 1girl, solo, long hair, silver hair, red eyes, detailed face, school uniform, sailor collar, pleated skirt, standing, cherry blossoms, spring, blue sky, detailed background
```

**Action scene:**
```
masterpiece, best quality, very aesthetic, absurdres, 1boy, solo, spiky hair, black hair, glowing eyes, battle stance, sword, energy aura, dynamic pose, ruins, dramatic lighting, dark atmosphere
```

**Group scene:**
```
masterpiece, best quality, very aesthetic, absurdres, 2girls, holding hands, smiling, flower field, sunset, wind, flowing hair, white dress, summer, warm colors
```

### Standard Negative Prompt

Always use this negative prompt unless specifically adjusted:
```
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
```

### Useful Tags Reference

**Hair:** long hair, short hair, twintails, ponytail, braids, silver hair, blue hair, black hair, blonde hair
**Eyes:** blue eyes, red eyes, green eyes, heterochromia, glowing eyes, detailed eyes
**Clothing:** school uniform, armor, dress, kimono, hoodie, military uniform, maid outfit
**Expression:** smile, blush, crying, angry, surprised, closed eyes, open mouth
**Pose:** standing, sitting, running, dynamic pose, looking at viewer, from above, from below
**Background:** detailed background, simple background, gradient background, outdoors, indoors, night sky, city, forest
**Lighting:** dramatic lighting, backlighting, rim lighting, soft lighting, golden hour
**Style:** anime coloring, flat color, cel shading, painterly, sketch, lineart

### Tags to Avoid

- Natural language sentences (the model doesn't understand them well)
- Conflicting tags (e.g., both "smile" and "crying" unless intentional)
- Too many characters (quality degrades significantly with 3+ subjects)

## Workflow JSON (for Visual Canvas Loading)

This workflow JSON can be pasted (Ctrl+V) onto the ComfyUI canvas to load the visual node graph:

```json
{"last_node_id":7,"last_link_id":9,"nodes":[{"id":1,"type":"CheckpointLoaderSimple","pos":[50,200],"size":[315,98],"flags":{},"order":0,"mode":0,"outputs":[{"name":"MODEL","type":"MODEL","links":[1],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[2,3],"slot_index":1},{"name":"VAE","type":"VAE","links":[4],"slot_index":2}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["animagine-xl-3.1.safetensors"],"title":"Load Checkpoint"},{"id":2,"type":"CLIPTextEncode","pos":[450,100],"size":[420,164],"flags":{},"order":1,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":2}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[5],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["1girl, solo, long hair, blue eyes, school uniform, cherry blossoms, spring, detailed background, masterpiece, best quality, very aesthetic, absurdres"],"title":"Positive Prompt","color":"#232","bgcolor":"#353"},{"id":3,"type":"CLIPTextEncode","pos":[450,340],"size":[420,164],"flags":{},"order":2,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":3}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, nsfw"],"title":"Negative Prompt","color":"#322","bgcolor":"#533"},{"id":4,"type":"EmptyLatentImage","pos":[450,570],"size":[315,106],"flags":{},"order":3,"mode":0,"outputs":[{"name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[832,1216,1],"title":"Empty Latent Image"},{"id":5,"type":"KSampler","pos":[950,200],"size":[315,262],"flags":{},"order":4,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":1},{"name":"positive","type":"CONDITIONING","link":5},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":7}],"outputs":[{"name":"LATENT","type":"LATENT","links":[8],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[42,28,7.0,"euler_ancestral","normal",1],"title":"KSampler"},{"id":6,"type":"VAEDecode","pos":[1350,200],"size":[210,46],"flags":{},"order":5,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":8},{"name":"vae","type":"VAE","link":4}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"title":"VAE Decode"},{"id":7,"type":"SaveImage","pos":[1600,200],"size":[315,270],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["AnimagineXL_"],"title":"Save Image"}],"links":[[1,"1",0,"MODEL","5",0,0],[2,"1",1,"CLIP","2",0,0],[3,"1",1,"CLIP","3",0,0],[4,"1",2,"VAE","6",1,0],[5,"2",0,"CONDITIONING","5",0,0],[6,"3",0,"CONDITIONING","5",1,0],[7,"4",0,"LATENT","5",2,0],[8,"5",0,"LATENT","6",0,0],[9,"6",0,"IMAGE","7",0,0]],"groups":[],"config":{},"extra":{"ds":{"scale":1.0,"offset":[0,0]}},"version":0.4}
```

## Troubleshooting

- **"Missing Models" error in ComfyUI:** The old Wan 2.1 workflow nodes still reference deleted models. Clear the canvas and load the Animagine XL workflow above.
- **API returns 500:** Check that the checkpoint name matches exactly: `animagine-xl-3.1.safetensors`
- **Low quality output:** Ensure quality tags are at the START of the positive prompt. Use SDXL-native resolutions only.
- **ComfyUI not responding:** Ensure ComfyUI is running locally at `http://localhost:8188`. Check that the launch script started the web server on port 8188.
- **Cannot connect to localhost:8188:** Verify no other application is using port 8188. Check firewall settings.

## IP-Adapter Workflow (For Exact Character Consistency)

For exact character matches to your reference images, use IP-Adapter in the web interface.

### Character Reference Images

Reference images located at:
```
P:\Ai\Openclaw\shared-exchange\Broken Spire\book-vault\characters\Characters images concepts\
```

Files:
- `Ash.png` / `Ash 2.png` - Main character reference
- `Far-Future Ash.png` - Villain version
- `Everly.png` - Military character
- `Éva Moreau.png` - Doctor/scientist
- `Nova Human.png` / `Nova devil.png` - Warrior (human and devil forms)
- `Violet humaine.png` / `Violet Devil.png` - Duality character
- `Lin Weishan.png` - Asian fighter
- `TC-23.png` - Esper/mechanical
- `Jonas.png` - Ghost/memory
- `Seraphine Vale.png` - Royal character
- `TK.png` - Additional character
- `Esper.png` - Additional esper

### IP-Adapter Setup in Web UI

1. **Open ComfyUI:** `http://localhost:8188`

2. **Load Checkpoint:** Select `animagine-xl-3.1.safetensors`

3. **Add IP-Adapter Unified Loader:**
   - Double-click canvas → search "IPAdapter Unified Loader"
   - Connect MODEL from Checkpoint to its model input
   - This auto-loads IP-Adapter Plus + CLIP Vision models

4. **Add IPAdapter Advanced:**
   - Double-click → search "IPAdapter Advanced"
   - Connect IPAdapter Unified Loader MODEL → IPAdapter Advanced model input

5. **Add Load Image (for each character reference):**
   - Double-click → search "Load Image"
   - Load your character reference (e.g., Ash.png)
   - Connect to IPAdapter Advanced image input

6. **Connect Flow:**
   - IPAdapter Advanced MODEL → KSampler model input
   - CLIP Text Encode (positive/negative) → KSampler conditioning
   - Empty Latent Image → KSampler latent_image
   - KSampler → VAE Decode → Save Image

7. **Generate:** The output should closely match your reference character

### Character Consistency Tips

- Use the SAME reference image for all appearances of a character
- For Far-Future Ash (evil version): use `Far-Future Ash.png`
- For Nova's two forms: use `Nova Human.png` and `Nova devil.png` separately
- For Violet duality: generate twice with each reference

### Generating Multiple Frames with IP-Adapter

For the anime opening, repeat for each frame:
1. Load appropriate character reference(s)
2. Enter scene prompt
3. Generate
4. Save to output folder

### Accessing Generated Images

Images are saved to `C:/AI/ComfyUI/output`. You can also list and download via API:

```bash
# List all outputs
curl -s "http://localhost:8188/api/history" | python3 -c "
import sys,json; d=json.load(sys.stdin)
for k,v in d.items():
    outputs = v.get('outputs',{})
    for node,data in outputs.items():
        imgs = data.get('images',[])
        for img in imgs: print(img['filename'])
"

# Download specific image
curl -s "http://localhost:8188/api/view?filename=IMAGE_NAME.png&subfolder=&type=output" -o output.png
```

## Video Generation (Image-to-Video)

For generating video animations from images using Stable Video Diffusion (SVD) on your local ComfyUI.

### SVD Image-to-Video Workflow

Use these nodes in ComfyUI canvas for video generation:

1. **Load Checkpoint** → `ImageOnlyCheckpointLoader` (for video models like svd_*.safetensors)
2. **SVD img2vid Conditioning** → Accepts init_image, vae, width, height, video_frames, motion_bucket_id, fps
3. **KSampler** → Sample the video latent
4. **VAEDecode Video** → Decode video latent to frames
5. **Save Video** or **SaveWEBM** → Export the animation

### Key Parameters

| Parameter | Value | Notes |
|-----------|-------|-------|
| video_frames | 14-84 | More frames = longer video |
| motion_bucket_id | 127-255 | Higher = more motion |
| fps | 6-30 | Frames per second |
| width/height | multiples of 8 | e.g., 1024x576 |
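Two sanity checks fall out of the table: the clip length is simply `video_frames / fps`, and the dimensions must be multiples of 8. Small illustrative helpers (not part of ComfyUI or the existing scripts):

```python
def svd_clip_seconds(video_frames, fps):
    """Approximate clip length in seconds: frames / fps."""
    return video_frames / fps

def check_svd_dims(width, height):
    """Reject dimensions that are not multiples of 8 (per the table above)."""
    if width % 8 or height % 8:
        raise ValueError("width and height must be multiples of 8")
    return width, height
```

For example, the default 14 frames at 6 fps yields a clip of roughly 2.3 seconds.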

### Example Checkpoints

- `svd.safetensors` - Standard SVD
- `svd_xt.safetensors` - Higher quality, slower

### Generate Video via API

POST to `/api/prompt`:
```json
{
  "prompt": {
    "1": {"class_type": "ImageOnlyCheckpointLoader", "inputs": {"ckpt_name": "svd.safetensors"}},
    "2": {"class_type": "LoadImage", "inputs": {"image": "input_frame.png"}},
    "3": {"class_type": "VAELoader", "inputs": {"vae_name": "ae"}},
    "4": {"class_type": "SVD_img2vid_Conditioning", "inputs": {
      "clip_vision": ["1", 1], "init_image": ["2", 0], "vae": ["3", 0],
      "width": 1024, "height": 576, "video_frames": 14, "motion_bucket_id": 127, "fps": 6
    }},
    "5": {"class_type": "KSampler", "inputs": {"seed": 42, "steps": 14, "cfg": 1.2, "sampler_name": "euler", "model": ["1", 0], "positive": ["4", 0], "negative": ["4", 1], "latent_image": ["4", 2]}},
    "6": {"class_type": "VAEDecodeVideo", "inputs": {"samples": ["5", 0], "vae": ["3", 0]}},
    "7": {"class_type": "SaveVideo", "inputs": {"video": ["6", 0], "filename_prefix": "SVD_"}}
  }
}
```

### Check for Video Model Availability

```bash
curl -s "http://localhost:8188/api/object_info" | python3 -c "import sys,json; d=json.load(sys.stdin); print([k for k in d.keys() if 'video' in k.lower() or 'svd' in k.lower()])"
```

Note: Video generation requires significant VRAM (12GB+ recommended for 14 frames at 1024x576).

## Remote Access (Cloudflare Tunnel)

To give me access to your local ComfyUI from anywhere, set up a Cloudflare Tunnel.

### Quick Setup (Copy/paste each line in PowerShell as Administrator):

```powershell
# 1. Download cloudflared
irm https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-windows-amd64.exe -OutFile cloudflared.exe

# 2. Login with your Cloudflare account (opens browser)
.\cloudflared.exe tunnel login

# 3. Create a named tunnel
.\cloudflared.exe tunnel create local-comfyui

# 4. Route a hostname to it, then run it (keep this window open; share the hostname)
.\cloudflared.exe tunnel route dns local-comfyui <your-hostname>
.\cloudflared.exe tunnel run --url http://localhost:8188 local-comfyui
```

Or for temporary access (stops when you close terminal):

```powershell
.\cloudflared.exe tunnel --url http://localhost:8188
```

Once you have a public URL (like https://xyz123.trycloudflare.com), share it with me and I can:
- Generate images directly via API
- Check queue status
- Download results
- Run full consistency workflows

### Troubleshooting

- **"Command not found"**: Run PowerShell as Administrator
- **Login failed**: Create free Cloudflare account first at cloudflare.com
- **URL not working**: Keep the terminal window open while the tunnel is in use

## Iterative Consistency Correction

Automated quality control: generate images, analyze them against character references, and regenerate until the consistency threshold is met.

### Quick Start

```bash
cd /mnt/c/Users/fbmor/broken-spire-comparison
python3 automated_consistency_workflow_local.py --iterative
```

### Commands

| Command | Description |
|---------|-------------|
| `--analyze` | Only analyze, don't regenerate |
| `--iterative` | Run iterative correction loop |
| `--threshold` | Consistency threshold (default: 0.85 = 85%) |
| `--max-iterations` | Max iterations (default: 10) |

### How It Works

1. **Generate** - Queue prompts to local ComfyUI
2. **Analyze** - Compare each generated frame against character reference using perceptual hash + SSIM
3. **Score** - Calculate consistency score (0-100%)
4. **Loop** - If score < 85%, adjust prompt and regenerate
5. **Repeat** - Until all frames pass threshold or max iterations reached

### Analysis Metrics

- **Perceptual Hash (pHash)** - Detects structural similarity
- **SSIM** - Measures perceived quality differences
- **Combined Score** - Weighted average targeting 85% threshold
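The actual script presumably uses a pHash library and SSIM; as a stdlib-only illustration of the combined-score idea, the sketch below uses an average hash with Hamming-distance similarity plus a normalized pixel-difference score as a stand-in for SSIM. The 50/50 weights are an assumption, not the script's real weighting.

```python
def average_hash(gray):
    """Simple perceptual hash: threshold each cell of a small grayscale
    grid against the mean brightness. `gray` is a 2D list of 0-255 values
    already downscaled (resizing elided for brevity)."""
    flat = [v for row in gray for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hash_similarity(h1, h2):
    """1.0 minus the normalized Hamming distance between two hashes."""
    dist = sum(a != b for a, b in zip(h1, h2))
    return 1.0 - dist / len(h1)

def combined_score(ref_gray, gen_gray, w_hash=0.5, w_pix=0.5):
    """Weighted structural + pixel score in [0, 1]; a frame passes when
    the score meets the threshold (default 0.85)."""
    s_hash = hash_similarity(average_hash(ref_gray), average_hash(gen_gray))
    flat_r = [v for row in ref_gray for v in row]
    flat_g = [v for row in gen_gray for v in row]
    s_pix = 1.0 - sum(abs(a - b) for a, b in zip(flat_r, flat_g)) / (255 * len(flat_r))
    return w_hash * s_hash + w_pix * s_pix
```

An identical pair scores 1.0; the iterative loop regenerates any frame scoring below the threshold.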

### Reference Images

Character references located at:
```
C:/Users/fbmor/broken-spire-comparison/references/
```

Files:
- `Ash.png` - Main character
- `Far-Future Ash.png` - Villain version
- `Everly.png` - Military character
- `Éva Moreau.png` - Doctor/scientist
- `Nova Human.png` - Warrior (human form)
- `Violet Devil.png` / `Violet humaine.png` - Duality character
- `Lin Weishan.png` - Fighter
- `TC-23.png` - Esper
- `Jonas.png` - Ghost/memory

### Troubleshooting

- **"No image in history"** - Check ComfyUI is running at http://localhost:8188
- **"Low consistency score"** - Add more specific character tags to prompt
- **"Consistent failures"** - Check reference image matches desired output

Note: ComfyUI must be running locally before starting. Launch it with:
```bash
# In the ComfyUI directory
python main.py --preview-method auto
```