Base model: SDXL
Homepage: https://civitai.com/models/152134/genshinxl-ganyu?modelVersionId=322603
Version: v3.0-alpha
Base Model: SDXL 1.0
Trigger Words: GANYU \(GENSHIN IMPACT\)
v3.0-alpha:
This version attempts to train Ganyu's different forms and outfits (though not entirely successfully).
ganyu \(twilight blossom\)
ganyu_(heytea)_(genshin_impact)
ganyu_(child)
ganyu_(china_merchants_bank)
--
Training model: Animagine XL V3
Version 2.5 update:
Trained on 4,800 Ganyu images for stronger generalization.
Trained with pruned captions: eye- and hair-related tags were removed.
Version 2.0 update:
Smaller file size, trained with DIM 4 (network dimension 4).
Version 2.0 was trained on a total of 300 hand-picked images.
Trigger words: 1girl,ganyu_\(genshin_impact\),solo,blue_hair,breasts,long_hair,detached_sleeves,bell,horns,gloves,bare_shoulders,bangs,gold_trim,looking_at_viewer,purple_eyes,white_sleeves,sidelocks,ahoge,thighlet,black_gloves,white_flower,neck_bell,medium_breasts,
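For reference, the trigger words above can be combined with the recommended 0.6~0.8 LoRA weight in a prompt such as the following. The filename `ganyu_v2` is a placeholder; substitute the actual name of the LoRA file you downloaded:

```text
<lora:ganyu_v2:0.7>, 1girl, ganyu_\(genshin_impact\), solo, blue_hair, long_hair,
detached_sleeves, bell, horns, gold_trim, looking_at_viewer, purple_eyes, ahoge
```

Not every tag from the list is required; the character tag plus a handful of distinctive appearance tags is usually enough to evoke the character.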
Version 1.5:
I used over 300 images to train this model, which is why I didn't use regularization: I believe 300 images are already sufficient for generalization.
Additionally, I captioned with full tags, so you may need more trigger words to evoke the character (yes, the trigger word alone is no longer effective).
Training model: SDXL 1.0 official base model
Testing model: Kohaku-XL
Recommended weight: 0.6~0.8
Trigger word: (\gan yu\)
The English portion is translated by GPT and may contain errors.
It is worth noting that in training this model I used waifuc to collect and process the training set. It can easily filter out images I don't need, such as images with more than one person, miscellaneous images, and black-and-white images, which improved the quality of my training set.
https://deepghs.github.io/waifuc/main/tutorials-CN/installation/index.html
https://deepghs.github.io/waifuc/main/index.html
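To illustrate the kind of filtering involved, here is a minimal Pillow-based sketch of a monochrome/black-and-white check. This is NOT waifuc's actual implementation (waifuc ships its own, more robust actions such as monochrome filtering out of the box); it is only a hand-rolled approximation of the idea, assuming that a nearly saturation-free image is grayscale:

```python
from PIL import Image


def is_monochrome(img: Image.Image, saturation_threshold: float = 0.05) -> bool:
    """Rough heuristic: an image whose pixels carry almost no color
    saturation is likely grayscale/monochrome.

    Not waifuc's actual algorithm -- just a simple illustration of
    the kind of dataset filter the tool applies.
    """
    # Downscale for speed; exact pixels don't matter for a mean estimate.
    small = img.convert("HSV").resize((64, 64))
    # Band 1 of HSV mode is saturation, in the range 0-255.
    saturation = list(small.getdata(band=1))
    mean_saturation = sum(saturation) / len(saturation) / 255.0
    return mean_saturation < saturation_threshold
```

A dataset-cleaning loop would then simply skip any file for which `is_monochrome` returns `True` before the image reaches the training set.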