

A complete guide to using sd webui in API mode

Introduction:

  • This is my own record of Python calls against sd webui in API mode that I have personally verified to work. I will do my best to write and maintain this tutorial well.
  • Of course, there are some things I have not managed to get working either; if you have succeeded with them, please tell me in the comments how you called them~ Note: reproduction without permission is prohibited.

Knowledge belongs to all of humanity.
Say no to shoddy paywalled tutorials, starting with me.


📋 TODO:

  • txt2img
  • img2img
  • Getting / reloading / switching the base checkpoint 🔥🔥🔥
  • ControlNet 🔥🔥🔥
  • Segmentation (SAM) 🔥🔥🔥
  • Getting png_info
  • Outpainting
  • Image stacking
  • Portrait fixing (ADetailer)
  • Image blending

Heads-up: once the service is running in API mode (i.e. the webui was started with the --api flag), open your address:port + /docs#/ to find the corresponding API documentation; mine, for example, is at http://127.0.0.1:7860/docs#/
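
If you just want to confirm that the API is reachable before writing a full script, a minimal sketch like the one below lists the available checkpoints through the same /sdapi/v1/sd-models endpoint used later in this post (the "title" field is what the standard response contains; adjust host and port to your own server):

import requests

# Quick sanity check that the webui API answers.
res = requests.get("http://127.0.0.1:7860/sdapi/v1/sd-models", timeout=10)
res.raise_for_status()
print([m["title"] for m in res.json()])  # titles of the checkpoints the server can see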

txt2img example

Note: fill in your own address and port!

from datetime import datetime
import urllib.request
import base64
import json
import time
import os

webui_server_url = 'http://127.0.0.1:7861'
out_dir_t2i = os.path.join('api_out', 'txt2img')  # directory where generated images are saved
os.makedirs(out_dir_t2i, exist_ok=True)


def timestamp():
    '''Timestamp string for file names'''
    return datetime.fromtimestamp(time.time()).strftime("%Y%m%d-%H%M%S")


def decode_and_save_base64(base64_str, save_path):
    '''base64 → image file'''
    with open(save_path, "wb") as file:
        file.write(base64.b64decode(base64_str))


def call_api(api_endpoint, **payload):
    data = json.dumps(payload).encode('utf-8')
    request = urllib.request.Request(
        f'{webui_server_url}/{api_endpoint}',
        headers={'Content-Type': 'application/json'},
        data=data,
    )
    response = urllib.request.urlopen(request)
    return json.loads(response.read().decode('utf-8'))


def call_txt2img_api(**payload):
    response = call_api('sdapi/v1/txt2img', **payload)
    for index, image in enumerate(response.get('images')):
        save_path = os.path.join(out_dir_t2i, f'txt2img-{timestamp()}-{index}.png')
        decode_and_save_base64(image, save_path)


if __name__ == '__main__':
    payload = {
        "prompt": "masterpiece, (best quality:1.1)",
        "negative_prompt": "",
        "seed": 1,
        "steps": 20,
        "width": 512,
        "height": 512,
        "cfg_scale": 7,
        "sampler_name": "DPM++ 2M",
        "n_iter": 1,
        "batch_size": 1,
    }
    call_txt2img_api(**payload)
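
A side note: the txt2img call above blocks until generation finishes. If you want to watch a long job from another process or thread, the webui also exposes a progress route; this is only a minimal polling sketch, assuming the standard /sdapi/v1/progress endpoint is available on your build:

import time
import requests

webui_server_url = 'http://127.0.0.1:7861'

# Poll while a txt2img/img2img request is running elsewhere.
for _ in range(60):
    prog = requests.get(f'{webui_server_url}/sdapi/v1/progress').json()
    print(f"progress: {prog['progress']:.0%}  eta: {prog['eta_relative']:.1f}s")
    if prog['progress'] == 0:  # nothing running (or the job just finished)
        break
    time.sleep(1)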

img2img example

from datetime import datetime
import urllib.request
import base64
import json
import time
import os

webui_server_url = 'http://127.0.0.1:7861'
out_dir_i2i = os.path.join('api_out', 'img2img')  # directory where generated images are saved
os.makedirs(out_dir_i2i, exist_ok=True)


def timestamp():
    '''Timestamp string for file names'''
    return datetime.fromtimestamp(time.time()).strftime("%Y%m%d-%H%M%S")


def encode_file_to_base64(path):
    '''image file → base64'''
    with open(path, 'rb') as file:
        return base64.b64encode(file.read()).decode('utf-8')


def decode_and_save_base64(base64_str, save_path):
    '''base64 → image file'''
    with open(save_path, "wb") as file:
        file.write(base64.b64decode(base64_str))


def call_api(api_endpoint, **payload):
    data = json.dumps(payload).encode('utf-8')
    request = urllib.request.Request(
        f'{webui_server_url}/{api_endpoint}',
        headers={'Content-Type': 'application/json'},
        data=data,
    )
    response = urllib.request.urlopen(request)
    return json.loads(response.read().decode('utf-8'))


def call_img2img_api(**payload):
    response = call_api('sdapi/v1/img2img', **payload)
    for index, image in enumerate(response.get('images')):
        save_path = os.path.join(out_dir_i2i, f'img2img-{timestamp()}-{index}.png')
        decode_and_save_base64(image, save_path)


if __name__ == '__main__':
    init_images = [
        encode_file_to_base64(r"api_out/txt2img/0.png"),
    ]
    batch_size = 2
    payload = {
        "prompt": "1girl, blue hair",
        "seed": 1,
        "steps": 20,
        "width": 512,
        "height": 512,
        "denoising_strength": 0.5,
        "n_iter": 1,
        "init_images": init_images,
        "batch_size": batch_size if len(init_images) == 1 else len(init_images),
    }
    call_img2img_api(**payload)

Base checkpoint operations

1. Getting the checkpoint list

Lists the checkpoint models visible to the running service. The response is a list of dicts; print it and pick out the fields you need (e.g. title, model_name, filename).

import requests

url = "http://127.0.0.1:7861/sdapi/v1/sd-models"
res = requests.get(url)
print(res.json())
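
The list above only tells you what is installed. If you want to know which checkpoint is currently loaded, the options endpoint is the usual place to look; a minimal sketch, assuming the standard /sdapi/v1/options route and its sd_model_checkpoint / sd_vae fields:

import requests

opts = requests.get("http://127.0.0.1:7861/sdapi/v1/options").json()
print(opts.get("sd_model_checkpoint"))  # title of the checkpoint currently in use
print(opts.get("sd_vae"))               # currently selected VAE
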
2. Reloading the checkpoints

import requests

url = "http://127.0.0.1:7861/sdapi/v1/refresh-checkpoints"
res = requests.post(url)

Note: this returned 200 for me, but it finished almost instantly, so it feels less like an actual weight reload and more like the ♻️ refresh button (i.e. it just rescans the checkpoint list).
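
If what you actually want is to drop the weights out of memory and load them back, rather than just rescanning the folder, newer webui builds expose unload/reload routes. This is only a sketch under the assumption that your version has them; check your /docs#/ page first:

import requests

base = "http://127.0.0.1:7861"
requests.post(f"{base}/sdapi/v1/unload-checkpoint")  # unload the current checkpoint from memory
requests.post(f"{base}/sdapi/v1/reload-checkpoint")  # load it back again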

3. Switching the checkpoint 🔥🔥🔥

Simply add an "override_settings" field to the data payload, like this:

data ={"prompt":"a girl","negative_prompt":"boy","seed":-1,# 随机种子"sampler_name":"取样器(之间复制webui的名字就行)","cfg_scale":7,# 提示词相关性 越大越接近提示词"width":512,# 宽 (注意要被16整除)"height":512,# 高 (注意要被16整除)"override_settings":{"sd_model_checkpoint":"sd_xl_base_1.0.safetensors [31e35c80fc]",# 指定大模型"sd_vae":"Automatic",# 指定vae 默认自动},"override_settings_restore_afterwards":True# override_settings 是否在之后恢复覆盖的设置 默认是True}

Then run it and watch the backend log for a checkpoint-switch message; in most cases this works.
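
For completeness, here is a minimal end-to-end sketch of the same idea using requests. Everything is taken from the examples above except the final print: the "info" field of the response is assumed to carry the generation metadata, which lets you confirm which checkpoint was actually used:

import requests

payload = {
    "prompt": "a girl",
    "steps": 20,
    "width": 512,
    "height": 512,
    "cfg_scale": 7,
    "sampler_name": "DPM++ 2M",
    "override_settings": {
        # use a "title" returned by /sdapi/v1/sd-models here
        "sd_model_checkpoint": "sd_xl_base_1.0.safetensors [31e35c80fc]",
    },
    "override_settings_restore_afterwards": False,  # keep the new checkpoint loaded for later calls
}
res = requests.post("http://127.0.0.1:7861/sdapi/v1/txt2img", json=payload)
print(res.json().get("info"))  # generation metadata, including the model that was used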

If the steps above fail to switch the checkpoint, you will have to modify the webui code itself; if you are comfortable with code, try the following steps:
① Modify modules\api\models.py and add a "model_name" key:

StableDiffusionTxt2ImgProcessingAPI = PydanticModelGenerator(
    "StableDiffusionProcessingTxt2Img",
    StableDiffusionProcessingTxt2Img,
    [
        {"key": "sampler_index", "type": str, "default": "Euler"},
        {"key": "script_name", "type": str, "default": None},
        {"key": "script_args", "type": list, "default": []},
        {"key": "send_images", "type": bool, "default": True},
        {"key": "save_images", "type": bool, "default": False},
        {"key": "alwayson_scripts", "type": dict, "default": {}},
        {"key": "model_name", "type": str, "default": None},  # newly added line
    ]
).generate_model()

StableDiffusionImg2ImgProcessingAPI = PydanticModelGenerator(
    "StableDiffusionProcessingImg2Img",
    StableDiffusionProcessingImg2Img,
    [
        {"key": "sampler_index", "type": str, "default": "Euler"},
        {"key": "init_images", "type": list, "default": None},
        {"key": "denoising_strength", "type": float, "default": 0.75},
        {"key": "mask", "type": str, "default": None},
        {"key": "include_init_images", "type": bool, "default": False, "exclude": True},
        {"key": "script_name", "type": str, "default": None},
        {"key": "script_args", "type": list, "default": []},
        {"key": "send_images", "type": bool, "default": True},
        {"key": "save_images", "type": bool, "default": False},
        {"key": "alwayson_scripts", "type": dict, "default": {}},
        {"key": "model_name", "type": str, "default": None},  # newly added line
    ]
).generate_model()

② Modify modules\processing.py
Add model_name: str = None to both the StableDiffusionProcessingTxt2Img and StableDiffusionProcessingImg2Img classes. If your version is fairly old, add it inside the def __init__ of these two classes; otherwise just declare it as a class attribute. For example, mine looks like this:

class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
    enable_hr: bool = False
    denoising_strength: float = 0.75
    firstphase_width: int = 0
    firstphase_height: int = 0
    hr_scale: float = 2.0
    hr_upscaler: str = None
    hr_second_pass_steps: int = 0
    hr_resize_x: int = 0
    hr_resize_y: int = 0
    hr_checkpoint_name: str = None
    hr_sampler_name: str = None
    hr_prompt: str = ''
    hr_negative_prompt: str = ''
    model_name: str = None  # newly added line

③ Modify the text2imgapi and img2imgapi functions in modules\api\api.py

def text2imgapi(self, txt2imgreq: models.StableDiffusionTxt2ImgProcessingAPI):
    ...
    model_name = txt2imgreq.model_name        # newly added line
    with self.queue_lock:
        if model_name is not None:            # newly added block: load the requested checkpoint first
            w_info = sd_models.CheckpointInfo(os.path.join('models/Stable-diffusion', model_name))
            sd_models.reload_model_weights(info=w_info)
        with closing(StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)) as p:
            ...


def img2imgapi(self, img2imgreq: models.StableDiffusionImg2ImgProcessingAPI):
    ...
    model_name = img2imgreq.model_name        # newly added line
    with self.queue_lock:
        if model_name is not None:            # newly added block: load the requested checkpoint first
            w_info = sd_models.CheckpointInfo(os.path.join('models/Stable-diffusion', model_name))
            sd_models.reload_model_weights(info=w_info)
        with closing(StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args)) as p:
            ...

④ 🏃‍♂️ Calling it
Add a model_name field to the payload you send and the checkpoint will be switched.

payload ={"prompt":"masterpiece, (best quality:1.1)","negative_prompt":"","seed":1,"steps":2,"width":512,"height":512,"cfg_scale":7,"sampler_name":"DPM++ 2M","model_name":"sd_xl_base_1.0.safetensors",}# 新增的字段

ControlNet

First, I assume your webui already has ControlNet installed and the relevant weights downloaded.
For both txt2img and img2img, you add an "alwayson_scripts" field to the payload, as shown below:

from datetime import datetime
import urllib.request
import base64
import json
import time
import os

webui_server_url = 'http://127.0.0.1:7861'
out_dir_t2i = os.path.join('api_out', 'txt2img')  # directory where generated images are saved
os.makedirs(out_dir_t2i, exist_ok=True)


def timestamp():
    '''Timestamp string for file names'''
    return datetime.fromtimestamp(time.time()).strftime("%Y%m%d-%H%M%S")


def decode_and_save_base64(base64_str, save_path):
    '''base64 → image file'''
    with open(save_path, "wb") as file:
        file.write(base64.b64decode(base64_str))


def call_api(api_endpoint, **payload):
    data = json.dumps(payload).encode('utf-8')
    request = urllib.request.Request(
        f'{webui_server_url}/{api_endpoint}',
        headers={'Content-Type': 'application/json'},
        data=data,
    )
    response = urllib.request.urlopen(request)
    return json.loads(response.read().decode('utf-8'))


def call_txt2img_api(**payload):
    response = call_api('sdapi/v1/txt2img', **payload)
    for index, image in enumerate(response.get('images')):
        save_path = os.path.join(out_dir_t2i, f'txt2img-{timestamp()}-{index}.png')
        decode_and_save_base64(image, save_path)


if __name__ == '__main__':
    payload = {
        "prompt": "masterpiece, (best quality:1.1)",
        "negative_prompt": "",
        "seed": 1,
        "steps": 20,
        "width": 512,
        "height": 512,
        "cfg_scale": 7,
        "sampler_name": "DPM++ 2M",
        "n_iter": 1,
        "batch_size": 1,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {
                        "enabled": True,             # enable this unit
                        "control_mode": 0,           # webui "Control Mode"; a string also works, but the index 0/1/2 is recommended
                        "model": "t2i-adapter_diffusers_xl_lineart [bae0efef]",     # webui "Model"
                        "module": "lineart_standard (from white bg & black line)",  # webui "Preprocessor"
                        "weight": 0.45,              # webui "Control Weight"
                        "resize_mode": "Crop and Resize",
                        "threshold_a": 200,          # threshold a, used by some preprocessors
                        "threshold_b": 245,          # threshold b
                        "guidance_start": 0,         # webui "Starting Control Step"
                        "guidance_end": 0.7,         # webui "Ending Control Step"
                        "pixel_perfect": True,       # "Pixel Perfect"
                        "processor_res": 512,        # preprocessor resolution
                        "save_detected_map": False,  # the API also returns the detected map by default (True); set False if you don't need it
                        "input_image": "",           # the control image, base64-encoded
                    }
                    # for multiple ControlNet units, just append another dict like the one above
                ]
            }
        },
    }
    call_txt2img_api(**payload)
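
If you are unsure what strings to put into "model" and "module", the ControlNet extension also registers its own listing routes next to the main API; a small sketch assuming the usual /controlnet/model_list and /controlnet/module_list endpoints (they should also appear on your /docs#/ page):

import requests

base = "http://127.0.0.1:7861"
print(requests.get(f"{base}/controlnet/model_list").json())   # e.g. {"model_list": [...]}
print(requests.get(f"{base}/controlnet/module_list").json())  # e.g. {"module_list": [...]}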

SAM

First, I assume your webui already has the SAM extension installed and the relevant weights downloaded.

import base64
import requests


def image_to_base64(img_path: str) -> str:
    with open(img_path, "rb") as img_file:
        img_base64 = base64.b64encode(img_file.read()).decode()
        return img_base64


def save_image(b64_img: str, p_out: str) -> None:
    with open(p_out, 'wb') as f:
        f.write(base64.b64decode(b64_img))


url = "http://127.0.0.1:7861/sam/sam-predict"
data = {
    "sam_model_name": "sam_hq_vit_b.pth",
    "input_image": image_to_base64("api_out/txt2img/txt2img-20240716-162233-0.png"),
    "sam_positive_points": [[100.25, 100.25]],
    "sam_negative_points": [],
    "dino_enabled": False,
    "dino_model_name": "GroundingDINO_SwinT_OGC (694MB)",
    "dino_text_prompt": "string",
    "dino_box_threshold": 0.3,
    "dino_preview_checkbox": False,
    "dino_preview_boxes_selection": [0],
}
res = requests.post(url, json=data)
save_image(res.json()['masks'][0], 'oup_mask.png')
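
sam-predict usually returns several candidate masks, which is why the example above indexes [0]. Continuing the same script, a small sketch that saves every mask in the response so you can pick the best one by eye:

# Reuses res and save_image from the example above.
masks = res.json().get('masks', [])
for i, mask in enumerate(masks):
    save_image(mask, f'oup_mask_{i}.png')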

Getting png_info

This is a very handy feature: it reads back the metadata (prompt, seed, sampler, and so on) embedded in a PNG generated by the webui.

import base64
import requests


def image_to_base64(img_path: str) -> str:
    with open(img_path, "rb") as img_file:
        img_base64 = base64.b64encode(img_file.read()).decode()
        return img_base64


url = "http://127.0.0.1:7861/sdapi/v1/png-info"
data = {"image": image_to_base64("api_out/txt2img/txt2img-20240716-162233-0.png")}  # path of the image to inspect
res = requests.post(url, json=data)
print(res.json())

Outpainting

Outpainting is implemented here with ControlNet. First install ControlNet and LAMA following any online guide, then download an outpainting-capable ControlNet weight, control_v11p_sd15_inpaint.pth, put it in the corresponding directory, and follow the steps below.

import json
import base64
import requests

def submit_post(url, data):
    return requests.post(url, data=json.dumps(data))


def save_encoded_image(b64_image: str, output_path: str):
    with open(output_path, 'wb') as image_file:
        image_file.write(base64.b64decode(b64_image))


def encode_file_to_base64(path):
    with open(path, 'rb') as file:
        return base64.b64encode(file.read()).decode('utf-8')


if __name__ == '__main__':
    img2img_url = r'http://127.0.0.1:1245/sdapi/v1/img2img'
    init_images = [
        encode_file_to_base64(r"IP-Adapter1.png"),
    ]
    data = {
        "seed": -1,
        "steps": 20,
        "width": 1024,
        "height": 512,
        'sampler_index': 'DPM++ 2M Karras',
        "denoising_strength": 0.5,
        "n_iter": 1,
        "init_images": init_images,
        # "batch_size": batch_size if len(init_images) == 1 else len(init_images),
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {
                        "enabled": True,             # enable this unit
                        "control_mode": 2,           # webui "Control Mode"; a string also works, but the index 0/1/2 is recommended
                        "model": "control_v11p_sd15_inpaint [ebff9138]",  # webui "Model"
                        "module": "inpaint_only+lama",                    # webui "Preprocessor"
                        "weight": 1,                 # webui "Control Weight"
                        "resize_mode": "Resize and Fill",
                        "threshold_a": 200,          # threshold a, used by some preprocessors
                        "threshold_b": 245,          # threshold b
                        "guidance_start": 0,         # webui "Starting Control Step"
                        "guidance_end": 0.7,         # webui "Ending Control Step"
                        "pixel_perfect": True,       # "Pixel Perfect"
                        "processor_res": 512,        # preprocessor resolution
                        "save_detected_map": False,  # the API also returns the detected map by default (True); set False if you don't need it
                        "input_image": encode_file_to_base64(r"IP-Adapter1.png"),  # the control image, base64-encoded
                    }
                    # for multiple ControlNet units, just append another dict like the one above
                ]
            }
        },
    }
    response = submit_post(img2img_url, data)
    save_image_path = r'outpainting.png'
    save_encoded_image(response.json()['images'][0], save_image_path)
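
The amount of outward expansion is controlled simply by asking for a larger canvas than the source image (here width 1024 while keeping height 512), and "Resize and Fill" plus inpaint_only+lama fill in the new area. If you prefer to derive the target size from the input programmatically, here is a small sketch; the target_size helper and its expand_ratio parameter are my own illustration, not part of the webui API:

from PIL import Image


def target_size(path: str, expand_ratio: float = 2.0, multiple: int = 8):
    '''Widen the canvas by expand_ratio, rounded down to a multiple SD accepts.'''
    w, h = Image.open(path).size
    new_w = int(w * expand_ratio) // multiple * multiple
    return new_w, h


print(target_size("IP-Adapter1.png"))  # feed the result into "width"/"height" in the payload above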

Image stacking

Image stacking is implemented with IP-Adapter. First install IP-Adapter following any online guide, then the following steps will stack the images:

import json
import base64
import requests

def submit_post(url, data):
    return requests.post(url, data=json.dumps(data))


def save_encoded_image(b64_image: str, output_path: str):
    with open(output_path, 'wb') as image_file:
        image_file.write(base64.b64decode(b64_image))


def encode_file_to_base64(path):
    with open(path, 'rb') as file:
        return base64.b64encode(file.read()).decode('utf-8')


if __name__ == '__main__':
    img2img_url = r'http://172.0.0.1:4524/sdapi/v1/img2img'

    input_image = encode_file_to_base64(r"/data/zkzou2/wwj/image_generation/stable-diffusion-webui/Canny.jpg")
    style_image = encode_file_to_base64(r"/data/zkzou2/wwj/image_generation/stable-diffusion-webui/IP-Adapter1.png")

    data = {
        "seed": -1,
        "steps": 20,
        "width": 512,
        "height": 512,
        'sampler_index': 'DPM++ 2M Karras',
        "denoising_strength": 0.5,
        "n_iter": 1,
        "init_images": [input_image],
        # "batch_size": batch_size if len(init_images) == 1 else len(init_images),
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {
                        "enabled": True,             # enable this unit
                        "control_mode": 0,           # webui "Control Mode"; a string also works, but the index 0/1/2 is recommended
                        "model": "ip-adapter_sd15_plus [32cd8f7f]",  # webui "Model"
                        "module": "ip-adapter_clip_sd15",            # webui "Preprocessor"
                        "weight": 1,                 # webui "Control Weight"
                        "resize_mode": "Crop and Resize",
                        "threshold_a": 200,          # threshold a, used by some preprocessors
                        "threshold_b": 245,          # threshold b
                        "guidance_start": 0,         # webui "Starting Control Step"
                        "guidance_end": 0.7,         # webui "Ending Control Step"
                        "pixel_perfect": True,       # "Pixel Perfect"
                        "processor_res": 512,        # preprocessor resolution
                        "save_detected_map": False,  # the API also returns the detected map by default (True); set False if you don't need it
                        "input_image": style_image,  # the style image, base64-encoded
                    },
                    {
                        "enabled": True,             # enable this unit
                        "control_mode": 0,
                        "model": "control_v11p_sd15_canny [d14c016b]",  # for stacking you can swap the Model and Preprocessor
                        "module": "canny",
                        "weight": 1,
                        "resize_mode": "Crop and Resize",
                        "threshold_a": 200,
                        "threshold_b": 245,
                        "guidance_start": 0,
                        "guidance_end": 0.7,
                        "pixel_perfect": True,
                        "processor_res": 512,
                        "save_detected_map": False,
                        "input_image": input_image,  # the structure image, base64-encoded
                    }
                ]
            }
        },
        "override_settings": {
            "sd_model_checkpoint": "v1-5-pruned-emaonly.safetensors [6ce0161689]",  # the checkpoint to use
            "sd_vae": "Automatic",                                                  # the VAE to use; "Automatic" by default
        },
        "override_settings_restore_afterwards": True,  # restore overridden settings afterwards; default True
    }

    response = submit_post(img2img_url, data)
    save_image_path = r'stacked.png'
    save_encoded_image(response.json()['images'][0], save_image_path)

Portrait fixing (ADetailer)

Someone in the comments asked whether ADetailer can be called through the API — of course it can!
First install ADetailer following any online guide and download the weights it needs, then call it along the lines of the following code:

import json
import base64
import requests
import os

def submit_post(url, data):
    return requests.post(url, data=json.dumps(data))


def save_encoded_image(b64_image: str, output_path: str):
    with open(output_path, 'wb') as image_file:
        image_file.write(base64.b64decode(b64_image))


def encode_file_to_base64(path):
    with open(path, 'rb') as file:
        return base64.b64encode(file.read()).decode('utf-8')


if __name__ == '__main__':
    img2img_url = r'http://127.0.0.1:1245/sdapi/v1/img2img'
    image = [
        encode_file_to_base64("/data/zkzou2/wwj/image_generation/stable-diffusion-webui/Ad2.png"),
    ]
    data = {
        "prompt": "a beautiful girl, full body",
        "seed": 816743407,
        "steps": 20,
        "width": 512,
        "height": 512,
        'sampler_index': 'DPM++ 2M SDE Karras',
        "denoising_strength": 0.55,
        "n_iter": 1,
        "init_images": image,
        "model_name": "v1-5-pruned-emaonly.safetensors",  # only works with the model_name patch from the checkpoint-switching section
        # "batch_size": batch_size if len(init_images) == 1 else len(init_images),
        "alwayson_scripts": {
            "ADetailer": {
                "args": [
                    True,  # ad_enable
                    True,  # whether to skip the base img2img pass (skip_img2img)
                    {"ad_model": "face_yolov8n.pt", "ad_prompt": "detailed face"},
                    # append more dicts like the one above to add more ad_model units
                ]
            }
        },
    }
    response = submit_post(img2img_url, data)
    save_image_path = r"Ad_output.png"
    save_encoded_image(response.json()['images'][0], save_image_path)
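
For reference, the ADetailer args are positional: the first bool toggles the extension, the second (on recent versions) toggles skipping the base img2img pass, and every following dict is one detection unit. Below is a small sketch chaining a face pass and a hand pass; hand_yolov8n.pt is one of the models ADetailer normally ships with, assumed to be downloaded on your side:

# Drop-in replacement for the "alwayson_scripts" block in the payload above.
alwayson_scripts = {
    "ADetailer": {
        "args": [
            True,    # enable ADetailer
            False,   # do not skip the base img2img pass
            {"ad_model": "face_yolov8n.pt", "ad_prompt": "detailed face"},  # first detection unit
            {"ad_model": "hand_yolov8n.pt"},                                # second detection unit
        ]
    }
}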

Image blending

Still exploring...


Reposted from: https://blog.csdn.net/qq_37160051/article/details/140485768
Copyright belongs to the original author, 讯飞摸鱼躺平王.
