Couldn't find Lora with name guofeng3_v32Light
 0% 0/20 [00:00<?, ?it/s]
 Error completing request
Arguments: ('task(i569b770wukqzw4)', 'one nude asian girl 18 years old with slender body, young face, visible armpits, sideburns,messy hair,posing on bed with white sheets,full body photo,bright room,window,sunlight,white sunbeams,4k,realistic, ultra detailed,detailed,masterpiece,highres,by Jeremy Lipking, by Antonio J Manzanedo,(by Alphonse Mucha:0.4),(Ultra detailed),(portrait),<lora:guofeng3_v32Light:0.2>,<lora:liuyifei_10:0.9>', 'paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, (outdoor:1.6), manboobs, backlight,(ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, (bad anatomy:1.21), (bad proportions:1.331), (disfigured:1.331), (more than 2 nipples:1.331), (missing arms:1.331), (extra legs:1.331), (fused fingers:1.61051), (too many fingers:1.61051), (unclear eyes:1.331), ', [], 20, 0, False, False, 1, 1, 7, 964412687.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/content/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 636, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/content/stable-diffusion-webui/modules/processing.py", line 836, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/content/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/content/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 227, in launch_sampling
    return func()
  File "/content/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 145, in forward
    devices.test_for_nans(x_out, "unet")
  File "/content/stable-diffusion-webui/modules/devices.py", line 152, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
0% 0/20 [00:00<?, ?it/s]
 Error completing request
Arguments: ('task(cbo8xeey2hnbevp)', 'one nude asian girl 18 years old with slender body, young face, visible armpits, sideburns,messy hair,posing on bed with white sheets,full body photo,bright room,window,sunlight,white sunbeams,4k,realistic, ultra detailed,detailed,masterpiece,highres,by Jeremy Lipking, by Antonio J Manzanedo,(by Alphonse Mucha:0.4),(Ultra detailed),(portrait),<lora:liuyifei_10:0.9>', 'paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, (outdoor:1.6), manboobs, backlight,(ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, (bad anatomy:1.21), (bad proportions:1.331), (disfigured:1.331), (more than 2 nipples:1.331), (missing arms:1.331), (extra legs:1.331), (fused fingers:1.61051), (too many fingers:1.61051), (unclear eyes:1.331), ', [], 20, 0, False, False, 1, 1, 7, 964412687.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/content/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "/content/stable-diffusion-webui/modules/processing.py", line 636, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/content/stable-diffusion-webui/modules/processing.py", line 836, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/content/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/content/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 227, in launch_sampling
    return func()
  File "/content/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 351, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 145, in forward
    devices.test_for_nans(x_out, "unet")
  File "/content/stable-diffusion-webui/modules/devices.py", line 152, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Loading weights [None] from /content/stable-diffusion-webui/models/Stable-diffusion/chilloutmix_NiPrunedFp32Fix.safetensors
This is the error reported by Stable Diffusion.
This might have been a server-side network error; after I took a short break and tried again, it worked.
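For reference, the NansException message itself already names the usual fix: enable "Upcast cross attention layer to float32" in Settings > Stable Diffusion, or launch with --no-half (--disable-nan-check only hides the check rather than fixing it). The separate "Couldn't find Lora with name guofeng3_v32Light" line just means that LoRA file is not in models/Lora and is unrelated to the NaN failure. Below is a minimal, hypothetical PyTorch sketch (not webui code) of why half precision can produce these NaNs and why upcasting helps:

```python
import torch

# Hypothetical illustration (not webui code): float16 overflows above ~65504,
# so a large activation becomes inf, and inf - inf then propagates as nan --
# which is how an "all NaNs" UNet output can appear under half precision.
x_fp16 = torch.tensor([70000.0], dtype=torch.float16)
print(x_fp16)            # tensor([inf], dtype=torch.float16)
print(x_fp16 - x_fp16)   # tensor([nan], dtype=torch.float16)

# The same computation in float32 (roughly what "Upcast cross attention layer
# to float32" or --no-half achieve for the affected layers) stays finite.
x_fp32 = torch.tensor([70000.0], dtype=torch.float32)
print(x_fp32 - x_fp32)   # tensor([0.])
```

If the error keeps recurring on the same prompt after a restart, the precision settings above are the workaround documented in the message; the trade-off is higher VRAM use and slower generation.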