<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kyosuke Takayama</title>
    <description>The latest articles on DEV Community by Kyosuke Takayama (@ktakayama).</description>
    <link>https://dev.to/ktakayama</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F43425%2F9b4cd35e-034e-4f74-bc5a-da5ccd849736.jpeg</url>
      <title>DEV Community: Kyosuke Takayama</title>
      <link>https://dev.to/ktakayama</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ktakayama"/>
    <language>en</language>
    <item>
      <title>Running Stable Diffusion on M1 MacBook Pro</title>
      <dc:creator>Kyosuke Takayama</dc:creator>
      <pubDate>Wed, 24 Aug 2022 03:49:21 +0000</pubDate>
      <link>https://dev.to/ktakayama/running-stable-diffusion-on-m1-macbook-pro-54kn</link>
      <guid>https://dev.to/ktakayama/running-stable-diffusion-on-m1-macbook-pro-54kn</guid>
      <description>&lt;p&gt;original article here: &lt;a href="https://zenn.dev/ktakayama/articles/6c627e0956f32c"&gt;https://zenn.dev/ktakayama/articles/6c627e0956f32c&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;AI image generator Stable Diffusion is now open source. I wanted to run it on my local machine, but since I only have a MacBook Pro, it was not so easy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/CompVis/stable-diffusion"&gt;https://github.com/CompVis/stable-diffusion&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following thread was very helpful!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/CompVis/stable-diffusion/issues/25"&gt;https://github.com/CompVis/stable-diffusion/issues/25&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Speed&lt;/h2&gt;

&lt;p&gt;Here are the specs of my 14-inch MacBook Pro.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apple M1 Pro chip&lt;/li&gt;
&lt;li&gt;8 core CPU with 6 performance cores and 2 efficiency cores&lt;/li&gt;
&lt;li&gt;14-core GPU&lt;/li&gt;
&lt;li&gt;16-core Neural Engine&lt;/li&gt;
&lt;li&gt;32GB memory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It needs about 15–20 GB of memory while generating images, and six images can be generated in about 5 minutes.&lt;/p&gt;

&lt;h2&gt;Get the model&lt;/h2&gt;

&lt;p&gt;Register at Hugging Face, accept the model license, and clone this repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/CompVis/stable-diffusion-v-1-4-original"&gt;https://huggingface.co/CompVis/stable-diffusion-v-1-4-original&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Get the source code&lt;/h2&gt;

&lt;p&gt;Get the source code from the apple-silicon-mps-support branch of this repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/magnusviri/stable-diffusion/tree/apple-silicon-mps-support"&gt;https://github.com/magnusviri/stable-diffusion/tree/apple-silicon-mps-support&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Setup&lt;/h2&gt;

&lt;p&gt;Install conda and Rust with Homebrew.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;miniconda rust
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set up the shell environment for conda. I use zsh.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conda init zsh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I run &lt;code&gt;conda env create&lt;/code&gt;, I get an error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;conda &lt;span class="nb"&gt;env &lt;/span&gt;create &lt;span class="nt"&gt;-f&lt;/span&gt; environment-mac.yaml
Collecting package metadata &lt;span class="o"&gt;(&lt;/span&gt;repodata.json&lt;span class="o"&gt;)&lt;/span&gt;: &lt;span class="k"&gt;done
&lt;/span&gt;Solving environment: failed

ResolvePackageNotFound:
  - &lt;span class="nv"&gt;python&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3.8.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit environment-mac.yaml so the pinned versions match what conda can actually resolve on your machine. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gh"&gt;diff --git a/environment-mac.yaml b/environment-mac.yaml
index d923d56..c8a0a8e 100644
&lt;/span&gt;&lt;span class="gd"&gt;--- a/environment-mac.yaml
&lt;/span&gt;&lt;span class="gi"&gt;+++ b/environment-mac.yaml
&lt;/span&gt;&lt;span class="p"&gt;@@ -3,14 +3,14 @@&lt;/span&gt; channels:
   - pytorch
   - defaults
 dependencies:
&lt;span class="gd"&gt;-  - python=3.8.5
-  - pip=20.3
&lt;/span&gt;&lt;span class="gi"&gt;+  - python=3.9.12
+  - pip=21.2.4
&lt;/span&gt;   - pytorch=1.12.1
   - torchvision=0.13.1
   - numpy=1.19.2
   - pip:
     - albumentations==0.4.3
&lt;span class="gd"&gt;-    - opencv-python==4.1.2.30
&lt;/span&gt;&lt;span class="gi"&gt;+    - opencv-python&amp;gt;=4.1.2.30
&lt;/span&gt;     - pudb==2019.2
     - imageio==2.9.0
     - imageio-ffmpeg==0.4.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
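&lt;p&gt;After editing the file, run &lt;code&gt;conda env create&lt;/code&gt; again; with the adjusted versions it should now resolve:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conda &lt;span class="nb"&gt;env &lt;/span&gt;create &lt;span class="nt"&gt;-f&lt;/span&gt; environment-mac.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;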



&lt;p&gt;Activate the environment and symlink the model checkpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conda activate ldm
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; models/ldm/stable-diffusion-v1
&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /path/to/stable-diffusion-v-1-4-original/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Generate images!&lt;/h2&gt;

&lt;p&gt;I get a PyTorch-related error when I execute txt2img.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;python scripts/txt2img.py &lt;span class="nt"&gt;--prompt&lt;/span&gt; &lt;span class="s2"&gt;"a photograph of an astronaut riding a horse"&lt;/span&gt; &lt;span class="nt"&gt;--plms&lt;/span&gt; 
〜 skip 〜
NotImplementedError: The operator &lt;span class="s1"&gt;'aten::index.Tensor'&lt;/span&gt; is not current implemented &lt;span class="k"&gt;for &lt;/span&gt;the MPS device. If you want this op to be added &lt;span class="k"&gt;in &lt;/span&gt;priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can &lt;span class="nb"&gt;set &lt;/span&gt;the environment variable &lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="nv"&gt;PYTORCH_ENABLE_MPS_FALLBACK&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1&lt;span class="sb"&gt;`&lt;/span&gt; to use the CPU as a fallback &lt;span class="k"&gt;for &lt;/span&gt;this op. WARNING: this will be slower than running natively on MPS.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the nightly version of PyTorch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;conda &lt;span class="nb"&gt;install &lt;/span&gt;pytorch torchvision torchaudio &lt;span class="nt"&gt;-c&lt;/span&gt; pytorch-nightly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This still fails with an error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;python scripts/txt2img.py &lt;span class="nt"&gt;--prompt&lt;/span&gt; &lt;span class="s2"&gt;"a photograph of an astronaut riding a horse"&lt;/span&gt; &lt;span class="nt"&gt;--plms&lt;/span&gt; 
    &lt;span class="k"&gt;return &lt;/span&gt;torch.layer_norm&lt;span class="o"&gt;(&lt;/span&gt;input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled&lt;span class="o"&gt;)&lt;/span&gt;
RuntimeError: view size is not compatible with input tensor&lt;span class="s1"&gt;'s size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fix this error by following the workaround in this comment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/CompVis/stable-diffusion/issues/25#issuecomment-1221667017"&gt;https://github.com/CompVis/stable-diffusion/issues/25#issuecomment-1221667017&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;vi /opt/homebrew/Caskroom/miniconda/base/envs/ldm/lib/python3.9/site-packages/torch/nn/functional.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gd"&gt;--- functional.py_      2022-08-23 17:07:29.000000000 +0900
&lt;/span&gt;&lt;span class="gi"&gt;+++ functional.py       2022-08-23 17:07:31.000000000 +0900
&lt;/span&gt;&lt;span class="p"&gt;@@ -2506,9 +2506,9 @@&lt;/span&gt; def layer_norm(
     """
     if has_torch_function_variadic(input, weight, bias):
         return handle_torch_function(
&lt;span class="gd"&gt;-            layer_norm, (input, weight, bias), input, normalized_shape, weight=weight, bias=bias, eps=eps
&lt;/span&gt;&lt;span class="gi"&gt;+            layer_norm, (input.contiguous(), weight, bias), input, normalized_shape, weight=weight, bias=bias, eps=eps
&lt;/span&gt;         )
&lt;span class="gd"&gt;-    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
&lt;/span&gt;&lt;span class="gi"&gt;+    return torch.layer_norm(input.contiguous(), normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
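&lt;p&gt;Instead of editing the file by hand, the same change can be applied with the &lt;code&gt;patch&lt;/code&gt; command (assuming the diff above is saved as &lt;code&gt;layer_norm.patch&lt;/code&gt;, a file name chosen here for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;patch /opt/homebrew/Caskroom/miniconda/base/envs/ldm/lib/python3.9/site-packages/torch/nn/functional.py &amp;lt; layer_norm.patch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;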



&lt;p&gt;Now everything works. Great!!!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;python scripts/txt2img.py &lt;span class="nt"&gt;--prompt&lt;/span&gt; &lt;span class="s2"&gt;"a photograph of an astronaut riding a horse"&lt;/span&gt; &lt;span class="nt"&gt;--plms&lt;/span&gt; 
...
Your samples are ready and waiting &lt;span class="k"&gt;for &lt;/span&gt;you here:
outputs/txt2img-samples

Enjoy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>stablediffusion</category>
    </item>
  </channel>
</rss>
