TL;DR. This application is built on top of ml-stable-diffusion and uses the Stable Diffusion model to generate images locally on your device. iPhone 12 Pro and newer (iOS 16.2+), iPads with M1/M2 (iPadOS 16.2+), and Macs with M1/M2 (macOS 13.1+) are supported.
Every year Apple releases more powerful devices than the year before. And if you aren't a gamer, most of the time it is hard to see what your new device can actually do.
In late November 2022, Apple released a library that uses Machine Learning models to generate images from text input. It is based on the Stable Diffusion model. You have probably heard about DALL-E 2, and maybe about Stable Diffusion. Both models generate an image from text entered by the user. There are some differences in how DALL-E 2 and Stable Diffusion generate images, but the most important one is that Stable Diffusion was released publicly, while DALL-E 2 is available only via cloud services.
With the help of Apple's ml-stable-diffusion library, you can now really see what your new iPhone, iPad, or Mac can do. Just by writing a sentence describing what you want to see, you can generate an image from the Stable Diffusion Machine Learning model in less than a minute.
Building an application
About a week ago, purely by accident, I was researching whether there are any Machine Learning (ML) models that can generate simple images directly on an iPhone, and to my surprise discovered ml-stable-diffusion, which had been released a day before my search.
I decided to give it a try, with the idea of releasing a simple application free of charge. I could not take credit for work where the main features were built by somebody else.
Unfortunately, I ran into some issues with making this application available to the public:
- This library is available only for the Apple Silicon architecture, but Apple currently does not allow publishing Mac applications in the App Store that don't also include the Intel architecture. That means no native Mac application at this time. However, you can install the iPad version on a Mac, which solves the problem.
- The model is pretty big, close to 3 GB, so the application is large to download. For resources that big, Apple suggests using On-Demand Resources, but even those cannot be that large unless you host them on your own server, and traffic is expensive, so I just bundled the model for now. The application size is about 2.5 GB.
- It is still very unstable: there are many variables that can let a job finish successfully, but can just as easily crash your device when the app is terminated due to an out-of-memory exception. I have preconfigured the app to make it work.
I decided to go ahead anyway and let the public try this library on their own devices to generate any image they want from the text they enter.
The library requires at least iOS 16.2, iPadOS 16.2, and macOS 13.1. At the time of writing, all of those OS versions were in the Release Candidate state, so if you aren't running a beta, you most likely don't meet the minimum requirements. That said, I was able to run the library on macOS 13.0.1 without any issues, but could not finish a generation with iOS 16.0 on my iPhone 14 Pro.
The ml-stable-diffusion library lists minimum requirements of an iPhone 12 and iOS 16.2, but the library and the Machine Learning model are very memory hungry. I also have an iPhone 13 mini, which has only 4 GB of RAM compared to the 6 GB in the iPhone 14 Pro, and I could not finish image generation on the mini. I would say you need an iPhone Pro model, an iPad with an M1/M2 chip, or a MacBook with an M1/M2 processor to run this library.
You can lower the Step Count to see whether the results are still satisfying. The model can produce great results with as few as 20 steps.
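For readers curious what this looks like in code, here is a minimal sketch of driving the library directly with a reduced step count. It assumes the Swift API shape of ml-stable-diffusion's initial (late-2022) release; the resource path, prompt, and seed are placeholders, and parameter names may differ in later versions of the library.

```swift
import Foundation
import CoreML
import StableDiffusion

// Hypothetical path to the compiled Core ML Stable Diffusion resources.
let resourceURL = URL(fileURLWithPath: "/path/to/CompiledModels")

// Restricting compute units to CPU + GPU is the safer default on iPhone.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndGPU

let pipeline = try StableDiffusionPipeline(
    resourcesAt: resourceURL,
    configuration: configuration
)

// Fewer steps finish faster; around 20 can still look good.
let images = try pipeline.generateImages(
    prompt: "Pencil sketch of bear head with paw",
    stepCount: 20,
    seed: 42
)
```

Each call returns one CGImage per requested image; the step count is the main knob that trades generation time against detail.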
Running on iPhone
By default, the application sets the configuration to use CPU and GPU as compute units, and a task takes about a minute to finish. The first task takes longer, since loading the resources can take some time. This is based on running the application on an iPhone 14 Pro.
I would assume that any iPhone with 6 GB of RAM can handle it, as long as it runs iOS 16.2+ and is an iPhone 12 Pro or newer.
If it crashes, you can try to free up memory by restarting your phone.
Running on iPad
I was testing it on my iPad Pro M2 with 16 GB of RAM (the 2 TB version). You can use all compute units, and it definitely takes far less time to finish a task compared to the iPhone.
I would assume any iPad Pro with an M1/M2 chip can handle it. If your iPad has 8 GB of memory, you might want to adjust the compute units and change them to CPU and GPU only.
Running on macOS
I have a 14-inch MacBook Pro with an M1 Max and 64 GB of RAM. Performance feels similar to the iPad.
My guess is that any M1/M2-powered Mac can handle this application. If you are using an 8 GB model, you might want to try changing the compute units from All to CPU and GPU, or to CPU and Neural Engine.
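The compute-units setting the app exposes maps onto Core ML's standard `MLComputeUnits` options. As a sketch, switching between them is a one-line change on the model configuration (note that `.cpuAndNeuralEngine` only exists on iOS 16 / macOS 13 and later):

```swift
import CoreML

let configuration = MLModelConfiguration()

// Pick one. On 8 GB machines, avoiding .all can reduce memory pressure.
configuration.computeUnits = .all                 // CPU + GPU + Neural Engine
configuration.computeUnits = .cpuAndGPU           // skip the Neural Engine
configuration.computeUnits = .cpuAndNeuralEngine  // skip the GPU
```

Which combination is fastest depends on the device; the Neural Engine is efficient but loading models for it can take noticeably longer up front.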
I feel the most common use cases are:
- Getting ideas for logos, drawings, and designs. For example, the logo of this application was generated by the application itself.
- Learning to draw: you can use a prompt like Pencil sketch of bear head with paw and use the result as a reference for practicing drawing. For example, the image above was generated with that prompt; it did not get the paw right, but you can still practice drawing with it.
- For fun, creativity.
But make sure to read the license.
- All generated images are synced with iCloud.
- You can drag and drop images to Finder or other applications.
- You can export images.
- You can submit many tasks to be executed in sequence and review them later (don't do that on an iPhone).
The application is available via TestFlight: just accept the invitation, install it, and use it.
Please review the CreativeML Open RAIL++-M License, especially the Distribution and Redistribution sections.
We believe very strongly in our customers' right to privacy. Our customer records are not for sale or trade, and we will not disclose our customer data to any third party except as may be required by law.
Any information that you provide to us in the course of interacting with our sales or technical support departments is held in strict confidence. This includes your contact information (including, but not limited to, your email address and phone number), as well as any data that you supply to us during a technical support interaction.
Please email any suggestions, ideas, questions, or discovered bugs to firstname.lastname@example.org