AI Models
Fusion Quill Personal v1 uses the Whisper Small model for speech recognition.
Fusion Quill Professional Editions will let you switch between different models.
Fusion Quill v1.0 uses the Mistral Instruct v1 model for local LLMs.
Fusion Quill v1 uses the following models for the following image styles:
- Photo and B&W Photo – Photon
- Portrait – Realistic Vision
- Landscape – DreamShaper
- Cartoon – RealCartoon Pixar
- Anime – DreamShaper
- Painting – Dream lab soil
- Comic – Western Animation
Setup
Fusion Quill uses a GPU to accelerate AI performance. If you don’t have a GPU, it will still work, but more slowly.
If you have an NVIDIA GPU, you can update your GPU driver and install CUDA for faster LLM and image generation performance.
If you have an AMD or Intel GPU, you can update your GPU driver for better image generation performance with DirectML.
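If you are unsure whether your PC has an NVIDIA driver installed, a quick way to check is to look for the `nvidia-smi` tool that ships with NVIDIA drivers. This is an illustrative check only, not part of Fusion Quill:

```python
import shutil
import subprocess

def detect_nvidia_gpu() -> bool:
    """Return True if the NVIDIA driver tool `nvidia-smi` is on PATH and runs."""
    if shutil.which("nvidia-smi") is None:
        return False  # no NVIDIA driver installed
    try:
        subprocess.run(["nvidia-smi"], check=True, capture_output=True)
        return True
    except (subprocess.CalledProcessError, OSError):
        return False  # tool present but failed (e.g. no GPU attached)

print("NVIDIA GPU driver detected:", detect_nvidia_gpu())
```

If this prints `True`, updating the driver and installing CUDA should speed up local LLM and image generation.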
Fusion Quill does not require Python. It uses DirectML ONNX versions of Stable Diffusion models for image generation and llama.cpp GGUF models for local LLMs, so no Python installation is needed.
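GGUF is a self-contained binary format, which is part of why no Python runtime is needed: a loader just reads the file directly. Every GGUF file starts with the 4-byte magic `GGUF` followed by a version field, so you can sanity-check a downloaded model with a few lines (the file name below is a stand-in, not a real Fusion Quill model):

```python
import struct

GGUF_MAGIC = b"GGUF"  # all llama.cpp GGUF model files begin with this magic

def is_gguf(path: str) -> bool:
    """Check whether a file looks like a llama.cpp GGUF model."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Demonstrate with a tiny in-memory stand-in (no model download needed):
with open("fake_model.gguf", "wb") as f:
    f.write(GGUF_MAGIC + struct.pack("<I", 3))  # magic + little-endian version

print(is_gguf("fake_model.gguf"))  # True
```

A check like this can catch truncated or interrupted downloads before you try to load a model.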
AI models range from 400 MB to 5 GB in size, so depending on your Internet speed, downloading them can take some time. We do not include any models in the Fusion Quill setup, since they would make the installer much larger.
Make sure your PC does not go into sleep mode while downloading.
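As a rough rule of thumb for the wait, you can estimate the download time from the model size and your connection speed (using decimal units and ignoring protocol overhead):

```python
def download_minutes(size_gb: float, speed_mbps: float) -> float:
    """Estimate download time in minutes for a model of size_gb gigabytes
    over a connection of speed_mbps megabits per second."""
    size_megabits = size_gb * 1000 * 8  # GB -> megabits (decimal units)
    return size_megabits / speed_mbps / 60

# A 5 GB model on a 100 Mbit/s connection:
print(round(download_minutes(5, 100), 1))   # ~6.7 minutes
# A 400 MB model on the same connection:
print(round(download_minutes(0.4, 100), 1))  # ~0.5 minutes
```

Real downloads are usually somewhat slower than this estimate because of server limits and network overhead.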
If you use only the OpenAI APIs, you don’t need to download any models for text features.
Privacy
We don’t track your interactions with the AI models (the inputs and outputs). All generated data is stored locally in a Fusion Quill folder inside your Windows Documents folder.
You are responsible for the content you generate and how you use it; make sure it complies with applicable local and international laws.
If you are using an AI API such as OpenAI’s ChatGPT, you are subject to that provider’s terms of service and privacy policy.