Version 3.0 for Windows 11, 10 with Nvidia GPUs

Release Notes

Note: Fusion Quill does not send us any information about your AI model usage (inputs and outputs of the AI models). Please use it responsibly and follow local and international laws. Check our Terms for more details.

Download Instructions:

  1.  Click the Get Fusion Quill link on the Microsoft site. It will open the Microsoft Store app.
  2. Click Install on the Fusion Quill app page in the Microsoft Store app.
  3. Click Open after it is installed. You can also open it from the Windows Start menu.
  4. Start Fusion Quill.
  5. On first launch, a configuration wizard will help you connect to different AI APIs or download a local model.
  6. See the Help page for more information.
  7. Note that AI models are large and will take time to download.
  8. Contact us with any issues.

Fusion Quill connects to Local Models and API models.

  • Local Models – Mistral LLM
  • API Models – OpenAI, Azure AI, Google Gemini, Amazon Bedrock, Ollama, Hugging Face, vLLM, llama.cpp, etc.
  • AI Word processor
  • Chat with AI – Debate Coach, Interviewer Coach

Basic Requirements

  • Windows 11 or 10 PC with an Internet connection
  • 16GB RAM

Requirements for Local Inference

  • Windows 11 (preferably manufactured in the last 2 years)
  • Nvidia GPU preferred, with an updated GPU driver.
  • CUDA 11 or 12 installed for faster local LLM inference on Nvidia GPUs.
  • 32 GB RAM (text generation works with 16 GB but is slow).
  • 12 GB minimum disk space; 24+ GB recommended.
  • First-time AI generation is slow due to model loading time.
  • A fast Internet connection (> 50 Mbps) is recommended for downloading large AI models.
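To confirm the GPU driver and CUDA requirements above are met before downloading a local model, you can run a quick check from a command prompt. This is a hypothetical pre-flight script, not something shipped with Fusion Quill:

```shell
#!/bin/sh
# Hypothetical check (not part of Fusion Quill): verify the Nvidia
# driver and CUDA toolkit that local inference relies on.
if command -v nvidia-smi >/dev/null 2>&1; then
  # Report GPU name, driver version, and total VRAM
  nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
else
  echo "nvidia-smi not found: install or update the Nvidia GPU driver"
fi

if command -v nvcc >/dev/null 2>&1; then
  nvcc --version   # should report CUDA 11 or 12
else
  echo "nvcc not found: CUDA toolkit not detected (inference may fall back to CPU)"
fi
```

The script only reports what it finds; it exits successfully whether or not a GPU is present.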