Can’t chat! Forced to close this app!
The idea is great, but please test your app before uploading it to the App Store.
Response from developer
Local inference requires high device performance. We recommend selecting a model that suits your device. If your device lacks the necessary performance, try running the inference on a computer and connecting your phone to it via API for a smoother experience. We'll continue to improve. Thank you for your support!
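For reference, here is a minimal sketch of that computer-plus-phone setup, assuming the computer runs a server that exposes an OpenAI-compatible chat endpoint; the address, port, and model name below are placeholders, not the app's actual configuration:

import requests

# Placeholder address: replace with your computer's LAN IP and the server's port.
SERVER_URL = "http://192.168.1.10:8080/v1/chat/completions"

payload = {
    "model": "llama-3-8b-instruct",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello from my phone!"}],
}

# The client (here, a phone on the same network) sends the chat request
# to the computer, which runs the inference and returns the completion.
response = requests.post(SERVER_URL, json=payload, timeout=60)
print(response.json()["choices"][0]["message"]["content"])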
Large models
I have an M2 Ultra Mac (72-core, 128 GB). It works with small models, but 4-bit 70B models don't work: they load, yet never return anything. The output is always just blank.
Really good
I came to see if there were any other reviews here, but there were none, so here I am: the app's really good and works great.
Response from developer
Thank you for your support and feedback! We're glad you're enjoying the app, and we'll keep working to improve your experience.
Excellent
Works with Llama. I have downloaded an MLX file via LM Studio, but Mollama is unable to search for or locate the file. Is it possible to point the app to an MLX file?
Tested with MLX - straightforward - many thanks for the free app.