Private LLM User Reviews

Top reviews

  • Literally an Open Assistant knockoff

    I’m not against a developer using open-source work, but this is a very thin wrapper around someone else’s work with no credit given. It’s just sad. I found out this was Open Assistant after letting the app chat with itself until it mentioned its name was Open Assistant. Ask it that question at the beginning and it will tell you it’s Personal GPT.

    Developer Response

    Thanks for your feedback. What you're saying is simply not true. The base model used in this app (RedPajama-INCITE-Chat-3B-v1) was trained on the OpenAssistant oasst1 dataset. This app is in no way a very thin wrapper around Open Assistant, or even related to it. FWIW, Open Assistant's app (open-assistant.io) is a web-based chatbot, while this app is a chatbot that runs fully offline on your iPhone. I hope the difference between the two is clear.
  • I like it a lot

    As in it meets, maybe exceeds, my privacy expectations, and works on all my iOS devices WHEN OFFLINE. Emphasis mine; country internet is unreliable. ;-( The developer seems committed, saw this with the guy from the Apollo app, hoping for good things. 10 bucks ain't bad for what I see so far.
  • ChatGPT provides much better answers

    I have ChatGPT on my phone. The answers from Personal GPT sound like they're written by a teenager, while the ones I receive from ChatGPT are very professional and much more informative. ChatGPT was free, so you may understand my frustration at having paid for a service that far underperforms what I can get out of a free application on my cellphone.

    Developer Response

    Thanks for the review. Our app and the underlying models have improved significantly since you wrote this review; we'd appreciate it if you could try it again. Also, the other app that you mention in your review isn't free: you pay for it with your personal data. If you think that your personal data is free, then perhaps the latter app is indeed a better choice for you.
  • wait awhile for it to be improved

    I've got some experience with local large language models such as this one. This app is intended to be a reasonably compact, private chatbot you can use on a Mac with 8 GB of RAM. I purchased it, experimented with it, and in my opinion it's not ready for release.

    The developer provided a prompt and thoughtful response to my inquiry. It may be that this will work better in a couple of weeks. In the meantime, I requested a refund.

    Developer Response

    The app has evolved significantly since v1.0.2. It now has bigger and more fluent models, and a lot of new features have been added since then. If you'd like to try it again, please email us and we could perhaps sign you up for the TestFlight beta.
  • Broken

    This app just feeds you made-up answers. The first question I asked was, what is 10 x 20? The answer I received was: "20 x 10 is equal to the number 100".

    I feel the developer is also deleting reviews. This is really, really shady and I’m going to try to obtain a refund.

    Developer Response

    Thanks for your feedback. Large language models don’t do very well on arithmetic tasks, especially multiplication. Even the best LLMs perform poorly at multiplication. Please read this paper if you’re interested in learning more about this aspect of LLMs: https://arxiv.org/abs/2304.02015. Also, rest assured that we’re not deleting any reviews, and there’s absolutely nothing shady going on.
  • A great tool, with huge potential.

    Obviously, being an offline GPT limits the responses, but the fundamentals for something impressive on future, less memory-restricted devices (e.g., iPhone 15 and beyond) are really exciting. Offline and privacy-focused are essential qualities of this app, and it works quite well. A good bargain for the price, with an active developer. One recommendation/piece of feedback would be to provide information on what the top-p and temperature settings do. If we come across an error or a weird, reproducible response, could you provide an option to email you a log so we can forward you the issue easily? This way you don’t need to embed any privacy-violating analytics packages.

    Developer Response

    Thanks for your feedback! Indeed, we're hoping the upcoming iPhone 15 series of devices will have more memory, so we can ship bigger and smarter models for newer devices (as free updates, of course!). We're in the process of adding an FAQ on our website to answer the temperature and top-p question; we've already received it a few times by now on our support email and on Discord. Perhaps I should also add a help section within the app. Thanks for suggesting the email log idea, I'll add it to our backlog. We specifically don't embed any analytics packages in the app; that would be antithetical to the concept of privacy, which is one of our app's USPs. The only logs we get are crash logs from iOS's off-by-default, opt-in Analytics feature (Settings -> Privacy & Security -> Analytics & Improvements -> Share With App Developers). Also, responses from the LLM in the app are (intentionally) stochastic, and are often hard to reproduce.
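While the FAQ mentioned above is being written, here is a minimal, generic sketch of what the temperature and top-p settings typically control during LLM sampling. It is not Private LLM's actual code; the function name, defaults, and structure are hypothetical illustrations of the common technique.

```swift
import Foundation

/// Illustrative only: temperature scaling plus top-p (nucleus) filtering
/// over a vector of raw logits, followed by random sampling of a token index.
func sampleToken(logits: [Double], temperature: Double = 0.8, topP: Double = 0.9) -> Int {
    // Temperature divides the logits before softmax: lower values sharpen the
    // distribution (more deterministic), higher values flatten it (more random).
    let scaled = logits.map { $0 / max(temperature, 1e-6) }
    let maxLogit = scaled.max() ?? 0
    let exps = scaled.map { exp($0 - maxLogit) }
    let total = exps.reduce(0, +)
    let probs = exps.map { $0 / total }

    // Top-p keeps only the smallest set of tokens whose cumulative probability
    // reaches topP, discarding the unlikely tail before sampling.
    let sorted = probs.enumerated().sorted { $0.element > $1.element }
    var cumulative = 0.0
    var nucleus: [(offset: Int, element: Double)] = []
    for item in sorted {
        nucleus.append(item)
        cumulative += item.element
        if cumulative >= topP { break }
    }

    // Sample proportionally to probability within the retained nucleus.
    var r = Double.random(in: 0..<cumulative)
    for item in nucleus {
        r -= item.element
        if r <= 0 { return item.offset }
    }
    return nucleus.last!.offset
}
```

In short: temperature controls how "adventurous" each token choice is, and top-p caps how far into the unlikely tail of the vocabulary the sampler is allowed to reach.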
  • Getting there!

    UPDATE: the app is much faster now. Still waiting on a prompt library, which would be awesome, but the speed and quality are increasing seemingly by the day. There was a small period of time when it didn’t work on the 13 mini, but that has been fixed. I’m happy with the amount of work the devs are putting in and I’m hopeful for more features in the future. MLC Chat is comparable in speed and quality, but this app is looking to widen the gap by adding more features in the (hopefully near) future.

    Old review:

    In comparison to MLC Chat (free and open source), this app is lacking. MLC Chat offers the ability to install additional models, and its default is on par with or better than this app's responses (like GPT-3 level: lots of incorrect information, and it kinda goes wild when explaining things). Additionally, performance is abysmal compared to MLC Chat, which gives better-quality responses. It’s a great idea but lacks the ability to fine-tune. Being able to change the temperature and other settings for the model would be great. Would also be cool if we could create a starting prompt (or a library to choose from) for the AI so it can be used for things like role-playing or DnD. Kinda bummed that it’s $5 since I was expecting a lot more than MLC AI's app for the pricing. Instead I’ll be uninstalling and checking back later for new features and possible performance increases. If you’re reading this and thinking about buying it, you should probably check out MLC Chat for now and just wait for more features and performance updates.

    Developer Response

    Thanks for your feedback. Unfortunately, we cannot legally distribute the additional model that MLC Chat offers (vicuna-v1-7b), since it's based on the Llama weights, which cannot be commercially redistributed. We don't know how MLC Chat can legally distribute it either. Secondly, you mention the lack of the ability to change the temperature and other settings. This is incorrect: Personal GPT has always had settings for temperature and many more; please look at either the top right of the app or look up Personal GPT within the iOS Settings app. Finally, we've got an update with performance improvements pending release, and we also have many more features in the pipeline for release in the coming weeks. Also, thanks for suggesting the prompt library feature, we're seriously considering implementing it in the near future.
  • Intriguing but

    Writes excellent English and responds in roughly the same time as it would take to manually type the response. A long input, e.g. "rewrite this text" (a whole web page), will often get no response at all.
    I don’t know how it manages to work offline, but it seems to work in airplane mode.
    Generally the answers seem a bit limited and even more factually suspect than with ChatGPT, and it will often insist on the same answer after being corrected. (The PM of NZ is Jacinda Ardern.)
    I’m hoping that with Shortcuts support coming, it could be good for writing prettier versions of rough text.

    Developer Response

    Thanks for the feedback! The app currently contains a quantised 3B-parameter decoder-only (aka GPT) LLM that runs on your device, and the app makes no network connections whatsoever. This is how it works in Airplane mode, while apps like ChatGPT cannot. This is also the reason why the app is a fairly large download (1.6GB), even with data compression. The context length of the current model in the app is 2048 tokens, or about 1500 words (a token roughly corresponds to ~0.75 words). The context length of an LLM is the most text that the model can attend to. For comparison, the baseline GPT-3.5 and GPT-4 models that ChatGPT uses have context lengths of 4096 and 8192 tokens, respectively. In some ways it isn't a fair comparison, because the former (our app) runs on your iPhone, while the other needs multiple large servers, and an active internet connection from your phone to those servers, to run. Anyway, improvements to the context length are within the realm of possibility and an active area of research. I can't promise anything, but we might be able to increase the context length by a bit, soon. We're also experimenting with newer, larger models which have longer context lengths; although, they'll only work on newer iPhones and iPads. WRT the model's factual knowledge, since the app doesn't connect to the internet, its knowledge is limited to what the model's training data contained. Incidentally, I asked ChatGPT (3.5) the same question, and it came up with the same incorrect response that you noted. Shortcuts integration will indeed ship later this week, we hope you'll like it!
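To make the token arithmetic in the response above concrete, here is a minimal sketch that applies the quoted ~0.75 words-per-token rule of thumb; the helper name is hypothetical, not part of the app or any SDK.

```swift
/// Rough word-capacity estimate from a model's context length,
/// using the ~0.75 words-per-token rule of thumb cited above.
func approximateWordCapacity(contextTokens: Int, wordsPerToken: Double = 0.75) -> Int {
    Int(Double(contextTokens) * wordsPerToken)
}

// 2048-token model (this app): 2048 * 0.75 ≈ 1536 words (~1500)
// 4096-token model (GPT-3.5):  4096 * 0.75 ≈ 3072 words
// 8192-token model (GPT-4):    8192 * 0.75 ≈ 6144 words
print(approximateWordCapacity(contextTokens: 2048)) // 1536
```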
  • Doesn’t work on iPhone 13 mini

    With no other apps open, this is what the app shows:

    “Error: this iPhone does not have 3GB of free memory.
    Please try closing any memory intensive apps running in the background and restart Personal GPT.”

    Developer Response

    This has been fixed in the latest 1.1.1 release.
  • Current model is pretty good 👍

    The team was very responsive with a new model update that is a lot more coherent in its replies.

    - Would like to have a history tab
    - Shortcuts integration
    - Document ingestion

    Developer Response

    Thanks for the feedback! The history tab and the Shortcuts integration are on our roadmap. A newer, more coherent model, along with a faster app, will ship next week in the 1.0.4 release.

Alternatives to Private LLM