LiteChat is a modular, extensible, and privacy-focused AI chat application designed for power users, developers, and teams. It supports multiple AI providers, advanced prompt engineering, project-based organization, and powerful developer features like virtual file systems, Git integration, and a comprehensive modding system.
not self-hostable, though (t3.chat is a great and really cheap service, but proprietary…)
The single-language Docker images (fr, es, it, de) are under 100 MB, and the full version (with all the languages) is around 250 MB (I should build a dedicated "en" image…)
You are right @ericg, for now it must be in the top directory of your HTTP server…
I have to refine my build process; it is a bit messy at the moment around internationalization and Docker support.
The fix is fairly easy, though: the src attribute for the main script and the href for the CSS file start with a /; removing it should do the trick (I am not a YunoHost user, so I do not really know how things are handled there :\ )
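The fix above could be scripted roughly like this; this is a sketch assuming a Vite-style layout where the built page is `dist/index.html` and assets live under `/assets/` (adjust both to your actual build output):

```shell
# Sketch: strip the leading "/" from asset URLs in the built index.html
# so the bundle resolves assets relative to the page, not the server root.
# Paths ("dist/index.html", "/assets/") are assumptions about the build layout.
sed -i 's|src="/assets/|src="assets/|g; s|href="/assets/|href="assets/|g' dist/index.html
```

Alternatively, since the build uses Vite, setting `base: './'` in the Vite config produces a relocatable bundle in the first place.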
No account here (everything lives in the browser); you can sync via a git repository, either per conversation or globally, and the same goes for settings.
I heartily recommend using an OpenRouter key here: you'll have access to most of the providers/models, and model abilities are better supported from there.
I really like it so far, but one big problem I have concerns other users of my server. It's unrealistic for me to share my API key with every user so they can enter it every time (or every first time) they need to use the AI.
Is there a way to have an option to store API keys, chats, etc., on the server that would be disabled by default?
With such an option, a user would connect to the server and interact with the AI directly, with no need to pre-configure anything (no API key to enter and no models to select).
Also, is it possible to add an option when selecting AI models to determine which one would be enabled by default?
Oh no no, it is on a per-browser basis. Nothing is stored on the server; it really is just an HTTP server serving a JS bundle, and everyone brings their own key.
You can try the online hosted version (completely static, on GitHub Pages) at https://litechat.dev and look at the requests made: nothing goes out to me, only between you and your AI provider.
As for the default key, I do not have that in store quite yet… I am working on another project right now (one that will maybe put food in the fridge ^^), but I could add something like that in the future.
The only way for now is to build the app with a specific settings file so it loads those values as defaults.
git clone git@github.com:DimitriGilbert/LiteChat.git
cd LiteChat
npm install
VITE_USER_CONFIG_FILE="path/to/your/config/file.json" npm run build
The bundled version will be in the dist folder; place it at the root of your HTTP server.
You can also set the system prompt file with VITE_SYSTEM_PROMPT_FILE.
The config file is the one you export from the Settings > Data tab.
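Before deploying, a quick local smoke test of the build could look like this; since the bundle is fully static, any static file server works (the port here is an arbitrary choice):

```shell
# Serve the freshly built dist/ folder with Python's built-in static server.
# Port 8080 and the use of python3 are assumptions; any static server works.
python3 -m http.server 8080 --directory dist
```

Then open http://localhost:8080 and check that your exported settings are loaded as defaults.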
I really wanted to keep it client-only for many reasons (privacy ^^, mostly privacy), but I will try to whip up something as a docker compose stack at some point, building on other ongoing LiteChat work for easy config distribution (you can always read the code if you're curious, it is not buried far away ^^).
I could probably add a way to fetch the config from a remote JSON file as a stopgap if you are interested, but that means it would have to be secured somehow (if your server endpoint is behind a login, it's OK; I will also add the possibility of HTTP basic auth, but that is shaky at best ^^).
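On the client side, such a stopgap could look roughly like this; everything here (the URL, the credentials, the file name) is a hypothetical illustration, not an existing LiteChat feature:

```shell
# Hypothetical sketch: fetch a shared config from a login-protected endpoint
# over HTTP basic auth, then import it via Settings > Data.
# URL, user, and password are made-up placeholders.
curl -s -u alice:s3cret -o litechat-config.json \
  https://your-server.example/litechat/config.json
```

Note that basic auth only base64-encodes the credentials, so this is only reasonable over HTTPS.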
It would be Friday/Saturday, I think ('tis when I'm maintaining LiteChat).