🗃️ Prerequisites
Before diving into Rubra, make sure your machine is equipped with the following tools:
🗃️ Installation
Before proceeding with the installation, ensure you have met the prerequisites.
🗃️ Local LLM Deployment
To create assistants that run entirely on your machine, you must run a model locally. We recommend the OpenHermes-NeuralChat merged model, a 7-billion-parameter model that is roughly 6 GB. We have tested Rubra with this model; you can use any other model at your own risk. If you'd like support for other models, let us know by opening a GitHub issue!
🗃️ LLM Configuration File (Optional)
Before you start with Rubra, configure the models you want Rubra to access by editing the llm-config.yaml file.
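As a rough illustration of what such a file could contain (the key names below are assumptions for the sketch, not Rubra's documented schema — consult the shipped llm-config.yaml for the real structure):

```yaml
# Hypothetical llm-config.yaml sketch -- keys are illustrative only.
models:
  - name: openhermes-neural-chat-7b   # locally deployed model
    endpoint: http://localhost:8000   # where the local server listens
  - name: gpt-4
    api_key: YOUR_API_KEY             # omit to disable remote models
```

Whatever the actual schema, the idea is the same: each entry tells Rubra which models it may route requests to and how to reach them.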
🗃️ Delete Data & Uninstall
Uninstall Rubra