finetuning-llm-models

OpenAccess-AI-Collective/axolotl: Go ahead and axolotl questions

If you're training on a non-code dataset and plan to fine-tune a model smaller than 34B parameters, you should train Mistral before Llama 2 (IMO).
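
For context, axolotl runs are driven by a YAML config file. Below is a minimal sketch of what a Mistral fine-tune config can look like, loosely following the shape of the example configs in the axolotl repo; the dataset path and every hyperparameter here are placeholders, not recommendations, and exact keys may vary by axolotl version.

```yaml
# Sketch of an axolotl config for a Mistral QLoRA fine-tune.
# All values below are illustrative placeholders.
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

# 4-bit QLoRA keeps the memory footprint small for a 7B model
load_in_4bit: true
adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true

datasets:
  - path: your-org/your-dataset   # placeholder dataset
    type: alpaca                  # prompt format of the dataset

sequence_len: 4096
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
output_dir: ./out
```

Training is then launched via the CLI entry point documented in the repo's README, e.g. `accelerate launch -m axolotl.cli.train your_config.yml`.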