Ask HN: How to get started with local language models?
I remember using Talk to Transformer in 2019 and making little Markov chains for silly text generation. I've lurked in the /g/ /lmg/ threads, installed Umbrel's LlamaGPT on a Raspberry Pi, and run the WebGPU-based WebGPT (GPT-2) locally. But I don't know how anything works or how to do anything beyond following installation instructions on GitHub. Hugging Face still confuses me; there's so much stuff out there to read through, and I have been lost since the release of the Llama models. I heard about Mis...
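(For anyone unfamiliar with the "little Markov chains" mentioned above: the idea is just to record, for each word in a corpus, which words follow it, then walk those transitions randomly. A minimal sketch in Python — the corpus and function names here are made up for illustration:)

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word tuple to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Start from a random key and repeatedly pick a random successor."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:  # dead end: no recorded successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Toy corpus, purely illustrative
corpus = "the cat sat on the mat and the cat ran"
print(generate(build_chain(corpus), length=8))
```

(Raising `order` makes the output more coherent but more repetitive of the source text; modern LLMs are, very loosely, the heavily-generalized descendant of this "predict the next token from context" idea.)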