Date: 26.01.2026
Begin your journey with LLMs
…
Hugging Face
There is one dominant hub and community for LLMs: Hugging Face.
It makes sense to create an account there as soon as you start working with LLMs.
It is also helpful to have external storage, as a single model can take up several dozen GB of disk space. Make sure you have enough room.
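To see why models eat disk space so quickly, a rough back-of-the-envelope sketch: an unquantized weight file is approximately parameter count times bytes per parameter. The parameter counts below are illustrative examples, not tied to any specific model release.

```python
# Rough estimate of a model's weight-file size:
# parameter count x bytes per parameter (fp16 = 2 bytes).
# The parameter counts below are illustrative assumptions.

def weights_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight size in GB (1 GB = 2**30 bytes)."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

for params in (7, 13, 70):
    print(f"{params}B model @ fp16: ~{weights_gb(params, 2):.0f} GB")
```

A 7B-parameter model in fp16 already lands around 13 GB, so a small collection of models fills a drive fast.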
Recommended models
- Prefer models under the Apache 2.0, MIT, or OpenRAIL licenses
LLM
Text completion/chat
Multimodal LLM
Embedding
Rerank
Non-LLM
Picture generation
- Stable Diffusion 1.5
- Stable Diffusion 2.0
- Stable Diffusion XL
Video generation
TTS
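The Embedding and Rerank categories above both come down to comparing vectors: an embedding model turns text into a vector, and relevance is scored by vector similarity. A minimal sketch in pure Python, using made-up 4-dimensional vectors (real embedding models output hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of a query and two documents.
query = [0.9, 0.1, 0.0, 0.2]
doc_a = [0.8, 0.2, 0.1, 0.3]  # similar topic to the query
doc_b = [0.0, 0.9, 0.8, 0.1]  # different topic

# A reranker sorts candidate documents by a relevance score like this.
for name, doc in (("doc_a", doc_a), ("doc_b", doc_b)):
    print(f"{name}: {cosine_similarity(query, doc):.3f}")
```

In practice a dedicated rerank model scores query-document pairs more accurately than raw cosine similarity, but the sorting idea is the same.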
Civitai
Civitai is a good example of a cloud image generation service, where you can train your own LoRA and generate pictures.
Software
- PyTorch - the main framework for running and training LLMs
- llama.cpp - a C++ runtime for running quantized LLMs locally in the GGUF format
- CUDA - NVIDIA's API for general-purpose computation on NVIDIA GPUs
- ROCm - AMD's equivalent GPU compute stack
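The point of llama.cpp is that quantization shrinks a model's memory footprint enough to run locally. A back-of-the-envelope sketch (bit-widths are illustrative; real GGUF quant types such as Q4_K_M add some metadata and scale overhead on top):

```python
# Approximate memory footprint of model weights at different bit-widths.
# Weights only: actual GGUF files carry extra metadata and scales.

def approx_gb(n_params_billion: float, bits_per_param: float) -> float:
    """Approximate weight footprint in GB (1 GB = 2**30 bytes)."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 2**30

for bits, label in ((16, "fp16"), (8, "~Q8"), (4, "~Q4")):
    print(f"7B @ {label}: ~{approx_gb(7, bits):.1f} GB")
```

Going from fp16 to a 4-bit quant cuts the footprint roughly 4x, which is what makes a 7B model fit on a consumer GPU or even in system RAM.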
Hardware
In order to work with LLMs you need a powerful GPU; I recommend a minimum of 32 GB of VRAM.
- Nvidia - RTX 4090/5090
- AMD - Instinct MI100/MI200/MI300