It's funny how the same behavior plays out in primates. If a middle-ranked male chimpanzee is denied what he wants by a top-ranking male, he takes out his frustration on a lower-ranked male. The 80s documentary Chimpanzee Family explains it well.
The 3090 in my server (Ollama on it only gets occasional use nowadays, since I have dual 5080s on my work desktop) also handles accelerated transcoding in Plex, and is in the process of being set up to monitor my 3D printers for failures via camera.
I'm also considering setting up Home Assistant with LLM support again.
I use an older machine/GPU for wintertime heating, mining Monero (xmrig).
Should I get lucky and find the next valid block, that pays the entire month's electricity. Since an electric space heater would already be consuming the exact same number of kWh as this GPU, there is no "negative cost" to operating it.
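For anyone doing the math: every watt the card draws ends up as room heat, same as a resistive space heater, so only the mining payout changes the ledger. A rough back-of-envelope sketch in Python, with placeholder numbers (the wattage and electricity rate are assumptions, not my actual figures):

    # Back-of-envelope for the "free heat" argument; all numbers are placeholders.
    gpu_watts = 250                            # assumed steady draw of the mining GPU
    kwh_month = gpu_watts / 1000 * 24 * 30     # kWh over a 30-day month (~180)
    price_per_kwh = 0.15                       # assumed electricity rate, USD
    heating_cost = kwh_month * price_per_kwh

    # A resistive heater dialed to the same wattage consumes identical kWh,
    # so the marginal cost of mining instead of just heating is zero; any
    # XMR found is pure offset against the bill.
    print(f"{kwh_month:.0f} kWh/month -> ${heating_cost:.2f} of heat either way")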
This machine/GPU used to be my main workhorse, and still has llama3.2 available in Ollama, but even with HBM, 8 GB of VRAM isn't really relevant in LLM-land.