getting hands-on with AI

It may sound strange coming from someone over 50, but I bought my first serious graphics card to evaluate LLMs. The entry level is an RTX 3090 with 24 GB of VRAM. That is still way too little, but at least the 30B to 70B range is now within reach, and some nice things can be done with it. As long as you take your time.
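Why 24 GB only barely covers that range: a rough back-of-envelope estimate (my own rule of thumb, not a benchmark) is about half a byte per parameter at 4-bit quantization, plus some overhead for the KV cache and runtime.

```python
def q4_vram_gb(params_billions, bytes_per_param=0.5, overhead=1.2):
    """Very rough VRAM estimate for a 4-bit quantized model.

    bytes_per_param: ~0.5 bytes at 4-bit quantization (assumption).
    overhead: fudge factor for KV cache, activations, CUDA context.
    """
    return params_billions * bytes_per_param * overhead

print(q4_vram_gb(30))  # 18.0 -> squeezes into a 24 GB RTX 3090
print(q4_vram_gb(70))  # 42.0 -> needs offloading to RAM, hence the patience
```

A 30B model fits with room for a modest context window; a 70B one only runs with partial CPU offloading, which is where the "take your time" comes in.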

I am simultaneously evaluating Claude AI, Claude Code, and the API, plus Mistral and OpenAI on the commercial side, and locally mainly Ollama and vLLM. Everything is bundled behind LiteLLM as a proxy, with Open WebUI as the frontend. On top of that, Claude AI serves as my daily driver and Claude Code as a comparison against tools like Aider etc.
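A minimal sketch of what that bundling looks like in a LiteLLM proxy config. Model names, the local model tag, and the port are placeholders from a setup like mine, not prescriptions:

```yaml
model_list:
  # local model served by Ollama on the same box
  - model_name: local-llm
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
  # commercial model, key pulled from the environment
  - model_name: claude
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY
```

Open WebUI then only needs to know the proxy's single OpenAI-compatible endpoint, and every backend shows up as just another model in the dropdown.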

VyOS as a firewall for my office setup at home

As my Ubiquiti Access Point broke, I changed the whole topology of my home network. A solid, proven consumer-grade router sits in front (easy for the family to handle, minimizing overhead for myself). For the more professional requirements I added an N6000-based mini PC as a firewall for my office, connected as an exposed host behind the front router. It handles all the stuff I need and separates my personal devices into VLANs per use case.
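On the VyOS side, per-use-case VLANs come down to a handful of config statements. A hedged sketch only: interface names, VLAN IDs, and subnets are made up for illustration, and the NAT syntax shown is the VyOS 1.4 style:

```
# VLAN 10 for the office, VLAN 20 for personal devices,
# both as 802.1Q sub-interfaces on the LAN port
set interfaces ethernet eth1 vif 10 description 'office'
set interfaces ethernet eth1 vif 10 address '192.168.10.1/24'
set interfaces ethernet eth1 vif 20 description 'personal'
set interfaces ethernet eth1 vif 20 address '192.168.20.1/24'

# masquerade the office subnet out through the uplink to the front router
set nat source rule 100 outbound-interface name 'eth0'
set nat source rule 100 source address '192.168.10.0/24'
set nat source rule 100 translation address 'masquerade'
```

Because the box is the exposed host, the front router forwards everything to it and all actual filtering between the VLANs happens here.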

Dokku

Evaluating https://dokku.com/. This is really nice.

A CLI PaaS that makes deploying really easy. I have tried it with a Clojure web app and a few Hugo blogs so far.
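The deploy flow is essentially git push. A sketch of the usual Dokku workflow, where the server hostname and app name are placeholders:

```shell
# once, on the server
dokku apps:create myblog

# on the laptop: point a git remote at the Dokku host and push
git remote add dokku dokku@my-server:myblog
git push dokku main
```

Dokku detects the app type on push (via buildpacks or a Dockerfile), builds it, and routes traffic to the new container, which is exactly what makes it pleasant for small Hugo and Clojure projects.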

init

Just recreated this small private blog for future, mainly private, purposes. It serves tech-related stuff. Most of the documents are not indexed on the front page for now; it is more of a filebin thing where I can publish content and share it via the document URI.