To configure a #wireguard peer using #NetworkManager on #archlinux
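A minimal sketch of how that can be done with `nmcli`'s WireGuard import (the config path and connection name below are assumptions, not from this note):

```shell
# Import an existing wg-quick style config into NetworkManager.
# /etc/wireguard/wg0.conf is a hypothetical path -- substitute your own file.
nmcli connection import type wireguard file /etc/wireguard/wg0.conf

# The connection name is derived from the filename; bring it up or down by name.
nmcli connection up wg0
nmcli connection down wg0
```

NetworkManager then manages the interface, routes, and DNS from the imported profile, so no separate wg-quick service is needed.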
-
13:25 quick capture: GeForce RTX™ 3090 GAMING OC 24G Specification | Graphics Card - GIGABYTE Global #mittelab
-
20:42 quick capture: +100 to this, I don't think many people reading this thread realize how easy they've made it to run an LLM locally. It's a great start if you want to kick multiple tires (be careful to clean up! the gigs add up).
chmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile
./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile -ngl 999
https://euri.ca/blog/2024-llm-self-hosting-is-easy-now/
-
20:44 quick capture: LlamaIndex can make this task possible in surprisingly few lines of code: https://docs.llamaindex.ai/en/stable/understanding/putting_it_all_together/q_and_a/#semantic-search
You'll likely want to move beyond the first examples so you can choose your own models and methods. Either way, LlamaIndex has tons of great documentation and was originally built for this purpose. They also have a commercial parsing product with very generous free quotas (last I checked).
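A hedged sketch of those few lines, following the linked starter pattern (the `data/` folder and the query string are placeholders; this assumes `llama-index` is installed and an embedding/LLM backend is configured, e.g. an OpenAI API key in the environment):

```python
# Minimal LlamaIndex semantic-search sketch -- not runnable without
# `pip install llama-index` and a configured LLM/embedding backend.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # load local files
index = VectorStoreIndex.from_documents(documents)     # embed and index them
query_engine = index.as_query_engine()                 # Q&A interface

response = query_engine.query("What does this document say about X?")
print(response)
```

Swapping the default models for local ones (e.g. via the llamafile above) is exactly the "choose your own models and methods" step the note mentions.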
-
20:54 quick capture: Turn your computer into an AI computer - Jan