#hardwarechoices - Public Fediverse posts
Live and recent posts from across the Fediverse tagged #hardwarechoices, aggregated by home.social.
-
Debating TinyBox (TinyCorp) vs NVIDIA DGX Station is like choosing between a tactical off-grid cabin and a corporate panic room.
TinyBox: OSS-friendly, sips power, runs whisper-quiet; great for AI nerds who want root.
DGX: raw CUDA firepower, but you'll need NVIDIA money and maybe a colocation rack.
Depends on whether you want control… or just raw scale.
#AI #TinyBox #DGX #SelfHosted #OpenSource #HardwareChoices
-
Planning to locally host an LLM. Having a lot of internal struggles figuring out the best price curve.
A used NVIDIA Jetson Xavier NX can be acquired for ~$250.
A 4060 w/ 16 GB for ~$400. Not sure I really want to do either. Waiting for NPU support on Transformer models may be a better approach.
#LLM #LocalHosting #NVIDIA #AI #MachineLearning #NPU #Transformers #TechDilemmas #HardwareChoices
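The price-curve question in the post can be roughed out as dollars per GB of on-device memory. A minimal sketch, assuming the used Jetson Xavier NX is the 8 GB variant and taking the "4060 w/ 16 GB" figure at face value (both assumptions are mine, not the poster's):

```python
# Rough $/GB-of-memory comparison for the two options in the post.
# Assumption: Jetson Xavier NX = 8 GB variant; the 16 GB card figure
# is used as stated. Real LLM throughput also depends on bandwidth
# and compute, so this is only a first-pass filter.
options = {
    "Jetson Xavier NX (used)": {"price_usd": 250, "mem_gb": 8},
    "4060-class 16 GB GPU":    {"price_usd": 400, "mem_gb": 16},
}

for name, o in options.items():
    per_gb = o["price_usd"] / o["mem_gb"]
    print(f"{name}: ${per_gb:.2f} per GB of memory")
```

By this crude metric the discrete GPU comes out cheaper per GB, though the Jetson wins on power draw and form factor.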