#aiworkstation — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #aiworkstation, aggregated by home.social.
-
ONEXStation is a Ryzen AI Max+ 395 mini PC with 128GB RAM and RGB lighting
Chinese PC maker One Netbook is better known for making handheld gaming PCs than desktop workstations. But the company is positioning the new ONEXStation as a “mini AI Workstation” thanks to its AMD Ryzen AI Max+ 395 processor and 128GB of high-speed, high-bandwidth memory.
You could also certainly use it as a gaming PC though – and it’s got RGB lighting effects to help match the gamer […]
#aiWorkstation #miniPc #oneNetbook #ONEXPLAYER #onexstation #strixHalo Read more: https://liliputing.com/onexstation-is-a-ryzen-ai-max-395-mini-pc-with-128gb-ram-and-rbg-lighting/
-
Acer Veriton GN100 AI Mini Workstation powers NVIDIA Spark Hack Series New York
https://fed.brid.gy/r/https://nerds.xyz/2026/04/acer-veriton-gn100-spark-hack-series/
-
DURABOOK Z14I-HG brings serious AI power to a rugged laptop
https://fed.brid.gy/r/https://nerds.xyz/2026/03/durabook-z14i-hg-ai-rugged-workstation/
-
ASUS ExpertCenter Pro ET900N G3 brings NVIDIA Grace Blackwell Ultra AI supercomputing power to the desktop
https://fed.brid.gy/r/https://nerds.xyz/2026/03/asus-expertcenter-pro-et900n-g3-ai-supercomputer/
-
Plugable TBT5-AI enclosure lets Windows laptops run local AI with a desktop GPU
https://fed.brid.gy/r/https://nerds.xyz/2026/03/plugable-tbt5-ai-enclosure/
-
Beelink announces Lobster Red OpenClaw mini PCs built for local AI
https://fed.brid.gy/r/https://nerds.xyz/2026/03/beelink-openclaw-mini-pc/
-
On the performance of a dual RTX PRO 6000 AI workstation with 1.15TB RAM: comparing GPU-only serving (INT4) vs CPU+GPU (fp8) on the MiniMax-M2.1 model. Results: GPU-only is 2–4x faster at prefill but handles at most ~3 concurrent requests due to KV-cache limits. fp8 is slower but scales better to 10+ users, especially with long contexts. Queue time is the critical bottleneck. A good fit for internal coding agents. #AIWorkstation #LLMBenchmark #MultiUserAI #GPUvsCPU #LocalLLM #HPC #MachineLearning #Tín
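The KV-cache ceiling described above can be sketched with back-of-envelope arithmetic. All model-shape numbers below (layer count, KV heads, head dimension, context length, free VRAM) are illustrative assumptions, not MiniMax-M2.1's actual architecture:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: int) -> int:
    """Per-request KV cache: 2 tensors (K and V) per layer, one
    head_dim-sized vector per KV head per token."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

# Hypothetical large-model shape with an fp16 cache (2 bytes/element):
per_req = kv_cache_bytes(n_layers=60, n_kv_heads=8, head_dim=128,
                         context_len=32_000, bytes_per_elem=2)

free_vram = 40 * 1024**3  # assume ~40 GiB of VRAM left after weights
max_concurrent = free_vram // per_req

print(f"KV cache per request: {per_req / 1024**3:.1f} GiB")   # → 7.3 GiB
print(f"Max concurrent requests in 40 GiB: {max_concurrent}")  # → 5
```

With longer contexts the per-request cache grows linearly, so the concurrency cap drops fast — which is why the post sees only ~3 simultaneous requests at long context, and why extra requests pile up in the queue.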
-
Ah yes, the "AI-augmented workstation" that's too cool for the cloud ☁️ — because nothing screams cutting-edge like shipping updates on an encrypted USB stick. 🚀 Enjoy watching Syd analyze tool output... if your browser could actually play videos. 😂
https://www.sydsec.co.uk #AIWorkstation #CloudAlternative #EncryptedUSB #TechHumor #ToolAnalysis #HackerNews #ngated
-
Dell Pro Max 16 Plus with Qualcomm AI 100 puts Linux first with powerful on-device AI performance
https://web.brid.gy/r/https://nerds.xyz/2025/11/dell-pro-max-16-plus-qualcomm-ai-100-linux/
-
Hi everyone! 👋
Questions for the community: Does anyone have experience with these GPUs? Which would you recommend for running larger LLMs locally?
Are there other budget-friendly server GPUs I may have missed that are great for AI workloads?
Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
What's your favorite setup for local AI inference? I'd love to hear about your experiences! Thanks in advance! 🙌
#AIServer #LokaleAI #BudgetBuild #LLM #GPUAdvies #ThuisLab #AIHardware #DIYAI #ServerGPU #TweedehandsTech #AIGemeenschap #OpenSourceAI #ZelfGehosteAI #TechAdvies #AIWorkstation #MachineLeren #AIOnderzoek #FediverseAI #LinuxAI #AIBouw #DeepLearning #ServerBouw #BudgetAI #AIEdgeComputing #Vragen #CommunityVragen
-
Hey everyone 👋
I’m diving deeper into running AI models locally—because, let’s be real, the cloud is just someone else’s computer, and I’d rather have full control over my setup. Renting server space is cheap and easy, but it doesn’t give me the hands-on freedom I’m craving.
So, I’m thinking about building my own AI server/workstation! I’ve been eyeing some used ThinkStations (like the P620) or even a server rack, depending on cost and value. But I’d love your advice!
My Goal:
Run larger LLMs locally on a budget-friendly but powerful setup. Since I don’t need gaming features (ray tracing, DLSS, etc.), I’m leaning toward used server GPUs that offer great performance for AI workloads.
Questions for the Community:
1. Does anyone have experience with these GPUs? Which one would you recommend for running larger LLMs locally?
2. Are there other budget-friendly server GPUs I might have missed that are great for AI workloads?
3. Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
4. What’s your go-to setup for local AI inference? I’d love to hear about your experiences!
I’m all about balancing cost and performance, so any insights or recommendations are hugely appreciated.
Thanks in advance! 🙌
@[email protected] #AIServer #LocalAI #BudgetBuild #LLM #GPUAdvice #Homelab #AIHardware #DIYAI #ServerGPU #ThinkStation #UsedTech #AICommunity #OpenSourceAI #SelfHostedAI #TechAdvice #AIWorkstation #MachineLearning #AIResearch #FediverseAI #LinuxAI #AIBuild #DeepLearning #ServerBuild #BudgetAI #AIEdgeComputing #Questions #CommunityQuestions #HomeLab #HomeServer #Ailab #llmlab
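A quick first filter when shortlisting GPUs for the build above is whether the model's weights even fit in VRAM at a given quantization. This is a rough sketch, not from the post; the parameter counts and the ~20% runtime overhead factor are assumptions:

```python
def weights_gib(n_params_b: float, bits_per_weight: int,
                overhead: float = 1.2) -> float:
    """GiB needed for model weights alone: params * bits/8,
    plus ~20% headroom for activations and runtime buffers."""
    return n_params_b * 1e9 * bits_per_weight / 8 * overhead / 1024**3

# Common model sizes at fp16 / int8 / int4:
for params in (7, 13, 70):
    for bits in (16, 8, 4):
        print(f"{params}B @ {bits}-bit: {weights_gib(params, bits):.1f} GiB")
```

For example, a 70B model at 4-bit needs roughly 33 GiB before any KV cache, so a single 24 GB card won't hold it; that is the kind of constraint that pushes people toward used multi-GPU server cards.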
-