Tagged "cpu-only"
- What Breaks When AI Agent Frameworks Are Forced Into <1MB RAM and Sub-ms Startup
- A Tool to Tell You What LLMs Can Run on Your Machine
- Open-Source llama.cpp Finds Long-Term Home at Hugging Face
- AI Is Stress Testing Processor Architectures and RISC-V Fits the Moment
- Ouro 2.6B Thinking Model GGUFs Released with Q8_0 and Q4_K_M Quantization
- GGML Joins Hugging Face: What This Means for Local Model Optimization
- CPU-Trained Language Model Outperforms GPU Baseline After 40 Hours
- At India AI Impact Summit, Intel Showcases Its AI PCs and Cost-Efficient Frugal AI
- I Thought I Needed a GPU to Run AI Until I Learned About These Models
- Google Is Exploring Ways to Use Its Financial Might to Take on Nvidia
- GGML.AI Acquired by Hugging Face
- Hardware Economics Shift: DDR5 RDIMM Pricing Now Comparable to GPUs for Local Inference
- Matmul-Free Language Model Trained on CPU in 1.2 Hours
- ASUS Zenbook 14 Launches in India with AI-Capable Hardware, Starting at Rs 1,15,990
- Asus ExpertBook B3 G2 Laptop Features Ryzen AI 9 HX 470 CPU in 1.41kg Ultraportable Form Factor
- GPU-Accelerated DataFrame Library for Local Inference Workloads
- Scaling llama.cpp On Neoverse N2: Solving Cross-NUMA Performance Issues
- Running Mistral-7B on Intel NPU Achieves 12.6 Tokens/Second
- GLM-5 Released: 744B Parameter MoE Model Targeting Complex Tasks
- NAS System Achieves 18 tok/s with 80B LLM Using Only Integrated Graphics
- Arm SME2 Technology Expands CPU Capabilities for On-Device AI
- Community Member Builds 144GB VRAM Local LLM Powerhouse