Tagged "resource-constrained-ai"
- MiniMax M2.7 Model to Be Released as Open Weights
- Running an AI Agent on a 448 KB RAM Microcontroller
- Auto-retry Claude Code on subscription rate limits (zero deps, tmux-based)
- India's Mobile-First AI Strategy Could Accelerate Local Inference Adoption in Emerging Markets
- Experiment: 0.8B Model Self-Improvement on MacBook Air Yields Surprising Results
- Show HN: Asterode – Multi-Model AI App with Memory and Power Features
- SynthesisOS – A Local-First, Agentic Desktop Layer Built in Rust
- Qualcomm Snapdragon Wear Elite Brings On-Device AI to Smartwatches
- How to Run High-Performance LLMs Locally on the Arduino UNO Q
- Apple Intelligence, Galaxy AI, Gemini: Why Your AI-Powered Phone Is Worth Repairing
- Meta Reveals AI-Packed Smartwatch in 2026 – Why Wearables Shift Now
- Krasis: Hybrid CPU/GPU MoE Runtime Achieves 3,324 Tokens/Second Prefill on RTX 5080
- 5 Useful Docker Containers for Agentic Developers
- DeepSeek Paper – DualPath: Breaking the Bandwidth Bottleneck in LLM Inference
- At India AI Impact Summit, Intel Showcases AI PCs and Cost-Efficient Frugal AI
- Taalas Etches AI Models onto Transistors to Rocket-Boost Inference
- Strix Halo Performance Benchmarks: MiniMax M2.5, Step 3.5 Flash, Qwen3 Coder
- Mirai Secures $10M to Optimize On-Device AI Amid Cloud Cost Surge
- Kitten TTS v0.8 Released: New State-of-the-Art Super-Tiny TTS Model Under 25 MB
- Sarvam Brings AI to Feature Phones, Cars, and Smart Glasses