Show HN: Proxly – Self-hosted tunneling on your own domain in 60 seconds
While not LLM-specific, Proxly addresses a genuine operational friction point for local LLM deployment: exposing locally running inference services to remote clients or applications while keeping control over your infrastructure and domain.
For teams running language models on local hardware or edge servers, the ability to quickly establish secure, branded tunneling to those services (without cloud intermediaries) streamlines deployment workflows. This is particularly valuable for enterprises that host proprietary models locally and need to serve them to distributed clients while maintaining data residency guarantees.
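The post doesn't detail Proxly's mechanism, but the workflow it automates resembles the manual baseline sketched below: a reverse SSH tunnel from the machine running the model to a VPS you control, fronted by your own domain. The hostnames, ports, and paths here are illustrative assumptions, not anything from the Proxly post.

```shell
# Illustrative manual setup: expose a local inference server listening on
# http://localhost:8000 through a VPS you control at llm.example.com.

# 1. From the machine running the model: open a reverse tunnel to the VPS.
#    -N: run no remote command; -R: the VPS's local port 8000 forwards
#    back to port 8000 on this machine.
ssh -N -R 127.0.0.1:8000:localhost:8000 deploy@llm.example.com

# 2. On the VPS: terminate TLS for your domain and proxy to the tunnel
#    (nginx server block, e.g. /etc/nginx/sites-available/llm.conf):
#
#    server {
#        listen 443 ssl;
#        server_name llm.example.com;
#        ssl_certificate     /etc/letsencrypt/live/llm.example.com/fullchain.pem;
#        ssl_certificate_key /etc/letsencrypt/live/llm.example.com/privkey.pem;
#        location / {
#            proxy_pass http://127.0.0.1:8000;
#        }
#    }
```

Tools in this space essentially collapse these steps (tunnel, TLS, DNS wiring) into one command, which is what a "60-second" claim would amount to.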
If the 60-second deployment claim holds up, Proxly could meaningfully cut the infrastructure setup overhead for practitioners building local-first applications. Check out Proxly if you're managing self-hosted LLM services and need straightforward domain-based access without a cloud dependency.
Source: Hacker News · Relevance: 6/10