# Use Your Local LLMs From Anywhere (Fully Private)

## Metadata

- **Channel:** Cole Medin
- **YouTube:** https://www.youtube.com/watch?v=SLeFoPuwMh0
- **Date:** 02.03.2026
- **Duration:** 1:56
- **Views:** 5,919
- **Source:** https://ekstraktznaniy.ru/video/11473

## Description

I self-host LLMs on my home PC for my second brain - but I need to access them remotely too. Port forwarding is a security nightmare and VPNs are slow and clunky. Twingate gives me zero-trust remote access with no open ports. Just a Docker connector making outbound connections only - my PC stays invisible to the internet. Setup took 10 minutes, and it's free for up to 5 users.

Try Twingate: https://twingate.plug.dev/YEs9Z7b

## Transcript

### Segment 1 (00:00 - 01:00)

So, I am on my laptop remotely accessing a large language model that I have hosted on my computer back home. I'll even go into the Open WebUI config and prove that to you. And the thing is, I have zero ports open on my desktop, so it's ultra secure and I can access my local LLMs from anywhere.

If you are self-hosting LLMs, they work great on your local network. But what if you want to use them remotely? A lot of people set up port forwarding for temporary access, but that's actually a security nightmare, because bots are constantly scanning every IP on the internet for open ports. You can set up a VPN instead, but then you have a slow connection, you're dealing with those clunky configs, and it's open to the network anyway. I've been using local LLMs for my second brain a lot recently, and that is not how I want to roll.

Twingate replaces all of that. It is a zero trust networking tool, and all that means is that instead of opening up ports to our local PC or server, we are deploying a connector, which is just a Docker container. And the key here is we're only making outbound connections. So no inbound ports, nothing is listening. Your server or PC becomes invisible to the internet.

And setting up Twingate literally only took me 10 minutes. I created my account and my network, and then I deployed my connector. This is the Docker container I have running on my PC for my home lab. And then I defined my resources. So I'll blur it here, but this is the IP address set up as a resource in Twingate for my local PC. Now from any device, my laptop, my phone, it doesn't matter, I can access my self-hosted LLMs running on my powerful PC at home, but I'm not at home and I have no ports exposed to the internet. And this works for anything that you self-host, not just AI: NAS, Home Assistant, Proxmox, whatever. And it's free for up to five users and 10 networks, which is certainly enough for a home lab.
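The connector deployment described above can be sketched as a single `docker run` command. This follows Twingate's standard connector deployment pattern; the network name and both tokens are placeholders here, since the real values are generated in the Twingate admin console when you add a connector:

```shell
# Minimal sketch: run a Twingate connector on the home PC.
# TWINGATE_NETWORK and the two tokens are placeholder values --
# copy the real ones from the Twingate admin console.
docker run -d \
  --name twingate-connector \
  --restart unless-stopped \
  -e TWINGATE_NETWORK="your-network" \
  -e TWINGATE_ACCESS_TOKEN="paste-access-token-here" \
  -e TWINGATE_REFRESH_TOKEN="paste-refresh-token-here" \
  twingate/connector:1
```

The container only makes outbound connections to Twingate's relays, so no port forward or inbound firewall rule is needed. Once the connector is online and the PC's LAN IP is defined as a resource, a device running the Twingate client can reach self-hosted services at that IP as if it were on the home network.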
Twingate is a very needed tool in my stack, so I jumped at the opportunity to work with them for this short. Thanks to them for that, and definitely check them out.
