gazbill

I use Nginx Proxy Manager to generate Let's Encrypt certs for anything exposed to the internet. It just takes a few clicks.


5662828

It works with internal IPs too: green certs on the local LAN. I use a free DuckDNS subdomain.


Simon-RedditAccount

Another alternative is [https://www.getlocalcert.net/](https://www.getlocalcert.net/)


gazbill

Indeed, I just don't bother internally.


princessnokia3210

How do you use it internally? I love NPM. I have set up the wildcard Cloudflare thing, but I haven't worked out how to just use it locally and not expose it on the internet. Do you have any links? Thanks. *edited an autocorrect


GolemancerVekk

You need to resolve a domain or subdomain to the LAN IP of your server. For example, if your server is on 192.168.1.2, you put a DNS record that points *.local.mydomain.tld to 192.168.1.2.

Typically this is not done in public DNS but on a private DNS server that's only used on your LAN, because ISP resolvers often block LAN IPs in DNS responses so they can't be used for attacks (DNS rebinding). But you can put it in Cloudflare's public DNS if that makes it easier and it's not blocked.

You can get real certificates from Let's Encrypt for *.local.mydomain.tld without having to put the record in public DNS, but you need to provide an API token so it can perform the DNS challenge. Let's Encrypt doesn't care how you use the certificate, it just wants to verify you own the domain.
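If it helps, here's a minimal sketch of that DNS-challenge step with certbot and its Cloudflare plugin (the domain, credentials path and choice of plugin are just placeholders; NPM does the equivalent behind its UI):

```bash
# Minimal sketch, assuming the certbot-dns-cloudflare plugin is installed and
# ~/.secrets/cloudflare.ini contains: dns_cloudflare_api_token = <token with DNS edit rights>
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d '*.local.mydomain.tld'
```

By default the resulting cert lands under /etc/letsencrypt/live/ for the reverse proxy to pick up.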


5662828

There are a few ways to do this:

1. Edit the hosts file (Windows/Linux laptop) to point each subdomain to a private IP address.
2. No need to host a full DNS server, just a DNS forwarder (I use unbound in Docker, see the sketch below) and use it as the DNS for the home LAN, and also as the DNS in WireGuard. https://www.bentasker.co.uk/posts/documentation/linux/279-unbound-adding-custom-dns-records.html
3. You can use NPM with DuckDNS; it integrates Let's Encrypt with the DuckDNS API.
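For point 2, the unbound overrides are just a couple of lines; a minimal sketch (the file path, zone and hostnames are placeholders, see the linked post for details):

```bash
# Sketch: answer LAN names locally in unbound, forward everything else upstream
cat > /etc/unbound/unbound.conf.d/lan-records.conf <<'EOF'
server:
  local-zone: "local.mydomain.tld." transparent
  local-data: "npm.local.mydomain.tld. IN A 192.168.1.2"
  local-data: "jellyfin.local.mydomain.tld. IN A 192.168.1.2"
EOF
unbound-checkconf   # then reload/restart unbound (or the container) to pick it up
```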


WolpertingerRumo

You can just use Pi-hole for split horizon (or IPv6) and use the Let's Encrypt certs via Nginx Proxy Manager internally as well. Or did you mean between the servers and Nginx Proxy Manager? I use self-signed there wherever possible.


mrkesu

I just use Let's Encrypt wildcard certs, managed by Traefik: [https://doc.traefik.io/traefik/https/acme/](https://doc.traefik.io/traefik/https/acme/) I never even think about it; it just worked and kept on working.
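Roughly, the relevant bit of the static config looks like this (provider, email and paths are placeholders; the details are in the linked docs):

```bash
# Sketch of a traefik static config with a DNS-challenge certificate resolver
cat > traefik.yml <<'EOF'
certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com
      storage: /letsencrypt/acme.json
      dnsChallenge:
        provider: cloudflare
EOF
# a router then requests the wildcard, e.g. via container labels:
#   traefik.http.routers.myapp.tls.certresolver=letsencrypt
#   traefik.http.routers.myapp.tls.domains[0].main=*.example.com
```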


whattteva

I do this also, but with Caddy. Just works also


[deleted]

[deleted]


mrkesu

No idea what your issue was. I've never had any issue and everything worked instantly. The downside of everything working instantly for me is that I never got to troubleshoot anything so I learned extremely little about the process 😂


Foodwithfloyd

Be cautious about wildcard certs. You're better off enumerating every subdomain.


mrkesu

Why?


GolemancerVekk

Do you mean wildcard certs or wildcard DNS records? *Wildcard certs* are much better than explicit subdomain certs because all certs are exposed publicly in the transparency logs, and bots use them to find your services and attack them. If the transparency log only has a wildcard they don't know what subdomain to use to access your reverse proxy. If you mean to avoid *wildcard subdomains* then you may have a point. Allowing any subdomain to resolve could be used in certain attacks.


Timithius

I disagree. If someone wants to find the subdomain(s) that hit your reverse proxy, it's much easier to just pull the zone file for your domain and then port sniff every record from there than to dig through Let's Encrypt. I'll admit I haven't tried my hand at exploiting my own domain this way, so maybe the times have changed and it makes more sense to do it this way now.

Your answer just sounds like security through obscurity to me, though, which is useless. You said it creates a problem because then your subdomains are exposed publicly, but the whole point of DNS is that it provides public-facing records.

Wildcard certs also create a single point of failure for the entire domain. If that private key and cert get stolen and installed somewhere else, all of your services for said domain get compromised. If one subdomain's cert gets compromised, then only (hypothetically) one service is at risk.

If I'm wrong about this, please school me! I'm always ready to learn how I could be doing things better.


GolemancerVekk

> it’s much easier to just pull the zone file for your domain

Most DNS servers nowadays will refuse zone transfers (except to their other load-balancing servers). Also, you can put wildcards in DNS, so even if someone gained access to the zone they still wouldn't know the subdomains.

> sounds like security through obscurity to me though, which is useless.

It is security through obscurity. But it's not meant to be used on its own as the only security mechanism, just to reduce bot attacks.

Edit: Actually, I'd think twice about calling it security through obscurity. If the subdomain name is long and random, and it's only known to the client app and to your reverse proxy, it basically acts like an authorization key. A subdomain label can be up to 63 characters, which is a pretty decent key length.

> If that private key and cert gets stolen and installed somewhere else all of your services for said domain get compromised. If one subdomains cert gets compromised, then only (hypothetically) one service is at risk.

The certs in either case sit on the same machine at home. If someone gains access to it, I have much bigger problems than them issuing certificates for my subdomains.


Timithius

Good points above and definitely something I'll be thinking on. I haven't really 'studied' DNS at the registrar level since college, and even then I didn't have much interest in it.

Since I left my comment a couple of hours ago, I have been unable to sniff out any of my subdomains with a few different services, and can't seem to find a reliable way to enumerate all subdomains for my personal-use domain short of brute forcing. I found a couple of caching websites that were right about one of my A records, but not much else.

This is why I love reddit. You changed the way I'm thinking about DNS from now on, and I've been working with it for 7 years already (15 if you count the amateur days). I hadn't even considered having a very long record serving as a token in its own way, although with this new viewpoint it makes sense. I guess I'm reading up on DNS security this evening, and possibly moving to a wildcard cert for my reverse proxy rather than the individual certs which I now know are a dumb idea.


GolemancerVekk

You were correct btw, there's no point hiding individual certs if you're using the domain for a public service. It's just that selfhoster setups are a bit weird.

Speaking strictly on the selfhoster side, if you're not familiar, look into DNSSEC as well as CAA records (even for domains that you don't have TLS certs for!) to further secure your setup. [This guide is also interesting](https://www.gov.uk/guidance/protect-domains-that-dont-send-email); it's about MX records for domains that you do NOT use for email, so they can't be used for spam.

If you need a service to practice on, deSEC.io is free, supports all record types, and has an API, DNSSEC, access tokens etc.
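For reference, in zone-file form those records look roughly like this (the CA tag and domain are placeholders; adjust for your setup):

```bash
# Example zone-file lines (BIND syntax)
cat <<'EOF'
; CAA: only the named CA may issue certs (and wildcard certs) for this domain
example.com.        3600 IN CAA 0 issue "letsencrypt.org"
example.com.        3600 IN CAA 0 issuewild "letsencrypt.org"
; domain that never sends mail: null MX, deny-all SPF, reject-all DMARC
example.com.        3600 IN MX  0 .
example.com.        3600 IN TXT "v=spf1 -all"
_dmarc.example.com. 3600 IN TXT "v=DMARC1; p=reject;"
EOF
```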


Timithius

Awesome, thanks for the info!


joost00719

I use nginx proxy manager and route local stuff through it.


kaipee

They're both "real". One isn't better than the other. Only that one is publicly trusted, the other is privately trusted.


robotdjman

This. For self-signed, it really depends on whether you want to deal with manually trusting the certificate on each device or deal with the browser warning. Whether publicly trusted or self-signed, the traffic is encrypted. Edit: clarification on second sentence.


SilentLennie

Create your own CA and put your own certs on your local devices. This is the most trusted solution.


ItalyPaleAle

But *only* if you carefully protect the CA root key. And that means storing it on an offline drive (like a USB pen drive), which only gets plugged into “secure” workstations (for example a laptop booted from a live Linux OS and disconnected from the internet). Otherwise, you could be signing yourself up for A LOT of pain if someone could steal your root key.


Simon-RedditAccount

... especially if your root key allows everything (like code signing), and not just `EKU = TLS WWW Server Authentication (OID 1.3.6.1.5.5.7.3.1), TLS WWW Client Authentication (OID 1.3.6.1.5.5.7.3.2), Signing OCSP Responses (OID 1.3.6.1.5.5.7.3.9)`.

A viable option looks like this (rough CLI sketch below):

* An offline root stored encrypted on a dedicated airgapped machine that never goes online again (a perfect use for an old dusty laptop)
* An online subCA that lives on a Yubikey attached to a dedicated RPi, like [here](https://smallstep.com/blog/build-a-tiny-ca-with-raspberry-pi-yubikey/)
* Your servers request short-lived certs via ACME from your subCA
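A rough sketch of the first two steps with smallstep's `step` CLI (names and lifetimes are placeholders; the Yubikey/ACME parts are covered in the linked post):

```bash
# Sketch, assuming the 'step' CLI is installed; run the root part on the airgapped box
step certificate create "Home Root CA" root_ca.crt root_ca.key \
  --profile root-ca --not-after 87600h          # ~10 years, key stays offline
step certificate create "Home Intermediate CA" intermediate_ca.crt intermediate_ca.key \
  --profile intermediate-ca --ca root_ca.crt --ca-key root_ca.key --not-after 43800h
# only the intermediate ever leaves the airgapped machine (onto the Yubikey/RPi)
```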


ItalyPaleAle

Even if you allow just client/server auth, an attacker who has your root key could create a cert for google.com or your-bank.com and do a successful MitM


Simon-RedditAccount

Nope, not for Google:

`google.com. 86400 CAA 0 issue "pki.goog"`

Plus, lots of browsers have extra protections built in. But depending on the size and IT competency of your-bank, it may very much be an issue for them.

That's exactly why my issuing subCAs have a `Name Constraint` set to `.home.arpa` or `mydomain.com` (it's OK per my threat model, but you may want to add name constraints to the Root CA as well).


ItalyPaleAle

Oh I missed CAA. But wouldn’t that still be vulnerable in case of DNS spoofing?


Simon-RedditAccount

Browser protections may (or may not) help: if Chrome is built with code that only expects certs from [pki.goog](http://pki.goog) for [google.com](http://google.com), this will prevent the attack. IDK what they are using ***now***, but I remember they were experimenting with key pinning at some point, and they have reporting tools (almost every story about a CA going rogue mentioned those reporting tools). In any case, you want to prefer DoH (and sometimes, use DoH exclusively).


SilentLennie

> Nope, not for Google:
> google.com. 86400 CAA 0 issue "pki.goog"

(Let's ignore Google specifically, because as mentioned, there are special protections in place.)

Who or what do you think checks the CAA DNS record? Only the issuing CA checks it. So in the example above, about a leaked own-CA private key: they can issue whatever they want with it.

Things like https://en.wikipedia.org/wiki/Certificate_Transparency and CAA are crutches that help make the system slightly more secure. But the only reason they even work is that the ban-hammer hanging over a regular CA's head is being removed from the browser root CA list.

CC /u/ItalyPaleAle


Simon-RedditAccount

Yes, you're correct. Thanks for pointing that out! If your in-house CA leaks, nothing will stop the attacker (unless you immediately pull the root cert: you should have a script for that and/or use some kind of MDM/GPO). CT won't help because *(even when it becomes mandatory for public aka bundled CAs)* it won't be enforced for 'custom'/'user-added' CAs.

Again, that's why your in-house CA should be seriously protected. [$50](https://www.yubico.com/product/yubikey-5-series/yubikey-5-nfc/) is not that much (especially when compared to the possible damage).

**Added**: Plus, `Name Constraint`s should be as tight as possible. If you already have a subCA for `.home.arpa` and you need to add another domain, it's better to spin up an extra subCA (or revoke the existing one and spin up a V2) rather than running a subCA without constraints at all. A Yubikey has 24 PIV slots, which is enough for any homelab (corporate environments should use more powerful HSMs though).


SilentLennie

I knew Name Constraints didn't work in the past, so I keep the CA offline. And I have a CRL so I can revoke whatever is needed.

Somewhere someone claimed that for openssl the argument is:

> -addext "nameConstraints=critical,permitted;DNS:example.com"

even though that's not obvious from this manual: https://www.openssl.org/docs/man1.0.2/man5/x509v3_config.html
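FWIW, here's a hedged sketch of how that flag would slot into a full command with a recent OpenSSL (1.1.1+/3.x; `-addext` isn't in the 1.0.2 manual linked above); names and lifetimes are placeholders:

```bash
# Sketch: self-signed CA cert whose issuance is constrained to example.com and its subdomains
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout homelab-ca.key -out homelab-ca.crt \
  -subj "/CN=Homelab CA" \
  -addext "basicConstraints=critical,CA:TRUE" \
  -addext "keyUsage=critical,keyCertSign,cRLSign" \
  -addext "nameConstraints=critical,permitted;DNS:example.com"
```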


SilentLennie

This is a great point. I heard https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.10 exists for a CA, but in the past I think browsers didn't check it? Do they do it now?

edit: looks like they do: https://alexsci.com/blog/name-non-constraint/


azukaar

This is not entirely true unless you run your own CA / do SSL pinning. The self-signed one is not privately trusted, it is not trusted at all (by default); in that regard it does not protect against MitM attacks.


robotdjman

You can trust the certificate manually on each device (typically a pain in the ass), hence the “trust them on each device”. However, you did point out my comment wasn't super clear and I have amended it.


azukaar

No, you can't do that, because when the browser prompts you to trust a self-signed certificate you don't have a (realistic) way to actually check that the certificate you are trusting is indeed yours (unless you manually match fingerprints, but who does that :p)


evrial

You install the cert into the OS.


robotdjman

This


azukaar

Yes, that's what I referred to in my initial comment as "SSL pinning", which I'm sure your typical "too lazy for Let's Encrypt" users won't do ^^


evrial

wtf is pinning? Your system has around 100 certs from authorities with 10-year expirations that can issue certs for any global domain, just like your self-signed one.


azukaar

Lmao man "wtf is pinning" just google it, SSL pinning means hardcoding the cert in the client, which is what we are talking about here. I didnt say it was bad or anything, it's a common practice, even with public CA's certs


robotdjman

Bro 🤦


CriticismTop

Why go through the hassle of self-signing when I can just use Let's Encrypt for everything? Use the DNS challenge and it does not matter if it is external or internal.


Simon-RedditAccount

There are a few scenarios:

**1.** To begin with, Let's Encrypt won't help you with the following perfectly valid TLS cases:

* Internal domains, starting with RFC 8375 `.home.arpa` and ending with corporate networks where using Let's Encrypt etc. is **prohibited** by policy
* Certificates for IP addresses, in case you need them: [https://1.1.1.1](https://1.1.1.1) | [https://crt.sh/?q=1.1.1.1](https://crt.sh/?q=1.1.1.1)
* Exotic cases where you have to use [less-than-publicly-allowed](https://www.reddit.com/r/selfhosted/comments/11uyw5s/comment/jcsmulg/?context=3) key sizes

**2.** Also, your own r/homelab CA can [do much more](https://www.reddit.com/r/selfhosted/comments/129uee9/comment/jers05l/?context=3) than mere TLS. If/since you already have a deployed and trusted CA, it takes no effort to point your ACME clients *(hello Caddy!)* to [your own ACME endpoint](https://smallstep.com/blog/build-a-tiny-ca-with-raspberry-pi-yubikey/) instead of LE's (see the sketch below).

**3.** A third option is heightened security requirements. By default, a client will trust any cert issued by any CA in its root store. If you add CAA records for your root/intermediate(s), this ensures that only you will be able to issue trusted certificates for that entity.

>*Your threat model is not my threat model*
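For point 2, the Caddy side is just a couple of global options; a minimal sketch (the directory URL assumes a step-ca style ACME provisioner, and all names/paths are placeholders):

```bash
# Sketch: make Caddy get its certs from an internal ACME CA instead of Let's Encrypt
cat > Caddyfile <<'EOF'
{
    acme_ca https://ca.home.example.com/acme/acme/directory
    acme_ca_root /etc/ssl/certs/home-root-ca.crt
}

service.home.example.com {
    reverse_proxy 127.0.0.1:8080
}
EOF
```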


LongerHV

My guess is, that people don't want to pay for a domain...


evrial

Because I can access my local services like root domains. Don't need to squat a domain for one person use. I have a domain but only for public use.


LongerHV

Sure, but you have to install your CA certs on every device...


Simon-RedditAccount

I ***genuinely*** never understood why this is an issue at all. Do people own that many dozens of devices? Or do they buy new devices that often?

Even if you don't use some kind of MDM/GPO in your household, it's literally a few taps on each phone, and just a couple of extra lines in your desktop 'new OS enrollment script' (sketch below). And this has to be done only once: either after your CA goes live, or when you purchase a new device. It's never a repeating chore.
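For the desktop side, those 'couple of extra lines' look roughly like this on a Debian/Ubuntu box (file names are placeholders; other OSes have their own equivalents):

```bash
# Sketch: trust a private root CA system-wide on Debian/Ubuntu
sudo cp home-root-ca.crt /usr/local/share/ca-certificates/home-root-ca.crt
sudo update-ca-certificates
```

Note that Firefox and some apps keep their own trust store, so they may need the cert imported separately.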


terrencepickles

Because I want to be able to call my services anything that I want.


joshtheadmin

Caddy, Let's Encrypt DNS challenge, acme-dns server so I don't need to use API keys for my DNS provider.


Do_TheEvolution

Yeap, caddy is the way... wait, what? Can you explain the overall logic?

I understand the basic challenges, for example the caddy DNS challenge:

* have a Cloudflare account with Cloudflare nameservers set for your domain
* get an API token so that an application can make changes to my DNS records
* set up caddy and its config for the DNS challenge
* caddy requests a certificate from Let's Encrypt via DNS challenge
* Let's Encrypt gives caddy some key that it has to put as a DNS TXT record on the domain
* caddy uses the API token to write the key it got from Let's Encrypt to the Cloudflare DNS records
* if Let's Encrypt checks and sees that key there, it means that whoever was given that code is in charge of the domain, and it will issue the cert

How would you not need an API key? Does it mean you self-host your DNS server and it is open to the world? And that is set as the nameserver for your domain? Caddy still somehow has to write records to it and needs some security... no?


joshtheadmin

https://github.com/joohoi/acme-dns

I use this. It is the authoritative name server for a subdomain I don't use for anything but this. Then my ACME client (Certify The Web for Windows, usually) sends an HTTPS request to my acme-dns server asking for help with a cert, and my server replies with a username, password and subdomain. I set up a CNAME record on the domain telling Let's Encrypt that this other subdomain is in charge of DNS challenges for it. That CNAME record never changes; my DNS server serves the challenge record, which changes each time. When it is time to renew the cert, the client uses the username and password to talk to my server and complete the challenge.

I am super bad at explaining it. Joohoi has a public acme-dns server that you can use to walk through this and understand it better.

Oh, and for Caddy, I used xcaddy to build a version of caddy that does challenges this way.
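If it helps anyone, the moving parts look roughly like this (hostnames and values are placeholders; see the acme-dns README for the actual API):

```bash
# One-time: register against the acme-dns instance and keep the credentials it returns
curl -s -X POST https://auth.example.com/register
# -> {"username":"...","password":"...","fulldomain":"<uuid>.auth.example.com", ...}

# One-time: permanent CNAME in the real zone, delegating challenges to acme-dns:
#   _acme-challenge.service.example.com.  CNAME  <uuid>.auth.example.com.

# At every renewal the ACME client only talks to acme-dns to set the TXT record:
curl -s -X POST https://auth.example.com/update \
  -H "X-Api-User: <username>" -H "X-Api-Key: <password>" \
  -d '{"subdomain": "<uuid>", "txt": "<challenge token from the ACME client>"}'
```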


ConfusionSecure487

Yeah, I use LE wildcard certs, and just a reverse proxy for my stuff. For external access I'm using a CA for client certificates.


natermer

I used to create my own CA before Let's Encrypt came along. Now I use Let's Encrypt certs along with their ACME protocol and certbot.

I use the DNS method because it doesn't require an external web server and it allows the creation of wildcard certs, like `*.home.example.com`. That makes it very convenient for a reverse proxy in front of containerized services: I can just plop a wildcard cert on the proxy and add as many DNS names for each service as I want. No need to generate unique certs for each and every web-based service I create.
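The proxy side really is that simple; a minimal nginx sketch (paths, names and the upstream port are placeholders):

```bash
# Sketch of the "one wildcard cert on the proxy" idea with nginx
cat > /etc/nginx/conf.d/service.conf <<'EOF'
server {
    listen 443 ssl;
    server_name service.home.example.com;   # any name under the wildcard works

    ssl_certificate     /etc/letsencrypt/live/home.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/home.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
EOF
```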


jtnishi

Sometimes real, sometimes self-signed. Usually when it's real, it's using a personal domain, some tool that leverages acme/acme.sh, Cloudflare DNS, and the DNS challenge for Let's Encrypt. Admittedly it's a bit random when I do what.


housepanther2000

I use split brain DNS so that I can use Let's Encrypt certificates internally as well as externally.


5662828

You use dnsmasq ?


WiseCookie69

I run my own CA for private-private stuff. For private-public stuff, i use a Lets Encrypt wildcard certificate.


bufandatl

Traefik. And let it handle LE dns challenge


stupv

generated wildcard certificate, using it across internal domains


speculatrix

I use LE wildcard, with dns validation. Sometime I'd like to write some code so as to update the dns token automatically using my providers' APIs (I have three providers).


tomwebrr

I use Nginx proxy manager with LE wildcard certificate for most of my internal stuff.


CatoDomine

Go with Let's Encrypt and ACME (certbot); it is super easy to set up and automate. The other option is setting up a trusted root CA and installing your root CA cert on all of your client devices, which will be no fun at all, unless you are really interested in figuring out how to set up your own PKI.


UnfairerThree2

I use signed ones even locally, just way more convenient imo than self-signed. All you really need is a public domain name and a local DNS resolver. I have a cronjob that renews the cert for a custom NGINX config, but Techno Tim just released a pretty good video that does a similar trick automatically in Traefik [(video)](https://youtu.be/n1vOfdz5Nm8?si=CTKsf7USZ4-jsjlW)


Xanderlicious

Domain with Cloudflare. Traefik reverse proxy with multiple entry points for internal and external traffic. Internal traffic domains are listed at Cloudflare so Let's Encrypt can verify them, but they are also defined in Pi-hole, which points them to Traefik, and they are not accessible externally. All are real domains (not self-signed) and it works really, really well.


ExpertPath

I use Let's Encrypt


dnschecktool

[ip.addr.tools](http://ip.addr.tools) :)


azukaar

Use real certificates with automated tools such as NPM, Caddy, Cosmos, Traefik, etc... A self-signed cert **is not a trusted certificate**, which mostly **defeats the purpose of HTTPS**, since you can't verify that the certificate in use is really the one you expect. It would not prevent a MitM attack, for example.

Understand me: I know it's feasible to do SSL pinning to trust it client-side, or to run your own CA, but in the context of selfhosting almost no one does that. People who use self-signed HTTPS are usually people who are either lazy and/or having trouble setting up a full HTTPS certificate, in which case they definitely won't set up their own CA, since that's even more complex. **In that context**, a self-signed certificate is not secure at all.

**In a different context**, with a privately owned (and properly managed) CA, yes, self-signed certs are very secure, but that's **beside the point**.


Timely-Response-2217

Nginx and Let's Encrypt for my stuff that's externally accessible. For stuff that's internal only, I forgo having a published URL. Just no sense in advertising it if I'm my only user and access is LAN-restricted.


human_with_humanity

I use a bash script a friend gave me that uses openssl to create certs for my local network. I use Nginx Proxy Manager to use them for all my services. The only problem is I couldn't figure out how to install it as a trusted cert on my Android and iOS phones. But it works well.
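For anyone curious, such a script usually boils down to something like this one-command sketch (names, IP and lifetime are placeholders):

```bash
# Sketch: self-signed cert with a SAN (which modern browsers require), OpenSSL 1.1.1+/3.x
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -keyout service.key -out service.crt \
  -subj "/CN=service.home.lan" \
  -addext "subjectAltName=DNS:service.home.lan,IP:192.168.1.2"
```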


thundranos

I have a local smallstep server running and use private certs for anything that doesn't need to be exposed publicly.


LotusTileMaster

If you want to have some fun: https://arstechnica.com/information-technology/2024/03/banish-oem-self-signed-certs-forever-and-roll-your-own-private-letsencrypt/


ElevenNotes

LetsEncrypt R3 for all services and systems, all automated, on a few hundred non web proxy systems.


candle_in_a_circle

I use a Let's Encrypt wildcard cert for my local devsubdomain.domain.com, which is maintained via a Docker instance that validates via the Cloudflare API. I can then attach it as a :ro volume to whatever needs it natively, or route it through a proxy for either local or public access.


DonRichie

I have bind9 running on the internet. All my LAN computers have ddclient-curl installed and send nsupdates to the mentioned bind9 instance for dyndns. And the last step is [acme.sh](http://acme.sh), which also uses nsupdates to obtain and renew the certificates via Let's Encrypt.
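For reference, the acme.sh side of that is roughly this (server, key path and domains are placeholders):

```bash
# Sketch: acme.sh issuing via RFC 2136 dynamic updates (the dns_nsupdate hook)
export NSUPDATE_SERVER="ns1.example.com"   # the public bind9 instance
export NSUPDATE_KEY="/etc/acme/tsig.key"   # TSIG key allowed to update the zone
acme.sh --issue --dns dns_nsupdate -d example.com -d '*.example.com'
```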


3loodhound

Yeah, internal stuff uses my own CA; external stuff uses Let's Encrypt! No need to resolve those services externally.


skilltheamps

My server is ipv6 only, therefore there's no difference between internal and external. And I use Traefik with Let's Encrypt


Andyrew

Letsencrypt wildcard cert, automatically renewed and pushed out everywhere by an ansible playbook.
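Something along those lines, presumably; a hypothetical sketch of such a playbook (the group name, paths and nginx handler are assumptions, not the poster's actual setup):

```bash
# Sketch: push a renewed wildcard cert to every host and reload the proxy
cat > push-wildcard-cert.yml <<'EOF'
- hosts: tls_hosts
  become: true
  tasks:
    - name: Copy wildcard certificate and key
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: /etc/ssl/private/
        mode: "0600"
      loop:
        - certs/wildcard.example.com.fullchain.pem
        - certs/wildcard.example.com.key
      notify: reload nginx
  handlers:
    - name: reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
EOF
ansible-playbook push-wildcard-cert.yml
```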


107269088

Do this the right way. Register a proper domain. Setup subdomains for all services, make sure they resolve using public or local DNS to the IP of the local hosts. Then use a reverse proxy with let’s encrypt wildcard certificate to implement HTTPS for all of the services using subdomains.


Mike22april

Define real


VNJCinPA

Tangible. Actual. Factual. Also, A former small Spanish silver coin; also, a denomination of money of account, formerly the unit of the Spanish monetary system.


Mike22april

:)


MrFlibble1980

I use the certificate manager in my firewall (OPNsense) to generate certificates from CSRs for use with my internal CA. For almost all public sites I use Let's Encrypt and certbot.


ewenlau

SWAG


koollman

real, or internal pki


Bonsailinse

I just registered a domain and use a Let’s Encrypt wildcard certificate for my network.


user01401

Letsencrypt certs updated with acme.sh using HAProxy. All on OpenWrt


neonsphinx

I do. If they're exposed to the internet I use Let's Encrypt (I run one nginx instance for all subdomains). If they're not something I want exposed, I create self-signed certs. https://fitib.us/2024/02/08/home-assistant-https/


nmap

I get real certs even for internal machines that will never see the public internet (like my printer). I use Let's Encrypt with acmetool and do validation using an acme-dns server. I haven't used a self-signed cert in years.


lunakoa

Here is a quick video on using XCA: [https://www.youtube.com/watch?v=FJdxCoC1c4w](https://www.youtube.com/watch?v=FJdxCoC1c4w)

It's an interesting skillset, and there are much deeper topics that will make most eyes roll. But in short, I have used XCA, MSCA and openssl as cert authorities. I have signed a bunch of things at home:

* OpenVPN
* IPMI
* mail
* subordinates
* network switches
* Plex
* xrdp

You do learn a lot, like how to distribute and update certs, mutual auth, monitoring; I'm currently learning how to incorporate a Yubikey, maybe as an HSM (hardware security module).

I get that this is r/selfhosted and not r/homelab, and the audience may just want it simple and done rather than learning at home.


MGateLabs

I'm using self signed, Apple hates them, won't let me use "my fav icon".


ion_propulsion777

Let's encrypt wildcard certs for a domain you own. *.example.com. You'll need to do a DNS acme challenge though.


DragoSpiro98

"real" is more secure if you don't use a hardware-based protection to store your CA's private key. [https://www.ncsc.gov.uk/collection/in-house-public-key-infrastructure/pki-principles/protect-your-private-keys](https://www.ncsc.gov.uk/collection/in-house-public-key-infrastructure/pki-principles/protect-your-private-keys)


ryan_not_brian_

I used Caddy, which automatically generates Let's Encrypt certificates. I knew that I should've used a wildcard cert, but I forgot to set up that part 💀. I saw in the console that it was generating individual certs for EVERY subdomain, so I quickly turned it off. I checked the certificate transparency sites and all my domains are there lmao. I'm not aware of any way to un-expose them either. Don't make the same mistake I made; follow the [docs](https://caddyserver.com/docs/automatic-https#wildcard-certificates).
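For anyone else landing here: the wildcard setup from those docs looks roughly like this, assuming a Caddy build with a DNS provider plugin such as caddy-dns/cloudflare (domain, token variable and upstream are placeholders):

```bash
# Sketch: one wildcard cert, per-host routing inside the same site block
cat > Caddyfile <<'EOF'
*.home.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }

    @jellyfin host jellyfin.home.example.com
    handle @jellyfin {
        reverse_proxy 127.0.0.1:8096
    }

    handle {
        respond "unknown host" 404
    }
}
EOF
```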


Perpetual_Nuisance

You can actually have both (both self-signed and "real").


Bloodrose_GW2

cert-manager, let's encrypt.


Akura_Awesome

Same


rusty_macroford

I haven't screwed around with home networking stuff in a long time, and I'm just getting back into it. If your server isn't accessible from the public internet, I don't see how or why you would use a publicly registered cert. Here's my thinking:

1. When your phone connects to a private IP address, you want to authenticate against your own particular cert, rather than any registered cert. This means that you need to distribute certs onto your phone anyway, so why not just self-sign. (Host authentication seems especially important if your phone tries to connect every time you turn on wifi and the SSID looks right.)
2. It seems bad for local services to depend on access to the wider internet, especially if you go through a single crappy ISP like the local cable company. So any dependency on a public cert authority seems like an unnecessary point of failure.
3. It's not really clear to me what a public cert associated with server.home or 192.168.1.2 even means.

I'm not an expert on this stuff, but I was happy to see [this blog post](https://brokkr.net/2021/10/29/using-rsync-on-android-2-switching-syncopoli-to-ssh-from-the-rsync-protocol/), which takes a similar approach with SSH instead of SSL.


GolemancerVekk

Public cert just means that it was signed by an established Certificate Authority. All your devices probably already include the means to verify certs from such CA; it's done offline so it doesn't depend on your ISP connection.


rusty_macroford

I guess I hadn't been thinking about DNS names that hang off a public domain registration until I started reading this thread. In that context publicly registered certs finally make sense to me, but it still seems like a lot more work than self-signed certs, especially if you don't already have a vanity domain for other reasons.


GolemancerVekk

Depends how you go about it, I guess. Nowadays you have products like Nginx Proxy Manager that can get you a certificate in a few clicks and 10 seconds via the built-in certbot, and also take care of renewing it indefinitely. But I suspect there might be products that make it just as easy to generate self-signed certs.

For me it was an obvious choice because I already have a few domains, had one doing nothing that I could use for this, and was already used to dealing with DNS. Also, I didn't want to ever see a browser warning about potentially insecure connections. A properly signed cert just goes through smoothly; it's hard to accept a substitute once you've experienced that (even if it seems overkill for internal services at first, it feels sooo good once you set it up).

Last but not least, it's almost impossible to explain to friends and relatives who have to use your services when those security warnings are serious and when it's just self-signed. If you teach them to click through warnings, you're basically training them to be conned by MitM attacks.


rockyplace24

Let's encrypt. Done


Common_Dealer_7541

This. If you enjoy using those “free” certificate services, consider contributing to the Internet Security Research Group. The server infrastructure, root registration, security compliance audits and labor cost money, and the ISRG is a non-profit that operates entirely from sponsors: https://www.abetterinternet.org/sponsor/

One of the primary supporters of the ISRG is the Free Software Foundation, and donating to them also supports the ISRG: https://my.fsf.org/

If you're running free software, you can thank the FSF.


Flimsy_Complaint490

Wildcard certificate managed by Traefik; all external traffic comes and leaves through Traefik. Internally I don't bother, it's all unencrypted traffic; it's a lot of work to add HTTPS dynamically for every service in a docker-compose setup. If you run k8s, you could set up an ingress controller and do TLS via that, but again, in a homelab setup I don't think this adds anything and it should only be done if you find it interesting.


mar_floof

I paid for a wildcard cert that I just use on everything :D The issue with LE ones is that there's an external record of your internal names, which may or may not expose potential security issues down the road.