
Creating Your Own Secret Dark Web

Dinoleaf’s Multilayer Infrastructure

There are a number of layers to Dinoleaf’s infrastructure. The public-facing sites are accessible through standard DNS, because they’re public and it’d be wise to let random people actually find us without much effort. But as a company with our own assets, source code, source art, and so on that we intend to keep secure, it’s not fantastic to expose all of that to the open skies. We all work remote, though, and need to access the same repositories and clouds.

If you’re fortunate enough to have some self-hosting experience, it’s possible to set up a private server, either on a VPS or a home network, that cannot be accessed by any device that hasn’t been given privileged information. This does not require whitelisting IP addresses, though you can if you want. It doesn’t require any fancy tools on the client side for simple things like browser-based services, as long as the client can trust self-signed certificates; it only requires a little agency on the client’s part. What’s also nice: if you don’t have experience with server setup and would like to reduce the chances of bots finding and exploiting your internal network while you teach yourself, this is a good safeguard, since it can block bots from reaching your servers entirely.

We run our own internal cloud server, wiki, chats, and so on, on a low-power mini-PC running OpenBSD. I check the logs and never see unauthorized IPs connecting to the important services. This can be done on just about anything with Nginx, and probably with nearly any web server setup, so long as you can sufficiently trust the server OS. (Really, there’s not much good reason to be running Windows servers in $current_year unless your server software made the poor choice of restricting itself to that platform.)

This is likely possible with most web servers; I use Nginx and will be demonstrating with it. This isn’t a tutorial on any one specific setup. I’m mostly going to be referring to high-level concepts which you’d either need to learn or already know how to do independently. Note that, as with everything, this isn’t magically foolproof. It is security through obscurity and should be treated as one extra layer of protection rather than a bullet-proof shield.

Prerequisites

Overall, here’s what you want:

  • A Linux/BSD server, either a VPS or some box in your office or home. I can’t speak for other systems.
  • Nginx, or some other reverse proxy which you can port the following configurations to.
  • Some site, cloud, service, etc. that you want to access but keep away from non-authorized machines.
  • A self-signed SSL certificate workflow. Optional, but best for supporting encryption. We use our own CA and set our client machines to trust it.

If you’ve got experience hosting servers, then you’ll see this is pretty easy to set up.

Step 1 – Generate Dummy SSL Cert/Key

First, you need a self-signed SSL cert and key. Generate it however you want. What matters is that it contains no meaningful identifying data: empty country, no meaningfully identifiable CA, and so on. This will make sense shortly. Store it somewhere Nginx can access.
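For example, with OpenSSL (the file paths and the throwaway “invalid” common name are my own choices; any equally meaningless subject works):

openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
	-subj "/CN=invalid" \
	-keyout /etc/nginx/ssl/dummy.key \
	-out /etc/nginx/ssl/dummy.crt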

Step 2 – Block Unauthorized Connections

On Linux-based distributions, as of writing, Nginx is set up to easily allow including configs from a directory, usually /etc/nginx/sites-enabled/. BSD systems default to a single nginx.conf; you’ll need to either add a similar include or put your configuration directly in that file. Do whatever you need, however you prefer. I will be showing the Linux way, since self-contained configuration files are fun and easy to toggle. If your nginx.conf already contains a default_server block (it may exist because of the default Nginx “it works!” page), change it to the following or remove it.
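If you’re stuck with the single-file setup, a minimal sketch of the include line inside the http block of nginx.conf (the directory path is just the Debian-style convention; use whatever you like):

http {
	# ...your existing settings...
	include /etc/nginx/sites-enabled/*;
}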

Symlink or add to sites-enabled the following configuration:

# Catch-all for plain HTTP: any request that doesn't match a known
# server_name gets the connection closed with no response (444).
server {
	listen       80 default_server;
	listen       [::]:80 default_server;
	server_name  _ "";
	return       444;
}

# Catch-all for HTTPS: complete the TLS handshake with the dummy cert,
# then close the connection. Note the ssl flag on the listen lines;
# without it, Nginx speaks plain HTTP on port 443 and never presents
# the certificate.
server {
	listen       443 ssl default_server;
	listen       [::]:443 ssl default_server;
	server_name  _ "";
	ssl_certificate         "PATH_TO_DUMMY_CERT.crt";
	ssl_certificate_key     "PATH_TO_DUMMY_KEY.key";
	return       444;
}

Change the SSL cert and key paths to wherever you saved your dummy cert and key. (If you don’t know what this means, you should be researching, not copy-pasting.) What this does: the server listens for connections and, if it’s an SSL connection, completes the handshake, then immediately terminates it. 444 is Nginx’s special “close the connection without sending a response” code. The server sends the SSL cert during the handshake, which is why we made a dummy one with no identifying information. We are setting the server up to reject any connection using an unsupported domain name, which usually means a random bot pinging the server IP with no hostname, possibly sniffing for insecure websites with login pages and vulnerabilities.

Now test this by restarting Nginx and attempting to connect to the server in a web browser. Both HTTP and HTTPS should fail. HTTPS will likely throw an untrusted self-signed certificate warning; this is expected. If you inspect the certificate in your browser, you should see it carries no meaningful information about who signed it or what it’s intended for. If you want, you can add custom logging to track the IPs hitting this dud server and which hostnames they’re attempting, as sketched below.
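Here’s a minimal sketch of that logging; the format name and log path are my own choices. log_format belongs in the http context, and the access_log line goes inside each catch-all server block:

# In the http block:
log_format denied '$remote_addr attempted host="$host" "$request" [$time_local]';

# In each catch-all server block:
access_log /var/log/nginx/denied.log denied;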

Step 3 – Set Up Intended Servers and Make More SSL Certs

This is where the fun stuff comes in. Set up an Nginx config to host a simple HTML doc, an internal Nextcloud server, or whatever. Set up the new site config as usual, alongside your deny-access configuration. However, for the server_name, pick something unique, such as test.secret. Then generate an SSL cert and key for your chosen server_name, ideally signed by your own CA so you can tell your browsers and tools to trust your certificates broadly instead of trusting them one by one. Do not use your CA for the dummy certificate; it is an identifiable piece of information.
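A sketch of the cert side with OpenSSL. Every name and path here is my own placeholder, and note one real-world wrinkle: modern browsers validate the subjectAltName extension rather than the CN, so the SAN line matters.

# One-time: create your private CA (clients import and trust ca.crt).
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
	-subj "/CN=Internal CA" -keyout ca.key -out ca.crt

# Per site: generate a key and CSR for the secret hostname...
openssl req -newkey rsa:2048 -nodes -subj "/CN=test.secret" \
	-keyout test.secret.key -out test.secret.csr

# ...then sign it with your CA, attaching the SAN.
printf "subjectAltName=DNS:test.secret\n" > san.ext
openssl x509 -req -in test.secret.csr -CA ca.crt -CAkey ca.key \
	-CAcreateserial -days 825 -out test.secret.crt -extfile san.ext

And a corresponding Nginx server block, assuming a hypothetical backend service on local port 8080 (swap the proxy_pass for a root directive if you’re just serving files):

server {
	listen       443 ssl;
	listen       [::]:443 ssl;
	server_name  test.secret;

	ssl_certificate         "/etc/nginx/ssl/test.secret.crt";
	ssl_certificate_key     "/etc/nginx/ssl/test.secret.key";

	location / {
		proxy_pass http://127.0.0.1:8080;
		proxy_set_header Host $host;
	}
}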

Step 4 – Hosts

An observant fellow will see where I’m going with this, but you still need a way to actually reach the server. https://test.secret isn’t even possible to host on public DNS, and running your own DNS is possible but annoying. Some of you, however, might have seen guides on blocking malware websites with the hosts file. Those work by redirecting a domain name, such as google.com, to 0.0.0.0 or 127.0.0.1, so Google resolves to the current device and goes nowhere. We instead set test.secret to your server’s IP, whether that’s a local subnet address like 192.168.0.x or 10.0.0.x, or the publicly accessible address. Keep in mind that browsers tend to treat an address bar entry as a URL only when it has a common top-level domain, so add the https:// prefix to your bookmarks manually, lest your custom domain get sent to a search engine instead.
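For example, on each authorized client (the addresses are placeholders; on Windows the file lives at C:\Windows\System32\drivers\etc\hosts):

# /etc/hosts -- pick whichever address reaches the server from this device
10.0.0.5        test.secret    # on the LAN
203.0.113.10    test.secret    # or the public VPS address from outside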

Once this is set up, access the new server through your browser. If everything is configured correctly, you should now be able to connect to the target server using the custom hostname. If the server is public, you can hand the IP and custom server name to those you authorize so they can access it as well. Since the hostname can’t exist on public DNS, bots can’t scrape DNS/IP combinations and wander straight into your server, and a bad actor scanning your IP has no easy way to find, let alone crack, your internal file servers, cloud servers, repositories, and so on. Obviously you should still have your SSH server secured with keys rather than passwords, but that’s beside the point.

Caveats and Warnings

As with all things IT-related, this is not infallible. Human error or malice is the biggest flaw in even the tightest security. If someone leaks your custom domains and IP, you will need to change the domains so the leaked combination stops working, which could mean changing the domain in both Nginx and the final service so that generated URLs point at the new name. You should also ensure accounts and server software are secured as if they were publicly accessible. Finally, this is intended for private infrastructure, so you are trusting the people you hand the domain/IP combination to. Choose your friends wisely. Extra measures, and treating the infrastructure as public, are failsafes in case the combo leaks and a malicious actor can reach it. You can also further secure this by whitelisting IPs, whether at the Nginx or software level. Technically you could do all of this without the custom domain trick, but custom domains are far prettier to access, don’t expose your server to public DNS, and don’t expose your server’s IP the way connecting to it directly would, all while keeping SSL security. That last point is especially wise if you ever stream or take screenshots with your server’s domain name visible.
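That whitelisting is a short addition on the Nginx side; a sketch with placeholder addresses:

server {
	# ...the rest of the test.secret config from Step 3...
	allow 10.0.0.0/24;     # local subnet
	allow 203.0.113.25;    # a trusted teammate's static IP
	deny  all;
}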

Another factor: if you run the server on a home IP, and you’re not hosting behind a reverse proxy like Cloudflare which hides your IP, then you want a VPN to cover your own IP in case some malicious actor tries to snag it by sending you a bait link and checking their logs when you click it. Even just loading an image could catch it. If they have the IP and know your domain name, they could gain access to your server. I don’t get $10,000 sponsorships to shill specific VPNs, so I have no comment on that matter. The alternative is to whitelist trusted IPs on top of these other protections, or to change the domain. I would also advise aggressively disabling telemetry that could link your server identity and IP, because technology is in a terrible state where telemetry is on by default and always a breach of privacy and security.

Also, this isn’t possible if you cannot edit hosts or otherwise make your custom hostname resolve to an IP. Android was inconsistent for me, needing either a third-party application or root to modify hosts. You will also want a way to trust the self-signed certificates, or else add exceptions or otherwise tell programs to quiet down about them being self-signed. I found this easier in Firefox than in any Google-based browser.

There are likely applications and infrastructure that allow for better management of this kind of secure internal setup with custom domain names, but this is a way to do it with no external applications, just some simple Nginx config files. The deliberately awkward setup means only devices that have been configured for it can access the server, and extra whitelisting and blacklisting can make even deliberate fishing attempts unlikely to find your services. It’s a scrappy method, but it does wonders for us: when I check my server logs, there is no random bot traffic. One service I run is publicly reachable in case I ever need it remotely, yet I only access it from my own router’s subnet, and I have never seen a ping from an IP outside 10.0.0.x. It has been running passively for months. As said, you can easily log the default_server to see failed connections from IPs arriving with no valid hostname, or even set up honeypots and throttlers to get their IPs blacklisted.

