Why I Finally Built My Own Server
12/27/2025 · Shaheed Mohamed Ali

Recently, I have been spending most of my free time working on side projects, and I noticed something immediately. The friction in my digital life wasn't coming from the code itself. It was coming from where that code, and everything else, actually lived.
On the dev side, working with modern AI tools highlighted a major flaw in my workflow. While my AI assistants could remember our conversations perfectly, they couldn't sync the actual state of my machine. I would spend hours on my laptop setting up environments and dependencies, only to switch to my desktop and find myself starting from scratch. I was wasting time fighting to keep my devices in sync rather than actually building.
But as I looked closer, I realized this wasn't just a coding problem. It was a privacy problem that touched every part of my digital life.
- The Gen AI Risk: As a developer, I am constantly pasting database schemas, project ideas, and sensitive logic into public LLMs. I realized I was feeding my intellectual property into a black box. I needed a way to run powerful open source models locally, where I could experiment freely without my data ever leaving my network.
- The "Rent to Own" Trap: Whether it was my photo library or my passwords, I felt like I was renting access to my own data. I was tired of paying monthly fees just to keep my memories safe, knowing they were likely being scanned by algorithms I didn't control. Trusting third party servers with the keys to my entire digital identity started to feel like an unnecessary risk.
- Media & Ownership: I realized I didn't actually own any of the media I consumed. Shows could disappear due to licensing deals, and quality was always dependent on my internet connection. I wanted a library that was permanently mine.
I realized I didn't just need a place to run code. I needed a central brain for everything. I wanted a persistent environment where my dev tools, my local AI models, and my private data lived under my roof. That's when I decided it was time to build a homelab.
Avoiding the Virtualization Rabbit Hole
I've seen people run really complex virtualized setups with Proxmox, with everything living in VMs and containers, but that felt like a layer of complexity I wouldn't actually benefit from, especially since I wasn't planning on spinning up many VMs. After scouring Reddit posts and forums for a while, I settled on Ubuntu Server. It's stable, tried and tested, and has a large community around it in case I ever need help configuring something. The installation was also straightforward, much like installing any other Linux distro. The one thing I'd watch out for is Secure Boot: disabling it in your motherboard's BIOS will save a lot of headaches down the line.
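If you want to double-check that Secure Boot really is off once Ubuntu is installed, mokutil can report its state from the running system. A minimal sketch, assuming an apt-based install; verify the package name and flags against your release:

```bash
# Check Secure Boot state from inside Ubuntu (should report "SecureBoot disabled")
sudo apt update && sudo apt install -y mokutil
mokutil --sb-state

# While you're at it, bring the fresh install up to date
sudo apt upgrade -y
```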
Containerizing the Chaos
Once Ubuntu Server was set up and I had configured my ufw firewall, the next step was installing applications on the server. Rather than putting everything on bare metal, I install and manage my applications through Docker containers. Each application gets its own mini environment, so if anything goes wrong I can simply delete the container and nothing on the underlying OS is affected, and I can start and stop services without touching the host system. To manage my Docker services I went with Dockge, a popular container management tool that lets me view and manage all my containers and spin up new ones from a single web UI. A rough sketch of that setup is below.
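Treat this as a minimal sketch rather than my exact config: the ufw rules assume the only things you want reachable are SSH and the Dockge web UI, and the compose file mirrors the one in the Dockge README at the time of writing (the image tag, port 5001, and the /opt/stacks directory are the project's defaults, so confirm them against the docs):

```bash
# Basic ufw posture: deny inbound by default, then allow only what's needed
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 5001/tcp     # Dockge web UI
sudo ufw enable

# Dockge itself runs as a container; its compose file lives in /opt/dockge,
# and it manages any stacks you create under /opt/stacks
sudo mkdir -p /opt/dockge /opt/stacks
cd /opt/dockge
sudo tee compose.yaml > /dev/null <<'EOF'
services:
  dockge:
    image: louislam/dockge:1
    restart: unless-stopped
    ports:
      - "5001:5001"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/app/data
      - /opt/stacks:/opt/stacks
    environment:
      - DOCKGE_STACKS_DIR=/opt/stacks
EOF
sudo docker compose up -d
```

One caveat worth knowing: Docker publishes container ports by writing its own iptables rules, so published ports can bypass ufw entirely. Keeping services bound to localhost or reachable only over Tailscale (next section) is the safer default.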

Escaping the Local Network
After setting up some of the applications I wanted through Docker, I needed a way to reach them from outside my home network, not just from devices sitting on my LAN. This was where Tailscale came into play.
I knew that opening ports on my router was a security nightmare waiting to happen. I didn't want to expose my entire server to the open internet just to check on a download or tweak an automation workflow. Tailscale was the perfect solution because it creates a secure, private mesh network (built on WireGuard) between my devices. It essentially tricks my phone and laptop into thinking they are on the same network as my server, even when I am miles away. This meant I could access all my services securely without having to mess with complex router configurations or leave ports exposed to attack.
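Getting Tailscale onto the server only takes a couple of commands. A sketch of the basic flow, using Tailscale's official install script (the `up` step prints a URL to log in and authorize the machine):

```bash
# Install Tailscale with the official script, then join this machine to the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up        # prints a login URL to authorize the server

# Check the server's tailnet address and which peers can see it
tailscale ip -4
tailscale status
```

After that, every service on the box is reachable from my phone or laptop at the server's Tailscale IP (or its MagicDNS name), with nothing forwarded on the router.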
Bringing It All Together
With my applications running and secure remote access sorted via Tailscale, I faced one last issue. I didn't want to memorize a list of port numbers just to open my password manager or media server. I needed a central hub that brought everything into a single, clean interface. This is where I decided to go with CasaOS. It transforms the raw functionality of the server into a visually stunning dashboard that feels more like using a smartphone than managing a Linux machine. Instead of typing IP addresses, I just have a grid of sleek icons. It serves as the perfect landing page for my lab, making the entire setup feel polished and professional.
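For completeness, CasaOS installs onto an existing Ubuntu Server with a one-line script and then serves its dashboard over HTTP on the machine's address (port 80 by default, as far as I recall). As with any curl-to-bash installer, read the script and confirm the URL against the official CasaOS site before running it:

```bash
# Install CasaOS via its one-line installer (verify the URL on the official site first)
curl -fsSL https://get.casaos.io | sudo bash

# The dashboard should then be reachable in a browser, e.g. over Tailscale:
#   http://<server-tailscale-ip>/
```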

The Real Return on Investment
Beyond the cool apps and the privacy, the biggest payoff has been the learning process itself. There is a huge difference between writing code and actually maintaining the infrastructure it runs on. I wasn't just deploying to a managed cloud anymore. I was debugging systemd services, figuring out how Docker networks actually talk to each other, and finally understanding how Linux file permissions work. Every wall of red error text in the terminal was a mini lesson in how computers actually function. It turns out that breaking your own server at 2 AM is the fastest way to learn how to fix one.