14 Aug 2025

Testing PostgreSQL on Debian/Hurd: A Windows + QEMU Adventure

Curiosity often leads to the most interesting technical adventures. This time, I decided to explore something off the beaten path: running Debian GNU/Hurd inside a virtual machine on my Windows 11 host and compiling PostgreSQL from source.

This post is part 1 of a multi-part series documenting the process, challenges, and discoveries along the way. Future parts will dive deeper into advanced topics, automation, and ongoing compatibility work—so if you're interested in PostgreSQL, alternative operating systems, or open source testing, stay tuned!

What is Debian?
Debian is one of the oldest and most respected Linux distributions, known for its stability, vast software repositories, and commitment to free software principles. While most people associate Debian with the Linux kernel, it’s actually a complete operating system that can run on different kernels.

What is GNU/Hurd?
GNU/Hurd is the GNU Project's alternative to a traditional Unix kernel. Unlike the monolithic Linux kernel, the Hurd is a collection of servers running on a microkernel (GNU Mach), aiming for greater modularity and flexibility. While GNU/Hurd is still experimental and not as mature or widely used as Linux, it represents a fascinating approach to operating system design.

Debian GNU/Hurd combines the familiar Debian userland (tools, package management, etc.) with the GNU/Hurd kernel, offering a unique environment for open source enthusiasts and OS tinkerers.

My goal for this experiment was to see how far I could get with a modern database stack—specifically, compiling and running PostgreSQL—on this unusual platform.



Setting Up the VM

Instead of the CD image, I used the pre-built disk image available here. After downloading and extracting the .img file, I launched the VM with QEMU using the following command:

qemu-system-x86_64.exe -machine type=pc,accel=whpx,kernel-irqchip=off -boot d -m 4096 -usb -display default,show-cursor=on -drive file=".\debian-hurd-i386-20250807.img",cache=writeback

Explanation of the command:

  • qemu-system-x86_64.exe: Runs QEMU for 64-bit x86 systems (works for 32-bit guests too).
  • -machine type=pc,accel=whpx,kernel-irqchip=off: Specifies a PC-type machine, enables Windows Hypervisor Platform acceleration (WHPX), and disables kernel IRQ chip emulation for compatibility.
  • -boot d: Sets the boot order to try the CD-ROM drive first; with no CD attached, QEMU falls back to the hard disk image (-boot c would select the hard disk directly).
  • -m 4096: Allocates 4GB of RAM to the VM.
  • -usb: Enables USB support.
  • -display default,show-cursor=on: Uses the default display and ensures the mouse cursor is visible.
  • -drive file=".\debian-hurd-i386-20250807.img",cache=writeback: Uses the extracted Hurd disk image as the hard drive and enables writeback caching for better disk performance.

This boots directly into the installed Debian/Hurd system with improved performance and usability on a Windows 11 host.
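
To avoid retyping that long command, I keep it in a small batch file. A minimal sketch (start-hurd.bat is a hypothetical name; it assumes qemu-system-x86_64.exe is on PATH and the image sits next to the script):

@echo off
rem start-hurd.bat - launch the Debian/Hurd VM (hypothetical helper)
rem Assumes QEMU is on PATH and the image is in the current directory
qemu-system-x86_64.exe -machine type=pc,accel=whpx,kernel-irqchip=off ^
  -boot d -m 4096 -usb -display default,show-cursor=on ^
  -drive file=".\debian-hurd-i386-20250807.img",cache=writeback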

Preparing to Build PostgreSQL

Debian/Hurd is minimal out of the box, so the first step was to install all the build tools and libraries required for compiling PostgreSQL:

sudo apt-get update
sudo apt-get install build-essential git libxml2-dev libxslt-dev autotools-dev automake libreadline-dev zlib1g-dev bison flex libssl-dev libpq-dev ccache

This command installs the compiler, linker, version control tools, XML and SSL libraries, autotools, and all other dependencies PostgreSQL may need for a successful build and test cycle.
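
A quick sanity check that the toolchain actually landed (optional, but cheap):

gcc --version
bison --version
flex --version
git --version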

Downloading and Compiling PostgreSQL

Instead of downloading a release tarball, I cloned the official PostgreSQL git repository and compiled the master branch:

git clone https://github.com/postgres/postgres.git
cd postgres
./configure --prefix=~/proj/localpg
make
make install

This approach ensures you're building the latest development version of PostgreSQL directly from source, and installs it locally to your user's ~/proj/localpg directory.
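
Since ccache is in the dependency list above, it's worth wiring it into the build; repeated compiles of the master branch become much faster. A sketch of the variant I'd use (same prefix as above, written with $HOME to sidestep any tilde-expansion surprises):

# Wrap gcc with ccache so unchanged files hit the cache on rebuilds
./configure --prefix=$HOME/proj/localpg CC="ccache gcc"
make
make install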

Setting Up the Database Cluster

PostgreSQL needs a data directory (cluster) to store its databases. Since the installation was local to my user, I simply initialized the cluster and started the server using the full path to the binaries (since they're not in my PATH):

~/proj/localpg/bin/initdb -D ~/proj/localpg/pgdata
~/proj/localpg/bin/pg_ctl -D ~/proj/localpg/pgdata -l logfile start
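
Adding the install directory to PATH saves typing full paths from here on, and pg_ctl status confirms the server actually came up (a sketch, assuming the same paths as above):

export PATH="$HOME/proj/localpg/bin:$PATH"
pg_ctl -D ~/proj/localpg/pgdata status
psql -d postgres -c 'SELECT version();'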

Connecting and Creating a Table

With the server running, I connected to the database and created a sample table:

~/proj/localpg/bin/psql -d postgres

Inside psql:

CREATE TABLE test_table (id SERIAL PRIMARY KEY, name TEXT);
INSERT INTO test_table (name) VALUES ('Hello from Debian/Hurd!');
SELECT * FROM test_table;

Example output:

CREATE TABLE
INSERT 0 1
 id |          name
----+-------------------------
  1 | Hello from Debian/Hurd!
(1 row)

Running the Test Suite

To ensure the build was solid, I went back to the source directory and ran:

cd ~/postgres
make check

This runs PostgreSQL's regression tests, verifying that the core features work as expected, even on Hurd. The run mostly passed, apart from a few failing tests that warrant further research.
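
For an autoconf build like this one, the regression artifacts land in src/test/regress, which is where the investigation of those failures will start:

# Per-test pass/fail summary
cat src/test/regress/regression.out
# Expected-vs-actual diffs for the failing tests
less src/test/regress/regression.diffs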

Quick QEMU Tip

When working with QEMU, remember that Ctrl-Alt-G is your friend—it releases the mouse and keyboard from the VM window, making it much easier to switch back to your host system.

Adding a Separate Volume for More Disk Space

The base Debian/Hurd image is quite small and can easily run out of space, especially when compiling large projects or running make check. I frequently hit disk full errors during testing.

Solution:

  1. Shut down the VM.

  2. Resize the disk image:

    qemu-img resize debian-hurd-i386-20250807.img +10G
    

    This adds 10GB to the existing disk image.

  3. Restart the VM.

  4. Create a new partition:

    • Use fdisk /dev/hd0 (or the appropriate device) to create a new partition in the extra space (see the fdisk sketch after this list).
  5. Format the new partition:

    mkfs.ext4 /dev/hd0s3
    

    (Note: On my setup, the original root partition was /dev/hd0s2, so the new partition created for extra space was /dev/hd0s3. Adjust the device name as needed for your configuration.)

    Although the root volume is ext2 (!), Debian/Hurd handled ext4 without complaint in my testing, so feel free to use ext4 for the new partition.

  6. Mount the new volume:

    mkdir -p /mnt/newvol
    mount /dev/hd0s3 /mnt/newvol
    
  7. Grant non-root user access:

    • As root, change ownership:
      chown robins:robins /mnt/newvol
      
    • Now your non-root user (e.g., robins) can use /mnt/newvol for compiling PostgreSQL and running make check without running out of disk space.
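
For step 4, the fdisk session boils down to a handful of keystrokes. A sketch (device and partition numbers match my setup; adjust for yours):

# Run as root; keystrokes are shown as comments
fdisk /dev/hd0
#   n       new partition
#   p       primary
#   3       partition number (root was hd0s2 here, so the new one is 3)
#   <Enter> accept the default first and last sectors to use the new space
#   w       write the partition table and exit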

Why use a non-root user for PostgreSQL? PostgreSQL is designed to run as a non-root user for security reasons. Running the database server or its tests as root can expose your system to unnecessary risks and may even cause certain operations to fail. Always use a dedicated non-root user for installation, testing, and day-to-day database operations.

This approach made it possible to complete the build and test cycle without disk space issues.

Final Thoughts

Running Debian/Hurd in a VM on Windows 11 was surprisingly smooth, though some packages and features are less mature than on Linux. Compiling PostgreSQL from scratch was a great way to explore the system's capabilities and compatibility. If you're looking for a fun, geeky weekend project, give Debian/Hurd a try!

Next Steps & What's Still Pending

This is only part 1 of a multi-part series. In future installments, I'll cover:

  • Setting up the PostgreSQL buildfarm for automated testing on Debian/Hurd
  • Deeper investigation into SMP/multi-core support (currently not working)
  • More QEMU optimization and compatibility testing
  • Additional performance tuning and disk management strategies
  • Troubleshooting Perl module installation issues (e.g., LWP::Protocol::https, LWP::Simple, Net::SSLeay), which currently fail to install—more research is needed to understand and resolve these problems.
  • Investigating why make check did not complete successfully (failed on a few tests)—this requires further research.

Some features, like multi-core support, full buildfarm integration, reliable Perl module installation, and passing all PostgreSQL regression tests, are not yet working or fully tested. These will be explored in detail in future posts. Stay tuned!

6 Aug 2025

Pi-hole Part 2: Going Fully Independent with Unbound

After my positive first impressions with Pi-hole, I decided to take the next logical step: eliminating my dependence on external DNS resolvers entirely. While Quad9 served me well, there's something unsettling about routing all my DNS queries through a third party, no matter how trustworthy they appear.

The solution? Unbound - a validating, recursive, caching DNS resolver that can operate completely independently, resolving domain names directly from authoritative sources without relying on upstream DNS providers.

Why Make the Switch?

My motivation goes beyond just privacy paranoia (though that's certainly part of it):

Privacy First: Every DNS query reveals your browsing patterns. Even with Quad9's privacy promises, I prefer keeping that data entirely within my network.

Security Enhancement: Unbound performs DNSSEC validation by default, ensuring the authenticity of DNS responses and protecting against DNS spoofing attacks.

Reduced Latency: While counterintuitive, eliminating the round-trip to external resolvers can actually improve response times for frequently accessed domains through better local caching.

True Independence: No more wondering about logging policies, data retention, or potential government requests to DNS providers.

The Downsides to Consider

Before diving in, it's worth acknowledging the trade-offs:

Increased Complexity: You're now responsible for maintaining and troubleshooting your own DNS infrastructure. When things break, you can't just blame your ISP's DNS servers.

Initial Query Delays: Cold cache scenarios will be slower as Unbound has to walk the entire DNS hierarchy from root servers. First visits to new domains will take noticeably longer.

Resource Usage: While minimal, you're now running an additional service that consumes memory and CPU cycles on your Pi.

Potential Connectivity Issues: If your Pi goes down, your entire network loses DNS resolution. External resolvers provide redundancy that you're giving up.

CDN Sub-optimization: Content delivery networks may not route you to the geographically closest servers, potentially affecting streaming and download performance. Many large DNS providers like Cloudflare and Google use Anycast networks, where the same IP address is announced from multiple geographic locations, automatically routing you to the nearest server. When you run your own recursive resolver, you lose this geographic optimization and might connect to CDN endpoints that are further away.

False Sense of Security: While your DNS queries are now private, it's important to remember that all your actual web traffic (HTTP/HTTPS requests) still flows through your ISP and is visible in their logs. You've privatized the "phone book lookup" but not the actual "conversation" - your ISP can still see which IP addresses you're connecting to, just not the domain names that resolved to them.

Maintenance Overhead: The root hints file should be updated periodically (though it changes infrequently), and you're responsible for keeping your Unbound configuration current and secure.

The Verdict: Benefits Outweigh the Costs

After weighing these trade-offs, the privacy and security benefits still made a compelling case for proceeding. The complexity is manageable for anyone comfortable with basic Linux administration, and the performance impacts are largely theoretical for typical home usage. Most importantly, the peace of mind from knowing exactly where my DNS queries go and how they're handled proved worth the additional overhead.

The Setup Process

Installing and configuring Unbound alongside Pi-hole turned out to be surprisingly straightforward.

Step 1: Install Unbound

sudo apt update
sudo apt install unbound -y

Step 2: Configure Unbound for Security and Performance

I created a custom configuration at /etc/unbound/unbound.conf.d/pi-hole.conf:

sudo nano /etc/unbound/unbound.conf.d/pi-hole.conf

Here's my security-focused configuration:

server:
    # Basic settings
    port: 5335
    do-ip4: yes
    do-ip6: yes
    do-udp: yes
    do-tcp: yes
    
    # Security settings
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    val-clean-additional: yes
    val-permissive-mode: no
    val-log-level: 1
    
    # Privacy settings
    hide-identity: yes
    hide-version: yes
    harden-glue: yes
    harden-dnssec-stripped: yes
    harden-below-nxdomain: yes
    harden-referral-path: yes
    use-caps-for-id: yes
    
    # Performance settings (security-first approach)
    cache-min-ttl: 300
    cache-max-ttl: 86400
    prefetch: yes
    prefetch-key: yes
    
    # Interface settings
    interface: 127.0.0.1
    access-control: 127.0.0.1/32 allow
    access-control: ::1/128 allow
    
    # Logging
    verbosity: 1
    log-queries: no
    log-replies: no
    
    # Root hints
    root-hints: "/var/lib/unbound/root.hints"
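
Before restarting anything, it's worth validating the file; Unbound ships a checker for exactly this purpose:

sudo unbound-checkconf

A clean run prints a "no errors" line; otherwise it points at the offending directive.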

Step 3: Root Hints Management

Root hints are essential files that tell Unbound where to find the DNS root servers - the starting point for all DNS resolution. When installing Unbound via package manager, root hints are included and should be updated automatically through regular system updates.

Note: The root hints file changes infrequently (typically a few times per year when root servers are added, removed, or have IP changes). Rather than manual updates, it's best to keep your system updated through your package manager, which will handle root hints updates appropriately - roughly every 6 months is more than sufficient for most users.
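
If you ever want to refresh the root hints by hand anyway, the canonical file is published by InterNIC (optional; the packaged copy is normally fine):

# Fetch the current root hints and make them readable by the unbound user
sudo wget https://www.internic.net/domain/named.root -O /var/lib/unbound/root.hints
sudo chown unbound:unbound /var/lib/unbound/root.hints
sudo systemctl restart unbound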

Step 4: Initialize DNSSEC Root Key (Usually Optional)

Modern Unbound packages typically handle DNSSEC root key initialization automatically. However, if you encounter DNSSEC validation errors or want to ensure the key is properly configured, you can initialize it manually:

sudo unbound-anchor -a "/var/lib/unbound/root.key"
sudo chown unbound:unbound /var/lib/unbound/root.key

Note: If this step fails or the file already exists, it's likely already configured correctly by the package installation.

Step 5: Start and Enable Unbound

sudo systemctl enable unbound
sudo systemctl start unbound
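
A quick check that the service is healthy and listening on the expected port:

systemctl status unbound --no-pager
sudo ss -tulpn | grep 5335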

Step 6: Test Unbound

Before integrating with Pi-hole, I verified Unbound was working with both valid and invalid domain lookups:

Testing with an existing domain:

robins@pi4:~ $ dig @127.0.0.1 -p 5335 google.com | egrep -w 'status|^google'
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58713
google.com.             269     IN      A       142.250.70.206

Testing with a non-existent domain:

robins@pi4:~ $ dig @127.0.0.1 -p 5335 asdfasdsfasfdfsafasd | egrep -w 'status|flags'
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 19691
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
; EDNS: version: 0, flags:; udp: 1232

Both tests confirm Unbound is working correctly. The existing domain test shows successful resolution with NOERROR status, while the non-existent domain test returns NXDOMAIN (Non-eXistent DOMAIN), which is the proper response when a domain doesn't exist. Note the ad flag in the non-existent domain response, indicating that even failed lookups are DNSSEC-validated - Unbound cryptographically verified that this domain truly doesn't exist rather than just accepting an unvalidated negative response.
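
To exercise DNSSEC validation directly, a deliberately broken domain helps; dnssec-failed.org publishes intentionally invalid signatures, so a validating resolver must refuse to answer:

dig @127.0.0.1 -p 5335 dnssec-failed.org | grep -w status
# Expect SERVFAIL here; a normal answer would mean DNSSEC
# validation is not actually happening.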

Step 7: Configure Pi-hole to Use Unbound

In the Pi-hole admin interface:

  1. Navigate to Settings → DNS
  2. Uncheck all upstream DNS servers
  3. Add 127.0.0.1#5335 as a custom DNS server
  4. Enable DNSSEC validation
  5. Save settings
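
With Pi-hole now pointed at Unbound, an end-to-end check from any LAN client should flow client → Pi-hole → Unbound → authoritative servers (192.168.1.2 below is a placeholder for your Pi-hole's address):

dig @192.168.1.2 debian.org | grep -w status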

The Results: Worth the Effort

After 24 hours of operation with the new setup, the benefits are clear:

Complete Privacy: All DNS queries now resolve independently without touching external servers (except for the initial root server queries, which reveal no specific domain information).

DNSSEC Validation: Every response is cryptographically verified, providing protection against DNS manipulation.

Improved Cache Efficiency: Unbound's more sophisticated caching algorithm seems to provide better hit rates for our household's browsing patterns.



Performance Impact: Negligible. The Pi 4 handles both Pi-hole and Unbound without breaking a sweat:

robins@pi4:~ $ uptime
 21:22:25 up 2 days,  9:47,  3 users,  load average: 0.00, 0.00, 0.00

Security Considerations

My configuration prioritizes security over raw performance:

  • Bounded TTL settings: cache-max-ttl caps how long answers may be cached, so every record is re-fetched and re-validated at least daily, while cache-min-ttl: 300 keeps very short-lived records from hammering upstream servers
  • Strict DNSSEC validation: No permissive mode that might accept invalid responses
  • Minimal logging: No query logging to preserve privacy even locally
  • Restricted access: Only localhost can query Unbound directly

Final Thoughts

Setting up Unbound with Pi-hole represents the logical conclusion of taking control over your DNS infrastructure. While the privacy and security benefits are the primary motivators, the technical satisfaction of running a completely self-contained DNS resolution system is considerable.

The setup process is more involved than simply pointing Pi-hole at an external resolver, but the configuration is straightforward and well-documented. For anyone concerned about DNS privacy or wanting to reduce external dependencies, this combination provides an excellent solution.

The Pi 4 continues to prove itself as the perfect platform for this type of network infrastructure project - handling both Pi-hole and Unbound with resources to spare. As I mentioned in part 1, even a Pi 2 would likely suffice for most households, making this an accessible upgrade for anyone looking to enhance their home network's privacy and security posture.
