
23 Nov 2025

Managing PostgreSQL Buildfarm Members: A status update

The PostgreSQL Buildfarm is a global network of machines that continuously test PostgreSQL across a wide range of operating systems, architectures, compilers, and branches. Over the past few years, I have created and maintained several buildfarm members, each with its own quirks and strengths. In this post, I’ll share a status update on the following animals: alligator, dodo, woodpecker, leafhopper, massasauga, parula, snakefly, and the newest member, ovenbird.

What is the Buildfarm?

The Buildfarm is essential for PostgreSQL development. It helps catch platform-specific bugs early, ensures code quality, and provides confidence that new changes work everywhere. Each member reports results for multiple branches (like master, REL_18_STABLE, etc.), using different OSes, compilers, and hardware.

About the Architectures

The Open Hardware Frontier: RISC-V

RISC-V is an open-standard instruction set architecture (ISA). Unlike most other ISAs, it is provided under open-source licenses that do not require fees to use.

  • ovenbird is my first foray into this architecture, running on a VisionFive 2 board.
  • It (hopefully) represents the future of open hardware, and ensuring PostgreSQL compiles and runs correctly on it is a long-term investment in the open-source ecosystem.

Bridging Windows and Linux: WSL2

Windows Subsystem for Linux (WSL) lets developers run a GNU/Linux environment -- including most command-line tools, utilities, and applications -- directly on Windows, unmodified, without the overhead of a traditional virtual machine or dual-boot setup.

  • woodpecker runs inside a Debian container on WSL2.
  • This setup is crucial for verifying that PostgreSQL behaves correctly in this increasingly popular development environment, which bridges the gap between Windows and Linux.

Small Scale, Big Impact: Raspberry Pi

Raspberry Pi revolutionized low-cost computing and is a fantastic platform for edge cases (pun intended) :)

  • dodo runs on a Raspberry Pi 4 Model B.
  • It helps identify performance regressions and race conditions that might be masked by faster hardware. It also ensures PostgreSQL remains viable for low-powered, IoT and edge computing use cases.

The Rise of ARM in the Cloud: Graviton

Several of the buildfarm animals I’ve created run on the Graviton processors. Graviton is Amazon’s custom ARM-based CPU family, designed for high performance and energy efficiency in AWS cloud environments.

  • Graviton1 (first generation) was introduced in 2018, bringing ARM64 to AWS EC2.
  • Graviton2 (second generation) launched in 2020, offering major improvements in performance and scalability.
  • Graviton3 (third generation) arrived in 2022, further boosting compute, memory bandwidth, and energy efficiency—making it ideal for demanding workloads like database regression testing.
  • Graviton4 (fourth generation) is the latest, offering even greater performance and efficiency for cloud-native workloads. The buildfarm animal 'leafhopper' is one of the first to test PostgreSQL on Graviton4.

Testing PostgreSQL on these platforms helps ensure the database runs smoothly on modern cloud hardware and takes advantage of ARM’s growing ecosystem.

Disclosure: The Graviton machines are provided by my employer. All other machines (including the WSL2, RISC-V, and Raspberry Pi instances) are my personal machines.

Meet the Buildfarm Animals

Here’s a quick overview of the machines I have created and recently worked on:

alligator

  • OS: Ubuntu 24.04 LTS
  • Arch: x86_64
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Tracks the latest GCC changes, often finds compiler regressions before anyone else.

dodo

  • OS: Raspbian GNU/Linux 10
  • Arch: armv7l
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: ARM platform, useful for catching issues on lower-powered hardware.

woodpecker

  • OS: Debian 12 (bookworm) on WSL2 (Windows 11)
  • Arch: x86_64
  • Compiler: gcc 12.2.0
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Runs inside WSL2 on Windows 11, great for testing integration with Windows environments.

leafhopper

  • OS: Amazon Linux 2023
  • Arch: aarch64 (Graviton4, r8g.2xl)
  • Compiler: gcc experimental (hourly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Created and managed in a work-based environment; leafhopper is one of the first buildfarm animals testing PostgreSQL on Graviton4 hardware.

massasauga

  • OS: Amazon Linux 2
  • Arch: aarch64 (Graviton1)
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Created and managed in a work-based environment; Graviton1 machine—one of the earliest ARM64 regression testers in the buildfarm, still running reliably after several years.

parula

  • OS: Amazon Linux 2 (AL2)
  • Arch: aarch64 (Graviton3, c7g.2xl)
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Created and managed in a work-based environment; focuses on the third generation of AWS Graviton hardware, useful for performance and compatibility.

snakefly

  • OS: Amazon Linux 2 (AL2)
  • Arch: aarch64 (Graviton2)
  • Compiler: gcc experimental (nightly build)
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: Created and managed in a work-based environment; Graviton2-based member, helps ensure ARM64 stability across AWS generations.

ovenbird (newest member)

  • OS: Ubuntu 24.04.3 LTS
  • Arch: riscv64
  • Compiler: gcc 13.3.0
  • Branches: master, REL_18_STABLE, REL_17_STABLE, REL_16_STABLE, REL_15_STABLE, REL_14_STABLE, REL_13_STABLE
  • Notes: The newest addition to the family, ovenbird brings riscv64 architecture to the buildfarm, helping ensure PostgreSQL is tested on cutting-edge open hardware.

Challenges and Rewards

Managing these buildfarm animals means keeping up with OS upgrades, compiler changes, hardware failures, and PostgreSQL branch updates. Some of these machines are especially aggressive about GCC: they check for updates from the GCC git repository every few hours, recompile a fresh GCC, and use it for the next buildfarm run. This helps catch compiler regressions and compatibility issues very early.

If you want to read more about how these GCC compiles work and see the open source repository, check out my blog post: Compiling latest gcc to test more architectures.

Some of these machines have been running for 3-4 years, and their logs are a treasure trove for debugging tricky platform-specific issues. The diversity of hardware and software helps the PostgreSQL community maintain its reputation for reliability and portability.

Testing with the latest GCC is especially rewarding: it ensures that upstream GCC changes stay aligned with the expectations of the PostgreSQL community, and that PostgreSQL continues to compile and pass tests without surprises. A good example is an upstream GCC bug that was found, reported, and fixed, ensuring that no GCC changes adversely affect PostgreSQL in the long run. Read more about this incident here: PostgreSQL mailing list discussion of a GCC bug.

Here's another email thread that exemplifies why testing gcc experimental is helpful in ensuring that PostgreSQL compiles and tests stay green: PostgreSQL mailing list - GCC experimental thread.

However, it is also important to note that aggressive testing of GCC HEAD needs to be balanced against the time of PostgreSQL developers. The current buildfarm system does not explicitly distinguish between "production" and "bleeding edge" machines, meaning failures on experimental setups can sometimes be distracting. As discussed in this mailing list thread, there is an ongoing conversation about how to best handle these "platform not believed stable" scenarios to ensure that transient failures on experimental toolchains don't unnecessarily burden the community.

Speaking of new architectures, a few months back I wrote about [Testing PostgreSQL on Debian/Hurd](https://www.thatguyfromdelhi.com/2025/08/testing-postgresql-on-debianhurd.html) and planned to add a Hurd machine to the buildfarm. It looks like I've been beaten to the punch! A new member, [fruitcrow](https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=fruitcrow&br=master), is already up and running to test PostgreSQL on GNU/Hurd. This is fantastic news—having "competition" in adding diverse buildfarm members is exactly what we want. It shows that more people recognize that a wide array of test environments leads to a more stable PostgreSQL.

Final Thoughts

If you’re interested in contributing to PostgreSQL, running a buildfarm animal is a great way to help. It’s a hands-on way to learn about PostgreSQL internals, compilers, and operating systems, and it’s rewarding to see your machine’s name in the global test results.
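For anyone wondering what running an animal actually involves day to day: once the buildfarm client is configured and the animal is registered, it largely boils down to a single cron entry that invokes the client for every branch enabled in its configuration. A minimal sketch (the paths, schedule, and log file here are illustrative; see the buildfarm client documentation for the authoritative setup):

# run the buildfarm client once an hour, for all branches enabled in build-farm.conf
43 * * * * cd /home/pgbf/buildfarm-client && ./run_branches.pl --run-all >> /home/pgbf/buildfarm.log 2>&1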

6 Aug 2025

Pi-hole Part 2: Going Fully Independent with Unbound

After my positive first impressions with Pi-hole, I decided to take the next logical step: eliminating my dependence on external DNS resolvers entirely. While Quad9 served me well, there's something unsettling about routing all my DNS queries through a third party, no matter how trustworthy they appear.

The solution? Unbound - a validating, recursive, caching DNS resolver that can operate completely independently, resolving domain names directly from authoritative sources without relying on upstream DNS providers.

Why Make the Switch?

My motivation goes beyond just privacy paranoia (though that's certainly part of it):

Privacy First: Every DNS query reveals your browsing patterns. Even with Quad9's privacy promises, I prefer keeping that data entirely within my network.

Security Enhancement: Unbound performs DNSSEC validation by default, ensuring the authenticity of DNS responses and protecting against DNS spoofing attacks.

Reduced Latency: While counterintuitive, eliminating the round-trip to external resolvers can actually improve response times for frequently accessed domains through better local caching.

True Independence: No more wondering about logging policies, data retention, or potential government requests to DNS providers.

The Downsides to Consider

Before diving in, it's worth acknowledging the trade-offs:

Increased Complexity: You're now responsible for maintaining and troubleshooting your own DNS infrastructure. When things break, you can't just blame your ISP's DNS servers.

Initial Query Delays: Cold cache scenarios will be slower as Unbound has to walk the entire DNS hierarchy from root servers. First visits to new domains will take noticeably longer.

Resource Usage: While minimal, you're now running an additional service that consumes memory and CPU cycles on your Pi.

Potential Connectivity Issues: If your Pi goes down, your entire network loses DNS resolution. External resolvers provide redundancy that you're giving up.

CDN Sub-optimization: Content delivery networks may not route you to the geographically closest servers, potentially affecting streaming and download performance. Many large DNS providers like Cloudflare and Google use Anycast networks, where the same IP address is announced from multiple geographic locations, automatically routing you to the nearest server. When you run your own recursive resolver, you lose this geographic optimization and might connect to CDN endpoints that are further away.
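A quick way to observe this effect yourself, once the setup below is running, is to compare the answer your own resolver returns with the answer a large anycast resolver returns for a CDN-hosted name; the two will often differ, and the public resolver's answer is frequently the closer edge node (the domain here is just an illustrative example):

dig @127.0.0.1 -p 5335 +short www.netflix.com
dig @9.9.9.9 +short www.netflix.com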

False Sense of Security: While your DNS queries are now private, it's important to remember that all your actual web traffic (HTTP/HTTPS requests) still flows through your ISP and is visible in their logs. You've privatized the "phone book lookup" but not the actual "conversation" - your ISP can still see which IP addresses you're connecting to, just not the domain names that resolved to them.

Maintenance Overhead: The root hints file should be updated periodically (though it changes infrequently), and you're responsible for keeping your Unbound configuration current and secure.

The Verdict: Benefits Outweigh the Costs

After weighing these trade-offs, the privacy and security benefits still made a compelling case for proceeding. The complexity is manageable for anyone comfortable with basic Linux administration, and the performance impacts are largely theoretical for typical home usage. Most importantly, the peace of mind from knowing exactly where my DNS queries go and how they're handled proved worth the additional overhead.

The Setup Process

Installing and configuring Unbound alongside Pi-hole turned out to be surprisingly straightforward.

Step 1: Install Unbound

sudo apt update
sudo apt install unbound -y

Step 2: Configure Unbound for Security and Performance

I created a custom configuration at /etc/unbound/unbound.conf.d/pi-hole.conf:

sudo nano /etc/unbound/unbound.conf.d/pi-hole.conf

Here's my security-focused configuration:

server:
    # Basic settings
    port: 5335
    do-ip4: yes
    do-ip6: yes
    do-udp: yes
    do-tcp: yes
    
    # Security settings
    trust-anchor-file: "/var/lib/unbound/root.key"
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    val-clean-additional: yes
    val-permissive-mode: no
    val-log-level: 1
    
    # Privacy settings
    hide-identity: yes
    hide-version: yes
    harden-glue: yes
    harden-dnssec-stripped: yes
    harden-below-nxdomain: yes
    harden-referral-path: yes
    use-caps-for-id: yes
    
    # Performance settings (security-first approach)
    cache-min-ttl: 300
    cache-max-ttl: 86400
    prefetch: yes
    prefetch-key: yes
    
    # Interface settings
    interface: 127.0.0.1
    access-control: 127.0.0.1/32 allow
    access-control: ::1 allow
    
    # Logging
    verbosity: 1
    log-queries: no
    log-replies: no
    
    # Root hints
    root-hints: "/var/lib/unbound/root.hints"

Step 3: Root Hints Management

Root hints are essential files that tell Unbound where to find the DNS root servers - the starting point for all DNS resolution. When installing Unbound via package manager, root hints are included and should be updated automatically through regular system updates.

Note: The root hints file changes infrequently (typically a few times per year when root servers are added, removed, or have IP changes). Rather than manual updates, it's best to keep your system updated through your package manager, which will handle root hints updates appropriately - roughly every 6 months is more than sufficient for most users.
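If you ever do want to refresh the file manually (for instance, on a system where the package doesn't ship or update it), the hints are published by InterNIC; something along these lines should work, with the path matching the root-hints entry in the configuration above:

sudo wget -O /var/lib/unbound/root.hints https://www.internic.net/domain/named.root
sudo chown unbound:unbound /var/lib/unbound/root.hints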

Step 4: Initialize DNSSEC Root Key (Usually Optional)

Modern Unbound packages typically handle DNSSEC root key initialization automatically. However, if you encounter DNSSEC validation errors or want to ensure the key is properly configured, you can initialize it manually:

sudo unbound-anchor -a "/var/lib/unbound/root.key"
sudo chown unbound:unbound /var/lib/unbound/root.key

Note: If this step fails or the file already exists, it's likely already configured correctly by the package installation.

Step 5: Start and Enable Unbound

sudo systemctl enable unbound
sudo systemctl start unbound
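After any configuration change, it's also worth letting Unbound validate the file before restarting; unbound-checkconf ships with the package and catches typos that would otherwise leave the resolver down:

sudo unbound-checkconf
sudo systemctl restart unbound
systemctl status unbound --no-pager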

Step 6: Test Unbound

Before integrating with Pi-hole, I verified Unbound was working with both valid and invalid domain lookups:

Testing with an existing domain:

robins@pi4:~ $ dig @127.0.0.1 -p 5335 google.com | egrep -w 'status|^google'
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58713
google.com.             269     IN      A       142.250.70.206

Testing with a non-existent domain:

robins@pi4:~ $ dig @127.0.0.1 -p 5335 asdfasdsfasfdfsafasd | egrep -w 'status|flags'
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 19691
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
; EDNS: version: 0, flags:; udp: 1232

Both tests confirm Unbound is working correctly. The existing domain test shows successful resolution with NOERROR status, while the non-existent domain test returns NXDOMAIN (Non-eXistent DOMAIN), which is the proper response when a domain doesn't exist. Note the ad flag in the non-existent domain response, indicating that even failed lookups are DNSSEC-validated - Unbound cryptographically verified that this domain truly doesn't exist rather than just accepting an unvalidated negative response.
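A further sanity check, assuming the deliberately mis-signed test domain dnssec-failed.org is still maintained, is to confirm that validation rejects bad signatures outright; a validating resolver should return SERVFAIL rather than an answer:

dig @127.0.0.1 -p 5335 dnssec-failed.org | grep -w status
# expected: status: SERVFAIL (the record exists, but its signatures are deliberately broken)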

Step 7: Configure Pi-hole to Use Unbound

In the Pi-hole admin interface:

  1. Navigate to Settings → DNS
  2. Uncheck all upstream DNS servers
  3. Add 127.0.0.1#5335 as a custom DNS server
  4. Enable DNSSEC validation
  5. Save settings
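Once saved, it's worth confirming the full resolution path (client → Pi-hole on port 53 → Unbound on port 5335) with an ordinary query against the Pi itself; the address below is just a placeholder for whatever IP your Pi-hole listens on:

dig @192.168.1.2 example.com +short
# a blocked domain should instead come back as 0.0.0.0 (or empty, depending on Pi-hole's blocking mode)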

The Results: Worth the Effort

After 24 hours of operation with the new setup, the benefits are clear:

Complete Privacy: All DNS queries now resolve independently without touching external servers (except for the initial root server queries, which reveal no specific domain information).

DNSSEC Validation: Every response is cryptographically verified, providing protection against DNS manipulation.

Improved Cache Efficiency: Unbound's more sophisticated caching algorithm seems to provide better hit rates for our household's browsing patterns.



Performance Impact: Negligible. The Pi 4 handles both Pi-hole and Unbound without breaking a sweat:

robins@pi4:~ $ uptime
 21:22:25 up 2 days,  9:47,  3 users,  load average: 0.00, 0.00, 0.00

Security Considerations

My configuration prioritizes security over raw performance:

  • Conservative TTL settings: Shorter cache times mean more frequent validation
  • Strict DNSSEC validation: No permissive mode that might accept invalid responses
  • Minimal logging: No query logging to preserve privacy even locally
  • Restricted access: Only localhost can query Unbound directly

Final Thoughts

Setting up Unbound with Pi-hole represents the logical conclusion of taking control over your DNS infrastructure. While the privacy and security benefits are the primary motivators, the technical satisfaction of running a completely self-contained DNS resolution system is considerable.

The setup process is more involved than simply pointing Pi-hole at an external resolver, but the configuration is straightforward and well-documented. For anyone concerned about DNS privacy or wanting to reduce external dependencies, this combination provides an excellent solution.

The Pi 4 continues to prove itself as the perfect platform for this type of network infrastructure project - handling both Pi-hole and Unbound with resources to spare. As I mentioned in part 1, even a Pi 2 would likely suffice for most households, making this an accessible upgrade for anyone looking to enhance their home network's privacy and security posture.

30 Jul 2025

Pi-hole First Impressions: Home DNS Caching Done Right

(This is the first of a two-part series exploring Pi-hole as a DNS caching solution. Here's Part 2.)

I've had a Raspberry Pi 4 sitting idle for months, and finally decided to put it to good use by setting up Pi-hole for network-wide DNS caching and ad blocking. After less than a day of operation, I'm genuinely impressed with both the installation process and the results.

The Installation: Surprisingly Smooth

Setting up Pi-hole turned out to be one of the smoothest software installations I've experienced in recent memory. The entire process is handled by a single bash script:


curl -sSL https://install.pi-hole.net | bash


That's it! What impressed me most wasn't just the simplicity, but the thoroughness of the pre-installation checks. Before the script even attempted to install anything, it ran through a comprehensive validation process:

  • Network connectivity tests
  • DNS resolution verification
  • Package manager status checks
  • System requirements validation
  • Port availability confirmation

This rigorous validation gave me real confidence that the installation would succeed. Too often, installation scripts fail halfway through, leaving you with a partially configured mess. Pi-hole's approach of "check everything first, then install" meant that once the green lights were all showing, the installation proceeded flawlessly.

Interestingly, what took the most time during the entire setup wasn't Pi-hole itself, but my initial attempt to use Docker. I'd assumed Docker would be the quick and easy route (as it usually is), but the Pi had some leftover Docker installations that needed cleaning up first - removing old docker, docker.io, and docker-compose packages was a messy affair. Then the Pi seemed to have issues with the Docker installation process itself. In the end, I abandoned the Docker approach and went with the native installation, which ironically turned out to be much faster and cleaner.

One lesser-known tip that made the physical setup even smoother: if you're running a Google mesh network, the routers with both WAN and LAN ports can easily be used as wifi-to-LAN bridges. Rather than connecting the Pi 4 via WiFi, I simply plugged it into the LAN port of one of the mesh routers. This eliminated the need for additional network switches and reduced latency by a few milliseconds by avoiding the WiFi overhead entirely.

Another major convenience: you don't need to update DNS settings on each individual device. A single configuration change at the router level does the job for the entire household. In Google mesh, I simply changed the DNS setting from "Automatic" to "Custom" and set the Pi 4's IP as the primary DNS server. This immediately moved all devices on the network to use Pi-hole without touching any individual device settings. The only exceptions are devices that explicitly force their own DNS (looking at you, certain smart TVs and streaming devices), but those are relatively rare.

The Results: Better Than Expected

After 23 hours of operation protecting our home network, the statistics are eye-opening:



DNS Query Volume: 100,000 requests in 23 hours

  • That's roughly 4,350 DNS queries per hour from our household
  • Averages out to about 72 queries per minute
  • Shows just how chatty modern devices really are

Cache Performance: 70% cache hit rate

  • Pi-hole is successfully caching DNS responses locally
  • Reduces latency for frequently accessed domains
  • Decreases load on upstream DNS servers
  • Currently using Quad9 (9.9.9.9) as the upstream resolver - the only non-US DNS service I still trust in this day and age

Blocking Effectiveness: 8.5% of requests blocked

  • More than 1 in 12 DNS requests were for ad/tracking domains
  • That's 8,500 blocked requests in just 23 hours
  • Represents a significant reduction in unwanted network traffic

System Performance: Negligible Impact

One concern with running Pi-hole on a Pi 4 was whether it would impact system performance. For context, here are the system specs:


robins@pi4:~ $ uname -a
Linux pi4 6.12.34+rpt-rpi-v8 #1 SMP PREEMPT Debian 1:6.12.34-1+rpt1~bookworm
  (2025-06-26) aarch64 GNU/Linux

robins@pi4:~ $ free -h
               total        used        free      shared  buff/cache   available
Mem:           3.7Gi       463Mi       1.7Gi        34Mi       1.6Gi       3.3Gi
Swap:          511Mi          0B       511Mi


The performance numbers are reassuring:

CPU Usage: Essentially zero load:


robins@pi4:~ $ uptime
 10:39:33 up 23:04,  3 users,  load average: 0.00, 0.00, 0.00

  • Zero computational overhead even after 23+ hours of operation
  • Pi 4 is more than capable of handling the workload
  • Plenty of headroom for additional services

Memory Usage: Minimal footprint

  • Pi-hole runs efficiently even on modest hardware
  • No noticeable impact on system responsiveness

Honestly, the Pi 4 seems like overkill for this task. Given the minimal resource requirements, a Pi 2 (with the older Cortex-A7) would likely handle Pi-hole just fine, leaving the Pi 4 (with the more capable Cortex-A72) free for more power-hungry side projects. The beauty of Pi-hole is that it's so lightweight, you can run it on practically any Raspberry Pi model and still have resources to spare.

The Web Interface: Polished and Responsive

Pi-hole's web dashboard deserves special mention. It's:

  • Fast and Responsive: Page loads are snappy, even on the Pi 4
  • Live Updates: The dashboard shows real-time query statistics
  • Well-Designed: Clean interface that makes data easy to understand
  • Comprehensive: Detailed logs, statistics, and configuration options



The live dashboard is particularly satisfying to watch - seeing blocked queries in real-time gives you a visceral sense of how much unwanted traffic Pi-hole is filtering out.

Real-World Benefits

Beyond the statistics, the practical benefits are noticeable:

  • Faster Browsing: Pages load quicker without ad network delays
  • Cleaner Experience: Websites feel less cluttered
  • Privacy Improvement: Reduced tracking across devices
  • Bandwidth Savings: Less unwanted traffic on the network

One particular benefit I'm looking forward to testing: my network occasionally suffers from stuttering ping latencies. Instead of the usual 10ms responses, I'll sometimes see 100ms or even 1-second ping times that persist for minutes. During these episodes, streaming mostly works and browsing barely functions, but what really hurts is DNS resolution - especially for websites requiring multiple round-trips. With Pi-hole's 70% cache hit rate, those problematic periods should be noticeably less laggy since most DNS queries won't need to traverse the struggling network connection. More testing needed, but the potential is promising.

The ad filtering aspect wasn't my primary motivation, but it doesn't hurt either. For websites I frequent that provide genuinely good content - like Phoronix with their constant stream of quality tech updates - I've set up pass-throughs so they can rightfully earn the advertising revenue they deserve for their work. For projects like Pi-hole itself, I prefer the direct contribution route via donations. It's about supporting the creators and maintainers who provide value, whether through allowing ads or direct financial support.
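Those pass-throughs are just ordinary allowlist entries. They can be added from the web interface or from the CLI; on the Pi-hole v5 line the command is pihole -w (newer releases may use pihole allow instead), with the domains below being the example mentioned above:

pihole -w phoronix.com
pihole -w www.phoronix.com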

Worth Supporting

The Pi-hole project has created something genuinely useful that "just works" out of the box. The quality of both the software and the installation experience makes this a project worth supporting financially. I'll definitely be making a donation to help ensure continued development.

Final Thoughts

Pi-hole represents the best kind of open-source software: it solves a real problem elegantly, installs without drama, and delivers measurable benefits immediately. If you have a spare Raspberry Pi lying around, setting up Pi-hole is an excellent way to put it to work improving your entire network's performance and privacy.

The fact that it blocked 8,500 unwanted requests in less than a day while using virtually no system resources makes it a clear win. Sometimes the best technology is the kind you set up once and then forget about - until you look at the statistics and realize how much work it's quietly doing in the background.

8 Jul 2024

On-Prem AI chatbot - Hello World!

In continuation of the recent posts...


Finally got an on-premises chatbot running! Once the model is downloaded, the Linux box is able to spin the interface up / down in about a second.

(myvenv) ai@dell:~/proj/ollama$ time ollama run mistral
>>> /bye

real    0m1.019s
user    0m0.017s
sys     0m0.009s

That, on a measly ~$70 Marketplace i5/8GB machine, is appreciable (given all I had read about the NVIDIA RTX 4090s etc.). Now obviously this doesn't do anything close to 70 tokens per second, but I'm okay with that.

(myvenv) ai@dell:~/proj/ollama$ sudo dmesg | grep -i bogo
[sudo] password for ai:
[    0.078220] Calibrating delay loop (skipped), value calculated using timer frequency.. 6585.24 BogoMIPS (lpj=3292624)
[    0.102271] smpboot: Total of 4 processors activated (26340.99 BogoMIPS)

Next, I wrote a little hello-world script to test the bot. Now where's the fun if it only printed static text?

(myvenv) ai@dell:~/t$ cat a.py
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")
result=llm.invoke("Why is 42 the answer to everything? Keep it very brief.")
print (result)

And here's the output, in just ......... 33 seconds :)

(myvenv) ai@dell:~/t$ time python a.py
A popular question! The joke about 42 being the answer to everything originated from Douglas Adams' science fiction series "The Hitchhiker's Guide to the Galaxy." In the book, a supercomputer named Deep Thought takes 7.5 million years to calculate the "Answer to the Ultimate Question of Life, the Universe, and Everything," which is... 42!

real    0m33.299s
user    0m0.568s
sys     0m0.104s
(myvenv) ai@dell:~/t$

And, just for kicks, it works across languages / scripts too (the prompt below asks, in Hindi, which is India's longest river; the model answers that it is the Ganga). Nice!

(myvenv) ai@dell:~/t$ ollama run mistral
>>> भारत की सबसे लंबी नदी कौन सी है?
 भारत की सबसे लंबी नदी गंगा है, जिसका पूरण 3670 किमी होता है। यह एक विश्वमित्र नदी है और बहुप्रकार से कई प्रदेशों के झिल्ले-ढाल में विचलित है।

>>>

Again, I'm pretty okay with this for now. I'll worry about speed tomorrow, when I have a script that can test the limits - and that's not today.

Hello World!

7 Jul 2024

Installing Ollama on an old linux box

Trying out Ollama - your 10-year-old box would do too.

TLDR

  • Yes, you CAN install an AI engine locally
  • No, you DON'T need to spend thousands of dollars to get started!
  • Agreed, your AI engine won't be snappy, but it's still a great way to get started.

Server

You'd realise that almost any machine should get you going.

  • I had recently bought a second-hand desktop box (Dell OptiPlex 3020) from FB Marketplace and repurposed it here.
  • For specs, it was an Intel i5-4590 CPU @ 3.30GHz with 8GB of RAM and 250 GB of disk, nothing fancy.
  • It came with an AMD Radeon 8570 (2GB RAM) [4], and the Ollama install process recognized and optimized for the decade-old GPU. Super nice!
  • For completeness, the box cost me $70 AUD (~50 USD) in May 2024. In other words, even for a cash-strapped avid learner, there's a very low barrier to entry here.

Install

The install steps were pretty simple [1] but as you may know, the models themselves are huge.
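For reference, the documented quick start [1] is a one-liner on Linux (at the time of writing), and pulling a model afterwards is a single command:

curl -fsSL https://ollama.com/install.sh | sh
ollama pull mistral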

For example, here are some typical model sizes [3]:

  • mistral-7B - 4.1 GB
  • gemma2-27B - 16 GB
  • Code Llama - 4.8 GB

Given that, I'd recommend switching to a decent internet connection. If work allows, this may be a good time to go to work instead of WFH on this one. (Since I didn't have that luxury, my trusty but slow 60Mbps ADSL+ meant that I really had to work on my patience this weekend.)

The thing that actually tripped me up was that Ollama's threaded downloads really scream, and they ended up clogging my test server (see my earlier blog post that goes into some detail [2]).

Run with Nice

With system resources in short supply, it made good sense to ensure that once Ollama is installed, it is spun up with the lowest priority.

On an Ubuntu server, I did this by modifying the ExecStart line in Ollama's systemd service.

ai@dell:~$ sudo service ollama status | grep etc
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)

ai@dell:~$ cat /etc/systemd/system/ollama.service | grep ExecStart
ExecStart=nice -n 19 /usr/local/bin/ollama serve

So when I do end up asking some fun questions, ollama is always playing "nice" :D
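An alternative (and arguably cleaner) route is a systemd drop-in override, so the change survives package upgrades that rewrite the unit file; Nice= is a standard systemd [Service] directive, and the snippet below is a sketch of that approach rather than what I actually ran:

ai@dell:~$ sudo systemctl edit ollama
# in the editor that opens, add:
#   [Service]
#   Nice=19
ai@dell:~$ sudo systemctl restart ollama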




Enjoy ...

Reference:

  1. Install + Quick Start: https://github.com/ollama/ollama/blob/main/README.md#quickstart

  2. Model downloads made my server unresponsive: https://www.thatguyfromdelhi.com/2024/07/ollama-is-missing-rate-limits-on.html

  3. Model sizes are in GBs: https://github.com/ollama/ollama/blob/main/README.md#model-library

  4. Radeon 8570: https://www.techpowerup.com/gpu-specs/amd-radeon-hd-8570.b1325

16 Jun 2024

Compiling latest gcc to test more architectures

Of late, I've had two separate needs to compile GCC by hand, and although my first foray into compiling gcc from git took patience, stumbling over the basics was interesting, to say the least.

The first time I realised that an old GCC version could matter was from feedback [1] that one of my buildfarm members was running an old (for its arch) gcc version, something I had almost never paid much attention to. The other was that this led me to newer architectures (more on that below), and how the situation could repeat itself if / when I end up playing with more architectures.

So finally, I can say I have a framework that frequently checks / recompiles gcc and ensures all my local tests are using the latest and greatest gcc :) (I am happy with how this has taken shape on my home server, and once I am able to port it to my other machines, I don't see why this shouldn't land on GitHub).

Now admittedly, compiling gcc on a nightly basis was already overkill, but then, what the heck - I went and did it on an hourly basis, just because why not. My personal ask was to:

  1. Incrementally learn how compiling gcc unfolds
  2. Have some fun scripting while at it
  3. ... but most importantly, see whether I could utilize this experience in other experiments where the idea is to forewarn database developers about upcoming changes.

A little more on point 3 above: I oversee a few machines on the postgres buildfarm, and they differ in some aspects:

  1. Different architectures:
    1. aarch64: Gravitons
    2. x86-64: A vanilla off-the-shelf dell workstation
    3. armv7l: Raspberry Pi 4
  2. Different GCCs:
    1. 8.3.0 (default in pi4)
    2. 7.3.1 (default on most ALs)
    3. 13.2 (default on Ubuntu)
    4. 14.0.1 (naive attempt at compiling whatever cleared make check)
    5. gcc (experimental nightly)
  3. (Internally I also run some different fuzzing workloads but that's besides the point)
  4. (Future plans - add some form of randomizer to test odd combinations of compilation flags, but more on that in an upcoming post)


Now if all goes to plan, I should also add 2 new architectures to the mix. They wouldn't be the snappiest processors on the market (at least not at the price level I am after), but hey, they should be fun to play with!

  1. loongarch64 - (cough) a recent (but, sure as butterflies, promising) entrant. The MIPS64-based Loongson has been around for some time now, but my interest has grown of late owing to sporadic reports that it's becoming somewhat competitive, which should be interesting to review.
  2. riscv64 - Another interesting arch that should be fun to try out. Again, I'm not holding my breath that it'd top any charts, but it could still end up being interesting nonetheless.

On the GCC front, getting the setup ready and stable clears the path to upgrading my buildfarm animals one by one, and to start focusing beyond this hurdle. Here's some sample output from the hourly check / recompile runs:

gcseb02 20240615_1700 - git checkout successful.
gcseb02 20240615_1700 - git pull successful.
gcseb02 20240615_1700 - No change in gcc version. Quitting.

gcsa36f 20240615_1800 - git checkout successful.
gcsa36f 20240615_1800 - git pull successful.
gcsa36f 20240615_1800 - gcc has changed - [471fb092601] vs [57af69d56e7]. Recompiling.
gcsa36f 20240615_1800 - make successful
gcsa36f 20240615_1800 - make install successful.
gcsa36f 20240615_1800 - gcc version string has changed from [15.0.0 20240615 (experimental) - 471fb092601] to [15.0.0 20240615 (experimental) - 57af69d56e7]

gcsf66a 20240615_1900 - git checkout successful.
gcsf66a 20240615_1900 - Unable to git pull. Are we connected? Quitting.
gcsf66a 20240615_1900 - git switched back to 57af69d56e7.

gcs629f 20240615_2000 - git checkout successful.
gcs629f 20240615_2000 - git pull successful.
gcs629f 20240615_2000 - gcc has changed - [57af69d56e7] vs [6762d5738b0]. Recompiling.
gcs629f 20240615_2000 - make successful
gcs629f 20240615_2000 - make install successful.
gcs629f 20240615_2000 - gcc version string has changed from [15.0.0 20240615 (experimental) - 57af69d56e7] to [15.0.0 20240615 (experimental) - 6762d5738b0]

.
.

gcsc115 20240616_0400 - git checkout successful.
gcsc115 20240616_0400 - git pull successful.
gcsc115 20240616_0400 - No change in gcc version. Quitting.
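The full scripts live in the repository referenced below; stripped to its essence, each hourly run does roughly the following (a minimal sketch with illustrative paths and configure flags - the real scripts add locking, logging, and the version-string bookkeeping shown above):

#!/bin/bash
set -e
SRC=~/gcc/src          # git clone of the gcc repository
BUILD=~/gcc/build      # out-of-tree build directory
PREFIX=~/gcc/install   # where the freshly built gcc lands

cd "$SRC" && git fetch origin
old=$(git rev-parse --short HEAD)
new=$(git rev-parse --short origin/master)
if [ "$old" = "$new" ]; then
    echo "No change in gcc version. Quitting."
    exit 0
fi
git merge --ff-only origin/master

# one-time prerequisite: cd "$SRC" && ./contrib/download_prerequisites
mkdir -p "$BUILD" && cd "$BUILD"
"$SRC"/configure --prefix="$PREFIX" --disable-multilib --disable-bootstrap --enable-languages=c,c++
make -j"$(nproc)"
make install
echo "gcc version string has changed from [$old] to [$new]"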

Reference

2. Compilation script source - https://github.com/robins/gcc_compile

3 Aug 2017

Reducing Wires

Recently got an additional monitor for my workstation@home and found that the following wires were indispensable:

  • USB Mouse
  • Monitor VGA / HDMI / DVI cable
  • USB Hub cable (Pen Drive etc.)

I was lucky that this ($20 + used) Dell monitor was an awesome buy, since it came with a Monitor USB Hub (besides other goodies such as vertical rotate, etc.).

After a bit of rejigging, this is how things finally panned out:
  • 1 USB Wire (from the laptop) for the MUH (Monitor USB Hub)
    • This is usually something like this.
  • Use a USB->DVI converter to connect the MUH to the monitor's DVI port
    • This is usually something like this.
  • Plug USB Mouse to MUH
  • With things working so well, I also plugged a Wireless Touchpad dongle to the MUH
So now when I need to do some office work, connecting 1 USB wire gets me up and running!

#LoveOneWires :)

Now if only I could find a stable / foolproof wireless solution here ;)
