
8 Jul 2024

On-Prem AI chatbot - Hello World!

In continuation of the recent posts...


Finally got an on-premise chatbot running! Once downloaded, the Linux box is able to spin the interface up / down in a second.

(myvenv) ai@dell:~/proj/ollama$ time ollama run mistral
>>> /bye

real    0m1.019s
user    0m0.017s
sys     0m0.009s

That, on a measly ~$70 Marketplace i5/8GB machine, is appreciable (given everything I had read about the NVIDIA RTX 4090s etc.). Obviously this doesn't do anything close to 70 tokens per second, but I'm okay with that.

(myvenv) ai@dell:~/proj/ollama$ sudo dmesg | grep -i bogo
[sudo] password for ai:
[    0.078220] Calibrating delay loop (skipped), value calculated using timer frequency.. 6585.24 BogoMIPS (lpj=3292624)
[    0.102271] smpboot: Total of 4 processors activated (26340.99 BogoMIPS)

Next, I wrote a small hello-world script to test the bot. Now where's the fun if it just printed static text?!

(myvenv) ai@dell:~/t$ cat a.py
from langchain_community.llms import Ollama

# Talks to the locally running Ollama server (default endpoint: http://localhost:11434)
llm = Ollama(model="llama3")
result = llm.invoke("Why is 42 the answer to everything? Keep it very brief.")
print(result)

And here's the output, in just ......... 33 seconds :)

(myvenv) ai@dell:~/t$ time python a.py
A popular question! The joke about 42 being the answer to everything originated from Douglas Adams' science fiction series "The Hitchhiker's Guide to the Galaxy." In the book, a supercomputer named Deep Thought takes 7.5 million years to calculate the "Answer to the Ultimate Question of Life, the Universe, and Everything," which is... 42!

real    0m33.299s
user    0m0.568s
sys     0m0.104s
(myvenv) ai@dell:~/t$

And, just for kicks, it works across languages / scripts too: asked, in Hindi, for India's longest river, it replies in Hindi as well (naming the Ganga). Nice!

(myvenv) ai@dell:~/t$ ollama run mistral
>>> भारत की सबसे लंबी नदी कौन सी है?
 भारत की सबसे लंबी नदी गंगा है, जिसका पूरण 3670 किमी होता है। यह एक विश्वमित्र नदी है और बहुप्रकार से कई प्रदेशों के झिल्ले-ढाल में विचलित है।

>>>

Again, I'm pretty okay with this for now. I'll worry about speed tomorrow, when I have a script that can test the limits, and that's not today.

Hello World!

7 Jul 2024

Installing Ollama on an old linux box

Trying out Ollama: your 10-year-old box would do too.

TLDR

  • Yes, you CAN install an AI engine locally
  • No, you DON'T need to spend thousands of dollars to get started!
  • Agreed, your AI engine won't be snappy, but it's still a great way to get started.

Server

You'll find that almost any machine will get you going.

  • I had recently bought a second-hand desktop box (Dell OptiPlex 3020) from FB Marketplace and repurposed it here.
  • For specs, it was an Intel i5-4590 CPU @ 3.30GHz with 8GB of RAM and 250 GB of disk, nothing fancy.
  • It came with an AMD Radeon 8570 (2GB RAM) [4], and the Ollama install process recognized and optimized for the decade-old GPU. Super nice!
  • For completeness, the box cost me $70 AUD (~50 USD) in May 2024. In other words, even for a cash-strapped avid learner, there's a very low barrier to entry here.

Install

The install steps were pretty simple [1] but as you may know, the models themselves are huge.

For example, look at these [3]:

  • mistral-7B - 4.1 GB
  • gemma2-27B - 16 GB
  • Code Llama - 4.8 GB
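
Pulling a model is a one-liner once Ollama is installed; for instance (the model name here is just an example):

ai@dell:~$ ollama pull mistral
ai@dell:~$ ollama list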

Given that, I'd recommend switching to a decent internet connection. If work allows, this may be a good time to go to the office instead of WFH for this one. (Since I didn't have that luxury, my trusty but slow 60Mbps ADSL+ meant that I really worked on my patience this weekend.)

The thing that actually tripped me up was that Ollama's threaded downloads really scream, and they ended up clogging my test server (see my earlier blog post that goes into the details [2]).

Run with Nice

With system resources in short supply, it made good sense to ensure that once Ollama is installed, it is spun up with the lowest priority.

On an Ubuntu server, I did this by modifying the ExecStart line in Ollama's systemd unit.

ai@dell:~$ sudo service ollama status | grep etc
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)

ai@dell:~$ cat /etc/systemd/system/ollama.service | grep ExecStart
ExecStart=nice -n 19 /usr/local/bin/ollama serve

So when I do end up asking some fun questions, ollama is always playing "nice" :D
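
If you'd rather not hand-edit the unit file, a systemd drop-in override should do the same thing (a sketch; Nice= is systemd's built-in directive for scheduling priority):

ai@dell:~$ sudo systemctl edit ollama        # opens an (empty) override file in an editor
[Service]
Nice=19

ai@dell:~$ sudo systemctl restart ollama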




Enjoy ...

Reference:

  1. Install + Quick Start: https://github.com/ollama/ollama/blob/main/README.md#quickstart

  2. Model downloads made my server unresponsive: https://www.thatguyfromdelhi.com/2024/07/ollama-is-missing-rate-limits-on.html

  3. Model sizes are in GBs: https://github.com/ollama/ollama/blob/main/README.md#model-library

  4. Radeon 8570: https://www.techpowerup.com/gpu-specs/amd-radeon-hd-8570.b1325

29 Dec 2020

Which SQL causes a Table Rewrite in Postgres?

EDIT: Updated to v17 (devel) - (Jan 2024).


While developing SQL based applications, it is commonplace to stumble on these 2 questions:

  1. What DDLs would block concurrent workload?
  2. Whether a DDL is going to rewrite the table (and in some cases may need ~ 2x disk space)?

Although completely answering Question 1 is beyond the scope of this post, one of the important pieces that helps answer both of these questions is whether a DDL is going to cause a relfilenode change.

For a brief background, each regular table in Postgres stores data in one or more files, each of which is referenced in the Postgres catalog with a relfilenode. A simple way to check whether a DDL is going to create / refer to another copy (file) is to see whether the relfilenode changes. (TRUNCATE is a standout here: by design it purges the table data, so although the relfilenode changes, in total it obviously wouldn't consume anywhere close to 2x disk space.)
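
As a quick illustration of the check itself (a sketch on a hypothetical table t; pg_relation_filenode() is the catalog helper for this):

-- note the current relfilenode
CREATE TABLE t (a int);
SELECT pg_relation_filenode('t');

-- a DDL that forces a rewrite, e.g. changing the column's type
ALTER TABLE t ALTER COLUMN a TYPE bigint;

-- a different value here means a new file, i.e. a table rewrite
SELECT pg_relation_filenode('t');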

The table below shows which DDLs would cause a table rewrite. As has been discussed here, we need some more info to completely answer Question 1, but meanwhile this table helps in making some concurrency / disk-usage related decisions for all Postgres versions supported today.



29 Feb 2020

Optimizations in GROUP BY vs SELECT DISTINCT

(This came out of something I was trying out + discussing with Postgres enthusiasts - thanks to all for clarifying doubts)

This article highlights one aspect of how the query planner's handling of SELECT ... GROUP BY differs from SELECT DISTINCT.

For example:

SELECT b,c,d FROM a GROUP BY b,c,d;
vs
SELECT DISTINCT b,c,d FROM a;


We see a few scenarios where Postgres optimizes by removing unnecessary columns from the GROUP BY list (if a subset is already known to be Unique) and where Postgres could do even better. To highlight this difference, here I have an empty table with 3 columns:

postgres=# create table a (b integer, c text, d bigint);
CREATE TABLE


postgres=# \d a
                 Table "public.a"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 b      | integer |           |          |
 c      | text    |           |          |
 d      | bigint  |           |          |


On this table, we can see that SELECT * GROUP BY generates the exact same plan as SELECT DISTINCT. In particular, we're interested in the "Group Key" which is the same for both SQLs:

postgres=# explain select distinct b,c,d from a;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=29.78..31.78 rows=200 width=44)
   Group Key: b, c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)

postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=29.78..31.78 rows=200 width=44)
   Group Key: b, c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)



Having said that, if the same table is created with a PRIMARY KEY, we see that GROUP BY becomes smarter: the "Group Key" uses the Primary Key (here it is 'b') and correctly discards columns 'c' and 'd'. Nice 😄!

postgres=# create table a (b integer PRIMARY KEY, c text, d bigint);
CREATE TABLE
postgres=# explain select distinct b,c,d from a;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=29.78..41.08 rows=1130 width=44)
   Group Key: b, c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)

postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=24.12..35.42 rows=1130 width=44)
   Group Key: b
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)


Let's check if we get the same optimization if we create a UNIQUE index on the column. The answer? Sadly No! Furthermore, I went ahead and created a NOT NULL constraint, but that didn't change anything either. (Do note that UNIQUE columns can have multiple rows with NULLs).

postgres=# create table a (b integer unique not null, c text, d bigint);
CREATE TABLE


postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=29.78..41.08 rows=1130 width=44)
   Group Key: b, c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)



Regarding the above, IIUC this is an obvious performance optimization that Postgres is still leaving on the table (as of v13+):

postgres=# select version();
                                                     version                                                     
------------------------------------------------------------------------------------------------------------------
 PostgreSQL 13devel on i686-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 32-bit
(1 row)


Next, does it still optimize this if the PRIMARY KEY is not the first column in the GROUP BY? Answer? Yes! (The engine can optimize as long as any of the GROUPed BY columns is the Primary Key.) Noice!


postgres=# create table a (b integer, c text primary key, d bigint);
CREATE TABLE


postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=24.12..35.42 rows=1130 width=44)
   Group Key: c
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)


... and what if the PRIMARY KEY is a composite key of any of the columns in the GROUP BY column list? YES again 😄 !

postgres=# create table a (b int, c text, d bigint, primary key (c,d)) ;
CREATE TABLE

postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=26.95..28.95 rows=200 width=44)
   Group Key: c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)


Lastly, although some of these "optimizations" are things to avoid when writing good SQL, the reality is that ORM-generated SQL isn't that smart yet, so it's great that Postgres already implements these obvious optimizations.

18 Apr 2019

How about 1000 cascading Replicas :)

The other day, I remembered an old 9.0-era mail thread (when Streaming Replication had just launched) where someone had tried to daisy-chain Postgres Replicas and see how many (s)he could muster.

If I recall correctly, the OP could squeeze only ~120 or so, mostly because the Laptop memory gave way (and not really because of an engine limitation).

I couldn't find that post, but it was intriguing to see whether we could reach (at least) the thousand mark, and what kind of "Replica Lag" that would entail; thus NReplicas.

In a (very) unscientific test, my 4-core 16G machine can spin up (create data folders and host processes for) 1000 Replicas in ~8m (and tear them down in another ~2m). Now I'm sure this could get better, but I'm not complaining, since this was a breeze to set up (in that it just worked without much tinkering ... besides lowering shared_buffers).

For those interested, a single UPDATE on the master could (nearly consistently) be seen on the last Replica in less than half a second, with top showing 65% CPU idle (and 2.5 on the 1-min load average) during a ~30 minute test.

Put in simple terms, what this means is that the UPDATE traveled from the Master to a Replica (let's call it Replica1), and Replica1 then cascaded the change on to Replica2 (and so on, 1000 times). The said row change can be seen on Replica1000 within half a second.
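
For the curious, one way to eyeball this lag (a sketch; the UPDATE below is on a hypothetical table, and pg_last_xact_replay_timestamp() is the standard standby-side function for the last replayed commit):

-- on the Master
UPDATE t SET v = v + 1 WHERE id = 1;

-- on Replica1000, shortly after
SELECT now() - pg_last_xact_replay_timestamp() AS approx_replay_lag;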

So although (I hope) this isn't a real-world use-case, I'm still impressed that this works right out of the box and stays way under the 1-second mark ... certainly worthy of a small post :)!


Host: 16GB / 4 core
Time to spin up (1000 Cascading Replicas): 8 minutes
Time to tear down: 2 minutes

Test type: Constant UPDATEs (AV settings default)
Test Duration: 30min
Time for UPDATE to propagate: 500 ms!! (on average)
CPU idle (per top): ~65%
CPU 1-min load average: 2.5

20 Nov 2017

Update: RDS Prewarm script updated to fetch FSM / VM chunks

(This post is in continuation to my previous post regarding Initializing RDS Postgres Instance)

This simple SQL "initializes" the EBS volume linked to an RDS Instance, something which isn't otherwise possible without sending it workload (and experiencing high latency on that first run).

Key scenarios where this is really helpful are:

  • Create a Read-Replica (or Hot Standby in Postgres terms)
  • Restore a new RDS Instance from a Snapshot


Update: The Script, now also does the following:

  • Now also fetches disk blocks related to the FSM / VM forks of all tables (see the snippet just below)
  • Now fetches all Indexes
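
For context, pg_prewarm takes an optional fork argument, which is how those extra forks can be pulled in as well (a sketch on a hypothetical table t; requires CREATE EXTENSION pg_prewarm):

SELECT pg_prewarm('t', 'buffer', 'main');  -- main data fork
SELECT pg_prewarm('t', 'buffer', 'fsm');   -- free space map
SELECT pg_prewarm('t', 'buffer', 'vm');    -- visibility map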

Limitations that still exist:
  • TOAST tables are still directly inaccessible in RDS
    • Indexes for TOAST columns also fall under this category
    • Trying hard to see if this last hurdle can be worked around
      • Anyone with any ideas?!
  • Script needs to be run once per Database Owner
    • Not sure if there is any magic around this
      • Object ownership is a Postgres property
        • RDS Postgres does not give Superuser access
    • I'll try to ease this in the future
      • By creating a script to list the Users that this needs to run as
      • The other possibility is to use DBLink to run this for separate Users in a single run

I'll update here, in case I make any significant changes.

Sample Run



-[ RECORD 1 ]-------+------------------------------
clock_timestamp     | 2017-11-19 15:40:08.291891-05
table_size          | 13 GB
freespace_map_size  | 3240 kB
visibility_map_size | 408 kB
blocks_prefetched   | 1639801
current_database    | pgbench
schema_name         | public
table_name          | pgbench_accounts
-[ RECORD 2 ]-------+------------------------------
clock_timestamp     | 2017-11-19 15:43:37.703711-05
table_size          | 2142 MB
freespace_map_size  | 0 bytes
visibility_map_size | 0 bytes
blocks_prefetched   | 274194
current_database    | pgbench
schema_name         | public
table_name          | pgbench_accounts_pkey
-[ RECORD 3 ]-------+------------------------------
clock_timestamp     | 2017-11-19 15:44:12.899115-05
table_size          | 440 kB
freespace_map_size  | 24 kB
visibility_map_size | 8192 bytes
blocks_prefetched   | 59
current_database    | pgbench
schema_name         | public
table_name          | pgbench_tellers
-[ RECORD 4 ]-------+------------------------------
clock_timestamp     | 2017-11-19 15:44:12.901088-05
table_size          | 240 kB
freespace_map_size  | 0 bytes
visibility_map_size | 0 bytes
blocks_prefetched   | 30
current_database    | pgbench
schema_name         | public
table_name          | pgbench_tellers_pkey
-[ RECORD 5 ]-------+------------------------------
clock_timestamp     | 2017-11-19 15:44:12.905107-05
table_size          | 40 kB
freespace_map_size  | 0 bytes
visibility_map_size | 0 bytes
blocks_prefetched   | 5
current_database    | pgbench
schema_name         | public
table_name          | pgbench_branches_pkey
-[ RECORD 6 ]-------+------------------------------
clock_timestamp     | 2017-11-19 15:44:12.907089-05
table_size          | 40 kB
freespace_map_size  | 24 kB
visibility_map_size | 8192 bytes
blocks_prefetched   | 9
current_database    | pgbench
schema_name         | public
table_name          | pgbench_branches
-[ RECORD 7 ]-------+------------------------------
clock_timestamp     | 2017-11-19 15:44:12.907142-05
table_size          | 0 bytes
freespace_map_size  | 0 bytes
visibility_map_size | 0 bytes
blocks_prefetched   | 0
current_database    | pgbench
schema_name         | public
table_name          | pgbench_history

7 Nov 2017

Prewarming / Initializing an RDS Postgres instance (from S3)

UPDATE: Read this for recent updates. The SQL now successfully fetches *all* disk blocks on most RDS PostgreSQL instances (read the post for the rare exceptions).


As many of you know, AWS RDS Postgres uses EBS, which has an interesting feature called Lazy Loading that allows it to instantiate a disk (the size of which can be mostly anything from 10GB to 6TB) and bring it online within a matter of minutes. Although a fantastic feature, this can lead to unexpected outcomes when high-end production load is thrown at a newly launched RDS Postgres instance immediately after restoring from a Snapshot.

One possible solution is to use the pg_prewarm Postgres Extension that is well supported in RDS Postgres, immediately after Restoring from a Snapshot, thereby reducing the side-effects of Lazy Loading.

Although pg_prewarm was originally meant for populating the buffer-cache, this extension (in this specific use-case) is heaven-sent for initializing (fetching) almost the entire snapshot from S3 onto the RDS EBS volume in question. So even though running pg_prewarm through all the tables effectively evicts the previous table from the buffer-cache, it still does the job of initializing all disk blocks on the EBS volume.

I've just checked in the SQL to this repository; it seems to do this magic pretty well. It also lists why this would only take you ~70% of the way, owing to restrictions / limitations (as per my current understanding).

In the Sample below, I restored a new RDS Postgres instance from a Snapshot and immediately thereafter ran this SQL on it.


  • Notice that the first table (pgbench_accounts) takes about 22 seconds to load the first time, and less than a second to load the second time.
  • Similarly the second table (pgbench_history) takes 15 seconds to load the first time and less than a second, the second time :) !



pgbench=>       SELECT clock_timestamp(), pg_prewarm(c.oid::regclass),
pgbench->       relkind, c.relname
pgbench->       FROM pg_class c
pgbench->         JOIN pg_namespace n
pgbench->           ON n.oid = c.relnamespace
pgbench->         JOIN pg_user u
pgbench->           ON u.usesysid = c.relowner
pgbench->       WHERE u.usename NOT IN ('rdsadmin', 'rdsrepladmin', ' pg_signal_backend', 'rds_superuser', 'rds_replication')
pgbench->       ORDER BY c.relpages DESC;
        clock_timestamp        | pg_prewarm | relkind |        relname
-------------------------------+------------+---------+-----------------------
 2017-11-07 11:41:44.341724+00 |      17903 | r       | pgbench_accounts
 2017-11-07 11:42:06.059177+00 |       6518 | r       | pgbench_history
 2017-11-07 11:42:17.126768+00 |       2745 | i       | pgbench_accounts_pkey
 2017-11-07 11:42:21.406054+00 |         45 | r       | pgbench_tellers
 2017-11-07 11:42:21.645859+00 |         24 | r       | pgbench_branches
 2017-11-07 11:42:21.757086+00 |          2 | i       | pgbench_branches_pkey
 2017-11-07 11:42:21.757653+00 |          2 | i       | pgbench_tellers_pkey
(7 rows)

pgbench=>
pgbench=>       SELECT clock_timestamp(), pg_prewarm(c.oid::regclass),
pgbench->       relkind, c.relname
pgbench->       FROM pg_class c
pgbench->         JOIN pg_namespace n
pgbench->           ON n.oid = c.relnamespace
pgbench->         JOIN pg_user u
pgbench->           ON u.usesysid = c.relowner
pgbench->       WHERE u.usename NOT IN ('rdsadmin', 'rdsrepladmin', ' pg_signal_backend', 'rds_superuser', 'rds_replication')
pgbench->       ORDER BY c.relpages DESC;
        clock_timestamp        | pg_prewarm | relkind |        relname
-------------------------------+------------+---------+-----------------------
 2017-11-07 11:42:33.914195+00 |      17903 | r       | pgbench_accounts
 2017-11-07 11:42:33.917725+00 |       6518 | r       | pgbench_history
 2017-11-07 11:42:33.918919+00 |       2745 | i       | pgbench_accounts_pkey
 2017-11-07 11:42:33.919412+00 |         45 | r       | pgbench_tellers
 2017-11-07 11:42:33.919427+00 |         24 | r       | pgbench_branches
 2017-11-07 11:42:33.919438+00 |          2 | i       | pgbench_branches_pkey
 2017-11-07 11:42:33.919443+00 |          2 | i       | pgbench_tellers_pkey
(7 rows)


13 Apr 2016

Postgres Performance - New Connections

Last in the PostgreSQL Performance series:

Please read more about Test Particulars / Chart Naming methodology from the previous post in the series.

Takeaway:
  • New Connection performance has been constant (and at times has mildly deteriorated)
    • i.e. Pgbench with -C (see the example after this list)
  • I tried to use all possible combinations to find whether a corner case got better over the releases, but almost every test gave the same result.
  • Possibly Tom and others don't consider this a practical use-case and therefore a non-priority.
  • Unrelated, in the future, I'd club / write about such tests in a better fashion, rather than posting them as separate posts. I guess I got carried away by the results! Apologies about that.
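
For reference, the new-connection test corresponds to something like this pgbench invocation (a sketch; the connection / thread counts and -S are just example choices):

pgbench -C -c 4 -j 4 -S -T 100    # -C opens a fresh connection for every transaction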













Postgres Performance - Default Pgbench configurations

Continuing the PostgreSQL Performance series:

Please read more about Test Particulars / Chart Naming methodology from the previous post in the series.

Takeaway:
  • Unrelated to the Read-Only performance mentioned earlier, which grew slowly over each Major release, these numbers have grown considerably specifically in 9.5
  • 9.5 Branch is at times 35-70% faster than 9.4 Branch
  • To reiterate, this test had no Pgbench flags enabled (no Prepared / no Read-Only / etc.) besides 4 Connections & 4 Threads.





Postgres Performance - Read Only

Continuing the PostgreSQL Performance series:

Please read more about Test Particulars / Chart Naming methodology from the previous post in the series.

Takeaway:
  • Read-Only Performance numbers have consistently grown from (at least) 9.1 onwards
  • 9.5 Branch is 35%-50% faster than 9.1 Branch





Postgres performance - File + ReadOnly

Just wanted to see how Postgres performance compares across the different Major releases. I had a spare Pi2 lying around and found it useful for such a performance test.

More inferences to follow, in future posts.
As always, any feedback is more than welcome.

TLDR:
  • Despite a regression from 9.3 onwards, the combination of File + Read-Only has improved drastically in the 9.6dev branch
  • 9.6dev branch is 
    • 2x faster than 9.1 on some tests
    • 50% faster than 9.5.2 (currently, the latest stable release!) on some tests
  • Yay!!
Hardware:
  • Raspberry Pi Model 2B
  • 1GB RAM
  • 900 MHz Quad-Core-ARM Cortex-A7 (ARMv7 Processor rev 5 (v7l))
  • 32GB Class 10 SD Card
Software: Source used for this test

Note:
  1. All configurations were run 10 times each
  2. All configurations were run for 100 secs each
  3. The phrases in the name of each chart tell what Pgbench was run with (a combined example follows this list):
    1. ConnX: with -cX (For e.g. Conn64: with -c64)
    2. Thread4: with -j4
    3. Prepared: with -M Prepared 
    4. File: External SQL file with one SQL command "SELECT 1;"
    5. Readonly: with -S
    6. 100secs: with -T 100
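
So, putting those together, a chart named Conn64-Thread4-Prepared-Readonly-100secs would correspond to roughly:

pgbench -c 64 -j 4 -M prepared -S -T 100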








12 Nov 2015

Pigz - Parallel GZip

For a while, I've been frustrated by the fact that gzip is unable to consume idle CPUs on my laptop. I recently read about pigz (a drop-in GPL replacement) that implicitly consumes all CPUs.

Pronounced Pig-Zee, PIGZ is a drop-in replacement for GZIP and automatically consumes all CPUs of the host machine in order to get the compression completed much faster.

OLD: tar -cf - sourcefolder | gzip > sourcefolder.tar.gz

NEW: tar -cf - sourcefolder | pigz > sourcefolder.tar.gz
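
And if you'd rather not let it hog every core (say, while you're still using the laptop), pigz takes a -p flag to cap the number of compression threads; for example:

CAPPED: tar -cf - sourcefolder | pigz -p 2 > sourcefolder.tar.gz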

Enjoy!

11 Jun 2015

Using a Pi as a home-based Media Server

This is among a series of articles on my experience with the Pi.

This article is about using a Pi as a primitive low-end File-Server for your home network:

Expectations:

  • Torrent-Server
    • Download Torrents
    • Store on a File-Share
  • Windows Share
    • Serve the File-Share as a Windows Share Drive
    • Allow Read / Write to this Windows Share Drive
  • Use File-Share as Media-Server
    • Using any Smartphone / Laptop
      • Play Movies using VLC
      • View all Photos / Home-Videos


Effectively:
  1. Always on
    1. Always accepting new Torrent requests
    2. Instantly start downloading 
  2. In Real-time
    1. Allow users to view torrent download status
  3. Use any UPnP Phone App to play Video content over WiFi on a SmartPhone
  4. Use VLC (Network Streaming) to play any Video over WiFi on Laptop / Desktop
  5. Use Windows Share Drive and view all Photos / Home-Videos as needed

How-To (a rough command sketch follows this list):
  • On Server
    • Install Torrent-Daemon
    • Configure Torrent-Daemon to listen on RPC requests
      • Here's the howto for that
    • Configure Samba
      • Set the Download folder to be shared
  • On Windows
    • Install Transmission-GUI-Remote
    • Configure GUI to use the RPC based Torrent-Daemon server
    • Make this application the default for .torrent & magnet files
  • On Linux
    • Install Transmission-Remote
      • sudo apt-get install transmission-remote
    • Configure that to use Daemon (instead of downloading directly with transmission-cli)
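
For reference, here's roughly what the server-side setup looks like on Raspbian / Debian (a sketch; the share name and paths below are my assumptions, adjust to taste):

# torrent daemon + Samba + DLNA server
sudo apt-get install transmission-daemon samba minidlna

# enable RPC so remote GUIs can submit torrents (stop the daemon first, else it overwrites the file)
sudo service transmission-daemon stop
sudo nano /etc/transmission-daemon/settings.json   # "rpc-enabled": true, "rpc-whitelist": "127.0.0.1,192.168.*.*"
sudo service transmission-daemon start

# share the download folder over Samba: append a share to /etc/samba/smb.conf, e.g.
#   [downloads]
#   path = /var/lib/transmission-daemon/downloads
#   read only = no
sudo service smbd restart

# point miniDLNA at the same folder (media_dir= in /etc/minidlna.conf), then restart it
sudo service minidlna restart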

Pros/Cons:
  • Pros
    • Once torrent download has begun, client can disconnect / shutdown client computer
    • Server continues to get torrents, after a restart
    • miniDLNA serving speed pretty decent
      • Watching a movie (over WiFi) on VLC
        • CPU ratio barely 0.01 which is pretty decent
          • Should be able to easily serve a small army :)
            • If no other IO is happening
            • If WiFi isn't the bottleneck 
  • Cons
    • Storage on Pi needs to be managed from time-to-time
      • Currently it's highly ad hoc
        • Truly taking RAID concept to heart!!
          • Have 6 Pen Drives (ranging from 4GB to 16GB)
            • 8GBs are INR ~170 ( $3 )
            • All connected via 3 USB Hubs (both USB 2.0 and USB 3.0)
              • One Powered, two non-powered hubs
            • Most on Btrfs
              • Pretty stable, surprising that still get 1+ MBps with Pi's CPU
            • Some VFAT fs since have need of moving stuff off of Pi to a Windows Laptop
            • All mounted under /disk
              • Very temporary arrangement
                • Ideally am looking for a LVM solution (which allows me to remove / add pen-drives on the fly and thus can be called as one) that actually is suited for this purpose.
            • But miniDLNA picks up Photo / Audio / Video pretty well from all different mounted folders
    • Download speed limitations
      • Internet Speed = 2 MBps
      • Daemon Download speed capping = 1Mbps
        • But PI never reaches it :D !
          • Just imagine the poor configuration
            • Pi + Btrfs + USB 2.0 + 3 USB Chain + Unpowered Hub
              • Still consistent at 450kbps! Impressive!
    • Lack of Transmission-Daemon configuration tool means all configuration has to be done via black-screen configuration files
  • Careful configuration
    • Give all access to 'all' users only if you're sure that all users are going to be careful

5 Nov 2014

PostgreSQL Explain Plan-Nodes Grid


While reading up on the various PostgreSQL EXPLAIN plan nodes from multiple sources, I realized there's a clear lack of a consolidated grid / cheat-sheet from which I could (in one view) cross-check how the plan nodes compare. While at it, I also tried to attribute what their (good / bad) characteristics are.

Hope this (Work-In-Progress) lookup table helps others on the same path. Any additions / updates are more than welcome.

Update:
The image below is an old snapshot of the document. The updated web document is available here.


