24 Dec 2016

Watch an Online Movie: (Wget -c || Deluge) && Chrome > Chromecast

For all those who are in the odd situation where:

  • You have a paid account on a movie site (like Netflix etc.)
  • You are unable to watch movies online because the video streaming is just too slow
    • Either because your convenient viewing times are 'peak' times for the server
    • Or because you are behind a painfully bad ISP
  • But you are able to download the movie as an option.

For such customers, downloading the movie overnight (using, e.g., wget) is a big help!

I regularly use this to download a movie and watch it with non-tech people (my kids), to whom you can't explain why the movie keeps 'Buffering'!

REM -O: output filename
REM -t 0: retry indefinitely
REM -c: resume a partial download; -T 10: use 10-second network timeouts
REM -w 10: wait 10 seconds before retrying after a disconnection
c:\bin\wget -O KD1242.mp4 -t 0 -c -T 10 -w 10 ^
  "https://www.yourfavouritechannel.com/abcd/KD1242.mp4?g=1f3410635&sha1=JGgJm02BOvqgsdvC32BcUg"

Windows binaries for GPL'ed GNU software (such as Wget) are heaven-sent here.

And if you want a big-TV experience (e.g. if you have a Chromecast), you can stitch things together using the Google Cast extension for the Chrome browser: open "c:\" in the browser (i.e. file:///c:/) and play the movie directly there, since VLC's Chromecast streaming is still in beta.

If by chance you're downloading videos via torrents (e.g. NASA videos), here are my GPL recommendations for Windows:

  • Deluge: if you haven't seen this, you should really replace uTorrent and similar clients with it
  • Use the Streaming extension (Github link)
  • Copy the URL that the extension provides and paste it into Chrome
  • Cast your tab to your Chromecast + Enjoy!


Have Fun!

9 Dec 2016

Custom pg_dumpall now works with AWS Redshift

While trying to work with AWS Redshift, it was interesting to see pg_dumpall fail to dump the databases in my cluster!

Delving further: for obvious reasons, a managed service like this hides some global information that pg_dump(all) needs ... and no, this post is NOT about circumventing that.

This branch gives a 'near' workaround for those who are okay with extracting all databases + (almost*) all global information (users etc.) in a single command.

There are caveats though:

  • Redshift doesn't support COPY TO, so the best workaround is dumping data as INSERT statements. Painful, but it works.
  • Barring passwords, all globals can be dumped. The script simply recreates all users password-less, but that's better than having to write 'CREATE USER ...' commands for your users by hand!

For some people these caveats are acceptable, considering that they get a scriptable way of dumping all databases in one go, along with user information.
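
For illustration, a minimal sketch of an invocation, assuming the branch builds a standard pg_dumpall binary (the endpoint and user below are placeholders):

# Hypothetical Redshift cluster endpoint; 5439 is Redshift's default port.
# --inserts dumps data as INSERT statements, since COPY TO isn't available.
./pg_dumpall -h mycluster.abc123.us-east-1.redshift.amazonaws.com \
  -p 5439 -U masteruser --inserts > redshift_dump.sql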

If you're interested in a similar hack for AWS Postgres, you're in luck!

Down the line, I'll try to push this to the core, but for now, this works!

Go Play :) !!

Patched pg_dumpall works with AWS RDS Postgres

UPDATE: This patch is now a part of Native Postgres!! Read more here.

While trying to work with AWS RDS Postgres, it was interesting to see pg_dumpall fail to dump databases at all!

Delving further: for obvious reasons, a managed service like this hides some global information that pg_dump(all) needs ... and no, this post is NOT about circumventing that.

This branch gives a 'near' workaround for those who are okay with extracting all databases + (almost*) all global information (users etc.) in a single command. The *only* globals that can't be dumped are the user passwords, which for some people is fine, considering that they still get a scriptable way of dumping all databases in one go, along with all user information.

Note: for those keen to know, the script simply recreates all users password-less, but that's better than having to write 'CREATE USER ...' commands for your users by hand!
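
If you're on Postgres 10 or later, stock pg_dumpall now offers a --no-role-passwords option for this; a minimal sketch (the endpoint and user below are placeholders):

# Hypothetical RDS endpoint; --no-role-passwords dumps all roles without
# their passwords, so they can be restored and re-passworded later
pg_dumpall -h myinstance.abc123.us-east-1.rds.amazonaws.com \
  -p 5432 -U masteruser --no-role-passwords > rds_dump.sql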

Down the line, I'll try to push this to the core, but for now, this works!

Go Play :) !!

14 Apr 2016

ALTER TABLE Gotcha related to USING

This is an old gotcha that has been documented (although probably not widely), and one that we (at my company) almost got caught by recently.

This post documents the issue, which looks like something that isn't going away anytime soon.

Consider this:

CREATE TABLE k (id INTEGER, b text);
CREATE TABLE l (id INTEGER);

CREATE VIEW vw_k AS SELECT k.b FROM k JOIN l USING (id);
TABLE vw_k;
ALTER TABLE l RENAME COLUMN id TO id2;
INSERT INTO k(id, b) VALUES (1, 'abc');
INSERT INTO l(id2)   VALUES (1);
TABLE vw_k;
DROP VIEW vw_k;
CREATE VIEW vw_k AS SELECT k.b FROM k JOIN l USING (id);


This is the output:

rt_testing=# CREATE TABLE k (id INTEGER, b text);
CREATE TABLE
rt_testing=# CREATE TABLE l (id INTEGER);
CREATE TABLE
rt_testing=# CREATE VIEW vw_k AS SELECT k.b FROM k JOIN l USING (id);
CREATE VIEW
rt_testing=# TABLE vw_k;
 b
---
(0 rows)

rt_testing=# ALTER TABLE l RENAME COLUMN id TO id2;
ALTER TABLE
rt_testing=# INSERT INTO k(id, b) VALUES (1, 'abc');
INSERT 0 1
rt_testing=# INSERT INTO l(id2)    VALUES (1);
INSERT 0 1
rt_testing=# TABLE vw_k;
  b
-----
 abc
(1 row)

rt_testing=# DROP VIEW vw_k;
DROP VIEW
rt_testing=# CREATE VIEW vw_k AS SELECT k.b FROM k JOIN l USING (id);
ERROR:  column "id" specified in USING clause does not exist in right table

If you noticed, despite the rename of the base table's relevant column (i.e. l.id => l.id2), the VIEW kept working without fail; it's only when you try to DROP and re-CREATE the VIEW sometime down the line that the failure surfaces.

In fact, this gets difficult to trace because the VIEW continues to work normally even after the column rename. For example, new rows inserted into the renamed table continue to show up in the VIEW as if its definition were up to date!

To summarize: when renaming a column that a dependent VIEW joins on via a USING clause, be aware that PostgreSQL won't necessarily complain about the dependency; you need to make that check yourself. Otherwise it's possible that sometime down the line, when you're trying to restore a production backup, you'll start seeing errors related to changes you made a year back!
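
One way to catch this early is to re-deparse each view's stored definition and compare it with your original DDL; after a rename, the deparsed SQL may no longer match what you wrote. A minimal sketch via psql (rt_testing and vw_k as in the session above; run it before the view is dropped):

# Deparse the current, stored definition of the view; if it has drifted
# from your original CREATE VIEW text, a rename has silently affected it
psql -d rt_testing -c "SELECT pg_get_viewdef('vw_k'::regclass, true);"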

13 Apr 2016

Postgres Performance - New Connections

Last in the PostgreSQL Performance series:

Please read more about Test Particulars / Chart Naming methodology from the previous post in the series.

Takeaway:
  • New Connection performance has stayed flat (and at times mildly deteriorated)
    • i.e. pgbench with -C (see the sketch after this list)
  • I tried all possible flag combinations to check whether some corner case improved over the releases, but almost every test gave the same result.
  • Possibly Tom and others don't consider this a practical use-case, and it is therefore a non-priority.
  • Unrelated: in the future I'll club such tests together and write them up in a better fashion, rather than posting them as separate posts. I guess I got carried away by the results! Apologies for that.
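
For reference, a minimal sketch of the kind of run this covers (pgbench_db is a placeholder database name):

# -C: establish a fresh connection for every transaction
pgbench -C -c 4 -j 4 -T 100 pgbench_db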

Postgres Performance - Default Pgbench configurations

Continuing the PostgreSQL Performance series:

Please read more about Test Particulars / Chart Naming methodology from the previous post in the series.

Takeaway:
  • In contrast to the Read-Only performance mentioned earlier, which grew slowly with each major release, these numbers have grown considerably, specifically in 9.5
  • The 9.5 branch is at times 35-70% faster than the 9.4 branch
  • To reiterate, this test had no Pgbench flags enabled (no Prepared / no Read-Only / etc.) besides 4 connections & 4 threads (a sketch follows this list).
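
In other words, a plain run of pgbench's default, TPC-B-like mixed read/write script; a sketch (pgbench_db is a placeholder database name):

# Default built-in script, 4 connections / 4 threads, 100 seconds
pgbench -c 4 -j 4 -T 100 pgbench_db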

Postgres Performance - Read Only

Continuing the PostgreSQL Performance series:

Please read more about Test Particulars / Chart Naming methodology from the previous post in the series.

Takeaway:
  • Read-Only Performance numbers have consistently grown from (at least) 9.1 onwards
  • 9.5 Branch is 35%-50% faster than 9.1 Branch
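
A sketch of the read-only variant these charts cover (pgbench_db is a placeholder database name):

# -S: pgbench's built-in SELECT-only script
pgbench -S -c 4 -j 4 -T 100 pgbench_db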

Postgres performance - File + ReadOnly

Just wanted to compare how Postgres performed across the different major releases. I had a spare Pi2 lying around and found it useful for such a performance test.

More inferences to follow, in future posts.
As always, any feedback is more than welcome.

TLDR:
  • Despite a regression from 9.3 onwards, the combination of File + Read-Only has improved drastically in the 9.6dev branch
  • 9.6dev branch is 
    • 2x faster than 9.1 on some tests
    • 50% faster than 9.5.2 (currently, the latest stable release!) on some tests
  • Yay!!
Hardware:
  • Raspberry Pi Model 2B
  • 1GB RAM
  • 900 MHz Quad-Core-ARM Cortex-A7 (ARMv7 Processor rev 5 (v7l))
  • 32GB Class 10 SD Card
Software: Source used for this test

Note:
  1. All configurations were run 10 times each
  2. All configurations were run for 100 secs each
  3. The phrases in the name of each chart tell what Pgbench was run with (a combined example follows this list):
    1. ConnX: with -cX (e.g. Conn64: with -c64)
    2. Thread4: with -j4
    3. Prepared: with -M prepared
    4. File: an external SQL file with the single command "SELECT 1;"
    5. Readonly: with -S
    6. 100secs: with -T 100
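
Putting the naming together, a chart labelled "Conn64 Thread4 Prepared Readonly 100secs" corresponds to a run like the following (pgbench_db is a placeholder database name, select1.sql a hypothetical file holding just "SELECT 1;"):

# Built-in SELECT-only script, prepared protocol, 64 connections, 4 threads
pgbench -c 64 -j 4 -M prepared -S -T 100 pgbench_db

# "File" variants run the external script instead of the built-in one
pgbench -c 64 -j 4 -M prepared -f select1.sql -T 100 pgbench_db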
