That Guy From Delhi
Ramblings of That Guy From Delhi ...
Robins Tharakan

Find Database DNS / Endpoint via SQL (2021-12-21)
How to get a database identifier using SQL. Client programs often need to answer the question "where am I logged into?". This post provides a few ways to answer that question. As more and more applications break out of the regular (database + application) architecture, this becomes increasingly important in scenarios where you're not 100% sure "where" you just logged…

Which SQL causes a Table Rewrite in Postgres? (2020-12-29)
EDIT: Updated to v17 (devel) - (Jan 2024). While developing SQL-based applications, it is commonplace to stumble on these 2 questions: Which DDLs would block a concurrent workload? And whether a DDL is going to rewrite the table (and in some cases may need ~2x the disk space)? Although completely answering Question 1 is beyond the scope of this post, one of the important pieces that helps answer both of…

Optimizations in GROUP BY vs SELECT DISTINCT (2020-02-29)
(This came out of something I was trying out and discussing with Postgres enthusiasts - thanks to all for clarifying doubts.)
This article aims to highlight one aspect of how the query planner's implementation of SELECT ... GROUP BY differs from SELECT DISTINCT.
For example:
SELECT b,c,d FROM a GROUP BY b,c,d;
vs
SELECT DISTINCT b,c,d FROM a;
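To see how the planner treats the two forms on your own instance, you can compare their plans side by side. A minimal sketch (the table definition and data are made up for illustration; actual plans vary by Postgres version):

```sql
-- Hypothetical table matching the example above
CREATE TABLE a (b int, c int, d int);
INSERT INTO a SELECT i % 10, i % 20, i % 30
FROM generate_series(1, 100000) i;
ANALYZE a;

-- Compare the two plans; depending on the version, they may differ
-- (e.g. HashAggregate for one vs. Unique over a Sort for the other)
EXPLAIN SELECT b, c, d FROM a GROUP BY b, c, d;
EXPLAIN SELECT DISTINCT b, c, d FROM a;
```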
We see a few scenarios where Postgres optimizes…

Compiling any C source on WSL (Linux on Windows) (2019-11-08)
This is a short post, in the hope that someone doesn't spend hours wondering why a fresh Postgres source clone (or any C code, for that matter) complains on the very first non-comment line about something trivial (see sample below) as soon as you trigger ./configure:
$ ./configure
./configure: 18: ./configure: : not found
./configure: 34: ./configure: Syntax error: newline unexpected (expecting ")")

How about 1000 cascading Replicas :) (2019-04-18)
The other day, I remembered an old 9.0-era mail thread (when Streaming Replication had just launched) where someone had tried to daisy-chain Postgres replicas and see how many (s)he could muster.
If I recall correctly, the OP could squeeze only ~120 or so, mostly because the laptop's memory gave way (and not really because of an engine limitation).
I couldn't find that post, but it was intriguing…

How to add / remove DEFAULT PRIVILEGES (2019-01-17)
At times we need to do a DROP USER and are unable to, because of existing DEFAULT PRIVILEGES associated with the user, which disallow the DROP USER from going ahead.
Common SQL such as this does not give DEFAULT PRIVILEGES.
You can find DEFAULT PRIVILEGES by using \ddp in psql. If you haven't heard of psql, probably that'd be a good place to start.
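As a sketch of the clean-up this post describes (the role and schema names here are hypothetical, purely for illustration), the sequence before a DROP USER typically looks like this:

```sql
-- List existing default privileges (the same information psql's \ddp shows)
SELECT * FROM pg_default_acl;

-- Suppose role "alice" had granted default SELECT on future tables in
-- schema "public" to role "bob"; reverse that grant with:
ALTER DEFAULT PRIVILEGES FOR ROLE alice IN SCHEMA public
    REVOKE SELECT ON TABLES FROM bob;

-- Once no default privileges reference the role, the drop can proceed
DROP USER alice;
```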
Once you have the privileges,…

Update: RDS Prewarm script updated to fetch FSM / VM chunks (2017-11-20)
(This post is in continuation of my previous post regarding initializing an RDS Postgres instance.)
This simple SQL "initializes" the EBS volume linked to an RDS instance, something which isn't otherwise possible without sending a workload (and experiencing high latency on the first run).
Key scenarios where this is really helpful are:
Create a Read-Replica (or Hot Standby in Postgres terms)
Restore a…

Prewarming / Initializing an RDS Postgres instance (from S3) (2017-11-07)
UPDATE: Read this for recent updates. The SQL now successfully fetches *all* disk blocks on most RDS PostgreSQL instances (read the post for the rare exceptions).
As many of you know, AWS RDS Postgres uses EBS, which has an interesting feature called Lazy Loading: it can instantiate a disk (the size of which can be mostly anything from 10GB to 6TB) and bring it online within a matter of…

First alpha release of PsqlForks - Menon (2017-10-14)
Primer: PsqlForks aims to support all DB engines that (even partially) speak Postgres (psqlforks = psql for Postgres forks).
Given that PsqlForks has been in development for a few weeks, it's time to stabilize a bit, and towards that we finally have Menon, PsqlForks' first alpha release. Being an alpha, by definition it isn't ready for production, but it feels stable enough…

PsqlForks supports AWS IAM authentication for Redshift (2017-10-13)
With this commit, PsqlForks ( http://psqlforks.com ) can now fetch credentials from AWS IAM. Read more about Redshift's support for generating database credentials using the IAM authentication feature, here.
Since the entire AWS CLI isn't baked into PsqlForks (yet!), you'd need the AWS CLI installed and working on the host (from where psql is called).
This took a while, since I…

PsqlForks now recognizes RDS PostgreSQL separately (2017-10-12)
With this commit, PsqlForks ( http://psqlforks.com ) now recognizes RDS PostgreSQL separately.
This isn't utilized much yet, but the infrastructure is going to be helpful in skipping / avoiding some commands that are defunct / not possible in the RDS PostgreSQL offering.
PsqlForks now recognizes PgBouncer (2017-09-29)
With this commit, PsqlForks knows when it's talking to PgBouncer (and not Postgres).
Down the line, this should pave the way for PsqlForks to more cleanly convey why (most of) the given psql shortcuts don't work (and what else does).
As always, the psql/README has the most up-to-date status of any engine's support.
$ psql -h localhost -E -p6543 -U postgres pgbouncer
psql (client-version:…

PsqlForks now supports CockroachDB (2017-09-25)
PsqlForks now supports CockroachDB, as much as is currently possible. You can check its current SQL status here.
$ /opt/postgres/master/bin/psql -h localhost -E -p 26257 -U root
psql (client-version:11devel, server-version:9.5.0, engine:cockroachdb)
Type "help" for help.
root=> select version();
…

PsqlForks now supports PipelineDB (2017-09-15)
After working on this psql variant that intends to support all Postgres forks, I finally narrowed down to naming it.
Since this was essentially Psql (for) Forks, quite intuitively, I chose to name it PsqlForks.
Considering that until recently this fork just supported Amazon Redshift, this naming didn't make much sense if it wasn't supporting at least 2 forks :) !
Thus, PsqlForks now…

psql \d now supports Interleaved / Compound SORTKEYs (in Redshift) (2017-09-02)
In continuation of the Redshift support series, Describe Table (e.g. \d tbl) now shows SORTKEY details. This resolves Issue #6 and shows both COMPOUND / INTERLEAVED variations along with all the column names.
This change was complicated because Redshift doesn't natively support the LISTAGG() function on system / catalog tables, which meant that I had to resort to a pretty verbose…

psql \d now supports DISTKEY / SORTKEY / ENCODING (in Redshift) (2017-08-31)
This is in continuation of my work on (my forked version of) psql to better support Redshift (read more here).
Now \d table provides some additional Redshift-specific table properties, such as:
DISTKEY
SORTKEY
COMPRESSION (ENCODING)
ENCRYPTION
Sample:
t3=# CREATE TABLE customer(
custkey SMALLINT ENCODE delta NOT…

Redshift support for psql (2017-08-12)
I am sure you know that psql doesn't go out of its way to support Postgres forks natively. I obviously understand the reasoning, which allowed me to find a gap that I could fill here.
The existing features (in psql) that work with any Postgres fork (like Redshift) do so entirely because it is a fork of Postgres. Since I use psql heavily at work, last week I decided to begin maintaining a Postgres…

Reducing Wires (2017-08-03)
Recently I got an additional monitor for my workstation at home and found that the following wires were indispensable:
USB Mouse
Monitor VGA / HDMI / DVI cable
USB Hub cable (Pen Drive etc.)
I was lucky that this ($20, used) Dell monitor was an awesome buy, since it came with a monitor USB hub (besides other goodies such as vertical rotation, etc.).
After a bit of rejigging, this is how things…

Symbols in Redshift User Passwords work just fine (2017-07-29)
I recently read a few posts / discussions in which people doubted whether Redshift accepts (works well with) ASCII symbols in user passwords.
It felt like a good time to write this short post showing that Redshift (Engine) seems to work fine with (non-alphanumeric) (printable) ASCII symbols.
You can see a few things (in the sample output given below):
All non-alphanumeric printable ASCII…

Using generate_series() in Redshift (2017-07-21)
Considering that Redshift clearly states that it doesn't support the (commonly used Postgres) function generate_series(), it gets very frustrating if you just want to fill a table with a lot of rows and can't, without a valid data source.
Solution (Generates a billion integers on my test-cluster):
--INSERT INTO tbl
WITH x AS (
SELECT 1
FROM stl_connection_log a,…

Patch: Using --no-comments with pg_dump (2017-05-31)
Recently I submitted a patch for review that allows a non-superuser to practically use a backup taken with pg_dump.
Currently this is a kludge (and a well-known one at that - Ref 1 / Ref 2 / Ref 3 / Ref 4), but since the ideal solution is too big a bite to chew and not in high demand, it has seen little to no traction in the past decade.
This patch should allow the above. But more…

horntail it is :) (2017-05-31)
I had been planning to contribute to the Postgres Buildfarm by running it on one of my VMs and finally got that done... and horntail it is.
A few things to note, before it fades from (my) memory:
It (obviously) requires quite a bunch of libraries, especially if you're using the defaults (e.g. with ssl / krb5, etc.)
You may either disable some of those exotic options (not ideally…

Using pg_dumpall with Google Cloud Postgres (2017-03-14)
Luckily, the recent update to pg_dumpall works well (without any extra effort) to dump globals off the recently launched Google Cloud PostgreSQL service.
This is because this Postgres service seems to have the same limitation as AWS RDS Postgres, in that none of the user accounts is a superuser in the Postgres sense. Therefore, this platform too disallows access to pg_authid.
The…

Redshift Limitations (2017-03-12)
Recently I have been toying around a little with Redshift (a popular data-warehouse tool from AWS) and see some minor but obvious limitations in its SQL support:
FULL OUTER JOIN + ON Func(x) = Func(y) : Not Supported
Although Redshift supports most variants of FULL OUTER JOIN, when the JOIN condition contains a function on both sides it gives up, complaining "ERROR:…

Using pg_dumpall with AWS RDS Postgres (2017-03-08)
My patch (allowing a non-superuser to dump globals) just got committed to Postgres 10.0.
Besides the use mentioned above, this patch allows one to dump globals from their AWS RDS Postgres database and have a complete backup as part of their Disaster Recovery (DR) solution. This earlier required an inelegant hack (detailed here in another post of mine) that wasn't very convenient for…