29 Jul 2017

Symbols in Redshift User Passwords work just fine


Recently I read a few posts and discussions in which people doubted that Redshift accepts (or works well with) ASCII symbols in user passwords.

It felt like a good time to write this short post showing that the Redshift engine seems to work fine with non-alphanumeric printable ASCII symbols.

You can see a few things (in the sample output given below):

  • All non-alphanumeric printable ASCII characters worked fine (at least all that my US-International QWERTY keyboard could throw at it)
  • For those who also need ' (single-quote) and " (double-quote), you can always use $$ as the quote delimiters
  • You still need at least one of each of the following:
    • An upper-case English letter
    • A lower-case English letter
    • A digit


------------------------------------------------------------
# psql -U adminuser -h  rs_cluster -p 5439 db

psql (9.6.3, server 8.0.2)
Type "help" for help.

rs_cluster adminuser@db-# alter user userb with password 'Aa1~!@#$%^&*()_+-`{}[]|";:,<.>/?';
ALTER USER
Time: 237.012 ms
rs_cluster adminuser@db-# \q

# psql -U userb -h  rs_cluster -p 5439 db
Password for user userb:

psql (9.6.3, server 8.0.2)
Type "help" for help.

rs_cluster userb@db-# alter user userb with password $$Aa1~!@#$%^&*()_+-`{}[]|";:,<.>/?'"$$;
ALTER USER
Time: 191.505 ms
rs_cluster userb@db-# \q

# psql -U userb -h  rs_cluster -p 5439 db
Password for user userb:

psql (9.6.3, server 8.0.2)
Type "help" for help.

rs_cluster userb@db-#
------------------------------------------------------------

21 Jul 2017

Using generate_series() in Redshift

Redshift's documentation clearly states that it doesn't support the commonly used Postgres function generate_series(), which gets very frustrating when you just want to fill a table with a lot of rows and have no valid data source to draw from.

Solution (generates billions of integers on my test cluster):

-- INSERT INTO tbl
WITH x AS (
  SELECT 1
  FROM stl_connection_log a, stl_connection_log b, stl_connection_log c
  -- LIMIT 100
)
SELECT row_number() OVER (ORDER BY 1) AS n
FROM x;

Any Redshift cluster with even a basic level of login activity should have enough rows for this. For example, on my test cluster, where I am the only user, this currently generates 4,034,866,688 (about 4 billion) rows :) ! That number is exactly 1592³, i.e. the 1,592 rows currently in my STL_CONNECTION_LOG, cubed.
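If you need an exact number of rows rather than "as many as possible", the same pattern can be capped with LIMIT. A minimal sketch (the table name tbl is illustrative; it is assumed to have a single integer column):

```sql
-- Insert exactly 1,000,000 sequential integers into tbl
INSERT INTO tbl
WITH x AS (
  SELECT 1
  FROM stl_connection_log a, stl_connection_log b, stl_connection_log c
  LIMIT 1000000
)
SELECT row_number() OVER (ORDER BY 1)
FROM x;
```

The LIMIT inside the CTE keeps the cross join from materializing billions of rows when you only need a fixed count.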

Interestingly, despite what the documentation says, generate_series() actually does work on Redshift:

# select b from generate_series(1,3) as a(b);
┌───┐
│ b │
├───┤
│ 1 │
│ 2 │
│ 3 │
└───┘
(3 rows)

The reason this still wouldn't let you insert rows into a table, though, is that generate_series() is a leader-node-only function, whereas INSERTs (on any multi-node Redshift cluster) run on the compute nodes, which don't know about this function.
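To see the restriction in action, compare a leader-node-only query with one that has to touch the compute nodes (a sketch; tbl is an illustrative table name, and the exact error text may vary):

```sql
-- Fine: the whole query runs on the leader node
SELECT * FROM generate_series(1, 3);

-- Fails on a multi-node cluster: the INSERT is planned for the
-- compute nodes, which have no generate_series() implementation
-- INSERT INTO tbl SELECT * FROM generate_series(1, 3);
```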

The workaround above works because ROW_NUMBER() and a CROSS JOIN let us generate a large number of rows; for that, though, the initial data set (here the STL_CONNECTION_LOG system table) must have at least a few rows to multiply on! You could use any other system table that is available on the compute nodes, if required, for some other purpose.
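A quick way to check how many rows the three-way self-cross-join would produce on your own cluster (a sketch; any compute-node-visible table works in place of STL_CONNECTION_LOG):

```sql
-- n rows cross-joined with two more copies of itself yields n^3 rows
SELECT count(*) AS n,
       count(*) * count(*) * count(*) AS n_cubed
FROM stl_connection_log;
```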

Play On!
