28 Apr 2024

Boost Database Security: Restrict Users to Read Replicas

Only Allow Login to Read-Replicas and Standbys

When you're working with large databases in production, it is incredibly common to use read-replicas to improve performance. These read-replicas are a copy of your primary (main) database and let your applications offload read-heavy queries, which in turn reduces strain on your primary database, effectively making the application faster and snappier.

Sometimes, you may want to restrict specific database users so they can connect ONLY to these read-replicas, and not to the primary database server. This can be tricky to implement, since any permissions configured for this use-case, whether at the user level, the database level, the schema level or even the table level, would quickly be replicated to the read-replicas and thus would not work as expected.

This guide will show how to configure a database user to log in successfully only on a read-replica. The only requirement is to enable the pg_tle extension [3] on your PostgreSQL database. This is simple to do on your Ubuntu-based laptop (see how to do that here [2]) or on virtual machines offered by your favourite cloud provider. Furthermore, you could apply your login rules using PL/pgSQL, PL/v8 or even PL/Rust - see here [1].

Why Restrict Access?

There are many good reasons for restricting users to read-replicas:

  • Performance: You can dedicate your primary database server to handling write operations (like updating data), ensuring those operations happen as fast as possible.

  • Reporting / Analytics: Production environments often have dedicated users for ancillary tasks, such as monitoring, reporting dashboards, read-only tenants etc. Restricting these database users to read-replicas helps reduce extra load on the primary database.

  • Security: In some cases, granting direct access to the primary database might be considered a security risk. Further, you may not be able to enforce login hygiene for all your database users, in which case having a lockdown mechanism that rejects those users' logins on the primary is crucial for a safe application rollout.

Prerequisites

  • An existing PostgreSQL database instance with at least one read-replica.
    • You could also try this on your own Postgres database with pg_tle extension. Read here [2] for more on how to install pg_tle on your Ubuntu system.
  • Basic understanding of users and permissions within a database.

Steps

  1. Identify Target Database and Users: First, decide how to implement the restriction, i.e. which users (and which database) are to be restricted to logging in only on read-replicas. In the example below, we restrict the user standby_only_user so that it can only log in to Standbys / Read-Replicas of the database prod_db.
psql <<SQL
  \c prod_db
  CREATE EXTENSION pg_tle;
SQL 
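
This walkthrough assumes the role standby_only_user already exists. If it doesn't, here is a minimal sketch to create it (the password is a placeholder); run it on the primary, from where it will replicate to the standbys:

psql <<SQL
  \c prod_db
  -- Hypothetical test role; replace the password before using
  CREATE ROLE standby_only_user LOGIN PASSWORD 'changeme';
SQL
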
  2. Ensure that shared_preload_libraries is properly set to allow pg_tle. Also make sure that the pgtle.clientauth_db_name is appropriately set to the desired database (here prod_db):
cat <<EOL >> data/postgresql.conf
  shared_preload_libraries='pg_tle'
  pgtle.enable_clientauth=require
  pgtle.clientauth_db_name=prod_db
  pgtle.clientauth_users_to_skip=robins
  pgtle.clientauth_databases_to_skip=''
EOL
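
Note that changes to shared_preload_libraries only take effect after a server restart. Also, once pgtle.enable_clientauth is set to require, logins are rejected until a clientauth function is registered (see the FATAL message discussed further below), except for users listed in pgtle.clientauth_users_to_skip - so run this quick verification as one of those users (here robins):

psql <<SQL
  SHOW shared_preload_libraries;
  SHOW pgtle.enable_clientauth;
  SHOW pgtle.clientauth_db_name;
SQL
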
  3. Secret Sauce:

Next, we create the key pg_tle function that allows the user standby_only_user to log in successfully only if the server is a standby / read-replica:

SELECT pgtle.install_extension (
  'standbyusercheck',
  '1.0',
  'Allow some users to login only to standby / read-replicas',
$_pgtle_$
  CREATE SCHEMA standbycheck_schema;

  REVOKE ALL ON SCHEMA standbycheck_schema FROM PUBLIC;
  GRANT USAGE ON SCHEMA standbycheck_schema TO PUBLIC;

  CREATE OR REPLACE FUNCTION standbycheck_schema.standbycheck_hook(port pgtle.clientauth_port_subset, status integer)
  RETURNS void AS $$
    DECLARE
      is_standby bool := TRUE;
    BEGIN
      IF port.user_name = 'standby_only_user' THEN
        SELECT pg_is_in_recovery()
          INTO is_standby;
        IF is_standby THEN
          RAISE NOTICE 'User allowed to login';
        ELSE
          RAISE EXCEPTION 'User can only login to Standby / Read-Replicas';
        END IF;
      END IF;
    END
  $$ LANGUAGE plpgsql SECURITY DEFINER;

  GRANT EXECUTE ON FUNCTION standbycheck_schema.standbycheck_hook TO PUBLIC;
  SELECT pgtle.register_feature('standbycheck_schema.standbycheck_hook', 'clientauth');
  REVOKE ALL ON SCHEMA standbycheck_schema FROM PUBLIC;
$_pgtle_$
);

And now that the function is defined, CREATE EXTENSION installs the function and binds it to future login attempts.

CREATE EXTENSION standbyusercheck;
SHOW pgtle.clientauth_db_name;
  4. Test Connection:
  • Attempting to connect as a privileged user (here robins) to either the primary or a read-replica should succeed.
Logging into Replica as robins
 login  | current_database | pg_is_in_recovery
--------+------------------+-------------------
 robins | prod_db          | t
(1 row)

Logging into Primary as robins
 login  | current_database | pg_is_in_recovery
--------+------------------+-------------------
 robins | prod_db          | f
(1 row)
  • However, the user standby_only_user should NOT be able to login to the primary.
Logging into Primary as standby_only_user
psql: error: connection to server at "localhost" (127.0.0.1), port 6432 failed: FATAL:  User can only login to Standby / Read-Replicas
  • The user standby_only_user should, however, be able to log in to any read-replica.
Logging into Replica as standby_only_user
       login       | current_database | pg_is_in_recovery
-------------------+------------------+-------------------
 standby_only_user | prod_db          | t
(1 row)
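
For reference, output like the above can be produced with a one-liner of this shape (the hostname and port are placeholders - substitute your own endpoints):

psql -h replica.example.com -p 5432 -U robins -d prod_db \
  -c "SELECT current_user AS login, current_database(), pg_is_in_recovery();"

Running the same command against the primary endpoint as standby_only_user should fail with the FATAL error shown above.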

Other important aspects of this feature

  • You could force clientauth for all logins by setting the parameter pgtle.enable_clientauth = require

  • You could configure some users to always be allowed to log in to either the Primary or a Read-replica in cases of emergency, by adding them to pgtle.clientauth_users_to_skip. Ideally you would want to add your admin database roles to this list.

  • Orthogonally, you could configure some databases to always allow users to skip clientauth by setting the pgtle.clientauth_databases_to_skip parameter.

  • Note that both clientauth_users_to_skip and clientauth_databases_to_skip can be utilised together. This is a good way to ensure that some set of database users (and some databases) are exempt from such a login restriction.

  • If pgtle.enable_clientauth is set to on or require and the database mentioned in pgtle.clientauth_db_name is not configured correctly, postgres would complain with the message FATAL: pgtle.enable_clientauth is set to require, but pg_tle is not installed or there are no functions registered with the clientauth feature. This is a good engine check, helping us avoid basic misconfigurations.

  • If you're anticipating connection storms, you can also increase the number of workers that enforce the login restriction by setting the pgtle.clientauth_num_parallel_workers parameter to a value greater than 1. (A quick way to review all of these settings is shown below.)
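
To review how all of these knobs are currently set on a running instance, a quick query against pg_settings does the trick (custom GUCs show up there once the pg_tle library is loaded):

SELECT name, setting
  FROM pg_settings
 WHERE name LIKE 'pgtle.%';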

Conclusion

By following the above steps, you've now successfully configured your PostgreSQL environment to restrict certain users to log in only to read-replicas. This not only helps optimize your database performance, but also bolsters security.

Let me know if you'd like to explore more advanced scenarios or discuss IAM integration for fine-grained access control!

References

  1. Clientauth Hook Documentation - https://github.com/aws/pg_tle/blob/main/docs/04_hooks.md
  2. Install pg_tle On Ubuntu - https://www.thatguyfromdelhi.com/2024/04/installing-pgtle-on-ubuntu-quick-guide.html
  3. Unlock PostgreSQL Super Powers with pg_tle - https://www.thatguyfromdelhi.com/2024/04/unlock-postgresql-superpowers-with-pgtle.html

23 Apr 2024

Installing pg_tle on Ubuntu: A Quick Guide

Compile & Install pg_tle on Postgres

PostgreSQL is a powerful database, but sometimes you want to extend its functionality with custom features, and that's where extensions like pg_tle (Trusted Language Extensions) come into play.

If you're new to pg_tle, here's a quick round-up of why it makes sense for you - See Unlock PostgreSQL Superpowers with pg_tle.

Given the power of pg_tle, you may want to install it locally (on your laptop or an EC2 instance) before deploying to environments with restricted access (such as Production, or PostgreSQL services offered by major cloud providers). This is not only helpful to thoroughly test your code, but also to save on cost given that all development can then happen on-premise.

In this blog post, we'll go through the process of compiling and installing pg_tle for postgres on your Ubuntu system.

Prerequisites

  • An operating system running Ubuntu (this guide assumes Ubuntu 20.04 or similar).
  • A PostgreSQL database server, installed and running.
  • Basic familiarity with the command line and postgresql.conf.
  • Some development tools (we'll install these as we go).

Steps

Install Build Tools and Dependencies:

Start by updating your Ubuntu package list and installing the necessary tools and libraries (this is required, since we will be compiling the pg_tle extension from source).

$ sudo apt update
$ sudo apt install build-essential make git postgresql-server-dev-all 

The postgresql-server-dev-all package version may need to be adjusted to match your specific PostgreSQL version. If this doesn't work for you, you can instead read more about setting up your Ubuntu operating system (albeit in a dated post) here - See Setup Linux for PostgreSQL development [2].

Download pg_tle Source Code:

Get the pg_tle source code from the GitHub project repository:

$ git clone https://github.com/aws/pg_tle.git

Compile pg_tle:

Compile the source code to create the extension files:

$ cd pg_tle
$ make

Although rare, make may fail if it can't find pg_config. As in the example below, a quick fix is to point make at the pg_config binary location:

$ make                                     <<===== Fails
Makefile:24: /usr/lib/postgresql/15/lib/pgxs/src/makefiles/pgxs.mk: No such file or directory
make: *** No rule to make target '/usr/lib/postgresql/15/lib/pgxs/src/makefiles/pgxs.mk'.  Stop.

$ PG_CONFIG="`type -P -a pg_config`" make  <<===== Works successfully
gcc -Wall -Wmissing-prototypes ...
.
.
. (compiling starts successfully)

Install pg_tle:

Install the compiled extension into your PostgreSQL installation. This command installs the extension-related files into the PostgreSQL directories reported by pg_config (as pointed to by PG_CONFIG):

$ sudo make install
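
If you'd like to confirm where the files landed, pg_config can point you there - the extension directory should now contain pg_tle.control among other files (a minimal check; exact paths vary by installation):

$ ls "$(pg_config --sharedir)/extension" | grep pg_tle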

Enable pg_tle in Your Database:

Connect to your PostgreSQL database using your preferred tool (e.g., psql) and run the following SQL command:

test_pgtle=# CREATE EXTENSION pg_tle;
CREATE EXTENSION

Verification:

To confirm pg_tle is installed correctly, run this SQL query:

test_pgtle=# SELECT * FROM pg_available_extensions WHERE name = 'pg_tle';
-[ RECORD 1 ]-----+-------------------------------------------
name              | pg_tle
default_version   | 1.4.0
installed_version | 1.4.0
comment           | Trusted Language Extensions for PostgreSQL

You should see a result similar to the above, where installed_version confirms the pg_tle version that's installed successfully.
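
As a quick functional smoke test, you could also install a tiny extension through pg_tle itself. Below is a minimal sketch - the hello_tle name and function are made up purely for illustration:

SELECT pgtle.install_extension(
  'hello_tle',
  '0.1',
  'Minimal smoke-test extension',
$_tle_$
  CREATE FUNCTION hello_tle() RETURNS text AS $$
    SELECT 'Hello from pg_tle!';
  $$ LANGUAGE sql;
$_tle_$
);

CREATE EXTENSION hello_tle;
SELECT hello_tle();

If the last SELECT returns the greeting, pg_tle is wired up end to end. Clean up with DROP EXTENSION hello_tle; followed by SELECT pgtle.uninstall_extension('hello_tle');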

Conclusion

You've now successfully compiled and installed the pg_tle extension on your Ubuntu system! This opens up the possibility to create and deploy custom extensions to enhance your PostgreSQL database.

pg_tle is a powerful tool that allows you to develop more advanced extensions. You can find more information and examples in the official pg_tle documentation at https://github.com/aws/pg_tle.

If you're intrigued, keep an eye out for a follow-up post where I'll show a simple example of how to use pg_tle extension for a real-world need!

References

  1. Unlock PostgreSQL Superpowers with pg_tle - https://www.thatguyfromdelhi.com/2024/04/unlock-postgresql-superpowers-with-pgtle.html
  2. Setup Linux for PostgreSQL development - https://www.thatguyfromdelhi.com/2011/12/setup-ubuntu-for-postgresql-development.html

20 Apr 2024

Unlock PostgreSQL Superpowers with pg_tle

pg_tle - A Must-Know for Developers

PostgreSQL is a fantastic database, packed with features. But sometimes, you need to add a little something extra – a custom function, a specialized data type, or maybe a procedure written in your favorite programming language. That's traditionally where PostgreSQL extensions come in. But for developers working with managed databases (from major cloud providers), while installing supported extensions is trivial, installing custom or unsupported extensions can be tricky or even impossible.

Enter pg_tle!

What exactly is pg_tle?

pg_tle stands for Trusted Language Extensions. Here's the breakdown:

  • Extension: A piece of software that adds new functionality to PostgreSQL.
  • Trusted Language: A programming language (like JavaScript, Perl, or PL/pgSQL) that the database 'trusts' due to security restrictions built into the language itself.
  • The Magic: pg_tle provides a framework to build, package, and install extensions in these trusted languages, even in environments where you can't normally touch the underlying server.

Why is pg_tle Important for Application Developers?

  1. Unlock New Possibilities: pg_tle lets you create custom database functions tailored to your application's unique logic. Say you need complex data validation or even specialized calculations – pg_tle can make it happen.

  2. Bypass Restrictions: On managed database services, you often can't install traditional extensions. pg_tle works within these constraints, allowing you to add functionality even in these environments - see Use Cases below for more.

  3. Enhanced Security: Because pg_tle uses trusted languages, your extensions have limits on what they can do. It leads to a more secure database overall.

  4. Open Source: pg_tle is open source [3] and available on GitHub (Apache 2.0 License).

Use Cases - Just A Sample Of Ideas

  • Use Unsupported Extensions: This itself is a reason big enough to try out pg_tle! [4]
  • Custom Password Strength Checks: Go beyond basic password rules with an extension.
  • Login Triggers: Build custom rules that get triggered every time a user logs into the database - not just for (the upcoming) v17 but also for older PostgreSQL versions!
  • Custom Data-Types: Build custom data-types that you could use to store data, functions and views within your database.
  • Data Transformations: Perform complex data manipulations directly within the database.

The Trade-offs

  • Complexity: Creating pg_tle extensions can be more involved than basic SQL scripting.
  • Limitations: Trusted languages still have constraints compared to the full capability of extensions developed in lower-level languages (like C).

Should You Learn More?

If any of the following resonate with you, then deeper pg_tle knowledge could be a big win:

  • You crave more flexibility in how you work with data, and you've found an extension that isn't yet natively supported in your PostgreSQL installation.
  • Your application has a few "extra tricky" calculations or data processing needs that are much easier to implement in Perl, JavaScript, or PL/pgSQL.
  • You work with managed databases and miss those power-user extensions.

Conclusion

pg_tle is a powerful tool that adds flexibility and extensibility for PostgreSQL developers. While a bit more advanced, understanding pg_tle unlocks a new level of database customization for your applications.

If you're intrigued, keep an eye out for a follow-up post where I'll show a simple example of building a pg_tle extension!

References

  1. Trusted Language Extensions for PostgreSQL on Amazon Aurora and Amazon RDS - https://aws.amazon.com/blogs/aws/new-trusted-language-extensions-for-postgresql-on-amazon-aurora-and-amazon-rds/
  2. Creating custom data-types using Trusted Language Extensions - https://aws.amazon.com/blogs/database/create-custom-postgresql-data-types-using-trusted-language-extensions/
  3. pg_tle is open source! - https://github.com/aws/pg_tle
  4. pg_tle makes using custom extensions easier - https://aws.amazon.com/blogs/opensource/supabase-makes-extensions-easier-for-developers-with-trusted-language-extensions-for-postgresql/
  5. Installing pg_tle on Ubuntu - https://www.thatguyfromdelhi.com/2024/04/installing-pgtle-on-ubuntu-quick-guide.html

21 Dec 2021

Find Database DNS / Endpoint via SQL

How to get Database identifier using SQL

Often there is a need for client programs to find out "where am I logged into?". This blog post provides a few ways to answer this question.


As more and more applications break out of the regular (database + application) architecture, this becomes increasingly important in scenarios where you're not 100% sure "where" you just logged in.

Some common scenarios where this is helpful are:

  • Proxy between application and Database
  • Load Balancer (application Load Balancers & Database Load Balancers)
  • Serverless Database Clusters
  • Auto Scaling Instances (e.g. EC2 based)
  • To confirm whether the Database server quietly changed since the last invocation (in the current session)


PSQL only - :HOST variable

When connected via psql, the HOST variable can tell you which server you're currently connected to:

postgres10@postgres=> SELECT :'HOST' db;
                            db
----------------------------------------------------------
 postgres10.xyz.ap-southeast-2.rds.amazonaws.com
(1 row)


Caveats:

  • This variable can be set during the session, so it is probably a good idea to use it only in trusted environments.
  • This would need good testing with Proxy based architectures, since the value returned is what the client 'thinks' it connected to.
  • Also note that if the client was connected via IP address, this method doesn't do a reverse DNS lookup, so what you'd get is the IP address.


→ psql -U postgres10 -h 172.31.45.165
psql (15devel, server 10.13)
Type "help" for help.

postgres10@postgres=> SELECT :'HOST' db;
      db
---------------
 172.31.45.165
(1 row)
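
A related server-side option (not psql-specific) worth noting here: the built-in functions inet_server_addr() and inet_server_port() report the address and port of the current connection as seen by the server itself, which sidesteps the client-side caveats above. Do note that they return NULL for connections over Unix-domain sockets:

SELECT inet_server_addr() AS server_ip,
       inet_server_port() AS server_port;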

PSQL only - \conninfo

Similar to the method above, another psql-only method is to get Connection Info via the \conninfo command. This is more descriptive (and so may not be well suited for some needs), but additionally doesn't work very well with SELECTs.

postgres10@postgres=> \conninfo
You are connected to database "postgres" as user "postgres10" on host "postgres10.xyz.ap-southeast-2.rds.amazonaws.com" (address "1.2.3.4") at port "54321".



RDS Only - Lambda invocation

Another way to get the Server Endpoint is to invoke a Lambda function that returns it. You can read more about how to use Lambda with RDS servers here.


Aurora Postgres only - aurora_replica_status() function

When connected to Aurora Postgres, one additional way for a client to know which of the servers in the cluster it is connected to is the Aurora function aurora_replica_status(). One additional benefit of this mechanism is that it is executed on the server, and thus returns a true picture of the servers present (and their roles) in the cluster.


apg2@pg=> select server_id from aurora_replica_status();
 server_id
-----------
 apg2b
 apg2a
(2 rows)


29 Dec 2020

Which SQL causes a Table Rewrite in Postgres?

EDIT: Updated to v17 (devel) - (Jan 2024).


While developing SQL based applications, it is commonplace to stumble on these 2 questions:

  1. What DDLs would block concurrent workload?
  2. Whether a DDL is going to rewrite the table (and in some cases may need ~ 2x disk space)?

Although completely answering Question 1 is beyond the scope of this post, one of the important pieces that helps answer both of these questions is whether a DDL is going to cause a relfilenode change.

For a brief background, each regular table in Postgres stores data in one or more files, each of which is referenced in the postgres catalog with a relfilenode. A simple way to check whether the current implementation is going to create / refer to another copy (file) is whether the relfilenode changes. (TRUNCATE is a standout here, which by design is going to purge the table data, so although the relfilenode would change here, in total it obviously wouldn't consume anywhere close to 2x disk-space)
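
A concrete way to apply this check is to compare pg_relation_filenode() before and after the DDL (a minimal sketch; the filenode values you see will differ):

CREATE TABLE rewrite_test (a int, b text);
SELECT pg_relation_filenode('rewrite_test');

-- int -> bigint forces a full table rewrite
ALTER TABLE rewrite_test ALTER COLUMN a TYPE bigint;
SELECT pg_relation_filenode('rewrite_test');  -- changed => table was rewritten

-- text -> varchar is binary-compatible, so no rewrite
ALTER TABLE rewrite_test ALTER COLUMN b TYPE varchar;
SELECT pg_relation_filenode('rewrite_test');  -- unchanged => no rewrite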

The table below shows which DDLs would cause a table rewrite. As has been discussed here, we need some more info to completely answer Question 1, however meanwhile this table helps in making some concurrency / disk-usage related decisions for all Postgres versions supported today.



29 Feb 2020

Optimizations in GROUP BY vs SELECT DISTINCT

(This came out of something I was trying out + discussing with Postgres enthusiasts - thanks to all for clarifying doubts)

This article aims at highlighting one aspect of how the query planner implementation of SELECT * GROUP BY differs from SELECT DISTINCT.

For example:

SELECT b,c,d FROM a GROUP BY b,c,d;
vs
SELECT DISTINCT b,c,d FROM a;


We see a few scenarios where Postgres optimizes by removing unnecessary columns from the GROUP BY list (if a subset is already known to be Unique) and where Postgres could do even better. To highlight this difference, here I have an empty table with 3 columns:

postgres=# create table a (b integer, c text, d bigint);
CREATE TABLE


postgres=# \d a
                 Table "public.a"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 b      | integer |           |          |
 c      | text    |           |          |
 d      | bigint  |           |          |


On this table, we can see that SELECT * GROUP BY generates the exact same plan as SELECT DISTINCT. In particular, we're interested in the "Group Key" which is the same for both SQLs:

postgres=# explain select distinct b,c,d from a;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=29.78..31.78 rows=200 width=44)
   Group Key: b, c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)

postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=29.78..31.78 rows=200 width=44)
   Group Key: b, c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)



Having said that, if the same table is created with a PRIMARY KEY, we see that GROUP BY becomes smarter, in that the "Group Key" uses the Primary Key (here it is 'b') and correctly discards columns 'c' and 'd'. Nice 😄!

postgres=# create table a (b integer PRIMARY KEY, c text, d bigint);
CREATE TABLE
postgres=# explain select distinct b,c,d from a;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=29.78..41.08 rows=1130 width=44)
   Group Key: b, c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)

postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=24.12..35.42 rows=1130 width=44)
   Group Key: b
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)


Let's check if we get the same optimization if we create a UNIQUE index on the column. The answer? Sadly No! Furthermore, I went ahead and created a NOT NULL constraint, but that didn't change anything either. (Do note that UNIQUE columns can have multiple rows with NULLs).

postgres=# create table a (b integer unique not null, c text, d bigint);
CREATE TABLE


postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=29.78..41.08 rows=1130 width=44)
   Group Key: b, c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)



Regarding the above, IIUC this is an obvious performance optimization that Postgres is still leaving on the table (as of v13+):

postgres=# select version();
                                                     version                                                     
------------------------------------------------------------------------------------------------------------------
 PostgreSQL 13devel on i686-pc-linux-gnu, compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609, 32-bit
(1 row)


Next, does it still optimize this if the PRIMARY KEY is not the first column in the GROUP BY? Answer? Yes! The engine can optimize if any of the GROUPed BY columns is a Primary Key. Noice!


postgres=# create table a (b integer, c text primary key, d bigint);
CREATE TABLE


postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=24.12..35.42 rows=1130 width=44)
   Group Key: c
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)


... and what if the PRIMARY KEY is a composite key of some of the columns in the GROUP BY column list? YES again 😄!

postgres=# create table a (b int, c text, d bigint, primary key (c,d)) ;
CREATE TABLE

postgres=# explain select b,c,d from a group by b,c,d;
                         QUERY PLAN                        
------------------------------------------------------------
 HashAggregate  (cost=26.95..28.95 rows=200 width=44)
   Group Key: c, d
   ->  Seq Scan on a  (cost=0.00..21.30 rows=1130 width=44)
(3 rows)


Lastly, although some of these "optimizations" are things-to-avoid when writing good SQL, the reality is that ORM-generated SQLs aren't that smart yet, so it's great that Postgres already implements these obvious optimizations.
