Performance Tuning

The Supabase platform automatically optimizes your Postgres database to take advantage of the compute resources of the plan your project is on. However, these optimizations are based on assumptions about the type of workload the project is being used for, and it is likely that better results can be obtained by tuning the database for your particular workload.

Examining Query Performance#

Unoptimized queries are a major cause of poor database performance. The techniques on this page can help you identify and understand queries that take the most time and resources from your database.

Database performance is a large topic and many factors can contribute. Some of the most common causes of poor performance include:

  • An inefficiently designed schema
  • Inefficiently designed queries
  • A lack of indexes causing slower than required queries over large tables
  • Unused indexes causing slow INSERT, UPDATE and DELETE operations (see the example query below)
  • Not enough compute resources, such as memory, causing your database to go to disk for results too often
  • Lock contention from multiple queries operating on highly utilized tables
  • A large amount of bloat on your tables causing poor query planning

Thankfully there are solutions to all these issues, which we will cover in the following sections.
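
As a quick illustration of the unused-index case above, a query along the following lines lists indexes that have never been scanned since statistics were last reset; these are candidates for review, since they add write overhead without speeding up reads. This is a minimal sketch against the standard pg_stat_user_indexes view, and ordering by size is only a suggestion.

select
  schemaname,
  relname as table_name,
  indexrelname as index_name,
  idx_scan,
  pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from pg_stat_user_indexes
where idx_scan = 0 -- never used since statistics were last reset
order by pg_relation_size(indexrelid) desc;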

Postgres Cumulative Statistics system#

Postgres collects data about its own operations using the cumulative statistics system. In addition to this, every Supabase project has the pg_stat_statements extension enabled by default. This extension records query execution performance details and is the best way to find inefficient queries. This information can be combined with the Postgres query plan analyzer to develop more efficient queries.
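
Since pg_stat_statements accumulates data over time, it can be useful to confirm the extension is active and, if you want a fresh measurement window, reset its counters. A small sketch (note that resetting discards the statistics collected so far):

-- confirm the extension is enabled
select extname, extversion from pg_extension where extname = 'pg_stat_statements';

-- optionally clear the collected statistics to start a fresh measurement window
select pg_stat_statements_reset();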

Here are some example queries to get you started.

Most frequently called queries:


select
  auth.rolname,
  statements.query,
  statements.calls,
  -- Postgres 13, 14, 15
  statements.total_exec_time + statements.total_plan_time as total_time,
  statements.min_exec_time + statements.min_plan_time as min_time,
  statements.max_exec_time + statements.max_plan_time as max_time,
  statements.mean_exec_time + statements.mean_plan_time as mean_time,
  -- Postgres <= 12
  -- total_time,
  -- min_time,
  -- max_time,
  -- mean_time,
  statements.rows / statements.calls as avg_rows
from
  pg_stat_statements as statements
  inner join pg_authid as auth on statements.userid = auth.oid
order by statements.calls desc
limit 100;

This query shows:

  • query statistics, ordered by the number of times each query has been executed
  • the role that ran the query
  • the number of times it has been called
  • the average number of rows returned
  • the cumulative total time the query has spent running
  • the min, max and mean query times.

This provides useful information about the queries you run most frequently. Queries with high max_time or mean_time values that are called often can be good candidates for optimization.

Slowest queries by execution time:


select
  auth.rolname,
  statements.query,
  statements.calls,
  -- Postgres 13, 14, 15
  statements.total_exec_time + statements.total_plan_time as total_time,
  statements.min_exec_time + statements.min_plan_time as min_time,
  statements.max_exec_time + statements.max_plan_time as max_time,
  statements.mean_exec_time + statements.mean_plan_time as mean_time,
  -- Postgres <= 12
  -- total_time,
  -- min_time,
  -- max_time,
  -- mean_time,
  statements.rows / statements.calls as avg_rows
from
  pg_stat_statements as statements
  inner join pg_authid as auth on statements.userid = auth.oid
order by max_time desc
limit 100;

This query shows statistics about queries ordered by their maximum execution time. It is similar to the previous query, which is ordered by the number of calls, but this one highlights outliers with especially high execution times. Queries with high max or mean execution times are good candidates for optimization.

Most time-consuming queries:


select
  auth.rolname,
  statements.query,
  statements.calls,
  statements.total_exec_time + statements.total_plan_time as total_time,
  to_char(
    (
      (statements.total_exec_time + statements.total_plan_time) / sum(
        statements.total_exec_time + statements.total_plan_time
      ) over ()
    ) * 100,
    'FM90D0'
  ) || '%' as prop_total_time
from
  pg_stat_statements as statements
  inner join pg_authid as auth on statements.userid = auth.oid
order by total_time desc
limit 100;

This query shows statistics about queries ordered by cumulative total execution time. It reports the total time each query has spent running as well as the proportion of overall execution time it has taken up.

The most time-consuming queries are not necessarily bad: a very efficient but frequently run query can still account for a large share of the total time. Even so, this view is useful for spotting queries that are taking up more time than they should.

Hit rate#

For most applications, a small percentage of data is accessed far more regularly than the rest. To keep that regularly accessed data readily available, Postgres tracks your data access patterns and keeps frequently used data in its shared_buffers cache.

Applications with lower cache hit rates generally perform worse, since they have to read from disk for results rather than serving them from memory. Very poor hit rates can also cause you to burst past your Disk IO limits, causing significant performance issues.

You can view your cache and index hit rate by executing the following query:


select
  'index hit rate' as name,
  (sum(idx_blks_hit)) / nullif(sum(idx_blks_hit + idx_blks_read), 0) * 100 as ratio
from pg_statio_user_indexes
union all
select
  'table hit rate' as name,
  sum(heap_blks_hit) / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) * 100 as ratio
from pg_statio_user_tables;

This shows the ratio of data blocks fetched from the Postgres shared_buffers cache against the data blocks that were read from disk/OS cache.

If either your index or table hit rate is below 99%, this can indicate that your compute plan is too small for your current workload and that you would benefit from more memory. Upgrading your compute is easy and can be done from your project dashboard.

Optimizing poorly performing queries#

Postgres has built-in tooling to help you optimize poorly performing queries. You can use the query plan analyzer on any expensive queries that you have identified:


explain analyze <query-statement-here>;

When you include analyze in the explain statement, the database attempts to execute the query and provides a detailed query plan along with actual execution times. So, be careful using explain analyze with insert/update/delete queries, because the query will actually run, and could have unintended side-effects.
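
If you do need real timings for a data-modifying statement, one common way to limit the side effects is to wrap it in a transaction and roll the transaction back afterwards. A minimal sketch (the table and update here are hypothetical):

begin;
-- the statement actually executes, so the timings are real (hypothetical table and column)
explain analyze update profiles set status = 'active' where id = 1;
-- rolling back discards the data change
rollback;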

If you run just explain without the analyze keyword, the database will only perform query planning without actually executing the query. This approach can be beneficial when you want to inspect the query plan without affecting the database or if you encounter timeouts in your queries.

Using the query plan analyzer to optimize your queries is a large topic, and a number of online resources cover it in depth.

You can pair the information available from pg_stat_statements with the detailed system metrics available via your metrics endpoint to better understand the behavior of your DB and the queries you're executing against it.

Optimizing the number of connections#

By default, the number of connections allowed to Postgres and PgBouncer is configured based on the resources available to the database.

Compute Add-on    Postgres connections    PgBouncer connections
None              60                      200
Small             90                      200
Medium            120                     200
Large             160                     300
XL                240                     700
2XL               380                     1500
4XL               480                     3000
8XL               490                     6000
12XL              500                     9000
16XL              500                     12000

If the number of connections is insufficient, you will receive the following error upon connecting to the DB:


$ psql -U postgres -h ...
FATAL: remaining connection slots are reserved for non-replication superuser connections

In such a scenario, you can consider:

Configuring clients to use fewer connections#

You can use the pg_stat_activity view to debug which clients are holding open connections on your DB. pg_stat_activity only exposes information on direct connections to the database. Information on the number of connections to PgBouncer is available via the metrics endpoint.
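
For example, a query along these lines groups direct connections by role, application, and state, which can help identify the clients holding the most connections open. This is a sketch; adjust the grouping columns to whatever is most useful for your setup.

select
  usename,
  application_name,
  state,
  count(*) as connection_count
from pg_stat_activity
group by usename, application_name, state
order by connection_count desc;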

Depending on the clients involved, you might be able to configure them to work with fewer connections (e.g. by imposing a limit on the maximum number of connections they're allowed to use), or shift specific workloads to connect via PgBouncer instead. Transient workloads, which can quickly scale up and down in response to traffic (e.g. serverless functions), can especially benefit from using a connection pooler rather than connecting to the DB directly.

Allowing a higher number of connections#

You can configure Postgres by executing the following statement, followed by a server restart:


alter system set max_connections = '<val-here>';

Note that the default configuration used by the Supabase platform optimizes the database to maximize resource utilization. As a result, you might also need to configure other options (e.g. work_mem, shared_buffers, maintenance_work_mem) to tune them towards your use case and avoid causing instability in your database.
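
Before and after changing these settings, it can be helpful to check their current values and where they come from. A small sketch using the pg_settings view:

select name, setting, unit, source
from pg_settings
where name in ('max_connections', 'work_mem', 'shared_buffers', 'maintenance_work_mem');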

Once overridden, the Supabase platform will continue to respect your manually configured value (even if the add-on size is changed), unless the override is removed with the following statements, followed by a server restart:


alter system reset max_connections;
alter system reset <other-overridden-conf>;
...

Configuring the number of PgBouncer connections is not supported at this time.