Ensuring a healthy and performant PostgreSQL service is crucial, as it directly impacts the overall response time and stability of your backend. Since Postgres is the centerpiece of your backend, prioritize its optimization and maintenance to achieve the performance and reliability you need.

If your Postgres service is not meeting your performance expectations, you can explore the following options:

  1. Consider upgrading your dedicated compute resources to provide more processing power and memory to the Postgres server.

  2. Fine-tune the configuration parameters of Postgres to optimize its performance. Adjust settings such as shared_buffers, work_mem, and effective_cache_size to better align with your workload and server resources.

  3. Identify and analyze slow-performing queries using tools like query logs or query monitoring extensions. Optimize or rewrite these queries to improve their efficiency.

  4. Evaluate the usage of indexes in your database. Identify queries that could benefit from additional indexes and strategically add them to improve query performance.

  5. If the problem is related to IOPS, consider increasing disk performance.

By implementing these steps, you can effectively address performance concerns in your Postgres service.

Upgrade to our latest postgres image

Before trying anything else, upgrade to our latest Postgres image. You can find our available images in our dashboard, under your database settings.

Upgrade to dedicated compute

Increasing CPU and memory is the simplest way to address performance issues. You can read more about compute resources here.

Fine-tune configuration parameters

When optimizing your Postgres setup, consider adjusting its configuration parameters; you can find a list of them here. Keep in mind that the optimal values will depend on factors such as available resources, workload, and data distribution.

To help you get started, you can use pgtune as a reference tool. Given information about your system, pgtune generates recommended configuration settings that can serve as a good starting point for optimization.

However, it’s important to note that the generated settings from pgtune are not guaranteed to be the best for your specific environment. It’s always recommended to review and customize the suggested settings based on your particular requirements, performance testing, and ongoing monitoring of your Postgres database.
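
As a rough illustration only, the snippet below shows how these parameters could be set with SQL on a self-managed instance with roughly 4 GB of RAM; the values are placeholders rather than recommendations, ALTER SYSTEM requires superuser privileges, and on a managed platform these settings are usually changed through the dashboard or configuration instead.

-- Illustrative values for a hypothetical 4 GB / 2 vCPU instance; tune to your
-- own resources and workload, and prefer your platform's settings UI if it has one.
ALTER SYSTEM SET shared_buffers = '1GB';        -- ~25% of RAM is a common rule of thumb
ALTER SYSTEM SET effective_cache_size = '3GB';  -- estimate of memory available for caching
ALTER SYSTEM SET work_mem = '16MB';             -- applies per sort/hash operation, so keep it modest
SELECT pg_reload_conf();                        -- reload settings; shared_buffers still needs a restart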

Identifying slow queries

Monitoring slow queries is a highly effective method for tackling performance issues. Several tools leverage pg_stat_statements, a PostgreSQL extension, to provide constant monitoring. You can employ these tools to identify and address slow queries in real-time.
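
As a minimal sketch, assuming pg_stat_statements is available in your Postgres image and listed in shared_preload_libraries, you can enable it and inspect the most expensive statements directly with SQL (the column names below are for Postgres 13 and later):

-- Enable the extension on the current database (it must already be in shared_preload_libraries).
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Ten statements with the highest total execution time.
SELECT calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;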

pghero

PgHero is one such tool you can use to identify and address slow queries. You can easily run PgHero alongside your Postgres instance with Nhost Run:

  1. First, make sure the extension pg_stat_statements is enabled (see the snippet in the previous section).

  2. Click on this one-click install link

  3. Select your project:

  4. Replace the placeholders with your Postgres password, your subdomain, and a username and password to protect your PgHero service. Finally, click on create.

  5. After confirming the service, copy the URL:

  6. Finally, you can open the link you just copied to access pghero:

When you create a new service, it can take a few minutes for the DNS (Domain Name System) to propagate. If your browser displays an error stating that it couldn’t find the server or website, simply wait for a couple of minutes and then try again.

Once PgHero is up and running, it will display slow queries, suggest indexes, and surface other valuable information. Use this data to improve your service’s performance.

Adding indexes

Indexes can significantly enhance the speed of data retrieval. However, it’s essential to be aware that they introduce additional overhead during mutations. Therefore, understanding your workload is crucial before opting to add an index.
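
For instance, if monitoring showed many slow queries filtering a large table on a single column, a typical workflow would be to add an index on that column and confirm the planner uses it; the orders table and customer_id column below are purely hypothetical.

-- Hypothetical example: many slow queries filter orders by customer_id.
-- CONCURRENTLY builds the index without blocking writes on a live database.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_id
    ON orders (customer_id);

-- Check that the planner now uses the index instead of a sequential scan.
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;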

There are tools you can use to help analyze your workload and detect missing indexes.

pghero

PgHero, in addition to helping with slow queries, can also help find missing and duplicate indexes. See the previous section on how to deploy PgHero with Nhost Run.

dexter

Dexter can leverage both pg_stat_statements and hypopg to find and evaluate indexes. You can run dexter directly from your machine:

  1. Enable hypopg (see the snippet after these steps)
  2. Run dexter against your database, reading queries from pg_stat_statements:
$ docker run --rm -it ankane/dexter [POSTGRES_CONN_STRING] --pg-stat-statements
Processing 1631 new query fingerprints
No new indexes found
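
Assuming the hypopg extension is installed in your Postgres image, enabling it for step 1 is a single statement:

-- hypopg lets the planner evaluate hypothetical indexes without actually building them.
CREATE EXTENSION IF NOT EXISTS hypopg;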