Overview
While PostgreSQL’s pg_hba.conf is the file responsible for restricting connections, when listen_addresses is set to * (wildcard), anyone can discover the open port 5432 with a scanner such as nmap and learn that a database exists, potentially exposing the server to exploits. Setting listen_addresses to a specific IP address keeps PostgreSQL from listening on unintended interfaces, closing off that avenue of attack.
Monitoring and Continuous Optimization
Use tools like pg_stat_activity and pg_stat_database to monitor connection usage and adjust the values as needed:
Allow or Deny Connections to a DB
- Temporarily prevent new connections
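A minimal way to do this (db_name is a placeholder) is to flip the database's allow-connections flag; existing sessions are not affected, only new ones are blocked:

```sql
-- Block new connections to the database
ALTER DATABASE db_name WITH ALLOW_CONNECTIONS false;

-- Re-enable them later
ALTER DATABASE db_name WITH ALLOW_CONNECTIONS true;
```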
Active Connections
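A query along these lines lists the sessions to inspect (note that in PostgreSQL 9.6+ the old boolean waiting column was replaced by wait_event_type / wait_event):

```sql
-- One row per backend; 'state' and the wait columns drive the checks below
SELECT pid, datname, usename, state, wait_event_type, wait_event, query
FROM pg_stat_activity;
```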
- Check:
- state: Should be active for working connections. If you see many connections in idle, Pentaho might not be closing them properly.
- waiting: If true, there may be locks or concurrency issues.
To check global connections, listing every database that currently has active connections, you can use:
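For example, grouping pg_stat_activity by database:

```sql
-- Databases with at least one connected backend, plus the connection count
SELECT datname, count(*) AS connections
FROM pg_stat_activity
WHERE datname IS NOT NULL
GROUP BY datname
ORDER BY connections DESC;
```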
To check if a database is connected in PostgreSQL, you can use the system view pg_stat_activity. Here is a query to check if there are active connections to a specific database:
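A sketch of such a query, assuming the database is called db_name:

```sql
-- Returns one row per session connected to db_name
SELECT pid, usename, state, query_start, query
FROM pg_stat_activity
WHERE datname = 'db_name';
```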
- Close active connections
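One common approach (db_name is a placeholder) is to call pg_terminate_backend over the sessions of that database, excluding your own backend:

```sql
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'db_name'
  AND pid <> pg_backend_pid();
```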
max_connections
Check the configured maximum limit:
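For example:

```sql
SHOW max_connections;
-- or equivalently:
SELECT setting FROM pg_settings WHERE name = 'max_connections';
```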
max_connections: Assess the maximum number of connections you actually need. Too many active connections can impact performance. Use a higher value, but combine this with a connection pooler like PgBouncer if you need thousands of connections.
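If you conclude the limit should change, one way to do it (the value below is only an example, and this parameter needs a server restart to take effect):

```sql
ALTER SYSTEM SET max_connections = 300;
-- max_connections only takes effect after a restart
```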
shared_buffers
The shared_buffers parameter defines how much RAM is reserved for PostgreSQL operations. Increasing it can improve performance, especially if you increase connections. If you are setting shared_buffers to 2GB, ensure that max_connections is adjusted accordingly. You can use this formula as a reference:
- Total RAM * 0.25 = shared_buffers
- Remaining RAM = work_mem, processes, etc.
pg_hba.conf
This is the authentication configuration file in PostgreSQL. It defines how clients can connect to the server and what authentication methods should be used. In this file, you can specify:
- Connection type (local, host, hostssl, etc.).
- Database.
- User.
- IP address (if applicable).
- Authentication method (md5, trust, scram-sha-256, etc.).
The file is usually located in PostgreSQL’s data directory and is used to control database access.
Allow local connections without encryption for the user admin in the database db_name:

```
local   db_name   admin   trust
```
If you prefer to allow access to all users and databases via local connections:
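A possible pg_hba.conf line for that (trust skips authentication entirely, so use it with care):

```
local   all   all   trust
```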
Handle many clients
If you expect to handle many clients, it is preferable to use PgBouncer with a high max_client_conn and adjust max_connections in Patroni according to the available resources.
PgBouncer can handle more clients (max_client_conn), but it can only open connections to PostgreSQL up to the limit defined by max_connections.
For example:
- If max_connections = 200 in Patroni and default_pool_size = 20 in PgBouncer, PgBouncer opens at most 20 server connections per database/user pool, and across all pools it can never exceed the 200 connections PostgreSQL allows (when serving multiple databases, that total is split among them).
- If max_client_conn = 500 in PgBouncer, it can accept up to 500 client connections, but only 200 connections will be established to PostgreSQL at the same time.
- Increase max_connections in Patroni
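With Patroni, static PostgreSQL parameters are usually changed through the DCS configuration, e.g. with patronictl edit-config. A sketch of the relevant YAML fragment (value is illustrative; the change requires a restart to apply):

```yaml
postgresql:
  parameters:
    max_connections: 300   # example value
```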
- Configure default_pool_size in PgBouncer based on needs:
Set a reasonable pool size based on the expected load. For example, if you expect 500 concurrent clients and have 200 connections available in PostgreSQL, a default_pool_size = 10 might be sufficient to distribute the load.
- Monitor connection usage:
Use views like pg_stat_activity and PgBouncer statistics to ensure you are not reaching the limits.
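For instance, comparing current backends against the configured limit:

```sql
-- How close are we to max_connections?
SELECT count(*) AS current_connections,
       current_setting('max_connections') AS max_connections
FROM pg_stat_activity;
```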
Connect to PgBouncer:
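Assuming PgBouncer listens on its default port 6432 and your user is listed in its admin_users setting, something like:

```
$ psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer
pgbouncer=# SHOW POOLS;
pgbouncer=# SHOW STATS;
```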
locks
A lock or access issue arises when multiple users or processes try to use database “x” simultaneously. This usually occurs when the database is already busy with another query or transaction, delaying or blocking new queries.
Here are a few ways to resolve or troubleshoot this:
- Check Active Queries: Use a command like pg_stat_activity in PostgreSQL to see if there are any long-running or blocking queries:
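A typical form, ordering by runtime so the longest-running sessions surface first:

```sql
SELECT pid, usename, datname, state,
       now() - query_start AS runtime,
       query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;
```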
This will show you all the active sessions and queries. Look for ones that are holding locks or running for a long time.
- Terminate Blocking Queries: If you find a query that’s taking too long or causing the block, you can terminate it (carefully) by identifying the pid of the process and then running:
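Using a hypothetical pid of 12345; cancelling the query is gentler than terminating the whole session, so try it first:

```sql
-- Cancel only the currently running query
SELECT pg_cancel_backend(12345);
-- If that is not enough, terminate the whole session
SELECT pg_terminate_backend(12345);
```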
- Locks Monitoring: You can also check for specific locks:
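For example, listing lock requests that have not been granted:

```sql
SELECT locktype, relation::regclass AS relation, pid, mode, granted
FROM pg_locks
WHERE NOT granted;
```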
This will show you locks that are waiting to be granted, which may indicate which query or user is causing the conflict.
- Transaction Management: Ensure that you are managing transactions correctly, especially if multiple users are performing write operations at the same time. Long-running transactions should be avoided if possible.
- Timeouts and Retries: If it’s a short-term contention issue, you might simply retry the query after a few moments.
- Drop connections to db:
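Again with db_name as a placeholder, excluding your own backend so you don't cut off your own session:

```sql
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'db_name'
  AND pid <> pg_backend_pid();
```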
Temporarily Suspending the Database
You can prevent users from working on a database by disallowing connections to it:
- Deny:
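Assuming the database is called db_name:

```sql
ALTER DATABASE db_name WITH ALLOW_CONNECTIONS false;
```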
- Allow:
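And to re-enable connections:

```sql
ALTER DATABASE db_name WITH ALLOW_CONNECTIONS true;
```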
Revoke privileges
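For example, removing the default CONNECT privilege from everyone (db_name is a placeholder):

```sql
REVOKE CONNECT ON DATABASE db_name FROM PUBLIC;
-- Restore it later with:
-- GRANT CONNECT ON DATABASE db_name TO PUBLIC;
```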
- Temporarily deny to user:
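One option is to disable the role's ability to log in (app_user is a placeholder):

```sql
ALTER ROLE app_user NOLOGIN;
```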
User Permissions
If the role is a user that can connect: You can use LOGIN to ensure the role has login permissions.
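For example, with a placeholder role app_user:

```sql
ALTER ROLE app_user WITH LOGIN;
```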
If you want to force the password to expire:
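PostgreSQL has no built-in "must change password at next login"; the closest mechanism is VALID UNTIL, which invalidates password authentication after the given timestamp (the role then cannot log in until an administrator assigns a new password). The date below is only an example:

```sql
ALTER ROLE app_user VALID UNTIL '2025-01-01';
```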
Transaction timeouts
- statement_timeout is expressed in milliseconds; raising it allows more time for operations to complete. For specific queries that may take longer, override this value at the session level:
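A sketch, assuming statement_timeout is the parameter in question (the value accepts time units as well as raw milliseconds):

```sql
-- Only affects the current session
SET statement_timeout = '600000';   -- 600000 ms = 10 minutes
-- equivalently: SET statement_timeout = '10min';
```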
at the global level:
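At the cluster level, for example:

```sql
ALTER SYSTEM SET statement_timeout = '10min';
SELECT pg_reload_conf();   -- apply without a restart
```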
- idle_in_transaction_session_timeout controls how long a transaction can remain open without activity before the server automatically closes it.
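A possible setting (the value is illustrative):

```sql
-- Close sessions idle inside an open transaction for more than 5 minutes
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();
```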
- Testing
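A quick way to verify the timeout fires, in a throwaway session:

```sql
SET statement_timeout = '2s';
SELECT pg_sleep(5);
-- ERROR:  canceling statement due to statement timeout
```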
- Value of 0 (zero): Disables this limit, allowing queries to run indefinitely.
Disadvantages:
- Risk of zombie transactions:
A transaction that doesn’t close properly could remain blocking resources indefinitely, consuming connections and locking tables.
- Degraded performance: Long-running queries or transactions open for too long can slow down the database, especially in busy systems.
- Increased susceptibility to deadlocks: If transactions stay open without limits, the risk of mutual blocking increases.
- Difficult troubleshooting: Without limits, problematic or poorly optimized queries may go unnoticed, making it harder to identify bottlenecks.
Troubleshooting
Use ALTER ROLE or ALTER DATABASE to customize limits by user or database:
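For example (role and database names are placeholders):

```sql
-- Per-role limit
ALTER ROLE reporting_user SET statement_timeout = '5min';
-- Per-database limit
ALTER DATABASE db_name SET idle_in_transaction_session_timeout = '10min';
```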
Consider implementing tools like pg_stat_activity or scripts to identify long-running or blocked connections:
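One such query, flagging anything that has been running for more than five minutes (the threshold is arbitrary):

```sql
SELECT pid, usename, datname, state,
       now() - query_start AS runtime,
       query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;
```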