Why we built Postgres Monitor

January 16th, 2023

PostgreSQL is widely regarded as one of the best open-source relational databases. It's used for high-throughput web and mobile apps as well as a diverse array of analytics use cases. Its scalability, reliability, and extensibility set it apart from the competition, making it the "most loved" and "most wanted" database in Stack Overflow's 2022 Developer Survey.

As PostgreSQL continues to grow in popularity, more and more cloud platforms are offering hosted Postgres solutions. There's never been a better time to jump into Postgres, and it's never been more important to have robust tools for monitoring its health and performance. While many of these cloud platforms (such as AWS, GCP, and Heroku) do provide high-level dashboards for monitoring Postgres databases, they tend to be limited add-ons that don't offer the tools or visibility needed to troubleshoot real-world issues. Overlooking this can be a costly mistake.

Why monitor your database?

Databases are often the single most important component in an application's architecture, so it sometimes puzzles me why first-class monitoring isn't more frequently prioritized. Performant databases are critical for ensuring that end users have a great experience on your app. Users expect apps to load quickly, work correctly, and keep track of state changes (ex. items they add to their cart should stay in their cart). Databases often play a crucial role with each of these expectations. If your database is the heart of your business, doesn't it seem useful to install a heart monitor?

If you don't have monitoring set up, your users are at risk of having a bad experience without your knowledge. The only thing worse than a user having a bad experience is not even being aware of it. You can't fix what you can't see. This probably isn't the first time you've heard the oft-cited finding that Amazon loses 1% of sales for every 100ms of added latency. Speed can matter just as much as functionality.

In addition to making performance bottlenecks easy to identify, an effective monitoring solution will proactively detect issues that could lead to database downtime. Warning signs are almost always detectable before an outage; wouldn't it be nice to receive an alert before an issue reaches production?

Finally, the right database monitoring solution can often save you money by identifying opportunities to reclaim disk space and cut infrastructure costs.

Introducing Postgres Monitor

Our team has spent much of 2022 building a new monitoring product from the ground up, specifically for Postgres. We aim to provide better insights into Postgres for each of the top cloud platforms, starting with Heroku (support for AWS RDS & Aurora is coming up next).

Postgres Monitor provides in-depth dashboards covering critical top-line health metrics such as load average, IOPS, memory, and disk usage. We also offer detailed dashboards for many other important metrics, including connections, PgBouncer, the database cache, tables and indexes, replication, and more.
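Many of these metrics can also be spot-checked by hand in psql. As a rough sketch of what the connections dashboard surfaces (a manual equivalent, not how Postgres Monitor itself works), you could count sessions by state:

```sql
-- Count connections by state, e.g. to spot idle-in-transaction buildup.
-- A one-off spot check; a monitoring tool tracks this continuously.
SELECT state, count(*) AS connections
FROM pg_stat_activity
GROUP BY state
ORDER BY connections DESC;
```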

Additionally, Postgres Monitor provides query tracking with performance stats and automatic EXPLAIN plans for your slowest queries. Query performance statistics are tracked from pg_stat_statements and offer a view into how your queries are performing over time. Stats include mean execution time, call counts, max and total time, as well as block and cache level metrics.
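To give a sense of the underlying data, here's roughly the kind of query you could run yourself against pg_stat_statements (a sketch using the Postgres 13+ column names; earlier versions use mean_time / total_time instead):

```sql
-- Slowest queries by mean execution time, with call counts
-- and block/cache-level stats (Postgres 13+ column names).
SELECT query,
       calls,
       mean_exec_time,
       max_exec_time,
       total_exec_time,
       shared_blks_hit,
       shared_blks_read
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```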

Crucially, we built Postgres Monitor to be accessible to smaller teams that might not have a dedicated database administrator. Databases are complicated, to say the least, and we've found that developers don't always know what to do with the data dumps that monitoring products throw at them. We want Postgres Monitor to be actionable. That's where our dynamic recommendations come in. As our system continuously monitors your database, we detect performance and cost-saving opportunities. For example, we let you know when we find unused, invalid, or redundant indexes, when one of your tables has a poor cache hit ratio, or when a table is seeing too many sequential scans.
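To illustrate the general idea behind checks like these (a sketch of the approach, not our exact implementation), Postgres's statistics views expose the raw signals. For instance, never-scanned indexes and per-table cache hit ratios:

```sql
-- Indexes that have never been scanned: candidates for removal,
-- since they cost disk space and write overhead but serve no reads.
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0;

-- Per-table cache hit ratio; consistently low values suggest the
-- working set doesn't fit in shared_buffers.
SELECT relname,
       heap_blks_hit::float / nullif(heap_blks_hit + heap_blks_read, 0)
         AS cache_hit_ratio
FROM pg_statio_user_tables
ORDER BY cache_hit_ratio NULLS FIRST;
```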

These are just some of the useful features that we're initially launching with Postgres Monitor that can help you improve the performance and reliability of your database.

What's next?

We have a lot more planned, starting with configurable alerts. Soon you'll be able to set granular custom metric alerts on specific servers, tables, or indexes and have them delivered via email, Slack, or PagerDuty. More dynamic recommendations are coming soon, as well as support for hosted Postgres on AWS (RDS & Aurora).

We're excited about the insights and improvements Postgres Monitor can give you. By using a robust monitoring tool, you can trust that your database is healthy and reliable, allowing you to focus on what really matters for your users.

Ready to see it for yourself? Check out our interactive demo!
