feat(postgres): add pooling control for PG knex config #53
Conversation
A quick aside (happy to open a separate issue): I can't find any docs on how to run
```ts
/**
 * Configuration for PostgreSQL backend that supports full Knex configuration options.
 */
export type PostgresBackendConfig = Pick<Knex.Config, "pool" | "connection">;
```
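For illustration, here is a hedged sketch of what a value of this type might look like. The inline type alias stands in for the `Pick<Knex.Config, ...>` above so the snippet runs without the `knex` package installed; the field names follow Knex's documented `connection` and `pool` shapes, and the host/database values are hypothetical:

```typescript
// Stand-in for Pick<Knex.Config, "pool" | "connection"> (assumption: mirrors
// the Knex shapes closely enough for illustration).
type PostgresBackendConfig = {
  connection?:
    | string
    | { host?: string; port?: number; user?: string; password?: string; database?: string };
  pool?: { min?: number; max?: number };
};

// Hypothetical config: raise "max" so bursty enqueue traffic doesn't starve
// the pool under high throughput.
const config: PostgresBackendConfig = {
  connection: { host: "localhost", port: 5432, user: "postgres", database: "jobs" },
  pool: { min: 2, max: 20 },
};

console.log(config.pool?.max); // → 20
```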
The goal here is to limit the options we can pass to Knex. I didn't want to allow overriding everything, just the `pool` and `connection` options (which you already partially allow via the `Knex.ConnectionConfig` typing).
Feel free to combine this into the other test file. I just split it out since it was a separate flow.
Hello @plukevdh, thanks for the PR. About having a Docker Compose setup: it's a valid point. Currently we have a task
Indeed, I think we haven't described it properly.
Have not! Didn't see it, but I can try it. I ran the tasks in GH Actions in my own repo and they passed. I'll give that a try as well. Thanks!
Also, I was thinking about the connection pool. I don't think having two separate pools is necessarily a problem, but maybe we should allow a dedicated configuration specifically for the dashboard connection. Anyway, we may tackle that in another issue.
So a couple of usage comments to that end. I am currently trying to spin out three distinct components from this project given how projects like
The way this project is architected at the moment, these three components are somewhat tied together. The dashboard can run independently, but it still requires workers to be running, which is less than ideal. For the workers and queuing components, I can just
I realize my goals are maybe not the goals of this project. The upside of what you all have here is a super simple, single-runtime model that contains the queuing mechanism, the workers, and a dashboard all in one process. I'm looking at this project particularly because of the simplicity and efficiency of the data model. In high-throughput tests, Knex blows up because of connection pool starvation when trying to queue jobs above roughly 2-3k req/sec.
@plukevdh just to clarify a few things:
This isn’t in the docs yet, but we’ll make sure to add it soon.
Maybe we need to rethink those entrypoints to make it easier to understand. It all makes sense in our heads 😂 |
BTW, feel free to merge this PR :)
# [1.3.0](v1.2.0...v1.3.0) (2025-08-06)

### Features

* add pooling control for PG knex config ([#53](#53)) ([7db5d6b](7db5d6b))
🎉 This PR is included in version 1.3.0 🎉 The release is available on:

Your semantic-release bot 📦🚀
Checklist for Pull Requests
- `yarn test:all`

Summary of Changes
In very high throughput scenarios, queuing thousands of jobs a second causes pool availability issues under the default Knex pooling config. It would be very helpful to expose the pooling options of the underlying Knex config, and this PR attempts to do that.

We could optionally expose some of the other config options, but those don't concern my needs at the moment. Happy to revise if it would help.
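As a concrete illustration of the kind of override this exposes: Knex's `pool` option accepts the standard tarn.js `min`/`max` settings, so a caller could widen the pool for bursty enqueue traffic. Values and connection details below are hypothetical, not recommendations:

```typescript
// Sketch of a backend config that widens the Knex connection pool.
// "min"/"max" are standard Knex pool options (backed by tarn.js); tune "max"
// against the Postgres server's max_connections and any other pools sharing
// the same database.
const backendConfig = {
  connection: { host: "localhost", port: 5432, database: "jobs" }, // hypothetical DB
  pool: {
    min: 2,  // keep a couple of warm connections around
    max: 50, // allow more concurrency during enqueue bursts
  },
};

console.log(backendConfig.pool.max - backendConfig.pool.min); // → 48
```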