
This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →

Using statement_cache_size asyncpg setting / prepared statement name for asyncpg w pgbouncer #6467


Closed
dyens opened this issue May 11, 2021 · 56 comments
Labels
asyncio · postgresql · "PRs (with tests!) welcome" (a fix or feature which is appropriate to be implemented by volunteers) · "use case" (not really a feature or a bug; can be support for new DB features or user use cases not anticipated)
Comments

@dyens commented May 11, 2021

Hi!

I use sqlalchemy 1.4 with asyncpg driver with pgbouncer.

    from sqlalchemy.ext.asyncio import create_async_engine
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.ext.asyncio import AsyncSession

    engine = create_async_engine(
        f'postgresql+asyncpg://{username}:{password}@{host}:{port}/{dbname}',
        echo=False,
    )
    session_maker = sessionmaker(
        engine,
        class_=AsyncSession,
    )

I have an error:

asyncpg.exceptions.DuplicatePreparedStatementError: prepared statement "__asyncpg_stmt_a__" already exists
HINT:
NOTE: pgbouncer with pool_mode set to "transaction" or
"statement" does not support prepared statements properly.
You have two options:

* if you are using pgbouncer for connection pooling to a
  single server, switch to the connection pool functionality
  provided by asyncpg, it is a much better option for this
  purpose;

* if you have no option of avoiding the use of pgbouncer,
  then you can set statement_cache_size to 0 when creating
  the asyncpg connection object.

How can I pass this setting (statement_cache_size=0) to the asyncpg connection object?

@dyens added the "requires triage" (New issue that requires categorization) label May 11, 2021
@zzzeek added the "asyncio" and "postgresql" labels and removed the "requires triage" (New issue that requires categorization) label May 11, 2021
@zzzeek (Member) commented May 11, 2021

SQLAlchemy doesn't make use of "statement_cache_size", as it necessarily uses the asyncpg.connection.prepare() method directly, so there is no way to disable the use of prepared statements across the board. However, you can disable the prepared statement caching itself, so that a new prepared statement is made each time, using prepared_statement_cache_size=0. If you can try that and see if it works, we can mention this in the docs:

https://docs.sqlalchemy.org/en/14/dialects/postgresql.html#prepared-statement-cache

@dyens (Author) commented May 11, 2021

Thank you, @zzzeek

In my case it does not work.

I found this in:

https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/asyncpg.py#L747

                await_only(self.asyncpg.connect(*arg, **kw))

In **kw we do not pass statement_cache_size.

If I add:

if 'prepared_statement_cache_size' in kw:
    kw['statement_cache_size'] = kw['prepared_statement_cache_size']

It helps sometimes, but sometimes it does not (I don't know why...).

@zzzeek (Member) commented May 11, 2021

hi -

did you try direct usage of the prepared_statement_cache_size parameter as given:

engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=0")

? If that doesn't work, we are out of luck: SQLAlchemy is forced to use connection.prepare(), which means we have to use prepared statements.

@dyens (Author) commented May 11, 2021

Yes, I use this:

engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=0")

@zzzeek (Member) commented May 11, 2021

all of our SELECT statements have to use connection.prepare(), because we need to be able to call get_attributes(). If asyncpg could be convinced to give us this API without necessitating the use of connection.prepare(), we could begin to think about how to support that.

@zzzeek (Member) commented May 11, 2021

the doc at https://magicstack.github.io/asyncpg/current/api/index.html?highlight=statement_cache_size doesn't claim this disables prepared statements, just that it doesn't cache them:

statement_cache_size (int) – The size of prepared statement LRU cache. Pass 0 to disable the cache.

Oh, perhaps this is needed for when we do INSERT/UPDATE/DELETE, OK... the patch you tried (kw['statement_cache_size'] = kw['prepared_statement_cache_size']) should do that.

@zzzeek (Member) commented May 11, 2021

that is, if you use prepared_statement_cache_size=0 and add code like this:

diff --git a/lib/sqlalchemy/dialects/postgresql/asyncpg.py b/lib/sqlalchemy/dialects/postgresql/asyncpg.py
index 4a191cd286..415862a17b 100644
--- a/lib/sqlalchemy/dialects/postgresql/asyncpg.py
+++ b/lib/sqlalchemy/dialects/postgresql/asyncpg.py
@@ -735,6 +735,7 @@ class AsyncAdapt_asyncpg_dbapi:
         prepared_statement_cache_size = kw.pop(
             "prepared_statement_cache_size", 100
         )
+        kw["statement_cache_size"] = prepared_statement_cache_size
         if util.asbool(async_fallback):
             return AsyncAdaptFallback_asyncpg_connection(
                 self,

that's the best we can do. If there are still problems, then we need support from asyncpg.

@dyens (Author) commented May 11, 2021

I tried this, but for some reason it does not help (sometimes the error appears, sometimes it doesn't...).

@zzzeek (Member) commented May 11, 2021

this might be related to the fact that we still use connection.prepare(). I don't know the internals of asyncpg well enough to advise further on what might be going on.

@dyens (Author) commented May 11, 2021

Yes, this is the reason.

Also, in AsyncAdapt_asyncpg_cursor._prepare_and_execute we actively use prepared statements for executing queries.

In asyncpg, as far as I can see, non-prepared queries do not use named prepared statements:

# Pseudo-code of what asyncpg does internally when statement_cache_size = 0:

stmt = Connection._get_statement(
    self,
    query,
    named=False,  # named=True is used for the connection.prepare() call
)
# Deep inside this function, when named=False and statement_cache_size == 0,
# asyncpg uses an empty name for the prepared statement.

result = Connection._protocol.bind_execute(stmt, args, ...)  # slightly simplified, but this is the essence

@zzzeek added the "question" (issue where a "fix" on the SQLAlchemy side is unlikely, hence more of a usage question) label May 12, 2021
@SlavaSkvortsov commented Jun 21, 2021

We had a similar problem due to multiple web workers: they generated prepared statements with the same names. The original function to generate the IDs looks like this:

    def _get_unique_id(self, prefix):
        global _uid
        _uid += 1
        return '__asyncpg_{}_{:x}__'.format(prefix, _uid)
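
To see why this collides behind pgbouncer, here is a self-contained sketch (not asyncpg's actual code; the two dicts stand in for two freshly started worker processes): both counters start at zero, so both workers hand out the very same statement name, which pgbouncer may route onto the same server session.

```python
def make_worker():
    # each web worker process starts with its own fresh module state
    return {"_uid": 0}

def get_unique_id(worker, prefix):
    # mirrors asyncpg's counter-based naming scheme
    worker["_uid"] += 1
    return "__asyncpg_{}_{:x}__".format(prefix, worker["_uid"])

worker_a, worker_b = make_worker(), make_worker()
name_a = get_unique_id(worker_a, "stmt")
name_b = get_unique_id(worker_b, "stmt")

# both workers produce "__asyncpg_stmt_1__": a collision waiting to
# happen once their queries share one pooled server session
assert name_a == name_b
```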

So we just changed the Connection class a bit:

from uuid import uuid4

from asyncpg import Connection


class CConnection(Connection):
    def _get_unique_id(self, prefix: str) -> str:
        return f'__asyncpg_{prefix}_{uuid4()}__'

You need to provide it when you create the engine:

engine = create_async_engine(
    settings.database_url,
    connect_args={
        'connection_class': CConnection,
    },
)

@CaselIT (Member) commented Jun 21, 2021

@SlavaSkvortsov that seems to be something we cannot change on our end, other than subclassing the connection class to override a private method, as in your example.

You may want to open an issue on the asyncpg repo for an out-of-the-box solution. IMHO _get_unique_id should at least take the process pid (and/or thread id) into consideration. The best option would be the ability to customize it, maybe via a connect kwarg, or the ability to specify a name when calling prepare, since there is currently no way of manually assigning a name to a statement.

@forshev commented Sep 15, 2021

engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=0")

Should this configuration pass the prepared statement cache size parameter to the asyncpg connection?
With SQLAlchemy 1.4.22 it doesn't.

So I managed to set the cache size on the asyncpg side like this:

engine = create_async_engine(
    "postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=0",
    poolclass=NullPool,
    future=True,
    connect_args={'statement_cache_size': 0},
)

Note the connect_args={'statement_cache_size': 0} parameter.

And in combination with @SlavaSkvortsov's unique-ID suggestion, it seems I got rid of the prepared statement errors with pgbouncer.

@rslinckx

When using asyncpg alone, you can use statement_cache_size=0 and it won't use prepared statements at all, thus working behind pgbouncer in transaction mode.

My understanding is that sqlalchemy/asyncpg will use prepared statements no matter what the prepared_statement_cache_size setting is, meaning prepared statements get created in any case. The unique-ID workaround only hides the problem: you will create a lot of prepared statements in your open sessions to the database (each with a unique name), and each transaction will get a random session including some subset of those statements. Since you are disabling the cache, it won't be a real issue, as it will keep creating new ones, and they will never conflict because the names are random; but they are still created and still there. I'm not sure what the mechanism to delete them is; maybe they just get deleted whenever the backend session is closed, once in a while?
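
If you want to check whether these uniquely named statements actually pile up, you can inspect the pg_prepared_statements system view, which lists the statements prepared in the current backend session (both SQL-level and protocol-level ones). A sketch, with a placeholder DSN; note it needs a direct connection to Postgres, since a connection through pgbouncer may land on a different backend each time:

```python
import asyncio
import asyncpg

async def main():
    # connect directly to Postgres, bypassing pgbouncer
    conn = await asyncpg.connect("postgresql://scott:tiger@localhost/test")
    rows = await conn.fetch(
        "SELECT name, statement, prepare_time FROM pg_prepared_statements"
    )
    for row in rows:
        print(row["name"], row["prepare_time"])
    await conn.close()

asyncio.run(main())
```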

As for why sqlalchemy needs to use prepared statements in any case, I have no idea.

@CaselIT (Member) commented Sep 15, 2021

As for why sqlalchemy needs to use prepared statements in any case, I have no idea.

asyncpg has no other way of returning the type information of the selected columns when using a normal query, so sqlalchemy needs to use prepared statements:

if cache is None:
    prepared_stmt = await self._connection.prepare(operation)
    attributes = prepared_stmt.get_attributes()
    return prepared_stmt, attributes

and

if attributes:
    self.description = [
        (
            attr.name,
            attr.type.oid,
            None,
            None,
            None,
            None,
            None,
        )
        for attr in attributes
    ]

@zzzeek (Member) commented Sep 15, 2021

As for why sqlalchemy needs to use prepared statements in any case, I have no idea.

#6467 (comment)

@zzzeek (Member) commented Sep 15, 2021

assuming we keep using prepared statements: there's a bunch of people on this issue now; can we come up with a way to fix what's wrong here and get this closed? Thanks.

@zzzeek changed the title from "Using statement_cache_size asyncpg setting" to "Using statement_cache_size asyncpg setting / prepared statement name for asyncpg w pgbouncer" Nov 16, 2021
@zzzeek (Member) commented Nov 16, 2021

MagicStack/asyncpg#837 is closed via MagicStack/asyncpg#846. We will want to expose this and then add a documentation section noting that this is a means of using our asyncpg driver with pgbouncer, along with a recipe based on UUIDs or similar.

@zzzeek added this to the 1.4.x milestone Nov 16, 2021
@zzzeek added the "use case" (not really a feature or a bug; can be support for new DB features or user use cases not anticipated) label and removed the "question" (issue where a "fix" on the SQLAlchemy side is unlikely, hence more of a usage question) label Nov 16, 2021
@jacksund commented Jul 17, 2022

Is there any update on how to use sqlalchemy+asyncpg+pgbouncer? I'm new to sqlalchemy's async engine and stumbled onto this issue while trying out Prefect's v2 beta. Their package uses create_async_engine under the hood, but even if I modified their implementation, I don't think they would accept the strategy used by @SlavaSkvortsov. The only other suggestion I've seen is setting pgbouncer's pool_mode to session (according to asyncpg's FAQ), but I didn't have any luck with it.

EDIT: I should add: session mode didn't work for me because I have many long-lived clients connecting, so I depend on a pool in transaction mode.

@zzzeek (Member) commented Jul 18, 2022

Yes, something like that. But I don't think it should be necessary: once the transaction is released back to pgbouncer, that prepared statement is useless anyway, so just DEALLOCATE them all; at the start of a transaction is best. Then, we also can't rely on any kind of caching of these prepared statements, because with transactional mode they are similarly lost to us every transaction, hence the prepared statement cache needs to be zero. Those two steps should solve the problem. If not, then I don't understand what's going on.

@zzzeek (Member) commented Jul 18, 2022

We could probably support statement mode as well if the engine is run with autocommit.

@zzzeek (Member) commented Jul 18, 2022

Confirmed that prepared_statement_cache_size works when set to zero; test case:

import asyncio

from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine


async def async_main():
    engine = create_async_engine(
        "postgresql+asyncpg://scott:tiger@localhost/test",
        echo=True,

        # when commenting this out, we get
        # "prepared statement "__asyncpg_stmt_7__" does not exist"
        # setting the cache to zero allows the below statement to invoke
        connect_args=dict(prepared_statement_cache_size=0)
    )

    for i in range(3):
        async with engine.begin() as conn:
            await conn.execute(text("select 1"))
            await conn.exec_driver_sql("DEALLOCATE ALL")

asyncio.run(async_main())

@vamshiaruru-virgodesigns

Just want to chime in and say that setting prepared_statement_cache_size to zero in create_async_engine doesn't help with the "prepared statement already exists" error. The solution from comment #6467 (comment) works.

@zzzeek (Member) commented Aug 18, 2022

@vamshiaruru-virgodesigns can you confirm some points for me?

The solution of just naming them all randomly seems very much like it would fill up the PostgreSQL session with thousands of unused prepared statements which we would assume uses memory.

@vamshiaruru-virgodesigns

@zzzeek, I'm using transaction mode in pgbouncer, and just setting prepared_statement_cache_size or statement_cache_size to zero still doesn't work (I keep getting the "prepared statement already exists" error). Only after adding the custom connection class was I able to get SQLAlchemy working with pgbouncer.
I hadn't noticed this block:

    @event.listens_for(engine.sync_engine, "begin")
    def clear_prepared_statements_on_begin(conn):
        conn.exec_driver_sql("DEALLOCATE ALL")

I can quickly try that and see if it works without the custom connection class.

@zzzeek (Member) commented Aug 18, 2022

thanks for the quick reply. If you keep working with the other approach, can you check on the memory use of your postgresql workers, if you are doing things at scale? It intuitively seems like it would fill up memory, but I don't really know the specifics.

@vamshiaruru-virgodesigns

Update, with this code

engine = create_async_engine(
    settings.database_dsn,
    pool_size=settings.sqlalchemy_pool_size,
    pool_pre_ping=True,
    connect_args={
        "statement_cache_size": 0,
        "prepared_statement_cache_size": 0,
        # "connection_class": CConnection,
    },
)

@event.listens_for(engine.sync_engine, "begin")
def clear_prepared_statements_on_begin(conn):
    conn.exec_driver_sql("DEALLOCATE ALL")

It works, but I am not sure how expensive it is to run DEALLOCATE ALL at every begin.

A question I have is this: the asyncpg documentation says setting statement_cache_size to 0 disables prepared queries (doc link: https://magicstack.github.io/asyncpg/current/faq.html?highlight=statement_cache_size), yet just setting that connection argument without your DEALLOCATE ALL handler doesn't work. I am curious why that could be happening.

As for your memory query: I implemented the change to use the custom connection class very recently, and in that time our memory usage hasn't increased. We usually have ~1.7k db connections on our db, but we'll have to keep monitoring it, as it's been only a few hours since I implemented it. Logically speaking, though, postgres automatically deallocates a prepared statement when the session ends, so regardless of how we name the prepared statement, those would be released, right? I could be wrong here since I don't know asyncpg internals, but it feels like it shouldn't make a difference how the query name is constructed.

Here's how asyncpg constructs the name:

def _get_unique_id(self, prefix):
    global _uid
    _uid += 1
    return '__asyncpg_{}_{:x}__'.format(prefix, _uid)

I don't think they are keeping track of this _uid anywhere; it is just their way of making a unique name, maybe?

@zzzeek (Member) commented Aug 18, 2022

Update, with this code

engine = create_async_engine(
    settings.database_dsn,
    pool_size=settings.sqlalchemy_pool_size,
    pool_pre_ping=True,
    connect_args={
        "statement_cache_size": 0,
        "prepared_statement_cache_size": 0,
        # "connection_class": CConnection,
    },
)

@event.listens_for(engine.sync_engine, "begin")
def clear_prepared_statements_on_begin(conn):
    conn.exec_driver_sql("DEALLOCATE ALL")

It works. But I am not sure how expensive it is to run DEALLOCATE ALL at every begin.

pgbouncer itself, when using session mode (which I still think is a much better idea here), uses DISCARD ALL by default, which is more "heavy", although it does this after connection release, so I suppose this is something it can do in the background on its end.

A question I have is this. AsyncPG documentation says setting statement_cache_size to 0 disables prepared queries (doc link is here https://magicstack.github.io/asyncpg/current/faq.html?highlight=statement_cache_size ),

it actually gives a hint:

(and, obviously, avoid the use of Connection.prepare());

we are using connection.prepare() for all statements. We have no choice in that regard due to limitations in asyncpg's API.

As for your memory query, I very recently implemented the change to use custom Connector, and in that time our memory usage hasn't increased. We usually have ~1.7k db connections on our db. But we'll have to keep monitoring it as its been only few hours since I've implemented it. But logically speaking, postgres automatically deallocates a prepared statement when session ends, so regardless of how we name the prepared statement, those would be released, right?

no, because PgBouncer is pooling the connections, hence those sessions are still open. Per the docs, the server reset query isn't used in transaction mode. There is also server_reset_query_always, which as they advertise is for "broken" apps. From my POV, using PgBouncer transaction mode with asyncpg's prepared statements is "broken", I would agree. I don't see the wisdom at all in using a different connection for every transaction, when SQLAlchemy already has a clear system of distinguishing the scope of a "connection" from that of a "transaction", and that should not be worked around.

Here's how asyncpg constructs the name:

def _get_unique_id(self, prefix):
    global _uid
    _uid += 1
    return '__asyncpg_{}_{:x}__'.format(prefix, _uid)

I don't think they are keeping track of this _uid anywhere; it is just their way of making a unique name, maybe?

they don't need to make it globally unique, because asyncpg assumes its connection corresponds to a single PostgreSQL session that did not exist beforehand.

@zzzeek (Member) commented Aug 18, 2022

I mean performance-wise: the prepared_statement_cache_size=0 setting definitely degrades performance very measurably.

I really think someone should run some benchmarks here with pgbouncer session vs. transaction mode; I think the former will perform better in every way. Transaction mode seems like it is intended for applications that are mis-designed to hold onto single connections for long idle spans, or something (Edit: yes, like when you still want to have client-side pooling in place also. OK).

@zzzeek (Member) commented Aug 18, 2022

I'm actually going to document this. I see zero advantage at all to transaction pooling mode (Edit: OK, it can reduce latency on connect by keeping QueuePool in place).

@vamshiaruru-virgodesigns

Thanks for your explanations. In our setup we have a Django application connecting to the same DB, and I have read that session pooling with Django is not very performant compared to transaction-level pooling. But I haven't personally run any benchmarks on it.

@zzzeek (Member) commented Aug 18, 2022

OK yes, thinking some more: using transaction mode, you can keep using QueuePool, and that will save you the connectivity overhead as well, if PgBouncer is able to keep many client connections persistently linked to a smaller set of PG server connections. OK, I get it then.

From all of this, it seems like the best approach would be to use server_reset_query_always; that way the server reset query is kept out of the client and is done on the server after the connection has been released.

@mimre25 commented Feb 5, 2023

I wanted to chime in here, as I recently faced this problem and solved an edge case that wasn't discussed above.

While the solutions from #6467 (comment) and #6467 (comment) work in most scenarios, there is a further problem if one wants to use transaction isolation levels.
If you want to use a statement à la

SET TRANSACTION ISOLATION LEVEL <level>

then you need to run this as the first statement of a connection.

In such a scenario the event listener from #6467 (comment) doesn't work, as it only runs after the BEGIN statement (see https://github.com/sqlalchemy/sqlalchemy/issues/6467#issuecomment-864943824).

However, you can hook into the connect event of the engine's pool, as it is run right after the connection is created:

async_engine = ...

@event.listens_for(async_engine.sync_engine.pool, "connect")
def clear_prepared_statements_on_begin_and_isolate(conn, branch):
    conn.run_async(
        lambda con: con.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;")
    )
    conn.run_async(lambda con: con.execute("DEALLOCATE ALL;"))

As I had a situation where the transaction isolation is only necessary in some cases, I've created this gist to allow acquiring a session with and without the isolation level.

Note that if you use the UUID approach (#6467 (comment)), this is probably not necessary, but it was not an option in my scenario.

@CaselIT (Member) commented Feb 5, 2023

@mimre25 I don't think that gist works at all:

session: AsyncSession = sessionmaker(engine)()

this returns a Session, not an AsyncSession; you need to use async_sessionmaker with async sessions.
Also, there is no need to create a sessionmaker if you instantiate it inside a function every time it's called. Just instantiate an AsyncSession directly (a sessionmaker is useful only if it's placed at the module level).

@zzzeek (Member) commented Feb 5, 2023

I wanted to chime in here, as I recently faced this problem and solved an edge case that wasn't discussed above.

While the solutions from #6467 (comment) and #6467 (comment) work in most scenarios, there is a further problem, if one wants to use transaction isolation levels. If you want to use a statement a la

SET TRANSACTION ISOLATION LEVEL <level>

then you need to run this as first statement of a connection.

In such a scenario the event listener from #6467 (comment) doesn't work, as it only runs after the BEGIN statement (see https://github.com/sqlalchemy/sqlalchemy/issues/6467#issuecomment-864943824).

However, you can hook into the connect event of the engine's pool, as it is run right after the connection is created:

async_engine = ...

@event.listens_for(async_engine.sync_engine.pool, "connect")
def clear_prepared_statements_on_begin_and_isolate(conn, branch):
    conn.run_async(
        lambda con: con.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;")
    )
    conn.run_async(lambda con: con.execute("DEALLOCATE ALL;"))

As I had the situation that the transaction isolation is only necessary in some cases, I've created this gist to allow acquiring a session with and without the isolation level.

Just so you know, for the isolation level part of the above, this feature is built in: simply use an engine-level isolation level, which does the exact same thing.
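
For reference, the built-in setting looks roughly like this (a sketch; the URL, credentials and level are placeholders):

```python
from sqlalchemy.ext.asyncio import create_async_engine

# SQLAlchemy applies the isolation level itself on each new
# connection, so no custom pool "connect" hook is needed for it
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@hostname/dbname",
    isolation_level="SERIALIZABLE",
)
```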

@mimre25 commented Feb 9, 2023

Thanks for pointing that out @CaselIT - I've corrected the gist to directly instantiate the AsyncSession.

@zzzeek thank you for that - I wasn't aware of it. I won't edit the gist in this regard (besides adding a comment), as it still shows how to "hook" directly into the underlying connection.

sqlalchemy-bot pushed a commit that referenced this issue Apr 21, 2023
I faced an issue related to pgbouncer and the prepared statement cache flow in the asyncpg dialect. Following discussion #6467, I prepared a PR to support an optional parameter `name` for prepared statements, which asyncpg allows since version 0.25.0 (MagicStack/asyncpg#846).

**UPD:**
the issue with proposal: #9608

### Description
Added an optional parameter `name_func` to the `AsyncAdapt_asyncpg_connection` class, which is called on the `self._connection.prepare()` call to populate a unique name.

so in general, instead of this

```python

from uuid import uuid4

from asyncpg import Connection

class CConnection(Connection):
    def _get_unique_id(self, prefix: str) -> str:
        return f'__asyncpg_{prefix}_{uuid4()}__'

engine = create_async_engine(...,
    connect_args={
        'connection_class': CConnection,
    },
)

```

this would be enough:

```python
from uuid import uuid4

engine = create_async_engine(...,
    connect_args={
        'name_func': lambda:  f'__asyncpg_{uuid4()}__',
    },
)

```
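
Building on that, a hypothetical `name_func` that also stays unique across worker processes could mix the pid into the name (a sketch; the function name is invented here, and the only requirement is that the callable returns a fresh unique string on each call):

```python
import os
from uuid import uuid4

def asyncpg_statement_name() -> str:
    # pid + random UUID: unique even across worker processes that
    # end up sharing one pgbouncer-pooled server session
    return f"__asyncpg_{os.getpid()}_{uuid4().hex}__"

# usage sketch:
# engine = create_async_engine(...,
#     connect_args={"name_func": asyncpg_statement_name},
# )

name_one = asyncpg_statement_name()
name_two = asyncpg_statement_name()
assert name_one != name_two  # every call yields a distinct name
```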

### Checklist

This pull request is:

- [ ] A documentation / typographical error fix
	- Good to go, no issue or tests are needed
- [ ] A short code fix
	- please include the issue number, and create an issue if none exists, which
	  must include a complete example of the issue.  one line code fixes without an
	  issue and demonstration will not be accepted.
	- Please include: `Fixes: #<issue number>` in the commit message
	- please include tests.   one line code fixes without tests will not be accepted.
- [x] A new feature implementation
	- please include the issue number, and create an issue if none exists, which must
	  include a complete example of how the feature would look.
	- Please include: `Fixes: #<issue number>` in the commit message
	- please include tests.

**Have a nice day!**

Fixes: #9608
Closes: #9607
Pull-request: #9607
Pull-request-sha: b4bc8d3

Change-Id: Icd753366cba166b8a60d1c8566377ec8335cd828
@zzzeek (Member) commented Aug 16, 2023

Why is this issue still open? You can use prepared statements with pgbouncer + asyncpg; it's working right here in #10226, save for one issue with "ping" that we can fix.

Is there anyone here still unclear on how to use asyncpg with pgbouncer?

@CaselIT (Member) commented Aug 16, 2023

it may make sense to move this to a discussion?

@zzzeek (Member) commented Aug 16, 2023

I think so?

@CaselIT (Member) commented Aug 16, 2023

ok, so once it's fixed we can add a new post with a link to the fix

@sqlalchemy locked and limited conversation to collaborators Aug 16, 2023
@CaselIT converted this issue into discussion #10246 Aug 16, 2023
