Opened 3 years ago
Last modified 6 weeks ago
#33625 assigned Cleanup/optimization
Under ASGI, PyLibMCCache closes memcached connection after every request
Reported by: | Anders Kaseorg | Owned by: | Pablo Montepagano |
---|---|---|---|
Component: | Core (Cache system) | Version: | 4.0 |
Severity: | Normal | Keywords: | asgi, async |
Cc: | | Triage Stage: | Accepted |
Has patch: | no | Needs documentation: | no |
Needs tests: | no | Patch needs improvement: | no |
Easy pickings: | no | UI/UX: | no |
Description
When I run an application that uses django.core.cache.backends.memcached.PyLibMCCache with WSGI under Gunicorn, it makes a single connection to memcached and leaves it open, as I expect.
But when I run it with ASGI under uvicorn, it makes a different connection to memcached for every request and immediately closes it. That is, #11331/#15324 has returned. This at least adds undesirable latency, and may also eventually lead to connection failures as described in #11331.
I think this happens because e3f6e18513224c8ad081e5a19da641f49b0b43da switched the thread-local cache objects to task-local cache objects. That was justified in #31253 by “a potential race condition if two coroutines touch the same cache object at exactly the same time”. But that justification doesn’t make sense: two coroutines cannot be running the same synchronous code on the same thread at the same time, so thread-local objects were already safe.
Now that asynchronous cache backend support is being developed, the argument is slightly different—but it seems obvious to expect that any asynchronous cache backend should at least properly support concurrent use from multiple asynchronous tasks on the same thread, so thread-local objects should still be safe.
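For illustration, here is a minimal sketch (not Django's actual code, and assuming asgiref is installed) of the difference between the two storage choices: an object cached on asgiref.local.Local is recreated for every task, while one cached on threading.local survives across tasks on the same thread. FakeConnection and handle_request are hypothetical stand-ins for a memcached client and a request handler.

```python
# Minimal sketch (not Django's code) of why task-local storage reconnects
# per request: asgiref.local.Local isolates attributes per async task,
# while threading.local shares them across all tasks on one thread.
# Assumes asgiref is installed; FakeConnection is a hypothetical stand-in.
import asyncio
import threading

from asgiref.local import Local

created = {"task_local": 0, "thread_local": 0}
task_local = Local()
thread_local = threading.local()


class FakeConnection:
    """Stand-in for a memcached client connection."""


async def handle_request(storage, counter):
    # Mimics a cache backend that lazily creates and caches a connection.
    if getattr(storage, "conn", None) is None:
        created[counter] += 1
        storage.conn = FakeConnection()


async def main():
    # Two sequential "requests", each run as its own task on the same
    # event-loop thread, as an ASGI server would do.
    for _ in range(2):
        await asyncio.create_task(handle_request(task_local, "task_local"))
        await asyncio.create_task(handle_request(thread_local, "thread_local"))
    print(created)  # expected: {'task_local': 2, 'thread_local': 1}


asyncio.run(main())
```

Under an ASGI server each request runs in its own task, so task-local storage translates directly into one connection per request.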
Change History (9)
comment:1 by , 3 years ago
comment:2 by , 3 years ago
Keywords: | asgi async added |
---|---|
Triage Stage: | Unreviewed → Accepted |
I didn't test, but it seems reasonable. Thanks for the report, Anders.
(If you could attach a reproducing project with exact steps, that would save a cycle later.)
comment:3 by , 3 years ago
Sure. Here’s a minimal test project. Clone it, run memcached -vv along with gunicorn mctest_wsgi or uvicorn mctest_asgi:application, visit http://localhost:8000 several times, and check whether memcached -vv is repeating lines like “new auto-negotiating client connection” and “connection closed.”
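The linked project isn't reproduced here, but a hypothetical minimal equivalent needs nothing more than a PyLibMCCache default cache and a view that touches it:

```python
# Not the linked project; a hypothetical minimal equivalent.

# settings.py (excerpt): PyLibMCCache as the default cache
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyLibMCCache",
        "LOCATION": "127.0.0.1:11211",
    }
}

# views.py: any view that forces a memcached round trip per request
from django.core.cache import cache
from django.http import HttpResponse


def index(request):
    hits = cache.get("hits", 0) + 1
    cache.set("hits", hits)
    return HttpResponse(f"hits={hits}")
```

Served through the WSGI entry point, memcached -vv should log a single client connection that stays open; served through the ASGI entry point on an affected Django version, it should log a connect/close pair for every request.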
comment:5 by , 2 years ago (follow-up: 6)
Owner: | changed to Pablo Montepagano |
---|---|
Status: | new → assigned |
I'd like to try to solve this ticket. If I understand correctly, what would be needed is a connection pool for PyLibMCCache and for PyMemcacheCache that works well under ASGI. I'm thinking of doing something similar to what was done in this repo: https://github.com/Andrew-Chen-Wang/django-async-redis
Before I start developing, do we have a good way to do integration testing for this? I'll try to look into that first.
comment:6 by , 2 years ago
Replying to Pablo Montepagano:
Before I start developing, do we have a good way to do integration testing for this? I'll try to look into that first.
Good question! We have a few ASGI-related tickets for which this is on the agenda. I'm currently working on putting test projects together and then using Locust to run requests against them. If you fancied going that route, I'd be happy to give input. (Perhaps ping me on a sample repo... or...) Thanks!
comment:7 by , 2 years ago
Description: | modified (diff) |
---|---|
We don’t need a new pool. Both pylibmc and pymemcache already have one.
It seems to me it’d be best to leave pooling up to the backend library, because it’s in a better position to know what kinds of concurrency are safe.
Possibly related: #33497 (for database connections).
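As one illustration of what “leave pooling up to the backend library” can look like today (not a fix adopted in this ticket): Django's PyMemcacheCache passes OPTIONS entries as keyword arguments to pymemcache's HashClient, so pymemcache's built-in pool can be enabled from settings. Whether that alone avoids the per-request reconnects under ASGI described above is not established here.

```python
# Illustration only, not a fix adopted in this ticket: enabling pymemcache's
# own connection pooling through Django's cache OPTIONS. PyMemcacheCache
# forwards these entries as keyword arguments to pymemcache's HashClient.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
        "OPTIONS": {
            "use_pooling": True,   # back each server with a pooled client
            "max_pool_size": 4,    # cap on pooled connections per server
        },
    }
}
```

pylibmc's pooling (pylibmc.ClientPool) is a separate wrapper API and, as far as I can tell, is not something Django's OPTIONS can switch on in the same way.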