Opened 13 years ago

Last modified 10 years ago

#16549 new New feature

In Django models, save/delete preconditions would help in handling optimistic concurrency control problems

Reported by: Steven Cummings Owned by: nobody
Component: Database layer (models, ORM) Version: 1.3
Severity: Normal Keywords:
Cc: anton@…, anssi.kaariainen@…, Daniel Naab, medoslav@…, info+coding@…, carsten.fuchs@… Triage Stage: Someday/Maybe
Has patch: no Needs documentation: no
Needs tests: no Patch needs improvement: no
Easy pickings: no UI/UX: no

Description (last modified by Carl Meyer)

Most agree that in web apps you cannot lock rows ahead of time for editing; optimistic concurrency control is the typical choice. It is then the job of the application developer to decide how to handle contention when it does occur. I've found scenarios where it would have been useful to know the rows-modified count for an update or delete:

  1. Models have a "version" field, and we'd like to prevent two requests from clobbering each other with updates from the same version. Such race conditions are generally the way of the web when two people are allowed to edit data simultaneously, but sometimes the data is more critical and we'd like to present the conflict to the second user trying to update (e.g., Bugzilla's mid-air collisions). In this case, it would be nice to save() only on the precondition that the version in the DB hasn't changed. At a low level this means something like UPDATE ... WHERE ID = xxx AND VERSION = xxx, and the rows-modified count then indicates whether it worked.
  2. When we really want to know that a delete() call deleted data. Django's ORM delete() passes silently if the object is already gone, but there are cases where we need to know that the current request actually did the delete. One example is deleting OAuth authorization codes: they should be used once and only once. (Yes, we could just mark the code as used, but that leaves us with the update/versioning problem above; this is just an example.) If two requests retrieve the auth code and both believe they deleted (used) it, that would be considered a potential security problem in OAuth.

In both cases, having access to the rows-modified count before commit would go a long way towards application-specific optimistic concurrency control. AFAICT, only that answer from the DB driver can tell us whether the current request actually performed a given data update. Looking at the django.db.models code, the rows-modified count is available deep down, where DB-specific connections and cursors are used, but it is otherwise ignored.

So, the actual proposal:

  • Model.delete(), QuerySet.update(), and QuerySet.delete() return rows-modified counts
  • A new Model method save_if() that accepts a precondition, such as version=5 (which would be more like F('version') == 5 under the covers; obviously we want to compare against the actual value in storage). If the precondition fails, raise MyModel.PreconditionFailed, which extends ObjectPreconditionFailed. In my example above I would probably translate that to a model-specific VersionObsolete exception, or similar. A rough sketch of how this can be approximated today follows.
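To make the idea concrete, here is a minimal sketch (an assumption-laden illustration, not proposed API) of the version-guarded update that save_if() would wrap. It uses only today's QuerySet.update(), which already returns the number of rows matched; the Article model, its version field, and the exception name are made up for the example.

from django.db import models

class ConcurrentModification(Exception):
    pass

class Article(models.Model):
    title = models.CharField(max_length=200)
    version = models.IntegerField(default=0)

def save_article_if_unchanged(article):
    # Compiles to roughly UPDATE ... WHERE id = %s AND version = %s; a zero
    # row count means another request updated the row since we read it.
    matched = Article.objects.filter(
        pk=article.pk, version=article.version,
    ).update(title=article.title, version=article.version + 1)
    if matched == 0:
        raise ConcurrentModification("the version in the DB has moved on")
    article.version += 1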

Thoughts?

I can work on patches, but I wanted to log this first and see what kind of reaction it got before spending the time on that.

Update from comment thread: The issue of delete returning counts (update already does) has been moved to #16891. This ticket's scope is now strictly the idea of a conditional save.

Change History (35)

comment:1 by anonymous, 13 years ago

Summary: In Django models, modified row would help in handling optimistic concurrency control problems → In Django models, modified-rows count would help in handling optimistic concurrency control problems

comment:2 by Aymeric Augustin, 13 years ago

Per our policy, feature requests should be discussed on the mailing list. You won't get enough attention on Trac. Please follow the instructions here: https://docs.djangoproject.com/en/dev/internals/contributing/bugs-and-features/#requesting-features

Otherwise:

  • #11652 describes the same problem in the context of the admin.
  • I think the idea is sound.
  • Changing the return value of Model.delete(), QuerySet.update() and QuerySet.delete() may be an issue, though.

comment:3 by Aymeric Augustin, 13 years ago

Triage Stage: Unreviewed → Design decision needed

comment:4 by anonymous, 13 years ago

Could be handy indeed.

Changing the return value of Model.delete(), QuerySet.update() and QuerySet.delete() may be an issue, though.

True. I wouldn't change the default behavior, nor would I be inclined to add more stuff to models.Model.

I think a simple subclass would be more appropriate:

from django.db import models

class SomeModel(models.ModelConcurrencySafe):
    ...

comment:5 by Anton Strogonoff, 13 years ago

Cc: anton@… added

comment:6 by Steven Cummings, 13 years ago

A sub-class is fair enough, yeah. Especially given that the situations are probably off the beaten path.

comment:7 by Anssi Kääriäinen, 13 years ago

Cc: anssi.kaariainen@… added

There is some proof-of-concept code showing how to do optimistic locking using pre_save and pre_delete signals, at https://github.com/akaariai/django_optimistic_lock. I think the approach taken there should work. However, it is nothing more than a POC: the code is really ugly, will probably eat your data, and probably does not work at all on any database other than PostgreSQL.
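For readers who don't want to dig through that repository, a hedged sketch of the signal-based idea (not the linked POC itself) might look like the following. The Article model, its version field, and the exception name are assumptions for illustration.

from django.db import models
from django.db.models.signals import pre_save
from django.dispatch import receiver

class ConcurrentModification(Exception):
    pass

class Article(models.Model):  # hypothetical versioned model
    title = models.CharField(max_length=200)
    version = models.IntegerField(default=0)

@receiver(pre_save, sender=Article)
def guard_version(sender, instance, **kwargs):
    if instance.pk is None:
        return  # nothing to guard on the first insert
    # Guarded UPDATE: bump the stored version only if it still matches what we loaded.
    matched = sender._base_manager.filter(
        pk=instance.pk, version=instance.version,
    ).update(version=instance.version + 1)
    if matched == 0:
        raise ConcurrentModification("row was modified by another transaction")
    instance.version += 1  # the save() that follows writes the bumped version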

comment:8 by Steven Cummings, 13 years ago

I see where you're going with that field, but I'm hesitant to agree that a version field is exactly what everybody wants. I was going for the more primitive building blocks: the precondition and the rows-modified count. I'm hoping we gather more use cases on the django-developers thread to be more sure, though.

comment:9 by Steven Cummings, 13 years ago

I accidentally double-clicked submit, and my second submit was presented with a "this ticket has already been modified by someone else" message. So apparently *this* system does version checking. Is this implemented in Django? Should we take notes? :)

comment:10 by Anssi Kääriäinen, 13 years ago

One can implement any logic in the signal callback, but it takes too much work to implement such a field. Also, relying on catching the exceptions from the save is just plain wrong (it results in transaction commit all too easily). The optimistic lock check should be done in model validation if possible.

A conditional save would be handy. But it is not trivial to implement. The implementation should be one that handles derived classes, does not easily lead to half-updates (the condition was ok for the first table in derived save, but not for the second) and does not lead to too much code duplication. Having two different versions of base_save does not look appealing. Are there enough use cases that are not solvable by overriding the save() method or signals?

It is a good idea to return models.UPDATED or models.INSERTED from save(), models.DELETED or models.NO_ACTION from delete(), and modified object counts from qs.update() and qs.delete(). Row counts instead of object counts from qs.update() and qs.delete() is a somewhat harder problem: the object count would not include related models deleted or updated. The return values from save() and delete() are just examples.

in reply to:  10 ; comment:11 by anonymous, 13 years ago

Replying to akaariai:

Also, relying on catching the exceptions from the save is just plain wrong (it results in transaction commit all too easily).

I don't follow this statement. Catching exceptions results in commits? Do you mean that the programmer might not think about whether or not they want to proceed with other operations or rollback a transaction on condition failure? This does highlight the need to ensure that this feature's interaction with transaction management is clear.

A conditional save would be handy. But it is not trivial to implement. The implementation should be one that handles derived classes, does not easily lead to half-updates (the condition was ok for the first table in derived save, but not for the second) and does not lead to too much code duplication.

I think the answer there is to check the precondition first and cut off the whole operation if it fails. You give the precondition to the immediate object. Even if we devise a way to specify preconditions for the cascaded updates of related objects, those updates would have the opportunity to rollback the transaction. I agree on code duplication. My goal is that you can generally use a provided ModelConcurrencySafe.save() and provide simple precondition expressions.

Having two different versions of base_save does not look appealing.

Agreed. Could also be save(precondition=...) instead of save_if().

Are there enough use cases that are not solvable by overriding the save() method or signals?

As it stands today, the use cases I've described so far require writing raw SQL updates, manually handling batching of related objects, and manually firing signals. That's what I'd like to avoid. Note that I'm not arguing these are majority scenarios, just scenarios that could have better support and shouldn't result in resorting to raw sql overrides of save(), delete(), etc. The only reason for this is simply the need to work with the cursor object directly to get the update count.

It is a good idea to return models.UPDATED or models.INSERTED from save(),

With a precondition, save() could return models.NO_ACTION as well (understanding that you're just giving examples/pseudocode).

returning models.DELETED or models.NO_ACTION from delete() and modified object counts from qs.update(), qs.delete(). Row counts instead of object counts from qs.update() and qs.delete() is a bit harder problem. The object count would not include related models deleted or updated. The return values from save() and delete() are just examples.

Related models are interesting because we first have to devise a clear way to specify the relevant conditions on those objects. I guess that's what you're assuming with your OptimisticLock field?

It's quite possible I've misunderstood some of your scenarios. Good discussion. Keep it coming.

comment:12 by Steven Cummings, 13 years ago

Doh. I didn't realize I wasn't logged in anymore. That comment was from me.

in reply to:  11 ; comment:13 by Anssi Kääriäinen, 13 years ago

Replying to anonymous:

Replying to akaariai:

Also, relying on catching the exceptions from the save is just plain wrong (it results in transaction commit all too easily).

I don't follow this statement. Catching exceptions results in commits? Do you mean that the programmer might not think about whether or not they want to proceed with other operations or rollback a transaction on condition failure? This does highlight the need to ensure that this feature's interaction with transaction management is clear.

Yes, I mean the programmer error. If the only indication of concurrent modification is the exception from the save, it is all too easy to do the following code:

try:
    # validate data
    obj1.save()
    obj2.save()  # raises ConcurrentModificationException
except ConcurrentModificationException:
    # show the user that there was a concurrent modification -> whoops
    pass

As you said how to handle transactions is important in this case. This is not only about transactions, even if obj2.save_if() would return NOT_SAVED, the same problem is still there. It also seems returning NOT_SAVED instead of an exception is a bad thing. The return value is all too easy to miss...

A conditional save would be handy. But it is not trivial to implement. The implementation should be one that handles derived classes, does not easily lead to half-updates (the condition was ok for the first table in derived save, but not for the second) and does not lead to too much code duplication.

I think the answer there is to check the precondition first and cut off the whole operation if it fails. You give the precondition to the immediate object. Even if we devise a way to specify preconditions for the cascaded updates of related objects, those updates would have the opportunity to rollback the transaction. I agree on code duplication. My goal is that you can generally use a provided ModelConcurrencySafe.save() and provide simple precondition expressions.

Having two different versions of base_save does not look appealing.

Agreed. Could also be save(precondition=...) instead of save_if().

If this is the case, why not just override the save()?

    def save(self, precondition=None, **kwargs):
        if precondition is not None:
            if not precondition_holds(self, precondition):  # pseudocode check
                raise PreconditionFailed()  # or return NO_ACTION
        super(MyModel, self).save(**kwargs)

But I still have the feeling that save() is too late for this check. Data validation is the right time. save() should at most guard against concurrent modification, but relying on save()'s guard in program logic seems problematic.

Are there enough use cases that are not solvable by overriding the save() method or signals?

As it stands today, the use cases I've described so far require writing raw SQL updates, manually handling batching of related objects, and manually firing signals. That's what I'd like to avoid. Note that I'm not arguing these are majority scenarios, just scenarios that could have better support and shouldn't result in resorting to raw sql overrides of save(), delete(), etc. The only reason for this is simply the need to work with the cursor object directly to get the update count.

It is a good idea to return models.UPDATED or models.INSERTED from save(),

With a precondition, save() could return models.NO_ACTION as well (understanding that you're just giving examples/pseudocode).

This seems dangerous. Raising an exception is better, as the return condition is too easy to miss.

By the way, I think I have a cleaner approach to OptimisticLockField: using the field's pre_save() hook to do the guarding and its validate() to do the validation (so forms would "just work"). No signals! :) And writing such a field should be easy. If one wants more complex logic, then model validation and overriding save() and delete() should work; using an abstract superclass should make this relatively straightforward. If I have time I will write a new implementation.

in reply to:  13 comment:14 by Steven Cummings, 13 years ago

Replying to akaariai:

Replying to anonymous:

Replying to akaariai:

Yes, I mean the programmer error. If the only indication of concurrent modification is the exception from the save, it is all too easy to do the following code:

try:
    # validate data
    obj1.save()
    obj2.save()  # raises ConcurrentModificationException
except ConcurrentModificationException:
    # show the user that there was a concurrent modification -> whoops
    pass

As you said how to handle transactions is important in this case. This is not only about transactions, even if obj2.save_if() would return NOT_SAVED, the same problem is still there. It also seems returning NOT_SAVED instead of an exception is a bad thing. The return value is all too easy to miss...

So we could add some transaction control options (rollback_on_failed=True) to help with this.

If this is the case, why not just override the save()?

    def save(self, precondition=None, **kwargs):
        if precondition is not None:
            if not precondition_holds(self, precondition):  # pseudocode check
                raise PreconditionFailed()  # or return NO_ACTION
        super(MyModel, self).save(**kwargs)

But I still have the feeling that save() is too late for this check. Data validation is the right time. save() should at most guard against concurrent modification, but relying on save()'s guard in program logic seems problematic.

Now we're coming down to the potential misunderstanding. The precondition is not just a Python expression that I'm evaluating in memory (otherwise, yeah, I'd do that and wouldn't have filed this ticket). No, the precondition is potentially a WHERE clause on the UPDATE that may lead to a rows-modified count of zero. That is the only way for the current process to know *right at the time of save* that it's not in conflict. save() can't be too late, because the whole point of this ticket is to avoid race conditions and handle them more intelligently. Any time you leave between the check and the DB operation is time in which two processes can enter the same region of code and lose the benefit of the check.
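To illustrate the point, here is roughly what one has to hand-write today to get that answer; this is not proposed API, and the table and column names are made up for the example.

from django.db import connection

def update_title_if_unchanged(pk, expected_version, new_title):
    cursor = connection.cursor()
    # The precondition lives in the WHERE clause; the driver's rowcount is
    # the only authoritative answer to whether this request actually "won".
    cursor.execute(
        "UPDATE myapp_article SET title = %s, version = version + 1 "
        "WHERE id = %s AND version = %s",
        [new_title, pk, expected_version],
    )
    return cursor.rowcount  # 0 means a concurrent request got there first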

Are there enough use cases that are not solvable by overriding the save() method or signals?

As it stands today, the use cases I've described so far require writing raw SQL updates, manually handling batching of related objects, and manually firing signals. That's what I'd like to avoid. Note that I'm not arguing these are majority scenarios, just scenarios that could have better support and shouldn't result in resorting to raw sql overrides of save(), delete(), etc. The only reason for this is simply the need to work with the cursor object directly to get the update count.

It is a good idea to return models.UPDATED or models.INSERTED from save(),

With a precondition, save() could return models.NO_ACTION as well (understanding that you're just giving examples/pseudocode).

This seems dangerous. Raising an exception is better, as the return condition is too easy to miss.

If the programmer has elected to use our new Model subclass then it's a pretty good bet they've read the docs and know WHY they didn't just inherit from Model. This isn't a gotcha we're throwing into Model, the person using "ModelConcurrencySafe" (or whatever we call it) didn't do so by accident.

However, there's still the option that we provide transaction control parameters so that ModelConcurrencySafe.save() does the right (e.g., rollback) thing before returning control to the calling code.

comment:15 by Anssi Kääriäinen, 13 years ago

I am now convinced that having model.save() do an UPDATE with a WHERE condition is a good idea. I now see that this is analogous to what happens when there is a unique constraint violation. If I understand the code correctly, the normal procedure for uniqueness checks is:

  • model validation checks that there is no violation. No FOR UPDATE locking is done.
  • If there is a concurrent transaction that will cause a unique violation, the database will see the error due to unique constraint violation. Exception from .save() is the result.
  • When using PostgreSQL, if you are inside a transaction, the transaction will be set to an aborted state (the abort happens in PostgreSQL, not in Python code). You can only roll back the transaction, or roll back to a savepoint. If you are outside a transaction, a half-update is possible.

I think this is exactly what should be done for optimistic locking. The part about transaction abortion is also what should happen, only this time in Python code. If there is a ConcurrentModificationException, the transaction cannot be committed; it can only be rolled back to a previous savepoint or rolled back completely, and no further queries are possible. I think this is the safest behavior. Rolling back the transaction automatically is surprising, and allowing the transaction to commit without any action seems dangerous. The user can wrap the save in a savepoint if they want.
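For instance, the wrap-it-in-a-savepoint pattern might look roughly like this; ConcurrentModificationException is the hypothetical exception discussed above, and obj is any instance saved with the proposed guard.

from django.db import transaction

sid = transaction.savepoint()
try:
    obj.save()  # the guarded save proposed in this ticket
except ConcurrentModificationException:
    transaction.savepoint_rollback(sid)
    # re-fetch the row and present the conflict to the user
else:
    transaction.savepoint_commit(sid)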

The next big question is whether there is a nice way to implement this. In my opinion the problems are the following:

  • There must not be duplication of save_base. save_base is not the simplest piece of code, and it is changed somewhat often.
  • How to indicate from save_base what happened.
  • How to pass the WHERE conditions to the right updates / deletes. What is the semantics of that condition.
  • Not complicating save_base too much.
  • Bonus points if this can be done without complicating base save. One idea is based on the fact that all the queries are done using _base_manager. Maybe overriding that is the correct solution? save_base should pass the instance to _base_manager._update() and _delete(). Overridden _delete and _update can then do whatever checking is needed in the update's WHERE condition. I think this approach is worth some investigation.

comment:16 by Anssi Kääriäinen, 13 years ago

Quick look suggests that update seems to be possible using the _base_manager approach. But related object updates and deletes and also the base object deletion are done using batch_delete and batch_update from django.db.models.sql.subqueries. These seem harder to fix. Using optimistic locking in related object batch_update and batch_delete seems to be the hardest problem. Maybe that doesn't need to be possible for this feature to be useful.

in reply to:  16 comment:17 by Steven Cummings, 13 years ago

Replying to akaariai:

Quick look suggests that update seems to be possible using the _base_manager approach. But related object updates and deletes and also the base object deletion are done using batch_delete and batch_update from django.db.models.sql.subqueries. These seem harder to fix. Using optimistic locking in related object batch_update and batch_delete seems to be the hardest problem. Maybe that doesn't need to be possible for this feature to be useful.

That's the idea I was initially going with, but I'd be willing to investigate it while hacking on the main feature. Are we at a point where it would be worthwhile for me to get started on this and work towards patches to submit?

comment:18 by Anssi Kääriäinen, 13 years ago

I think this is a good idea, but you still need to convince a core developer that this is a good idea :) A working and clean patch is a good way to do so.

Maybe the sanest approach is to modify save_base and delete with appropriate hooks so that optimistic locking can be implemented as an external/contrib module. I do still think that duplicating save_base is a bad idea. Return values from qs.update, qs.delete, model.save and model.delete are probably easier to get included.

Sorry if I have made this more complicated than it really should be; I tend to get a little carried away sometimes... Anyway, +1 from me for a modified save_base.

comment:19 by h3, 13 years ago

Maybe we could take this idea up a notch..

Do you think such a workflow would be possible to implement?

  1. Two users editing the same object
  2. First user saves, but has only modified the first field
  3. Second user saves, but has only modified the second field
  4. Precondition is not met, but there's no conflicts
  5. Merge both saves silently

The model could provide two basic ways to deal with concurrency, which could be extensible:

object.save(precondition=..., concurrencyhandler=MERGE_SILENTLY)

or

object.save(precondition=..., concurrencyhandler=RAISE_ERROR)

The first way would try to merge silently and raise an error if it cannot; the other would raise an error even if there are no field-level conflicts.

Of course I think the default way would be RAISE_ERROR since merging silently could possibly lead to unexpected results in some cases.

btw, glad to see my subclass idea picked up steam (forgot to log in)

Last edited 13 years ago by h3

in reply to:  19 comment:20 by Steven Cummings, 13 years ago

Replying to h3:

Maybe we could take this idea up a notch..

Do you think such a workflow would be possible to implement?

  1. Two users editing the same object
  2. First user saves, but has only modified the first field
  3. Second user saves, but has only modified the second field
  4. Precondition is not met, but there's no conflicts
  5. Merge both saves silently

The model could provide two basic ways to deal with concurrency, which could be extensible:

object.save(precondition=..., concurrencyhandler=MERGE_SILENTLY)

or

object.save(precondition=..., concurrencyhandler=RAISE_ERROR)

The first way would try to merge silently and raise an error if it cannot; the other would raise an error even if there are no field-level conflicts.

Of course I think the default way would be RAISE_ERROR since merging silently could possibly lead to unexpected results in some cases.

Merge is an interesting idea, but it's definitely not the first part of this I would try to implement :) It's complicated because:

  • Some models may have fields that are related to one another and shouldn't be merged independently. I suppose for this problem there could be a field-merge blacklist, or sets of fields that need to be updated together.
  • Does this expand to N objects being involved in a concurrent update (i.e., more than two simultaneous updates of the same object)?

Maybe it's not *so* bad, but it does seem to present some interesting challenges.

comment:21 by Steven Cummings, 13 years ago

Dropping in to say that I intend to work up some patches in the coming weeks. Hopefully sooner, but making that promise wouldn't be a good idea. I do this understanding that we still need a design decision, so I'll focus on demonstrating the desired functionality discussed in this ticket and on the mailing list thread.

comment:22 by Anssi Kääriäinen, 13 years ago

I created a patch that changes the logic of .save() to do an update, check whether anything was updated, and if not, do an insert. While doing that I ended up doing a bit more; for example, save_base now returns UPDATED or INSERTED. It is in ticket #16649 and could be of interest when creating a patch for this ticket.

comment:23 by h3, 13 years ago

I haven't tried it yet, but this project might be a good starting point:

https://github.com/stdbrouw/django-locking

in reply to:  23 comment:24 by Steven Cummings, 13 years ago

Replying to h3:

I haven't tried it yet, but this project might be a good starting point:

https://github.com/stdbrouw/django-locking

That looks higher level than what I was going for. It works when all you have is users trying to avoid trampling each other's edits, but consider:

  • That mechanism leaves data open to being locked indefinitely if a person starts an edit and then leaves the browser (tab) to do something else. I don't see this kind of thing often on the web, as it's kind of intrusive to others who expect to be able to edit the data.
  • Some will want the choice not to pre-lock and to be more reactive to problems. This is the model taken by Bugzilla in one of my original examples.
  • This is either awkward or doesn't work at all if your site provides APIs for managing any of the same data that users can edit on the site. What does the API do while the data is locked, return 409 until it's unlocked? API consumers would then have to build retry strategies around this (and hope they weren't trying to fulfill a page request of their own...).

I'm hoping to build something a little more basic, on top of which the reactive models could be supported. Obviously that project would be a great choice for CMS-centric frameworks (FeinCMS, django CMS, etc.) and sites.

Hoping to actually start working on this over the holiday weekend....

comment:25 by Daniel Naab, 13 years ago

Cc: Daniel Naab added

comment:26 by Steven Cummings, 13 years ago

FYI: I'm going to hack on this over here: https://github.com/estebistec/django

comment:27 by Jacob, 13 years ago

Triage Stage: Design decision needed → Someday/Maybe

There's sort of too much going on here for a single ticket to track easily. Can someone split off the delete/update-should-return-number-of-rows part into a separate ticket (that's an obvious fix), keeping this ticket for discussing the save-if stuff?

I'm marking this "someday" because it's a big enough feature that whether we accept it really depends on the implementation. If the implementation is compelling enough it'll go in, but it's a pretty serious idea and I can't accept or reject it without a good look at the implementation. estebistec, I'll try to keep an eye on what you're doing on GitHub, but if you get to a point where you'd like some feedback, a thread on django-dev is going to be your best bet.

Thanks!

in reply to:  27 ; comment:28 by Steven Cummings, 13 years ago

Replying to jacob:

There's sort of too much going on here for a single ticket to track easily. Can someone split off the delete/update-should-return-number-of-rows part into a separate ticket (that's an obvious fix), keeping this ticket for discussing the save-if stuff?

Agreed. I can create the new ticket for the num-rows part. Is there a way to have tickets depend on each other in Trac (i.e., have the num-rows ticket block this one once I create it)?

I'm marking this "someday" because it's a big enough feature that whether we accept it really depends on the implementation. If the implementation is compelling enough it'll go in, but it's a pretty serious idea and I can't accept or reject it without a good look at the implementation. estebistec, I'll try to keep an eye on what you're doing on GitHub, but if you get to a point where you'd like some feedback, a thread on django-dev is going to be your best bet.

You bet. Thanks!

in reply to:  28 comment:29 by Alex Gaynor, 13 years ago

Replying to estebistec:

We have a very advanced system for tracking dependencies between two tickets: we leave a comment on both sides indicating the relationship ;)

in reply to:  28 comment:30 by Steven Cummings, 13 years ago

Replying to estebistec:

Replying to jacob:

There's sort of too much going on here for a single ticket to track easily. Can someone split off the delete/update-should-return-number-of-rows part into a separate ticket (that's an obvious fix), keeping this ticket for discussing the save-if stuff?

Agreed. I can create the new ticket for the num-rows part. Is there a way to have tickets depend on each other in Trac (i.e., have the num-rows ticket block this one once I create it)?

Done: https://code.djangoproject.com/ticket/16891

comment:31 by Steven Cummings, 13 years ago

Summary: In Django models, modified-rows count would help in handling optimistic concurrency control problems → In Django models, save/delete preconditions would help in handling optimistic concurrency control problems

Updating the summary to be more specific now that #16891 directly addresses the rows-modified aspect of this.

https://code.djangoproject.com/ticket/16891

comment:32 by Carl Meyer, 13 years ago

Description: modified

comment:33 by medoslav@…, 13 years ago

Cc: medoslav@… added

comment:34 by Markus Holtermann, 10 years ago

Cc: info+coding@… added

A while ago I wrote a snippet for version locking on forms: https://gist.github.com/MarkusH/ac4cf709be285c2660be
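Not the linked gist, but a rough sketch of the general form-level idea under the same illustrative assumptions as above (a hypothetical Article model with an integer version field): carry the version in a hidden field and fail validation if the stored version has moved on. The check is advisory; as discussed earlier in this ticket, only a guarded UPDATE removes the race entirely.

from django import forms

from myapp.models import Article  # hypothetical app/model with an integer version field

class ArticleForm(forms.ModelForm):
    class Meta:
        model = Article
        fields = ["title", "version"]
        widgets = {"version": forms.HiddenInput()}

    def clean_version(self):
        submitted = self.cleaned_data["version"]
        if self.instance.pk is not None:
            stored = Article.objects.values_list("version", flat=True).get(
                pk=self.instance.pk)
            if stored != submitted:
                raise forms.ValidationError(
                    "This record was changed by someone else; please reload.")
        return submitted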

comment:35 by Carsten Fuchs, 10 years ago

Cc: carsten.fuchs@… added