Make cached_property derive from property #137

Open
wants to merge 12 commits into base: master

Conversation

@althonos commented Nov 23, 2018

Hi @pydanny ! Since I'm using this library on a daily basis I figured it would be time to contribute at last 😉

Class hierarchy (#26, #27)

  • cached_property now derives from property.
  • threaded_cached_property and cached_property_with_ttl are cached_property subclasses.
  • threaded_cached_property_with_ttl is a subclass of both cached_property_with_ttl and threaded_cached_property.

This is not as drastic as #30 suggests, but it still helps avoid code duplication (see the sketch below).
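
A rough sketch of the resulting hierarchy (my own illustration of the structure described above, not the PR's actual code):

    class cached_property(property):
        """Caches the result of the wrapped getter on first access."""

    class threaded_cached_property(cached_property):
        """Adds a lock so the getter runs at most once under concurrent access."""

    class cached_property_with_ttl(cached_property):
        """Invalidates the cached value once its time-to-live expires."""

    class threaded_cached_property_with_ttl(cached_property_with_ttl, threaded_cached_property):
        """Combines TTL expiry and thread safety via cooperative multiple inheritance."""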

Support __slots__ and stop using obj.__dict__ (#69, #123)

cached_property no longer uses the object's __dict__ to store the cached value; instead it uses a WeakKeyDictionary keyed on the object. This allows supporting slotted classes, as long as a weakref can be created to the object, i.e. "__weakref__" is in __slots__. Without weakrefs, the cache would keep objects alive and prevent them from being garbage collected.
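
A minimal sketch of that caching scheme (my own illustration; the cache and _sentinel names are taken from the traceback quoted further down, and the Circle class is made up):

    import weakref

    class cached_property(property):
        def __init__(self, func):
            super().__init__(func)
            self.cache = weakref.WeakKeyDictionary()  # instance -> cached value
            self._sentinel = object()                 # marks "not computed yet"

        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            value = self.cache.get(obj, self._sentinel)
            if value is self._sentinel:
                value = self.fget(obj)
                self.cache[obj] = value
            return value

    class Circle:
        # slotted class: works because "__weakref__" is listed in __slots__
        __slots__ = ("radius", "__weakref__")

        def __init__(self, radius):
            self.radius = radius

        @cached_property
        def area(self):
            return 3.14159 * self.radius ** 2

Because the keys are weak references, dropping the last strong reference to an instance also evicts its cache entry.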

Better function wrapping (#72)

In Python 3, functools.update_wrapper is now used to extract attributes from the wrapped function. This means the cached_property will also expose the __qualname__ and __annotations__ of the wrapped function.
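
For illustration, this is roughly what functools.update_wrapper copies over (plain wrapper function stand-in, not the descriptor code itself):

    import functools

    def expensive(x: int) -> int:
        """Example getter."""
        return x * 2

    def wrap(func):
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        # copies __module__, __name__, __qualname__, __annotations__ and __doc__,
        # and sets __wrapped__ to the original function
        functools.update_wrapper(wrapper, func)
        return wrapper

    wrapped = wrap(expensive)
    assert wrapped.__qualname__ == "expensive"
    assert wrapped.__annotations__ == {"x": int, "return": int}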

Miscellaneous (#124)

Supersedes #124, so that you don't have to resolve the merge yourself (I added @VadimPushtaev to the AUTHORS.rst since I discovered the __set_name__ protocol thanks to his PR, and I think he deserves some praise 😄 )
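
For context, __set_name__ (Python 3.6+) lets a descriptor learn the attribute name it is bound to at class creation time; a minimal illustration with a made-up descriptor:

    class demo_descriptor:
        def __set_name__(self, owner, name):
            # called automatically when the owner class body finishes executing
            self.attr_name = name

        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            return obj.__dict__.get(self.attr_name, "unset")

    class Example:
        value = demo_descriptor()

    assert Example.value.attr_name == "value"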

@althonos force-pushed the master branch 2 times, most recently from 6c2af32 to f3ac288, on November 23, 2018 at 23:26.
@althonos (Author)

Rebased to latest master (Dec. 28).

@althonos (Author)

@pydanny: Any reason to stall this?

@bashtage

Sounds like a clear improvement.

@althonos (Author)

For anyone interested (@carver, @bashtage?): this has been released on PyPI as property-cached until development resumes here.

@vbraun (Contributor) commented Aug 11, 2019

Looks good to me. Should also fix #168

@Qu4tro commented Aug 20, 2019

Thanks for the heads up @vbraun.
I gave it a try, but no dice:

  File "/home/quatro/.local/share/virtualenvs/scenographer/lib/python3.7/site-packages/property_cached/__init__.py", line 54, in __get__
    value = self.cache.get(obj, self._sentinel)
  File "/home/quatro/.local/share/virtualenvs/scenographer/lib/python3.7/weakref.py", line 433, in get
    return self.data.get(ref(key),default)  File "/home/quatro/.local/share/virtualenvs/scenographer/lib/python3.7/site-packages/property_cached/__init__.py", line 54, in __get__
    value = self.cache.get(obj, self._sentinel)
  File "/home/quatro/.local/share/virtualenvs/scenographer/lib/python3.7/weakref.py", line 433, in get
    return self.data.get(ref(key),default)
TypeError: cannot create weak reference to 'RelationDAG' object

@althonos (Author)

@Qu4tro: it seems you have a slotted object, but it doesn't have __weakref__ in its slots.
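
For reference, the difference "__weakref__" makes for slotted classes (made-up classes for illustration):

    import weakref

    class WithWeakref:
        __slots__ = ("value", "__weakref__")

    class WithoutWeakref:
        __slots__ = ("value",)

    weakref.ref(WithWeakref())       # fine
    try:
        weakref.ref(WithoutWeakref())
    except TypeError as exc:
        print(exc)  # cannot create weak reference to 'WithoutWeakref' object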

@Qu4tro commented Aug 22, 2019

I've updated https://github.com/Qu4tro/cachedproperty_immutable_classes with property_cached.

Yeah, it's derived from NamedTuple. I guess the only way to do it now, since Python 3.5.1, is to maintain an external store to save the cache to?

@althonos (Author)

@Qu4tro: maybe you could open an issue on python/typing? I don't know if NamedTuple is not weakrefable because of implementation details or just because it's missing the feature; this might be worth a try.

Otherwise, you could write a metaclass that does the same as NamedTuple, but also adds a __weakref__ slot. I know this is not ideal but I assume this would be better than the external cache solution.

@Qu4tro commented Aug 23, 2019

@althonos: I've created the ticket. Thanks for the advice.

@bashtage commented Aug 29, 2019

@althonos Sorry about that - seems to be something else!

@Qu4tro commented Aug 29, 2019

Wasn't that bad. It notified me of this thread, into which I had forgotten to type some findings.

Basically, if anyone has the same issue as I do (caching NamedTuple properties), you can use the functools lru_cache decorator; see the sketch below. This has a caveat: the cache is shared per method across instances, and the cache key used is self, the NamedTuple itself. You may need to set a bigger limit or make it unbounded with @lru_cache(None). It works with @classmethod as well, if anyone's wondering (the cache is still shared across instances, but the cache key is the class, so the default limit should be okay).

If anyone knows of any other issues with this approach, it would be great to hear.
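
A small sketch of that workaround (made-up Point class; lru_cache keys the call on self, so it only works while the NamedTuple stays hashable):

    from functools import lru_cache
    from typing import NamedTuple

    class Point(NamedTuple):
        x: float
        y: float

        @property
        @lru_cache(maxsize=None)  # unbounded cache, shared across all Point instances
        def norm(self) -> float:
            return (self.x ** 2 + self.y ** 2) ** 0.5

    p = Point(3.0, 4.0)
    assert p.norm == 5.0  # computed once, then served from the shared cache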

@matfax commented Oct 26, 2019

By using the weak dictionary in the decorating class, compatibility with non-hashable types is lost.

@althonos (Author)

@matfax: but in exchange you gain support for slotted classes, which cached_property does not support.

@matfax commented Oct 29, 2019

@althonos I do not doubt that it would be a valuable addition. But maybe having both options in one library would be the wiser choice, instead of superseding the current one, so that dependents that rely on this library for non-hashable support do not become trapped in an old version. After all, there are valid use cases where a hash can be no more than a purely random number. This package is widely adopted; it would be quite an affair if existing builds stopped working after a version bump.
