"test" + "add" in a single call #68

Open · piskvorky wants to merge 4 commits into master
Conversation

piskvorky (Contributor) commented:

In the (common?) scenario of "test whether the item is already in the filter; if not, add it; then run additional business logic depending on whether the element is new or not", we now have the choice of:

1) Test, then add:

if item in bloom_filter:
    ...code
else:
    bloom_filter.add(item)
    ...other code

2) Rely on the return value of add(), which reports whether the item was already present:

exists = bloom_filter.add(item)
if exists:
    ...code
else:
    ...other code

There's not much difference in performance between the two (hashing is fast, the memory caches are primed), and the second approach reads nicer. However, option 2) sets the array bits even when they are already set (it always writes through). This is unfortunate because the write access is unnecessary for duplicates and makes things more complicated and slower (more I/O, cache stalls).

This PR adds a new method, contains_add, which acts just like add but only sets bits that are not already set:

exists = bloom_filter.contains_add(item)
if exists:
    ...code
else:
    ...other code

According to benchmarks, it's also about 11% faster than option 2) above, when expecting 20% duplicates.
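
For illustration, the idea at the C level is a check-before-write: test the bit first and touch memory only when it is actually clear. A minimal sketch under assumed type and function names follows; this is not the PR's actual code, and DTYPE plus the word arithmetic merely mirror the lines quoted in the review comments below:

#include <stddef.h>

typedef unsigned int DTYPE;   /* placeholder word type, not the real typedef */

/* Sketch: test-and-set one bit, writing only when the bit is still clear. */
static int bit_test_and_set(DTYPE *vector, size_t bit)
{
    DTYPE *chunk = vector + bit / (sizeof(DTYPE) << 3);      /* word holding the bit */
    DTYPE mask = (DTYPE)1 << (bit % (sizeof(DTYPE) << 3));   /* bit within that word */
    if (*chunk & mask)
        return 1;     /* already set: no write, the cache line stays clean */
    *chunk |= mask;   /* genuinely new: write the bit */
    return 0;
}

In a Bloom filter, a check like this runs once per hash function, and the item counts as already present only if every probed bit was already set.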

axiak (Owner) commented Sep 23, 2015:

I would expect option (2) to be faster, not slower. We should be within the same cache and there isn't any real I/O. Still, I can't argue with the numbers -- I suspect the difference is accounted for by extra Python interpreter work, rather than just doing it all in C.

axiak (Owner) commented on the diff:

return -1;
}
DTYPE *chunk = array->vector + (size_t)(array->preamblesize + bit / (sizeof(DTYPE) << 3));
int byte = 1 << (bit % (sizeof(DTYPE) << 3));

can you use the _vector_byte static inline function here?
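
For readers without the codebase at hand, the suggestion is to replace the inline mask computation with the existing helper. A hypothetical sketch of that refactor; the actual _vector_byte signature in the repository may differ, and DTYPE and the parameter type are assumptions:

#include <stddef.h>

typedef unsigned int DTYPE;   /* placeholder word type */

/* Assumed shape of the existing helper. */
static inline DTYPE _vector_byte(size_t bit)
{
    return (DTYPE)1 << (bit % (sizeof(DTYPE) << 3));
}

/* The reviewed line would then read something like:
 *     DTYPE byte = _vector_byte(bit);
 */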

piskvorky (Contributor, Author) commented:

There is I/O -- processor caches can make a huge difference, and I think any write invalidates the cache line, even when the write is a no-op (no bits changed). Memory writes are expensive.

axiak (Owner) commented on the diff:

errno = EINVAL;
return -1;
}
DTYPE *chunk = array->vector + (size_t)(array->preamblesize + bit / (sizeof(DTYPE) << 3));

and please use the _vector_offset static inline function here as well.
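
Similarly, a hypothetical sketch of the _vector_offset refactor; the struct fields come from the quoted line, while the real type name and signature in the repository may differ:

#include <stddef.h>

typedef unsigned int DTYPE;   /* placeholder word type */

struct bitarray {             /* stand-in for the real array struct */
    DTYPE *vector;
    size_t preamblesize;
};

/* Assumed shape of the existing helper. */
static inline DTYPE * _vector_offset(struct bitarray *array, size_t bit)
{
    return array->vector + array->preamblesize + bit / (sizeof(DTYPE) << 3);
}

/* The reviewed line would then read something like:
 *     DTYPE *chunk = _vector_offset(array, bit);
 */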

axiak (Owner) commented Sep 23, 2015:

If you want to earn brownie points -- try writing a Cython version that calls out to the C test & add and see what the performance improvement is :)

piskvorky (Contributor, Author) commented:

Done. Let me know if you can replicate the performance improvements; I wonder how other HW/compiler factors play into this (OS X here, Apple LLVM version 7.0.0, MacBookPro11,3).

I merged the 32-bit fix as well -- it modifies the same file, and I don't want conflicts later. Weirdly enough, even the PyPy test passed now.

piskvorky (Contributor, Author) commented:

I think you're right -- I checked a version that simply writes instead of doing the if-check first, and it's faster still.

So either it was just the Python overhead that made this faster, or the branch is even more expensive than the write (which kinda makes sense). Either way, contains_add is consistently faster than add -- maybe worth making it the default?
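
To make the two strategies being compared concrete, here is a sketch of the unconditional variant, as opposed to the guarded check-before-write sketched earlier (assumed names and types again, not the library's code):

#include <stddef.h>

typedef unsigned int DTYPE;   /* placeholder word type */

/* Always write: the OR is a no-op for an already-set bit, but it still
 * dirties the cache line; in exchange there is no branch to mispredict. */
static int bit_set_always(DTYPE *vector, size_t bit)
{
    DTYPE *chunk = vector + bit / (sizeof(DTYPE) << 3);
    DTYPE mask = (DTYPE)1 << (bit % (sizeof(DTYPE) << 3));
    int was_set = (*chunk & mask) != 0;
    *chunk |= mask;
    return was_set;
}

Which variant wins depends on the duplicate rate, the compiler, and the hardware: the guarded write keeps duplicate lookups read-only, while the unconditional write avoids a data-dependent branch.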

earonesty commented:

Should make it the default, if it's typically faster.
