
bitfield optimization of phase2 #120

Merged: 11 commits, Nov 9, 2020
src/bitfield_index.hpp: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
// Copyright 2020 Chia Network Inc

// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at

// http://www.apache.org/licenses/LICENSE-2.0

// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#pragma once

#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

struct bitfield_index
{
// cache the number of set bits every n bits. This is n. For a bitfield of
// size 2^32, this means a 2 MiB index
Contributor:

Suggested comment change:

// Cache the number of set bits every kIndexBucket bits.
// For 2^32 entries, this means a 200KiB index.

static inline const int64_t kIndexBucket = 16 * 1024;

bitfield_index(std::vector<bool> const& b) : bitfield_(b)
{
uint64_t counter = 0;
auto it = bitfield_.begin();
index_.reserve(bitfield_.size() / kIndexBucket);
Contributor:

index_.reserve(bitfield_.size() / kIndexBucket);

Should this be index_.reserve((bitfield_.size() / kIndexBucket) + 1);, or is the index not used for the last bucket, where the bucket size < kIndexBucket?

Contributor (author):

No, I think this is right. This provides an index of the number of set bits at the start of every kIndexBucket bits, so it rounds down.


for (int64_t idx = 0; idx < int64_t(bitfield_.size()); idx += kIndexBucket) {
index_.push_back(counter);
int64_t const left = std::min(int64_t(bitfield_.size()) - idx, kIndexBucket);
counter += std::count(it, it + left, true);
it += left;
}
}

std::pair<uint64_t, uint64_t> lookup(uint64_t pos, uint64_t offset) const
{
uint64_t const bucket = pos / kIndexBucket;

assert(bucket < index_.size());
assert(pos < bitfield_.size());
Contributor:

assert(pos < bitfield_.size());

I see. So, it's okay for pos >= kIndexBucket?

Contributor (author):

Yes, they are different domains. kIndexBucket is the interval at which counts of set bits are precomputed; the number of such precomputed counts is bitfield_.size() / kIndexBucket.

It's OK for pos to be greater than bucket * kIndexBucket, but only as long as it's < bitfield_.size().

assert(pos + offset < bitfield_.size());

uint64_t const base = index_[bucket];

uint64_t const diff = std::count(
bitfield_.begin() + (bucket * kIndexBucket), bitfield_.begin() + pos, true);
uint64_t const new_offset = std::count(
bitfield_.begin() + pos
, bitfield_.begin() + pos + offset, true);

assert(new_offset <= offset);

return { base + diff, new_offset };
}
private:
std::vector<bool> const& bitfield_;
std::vector<uint64_t> index_;
};
