Merge pull request #11 from hkimura/develop
Cursor API for Sequential Storage
hkimura committed May 28, 2015
2 parents a512fc4 + f726c0b commit 81def0a
Showing 38 changed files with 2,015 additions and 129 deletions.
5 changes: 5 additions & 0 deletions README.markdown
@@ -159,6 +159,11 @@ Right click project, Click "Open Configuration", Click "Make" icon, "Number of s

Running Tests (For FOEDUS Developers)
--------
**First**, make sure you have set up the environment, especially hugepages/shared memory.
See the *Environment Setup* section in [foedus-core](foedus-core).
If a large number of tests fail, it's most likely caused by memory/permission issues.


Go to build folder, and:

ctest
4 changes: 2 additions & 2 deletions experiments-core/src/foedus/tpcc/README.markdown
@@ -24,7 +24,7 @@ with detailed steps to reproduce them. Unless individually noted,

Target Systems
-----------------
Under this folder, we have experiments to test three sets of systems.
Under this folder, we have experiments to run three systems.

* FOEDUS. Scripts/programs in this folder.
* H-Store. Files under [hstore_related](hstore_related).
@@ -33,7 +33,7 @@ Under this folder, we have experiments to test three sets of systems.
This document focuses on FOEDUS experiments. Go to sub folders for H-Store/SILO experiments.

Actually, we have a fourth system in the paper, Shore-MT.
What we used was HP's intenval version with Foster B-tree and a few other improvements.
What we used was HP's internal version with Foster B-tree and a few other improvements.
It is not OSS-ed. Most likely EPFL's version of Shore-MT has similar performance, though.

Machine Sizing and Scripts
13 changes: 13 additions & 0 deletions foedus-core/include/foedus/memory/engine_memory.hpp
@@ -56,6 +56,19 @@ class EngineMemory CXX11_FINAL : public DefaultInitializable {
ErrorStack initialize_once() CXX11_OVERRIDE;
ErrorStack uninitialize_once() CXX11_OVERRIDE;

/**
* As part of the global shared memory, we reserve this size of 'user memory' that can be
* used for arbitrary purposes by the user to communicate between SOCs.
* @see get_shared_user_memory_size()
*/
void* get_shared_user_memory() const;
/**
* @returns the byte size of shared user-controlled memory.
* Equivalent to SocOptions.shared_user_memory_size_kb_ << 10.
* @see get_shared_user_memory()
*/
uint64_t get_shared_user_memory_size() const;

// accessors for child memories
NumaNodeMemory* get_local_memory() const {
return local_memory_;
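As a rough illustration of the new accessors, the sketch below publishes a small application-defined header into the shared user memory so other SOCs can read it. Only `get_shared_user_memory()` and `get_shared_user_memory_size()` come from this commit; the `Engine::get_memory_manager()` accessor, the `MyAppHeader` layout, and the error handling are illustrative assumptions.

```cpp
// Hypothetical sketch, not part of this commit.
#include <cstdint>
#include <cstring>

#include "foedus/engine.hpp"
#include "foedus/memory/engine_memory.hpp"

struct MyAppHeader {          // arbitrary user-defined layout
  uint64_t initialized_flag_;
  uint64_t worker_count_;
};

bool publish_app_header(foedus::Engine* engine, uint64_t workers) {
  foedus::memory::EngineMemory* memory = engine->get_memory_manager();  // assumed accessor
  if (memory->get_shared_user_memory_size() < sizeof(MyAppHeader)) {
    return false;  // increase SocOptions.shared_user_memory_size_kb_
  }
  MyAppHeader header = {1, workers};
  // The same region is visible to all SOCs, so a child SOC can read it back later.
  std::memcpy(memory->get_shared_user_memory(), &header, sizeof(header));
  return true;
}
```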
12 changes: 12 additions & 0 deletions foedus-core/include/foedus/memory/page_pool.hpp
@@ -187,6 +187,18 @@ class PagePool CXX11_FINAL : public virtual Initializable {

storage::Page* get_base() const;
uint64_t get_memory_size() const;
/**
* @returns recommended number of pages to grab at once.
* @details
* Especially in test cases, grabbing a chunk-full of pages (4k pages) at a time
* immediately exhausts the free pages
* unless we allocate a huge pool for each run, which would make test time significantly longer.
* For example, if the pool has only 1024 pages, it doesn't make sense to
* grab 2000 pages at a time! As soon as one thread does that, all other threads
* will get an out-of-memory error although the culprit probably needs only a few pages.
* To avoid that, we respect this value in most places.
*/
uint32_t get_recommended_pages_per_grab() const;
Stat get_stat() const;
std::string get_debug_pool_name() const;
/** Call this anytime after attach() */
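A minimal sketch of how a caller might respect the new hint when grabbing pages. Only `get_recommended_pages_per_grab()` is introduced here; the exact signatures of `PagePool::grab()` and `PagePoolOffsetChunk` are assumed from the surrounding header.

```cpp
// Hypothetical sketch, not part of this commit.
#include <algorithm>
#include <cstdint>

#include "foedus/error_code.hpp"
#include "foedus/memory/page_pool.hpp"

foedus::ErrorCode grab_some_pages(
  foedus::memory::PagePool* pool,
  foedus::memory::PagePoolOffsetChunk* chunk,
  uint32_t desired) {
  // Cap each grab so that a small test pool is not exhausted by a single thread.
  uint32_t count = std::min(desired, pool->get_recommended_pages_per_grab());
  return pool->grab(count, chunk);  // assumed signature: grab(desired_count, chunk)
}
```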
7 changes: 7 additions & 0 deletions foedus-core/include/foedus/savepoint/savepoint.hpp
@@ -64,6 +64,13 @@ struct Savepoint CXX11_FINAL : public virtual externalize::Externalizable {
*/
Epoch::EpochInteger durable_epoch_;

/**
* The earliest epoch that can exist in this system. It advances only after we
* do a system-wide compaction. Until then, it stays kEpochInitialDurable, which is indeed
* the earliest epoch until the epoch wraps around.
*/
Epoch::EpochInteger earliest_epoch_;

/**
* The most recent complete snapshot.
* kNullSnapshotId if no snapshot has been taken yet.
1 change: 1 addition & 0 deletions foedus-core/include/foedus/savepoint/savepoint_manager.hpp
@@ -57,6 +57,7 @@ class SavepointManager CXX11_FINAL : public virtual Initializable {

Epoch get_initial_current_epoch() const;
Epoch get_initial_durable_epoch() const;
Epoch get_earliest_epoch() const;
Epoch get_saved_durable_epoch() const;
snapshot::SnapshotId get_latest_snapshot_id() const;
Epoch get_latest_snapshot_epoch() const;
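The new getter pairs naturally with the existing durable-epoch accessor when validating an epoch a caller wants to act on. The helper below is a hypothetical sketch; the `Engine::get_savepoint_manager()` accessor and Epoch's comparison operators are assumptions based on the surrounding headers, and only `get_earliest_epoch()` is added by this commit.

```cpp
// Hypothetical sketch, not part of this commit.
#include "foedus/engine.hpp"
#include "foedus/epoch.hpp"
#include "foedus/savepoint/savepoint_manager.hpp"

// Returns true if the given epoch is within the range the system can still reason about:
// not older than the earliest epoch, and already durable.
bool is_actionable_epoch(foedus::Engine* engine, foedus::Epoch epoch) {
  foedus::savepoint::SavepointManager* savepoint = engine->get_savepoint_manager();  // assumed
  return epoch.is_valid()
    && savepoint->get_earliest_epoch() <= epoch
    && epoch <= savepoint->get_saved_durable_epoch();
}
```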
@@ -54,6 +54,7 @@ struct SavepointManagerControlBlock {
std::atomic<bool> master_initialized_;
Epoch::EpochInteger initial_current_epoch_;
Epoch::EpochInteger initial_durable_epoch_;
Epoch::EpochInteger earliest_epoch_;

/**
* savepoint thread sleeps on this condition variable.
@@ -115,6 +116,7 @@ class SavepointManagerPimpl final : public DefaultInitializable {

Epoch get_initial_current_epoch() const { return Epoch(control_block_->initial_current_epoch_); }
Epoch get_initial_durable_epoch() const { return Epoch(control_block_->initial_durable_epoch_); }
Epoch get_earliest_epoch() const { return Epoch(control_block_->earliest_epoch_); }
Epoch get_saved_durable_epoch() const { return Epoch(control_block_->saved_durable_epoch_); }
snapshot::SnapshotId get_latest_snapshot_id() const {
return control_block_->savepoint_.latest_snapshot_id_;
7 changes: 6 additions & 1 deletion foedus-core/include/foedus/snapshot/snapshot_manager.hpp
@@ -64,11 +64,16 @@ class SnapshotManager CXX11_FINAL : public virtual Initializable {
/**
* @brief Immediately take a snapshot
* @param[in] wait_completion whether to block until the completion of entire snapshotting
* @param[in] suggested_snapshot_epoch the epoch up to which we will snapshot.
* Must be a durable epoch that is after the previous snapshot epoch.
* If not specified, the latest durable epoch is used, which is in most cases what you want.
* @details
* This method is used to immediately take a snapshot for either recovery or memory-saving
* purposes.
*/
void trigger_snapshot_immediate(bool wait_completion);
void trigger_snapshot_immediate(
bool wait_completion,
Epoch suggested_snapshot_epoch = INVALID_EPOCH);

/** Do not use this unless you know what you are doing. */
SnapshotManagerPimpl* get_pimpl() { return pimpl_; }
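A short sketch of calling the extended API with an explicit target epoch; only the new `suggested_snapshot_epoch` parameter comes from this commit, while the `Engine::get_snapshot_manager()` accessor is an assumption.

```cpp
// Hypothetical sketch, not part of this commit.
#include "foedus/engine.hpp"
#include "foedus/epoch.hpp"
#include "foedus/snapshot/snapshot_manager.hpp"

void snapshot_up_to(foedus::Engine* engine, foedus::Epoch target_epoch) {
  // target_epoch must be durable and newer than the previous snapshot epoch.
  // Passing an invalid epoch (the default) falls back to the latest durable epoch.
  foedus::snapshot::SnapshotManager* manager = engine->get_snapshot_manager();  // assumed
  manager->trigger_snapshot_immediate(true, target_epoch);  // block until completion
}
```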
@@ -222,7 +222,9 @@ class SnapshotManagerPimpl final : public DefaultInitializable {

ErrorStack read_snapshot_metadata(SnapshotId snapshot_id, SnapshotMetadata* out);

void trigger_snapshot_immediate(bool wait_completion);
void trigger_snapshot_immediate(
bool wait_completion,
Epoch suggested_snapshot_epoch);
/**
* This is a hidden API called at the beginning of engine shutdown (namely restart manager).
* Snapshot Manager initializes before Storage because it must \e read previous snapshot,
8 changes: 8 additions & 0 deletions foedus-core/include/foedus/soc/soc_manager.hpp
@@ -76,6 +76,14 @@ class SocManager CXX11_FINAL : public virtual Initializable {
/** Returns the shared memories maintained across SOCs */
SharedMemoryRepo* get_shared_memory_repo();

/** Shortcut for get_shared_memory_repo()->get_global_user_memory() */
void* get_shared_user_memory() const;
/**
* @returns the byte size of shared user-controlled memory.
* Equivalent to SocOptions.shared_user_memory_size_kb_ << 10.
*/
uint64_t get_shared_user_memory_size() const;

/**
* @brief Wait for master engine to finish init/uninit the module.
*/
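To complement the EngineMemory example above, a child SOC could use the SocManager shortcut to read the same region. Only the two shortcut methods come from this commit; the `Engine::get_soc_manager()` accessor and the `MyAppHeader` struct are the same illustrative assumptions as before.

```cpp
// Hypothetical sketch, not part of this commit.
#include <cstdint>
#include <cstring>

#include "foedus/engine.hpp"
#include "foedus/soc/soc_manager.hpp"

uint64_t read_worker_count(foedus::Engine* child_engine) {
  foedus::soc::SocManager* soc = child_engine->get_soc_manager();  // assumed accessor
  MyAppHeader header;  // the struct published by the master engine in the earlier sketch
  std::memcpy(&header, soc->get_shared_user_memory(), sizeof(header));
  return header.worker_count_;
}
```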
2 changes: 2 additions & 0 deletions foedus-core/include/foedus/storage/sequential/fwd.hpp
@@ -32,6 +32,8 @@ struct SequentialMetadata;
class SequentialPage;
class SequentialPartitioner;
class SequentialRootPage;
struct SequentialRecordBatch;
class SequentialRecordIterator;
class SequentialStorage;
struct SequentialStorageControlBlock;
class SequentialStorageFactory;
@@ -30,6 +30,7 @@
#include "foedus/storage/page.hpp"
#include "foedus/storage/storage_id.hpp"
#include "foedus/storage/sequential/fwd.hpp"
#include "foedus/storage/sequential/sequential_page_impl.hpp"

namespace foedus {
namespace storage {
@@ -58,14 +59,14 @@ namespace sequential {
*/
class SequentialComposer final {
public:
/** Output of one compose() call, which are then combined in construct_root(). */
/**
* Output of one compose() call, which are then combined in construct_root().
* Each compose() returns just one pointer to a head page.
*/
struct RootInfoPage final {
PageHeader header_; // +16 -> 16
/** Number of pointers stored in this page. */
uint32_t pointer_count_; // +4 -> 20
uint32_t dummy_; // +4 -> 24
/** Pointers to head pages. */
SnapshotPagePointer pointers_[(kPageSize - 24) / 8]; // -> 4096
PageHeader header_;
HeadPagePointer pointer_;
char filler_[kPageSize - sizeof(PageHeader) - sizeof(HeadPagePointer)];
};

explicit SequentialComposer(Composer *parent);
@@ -77,9 +78,12 @@ class SequentialComposer final {
Composer::DropResult drop_volatiles(const Composer::DropVolatilesArguments& args);

private:
SequentialPage* compose_new_head(
SequentialPage* compose_new_head(snapshot::SnapshotWriter* snapshot_writer);
ErrorStack dump_pages(
snapshot::SnapshotWriter* snapshot_writer,
RootInfoPage* root_info_page);
bool last_dump,
uint32_t allocated_pages,
uint64_t* total_pages);

Engine* const engine_;
const StorageId storage_id_;
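The reworked RootInfoPage now carries a single HeadPagePointer plus filler instead of an array of pointers, so each compose() call hands exactly one page-sized record to construct_root(). A tiny hedged check of that invariant (assuming the struct has no internal padding and this header name, which is not confirmed by the visible diff):

```cpp
// Hypothetical sketch, not part of this commit.
#include "foedus/storage/page.hpp"
#include "foedus/storage/sequential/sequential_composer_impl.hpp"  // assumed header name

static_assert(
  sizeof(foedus::storage::sequential::SequentialComposer::RootInfoPage)
    == foedus::storage::kPageSize,
  "RootInfoPage must occupy exactly one page");
```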
