feat(store): migrate db backends to store/db and remove the cosmos-db deps completely #21765
base: main
Conversation
@cool-develope your pull request is missing a changelog!
Walkthrough

The changes encompass a significant transition from the cosmos-db dependency to the new store/db package.

Changes
Possibly related PRs
Suggested labels
Suggested reviewers
Recent review details

Configuration used: .coderabbit.yml

Files ignored due to path filters (2)
Files selected for processing (13)
Files skipped from review as they are similar to previous changes (13)
Actionable comments posted: 2
Outside diff range and nitpick comments (3)
orm/go.mod (3)

8-8: Update the version before the final release.
The addition of the cosmossdk.io/core/testing dependency with a pseudo-version is acceptable for development purposes. However, ensure that the version is updated to a proper release version before the final release.

53-53: Consider updating to a stable release version before the final release.
Updating the version of the github.com/pmezard/go-difflib dependency to v1.0.1-0.20181226105442-5d4384ee4fb2 is acceptable for development purposes. However, consider updating it to a stable release version before the final release to ensure stability and reliability.

62-62: Remove the replace directives before the final release.
The addition of the indirect dependency github.com/tidwall/btree v1.7.0 is a result of updating other dependencies in the module and is acceptable. The replace directives are used to redirect the cosmossdk.io/core/testing and cosmossdk.io/store dependencies to local paths, which is a common practice during development or testing. However, ensure that these replace directives are removed before the final release to use the official released versions of the dependencies.

Also applies to: 71-74
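For readers unfamiliar with the mechanism, a replace block of the kind being flagged looks roughly like this (illustrative fragment only — the actual relative paths in this module's go.mod may differ):

```
// go.mod (fragment, illustrative): development-time redirects that
// must be dropped before tagging a release
replace (
	cosmossdk.io/core/testing => ./core/testing
	cosmossdk.io/store => ./store
)
```

With the directives in place, `go build` resolves those module paths from the local checkout instead of the published versions, which is why they must not survive into a tagged release.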
Review details
Configuration used: .coderabbit.yml
Review profile: CHILL
Files ignored due to path filters (16)
- go.sum is excluded by !**/*.sum
- orm/go.sum is excluded by !**/*.sum
- simapp/go.sum is excluded by !**/*.sum
- simapp/v2/go.sum is excluded by !**/*.sum
- store/go.sum is excluded by !**/*.sum
- tests/go.sum is excluded by !**/*.sum
- x/accounts/defaults/lockup/go.sum is excluded by !**/*.sum
- x/accounts/defaults/multisig/go.sum is excluded by !**/*.sum
- x/accounts/go.sum is excluded by !**/*.sum
- x/authz/go.sum is excluded by !**/*.sum
- x/bank/go.sum is excluded by !**/*.sum
- x/distribution/go.sum is excluded by !**/*.sum
- x/feegrant/go.sum is excluded by !**/*.sum
- x/gov/go.sum is excluded by !**/*.sum
- x/group/go.sum is excluded by !**/*.sum
- x/staking/go.sum is excluded by !**/*.sum
Files selected for processing (40)
- client/pruning/main.go (2 hunks)
- client/snapshot/restore.go (2 hunks)
- go.mod (0 hunks)
- orm/go.mod (4 hunks)
- orm/internal/testkv/leveldb.go (1 hunks)
- orm/internal/testkv/mem.go (3 hunks)
- orm/model/ormdb/module_test.go (2 hunks)
- orm/model/ormtable/bench_test.go (2 hunks)
- orm/model/ormtable/table_test.go (2 hunks)
- server/constructors_test.go (1 hunks)
- server/mock/app.go (1 hunks)
- server/start.go (2 hunks)
- server/util.go (4 hunks)
- server/util_test.go (2 hunks)
- simapp/go.mod (0 hunks)
- simapp/v2/go.mod (0 hunks)
- simsx/runner.go (2 hunks)
- store/db/db.go (1 hunks)
- store/db/db_test.go (1 hunks)
- store/db/goleveldb.go (1 hunks)
- store/db/pebbledb.go (1 hunks)
- store/db/rocksdb.go (1 hunks)
- store/db/rocksdb_noflag.go (1 hunks)
- store/db/rocksdb_test.go (1 hunks)
- store/go.mod (3 hunks)
- tests/go.mod (0 hunks)
- x/accounts/defaults/multisig/go.mod (0 hunks)
- x/accounts/go.mod (0 hunks)
- x/authz/go.mod (0 hunks)
- x/bank/go.mod (0 hunks)
- x/circuit/go.mod (0 hunks)
- x/distribution/go.mod (0 hunks)
- x/evidence/go.mod (0 hunks)
- x/feegrant/go.mod (0 hunks)
- x/gov/go.mod (0 hunks)
- x/group/go.mod (0 hunks)
- x/mint/go.mod (0 hunks)
- x/protocolpool/go.mod (0 hunks)
- x/staking/go.mod (0 hunks)
- x/upgrade/go.mod (0 hunks)
Files not reviewed due to no reviewable changes (18)
- go.mod
- simapp/go.mod
- simapp/v2/go.mod
- tests/go.mod
- x/accounts/defaults/multisig/go.mod
- x/accounts/go.mod
- x/authz/go.mod
- x/bank/go.mod
- x/circuit/go.mod
- x/distribution/go.mod
- x/evidence/go.mod
- x/feegrant/go.mod
- x/gov/go.mod
- x/group/go.mod
- x/mint/go.mod
- x/protocolpool/go.mod
- x/staking/go.mod
- x/upgrade/go.mod
Files skipped from review due to trivial changes (1)
- orm/internal/testkv/leveldb.go
Additional context used
Path-based instructions (19)
store/db/rocksdb_test.go (2)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
- Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

server/constructors_test.go (2)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
- Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

orm/internal/testkv/mem.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

store/db/db.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

client/snapshot/restore.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

store/db/rocksdb_noflag.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

store/db/db_test.go (2)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
- Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

client/pruning/main.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

server/mock/app.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

store/db/pebbledb.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

orm/model/ormtable/bench_test.go (2)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
- Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

store/db/rocksdb.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

store/db/goleveldb.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

orm/model/ormdb/module_test.go (2)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
- Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

simsx/runner.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

server/util_test.go (2)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
- Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

server/util.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.

orm/model/ormtable/table_test.go (2)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
- Pattern **/*_test.go: "Assess the unit test code assessing sufficient code coverage for the changes associated in the pull request"

server/start.go (1)
- Pattern **/*.go: Review the Golang code for conformity with the Uber Golang style guide, highlighting any deviations.
Additional comments not posted (103)
store/db/rocksdb_test.go (1)
13-20
: LGTM!The test function
TestRocksDBSuite
is well-structured and follows best practices:
- It initializes a RocksDB instance using a temporary directory for storage during the test, ensuring isolation and preventing conflicts with other tests.
- It properly handles any errors encountered during the initialization of the database using the
require.NoError
assertion from the testify library.
- It executes the DBTestSuite with the initialized RocksDB instance, allowing for comprehensive testing of the database's functionality.
- It follows the naming convention for test functions in Go (TestXxx).
- It utilizes the testify library, which is a common and well-established testing library in the Go ecosystem.

Overall, the test function is well-implemented and should effectively test the RocksDB functionality.
server/constructors_test.go (1)
8-9
: LGTM!The import path update aligns with the PR objective of migrating database backends to the
store/db
package. This change is consistent with the overall direction of the pull request.

orm/internal/testkv/mem.go (2)
13-14
: LGTM! The changes align with the PR objective.The migration from
dbm.NewMemDB()
to coretesting.NewMemDB()
for creating memory databases in the CommitmentStore
and IndexStore
aligns with the PR objective of removing the cosmos-db
dependencies.
23-23
: LGTM! The changes align with the PR objective.The migration from
dbm.NewMemDB()
to coretesting.NewMemDB()
for creating the memory database in the CommitmentStore
aligns with the PR objective of removing the cosmos-db
dependencies.

store/db/db.go (5)
1-9
: LGTM!The package declaration, import statements, and the
DBType
type definition look good.
11-18
: LGTM!The constants for database types and file suffix are well-defined and cover common use cases.
20-23
: LGTM!The
DBOptions
interface provides a flexible way to specify database options, and theGet
method signature is appropriate for a key-value store.
25-35
: LGTM!The
NewDB
function provides a clean and extensible way to create database instances based on the specified type. The switch statement handles the supported database types effectively.
37-38
: LGTM!Returning an error for unsupported database types is a good way to handle invalid input and provide meaningful feedback to the caller.
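The factory shape described in these comments can be sketched with a stdlib-only model (all names here are hypothetical stand-ins for the reviewed store/db types, and the map-backed backend is purely illustrative):

```go
package main

import "fmt"

// KVStore is a minimal stand-in for the key-value interface the
// backends implement (simplified for illustration).
type KVStore interface {
	Get(key []byte) ([]byte, error)
	Set(key, value []byte) error
}

// memDB is a toy in-memory backend.
type memDB struct{ data map[string][]byte }

func newMemDB() *memDB { return &memDB{data: make(map[string][]byte)} }

func (m *memDB) Get(key []byte) ([]byte, error) { return m.data[string(key)], nil }
func (m *memDB) Set(key, value []byte) error {
	m.data[string(key)] = value
	return nil
}

// DBType mirrors the string-based backend selector.
type DBType string

const DBTypeMemDB DBType = "memdb"

// NewDB dispatches on the backend type and returns an error for
// unsupported types, the same shape the review describes.
func NewDB(dbType DBType) (KVStore, error) {
	switch dbType {
	case DBTypeMemDB:
		return newMemDB(), nil
	}
	return nil, fmt.Errorf("unsupported db type: %s", dbType)
}

func main() {
	db, err := NewDB(DBTypeMemDB)
	if err != nil {
		panic(err)
	}
	_ = db.Set([]byte("k"), []byte("v"))
	v, _ := db.Get([]byte("k"))
	fmt.Println(string(v))
}
```

The switch keeps backend selection in one place, so adding a backend means adding one case rather than touching every caller.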
client/snapshot/restore.go (3)
11-11
: LGTM!The import path update for the
dbm
package aligns with the PR objective of migrating database backends to a new location within the project structure.
55-55
: LGTM!The update to the
dbm.NewDB
function call, including the additionalnil
argument, aligns with the changes in thedbm
package.
53-53
: Verify the function signature change in the codebase.The change in the
openDB
function signature, specifically the parameter type forbackendType
, aligns with the updates in thedbm
package.Run the following script to verify the function usage:
Verification successful
Function signature change verified and consistently applied
The
openDB
function signature change has been successfully implemented across the codebase. All definitions and calls to the function are using the new dbm.DBType
parameter type. No inconsistencies or potential issues were found.

Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify all function calls to `openDB` match the new signature.
# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg --type go -A 5 $'openDB'
```

Length of output: 1926
store/db/rocksdb_noflag.go (14)
6-6
: LGTM!The import statement and the package alias follow the Uber Go Style Guide.
8-8
: LGTM!The type assertion correctly ensures that the
RocksDB
type implements thecorestore.KVStoreWithBatch
interface.
10-13
: LGTM!The
RocksDB
struct is correctly defined, and the comment provides a clear explanation of its purpose and usage.
15-17
: LGTM!The
NewRocksDB
function is correctly defined, and the panic statement is used to indicate that the function is not implemented when therocksdb
tag is not set.
19-21
: LGTM!The
NewRocksDBWithOpts
function is correctly defined, and the panic statement is used to indicate that the function is not implemented when therocksdb
tag is not set.
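The stub-behind-a-build-tag pattern these comments describe can be sketched as a self-contained model (the tag name and the reduced method set here are hypothetical; the real file guards on the rocksdb tag and stubs the full corestore.KVStoreWithBatch surface):

```go
//go:build !rocksdbexample

package main

import "fmt"

// RocksDB models the stub type compiled when the real backend is not
// built in; every method panics so accidental use fails loudly instead
// of silently misbehaving.
type RocksDB struct{}

// NewRocksDB panics, signalling that the binary was built without
// RocksDB support.
func NewRocksDB(name, dataDir string) (*RocksDB, error) {
	panic("rocksdb support requires building with -tags rocksdbexample")
}

// Get panics for the same reason.
func (db *RocksDB) Get(key []byte) ([]byte, error) {
	panic("rocksdb support requires building with -tags rocksdbexample")
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	_, _ = NewRocksDB("application", "/tmp/data")
}
```

A second file carrying the inverse constraint (the real implementation) is compiled instead when the tag is supplied, so the exported API is identical either way.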
23-25
: LGTM!The
Close
method is correctly defined, and the panic statement is used to indicate that the method is not implemented when therocksdb
tag is not set.
27-29
: LGTM!The
Get
method is correctly defined, and the panic statement is used to indicate that the method is not implemented when therocksdb
tag is not set.
31-33
: LGTM!The
Has
method is correctly defined, and the panic statement is used to indicate that the method is not implemented when therocksdb
tag is not set.
35-37
: LGTM!The
Set
method is correctly defined, and the panic statement is used to indicate that the method is not implemented when therocksdb
tag is not set.
39-41
: LGTM!The
Delete
method is correctly defined, and the panic statement is used to indicate that the method is not implemented when therocksdb
tag is not set.
43-45
: LGTM!The
Iterator
method is correctly defined, and the panic statement is used to indicate that the method is not implemented when therocksdb
tag is not set.
47-49
: LGTM!The
ReverseIterator
method is correctly defined, and the panic statement is used to indicate that the method is not implemented when therocksdb
tag is not set.
51-53
: LGTM!The
NewBatch
method is correctly defined, and the panic statement is used to indicate that the method is not implemented when therocksdb
tag is not set.
55-57
: LGTM!The
NewBatchWithSize
method is correctly defined, and the panic statement is used to indicate that the method is not implemented when therocksdb
tag is not set.

orm/go.mod (2)
11-11
: LGTM!Updating the version of the
cosmossdk.io/store
dependency tov1.1.1
is a good practice to incorporate the latest features, bug fixes, and improvements.
18-18
: LGTM!Updating the version of the
golang.org/x/exp
dependency to v0.0.0-20240506185415-9bf2ced13842
is a good practice to incorporate the latest features, bug fixes, and improvements.

store/go.mod (4)
13-13
: LGTM!The addition of the
github.com/cockroachdb/pebble
dependency is a good choice for enhancing data management capabilities. The specified versionv1.1.0
is a stable release.
24-25
: Looks good!The inclusion of
github.com/linxGnu/grocksdb
andgitpro.ttaallkk.top/spf13/cast
dependencies is beneficial for enhancing database handling and type casting functionalities, respectively. The specified versions are stable releases.
27-27
: Verify the stability and necessity of the specificgoleveldb
version.The
github.com/syndtr/goleveldb
dependency has been updated to a specific commit hash126854af5e6d
. Please ensure that this version is stable, well-tested, and necessary for the desired functionality.
37-43
: Verify the compatibility and security of the new indirect dependencies.Several new indirect dependencies have been introduced, which can enhance the project's observability, error handling, and performance. However, it's crucial to ensure that these dependencies are compatible with the project and don't introduce any security vulnerabilities.
Please review the indirect dependencies and confirm their compatibility and security.
Also applies to: 47-47, 50-50, 56-58, 62-62, 67-70
store/db/db_test.go (5)
13-17
: LGTM!The
DBTestSuite
struct is well-defined, embeddingsuite.Suite
and including adb
field for testing database operations. It follows the conventions of thetestify
package.
19-21
: LGTM!The
TearDownSuite
method properly closes the database connection after the test suite runs, ensuring proper resource cleanup. It uses theRequire
assertion to check for any errors during the close operation, following the conventions of thetestify
package.
23-69
: LGTM!The
TestDBOperations
method comprehensively tests various database operations, including batch set, get, has, delete, and individual set and delete. It uses appropriate assertions to verify the expected behavior and covers a wide range of scenarios. The test cases are well-structured and provide good coverage for the database operations.
71-108
: LGTM!The
TestIterator
method thoroughly tests the iterator functionality of the database. It covers both forward and reverse iteration scenarios and uses appropriate assertions to verify the expected behavior. The test cases are well-structured, setting up the database with multiple key-value pairs and properly closing the iterators after use to prevent resource leaks. The method provides good coverage for the iterator functionality.
110-133
: LGTM!The test functions
TestPebbleDBSuite
,TestGoLevelDBSuite
, andTestPrefixDBSuite
ensure that theDBTestSuite
is executed against different database implementations. This approach verifies that the database operations work correctly across various backends.The functions initialize the respective databases, using
require.NoError
to check for any errors during initialization. Running theDBTestSuite
against each database implementation provides comprehensive test coverage.The use of
t.TempDir()
is a good practice, ensuring that the databases are created in a temporary directory and cleaned up after the tests, preventing any persistent state between test runs.

client/pruning/main.go (2)
111-113
: Verify the consistency of theopenDB
function usage across the codebase.The changes in the
openDB
function signature and the additional argument in thedbm.NewDB
call align with the updates in thedbm
package. Ensure that all calls to theopenDB
function have been updated to match the new signature and provide the required arguments.Run the following script to verify the consistency of the
openDB
function usage:Verification successful
The
openDB
function usage is consistent across the codebase.

The verification process confirms that the changes to the
openDB
function have been properly implemented and consistently used throughout the codebase. All occurrences of the function definition and its calls match the new signature, including the use of dbm.DBType
and the additional nil
argument in dbm.NewDB.
No further action is required.

Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify that all calls to the `openDB` function match the new signature.
# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg --type go -A 5 $'openDB'
```

Length of output: 1926
13-13
: Verify the consistency of the import path update across the codebase.The change in the import path for the
dbm
package aligns with the broader restructuring of the Cosmos SDK. Ensure that all references to the old import path have been updated consistently across the codebase to avoid compilation errors.Run the following script to verify the consistency of the import path update:
Verification successful
Import path update verified and consistently applied across the codebase.
The change in the import path for the
dbm
package fromgitpro.ttaallkk.top/cosmos/cosmos-db
tocosmossdk.io/store/db
has been successfully and consistently applied across the entire codebase. This includes the file under review (client/pruning/main.go
) and multiple other Go files in the repository.

Key findings:
- No occurrences of the old import path were found.
- The new import path is used in various files, indicating a comprehensive update.
- The change aligns with the broader restructuring of the Cosmos SDK, as initially suspected.
This verification confirms that the original review comment was correct in approving the change and that the suggested consistency check has been satisfied.
Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify that all references to the old `dbm` package import path have been updated.
# Test: Search for the old import path. Expect: No occurrences.
rg --type go $'github.com/cosmos/cosmos-db'
```

Length of output: 42
Script:
```shell
#!/bin/bash
# Description: Verify the import path update and repository structure

# Test 1: Check if rg (ripgrep) is available
echo "Checking ripgrep availability:"
rg --version

# Test 2: Verify the presence of Go files
echo -e "\nChecking for Go files:"
find . -name "*.go" | head -n 5

# Test 3: Search for the new import path
echo -e "\nSearching for new import path:"
grep -R --include="*.go" "cosmossdk.io/store/db" .
```

Length of output: 2056
server/mock/app.go (1)
18-18
: LGTM!The import path change for
dbm
fromgitpro.ttaallkk.top/cosmos/cosmos-db
tocosmossdk.io/store/db
aligns with the PR objective of migrating database backends and removingcosmos-db
dependencies. This change promotes better integration and consistency within the Cosmos SDK ecosystem.

store/db/pebbledb.go (13)
21-23
: LGTM!The
PebbleDB
struct has a clean and straightforward design, encapsulating the underlying PebbleDB instance in a single field.
25-27
: LGTM!The
NewPebbleDB
function provides a convenient way to create a newPebbleDB
instance with default options by delegating toNewPebbleDBWithOpts
withnil
options.
29-49
: LGTM!The
NewPebbleDBWithOpts
function provides flexibility in creating a newPebbleDB
instance with custom options. It sets sensible defaults and allows overriding options through the providedDBOptions
. Error handling is done correctly.
51-55
: LGTM!The
Close
method correctly closes the underlying PebbleDB database and sets thestorage
field tonil
to prevent further usage. It returns any error encountered during the close operation.
57-77
: LGTM!The
Get
method correctly retrieves the value associated with a given key from the underlying PebbleDB database. It performs necessary error handling, such as checking for empty keys and handling specific PebbleDB errors. It also takes care of closing the value closer and returningnil
for empty values.
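The copy-then-close discipline praised here can be modeled without pebble itself (toy types, hypothetical names; pebble's real Get returns the value alongside an io.Closer, and the bytes must be copied before the closer is closed because the engine may reuse the buffer):

```go
package main

import (
	"fmt"
	"io"
)

// closerFunc adapts a func to io.Closer, like a value closer.
type closerFunc func() error

func (f closerFunc) Close() error { return f() }

// toyStore hands out a shared buffer from Get; the slice is only valid
// until the returned closer is closed.
type toyStore struct{ buf []byte }

func (s *toyStore) Get(key []byte) ([]byte, io.Closer, error) {
	copy(s.buf, key) // pretend the lookup filled the shared buffer
	return s.buf[:len(key)], closerFunc(func() error {
		for i := range s.buf {
			s.buf[i] = 0 // invalidate, as a real engine may reuse the block
		}
		return nil
	}), nil
}

// safeGet copies the value before closing, the pattern the review
// describes for the pebble-backed Get.
func safeGet(s *toyStore, key []byte) ([]byte, error) {
	v, closer, err := s.Get(key)
	if err != nil {
		return nil, err
	}
	out := make([]byte, len(v))
	copy(out, v)
	if err := closer.Close(); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	s := &toyStore{buf: make([]byte, 8)}
	v, err := safeGet(s, []byte("abc"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(v)) // the copy survives the close
}
```

Skipping the copy and closing first would hand the caller a slice the engine is free to overwrite, which is exactly the bug this pattern prevents.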
79-86
: LGTM!The
Has
method provides a convenient way to check the existence of a key in the database. It leverages theGet
method internally, which handles the necessary error checking and value retrieval. The method returnstrue
if the key exists andfalse
otherwise, along with any error encountered.
88-97
: LGTM!The
Set
method correctly sets the value associated with a given key in the underlying PebbleDB database. It performs necessary error handling, such as checking for empty keys andnil
values. It uses the appropriate PebbleDBSet
method with theSync
option set tofalse
.
99-105
: LGTM!The
Delete
method correctly deletes the value associated with a given key from the underlying PebbleDB database. It performs necessary error handling, such as checking for empty keys. It uses the appropriate PebbleDBDelete
method with theSync
option set tofalse
.
107-118
: LGTM!The
Iterator
method correctly creates and returns an iterator over a domain of keys in ascending order. It performs necessary error handling, such as checking for empty start and end keys. It uses the appropriate PebbleDBNewIter
method with the specified key range. The returned iterator is a custompebbleDBIterator
type that wraps the underlying PebbleDB iterator.
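The bounded-range contract these iterator comments describe — keys in [start, end), optionally walked in descending order — can be modeled with a stdlib-only sketch (hypothetical helper; the real backends push the bounds down to the storage engine rather than filtering in memory):

```go
package main

import (
	"fmt"
	"sort"
)

// keysInRange returns the keys of a set that fall in [start, end),
// ascending by default or descending when reverse is true.
func keysInRange(keys []string, start, end string, reverse bool) []string {
	sort.Strings(keys)
	var out []string
	for _, k := range keys {
		if k >= start && k < end { // end is exclusive, matching the upper bound
			out = append(out, k)
		}
	}
	if reverse {
		for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
			out[i], out[j] = out[j], out[i]
		}
	}
	return out
}

func main() {
	keys := []string{"b", "a", "d", "c"}
	fmt.Println(keysInRange(keys, "a", "c", false)) // [a b]
	fmt.Println(keysInRange(keys, "a", "c", true))  // [b a]
}
```

The same [start, end) semantics apply to both directions; only the traversal order flips, which is why a single reverse flag suffices in the wrapped iterator.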
120-131
: LGTM!The
ReverseIterator
method correctly creates and returns an iterator over a domain of keys in descending order. It performs necessary error handling, such as checking for empty start and end keys. It uses the appropriate PebbleDBNewIter
method with the specified key range. The returned iterator is a custompebbleDBIterator
type that wraps the underlying PebbleDB iterator, with thereverse
flag set totrue
to indicate descending order.
133-138
: LGTM!The
NewBatch
method correctly creates and returns a new batch instance. It uses theNewBatch
method of the underlying PebbleDB database to create a new PebbleDB batch. The returned batch is a custompebbleDBBatch
type that wraps the underlying PebbleDB batch and holds a reference to the currentPebbleDB
instance.
140-145
: LGTM!The
NewBatchWithSize
method correctly creates and returns a new batch instance with a specified size. It uses theNewBatchWithSize
method of the underlying PebbleDB database to create a new PebbleDB batch with the specified size. The returned batch is a custompebbleDBBatch
type that wraps the underlying PebbleDB batch and holds a reference to the currentPebbleDB
instance.
149-155
: LGTM!The
pebbleDBIterator
struct provides a custom implementation of thecorestore.Iterator
interface using a PebbleDB iterator. It wraps the underlying PebbleDB iterator and adds additional functionality and state management. The struct fields are appropriately named and typed to represent the iterator's state and configuration.

orm/model/ormtable/bench_test.go (2)
13-13
: LGTM!The import statement for the
coretesting
package is correctly added with an alias. This aligns with the update in the memory database instantiation later in the file.
245-245
: LGTM!The memory database instantiation is correctly updated to use the
NewMemDB
function from thecoretesting
package. This change aligns with the import statement update and suggests a shift towards using a standardized memory database implementation for testing purposes. The overall functionality of the benchmark test remains intact.

store/db/rocksdb.go (13)
1-2
: LGTM!The build tags are correctly specified to ensure conditional compilation of the file only when the
rocksdb
tag is specified.
6-16
: LGTM!The import statements are correctly specified and relevant to the functionality of the file.
18-22
: LGTM!The variable declarations are appropriate:
defaultReadOpts
is initialized with default read options from thegrocksdb
package.- The empty interface type assertion ensures that the
RocksDB
struct implements thecorestore.KVStoreWithBatch
interface.
24-29
: LGTM!The
RocksDB
struct is defined appropriately:
- It encapsulates a
grocksdb.DB
instance.- The struct is intended to implement the
corestore.KVStoreWithBatch
interface.
31-48
: LGTM!The
defaultRocksdbOptions
function is implemented correctly:
- It sets various options for RocksDB, including block cache, filter policy, max open files, and write buffer size.
- The options are optimized for performance, considering factors like cache size, parallelism, and memory usage.
50-55
: LGTM!The
NewRocksDB
function is implemented correctly:
- It calls
defaultRocksdbOptions
to get the default options.- It sets the
CreateIfMissing
option totrue
.- It calls
NewRocksDBWithOpts
with the provided parameters and options to create a newRocksDB
instance.
57-67
: LGTM!The
NewRocksDBWithOpts
function is implemented correctly:
- It constructs the database path by joining
dataDir
,name
, andDBFileSuffix
.- It calls
grocksdb.OpenDb
with the provided options and database path to open the RocksDB database.- If an error occurs during opening the database, it returns the error wrapped with additional context.
- If the database is opened successfully, it returns a new
RocksDB
instance with the opened database.
69-73
: LGTM!The
Close
method is implemented correctly:
- It calls
Close
on thestorage
field to close the RocksDB database.- It sets the
storage
field tonil
to indicate that the database is closed.- The method returns
nil
as there are no error conditions.
75-82
: LGTM!The
Get
method is implemented correctly:
- It calls
GetBytes
on thestorage
field with default read options and the providedkey
.- If an error occurs during the get operation, it returns
nil
and the error.- If the get operation is successful, it returns the retrieved value and
nil
error.
84-91
: LGTM!The
Has
method is implemented correctly:
- It calls the
Get
method with the providedkey
.- If an error occurs during the get operation, it returns
false
and the error.- If the get operation is successful, it checks if the returned value is
nil
.- It returns
true
if the value is notnil
, indicating the key exists, andfalse
otherwise.
93-102
: LGTM!The
Set
method is implemented correctly:
- It checks if the
key
is empty and returnsErrKeyEmpty
if true.- It checks if the
value
isnil
and returnsErrValueNil
if true.- It calls
Put
on thestorage
field with default write options, the providedkey
, andvalue
.- It returns the error returned by the
Put
operation.
104-110
: LGTM!The
Delete
method is implemented correctly:
- It checks if the
key
is empty and returnsErrKeyEmpty
if true.- It calls
Delete
on thestorage
field with default write options and the providedkey
.- It returns the error returned by the
Delete
operation.
112-119
: LGTM!The
Iterator
method is implemented correctly:
- It checks if
start
orend
is notnil
and has a length of 0, returningErrKeyEmpty
if true.- It creates a new iterator using
NewIterator
on thestorage
field with default read options.- It calls
newRocksDBIterator
with the created iterator,start
,end
, andfalse
(indicating forward direction).- It returns the new
rocksDBIterator
andnil
error.

store/db/goleveldb.go (12)
29-42
: LGTM!The
NewGoLevelDB
constructor function sets reasonable default options and allows customization through the providedDBOptions
. The delegation toNewGoLevelDBWithOpts
for the actual creation with the default options is a good design choice.
44-51
: LGTM!The
NewGoLevelDBWithOpts
constructor function properly constructs the database file path and opens the LevelDB database using the provided options. The error handling and wrapping of the opened database in aGoLevelDB
instance are implemented correctly.
54-66
: LGTM!The
Get
method properly validates the key, handles the "not found" case, and returns the value and error according to the expected behavior of a key-value store. The error handling and return values are implemented correctly.
69-71
: LGTM!The
Has
method correctly delegates the existence check to the underlying LevelDB database and returns the result and error accordingly.
74-82
: LGTM!The
Set
method properly validates the key and value, returning appropriate errors for empty keys and nil values. It correctly sets the key-value pair using thePut
method of the underlying LevelDB database.
85-93
: LGTM!The
SetSync
method properly validates the key and value, returning appropriate errors for empty keys and nil values. It correctly sets the key-value pair using thePut
method of the underlying LevelDB database with theSync
option, ensuring the write is persisted to disk before returning.
96-101
: LGTM!The
Delete
method properly validates the key, returning an error for empty keys. It correctly deletes the key-value pair using theDelete
method of the underlying LevelDB database.
`104-109`: LGTM!

The `DeleteSync` method properly validates the key, returning an error for empty keys. It correctly deletes the key-value pair using the `Delete` method of the underlying LevelDB database with the `Sync` option, ensuring the deletion is persisted to disk before returning.
`111-113`: LGTM!

The `RawDB` method correctly returns the underlying LevelDB database instance, providing direct access to the raw database if needed.
`116-118`: LGTM!

The `Close` method correctly delegates the closing of the database to the underlying LevelDB database, ensuring proper shutdown and resource release.
`121-135`: LGTM!
`138-157`: LGTM!

The `Stats` method correctly retrieves various database statistics by iterating over a predefined list of statistic keys and calling the `GetProperty` method of the underlying LevelDB database. The retrieved statistics are stored in a map and returned, providing a convenient way to access the database statistics.

orm/model/ormdb/module_test.go (3)
`20-20`: LGTM!

The import statement for the `cosmossdk.io/core/testing` package is valid and the alias `coretesting` is clear.
`392-393`: Looks good!

The usage of `coretesting.NewMemDB()` is consistent with the updated import statement and aligns with the package change.
`398-398`: LGTM!

The usage of `coretesting.NewMemDB()` is consistent with the updated import statement and aligns with the package change.

simsx/runner.go (2)
`16-16`: LGTM!

The import statement change aligns with the PR objective of migrating database backends and removing `cosmos-db` dependencies.
`315-315`: Verify the `DBType` enum values.

The change to the `dbm.NewDB` function call looks good and aligns with the updated method of specifying the database backend type. However, please ensure that the `DBType` enum values match the expected database backend types to avoid any runtime issues.

Run the following script to verify the enum values, then check the output and confirm that the enum values align with the supported database backends (e.g., GoLevelDB, RocksDB, PebbleDB, etc.).
Verification successful

Change is consistent with codebase usage of DBType

The modification from `dbm.BackendType(tCfg.DBBackend)` to `dbm.DBType(tCfg.DBBackend)` is consistent with the codebase. `DBType` is defined as a string type and used throughout the project for specifying database backends.

While we don't see explicit constant declarations for `DBType` values, the usage in switch statements (e.g., `case DBTypeGoLevelDB:`) suggests that predefined values exist. The addition of the `nil` argument to the `NewDB` function call is also consistent with the function signatures found in the codebase.

To ensure complete accuracy, I recommend manually verifying that the `DBType` values used in `tCfg.DBBackend` match the expected database backend types (such as GoLevelDB, and any others used in the project).

Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify DBType enum values match the expected database backend types.

# Test: Search for the DBType enum declaration.
ast-grep --lang go --pattern $'type DBType int
const (
  $$$
)'
```
Length of output: 520
Script:
```shell
#!/bin/bash
# Description: Search for DBType declaration and enum values

# Search for DBType type declaration
echo "Searching for DBType type declaration:"
rg -t go "type\s+DBType\s+\w+" -g '!vendor/'

echo "\nSearching for DBType enum values:"
rg -t go "const\s+\([^)]*\bDBType" -g '!vendor/' -A 10
```
Length of output: 342
Script:
```shell
#!/bin/bash
# Description: Investigate DBType usage and potential string constants

echo "Searching for DBType usage in function signatures and variable declarations:"
rg -t go "DBType(\s+\w+|\))" -g '!vendor/' -A 2 -B 2

echo "\nSearching for potential DBType string constants:"
rg -t go "const\s+\(\s*\w+\s+DBType\s*=\s*\"" -g '!vendor/' -A 5

echo "\nSearching for functions related to DBType creation or validation:"
rg -t go "func.*DBType" -g '!vendor/' -A 2 -B 2
```
Length of output: 4513
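The verification's conclusion — a string-typed `DBType` dispatched in a `switch` inside the factory — can be illustrated with a minimal sketch. The constant names follow the PR's convention, but the factory body and return types here are simplified assumptions, not the actual `store/db` implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// DBType is a string-typed backend identifier, as the verification found in the codebase.
type DBType string

const (
	DBTypeGoLevelDB DBType = "goleveldb"
	DBTypePebbleDB  DBType = "pebbledb"
)

var errUnknownDBType = errors.New("unknown db type")

// NewDB dispatches on the backend type; the real constructors would open an
// actual database in dir, so a string stands in for the DB handle here.
func NewDB(name, dir string, dbType DBType) (string, error) {
	switch dbType {
	case DBTypeGoLevelDB:
		return "goleveldb:" + name, nil
	case DBTypePebbleDB:
		return "pebbledb:" + name, nil
	default:
		return "", fmt.Errorf("%w: %s", errUnknownDBType, dbType)
	}
}

func main() {
	db, err := NewDB("application", "/tmp", DBTypeGoLevelDB)
	fmt.Println(db, err) // goleveldb:application <nil>
}
```

Because `DBType` is a defined string type, a config value like `tCfg.DBBackend` converts with a plain `DBType(...)` conversion, which is exactly the change the review verified.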
server/util_test.go (2)
`46-50`: LGTM!

The test cases have been updated correctly to use the new database backend types from the `cosmossdk.io/store/db` package. The test logic remains sound.
`18-19`: LGTM!

The updated import statement for the database package does not affect the test logic. The test function remains unchanged and valid.
server/util.go (2)
`434-449`: Verify the function return type change in the codebase.

The changes to the `GetAppDBBackend` function look good. The updated return type `dbm.DBType` and the panic for unsupported backend types are appropriate.

However, as this is a breaking change, please ensure that all code that depends on this function is updated to handle the new return type `dbm.DBType` instead of `dbm.BackendType`.

Run the following script to verify the function usage:
Verification successful

GetAppDBBackend function usage is correct across the codebase

After reviewing the usage of the `GetAppDBBackend` function throughout the codebase, I can confirm that all instances are compatible with the updated return type `dbm.DBType`. The function is consistently used as an argument to database-opening functions or in comparisons with `dbm.DBType` values, which aligns with the new return type.

No further action is required regarding this change. The codebase appears to have been properly updated to handle the new return type.
Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify all calls to `GetAppDBBackend` handle the updated return type.

# Test: Search for the function usage. Expect: No type mismatches.
rg --type go -A 5 $'GetAppDBBackend'
```
Length of output: 4105
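As a rough illustration of the reviewed shape — resolving a configured backend string to a `DBType` and panicking on unsupported values — here is a hypothetical sketch; the supported set and the default are assumptions for illustration, not the SDK's actual list:

```go
package main

import "fmt"

type DBType string

const (
	DBTypeGoLevelDB DBType = "goleveldb"
	DBTypePebbleDB  DBType = "pebbledb"
)

// getAppDBBackend mimics the reviewed function's contract: it returns the
// configured backend as a DBType and panics on anything unsupported.
func getAppDBBackend(configured string) DBType {
	switch t := DBType(configured); t {
	case DBTypeGoLevelDB, DBTypePebbleDB:
		return t
	case "":
		return DBTypeGoLevelDB // hypothetical default when nothing is configured
	default:
		panic(fmt.Sprintf("unsupported db backend: %s", configured))
	}
}

func main() {
	fmt.Println(getAppDBBackend("pebbledb")) // pebbledb
	fmt.Println(getAppDBBackend(""))         // goleveldb
}
```

Panicking (rather than returning an error) makes a misconfigured backend fail loudly at startup, which is the behavior the review calls appropriate here.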
`477-480`: Verify the function signature change in the codebase.

The changes to the `OpenDB` function look good. The updated function signature with `dbm.DBType` and the additional `nil` argument to `dbm.NewDB` are appropriate.

However, as this is a breaking change, please ensure that all calls to `OpenDB` have been updated to match the new signature, passing the correct `dbm.DBType` and the additional `nil` argument.

Run the following script to verify the function usage:
Verification successful

Function signature change verified successfully

The updated `OpenDB` function signature has been correctly implemented and used across the codebase. All observed calls to `OpenDB` are using the correct number of arguments and appropriate types. No issues or mismatches were found in the function usage.

Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify all calls to `OpenDB` match the updated function signature.

# Test: Search for the function usage. Expect: No argument mismatches.
rg --type go -A 5 $'OpenDB'
```
Length of output: 2643
orm/model/ormtable/table_test.go (12)
`745-746`: LGTM!

The changes consistently update the memory database implementation used for testing the read-only backend. This aligns with the AI-generated summary.
Line range hint `752-765`: LGTM!

The test function correctly sets up the required components, calls the `InsertReturningFoo` method, and verifies the returned value. The implementation looks good.
Line range hint `708-728`: LGTM!

The helper function correctly compares two tables for equality by iterating over their contents and comparing the messages using `assert.DeepEqual` with `protocmp.Transform()`. The implementation looks good.
Line range hint `730-738`: LGTM!

The helper function correctly converts a slice of `protoreflect.Value` to a slice of `interface{}` by iterating over the input slice and converting each value using `ks[i].Interface()`. The implementation looks good.
Line range hint `675-706`: LGTM!

The test function correctly tests the JSON export and import functionality by inserting generated messages, exporting to JSON, validating the JSON, importing into a new store, and comparing the original and imported tables. The implementation looks good.
Line range hint `655-665`: LGTM!

The `Less` method correctly compares two messages using the index by encoding their keys and comparing them using the index's `CompareKeys` method. The implementation looks good.
Line range hint `667-669`: LGTM!

The `Swap` method correctly swaps two messages in the `data` slice. The implementation looks good.
Line range hint `581-588`: LGTM!

The `reverseData` function correctly reverses a slice of `proto.Message` by creating a new slice and copying the messages in reverse order. The implementation looks good.
Line range hint `590-605`: LGTM!

The `checkIteratorAgainstSlice` function correctly compares an iterator against a slice of messages by iterating over both simultaneously and comparing each message using `assert.DeepEqual` with `protocmp.Transform()`. The implementation looks good.
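The lockstep comparison described above can be sketched without the ORM types. Here the `next` closure is a hypothetical stand-in for the ORM iterator, and string equality replaces the `assert.DeepEqual`/`protocmp.Transform()` comparison:

```go
package main

import "fmt"

// checkAgainstSlice walks an iterator-style source and an expected slice in
// lockstep, failing on the first mismatch, early exhaustion, or extra element.
func checkAgainstSlice(next func() (string, bool), want []string) bool {
	for _, w := range want {
		got, ok := next()
		if !ok || got != w {
			return false
		}
	}
	_, ok := next() // the iterator must be exhausted too
	return !ok
}

// sliceIter turns a slice into a next-style iterator for the sketch.
func sliceIter(items []string) func() (string, bool) {
	i := 0
	return func() (string, bool) {
		if i >= len(items) {
			return "", false
		}
		v := items[i]
		i++
		return v, true
	}
}

func main() {
	data := []string{"a", "b", "c"}
	fmt.Println(checkAgainstSlice(sliceIter(data), data))           // true
	fmt.Println(checkAgainstSlice(sliceIter(data), []string{"a"})) // false: iterator has extra elements
}
```

The final exhaustion check is the part that is easy to forget; without it, an iterator that yields extra trailing rows would still pass.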
Line range hint `607-643`: LGTM!

The `TableDataGen` function correctly generates a `TableData` object with random data by generating a random prefix and message type, building a new table, creating a new store, and inserting the specified number of generated messages into the table. The implementation looks good.
Line range hint `651-653`: LGTM!

The `Len` method correctly returns the length of the `data` slice. The implementation looks good.
Line range hint `558-579`: LGTM!

The `testTable` function correctly tests a table with the provided `TableData` by iterating over the indexes, creating an `IndexModel` for each index, sorting the `IndexModel`, and calling the appropriate test functions based on the index type. The implementation looks good.

server/start.go (2)
`42-42`: LGTM!

The import statement for the `dbm` package has been correctly updated to use `cosmossdk.io/store/db`.
`123-123`: Verify the `DBOpener` function usage in the codebase.

The `backendType` parameter type in the `DBOpener` function signature has been correctly updated to `dbm.DBType`. However, ensure that all usages of the `DBOpener` function have been updated to match the new signature. Run the following script to verify the `DBOpener` function usage:

Verification successful

DBOpener function usage verified and consistent

The `DBOpener` function usage has been thoroughly checked, and all occurrences are consistent with the new signature using `dbm.DBType`. The change is localized to the server/start.go file, and no issues were found in its implementation or usage.

Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify all usages of the `DBOpener` function match the new signature.

# Test: Search for the function usage. Expect: Only occurrences with the new signature.
rg --type go -A 5 $'DBOpener'
```
Length of output: 1674
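The `DBOpener` signature change can be sketched as a function type plus one implementation. The types below mirror the PR's direction but are simplified assumptions — in particular, `DB` and the path layout are placeholders, not the store's real interface:

```go
package main

import "fmt"

type DBType string

const DBTypeGoLevelDB DBType = "goleveldb"

// DB is a placeholder for the store's database interface.
type DB interface{ Name() string }

type fakeDB struct{ name string }

func (f fakeDB) Name() string { return f.name }

// DBOpener matches the reviewed shape: it takes a root directory and a DBType
// instead of the old cosmos-db BackendType.
type DBOpener func(rootDir string, backendType DBType) (DB, error)

// openDB is a toy implementation; a real opener would call the store/db constructor.
func openDB(rootDir string, backendType DBType) (DB, error) {
	if backendType != DBTypeGoLevelDB {
		return nil, fmt.Errorf("unsupported backend: %s", backendType)
	}
	return fakeDB{name: rootDir + "/data/application.db"}, nil
}

func main() {
	var open DBOpener = openDB
	db, err := open("/home/node", DBTypeGoLevelDB)
	fmt.Println(db.Name(), err)
}
```

Keeping the opener as a named function type lets callers inject an alternative implementation (for tests or custom backends) without changing the start-up code that invokes it.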
server/constructors_test.go
Outdated
```diff
 )

 func Test_OpenDB(t *testing.T) {
 	t.Parallel()
-	_, err := OpenDB(t.TempDir(), dbm.GoLevelDBBackend)
+	_, err := OpenDB(t.TempDir(), dbm.DBTypeGoLevelDB)
```
Tip
Codebase Verification
Update remaining occurrences of `GoLevelDBBackend` for consistency

The rename from `GoLevelDBBackend` to `DBTypeGoLevelDB` is not fully consistent across the codebase. While most files have been updated, two test-related files still use the old naming:
- orm/model/ormtable/bench_test.go
- orm/internal/testkv/leveldb.go
Please update these files to use `DBTypeGoLevelDB` for consistency. Additionally, conduct a thorough review of all test files to ensure they align with the new naming convention.
Analysis chain
Verify the consistency of the `GoLevelDBBackend` rename across the codebase.

The change from `dbm.GoLevelDBBackend` to `dbm.DBTypeGoLevelDB` aligns with the updates in the `store/db` package. However, please ensure that this change is consistently applied across the codebase wherever the `OpenDB` function is used with the `GoLevelDBBackend` argument.
Run the following script to verify the consistency of the change:
Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the consistency of the `GoLevelDBBackend` rename across the codebase.

# Test: Search for the old usage. Expect: No occurrences of `GoLevelDBBackend`.
rg --type go $'GoLevelDBBackend' || echo "No occurrences of 'GoLevelDBBackend' found."

# Test: Search for the new usage. Expect: Only occurrences of `DBTypeGoLevelDB`.
rg --type go -A 5 $'DBTypeGoLevelDB'
```
Length of output: 2848
store/db/rocksdb_noflag.go
Outdated
```go
//go:build !rocksdb
// +build !rocksdb
```
Remove the duplicated build tag.

The build tag is duplicated. The `// +build` prefix is deprecated since Go 1.17. Please remove the line with the `// +build` prefix.
Apply this diff to remove the duplicated build tag:
```diff
 //go:build !rocksdb
-// +build !rocksdb
```
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
//go:build !rocksdb
```
store/db/pebbledb.go
Outdated
```diff
@@ -0,0 +1,301 @@
+package db
```
were these rewritten?
Removing `cosmos-db` from everywhere seems fine, but I'm not sure we need to add all the DBs into `store/db`. Instead, simapp as the top level can still use `cosmos-db` while removing it from cosmos-sdk and the modules.
Personally, I don't think we should have rewritten the DBs here, as it's duplicating a lot of work.
Tend to agree here; that was the part of the previous big PR (#21373) that I wasn't much in favor of either. I personally think it is good as it is.
ok, then I will allow the
Description
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and please add links to any relevant follow-up issues.
I have...
- added `!` in the type prefix if API or client breaking change
- added a changelog entry to `CHANGELOG.md`
Reviewers Checklist
All items are required. Please add a note if the item is not applicable and please add your handle next to the items reviewed if you only reviewed selected items.
Please see Pull Request Reviewer section in the contributing guide for more information on how to review a pull request.
I have...
Summary by CodeRabbit
Release Notes
New Features
Improvements
Dependency Management