
[dist-txns] CassandraSecondaryIndex write throughput drops gradually #368

Closed

robertpang opened this issue Jul 8, 2018 · 2 comments

Labels: kind/bug This issue is a bug

Comments

@robertpang (Contributor) commented:

Using commit 2ebadb8, the write throughput of the CassandraSecondaryIndex workload drops gradually over time.

$ java -jar ~/code/yugabyte/java/yb-loadtester/target/yb-sample-apps.jar --workload CassandraSecondaryIndex --nodes X.X.X.X:9042 --num_threads_read 0 --num_threads_write 64 --num_unique_keys 100000000
...
2018-07-07 07:12:11,428 [INFO|com.yugabyte.sample.apps.AppBase|AppBase] Created a Cassandra table using query: [CREATE TABLE IF NOT EXISTS CassandraSecondaryIndex (k varchar, v varchar, primary key (k)) WITH transactions = { 'enabled' : true };]
2018-07-07 07:12:16,542 [INFO|com.yugabyte.sample.apps.AppBase|AppBase] Created a Cassandra table using query: [CREATE INDEX IF NOT EXISTS CassandraSecondaryIndexByValue ON CassandraSecondaryIndex (v) ;]
2018-07-07 07:12:21,546 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 116.56 ops/sec (322.67 ms/op), 583 total ops  |  Uptime: 5003 ms | maxWrittenKey: -1 | maxGeneratedKey: 646 |
2018-07-07 07:12:26,547 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 405.32 ops/sec (99.85 ms/op), 2610 total ops  |  Uptime: 10004 ms | maxWrittenKey: -1 | maxGeneratedKey: 2673 |
2018-07-07 07:12:31,547 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 4179.33 ops/sec (27.16 ms/op), 23509 total ops  |  Uptime: 15004 ms | maxWrittenKey: 23489 | maxGeneratedKey: 23574 |
2018-07-07 07:12:36,548 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 6334.47 ops/sec (10.10 ms/op), 55185 total ops  |  Uptime: 20005 ms | maxWrittenKey: 55147 | maxGeneratedKey: 55248 |
2018-07-07 07:12:41,549 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 6268.33 ops/sec (10.21 ms/op), 86530 total ops  |  Uptime: 25006 ms | maxWrittenKey: 86430 | maxGeneratedKey: 86594 |
...
2018-07-07 07:14:36,562 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 4778.88 ops/sec (13.45 ms/op), 725707 total ops  |  Uptime: 140019 ms | maxWrittenKey: 725691 | maxGeneratedKey: 725770 |
2018-07-07 07:14:41,562 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 4884.64 ops/sec (13.07 ms/op), 750133 total ops  |  Uptime: 145019 ms | maxWrittenKey: 749971 | maxGeneratedKey: 750196 |
...
2018-07-07 07:24:26,630 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 2704.15 ops/sec (23.91 ms/op), 2676633 total ops  |  Uptime: 730087 ms | maxWrittenKey: 2676220 | maxGeneratedKey: 2676696 |
2018-07-07 07:24:31,630 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 2514.80 ops/sec (24.91 ms/op), 2689208 total ops  |  Uptime: 735087 ms | maxWrittenKey: 2688267 | maxGeneratedKey: 2689271 |
...
2018-07-07 07:27:16,648 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 1982.22 ops/sec (33.22 ms/op), 3075534 total ops  |  Uptime: 900105 ms | maxWrittenKey: 3075023 | maxGeneratedKey: 3075597 |
2018-07-07 07:27:21,648 [INFO|com.yugabyte.sample.common.metrics.MetricsTracker|MetricsTracker] Read: 0.00 ops/sec (0.00 ms/op), 0 total ops  |  Write: 1849.02 ops/sec (34.91 ms/op), 3084780 total ops  |  Uptime: 905105 ms | maxWrittenKey: 3084119 | maxGeneratedKey: 3084843 |
...
@robertpang (Contributor, Author) commented:

In addition, some warnings are logged in the yb-tserver log:

W0707 07:13:09.056851 19428 transaction_participant.cc:700] 5f427d094ec54c148a089663e4fa1931: Transaction not found: dca2a134-6040-4f91-a139-084122b87386
W0707 07:13:09.064049 19428 transaction_participant.cc:700] 5f427d094ec54c148a089663e4fa1931: Transaction not found: dca2a134-6040-4f91-a139-084122b87386
W0707 07:13:09.062410 19424 transaction_participant.cc:700] 1389663b840145d99da1d041d8e98f7d: Transaction not found: 67d04d33-d301-4c90-bb5b-83c685d32ed2
W0707 07:13:09.066887 19384 transaction_participant.cc:700] 7d06a863c0c740c3addfe99221e1afd6: Transaction not found: dca2a134-6040-4f91-a139-084122b87386
W0707 07:13:09.064155 19428 transaction_participant.cc:597] 5f427d094ec54c148a089663e4fa1931: Apply of unknown transaction: dca2a134-6040-4f91-a139-084122b87386
W0707 07:13:09.067965 19384 transaction_participant.cc:700] 7d06a863c0c740c3addfe99221e1afd6: Transaction not found: dca2a134-6040-4f91-a139-084122b87386
W0707 07:13:09.068012 19384 transaction_participant.cc:597] 7d06a863c0c740c3addfe99221e1afd6: Apply of unknown transaction: dca2a134-6040-4f91-a139-084122b87386
W0707 07:13:09.068076 19361 transaction_participant.cc:700] 7964e69f7cfe4ae4afbe9f2fc95169e1: Transaction not found: 67d04d33-d301-4c90-bb5b-83c685d32ed2
W0707 07:13:09.069407 19424 transaction_participant.cc:700] 1389663b840145d99da1d041d8e98f7d: Transaction not found: 67d04d33-d301-4c90-bb5b-83c685d32ed2

Also, after stopping the workload and rerunning it without recreating the table/index, the throughput resumes at the level where it left off and continues to drop.

@robertpang added the kind/bug label on Jul 8, 2018
yugabyte-ci pushed a commit that referenced this issue on Jul 9, 2018
Summary:
The transaction coordinator sends an apply request to a participant in two cases:
1) Right after a transaction is committed.
2) Every 500 ms, until it receives a response from the participant that the transaction has been applied.

This can cause the "Apply of unknown transaction" error, because the second request can be sent right after the first one.
Also, in this scenario the participant tries to load the transaction from the DB with a lock acquired, which slows down overall performance.

This fix prevents sending the first periodic retry right after a commit.

Test Plan:
Create a cluster of 3 n1-standard-16 nodes and run the following workload:
java -jar ~/code/yugabyte/java/yb-loadtester/target/yb-sample-apps.jar --workload CassandraSecondaryIndex --nodes XXX:9042 --num_threads_read 0 --num_threads_write 64 --num_unique_keys 100000000

Reviewers: robert, mikhail

Reviewed By: mikhail

Subscribers: kannan, ybase

Differential Revision: https://phabricator.dev.yugabyte.com/D5123
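
For illustration, here is a minimal C++ sketch of the behavior described in the commit summary above; the class and method names are hypothetical and do not reflect the actual YugabyteDB code. After a commit, the coordinator sends the apply request immediately and pushes the first periodic resend out by one extra interval, so the participant is not hit with a near-duplicate apply before it has recorded the applied transaction.

#include <chrono>

// Hypothetical names; a sketch of the retry-suppression idea only.
class TransactionCoordinatorSketch {
 public:
  void HandleCommit() {
    SendApplyToParticipants();  // case 1: apply right after the commit
    // Skip the first 500 ms retry: schedule the next resend one extra
    // interval out, since an apply request was just sent.
    next_resend_ = Clock::now() + 2 * kResendInterval;
  }

  void PeriodicTick() {  // invoked roughly every 500 ms
    if (applied_ || Clock::now() < next_resend_) {
      return;  // participant already acknowledged, or first retry suppressed
    }
    SendApplyToParticipants();  // case 2: resend until acknowledged
    next_resend_ = Clock::now() + kResendInterval;
  }

  void HandleApplied() { applied_ = true; }  // participant acknowledged the apply

 private:
  using Clock = std::chrono::steady_clock;
  static constexpr std::chrono::milliseconds kResendInterval{500};

  void SendApplyToParticipants() { /* RPC to the tablet participants elided */ }

  bool applied_ = false;
  Clock::time_point next_resend_ = Clock::time_point::min();
};
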
@kmuthukk (Collaborator) commented on Jul 9, 2018:

Fixed in add75f7

@kmuthukk closed this as completed on Jul 9, 2018
jasonyb pushed a commit that referenced this issue Jun 11, 2024
PG-602: Updated the README and added PG15