server: set multiple concurrentReadTx instances share one txReadBuffer. Approach #2 #12935

Closed

Conversation

wilsonwang371 (Contributor)

This is a different approach to the issue that #12933 tries to resolve.

Currently, each concurrentReadTx makes a deep copy of the txReadBuffer inside the backend readTx. However, this buffer is never modified by concurrentReadTx, so a single copy of the txReadBuffer can be shared as long as no new changes are committed to the backend DB.

Therefore, we use a cached txReadBuffer to avoid an excessive deep-copy operation for every new read transaction.
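
For illustration, here is a minimal Go sketch of the shared-buffer idea. The names (txReadBuffer, unsafeCopy, concurrentReadTx, cached) are simplified stand-ins modeled on server/mvcc/backend, not the actual patch:

```go
// Minimal sketch, assuming simplified stand-ins for the types in
// server/mvcc/backend; not the actual etcd code.
package backend

import "sync"

type txReadBuffer struct{} // buffered key/value entries elided

// unsafeCopy returns a deep copy of the buffer; this per-read-tx copy is
// the cost the patch tries to avoid.
func (b *txReadBuffer) unsafeCopy() *txReadBuffer { return &txReadBuffer{} }

type concurrentReadTx struct{ buf *txReadBuffer }

type backend struct {
	mu  sync.Mutex
	buf *txReadBuffer // live read buffer, mutated on writeback

	// cached is a single snapshot of buf shared by every concurrentReadTx
	// created since the last commit; readers never mutate it.
	cached *txReadBuffer
}

// ConcurrentReadTx hands out the shared snapshot instead of deep-copying
// the read buffer for each new read transaction.
func (b *backend) ConcurrentReadTx() *concurrentReadTx {
	b.mu.Lock()
	defer b.mu.Unlock()
	return &concurrentReadTx{buf: b.cached}
}
```

Because readers share one immutable snapshot, correctness depends on regenerating that snapshot whenever a commit changes the underlying buffer; a sketch of that commit path appears later in this thread.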

@wilsonwang371 (Contributor, Author)

This patch is inspired by #12529.

@wilsonwang371 wilsonwang371 changed the title server: set multiple concurrentReadTx instances share one txReadBuffe… server: set multiple concurrentReadTx instances share one txReadBuffer. Approach #2 May 10, 2021
@wilsonwang371 wilsonwang371 force-pushed the shared_txReadBuffer2 branch 3 times, most recently from e215853 to 9d5be87 on May 10, 2021 04:10
codecov-commenter commented May 10, 2021

Codecov Report

Merging #12935 (d422795) into master (aeb9b5f) will decrease coverage by 0.34%.
The diff coverage is 83.33%.


@@            Coverage Diff             @@
##           master   #12935      +/-   ##
==========================================
- Coverage   73.31%   72.97%   -0.35%     
==========================================
  Files         430      430              
  Lines       34182    34676     +494     
==========================================
+ Hits        25060    25304     +244     
- Misses       7194     7393     +199     
- Partials     1928     1979      +51     
Flag   Coverage Δ
all    72.97% <83.33%> (-0.35%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                                     Coverage Δ
server/mvcc/backend/backend.go                     80.07% <75.00%> (-0.32%) ⬇️
server/mvcc/backend/tx_buffer.go                   92.50% <100.00%> (+0.39%) ⬆️
client/v3/txn.go                                   68.42% <0.00%> (-31.58%) ⬇️
server/etcdserver/api/rafthttp/peer.go             77.41% <0.00%> (-18.07%) ⬇️
server/etcdserver/api/rafthttp/peer_status.go      89.28% <0.00%> (-10.72%) ⬇️
client/pkg/v3/tlsutil/tlsutil.go                   83.33% <0.00%> (-8.34%) ⬇️
server/config/config.go                            66.00% <0.00%> (-8.25%) ⬇️
client/v3/leasing/cache.go                         87.50% <0.00%> (-4.61%) ⬇️
server/etcdserver/api/v3rpc/member.go              92.45% <0.00%> (-3.78%) ⬇️
server/etcdserver/api/rafthttp/msgappv2_codec.go   67.88% <0.00%> (-3.67%) ⬇️
... and 22 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update aeb9b5f...d422795. Read the comment docs.

@wilsonwang371 (Contributor, Author)

I am going to run a performance evaluation of this patch later.

@wilsonwang371 (Contributor, Author)

@gyuho @ptabor @jingyih

Could you take a look at this patch as well? It can significantly increase read QPS performance.

Commit: server: set multiple concurrentReadTx instances share one txReadBuffer. Regenerate a cache each time after writeback()
@wilsonwang371 (Contributor, Author)

I just realized that this patch is shorter than #12933, but its performance will most likely be lower.

The reason is that write requests have to copy the readTx txReadBuffer on every commit.

I will still upload a performance evaluation for this patch as well.
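
To make the trade-off concrete, here is a hedged sketch of the commit path under this approach, reusing the simplified types from the earlier sketch; the actual writeback logic is elided:

```go
// Sketch only: every commit regenerates the shared snapshot, moving the
// deep copy from the read path onto the write path.
func (b *backend) commit() {
	b.mu.Lock()
	defer b.mu.Unlock()

	// ... writeback(): merge pending writes into b.buf (elided) ...

	// Regenerate the cache so readers created after this commit see the
	// new data. This copy runs once per commit, which is why write-heavy
	// workloads regress relative to #12933.
	b.cached = b.buf.unsafeCopy()
}
```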

@wilsonwang371 (Contributor, Author)

Never mind about this patch. Its initial performance is horrible.

@wilsonwang371 wilsonwang371 deleted the shared_txReadBuffer2 branch May 27, 2021 00:57