ord index is very slow #1648
@andrewtoth I have merged https://github.com/casey/ord/pull/1516 and https://github.com/casey/ord/pull/1636 and reverted https://github.com/casey/ord/pull/1357. It gets to the value set in the --first-inscription-height parameter very quickly. However, afterwards it is extremely slow, taking around 1 minute per block. I have built using the cargo build -r command. Any other ideas?
Did you start bitcoind with
Yes, bitcoind is running with the -rest flag
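For reference, the REST interface can be enabled either on the command line or in bitcoin.conf (a minimal sketch; `-rest`/`rest=1` and `txindex=1` are standard Bitcoin Core options):

```
# command line
bitcoind -rest -txindex=1

# or equivalently in bitcoin.conf
rest=1
txindex=1
```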
If you run with
Ok, the first blocks are the slowest because every input has to be fetched. As it indexes more blocks, it will already have the previous inputs, so fewer inputs need to be fetched from disk.
It is a virtual HDD with Google Cloud. It seems to max out at around 80 MiB/s (~600 Mbps if I am correct), but ord and bitcoind are using a fraction of that (5-15 MiB/s at most). CPU utilisation is about 1-3% for both processes together, too. RAM is 16 GB and only around 600 MB is used.
Got it working by compiling 0.4.2 and using a bootstrap file from the community.
@ChristianGrieger could you share further details, please?
Where did you get the file from?
Added the speedup improvements in my branch https://github.com/andrewtoth/ord/tree/speedup-improvements if you'd like to help test/benchmark.
Make sure you set
% git clone git@github.com:andrewtoth/ord
Please make sure you have the correct access rights
Updated the post, use
Now fails on the cargo line with: zsh: command not found: cargo
Oh, I assumed you'd have Rust installed. Install Rust first, which includes cargo. After the cargo build line you can find
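For anyone following along, a typical sequence looks something like this (a sketch assuming a Unix-like shell; rustup is the standard Rust installer, the repository URL is assumed from the branch links above):

```
# install Rust, which includes cargo
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# clone the branch and build in release mode
git clone https://github.com/andrewtoth/ord
cd ord
git checkout speedup-improvements
cargo build --release

# the resulting binary is at target/release/ord
./target/release/ord --version
```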
Whoops, had a bad merge in there. Fixed. Do a
Success!! Will report back.
I didn't get too far! Running ord:
Ahh, this has a newer db version. You'll have to nuke your previous index file. It should be in
But... that's 3 days' worth of work! 🥴 Will this new one be fast enough to catch me up?
Ahh, you can specify a new index file. Run
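If it helps, ord takes a global `--index` option, so you can point it at a fresh database file without deleting the old one (the path below is illustrative):

```
# build a brand-new index at a custom path, leaving the old file untouched
ord --index /path/to/new-index.redb index
```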
Well! It just ran through all the indexing in ~10 seconds and returned:
I think I'm wrong, it didn't quite finish all the indexing. It zipped right up to 766244/775968 and then crashed out. If I run it again it goes through a few more blocks and crashes out again.
Yeah it blows through everything until block 767430 in about 10 seconds, then it takes me a little over an hour to sync to tip.
Yes, default port. Yes to rest. Other than that, vanilla, I think.
Hmm, not sure what's happening. What OS are you running? It's connecting properly to the P2P interface, since it's able to sync all headers. It might be breaking on REST. Try running bitcoind with
@apemithrandir you need
You can also check by running
Thanks for the reply. I have txindex=1 (I also have a fully indexed Fulcrum server running on the same machine). If I run getrawtransaction for the IDs that fail, I get the expected response. It started happening around block height 770k. The indexer runs for a while, then hits a failed-to-get-transaction error and exits. Unfortunately it doesn't save the progress thus far and reverts to the index height it was at beforehand. It keeps happening, with different tx IDs each time, and I can always fetch the raw transactions afterwards. If I start the index and then ctrl+c it after it progresses, I can kind of inch it along, but if I leave it to its own devices it eventually runs into this issue and exits, losing any progress.
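As a quick sanity check that txindex is really available, Bitcoin Core (0.21+) can report index status directly (the txid below is a placeholder):

```
# shows whether txindex is enabled and fully synced
bitcoin-cli getindexinfo txindex

# confirm arbitrary transaction lookups work
bitcoin-cli getrawtransaction <txid>
```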
@apemithrandir Hmm... I see you have Fulcrum running as well; it could be making a lot of RPC calls that will cause
I will make a fix so that
Sure. I can try stopping my Fulcrum server while it finishes the sync as well.
Sync finished, but it looks like the memory didn't get automatically cleared afterwards. Also, I'm guessing that since it locks up the RAM, you shouldn't create a daemon and have --index-sats index running in the background all the time.
@apemithrandir thanks for testing. Doesn't look like the changes on my branch will be merged. Check out #1759 instead.
I'll test out https://github.com/andrewtoth/ord/tree/batch-tx-requests
The branch compiles fine and I'm running the indexer now. Leaving my Fulcrum running in the background this time, but I did add the
One of the goals should be to not require configuring anything other than txindex. So if you can unset rpcworkqueue and make sure it works, that would be a big help.
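For context, these are the bitcoin.conf knobs being discussed (defaults per Bitcoin Core documentation; only worth raising if you see work-queue errors in the debug log):

```
# bitcoin.conf
txindex=1
rest=1
rpcworkqueue=64   # default 16; depth of the RPC request queue
rpcthreads=8      # default 4; threads serving RPC requests
```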
Ok, I will stop the index and try that. I also notice on this branch it doesn't go straight to the first inscription block. It is still going through all the blocks.
Getting the same issue at block 767433. Tried rpcworkqueue=64 but nothing changed. Any fixes?
Not sure if I did something wrong on my end, but this branch basically stalled at block height 476k. I was surprised when it didn't do like your speed-improvements branch and skip the blocks without inscriptions.
@apemithrandir tell Casey how much faster it is using p2p. https://github.com/casey/ord/pull/1516#issuecomment-1431862594
After reading @casey's comment, I would agree with him. I would expect that an indexer should be using the full node on my machine (or a specific node that I tell it to connect to) and not connecting to a random node on the network. I personally don't mind too much if the indexer takes a while, though if it fails while indexing that is obviously a bigger issue. I'm not sure if it is possible to skip the pre-inscription blocks while still using RPC; this would likely help the most.
@apemithrandir it's not able to connect to a random node, though; that patch uses the same host as the RPC, and it connects to RPC first before connecting to P2P. So if you're not running your node over RPC it doesn't start, and it can't connect to a different node because it uses the RPC host. My patch does skip pre-inscription blocks: andrewtoth@3225527. But using RPC to fetch headers and blocks is much slower than P2P. With P2P you can request 2000 headers at a time in binary. With RPC you have to fetch the block hash in hex, then the block header in hex.
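For comparison, the REST interface can also serve headers in bulk and in binary, avoiding per-call JSON-RPC hex overhead (a sketch using Bitcoin Core's documented REST endpoint; the hash shown is the mainnet genesis block, and 8332 is the default mainnet port):

```
# fetch up to 2000 headers starting at genesis, as raw binary
curl -s "http://127.0.0.1:8332/rest/headers/2000/000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f.bin" \
  -o headers.bin
```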
Let me know when you and @casey resolve it. As it stands with my machine, I currently cannot successfully run --index-sats index unless I build the speed-improvements branch.
I tried compiling this code (it was a detached head, but I merged a local branch):
Make sure to include the
@apemithrandir you can't skip blocks if you use
There are many things that could potentially make index-sats faster, but ultimately it will take longer and longer as sat ranges become more fragmented. So it will always take a long time, and there is likely no solution that will make it take only a few hours or so.
Right, there's an issue for memory management: https://github.com/casey/ord/issues/1630. I'm working on fixing the json-rpc issues as well.
So if I run
Built from batch-tx-requests. Ran
Maybe not the place for this, but I have some basic questions about inscriptions/ordinals. In order to "take advantage" of ordinal theory, do you need some sort of super-powered coin control? If you found out you had a "rare sat" in one of your ord wallet UTXOs, are there any commands in
If you had a "rare sat" in position K of N in one of your UTXOs, you would need to create a sort of multi-path transaction that would split your UTXO into a
Yes:
Just by doing the calculation correctly I have reduced the sync time from 190 days to 9 days without changing a single line of code! |
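The arithmetic checks out: at roughly one block per second, ~775k blocks is about nine days, not 190 (a quick sanity check):

```shell
# ~775,000 blocks at ~1 block/second, converted to days
awk 'BEGIN { printf "%.1f\n", 775000 / 86400 }'   # prints 9.0
```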
What about starting the count from -1206? Or counting into the negative with the lost ordinals?
I'm going to close this since we pulled in most of the improvements. Open a new issue if you have slow index problems :) |
Hey,
my index has been syncing for 2 days already, and each second only 1 block of the 775k gets added. This would take 190 days to fully sync the index. The first 100k blocks sync in 10 minutes, and after that it gets super slow.
I am on Debian.
Any idea how to fix that or to speed that up?
Thanks