
Computing digest takes 10½ min on Mainnet – could we get progress there, too? #1894

Open · michalrus opened this issue Aug 23, 2024 · 1 comment

Labels: good first issue 👋 Good for newcomers, UX 🌞 User experience

Comments

@michalrus (Member)

Why

Considering that the digest currently takes over 10 minutes to compute on Mainnet, it would be better UX if we showed progress to the user.


What

I see in this code fragment that no progress is reported while the digest is being computed.

This may be tricky, because I assume the digest computation is recursive. But we could scan the whole unpacked directory for its total size first? Maybe even in parallel with unpacking, delaying the first progress update so we don't lose unpacking time? Even progress in terms of the number of files processed would work well: the speed wouldn't be constant, but it would still be an improvement. (See the sketch below for the total-size approach.)

(original Slack thread)
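To illustrate the total-size idea, here is a minimal sketch, not Mithril's actual digester: it pre-scans the unpacked directory for the total byte count, then reports byte-based progress after each file while hashing. The directory path `./db/immutable`, the plain SHA-256-over-concatenation scheme, and the `digest_with_progress` helper are all assumptions for illustration; the only external crates used are the well-known `sha2` and `hex`.

```rust
use sha2::{Digest, Sha256};
use std::fs::{self, File};
use std::io::{self, Read};
use std::path::{Path, PathBuf};

/// Recursively collect every regular file under `dir`.
fn collect_files(dir: &Path, out: &mut Vec<PathBuf>) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            collect_files(&path, out)?;
        } else {
            out.push(path);
        }
    }
    Ok(())
}

/// Hash all files under `dir`, calling `on_progress(done_bytes, total_bytes)`
/// after each file so a UI can render a percentage.
fn digest_with_progress(
    dir: &Path,
    mut on_progress: impl FnMut(u64, u64),
) -> io::Result<[u8; 32]> {
    let mut files = Vec::new();
    collect_files(dir, &mut files)?;
    files.sort(); // deterministic order keeps the digest reproducible

    // Pre-scan: the total byte count gives a denominator for the progress bar.
    let total: u64 = files
        .iter()
        .filter_map(|f| fs::metadata(f).ok())
        .map(|m| m.len())
        .sum();

    let mut hasher = Sha256::new();
    let mut done = 0u64;
    let mut buf = vec![0u8; 1 << 20]; // 1 MiB read buffer

    for file in &files {
        let mut reader = File::open(file)?;
        loop {
            let n = reader.read(&mut buf)?;
            if n == 0 {
                break;
            }
            hasher.update(&buf[..n]);
            done += n as u64;
        }
        on_progress(done, total); // one update per file is cheap
    }
    Ok(hasher.finalize().into())
}

fn main() -> io::Result<()> {
    let digest = digest_with_progress(Path::new("./db/immutable"), |done, total| {
        eprintln!(
            "digest progress: {done}/{total} bytes ({:.1}%)",
            100.0 * done as f64 / total.max(1) as f64
        );
    })?;
    println!("digest: {}", hex::encode(digest));
    Ok(())
}
```

Byte-based progress advances at a roughly constant rate regardless of file sizes, which is why the pre-scan may be worth its extra directory walk compared to simply counting files.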

@jpraynaud added the UX 🌞 User experience and good first issue 👋 Good for newcomers labels on Aug 23, 2024
@jpraynaud (Member) commented Aug 23, 2024

Hi @michalrus, thanks for creating the issue.

I guess this can be done file by file, since we compute the digest of the Cardano database immutable files one by one with the CardanoImmutableDigester 👍
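A count-based variant of that idea, sketched under the assumption that the digester already iterates the immutable files one by one: a reporter hook threaded through the loop. `ProgressReporter`, `digest_all`, and `digest_one_file` are hypothetical names for illustration, not Mithril's actual API.

```rust
use std::path::{Path, PathBuf};

/// Hypothetical progress hook; Mithril's real feedback mechanism may differ.
trait ProgressReporter {
    fn report(&self, files_done: usize, files_total: usize);
}

struct StderrReporter;

impl ProgressReporter for StderrReporter {
    fn report(&self, files_done: usize, files_total: usize) {
        eprintln!("digested {files_done}/{files_total} immutable files");
    }
}

/// Digest each immutable file in order, reporting after every file.
fn digest_all(files: &[PathBuf], reporter: &dyn ProgressReporter) {
    let total = files.len();
    for (i, file) in files.iter().enumerate() {
        digest_one_file(file); // stand-in for the real per-file hashing step
        reporter.report(i + 1, total);
    }
}

// Placeholder: in the real digester this would hash one immutable file.
fn digest_one_file(_file: &Path) {}

fn main() {
    let files: Vec<PathBuf> =
        (0..3).map(|i| PathBuf::from(format!("{i:05}.chunk"))).collect();
    digest_all(&files, &StderrReporter);
}
```

Because immutable files are fixed-size chunks, a file count is close to a byte count here, so this simpler variant may be good enough without a pre-scan.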
