Clarify what "Augment Reconstruction" does #40

Open · endolith opened this issue Nov 11, 2022 · 0 comments

endolith commented Nov 11, 2022

https://github.com/alicevision/meshroom-manual/blob/develop/source/feature-documentation/gui/image-gallery/augment-reconstruction.rst

This page is pretty short, and I don't really understand why you would do this or what it accomplishes. There isn't much else about it online. Some things I've found:

> Using Augment Reconstruction will create Image Groups. For each image group, a new CameraInit node is created. Adding images with different focal lengths or from different camera models to Image Groups helps prevent accidental mix-ups.

> Meshroom can handle different focal lengths from the same camera in the default pipeline (without an Image Group), if the focal length is provided in the EXIF data.

So it should be used when combining images from different cameras?
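As a side note, here is how one could sanity-check whether the focal length mentioned above is actually present in the EXIF of a dataset. This is not Meshroom code, just a standalone Pillow sketch; the folder name is hypothetical.

```python
# Check whether each image carries a focal length in its EXIF data,
# which the default Meshroom pipeline can use (per the quote above).
from pathlib import Path
from PIL import Image

EXIF_IFD = 0x8769      # pointer to the Exif sub-IFD
FOCAL_LENGTH = 0x920A  # FocalLength tag inside that sub-IFD

for path in sorted(Path("photos").glob("*.jpg")):  # hypothetical folder
    exif = Image.open(path).getexif()
    focal = exif.get_ifd(EXIF_IFD).get(FOCAL_LENGTH)
    print(path.name, f"{focal} mm" if focal is not None else "no focal length in EXIF")
```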

> One quick question: I plan to reconstruct a space (e.g. a room), and then place the camera in a fixed position. Is it possible, without running the whole reconstruction pipeline, to localize that camera (i.e. to get the extrinsic matrix)?

> When you drop new images into Meshroom, you have two options:
>
> 1. "Add Images"
> 2. "Augment Reconstruction"
>
> So use the second one to localize only the new images.

> It will still run a global bundle adjustment (BA) that can slightly adjust the overall scene. If it's important to avoid that, after dropping the new images, select the newly created StructureFromMotion node and enable the "Lock Scene Previously Reconstructed" option.

I don't understand this at all.
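To try to make it concrete for myself: if Augment Reconstruction really does localize the newly dropped images into the existing scene, their poses should end up in the new StructureFromMotion node's cameras.sfm output. Below is a rough sketch of reading an extrinsic matrix back out, assuming the usual AliceVision sfmData JSON layout ("views" and "poses" sections); the cache path is hypothetical, and the rotation/center convention should be double-checked against the AliceVision documentation.

```python
# Sketch: read camera poses from a StructureFromMotion cameras.sfm output.
# Assumes the AliceVision sfmData JSON layout; numeric values are stored as strings.
import json
import numpy as np

# Hypothetical path - the real file sits under MeshroomCache/StructureFromMotion/<node uid>/
with open("MeshroomCache/StructureFromMotion/cameras.sfm") as f:
    sfm = json.load(f)

transforms = {p["poseId"]: p["pose"]["transform"] for p in sfm["poses"]}

for view in sfm["views"]:
    transform = transforms.get(view["poseId"])
    if transform is None:
        continue  # this view was not localized
    R = np.array(transform["rotation"], dtype=float).reshape(3, 3)
    C = np.array(transform["center"], dtype=float)
    # Assumption: world-to-camera mapping is x_cam = R @ (x_world - C),
    # so the 3x4 extrinsic matrix is [R | -R @ C].
    extrinsic = np.hstack([R, (-R @ C).reshape(3, 1)])
    print(view["path"])
    print(extrinsic)
```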

> Have you done an augmentation from your successful test with 300 images? If yes, what are the results? Does it decrease the quality of the first one?

So you do photogrammetry with a set of images, then add more to the Augment field and run it again? What does this accomplish? Why not just put them all in at once?

> Try it with downscale 8, or split the dataset into smaller chunks and use Augment Reconstruction.

> I would load the image dataset in chunks, using Augment Reconstruction with ~50 images each (in order of capture time).

> You could try splitting your project dataset into parts using the Augment Reconstruction drop field and merging the reconstructed SfM nodes.

So it can be used for a very large image set, to break it up into chunks and save computation and memory? It does a normal processing cycle on each chunk and then combines them into one big mesh? And it could theoretically do a better job if all the images were dumped into one group, but that would take a lot longer and might not be much better?
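For example, the splitting into batches of ~50 images in capture order could be scripted before dragging each batch onto the Augment Reconstruction drop field. This is a rough sketch using Pillow to read EXIF capture times; the folder name is hypothetical.

```python
# Split a large dataset into batches of ~50 images, ordered by capture time,
# so each batch can be dropped onto "Augment Reconstruction" in turn.
from pathlib import Path
from PIL import Image

EXIF_IFD = 0x8769           # pointer to the Exif sub-IFD
DATETIME_ORIGINAL = 0x9003  # DateTimeOriginal tag ("YYYY:MM:DD HH:MM:SS")

def capture_time(path):
    exif = Image.open(path).getexif()
    # The EXIF date string sorts chronologically as-is; fall back to the filename.
    return exif.get_ifd(EXIF_IFD).get(DATETIME_ORIGINAL, path.name)

images = sorted(Path("photos").glob("*.jpg"), key=capture_time)  # hypothetical folder
batches = [images[i:i + 50] for i in range(0, len(images), 50)]
for n, batch in enumerate(batches, start=1):
    print(f"Batch {n}: {batch[0].name} .. {batch[-1].name} ({len(batch)} images)")
```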

> For each batch of images, a new Group will be created in the Images Pane.

Screenshots of this process might help.
