RFC: add strong_constrains on sysroot_{{ target_platform }}? #63

Open
h-vetinari opened this issue Apr 21, 2024 · 18 comments

Comments

@h-vetinari
Member

In conda-forge/libcxx-feedstock#131, I noticed that setting c_stdlib_version to 2.17 (and being in a cos7 image) still leads to

$PREFIX/bin/x86_64-conda-linux-gnu-ld: $PREFIX/lib/libc++abi.so: undefined reference to `memcpy@GLIBC_2.14'
$PREFIX/bin/x86_64-conda-linux-gnu-ld: $PREFIX/lib/libc++abi.so: undefined reference to `aligned_alloc@GLIBC_2.16'

if sysroot_linux-64 2.17 is not available at test time.
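(For reference, a minimal sketch of the manual workaround, i.e. making a new enough sysroot available in the test environment; this is a hypothetical recipe fragment, not necessarily what the feedstock ended up doing:

    test:
      requires:
        # provides the glibc 2.17 symbols at test time
        - sysroot_linux-64 >=2.17  # [linux64]

)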

It seems reasonable to me that when a package indicates it requires c_stdlib_version >=x, we shouldn't allow pulling in older sysroots when building against that package?

conda-build's run-exports feature can also add run_constrained: entries (via strong_constrains), which would do the job here IMO.
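A sketch of what that could look like in the sysroot_{{ target_platform }} recipe itself, assuming conda-build's strong_constrains key under run_exports (the version pin shown is illustrative):

    build:
      run_exports:
        strong_constrains:
          # anything built with this sysroot in build would get a matching
          # run_constrained entry on it
          - sysroot_{{ target_platform }} >={{ version }}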

Thoughts @beckermr @isuruf?

@isuruf
Member

isuruf commented Apr 21, 2024

Although I sympathize, I don't like this constraint getting added to basically all compiled packages as a dependency.

@isuruf
Member

isuruf commented Apr 21, 2024

This also makes sense only for packages that end up in build, as there will be no sysroot_* package in host.

@h-vetinari
Member Author

Although I sympathize, I don't like this constraint getting added to basically all compiled packages as a dependency.

It would just be a constraint, not a dependency. I thought that kind of metadata was the point of the stdlib infrastructure, i.e. it's obviously going to affect a lot of packages. Should we not strive for correct metadata? I guess the only mitigating factor is that we'll be moving to 2.17 globally soon, so it'll become the "ambient" expectation again.
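To make the distinction concrete, a hedged sketch of what a dependent's rendered metadata would contain (version illustrative):

    requirements:
      run_constrained:
        # constrains sysroot_linux-64 only if something else installs it;
        # it is never pulled in by this entry alone
        - sysroot_linux-64 >=2.17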

This also makes sense only for packages that end up in build, as there will be no sysroot_* package in host.

AFAICT we cannot know in advance whether a package is going to be used in a build environment, so the alternative is that people just run into the missing-symbol errors, and we document how to fix them?

@isuruf
Member

isuruf commented Apr 21, 2024

Other than the test section, when will this ever help?

@h-vetinari
Member Author

Anything where the dynamic linker looks for symbols from the newer glibc, so in principle anything built against 2.17 that gets executed (either in build or test env)?

Note that I said we can just document how to deal with the missing symbols, so this isn't black-and-white to me - I asked because I don't see the issue with adding the run-constraint, even if it gets added very broadly.

@isuruf
Member

isuruf commented Apr 21, 2024

Anything where the dynamic linker looks for symbols from the newer glibc, so in principle anything built against 2.17 that gets executed (either in build or test env)?

Can you give an example recipe? I still don't understand.

@h-vetinari
Member Author

Can you give an example recipe? I still don't understand.

In libcxx we failed during the testing phase while trying to load libc++abi.so, causing the linker to look for the required glibc symbols and not find them. This goes away if we add the constraint on the sysroot, but it would equally go away if we used the cos7 image, as that should provide a new enough glibc.

My point (to the degree that I'm not missing something) was that loading a library is not restricted to the test phase. If there's an executable called in the build environment that needs to load something (e.g. libc++abi.so in this case) that ends up requiring newer glibc, we're in the same situation - we'd fail to find the symbols, though this time in the build environment.

AFAICT we'll be in a similar situation with conda_2_28 / alma 8. Our images will be cos7-based, but people will eventually start depending on newer glibc. To get this to work without the constraint, dependent feedstocks then either need to add the sysroot manually or change the image version (both sketched below). I like the constraint because it minimizes changes in dependent feedstocks:

  • no changes necessary to meta.yaml
  • not necessary to change image version in conda-forge.yml

(there's a corner case where bumping the image version is still necessary, but I still think we'd cover a large percentage with the constraint.)
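For concreteness, the manual alternatives that the constraint would make unnecessary might look like this (values illustrative). In the dependent's meta.yaml:

    requirements:
      build:
        # explicitly request the newer sysroot
        - sysroot_linux-64 2.28  # [linux64]

or in its conda-forge.yml, switching to an image with a newer ambient glibc (cos7 shown as an example value):

    os_version:
      linux_64: cos7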

@minrk
Member

minrk commented Jun 10, 2024

Whether this gets added by sysroot's run_exports or not, should packages that appear to have this requirement add the run_constrained themselves (e.g. openmpi)? Or is something else wrong?

I guess the constraint won't have the desired effect when the package is not in build, which means only cross-compiled openmpi dependents would be affected. Is that right?

How is a package meant to communicate that "to link against me, you must build with c_stdlib >= X"? Which appears to be the case in practice for openmpi. Or are all recipes that encounter missing glibc symbols linking openmpi doing something wrong?
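AFAICT the only channel for that today is on the consumer side: dependents bump the stdlib pin themselves in their recipe/conda_build_config.yaml (a sketch, value illustrative):

    c_stdlib_version:  # [linux]
      - "2.17"         # [linux]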

@minrk
Member

minrk commented Jun 19, 2024

I'm not sure this is the best issue to discuss this, but is there a way for the "weighed down" c_stdlib_version to affect conda-forge infrastructure / conda-build, but not runtime installs? It is a bit surprising that 2.12 is installed when users conda install gcc. If that didn't happen, the situation for runtime compilation would be simpler, I think. Maybe when the c_stdlib_version migration finishes, it will make sense to remove whatever's causing the prefer-2.12-by-default in the solver rather than conda_build_config?

@h-vetinari
Member Author

Maybe when the c_stdlib_version migration finishes, it will make sense to remove whatever's causing the prefer-2.12-by-default in the solver rather than conda_build_config?

Getting rid of the sysroot hack is indeed one of the goals of the whole stdlib effort. I'm not directly involved, but I think it'll come after the switch to 2.17.

@minrk
Member

minrk commented Jun 19, 2024

Nice. I think that will solve the runtime compilation issues, so no other action on that front is probably required.

@beckermr
Member

Yeah, to be clear: once we remove the hacks @h-vetinari is referring to, conda will install the latest sysroot when a compiler is installed. That will be whatever the latest is at the time, which right now is 2.28 IIRC. It won't mean we ship the compiler without the sysroot.

@minrk
Member

minrk commented Jun 19, 2024

Yeah, that's exactly what I would hope to happen. Looking forward to it!

@minrk
Member

minrk commented Jul 30, 2024

I'm trying to follow what's happened this summer, and it seems the transition to 2.17 has happened, but the weighing down to the oldest supported sysroot is still happening.

I see comments about 'breaking non-conda-build' in conda-forge/conda-forge-pinning-feedstock#6070, but that's exactly the situation I want to fix, because weighing down outside conda-build is what causes all the breakages in my experience.

Is there a separate issue to track the removal of track_features on sysroot, or should I open one?

Using 2.17 by default in conda-forge builds is fine, but it really seems like conda install gcc for a user env should get the latest sysroot that's <= __glibc, not 2.17. Least common denominator should not be the default target for runtime compilation outside conda-build, IMO.
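(For context, my understanding of the mechanism, hedged: the weighing down works via track_features on the sysroot builds; the solver penalizes environments containing tracked features, so the build without them wins by default. Roughly, in a sysroot meta.yaml:

    build:
      track_features:
        # presence of any tracked feature de-prioritizes this build in the solver
        - sysroot_linux-64_2_28  # hypothetical feature name

)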

@beckermr
Member

Right. I am confused about multiple issues and how they relate, so it has been tough for me to follow. AFAIK we have ~3 things going on:

  1. What do we do about dependencies that build against a newer sysroot than the default in recipes? I think we've seen some issues about that but I am not 100% sure.

  2. How much of the weighing down of sysroots do we still need to do now that we've moved to the explicit stdlib config?

  3. What should the default be outside of conda-forge builds for external users? (The comment @minrk just made.)

These are all somewhat related at a technical level, since implementations of some affect the others. However, there are I think some policy/convention issues in the mix too, especially for the last item.

Thoughts?

@minrk
Member

minrk commented Jul 30, 2024

What do we do about dependencies that build against a newer sysroot than the default in recipes? I think we've seen some issues about that but I am not 100% sure.

Yeah. I was working on that in conda-forge/conda-forge.github.io#2206. So far, every problem I've encountered there has been caused by:

  1. not respecting $LDFLAGS, and/or
  2. outdated sysroot installed at runtime

and either installing a later sysroot or fixing the ignored or missing $LDFLAGS (e.g. a recipe depending on gcc instead of c-compiler) fixed it.
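The gcc vs. c-compiler distinction matters because the c-compiler metapackage pulls in the activation scripts that export $CC, $LDFLAGS, etc., while a bare gcc dependency does not; a minimal sketch of the corrected requirement:

    requirements:
      build:
        # pulls in activation scripts that export $CC, $LDFLAGS, ...
        - {{ compiler('c') }}
        # instead of a bare:
        # - gcc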

I think 2 and 3 are related, because going forward (IIUC), weighing down should only affect packages that don't use c_stdlib pinning (i.e. not conda-forge packages). I'd argue that the latest sysroot is the right default choice for user runtime envs; build environments that want to target older systems for portability should follow conda-forge's example and pin c_stdlib explicitly. To me, that suggests that weighing down should go away if/when c_stdlib is a reasonable expectation for conda-forge packages, and the latest available sysroot should become the default install.

@h-vetinari
Member Author

@minrk: I see comments about 'breaking non-conda-build' in conda-forge/conda-forge-pinning-feedstock#6070, but that's exactly the situation I want to fix, because weighing down outside conda-build is what causes all the breakages in my experience.

This piece of infrastructure largely predates my involvement, but the understanding I had gotten was that we'd obviously like to get rid of the sysroot hack. When this was brought up in the PR you mention, Isuru pointed out some issues, though I believe they can be resolved (there hasn't been an answer to my proposal there, so I'll summarize it below).

@beckermr: AFAIK we have ~3 things going on

There are actually 4-5 things AFAIU:

  1. What do we do about dependencies that build against a newer sysroot than the default in recipes? I think we've seen some issues about that but I am not 100% sure.

That part is IMO the easiest - it's what {{ stdlib("c") }} tackled first and foremost. If a package builds against a newer sysroot, the __glibc part of the requirement spreads virally (in principle). As long as the build requires that bump, this is fine.
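For reference, the pattern in question is the pair of jinja functions in build requirements; on linux, {{ stdlib('c') }} resolves to the sysroot package and attaches the corresponding __glibc>=x run requirement:

    requirements:
      build:
        - {{ compiler('c') }}
        - {{ stdlib('c') }}  # e.g. sysroot_linux-64 on linux-64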

(1b) The one problem with this is that the sysroot constraint does not spread (which is what this issue is about), so packages that others compile against need to set that constraint manually (c.f. conda-forge/openmpi-feedstock#160). This problem is mostly orthogonal to the sysroot hacks though (i.e. we'd still have it even if we got rid of them).

  2. How much of the weighing down of sysroots do we still need to do now that we've moved to the explicit stdlib config?

If it's just for using the compilers, we can add the sysroot with our global baseline to the compilers feedstock. For people who use our production compilers (e.g. gcc_linux-64) directly, my response would be that it's their responsibility to similarly install an appropriate sysroot. The support situation for our production compilers is a grey area, but I think this is a fair ask for using something we don't explicitly support.
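A hedged sketch of what adding the baseline sysroot to the compilers feedstock could mean, i.e. a run dependency at the metapackage level (version illustrative):

    requirements:
      run:
        # global baseline travels with the user-facing compiler metapackage
        - sysroot_linux-64 >=2.17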

  3. What should the default be outside of conda-forge builds for external users?

I tend to agree with @minrk here that the newest sysroot suitable for a user's system would actually be a good default. Anyone else can explicitly install a specific version.

The fourth point that Isuru brought up is that there are packages (like r-base) that depend on our compilers, and there we're somewhat boxed in. We cannot pin the sysroot to a single version in r-base (e.g. 2.17), because there would always be cases where you need the other (e.g. 2.28; or vice versa). However, I think this problem would be solvable if we provided two variants of r-base, one each for 2.17 & 2.28; that way there's always an appropriate build available, without requiring the sysroot hacks to select 2.17.

The feasibility of this approach depends a bit on how many packages would need such double variants (though note that it's not necessary to build twice; e.g. just one _r-base and then two r-base wrappers that carry the constraints). Overall this sounds feasible to me, and IMO preferable to keeping the hacks indefinitely.
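To sketch the wrapper idea (an entirely hypothetical layout): build _r-base once, then ship two thin r-base outputs that only differ in their sysroot constraint:

    outputs:
      - name: _r-base  # the actual build, done once
      - name: r-base
        requirements:
          run:
            - _r-base
            # the second variant would constrain to 2.28.* instead
            - sysroot_linux-64 2.17.*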

@minrk: Is there a separate issue to track the removal of track_features on sysroot, or should I open one?

At this point I believe this would be a good thing to do...

@minrk
Member

minrk commented Aug 5, 2024

Thanks. I don't really understand what's special about the r-base issue. I know R installations may compile things, but is this not the same "runtime compilation" situation we've already covered for mpicc, etc.? I.e. it would still be better for r-base to pull in the latest sysroot by default for environments, rather than the oldest. Or is that not what's happening? As I understand it, it is not true that r-base needs to pin down the sysroot, except for building binaries to be redistributed. And I think at this point it is not the best experience to be preparing user environments for that task by default.

At this point I believe this would be a good thing to do...

Okay, I'll do that and try to summarize/link where this has been spread out so far
