
Best practices for extensions and fallbacks #1256

Open
donmccurdy opened this issue Feb 21, 2018 · 5 comments
@donmccurdy
Contributor

donmccurdy commented Feb 21, 2018

Variations of this have come up in recent conversations for texture transforms and Draco extensions:

From EXT_texture_transform

Implementation Note: For maximum compatibility, it is recommended that exporters generate UV coordinate sets both with and without transforms applied, use the post-transform set in the texture texCoord field, then the pre-transform set with this extension. This way, if the extension is not supported by the consuming engine, the model still renders correctly. Including both will increase the size of the model, so if including the fallback UV set is too burdensome, either add this extension to extensionsRequired or use the same texCoord value in both places.
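As a sketch of what that dual-UV layout looks like in glTF JSON (indices and values are hypothetical; TEXCOORD_1 would hold the baked, post-transform UVs and TEXCOORD_0 the raw set the extension transforms at runtime):

```json
{
  "materials": [{
    "pbrMetallicRoughness": {
      "baseColorTexture": {
        "index": 0,
        "texCoord": 1,
        "extensions": {
          "EXT_texture_transform": {
            "texCoord": 0,
            "offset": [0.0, 0.5],
            "scale": [0.5, 0.5]
          }
        }
      }
    }
  }]
}
```

An engine that ignores the extension samples TEXCOORD_1 and still renders correctly; an engine that supports it applies the offset/scale to TEXCOORD_0 instead.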

From KHR_draco_mesh_compression

To prevent transmission of redundant data, exporters should generally write compressed Draco data into a separate buffer from the uncompressed fallback, and shared data into a third buffer. Loaders may then optimize to request only the necessary buffers.
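A minimal sketch of that three-buffer layout (URIs, byte lengths, and indices are illustrative; the bufferViews and accessors wiring the compressed bufferView to draco.bin and the fallback accessors to fallback.bin are elided for brevity):

```json
{
  "meshes": [{
    "primitives": [{
      "attributes": { "POSITION": 0, "NORMAL": 1 },
      "indices": 2,
      "extensions": {
        "KHR_draco_mesh_compression": {
          "bufferView": 0,
          "attributes": { "POSITION": 0, "NORMAL": 1 }
        }
      }
    }]
  }],
  "buffers": [
    { "uri": "draco.bin", "byteLength": 10240 },
    { "uri": "fallback.bin", "byteLength": 102400 },
    { "uri": "shared.bin", "byteLength": 4096 }
  ]
}
```

A Draco-aware loader would fetch only draco.bin (plus shared.bin); a core-only loader would fetch fallback.bin (plus shared.bin), provided it is smart enough to skip buffers it never dereferences.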


While I think it's worthwhile that we're taking pains to ensure assets can include fallback behavior for unsupported extensions, I am skeptical that tooling will actually do these things. For example, writing three buffers for Draco is contrary to our advice in Best practices for .gltf / .glb, and assumes engines that support no extensions will still be clever enough to avoid loading extra buffers.

And as a developer using exporters to create experiences (where my assets are not re-distributed) I would certainly want my tooling to provide opt-outs for these duplicate textures, UVs, and buffers. I'm not sure what an exporter should do by default, but I would lean toward saying duplication is to be avoided by default, and to provide e.g. Runtime or Compatibility presets.

To complement our advice on fallbacks, perhaps we should also provide best practices on use of glTF extensions. A first pass at such language:

Certain glTF assets are reproducible within the core glTF specification, but for whatever reason (often performance) include or require extensions. Such extensions include:

  • KHR_draco_mesh_compression
  • KHR_materials_unlit
  • EXT_texture_transform

In these cases, we consider it best practice for asset distributors[1] to provide as few required extensions as possible. Where implementing fallbacks may be impractical (e.g. compression) distributors should prefer to remove these extensions — decompressing data or baking texture transforms to UVs — rather than providing assets with such extensions required. End-users may re-apply optimizations as needed, via the glTF tooling ecosystem[2].

Where reasonable fallback behaviors are possible (e.g. KHR_materials_unlit), distributors are
encouraged to use their best judgment on what to provide.

[1] By "asset distributor" I mean tools like Sketchfab, Google Poly, Microsoft Remix3D, and any others that are agnostic to any particular engine. The Unity Asset Store could be an exception, as it services exactly one engine.
[2] Having the ability to both add and remove extensions via glTF-Toolkit might increase the odds of this actually happening.

@pjcozzi
Member

pjcozzi commented Feb 28, 2018

+1

This is aligned with the original intent of gltf-pipeline - and really glTF in general - make exporters as simple as possible and push the optimizations to a common tool. It will also, of course, help with fragmentation.

It does put a bit of implementation burden on some exporters as it can create two paths, e.g., a web service would generate an optimized glTF for viewing in their engine, but then a more vanilla one for explicit export.

@vpenades
Contributor

vpenades commented Apr 3, 2018

In my humble opinion, any extension should be required to be designed in a way that the model can be imported by any engine/tool, faithfully and without glitches.

If an extension breaks any engine/tool that doesn't support extensions, then that extension should become part of the core specification, so all engines/tools are required to implement it, or the extension should be removed altogether.

Guys, you're defining a file format that is expected to live for years to come; don't make the same mistakes as COLLADA and FBX... for true, long-standing standards, consistency and robustness are more important than being feature-rich.

@sbtron
Contributor

sbtron commented Jun 21, 2018

The text from @donmccurdy in the first comment still stands; I'm just elaborating with some examples in my own words, since there was discussion about this recently in #1015.

The high-level idea behind fallbacks in extensions is to maintain compatibility with existing engine implementations that may not understand the extension, and at the same time allow engines that understand the extension to take advantage of new extension capabilities.

Depending on the extension in question, the fallback to the core glTF 2.0 spec can mean different things, ranging from simply ignoring the additional capability altogether to providing an alternative to the capability minus its benefits. For example, if an asset containing the KHR_lights extension is loaded into an engine that doesn't understand it, the engine will ignore the extension and use its own lights or environment, just as it would today with the core glTF 2.0 spec. The Draco geometry extension could provide a fallback to regular geometry so that the object still works when loaded into an engine that doesn't support Draco. The texture-transform and LOD extensions with fallback fall into the same category as Draco, where the asset still works as it does with the core glTF 2.0 spec and you just don't get the additional benefits of the extension.

The fallback is highly encouraged if you are trying to create assets for maximum compatibility across different engine endpoints. If you are creating assets for a specific use case (for example, you know the exact engine capabilities where the assets will be rendered, and the assets are not expected to be available to any other engine), then you could opt out of having compatibility with core glTF 2.0 and not include the fallback. This is the extensionsRequired approach, where you are explicitly choosing extension capabilities at the cost of compatibility: you are saying this asset should only be loaded into engines that understand the extension, and should not be loaded in engines that only understand the core glTF 2.0 spec.
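The two stances differ in just one top-level property. A sketch, using Draco as the example: the asset below refuses to load in a non-Draco engine, while deleting the extensionsRequired line restores fallback behavior (assuming fallback geometry is actually present).

```json
{
  "extensionsUsed": ["KHR_draco_mesh_compression"],
  "extensionsRequired": ["KHR_draco_mesh_compression"]
}
```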

The fallback approach works best for client/server scenarios using glTF JSON that points to separate files, and may not make sense for on-disk scenarios or a self-contained glb approach. Taking Draco as the example again, you can have glTF JSON that points to both Draco geometry and uncompressed geometry. Depending on whether an engine understands the Draco extension, it would download and render the appropriate geometry. For self-contained glb scenarios, having both geometries in the glb file is not very useful, as it bloats the file size. The extensionsRequired approach might be a better choice if you want to use Draco in a self-contained glb, but that does come at the cost of not being able to load that model in engines that don't support Draco. Over time, as more engines support Draco, this will be less of a problem, but it is something to consider if you are creating these types of assets right now.

Lastly, there were some questions on how the client/server API should work, which really depends on the individual implementation. You could have just one endpoint with the glTF, which has all the extensions, with individual clients downloading the glTF and any related files/extensions understood by that client. Another perfectly valid approach is to have different API endpoints for delivering a glTF with specific capabilities: say, a GET url/Model could return a core glTF 2.0 model, and a GET url/Model/Draco could return a glTF with only Draco-compressed geometry and no core-spec fallback. Only clients that understand the Draco extension should call the second API endpoint. The format itself is flexible enough to enable both approaches, and it's up to the individual server API implementer to pick the approach that suits their needs best. My personal preference would be to just have one endpoint for the asset, so that the glTF loaders for each engine can simply work with the extensions they are aware of, without having to learn about the different API designs for each service.

Hope that helps explain some of the thinking.

@vpenades
Contributor

vpenades commented Jun 21, 2018

@sbtron I agree with you about extensions having fallbacks for maximum compatibility; without them, I fear extensions can become an "anything goes" scenario where it can be nearly impossible to load a glTF file in a standard way.

I've always seen extensions as a way of adding extra data that doesn't change how the core data is interpreted. This is no longer the case with many extensions.

For me, another issue is Draco. The whole point is to make the file smaller, so providing a non-compressed fallback defeats the whole purpose of the extension. But then, Draco only has implementations in C++ and JavaScript, so if it becomes mainstream, it limits the usage of glTF to those two languages.

@zellski
Contributor

zellski commented Aug 20, 2018

I've been banging the drum a bit monotonously lately about how important the universality of glTF is, especially vis-à-vis extensions. It was good to go back through all this; it captures the concerns, and it's clear we're pretty much on the same page.

I don't have new insights per se, but I think it's useful to keep the discussion running, even at the cost of redundancy. I do agree with the suggestion on adding some best-practices commentary. There are non-obvious implications that are worth spelling out.

Draco really does vividly illustrate some of the conceptual turbulence... Are these uncontroversial statements?

  • A self-contained, Draco-compressed .glb file will always be divisive, since it must "require" its extension, and some loaders just won't or can't include the bitstream decoder.
    • This seems to me to be the sharpest example of a violation of "JPEG of 3D"; it burdens the user with knowing or remembering whether their orc.glb embeds some additional opaque requirement, and with knowing where it will load and where it won't.
  • Performant fallback behaviour is meaningfully achievable only with distribution systems which can naturally provide or generate URIs for the split buffers required.
    • Furthermore, as @donmccurdy suggests above, simpler and preexisting loaders need to be clever enough to know to load only some buffers. That's perhaps a stern requirement.
  • And yet Draco is a superb and irreplaceable extension of glTF!

(E.g. MSFT_texture_dds is similar. The fallback (for machines with incompatible GPUs) requires sourcing JPEG/PNG from separate URIs, or conceivably bundling a software decoder for the compressed format.)

What's the conclusion? "Where implementing fallbacks may be impractical (e.g. compression) distributors should prefer to remove these extensions — decompressing data or baking texture transforms to UVs — rather than providing assets with such extensions required." is brutal, but I'm tempted to agree.

I suppose a different file extension could also work in a pragmatic sort of way; a .glbx file (or whatever) could signal to the user that it contains at least one required extension beyond the core version it specifies. But this path seems fraught with difficulty, too.

Further, there's a subtler point, which I've been trying to express in my head, and for which I don't have a proposed conclusion. I would love to hear if it's been discussed elsewhere.

So it seems to me that beyond the well-defined glTF core, there is a secondary standard of sorts. It is amorphous, and harder to control: the subset of extensions that are well-supported enough in the overall ecosystem that our users relax into trusting they will be well-supported everywhere. This is the "JPEG of 3D" expectation again: that while our trusty orc.glb will look different, e.g. in differently lit environments, it won't suddenly sprout a clear-coat finish.

This may be a premature concern as yet; perhaps the worst that happens today is that models that use KHR_materials_unlit run the danger of being somewhat lit. But I see a good half-dozen extensions coming down the pipe, and even with reasonable fallback measures, they will all mutate the landscape of visual results within the ecosystem. This is a perpetually moving target; desktop apps and web apps alike constantly auto-upgrade, and the very file that looked one way yesterday may look different today. The expectation of constancy is damaged.
