Promote Node-based Materials to Core and Polish #16440
At some point the project will start to work at […]. I personally would leave the material system for […]
hi @bhouston, thank you very much for this incentive... Did you ever see the […]? My idea is to finish this material and test it with all three.js examples, normalizing the code by replacing the references, for example:

// import three.js lib
THREE.MeshStandardMaterial = THREE.MeshStandardNodeMaterial;
// start/test example

And do this with the other materials... I would like to know what you think of this node, because it is based on your shaders:
Have you thought of any way to add anisotropy in physical materials? Or is there a path you recommend?
I think it was done here: three.js/examples/webgl_materials_nodes.html Lines 987 to 988 in 93e72ba
What exactly is that? Currently there is a cache system to reuse texture samples in the same UV.
I think something has been done in this sense here: three.js/examples/webgl_materials_nodes.html Lines 912 to 917 in 93e72ba
@sunag I believe strongly your stuff belongs in core. :)
Clara.io has supported anisotropic roughness since 2014. I want to contribute it back to three.js, but I hate modifying our horribly complex uber shaders -- very few people understand the uber shaders because their complexity is so high, with so many nested defines spread everywhere. Moving to node-based will simplify Three.JS's core so we can move to the next level.
I forgot about this, but yes, this is the easy way to move this to core! :) And then we can deprecate the old one and use this instead. I bet it could make Three.JS smaller in terms of its code base, or at least keep it the same size. Thus Three.JS is better than every other open source JS-based 3D rendering library while remaining the same code size.
I did reference the wrong person, it wasn't jojobyte, but rather Jan Jordan, the NVIDIA MDL product manager. Basically he suggested a T2V type that is passed around. This allows for some different node structures that he says are better. I can explain later, as I want to keep this discussion focused.
With regards to arbitrary UV channels, I know that your system supports it. :)
Why delay? @sunag made a system that works with WebGLRenderer right now. It is just a matter of adopting it. Sunag has made it relatively easy. I know there is still a bunch of work to polish it up and have it work correctly with the various post effect scenarios, etc. I can commit to doing this if we do it in the relatively near term. I've done a few heavy lift PRs in the past, such as redesigning the lighting system and the animation system, so generally I can pull these off.
I'm just concerned about backwards compatibility and the necessary time effort to convert the related example code, which is quite extensive to be honest. Just wanted to point out a different approach that might be easier to handle and is maybe more strategic.
So does this mean that the built-in materials would be replaced by node materials entirely, and user custom node materials would be treated the same way in the renderer?
No, the built-in materials will be implemented as node materials.
The idea is that we polyfill MeshStandardMaterial such that it uses the node-based system to compile while keeping the current interface. Thus we get rid of the insane set of defines and what not that our uber materials utilize, and instead we use the node-based compiler system of sunag. Thus it is work, and I am sure we will find bugs, but it isn't impossible.
Looking at the overall plan, getting the uber-materials out of the renderer codebase is big. Ideally the renderer should have no dependencies on either the uber-materials or the node-based materials. If I'm only using a small custom ShaderMaterial, the entire threejs material system (nodes or otherwise) would ideally be tree-shakeable. Is that a realistic goal?
Could we split this list, to identify tasks that would definitely be required up front? Reworking the texture node and full AO support might be candidates.
Does this work with a procedural bump map? Just to mention them here, a couple other issues I've run into with the current implementation:
^This also applies to NodeMaterials.
"oomph". Can you please explain what you are referring to? |
I believe the issue was that in a scene reusing many otherwise-identical materials with different uniform values, state change optimizations that would apply to default materials do not apply to ShaderMaterial.
If I understand the current system correctly, having two identical ShaderMaterial instances (in terms of vert and frag shader) with different uniforms would still cause a full useProgram switch, which is expensive, whereas having two MeshStandardMaterial instances would only update the uniforms (aside from differences that affect defines). Any new system should, ideally, give custom materials the same optimizations as the built-in materials.
possible and interesting...
Assuming I understood the question... If you use different materials with the same inputs and types but different nodes, three.js would share the same program without problems, as this benchmark shows:
That is a good question... The correct approach today for multiple functions is this: three.js/examples/js/nodes/misc/TextureCubeUVNode.js Lines 137 to 158 in 51afa0b
Create a FunctionNode for each GLSL function and add the dependencies using the includes argument of FunctionNode.
I am thinking of starting the dev of a […]:

var tjslNodes = new TJSLNode(`{
	vec3 hash(vec3 p) { return vec3( 1.0 ); }
	vec3 voronoi3D(const in vec3 x) {
		return hash( x );
	}
}`);

// this would parse the code and return it in nodes
tjslNodes.methods.voronoi3D // returns a FunctionNode with a hash function dependency
tjslNodes.methods.hash // returns a FunctionNode with no dependencies

There is care in this process for better automatic optimization and an auto-rename system (avoiding name conflicts). And for example most of the root nodes like […]
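To illustrate what such a parser could do, here is a rough sketch in plain JavaScript (`extractFunctions` and its regex are hypothetical, not the actual TJSLNode implementation): one pass collects the declared GLSL functions, then records which sibling functions each body calls, mirroring the `includes` argument of FunctionNode.

```javascript
// Hypothetical sketch: find GLSL function definitions and their
// dependencies on sibling functions declared in the same source string.
function extractFunctions(glsl) {
  // matches e.g. "vec3 hash(vec3 p) {" and captures the function name
  const fnRegex = /\b(?:float|vec[234]|mat[34])\s+(\w+)\s*\([^)]*\)\s*\{/g;
  const defs = [];
  let match;
  while ((match = fnRegex.exec(glsl)) !== null) {
    // brace-match forward to find the end of this function's body
    let depth = 1, i = fnRegex.lastIndex;
    while (depth > 0 && i < glsl.length) {
      if (glsl[i] === '{') depth++;
      else if (glsl[i] === '}') depth--;
      i++;
    }
    defs.push({ name: match[1], body: glsl.slice(fnRegex.lastIndex, i) });
  }
  const methods = {};
  for (const { name, body } of defs) {
    methods[name] = {
      name,
      // a function depends on every *other* declared function it calls
      includes: defs
        .map(d => d.name)
        .filter(dep => dep !== name && new RegExp('\\b' + dep + '\\s*\\(').test(body))
    };
  }
  return methods;
}
```

Running this over the snippet above would report that `voronoi3D` depends on `hash`, while `hash` has no dependencies. A real implementation would of course need a proper GLSL tokenizer to handle comments, overloads, and shadowed names.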
I think that is easy to fix. Just have a content hash of the shader network that is independent of the uniforms and use that as a lookup into a shader cache of some sort. This would save both compilation as well as switching costs. This is a fixable problem and a straightforward optimization, I think.
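A minimal sketch of that idea (all names here are hypothetical, not the WebGLRenderer API): key the program cache on a serialization of the node structure only, so two networks that differ only in uniform values compile once and share a program.

```javascript
// Hypothetical program cache keyed by a content hash of the shader
// network. The key encodes node types and connections, never uniform
// values, so structurally identical materials map to one program.
class ProgramCache {
  constructor() {
    this.programs = new Map();
  }
  // serialize structure only: node type plus recursively keyed inputs
  cacheKey(node) {
    const inputs = (node.inputs || []).map(i => this.cacheKey(i)).join(',');
    return node.type + '(' + inputs + ')';
  }
  // compile is only invoked on a cache miss
  acquire(rootNode, compile) {
    const key = this.cacheKey(rootNode);
    if (!this.programs.has(key)) this.programs.set(key, compile(rootNode));
    return this.programs.get(key);
  }
}
```

With this, two materials built from the same node graph but different uniform values would hit the same cache entry, avoiding both recompilation and a useProgram switch.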
I think it may be useful to introduce the concept of a reusable shader graph in which you can just change the uniforms on it as well. This would save a lot of memory allocations for otherwise identical material graphs. This is similar to the material instances in UE4: https://docs.unrealengine.com/en-us/Engine/Rendering/Materials/MaterialInstances Thus one could have a material graph and then also a material graph instance which references the material graph but with replacement uniforms.
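A sketch of that instance concept (`MaterialGraphInstance` is a hypothetical name, not a three.js class): the instance holds only uniform overrides and delegates everything else, including the compiled program, to the shared parent graph.

```javascript
// Hypothetical UE4-style material instance: shares the parent graph
// (and thus its compiled program), overriding only uniform values.
class MaterialGraphInstance {
  constructor(parentGraph, overrides = {}) {
    this.parent = parentGraph;
    this.overrides = overrides;
  }
  // overridden uniforms win; everything else falls back to the parent
  getUniform(name) {
    return name in this.overrides ? this.overrides[name] : this.parent.uniforms[name];
  }
}
```

Creating a thousand instances of one graph would then cost a thousand small override objects instead of a thousand full node graphs.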
I think we should just have classes for each math node type. I think these can be defined very easily, so the code size is about the same. Thus one would use SinNode rather than Math2Node( Sin ). It would just be syntax sugar really, but it would be easier to use.
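A simplified sketch of that sugar (the generic `MathNode` base and its toy `generate()` here are illustrative, much thinner than the real node classes): each operation becomes a trivial subclass that bakes in its operator.

```javascript
// Simplified sketch: per-operation classes as sugar over a generic
// math node, so users write new SinNode( x ) instead of passing an
// operator constant to a generic node.
class MathNode {
  constructor(op, a) {
    this.op = op;
    this.a = a;
  }
  generate() {
    // toy GLSL emission for illustration only
    return this.op + '( ' + this.a + ' )';
  }
}
class SinNode extends MathNode {
  constructor(a) { super('sin', a); }
}
class CosNode extends MathNode {
  constructor(a) { super('cos', a); }
}
```

Since each subclass is one line, the library cost is negligible, and the compiler still only has to handle the single generic node type.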
I agree it should be different PRs. Otherwise it will be a 4 month PR that will never get accepted. One first PR to get it into place and ensure that the examples still work and no major performance degradations. And then a bunch of follow-up minor PRs to clean up the loose ends. I think that once it is in place there will be a ton of extension and optimization ideas, which we will all work on for years. I just want to get this into place so we can get rid of those insanely complex Uber materials.
that is incorrect. Programs are looked up by a fairly large string key, and it's mostly just the GLSL code (okay, there's a lot more, but uniforms aren't part of that key). Switching to a Node-based model would have zero impact on that Program re-use mechanism.
I do not see how this could happen as the benchmark shows: three.js/examples/js/nodes/materials/nodes/StandardNode.js Lines 14 to 16 in d5743a4
Moreover in NodeMaterial you can define a custom property name for each node, this way, you can make the material identical to the current one like: three.js/examples/webgl_materials_nodes.html Lines 2552 to 2553 in 93e72ba
But it is much more advantageous to invest in sorting uniforms into types to share a hash than to keep working with the current system using names (labels), for example:
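One way to read that idea (a hypothetical helper, not sunag's actual proposal): derive the shareable key from the sorted uniform types rather than from user-chosen labels, so uniform sets with identical layouts but different names still hash to the same value.

```javascript
// Hypothetical sketch: hash a uniform set by its sorted types only.
// Two materials whose uniforms have the same types (whatever they are
// named) produce the same key and can share uniform-layout handling.
function uniformsTypeHash(uniforms) {
  return Object.values(uniforms)
    .map(u => u.type)
    .sort()          // deterministic order, independent of declaration order
    .join('|');
}
```

For example, `{ diffuse: vec3, rough: float }` and `{ baseColor: vec3, r: float }` would both hash to `'float|vec3'` even though their labels differ.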
@sunag three.js/src/renderers/WebGLRenderer.js Lines 1445 to 1448 in c3c945d
three.js/src/renderers/WebGLRenderer.js Line 1522 in c3c945d
three.js/src/renderers/webgl/WebGLPrograms.js Lines 269 to 271 in c3c945d
@Usnul My benchmark is about sharing the same program with different materials, and the time to build the shader (initialization time).
@bhouston Ben.
I have an idea of a three.js shader editor that allows the user to expand includes and either modify them or just collapse them again (if not modified). Macros could similarly be displayed in expanded form. A typical workflow would be to start out with a copy of one of the builtin […]
(Usnul is right) I have suggested two small improvements (#17116, #17117) to reduce the overhead of unneeded material initialization operations. Both PRs are accompanied by demonstrations of performance differences. However, both demonstrations depend on really extreme cases. In what I consider normal scenarios, the differences will likely be negligible. (But I still think my PRs are right.)
@sunag How do you envision the NodeMaterial API for shadows?
Today I add […]:

// add light group per material
material.light = new LightsNode([ hemisphere, pointLight, directLight ]);

// change shadow color to green for example
material.shadow = new MulNode( new ShadowNode(), new Vector3Node( 0, 1, 0 ) );

// An extended light implementation
material.light = new TranslucentLightNode();
According to Three.js philosophy this can be a great improvement. Some guys use it for backgrounds and tree-shaking is good in this case. |
I came here because @munrocket pointed out in the three.js forum that this proposal could help with tree-shaking, which would make the bundle of a three.js app smaller and improve boot-time performance 🙂 I think this would be awesome.

One thing I wanted to add here: we also encountered the problem with the big shader strings in the final bundle. Our three.js app is used in e-commerce, and since many e-commerce users are on mobile devices, boot-time performance is essential. In an e-commerce scenario, we just can not afford to waste time on parsing unnecessary JavaScript code. Especially because all of our clients read the Amazon study which claims that they lose 1% in sales if latency increases by 100ms. Of course they also read the Google study which outlines that a 500ms delay decreases traffic by 20%. So we have no margin for wasting bandwidth and CPU cycles.

Therefore we created a "hack" to get the glsl shaders out of the final bundle and save them to a JSON file. This way we can benefit from the faster parsing of JSON.

I just wanted to drop my two cents, and maybe it's totally unrelated to the initial issue. As pointed out above, I just came here because of the link in the three.js forum. But if three.js refactors all of the glsl stuff, maybe you could think about how to make things even more performant, and maybe it's an idea to put the glsl stuff into another data format than JavaScript. If someone is interested in our JSON hack, just let me know and I'll set up a repo 🙂
This will not be productive. Chrome's JSON parser is fast, and JSON is a subset of JavaScript, so that makes sense. glsl is neither JSON nor JavaScript, so any motions you make there that involve encoding/decoding glsl to/from JSON will be a pure waste of space and CPU time. This is off-topic, but I thought I would clarify for the hopefuls.
@Usnul I'll create a repo to show you what we do. I do not think it's a waste of CPU time, because what we do is essentially something like […]. Nevertheless, I think it would be awesome to have the glsl shader code in something more efficient than plain old JavaScript, because JavaScript is expensive to parse. And better support for tree-shaking would also be great. So maybe it's worth considering those aspects when refactoring the shader parts of the code-base.
Aha, I see, so not the GLSL string, but the chunk library. I don't think it makes too much of a difference, but you might save a bit there. I'm not sure if it's worth it though, since there is very little JS code that constitutes the chunk library. The expensive part of parsing JS comes from building the AST, not from tokenization. Tokenization is actually quite fast, that's why this "JSON trick" works in principle. If you have a very large string in JS, it's essentially a single literal string token, so it takes almost no toll on the AST building process. Also worth mentioning, it's possible to be mindful of the ambiguity of token sequences and write JS code that's cheaper to parse. I'm not sure if it's worth doing this directly as a programmer writing code though.
@sunag Hey, great work on the node materials! Would it be possible to specify […]?
@vanruesc Thanks. The output is converted automatically, this would only be for the purpose of typification (.ts)? |
Sorry, I should've been more specific: Currently, people have to use Scene.overrideMaterial to render scene data such as normals to a texture. This approach has some shortcomings, though (see #14577, #18533). Instead of doing it this way, I'd much rather use additional draw buffers to render out normals (and other custom data) without having to render the same scene multiple times. I think the node material system has the potential to support this feature. It would make life a lot easier if users could specify custom […]. Do you think it would be possible to add something like this at some point, or do you see any roadblocks ahead? I should also mention that […]
Something similar already exists (render-to-texture): RTTNode.
RTT is similar to but not the same as MRT. The […] I want to be able to modify and extend built-in materials with additional shader code in a structured manner to render scene data into […].

The main goal is to render a complex scene as usual, but instead of just saving the scene colors and depth, you'd also render various additional textures at the same time, and you'd only have to calculate everything once. After that you could use these textures as inputs for RTT effects such as Screen Space Reflections. MRT is also essential for deferred shading pipelines.
Interesting... Maybe:

let mtl = new MRTMaterial();
mtl.inputs['albedo'] = new SomeNode();
mtl.inputs['normal'] = new SomeNode();
I think this issue can be closed since the roadmap for node materials is now clear. The rewritten version of […]
Description of the problem
@sunag did a great job on the node-based materials. While he got some criticism for his PRs when he submitted them, usually dealing with the internal design of the code compiler, I've always thought his work was amazing.
We have adopted node materials for our rendering recently. We have found that they lead to more artistic creativity. They also lead to a less complex code base -- the current uber materials in Three.JS are just insanely complex and hard to maintain.
Thus I would propose that we move the node-based materials into core and then we slowly move most of the examples to use the node-based materials instead of the complex do-everything uber materials.
It is a big change, but I think it moves Three.JS forward.
Some of the changes I think are needed in order to make node-based materials first class citizens of Three.JS:
This ties into my current proposal to glTF to adopt node-based materials:
https://docs.google.com/document/d/1Y6JFE2FV164IFDe7_cYhp2gzhSapB76fUNPgmsI6DDY/edit
I am not the first to suggest this but I can not find the previous discussion. I think it is just a matter of time until this happens, so might as well do this now. No other JavaScript 3D renderer has fully node-based materials. ;)
/ping @WestLangley @donmccurdy