
Backwards/Forward compatibility #129

Open
jonahwilliams opened this issue Sep 1, 2022 · 7 comments

@jonahwilliams
Collaborator

Overview

We'd still like the ability to make major revisions, so we'll communicate via semver: major changes indicate that the format is completely incompatible with old clients; minor changes indicate that new clients can read old vg files and old clients can read new vg files. At the time of a major version release, we can increment the version number and possibly consider bundling the old decoder as a fallback, but this is out of scope for now.

Background

The VG binary format is divided into sections which compose a topologically sorted description of a Flutter picture. The intention is that we can trivially convert the binary format into a Flutter picture "online" without creating and traversing a secondary tree structure. First, we describe shaders and gradients. Then we describe paints, which may reference those shaders/gradients. Then we describe paths. Then we describe text. Then finally we describe commands which may reference any of those objects.

Each section is composed of objects, each introduced by an integer identifier that is specific to that section. Each object is then composed of a fixed number of fields that are implicit in the type of that object.
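To make the fixed-layout scheme concrete, here is a minimal sketch in Python (not the actual VG wire format; the type id, field names, and field sizes are all hypothetical). The key property is that the decoder infers the field count and sizes purely from the type tag, so the layout cannot grow without breaking old clients:

```python
import struct

# Hypothetical paint object with a fixed layout: type tag, per-section
# object id, then the fields in a fixed order. No size information is
# encoded, so the field list is frozen by the type.
PAINT_TYPE = 27  # hypothetical type id

def encode_paint(object_id: int, color: int, stroke_width: float) -> bytes:
    return struct.pack('<BIIf', PAINT_TYPE, object_id, color, stroke_width)

def decode_paint(buf: bytes, offset: int = 0):
    type_tag, object_id, color, stroke_width = struct.unpack_from('<BIIf', buf, offset)
    assert type_tag == PAINT_TYPE
    return object_id, color, stroke_width

encoded = encode_paint(0, 0xFF00FF00, 2.5)
print(decode_paint(encoded))  # (0, 4278255360, 2.5)
```

Because every field is mandatory and positional, an old decoder handed a longer object would misread everything after it, which motivates the size-prefix idea below.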

Adding backwards/forwards compatibility

There are a few dimensions to backwards compatibility. One is the ability to add new fields to existing objects.

Consider the encoded paint. Suppose we want to add the ability to provide a ColorFilter object. Today we would expand the size of the object, breaking old clients. An alternative that allows objects to grow is to include the size of the object as the first field after the type. This would allow old clients to ignore and read past unknown new fields, while new clients could fill in default values. This does not allow rearranging of existing fields. Newly added values would need to be nullable in the decoder, or have default values provided; this would need to be handled on a case-by-case basis.
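The size-prefix idea above can be sketched as follows (again a hypothetical layout, not the real VG format). A "new" encoder appends an extra field, and an "old" decoder that only knows the first two fields still reads correctly and lands on the right next offset:

```python
import struct

def encode_object(type_tag: int, body: bytes) -> bytes:
    # Type tag, then the byte length of the body, then the body itself.
    return struct.pack('<BI', type_tag, len(body)) + body

def decode_paint_old_client(buf: bytes, offset: int = 0):
    # An old client that only knows about color and stroke width: read
    # those, then use the size prefix to seek past any newer fields.
    type_tag, size = struct.unpack_from('<BI', buf, offset)
    body_start = offset + struct.calcsize('<BI')
    color, stroke_width = struct.unpack_from('<If', buf, body_start)
    next_offset = body_start + size  # skips unknown trailing fields
    return (color, stroke_width), next_offset

# New encoder appends a hypothetical color-filter id as a third field.
body = struct.pack('<IfI', 0xFF0000FF, 1.0, 3)
buf = encode_object(2, body) + b'NEXT'
fields, next_offset = decode_paint_old_client(buf)
print(fields, buf[next_offset:])  # (4278190335, 1.0) b'NEXT'
```

Note that this only works for appended fields: reordering or removing existing fields would still break old clients, matching the caveat above.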

Another dimension of backwards compatibility is new command types. This can be handled by making the command decoder tolerate and skip over unknown commands using an embedded size tag.
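A sketch of such a tolerant command loop (command ids and the tag layout are hypothetical): each command carries a size tag, so an old decoder can dispatch the commands it recognizes and silently seek past the rest:

```python
import struct

# Hypothetical command ids known to this (old) decoder.
KNOWN = {1: 'drawPath', 2: 'drawText'}

def decode_commands(buf: bytes):
    offset, out = 0, []
    while offset < len(buf):
        cmd, size = struct.unpack_from('<BI', buf, offset)
        offset += struct.calcsize('<BI')
        payload = buf[offset:offset + size]
        if cmd in KNOWN:
            out.append((KNOWN[cmd], payload))
        # Unknown command: the size tag lets us skip its payload safely.
        offset += size
    return out

buf = (struct.pack('<BI', 1, 2) + b'ab'      # known: drawPath
       + struct.pack('<BI', 99, 3) + b'xyz'  # future command, skipped
       + struct.pack('<BI', 2, 1) + b'c')    # known: drawText
print(decode_commands(buf))  # [('drawPath', b'ab'), ('drawText', b'c')]
```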

The next dimension is the addition of new sections. Because the sections are in a fixed topological order, we need to be more careful here. Instead of a fully generic solution, let us consider some known missing features: filters and _. We know that these will need to be defined before paints, so we simply insert an empty section that can be filled in later on. We also give each section a size header and update the decoder to skip over sections it does not understand.
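The section-skipping scheme can be sketched the same way (section ids are hypothetical): each section carries a header giving its byte length, so a reserved-but-empty section costs only a header and a decoder can seek past any section it does not understand:

```python
import struct

def iter_sections(buf: bytes):
    # Each section: id byte, byte length of the section body, body.
    offset = 0
    while offset < len(buf):
        section_id, length = struct.unpack_from('<BI', buf, offset)
        offset += struct.calcsize('<BI')
        yield section_id, buf[offset:offset + length]
        offset += length

# A reserved filter section (hypothetical id 7, empty today) inserted
# between the shader section and the paint section.
buf = (struct.pack('<BI', 1, 2) + b'sh'     # shaders
       + struct.pack('<BI', 7, 0)           # reserved, empty
       + struct.pack('<BI', 2, 2) + b'pa')  # paints
print(list(iter_sections(buf)))  # [(1, b'sh'), (7, b''), (2, b'pa')]
```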

This does not address every possible backwards compatibility problem, but it does allow continued iteration. A major backwards compatibility change could allow us to make further changes, but none are known at this time.

@jonahwilliams jonahwilliams self-assigned this Sep 1, 2022
@dnfield
Owner

dnfield commented Sep 1, 2022

One thing that might get sad:

Let's say we come up with some way to significantly compress the size of the binary (for example, when we dropped precision by at least half, a ways back).

That will always be a major break. Our Google based customers won't really care that we're using semver unless we completely fork the thing internally, which I don't think we or they will want.

I guess the main thing is: if and once we do this, we've created a format we're committed to supporting. That might be ok but it's contrary to our initial design goals.

In particular, there will be little to no reason to run the compiler as part of the build anymore, and destructive optimizations may make it nearly impossible to recover any meaningful vector data once compilation has occurred. And customers will have no way to know if they got the latest optimizations or not, or even necessarily what optimizations could still be applied.

@jonahwilliams
Collaborator Author

> I guess the main thing is: if and once we do this, we've created a format we're committed to supporting. That might be ok but it's contrary to our initial design goals.

Yes, this is true, but at the time I think we were a bit more hopeful this could be solved with infra on the client teams' side. If that's less feasible than we initially thought, then we should change course instead.

> That will always be a major break. Our Google based customers won't really care that we're using semver unless we completely fork the thing internally, which I don't think we or they will want.

We could technically bundle both old and new. The decoders are quite small. If we did this, say, once a year for 10 years, we'd end up with 10 decoders to maintain forever. If we assume they can share a lot of code, it's not the worst, but it's definitely more complicated than having a single active version.

> In particular, there will be little to no reason to run the compiler as part of the build anymore, and destructive optimizations may make it nearly impossible to recover any meaningful vector data once compilation has occurred. And customers will have no way to know if they got the latest optimizations or not, or even necessarily what optimizations could still be applied

I'm not following this at all

@dnfield
Owner

dnfield commented Sep 1, 2022

Customers will stop rebuilding using the latest version of the compiler and miss out on optimizations we add.

@jonahwilliams
Collaborator Author

For network assets, yes, that is true. Also worth considering that Impeller will have completely different performance characteristics, so we may want to make more intrusive changes based on the expected renderer as well. Though it's not quite far enough along for extensive benchmarking.

@zanderso

zanderso commented Sep 1, 2022

Breaking changes only need a migration path between two successive versions. A totally cromulent migration path is, "Over the course of the next N months, you must stop using old assets." Client teams can deploy updates to achieve that.

@jonahwilliams
Collaborator Author

That could also be achieved by bundling more than one version of the decoder at a time. I.e., we ship one major version per year and support the previous major version, which gives folks a two-year migration window.

@jonahwilliams
Collaborator Author

That is, the policy plus bundling multiple decoders could allow us to have "major versions".
