
[3.0] - pagination example does not work as intended #5881

Closed
tafelito opened this issue Jan 30, 2020 · 15 comments

@tafelito

Intended outcome:

I'm trying to implement the pagination example with the new cache implementation.

Actual outcome:

Using the pagination example based on #5677 doesn't work as expected:

merge(existing: any[], incoming: any[], { args }) {
  const merged = existing ? existing.slice(0) : [];
  // Insert the incoming elements in the right places, according to args.
  for (let i = args.offset; i < args.offset + args.limit; ++i) {
    merged[i] = incoming[i - args.offset];
  }
  return merged;
},
read(existing: any[], { args }) {
  // If we read the field before any data has been written to the
  // cache, this function will return undefined, which correctly
  // indicates that the field is missing.
  return existing && existing.slice(
    args.offset,
    args.offset + args.limit,
  );
},

In the merge function, instead of doing

for (let i = args.offset; i < args.offset + args.limit; ++i) 

I had to do

for (let i = args.offset; i < Math.min(args.offset + incoming.length, args.offset + args.limit); ++i)

since the incoming length can be less than the limit, which would otherwise pad the merged array with undefined items.
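
For reference, here is the whole merge function with that guard folded in (a sketch; it still assumes args always carries numeric offset and limit):

merge(existing: any[], incoming: any[], { args }: { args: any }) {
  const merged = existing ? existing.slice(0) : [];
  // Copy only as many elements as the server actually returned, so a
  // short final page doesn't pad the merged array with undefined holes.
  const end = args.offset + Math.min(args.limit, incoming.length);
  for (let i = args.offset; i < end; ++i) {
    merged[i] = incoming[i - args.offset];
  }
  return merged;
},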

In the read function, if the existing arg has data and we slice with an offset beyond its length, it returns an empty array rather than undefined, so the query is not passed on to the server.

Instead of doing

return existing && existing.slice(
  args.offset,
  args.offset + args.limit,
);

I did

const data =
  existing && existing.slice(args.offset, args.offset + args.limit);
return data && data.length === 0 ? undefined : data;

Also, the types in the example fail as written.

Instead of

merge(existing: any[], incoming: any[], { args }) {
...
}
read(existing: any[], { args }) {
...
}

I replaced them with

merge(existing: any, incoming: any, { args }: { args: any }) {
...
}
read(existing: any, { args }: { args: any }) {
....
}
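
Putting the three adjustments together, the whole field policy might look like this (a sketch using the loosened types; the FieldPolicy type is exported by @apollo/client):

import { FieldPolicy } from "@apollo/client";

const paginatedField: FieldPolicy<any> = {
  merge(existing: any, incoming: any, { args }: { args: any }) {
    const merged = existing ? existing.slice(0) : [];
    // Stop at the end of the incoming page to avoid undefined holes.
    const end = args.offset + Math.min(args.limit, incoming.length);
    for (let i = args.offset; i < end; ++i) {
      merged[i] = incoming[i - args.offset];
    }
    return merged;
  },
  read(existing: any, { args }: { args: any }) {
    const data =
      existing && existing.slice(args.offset, args.offset + args.limit);
    // Treat an empty page as a cache miss so the query still reaches the server.
    return data && data.length === 0 ? undefined : data;
  },
};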

Versions

 System:
    OS: macOS 10.15.2
  Binaries:
    Node: 12.12.0 - /usr/local/bin/node
    Yarn: 1.19.1 - /usr/local/bin/yarn
    npm: 6.11.3 - /usr/local/bin/npm
  Browsers:
    Chrome: 79.0.3945.130
    Safari: 13.0.4
  npmPackages:
    @apollo/client: ^3.0.0-beta.30 => 3.0.0-beta.30
    @apollo/link-context: ^2.0.0-beta.3 => 2.0.0-beta.3
    @apollo/link-error: ^2.0.0-beta.3 => 2.0.0-beta.3
@benjamn benjamn added this to the Release 3.0 milestone Jan 30, 2020
@benjamn benjamn self-assigned this Jan 30, 2020
@benjamn
Member

benjamn commented Jan 30, 2020

Thanks for testing not only the AC3 betas but the (recently added) documentation, too! Out of curiosity, what sort of type error are you seeing that required weakening the parameter types to any?

@tafelito
Author

The error is:

Types of parameters 'existing' and 'existing' are incompatible.
  Type 'string | number | void | Readonly<Object> | readonly string[] | Readonly<Reference> | readonly Reference[] | null | undefined' is not assignable to type 'any[] | undefined'.
    Type 'null' is not assignable to type 'any[] | undefined'. ts(2345)

@tafelito
Author

tafelito commented Jan 30, 2020

Another thing I tried to do is this.

Given these typePolicies:

Query: {
  fields: {
    areas: {
      keyArgs: ['where'],
      ...offsetLimitPaginatedField<Areas>(),
    },
  },
},

I have this in the cache:

[screenshot of the cache contents]

Then, after a delete mutation, I try to update the cache in the update function like this:

await deleteAreas({
  variables: {
    ids: selected,
  },
  update: cache => {
    const query = gql`
      query areas($where: areas_bool_exp) {
        areas(where: $where) {
          ...Area
        }
      }
    `;
    const areas = cache.readQuery({
      query,
      variables: { where: { client_id: { _eq: clientId } } },
    });
  },
});

But I get this error

Invariant Violation: Can't find field areas on ROOT_QUERY object

The read function is called, but since I don't set any limit or offset, it returns undefined.

So I guess what's happening is: when a query is executed through the client, if read returns undefined, the query is passed along and executed on the server. But if the query is only meant to run against the cache, a read function returning undefined throws an error. Is that how it's supposed to work?
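
One way to make a cache-only readQuery like this work would be a read function that passes the raw list through when no pagination args are supplied (a sketch, not the documented policy):

read(existing: any, { args }: { args: any }) {
  // No offset/limit means the caller (e.g. cache.readQuery in an update
  // function) wants the raw cached list, not a page of it.
  if (args?.offset == null && args?.limit == null) {
    return existing;
  }
  return existing && existing.slice(args.offset, args.offset + args.limit);
},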

@benjamn
Member

benjamn commented Jan 30, 2020

@tafelito It's perfectly fine to have a read function that handles a variety of argument styles, including args.{offset,limit} pagination and args.where filtering, but I'm guessing your offsetLimitPaginatedField helper only handles pagination? If you want to combine different capabilities into one FieldPolicy, I think you need to use a different helper function to generate the policy. Ideally it could reuse offsetLimitPaginatedField internally.

Returning undefined is how read functions express that the field they're computing is missing, so throwing a missing field error is the intended behavior. That said, if it's a @client field that's not meant to be sent to the server, we might want to throw the exception anyway, rather than sending a query to the server that doesn't have the relevant field in it.
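
A sketch of that kind of composition, reusing the offsetLimitPaginatedField helper from above (the wrapper name is hypothetical):

function filteredOffsetLimitField<T>() {
  // Reuse the pagination helper and layer the filtering concern on top.
  return {
    ...offsetLimitPaginatedField<T>(),
    keyArgs: ["where"], // one cache entry per distinct filter
  };
}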

@wesbos

wesbos commented Jan 30, 2020

I'm also having issues with Pagination - I've detailed them here: https://spectrum.chat/apollo/apollo-client/apollo-client-3-evict-api~cd289ed2-92e7-4281-8f73-ddfebb8ad09d

Are these related or should I make a new issue?

@benjamn
Member

benjamn commented Jan 30, 2020

@wesbos Your issues have more to do with reactive updates as a result of cache modifications, I think, whereas this issue is more about writing appropriately defensive read and merge functions to handle different pagination use cases. Those feel like separate (but both important) topics to me.

@tafelito
Author

@benjamn yes, you're right, I only wanted to handle the pagination in the read function. What I was trying to point out is the difference in the read function between client.query and cache.readQuery: with client.query, if you return undefined (assuming the fetchPolicy is cache-and-network), the query is sent to the server; with cache.readQuery, if you return undefined, it throws an exception. (I'm pretty sure this was actually the same behavior as in 2.x.) The error just threw me off; it would be nice to return undefined instead of an error, IMHO.
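
In the meantime, one workaround is to guard the cache read, since in this beta a read function returning undefined makes cache.readQuery throw (the AreasQuery type is illustrative; query and clientId are from the earlier example):

let areas: AreasQuery | null = null;
try {
  areas = cache.readQuery<AreasQuery>({
    query,
    variables: { where: { client_id: { _eq: clientId } } },
  });
} catch (e) {
  // Missing-field invariant: treat it the same as an empty cache.
  areas = null;
}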

benjamn added a commit that referenced this issue Jan 30, 2020
Should help with the type issues reported in #5881, by defaulting
TExisting and TIncoming to any.

The last time I tried this, I ran into problems when the user-supplied
types were primitive (already read-only) types like string, since string
values are not assignable to Readonly<any> parameters. This time, I was
able to work around that dubious error using the SafeReadonly<T> type.
benjamn added a commit that referenced this issue Jan 30, 2020
Follow-up to #5677.

Thanks to @tafelito for actually trying out the code examples in the new
cache policies documentation!
@tafelito
Author

tafelito commented Jan 30, 2020

@benjamn I think I found 2 new edge cases.

I have a list that shows 5 items per page. I go to the second page, which has only 1 item. I remove that item by doing cache.writeQuery with a new data array that no longer contains it.
When the read function is executed, the existing array has 5 items, which is correct. Because the second page is now an empty array, it returns undefined.

The problem is, when you do client.query you want the result to be undefined in order to pass the request to the server. But after the cache.writeQuery (I can't just evict the item, since I may actually be removing a list of items), if the read fn returns undefined, the query hook returns the previous data, including the deleted item, even though it was already deleted from the cache.
In that case it would have to return an empty array instead. But it can't return an empty array if we want to pass the request to the server.

Btw, this only happens if there are exactly 6 items (2 pages) in a list with 5 items per page. With 7 items it works fine, because the read fn returns an array with 1 item.

The second case is when you have a total of 6 items but only 1 page loaded, so the cache holds only 5 items. If I delete 1, the read function returns only 4 items, since the 6th item was never saved in the cache. How do you deal with that case?

I hope I was clear enough and I'm not missing something.

UPDATE: This might be related to what @wesbos reported above

benjamn added a commit that referenced this issue Jan 30, 2020
…ult-parameter-types

Address feedback on pagination examples raised in #5881.
@benjamn
Member

benjamn commented Jan 30, 2020

if the read fn returns undefined, the query hooks returns the previous data, including deleted item, even tho in the cache was already deleted

This sounds like a bug related to this code, which has been a source of frustration for some time now. Could you try putting together a reproduction so that we can confirm the diagnosis?

As for the second case, the behavior you're describing actually sounds reasonable to me, though you do have the option of implementing your read function so that it returns undefined if it can't return a full page of results. Also, if you're removing elements from a list (rather than just replacing them with null), you're probably going to have problems with offset/limit-based pagination, since the offsets are sensitive to the original list ordering. I would suggest not actually deleting items from the list in the cache, but instead filter them out in your UI code somehow… or try a different pagination strategy that's less sensitive to positional changes.
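
A sketch of the full-page variant described here; note that with this policy a genuinely short last page would also read as a miss and trigger a refetch, which is why the later example in this thread also consults totalCount:

read(existing: any[], { args }: { args: any }) {
  const page =
    existing && existing.slice(args.offset, args.offset + args.limit);
  // Only answer from the cache when a complete page is available;
  // otherwise report a miss so the request goes to the server.
  return page && page.length === args.limit ? page : undefined;
},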

@tafelito
Author

@benjamn you can see a repro of the first issue here: https://codesandbox.io/s/nostalgic-pike-94wxh

@tafelito
Author

tafelito commented Feb 6, 2020

@benjamn when you call cache.writeQuery, aside from the issue mentioned above, does evict still need to be called for the individual items to be removed from the cache, even though they are deleted from the ROOT_QUERY? Isn't cache.gc() supposed to take care of that?

Same the other way around: if you evict an item by id, shouldn't it also be removed from the root queries? If not, what's the purpose of evicting an item by id if the item stays in the query?
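
For context, this is roughly how those pieces fit together in the AC3 API (a sketch; users and deletedId are illustrative): evict removes the normalized entity and gc() collects objects unreachable from the root, but neither rewrites list fields that still hold now-dangling references; those have to be dropped explicitly, e.g. with cache.modify, or filtered out by read/merge via canRead.

// Remove the normalized entity itself.
cache.evict({ id: cache.identify({ __typename: "users", id: deletedId }) });
// Collect anything no longer reachable from the root; this does not
// edit list fields that still contain dangling references.
cache.gc();
// Drop the dangling reference from the list field explicitly.
cache.modify({
  fields: {
    users(existingRefs = [], { readField }) {
      return existingRefs.filter(
        (ref: any) => readField("id", ref) !== deletedId
      );
    },
  },
});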

Also I was thinking about the second case I mentioned above and your suggestion for that.

If I delete item 5, read the cache, filter out the deleted item, and write the new data to the cache: after writing the new data, in the read function, if you return undefined as you suggested (in case the page doesn't have enough items), it won't pass the request to the server. I'm not sure where you meant to return undefined for that case.

A numbered-page pagination like the one in the sandbox I put together earlier seems like a very common use case, and I don't see how not deleting the items helps there, unless we can do some sort of conditional refetchQueries.
For data that doesn't change very often, the offset/limit solution should work fine, unlike an infinite list for a feed or any other list whose data changes often.

@tafelito
Author

tafelito commented Jun 3, 2020

@benjamn I know you guys are busy with the upcoming v3 release, but can we get back to this at some point, at least in the docs? We use a lot of tables in an admin dashboard where we can edit/remove/add rows, and we still have to refetch after every CRUD operation. I thought pagination would be fixed with the new AC cache, but this is still happening.
I think this is a really common use case, and it would be great for the docs to include a recommended way to implement this sort of pagination.

Thanks for the great work!

@tafelito
Author

tafelito commented Jun 12, 2020

With the latest updates to AC, this seems to be working fine now. I just wanted to leave a working example of how to handle numbered pagination, where you can delete and insert items in the list without refetching queries from the server when it isn't necessary. Of course, this works hand in hand with the cache.modify and cache.evict functions when performing insert and delete mutations.

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        users: offsetLimitPaginatedField(),
        user(existing, { args, toReference }) {
          return existing || toReference({ __typename: "users", id: args?.id });
        },
      },
    },
  },
});

pagination.ts

import { FieldFunctionOptions } from "@apollo/client";

type Items = {
  nodes: any[];
  totalCount: number;
};

export function offsetLimitPaginatedField() {
  return {
    keyArgs: ["where"],
    merge(
      existing: Items,
      incoming: Items,
      { args, canRead, readField }: FieldFunctionOptions
    ) {
      const offset = args?.offset ?? -1;
      const limit = args?.limit ?? -1;

      // Insert the incoming elements in the right places, according to args.
      if (offset >= 0 && limit > 0) {
        // filter dangling references out of the existing items
        const merged = existing?.nodes?.length
          ? existing.nodes.slice(0).filter(canRead)
          : [];
        // we need the offset difference of the existing array and what was requested,
        // since existing can already contain newly inserted items that may be present in the incoming
        const offsetDiff = Math.max(merged?.length - offset, 0);
        const end = offset + Math.min(limit, incoming.nodes.length);
        for (let i = offset; i < end; ++i) {
          const node = incoming.nodes[i - offset];

          // find out whether the node is already present in the array;
          // this can happen when a new object is added at the top of the list
          // and the server then sends the same id back in the next page
          const duplicate = merged.find(
            (m: any) => readField("id", m) === readField("id", node)
          );

          if (!duplicate) {
            merged[i + offsetDiff] = node;
          }
        }
        // we filter for empty spots in case the incoming contained existing items.
        // This could happen if items were inserted at the top of the list
        const nodes = merged.filter((m: any) => m);
        return {
          ...incoming,
          nodes,
        };
      }
      return incoming;
    },
    read(existing: any, { args, canRead }: FieldFunctionOptions) {
      const offset = args?.offset ?? -1;
      const limit = args?.limit ?? -1;

      if (offset < 0 && limit < 0) {
        return existing;
      }

      // If we read the field before any data has been written to the
      // cache, this function will return undefined, which correctly
      // indicates that the field is missing.
      const nodes =
        existing?.nodes?.length && offset >= 0 && limit > 0
          ? existing.nodes
              // filter out empty spots (the merged array may be sparse) and
              // objects that are no longer valid references (evicted from the cache)
              .filter(canRead)
          : existing?.nodes;
      // we have to filter first in order to slice only valid references and prevent a
      // server roundtrip if length doesn't cover the items per page
      const page = nodes?.length ? nodes.slice(offset, offset + limit) : nodes;
      // if the total items read from the cache is less than the number requested,
      // but the total items is greater, it means that we don't have enough items cached,
      // so in order to request the items from the server instead, we return undefined
      // for this to work we need to know the total count of all items
      const itemsPerPage = args?.limit || 0;
      if (page?.length < itemsPerPage && nodes?.length < existing?.totalCount) {
        return undefined;
      }

      if (nodes?.length) {
        return {
          ...existing,
          nodes: page,
        };
      }
    },
  };
}
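
For completeness, a sketch of the delete-mutation side that pairs with this policy (deleteUsers and selectedIds are illustrative):

await deleteUsers({
  variables: { ids: selectedIds },
  update(cache) {
    // Evict each deleted entity; the merge/read functions above then
    // drop the resulting dangling references via canRead.
    selectedIds.forEach((id: string) => {
      cache.evict({ id: cache.identify({ __typename: "users", id }) });
    });
    cache.gc();
  },
});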

@benjamn
Member

benjamn commented Jul 10, 2020

I agree this is an area that we need to handle better in the documentation. Let us know if the updates in #6429 do not answer your questions (or the questions you think other developers might have). Thanks again for pointing out the issues with the original pagination examples!

@benjamn benjamn closed this as completed Jul 10, 2020
@faerylay

I have the same issue, or I don't know how to do it.

[screenshot]

When I use relayStylePagination, fetchMore with infinite scroll works fine, but a scrolling error occurs. I then changed it to offsetLimitPagination; the scrolling error went away, but like and delete-post mutations don't take effect until the page is refreshed.

[screenshot]

How do I do this? Please help.
(Sorry for my English.)

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 15, 2023