[3.0] - pagination example does not work as intended #5881
Thanks for testing not only the AC3 betas but the (recently added) documentation, too! Out of curiosity, what sort of type error are you seeing that required weakening the parameter types?
The error is:
Another thing I tried is having these typePolicies:

```ts
Query: {
  fields: {
    areas: {
      keyArgs: ['where'],
      ...offsetLimitPaginatedField<Areas>(),
    },
  },
},
```

With this in the cache, after a delete mutation I'm trying to update the cache in the update function like this:

```ts
await deleteAreas({
  variables: {
    ids: selected,
  },
  update: cache => {
    const query = gql`
      query areas($where: areas_bool_exp) {
        areas(where: $where) {
          ...Area
        }
      }
    `;
    const areas = cache.readQuery({
      query,
      variables: { where: { client_id: { _eq: clientId } } },
    });
  },
});
```

But I get this error:
The read function is called, but since I don't set any limit or offset, it returns undefined. So I guess what's happening is that when a query is executed in the client, if read returns undefined, the query is passed through and executed on the server. But if the query is only meant to run against the cache and the read function returns undefined, it throws an error. Is that how it's supposed to work?
@tafelito It's perfectly fine to have a read function that sometimes returns undefined. Returning undefined is how a read function indicates that the field is missing from the cache.
I'm also having issues with pagination. I've detailed them here: https://spectrum.chat/apollo/apollo-client/apollo-client-3-evict-api~cd289ed2-92e7-4281-8f73-ddfebb8ad09d. Are these related, or should I make a new issue?
@wesbos Your issues have more to do with reactive updates as a result of cache modifications, I think, whereas this issue is more about writing appropriately defensive read functions.
@benjamn yes, you're right, I only wanted to handle the pagination in the read function. What I was trying to point out is the difference in the read function between the two approaches.
Should help with the type issues reported in #5881, by defaulting `TExisting` and `TIncoming` to `any`. The last time I tried this, I ran into problems when the user-supplied types were primitive (already read-only) types like `string`, since `string` values are not assignable to `Readonly<any>` parameters. This time, I was able to work around that dubious error using the `SafeReadonly<T>` type.
@benjamn I think I found 2 new edge cases. I have a list that shows 5 items per page. I go to the second page, which has only 1 item. I remove that item by doing cache.writeQuery with the new data array without the last item. The problem is, when you do client.query you want the result to be undefined in order to pass the request to the server. But after the cache.writeQuery (I can't just evict the item, since I may actually remove a list of items), the read function returns an empty page rather than undefined, so the request never reaches the server. Btw this only happens if there are only 6 items (2 pages) in a list of 5 items per page. If there are 7 items it works fine, because the second page is still not empty after the deletion. The second case is when you have a total of 6 items, but only 1 page was loaded, so the cache holds only 5 items. If I delete 1, the read function will return only 4 items, since the 6th item was never saved in the cache. How do you deal with that case? I hope I was clear enough, or maybe I'm missing something. UPDATE: This might be related to what @wesbos reported above.
…ult-parameter-types Address feedback on pagination examples raised in #5881.
This sounds like a bug related to this code, which has been a source of frustration for some time now. Could you try putting together a reproduction so that we can confirm the diagnosis? As for the second case, the behavior you're describing actually sounds reasonable to me, though you do have the option of implementing your read function to return undefined when the page doesn't have enough items.
@benjamn you can see the repro of the first issue at https://codesandbox.io/s/nostalgic-pike-94wxh
@benjamn if you evict an item by id, shouldn't it also be removed from the root queries? If not, what's the purpose of evicting an item by id if the item will stay in the query? Also, I was thinking about the second case I mentioned above and your suggestion for it. Say I delete item 5, read the cache, filter out the deleted item, and write the new data to the cache. After writing the new data, if the read function returns undefined as you suggested (in case the page doesn't have enough items), it won't pass the request to the server. I'm not sure where you meant to return undefined for that case. A numbered-page style of pagination like the one in the sandbox I put together before seems like a very common use case, and I don't see how not deleting the items helps there.
@benjamn I know you guys are busy with the release of the upcoming v3, but can we get back to this at some point, at least in the docs? We use a lot of tables in an admin dashboard where we can edit/remove/add rows, and we still have to refetch after every CRUD operation. I thought that with the new AC cache the pagination was going to be fixed, but this is still happening. Thanks for the great work!
With the latest updates of AC, this seems to be working fine now. I just wanted to leave a working example of how to handle numbered pagination, where you can delete and insert items in the list without having to refetch queries from the server if it's not necessary. Of course this works hand in hand with the cache configuration:

```ts
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        users: offsetLimitPaginatedField(),
        user(existing, { args, toReference }) {
          return existing || toReference({ __typename: "users", id: args?.id });
        },
      },
    },
  },
});
```

pagination.ts

```ts
import { FieldFunctionOptions } from "@apollo/client";

type Items = {
  nodes: any[];
  totalCount: number;
};

export function offsetLimitPaginatedField() {
  return {
    keyArgs: ["where"],
    merge(
      existing: Items,
      incoming: Items,
      { args, canRead, readField }: FieldFunctionOptions
    ) {
      const offset = args?.offset ?? -1;
      const limit = args?.limit ?? -1;
      // Insert the incoming elements in the right places, according to args.
      if (offset >= 0 && limit > 0) {
        // Filter dangling references out of the existing items.
        const merged = existing?.nodes?.length
          ? existing.nodes.slice(0).filter(canRead)
          : [];
        // We need the offset difference between the existing array and what
        // was requested, since existing can already contain newly inserted
        // items that may also be present in the incoming page.
        const offsetDiff = Math.max(merged.length - offset, 0);
        const end = offset + Math.min(limit, incoming.nodes.length);
        for (let i = offset; i < end; ++i) {
          const node = incoming.nodes[i - offset];
          // Check whether the node is already present in the array. This can
          // happen when a new object is added at the top of the list, and the
          // server sends the same id back when requesting the next page.
          const duplicate = merged.find(
            (m: any) => readField("id", m) === readField("id", node)
          );
          if (!duplicate) {
            merged[i + offsetDiff] = node;
          }
        }
        // Filter out empty spots, in case the incoming page contained
        // existing items (e.g. items inserted at the top of the list).
        const nodes = merged.filter((m: any) => m);
        return {
          ...incoming,
          nodes,
        };
      }
      return incoming;
    },
    read(existing: Items, { args, canRead }: FieldFunctionOptions) {
      const offset = args?.offset ?? -1;
      const limit = args?.limit ?? -1;
      if (offset < 0 && limit < 0) {
        return existing;
      }
      // If we read the field before any data has been written to the cache,
      // this function will return undefined, which correctly indicates that
      // the field is missing.
      const nodes =
        existing?.nodes?.length && offset >= 0 && limit > 0
          ? // Filter out empty spots (we may have padded spots with nothing
            // in them) and objects that are no longer valid references
            // (removed from the cache).
            existing.nodes.filter(canRead)
          : existing?.nodes;
      // We have to filter first, in order to slice only valid references and
      // prevent a server roundtrip when the length doesn't cover the items
      // per page.
      const page = nodes?.length ? nodes.slice(offset, offset + limit) : nodes;
      // If the number of items read from the cache is less than the number
      // requested, but the total is greater, we don't have enough items
      // cached, so we return undefined to request the items from the server
      // instead. For this to work we need to know the total count of all items.
      const itemsPerPage = args?.limit || 0;
      if (
        (page?.length ?? 0) < itemsPerPage &&
        (nodes?.length ?? 0) < (existing?.totalCount ?? 0)
      ) {
        return undefined;
      }
      if (nodes?.length) {
        return {
          ...existing,
          nodes: page,
        };
      }
    },
  };
}
```
I agree this is an area that we need to handle better in the documentation. Let us know if the updates in #6429 do not answer your questions (or the questions you think other developers might have). Thanks again for pointing out the issues with the original pagination examples! |
Intended outcome:
I'm trying to implement the pagination example with the new cache implementation
Actual outcome:
Using the pagination example based on #5677 doesn't work as expected
In the merge function, instead of doing
I had to do
since the incoming length could be less than the offset, so it will add undefined items to the array
In the read function, if the existing arg has data and we slice with an offset past its end, it returns an empty array, not undefined, so the query is not passed to the server
instead of doing
I did
Also, the types in the example fail as they are
instead of
I replaced with
Versions