I would like to share something I've noticed many developers trying to implement in their applications, and that is reactive pagination with `onSnapshot` listeners, i.e. loading documents from a collection in batches/bulks/pages while also listening for when someone changes, adds or removes a document in the collection.
First of all, before I share my code, I would like to emphasize that this is probably not what you generally want to do. As Firebase consultant Doug Stevenson said in one of his comments on Stack Overflow:
"You can't really combine paging with listening. You have to choose one or the other in order to maintain the sanity of your code."
It's good to understand the trade-offs and use cases here. I can't think of an application that uses realtime listening on a big feed with pagination / infinite scrolling and also handles edits and removals of items in realtime. Even in chats, if you remove a message, you typically just change its state to `removed=true` and don't immediately delete it from the database completely. It's a fairly heavy thing to maintain in your application, and it is good to evaluate which functionality you actually need.
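Just to illustrate, such a soft delete is a plain update on the message document; a minimal sketch with the v8 namespaced SDK (the `messages` collection name and `removed` field are illustrative, not part of the code later in this post):

```js
import firebase from 'firebase/app'
import 'firebase/firestore'

// Soft delete: flag the message instead of deleting the document, so paginated
// feeds and listeners never see a document disappear from their window.
const removeMessage = (messageId) =>
  firebase.firestore()
    .collection('messages')
    .doc(messageId)
    .update({ removed: true });
```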
But, for the sake of a theoretical exercise, let's try to implement it!
This isn't completely trivial, as you have to set up `startAfter` and `endAt` Firestore cursors to prevent documents from one bulk/batch/page from loading into another one. You therefore need two queries.

One for the initial load of documents, with `limit(PAGE_LIMIT)` (it still needs `startAfter` - that is in the `query` variable):

`query.limit(PAGE_LIMIT).get()`

And another one which listens for document changes between the `startAfter` and `endAt` cursors (yup, `startAfter` is in the `query` variable already):

`query.endAt(newPosts[newPosts.length - 1].timestamp).onSnapshot()`
I don't really think you can do it with just one query, unless you can predict the temporal scale of your posts and ignore an explicit `PAGE_LIMIT`.
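To make that concrete before the full component, here is a stripped-down sketch of the two queries for a single page, without any React around it. The `loadPage`, `onPageChange` and `lastPostOfPreviousPage` names are illustrative; it assumes a `posts` collection ordered by a `timestamp` field, as in the code below:

```js
import firebase from 'firebase/app'
import 'firebase/firestore'

const PAGE_LIMIT = 5;

// Loads one page of posts and attaches a listener scoped to exactly that page.
// `lastPostOfPreviousPage` is undefined for the very first page.
const loadPage = (lastPostOfPreviousPage, onPageChange) => {
  let query = firebase.firestore().collection('posts').orderBy('timestamp', 'desc');
  if (lastPostOfPreviousPage) query = query.startAfter(lastPostOfPreviousPage.timestamp);

  // Query 1: one-off fetch of the next PAGE_LIMIT documents.
  return query.limit(PAGE_LIMIT).get().then(snapshot => {
    const posts = snapshot.docs.map(doc => doc.data());
    if (posts.length === 0) return { posts, unsubscribe: () => {} };

    // Query 2: realtime listener bounded to this page's startAfter / endAt window,
    // so edits and removals here never leak into other pages.
    const unsubscribe = query
      .endAt(posts[posts.length - 1].timestamp)
      .onSnapshot(snap => onPageChange(snap.docs.map(doc => doc.data())));

    return { posts, unsubscribe };
  });
};
```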
So here's the code:
import { useEffect, useRef, useState } from 'react'
// Assumes the v8 namespaced Firebase SDK; adjust the imports if you use the modular v9 API.
import firebase from 'firebase/app'
import 'firebase/firestore'
const PAGE_LIMIT = 5;
// Generic infinite-scroll hook: flips `fetching` to true once the user scrolls
// within `threshold` px of the bottom of the page.
const useInfiniteScroll = ({ fetching: fetchingInit = false, hasMore: hasMoreInit = false, threshold = 200 }) => {
  const [fetching, setFetching] = useState(fetchingInit);
  const [hasMore, setHasMore] = useState(hasMoreInit);

  useEffect(() => {
    // Re-register on every change of `fetching` / `hasMore`, so that
    // `handleScroll` always closes over their current values.
    window.addEventListener('scroll', handleScroll);
    return () => window.removeEventListener('scroll', handleScroll);
  }, [fetching, hasMore]);

  function handleScroll() {
    const offsetHeight = document.documentElement.offsetHeight,
      innerHeight = window.innerHeight,
      scrollTop = window.pageYOffset || document.documentElement.scrollTop || document.body.scrollTop || 0;
    if (!hasMore || fetching || innerHeight + scrollTop + threshold <= offsetHeight) return;
    setFetching(true);
  }

  return [fetching, setFetching, setHasMore];
}

const PostFeed = () => {
  const [pages, setPages] = useState([[]]),
    [fetching, setFetching, setHasMore] = useInfiniteScroll({ fetching: true, hasMore: true }),
    listeners = useRef([]);

  // Unsubscribe all onSnapshot listeners when the component unmounts.
  useEffect(() => () => listeners.current.forEach(l => l()), []);

  useEffect(() => {
    if (!fetching) return;

    const l = pages.length, page = pages[l - 1] || [], postLast = page[page.length - 1];

    let query = firebase.firestore().collection('posts').orderBy("timestamp", "desc");
    // Start this page right after the last post of the previous page; for the very
    // first page fall back to the maximum Firestore timestamp, so the listener
    // also picks up posts created later.
    if (0 < l) query = query.startAfter(postLast?.timestamp || new Date(253402300799999));

    let mounted = true;
    query.limit(PAGE_LIMIT).get().then(snapshot => {
      if (mounted) {
        // Keep the document id around so it can be used as a React key.
        const posts = snapshot.docs.map(doc => ({ id: doc.id, ...doc.data() }));
        setPages(ps => ps.concat([posts]));

        // Listen for changes only within this page's startAfter / endAt window.
        const unsubscribe = query.endAt(posts[posts.length - 1]?.timestamp || new Date()).onSnapshot(snapshot => {
          const postsUpdated = snapshot.docs.map(doc => ({ id: doc.id, ...doc.data() }));
          setPages(ps => ps.map((b, i) => i === l ? postsUpdated : b));
        });
        listeners.current = listeners.current.concat([unsubscribe]);

        if (posts.length < PAGE_LIMIT) setHasMore(false);
        setFetching(false);
      }
    });
    return () => { mounted = false; };
  }, [fetching]);

  return (
    <>
      {pages.map(page => page.map(post =>
        <PostCard key={post.id} post={post} />
      ))}
      { fetching && <SomePostSkeleton /> }
    </>
  )
}

export default PostFeed;
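One note on the snippet: `PostCard` and `SomePostSkeleton` aren't defined in it; they are just your own presentational components. A trivial placeholder sketch so the example compiles (the `title` and `body` fields are illustrative):

```jsx
// Minimal placeholders; replace with your real components and post fields.
const PostCard = ({ post }) => (
  <article>
    <h2>{post.title}</h2>
    <p>{post.body}</p>
  </article>
);

const SomePostSkeleton = () => <div className="post-skeleton">Loading...</div>;
```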
Some caveats:
- `new Date(253402300799999)` is the maximum possible date for a Firestore Timestamp object, namely 9999-12-31T23:59:59.999999999Z. We use it so we can reuse the same query and also listen for new, future posts in the first page. (I changed some stuff, see the last point, and I believe we don't strictly need it anymore: if `0 < l`, then `last(batch)?.[orderBy]` (here `postLast?.timestamp`) has to exist. It could only be missing if the last batch was empty, but in that case `setHasMore(false)` would already have been called, `fetching` could never be set to `true` again, and that code would never run.)
- `useInfiniteScroll` is a traditional `useInfiniteScroll` hook, nothing that special about it. Maybe it's worth noticing that we set its `fetching` value to `true` immediately in `PostFeed`, because we use the same queries for the initial load and for subsequent scroll / page loads.
- The notion of a page... I used the terms bulk/batch before as well, and at this point I am unsure how to name these 'clusters' of posts. But yes, as you can notice, our post array is really an array of arrays, and it is called `pages`.
- The variable `l` is preserved within the scope of the `useEffect`, so each time a document in a batch/page is updated, the change maps onto the right batch/page.
- Don't forget to unsubscribe your listeners. We store them in a `useRef` (because `useState` should be used only for values that affect the render) and unsubscribe all of them at once in the cleanup function of the other `useEffect`.
- We use the `mounted` variable to handle out-of-order responses, i.e. the case where the Firestore promise is still pending but the component has already been unmounted. If the promise then resolves, setting state on the unmounted component results in a React warning. See https://reactjs.org/docs/hooks-faq.html#how-can-i-do-data-fetching-with-hooks and search for 'out-of-order' responses.
- We also use `endAt` instead of `endBefore`, because `startAfter` excludes the end of the previous batch/page (open interval) and `endAt` includes the end of the current batch/page (closed interval).
- We use the `if (0 < l)` condition to also include documents which do not have a `timestamp` property. We expect every document to have a `timestamp` property (a sketch of how posts can get one follows after this list), but in practice, when updating a document, there is a brief moment when the `timestamp` is not indexed yet; such a document is not included in the `startAfter` - `endAt` interval, which can cause an annoying flicker of the modified document, so we effectively don't constrain `startAfter` when loading the first batch.
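Related to that last caveat: the whole setup assumes every post document carries a `timestamp` field. A minimal sketch of how such a field might be written with the v8 SDK (the `createPost` helper and the `title`/`body` fields are illustrative):

```js
import firebase from 'firebase/app'
import 'firebase/firestore'

// Illustrative helper: every new post gets a server-side timestamp, which is what
// orderBy("timestamp", "desc") and the startAfter / endAt cursors order by.
const createPost = (title, body) =>
  firebase.firestore().collection('posts').add({
    title,
    body,
    timestamp: firebase.firestore.FieldValue.serverTimestamp(),
  });
```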
Please feel free to ask any questions or add remarks! I might not have managed to explain everything clearly.
Cheers and enjoy