Very nice explanation and visualizations of the code. Well done. But I am still missing valid use cases for generator functions in "real" applications, because your last example can also be realized simply and efficiently with ES5+ features (flatMap and find):
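For reference, a minimal sketch of that flatMap/find approach, using hypothetical data shaped like the article's book-club example (clubs → clubMembers → books); the names and IDs here are made up:

```javascript
// Hypothetical data shaped like the article's example.
const bookClubs = [
  {
    name: "JS Book Club",
    clubMembers: [
      { name: "Ann", books: [{ id: "hi921", title: "Eloquent JavaScript" }] },
      { name: "Ben", books: [{ id: "ey812", title: "You Don't Know JS" }] },
    ],
  },
];

// Flatten clubs -> members -> books into one array, then search it.
const findBook = (id) =>
  bookClubs
    .flatMap((club) => club.clubMembers)
    .flatMap((member) => member.books)
    .find((book) => book.id === id);

findBook("ey812"); // the "You Don't Know JS" entry
```

Note that both flatMap calls build intermediate arrays before find runs, which is exactly the memory trade-off discussed below.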
Your example works when we already have all the data available (to give bookclubs a value) and for datasets that aren't too big. However, we would be storing the entire bookclubs array in memory, which is something we sometimes want to avoid when working with a lot of data, much of which might be useless to us.
A good example for which I often use generators is decoding a stream. In your example, we'd have to wait until we've received the entire stream before we can start decoding it (in order to give bookclubs a value). By iterating over smaller pieces of the stream and decoding those smaller pieces, we can start decoding right from the beginning instead of having to wait. If we're looking for a specific piece of data that happens to be right at the beginning of the stream, we never have to call next again, and we don't have to use more memory to store the rest of the data, which we'd never use.
(Although this is a micro-optimization that doesn't matter in most cases, I'm also not sure about the performance of flatMap when working with larger, deeply nested datasets.)
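The stream-decoding idea above can be sketched like this (with made-up chunk data, not the article's code): a generator decodes each chunk only when the consumer asks for it, so the consumer can stop as soon as it has found what it needs:

```javascript
// Decode chunks lazily: each chunk is only decoded when the consumer pulls it.
function* decodeChunks(chunks) {
  const decoder = new TextDecoder();
  for (const chunk of chunks) {
    // `stream: true` keeps multi-byte characters split across chunks intact.
    yield decoder.decode(chunk, { stream: true });
  }
}

// Hypothetical stream: three already-received encoded chunks.
const encoder = new TextEncoder();
const chunks = ["{'id':", "'ey812'}", "...rest of stream"].map((s) =>
  encoder.encode(s)
);

// Stop as soon as we find the piece we want; later chunks are never decoded.
for (const piece of decodeChunks(chunks)) {
  if (piece.includes("ey812")) break;
}
```

If the match sits near the start of the stream, the generator simply never runs for the remaining chunks, so they are never decoded or held as strings in memory.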
My understanding is: if we use flatMap, it flattens the whole dataset first, and then find looks for book.id === id?
But if we use the generator yield* iterateClubMember(bookClub.clubMembers),
it seems to work recursively: it first jumps to index 0 of clubMembers, and in the same way
yield* iterateBook(clubMember.books)
goes into that member's books starting at index 0; if it finds a book with id === id, it returns?
The worst case is that we yield all the way to the end of the whole dataset; the best case is that the id is at index 0 of the books of the clubMember at index 0, and we get the book immediately.
Am I correct?
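That reading matches how yield* behaves. A small sketch (with hypothetical data and a yield counter, not the article's exact code) shows that the search stops at the first match instead of walking the whole dataset:

```javascript
let yields = 0; // count how many books we actually visit

function* iterateBooks(books) {
  for (const book of books) {
    yields++;
    yield book;
  }
}

function* iterateMembers(members) {
  for (const member of members) {
    // Delegate: this member's books are yielded one by one, lazily.
    yield* iterateBooks(member.books);
  }
}

const clubMembers = [
  { books: [{ id: "a1" }, { id: "b2" }] },
  { books: [{ id: "c3" }] },
];

// Best case: the very first book matches, so only one yield ever happens.
for (const book of iterateMembers(clubMembers)) {
  if (book.id === "a1") break;
}
// yields === 1 here; the remaining two books were never visited.
```

Searching for "c3" instead would drive `yields` to 3, which is the worst case described above.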
Maybe I have to explicitly search for some use cases and dive a little deeper into this topic, until I finally have the "ahhhhh" effect :)
Ok, now I got the point, makes sense. Thanks!
const findBook = (bookID) => {
  for (let i = 0; i < members.length; i++) {
    for (let j = 0; j < members[i].books.length; j++) {
      if (members[i].books[j].id === bookID) {
        return members[i].books[j];
      }
    }
  }
};
findBook("ey812");
Shouldn't the return statement just finish execution if/when the book is found?
I think so; the for loop seems to follow the same flow as the generator function in the book club example above.
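Right: return exits the function immediately, even from inside nested loops, so the loop version also stops at the first match. A quick sketch with hypothetical data and a counter makes that visible:

```javascript
// Hypothetical data: the matching book comes first.
const members = [
  {
    books: [
      { id: "ey812", title: "First match" },
      { id: "zz999", title: "Never checked" },
    ],
  },
];

let checked = 0; // how many books the loop actually inspected

const findBook = (bookID) => {
  for (let i = 0; i < members.length; i++) {
    for (let j = 0; j < members[i].books.length; j++) {
      checked++;
      // `return` unwinds both loops at once; nothing after the match runs.
      if (members[i].books[j].id === bookID) return members[i].books[j];
    }
  }
};

findBook("ey812"); // checked === 1
```

So in terms of short-circuiting, the nested for loop and the generator behave the same; the generator's difference is that it packages this traversal as a reusable, lazily consumable iterator.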