This release went quite smoothly and FAST thanks to the recent Docker changes: we're now pulling Docker images instead of building them.
Improve parser's e2e test:
The problem was that I loaded real feeds into the feedQueue and made it download real resources through network requests. This approach wasn't correct and was prone to random failures. David suggested using the resources from test-web-content instead. Looking back, I think I need to create another XML feed, since the two feeds below are duplicates. PR #3773
const valid = [
  {
    author: 'Tue Nguyen',
    url: 'http://localhost:8888/feed.xml',
  },
  {
    author: 'Antonio Bennett',
    url: 'http://localhost:8888/feed.xml',
  },
];
And for invalid feeds, I just gave the queue non-existent feed URLs:
const invalid = [
  {
    author: 'John Doe',
    url: 'https://johnhasinvalidfeed.com/feed',
  },
  {
    author: 'Jane Doe',
    url: 'https://janehasinvalidfeed.com/feed',
  },
];
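To tie these together, the e2e test just pushes both lists into the queue and checks the outcome. Here's a rough sketch of that flow, assuming hypothetical addFeeds()/processQueue() helpers and a { ok } result shape for illustration (not the actual Telescope queue API):

// Rough sketch only -- addFeeds(), processQueue(), and the { ok } result
// shape are assumed names standing in for however the real test drives
// the feedQueue.
describe('feed queue e2e', () => {
  test('valid feeds (served by test-web-content) are parsed', async () => {
    await addFeeds(valid);
    const results = await processQueue();
    results.forEach((result) => expect(result.ok).toBe(true));
  });

  test('invalid feeds fail without crashing the queue', async () => {
    await addFeeds(invalid);
    const results = await processQueue();
    results.forEach((result) => expect(result.ok).toBe(false));
  });
});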
Change parser to pull feeds from Supabase:
I added SUPABASE_URL and ANON_KEY to the parser container so it can connect to the database, and wrote a function that returns all feeds. PR #3363
async getAllFeeds() {
  const { data: feeds, error } = await supabase.from('feeds').select('wiki_author_name, url');
  if (error) {
    logger.warn({ error });
    throw new Error(`can't fetch feeds from supabase: ${error.message}`);
  }
  const formattedFeeds = feeds.map((feed) => ({
    author: feed.wiki_author_name,
    url: feed.url,
  }));
  return formattedFeeds;
},
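With that in place, the parser can enqueue everything it gets back from Supabase instead of reading the wiki. A minimal sketch, assuming the feedQueue exposes an addFeed()-style helper (a hypothetical name, not the real API):

// Sketch: push the rows returned by getAllFeeds() into the queue.
// feedQueue.addFeed() is an assumed name for illustration only.
async function enqueueAllFeeds() {
  const feeds = await getAllFeeds(); // [{ author, url }, ...] from Supabase
  await Promise.all(feeds.map((feed) => feedQueue.addFeed(feed)));
}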
Some pieces of the tests had to be modified, and the wiki parsing code was removed.
Remove src/backend:
This one is huge, but at least I didn't have to write new code 😅. Most of the work was determining which parts of the legacy backend were still used by other services. Jerry helped me a lot with this PR; we went through the files and found that API_URL was still being passed to services, though it had only one use, in /src/web/app/src/pages/_document.tsx.
The PR does work, but there's still more to be done and tested.