To scrape all images from a single page using curl only, you're limited to sites where the images are directly embedded in the HTML (not loaded via JavaScript). This guide uses memes-apk.com as the example site. Here's a step-by-step walkthrough using only curl and basic Unix tools:
✅ Step-by-Step: Scraping Images Using curl
🧰 Requirements:
- curl
- grep, sed, or awk
- wget (or another method to download the images)
🔹 Step 1: Download the HTML
curl -L https://memes-apk.com > page.html
- -L follows redirects.
- Save the page to a file.
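Some sites return different (or empty) HTML to non-browser clients. If page.html looks incomplete, sending a browser-like User-Agent can help; this is an optional tweak, and the exact string below is just an example:
# Same download, but with a browser-like User-Agent (optional)
curl -L -A "Mozilla/5.0" https://memes-apk.com -o page.html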
🔹 Step 2: Extract Image URLs
grep -Eo 'https://[^"]+\.(jpg|jpeg|png|gif)' page.html > image-urls.txt
- This command looks for https://... URLs ending in common image extensions.
If the images use relative paths instead, extract the src attributes:
grep -Eo 'src="[^"]+\.(jpg|jpeg|png|gif)"' page.html | sed 's/src="//;s/"$//' > image-urls.txt
If the URLs are relative (like /images/meme.png), prepend the domain:
sed -i 's|^/|https://memes-apk.com/|' image-urls.txt
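Pages often reference the same image more than once, so it's worth deduplicating the list before downloading. This is a small optional step, not part of the original recipe:
# Remove duplicate URLs in place
sort -u image-urls.txt -o image-urls.txt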
🔹 Step 3: Download the Images
wget -i image-urls.txt
This downloads all the images from the list.
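Since the goal was to stay with curl, here's a sketch that downloads the list without wget; it assumes the URLs in image-urls.txt are absolute:
# Read each URL and save it under its remote filename
while read -r url; do
  curl -sLO "$url"
done < image-urls.txt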
✅ One-liner (Optional)
If you want it done in a single line (for simpler sites):
curl -s https://memes-apk.com | grep -Eo 'https://[^"]+\.(jpg|jpeg|png|gif)' | xargs -n 1 wget
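If the list is long, xargs can also run several downloads in parallel. The -P flag is supported by GNU and BSD xargs, and the worker count here is only an assumption about what the site will tolerate:
# Fetch up to 4 images at a time
curl -s https://memes-apk.com | grep -Eo 'https://[^"]+\.(jpg|jpeg|png|gif)' | sort -u | xargs -n 1 -P 4 wget -q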
⚠️ Limitations of curl:
- It does not run JavaScript, so you won’t get images loaded dynamically.
- It does not crawl multiple pages — just the one you provide.
✅ Bonus: Download All Images Recursively (Not Just curl)
If you want to crawl multiple pages recursively, use wget:
wget -r -l2 -nd -H -A jpg,jpeg,png,gif -e robots=off https://memes-apk.com
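Here, -r crawls recursively, -l2 limits the depth to two levels, -nd keeps all files in one directory, -H allows following images hosted on other domains, and -A restricts downloads to the listed extensions. A gentler variant (my suggestion, not part of the original command) stays on the original host and pauses between requests:
# Same crawl, limited to the original host and throttled
wget -r -l2 -nd -A jpg,jpeg,png,gif --wait=1 --random-wait https://memes-apk.com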
Do you want help writing a shell script that does this automatically for all pages or categories?