You don’t need to write a web scraper to do this; a few lines of JavaScript and some standard Linux/Unix commands are enough.
- Open the page in your web browser
- Open the Developer Tools
- Paste the following JavaScript into the Console
var images = document.getElementsByTagName('img');
var srcList = [];
for (var i = 0; i < images.length; i++) {
  // Strip any query string so each entry is a clean image URL
  srcList.push(images[i].src.split('?', 1)[0]);
}
srcList.join('\n');
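The last line just echoes the list in the console so you can select and copy it. In Chrome and Firefox the console also provides a copy() helper that puts the text straight on your clipboard, which saves a step:

copy(srcList.join('\n'));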
- Create a folder to store the images
- Copy the text output from above into a file and save it as images.txt in your folder
- Inspect images.txt to make sure it looks right
- Run the following commands in a terminal, from your folder
cat images.txt | sort | uniq > uniq_images.txt
wget -i uniq_images.txt
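The first command sorts the list and drops duplicate URLs (pages often repeat the same image), and wget -i then downloads every URL listed in uniq_images.txt into the current folder.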
Now all the images from that page should be in the folder you created.
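If you only want certain file types, you could filter the list before downloading. A rough sketch, with the extensions here just as examples to adjust to taste:

grep -Ei '\.(jpe?g|png|gif|webp)$' uniq_images.txt > filtered_images.txt
wget -i filtered_images.txt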