Hey everyone ( @inhuofficial , @izio38 , @ravavyr )

Happy new year! Hope you all had a great festive season! So, with all of your advice, I refactored my approach to outputting resized IG photos back to the user using the Intervention Image library.
My approach was to loop through the first set of pictures, detect the original width & height of each picture, and work out its original aspect ratio.
Then I apply a resize reduction factor of a third, a quarter, or 22.5%, depending on whether the image width is less than 800px, between 800px and 1200px, or greater than 1200px. After this reduction, my approach applies the original ratio to either the width or the height, depending on which one is bigger than the other:

    if ($width / $height > $ratio_orig) {
        $width = $height * $ratio_orig;
    } else {
        $height = $width / $ratio_orig;
    }

I then converted each resized image into a base64 image URL and echoed the output along with HTML and CSS for styling.
Note: each image went from circa 250KB down to 20-30KB.
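Putting that together, the per-image flow looks roughly like this (a simplified sketch rather than my exact code; I'm using Intervention Image v2 syntax, and assuming "reduction factor" means scaling to that fraction of the original width):

    require 'vendor/autoload.php';
    use Intervention\Image\ImageManager;

    $manager = new ImageManager(['driver' => 'gd']);

    foreach ($imageUrls as $url) { // $imageUrls = media URLs pulled from the IG API
        $img = $manager->make($url);

        // Pick the reduction factor from the original width
        $w = $img->width();
        if ($w < 800) {
            $factor = 1 / 3;
        } elseif ($w <= 1200) {
            $factor = 1 / 4;
        } else {
            $factor = 0.225;
        }

        // Resize, letting the library preserve the original aspect ratio
        $img->resize((int) round($w * $factor), null, function ($constraint) {
            $constraint->aspectRatio();
        });

        // Inline the result as a base64 data URL
        echo '<img src="' . $img->encode('data-url') . '" alt="">';
    }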
All in all this did see a marked overall improvement. Before the change, the images alone accounted for 11MB of the page download, the overall page weight was 18-20MB, and it took 58.15s (nearly a minute) to render on screen. With the new approach, the overall page download went from 18-20MB down to 4.8MB, the first pull of images went from 11MB down to 4.1MB, and the total render time dropped to 28.76s.
However, due to how I've implemented this approach, the image conversion and output blocks page rendering entirely until the foreach loop within the script has completed (i.e. the script does all this work in the background and then spits out all 28 converted images to the front end at once).
Is there an approach I can take with my PHP script that outputs each image as it is processed?
Ideally I would like to dynamically inject each resized image into the output HTML as soon as it is processed.
Perhaps I could wrap the base64 image URL in a JSON payload and push each processed image to the front end in this manner?
Or is it possible to implement WebSockets with PHP?
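One idea I've been toying with is simply disabling output buffering and flushing after each echo, something like this (untested, and I gather gzip/proxy buffering can get in the way; $manager is the Intervention Image manager from the sketch above):

    // Stream each converted image to the browser as it's produced,
    // instead of buffering the whole foreach loop (untested sketch)
    while (ob_get_level() > 0) {
        ob_end_flush(); // turn off PHP's output buffering
    }

    foreach ($imageUrls as $url) {
        $img = $manager->make($url)->resize(800, null, function ($constraint) {
            $constraint->aspectRatio(); // fixed width here just to keep the sketch short
        });
        echo '<img src="' . $img->encode('data-url') . '" alt="">';
        flush(); // push this image down the wire before starting the next one
    }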
Cache the images: resize each image once and then save it to a folder called cached-images or similar. Then use either media queries or the picture element to pull the right image for each screen size.
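Something along these lines, echoed from PHP (the file names and breakpoints here are made up, and don't forget real alt text):

    // Hypothetical sketch: serve a different cached size per viewport
    // via the <picture> element (paths and names are made up)
    echo <<<HTML
    <picture>
      <source media="(max-width: 600px)" srcset="/cached-images/{$username}/{$postId}-small.jpg">
      <source media="(max-width: 1200px)" srcset="/cached-images/{$username}/{$postId}-medium.jpg">
      <img src="/cached-images/{$username}/{$postId}-large.jpg" alt="{$altText}">
    </picture>
    HTML;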
Once you have that working we can look at automating the process, so when you update an image it automatically creates the cached versions for you.
@inhuofficial thanks for the advice.

So I refactored my code and, post successful auth with the IG API, I am now doing the following with a cached folder:
Check if a custom user-centric folder exists and create it if not, so the result is something like ~/cached-images/{username}; here I store each unique image according to its IG post ID.
I then perform my image optimisation and resizing according to the original image ratio, width & height, and create & save the new image at the desired resizing factor. These images are saved to the user's unique cached sub-folder (i.e. ~/cached-images/{username}/{post-id}.jpg).
I then output the images to the front end with a foreach loop, roughly as sketched below.
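In code, the caching step boils down to something like this (simplified: I've used a fixed 800px width instead of my ratio-based factors, the field names are approximations of the IG API response, and $manager is as in the earlier sketch):

    // Simplified sketch of the per-user caching step
    $cacheDir = __DIR__ . '/cached-images/' . $username;
    if (!is_dir($cacheDir)) {
        mkdir($cacheDir, 0755, true); // create the per-user folder on first visit
    }

    foreach ($posts as $post) { // $posts = items pulled from the IG API
        $cachedFile = $cacheDir . '/' . $post['id'] . '.jpg';

        $manager->make($post['media_url'])
            ->resize(800, null, function ($constraint) {
                $constraint->aspectRatio(); // keep the original ratio
            })
            ->save($cachedFile, 75); // save as a ~75% quality JPEG

        echo '<img src="/cached-images/' . $username . '/' . $post['id'] . '.jpg" alt="">';
    }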
Observations:
After implementing the above approach there has been a minor performance improvement; however, the page still renders slowly, taking 25-30 seconds. I suspect the foreach loop that does the initial image optimisation and resizing needs to execute in full before any output is produced.
Ideally, I need to implement some sort of background process that performs the image optimisation and primes the images into a unique user sub-folder whilst the user journeys from the login view to the profile view.
I also need to add some logic that skips the optimisation and caching of images that have already been processed when I revisit the profile view. @ravavyr tagging you in here for the update ;)
Exactly, from what you describe you are still having to run the cache script the first time the page loads. But where you should notice a massive difference is the second time the page loads: it should be much, much faster. Caching in advance is defo the best bet if you can (perhaps a script that runs on a cron job, or when you add a user account).
The other thing you can do is lazy load everything “below the fold”, but do it with JS so you can request those images be generated separately; that way you only need to generate, say, 9 images “on the fly” for the page to load (hope that makes sense). Something like the sketch below.
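The JS side just swaps a data-src in as each image scrolls into view; the interesting bit is the endpoint each lazy image points at, which generates a single image on demand. A sketch (generateCachedImage() is a made-up name for your existing resize code, wrapped in a function):

    // resize.php?user=...&post=... serves ONE cached image, generating it if needed
    // (hypothetical endpoint; adapt names and paths to your setup)
    $user = basename($_GET['user']); // basename() to avoid path traversal
    $post = basename($_GET['post']);
    $cachedFile = __DIR__ . '/cached-images/' . $user . '/' . $post . '.jpg';

    if (!file_exists($cachedFile)) {
        generateCachedImage($user, $post, $cachedFile); // your resize logic
    }

    header('Content-Type: image/jpeg');
    header('Cache-Control: public, max-age=86400'); // let the browser cache it too
    readfile($cachedFile);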
Thanks again @inhuofficial for the feedback; it's very much appreciated being able to bounce ideas off you. Especially as this is a bit of a solo project with just me working on it, so I really do appreciate your input.
It's a bit of a chicken & egg scenario as to which should come first when it comes to my app and its UX for a first-time user signing up/auth'ing via IG. There's an expectation by the user for the app to behave and respond like the real thing, hence I really want that first view load / screen render to be really slick and fast, else I suspect a huge bounce rate/drop-off.
You mentioned automation in your previous post; this led me down the PHP CLI rabbit hole and I think I have a possible solution using either proc_open, exec or shell_exec.
My thinking is that, post successful IG auth, I can kick off a PHP background process/script that does the caching in advance / on the fly whilst the app routes the user to their profile view.
That script would do all the hard work of creating a unique folder per user under the parent cached-images folder, and in that way my profile view can call upon that unique folder and lazy load the images below the fold.
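For concreteness, the kick-off I have in mind is something like this (untested sketch; assumes a CLI script called cache-images.php):

    // After successful IG auth: kick off the caching script in the background,
    // then route the user straight on to their profile view
    $cmd = 'php ' . __DIR__ . '/cache-images.php ' . escapeshellarg($username);
    exec($cmd . ' > /dev/null 2>&1 &'); // redirecting output plus '&' lets exec() return immediately

    header('Location: /profile');
    exit;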
Do you think that's a good approach or am I barking up the wrong tree?
Right, so the only concern left now seems to be the first time someone views their profile after setting up an account (correct me if I am wrong, as I do not know the project and am just guessing from our conversations so far).
So I would simplify this bit to:
1. Check the cache.
2. If there's no cached image, serve the original image straight from the source and add the image to a processing queue (see the sketch below).
3. Process the queue on a cron job (see if this is helpful: code.tutsplus.com/tutorials/managi...) or, if you use cPanel, just set the cron job up there.
4. Then the next time those images are needed by the user they should have them cached (step 2).
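In pseudo-PHP, the page-load part of that would be something like this (table and column names are made up; the priority column ties into my note below about prioritising sizes):

    // Sketch: serve from cache if we have it, otherwise serve the original
    // and queue the image for the cron worker
    $cachedFile = $cacheDir . '/' . $postId . '.jpg';

    if (file_exists($cachedFile)) {
        $src = '/cached-images/' . $username . '/' . $postId . '.jpg';
    } else {
        $src = $originalUrl; // unoptimised original, just for this first view

        $stmt = $pdo->prepare(
            'INSERT INTO image_queue (username, post_id, source_url, priority)
             VALUES (?, ?, ?, ?)'
        );
        $stmt->execute([$username, $postId, $originalUrl, 1]);
    }

    echo '<img src="' . htmlspecialchars($src) . '" alt="" loading="lazy">';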
You will lose a little bit of first load speed as the images aren’t optimised (bear in mind we are only going to serve unoptimised “above the fold” images and serve the rest with lazy loading) but it will still be faster. Then on further loads it will be much faster.
As you said, you can move the “adding to the queue” part forward to the point where the user is signing up, to start the caching process earlier and (hopefully) have the images cached before they access the page. But if you do the above it should still be fast even when the server is under load.
The last observation/thought: to minimise that initial delay, only create one size of optimised image on the first run (queue the rest separately as “lower priority” using a priority integer in the DB); the less processing you have to do initially, the faster that first page load will be.
Getting quite advanced there, so in case anything isn't clear, I am using the following logic:
Minimise how much processing is required for initial page load.
Serve from cache where we can.
Prioritise caching for initial page load.
Hopefully it all makes sense! Sounds like a great project and by the end of it you should have an awesome image loading optimisation library that you can recycle into numerous projects!
@inhuofficial thanks again for your feedback and guidance. I refactored my code, merging your suggestions with the optimisations, and got decent performance gains. Must admit, I've been learning a lot since you pointed me in the right direction.
So my approach now is:
User auths against IG.
Successful auth routes the user back to my app and, in the background, kicks off an optimising and caching script.
App does a once-off "on-the-fly" image optimisation of the first set of images pulled from the API.
Then on every subsequent page load, if the unique user's cached folder exists, it pulls the images from that source.
Overall, this has considerably improved the initial UX of my MVP, so I'm pretty stoked with what I've achieved to date.
However, looking into the crystal ball, I'm envisioning a few performance issues/concerns:
Whenever my dyno restarts, the "cached" folders are lost.
I still suspect my current solution may not scale elegantly.
Future approach
Refactor the background optimisation script to store cached images in a cloud storage solution (sketched below).
Build on this script to pull, optimise and store more images from an authenticated IG user and display them in my app.
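For the cloud storage piece, I'm imagining something along these lines with the AWS SDK for PHP (untested; bucket name and region are made up):

    // Sketch: push an optimised image to S3 instead of the dyno's ephemeral filesystem
    require 'vendor/autoload.php';
    use Aws\S3\S3Client;

    $s3 = new S3Client([
        'version' => 'latest',
        'region'  => 'eu-west-1', // made-up region
    ]);

    $s3->putObject([
        'Bucket'      => 'my-app-cached-images', // made-up bucket name
        'Key'         => $username . '/' . $postId . '.jpg',
        'SourceFile'  => $localTmpFile, // the freshly optimised image
        'ContentType' => 'image/jpeg',
    ]);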
Not sure about the lost cached folders; I guess there must be an easy way to persist them as actual physical folders, but I am a LAMP stack dev so I am not sure I can offer much help here, other than saying that using a cloud storage solution as you suggested sounds like the simplest route.
As for scaling, worry about that when the time comes. If it scales to the point where it becomes a bottleneck then hopefully you will be generating revenue (at which point you can pay someone to consult on optimisation and get a more customised/detailed plan!)
Glad you are making massive improvements, I look forward to seeing it all in action! ❤
Morning @inhuofficial, if you want to check it out and you have an IG handle you can log in via shuzzle.me
That is true, probs only have to worry about scaling issues when the time comes and the project is paying for itself and a little extra. I'm still intrigued by the cloud object storage and potential BE solution I could implement, so I may spin up a side project to get a proof of concept working.
Thanks again for your insights and guidance. Much appreciated.
I would not recommend you resize "on the fly" with PHP. In the long run this will crash your server: more users means more images loaded, and with one server PHP will run out of memory eventually. In fact, I bet it would crash if you rendered 100 images on one page and just hit refresh 50 times.
On upload, process the images and save them in a "resized" folder. That way you can keep the original uploads and the resized versions separate. More storage, yes, but you keep the originals, which in some cases matters. If it doesn't, just don't save the originals.
Also your image resize script is incomplete and doesn't handle all size scenarios.
Look at the conditions in the answer here: stackoverflow.com/questions/146496...
It covers more variations than yours does.
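The gist of the fuller logic, paraphrased rather than copied from that answer, is along these lines:

    // Fit a $width x $height image inside a $maxW x $maxH box,
    // covering landscape, portrait and don't-upscale cases (paraphrased sketch)
    function fitDimensions(int $width, int $height, int $maxW, int $maxH): array
    {
        // Never upscale: if it already fits, keep the original size
        if ($width <= $maxW && $height <= $maxH) {
            return [$width, $height];
        }

        $ratio = $width / $height;

        if ($maxW / $maxH > $ratio) {
            // The box is "wider" than the image, so height is the constraint
            return [(int) round($maxH * $ratio), $maxH];
        }

        // Otherwise width is the constraint
        return [$maxW, (int) round($maxW / $ratio)];
    }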