Hi, it's been a long time since I last posted here.
I've mostly been working on client projects for a couple of months now, mostly as a backend developer with the Django framework, but in the past I wrote JS code with React (before hooks came out). One of my latest contracts led me to work with AWS, and it's quite large, so I decided to train by practicing and to obtain two certifications this year. My plan mostly consists of building various tools and hosting them using various AWS services.
What tool am I building now?
My first tool is a URL shortener service like Bitly. It is small enough that I can add a new feature in every working session, which keeps me from getting discouraged and stopping.
Current tech stack:
- Backend: Flask (Python)
- Database: Redis & PostgreSQL (or another one)
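To make the idea concrete, here is a minimal sketch of what the two core endpoints could look like in Flask. This is my own illustration, not code from the project: the `urls` dict is a stand-in for Redis/PostgreSQL, and the route names and handle format are assumptions.

```python
import secrets
from flask import Flask, redirect, request

app = Flask(__name__)
urls = {}  # stand-in for Redis/PostgreSQL

@app.route("/shorten", methods=["POST"])
def shorten():
    # accept a long URL and hand back a short handle
    long_url = request.get_json()["url"]
    handle = secrets.token_urlsafe(4)  # random short handle (assumption)
    urls[handle] = long_url
    return {"short": handle}, 201

@app.route("/<handle>")
def resolve(handle):
    # look up the handle and redirect, or 404 if unknown
    long_url = urls.get(handle)
    if long_url is None:
        return {"error": "unknown handle"}, 404
    return redirect(long_url)
```

A real version would swap the dict for a Redis lookup and persist the mapping in PostgreSQL, but the request flow stays the same.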
What am I expecting to learn?
- React's hooks
- React's new dev practices (it's been a long time since I touched that)
- Flask; yes, it feels like a shame to be a Python backend dev without Flask experience, but it's never too late to learn it.
- Some basic UX design principles to build a usable UI
- AWS caching services.
- Small app deployment with AWS (I will need advice here)
How will I progress?
Since my plan is to work on one feature every night, I will write a small blog post about each feature in this series to show my progress and explain what I've learned, where I got stuck, and what to do next.
What you can expect from me
I am open to new ideas about the project and to advice about anything you think is necessary (from hosting services to project design).
Thank you; I hope we will discover great things.
Comments
Curious thing! I was once asked to "architect" such a service on paper while interviewing for a big-data developer position.
It quickly boiled down to understanding that a URL shortener service is expected to scale almost infinitely. Then come the questions: what bottlenecks do we expect, and which approaches and technologies may help work around them?
The task itself at small scale is more an exercise than a challenge (though of course useful in some sense).
Briefly, imagine we have an endpoint (or page) which allows submitting a URL; we store it in our database (or other storage), assign a short identifier, and return it. The other endpoint just fetches the data from storage by these identifiers, right?
Now, if the number of requests is huge, we may expect the database to become a bottleneck. If you use some distributed AWS storage, you can most likely avoid this. But then you'll need to decide how to assign short identifiers so that they never collide (while doing this in parallel!). Finally, when all those issues are solved, we may need to figure out how to deal with a load balancer or another edge-side proxy...
I'm just sharing this in case you want to think it through beforehand, or at least prepare some answers in case you are asked about your solution in the future :)
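One common way to get collision-free handles even under parallel writes is to base62-encode a monotonically increasing counter. Here is a small sketch of that idea; the in-process `itertools.count` is a stand-in (my assumption) for a shared atomic counter such as a Redis `INCR`, which is what would make parallel assignment safe across workers.

```python
import itertools
import string

# Base-62 alphabet: 0-9, a-z, A-Z
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode(n: int) -> str:
    """Encode a non-negative integer as a base-62 string."""
    if n == 0:
        return ALPHABET[0]
    chars = []
    while n:
        n, rem = divmod(n, 62)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

# Stand-in for a shared atomic counter (e.g. Redis INCR in production).
counter = itertools.count(1)

def next_handle() -> str:
    # distinct counter values give distinct handles, so no duplicates
    return encode(next(counter))
```

Because each counter value maps to a unique string, two workers incrementing the same atomic counter can never produce the same handle.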
Good luck, looking forward to your posts!
Thanks for your response. My architecture is exactly as you described: one endpoint to shorten URLs and save them into the system, and another one to do the redirection. And yes, one of the main bottlenecks to expect is how to scale that.
Initially I think it can easily be scaled out by putting those two endpoints in separate processes (or services), doing the main processing with an in-memory DB like Redis, and handling the rest in background tasks.
Since everything will be hosted on AWS, it shouldn't be too difficult to scale such an architecture: like you said, with a load balancer in front and the two services behind it.
Anyway, I will continue to do some research on the subject. It seems like I will need to put each service in a separate Docker image in order to scale them easily (horizontally or vertically) if needed. What do you think about this possibility?
Thanks for everything.
Well, thanks for that detailed answer :)
I think the scalability question may still arise when the supposed in-memory DB is no longer enough. But hopefully by that time you will have further ideas :)
I think it is not necessary, since you are going with AWS. Much depends on which AWS service you are going to use. I think there is no point in using anything you can put Docker into. AWS has something (sorry, I forget the name) which works as a service for you: you just deploy the code and it scales almost infinitely.
I just prefer Google services instead; there we have the famous App Engine with the same property. You don't need to care about hardware, and it has a free tier with no time limit.
Another free alternative (supporting Python and Flask), by the way, is PythonAnywhere, though perhaps it is not that interesting in terms of scalability (I suspect it itself lives on AWS).
Yeah, you are certainly referring to AWS Elastic Beanstalk; in fact it is like App Engine. I even use it now.
But the thing is, I actually wanted to put the frontend app in the same folder as the backend app and maybe put everything in one Docker image, or divide it into two services, each in a separate Docker image.
I didn't think of deploying on Elastic Beanstalk because it seemed too big for such a small app... but it seems it's not that big after all.
Honestly, that is (IMHO) the thing we'd prefer to avoid. All frontend assets should live at least on some static-serving proxy (like nginx), or better, on a CDN (or something similar, like GitHub Pages).
In your application, the traffic for logic (sending URLs and short handles between backend and frontend) is going to be much smaller than the size of all the JS/CSS/PNG assets, right?