This article will focus mainly on the Go side of the work, so if you're interested, please continue reading.
The problem:
I'm working at a r...
I think something here is confusing. The main work is done not by your Go code but by FFmpeg. You also seem to be running on Lambda, so concurrency could be a dangerous thing. Furthermore, if your Lambda instances need several minutes, that could have a huge impact on cost.
Beware: you are using the ultrafast preset, which means the data loss is highest.
No, I've deployed the service on a compute-optimized EC2 instance. It costs around $250 a month.
Isn't ultrafast for faster compression but bigger sizes? I chose it because the size difference isn't relevant compared to the speed.
My mistake, that's correct. The preset is relevant to compression, not quality.
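(To make that concrete: a minimal sketch of shelling out to ffmpeg with the ultrafast preset from Go. The file paths and CRF value are placeholders, not taken from the article.)

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// -preset ultrafast: fastest x264 encode, at the cost of a larger file.
	// -crf 23: constant-quality target; the preset does not change quality,
	// only how efficiently (and slowly) that quality is compressed.
	cmd := exec.Command("ffmpeg",
		"-i", "input.mp4", // placeholder input path
		"-c:v", "libx264",
		"-preset", "ultrafast",
		"-crf", "23",
		"output.mp4", // placeholder output path
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("ffmpeg failed: %v\n%s", err, out)
	}
}
```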
Using EC2 is a good choice.
I've built something similar, but using ECS Fargate running a Linux container. With a simple bash script, I was able to read the source video file from S3, process it using the ffmpeg CLI, and then upload the output to another S3 bucket.
I've done a few tests manually and it's working perfectly fine, but I'm not sure how robust it is at a bigger scale.
Did you consider this option too?
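For illustration, a rough Go equivalent of that download → transcode → upload pipeline (the comment used a bash script; the bucket names, keys, and ffmpeg flags below are placeholder assumptions, using the AWS SDK for Go v2):

```go
package main

import (
	"context"
	"log"
	"os"
	"os/exec"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// 1. Download the source video from the input bucket (names are placeholders).
	in, err := os.Create("/tmp/input.mp4")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := manager.NewDownloader(client).Download(ctx, in, &s3.GetObjectInput{
		Bucket: aws.String("source-bucket"),
		Key:    aws.String("video.mp4"),
	}); err != nil {
		log.Fatal(err)
	}
	in.Close()

	// 2. Transcode with the ffmpeg CLI, as in the bash version.
	if out, err := exec.Command("ffmpeg", "-i", "/tmp/input.mp4",
		"-c:v", "libx264", "/tmp/output.mp4").CombinedOutput(); err != nil {
		log.Fatalf("ffmpeg failed: %v\n%s", err, out)
	}

	// 3. Upload the result to the destination bucket.
	outFile, err := os.Open("/tmp/output.mp4")
	if err != nil {
		log.Fatal(err)
	}
	defer outFile.Close()
	if _, err := manager.NewUploader(client).Upload(ctx, &s3.PutObjectInput{
		Bucket: aws.String("dest-bucket"),
		Key:    aws.String("video.mp4"),
		Body:   outFile,
	}); err != nil {
		log.Fatal(err)
	}
}
```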
I'm a bit confused. If you're using an EC2 instance, what does your lambda function do?
It gets event notifications from S3 and puts them on SQS to enqueue them for the Go service.
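A minimal sketch of what such a handler might look like in Go, assuming aws-lambda-go and the AWS SDK for Go v2; the queue URL and message shape are placeholders, not the author's actual code:

```go
package main

import (
	"context"
	"encoding/json"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

var client *sqs.Client

func handler(ctx context.Context, event events.S3Event) error {
	// Forward each uploaded object (bucket + key) to the queue
	// the Go transcoding service consumes from.
	for _, rec := range event.Records {
		body, err := json.Marshal(map[string]string{
			"bucket": rec.S3.Bucket.Name,
			"key":    rec.S3.Object.Key,
		})
		if err != nil {
			return err
		}
		if _, err := client.SendMessage(ctx, &sqs.SendMessageInput{
			QueueUrl:    aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/transcode-queue"), // placeholder
			MessageBody: aws.String(string(body)),
		}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		panic(err)
	}
	client = sqs.NewFromConfig(cfg)
	lambda.Start(handler)
}
```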
You can even drop the Lambda function in favor of S3 event notifications: docs.aws.amazon.com/AmazonS3/lates...
Didn't know about that, it's brilliant. Thanks!
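For readers following along, wiring that direct S3 → SQS notification via the Go SDK could look roughly like this (the bucket name and queue ARN are placeholders; note that the queue's access policy must also allow S3 to send messages to it):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Send every object-created event in the bucket straight to the queue,
	// with no Lambda in between.
	_, err = client.PutBucketNotificationConfiguration(ctx, &s3.PutBucketNotificationConfigurationInput{
		Bucket: aws.String("source-bucket"), // placeholder
		NotificationConfiguration: &types.NotificationConfiguration{
			QueueConfigurations: []types.QueueConfiguration{{
				QueueArn: aws.String("arn:aws:sqs:us-east-1:123456789012:transcode-queue"), // placeholder
				Events:   []types.Event{types.Event("s3:ObjectCreated:*")},
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```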
Awesome article! Really nice architecture. Do you have a rough idea of your cost savings?
Hi,
Did you run any comparative performance tests between the Go solution and the AWS service?
Thanks for your article by the way.
Hello. Yeah, the Go service is slightly slower than MediaConvert; that's why we're using an SQS queue. You could still achieve the same speed if you used a bigger instance. I was using a g4ad.xlarge.
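To make the SQS buffering concrete, here is a stripped-down sketch of a consumer loop such a Go service might run (the queue URL is a placeholder and `transcode` is hypothetical; assumes the AWS SDK for Go v2):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sqs.NewFromConfig(cfg)
	queueURL := "https://sqs.us-east-1.amazonaws.com/123456789012/transcode-queue" // placeholder

	for {
		// Long-poll for up to 20s; the queue absorbs bursts so the encoder
		// can work through jobs at its own pace.
		out, err := client.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
			QueueUrl:            aws.String(queueURL),
			MaxNumberOfMessages: 1,
			WaitTimeSeconds:     20,
		})
		if err != nil {
			log.Println("receive:", err)
			continue
		}
		for _, msg := range out.Messages {
			// A hypothetical transcode(msg) would run ffmpeg on the referenced object here.
			log.Println("processing job:", aws.ToString(msg.Body))

			// Delete only after the job succeeds so failures are retried.
			if _, err := client.DeleteMessage(ctx, &sqs.DeleteMessageInput{
				QueueUrl:      aws.String(queueURL),
				ReceiptHandle: msg.ReceiptHandle,
			}); err != nil {
				log.Println("delete:", err)
			}
		}
	}
}
```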