For a system running on AWS, it is almost inevitable that at some point you will hit an issue that requires digging through the logs of a service like API Gateway, Elastic Beanstalk, or EC2 to identify the problem.
I don't know about you, but I always leave this kind of task with a feeling like "we could have a tool with a nice UI, integrated with our log groups, allowing me to [insert some killer feature here]."
But then I remember that these logs live in CloudWatch. And I wonder when I will next have to face its terminal 🥲
Some may argue that CloudWatch has an Athena connector - but it isn't something you get out of the box, and querying through Athena isn't always the kind of latency your application can afford.
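For reference, a query through the connector ends up looking roughly like the sketch below with boto3. This assumes the connector Lambda is already deployed; the data source name, log group, and results bucket are placeholders, and the exact schema/table naming depends on how the connector maps your log groups.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical names: "cloudwatch" is whatever you called the connector's
# data source; the log group and results bucket are placeholders.
response = athena.start_query_execution(
    QueryString=(
        'SELECT time, message '
        'FROM "cloudwatch"."/aws/lambda/my-function"."all_log_streams" '
        "WHERE message LIKE '%ERROR%' LIMIT 100"
    ),
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)

# Athena is asynchronous: poll get_query_execution with this ID until it finishes.
print(response["QueryExecutionId"])
```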
You can also export the logs to S3, either manually or via the CLI - which means this task either lands on your Google Calendar or in a CRON-like CloudWatch Events rule that triggers a Lambda that triggers the CLI.
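As a rough sketch of the scripted version (using boto3 instead of the CLI), assuming a destination bucket that already has the bucket policy allowing CloudWatch Logs to write to it; all names here are placeholders:

```python
import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

now_ms = int(time.time() * 1000)
one_day_ms = 24 * 60 * 60 * 1000

# Export the last 24 hours of a log group to S3.
# "my-log-archive-bucket" must already grant CloudWatch Logs write access.
task = logs.create_export_task(
    taskName="daily-export",
    logGroupName="/aws/lambda/my-function",
    fromTime=now_ms - one_day_ms,
    to=now_ms,
    destination="my-log-archive-bucket",
    destinationPrefix="exported-logs",
)

# Only one export task can be running or pending per account at a time.
print(task["taskId"])
```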
Another option is to stream the log data to OpenSearch, formerly Elasticsearch. But that can get expensive quickly.
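Under the hood, the console's "stream to OpenSearch" option wires up a subscription filter pointing at a Lambda that indexes events into the domain. A hand-rolled equivalent looks roughly like this; the log group and ARN are placeholders, and the Lambda must separately allow CloudWatch Logs to invoke it:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Placeholder ARN: whatever function ships events into your OpenSearch domain.
logs.put_subscription_filter(
    logGroupName="/aws/lambda/my-function",
    filterName="stream-to-opensearch",
    filterPattern="",  # empty pattern = forward every event
    destinationArn="arn:aws:lambda:us-east-1:123456789012:function:logs-to-opensearch",
)
```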
What drives me up the wall about all the options above is the outrage: how does Amazon dare to offer a hard limit of only ten transactions per second for querying what is often the most extensive data stream your application produces?
Hard to answer. Hard to query at scale, also.
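To make the pain concrete, here is roughly what querying straight from the API looks like. Every page below is a separate FilterLogEvents request, so each one counts against that per-second quota; the log group name and filter pattern are placeholders:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Each page fetched here is one FilterLogEvents request,
# so scanning a wide time window burns through the request quota fast.
paginator = logs.get_paginator("filter_log_events")
pages = paginator.paginate(
    logGroupName="/aws/lambda/my-function",
    filterPattern="ERROR",
)

for page in pages:
    for event in page["events"]:
        print(event["timestamp"], event["message"])
```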