In the previous two posts, I covered the CloudFormation template for creating a REST API that logs to CloudWatch from a frontend app, and then the communication protocol of that REST API. This final post in the series covers the implementation of the frontend client that uses the logging REST API.
Overview
The frontend client is responsible for creating a CloudWatch log stream for each app session and then writing log messages to it in sequential batches. Each batch must include the sequence token returned by the previous batch, unless it is the first batch.
For Blazor WebAssembly apps built with LambdaSharp, the implementation of the client resides in the LambdaSharpAppClient class. The class is instantiated as a singleton and can be injected either directly or indirectly via the ILogger interface. Only one of the following statements is needed for the frontend app to log to CloudWatch. The choice comes down to personal preference.
@inject LambdaSharpAppClient AppClient
@inject ILogger<Index> Logger
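Either way, once the dependency is injected, a component emits log entries with the usual ILogger extension methods and the client takes care of batching and shipping them to CloudWatch Logs behind the scenes. The snippet below is a minimal sketch of such a call inside a hypothetical Index page; the event handler and message text are made up for illustration.

@code {

    private void OnRefreshClicked() {

        // routed to CloudWatch Logs via LambdaSharpAppClient (see SendMessage below)
        Logger.LogInformation("Refresh clicked at {Timestamp}", DateTimeOffset.UtcNow);
    }
}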
Implementation
CloudWatch Logs organizes log entries into log streams. A log stream is a chronological sequence of entries. Many log streams can exist concurrently. For Blazor WebAssembly apps--and other single-page apps--it is recommended to create a log stream on-demand when the first log message is generated.
Note that the code in this post has been simplified; the actual implementation handles a few more edge cases that would only distract from the explanation. The complete implementation can be found here.
Sending Log Messages
The app client queues messages in an internal accumulator, which allows it to send multiple messages per request and avoid being throttled.
private readonly List<PutLogEventsRequestEntry> _logs = new List<PutLogEventsRequestEntry>();

private void SendMessage(string message) {

    // queue message for server-side logging
    _logs.Add(new PutLogEventsRequestEntry {
        Timestamp = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds(),
        Message = message ?? throw new ArgumentNullException(nameof(message))
    });
}
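The PutLogEventsRequestEntry type belongs to the shared request model described in the previous post on the REST API protocol. Its definition is not repeated here, but based on how it is used above, it is essentially a small data container along these lines (a sketch, not the authoritative definition):

public class PutLogEventsRequestEntry {

    // log entry creation time as a Unix epoch timestamp in milliseconds
    public long Timestamp { get; set; }

    // the log message text
    public string Message { get; set; }
}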
Timed Accumulator
A timer checks the accumulator every second for pending messages. The timer callback first makes sure that any previous asynchronous operation has completed. It then attempts to flush any accumulated messages.
private Task _previousOperationTask;

private void OnTimer(object _) {
    if(!(_previousOperationTask?.IsCompleted ?? true)) {

        // previous operation is still going; wait until next timer invocation to proceed
        return;
    }

    // initialize invocation to FlushAsync(), but don't wait for it to finish
    _previousOperationTask = FlushAsync();
}
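The timer itself is not shown in this post, but the DisposeAsync method further below expects a _timer field. A System.Threading.Timer created during initialization would fit that shape; the one-second interval comes from the description above, while the field name shown here matches the disposal code and the StartTimer helper is an assumption for illustration.

private Timer _timer;

private void StartTimer() {

    // invoke OnTimer once per second, starting one second from now
    _timer = new Timer(OnTimer, state: null, dueTime: TimeSpan.FromSeconds(1), period: TimeSpan.FromSeconds(1));
}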
Batch Sending
Before the first batch of accumulated messages can be sent, the client must ensure that a log stream has been created. It then chunks the accumulated messages into batches of at most 1MB or 10,000 messages, whichever limit is reached first. If the operation fails, for example because the client is offline, the messages are inserted back into the accumulator and retried later.
private string _logStreamName;
private string _sequenceToken;

private async Task FlushAsync() {

    // check if any messages are pending
    if(!_logs.Any()) {
        return;
    }

    // check if a log stream must be created
    if(_logStreamName == null) {
        _logStreamName = AppInstanceId;
        var response = await CreateLogStreamAsync(new CreateLogStreamRequest {
            LogStreamName = _logStreamName
        });
        if(response.Error != null) {
            Console.WriteLine($"*** ERROR: unable to create log stream: {_logStreamName} (Error: {response.Error})");
            return;
        }
    }

    // NOTE (2020-08-06, bjorg): we limit the number of log messages we send in the unlikely event that we have too many
    // See: https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html
    const int MaxPayloadSize = 1_048_576;
    const int MaxMessageCount = 10_000;

    // consume as many accumulated log messages as possible
    var payloadByteCount = 0;
    var logs = _logs.Take(MaxMessageCount).TakeWhile(log => {
        var logMessageByteCount = Encoding.UTF8.GetByteCount(log.Message) + 26;
        if((payloadByteCount + logMessageByteCount) >= MaxPayloadSize) {
            return false;
        }
        payloadByteCount += logMessageByteCount;
        return true;
    }).ToList();
    _logs.RemoveRange(0, logs.Count);

    // send log messages to CloudWatch Logs
    try {
        var response = await PutLogEventsAsync(new PutLogEventsRequest {
            LogStreamName = _logStreamName,
            LogEvents = logs,
            SequenceToken = _sequenceToken
        });

        // on error, re-insert the log messages and try again later
        if(response.Error != null) {
            _logs.InsertRange(0, logs);
            return;
        }
        _sequenceToken = response.NextSequenceToken;
    } catch {

        // on exception, re-insert the log messages and try again later
        _logs.InsertRange(0, logs);
    }
}
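The request and response types used by CreateLogStreamAsync and PutLogEventsAsync mirror the REST API protocol from the previous post. Their definitions are not repeated in this post, but the shape implied by the code above is roughly the following, with field names taken from their usage; treat this as a sketch rather than the actual contract, and note that the Error field may well be a richer type in the real protocol.

public class CreateLogStreamRequest {
    public string LogStreamName { get; set; }
}

public class CreateLogStreamResponse {

    // null when the log stream was created successfully
    public string Error { get; set; }
}

public class PutLogEventsRequest {
    public string LogStreamName { get; set; }
    public List<PutLogEventsRequestEntry> LogEvents { get; set; }
    public string SequenceToken { get; set; }
}

public class PutLogEventsResponse {

    // null when the batch was accepted
    public string Error { get; set; }

    // sequence token to pass along with the next batch
    public string NextSequenceToken { get; set; }
}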
Disposal
When the client is disposed, it performs one last flush to ensure that all messages still pending in the accumulator are sent to CloudWatch.
async ValueTask IAsyncDisposable.DisposeAsync() {

    // stop timer and wait for any lingering timer operations to finish
    await _timer.DisposeAsync();

    // wait for any in-flight operation to complete
    if(!(_previousOperationTask?.IsCompleted ?? true)) {
        await _previousOperationTask;
    }

    // flush all remaining messages
    while(_logs.Any()) {
        await FlushAsync();
    }
}
Conclusion
That's it. I hope you enjoyed this behind-the-scenes series on how LambdaSharp implements CloudWatch Logs support for Blazor WebAssembly frontend apps, and that you find it useful in your own endeavors.
Happy Hacking!