Extracting the Lambda Node.js 24.x Runtime Source Code
To extract the Runtime source code, refer to How to Extract AWS Lambda Runtime Source Code: Using Node.js as an Example.
Analyzing the bootstrap
After extracting the Runtime, we begin our analysis.
Since bootstrap is the entry point for all Runtimes, we start from there.
The complete bootstrap script and related Runtime JS files can be obtained from here.
Node Module Search Path Configuration
Opening the bootstrap script, we see a section at the top that configures the Node module search paths:
if [ -z "$NODE_PATH" ];
then
  nodejs_mods="/opt/nodejs/node_modules"
  nodejs24_mods="/opt/nodejs/node24/node_modules"
  runtime_mods="/var/runtime/node_modules"
  task="/var/runtime:/var/task"
  export NODE_PATH="$nodejs24_mods:$nodejs_mods:$runtime_mods:$task"
fi
This first checks whether the NODE_PATH environment variable is set. If it isn't, it sets the following paths as the Node.js module search paths:
- /opt/nodejs/node24/node_modules
- /opt/nodejs/node_modules
- /var/runtime/node_modules
- /var/runtime
- /var/task

(listed here in the order they appear in the exported NODE_PATH, which is also the order Node searches them)
Node Memory Limit Configuration
if [ -n "$AWS_LAMBDA_FUNCTION_MEMORY_SIZE" ];
then
  # For V8 options, both '_' and '-' are supported
  # Ref: https://github.com/nodejs/node/pull/14093
  semi_space_str_und="--max_semi_space_size"
  old_space_str_und="--max_old_space_size"
  semi_space_str=${semi_space_str_und//[_]/-}
  old_space_str=${old_space_str_und//[_]/-}
  # Do not override customers' semi and old space size options if they specify them
  # with NODE_OPTIONS env var. If they just set one, use the default value from v8
  # for the other.
  case $NODE_OPTIONS in
    *$semi_space_str_und*);;
    *$old_space_str_und*);;
    *$semi_space_str*);;
    *$old_space_str*);;
    *)
      # New space should be 5% of AWS_LAMBDA_FUNCTION_MEMORY_SIZE, leaving 5% available for buffers, for instance,
      # very large images or JSON files, which are allocated as C memory, rather than JavaScript heap in V8.
      new_space=$(($AWS_LAMBDA_FUNCTION_MEMORY_SIZE / 10))
      # The young generation size of the V8 heap is three times the size of the semi-space,
      # an increase of 1 MiB to semi-space applies to each of the three individual semi-spaces
      # and causes the heap size to increase by 3 MiB.
      semi_space=$(($new_space / 6))
      # Old space should be 90% of AWS_LAMBDA_FUNCTION_MEMORY_SIZE
      old_space=$(($AWS_LAMBDA_FUNCTION_MEMORY_SIZE - $new_space))
      MEMORY_ARGS=(
        "$semi_space_str=$semi_space"
        "$old_space_str=$old_space"
      )
      ;;
  esac
fi
The bootstrap then checks whether the AWS_LAMBDA_FUNCTION_MEMORY_SIZE environment variable is set. If it is, it configures the V8 engine's memory limit parameters --max_semi_space_size and --max_old_space_size, and adds them to the MEMORY_ARGS array.
This array will be passed as additional arguments to the node command later, ensuring that the Lambda function runs with appropriate memory limits.
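To make the arithmetic concrete, here is a JS restatement of the shell calculation above (using integer division, like the shell's `$(( ))`); the function name `memoryArgs` is ours, not the bootstrap's:

```javascript
// Mirrors the bootstrap's memory math: 10% of the memory size becomes
// "new space", semi-space is a sixth of that (the young generation is
// three semi-spaces), and old space gets the remaining 90%.
function memoryArgs(memorySizeMb) {
  const newSpace = Math.floor(memorySizeMb / 10);
  const semiSpace = Math.floor(newSpace / 6);
  const oldSpace = memorySizeMb - newSpace;
  return [
    `--max-semi-space-size=${semiSpace}`,
    `--max-old-space-size=${oldSpace}`,
  ];
}
```

For a 1024 MB function, this yields `--max-semi-space-size=17` and `--max-old-space-size=922`.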
Node Certificate Path Configuration
Next, the bootstrap script sets the CA certificate bundle path:
# If NODE_EXTRA_CA_CERTS is being set by the customer, don't override. Else, include RDS CA
if [ -z "${NODE_EXTRA_CA_CERTS+set}" ]; then
  # Use the default CA bundle in regions that have 3 dashes in their name
  if [ "${AWS_REGION:0:6}" != "us-gov" ] && [ "${AWS_REGION//[^-]}" == "---" ]; then
    export NODE_EXTRA_CA_CERTS=/etc/pki/tls/certs/ca-bundle.crt
  fi
fi
This checks whether the user has already set the NODE_EXTRA_CA_CERTS environment variable:
- If not set, it decides whether to set `NODE_EXTRA_CA_CERTS` based on the value of the `AWS_REGION` environment variable (which is automatically set by the AWS Lambda service at runtime).
- If already set, it does not override the user's configuration.
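The region test reads a bit cryptically in shell, so here is a JS restatement (the function name is ours): `${AWS_REGION//[^-]}` strips every non-dash character, so comparing the remainder against `"---"` checks that the region name contains exactly three dashes, and `${AWS_REGION:0:6}` excludes `us-gov` regions:

```javascript
// JS equivalent of the shell condition deciding whether to export
// NODE_EXTRA_CA_CERTS: three dashes in the region name, and not us-gov.
function useDefaultCaBundle(region) {
  return !region.startsWith("us-gov") && region.replace(/[^-]/g, "") === "---";
}
```

So a standard region like `us-east-1` (two dashes) does not match, while a three-dash region such as `us-iso-east-1` does, and `us-gov-west-1` is explicitly excluded despite its three dashes.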
Preset Environment Variable AWS_EXECUTION_ENV and Thread Pool Size Configuration
export AWS_EXECUTION_ENV=AWS_Lambda_nodejs24.x
# Set UV_THREADPOOL_SIZE to 16 in multi-concurrency environments if not already set
if [ -n "$AWS_LAMBDA_MAX_CONCURRENCY" ] && [ -z "$UV_THREADPOOL_SIZE" ]; then
  export UV_THREADPOOL_SIZE=16
fi
The bootstrap then sets the AWS_EXECUTION_ENV environment variable to AWS_Lambda_nodejs24.x.
For Lambda Managed Instances mode, it also configures the thread pool size (more on this later).
Note: The Lambda environment has many preset environment variables (see: Lambda default environment variables). According to the documentation, although `AWS_EXECUTION_ENV` is listed as a Reserved environment variable, it is actually set by the `bootstrap` script rather than automatically by the Lambda service. Other environment variables such as `AWS_REGION` are automatically set by the Lambda service.
Starting the Node.js Runtime Event Loop
Finally, the bootstrap script executes the following command to officially start the Node.js Runtime event loop:
NODE_ARGS=(
  --expose-gc
  --max-http-header-size 81920
  "${EXPERIMENTAL_ARGS[@]}"
  "${MEMORY_ARGS[@]}"
  /var/runtime/index.mjs
)
if [ -z "$AWS_LAMBDA_EXEC_WRAPPER" ]; then
  exec /var/lang/bin/node "${NODE_ARGS[@]}"
else
  wrapper="$AWS_LAMBDA_EXEC_WRAPPER"
  if [ ! -f "$wrapper" ]; then
    echo "$wrapper: does not exist"
    exit 127
  fi
  if [ ! -x "$wrapper" ]; then
    echo "$wrapper: is not an executable"
    exit 126
  fi
  exec -- "$wrapper" /var/lang/bin/node "${NODE_ARGS[@]}"
fi
This is a critical piece of code. It constructs the argument list for the node command via NODE_ARGS:
- `--expose-gc`: exposes V8's garbage collection mechanism, allowing user Lambda functions to trigger garbage collection by calling `global.gc()`.
- `--max-http-header-size 81920`: sets the maximum HTTP request header size to 80 KB, supporting larger request headers.
- `"${EXPERIMENTAL_ARGS[@]}"`: sets some experimental flags.
- `"${MEMORY_ARGS[@]}"`: sets the memory limit parameters calculated earlier.
- `/var/runtime/index.mjs`: this is the key item. It is the entry file for the entire Node.js Runtime, responsible for handling the Lambda function's event loop and request processing.
Before diving into the details of /var/runtime/index.mjs, let's focus on the final part of the bootstrap script and how it invokes node:
It first checks whether the `AWS_LAMBDA_EXEC_WRAPPER` environment variable exists. If it doesn't, it directly executes the `node` command to start the Node.js Runtime.

If `AWS_LAMBDA_EXEC_WRAPPER` exists, it passes the `node` command (along with its arguments) as parameters to the executable specified by that variable.
This AWS_LAMBDA_EXEC_WRAPPER will be familiar to anyone who has used Lambda Wrapper scripts. By setting it, Lambda allows users to execute additional initialization logic before the Lambda Runtime starts. So in reality, the implementation of Lambda wrapper scripts is extremely simple - it just checks whether the corresponding variable exists before executing the Runtime, and if it does, runs the user-specified command first.
Analyzing index.mjs
Next, we analyze the implementation of the Node.js Runtime's core file /var/runtime/index.mjs to see how it implements the Lambda Runtime event loop and request processing.
index.mjs is a file with over 1,000 lines of code, actually bundled from multiple modules. In the following sections, we will extract and analyze the core code.
Entry Point
First, let's analyze the entry point of the entire codebase:
// dist/worker/ignition.js
var { isMainThread } = cjsRequire("node:worker_threads");
var verboseLog3 = logger("Ignition");
async function ignition() {
  if (isMultiConcurrentMode() && isMainThread) {
    verboseLog3.verbose("Running in MultiConcurrent Mode");
    const manager = new WorkerManager();
    await manager.start();
  } else {
    verboseLog3.verbose("Running worker thread");
    const runtime = await createRuntime();
    await runtime.start();
  }
}

// dist/index.js
ignition();
The code uses the ignition() function as its entry point. It first checks whether the current environment is in MultiConcurrent Mode. If it is in MultiConcurrent Mode and running on the main thread, it enters the first if branch to initialize the thread pool for parallel request processing.
You might wonder: according to Lambda's model, doesn't each Lambda instance (execution environment) handle only one request at a time? Why would there be a thread pool? In fact, this mode is part of a new Lambda feature called Managed Instances.
It allows users to host Lambda environments on EC2 instances in exchange for stronger networking and compute performance. Moreover, under this mode, Lambda's entire execution model undergoes a fundamental change - it no longer processes just one request per environment. Instead, like a traditional server, it handles multiple requests concurrently within a single environment.
For details, refer to Node.js runtime for Lambda Managed Instances.
NOTE: Managed Instances mode is not the focus of this analysis, and its behavior is completely different from the traditional Lambda model. Therefore, all subsequent analysis will skip code related to Managed Instances mode and focus on the Runtime implementation under the traditional Lambda model.
Initializing the Runtime
Under the traditional Lambda model, the ignition() function enters the second else branch:
- It calls `createRuntime()` to create a Runtime instance.
- Then it calls `Runtime.start()` to start the Runtime event loop.
Next, we analyze the implementation of createRuntime():
function setupGlobals() {
  const NoGlobalAwsLambda = process.env["AWS_LAMBDA_NODEJS_NO_GLOBAL_AWSLAMBDA"] === "1" || process.env["AWS_LAMBDA_NODEJS_NO_GLOBAL_AWSLAMBDA"] === "true";
  if (!NoGlobalAwsLambda) {
    globalThis.awslambda = {
      ...globalThis.awslambda,
      streamifyResponse: (handler, options) => { /* ...... */ }
    };
  }
}

async function createRuntime(rapidClientOptions = {}) {
  setupGlobals();
  const runtimeApi = process.env.AWS_LAMBDA_RUNTIME_API;
  const handlerString = process.env._HANDLER;
  const taskRoot = process.env.LAMBDA_TASK_ROOT;
  // ...
  const rapidClient = await RAPIDClient.create(runtimeApi, rapidClientOptions, isMultiConcurrent);
  try {
    const { handler, metadata: handlerMetadata } = await UserFunctionLoader.load(taskRoot, handlerString);
    errorOnDeprecatedCallback(handlerMetadata);
    return Runtime.create({
      rapidClient,
      handler,
      handlerMetadata,
      isMultiConcurrent
    });
  } catch (error) {
    structuredConsole.logError("Init Error", error);
    await rapidClient.postInitError(error);
    throw error;
  }
}
Setting up the globalThis.awslambda Object
createRuntime() first calls setupGlobals() to extend the awslambda object on the global scope.
The globalThis.awslambda here is pre-configured by earlier code. It reads the AWS_LAMBDA_NODEJS_NO_GLOBAL_AWSLAMBDA environment variable to decide whether to set the awslambda object on the global scope.
var NO_GLOBAL_AWS_LAMBDA = ["true", "1"].includes(process.env?.AWS_LAMBDA_NODEJS_NO_GLOBAL_AWSLAMBDA ?? "");
if (!NO_GLOBAL_AWS_LAMBDA) {
  globalThis.awslambda = globalThis.awslambda || {};
}
...
var InvokeStore;
(function(InvokeStore2) {
  let instance = null;
  async function getInstanceAsync() {
    if (!instance) {
      instance = (async () => {
        ...
        const newInstance = isMulti ? await InvokeStoreMulti.create() : new InvokeStoreSingle();
        if (!NO_GLOBAL_AWS_LAMBDA && globalThis.awslambda?.InvokeStore) {
          return globalThis.awslambda.InvokeStore;
        } else if (!NO_GLOBAL_AWS_LAMBDA && globalThis.awslambda) {
          globalThis.awslambda.InvokeStore = newInstance;
          return newInstance;
        } else {
          return newInstance;
        }
      })();
    }
    return instance;
  }
  ...
})(InvokeStore || (InvokeStore = {}));
According to the code above, if setting globalThis.awslambda is allowed, it also adds an InvokeStore property to the globalThis.awslambda object.
This InvokeStore is actually the functionality from an open-source library by AWS called aws-lambda-invoke-store. It provides a per-invocation context store for the AWS Lambda Node.js runtime environment (e.g., for storing per-request information such as request IDs).
At the end of that library's documentation, it mentions:
Integration with AWS Lambda Runtime
The @aws/lambda-invoke-store package is designed to be integrated with the AWS Lambda Node.js Runtime Interface Client (RIC). The RIC automatically
...
The InvokeStore integrates with the Lambda runtime's global namespace:
const globalInstance = globalThis.awslambda.InvokeStore;
...
If you prefer not to modify the global namespace, you can opt out by setting the environment variable:
AWS_LAMBDA_NODEJS_NO_GLOBAL_AWSLAMBDA=1
So in reality, the integration between InvokeStore and the Runtime, as well as the toggle controlled by the AWS_LAMBDA_NODEJS_NO_GLOBAL_AWSLAMBDA environment variable, are all implemented through the Runtime code shown above.
Initializing rapidClient
Next, createRuntime() creates a rapidClient instance:
const rapidClient = await RAPIDClient.create(runtimeApi, rapidClientOptions, isMultiConcurrent);
rapidClient is a native Node.js addon, present in the extracted Runtime files as rapid-client.node.
The RAPIDClient class dynamically loads this addon via cjsRequire("./rapid-client.node");.
The main functions of rapidClient are:
- Acting as a wrapped HTTP client for the Lambda Runtime API (e.g., calling the `/next` and `/response` APIs), simplifying request construction and error handling.
- Wrapping and processing Runtime API results for easier consumption.
- Handling various error scenarios.
NOTE: Different versions of the Lambda Runtime can vary significantly. This article analyzes the Node.js 24.x Runtime. In previous versions (e.g., Node.js 20.x), the entire Runtime implementation was completely different and generally more complex.
Parsing and Loading the Handler Code Specified by _HANDLER
Next comes the critical part: parsing the handler we configured (e.g., index.handler) and loading the corresponding JS code file.
NOTE: The handler information we set in the AWS Console is passed to the Lambda runtime as the value of the `_HANDLER` environment variable. Therefore, the Runtime needs to parse the `_HANDLER` environment variable and load the corresponding code.
createRuntime() uses the following code to read the value of _HANDLER from the environment variables and calls UserFunctionLoader.load(). This UserFunctionLoader.load() is the key to parsing and loading the handler code.
const handlerString = process.env._HANDLER
const { handler, metadata: handlerMetadata } = await UserFunctionLoader.load(taskRoot, handlerString);
Let's dive deeper into the implementation of UserFunctionLoader.load():
var UserFunctionLoader = class {
  ...
  static async load(appRoot, handlerString) {
    ...
    const { moduleRoot, moduleName, handlerName } = parseHandlerString(handlerString);
    const module = await loadModule({
      appRoot,
      moduleRoot,
      moduleName
    });
    const handler = resolveHandler(module, handlerName, handlerString);
    return {
      handler,
      metadata: this.getHandlerMetadata(handler)
    };
  }
  ...
};
First, UserFunctionLoader.load() calls parseHandlerString() to parse the handlerString. It converts a handler string like src/index.handler into the following structure:
{
  moduleRoot: "src",
  moduleName: "index",
  handlerName: "handler"
}
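The parsing rule is simple enough to sketch ourselves (this is a plausible reconstruction, not the bundled implementation): the last `/` separates the module root from the rest, and the first `.` after it separates the module name from the handler name:

```javascript
// Plausible sketch of parseHandlerString: "src/index.handler" ->
// { moduleRoot: "src", moduleName: "index", handlerName: "handler" }.
// Everything after the first "." is the handler name, so nested names
// like "index.api.main" keep "api.main" intact for resolveHandler.
function parseHandlerString(handlerString) {
  const slash = handlerString.lastIndexOf("/");
  const moduleRoot = slash >= 0 ? handlerString.slice(0, slash) : "";
  const rest = handlerString.slice(slash + 1);
  const dot = rest.indexOf(".");
  return {
    moduleRoot,
    moduleName: rest.slice(0, dot),
    handlerName: rest.slice(dot + 1),
  };
}
```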
After parsing, it calls loadModule({ appRoot, moduleRoot, moduleName }) to load the handler module. Let's continue analyzing the implementation of loadModule():
var path2 = cjsRequire("node:path");
async function loadModule(options) {
  const fullPathWithoutExtension = path2.resolve(options.appRoot, options.moduleRoot, options.moduleName);
  const extensionLookupOrder = ["", ".js", ".mjs", ".cjs"];
  try {
    for (const extension of extensionLookupOrder) {
      const module = await tryAwaitImport(fullPathWithoutExtension, extension);
      if (module)
        return module;
    }
    const resolvedPath = cjsRequire.resolve(options.moduleName, {
      paths: [options.appRoot, path2.join(options.appRoot, options.moduleRoot)]
    });
    return cjsRequire(resolvedPath);
  } catch (err) {
    if (err instanceof SyntaxError) {
      throw new UserCodeSyntaxError(err);
    } else if (err instanceof Error && err.code === "MODULE_NOT_FOUND") {
      throw new ImportModuleError(err);
    } else {
      throw err;
    }
  }
}
The implementation of loadModule() is actually quite straightforward.
First, loadModule() concatenates appRoot, moduleRoot, and moduleName into a full path without an extension (for src/index.handler, this would be "/var/task" + "src" + "index").
Then it tries appending different extensions ("", .js, .mjs, .cjs) to this path, for example:
- /var/task/src/index
- /var/task/src/index.js
- /var/task/src/index.mjs
- /var/task/src/index.cjs
It uses tryAwaitImport() to attempt loading each path as an ESM module. If a path doesn't exist, it moves on to the next extension combination until loading succeeds.
If all loading attempts fail, it falls back to using cjsRequire() to try loading the module as a CommonJS module.
Extracting the Handler Function from the Handler Module
Up to this point, only the handler module has been loaded. Next, we need to extract the handler function from the module. This is the job of resolveHandler():
function resolveHandler(module, handlerName, fullHandlerString) {
  let handler = findIn(handlerName, module);
  if (!handler && typeof module === "object" && module !== null && "default" in module) {
    handler = findIn(handlerName, module.default);
  }
  if (!handler) {
    throw new HandlerNotFoundError(`${fullHandlerString} is undefined or not exported`);
  }
  if (!isUserHandler(handler)) {
    throw new HandlerNotFoundError(`${fullHandlerString} is not a function`);
  }
  return handler;
}
function findIn(handlerName, module) {
  return handlerName.split(".").reduce((nested, key) => {
    return nested && typeof nested === "object" ? nested[key] : void 0;
  }, module);
}
Upon analysis, we find that the implementation of resolveHandler() is quite simple: it extracts the handler function from the module by walking the dot-separated handlerName via findIn() (so nested exports also resolve), falling back to module.default for ESM default exports. If the result doesn't exist or isn't a function, an error is thrown.
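A quick usage example of findIn from the snippet above makes the nested lookup concrete. Because handlerName is split on ".", a handler string with a nested export path also resolves (the module object below is ours):

```javascript
// findIn reduces over the dot-separated path, descending one key at a time.
function findIn(handlerName, module) {
  return handlerName.split(".").reduce((nested, key) => {
    return nested && typeof nested === "object" ? nested[key] : void 0;
  }, module);
}

const mod = { handler: () => "top", api: { handlers: { main: () => "nested" } } };
```

Here `findIn("handler", mod)` returns the top-level export, `findIn("api.handlers.main", mod)` descends into the nested object, and a missing name yields `undefined`, which resolveHandler() turns into a HandlerNotFoundError.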
At this point, the entire handler module parsing and handler function resolution is complete. Next, we analyze the Runtime object's implementation and its event loop.
Creating the Runtime Instance
After the handler has been parsed and loaded, createRuntime() finally passes the handler object, rapidClient object, and other information to Runtime.create() to create a Runtime instance and then starts the Runtime event loop:
const runtime = Runtime.create({
  rapidClient,
  handler,
  handlerMetadata,
  isMultiConcurrent
});
await runtime.start();
Analyzing Runtime.start()
Next, let's analyze the Runtime class and the implementation of its .start() method:
var Runtime = class _Runtime {
  ...
  constructor(handler, handlerMetadata, isMultiConcurrent, lifecycle) {
    this.handler = handler;
    this.handlerMetadata = handlerMetadata;
    this.isMultiConcurrent = isMultiConcurrent;
    this.lifecycle = lifecycle;
  }
  async start() {
    const processor = this.createProcessor();
    if (this.isMultiConcurrent) {
      await this.processMultiConcurrent(processor);
    } else {
      await this.processSingleConcurrent(processor);
    }
  }
  createProcessor() {
    if (this.handlerMetadata.streaming) {
      return new StreamingInvokeProcessor(this.handler, this.lifecycle, this.handlerMetadata);
    } else {
      return new BufferedInvokeProcessor(this.handler, this.lifecycle);
    }
  }
  async processSingleConcurrent(processor) {
    while (true) {
      const { context, event } = await this.lifecycle.next();
      await this.runWithInvokeContext(context.awsRequestId, context.xRayTraceId, () => processor.processInvoke(context, event));
    }
  }
  ...
}
- First, the `Runtime` class constructor stores the handler and other parameters as internal properties.
- Then, in the `start()` method, it calls `createProcessor()` to create a Processor instance (the Processor is responsible for invoking and executing the handler function).
- Then it calls `processSingleConcurrent()` to enter an infinite `while (true)` loop that continuously calls the `/next` API to retrieve events. When an event is received, it invokes the corresponding Processor to call the handler and formally execute the Lambda function.
Analyzing processSingleConcurrent()
Since we are focusing on the traditional Lambda single-request-single-processing model, we only analyze the implementation of processSingleConcurrent(), which is designed specifically for this mode:
async processSingleConcurrent(processor) {
  // infinite loop to keep processing incoming events
  while (true) {
    // call /next API to get the next event and context
    const { context, event } = await this.lifecycle.next();
    // run the Processor's processInvoke() within the invoke context
    await this.runWithInvokeContext(context.awsRequestId, context.xRayTraceId, () => processor.processInvoke(context, event));
  }
}
Doesn't this look very similar to the Lambda Runtime event loop we implemented using while + curl in the previous articles?
In essence, the Node.js Runtime works the same way - through an infinite loop that continuously calls the /next API to retrieve events.
Here:
- `await this.lifecycle.next();` calls the `/next` API to retrieve an event, processes and analyzes it, then returns an object containing `context` and `event`.
- After obtaining the event via `.next()`, `processSingleConcurrent()` calls `runWithInvokeContext()` to execute the handler function.
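To make the control flow concrete, here is a minimal sketch of the same loop driven by a stubbed lifecycle and processor (all names below mirror the snippet above, but the bounded loop and the stubs are ours, for illustration only):

```javascript
// Same shape as processSingleConcurrent(), except bounded so it terminates:
// fetch the next invocation, process it, repeat.
async function eventLoop(lifecycle, processor, maxIterations) {
  for (let i = 0; i < maxIterations; i++) { // while (true) in the real runtime
    const { context, event } = await lifecycle.next();
    await processor.processInvoke(context, event);
  }
}
```

Swapping the stubbed `lifecycle.next()` for a real HTTP call to the Runtime API's `/next` endpoint gives you exactly the while + curl loop from the previous articles.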
Analyzing runWithInvokeContext()
So how does runWithInvokeContext() invoke the handler? It runs the given callback within the invoke context (binding the request ID and X-Ray trace ID to the invocation), and that callback is the Processor's processInvoke(), whose implementation is simpler than you might expect:
async processInvoke(context, event) {
  try {
    const result = await this.handler(event, context);
    await this.lifecycle.succeed(context.awsRequestId, result);
  } catch (err) {
    await this.lifecycle.fail(context.awsRequestId, err);
  }
}
Yes, it simply calls the user's handler function directly via await handler(event, context) to process the request. Since await on a non-promise value simply resolves to that value, a single await this.handler(event, context) supports both sync and async handlers.
After the handler call succeeds, it calls await this.lifecycle.succeed(context.awsRequestId, result); to invoke the /response API and return the handler's processing result.
At this point, the core analysis of the entire Runtime event loop is complete.
Additional Feature Implementation Analysis
Next, let's analyze how some other additional features are implemented.
How the Handler Arguments event and context Are Constructed
Based on the previous content, lifecycle.next() calls the /next API to retrieve the invocation and then constructs event and context.
Let's take a closer look at how lifecycle.next() parses and constructs the event and context objects:
async next() {
  const invocationRequest = await this.client.nextInvocation();
  const context = ContextBuilder.build(invocationRequest.headers);
  const event = JSON.parse(invocationRequest.bodyJson);
  return {
    context,
    event
  };
}
We can see that:
- It first calls the `/next` API via rapidClient to retrieve the event (`this.client.nextInvocation()`).
- Then it calls `ContextBuilder.build()` to construct the `context` object.
- Then it directly parses the `bodyJson` returned by the `/next` API as JSON to create the `event` object.
Let's analyze how the Context is constructed:
var REQUIRED_INVOKE_HEADERS = {
  FUNCTION_ARN: "lambda-runtime-invoked-function-arn",
  REQUEST_ID: "lambda-runtime-aws-request-id",
  DEADLINE_MS: "lambda-runtime-deadline-ms"
};
var OPTIONAL_INVOKE_HEADERS = {
  CLIENT_CONTEXT: "lambda-runtime-client-context",
  COGNITO_IDENTITY: "lambda-runtime-cognito-identity",
  X_RAY_TRACE_ID: "lambda-runtime-trace-id",
  TENANT_ID: "lambda-runtime-aws-tenant-id"
};
var ContextBuilder = class {
  static build(headers) {
    ...
    const invokeHeaders = this.validateAndNormalizeHeaders(headers);
    const headerData = this.getHeaderData(invokeHeaders);
    const environmentData = this.getEnvironmentData();
    moveXRayHeaderToEnv(invokeHeaders);
    return Object.assign(headerData, environmentData);
  }
  static getEnvironmentData() {
    return {
      functionName: process.env.AWS_LAMBDA_FUNCTION_NAME,
      functionVersion: process.env.AWS_LAMBDA_FUNCTION_VERSION,
      memoryLimitInMB: process.env.AWS_LAMBDA_FUNCTION_MEMORY_SIZE,
      logGroupName: process.env.AWS_LAMBDA_LOG_GROUP_NAME,
      logStreamName: process.env.AWS_LAMBDA_LOG_STREAM_NAME
    };
  }
  static getHeaderData(invokeHeaders) {
    const deadline = this.parseDeadline(invokeHeaders);
    return {
      clientContext: this.parseJsonHeader(invokeHeaders[OPTIONAL_INVOKE_HEADERS.CLIENT_CONTEXT], OPTIONAL_INVOKE_HEADERS.CLIENT_CONTEXT),
      identity: this.parseJsonHeader(invokeHeaders[OPTIONAL_INVOKE_HEADERS.COGNITO_IDENTITY], OPTIONAL_INVOKE_HEADERS.COGNITO_IDENTITY),
      invokedFunctionArn: invokeHeaders[REQUIRED_INVOKE_HEADERS.FUNCTION_ARN],
      awsRequestId: invokeHeaders[REQUIRED_INVOKE_HEADERS.REQUEST_ID],
      tenantId: invokeHeaders[OPTIONAL_INVOKE_HEADERS.TENANT_ID],
      xRayTraceId: invokeHeaders[OPTIONAL_INVOKE_HEADERS.X_RAY_TRACE_ID],
      getRemainingTimeInMillis: function() {
        return deadline - Date.now();
      }
    };
  }
};
As we can see, Context construction is actually quite straightforward:
- It calls `validateAndNormalizeHeaders()` and `getHeaderData()` to extract information such as `lambda-runtime-invoked-function-arn` and `lambda-runtime-client-context` from the headers.
- Then it retrieves information such as `AWS_LAMBDA_FUNCTION_NAME` and `AWS_LAMBDA_FUNCTION_VERSION` from environment variables.
- Finally, it combines all this information into a `context` object and returns it.
This information is consistent with the Lambda context object structure described in the official documentation.
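One detail worth calling out is getRemainingTimeInMillis(): the `/next` response carries an absolute deadline (the lambda-runtime-deadline-ms header, a Unix epoch in milliseconds), and the context method is simply a closure over it. A minimal sketch (the function name `buildTimingContext` is ours):

```javascript
// The deadline header is absolute, so "remaining time" is just a closure
// that subtracts the current time from it on every call.
function buildTimingContext(headers) {
  const deadline = Number(headers["lambda-runtime-deadline-ms"]);
  return {
    awsRequestId: headers["lambda-runtime-aws-request-id"],
    getRemainingTimeInMillis: () => deadline - Date.now(),
  };
}
```

This is why calling context.getRemainingTimeInMillis() repeatedly inside a handler returns a steadily decreasing value without any extra API calls.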
Support for the Handler callback Has Been Removed
As we know, in previous versions of Node.js, the Lambda handler also supported a third callback parameter for asynchronous processing - Callback-based function handlers:
export const handler = (event, context, callback) => { };
However, according to the documentation, support for the handler callback has been removed starting from Node.js 24:
Callback-based function handlers are only supported up to Node.js 22. Starting from Node.js 24, asynchronous tasks should be implemented using async function handlers.
Therefore, in the current Node.js 24.x Runtime, support for callback has been removed. The callback invocation pattern is no longer supported, and no callback-related implementation exists in the entire Runtime.
Stream Handler
The handler also supports streaming response handlers - see Response streaming function handlers.
According to the official documentation, you can use a streaming response handler as follows:
export const handler = awslambda.streamifyResponse(async (event, responseStream, context) => { });
Do you recall the Runtime's handling of globalThis.awslambda mentioned earlier? The awslambda.streamifyResponse() above is actually the globalThis.awslambda object set by the Runtime at startup:
function setupGlobals() {
  const NoGlobalAwsLambda = process.env["AWS_LAMBDA_NODEJS_NO_GLOBAL_AWSLAMBDA"] === "1" || process.env["AWS_LAMBDA_NODEJS_NO_GLOBAL_AWSLAMBDA"] === "true";
  if (!NoGlobalAwsLambda) {
    globalThis.awslambda = {
      ...globalThis.awslambda,
      streamifyResponse: (handler, options) => {
        const typedHandler = handler;
        typedHandler[UserFunctionLoader.HANDLER_STREAMING] = UserFunctionLoader.STREAM_RESPONSE;
        if (typeof options?.highWaterMark === "number") {
          typedHandler[UserFunctionLoader.HANDLER_HIGHWATERMARK] = parseInt(String(options.highWaterMark));
        }
        return handler;
      },
      HttpResponseStream
    };
  }
}
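Note the trick here: streamifyResponse() doesn't change the handler at all; it just tags it with metadata properties that the loader's getHandlerMetadata() later inspects to decide between the streaming and buffered Processors. A self-contained sketch of the tagging (the symbol name below is an assumption for illustration, not confirmed from the bundle):

```javascript
// Sketch of the tagging pattern: mark the function object with metadata,
// return it unchanged. The loader checks for the mark later.
const HANDLER_STREAMING = Symbol.for("aws.lambda.runtime.handler.streaming");
const STREAM_RESPONSE = "response";

function streamifyResponse(handler, options) {
  handler[HANDLER_STREAMING] = STREAM_RESPONSE;
  if (typeof options?.highWaterMark === "number") {
    handler.highWaterMark = options.highWaterMark; // hypothetical property name
  }
  return handler;
}
```

Using Symbol.for() rather than a plain string key keeps the mark out of ordinary property enumeration while still letting separately-bundled code look it up by name.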
Hooking console.log and Related Methods
In Log and monitor Node.js Lambda functions, it is mentioned that the handler can directly call console.log() and other methods to print logs without any additional processing.
Moreover, the output logs are not printed as-is but are prefixed with information such as requestId. How is this achieved?
exports.handler = async function(event, context) {
  console.log("ENVIRONMENT VARIABLES\n" + JSON.stringify(process.env, null, 2))
  console.info("EVENT\n" + JSON.stringify(event, null, 2))
  console.warn("Event not processed.")
  return context.logStreamName
}
Output:
2019-06-07T19:11:20.562Z c793869b-ee49-115b-a5b6-4fd21e8dedac INFO ENVIRONMENT VARIABLES
{
  "AWS_LAMBDA_FUNCTION_VERSION": "$LATEST",
  "AWS_LAMBDA_LOG_GROUP_NAME": "/aws/lambda/my-function",
  "AWS_LAMBDA_LOG_STREAM_NAME": "2019/06/07/[$LATEST]e6f4a0c4241adcd70c262d34c0bbc85c",
  "AWS_EXECUTION_ENV": "AWS_Lambda_nodejs12.x",
  "AWS_LAMBDA_FUNCTION_NAME": "my-function",
  "PATH": "/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin",
  "NODE_PATH": "/opt/nodejs/node10/node_modules:/opt/nodejs/node_modules:/var/runtime/node_modules",
  ...
}
2019-06-07T19:11:20.563Z c793869b-ee49-115b-a5b6-4fd21e8dedac INFO EVENT
{
  "key": "value"
}
In fact, before the Runtime starts, it calls LogPatch.patchConsole() to hook all console methods:
static patchConsoleMethods(logger2) {
  const createLogFunction = (level) => {
    if (!logger2.shouldLog(level)) {
      return this.NopLog;
    }
    return (message, ...params) => {
      logger2.log(level, message, ...params);
    };
  };
  console.trace = createLogFunction(LOG_LEVEL.TRACE);
  console.debug = createLogFunction(LOG_LEVEL.DEBUG);
  console.info = createLogFunction(LOG_LEVEL.INFO);
  console.warn = createLogFunction(LOG_LEVEL.WARN);
  console.error = createLogFunction(LOG_LEVEL.ERROR);
  console.fatal = createLogFunction(LOG_LEVEL.FATAL);
  console.log = console.info;
}
As shown above, patchConsoleMethods() replaces all console methods (trace, debug, info, warn, error, fatal) with wrappers that forward to logger2.log().
So every call to console.log() and its siblings ends up in logger2.log().
logger2 is an instance of the StdoutLogger class. Its .log() method accepts level, message, and ...params, formats this information, and writes it to stdout via process.stdout.write():
var StdoutLogger = class extends BaseLogger {
  log(level, message, ...params) {
    if (!this.shouldLog(level))
      return;
    const timestamp = (/* @__PURE__ */ new Date()).toISOString();
    const requestId = this.invokeStore.getRequestId();
    const tenantId = this.invokeStore.getTenantId() || "";
    if (this.options.format === LOG_FORMAT.JSON) {
      this.logJsonMessage(timestamp, requestId, tenantId, level, message, ...params);
    } else {
      this.logTextMessge(timestamp, requestId, level, message, ...params);
    }
  }
  logTextMessge(timestamp, requestId, level, message, ...params) {
    const line = formatTextMessage(timestamp, requestId, level, message, ...params).replace(/\n/g, FORMAT.CARRIAGE_RETURN);
    process.stdout.write(line + FORMAT.LINE_DELIMITER);
  }
  logJsonMessage(timestamp, requestId, tenantId, level, message, ...params) {
    const line = formatJsonMessage(timestamp, requestId, tenantId, level, message, ...params).replace(/\n/g, FORMAT.CARRIAGE_RETURN);
    process.stdout.write(line + FORMAT.LINE_DELIMITER);
  }
};
This is how the Node.js Lambda Runtime hooks console.log() and related methods.
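As a minimal sketch of what the text formatter produces (an assumption based on the sample output above, not the bundled formatTextMessage): the fields are joined into one line, and embedded newlines in the message are replaced with carriage returns so that one console call stays one CloudWatch log event:

```javascript
// Hypothetical text-format sketch: timestamp, request ID, level, message,
// tab-separated, with "\n" collapsed to "\r" so multi-line messages remain
// a single log line.
function formatTextLine(timestamp, requestId, level, message) {
  return [timestamp, requestId, level, message].join("\t").replace(/\n/g, "\r");
}
```

This explains the prefix seen in the earlier output sample: the timestamp and requestId are prepended by the patched logger, not by the user's console.log() call.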