SeaLights Node.js agent - AWS Lambda Support


General

A new agent implementation supporting FaaS on AWS Lambda, based on the Node.js V8 engine's native coverage.

 

Support Model

Sealights Lambda Support Model

 

Prerequisites

AWS Lambda functions are small, short-lived pieces of code that are invoked by calling an HTTP endpoint.

To support coverage monitoring by the Sealights agent, the following prerequisites must be met:

  1. Node and npm - tested on version v16.20.2 and above

    1. Based on AWS Lambda support - see Lambda runtimes - AWS Lambda

  2. An additional step in the pipeline sequence to configure the Lambda support

  3. Changes to the AWS deployment manifest (see below for a full end-to-end example)

 

How Sealights Lambda Support Works

 

Support for AWS Lambda functions is handled by an internal Lambda layer (sealights_layer) that is installed during the pipeline steps (more on that step below) and intercepts the original Lambda handler.
Here is the flow when the Lambda function is invoked:

Step 1 - Execution of setup code

Within the sealights_layer code, there's a setup file that runs during the initialization of the lambda function.

The setup file operates as follows:

#!/bin/bash
# Copyright Sealights.io, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
# This script is designed for use in an AWS Lambda environment.
# It enables coverage collection using the native V8 engine coverage.

# Store command line arguments in an array
args=("$@")

# Get the original handler name (if set)
orig_name=${_HANDLER:-}

# Set the Sealights wrapper as the handler for the Lambda function
export _HANDLER='/opt/wrapper.handleExecution'

# Export the original handler name as an environment variable -
# we use it in the Sealights wrapper to locate the original lambda function
export ORIG_NAME=$orig_name

export NODE_V8_COVERAGE=/tmp

# Run the Lambda function with the original runtime
"${args[@]}"



This code intercepts the original lambda handler name and replaces it with the Sealights lambda handler.

Step 2 - Invoking the Sealights Lambda Handler

Once the setup and initialization are complete, the AWS backend calls the Sealights lambda handler, which then loads and begins processing the request.

Step 3 - Initiating Coverage

After loading the configuration, the code initiates coverage monitoring and saves all coverage data to a temporary file.

Step 4 - Invoking the Original Lambda Function

Once coverage monitoring has started, the code invokes the original Lambda function and retrieves its response.

Step 5 - Terminating Coverage

After the original lambda function has completed and provided a response, coverage monitoring is halted. The data is then processed into a Footprint data JSON, making it ready for transmission to Sealights.

Step 6 - Transmitting Footprints to Sealights

At this juncture, a brief HTTP POST request is made to the backend, sending the footprint model.

Step 7 - Returning the Response

Following the communication with the backend, the original lambda handler's response is relayed back to the AWS backend.
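
To make the flow above concrete, here is a minimal, illustrative sketch of what such a wrapper handler could look like. This is not the actual Sealights layer source: the coverage calls use Node's built-in inspector API, the footprint payload shape and the SL_FOOTPRINTS_URL endpoint are placeholders, and the global fetch assumes Node 18 or later.

// Illustrative sketch only - not the actual Sealights layer implementation.
const inspector = require('inspector');
const path = require('path');

// Step 2: this wrapper is registered as the Lambda handler via _HANDLER (see the setup script above)
exports.handleExecution = async (event, context) => {
  // Locate and load the original handler using ORIG_NAME exported by the setup script
  const handlerSpec = process.env.ORIG_NAME || 'index.handler';
  const dot = handlerSpec.lastIndexOf('.');
  const originalModule = require(path.join(process.env.LAMBDA_TASK_ROOT || '.', handlerSpec.slice(0, dot)));
  const originalHandler = originalModule[handlerSpec.slice(dot + 1)];

  // Step 3: start precise V8 coverage collection via the built-in inspector protocol
  const session = new inspector.Session();
  session.connect();
  const post = (method, params) =>
    new Promise((resolve, reject) =>
      session.post(method, params, (err, res) => (err ? reject(err) : resolve(res))));
  await post('Profiler.enable');
  await post('Profiler.startPreciseCoverage', { callCount: true, detailed: true });

  // Step 4: invoke the original Lambda function and capture its response
  const response = await originalHandler(event, context);

  // Step 5: stop coverage and turn the raw V8 data into a footprints payload
  const { result } = await post('Profiler.takePreciseCoverage');
  await post('Profiler.stopPreciseCoverage');
  session.disconnect();
  const footprints = { coverage: result }; // the real footprint format is Sealights-internal

  // Step 6: send the footprints to the backend with a short HTTP POST (endpoint is a placeholder)
  await fetch(process.env.SL_FOOTPRINTS_URL || 'https://example.invalid/footprints', {
    method: 'POST',
    headers: { 'content-type': 'application/json', authorization: `Bearer ${process.env.SL_TOKEN}` },
    body: JSON.stringify(footprints),
  }).catch(() => { /* never fail the function because reporting failed */ });

  // Step 7: relay the original handler's response back to the AWS runtime
  return response;
};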

 

Configuration

To add SL to a given FaaS, you only need to do one thing: change the deployment manifest to include support for the Sealights Lambda layer.

Collector changes

The collector needs to be configured to support node lambda calls. This should be done by adding the following flags under the collectors->properties section:

collectors:
  ...
  properties:
    ...
    enableNYCCollector: true
    nycCollectorUploadInterval: 60

Deployment Manifest Changes

There are two main changes that need to be made to the deployment manifest:

  1. Adding the Sealights Lambda layer - contains the code of the Sealights Lambda support

  2. Adding a reference to the Sealights Lambda layer in every Lambda function definition.

 

Example:

your-api:
  handler: ./src/test-lambda-1/index.handler
  events:
    - httpApi:
        path: /sealights
        method: get
  # this is all you need to add
  layers:
    - arn:aws:lambda:eu-west-1:159616352881:layer:sl-nodejs-layer:44
  # end of what's needed

The layer can also be defined on a global level for all functions:
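
For example, with the Serverless Framework the layer can be attached at the provider level so that it applies to every function in the service (a minimal sketch; the runtime and ARN shown are illustrative):

provider:
  name: aws
  runtime: nodejs16.x
  # Applied to every function in the service
  layers:
    - arn:aws:lambda:eu-west-1:159616352881:layer:sl-nodejs-layer:44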

Important Notes:

  • The only required part is the Lambda layer ARN (in this case arn:aws:lambda:eu-west-1:159616352881:layer:sl-nodejs-layer:44)

  • To make the layer work in a testing environment, set the following environment variables:

    • AWS_LAMBDA_EXEC_WRAPPER should be /opt/sealights-extension

      • If you do not set this, then SL will not affect the running function.

    • SL_TOKEN is the same token used today when setting up an instrumented application.

    • SL_BUILD_SESSION_ID is the same build session ID used today when setting up an instrumented application.

    • SL_PROJECT_ROOT should point to where your source code resides; in our example, ./src

Additional Environment Variables:

In addition to the mandatory 'AWS_LAMBDA_EXEC_WRAPPER: /opt/setup' environment variable, there are more environment variables that should be defined, as mentioned above (a combined example follows the table):

  • SL_TOKEN (string) - Agent token needed for authentication

  • SL_PROJECT_ROOT (string) - Determines the root directory of the project; the default is the current working directory

  • SL_BUILD_SESSION_ID (string) - Sets the build session ID

  • LAB_ID (string) - Sets the lab ID value
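
For instance, when using the Serverless Framework these variables can be set once under provider.environment (a sketch; the placeholder values must be replaced with the ones from your own Sealights setup, and the wrapper path should match the one required by the layer, as noted above):

provider:
  environment:
    AWS_LAMBDA_EXEC_WRAPPER: /opt/sealights-extension   # wrapper provided by the Sealights layer
    SL_TOKEN: <your-agent-token>
    SL_BUILD_SESSION_ID: <your-build-session-id>
    SL_PROJECT_ROOT: ./src
    LAB_ID: <your-lab-id>   # optional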

Code Example

Code repository

https://github.com/Sealights/SL.OnPremise.Lambda.Layers/tree/master/node/example

This is a very small and simple Serverless project that demonstrates usage of the Sealights AWS Lambda Layer (runtime).

Setup

Here are the steps to add Sealights Lambda support.

Step 0 - Config and scanning

In serverless.yml replace the default Layer ARN with the desired one.

Replace the environment variables with your values using your preferred method (the template YAML, the AWS console, etc.).

Configure Sealights, and make sure your workspace path matches the project root (SL_PROJECT_ROOT).

After this, continue with the deploy and testing steps below.

Step 1 - Amending the deploy manifest

Here is the original deploy manifest
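
(The manifest in the repository may differ; the following is a minimal illustrative reconstruction based on the function used in the earlier example.)

service: sealights-lambda-example   # illustrative name
provider:
  name: aws
  runtime: nodejs16.x
functions:
  your-api:
    handler: ./src/test-lambda-1/index.handler
    events:
      - httpApi:
          path: /sealights
          method: get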

We will add the Sealights layer and make changes to the function settings.

Here is the amended deployment manifest:
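
(Again an illustrative sketch rather than the exact repository file; the Sealights-specific additions are the environment variables and the layer reference.)

service: sealights-lambda-example   # illustrative name
provider:
  name: aws
  runtime: nodejs16.x
  environment:
    AWS_LAMBDA_EXEC_WRAPPER: /opt/sealights-extension
    SL_TOKEN: <your-agent-token>
    SL_BUILD_SESSION_ID: <your-build-session-id>
    SL_PROJECT_ROOT: ./src
functions:
  your-api:
    handler: ./src/test-lambda-1/index.handler
    events:
      - httpApi:
          path: /sealights
          method: get
    # Sealights additions
    layers:
      - arn:aws:lambda:eu-west-1:159616352881:layer:sl-nodejs-layer:44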

 

Special Considerations

Support for Additional Layers:

  • Currently, using other layers together with the SeaLights layer is supported only for:

    • Dynatrace (AWS_LAMBDA_EXEC_WRAPPER=/opt/dynatrace)

    • OTEL (AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler)

  • If you are using the Dynatrace or OTEL handlers, the SL layer will automatically detect this and work with it.

  • When you do not want the SeaLights Lambda layer to run the OTEL layer, you must explicitly disable it with DISABLE_OTEL_HANDLER=true

  • When you do not want the SeaLights Lambda layer to run the Dynatrace layer, you must explicitly disable it with DISABLE_DYNATRACE=true (see the example below)
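
For example, to keep the SeaLights layer from chaining to the OTEL or Dynatrace handlers, the flags can be set wherever the function's environment variables are defined (a sketch):

environment:
  DISABLE_OTEL_HANDLER: "true"   # SeaLights will not invoke the OTEL handler
  DISABLE_DYNATRACE: "true"      # SeaLights will not invoke the Dynatrace handler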