General
New agent implementation to support FaaS on AWS Lambda, based on the Node.js V8 engine.
Support Model
Prerequisites
AWS Lambda functions are small, short-lived pieces of code that are invoked by calling an HTTP endpoint.
To support coverage monitoring by the Sealights agent, the following prerequisites apply:
Node.js and npm - tested on v16.20.2 and above
A Node.js runtime supported by AWS Lambda - see https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html
An additional step in the pipeline sequence to configure the Lambda support
Changes to the AWS deployment manifest (see below for a full end-to-end example)
How Sealights Lambda Support Works
Support for AWS Lambda functions is handled by an internal Lambda layer (sealights_layer) that is installed during the pipeline steps (more on that step below) and intercepts the original Lambda handler.
Here is the flow when the Lambda function is invoked:
Step 1 - Execution of setup code
Within the sealights_layer code, there's a setup file that runs during the initialization of the Lambda function. The setup file operates as follows:
#!/bin/bash
# Copyright Sealights.io, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
# This script is designed for use in an AWS Lambda environment. It enables coverage collection using the native V8 engine coverage.

# Store command line arguments in an array
args=("$@")

# Get the original handler name (if set)
orig_name=${_HANDLER:-}

# Set the Sealights wrapper as the handler for the Lambda function
export _HANDLER='/opt/wrapper.handleExecution'

# Export the original handler name as an environment variable - we use it in the Sealights wrapper to locate original lambda function
export ORIG_NAME=$orig_name

export NODE_V8_COVERAGE=/tmp

# Run the Lambda function with the original runtime
"${args[@]}"
This script intercepts the original Lambda handler name and replaces it with the Sealights Lambda handler.
Step 2 - Invoking the Sealights Lambda Handler
Once the setup and initialization are complete, the AWS backend calls the Sealights Lambda handler, which then loads and begins processing the request.
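For illustration, here is a minimal sketch of how such a wrapper might locate and load the original handler from the ORIG_NAME variable exported by the setup script. Apart from ORIG_NAME and the standard LAMBDA_TASK_ROOT variable, the names below are hypothetical and are not the actual Sealights wrapper implementation.

// Hypothetical sketch - not the actual Sealights wrapper code.
const path = require('path');

function loadOriginalHandler() {
  // ORIG_NAME holds the original handler string, e.g. "src/test-lambda-1/index.handler"
  const origName = process.env.ORIG_NAME;
  const lastDot = origName.lastIndexOf('.');
  const modulePath = origName.slice(0, lastDot);   // module file, without extension
  const exportName = origName.slice(lastDot + 1);  // exported function name
  // LAMBDA_TASK_ROOT points at the deployed function code (usually /var/task)
  const moduleFile = path.join(process.env.LAMBDA_TASK_ROOT || '/var/task', modulePath);
  return require(moduleFile)[exportName];
}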
Step 3 - Initiating Coverage
After loading the configuration, the code initiates coverage monitoring and saves all coverage data to a temporary file.
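The setup script already enables native V8 coverage by exporting NODE_V8_COVERAGE=/tmp. As a rough, illustrative sketch only (not the layer's actual code), starting coverage in-process can be done with Node's built-in inspector API:

// Illustrative only - precise coverage via Node's built-in inspector protocol.
const inspector = require('node:inspector');

const session = new inspector.Session();
session.connect();

function startCoverage() {
  session.post('Profiler.enable');
  session.post('Profiler.startPreciseCoverage', { callCount: true, detailed: true });
}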
Step 4 - Invoking the Original Lambda Function
Once coverage monitoring has started, the code invokes the original Lambda function and retrieves its response.
Step 5 - Terminating Coverage
After the original lambda function has completed and provided a response, coverage monitoring is halted. The data is then processed into a Footprint data JSON, making it ready for transmission to Sealights.
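Continuing the illustrative inspector session from the sketch under Step 3, stopping coverage and shaping the raw V8 data could look roughly like this. The footprint structure shown here is a simplified placeholder, not the real Sealights footprint format.

// Illustrative only - uses the same `session` started in the Step 3 sketch.
function stopCoverageAndBuildFootprints(callback) {
  session.post('Profiler.takePreciseCoverage', (err, data) => {
    session.post('Profiler.stopPreciseCoverage');
    if (err) return callback(err);
    // Keep only coverage entries that belong to the function's own source files
    const footprints = data.result
      .filter((script) => script.url.startsWith('file://') && !script.url.includes('node_modules'))
      .map((script) => ({ file: script.url, functions: script.functions }));
    callback(null, footprints);
  });
}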
Step 6 - Transmitting Footprints to Sealights
At this point, a brief HTTP POST request is made to the Sealights backend, sending the footprint model.
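As an equally rough sketch, the POST itself could be done with Node's built-in https module. The endpoint URL and payload shape below are placeholders; only SL_TOKEN comes from the real configuration.

// Illustrative only - the backend URL and payload shape are placeholders.
const https = require('https');

function sendFootprints(footprints, callback) {
  const body = JSON.stringify(footprints);
  const req = https.request('https://sealights-backend.example/api/footprints', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.SL_TOKEN}`,
      'Content-Length': Buffer.byteLength(body),
    },
  }, (res) => {
    res.resume();                               // drain the response body
    res.on('end', () => callback(null, res.statusCode));
  });
  req.on('error', callback);
  req.end(body);
}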
Step 7 - Returning the Response
Following the communication with the backend, the original lambda handler's response is relayed back to the AWS backend.
Configuration
To add Sealights to a given FaaS, you only need to do one thing: change the deployment manifest to include the Sealights Lambda layer.
Collector changes
The collector needs to be configured to support Node.js Lambda calls. This is done by adding the following flags under the collectors->properties section:
collectors:
  ...
  properties:
    ...
    enableNYCCollector: true
    nycCollectorUploadInterval: 60
Deployment Manifest Changes
There are two main changes that need to be made to the deployment manifest:
Adding the Sealights Lambda layer - contains the code of the Sealights Lambda support
Adding a reference to the Sealights Lambda layer to every Lambda function definition
Example:
your-api:
  handler: ./src/test-lambda-1/index.handler
  events:
    - httpApi:
        path: /sealights
        method: get
  # this is all you need to add
  layers:
    - arn:aws:lambda:eu-west-1:159616352881:layer:sl-nodejs-layer:44
  # end of what's needed
The layer can also be defined on a global level for all functions:
service: my-service
provider:
  name: aws
  runtime: nodejs14.x
  layers:
    - arn:aws:lambda:eu-west-1:159616352881:layer:sl-nodejs-layer:44
functions:
Important Notes:
The only needed part is the Lambda layer ARN (in this case arn:aws:lambda:eu-west-1:159616352881:layer:sl-nodejs-layer:44).
In order to make the layer work in a testing environment, you should set the following environment variables:
AWS_LAMBDA_EXEC_WRAPPER - should be /opt/sealights-extension. If you don't set this, then SL will not impact the running function.
SL_TOKEN - the same token as used today when setting up an instrumented application.
SL_BUILD_SESSION_ID - the same build session ID as used today when setting up an instrumented application.
SL_PROJECT_ROOT - should point to where your source code resides, in our example ./src.
Additional Environment Variables:
In addition to the mandatory 'AWS_LAMBDA_EXEC_WRAPPER: /opt/setup' environment variable, there are more environment variables that should be defined, as mentioned above:
Environment Variable Name | Description | Type
---|---|---
SL_TOKEN | Agent token needed for authentication | string
SL_PROJECT_ROOT | Determines the root directory of the project; default is the current working directory. | string
SL_BUILD_SESSION_ID | Sets the build session ID name | string
SL_LAB_ID | Sets the lab ID value | string
Code Example
Code repository
https://github.com/Sealights/SL.OnPremise.Lambda.Layers/tree/master/node/example
This is a very small and simple Serverless project that demonstrates usage of the Sealights AWS Lambda Layer (runtime).
Setup
Here are the steps to add Sealights Lambda support.
Step 0 - Config and scanning
In serverless.yml, replace the default layer ARN with the desired one.
Replace the environment variables with your values using your preferred method (the template YAML, the AWS console, etc.):
process.env.SL_TOKEN = {yourToken}
process.env.SL_BUILD_SESSION_ID = {yourBsid}
process.env.SL_PROJECT_ROOT = {yourProjectRoot}
process.env.AWS_LAMBDA_EXEC_WRAPPER = /opt/sealights-extension
Configure Sealights, making sure to match your workspace path with the PROJECT_ROOT:
npx slnodejs config --tokenfile sltoken.txt --appName "AWS Lambda" --branch "master" --build 3
npx slnodejs scan --workspacepath ./src --tokenfile sltoken.txt --buildsessionidfile buildSessionId --scm none --awsConfigure
npx slnodejs start --tokenfile sltoken.txt --buildsessionidfile buildSessionId --teststage "Sealights on Lambda"
After this, continue with the deploy and testing steps below.
Step 1 - Amending the deploy manifest
Here is the original deploy manifest
service: aws-node-http-api-project
provider:
  name: aws
  vpc:
    securityGroupIds:
      - sg-965602ef
    subnetIds:
      - subnet-028c8368058b24e21
  region: eu-west-1
  runtime: nodejs16.x
functions:
  api-normal:
    handler: ./src/test-lambda-1/index.handler
    events:
      - httpApi:
          path: /default
          method: get
resources:
  Resources:
    MyLambdaExecutionRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: MyLambdaExecutionRole
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service: lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: LambdaVPCAccessPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogStream
                    - logs:CreateLogGroup
                    - logs:TagResource
                  Resource:
                    - "arn:aws:logs:eu-west-1:534369319675:log-group:/aws/lambda/aws-node-http-api-project-dev*:*"
                - Effect: Allow
                  Action:
                    - logs:PutLogEvents
                  Resource:
                    - "arn:aws:logs:eu-west-1:534369319675:log-group:/aws/lambda/aws-node-http-api-project-dev*:*"
                - Effect: Allow
                  Action:
                    - lambda:InvokeFunction
                  Resource: "*"
                - Effect: Allow
                  Action:
                    - ec2:CreateNetworkInterface
                  Resource: "*"
We will add the Sealights layer and make changes to the function settings.
Here is the amended deployment manifest:
service: aws-node-http-api-project
provider:
  name: aws
  vpc:
    securityGroupIds:
      - sg-965602ef
    subnetIds:
      - subnet-028c8368058b24e21
  region: eu-west-1
  runtime: nodejs16.x
functions:
  api-normal:
    handler: ./src/test-lambda-1/index.handler
    events:
      - httpApi:
          path: /default
          method: get
    # START - SL addition
    layers:
      - arn:aws:lambda:eu-west-1:159616352881:layer:sl-nodejs-layer:44
    # END - SL addition
resources:
  Resources:
    MyLambdaExecutionRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: MyLambdaExecutionRole
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service: lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: LambdaVPCAccessPolicy
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogStream
                    - logs:CreateLogGroup
                    - logs:TagResource
                  Resource:
                    - "arn:aws:logs:eu-west-1:534369319675:log-group:/aws/lambda/aws-node-http-api-project-dev*:*"
                - Effect: Allow
                  Action:
                    - logs:PutLogEvents
                  Resource:
                    - "arn:aws:logs:eu-west-1:534369319675:log-group:/aws/lambda/aws-node-http-api-project-dev*:*"
                - Effect: Allow
                  Action:
                    - lambda:InvokeFunction
                  Resource: "*"
                - Effect: Allow
                  Action:
                    - ec2:CreateNetworkInterface
                  Resource: "*"
Special Considerations
Support for Additional Layers:
Currently, using other layers with the Sealights layer is supported only for:
Dynatrace (AWS_LAMBDA_EXEC_WRAPPER=/opt/dynatrace)
OTEL (AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler)
If you are using the Dynatrace or OTEL handlers, the SL layer will automatically detect this and work with it.
When you do not want the Sealights Lambda layer to run the OTEL or Dynatrace layer, you must explicitly disable it with DISABLE_OTEL_HANDLER=true.
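For example, following the process.env style used elsewhere on this page, running the OTEL wrapper alongside the Sealights layer, or opting out of the chaining, might look like this. This is a sketch of the environment values only, not a complete configuration.

// Run the OTEL wrapper; the Sealights layer detects it and chains to it.
process.env.AWS_LAMBDA_EXEC_WRAPPER = '/opt/otel-handler';
// Or, to stop the Sealights layer from invoking the OTEL handler:
process.env.DISABLE_OTEL_HANDLER = 'true';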