Serverless notes
Deploying AWS Lambda
In this post we will walk through the steps needed to create our first Lambda function and deploy it using the Serverless Framework.
Below is the minimum serverless.yml config needed to deploy a simple Lambda function.
Note: To use Serverless we need to install the Serverless CLI and the AWS CLI, and configure the AWS access key and secret key in the ~/.aws/credentials file.
serverless.yml
service: sls-test
provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
  stage: dev # dev is the default if not specified
functions:
  helloLambda:
    handler: index.handler
index.js
async function hello(event, context) {
  return {
    statusCode: 200,
    body: JSON.stringify({ success: true, message: "hello world" }),
  };
}

module.exports.handler = hello;
Here the hello function is exported as handler from index.js.
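Before deploying, we can sanity-check the handler locally by invoking it with a dummy event. This is just a quick local check, not part of the Serverless workflow:

```javascript
// The same handler as in index.js above, invoked locally with an empty event
async function hello(event, context) {
  return {
    statusCode: 200,
    body: JSON.stringify({ success: true, message: "hello world" }),
  };
}

hello({}, {}).then((res) => {
  console.log(res.statusCode); // 200
  console.log(JSON.parse(res.body).message); // hello world
});
```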
We can deploy the code using the command sls deploy --verbose.
This will create a Lambda function in AWS named sls-test-dev-helloLambda. The name is formed from the service name + stage + function name (as given in the yml file).
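The naming convention can be sketched as a simple string template (the values here mirror the config above):

```javascript
// Deployed Lambda name = service + stage + function name, joined with hyphens
const service = "sls-test";         // from service: in serverless.yml
const stage = "dev";                // from provider.stage
const functionName = "helloLambda"; // key under functions:

const lambdaName = `${service}-${stage}-${functionName}`;
console.log(lambdaName); // sls-test-dev-helloLambda
```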
Testing
We can test the function using the sls invoke command:
sls invoke -f helloLambda
This will execute the function and print the output in the command line. We can also go to the AWS console and test the function there.
sls invoke -f helloLambda -l
will additionally show the CloudWatch logs produced during the execution.
sls logs -f helloLambda
will give us the complete CloudWatch logs for this function.
sls logs -f helloLambda --startTime 1m
will give the logs from the last 1 minute. Other units such as hours also work (e.g. --startTime 3h).
Connecting API Gateway
We can attach API Gateway as a trigger for the Lambda, which gives us an API URL through which the function can be called.
API Gateway lets you deploy HTTP APIs. It comes in two versions:
- v1, also called REST API
- v2, also called HTTP API, which is faster and cheaper than v1
serverless.yml using v1
service: sls-test
provider:
  name: aws
  runtime: nodejs12.x
  region: ap-southeast-1
  stage: dev # dev is the default if not specified
functions:
  helloLambda:
    handler: index.handler
    events:
      - http:
          method: get
          path: /hello
This will create a REST endpoint for the GET method using /hello as the route. There is also another event type called httpApi which creates an HTTP (v2) endpoint; an example follows.
We can keep the same v1-style syntax by replacing http with httpApi and leaving method and path as children, or we can use the short syntax below.
functions:
  helloLambda:
    handler: index.handler
    events:
      - httpApi: 'GET /hello'
How to include dependencies
We often use npm modules as dependencies while writing code. AWS Lambda can only execute the code we give it; we cannot perform npm install inside the Lambda. So the solution is to either
- upload the full project (including the node_modules folder), or
- compile the code with webpack or parcel and upload the compiled version of the code.
Serverless helps with both of these. Let's see an example.
Example 1
We are going to include the axios module as an npm dependency, use it to fetch data from a public endpoint, and return the result as the Lambda output.
Here is the same index.js code (in CommonJS format) including the axios module:
const axios = require("axios");

async function hello(event, context) {
  const { data } = await axios.get("https://jsonplaceholder.typicode.com/todos/1");
  console.log(data);
  return {
    statusCode: 200,
    body: JSON.stringify({ success: true, data }),
  };
}

module.exports.handler = hello;
Then we just need to run sls deploy --verbose to update the function in AWS. Now if you go to the AWS console, you can see that the node_modules folder is also present (to support the axios package). It will work just fine.
If we want to do this without Serverless, we can zip the code and upload it into Lambda using the AWS CLI. From the AWS CLI documentation:
The following update-function-code example replaces the code of the unpublished ($LATEST) version of the my-function function with the contents of the specified zip file.
aws lambda update-function-code \
--function-name my-function \
--zip-file fileb://my-function.zip
Example 2
Here we are going to use the power of webpack, which will compile all our code into a single js file that is then uploaded to AWS Lambda.
Configuring webpack on our own would require a lot of config, so instead we are going to use a Serverless plugin called serverless-bundle.
Read more about it here
To use this plugin we first install it as a dev dependency:
npm install --save-dev serverless-bundle
Then modify serverless.yml to include one more section, like below.
plugins:
  - serverless-bundle
With this one change we don't need any further config; just deploy the function and see the magic.
After running sls deploy,
you can see that the Lambda function in AWS now contains only one js file that is unreadable to us, but the Node environment can read it and deliver exactly what we asked for.
Since we are using webpack now, we can extend our code to use the latest ES6+ syntax, like below.
index.js
import axios from "axios";

async function hello(event, context) {
  const { data } = await axios.get("https://jsonplaceholder.typicode.com/todos/1");
  console.log(data);
  return {
    statusCode: 200,
    body: JSON.stringify({ success: true, data }),
  };
}

export const handler = hello;
Create DynamoDB using Serverless
Serverless uses CloudFormation under the hood, so we can do pretty much everything CloudFormation can do. That means we can create AWS resources on the fly using Serverless.
How Serverless works
The framework reads the serverless.yml configuration, uploads your files to S3, and from there runs a CloudFormation stack. Once the deployment is completed you can go to S3 and look at the files Serverless used to invoke CloudFormation.
If you go to CloudFormation you will be able to see the stack and how many resources were created by it.
All the AWS resources we want to create go under the resources section of the serverless.yml file. Here is an example of creating a DynamoDB table.
Keep in mind that resources (lowercase) is the Serverless section, and everything under Resources is pure CloudFormation syntax.
resources:
  Resources:
    tasksTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.tableName}
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
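Note that ${self:custom.tableName} assumes a custom section that defines the table name. A minimal sketch of that section (the stage-suffixed name is one common convention):

```yaml
# Assumed custom section that the TableName reference above resolves against
custom:
  tableName: TasksTable-${self:provider.stage}
```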
How to provide IAM roles for Lambda to access DynamoDB
If you have worked with AWS you know that, by default, resources created in AWS do not have permissions on other AWS resources. So if we create a Lambda function and we want it to connect to DynamoDB and get data, the Lambda must have permission to do that.
Normally a Lambda is created with a role. The role can contain policies that give the Lambda permission to use other AWS resources. If we create the Lambda function in the AWS console, we manually create these policies and attach them to the Lambda. With Serverless we need a way for the Lambda to be created in AWS along with the required permissions.
This can be achieved by listing IAM statements in the iamRoleStatements section.
The section takes an array of statements, each consisting of Effect, Action and Resource details.
The example below gives Allow permission for all dynamodb:* actions (APIs like get, put etc.) on two resources. Resources are specified using ARNs. The second one is a hard-coded value, while the first is built from configuration; see below for how that works.
provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:*
      Resource:
        - arn:aws:dynamodb:${self:provider.region}:${self:custom.accountId}:table/${self:custom.tableName}
        - arn:aws:dynamodb:ap-southeast-1:123456789012:table/users

custom:
  accountId: ${aws:accountId}
  tableName: TasksTable-${self:provider.stage}
Other Important serverless.yml config
Custom section
The custom section is where we can provide our own key-value pairs to be used as configuration within the serverless.yml file.
We refer to custom values using the yml syntax below; self refers to the current file, followed by the path to the value:
${self:custom.tableName}
Environment variables
We often want the program to take environment-specific values from environment variables. In Node these can be accessed in code via process.env.ENV_VAR_NAME.
In the AWS console, under each Lambda, we can manually enter environment values as key-value pairs. In Serverless the same can be done using the provider.environment section.
We can either give the value directly or refer to other values (like the custom section):
provider:
  environment:
    TASK_TABLE_NAME: ${self:custom.tableName}
    KEY: SOME_VALUE

custom:
  tableName: TasksTable-${self:provider.stage}
Command line options
When running the sls deploy command we can specify options at run time using the -- prefix. E.g. if we want to set the stage to sit or prod, we can do so with
sls deploy --verbose --stage prod
We can access this value in serverless.yml using the syntax below. opt refers to the command line options. The example below takes the value from opt:stage and defaults to 'dev' if it is not provided:
provider:
  stage: ${opt:stage, 'dev'}
Splitting the serverless.yml
If your serverless.yml file is getting bigger, it is possible to split it, i.e. create a separate yml file for some section and reference that file inside serverless.yml.
Let us take the example below. We refer to the hello function in serverless.yml based on the helloFunction section inside hello.yml.
serverless.yml
functions:
  hello: ${file(functions/hello.yml):helloFunction}
  welcome:
    handler: handler.welcome
    events:
      - http:
          method: GET
          path: /welcome
functions/hello.yml
This file should be placed inside a functions folder, which sits at the same level as serverless.yml.
helloFunction:
  handler: handler.hello
  events:
    - http:
        method: GET
        path: /hello