Dashboard integration significantly increases Lambda duration #513

Open
vbichkovsky opened this issue Oct 28, 2020 · 8 comments
Labels: bug

@vbichkovsky

Enabling the Dashboard integration (by setting the org and app properties in serverless.yml) while using webpack packaging with aws-sdk as a runtime dependency significantly increases Lambda duration when an error occurs in the handler.

serverless.yml:

```yaml
service: issue
frameworkVersion: '2'

org: your-org-here
app: sls-issue

plugins:
  - serverless-webpack

provider:
  name: aws
  runtime: nodejs12.x
  stage: prod
  region: eu-west-1
  memorySize: 256
  timeout: 10

functions:
  hello:
    handler: index.hello
    events:
      - http:
          path: /
          method: get
```
I've prepared a sample project in a public repo: https://github.com/vbichkovsky/sls-issue. To reproduce the problem, the app has to be deployed and the API Gateway endpoint called.

Installed version

Framework Core: 2.8.0 (local)
Plugin: 4.1.1
SDK: 2.3.2
Components: 3.2.7

@vbichkovsky vbichkovsky changed the title Dashboard integration significantly introduces Lambda duration Dashboard integration significantly increases Lambda duration Oct 28, 2020
@medikoo
Contributor

medikoo commented Oct 28, 2020

@vbichkovsky Is the issue also present when the webpack plugin is not used?

@medikoo medikoo transferred this issue from serverless/serverless Oct 28, 2020
@medikoo medikoo added the question Further information is requested label Oct 28, 2020
@vbichkovsky
Author

@medikoo yes, it's present, just to a lesser degree. I think it may have something to do with the size of the payload sent to the Serverless Dashboard back-end: it's much bigger when everything is bundled together (as with webpack). It looks like this data is sent synchronously, hence increasing the duration of the Lambda.

I just ran the version deployed without using webpack, here are the stats:

  • with sls dashboard enabled: duration from a cold start - around 250ms, subsequent calls - around 40ms each
  • without sls dashboard: cold start - around 20ms, subsequent calls - around 2 ms

I guess it can easily be overlooked under those conditions, but with a big webpack package (I included the whole aws-sdk in my example) the payload in the CloudWatch logs is huge, and it takes more than 2 seconds (to prepare/log/send it?).

The payload I'm referring to looks like this in CloudWatch logs:

```
INFO	SERVERLESS_ENTERPRISE {"c":true,"b":"H4sIAAAAAAAAA7VYC1PjOBL....
```

@medikoo medikoo added bug Something isn't working and removed question Further information is requested labels Oct 29, 2020
@medikoo
Contributor

medikoo commented Oct 29, 2020

@vbichkovsky thanks for the details. We will check what could cause that and get back to you.

@astuyve
Contributor

astuyve commented Oct 29, 2020

@vbichkovsky Data isn't actually sent to the back-end via HTTP request. The dashboard integration works by ingesting logs asynchronously from CloudWatch. So the only operation performed synchronously is preparing that payload to write to the log file.

There are a few things we can do to help troubleshoot this. First, since the payload in CloudWatch is huge, please try disabling gzip compression and posting the entire logged result.
You can do that by adding the following custom block in the serverless.yml:

```yaml
custom:
  enterprise:
    compressLogs: false
```

That should result in the full JSON payload being written out. Please then share it here.

I suspect that part of this issue is stackman, a dependency used to capture stack traces from the node environment.

@vbichkovsky
Author

There you go, here are uncompressed logs for one invocation, with webpack plugin enabled.
cloudwatch-logs.csv.log

@vbichkovsky
Author

Here is another one, without the webpack plugin.
cloudwatch-logs.no-webpack.csv.log

@astuyve
Contributor

astuyve commented Oct 29, 2020

Thanks @vbichkovsky! As I'm sure you've noticed, this appears to be an interesting interaction between stackman and webpack, where the webpacked version is 1000x larger than the non-webpacked version:

[screenshot comparing the two payload sizes]

Generating an 8MB log file is likely quite time-consuming (and also expensive, both in CloudWatch costs at $0.50/GB ingested and in the compute time of the Lambda invocation itself).

Just to confirm - if there is no error in the invocation, is lambda duration impacted at all?

@vbichkovsky
Author

No, it's only impacted when there is an error.

I think it's stackman, indeed: it tries to provide context about the error and assumes that runnable code consists of multiple relatively short lines, whereas with webpack (and other bundlers) there is just one huge line of code with all the modules bundled together.
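A rough illustration of that effect (a toy extractor, not stackman's actual code): grabbing "a few lines of context" around the failing line is cheap for normal source, but returns the entire file when the bundle is a single line:

```javascript
// context-lines.js — toy version of how a stack-trace helper gathers
// source context around a failing line (stackman's real logic differs).
function contextAround(source, lineNo, radius = 2) {
  const lines = source.split('\n');
  const start = Math.max(0, lineNo - 1 - radius);
  return lines.slice(start, lineNo + radius).join('\n');
}

const multiLine = 'a\nb\nc\nd\ne';     // normal source: 5 short lines
const bundled = 'a;b;'.repeat(1000);   // webpack-style single-line bundle

console.log(contextAround(multiLine, 3).length); // a handful of characters
console.log(contextAround(bundled, 1).length);   // the whole bundle
```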
