How to Optimize AWS Lambda Performance with Power Tuning

Serverless applications can be extremely performant, thanks to the ease of parallelization and concurrency. While the Lambda service manages scaling automatically, you can optimize the individual Lambda functions used in your application by setting the proper memory to save money and improve performance.

Memory is the principal lever available to Lambda developers for controlling the performance of a function. You can configure the amount of memory allocated to a Lambda function, between 128MB and 10240MB.


Why power tune your Lambda functions in the first place?

Whenever you create a Lambda function in AWS, its memory defaults to 128MB (the lowest setting). While this is a great start for simple tasks and keeps your costs down, developers often need to update this value based on the type of task the function is performing. Simply tweaking this value can drastically improve your Lambda's performance with little to no effect on the cost (sometimes it's even cheaper).

How can you increase your Lambda CPU?

It's memory again: the amount of memory also determines the amount of virtual CPU available to your function. Adding more memory proportionally increases the amount of CPU, increasing the overall computational power available. If a function is CPU-, network- or memory-bound, then changing the memory setting can dramatically improve its performance.



Since the Lambda service charges for the total amount of gigabyte-seconds consumed by a function, increasing the memory has an impact on overall cost if the total duration stays constant. Gigabyte-seconds are the product of total memory (in gigabytes) and duration (in seconds). However, in many cases, increasing the memory available causes a decrease in the duration. As a result, the overall cost increase may be negligible or may even decrease.
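The arithmetic is worth seeing once. Here's a quick sketch of the gigabyte-second calculation; the per-GB-second price is an assumption for illustration (check the current Lambda pricing page for your region):

```javascript
// Illustrative GB-second arithmetic; the price below is an assumption,
// not an authoritative figure.
const PRICE_PER_GB_SECOND = 0.0000166667;

function invocationCost(memoryMB, durationSec) {
    const gbSeconds = (memoryMB / 1024) * durationSec;
    return gbSeconds * PRICE_PER_GB_SECOND;
}

// If doubling memory from 128MB to 256MB halves the duration,
// the cost per invocation is unchanged: 0.25 GB-s either way.
const costA = invocationCost(128, 2.0);
const costB = invocationCost(256, 1.0);
console.log(costA === costB); // true
```

If the duration drops by more than half, the higher-memory configuration is actually cheaper, which is exactly the case the tuning tool helps you find.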

And the good news… developers are in control of the memory!

Power Tuning

While you can manually run tests on functions by selecting different memory allocations and measuring the time taken to complete, the AWS Lambda Power Tuning tool allows you to automate the process.

This tool uses AWS Step Functions to run multiple concurrent versions of a Lambda function at different memory allocations and measure the performance.


The tool is already available in the AWS Serverless Application Repository; you can find it here. With a single click on the "Deploy" button, it creates the resources necessary to run the tool (note that it creates custom IAM roles as well as a Step Functions state machine and Lambda functions).



The input function needs to run in your AWS account, performing live HTTP calls and SDK interactions, to measure likely performance in a live production scenario. For the sake of this demo, I'll create a simple Lambda function using Node.js that computes prime numbers up to a maximum number we define as an input (the maxNumber attribute).

exports.handler = async (event) => {
    // Sieve of Eratosthenes: collect all primes up to event.maxNumber
    let sieve = [], i, j, primes = [];
    for (i = 2; i <= event.maxNumber; ++i) {
        if (!sieve[i]) {
            primes.push(i);
            for (j = i << 1; j <= event.maxNumber; j += i) {
                sieve[j] = true;
            }
        }
    }
    const response = {
        statusCode: 200,
        body: JSON.stringify(primes),
    };
    return response;
};

Let's try invoking this function using a test event with a max number of 10000:
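The console test event is simply { "maxNumber": 10000 }. If you want to sanity-check the handler logic locally before deploying, a quick sketch (the handler body is inlined here so the snippet is self-contained):

```javascript
// Local sanity check of the prime-sieve handler; in the real setup this
// logic lives in your Lambda deployment package, not a local script.
const handler = async (event) => {
    let sieve = [], i, j, primes = [];
    for (i = 2; i <= event.maxNumber; ++i) {
        if (!sieve[i]) {
            primes.push(i);
            for (j = i << 1; j <= event.maxNumber; j += i) {
                sieve[j] = true;
            }
        }
    }
    return { statusCode: 200, body: JSON.stringify(primes) };
};

handler({ maxNumber: 10000 }).then((res) => {
    const primes = JSON.parse(res.body);
    console.log(primes.length); // 1229 primes up to 10000
});
```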


Now you can see in the summary of the Execution results:

  • Duration, the actual duration of the function execution
  • Billed Duration, the actual duration rounded up to the nearest 1ms (awesome that AWS recently changed this from 100ms down to 1ms)
  • Memory Size, which we left at the default setting of 128MB
  • Max Memory Used, the peak memory the function actually consumed
  • Init Duration, which appears when a cold start occurs, but no worries, you are not charged for that
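The rounding behind Billed Duration can be sketched in one line:

```javascript
// Billed duration: actual duration rounded up to the nearest 1ms.
const billedDurationMs = (durationMs) => Math.ceil(durationMs);

console.log(billedDurationMs(3.18)); // 4
console.log(billedDurationMs(52.0)); // 52
```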


You could spend a lot of time trying different inputs for your function against different memory configurations until you figure out the best memory setting cost-wise and speed-wise... but that's why we have the tool here!

Now let's copy our Lambda function ARN and go to AWS Step Functions. You will find a state machine already created, prefixed with powerTuningStateMachine. Select it, click the "Start execution" button, and pass the required attributes to the state machine as follows:

{
  "lambdaARN": "<your-lambda-function-arn>",
  "powerValues": [128, 256, 512, 1024],
  "num": 50,
  "payload": "[{\"payload\":{\"maxNumber\":1000},\"weight\":50},{\"payload\":{\"maxNumber\":10000},\"weight\":30},{\"payload\":{\"maxNumber\":100000},\"weight\":20}]",
  "parallelInvocation": true,
  "strategy": "speed"
}

As you can see:

  • powerValues is the list of power/memory values to be tested
  • num is the number of invocations for each power configuration
  • payload is the static payload that will be used for every invocation (weighted payloads can be used in scenarios where the payload structure, and the corresponding performance/speed, can vary a lot in production and you'd like to include multiple payloads in the tuning process)
  • parallelInvocation, if true, means all the invocations will be executed in parallel
  • strategy can be "cost", "speed" or "balanced", depending on what we want to optimize for
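Note that with weighted payloads, the payload attribute is a JSON string, which is easy to get wrong by hand. A small sketch that builds the execution input programmatically (the helper name is illustrative, not part of the tool):

```javascript
// Illustrative helper: build the power-tuning execution input so the
// weighted "payload" attribute is serialized as a JSON string correctly.
function buildTuningInput(lambdaARN, weightedPayloads) {
    return {
        lambdaARN,
        powerValues: [128, 256, 512, 1024],
        num: 50,
        // The tool expects a JSON *string* here when using weighted payloads.
        payload: JSON.stringify(weightedPayloads),
        parallelInvocation: true,
        strategy: "speed",
    };
}

const input = buildTuningInput("<your-lambda-function-arn>", [
    { payload: { maxNumber: 1000 }, weight: 50 },
    { payload: { maxNumber: 10000 }, weight: 30 },
    { payload: { maxNumber: 100000 }, weight: 20 },
]);
console.log(JSON.stringify(input));
```

You can paste the printed JSON straight into the "Start execution" dialog.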

After the execution succeeds, you will see in the "Execution output" section the result, with a visualization attribute linking to the graph:


Now you can confidently decide on the best memory size for your case and set it on your Lambda function.


That was it for a simple Lambda function, but generally, CPU-bound Lambda functions see the most benefit when memory increases, whereas network-bound functions see the least. This is because more memory provides greater computational capability, but it does not impact the response time of downstream services in network calls. Running the tool on your functions provides insight into how your code performs at different memory allocations, allowing you to make better decisions about how to configure your functions.

Check out more cases and a detailed explanation in the tool's docs.

That's it!

I'd love for you to leave me feedback below in the comments!
