
This may sound like a superhero character or a Hollywood movie brawl scene, with the good-hearted hero felling ten baddies with a single swing of his arm. I only wish the choice were as clear-cut as those fairy tales depict. The reality is far from it: this is one of the more challenging decisions that most engineering teams implementing Lambda functions go through.
Should we implement a single big Lambda function, or break it down into multiple smaller Lambda functions? Trust me, this isn't really a decisive good vs evil scenario; it is something tangled between perfectionism on one side and pragmatism on the other. Or, to put it better, it comes down to Purity vs Practicality.
To set the context, here is an interesting situation:
Young and enthusiastic software developer Joe Scriptor, often nicknamed Mr Lambda, has been tasked with implementing a user-facing, feature-rich Lambda function. This Lambda function is part of a synchronous call sequence and is expected to be sharp and sleek in its performance.
Working in an agile and iterative development environment, Joe carries out his analysis and breaks the feature down into a number of independent smaller functions. Having succeeded in that first phase of the work, Joe now focuses on implementation.
For Joe, an admirer of the SOLID principles (as every software engineer should be), the first principle, Single Responsibility, passes through his mind. The way he identified the smaller functions matches that very principle perfectly. With a smile, he thinks he has found the perfect solution and starts churning out Lambdas.
There is nothing unusual, wrong or concerning in the above scenario. Rather, it sounds positive and encouraging, the kind of reference any aspiring software engineer could follow. Given the context (and the build-up!), let us now look deeper into the feature that Joe was tasked and trusted with.
The feature
The functionality was one of many that form part of an online shopping (eCommerce) application: a front-line service to add a product to the online shopping basket. As we can imagine, adding items to the basket is probably the most commonly used service and the one customers interact with most closely. The NPS (Net Promoter Score) of the site often links directly to how this particular functionality performs, hence this feature is required to be performant at all times, without any observable lag.
Using the service
When we think about the implementation of this service, in all likelihood there is going to be an API endpoint that the front-end applications will invoke; no surprises there. This could well be implemented as a POST method that accepts the shopping basket (cart) identifier, the product code and the country code as part of the request. In addition to these, there could also be an authentication token, an API key and other header parameters for various purposes. No big surprises here either. After all, this is a simple, routine API configuration, as pictured in this abstract view below.

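To make that request a little more concrete, here is a minimal sketch of what a front-end call might look like. The field names, endpoint path and header values are illustrative assumptions, not the actual contract.

```typescript
// Illustrative request body for the "add product to basket" endpoint.
// Field names (cartId, productCode, countryCode) are assumptions for this sketch.
interface AddProductRequest {
  cartId: string;      // shopping basket (cart) identifier
  productCode: string; // product being added
  countryCode: string; // market, used for country-specific rules such as tax
}

// Hypothetical call from a front-end application. The endpoint path, the
// authentication token and the API key header are placeholders.
async function addProductToBasket(request: AddProductRequest): Promise<void> {
  const response = await fetch("https://api.example.com/basket/products", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer <auth token>",
      "x-api-key": "<api key>",
    },
    body: JSON.stringify(request),
  });
  if (!response.ok) {
    throw new Error(`Add product failed with status ${response.status}`);
  }
}
```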
As you would expect, there is a Lambda behind the API, acting as the proxy that implements the business logic related to adding products to the basket. In the setup above, the basket (cart) and all the associated details belong to an external third-party eCommerce application. The Lambda is responsible for enforcing all the logic specific to that particular business. The eCommerce platform and/or other third-party vendor applications could be on AWS or on other cloud providers, which is beyond the scope of this discussion.
While in operation, the front-end applications invoke the API with the details and receive a response, be it a success or a failure of any nature. The important point to note here is that this is a synchronous (request/response) invocation where the calling client application waits for a response from the API. This places an extra onus on the API, and on the Lambda function implementing it behind the scenes, to be fast and efficient.
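To ground that a little, here is a minimal sketch of what the proxy Lambda behind the API could look like. The field names and the commented-out business steps are assumptions drawn from the description above, not the actual implementation.

```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Minimal sketch of the proxy Lambda behind the API. Because the client waits
// synchronously, everything this handler does sits on the critical path.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const { cartId, productCode, countryCode } = JSON.parse(event.body ?? "{}");

  if (!cartId || !productCode || !countryCode) {
    return {
      statusCode: 400,
      body: JSON.stringify({ message: "cartId, productCode and countryCode are required" }),
    };
  }

  // The business logic would go here: validate the product, check availability,
  // compute tax, then update the cart in the third-party eCommerce platform.
  // const result = await addToCart(cartId, productCode, countryCode); // hypothetical helper

  return {
    statusCode: 200,
    body: JSON.stringify({ cartId, productCode, status: "ADDED" }),
  };
};
```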
Joe’s implementation
Now, let us revisit Joe's story and see how he might have implemented the solution with his 'single responsibility' thinking and his breakdown into separate functions. All these functions contribute to the main goal, which is to perform the required validations before a product can be successfully added to the basket. Here is a depiction of such a solution.

As shown in the above diagram, the 'Add Product' Lambda acts as the entry point to the service and co-ordinates with the other Lambdas, each of which carries out an individual piece of the overall functionality, be it checking product availability, computing the tax value or similar.
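To illustrate what that co-ordination means in code, here is a hedged sketch of the 'Add Product' Lambda invoking two helper Lambdas synchronously via the AWS SDK. The function names and payload shapes are hypothetical; the point is that each RequestResponse invocation adds its own serialisation, network hop and billed duration.

```typescript
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

// Synchronously invoke a helper Lambda and parse its JSON response.
async function invokeSync<T>(functionName: string, payload: unknown): Promise<T> {
  const response = await lambda.send(
    new InvokeCommand({
      FunctionName: functionName,
      InvocationType: "RequestResponse", // the caller blocks until the helper finishes
      Payload: Buffer.from(JSON.stringify(payload)),
    })
  );
  const raw = response.Payload ? Buffer.from(response.Payload).toString() : "{}";
  return JSON.parse(raw) as T;
}

// Hypothetical orchestrating handler; the helper function names are placeholders.
export const handler = async (event: {
  cartId: string;
  productCode: string;
  countryCode: string;
}) => {
  // Each of these calls pays the price of a separate Lambda invocation.
  const availability = await invokeSync("check-product-availability", event);
  const tax = await invokeSync("compute-tax", event);

  // ...combine the results and update the basket in the eCommerce platform...
  return { cartId: event.cartId, availability, tax };
};
```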
This is a perfectly workable solution that will fulfil the purpose as intended. So, what is the problem here and why are we debating as if it is not a viable solution?
Core argument factors
Let us now take a little time to look at some aspects of this multi-Lambda implementation that are often less obvious:
Implementation — takes time and effort, not to mention all the testing involved, unit as well as integration tests.
Deployment — though this may just fall into an existing deployment pipeline, it still requires care whenever the logic or other aspects change.
Maintenance — maintaining the code, versioning, enhancements, etc. Nor can we ignore the attention required to tune and optimise each of those Lambdas across different environments.
Monitoring — more Lambdas mean more CloudWatch log streams and more distributed data to dig into while investigating production issues. If dedicated monitoring and tracing applications are involved, then consider the overhead of extra dashboards, correlations, etc.
Cost — one of the prime factors here. Even if the individual Lambdas complete in a few milliseconds, they are still billed in increments of 100 ms of execution. This will certainly add extra cost, even if that cost is relatively small (a rough back-of-the-envelope sketch follows this list).
Payload — depending on the nature of the domain or application, the size of the payload that needs passing between the Lambdas could have an impact. In a typical eCommerce application, the data structure of a cart, an order or a product in JSON format could be substantial. Though not expected to challenge (at least not that often!) the 6 MB payload size limit for synchronous Lambda calls, this could still have a serious effect on the performance of the API, not to mention the parsing and data extraction needed in each Lambda to carry out its logic.
External service interaction — this involves external service API calls and data store updates, amongst other things. If certain Lambdas need to hit the same service API to fetch data, check status and so on, it may run into API call volume restrictions or throttling issues. If Lambdas need to update the same data object in a data store such as DynamoDB, then atomicity, transaction limitations, etc. need to be taken into consideration.
Performance — the crux of this discussion. Even if each little Lambda completes in just a few milliseconds, and even if there is a possibility of parallelism, the overall execution time would still increase, however small that increase may be. For a customer-facing feature, every extra millisecond of lag makes a negative impact.
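To put rough numbers on the cost and performance points, here is the back-of-the-envelope sketch mentioned above. Every figure in it (work per helper, invocation overhead, memory size, unit price) is an assumption for illustration, not a measurement or an official price.

```typescript
// Back-of-the-envelope illustration only: all numbers are assumptions.
const PRICE_PER_GB_SECOND = 0.0000166667; // assumed on-demand price (USD per GB-second)
const MEMORY_GB = 0.128;                   // a 128 MB function
const BILLING_INCREMENT_MS = 100;          // the billing rounding described above
const INVOCATION_OVERHEAD_MS = 15;         // assumed per-call network/SDK overhead

// Five helper Lambdas, each doing roughly 20 ms of real work.
const helperWorkMs = [20, 20, 20, 20, 20];

const billedMs = (ms: number): number =>
  Math.ceil(ms / BILLING_INCREMENT_MS) * BILLING_INCREMENT_MS;

const costUsd = (ms: number): number =>
  (ms / 1000) * MEMORY_GB * PRICE_PER_GB_SECOND;

// Chained approach: each helper is billed separately and, called in sequence,
// each one also adds invocation overhead to the caller's latency. (This even
// ignores the fact that the orchestrating Lambda is billed while it waits.)
const chainedBilled = helperWorkMs.reduce((sum, ms) => sum + billedMs(ms), 0);
const chainedLatency = helperWorkMs.reduce(
  (sum, ms) => sum + ms + INVOCATION_OVERHEAD_MS,
  0
);

// Single-Lambda approach: the same work runs, and is billed, once.
const totalWorkMs = helperWorkMs.reduce((sum, ms) => sum + ms, 0);
const singleBilled = billedMs(totalWorkMs);

console.log(
  `Chained: ~${chainedLatency} ms latency, ${chainedBilled} ms billed, ~$${costUsd(chainedBilled).toFixed(9)} per request`
);
console.log(
  `Single:  ~${totalWorkMs} ms latency, ${singleBilled} ms billed, ~$${costUsd(singleBilled).toFixed(9)} per request`
);
```

Under these illustrative assumptions, the chained version bills five times the duration of the single Lambda and adds tens of milliseconds to the critical path; the absolute cost per request is tiny, but at the call volume of a popular basket service both figures add up.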
The list above highlights some, if not all, of the issues and the extra overhead involved when having small Lambdas come together to perform one big task. You may feel that I have blown one or two things out of proportion, but that is often the reality we won't realise until it hits us badly.
So, where do I stand?
Let me make this clear. I am not at all against having separate Lambdas with a single responsibility each, if that piece of functionality is going to be reused. For example, if one of those Lambdas checks the stock status of a product, and that Lambda sits behind an API that provides this as a service to consumers, then it would make sense to keep it as a separate Lambda. However, there are a number of other situations in serverless development where it makes little sense to have separate Lambda functions for the sake of it.
So, what is the correct way of doing this, you may ask… well, there isn't a right or wrong way of doing this, as I mentioned earlier. In situations like this, we should be open to, and accepting of, the approach that makes the most sense: the approach that adds business value by offering the best customer experience, rather than the one approach that satisfies the laws of software engineering and pleases a few individuals within the team.
Conclusion
In summary, when we are faced with situations like these, it requires strong willpower to go ahead with one Lambda that can be both performant and cost-effective. There is little reason to make things more complicated than they need to be, just to satisfy the ego or to remain a purist. Such designs may look appealing on paper or on an architectural blueprint, but they are likely to fail to yield the practical benefits that everyone is expecting from a serverless application.
Can Practicality win over Purity in such situations? Only experience will tell!