
Why the ‘WHY’ matters more than the ‘WHAT’ in Serverless!


It is indeed time to change gears. Even six years after we first witnessed the Lambda function, and after many years of benefiting from managed (serverless) services, we repeatedly get stuck and fall back into the same cycle of defining serverless with the ever-so-popular and mundane question: ‘What is serverless?’.

Definition overload

For me, it is getting to the tipping point of hearing the same thing over and over again. There are plentiful, handful, earful, mouthful and overspilled definitions of serverless all around us. Often we spend an awful lot of time perfecting (and arguing over) the definition of serverless rather than producing something that is beneficial to our customers and valuable to our business stakeholders.

For anyone still searching for a definition of serverless, there are many around in different shapes and forms: a one-word version, a one-liner, a paragraph version and even a full-page version.

Whether serverless is a doctrine, enigma, dogma, direction, destination, religion or whatever we fantasise, it simply does not bring any value until we put it into practice. Hence we must not waste our energy perfecting its definition but must cultivate its value by adopting it. The moment we shift our focus to the ‘why’ side of serverless, we sense our eyes opening wide with wisdom, and we begin to see far beyond what a definition can show us!

There are a number of fruitful things we can talk about when it comes to adopting serverless, but for our focus here in this write-up, I am going to summarise them into just four reasons. Keep it simple & sweet! (Oops! Another definition for KISS.)

The four wise WHYs

1. Serverless is an ecosystem to grow with!

2. Provides granular level service optimisation to perfect with!

3. Aids an iterative development culture to relish with!

4. Brings engineering diversity to cherish with!

An ecosystem to grow with

An ecosystem in which FaaS (Function as a Service) plays a part but is certainly not the sole proprietor. Many serverless and fully managed services come together to form the solution.

Here, business features and products are viewed as event-driven orchestrations of serverless services, knitted together with infrastructure as code (IaC), that bring the best value to the users. The benefiting end users could be customers around the globe, and could equally be the business stakeholders.

Here is a simple example of such a feature. This is a click-stream event ingestion pipeline. The event producer is the web browser that captures the user interactions as data events and sends those events to the underlying serverless implementation of the solution.

Serverless ecosystem

It is evident here that this solution is fulfilled by a combination of Lambda functions and managed serverless services such as API Gateway, Kinesis Data Firehose, S3 and Parameter Store. It is equally important to remember that other services that are not visible in the diagram also form part of the same solution: IAM, CloudWatch, CloudTrail, CloudFormation, etc., all play a crucial part in this feature ecosystem.
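To make this concrete, here is a minimal sketch of what the ingest Lambda function in such a pipeline might look like. It is a hedged illustration, not the actual implementation: the parameter name /clickstream/delivery-stream, the handler layout and all other names are assumptions.

// Hypothetical ingest handler (TypeScript): receives click-stream events
// from API Gateway and forwards them to a Kinesis Data Firehose delivery
// stream. All names here are illustrative, not taken from the article.
import { FirehoseClient, PutRecordCommand } from '@aws-sdk/client-firehose';
import { SSMClient, GetParameterCommand } from '@aws-sdk/client-ssm';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const firehose = new FirehoseClient({});
const ssm = new SSMClient({});
let streamName: string | undefined; // cached across warm invocations

export const handler = async (
  event: APIGatewayProxyEvent,
): Promise<APIGatewayProxyResult> => {
  if (!streamName) {
    // The delivery stream name lives in Parameter Store, so it can change
    // without redeploying the function
    const param = await ssm.send(
      new GetParameterCommand({ Name: '/clickstream/delivery-stream' }),
    );
    streamName = param.Parameter?.Value;
  }
  // Push the raw click-stream event onto the Firehose delivery stream
  await firehose.send(
    new PutRecordCommand({
      DeliveryStreamName: streamName,
      Record: { Data: Buffer.from(event.body ?? '{}') },
    }),
  );
  return { statusCode: 202, body: 'accepted' };
};

Firehose then batches the records and lands them in S3, where the consumers at the far end pick them up.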

Value ecosystem

Let us spare a minute for the event consumers at the far end of the architecture. They are there, most likely consuming different flavours of the incoming events in order to perform different tasks. For brevity, let us say one consumer is a monitoring dashboard showing sales details, another gives insights into the A/B testing statistics, and the third is part of a data lake that performs data analytics and returns product recommendations. All of these contribute, either directly or indirectly, to the business value ecosystem.

Given this simple yet powerful data streaming pipeline, if the business wants to utilise the available data for a different purpose, then it becomes a relatively easy exercise to extend this architecture without causing any disruption to the existing flow. That way the entire ecosystem grows! How cool is that?
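As a sketch of how little such an extension might take, here is how a hypothetical new consumer could be wired to the existing landing bucket using AWS CDK. The construct and function names are illustrative assumptions.

// Hypothetical extension in AWS CDK (TypeScript): a brand new consumer
// Lambda subscribes to the bucket where Firehose lands the events. The
// ingestion path itself is left completely untouched.
import * as cdk from 'aws-cdk-lib';
import { aws_lambda as lambda, aws_s3 as s3, aws_s3_notifications as s3n } from 'aws-cdk-lib';

declare const stack: cdk.Stack;         // the existing stack
declare const landingBucket: s3.Bucket; // the existing Firehose landing bucket

const recommendationsFn = new lambda.Function(stack, 'RecommendationsFn', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('src/recommendations'),
});

// Every new object landed by Firehose now also fans out to the new consumer
landingBucket.addEventNotification(
  s3.EventType.OBJECT_CREATED,
  new s3n.LambdaDestination(recommendationsFn),
);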

Granular level service optimisation to perfect with

The majority of serverless optimisation discussions lead to Lambda resource tuning, Lambda timeout adjustments or Lambda cold start remediation tricks. For these, there are many recommendations, blog posts and helper tools that point us in the right direction. But besides meddling with only the Lambda functions, the serverless ecosystem offers us the capability to fine-tune most other services to perfection as well.

Take, for instance, the following simple event-driven data pipeline. It is probably one of the simplest architectural constructs we commonly see. A data object dropped into an S3 bucket triggers a Lambda function, which then pushes a message to a queue to be fulfilled by another Lambda. Simple and straightforward.

Let us assume that the different data feeds (Product, SKU and Pricing) carry distinguishable file names (S3 object keys) and each pipeline gets triggered accordingly.
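In CDK terms, that per-feed triggering could look something like the sketch below. The key prefixes and function names are assumed for illustration.

// Sketch: each feed's transform Lambda fires only for its own key prefix
import { aws_lambda as lambda, aws_s3 as s3, aws_s3_notifications as s3n } from 'aws-cdk-lib';

declare const feedBucket: s3.Bucket; // shared drop bucket for all feeds
declare const productTransformFn: lambda.Function;
declare const skuTransformFn: lambda.Function;
declare const pricingTransformFn: lambda.Function;

feedBucket.addEventNotification(s3.EventType.OBJECT_CREATED,
  new s3n.LambdaDestination(productTransformFn), { prefix: 'product/' });
feedBucket.addEventNotification(s3.EventType.OBJECT_CREATED,
  new s3n.LambdaDestination(skuTransformFn), { prefix: 'sku/' });
feedBucket.addEventNotification(s3.EventType.OBJECT_CREATED,
  new s3n.LambdaDestination(pricingTransformFn), { prefix: 'pricing/' });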

Lambda functions

As a start, we might go with the default settings for the services (S3, Lambda functions, SQS) used in the architecture. Then we realise (with the help of logs and monitoring tools) that the SKU feed is complex and its transform Lambda could benefit from some extra memory. So we do just that.

At the same time, on a different track, the Pricing feeds, even though simpler to process, require express treatment so that the data gets updated in the commerce platform as soon as possible. That makes us adjust the resources for both Lambda functions on the Pricing feed pipeline.
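In CDK, those two adjustments are a couple of properties. The memory and timeout figures below are placeholders; the right values come from the logs and monitoring mentioned above.

// Sketch: per-function resource tuning. Values are illustrative assumptions.
import * as cdk from 'aws-cdk-lib';
import { aws_lambda as lambda } from 'aws-cdk-lib';

declare const stack: cdk.Stack;

// The SKU transform is heavier, so it gets extra memory (and thereby CPU)
const skuTransformFn = new lambda.Function(stack, 'SkuTransformFn', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('src/sku-transform'),
  memorySize: 1024,
});

// The Pricing functions are simpler but need to finish fastest
const pricingTransformFn = new lambda.Function(stack, 'PricingTransformFn', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('src/pricing-transform'),
  memorySize: 512,
  timeout: cdk.Duration.seconds(10),
});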

SQS queues

How about those SQS queues? Would they require any TLC (Tender Loving Care)? Sure, but more than that, they can play a different role as well. The queues can act as a buffer to soak up the pressure and smooth out the updates to the commerce platform.

We said earlier that the Pricing feeds require express treatment. From a queue point of view, that implies a Pricing message should spend the least amount of time in the queue compared to the other feeds. If you guessed correctly, yes, I am alluding to priority queueing here, via short polling. This becomes possible by adjusting the properties of that particular queue, along with the trigger and message delivery configurations between the Pricing SQS queue and the update Lambda function. The possibilities are endless!
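A hedged sketch of that Pricing-queue tuning in CDK, with illustrative values:

// Sketch: the Pricing queue is tuned so messages spend minimal time queued
import * as cdk from 'aws-cdk-lib';
import { aws_lambda as lambda, aws_sqs as sqs } from 'aws-cdk-lib';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

declare const stack: cdk.Stack;
declare const pricingUpdateFn: lambda.Function; // the update Lambda

const pricingQueue = new sqs.Queue(stack, 'PricingQueue', {
  receiveMessageWaitTime: cdk.Duration.seconds(0), // short polling
  visibilityTimeout: cdk.Duration.seconds(30),
});

pricingUpdateFn.addEventSource(new SqsEventSource(pricingQueue, {
  batchSize: 1,                               // hand over each price update immediately
  maxBatchingWindow: cdk.Duration.seconds(0), // never wait to fill a batch
}));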

S3

One might wonder how we can include the S3 bucket in the optimisation we’ve been discussing. Here is a simple one to close this part of the discussion. Let us say we don’t require the feed data in the bucket once the data has been successfully processed. However, in case of a processing failure, we need access to the data so that an engineer can investigate. So we decide to keep the data for, say, 5 days and expire or archive it after that. This, too, becomes part of the S3 optimisation.
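In CDK, that retention decision is a single lifecycle rule. The 5-day figure comes from the scenario above; the key prefix is an assumption.

// Sketch: expire processed feed objects after five days
import * as cdk from 'aws-cdk-lib';
import { aws_s3 as s3 } from 'aws-cdk-lib';

declare const stack: cdk.Stack;

const feedBucket = new s3.Bucket(stack, 'FeedBucket', {
  lifecycleRules: [{
    prefix: 'feeds/',                  // assumed key prefix for feed drops
    expiration: cdk.Duration.days(5),  // long enough for failure investigation
  }],
});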

Together, they are all tuned to perfection!

Aids an iterative development culture to relish with

Iterative development is not something new, and definitely not something that serverless imposes on us. We can implement big architectural constructs iteratively, we can develop a smaller part of the architecture iteratively, or we can even start our serverless journey iteratively. Whichever camp we belong to, iterating always enables us to visualise the progress as we move forward. What serverless helps us with in this regard is the ease and efficiency we can achieve in that progression, either as an individual or as a team.

Here is a trivial use case that shows how we can get a stubborn developer (or team) to adopt a new service (I have chosen Amazon EventBridge for this illustration, but it could be anything). An all-out implementation using EventBridge could become counterproductive in such situations, hence this gentle iterative approach!

Iteration #1 — set up a custom event bus

That’s all it takes to start with. Get the team to create a custom event bus in the AWS account they are working with. Nothing else is needed or added at this stage. KISS!

While doing so, if the team is new to EventBridge, then perhaps feed them a few docs, blog posts and videos related to the topic to stimulate interest and curiosity.
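In CDK, iteration #1 really is this small. The bus name is illustrative.

// Sketch: the entire first iteration, a custom event bus and nothing else
import * as cdk from 'aws-cdk-lib';
import { aws_events as events } from 'aws-cdk-lib';

declare const stack: cdk.Stack;

const teamBus = new events.EventBus(stack, 'TeamBus', {
  eventBusName: 'team-custom-bus', // illustrative name
});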

Iteration #2 — send events to the bus

Identify the service that is the ideal candidate to send events to the bus. At this stage no one is sure who the consumers are, but that’s fine. All we aim to do here is put events onto the newly created event bus. I wouldn’t even bother perfecting the structure of the custom event at this stage. Just get the events flowing!

Just one addition: to visualise the events being sent to the bus, it may be a good idea to set up a CloudWatch log group as a temporary subscriber.
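A sketch of the producer side using the AWS SDK for JavaScript v3 (the bus name, source and detail payload are placeholders):

// Sketch: put a first, deliberately imperfect event onto the new bus
import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

const client = new EventBridgeClient({});

export const sendTestEvent = async (): Promise<void> => {
  await client.send(new PutEventsCommand({
    Entries: [{
      EventBusName: 'team-custom-bus', // the bus from iteration #1
      Source: 'service-checkout',      // illustrative producer name
      DetailType: 'event',
      Detail: JSON.stringify({ orderNumber: 'T123123123' }), // structure comes later
    }],
  }));
};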

Iteration #3 — focus on the event

Hopefully by now the engineers are on board with EventBridge! Perhaps now is the time to give a little attention to the custom event itself. Focus on which specific data elements the event is going to carry. A common structure for the event may be useful. Perhaps something like the sample below, though it is not at all mandatory.

{
  "version": "0",
  "id": "0c90d427-dd5d-0f19-12cd-c90bb8ae53fd",
  "detail-type": "event",
  "source": "service-checkout",
  "account": "1234567890",
  "time": "2019-12-17T10:29:48Z",
  "region": "eu-central-1",
  "resources": [],
  "detail": {
    "metadata": {
      "domain": "SHOP",
      "service": "service-checkout",
      "type": "ORDER",
      "status": "SUBMITTED"
    },
    "data": {
      "orderNumber": "T123123123",
      "customerId": "23hdfjdf-34ff-34ghj",
      "totalValue": 29.99,
      "items": 5
    }
  }
}

The development grows slowly but iteratively!

Iteration #4 — set up event targets

Now that we have events available for other services to consume, it is time to decouple those services and make them event-driven. The event filtering and target routing configurations slowly take shape. The possibilities are plenty.
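A sketch of such a rule in CDK, matching the sample event above and routing it to a hypothetical fulfilment Lambda:

// Sketch: filter submitted orders from the checkout service and route them
// to a consumer. The target function is a hypothetical example.
import * as cdk from 'aws-cdk-lib';
import { aws_events as events, aws_events_targets as targets, aws_lambda as lambda } from 'aws-cdk-lib';

declare const stack: cdk.Stack;
declare const teamBus: events.EventBus;      // the bus from iteration #1
declare const fulfilmentFn: lambda.Function; // hypothetical consumer

new events.Rule(stack, 'OrderSubmittedRule', {
  eventBus: teamBus,
  eventPattern: {
    source: ['service-checkout'],
    detail: {
      metadata: { type: ['ORDER'], status: ['SUBMITTED'] },
    },
  },
  targets: [new targets.LambdaFunction(fulfilmentFn)],
});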

Start small, iterate, evolve!

Brings engineering diversity to cherish with

To illustrate the point, here is a brief bio of Jane, a growing ‘serverless engineer’!

Jane joined as an application engineer with sufficient web and JavaScript knowledge. Completely new to AWS and serverless, Jane was tasked with refactoring a serverless service within three months of joining the team. She learned how to write Lambda functions. Soon she realised there was an API involved, so she learned about API Gateway and its main features, such as API keys, usage plans, stages and custom domains.

An initial code review with a colleague exposed the lack of tests, especially integration tests. So Jane referred to other services, spoke to a few other engineers, and put adequate tests in place to cover all the scenarios.

Jane raised a PR for her changes. Within a few minutes she was flooded with review comments, ranging from IAM permissions to the location of some config data used in her code, Lambda resources and much more. Though initially disappointed, she soon saw the positive side of those comments and realised how little she knew about security and service optimisation. She learned a few things by herself, and for the rest she spent a couple of hours with a senior engineer who explained everything in detail. Jane made the changes and resubmitted the PR. After a couple of rounds, she received many approvals. She felt victorious!

Earlier, while setting up the service package in the code repository, she had come across the deployment steps, as each microservice has a separate deployment pipeline. Though it was challenging, Jane sat with a DevOps engineer to understand the CI/CD configurations and got excited about the ease of taking the implementation to production. She decided that she would deploy her service to production all by herself.

After her PR was approved and merged, Jane worked with the QA engineer to test her changes in the acceptance environment. When all the tests were green and QA was happy to go ahead, Jane expressed her desire to deploy to production herself. Jane’s wish came true. She pressed the button, and within seconds her changes were live!

Jane knew that her work didn’t end there. She switched tabs, opened the monitoring dashboard for the service and kept her eyes glued to it, looking for any errors. She also kept an eye on the CloudWatch logs streamed through Kibana. Happy that everything was functioning well, Jane posted a message on the team’s channel about the successful deployment and running of the service. She then moved her Jira ticket to the Done column on the scrum board.

Jane, who once started as a web programmer, has now progressed into a multi-skilled serverless engineer. An asset to the team and to the organisation!

Conclusion

Thanks for reading through this brief write-up. It was never intended to cover all the ‘whys’ we can think of. I purposely avoided discussing the cost side of serverless here. Yes, there can be cost benefits, and the margin of savings varies depending on who we speak to and which business use case we look at. For me, what serverless offers is more than just cost.

I hope it inspired your serverless thinking in some way. Let that inspiration motivate you to achieve greater things along your serverless journey. If you are new to serverless, then remember the following:

No one starts perfect with serverless. That’s fine, but strive to be better at every iteration; that is what is important.

Go Build Serverless!

#ServerlessForEveryone


Why the ‘WHY’ matters more than the ‘WHAT’ in Serverless! was originally published in LEGO Engineering on Medium, where people are continuing the conversation by highlighting and responding to this story.

