Guest post originally published on SparkFabrik’s blog by SparkFabrik Team
Serverless computing is an execution model in which the Cloud service provider is responsible for executing code by dynamically allocating resources. In this model, the client pays only for the resources actually consumed.
Code typically runs inside stateless containers that can be triggered by various events, including HTTP requests, database events, queuing services, monitoring alerts, file uploads, etc.
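To make this concrete, here is a minimal sketch of such an event-triggered function, written in the style of an AWS Lambda handler responding to an HTTP request. The event field names are illustrative of how HTTP-gateway events are commonly shaped, not a definitive schema:

```python
import json

# A minimal, Lambda-style handler: the platform invokes it once per event
# (here, an HTTP request routed through an API gateway), and the stateless
# container running it can be started or torn down at any time.
def handler(event, context):
    # 'event' carries the trigger payload; we assume an HTTP-style event
    # with a JSON body (field names are illustrative).
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

result = handler({"body": '{"name": "SparkFabrik"}'}, None)
```

Because the function holds no state between invocations, the provider is free to run zero, one, or hundreds of copies of it in parallel, which is what enables the pay-per-use billing described above.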
So why choose the serverless model?
Also referred to as “Function-as-a-Service” or FaaS, the serverless model can be the ideal solution to several problems traditionally encountered by IT teams. When the application runs on a proprietary server and the company is responsible for providing and managing the underlying resources, it finds itself:
- Having to pay for the operation of the server, even when no requests are actually being served.
- Being responsible for the uptime and maintenance of the server and of all the underlying resources.
- Having to manage server security updates.
- Having to resize the server accordingly, as usage increases or decreases.
In small companies without an Operations team dedicated to managing servers, as well as in large corporations with dedicated resources, carrying out these operations takes time away from the core activities: building and maintaining applications. This is precisely where serverless computing comes in.
The benefits of enabling a serverless architecture
Serverless architecture is becoming increasingly popular. Indeed, according to research carried out by MarketsandMarkets, the market is expected to grow from an estimated $7.6 billion in 2020 to an estimated $21.1 billion by 2025, a compound annual growth rate (CAGR) of 22.7%. This growing adoption of serverless architecture is attributable to a series of advantages of the new development model. Here they are!
1. REDUCED COSTS
Unlike Infrastructure-as-a-Service (IaaS) models, which entail renting hardware resources regardless of their actual use, the Function-as-a-Service (FaaS) model is based on a pay-as-you-go rate: resources are paid for only for the time strictly necessary to execute a function, when the latter is called by an event.
Paying exclusively for the value of the services actually used allows teams to focus on the development of the product and its unique features, and not on the costs or implementation of services which, in fact, are merely integrated to support the main functions.
2. SCALABILITY
Scalability is a critical factor for fast-growing companies, which need to scale their infrastructure vertically or horizontally. This is often a challenging task that requires a great deal of time and effort, with a corresponding increase in operating costs.
Serverless environments remove these limitations, allowing companies to start small and then support their growth over time, without service disruptions and without the need to implement costly and unplanned changes.
3. FLEXIBILITY AND ADAPTABILITY
Since the provisioning and management of computational resources are offloaded to the Cloud provider, companies are able to rapidly adopt new technologies that allow them to respond to business and market needs quickly and effectively, without having to worry about infrastructure upgrades and all the associated costs.
4. HIGH AVAILABILITY AND FAULT TOLERANCE
In today’s companies, it is common knowledge that business is heavily dependent on IT: this is precisely why IT services must guarantee high availability. Cloud providers offer a well-designed global infrastructure capable of guaranteeing availability and resilience for customer workloads.
5. BUSINESS CONTINUITY AND DISASTER RECOVERY
Today, business continuity is a critical aspect for companies and, as a result, activities must be supported by a solid Disaster Recovery strategy and plan. Cloud providers of serverless solutions offer advanced features that facilitate the automatic recovery of applications and underlying systems against any type of disaster (natural disasters, cyberattacks, hardware defects and so on).
Serverless architectures: critical aspects to consider
Although the advantages of adopting a serverless architecture are evident and numerous, there is nevertheless a downside that needs to be considered. So let’s take a look at the challenges and critical aspects to keep in mind when deciding to adopt this new development model.
1. VENDOR LOCK-IN
When it comes to serverless architectures, the issue of vendor lock-in must be taken into consideration during the design phase and in any migration towards this paradigm. Generally, these architectures develop more easily within the “walled gardens” of individual vendors.
This is precisely why it is essential to clearly understand, from the very beginning, the critical issues that can occur in the transition from one vendor to another:
- Runtime and programming language support is not uniform across vendors, though they are slowly converging
- The lack of a standardized format for describing the events that trigger the execution of serverless code
- Some platforms use proprietary or in-house developed tools for packaging and deployment
To mitigate these problems, the Cloud Native Computing Foundation, responsible for promoting the dissemination of open standards for Cloud Native implementations, maintains an observatory that keeps track of serverless products organized by category. The CNCF supports the development of open standards and solutions, such as CloudEvents (a standardized format for event data) and open products such as Knative, used to implement FaaS services in the Cloud and on-premises.
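As an illustration of what CloudEvents standardizes, the sketch below builds an event in the CloudEvents v1.0 JSON format. The four required context attributes are `specversion`, `id`, `source` and `type`; the event type, source URI and payload used here are purely hypothetical:

```python
import json
import uuid
from datetime import datetime, timezone

# Build a CloudEvents v1.0 envelope. 'specversion', 'id', 'source' and
# 'type' are required by the spec; 'time', 'datacontenttype' and 'data'
# are optional attributes.
def make_cloudevent(event_type, source, data):
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": source,
        "type": event_type,
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

event = make_cloudevent(
    "com.example.file.uploaded",  # hypothetical event type
    "/storage/bucket-a",          # hypothetical source URI
    {"key": "report.pdf", "size": 48213},
)
print(json.dumps(event, indent=2))
```

Because the envelope is vendor-neutral, a function written against this shape can, in principle, be triggered by any platform that emits CloudEvents, which reduces one dimension of lock-in.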
2. CHALLENGES IN ESTIMATING COSTS
Since the pricing model of FaaS services is purely pay-per-use, costs can be difficult to estimate. In the absence of fixed fees, resources are billed as they are consumed, and companies can run into unpleasant surprises once applications reach production.
It is a good idea to compare the offers of the various vendors: there can be significant differences in costs, as well as in the free tiers available.
An interesting estimating tool is the Serverless Cost Calculator, which allows you to simulate the costs of the most popular platforms such as AWS Lambda, Azure Functions, Google Cloud Functions and IBM OpenWhisk.
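For a back-of-the-envelope estimate before reaching for such tools, most vendors bill a small fee per request plus a fee per GB-second of execution. The sketch below applies that common model; the default rates are illustrative placeholders in the style of published list prices, not any vendor's actual pricing:

```python
# Rough FaaS cost model: cost = per-request fee + per-GB-second fee.
# The default rates are illustrative, not a real vendor price list.
def monthly_cost(invocations, avg_duration_ms, memory_mb,
                 price_per_million_requests=0.20,
                 price_per_gb_second=0.0000166667):
    # GB-seconds: total execution time weighted by allocated memory.
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    request_cost = invocations / 1_000_000 * price_per_million_requests
    compute_cost = gb_seconds * price_per_gb_second
    return request_cost + compute_cost

# 5 million invocations/month, 120 ms average duration, 256 MB of memory:
estimate = monthly_cost(5_000_000, 120, 256)
print(f"~${estimate:.2f}/month")
```

Running the numbers for different traffic scenarios this way makes it easier to spot when a workload's steady, predictable load would actually be cheaper on a fixed-fee IaaS plan.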
3. COLD START
In the serverless paradigm, we have seen that resources are paid only when they are actually used. This is precisely why Cloud vendors, to make this model economically sustainable, deactivate resources when not actually in use.
The downside of this is that, at times, there may be activation delays (cold starts). A cold start is the delay between the moment a function is invoked and the moment an instance has been activated and is able to respond to the request.
Several factors can influence a cold start problem, including:
- The programming language used
- The assigned and available resources
- The number of dependencies and the overall application complexity
Therefore, it is important to work on each of these parameters to optimize the start time of the function, adopting the specific techniques recommended by the vendors, as described, for example, by AWS for Lambda functions or by Google Cloud Platform for Cloud Run functions.
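One mitigation that vendors commonly recommend is to perform expensive initialization (SDK clients, configuration, connection pools) once at module load, outside the handler, so that warm invocations reuse it instead of paying the cost again. A minimal sketch, with a counter standing in for the expensive work:

```python
_INIT_COUNT = 0

def _expensive_init():
    # Stands in for loading configuration or creating an SDK client;
    # the counter lets us observe how many times it actually runs.
    global _INIT_COUNT
    _INIT_COUNT += 1
    return {"client": "ready"}

# Runs once per container, at cold start, when the module is loaded.
RESOURCES = _expensive_init()

def handler(event, context):
    # Warm invocations reuse RESOURCES instead of re-initializing.
    return {"initializations": _INIT_COUNT, "client": RESOURCES["client"]}

# Simulate three invocations hitting the same warm container:
for _ in range(3):
    result = handler({}, None)
print(result)  # 'initializations' stays at 1
```

The same idea underlies techniques such as trimming dependencies and keeping packages small: everything that happens at module load time is paid for during the cold start, so the less of it there is, the shorter the delay.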
4. SECURITY
While it is true that all Cloud providers offer advanced security systems, it should be remembered that servers providing services to multiple customers are naturally more exposed to security problems than dedicated local servers.
This is due to the larger set of event sources which, in turn, increases the potential attack surface. Some of the most common risks are those caused by reliance on third-party software such as open source packages and libraries, and by Distributed Denial of Service (DDoS) attacks.
Despite the various challenges that can be encountered when adopting a serverless architecture, in most cases the advantages outweigh the drawbacks.
Moreover, some problems are well understood and can be addressed through measures such as carefully choosing provider technologies to avoid lock-in, or applying the mitigations described earlier for cold starts.