Serverless Deployment: How It Works and Practical Examples

Serverless deployment is the latest trend in the cloud arena. Your code is always available, but it only executes when a request comes in. Everything it needs is provisioned within milliseconds and discarded afterward, and you only pay for what you use. Sounds nice, doesn't it? It's also easier to update whenever you have to. So let's take a closer look at how it works, what a deployment means, and a few examples.

Serverless, the Concept

Whenever we refer to serverless or serverless architecture, we're talking about relying on a third-party service: your provider is in charge of managing and assigning the compute resources your code needs to execute.

In the end, the provider gives you a runtime environment in which your code executes. This kind of service is more commonly known as function as a service (FaaS). It scales automatically whenever you need more resources, and you only pay for the resources you use.

A New Way to Deploy

In the last section, I quickly reviewed what serverless means. With this kind of architecture, you have to handle deployments in a different way.

Most likely, you have a continuous-integration and continuous-delivery pipeline. In a traditional container-based setup, when you complete your development process, the delivery pipeline kicks in: it starts by taking down your application containers.

Afterward, it launches new containers with your code's latest features. If you have any kind of high-availability configuration, you also have to make sure all your nodes are replaced. Only then are you done.

When using a serverless architecture, you only have to package your updated code and hand it to your provider. How you do this depends entirely on your FaaS provider. And that's it.

Subsequently, your provider is in charge of making your updated code available for any new requests. Since each function is isolated in its own environment, the update won't affect executions already in progress, and old function instances die once they complete. The update will seem instantaneous.

Drawbacks

Just updating your code sounds like a dream. Nevertheless, FaaS is not always the right solution.

For example, it's not the best fit for long-running applications, as it can be more expensive than a virtual machine or an already provisioned container.

Another drawback is vendor lock-in. Every vendor has its own deployment process, libraries, and software version requirements, so if you need to move to a different provider, you may have to change a big part of your code. Right now, no two vendors are compatible.

Additionally, there's resource provisioning, better known as cold start. The first time you execute a function, the resources it needs have to be initialized. Subsequent requests perform better because those resources are already warm, but if the function isn't invoked for a while, the resources may be released.
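
One common way to soften cold starts is to do expensive initialization at module level, so warm invocations reuse it. Here's a minimal sketch of the idea; the connect_to_database helper is just a stand-in for any costly setup:

# Hypothetical sketch: module-level state survives across warm invocations.
# The first (cold) call pays the initialization cost; warm calls reuse it.
import time

_start = time.time()  # runs once per container instance, at cold start

def connect_to_database():
    # Placeholder for an expensive setup step (connection pool, SDK client, ...)
    time.sleep(1)
    return object()

_db = connect_to_database()  # initialized once, reused while the instance is warm

def handler(request):
    return f"Instance alive for {time.time() - _start:.1f}s"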

Security

As with every application, security should be taken into account from the start of the project. Some recommendations: give your functions the least privileges possible, review third-party dependencies, encrypt your data when you call other APIs or functions, use environment variables for credentials, and set up logging.

Regarding that last point, it's better to have one place that concentrates all your logs, not just the ones from your serverless functions, for further analysis. For this, I suggest you give Scalyr a try.
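
To make the credentials and logging recommendations concrete, here's a minimal sketch of a handler that reads a secret from an environment variable and logs its activity. The API_TOKEN variable name is just an example:

# Sketch: credentials via environment variables plus basic logging
import logging
import os

logging.basicConfig(level=logging.INFO)

def handler(request):
    # Never hardcode secrets; read them from the environment instead.
    token = os.environ.get("API_TOKEN")  # example variable name
    if token is None:
        logging.error("API_TOKEN is not configured")
        return "Configuration error", 500
    logging.info("Handling request with a configured credential")
    return "OK"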

Setting up the Scenario

I have already described how the deployment may work, but let's look at a specific example. Serverless is better suited to small functions, microservices, or tasks that execute quickly. In addition, not many languages are supported yet. So I'll use Python as an example with the most common providers: Google Cloud, Amazon Web Services, and Microsoft Azure.

For this example to work, you'll need to install Python on your platform (Windows, Linux, or Mac OS X). I won't cover installation as part of this post since it's platform dependent, but I'll give you some clues. For Windows, there's an installer on the download page. For Linux, the simplest way is to use your distribution's package manager. And for Mac OS X, I recommend installing it with Homebrew.
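
Whichever platform you're on, you can verify the installation from a terminal (on some Linux distributions the binaries are named python3 and pip3):

python --version
pip --version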

The final step is to isolate your development environment. This step is especially useful when you have different projects, each with different requirements or even module versions. I use virtualenvwrapper.

For this example, although the main code will be almost the same, each provider has its own libraries to implement FaaS.

So, let’s get started by creating a Python virtual environment and a directory for our project (I’m using Windows, so you may have to adapt this part for your platform):

mkvirtualenv testpython 
mkdir c:\project
cd c:\project

Working With Google Cloud

If you’re using Google Cloud, you need to enable the Cloud Functions API. Now that you have Python and your virtual environment, let’s add Google Cloud Libraries.

pip install --upgrade google-cloud-storage

Then, install the Cloud SDK to help you manage your Google Cloud account from the command line.
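
Once the SDK is installed, you'll typically authenticate and point it at the project you want to deploy to. PROJECT_ID here is a placeholder for your own project:

gcloud auth login
gcloud config set project PROJECT_ID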

Setting up the Code

Google Cloud requires that the Python file be called main.py. Within that file, you define the function to call.

# main.py
def hello(request):
    return "Hello world"

Deployment

To deploy the code, you just need to execute the following command within your project directory:

gcloud functions deploy hello --runtime python37 --trigger-http --allow-unauthenticated

Let’s deconstruct the command:

  • gcloud is part of the Cloud SDK.
  • functions deploy executes a deployment in the Functions API.
  • hello is the name of the function to call.
  • --runtime specifies the runtime that executes the function (in this case, Python 3.7).
  • --trigger-http indicates that the function will be called when a web request happens.
  • --allow-unauthenticated lets the function be called without user authentication.

Finally, to call the function, either use curl or point your browser to the following URL:

https://REGION-PROJECT_ID.cloudfunctions.net/hello

REGION is the region where your function is deployed, and PROJECT_ID is your Google Cloud project ID.
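
For example, with curl (substituting your own region and project ID):

curl https://REGION-PROJECT_ID.cloudfunctions.net/hello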

You can refer to the Google Cloud tutorial for something a bit more detailed or if you want to use a different programming language.

Working With Amazon Web Services (AWS)

Deploying to AWS is a bit more complicated, so it's better to use the Serverless Framework. Installing the framework requires Node.js. Once Node.js is in place, just execute the following command:

npm install -g serverless

The -g flag installs the package globally, so you can use it from any directory.
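
You can confirm the installation worked with:

serverless --version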

The next step is to create a serverless project. To do so, execute the following command:

serverless

It is an interactive command that will help you set up your Python service and configure your AWS account to work with the framework. Afterward, it will generate a new directory for your project, with two main files (handler.py and serverless.yml).

handler.py is where you write your function's code, while serverless.yml describes your Lambda function. Look at this handler.py example:

# handler.py
import json

def endpoint(event, context):
    # Body of the HTTP response
    body = {
        "message": "Hello world"
    }
    # API Gateway expects a statusCode and a JSON string as the body
    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }
    return response

endpoint is the function that AWS will execute. The response object will be the response to the service call.

Now let’s look at the serverless.yml file.

service: aws-python-hello

frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: python3.7

functions:
  pythonHello:
    handler: handler.endpoint
    events:
      - http:
          path: hello
          method: get

This is what the file means:

  • service is the name for the AWS service.
  • frameworkVersion is the range of Serverless Framework versions the configuration works with.
  • provider states the provider name, aws for this case, and the runtime environment, Python 3.7.
  • The function is called pythonHello, and it will execute what is coded in the function endpoint in the handler.py file.
  • events indicates that the trigger to execute the function will be an HTTP request.
  • The path to the request is hello.
  • The method for the request is GET.
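
Before deploying anything, you can run the function locally through the framework, which is handy for a quick sanity check (this runs the handler on your machine, not in AWS):

serverless invoke local --function pythonHello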

Deployment to AWS

This part is easy. To deploy your function, just execute the following command:

serverless deploy -v

Part of the output will be the URL to execute your function. So just open it in your browser:

https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/dev/hello
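
You can also call it from the command line using the URL from the deploy output, and fetch the function's recent logs from CloudWatch:

curl https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/dev/hello
serverless logs --function pythonHello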

If you want to see more details about serverless with AWS, refer to this entry.

Working With Microsoft Azure

In order to use Microsoft Azure to deploy your functions, you need to install the Azure Functions Core Tools. Now, within your project directory, execute the following command:

func init --python

It initializes your Azure project within the current directory, with Python as the worker runtime.

Now let’s define the function to execute with the command:

func new --name Hello --template "HTTP trigger"

  • Hello is the function name, set with --name.
  • --template specifies the trigger that executes the function; "HTTP trigger" means a web request.

The code to be executed is stored in the file __init__.py.

# __init__.py
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    return func.HttpResponse("Hello world!")

The file function.json gives details about the function to execute.

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}

In the JSON file, you specify the script to execute, when it executes, and which HTTP methods are allowed. We are almost ready to deploy.
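
Before publishing, you can also run the function locally; the Core Tools host it on a local endpoint (by default, http://localhost:7071/api/Hello for this function):

func start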

Deploying to Microsoft Azure

Before deploying to Azure, you need a resource group, a storage account, and a function app. Use the following commands:

az login
az group create --name AzureHello --location westeurope
az storage account create --name storehello --location westeurope --resource-group AzureHello --sku Standard_LRS
az functionapp create --resource-group AzureHello --os-type Linux --consumption-plan-location westeurope --runtime python --name HelloApp --storage-account storehello

  • First, log in to Azure.
  • Next, create a resource group named AzureHello.
  • Then create a storage account named storehello (storage account names must be lowercase).
  • Finally, create a function app called HelloApp that uses the resource group AzureHello and the storage account storehello.

Finally, you can deploy your application with this command:

func azure functionapp publish HelloApp

After deployment, a URL will be part of the output. Navigate with your browser to that address to execute your function.

You can follow this tutorial for additional information and clarification.

Conclusions

As you can see, each provider has its own method for deploying serverless functions. However, deployment is the easiest part; it's the provider setup and the function's code that take the most time to prepare. Still, the process doesn't end with the deployment.

I also recommend setting up logging and error-handling routines to keep your functions in good health and to discover errors as soon as they arise.