Introduction
In software development, we are constantly building solutions for end users that solve a particular problem or automate a process. Designing and building the software is only part of the job, though, as we also have to make the software available to its intended users.
For web-based applications, deployment is an especially important part of the process, since the application not only needs to work, but needs to work for many concurrent users and remain highly available.
Some of the deployment options available to us include buying our own server hardware or renting server space from another company. Either way, we pay not only for acquiring the resources, but also for maintenance and for the personnel needed to monitor them.
What if we could make our application available without having to worry about provisioning servers or maintaining them? Our agility and delivery would be greatly enhanced.
We can achieve this through a serverless computing platform such as AWS Lambda, which is made available by Amazon Web Services.
What is Serverless Computing?
Cloud providers offer different solutions for deploying and running applications, one of them being serverless computing. In this model, the cloud provider hosts your applications and takes on the responsibility of managing the servers, both hardware and software. Think of it as Function as a Service (FaaS).
The cloud provider handles scaling, availability, server maintenance, and configuration, among other things, so that as developers our focus is entirely on our code. This, in turn, reduces the overhead required to run our applications and make them available to our end users.
Serverless computing has its advantages, but it also has drawbacks. Developers are limited to the logging, tracing, and monitoring tools that the provider offers and cannot plug in their own. We are also tied to the availability of the provider: if they experience issues or outages, our application is affected as well.
AWS is a leading cloud provider that offers serverless computing through AWS Lambda. This is a serverless compute runtime that allows developers to run their code in response to certain events from users, such as making a request or uploading files into an S3 bucket.
This service also allows us to pay only for the computing resources we actually use rather than a blanket cost for the service. This happens through a Lambda function that scales automatically to match demand, independently of the underlying infrastructure.
What is Chalice?
Chalice is a microframework for building and quickly deploying serverless Python applications to AWS Lambda. It not only helps us write the application, but also provides a command-line tool for creating, managing, and deploying it.
Chalice also integrates with other Amazon services such as Amazon API Gateway, Amazon Simple Storage Service (S3), and Simple Queue Service (SQS), among others. We can create RESTful APIs, tasks that run on a schedule, or functions that integrate with S3 buckets for storage.
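For instance, beyond plain HTTP routes, Chalice exposes decorators for scheduled tasks and S3 events. The snippet below is a rough sketch based on those decorators; the bucket name is just a placeholder:

from chalice import Chalice, Rate

app = Chalice(app_name='demoapp')

# Run a task on a fixed schedule (every hour)
@app.schedule(Rate(1, unit=Rate.HOURS))
def periodic_task(event):
    return {"status": "scheduled task ran"}

# React to objects being created in an S3 bucket (placeholder bucket name)
@app.on_s3_event(bucket='my-demo-bucket', events=['s3:ObjectCreated:*'])
def handle_upload(event):
    return {"uploaded": event.key}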
Setup
AWS Setup
To get started with Chalice, we need an AWS account to interact with and deploy our code to. This can be done through the AWS homepage, where we can sign up or log in to an existing AWS account. AWS asks for billing details as well as personal details, but for this demonstration we will use the AWS Free Tier for testing and development purposes, for which we will not be billed.
Once our account is set up, under our profile dropdown there is a section called "My Security Credentials". Here we can create the credentials that will be used when interacting with the AWS console. These credentials will also be used by the AWS CLI tool.
Amazon also offers a CLI tool that we can use to interact with our AWS services from the terminal. It is available for Mac, Linux, and Windows, and requires Python 2.6+ or Python 3.3+. We can install it by running the following pip command:
$ pip install awscli
Once set up, we can test the CLI tool by running:
$ aws --version
More details about the CLI tool and installation on other platforms can be found here.
With the AWS CLI tool set up, we will use the credentials, i.e., the access key ID and secret access key, that we generated earlier to configure the CLI tool by running:
$ aws configure
We will get a prompt to fill in our Access Key ID, Secret Access Key, and default region and output format. The last two are optional, but we will need the access key and secret that we obtained from the AWS console dashboard.
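The prompts typically look something like the following; the values shown here are placeholders, not real credentials:

AWS Access Key ID [None]: AKIA................
AWS Secret Access Key [None]: ....................
Default region name [None]: us-east-1
Default output format [None]: json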
You can also configure different credentials for different users on AWS. More on that and other details can be found here.
Project Setup
For this demo project, we will be building a Python application, and it is good practice to work within a virtual environment to keep our project environment isolated from the system's Python environment. For this purpose, we will use the virtualenv tool to create a virtual environment within which we will work.
In case the virtualenv tool is not yet installed, we can install it by simply running:
$ pip install virtualenv
More information about the virtualenv tool can be found here.
With virtualenv installed, let us head over to our working directory and create an environment by running the following command:
$ virtualenv --python=python3 venv-chalice
We will activate our environment by running:
$ source venv-chalice/bin/activate
With our environment set up, we can install Chalice and verify the installation by running the following commands:
$ pip install chalice
$ chalice --help
The second command here is simply used to verify the installation of Chalice.
Implementation
We now have an AWS account, the AWS CLI tool installed, our environment set up, and Chalice installed. We can now use Chalice to create our simple RESTful API as follows:
$ chalice new-project demoapp
This command creates a simple Chalice project within a folder that is of the following structure:
$ tree demoapp
demoapp
├── app.py
└── requirements.txt
Any other requirements that our Chalice app will require to run while deployed on AWS Lambda will go into the requirements.txt file within the demoapp folder, and our new functionality will mainly reside in the app.py file. We can create other files and import them into the app.py file, which is our main project file.
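Since our API (shown below) will use the requests library to call the GitHub API, it should be listed in requirements.txt so that Chalice packages it with the deployment. A minimal version of that file could simply contain the following entry (unpinned here for brevity, though pinning a version is generally a good idea):

requests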
For our demo, we will create a simple API that returns a list of a user's public GitHub repositories, the primary language used in each, and the number of stars each repository has. This information is publicly available on the GitHub API, so we will not need credentials to interact with it. We will create a function that receives a username and returns the details we require. If the username provided does not exist, we will receive an empty response payload.
Let us create the github_repos function that will be responsible for the GitHub API interaction:
import requests

def github_repos(username):
    # Final list to contain our repository objects
    formatted_repos = []
    if username:
        # Format the url by inserting the passed username
        url = "https://api.github.com/users/{}/repos".format(username)
        r = requests.get(url)

        # Get the JSON containing the list of repositories
        list_of_repos = r.json()

        for repo in list_of_repos:
            repo_object = {
                "name": repo["name"],
                "stars": repo["watchers"],
                "language": repo["language"],
            }
            formatted_repos.append(repo_object)

    return formatted_repos
The function github_repos receives a username and plugs it into the GitHub API URL before making the request. The response received has a lot of information that we do not need for now, so we extract the details of a repository that we need, create a new object, and add it to the list of formatted_repos that we will send back to the user via the Chalice app.
Let us run a few local tests of our function first to confirm that it behaves as expected.
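Since requests is a third-party library, make sure it is installed in the virtual environment first (pip install requests). A quick sanity check could be a few calls at the bottom of the file or in a Python shell; the username below is just an example:

# Repositories for a sample GitHub user (any public username works here)
for repo in github_repos("octocat"):
    print(repo["name"], repo["stars"], repo["language"])

# An empty username short-circuits and returns an empty list
print(github_repos(""))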
The function is now ready to be integrated into our Chalice app in the app.py file, and this is the final version of our app:
import requests
from chalice import Chalice

def github_repos(username):
    # Function implementation above
    ...

app = Chalice(app_name='demoapp')

@app.route('/')
def index():
    return {'hello': 'world'}

# Create our new route to handle github repos functionality
@app.route('/user/{username}')
def github(username):
    return {"repos": github_repos(username)}
Our application is now ready to be consumed by users, so let us deploy it to AWS Lambda.
Deploying our App
Deploying a Chalice application to AWS Lambda is as simple as running the following command in our working directory:
$ chalice deploy
Chalice will handle the deployment process for us and return a link with which we can interact with the RESTful API we have just created:
To test our API, we can use Postman, a web browser, or any other API interaction tool to make requests to the /user/<github-username> endpoint on the "REST API URL" from the screenshot above. I passed in my GitHub username and this was the output:
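From the command line, an equivalent request would look something like the following; the URL here is a placeholder (Chalice deploys to a stage named api by default), so substitute the REST API URL printed by chalice deploy:

$ curl https://<api-id>.execute-api.<region>.amazonaws.com/api/user/<github-username>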
If we make any changes to our code, we just run the chalice deploy command again and Chalice will redeploy our application with the changes we have just made.
When we head over to the AWS Console and click on the "Functions" section in the collapsible sidebar on the left side, we can see the Lambda function that is currently running our application:
When we click on our function, we get more details about it such as the current configuration, environment variables set for our application, execution roles, and memory configuration.
AWS also gives us monitoring options such as event logs and metrics through CloudWatch, which is a monitoring and management service offered by AWS.
This is the view of the monitoring dashboard for our application:
We get statistics on the number of invocations by users, durations of requests served by our API, the success and error rates, among other things.
We even get a view of the individual requests in the same dashboard, though this is not visible in the screenshot above. There’s so much that AWS does for us out of the box, making our deployment experience short and straight to the point. We do not have to worry about maintaining our own server or implementing our own methods of monitoring and logging as AWS has us covered for this.
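If we prefer to stay in the terminal, Chalice can also fetch the function's CloudWatch logs for us, for example:

$ chalice logs

This prints recent log output from our deployed Lambda function, which is handy for quick debugging without opening the console.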
This is the serverless experience.
Summary
In this article, we have created a serverless Python API using the Chalice microframework and deployed it to AWS Lambda. The AWS CLI, alongside the Chalice CLI tooling, helped us bootstrap our project quickly, and we deployed it to AWS Lambda with a single command.
Ours is a serverless application since we did not have to handle any server provision or maintenance on our side. We just wrote the code and let AWS handle the rest for us, including deployment, scaling, and monitoring of our application. The time taken to make our application available has decreased significantly, even though we are still reliant on AWS for other aspects such as monitoring our application.
The source code for this project is available here on GitHub.