Getting Started: Monitoring a FastAPI App with Grafana and Prometheus - A Step-by-Step Guide
Introduction
Monitoring plays a crucial role in ensuring the performance, availability, and stability of FastAPI applications. By closely tracking key metrics and identifying potential issues, developers can proactively address them and deliver a better user experience. In this guide, we will explore how to set up monitoring for a FastAPI app using two powerful tools: Grafana and Prometheus.
What is Prometheus?
Prometheus is an open-source monitoring system that collects metrics from your application and stores them in a time series database. It can be used to monitor the performance of your application and alert you when something goes wrong.
What is Grafana?
Grafana is an open-source visualization tool for building dashboards on top of your metrics. With it you can create dashboards that show the status and performance of your application at a glance.
Overview of monitoring FastAPI apps
Monitoring is an important part of running any application. It helps you understand how your application is performing and how it is being used, and it lets you identify and fix issues before they affect your users.
There are many tools available for monitoring applications, each with its own pros and cons. In this guide, we will be using Prometheus and Grafana to monitor our FastAPI app.
Importance of Grafana and Prometheus in monitoring
Prometheus is the collection side: it scrapes metrics from your application, such as CPU usage, memory usage, and network traffic, and stores them over time. Grafana is the visualization side: it turns those metrics into dashboards that show the status of your application.
Prerequisites
- Docker
- Docker Compose
- Python 3.8+ and Pip
- Terminal or Command Prompt
- Text Editor or IDE (VS Code, PyCharm, etc.)
- Basic knowledge of Python, FastAPI, Docker, and Docker Compose
- Basic knowledge of Prometheus and Grafana
Project Setup
To keep things simple, we'll use an existing FastAPI app for this guide. You can clone the repo here. However, if you want to create or use your own FastAPI app, feel free to do so.
git clone https://github.com/KenMwaura1/Fast-Api-example.git
Once you have cloned the repo, run the following commands to create a virtual environment and install the dependencies.
cd Fast-Api-example
python3 -m venv venv
source venv/bin/activate
cd src
pip install -r requirements.txt
To run the app, use the following command.
uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8002
The command above will start the app on port 8002. You can access the app by visiting http://localhost:8002/docs in your browser. Feel free to change the command to suit your needs. The default port is usually 8000.
Setting up Prometheus
A. Installation and Configuration of Prometheus with Docker
Install Docker on your system if not already installed. Pull the Prometheus Docker image from the official repository using the command:
docker pull prom/prometheus
Create a folder named prometheus_data, and inside it create a configuration file named prometheus.yml to define Prometheus settings and scrape targets. Example configuration:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'fastapi-app'
    static_configs:
      - targets: ['web:8000']
This configuration sets the scrape interval and defines the FastAPI app as the target to be monitored. Start the Prometheus container using the following command:
docker run -p 9090:9090 -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
Replace /path/to/prometheus.yml with the actual path to your prometheus.yml configuration file. Access Prometheus by navigating to http://localhost:9090 in your web browser. You should see the Prometheus web interface.
B. Instrumenting FastAPI App for Prometheus Metrics
Create a virtual environment for the FastAPI app and activate it, then install the app's requirements plus the library needed for Prometheus integration:
python3 -m venv venv
source venv/bin/activate
cd src
pip install -r requirements.txt
pip install prometheus-fastapi-instrumentator
In your FastAPI app's main file (in this case src/app/main.py), import the Instrumentator class from prometheus_fastapi_instrumentator:
from prometheus_fastapi_instrumentator import Instrumentator
Initialize and instrument your FastAPI app with the Instrumentator:
Instrumentator().instrument(app).expose(app)
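For orientation, here is a minimal sketch of what the wiring looks like in a stripped-down main.py; the app in the repo has its own database setup and routers, so treat this as illustrative only:
from fastapi import FastAPI
from prometheus_fastapi_instrumentator import Instrumentator

app = FastAPI(title="Fast-Api-example")

# Instrument all routes and expose a /metrics endpoint for Prometheus to scrape
Instrumentator().instrument(app).expose(app)

@app.get("/ping")
async def ping():
    # Simple liveness route, included here only to give the instrumentation something to measure
    return {"ping": "pong"}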
This step automatically adds Prometheus metrics instrumentation to your FastAPI app and exposes a metrics endpoint. Restart your FastAPI app to apply the instrumentation changes. If everything is successful, there should be a new endpoint at http://localhost:8002/metrics that returns the Prometheus metrics.
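You can confirm the endpoint responds either in the browser or with a quick Python check like the sketch below (it assumes the requests package and that the app is running locally on port 8002, as started earlier):
import requests

# Sanity-check that the instrumented app exposes Prometheus metrics.
response = requests.get("http://localhost:8002/metrics")
print(response.status_code)            # expect 200
print(response.text.splitlines()[:5])  # first few lines of the Prometheus text format
With Prometheus now installed and configured in a Docker container, and your FastAPI app instrumented with Prometheus metrics, you are ready to move on to integrating Grafana for visualization and analysis.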
Connecting Prometheus and Grafana
In order to connect Prometheus and Grafana, we will be using the Prometheus data source plugin for Grafana. This plugin allows you to connect Grafana to Prometheus and create dashboards that show the status of your application.
Installing the Prometheus data source plugin for Grafana
The Prometheus data source ships with Grafana as a core plugin, so there is nothing extra to download: it is already available in the official Grafana Docker image we use below, as well as in standalone installs. You only need to add and configure it from the Grafana UI, which we will do later in this guide. (For third-party data sources, the general pattern is grafana-cli plugins install <plugin-id>, but that is not needed here.)
Running the app with Docker Compose
Now that we have our app running, we can use Docker Compose to run it with Prometheus and Grafana. We will be using the official Prometheus and Grafana images from Docker Hub.
Docker Compose file
We will be using a Docker Compose file to run our app with Prometheus and Grafana. The Docker Compose file will contain the following services:
- FastAPI app as web service
- Prometheus as prometheus service
- Grafana as grafana service
- Postgres as a database service
The Docker Compose file will also contain the following volumes:
- Prometheus data volume
- Grafana data volume
- Postgres data volume
The Docker Compose file will also contain the following network:
- hello_fastapi_network
Now create a file named docker-compose.yml in the root directory of your project and add the following code:
version: "3.8"
services:
web:
build: ./src
command: uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000
volumes:
- ./src/:/usr/src/app/
ports:
- "8002:8000"
environment:
- DATABASE_URL=postgresql://hello_fastapi:hello_f
---> Verify that Prometheus is scraping the metrics from your FastAPI app by visiting <http://localhost:9090/targets> in your web browser. The FastAPI app target should be listed with a "UP" state.
With Prometheus now installed and configured in a Docker container, and your FastAPI app instrumented with Prometheus metrics, you are ready to move on to the next steps of integrating Grafana for visualization and analysis.astapi@db/hello_fastapi_dev
depends_on:
- db
db:
image: postgres:13.1-alpine
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- POSTGRES_USER=hello_fastapi
- POSTGRES_PASSWORD=hello_fastapi
- POSTGRES_DB=hello_fastapi_dev
ports:
- "5432:5432"
prometheus:
image: prom/prometheus
container_name: prometheus
ports:
- 9090:9090
volumes:
- ./prometheus_data/prometheus.yml:/etc/prometheus/prometheus.yml
command:
- '--config.file=/etc/prometheus/prometheus.yml'
grafana:
image: grafana/grafana
container_name: grafana
ports:
- 3000:3000
volumes:
- grafana_data:/var/lib/grafana
volumes:
prometheus_data:
driver: local
driver_opts:
o: bind
type: none
device: ./prometheus_data
grafana_data:
driver: local
driver_opts:
o: bind
type: none
device: ./grafana_data
postgres_data:
networks:
default:
name: hello_fastapi
Let's go through the code above and see what each part does:
- version: "3.8" - the version of the Docker Compose file format we are using. You can find more information about the Compose file format here.
- services: - the start of the services section. Each service listed here runs in its own container. For more information about Docker Compose services, you can read the Docker Compose documentation.
- web: and db: - the FastAPI app and its Postgres database. The web service builds the image from ./src, runs uvicorn on port 8000 inside the container, publishes it on host port 8002, and depends on the db service for its database.
- prometheus: - this service runs the official prom/prometheus image from Docker Hub.
- container_name: prometheus - the name of the container, which can be used to refer to it elsewhere in the Compose file and from other containers.
- ports: 9090:9090 - exposes port 9090, which is used to access the Prometheus web interface.
- volumes: ./prometheus_data/prometheus.yml:/etc/prometheus/prometheus.yml - mounts our Prometheus configuration file into the container.
- command: --config.file=/etc/prometheus/prometheus.yml - tells Prometheus where to find that configuration file.
- grafana: - this service runs the official grafana/grafana image from Docker Hub.
- container_name: grafana - the name of the Grafana container.
- ports: 3000:3000 - exposes port 3000, which is used to access the Grafana web interface.
- volumes: grafana_data:/var/lib/grafana - persists Grafana's data (dashboards, users, data sources) in the grafana_data volume.
- volumes: (top level) - defines the prometheus_data, grafana_data, and postgres_data volumes. The first two are bind mounts to local folders, so Prometheus and Grafana data survive container restarts.
- networks: - names the default Compose network hello_fastapi. All four services share it, which is what lets Prometheus reach the app at web:8000.
Prometheus configuration file
As mentioned earlier, Prometheus needs a configuration file to know what to monitor. We will start from the default configuration file that ships with Prometheus and make a few changes so it works with our app. Now update the prometheus_data/prometheus.yml file with the following code:
# config file for prometheus
# global config
global:
  scrape_interval: 15s
  scrape_timeout: 10s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - follow_redirects: true
      enable_http2: true
      scheme: http
      timeout: 10s
      api_version: v2
      static_configs:
        - targets: []

scrape_configs:
  - job_name: prometheus
    honor_timestamps: true
    scrape_interval: 15s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: http
    follow_redirects: true
    enable_http2: true
    static_configs:
      - targets:
          - localhost:9090
  - job_name: 'fastapi'
    scrape_interval: 10s
    metrics_path: /metrics
    static_configs:
      - targets: ['web:8000']
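Before restarting Prometheus, it can be worth sanity-checking this file. promtool check config is the canonical way; as a lightweight alternative, a small Python sketch like the one below (assuming PyYAML is installed) catches indentation slips and confirms the scrape jobs are present:
import yaml  # pip install pyyaml

# Load the Prometheus config and list the scrape jobs it defines.
with open("prometheus_data/prometheus.yml") as fh:
    config = yaml.safe_load(fh)

jobs = [job["job_name"] for job in config.get("scrape_configs", [])]
print(jobs)  # expect ['prometheus', 'fastapi']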
Let's go through the code above and see what each part does:
- global: - global settings that apply to every scrape job unless a job overrides them.
- scrape_interval: 15s - how often Prometheus scrapes its targets.
- scrape_timeout: 10s - how long Prometheus waits for a scrape to complete before timing out.
- evaluation_interval: 15s - how often Prometheus evaluates its rules.
- alerting: and alertmanagers: - how Prometheus would talk to an Alertmanager: follow redirects, use HTTP/2, connect over plain HTTP with a 10s timeout, and use the v2 API. Since targets: [] is empty, no Alertmanager is actually configured here.
- scrape_configs: - the list of scrape jobs.
- job_name: prometheus - a job that scrapes Prometheus itself at localhost:9090 on the /metrics path over HTTP, honoring the timestamps reported by the target.
- job_name: 'fastapi' - the job that scrapes our FastAPI app. It scrapes the /metrics path on the target web:8000 every 10 seconds.
We use web:8000 as the target for Prometheus because web is the name of the FastAPI service in the Docker Compose file and 8000 is the port the app listens on inside its container. On the shared Compose network, containers can reach each other by service name.
If you want to learn more about the Prometheus configuration file you can read the Prometheus documentation.
Running the app
Now that we have the Docker Compose file and the Prometheus configuration file in place, we can run the whole stack with the following command:
docker-compose up -d
Verify that Prometheus is scraping the metrics from your FastAPI app by visiting http://localhost:9090/targets in your web browser. The FastAPI app target should be listed with an "UP" state. Example screenshot:
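If you prefer checking from a script rather than the browser, the sketch below queries Prometheus's HTTP API for target health (it assumes the requests package and the default port mapping used above):
import requests

# Ask Prometheus for the health of all active scrape targets.
response = requests.get("http://localhost:9090/api/v1/targets")
response.raise_for_status()

for target in response.json()["data"]["activeTargets"]:
    job = target["labels"]["job"]
    health = target["health"]  # "up" or "down"
    print(f"{job}: {health}")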
Grafana Dashboard
Now that we have Prometheus running, we can create a Grafana dashboard to visualize the metrics from our FastAPI app. The overall workflow is:
1. Create a new Grafana dashboard.
2. Add a new Prometheus data source.
3. Add a new graph panel.
4. Add a new query to the graph panel.
5. Apply the changes to the graph panel.
6. Save the dashboard.
7. View the dashboard.
8. Repeat steps 3-7 for each metric you want to visualize, steps 2-8 for each dashboard you want to create, and the whole process for each app you want to monitor.
Create a new Grafana dashboard
Once you have Grafana running, go to http://localhost:3000. You should see the following screen:
Enter the default username and password (admin/admin) and click "Log In". You should be prompted to change the password. Enter a new password and click "Save". You should see the following screen:
Click on the "Create your first data source" button. You should see the following screen:
Click on the "Prometheus" button. You should see the following screen:
Enter the following information:
- Name: Prometheus
- URL: http://prometheus:9090
- Access: Server (Default)
- Scrape interval: 15s
- HTTP Method: GET
- HTTP Auth: None
- Basic Auth: None
- With Credentials: No
- TLS Client Auth: None
- TLS CA Certificate: None
Click on the "Save & Test" button. You should see the following screen:
Click on the "Dashboards" button. You should see the following screen:
Click on the ""New Dashboard" button. You should see the following screen:
Click on the "Add Visualization" button. You should see the following screen:
Here you can select the type of visualization you want to add to the dashboard. For this example, we will select the "Time Series" visualization. You should see the following screen:
Now let's add a query to the graph. Click on the "Query" button. You should see the following screen:
Grafana provides a query builder we can use to select the metrics we want to visualize.
- Click on the "Metrics" button. We'll use api_request_duration_seconds_count as the metric we want to visualize.
- Click on the "Label filters" button. Select "endpoint" and choose /notes/{id}.
- Click "+ Add filter". Select "http_status" and choose 200.
- Now click "Run query". You should see the time series graph for the api_request_duration_seconds_count metric.
- Enter the panel title api_request_duration_seconds_count and click "Apply".
- Click on the "Save" button. You should see the following screen:
Rinse and repeat, modifying the query for each metric you want to visualize. You can also add multiple graphs to the same dashboard.
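If you want to sanity-check the numbers outside Grafana, the sketch below runs a roughly equivalent query directly against Prometheus's instant-query API; the metric and label names are the ones used above, so adjust them to whatever your /metrics endpoint actually exposes:
import requests

# Roughly the same query the Grafana panel builds, sent to Prometheus directly.
query = 'api_request_duration_seconds_count{endpoint="/notes/{id}", http_status="200"}'
response = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": query},
)
for result in response.json()["data"]["result"]:
    print(result["metric"], result["value"])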
Feel free to use my Grafana dashboard as a starting point. Find the JSON file in the GitHub repo.
Sample dashboard:
Conclusion
In this article we learned how to monitor a FastAPI app using Prometheus and Grafana. We learned how to create a Docker Compose file to run Prometheus and Grafana. We also learned how to create a Prometheus configuration file to scrape the metrics from our FastAPI app. Finally we learned how to create a Grafana dashboard to visualize the metrics from our FastAPI app.
Thanks for reading! Feel free to leave a comment below if you have any questions or suggestions. You can also reach out to me on Twitter. If you found this article helpful, feel free to share it with others.