Use case — Runtime Threat Mapping for GKE using Deepfence ThreatMapper

July 16, 2020

You can use Deepfence ThreatMapper to visualize and scan any number of hosts, containers and pods at runtime, as well as container images from container registries as part of CI/CD pipelines. In earlier articles, we described how to install the Deepfence community edition, ThreatMapper, on AWS ECS and Azure AKS, and how to use it for vulnerability scanning of hosts and containers. In this article we explore the use of ThreatMapper on Google GKE for visualization and scanning of VMs, containers and pods at runtime, as well as images from Google Container Registry.

Getting Started

A Deepfence installation consists of two components: the Deepfence Management Console, which is installed outside of the cluster being threat mapped, and lightweight Deepfence agents, which are deployed as a DaemonSet on the GKE cluster.

Installing Deepfence Management Console

We will briefly repeat the steps for a single-node installation of the management console here:

  1. Spin up a Linux VM instance with at least 4 cores, 16GB RAM and 120GB disk space. This configuration can support a GKE cluster of up to 250 nodes, depending on load. For larger clusters, you will need to upgrade the console as described in the pre-requisites.
  2. Download the docker-compose file from here
  3. Run docker-compose as follows:

docker-compose -f docker-compose.yml up -d

Give it a few seconds and you are ready to register your product installation as described here.
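Once the containers are up, a quick sanity check such as the following can confirm that the console services started (a minimal sketch; the exact list of services depends on the ThreatMapper release you downloaded):

# All console services should show "Up"; tail the logs if anything is restarting
docker-compose -f docker-compose.yml ps
docker-compose -f docker-compose.yml logs --tail=50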

Installing Deepfence Agent on GKE

  1. To install the Deepfence agent, you can either use the Cloud Shell on your Google Cloud account directly or install gcloud and kubectl locally using the following steps:
  • Install the Cloud SDK, which includes the gcloud command-line tool.
  • After installing the Cloud SDK, install the kubectl command-line tool by running the following command:

gcloud components install kubectl

2. Set up the default project and compute zone using the following commands (replace project-id with your project ID and compute-zone with your compute zone, e.g. us-west1-a), if needed.

gcloud config set project project-id
gcloud config set compute/zone compute-zone
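To confirm the settings took effect, you can list the active configuration (a standard gcloud command, shown here only as a quick check):

gcloud config list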

3. Use the following command to create a cluster, if one does not already exist. Replace cluster-name with the name of your cluster, and set the number of nodes, zone-name and project-name to appropriate values. You may have to create a VPC network if none exists (see the sketch after the next command). You can also create the cluster directly from the Google Cloud UI.

gcloud container clusters create cluster-name --num-nodes=1 --network=test-vpc [--zone zone-name] [--project project-name]
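If the test-vpc network referenced above does not exist yet, a minimal sketch for creating it looks like the following (auto subnet mode is just one reasonable choice; adjust it to your networking requirements). You can then confirm the cluster came up with gcloud container clusters list:

gcloud compute networks create test-vpc --subnet-mode=auto
gcloud container clusters list [--zone zone-name] [--project project-name]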

4. Then, either connect to your GKE cluster using Cloud Shell or use the following command locally (changing cluster-name, zone-name etc. appropriately) to get authentication credentials so that kubectl can interact with the cluster:

gcloud container clusters get-credentials cluster-name [--zone zone-name] [--project project-name]
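A quick way to verify that kubectl is now pointed at the right cluster (standard kubectl commands, included here only as a sanity check):

kubectl cluster-info
kubectl get nodes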

5. If the IP address of the VM or host that runs the Deepfence management console is 192.168.1.10, then edit the kubernetes-agent.yml file and change the value of MGMT_CONSOLE_IP_ADDR to 192.168.1.10.
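After editing, a quick grep can confirm the value was set (this assumes the value sits on the line after the variable name, as is typical for a Kubernetes env entry; only the console IP is specific to your setup):

# Should print the MGMT_CONSOLE_IP_ADDR entry with value "192.168.1.10"
grep -A 1 "MGMT_CONSOLE_IP_ADDR" kubernetes-agent.yml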

6. If you are using Cloud Shell, upload kubernetes-agent.yml to it using the upload button at the top right corner.

7. Grant permissions for the installation with the following command (this step is only needed up to k8s v1.11):

kubectl create clusterrolebinding "cluster-admin-$(whoami)" --clusterrole=cluster-admin --user="$(gcloud config get-value core/account)"

8. Run this command to deploy the Deepfence agent DaemonSet to all nodes in the cluster:

kubectl apply -f kubernetes-agent.yml

9. deepfence-agent-daemon will then be visible on the Workloads page alongside the other workloads in the cluster.

10. It may take a few minutes for the Deepfence agents to be installed and show up in the console UI. You can check the status of the agent installation using the following command:

kubectl get ds deepfence-agent-daemon -n deepfence --watch
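To see the individual agent pods and which node each one landed on, you can also query the pods in the deepfence namespace (the namespace is taken from the DaemonSet command above):

kubectl get pods -n deepfence -o wide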

Once the agents are installed, you can visualize the nodes, containers and pods from the Topology tab of the console UI.

You can click on individual nodes in the topology view to initiate various tasks such as vulnerability scanning. You can also scan the nodes, containers and pods for vulnerabilities once the vulnerability database has been populated. The vulnerability database download can take 30 to 60 minutes (you can check its status on the notification panel).

Topology Visualization and Vulnerability Scanning

You can also initiate vulnerability scanning of any number of nodes by using our APIs. You can visualize the vulnerabilities found on each node, and a ranked list of the most exploitable vulnerabilities across images, by navigating to the Vulnerabilities tab. You can also tag a subset of nodes with user-defined tags and scan just those nodes.

You can also scan images stored in Google Container Registry (GCR) for vulnerabilities from the registry scanning dashboard. First, click the GCR tab and then the “Add registry” button. Then add the GCR registry name, the registry URL and a service account JSON key to populate the available images, as shown in the pictures below. After that, you can select images and click the scan button to start scanning.

(Figures: Adding GCR Registry · GCR Image List · GCR Image Vulnerabilities)
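If you do not already have a service account key for the console to use, a minimal sketch for creating one with read access to GCR looks like the following (the account name threatmapper-gcr-reader is just an example; roles/storage.objectViewer is enough to pull images from GCR's underlying storage buckets, but adjust the role to match your own policies and replace project-id with your project ID):

gcloud iam service-accounts create threatmapper-gcr-reader
gcloud projects add-iam-policy-binding project-id \
    --member="serviceAccount:threatmapper-gcr-reader@project-id.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
gcloud iam service-accounts keys create gcr-reader-key.json \
    --iam-account="threatmapper-gcr-reader@project-id.iam.gserviceaccount.com"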

If you encounter any issues, please file a ticket on our ThreatMapper GitHub issue tracker. Please do not hesitate to reach out with feedback for improvement or to request additional features.