Engineering · 6 min read

Burp Suite Enterprise in GCP

Yasmine Hal
Posted January 08, 2024

Fullstory recognizes the importance of securing our product and views it as a critical investment for the business. Security in depth is one of our guiding principles, and we use a variety of tools to achieve it. Burp Suite Enterprise (BSEE) is the tool we use to run dynamic application security testing (DAST) scans on our web app.

PortSwigger, the company behind Burp Suite, a tool commonly used by security practitioners, also offers an enterprise edition of the product. This edition runs the same scans as the Community and Professional editions, with much-needed enterprise features. These features include:

  • Unified view of test results and issue tracking

  • Scheduled scans for continuous monitoring and timely vulnerability detection

  • Reporting capabilities, and integration options for tracking and notifying of issues

Though the enterprise edition fulfills our needs, it proved to be more complicated to set up than expected. This article shares what we’ve learned and the reasoning behind the steps needed to deploy Burp Suite Enterprise in our preexisting Kubernetes cluster.

Deployment type

BSEE has two deployment types: standard (i.e., VM-based) and Kubernetes. We decided to go with the latter for a few reasons:

  • Fullstory uses GKE extensively and has expertise in the area.

  • A Kubernetes deployment lets us rely on Google’s patching and maintenance of the underlying infrastructure, which adds the least technical debt for our team and aligns with Google’s shared responsibility model.

Fetching the Helm chart

At the time of writing this post, the BSEE Helm chart is not provided through a Helm repository.

To work around this, we downloaded the Helm chart zip file and integrated that code into our GitHub repository. The desire to make merge conflicts easily resolvable underscored a key principle for this deployment: minimizing alterations to the Helm chart files provided by PortSwigger.

Any change from the original Helm chart is documented with a comment so that, whenever we update the chart, we know which changes need to be kept.

System requirements

If you already have a GKE cluster running some other apps, you’d be tempted to skip reading these instructions. Don’t. To make things easy, here’s the part that you need to pay attention to:

A PersistentVolumeClaim that is created in the namespace to which Burp Suite Enterprise Edition is going to be deployed. The access mode is ReadWriteMany.

This is not a very common persistent volume claim (PVC) access mode. In GKE, a persistent volume is backed by a persistent disk by default, but persistent volumes backed by persistent disks do not support this access mode. A PVC with this access mode therefore requires extra configuration.

The steps to enable this PVC are as follows:

Enable Filestore

Enable the Filestore CSI Driver in the GKE cluster

Enabling the Filestore CSI driver in the Kubernetes cluster allows it to dynamically provision Filestore-backed persistent volumes.

Terraform

If you use the Kubernetes Engine Terraform module created by Google, it’s as simple as setting the filestore_csi_driver property to true.
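A minimal sketch, assuming the terraform-google-modules/kubernetes-engine module; the cluster name, networking values, and module version are placeholders for your own configuration:

    module "gke" {
      source  = "terraform-google-modules/kubernetes-engine/google"
      version = "~> 29.0" # pin to the module version you already use

      project_id = var.project_id
      name       = "my-gke-cluster"
      region     = var.region
      network    = var.network
      subnetwork = var.subnetwork

      ip_range_pods     = var.ip_range_pods
      ip_range_services = var.ip_range_services

      # Enables the managed Filestore CSI driver add-on on the cluster
      filestore_csi_driver = true
    }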

Google Cloud CLI

Make sure you are logged in to the right GKE cluster:
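For example (the cluster name, region, and project are placeholders):

    gcloud container clusters get-credentials CLUSTER_NAME \
        --region=REGION \
        --project=PROJECT_ID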

Then enable the Filestore CSI driver:
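Using the same placeholder names:

    gcloud container clusters update CLUSTER_NAME \
        --update-addons=GcpFilestoreCsiDriver=ENABLED \
        --region=REGION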

Adding the PVC to the Burp Helm chart

To do this, you need to create a new storage class in the cluster. Admittedly, the storage class is added at the cluster level rather than for just this app. However, in our use case we only use this storage class for the BSEE app, so it’s appropriate to have it as part of the app’s Helm chart.

Add a file called storage-class.yml to the templates directory of the Helm chart:
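A minimal sketch of that file; the class name and the Filestore tier/network parameters are our choices rather than anything mandated by the BSEE chart:

    # storage-class.yml -- StorageClass backed by the Filestore CSI driver
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: filestore-rwx            # referenced by the PVC below
    provisioner: filestore.csi.storage.gke.io
    allowVolumeExpansion: true
    parameters:
      tier: standard                 # basic HDD Filestore tier
      network: default               # VPC network the Filestore instance attaches to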

Add a file called persistent-volume-claim.yml to the templates directory of the Helm chart:
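Something along these lines; the claim name is a placeholder and needs to match the name the BSEE chart expects (see PortSwigger’s Kubernetes deployment docs), and basic-tier Filestore instances start at 1 TiB:

    # persistent-volume-claim.yml -- ReadWriteMany claim for BSEE
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: bsee-scan-storage        # placeholder: use the name the chart references
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: filestore-rwx
      resources:
        requests:
          storage: 1Ti               # Filestore basic tier minimum capacity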

Database setup

This type of deployment requires a database for BSEE to store its data. The PortSwigger docs provide the details of what needs to be set up; however, you’ll need to build your own Terraform from that. The Terraform we used looks roughly like this:
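A minimal sketch, assuming a PostgreSQL Cloud SQL instance; the instance name, region, machine tier, and engine version are placeholders, and PortSwigger’s docs list the exact database settings BSEE requires:

    resource "google_sql_database_instance" "bsee" {
      name             = "bsee-db"
      database_version = "POSTGRES_14"
      region           = var.region

      settings {
        tier = "db-custom-2-7680" # size to your scanning workload

        ip_configuration {
          ipv4_enabled = true
          # No authorized_networks entries: connections go through the Cloud SQL Auth Proxy.
        }
      }
    }

    resource "google_sql_database" "burp_enterprise" {
      name     = "burp_enterprise"
      instance = google_sql_database_instance.bsee.name
    }

    resource "google_sql_user" "burp_enterprise" {
      name     = "burp_enterprise"
      instance = google_sql_database_instance.bsee.name
      password = var.db_password # keep this in a secret manager, not in source
    }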

Communication with the database

The Terraform configuration outlined above, along with specifying the username/password combination in the BSEE Helm chart, would only work if the database were open to connections from the internet.

Since this goes against the security best practice of keeping databases as isolated as possible, we decided to use the Cloud SQL Auth Proxy.

Initially we tried to deploy this as a sidecar to each of the containers, but we quickly realized this wouldn’t work: whenever a new scan is started, the server spawns a Kubernetes Job whose container exists only for the duration of the scan. The Cloud SQL proxy is a long-lived process, so running it as a sidecar to a Job would keep the Job running forever. There are ways around this, but we kept true to our guiding principle of making the least amount of changes to the original Helm chart.

Instead, we chose to set up a Service object that points to the Cloud SQL Proxy on port 5432. Then, we set the database JDBC connection string to use the Service hostname. In the Helm chart’s values file it looks like this:
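Illustratively, with the caveat that the exact keys come from PortSwigger’s chart; the hostname matches the cloud-sql-proxy Service defined later in this post, and the database and user match the Terraform above:

    # values.yaml (excerpt) -- key names are illustrative, use the connection
    # settings the BSEE chart actually exposes
    database:
      jdbcUrl: jdbc:postgresql://cloud-sql-proxy:5432/burp_enterprise
      username: burp_enterprise
      # the password is supplied via a Kubernetes Secret rather than in plain text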

Notice that with this deployment, every container in the cluster can attempt to initiate a connection to the database. This is mitigated by the fact that the database’s authentication is still in play: any connection must present a valid username/password combination.

Using this authentication method requires some setup, which will be listed in the following sections.

Enable Cloud SQL Admin APIs

The Cloud SQL Auth Proxy uses the sqladmin API to communicate with the database, so this API has to be enabled.
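For example, via gcloud (or the equivalent google_project_service resource in Terraform); the project ID is a placeholder:

    gcloud services enable sqladmin.googleapis.com --project=PROJECT_ID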

Set up service accounts

We need to set up a GCP service account with Cloud SQL permissions, then link that GCP service account to the Kubernetes service account that runs the Cloud SQL Auth Proxy container.
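A sketch of that setup with gcloud, assuming Workload Identity is enabled on the cluster; the service account name, project ID, and namespace are placeholders:

    # GCP service account the proxy will act as
    gcloud iam service-accounts create cloud-sql-proxy --project=PROJECT_ID

    # Allow it to connect to Cloud SQL instances
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="serviceAccount:cloud-sql-proxy@PROJECT_ID.iam.gserviceaccount.com" \
        --role="roles/cloudsql.client"

    # Let the Kubernetes service account (created below) impersonate it via Workload Identity
    gcloud iam service-accounts add-iam-policy-binding \
        cloud-sql-proxy@PROJECT_ID.iam.gserviceaccount.com \
        --role="roles/iam.workloadIdentityUser" \
        --member="serviceAccount:PROJECT_ID.svc.id.goog[BSEE_NAMESPACE/cloud-sql-proxy]"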

Add the Cloud SQL Auth Proxy deployment to the Helm chart

Create a directory called cloud-sql-proxy under the templates directory.

Add a file called deployment.yml:
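A minimal sketch of the Deployment, using the v2 Cloud SQL Auth Proxy image; the names and the .Values.custom.* keys are ours (defined in the custom values section below), not part of PortSwigger’s chart:

    # cloud-sql-proxy/deployment.yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cloud-sql-proxy
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cloud-sql-proxy
      template:
        metadata:
          labels:
            app: cloud-sql-proxy
        spec:
          serviceAccountName: cloud-sql-proxy
          containers:
            - name: cloud-sql-proxy
              # pin to a current Cloud SQL Auth Proxy release
              image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.0
              args:
                - "--address=0.0.0.0"
                - "--port=5432"
                - "{{ .Values.custom.cloudSqlInstanceConnectionName }}"
              securityContext:
                runAsNonRoot: true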

Add a file called service-account.yml:
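The Kubernetes service account, annotated so that Workload Identity maps it to the GCP service account created earlier:

    # cloud-sql-proxy/service-account.yml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: cloud-sql-proxy
      annotations:
        iam.gke.io/gcp-service-account: "{{ .Values.custom.cloudSqlServiceAccountEmail }}"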

Add a file called service.yml:
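And the Service that the JDBC connection string points at, using the same placeholder names as above:

    # cloud-sql-proxy/service.yml
    apiVersion: v1
    kind: Service
    metadata:
      name: cloud-sql-proxy
    spec:
      selector:
        app: cloud-sql-proxy
      ports:
        - name: postgres
          port: 5432
          targetPort: 5432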

You’ll notice in the files above that some values are referenced as .Values.custom.*. The custom section of the values file holds all the values we added to the Helm chart that were not part of the original chart.

Add custom values to values.yaml:
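The key names here are our own, chosen to match the templates above rather than anything from PortSwigger’s values file; the project, region, and service account email are placeholders:

    # values.yaml (excerpt) -- our additions, grouped under a single custom block
    custom:
      cloudSqlInstanceConnectionName: "PROJECT_ID:REGION:bsee-db"
      cloudSqlServiceAccountEmail: "cloud-sql-proxy@PROJECT_ID.iam.gserviceaccount.com"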

Done

This might be a little complicated, but these are all the steps we had to perform to deploy BSEE in our cluster. For your convenience, we’ve wrapped this all up in a GitHub repo.

Once all this is done, you should be able to go into your BSEE interface, configure sites and start scanning your web apps. Happy Burping!

Want a perfect website or app? Fullstory can help. Request a demo today.

Yasmine Hal, Senior Application Security Engineer

About the author

Yasmine Hal is a Senior Security Engineer with 18 years of experience in software development, currently working at Fullstory. Specializing in the integration of security products into SaaS environments, she previously led complex projects with innovative solutions.
