
Avoid 503 Errors — Stop Putting Off Performance Testing

In a typical enterprise, important tests are often delayed by manual processes and bureaucratic red tape, and those delays can significantly impact delivery schedules.

To improve the quality of testing and delivery, developers need to identify problems earlier and avoid dealing with costly 503 errors in production. The key is for individual cross-functional teams to be responsible for both code and quality during development. That’s where DevOps and the use of automated testing come in. Streamlining workflow is one of the biggest goals of a true DevOps culture.

With the advent of the Everything-as-Code movement where quality and engineering meet, we can extend our test automation suite to performance testing. Performance testing is a type of non-functional testing used to assess system parameters in terms of responsiveness, scalability, and stability under a variety of workloads. Automating performance testing is especially valuable since it’s usually a manual process that occurs at the end of the development cycle, which tends to slow new feature delivery.

Below, I’ll show a quick example of how to get started with performance testing using Gatling.

Automated Performance Testing

It’s important to kick off a series of tests before new code is integrated with the rest of the application and new software features go into production. Automated performance tests can be run as code locally as a first line of defense before they are run through the delivery pipeline.

Here, I’ll use Gatling, a simple, manageable open-source load-testing tool with a Scala-based DSL. While performance testing is a broad discipline covering many types of subtests (system, load, stress, capacity, etc.), we will keep to the basics and test the response when one user hits the server.

Writing a Local Performance Test Simulation

Before pushing code changes, developers should first run local performance tests. We will use the powerful Maven ecosystem to manage the project build and Gatling dependencies. Here’s how to get started. For reference, you can simply use our quickstart repository. You’ll need:

  • JDK 1.8 or later
  • Maven

With those installed, create a simulation class:
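As a sketch of the Maven wiring, the pom.xml pulls in the Gatling charts module and the Gatling Maven plugin (the version numbers below are illustrative; check for current releases):

```xml
<!-- Gatling test dependency and plugin; versions are illustrative -->
<dependencies>
  <dependency>
    <groupId>io.gatling.highcharts</groupId>
    <artifactId>gatling-charts-highcharts</artifactId>
    <version>3.3.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>io.gatling</groupId>
      <artifactId>gatling-maven-plugin</artifactId>
      <version>3.0.5</version>
    </plugin>
  </plugins>
</build>
```

With this in place, `mvn gatling:test` compiles and runs the simulations in the project.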
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class SubmitForm extends Simulation {
  val devUrl = "http://dev.timshort.liatr.io/personal-banking/"
  // specify the protocol
  val httpProtocol = http.baseUrl(devUrl)
  // define the scenario
  val scn = scenario("Submit a contact form")
    .exec(http("GetPage").get("/")) // load the page
    .pause(5)                       // realistic user pause
    .exec(http("PostForm")          // submit form
      .post("/")
      .formParam("name", "John Adams")
      .formParam("email", "johnadams@test.liatr.io")
      .formParam("message", "I would like more information on your products"))
  // define users, execute the scenario, assert condition
  setUp(
    scn.inject(atOnceUsers(1))
  ).protocols(httpProtocol)
   .assertions(global.failedRequests.count.is(0))
}

Gatling DSL uses Scala, and its classes must extend the Simulation interface. Let’s break down the above code snippet.

Specifying the Protocol

Gatling allows testing over various protocols, though most often we will use HTTP. Gatling supports the common HTTP methods GET, POST, PUT, and DELETE. In our example, we simply define our simulation to use the HTTP protocol for our web application.
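The protocol definition can also carry shared defaults, such as common request headers. Here’s a sketch (the header values are illustrative):

```scala
// protocol with a base URL plus default headers applied to every request
val httpProtocol = http
  .baseUrl("http://dev.timshort.liatr.io/personal-banking/")
  .acceptHeader("text/html,application/xhtml+xml")
  .userAgentHeader("Gatling/PerformanceTest")
```

Any scenario run against this protocol inherits these defaults, so individual requests stay short.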

Defining the Scenario

A fundamental test scenario contains the protocol, method, and execution steps. (Additional customizations like user pauses, control statements, and in-scenario assertions can be used as well.) In our example, we are defining:

  • Our scenario name: “Submit a contact form”
  • An initial GET request to load the page
  • A realistic 5-second user pause
  • The user filling out and submitting the form

Define Users, Execute Scenario, Assert Conditions

Now that we have defined our scenario, we are ready to execute. The setUp() method puts the pieces together as we inject users on our simulation, defining the number of users and any pattern or ramp-up period. In this example, we are defining:

  • Users to act upon the defined scenario using the defined HTTP protocol
  • One user for testing
  • An assert condition requiring that there should be no failed requests

In particular, when running a test locally, you may want to review different test results than when you’re running a test in the pipeline. For example, you can define simulations for dozens or hundreds of users and then analyze how the system performs, whereas when you run a test in the pipeline you may be more interested in executing a quick, simple simulation with an assertion quality gate.
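For a heavier local run, the injection step can ramp users gradually rather than firing them all at once. This sketch reuses the scn and httpProtocol values defined in the simulation above (it assumes `import scala.concurrent.duration._` is in scope):

```scala
// ramp 100 virtual users over 60 seconds to observe behavior under load
setUp(
  scn.inject(rampUsers(100).during(60.seconds))
).protocols(httpProtocol)
```

Gatling’s reports then show how response times shift as concurrency climbs.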

Writing a Performance Test Simulation for the Pipeline

You can use performance testing tools to run test suites through the delivery pipeline, which moves your code through a series of automated tests, or quality gates, before that code passes to production.

Writing a quick, simple test at the beginning of the pipeline can provide fast feedback to let you know if you’ve introduced an issue in the code. Code that passes early-stage testing can then move on to more advanced regression test suites later in the pipeline.
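In the pipeline, that quick test can pair a single user with explicit assertions that act as the quality gate. A sketch, again reusing scn and httpProtocol from the simulation above (the thresholds are illustrative):

```scala
// quick smoke run with explicit pass/fail thresholds
setUp(
  scn.inject(atOnceUsers(1))
).protocols(httpProtocol)
 .assertions(
   global.responseTime.max.lt(800),          // max response time under 800 ms
   global.successfulRequests.percent.gt(99)  // at least 99% of requests succeed
 )
```

If any assertion fails, the Maven build fails, which in turn fails the pipeline stage.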

Here’s how you can use a simple performance test in the pipeline with Gatling inside a declarative Jenkinsfile.

stage('Performance Test') {
  agent {
    docker { image 'maven:3-alpine' }
  }
  steps {
    sh 'mvn gatling:test'
  }
}

This stage in the Jenkinsfile simply runs the Gatling test inside a Docker container built on the Maven image.

Using Performance Testing Tools Locally and in the Pipeline

Above, I’ve highlighted why automating performance testing is important both locally and in the pipeline. Running the code above on your local system and on your delivery pipeline should help you greatly improve the quality of your code before it goes to production. Iterating through this process and enhancing your simulations with more users and various load levels will give better coverage to your overall performance test suite, giving you confidence that your system can handle the requests and load of your customers.



About Liatrio

Liatrio is a collaborative, end-to-end Enterprise Delivery Acceleration consulting firm that helps enterprises transform the way they work. We work as boots-on-the-ground change agents, helping our clients improve their development practices, react more quickly to market shifts, and get better at delivering value from conception to deployment.