Web Security: Recon

Gerson Enriquez

What Is Web Application Recon?

Think about the Money Heist crime series; getting inside the Bank of Spain is easier said than done. First of all, the thief has to have a specific target: why does he want to go inside? In this case, it is because he wants to steal all the money, so he has to know where the money is. Once he has a target, he must study the bank's planimetry and floor plans to know which points he can access quickly and to see all the exit routes.

[Image: Money Heist cover with the Bank of Spain]
[Image: a general planimetry]
Unfortunately, that is not enough. The thief needs to know how the business works inside, the available roles, and who works there. It is probably also helpful to know a bit about each of them in order to apply successful social engineering.
After gaining physical and operational knowledge about the bank, the thief can start thinking about the best strategy to get inside, the weak points, and all the alternatives. On the Web, the logic is the same: the thief, in our case, is the malicious user (the attacker), and the bank is the application.
The recon phase focuses on acquiring all the possible knowledge about the application, not just technical but also functional. This means knowing who the application's users are, how the application generates revenue, why users choose the application over its competitors, who those competitors are, what functionality the application offers, and so on.
In this phase we will play the role of an attacker, and the goal will be to build our own planimetry of the application. But first, let's take a brief look at the structure of modern web apps.

The Structure of a Modern Web Application

Before going deep into the recon phase, let's take a high-level look at some fundamental technologies of a modern web application. A classic question comes to mind: what happens when you type a URL into your browser?

[Image: a browser search bar]

After pressing Enter, the first thing the browser has to do is translate the website name into the corresponding IP address, thanks to DNS (Domain Name System). Once the browser knows the exact address, it can locate the resource. It then opens a TCP connection with the server, and the server responds with an HTML page that the browser renders. As we can see, the Web is a digital universe composed mainly of a few protocols (DNS, HTTP, URLs) built on top of other protocols that allow computers to communicate with each other.
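To make the flow concrete, here is a minimal sketch in Python (standard library only) that performs the same two steps by hand: resolve the name via DNS, then fetch the page over a TCP connection. The host example.com is just a placeholder target.

```python
import socket
import http.client

host = "example.com"  # placeholder target

# Step 1: DNS - translate the hostname into an IP address
ip = socket.gethostbyname(host)
print(f"{host} resolves to {ip}")

# Step 2: TCP + HTTP - open a connection and request the page
conn = http.client.HTTPSConnection(host)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
html = response.read()  # the HTML the browser would render
conn.close()
```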


The following are some technologies and standards that modern applications often use to make client-server communication easier:

  • HTTP

HTTP is a protocol for fetching resources such as HTML documents. It is the foundation of any data exchange on the Web, and it is a client-server protocol, which means requests are initiated by the recipient, usually the web browser. A complete document is reconstructed from the different sub-documents fetched: for instance, text, layout description, images, videos, scripts, and more.

HTTP stands for Hypertext Transfer Protocol, and we can see it as a bridge between client and server.

[Image: the HTTP layers]

The fundamental principle behind the Web was that once someone somewhere made a document, database, graphic, sound, or video available, it should be accessible by anyone, with any type of computer, in any country. HTTP helps establish a common language that makes client-server communication possible. One of its features, format negotiation (today usually called content negotiation), allows a client to say what sorts of data formats it can handle and allows the server to return a document in any one of them.
Since HTTP is a stateless protocol, each request is independent, and we have to pass all the information the server needs in every request. Nevertheless, it is possible to keep some session state thanks to HTTP cookies. Cookies are set by the server and stored by the browser, and they allow users to keep their authorization to access protected resources.

[Image: an HTTP request]
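To illustrate both ideas, format negotiation and cookies, here is a small sketch using Python's requests library; the URLs, and the idea that /account is a protected resource, are assumptions for the example.

```python
import requests

# Content negotiation: the Accept header tells the server
# which formats this client can handle.
response = requests.get(
    "https://example.com/profile",  # hypothetical URL
    headers={"Accept": "application/json"},
)

# Cookies: the server may set a session cookie; sending the same
# cookie jar back keeps the session state across requests.
response2 = requests.get(
    "https://example.com/account",  # hypothetical protected resource
    cookies=response.cookies,
)
print(response2.status_code)
```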

  • REST APIs

REST stands for Representational State Transfer, an architectural style for designing APIs in which resources are identified by URLs and manipulated through standard HTTP methods.
With a documented REST API (for example, a Swagger/OpenAPI page), we get an interface listing all the endpoints the application makes available.

[Image: a Swagger interface listing REST API endpoints]

The server response is usually in JSON format.
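As a rough sketch of what talking to a REST API looks like from code, using Python's requests library; the base URL, paths, and response shape here are all hypothetical:

```python
import requests

BASE = "https://api.example.com"  # hypothetical API base URL

# A typical REST interaction: GET fetches a resource,
# POST creates one; the resource is identified by the URL.
user = requests.get(f"{BASE}/users/42").json()

created = requests.post(
    f"{BASE}/users",
    json={"name": "Ada"},  # requests serializes this dict as JSON
)
print(created.status_code)  # typically 201 Created on success
```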

  • JSON

JSON stands for JavaScript Object Notation; it replaced XML as the most widely used format for representing information exchanged between client and server.

[Image: the JSON format]
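For example, a minimal round trip with Python's standard json module:

```python
import json

# Serialize a Python dict into a JSON string (what a server sends)...
payload = json.dumps({"id": 42, "name": "Ada", "roles": ["admin"]})
print(payload)  # {"id": 42, "name": "Ada", "roles": ["admin"]}

# ...and parse it back into native objects (what a client does)
data = json.loads(payload)
print(data["roles"][0])  # admin
```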

These are the most common technologies used on the Web, together with JavaScript, the browser's language, and SPA frameworks like React, Vue, and Angular.
Now we can get into the central part of the article, where we will look at how to build our planimetry of the application and understand how it works.

Mapping the Application

As we said at the beginning, we have to study the target before going inside the bank, or, in our case, the application. We need to map the application: this means knowing all the possible paths, understanding what kind of payload each API accepts, and so on.
Here are some common practical techniques for acquiring that kind of knowledge.

- ENUMERATING CONTENT AND FUNCTIONALITY
In a typical application, most content and functionality can be identified via manual browsing: start from the main page, then walk through the application following every link. That is the basic approach. There are also automatic tools that perform web crawling for us (a minimal sketch follows below). Unfortunately, these kinds of tools are great for discovering the public paths of the application but not so good at discovering routes that are protected.

[Image: web crawling]
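To make the idea concrete, here is a bare-bones crawler sketch, assuming the third-party requests and beautifulsoup4 packages and a placeholder start URL; real crawlers add politeness delays, depth limits, and robots.txt handling.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def crawl(start_url, max_pages=50):
    """Breadth-first crawl that stays on the starting domain."""
    domain = urlparse(start_url).netloc
    queue, seen = [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        # Collect every link on the page and follow same-domain ones
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == domain:
                queue.append(link)
    return seen

print(crawl("https://example.com"))  # placeholder start page
```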

The most effective approach is a combination of both, called user-directed spidering: walk through the application in the usual way while intercepting all the network calls with the help of an automatic tool. Burp Suite can help here; its Proxy tab has an intercept feature we can switch on.


With user-directed spidering, the user simply logs in to the application using their browser, and the proxy/spider tool picks up the resulting session and identifies all the additional content now available to that user.
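In code terms, an intercepting proxy simply sits between client and server. For instance, this sketch routes requests through Burp's default listener on 127.0.0.1:8080 so the proxy records every call; the target URL is hypothetical.

```python
import requests

# Burp Suite listens on 127.0.0.1:8080 by default; routing traffic
# through it lets the proxy record (and replay) every request.
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

response = requests.get(
    "https://example.com/dashboard",  # hypothetical authenticated page
    proxies=proxies,
    verify=False,  # Burp re-signs TLS with its own CA certificate
)
print(response.status_code)
```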
It is also common for applications to contain content and functionality that is not directly linked to or reachable from the main visible content. A common example is functionality that has been implemented for testing or debugging purposes and has never been removed.
The application can also present different functionality to different categories of users (anonymous, authenticated, and administrators).

Often, rather than listing and mapping every possible page, it is more useful to map the application through its functionalities. By identifying these, you can better understand the expectations and assumptions the application's developers made when implementing each function.

[Image: a map of the application's functionalities]

At this point we know, more or less, how the application is structured, which APIs are called, and what the possible paths are. The next step focuses on the application's functionality and reveals possible vulnerabilities.

- ANALYZING THE APPLICATION
Enumerating as much of the application's content as possible is only one element of the mapping process. Equally important is the task of analyzing the application's functionality, behaviour, and the technologies it employs, in order to identify the key attack surfaces exposed and to begin formulating an approach to probing the application for exploitable vulnerabilities. Here are some key areas to investigate:

  • The app's core functionality: the actions it can be leveraged to perform when used as intended.

  • Error messages, admin functions, and the use of redirects.

  • The core security mechanisms: session state, access controls, and authentication mechanisms with the supporting logic (user registration, password change, account recovery).

To understand what actions the application can take, it's important, first of all, to find all the possible APIs and then analyze them. Since most APIs follow the REST format, as we saw before, analyzing an API tells us its HTTP method and its parameters. Once we know what kind of parameters it accepts, we can brute-force the API, passing different types of params and observing how the web server responds.
The API discovery happens in the previous phase, when we navigate the application while intercepting all the network calls.

Now we have to analyze them: the two most common HTTP methods are GET and POST; the first is used when we ask for a resource, and the second when we send data that the server needs to persist.

[Image: Burp Intruder]
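Burp Intruder automates this, but the core idea can be sketched in a few lines of Python; the endpoint, the parameter name q, and the payload list are all assumptions for illustration.

```python
import requests

URL = "https://api.example.com/search"  # hypothetical endpoint

# A tiny payload list: unexpected types and values for the "q" parameter.
payloads = ["test", "0", "-1", "' OR '1'='1", "<script>", "A" * 5000]

for payload in payloads:
    r = requests.get(URL, params={"q": payload})
    # Differences in status code or response length often hint
    # at how the server handles (or mishandles) the input.
    print(f"{payload[:20]!r:24} -> {r.status_code} ({len(r.content)} bytes)")
```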

Since every application with a public web user interface should have a login page, it's also important to understand the core security mechanism. The way the session is handled may differ between applications. It's important to know what type of authentication scheme you are working with, because many modern applications send an authentication token with every request. This means that if we can reverse engineer the type of authentication used and understand how the token is attached to requests, it will be easier to analyze other API endpoints that rely on an authenticated user's token.

[Image: the major authentication schemes]
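For example, with a bearer-token scheme, one of the most common today, the token travels in the Authorization header. A minimal sketch, assuming a hypothetical login endpoint that returns the token in a JSON field named token:

```python
import requests

BASE = "https://api.example.com"  # hypothetical API

# Log in once and capture the token the server hands back.
# (The field name "token" is an assumption for this sketch.)
token = requests.post(
    f"{BASE}/login",
    json={"username": "alice", "password": "s3cret"},
).json()["token"]

# Replaying the same header against other endpoints lets us probe
# which resources this authenticated identity can reach.
headers = {"Authorization": f"Bearer {token}"}
print(requests.get(f"{BASE}/admin/users", headers=headers).status_code)
```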

To recap, the recon phase focuses on acquiring expertise about the application, enumerating its content, and analyzing its functionalities and APIs. Recon techniques are constantly evolving, however, and it can be difficult to determine precisely which techniques outshine the others, so you should always be on the lookout for new and interesting ones.
In the next part of this series, we will move to the offense side and go deep into some common attacks. See you there!
