Saturday, July 14, 2007

Microsoft Windows Vista: Windows Defender

(Windows Defender was released in October 2006 as a download for Windows XP and 2003. Now it's also built into Windows Vista, making it more convenient to protect your computer against spyware threats.)

Windows Vista comes with a built-in anti-spyware application called Windows Defender, to help you protect your computer against malicious software designed to gather information about you and your system for the purpose of advertising or even identity theft. Defender is an integral part of Vista's heightened security. Here are 10 things you need to know to use Defender to your best advantage.

Windows Defender is only one part of a multilayered security strategy:

Defender is designed to detect and remove or quarantine known and suspected spyware programs that may be installed on your computer without your knowledge. It does not prevent all attacks against your computer. Defender should be used in conjunction with other security mechanisms such as a firewall, antivirus software, and encryption.

Defender is enabled on Vista by default:

You can turn Defender on and off and configure its properties and behavior through the Windows Defender Control Panel applet. It can also be accessed through the Security Center in Vista. The interface is simple, with a one-click button to scan immediately for spyware and the ability to schedule automatic scans on a daily basis or on a selected day of the week at a time of your choosing.

Defender can perform three types of scans:

A Quick Scan looks in the locations where spyware is most commonly found. This saves time and catches most spyware. A Full Scan checks every drive and folder on the computer. This is the most thorough option, but it can take quite some time, depending on the size of your hard disk(s) and the number of files you have. During the scan, there may be a performance hit on other activities you perform on the computer. A Custom Scan allows you to select the specific drive(s) or folder(s) you want to scan. If Defender detects spyware during a Custom Scan, it will then perform a Quick Scan to remove or quarantine it.
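
If you prefer the command line, newer Defender builds ship a small utility called MpCmdRun.exe. Here is a minimal Python sketch that kicks off a Quick Scan with it; treat the path and the -Scan/-ScanType switches as assumptions to verify against your own build (run MpCmdRun.exe -? first):

import subprocess

# Sketch only: assumes MpCmdRun.exe lives at this path and that your Defender
# build supports the -Scan and -ScanType switches (verify with MpCmdRun.exe -?).
DEFENDER = r"C:\Program Files\Windows Defender\MpCmdRun.exe"

# ScanType 1 = Quick Scan, 2 = Full Scan
result = subprocess.run([DEFENDER, "-Scan", "-ScanType", "1"])
print("Scan finished with exit code", result.returncode)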

You can specify how you want Defender to perform a scan:

You can choose whether Defender should scan archived files and folders. You can choose to use heuristic methods to identify software that is likely to be spyware, based on patterns and behavior, in addition to using definition files that identify known spyware. In addition, you can choose whether to create a restore point before removing detected items, so that if a file that's necessary to one of your legitimate programs is removed by mistake, it will be easy to fix the problem. You can also specify files and folders that Defender should skip altogether when performing a scan.

Real-time protection alerts you immediately if a suspected spyware program attempts to install itself or run on your computer:

Real-time protection is enabled by default, but you can choose whether to use it and you can select which security agents should be turned on to monitor various aspects of the system. A number of security agents are available to monitor such items as startup programs, security-related configuration settings, IE add-ons, IE configuration settings, downloaded files and programs, services and drivers, application registration files, Windows utilities, or any program that's started.

Administrators can control how Defender runs on user machines:

Admins can allow all users to use Windows Defender to scan the computer, choose actions for Defender to take when suspected spyware is detected, and review Defender's activities. They can also restrict the use of Defender to those with administrative privileges. By default, everyone is allowed to use Windows Defender.

You can view the activities Windows Defender has performed via the History page:

On the History page, you'll see a list of programs and activities that includes a description of detected items, advice regarding what to do about each item, and resources such as the file location and registry keys associated with the program. You'll see the alert level, what action was taken on what date, and the current status of the item. You can also review a list of items you've permitted to run via the Allowed Items link. You can see what you've prevented from running, and remove or restore these items, via the Quarantined Items link.

Windows Defender classifies possible spyware threats according to four alert levels:

Severe means it's a malicious program that can damage your computer. High means it's a program that might collect your personal information or change your settings. Software classified as Severe or High alert should be removed immediately. Medium pertains to programs that might collect personal information but may also be part of a trusted program. Low alert signifies software that might collect information or change settings but that was installed in accordance with a licensing agreement you accepted. You should review programs flagged as Medium or Low alert and decide whether you want to block or remove them. Some programs are not yet classified.

You should have Defender check for new definitions on a regular basis:

To be effective, anti-spyware software uses definition files that must be kept up to date, because new spyware threats appear frequently. Best practice is to have Defender automatically check for new definitions through Windows Update before performing a scheduled scan. You can also check for new definitions manually. If you rely on manual updating only, you should check for new definitions at least once per week.

Microsoft relies on the SpyNet community of Defender users to help expand the spyware database:

You're not required to participate in SpyNet to use Defender, but if you do, Defender will send information to Microsoft about the suspected spyware it detects and the actions you apply to each item. You can join the SpyNet community easily via Tools>Settings, and you can select either a basic or advanced membership. With an advanced membership, you'll receive an alert when Defender detects software that hasn't been analyzed, and more detailed information is sent to Microsoft about detected software.

Microsoft Windows Vista: Network Access Protection (NAP)

Microsoft's Network Access Protection (NAP) is built into the Windows Longhorn Server and Windows Vista client operating systems and expands upon the functionality of the Network Access Quarantine Control feature in Windows Server 2003. NAP allows you to monitor the health status of all computers that attempt to connect to your network--not just remote access clients--and ensure that they're compliant with your health policies. Noncompliant computers can be given access to a restricted network where you can place resources they can use to gain compliance. Here are 10 basic facts you need to know before deploying NAP on your network.

NAP is a supplemental feature:

NAP does not take the place of other network security mechanisms, such as firewalls, anti-malware programs, and intrusion detection systems. It does not in any way prevent unauthorized access to your network. Instead, it helps protect your network from attacks and malicious software that can be introduced by authorized users who connect to your network via unpatched, misconfigured, or unprotected computers.

NAP can be deployed in two modes--monitoring mode or isolation mode:

If you configure a monitoring policy, authorized users are given access to the network even if their computers are found noncompliant, but the noncompliant status is logged so that administrators can instruct the users to bring the computers into compliance. In isolation mode, noncompliant computers are given access only to the restricted network, where they can find resources to gain compliance.

You can select compliance criteria for the computers that connect to your network:

Compliance criteria include requirements for service packs and security updates, antivirus software, anti-spyware protection, firewalls, and Windows Automatic Updates. The criteria are configured on the System Health Validator (SHV) on the NAP server.
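
To make the idea concrete, here is a purely illustrative Python sketch of the kind of comparison an SHV performs; the real SHV is native Windows code, so every name below is hypothetical:

# Purely illustrative sketch of SHV-style policy evaluation; all names here
# are hypothetical, not the real NAP API.
def validate_soh(soh):
    """Return the list of failed health checks from a client's Statement of Health."""
    failures = []
    if not soh.get("firewall_enabled"):
        failures.append("firewall disabled")
    if not soh.get("antivirus_up_to_date"):
        failures.append("antivirus definitions out of date")
    if soh.get("service_pack", 0) < 1:
        failures.append("service pack below required level")
    return failures

client_soh = {"firewall_enabled": True, "antivirus_up_to_date": False, "service_pack": 2}
failed = validate_soh(client_soh)
# Empty list -> grant full access; otherwise isolate to the restricted network.
print("compliant" if not failed else "restrict and remediate: %s" % failed)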

The NAP server must run Windows Longhorn Server:

The NAP server is a Network Policy Server (NPS). NPS is Longhorn's replacement for Internet Authentication Service (IAS) in Windows Server 2003 and provides authentication and authorization. NAP services include the NAP Administration Server and the NAP Enforcement Server. The System Health Validator (SHV) runs on the server.

NAP requires that the client computers have NAP client software installed:

The NAP client is built into Windows Vista, and a NAP client for Windows XP is expected to be made available with the release of Windows Longhorn Server. The System Health Agent (SHA) runs on the client. If you have computers on the network running operating systems that don't support NAP, you can exempt them from the health status requirements by creating exceptions, so that those computers can still access the network. If no exceptions are made for them, non-NAP capable computers will have access to the restricted network only.

The SHA prepares a Statement of Health (SoH) based on the health status of the client computer:

The NAP software submits the SoH to the SHV. The SHV communicates with the Policy Server and determines whether the health status provided in the SoH meets the requirements of your health policy. If it does, the computer is allowed full access to the network. If not (in isolation mode), the computer is given access to the restricted network where it can download the updates or software needed to come into compliance. The computers on the restricted network that contain these resources are called remediation servers.

You can use health certificates to prove compliance:

In this case, you need a Longhorn server running Internet Information Services (IIS) and Certificate Services to act as a CA and issue the health certificates. This server is called the Health Registration Authority (HRA). The NAP client sends the SoH to the HRA, which sends it to the NPS server. The NPS server communicates with the Policy Server to find out if the SoH is valid. If it is, the HRA obtains a health certificate for the client, which can be used to initiate IPSec-based communications.

There are four types of NAP enforcement:

IPSec enforcement relies on the HRA and X.509 certificates. 802.1x enforcement relies on an EAPHost NAP enforcement client and is used for clients connecting through an 802.1x access point. (This can be a wireless access point or an Ethernet switch.) Restricted access profiles are placed on noncompliant clients using packet filters or VLAN identifiers to restrict them to the restricted network. VPN enforcement relies on VPN servers to enforce the health policy when a computer attempts to make a VPN connection to the network. DHCP enforcement relies on the DHCP servers to enforce the health policy when a computer leases or renews its IP address. You can use one, some, or all of the enforcement methods on a given network.

Only computers that connect to the network via one of the four enforcement methods will have their access restricted if they're noncompliant:

DHCP enforcement is the easiest to deploy and most comprehensive because most computers will need to lease IP addresses (all except those assigned static addresses), but IPSec enforcement is the strongest enforcement method. When a computer's access is restricted, it will still have access to the DNS and DHCP servers, as well as the remediation servers. You can, however, place secondary DNS servers or forwarding servers on the restricted network, rather than primary DNS servers.

NAP is different from Network Access Quarantine Control in Windows Server 2003:

NAP can be applied to all the systems on the network, not just remote access clients. With NAP, you can also monitor and control the health status of visiting laptops and even on-site desktop computers. It's also easier to deploy because it doesn't require the creation of custom scripts and manual configuration with command-line tools, as does NAQC. In addition, third-party software vendors can use the NAP APIs to create NAP-compatible health status validation and network access limitation components. NAP and NAQC can be used simultaneously, but generally NAP will serve as a replacement for NAQC.

Microsoft Windows Vista: Service Hardening

Service hardening is one of many new security mechanisms in Windows Vista and the next generation of Windows server, currently known as Longhorn Server. Because it's not always desirable or possible to disable Windows services that provide attackers with an exploitable point of attack, the new operating systems include features that make it more difficult for service exploits to do damage.

Here are a few facts you should know about service hardening:

SCM manages services:

Windows services are programs that are managed by the Service Control Manager (SCM), which maintains a database of installed services and manages each service's state. Usually services start automatically when Windows boots and run continuously, making them always available and thus attractive to attackers.

Higher privileges = greater exposure:

In previous Windows operating systems, most services ran under the LocalSystem account, which has a high level of privileges. That meant that if the service were compromised, attackers could do major damage because they would have access to almost everything.

Vista and Longhorn Server run services with lowest possible privileges:

In Vista and Longhorn, many of the services that used to run under LocalSystem now run under the NetworkService or LocalService accounts, which have a lower level of privileges. Services run with the lowest possible privileges. Any privileges that a service doesn't need are removed, which helps reduce the attack surface.

Vista protects services by using "isolation" techniques:

Isolation techniques include Session 0 isolation, which prevents user applications from running in Session 0 (the first session created when Windows starts up). Only services and other applications that are not associated with a user session can run there. This protects the services from the actions of other applications.

Vista assigns a Security Identifier (SID) to each service:

Assigning a SID to each service allows services to be separated from one another and enables the operating system to apply the Windows access control model to restrict services' access to resources, in the same way user and group accounts' access can be restricted.

In Vista, access control lists (ACLs) can now be applied to services:

An ACL is a set of access control entries (ACEs). Every resource on the network has a security descriptor that contains the ACLs assigned to it. Permissions defining who or what can access that resource are stored in the ACL.
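
You can inspect both pieces of hardening data from the command line with sc.exe: the showsid verb derives the per-service SID from the service name, and sdshow dumps the service's security descriptor (its ACLs) in SDDL form. A small Python sketch, using the Windows Update service as an example:

import subprocess

# Inspect service-hardening data with sc.exe (Vista or later).
service = "wuauserv"  # Windows Update service, as an example

subprocess.run(["sc", "showsid", service])  # per-service SID derived from the name
subprocess.run(["sc", "sdshow", service])   # security descriptor (ACLs) in SDDL form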

Vista allows the application of network firewall policies to services:

The policy is linked to the service's SID. This allows you to control how the service is allowed to access the network and prevent it from using the network in ways it's not supposed to, such as sending outbound network traffic. The Vista Firewall is integrated with the service hardening feature.
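
For example, Vista's netsh advfirewall context lets you bind a rule to a specific service. The sketch below blocks outbound traffic for one service; the service chosen and the rule details are illustrative only, and the command must run from an elevated prompt:

import os
import subprocess

# Sketch: a Vista firewall rule scoped to one service (here wuauserv, as an
# example) via its service SID. Illustrative, not prescriptive.
svchost = os.path.expandvars(r"%SystemRoot%\System32\svchost.exe")
subprocess.run([
    "netsh", "advfirewall", "firewall", "add", "rule",
    "name=BlockExampleServiceOutbound",
    "dir=out", "action=block",
    "program=" + svchost,
    "service=wuauserv",
])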

Specific services can be restricted so that they can't make edits to the registry, write to system files, and so forth:

If a service needs to perform those actions to function properly, it can be restricted so that it can write only to specific areas of the registry or a file system. Services can also be prevented from making changes to configuration settings and performing other actions that can be exploited by an attacker.

Each service is pre-assigned a service hardening profile:

This profile defines what the service should and shouldn't be allowed to do. Based on this profile, the SCM assigns the services only the privileges they must have. This all happens transparently, with no configuration or administrative overhead required.

Service hardening does not prevent attackers from compromising services:

The Windows Firewall and other protective layers are designed to prevent that. The purpose of service hardening is to reduce the level of damage that can be done if the service does become compromised. It provides inner layer protection in Vista's multilayered security strategy.

User Account Control in Microsoft Windows Vista

Vista's User Account Control (UAC) protects against malware elevation of privileges, even when someone is logged on with an administrative account.
UAC is at the heart of Windows Vista's focus on security, but it is also one of Vista's most misunderstood new features. Love it or hate it, you'll need to learn more about it to balance security and user-friendliness in your Vista deployment. Let's take a look at 10 things you need to know about UAC before you roll out Vista, whether on an individual machine or throughout an organization.

UAC cuts the risk of logging on as an administrator:

It's a common problem: Users who have administrative accounts tend to log on with those accounts, even if they also have regular user accounts and realize that using a standard user account for routine tasks is a better security practice. It's just more convenient, and human nature puts a high priority on convenience.

With User Account Control, some of the risk of logging on as an admin is ameliorated because Vista performs most tasks with regular user privileges even when someone is logged on as an administrator.

The logon process has changed:

Although it appears the same to the user--you still enter your account name and password in the same way--the Vista logon process has changed under the hood. Now when you log on with an administrative account, you not only get an access token for that account, but you also get a standard user access token. The standard token is used to launch Explorer.exe, so all child processes will run with that token's privileges unless privileges are elevated by responding to a UAC prompt.
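
A quick way to see the split token in action: the (deprecated but real) shell32 API IsUserAnAdmin() reports whether the current process holds the elevated token. A minimal, Windows-only Python sketch:

import ctypes

# Nonzero only when the current process token is elevated; under UAC, an
# administrator's normal processes run with the standard (filtered) token.
if ctypes.windll.shell32.IsUserAnAdmin():
    print("Running with the elevated administrator token")
else:
    print("Running with the standard user token (default under UAC)")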

It's easier to tell which tasks require admin privileges:

Vista makes it easier to know which actions will require elevated privileges. Options in dialog boxes for which you must have administrative privileges are marked with a shield-shaped icon to indicate that if you select that option, you'll need to respond to the UAC prompt (or, if Group Policy is so configured, you may not be able to perform the operation at all when logged on as a standard user).

Administrator Approval Mode is the default:

By default, Vista runs with standard user privileges, even when you're logged on as an administrator. If a task requires administrative privileges, a dialog box asks for your permission to continue the action. This prevents malware from elevating privileges without your knowledge.

You can make it more secure:

You can change the behavior of UAC by editing Group Policy (the local security policy or domain policy). You can increase security by requiring that a user enter administrative credentials to elevate privileges, rather than just clicking the Continue button, even when already logged on as an administrator. Users logged on with standard user accounts will, by default, be prompted to enter administrative credentials when they try to perform a task that requires elevated privileges. In a domain environment, the default is to disallow the elevation of privileges. You can change these behaviors by editing Group Policy, too.

You can increase security even more:

By default, both signed and unsigned executable files will run with elevated privileges when you respond to the prompt. However, in a high security environment, this behavior can be changed by editing Group Policy so that Vista will elevate only executables that are signed and valid. When you enable this policy, Vista will check the executable's digital certificate whenever that application requests elevation of privileges.
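
These UAC policies surface as registry values under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System. Here is a read-only Python sketch that shows the current settings; the value names are the documented ones, though whether each is populated depends on your policy:

import winreg

# Read-only sketch: UAC policy settings live under this key on Vista.
KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
names = [
    "EnableLUA",                    # 1 = UAC on (Admin Approval Mode)
    "ConsentPromptBehaviorAdmin",   # prompt style for administrators
    "ConsentPromptBehaviorUser",    # prompt style for standard users
    "ValidateAdminCodeSignatures",  # 1 = elevate signed-and-valid executables only
    "PromptOnSecureDesktop",        # 1 = show the prompt on the Secure Desktop
]

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    for name in names:
        try:
            value, _ = winreg.QueryValueEx(key, name)
            print(f"{name} = {value}")
        except FileNotFoundError:
            print(f"{name} not set")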

You can make it less secure (but more convenient):

It's not recommended, but if you're in an environment that you're absolutely certain is free of malware, you can edit Group Policy to allow those logged on as administrators to perform tasks with elevated privileges without being required to respond to the UAC prompt. This essentially negates the extra security provided by UAC when logged on as an administrator and exposes the system to the same security threats that exist when you log on with an admin account in pre-Vista versions of Windows. However, it does do away with the sometimes annoying dialog boxes and makes it more convenient for admins who are, for example, installing a lot of software.

You can turn off UAC or the Secure Desktop:

When UAC prompts for permission to elevate privileges, the desktop is locked so that it can receive messages only from Windows processes. No other software can interact with the desktop at this time, and it goes dark to indicate this. By editing Group Policy, you can disable the Secure Desktop. The prompt will still pop up but will be displayed on the interactive desktop.

It's also possible (although not recommended) to turn off UAC completely. This is done by disabling the policy to Run All Administrators In Administrator Approval Mode.

Legacy applications may need to be marked:

Pre-Vista applications that were not written to be aware of UAC may have to be specially configured to work with Vista. If the programs need to perform tasks that require administrative privileges, you need to mark them with a requested execution level to prompt users for approval. This can be done with the Application Compatibility Toolkit, available as a free download from Microsoft. For more details, see TechNet's Windows Application Compatibility page.
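
The usual mechanism is a manifest containing a requestedExecutionLevel element. As a sketch, the snippet below writes an external manifest beside a hypothetical LegacyApp.exe; embedding the manifest into the binary (e.g., with mt.exe) is the more robust route:

# Sketch: write an external manifest beside a legacy executable so Vista
# prompts for elevation. "LegacyApp.exe" is a hypothetical example name.
MANIFEST = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
"""

with open("LegacyApp.exe.manifest", "w", encoding="utf-8") as f:
    f.write(MANIFEST)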

UAC is not a substitute for other security measures:

UAC provides extra protection; for example, it makes it more difficult for malicious software to do harm. However, it's not a substitute for antivirus and anti-spyware programs, and you should still use a good, properly configured firewall. To be effective, security must be multi-layered, and UAC is only one element of a good client security plan.

Saturday, July 7, 2007

Network Security: A Game of Golf

As the saying goes: "It is like a game of golf--a game that is played but never won. So if you cheat (take shortcuts), you cheat yourself."

One thing I have frequently observed during network security audits is that organizations rely too much on network firewalls and hardening checklists. Certainly these two are key components of a complete network security suite; however, ignoring proper housekeeping and regular health checks causes the majority of network unavailability issues and attacks on network devices.

Sounds unbelievable? OK, go through the list of network vulnerabilities below (or, if you like, the list of IT staff ignorance) and then retrieve the Root Cause Analyses (RCAs) of all your network-related problem tickets. Analyze the RCAs and you will realize that the vulnerabilities mentioned below are worth immediate attention. Also keep in mind that, many a time, helpdesk staff and support engineers massage the RCA so that the end user gets the impression of some severe technical issue behind their problem ticket.

List of Network Vulnerabilities:

Unavailability of Network Design Guidelines for the organization may result in:

Problems related to DMZ
Un-patched servers kept on private network
Frequent Bandwidth choking (Slow Network)
IP Address Conflicts
Inefficient VLAN configurations
Access Control List (ACL) misconfiguration
Data Leakage through Tele-workers
Rogue Wireless Access Points
Unauthorized access through extranet/client network (Inbound Connections)

Unavailability of a Patch Management Process for network devices may result in:

Exploitation of known vulnerabilities of network devices
Unwanted services running on network devices
Insecure implementations of network protocols (e.g., SNMP, NTP)

Absence of regular health checks of network devices may result in:

Outdated OS versions
Outdated configurations
Redundant entries in ACLs
Redundant user IDs configured on devices
Mismatched network passwords/community strings (e.g., TACACS+, RADIUS, OSPF)


Lack of clarity on preferred vendors/products may result in:

Product inter-compatibility issues
Scalability issues
Outdated (end-of-sale) products

Unavailability of a cable layout may result in:

High-downtime during cable damage
Service Disruptions during new link commissioning
Problems related to Uplink ports’ connectivity

Unavailability of a Network Diagram may result in:

High-downtime during network issues
Service Disruptions during network problem resolution of some other issue
Problems related to Uplink ports’ connectivity
Routing Loops
Redundant Links
Inefficient Data Traffic Paths
Misconfiguration during Change Implementations

An overloaded power supply may result in:

Frequent power tripping
Short-circuits
Unavailability of power sockets during new installations
Unavailability of backup power sockets

Insufficient rack space may result in:

Overheating of devices, which leads to high error rates
A web of patch cords--you pull one and get four
No way to identify individual cables

Improper earthing/grounding of racks may result in:

I would prefer to leave this explanation to insurance advisors. :-)

Unavailability of regular automated vulnerability scans on the network may result in:

Successful attacks by script kiddies
Successful attacks from the internal network
Unnoticed misconfigurations by network admins

No Service Level Agreements (SLAs) may result in:

Inefficient network designs due to high pressure from management/end users for early change implementation and network commissioning
Inefficient review/approval of change requests
Support engineers continuously busy on the telephone answering end-user complaints when they should be working on problem resolution, i.e., high downtime
Unwanted Escalations


These overlooked network vulnerabilities are the major cause of most network-related issues in any organization. Some organizations have documented procedures, but EXECUTION is still a challenge. Yes, internal IT infrastructure is a cost, but a cost worth paying to avoid losses due to high downtime, a high number of problem tickets, outdated devices, insurance premiums, and network attacks.

Application Security: A PCI Compliance Requirement

Payment Card Industry (PCI) Data Security Standard
https://www.pcisecuritystandards.org/pdfs/pci_scanning_procedures_v1-1.pdf

6.5 Develop all web applications based on secure coding guidelines such as the Open Web Application Security Project guidelines. Review custom application code to identify coding vulnerabilities. Cover prevention of common coding vulnerabilities in software development processes, to include the following:

6.5.1 Unvalidated input
6.5.2 Broken access control (for example, malicious use of user IDs)
6.5.3 Broken authentication and session management (use of account credentials and session cookies)
6.5.4 Cross-site scripting (XSS) attacks
6.5.5 Buffer overflows
6.5.6 Injection flaws (for example, structured query language (SQL) injection)
6.5.7 Improper error handling
6.5.8 Insecure storage
6.5.9 Denial of service
6.5.10 Insecure configuration management

6.6 Ensure that all web-facing applications are protected against known attacks by applying either of the following methods:

• Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security
• Installing an application layer firewall in front of web-facing applications.

Note: This method is considered a best practice until June 30, 2008, after which it becomes a requirement.

Monday, June 25, 2007

Paros Proxy: Step-by-Step Guide (Automated Security Assessment)

In continuation of the last post, here is a hands-on with Paros proxy. As already mentioned, it is a great tool for a first-cut assessment of web-application security. Configure your web browser's proxy settings, visit a few pages of the test site to create a seed for the crawler to start from, spider, and finally scan. That's it! Within 15 minutes of launching the application, you will have a list of vulnerabilities from which you can start a further, in-depth assessment.

Step 1:

Check the Proxy settings of Paros from Tools>Options



Local proxy is the setting you will configure in your web browser. By default, Paros uses localhost as the proxy address and 8080 as the port.
Under Connection settings, you configure the address and port number of your corporate/ISP proxy. If you are not behind a proxy server, leave it unchecked (the default setting). Additionally, you can bypass certain addresses and configure proxy authentication details.
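
A quick way to confirm the local proxy is actually intercepting: push one scripted request through it and watch it appear in Paros' Sites pane. A minimal Python sketch, assuming the default localhost:8080 setting:

import urllib.request

# Route one request through Paros (default localhost:8080); it should then
# show up in the Sites pane and in the proxy's transaction log.
proxy = urllib.request.ProxyHandler({"http": "http://127.0.0.1:8080"})
opener = urllib.request.build_opener(proxy)
with opener.open("http://www.bayden.com/sandbox/shop/") as resp:
    print(resp.status, resp.headers.get("Content-Type"))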





Step 2:

Next, open the proxy settings configuration box of your web browser and configure the proxy server address and port number, i.e., the Paros settings.

Step 3:

Next, open the web site you want to assess.

Then access some of its URLs manually, so that Paros gets a seed to start crawling from.





Step 4:


Once a seed has been generated in Paros, highlight the web site, right-click, and select Spider.



This will start the auto crawling function.
Step 5:


Now select Analyze>Scan Policy from the top menu, and select the vulnerabilities you want to scan for. Notice that it covers almost all of the OWASP Top 10 vulnerabilities.
Step 6:

Once the scan policy is defined, you can start the scan for one or more (or all) of the web sites visible in the Sites pane.





Once the scan process is complete, you can view the results with test data in the Alerts window (bottom).

Step 7:

Now you can generate a detailed report on the findings from Report>Last Scan Report.





The assessment report will include the vulnerability description, exact instances (URL and affected parameters), the recommended solution, and relevant references.

I hope the post was informative, and that within a short time you too will be able to perform your first automated web-application vulnerability scan.

Automated Web-Site Crawling Tools

Manual application security testing provides granular control; however, it can become exhausting and monotonous. As a professional application security tester, you will often come across projects where testing of hundreds of web pages is required, and if you do not have sufficient time, there is a high probability of missing critical vulnerabilities.

Hence the requirement for automated crawling and scanning, which can save a tremendous amount of time and labor. Apart from these two obvious benefits, you get a high-level schema of the whole web site in just a few minutes, and once you can view the logical interconnectivity of web pages/scripts, it is far easier to plan the attack phase.
The two most popular free automated crawlers are Paros and WebScarab. Basically, these are proxy applications with a built-in crawling function. They operate as a man-in-the-middle between your browser and the web server: every request/response between your browser and the server is trapped, and the proxy maintains a log of every single transaction. You can select any of the sites cached by the proxy and crawl it (Paros and WebScarab use the term "spider" for crawling) for interconnected web pages.

(Screenshot: web-site layout after crawling is finished)
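
To make the spidering idea concrete, here is a minimal same-host crawler sketch in Python--a toy version of what Paros/WebScarab do, not their actual implementation:

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import urllib.request

# Minimal same-host spider: fetch a page, harvest its links, repeat breadth-first.
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def spider(seed, limit=20):
    host = urlparse(seed).netloc
    seen, queue = {seed}, deque([seed])
    while queue and len(seen) <= limit:
        url = queue.popleft()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue                       # skip unreachable pages
        print(url)
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

# spider("http://www.bayden.com/sandbox/shop/")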

Additionally, Paros can scan a web site for vulnerabilities and generate a vulnerability report with vulnerability priority, description and recommended countermeasures.
Other crawler applications are:
  • Wget (It can crawl and download the contents of a web site, supported on Windows & *nix)
  • Teleport Pro (It is a windows based commercial tool for crawling and local caching)
  • Lynx (It is an advanced, text based browser for *nix platform)

So far, we have covered the positives of automated crawlers; however, they have the following limitations as well:

  • Automated crawlers do not work well with client-side code like JavaScript, Java applets, Flash, and ActiveX.
  • Since they are automated by nature, they do not interact well with web pages requiring human input and may not be able to reach all possible child routes (web pages).
  • Crawlers may not be able to retrieve the complete hierarchy of sites with multiple levels of web authentication. For example, even after logging in, you may be required to submit a transaction password or fill in an input based on graphics (CAPTCHAs).
  • Crawlers may not be able to retrieve the complete hierarchy of sites that produce different web pages for different user types (role-based authentication). In such cases, only pages accessible to the current user will be retrieved.
  • Crawlers may miss URLs that are coded inside function calls instead of HTML code.
  • Crawlers do a multi-threaded search, so web sites that restrict multiple simultaneous sessions for the same user may lock the account and result in denial of service.

For these reasons, professional testers prefer a mix of automated tools and manual testing. They split the whole project on the basis of different access (crawl) levels and test each part separately.

Friday, June 22, 2007

XSS Vulnerability in www.blogger.com

XSS Vulnerability example:

Hyderabadi Biryani @ HYDERABAD HOUSE

In the olden days, when armies marched long stretches, they had to be fed in the most befittingly nutritious manner. Hence they carried with them herds of sheep and goats, and rations of rice and wheat, as their staple diet. The meat was then cooked with rice or wheat; what was cooked with rice came to be known as BIRYANI, and with wheat, HALEEM. The remnants of the lamb, such as trotters and organs, were cooked overnight and served the next morning, coming to be known as NAHARI/PAYA.

The Nizam and other members of the royal family, such as Salar Jung and Viqar-ul-Umrah, with unwavering dedication from the royal cooks of the erstwhile Nizam, perfected this most ordinary cooking by using expensive, special, and aromatic ingredients to produce food of the highest quality of taste and nutrition, creating a perfect balance of proteins, carbohydrates, etc. The extent of care taken is evident from the fact that even the metal used in the cooking vessels was predominantly copper, which helps in slow cooking while retaining the original flavours of the meats.

HYDERABAD HOUSE (established in 1975 by its founder, the late Mir Baber Ali) continues this legacy with equal finesse, which has made it a household name among one and all in Hyderabad. Hyderabad House intends to promote this unique cuisine not only in the city of Hyderabad but also in all parts of the country and abroad.

Tuesday, June 19, 2007

Tamper Data: Firefox Add-On for Web-Application Security Testing

(Images have been used for HTML tags and scripts, as tags are not permitted in some Blogspot fields and scripts might get executed in readers' browsers.)

Tamper Data is a very powerful, free add-on for Mozilla Firefox. Truly speaking, I never expected an 80 KB plugin to have so much functionality.
You can tamper with HTTP/HTTPS requests (as the name suggests) by trapping browser requests/responses; manipulate HTTP parameters like content-type and length (useful in HTTP splitting), cookies, and POST data; add or delete elements/fields; and, last but not least, use a good number of built-in test cases during web-application security testing.

Let's have a quick look at the product and, in parallel, learn some hacking... ;-)

Tablet Super Store is an online PC shop (http://www.bayden.com/sandbox/shop/), intentionally designed with a vulnerability so that wannabe penetration testers can test their mettle.






We will also try to hack it, but in a while...

First, some homework with Tamper Data (TD). If you have downloaded and installed the tool from the Mozilla Firefox add-ons site (https://addons.mozilla.org/en-US/firefox/addon/966), you will find it under Tools>Tamper Data.
Next, select the Start Tamper option from the TD menu to trap every web request/response. As you can see in the image below, the moment a request/response is generated, it gets trapped between your browser and the web server. You get 3 options:
  1. Tamper (To manipulate)
  2. Submit (Accept the request/response AS-IS)
  3. Abort Request (stop the data flow before it reaches the web server)

Additionally, it asks Continue Tampering? (no need for explanation)




While shopping at the PC shop, I selected the quantity of PCs I wanted to purchase, clicked ORDER, and got a pop-up with the three options mentioned above. Let's tamper...
Wow--all HTTP request/response fields are available in an easy-to-understand format. (I hope you also prefer the tabular view of HTTP data over the raw view; if you don't understand the raw view at the moment, forget it.)

Here comes the best part of TD. As you can see in the image below, you get a good number of options to try on the trapped data: add/delete fields, play with encoding/decoding, and try some input validation, cross-site scripting, or SQL injection tests.


Let's see what we have for input validation...


A variety of data formats you can try for input validation, client-side validation, and sometimes buffer overflow tests.
Next comes XSS, or cross-site scripting.

You have a good variety of scripting tests. You may start with the Alert test, which works well on most XSS-vulnerable sites. (Hint: Try it in someone's guestbook or feedback form. If the site is XSS vulnerable, she will get a nice pop-up with a hello message written on it.)
Next in the row is SQL injection (the mother of all database hacks).


Try these tests for authentication and authorization testing, i.e., to get the whole list of accounts when you are supposed to have access to only yours--or maybe none. :-)
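
The probes behind those two menus boil down to a handful of classic payloads. Here is a Python sketch of the same first-pass tests; the target URL and field names are hypothetical, so substitute your own test site:

import urllib.parse
import urllib.request

# Classic first-pass probes like the ones in TD's XSS and SQL menus.
XSS_PROBE = "<script>alert('hello')</script>"
SQLI_PROBE = "' OR '1'='1"

def post(url, fields):
    data = urllib.parse.urlencode(fields).encode()
    with urllib.request.urlopen(url, data=data, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

# If the probe comes back unescaped, the field is likely XSS-prone.
page = post("http://test.example/guestbook", {"comment": XSS_PROBE})
print("reflected!" if XSS_PROBE in page else "escaped or filtered")

# The same helper works for SQL injection probes against a login form.
page = post("http://test.example/login", {"user": SQLI_PROBE, "password": "x"})
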
Now back to the online PC shop. So how many to buy... oh, you can buy only up to 3 PCs in one shot. :-(


Let's tamper... Hmm, so there lies the hidden cost field. How about 5 dollars per PC? And yes, let's buy 30 PCs in one shot...

Bingo! 30 PCs for $150... not bad for a first hands-on with web-application penetration testing. :-)
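
For the curious, here is the same trick expressed as a raw POST, without the browser. The field names and the endpoint path are assumptions--lift the real ones straight from the request you trapped in TD:

import urllib.parse
import urllib.request

# Resubmit the order form with the hidden price field changed. The "qty" and
# "cost" field names and the endpoint path are guesses; copy the real ones
# from the request trapped in Tamper Data.
fields = {"qty": "30", "cost": "5.00"}
data = urllib.parse.urlencode(fields).encode()
url = "http://www.bayden.com/sandbox/shop/"  # append the real order path here
with urllib.request.urlopen(url, data=data, timeout=10) as resp:
    print(resp.read().decode("utf-8", errors="replace"))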


Similar to TD you have TamperIE for Internet Explorer. However, TamperIE is not as powerful as TD.

For the geeks... there are more powerful tools, but everything comes at a cost. Either they are commercial tools (AppScan, WebInspect, Acunetix) or man-in-the-middle proxies (Paros, WebScarab, both free), which require a somewhat better understanding of pen-testing concepts and proxy configuration.

For beginners, TD is worth a try.

Monday, June 18, 2007

Penetration Testing: Web-Applications Test-Cases (Chapter 1)

(Images have been used for HTML tags and scripts, as tags are not permitted in some Blogspot fields and scripts might get executed in readers' browsers.)





Broken Authentication and Session Management:


• For well-known applications, try a Google search for default usernames and passwords. Try those first.

• If there is no lockout policy in place, try a brute-force or dictionary attack. (You may try the Brutus tool, which supports both basic authentication and form-based authentication.)

• Basic Authentication: Basic authentication sends the user's credentials Base64-encoded in the 'Authorization' HTTP header (not a cookie). Use WebScarab -> Tools -> Transcoder to Base64-decode the value in the Authorization header, or see the sketch after this list.

• The server may skip authentication if you send the right cookie. Intercept the cookies using a proxy (Paros or WebScarab, both free) and try to replay a cookie.

• Try guessing cookie values, and manipulate cookie values in transit using Paros or WebScarab.
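
Here is the Base64 decode from the third bullet as a short Python sketch, using a made-up admin:secret credential pair:

import base64

# Decoding a trapped Basic-auth header; this example header encodes a
# made-up "admin:secret" credential pair.
header = "Basic YWRtaW46c2VjcmV0"
b64 = header.split(" ", 1)[1]
print(base64.b64decode(b64).decode())  # -> admin:secret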

Buffer Overflows:

Make an HTTP request to the application with a long query string. The request should be denied, and the application should not crash. (A scripted version of this probe appears after the sample strings below.)



You may try long Character string //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////


or

2652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652652 (You may try TamperIE tool for Internet Explorer, it’s a free tool and has few inbuilt cases)
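
Instead of pasting strings by hand, you can generate the probes programmatically. A sketch, with a hypothetical target URL and parameter name:

import urllib.error
import urllib.request

# Generate a long-string probe instead of pasting one. The URL and the "q"
# parameter are hypothetical; only test applications you are authorized to test.
payload = "A" * 4096  # or "265" * 1400, "/" * 2048, etc.
url = "http://test.example/search?q=" + payload
try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print("HTTP", resp.status)       # a clean 2xx means it coped with the input
except urllib.error.HTTPError as e:
    print("Rejected with HTTP", e.code)  # a graceful denial is the passing result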



Improper Error Handling:

You can change the length, existence, or values of authentication parameters. Try deleting a parameter ENTIRELY with a browser plug-in or proxy. Apart from interesting error messages, there is a high probability that you may even get authenticated.
Insecure Storage:

Primarily, you test for weak encoding methods used for session IDs, cookies, basic authentication, etc. You may try the Cain & Abel tool (free) or online ASCII converters.

Denial of Service:

Access two applications/services hosted on the same server. Bombard one of the applications/services with a flood of requests, then try to make a request to the other application. The request should be denied.
If account lockout is configured, try a high number of invalid logons to trigger a lockout. You may use automated tools.
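
A lockout check can be scripted as a burst of invalid logons against a test account you own. A sketch with a hypothetical URL and field names--use it only on systems you are authorized to test:

import urllib.error
import urllib.parse
import urllib.request

# Fire a burst of invalid logons and watch for the response changing to a
# "locked out" page. LOGIN and the field names are hypothetical.
LOGIN = "http://test.example/login"
for attempt in range(10):
    data = urllib.parse.urlencode({"user": "testuser",
                                   "password": "wrong-%d" % attempt}).encode()
    try:
        with urllib.request.urlopen(LOGIN, data=data, timeout=10) as resp:
            print(attempt, resp.status)
    except urllib.error.HTTPError as e:
        print(attempt, e.code)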


Insecure Configuration Management:

Try to guess the URL for the admin page
Try directory traversal
Try OS command injection (see the sketch after this list)
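
All three checks can be kicked off with a simple probe loop. A sketch; the base URL, wordlist, and payloads are illustrative examples only:

import urllib.error
import urllib.request

# First-pass probes for the three checks above (illustrative examples only).
BASE = "http://test.example"
paths = [
    "/admin/", "/administrator/", "/console/",    # admin-page guessing
    "/../../../../etc/passwd",                    # directory traversal
    "/cgi-bin/test?cmd=;cat%20/etc/passwd",       # OS command injection
]
for path in paths:
    try:
        with urllib.request.urlopen(BASE + path, timeout=10) as resp:
            print(resp.status, path)
    except urllib.error.HTTPError as e:
        print(e.code, path)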


To be continued... with Chapter 2


Friday, June 15, 2007

Cost-Cutting, Compliance and Security

My curiosity propelled me into the world of computer security. The beginning of the journey was quite tough, as the bean counters of the corporate world were busy with cost cutting (really? Or does cost cutting not apply to the boss's female secretary and other cronies? ;-))

However, things have started improving for security consultants, as organizations have started acting for the sake of standards compliance (typically a whitewash).

Hopefully, things will improve further, and Chief Security Officers/Managers will get a promotion from the role of scapegoat to an active role in their organizations. Till that time, live with cost cutting and compliance...
            Hopefully, things will improve further and Chief Security Officers/Managers will also get a promotion from the role of scapegoat to some active role in the organizations. Till that time, live with cost cutting and compliance…..