Mar 19, 2012

The Available Plugins Of Ettercap

If you want to see how to do ARP poisoning with ettercap, you can go to the source for that :)

Available plugins :

arp_cop  1.1  Report suspicious ARP activity
autoadd  1.2  Automatically add new victims in the target range
chk_poison  1.1  Check if the poisoning had success
dns_spoof  1.1  Sends spoofed dns replies
dos_attack  1.0  Run a d.o.s. attack against an IP address
dummy  3.0  A plugin template (for developers)
find_conn  1.0  Search connections on a switched LAN
find_ettercap  2.0  Try to find ettercap activity
find_ip  1.0  Search an unused IP address in the subnet
finger  1.6  Fingerprint a remote host
finger_submit  1.0  Submit a fingerprint to ettercap’s website
gre_relay  1.0  Tunnel broker for redirected GRE tunnels
gw_discover  1.0  Try to find the LAN gateway
isolate  1.0  Isolate an host from the lan
link_type  1.0  Check the link type (hub/switch)
pptp_chapms1  1.0  PPTP: Forces chapms-v1 from chapms-v2
pptp_clear  1.0  PPTP: Tries to force cleartext tunnel
pptp_pap  1.0  PPTP: Forces PAP authentication
pptp_reneg  1.0  PPTP: Forces tunnel re-negotiation
rand_flood  1.0  Flood the LAN with random MAC addresses
remote_browser  1.2  Sends visited URLs to the browser
reply_arp  1.0  Simple arp responder
repoison_arp  1.0  Repoison after broadcast ARP
scan_poisoner  1.0  Actively search other poisoners
search_promisc  1.2  Search promisc NICs in the LAN
smb_clear  1.0  Tries to force SMB cleartext auth
smb_down  1.0  Tries to force SMB to not use NTLM2 key auth
stp_mangler  1.0  Become root of a switches spanning tree


If you like my blog, Please Donate Me

Check the addons of Chrome by kkotowicz

Webpages can sometimes interact with Chrome addons, and that can be dangerous; more on that later. Meanwhile, a warmup: a trick to detect which addons you have installed.

While all of us are used to the http / https URI schemes, current web applications sometimes use other schemes, including:

    javascript: URIs, which have been bypassing XSS filters for years
    data: URIs, a common source of new XSS vulnerabilities
    view-source:, which may be used for UI-redressing attacks
    file:, which reads your local files

Tough questions
Throughout the years, there have always been questions on how documents from these schemes are supposed to be isolated from each other (think of it like a 2nd order Same Origin Policy). Typical questions include:

    Can an XMLHttpRequest from an http:// document load a file:// URL? And the other way around?
    Can a document from https:// load a script from http://? Should we display an SSL warning then?
    Can an http:// document have an iframe with a view-source: src?
    Can a data: URI access the DOM of the calling http:// document?
    Can a file:// URL access a file:// URL from an upper directory? (it's not so obvious)
    What about:blank?
    How do we handle 30x redirections to each of those schemes?
    What about passing the Referer header across schemes?
    Can frames talk across schemes? Would window.postMessage() work?
    and many, many more issues

In general, all these questions come down to:

    How should we isolate the schemes from each other?
    What information is allowed to leak between scheme boundaries?

Every single decision made by browser vendors (or standards bodies) in these cases has consequences for security. There are differences in implementation, some of them very subtle. And there are subtle vulnerabilities. Let me present one example of such a vulnerability.
Meet chrome-extension://
Google Chrome addons are packaged pieces of HTML(5) + Javascript applications. They may:

    add buttons to the interface
    launch background tasks
    interact with pages you browse

All extension resources are loaded from dedicated chrome-extension:// URLs. Each extension has a globally unique identifier. For example,
chrome-extension://oadboiipflhobonjjffjbfekfjcgkhco/help.html is the URL of the help.html page from Google Chrome to Phone (you can try it, if you have this extension enabled).

Extensions interact with web pages you visit and have access to their DOM, but the Javascript execution contexts are separated (they cannot call each other's Javascript code, and for good reason).

However, even in this separation model there is still room for page <-> addon cooperation. Malicious HTTP pages might interact with addons in various ways. One simple example is addon enumeration.
Finding your addons one by one
With a little Javascript code I can easily test whether you're using a certain Chrome addon. Give me a list of the most popular extensions and I'll test all of them in milliseconds. Why would I want that as an attacker?

    to fingerprint your browser (ad networks love this)
    to launch an attack against a certain known vulnerable addon (wait for the next post for this ;) )

See demo of Chrome addons fingerprinting. (src here)
The trick is dead simple:

var detect = function(base, if_installed, if_not_installed) {
    var s = document.createElement('script');
    s.onerror = if_not_installed;
    s.onload = if_installed;
    s.src = base + '/manifest.json';
    document.body.appendChild(s); // actually fire the request
};
detect('chrome-extension://' + addon_id_youre_after, function() {alert('boom!');});

Every addon has a manifest.json file. From an http[s]:// page you can try to load a script cross-scheme from a chrome-extension:// URL; in this case, the manifest file. You just need the addon's unique id to put into the URL. If the extension is installed, the manifest will load and the onload event will fire. If not, the onerror event is there for you.
Update: TIL the technique was already published by @albinowax. Cool!

This is just one simple example of punching the separation layer between addons and webpages. There are more coming. Stay tuned.




Smart Scapy By Lacofa

The importance of security
A basic aspect of Internet services, whether at the application or network level, is their security. Great efforts are currently being made to protect users' security and privacy on the Internet.
Achieving a proper security level requires testing not only the devices and apps that provide access to information, but also any item that handles information on the Internet.
Security work covers many areas; one of them is testing the devices that handle this information. Generally speaking, such devices include a protocol stack, such as TCP/IP.
Devices range from a hosting server on the Internet to a simple robot built with an Arduino that can connect to a data network.
Furthermore, the rollout of the new addressing scheme (IPv6) on the Internet gives a new focus to the M2M (Machine to Machine) concept and makes it possible to bring onto the Internet items that were never originally conceived to be connected.
An example familiar to many of us is any device capable of connecting to Wi-Fi networks (mobile terminals, tablets, mp3 players, etc.). All of them have a protocol stack that gives us access to the data network through a wireless access point.
The stack that enables this has been tested not only for performance and compatibility, but also from a security point of view: checking its capacity to face possible "attacks" from malicious users and other threats, in order to safeguard our security as users of the device.
Carrying out this kind of test on a device requires tools that let us modify, in our own way, the data we exchange with other devices. There are several options, but one open-source tool widely used for this task is Scapy, and today we'll talk about it.
Scapy: What makes it different from other tools?
As it is described, Scapy is an "interactive application able to manage and manipulate packets for a wide number of protocols. It can capture on different network interfaces, check parameters in real time, create new protocols, etc. Thanks to these capacities it is possible to carry out, manually or automatically, tasks such as scanning, tracerouting or network tests. Moreover, it allows forging packets, injecting invalid 802.11 frames or combining techniques automatically."
This gives us a powerful base for testing network systems.
Scapy lets us monitor the network as Wireshark and tcpdump do, run protocol tests, add new protocols natively, do "fuzzing" tests on established protocols, etc.
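Part of Scapy's value is precisely that it spares you byte-level bookkeeping. As a taste, here is a stdlib-only Python sketch (addresses are placeholders) of what hand-crafting an ICMPv6 echo request involves, including the checksum over the IPv6 pseudo-header; in Scapy this would be roughly one line (IPv6()/ICMPv6EchoRequest()):

```python
import socket
import struct

def checksum(data):
    """RFC 1071 Internet checksum over the given bytes."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmpv6_echo_request(src, dst, ident=1, seq=1, payload=b"ping"):
    """Build an ICMPv6 Echo Request (type 128), checksummed over the pseudo-header."""
    msg = struct.pack("!BBHHH", 128, 0, 0, ident, seq) + payload  # checksum 0 for now
    pseudo = (socket.inet_pton(socket.AF_INET6, src)
              + socket.inet_pton(socket.AF_INET6, dst)
              + struct.pack("!I", len(msg))      # upper-layer packet length
              + b"\x00\x00\x00" + bytes([58]))   # 3 zero bytes + next header 58 (ICMPv6)
    return struct.pack("!BBHHH", 128, 0, checksum(pseudo + msg), ident, seq) + payload

pkt = icmpv6_echo_request("::1", "::1")
```

Scapy computes this checksum (and every other field default) for you, which is exactly the drudgery a GUI like SScapy builds on.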
But as it can’t have it all, Scapy sacrifices an intuitive interface for the great capacity of being used as a library within the Python scripts. It is here where Smart Scapy (SScapy) tries to help.
SScapy: Smart Scapy
SScapy provides a graphical interface that lets us create and inject packets (and, in the future, capture them) intuitively. It includes a so-called "smart mode" (for novice users) that assists in building packets.
It was created within the IPv6 innovation project, which is why this protocol was implemented first. The tool lets you quickly and intuitively create ICMPv6 packets, add Extension Headers, NDP (Neighbour Discovery Protocol) packets, MRD, DHCPv6 (in the future), etc., and modify them to suit your needs.
Does your platform use IPv6? Maybe you are interested in evaluating its strength with SScapy.
SScapy's limitations are set by Scapy itself: SScapy will only support protocols that Scapy is able to dissect. The Qt4 library and Python provide Scapy with all the capacities we demand.
The interface looks like this:
The packet is built in the left pane and shown in the right one. Below them are the controls to send packets, where we get information about delivery and packet construction.
Auxiliary dialogues let you add information to the layers and their attributes, for instance DNS records in a crafted answer:
SScapy allows using Scapy's native functions as attribute values, which helps with "fuzzing" tests:
The first version, a so-called pre-alpha with limited capabilities, lays the foundations for gradually adding functionality according to our needs. Broadly speaking, the steps on our "roadmap" are:
  1. Including new protocols.
  2. Having the tool act in server mode, able to react to active communications on the network and to stand up little "forged" servers.
  3. Implementing known attacks to be reproduced in a “simple” way.
  4. A little “Wizard” for the creation of layers to incorporate them in the very Scapy.
The pillars we are working on:
  • Open approach: Scapy is an open framework, which gives the project room to grow, both for us and for the community.
  • A tool born from a need: SScapy was created to meet an internal need and turned into a solution that can be useful to many others.
  • Participation: SScapy has been possible thanks to the research work framed within the Network and Security initiative, and the Ethical Hacking team.


RDP Honeypot on Amazon EC2 Virtual Server


The MS12-020 vulnerability is red-hot right now. People are developing and testing exploits like mad, and a worm is expected very soon.
That makes it a good time to harvest all attacks on the RDP ports, because there may be interesting stuff there!
This is a simple way to set up an RDP honeypot on a Linux machine. But BE CAREFUL! I have no reason to believe that this is safe or secure, so I recommend using something like a free-tier Amazon EC2 machine with nothing you love on it, so there's nothing there for a hacker to take.


Open an SSH connection to your EC2 machine.
Execute these commands:
sudo yum install gcc make pam-devel openssl-devel vnc-server libtool libX11-devel libXfixes-devel curl tcpdump wget -y
wget <xrdp-0.5.0 tarball URL>
tar xzf xrdp-0.5.0.tar.gz
cd xrdp-0.5.0
./configure
make
sudo make install
sudo /usr/local/sbin/xrdp
You should see a message like "process 18076 started ok".
Execute this command:
netstat -an | grep 3389
You should see a process marked LISTEN, as shown below on this page:

Opening the Firewall Port

Open a browser and go to
Log in with your Amazon account.
On the right side, in the "My Resources" section, click "1 Running Instance".
In the left pane, click "Security Groups".
Near the top of the panel, check the "quick-start-1" box.
In the lower pane, click the "Inbound" tab.
In the lower pane, on the left, enter a "Port range" of "3389". Click the "Add Rule" button.
Your list of TCP ports should now include port 3389 (RDP), as shown below (you may not have all the other ports open):
In the lower pane, on the left, click the "Apply Rule Change" button.
In the left pane, click Instances.
In the top center, check the box next to your running instance--in my example below, it is named "Sam's First AWS Server".
In the lower center, the complete DNS name of your Amazon machine appears. Make a note of this name--you will need it later.
In my example below, the name is
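Once the security group rule is applied, you can confirm the port answers from outside without waiting for nmap; a small Python helper (the EC2 hostname below is a placeholder for your own):

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds (something is LISTENing)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# e.g. is_port_open("ec2-xx-xx-xx-xx.compute-1.amazonaws.com", 3389)
```

connect_ex returns 0 on success instead of raising, which keeps the check to one line.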

Starting the Logger

This starts a tcpdump session in the background collecting all RDP traffic for later analysis.
Execute these commands:
cd
sudo tcpdump tcp port 3389 -i eth0 -vvX >> /var/www/html/rdplog.txt &
Press Enter again to get a $ prompt.

Testing the Honeypot with Nmap

You need nmap installed. If you don't have it, go to and download and install the correct version for your computer.
Run nmap and perform a default scan of your Amazon web server, using the DNS name you found earlier.
In my case, the name was
You should see port 3389 open, as shown below:

Viewing the Captured Packets

I designed mine to work with Apache, so the packets are visible on the Internet, as shown below:
That's it! Now you can see who is trying to exploit your RDP.


Mar 18, 2012

Secure your vps by _St0rm

    # +----------------------------------------------------+
    # | ssh -p your-port-number-here root@your-server-here |
    # | ssh -p 67993                  |
    # |                                                    |
    # | Title: This is what I do when i'm bored.           |
    # | Author: _St0rm                                     |
    # +----------------------------------------------------+
    # Adding new user
    root@echelon:~$ adduser comrade
    # Installing sudo, plus nano to edit the sudoers file
    root@echelon:~$ apt-get install sudo
    root@echelon:~$ apt-get install nano
    # Adding new user to sudoers file + giving root privileges
    root@echelon:~$ nano /etc/sudoers
    root    ALL=(ALL:ALL) ALL
    comrade ALL=(ALL:ALL) ALL
    root@echelon:~$ login
    Login: comrade
    Password for comrade:
    # Updating the apt package lists, then upgrading
    comrade@echelon:~$ sudo apt-get update
    comrade@echelon:~$ sudo apt-get upgrade
    # Installing apache, iptables, mysql and php5
    comrade@echelon:~$ sudo apt-get install apache2
    comrade@echelon:~$ sudo apt-get install iptables
    comrade@echelon:~$ sudo apt-get install mysql-server mysql-client
    comrade@echelon:~$ sudo apt-get install php5-mysql
    # creating a 404 page for visitors. EG: == 404.html - Redirects all to
    comrade@echelon:~$ nano /var/www/404.html
    <html><head><meta http-equiv="REFRESH" content="0;url="></head></html>
    # Creating the same but for Forbidden documents instead, this one will redirect to:
    comrade@echelon:/var/www$ nano 403.html
    <html><head><meta http-equiv="REFRESH" content="0;url="></head></html>
    comrade@echelon:~$ nano /var/www/robots.txt
    User-agent: Googlebot
    Disallow: /
    Crawl-delay: 10
    User-agent: Slurp
    Disallow: /
    Crawl-delay: 10
    User-agent: *
    Disallow: /
    Crawl-delay: 10
    User-agent: Bingbot
    Disallow: /
    Crawl-delay: 10
    User-agent: Msnbot
    Disallow: /
    Crawl-delay: 10
    User-agent: Scanner
    Disallow: /
    Crawl-delay: 10
    User-agent: YahooBot
    Disallow: /
    Crawl-delay: 10
    User-agent: baiduspider
    Disallow: /
    Crawl-delay: 10
    User-agent: naverbot
    Disallow: /
    Crawl-delay: 10
    User-agent: seznambot
    Disallow: /
    Crawl-delay: 10
    User-agent: teoma
    Disallow: /
    Crawl-delay: 10
    User-agent: Yandex
    Disallow: /
    Crawl-delay: 10
    User-agent: Bot
    Disallow: /
    Crawl-delay: 10
    # Installing fail2ban -- IPS.
    comrade@echelon:~$ sudo apt-get install fail2ban
    # Changing MOTD for when users login to my server
    comrade@echelon:~$ sudo nano /etc/motd
    ******Welcome back to the server!******
    -bla bla bla this is message of the day-
    # Installing aide
    comrade@echelon:~$ sudo apt-get install aide
    # Changing apache conf; these are the ones I have changed, the rest are left as default
    comrade@echelon:/etc/apache2$ sudo nano apache2.conf
    Timeout 300
    KeepAlive Off
    HostnameLookups On
    LogLevel warn
    # Changing apache's httpd.conf file to give away less information
    comrade@echelon:/etc/apache2$ sudo nano httpd.conf
    ServerSignature Off
    ServerTokens Prod
    <Directory />
      Options None
      AllowOverride None
      Order allow,deny
      Allow from all
    </Directory>
    ErrorDocument 404 /404.html
    ErrorDocument 403 /403.html
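ServerSignature Off and ServerTokens Prod shrink what Apache reveals about itself. A small Python sketch (the hostname is whatever server you want to check) that fetches the Server response header so you can confirm the change took effect:

```python
import http.client

def server_header(host, port=80):
    """Return the Server: response header -- the string that ServerTokens trims."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().getheader("Server", "")
    finally:
        conn.close()

# e.g. server_header("your-website-here") should read just 'Apache' after ServerTokens Prod
```

This is the same check as wget -S further down, just scriptable.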
    # Here you can change icons and default icons.
    comrade@echelon:~$ sudo nano /etc/apache2/mods-enabled/autoindex.conf
    # Changing fail2ban's settings; here are the ones I have changed
    comrade@echelon:~$ sudo nano /etc/fail2ban/fail2ban.conf
    loglevel = 2
    logtarget = /var/log/fail2ban.log
    socket = /var/run/fail2ban/fail2ban.sock
    # Changing jail.conf
    comrade@echelon:~$ sudo nano /etc/fail2ban/jail.conf
    ignoreip =
    bantime = 6000
    maxretry = 3
    backend = polling
    destemail =
    mta = sendmail
    protocol = tcp
    [ssh]
    enabled  = true
    port     = ssh
    filter   = sshd
    logpath  = /var/log/auth.log
    maxretry = 3
    [ssh-ddos]
    enabled  = true
    port     = ssh
    filter   = sshd-ddos
    logpath  = /var/log/auth.log
    maxretry = 3
    [postfix]
    enabled  = true
    port     = smtp,ssmtp
    filter   = postfix
    logpath  = /var/log/mail.log
    [couriersmtp]
    enabled  = true
    port     = smtp,ssmtp
    filter   = couriersmtp
    logpath  = /var/log/mail.log
    # Installing irssi for irc
    comrade@echelon:~$ sudo apt-get install irssi
    # Installing Python
    comrade@echelon:~$ sudo apt-get install python
    # Installing perl
    comrade@echelon:~$ sudo apt-get install perl
    # Installing PPP for a VPN connection
    comrade@echelon:~$ sudo apt-get install ppp
    # Changing ssh configurations to my own settings
    comrade@echelon:~$ sudo nano /etc/ssh/ssh_config
     HashKnownHosts yes
     GSSAPIAuthentication yes
     GSSAPIDelegateCredentials no
    # Changing sshd configurations to my own settings
    comrade@echelon:~$ sudo nano /etc/ssh/sshd_config
    Port 65326
    Protocol 2
    HostKey /etc/ssh/ssh_host_rsa_key
    HostKey /etc/ssh/ssh_host_dsa_key
    HostKey /etc/ssh/ssh_host_ecdsa_key
    LogLevel WARN
    StrictModes yes
    PermitRootLogin yes
    # Up to you whether you allow root login or not.
    IgnoreRhosts yes
    PermitEmptyPasswords no
    HostbasedAuthentication no
    # You can add host auths here, for example things like "only accept root login from this ip" etc. Also for PAM and RSA authing.
    PrintLastLog yes
    UsePAM yes
    comrade@echelon:~$ sudo apt-get install vim
    comrade@echelon:~$ sudo apt-get install yum
    # Editing the hash encryption level -- taking passwords out of /etc/passwd, adding hashes to /etc/shadow
    comrade@echelon:~$ sudo nano /etc/pam.d/common-password
    password        [success=1 default=ignore]      pam_unix.so obscure sha512
    password        requisite             
    password        required              
    password        optional              
    # Rather than this (salted MD5): $1$jmG9uZUU$j2diNHCtKEC/0KLchlw96.
    # You now have this (salted SHA-512): $6$gMLLKls.$p0713lEoDgwZPQwEnP9r.xkSXUzldol7dAOgUQ88ZCswZmwkgg/jzHV.HlznaY/sMFx6C2leEog3dF/XvCDwn/
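The $id$ prefix on each shadow entry names the scheme that produced the hash, so you can check what a box is using at a glance; a quick Python helper (reusing the MD5 example hash from above):

```python
# crypt(3) hash prefixes: $1$ = MD5, $5$ = SHA-256, $6$ = SHA-512
CRYPT_SCHEMES = {"1": "md5", "5": "sha256", "6": "sha512"}

def crypt_scheme(password_field):
    """Identify the hashing scheme of an /etc/shadow password field."""
    parts = password_field.split("$")
    if len(parts) < 3 or parts[0] != "":
        return "unknown (legacy DES, or a locked/disabled account)"
    return CRYPT_SCHEMES.get(parts[1], "unknown")

old = "$1$jmG9uZUU$j2diNHCtKEC/0KLchlw96."   # the MD5 example from above
```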
    # Editing the /var/www/.htaccess file -- this file should be: chmod 644
    LimitRequestBody 10240000 #bytes, 0-2147483647(2GB)
    IndexIgnore *
    ServerSignature Off
    Include httpd.conf
    <Directory "/var/www">
                    Options +Indexes FollowSymLinks +ExecCGI
                    AllowOverride AuthConfig FileInfo
                    Order allow,deny
                    Allow from all
    ErrorDocument 403 /var/www/403.html
    ErrorDocument 404 /var/www/404.html
    AllowOverride All
    KeepAlive Off
    Include httpd.conf
    Server: Apache
    <Directory />
      Options None
      AllowOverride None
      Order allow,deny
      deny from someones-ip-address
      deny from someones-ip-address
      deny from someones-ip-address
      allow from all
    # To check server headers
    comrade@echelon:~$ sudo apt-get install wget
    # Getting commands like dig, to get dns records
    comrade@echelon:~$ sudo apt-get install dnsutils
    # Setting up the directories for webserver
    ls /var/www/
    So let's say you upload a photo to your /var/www/
    so there is /var/www/test.jpg -- but you forgot to put it under /images/
    so... just simply...
    mkdir /var/www/images
    chmod 711 /var/www/images
    mv /var/www/test.jpg /var/www/images/test.jpg
    cd /var/www/images
    chmod o+r test.jpg
    So now: -- you can see this.
    But: -- takes you to (Forbidden) (403.html)
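The chmod modes used above decompose into owner/group/other permission bits (711 makes the directory traversable but not listable; the file needs the world-read bit). A stdlib sketch, with a temporary file standing in for test.jpg, of how a symbolic change like o+r maps onto those bits:

```python
import os
import stat
import tempfile

# 0o644 = rw-r--r--  (what test.jpg should end up as)
# 0o711 = rwx--x--x  (the images/ dir: traversable, but not listable)
fd, path = tempfile.mkstemp()        # temp file stands in for test.jpg
os.close(fd)
os.chmod(path, 0o600)                # owner-only to start
current = stat.S_IMODE(os.stat(path).st_mode)
os.chmod(path, current | stat.S_IRGRP | stat.S_IROTH)   # i.e. chmod go+r
mode = stat.S_IMODE(os.stat(path).st_mode)
os.remove(path)
```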
    Also, rather than defining the full URL, try linking from index.html:
    you won't need the .html and it makes it look more professional.
    The robots.txt will stop a lot of scanners.
    The advantages I have outlined:
    When you have a 404, for example:
    You will be redirected to:
    When you have a 403 for example:
    You will be redirected to:
    When you do a server header check:
    comrade@echelon:/$ wget -S
    Server: Apache/2.2.8 (Ubuntu) Phusion_Passenger/2.2.2
    Content-Type: text/html; charset=utf-8
    X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.2
    Connection: keep-alive
    You NOW get:
    comrade@echelon:/$ wget -S your-website-here
    Server: Apache
    Content-Type: text/html
    Connection: close
    Connecting to server:
    Instead of port 22 for ssh, you now have your port on: 65326
    This is a lot harder to find with port scanners.
    Also now:
    comrade@echelon:~$ nmap -F your-website-here
    Not shown: 97 closed ports
    25/tcp   open  smtp        <-- SMTP for mail
    80/tcp   open  http        <-- Web server
    1829/tcp open  pptp        <-- VPN
    For example, this is the default /etc/passwd file on an apache box, for the #LulzFailz
    root@delta [~]# cat /etc/passwd
    So here is mine:
    comrade@echelon:~$ sudo cat /etc/passwd
    The Passwords are now stored within /etc/shadow file.
    But the passwords are no longer a shitty MD5 hash with salt.
    They are SHA-512 with salt. Very good imo.
    comrade@echelon:~$ sudo cat /etc/shadow
    Rather than just .html, maybe you want mysql/php on the website? That is why it was added above. :)
    This was just a quick VPS setup; it can take you around 10 minutes.
    Then all you do: reboot
    wait 2 minutes, log into new ssh port, change password to something like:
    Then you have a fairly decent protected server.
    There are lots more ways to add more protection, but setting all of this up in 10 minutes is not bad imo.
    Now all you need to do is register a domain.
    Then you can host on cloudflare:
    A record: - Points to: cloudflare-IP
    CNAME record: - Points to: Cloudflare-IP
    Delete any other records if you do not wish to have them. etc.
    Now that you're hosting on cloudflare, you don't have any subdomains, except maybe one, but then
    that would be on a different host.
    You are protected against -some- ddos attacks.
    Use /var/log/apache2/access.log as a guide to see who to block and why.
    Think about blocking country by country; it makes life a lot easier.
    _St0rm - -
