
How to Prevent Hackers from Using Bad Bots To Exploit Your Website


(Image created by the author)

The Bot Bandits Are Out of Control

I’ve always known that bots crawl my websites and the sites of all my fellow developers, but I was unaware that bots now make more visits than people do to most websites. Yep, they officially overtook us in 2012, and bots now dominate website visits. Egad, it’s Star Wars run amok!

Before we become alarmed, though, let’s look at a few facts that demonstrate the preponderance of bots in our midst.

The bots are coming. The bots are coming. The bots are here!

(Image source)

Incapsula’s 2013 bot traffic report states that “Bot visits are up 21% to represent 61.5% of all website traffic.” If bots are preponderant, what does that mean for us?

For those of you just tuning in, preponderance means “the quality or fact of being greater in number, quantity, or importance.” That means the bots are “more important than humans” in determining the value of websites to potential readers.

A quick look at antonyms for preponderance reveals that our plight is worse than expected. Antonyms for preponderance include disadvantage, inferiority, subordination, subservience, surrender and weakness.

All is not lost, however. Not all bots are bad. In fact, in the wild and woolly world of SEO, Googlebots are actually our friends. A “Googlebot” is Google’s web crawling bot, also known as a “spider,” that crawls the Internet in search of new pages and websites to add to Google’s index.

Googlebots: Our Ally in the Bot Wars

If we think of the web as an ever-growing library with no central filing system, we can understand exactly what a Googlebot wants: its mission is to crawl this library and create a filing system. Bots need to be able to quickly and easily crawl sites. When a Googlebot arrives at your site, its first point of access is your site's robots.txt file, so make sure that file is accurate and easy for the bots to crawl. The less time Googlebots spend on irrelevant portions of your site, the better. At the same time, be sure you have not inadvertently siloed or blocked pages of your site that should be crawled.
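To see how a bot will interpret your rules, you can test paths against a robots.txt file programmatically. Here is a minimal sketch in Python using the standard library's robotparser; the robots.txt content and paths are illustrative, not a recommendation:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration -- substitute your own.
ROBOTS_TXT = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /search/
Allow: /blog/
"""

def is_crawlable(robots_txt: str, path: str, agent: str = "Googlebot") -> bool:
    """Return True if the given user agent may fetch the path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, path)

print(is_crawlable(ROBOTS_TXT, "/blog/my-post"))   # blog content stays open
print(is_crawlable(ROBOTS_TXT, "/wp-admin/edit"))  # admin area stays blocked
```

Running a check like this against every important URL on your site is a quick way to catch accidental blocking before the bots find it.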


(Image source)

Next, Googlebots use the sitemap.xml file to discover all areas of your site. The first rule of thumb is this: keep it simple. Googlebots do not crawl DHTML, Flash, Ajax, or JavaScript as well as they crawl HTML. Since Google has been less than forthcoming about how its bots crawl JavaScript and Ajax, avoid using these technologies for your site's most important elements. Next, use internal linking to create a smart, logical structure that will help the bots crawl your site efficiently. To check the integrity of your internal linking structure, go to Google Webmaster Tools -> Search Traffic -> Internal Links. The top-linked pages should be your site's most important pages. If they aren't, you need to rethink your linking structure.

So, how do you know if the Googlebots are happy? You can analyze Googlebot’s performance on your site by checking for crawl errors. Simply go to Webmaster Tools -> Crawl and check the diagnostic report for potential site errors, URL errors, crawl stats, site maps and blocked URLs.

The Enemy in our Midst: Bandit Bots

Googlebots aren’t the only bots visiting your site. In fact, over 38% of the bots crawling our sites are up to no good. So not only are we outnumbered, but nearly 2 out of every 5 visitors to your site are trying to steal information, exploit security loopholes, and pretend to be something they are not.

We’ll call these evil bots “bandit bots”.

So, what are we to do?

As an SEO provider and website developer, I could protest. I could blog my little heart out and get a few friends to join me. Or I could buckle down and take responsibility for my own little corner of the web and fight back against the bandit bots.

Let’s do this together.

Bandit Bots: What They Are and How to Fight Back

The bad guys come in four flavors. Learn which bots to watch out for and how to fight back.


Scraper Bots

These bandit bots steal and duplicate content, as well as email addresses. Scraper bots normally focus on retrieving data from a specific website, and they also try to collect personal information from directories or message boards. While scraper bots target a variety of different verticals, common targets include online directories, airlines, e-commerce sites and online property sites. Scraper bots will also use your content to intercept web traffic. Additionally, multiple pieces of scraped content can be scrambled together to make new content, allowing the scrapers to avoid duplicate-content penalties.

What’s at risk: Scrapers grab your RSS feed so they know when you publish content. However, if you don’t know that your site is being attacked by scrapers, you may not realize there’s a problem. In the eyes of Google, however, ignorance is no excuse. Your website could be hit by severe penalties for duplicate content and even fail to appear in search engine rankings.

How to fight back: Be proactive and attentive to your site, thus increasing the likelihood that you can take action before severe damage is done.

There are two good ways to identify if your site is the victim of a scraper attack. One option is to use a duplicate-content detection service like Copyscape to see if any duplicate content comes up.
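If you have fetched the text of a suspected copy yourself, a quick first-pass check is a similarity ratio between your article and the suspect page. Here is a rough sketch using Python's standard difflib; the sample strings and thresholds are illustrative, and this is a complement to a service like Copyscape, not a replacement:

```python
import difflib

def similarity(original: str, suspect: str) -> float:
    """Ratio in [0, 1]; values near 1.0 suggest scraped or duplicated text."""
    return difflib.SequenceMatcher(None, original, suspect).ratio()

article = "Googlebots crawl the web in search of new pages to index."
copy = "Googlebots crawl the web in search of new pages to index!"
unrelated = "Completely different text about cooking pasta at home."

print(round(similarity(article, copy), 2))       # near-duplicate scores high
print(round(similarity(article, unrelated), 2))  # unrelated text scores low
```

Scrapers often scramble several sources together, so in practice you would compare paragraph by paragraph rather than whole pages.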


(Image created by the author)

A second option for detecting that content might have been stolen from your site is to use trackbacks within your own content. In general, it’s good SEO to include one or two internal site links within your written content. When you include these links, be sure to activate WordPress’s trackback feature. In the trackback field on your blog’s entry page, simply enter the URL of the article you are referencing. (In this case, it will be a page on your own website, not another site.)



(Image created by the author)

You can manually look at your trackbacks to see what sites are using your links. If you find that your content has been re-posted without your permission on a spam site, file a DMCA complaint with Google.

Finally, if you know the IP address from which scraper bots are operating, you can block them from your feed directly by adding rules like the following to your .htaccess file. (On WordPress, see your host’s documentation for editing the .htaccess file.) The address and redirect target below are placeholders; substitute your own values:

RewriteEngine on
RewriteCond %{REMOTE_ADDR} ^192\.0\.2\.1$
RewriteRule ^(.*)$ http://yoursite.example/blocked.html [R=302,L]

In this example, 192.0.2.1 is the IP address you want to block and http://yoursite.example/blocked.html is the custom content you want to send it instead of your feed.

Warning! Be very careful editing this file. It could break your site if done incorrectly. If you are unsure of how to edit this file, ask for help from a web developer.

Hacking Bots

Hacking bandit bots target credit cards and other personal information by injecting or distributing malware to hijack a site or server. Hacker bots also try to deface sites and delete critical content.

What’s at risk: It goes without saying that should your site be the victim of a hacking bot, your customers could lose serious confidence in the security of your site for e-commerce transactions.

How to fight back: Most attacked sites are victims of “drive-by hackings”: site hackings done randomly and with little regard for the impacted business. To prevent your site from becoming a hacking victim, make a few basic modifications to your .htaccess file, which is typically found in the public_html directory. Start from a published list of common hacking bots and paste it into the .htaccess file to block those bots from accessing your site. You can add bots, remove bots and otherwise modify the list as necessary.
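The same idea can be sketched at the application level in a few lines: match each visitor's User-Agent header against a blocklist. The bot names below are illustrative placeholders, not a vetted list:

```python
# A minimal sketch of user-agent blocking in application code, mirroring
# what the .htaccess rules do at the server level. These substrings are
# examples only -- build your own list from a maintained source.
BAD_BOT_SUBSTRINGS = {"emailcollector", "httrack", "webzip", "grabnet"}

def is_blocked(user_agent: str) -> bool:
    """Case-insensitive substring match against the blocklist."""
    ua = user_agent.lower()
    return any(bad in ua for bad in BAD_BOT_SUBSTRINGS)

print(is_blocked("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # legit crawler passes
print(is_blocked("EmailCollector/1.0"))                       # known bad bot blocked
```

Keep in mind that bandit bots can forge their User-Agent, so this is a first filter, not a guarantee.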


Spam Bots

Spam bots load sites with garbage to discourage legitimate visits, turn targeted sites into link farms and bait unsuspecting visitors with malware and phishing links. Spam bots also engage in high-volume spamming to get a website blacklisted in search results and destroy your brand’s online reputation.

What’s at risk: Failure to protect your site from spammers can cause your website to be blacklisted, destroying all your hard work at building a credible online presence.

How to fight back: Real-time malicious traffic detection is critical to your site’s security, but most of us don’t have the time to simply sit around and monitor our site’s traffic patterns. The key is to automate this process.

If you’re using WordPress, one of the first steps in fighting back against spam bots is to stop spam in the first place. Start by installing Akismet; it is on all my personal sites as well as the sites I manage for my clients. Next, install a trusted security plugin and set up automatic backups of your database.


(Image created by the author)

Require legitimate registration with CAPTCHAs for all visitors who want to post comments or replies. Finally, follow reputable security blogs to learn what’s new in the world of security.

Click Fraud Bots

Click fraud bots make PPC ads meaningless by “clicking” on the ads so many times you effectively spend your entire advertising budget, but receive no real clicks from interested customers. Not only do these attacks drain your ad budget, they also hurt your ad relevance score for whatever program you may be using. Google AdWords and Facebook ads are the most frequent targets of these attacks.

What’s at risk: Click fraud bots waste your ad budget with meaningless clicks and prevent interested customers from actually clicking on your ad. Worse, your Ad Relevance score will plummet, destroying your credibility and making it difficult to compete for quality customers in the future.

How to fight back: If your WordPress site is being targeted by click fraud bots, immediately download and install the Google AdSense Click Fraud monitoring plugin. The plugin counts all clicks on your ads; should the clicks from one source exceed a specified number, the IP address of the clicking bot (or human user) is blocked. The plugin can also block a list of specific IP addresses. Note that the plugin is for AdSense publishers to install on their own websites; AdWords advertisers have no way to implement it.
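The plugin's core logic amounts to a per-IP click counter with a threshold. The sketch below is a simplified illustration of that idea, not the plugin's actual code; the threshold and the documentation-range IP addresses are hypothetical:

```python
from collections import Counter

# Hypothetical threshold; real tools let you configure this per time window.
MAX_CLICKS = 3

class ClickGuard:
    def __init__(self, max_clicks: int = MAX_CLICKS):
        self.max_clicks = max_clicks
        self.clicks = Counter()
        self.blocked = set()

    def record_click(self, ip: str) -> bool:
        """Count a click; return True if this IP is (now) blocked."""
        if ip in self.blocked:
            return True
        self.clicks[ip] += 1
        if self.clicks[ip] > self.max_clicks:
            self.blocked.add(ip)
            return True
        return False

guard = ClickGuard()
for _ in range(5):
    status = guard.record_click("198.51.100.7")  # repeated clicks trip the block
print(status)
print(guard.record_click("203.0.113.9"))         # a fresh IP is still allowed
```

A production version would also expire counts over time so that legitimate repeat visitors are not penalized.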


(Image created by the author)

When defending a website from hacker bots, it takes a concerted effort to thwart their attacks. While the above steps are important and useful, there are some attacks, like coordinated DDoS, that you simply cannot fight off on your own. Fortunately, a number of tech security companies specialize in anti-DDoS tools and services. If you suspect your site (or one of your clients’ sites) is being targeted for DDoS, such companies can be key to a successful defense.

Again, I recommend following reputable security blogs to learn what’s new in the world of security.


Giving honest Googlebots what they want is quite simple. Develop strong, relevant content and publish regularly. Combatting the fake Googlebots and other bot bandits is a bit tougher. Like many things in life, it requires diligence and hard work.

Heartbleed – Best Hacking Technique 2014

The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.

What leaks in practice?

We have tested some of our own services from an attacker’s perspective. We attacked ourselves from outside, without leaving a trace. Without using any privileged information or credentials, we were able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business-critical documents and communication.

How to stop the leak?

As long as a vulnerable version of OpenSSL is in use, it can be abused. A fixed OpenSSL has been released, and now it has to be deployed. Operating system vendors and distributions, appliance vendors and independent software vendors have to adopt the fix and notify their users. Service providers and users have to install the fix as it becomes available for the operating systems, networked appliances and software they use.


What is the CVE-2014-0160?

CVE-2014-0160 is the official reference to this bug. CVE (Common Vulnerabilities and Exposures) is the standard for information security vulnerability names maintained by MITRE. Due to coincident discovery, a duplicate CVE (CVE-2014-0346) was also assigned; it should not be used, since others independently went public with the CVE-2014-0160 identifier.


Why is it called the Heartbleed Bug?

The bug is in OpenSSL’s implementation of the TLS/DTLS (transport layer security protocols) heartbeat extension (RFC 6520). When it is exploited, it leads to the leak of memory contents from the server to the client and from the client to the server.
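The mechanics of the leak can be illustrated with the heartbeat message layout from RFC 6520: a one-byte type, a two-byte payload length that the peer trusts, the payload itself, and padding. The Python sketch below is a simplified illustration that ignores the outer TLS record layer; it only shows how a malicious request claims far more payload than it actually sends:

```python
import struct

def heartbeat_request(payload: bytes, claimed_length: int) -> bytes:
    """Build an RFC 6520 heartbeat message body:
    1-byte type (1 = request), 2-byte payload length, payload, 16-byte padding.
    A malicious client sets claimed_length larger than len(payload)."""
    return struct.pack("!BH", 1, claimed_length) + payload + b"\x00" * 16

benign = heartbeat_request(b"bird", 4)
malicious = heartbeat_request(b"bird", 0xFFFF)  # claims 65535 bytes, sends 4

def claimed(msg: bytes) -> int:
    """Payload length the sender claims."""
    return struct.unpack("!H", msg[1:3])[0]

def actual_payload_length(msg: bytes) -> int:
    """Payload length that actually arrived (minus header and padding)."""
    return len(msg) - 3 - 16

print(claimed(malicious), actual_payload_length(malicious))
```

A vulnerable server echoes back the claimed number of bytes from its memory; a patched server discards any message whose claim exceeds what actually arrived.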


What makes the Heartbleed Bug unique?

Bugs in individual pieces of software or libraries come and go and are fixed by new versions. However, this bug has left a large amount of private keys and other secrets exposed to the Internet. Considering the long exposure, the ease of exploitation, and the fact that attacks leave no trace, this exposure should be taken seriously.


Is this a design flaw in the SSL/TLS protocol specification?

No. This is an implementation problem, i.e. a programming mistake in the popular OpenSSL library, which provides cryptographic services such as SSL/TLS to applications and services.


What is being leaked?

Encryption is used to protect secrets that may harm your privacy or security if they leak. In order to coordinate recovery from this bug, we have classified the compromised secrets into four categories: 1) primary key material, 2) secondary key material, 3) protected content and 4) collateral.


What is leaked primary key material and how to recover?

These are the crown jewels: the encryption keys themselves. Leaked secret keys allow the attacker to decrypt any past and future traffic to the protected services and to impersonate the service at will. Any protection given by the encryption and the signatures in the X.509 certificates can be bypassed. Recovery from this leak requires patching the vulnerability, revoking the compromised keys, and reissuing and redistributing new keys. Even doing all this will still leave any traffic intercepted by the attacker in the past vulnerable to decryption. All this has to be done by the owners of the services.


What is leaked secondary key material and how to recover?

These are, for example, the user credentials (user names and passwords) used in the vulnerable services. Recovery from this leak requires owners of the service first to restore trust in the service according to the steps described above. After this, users can start changing their passwords and possible encryption keys according to instructions from the owners of the services that have been compromised. All session keys and session cookies should be invalidated and considered compromised.


What is leaked protected content and how to recover?

This is the actual content handled by the vulnerable services. It may be personal or financial details, private communication such as emails or instant messages, documents or anything seen worth protecting by encryption. Only the owners of the services will be able to estimate the likelihood of what has been leaked, and they should notify their users accordingly. The most important thing is to restore trust in the primary and secondary key material as described above. Only this enables safe use of the compromised services in the future.


What is leaked collateral and how to recover?

Leaked collateral consists of other details that have been exposed to the attacker in the leaked memory content. These may include technical details such as memory addresses and security measures such as canaries used to protect against overflow attacks. These have only short-term value and will lose their value to the attacker once OpenSSL has been upgraded to a fixed version.


Recovery sounds laborious, is there a shortcut?

After seeing what we saw by “attacking” ourselves, with ease, we decided to take this very seriously. We have gone laboriously through patching our own critical services and are dealing with the possible compromise of our primary and secondary key material, all just in case we were not the first ones to discover this and it has already been exploited in the wild.


How do revocation and reissuing of certificates work in practice?

If you are a service provider, you have signed your certificates with a Certificate Authority (CA). You need to check with your CA how compromised keys can be revoked and new certificates reissued for the new keys. Some CAs do this for free; some may charge a fee.


Am I affected by the bug?

You are likely to be affected either directly or indirectly. OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company’s site, commerce site, hobby site, the site you install software from or even sites run by your government might be using vulnerable OpenSSL. Many online services use TLS both to identify themselves to you and to protect your privacy and transactions. You might have networked appliances with logins secured by this buggy implementation of TLS. Furthermore, you might have client-side software on your computer that could expose data from your computer if you connect to compromised services.


How widespread is this?

The most notable software using OpenSSL are the open source web servers like Apache and nginx. The combined market share of just those two among active sites on the Internet was over 66% according to Netcraft’s April 2014 Web Server Survey. Furthermore, OpenSSL is used to protect, for example, email servers (SMTP, POP and IMAP protocols), chat servers (XMPP protocol), virtual private networks (SSL VPNs), network appliances and a wide variety of client-side software. Fortunately, many large consumer sites are saved by their conservative choice of SSL/TLS termination equipment and software. Ironically, smaller and more progressive services, or those who have upgraded to the latest and best encryption, will be affected most. Furthermore, OpenSSL is very popular in client software and somewhat popular in networked appliances, which have the most inertia in getting updates.


What versions of OpenSSL are affected?

Status of different versions:

  • OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
  • OpenSSL 1.0.1g is NOT vulnerable
  • OpenSSL 1.0.0 branch is NOT vulnerable
  • OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14 March 2012. OpenSSL 1.0.1g, released on 7 April 2014, fixes the bug.


How common are the vulnerable OpenSSL versions?

The vulnerable versions have been out there for over two years now and have been rapidly adopted by modern operating systems. A major contributing factor has been that TLS versions 1.1 and 1.2 became available with the first vulnerable OpenSSL version (1.0.1), and the security community has been pushing TLS 1.2 due to earlier attacks against TLS (such as the BEAST).


Some operating system distributions that have shipped with potentially vulnerable OpenSSL versions:

  • Debian Wheezy (stable), OpenSSL 1.0.1e-2+deb7u4
  • Ubuntu 12.04.4 LTS, OpenSSL 1.0.1-4ubuntu5.11
  • CentOS 6.5, OpenSSL 1.0.1e-15
  • Fedora 18, OpenSSL 1.0.1e-4
  • OpenBSD 5.3 (OpenSSL 1.0.1c 10 May 2012) and 5.4 (OpenSSL 1.0.1c 10 May 2012)
  • FreeBSD 10.0 – OpenSSL 1.0.1e 11 Feb 2013
  • NetBSD 5.0.2 (OpenSSL 1.0.1e)
  • OpenSUSE 12.2 (OpenSSL 1.0.1c)

Operating system distributions with versions that are not vulnerable:

  • Debian Squeeze (oldstable), OpenSSL 0.9.8o-4squeeze14
  • SUSE Linux Enterprise Server
  • FreeBSD 8.4 – OpenSSL 0.9.8y 5 Feb 2013
  • FreeBSD 9.2 – OpenSSL 0.9.8y 5 Feb 2013
  • FreeBSD 10.0p1 – OpenSSL 1.0.1g (At 8 Apr 18:27:46 2014 UTC)
  • FreeBSD Ports – OpenSSL 1.0.1g (At 7 Apr 21:46:40 2014 UTC)


How can OpenSSL be fixed?

Even though the actual code fix may appear trivial, the OpenSSL team are the experts in fixing it properly, so fixed version 1.0.1g or newer should be used. If this is not possible, software developers can recompile OpenSSL with the heartbeat extension removed from the code by using the compile-time option -DOPENSSL_NO_HEARTBEATS.


Should heartbeat be removed to aid in detection?

Recovery from this bug might have benefited if the new version of OpenSSL had both fixed the bug and disabled heartbeat temporarily until some future version. The majority, if not almost all, of the TLS implementations that responded to the heartbeat request at the time of discovery were vulnerable versions of OpenSSL. If only vulnerable versions of OpenSSL had continued to respond to the heartbeat for the next few months, then a large-scale coordinated response to reach the owners of vulnerable services would have become more feasible. However, the swift response by the Internet community in developing online and standalone detection tools quickly surpassed the need for removing heartbeat altogether.


Can I detect if someone has exploited this against me?

Exploitation of this bug leaves no trace of anything abnormal in the logs.


Can IDS/IPS detect or block this attack?

Although the heartbeat can appear in different phases of the connection setup, intrusion detection and prevention system (IDS/IPS) rules to detect heartbeat have been developed. Due to encryption, differentiating between legitimate use and attack cannot be based on the content of the request, but the attack may be detected by comparing the size of the request against the size of the reply. This implies that IDS/IPS can be programmed to detect the attack but not to block it unless heartbeat requests are blocked altogether.
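The size-comparison heuristic described above reduces to a one-line check. This is a toy sketch of the rule an IDS might apply, assuming the sensor can observe the record sizes on the wire:

```python
def looks_like_heartbleed(request_payload_len: int, reply_payload_len: int) -> bool:
    """Flag replies whose payload is larger than the request's:
    a well-behaved heartbeat echoes exactly what it received."""
    return reply_payload_len > request_payload_len

print(looks_like_heartbleed(4, 4))      # normal echo: not flagged
print(looks_like_heartbleed(4, 65535))  # oversized reply: flagged
```

Since the payloads themselves are encrypted, sizes are the only signal available, which is exactly why the rule can detect but not cleanly block the attack.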


Has this been abused in the wild?

We don’t know. The security community should deploy TLS/DTLS honeypots that entrap attackers and alert about exploitation attempts.


Can attackers access only 64k of the memory?

There is no total 64-kilobyte limit on the attack; that limit applies only to a single heartbeat. The attacker can either keep reconnecting or, during an active TLS connection, keep requesting an arbitrary number of 64-kilobyte chunks of memory content until enough secrets are revealed.


Is this a MITM attack?

No, this does not require a man-in-the-middle attack (MITM). The attacker can directly contact the vulnerable service or attack any user connecting to a malicious service. However, in addition to the direct threat, the theft of key material allows man-in-the-middle attackers to impersonate compromised services.


Does TLS client certificate authentication mitigate this?

No. Heartbeat requests can be sent, and are replied to, during the handshake phase of the protocol, which occurs prior to client certificate authentication.


Does OpenSSL’s FIPS mode mitigate this?

No, OpenSSL’s Federal Information Processing Standard (FIPS) mode has no effect on the vulnerable heartbeat functionality.


Does Perfect Forward Secrecy (PFS) mitigate this?

Use of Perfect Forward Secrecy (PFS), which is unfortunately rare but powerful, should protect past communications from retrospective decryption. Note, however, that leaked session tickets may affect this.


Can the heartbeat extension be disabled during the TLS handshake?

No, the vulnerable heartbeat extension code is active regardless of the results of the handshake phase negotiations. The only ways to protect yourself are to upgrade to a fixed version of OpenSSL or to recompile OpenSSL with the heartbeat extension removed from the code.


Who found the Heartbleed Bug?

This bug was independently discovered by a team of security engineers (Riku, Antti and Matti) at Codenomicon and by Neel Mehta of Google Security, who first reported it to the OpenSSL team. The Codenomicon team found the Heartbleed bug while improving the SafeGuard feature in Codenomicon’s Defensics security testing tools and reported the bug to NCSC-FI for vulnerability coordination and reporting to the OpenSSL team.


What is the Defensics SafeGuard?

The SafeGuard feature of Codenomicon’s Defensics security test tools automatically tests the target system for weaknesses that compromise integrity, privacy or safety. SafeGuard is a systematic solution for exposing failed cryptographic certificate checks, privacy leaks or authentication-bypass weaknesses that have exposed Internet users to man-in-the-middle attacks and eavesdropping. In addition to the Heartbleed bug, the new Defensics TLS SafeGuard feature can detect, for instance, the exploitable security flaw in the widely used GnuTLS open source SSL/TLS implementation and the “goto fail;” bug in Apple’s TLS/SSL implementation that was patched in February 2014.


Who coordinates response to this vulnerability?

Immediately after our discovery of the bug on 3 April 2014, NCSC-FI took up the task of verifying it, analyzing it further and reaching out to the authors of OpenSSL and to the software, operating system and appliance vendors that were potentially affected. However, this vulnerability had been found, and its details released, independently by others before this work was completed. Vendors should be notifying their users and service providers, and Internet service providers should be notifying their end users where and when potential action is required.


Is there a bright side to all this?

For those service providers who are affected, this is a good opportunity to upgrade the security strength of the secret keys used. A lot of software is getting updates which otherwise would not have been urgent. Although this is painful for the security community, we can rest assured that the infrastructure of cyber criminals and their secrets have been exposed as well.


What can be done to prevent this from happening in the future?

The security community, ourselves included, must learn to find these inevitable human mistakes sooner. Please support the development effort of software you trust with your privacy. Donate money to the OpenSSL project.


This Q&A was published as a follow-up to the OpenSSL advisory when this vulnerability became public on 7 April 2014. The OpenSSL project has made a statement, and NCSC-FI has published an advisory. Individual vendors of operating system distributions, affected owners of Internet services, software packages and appliance vendors may issue their own advisories.


Introduction to Confused Deputy Attack

Let’s look at a simple example to understand the problem first.
Suppose a client sends the names of an input file and an output file to a server. The server compiles the input file and stores the result in the output file. Let’s also assume that the client has less privilege than the server.

Now assume there is another file, “restricted”, on which the server has write permission but the client does not. If the client sends an arbitrary input file and names “restricted” as the output file, the server will compile the input file and write the result to “restricted”, overwriting its previous content. The client did not have permission to write to “restricted”, but the server did. The server here is a deputy that was exploited to perform a malicious action. This type of problem is called a Confused Deputy Attack.
Are there any real-life examples of the Confused Deputy Attack?
Yes, there are a couple:
  • Cross-site request forgery (CSRF) is an example of a Confused Deputy Attack. Web applications normally use a cookie to authenticate all requests transmitted by a browser. An attacker can take advantage of that and use JavaScript to submit an authenticated HTTP request using the authority of the browser’s user.
  • Clickjacking is another example. A user visits an attacker-controlled website and thinks he is harmlessly browsing, but he is actually tricked into acting as a confused deputy, performing sensitive actions or becoming infected by malware.
  • The FTP Bounce Attack is also a Confused Deputy Attack. The attacker uses the PORT command to make a victim machine’s FTP server open connections to TCP ports to which the attacker himself has no permission to connect. Here, the FTP server is the confused deputy.

Are there any countermeasures for the Confused Deputy Attack?
Yes. The client can send the input file and a capability for the output file to the server, where a capability for a file is the file’s name together with the client’s own permissions on that file. As a result, if the client does not have permission to write the output file, the server will not be able to overwrite it.
In the example of Cross-site request forgery, a URL supplied cross-site would use its own authority rather than the authority of the browser’s user.


This was an informative article on Confused Deputy Attack. Hope you liked it.

You can now earn $1.5 million for hacking the iPhone

A private exploit seller has tripled the reward for Apple iOS exploits and is now offering $1.5 million for valid attacks against fully patched iPhones and iPads.

Zerodium is a premium exploit platform which purchases zero-day vulnerabilities and exploits and pays heavy rewards to researchers that discover previously unknown security flaws in popular software.

The exploit peddler says it “focuses on high-risk vulnerabilities with fully functional exploits” and “we pay the highest rewards on the market.”

For new, novel attacks against Apple’s iOS and Google’s Android mobile operating systems, the company appears to be correct, with rewards for iOS 10 jailbreaking now reaching up to $1.5 million.

In an updated rewards list, Zerodium revealed that researchers able to produce a new attack against up-to-date iOS 10 iPhones and iPads which successfully compromises the devices remotely can expect up to $1,500,000. This is three times the amount of previous rewards, which were brought down to $500,000 after the company paid out $1 million to three research teams last year which were able to find remote zero-day exploits for iOS 9.

In addition, researchers who can provide the private exploit seller with remote exploits for Android 7 mobile devices can enjoy double the payout, with Zerodium now willing to pay up to $200,000 an exploit.

Zerodium’s updated rewards list.


If researchers are willing to sell their work privately rather than report it to vendors, the exploits are then sold to private clients, including government entities, which may use them for surveillance purposes, tracking and spying on criminals, terrorists and other targets of interest.

The company is interested in working exploits against up-to-date software from Apple, Google, and Adobe, among others.

Speaking to Ars Technica about the large difference between reward rates for jailbreaking iOS devices and those for Android exploits, Zerodium founder Chaouki Bekrar said:

“Prices are directly linked to the difficulty of making a full chain of exploits, and we know that iOS 10 and Android 7 are both much harder to exploit than their previous versions.

That means that iOS 10 chain exploits are either 7.5 x harder than Android or the demand for iOS exploits is 7.5 x higher. The reality is a mix of both.”

Earlier this year, the FBI paid $1 million to a security company to provide an exploit used to access an iPhone belonging to one of the San Bernardino shooters.

mimikittenz – Extract Plain-Text Passwords From Memory

mimikittenz is a post-exploitation PowerShell tool that utilizes the Windows function ReadProcessMemory() in order to extract plain-text passwords from various target processes.


The aim of mimikittenz is to provide user-level (non-admin privileged) sensitive data extraction in order to maximise post exploitation efforts and increase value of information gathered per target.

NOTE: This tool targets running process memory address space. Once a process is killed, its memory ‘should’ be cleaned up and become inaccessible; however, there are some edge cases in which this does not happen.
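The underlying idea (regex search over captured process memory) can be sketched portably. mimikittenz itself calls the Windows API ReadProcessMemory(); in the sketch below a byte string stands in for a memory dump, and the patterns are illustrative stand-ins, not mimikittenz's actual rule set:

```python
import re

# Illustrative credential patterns; real tools ship per-application regexes.
PATTERNS = {
    "web_login": re.compile(rb"(?:login|user(?:name)?)=([^&\s]+)"),
    "web_password": re.compile(rb"pass(?:word|wd)?=([^&\s]+)"),
}

def extract_credentials(memory: bytes) -> dict:
    """Scan a memory buffer and return the first match per pattern."""
    found = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(memory)
        if match:
            found[name] = match.group(1).decode(errors="replace")
    return found

# A fake memory dump containing an HTTP form submission left in process memory.
dump = b"\x00\x12junk&username=alice&password=hunter2&other=1\xff"
print(extract_credentials(dump))
```

This is why killing the browser after a session matters: the credentials only exist as long as the process memory that held the request does.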


Currently mimikittenz is able to extract the following credentials from memory:


Webmail

  • Gmail
  • Office365
  • Outlook Web


Accounting

  • Xero
  • MYOB

Remote Access

  • Juniper SSL-VPN
  • Citrix NetScaler
  • Remote Desktop Web Access 2012


Development

  • Jira
  • Github
  • Bugzilla
  • Zendesk
  • Cpanel


Malware Analysis

  • Malwr
  • VirusTotal
  • AnubisLabs


Miscellaneous

  • Dropbox
  • Microsoft Onedrive
  • AWS Web Services
  • Slack
  • Twitter
  • Facebook

You can download mimikittenz here:


What is a Drive-By Download ?

Previously, malware typically infected a computer through a software installation initiated by the user: the user clicked on a link, accepted the installation of some software, and the malware was downloaded along with it and infected the computer. But now, many attackers use a technique called Drive-By Download to spread malware.

What is a Drive-By Download?

A Drive-By Download is a technique by which malware can start downloading simply because a user visits an attacker-controlled website. When a user visits such a malicious website, the download starts in the background on the computer or mobile device. Mostly, this type of download exploits a security flaw in the browser or in other commonly used software.

How does a Drive-By Download work?

The initial code installed by a Drive-By Download is very small. It often simply contacts other computers and instructs them to download the rest of the malware. Normally, the malicious website hosts several pieces of malware exploiting different security flaws, and when a user visits the website, at least one of them gets downloaded by taking advantage of some flaw.
Attackers normally send links to these malicious websites through email or text messages, or even through enticing social media posts. Sometimes the attackers post an interesting article or cartoon on social media, and while the user enjoys it, the Drive-By Download starts in the background.
Countermeasures against Drive-By Downloads
Security experts constantly research this topic. Typically, they use a test machine to visit websites that have previous records of spreading malware; if malware starts downloading onto the test machine, appropriate action is taken.
Educating yourself, though, is the best policy. Do not click on suspicious-looking links. If you are not sure about the authenticity of a website, it is better not to visit it. And be careful about clicking on interesting-looking but suspicious social media posts; they may do more harm than good.
It is also always advisable to keep the software you use updated with security patches. Attackers mostly take advantage of security flaws in software to spread malware.
Preferably, use a safe-search tool that keeps you informed about possibly malicious websites, and use trusted antivirus software.
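As a sketch of what such a safe-search tool does under the hood, here is a minimal local blocklist check in shell. The blocklist file, its one-domain-per-line format, and the example domains are all invented for illustration; real services maintain far larger, constantly updated lists:

```shell
# Minimal blocklist lookup: extract the hostname from a URL and
# check it against a local file of known-bad domains.
blocklist=/tmp/blocklist.txt
printf 'evil.example\nbad.example\n' > "$blocklist"

check_url() {
  # strip the scheme, then everything after the hostname
  host=$(printf '%s' "$1" | sed -E 's#^[a-z]+://##; s#[/:].*##')
  if grep -qxF "$host" "$blocklist"; then
    echo "BLOCKED: $host"
  else
    echo "ok: $host"
  fi
}

check_url 'http://evil.example/promo'   # prints: BLOCKED: evil.example
check_url 'https://news.example/story'  # prints: ok: news.example
```

A browser add-on or antivirus product performs essentially this lookup (against a much bigger database) before the page is allowed to load.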


This article was to inform you about another recent threat. Hope it solved its purpose.

How Secure is Remote Desktop Protocol

I think almost all of us have used Remote Desktop Protocol (RDP) at some point. It is a proprietary protocol developed by Microsoft that enables a graphical connection between hosts over the Internet. If you want to connect to remote hosts and work on them, it is one of the most widely used protocols. But as we already know, data transfer over the Internet is insecure by default, so the security of Remote Desktop Protocol calls for scrutiny.
How secure is Remote Desktop Protocol?
A little bit of research reveals that, normally, Remote Desktop Protocol (RDP) is not very secure.

Normally, Remote Desktop Protocol uses native RDP encryption to transfer data between connected hosts. But this encryption is not very strong. As a result, RDP with native RDP encryption is vulnerable to attacks like Man-In-The-Middle (MITM).

RDP is also vulnerable to Denial of Service (DoS) attacks. Normally, when you open an RDP session, the server presents its login screen before authentication. If an attacker abuses that and opens a large number of RDP sessions, it may lead to DoS.

RDP sessions are also susceptible to in-memory harvesting of user credentials, which can lead to Pass-The-Hash attacks.

How can Remote Desktop Protocol be made secure?

From RDP 6.0 onwards, Microsoft has provided Network Level Authentication (NLA). It establishes a secure connection between the hosts before any data transfer is made: user authentication is required before a full Remote Desktop connection is established, and until then fewer server resources are used, which helps mitigate Denial of Service (DoS) attacks. NLA also establishes an SSL/TLS connection and transfers data in encrypted form.
In the RDP settings, select Network Level Authentication to get this advantage.
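On the server side (Windows), NLA can also be enforced from an elevated command prompt via the registry; the UserAuthentication value under the standard RDP-Tcp key corresponds to the NLA checkbox in the Remote Desktop settings. This is a configuration fragment to run on the server, not on the connecting client:

```shell
REM Require Network Level Authentication for incoming RDP connections
REM (1 = require NLA, 0 = allow pre-NLA clients)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v UserAuthentication /t REG_DWORD /d 1 /f
```

After changing this, pre-6.0 clients that cannot perform NLA will no longer be able to connect.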


So, be informed about the security issues of the software you use and take proper steps for mitigation. And stay safe, stay secure.

MODULE 8.3 Sniffing With TCPDUMP

Tcpdump is a command-line network analyzer, or more technically a packet sniffer. It can be thought of as the command-line version of Wireshark (only to a certain extent, since Wireshark is much more powerful and capable).

As a command-line tool, tcpdump is quite powerful for network analysis: filter expressions can be passed in, and tcpdump will pick up only the matching packets and dump them.

In this tutorial we are going to learn to use tcpdump for network analysis. On Ubuntu, for example, it can be installed by typing the following in a terminal:

$ sudo apt-get install tcpdump

Tcpdump depends on the libpcap library for sniffing packets. It is documented here.

For Windows, use the alternative called WinDump. It is compatible with tcpdump in terms of usage and options. Download from

Basic sniffing

Let's start using tcpdump. The first simple command to try is tcpdump -n:

$ sudo tcpdump -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
16:34:57.266865 IP > ICMP echo reply, id 19941, seq 1176, length 64
16:34:57.267226 IP > 23380+ PTR? (43)
16:34:57.274549 IP > 23380 1/4/2 PTR (195)
16:34:57.297874 IP > UDP, length 105

Why sudo? Because tcpdump needs root privileges to capture packets on network interfaces; on Ubuntu, prepending sudo to a command runs it with superuser/root privileges. The -n parameter stops tcpdump from resolving IP addresses to hostnames, which takes time and is not required right now.

Let's take a line from the above output to analyse.

16:34:57.267226 IP > 23380+ PTR? (43)

The first field, “16:34:57.267226”, is the timestamp with microsecond precision. Next is the protocol of the packet, IP (Internet Protocol, the protocol under which most Internet communication takes place). Next come the source IP address and source port, followed by the destination address and port, and then some information about the packet.

Now let's get more details about the packet. This is where the verbose switch comes in handy. Here is a quick example:

$ sudo tcpdump -v -n
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
16:43:13.058660 IP (tos 0x20, ttl 54, id 50249, offset 0, flags [DF], proto TCP (6), length 40) > Flags [.], cksum 0x6d32 (correct), ack 1617156745, win 9648, length 0
16:43:13.214621 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) > ICMP echo request, id 19941, seq 1659, length 64
16:43:13.355334 IP (tos 0x20, ttl 54, id 48656, offset 0, flags [none], proto ICMP (1), length 84) > ICMP echo reply, id 19941, seq 1659, length 64
16:43:13.355719 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 71) > 28650+ PTR? (43)
16:43:13.362941 IP (tos 0x0, ttl 251, id 63454, offset 0, flags [DF], proto UDP (17), length 223) > 28650 1/4/2 PTR (195)
16:43:13.880338 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has tell, length 28
16:43:14.215904 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) > ICMP echo request, id 19941, seq 1660, length 64

Now, with the verbose switch, many additional details about each packet are displayed, including the TTL, ID, TCP flags, packet length, etc.

Getting the ethernet header (link layer headers)

In the above examples, details of the Ethernet header are not printed. Use the -e option to print the Ethernet header details as well.

$ sudo tcpdump -vv -n -e
[sudo] password for enlightened: 
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
17:57:27.218531 00:25:5e:1a:3d:f1 > 00:1c:c0:f8:79:ee, ethertype IPv4 (0x0800), length 98: (tos 0x20, ttl 54, id 53046, offset 0, flags [none], proto ICMP (1), length 84) > ICMP echo reply, id 19941, seq 6015, length 64
17:57:27.218823 00:1c:c0:f8:79:ee > 00:25:5e:1a:3d:f1, ethertype IPv4 (0x0800), length 85: (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto UDP (17), length 71) > [bad udp cksum 0x9cee -> 0xe5f6!] 23855+ PTR? (43)
17:57:27.226352 00:25:5e:1a:3d:f1 > 00:1c:c0:f8:79:ee, ethertype IPv4 (0x0800), length 269: (tos 0x0, ttl 251, id 10513, offset 0, flags [DF], proto UDP (17), length 255) > [udp sum ok] 23855 q: PTR? 1/4/4 PTR ns: NS NS4.GOOGLE.COM., NS NS2.GOOGLE.COM., NS NS1.GOOGLE.COM., NS NS3.GOOGLE.COM. ar: NS1.GOOGLE.COM. A, NS2.GOOGLE.COM. A, NS3.GOOGLE.COM. A, NS4.GOOGLE.COM. A (227)

Now the first thing after the timestamp is the pair of source and destination MAC addresses.

Sniffing a particular interface

In order to sniff a particular network interface, we must specify it with the -i switch. First, let's get the list of available interfaces using the -D switch.

$ sudo tcpdump -D
2.usbmon1 (USB bus number 1)
3.usbmon2 (USB bus number 2)
4.any (Pseudo-device that captures on all interfaces)

Next, we can use the interface number or name with the -i switch to sniff that particular interface.

$ sudo tcpdump -i 1
$ sudo tcpdump -i eth0

Filtering packets using expressions

The next important feature of tcpdump as a network-analysis tool is that it allows the user to filter packets and select only those that match a certain rule or criterion. And like before, this too is quite simple and can be learned easily. Let's take a few simple examples.

Selecting protocols

$ sudo tcpdump -n tcp

The above command will show only TCP packets. Similarly, udp or icmp can be specified.

Particular host or port

Expressions can be used to specify the source IP, destination IP, and port numbers. The next example picks up all those packets with a given source address:

$ sudo tcpdump -n 'src'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
20:04:04.856379 IP > Flags [.], seq 2781603453:2781604873, ack 338206850, win 41850, length 1420
20:04:05.216372 IP > Flags [P.], seq 3980513010:3980513027, ack 2134949138, win 28400, length 17

The next example picks up DNS request packets, i.e. those packets that originate from the local machine and go to port 53 of some other machine.

$ sudo tcpdump -n 'udp and dst port 53'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
20:06:48.015359 IP > 41001+ A? (46)
20:06:50.842530 IP > 12380+ A? (59)

The above output shows the DNS requests made by the local system to port 53 of the DNS server. It is all very intuitive and simple. Note the “and”, which is used to combine multiple conditions. This is where the creativity begins: writing powerful expressions to analyse the network.

To display the FTP packets going from one particular host to another:

$ sudo tcpdump 'src and dst and port ftp'

Note that the port number 21 has been specified by its name – ftp.

So, similarly, many different kinds of expressions can be developed to fit the needs of the network analyst and pick up exactly the matching packets.
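Filter expressions are just as useful when saving traffic for later analysis. Tcpdump's -w option writes matching packets to a capture file and -r reads one back; the interface name and the /tmp/web.pcap path below are placeholders for your own:

```shell
# Save matching packets to a file with -w
# (the capture itself needs root and a live interface):
#   sudo tcpdump -i eth0 -c 100 -w /tmp/web.pcap 'tcp and (port 80 or port 443)'
# Reading a saved capture back needs no special privileges,
# and a further filter can be applied while reading:
if [ -f /tmp/web.pcap ]; then
  tcpdump -n -r /tmp/web.pcap 'port 443'
fi
```

This is handy for capturing on a busy server now and doing the detailed filtering later on your own machine (or in Wireshark, which reads the same pcap format).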

Search the network traffic using grep

Grep can be used along with tcpdump to search the network traffic. Here is a very simple example

$ sudo tcpdump -n -A | grep -e 'POST'
[sudo] password for enlightened: 
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
E...=.@.@......e@.H..'.P(.o%~...P.9.PN..POST /blog/wp-admin/admin-ajax.php HTTP/1.1
E...c_@.@..=...e@.H..*.PfC<....wP.9.PN..POST /blog/wp-admin/admin-ajax.php HTTP/1.1
E.....@.@......e@.H...."g;.(.-,WP.9.Nj..POST /login/?login_only=1 HTTP/1.1

The above example detects packets containing the string “POST”; as shown, these are HTTP POST requests.
The -A option displays the content of the packet in ASCII form, which makes it searchable with grep.

On Windows the grep command is not available, but there is an equivalent called find/findstr. Example usage:

C:\tools>WinDump.exe -A | findstr "GET"
WinDump.exe: listening on \Device\NPF_{6019E682-FD40-4A54-BB75-9C2ACFA56CAA}
.....&....P..W.....P....k..GET /search?hl=en&sclient=psy-ab&q=asda&oq
.....&....P..[{..N.P...%-..GET /csi?v=3&s=web&action=&ei=LrmPUMrLNoHO
.P-%.}....P..$Ch..GET /subscribe?host_int=139535925&ns_map=2

So in the above example we used WinDump and searched the sniffed packets for the string “GET” (which mostly finds HTTP GET requests).

So what is the idea behind searching packets? Well, one good use is sniffing passwords.
Here is a quick example that sniffs passwords using egrep:

sudo tcpdump -l -A 'port http or port ftp or port smtp or port imap or port pop3' | egrep -i 'pass=|pwd=|log=|login=|user=|username=|pw=|passw=|passwd=|password=|pass:|user:|username:|password:|login:|pass |user ' --color=auto --line-buffered -B20


MODULE 8.2 Sniffing Password with Wireshark


Wireshark is the world’s foremost network protocol analyzer. It lets you see what’s happening on your network at a microscopic level. It is the de facto (and often de jure) standard across many industries and educational institutions.
This tutorial can be an angel and a devil at the same time; it depends on the purpose you use it for. As the writer of this tutorial, I just hope that all of you use it in the right way, because I believe no one wants their password sniffed by someone out there, so don't do that to others either.

Disclaimer – Our tutorials are designed to aid aspiring pen testers/security enthusiasts in learning new skills. We only recommend that you test this tutorial on a system that belongs to YOU. We do not accept responsibility for anyone who thinks it's a good idea to try to use this to hack systems that do not belong to them.
Requirements :

1. Wireshark Network Analyzer (
2. Network card (Wi-Fi card, LAN card, etc.). FYI: for Wi-Fi, the card should support promiscuous mode.

Step 1: Start Wireshark and capture traffic

In Kali Linux you can start Wireshark by going to

Application > Kali Linux > Top 10 Security Tools > Wireshark

In Wireshark go to Capture > Interface and tick the interface that applies to you. In my case, I am using a Wireless USB card, so I’ve selected wlan0.


Ideally you could just press the Start button here and Wireshark will start capturing traffic. In case you missed this, you can always start a capture by going back to Capture > Interface > Start.


Step 2: Filter captured traffic for POST data

At this point Wireshark is listening to all network traffic and capturing it. I opened a browser and signed in to a website using my username and password. When the authentication process was complete and I was logged in, I went back and stopped the capture in Wireshark.

When we type in a username and password and press the Login button, the browser generates a POST request (in short, it sends data to the remote server).

To filter all traffic and locate POST data, type in the following in the filter section

http.request.method == "POST"

See screenshot below. It is showing 1 POST event.


Step 3: Analyze POST data for username and password

Now right-click on that line and select Follow TCP Stream.


This will open a new Window that contains something like this:


So in this case,

username: sampleuser
password: e4b7c855be6e3d4307b8d6ba4cd4ab91
But hold on, e4b7c855be6e3d4307b8d6ba4cd4ab91 can’t be a real password. It must be a hash value.

To crack this password is simple: just open a new terminal window and type this:


and it looks like this:

  1. username: sampleuser
  2. password: e4b7c855be6e3d4307b8d6ba4cd4ab91:simplepassword
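As an aside on the hash above: a 32-hex-character value is consistent with MD5 (that is an assumption, not a certainty). A lookup like the one shown can also be reproduced offline with a small dictionary check; the target hash below is md5("hello"), used purely for illustration, and the tiny wordlist is made up:

```shell
# Offline dictionary check: hash each candidate and compare to the target.
target='5d41402abc4b2a76b9719d911017c592'   # md5("hello"), for illustration
for candidate in password letmein hello; do
  h=$(printf '%s' "$candidate" | md5sum | cut -d' ' -f1)
  [ "$h" = "$target" ] && echo "match: $candidate"
done
# prints: match: hello
```

Real cracking tools apply the same compare-the-hash loop against wordlists of millions of entries, which is why a fast unsalted hash like MD5 is a poor way to protect passwords.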

Meet Apache Spot, a new open source project for cybersecurity

The effort taps big data analytics and machine learning for advanced threat detection

The Apache Spot project was announced at Strata+Hadoop World on Wednesday, Sept. 28, 2016.

Credit: Katherine Noyes

Hard on the heels of the discovery of the largest known data breach in history, Cloudera and Intel on Wednesday announced that they’ve donated a new open source project to the Apache Software Foundation with a focus on using big data analytics and machine learning for cybersecurity.

Originally created by Intel and launched as the Open Network Insight (ONI) project in February, the effort is now called Apache Spot and has been accepted into the ASF Incubator.

“The idea is, let’s create a common data model that any application developer can take advantage of to bring new analytic capabilities to bear on cybersecurity problems,” Mike Olson, Cloudera co-founder and chief strategy officer, told an audience at the Strata+Hadoop World show in New York. “This is a big deal, and could have a huge impact around the world.”

Based on Cloudera’s big data platform, Spot taps Apache Hadoop for infinite log management and data storage scale along with Apache Spark for machine learning and near real-time anomaly detection. The software can analyze billions of events in order to detect unknown and insider threats and provide new network visibility.

Essentially, it uses machine learning as a filter to separate bad traffic from benign and to characterize network traffic behavior. It also uses a process including context enrichment, noise filtering, whitelisting and heuristics to produce a shortlist of most likely security threats.

By providing common open data models for network, endpoint, and user, meanwhile, Spot makes it easier to integrate cross-application data for better enterprise visibility and new analytic functionality. Those open data models also make it easier for organizations to share analytics as new threats are discovered.

Other contributors to the project so far include eBay, Webroot, Jask, Cybraics, Cloudwick, and Endgame.

“The open source community is the perfect environment for Apache Spot to take a collective, peer-driven approach to fighting cybercrime,” said Ron Kasabian, vice president and general manager for Intel’s Analytics and Artificial Intelligence Solutions Group. “The combined expertise of contributors will help further Apache Spot’s open data model vision and provide the grounds for collaboration on the world’s toughest and constantly evolving challenges in cybersecurity analytics.”