PHP CVE-2011-2202

PHP is prone to a security-bypass vulnerability. Successful exploits allow an attacker to delete files from the root directory, which may aid in further attacks.
PHP 5.3.6 is vulnerable; other versions may also be affected.

Webmasters are advised to manually patch their PHP installations after a serious flaw allowing attackers to potentially delete files from their root directories was publicly disclosed.

The vulnerability lies in the SAPI_POST_HANDLER_FUNC() function in rfc1867.c and can be exploited to prepend forward or back slashes to the file name during an upload. This allows an attacker, for example, to delete files from the root directory, and it can be combined with other vulnerabilities to enhance attacks. The flaw is described as an input validation error and security-bypass issue; vulnerability research vendor Secunia rates it as "less critical." A Polish web application developer named Krzysztof Kotowicz is credited with discovering and reporting the issue, but even though it was patched on June 12, details about the flaw have been available online since May 27.

The vulnerability, identified as CVE-2011-2202, affects PHP 5.3.6 and earlier versions. No new package has been released yet, but a patch can be grabbed from the repository and applied manually. The vulnerability does not require authentication and has a partial impact on system integrity; system confidentiality is partially affected as well.

It’s still unclear whether its access complexity should be rated low, as listed in an IBM X-Force advisory, or high, as considered by the Red Hat security team.

Exploit found on pastebin.com

HTTP Request:
====
POST /file-upload-fuzz/recv_dump.php HTTP/1.0
host: blog.security.localhost
content-type: multipart/form-data; boundary=----------ThIs_Is_tHe_bouNdaRY_$
content-length: 200

------------ThIs_Is_tHe_bouNdaRY_$
Content-Disposition: form-data; name="contents"; filename="/anything.here.slash-will-pass";
Content-Type: text/plain

any
------------ThIs_Is_tHe_bouNdaRY_$--

HTTP Response:
====
HTTP/1.1 200 OK
Date: Fri, 27 May 2011 11:35:08 GMT
Server: Apache/2.2.14 (Ubuntu)
X-Powered-By: PHP/5.3.2-1ubuntu4.9
Content-Length: 30
Connection: close
Content-Type: text/html

/anything.here.slash-will-pass

PHP script:
=====
if (!empty($_FILES['contents'])) { // process file upload
    echo $_FILES['contents']['name']; // prints the client-supplied name, leading slash included
    unlink($_FILES['contents']['tmp_name']);
}
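
Until a fixed package ships, upload handlers can protect themselves by normalizing the client-supplied name before using it. Below is a minimal defensive sketch; the destination directory and the use of basename() are our own illustration, not part of the official patch:

// Hypothetical defensive handler: never trust $_FILES[...]['name'] directly.
if (!empty($_FILES['contents'])) {
    // basename() strips everything up to the last directory separator, so
    // "/anything.here.slash-will-pass" becomes "anything.here.slash-will-pass".
    $name = basename($_FILES['contents']['name']);
    if (is_uploaded_file($_FILES['contents']['tmp_name'])) {
        // assumed upload directory, for the example only
        move_uploaded_file($_FILES['contents']['tmp_name'], '/var/uploads/' . $name);
        echo $name;
    }
}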

Fake referrer – SEO trap

Time for me to write an article after the awesome post from luc about the php.net hack rumors.
I'll share here some tips I used a long time ago to gather a maximum of fresh content and backlinks. Let's brainstorm a little.

It's more of a psychological trap for SEO admins who check their stats twenty times a day :)

Example

Let's take an example: you own a web directory listing pizzerias, and for the example we'll call it "pizzeria-master-huge-directory.com".
At the beginning you created the content yourself: you scraped Google manually, searched for pizzerias in every city you could think of, and wrote 10 or 20 entries.
Now let's think. The website is running, you have a decent position in the SERPs, and you want to expand your visibility.

Here is the point: you want every pizzeria present on the internet to come to your site, register, and write an entry in your directory BY THEMSELVES.
You would get fresh new content without doing anything.
But HOW DO YOU DO THAT?
As long as you're not ranking in the top 3, very few people will come and register on their own.
The idea I had (and I'm not the first) is to let those pizzeria owners know that you exist and that your directory can bring them a little traffic.

Here comes the scripting :)
Don't think about mail spam; it's a waste of time. No one cares about spam, and it would be hard to get the admin's direct email anyway.

Deep in SEO souls

Just remember what you did every day or week when you were an SEO beginner:

  1. go to google.com
  2. type analytics
  3. go to http://www.google.com/analytics/
  4. go to your website tab
  5. go to Overview Traffic Sources
  6. go to referrers
  7. check WHO IS INSANE ENOUGH TO LINK TO YOU!

The trick is not to get in touch with the site admin by mail; it is to set up a kind of trap. Just wait for the webmaster to wonder "who is this website sending me SPECIALIZED traffic?". Don't forget, your victim owns "pizzeria-jose.com", and he is about to notice that "pizzeria-master-huge-directory.com" sends him a little bit of traffic.

As an SEO newbie, he'll land on "pizzeria-master-huge-directory.com" and check out the website; he'll even try to understand how it sent him traffic, and he may finally register and create an entry in the directory.

See what I mean?

Time to setup the trap

Now remember what you did at the beginning, and let's automate that part:

  1. go to google.com
  2. search for localized pizzerias (e.g. pizzeria paris, pizzeria new york, pizzeria london)
  3. visit each website found and generate a hit on it, using our pizzeria directory as the referrer (see the script under Sources below)

Trap is now set :D

You only have to wait for all those SEO rookies to check their analytics stats.
I didn't measure the ROI of this trick; I think it totally depends on your business.

I just know that it costs nothing to set up this trap, and it can bring you targeted webmasters who are potentially interested in subscribing to "pizzeria-master-huge-directory.com".

Sources

function fake_referrer($url, $proxy = false) {
	$ch = curl_init();

	// pretend to be a regular browser and set our directory as the referrer
	$userAgent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)';
	curl_setopt($ch, CURLOPT_URL, $url);
	curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
	curl_setopt($ch, CURLOPT_REFERER, "http://www.pizzeria-master-huge-directory.com/");

	// optional proxy ("host:port"), handy to vary the visiting IP
	if ($proxy !== false) {
		curl_setopt($ch, CURLOPT_PROXY, $proxy);
	}

	$result = curl_exec($ch);
	// echo curl_error($ch);

	curl_close($ch);

	return $result;
}

print_r($site_list);

foreach ($site_list as $ws) {
	fake_referrer($ws);
}

I won't explain here how to scrape Google; it's not the purpose of this topic. All you need is to set your target list in the $site_list array.
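
A hand-built list is enough to get started; these URLs are made up for the example:

// Hypothetical target list; in practice it would come from your scraper
$site_list = array(
	'http://www.pizzeria-jose.com/',
	'http://www.pizzeria-paris.example/',
	'http://www.pizzeria-newyork.example/',
);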

I hope this article was helpful. As always, if you need anything, just ask in the comments :)

An efficient network throttling algorithm

Blacklist

In most network applications, managing incoming flow is important and quite hard to set up. If your algorithm is too restrictive, you will drop too many connections; if it is too permissive, you will accept undesired ones. The real need is to be able to tell your application: "Accept N connection(s) in an X-second time range".

Concept

To decide whether a connection has to be dropped, look at a history covering the last X seconds, count how many connections from that IP have been performed, and compare that count to the limit. Here is the "simple" algorithm that does that:


#include <QDateTime>
#include <QHash>
#include <QQueue>
#include <QHostAddress>

class Blacklist {
private:
    qint32 m_maxConnectCount;
    quint32 m_historyRetention;

    QHash < quint32, QQueue < quint32 > > m_history;

    bool isBlacklisted(quint32 addr) {
        quint32 curTs = QDateTime::currentDateTime().toTime_t();

        // prune history entries that fell out of the retention window
        QQueue < quint32 > & addrHistory = m_history[addr];
        while(!addrHistory.isEmpty() && (curTs - addrHistory.head()) > m_historyRetention)
            addrHistory.dequeue();

        // blacklisted if the window already holds more than the allowed count
        bool blacklisted = addrHistory.count() > m_maxConnectCount;
        if(blacklisted) addrHistory.dequeue(); // keep the queue size bounded

        addrHistory.enqueue(curTs);
        return blacklisted;
    }

public:
    Blacklist(qint32 maxConnectCount, quint32 historyRetention) {
        m_maxConnectCount = maxConnectCount;
        m_historyRetention = historyRetention;
    }

    bool isBlacklisted(const QHostAddress & addr) {
        if(m_maxConnectCount > 0) {
            switch(addr.protocol()) {
            case QAbstractSocket::IPv4Protocol:
                return isBlacklisted(addr.toIPv4Address());
            case QAbstractSocket::IPv6Protocol:
                break;
            case QAbstractSocket::UnknownNetworkLayerProtocol:
            default:
                return true;
            }

            // ipv6: XOR-fold the 128-bit address down to 32 bits;
            // copy to a local first so we don't read a destroyed temporary
            Q_IPV6ADDR ip6 = addr.toIPv6Address();
            const quint32 * d = reinterpret_cast<const quint32 *>(ip6.c);
            return isBlacklisted(d[0] ^ d[1] ^ d[2] ^ d[3]);
        }

        return false;
    }
};

Everything happens in the private function "bool isBlacklisted(quint32 addr);".

Explanations

This function returns true or false depending on whether the connection should be dropped. The concept is simple: we look up the history associated with this address in the associative array m_history, prune out-of-scope values (if you tell your application to drop after 10 connections in 10 seconds, history entries older than 10 seconds are out of scope), and then simply count how many connections remain in the history. If this count is greater than N, we drop; otherwise we accept. Obviously, we have to take the current connection into account even if we choose to drop it.

The main issue with this algorithm is that you have to keep a full history, which can lead to a huge amount of memory usage if you define a large scope (10000 connections in 10 minutes means up to 10000 stored timestamps per address). Moreover, the per-address queue has to be walked and pruned on every lookup, which costs CPU time when the window is large.

What kind of algorithm could lead to both low memory usage and low CPU usage?

Solution: average

The average sounds like a notion we all learned in elementary school, so how could it solve such a difficult issue? Let's explain the little trick: if we want to compute the average of the set (12, 13, 14), we sum all of those values and divide by the size of the set; here, 13. Now we want to take a new value into account, say 17, without reconsidering the whole set. We only need to know two things: the average, and the count of values this average was computed from: ((13 * 3) + 17) / 4 = 14. More mathematically: ((average * count) + newVal) / (count + 1).
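
As a quick sanity check, here is the update rule in a few standalone lines of C++ (independent from the Qt classes in this post):

#include <cstdio>

int main() {
    double average = (12 + 13 + 14) / 3.0; // 13
    int count = 3;

    // fold a new value into the running average without revisiting the set
    double newVal = 17;
    average = ((average * count) + newVal) / (count + 1); // (39 + 17) / 4 = 14
    ++count;

    printf("average over %d values: %g\n", count, average);
    return 0;
}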

How can this trick help with our issue?

Well, let's say we now keep in the history only the average timestamp of the last N connections, and we drop a connection if the difference between the current timestamp and this average is smaller than X (the number of seconds within which we consider repeated connections a flood). Here is the new algorithm:

class AverageBlacklist {
private:
    struct AverageHistory {
        quint64 m_averageTime;
        qint32 m_currentCount;

        AverageHistory() {
            reset();
        }

        bool isAverageReached(quint32 ts, qint32 maxConnectCount, quint32 historyRetention) {
            // daylight saving or clock change, or simply no communication for a long time
            if(ts < m_averageTime || ts > (m_averageTime + (historyRetention * 2)))
                reset();

            m_averageTime = ((m_averageTime * m_currentCount) + ts) / (m_currentCount + 1);
            if(m_currentCount >= maxConnectCount)
                return ts <= (m_averageTime + historyRetention);

            m_currentCount++;
            return false;
        }

        void reset() {
            m_averageTime = 0;
            m_currentCount = 0;
        }
    };

    qint32 m_maxConnectCount;
    quint32 m_historyRetention;

    QHash < quint32, AverageHistory > m_averageHash;

    bool isBlacklisted(quint32 addr) {
        return m_averageHash[addr].isAverageReached(QDateTime::currentDateTime().toTime_t(),
                                                    m_maxConnectCount, m_historyRetention);
    }

public:
    AverageBlacklist(qint32 maxConnectCount, quint32 historyRetention) {
        m_maxConnectCount = maxConnectCount;
        m_historyRetention = historyRetention;
    }

    bool isBlacklisted(const QHostAddress & addr) {
        if(m_maxConnectCount > 0) {
            switch(addr.protocol()) {
            case QAbstractSocket::IPv4Protocol:
                return isBlacklisted(addr.toIPv4Address());
            case QAbstractSocket::IPv6Protocol:
                break;
            case QAbstractSocket::UnknownNetworkLayerProtocol:
            default:
                return true;
            }

            // ipv6: XOR-fold the 128-bit address down to 32 bits;
            // copy to a local first so we don't read a destroyed temporary
            Q_IPV6ADDR ip6 = addr.toIPv6Address();
            const quint32 * d = reinterpret_cast<const quint32 *>(ip6.c);
            return isBlacklisted(d[0] ^ d[1] ^ d[2] ^ d[3]);
        }

        return false;
    }
};

I tested the two algorithms side by side to see whether they behave the same way, and it seems they do, except when performing EXACTLY one connection per second on a 10/10 configuration (10 connections in 10 seconds), where the results differ a little. On a real-life system they have exactly the same efficiency.
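
For reference, here is how one of the classes might be wired into a Qt 4 server. The Server class and the 10-connections-per-10-seconds figures are illustrative assumptions, not code from this post:

#include <QTcpServer>
#include <QTcpSocket>

// Hypothetical QTcpServer subclass guarding accepts with the blacklist (Qt 4 API)
class Server : public QTcpServer {
    AverageBlacklist m_blacklist;

public:
    Server() : m_blacklist(10, 10) {} // at most 10 connections per 10 seconds

protected:
    void incomingConnection(int socketDescriptor) {
        QTcpSocket * socket = new QTcpSocket(this);
        socket->setSocketDescriptor(socketDescriptor);

        if (m_blacklist.isBlacklisted(socket->peerAddress())) {
            socket->abort(); // flooding peer: drop immediately
            socket->deleteLater();
            return;
        }

        // ... hand the accepted socket over to the application ...
    }
};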

Feel free to post any feedback on this algorithm, or to debate the topic with me, via private message or in the comments.
