Spam in blogs
{{dablink|For blogs that are built only for spamming, see [[Spam blog]].}}
'''Spam in blogs''' (also called simply '''blog spam''' or '''comment spam''') is a form of [[spamdexing]]. It is done by automatically posting random comments or promoting commercial services to [[weblog|blogs]], [[wiki]]s, [[guestbook]]s, or other publicly accessible online discussion boards. Any web application that accepts and displays [[hyperlinks]] submitted by visitors may be a target.
Adding links that point to the spammer's web site artificially increases the site's search engine ranking. An increased ranking often results in the spammer's commercial site being listed ahead of other sites for certain searches, increasing the number of potential visitors and paying customers.
==History==
This type of spam originally appeared in internet [[guestbook]]s, where spammers repeatedly filled guestbooks with links to their own sites, with no relevant comment, in order to raise their search engine rankings. If an actual comment is given, it is often just "cool page", "nice website", or keywords of the spammed link.
In [[2003]], spammers began to take advantage of the open nature of comments in blogging software like [[Movable Type]] by repeatedly placing comments on various blog posts that provided nothing more than a link to the spammer's commercial web site. Jay Allen created a free plugin, called MT-Blacklist,<ref>[http://www.jayallen.org/projects/mt-blacklist/ MT-Blacklist - A Movable Type Anti-spam Plugin<!-- Bot generated title -->]</ref> for the Movable Type weblog tool (versions prior to 3.2) that attempted to alleviate this problem. Many blogging packages now have methods of preventing or reducing the effect of blog spam, although spammers have developed tools to circumvent them. Many spammers use special blog spamming tools like [[Trackback Submitter]] to bypass comment spam protection on popular blogging systems like Movable Type, WordPress, and others.
==Possible solutions==
===Blocking by keyword===
Blocking specific words from posts is one of the simplest and most effective ways to reduce spam. Much spam can be blocked simply by banning names of popular pharmaceuticals and casino games.
The main problem with this approach is that it requires constant updating, since spammers continually find new ways to spell or hawk their goods. Blocking "viagra", for example, may reduce spam until spammers start spamming "vi@gra", "v1agr@", "vigra", and so on. The large number of products spammers try to sell makes such lists difficult to keep up to date.
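A minimal sketch of keyword blocking might look like the following; the patterns and function name are purely illustrative and are not taken from any particular blogging package:

<source lang="python">
import re

# Illustrative blocklist; real lists need constant maintenance as spammers
# invent new spellings.
BLOCKED_PATTERNS = [
    r"v[i1!|]+[a@]+gr+[a@]+",   # catches "viagra", "v1agr@", "vi@gra", ...
    r"c[a@]s[i1]no",
    r"p[o0]ker",
]

def is_spam(comment_text):
    """Return True if the comment matches any blocked keyword pattern."""
    text = comment_text.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)
</source>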
===nofollow===<!-- This section is linked from [[PageRank]] -->
{{main|nofollow}}
Google announced in early 2005 that hyperlinks with the <code>rel="nofollow"</code> attribute<ref>[http://www.w3.org/TR/REC-html40/struct/links.html#adef-rel Links in HTML documents<!-- Bot generated title -->]</ref> would not influence the link target's ranking in the search engine's index. The Yahoo and MSN search engines also respect this attribute.<ref>[http://googleblog.blogspot.com/2005/01/preventing-comment-spam.html Official Google Blog: Preventing comment spam<!-- Bot generated title -->]</ref>
[[nofollow]] is a misnomer in this case since it actually tells a search engine "Don't score this link" rather than "Don't follow this link." This differs from the meaning of <code>nofollow</code> used within a [[Robots.txt#Alternatives|robots meta tag]] which <strong>does</strong> tell a search engine: "Do not follow any of the hyperlinks in the body of this document."
Using <code>rel="nofollow"</code> is a much easier solution that renders improvised techniques such as keyword blocking largely unnecessary. Most weblog software now marks reader-submitted links this way by default (often with no option to disable it without modifying the code). More sophisticated server software could omit the attribute for links submitted by [[trust management|trusted users]], such as those registered for a long time, on a [[whitelist]], or with high [[karma (Slashdot)|karma]]. Some server software adds <code>rel="nofollow"</code> to pages that have been recently edited but omits it from stable pages, on the theory that offending links will already have been removed from stable pages by human editors.
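As an illustration only (not the implementation of any particular weblog package), a server could rewrite reader-submitted links before rendering them, skipping the attribute for trusted users:

<source lang="python">
import re

def add_nofollow(html, trusted=False):
    """Add rel="nofollow" to anchor tags in reader-submitted HTML.
    Trusted users (e.g. long-registered or whitelisted) are exempt."""
    if trusted:
        return html
    # Naive rewrite for illustration; production software should use an HTML parser.
    return re.sub(r'<a\s+', '<a rel="nofollow" ', html, flags=re.IGNORECASE)

print(add_nofollow('<a href="http://example.com/">Link</a>'))
# <a rel="nofollow" href="http://example.com/">Link</a>
</source>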
Some weblog authors object to the use of <code>rel="nofollow"</code>, arguing, for example,<ref>Michael Hampton (May 23, 2005), [http://www.homelandstupidity.us/2005/05/23/nofollow-revisited/ Nofollow revisited], ''HomelandStupidity.us'', retrieved November 2, 2007</ref> that
* Link spammers will continue to spam everyone to reach the sites that do not use <code>rel="nofollow"</code>
* Link spammers will continue to place links for clicking (by surfers) even if those links are ignored by search engines.
* Google is advocating the use of <code>rel="nofollow"</code> in order to reduce the effect of heavy inter-blog linking on page ranking.
* Google is advocating the use of <code>rel="nofollow"</code> only to minimize its own filtering efforts and to deflect that this actually had better been called <code>rel="nopagerank"</code>.
* Nofollow may reduce the value of legitimate comments<ref>[http://jeremy.zawodny.com/blog/archives/006800.html Nofollow No Good? (by Jeremy Zawodny)<!-- Bot generated title -->]</ref>
Other websites with high user participation, like [[Slashdot]], use improvised nofollow implementations, adding <code>rel="nofollow"</code> only to links posted by potentially misbehaving users. Potential spammers posing as legitimate users can be identified through heuristics such as the age of the registered account and other factors. Slashdot also uses the poster's karma as a determinant in attaching a nofollow tag to user-submitted links.
<code>rel="nofollow"</code> has come to be regarded as a [[microformat]].
===Validation (reverse Turing test)===
A method to block automated spam comments is requiring [[validation]] before the contents of the reply form are published. The goal is to verify that the form is being submitted by a real human being and not by a spam tool, and the approach has therefore been described as a [[reverse Turing test]]. The test should be of such a nature that a human being can easily pass it while an automated tool would most likely fail.
Many forms on websites take advantage of the [[CAPTCHA]] technique, displaying a combination of numbers and letters embedded in an image, which must be entered literally into the reply form to pass the test. To keep out spam tools with built-in [[text recognition]], the characters in the images are customarily misaligned, distorted, and noisy. A drawback of many older CAPTCHAs is that the passwords are usually [[case-sensitive]] while the corresponding images often make capital and small letters hard to distinguish, which should be taken into account when devising the images. Such systems can also prove problematic for blind people who rely on [[screen readers]]; some more recent systems address this by providing an audio version of the characters.
A simple alternative to CAPTCHAs is validation in the form of a [[password]] question, with a hint to human visitors that the password is the answer to a simple question such as "The Earth revolves around the... [Sun]".
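A minimal sketch of such a password-question check follows; the question, accepted answers, and field name are examples only:

<source lang="python">
# Illustrative password-question check; the question and answers are examples.
QUESTION = "The Earth revolves around the..."
ACCEPTED_ANSWERS = {"sun", "the sun"}

def passes_validation(form):
    """Accept the comment only if the extra form field answers the question."""
    answer = form.get("validation_answer", "").strip().lower()
    return answer in ACCEPTED_ANSWERS
</source>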
One drawback to be taken into consideration is that any validation required in the form of an additional form field may become a nuisance especially to regular posters. Bloggers and guestbook owners may notice a significant decrease in the number of comments once such a validation is in place.
===Disallowing links in posts===
There is negligible gain from spam that does not contain links, so virtually all spam posts contain an excessive number of links. It is therefore safe to require a reverse Turing test only when a post contains links and to let all other posts through. While this is highly effective, spammers do frequently send gibberish posts (such as "ajliabisadf ljibia aeriqoj") to probe the spam filter. These gibberish posts will not be labeled as spam; they do the spammer no good, but they still clog up comments sections.
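A sketch of this gating logic, assuming a CAPTCHA or similar test facility already exists (the function and field names here are illustrative):

<source lang="python">
import re

# Crude link detection for illustration: bare URLs, "www." hosts, or anchor tags.
URL_PATTERN = re.compile(r'https?://|www\.|<a\s', re.IGNORECASE)

def needs_turing_test(post_text):
    """Only posts containing links need to pass the reverse Turing test."""
    return bool(URL_PATTERN.search(post_text))

def handle_post(post_text, captcha_passed):
    # Link-free posts (including spammers' gibberish probe posts) go straight through.
    if not needs_turing_test(post_text):
        return "accept"
    return "accept" if captcha_passed else "require-captcha"
</source>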
Garbage submissions may, however, also result from unsophisticated spambots that do not parse the targeted HTML form fields first but simply send generic POST requests to pages. In that case a generic field name such as "content" or "forum_post" happens to match a field accepted by the blog or forum software and is saved, while the field carrying the spam link (for example "uri", or some other wrong URL field name) matches nothing and is discarded, so no spam link is saved.
===Redirects===
Instead of displaying a direct hyperlink submitted by a visitor, a web application could display a link to a script on its own website that redirects to the correct [[Uniform Resource Locator|URL]]. This will not prevent all spam, since spammers do not always check for link redirection, but it effectively prevents spammed links from increasing the spammer's [[PageRank]], just as <code>rel="nofollow"</code> does. An added benefit is that the redirection script can count how many people visit external URLs, although it will increase the load on the site.
Redirects should be [[server-side]] to avoid accessibility issues related to client-side redirects. This can be done via the [[.htaccess|.htaccess file]] in [[apache server|Apache]].
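A minimal server-side redirect script might look like the following WSGI sketch; the path, query parameter, and in-memory click counter are illustrative assumptions, not a specific package's implementation:

<source lang="python">
from urllib.parse import parse_qs

click_counts = {}  # illustrative in-memory counter; a real site would persist this

def redirect_app(environ, start_response):
    """Redirect e.g. /out?url=... to the external URL and count the click."""
    params = parse_qs(environ.get("QUERY_STRING", ""))
    target = params.get("url", [""])[0]
    if not target.startswith(("http://", "https://")):
        start_response("400 Bad Request", [("Content-Type", "text/plain")])
        return [b"Invalid target"]
    click_counts[target] = click_counts.get(target, 0) + 1
    start_response("302 Found", [("Location", target)])
    return [b""]
</source>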
Another way of preventing [[PageRank]] leakage is to make use of public [[redirection]] or [[HTTP referer|dereferral]] services such as [[TinyURL]]. For example,
<nowiki><a href="http://my-own.net/alias_of_target" rel="nofollow" >Link</a></nowiki>
where 'alias_of_target' is the alias of the target address.
Note however that this prevents users from being able to view the target of a link before clicking it, thus interfering with their ability to ignore websites they know to be spam.
===Distributed approaches===
This is a relatively new approach to addressing link spam. One of the shortcomings of link spam filters is that most sites receive only one link from each domain running a spam campaign. If the spammer varies IP addresses, there is little to no distinguishable pattern left on the vandalized site. The pattern, however, is left across the thousands of sites that were hit quickly with the same links.
A distributed approach, like the free [[LinkSleeve]],<ref>[http://www.linksleeve.org LinkSleeve : SLV : Spam Link Verification<!-- Bot generated title -->]</ref> uses [[XML-RPC]] to communicate between the various server applications (such as blogs, guestbooks, forums, and wikis) and the filter server, in this case LinkSleeve. The posted data is stripped of URLs, and each URL is checked against URLs recently submitted across the web. If a threshold is exceeded, a "reject" response is returned and the comment, message, or posting is discarded; otherwise, an "accept" message is sent.
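In outline, a client of such a service might issue an XML-RPC call like the following; the endpoint URL and method name here are placeholders for illustration, not LinkSleeve's documented interface:

<source lang="python">
import xmlrpc.client

# Placeholder endpoint and method name; consult the filtering service's
# documentation for the real interface.
SERVER_URL = "http://filter.example.org/RPC2"

def comment_is_acceptable(comment_text):
    """Ask a distributed filtering service whether the comment's links look like spam."""
    proxy = xmlrpc.client.ServerProxy(SERVER_URL)
    verdict = proxy.checkLinks(comment_text)  # hypothetical method name
    return verdict == "accept"
</source>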
A more robust distributed approach is [[Akismet]], which uses a similar approach to LinkSleeve but uses API keys to assign trust to nodes and has wider distribution as a result of being bundled with the 2.0 release of [[WordPress]].<ref>[http://wordpress.org/development/2005/12/wp2/ WordPress › Blog » WordPress 2<!-- Bot generated title -->]</ref> Its developers claim that over 140,000 blogs contribute to the system. [[Akismet]] libraries have been implemented for Java, Python, Ruby, and PHP, but its adoption may be hindered by its commercial-use restrictions. In 2008, [[Six Apart]] therefore released a [[beta]] version of its [[TypePad AntiSpam]] software, which is compatible with Akismet but free of the latter's commercial-use restrictions.
[[Project Honey Pot]] has also begun tracking comment spammers. The Project uses its vast network of thousands of traps installed in over one hundred countries around the world in order to watch what comment spamming web robots are posting to blogs and forums. Data is then published on the top countries for comment spamming, as well as the top keywords and URLs being promoted. The Project's data is then made available to block known comment spammers through [[http:BL]]. Various plugins have been developed to take advantage of the [[http:BL]] API.
===Application-specific anti-spam methods===
Particularly popular software products such as [[Movable Type]] and [[MediaWiki]] have developed their own custom anti-spam measures, as spammers focus more attention on targeting those platforms. Whitelists and blacklists that prevent certain IPs from posting, or that prevent people from posting content that matches certain filters, are common defenses. More advanced [[access control list]]s require various forms of validation before users can contribute anything like linkspam.
The goal in every case is to allow good users to continue to add links to their comments, as that is considered by some to be a valuable aspect of any comments section.
====RSS feed monitoring====
Some wikis and blogs provide an RSS feed of recent changes or comments. Adding that feed to a news reader and setting up a search for common spam terms (typically [[viagra]] and other drug names) lets an administrator quickly identify and remove the offending spam.
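A small script along these lines could flag suspect entries; the feed URL and term list below are examples only:

<source lang="python">
import re
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.org/wiki/recent-changes.rss"          # example feed URL
SPAM_TERMS = re.compile(r"viagra|cialis|casino", re.IGNORECASE)  # example terms

def flag_spam_entries(feed_url=FEED_URL):
    """Yield titles of recent-changes items that mention common spam terms."""
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    for item in tree.iter("item"):
        title = item.findtext("title", "")
        description = item.findtext("description", "")
        if SPAM_TERMS.search(title + " " + description):
            yield title
</source>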
====Response tokens====
Another filter available to webmasters is to add a hidden [[session token]] or [[hash function|hash]] to their comment form. When a comment is submitted, data stored with the posting, such as the IP address and the time of posting, can be compared to the data stored with the session token or hash generated when the user loaded the comment form. Postings that use different IP addresses for loading and submitting the comment form, or that took an unusually short or long time to compose, can be filtered out. This method is particularly effective against spammers who [[IP address spoofing|spoof their IP address]] in an attempt to conceal their identities.
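A hedged sketch of such a token, binding the form to the visitor's IP address and the time the form was served, might look like this (the secret key, field handling, and time limits are illustrative assumptions):

<source lang="python">
import hashlib
import hmac
import time

SECRET_KEY = b"change-me"  # illustrative secret; keep it private in practice

def issue_token(ip_address):
    """Embed this value in a hidden form field when serving the comment form."""
    timestamp = str(int(time.time()))
    signature = hmac.new(SECRET_KEY, (ip_address + timestamp).encode(),
                         hashlib.sha256).hexdigest()
    return timestamp + ":" + signature

def token_is_valid(token, ip_address, min_seconds=5, max_seconds=3600):
    """On submission, check the signature, the IP, and the time taken to compose."""
    try:
        timestamp, signature = token.split(":")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, (ip_address + timestamp).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    elapsed = time.time() - int(timestamp)
    return min_seconds <= elapsed <= max_seconds
</source>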
===Ajax===
Some blog software, such as [[Typo (content management system)|Typo]], lets the blog administrator accept only comments submitted via [[Ajax (programming)|Ajax]] XMLHttpRequests and discard regular form POST requests. This causes the accessibility problems typical of Ajax-only applications.
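One common, though easily forged, way for a server to distinguish XMLHttpRequest submissions is the <code>X-Requested-With</code> header that many JavaScript libraries add to Ajax requests; the sketch below is an illustration of that heuristic, not Typo's actual implementation:

<source lang="python">
def is_ajax_submission(headers):
    """Heuristic: many JavaScript libraries add this header to XMLHttpRequests.
    This is illustrative only and is trivial for a determined spammer to forge."""
    return headers.get("X-Requested-With", "") == "XMLHttpRequest"

def handle_comment(headers, form_data):
    if not is_ajax_submission(headers):
        return "discard"   # plain form POSTs are rejected
    return "accept"
</source>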
Although this technique prevents spam so far, it is a form of [[security by obscurity]] and will probably be defeated if it becomes popular enough.
==See also==
* [[Adversarial information retrieval]]
* [[Social networking spam]]
==References==
{{reflist}}
==External links==
* [http://meta.wikimedia.org/wiki/Anti-spam_Features Anti-spam Features] of [[MediaWiki]]
* [http://sixapart.com/pronet/comment_spam.html Six Apart Comment Spam Guide], fairly broad overview from [[Movable Type]]'s authors.
* [http://www.clientwell.com/blog/2007/08/23/blog-spam/ Article bemoaning the proliferation of blog spam.]
* Gilad Mishne, David Carmel and Ronny Lempel: [http://airweb.cse.lehigh.edu/2005/mishne.pdf Blocking Blog Spam with Language Model Disagreement], PDF. From the First International Workshop on Adversarial Information Retrieval (AIRWeb'05) Chiba, Japan, 2005.
{{spamming}}
[[Category:Search engine optimization]]
[[Category:Spamming]]
[[Category:Black hat seo]]
[[de:Wikispam]]
[[pl:Link spam]]