Archive for the ‘Security’ Category

Security alert – SQL Server injection attack on the loose

Sunday, August 10th, 2008

Another SQL injection attack targeting Microsoft’s SQL Server is doing the rounds and looks to be increasing in rate. SANS is reporting that the activity is increasing, and other sources indicate that close variants have infected several thousand sites. I was contacted over the weekend to help recover a database for an international newspaper that had been attacked earlier this weekend. The damage? Some 40,000 pieces of data had been modified to embed the undesirable Trojan code.
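The recovery itself was largely mechanical: strip the appended tag from every affected column value. A minimal sketch of that kind of clean-up in Python — the pattern and sample value here are invented to show the shape of the job, not the actual attack signature:

```python
import re

# Assumed shape of the injected payload: a script tag appended to stored text.
# (Illustrative pattern only; a real clean-up would match the observed signature.)
INJECTED = re.compile(
    r'<script\s+src=["\']?[^"\'>\s]+["\']?\s*>\s*</script>',
    re.IGNORECASE,
)

def strip_payload(value: str) -> str:
    """Remove any injected script tags, leaving the original content intact."""
    return INJECTED.sub("", value)

# A column value mangled by the attack: legitimate text with a tag appended.
dirty = 'Market report for Monday<script src="http://bad.example/ngg.js"></script>'
print(strip_payload(dirty))  # Market report for Monday
```

Run this over every affected text column and you have your data back — which is exactly why the attackers rely on volume rather than stealth.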

Looking into this indicates that the attack is sourced from the ASProx botnet, previously associated with phishing attacks and now pushing malware through sites vulnerable to SQL injection. ASProx utilizes the Google cache to find targets: initially ASP pages, but now PHP and ColdFusion pages as well. It also utilizes a DNS fast-fluxing (DFFer) technique to hide the actual malware delivery sites. It is understood that the ASProx botnet now exceeds some 30,000 unique IP addresses.

A quick assessment of the NZ landscape, over a sample of 100 of the latest attack signatures (specifically, looking for the result of a successful SQL injection), shows 68 distinct infections across some 18 “NZ” sites… all of them IIS 5/6 sites. Not good news for some, given that a couple of these sites are in NZ’s “top 100” and would service a not-so-insignificant number of unique browsers.
Note: I performed this assessment against the Google cache, so I’d expect the real picture to be somewhat worse, given SEO (in general) and the timing of Google’s spider visits.

SQL injection is not a new method of attack, but the sites affected indicate that the quality of website code guarding against it is still not good enough. The nature of this particular attack is such that a single vulnerable hole is all it needs. Reverse engineering the attack server-side is relatively easy, but the number of signatures/payloads, and the coding required to identify and block them, can become time consuming, not least because the payload continually morphs. Preventing the attack in the first place is a little more difficult, but not impossible.
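The fix at the code level is the same one it has always been: never build SQL by concatenating user input; bind it as a parameter. The affected sites are ASP/PHP/ColdFusion against SQL Server, but the principle is identical in any stack — here is a minimal sketch in Python against SQLite (table and values invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, body TEXT)")
conn.executemany("INSERT INTO articles VALUES (?, ?)",
                 [(1, "story one"), (2, "story two")])

evil = "0 OR 1=1"  # attacker-supplied "id" arriving via a query string

# Vulnerable: the input is concatenated into the SQL, so the OR clause executes
# and the query matches every row (the same hole lets an attacker run an UPDATE).
leaked = conn.execute(f"SELECT body FROM articles WHERE id = {evil}").fetchall()
print(len(leaked))  # 2 -- both rows

# Safe: a parameterized query binds the input as a value, never as SQL.
safe = conn.execute("SELECT body FROM articles WHERE id = ?", (evil,)).fetchall()
print(len(safe))  # 0 -- no id equals the literal string
```

Because the hostile string is bound as data rather than interpreted as SQL, it does not matter how the payload morphs — which is why this layer is worth fixing even when signature-based blocking is also in place.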

In my opinion, the best approach is a strategy that protects against this type of attack in a multi-layered fashion: at the database, the application and the web server.
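At the application layer, one of those layers can be as blunt as refusing any input that does not match its expected shape before it ever reaches the database. A hypothetical sketch (the function name and messages are mine, purely illustrative):

```python
def parse_article_id(raw: str) -> int:
    """Application-layer gate: accept only a plain integer id, reject everything else."""
    if not raw.isdigit():
        raise ValueError(f"rejected suspicious id parameter: {raw!r}")
    return int(raw)

print(parse_article_id("42"))  # 42

try:
    parse_article_id("1;DECLARE @T varchar(255)")
except ValueError:
    print("blocked")  # hostile input never reaches the query
```

Validation like this is not a substitute for parameterized queries at the database layer; it is an extra tripwire, and together with web-server filtering it gives you defence in depth rather than a single point of failure.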

Understood, re-engineering an existing site can be time consuming. But regardless of the complexity and costs involved, a website publisher has a responsibility to shield their site from the risk of infection and from becoming a virus-distributing agent. Publishers of any size must protect their sites’ visitors from exposure to malicious scripts at all times.

Are you a responsible publisher? Or do you believe that protection is the responsibility of your visitors’ browsers?