Recently we had a customer whose website was infected by way of one of the infamous plugins that are sometimes bundled into themes, such as TimThumb. The result was that their site became one of the foci of a world-wide botnet, which posted tiny amounts of information (greetings and e-mail addresses) used to send fairly large spam messages out from their site.
This ran overnight at the weekend, so by the time we had tracked it down 25,000 messages had been submitted, all properly validated as coming from the customer’s site.
Fortunately 10,000 messages were still in the system’s outgoing queue, and we were able to purge those, but it was still around 10 days before mail was flowing properly again.
In response to the issue we are “rate limiting” e-mail by the number of recipients from each sender within a time window. I’ve run some analysis on the traffic logs: apart from the mailing lists we host, which would be expected to send to many recipients, and with only 8 exceptions, in the past month no-one sent to more than 10 recipients in an hour. We’ve exempted the known high-volume senders from the checks, and will be setting a sliding 1-hour window to restrict every other mail account’s submission rate to slightly more than this observed high.
This may affect you if you suddenly decide to do a mass mailing. It’s easy to exempt your account if you tell us in advance; also, don’t forget you can have a hosted mailing list, which can individualise each message for its recipient.
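For the curious, a sliding one-hour recipient window of this sort can be sketched in a few lines. This is only an illustration of the idea, not our actual mail-server configuration; the class name, the limit of 12 recipients, and the example addresses are all hypothetical:

```python
import time
from collections import deque


class RecipientRateLimiter:
    """Sliding-window limit on total recipients per sender.

    Keeps a per-sender history of (timestamp, recipient_count) events
    and refuses a submission if it would push the total over the limit
    within the trailing window.
    """

    def __init__(self, max_recipients=12, window_seconds=3600, exempt=None):
        self.max_recipients = max_recipients
        self.window = window_seconds
        self.exempt = set(exempt or [])   # known high-volume senders
        self.history = {}                 # sender -> deque of (timestamp, n)

    def allow(self, sender, n_recipients, now=None):
        if sender in self.exempt:
            return True
        now = time.time() if now is None else now
        q = self.history.setdefault(sender, deque())
        # Drop events that have aged out of the trailing window.
        while q and q[0][0] <= now - self.window:
            q.popleft()
        used = sum(n for _, n in q)
        if used + n_recipients > self.max_recipients:
            return False
        q.append((now, n_recipients))
        return True
```

A sliding window is a deliberate choice over fixed hourly buckets: with buckets a sender could burst twice the limit across a bucket boundary, whereas the trailing window caps any 60-minute span.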
It’s really nice that so many people share their thoughts on a regular basis, and only fair that they should get some idea of where their traffic is coming from. As a reader I’m getting massively fed up with failing to read the blogs on Blogspot whose RSS feeds come through feedproxy:
often one can wait 2 minutes, click refresh, and the redirect comes through, but I wonder how many false impressions are being recorded and, worse, how many readers are being turned off?
That’s why I run WordPress directly, and offer it to OA5’s web hosting customers at no extra charge.
This was the ceremonial handover of the last block of addresses available in the second Internet addressing scheme (IPv4) at the London Transport Museum, Covent Garden on Tuesday 22 March 2011. This marks the end of the Internet as we knew it, and the start of a lot of hard work to make the switch-over to IPv6 invisible to users. There are no free address blocks remaining. The end happened about 5 years later than I expected, mainly due to the massive use of NAT in home and business networks.
On the right, Nigel Titley accepts the addresses on behalf of the RIPE NCC, the European regional Internet registry, from Leo Vegoda on behalf of the Internet Assigned Numbers Authority (IANA).
This is actually a scary moment. Be afraid, be very afraid.
Sometimes I wonder when I get requests like this, especially from customers on a very basic hosting package. Initially it might seem a perfectly reasonable request, but it does not scale across multiple customers, and each customer’s follow-up is inevitably a request that would take several hours to fulfil.
Now let’s just think about what the outage might be: a router failure losing the connexion, or an Internet storm; typically these things last about 3 hours. If we were to phone the customers, even at 5 minutes each, we would have barely started, with just 36 calls per line, and have achieved nothing by the time it all came back. If the failure is our own equipment, that time would be better spent configuring a replacement and getting it on site.
So the answer is “No, it’s not in your, or our other customers’ best interest. If it really is an issue for you, would you like to talk about multi-site redundant servers?”
The only way a customer is going to get that sort of personal response is when their hosting fees go up by three orders of magnitude, effectively paying for their own dedicated full-time support person.
There are 2 really lovely pieces of contributor-supported software out there from an ISP’s perspective. Both share a really useful property: they can be installed in ‘multisite’ configurations, that is, one only needs to install the software once for many users to be able to create their own sites, within certain limits.
I love Xen virtual machines, as they take up so much less rack space, but I’m probably missing a trick or two in terms of backing them up.
Today’s big job is recovering from whatever hit the hosting centre at 15:15 yesterday. It knocked a couple of big disks sideways. One was on the main server, a SAS disk that appears to have recovered for now, but it has been promoted up the preventative-maintenance stack and needs to be replaced in due course. It was operating with software RAID mirroring.
The other big disk was a 1 TB drive (920 GB really; I wish suppliers would work in base 2 like the rest of the computer industry, and that the marketing types who set out to intentionally mislead with this sort of flummery could be censured, but it will never happen, as the advertising standards authority seems to be staffed with innumerate arts graduates). Anyway, this disk is Serial ATA, and was supposed to be running under the on-motherboard hardware RAID. Well, as one of the meanings of supposed is ‘fondly believed’, that’s about right.
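The decimal/binary gap accounts for most of the “missing” space, as a quick sum shows (the remainder of the shortfall quoted above would come from partition and filesystem overhead):

```python
# Marketing "1 TB" means 10**12 bytes; operating systems usually report
# sizes in binary units (1 GiB = 2**30 bytes), so the same disk shows
# up as roughly 931 GiB before any filesystem overhead is subtracted.
marketing_bytes = 10**12
gib = marketing_bytes / 2**30
print(f"1 TB (decimal) = {gib:.0f} GiB (binary)")
```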
The disk containing the mirror was largely bare. Of course the main disk is complete toast, so we’re bringing everything back from backup onto a spare machine. To make matters more interesting, the toasted machine had 7 virtual servers on board, which makes the restore exercise that much larger. It’s been a long night.
And then we have to understand what went wrong, and take steps to prevent repetition. We await feedback from the hosting centre.