May 14, 2013
 

As I have mentioned several times, I now only use Dreamhost for parking and for keeping some of my older sites and databases, as they are irrelevant. Dreamhost has been down for several hours. I mean the entire Dreamhost:

http://www.dreamhoststatus.com/2013/05/13/networking-issues-affecting-us-west-data-center-los-angeles-ca

 

US-West, as they call it now, actually hosts the majority of their users. From what I can tell, they were performing some sort of maintenance on May 14th which involved the network:

“This maintenance involves moving networking routes to brand new equipment, while directing traffic through thoroughly pre-tested and staged backups.”

http://www.dreamhoststatus.com/2013/05/11/network-maintenance-in-us-west-data-center-los-angeles-may-14th-6pm-8hrs

 

The maintenance then turned into an all-out attack on their network. How this escalated from one thing to the other, and why the two are supposedly unrelated, beats me. Perhaps the network maintenance did not go as planned and “tested”, and the attack bit is a cover-up of sorts.

Fact is, I don’t really care. You can get better hosting. You can move to the cloud, like Red Hat OpenShift (this blog is hosted there for free) or Amazon AWS (free tier). Pay someone $100 to move your sites for you and be done with it.

Why do I still visit the status page? It is almost accidental, when I have to check on a particular parked domain that is down. But mostly I love reading the comments. For some reason there is no spam, but lots of trolling. Far better in my opinion, and hilarious without fail.

Some comments:

Mike Hunt Says:

OMFG! THE PHONE IS RINGING OFF THE HOOK WITH ANGRY CLIENTS. I AM LOOSING ONE HUNDRED THOUSAND DOLLARS A MINUTE! HOW DARE THIS HAPPEN ON MY SHARED HOSTING PLAN THAT I RARELY EVER PAY FOR WITH ALL THE CLIENT REFERRAL MONEY COMING IN!

I DEMAND THIS BE FIXED whenever you can because i know you are working very hard to fix things! :)

– DH Cust since 2002

 

Adolf Says:

These comments are depressing

 


Have fun reading the comments if you can’t access your sites.

Apr 12, 2013
 

Recently I posted about Linode’s network and hardware improvement project, done at no additional cost to customers. Not more than a week ago, Linode further announced that they will double the memory allocated to each plan.

For example, I started out on a Linode 768 about two years ago and upgraded to the 1GB plan for approximately $40/month. According to the latest upgrades I will be on 2GB of memory soon. VPSes in most data centers have already been upgraded, except for the Fremont DC, for which there is no ETA at this time.

Effectively I have doubled my VPS capacity over the year at the same price.

Here is a breakdown of the latest plans from Linode:

Plan       | RAM   | Disk   | XFER  | CPU                    | Price
Linode 1G  | 1 GB  | 24 GB  | 2 TB  | 8 cores (1x priority)  | $20/mo
Linode 2G  | 2 GB  | 48 GB  | 4 TB  | 8 cores (2x priority)  | $40/mo
Linode 4G  | 4 GB  | 96 GB  | 8 TB  | 8 cores (4x priority)  | $80/mo
Linode 8G  | 8 GB  | 192 GB | 16 TB | 8 cores (8x priority)  | $160/mo
Linode 16G | 16 GB | 384 GB | 20 TB | 8 cores (16x priority) | $320/mo
Linode 24G | 24 GB | 576 GB | 20 TB | 8 cores (24x priority) | $480/mo
Linode 32G | 32 GB | 768 GB | 20 TB | 8 cores (32x priority) | $640/mo
Linode 40G | 40 GB | 960 GB | 20 TB | 8 cores (40x priority) | $800/mo

 

Not long ago (two years) that first plan was basically 512MB RAM with a 10GB disk and about 100GB of bandwidth at $19.95. There is that 5-cent difference, but honestly it makes no difference. I am so used to shit resources everywhere else that this is all-star customer service already.

Linode has finally come back to the charts with their latest upgrades and pricing. Linode is a very powerful VPS platform that supports StackScripts, which are glorified shell scripts in essence. They are quite similar to what we do with install/buildbot scripts (or Chef/Puppet etc. for the jargon-babbling crowd) and do help you get started quickly.
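For the uninitiated, a StackScript is literally a shell script with some tagged fields at the top that the Linode manager prompts you for at deploy time. A minimal sketch (the UDF field and package choices here are my own hypothetical example, not from Linode's library):

#!/bin/bash
# <UDF name="db_password" label="MySQL root password" />
# Linode exposes each UDF field to the script as an environment
# variable, here $DB_PASSWORD.
apt-get update
# pre-seed the MySQL root password so the install runs unattended
echo "mysql-server mysql-server/root_password password $DB_PASSWORD" | debconf-set-selections
echo "mysql-server mysql-server/root_password_again password $DB_PASSWORD" | debconf-set-selections
apt-get -y install apache2 mysql-server php5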

I was beginning to wonder, not too long ago, if I should just trash the expensive VPS and go for a dedi, as it would be cheaper. Not to mention AWS would already have been the better choice had Linode stuck to their old resource-vs-pricing plans. Whoever the new BDM or person in charge at Linode doing this is: good work. At least you no longer walk with your eyes closed.

If you look at the 4G mark, where most dedicated servers start, you will see that the price is actually competitive, and given that it’s a VPS on a superior platform, it would be a better choice than a low-to-mid-range dedicated server at this price. Secondly, most dedicated servers stop at 10TB of bandwidth, but Linode has a good network, and the 8G plan already exceeds that by 60%.

Linode also has a good set of tools and wiki help guides on several sysadmin topics, including the one I found most useful when I was stuck with a memory-to-disk swap issue: Identifying and fixing memory issues. I mean, I have been used to my 12-24GB RAM servers, so one tends to forget 😀 Plus I haven’t really set up apache2 servers in a very… very long time.
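For reference, the first-pass checks for that kind of issue need nothing fancier than the standard tools (this is just the generic routine, not a quote from their guide):

free -m                          # total RAM and swap usage, in MB
vmstat 5 5                       # non-zero si/so columns mean active swapping
ps aux --sort=-rss | head -n 10  # the ten fattest processes by resident memory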

source: Linode Blog

 

Linode logo is copyright of Linode, no infringement is intended.

Aug 27, 2012
 

Collected from the experiences of myself and an otherwise largely honest group of admins at an AWS users group meeting (yes, it’s like AA: we all say our name and begin our story).
This post will not make sense to those who have never actually used Amazon AWS. If you have had a dab at it, or your boss asked you to go figure out AWS and “set us up a server or you have no chance to survive”, then this might help you.

Drink Driving

1. Treating Amazon AWS resources as a regular Data Center or Dedicated Host provider.

AWS is just that: a web service. All servers are virtual, disks are theoretical, and the sysadmins are asleep. Do not put all your beers in one basket. Also, there is no basket, only snapshots and AMIs.
The idea that servers are not unique and do not exist as physical hardware is largely hard to grasp. This leads to elaborate single-server setups backed completely by EBS.
“Both EBS and Instances (not servers, as they are not) can and will fail” – Yoda (it was either him or me).

Best practice, both in and out of AWS, is to assume that after you have done your perfect CentOS 6 LAMP setup, the whole thing will disappear. Write bash scripts, AMIs, images, whatever, to be able to recreate everything on new hardware from scratch.
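A sketch of what that looks like for the CentOS 6 LAMP example; the package list is illustrative and the S3 bucket at the end is hypothetical:

#!/bin/bash
# rebuild.sh - recreate the whole LAMP stack on a brand-new instance
set -e
yum -y install httpd mysql-server php php-mysql
chkconfig httpd on
chkconfig mysqld on
service mysqld start
service httpd start
# pull configs and application code from somewhere durable, e.g. S3 or git
# s3cmd get --recursive s3://my-backup-bucket/etc-httpd/ /etc/httpd/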

Treating AWS resources as if you were in a data center with a bunch of noisy servers and your expensive SAN disks is just an accident waiting to happen. Psychological help is advised.

2. I know, “I will use NFS”

That’s just the alcohol talking. When faced with a multi-tenanted architecture, most sysadmins and developers realize they need to share “these” files between “those” autoscaled servers. The noobie sysadmin pronto hooks up another “server” with a large “harddisk” and NFS-shares the hell out of it: “voila, I made bomb”. The problem here is that you cannot scale EBS (which is network-attached storage at best), and neither can you scale your NFS server. The entire setup is henceforth considered fail and a waste of startup funding. The solution is to code against S3, databases, and memcache. For infrequently updated files like source code there is source control (GitHub? Codesion? take your pick), which can just as easily be checked out on every system at specific events (push/pull). But please do not commit your user-uploaded files to source control as well.
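For user-uploaded files the change is usually small: push each file to S3 at upload time and serve it from there. A sketch with s3cmd (the bucket name is made up):

# on upload, send the file to S3 instead of writing it to a shared "disk"
s3cmd put /tmp/avatar-42.png s3://my-app-uploads/avatars/42.png
# then serve it from the S3 URL (or CloudFront) rather than from any one server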

3. We have now added ELB, bring it on “spiky traffic”

Rum is widely assumed to bring on unrealistic and sometimes fatal acts of bravery. ELB is a slow autoscaling system and takes time to “warm up” before it can absorb requests. If you have mountain-shaped traffic spikes, it is best to email Amazon support with your peak traffic data and ask them to “pre-warm” your ELB; they know what to do. It is also advisable to check your Amazon SES rates, DynamoDB table limits, RDS, etc., which are not autoscaling resources. Setting up SNS alerts on thresholds is a must, and they should fire a bit earlier than the point where “you shall not pass” messages start hitting users. I use 70% as a good number for DynamoDB rates, RDS CPU usage, connection usage, etc.
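As an example of that 70% rule, here is an RDS CPU alarm wired to an SNS topic, shown with the modern unified AWS CLI for brevity (the DB identifier, account ID, and topic are placeholders; the older per-service command-line tools could do the same):

aws cloudwatch put-metric-alarm \
    --alarm-name rds-cpu-70 \
    --namespace AWS/RDS --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=mydb \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 70 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts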

4. Haha, 2 instances before product launch and 100+ after launch. I’ll just sit here and refresh my dashboard…

The cops will get to you first, period. Amazon AWS has resource limits; it is unlikely you can allocate an unlimited number of resources, and it is best to email support and find out. EC2 instances are limited to 20 per account by default. Therefore, if you need more while getting flooded by traffic, you are out of luck, as support can take at best a day to lift your limits, justifiable reasons aside.
In case you do find yourself in this situation, having exceeded the instance limit and needing more right now, Spot Instances can come to your rescue. So far we have not seen any limits on those, and they do not count towards your account quota, so they will buy you the time.
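A spot request is a one-liner; a sketch with the unified AWS CLI (the AMI ID, type, and bid are placeholders — bid at or above on-demand if you genuinely need the capacity):

aws ec2 request-spot-instances \
    --spot-price "0.10" --instance-count 10 \
    --launch-specification '{"ImageId":"ami-12345678","InstanceType":"m1.large"}'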

5. I will use ELB as an internal proxy; now I have a neat gun of a MySQL load balancer as well.

Why would you snail-mail your AA meeting flyers when you have everyone’s email address and Facebook groups? We need to talk about this problem of yours.
ELBs are essentially gateways and can be compared to your home modem. They are not routers and cannot determine whether you are talking to a PC on the home network or trying to SSH-tunnel to your office. You are therefore adding lag to your traffic with an additional round trip. It’s like sharing files over Dropbox between two home computers on the same LAN. If you notice, the ELB does not have an internal domain name, unlike instances. (VPC ELBs operate differently, and we are not going to discuss them with someone like you until you recover from this hangover.)
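The fix is to point the app at the database box’s private address so the traffic never leaves the internal network (hostnames below are made up):

# anti-pattern: round-tripping internal MySQL traffic through an ELB
#   mysql -h my-elb-1234567890.us-east-1.elb.amazonaws.com -u app -p
# better: use the instance's private DNS name directly
mysql -h ip-10-0-0-12.ec2.internal -u app -p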

6. Let’s use a single zone for all autoscaled instances and save on the stupid inter-AZ data transfer cost. I should get a bonus for this.

Getting free drinks is no excuse. You are obviously not making progress here.

ELB refers to your instances’ zones and sets up one ELB endpoint per zone (you only see one ELB, for simplicity). If you have only one zone, you, my friend, are going to have a problem: you also have only one zone’s ELB endpoint. It is quite common to see a single zone, or sometimes more than one, experience issues like network latency, faults, etc. I personally was once baffled when two of the zones had no instances to spare and everyone was spot-bidding above the on-demand price just to stay in the same zone. I should get some of the free drinks those guys got, but I am trying to stay sober here while I attend this night beach party.
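The cure costs one flag. A sketch with the unified AWS CLI (group, launch config, and ELB names are placeholders):

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-lc \
    --min-size 2 --max-size 20 \
    --availability-zones us-east-1a us-east-1b \
    --load-balancer-names web-elb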

7. I save on the cost of EBS I/O by starting an instance-store instance and then dd-copying from EBS-attached volumes to the instance store. Why do I have x and y problems?

You are obviously in the wrong meeting. You need to get in touch with the nearest hospital or something.
Why, just why?

8. We use resources from more than one REGION to serve a single request.

Good Luck on your new adventures. You can add gambling to your list of problems.

More… coming too soon.

Jun 30, 2012
 

Drizzle is an open-source fork of MySQL (InnoDB-only, in fact) that is designed to be faster and more scalable than vanilla MySQL.

I finally decided to test it out instead of just mucking around as I had done earlier.

My system


uname -a
Linux Tron 3.0.0-13-generic #22-Ubuntu SMP Wed Nov 2 13:25:36 UTC 2011 i686 i686 i386 GNU/Linux

Installing

Headed over to Drizzle and proceeded to see how I could install their packaged version. I chose Ubuntu, as I have a bare-metal 11.10 machine which will do quite well to test both MySQL and Drizzle. Followed the instructions at http://docs.drizzle.org/installing/ubuntu.html The CentOS install looks a lot lengthier (workarounds?). The best way would be to compile from source, but I am not really interested in that right now.
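For the record, those instructions boil down to adding the project PPA and installing from it; the PPA name matches the error output below, though the exact package name is my assumption:

sudo add-apt-repository ppa:drizzle-developers/ppa
sudo apt-get update
sudo apt-get install drizzle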

At the update step I get the following errors. This is understandable, as the PPA apparently has no packages built for Oneiric, hence the 404s.


Ign http://extras.ubuntu.com oneiric/main TranslationIndex
Ign http://extras.ubuntu.com oneiric/main Translation-en_SG
Ign http://extras.ubuntu.com oneiric/main Translation-en
Fetched 762 kB in 2min 15s (5,607 B/s)
W: Failed to fetch http://ppa.launchpad.net/drizzle-developers/ppa/ubuntu/dists/oneiric/main/source/Sources  404  Not Found
W: Failed to fetch http://ppa.launchpad.net/drizzle-developers/ppa/ubuntu/dists/oneiric/main/binary-i386/Packages  404  Not Found
E: Some index files failed to download. They have been ignored, or old ones used instead.

However, I was able to complete the installation without issues. Time to RTFM. Yes, I like to dive first and read later; it makes things more interesting.

Configuring

If you visit the docs at http://docs.drizzle.org/configuration/options.html#config-files you will see that they do not really say where the config file is. This tells me that Drizzle assumes prior knowledge of MySQL administration so you can fill in the gaps yourself, but Drizzle is very different. (Or you could help with the documentation, yeah?) Drizzle claims to work without configuration out of the box. However, since I already have the MySQL protocol port in use, I want to make sure I override it in Drizzle.

I create my config directories under a regular user; MySQL just went to /etc/ by default. Drizzle made no such assumption: there was no directory or anything. Maybe it’s just on Ubuntu, honestly I don’t know. The doc says the default directory is /etc/drizzle. It even goes on to say that there is a specific name for the default file if you want to skip specifying one.


cd ~
mkdir -p drizzled/conf.d           # conf.d must exist before touching files inside it
touch drizzled/drizzled.cnf        # the default config file
touch drizzled/conf.d/server1.cnf  # drizzle will read all configs in this directory automatically
touch drizzled/conf.d/auth         # options for the auth plugin
touch drizzled/users               # flat-file user list

This is good enough for me. I can now specify a default conf plus a directory with all my configs, in case I need more than one.

Edit drizzled.cnf.


vi drizzled/drizzled.cnf

Adding the config values I need:


drizzle-protocol.port=4427
mysql-protocol.port=3307 # since I use 3306 for mysql
innodb.buffer-pool-size=500M

Edit the auth file, which you should set to strict permissions, I suppose. In drizzled/conf.d/auth I write what the doc says:

plugin-remove=auth_all
plugin-add=auth_file
# Options for the plugin itself
[auth-file]
users=/home/drizzle/drizzled/users # a full path is needed; this assumes your home is /home/drizzle

I could apparently write everything in one file (drizzled.cnf) except the users.

The documentation breaks off at this point and goes into the details of options and values, so I go looking for the RTFM on auth.

Setup of authentication

This I like: you can easily set up auth just like PostgreSQL, any way you want. You can read more here: http://docs.drizzle.org/administration/authentication.html To keep things simple I used a flat file. The suggested way is the schema approach, which is the same as MySQL, meaning usernames and passwords are stored in a table inside the Drizzle DB, like the mysql.user table. Setting that up takes more time, but the gist is: first allow all, then create an admin user… snore. Feel free to read about it and knock yourself out. Moving on to the flat file…

Since I already added my flat-file settings above, I am now going to fill the file in with my users. This is easy (or so I thought), e.g.


user1:password1
user2:password2

… cute.

I suppose we’re done here, it was painful but now I have a neat gun. Now the good stuff.

Start your engine

Issuing command…

drizzled --config-dir=/home/drizzle/drizzled/
output: Local catalog /var/lib/drizzle/local does not exist

 wtf! ouch!

Troubleshooting… as it’s obviously a permissions issue.

Seems I am logged in as user abhishek but drizzled runs as user drizzle. Facepalm. A handy command-line option appears: [-u]. Unfortunately I need to sudo and then provide the user; running it directly won’t work.

sudo drizzled --user drizzle --config-dir=/home/drizzle/drizzled/

To daemonize it, we need the --daemon flag @.@

sudo drizzled --user drizzle --daemon --config-dir=/home/drizzle/drizzled/

(I believe my setup is wrong on this part; it could have been done better. It works fine for now, so I can test.) We’re online, let’s log in.

drizzle

wtf! I can log in. So it seems my default file is not being read, contrary to the RTFM. Fair enough. I forced the --defaults-file= option but got a new error: it turns out I need the plugin for auth_file, which I am trying to use. I don’t know where to download this, and the docs are generally unhelpful here. Time to go fish… a.k.a. Google.

I never did get the auth working, and I stopped using Drizzle, though I believe you can still use it as an embedded DB or something.

Jun 28, 2012
 

I decided it’s not worth wasting any more time on Blogofile: awesome concept, and a working S3 deployment plugin. Unfortunately I could never get the styles for code highlighting to work; it seems there is much to fix and hardly any help on the matter.

I often considered simply using the Flask microframework to build a Python blog system myself: not to reinvent the wheel, but to learn Python along the way. I do not have time for that either, and delaying the itch to write things on my scratch pad is getting mildly annoying now. Have… to… scratch.

 

So good ol’ WordPress it is; it never fails, except when it gets randomly hacked by script kiddies. Plus I am using Red Hat OpenShift, so I still get to learn something here. Now I have to figure out what to do with my AWS instance, which I had set up for advanced fiddling. I guess I will use only Route 53 for now, then; AWS has been kind enough to give me enough credits to last quite a bit.

It’s time to copy-paste the only two posts I wrote in Blogofile before I cracked; the damn Monokai styles wouldn’t load. The whole thing was screwed up at the template level.

I also hope no one finds this site.
