Dec 25, 2016
 

Save time and money. Go Backblaze. CrashPlan sucks: terrible speed, and it has not improved in years. They are full of excuses about it, so they are unlikely to ever address it properly.

I am using both and will ditch CrashPlan once my subscription expires.

Keep it up, Backblaze.

Nov 04, 2016
 

If you read my posts you will realize I am a fan of Linode, and have been for years. They are great and reasonable people.

I decided to give DigitalOcean a try since I have heard great things about it.

I found the connectivity from here in Singapore to my first Droplet in London was awesome! They have quite a few locations, but the list is hard to find on their site; you really have to dig through the FAQ. So here is a screenshot of all their locations.

DigitalOcean data center locations

That's a pretty cool list of locations. Bangalore & Singapore! Wow. And it all starts at $5, but for you it's $10 free so you can try things out. And I get some referral credits so I can ping other data centers 😀

I wanted to find out how easy it would be to deploy images with tools like Docker, Chef, or Puppet. I need the ability to maintain the same exact image across multiple regions, with all changes applied, without too much DevOps overhead.
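As a rough sketch of what I am after, here is how it could look with the python-digitalocean library. Everything here (token, image slug, size) is illustrative, not a recipe:

import digitalocean

TOKEN = "your-api-token"  # hypothetical
manager = digitalocean.Manager(token=TOKEN)

# Stamp out the same droplet from one golden image in several regions.
for region in ("lon1", "sgp1", "blr1"):  # London, Singapore, Bangalore
    droplet = digitalocean.Droplet(
        token=TOKEN,
        name="web-" + region,
        region=region,
        image="docker-16-04",        # assumed one-click slug; a snapshot ID works too
        size_slug="512mb",           # the $5 size
        ssh_keys=manager.get_all_sshkeys(),
    )
    droplet.create()

Pair that with a Chef or Puppet run (or a baked Docker image) on first boot and the regions stay identical without much manual DevOps.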

I would definitely use my free credits to test other locations like Singapore and Bangalore.

Here are my MTR results from Singapore to London, run from an actual VPS (report mode, something like mtr -r -c 100 <droplet-ip>). I would have expected an average RTT of 270-350 ms on this route, so this is pretty good.

 

Start: Fri Nov 4 13:45:06 2016
HOST: xeon                        Loss%   Snt   Last    Avg   Best   Wrst  StDev
  1.|-- gateway                    0.0%   100    0.3    0.3    0.2    3.9    0.3
  2.|-- 101.100.165.3              0.0%   100    1.8  201.2    1.6  3931.  726.0
  3.|-- 103-6-148-41.myrepublic.c  0.0%   100    1.7   14.7    1.5   57.4   18.5
  4.|-- 103-6-148-13.myrepublic.c  0.0%   100    2.5    3.3    2.2   19.8    2.5
  5.|-- 116.51.27.101              0.0%   100    7.7    3.4    2.2   15.5    1.6
  6.|-- ae-0.r21.sngpsi05.sg.bb.g 21.0%   100   11.4    4.1    2.1   14.1    2.7
  7.|-- ae-8.r24.londen12.uk.bb.g  0.0%   100  185.8  187.1  185.8  193.7    1.4
  8.|-- ae-1.r25.londen12.uk.bb.g  0.0%   100  182.2  183.4  182.1  193.1    1.9
  9.|-- ae-2.r02.londen01.uk.bb.g  0.0%   100  183.3  183.8  183.2  191.6    1.0
 10.|-- hds.r02.londen01.uk.bb.gi  0.0%   100  189.0  189.4  188.3  231.4    4.4
 11.|-- ???                       100.0%  100    0.0    0.0    0.0    0.0    0.0
 12.|-- 188.166.149.149            0.0%   100  183.4  183.5  182.9  188.8    1.0


The lower the better. 185 ms is the lowest I have ever seen.

I added the IP to an AWS Route 53 latency-based routing record and noticed that while most countries in Europe were getting the London IP assigned, the UK itself was not. This could be a problem on either the AWS side or the DigitalOcean side.
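For the record, a latency-based record set looks something like this with boto3; the zone ID and hostname are hypothetical, and since the Region field must be an AWS region, a DigitalOcean London droplet just gets tagged with the nearest one:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",            # hypothetical hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com.",
            "Type": "A",
            "SetIdentifier": "london",   # one record set per location
            "Region": "eu-west-1",       # nearest AWS region to the droplet
            "TTL": 60,
            "ResourceRecords": [{"Value": "188.166.149.149"}],
        },
    }]},
)

If the UK resolves to somewhere else, the first thing I would check is whether another record set's region happens to measure closer from AWS's vantage points.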

So if you are going to give it a try for free (I paid the $5), you can use my code by clicking through here: http://www.digitalocean.com/?refcode=3a149653659e

Apr 28, 2016
 

After years of not having to deal with AWS except for my pet projects, I decided to visit the AWS Summit in Singapore. It's right here, I thought; a great time to join and learn some Lambda as well as Lumberyard/GameLift. Suffice it to say, I came away disappointed about the future of Amazon Web Services in general.

The backstory is where I stand now, and this person captures it completely: http://openmymind.net/Why-I-Dislike-ec2/. tl;dr: AWS is VERY expensive, it gets a lot more expensive every year, and you don't even see it.

I considered the fact that services like Lambda are a great way to move forward. Unfortunately, AWS is getting on its high horse right now. Great ideas, but…

AWS is over, in my opinion. The hype, the love, the innovation… it's old. It's just that in Asia things move as slowly as a wooden raft. I see the marketing hype picking up, and a crowd over 2,000 strong showed up to undermanned exhibitor booths showing the same eight-year-old tech. Underwhelming is an understatement.

So where does that leave us right now?

Well, AWS has taught everyone in the computer and software architecture world a lot of things, and it's still a great starting point, like signing up with that $5 web hosting provider to put a PHP page up. It's getting less complex and massively multi-tenanted; it's a great business challenge, solved. There will always be space for PB-scale data and number-crunching challenges, and that is always going to be AWS.

What CTOs should look into is how to take this shared web hosting with its awesome (though locked-in) API and abstract it away: implement the same thing on private clouds and dedicated hardware built on open standards. OpenStack and CloudStack had the right idea years ago. I want a dedicated managed or unmanaged solution with bare-metal performance and the same "designed-for-failure" architecture as AWS.

If I refer back to my work on Razer Synapse, and for years before that, I have always avoided vendor lock-in as far as possible, by abstracting layers of the code and making them stateless and independent of the underlying platform. For example, S3 access was written behind a wrapper class that could do FileStore, or ObjectStore via Ceph, third parties, or S3 itself: provider- and IaaS-independent. There was no Elastic Beanstalk, just a combination of Python Fabric and shell scripting.
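A minimal sketch of the idea, with hypothetical names (the real thing had more operations, retries, and error handling; I am using boto3 here purely for brevity):

import abc
import os

class BlobStore(abc.ABC):
    # Provider-independent object storage interface.
    @abc.abstractmethod
    def put(self, bucket, key, data): ...
    @abc.abstractmethod
    def get(self, bucket, key): ...

class FileStore(BlobStore):
    # Local filesystem backend: dev boxes, tests, air-gapped deployments.
    def __init__(self, root):
        self.root = root
    def put(self, bucket, key, data):
        path = os.path.join(self.root, bucket, key)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)
    def get(self, bucket, key):
        with open(os.path.join(self.root, bucket, key), "rb") as f:
            return f.read()

class S3Store(BlobStore):
    # S3, or any S3-compatible endpoint such as a Ceph RGW gateway.
    def __init__(self, endpoint_url=None):
        import boto3
        self.client = boto3.client("s3", endpoint_url=endpoint_url)
    def put(self, bucket, key, data):
        self.client.put_object(Bucket=bucket, Key=key, Body=data)
    def get(self, bucket, key):
        return self.client.get_object(Bucket=bucket, Key=key)["Body"].read()

Application code only ever sees BlobStore, so swapping S3 for Ceph (or for plain disk) is a config change, not a rewrite.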

I used DynamoDB in 2011-12, in the early days before it was GA. It was great; limited, but great. I found I was just as happy, if not happier, with CouchDB and Redis on a multihomed setup with failover.
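For the Redis side, a failover-aware client is only a few lines with Sentinel (one way to do it; hosts and the master name here are hypothetical):

from redis.sentinel import Sentinel

sentinel = Sentinel([("10.0.0.1", 26379), ("10.0.0.2", 26379)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)   # writes follow the current master
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # reads can come off a replica

master.set("session:42", "payload")
print(replica.get("session:42"))

If the master dies, the sentinels promote a replica and master_for transparently reconnects to the new one.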

I still love AWS, and this is by no means a rant, nor simply because I work for a competitor; personally, I believe we are complementary. But if today's approach is anything to go by, we have our work cut out for us, and AWS is going to be left behind despite being the pioneer.

I am going to start writing again: mostly rants against fat tech, but also alternatives to AWS/IaaS/PaaS and the like.

 

P.S. I couldn't join the developer breakout for the Lambda stuff I wanted to give a whirl. Actually, I decided not to, because the event people were being extremely Kafkaesque and difficult with their rules and "maybe"s. You need to queue and wait; maybe you can join, maybe you can't; if you go out, maybe you can't come back in… But the videos get uploaded later. The hell! Why didn't you guys just say so before I decided to take a day off and come down? So I walked out, extremely pissed.

 

 

 

Apr 09, 2015
 

Ever since I found Backblaze (and I don't know why I did not find it earlier), I have been quite happy with money well spent. I currently have 4 USB drives apart from 2.5 TB of internal space. That totals a lot of space, and I only use 4 TB of it. This is my home setup; I have tons of VMs and development stuff lying around. I recently lost a fairly new 2 TB Western Digital drive for no good reason. It just died… like, physically, taking 1.5 TB of VMs and snapshots with it. A terrible day, as I spent 2-3 hours opening the drive up and confirming it was completely dead. I should have seen the signs…

I decided online backup was the way to go. While slow, at least it would keep my data recoverable without my adding more drives. I looked at Amazon S3 as my first choice. However, with terabytes of data the cost becomes VERY, VERY prohibitive. I don't mind paying, but for backup to be economic it has to cost the same as or less than buying drives; otherwise I could just as easily build a RAID 1 out of lots of cheap drives in a NAS.
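Rough numbers, assuming S3 standard storage at about 3¢/GB-month at the time:

data_gb = 5 * 1024                 # ~5 TB to protect
s3_rate = 0.03                     # assumed $/GB-month for S3 standard
monthly = data_gb * s3_rate
print("S3: ~$%.0f/month, ~$%.0f/year" % (monthly, monthly * 12))
# -> ~$154/month, ~$1,843/year; a 4 TB drive was a one-off ~$150

At those rates S3 is an order of magnitude more than just buying drives, let alone a flat-rate backup service.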

A few quick Google searches revealed Backblaze. A lot of sysadmins swear by it. I took a look at what these guys do, and at this post of theirs, and knew they try very hard to make data backup affordable and reliable.

Their plan amazed me: US$50 for one year of unlimited backup per system, attached drives included. Second, they can ship drives of your data back to you. While I have a 1 Gbps connection, it would still be slow to pull a few terabytes from across the world, and not a healthy use of time.

I downloaded the trial. I had some issues getting it running initially, because the interface isn't very intuitive, but once you know what's where, it pretty much runs on its own as it starts its first backup. So off I went, happily, to sign up and back up everything.

Unfortunately, the upload speed was painfully slow… somewhere in the region of 200-500 KB/s at best. It would take a whole 3 years to transfer 5 TB of data, if not more. I emailed support to ask if there was an option for multiple parallel uploads, since their application uses only ONE upload thread. They replied that they would "soon" be releasing multithreaded upload/download. I waited, and 3 weeks later I was still left with over 90% of my files by volume. Backblaze was smart enough to keep the larger files for last, though, and the number of at-risk files was down to a few thousand. I usually keep my PC on at all times, though I don't mind restarting now and then for forced Windows 8 updates. This time I have had it on for almost 21 days non-stop since installing Backblaze. Something is better than nothing, I say.
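Back-of-the-envelope (assuming a steady 300 KB/s, the middle of what I was seeing):

data_bytes = 5 * 1024**4          # 5 TB
rate = 300 * 1024                 # assumed sustained upload, bytes/sec
days = data_bytes / rate / 86400
print("%.0f days" % days)         # ~207 days, and that is nonstop at full rate

And that is the ideal case; with restarts, throttling, and data that keeps growing underneath you, a multi-year first backup is not a crazy estimate.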

 

I was hypothesizing (read: daydreaming) about what could change in my life over the next 3 years, as the first backup slowly drew toward a close… what would I be doing on the day I discovered it had actually completed? Perhaps I would celebrate, then find out a few more files had been added to the never-ending list… or maybe Joe from support was telling the truth.

I hate their site for finding any new information, so I googled for the answer once again, hoping someone had made a workaround… I could not believe the search result… it had the word "multithreaded". In my entire programming life, MT has never made me as happy as it did right then.

So, the GOOD news! The latest version of Backblaze enables multithreaded uploads. Despite the physical distance between Backblaze and my local desktop, and that conspiring bast**d of an RTT, I could now upload in parallel and use the max of my 500 Mbps upload speed. Well, anything better than the 2 Mbps I was getting normally anyway 🙂

Excited, I could not wait, and installed the latest version. Sure enough, the option was there, as per their pitch. With one thread my transfer showed as 2.35 Mbps, which I could easily confirm with other tools and my UBNT router.

With the thread count set to 4, backups were already flying; the file names whizzed by in the Backblaze control panel. Still small files, though, as the program had found tiny new fragments of my entire Raspberry Pi mirror to back up again. Who cares anyway; the only way I could tell whether the speed was actually being maximized was when large files were transferred. I was already seeing a 10x improvement.

I am currently able to get a consistent 25-30 Mbit/s. The next step is to contact my ISP, MyRepublic, and get them to do a better job on this route; a lower RTT could make a world of difference.

May 04, 2013
 

Let me quickly review what I have found so far about DreamObjects.

 

First the good stuff

 

Using Apache Benchmark (yeah, screw you naysayers, it is what I have installed on my Fedora box), I compared Amazon US West and DreamObjects on HTTP GETs of a jQuery file (90.5 KB). I used the AWS US West (Northern California) region because DreamHost is, I suppose, still using the same DC in Brea as for its other servers; at least nslookup confirms it is in the same city. I don't know where the frack Brea is, but I assume it's in the same state.
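The whole harness was essentially this (bucket and object names are placeholders, and the per-run request count here is a guess):

import subprocess

URLS = [
    "http://s3-us-west-1.amazonaws.com/mybucket/jquery.min.js",   # AWS US West (N. California)
    "http://objects.dreamhost.com/mybucket/jquery.min.js",        # DreamObjects
]

for url in URLS:
    for _ in range(5):            # 5 small runs each, concurrency 10
        subprocess.run(["ab", "-n", "100", "-c", "10", url], check=True)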

On multiple runs I found AWS slower in all areas. Note that I am connecting from Singapore and this is a very rough benchmark, though that gives me slightly more meaningful numbers than testing from within the US would, since I will always have higher latency. That being said, take it for what it is.

Connection Times: DreamObjects was faster by 30% in all connection stages.

Transfer rate: S3 was slower by almost 15%.

Longest request times: DreamObjects was faster at every percentile, and at the 90th percentile it was already 2x faster than S3. The longest request on S3 took over twice the time DreamObjects needed. DreamObjects also delivered consistent request times over the entire test, with only about 10% variation, whereas S3 fracked it up, escalating gradually from the first 20% of requests to the last, which took almost 2.5x as long. That is actually quite bad. I might have to reconsider our own production projects on S3. Interesting.

 

Errors: there weren't any; in any case, the number of requests was really small.

I was not benchmarking for error rate but for throughput. I only used very small sample runs (5 each) with a concurrency level of 10, so the requests-per-second numbers don't vary much.

But so far it looks like DreamHost is outperforming Amazon S3 by a good margin. Then again, DreamHost tends to crawl once it has enough customers and sardine-cans everyone, so we don't know what the future holds.

 

The migration interface on their panel is quite awesome.

 

Bad Stuff

The interface is not well thought out. Recursive deletes don't work; I couldn't find a way to do recursive <anything>. Heck, even I managed that on beta.3ezy.net. It's not super hard.

They applied the same AUP (http://dreamhost.com/acceptable-use-policy/) to DreamObjects as they use for web hosting, the same T&C, etc. So according to their own AUP you cannot use DreamObjects for backup, file hosting services, image galleries, etc.; basically, any web application allowing user-uploaded content is off-limits. However, their wiki states:

Target Audience & Use-Cases

  • Personal Backup – Users needing to store large amounts of data for personal backup of music, photos, videos, complete system backup, etc. Generally, these users will connect with a third-party application.

  • Web Development – Developers of web applications needing object storage to easily augment or replace existing S3 or Swift functionality. Store images, logs, generated data and more via API with DreamObjects.

  • WordPress Sites – Anyone with a WordPress site can automate site backups, upload images to any bucket, and use a shortcode to display images using the DreamObjects Connection plugin.

 

Yeah, I am confused. Since I have had sites removed for bandwidth overage on a DreamHost image gallery before (and Gallery2 used to be a one-click install), I moved to Amazon S3 or dedis. The TOS, AUP, etc. are pretty good for business on AWS; I know they shit-me-not. I hope DreamHost figures this out.

Also they don’t support CNAME yet.

Summary: worth a shot for fun and games. Don't put anything sensitive there, and don't use it for backups or for hosting anything you will ever need. At least not yet.

If you have a DreamHost account, please file a support ticket pointing out the AUP and other legalese issues. The fact is, I do want to see some real competition for AWS. If DreamHost can clean up its act, it can really put a dent in the largely monopolized redundant cloud storage market.

 

Update, 6th May 2013:

Dreamhost has responded regarding the confusing TOS/AUP issue:

Thank you for contacting technical support. I apologize for the confusion
in regards to this. You can use DreamObjects for file storage like you do
with Amazon. This does include images for a gallery, and backups, if you
wanted to use it for that 🙂 Here is some more information in regards to
this:

http://wiki.dreamhost.com/DreamObjects_Overview

In regards to the CNAME. You can create a CNAME say objects.example.com
that points to objects.dreamhost.com, then you would access your file as
such:

objects.example.com/testbucket/test.jpeg

You can use any cname of course, as I used objects as an example 🙂

Hope this helps in regards and please let me know if there is anything
else I can assist you with.

Thanks!

Jerry

May 04, 2013
 

I happened to log in to the DreamHost control panel today to set up email forwarding. I have mentioned before that the DreamHost hosting I have is so pathetic that it is now impossible to host even simple non-DB websites; forget DB websites altogether. It is slow as frack, completely and utterly useless, and has been for over a year now. I still use it for email, though; it's the one thing that works well on DH and saves me the hassle of mailbox setup.

Anyway, back to the DreamHost panel. I see that DreamHost prompted me to check out "DreamObjects", which offers a 3¢/GB storage cost on promo. Sounds good? Well, there was no mention of data transfer charges, so I dug into the details: http://dreamhost.com/cloud/dreamobjects/pricing/. Rightly named; I too would "Object" to this, because whatever happened to the unlimited space deal we all have? Turns out the pricing isn't bad. But hang on! We are talking about DreamHost here. They are about the crappiest web hosting around today, and they have a lot of image to recover in my books before I buy their claim to an S3-equivalent reliable, available, and cost-effective storage.

Meanwhile, from Singapore I can still download from my US East bucket at close to 10 Mbps; the response times stay consistent; I have never lost an object in over 3.5 years, even at reduced redundancy; and S3 has a proven track record. My first impression was "Great! DreamHost is trying to reinvent itself," soon followed by "Wait, it's DreamHost… dream on."

Further, it's not really pay-as-you-use; it's still a pre-paid plan. Without the promo it comes down to the following pricing, which is definitely much lower than S3, and they "say" there is no cost per API request. If they are running this program on their crappy servers over their already oversold network, I will be quite disappointed. Amazon S3 advertises its SLA and durability right on the product page: http://aws.amazon.com/s3/#protecting. Unsurprisingly, DreamHost fails to mention any of that in clear writing.

Pre-paid Storage Plans:

STORAGE INCLUDED (GB)   PLAN PRICE / MONTH   EFFECTIVE RATE / GB
20                      $1.35                6.8¢
100                     $6.49                6.5¢
500                     $29.95               6.0¢
1,024                   $54.95               5.4¢
5,120                   $269.95              5.3¢
10,240                  $529.95              5.2¢
51,200                  $2,499.95            4.9¢
102,400                 $4,499.95            4.4¢

To be fair, I will give it a shot. I will not, however, be using it as storage for any real projects, even the FUN kind. My only fear is that if service providers like DreamHost start to promise something like S3 and fail to deliver, it will only make AWS stronger and kill the potential up-and-coming competition.

 

Right now I am just angry at DreamHost. I’ll wave my fist at my monitor for a little while more.

Apr 04, 2013
 

I have posted earlier about Linode and the declining performance of my node as time went by. However, in the last few weeks it has been back to top performance, and I did not change anything; in fact, I added two more sites to it.

Linode has made a whole bunch of improvements recently:

 

Improved network performance

A tenfold increase in data transfer. Yes: I used to get 400 GB, but now I have 4 TB outbound (inbound is always free). The CPU also seems much better, though it looks like some accounts are being upgraded or moved around. Apparently, according to their blog post of March 18th, I will need to reboot to get the improvement.

Linode did improve their network, in my own experience.

CPU Upgrades

Now they are going ahead with better CPUs by upgrading nodes to 8 cores. Currently, Webmin reports I am still on 4 cores of an L5520.

The thing about Linode is that they do upgrade, and they do it for free. In one year, disk space increased by 50% and bandwidth by over 2000%, unlike some hosts who are only interested in keeping you on old shit and charging you more for the same stuff.

Proof:

Linode CPU upgrades, before reboot


Going ahead with reboot…

 

 

2 mins later.

 

Holy shit, it's 8 cores. Linode does it again. These upgrades Linode is doing are absolutely free. Considering that the DreamHost shared host I use has been on the same hardware for at least 4 straight years (you can actually distinguish between crawling-2012 and crawling-2013), this is commendable. On a side note, the only thing that works well on DreamHost for me is their mail server… aptly named "homie".

 

Their blog post named the E5-2670 as the new processor. I don't see it yet, but then it could be Webmin; I will check on this in detail. But heck, at a free upgrade to 8 cores, I am not complaining.

Given the current upgrades at Linode, I am planning to move one of my dedicated hosts, with 8 sites on it, over to Linode, to solve my problem with European providers who are now buying gold chains with your IP address money.

Proof:

Linode after reboot: CPU upgraded to 8 cores


 

 

I just like saying proof. proof.

 

Source: Linode blog

Affiliate note: I have slapped Linode before, and I am pretty frank about it, because there are enough affiliate-driven posts about every host to screw all our collective brains. I use affiliate links when available, and they get me money off my own hosting at Linode. It does not make me rich and it does not cost you more, so feel free to click… or not.

Dec 20, 2012
 

I have had this site hosted on Red Hat OpenShift for almost a year. Considering I got this hosting free (and you can too), I was a bit apprehensive about whether I should move the site elsewhere, but I let it be. Surprisingly, not only do I get awesome performance, the uptime has been incredible. I was going to set this blog up on an AWS EC2 micro instance; however, a micro instance being what it is, it would have cost as much as a Linode VPS and run on a time-shared CPU, which gets very annoying as you find the site intermittently slower.

Red Hat OpenShift offers a free micro-instance equivalent, the difference being that you are probably on a much bigger instance, since the PaaS runs atop the AWS cloud, making this setup akin to a VPS. That makes more sense than spending on a Small instance or settling for a Micro. In fact, I don't recommend a Micro instance at all for any purpose other than testing, compiling, or other such processes where on-demand CPU is not necessary.

By comparison, my Linode VPS, which costs me $40 a month, does not do as well in performance when I use it to host WordPress sites. I am still not clear why memory usage and swap are higher on the Linode VPS (perhaps the CPU is oversubscribed), but this OpenShift instance is at 512 MB RAM and doing just great.

If you are a developer who does not want the hassle of setting up servers and services and just wants to get down to coding, I recommend you give Red Hat OpenShift a try. You will not be disappointed, especially if you build sites for your clients.

Considering the way things are changing with PaaS and cloud, and the price being what it is, I wonder why I still put up with my DreamHost account, which is barely usable and hosts thousands of users and sites on a single server. I could not even do basic PHP development and testing on it. The same goes for pretty much any web host or reseller: GoDaddy, Media Temple, and blah blah.

 

Since I also use Google App Engine for development and learning, it is worth adding why you would choose Red Hat over App Engine. Familiarity is possibly number one. Granted, App Engine supports MySQL now, but it remains that on OpenShift you have shell access to your instance, much as you would on your own box. You can also access some basic metrics, and new services are being built on OpenShift all the time; check out the recently launched WebSockets beta: https://openshift.redhat.com/community/blogs/newest-release-websockets-port-forwarding-more

 

My favorite Python web framework, Flask, is effing supported as well: https://openshift.redhat.com/community/get-started/flask. I cannot describe how much pain is involved in hosting these Python apps on just about any distro. I think I am going to set up my Flask sites over at OpenShift. Of course, Django is supported too; I have tried neither of them there.
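For reference, the kind of app this involves is tiny; a minimal sketch (the OpenShift cartridge wires up its own WSGI entry point, so running it directly like this is just for local testing):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from OpenShift!"

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)

The painful part on a bare distro is everything around those ten lines (virtualenvs, WSGI servers, init scripts), and that is exactly what the platform is supposed to absorb.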

 

Now that I am confident about OpenShift, here are the things I would like to learn to get to a production deployment of my Python projects:

  1. How do I add SSL certificates?
  2. How do I enable autoscaling? (I suppose this is just to do with your AWS account, and of course it seems you need OpenShift Enterprise for it.)
  3. How do I use my existing RDS with OpenShift, and securely? (My working assumption is sketched after this list.)
  4. I am sure there is a whole bunch of things I haven't thought of yet.
All of the above is in the docs somewhere.
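For number 3, my working assumption is that it is just a MySQL connection with TLS forced on; a sketch with a hypothetical endpoint (any MySQL driver with SSL support would do):

import pymysql

conn = pymysql.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",   # hypothetical RDS endpoint
    user="appuser",
    password="...",
    database="blog",
    ssl={"ca": "rds-combined-ca-bundle.pem"},         # Amazon's published CA bundle
)

Whether OpenShift lets you lock the connection down further (security groups, tunnels) is one of the things I still need to dig out of the docs.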
Concern: given that Red Hat's own enterprise support is not highly regarded among devs and ops, I wonder what OpenShift "Enterprise" will do for us.
OpenShift Enterprise
Truth is, I do not have experience with Red Hat enterprise support or with the OpenShift Enterprise service. The question is: should I be the one to get that first-hand experience? Should I bet my job on it? That is going to take some daring. xD