Feb 13 2017
 

Found a good post on how to use the rescue mode in Hetzner to set up a Logical Volume that spans multiple drives. This plus vsftpd can give you terabytes of backup storage. In any case, you should not use Hetzner for anything closely resembling a legit online service, website etc. It is only popular as a seedbox host for a reason: they are quick to lock your server out.
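For reference, the LVM part boils down to a few commands. A minimal sketch, assuming two spare drives at /dev/sdb and /dev/sdc (hypothetical device names, check yours with lsblk):

# pvcreate /dev/sdb /dev/sdc                  # mark both drives as LVM physical volumes
# vgcreate backupvg /dev/sdb /dev/sdc         # pool them into one volume group
# lvcreate -l 100%FREE -n backuplv backupvg   # one logical volume spanning both drives
# mkfs.ext4 /dev/backupvg/backuplv            # format it
# mount /dev/backupvg/backuplv /mnt/backup    # and mount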

Speaking of seedboxes, if you want to set one up, here is a good script: https://github.com/arakasi72/rtinst It can optionally install Webmin.

For file storage, search, download etc. I have not yet found a good tool. It's mostly find and scp.
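What that workflow looks like in practice, as a rough sketch (the host and paths are made up):

# find the files on the box...
find ~/downloads -iname "*linux-iso*" -size +1G

# ...then pull them down over SSH
scp -r user@seedbox.example.com:"~/downloads/linux-iso" /local/storage/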

If you feel adventurous, I found a tricked-out seedbox setup script here: https://github.com/dannyti/seedbox-from-scratch. Does everything and makes coffee.

With Bacula running on the local server I can send snapshots over to Hetzner, albeit at a slow-as-snail speed. 30 Euros per month for 5.5TB of space is not too bad. I picked up a system from their auctions and it's quite alright, but one shouldn't expect performance or longevity from such systems. You get what you pay for, essentially.

 

Nov 04 2016
 

If you read my posts you will realize I am a fan of Linode, and have been for years. They are great and reasonable people.

I decided to give DigitalOcean a try since I have heard great things about it.

The connectivity from here in Singapore to my first Droplet in London was awesome! They have quite a few locations, which is surprisingly hard to find out from their site; you really have to search the FAQ etc. So here is a screenshot of all their locations.

DigitalOcean data center locations

That's a pretty cool list of locations. Bangalore and Singapore! Wow. And it all starts at $5, but for you it's a free $10 so you can try things out. And I get some referral credits so I can ping other data centers 😀

I wanted to find out more about how I could make it easy to deploy images with Docker, Chef or Puppet. I need the ability to maintain multiple regions with the same exact image, with all changes applied, without too much DevOps.
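As a sketch of what I am after, DigitalOcean's doctl CLI can stamp out identical Droplets from one snapshot across regions; the snapshot name, size slug and SSH key ID below are hypothetical:

# same golden image, one droplet per region
for region in lon1 sgp1 blr1; do
  doctl compute droplet create "web-$region" \
    --region "$region" \
    --image my-golden-snapshot \
    --size 1gb \
    --ssh-keys 1234567
done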

I would definitely use my free credits to test other locations like Singapore and Bangalore.

Here are my MTR results to London from Singapore, run from an actual VPS. I would expect an average 270-350ms RTT, so this is pretty good.

 

Start: Fri Nov 4 13:45:06 2016
HOST: xeon Loss% Snt Last Avg Best Wrst StDev
 1.|-- gateway 0.0% 100 0.3 0.3 0.2 3.9 0.3
 2.|-- 101.100.165.3 0.0% 100 1.8 201.2 1.6 3931. 726.0
 3.|-- 103-6-148-41.myrepublic.c 0.0% 100 1.7 14.7 1.5 57.4 18.5
 4.|-- 103-6-148-13.myrepublic.c 0.0% 100 2.5 3.3 2.2 19.8 2.5
 5.|-- 116.51.27.101 0.0% 100 7.7 3.4 2.2 15.5 1.6
 6.|-- ae-0.r21.sngpsi05.sg.bb.g 21.0% 100 11.4 4.1 2.1 14.1 2.7
 7.|-- ae-8.r24.londen12.uk.bb.g 0.0% 100 185.8 187.1 185.8 193.7 1.4
 8.|-- ae-1.r25.londen12.uk.bb.g 0.0% 100 182.2 183.4 182.1 193.1 1.9
 9.|-- ae-2.r02.londen01.uk.bb.g 0.0% 100 183.3 183.8 183.2 191.6 1.0
 10.|-- hds.r02.londen01.uk.bb.gi 0.0% 100 189.0 189.4 188.3 231.4 4.4
 11.|-- ??? 100.0 100 0.0 0.0 0.0 0.0 0.0
 12.|-- 188.166.149.149 0.0% 100 183.4 183.5 182.9 188.8 1.0


The lower the better. 185 ms is the lowest I have ever seen.

I added the IP to AWS Route 53 latency-based routing DNS and noticed that while most countries in Europe were getting the London IP assigned, the UK itself was not. This could be either an AWS problem or a DigitalOcean problem.
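For reference, a latency-based record is created like this with the AWS CLI; the zone ID and hostname are placeholders, and the Region field takes the nearest AWS region (eu-west-1 for a London droplet at the time):

aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE123 \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": "do-london",
        "Region": "eu-west-1",
        "TTL": 60,
        "ResourceRecords": [{"Value": "188.166.149.149"}]
      }
    }]
  }'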

So if you are going to give it a try for free (I paid the $5), you can use my code by clicking through here: http://www.digitalocean.com/?refcode=3a149653659e

Jun 08 2016
 

Finally I signed up for the Google Cloud free trial. I wanted to set up the GamingIO.com game servers in a way that I could manage better. I found AWS EC2 was not up to the task: to get even usable performance for a single game instance I would need to shell out $25 a day at the least. I dropped it there and then, save for events I used to run for charitable causes.

Back to GCloud. Here I ran the same test using a small instance and was able to launch and play a decent CS:GO match for 10 players on it, including bots. I would go as far as to say that even a micro instance on GCloud can get you that performance. Just the way it should be. I used Compute Engine with a regular disk (not SSD) for setting up the Steam dedicated servers.
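The server install itself is the usual SteamCMD routine; a minimal sketch (the install dir is your choice, 740 is the CS:GO dedicated server app ID):

# fetch SteamCMD and install the CS:GO dedicated server
wget https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz
tar xzf steamcmd_linux.tar.gz
./steamcmd.sh +login anonymous +force_install_dir ~/csgo_ds +app_update 740 validate +quit

# start a classic competitive match
~/csgo_ds/srcds_run -game csgo -usercon +game_type 0 +game_mode 1 +map de_dust2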

Compared to hosted game servers that charge $5 a month for 10 slots/players/bots, I think folks could do better with this method. You could go up to 20 players.

To be fair to AWS, it has tons of tenants sharing that vCPU. GCloud may be underselling to make the performance look good for now. Only time will tell how far we can go.

I did sign up for Azure Cloud but never got around to testing it. Maybe another time.

Jul 18 2015
 

Towards the end of June I lost the main drive on my desktop.


It took me 2-3 days to figure out that it was not possible to boot from the drive. To avoid further harm to it, I used my second drive to boot into Windows 8.1. That was a stupid move: Windows was working again, but I had lost all data on the second drive. I was certain that since I had two cloud backup providers things would be OK. Unfortunately, while Backblaze had all the data (except for "Program Files", which it does not back up), I had no way to download it. Their restore software does not work; the only way is to have the data shipped over.

Crashplan was much better. The application can restore, and they have a Hong Kong location which is close. However, it will take 3 weeks; I am now in the last week of recovery. The other problem is that Crashplan backs up very slowly, so it did not have all my data. I will get bits and pieces.

I still have the damaged internal HDD, which I am trying to power on with an external connector. It doesn't seem to work. I will need to get a drive bay and try again, as it did hold an old copy of my data drive.

While I was in Tokyo last week, I managed to get a good deal on a 1TB Samsung 3D SSD as well as a 6TB Western Digital Red. I have hooked them up. I will be using the 6TB drive to copy all the recovered data from the different sources; then I need to compare and get the latest version of everything out for restore. The SSD will replace my original data drive.

 

If the above was not enough, one of my main servers had 2 out of 3 drives fail (hardware fault) earlier this week. Since I was travelling I could not investigate. Despair!



 

All hope was not lost.

This is a 3-disk RAID5 with mdadm software RAID, and last night I started to investigate. I was able to boot into rescue mode and found that at least 2 out of the 3 drives could be read by the OS; only one was completely unreadable. I asked the hosting provider to replace only the unreadable one of the two faulty drives for now, so I could reassemble the RAID5 and start recovery. The recovery started successfully after a couple of failed attempts with mdadm; in forced mode I was able to reassemble the array, and cat /proc/mdstat shows that recovery is in progress.
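For anyone in the same spot, the forced reassembly went roughly like this (device names here are hypothetical, yours will differ; double-check with mdadm --examine first):

# force-assemble the degraded array from the two readable members
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1

# add the fresh replacement drive so the rebuild kicks off
mdadm --manage /dev/md0 --add /dev/sdc1

# watch the rebuild progress
cat /proc/mdstat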

It has been over 12 hours now. I believe another 3 hours should complete the recovery, and then I can mount the RAID volume to check. However, I will not be using the current volume for anything yet; the faulty drive may fail further and data loss can occur.

The plan now is to replace the third drive once the RAID has completed recovery; the rebuild will then start again onto the new third drive. I should be able to get the complete system up in another 24 hours. Then I will start a new server in RAID1 with enough space and pump the data across to it. I will need to download all 2.5TB of data from the new host as my first backup and then set up rsync.
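The ongoing sync will be plain rsync over SSH, something like this sketch (the host and paths are made up):

# incremental mirror of the data volume to the new backup server
rsync -aH --partial --progress --delete \
  /data/ backup@newserver.example.com:/backup/data/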

In the meanwhile, I hope I can get my whole development environment, and my extremely bleeding-edge modded Skyrim, back on my desktop T_T

What a month!

 

 

Dec 17 2014
 

Classic overflow problem. The bottom line is that it's all about IO... logs hit both memory and disk.

I should learn to take my own advice. I have always minimized disk hits on my server instead of starting with database caching code.

Recently one of my project servers received higher than normal traffic and it killed the mysqld process. There was no way to keep the DB running as it would keep getting terminated. This is bad... really bad. MySQL itself was fine: queries were cached, and there weren't too many connections either. The problem was that it was still the process using the most memory, and with memory low and swap out of space, the OS decided to kill the biggest memory consumer (the Linux OOM killer; I still need to dig deeper into how it picks its victim). In any case this should not have happened: there was more than enough memory, the traffic wasn't nearly high enough, and CPU usage was low.
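If you suspect the same thing, the kernel logs the kill. A quick way to confirm (log paths vary by distro):

# look for OOM killer activity in the kernel ring buffer
dmesg | egrep -i "out of memory|killed process"

# or in syslog on Debian/Ubuntu-style systems
grep -i oom /var/log/syslog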

The culprit for the memory hogging was the rsyslogd process; second were the php-fpm children. Normally I recommend using a ramdisk-type location for logs, or simply logging to a remote collector and not locally at all. In this instance several logs were split up and some were in verbose mode. So despite being able to support 300 requests per second, the site was barely keeping up with 10. The problem with logging is the disk IO: a log line is still a write, and no matter how much we optimize the database, if the IO is spent elsewhere there is still a problem. It was more pronounced in this case because of software RAID5, which is infamous for additional IO overhead and the least return on the investment.

 

The first thing I did to get the MySQL server running and staying up was to stop the rsyslog daemon. Then I cleaned up my rsyslog configuration; I now only log everything via UDP to Splunk Storm. On AWS I used to run a collector written in Twisted Python. It was a simple script and it still works well; to be posted to GitHub by the end of this post.
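Forwarding everything over UDP is a one-liner in rsyslog; a single @ means UDP (@@ would be TCP), and the collector host and port here are placeholders:

# /etc/rsyslog.conf -- ship all logs to the remote collector, keep nothing local
*.* @collector.example.com:514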

 

PHP-FPM children:

I use dynamic allocation, but max_children was set to 100 in one of the FPM configs. I lowered it to 20. There is really no reason to have 100 child processes, especially since I use FPM pools to split different sites across different users, with each site maintaining its own environment. Each FPM pool (virtual user) had between 20-50 max children.
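The relevant knobs in a pool config look like this; a sketch (the pool file path varies by PHP version, and the values are per-site judgment calls, not recommendations):

; /etc/php5/fpm/pool.d/mysite.conf -- one pool per site/user
[mysite]
user = mysite
group = mysite
pm = dynamic
pm.max_children = 20      ; hard cap on workers, i.e. the memory ceiling
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6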

The overall memory improvement was about 65%, and I was able to serve 100 req/second again without problems and without forking out more money. This could explain why my AWS deployments used fewer resources, as opposed to others I have analysed which were spending 5-10 times more on instance usage while serving less than a tenth of the traffic.

 

Good luck with your Adventures and look before you upgrade.

 

 

Update: Another log I disabled but forgot to mention is the Apache/Nginx access log, while keeping error logging at "critical" only. This dropped an additional 90% of IO usage and reduced memory usage to 25%.
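In Nginx terms that is just the following (Apache has equivalent directives):

# inside the server block: no access log, errors at crit only
access_log off;
error_log /var/log/nginx/error.log crit;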

If you use CDN services or remote logging then you'd be better off either way. Services like Akamai offer logging at the "edge", which is a far more useful solution. I am going to write another post on this.

Feb 02 2014
 

This is something I missed myself and discovered recently. It happens to anyone. What you experience is that despite having a Gbps port, with the provider confirming it, you are not able to see that speed up or down.

You will also observe packet loss at the last/first mile, on your server's side. The fix is simple.

 

 #ethtool eth0 | egrep "Speed|Duplex|Auto-neg"
       Speed: 100Mb/s
       Duplex: Full
       Auto-negotiation: off

This shows the problem. If you have multiple interfaces, test them all.

The fix is as follows:

#ethtool -s eth0 autoneg on

Then test again with the above command. The output should look like this:

# ethtool eth0 | egrep "Speed|Duplex|Auto-neg"
       Speed: 1000Mb/s
       Duplex: Full
       Auto-negotiation: on

If you see otherwise, it means you don't have that connectivity from your provider on the switch, or your network isn't 1Gb/s. Revert your changes:
#ethtool -s eth0 speed 100 duplex full autoneg off

To make the change permanent, check your respective OS interface configuration and remove the lines that force the old speed.

Ubuntu/Debian-like:
in /etc/network/interfaces comment out
#post-up mii-tool -F 100baseTx-FD eth0

CentOS/Red Hat/Fedora:
in /etc/sysconfig/network-scripts/ifcfg-eth0
comment out
ETHTOOL_OPTS="speed 100 duplex full autoneg off"

Feb 02 2014
 

A lot of providers will sell you a 1 Gbps port with oodles of bandwidth, and a test file download to boot. For most people that seems to be enough. That is incorrect in most cases. Good speed and bandwidth are just one piece of the cake you never ate: you can have a lot of bandwidth allocated, but with bad network connectivity, crappy maintenance, decade-old routers etc. you will never reap any of the benefits. This is why some dedicated hosts do poorly when serving end users compared to sites on smaller shared hosting.

Here's how to evaluate a new provider when you do not know any existing sites you could use to test. (And please don't use the provider's own test IPs and files as targets; those are specially set-up servers.)

First, find out the organization name. Here I target "NoUptime" (fictitious), since they actually have oodles of packet loss and make a good example (in a bad way). Nothing against them; they are a great low-cost provider.

 

First we need to find an IP that is live with this provider and test against it.

1. Go to RIPE, who maintain the list of who owns which IPs. It is not always up to date, but for our purposes it is quite accurate.

I will visit their Database search

https://apps.db.ripe.net/search/full-text.html

Type in NoUptime and you will see some results.

On the right side of the page there is a "result type" filter. Select "inetnum", i.e. IPv4. You can even try inet6num for IPv6 if that's what you want to test.

The results are updated.

2. Select an IP from the results. You may have to try multiple times to get one that is live. I normally select the first IP in the range given in a result; also try different search result pages, not just the first.

So I get an IP, let’s say 10.2.3.4 (not real IP)

3. Run the MTR

While this is only half the report, it is still a good indicator. So let us begin. Get the MTR tool if you haven't already. Once you have it installed, run it against the target IP.

 

Example:

#mtr 10.2.3.4 (edited to remove real identification IPs etc)

Host                                    Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 192.168.0.1                           0.0%   623    0.4   0.5   0.4   1.4   0.1
 2. some.ip.near.you                            0.0%   623    3.7   6.0   3.0 102.9   8.0
 3. some.ip.further                         0.0%   623    4.9   7.7   3.7 210.7  17.8
 4. some.ip.on.isp                          0.0%   623    4.2  10.0   3.9 206.4  24.1
 5. some.ip.isp.isp               0.0%   623   10.2   7.6   4.8  40.6   2.1
 6. init7.net.any2ix.coresite.com         0.0%   623  173.6 171.4 169.6 197.7   1.7
 7. r1nyc1.core.init7.net                 0.0%   623  256.7 255.8 251.4 267.1   3.6
 8. r1lon1.core.init7.net                 0.0%   623  321.7 317.0 312.5 332.3   4.3
 9. r1nue1.core.init7.net                 0.0%   623  335.1 336.0 333.7 347.9   2.6
10. gw-nouptime.init7.net                  0.0%   622  333.5 339.0 332.3 415.5  13.3
11. core12.nouptime.de                    44.4%   622  335.5 335.6 334.2 351.6   1.6
12. core22.nouptime.de                    12.5%   622  336.8 337.3 335.3 360.8   2.9
13. juniper2.rz13.nouptime.de              10.0%   622  373.3 351.5 328.5 424.0  23.5
14. hos-tr4.ex3k11.rz13.nouptime.de        15.0%   622  331.9 332.4 329.9 343.2   1.6
15. static.4.3.2.10.clients.your-server.de    20.0%   622  337.1 336.9 335.2 346.5   0.9

The above shows the packets sent to the NoUptime IP that hosts a customer's server, and this amount of packet loss is really, really bad. Lost packets have to be retransmitted. If you were on an IP at NoUptime you could even run the same trace from the server back to your connection; it is safe to say it would be just as crappy. From the output we observe that almost half the packets are lost inside NoUptime at hop #11, which means half the data you send never arrives and has to be resent. What's worse, at hop #12 another slice of the packets that do manage to survive hop #11 die of unnatural circumstances. So at the end of the day your 100 MB file will take more than twice as long to upload. Now, a CAVEAT: some routers don't respond to all ICMP, as providers claim, though what I do not understand is why they'd respond to half the packets and not all. In any case, what you really need to look at is the last hop, in this case #15. Here we see 20% packet loss. This is the REAL loss and that is what matters. Again, the reason I say to try different IPs is because someone may have configured their own network wrong, like not turning "autoneg on", which was my case.

I have observed that the reverse MTR at NoUptime is even worse, so downloads, which account for over 90% of all regular website traffic, will suffer greatly, hurting the end-user experience. This provider is consistent enough that I could just decide one day (today) to write about how to test, and still get the same sort of lossy MTR. I mention this because packet losses are normal on the internet, and the loss is not the same from every location or on every day.

To give any provider the benefit of the doubt, conduct the test over a couple of days rather than continuously. Just get a sample every few hours for a few days (2-5), try from multiple locations if you have SSH access to remote servers you currently run, and try RIPE for different IPs of the provider. Faulty devices and packet loss can be fixed by engineers when detected, and normally they fix these things on their own. If their network maintenance is really crappy and they do not fix it despite customer complaints, well, now you know.
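A lazy way to collect those samples is a cron job; a sketch with a placeholder target IP:

# crontab: a 100-cycle MTR report every 4 hours, timestamped and appended
0 */4 * * * (date; mtr --report --report-cycles 100 10.2.3.4) >> /var/log/mtr-nouptime.log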

they has your packets.

Do you have any more ideas on how to evaluate a new Hosting Provider? Share in the comments please.

Until next time.

 

EDIT: I have corrected the example. I ran a real traceroute at the time and adjusted the numbers manually out of laziness; I have now marked the packet loss by hop correctly to explain the scenario. The point is: if you see zeroes in between, the intermediate loss is not an issue. Real loss runs from some starting hop all the way to the end, or to the second-to-last hop (in case the destination host is also blocking ICMP).

Also see comments from Chris below

May 04 2013
 

I happened to log in to the Dreamhost control panel today to set up email forwarding. I have mentioned before that the Dreamhost hosting I have is so pathetic that it's now impossible to host even simple non-DB websites; forget DB websites altogether. It is slow as frack, and has been completely and utterly useless for over a year now. I still use it for email though. It's the one thing that works best on DH and saves me the hassle of mailbox setup etc.

Anyway, back to the Dreamhost panel. I see that Dreamhost prompted me to check out "DreamObjects", which offers a 3¢/GB storage cost for the promo. Sounds good? Well, there is no mention of data transfer charges, so I dug into the details: http://dreamhost.com/cloud/dreamobjects/pricing/ Rightly named; I too would "object" to this, because whatever happened to the unlimited space deal we all have? Turns out the pricing isn't bad. But hang on! We are talking about Dreamhost here. They are about the crappiest web hosting around today. They have a lot of their image to recover in my books before I buy their claims of an S3-equivalent reliable, available and cost-effective storage.

From Singapore I can still download from my US East bucket at close to 10Mbps, the response times stay consistent, I have never lost an object in over 3.5 years even at reduced redundancy, and S3 has a proven track record. My first impression was "Great! DreamHost is trying to reinvent itself", soon followed by "Wait, it's Dreamhost... dream on".

Further, it's not really pay-as-you-use; it's still a pre-paid plan. Without the promo it comes down to the following pricing, which is definitely much lower than S3, and they "say" there is no cost for API requests. If they are running this program on their crappy servers and already oversold network, I am quite disappointed. Amazon S3 advertises its SLA and durability on the first product page: http://aws.amazon.com/s3/#protecting. Unsurprisingly, DreamHost has failed to mention any of that in clear writing.

Pre-paid Storage Plans:

STORAGE INCLUDED (GB)   PLAN PRICE / MTH   EFFECTIVE RATE / GB
20                      $1.35              6.8¢
100                     $6.49              6.5¢
500                     $29.95             6.0¢
1,024                   $54.95             5.4¢
5,120                   $269.95            5.3¢
10,240                  $529.95            5.2¢
51,200                  $2,499.95          4.9¢
102,400                 $4,499.95          4.4¢

To be fair, I will give this a shot. I will not, however, be using it as storage for any real projects, even the FUN kind. My only fear is that if service providers like DreamHost start promising something like S3 across the industry and fail to deliver, it will only make AWS stronger and kill the potential up-and-coming competition.

 

Right now I am just angry at DreamHost. I’ll wave my fist at my monitor for a little while more.

Apr 04 2013
 

I have posted earlier about Linode and its lack of performance on my node as time went by. However, in recent weeks it has been back to top performance and I did not change anything. In fact, I added two more sites to it.

Linode has made a whole bunch of improvements recently:

 

Improved network performance

Data transfer increased by 1000%. Yes. I used to get 400GB but now I have 4TB outbound (inbound is always free). It seems the CPU is also much better, though that may be because some accounts were upgraded or moved around. According to their blog post of March 18th, I will need to do a reboot to get the CPU improvement.

Linode did improve their network, in my own experience.

CPU Upgrades

Now they are going ahead and putting better CPUs in nodes, upgrading them to 8 cores. Currently Webmin reports I am still on 4 cores of an L5520.

The thing about Linode is that they do upgrade, and they do it for free. In one year the disk space increased by 50% and the bandwidth by over 2000%. Unlike some hosts, who are only interested in keeping you on old shit and charging you more for the same stuff.

Proof:

Linode CPU upgrades, before reboot (screenshot)

Going ahead with reboot…

 

 

2 minutes later.

 

Holy shit, it's 8 cores. Linode does it again. These upgrades Linode is doing are absolutely free. Considering that the Dreamhost shared host I use has been on the same hardware for at least 4 straight years, where you can actually distinguish between crawling 2012 and crawling 2013, this is commendable. On a side note, the only thing that works well on Dreamhost for me is their mail server... aptly named "homie".

 

Their blog post said the E5-2670 is the new processor. I don't see it yet, but that could be Webmin; I will check on this in detail. But heck, at a free upgrade to 8 cores I am not complaining.
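An easy way to double-check without trusting Webmin:

# CPU model straight from the kernel
grep "model name" /proc/cpuinfo | sort -u

# core count visible to this node
nproc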

Given the current upgrades at Linode, I am planning to move one of my dedicated hosts with 8 sites over to Linode, to solve my problem with European providers who are now buying gold chains with your IP address money.

Proof:

Linode after reboot: 8-core CPU upgrade (screenshot)

 

 

I just like saying proof. proof.

 

Source: Linode Blog

Affiliate note: I have slapped Linode before. I am pretty frank about it, because there are enough affiliate-based posts about each host to screw all our collective brains. I use affiliate links when available; it gets me money off my own hosting at Linode. It does not make me rich and it does not cost you more, so feel free to click... or not.

Jan 14 2013
 

I just received an email from one of my server providers in Europe, Hetzner, who I have used for over 2 years. I have a rather old server I got with a total of 8 IPs included in the monthly price, for which I also paid a hefty setup fee.

In the almost 3 years, I have never asked for a price reduction or upgrade despite better offers elsewhere, seeing as it was reasonable. I was surprised that Hetzner has, out of nowhere, increased the price for new IPs, and at the same time applied a 1 Euro per month charge for ALL existing IPs that were in fact included in the package. I know they are not the cleanest of hosts in their dealings, and I have several problems with them, but then again I never put anything critical on their servers except test projects or projects in development.

The change will affect everyone from March 2013. The thing is, I was already looking for an alternate host outside of Europe, because communication and local laws are a big problem. Several times, local individuals have gotten my sites shut down via easily issued court orders, without substantial investigation. Some of my really popular sites, like Grepler.com, I had to move off this host due to that harassment.

Here is an extract of the email I received, which I found while checking my mail to compare what I am paying and where to move.

As you are probably aware from various reports in the media, the number of available IPv4
addresses continues to drop drastically on a global level. In Europe, the pool of available
addresses has already been completely allocated. We shall also not be receiving any further
IPv4 addresses from the Regional Internet Registry for Europe, Reseaux IP Europeens (RIPE).

As demand for IPv4 addresses continues to remain very high, we are forced to tighten
restrictions as regards both their use and allocation to clients.

For additionally allocated IPv4 addresses and subnets, we are obliged to introduce a new
pricing model for both existing and new orders.

As a first step from 1 March 2013, additional single IPs which are to be used in addition
to the Main IP will be charged at a price of 1 euro per month. This excludes the additional
IP address required for the chargeable KVM-over-IP Remote Management Option. The change affects
existing clients, who have obtained IP addresses without additional charge. Nothing changes for
clients who are already being charged for additional single IPs.

Furthermore, adjustments in the monthly price for regular and failover subnets will take place.
Clients who are affected here are those who have up to now received subnets free-of-charge or
at favorable rates.

From 1 March, the price will change for /29 subnets which have already been allocated to
clients. Prices for larger subnets for existing contracts will be adjusted in the months
following, so as to provide clients who have correspondingly larger nets with a little more
time to prepare for consolidation of the nets.

The following overview serves to clarify the introduction of the pricing adjustment as well
as the future conditions:

Price valid from  Product              Price per Month
1 March 2013      /30 Subnet           4 euros
1 March 2013      /29 Subnet           8 euros
1 March 2013      /29 Failover Subnet  18 euros
1 April 2013      /28 Subnet           16 euros
1 April 2013      /28 Failover Subnet  26 euros
1 May 2013        /27 Subnet           32 euros
1 May 2013        /27 Failover Subnet  42 euros
1 May 2013        /26 Subnet           64 euros
1 May 2013        /26 Failover Subnet  74 euros
1 June 2013       /25 Subnet           128 euros
1 June 2013       /25 Failover Subnet  138 euros
1 June 2013       /24 Subnet           256 euros
1 June 2013       /24 Failover Subnet  266 euros
1 June 2013       /23 Subnet           512 euros

The “Price valid from” is for existing subnets. The prices mentioned above immediately apply
to all new orders for subnets. All prices incl. 19% VAT.

 

The email then described how much I will be paying extra, which is 8 euros. This price increase includes ALL IPs, even the bare minimum single IP you need for a server to operate anyway. The server where the impact is highest is the old one: an AMD Athlon(tm) 64 3800+ single-CPU box with 8 IPs, for which I pay 41 Euros to this day. With 8 Euros added, it is a 20% increase.

 

Further to this, they add that you can return the IPs (so they can sell them for more), or keep a maximum of 3 (chargeable) IPs:

(contd…)With this measure, we hope to receive any unused single IPs as well as subnets from you.
Depending on the size of the subnet, it is possible for you to return whole subnets to us
and to receive chargeable single IPs (max. three per server) in return

 

 

Well, for one, I knew I should not have paid them again to upgrade my second server with them, which saw several hardware failures in its first year of operation. Secondly, I should have made my move a long time ago, but you know, it's a whole lot of work. I think it's a good time to move out. What is disappointing is that they have changed the terms of the agreement: what was included as part of the package is now being charged for again. Fact is, we ran out of IPv4 over a year ago, if I am not wrong. Obviously some people are willing to pay a bit more, and now the same hosts who used IP addresses in their offers are smashing through the agreements to claw them back for that extra penny. They want their subnets? They can have their whole servers back.

The problem is, they don't support IPv6 anyway. A lot of new hosts already support it, and DNS providers like Cloudflare, AWS Route 53 and GoDaddy (not sure about this guy) also support it.

So far it has been a very stressful situation with Hetzner.de, and I am more than happy to let go and move on. I completely blame myself for falling for the cheap hosting when I went to Hetzner. I don't think other European hosting is far behind.

 

I am considering 100TB or LiquidWeb. The problem is, I automatically hate hosting that pastes the word "cloud" everywhere for no good reason except buzzwords to charge more, so those are easily out. Feel free to suggest a good host for me 🙂