Author: James Gregory
Date:  
Subject: Re: [bitfolk] Support this weekend / Ubuntu Lucid LTS release


It would have to be significantly cheaper than the same amount of RAM to make it worthwhile.

On Wed, Oct 5, 2011 at 5:34 AM, Andy Smith <andy@bitfolk.com> wrote:

> Hello,
>
> This email is a bit of a ramble about block device IO and SSDs and
> contains no information immediately relevant to your service, so
> feel free to skip it.
>
> In considering what the next iteration of BitFolk infrastructure
> will be like, I wonder about the best ways to use SSDs.
>
> As you may be aware, IO load is the biggest deal in virtual hosting.
> It's the limit everyone hits first. It's probably what will dismay
> you first on Amazon EC2. Read
> http://wiki.postgresql.org/images/7/7f/Adam-lowry-postgresopen2011.pdf
> or at least pages 8, 29 and 30 of it.
>
> Usually it is IO load that tells us when it's time to stop putting
> customers on a server, even if it has a bunch of RAM and disk
> space left. If disk latency gets too high everything will suck,
> people will complain and cancel their accounts. When the disk
> latency approaches 10ms we know it's time to stop adding VMs.
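
As a rough illustration of how that 10ms figure can be watched, per-IO latency can be derived from two samples of /proc/diskstats. A minimal sketch follows; the device name, interval and threshold are placeholders, and this is not BitFolk's actual monitoring:

    #!/usr/bin/env python
    # Estimate average IO latency (ms per IO) for one block device by
    # sampling /proc/diskstats twice. After the device name the fields
    # are: reads, reads merged, sectors read, ms spent reading,
    # writes, writes merged, sectors written, ms spent writing, ...
    import time

    DEVICE = "sda"        # placeholder device name
    INTERVAL = 10         # seconds between samples
    THRESHOLD_MS = 10.0   # the "stop adding VMs" level mentioned above

    def sample(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == dev:
                    reads, read_ms = int(fields[3]), int(fields[6])
                    writes, write_ms = int(fields[7]), int(fields[10])
                    return reads + writes, read_ms + write_ms
        raise ValueError("device %s not found" % dev)

    ios1, ms1 = sample(DEVICE)
    time.sleep(INTERVAL)
    ios2, ms2 = sample(DEVICE)

    delta_ios = ios2 - ios1
    avg_ms = (ms2 - ms1) / float(delta_ios) if delta_ios else 0.0
    print("%s: %.2f ms per IO" % (DEVICE, avg_ms))
    if avg_ms > THRESHOLD_MS:
        print("over %.0f ms - time to stop adding VMs here" % THRESHOLD_MS)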
>
> Over the years we've experimented with various solutions. We built
> a server with 10kRPM SAS drives, and that works nicely, but the
> storage then costs so much that it's just not economical.
>
> After that we started building bigger servers with 8 disks instead of
> 4, and that's where we are now. This worked out, as we can usually
> get around twice as many VMs on one server, and it saves having to
> pay for an extra chassis, motherboard, PSUs and RAID controller.
>
> SSD prices have now dropped enough that it's probably worth looking
> at how they can be used here. I can think of several ways to go:
>
> - Give you the option of purchasing SSD-backed capacity
>   =====================================================
>
>   Say SSD capacity costs 10 times what SATA capacity does. You get
>   to choose between 5G of SATA-backed storage or 0.5G of SSD-backed
>   storage for any additional storage you might like to purchase, the
>   same price for either. (A worked version of this arithmetic
>   follows below.)
>
>   Advantages:
>
>   - The space is yours alone; you get to put what you like on it. If
>     you've determined where your storage hot spots are, you can put
>     them on SSD and know they're on SSD.
>
>   Disadvantages:
>
>   - In my experience most people do not appreciate choice; they just
>     want it to work.
>
>     Most people aren't in a position to analyse their storage use
>     and find hot spots. They lack either the inclination or the
>     capability or both - the service is fine until it's not.
>
>   - It means buying two expensive SSDs that will spend most of their
>     time being unused.
>
>     Two are required because they'll have to be in a RAID-1.
>
>     Most of the time unused because the capacity won't be sold
>     immediately.
>
>     Expensive because they will need to be large enough to cater to
>     as large a demand as I can imagine for each server.
>     Unfortunately I have a hard time guessing what that demand would
>     be like, so I'll probably guess wrong.
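
The arithmetic of that trade reduces to one ratio. A minimal sketch, assuming the illustrative 10:1 cost figure from the example above rather than any real tariff:

    # SATA/SSD price equivalence sketch; the 10:1 ratio is the
    # illustrative figure from the example above, not a real price.
    COST_RATIO = 10.0  # SSD cost per GB divided by SATA cost per GB

    def ssd_equivalent(sata_gb, ratio=COST_RATIO):
        """GB of SSD purchasable for the price of sata_gb of SATA."""
        return sata_gb / ratio

    for sata_gb in (5, 10, 20):
        print("%dG SATA or %.1fG SSD for the same price"
              % (sata_gb, ssd_equivalent(sata_gb)))
    # -> 5G SATA or 0.5G SSD, matching the example above.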
>
> - Find some means of using SSDs as a form of tiered storage
>   =========================================================
>
>   We could continue deploying the majority of your storage from SATA
>   disks while also employing SSDs to cache these slower disks in
>   some manner.
>
>   The idea is that frequently-accessed data is backed on SSD
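
To make the caching idea concrete, here is a toy sketch of the policy such a tier would implement: an LRU cache serving hot blocks from SSD and falling back to SATA on a miss. The block counts and workload are invented for illustration; real implementations (bcache, flashcache and the like) are considerably more involved.

    # Toy model of an SSD read cache in front of SATA disks: a
    # fixed-size LRU cache that serves hot blocks from "SSD" and
    # falls back to "SATA" on a miss, promoting what it fetches.
    import random
    from collections import OrderedDict

    class SSDReadCache(object):
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.cache = OrderedDict()  # block number -> data
            self.hits = 0
            self.misses = 0

        def read(self, block, read_from_sata):
            if block in self.cache:
                self.hits += 1
                self.cache.move_to_end(block)  # most recently used
                return self.cache[block]
            self.misses += 1
            data = read_from_sata(block)       # slow path: SATA
            self.cache[block] = data           # promote to SSD
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False) # evict least recently used
            return data

    # A skewed workload mostly re-reads a few hot blocks, so even a
    # small cache absorbs most reads.
    cache = SSDReadCache(capacity_blocks=100)
    for _ in range(10000):
        block = (random.randint(0, 9) if random.random() < 0.9
                 else random.randint(10, 9999))
        cache.read(block, read_from_sata=lambda b: "data-%d" % b)
    print("hit rate: %.0f%%"
          % (100.0 * cache.hits / (cache.hits + cache.misses)))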