[bitfolk] Support this weekend / Ubuntu Lucid LTS release

Author: Andy Smith
Date:  
Subject: [bitfolk] Support this weekend / Ubuntu Lucid LTS release
> use and make a call on what needs to be backed by SATA or SSD.
>
> I think we first need to try to make it as good as possible for
> everyone, always. There may be a time in the future where it's
> commonplace for customers to evaluate storage in terms of IO
> operations per second instead of gigabytes, but I don't think we are
> there yet.
>
> As for the "low-end customers subsidise higher-end customers"
> argument, that's just how shared infrastructure works and is already
> the case in many existing metrics, so what's one more? While we
> continue to not have a good way to ration out IO capacity, it is
> difficult to add it as a line item.
>
> So, at the moment I'm more drawn to the "both" option but with the
> main focus being on caching with a view to making it better for
> everyone, and hopefully overall reducing our costs. If we can sell
> some dedicated SSD storage to those who have determined that they
> need it then that would be a bonus.
>
> Thoughts? Don't say, "buy a big SAN!" :-)
>
> Cheers,
> Andy
>
> [1] You know, when we double the RAM or whatever but keep the price
>    to you the same.
>
> [2] Hot swap trays plus Linux md = online array grow. In theory.
>
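[Editor's note: footnote [2] alludes to growing an md array online. As a rough sketch only -- the device names, partition, and array layout below are hypothetical, the exact steps depend on RAID level and filesystem, and this needs real hardware -- the procedure looks something like:]

```shell
# Illustrative sketch only; device names are hypothetical. Do not run as-is.

# After seating a new disk in a hot-swap tray, add it to the array
# as a spare:
mdadm /dev/md0 --add /dev/sdc1

# Reshape the array to use the extra disk. The reshape runs online,
# in the background, while the array stays in service:
mdadm --grow /dev/md0 --raid-devices=5

# Once the reshape finishes, enlarge the filesystem on top:
resize2fs /dev/md0
```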
> [3] "Nice virtual machine you have here. Would be a real shame if
>     the storage latency were to go through the roof, yanno? We got
>     some uh... extras... that can help you right out of that mess.
>     Pauly will drop by tomorrow with an invoice."
>       -- Tony Soprano's Waste Management and Virtual Server Hosting,
>         Inc.
>
> [4] echo "oryyvav, pbfzb, cerfvqrag naq hedhryy unir rvtug qvfxf.
>    oneone unf sbhe FNF qvfxf." | rot13
>
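[Editor's note: the `rot13` command used in footnote [4] is not a standard utility (on Debian/Ubuntu it ships in the bsdgames package), so here is an equivalent pipeline using only POSIX `tr`; decoding the disk trivia is left to the reader:]

```shell
# tr rotates both letter cases by 13 positions, which is exactly rot13;
# punctuation and spaces pass through unchanged.
echo "oryyvav, pbfzb, cerfvqrag naq hedhryy unir rvtug qvfxf. oneone unf sbhe FNF qvfxf." \
    | tr 'A-Za-z' 'N-ZA-Mn-za-m'
```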
> [5] Barring *very* occasional problems like a disk broken in such a
>    way that it doesn't die but delays every IO request, or a
>    battery on a RAID controller failing, which disables the
>    write cache.
>
> --
> http://bitfolk.com/ -- No-nonsense VPS hosting
>
> _______________________________________________
> announce mailing list
> announce@???
> https://lists.bitfolk.com/mailman/listinfo/announce
> _______________________________________________
> users mailing list
> users@???
> https://lists.bitfolk.com/mailman/listinfo/users
>


It would have to be significantly cheaper than the same amount of RAM
to make it worthwhile.

On Wed, Oct 5, 2011 at 5:34 AM, Andy Smith <andy@bitfolk.com> wrote:

> Hello,
>
> This email is a bit of a ramble about block device IO and SSDs and
> contains no information immediately relevant to your service, so
> feel free to skip it.
>
> In considering what the next iteration of BitFolk infrastructure
> will be like, I wonder about the best ways to use SSDs.
>
> As you may be aware, IO load is the biggest deal in virtual hosting.
> It's the limit everyone hits first. It's probably what will dismay
> you first on Amazon EC2. Read
> http://wiki.postgresql.org/images/7/7f/Adam-lowry-postgresopen2011.pdf
> or at least pages 8, 29 and 30 of it.
>
> Usually it is IO load that tells us when it's time to stop putting