[bitfolk] Ubuntu 10.04

Author: Andrew Nixon
Date:  
Subject: [bitfolk] Ubuntu 10.04

           http://tools.bitfolk.com/cacti/graphs/graph_1485_6.png
faustino:  http://tools.bitfolk.com/cacti/graphs/graph_1314_6.png
kahlua:    http://tools.bitfolk.com/cacti/graphs/graph_1192_6.png
kwak:      http://tools.bitfolk.com/cacti/graphs/graph_1113_6.png
obstler:   http://tools.bitfolk.com/cacti/graphs/graph_1115_6.png
president: http://tools.bitfolk.com/cacti/graphs/graph_2639_4.png
urquell:   http://tools.bitfolk.com/cacti/graphs/graph_2013_6.png

(Play at home quiz: which four of the above do you think have eight
disks instead of four? Which one has four 10kRPM SAS disks? Answers
at [4])

In general we've found that keeping the IO latency below 10ms keeps
people happy.
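
For anyone who wants to see what that number looks like from inside
their own VPS, here's a rough Python sketch of the same calculation
(just an illustration, not what feeds the Cacti graphs above; "sda" is
a placeholder device name). It samples /proc/diskstats twice and works
out the average per-request latency, the figure iostat calls "await":

    #!/usr/bin/env python3
    # Rough sketch: average IO latency ("await") for one block device,
    # computed from two samples of /proc/diskstats.
    import time

    DEV = "sda"        # placeholder device name
    INTERVAL = 10      # seconds between samples

    def sample(dev):
        """Return (IOs completed, total ms spent servicing them)."""
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == dev:
                    reads, read_ms = int(fields[3]), int(fields[6])
                    writes, write_ms = int(fields[7]), int(fields[10])
                    return reads + writes, read_ms + write_ms
        raise SystemExit("device %s not found" % dev)

    ios1, ms1 = sample(DEV)
    time.sleep(INTERVAL)
    ios2, ms2 = sample(DEV)

    done = ios2 - ios1
    if done:
        print("%s: %.1f ms average IO latency" % (DEV, (ms2 - ms1) / done))
    else:
        print("%s: no IO completed in the interval" % DEV)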

There have been short periods where we've failed to keep it below
10ms and I'm sure that many of you can remember times when you've
found your VPS sluggish. Conversely I suspect that not many
customers can think of times when their VPSes have been the *cause*
of high IO load, yet high IO load is in general only caused by
customer VMs! So for every time you have experienced this, someone
else was causing it![5]

I think that, being in the business of providing virtual
infrastructure at commodity prices, we can't really expect too many
people to want or be able to take the time to profile their storage
use and make a call on what needs to be backed by SATA or SSD.

I think we first need to try to make it as good as possible for
everyone, always. There may be a time in the future where it's
commonplace for customers to evaluate storage in terms of IO
operations per second instead of gigabytes, but I don't think we are
there yet.

As for the "low-end customers subsidise higher-end customers"
argument, that's just how shared infrastructure works and is already
the case in many existing metrics, so what's one more? While we
continue to not have a good way to ration out IO capacity it is
difficult to add it as a line item.

So, at the moment I'm more drawn to the "both" option but with the
main focus being on caching with a view to making it better for
everyone, and hopefully overall reducing our costs. If we can sell
some dedicated SSD storage to those who have determined that they
need it then that would be a bonus.

Thoughts? Don't say, "buy a big SAN!" :-)

Cheers,
Andy

[1] You know, when we double the RAM or whatever but keep the price
    to you the same.

[2] Hot swap trays plus Linux md = online array grow. In theory.
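
    In practice that would look something like the sketch below (just an
    illustration; the array name, partitions and final disk count are
    invented, and whether md will actually reshape a given RAID level
    while online is exactly the "in theory" part):

    #!/usr/bin/env python3
    # Illustration only: the rough sequence an online md array grow implies.
    # Array name, partitions and final device count are invented examples.
    import subprocess

    ARRAY = "/dev/md3"                      # hypothetical existing array
    NEW_DISKS = ["/dev/sde1", "/dev/sdf1"]  # the hot-swapped-in drives
    NEW_COUNT = 6                           # raid-devices after growing

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # Add the new members as spares, then reshape the array to use them.
    run("mdadm", ARRAY, "--add", *NEW_DISKS)
    run("mdadm", "--grow", ARRAY, "--raid-devices=%d" % NEW_COUNT)
    # The reshape then runs in the background (watch /proc/mdstat); the
    # LVM PV or filesystem on top still needs resizing afterwards.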

[3] "Nice virtual machine you have here. Would be a real shame if
    the storage latency were to go through the roof, yanno? We got
    some uh... extras... that can help you right out of that mess.
    Pauly will drop by tomorrow with an invoice."
      -- Tony Soprano's Waste Management and Virtual Server Hosting,