Our ultimate selection criterion is cost per query, expressed as the sum of capital expense (with depreciation) and operating costs (hosting, system administration, and repairs) divided by performance. Realistically, a server will not last beyond two or three years, because of its disparity in performance when compared to newer machines. Machines older than three years are so much slower than current-generation machines that it is difficult to achieve proper load distribution and configuration in clusters containing both types. Given the relatively short amortization period, the equipment cost figures prominently in the overall cost equation.

Interesting to contrast with LJ, where the scale isn't great enough for equipment cost (or power, which is the subject of the next section of the article) to be the major factor yet.
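The criterion they describe boils down to a simple ratio: amortized capital cost plus yearly operating cost, divided by query throughput. Here's a minimal sketch of that arithmetic; the server price, amortization period, operating cost, and query rate are placeholders I made up, not figures from the article.

```python
# Sketch of the cost-per-query criterion described above.
# All numbers are illustrative assumptions, not from the article.

def cost_per_query(server_price, amortization_years,
                   yearly_opex, queries_per_second):
    """Amortized capital cost plus operating cost, divided by throughput."""
    seconds_per_year = 365 * 24 * 3600
    yearly_capex = server_price / amortization_years  # straight-line depreciation
    yearly_cost = yearly_capex + yearly_opex          # hosting, admin, repairs
    queries_per_year = queries_per_second * seconds_per_year
    return yearly_cost / queries_per_year

# Example: a $2,000 server amortized over 3 years, $500/year to operate,
# serving 100 queries/sec -> about 3.7e-7 dollars per query.
print(cost_per_query(2000, 3, 500, 100))
```

With numbers like these, shaving the amortization period from three years to two raises the per-query cost noticeably, which is why the hardware price tag matters so much at their scale.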
I was thinking this morning: Moore's law says that computing power grows exponentially (doubling every couple of years), while the world's population grows at a much slower rate. If the amount of content people produce stays roughly proportional to the number of people producing it, it would seem Google actually has the edge; their computers will get faster more quickly than the amount of data to index grows.
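A quick back-of-the-envelope check of that argument, using rough rates I'm assuming here (compute doubling every two years, population growing about 1% per year):

```python
# Rough comparison of the two growth rates over 20 years.
# Both rates are assumptions for illustration, not measured figures.

compute = 1.0  # relative computing power
data = 1.0     # relative data volume, assumed proportional to population

for year in range(20):
    compute *= 2 ** 0.5  # doubling every two years
    data *= 1.01         # ~1% annual population growth

print(f"After 20 years: compute x{compute:.0f}, data x{data:.2f}")
# -> compute grows ~1000x while data grows ~1.2x under these assumptions
```

Under those assumptions the indexing capacity per unit of data keeps improving, which is the edge I'm gesturing at above.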