I am just going to add a passing comment on his theme, perhaps at too much of a tangent to add as a comment on his blog post.
If you buy a relational database to build an application, I can guess a few things about what you intend to do. Firstly, you are going to store data and, secondly, you are going to query that data to access the information. One of the clues to the second point is that we are using SQL – the “Q” stands for “Query”. I could be glib and suggest that the “S” for “Structured” means that the data is structured, but actually it’s the language that is structured.
My point is that if we are creating an application that needs to store, access and process information then we really need to put in some design work upfront. If the data is only being archived and there is no need to routinely access it, then we don’t necessarily need to think about traditional data stores or schema design; but if we depend on fast access to data to allow the application to work, we have to be rigorous with our design – whether that is RDBMS, columnar, NoSQL or whatever other flavour of datastore we can come up with. This requires a fundamental knowledge of how our storage layer works and of the techniques we can use to boost access performance (without undue cost to data storage times).
Typically this is going to be about either doing more work at once (parallel processing) or accessing less data to achieve the goal (reducing IO), and this needs knowledge of the storage technology being used and of how data should be accessed optimally. Only this week Michelle Kolbe spoke to the Utah Oracle User Group Training Days on some DW tuning wins. Some of these wins were a matter of getting the model right so that the need to access far too much data is eradicated.
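To make the “access less data” point concrete, here is a minimal sketch using Python’s built-in sqlite3 module (purely for illustration – the table and index names are invented, and the same idea applies to any RDBMS): the same query goes from a full-table scan to a targeted index lookup once the access path matches how the data is actually queried.

```python
import sqlite3

# In-memory database with a hypothetical "orders" table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

# Before any index exists, the planner has no choice but to read every row.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42"
).fetchall()

# Design step: add an index matching the predicate we actually query on.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# Now only the matching rows are touched - far less IO for the same answer.
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42"
).fetchall()

# The last column of each plan row describes the access path:
# a full scan before the index, an index search after it.
print(plan_before)
print(plan_after)
```

The exact plan wording varies by SQLite version, but the shape of the win is the same one Michelle’s DW examples illustrate: change the design so the engine never has to look at the data you don’t need.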
Knowing how the technology behind your particular data store works pays back time and time again: it reduces the direct costs of accessing information, and some of the indirect costs of unnecessary background tasks that just happen because of sub-optimal design and configuration.