Nah. That may take multiple milliseconds. When fine-tuned, this query runs in <1ms. It would be an option if the query executed thousands of times per hour, but not millions.
> write identical queries in stored procs with different names
Changing a 3rd-party vendor application is not an option, at least not for a DBA. Well, maybe it could be an option when there are really no other options and the business starts paying the vendor tons of money. But that's a hack; better to hack the SQL directly than to bother implementing ugly hacks in a vendor app that may apply only to a particular case and a particular data distribution, with hardcoded stuff.
> Further, make sure your stats are a) present and b) up-to-date
Yep, that's valid. For some tables, there is even an agent job that does this fairly regularly.
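For anyone following along, a job like that might boil down to something like this (a sketch only; the real job, the table names, and the sampling options aren't shown in this thread):

```sql
-- Refresh statistics on a specific hot table with a full scan
-- (dbo.X is a hypothetical table name).
UPDATE STATISTICS dbo.X WITH FULLSCAN;

-- Or let SQL Server decide which stats are stale, database-wide:
EXEC sp_updatestats;
```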
> Further redux, don't assume that a covering index is all good news. I'd have to look at the query plan.
The query is very simple: SELECT SUM(COL1),SUM(COL2),...snip..., SUM(COL20) FROM X WHERE OTHERCOL1=@P1 AND OTHERCOL2=@P2 AND OTHERCOL3=@P3 AND OTHERCOL4=@P4 AND OTHERCOL5=@P5 AND OTHERCOL6=@P6 AND (OTHERCOL7=@P7 OR (OTHERCOL8=@P8 AND OTHERCOL9=@P9)) AND OTHERCOL10>@P10 AND OTHERCOL11>@P11 AND OTHERCOL12<=@P12 AND (OTHERCOL13<=@P13 OR OTHERCOL14<>@P14)
I don't know the exact purpose of the query, but I think it's part of a process that rolls up some transactional data into fewer rows (recalculates item quantities).
A covering index provides a simple seek from a single index. Having a clustered index with that many key columns would be bad for every other index, since they would all have to carry those columns and use them for lookups. But this is a business-critical table, and this is what it takes to keep SQL Server and the application ticking. Of course, there are probably solutions if you own the application and can rearchitect it, etc. But here, operations must be transactionally correct, locking must be minimized, and so on. If execution time grows to 20ms (I just checked how long it takes to generate the query plan for this query), total duration grows more than 20x and the process takes MUCH longer. For a particular day, I see ~15.2 million executions between 08:00 and 20:12. That's 3h2m total duration with an average 0.7ms execution time (excellent). Generating a query plan for every execution would be a disaster, and I'd get a call for an incident. For other queries, a 20ms execution time is not a big deal.
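For readers, the covering index being described would look roughly like this (a sketch only; the actual key order and column list aren't shown in the thread, and choosing which columns go in the key versus INCLUDE is the tuning work itself):

```sql
-- Hypothetical covering index for the rollup query above:
-- the equality predicates lead the key, a range column follows,
-- and the remaining filter and SUM columns are INCLUDEd so the
-- seek never has to do a lookup into the clustered index.
CREATE NONCLUSTERED INDEX IX_X_Rollup
ON X (OTHERCOL1, OTHERCOL2, OTHERCOL3, OTHERCOL4,
      OTHERCOL5, OTHERCOL6, OTHERCOL10)
INCLUDE (OTHERCOL7, OTHERCOL8, OTHERCOL9, OTHERCOL11, OTHERCOL12,
         OTHERCOL13, OTHERCOL14, COL1, COL2, /* ... */ COL20);
```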
I'll add that for this particular query, it is rare for a bad plan to get cached. But in those cases we get an incident from the customer, so we've had to prevent it from happening from time to time.
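On SQL Server 2016+, one way to prevent a bad cached plan for a specific query is to pin a known-good plan via Query Store (this assumes Query Store is enabled; the IDs below are placeholders you'd look up first, not values from this thread):

```sql
-- Find the query and its plans in Query Store:
SELECT qsq.query_id, qsp.plan_id, qsqt.query_sql_text
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp
  ON qsp.query_id = qsq.query_id
JOIN sys.query_store_query_text AS qsqt
  ON qsqt.query_text_id = qsq.query_text_id
WHERE qsqt.query_sql_text LIKE '%SUM(COL1)%';

-- Force the known-good plan (placeholder IDs from the query above):
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```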
I don't want to argue, because we have different experiences, situations, applications, possibilities, designs and data. What you write are valid points, applicable to most cases. But I keep replying because I've learned a ton from HN, and I think our discussion may help someone learn a thing or two about SQL Server. And I like finally talking to someone who experiences this stuff.
Gotcha. It's a very different scenario from what I imagined, and my suggestions don't apply. Agreed also that your control over rewriting is limited, and with the comment on clustered indexes.
I do wonder how it is possible to run that query on a billion rows and typically get a sub-millisecond response, but however you did it, well done!