
Friday, October 28, 2011

Easy performance burst for large data

This week I started importing data from a new source. That means I take around 5-6 times as much data into the system as before, going from around 10,000,000 (10M) records a day to almost 60M. Lucky me, this is still in development, so it's possible to use all the tips and tricks I learned at SQL PASS this year.

I found a couple of things which, quite cheaply, gave me a boost in performance. Data types! I have always just used int, datetime and float regardless of the actual size of the data. Never gave it much thought - we just bought bigger disks.

But at PASS I attended several sessions and discovered two main issues with that approach: memory spills (I think that was the buzzword) and pages. Never mind, I will explain.

Most operations on SQL Server start out in memory, but when the data gets too large to keep there, it goes down to the hard drive - to tempdb or the OS paging file. This makes a huge difference in speed, as hard drive performance is drastically reduced compared to RAM. The lesson learned was to keep data as small as possible, which postpones or reduces the times data has to go to disk to be sorted, joined or whatever.
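If you want to see which running queries are asking for big memory grants, one place to look is the sys.dm_exec_query_memory_grants DMV - just a quick sketch, pick whatever columns you need:

-- Queries still waiting for their memory grant have grant_time = NULL
select session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       grant_time
from sys.dm_exec_query_memory_grants
order by requested_memory_kb desc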

I also realized another thing which I actually knew but didn't put into this context (shame on me). Table data is stored on disk in 8KB pages, and the server always reads a full page at a time. So if you have a record below 8KB, the server only needs to read that one page to access the record. But if the record is 8.1KB (well, the max allowed for a row is 8060 bytes, but stay with me), then it needs to read two pages. And as IO to disk is the slowest thing we have on the server, the more pages it needs to read/write every time, the slower it is. It's very easy to see how much IO you actually use by turning on statistics io. Let me give you a simple example.

Create a table with a char(8000) column, just to show the point:


create table test(c1 char(8000))

Then turn on statistics: set statistics io on
Run the following: insert test values('horse')
And then: select * from test
In the message window you should see:

Table 'test'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

The thing to look for in this case is logical reads 1, which means that the server had to read one page.
Insert another row: insert test values('cow')
And: select * from test
This time you see that logical reads says 2! It had to read two pages.

Drop the table (drop table test) and then let's reduce the record size by half.

create table test(c1 char(4000))

Run the two inserts again and do a select between each of them. You should now see that logical reads stays at 1 even after two inserts. If you insert one more record, it will go to two reads.

Well, it is not often we can reduce a record by that much. But if you have an int column which is only used for storing 1 to 10, then a tinyint would do just fine and give you 3 bytes back per row. In one of my tables I found that by using tinyint, smallint and smalldatetime instead of int and datetime, I could reduce the record by 54 bytes. In round numbers that means I save a page for every 150 records. And in a table with almost 1,000,000,000 records that is quite a lot :) Almost 50GB less data.
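Just to show the kind of change I mean (the table and column names here are made up for the example, not my real schema), it is simply a matter of picking the smallest type that can hold the values:

-- Original design: everything int/datetime "just in case"
create table dbo.MeasurementsOld
(
    MeasurementTime datetime not null,   -- 8 bytes
    StatusCode      int      not null,   -- 4 bytes, but only ever holds 1 to 10
    SensorId        int      not null    -- 4 bytes, but only a few thousand sensors exist
)

-- Same data, smaller types
create table dbo.MeasurementsNew
(
    MeasurementTime smalldatetime not null,  -- 4 bytes, minute precision is enough here
    StatusCode      tinyint       not null,  -- 1 byte, holds 0 to 255
    SensorId        smallint      not null   -- 2 bytes, holds up to 32,767
)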

Of course I never read all those records at once. But I always read way more than 150 records at a time. And this is done a lot. So that's a lot of IO reduction - for free.

I found another table which had 9 ValidSomething columns, which were 0 if that condition was not valid and 1 if it was. They were all int. That is 9x4 bytes (36), which can be reduced to 2 bytes. If a column only holds 0 and 1 (what a developer would call a boolean), then it can be stored as a bit column. If you only have one bit column in a record, then tinyint and bit cost the same, namely 1 byte. But the server stores up to 8 bit columns in one byte, so with 9 bit columns only 2 bytes are needed. And the smart thing is that adding 7 extra bit columns later will not cost me anything - I like that.
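A quick sketch of that change (the column names are invented for the example):

-- Before: 9 flags stored as int = 36 bytes per row
create table dbo.ImportStatusOld
(
    RecordId      int not null,
    ValidAddress  int not null,
    ValidPayment  int not null,
    ValidStock    int not null
    -- ...and 6 more ValidSomething int columns
)

-- After: the same 9 flags as bit columns take only 2 bytes per row,
-- because SQL Server packs up to 8 bit columns into each byte
create table dbo.ImportStatusNew
(
    RecordId      int not null,
    ValidAddress  bit not null,
    ValidPayment  bit not null,
    ValidStock    bit not null
    -- ...and 6 more ValidSomething bit columns
)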

So when working with large data, every byte counts. I am currently redesigning my data collection flow. I tried to load the new daily data burst the old way, all at once: it took 13 hours. By reducing the data size, and then doing some T-SQL redesign as well, I had it down to around half an hour. Not bad for a couple of days' work.

Maybe I should optimize myself - at that rate I would only need to go to work half of Monday and my 40+ hour week would be over :) But where is the fun in that, right? :D

I will return with a few query "not to do" tips I discovered when re-writing some of the code.
Happy optimizing,

Thursday, October 20, 2011

Do it yourself PerfMon

At SQL PASS I attended Adam Machanic's one-day pre-con class.


I really liked the well-structured way of storing the recorded samples. At the risk of being #sqlsue, I will try to lay out his idea for the database structure.

He has two timing tables. Dates holds one record for each date from today until 2020, with a unique DateID (int) and all the components of the date as separate columns: month, day, year, quarter, week, day number in week and year, and so on. The same with Time: that table holds a record for every 15 seconds of a day. And the smart thing is that all times are marked with a segment of the day, like morning, busy period and so on, and a 15-minute segment as well.
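Here is my own rough guess at what such tables could look like - the column names are mine, not Adam's actual schema:

create table dbo.Dates
(
    DateID     int      not null primary key,
    FullDate   date     not null,
    [Year]     smallint not null,
    [Quarter]  tinyint  not null,
    [Month]    tinyint  not null,
    [Day]      tinyint  not null,
    WeekOfYear tinyint  not null,
    DayOfWeek  tinyint  not null,
    DayOfYear  smallint not null
)

create table dbo.Times
(
    TimeID         int         not null primary key,  -- one row per 15 seconds = 5,760 rows
    TimeOfDay      time(0)     not null,
    FifteenMinSlot smallint    not null,               -- 0 to 95
    DaySegment     varchar(20) not null                -- 'Morning', 'Busy period', ...
)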

All recorded data is then stored with a DateID and a TimeID. This means you can easily see how all parameters have performed at exactly this time of day over the last month, to check whether today or any other day behaves abnormally. Or you can see how the last day of the month behaves compared to all the other last days of the month. And so on.
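As a sketch of that kind of question (the CounterData table and its columns are invented here for illustration), "this counter at this time of day over the last month" becomes a plain join:

-- dbo.CounterData is an invented fact table with DateID, TimeID, CounterID, Value
select d.FullDate,
       avg(c.Value) as AvgValue,
       max(c.Value) as MaxValue
from dbo.CounterData c
join dbo.Dates d on d.DateID = c.DateID
join dbo.Times t on t.TimeID = c.TimeID
where c.CounterID = 42                             -- the counter you care about
  and t.FifteenMinSlot = 34                        -- e.g. the 08:30-08:45 slot
  and d.FullDate >= dateadd(month, -1, getdate())
group by d.FullDate
order by d.FullDate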

I have done a lot of PerfMon recording, but that time layout never struck me. I always had to deal with slow extraction using datetime and BETWEEN. So I really liked that.

He also had a beta version of his data collector, which can monitor and store data in that format. It works very well, but wasn't that good with a lot of instances on the same server - which is what I have. You could set it up to use instances, but that meant adding a counter for each server instance instead of just adding the instances and having the collector pick them up. Adam will add this in an upcoming version.

But I thought it would be fun to make the data collector myself. That is actually my main work: programming data collectors for everything from radars to stock exchanges. So this should be fun.

Actually, it's quite easy to get PerfMon data from the server using C#.

// Create the counter
var Counter = new PerformanceCounter(CategoryName, CounterName, InstanceName, MachineName);
// "Prime the pump" by getting the first value
Counter.NextValue();

PerformanceCounter is part of System.Diagnostics, and you just give it:
  • Category like "PhysicalDisk"
  • Counter like "Avg. Disk sec/Read"
  • Instance like "1 C:"
  • Machine like "YourServer"
Note that Instance has nothing to do with a SQL Server instance.


You can just use PerfMon to see the categories, counters and instances.


Then every time you call Counter.NextValue() you get the current value.

Create as many counters as you like. I have 192 counters in total for all my instances (4), and it takes about 0.2 seconds to sample them. All in all, the data is sent to the database in around 0.4 seconds every 15 seconds - the counters plus a few wait_stat, file_stat and query_stat queries.
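For the wait_stat part, the kind of query I mean is a plain snapshot of sys.dm_os_wait_stats - a sketch, not my exact production query. Sample it on the same 15-second schedule and diff consecutive snapshots to get the waits per interval:

-- Accumulated waits since the last service restart
select wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
from sys.dm_os_wait_stats
where wait_time_ms > 0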


If you want to get all the instances for the counters in a given category, use:


// List the instance names available for a category
PerformanceCounterCategory cat = new PerformanceCounterCategory(CategoryName, MachineName);
string[] instances = cat.GetInstanceNames();


Note that all counters in the given category always have the same instances.

If you don't want to go into more complex thread programming, you could just use a Timer with a callback that fires every 15 seconds, and you are on your way.

No matter how you do it, it's a really good idea to have a baseline sample, because when things go bad you can actually see where they went wrong just by comparing to your baseline. PerfMon doesn't tell you how things usually are, only how they are now. And you need at least a month of good sampling data to establish a good baseline. So better get started.

But the real genius is in storing the records with that well-segmented date and time breakdown. That really gives you a heads-up on exactly how everything looks at this time of day.

So thanks to Adam for sharing his database layout and his hints on baselining.

Saturday, October 15, 2011

CTE Christmas tree

I just went to Audrey's T-SQL Awesomeness session at SQL PASS for some enthusiastic pointers on writing awesome and cool code :) I really loved seeing how excited she was about writing readable, functional, but smart-looking code.

She talked mostly about CTEs (Common Table Expressions) and showed how to make recursive queries. She had a simple example which produced 25 records, each with one more '+', using a recursive CTE. She said she wanted to make a Christmas tree but couldn't figure out how. So I thought I would give it a go :)




with ChristmasTree (RowNumber, TreeData) as
(
/* Select the top - this is the anchor */
select 0 as RowNumber, cast(replicate(' ', 25)+'*'+replicate(' ', 25) as varchar(51)) as TreeData
union all
/* Select the rest from itself until we have 25 rows (plus the top).
   Each time it picks up one more star on each side of the tree.
   This is the recursive member */
select RowNumber+1 as RowNumber, cast(replicate(' ', 24-RowNumber)+replicate('*', 2*RowNumber+3)+replicate(' ', 24-RowNumber) as varchar(51)) as TreeData
from ChristmasTree
where RowNumber < 25
)
/* Statement to fire it all up */
select TreeData from ChristmasTree



The last statement is the one that fires it all up. It selects from ChristmasTree (the CTE).

The first statement inside the CTE is the anchor. It is needed to lay the foundation for the recursion.

The statement below the UNION ALL is the recursive member. What it really does is send RowNumber and TreeData into the next recursive call, so each call gets a RowNumber one higher than the last. This keeps happening until RowNumber hits 25.

If you remove the WHERE part, it will not keep going forever: by default SQL Server stops after 100 recursion levels and throws an error. You can raise that limit with the MAXRECURSION query hint - up to 32,767 levels, or 0 for no limit at all.
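For example, a simple counting CTE that needs 200 levels will fail with the default limit unless you add the hint to the outer statement:

with Numbers (n) as
(
    select 1
    union all
    select n + 1 from Numbers where n < 200
)
select n
from Numbers
option (maxrecursion 200)   /* 0 = no limit, the max explicit value is 32767 */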

But this is a truly powerful tool, for instance for getting all the children of a parent through multiple levels.
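As a small sketch of that (the Employees table and its columns are made up for the example), walking a parent/child hierarchy looks like this:

/* Hypothetical table: EmployeeID, Name, ManagerID (null for the top boss) */
with OrgTree (EmployeeID, Name, TreeLevel) as
(
    /* Anchor: start at the top of the hierarchy */
    select EmployeeID, Name, 0 as TreeLevel
    from dbo.Employees
    where ManagerID is null
    union all
    /* Recursive member: pick up everyone reporting to the previous level */
    select e.EmployeeID, e.Name, t.TreeLevel + 1
    from dbo.Employees e
    join OrgTree t on e.ManagerID = t.EmployeeID
)
select EmployeeID, Name, TreeLevel
from OrgTree
order by TreeLevel, Name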

Like Audrey would say. That is awesome :)  

Sunday, October 9, 2011

Ready for PASS, but who to meet?

We arrived in Seattle yesterday after traveling for 19 hours straight. After checking into The Paramount Hotel we went to Fox Sports Grill. They had everything a guy is looking for: football on TV, steaks on white toasted bread and a beer :) Great evening.

Normally I feel very uncomfortable meeting new people, so I try to avoid it. But reading up on all the great tips for the SQL PASS Summit, what I see over and over again are the words networking and meeting new people. So I decided that to get the most out of this, I will try to meet some new people each day.

And to ensure that I at least meet some new people, I will put it in writing that I will at least try to meet the following people:

Brent Ozar
This guy really kicked off my interest in diving deeper into SQL Server. I attended one of his online webcasts on how to take over a SQL Server, and I was so excited after hearing his enthusiasm about SQL Server. I think they were three guys, and they really had a lot of fun. I was sold, and he might be the reason why (besides my company, who sent me) I'm here today. He kick-started things for us. So this guy I want to meet.

Jes Borland
I joined Twitter to figure out more about this Summit and to try to get to know some people beforehand. And this girl is hard to miss on Twitter :) She is a sheer energy bomb. She has already, even though she is quite young, done so much. Actually, I think she is running her first half marathon today. I can relate to that: I remember every inch of my first half marathon a couple of years ago. This girl I just have to meet.

Pinal Dave
I have used a script for many years to find unused indexes. That blog post has been the first bookmark in my browser's bookmark bar, so it gets used quite a lot. I found out that this post - well, the entire blog - is written by Dave, and he is coming to the Summit as well. So it would be kind of fun to get a chance to say thanks and let him know what an important part this one post has played in my DBA life.

Jason Strate
Well, have you seen his pre-con videos? :) I'm still laughing just thinking about them :D He and his wife are very dedicated to the SQL Server community. I like to meet people like that, who burn with such passion for their "beliefs" :) And then he is from Minnesota. I lived there for a year (go Vikings :) - I even brought my Vikings Brett Favre jersey). I have looked back on that time in MN with a lot of joy for the last 26 years. So he and his wife are people I'd like to meet.

And many more...
There are lots of other people I would like to meet, but let's start here. And if you see me at the Summit and are not afraid of new people like I am - please come and say hi to me. You can find me on Twitter as well.

Now it's time for a hot bath and some breakfast before we go out to see Seattle. I need to get a converter for my electronic equipment, and then I would like to see if it's possible to buy a small mobile data package just for this week. The iPhone is best with data, but it costs the tip of a fighter jet (so my boss says) on our Danish mobile provider when we are over here.