Measuring Growth of a BBCRM Database

By: Greg Howe

Posted on: July 5, 2012

This blog post provides guidance for measuring the growth of a BBCRM database: specifically, how to identify the fastest-growing tables, how to track the total number of tables in the database, and how to analyze that data so you can develop a plan to curb the growth.

Identifying Fast-Growing Tables:
The SQL script at the bottom of this article is written to identify database tables that are greater than 100MB in size (you can adjust the minimum size by changing the WHERE clause). The script’s output shows the table name, number of table rows, reserved space, actual used space, index size, and unused space.

To identify your fastest-growing tables, you need to run this script daily over the course of 10-14 days and save the results in an Excel spreadsheet (put each day’s data below the previous day’s in the same sheet). I highly recommend running the script at the same time each day so that the results are comparable.
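
If you prefer to keep the daily snapshots in the database rather than in Excel, one option is to save the script’s output into a permanent history table. The sketch below is only an illustration: the table name dbo.TABLE_SIZE_HISTORY is hypothetical, and its columns simply mirror the parts of the script’s output you would typically track. You would then adapt the script’s final SELECT into an INSERT ... SELECT that populates it.

-- Hypothetical history table for the daily snapshots; the name and
-- column list are illustrative and mirror the script's output columns.
CREATE TABLE dbo.TABLE_SIZE_HISTORY (
    snapshot_date   DATETIME      NOT NULL,
    table_name      NVARCHAR(256) NOT NULL,
    table_rows      INT,
    rsvd_space_kb   INT,
    data_space_kb   INT,
    index_space_kb  INT,
    unused_space_kb INT
)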

After you have 10-14 result sets, you should be in a good position to analyze your data. The results of the SQL script contain the date you took the measurements and the table names. Use these values and Excel’s built-in features to build a pivot table showing the growth of each table over time. Once created, you can use the pivot table to see which tables are growing the fastest. Here’s an example of what your pivot table might look like:
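
If you would rather do this analysis in SQL than in an Excel pivot table, a query along the following lines can rank tables by growth between the first and last snapshot. It is a sketch that assumes the daily results were saved into the hypothetical dbo.TABLE_SIZE_HISTORY table shown earlier.

-- Rank tables by growth in data space between the first and last snapshot
-- (assumes snapshots were stored in the hypothetical dbo.TABLE_SIZE_HISTORY).
;WITH ranked AS (
    SELECT table_name, snapshot_date, data_space_kb,
           ROW_NUMBER() OVER (PARTITION BY table_name ORDER BY snapshot_date ASC)  AS rn_first,
           ROW_NUMBER() OVER (PARTITION BY table_name ORDER BY snapshot_date DESC) AS rn_last
    FROM dbo.TABLE_SIZE_HISTORY
)
SELECT f.table_name,
       f.data_space_kb AS first_data_space_kb,
       l.data_space_kb AS last_data_space_kb,
       l.data_space_kb - f.data_space_kb AS growth_kb
FROM ranked f
JOIN ranked l ON l.table_name = f.table_name AND l.rn_last = 1
WHERE f.rn_first = 1
ORDER BY growth_kb DESC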

We used this approach for a client who was experiencing extraordinary growth in their BBCRM database. Among other fast-growing tables, we noticed that the BATCHAUDIT table was approximately 45GB in size and was the fastest-growing table in the database. We then started analyzing why this table was growing so fast.
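
If you want to spot-check the current size of a single suspect table between daily runs, you can call sp_spaceused directly; BATCHAUDIT is shown here only because it was the problem table in this case.

-- Quick, one-off size check for a single table.
EXEC sp_spaceused 'dbo.BATCHAUDIT'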

We discovered that the client had written a custom batch extension for Revenue, and it was updating BATCH.PROJECTEDNUMBEROFRECORDS and BATCH.PROJECTEDTOTALAMOUNT for every record being imported into the system. The incoming files had thousands of records in them and were run multiple times each week. These updates to the BATCH table were triggering the creation of two audit records (before and after) in the BATCHAUDIT table for every record being imported to the batch. Once we discovered this code, we took corrective action to curb the growth of the BATCHAUDIT table, and we performed similar analysis on the other fast-growing tables.

Important note:
You may have monthly business processes that are contributing to the growth of individual tables. You may find it beneficial to capture the growth of tables over the course of 2-3 months to identify table growth that results from end-of-month processes.

 

Capturing Total Table Count:
In addition to analyzing the growth of individual tables, you should also capture the growth in the total number of tables in the database. The Blackbaud Infinity platform creates tables dynamically. If these dynamic tables are not monitored and deleted when they are no longer needed, the database can grow significantly.
The following SQL statement should be run for at least 10-14 days to capture a good set of data; save the results in an Excel sheet for analysis.

select GETDATE() as SNAPSHOT_DATE, COUNT(*) as TABLE_COUNT from sysobjects where type = 'U'

Once you’ve collected several days of data in your Excel sheet, you can add columns to calculate the number of new tables being added each day and the percentage growth across days (see Figure 1 below). If you capture the data over a long enough period of time, patterns might emerge that show you when the largest growth in table count is occurring. You can then review the processes scheduled to run on that day of the week and take corrective action.
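
If the daily counts are kept in the database instead of Excel, the same calculation can be done in SQL. The sketch below assumes the output of the count query above was inserted each day into a hypothetical dbo.TABLE_COUNT_HISTORY table with SNAPSHOT_DATE and TABLE_COUNT columns.

-- Day-over-day change in total table count, assuming daily counts were
-- saved to a hypothetical dbo.TABLE_COUNT_HISTORY(SNAPSHOT_DATE, TABLE_COUNT).
;WITH numbered AS (
    SELECT SNAPSHOT_DATE, TABLE_COUNT,
           ROW_NUMBER() OVER (ORDER BY SNAPSHOT_DATE) AS rn
    FROM dbo.TABLE_COUNT_HISTORY
)
SELECT cur.SNAPSHOT_DATE,
       cur.TABLE_COUNT,
       cur.TABLE_COUNT - prev.TABLE_COUNT AS new_tables,
       CAST(100.0 * (cur.TABLE_COUNT - prev.TABLE_COUNT) / prev.TABLE_COUNT AS DECIMAL(6,2)) AS pct_growth
FROM numbered cur
LEFT JOIN numbered prev ON prev.rn = cur.rn - 1
ORDER BY cur.SNAPSHOT_DATE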

In the case of Figure 1 below, we noticed an unusually large growth in table count on 3/18/2012 and this repeated on the same day of the week across weeks of data capture. We discovered that most of those tables were being dynamically created by the Export process (the table names begin with “EXPORT_” followed by a GUID). Based on this information we created an instance of the out-of-the-box Global Change named “Business Process Output Delete”, and configured it to delete output tables that were more than 4 weeks old (see Figure 2).
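
To see how many of the dynamic tables came from a given process and when they were created, you can filter sysobjects by the naming pattern. The EXPORT_ prefix below matches what we observed for the Export process; adjust the pattern for other processes.

-- List dynamically created Export output tables with their creation dates.
SELECT name, crdate
FROM sysobjects
WHERE type = 'U'
  AND name LIKE 'EXPORT[_]%'
ORDER BY crdate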

Figure 1:


Figure 2:

 

 

The following SQL Script will return size statistics of database tables larger than 100MB:
SET NOCOUNT ON

--Declare needed variables.
DECLARE
    @max INT,
    @min INT,
    @owner NVARCHAR(256),
    @table_name NVARCHAR(256),
    @sql NVARCHAR(4000)

DECLARE @table TABLE(
    ident INT IDENTITY(1,1) PRIMARY KEY,
    owner_name NVARCHAR(256),
    table_name NVARCHAR(256))

IF (SELECT OBJECT_ID('tempdb..#results')) IS NOT NULL
BEGIN
    DROP TABLE #results
END

CREATE TABLE #results(
    ident INT IDENTITY(1,1) PRIMARY KEY, --Will be used to update the owner.
    table_name NVARCHAR(256),
    owner_name NVARCHAR(256),
    table_rows INT,
    reserved_space NVARCHAR(55),
    data_space NVARCHAR(55),
    index_space NVARCHAR(55),
    unused_space NVARCHAR(55))

--Loop through statistics for each table.
INSERT @table(owner_name, table_name)
SELECT su.name, so.name
FROM sysobjects so
INNER JOIN sysusers su ON so.uid = su.uid
WHERE so.xtype = 'U'

SELECT @min = 1, @max = (SELECT MAX(ident) FROM @table)

WHILE @min <= @max
BEGIN
    SELECT @owner = owner_name, @table_name = table_name
    FROM @table
    WHERE ident = @min

    SELECT @sql = 'EXEC sp_spaceused ''[' + @owner + '].[' + @table_name + ']'''

    INSERT #results(table_name, table_rows, reserved_space, data_space, index_space, unused_space)
    EXEC (@sql)

    UPDATE #results
    SET owner_name = @owner
    WHERE ident = (SELECT MAX(ident) FROM #results)

    SELECT @min = @min + 1
END

SELECT ident, table_name, owner_name, table_rows,
    CAST(SUBSTRING(REPLACE(#results.reserved_space, ' ', ''), 1, (LEN(REPLACE(#results.reserved_space, ' ', '')) - 2)) AS INT) AS rsvd_space_kb,
    CAST(SUBSTRING(REPLACE(#results.data_space, ' ', ''), 1, (LEN(REPLACE(#results.data_space, ' ', '')) - 2)) AS INT) AS data_space_kb,
    CAST(SUBSTRING(REPLACE(#results.index_space, ' ', ''), 1, (LEN(REPLACE(#results.index_space, ' ', '')) - 2)) AS INT) AS index_space_kb,
    CAST(SUBSTRING(REPLACE(#results.unused_space, ' ', ''), 1, (LEN(REPLACE(#results.unused_space, ' ', '')) - 2)) AS INT) AS unused_space_kb,
    CAST(CONVERT(NVARCHAR(10), GETDATE() - 1, 112) AS DATETIME) AS 'Date',
    s.crdate AS create_date
FROM #results
JOIN sysobjects s ON s.name COLLATE DATABASE_DEFAULT = #results.table_name COLLATE DATABASE_DEFAULT
WHERE CAST(SUBSTRING(REPLACE(#results.data_space, ' ', ''), 1, (LEN(REPLACE(#results.data_space, ' ', '')) - 2)) AS INT) > 102400
ORDER BY table_name

 
