On the first day of Christmas dbatools gave to me: Find-DbaCommand
On the second day of Christmas dbatools gave to me: Get-DbaCmsRegServer
On the third day of Christmas dbatools gave to me: DbaMaxMemory
On the fourth day of Christmas dbatools gave to me: DbaAgentJob
On the fifth day of Christmas dbatools gave to me: DbaLogin
On the sixth day of Christmas dbatools gave to me: DbaTempDbConfig
On the seventh day of Christmas dbatools gave to me: DbaAgentAlert
On the eighth day of Christmas dbatools gave to me: DbaDbCompression
Today we will focus on a database command that will help us save space in our data files. Table compression is a great option as your tables grow and start requiring more space. There are PAGE and ROW compression options available, and both have their benefits. There are commands to Get, Set, and Test database compression, and they provide great information.
This function gets the current size and compression for all objects in the specified database(s); if no database is specified, it returns all objects in all user databases.
This function sets the appropriate compression for each object, determined either by the Tiger Team's query or by the value passed to the CompressionType parameter.
Remember that uptime is critical for the Tiger Team query: the longer the uptime, the more accurate the analysis. It is best to run Get-DbaUptime first, before running this command.
The Set-DbaDbCompression script is derived from the tigertoolbox on GitHub.
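As a minimal sketch of the steps above (the instance and database names are placeholders; adjust them for your environment):

```powershell
# Check how long the instance has been up first; a short uptime makes
# the workload-based recommendation less reliable.
Get-DbaUptime -SqlInstance localhost\sql2017

# Apply the recommended compression (PAGE, ROW, or none) per object,
# as determined by the Tiger Team analysis.
Set-DbaDbCompression -SqlInstance localhost\sql2017 -Database AdventureWorks2014

# Or skip the analysis and force a specific compression type.
Set-DbaDbCompression -SqlInstance localhost\sql2017 -Database AdventureWorks2014 -CompressionType Page
```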
This function returns the results of a full table/index compression analysis and the estimated best option to date: no compression, PAGE, or ROW. Remember that uptime is critical here too; the longer the uptime, the more accurate the analysis, so it is best to run Get-DbaUptime first, before running this command.
The Test-DbaDbCompression script is derived from the tigertoolbox on GitHub.
In the output, you will find the following information:
– Column Percent_Update shows the percentage of update operations on a specific table, index, or partition, relative to total operations on that object. The lower the percentage of Updates (that is, the table, index, or partition is infrequently updated), the better candidate it is for page compression.
– Column Percent_Scan shows the percentage of scan operations on a table, index, or partition, relative to total operations on that object. The higher the value of Scan (that is, the table, index, or partition is mostly scanned), the better candidate it is for page compression.
– Column Compression_Type_Recommendation can have four possible outputs indicating where there is most gain, if any: ‘PAGE’, ‘ROW’, ‘NO_GAIN’, or ‘?’. When the output is ‘?’, this approach could not give a recommendation; as a rule of thumb, I would lean toward ROW if the object suffers mainly UPDATEs, or PAGE if mainly INSERTs, but this is where knowing your workload is essential. When the output is ‘NO_GAIN’, it means that, according to sp_estimate_data_compression_savings, no space will be saved by compressing, as in the output example above, where compressing would actually grow the affected object.
This script executes in the context of the current database. Also be aware that it may take a while to run against large objects: if the IS locks taken by sp_estimate_data_compression_savings cannot be honored, the stored procedure will be blocked.
It only considers ROW or PAGE compression (not columnstore compression).
It only evaluates user tables.
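To act only on the clear wins, you could filter the Test output on its recommendation column before reviewing. This is a sketch; the property names below are assumptions based on the column descriptions above and may differ slightly in your dbatools version:

```powershell
# Keep only objects where PAGE or ROW compression is recommended,
# then show the workload stats that drove the recommendation.
Test-DbaDbCompression -SqlInstance localhost\sql2017 -Database AdventureWorks2014 |
    Where-Object CompressionTypeRecommendation -In 'PAGE', 'ROW' |
    Select-Object TableName, IndexName, CompressionTypeRecommendation, PercentScan, PercentUpdate |
    Out-GridView
```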
Get-DbaDbCompression -SqlInstance localhost\sql2017 -Database AdventureWorks2014 | Out-GridView
You can see we do not have any data compression enabled. Now let's take a look at what the Test tells us to do.
Test-DbaDbCompression -SqlInstance localhost\sql2017 -Database AdventureWorks2014 | Out-GridView
Look at those savings on space. Imagine what you could find in a large database with millions or billions of rows.
Advanced Command Usage:
This command can get quite advanced, although I do not yet have a fully worked advanced example. You could pick and choose your tables and compression types across multiple servers. I can see how advanced it could get, and I may follow up if I implement anything in the future or come across a blog post on the topic.
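As a hedged sketch of what "advanced" might look like, here is one way to test several instances at once and then target a single table. The server names and the table are placeholders, and the -Table parameter usage is my assumption; verify against the Set-DbaDbCompression help before running:

```powershell
# Run the compression analysis against multiple instances in one pass.
$servers = 'sql01', 'sql02', 'sql03'
Test-DbaDbCompression -SqlInstance $servers -Database AdventureWorks2014 | Out-GridView

# Then apply PAGE compression to one specific table on one instance.
Set-DbaDbCompression -SqlInstance sql01 -Database AdventureWorks2014 -Table Person.Person -CompressionType Page
```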