
SQL Server Post Build Configurations


Right from when the environment is built, it is important to set up a few parameters so that we do not face issues later, once the project team starts to use the environment. This documentation specifically targets server builds where SQL Server is installed, and it covers the best practices to be followed right after the server build and the SQL Server installation are complete. We have faced many issues in the past where, because of these small settings or configurations, we could not bring the system back to its original state after an outage.

  • Backup availability: For any database and server, it is important to have backups in place. Separate application and server backups are required to bring any service back to a point in time when recovering from an outage.
  • Resource settings: If the resources allocated to the system are not configured properly right after the server build and the software installation, it can cause issues later when the system is being used by different users.
  • Health check availability: For all client environments, or environments where data is crucial, it is important to have health checks in place, as they help notify the administrator of any issues or abnormalities on the server.
  • Presence of maintenance tasks: Basic maintenance jobs such as integrity checks, error log recycling, backups and index optimization should be scheduled right after the build so that the databases stay healthy once the system is in use.


Assumptions
Listed below are the key factors that should be taken into consideration while setting up these configurations:
  • The administrator should have proper knowledge of the backup and recovery process, whether with tools like TSM (Tivoli Storage Manager) or with plain disk backups.
  • Health-check scripts should not add overhead to the performance of the server.
  • Different maintenance jobs should not clash with each other, and resource-intensive jobs should run only outside business hours.
  • The resource utilization of each application should be known at an early stage so that the resources it will need in the future can be allocated up front.

Technical overview

While setting up these configurations after the server build and SQL Server installation, the factors below should be kept in mind so that they do not cause issues in the future:

1. Backup Configuration:

Backups should be configured for the application and for the databases separately. The backup method can be decided either by the project team or by the administrator of the machine. We can have the following types of backups:

  • Disk backups
  • TSM backups
  • VEEAM backups
There are multiple tools available to back up the data; for now we are using the above-mentioned methods. We can implement any one method, or a combination of them, based on how critical the data is.
It is always recommended to back up the disks themselves if you have kept only disk backups for the databases.
For TSM configuration, knowledge of the backup-archive client (baclient), opt files and configuration files is important.
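As a rough sketch only, a baclient options file (dsm.opt) typically carries entries like the ones below; the node name and server address here are placeholders and will differ in every environment:

    * dsm.opt - sample TSM backup-archive client options (placeholder values)
    NODENAME          SQLSERVER01
    TCPSERVERADDRESS  tsm.example.com
    TCPPORT           1500
    COMMMETHOD        TCPIP
    PASSWORDACCESS    GENERATE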
For database backup configuration, weekly full backups and daily differential backups should be in place. However, if the client cannot tolerate even a few minutes of data loss, then transaction log backups should also be configured at an interval that matches the acceptable amount of data loss.
For server backup configurations, weekly backups should be in place.
When the server is built, and the applications are installed, these backups should be configured by the administrators.
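As a minimal T-SQL sketch of that schedule for a sample database (the database name UserDB and the backup paths are placeholders; in practice each statement is wrapped in a scheduled SQL Agent job or handled by the backup tool):

    -- Weekly full backup (for example, every Sunday)
    BACKUP DATABASE UserDB
    TO DISK = N'D:\Backups\UserDB_FULL.bak'
    WITH COMPRESSION, CHECKSUM, INIT;

    -- Daily differential backup (changes since the last full backup)
    BACKUP DATABASE UserDB
    TO DISK = N'D:\Backups\UserDB_DIFF.bak'
    WITH DIFFERENTIAL, COMPRESSION, CHECKSUM, INIT;

    -- Transaction log backup (for example, every 15 minutes, when minimal data loss is required)
    BACKUP LOG UserDB
    TO DISK = N'D:\Backups\UserDB_LOG.trn'
    WITH COMPRESSION, CHECKSUM, INIT;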
For TSM backups, you can use DP for SQL / IBM Spectrum Protect to take, schedule and restore backups. Please find the screen captures of DP for SQL / IBM Spectrum Protect below:

Once you are on the home page, you can select the different SQL instances and check for the available backups:


We can also see the available backups of a database under a specific SQL instance in the Recover tab:


2. Memory Capping:

SQL Server is an application that uses as much memory as is provided to it. Setting the memory too high can cause other applications to compete with SQL Server for OS memory, and setting it too low can cause serious problems such as memory pressure and performance issues. It is important to leave at least 4 GB of memory for the OS, even if SQL Server is the only application running on the server.
If memory capping is not applied, SQL Server will tend to grab as much memory as is available and can starve other processes of memory. This is one of the main reasons why end users face performance issues and slowness in their applications.
As per the Microsoft documentation, the max server memory to be allocated to SQL Server should be calculated as follows:
    • From the total physical memory, reserve 1 GB-4 GB for the OS itself.
    • Then subtract the potential SQL Server memory allocations that fall outside the max server memory control, which are made up of the thread stack size multiplied by the calculated max worker threads, plus the memory reserved by the -g startup parameter (or 256 MB by default if -g is not set). What remains should be the max server memory setting for a single-instance setup.
But if you do not want to go through the exact calculation, then allocating 4 to 6 GB to the OS and the rest to SQL Server should work.
The memory setting on a SQL Server instance can be changed in the instance properties, as shown in the screen capture below. The value of max server memory will depend on the total RAM present on your system.
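The same setting can also be applied with sp_configure; the sketch below assumes a dedicated server with 64 GB of RAM where roughly 6 GB is left for the OS, so the numbers are examples only:

    -- Example only: 64 GB server, ~6 GB reserved for the OS
    -- 64 GB - 6 GB = 58 GB = 59392 MB for SQL Server
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    EXEC sp_configure 'max server memory (MB)', 59392;
    RECONFIGURE;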



3. Placing health check script:

A health check script is a mandatory step for the administrators. There should be a health check script in place that runs once every 24 hours and checks the important factors that can cause issues in the system.

These factors include:

• Disk Space
• Backup success/failure
• Job run success/failure
• Service down
• Log Space issues
These health check scripts can be created in PowerShell, scheduled to run once in the morning using Task Scheduler, and made to generate reports that can be reviewed manually. We do not have to log in to each server to check the result; the script can run on one server and fetch the results for all the other servers in the same domain.
We can also configure the scripts to send only the failure statuses and issues to individual mail IDs or DLs over email, so that the administrator can check the issue without going through the entire report to find the error. However, for the mail functionality to work, SMTP should be enabled for all the instances. If that is not possible in the environment, then we can color code the results so that it is easier to spot the issue in one go.
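A minimal T-SQL sketch of the kind of per-instance checks such a script can run is shown below; the thresholds (7 days, 10% free space) are examples, and the PowerShell wrapper, scheduling and mail delivery are assumed to be handled separately:

    -- Databases whose last full backup is older than 7 days (or missing entirely)
    SELECT d.name AS database_name,
           MAX(b.backup_finish_date) AS last_full_backup
    FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS b
           ON b.database_name = d.name AND b.type = 'D'
    WHERE d.name <> 'tempdb'
    GROUP BY d.name
    HAVING MAX(b.backup_finish_date) IS NULL
        OR MAX(b.backup_finish_date) < DATEADD(DAY, -7, GETDATE());

    -- Volumes hosting database files with less than 10% free space
    SELECT DISTINCT vs.volume_mount_point,
           CAST(vs.available_bytes * 100.0 / vs.total_bytes AS DECIMAL(5, 2)) AS pct_free
    FROM sys.master_files AS mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
    WHERE vs.available_bytes * 100.0 / vs.total_bytes < 10;

    -- SQL Agent job outcomes (step_id = 0) that did not succeed
    SELECT j.name AS job_name, h.run_date, h.run_time
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobhistory AS h
      ON h.job_id = j.job_id AND h.step_id = 0
    WHERE h.run_status <> 1;  -- 1 = succeeded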

4. Setting up Maintenance Jobs:

Maintenance jobs are vital for maintaining the health of any SQL Server environment. Over time, many statements and queries are issued against a database that insert, update and delete records; indexes are created and dropped, and many other operations are performed. Some of these operations induce fragmentation in the indexes over time, and it is important to implement methods that optimize the indexes and update statistics based on each index's size and fragmentation level.
A few basic maintenance jobs should be in place to keep a database healthy. Some of them are listed below:
• Database integrity check
• Error log recycle 
• Database backups, in case disk backup is the option that has been selected
• Index-Optimization 
• Cleanup jobs
Please make sure that the Maintenance Plan Wizard is not used to create these jobs. The tool serves a basic purpose but is not very flexible, and if it is not used properly it can result in poor database maintenance.
The above-mentioned jobs help in checking the integrity of the databases every day, recycling the error logs every day, taking backups as per the agreed backup policy, and optimizing the indexes weekly.
The index-optimization job should run once a week, outside business hours, when there are no or very few users.

The maintenance jobs that we can configure are shown in the screen capture below. You can write your own code, or you can download the Ola Hallengren maintenance scripts, which are widely used and trusted for keeping databases healthy.
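As a minimal T-SQL sketch of what these jobs typically run under the hood (the database SalesDB, table dbo.Orders and index IX_Orders_CustomerID are placeholder names; a real job loops over the databases and chooses REORGANIZE or REBUILD per index based on its fragmentation):

    -- Integrity check (daily): validate the logical and physical consistency of the database
    DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS;

    -- Error log recycle (daily): start a new error log so individual files stay small
    EXEC sp_cycle_errorlog;

    -- Check fragmentation (run inside the target database) to decide how to optimize each index
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5;

    -- Weekly index optimization: REORGANIZE for light fragmentation (roughly 5-30%),
    -- REBUILD (commented out here) for heavy fragmentation (above ~30%)
    ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
    -- ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

    -- Update statistics afterwards so the optimizer works with current data
    UPDATE STATISTICS dbo.Orders;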





