Monitoring the performance of MS SQL Server: practical advice

SQL Profiler is a software tool used to trace SQL Server. A "trace" is a session that collects information about the operation of SQL Server 2008.

Main purpose:

SQL Profiler is used by administrators for:

· Analyzing application behavior;

· Checking whether the queries an application sends to the server are optimal;

· Identifying Transact-SQL commands that generate errors;

· Collecting information about user activity over a long period of time;

· Monitoring server operation in real time.

New features:

a. Analysis Services profiling;

b. Integration Services event profiling;

c. the ability to record Performance Monitor counter readings alongside command execution information;

d. many new events and information sources that can be selected for logging to a trace file;

e. the ability to group events in the profiler window.

Working with SQL Server Profiler

1. Launch SQL Server Profiler from the menu: Start → Programs → SQL Server 2008 → Performance Tools → SQL Server Profiler.

2. In the window that opens, select New Trace from the File menu and connect to the SQL Server 2008 server to be monitored.

3. Configure the session parameters in the Trace Properties window, which opens automatically before a trace session starts (see Fig. 8.1).

Fig. 8.1. Setting trace session parameters

a. On the General tab, select the most suitable template from the Use the template list. Templates are managed via the File → Templates menu in SQL Server Profiler. Initially, you have eight templates at your disposal:

1) Standard (default) - the default template; it tracks all stored procedures and Transact-SQL commands submitted for execution;

2) SP_Counts - collects information about the stored procedures and functions launched for execution, grouped by name;

3) TSQL - collects information about all Transact-SQL commands launched for execution on the server, including the user process ID and start time;

4) TSQL_Duration - similar to the previous template, but instead of the start time of each Transact-SQL command, it records the time the command took to execute;

5) TSQL_Grouped - in addition to the Transact-SQL command text and start time, records the application name, the OS user account, and the login used to connect;

6) TSQL_Replay - records the most detailed information about the executed Transact-SQL commands;

7) TSQL_SPs - in addition to recording the start of each stored procedure (SP:Starting), records the execution of each statement within the procedure (SP:StmtStarting);

8) Tuning - used to collect the information required by the Database Tuning Advisor.

b. On the General tab, specify where the trace information will be stored, if required:

1) Trace information can be logged to a file (5 MB maximum size by default):

· the Enable File Rollover option determines whether a new file is created automatically when the current one fills up. Each new file gets the previous file's name with a number appended (1, 2, 3, and so on);

· the Server processes trace data option can be used to increase the reliability of trace recording. When this check box is selected, the server itself handles processing of the trace information.

2) Trace information can be stored in a SQL Server table. A table with the required set of columns is created automatically.

3) The Enable Trace Stop Time option lets you specify a time at which tracing will be stopped automatically.

c. On the Events Selection tab, specify the parameters for collecting information. In the table on this tab, select the required events (rows) and the information (columns) to be recorded for them. To display all rows and columns, select the Show All Events and Show All Columns check boxes.

1) Use the Column Filters button to set up filters (Like or Not Like) so that only the required information is collected, for example actions performed in a specific database, by a specific application, or by a specific user;

2) Use the Organize Columns button to set the order of columns displayed or recorded by the profiler; columns placed in the Group section are used to group data.

4. After specifying all trace parameters, click the Run button (see Fig. 8.2).

Fig. 8.2. Viewing Information During a Trace Session

The upper part of the window displays the events occurring on the server, and the lower part provides detailed information on each event (for example, the text of the SQL command).

Features available in the trace window:

1. If columns for grouping were selected on the Organize Columns tab in the template properties, records in the viewer can be grouped by those columns using the Grouped View command on the View menu;

2. If only one column was placed in the Group list, the Aggregated View display mode becomes available (see Fig. 8.3). It is enabled with the Aggregated View command on the same View menu.

Fig. 8.3. The Aggregated View display mode

3. The profiler can open events saved in trace files and trace tables. It is also possible to replay the logged operations using the Replay menu;

4. You can link trace information to System Monitor performance counters. To do this:

· define a trace session that records the StartTime and EndTime columns;

· start the trace session, recording to a file or table, and simultaneously record Performance Monitor counter readings to a file;

· open the collected trace file in the profiler, then use the Import Performance Data command on the File menu.

What will you learn from this article?

  • The purpose of the SQL Profiler trace tool
  • How to see the SQL query text that a 1C query is translated into
  • How to configure trace filters
  • How to customize SQL Profiler

A common situation at work: a 1C query runs slowly for some reason, but analyzing the query text reveals no obvious problems.

In this case, the problem has to be studied at a lower level, by examining the SQL query text and the query plan. SQL Profiler can be used for this.

SQL Profiler - purpose

SQL Profiler is a program included with MS SQL Server that lets you view all events occurring in SQL Server; in other words, it records a trace.

How can this tool be useful to a 1C programmer? First of all, it lets you get the SQL text of a query and see its plan. This can also be done with the technology journal (TJ), but reading a query plan in the TJ is less convenient and requires some skill. Moreover, the profiler can show not only a textual but also a graphical query execution plan, which is more convenient.

The profiler also lets you find:

  • queries running longer than a certain time
  • queries against a specific table
  • lock waits
  • timeouts
  • deadlocks, etc.

Analyzing Queries with SQL Profiler

The profiler is most often used to analyze queries. Usually the goal is not to analyze every executed query, but to see how a particular 1C query is translated into SQL and to examine its execution plan.

In particular, this is needed to understand why a query is slow, or, when building a large and complex query, to make sure the resulting SQL contains no joins to subqueries.

To capture the query in a trace, perform the following steps:

1. Launch SQL Profiler: Start - All Programs - Microsoft SQL Server 2008 R2 - Performance Tools - SQL Server Profiler.

2. Create a new trace: File - New Trace (Ctrl+N).

3. Specify the DBMS server hosting our database and click Connect:

Nothing prevents us from tracing the DBMS server located on any other computer.

4. In the Trace Properties window that appears, switch to the Events Selection tab:

Since we need queries and query plans, we must enable the corresponding events. To display the complete list of events and columns, select the Show all events and Show all columns check boxes. Then select only the events shown in the figure below and disable the rest:

Description of these events:

  • Showplan Statistics Profile - the textual query execution plan
  • Showplan XML Statistics Profile - the graphical query execution plan
  • RPC:Completed - the query text when it is executed as a procedure (a 1C query with parameters)
  • SQL:BatchCompleted - the query text when it is executed as a plain batch (a 1C query without parameters)

6. At this stage, configure a filter for the selected events. Without a filter, we would see queries against all databases on this DBMS server. Click the Column Filters button and set a filter on the database name:

Now we see in the trace only queries to the "TestBase_8_2" database.

You can also put a filter on other fields, the most interesting of them are:

  • Duration
  • TextData (usually the request text)
  • RowCounts (the number of rows returned by the query)

Let's say we need to "catch" all requests to the "_InfoRg4312" table lasting more than 3 seconds in the "TestBase_8_2" database. This requires:

a) Set filter by database (see above)
b) Set filter by duration (set in milliseconds):

c) Set filter by request text:

To filter by query text, use a mask. To track queries that access several tables, create several entries in the Like section. The filter conditions are combined.

7. Now start tracing with the Run button in the Trace Properties window and observe the events that match the configured filters.

Toolbar buttons are used to control tracing:

Purpose of buttons:

  • Eraser - clears the trace window
  • Start - starts tracing
  • Pause - pauses tracing; clicking Start resumes it
  • Stop - stops tracing

8. The trace window consists of two parts. The upper part shows events and their properties; the lower part shows information that depends on the event type. In our example, this is either the query text or its plan.

9. Let's run the query in the 1C query console and see how it appears in the profiler:

The trace shows that several queries were eventually executed, and only one of them interests us; the rest are service queries.

10. Event properties let you evaluate:

  • how long the query took (Duration)
  • how many logical reads it performed
  • how many rows it returned (RowCounts), etc.

In our case, the query took 2 milliseconds, performed 4 logical reads, and returned 1 row.

11. Looking at the event one row above, we can see the query plan in graphical form:

The plan shows that the search uses the index on price. Still, this plan cannot be called ideal: the index is not covering, so the code and name fields are retrieved with a Key Lookup, which takes 50% of the time.

Using the context menu, the graphical query plan can be saved to a separate file with the *.sqlplan extension and opened in the profiler on another computer, or in the more advanced SQL Sentry Plan Explorer.
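For reference, a covering index is one that already contains all the columns a query needs, so the Key Lookup step disappears from the plan. A minimal T-SQL sketch (the table and column names below are hypothetical, merely echoing the example above; for a real 1C database, indexes should be changed through the 1C configuration tools rather than directly in SQL Server):

```sql
-- Hypothetical table and column names, for illustration only.
-- INCLUDE copies the code and name columns into the index leaf pages,
-- so a seek on price no longer needs a Key Lookup into the clustered index.
CREATE NONCLUSTERED INDEX IX_InfoRg4312_Price
    ON dbo._InfoRg4312 (_Price)
    INCLUDE (_Code, _Description);
```

With such an index, the graphical plan shows a single Index Seek instead of an Index Seek plus Key Lookup.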

12. One row higher still, the same query plan is shown in text form. This is the plan displayed in the TJ, the MCC, and other 1C performance monitoring tools.

13. The resulting trace can be saved in several ways:

  • In the profiler's own format, with the *.trc extension
  • In XML format
  • As a trace template (see item 14)
  • As a database table. This is very convenient when, for example, you need to find the slowest query in a trace or filter queries by some parameter.

Then select the database on the specified server and specify the name of the table where the trace will be saved. You can use an existing table or enter a new name, in which case the table is created automatically.

Now we can run queries of any complexity against this table, for example to find the longest-running queries.

Keep in mind that Duration is stored in the table in microseconds (millionths of a second), so convert the value to milliseconds when displaying results. The table also contains a RowNumber column showing the position of each row in the trace.
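For example, a query like the following finds the slowest statements in a saved trace table (the table name MyTrace is whatever you entered when saving; EventClass values 10 and 12 correspond to RPC:Completed and SQL:BatchCompleted):

```sql
SELECT TOP (20)
       RowNumber,
       CAST(TextData AS nvarchar(max)) AS QueryText,
       Duration / 1000 AS DurationMs,   -- Duration is stored in microseconds
       Reads,
       RowCounts
FROM   dbo.MyTrace                      -- hypothetical saved-trace table name
WHERE  EventClass IN (10, 12)           -- RPC:Completed, SQL:BatchCompleted
ORDER BY Duration DESC;
```

Adding a WHERE condition on QueryText (for example, LIKE N'%_InfoRg4312%') narrows the result to queries against a specific table.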

14. If you use the profiler frequently to analyze queries, repeatedly configuring the required events and filters takes a lot of time.

Trace templates help here: configure the filters and column order you need once, then simply choose the existing template when creating a new trace.

To create a template, use the menu File - Templates - New Template:

On the first tab, specify the server type, template name and, if necessary, set the flag to use this template by default.

On the second tab, we select the required events and set up the filters (as shown above).

When creating a new trace, we can specify the required template, and then on the second tab, all filters and events will be filled automatically according to the created template.

Burmistrov Andrey

SQL Server 2005 Profiler: tracking application queries, trace templates, grouping query information

One of the most useful tools for monitoring user activity is the profiler (SQL Server Profiler). With this tool you can find out what commands SQL Server is currently executing. The need for a profiler arises very often. Here are some standard situations where it is hard to do without it:

· you want to analyze the operation of an application and see what commands it executes on the server. This information can be useful:

· to understand which tables in the database the application works with when performing certain operations. Enterprises very often need reports in a form the application does not provide, and developers rarely document the database structure in detail;

· to find out how optimal, from a performance standpoint, the queries the application sends to the server are. In practice the profiler often reveals clearly sub-optimal queries, for example when data is filtered or sorted on the client;

· to determine which Transact-SQL command sent by the application causes an error on the server;

· to collect information about user activity over a long period of time (for example, all the queries a certain application sent to the server during the working day). The collected information can then be analyzed manually or passed to the Database Tuning Advisor for automated analysis;

· to monitor server performance in real time. For example, if the server suddenly slows down, the profiler window shows what commands are currently being executed on it.

SQL Server 2005 introduces a lot of new things to the profiler:

· Integration Services event profiling was introduced; the profiler can now track the progress of Integration Services (formerly DTS) packages;

· it is now possible to record System Monitor counter readings alongside information about command execution;

· many new events and information sources have been added to the profiler that can be selected for writing to the trace file. The definition of what to write to the trace file can now be saved in XML format;

· trace results can now be saved in XML format (writing in ANSI, OEM, and Unicode formats is still supported);

· even the execution plans of Transact-SQL commands caught by the profiler can be saved in XML format. These plans can then be opened in SQL Server Management Studio for further analysis;

· events can now be grouped directly in the profiler window. Using grouping, for example, you can easily count how many times a particular Transact-SQL command was executed on the server during the day.

Working with the profiler is straightforward. The application is launched from the menu Start | Programs | Microsoft SQL Server 2005 | Performance Tools | SQL Server Profiler. To get started, select New Trace from the File menu in the profiler window and connect to the SQL Server 2005 server you will be monitoring. Tracing refers to a session that collects information about the operation of SQL Server 2005. Before you start collecting information, however, you need to configure the parameters of this session. This is done in the Trace Properties window, which opens automatically before a tracing session starts (Fig. 11.1).

Fig. 11.1. Setting trace session parameters

On the General tab, the Use the template list lets you choose the template best suited for collecting information in your session. In principle, you can ignore the templates and define the collection parameters manually (on the adjacent Events Selection tab). However, choosing the right template saves time and helps avoid mistakes, so we will dwell on templates in more detail.

A template is a set of trace session settings saved in a special file with the .tdf extension. Templates are managed (adding new ones, changing existing ones, importing and exporting them to other directories) via the File | Templates menu in SQL Server Profiler. Initially, you have eight templates at your disposal:

· Standard (default) - as the name implies, this template suits most situations and is therefore selected by default. It tracks all stored procedures and Transact-SQL commands submitted for execution;

· SP_Counts - collects information about the stored procedures and functions launched for execution. In the profiler window the information is sorted (in profiler terminology, grouped) by stored procedure name;

· TSQL - collects information about all Transact-SQL commands launched for execution on the server. In addition to the command text, the user process IDs and start times are recorded. This template is typically used to monitor the commands an application sends to the server;

· TSQL_Duration - almost the same as the previous template, but instead of the start time of each Transact-SQL command, the time it took to execute is recorded. This template is typically used for manual monitoring of server performance;

· TSQL_Grouped - in addition to the Transact-SQL command text and start time, the application name, the operating system user account, and the login used to connect are recorded. Records are grouped by login. This template is typically used to track the activity of a specific application;

· TSQL_Replay - records the most detailed information about executed Transact-SQL commands. This information can later be used to reproduce the load on the server with maximum accuracy. This template is typically used to record a command set for testing various server settings from a performance standpoint;

· TSQL_SPs - in addition to recording the start of each stored procedure (the SP:Starting event), this trace also records the execution of each statement within the procedure (the SP:StmtStarting event). This template is typically used to monitor complex stored procedures;

· Tuning - designed to record the information most suitable for passing to the Database Tuning Advisor. Working with this automated analysis and performance optimization tool is discussed in Sect. 11.5.5.

As already mentioned, you are not limited to the set of ready-made templates. You can define your own trace session parameters on the Events Selection tab. In the table on this tab, select the required events (rows) and the information (columns) to be recorded for them. Note that only a small portion of the available rows and columns is visible by default; to display them all, select the Show All Events and Show All Columns check boxes.

It often happens that you only need to track actions performed in a specific database, by a specific application, or by a specific user, or any combination of these conditions. Filters for collecting information are configured via the Column Filters button on the Events Selection tab. For each column you can record only certain values (Like) or exclude certain values (Not Like). By default, a single Not Like filter is configured for the ApplicationName column. It causes all SQL Server Profiler events to be ignored, that is, all events generated by the trace-collection process itself. It is better not to remove this filter; otherwise a feedback loop may occur, with information being recorded endlessly.

Another button on the Events Selection tab, Organize Columns, lets you set the order of columns for display or recording in the profiler. Pay attention to the Group section in this list: the columns placed there are grouped automatically. If you place only one column in this section, a very convenient viewing mode, Aggregated View, becomes available (information is automatically grouped, for example, by database, application, or user name, and the entries for a given database, application, or user can be expanded and collapsed).

After you have selected the desired template or configured your own set of events for logging, you just need to return to the tab General and configure a few additional parameters for the trace session.

Trace information can be logged to a file. This file can be used in different situations:

· it can be passed as a source of information to the Database Tuning Advisor;

· it can be "replayed" in the profiler, repeating all the recorded commands, for example to evaluate performance under different server settings;

· it can be presented to developers to support claims about the application.
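As a side note, a saved .trc file can also be loaded directly into a query with the built-in fn_trace_gettable function, which makes the recorded commands available to ordinary T-SQL analysis (the file path below is an example):

```sql
-- Reads the trace file as a rowset; with DEFAULT as the second argument,
-- rollover files created by Enable file rollover are read as well.
SELECT TextData,
       Duration / 1000 AS DurationMs,  -- Duration is recorded in microseconds
       StartTime,
       ApplicationName
FROM   sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
WHERE  TextData IS NOT NULL
ORDER BY Duration DESC;
```

This is often the quickest way to find the heaviest commands in a trace collected on another machine.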

Let's note a few points related to logging a trace session to a file:

· the default file size limit of 5 MB is very small; when profiling a production server it fills up within minutes. True, the Enable file rollover check box is selected by default, so when one file fills up a second is created automatically, with 1, then 2, etc. appended to its name, but working with a large number of files is not always convenient. If you are collecting information to pass to the Database Tuning Advisor, it is better to set the file size limit to 1 GB (using the Set maximum file size option on the General tab). Trace recording to a file is most often performed from the administrator's workstation, so the disk space is needed on the workstation, not the server;

· the Server processes trace data option can be used to increase the reliability of trace recording. By default, SQL Server Profiler handles trace data processing on the computer where it runs (not necessarily the server). If this check box is selected, the server handles the processing instead. This guarantees that all trace information is collected (with the box cleared, some information may be skipped at moments of peak server load), but it increases the load on the server.

Another option is to write trace information to a SQL Server table. A table with the required set of columns is created automatically; you can only set the maximum number of rows in it. Note that on this tab the maximum number of records is specified in thousands.

The last parameter on the General tab is Enable trace stop time: you can specify a time at which tracing will be turned off automatically. It usually makes sense to stop tracing before service operations that are of no interest from a logging standpoint (backups, bulk data loads, OLAP cube processing, and so on).

After all the tracing parameters have been configured, click the Run button on the General tab to start tracing (Fig. 11.2).

Fig. 11.2. Viewing Information During a Trace Session

Working in the trace information viewer is quite intuitive: the upper part shows the events occurring on the server, and the lower part provides detailed information for them (for example, the text of the SQL commands). Let's note some of the capabilities available in this window:

· if columns for grouping were selected on the Organize Columns tab in the template properties, records in the viewer can be grouped by those columns using the Grouped View command on the View menu;

· if only one column was placed in the Group list on that tab, an even more convenient display mode, Aggregated View, is available (Fig. 11.3). It is enabled with the Aggregated View command on the same View menu and turns the values of the chosen column into tree nodes that can be collapsed and expanded; the number of events is calculated automatically for each node.

Fig. 11.3. The Aggregated View display mode

· the profiler can display not only freshly captured events but also saved trace files and trace tables, as well as regular SQL Server scripts with Transact-SQL commands. Information from these files or tables can be used to repeat the logged operations; the commands on the Replay menu serve this purpose;

· SQL Server 2005 Profiler introduces the ability to associate trace information with Performance Monitor counters. To take advantage of it:

· define a trace session that records the StartTime and EndTime columns;

· start the trace session, recording to a file or table, and simultaneously record Performance Monitor counter readings to a file;

· open the collected trace file in the profiler, then use the Import Performance Data command on the File menu.

SQL Server 2005 also provides an alternative to the profiler: trace stored procedures. Their functionality is almost identical to that of the profiler; for example, you can likewise select events to trace and write them to a file. The main difference is that all configuration has to be done from Transact-SQL code.

Trace stored procedures are harder and less convenient to work with than the profiler and provide no additional features, so we will not cover them in detail. Here is a list of these stored procedures with brief descriptions:

· sp_trace_create - configures the parameters of a trace session;

· sp_trace_setevent - selects the events to record for the created trace session;

· sp_trace_setfilter - configures a filter for collecting trace information;

· sp_trace_setstatus - starts, stops, or deletes a trace created with sp_trace_create;

· sp_trace_generateevent - generates a custom event that will be intercepted during the trace.
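A minimal sketch of how these procedures fit together (the file path, database name, and size limit below are example values; event 12 is SQL:BatchCompleted, columns 1 and 13 are TextData and Duration, and filter column 35 is DatabaseName):

```sql
DECLARE @TraceId int;
DECLARE @MaxSizeMB bigint;
DECLARE @On bit;
SET @MaxSizeMB = 50;
SET @On = 1;

-- 1. Create the trace definition (it starts in the stopped state).
--    SQL Server appends the .trc extension to the file name itself.
EXEC sp_trace_create @TraceId OUTPUT, 0, N'C:\Traces\MyTrace', @MaxSizeMB;

-- 2. Record TextData (column 1) and Duration (column 13)
--    for the SQL:BatchCompleted event (event 12).
EXEC sp_trace_setevent @TraceId, 12, 1, @On;
EXEC sp_trace_setevent @TraceId, 12, 13, @On;

-- 3. Keep only events from one database: column 35 (DatabaseName),
--    logical operator 0 (AND), comparison operator 6 (LIKE).
EXEC sp_trace_setfilter @TraceId, 35, 0, 6, N'TestBase_8_2';

-- 4. Start the trace (status 1 = start; 0 = stop; 2 = close and delete).
EXEC sp_trace_setstatus @TraceId, 1;
```

The same trace is later stopped and removed with sp_trace_setstatus statuses 0 and 2; the resulting file can be opened in the profiler or read with fn_trace_gettable.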

In previous issues (see ComputerPress No. 1, 3-5, 7, 9'2006), we discussed moving to SQL Server 2005 by migration or upgrade, as well as the main scenarios for using SQL Server 2005. This article focuses on various query optimization methods and possible approaches to finding performance problems.

SQL Server provides services that run in a dynamic, ever-changing environment. Therefore, regular monitoring of the server's operation allows you to identify problems at the stage of their occurrence and promptly take measures to eliminate them. With the accumulation of statistics, an understanding of the main trends in the server operation appears. Collecting data regularly - even when the server is running smoothly - will create a so-called server performance baseline that can serve as a benchmark for further measurements of server performance.

Once you have a baseline performance benchmark, you gain the ability to analyze query performance in a variety of scenarios. It is recommended that measurements be taken at least before, during and after the various optimization procedures have been performed to ensure that the steps taken are successful. In addition, regular standard performance measurements (weekly, monthly, quarterly, etc.) will provide a clearer picture of server performance.

The baseline can be used to determine the server's peak and off-peak hours, the response time of requests and command packets, the time it takes to back up and restore data, etc.

Query performance

Query performance should be considered from two points of view: the resources used to execute a particular query (the objects accessed during execution, locked objects, etc.) and the time spent executing it. The less time a query takes, the lower the likelihood that it will block other queries and transactions while it runs.

SQL Server 2005 includes two main tools for measuring query performance - Performance Monitor and SQL Server Profiler. In addition, T-SQL constructs from the SET STATISTICS group — SET STATISTICS IO, SET STATISTICS PROFILE, and SET STATISTICS TIME — and Dynamic Management Views can be used to obtain information about the execution of queries. Next, we'll take a closer look at using Performance Monitor and SQL Server Profiler.
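The SET STATISTICS options mentioned above are enabled per session and report I/O and timing figures for any query run afterwards. A short sketch (the SELECT is just a placeholder for the query under investigation):

```sql
SET STATISTICS IO ON;    -- report logical/physical reads per table
SET STATISTICS TIME ON;  -- report parse/compile and execution CPU/elapsed time

SELECT COUNT(*) FROM sys.objects;  -- any query of interest goes here

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

In SQL Server Management Studio the figures appear on the Messages tab next to the query results.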

Using Performance Monitor

The Performance Monitor utility is used to analyze the performance of hardware and software resources, including memory, network utilization, processor time, and information related to the operation of SQL Server and other software products - Microsoft Message Queuing (MSMQ), Microsoft .NET Framework and Microsoft Exchange Server. Specifically, you can use Performance Monitor to monitor SQL Server resources such as locks and transactions.
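Incidentally, the SQL Server counters that Performance Monitor exposes can also be read from T-SQL through the sys.dm_os_performance_counters view (the counter names below are examples; on a named instance, object_name carries an instance-specific prefix):

```sql
-- The same lock and transaction counters, queried from T-SQL.
-- RTRIM strips the trailing spaces of the fixed-width name columns.
SELECT RTRIM(object_name)   AS object_name,
       RTRIM(counter_name)  AS counter_name,
       RTRIM(instance_name) AS instance_name,
       cntr_value
FROM   sys.dm_os_performance_counters
WHERE  counter_name IN (N'Transactions/sec', N'Lock Waits/sec');
```

This is convenient when you want to sample SQL Server counters from a script rather than the Performance Monitor GUI.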

To add the counters of interest to us, you must perform the following steps:

  1. On the Start menu, select Run and type perfmon in the Run dialog box. This launches the Performance console (a Microsoft Management Console snap-in) with a graphical view of a number of performance counters.
  2. In the System Monitor window, right-click the chart and select Add Counters.
  3. In the Add Counters dialog box, choose the computer and the object to monitor. Objects specific to SQL Server have a corresponding prefix.
  4. To select counters, choose either the All counters option or one or more counters from the list (Fig. 1).
  5. Then select the database instances of interest or enable the All Instances option.
  6. Click Add and then Close to finish adding counters.

After the counters of interest have been added, we can either view their values as a graph or save them to a log file (Fig. 2).

In the latter case, expand the Performance Logs and Alerts node and select New Log Settings. In the New Log Settings dialog box, specify a name for the log and click OK. Then, in the dialog box named after your log, click Add Counters; adding counters works the same way as described above. Note that collecting a large number of counters can itself affect system performance. When you have finished adding counters, click Close.

On the General tab, in the Sample data every section, you can set the sampling (data capture) interval. It is recommended to start with a moderate interval, for example every 5 minutes, and then decrease or increase it as needed. Note that the shorter the sampling interval, the more system and disk resources are required; intervals shorter than the counter's own update period can also result in incorrect data. On the Log Files tab you can configure the properties of the log file, and on the Schedule tab you set the monitoring schedule. Clicking OK creates the log and begins data collection (Fig. 3).

To stop logging manually, right-click the log under the Counter Logs node and select Stop.

To view the data collected in a log file, select System Monitor in Performance Monitor, click View Log Data on the toolbar, and on the Source tab of the System Monitor Properties dialog box specify the name of the log file.
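As an alternative to the graphical console, counter values can also be captured from the command line with the typeperf utility that ships with Windows. The counter path and output file below are only examples; for a named SQL Server instance the object prefix is MSSQL$InstanceName rather than SQLServer.

```shell
# Sample the lock-wait counter every 5 seconds, 12 samples, into a CSV file.
# Counter path and file name are examples; adjust them to your system.
typeperf "\SQLServer:Locks(_Total)\Lock Waits/sec" -si 5 -sc 12 -o lockwaits.csv
```

This is convenient for scripted or scheduled collection, since the resulting CSV can be processed by any reporting tool.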

The most commonly used Performance Monitor counters, whose readings help identify performance problems, fall into the following categories:

  • data access: the SQLServer:Access Methods object is used to audit the data access methods used by SQL Server itself;
  • statistics: the SQLServer:SQL Statistics object is used to monitor query compilation and recompilation; it shows how quickly and efficiently SQL Server processes queries;
  • transactions: counters of the SQLServer:Databases and SQLServer:Transactions objects are used to determine the number of transactions per second;
  • locks: the SQLServer:Locks object is used to audit SQL Server locks placed on specific resource types (Table 1).

Table 1. Performance Monitor counters

| Counter name | Description |
| --- | --- |
| Range Scans/sec | Counts index range scans performed per second |
| Full Scans/sec | Counts the number of full scans performed in the last second |
| Index Searches/sec | Counts the number of index searches performed in the last second |
| Table Lock Escalations/sec | Counts the number of lock escalations to the table level |
| Worktables Created/sec | Counts the number of worktables created in the last second |
| Batch Requests/sec | Counts the number of Transact-SQL command batches received per second; a high number of batches indicates good throughput |
| SQL Compilations/sec | Counts the number of query compilations per second; the value should become nearly constant once the typical workload has been running for a while |
| SQL Re-Compilations/sec | Counts the number of query recompilations per second |
| Transactions/sec (SQLServer:Databases object) | Counts the number of transactions started in the database in the last second |
| Longest Transaction Running Time (SQLServer:Transactions object) | Shows the number of seconds since the start of the transaction that has been active longer than any other current transaction; if it reports a very long transaction, use the sys.dm_tran_active_transactions dynamic management view to get information about it |
| Update conflict ratio (SQLServer:Transactions object) | Shows the percentage of snapshot-isolation transactions that encountered update conflicts in the last second |
| Average Wait Time (ms) | Shows the average wait time for each lock request that caused a wait |
| Lock Requests/sec | Counts the number of lock requests and lock conversions per second |
| Lock Wait Time (ms) | Shows the cumulative lock wait time in the last second |
| Lock Waits/sec | Counts the number of lock requests per second that resulted in a wait |
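The counters listed above can also be read directly from Transact-SQL through the sys.dm_os_performance_counters dynamic management view, which is sometimes more convenient than the graphical tools; a minimal sketch:

```sql
-- Read selected SQL Server performance counters from T-SQL.
-- Note: "per second" counters are stored as cumulative totals here;
-- sample twice and divide the difference by the interval to get a rate.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Batch Requests/sec',
                       N'SQL Compilations/sec',
                       N'Lock Waits/sec');
```

Because the view is queried like any table, its output can be logged into a monitoring table on a schedule.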

The second utility that can be used to measure query performance is SQL Server Profiler. Below we look at the main ways to use it.

Using SQL Server Profiler

SQL Server Profiler measures the performance of individual queries as well as queries that are part of stored procedures and Transact-SQL batches. It can be used to collect performance information, including the time taken to execute an individual command, the time the command spent blocked, and its execution plan.

To create a new profile, perform the following steps:

  1. Run SQL Server Profiler (Microsoft SQL Server 2005 => Performance Tools => SQL Server Profiler).
  2. On the File menu, select New Trace.
  3. In the Connect to Server dialog box, select the required server and click Connect.
  4. In the Trace Properties dialog box, set the name of the trace (Trace name) and, in the Use the template list, choose one of the available templates, or Blank if no template is needed.
  5. To save the profiling results, select the Save to file option to write data to a file and specify the maximum file size (the default is 5 MB); optionally, enable Enable file rollover to automatically create a new file when the current one reaches the specified size. Alternatively, the results can be stored in a database table (Save to table); the Set maximum rows option sets the maximum number of records.
  6. To set an end time for the trace, use the Enable trace stop time option (Fig. 4).
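The same kind of trace can also be created without the GUI, using the server-side trace procedures sp_trace_create, sp_trace_setevent, and sp_trace_setstatus. The file path below is an assumption, and only two event/column pairs are shown for brevity:

```sql
-- Sketch of a server-side trace roughly equivalent to the GUI setup above.
DECLARE @TraceID int, @maxsize bigint, @on bit;
SET @maxsize = 5;   -- maximum trace file size in MB
SET @on = 1;

-- Option 2 = TRACE_FILE_ROLLOVER; the output path is an example.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\MyTrace', @maxsize;

-- Event 12 = SQL:BatchCompleted; column 1 = TextData, column 13 = Duration.
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

EXEC sp_trace_setstatus @TraceID, 1;   -- 1 = start the trace
-- Later: sp_trace_setstatus @TraceID, 0 to stop, then 2 to close and delete.
```

A server-side trace avoids the overhead of streaming events to the Profiler GUI, which matters on busy production servers.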

To define the events and data columns for a profile, perform the following steps:

  1. In the Trace Properties dialog box, go to the Events Selection tab.
  2. Add or remove events from the profile using the event class table.
  3. To see the list of all available events, enable the Show all events option (Fig. 5).

Table 2 lists the most commonly used SQL Server Profiler events. Like performance counters, SQL Server Profiler events fall into a number of categories, some of which are of interest for our purposes.

Table 2. SQL Server Profiler events

| Event name | Description |
| --- | --- |
| RPC:Completed | Occurs when a remote procedure call completes |
| SP:Completed | Occurs when a stored procedure has finished executing |
| SP:StmtCompleted | Occurs when a Transact-SQL statement within a stored procedure has finished executing |
| SQL:StmtCompleted | Occurs when a Transact-SQL statement completes |
| SQL:BatchCompleted | Occurs when a batch of Transact-SQL commands has finished executing |
| Lock:Acquired | Occurs when a transaction acquires a lock on a resource |
| Lock:Released | Occurs when a transaction releases a previously locked resource |
| Lock:Timeout | Occurs when the timeout for acquiring a lock expires because the required resource is already locked by another transaction |

How to detect blocking

Each time a transaction uses a resource (table, page, index, and so on), a lock is placed on it. If another transaction tries to access this resource with a lock type that is incompatible with the lock already held, the new request must wait until the existing lock is released.

The following methods can be used to detect blocking:

  • Use Activity Monitor in SQL Server Management Studio, which shows information about processes and about locks at the process and object level. To open Activity Monitor, expand the Management node and double-click Activity Monitor. It allows you to view:

the objects locked by each process (use the Server Process ID (SPID) column on the Process Info page to identify the request holding the lock);

the processes that are causing other processes to wait (use the Blocked By column on the Process Info page) (Fig. 6);

  • Use SQL Server Profiler to obtain the blocked process report, which displays information about processes that have been blocked for longer than a specified threshold.
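Besides Activity Monitor and Profiler, blocking can be detected directly with the dynamic management views; a minimal sketch:

```sql
-- Requests that are currently blocked, with the blocking session's id.
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

-- All locks currently held or requested, grouped by session.
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
ORDER BY request_session_id;
```

For the blocked process report mentioned above, the reporting threshold (in seconds) is set with sp_configure 'blocked process threshold' followed by RECONFIGURE.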

This article brings together several materials published in the mailing list "MS SQL Server is a delicate matter ..." devoted to monitoring the effectiveness of MS SQL Server, to finding the causes of hardware problems, and to resolving them. In addition, several useful practical recommendations and warnings are offered.

If your database server makes intensive use of I/O, you can change the operating system's I/O Page Lock Limit setting, which can increase the effective read/write rate to the hard disk.
First, benchmark the I/O for your typical server load. Then, in regedit.exe, open the key:

HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\IoPageLockLimit

The point is to tune the value of this key step by step toward the optimum, judging by the changes in your benchmark results.
From this key the operating system reads the maximum number of bytes it can lock for I/O operations. The default is 0, which corresponds to 512 KB. Increase this value in steps, adding 512 KB each time (for example, 512, 1024, and so on), and benchmark your system after each change. It makes sense to increase this parameter only as long as you observe an increase in I/O throughput, which shows up as a decrease in the time taken by standard disk operations. When you no longer see significant improvement, return to the Registry Editor and remove the last increment.

Warning: there is a limit on the maximum value of this key. If you have 16 MB of RAM, do not set IoPageLockLimit higher than 2048 bytes; with 32 MB of RAM, do not exceed 4096 bytes, and so on.

Important note:

The operations listed by Sergey can, if used incorrectly or erroneously, lead to the collapse of your system. Therefore, prepare database and system backups in advance, save a working NT configuration, and rehearse any planned changes on a test system first. Never make more than one configuration change at a time.
