SQL Profiler solves problems.

SQL Server 2005 profiler, application request tracking, trace templates, request information grouping

One of the most useful means of monitoring user activity is the profiler. With this tool, you can find out which commands SQL Server is currently executing. The need for a profiler arises very often; here are a few standard situations where it is very difficult to do without one:

• you want to analyze an application and see which commands it executes on the server. This information may be useful:

· to understand which tables in the database the application works with when performing particular operations. Enterprises very often need reports in a form the application does not provide, and developers rarely supply detailed information about the database structure;

· to find out how optimal, from a performance standpoint, the queries sent to the server by the application are. In practice, the profiler often exposes outright suboptimal queries, for example when data filtering or sorting is performed on the client;

· to understand which Transact-SQL command from the application generates an error on the server.

• to collect information about user activity over a long period (for example, all the queries a certain application sent to the server during the working day). The collected information can then be analyzed manually or passed to the Database Engine Tuning Advisor for automated analysis;

• to monitor server operation in real time. For example, if the server suddenly slows down, the profiler window shows which commands are currently being executed on it.

The profiler gains many new features in SQL Server 2005:

• Integration Services event profiling has been added, so you can now use the profiler to monitor the execution of the new Integration Services packages (the successors of DTS packages);

• counter readings from System Monitor can now be recorded together with information about command execution;

• many new events and data columns that can be selected for writing to the trace file have been added, and the definition of what to write to the trace file can now be saved in XML format;

• trace results can now be saved in XML format (the ability to write in ANSI, OEM, and Unicode formats is retained);

• even the execution plans of Transact-SQL commands captured by the profiler can be saved in XML format and later opened in SQL Server Management Studio for further analysis;

• events can now be grouped directly in the profiler window. Grouping makes it very easy, for example, to count how many times a particular Transact-SQL command was run on the server during the day.

Working with the profiler is quite simple. The application is launched from Start | Programs | Microsoft SQL Server 2005 | Performance Tools | SQL Server Profiler. To get started, select New Trace from the File menu in the profiler window and connect to the SQL Server 2005 server you want to monitor. The word "trace" refers to a session that collects information about the operation of SQL Server 2005. Before you start collecting information, however, you need to configure the session. This is done in the Trace Properties window, which opens automatically before a trace session starts (Fig. 11.1).

Fig. 11.1. Setting trace session options

On the General tab, the Use the template list lets you choose the template best suited to collecting information for your session. In principle, you can ignore the templates and define the collection parameters manually (on the adjacent Events Selection tab). However, choosing the right template saves time and helps avoid mistakes, so we will dwell on the templates in more detail.

A template is a file with the .tdf extension in which trace session settings are saved. Templates are managed (adding new ones, changing existing ones, importing and exporting them to other directories) through the File | Templates menu in SQL Server Profiler. Initially, eight templates are at your disposal:

• Standard (default) - as the name implies, this template suits most situations and is therefore selected by default. It tracks all stored procedures and Transact-SQL commands that are run;

• SP_Counts - collects information about the stored procedures and functions launched for execution. The information in the profiler window is sorted (in profiler terminology, grouped) by stored procedure name;

• TSQL - collects information about all Transact-SQL commands run on the server. Besides the command text, the user process identifier and the start time are recorded. This template is typically used to monitor the commands an application sends to the server;

• TSQL_Duration - almost the same as the previous template, but instead of recording when a Transact-SQL command was run, it records how long the command took to execute. This template is typically used for "manual" monitoring of server performance;

• TSQL_Grouped - in addition to the Transact-SQL command text and start time, the application name, the operating system user account, and the login used to connect are recorded. Records are grouped by login. This template is typically used when you want to track the activity of a specific application;

• TSQL_Replay - records the most detailed information about the executed Transact-SQL commands. This information can later be used to reproduce the server workload with maximum accuracy. The template is typically used to record a set of commands for testing different server settings for performance;

• TSQL_SPs - in addition to recording the start of each stored procedure (the SP:Starting event), this template records the execution of every statement within the procedure (the SP:StmtStarting event). It is typically used to monitor the operation of complex stored procedures;

• Tuning - this template records the information most suitable for passing to the Database Engine Tuning Advisor. Working with this tool for automated analysis and performance optimization is described in sec. 11.5.5.

As already mentioned, you are by no means limited to the set of ready-made templates. You can configure your own trace session settings on the Events Selection tab. In the table on this tab, select the required events (rows) and the information to be recorded for them (columns). Note that only a small fraction of the available rows and columns is visible by default. To display them all, check the Show all events and Show all columns boxes.

Very often you want to trace only the actions performed against a specific database, by a specific application, or by a specific user, or all of these conditions combined. Filters for collecting information are configured by clicking the Column Filters button on the Events Selection tab. Each column can be filtered to record only specific values (Like) or to exclude certain values (Not Like). By default, only one filter is configured: Not Like on the ApplicationName column, which makes all SQL Server Profiler events, that is, all events related to the trace collection process itself, be ignored. It is better not to remove this filter; otherwise a feedback loop may occur, with the trace endlessly recording its own activity.

With another button on the Events Selection tab, Organize Columns, you can set the order in which columns are displayed and recorded in the profiler. Pay attention to the Group section of this list: the columns placed in it are grouped automatically. If you put exactly one column in this section, a very convenient viewing mode called Aggregated View becomes available, in which information is automatically grouped, for example by database, application, or user name, and the entries for the desired database, application, or user can be expanded and collapsed.

After you select the desired template or set up your own set of events for logging, return to the General tab and configure a few more trace session options.

The trace information can be logged to a file, which can be used in different situations:

• it can be passed as input to the Database Engine Tuning Advisor;

q you can "play" again in the profiler, repeating all the recorded commands, for example, to evaluate performance with different server settings;

• it can be presented to developers in support of your claims about the application.

A few points to note about logging a trace session to a file:

• the default file size limit of 5 MB is very small: when profiling a production server, it fills up within minutes. The Enable file rollover checkbox is checked by default, so after one file fills up a second is created automatically with the number 1 appended to its name, then 2, and so on, but working with a large number of files is not always convenient. If you are collecting information for the Database Engine Tuning Advisor, it is best to raise the file size limit to about 1 GB (using Set maximum file size on the General tab). The trace is most often written to a file from the administrator's workstation, so the disk space is needed on the workstation, not the server;

• the Server processes trace data option can be used to increase the reliability of trace capture. By default, SQL Server Profiler processes the trace data, and it does so on the machine it runs on (not necessarily the server). If you check this box, the server itself handles the trace information. This guarantees that all trace information is collected (with the box unchecked, some information may be missed during server peaks) but increases the load on the server.

Another option for writing trace information is to write it to a SQL Server table. A table with the required set of columns is created automatically; you can only set the maximum number of rows, which on this tab is specified in thousands.
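A related technique worth knowing: a trace file already saved to disk can also be loaded into a table afterwards with the built-in fn_trace_gettable function. A minimal sketch, assuming a saved trace at C:\Traces\day.trc and an illustrative table name:

-- Load a saved trace file into a new table for ad hoc querying.
SELECT *
INTO dbo.DayTrace
FROM fn_trace_gettable(N'C:\Traces\day.trc', DEFAULT)

-- DEFAULT for the second argument reads all rollover files of the trace.
SELECT COUNT(*) FROM dbo.DayTrace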

The last option on the General tab is Enable trace stop time: you can specify a time at which tracing is disabled automatically. This usually makes sense before starting housekeeping operations that are of no interest from a logging point of view (backups, bulk data loads, OLAP cube processing, and so on).

After all the tracing parameters are configured, click the Run button on the General tab to start tracing (Fig. 11.2).

Fig. 11.2. Viewing information during a trace session

Working with the trace viewer is fairly straightforward: the top pane shows the events occurring on the server, while the bottom pane provides detailed information about the selected event (for example, the SQL command text). Here are some of the options available in this window:

• if you selected grouping columns on the Organize Columns tab of the template properties, you can group records by those columns in the viewer; the Grouped View command of the View menu serves this purpose;

• if exactly one column was placed in the Group list on that tab, you can use an even more convenient display mode, Aggregated View (Fig. 11.3). This mode is enabled with the Aggregated View command in the same View menu and turns the values of the chosen column into tree nodes that can be collapsed and expanded. In addition, the number of events is counted automatically for each node.

Fig. 11.3. The Aggregated View display mode

• the profiler can display not only events just captured but also saved trace files and trace tables. You can also open ordinary SQL Server scripts with Transact-SQL commands. Information from these files or tables can be used to replay the logged operations; the commands of the Replay menu serve this purpose;

• SQL Server 2005 Profiler introduces a new feature: linking trace information to Performance Monitor counters. To take advantage of it, you need to:

· define a trace session in which the StartTime and EndTime columns are recorded;

· run the trace session with its output written to a file or table, and at the same time record a Performance Monitor counter log to a file;

· open the collected trace file in the profiler, and then use the Import Performance Data command from the File menu.

SQL Server 2005 also provides an alternative to the profiler: the trace stored procedures. Their functionality is almost identical to the profiler's; for example, you can likewise select events to trace and write them to a file. The main difference is that all the configuration must be done in Transact-SQL code.

Working with the trace stored procedures is harder and less convenient than with the profiler, and they provide no extra features, so we will not cover them in detail. Here is a list of these stored procedures with brief descriptions; a usage sketch follows the list:

• sp_trace_create - configures the parameters of a trace session;

• sp_trace_setevent - selects the events and data columns for a created trace session;

• sp_trace_setfilter - configures a filter for collecting trace information;

• sp_trace_setstatus - starts a trace, stops it, or closes and deletes the trace definition created by sp_trace_create;

• sp_trace_generateevent - raises a user-defined event that will be captured by the trace.
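Although we will not cover them in detail, a minimal sketch of a server-side trace built with these procedures may be helpful. The file path is an illustrative assumption; the event and column numbers (12 = SQL:BatchCompleted, 1 = TextData, 12 = SPID, 13 = Duration, 10 = ApplicationName) follow the documented trace numbering:

-- Create a trace that writes to a rollover file (path is illustrative).
DECLARE @traceid int, @maxsize bigint, @on bit
SET @maxsize = 100 -- MB per rollover file
SET @on = 1
EXEC sp_trace_create @traceid OUTPUT,
     @options = 2,                       -- TRACE_FILE_ROLLOVER
     @tracefile = N'C:\Traces\mytrace',  -- .trc extension is added automatically
     @maxfilesize = @maxsize

-- Record TextData (1), SPID (12) and Duration (13) for SQL:BatchCompleted (12).
EXEC sp_trace_setevent @traceid, 12, 1, @on
EXEC sp_trace_setevent @traceid, 12, 12, @on
EXEC sp_trace_setevent @traceid, 12, 13, @on

-- Skip the profiler's own activity: ApplicationName (10) NOT LIKE (7).
EXEC sp_trace_setfilter @traceid, 10, 0, 7, N'SQL Server Profiler'

-- Start the trace; sp_trace_setstatus with 0 stops it, 2 closes and deletes it.
EXEC sp_trace_setstatus @traceid, 1
SELECT @traceid AS trace_id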

SQL Server Profiler is a graphical shell for creating traces and analyzing trace results. Events are stored in a trace file that can later be analyzed or used to replay a specific sequence of steps when diagnosing problems.

To track the actions currently being executed, run SQL Server Profiler, create a new trace, and configure what to capture:

On the "General" tab, you must specify the name of the trace. Specify where the captured trace data will be saved - to a file and/or to a database table.

The "Events Selection" tab is of particular interest:

This page lists the events to be monitored. In this example, we specify the data needed to capture query plans.


By default, tracing captures the specified events in all databases. To filter the captured data, click the "Column Filters..." button:

For example, let's filter by infobase ID (you can find out a database ID with the query SELECT DB_ID(N'BaseName')).
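A quick sketch of looking up the ID (the database name is a placeholder for your infobase):

-- Returns the numeric ID to put in the DatabaseID column filter.
SELECT DB_ID(N'TestBase_8_2') AS database_id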

Starting a trace in Profiler for 1C

After all the settings are made, it remains to start tracing: click "Run". From this moment, all actions matching the filters are traced:

For example, I post the document "Receipt of goods and services" and trace the posting in order to find the most labor-intensive operations.

Once the trace has been captured, it must be analyzed.

Analyzing data from Profiler

For analysis, the resulting trace can be saved either to a file or to a table. We will save it to a database table:

In this lecture, we continue our study of stored procedures, begun in "Creating and Managing Stored Procedures". You will learn how to analyze stored procedures and other T-SQL statements using Microsoft SQL Server Query Analyzer and SQL Server Profiler. From this analysis, you will be able to determine how efficient your T-SQL statements are. An efficient SQL Server query uses the right sequence of operations and the right indexes to reduce row processing and minimize I/O.

Using Query Analyzer, you can see the execution plan chosen for a T-SQL statement by the SQL Server query optimizer. The query optimizer is an internal module that searches for the best execution plan for each T-SQL statement: it analyzes the statement, considers a number of possible execution plans, and evaluates the "cost" of each plan in terms of required resources and processing time, selecting the plan with the lowest cost. The cost of each plan is estimated from statistics collected by the system, which may be out of date. Since you may know more about your database and your data than the query optimizer does, you may be able to devise a plan better than the optimizer's. Using the information provided by Query Analyzer, you can determine whether the optimizer's plan for a particular statement is efficient and, if not, try to optimize the statement by modifying it or by using SQL hints. In this lecture, you will learn how to optimize T-SQL statements in addition to learning how to use Query Analyzer.
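As a hedged illustration, the optimizer's estimated plan can also be requested from T-SQL itself, without the Query Analyzer GUI; the table and column here are the t1/col1 pair created later in the deadlock example, used purely for illustration:

-- Show the estimated plan as text instead of executing the statement.
SET SHOWPLAN_TEXT ON
GO
SELECT col1 FROM t1 WHERE col1 = 1
GO
SET SHOWPLAN_TEXT OFF
GO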

Using Profiler, you can analyze the operations within your SQL Server system to determine which SQL statements and stored procedures are using unnecessary system resources. With this information, you can focus your tuning efforts on those statements and stored procedures first. In addition to describing how to use Profiler, this chapter also shows how to make the most of the information you get from it.

Using SQL Query Analyzer

The Query Analyzer utility is shipped with Microsoft SQL Server 2000 instead of

In our work, we quite often encounter a situation where a certain query runs slowly, yet no obvious problems are visible in the query text. Usually the problem then has to be investigated at a deeper level. As a rule, it becomes necessary to look at the SQL text of the query and its plan, and this is where SQL Profiler helps us.

What is SQL Profiler and why is it needed at all

SQL Profiler is a program supplied with MS SQL Server, designed to view all events occurring in SQL Server or, in other words, to record a trace. Why would a 1C programmer need SQL Profiler? At the very least, to obtain the text of a query in SQL and see its execution plan. Of course, this can also be done with the technological log, but that requires certain skills, and the plan in the technological log is not as readable. In the profiler you can see not only a textual but also a graphical query execution plan, which, in my opinion, is much more convenient. The profiler can also be used to find: queries running longer than a certain time, queries against a specific table, waits on locks, deadlocks, timeouts, and much more...

Query Analysis with SQL Profiler

Most often, the profiler is used specifically for query analysis. As a rule, we do not need to trace all queries; usually we want to see how a certain query in the 1C language is translated into SQL and to view its execution plan. For example, this may be necessary to determine why a query runs slowly, or to make sure that a large query we have written does not contain subquery joins in its SQL body. To catch the query in the trace, do the following:

1. Start SQL Profiler: Start - All Programs - Microsoft SQL Server 2008 R2 - Performance Tools - SQL Server Profiler
2. Create a new trace: File - New Trace (Ctrl+N)
3. Specify the DBMS server on which our database is located and click "Connect".

Naturally, nothing prevents you from tracing a DBMS server located on another computer.
4. In the "Trace Properties" window that appears, go to the second tab, "Events Selection".

5. Now specify the events, and the properties of those events, that you want to see in the trace. We need queries and query plans, so we enable the corresponding events. To display the full list of properties and events, check the "Show all columns" and "Show all events" flags. Then select only the events listed below; all other events should be disabled.


Description of the events:
Showplan Statistics Profile - the textual query execution plan.
Showplan XML Statistics Profile - the graphical query execution plan.
RPC:Completed - the query text when it is executed as a procedure (when a 1C query is executed with parameters).
SQL:BatchCompleted - the query text when it is executed as a plain batch (when a 1C query is executed without parameters).

6. Now configure a filter for the events. Without it, we would see queries against every database on this DBMS server. Click the "Column Filters" button and set a filter on the database name:

Now the trace will show only queries against the "TestBase_8_2" database. If you wish, you can filter on other fields as well; the most interesting are Duration, TextData (usually the query text), and RowCounts (the number of rows returned by the query).

For example, to catch all queries against the table "_InfoRg4312" in the database "TestBase_8_2" lasting more than 3 seconds, I set:
a) a filter on the database, as shown above;
b) a filter on duration, in milliseconds;
c) a filter on the query text.

Here we specify a mask. If you need to track queries that access several tables, create several entries in the "Like" section. The filter conditions on all columns apply simultaneously.

7. Now you can start tracing: click "Run". From this point tracing is active, and you will see the events you configured for display that match your filters. You can control tracing with the buttons on the command bar.


From left to right: Eraser clears the trace window; Start starts tracing; Pause pauses it (pressing Start resumes); Stop stops tracing.

8. The trace window itself consists of two parts. The events and event properties are in the upper part; the lower part displays different information depending on the event type. In our case, either the query text or its plan is displayed here.

9. Run the query in the 1C query console and see how it is reflected in the profiler.


The trace shows that there were several queries, and only one of them is ours; the rest are service queries.

10. From the event properties you can tell how long the query ran (Duration), how many logical reads there were (Reads), how many rows the query returned (RowCounts), and so on. In my case, the query took 2 milliseconds, performed 4 logical reads, and returned 1 row.

11. Moving up one event, we can see the query plan in graphical form.
As the plan shows, the lookup uses the index on price. Still, this plan cannot be called ideal: the index is not covering, and the code and name fields are fetched with a Key Lookup, which takes 50% of the time.


Using the context menu, the graphical plan can be saved to a separate *.SQLPlan file and opened in the profiler on another computer, or in the more advanced SQL Sentry Plan Explorer.

12. Going one event higher, we see the same query plan in text form. It is this plan that is displayed in the technological log (TJ), the TsUP, and other 1C performance tools. To analyze it, I recommend using an advanced text editor with syntax highlighting, such as Notepad++.

13. Using the "File-Save As" menu, the entire trace can be saved in various formats:
a) To the format of the profiler itself, i.e. with *.trc extension
b) To xml format
c) You can make a template out of the trace. See next paragraph.
d) You can save the trace as a database table. A convenient way if we need to find, for example, the slowest request in the entire trace, or select requests by some parameter. File - Save as - Trace table - Select a DBMS server and connect to it Next, you need to select a database on the specified server and specify the name of the table where the trace will be saved. You can select an existing table, or write a new name and then the table will be created automatically in the selected database.

Keep in mind that in the table Duration is stored in millionths of a second (microseconds), so when presenting results it is worth converting the value to milliseconds. A RowNumber column is also added to the table, showing each row's number in the trace.
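A small sketch of querying such a saved trace, assuming the trace was saved to a table named dbo.MyTrace and that the Reads and RowCounts columns were included in the trace:

-- Ten slowest statements; Duration is in microseconds, so divide for ms.
SELECT TOP (10)
       RowNumber,
       CAST(TextData AS nvarchar(max)) AS query_text,
       Duration / 1000 AS duration_ms,
       Reads,
       RowCounts
FROM dbo.MyTrace
WHERE EventClass IN (10, 12)  -- RPC:Completed, SQL:BatchCompleted
ORDER BY Duration DESC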

14. If you need to use the profiler frequently for query analysis, setting up the necessary filters and events quickly becomes tedious and takes a lot of time. Trace templates come to the rescue: we specify the filters we need and the column order once, and then simply select this template when creating a new trace. To create a template, use the menu File - Templates - New Template.

On the first tab, everything is simple: specify the server type and the template name and, if necessary, set the flag to use this template by default. On the second tab, select events and set up filters, as already shown above. I also recommend adjusting the column order in the trace; this saves time when analyzing queries. For example, I find the following order more convenient.

Now, when creating a new trace, you can simply specify the required template; after that, on the second tab, all filters and events are filled in automatically.

Of course, far from all the uses of this wonderful tool are shown here. If there is interest from the audience, the collection of articles on this topic can be extended in the future.

05/16/2000 Itzik Ben-Gan

Just as reconstructing a crime scene helps find the perpetrator, tracing database activity lets you identify bottlenecks and eliminate them.

The "Catch the Event" article in the previous issue of the magazine described the architecture of the SQL Server 7.0 tracing system and showed how to graphically define a trace in SQL Profiler. This time we will talk about how to recreate traces using SQL Profiler, and how to define automatic start through extended trace stored procedures. With such a strong foundation, you will be able to expertly use SQL Profiler and stored procedures for a wide variety of investigations, from long-running queries to complex deadlocks.

Preparing for trace replay

With SQL Profiler, you can replay saved traces to debug problematic applications, create realistic scenarios for test cases, tune databases, and more. If you want to replay a trace, some preparatory work is required. First of all, you need to define the trace to capture specific events and data columns in addition to the ones you are interested in; capturing these extra events and columns ensures that all actions are repeated exactly as they originally happened. Second, you should save the trace results to a file, a table, or a SQL script.

Any replay needs to capture the Connect, Disconnect, ExistingConnection, RPC:Starting, and SQL:BatchStarting events. In addition, when replaying server-side API cursors (that is, server cursors controlled by API cursor functions), you must capture the CursorExecute, CursorOpen, and CursorPrepare events. To replay server-side prepared SQL statements, add the Exec Prepared SQL and Prepare SQL events. Replay also requires columns containing the following data: application name, binary information, connection ID or server process ID (SPID), database ID, event class, event subclass, host name, numeric information, server name, SQL user name, start time, and text information.

It is important to note that during replay the captured events are not simulated; they occur again. Therefore, be aware that the original trace most likely changed your database. For example, replaying a trace that includes an INSERT statement may produce a duplicate key in a table. To avoid such problems, restore the database to its previous state if the trace is replayed on the source server (that is, the server on which the original trace was run).

If the replay will be performed on a different server, it is important to ensure that the database on that server is in the same state as on the original server. In this case, be sure to use the same user names, permissions, and database identifiers that were used on the source server.

Using the same identifiers requires special skill and experience, especially because Microsoft discourages accessing the sysdatabases system table directly, which is necessary to change database identifiers. You can match the database IDs another way: copy the user database files from the source server to the replay server, and then restore a backup of the master database from the source server onto it. An alternative method is to restore a backup of the user database from the source server onto the replay server, and then restore a backup of the master database onto the same server. In both cases, the database files on the replay server will be located in the same directories as on the source server, and the master database system tables will contain the original database identifiers. To get rid of these problems entirely, simply remove the database ID column from the trace and set the default database of each user captured in the trace accordingly.

You can also control the synchronization level and the playback speed of the script. Select Settings from the Replay menu to open the Replay SQL Server dialog box. The Synchronization Level setting, which controls synchronization within a connection, can take the following values:

• Full synchronization. The default value. All events that occurred in one connection are replayed in their original order.
• Partial synchronization. Events in one connection may start before events already recorded in other connections.
• No synchronization. Events can occur immediately after the end of the previous event in the same connection, that is, without any synchronization within the connection.

The replay speed setting, Replay Rates, can be set to one of the following values:

• As fast as possible. The default value. The next event starts as soon as the previous one ends.
• Maintain interval between events. Preserves the original time interval between events.
• Maintain relationship to start time. Events occur at the same points in time, relative to the beginning of the replay, as in the original trace.

Organizing trace replay

Suppose you want to replay an execution trace of server-side prepared SQL statements, which are Transact-SQL (T-SQL) statements sent by a user to the server via ADO, OLE DB, or ODBC. SQL Server 7.0 executes server-side prepared SQL statements using the sp_prepare and sp_execute pseudo stored procedures, which are called by the client application.

The sp_prepare call causes SQL Server to prepare T-SQL statements for execution by compiling them and caching the execution plans. When sp_execute is called, SQL Server executes the previously cached plans, possibly more than once. Each such call fires the RPC:BatchStarting, Prepare SQL, and Exec Prepared SQL events, and it is for this reason that these events must be included in the trace definition.
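Although these pseudo stored procedures are normally called by the client driver, they can be invoked directly from a query window, which is a convenient way to generate the corresponding trace events. A minimal sketch querying a system table:

-- Prepare a parameterized statement; the plan is compiled and cached.
DECLARE @handle int
EXEC sp_prepare @handle OUTPUT,
     N'@id int',
     N'SELECT name FROM sysobjects WHERE id = @id'

-- Execute the cached plan more than once with different parameters.
EXEC sp_execute @handle, 1
EXEC sp_execute @handle, 2

-- Release the prepared handle.
EXEC sp_unprepare @handle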

SQL Profiler contains several sample trace definitions that can be used as templates, among them example number 6, "T-SQL for Replay", which is intended for trace replay. This example is useful for specifying the trace output to be generated during replay. To open saved trace output for playback, select Open from the File menu and choose the file, table, or SQL script in which the information collected during the trace was stored. You can control playback using the options shown in Table 1; they are available both as Replay menu items and as buttons on the toolbar.

Applying Extended Stored Procedures

Some tracing features are not available from SQL Profiler. These include starting a trace on a schedule, when a specific event occurs, or when SQL Server starts up. In addition, SQL Profiler cannot be configured to send trace results to the Windows NT or Windows 2000 application log. To perform these functions, and to gain more programmatic control over traces, you can use a set of extended stored procedures known collectively as xp_trace*.

Let's look at how these stored procedures are used, taking as examples a trace-starting procedure, sp_start_mytrace, and a trace-stopping procedure, sp_stop_mytrace. The first stored procedure, sp_start_mytrace, defines the trace events, data columns, and filters, and creates a queue to hold the captured events. It then retrieves the events from the queue and writes them to a file. The sp_start_mytrace procedure communicates with the event queue and keeps track of its state through an integer queue handle that the procedure obtains when the queue is created. sp_stop_mytrace uses this handle when it needs to stop the queue.

Keeping track of the queue handle is not an easy task. Although there are many ways to obtain its value, the simplest and most practical is to create a table that records all traces and their queues, along with the start time of each trace, the ID of the user who started it, and the name of the computer from which it was launched. Listing 1 shows the statements that create a table called activetraces. To see which traces are currently running, just view this table. To stop a trace, simply query the table for the corresponding queue handle.
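Listing 1 itself is not reproduced here; the following is a minimal sketch of what such a table might look like, with column names that are purely assumptions:

-- Hypothetical reconstruction of the activetraces bookkeeping table.
CREATE TABLE activetraces
(
    queue_handle int          NOT NULL PRIMARY KEY, -- handle of the event queue
    trace_file   varchar(255) NOT NULL,             -- destination .trc file
    start_time   datetime     NOT NULL DEFAULT (GETDATE()),
    started_by   varchar(128) NOT NULL DEFAULT (SUSER_SNAME()),
    host_name    varchar(128) NOT NULL DEFAULT (HOST_NAME())
)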

Stored procedure to start tracing

Let's walk through these two stored procedures to see how a trace is started and stopped. The stored procedure that starts the trace takes four optional input parameters. The first two, @spid_filter and @dbid_filter, limit the information collected during the trace to a specific server process (identified by its ID, the SPID) and a given database; if they are not set, the trace collects data on all processes and databases. The @email_address parameter assigns an email address to which detailed information about the progress of the trace is sent. If this parameter is not specified, sp_start_mytrace only prints information to the screen; if it is given but the address is invalid, the stored procedure issues an error message and exits. The last parameter, @filename, specifies the name of the file to which the collected trace information is written. If it is omitted, the information goes by default to the file c:\mytraceN.trc, where N is the number of the queue handle. This file-naming convention allows several traces to be captured at the same time without one of them locking the results file for itself.

To test the trigger, change the file properties:

ALTER DATABASE testdb MODIFY FILE (NAME = 'testdb_dat', MAXSIZE = 30MB)

You will receive a message that the properties of the file have been changed:

File properties changed:
Statement: ALTER DATABASE testdb MODIFY FILE (NAME = 'testdb_dat',
MAXSIZE = 30MB)
NT User Name: Gandalf
Application Name: MS SQL Query Analyzer
SQL User Name: NA
Time: 2000-11-22 14:15:28

It is always very difficult to find out which events led to a deadlock. However, SQL Profiler provides special events that can greatly facilitate the "investigation". For example, you can use tracing to track the occurrence of the Lock:Deadlock event, whose appearance means that a deadlock has arisen. It provides the server process ID (SPID), the ID of the transaction chosen as the deadlock victim, the time of the deadlock, the application name, and the user ID. The Lock:Deadlock Chain event, raised for each participant in the deadlock chain, is extremely convenient: it lets you find out the process identifiers (SPIDs) and transactions involved.

You can record the IDs of the transactions involved in the deadlock, then group the trace results by transaction ID and analyze only those transactions. Another approach is to send the trace results to a table and then use queries to filter them by SPID or transaction ID.

To generate a deadlock situation, create two tables, t1 and t2, each with a single integer column, and insert into each one row containing the value 1. Define a trace that captures the following events: Lock:Deadlock, Lock:Deadlock Chain, and the corresponding statement start and end events (RPC, SP, or SQL), chosen according to the expected source of the deadlock. In our example, we only need the SQL events SQL:StmtStarting and SQL:StmtCompleted.
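A sketch of the setup just described:

-- Two single-column tables, one row each, as the deadlock playground.
CREATE TABLE t1 (col1 int)
CREATE TABLE t2 (col1 int)

INSERT INTO t1 (col1) VALUES (1)
INSERT INTO t2 (col1) VALUES (1)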

In addition to the default data columns, add a column to capture the transaction ID, plus any other columns of your choice. Set the trace filter to match the ID of the database you are working with. After that, open two server connections from Query Analyzer. On the first connection, run:

BEGIN TRANSACTION
UPDATE t1 SET col1 = 1

On the second connection, run the following transaction:

BEGIN TRANSACTION
UPDATE t2 SET col1 = 1
SELECT * FROM t1
COMMIT TRANSACTION

Finally, on the first connection, execute the statements:

SELECT * FROM t2
COMMIT TRANSACTION

Stop the trace and open the trace file. Look for the Lock:Deadlock Chain events and note the IDs of the transactions involved. Group the output by transaction ID and expand the corresponding transactions. The output will look similar to Screen 1.

SQL Server Enterprise Manager includes a wizard that can help you set up traces, including those used to find the cause of deadlocks. To define a trace with the Create Trace Wizard, open Enterprise Manager, select Wizards from the Tools menu, then open the Management category and select Create Trace Wizard.

Final remark

The tracing capabilities provided by SQL Profiler, along with the extended trace stored procedures available in SQL Server 7.0, allow you to debug database behavior. Whether it's just monitoring the state of your SQL Server environment or solving application performance problems, it's time to put your knowledge into practice.

Itzik Ben-Gan [email protected] holds MCDBA, MCSE+I, MCSD, MCT and SQL Server MVP certifications. He is a Senior Lecturer in SQL Server courses at Hi-Tech College in Israel and Chairman of the Israel SQL Server Users Group.
