Optimize the Performance of SQL Queries



SQL queries can be optimized in the following ways:

  • Add missing indexes
  • Check for unused indexes
  • Reduce Table Size
  • Simplify Joins
  • Use SELECT Fields FROM Instead of SELECT * FROM
  • Avoid using SELECT DISTINCT
  • Use TOP to sample query results
  • Run the query during off-peak hours
  • Avoid Running Queries in a Loop, use Bulk Insert/Update instead
  • Use EXISTS instead of IN

Add missing indexes

In SQL Server, when you execute a query, the optimizer generates an execution plan. If it detects that a missing index could be created to improve performance, the execution plan suggests it in the warning section. The suggestion tells you which columns of the current query should be indexed and how much performance could improve as a result.

You can also find out which tables need indexes by analyzing graphical query plans. The thicker the arrow between operators on the query execution plan, the more data is being passed. When you see thick arrows, consider adding indexes to the tables being processed to reduce the amount of data passed along them.
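As a sketch, suppose the plan for a query that filters a hypothetical dbo.Orders table on CustomerID flags a missing index (the table, column, and index names below are illustrative assumptions, not output from any real plan). SQL Server Management Studio can generate a CREATE INDEX template from the plan's Missing Index Details, which looks roughly like this:

```sql
-- Illustrative index for a query that filters on CustomerID and
-- returns OrderDate and TotalDue; INCLUDE makes the index covering,
-- so the query can be answered from the index alone.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);
```

Treat such suggestions as a starting point: review them against your existing indexes before creating anything.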

Check for unused indexes

You may encounter a situation where indexes exist but are not being used. One of the reasons for that might be implicit data type conversion. Let’s consider the following query against a table with a VARCHAR column (the names here are placeholders):

SELECT *
FROM TestTable
WHERE VarcharColumn = 1;

When executing this query, SQL Server must reconcile the two data types. Because INT has higher precedence than VARCHAR, it implicitly converts every value in VarcharColumn to INT and runs the comparison only after that. Since the conversion is applied to the column rather than to the constant, an index on VarcharColumn won’t be used. How can you avoid this? We recommend passing the value in the column’s own type, or converting it explicitly with the CAST() function, which converts a value of any type into a specified data type, so that the column side is left untouched. Look at the query below.

SELECT *
FROM TestTable
WHERE VarcharColumn = CAST(@param AS VARCHAR(10));

Reduce Table Size

To ensure you only receive the information you need, you can filter your data. Filtering data will reduce table size and optimize SQL queries’ runtime. Popular methods for SQL table optimization and query speed improvement include:

  • Providing a limited range of dates for time series data
  • Limiting the dataset in a subquery
  • Avoiding duplicate data
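The first two points can be sketched as follows (the table and column names are illustrative assumptions):

```sql
-- Limit time series data to a 30-day window instead of the full history
SELECT OrderID, CustomerID, TotalDue
FROM dbo.Orders
WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());

-- Limit the dataset inside a subquery before using it further
SELECT o.CustomerID, o.TotalDue
FROM (
    SELECT CustomerID, TotalDue
    FROM dbo.Orders
    WHERE OrderDate >= '2024-01-01'
) AS o;
```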


Simplify Joins

Sometimes, when a query joins tables, it drastically increases the result set’s row count, which can lead to a slow execution time. Before joining tables, try to reduce their size, as explained above.

Something as simple as changing the order in which you join tables can also help optimize SQL queries, although modern optimizers often reorder joins themselves. When joining two tables, start with the one that will return the fewest rows after filtering.
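A sketch of both ideas, assuming hypothetical dbo.Orders and dbo.Customers tables: filter the larger table in a derived table first, so the join only touches the reduced row set.

```sql
-- Shrink Orders with a filter before joining it to Customers
SELECT c.CompanyName, o.OrderID, o.TotalDue
FROM (
    SELECT OrderID, CustomerID, TotalDue
    FROM dbo.Orders
    WHERE OrderDate >= '2024-01-01'
) AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;
```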

Use SELECT Fields FROM Instead of SELECT * FROM

By using SELECT fields FROM instead of SELECT * FROM, you can narrow down the data fetched from the table during a query, increasing your query’s speed. The command SELECT * will fetch all the data from your table, whereas specifying fields can reduce query runtime by ensuring you only receive the necessary data.
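Using the SalesLT.Product table that appears elsewhere in this post, the difference looks like this:

```sql
-- Fetches every column, whether or not the application needs it
SELECT * FROM SalesLT.Product;

-- Fetches only the required columns, reducing I/O and network traffic
SELECT ProductID, Name, ListPrice FROM SalesLT.Product;
```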


Avoid using SELECT DISTINCT

The SELECT DISTINCT command in SQL is used to fetch unique results and remove duplicate rows from the result set. To achieve this, it effectively groups the rows together and discards the duplicates, and that grouping is a costly operation. So, instead of relying on DISTINCT to remove duplicates, you can often include more attributes in the SELECT list so that the rows are naturally unique.


SELECT DISTINCT Name, Color, StandardCost, Weight FROM SalesLT.Product


But if we include enough columns to make each row unique, we can write the query without DISTINCT, as follows:

SELECT Name, Color, StandardCost, Weight, SellStartDate, SellEndDate FROM SalesLT.Product


Use TOP to sample query results

The SELECT TOP command is used to set a limit on the number of records to be returned from the database. To make sure that your query will output the required result, you can use this command to fetch several rows as a sample.
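For example, to preview a handful of rows from the SalesLT.Product table before running the full query:

```sql
-- Return only the first 10 rows as a sample of the result
SELECT TOP 10 ProductID, Name, ListPrice
FROM SalesLT.Product;
```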

Run the query during off-peak hours

Another SQL tuning technique is to schedule query execution during off-peak hours, especially if you need to run multiple SELECT queries against large tables or execute complex queries with nested subqueries, looping queries, etc. When you run a heavy query against a database, SQL Server locks the tables you are working with to prevent concurrent use of resources by different transactions. That means other users are not able to work with those tables. Executing heavy queries at peak times therefore leads not only to server overload but also to blocking other users’ access to the data.

One popular mechanism to avoid this is the WITH (NOLOCK) hint, which allows the user to retrieve data without being affected by the locks. The biggest drawback of WITH (NOLOCK) is that it may read uncommitted (“dirty”) data.
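A minimal sketch of the hint applied to the SalesLT.Product table:

```sql
-- Read without waiting for or taking shared locks;
-- may return uncommitted ("dirty") rows
SELECT ProductID, Name
FROM SalesLT.Product WITH (NOLOCK);
```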

Avoid Running Queries in a Loop, use Bulk Insert/Update instead

Loops should be avoided because they require running the same query many times. Instead, opt for set-based bulk inserts and updates.
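As a sketch (the dbo.Colors table here is an illustrative assumption), a single multi-row INSERT and a single set-based UPDATE replace running one statement per iteration of a loop:

```sql
-- One statement inserts three rows, instead of three INSERTs in a loop
INSERT INTO dbo.Colors (Name)
VALUES ('Red'), ('Green'), ('Blue');

-- One set-based UPDATE instead of updating rows one at a time
UPDATE dbo.Colors
SET Name = UPPER(Name)
WHERE Name LIKE 'G%';
```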

Use EXISTS instead of IN

The IN operator is more costly than EXISTS in terms of scans, especially when the subquery returns a large dataset. So we should try to use EXISTS rather than IN when fetching results with a subquery.

Let us see this with an example,

SELECT ProductNumber, Name, Color FROM SalesLT.Product
WHERE ProductID IN
(SELECT ProductID FROM SalesLT.ProductDescription)


Now let’s use the EXISTS operator:

SELECT ProductNumber, Name, Color FROM SalesLT.Product AS p
WHERE EXISTS
(SELECT ProductID FROM SalesLT.ProductDescription AS d WHERE d.ProductID = p.ProductID)


We executed the same query with an IN subquery and with EXISTS, and observed that the EXISTS version took about half the time of the IN version, with a much lower number of physical and logical scans.

Please comment and share if you like this post, and tell us how we can improve our posts. Thanks.