
SQL Server NOLOCK / ROWLOCK Directives To Improve Performance


Just recently, Rick Osborne told me about the NOLOCK SQL directive. I had never heard of it, but Rick told me it would help improve the performance of my queries. After some quick searching, I found a great page on Sql-Server-Performance.com. Apparently, SQL Server puts a locking mechanism around all data access and manipulation to prevent problems like dirty reads (the reading of uncommitted data). I was totally not aware of this - SQL Server just worked, and that was good enough for me.

These locks, though (as with most locking), come with a performance hit. That is usually a good trade-off, as it ensures the integrity of your data. However, there are times when you just don't care. Some tables, especially lookup tables, are not updated often (if ever), and locking around these brings unnecessary overhead.

As Rick Osborne (and the page above) explains, using the SQL directives NOLOCK and ROWLOCK can circumvent SQL Server's locking mechanism and speed up queries. Here, I demonstrate using NOLOCK on a SQL join query that gets blog entry information:

SELECT
	b.id,
	b.name,
	( t.id ) AS tag_id,
	( t.name ) AS tag_name
FROM
	blog_entry b
INNER JOIN
	blog_entry_tag_jn btjn WITH (NOLOCK)
ON
	b.id = btjn.blog_entry_id
INNER JOIN
	tag t WITH (NOLOCK)
ON
	btjn.tag_id = t.id

Notice that I am using the NOLOCK directive on the blog_entry_tag_jn and the tag tables. The tag table pretty much never gets updated, and the blog_entry_tag_jn table (joining entries to tags) gets updated ONLY when I add or update a blog entry. Due to the low frequency of updates, the requirement for locking on these tables is (next to) pointless. By using the NOLOCK directive, I am asking SQL Server to ignore the locking mechanisms surrounding those tables and proceed directly to data retrieval. Theoretically, this should provide a small performance gain. Sweet!
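
The query above only demonstrates NOLOCK. As a rough sketch of the other directive mentioned in the title, ROWLOCK hints that SQL Server should take row-level locks for a write rather than escalating to page or table locks, so readers of other rows are not blocked (the view_count column below is made up purely for illustration):

UPDATE
	blog_entry WITH (ROWLOCK)
SET
	-- Hypothetical column, used only to illustrate the hint.
	view_count = ( view_count + 1 )
WHERE
	id = 1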

Want to use code from this post? Check out the license.

Reader Comments

153 Comments

Anecdotally, where this really starts to shine is when you have a big table (say, over one million rows) with lots of indexes. Doing an INSERT into such a table can be time-consuming, and doing hundreds of inserts at once can be noticeably slow while indexes are updated, etc.

We've got a table in our data warehouse at work that is wide (200+ columns) and tall (2M+ rows) and well-indexed (20+ indexes, with 4x more data in indexes than data in the table). Even a single insert can take 5-15 seconds. The web-based query tool for this table can build some pretty heinous queries with dozens of CASE statements and joins to lookup tables, as the users can filter on as many columns as they want, aggregating however they want, etc, etc.

There's a scheduled job that updates this table with the latest data every 10 minutes; it takes about 2 minutes to run and locks the table the whole time. Obviously, having the table unqueryable for 2 out of every 10 minutes is not an option. I can open 2 query windows, run one with NOLOCK and one without, and the NOLOCK query, no matter how heinous, will always return in under 3 seconds. The query without it will spin for up to 2 minutes, until the update is complete, before returning.
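
To sketch that side-by-side test (the table and column names below are hypothetical stand-ins for our warehouse table):

-- Window 1: no hint. This query waits on the update job's locks and can
-- spin until that transaction commits.
SELECT
	COUNT( * ) AS row_count
FROM
	warehouse_fact
WHERE
	sale_year = 2006

-- Window 2: NOLOCK. This query reads around the locks (dirty reads are
-- possible) and comes back right away.
SELECT
	COUNT( * ) AS row_count
FROM
	warehouse_fact WITH (NOLOCK)
WHERE
	sale_year = 2006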

Of course, we can only get away with this because we accept that anyone running queries for the up-to-date data might occasionally get dirty half-completed transactions. But, for the type of data that it is, that's a perfectly acceptable risk.

15,674 Comments

Rick,

That sounds really cool and really demonstrates a HUGE performance change. I have to say, though, I am very curious as to what kind of table has over 200 columns :) I am sure you can't talk about it too much (proprietary and all), but I don't know how you even keep track of so many columns.

Thanks for the tips. I am working on "patching" a reporting system that has hundreds of thousands of rows, and any little bit of performance gain I can get is great. Eventually, we are hoping to move the whole system over to its own DB and implement something called "Star Schema" or something like that (of which I know very little). Until then, though, tweaking it here and there is all I can do.

153 Comments

The table is a denormalization of our company's sales transaction history, sales forecasts, budgets, and current product inventory. (Eventually we'll also add in historical inventory levels.) And when I say denormalized, I mean heavily denormalized.

For example, each date is broken into its parts: an INT column for the year, a SMALLDATETIME for the year+month, then year+month+day, then the complete timestamp, the quarter, the day of month, etc. And think about how many dates you have in a sales transaction system: when it was ordered, when it was requested to be delivered, when the warehouse got the order, when the warehouse picked it off the shelves, when the warehouse put it on the FedEx truck, when the customer actually got it, when we sent out the invoice, and when we received the payment.

That sounds horribly inefficient to have all those different date types for all those dates, right? But we have a bunch of different users in a bunch of different departments that have their own needs for querying and reporting on the data.
- Marketing only cares about the year-level data, because they like to say things like "#1 widget-maker in 2006!".
- Sales wants reports by quarter so that reps can get their commissions.
- Accounting wants reports by month so that they can pay our bills.
- Distribution wants reports by the day to see how efficient they are at getting product off the shelves and out the door.
- Forecasting wants reports by a bunch of different dates because they need to figure out what the cycle is, so they can figure out how many we are going to sell next time that cycle comes around.

It turns out that having 6 or 8 denormalized date fields for each fact is much faster than doing things like DATEPART. It takes up significantly more space, but space is cheap.
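
To sketch that trade-off with hypothetical table and column names - the first query groups on a precomputed year column, the second has to derive the year from the raw date on every row:

-- Grouping on the precomputed (and indexable) year column:
SELECT
	order_year,
	SUM( sale_amount ) AS total_sales
FROM
	sales_fact WITH (NOLOCK)
GROUP BY
	order_year

-- Versus deriving the year per row across millions of records:
SELECT
	DATEPART( yyyy, order_date ) AS order_year,
	SUM( sale_amount ) AS total_sales
FROM
	sales_fact WITH (NOLOCK)
GROUP BY
	DATEPART( yyyy, order_date )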

And that's just dates. Think about what other types of information you might have in a sales system: who bought it? Where are we shipping it? Are they in a particular market or channel? What are they buying? What is the brand of the product? Type? Category? Who sold it? What was their commission? Did they pay freight? How was it shipped?

And that's just sales. Add in inventory: where is the product in the warehouse? Which warehouse? Which aisle? How far down the aisle? How many levels up in the stack? How many complete palettes do you have on hand? Complete shrink-wrapped cases? Individual pieces? Who picked it off the shelf? Who packed it into a box? Who put it on the truck?

Add in budgets: What currency was the budget done with? Which company is the budget for? What object account is the budget for? What ledger? What subledger?

Add in forecasting: How many did the client request? How many did we actually ship them? How long was the product backordered? What is the build time or lead time for the product? How much material do we need to buy to build the products for next month? Next quarter? Next year?

Now, for the prestige of my little magic trick. Nothing up my sleeves ...

Board-level Executives love picking two or three of these questions, each from a different area and department, and asking to have them tied together in the same report: "What is the variation in the budget due to product that sat on the shelves too long last month because we built too much two months ago based on a forecast from last quarter that was based on sales from four quarters ago that were artificially inflated by a sales rep that was having a better-than-average month because of a sale on a product that we had too much of and had to get rid of?"

Can you imagine the logistical nightmare of trying to get all those departments to work together to produce such a report? Instead, you can take my monster denormalized table, a query on it that takes about 3 minutes to build, then export to Excel and spend another 15 minutes cleaning it up and formatting it, and get the report on the CEO's desk in under a half-hour.

I think that's pretty darn cool.

All of that denormalization, in a nutshell, turns a hundred or so nice, normalized tables in a relational system ("snowflake" schema), into one monolithic table with all of the facts/measures and just a few lookup tables ("star" schema). DBAs like snowflakes because the data is rational and logical and clean. (And, that's how you were taught to do it: normalize mercilessly.)

But it turns out that end-users don't grok relational theory and glaze over when they see 100 tables that make perfect sense to you and me. Instead, they don't care where in the snowflake the data is coming from, they just know that they want to see what they want to see. And, to add insult to injury for the DBA, it turns out that snowflake schemas just aren't as efficient to query as star schemas. If you are a CF coder and a DBA, throw salt on the wound when you figure out that writing dynamic aggregated queries in a snowflake schema is hella harder than writing those same dynamic queries in a star schema. It's dirty and nasty and smelly, but it gets the job done and done well.
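
As a rough, hypothetical sketch of that shape (every name below is made up): one wide fact table carries the measures and denormalized attributes, a couple of small lookup tables act as the dimensions, and a typical report is one join per dimension plus an aggregate over the fact:

-- The monolithic fact table: measures plus denormalized date grains.
CREATE TABLE sales_fact (
	sale_id INT PRIMARY KEY,
	product_id INT,
	customer_id INT,
	order_year INT,
	order_year_month SMALLDATETIME,
	order_date SMALLDATETIME,
	sale_amount MONEY
)

-- A small lookup ("dimension") table.
CREATE TABLE product_dim (
	product_id INT PRIMARY KEY,
	brand VARCHAR( 50 ),
	category VARCHAR( 50 )
)

-- A typical star-schema report: join to the dimension, aggregate the fact.
SELECT
	p.brand,
	f.order_year,
	SUM( f.sale_amount ) AS total_sales
FROM
	sales_fact f WITH (NOLOCK)
INNER JOIN
	product_dim p WITH (NOLOCK)
ON
	f.product_id = p.product_id
GROUP BY
	p.brand,
	f.order_year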

15,674 Comments

Rick,

What you are doing sounds exactly like what we want to do in the next huge phase of our client's application! I am now totally excited not only to learn something new but also to know that it works and works well. We have reports that run by joining like 6 tables and performing aggregates and dealing with a tremendous amount of data... YIKES! Thanks for the really awesome explanation of what is going on over there.

I hope that when the time comes I can hit you up for some advice as we journey into the world of star schema; but that won't be for months.

Also, nice use of "hella" :D That was hella sweet.

153 Comments

Certainly.

My first piece of advice, which I wish someone had given me when I started on this project 18 months ago, would be to avoid MS Analysis Services and cubes and MDX. There are going to be a lot of indicators that the product will solve all of your problems and make your life easier and possibly even cure cancer ... but ...
1. It's a very steep learning curve. MDX may superficially look like SQL, but it is fundamentally different. Think wolf in sheep's clothing.
2. Once you get to the top of that curve you realize that MDX has made your life much harder than it used to be with just SQL. Things that are no-brainers in SQL are nigh-impossible in MDX. Conversely, things that are easy in MDX are not necessarily hard in SQL. I have yet to find anything in MDX that I couldn't replicate in SQL.
3. The performance just isn't there.
4. Cubes are much harder to change than you are led to believe. If you don't have your dimensions exactly right the first time, or if you want to be able to add dimensions more often than every 6 months, then cubes are a bad idea.
5. Doing MDX through CF means using a SQL Server to flatten the data down from a multidimensional cube to a 2-D query. This is much slower than you are led to believe.

On the upside, the Excel tie-in for OLAP is triple-sweet. It generally just works, and end-users can be trained on how to use it in under a half-hour. That's going to look very tempting, but I assert that it's not worth all of the other headaches.

Stick with SQL and denormalization to a star schema. It's blindingly fast, easy to modify later, and won't give you grey hairs.

15,674 Comments

Rick,

That sounds like great advice. I am not familiar with the products to which you are referring; however, I am sure I will become familiar soon. I will pass this on to our team.

Thanks for the time and effort you put into your comments. It is highly appreciated.

1 Comments

Just another tip.

Save yourself some time in stored procedures where you're always implementing NOLOCK by putting the following at the top of the proc:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

http://msdn2.microsoft.com/en-us/library/aa259216(SQL.80).aspx

It does the same thing and saves you from having to remember to always add NOLOCK. It's also nice if you want to quickly go back through your procs and add this behavior.
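
A minimal sketch of that approach, using a made-up reporting procedure; note that an isolation level set inside a stored procedure reverts to the caller's setting when the procedure returns:

CREATE PROCEDURE dbo.GetBlogEntryReport
AS
BEGIN

	-- Every SELECT below now reads uncommitted data, as if each table
	-- reference carried a WITH (NOLOCK) hint.
	SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

	SELECT
		b.id,
		b.name
	FROM
		blog_entry b;

END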

15,674 Comments

Steve,

Interesting tip. I have never set the isolation level within the query itself; I have only done that as part of the CFTransaction tag. I assume it does the same thing, but I have no idea.

1 Comments

Guys, I am not sure I agree with your denormalized database schema. A 200-column table is very slow; instead, you could have used covering indexes (which no longer work in a denormalized table) and combined them with (NOLOCK). If the reports are still slow, get more RAM for your SQL Server. If the company has enough money to spend, replicate the tables to a second server and do all the reporting from there. I work for a website that handles 70 page requests per second, and we still don't have the problems you have, so I'm very surprised. I reckon you are getting yourself into more problems with this denormalized table.

153 Comments

There is a world of difference between a query used on a high-traffic website and a query run in a data warehousing situation. The former produces queries that are very predictable and, therefore, as you said, easy to optimize with indexes. But in a data warehouse, by its very nature, you are not going to be able to optimize for every query that gets thrown at it.

It's not a question of the amount of memory - it's all about the processor cycles it takes to spin through millions of records. Unless you have more money than God, in most large-scale data warehousing situations you know that you're not going to be able to get enough memory to fit everything at once. So, you optimize for the processor. If you have a ton of queries that use a full date and a ton of queries that use just the month part and a ton of queries that use just the year part, wasting processor cycles doing stupid DATEPART tricks on millions of rows is only going to slow you down.

Hence: denormalization.

Seriously. This is what I do every day for a living. When you have more than hunches and guesses, come on back and we'll chat again.

15,674 Comments

@Rick,

The client of mine that is going to need a reporting system just finished raising their capital investments. We are waiting for some big IT guy to sign on over at their place, and then we are moving forward with the new database in star schema. It's very exciting! I can't wait to learn all about this.

15,674 Comments

@Marco,

That is interesting to know. We have not yet moved to SQL 2005 at work. And, to be honest, I usually forget all about NOLOCK. But, I have started to use CFTransaction, which I had never done before :)

1 Comments

Rick:

I totally agree with you in regards to denormalising databases in certain circumstances. A key component of an application that I am building for a client is a search engine designed to query millions of records. During the optimization phase, I tried lots of things to speed up searches, including creating indexes. One of the things I tried was denormalising the tables. Boy was I surprised at the results!

Denormalising the tables reduced my query times by at least 7 seconds. I know this may not sound like a big deal, but when it involves a high-traffic site it really makes a difference. I have learnt first hand the cost of complex joins on normalised tables. To all DBAs out there: don't get me wrong. I am a great believer in normalising databases, but in certain circumstances, denormalising the databases can make a real difference in search speeds.

1 Comments

About SQL 2005:

The deprecated features list includes "Specifying NOLOCK or READUNCOMMITTED in the FROM clause of an UPDATE or DELETE statement when applied to the target table of the statement."
The SELECT statement is not mentioned.

54 Comments

Wow, this conversation just changed my way of thinking and opened my eyes a great deal!!!

We've a statistical database here which is growing itself a little on the chubby side of late, and we're seeing a bunch of issues slowly rearing their heads, from deadlocks through to slow-performing queries.

I'd found that I had a set of report queries that were working just fine in development but, when moved to production, were running like absolute dogs, and I couldn't figure out why. It seems to me that the key difference is that on our production server the database is written to on a regular basis, unlike in development; the report queries are hanging around waiting for locks to be released, and that's what's causing the dog performance.

Denormalization is something that excites me very much and something I'd not really considered in the past. At the moment, I'm a little unsure as to how we might be able to do that, but certainly we have a few columns that have been ripped out into subtables which could really be tucked up in bed together to keep warm.

When we get around to rethinking our structure I'll be playing with these ideas a great deal.

Thank you guys, excellent talk.

Rob

2 Comments

I have also seen MS SQL Server hold locks for a long time, even when inserting just one row; this has included the msdb.dbo.syssessions table.

1 Comments

Hi

I have 4 tables.
monthly_header, monthly_detail, yearly_header (as summary of monthly_header), and yearly_detail (as summary of monthly_detail)

If I have a transaction (a calculation process that can run simultaneously from more than one user, but uses those tables) like this:

<cfloop query="qry">
<cftransaction>
<!--- currenty using cftransaction with no isolation attribute --->

update monthly_header with (rowlock)

select ... from monthly_header with (rowlock) ...

...
...

select ... from yearly_header with (rowlock) ...

delete from monthly_detail with (rowlock) ...

insert monthly_detail ...

update yearly_detail with (rowlock) ...

update yearly_header with (rowlock)

...
...

select ... from monthly_header with (rowlock) ...

select ... from yearly_header with (rowlock) ...

</cftransaction>
</cfloop>

If the process is running simultaneously, can we be sure that no data is locked by another process?

I'm using CF 9 and MSSql 2005

Thank you

triyadhi

15,674 Comments

@Triyadhi,

According to the documentation, if you leave out the isolation attribute, ColdFusion uses the default as defined by the given database system. My guess is that as long as you put the CFTransaction tag in place, you should be pretty good. But, I am not 100% sure.

1 Comments

I have a similar situation to Rick's, but instead of a data warehouse it's a high-traffic website (a job board). One of the features on the search page is the ability to filter jobs by any number of categories and also to see the current job count for all other available categories, which are themselves filtered by the current selection.

You might have seen this sort of thing on http://jobs.guardian.co.uk/jobs/arts-and-heritage/uk/

For example, the provided link is filtering on the categories Arts & Heritage AND UK. For all the other fields on display (Sector, Hours, Salary, Contract, etc.), each category's "job count" must be filtered by UK and Arts & Heritage. If I then selected a salary range, the categories would have to filter by that particular range as well.

A site can have literally thousands of categories across multiple fields, and calculating the current count for the filter in question for every category on the fly against the normalised system would be very time-consuming, involve lots of joins, and require NOLOCK hints on each table, as the underlying data is constantly being updated.

So, to accomplish this task in a slightly different way, I created a MATRIX table that holds all the possible category combinations (CROSS JOIN) so that doing a COUNT or SELECT is a very simple and quick process.

One of the good features of SQL 2005 is the SYNONYM feature, which allows you to reference tables through a pointer that can be changed at any time. I make use of this so that I always have two MATRIX tables: one is being used by the website and the other is in the process of being populated.

A scheduled job runs every 15 minutes to rebuild the new MATRIX table alongside the existing table, and I use a SYNONYM to point to the current table in use.

My website never cares which table it is using, and because the table that is being referenced never gets updated, I can safely use WITH (NOLOCK) to aid performance.

I just thought I would mention it, as it's a different approach: toggling a LIVE and a BUILDING table with a SYNONYM instead of having just one table that gets updated and reported/selected from at the same time. It does mean the data can be up to 15 minutes old, but in my case that is less important than being able to deliver fast, heavily filtered results to users.
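
A rough sketch of that LIVE/BUILD toggle, assuming made-up table, synonym, and column names (the real population query would CROSS JOIN the category fields; a simplified count is shown here, and there is no ALTER SYNONYM, so the re-point is a drop-and-recreate):

-- One-time setup: two physical copies plus the synonym the website queries.
-- CREATE SYNONYM dbo.JobMatrix FOR dbo.JobMatrix_A;

-- Scheduled job, every 15 minutes: repopulate the idle copy (a real job
-- would alternate between _A and _B).
TRUNCATE TABLE dbo.JobMatrix_B;

INSERT INTO dbo.JobMatrix_B ( category_id, job_count )
SELECT
	jc.category_id,
	COUNT( * )
FROM
	job_category jc
GROUP BY
	jc.category_id;

-- Re-point the synonym at the freshly built copy.
BEGIN TRANSACTION;
	DROP SYNONYM dbo.JobMatrix;
	CREATE SYNONYM dbo.JobMatrix FOR dbo.JobMatrix_B;
COMMIT TRANSACTION;

-- The website only ever queries the synonym.
SELECT
	job_count
FROM
	dbo.JobMatrix WITH (NOLOCK)
WHERE
	category_id = 7;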

Thanks

1 Comments

To the gentleman indicating that cubes and MDX are too much to wade through, I must disagree.

It does have a very steep learning curve, but the benefits (especially when combined with Excel) are amazing.

It does, however, require changes to the database in order to provide the basis for easier reporting (rather than having long MDX strings to calculate certain things, it is better to adjust the DW to have them as dimensions).

We have been able to scale past 250 million rows in one of our cubes, and it provides results within 4 seconds. This includes reporting on large dimensions (250k customers, 120k items). We have also been able to scale to multi-company (a consolidated data warehouse for stock, sales, finance, stock ageing, and exposure ageing across 11 companies from 9 countries, with a single unified reporting currency).

Thanks for the tip about NOLOCK; I will evaluate it and see if we can use it in our code generator!

1 Comments

NOLOCK is something you also need to be very careful with. It is NOT a 'tool' and should not be used as such. It should be a last resort for overcoming design issues that don't have a resolution due to high refactor cost. Implementing NOLOCK as a tool will, at some point, result in dirty reads and potentially cause bugs that can eat up days of lost time and money to track down, because they're simply not traceable. Want to find that bug? Look at your NOLOCKs. Yes, NOLOCK can be useful, but it should be used with caution and with the understanding that you are signing an agreement stating you're okay with untraceable 'bugs'.
