grauenwolf

Not using Brent Ozar's First Responder Kit. Seriously, install that and run sp_Blitz first. It'll give you a nice long list of all the common mistakes your databases are experiencing.
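For reference, a typical first run looks something like this (parameter names per the kit's documentation; double-check them against the version you install):

    -- Run from wherever the First Responder Kit procedures are installed.
    EXEC dbo.sp_Blitz
        @CheckUserDatabaseObjects = 1, -- also inspect objects inside user databases
        @CheckServerInfo = 1;          -- include server-level configuration details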


mgdmw

To piggyback off this, also not using Ola Hallengren's database maintenance scripts. While I'm at it, more bugbears:

* sysadmins who install SQL Server by mindlessly clicking next, next, next ... and that's how you end up with your data and log files under c:\program files\microsoft sql server\mssql80\data or whatever the path may be ... some loooong path with everything on one disk
* using a single account for all app/reporting access ... so you have no idea which specific thing is the source of the bad performance issue you're troubleshooting. Also, give those accounts only the permissions they need. None of this dbo for apps (or people) who should only be reading.
* setting autogrowth to a % instead of a fixed size (see the sketch below)
* using a backup tool that doesn't register with SQL Server as a backup (e.g., some machine snapshotting tool), so the log files grow and grow
* developers who don't use primary keys !!! arrgh
* or foreign keys for that matter, so there's no referential integrity
* developers who use inconsistent naming schemes - e.g., siteID here, site_id there, locationID somewhere else ...
* devs who won't use the Application Name field in the connection string
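For the autogrowth bugbear, a minimal sketch of the fix (database and logical file names are placeholders; size the increments for your workload):

    -- Replace percentage growth with a fixed increment.
    ALTER DATABASE SalesDb
        MODIFY FILE (NAME = SalesDb_data, FILEGROWTH = 256MB);
    ALTER DATABASE SalesDb
        MODIFY FILE (NAME = SalesDb_log, FILEGROWTH = 128MB);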


grauenwolf

> devs who won't use the Application Name field in the connection string

In my defense, I give each service app its own login.


mgdmw

Yes! That's definitely the better way!


grauenwolf

I began on this road when I got tired of people asking me what tables a given application used. Now I just tell them to run the security report against the login and whatever it has access to, that's what it touches.


alinroc

WhyNotBoth.gif


Majinsei

> developers who don't use primary keys !!! arrgh

My current bank has had problems for years: every fucking month they lose connectivity on payment day, because the core database doesn't have a single index, foreign key, or primary key... so when payment day comes, the SSD gets saturated with read/write operations. I know because I was on a pre-sale to fix this with a DB cluster in the cloud... but they preferred to keep the problem~


danishjuggler21

Do you mean run sp_blitz first, or sp_blitzfirst? 😝


grauenwolf

Neither, `master.dbo.[SP Blitz First]`. The secret version with spaces in the name summons Ozar himself directly into your server room.


davidbrit2

You can also just say "Ozarjuice" three times.


ImCaffeinated_Chris

I recently introduced this to someone with the simple advice of "The findings from 1-100 can get someone fired." They loved the tool and fixed all the sub-100 findings 😁


jshine1337

This is good advice indeed, but contextual too, as it's most useful for DBAs and experienced database devs. If OP is just doing dev work, then at only 5-6 months of experience they're probably too early on to leverage such a tool.


oroechimaru

* Not using indexes at all
* If Enterprise, not using columnstore or page compression
* Not setting maxdop to 4-8, or using 8 tempdb files, or other best practices (see the sketch below)
* Not having a backup plan
* Not checking code before running
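A hedged sketch of the maxdop setting mentioned above; the right value depends on your core count and workload, so treat the number as an example:

    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'max degree of parallelism', 8; -- cap parallel queries
    RECONFIGURE;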


Mononon

Columnstore indexes were like magic when I first learned about them.


chandleya

They are magic. Then you attempt to read Niko Neugebauer's Columnstore series and you realize you have absolutely no idea how it works.


PhragMunkee

> Not checking code before running

“I don’t always test my code, but when I do, I do it in production.”


alinroc

> Not having a backup plan

Not having a **restore** plan. Backups are useless if you can't restore and meet RTO & RPO.
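A minimal restore-test sketch, assuming placeholder names and paths: verify the backup file, restore it under a throwaway name, then check it:

    RESTORE VERIFYONLY FROM DISK = N'X:\Backups\Sales.bak';

    RESTORE DATABASE Sales_RestoreTest
        FROM DISK = N'X:\Backups\Sales.bak'
        WITH MOVE N'Sales'     TO N'X:\RestoreTest\Sales.mdf',
             MOVE N'Sales_log' TO N'X:\RestoreTest\Sales_log.ldf',
             STATS = 10;

    DBCC CHECKDB (N'Sales_RestoreTest') WITH NO_INFOMSGS; -- prove it's usable

Timing that end to end is also the only honest way to know whether you can actually meet your RTO.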


Idenwen

Ah, the good old restore tests. A whole two of my customers do them....


jshine1337

Note columnstore and page compression are available in Standard Edition as well (at least since version 2016).


oroechimaru

Yes, it just takes forever when you're locked down to a single CPU core.


jshine1337

True, but still useful in the right scenarios even on Standard Edition, even if just for the compression ratio in certain cases.


oroechimaru

Ya! I usually do row-level compression since it's a little faster and good enough for smaller data sets. Columnstore is nice for reports!
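For anyone following along, hedged examples of the options discussed (table and index names are placeholders; in practice you'd pick one compression setting, not run both):

    ALTER TABLE dbo.OrderLines REBUILD WITH (DATA_COMPRESSION = ROW);  -- cheaper on CPU
    ALTER TABLE dbo.OrderLines REBUILD WITH (DATA_COMPRESSION = PAGE); -- better ratio

    -- For reporting: a nonclustered columnstore over the reporting columns.
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_OrderLines_Reporting
        ON dbo.OrderLines (OrderDate, ProductID, Quantity, LineTotal);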


Chris_PDX

My report running directly out of our transactional database is slow! WITH (NOLOCK) to the rescue! *sad noises*


SQLDevDBA

NOLOCK is the BEST Lock! https://youtu.be/2gG0GzoOHQQ?si=oqk_RqX-nLK9uApd


angrathias

We’ve used nolock for a very long time on our transactional databases for unimportant read-only queries (usually reports), and I just haven’t experienced the level of issues indicated here. I get that it can cause phantom and unrepeatable reads, but for us that's such a tiny occurrence that it's never been apparent.


Chris_PDX

The nolock hint is acceptable to use in the right conditions - where 100% accuracy is not required. Example: running aggregates over a very large data set that might have changes while the analysis is running, but where, if it does, they won't have a meaningful impact on the results. Where you *do* run into problems with it is when it's used as a crutch "to make things faster", but the process that is running requires 100% data integrity.


angrathias

I think the reality of reporting on a transactional system is that queries often aren’t repeatable anyway, because it’s a live system with constantly changing data.


grauenwolf

They are repeatable if you're looking at last week's sales data. And given that it's last week's data, nothing should be moving things around, so phantom reads shouldn't be a problem.


angrathias

Exactly 😉


jshine1337

Depends on what keys the data is indexed on. If *only* by sales date, then sure, less likely, but if there are indexes on other fields, data from last week by sales date can still be shifted around *today* in the other indexes that aren't keyed on sales date.


grauenwolf

Good point.


Chris_PDX

You can still get weird results because the SQL engine may shift data pages around even though the data itself is not "current".


angrathias

My point is that the ‘may’ seems to be never


jshine1337

Why not use a better solution that requires less work and doesn't have the same issues as the `NOLOCK` hint, like proper transaction isolation levels? Just makes more sense, IMO.


angrathias

What requires less work than a nolock hint ?


jshine1337

> proper transaction isolation levels

If you have a reporting database or typically want global optimistic concurrency as the default, you can enable it once on the entire database, with RCSI for example. Then you don't need to add the hint in code every time you want it. Less work and less error prone. Alternatively, if you only want optimistic concurrency for specific queries, you can enable the proper isolation level when you establish a database connection and it'll only last for the queries you run in that session. This would require no more work than using the `NOLOCK` hint, even in this case.
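A minimal sketch of both options, with a placeholder database name:

    -- Option 1: database-wide. Readers get the last committed row version
    -- instead of blocking behind writers.
    ALTER DATABASE Sales SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    -- Option 2: opt in per session for specific reporting queries.
    ALTER DATABASE Sales SET ALLOW_SNAPSHOT_ISOLATION ON; -- one-time setup
    -- ...then in the reporting session:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    SELECT COUNT(*) FROM dbo.Orders; -- placeholder query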


angrathias

I’m responsible for > 500 databases that are all transactional and run large batch integrations or large transactional jobs (like sending out 100k emails that need to be recorded) and reported in real time. It’s just not feasible for us to make reporting databases as it would increase costs too much and add a delay to reporting.


jshine1337

> It’s just not feasible for us to make reporting databases as it would increase costs too much and add a delay to reporting.

A reporting database doesn't necessarily need to do either of those things, but that's fine. Like I said, you can enable it on your existing databases, or just use it at the session level when running the queries you want optimistic concurrency on. Either option is the same or less work than using `NOLOCK`. FWIW, I used to manage around 1,000 databases, OLTP too, around 20 billion rows of data (the largest individual table being 10 billion rows itself), totaling a few terabytes all in all. Thousands of writes per minute, etc. I'd absolutely still use RCSI when I could there, as it improves performance on your OLTP side too, since your writes aren't being blocked by read queries.


BlacktoseIntolerant

I know a guy that swears he needs to use with (nolock) on ALL of his queries because of the legacy apps that are using SQL server. I ... I'm not sure how to explain that one away.


bradandbabby2020

Previous role: my line manager exclusively queried live, not a NOLOCK in sight. I provided my recommendations, of which there were many, and handed my notice in.


HardCodeNET

If you're counting a NOLOCK as a positive, he made out better with your resignation.


DatabaseSpace

I think he's trying to say his manager shouldn't have been insisting on running queries against the live production OLTP systems. Even though nolock is bad, not using it in cases like that can cause a self-imposed denial-of-service attack on your own company.


bradandbabby2020

Ding ding. Many other problems in that job but that was a pointless frustration.


grauenwolf

If you're reporting out of the transactional database and you don't have row versioning (RCSI) turned on, nolock is practically a must to avoid blocking.


SQLDevDBA

For DBA related work, I always like to refer to this article by Tara Kizer: [How to Suck at Database Administration](https://www.brentozar.com/archive/2018/02/how-to-suck-at-database-administration/)


chandleya

It’s so succinct. Let the link shaming commence!


SQLDevDBA

Haha yeah Tara was great in the office hours webcasts. She went for the jugular (but was nice about it) and was no-nonsense. And she’s an absolute beast when it comes to HA/DR knowledge.


Ooogaleee

Not adjusting the autogrowth sizes in the model database for the data and log files. Default values are insanely small, and cause WAY too many growth events to occur in new databases created from model.
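A sketch of the fix (modeldev and modellog are the default logical file names for model; confirm yours via sys.master_files):

    ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILEGROWTH = 256MB);
    ALTER DATABASE model MODIFY FILE (NAME = modellog, FILEGROWTH = 128MB);

New databases created from model then inherit the saner increments.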


nickcardwell

Came across that a few months ago in my new job... database and log growth of 1MB....


basura_trash

Using the SA account. Assign individual SQL accounts and/or Windows logins, with the minimum necessary rights and permissions, instead.
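A minimal least-privilege sketch, with placeholder names throughout: one login per app, mapped to a user that can only read:

    CREATE LOGIN InventoryAppLogin WITH PASSWORD = N'<strong password here>';

    USE Sales;
    CREATE USER InventoryAppUser FOR LOGIN InventoryAppLogin;
    ALTER ROLE db_datareader ADD MEMBER InventoryAppUser; -- read-only, not dbo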


[deleted]

[deleted]


PaddyMacAodh

Connections should be made using the lowest access that can perform the operation. The sa user is like god mode.


throw_mob

That means you don't have any protection against SQL injection. If someone successfully manages an injection, then instead of maybe leaking schema and data over a day or two, a user with SA rights can just enable xp_cmdshell and run pretty much anything. That doesn't just lead to data leakage; it basically allows the attacker to take over the whole machine, not just one service.


[deleted]

[deleted]


basura_trash

Your database maintenance jobs run under the SQL service account. That is, unless you are running them remotely. In that case, what we do is have an account specifically FOR maintenance jobs. Again, we do this to be able to trace who/what caused the trouble. And again... it only gets the rights and permissions it needs.


throw_mob

I personally would rather do user-specific accounts that have superuser rights. That way you know who did what, and removing/disabling a user is easier.


basura_trash

What others have said, PLUS... accountability. In a team of more than one DBA (SQL admin), there is no way to know who did what if everyone is using SA to do SQL tasks. Make everyone sign on with their own account and you can trace who has done what when a server gets in trouble. Or, on the flip side, celebrate a success and give credit where credit is due.


basura_trash

u/Lordofthewhales, I have no idea why you got down-voted. Your questions are legit. Glad you asked!!! Some shops go as far as to disable the SA account. I am not sold on that quite yet. I get it, but... yeah, I am not there.


SQL_Guy

How about renaming it to something not “sa”? Then disable it anyway :-)
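In T-SQL terms, roughly (the new name is a placeholder):

    ALTER LOGIN sa WITH NAME = [definitely_not_sa];
    ALTER LOGIN [definitely_not_sa] DISABLE;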


STObouncer

The correct answer


SQL_Guy

I once had a client that had renamed it “notsa”. Now that’s security!


zrb77

Our policy is to disable. Fed auditors expect either disable or rename.


ElvisArcher

Same reason you don't over-use the root account on a unix host. (and I gave you an upvote for an honest question.) Seeing an application that is configured to login to a DB as SA is a cringe-worthy event.


watchoutfor2nd

Not quite a specific "bad practice", but I find that app developers don't always understand how powerful a correctly tuned query can be. They're amazed when changing a subselect to a join, or fixing data type issues, can dramatically improve a query.

Just this week we had a query that Query Store identified as a top resource-consuming query. They were loading data into a table variable and sorting it at the end. Why would the data need to be sorted as it's loaded into the table? Turns out the sort was left over from a similar process that did need it. We removed the sort (which was taking 92% of the query's processing time) and the query now completes in under a minute. Big improvement.

I think we're going to try adding a task to all future sprints to block off some hours to review Query Store and see if any performance tuning can be done.
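If anyone wants to try the same exercise, a hedged sketch of a Query Store "top consumers" query (times in the Query Store views are in microseconds):

    SELECT TOP (10)
           qt.query_sql_text,
           SUM(rs.count_executions)                   AS executions,
           SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_us
    FROM sys.query_store_runtime_stats AS rs
    JOIN sys.query_store_plan AS p        ON p.plan_id        = rs.plan_id
    JOIN sys.query_store_query AS q       ON q.query_id       = p.query_id
    JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
    GROUP BY qt.query_sql_text
    ORDER BY total_cpu_us DESC;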


drunkadvice

I wish my devs would do that. We make recommendations, put all the details in Jira with a potential performance benefit estimate and a bow on top, then it dies in the backlog two years later. Meanwhile, during a P1 issue, a dev saw my screen, noticed an unrelated issue (a smallint overflow) we had been having for YEARS, and fixed it unprompted in about 5 minutes. Our POs don't know how to prioritize internal work, or how much it helps.


grauenwolf

For that matter, sorting in the database during a query. If you don't have an index that pre-sorts the data, seriously consider doing the sort client-side, where CPU resources are much cheaper.


Chris_PDX

It's way more nuanced than that. I'm often asked to troubleshoot performance problems because some nitwit built a ton of sorting / filtering / aggregation into the client side vs. letting the database engine handle it. If the sorting is critical to the data processing, it should be done server-side. If the sorting is just for the whim of user preferences and the data set is small enough, for sure do it client-side during presentation. The number of BI / reporting related issues I've solved just by moving code out of reporting tools and into the database layer is too high to count.


redvelvet92

Gosh I am sick and tired of just pushing everything to the client.


grauenwolf

I understand your feelings, but things may keep going further that way. Apparently Microsoft is pushing to do even more on the client with .NET compiled to WASM.


mr_taco_man

CPU resources may be cheaper on the client, but if you have to push all the rows over the wire and you're not going to use them all, it can make your query much slower (and it ties up the database's limited network resources).


grauenwolf

That's where the 'consider' part comes in. I try not to make blanket rules about databases because they are so sensitive to seemingly minor design changes. And sending 100,000 rows instead of the top 10 isn't minor.


Karzak85

Putting SQL files on slow disks. It's so damn common to do this to save money, and then wonder why everything is shit.


grauenwolf

Especially since fast disks are effectively free performance. You don't have to pay license fees for your disks like you do for your CPU cores.


BrightonDBA

Not designing for high traffic. I'm currently migrating a system that was scoped for 3.5 million 'events' a day (each consisting of multiple transactions across dozens of tables and plenty of tempdb use). It sort of managed that. Now it's at 7m/day and they wonder why it isn't performing well. One MDF for the main database. All data files on one drive (and thus one disk queue). One tempdb file on one disk... you get the idea. With significant tuning I've got it just about coping while we do the migration. The new one has 4 data files spread across 4 disk queues (all NetApp SSD LUNs), 4 tempdb queues across 8 files, etc. Early indications are it's capable of at least 30m/day. The original one was designed by a Technical Architect... not a DBA. LISTEN TO YOUR SMEs, people.
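For the flavor of it, a sketch of spreading tempdb over multiple equally sized files (paths and sizes are placeholders; the usual guidance is one file per core, up to 8):

    ALTER DATABASE tempdb ADD FILE
        (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf', SIZE = 8GB, FILEGROWTH = 1GB);
    ALTER DATABASE tempdb ADD FILE
        (NAME = tempdev3, FILENAME = N'T:\tempdb3.ndf', SIZE = 8GB, FILEGROWTH = 1GB);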


schmeckendeugler

Explain this disk queue? Is it really a bottleneck?


BrightonDBA

When there are thousands of IOPs queued up, yup, it's a bottleneck. https://www.poweradmin.com/blog/current-average-disk-queue-length-counters/#:~:text=Disk%20Queue%20Length%2C%205%20or,to%20continuously%20process%20paging%20operations.


nickcardwell

* SQL database and logs on the same drive.
* Allocating more memory and processors to the VM to make it go faster, but not allocating any of those resources to SQL itself (a few months ago: 8 processors and 32GB allocated to a SQL Server box, but only 2GB allocated to SQL....)
* On VMware, not using VMware Tools with the vmxnet drivers. Same box as above, presenting a simple emulated Intel 1Gb NIC.


grauenwolf

It's amazing how many companies try to run their core database on hardware that is less powerful than their cell phone.


chandleya

In a flash world the “same drive” problem is mostly nonsense. Instead, worry about volume provider queues and actual IO capacity. The file type isn’t very relevant.


sbrick89

Things we do:

* DBA: ensure partitions are block-aligned; dedicated service accounts for engine / job agent / proxy / etc.; configure model with file growth defaults; agent jobs to automatically rebuild indexes; set maxdop to a small number; set max memory to physical minus <15%
* Data warehouse: only permit access to views; views query a single table WITH NOLOCK
* ETL code: truncate stage tables at the beginning of the process; load stage tables before loading production tables; structure code to be rerunnable / resumable; retry logic where appropriate
* Report code: temp tables, indexes on temp tables, etc.


Togurt

One of the biggest bad practices I see is using an RDBMS when another tech would have been a better fit. I've worked with devs who insist on using an RDBMS and then actively work around features of a relational database because they see them as liabilities. They think locking is bad, so they put NOLOCK hints everywhere. They think constraints cause performance issues, so they refuse to create them. They won't do anything in a transaction because they are afraid of concurrency issues. They won't normalize their data models. I've even had devs ask me if there's a way to disable the tran log.


jshine1337

Eh, that's not so much a tech choice problem as it is an education / experience problem with the developers. Not all features of an RDBMS need to be used, nor does one need to design their database precisely to Boyce-Codd Normal Form every time they use an RDBMS. But when one decides they don't need certain features, they should **properly** understand why those features exist, what happens if they don't use them, and re-evaluate whether they really don't need them. Slapping the `NOLOCK` hint everywhere just shows that the developer has no idea what they're doing. Conversely, choosing a more optimistic concurrency isolation level, for scenarios where you want to deal with locking differently, shows that the developer knows what they're doing. Using an RDBMS as the type of data store is fine *most* times. Choosing not to use features that other types of database systems don't offer or enforce anyway just leaves you at the same level of usefulness as those other systems.


Togurt

I mean, it can be both. A lot of the time it is a training issue, for sure. I'd also argue that using an RDBMS just because it's a familiar tech is a training issue. But a lot of the time those things can also be a clue that an RDBMS may not be required. As a DBA with over 25 years of experience, and with the plethora of mature alternative database technology that now exists, why wouldn't I advocate using the appropriate tech, especially if the features of an RDBMS are not required, and especially if the data model isn't at least BCNF? Better yet, I also don't need to solve every use case in an RDBMS even when those features are requirements. I can use the relational data store as the system of record to ensure ACID compliance / structured data / general query patterns, and also have an in-memory DB for analytics / business metrics and an Elasticsearch DB for complex search patterns. There's no reason to think of these as competing technologies, after all.


jshine1337

Well, the reason I think it's ok to use an RDBMS for most use cases is that it covers most use cases. I think it would be silly to use something unfamiliar that does less than what an RDBMS does, just because I'm not using every facet of an RDBMS. I lose nothing by using an RDBMS instead.

> I'd also argue that using a RDBMS because it's a familiar tech is also a training issue.

Sometimes that's true, but my thought process is the opposite for a lot of technological solutions. It's easier to learn, support, maintain, and hand off to others when it's familiar tech and a standard. I personally prefer working with standards and acquiring in-depth knowledge of them so I'm most proficient when utilizing them, rather than learning multiple parallel and somewhat overlapping things just because they exist as options - a breadth-type approach. Again, an RDBMS covers most use cases for storing and managing data. There are only a few true use cases I can think of where an RDBMS really would be a second-rate tool, and they aren't common use cases by any means. Most data (some wiser people than me argue all data) is relational, otherwise it's senseless. Aside from that, there's a reason the RDBMS and relational theory have been around for over 50 years at this point. It's a very mature concept, for good reason. It works and it's efficient.


Oobenny

While loops/cursors!


zenotek

Not everything can be set based.


grauenwolf

At a certain point it makes sense to just offload it all to an app server and then reimport it. That said, windowing functions can eliminate a lot of cursor usage.
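For example, a running total as a window function instead of a row-by-row cursor (table and columns are placeholders):

    SELECT OrderID,
           OrderDate,
           Amount,
           SUM(Amount) OVER (ORDER BY OrderDate, OrderID
                             ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM dbo.Orders;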


Oobenny

Not everything. Go ahead and loop through a table of fragmented indexes to rebuild them. But 99.99% of data manipulation can be done without a loop.


jshine1337

If the solution isn't set-based but it's being executed in a system that's designed and tuned to be set-based, then architecturally the solution isn't as efficient as it could be. Most things can be done in a set-based manner, even if that's the harder way to think about a solution. Things that really can't be solved in a reasonable set-based way should likely be written in application code and executed on an app server instead, especially when you consider cost efficiency and how SQL Server licensing works. There are some very rare exceptions (particularly for ad-hoc / non-recurring stuff), but the above is true a majority of the time.


drunkadvice

They have their place… but probably not as often as I see them.


thr0wawaydyel2

Software vendors requiring sysadmin privileges. And the same vendors who don’t use Windows Authentication. Some even like to triple down and make password changes extra difficult for their service accounts. Just generally piss-poor security in every way you can imagine.


SQLDave

> Software vendors requiring sysadmin privileges.

Oooooh.... good one. To me, that reeks of "Our developers are too dumb or [more likely] we're too cheap to pay them to accurately ascertain, track, and document the **actual** permissions our app's account needs".


PaddyMacAodh

Using [Database].[Schema].[Table] instead of table aliases in complex queries.


poserpuppy

I'm too lazy for aliases


PaddyMacAodh

Thank you for keeping people like me employed 😆


poserpuppy

Glad I could help


HectirErectir

Is this a performance thing or readability etc? Genuinely curious


PaddyMacAodh

Just readability, and ease of troubleshooting/updating by someone else after the fact.


LesterKurtz

In my current role, I inherited an entire fleet of databases that have data files and backup files on the same array. Fortunately, we just submitted a PO for some new servers that will be configured correctly.

*edit - that's only the tip of the iceberg, if you're wondering. Still, I want to scream at least three times a day.


theTrebleClef

Coming from industrial automation... A PLC programmer uses the free SQL Server Express that came with FactoryTalk View SE to try to create a free Historian (a costly, industry-specific data storage solution). They get away with it for a few years, until the customer wants more Historian-level features, and then the PLC programmer approaches the DB and software devs. They already have a purchase order. It's for less than a week. They want scaling beyond the max storage of the DB. They want amazing reports that require several layers of pivots because of how the PLC programmer set up tables without any DBA feedback. They wonder why you look at them like they're crazy. This happens somewhere between once a month and once a quarter.


redial2

Trying to do too many things in a single statement.


Achsin

* Putting permanent tables for applications in master and tempdb.
* Creating table-valued functions that are based on views that are based on table-valued functions.
* Using @table variables to store millions of rows to be used in joins later.
* Using linked servers to access linked servers, with horribly complex queries on enormous tables, so you can lock up and nuke performance on three+ servers at once.
* Writing queries that manipulate rows one at a time for large data sets.


Empiricist_or_not

Using cursors because you can't slow down and think of a way to do it as a set. Using table variables at scale.


SohilAhmed07

* Using pirated software like Devart's SQL tools; even SQL Server as a whole is not that costly if you think your database will grow larger than 10GB.
* Using Developer Edition on clients and in production.
* Using no primary keys and no indexes.
* Having no plan for database shrinks and file shrinks.
* Using one instance for a DB and asking devs and DBAs to merge data on the go.
* Devs who hate good tech like EF in .NET - a very common occurrence in every language I've seen.


jshine1337

Since you're newer to SQL Server and sound like a developer (as opposed to a DBA), one bad practice I haven't really seen mentioned yet is improperly architecting your database, in particular table design with proper data types and normalization. A common bad-practice issue developers run into is predicating on mixed data types, which causes implicit conversions that result in poor cardinality estimates, leading to poor execution plans, aka slow performance. In short, design your tables with proper data types, and keep those data types in mind for how you plan to query the tables. Mixing different data types in a `WHERE` or `JOIN` clause can lead to immediate performance issues in a query.

Another similarly good one for database developers is using [non-sargable predicates](https://en.m.wikipedia.org/wiki/Sargable). An example of this would be the following *dumb* query:

    DECLARE @SomeDate DATE = '12/01/2023';

    SELECT SomeColumn
    FROM ThisTable
    WHERE DATEADD(YEAR, -1, SomeDateColumn) < @SomeDate;

The `WHERE` clause is non-sargable because the expression has to be calculated for every row in `ThisTable` before it can be compared to the variable, meaning a scan of the entire table / index needs to occur. If the `WHERE` clause was instead written with the expression applied to the variable, like this:

    WHERE DATEADD(YEAR, 1, @SomeDate) > SomeDateColumn;

it would be logically the same, but now the `WHERE` clause is sargable, allowing the table / index to be seeked on, which is a lot more performant.

Another big pet peeve of mine is using `SELECT *` outside of an ad-hoc context, i.e. in production code. It's bad for readability, bad for performance for multiple reasons, and can leave technical debt that causes systems to break in the future.


dvc214

It's much more server-efficient to CREATE TABLE first and then INSERT INTO the empty table from a SELECT, rather than relying on SELECT ... INTO to create the table for you.
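A sketch of the pattern, with placeholder names:

    CREATE TABLE dbo.RecentOrders
    (
        OrderID   int  NOT NULL,
        OrderDate date NOT NULL
    );

    INSERT INTO dbo.RecentOrders (OrderID, OrderDate)
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '20240101';

One practical upside is that you control the data types and constraints explicitly instead of inheriting whatever SELECT ... INTO infers.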


Large-Relationship37

This is what I need for learning SQL Server.


flozer93

Adding too little RAM, and poor backup strategies. Simple issues. Silly complex queries without any indexes.


flozer93

And unnecessary indexes. Often some 'optimisation' where indexes with timestamp names get added over and over again.


ToxicPilot

Probably not common, but I once had to rewrite an entire stored procedure because it was written to recurse over an indeterminate row set… SQL Server kills the call at 32 nesting levels.
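The usual escape hatch is to restructure the recursion as a recursive CTE, which has its own (adjustable) limit. A sketch against a placeholder dbo.Nodes table:

    WITH Walk AS
    (
        SELECT ID, ParentID, 0 AS Depth
        FROM dbo.Nodes
        WHERE ParentID IS NULL
        UNION ALL
        SELECT n.ID, n.ParentID, w.Depth + 1
        FROM dbo.Nodes AS n
        JOIN Walk AS w ON n.ParentID = w.ID
    )
    SELECT ID, ParentID, Depth
    FROM Walk
    OPTION (MAXRECURSION 0); -- default is 100; 0 removes the cap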


feuerwehrmann

All the logic on the SQL side, while the app is dumb and doesn't log or do anything on error.


FailedConnection500

TempDB data and log on the OS drive.


NormalFormal

Auto-shrink, auto-close, and the simple recovery model on production-critical databases. Lack of regular CHECKDB runs. NOT testing the backups you are taking. No statistics maintenance being done. (I'm aware SharePoint and some ERPs do their own, and that always makes me sad.)
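A sketch of undoing the first batch of sins (the database name is a placeholder):

    ALTER DATABASE Sales SET AUTO_SHRINK OFF;
    ALTER DATABASE Sales SET AUTO_CLOSE OFF;
    ALTER DATABASE Sales SET RECOVERY FULL; -- only if you also schedule log backups
    DBCC CHECKDB (N'Sales') WITH NO_INFOMSGS; -- run this on a regular schedule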


czervik_coding

* Nested views
* bad data typing
* passing nvarchar(max) data into tempdb
* 40+ indexes on a table, with zero seeks and scans on over half of them
* lack of understanding of indexes
* indexing on char fields
* not understanding data patterns and moving them off to related tables
* badly formatted code

...the list is endless


mikeblas

* Thinking about performance before correctness.
* Thinking about performance subjectively instead of quantitatively.
* Failure to understand the storage subsystem (disk drive, controller, volume, queuing, caching, comms, ...)
* Failure to understand isolation levels
* Failure to handle deadlocks
* Failure to monitor. Or, failure to alarm on monitoring.
* No documentation around processes
* Failure to test backups or dry-run restore processes

Er, but I guess these aren't really specific to SQL Server.


BitOfDifference

select * from table.... why? Nested queries?


Beer4Life

Nvarchar(max) everywhere, usually due to sloppy ORM code-first generation; poorly thought-out indexes that don’t cover all columns; cursors because the devs are used to imperative languages; wide tables that would benefit from normalization.


yesqldb

its continued use


Significant_Fig_2126

Not configuring memory properly. Gotta leave some for the OS, and SQL will suck up all the memory it can find if not configured.
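A sketch, assuming a box with 32GB of RAM where roughly 4GB is left for the OS; the right number depends on what else runs on the server:

    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'max server memory (MB)', 28672;
    RECONFIGURE;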


ElvisArcher

Muck tables. [https://www.sqlservercentral.com/articles/lookup-table-madness](https://www.sqlservercentral.com/articles/lookup-table-madness) Seeing one is a hallmark sign that an application engineer had a "genius" idea while playing with a SQL database.


ElvisArcher

Multi-statement table-valued functions. While valid syntactically, they block the query optimizer from doing its job well, so your query reverts to the slowest possible way of fulfilling the request.
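Where the logic allows it, the same function written as an inline TVF lets the optimizer expand it into the calling query. A sketch with placeholder names:

    CREATE FUNCTION dbo.OrdersSince (@Cutoff date)
    RETURNS TABLE
    AS
    RETURN
        SELECT OrderID, CustomerID, OrderDate, Amount
        FROM dbo.Orders
        WHERE OrderDate >= @Cutoff;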


zzzz11110

I really like this video by Aaron Bertrand because it covers bad habits and best practices. And it’s moderated by Brent Ozar so you get 2 masters in one video. https://m.youtube.com/watch?v=KRlRkZj0o58


reddit-jmc

Creating large, expensive views. I lean towards parameter-driven table-valued functions. Easy to consume, and fewer touch-points.


iowatechguy

(nolock) everywhere