Is there any way for SQL Server Profiler to remember column widths? - sql-server

I've set up SQL Server Profiler to report only the events relevant to me and it works great. However, what sucks is that it doesn't remember the column widths. I could set them up properly and save the trace as a Profiler template, but the next time I start a trace, it forgets the column widths.
Is there a trick, am I missing something simple, or does it simply not save these values?

I am certain this feature does not exist. I recommend you request it as a new feature; it seems like many people would vote for it.
Here is the link to reach out to the SQL Team and suggest it.

Related

Visual Studio 2013: Open Existing Report in Report Wizard?

I have not seen this one yet, so I am hoping one of you can help me out here.
I am in VS 2013, have a Report Server solution going, and when I click Add New Report, it opens up a report in VS. I can use the query designer to add fields and such, but once I "finish" that wizard, I cannot, for the life of me, figure out how to get back into the wizard.
The reason I ask is because right now, if I want to simply add a field, I need to alter the query, alter the XML, and hope for the best. Ideally, I'd just be able to pop back into the Wizard to grab the fields (and their aliases), and then use the designer view to actually manipulate my tables.
Any ideas? Or am I stuck re-creating the report every time I need to add a field?
Sorry if anything is unclear - this is literally my first day using VS to write reports, and honestly, I feel pretty good so far!
Thanks!
Use SQL Server Data Tools. It's very easy to use and will allow you to create and enhance your reports. Here is a link to it:
https://msdn.microsoft.com/en-us/library/mt204009.aspx

Can you ignore columns when doing a SSDT data compare?

I frequently use the SSDT data comparison tool to sync up database data from our integration environment to our production environment. However, I typically run into scenarios where columns should be ignored and never synced. Even if I review the data differences that SSDT finds, the sync operation happens at the row level, and unfortunately I need to control syncing at the cell level.
Anyone have any good solutions?
I just came across this requirement in my project, and in VS 2019 I found the solution.
Create a new data comparison (Tools / SQL Server / New Data Comparison...), and
in the wizard, after selecting the data sources, click Next to have the wizard enumerate the tables.
Select the desired table, expand it, and unselect the fields you do not want to include in the comparison.
That's it. It was easy to overlook the little expand arrow in front of the table name...
The solution is to use an alternative tool; SSDT doesn't support this at the moment. It would certainly be nice to have.
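If switching tools isn't an option, one manual workaround is to compare just the columns you actually want to sync with plain T-SQL and handle those rows yourself. A rough sketch, assuming both databases are reachable from one connection, and using made-up database names and a hypothetical Customers table where an audit column (ModifiedDate) should be ignored:
-- Rows whose synced columns differ, or that exist only in integration
-- (ModifiedDate is ignored simply by leaving it out of the column list)
SELECT CustomerID, CustomerName, Email
FROM IntegrationDB.dbo.Customers
EXCEPT
SELECT CustomerID, CustomerName, Email
FROM ProductionDB.dbo.Customers;
-- The reverse direction catches rows that exist only in production
SELECT CustomerID, CustomerName, Email
FROM ProductionDB.dbo.Customers
EXCEPT
SELECT CustomerID, CustomerName, Email
FROM IntegrationDB.dbo.Customers;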

How to automatically refresh the SQL Server Management Studio intellisense cache?

In SQL Server Management Studio, if the user creates new columns, tables, etc., the user needs to refresh the IntelliSense cache with Ctrl+Shift+R.
Is there a setting or some way to automate this so the cache is refreshed automatically right after creating a new table, etc.?
I don't know that there is a way to automatically refresh the cache without manually pressing Ctrl+Shift+R (or equivalent, e.g. the menu). The reason is that when the app talks to the database too much, people complain that it is too chatty (perhaps someone could write a simple add-in that does this - using something like query notifications to indicate something has changed in the metadata views?). You may also want to consider an add-on like SSMS Boost (though with a quick keyword search I see no mention of this functionality on its feature page) or SQL Prompt (but also I don't see any evidence of this functionality in their documentation, only a mention here of an experimental feature).
I believe SQL Server Data Tools does this, but I don't know if you can control the frequency of the refresh. And using that tool may also require a significant shift in how you think about database development.

How to help QA team access the right database?

Where I work, it very often happens that a developer/QA session goes like this:
(This is in reference to SQL Server 2005)
QA: I get Invalid object name 'customers'
DEV: huh? can u send me the exact SQL statement you used?
QA: select * from customers
DEV: hmm. (after some thought) Are you sure you're using CUSTDB?
QA: yes
DEV: (after figuring out that QA was using CUSTDB_PRODUCTION) Please add "USE CUSTDB" and then tell me what you get with that SQL.
QA: Oh, sorry, I was using wrong DB.
The tab text for the SQL window shows which database the query is running against, but how do you ensure that QA pays attention to it?
I will admit that I have made this mistake of using the wrong DB many times. I don't tend to read the text in the tab.
What are your experiences with this type of scenario? Have you found a way to help mitigate such a problem?
If your QA team uses SSMS for testing, try the window coloring options in SSMS Tools Pack, a free add-in for SSMS. That way you can immediately differentiate between servers.
If that's not an option, don't allow QA to access the production server at all; they shouldn't be able to anyway.
I think you need to formalise how QA will report an error.
You need to specify a set of information that they'll supply with every error report, including:
what they were doing (exactly)
their configuration (including the database!)
time/date (so you can match stuff in logs)
how to repeat it (if repeatable)
etc. You can act on that immediately, or log it in an incident tracking system and come back to it later (in which case the above is invaluable, otherwise it's all lost).
The above can be as simple as an email draft/template (and the configuration part can be captured with a query like the one below). But you need to be rigorous about this, otherwise (as you've discovered) you're going to go round in circles, perhaps without all the salient information you require.
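For the configuration point in particular, rather than having QA describe their setup from memory, have them paste the output of a quick query into every report; these are all standard T-SQL functions:
-- Capture exactly where, and as whom, the failing statement was run
SELECT @@SERVERNAME  AS server_name,
       DB_NAME()     AS database_name,
       SUSER_SNAME() AS login_name,
       GETDATE()     AS captured_at;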
If QA are allowed access to both live and dev databases using SSMS, then there must be some level of accepted responsibility on their part and/or some training provided on your part.
They have been given a tool that allows them to ask questions of the data, but are asking the wrong questions and then complaining to you - if I were the DBA, I'd simply remove their access until they could demonstrate they knew what they were doing! I sympathise that that might not go down too well, but at least threatening to do it might make them think a little for themselves.
Think of this question as 'someone is doing something wrong'
There are 2 simple answers:
remove their ability to 'do something wrong'
train them to do it right
On the same note as Mladen Prajdic, you can colour-code query windows in SQL 2008 SSMS too.
Personally, I use fully qualified names in all queries (server.database.owner.table - well, I only use the server part if I'm deliberately using a linked server) because I move from database to database so much. If you specify the database in the queries to be run, they still work if you're connected to a different database on the same server or if you have a linked server. Have your QA adopt this as their standard if they are writing their own queries; if you are writing the test queries, then you should be specifying the database name in the query, not through a USE statement.
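A minimal sketch of what that looks like, using the CUSTDB database from the dialogue above and a hypothetical Customers table (the linked server name is made up):
-- Three-part name: hits CUSTDB no matter which database the query window is connected to
SELECT c.CustomerID, c.CustomerName
FROM CUSTDB.dbo.Customers AS c;
-- Four-part name: only when deliberately going through a linked server
SELECT c.CustomerID, c.CustomerName
FROM TESTSRV.CUSTDB.dbo.Customers AS c;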

How Do I Follow A T-SQL Transaction?

Are there any programs that will allow you to follow a SQL transaction through to its end? For instance, say I've inherited a rather complex SQL database with a data dictionary. The data dictionary is pretty good, but not as good as, say, SQL Doc. I've taken a look at Red Gate's Dependency Tracker and, while that does a very good job of putting things together (triggers, stored procedures, tables, views, etc.), it still does nothing for following a transaction through its various tracks.
What I'd like is software that will allow me to enter a transaction and based on everything in my db, tree it out visually to let me see what's happening during the transaction. Does that make sense or do I need to elaborate?
Edited to elaborate: While the answers below were very good, they're not exactly what I'm looking for. The front end of the SQL database was built with PowerBuilder. I can use SqlSpy during front-end data input and that helps to follow transactions through, but it's all raw T-SQL and, as you can imagine, reams of output. I'm looking for something that would do the same thing but lay it out visually, with the ability to tree out the detail if I wanted to.
If you open a connection in Visual Studio, you can run sprocs and queries in debug mode, i.e. you can step through a query with IntelliSense and see the values of the variables.
The SQL Profiler will allow you to trace a transaction through, and see exactly what is going on. It is very flexible and allows you to be shown only the events you are interested in, but it won't show it in the tree format I think you're asking about.
The profiler is installed as part of the SQL Server Client Tools (along with Enterprise Manager and Query Analyser).
I guess this approach is doable: set up a test environment and run the transaction. Once the transaction is committed, compare the database in the test environment with the production database.
You can use a tool such as OpenDBDiff to do this comparison, or look for other tools by googling.
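If a full comparison tool is overkill, you can also snapshot just the tables the transaction touches and diff them afterwards; a rough sketch, assuming a hypothetical dbo.Orders table in the test environment:
-- Before running the transaction: copy the current state aside
SELECT * INTO dbo.Orders_Before FROM dbo.Orders;
-- ... run the transaction under test ...
-- Rows added or modified by the transaction
SELECT * FROM dbo.Orders
EXCEPT
SELECT * FROM dbo.Orders_Before;
-- Rows deleted or modified by the transaction
SELECT * FROM dbo.Orders_Before
EXCEPT
SELECT * FROM dbo.Orders;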
I haven't used this yet, but it appears to do what you're asking: apply IDE-style debugging tools to T-SQL statements. I would be interested to see how it works out.