Name clash with sys.sysusers system view in SQL Server

(Note that this was on SQL Server 2008, but I have a colleague who reports the same issue on SQL Server 2014.)
I'm using a framework that supports multiple database back-ends, and our application has a table called sysUsers, which works fine in MySQL.
We now need to install it on SQL Server, and it appears that this name conflicts with a built-in system view. The system view is sys.sysusers and the application table is dbo.sysUsers.
I am aware that the case difference is irrelevant to SQL Server; however, the schema seems to be ignored for some reason.
SELECT * FROM sys.sysusers; returns records from sys.sysusers. This is wholly as expected.
SELECT * FROM sysUsers; returns records from sys.sysusers. This is surprising (I would have thought the local schema would take precedence) but perhaps explicable.
However, SELECT * FROM dbo.sysUsers; still returns records from sys.sysusers. This seems just plain wrong, as I am explicitly specifying the dbo schema.
I haven't found anything in the MS documentation that says these names are reserved.
I have tried renaming the table and hacking the code to use a different name, and everything works (i.e. this is nothing to do with the SQL Server integration within the application); the same results are seen when running the queries from the management console directly. This therefore appears definitely to be an issue with the conflicting table name and not a middleware error or syntax difference.
If this table name is reserved, why does SSMS allow me to create it? If it is not reserved, why does it not let me query it?
And how can I work around the problem without requiring application updates (as these would be a migration headache for other deployments)?

There are at least three workarounds, but none guarantee that no code has to be rewritten (except the one that's horribly unsafe):
Use a case-sensitive collation when creating your database (CREATE DATABASE Foo COLLATE Latin1_General_CS_AS). In this case, sysUsers will be a different object from sysusers in all circumstances. You can set a case-insensitive collation immediately after creating the database so your data doesn't end up case-sensitive, as that is probably not what the end users want (see the sketch after this list). Obviously this won't work if your application actually relies on case-insensitive object names, unless you rewrite your queries carefully. Note that this means all database objects, even those created afterwards, will have case-sensitive names, as this is embedded in the system tables on creation and can't be changed afterwards.
Use a schema other than dbo. The system table mapping occurs only for that schema, not any others. If your application uses its own schema exclusively, any sysUsers table you create in that schema will not be aliased to sys.sysusers. (This isn't documented anywhere, but it is how it works.) Note that for this to work, you must always specify the schema explicitly, even when it is the default schema for your user; otherwise you will again get the system table. (I'd consider this a bug, but it's probably a necessity because of the way old scripts assume sysusers resolves anywhere.)
Enable the Dedicated Administrator Connection, restart SQL Server in single user mode, switch the mssqlsystemresource database to READ_WRITE and DROP VIEW sysusers. This will remove sys.sysusers from all databases. Doing this will void your warranty, it will cause Microsoft Support to laugh at you if you come crying to them, it may make installing future Service Packs and updates impossible and is emphatically not recommended, but I'm mentioning it anyway, for science. No code anywhere should be using this view, but, you know, I'm not an engineer working on SQL Server itself.
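For illustration, a minimal sketch of the first two workarounds; the database, schema, table and column names are hypothetical:
-- Workaround 1: create the database with a case-sensitive collation, then
-- switch the default data collation back to case-insensitive. The catalog
-- collation used for object names is fixed at creation time and stays CS.
CREATE DATABASE Foo COLLATE Latin1_General_CS_AS;
GO
ALTER DATABASE Foo COLLATE Latin1_General_CI_AS;
GO
-- Workaround 2: keep application tables in their own schema and always
-- schema-qualify references.
CREATE SCHEMA app;
GO
CREATE TABLE app.sysUsers (id int PRIMARY KEY, name nvarchar(100));
SELECT * FROM app.sysUsers; -- resolves to the application table, not sys.sysusers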
Note, for completeness, that lowering the compatibility level is not a workaround: it has no effect on how these table names are resolved, even if it were a desirable approach (which it's not).
I consider the change made in SQL Server 2012 to ignore the dbo qualifier and resolve to these old, deprecated names anyway a mistake, and if it were up to me I'd at least make it possible to opt out of this behavior with a trace flag, but it's not up to me. You could consider opening an issue on Microsoft Connect for it, because the current behavior makes it needlessly complicated for RDBMS-agnostic code to run.

Related

How to find the SQL Server database name and the table names that an app uses from Access?

I have an MS Access app that uses data from different tables. I need to find the database and the tables being used by this app.
I am using SQL Server 2005.
Any help is greatly appreciated.
Hmm, there are quite a few moving parts here.
But, first up, it may well depend on what kind of Access application you are using (there are several types).
So, you want to first check what the file extension is here. In most cases, the extension will be .accdb (Access 2007 onwards).
However, you might be using an ADP (file extension .adp). So, this is the FIRST thing you need to check.
I mean, assuming this is an .accdb, then of course when you open Access, you should see a list of tables in the left nav pane, say like this:
And if you "hover" your cursor over a table - as I did above - then you can see that the tool tip shows the database server in question.
And of course, note that you see "different" icons for different types of tables.
So, tables without a -> (arrow) are LOCAL tables, not linked.
But if a table has the arrow, then the table is an external (linked) table.
And if you look, you can also see some orange-colored linked tables. Those are SharePoint tables (and once again, you can hover over those to see the location; I did not hover over those, since they actually are SharePoint tables on a site).
So, the above is the simplest approach to quickly see the tables.
As noted, the other way is to fire up the linked table manager.
And that will and should show the current table links, like this:
Note that the above does not show the server name, but only the table + database name.
Last but not least, you can hit Ctrl-G (debug window) and type in this:
? currentdb.TableDefs("tblAnimals2").Connect
(In the above, replace tblAnimals2 with a valid linked table name from the nav pane.)
Output: (on one line, broken up for ease of reading here).
ODBC;DRIVER=SQL Server;SERVER=ALBERTKALLAL-PC\SQLEXPRESS;
Trusted_Connection=Yes;APP=Microsoft Office 2010;
DATABASE=Test4;Network=DBMSLPCN
However, if this is an ADP application, then linked tables don't show nor exist in the traditional sense, and you have to go File->Info, where you see a "server" setting, like this:
So, the first issue here is what kind of Access application you have. It is either an ADP or an .accdb (or maybe really old, an .mdb). If you are using an ADP, then linked tables work VERY differently, and you will not see a linked table manager; you have to use the "server" connection. In an ADP, your tables will NOT be linked, but they still reside on SQL Server.
But ADPs can't be used after Access 2010, so in most cases, what version of Access you are using doesn't matter; in the case of an ADP, though, it is a big deal.
And of course, you want to know if you have an .accde (or .mde), as that is a compiled application: design changes are not possible, and all source (Visual Basic) code will have been stripped out during the compile process. So, as noted, what matters is the version of Access and, even more important, the file extension type you have.

Why does my SQL Server column appear to have no default value even though it acts like it does?

I have a SQL Server table that has four columns in it, one of which is a datetime column with a default value of getdate(). I have two copies of this table, one in a development database server over which I have full control, and another in a production database server in which I have few permissions.
Here is how the development table looks:
I've selected the dtInsert column. Notice that this column has a default value of getdate(). The production version I have of this table is exactly the same. When I add a row to this table, the dtInsert cell defaults to getdate() like I'd expect. When a database administrator generates a script of the production table, it includes the default value constraint. However, when I view the table design in SQL Server Management Studio 2012, it shows the column as not having a default value. See here:
When I generate a database diagram, it also shows the dtInsert column as having no default value. Again, I know from testing that the dtInsert column in my production database server indeed defaults to getdate().
Is this a bug in SQL Server Management Studio version 2012? Is there some permission I don't have which brings about this behavior? Is it something else? Why does the column appear to have no default value even though it does?
Is this a bug in SQL Server Management Studio version 2012?
No.
Is there some permission I don't have which brings about this behavior?
Yes.
In a comment on the question I suggested running the following:
SELECT *
FROM sys.default_constraints
WHERE [parent_object_id] = OBJECT_ID(N'_table_name_');
The result was a row returned in Production in which the [definition] column was NULL. This means that the DEFAULT CONSTRAINT is there, but you either:
lack explicit and implied permissions to see the definition, or
have been explicitly denied the permission to see it.
You can read up on this on the MSDN page for Metadata Visibility Configuration.
Now, there are various permissions (VIEW DEFINITION, VIEW ANY DEFINITION, etc.) that affect this setting. These can be applied at various levels:
the object itself
the schema
the database
etc.
Permissions get further complicated when taking into account membership in multiple Windows Groups (if those are being used).
The permission can even be granted at multiple levels. Permissions are also additive: a GRANT in 1 out of 3 Windows Groups that your Login is a member of is enough to work. However, a DENY in any of those levels takes precedence and in that case, no definition for you.
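For example, if the production DBA(s) decide you should be able to see definitions, the grant can be issued at any of those levels; a minimal sketch (the object, schema and user names are hypothetical):
GRANT VIEW DEFINITION ON OBJECT::dbo.SomeTable TO SomeUser; -- object level
GRANT VIEW DEFINITION ON SCHEMA::dbo TO SomeUser;           -- schema level
GRANT VIEW DEFINITION TO SomeUser;                          -- database level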
As I mentioned in a comment on the question, this is really a matter for the Production DBA(s) who configured permissions such that you can't see the definition. Without knowing exactly why you can't see it (lack of a GRANT, or presence of a DENY?), it is useless to issue GRANT statements trying to get this permission (especially since not being able to see the definition implies that you likewise would not be able to GRANT such permission to anyone). Please go talk to whoever is in charge of Production and tell them that you can't see the definition of a default constraint, but you would like to be able to. If there is a specific reason why you currently cannot, you will be told. If it is an oversight, they should correct it in a controlled fashion that might need to be replicated to other environments, etc.
It appears the difference in viewing the object in SSMS between dev and prod is due to permission differences on your user account between dev and prod.
In order to view default values on a table object, you need at least one of the following permissions on the object:
ALTER on OBJECT, or
CONTROL on OBJECT, or
TAKE OWNERSHIP on OBJECT, or
VIEW DEFINITION on OBJECT
I found this: https://dba.stackexchange.com/questions/78769/minimum-sql-server-rights-that-allow-viewing-column-default-values
which seems to pretty much answer your question :)

SQL Server 2005 replication article conflict

I have a sql server 2005 database that I want to setup replication for. The problem is that the database has two schemas both of which have a table with the same name in it.
For some reason, even though the tables are in different schemas, the replication creation fails when done through Management Studio due to conflicting article names (I assume it's trying to create the same article name for both tables in the different schemas).
Is there any workaround for doing this in the studio? I can probably write a script or program to do this, but just for this one thing it is a bit annoying, and it probably won't be allowed to run in production.
Perhaps there is a hotfix or something I'm not aware of?
Cheers,
There doesn't appear to be a way around this purely using the new publication wizard in SSMS - the article name is always the table name without a schema-qualifier, and can't be customised from the wizard - although there is a work-around if you use the scripting options.
Go through the wizard as normal, but at the end of the process, untick the "create publication" option and select the "Generate script file..." option.
Once the file is created, open it and edit the article names so that they no longer conflict, then execute the script in the publication database.
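For a hypothetical pair of tables named Customers in schemas SchemaA and SchemaB (all names here are made up), the edited sp_addarticle calls might look like this:
EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'SchemaA_Customers', -- edited: was N'Customers'
    @source_owner  = N'SchemaA',
    @source_object = N'Customers';
EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'SchemaB_Customers', -- edited: was N'Customers'
    @source_owner  = N'SchemaB',
    @source_object = N'Customers';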
Could you think of having two publications for your database, each publication being linked to one of the schemas? Of course, this means that you'll have to define two different subscribers, one for each publication. The feasibility of this proposal will of course highly depend on how you need to distribute your data among the subscribers, and on the way your users access the data.

how to handle db schema updates when using schemabinding and updating often

I'm using an MS SQL Server db and use plenty of views (for use with an O/R mapper). A little annoyance is that I'd like to
use schema binding
update with scripts (to deploy on servers and put in a source control system)
but run into the issue that whenever I want to, e.g., add a column to a table, I have to first drop all views that reference that table, update the table, and then recreate the views, even if the views wouldn't otherwise need to be updated. This makes my update scripts a lot longer, and also, looking at the diffs in the source control system, it is harder to see what the actual relevant change was.
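A minimal sketch of the round-trip this forces (table, view and column names are hypothetical):
-- The schema-bound view blocks the ALTER, so it must be dropped first.
DROP VIEW dbo.vwCustomerNames;
GO
ALTER TABLE dbo.Customers ADD Nickname nvarchar(50) NULL;
GO
-- Recreate the view unchanged, purely to satisfy schemabinding.
CREATE VIEW dbo.vwCustomerNames WITH SCHEMABINDING AS
    SELECT CustomerId, FirstName, LastName
    FROM dbo.Customers;
GO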
Is there a better way to handle this?
I need to still be able to use simple and source-controllable SQL updates. A code generator like the one included in SQL Server Management Studio would be helpful, but I had issues with SQL Server Management Studio in that it tends to create code that does not specify the names for some indices or (default) constraints. But I want to have identical dbs when I run my scripts on different systems, including the names of all constraints etc., so that I don't have to jump through hoops when updating those constraints later.
So perhaps a smarter SQL code generator would a solution?
My workflow now is:
type the alter table statement in query editor
check if I get an error like "cannot ALTER 'XXX' because it is being referenced by object 'YYY'."
use SQL Server Management Studio to script the CREATE code for the referenced object
insert a drop statement before the alter statement and a create statement after
check if the drop statement produces an error, and repeat
This annoys me, but perhaps I simply have to live with it if I want to continue using schemabinding and script updates...
You can at least eliminate the "check if I get an error" step by querying a few dynamic management functions and system views to find your dependencies. This article gives a decent explanation of how to do that. Beyond that, I think you're right: you can't have your cake and eat it too with schema-binding.
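A minimal sketch of such a dependency query (assuming SQL Server 2008 or later; the table name is hypothetical):
-- Lists every object that references dbo.Customers by name, including
-- the schema-bound views that would block an ALTER TABLE.
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.Customers', N'OBJECT');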
Also keep in mind that dropping/creating views will cause you to lose any permissions that were granted on those objects, so those permissions should be included in your scripts as well.

Is there a good way to verify if a database schema is correct after an upgrade or migration?

We have customers who are upgrading from one database version to another (Oracle 9i to Oracle 10g or 11g to be specific). In one case, a customer exported the old database and imported it into the new one, but for some reason the indexes and constraints didn't get created. They may have done this on purpose to speed up the import process, but we're still looking into the reason why.
The real question is, is there a simple way that we can verify that the structure of the database is complete after the import? Is there some sort of checksum that we can do on the structure? We realize that we could do a bunch of queries to see if all the tables, indexes, aliases, views, sequences, etc. exist, but this would probably be difficult to write and maintain.
Update
Thanks for the answers suggesting commercial and/or GUI tools to use, but we really need something free that we could package with our product. It also has to be command line or script driven so our customers can run it in any environment (unix, linux, windows).
Presuming a single schema, something like this - dump USER_OBJECTS into a table before migration.
CREATE TABLE SAVED_USER_OBJECTS AS SELECT * FROM USER_OBJECTS
Then to validate after your migration
SELECT object_type, object_name FROM SAVED_USER_OBJECTS
MINUS
SELECT object_type, object_name FROM USER_OBJECTS
One issue: if you have intentionally dropped objects between versions, you will also need to delete them from SAVED_USER_OBJECTS. Also, this will not pick up cases where the wrong version of an object exists.
If you have multiple schemas, then the same thing is required for each schema OR use ALL_OBJECTS and extract/compare for the relevant user schemas.
You could also do a hash/checksum on object_type || object_name for the whole schema (save before, compare after), but the cost of calculation wouldn't be that different from comparing the two tables on their indexes.
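A minimal sketch of that checksum idea, assuming Oracle 10g or later for ORA_HASH:
-- One number summarizing the schema's object names; save it before the
-- migration and compare it afterwards.
SELECT SUM(ORA_HASH(object_type || '|' || object_name)) AS schema_checksum
FROM user_objects;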
If you are willing to spend some money, DBDiff is an efficient utility that does exactly what you need.
http://www.dkgas.com/oradbdiff.htm
In SQL Developer (the free Oracle utility) there is a Database Schema Differences feature.
It's worth trying.
Hope it helps.
SQL Developer - download
Roni.
I wouldn't write the check script; I'd write a program to generate the check script from a particular version of the database. Just go through the metadata, record what's there, and write it to a file, then compare the values in that file against the values in the customer's database. This won't work so well if you use system-generated names for your constraints, but it is probably enough to just verify that things are there. Dropping indexes and constraints is pretty common when migrating a database, so you might not even need to check too much; if two or three things are missing, then it's not unreasonable to assume they all are. You might also want to write a script that drops all the constraints and indexes and re-creates them, and just have your customers run that as a post-migration step. Just be sure you drop everything by name, so you don't delete any custom indexes your customer might have created.
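As a sketch of that generator idea, Oracle's DBMS_METADATA package can produce re-runnable DDL from the data dictionary (querying USER_INDEXES here is an assumption about scope):
-- Generate DDL for every index in the current schema; save the output to a
-- file before migration and replay it afterwards if anything is missing.
-- Constraints can be extracted the same way using the 'CONSTRAINT' and
-- 'REF_CONSTRAINT' object types.
SELECT DBMS_METADATA.GET_DDL('INDEX', index_name) AS index_ddl
FROM user_indexes;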
