I have 2 databases, each with its own user-defined messages. The problem is that their message IDs stored in sys.messages overlap, so these DBs cannot be deployed on the same SQL Server instance without renumbering all the messages in one database (but that is too expensive: I would have to change ALL the stored procedures).
Is there any way to make error messages specific to a database?
The sys.messages catalog is server-wide, not database-wide, so you simply cannot do that easily. It really should only be used for things that are instance-wide (like extended stored procedures and other server extensions), not for ordinary stored procedures. But I guess it's too late to change that now.
I see only two ways around it:
Use the "language" field when adding the message to identify not the message but the application. Of course, that will cause additional problems as you'll need to have each application use it's own "language".
Use two different instances of SQL Server, one for each app.
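Roughly what the language hack could look like; the message number and texts here are made up, and note that the us_english version of a message must exist before a version in any other language can be added:

    -- App A registers its text under the default language:
    EXEC sp_addmessage @msgnum = 50001, @severity = 16,
         @msgtext = N'App A: invalid order state.',
         @lang = 'us_english';

    -- App B registers a different text under the same message number,
    -- filed under another language from sys.syslanguages:
    EXEC sp_addmessage @msgnum = 50001, @severity = 16,
         @msgtext = N'App B: customer limit exceeded.',
         @lang = 'British';

    -- App B's sessions then set their language so RAISERROR picks B's text:
    SET LANGUAGE British;
    RAISERROR(50001, 16, 1);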
We have the same app source (with some custom "if x client then" logic) and basically the same SQL Server database structure. But some clients need slightly different stored procedures.
What would be the best practice in this scenario for maintaining the databases long term and keeping the structure correct? Right now, when I change a procedure in one database and need to do the same in 9 of the 10 others, I just ALTER the procedure and USE the next database. But I can't keep track of which procedures are different in that special snowflake client.
Any ideas? The plan, of course, is to get more clients, so this is looking for trouble.
I try to push the "one fits all" concept but hey, what can you do...
Maybe have that "if x client then" as a CASE (or IF) statement inside the SQL Server stored procedure, and then you can just ALTER mindlessly? Something like the sketch below.
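A minimal sketch of that idea; the dbo.Settings and dbo.Orders tables and the client id are made-up placeholders. The point is that the proc body is identical everywhere, so the same ALTER can be run against all 10 databases:

    ALTER PROCEDURE dbo.GetOrderReport
        @OrderId int
    AS
    BEGIN
        -- Each database carries a one-row Settings table naming its client.
        DECLARE @ClientId int;
        SELECT TOP (1) @ClientId = ClientId FROM dbo.Settings;

        IF @ClientId = 42  -- the special snowflake client
            SELECT o.OrderId, o.Total, o.SpecialDiscount
            FROM dbo.Orders AS o
            WHERE o.OrderId = @OrderId;
        ELSE
            SELECT o.OrderId, o.Total
            FROM dbo.Orders AS o
            WHERE o.OrderId = @OrderId;
    END;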
I am charged with creating an export from several tables in one database to identically configured tables in another database, so first I have to create the identical configuration in the second database.
The problem is, when the first database was set up, they used what seems like hundreds of user-defined types. I am not the greatest typist in the world, and recreating these on the second database is going to be a nightmare.
These user-defined types must be stored someplace, but I can't find them. I am thinking that if I found the location where these types are stored, I could copy them to the second database and avoid trying to fat-finger through a couple hundred user-defined types.
Can anyone tell me where these types are stored, or if there is another way to solve my problem?
Thank you.
Right-click the database in SQL Server Management Studio and choose Tasks > Generate Scripts.... Then check user-defined data types and user-defined table types. Click Next and so on up until Finish.
Once you have the script generated, execute it on the database you want to create the types in.
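As for where the types are stored: they live in each database's system catalog and are exposed through the sys.types catalog view, so you can list them with a query like this (is_user_defined = 1 filters out the built-in system types):

    SELECT name, max_length, precision, scale, is_nullable
    FROM sys.types
    WHERE is_user_defined = 1
    ORDER BY name;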
I inherited a SQL Server database that is not well designed (some consulting company came in to do the project and left without completing it).
The main issues I have with this database are:
Data types: a lot of tinyint and text columns (text has been deprecated since SQL Server 2005).
Tables are not normalized: some of the keys are names instead of sequential IDs.
A lot of tables that I am not sure are being used
A lot of stored procedures that I am not sure are being used
Badly named tables and stored procs
I also inherited the asp.net application that runs against this database.
I would like to clean this database up. I understand that changing the data types will have to happen table by table. But what is the easiest way to get rid of all the extra tables and stored procs?
Any other tips to make it cleaner and smaller are appreciated.
I also want to mention that I have the RedGate tools installed (if that helps).
Thank you
Check out SQL Server Data Tools; it lets you create a project from a live database. One of the things you can do in there is right-click 'Find Usages' on tables, views, and functions.
So long as the previous developer used stored procedures and views rather than querying the tables directly from the application, it should find the references for you that way.
Also, to find the stored procedures that are not used, put some basic logging at the top of each stored procedure. After X days, any that haven't shown up in your log table are likely safe to remove; otherwise, a tedious search through your .NET code will find them. A sketch of the logging is below.
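A minimal sketch of that logging; dbo.ProcUsageLog is a made-up table name:

    -- One-time setup: a table to record proc executions.
    CREATE TABLE dbo.ProcUsageLog
    (
        ProcName   sysname  NOT NULL,
        ExecutedAt datetime NOT NULL DEFAULT GETDATE()
    );
    GO

    -- Paste at the top of each stored procedure:
    -- OBJECT_NAME(@@PROCID) resolves to the name of the proc being executed.
    INSERT INTO dbo.ProcUsageLog (ProcName)
    VALUES (OBJECT_NAME(@@PROCID));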
We have a database server and it has about 10 databases.
I would like to create some functions / stored procedures which can be used in all databases.
For example, we can use sp_executesql in any database.
We have some requirements like that (getting current academic year, financial year, etc...)
Is it doable?
As others have suggested, you could put objects into the master database, but Microsoft explicitly recommends that you should not do that. I find that solution to be rather risky anyway, because the master database is 'owned' by the system, not by you, so there are no guarantees that it will continue to behave in the same way in the future.
Instead, I would consider this to be primarily a deployment issue. There are (at least) two strategies you could use:
Deploy the objects to every database
Deploy them to one 'reference' database that is only used for shared objects and create synonyms in the other databases
The second option is perhaps the better one, because if your functions use tables (e.g. a calendar table to get the academic year, which is much easier than calculating it), you would otherwise have to create the same tables in every database too. By using synonyms, as sketched below, you only have to maintain one set of tables.
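A minimal sketch of the synonym approach; the database and object names (SharedRef, dbo.Calendar, dbo.GetAcademicYear) are made-up placeholders:

    -- The table and function live only in the shared reference database.
    -- In each of the other databases, create synonyms pointing at them:
    CREATE SYNONYM dbo.Calendar        FOR SharedRef.dbo.Calendar;
    CREATE SYNONYM dbo.GetAcademicYear FOR SharedRef.dbo.GetAcademicYear;

    -- Code in any database can now call the shared function as if it were local:
    SELECT dbo.GetAcademicYear(GETDATE()) AS AcademicYear;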
For the actual deployment, it's straightforward to use scripting to manage the objects, because you just need a list of databases to connect to and run each DDL script against. You can do that using batch files and SQLCMD (perhaps with SQLCMD variables in your .sql scripts), or drive it from PowerShell or any other language that you prefer.
Depending upon what the SP actually does, you may want to create the procedure in master, name it with the sp_ prefix, and mark it as a system procedure:
http://weblogs.sqlteam.com/mladenp/archive/2007/01/18/58287.aspx
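Roughly what that looks like, as a sketch: sp_MS_marksystemobject is undocumented, the proc name and body here are made up, and the academic year is assumed to start in September.

    USE master;
    GO

    -- The sp_ prefix plus the system-object mark lets the proc execute
    -- in the context of whatever database it is called from.
    CREATE PROCEDURE dbo.sp_GetAcademicYear
    AS
    BEGIN
        SELECT CASE WHEN MONTH(GETDATE()) >= 9
                    THEN YEAR(GETDATE())
                    ELSE YEAR(GETDATE()) - 1
               END AS AcademicYear;
    END;
    GO

    EXEC sp_MS_marksystemobject 'sp_GetAcademicYear';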
A couple of options:
You can use a system stored procedure as Cade says. I've done this in the past and it works OK. One warning: the sp_MS_marksystemobject procedure is undocumented, which means it could vanish or change without warning in future SQL Server versions. Thinking back, I believe there were other problems using this approach with functions, though.
Another approach is to use standardized procedures and functions and roll them out across your databases using sp_MSforeachdb to run code against every database. If you need to run against only your 10 databases, you can copy the code from that procedure and modify it to check that a database matches your schema before running the code (or write your own version that does something similar). For example:
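A sketch of that route (sp_MSforeachdb is also undocumented); the database names and the view body are made up, and the financial year is assumed to start in April:

    -- ? is replaced with each database name in turn;
    -- EXEC [?].sys.sp_executesql runs the inner batch in that database's context.
    EXEC sp_MSforeachdb N'
    IF ''?'' IN (''AppDb1'', ''AppDb2'', ''AppDb3'')  -- your own database list
    BEGIN
        EXEC [?].sys.sp_executesql
            N''IF OBJECT_ID(''''dbo.FinancialYear'''', ''''V'''') IS NOT NULL
                   DROP VIEW dbo.FinancialYear;'';
        EXEC [?].sys.sp_executesql
            N''CREATE VIEW dbo.FinancialYear AS
               SELECT CASE WHEN MONTH(GETDATE()) >= 4
                           THEN YEAR(GETDATE())
                           ELSE YEAR(GETDATE()) - 1
                      END AS FinancialYear;'';
    END';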
We have an application that has 1000+ databases and 600+ sprocs. Each database represents a different client.
Problem: we need to move this to a single database while disturbing the UI as little as possible, meaning we don't want to change all the sproc signatures at one time.
The connection string currently sets the database attribute; a proposal is to move the client identity to the user attribute instead. That attribute (read via SYSTEM_USER) could be used to determine the site identifier, which would then be used in the WHERE clause.
The above would not be the final solution, but it allows us to change the sproc signatures at a slow, controlled pace. Once all are done, we can correct the connection string and get effective connection pooling back (one pool instead of one per login). A sketch of the idea is below.
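A sketch of what the inside of a sproc could look like during the transition; dbo.SiteLogins and dbo.Orders are made-up names:

    -- One mapping row per client login.
    CREATE TABLE dbo.SiteLogins
    (
        LoginName sysname NOT NULL PRIMARY KEY,
        SiteId    int     NOT NULL
    );
    GO

    ALTER PROCEDURE dbo.GetOrders   -- signature unchanged, so the UI is untouched
    AS
    BEGIN
        DECLARE @SiteId int;

        -- SYSTEM_USER returns the login of the current connection.
        SELECT @SiteId = SiteId
        FROM dbo.SiteLogins
        WHERE LoginName = SYSTEM_USER;

        SELECT OrderId, Total
        FROM dbo.Orders
        WHERE SiteId = @SiteId;   -- the new per-site filter
    END;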
Are there any limitations on the number of logins/users that we can have on SQL Server 2005/8? Or has anyone been down this path who could shed some light on a better option?
See my answer here
Ideas for Combining Thousand Databases into One Database
Sounds like you two are working on the same project. You will need to change every proc before you can move to one database, or each client will see the others' data.
As for the number of logins on SQL Server 2005 / 08 - I don't think anyone has ever run into a hard limit here. A few thousand will NOT be any problem at all.
What you could consider for this scenario is one schema per customer inside your single DB, e.g. customer "Miller" gets a "miller" schema with its objects inside, and customer "Brown" gets a "brown" schema.
And contrary to what HLGEM just responded: no, customers won't see each other's data if you specify proper permissions, confining each customer (and its users) to its own schema only. That should work just fine; see the sketch below.
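A minimal sketch of that setup; the schema, login, and user names are made up:

    -- One schema, login, and user per customer.
    CREATE SCHEMA miller AUTHORIZATION dbo;
    GO
    CREATE USER MillerUser FOR LOGIN MillerLogin WITH DEFAULT_SCHEMA = miller;
    GO

    -- Give the customer's user rights only on its own schema.
    GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON SCHEMA::miller TO MillerUser;

    -- Nothing is granted on other customers' schemas, so MillerUser
    -- cannot touch brown.* objects at all.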
Marc
You might also consider setting a distinctive application name in the connection string rather than using a distinctive user; you can get it into your WHERE clause using APP_NAME(). I'm sure that SQL Server won't have a problem with thousands of logins, but you may prefer not to have to create them. For example:
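A sketch of the APP_NAME() approach; the connection string value, the 'SiteNN' naming convention, and dbo.Orders are made up:

    -- Client side, in the connection string:
    --   Server=myServer;Database=CombinedDb;Application Name=Site42;...

    -- Server side, APP_NAME() returns that value for the current session:
    DECLARE @SiteId int;
    SET @SiteId = CAST(REPLACE(APP_NAME(), 'Site', '') AS int);

    SELECT OrderId, Total
    FROM dbo.Orders
    WHERE SiteId = @SiteId;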