Problem: Junior SQL dev here, working with a SQL Server database where we have many functions that use temp tables to pull data from various tables to populate Crystal reports etc. We had an issue where a user action in our client caused a string to overflow the defined NVARCHAR(100) limit of a column. As a quick fix, one of our seniors decided on a schema change to set the column definition to NVARCHAR(255), instead of fixing the issue of the string getting too long. Now we have lots of these table-based functions using temp tables that reference the column in question, but the temp table variable is still defined as 100 instead of 255.
Question: Is there an easy way to find and update all of these functions? Some functions might not reference the table/column in question at all, but some rely heavily on this data to feed reports etc. I know I can right-click a table and select "View Dependencies" in SQL Server Management Studio, but it seems very tedious to go through all of them and then update our master schema before deploying it to all customers.
I thought about a find-and-replace if there is a way to script or export the functions, but I fear the problem I will run into is that one variable in one function might be declared as TransItemDescription NVARCHAR(100) and another as TransItemDesc NVARCHAR (100). I've heard of people avoiding temp tables, perhaps because of issues like these, so maybe there is just bad database design here?
Thus far I've been going through them one at a time using "View Dependencies" in SSMS.
I think the best solution would be to script out the whole database into a single script from SSMS, then use Notepad++ (or equivalent) to find either:
All occurrences of NVARCHAR(100)
All occurrences of the variable name, e.g. TransItemDescription, TransItemDesc.
Once you have found all occurrences, make a list of all of the functions to be fixed. You would still need to fix each function manually, but once complete the issue should be fully resolved.
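If scripting out the whole database is impractical, the same search can be run against the system catalog instead. A minimal sketch, assuming SQL Server 2005 or later; the LIKE patterns reuse the names from the question and may need widening to catch variants such as NVARCHAR (100) with a space:

-- search every module (function, procedure, view, trigger) whose
-- definition still mentions the old length or the variable name
SELECT o.name, o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%NVARCHAR(100)%'
   OR m.definition LIKE '%TransItemDesc%'
ORDER BY o.name;

This gives you the list of objects to fix in one pass rather than walking "View Dependencies" table by table.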
BACKGROUND
After moving our MSSQL DB to an ELK stack (version 5.2), we realised we needed a relational visualiser. By downgrading to 2.4.1 we were then able to link Kibi.
TRIED
I have created the relations between the tables as shown below:
However, when trying to build a simple line graph to compare the tables tblaccounts and aspnetusers, it simply uses the values in tblaccountusers. None of the Kibi documentation seems to help, from what I have seen.
THE PROBLEM
The problem is that I need the actual values from both tables, pulled through the child table tblaccountusers, so as to display account names rather than ID numbers such as 1, 2, etc.
If anyone has any guidance or links to help with this, please comment below.
I am using Oracle 11. I need to find when a specific column was created. I know we can find the last DDL change date, but I created the column first and only some days later created an index on one of the columns of the same table. So now I need to find when that specific column was created.
Is there a way ?
This depends on your audit settings: if the object was being audited, you may find it in the audit trail. I'd suggest reading
http://docs.oracle.com/cd/B28359_01/server.111/b28337/tdpsg_auditing.htm
Or you can use LogMiner to check the redo logs if your DB was running in ARCHIVELOG mode. But I have never used this, so I'm not sure about all the requirements there.
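For the audit-trail route, something along these lines can surface the ALTER that added the column, assuming standard auditing was already enabled for the table (e.g. via AUDIT ALTER ON the table). A hedged sketch; MY_TABLE is a placeholder name:

-- list audited ALTER statements against the table, oldest first
SELECT username, obj_name, action_name, timestamp
FROM dba_audit_trail
WHERE obj_name = 'MY_TABLE'
  AND action_name LIKE '%ALTER%'
ORDER BY timestamp;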
I've got a table in SQLite, and it already has many rows stored in it. I now realise I need another column in the table. Up to now I've just deleted the database and started again, because the data has just been test data. But now the data in the database can't be deleted.
I know the query to add a column to the table; my question is what is a good way to do this so that it works for both existing users and new users? (I have updated the CREATE query I have for when the table is not found, because it's a new user or an existing user has cleared the database.) It seems wrong to ship software with an ALTER query that runs a check every time. Is there some way of telling SQLite to automatically add the column if it doesn't exist during the UPDATE query I now need?
If I discover I need more columns in the future, is having a bunch of ALTER statements on startup (or somewhere?) really the best way to do it?
(If relevant this is for a node js app)
I'd just add a table somewhere that records what version of your database it is, and check that to determine if an update is needed. Either that, or if you already have a table that always holds exactly one record, add a new field 'DatabaseVersion' to it.
So for example if you check the version number, and find it's a version 1 database when the newest version should be version 3, you know which updates to perform on it.
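A minimal sketch of that idea in SQLite; the table and column names are illustrative:

-- one-row table recording the schema version
CREATE TABLE IF NOT EXISTS SchemaInfo (DatabaseVersion INTEGER NOT NULL);
INSERT INTO SchemaInfo (DatabaseVersion)
SELECT 1 WHERE NOT EXISTS (SELECT 1 FROM SchemaInfo);

-- at startup, read the version and run any migrations newer than it
SELECT DatabaseVersion FROM SchemaInfo;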
You can use PRAGMA user_version to store the version number of the database and check if the database needs to be updated.
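A minimal sketch of the PRAGMA approach; the version number and the ALTER statement are illustrative:

PRAGMA user_version;     -- returns 0 on a freshly created database

-- if the reported version is below 1, apply the first migration:
ALTER TABLE mytable ADD COLUMN newcolumn TEXT;   -- placeholder names
PRAGMA user_version = 1;

Unlike a version table, user_version lives in the database header, so it needs no extra schema of its own.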
I need to put versions onto a SQL Server 2005 database and have these accessible from a .NET application. What I was thinking is using an extended property on the database with a name of 'version', and of course the value would be the version of the database. I can then use SQL to get at this. My question is: does this sound like a good plan, or is there a better way to add versions to a SQL Server database?
Let's assume I am unable to use a table for holding the metadata.
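For reference, this is roughly what the extended-property plan looks like; the property name 'version' is from the question, the value and the rest are a sketch:

-- set the database-level property (sp_updateextendedproperty changes it later)
EXEC sp_addextendedproperty @name = N'version', @value = N'1.0.0';

-- read it back, e.g. from the .NET application
SELECT name, value
FROM sys.fn_listextendedproperty(N'version', NULL, NULL, NULL, NULL, NULL, NULL);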
I do this:
Create a schema table:
CREATE TABLE [dbo].[SchemaVersion](
[Major] [int] NOT NULL,
[Minor] [int] NOT NULL,
[Build] [int] NOT NULL,
[Revision] [int] NOT NULL,
[Applied] [datetime] NOT NULL,
[Comment] [text] NULL)
Update Schema:
INSERT INTO SchemaVersion(Major, Minor, Build, Revision, Applied, Comment)
VALUES (1, 9, 1, 0, getdate(), 'Add Table to track pay status')
Get database Schema Version:
SELECT TOP 1 Major, Minor, Build FROM SchemaVersion
ORDER BY Major DESC, Minor DESC, Build DESC, Revision DESC
Adapted from what I read on Coding Horror
We use extended properties as you described and it works really well.
I think having a table is overkill. If I want to track the differences in my databases, I use source control and keep all the DB generation scripts in it.
I've also used some ER diagram tools to help me keep track of changes in DB versions. This was outside the actual application but it allowed me to quickly see what changed.
I think it was CASEStudio, or something like that.
If I understand your question right (differentiating between internal database versions, like application build numbers), you could have some sort of SYSVERSION table that held a single row of data with this info.
Easier to query.
It could also contain multiple columns of useful info, or multiple rows representing the different times that copy of the database was upgraded.
Update: Well, if you can't use a table to hold the metadata, then either external info of some sort (an INFO file on the hard drive?) or extended properties would be the way to go.
I still like the table idea, though :) You could always use security to make it accessible only through a custom stored proc, get_db_version or something.
The best way to do this is to have two procedures: a header to control what is being inserted and run validations, and a footer to record whether the release succeeded or not. The body will contain your scripts.
You need a wrapper that encapsulates your script and records all the info: the release, the script number being applied, who applied it, the apply date, and the release outcome ("failed" or "succeeded").
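A hedged sketch of such a wrapper, with illustrative names throughout; the header refuses to re-run an already-applied script and the footer records the outcome:

CREATE TABLE ReleaseLog (
    Release      VARCHAR(20) NOT NULL,
    ScriptNumber INT         NOT NULL,
    ApplyBy      SYSNAME     NOT NULL,
    ApplyDate    DATETIME    NOT NULL,
    Outcome      VARCHAR(10) NOT NULL   -- 'succeeded' or 'failed'
)
GO
CREATE PROCEDURE ReleaseHeader @release VARCHAR(20), @script INT
AS
    -- validation: refuse to apply the same script twice
    IF EXISTS (SELECT * FROM ReleaseLog
               WHERE Release = @release AND ScriptNumber = @script
                 AND Outcome = 'succeeded')
        RAISERROR('Script %d already applied.', 16, 1, @script)
GO
CREATE PROCEDURE ReleaseFooter @release VARCHAR(20), @script INT, @outcome VARCHAR(10)
AS
    -- record who applied what, when, and how it went
    INSERT INTO ReleaseLog (Release, ScriptNumber, ApplyBy, ApplyDate, Outcome)
    VALUES (@release, @script, SUSER_SNAME(), GETDATE(), @outcome)
GO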
I am using a dedicated table similar to Matt's solution. In addition to that, database alter scripts must check the current version before applying any changes to the schema. If the current version is smaller than expected, the script terminates with a fatal error. If the current version is larger than expected, the script skips the current step, because that step has already been performed sometime in the past.
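A sketch of such a guard at the top of an alter script, reusing the SchemaVersion table from Matt's answer; the expected version is illustrative, and DECLARE and SET are kept separate for SQL Server 2005 compatibility:

DECLARE @expected INT, @current INT
SET @expected = 5    -- the version this particular script upgrades from
SELECT TOP 1 @current = Build FROM SchemaVersion
ORDER BY Major DESC, Minor DESC, Build DESC, Revision DESC

IF @current < @expected
    -- raise an error (pair with SQLCMD :on error exit to actually stop the run)
    RAISERROR('Database schema is older than this script expects.', 16, 1)
IF @current > @expected
    PRINT 'Step already applied, skipping.'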
Here is the complete solution, with examples and conventions for writing database alter scripts: How to Maintain SQL Server Database Schema Version