How a table was created in SQL Server

What I need to find is how a table was created: which data sources were used, which scripts if any, etc. Is it possible to tell, perhaps from system views or similar metadata, whether the table was created manually or by a query, and whether its data was imported from an external source or from an existing table/view in the database? I already know who created it and when. I've screened pretty much the whole database without results, and now I'm looking for hints in the metadata.

If the table was created recently, you can glean information from the default trace. The query below lists Object:Created and Object:Altered events. Note that the default trace is a rollover trace, so the forensic information available will be limited by server activity.
SELECT
     trace.DatabaseName
    ,trace.ObjectName
    ,te.name AS EventName
    ,tsv.subclass_name
    ,trace.EventClass
    ,trace.EventSubClass
    ,trace.StartTime
    ,trace.EndTime
    ,trace.NTDomainName
    ,trace.NTUserName
    ,trace.HostName
    ,trace.ApplicationName
    ,trace.Spid
FROM (SELECT REVERSE(STUFF(REVERSE(path), 1, CHARINDEX(N'\', REVERSE(path)), '')) + N'\Log.trc' AS path
      FROM sys.traces
      WHERE is_default = 1) AS default_trace_path
CROSS APPLY fn_trace_gettable(default_trace_path.path, DEFAULT) AS trace
JOIN sys.trace_events AS te
    ON trace.EventClass = te.trace_event_id
JOIN sys.trace_subclass_values AS tsv
    ON tsv.trace_event_id = trace.EventClass
    AND tsv.subclass_value = trace.EventSubClass
WHERE te.name IN (N'Object:Altered', N'Object:Created')
    AND tsv.subclass_name = N'Commit'
ORDER BY trace.StartTime;
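If that query comes back empty, first confirm the default trace is actually enabled and see how far back it currently reaches. A minimal sanity check, reusing the same path trick as above:
-- Is the default trace enabled? (value_in_use = 1 means yes)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'default trace enabled';

-- Earliest event still held in the rollover files; anything created
-- before this point has already been aged out of the default trace.
SELECT MIN(trace.StartTime) AS earliest_event
FROM (SELECT REVERSE(STUFF(REVERSE(path), 1, CHARINDEX(N'\', REVERSE(path)), '')) + N'\Log.trc' AS path
      FROM sys.traces
      WHERE is_default = 1) AS default_trace_path
CROSS APPLY fn_trace_gettable(default_trace_path.path, DEFAULT) AS trace;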

Related

How to link DOCUVALUE table to related business metadata

I am trying to pull a report of all the documents referenced in AX, and I'm having a heck of a time figuring out the AX database structure. Ideally I want to pull a list of documents and the Journal / Batch # each is associated with.
In our AX environment, all documents are stored on a share (i.e. they're not actually stored as BLOBs in the AX database).
It looks like the DOCUVALUE table is the principal table that references the documents, having the ORIGINALFILENAME and other columns that seem to "point" to the files on the AX share. But DOCUVALUE doesn't contain any useful business metadata.
After a bit of exploring, it looks like the DOCUREF table relates to DOCUVALUE (DOCUVALUE.RECID = DOCUREF.VALUERECID), which helps a little - it gives you the Company # - but that's about it.
After a bit more exploring, it looked like it would be possible to join across to LEDGERJOURNALTABLE as shown below:
select ljt.journalnum, filename + '.' + filetype filename, ljt.name journal_name,
dr.refcompanyid, convert(varchar(10), ljt.posteddatetime,111) posted_date,
ljt.createdby, convert(numeric, ljt.journaltotalcredit) journalamount
from LEDGERJOURNALTABLE ljt, DOCUREF dr, DOCUVALUE dv
where dv.RECID = dr.VALUERECID and dr.refrecid = ljt.recid
order by 1,2
This looked promising, so I pulled out a data listing and asked one of our key business users to review the results. She indicated that it was accurate to some extent, but there were other areas where the document referenced just didn't have any relation to the JournalNum in the listing.
So - I'm at a bit of a dead end - I've spent further time generating SQL statements to harvest data using specific RECID values, trying other joins, but each time I just disappear down a rabbit hole.
Any ideas? Any help gratefully received!!
The AX document management framework is designed so that a document can be attached to any data row in any table. What you're trying to do is far easier in AX, but we'll stick with SQL for the question.
The problem you're having is you don't know the reference objects because you're ignoring REFTABLEID.
The key fields that connect a denormalized "document" to the associated business data are REFTABLEID, REFCOMPANYID, and REFRECID (you already have the last one).
So start with this query below:
SELECT sd.NAME
,sd.SQLNAME
,dr.*
,dv.*
FROM DOCUREF dr
,DOCUVALUE dv
,SQLDICTIONARY sd
WHERE dv.RECID = dr.VALUERECID
AND sd.TABLEID = dr.REFTABLEID
AND sd.FIELDID = 0 -- Indicates it is a table and not a table field
AND sd.NAME = 'LEDGERJOURNALTABLE' -- Instead of hardcoding, join & query
You'll have to get creative depending on your use case. You'll want to remove the hardcoded 'LEDGERJOURNALTABLE' and instead use sd.SQLNAME to join to the actual SQL table. Then, if that SQL table has a DataAreaId column, you'd likely want to join it to dr.REFCOMPANYID.
Or you can hardcode the tables, or whatever you want to do. You should be aware that you can attach documents to journal headers OR lines... or to many other rows, for that matter.
Just start exploring the data and you should be able to figure out what you want with the query above.
So for your sample query, you can see I added two lines. Your query will only work when joined to LedgerJournalTable; you'll have to do some dynamic SQL or use a cursor if you want to report on every attachment (see the sketch after the query below).
SELECT ljt.journalnum
,filename + '.' + filetype filename
,ljt.name journal_name
,dr.refcompanyid
,convert(VARCHAR(10), ljt.posteddatetime, 111) posted_date
,ljt.createdby
,convert(NUMERIC, ljt.journaltotalcredit) journalamount
FROM LEDGERJOURNALTABLE ljt
,DOCUREF dr
,DOCUVALUE dv
WHERE dv.RECID = dr.VALUERECID
AND dr.REFRECID = ljt.RECID
AND dr.REFCOMPANYID = ljt.DATAAREAID -- ADDED
AND dr.REFTABLEID = 211 -- ADDED TableId for LedgerJournalTable
ORDER BY 1
,2
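As a rough sketch of the dynamic SQL route mentioned above (hedged: the generated SELECT list only uses the generic DOCUREF/DOCUVALUE columns already discussed, so extend it per target table as needed), you can generate one query per referenced table straight from SQLDICTIONARY instead of hardcoding table IDs:
-- Build one SELECT per distinct REFTABLEID, resolving names from SQLDICTIONARY.
-- Uses the common (if informal) T-SQL string-concatenation aggregation idiom.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + N'
SELECT ''' + sd.SQLNAME + N''' AS ref_table, dv.ORIGINALFILENAME, dr.REFCOMPANYID, dr.REFRECID
FROM DOCUVALUE dv
JOIN DOCUREF dr ON dr.VALUERECID = dv.RECID
WHERE dr.REFTABLEID = ' + CAST(x.REFTABLEID AS nvarchar(10)) + N';'
FROM (SELECT DISTINCT REFTABLEID FROM DOCUREF) AS x
JOIN SQLDICTIONARY sd ON sd.TABLEID = x.REFTABLEID AND sd.FIELDID = 0;

EXEC sys.sp_executesql @sql;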

How to trace down more information about SQL Server session ID in the past?

I ran into a problem where one of my databases was stuck in the "restoring" state.
After checking the error logs, I found out that someone had done something:
- Starting up database "mydb"
- The database "mydb" is marked RESTORING and is in a state that does not allow recovery to be run
- Starting up database "mydb"
- RESTORE DATABASE successfully processed 192392 pages in 178 seconds
All of these messages belong to source spid128.
But I couldn't trace down who did this.
I can check all of the current session IDs, but that's not what I want.
I'm looking for a way to check information about that spid from, say, yesterday.
Is that possible?
The default trace captures backup and restore events, so it will have details of the restore. However, since it's a rollover trace with a maximum of 5 files of 20 MB each, older history may no longer be available, depending on server activity.
Below is an example query to get backup/restore events from the default trace files for the problem database:
SELECT
     te.name
    ,tt.TextData
    ,tt.StartTime
    ,tt.HostName
    ,tt.LoginName
    ,tt.ApplicationName
FROM sys.traces AS t
CROSS APPLY fn_trace_gettable(
    -- rewrite the current trace file path to point at the base Log.trc file
    REVERSE(N'crt.gol' + SUBSTRING(REVERSE(t.path), CHARINDEX(N'\', REVERSE(t.path)), 128)), DEFAULT) AS tt
JOIN sys.trace_events AS te
    ON te.trace_event_id = tt.EventClass
JOIN sys.trace_subclass_values AS tesv
    ON tesv.trace_event_id = tt.EventClass
    AND tesv.subclass_value = tt.EventSubClass
WHERE t.is_default = 1 -- default trace
    AND te.name = N'Audit Backup/Restore Event'
    AND tt.DatabaseName = N'mydb';
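If the default trace has already rolled over, msdb keeps its own restore history independently (unless that history has been purged), including the login that performed each restore:
SELECT rh.restore_date,
       rh.destination_database_name,
       rh.user_name,     -- login that ran the restore
       rh.restore_type   -- D = database, F = file, L = log, ...
FROM msdb.dbo.restorehistory AS rh
WHERE rh.destination_database_name = N'mydb'
ORDER BY rh.restore_date DESC;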

Recently created index in SQL Server

How to find recently created index details in my SQL Server database? Any query to find this?
In my database there are a lot of indexes. I want to know which of those indexes were recently created, with all their details.
You can use the Schema Changes History report to see index creation events, along with many other changes.
Here's how you get to it:
1. Right-click the server in Object Explorer.
2. Go to Reports --> Standard Reports --> Schema Changes History.
The report is driven by the default trace, which is enabled by default unless you have turned it off. The query below tells you whether the default trace is on:
select * from sys.configurations where name like '%trace%'
The query below pulls object-creation events from the default trace:
SELECT OBJECT_NAME(T.ObjectID) AS object_name
      ,T.ObjectName
      ,T.IndexID
      ,T.StartTime
FROM sys.fn_trace_gettable(CONVERT(NVARCHAR(256),
         (SELECT TOP 1 f.[value]
          FROM sys.fn_trace_getinfo(NULL) AS f
          WHERE f.property = 2)), DEFAULT) AS T
JOIN sys.trace_events AS TE
    ON T.EventClass = TE.trace_event_id
WHERE TE.name = 'Object:Created' -- restrict to creation events
  AND T.DatabaseName = DB_NAME()
ORDER BY T.StartTime;
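If the default trace has already rolled over, a rougher fallback (a heuristic, not an exact answer) is that creating or altering an index bumps the parent table's modify_date in sys.objects, so recently modified tables are the place to start looking:
-- Tables whose schema (including indexes) changed most recently
SELECT o.name AS table_name, o.create_date, o.modify_date
FROM sys.objects AS o
WHERE o.type = 'U' -- user tables
ORDER BY o.modify_date DESC;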

SQL CLR Trigger - get source table

I am creating a DB synchronization engine using SQL CLR triggers in Microsoft SQL Server 2012. These triggers do not call a stored procedure or function (and thereby have access to the INSERTED and DELETED pseudo-tables, but not to @@PROCID).
Differences here, for reference.
This "sync engine" uses mapping tables to determine what the table and field maps are for this sync job. In order to determine the target table and fields (from my mapping table) I need to get the source table name from the trigger itself. I have come across many answers on Stack Overflow and other sites that say that this isn't possible. But, I've found one website that provides a clue:
Potential Solution:
using (SqlConnection lConnection = new SqlConnection(@"context connection=true")) {
    lConnection.Open(); // the connection must be opened before executing
    SqlCommand cmd = new SqlCommand(
        "SELECT object_name(resource_associated_entity_id) FROM sys.dm_tran_locks " +
        "WHERE request_session_id = @@SPID AND resource_type = 'OBJECT'", lConnection);
    cmd.CommandType = CommandType.Text;
    var obj = cmd.ExecuteScalar();
}
This does in fact return the correct table name.
Question:
My question is: how reliable is this potential solution? Is the @@SPID actually limited to this single trigger execution? Or is it possible that other simultaneous triggers will overlap within this process ID? Will it stand up to multiple executions of the same and/or different triggers within the database?
From these sites, it seems the process Id is in fact limited to the open connection, which doesn't overlap: here, here, and here.
Will this be a safe method to get my source table?
Why?
As I've noticed similar questions, but all without a valid answer for my specific situation (except that one). Most of the comments on those sites ask "Why?", and in order to preempt that, here is why:
This synchronization engine operates on a single DB and can push changes to target tables, transforming the data with user-defined transformations, automatic source-to-target type casting and parsing and can even use the CSharpCodeProvider to execute methods also stored in those mapping tables for transforming data. It is already built, quite robust and has good performance metrics for what we are doing. I'm now trying to build it out to allow for 1:n table changes (including extension tables requiring the same Id as the 'master' table) and am trying to "genericise" the code. Previously each trigger had a "target table" definition hard coded in it and I was using my mapping tables to determine the source. Now I'd like to get the source table and use my mapping tables to determine all the target tables. This is used in a medium-load environment and pushes changes to a "Change Order Book" which a separate server process picks up to finish the CRUD operation.
Edit
As mentioned in the comments, the query listed above is quite "iffy". It will often (after a SQL Server restart, for example) return system objects like syscolpars or sysidxstats. But it seems that in dm_tran_locks there is always an associated resource_type of 'RID' (Row ID) with the same object_name. My current query, which has worked reliably so far, is the following (I will update if this changes or fails under high-load testing):
SELECT t1.ObjectName
FROM (
    SELECT object_name(resource_associated_entity_id) AS ObjectName
    FROM sys.dm_tran_locks
    WHERE resource_type = 'OBJECT' AND request_session_id = @@SPID
) t1
INNER JOIN (
    SELECT OBJECT_NAME(partitions.OBJECT_ID) AS ObjectName
    FROM sys.dm_tran_locks
    INNER JOIN sys.partitions ON partitions.hobt_id = dm_tran_locks.resource_associated_entity_id
    WHERE resource_type = 'RID'
) t2 ON t1.ObjectName = t2.ObjectName
If this is always the case, I'll have to find that out during testing.
How reliable is this potential solution?
While I do not have time to set up a test case to show it not working, I find this approach (even taking into account the query in the Edit section) "iffy" (i.e. not guaranteed to always be reliable).
The main concerns are:
cascading (whether recursive or not) Trigger executions
User (i.e. Explicit / Implicit) transactions
Sub-processes (i.e. EXEC and sp_executesql)
These scenarios allow for multiple objects to be locked, all at the same time.
Is the @@SPID actually limited to this single trigger execution? Or is it possible that other simultaneous triggers will overlap within this process ID?
and (from a comment on the question):
I think I can join my query up with sys.partitions and get a dm_tran_locks row that has a type of 'RID' with an object name that will match up to the one in my original query.
And here is why it shouldn't be considered entirely reliable: the session ID (i.e. @@SPID) is constant for all of the requests on that connection. So all sub-processes (i.e. EXEC calls, sp_executesql, triggers, etc.) will be on the same @@SPID / session_id. Between sub-processes and user transactions, you can very easily get locks on multiple resources, all on the same session ID.
The reason I say "resources" instead of "OBJECT" or even "RID" is that locks can occur on: rows, pages, keys, tables, schemas, stored procedures, the database itself, etc. More than one thing can be considered an "OBJECT", and it is possible that you will have page locks instead of row locks.
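To see this for yourself, here is a quick diagnostic you can run inside a trigger (or any session) that lists everything the current session holds; under an explicit transaction touching several tables you will typically see several OBJECT entries at once:
SELECT l.resource_type,
       l.request_mode,
       l.request_status,
       -- OBJECT_NAME() is only meaningful for OBJECT locks in the current database
       OBJECT_NAME(l.resource_associated_entity_id) AS object_name
FROM sys.dm_tran_locks AS l
WHERE l.request_session_id = @@SPID
  AND l.resource_database_id = DB_ID();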
Will it stand up to multiple executions of the same and/or different triggers within the database?
As long as these executions occur in different Sessions, then they are a non-issue.
ALL THAT BEING SAID, I can see where simple testing would show that your current method is reliable. However, it should also be easy enough to add more detailed tests that include an explicit transaction that first does some DML on another table, or have a trigger on one table do some DML on one of these tables, etc.
Unfortunately, there is no built-in mechanism that provides the same functionality that @@PROCID does for T-SQL triggers. I have come up with a scheme that should allow for getting the parent table of a SQLCLR trigger (one that takes these various issues into account), but I haven't had a chance to test it out. It requires using a T-SQL trigger, set as the "first" trigger, to record info that the SQLCLR trigger can then discover.
A simpler form can be constructed using CONTEXT_INFO, if you are not already using it for something else (and if you don't already have a "first" trigger set). In this approach you still create a T-SQL trigger, and set it as the "first" trigger using sp_settriggerorder. In that trigger, you SET CONTEXT_INFO to the name of the table that is the parent of @@PROCID. You can then read CONTEXT_INFO() on a context connection in the SQLCLR trigger. If there are multiple levels of triggers, the value of CONTEXT_INFO will get overwritten, so reading that value must be the first thing you do in each SQLCLR trigger.
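A minimal sketch of that CONTEXT_INFO approach (the table and trigger names here are hypothetical placeholders; CONTEXT_INFO holds at most 128 bytes, which comfortably fits a table name):
-- T-SQL trigger on the hypothetical source table; it must fire first.
CREATE TRIGGER dbo.trg_MyTable_SetContext
ON dbo.MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- @@PROCID is this trigger's object id; its parent is the source table.
    DECLARE @tbl sysname = (SELECT OBJECT_NAME(parent_object_id)
                            FROM sys.objects
                            WHERE object_id = @@PROCID);
    DECLARE @ctx varbinary(128) = CONVERT(varbinary(128), @tbl);
    SET CONTEXT_INFO @ctx;
END;
GO
-- Make it the "first" trigger (repeat for UPDATE and DELETE):
EXEC sp_settriggerorder @triggername = N'dbo.trg_MyTable_SetContext',
    @order = N'First', @stmttype = N'INSERT';
-- The SQLCLR trigger then reads the value on its context connection:
-- SELECT CONVERT(sysname, CONTEXT_INFO());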
This is an old thread, but it's a FAQ and I think I have a better solution. Essentially, it uses the schema of the inserted or deleted table to find the base table, by hashing the column names and comparing that hash against the hashes of tables that have a CLR trigger on them.
Code snippet below - at some point I will probably put the whole solution on Git (it sends a message to Azure Service Bus when the trigger fires).
private const string colqry = "select top 1 * from inserted union all select top 1 * from deleted";
private const string hashqry = "WITH cols as ( "+
"select top 100000 c.object_id, column_id, c.[name] "+
"from sys.columns c "+
"JOIN sys.objects ot on (c.object_id= ot.parent_object_id and ot.type= 'TA') " +
"order by c.object_id, column_id ) "+
"SELECT s.[name] + '.' + o.[name] as 'TableName', CONVERT(NCHAR(32), HASHBYTES('MD5',STRING_AGG(CONVERT(NCHAR(32), HASHBYTES('MD5', cols.[name]), 2), '|')),2) as 'MD5Hash' " +
"FROM cols "+
"JOIN sys.objects o on (cols.object_id= o.object_id) "+
"JOIN sys.schemas s on (o.schema_id= s.schema_id) "+
"WHERE o.is_ms_shipped = 0 "+
"GROUP BY s.[name], o.[name]";
public static void trgSendSBMsg()
{
string table = "";
SqlCommand cmd;
SqlDataReader rdr;
SqlTriggerContext trigContxt = SqlContext.TriggerContext;
SqlPipe p = SqlContext.Pipe;
using (SqlConnection con = new SqlConnection("context connection=true"))
{
try
{
con.Open();
string tblhash = "";
using (cmd = new SqlCommand(colqry, con))
{
using (rdr = cmd.ExecuteReader(CommandBehavior.SingleResult))
{
if (rdr.Read())
{
MD5 hash = MD5.Create();
StringBuilder hashstr = new StringBuilder(250);
for (int i=0; i < rdr.FieldCount; i++)
{
if (i > 0) hashstr.Append("|");
hashstr.Append(GetMD5Hash(hash, rdr.GetName(i)));
}
tblhash = GetMD5Hash(hash, hashstr.ToString().ToUpper()).ToUpper();
}
rdr.Close();
}
}
using (cmd = new SqlCommand(hashqry, con))
{
using (rdr = cmd.ExecuteReader(CommandBehavior.SingleResult))
{
while (rdr.Read())
{
string hash = rdr.GetString(1).ToUpper();
if (hash == tblhash)
{
table = rdr.GetString(0);
break;
}
}
rdr.Close();
}
}
if (table.Length == 0)
{
p.Send("Error: Unable to find table that CLR trigger is on. Message not sent!");
return;
}
….
HTH
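For reference, here is the embedded hash query rendered as plain T-SQL for testing in SSMS. This is a sketch assuming SQL Server 2017+ (for STRING_AGG); the WITHIN GROUP clause makes the column order deterministic, which the TOP/ORDER BY trick in the embedded version does not actually guarantee, and the DISTINCT guards against tables that carry more than one CLR trigger:
WITH cols AS (
    SELECT DISTINCT c.object_id, c.column_id, c.[name]
    FROM sys.columns AS c
    JOIN sys.objects AS ot
        ON c.object_id = ot.parent_object_id AND ot.type = 'TA' -- CLR triggers
)
SELECT s.[name] + '.' + o.[name] AS TableName,
       CONVERT(NCHAR(32), HASHBYTES('MD5',
           STRING_AGG(CONVERT(NCHAR(32), HASHBYTES('MD5', cols.[name]), 2), '|')
               WITHIN GROUP (ORDER BY cols.column_id)), 2) AS MD5Hash
FROM cols
JOIN sys.objects AS o ON cols.object_id = o.object_id
JOIN sys.schemas AS s ON o.schema_id = s.schema_id
WHERE o.is_ms_shipped = 0
GROUP BY s.[name], o.[name];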

Source data type "200" not found error when exporting query results to Excel in Microsoft SQL Server 2012

I am very new to Microsoft SQL Server and am using 2012 Management Studio. I get the error above when I try to export query results to an excel file using the wizard. I have seen solutions posted elsewhere for this error but do not know enough to figure out how to implement the solutions recommended. Can somebody please walk me through one of these solutions step by step?
I believe my problem is that the SQL Server Import and Export Wizard does not recognise varchar and nvarchar, which I believe is the data type of the columns for which I am receiving errors.
Source Type 200 in SQL Server Import and Export Wizard?
http://connect.microsoft.com/SQLServer/feedback/details/775897/sql-server-import-and-export-wizard-does-not-recognise-varchar-and-nvarchar#
Query:
SELECT licenseEntitlement.entID, licenseEntitlement.entStartDate, entEndDate,
       quote.quoteId, quote.accountId, quote.clientId, quote.clientName, quote.contactName,
       quote.contactEmail, quote.extReference, quote.purchaseOrderNumber, quote.linkedTicket
FROM licenseEntitlement
INNER JOIN quote
    ON quote.quoteId = SUBSTRING(licenseEntitlement.entComments, 12,
                                 PATINDEX('% Created%', licenseEntitlement.entComments) - 12)
INNER JOIN sophos521.dbo.computersanddeletedcomputers
    ON computersanddeletedcomputers.name = entid AND IsNumeric(computersanddeletedcomputers.name) = 1
WHERE (licenseEntitlement.entType = 'AVS')
  AND (licenseEntitlement.entComments LIKE 'OV Order + %')
  AND entenddate < '4/1/2014'
ORDER BY licenseEntitlement.entEndDate
Error:
TITLE: SQL Server Import and Export Wizard
------------------------------
Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider.
[Query] -> `Query`:
- Column "accountId": Source data type "200" was not found in the data type mapping file.
- Column "clientId": Source data type "200" was not found in the data type mapping file.
- Column "clientName": Source data type "200" was not found in the data type mapping file.
- Column "contactName": Source data type "200" was not found in the data type mapping file.
- Column "contactEmail": Source data type "200" was not found in the data type mapping file.
- Column "extReference": Source data type "200" was not found in the data type mapping file.
- Column "purchaseOrderNumber": Source data type "200" was not found in the data type mapping file.
- Column "linkedTicket": Source data type "200" was not found in the data type mapping file.
If any more details are needed please let me know
So, implementing the suggestion at the StackOverflow link you gave, of turning the query into a View, here's an example of what that could look like (with some code formatting ;) --
CREATE VIEW [dbo].[test__View_1]
AS
SELECT LIC.entID, LIC.entStartDate, entEndDate,
       quote.quoteId, quote.accountId, quote.clientId, quote.clientName,
       quote.contactName, quote.contactEmail, quote.extReference,
       quote.purchaseOrderNumber, quote.linkedTicket
FROM [dbo].licenseEntitlement LIC WITH(NOLOCK)
INNER JOIN [dbo].quote WITH(NOLOCK)
    ON quote.quoteId = SUBSTRING(LIC.entComments, 12,
                                 PATINDEX('% Created%', LIC.entComments) - 12)
INNER JOIN sophos521.dbo.computersanddeletedcomputers COMPS WITH(NOLOCK)
    ON COMPS.name = entid AND IsNumeric(COMPS.name) = 1
WHERE (LIC.entType = 'AVS')
  AND (LIC.entComments LIKE 'OV Order + %')
  AND (entenddate < '4/1/2014')
-- Note: ORDER BY isn't allowed inside a view; sort when selecting from it instead.
GO
Then, you would export from test__View_1 (or whatever real name you choose for it), as if test__View_1 was the table name.
FYI: after the first time you've executed the above -- once you've "created" the view -- subsequent modifications change the first line from CREATE VIEW to ALTER VIEW.
((And, aside from the bug question... in your WHERE clause, did you intend entComments LIKE 'OV Order + %', or was that really intended to be entComments LIKE 'OV Order%'? I've made that change, in the alternative example code, below.))
Note: if you're going to be exporting repeatedly (or re-using) the output from one run, and especially if your query is slow or hogs the machine... then instead of a VIEW, you might prefer a SELECT INTO, to create a table once, which can be quickly re-used. (I would also choose SELECT INTO rather than CREATE VIEW, when developing a one-time-only query for export.)
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'zz_LIC_ENT_DETAIL')
DROP TABLE [dbo].zz_LIC_ENT_DETAIL
SELECT LIC.entID, LIC.entStartDate, LIC.entEndDate,
quote.quoteId, quote.accountId, quote.clientId, quote.clientName,
quote.contactName, quote.contactEmail, quote.extReference,
quote.purchaseOrderNumber, quote.linkedTicket
INTO [dbo].zz_LIC_ENT_DETAIL
FROM [dbo].licenseEntitlement LIC WITH(NOLOCK)
INNER JOIN [dbo].quote WITH(NOLOCK)
ON quote.quoteId = SUBSTRING(LIC.entComments, 12,
PATINDEX('% Created%', LIC.entComments) - 12)
INNER JOIN sophos521.dbo.computersanddeletedcomputers COMPS WITH(NOLOCK)
ON COMPS.name = LIC.entid and IsNumeric(COMPS.name) = 1
WHERE (LIC.entType = 'AVS')
AND (LIC.entComments LIKE 'OV Order%')
and (LIC.entenddate < '4/1/2014')
ORDER BY LIC.entEndDate
Then, you would of course export from table zz_LIC_ENT_DETAIL (or whatever table name you chose).
Hope that helps...
It might be easier to right-click the query results window and choose Save Results As (CSV).
To include the column names in the first row, you'd also need to modify your query in this way (note the casts for int or datetime columns):
select 'col1', 'col2', 'col3'
union all
select cast(id as varchar(10)), name, cast(someinfo as varchar(28))
from Question1355876
