I have a set of views in two schemas under one database, and I want to know which views are used the most by external tools (Power BI, BO, etc.). Is there any way in Snowflake to identify that?
Every time an external tool accesses a Snowflake object (views in your case), the access is captured in Snowflake's query history.
To see which views are used the most, you can compare the number of times each of those views is accessed.
The following SQL query reads the query history, filters by view name, and returns the number of times a view was accessed within a given timeframe:
select count(1) as "ACCESS_COUNT"
from snowflake.account_usage.query_history
where start_time between '<yyyy-mm-dd hh:mm:ss>' and '<yyyy-mm-dd hh:mm:ss>'
  and database_name = '<db_name>'
  and query_text ilike '%<view_name>%';
Please note that the result will also include accesses made by any other users (not just by external tools).
To refine your search, you can add more filters. Here's the list of columns in the query_history view.
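For example, here is a hedged sketch that compares several views in one pass and breaks the counts down per user, so external-tool service accounts can be told apart from other users. The view names VIEW_A and VIEW_B are placeholders for your own views:

-- Count accesses per view and per user in the chosen time window
select
    case
        when query_text ilike '%view_a%' then 'VIEW_A'
        when query_text ilike '%view_b%' then 'VIEW_B'
    end as view_name,
    user_name,
    count(*) as access_count
from snowflake.account_usage.query_history
where start_time between '<yyyy-mm-dd hh:mm:ss>' and '<yyyy-mm-dd hh:mm:ss>'
  and database_name = '<db_name>'
  and (query_text ilike '%view_a%' or query_text ilike '%view_b%')
group by 1, 2
order by access_count desc;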
Currently we're identifying whether a column is virtual in Snowflake by running a SHOW COLUMNS query and checking the KIND field for VIRTUAL_COLUMN. Unfortunately, there's a 10k limit on entries returned from SHOW queries in Snowflake, and we'd like to be able to run this query at the schema level on schemas with ~25k tables.
According to this post there's no way to identify virtual columns in the information_schema.columns view, and we'd like to avoid running a SHOW COLUMNS query at the table level or a DESC TABLE on every table.
Is there some other way we can identify virtual columns at scale?
Unfortunately, I'm not aware of any native capability. I would consider writing a script that uses the get_ddl() function and running it against all objects in a schema.
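A minimal sketch of that idea, assuming a database MY_DB and a schema MY_SCHEMA (placeholder names): GET_DDL at the schema level returns the CREATE statements for every object in the schema in a single call, and virtual column expressions should show up in the generated CREATE text as AS (<expression>) clauses, which your script can then parse.

-- One call per schema; parse the returned DDL text for "AS (" clauses
select get_ddl('schema', 'MY_DB.MY_SCHEMA');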
I am trying to get the pct_table_scan from my query history, using information_schema.query_history(), as:
select
    query_id,
    (partitions_scanned / nullif(partitions_total, 0)) * 100 as pct_table_scan
from table(information_schema.query_history());
However, I see in the documentation that the partitions_% columns are available in the account_usage query_history view, which I don't have access to, but they are missing from information_schema.query_history, which I can access.
Is there any way I can access the partitions_% columns from the information schema?
As you can see in the documentation, these columns are not available in the output of the table function (information_schema.query_history), so you can't access this information through the function.
You may submit a feature request to Snowflake Support to add these columns to the query_history table function.
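For reference, a sketch of the equivalent query against the ACCOUNT_USAGE view, which does expose these columns; it assumes your role has been granted imported privileges on the shared SNOWFLAKE database (for example by an ACCOUNTADMIN):

-- Requires access to SNOWFLAKE.ACCOUNT_USAGE; data lags by up to 45 minutes
select
    query_id,
    (partitions_scanned / nullif(partitions_total, 0)) * 100 as pct_table_scan
from snowflake.account_usage.query_history
where start_time >= dateadd('day', -7, current_timestamp());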
I want to display all audit history data in the MS CRM format.
I have imported all records from the AuditBase table in CRM to a table on another database server.
I want to query this table using SQL and present the records in the Dynamics CRM format (as per the above image).
What I have done so far:
select
    AB.CreatedOn as [Created On],
    SUB.FullName as [Changed By],
    Value as Event,
    AB.AttributeMask as [Changed Field],
    AB.ChangeData as [Old Value],
    '' as [New Value]
from AuditBase AB
inner join StringMap SM on SM.AttributeValue = AB.Action and SM.AttributeName = 'action'
inner join SystemUserBase SUB on SUB.SystemUserId = AB.UserId
--inner join MetadataSchema.Attribute ar on AB.AttributeMask = ar.ColumnNumber
--inner join MetadataSchema.Entity en on ar.EntityId = en.EntityId and en.ObjectTypeCode = AB.ObjectTypeCode
--inner join Contact C on C.ContactId = AB.ObjectId
where ObjectId = '00000000-0000-0000-000-000000000000'
order by AB.CreatedOn desc
My problem is that AttributeMask is a comma-separated value that I need to compare with the MetadataSchema.Attribute table's ColumnNumber field. And how do I get the New Value for that entity?
I have already checked this link: Sql query to get data from audit history for opportunity entity, but it's not giving me the [New Value].
NOTE: I cannot use "RetrieveRecordChangeHistoryResponse", because I need to show this data on an external web page from a SQL table (not the CRM database).
Well, basically Dynamics CRM does not create this audit view (the way you see it in CRM) using a SQL query, so if you succeed in doing it, Microsoft will probably buy it from you, as it would be much faster than the way it's currently handled :)
But really, the way it works currently, SQL is used only to obtain the relevant audit records (without any matching against attribute metadata or anything else), and all the parsing and matching with metadata is done in the .NET application. The logic is quite complex, and there are so many different cases to handle that I believe recreating it in SQL would require not just a simple SELECT query but a really complex procedure (and even that might not be enough, because not everything in CRM is kept in the database; some things are simply compiled into the application libraries), plus weeks or even months of work for one person (of course that's my opinion; maybe some T-SQL guru will prove me wrong).
So, I would do it differently: use RetrieveRecordChangeHistoryRequest (which was already mentioned in some answers) to get all the audit details (already parsed and ready to use) in some kind of .NET application (probably running periodically, or maybe triggered by a plugin in CRM, etc.) and put them in a database in a user-friendly format. You can then consume this database with whatever external application you want.
Also I don't understand your comment:
I can not use "RetrieveRecordChangeHistoryResponse", because i need to
show these data in external webpage from sql table(Not CRM database)
What kind of application cannot call an external service (you can create a custom service; you don't have to use the CRM service) to get some data, but can access an external database? You should not read from the DB directly; a better approach would be to prepare a web service that returns the audit you want (using the CRM SDK under the hood) and to call this service from the external application. Unless, of course, your external app is only capable of reading databases, not calling any custom web services...
It is not possible to reconstruct a complete audit history from the AuditBase tables alone. For the current values you still need the tables that are being audited.
The queries you would need to construct are complex, and writing them can be avoided if the RetrieveRecordChangeHistoryRequest is a suitable option.
(See also How to get audit record details using FetchXML on SO.)
NOTE
This answer was submitted before the original question was extended stating that the RetrieveRecordChangeHistoryRequest cannot be used.
As I said in the comments, the audit table will have the old value and the new value, but not the current value. The current value is pushed in as the new value only when the next update happens.
In your OP query, AB.AttributeMask will return comma (",") separated values and AB.ChangeData will return tilde ("~") separated values. Read more
I assume you are fine with "~"-separated values in the Old Value column and want to show the current field values in the New Value column. This is not going to work when multiple fields are enabled for audit. You have to split the AttributeMask field value into CRM fields from the Attribute view using ColumnNumber and get the required result.
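As a starting point, here is a hedged T-SQL sketch of that split-and-join step. It assumes SQL Server 2016+ (for STRING_SPLIT) and reuses the on-premise CRM table names from the query above; adjust the names to match your copied schema:

-- Resolve the comma-separated AttributeMask into one row per changed field
select
    AB.AuditId,
    AB.CreatedOn,
    A.Name as ChangedField
from AuditBase AB
cross apply string_split(AB.AttributeMask, ',') m           -- one row per column number
inner join MetadataSchema.Attribute A
    on A.ColumnNumber = try_cast(m.value as int)             -- map number -> attribute
inner join MetadataSchema.Entity E
    on E.EntityId = A.EntityId
   and E.ObjectTypeCode = AB.ObjectTypeCode                   -- restrict to the audited entity
where m.value <> ''                                           -- AttributeMask has leading/trailing commas
  and AB.ObjectId = '00000000-0000-0000-000-000000000000'
order by AB.CreatedOn desc;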
I would recommend the reference blog below to start with. Once you get the expected result, you can pull the current field value with an extra query, either in SQL or using C# in the front end. But you should concatenate the values with "~" again to maintain the format.
https://marcuscrast.wordpress.com/2012/01/14/dynamics-crm-2011-audit-report-in-ssrs/
Update:
From the above blog, you can tweak the stored procedure query with your fields, then convert the last SELECT statement to a SELECT INTO to create a new table for your storage.
Modify the stored procedure to fetch the delta based on the last run. Configure a SQL job and schedule it to run every day or so to populate the table.
Then select and display the data the way you want. I did the same in Power BI in under 3 days.
Pros/cons: obviously this requirement is for reporting purposes. Reporting requirements are generally met by mirroring the database through replication or other means, without interrupting production users and the async server by injecting plugins or making on-demand ad hoc service calls. Moreover, you have access to the database and are not on CRM Online. Better not to reinvent the wheel and to take the available solution forward. This is my humble opinion, based on a Microsoft internal project implementation.
I'm trying to put a data dictionary together for a client using SQL Server MDS. In order to avoid tracking the specific location in the data warehouse/marts/source systems of the fields being defined, I'm hoping to use the system views as much as possible.
This works fine for tracking the fields themselves. However, the users' reports work off cubes built on views in the data marts, and I'm having trouble tracking the source of calculated columns.
Taking this view as an example:
CREATE VIEW [dbo].[vw_testing_colltable]
AS
SELECT colid, coldesc, firstadd + secondadd AS totaladd
FROM dbo.testing_coltable
Where in the system views can I determine that firstadd and secondadd are part of the view and they are used to create totaladd?
INFORMATION_SCHEMA.VIEW_COLUMN_USAGE lists firstadd and secondadd but not totaladd.
INFORMATION_SCHEMA.COLUMNS lists totaladd but not firstadd and secondadd.
sys.columns combines the two.
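For illustration, a hedged sketch of those two lookups against the example view (assuming the dbo schema from the CREATE VIEW statement above):

-- Base columns the view references (returns firstadd and secondadd)
select table_name, column_name
from INFORMATION_SCHEMA.VIEW_COLUMN_USAGE
where view_schema = 'dbo'
  and view_name = 'vw_testing_colltable';

-- Columns the view exposes (returns colid, coldesc and totaladd)
select column_name
from INFORMATION_SCHEMA.COLUMNS
where table_schema = 'dbo'
  and table_name = 'vw_testing_colltable';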
I need to run a join query against an MS SQL Server 2014 DB based on a column name value. The same query runs when querying the DB directly, but when running it through Mule I get an error. The query looks something like this:
SELECT * FROM sch.emple JOIN sch.dept on sch.emple.empid = sch.dept.empid;
The above query works fine when run directly against the MS SQL Server DB, but gives the following error through MuleSoft:
Record cannot be mapped as it contains multiple columns with the same label. Define column aliases to solve this problem (java.lang.IllegalArgumentException). Message payload is of type: String
Please help me out.
Specify the column list explicitly:
SELECT e.<col1>, e.<col2>, ...., d.<col1>,...
FROM sch.emple AS e
JOIN sch.dept AS d
ON e.empid = d.empid;
Remarks:
You could use aliases instead of schema.table_name.
SELECT * in production code is bad practice in 95% of cases.
The duplicated column is empid (there may be more). You could add aliases for it, e.g. e.empid AS emple_empid and d.empid AS dept_empid, or just select e.empid once (a full version is sketched below).
To avoid typing all columns manually, you could drag and drop them from Object Explorer into the query pane (see Drag and Drop Column List into query window).
A second way is to use a plugin like Redgate SQL Prompt to expand SELECT *:
Image from: https://www.simple-talk.com/sql/sql-tools/sql-server-intellisense-vs.-red-gate-sql-prompt/
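Putting the remarks together, a hedged sketch of the corrected query; only empid is known to collide here, and the other column names are placeholders for your real columns:

SELECT e.empid AS emple_empid,        -- alias the colliding join key
       d.empid AS dept_empid,
       e.<emple_col1>, e.<emple_col2>, -- remaining emple columns
       d.<dept_col1>, d.<dept_col2>    -- remaining dept columns
FROM sch.emple AS e
JOIN sch.dept AS d
    ON e.empid = d.empid;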
Addendum
But the same query works directly.
It works because you don't bind them. Please read carefully the link I provided about the SELECT * antipattern, especially:
Binding Problems
When you SELECT *, it's possible to retrieve two columns of the same name from two different tables. This can often crash your data consumer. Imagine a query that joins two tables, both of which contain a column called "ID". How would a consumer know which was which? SELECT * can also confuse views (at least in some versions of SQL Server) when underlying table structures change -- the view is not rebuilt, and the data which comes back can be nonsense. And the worst part of it is that you can take care to name your columns whatever you want, but the next guy who comes along might have no way of knowing that he has to worry about adding a column which will collide with your already-developed names.
by Dave Markle
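To make the collision concrete, a small hypothetical sketch (the tables orders and customers are made up for illustration):

-- Both orders and customers have a column named ID, so both result columns are
-- labelled "ID"; a consumer that maps columns by label (as the Mule connector does)
-- cannot tell them apart.
SELECT *
FROM orders o
JOIN customers c
    ON o.customer_id = c.ID;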