Expert,
How can we configure Azure and Snowflake so that all Snowflake logs are accessible from Azure Log Analytics, and then use Kusto queries and alert rules to create alerts?
Rana
It depends on what data you want to unload from Snowflake to log files, as there is a lot of information available in ACCOUNT_USAGE and INFORMATION_SCHEMA. But it's easy enough to write that data out to files on Azure storage for ingestion and use in Azure Log Analytics. Here's an example, pushing errors recorded in LOGIN_HISTORY to JSON files:
copy into @~/json_error_log.json from
(select object_construct(*) from (
  select event_timestamp, event_type, user_name, reported_client_type, error_code, error_message
  from table(information_schema.login_history(dateadd('days', -7, current_timestamp()), current_timestamp()))
  where error_code is not null
  order by event_timestamp))
file_format = (type = 'JSON');
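If you want the files to land directly in Azure Blob Storage rather than the user stage, a minimal sketch would look something like this (the stage name, storage account, container and SAS token are placeholders, not values from your account):

-- Hypothetical external stage pointing at an Azure Blob Storage container
create or replace stage azure_log_stage
  url = 'azure://mystorageaccount.blob.core.windows.net/snowflake-logs'
  credentials = (azure_sas_token = '<sas_token>');

-- Unload the same login errors to that stage instead of the user stage
copy into @azure_log_stage/login_errors/ from
(select object_construct(*) from (
  select event_timestamp, event_type, user_name, reported_client_type, error_code, error_message
  from table(information_schema.login_history(dateadd('days', -7, current_timestamp()), current_timestamp()))
  where error_code is not null
  order by event_timestamp))
file_format = (type = 'JSON');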
And you can find more information here:
https://docs.snowflake.com/en/user-guide/data-unload-azure.html
Can't comment on the Azure Log Analytics side of things, but hopefully this gives you some idea of what to do on the Snowflake side.
Related
I'm getting an error when running a Data Factory pipeline to load data into Snowflake:
ErrorCode=SnowflakeUnsupportedCouldPlatformForImport,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Snowflake only support the account hosted in azure platform while as sink connector, please make sure your account is hosted in azure, current region and platform is '',Source=Microsoft.DataTransfer.ClientLibrary,'
Does the Azure Blob storage (the staging area for the COPY command) need to be in the same region as where Snowflake was provisioned?
In my case, going to the warehouse you have configured and enabling "Auto Resume" solved the issue for me. Hope this helps someone running into the same issue.
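In case it helps, the same change can be made in SQL rather than the UI; a minimal sketch, with MY_WH standing in for your warehouse name:

-- Let the warehouse resume automatically when a query or COPY statement hits it
alter warehouse MY_WH set auto_resume = true;

-- Or resume it right away
alter warehouse MY_WH resume;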
I am trying to use the Snowflake connector in Tableau to query an external Snowflake table.
I cannot see the external table in the list of all tables on the left pane in Tableau (only regular Snowflake tables), so I have tried to pull from the external table using SQL.
Running this from the Snowflake site gets me the contents of the external table:
select * from EXTERNAL_TABLE_NAME;
Running the same from the "New Custom SQL" dialog in Tableau's Snowflake connector gets me this:
SQL compilation error: Object 'EXTERNAL_TABLE_NAME' does not exist or not authorized.
I also tried the following:
select * from @DATABASE_NAME.SCHEMA_NAME.STAGE_NAME.EXTERNAL_TABLE_NAME
...which gets me: SQL compilation error: Object does not exist, or operation cannot be performed.
Any thoughts on what I can do to get this to work? I don't think it is a permissions issue, because I am using the same account to authenticate in Tableau as I do on the Snowflake website.
I'm guessing that I simply need to do a better job of pointing to the location of the external table, but I can't figure it out.
Thanks in advance for your help!
Looks like this is a deeper permissions issue that I will have to resolve with our Snowflake admin. I was able to pull from an external Snowflake table into Tableau successfully using a different ROLE and DATABASE, so I'm marking this resolved.
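For anyone who ends up sorting this out with their admin, the grants involved usually look something like this sketch (MY_DB, MY_SCHEMA and TABLEAU_ROLE are placeholder names):

grant usage on database MY_DB to role TABLEAU_ROLE;
grant usage on schema MY_DB.MY_SCHEMA to role TABLEAU_ROLE;
grant select on external table MY_DB.MY_SCHEMA.EXTERNAL_TABLE_NAME to role TABLEAU_ROLE;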
When I load data from an Oracle database to Salesforce from Informatica with the SFDC Bulk API option checked, no data is getting inserted into Salesforce. Workflow Monitor shows the records as successful, but when I check in Salesforce nothing has been inserted. How do I bulk load to Salesforce?
There are most likely errors in your data; that is why you are not able to see any data in your Salesforce target.
While using the SFDC Bulk API option, rejected data is not written to any reject file. To find out whether there are errors in your data, implement the steps below in order.
In the target session properties, check the following options:
1. Use SFDC Error File
2. Monitor Bulk Jobs Until All Batches Processed
3. Set the Location of the BULK Error Files (you must provide a path)
After making these changes, run the workflow. If there are any errors in the data, the rejected rows will be written to the reject file, along with the error message, in the location you provided in step 3.
I am currently setting up our second Azure Search service. I am making it identical to our existing one, just in a different region.
I'm using the portal Import Data function to set up my index. For the Data Source, I have configured it to connect to my Azure SQL Database and table, which definitely has Integrated Change Tracking turned on. Further, it's the exact same database and table that I'm connected to and indexing from in my existing Azure Search service.
The issue is that when I get to the "Create an Indexer" step, I get the message that says "Consider enabling integrated change tracking on your database..." In other words, it doesn't think I have change tracking on this database. I definitely do, and our other Azure Search Service recognizes this just fine on the exact same database.
Any idea what's going on here? How can I get this data source to be recognized as having change tracking turned on, and why isn't it being recognized when everything works as expected in our existing Search service with an identical setup?
We will investigate. In the meantime, please try creating your datasource and indexer programmatically using the REST API or .NET SDK.
When I was experiencing this problem, I tried creating the search service via "Add Azure Search" in Azure portal > SQL database.
Using that wizard I was able to create the search data source, index & indexer.
Update: I opened a ticket with Azure support, and while gathering more information for them I tried to reproduce the problem (creating a data source via the REST API), but the expected failure ("Change tracking not enabled for table..." despite it being enabled) did not happen. This makes me think there was something wrong in internal Azure code that was fixed in the meantime.
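For anyone debugging a similar mismatch, one way to confirm on the SQL side which databases and tables really have change tracking enabled is to query the system views; a sketch, run in the Azure SQL database (only system catalog names are used here):

-- Databases with change tracking enabled
SELECT db.name
FROM sys.change_tracking_databases AS ct
JOIN sys.databases AS db ON db.database_id = ct.database_id;

-- Tables with change tracking enabled in the current database
SELECT t.name
FROM sys.change_tracking_tables AS ctt
JOIN sys.tables AS t ON t.object_id = ctt.object_id;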
I'm trying to clean up my Postgres database on Heroku, where some large objects have gotten out of control, and I want to remove large objects that aren't used anymore.
On my dev machine, I can do:
select distinct loid from pg_largeobject
Where loid Not In (select id from table )
Then I run:
SELECT lo_unlink(loid)
on each of those IDs.
But on Heroku, I can't run any selects on pg_largeobject; even
select * from pg_largeobject limit 1;
gives me an error:
ERROR: permission denied for relation pg_largeobject
Any suggestions on how to work around this, or on why we don't have read access to pg_largeobject on Heroku in the first place?
Since PostgreSQL 9.0, a non-superuser can't access pg_largeobject. This is documented in the release notes:
Add the ability to control large object (BLOB) permissions with
GRANT/REVOKE (KaiGai Kohei)
Formerly, any database user could read or modify any large object.
Read and write permissions can now be granted and revoked per large
object, and the ownership of large objects is tracked.
If it works on your development instance, it's either because it's version 8.4 or lower, or because you're logged in as a superuser.
If you can't log in as a superuser on Heroku, I guess you could dump the remote database with pg_dump, reload it locally, identify the leaked OIDs as a local superuser, put them in a script with the lo_unlink commands, and finally run that script against the Heroku instance.
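A rough sketch of generating that script locally as a superuser (my_table stands in for the table from the original query):

-- Emits one lo_unlink() statement per leaked large object;
-- save the output to a file and run it against the Heroku instance
select 'SELECT lo_unlink(' || loid || ');'
from (select distinct loid from pg_largeobject
      where loid not in (select id from my_table)) as leaked;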
Update:
Based on how the psql \dl command queries the database, it appears that pg_catalog.pg_largeobject_metadata can be used to retrieve the OIDs and ownership of all large objects, through this query:
SELECT oid as "ID",
pg_catalog.pg_get_userbyid(lomowner) as "Owner",
pg_catalog.obj_description(oid, 'pg_largeobject') as "Description"
FROM pg_catalog.pg_largeobject_metadata ORDER BY oid
So your initial query finding the leaked large objects could be changed for non-superusers with 9.0+ into:
select oid from pg_largeobject_metadata
Where oid Not In (select id from table )
and if necessary, a condition on lomowner could be added to filter on the large objects owned by a specific user.
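For example, a sketch of that filter (my_table and my_role are placeholders):

select oid from pg_largeobject_metadata
where oid not in (select id from my_table)
  and lomowner = (select oid from pg_roles where rolname = 'my_role');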