We recently discovered that an external stage in our environment mysteriously disappeared, and we don't know why. I assume someone must have run a DROP STAGE command at some point, but I don't know who could have done that or when it occurred.
I have tried searching Query History for instances of LIKE '%drop stage%', but I'm not getting any hits. I would guess that since DROP STAGE is a DDL command, Query History is the wrong place to look. Is there a way to find out when the stage was dropped?
You can find the date when the stage was removed by querying the SNOWFLAKE.ACCOUNT_USAGE.STAGES shared view. There is a DELETED column associated with each object, and it is populated when the stage is dropped.
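For example, a minimal sketch (the stage name is a placeholder; your role needs access to the ACCOUNT_USAGE share, and that view can lag real time by a couple of hours):

```sql
-- DELETED is NULL for live objects and holds a timestamp for dropped ones.
SELECT stage_name,
       stage_schema,
       stage_catalog,
       created,
       deleted
FROM snowflake.account_usage.stages
WHERE stage_name = 'MY_STAGE'   -- hypothetical name; substitute your own
  AND deleted IS NOT NULL;
```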
I create an external stage in Snowflake via the following (I've tried with a public bucket too):
CREATE OR REPLACE stage "DATABASE"."SCHEMA"."STAGE_NAME"
url='s3://bucket'
CREDENTIALS=(AWS_KEY_ID='xxxxxxxxxxxx' AWS_SECRET_KEY='xxxxxxxxxxxx');
I can view the parameters of this stage via
SHOW STAGES
DESC STAGE "DATABASE"."SCHEMA"."STAGE_NAME"
However, I'm getting the following error whenever I try to interact with this stage (e.g., LIST @STAGE_NAME or loading a file):
SQL compilation error: Stage 'DATABASE.SCHEMA.STAGE_NAME' does not exist or not authorized.
I've tried different Snowflake roles but can't make it work. Could anyone point me to where I should look? Perhaps I have to assign some permissions to the stage?
There are stage privileges: https://docs.snowflake.com/en/user-guide/security-access-control-privileges.html#stage-privileges
For COPY, LIST, and others you need the privileges mentioned there (USAGE, READ, and possibly WRITE).
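A minimal sketch of the grants (my_role is a placeholder; note that for an external stage, as in this question, USAGE is the relevant stage privilege, while READ/WRITE apply to internal stages):

```sql
-- The role also needs USAGE on the enclosing database and schema.
GRANT USAGE ON DATABASE "DATABASE" TO ROLE my_role;
GRANT USAGE ON SCHEMA "DATABASE"."SCHEMA" TO ROLE my_role;
GRANT USAGE ON STAGE "DATABASE"."SCHEMA"."STAGE_NAME" TO ROLE my_role;
```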
It's pretty weird, but I can only list a stage if the name consists of capital letters. No additional permissions are needed.
Works fine:
CREATE OR REPLACE stage "DATABASE"."SCHEMA"."STAGE_NAME"
url='s3://bucket'
CREDENTIALS=(AWS_KEY_ID='xxxxxxxxxxxx' AWS_SECRET_KEY='xxxxxxxxxxxx');
LIST @STAGE_NAME
Returns "Stage does not exist or not authorized":
CREATE OR REPLACE stage "DATABASE"."SCHEMA"."Stage_Name"
url='s3://bucket'
CREDENTIALS=(AWS_KEY_ID='xxxxxxxxxxxx' AWS_SECRET_KEY='xxxxxxxxxxxx');
LIST @Stage_Name
At the same time, I see all Stages while running the "SHOW STAGES" command.
Are there any constraints on the naming? I haven't found any so far.
If the stage DDL has the name enclosed in double quotes (CREATE OR REPLACE STAGE "DATABASE"."SCHEMA"."STAGE_NAME"), the name becomes case-sensitive, which is why you cannot reference it with an unquoted identifier. Do not enclose the stage name in quotes and you should be able to reference it regardless of case.
https://docs.snowflake.com/en/sql-reference/sql/create-stage.html#required-parameters
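To illustrate, a sketch of the two behaviors (database/schema names and the bucket are placeholders):

```sql
-- Unquoted identifiers are stored in upper case and matched case-insensitively:
CREATE OR REPLACE STAGE my_db.my_schema.stage_name URL = 's3://bucket';
LIST @stage_name;       -- works: resolves to STAGE_NAME
LIST @STAGE_NAME;       -- works: same identifier

-- Quoted identifiers preserve case and must be quoted the same way thereafter:
CREATE OR REPLACE STAGE "MY_DB"."MY_SCHEMA"."Stage_Name" URL = 's3://bucket';
LIST @Stage_Name;       -- fails: the unquoted name resolves to STAGE_NAME
LIST @"Stage_Name";     -- works: quoted, exact case
```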
Hope you're well. I'm currently building out a report, but despite my best efforts so far, I can't get some information to populate within the report. It does not appear that Salesforce is recognizing the field "Agent Incoming Connecting Time" within the object "AC_Agent_Performance". However, I can pull some other fields within the same object into the Agent Performance report, so I'm not clear on what is going wrong with the field that I wish to see within the report. Here are some of the things that I've tried:
I have checked the access to the field. The first photo (Photo 1) shows an example of a working object, and the second one shows an example of one that does not work.
The API name seems to work, and is consistent with other fields within the object that work.
I have checked the page layout for the object (even though I don't think this is the issue), and I have mirrored other fields to the best of my knowledge that ARE populating within the report.
On a lark, I reviewed the CTI flows to see if there was something missing in there, but nothing in them led me to believe that this was the source of the problem.
I have tried setting up a new (formula) field in the object that references the field I'm trying to pull in, but that just returns a result of zero for all values.
One thing that does appear to be working: I have set up a joined report that uses both the "AC Agent Performance" object and the "AC Historical Queue Metrics" object. The result it returns appears to be accurate (please see picture number 3). However, I don't think this is the right way to go about it, and I don't want to do it this way. I want to use the report with one object rather than two.
I know that permissions are the most likely issue, so I've taken a close look at these. Please let me know if there is something wrong with how I have the permissions configured. The first image depicts the 'Field Level Security'; the second image depicts the 'Field Accessibility'. They are both like this the whole way down:
Please note one other thing, which is that the last picture depicts a different field within the object displaying in the report.
Does anyone have any ideas on how I can proceed so the field "Agent Incoming Connecting Time" will display within the report?
Please also note, that these are objects that contain data that is populated from AWS' Amazon Connect.
This last photo, shows that the object does not have any information in it within the report.
If the field isn't populated, there's not much you can do on the reporting side of things. You already tried a joined report. You should check why the integration doesn't populate it: read the integration's documentation, contact the managed package's support, and so on.
The tables are connected with a lookup or master-detail relationship, right? In a pinch you could try making a formula field on "AC Agent Performance" that looks "up" and pulls the value from the related "AC Historical Queue Metrics" record. If the relationship is the other way around (performance -> down through a related list -> metrics), you could try to make do with a master-detail relationship and a roll-up summary field. I don't know this package, so I have no idea whether you can pull it off when you don't have full control over the fields.
If you can't really use the relationships and absolutely need to report on a single table, you could capture intermediate results of the report to a helper table and then report on that; the feature is called "reporting snapshots". Or write a nightly (hourly?) batch that recalculates the values and writes a homemade "rollup" to these fields.
I would like to take the Audit History provided by Enterprise Architect and create a SQL query, reported through a BI tool, that will allow me and other users to search the history of an object, but I am having a little trouble understanding the audit table, t_snapshot.
From what I can tell, t_snapshot has a Style column containing "INSERT", "UPDATE", and "DELETE", which tells me what happened, and the Notes column tells me which object is referenced, but so far I've only been able to get a partial picture. What I have not been able to deduce is when an event occurred or which user made the change.
If anyone has encountered this problem in the past, your input would be appreciated.
Well, I don't know whether you really want to touch that.
There's a column called BinContent which contains what you are looking for. It looks like this:
<LogItem>
  <Row Number="0">
    <Column Name="object_id"><Old Value="1797"/><New Value="1797"/></Column>
    <Column Name="name"><Old Value="CB"/><New Value="CBc"/></Column>
    <Column Name="modifieddate"><Old Value="07.12.2018"/><New Value="11.12.2018"/></Column>
    <appliesTo><Element Type="Action"/></appliesTo>
  </Row>
  <Details User="Thomas" DateTime="2018-12-11 08:22:59"/>
</LogItem>
So basically some XML describing the change including the plain text user name.
The BinContent column(s) are actually zips, each containing a single file, str.dat, that holds the above information.
Good luck.
I'm struggling a bit to overcome this obstacle, which is to create a table with a foreign key to another table. It looks simple, right? It is, but unfortunately I'm not being successful. The error thrown is the one in the title. Has anyone else had this error before? How did you resolve it? I'm using SQL Server 2014, but the error is thrown through the OutSystems IDE.
Best regards,
Rafael Valente
It would help if you could post a picture of your data model for us to take a look at.
One way of dealing with this kind of error in OutSystems is inspecting the database itself. There's a system table called ossys_espace; get your espace id from there. Then query ossys_entity to see which physical table backs that entity, and check whether there's something wrong with it.
There's also the possibility that a table you created in the past is causing the error. Check for entities with the deleted flag set to true in that table. If it helps, there's a Forge component that you can use to clean up those deleted entities.
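A sketch of the kind of query I mean (the espace name is a placeholder, and the exact column names are assumptions that may vary by platform version, so verify them against your own system tables):

```sql
-- Find the espace id ('MyEspace' is a hypothetical name):
SELECT id, name
FROM ossys_espace
WHERE name = 'MyEspace';

-- List that espace's entities and the physical table backing each one;
-- the "deleted" flag mentioned above may surface as is_active = 0.
SELECT name, physical_table_name, is_active
FROM ossys_entity
WHERE espace_id = 42;  -- substitute the id returned by the first query
```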
If you have access to the server you can also look at the generated SQL and understand if there's a problem with it.
I find that error weird, but you might be bumping into a bug, and for sure we want to know that :)
I have an edmx file with update-able views. I made these views by following an example here where I delete the name and the type and leave just the dbo schema; however, every time I pick "Update Model from Database", these views, and their entire definitions including associations and such, get removed from the file.
To solve this problem, I end up doing a manual merge with the previous version, but this is a really long and painful process.
Anyone know what I'm doing wrong?
Example of my declared update-able view:
<EntitySet Name="vw_MeterEmisHist" EntityType="Model.Store.vw_MeterEmisHist" Schema="dbo" />
I have had the same thing happen when adding nodes to allow for mapping stored procedures to entities. The reason for this is that the XML-formatted EDMX file is completely auto-generated whenever the model is updated (or created) from the database.
The easiest workaround that I have found is to keep a text file within my solution with the changes that I have made, so that they can easily be reapplied. To speed things up, it's possible to create a find/replace macro within Visual Studio to automate the process.
If anyone ever gets really bored, that sort of functionality would make a great add-in. (Or a great fix in VS. MS, are you listening?)