When using the SDP to extract data from Cloudant and populate dashDB, I occasionally see error messages in the dashDB "XXXX_OVERFLOW" table that look like this:
No matched schema for {"_id":"...","doc":{...}
Questions
What does this error mean?
How can I fix it?
There are two main phases to the SDP process:
Schema analysis
Data import
In the schema analysis phase, the SDP analyses a sample of documents in Cloudant and uses the document structures of the sample to infer the target schema in dashDB.
The above error is encountered when the SDP tries to import a document whose schema it did not see during the schema analysis phase.
The only option to resolve this is to increase the sample size used during schema discovery to unlimited.
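For context, documents that fail to map are routed to the overflow table rather than silently dropped, so you can inspect what was skipped before re-running discovery. A minimal sketch, assuming a target table named MYTABLE (the overflow table's exact name and columns come from your own warehouse, so treat these names as placeholders):

-- Hypothetical: inspect the documents the SDP could not fit to the
-- discovered schema. Check your dashDB catalog for the actual name of
-- the overflow table generated for your warehouse.
SELECT *
FROM MYTABLE_OVERFLOW
FETCH FIRST 10 ROWS ONLY;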
Related
I am trying to use the Snowflake connector in Tableau to query an external Snowflake table.
I cannot see the external table in the list of all tables on the left pane in Tableau (only regular Snowflake tables), so I have tried to pull from the external table using SQL.
Running this from the Snowflake site gets me the contents of the external table:
select * from EXTERNAL_TABLE_NAME;
Running the same from the "New Custom SQL" dialog in Tableau's Snowflake connector gets me this:
SQL compilation error: Object 'EXTERNAL_TABLE_NAME' does not exist or not authorized.
I also tried the following:
select * from @DATABASE_NAME.SCHEMA_NAME.STAGE_NAME.EXTERNAL_TABLE_NAME
...which gets me: SQL compilation error: Object does not exist, or operation cannot be performed.
Any thoughts on what I can do to get this to work? I don't think it is a permissions issue because I am using the same account to auth in Tableau as I do on the Snowflake website.
I'm guessing that I simply need to do a better job pointing to the location where the external table is, but I can't figure it out.
Thanks in advance for your help!
Looks like this is a deeper permissions issue that I will have to resolve with our Snowflake admin. I was able to pull into Tableau from an external Snowflake table successfully using a different ROLE and DATABASE, so I am marking this resolved.
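For anyone who hits the same error, these are the kinds of checks to run on the Snowflake side before digging into Tableau; a minimal sketch (role, database, schema, and table names are placeholders):

-- Confirm the session context that Tableau's connection actually gets:
select current_role(), current_database(), current_schema();

-- Switch to a role and database that can read the external table:
use role ANALYST_ROLE;
use database DATABASE_NAME;
use schema SCHEMA_NAME;

-- External tables are listed separately from regular tables:
show external tables in schema DATABASE_NAME.SCHEMA_NAME;

-- Query with the fully qualified name; no stage belongs in the path:
select * from DATABASE_NAME.SCHEMA_NAME.EXTERNAL_TABLE_NAME;

If the fully qualified select works in Snowflake but not in Tableau, the role assigned to Tableau's connection is the usual suspect.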
I am trying to copy CSV files from my local directory into a SQL Server database running on my local machine by using Apache NiFi.
I am new to the tool and have spent a few days googling and building my flow. I managed to connect to both source and destination, but I still cannot populate the database because I get the following error: "None of the fields in the record map to the columns defined by the tablename table."
I have been struggling with this for a while and have not been able to find a solution on the web. Any hint would be highly appreciated.
Here are further details.
I have built a simple flow using the GetFile and PutDatabaseRecord processors.
My input is a simple table with 8 columns.
For the GetFile processor, I have added the input directory and left the rest as default.
For the PutDatabaseRecord processor, I have referenced the CSVReader and DBCPConnectionPool controller services, used the MS SQL 2012+ database type (I have the 2019 version), configured the INSERT statement type, entered the schema and the correct table name, and left everything else as default.
For the CSVReader, Schema Access Strategy = Use String Fields From Header and CSV Format = Microsoft Excel.
For the DBCPConnectionPool, I have added the correct URL, DB driver class name, driver location, DB user and password.
Finally, I have created a table in the database to host the content.
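To make the field mapping concrete: PutDatabaseRecord matches record fields to table columns by name, so the CSV header has to line up with the column names. A hypothetical example of a header and a matching table (all names invented):

-- Hypothetical CSV header: id,first_name,last_name,email,city,country,signup_date,is_active
CREATE TABLE dbo.Customers (
    id INT NOT NULL,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    email VARCHAR(100),
    city VARCHAR(50),
    country VARCHAR(50),
    signup_date DATE,
    is_active BIT
);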
Many thanks in advance!
The warning "None of the fields in the record map to the columns defined by the tablename table." is also raised when the processor cannot find the table at all. This can happen even when the table name is correctly configured in PutDatabaseRecord but there is an issue with the user's access rights (which turned out to be the actual cause of my error).
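If access rights are indeed the culprit, the login used by the DBCPConnectionPool needs to read the table's metadata and insert into it. A minimal sketch in T-SQL (database, table, login, and user names are placeholders):

-- Hypothetical names throughout; run as a user allowed to grant rights.
USE MyDatabase;
-- Map the SQL Server login that NiFi connects with to a database user:
CREATE USER nifi_user FOR LOGIN nifi_login;
-- PutDatabaseRecord needs column metadata for mapping, plus insert rights:
GRANT SELECT, INSERT ON dbo.Customers TO nifi_user;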
Expert,
How can we configure Azure/Snowflake so that all Snowflake logs are accessible in Azure Log Analytics, and then use Kusto queries and alerting rules to create alerts?
Rana
It depends on what data you want to unload from Snowflake to log files, as there is a lot of information available in the account_usage and information_schema schemas. But it's easy enough to write that data out to files on Azure storage for ingestion and use in Azure Log Analytics. Here's an example, pushing errors recorded in the login_history view to JSON files:
copy into @~/json_error_log.json from
(select object_construct(*) from (
    select event_timestamp, event_type, user_name, reported_client_type, error_code, error_message
    from table(information_schema.login_history(dateadd('days', -7, current_timestamp()), current_timestamp()))
    where error_code is not null
    order by event_timestamp))
file_format = (type = 'JSON');
And you can find more information here:
https://docs.snowflake.com/en/user-guide/data-unload-azure.html
I can't comment on the Azure Log Analytics side of the operation, but hopefully this gives you some idea of what to do on the Snowflake side.
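As a possible next step toward Log Analytics ingestion, the same unload can target an external Azure stage instead of the user stage; a sketch with a placeholder container URL and SAS token:

-- Hypothetical stage pointing at your Azure container:
create or replace stage my_azure_log_stage
  url = 'azure://myaccount.blob.core.windows.net/snowflake-logs'
  credentials = (azure_sas_token = '<sas-token>');

-- Same query as above, unloaded to Azure instead of the user stage:
copy into @my_azure_log_stage/login_errors/ from
(select object_construct(*) from (
    select event_timestamp, event_type, user_name, reported_client_type, error_code, error_message
    from table(information_schema.login_history(dateadd('days', -7, current_timestamp()), current_timestamp()))
    where error_code is not null))
file_format = (type = 'JSON');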
I was forwarded a Crystal Reports error message that said:
Failed to retrieve data from the database. ... Description: The EXECUTE permission was denied on the object 'xxxx_IDList', database 'DBName', schema 'dbo'.
There is an object (if that is the right word) named 'xxxx.IDList' under User-Defined Table Types in the database.
I have never created or used a User-Defined Table Type, so I am just trying to figure out how to approach this error and how to proceed with troubleshooting it.
I am hoping this is not an uncommon error.
Can anyone suggest an approach to solving this problem?
Thanks in advance!
In SQL Server, any table that is not a system table is a user-defined table, i.e. anything in a typical non-system database that holds business data. The error message tells me that you have a database (DBName) on the server you are connecting to, that the database contains at least one schema (dbo, which is the default), and that the object xxxx_IDList lives in that schema.
Your app is trying to execute this object as if it were a function or stored procedure, and you do not have permission to do that.
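If the missing permission is confirmed, a DBA can grant EXECUTE on the table type directly. A minimal sketch in T-SQL, assuming the object under User-Defined Table Types is the one named in the error (the user name is a placeholder):

-- User-defined table types require EXECUTE permission before they can be
-- passed as table-valued parameters, which is what this error points at.
USE DBName;
GRANT EXECUTE ON TYPE::dbo.xxxx_IDList TO [report_user];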
Do you have the source for the app that we can look at?
When using the SDP to extract data from Cloudant and populate dashDB, I occasionally see error messages in the dashDB "XXXX_OVERFLOW" table that look like this:
[XXXX does not exist in the discovered schema. Document has not been imported.]
Questions
What does this error mean?
How can I fix it?
This error is similar to the No matched schema for {"_id":"...","doc":{...} error above, so the same answer applies here.
There are two main phases to the SDP process:
Schema analysis
Data import
In the schema analysis phase, the SDP analyses a sample of documents in Cloudant and uses the document structures of the sample to infer the target schema in dashDB.
The above error is encountered when the SDP tries to import a document whose schema it did not see during the schema analysis phase.
The only option to resolve this is to increase the sample size used during schema discovery to unlimited.