Terraform destroy not working for DB instances in AWS - database

I ran terraform destroy. Then I got this message and the DB instances are still there.
Error: DB Instance Final Snapshot Identifier is required when a final snapshot is required.
Do I need to create a snapshot?
If so, is it possible to do it directly in the console?

You can use the skip_final_snapshot argument to bypass this behavior, which appears to be what you're looking for here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#skip_final_snapshot.
Set that argument to true on your aws_db_instance, apply the new configuration to update the DB instance, and then you can destroy it without the error requiring the final snapshot.
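For example, the relevant argument looks like this (a sketch; the resource name and other settings are placeholders):

```hcl
resource "aws_db_instance" "example" {
  # ... your existing engine, instance_class, etc. ...

  # Skip the final snapshot so `terraform destroy` can proceed.
  skip_final_snapshot = true

  # Alternatively, keep a final snapshot by naming it instead:
  # final_snapshot_identifier = "example-final-snapshot"
}
```

Run terraform apply first so the setting reaches AWS, then terraform destroy.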

It is also valid to delete the RDS instance manually in the console; Terraform will pick up the change on the next refresh/plan.

Related

Regression on SQL Server Connection from Standard Logic App

I have been developing Standard Logic Apps with SQL Server successfully for some time, but suddenly can no longer connect. I'm using Azure AD Integrated as my Authentication Type, which I know is OK as I use the same credentials in SSMS. If I try to create a new credential, it is apparently successful but on save the Logic App says "The API connection reference XXX is missing or not valid". Something has changed, but I don't know what ... help!
Per above, this was submitted to Microsoft and has been resolved as follows: the root cause is that if a Logic App Parameter name includes an embedded space, the problem with SQL connections is triggered. This is a pernicious problem, as the error message is quite unrelated to the root cause. Further, since embedded spaces are supported elsewhere in Logic Apps, e.g. in step names, it is easy to assume the same applies across the board.
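To illustrate, in a Standard Logic App's parameters.json a parameter named with an embedded space (e.g. "Sql Server Name") triggers the failure, while a space-free name does not. A sketch, with a made-up name and placeholder value:

```json
{
  "SqlServerName": {
    "type": "String",
    "value": "myserver.database.windows.net"
  }
}
```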

Snowflake JDBC driver internal error: Fail to retrieve row count for first arrow chunk: null -- only occurs on SELECT statements

I have successfully established the JDBC connection and can successfully execute statements like "use warehouse ...". When I try to run any SELECT statement I get the following error:
net.snowflake.client.jdbc.SnowflakeSQLLoggedException: JDBC driver internal error: Fail to retrieve row count for first arrow chunk: null.
I can see in the Snowflake UI that my request was successful and returned the expected data.
The error occurs on this line:
rs = statement.executeQuery("select TOP 1 EVENT_ID from snowflake.account_usage.login_history");
The statement was able to execute queries prior to this line and the result set was as expected. Any insight would be appreciated!
This could happen due to several reasons:
What JDK version are you using?
JDK 16 introduced strong encapsulation of JDK internals (see JEP 396).
If you're using JDK 16, try setting the following at the JVM level on startup:
-Djdk.module.illegalAccess=permit
This is a workaround until there is a fix for the Apache Arrow issue ARROW-12747.
If you use an application that connects to Snowflake over JDBC, the application might not interpret the results correctly. Try switching back to the JSON result format rather than Arrow and see if that fixes it. This can be done at the session level by running:
ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'
I was using DBeaver to connect to Snowflake and had the same issue.
It was resolved by setting the session parameter in each editor window as follows:
ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON';
This can be automated by configuring bootstrap queries under Connection settings -> Initialization. Every new editor window will then set this session parameter during initialization.
I hit the same problem and was able to get it working by downgrading to Java 11, with driver version
[net.snowflake/snowflake-jdbc "3.13.8"]
You can add the following two settings to this file (macOS):
/Applications/DBeaver.app/Contents/Eclipse/dbeaver.ini
-Djdk.module.illegalAccess=permit
--add-opens=java.base/java.nio=ALL-UNNAMED
Information from: https://support.dbvis.com/support/solutions/articles/1000309803-snowflake-fail-to-retrieve-row-count-for-first-arrow-chunk-
Another alternative that worked for me on my Mac M1 is to use JDK 11:
brew install openjdk@11
Then edit /Applications/DBeaver.app/Contents/Eclipse/dbeaver.ini, changing this line:
../Eclipse/jre/Contents/Home/bin/java
to:
/opt/homebrew/opt/openjdk@11/bin/java
Restart DBeaver.
The official solution from Snowflake is to configure an extra property in your datasource configuration:
https://community.snowflake.com/s/article/SAP-BW-Java-lang-NoClassDefFoundError-for-Apache-arrow
You can set this property (jdbc_query_result_format=json) in the datasource properties of your application server, or as a session property in the application, e.g.:
Statement statement = connection.createStatement();
statement.executeQuery("ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'");
This uses JSON as the result format instead of Arrow, which avoids the above error.
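The datasource-property route can also be applied per connection by passing the parameter in the JDBC connection properties, so every statement on the connection uses JSON results. A minimal sketch, assuming JDBC_QUERY_RESULT_FORMAT is accepted as a connection property (the class name, URL, and credentials below are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class SnowflakeConnectionProps {

    // Build connection properties that request JSON results instead of
    // Arrow. The property name matches the session parameter.
    public static Properties jsonResultProps(String user, String password) {
        Properties props = new Properties();
        props.put("user", user);
        props.put("password", password);
        props.put("JDBC_QUERY_RESULT_FORMAT", "JSON");
        return props;
    }

    // Example usage (not run here; requires a live Snowflake account):
    // Connection conn = connect("jdbc:snowflake://<account>.snowflakecomputing.com/", props);
    public static Connection connect(String url, Properties props) throws SQLException {
        return DriverManager.getConnection(url, props);
    }
}
```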
Before executing your actual query, you need to run:
statement.executeQuery("ALTER SESSION SET JDBC_QUERY_RESULT_FORMAT='JSON'");

EFCodeFirst 4.2 and Provider Manifest tokens

I have a library that I have created that depends on EF Codefirst for DB interaction. I am also using EntityMigrations Alpha 3. When I use the library in my main application (WPF) everything works fine and as expected. Another part of the system uses Excel and retrieves information using the same library via an additional COM class in between.
In the Excel scenario, as soon as it tries to connect to the database, it throws up an exception to do with "The Provider did not return a ProviderManifestToken".
I'm really not sure why I'm only getting the error when I go through Excel/COM. In both scenarios I can confirm that the same DB connection string is being used. The method to retrieve the DB connection string is also the same - they use a shared config file & loader class.
Any suggestions welcome.
Issue resolved.
I had also created a custom DBInitializer, and part of the initialization calls upon EntityMigrations to ensure the DB is up to date. The migration calls the default constructor on your context. By convention this will either dynamically use its own connection string for SQL Express (which I don't have installed) or look for an entry in your config file (which I don't have either for the DLL - config comes from the hosting apps).
This is what was causing the failure when used from Excel (in my scenario). The migration news up an instance of the context using the default constructor, which means a config entry for the connection string is required, or it falls back to the default process (SQL Express). When used from Excel in a COM environment, no config file exists.
Moving the migration out of the initialization strategy means I no longer have a problem.

Error while executing query for custom object Work Order

I am executing a query against my custom object created in SFDC, but I am getting the following error:
[{"message":"\nSELECT FS_Account_Name__c from FS_Work_Order__c\nERROR at Row:1:Column:34\nsObject type 'FS_Work_Order__c' is not supported. If you are attempting to use a custom object, be sure to append the '__c' after the entity name. Please reference your WSDL or the describe call for the appropriate names.","errorCode":"INVALID_TYPE"}]
Though I used the correct object name as given when I created the custom object. Please help.
First thing to try: does it work properly when run as the system administrator profile? If so, then it's almost certainly a permissions issue. Things to check:
The object is deployed (Setup > Create > Objects > Edit > Deployment Status).
The profile has permission to query the object.
If not, does that same query work from inside the developer console? If so I can't think of what it might be, except connecting to production instead of a sandbox or vice versa.

Possible to refresh Django connections on the fly?

Is it possible to add a new database connection to Django on the fly?
I have an application that uses multiple databases (django 1.2.1), and while running, it's allowed to create new databases. I'd need to use this new database right away (django.db.connections[db_alias]). Is it possible without server restart? Using module reload here and there?
Thank you for your time.
It is possible... but not recommended...
You can access the current connection handler...
Use something like this:
from django.db import connections
if alias not in connections.databases:
    connections.databases[alias] = connections.databases['default'].copy()  # copy the 'default' settings
    connections.databases[alias]['NAME'] = alias
Make sure you do not attempt to add a new alias to the databases dictionary while there is ANY database activity on the current thread.
An issue you need to overcome is that this code must run somewhere it will always be executed by the current thread before any attempt to access the database. I use middleware to achieve this.
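The registration step above can be wrapped in a small helper. A minimal sketch: the function name is made up, and the databases argument stands in for django.db.connections.databases (a plain dict of settings dicts), so the copy-before-mutate behavior can be seen in isolation:

```python
def ensure_alias(databases, alias):
    """Register `alias` in the databases mapping if it is missing.

    Copies the settings of the 'default' connection (a real copy, not a
    shared reference, so mutating NAME does not corrupt 'default') and
    points NAME at the new database. Returns the alias's settings dict.
    """
    if alias not in databases:
        settings = dict(databases['default'])  # shallow copy of the settings
        settings['NAME'] = alias
        databases[alias] = settings
    return databases[alias]
```

In Django you would call this with django.db.connections.databases before the first query on the new alias, e.g. from the middleware mentioned above.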
