I am using Serilog for logging in my Web API, and it is working fine. I log to SQL Server, and the following is the Serilog config for that:
__serilogLogger = new LoggerConfiguration()
    .Enrich.WithProperty("ApplicationIPv4", _ipv4)
    .Enrich.WithProperty("ApplicationIPv6", _ipv6)
    .WriteTo.MSSqlServer(connectionString, tableName /*, columnOptions: columnOptions*/)
    .WriteTo.Seq(ConfigurationManager.AppSettings["SerilogServer"])
    .CreateLogger();
I am a beginner with Serilog. My question is how to purge the logs in the database: does Serilog have any option to keep only the last 3 months of data, or something along those lines?
Based on a chat in the Serilog Gitter, there is no built-in option for that. It can be done with a SQL Server Agent job or any other scheduled job.
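For example, a minimal sketch of the cleanup statement such a scheduled job could run. The names here are assumptions: a log table called Logs and the MSSqlServer sink's default TimeStamp column; substitute whatever tableName you configured above.
-- Purge log rows older than 3 months (table/column names are assumptions).
DELETE FROM [Logs]
WHERE [TimeStamp] < DATEADD(MONTH, -3, SYSUTCDATETIME());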
I'm trying to set up a Debezium SQL Server Connector against a SQL Server instance that is controlled by DBAs at my workplace. I've been able to start up Zookeeper and Kafka Server without issue, and Kafka Connect itself works with sample Connectors, but when attempting to start a Debezium SQL Server Connector instance I've been getting the error "Couldn't obtain database name".
[2022-07-12 16:36:04,269] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:117)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Unable to connect. Check this and other connection properties. Error: Couldn't obtain database name
Here is my Debezium config:
name=Dbz-SqlServer-connector
connector.class=io.debezium.connector.sqlserver.SqlServerConnector
database.hostname=MyDbHost
database.port=1433
database.user=MyUsername
database.password=MyPassword
database.dbname=MyDatabase
database.server.name=MyDbHost
table.include.list=dbo.CdcTest
database.history.kafka.bootstrap.servers=localhost:9092
database.history.kafka.topic=dbhistory.CdcTest
I've tried this in a .properties file passed to a standalone Connect instance, and as a JSON POST to a distributed Connect instance. I have tried all of the same steps on both my local Windows machine and a Linux VM, with the same results.
Confluent and Docker are not options for me in this situation.
For the SQL Server login credentials, I am using a local account on the SQL Server instance that does have access to the database in question. I found the source code for Debezium's connectors on their GitHub and was able to find that specific error message within the code:
private static final String GET_DATABASE_NAME = "SELECT name FROM sys.databases WHERE name = ?";
...
public String retrieveRealDatabaseName(String databaseName) {
    try {
        return prepareQueryAndMap(GET_DATABASE_NAME,
                ps -> ps.setString(1, databaseName),
                singleResultMapper(rs -> rs.getString(1), "Could not retrieve exactly one database name"));
    }
    catch (SQLException e) {
        throw new RuntimeException("Couldn't obtain database name", e);
    }
}
I'm not completely familiar with Java but it appears that basically something is going wrong when the connector is trying to run "SELECT name FROM sys.databases WHERE name = 'MyDatabase'". When I run this against the database myself, logged in with the same account I'm using, it seems to work just fine, so I'm really not sure where to go from here. It is fair to say that since I'm not in full control of the SQL Server environment that I'm using, there may be some permissions issues that I'm not aware of, but from what I'm able to test it seems like it should be working.
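One way to rule out a permissions difference between your own session and the connector's is to impersonate the connector's login in SSMS and run the exact query Debezium issues. This is only a sketch: it assumes you have IMPERSONATE permission on that login, and 'MyUsername' / 'MyDatabase' are the placeholders from the config above.
-- Run Debezium's database-name lookup as the connector's login.
EXECUTE AS LOGIN = 'MyUsername';
SELECT name FROM sys.databases WHERE name = 'MyDatabase';
REVERT;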
I would greatly appreciate any help at all, whether just suggestions on settings/configs to check or a full-blown solution.
Thank you!
Update: I've built a simple console app to run that sys.databases query against MyDbHost, master database, as the relevant account, and it's working just fine, so I feel that confirms my connection info and account permissions are correct. It seems like this is an issue within the Debezium connector.
It turned out that my problem was a mistake in the connector's config settings. I misunderstood which specific pieces of data to put into database.hostname and database.server.name, and once I corrected those fields, the connector works.
I am new to using stored procedures and Azure storage accounts. I am exploring the following guide:
https://www.sqlshack.com/how-to-connect-and-perform-a-sql-server-database-restore-from-azure-blob-storage/
and have created a credential under the 'Security' > 'Credentials' folder of my database in SSMS.
Query that I ran in SSMS:
--using the url and the key
CREATE CREDENTIAL [Credential_BLOB]
WITH IDENTITY= 'https://<account>.blob.core.windows.net/',
SECRET = '<storage account key -> which I enter my Access Key 1>';
After that, I proceeded to run the following statement to restore the backup from blob storage:
RESTORE DATABASE Database_Name
FROM URL = 'https://<account>.blob.core.windows.net/Container/SampleDatabase.bak'
WITH CREDENTIAL = 'Credential_BLOB';
And I get this error:
Msg 41901, Level 16, State 2, Line 3
One or more of the options (credential) are not supported for this statement in SQL Database Managed Instance. Review the documentation for supported options.
However, in the guide linked above, the author was able to run that query successfully.
I tried googling the syntax of the RESTORE statement in the Microsoft Docs and searched for others who may have encountered a similar issue, but I did not find anything that worked. I would appreciate your help if you have encountered something similar and would like to share your solution. Thank you!
From the error you have shared, it is clear that you are using Azure SQL Managed Instance. The link you shared, however, doesn't mention anywhere which SQL Server edition it is using; the approach in that link might not work in your case because of differences between SQL Server editions and statement compatibility.
Then I tried the steps given in the Microsoft official documentation (link shared by @Nick.McDermaid in the comment section), and it works fine without any issue.
Please follow the steps below to achieve the requirement (applicable to SQL Server 2016 (13.x) and later, and to Azure SQL Managed Instance).
Use the GUI in SQL Server Management Studio to create the credential by following the steps below.
Connect to your SQL Server 2016 (13.x) or later instance, or to your Azure SQL Managed Instance.
Right-click your database name, hover over Tasks and then select Back up to launch the Back Up Database wizard.
Select URL from the Back up to destination drop-down, and then select Add to launch the Select Backup Destination dialog box.
Select New container on the Select Backup Destination dialog box to launch the Connect to a Microsoft Subscription window.
Sign in to the Azure portal by selecting Sign In and then proceed through the sign-in process. Select your subscription from the drop-down.
Select your storage account from the drop-down. Select the container you created already from the drop-down. Select Create Credential to generate your Shared Access Signature (SAS). Save this value as you'll need it for the restore.
I also tried to restore the database using the newly created credential and it is working fine.
To create the credential using T-SQL, please follow the steps provided in this link.
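For reference, a minimal T-SQL sketch of the SAS-based approach (the account, container, and SAS token below are placeholders): on Managed Instance the credential is named after the container URL, its identity must be 'SHARED ACCESS SIGNATURE', and the RESTORE statement then omits the WITH CREDENTIAL clause, because the URL is matched to the credential by name.
-- Placeholder account/container/token; strip any leading '?' from the SAS token.
CREATE CREDENTIAL [https://<account>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<sas-token>';
-- No WITH CREDENTIAL clause on Managed Instance.
RESTORE DATABASE Database_Name
FROM URL = 'https://<account>.blob.core.windows.net/<container>/SampleDatabase.bak';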
I have a problem with one of my newly developed Flink jobs.
When I run it in IntelliJ, the job works fine and commits records to the database.
The next step was to upload it to the Flink web UI and execute it there.
The database connection is established, and the inserts seem to be sent to the Oracle database, but the data does not seem to be committed.
I'm using a DataStream with the following setup:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(10000);
...
DataStreamSink<POJO> pojoSink = filteredStream
        .addSink(JdbcSink.sink(
                sqlString,
                (statement, pojo) -> {
                    // bind each POJO field to the prepared statement here
                },
                new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                        .withUrl(url)
                        .withDriverName(driver)
                        .withUsername(user)
                        .withPassword(password)
                        .build()));
I have no clue why it works on my laptop in the IDE but not on the server via the web UI.
The server logs also don't show any errors, and the checkpoints are completing.
Maybe someone has a suggestion where I can look for the problem.
Cheers
It seems like it was a one-time error. The next time, the job ran perfectly.
I'm using SSRS (SQL Server Reporting Services) to display reports; my data source is Snowflake.
I have installed the Snowflake ODBC driver and configured it properly.
[Screenshot: the ODBC configuration]
I have created a shared data source on the SSRS server (via Report Manager), put in my own credentials, and the connection works fine.
[Screenshot: the connection on the SSRS server]
I'm able to build the SSRS report without any issues. When I run the report, everything works fine; I can publish the report to the server, and it renders perfectly in the browser.
The issue is that when I go back to the report the next day, I'm presented with an error:
An error has occurred during report processing. (rsProcessingAborted)
Query execution failed for dataset
'insert_name_of_my_dataset_here'. (rsErrorExecutingCommand)
ERROR [57P03] No active warehouse selected in the current session.
Select an active warehouse with the 'use warehouse' command.
So this also means that the following don't work either:
Subscriptions
Cache refresh
Snapshots
The only thing that works is if I open my report in SSRS Report Builder, right-click EACH of my datasets ("each" is very important; it doesn't work if I don't do all of them), and run the queries manually for each of them. Then the "connection" or "session" is "re-activated" and the report runs fine, both locally AND on the server. Note that I do not have to re-publish the report to the server for it to run.
[Screenshots: the process described above]
Steps I have taken to address the issue (none of which resolved it):
I have tried putting the "use warehouse WAREHOUSE_NAME;" command before each dataset's SQL script, but Snowflake's API doesn't allow multiple SQL commands to be sent in one request. I saw that this functionality is in Snowflake's development pipeline and found this link: https://github.com/snowflakedb/snowflake-connector-net/issues/33 - the work was started in 2018, and the last update, from Apr 2019, says they are starting to address the JDBC driver; no mention of the ODBC driver yet.
I have set the Snowflake parameter CLIENT_SESSION_KEEP_ALIVE to true (https://docs.snowflake.com/en/sql-reference/parameters.html#client-session-keep-alive), but according to the community portal: "A similar keep-alive parameter is not currently available for the ODBC driver. Instead, you could issue a dummy query every few hours to keep the connection alive." (https://community.snowflake.com/s/article/faq-how-long-can-my-jdbcodbc-connection-remain-idle)
I have tried to create a cache refresh plan or a snapshot schedule that caches or snapshots the report every 3 hours; it works for the first scheduled run, but fails with the same error for the subsequent ones.
The only thing I didn't try is to have Snowflake never close the connection and keep the warehouse in the "started" state indefinitely, but this would increase my cost, and I'm pretty sure it wouldn't work anyway, since the session would still end after 4 hours...
Any assistance is welcome!
Thanks
Specs:
SSRS 2014
Snowflake X-small
ODBC 64-bit driver, installed from the Snowflake driver repository (tested with 32-bit also, but 64-bit is the one that is visible to SSRS)
I faced the same kind of issue and fixed it by granting the corresponding role access to the warehouse.
On the warehouse, grant the role USAGE, as sketched below.
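A minimal sketch of that grant (the warehouse and role names are placeholders):
-- Allow the reporting role to use the warehouse.
GRANT USAGE ON WAREHOUSE consumer_wh TO ROLE reporting_role;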
Could it be related to the data warehouse name (in the ODBC settings)? Is there a typo? COSNUMER_WH or CONSUMER_WH?
I strongly recommend setting default "context" configurations for situations like this: set the default role, warehouse, database, and schema with commands such as this:
ALTER USER xyz SET DEFAULT_WAREHOUSE = 'WH_NAME_HERE' ;
https://docs.snowflake.com/en/sql-reference/sql/alter-user.html
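For completeness, a sketch that sets the full default context; the user, role, warehouse, database, and schema names here are hypothetical:
-- Hypothetical names; set all four defaults for the report user.
ALTER USER ssrs_user SET DEFAULT_ROLE = 'REPORTING_ROLE';
ALTER USER ssrs_user SET DEFAULT_WAREHOUSE = 'CONSUMER_WH';
ALTER USER ssrs_user SET DEFAULT_NAMESPACE = 'MY_DB.MY_SCHEMA';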
I have 2 servers: Reporting and devsvr. I have SSIS on Reporting to pull article information from providers, and on devsvr I have a website to display my articles.
I made a linked server between the 2 servers.
To connect to the Reporting database I use the user EDBV3, and to connect to devsvr, the user MOS.
I connect to SQL Server Management Studio on Reporting with the EDBV3 account and execute:
INSERT INTO DEVSVR.extranet.dbo.EDBV3_Grossiste (IDGrossiste, Libelle)
SELECT IDGrossiste, Libelle
FROM edb_v3.dbo.EDB_Grossiste
WHERE EDB_Grossiste.EstActif = 1
No problem.
When I put this in an SSIS package / Execute SQL Task, I create the Reporting connection with the SQL account EDBV3, put my query in the SQL Task, and execute it via SQL Agent on Reporting. I get an error message: I'm not allowed to access extranet...
Why?
Finally I found the problem.
It's not the method.
I use a config file for the EDBV3 access (because I use it in a lot of SSIS packages),
but for some reason, for this package, it doesn't work on the server.
I included the information in a specific config file for this package, and now it works.