Can we configure mirroring between multiple instances?
Suppose I have two source servers, S1 hosting database A and S2 hosting database B. I now want to configure mirroring for both of these databases to a destination server S3. Can we do this using the same endpoint, or do we need to create another one on the destination S3 with a different port?
Also, do we need to create a separate certificate for each instance?
I have been using SQL authentication, so I am making use of certificates when creating the endpoints.
While trying to create a new endpoint using the query below:
/****** Object: Endpoint [Endpoint_Mirroring] Script Date: 7/6/2022 8:56:49 AM ******/
CREATE ENDPOINT [Endpoint_Mirroring1]
STATE=STARTED
AS TCP (LISTENER_PORT = 5023, LISTENER_IP = ALL)
FOR DATA_MIRRORING (ROLE = ALL, AUTHENTICATION = CERTIFICATE [mirror_cert1]
, ENCRYPTION = REQUIRED ALGORITHM AES)
GO
I get the following error:
Msg 7862, Level 16, State 1, Line 5
An endpoint of the requested type already exists. Only one endpoint of this type is supported. Use ALTER ENDPOINT or DROP the existing endpoint and execute the CREATE ENDPOINT statement.
Msg 7807, Level 16, State 1, Line 5
An error ('0x800700b7') occurred while attempting to register the endpoint 'Endpoint_Mirroring1'.
There is already one endpoint present on the server:
CREATE ENDPOINT [Endpoint_Mirroring]
STATE=STARTED
AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
FOR DATA_MIRRORING (ROLE = ALL, AUTHENTICATION = CERTIFICATE [mirror_cert]
, ENCRYPTION = REQUIRED ALGORITHM AES)
GO
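For what it's worth, the Msg 7862 error is by design: as the message itself says, SQL Server allows only one mirroring endpoint per instance, and that single endpoint serves every mirroring session on the instance. So both database A (from S1) and database B (from S2) can mirror to S3 through the existing Endpoint_Mirroring on port 5022; there is no need for a second endpoint or port. With certificate authentication, what S3 does need is one login per source server, each mapped to that server's public certificate and granted CONNECT on the endpoint. A minimal sketch of the S3 side, where the login, user, certificate, and file names are all hypothetical:
USE master;
GO
-- Import the public certificate backed up from S1 and map it to a login.
CREATE LOGIN login_S1 WITH PASSWORD = '<strongPassword1>';
CREATE USER user_S1 FOR LOGIN login_S1;
CREATE CERTIFICATE cert_S1_public
AUTHORIZATION user_S1
FROM FILE = 'C:\certs\S1_mirror_cert.cer';
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO login_S1;
-- Repeat for S2 with its own login and certificate.
CREATE LOGIN login_S2 WITH PASSWORD = '<strongPassword2>';
CREATE USER user_S2 FOR LOGIN login_S2;
CREATE CERTIFICATE cert_S2_public
AUTHORIZATION user_S2
FROM FILE = 'C:\certs\S2_mirror_cert.cer';
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO login_S2;
As for the certificates: each instance uses its own certificate for its own local endpoint, so yes, S1 and S2 each keep a separate certificate, but S3 does not need a new one per mirrored database.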
Related
I have a requirement to read CSV files from Azure blob storage. So far, this throws access denied errors every time I run my query:
CREATE DATABASE SCOPED CREDENTIAL <myScopedCredential>
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2021-06-08&ss=b&srt=sco&sp=rl&se=2023-03-31T09:38:05Z&st=2022-09-01T02:38:05Z...';
CREATE EXTERNAL DATA SOURCE <myExternalDatasource>
WITH (
TYPE = BLOB_STORAGE
, LOCATION = 'https://<myResource>.blob.core.windows.net/<myContainer>'
, CREDENTIAL = <myScopedCredential>
);
SELECT *
FROM OPENROWSET (
BULK '<folderName>/<fileName>.csv'
, DATA_SOURCE = '<myExternalDatasource>'
, FORMAT ='CSV'
, FORMATFILE='<formatFilesFolderName>/<formatfileName>.fmt'
, FORMATFILE_DATA_SOURCE = '<myExternalDatasource>'
, FIRSTROW = 2
) AS test
Below are some more details on how everything was set up:
The storage account kind is BlockBlobStorage.
In the Firewalls and virtual networks settings, access is enabled only from selected virtual networks and IP addresses. I already added my public IP address, as well as the IP address of the Azure SQL server, which I got from here: https://learn.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture?view=azuresql#gateway-ip-addresses
The whole process works if I set it to Enabled from all networks.
The SQL server and the storage account live within the same resource group.
I also configured a VNet that is added to both resources.
I saw this thread, which describes exactly my issue, but the accepted answer is not working for me: Cannot bulk load because the file "File.csv" could not be opened. Operating system error code 5(Access is denied.)
I checked all the documentation regarding SAS access keys, database scoped credentials, external data sources, and VNet networking, and I don't see any limitation that would cause SAS key access to be denied. Did I miss a configuration step? I find it a little weird that in most cases the recommendation is to set the storage account to Enabled from all networks, which might be a security issue.
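One step that is easy to miss in this setup: CREATE DATABASE SCOPED CREDENTIAL requires a database master key to exist first, and the SAS secret must be the token without the leading '?'. A quick check, as a sketch that assumes nothing else about the database:
-- A scoped credential cannot be created without a database master key.
IF NOT EXISTS (SELECT 1 FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##')
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strongPassword>';
-- The SECRET should start with 'sv=' (no leading '?'), as in the credential above.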
This is the error I get when I connect to Snowflake via Python:
OperationalError: 250003: Failed to execute request: ("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])",)
I connect using:
ctx = snowflake.connector.connect(
    user='JoeBloggs',
    password='pwd',
    account='JoeBloggs',
    database='DEV_DATA'
)
Do I need to pass in other parameters such as port, host, etc.? How do I find out what these are?
I think your value for 'account' needs to be modified. It looks like you're using your username there, but it should be the Snowflake account. This should be the portion of the URL that you connect directly to that precedes the snowflakecomputing.com portion. For example, 'xy12345.east-us-2.azure'.
My initial thought is that the error indicates a firewall or proxy issue. In particular, a proxy might intercept Snowflake's SSL certificate and replace it with its own. The best way to resolve this is to ensure the certificate is trusted in the proxy and the proxy is configured as per Snowflake's documentation, so that the Snowflake certificate can pass through.
The documentation below has more information on using a proxy with SnowSQL. You can pass the error, with the issuer details, along to your network engineer and request that the required URLs be whitelisted (documentation outlining the whitelisting requirements is also below). You can use the SYSTEM$WHITELIST function to get all the URLs to whitelist in a proxy or firewall for your account.
https://docs.snowflake.net/manuals/user-guide/snowsql-start.html#using-a-proxy-server
https://docs.snowflake.net/manuals/user-guide/hostname-whitelist.html
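For reference, SYSTEM$WHITELIST is an ordinary system function; a quick sketch, run from any working Snowflake session (for example the web UI):
-- Returns a JSON array of hostnames and ports to allow through the proxy/firewall.
SELECT SYSTEM$WHITELIST();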
First, install the Snowflake Python connector: pip3 install snowflake-connector-python.
Can you try the code below:
------------------------------------------------------
import snowflake.connector

PASSWORD = '*****'
USER = '<UNAME>'
ACCOUNT = '<ACCNTNAME>'
WAREHOUSE = '<WHNAME>'
DATABASE = '<DBNAME>'
SCHEMA = 'PUBLIC'

print("Connecting...")
con = snowflake.connector.connect(
    user=USER,
    password=PASSWORD,
    account=ACCOUNT,
    warehouse=WAREHOUSE,
    database=DATABASE,
    schema=SCHEMA
)
con.cursor().execute("USE WAREHOUSE " + WAREHOUSE)
con.cursor().execute("USE DATABASE " + DATABASE)
try:
    result = con.cursor().execute("Select * from <TABLENAME>")
    result_list = result.fetchall()
    print(result_list)
finally:
    con.close()  # close the connection itself, not just a throwaway cursor
---------------------------------------------------
What we are doing is simply shutting down SQL Server and physically moving the MSSQL folder to another server. After that operation, Service Broker does not work correctly. What do we need to do to make Service Broker work on the new server? What is the correct way to move a whole server to a new machine?
We have merge replication that we don't want to reinitialize, so backup/restore and attach/detach are not good options. Are there any solutions for reviving Service Broker on a new machine? Recreate the certificates, or create a new Service Broker GUID (NEW_BROKER)?
Alright, we moved the folder with the database files to a fresh new instance of SQL Server on another machine. After a few tests we got the expected error: An error occurred while receiving data: '10054(An existing connection was forcibly closed by the remote host.)'. In SQL Profiler it shows as: Connection handshake failed. Error 15581 occurred while initializing the private key corresponding to the certificate. The SQL Server errorlog and the Windows event log may contain entries related to this error. State 88.
So I regenerated the master keys on both the main database and the master database, and it worked. Service Broker now runs correctly in both directions.
USE <dbName>;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'password';
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'password';
CLOSE MASTER KEY;
USE master;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'password';
ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'password';
CLOSE MASTER KEY;
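As a side note, error 15581 usually means the database master key can no longer be decrypted by the new machine's service master key. If you would rather keep the existing key than regenerate it, a sketch of the alternative (assuming you know the master key password) is:
USE <dbName>;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'password';
-- Re-encrypt the existing key with the new server's service master key
-- so it opens automatically again, instead of regenerating it.
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;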
Do I really need to create a certificate to send a queued message between SQL Servers?
Can I use dbo authentication for the endpoint on both servers?
CREATE ENDPOINT target
STATE = STARTED
AS TCP
(
LISTENER_PORT = 4022
)
FOR SERVICE_BROKER (AUTHENTICATION = CERTIFICATE ????, ENCRYPTION = ENABLED);
If I have to use a certificate, can I use a user database certificate instead of one in master? How would I go about doing this?
I am not concerned with security at the moment. Both servers are on a closed LAN with no internet access.
Sorry, I do not have Profiler; I am using SQL Server Express 2005.
CREATE ROUTE RoutetoTarget
WITH
BROKER_INSTANCE = 'xxxxxx-xxx-xx-x-x-x-x',
SERVICE_NAME = 'LOCALReceivingService',
ADDRESS = 'TCP://targetipadress:PORT'
This works only between instances on the same server. However, once I add the target server's IP with the port number (the endpoint I created on the target server), messages get sent into the void; they never make it to the other server.
I figured it out. You need at least AUTHORIZATION dbo on both the local and the remote service, encryption must be off/disabled on both the endpoint and the sent messages, and, lastly, the database must NOT have a master key. Many online sites say Service Broker will not work without an encrypted master key, but that does not seem to be true in this case.
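To make that concrete, here is a minimal sketch of the sending side under those rules; the queue, service, contract, and message type names are hypothetical, and the message type and contract are assumed to already exist in both databases:
-- Both the local and the remote service are owned by dbo.
CREATE QUEUE SenderQueue;
CREATE SERVICE [LOCALSendingService]
AUTHORIZATION dbo
ON QUEUE SenderQueue ([MyContract]);
DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
FROM SERVICE [LOCALSendingService]
TO SERVICE 'LOCALReceivingService'
ON CONTRACT [MyContract]
WITH ENCRYPTION = OFF; -- must be OFF when the database has no master key
SEND ON CONVERSATION @handle
MESSAGE TYPE [MyMessageType] ('<test/>');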
You are not required to use a certificate:
CREATE ENDPOINT ssb_target
STATE = STARTED
AS TCP
(
LISTENER_PORT = 4022
)
FOR SERVICE_BROKER
(
AUTHENTICATION = WINDOWS,
ENCRYPTION = DISABLED
)
GO
More info: link
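Note that with AUTHENTICATION = WINDOWS, each instance's service account still needs permission to connect to the other side's endpoint; a sketch with a hypothetical domain account:
-- Run on each server, naming the other instance's service account.
CREATE LOGIN [DOMAIN\SqlSvcAccount] FROM WINDOWS;
GRANT CONNECT ON ENDPOINT::ssb_target TO [DOMAIN\SqlSvcAccount];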
For testing purposes, I placed two databases on the same server. I want to send queued messages between the databases via TCP (not GUID). Would I still need an endpoint since it's all on one server? Also, do I use tcp://127.0.0.1:PORT or tcp://IP:port?
Lastly, is the ReceivingService in the route the service on the target database or the service on the initiating database? Thanks in advance!
CREATE ROUTE Route_to_Target_Database_On_Same_Server
WITH
BROKER_INSTANCE = '111F27B6-1211-10E1-1711-B1D19113121111',
SERVICE_NAME = 'ReceivingService',
ADDRESS = 'TCP://127.0.0.1:2044'
CREATE ENDPOINT BrokerEndpoint
STATE = STARTED
AS TCP ( LISTENER_PORT = 2044 )
FOR SERVICE_BROKER (
ENCRYPTION = DISABLED);
I figured it out. Both work. You just have to be careful that firewalls/ports are not blocking traffic and that permissions are correct on both machines. There is no need for endpoints if everything is on the same server, but if the target is remote then endpoints are a must (with correct user permissions on both).
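To make the same-server case concrete: for two databases in the same instance no endpoint is involved, and CREATE ROUTE accepts ADDRESS = 'LOCAL', so no TCP address is needed at all. (SERVICE_NAME in a route always names the destination service, i.e. the one on the target database.) A sketch with the same hypothetical service name:
CREATE ROUTE Route_to_Target_Database_Local
WITH
SERVICE_NAME = 'ReceivingService',
ADDRESS = 'LOCAL';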