I have a SQL Server instance inside a restricted network, and I need to get data from the outside in.
I would like to use Service Broker for this. My thinking is that the external database places a message on a queue, and then a service sitting inside the restricted LAN listens (polls?) for these messages and acts upon them.
I cannot have the external queue initiate the normal broker conversation into the restricted LAN.
My question is: should I be looking at the Service Broker External Activator to sit inside the restricted LAN, listen for new messages, and then act upon them? Has anyone got any experience with this? Documentation and examples for the External Activator are pretty thin on the ground, and monolog conversations are not supported in Service Broker yet.
Is MSMQ a better option?
My recommendation would be to let Service Broker deliver the message all the way into the SQL Server instance inside the restricted LAN. That will require the restricted LAN to allow incoming connections (i.e., allow the inside server to listen and accept). MSMQ would be no different; the MSMQ port(s) would have to be open into the restricted LAN.
If you want a dedicated process inside the restricted LAN that 'gets' the data in, then you must ensure transactional consistency between the external server 'get' and the internal server write: the two operations have to be enlisted in a distributed transaction, and the DTC protocol itself needs to be allowed into the restricted LAN. So some ports still need to be open into the restricted LAN.
What your LAN security designers need to understand is that Service Broker connections are not Transact-SQL connections. Service Broker uses a dedicated protocol that only allows the exchange of Service Broker messages. All traffic is encrypted and secured with RC4 or AES encryption, and SSB cryptography is FIPS compliant. Allowing Service Broker traffic to the SQL Server inside is probably the most secure way of letting data from the external server reach the secured server.

In Service Broker networking there is no concept of 'client' and 'server', so one cannot design the network to allow connections in only one direction (unlike, say, HTTP, which can be designed to connect from inside to outside but not the other way). SSB networking requires both machines to be able to connect to each other, because response messages can arrive after long delays (hours or days; consider the case when a queue is backed up, so it takes a long time until the message is processed and a response is sent). It is not feasible to keep connections open for days waiting for a response, so the receiver of a message must be able to connect back to the sender to deliver the response.
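As a sketch (endpoint name, certificate, port, and service names are illustrative, not from the poster's setup), the inside server needs a listening Broker endpoint at the server level and a route at the database level so replies and acknowledgements can travel back out:

-- Server level: the Broker endpoint the external server connects to.
CREATE ENDPOINT BrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 4022)
    FOR SERVICE_BROKER (AUTHENTICATION = CERTIFICATE InsideServerCert,
                        ENCRYPTION = REQUIRED ALGORITHM AES);

-- Database level: route replies back to the external service.
CREATE ROUTE ExternalRoute
    WITH SERVICE_NAME = 'ExternalService',
         ADDRESS = 'TCP://external-host:4022';

The firewall then only has to admit inbound TCP on that single Broker port.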
The "Real Time Metrics" panel of my MongoDB Atlas cluster, shows 36 connections, even though I terminated all server apps that were supposed to be connected to it. Currently nothing should be connected to it, but I still see those 36 connections. I tried pausing the cluster and then resuming it - the connections came back. Is there any way for me to find out where are they coming from? OR, terminating all connections.
Each connection is supposed to carry what is called "app metadata", which always includes:
The driver identifier (e.g. pymongo 1.2.3)
The platform of the client (e.g. linux amd64)
Additionally, you can provide your own information to be sent as part of the client metadata, which you can use to identify your application. See, e.g., the Ruby driver's :app_name option: https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/.
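For instance, with pymongo (one of the drivers named above) the application name can be set at connection time; the URI and name here are placeholders:

from pymongo import MongoClient

# appname is sent as part of the client metadata and shows up in the
# server logs, so connections can be attributed to this application.
client = MongoClient(
    "mongodb+srv://user:pass@cluster0.example.mongodb.net/",
    appname="billing-service",
)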
Atlas has internal processes that connect to cluster nodes, and the cluster nodes communicate with each other as well. All of these add to the connection count seen on each node.
To figure out where connections are coming from:
Read the server logs (which you have to download first) to obtain the client metadata sent with each connection; a live alternative is sketched after this list.
Hopefully this will provide enough clues to identify cluster-to-cluster connections. You should also be able to tell those by their source IPs, which you should be able to dig out of the cluster configuration.
Atlas connections should be using either the Go or the Java driver; if you don't use those in your own applications, this is an easy way of telling them apart.
Add app name to all of your application connections to eliminate those from the unknown ones.
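As a live alternative to downloading logs, the $currentOp aggregation stage reports each connection's client metadata. A minimal pymongo sketch, assuming you have privileges to run $currentOp (the URI is a placeholder):

from pymongo import MongoClient

client = MongoClient("mongodb+srv://admin:secret@cluster0.example.mongodb.net/")

# List active and idle connections along with the driver, platform and
# application name each one reported at handshake time.
pipeline = [{"$currentOp": {"allUsers": True, "idleConnections": True}}]
for op in client.admin.aggregate(pipeline):
    meta = op.get("clientMetadata", {})
    print(op.get("client"), meta.get("driver"), meta.get("application"))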
There is no facility provided by the MongoDB server to terminate connections from clients. You can kill operations and sessions, but the connections used for those operations remain until the clients close them. When clients close connections depends on the particular driver and its connection pool settings; see e.g. https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/#connection-pooling.
I'm working on logs. I want to reproduce a log entry in which the application fails to connect to the server.
Currently the commands I'm using are
db2 force applications all
This closes all the connections and then one by one I deactivate each database using
db2 deactivate db "database_name"
What happens is that it temporarily blocks the connections, and after a minute my application is able to create the connection again, so I am not able to regenerate the log entry. Any ideas how I can do this?
What you are looking for is QUIESCE.
By default, users can connect to a database; it then becomes active, and its internal in-memory data structures are initialized. When the last connection closes, the database becomes inactive. Explicitly activating a database initializes those structures up front and leaves it "ready to use".
Quiescing a database puts it into an administrative state: regular users cannot connect. You can quiesce a single database or the entire instance. See the docs for options to manage access to quiesced instances. The following forces all users off the current database and keeps them away:
db2 quiesce db immediate
If you want to produce a connection error for an app, there are other options. Have you ever tried connecting to a non-existing port, one Db2 is not listening on? Or revoking the CONNECT privilege from the user trying to connect?
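A sketch of the full CLP sequence (database and user names are illustrative). Quiesce, reproduce the failed connection from the app, then restore access:

db2 connect to mydb
db2 quiesce db immediate force connections
db2 unquiesce db

The revoke alternative looks like this:

db2 connect to mydb
db2 revoke connect on database from user testuser
db2 terminate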
There are several testing strategies that can be used; they involve disrupting the network connection between client and server:
Alter the IP routing table on the client to route the DB2 server address to a non-existent subnet (a sketch follows this list)
Route the connection through proxy software that can be turned off; there is a special proxy, Toxiproxy, which was designed for the purpose of testing network disruptions
Pull the Ethernet cable from the client machine, observe, then plug it back in (I've done this)
This has the advantage of not disabling the DB2 server for other testing in progress.
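A sketch of the routing approach on a Linux client (the server address is illustrative; requires root). A blackhole route is the simplest way to make the server unreachable:

ip route add blackhole 203.0.113.10/32

Reproduce the failed connection from the application, then restore normal routing:

ip route del blackhole 203.0.113.10/32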
I had a perfectly working SQL Server Service Broker this morning, until I tested how it recovers from crashing.
I forced a system shutdown on the sender during a messaging session between servers over a network. I was sending binary messages of about 5mb size. There are automatic procedures for sending, replying and receiving messages and ending conversations from both sides in place and my setup uses certificates for security.
I am now unable to send any messages from the server side.
Both sides of the messaging chain have their queues enabled, and it does not seem like poison-message handling would be causing this. The sender side accepts new messages but is not sending them.
The sender-side transmission queue has messages with this transmission_status:
The Service Broker endpoint cannot listen for connections due to the following error: '10013(An attempt was made to access a socket in a way forbidden by its access permissions.)'.
Running ALTER ENDPOINT myendpoint STATE = STARTED returns the same error as above.
Running select * from sys.endpoints nonetheless shows the endpoint with state_desc = STARTED.
Running select state_desc from [sender_database].sys.conversation_endpoints shows state_desc = CONVERSING for all results.
Running SELECT COUNT(*) FROM dbo.sender_queue returns 0.
There is no other traffic on the port my endpoint is using, at least none visible with netstat or the TCPView tool. The firewall has rules allowing traffic on the ports, and the sqlagent and sqlservr processes also have extra allow rules.
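For reference, the stuck messages and their status can be listed with a query along these lines (database name as above):

SELECT to_service_name, transmission_status, enqueue_time
FROM [sender_database].sys.transmission_queue;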
Using the ssbdiagnose tool with ssbdiagnose -level info configuration from service... from the sender side shows a (not new) error:
The route for service sender_service is classified as REMOTE. This will result in the message being forwarded.
along with some other errors about certificates that have always been there, even when messaging was working. Ssbdiagnose with the RUNTIME flag shows nothing at all.
Ssbdiagnose from the target side now says an exception occurs during connection. The target database also has a couple of reply messages stuck in the transmission queue with an empty transmission_status.
Edit: Seems that occasionally the status on the target side changes to the error 10060 connection failed...
What more can I do to diagnose the problem and fix it?
Edit: I tried changing the port the endpoint uses but the same error is thrown.
Edit: I am able to ping the servers from each other. Ssbdiagnose with RUNTIME option on target side says it cannot find the connection to the SQL Server that corresponds to the routing address of my sender endpoint/database.
The Service Broker endpoint cannot listen for connections due to the following error: '10013(An attempt was made to access a socket in a way forbidden by its access permissions.)'
WSAEACCES (10013) is a rather unusual socket listen error; I have never encountered it before. A quick search reveals KB3039044: Error 10013 (WSAEACCES) is returned when a second bind to an excluded port fails in Windows, which is an acknowledged bug in Windows Server 2008 R2, 2012, and 2012 R2 when excluding a range of ports (netsh ... add excludedportrange ...). So my first question is: are you on one of the affected server OSes, and are you actually using a network port exclusion range?
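If so, the configured exclusion ranges can be listed from an elevated command prompt, and you can check whether your Broker port falls inside one of them:

netsh int ipv4 show excludedportrange protocol=tcp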
I strongly urge you to open a Microsoft support case for this issue and follow up with them, making sure networking people are involved (again, WSAEACCES is a rather unusual symptom). This is not one of the usual issues, and it is difficult to diagnose over a forum discussion.
Background
I am using a SparkCore wireless Arduino board to connect to a local Node.js server. The server includes a local intranet TCP server that a TCP client programmed on the SparkCore connects to.
Problem
If I run the server on a different network, the server has a different local IP address. When I do this, I have to reprogram the SparkCore arduino to tell it the new local IP address of the server to connect its TCP client to. This is not ideal for a variety of reasons.
Question
Is there a way to have the client dynamically search for the TCP server, or alternatively have the server broadcast to TCP clients, in a way that would inform the client of the server's local IP address without hardcoding it? I would love to do this in a way that does not involve iterating through a bunch of IPs on a specific port to see if a connection is made. That being said, if that's the only way to do this, then so be it.
How is the Arduino booting? If it's booting using DHCP, one method would be to provide a custom DHCP option that carries the address of the node.js server. NTP, for instance, can configure itself in a similar way. This has the advantage that the Arduino need not be on the same local subnet as the node.js server.
An alternative (slightly disgusting) would be to use an A record within your domain (let's say nodejs.example.com). Configure the local DNS recursive server to explicitly return this value (I am presuming you might have lots of different deployments with lots of different nodejs servers).
A third possibility would be to send out some form of discovery packet, either by broadcast or, better, by multicast UDP. Assuming it's on the same LAN, the nodejs server could then reply. Clearly you might need to concern yourself with a rogue server impersonating your nodejs server, and therefore might need to add some security: e.g. using a shared secret, the client sends a random nonce plus the nonce hashed with the shared secret; the server checks the hash and replies with the answer, the nonce, and the answer hashed with the shared secret and the nonce, each of which the client then checks.
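A minimal sketch of the discovery idea on the node.js side, in TypeScript (the UDP port, probe string, and advertised TCP port are illustrative; the nonce/hash hardening described above is omitted for brevity):

import * as dgram from "dgram";

const DISCOVERY_PORT = 41234;    // UDP port the clients probe
const PROBE = "SPARK_DISCOVER";  // magic payload the SparkCore broadcasts

const sock = dgram.createSocket("udp4");
sock.on("message", (msg, rinfo) => {
  if (msg.toString() === PROBE) {
    // Reply directly to the prober; the client learns the server's IP from
    // the source address of this reply and the TCP port from the payload.
    sock.send(Buffer.from("TCP_PORT:8124"), rinfo.port, rinfo.address);
  }
});
sock.bind(DISCOVERY_PORT);

The SparkCore then broadcasts the probe to the LAN broadcast address on that port and waits briefly for a reply.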
I'm trying to figure out the easiest way to send SQL Server Event Notifications to a separate server using service broker. I've built an endpoint on each server, a queue on each server, working on Dialogs and Contracts and activation... but do I need any of that?
CREATE EVENT NOTIFICATION says it can send the notification XML to a "target service" - so could I just create a contract on the "sending" server that points to a queue on a "receiving server", and use activation there?
Or do I need to have it send to a local queue and then forward on to the receiving server's queue? Thanks!
You can target the remote service, but you have to have the ROUTEs defined for bidirectional communication so that you get the acknowledgement message back. I once had a script for creating a centralized processing server for all Event Notifications, where the other servers targeted its service. If I can find it I'll post it on my blog and update this with a link.
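A sketch of the routes involved (service names, hosts, and port are illustrative):

-- On each sending server: deliver notifications to the central service.
CREATE ROUTE CentralRoute
    WITH SERVICE_NAME = 'CentralProcessingService',
         ADDRESS = 'TCP://central-server:4022';

-- On the central server: a return route per sender, naming the sender's
-- initiating service, so acknowledgements can travel back.
CREATE ROUTE SenderRoute
    WITH SERVICE_NAME = 'SenderInitiatorService',
         ADDRESS = 'TCP://sending-server:4022';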