I have a general question about the Kerberos configuration (krb5.conf) on a client.
If I give a RHEL 8 client multiple AD servers for authentication (one in the USA, one in Europe, and one in Asia), which server would the client use if I connect from Germany?
krb5.conf:
[realms]
AD.COMPANY.COM = {
    kdc = us-server.ad.company.com
    kdc = eu-server.ad.company.com
    kdc = asia-server.ad.company.com
}
Is the server list processed strictly from top to bottom, or is whichever server answers fastest used?
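With MIT Kerberos (as used on RHEL), explicitly listed kdc entries are generally tried in the order they appear in krb5.conf, with the client falling back to the next entry only when the current one fails to answer within its timeout; the client's geographic location plays no role. A toy sketch of that order-with-fallback logic (pick_kdc and is_reachable are hypothetical names for illustration; a real client uses libkrb5, not this code):

```python
# Toy model of ordered-KDC failover: try each server in configured order
# and fall back only when the current one does not answer.

def pick_kdc(kdcs, is_reachable):
    """Return the first KDC in configured order that answers."""
    for kdc in kdcs:
        if is_reachable(kdc):
            return kdc
    raise RuntimeError("no KDC reachable")

kdcs = [
    "us-server.ad.company.com",
    "eu-server.ad.company.com",
    "asia-server.ad.company.com",
]

# Even from Germany, the US server is contacted first because it is
# listed first; the EU server is only used if the US one times out.
assert pick_kdc(kdcs, lambda k: True) == "us-server.ad.company.com"
assert pick_kdc(kdcs, lambda k: not k.startswith("us")) == "eu-server.ad.company.com"
```

If you want the nearest KDC chosen automatically, the usual approach is to let the client locate KDCs via DNS SRV records (dns_lookup_kdc), which AD publishes per site, or to ship region-specific krb5.conf files, rather than relying on client-side racing.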
Greetings
D1Ck3n
To elaborate on T-Heron's answer up top: it would be a security vulnerability if the client simply used whichever server responded quickest. Imagine I knew that one of the three servers stored passwords in a way that would let an invalid hash pass; I could slow the other two with a packet flood or DDoS and force the vulnerable server to respond first every time.
Design flaws like this can generally be exploited in most computer systems (especially in areas like networking). So it's 'safe' to assume that anything dealing with authentication is dealing with one authentication source at a time.
Related
General question: which option would have less network traffic?
Option 1: A single database connection to a WebAPI, with multiple clients communicating via the API using the standard request / return data.
Option 2: Each client having direct read only access to the database with their own connections, and reading the data directly.
My expectation is that with only a single user, the direct database approach would produce less traffic, but that each additional user would add a smaller incremental increase in traffic via the API than via direct database access.
However, I have no evidence to back this up. I'm just hoping somebody knows of a resource that has this data already, or has run the experiment themselves (my Google-fu is failing me).
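A rough back-of-envelope model may help frame the expectation. Every number below (payload sizes, per-request overheads, cache-hit rate) is an invented assumption for illustration, not a measurement; the shape of the result (direct wins for one user, a caching API wins as users multiply) depends entirely on those numbers:

```python
# Toy traffic model: direct DB connections vs. a shared API in front of
# the database. All byte counts are made-up assumptions.

def direct_bytes(clients, queries, db_result=8000, db_overhead=500):
    # Each client speaks the database wire protocol directly and pulls
    # the full result set over the network itself.
    return clients * queries * (db_result + db_overhead)

def api_bytes(clients, queries, api_response=3000, api_overhead=800,
              db_result=8000, db_overhead=500, cache_hit=0.8):
    # Clients receive a trimmed response from the API; the API still
    # pays a full DB round trip for every cache miss.
    client_side = clients * queries * (api_response + api_overhead)
    db_side = clients * queries * (db_result + db_overhead) * (1 - cache_hit)
    return int(client_side + db_side)

# One user with a cold cache: the API's extra hop costs more than it saves.
assert api_bytes(1, 100, cache_hit=0.0) > direct_bytes(1, 100)

# Fifty users whose queries overlap enough for an 80% cache-hit rate:
# the API's incremental cost per additional user is lower.
assert api_bytes(50, 100) < direct_bytes(50, 100)
```

The crossover point is driven by how much the API can trim or cache; if it just proxies full result sets with no caching, it only adds traffic.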
In a relatively small client-server environment (up to 100 client connections to the database server), what is the most resource-effective way for every running client application to check whether its database connection is alive?
I was thinking about executing SELECT 1 every 20 seconds from every client app.
Is this good enough or is there something more clever?
Many users may be working with the database over a mobile connection with frequent dropouts, so my idea was to detect such a dropout in the client application and warn the user before they submit a form, etc.
I know the ideal would be for users to run the client app remotely through Citrix or an MS Terminal Services solution, but very often they have only a simple VPN, nothing more.
AFAIK there isn't a SQL Server ping function; I've relied on SELECT statements in the past.
I've used SELECT @@SERVERNAME, which returns just a single row and is also pretty efficient.
Another option is having a dedicated table with just one item in it and selecting that.
The SqlConnection.State property isn't reliable, as it indicates the state after the last operation; if the server has gone down since then, it will report an incorrect state.
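A minimal client-side sketch of the 20-second probe idea (Python used for illustration; ConnectionMonitor is a hypothetical class, and the probe callback stands in for whatever cheap round trip you choose, e.g. executing SELECT @@SERVERNAME, raising on failure):

```python
import threading

class ConnectionMonitor:
    """Run `probe` every `interval` seconds and remember the last result,
    so the UI can warn the user before a form submit. `probe` must raise
    on failure. Illustrative sketch, not production code."""

    def __init__(self, probe, interval=20.0):
        self.probe = probe
        self.interval = interval
        self.alive = True
        self._stop = threading.Event()

    def check_once(self):
        try:
            self.probe()
            self.alive = True
        except Exception:
            self.alive = False
        return self.alive

    def _loop(self):
        # Event.wait doubles as a sleep that stop() can interrupt.
        while not self._stop.wait(self.interval):
            self.check_once()

    def start(self):
        threading.Thread(target=self._loop, daemon=True).start()

    def stop(self):
        self._stop.set()
```

At one tiny round trip per client per 20 seconds, even 100 clients add negligible load; jittering the interval a little keeps the pings from synchronizing.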
More effective than SELECT 1 or PRINT '' can be sending:
a:
which is a label declaration, or
--
which is a comment.
Either sends back no output except the basic success code (interpreted as 'Command(s) completed successfully.').
EDIT: as seen in the comments below, a single space (' ') is sufficient. The commenter did not post it as a reply until now, so I'm amending mine.
I have a client-server application which uses a Firebird 2.5 server over the internet.
I ran into the problem of providing secure access to the FB databases, and as a first approach I tried to solve it by integrating a tunneling solution into the application (the STunnel software, to be exact). BUT this approach suffers in several respects:
- it adds resource consumption (CPU, memory, threads) on both the client and server side,
- software deployment becomes a serious problem, because STunnel is written as a WinNT service, not a DLL or a component (a WinNT service needs administrator privileges to install), and my client application needs to run without installation!
SO, I decided to take the bull by the horns (or the bird by the feathers, as we're talking about Firebird). I downloaded the Firebird 2.5 source code and injected secure tunneling code directly into its low-level communication layer (the INET socket layer).
NOW, encryption/decryption is done directly by the Firebird engine for each TCP/IP packet.
What do you think about this approach vs external tunnelization ?
I would recommend wrapping the data exchange in an SSL/TLS stream, on both sides. That is a proven standard, while custom implementations with static keys can be insecure.
For instance, CTR mode with a constant IV can reveal a lot of information: it only encrypts an incremented counter and XORs the result with the data, so XORing two encrypted packets yields the XOR of the two unencrypted packets.
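To make the constant-IV problem concrete, here is a toy CTR construction (keystream derived from SHA-256; deliberately not a real cipher, just an illustration of the structure) showing that reusing the IV lets an attacker cancel the keystream entirely:

```python
import hashlib

def toy_ctr_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    """Toy CTR mode: keystream block i = SHA-256(key || iv || i).
    For demonstration only; this is not a vetted cipher."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        block = hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        keystream.extend(block)
        counter += 1
    return bytes(d ^ k for d, k in zip(data, keystream))

key = b"static-key"
iv = b"constant-iv"                  # the mistake: IV reused for every packet
p1 = b"transfer $100 to alice"
p2 = b"transfer $999 to mallo"
c1 = toy_ctr_encrypt(key, iv, p1)    # same key + same IV = same keystream
c2 = toy_ctr_encrypt(key, iv, p2)

# XOR of the two ciphertexts equals XOR of the two plaintexts...
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in zip(p1, p2))

# ...so an attacker who knows (or guesses) p1 recovers p2 without the key.
recovered = bytes(a ^ b for a, b in zip(xored, p1))
assert recovered == p2
```

With a unique IV per packet the two keystreams differ and the XOR trick tells the attacker nothing; that uniqueness requirement is exactly what TLS manages for you.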
In general, my view of security critical code is this, "you want as many eyes on the code in question as possible and you do not want to be maintaining it yourself." The reason is that we all make mistakes and in a collaborative environment these are more likely to be caught. Additionally these are likely to be better tested.
In my view there are a few acceptable solutions here. All approaches do add some overhead but this overhead could, if you want, be handled on a separate server if that becomes necessary. Possibilities include:
stunnel
IPSec (one of my favorites). Note that with IPSec you can create tunnels, and these can then be forwarded on to other hosts, so you can move your VPN management onto a computer other than your db host. You can also do IPSec directly to the host.
PPTP
Cross-platform vpn software like tinc and the like.
Note here in security there is no free lunch and you need to review your requirements very carefully and make sure you thoroughly understand the solutions you are working with.
The stunnel suggestion is a good one, but, if that's not suitable, you can run a true trusted VPN of sorts, in a VM. (Try saying that a few times.) It's a bit strange, but it would work something like this:
- Set up a VM on the Firebird machine and give that VM two interfaces: one which goes out to your external LAN (best if you can actually bind a LAN card to it) and one that is a host-only LAN to Firebird.
- Load an OpenVPN server into that VM and use both client and server certificates.
- Run your OpenVPN client on your clients.
Strange, but it ensures the following:
- Your clients don't get to connect to the server unless BOTH the client and server agree on the certificates.
- Your Firebird service only accepts connections over this trusted VPN link.
Technically, local entities could still connect to the Firebird server outside of the VPN if you wanted that (for example, a developer console on the same local LAN).
The fastest way to get things done would be not to improve Firebird, but to improve your connection.
Get two firewall devices which can do SSL certificate authentication, and put one in front of your DB server and one in front of your Firebird client.
Let the firewall devices do the encryption/decryption, and let your DB server do its job without the hassle of meddling with every packet.
One of our problems is that our outbound email server sucks sometimes. Users will trigger an email in our application, and the application can take on the order of 30 seconds to actually send it. Let's make it even worse and admit that we're not even doing this on a background thread, so the user is completely blocked during this time. SQL Server Database Mail has been proposed as a solution to this problem, since it basically implements a message queue and is physically closer and far more responsive than our third party email host. It's also admittedly really easy to implement for us, since it's just replacing one call to SmtpClient.Send with the execution of a stored procedure. Most of our application email contains PDFs, XLSs, and so forth, and I've seen the size of these attachments reach as high as 20MB.
Using Database Mail to handle all of our application email smells bad to me, but I'm having a hard time talking anyone out of it, given the extremely low cost of implementation. Our production database server is more than powerful enough, so I can't argue that it wouldn't handle the load, either. Any ideas or safer alternatives?
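If the main pain is the blocked user rather than the SMTP server itself, the queueing behavior that makes Database Mail attractive can also live in the application tier, keeping the 20 MB attachments out of the database. A minimal sketch of the pattern (Python used for illustration; MailQueue is hypothetical, and send_func stands in for your existing SmtpClient.Send call):

```python
import queue
import threading

class MailQueue:
    """Decouple 'user clicked send' from the slow SMTP call, the same idea
    Database Mail implements, but in the application tier. Sketch only:
    the queue is in-memory and not durable."""

    def __init__(self, send_func):
        self.send_func = send_func
        self.sent = []
        self.q = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def enqueue(self, message):
        # Returns immediately: the user never waits on the SMTP server.
        self.q.put(message)

    def _worker(self):
        while True:
            msg = self.q.get()
            try:
                self.send_func(msg)   # may take 30 seconds; happens off-thread
                self.sent.append(msg)
            finally:
                self.q.task_done()

# Demo with a fake sender standing in for the real SMTP call:
delivered = []
mq = MailQueue(delivered.append)
mq.enqueue("invoice.pdf")
mq.q.join()                           # block only for the demo's sake
assert delivered == ["invoice.pdf"]
```

Unlike Database Mail's msdb-backed queue, this loses messages on a crash; persisting the queue (file, table, or a real message broker) is what a production version would add, and that durability is the one genuine advantage Database Mail is buying you here.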
All you have to do is run it through an SMTP server. If you're planning on sending large amounts of mail, then you'll have to not only load-balance the servers (and the DNS servers, if you're sending out 100K+ mails at a time) but also make sure your outbound email servers have the proper A records registered in DNS to prevent bounce-backs.
It's a cheap solution (minus the load-balancer costs).
Yes, dual-home the server for your internal LAN and the internet, and make sure it's an outbound-only server. Start out with one SMTP server, and if you hit bottlenecks right off the bat, determine whether it's memory-, disk-, network-, or load-related. If it's load-related, it may be time to look at load balancing. If it's memory-related, throw more memory at it. If it's disk-related, throw a RAID 0+1 array at it. If it's network-related, use a bigger pipe.
(if the question is more appropriate for RackOverflow please let me know)
I've setup SQL server mirroring, using 2 SQL server 2005 standard editions.
When the application is being stressed, response times increase 10-fold. I've pinpointed this to the mirror, because pausing the mirror shows acceptable response times.
What options are available for achieving better performance? Note that I'm using Standard Edition, so the excellent High Performance Mode is unavailable.
The servers are in the same rack, connected to a gigabit switch.
Here's the code used to create the endpoints:
CREATE ENDPOINT [Mirroring]
    AUTHORIZATION [sa]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (ROLE = PARTNER,
        AUTHENTICATION = WINDOWS NEGOTIATE,
        ENCRYPTION = REQUIRED ALGORITHM RC4)
First you need to look at the redo queue on the mirror: how big is it? This is the most likely culprit, and a large redo queue indicates that your mirror machine is underpowered. More exactly, it cannot apply and write the log as fast as it receives it from the principal, causing flow control to propagate back to the principal and delay transaction commits. In fact, you should look at all the counters in the Database Mirroring performance object, on both machines.
Unless you find measurements to back up suspicion of the endpoint settings, leave them as they are. The mirroring communication bandwidth is very seldom the culprit.
Given that the servers are in the same rack, do you really need encryption turned on? RC4 is a relatively weak algorithm, so the benefit is low. And presumably the 1-gigabit network between the servers is private?
ENCRYPTION = DISABLED
In response to @Remus Rusanu's comment: saying that "RC4 is a strong algorithm" is totally wrong. This is what the MSDN page has to say:
Though considerably faster than AES, RC4 is a relatively weak algorithm, while AES is a relatively strong algorithm. Therefore, we recommend that you use the AES algorithm.