(if the question is more appropriate for RackOverflow please let me know)
I've set up SQL Server mirroring, using two SQL Server 2005 Standard Edition instances.
When the application is being stressed, response times increase 10-fold. I've pinpointed this to the mirror, because pausing the mirror shows acceptable response times.
What options are available for achieving better performance? Note that I'm using Standard Edition, so the excellent High Performance Mode is unavailable.
The servers are in the same rack, connected to a gigabit switch.
Here's the code used to create the endpoints:
CREATE ENDPOINT [Mirroring]
    AUTHORIZATION [sa]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (ROLE = PARTNER,
        AUTHENTICATION = WINDOWS NEGOTIATE,
        ENCRYPTION = REQUIRED ALGORITHM RC4)
First you need to look at the redo queue on the mirror: how big is it? This is the most likely culprit, and it would indicate that your mirror machine is underpowered. More exactly, it cannot apply and write the log as fast as it receives it from the principal, causing flow control to propagate back to the principal and delay transaction commits. In fact, you should look at all the counters in the Mirroring performance object, on both machines.
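A quick way to look at those counters is a sketch like the following (it assumes the counters surface through the sys.dm_os_performance_counters DMV under a "Database Mirroring" object; counter names may vary by build, and you can just as well run the embedded query directly in Management Studio):

// Minimal sketch: check the mirroring queue counters on a partner.
// Assumes the counters are exposed through sys.dm_os_performance_counters
// under a "Database Mirroring" object; adjust names for your build.
using System;
using System.Data.SqlClient;

class MirroringQueueCheck
{
    static void Main()
    {
        const string query =
            @"SELECT counter_name, cntr_value
              FROM sys.dm_os_performance_counters
              WHERE object_name LIKE '%Database Mirroring%'
                AND counter_name IN ('Redo Queue KB', 'Log Send Queue KB');";

        // Point this at the mirror first, then the principal.
        using (SqlConnection conn = new SqlConnection(
                   "Data Source=MIRROR_SERVER;Integrated Security=SSPI"))
        using (SqlCommand cmd = new SqlCommand(query, conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1} KB",
                        reader.GetString(0).Trim(), reader.GetInt64(1));
            }
        }
    }
}

A steadily growing redo queue on the mirror means the mirror cannot keep up; a growing log send queue points at the principal or the network instead.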
Unless you find measurements to back up suspicion of the endpoint settings, leave them as they are. The mirroring communication bandwidth is very seldom the culprit.
Given that the servers are in the same rack, do you really need encryption turned on? RC4 is a relatively weak algorithm, so the benefit is low. And presumably the gigabit network between the servers is private?
ALTER ENDPOINT [Mirroring]
    FOR DATABASE_MIRRORING (ENCRYPTION = DISABLED)
In response to @Remus Rusanu's comment: saying that "RC4 is a strong algorithm" is totally wrong. This is what the MSDN page has to say:
Though considerably faster than AES, RC4 is a relatively weak algorithm, while AES is a relatively strong algorithm. Therefore, we recommend that you use the AES algorithm.
I have a general question about the Kerberos configuration (krb5.conf) on a client.
If I give a RHEL8 client multiple AD servers for authentication (one in the USA, one in Europe, and one in Asia), which server will the client use if I want to connect from Germany?
krb5.conf
[realms]
    AD.COMPANY.COM = {
        kdc = us-server.ad.company.com
        kdc = eu-server.ad.company.com
        kdc = asia-server.ad.company.com
    }
Is the server list processed strictly from top to bottom, or is whichever server answers fastest used?
Greetings
D1Ck3n
To elaborate on T-Heron's answer above, it would be a bit of a security vulnerability if the client used whichever server responded quickest. Imagine I knew that one of the three servers stored a password in a way that an invalid hash would pass; I could slow the other two via a packet flood or DDoS and force the vulnerable server to respond first every time.
Design flaws like this can generally be exploited in most computer systems (especially in networking). So it's 'safe' to assume that anything dealing with authentication deals with one authentication source at a time.
I have a client-server application which uses a Firebird 2.5 server over the Internet.
I ran into the problem of providing secure access to the Firebird databases, and as a first approach I tried to solve it by integrating a tunnelling solution into the application (the stunnel software, more exactly). BUT this approach suffers in several respects:
- it adds resource consumption (CPU, memory, threads) on both the client and server side,
- software deployment becomes a serious problem, because stunnel is written as a Windows NT service, not a DLL or a component (an NT service needs administrator privileges to install), and my client application needs to run without installation!
SO, I decided to take the bull by the horns (or the bird by the feathers, since we are talking about Firebird). I downloaded the Firebird 2.5 source code and injected secure tunnelling code directly into its low-level communication layer (the INET socket layer).
NOW, encryption/decryption is done directly by the Firebird engine for each TCP/IP packet.
What do you think about this approach vs. external tunnelling?
I would recommend wrapping the data exchange in an SSL/TLS stream, on both sides. This is a proven standard, while custom implementations with static keys can be insecure.
For instance, CTR mode with a constant IV can reveal a lot of information: since it only encrypts an incremented counter and XORs the result with the data, XORing two encrypted packets yields the XOR of the two unencrypted packets.
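To see why, here is a toy sketch (a made-up keystream standing in for RC4 or AES-CTR output, not a real cipher) showing that XORing two ciphertexts produced with the same keystream cancels the keystream out completely:

// Toy demonstration of keystream reuse; NOT a real cipher.
// Stream ciphers produce ciphertext = plaintext XOR keystream, so
// reusing the keystream leaks plaintext1 XOR plaintext2 to an observer.
using System;
using System.Text;

class KeystreamReuse
{
    static byte[] Xor(byte[] a, byte[] b)
    {
        byte[] result = new byte[a.Length];
        for (int i = 0; i < a.Length; i++)
            result[i] = (byte)(a[i] ^ b[i]);
        return result;
    }

    static void Main()
    {
        byte[] p1 = Encoding.ASCII.GetBytes("ATTACK AT DAWN");
        byte[] p2 = Encoding.ASCII.GetBytes("RETREAT AT TEN");
        byte[] keystream = new byte[p1.Length];
        new Random(42).NextBytes(keystream);     // same keystream twice: the flaw

        byte[] c1 = Xor(p1, keystream);
        byte[] c2 = Xor(p2, keystream);

        // The keystream cancels: c1 XOR c2 equals p1 XOR p2.
        Console.WriteLine(BitConverter.ToString(Xor(c1, c2)));
        Console.WriteLine(BitConverter.ToString(Xor(p1, p2)));   // identical output
    }
}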
In general, my view of security-critical code is this: you want as many eyes on the code in question as possible, and you do not want to be maintaining it yourself. The reason is that we all make mistakes, and in a collaborative environment these are more likely to be caught. Such code is also likely to be better tested.
In my view there are a few acceptable solutions here. All approaches do add some overhead but this overhead could, if you want, be handled on a separate server if that becomes necessary. Possibilities include:
stunnel
IPSec (one of my favorites). Note that with IPSec you can create tunnels, and these can then be forwarded on to other hosts, so you can move your VPN management onto a computer other than your db host. You can also do IPSec directly to the host.
PPTP
Cross-platform VPN software such as tinc and the like.
Note that in security there is no free lunch: you need to review your requirements very carefully and make sure you thoroughly understand the solutions you are working with.
The stunnel suggestion is a good one, but, if that's not suitable, you can run a true trusted VPN of sorts, in a VM. (Try saying that a few times.) It's a bit strange, but it would work something like this:
- Set up a VM on the Firebird machine and give that VM two interfaces, one which goes out to your external LAN (best if you can actually bind a LAN card to it) and one that is a host-only LAN to Firebird.
- Load an OpenVPN server into that VM and use both client and server certificates.
- Run your OpenVPN client on your clients.
Strange, but it ensures the following:
- Your clients don't get to connect to the server unless BOTH the client and server agree on the certificates.
- Your Firebird service only accepts connections over this trusted VPN link.
- Technically, local entities could still connect to the Firebird server outside of the VPN if you wanted that, for example a developer console on the same local LAN.
The fastest way to get things done would not be to modify Firebird, but to improve your connection.
Get two firewall devices which can do SSL certificate authentication and put them in front of your DB server and your Firebird clients.
Let the firewall devices do the encryption/decryption, and have your DB server do its job without the hassle of meddling with every packet.
I'm currently designing an application which I will ultimately want to move to Windows Azure. In the short term, however, it will be running on a server which I will host myself.
The application involves a number of separate web applications - some of these are essentially WCF services which receive data, and some are sites for users to manage data. In addition, there will need to be a worker service running in the background which will process data in various ways.
I'm very keen to use a decoupled architecture for this. Ideally I want the components (i.e. the web apps and the worker service) to know as little as possible about each other. It seems like a message queue will be the best solution here: the web apps can enqueue messages with work units into the queue, and the worker service can pick them out and process them as needed.
However, I want to work out a good set of technologies for doing this, bearing in mind that I'll ultimately be moving to Azure and want to minimise the amount of re-work I'll need to do when I migrate to the cloud. Azure has a Queue component built in which looks ideal for my needs. What I'd like to do is create something myself which will mimic this as closely as possible.
It looks like there are several options (I'm using .NET on Windows, with a SQL Server 2005 back end) - the ones I've found so far are:
MSMQ
SQL Server Service Broker
Rolling my own using a database table and some stored procs
I was wondering if anyone has any suggestions for this - or if anyone has done anything similar and has advice on things to do/to avoid. I realise that every situation is different, but in this case I think my queuing requirements are pretty generic so I'd love to hear anyone else's thoughts about the best way to do this.
Thanks in advance,
John
If you have Azure in mind, perhaps you should start straight on Azure, as the APIs and semantics are significantly different between Azure queues and either MSMQ or SSB.
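For reference, the Azure side is small enough to prototype directly. A rough sketch against the early Microsoft.WindowsAzure.StorageClient library (API names are from that SDK generation and may differ in later ones; the queue name is made up) looks like this:

// Rough sketch of Azure queue usage (Microsoft.WindowsAzure.StorageClient era).
// Note the receive semantics: GetMessage only hides the message, and you
// must explicitly DeleteMessage after processing - unlike an MSMQ Receive.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class AzureQueueSketch
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
        queue.CreateIfNotExist();

        queue.AddMessage(new CloudQueueMessage("unit of work"));

        CloudQueueMessage msg = queue.GetMessage();
        if (msg != null)
        {
            // ... process msg.AsString ...
            queue.DeleteMessage(msg);   // only now is it gone for good
        }
    }
}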
A quick 3048-meter comparison of MSMQ vs. SSB (I'll leave a custom table-as-queue out of the comparison, as it really depends on how you implement it...):
Deployment: MSMQ is a Windows component, SSB is a SQL component. SSB requires a SQL instance to store any message, so disconnected clients need access to an instance (it can be Express). MSMQ requires deployment of MSMQ on the client (part of the OS, but an optional install).
Programmability: MSMQ offers a fully fledged, supported, WCF channel. SSB offers only an experimental WCF channel at http://ssbwcf.codeplex.com
Performance: SSB will be significantly faster than MSMQ in transacted mode. MSMQ will be faster if allowed to operate in untransacted mode (best-effort, unordered delivery).
Queryability: SSB queues can be SELECT-ed upon (view any message, with full SQL JOIN/WHERE/ORDER/GROUP power); MSMQ queues can only be peeked (next message only).
Recoverability: SSB queues are integrated into the database, so they are backed up and restored with the database, keeping a consistent state with the application state. MSMQ queues are backed up by the NT file backup subsystem, so to keep the backup in sync (coherent) the queue and database have to be suspended.
Transactions (since every enqueue/dequeue is always accompanied by a database update): SSB is fully integrated in SQL, so dequeueing and enqueueing are local transaction operations. MSMQ is a separate TM (Transaction Manager), so enqueue/dequeue has to be a distributed transaction operation to enroll both SQL and MSMQ in the transaction.
Management and Monitoring: both are equally bad. No tools whatsoever.
Correlated message processing: SSB can block processing of correlated messages by concurrent threads via the built-in Conversation Group Locking.
Event-driven processing: SSB has Activation to launch stored procedures; MSMQ uses the Windows Activation Service. Similar. SSB, though, has self-load-balancing capabilities due to the way WAITFOR(RECEIVE) and MAX_QUEUE_READERS interact.
Availability: SSB piggybacks on the SQL Server high-availability story; it can work in either a clustered or a database mirroring environment. MSMQ rides the Windows clustering story only. Database mirroring is much cheaper than clustering as an HA solution.
In addition, I'd note that SSB and MSMQ differ significantly in the primitive they offer: the SSB primitive is a conversation, while the MSMQ primitive is a message. Think TCP vs. UDP semantics.
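To make the SSB side concrete, a .NET worker dequeues with plain ADO.NET. A rough sketch (the WorkQueue and database names are hypothetical; the T-SQL is standard Service Broker syntax):

// Rough sketch of an SSB dequeue loop over ADO.NET (hypothetical names).
// WAITFOR(RECEIVE ...) blocks server-side until a message arrives, which
// is what enables the self load balancing mentioned above.
using System;
using System.Data.SqlClient;

class SsbWorker
{
    static void Main()
    {
        const string dequeue =
            @"WAITFOR (
                  RECEIVE TOP (1) conversation_handle, message_type_name, message_body
                  FROM WorkQueue), TIMEOUT 5000;";

        using (SqlConnection conn = new SqlConnection(
                   "Data Source=.;Initial Catalog=MyApp;Integrated Security=SSPI"))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            using (SqlCommand cmd = new SqlCommand(dequeue, conn, tx))
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("Received {0}", reader["message_type_name"]);
                reader.Close();
                tx.Commit();   // dequeue + any app updates form one local transaction
            }
        }
    }
}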
Pick a queue back end that works for you, or that is better suited to your environment. @Remus has given a great comparison between MSMQ and SSB. MSMQ is going to be the easier one to implement, but it has some notable limitations, while SSB is going to feel very heavy, as it's at the other end of the spectrum.
Have It Your Way
To minimize the rework in your applications, abstract the queue access behind an interface, and then provide an implementation for the queue transport you ultimately decide to go with. When it's time to move to Azure, or to another queue transport, you just provide a new implementation of your interface.
You get to control the semantics of how you want to interact with the queue to give a consistent usable API from your applications.
A rough idea might be:
interface IQueuedTransport
{
    void SendMessage(XmlDocument message);
    XmlDocument ReceiveMessage();
}
public class MSMQTransport : IQueuedTransport {}
public class AzureQueueTransport : IQueuedTransport {}
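Fleshing out the MSMQTransport stub, for example, might look like this minimal sketch using System.Messaging (the queue path is hypothetical, and error handling is omitted):

// Sketch of an MSMQ-backed IQueuedTransport (hypothetical queue path).
// Requires a reference to System.Messaging.dll.
using System;
using System.Messaging;
using System.Xml;

public class MSMQTransport : IQueuedTransport
{
    private readonly MessageQueue queue;

    public MSMQTransport()
    {
        const string path = @".\private$\workitems";   // hypothetical queue name
        queue = MessageQueue.Exists(path)
            ? new MessageQueue(path)
            : MessageQueue.Create(path);
    }

    public void SendMessage(XmlDocument message)
    {
        queue.Send(message.OuterXml);   // XmlMessageFormatter is the default
    }

    public XmlDocument ReceiveMessage()
    {
        Message raw = queue.Receive();  // blocks until a message arrives
        raw.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
        XmlDocument doc = new XmlDocument();
        doc.LoadXml((string)raw.Body);
        return doc;
    }
}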
You may not be building the be-all queuing transport, just what meets your needs. If you work with XML, pass XML. If you work with byte arrays, pass byte arrays. :)
Good luck!
Z
Use Win32 Mailslots. They will be reliable on a single server, are easy to implement, and do not require any extra software.
Should I use a dedicated network channel between the database and the application server?
...or...
Or does connecting both to the switch, along with all the other computer nodes, make no difference at all?
The concern is performance!
It all depends on the throughput needs of your application. If you absolutely need the lowest latency possible, then it would make sense to optimize the routes. Aside from hugely scalable software, I would argue that this is rarely needed and you can just connect everything in a generic fashion.
It depends on your non-functional requirements. Assuming the NICs are running at the same rate, keeping the database traffic away from the front-end traffic can only be a good thing from a bandwidth perspective - if bandwidth is an issue.
Far more significant is that security is improved by keeping the front side and the data side on different networks, as the only way to gain direct access to the database is then to compromise the application server.
Using the shared switch could give increased latency, especially if the switch is busy. Also, you may be able to hook up a faster dedicated network channel (e.g. gigabit ethernet, if your switch is 100Mbit). Whether any of this is worth doing or not depends on your application though.
You may also want to use a dedicated channel for increased security (making your database server less accessible).
With a distributed application, where you have lots of clients and one main server, should you:
Make the clients dumb and the server smart: clients are fast and non-invasive. Business rules are needed in only 1 place
Make the clients smart and the server dumb: take as much load as possible off of the server
Additional info:
Clients collect tons of data about the computer they are on. The server must analyze all of this info to determine the health of these computers
The owners of the client computers are temperamental and will shut down the clients if the client starts to consume too many resources (thus negating the purpose of the distributed app in helping diagnose problems)
You should do as much client-side processing as possible. This will enable your application to scale better than doing processing server-side. To solve your temperamental user problem, you could look into making your client processes run at a very low priority so there's no noticeable decrease in performance on the part of the user.
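For instance, the client can demote itself at startup with the standard System.Diagnostics API (a minimal sketch; BelowNormal or Idle, depending on how aggressive you want to be):

// Sketch: lower the client's own scheduling priority so its data
// collection stays out of the interactive user's way.
using System.Diagnostics;

class ClientStartup
{
    static void Main()
    {
        // Idle is the lowest class; BelowNormal is usually a safer compromise.
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;
        // ... continue with normal client work ...
    }
}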
In a client-server setting, if you care about security, you should always program on the assumption that the client may have been compromised. Even if it hasn't, there is always the risk of somebody using an old version of the client, using a competing or modified version of the client, or just of the net connection being a bit screwy.
So while you do as much work on the client as possible, processing and marshalling information into the right form, the server then needs to do a thorough sanity check on anything the client gives it.
So the answer I guess is "both".
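As a trivial sketch of that server-side sanity check (the report type and its limits are hypothetical, for illustration only):

// Sketch: re-validate everything the client marshals up, however
// well-formed the client is supposed to make it. Hypothetical type.
using System;

public class HealthReport
{
    public string MachineName;
    public int CpuPercent;

    public void Validate()
    {
        if (string.IsNullOrEmpty(MachineName) || MachineName.Length > 64)
            throw new ArgumentException("Suspicious machine name");
        if (CpuPercent < 0 || CpuPercent > 100)
            throw new ArgumentException("CPU percentage out of range");
    }
}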
The server must analyze all of this info to determine the health of these computers
That is probably the biggest clue so far explaining what your application is about. Are you able to provide a more elaborate briefing on what this application seeks to achieve in this distributed environment? We do not even know whether the client-side processing is disk-I/O- or processor-intensive. How you design the solution depends on the nature of what needs to be done to help the users/business accomplish their jobs and objectives.