Can "NIC teaming" decrease contention, for multiple sites with separate application pools? (connecting server w/ IIS, and server w/ SQL Server) - sql-server

SUMMARY: if sites have separate application pools, can their traffic avoid contention through "NIC teaming"?
((Let me know if this is better posted on http://networkengineering.stackexchange.com))
DETAILS:
Our hosting provider has priced a scenario in which NIC teaming would be set up between the server hosting our websites and the server hosting our databases.
Tech details (in case they matter):
(1) The websites are hosted on a server running Windows Server 2008, with IIS 7.0.
(2) The databases are hosted on a server running Windows Server 2003, with SQL Server 2005.
(3) The NIC teaming scenario they described would involve each of the two servers having a dual-port 10GbE NIC, with crossover cables between them.
(4) Each site has its own web.config, and its own application pool in IIS.
(5) Currently, each website's connection string to SQL Server looks exactly the same, but we could make each website use a different one.
HOWEVER, the hosting provider told us we will only see "bandwidth aggregation" if
(A) Our application is coded to use the NIC teaming (it is not), or
(B) Our communication goes over more than one TCP stream.
So, here are my first two questions... call this "PLAN A" --
(I) Because our sites all have separate application pools (detail #4 above -- "w3wp.exe" appears over 10 times in Task Manager),
does that mean we have more than one TCP stream?
(II) Could there be any effective decrease in network contention -- that is, could the traffic from the different sites / different application pools travel on separate teamed NICs ("tNICs")?
My third question... call this "PLAN B":
(III) If the answer to both of the above is "No", then I still see a possibility: give ONE of our sites a separate SQL Server connection string, so that it gets a separate NIC or tNIC. Does that make sense?
It sounds like it does, if I'm understanding another post here at StackOverflow:
.NET SqlConnection NIC usage
But I'd still prefer PLAN A -- an automatic decrease in contention based on the separate application pools -- because I'd trust a NIC teaming solution to direct traffic far more intelligently, based on varying demand, than a setup that permanently dedicates a port to one site's SQL Server traffic.
Please forgive if this is TMI... feedback welcome.
Thanks for your interest...

The number of application pools does not, by itself, determine the number of TCP streams. Each HTTP request to your server will be a separate TCP stream unless a client reuses an existing connection (HTTP keep-alive). On the SQL side, each w3wp.exe process maintains its own ADO.NET connection pool, so your separate application pools will already open multiple TCP connections to SQL Server under load.
If you are experiencing network contention, a teamed NIC should help you decrease it. You are creating another physical path to the server, but the router or switch will have to know to use it.
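To make PLAN B concrete, here is a minimal sketch of pointing one site at a second IP address exposed by the database server's other NIC, while the rest keep the shared address (all addresses, database names, and the second-IP setup are assumptions for illustration, not your provider's actual configuration):

using System.Data.SqlClient;

class PlanBSketch
{
    static void Main()
    {
        // Ordinary sites keep pointing at the DB server's primary address.
        var sharedPath = new SqlConnection(
            "Data Source=10.0.0.10;Initial Catalog=SiteDb;Integrated Security=SSPI;");

        // The one heavy site targets a second address exposed by the other
        // NIC/tNIC, so its SQL traffic rides a separate physical link.
        var dedicatedPath = new SqlConnection(
            "Data Source=10.0.0.11;Initial Catalog=HeavySiteDb;Integrated Security=SSPI;");

        sharedPath.Open();
        dedicatedPath.Open();
        // ...run queries as usual. ADO.NET pools connections per distinct
        // connection string, per process, so the two paths stay separate.
        dedicatedPath.Close();
        sharedPath.Close();
    }
}

Since each site already has its own web.config, the change is just that one site's connection string; no application code needs to know about the teaming.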

Related

Connecting to a remote SQL Server without configuring the firewall

I have an SQL Server located in the US, I've written a program that connects to a database on the server and takes the data from the server. The users of the program are spread around the world. The majority of them can easily use the program (i.e. the connection is successfully established).
But some of the users who try to run the program from inside their office building can't connect to the server because of their companies' firewalls. Since the number and location of the users is not known (the application is distributed for free with no notifications to me), customizing every firewall isn't really an option (even though it helped when I was able to do this).
I believe there should be an option like a kind of "certificate" that could be embedded in my program and registered somewhere on a user's machine to allow establishing the connection, or anything of that sort. Unfortunately, I haven't found anything specific on the Internet, most probably because I googled the wrong words.
Any help or advice is very much appreciated!
If a firewall (or other security device) is blocking, then there is no magic bullet. You need to avoid directly talking to SQL Server.
Even if you changed the port, many of those company workers will be limited to HTTP(S) access, and then only via a proxy.
So you need to talk HTTP to an API you provide, and the implementation of that API then talks (under your control) to the database.
This has the enormous advantage of giving you an extra layer protecting the integrity of the data in the database.
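As a rough illustration of that pattern, here is a minimal sketch of an HTTP endpoint fronting the database, written as an ASP.NET Web API controller (the controller, table, and connection-string names are invented for the example):

using System.Collections.Generic;
using System.Configuration;
using System.Data.SqlClient;
using System.Web.Http;

// Clients call GET /api/customers over plain HTTP(S), which corporate
// firewalls and proxies generally allow; only this server ever talks
// to SQL Server directly.
public class CustomersController : ApiController
{
    public IEnumerable<string> Get()
    {
        var connStr = ConfigurationManager
            .ConnectionStrings["MainDb"].ConnectionString; // stays server-side

        var names = new List<string>();
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT Name FROM dbo.Customers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    names.Add(reader.GetString(0));
        }
        return names; // serialized to JSON/XML by Web API
    }
}

The API surface also lets you validate and restrict what callers can do, which is the extra integrity layer mentioned above.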
To establish a connection, the firewall at the client's site has to allow access to the IP address where your SQL Server is listening.
You could show users a custom message asking them to open access to the SQL Server's IP address, but exposing the database server directly like that is unsafe due to security concerns.
Instead, you can build a third application, such as a web service, that takes requests from clients and forwards them to your SQL Server. Host this application on a public IP and tell clients to allow that IP through their firewalls in order to run the program. That preserves security, and it solves your problem as well.

Which server platform to choose: SQL Azure or Hosted SQL Server for new project

We're getting ready to build a new platform for our current system. Currently we install SQL Server Express locally for all our clients, and all their data is stored there. While the process works pretty well, it's still a pain to add columns/tables etc. We also want to have our data available outside of the local install. So we're moving to a central web-based SQL database and creating a web-based application. Our new application will be a Silverlight 5 / WCF RIA Services / MVVM / Entity Framework application.
We've decided that either a web-hosted SQL Server database or a SQL Azure database is the way to go. However, I have no idea why I would choose one over the other. The limitations of Azure don't seem to apply to us, but our application will run on our current shared web host. Is it better to host the application on the same server as the database? With shared web hosting, do we even know that the server is in the same location as the app? There's also the marketing advantage of being 'in the cloud', which our clients love when we drop that word (they have no idea about anything technical; it's just a buzzword for them). I'm not too worried about the cost, as I think the two will ultimately be roughly equivalent.
I feel like I may be completely overthinking this and either will work, however I'd like to try and get the best solution for us and don't want to choose without getting some feedback.
In case it helps, our application is mostly dashboard/informational data. Mostly financial and trending data. It's almost entirely read only. Sometimes the data can get fairly large and we would be sending upwards of 50,000 rows of data to the application.
Thanks for any help/insight you can provide for me!
The main concerns I would have with using a SQL Azure DB from an application on your current shared web host would be
The effect of network latency: depending on location, every DB round trip from your application to the SQL Azure DB will incur a 50-100ms delay. If your application does lots of round trips, this mounts up. Applications designed to work with a DB on the LAN (your use of local client DBs suggests this) tend to be "chatty", since network delays on a LAN are very small. You may find your application slows down significantly.
Security: You will have to open up the SQL Azure firewall to the IP address(es) that your application presents when querying. Depending on your host, it may be that this IP address is shared between several tenants. This would be a vulnerability.
If neither of these is a problem, then SQL Azure will provide a much lower management overhead (e.g. no need to patch etc.) and will give you very high reliability, especially in terms of the risk of data loss.
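If you do go the SQL Azure route from a remote host, one common mitigation for the round-trip cost is to batch statements so several result sets come back in a single exchange. A minimal sketch (the table names are invented):

using System.Data.SqlClient;

class BatchingSketch
{
    static void Run(string connStr)
    {
        // Two statements, one round trip: both result sets come back
        // over a single network exchange instead of two.
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT * FROM dbo.AccountSummary; SELECT * FROM dbo.TrendData;", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* first result set  */ }
                reader.NextResult();
                while (reader.Read()) { /* second result set */ }
            }
        }
    }
}

For a mostly read-only dashboard sending up to 50,000 rows, a few deliberate, chunky queries like this will feel much faster than many small ones.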

Payment Card Industry DSS - Storing card holder data in systems not connected to internet

Background
Though I've looked through some posts on Stack Overflow that partially cover this point, I have yet to find one that provides a comprehensive question/answer.
As a developer of POS systems the PCI DSS has two components I'm interested in:
PA DSS (Payment Application) which regards the software I develop
PCI DSS (Merchants) which regards all my clients that use the software
The PA DSS seems to put the point most bluntly:
"9.1 The payment application must be developed such that the database server and web server are not required to be on the same server, nor is the database server required to be in the DMZ with the web server"
Testing Procedures:
9.1.a To verify that the payment application stores cardholder data in the internal network, and never in the DMZ, obtain evidence that the payment application does not require data storage in the DMZ, and will allow use of a DMZ to separate the Internet from systems storing cardholder data (e.g., payment application must not require that the database server and web server be on the same server, or in the DMZ with the web server).
9.1.b If customers could store cardholder data on a server connected to the Internet, examine PA-DSS Implementation Guide prepared by vendor to verify customers and resellers/integrators are told not to store cardholder data on Internet-accessible systems (e.g., web server and database server must not be on same server).
And from the merchant's PCI DSS:
1.3.5 Restrict outbound traffic from the cardholder data environment to the Internet such that outbound traffic can only access IP addresses within the DMZ.
Question
My question is quite simple - can the database and application server be logically different (on different virtualised OS) or must they be physically different (on different physical/dedicated servers)?
Also, I'm a bit concerned about having to place a database server with no connection to the Internet whatsoever. How am I supposed to administer this server remotely?
Or is it okay to access the database server via the application server - though surely that defeats the purpose?
No simple answer, sadly.
The SSC has released a new supplement on virtualisation which has some relevant information: https://www.pcisecuritystandards.org/documents/Virtualization_InfoSupp_v2.pdf
While mixing guest OSs of different functions on the same hypervisor is not prohibited, you will need to show that you've thought about the extra risk that this brings.
They will also have to be logically separated with network traffic from one VM to the other going through a firewall of some sort to protect the different OSs and applications. Being on the same physical host is not an excuse for skipping controls like firewalling so you may have to be creative about how you meet these requirements.

How can I simulate network latency on my developer machine?

I am upsizing an MS Access 2003 app to a SQL Server backend. On my dev machine, SQL Server is local, so the performance is quite good. I want to test the performance with a remote SQL Server so I can account for the effects of network latency when I am redesigning the app. I am expecting that some of the queries that seem fast now will run quite slowly once deployed to production.
How can I slow down (or simulate the speed of a remote) SQL Server without using a virtual machine, or relocating SQL to another computer? Is there some kind of proxy or Windows utility that would do this for me?
I have not used it myself, but here's another SO question:
Network tools that simulate slow network connection
SQL Server is mentioned explicitly in one of the comments.
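If you'd rather not install a tool, a crude do-it-yourself option is a delaying TCP proxy: point your connection string at localhost on a spare port and forward traffic to the real instance, sleeping before each write. A minimal, illustrative sketch (the ports and the 50ms delay are arbitrary assumptions, and this is not production code):

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class LatencyProxy
{
    const int ListenPort = 14333;          // point your app/DSN here
    const string TargetHost = "127.0.0.1"; // the real SQL Server host
    const int TargetPort = 1433;           // default SQL Server port
    static readonly TimeSpan Delay = TimeSpan.FromMilliseconds(50);

    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, ListenPort);
        listener.Start();
        while (true)
        {
            var client = await listener.AcceptTcpClientAsync();
            _ = HandleAsync(client); // one forwarding task per connection
        }
    }

    static async Task HandleAsync(TcpClient client)
    {
        using (client)
        using (var server = new TcpClient())
        {
            await server.ConnectAsync(TargetHost, TargetPort);
            var up = PumpAsync(client.GetStream(), server.GetStream());
            var down = PumpAsync(server.GetStream(), client.GetStream());
            await Task.WhenAny(up, down); // stop when either side closes
        }
    }

    static async Task PumpAsync(NetworkStream from, NetworkStream to)
    {
        var buf = new byte[8192];
        int n;
        while ((n = await from.ReadAsync(buf, 0, buf.Length)) > 0)
        {
            await Task.Delay(Delay); // the simulated network latency
            await to.WriteAsync(buf, 0, n);
        }
    }
}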
You may be operating under a misconception. MS-Access supports so-called "heterogeneous joins" (i.e. tables from a variety of back-ends may be included in the same query, e.g. combining data from Oracle and SQLServer and Access and an Excel spreadsheet). To support this feature, Access applies the WHERE clause filter at the client except in situations where there's a "pass-through" query against an intelligent back-end. In SQL Server, the filtering occurs in the engine running on the server, so SQL Server typically sends much smaller datasets to the client.
The answer to your question also depends on what you mean by "remote". If you pit Access and SQL Server against each other on the same network, SQL Server running on the server will consume only a small fraction of the bandwidth that Access does, if the Access MDB file resides on a file server. (Of course if the MDB resides on the local PC, no network bandwidth is consumed.) If you're comparing Access on a LAN versus SQL Server over broadband via the cloud, then you're comparing a nominal 100 mbit/sec pipe against DSL or cable bandwidth, i.e. against perhaps 20 mbit/sec nominal for high-speed cable, a fifth of the bandwidth at best, probably much less.
So you have to be more specific about what you're trying to compare.
Are you comparing Access clients on the local PC consuming an Access MDB residing on the file server against some other kind of client consuming data from a SQL Server residing on another server on the same network? Are you going to continue to use Access as the client? Will your queries be pass-through?
There is a software application for Windows that does that (it simulates low bandwidth, latency, and losses if necessary). It's not free, though; the trial version has a 30-second emulation limit. Here is the product's home page: http://softperfect.com/products/connectionemulator/
@RedFilter: you should indicate which version of Access you are using. This document from 2006 shows that the story of what Access brings down to the client across the wire is more complicated than whether the query contains "Access-specific keywords":
http://msdn.microsoft.com/en-us/library/bb188204(SQL.90).aspx
But Access may be getting more and more sophisticated about using server resources with each newer version.
I'll stand by my simple advice: if you want to minimize bandwidth consumption, while still using Access as the GUI, pass-through queries do best, because then it is you, not Access, who will control the amount of data that comes down the wire.
I still think your initial question/approach is misguided: if your Access MDB file was located on the LAN in the first place (was it?), you don't need to simulate the effects of network latency. You need to sniff the SQL statements Access generates rather than introducing some arbitrary, constant "network latency" factor. To compare an Access GUI using an MDB located on a LAN server against an upsized Access GUI going against a SQL Server back-end, you need to assess what data Access brings down across the wire to the client from the back-end server. Even "upsized" Access can be a hog at the bandwidth trough unless you use pass-through queries. But a properly written client for a SQL Server back-end will always be far more parsimonious with network bandwidth than Access going against an MDB located on a LAN server, ceteris paribus.

Why is it not advisable to have the database and web server on the same machine?

Listening to Scott Hanselman's interview with the Stack Overflow team (part 1 and 2), he was adamant that the SQL server and application server should be on separate machines. Is this just to make sure that if one server is compromised, both systems aren't accessible? Do the security concerns outweigh the complexity of two servers (extra cost, dedicated network connection between the two, more maintenance, etc.), especially for a small application, where neither piece is using too much CPU or memory? Even with two servers, with one server compromised, an attacker could still do serious damage, either by deleting the database, or messing with the application code.
Why would this be such a big deal if performance isn't an issue?
Security. Your web server lives in a DMZ, accessible to the public internet and taking untrusted input from anonymous users. If your web server gets compromised, and you've followed least privilege rules in connecting to your DB, the maximum exposure is what your app can do through the database API. If you have a business tier in between, you have one more step between your attacker and your data. If, on the other hand, your database is on the same server, the attacker now has root access to your data and server.
Scalability. Keeping your web server stateless allows you to scale your web servers horizontally pretty much effortlessly. It is very difficult to horizontally scale a database server.
Performance. 2 boxes = 2 times the CPU, 2 times the RAM, and 2 times the spindles for disk access.
All that being said, I can certainly see reasonable cases that none of those points really matter.
It doesn't really matter (you can quite happily run your site with the web server and database on the same machine); separating them is just the easiest first step in scaling.
It's exactly what Stack Overflow did: they started with a single machine running IIS and SQL Server, then, when it started getting heavily loaded, a second server was bought and SQL Server was moved onto it.
If performance is not an issue, do not waste money buying/maintaining two servers.
On the other hand, referring to a different blogging Scott (Watermasyck, of Telligent): they found that most users could speed up their websites (using Telligent's Community Server) by putting the database on the same machine as the web site. However, in their customers' case, the DB and web server are usually the only applications on that machine, and the website isn't straining the machine that much, so the efficiency of not having to send data across the network more than made up for the increased strain.
Tom is correct on this. Some other reasons are that it isn't cost effective and that there are additional security risks.
Web servers have different hardware requirements than database servers. Database servers fare better with a lot of memory and a really fast disk array, while web servers only require enough memory to cache files and frequent DB requests (depending on your setup). Regarding cost effectiveness, the two servers won't necessarily be less expensive, but the performance/cost ratio should be higher, since you don't have two different applications competing for resources. For this reason, you'd probably have to spend a lot more on one server that caters to both and offers performance equivalent to two specialized ones.
The security concern is that if the single machine is compromised, both webserver and database are vulnerable. With two servers, you have some breathing room as the 2nd server will still be secure (for a while at least).
Also, there are some scalability benefits since you may only have to maintain a few database servers that are used by a bunch of different web applications. This way you have less work to do applying upgrades or patches and doing performance tuning. I believe that there are server management tools for making these tasks easier though (in the single machine case).
I would think the big factor would be performance. Both the web server/app code and SQL Server would cache commonly requested data in memory and you're killing your cache performance by running them in the same memory space.
Security is a major concern. Ideally your database server should be sitting behind a firewall with only the ports required to perform data access opened. Your web application should be connecting to the database server with a SQL account that has just enough rights for the application to function and no more. For example you should remove rights that permit dropping of objects and most certainly you shouldn't be connecting using accounts such as 'sa'.
In the event that you lose the web server to a hijack (i.e. a full blown privilege escalation to administrator rights), the worst case scenario is that your application's database may be compromised but not the whole database server (as would be the case if the database server and web server were the same machine). If you've encrypted your database connection strings and the hacker isn't savvy enough to decrypt them then all you've lost is the web server.
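To make the least-privilege point concrete, here is a hedged sketch of what the application's connection might look like (the server, database, and login names are invented; the actual restrictions are granted on the SQL Server side, not in this code):

using System.Data.SqlClient;

class LeastPrivilegeSketch
{
    static SqlConnection GetAppConnection()
    {
        // 'shop_app' is a dedicated SQL login granted only the SELECT /
        // EXECUTE rights the site needs -- never 'sa', never sysadmin,
        // and no rights to drop or alter objects.
        // The connectionStrings section holding this string can itself
        // be encrypted, e.g. with aspnet_regiis -pe "connectionStrings".
        return new SqlConnection(
            "Data Source=db01;Initial Catalog=ShopDb;" +
            "User ID=shop_app;Password=<from protected config>;");
    }
}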
One factor that hasn't been mentioned yet is load balancing. If you start off thinking of the web server and the database as separate machines, you optimize for fewer network round trips and also it gets easier to add a second web server or a second database engine as needs increase.
I agree with Daniel Earwicker - the security question is pretty much flawed.
If you have a single box setup with a webserver and only the database for that webserver on it, if that webserver is compromised you lose both the webserver and only the database for that specific application.
This is exactly the same as what happens if you lose the webserver on a 2-server setup. You lose the web server, and just the database for that specific application.
The argument that 'the rest of the DB server's integrity is maintained' where you have a 2-server setup is irrelevant, because in the first scenario, every other database server relating to every other application (if there are any) remain unaffected as well - being, as they are, hosted elsewhere.
Similarly, to the question posed by Kev -- "what about all the other databases residing on the DB server?" -- all you've lost is one database.
If you were hosting an application and database on one server, you would only host the databases related to that application on it. Therefore, you would not lose any additional databases in a single-server setup compared to a multiple-server setup.
By contrast, in a two-server setup where the attacker has access to the web server, and by proxy limited rights (in the best-case scenario) to the database server, they could put the databases of every other application at risk by running slow, memory-intensive queries or by filling up the available storage space on the database server. By separating the applications out into their own concerns, very much like virtualisation, you also isolate them for security purposes in a positive way.
I can speak from first hand experience that it is often a good idea to place the web server and database on different machines. If you have an application that is resource intensive, it can easily cause the CPU cycles on the machine to peak, essentially bringing the machine to a halt. However, if your application has limited use of the database, it would probably be no big deal to have them share a server.
Wow, no one brings up the fact that if you actually buy SQL Server at 5k bucks, you might want to use it for more than your web application. If you're using Express, maybe you don't care. I've seen SQL Servers run databases for 20 to 30 applications, so putting one on the web server would not be smart.
Secondly, it depends on whom the server is for. I do work for financial companies and the government, so we use a crazy pain-in-the-arse approach of using only sprocs and limiting ports from the web server to SQL. So if the web app gets hacked, the only thing the hacker can do is call sprocs, as the user account on the web server is locked down to only see/call sprocs on the DB. Now the hacker has to figure out how to get into the DB. If it's on the web server, well, it's kind of easy to get to.
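For illustration, the sprocs-only pattern described above looks roughly like this from the web app's side (the procedure and parameter names are invented; the lockdown itself is enforced by SQL Server permissions, not by this code):

using System.Data;
using System.Data.SqlClient;

class SprocOnlySketch
{
    static void LoadOrders(string connStr, int customerId)
    {
        // connStr uses an execute-only login, per the approach above.
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("dbo.GetOrdersForCustomer", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // Read rows here; ad-hoc SELECT/INSERT/DROP against tables
                // would be denied for this account, so stored procedures
                // are the only surface an attacker can reach.
            }
        }
    }
}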
It depends on the application and its purpose. When high availability and performance are not critical, it's not bad to keep the DB and web server together, especially considering the performance gains: if the application makes a large number of database queries, a considerable amount of network load can be removed by keeping it all on the same system, keeping response times low.
I listened to that podcast, and it was amusing, but the security argument made no sense to me. If you've compromised server A, and that server can access data on server B, then you instantly have access to the data on server B.
I think it's because the two machines usually need to be optimized in different ways. Other than that, I have no idea; we run all our applications with the server and database on the same machine (granted, we're not public-facing), and we've had no problems.
I can't imagine that too many people care about one machine being compromised over both since the web application will usually have nearly unrestricted access to at the very least the data if not the schema inside the database.
Interested in what others might say.
Database licences are not cheap and are often charged per CPU; by separating out your web servers, you can therefore reduce the cost of your database licences.
E.g., if you have one server doing both web and database with 8 CPUs, you will have to pay for an 8-CPU licence. However, if you have two servers with 4 CPUs each and run the database on only one of them, you will only have to pay for a 4-CPU licence.
An additional concern is that databases like to take up all the available memory and hold it in reserve for when they want to use it. You can force a limit on the memory, but this can considerably slow data access.
Something not mentioned here, and the reason I am facing this, is zero-downtime deployments. Currently I have the DB and web server on the same machine, and that makes updates a pain. If they are on separate machines, you can perform A/B releases.
I.e.:
The DNS currently points to WebServerA
Apply sofware updates to WebServerB
Change DNS to point to WebServerB
Work on WebServerA at leisure for the next round of updates.
This works because the state is stored in the DB, on a separate server.
Arguing that there is a real performance gain to be had by running a database server on a web server is a flawed argument.
Since Database servers take query strings and return result sets, the data actually flowing from data server to web server is relatively small, but the horsepower required to process the query and generate the result set is relatively large. Optimizing performance around the data transfer time therefore is optimizing around the wrong thing.
Regarding security, there are advantages to having the data server on a different box than the web server. Having such a setup is not the be all and end all of security, but it is a step in the right direction.
Regarding scalability, it is easy and relatively cheap to add web servers and put them into cluster to handle increased traffic. It is not so easy and cheap to add data servers and cluster them. Also, web servers and data servers have different hardware needs, so multiple boxes help out with scalability.
If you are starting small and have only one box, then a good way to go would be to use virtual machines. Running the web server and data server in different VMs on one host gives you all the gains of separate boxes at the price of one large box.
Operating system is another consideration. While your database may require larger memory spaces and therefore UNIX, your web server - or more specifically your app server, since you mention only two tiers - may be .NET-based and therefore require Windows.
OK, here is the thing: it is more secure to have your DB server installed on a separate machine and your application on the web server. You then connect your application to the DB over the network. That's it.
