Database mirroring between Windows Azure VMs - sql-server

I have two Windows Server 2012 R2 Datacenter VMs, each running SQL Server 2012 Standard. I installed both my application and the database server on each VM. Both VMs reside within the same cloud service.
I also set up load balancing between the two VMs on port 80. Now it's a matter of mirroring the databases. I tried to set up SQL Server mirroring but have had no luck so far; I'm not sure how these two VMs can communicate with each other over the same port, 5022.
I've done some reading, but I'm still not sure what the right approach is. I could definitely use some help now.
Questions:
a) Do I need to set up a virtual network in order to mirror databases?
b) Can I mirror databases that reside within the same virtual network?
c) If my assumptions above are incorrect, what is the best way forward on this?
Thanks in advance!!
UPDATE: I managed to set up the principal server on VM1 and both the mirror and witness servers on VM2 (if you have the resources, it's best to put them on separate VMs). Both VMs reside within the same virtual network and the same cloud service.
Now, when the principal becomes unavailable, the witness automatically promotes the mirror to principal, and it is no longer stuck in the recovery state.
If you're planning to host both the witness and mirror SQL Server instances on the same server, make sure you use a different port for the witness.
e.g.
- Principal : 5022
- Mirror : 5022
- Witness : 5023
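For reference, a minimal sketch of the endpoint DDL for this layout (endpoint names and ports are placeholders; the security setup is omitted here):

```sql
-- VM1, principal instance:
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- VM2, mirror instance:
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- VM2, witness instance (same machine as the mirror, so it needs its own port):
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5023)
    FOR DATABASE_MIRRORING (ROLE = WITNESS);
```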

They can be in the same cloud service, and placing them in the same one will make life simpler because you won't need to deal with public endpoints. A cloud service is really just a container for VMs and/or roles that is also associated with an implicitly created network (which is why you need an endpoint to access it from outside).
It is also recommended to put the VMs into the same availability set; Azure will then try not to shut down all of them at the same time.
The simplest way to set up database mirroring is to use certificates. You can see an example here.
Note: don't forget to set up a witness if you need automatic failover.
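In outline, the certificate approach looks like this (a sketch with placeholder names and paths; run the mirrored steps on the other host with its own certificate as well):

```sql
-- On HOST_A: create a certificate and bind the mirroring endpoint to it.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE HostA_Cert WITH SUBJECT = 'HOST_A mirroring certificate';

CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (
        AUTHENTICATION = CERTIFICATE HostA_Cert,
        ENCRYPTION = REQUIRED ALGORITHM AES,
        ROLE = ALL);

-- Export the public key and copy the file to HOST_B:
BACKUP CERTIFICATE HostA_Cert TO FILE = 'C:\HostA_Cert.cer';

-- On HOST_B: import HOST_A's public key and allow it to connect to the endpoint.
CREATE LOGIN HostA_Login WITH PASSWORD = '<strong password>';
CREATE USER HostA_User FOR LOGIN HostA_Login;
CREATE CERTIFICATE HostA_Cert
    AUTHORIZATION HostA_User
    FROM FILE = 'C:\HostA_Cert.cer';
GRANT CONNECT ON ENDPOINT::Mirroring TO HostA_Login;
```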

In order for two Azure VMs to talk to each other, they need to be set up in the same affinity group.
A virtual network is a specialized affinity group in which you are able to control the IPs.
Once the virtual network is set up, you should have no problem setting up replication for your database.

Related

Alarm DB Logger (Intouch) configuration with SQL Server Mirroring

I have an installation with two SCADA (InTouch) HMIs, and I want to save the data in a SQL Server database on another computer. To be as sure as possible that I always have an operational database, I'm going to set up SQL Server mirroring, so I will have two SQL Server databases plus a witness. I have no doubts about this part. To make it easier to understand, I've made an image of the system architecture.
[Image: system architecture]
My question is how to configure the Alarm DB Logger to automatically point to the secondary database in case the principal database goes down for whatever reason.
PS: I don't know if it's even possible.
Configure the database for automatic failover; connections are then handled automatically in case of a failover. Read up on mirroring endpoints.
The links below should have more than enough information.
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/role-switching-during-a-database-mirroring-session-sql-server
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/the-database-mirroring-endpoint-sql-server
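As a rough sketch (database and server names here are hypothetical), the failover configuration itself is only a few statements; run the first one on the mirror, then the rest on the principal:

```sql
-- On the mirror instance (run this first):
ALTER DATABASE AlarmDB SET PARTNER = 'TCP://principal.mydomain.local:5022';

-- On the principal instance:
ALTER DATABASE AlarmDB SET PARTNER = 'TCP://mirror.mydomain.local:5022';
ALTER DATABASE AlarmDB SET WITNESS = 'TCP://witness.mydomain.local:5023';

-- Automatic failover requires synchronous (FULL safety) mode, which is the default:
ALTER DATABASE AlarmDB SET PARTNER SAFETY FULL;
```

On the client side, data access libraries such as ADO.NET also accept a Failover Partner keyword in the connection string, which lets a client redirect automatically after a failover.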
The AlarmDBLogger reads its configuration from the registry, so you could try the following:
- Stop the AlarmLogger
- Change ServerName in the registry under [HKLM].[Software].[Wonderware].[AlarmLogger].[SQLServer]
- Start the AlarmLogger
But what about the two InTouch nodes? What if one of those fails? You would have to make sure one of them logs alarms, and that they don't log duplicates!
The standard controls and ActiveX components for alarms use a specific view in the alarm database. You cannot change that behaviour, but you can script a server change in InTouch or System Platform.
Keep in mind that redundancy needs to be tested, and should only be implemented if 100% uptime is necessary. In many cases you will be creating new problems to solve instead of solving an actual problem.

Connecting to a remote SQL Server without configuring the firewall

I have a SQL Server instance located in the US, and I've written a program that connects to a database on that server and retrieves data from it. The users of the program are spread around the world, and the majority of them can use the program without problems (i.e. the connection is established successfully).
But some of the users who try to run the program from inside their office buildings can't connect to the server because of their companies' firewalls. Since the number and location of the users are not known (the application is distributed for free, with no notification to me), customizing every firewall isn't really an option (even though it helped in the cases where I was able to do it).
I believe there should be something like a "certificate" that could be embedded in my program and registered somewhere on the user's machine to allow the connection to be established, or something of that sort. Unfortunately, I haven't found anything specific on the Internet, most probably because I googled the wrong terms.
Any help or advice is very much appreciated!
If a firewall (or other security device) is blocking the connection, there is no magic bullet. You need to avoid talking to SQL Server directly.
Even if you changed the port, many of those company workers will be limited to HTTP(S) access, and then only via a proxy.
So you need to talk HTTP to an API you provide, and the implementation of that API then talks (under your control) to the database.
This has the enormous advantage of giving you an extra layer protecting the integrity of the data in the database.
To establish a connection, the firewall at the client's site needs to allow access to the IP address where your SQL Server is located.
You could show users a message asking them to allow access to the SQL Server's IP address in their firewall, but that is unsafe for security reasons.
Instead, you can build a third application, such as a web service, that takes requests from clients and forwards them to your SQL Server. Host this application on a public IP and inform clients that they need to allow that IP through their firewall to run the program. This preserves security, and your problem is solved.

Azure VM availability, mirroring

Apologies for the noob question, I've never dealt with failover before.
Currently we have a single hardware server running Windows Server, SQL Server, ASP.NET and a single (very large) web application. We are considering migrating this to an Azure VM.
I see in the SLA that Microsoft will only guarantee 99.95% availability if I am running more than one instance of an Azure VM, to allow for failure and reboots etc.
Does this mean I would therefore have two servers to manage and maintain? For example, two SQL Server installations with a database on each, and two sets of ASP.NET application files? If so, this puts the price up dramatically.
I assume there is no way to 'mirror' one server across to the other to reduce this workload?
Also, our hardware server has 25,000 uploaded files on it. Would we need to put these on a VHD and then 'link' it to whichever live server is running, or does Azure handle this automatically? Or would they have to be mirrored from the live server to the failover server?
Any pointers would be appreciated. I've already read all the Azure documentation but it hasn't really made things much clearer...
It seems like there are several topics you need to look at.
Let's start with the database. The easiest approach would be to migrate your SQL Server database to SQL Azure. Then you would not have to maintain it, or the machines it runs on, yourself.
This has the advantage that your central component can be used by one or many applications.
Second are your uploaded files. I assume your application allows uploading files for sharing or something similar. The best option would be to write these files to Windows Azure blob storage. This often means rewriting a connector, but it centralizes another component.
As a first step you could make the files available so clients can download them via a link; if that doesn't suit, you could fetch the files from blob storage and deliver them to the customer yourself.
If you don't want to rewrite your component, you would have to use a VHD. A VHD can only have one lease, so only one instance can use it at a time. A common pattern I have seen is that the application, as it starts up, tries to acquire the lease (trial-and-error style).
Last but not least, your ASP.NET application. Here I would look into cloud service instances rather than VMs, because with VMs you have to do all the management yourself (VMs are IaaS). A .NET application should be easy to convert and deploy as instances.
Then you don't have to think about failover and so on: just deploy two instances and the load balancer will do the rest.
If you are able to "outsource" the SQL Server, you can use smaller machines for the ASP.NET application. Try to scale out rather than up; that is, use more small nodes rather than one big one, if possible.
If you really do go the VM route, you have to manage everything yourself, and yes, you then need two VMs. You may even need three, because there is no automatic load balancer, and with only two VMs just one machine can have port 80 exposed.
HTH

Database synchronization

Recently my clients have asked me if they can use their application remotely, disconnected from the local network and the company server.
One solution is to place the database in the cloud, but then a connection to the database, to the cloud, and to the internet must always be available.
That's not always the case.
So my question is: is there any database sync system, or synchronization library, that would let me work disconnected against a local database and then, when I connect, synchronize the changes I have made and receive the changes others have made?
Update:
The application runs on Windows (7/XP) (for now)
It's written in Delphi 2007 (Win32)
All clients need read/write access
All clients have an internet connection, but it is not always on
Security is not critical, but the sync service should encrypt the communication
When on the company network, the system should sync with and use the server database, not the local one.
There are a host of issues to think about with such a solution. First, there are lots of possible approaches, such as:
Using database replication to mimic every update (like a "hot" backup)
Building an application to copy the database periodically (every night)
Using a third-party tool (which is what you are asking about, I think)
With replication services, the connection does not have to always be up. Changes to the database are logged when the connection is not available and then applied when they can be sent.
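As a toy illustration of that logging idea (SQL Server syntax; table and column names are hypothetical, and real replication engines are far more sophisticated), a trigger can queue each local change for later delivery:

```sql
-- Queue of local changes waiting to be pushed to the other side.
CREATE TABLE ChangeLog (
    LogId     INT IDENTITY PRIMARY KEY,
    TableName VARCHAR(128) NOT NULL,
    KeyValue  INT NOT NULL,
    Operation CHAR(1) NOT NULL,              -- 'I'nsert, 'U'pdate or 'D'elete
    LoggedAt  DATETIME NOT NULL DEFAULT GETDATE(),
    Applied   BIT NOT NULL DEFAULT 0         -- set once the change has been sent
);

-- Record every change to Customers so it can be replayed once a connection exists.
CREATE TRIGGER trg_Customers_Log ON Customers
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    INSERT INTO ChangeLog (TableName, KeyValue, Operation)
    SELECT 'Customers',
           COALESCE(i.CustomerId, d.CustomerId),
           CASE WHEN d.CustomerId IS NULL THEN 'I'
                WHEN i.CustomerId IS NULL THEN 'D'
                ELSE 'U' END
    FROM inserted i
    FULL OUTER JOIN deleted d ON i.CustomerId = d.CustomerId;
END;
```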
However, there are lots of other issues when you leave a corporate network. What about security of the data and access rights? Do you have other options, such as making it easier to access the database from within the network? Do the users need only read access to the database, or read-write access? Would both versions need to be accessed at the same time? Would there be updates to both at the same time?
You may have other options that are more secure than just moving a database to the cloud.
I believe RemObjects DataAbstract allows offline mode and synchronization by using what they call Briefcases. All your other requirements (security, encrypted connections, etc.) are also covered.
This is not a drop-in replacement, though, and may require extensive rewriting/refactoring of your application. There are lots of upsides, though: business rules can (and should) be enforced on the server (real security), scriptable business rules, a multiplatform architecture, etc.
There are some products available in the Java world, such as SymmetricDS (LGPL license). Apart from actually being a working system, it documents how it achieves synchronization, and it connects to any database with JDBC support. There is a pro version, but the user guide (a downloadable PDF) gives you the database schema plus the rules for push/pull syncing, which is useful if you want to build your own.
By the way, there is a data-replication tag on SO; adding it would help.
One possibility that is free is the Microsoft Sync Framework: http://msdn.microsoft.com/en-us/sync/bb736753.aspx
It may be possible for you to use it, but you would need to provide some more detail about your application and operating environment to be sure.
Is it possible to share a database file such as an .mdb and have it work properly? I've tried, but sometimes the file holding the database changes from DB to DB1. I use Delphi XE4 and Google Drive.
Thanks

How to secure database traffic the other way around, that is to say, from client to server

My scenario:
I am trying to develop a service which will query different databases.
To clear the above statement up:
I use the word service in its broadest sense: a software component that will provide some value to the database owner.
These databases will in no way be under my control, as they will belong to different companies. They won't be known beforehand, and multiple vendors are to be supported: Oracle, MS SQL Server, MySQL, PostgreSQL. OLE DB and ODBC connections will also be supported.
The problem: security of database credentials and overall traffic is a big concern but the configuration effort should be reduced at a minimum. Ideally, all the security issues should be addressed programmatically in the service implementation and require no configuration effort for the database owner other than provide a valid connection string.
Usually, database SSL support is done through server certificates, which I want to avoid as that is cumbersome for the client (the database owner).
I have been looking into how to do this, to no avail. Hopefully it might be done with OpenSSL, SSPI, client SSL certificates, or some form of tunneling; or maybe it is simply not possible. Any advice would be greatly appreciated.
I am having a bit of difficulty understanding how this service would work without being extremely cumbersome for the database owner even before you try to secure the traffic with the database.
Take Oracle in particular (though I assume there would be similar issues with other databases). In order for your service to access an Oracle database, the owner of the database would have to open up a hole in their firewall to allow your server(s) to access the database on a particular port so they would need to know the IP addresses of your servers and there is a good chance that they would need to configure a service that does all of its communication on a single port (by default, the Oracle listener will frequently redirect the client to a different port for the actual interaction with the database). If they are at all security conscious, they would have to install Oracle Connection Manager on a separate machine to proxy the connection between your server and the database rather than exposing the database directly to the internet. That's quite a bit of configuration work that would be required internally and that's assuming that the database account already exists with appropriate privileges and that everyone signs off on granting database access from outside the firewall.
If you then want to encrypt communication with the database, you'd either need to establish a VPN connection to the database owner's network (which would potentially eliminate some of the firewall issues) or you'd need to use something like Oracle Advanced Security to encrypt the communication between your servers. Creating VPN connections to many different customer networks would require a potentially huge configuration effort and could require that you maintain one server per customer because different customers will have different VPN software requirements that may be mutually incompatible. The Advanced Security option is an extra cost license on top of the enterprise edition Oracle license that the customer would have to go out and purchase (and it would not be cheap). You'd only get to the point of worrying about getting an appropriate SSL certificate once all these other hoops had been jumped through. The SSL certificate exchange would seem like the easiest part of the whole process.
And that's just to support Oracle. Support for other databases will involve a similar series of steps but the exact process will tend to be slightly different.
I would tend to expect that you'd be better served, depending on the business problem you're trying to solve, by creating a product that your customers could install on their own servers inside their network. It would connect to the database and provide an interface that either sends data to your central server via something like HTTPS POST calls, or listens for HTTPS requests that are passed on to the database, with the results returned over HTTPS.
SSL is very important in order to keep a client's database safe, but there is more to it than that. You have to make sure that each database account is locked down. Each client must only have access to their own database. Furthermore, every database has other privileges that are nasty. For instance, MySQL has FILE_PRIV, which allows an account to read/write files. MS SQL Server has xp_cmdshell, which allows the user to access cmd.exe from SQL (why would they do this!?). PostgreSQL allows you to write stored procedures in various languages, and from there you can call all sorts of nasty functions.
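On SQL Server, for instance, that kind of lockdown boils down to something like the following (a sketch; the login, user, and database names are hypothetical):

```sql
-- A per-client login that is only mapped into that client's own database.
CREATE LOGIN client42_login WITH PASSWORD = '<strong password>', CHECK_POLICY = ON;

USE Client42Db;
CREATE USER client42_user FOR LOGIN client42_login;
ALTER ROLE db_datareader ADD MEMBER client42_user;
ALTER ROLE db_datawriter ADD MEMBER client42_user;

-- Keep dangerous server-level features disabled (xp_cmdshell is off by default):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
```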
Then there are other problems. A malformed query can cause a buffer overflow, which will give an attacker the keys to the kingdom. You have to make sure all of your databases are up to date, and then pray no one drops a 0-day.
