How can I securely update an end user's database with EF migrations?

I have a program that runs on a wide variety of clients' computers, with the database migrations configured through EF code-first. How can I automatically update the end user's database when I release an update of my program? The point is: can the migration feature be applied to the end user's database at runtime without damaging the end user's data?
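One hedged sketch of a common approach (EF6 code-first; ShopContext and Configuration are placeholder names for your own DbContext and migrations configuration class): run the migrator once at startup with automatic data loss disabled, so only the explicit, versioned migrations the target database is missing get applied, and any step that would drop data fails instead of silently destroying it.

    using System.Linq;
    using System.Data.Entity;
    using System.Data.Entity.Migrations;

    public class ShopContext : DbContext { /* your existing code-first model */ }

    public class Configuration : DbMigrationsConfiguration<ShopContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;        // only run explicit, versioned migrations
            AutomaticMigrationDataLossAllowed = false; // refuse any step that would drop data
        }
    }

    public static class Startup
    {
        public static void UpgradeDatabase()
        {
            var migrator = new DbMigrator(new Configuration());
            if (migrator.GetPendingMigrations().Any())
                migrator.Update(); // applies only the migrations the target DB is missing
        }
    }

Calling Database.SetInitializer(new MigrateDatabaseToLatestVersion<ShopContext, Configuration>()) achieves the same thing on first context use; either way, taking a backup of the user's database before upgrading remains a sensible safety net.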

Related

SQL Server: Database User Access Mode

With SQL Server there are three user access mode settings per database:
MULTI_USER
SINGLE_USER
RESTRICTED_USER
My question is: when exactly do you put a database in SINGLE_USER or RESTRICTED_USER mode?
For example, if you want to update the SQL Server and thus prevent further sessions from being established for the duration of the update?
Typically, RESTRICTED_USER and SINGLE_USER are used for maintenance that needs to be done while the applications are offline but you still need access to the data or schema.
Examples
Data migrations: migrations spanning multiple hours/days and touching multiple tables/databases/files are often done while the system is offline to minimize locking/blocking.
Hardware migrations: typically when moving to new hardware, the database is put in restricted mode before the service is turned off, to take some full backups, put the database offline, ...
Recovery: when your database is corrupted and you are restoring logs and running DBCC CHECKDB (although this is mostly done in a separate environment).
...
So basically: when the DBAs/developers want to make sure nobody else can access the database, but they still need to be able to perform tasks.
In an enterprise environment this is often a fail-safe, as access to the database will probably already be limited by firewalls and user policies when doing one of these tasks.
Patching SQL Server or the OS is done when the service is stopped, as OS patches often require reboots and SQL Server patches require service restarts. When running in a clustered environment, it's done one node at a time to maintain uptime. So restricted access is not used in these cases, as the SQL Server is offline.
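For illustration, a minimal sketch of that pattern (shown in C#/ADO.NET; the two ALTER DATABASE statements are just as commonly run from SSMS, and the server, database name, and credentials here are placeholders):

    using System.Data.SqlClient;

    // Connect to master, not to the database being taken offline.
    using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true;"))
    {
        conn.Open();
        // WITH ROLLBACK IMMEDIATE kicks other sessions out instead of waiting for them.
        new SqlCommand("ALTER DATABASE MyShop SET SINGLE_USER WITH ROLLBACK IMMEDIATE;", conn)
            .ExecuteNonQuery();
        try
        {
            // ... perform the maintenance (DBCC CHECKDB, schema changes, backups, ...) ...
        }
        finally
        {
            new SqlCommand("ALTER DATABASE MyShop SET MULTI_USER;", conn).ExecuteNonQuery();
        }
    }

RESTRICTED_USER works the same way but still admits members of db_owner, dbcreator, and sysadmin, which is why it suits maintenance where the DBA's own tools must stay connected.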

How should I design multiple instances of the same application updating one database?

I'm managing an online book store website. For the sake of high availability I've set up two Tomcat instances running the website application; they are exactly the same program, and they share the same database, which is located on another server.
My question is: how can I avoid conflicts or dirty data when the two applications perform the same updates/inserts against the database at the same time?
For example: update t_sale set total='${num}' where category='cs' — if two processes execute this SQL simultaneously, it can cause lost updates.
If by "database" you are talking about a well designed schema that is running on an RDBMS such as Oracle, DB2, or SQL Server, then the database itself will prevent what you call "conflicts" by locking parts of the database during each update transaction.
You can prevent "dirty data" from getting into the database by adding features such as check constraints and primary/foreign key structures in the database itself.
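To make that concrete for the t_sale example: if the new total is derived from the old one, push the arithmetic into the UPDATE itself, so the row lock serializes concurrent writers and the lost update disappears. A sketch (C#/ADO.NET purely for illustration; the same parameterized statement works from JDBC in the Tomcat apps):

    using System.Data.SqlClient;

    public static class SaleRepository
    {
        public static void AddToTotal(string connStr, string category, decimal delta)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "UPDATE t_sale SET total = total + @delta WHERE category = @cat;", conn);
                cmd.Parameters.AddWithValue("@delta", delta);
                cmd.Parameters.AddWithValue("@cat", category);
                cmd.ExecuteNonQuery(); // the read-modify-write happens atomically in the DB
            }
        }
    }

If the application genuinely must write an absolute value, the usual alternative is optimistic concurrency: add a version/rowversion column, include it in the WHERE clause, and treat zero affected rows as a conflict to retry.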

Move from a local single-user database to an online multi-user database

I have a calendar-type WPF program that is used to assign the workload to a team. The events are stored in an Access database, and the program is accessed by one person at a time by remotely connecting to a computer. The team has grown, and multiple people now need to access the program simultaneously. I can install the program on several computers, but where should I move the database? To a file-sync service like Dropbox/OneDrive, or to an online SQL host? Thanks.
You can use SQL Server on many cloud platforms (though I am not sure Dropbox can host SQL Server natively). Azure (Microsoft's cloud) is a very mature solution. Now that multiple users will be managing data, you should still verify that the database is backed up on a regular basis and that any updates to data are done within transactions that your code is aware of. "Aware of" means that if there is a conflict, your code should either resubmit or notify the user that the insert/update/delete failed.
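A minimal sketch of the "resubmit or notify" idea (the retry policy and the names here are assumptions, not a prescription):

    using System;
    using System.Data.SqlClient;

    public static class Writes
    {
        // Runs one unit of work inside a transaction; retries once if this session
        // was chosen as a deadlock victim (SQL Server error 1205), otherwise lets
        // the exception surface so the UI can tell the user the write failed.
        public static void SaveWithRetry(string connStr, Action<SqlConnection, SqlTransaction> work)
        {
            for (var attempt = 1; ; attempt++)
            {
                try
                {
                    using (var conn = new SqlConnection(connStr))
                    {
                        conn.Open();
                        using (var tx = conn.BeginTransaction())
                        {
                            work(conn, tx);
                            tx.Commit();
                        }
                    }
                    return;
                }
                catch (SqlException ex) when (ex.Number == 1205 && attempt < 2)
                {
                    // Deadlock victim: the transaction was rolled back, so resubmitting is safe.
                }
            }
        }
    }

Note that commands created inside work must be enlisted in the transaction (cmd.Transaction = tx), or SQL Server will reject them.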

Will scaling an Azure DB from the Web tier to a new tier cause availability issues?

As far as I can tell, scaling an Azure DB from the retired tiers to the new tiers is simply a matter of using the scale function in the Azure portal.
What I cannot seem to find anywhere is a definitive answer as to whether there are any connection string changes required (or any other issues that could cause unavailability) when scaling from the retired to new tiers.
I have a production database that needs to be upgraded; a service interruption would be very bad.
The scale operation will not change the connection string. You could face some (very small, but) finite amount of downtime while the switchover happens.
Please refer to the documentation for details. Note that you will have to suspend geo-replication (if already enabled) for the duration of the upgrade.
Technically it will be the same server, same connection string, same everything except version and features.
But I would be concerned about the following statement from the docs:
The duration of the upgrade depends on the size, edition and number of databases in the server. The upgrade process can run for hours to days, especially for servers that have databases:
Larger than 50 GB, or
At a non-premium service tier
Which is kind of concerning.
What I would do, if possible is:
Put my service into read-only mode (put on hold any writes to the DB)
Create a new DB on the same server from the existing one with the command CREATE DATABASE ... AS COPY OF ... (sketched below).
When the creation of the DB has finished, export the new DB to a .bacpac, and delete the copy when the export is done.
Perform the upgrade.
In theory you could do the process without putting your system into read-only mode, but I am just taking extra precautionary measures.
And yes, you also have to be aware that you are upgrading your Azure SQL DB server, not just a single database.
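A sketch of the copy step (database and server names are placeholders; CREATE DATABASE ... AS COPY OF must be issued against the logical server's master database, and it returns before the copy is complete):

    using System.Data.SqlClient;

    using (var conn = new SqlConnection(
        "Server=tcp:myserver.database.windows.net;Database=master;User ID=admin;Password=...;"))
    {
        conn.Open();
        new SqlCommand("CREATE DATABASE MyShop_Backup AS COPY OF MyShop;", conn).ExecuteNonQuery();
        // The statement returns immediately; poll sys.dm_database_copies until the
        // copy finishes before exporting it to a .bacpac and deleting the copy.
    }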

Debugging an EF app locks DB tables?

I have a WCF App that has 5-6 EF Models in it. In a production env, only one instance of this app will be running.
But in development there are 5 developers working on it at the same time. (Against the same Database.)
We are noticing that sometimes tables on our SQL Server 2008 R2 DB get locked. It seems to be when someone is doing step over debugging and has to leave it on a step for a few minutes.
I am curious why EF would keep a lock on a table. How would someone keep a lock on a table using EF? What can I do to prevent this?
NOTE: This same application accesses a WCF Data Services (OData) endpoint to get some of its data (from the same database). I don't see how OData would be locking the db, but I thought I would mention it in case it is important.
There is only one solution: each developer should have a locally installed database and run debugging sessions in their own environment! Anything else is the wrong development setup. Use SQL Server Express or SQL Server Developer Edition.
All we can do is venture various guesses. E.g., if your data model is missing proper indexes, then record lookups turn into table scans, and scans escalate locks to table level.
The real solution is to investigate the blocking properly. What exactly is causing the block, and what resource is being waited on? Which session/transaction/statement is holding the resource needed by which other session/transaction/statement?
Use Activity Monitor, sp_whoisactive, or sp_Blitz. Read the "Waits and Queues" whitepaper.
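For a quick first look without extra tooling, a sketch that snapshots current blocking straight from the DMVs (read-only, requires VIEW SERVER STATE; the query pastes just as well into SSMS, and the connection string is a placeholder):

    using System;
    using System.Data.SqlClient;

    const string blockingQuery = @"
        SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_resource, t.text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.blocking_session_id <> 0;";

    using (var conn = new SqlConnection("Server=.;Database=MyAppDb;Integrated Security=true;"))
    using (var cmd = new SqlCommand(blockingQuery, conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
                Console.WriteLine($"session {reader["session_id"]} blocked by " +
                                  $"{reader["blocking_session_id"]} on {reader["wait_resource"]}");
    }

In the debugger scenario above, the blocked statement's text will typically point straight at the table whose lock the paused session is still holding.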
