I'm trying to create an application that works in multi-instance mode; there will be 3 instances of this application, plus a single PostgreSQL database instance. I want to design a startup procedure that creates the database tables programmatically; it will be executed during the initialization of static fields.
I expect that working in multi-instance mode needs some special design for this startup procedure, because two instances may try to create the same db tables at the same time.
Is there any approach you can suggest? (By the way, I'm not sure whether locking the entire database is even possible. Maybe that is an alternative solution.)
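One common approach in PostgreSQL is an advisory lock: the first instance to reach the startup procedure takes the lock and creates the tables idempotently, while the other instances block until it is done. A minimal sketch, assuming a lock key and table definition of my own choosing:

-- Minimal sketch, assuming PostgreSQL: serialize schema creation across
-- instances with a session-level advisory lock. The key (42) is an
-- arbitrary number that all instances must agree on; the table is a
-- placeholder.
SELECT pg_advisory_lock(42);

-- Idempotent DDL: instances that arrive later simply find the work done.
CREATE TABLE IF NOT EXISTS app_settings (
    id    BIGSERIAL PRIMARY KEY,
    name  TEXT NOT NULL,
    value TEXT
);

SELECT pg_advisory_unlock(42);

Since the DDL is idempotent anyway, the lock mainly prevents instances from tripping over each other's half-finished CREATE statements during startup.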
We have one website currently being used by 3 clients, with 3 copies of the same source code calling 3 different databases. So:
1) Client A accesses "http://custA.weblink.com", which uses the "CustA" database
2) Client B accesses "http://custB.weblink.com", which uses the "CustB" database
3) Client C accesses "http://custC.weblink.com", which uses the "CustC" database
All databases have the same structure, table design and stored procedures. Only the data is different.
The issue is, when I need to deploy stored procedures, I have to repeat the backup and deployment 3 times. It's not really hard even when there are lots of stored procedures to deploy, but it doesn't seem like good practice.
Now I only have 3 clients; what if in the future I have 10? I would need to repeat the backup and deployment 10 times, which is time-consuming, and it's hard to guarantee that all stored procedures in all databases will always stay the same.
In this type of case, where I have existing multiple applications and databases, what would be good practice, or what measures could I take, to make the situation better? I don't think my company will allow huge changes like merging all clients' data into one database and rewriting the application flow to get the right data.
I thought about creating one main database without any data. All the stored procedure scripts would be deployed there, and in each of the existing "CustA", "CustB" and "CustC" databases I would use EXEC to call the stored procedure in the main database to process the data in the relevant DB. Like this:
1) Main database
USE [MainDatabase]
GO
ALTER PROCEDURE USP_GetCustomerById
    @CustId BIGINT
AS
BEGIN
    SELECT * FROM [Customer] WHERE Id = @CustId
END
GO
2) CustA database (same flow for the CustB and CustC databases)
USE [CustA]
GO
ALTER PROCEDURE USP_GetCustomerById
    @CustId BIGINT
AS
BEGIN
    EXEC MainDatabase.dbo.USP_GetCustomerById @CustId
END
GO
Will there be any impact if I do so?
Have you ever considered using Visual Studio to create a SQL Server Database Project? There you can import, for instance, the first server's ClientA settings for a single database. It will import the schemas, objects, views, indexes and so on, and then you can set up different deployment servers. You can also compare source and destination (A and B) to each other, to see if you have differences.
Example of deploying one database to multiple servers
As you can see in my picture, I have the whole structure for one database. At the bottom of the Solution Explorer you can see I have something called PROD and TEST; these are actually two different servers. You could create the 3 servers you need and then just press deploy.
Example of schema compare
Here I have compared a source with my project. Then I can import the changes I made on ClientA and inject them into my project, so I can deploy them to the other servers.
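If you later want to script the same deployment instead of clicking Publish, the dacpac that a Database Project builds can be pushed to each server with the SqlPackage command line tool. A hedged sketch, where the file, server and database names are all placeholders:

REM Hedged sketch: publish one dacpac to each client database.
REM File, server and database names are placeholders.
SqlPackage /Action:Publish /SourceFile:"MyDb.dacpac" /TargetServerName:"myserver" /TargetDatabaseName:"CustA"
SqlPackage /Action:Publish /SourceFile:"MyDb.dacpac" /TargetServerName:"myserver" /TargetDatabaseName:"CustB"
SqlPackage /Action:Publish /SourceFile:"MyDb.dacpac" /TargetServerName:"myserver" /TargetDatabaseName:"CustC"

Looping over the target databases in a small script then scales to 10 clients as easily as 3.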
There is a legacy solution where 2 applications communicate with each other via a SQL Server 2008 R2 database table.
Application "A" inserts information into the database table from time to time
Application "B" polls the database once per second to find out about new records
I guess there may be a more sophisticated approach for application "B" to find out when new records appear.
It depends on many things that are not explicitly stated in your question. Is that for one table only? For a limited set of tables? For all tables? Do you have full control on both applications?
Let's suppose this is for one table only and you can't modify application A because you don't control its sources. One way would be to use a message queue like the one described here, combined with a trigger on that table.
If you control both applications, don't use the database as a singleton and go for message queues directly...
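For the trigger-plus-queue route inside SQL Server itself, Service Broker can stand in for the polling loop. A minimal hedged sketch, where every object name is a placeholder and the database is assumed to have Service Broker enabled (ENABLE_BROKER):

-- Hedged sketch: a one-way Service Broker setup so application "B" can
-- block on a queue instead of polling. All names are placeholders.
CREATE MESSAGE TYPE NewRowMsg VALIDATION = NONE;
CREATE CONTRACT NewRowContract (NewRowMsg SENT BY INITIATOR);
CREATE QUEUE dbo.NewRowQueue;
CREATE SERVICE NewRowService ON QUEUE dbo.NewRowQueue (NewRowContract);
GO

-- Trigger on the table application "A" writes to.
CREATE TRIGGER dbo.trg_SomeTable_Notify ON dbo.SomeTable
AFTER INSERT
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE NewRowService
        TO SERVICE 'NewRowService'
        ON CONTRACT NewRowContract
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @handle
        MESSAGE TYPE NewRowMsg (CAST('new rows' AS VARBINARY(MAX)));
END
GO

-- Application "B" waits (up to 5 seconds) instead of polling every second:
WAITFOR (RECEIVE TOP (1) message_body FROM dbo.NewRowQueue), TIMEOUT 5000;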
I would like to be able to store the tracking tables in a different database than the original, for a couple of reasons:
I would like to be able to drop it on demand if I change versions of my application.
I would like to have multiple sync scopes separated by user permissioning.
I am sure there is a way through the SqlMetadataStore class, but I have not found it yet.
The SqlMetadataStore will not help you in any way with what you're trying to achieve; I'm pretty sure it is not in any way exposed in the database sync providers you're using.
Note that the tracking tables are not the only objects Sync Framework provisioning creates: you will also have triggers, stored procedures and user-defined table types. And you're not supposed to drop them separately, or even drop them yourself; you should be using the deprovisioning API.
Now, if you really want to have the tracking tables in a separate DB, the provisioning API has a Script method that can generate the SQL statements required to create the Sync Fx objects.
You can alter that script to create the tracking tables in another DB, but you have to alter the triggers as well so they insert into that other database.
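To illustrate the kind of edit that implies, a hedged sketch only; the real generated objects have scope-specific names and far more tracking columns than shown here:

-- Hedged sketch: a provisioning-generated insert trigger, hand-edited so
-- the tracking row lands in a separate database. All names are
-- placeholders and the column list is heavily simplified.
CREATE TRIGGER dbo.Customer_insert_trigger ON dbo.Customer
AFTER INSERT
AS
BEGIN
    -- Redirected: TrackingDb instead of the local database.
    INSERT INTO TrackingDb.dbo.Customer_tracking (Id, last_change_datetime)
    SELECT Id, GETUTCDATE()
    FROM inserted;
END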
Due to an employee quitting, I've been given a project that is outside my area of expertise.
I have a product where each customer will have their own copy of a database. The UI for creating the database (licensing, basic info collection, etc) is being outsourced, so I was hoping to just have a single stored procedure they can call, providing a few parameters, and have the SP create the database. I have a script for creating the database, but I'm not sure the best way to actually execute the script.
From what I've found, this seems to be outside the scope of what an SP can easily do. Is there any sort of "best practice" for handling this sort of program flow?
Generally speaking, SQL scripts - both DML and DDL - are what you use for database creation and population. SQL Server has a command line interface called SQLCMD that these scripts can be run through - here's a link to the MSDN tutorial.
Assuming there's no customization to the tables or columns involved, you could get away with using either detach/attach or backup/restore. These would require that a baseline database exist, with no customer data; you then use either of those methods to capture the database as-is. Backup/restore is preferable because detach/attach takes the database offline. Either way, users need to be synced before they can access the database.
If you have the script to create the database, it is easy for them to use it within their program. If you have any specific prerequisites for creating the database and setting permissions accordingly, you can wrap up all the scripts in one script file to execute.
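If the single-stored-procedure route is still preferred, it is possible with dynamic SQL, since CREATE DATABASE cannot take a variable name directly. A hedged sketch with invented names, carrying only a token amount of the real DDL:

-- Hedged sketch: a stored procedure that creates a per-customer database
-- via dynamic SQL. QUOTENAME guards against injection through @DbName.
CREATE PROCEDURE dbo.usp_CreateCustomerDatabase
    @DbName SYSNAME
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) = N'CREATE DATABASE ' + QUOTENAME(@DbName);
    EXEC (@sql);

    -- Subsequent DDL must also be dynamic and fully qualified, since USE
    -- is not allowed inside a stored procedure.
    SET @sql = N'CREATE TABLE ' + QUOTENAME(@DbName) + N'.dbo.Customer
                 (Id BIGINT IDENTITY PRIMARY KEY, Name NVARCHAR(200))';
    EXEC (@sql);
END

The outsourced UI would then only need EXEC dbo.usp_CreateCustomerDatabase @DbName = N'CustD'; plus whatever licensing parameters you add.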
We are in the process of a multi-year project where we're building a new system and a new database to eventually replace the old system and database. The users are using the new and old systems as we're changing them.
The problem we keep running into is when an object in one system is dependent on an object in the other system. We've been using views, but have run into a limitation with one of the technologies (Entity Framework) and are considering other options.
The other option we're looking at right now is replication. My boss isn't excited about the extra maintenance that would cause. So, what other options are there for getting dependent data into the database that needs it?
Update:
The technologies we're using are SQL Server 2008 and Entity Framework. Both databases are within the same SQL Server instance, so linked servers shouldn't be necessary.
The limitation we're facing with Entity Framework is that we can't seem to create relationships between the table-based entities and the view-based entities. No relationship can exist in the database between a view and a table, as far as I know, so the edmx diagram can't infer it. And I cannot seem to create the relationship manually without getting errors; it thinks all columns in the view are keys.
If I leave it that way I get an error like this for each column in the view:
Association End key property [...] is not mapped.
If I try to change the "Entity Key" property to false on the columns that are not the key I get this error:
All the key properties of the EntitySet [...] must be mapped to all the key properties [...] of table viewName.
According to this forum post it sounds like a limitation of the Entity Framework.
Update #2
I should also mention that the main limitation of the Entity Framework is that it only supports one database at a time. So we need the old data to appear to be in the new database for the Entity Framework to see it. We only need read access to the old system's data in the new system.
You can use linked server queries to leave the data where it is, but connect to it from the other db.
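For example (a hedged sketch; the linked server name and the four-part path are placeholders, and with both databases on one instance you would skip the linked server entirely):

-- Hedged sketch: register the old server once, then query it with
-- four-part names. OLDSRV and the object path are placeholders.
EXEC sp_addlinkedserver @server = N'OLDSRV';

SELECT c.Id, c.Name
FROM OLDSRV.OldDb.dbo.Customers AS c;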
Depending on how up-to-date the data in each db needs to be, and whether one data source can remain read-only, you can:
- Use the Database Copy Wizard to create an SSIS package that you can run periodically as a SQL Agent task
- Use snapshot replication
- Create a custom BCP in/out process to get the data to the other db
- Use transactional replication, which can be near-realtime.
If data needs to be read-write in both databases then you can use:
- transactional replication with updatable subscriptions
- merge replication
As you go down the list, the amount of work involved in maintaining the solution increases. Using linked server queries will work best if it's the right fit for what you're trying to achieve.
EDIT: If they're on the same server then, as suggested by another user, you should be able to access the table with servername.databasename.schema.tablename. It looks like it's an Entity Framework issue and not a db issue.
I don't know about EntityToSql but I know in LinqToSql you can connect to multiple databases/servers in one .dbml if you prefix the tables with:
ServerName.DatabaseName.SchemaName.TableName
MyServer.MyOldDatabase.dbo.Customers
I have been able to click on a table in the .dbml, copy and paste it into the .dbml of the alternate project, prefix the name, and set up the relationships, and it works... like I said, this was in LinqToSql; I have not tried it with EntityToSql. I would give it a shot before you go through all the work of replication and such.
If Linq-to-Entities cannot cross DBs, then replication or something that emulates it is the only thing that will work.
For performance purposes you probably want either merge replication or transactional replication with queued (not immediate) updating.
Thanks for the responses. We're going to try adding triggers to the old database tables to insert/update/delete records in the new tables of the new database. This way we can continue to use Entity Framework and also do any data transformations we need.
Once the UI functions move over to the new system for a particular feature, we'll remove the table from the old database and add a view to the old database with the same name that points to the new database table for backwards compatibility.
One thing that I realized needs to happen before we can do this is that we have to search all our code and SQL for @@IDENTITY and replace it with SCOPE_IDENTITY() so the triggers don't mess up the Ids in the old system.
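A hedged sketch of what one of those triggers might look like (table, column and database names here are invented for illustration):

-- Hedged sketch: mirror inserts from an old-database table into the
-- corresponding new-database table. All names are placeholders, and
-- NewDb.dbo.Customer.Id is assumed to be a plain key, not IDENTITY.
CREATE TRIGGER dbo.trg_Customer_MirrorInsert ON dbo.Customer
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON; -- keep extra rowcounts from confusing legacy clients
    INSERT INTO NewDb.dbo.Customer (Id, Name)
    SELECT Id, Name
    FROM inserted;
END

The @@IDENTITY point matters because an identity value generated inside a trigger would overwrite what @@IDENTITY reports to the calling code, while SCOPE_IDENTITY() only reflects the current scope.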