Is it normal to have 1 database, on a DB server, that is used by a frontend (web) server, but then also have a third server doing an UPDATE in that database?
I want the frontend server to send queries to a DB table to check if an action is "done".
SELECT status FROM table WHERE id = '...';
But that action will only be "done" if this third server sends an UPDATE to that table, and updates the status.
UPDATE table SET status = 'done' WHERE id = '...';
So two different servers (frontend and backend) will need to communicate with the DB. Is that potentially problematic? Is there a cleaner solution?
Yes, this is common. What you need to consider is the transaction isolation level.
As long as it is 'Read Committed' or above, those simple queries are safe.
If multiple, more complex queries need to be executed together, then the Repeatable Read or Serializable level should be considered.
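In SQL Server syntax, for example, the two sides of the handshake described in the question could look like this (table, column, and parameter names are illustrative, not from the original post):

```sql
-- Read Committed guarantees the poll never sees an uncommitted 'done'.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Frontend server: poll the status.
SELECT status FROM jobs WHERE id = @jobId;

-- Third (backend) server: mark the action as finished.
UPDATE jobs SET status = 'done' WHERE id = @jobId;
```

Under Read Committed, the frontend's SELECT only ever returns 'done' after the backend's UPDATE has committed, which is exactly the safety property needed here.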
Related
In the system, there are a couple of Oracle DB servers.
Let's say Oracle Db1 is the primary server holding one master table, and the rest of the Oracle DB servers connect to this primary server using a DB link.
Is there a way to cache the value fetched from the primary DB in the target DB, so that a DB link call is saved each time and the value can be fetched from the local Oracle DB cache?
What are the various caching mechanisms available, if any, along with their advantages and disadvantages?
Does this caching work seamlessly in an Active-Passive node setup, or are additional config settings/code needed?
When the primary DB value changes, the consumer DBs should be notified of the change so they can flush the data from the cache. Is any event-driven mechanism possible?
Environment details - Oracle 11g Database Release1, Unix.
Would appreciate inputs with sample code snippet on "HowTo". Thank you
One way to allow an application to continue working when a database link is temporarily offline is to use a Materialized View (MV).
This does not work like a cache, however, as the MV would need to be refreshed manually on a schedule (e.g. once every 5 minutes). If the data on the remote database changes, the local database will not see the new results until the MV is refreshed.
For example:
create materialized view tablename_mv
refresh complete on demand
as select * from remotetablename@dblinkname;
Then refresh it periodically with:
begin
dbms_mview.refresh('TABLENAME_MV');
end;
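To automate the periodic refresh mentioned above (e.g. once every 5 minutes), a DBMS_SCHEDULER job can be created on the local database; the job name is illustrative, and this is a sketch rather than tuned production code:

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_TABLENAME_MV',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_mview.refresh(''TABLENAME_MV''); end;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',  -- every 5 minutes
    enabled         => true);
end;
```

Note that a complete refresh re-fetches the whole table over the DB link each time; if the remote table is large, a fast (incremental) refresh with a materialized view log on the remote side is worth investigating.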
We have a very weird problem using EF 6 with MSSQL and MassTransit with RabbitMQ.
The scenario is as follows:
Client application inserts a row in database (via EF code first - implicit transaction only in DbContext SaveChanges)
Client application publishes Id of the row via MassTransit
Windows Service with consumers processes the message
Initially the row is not found; after a few retries, it appears
I always thought that after a commit the row is persisted and becomes accessible to other connections...
We have ALLOW_SNAPSHOT_ISOLATION on in the database.
What is the reason for this, and is there any way to ensure the row is accessible before publishing the Id to the MQ?
If you are dependent on another transaction being completed before your event handler can continue, you need to make your read serializable. Otherwise, transactions are isolated from each other and the results of the write transaction are not yet visible. Your write may also need to be serializable, depending on how the query is structured.
Yes, the consumers run that quickly.
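The suggestion above, expressed as T-SQL the consumer could issue before processing the message (table and parameter names are illustrative):

```sql
-- Consumer side: force a serializable read so the SELECT waits for the
-- writer's transaction to finish instead of returning an older snapshot.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;
SELECT Id, Status FROM Orders WHERE Id = @publishedId;
COMMIT TRANSACTION;
```

Under snapshot-based isolation the same SELECT can return the pre-transaction view of the data, which is consistent with the "row not found initially" symptom described in the question.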
I have a stored procedure that loops through a list of servers and queries their master DBs. When one of these servers is down, the stored procedure's query times out. How can I skip querying any inactive server, or catch the server timeout and continue the queries on the remaining active servers?

I do have a Server table with an IsActive column, but the value is not automatically changed when a server goes down. Currently, the list of servers to query is based on this IsActive column. Another solution could be to find a way to automatically change the IsActive column whenever a server goes down, but I wouldn't know how to go about that. Any thoughts?
EDIT: I'm doing this all in SQL Server 2008
Do not do this from inside the engine (linked servers).
Query from an outside process instead. Launch the queries in parallel, with a connection timeout set. Your 'query' will get all the information at once, and you will only have to wait once for all the servers that are down (roughly the timeout you've set) instead of once for each server that is down. I would recommend against 'testing' for connectivity, because attempting to connect is the test. If you were to iterate over the servers and call sp_testlinkedserver for each, you would risk waiting longer in the end, because the tests are still serialized and each takes the same amount of time as attempting to connect (that is what the test does: it attempts to connect).
A much better solution would be to use a reliable transport and asynchronous messaging instead, e.g. Service Broker. Since the programming model is asynchronous but the messaging is reliable, it doesn't matter if a server is down; you will get the result you want later, when it is finally back online.
I'd like to have some degree of fault tolerance / redundancy with my SQL Server Express database. I know that if I upgrade to a pricier version of SQL Server, I can get "Replication" built in. But I'm wondering if anyone has experience in managing replication on the client side. As in, from my application:
Every time I need to create, update or delete records from the database -- issue the statement to all n servers directly from the client side
Every time I need to read, I can do so from one representative server (other schemes seem possible here, too).
It seems like this logic could potentially be added directly to my Linq-To-SQL Data Context.
Any thoughts?
Every time I need to create, update or delete records from the database -- issue the statement to all n servers directly from the client side
Recipe for disaster.
Are you going to use a distributed transaction, or just let some of the servers fail? If you use a distributed transaction, what do you do when a server goes offline for a while?
This type of thing can only work if you do it at a server-side data-portal layer where application servers take in your requests and are aware of your database farm. At that point, you're better off just using a higher grade of SQL Server.
I have managed replication from an in-house client. My database model worked on an insert-only mode for all transactions, and insert-update for lookup data. Deletes were not allowed.
I had a central table that everything was related to. I added a field to this table for a date-time stamp which defaulted to NULL. I took data from this table and all related tables into a staging area, did BCP out, cleaned up staging tables on the receiver side, did a BCP IN to staging tables, performed data validation and then inserted the data.
For some basic fault tolerance, you can schedule a regular backup.
My company has recently put up a demo database on its remote servers to allow some of our partners to test the beta version of our software.
We noticed, as expected, some bottlenecks in parts of the program, in particular in the places where many queries are executed.
For example, I have to load a customer from the database, together with all the products associated with him.
Normally this operation would be done as
select somefields from customers where id = someId
store it in a business class
select someotherfields from products where customerid = previousid
store them in a collection and show/use them.
Imagine a much more complex business logic, which would gather 8 to 12 tables in the same way. With a local database, this would be instant. But by connecting to a remotely hosted SQL Server, this is extremely slow.
We found out that making a single query, like
SELECT Customer.Field1, Customer.Field2, Product.Field1, Product.Field2,
--// LIST THE OTHER 200 FIELDS FROM 10 DIFFERENT TABLES THE SAME WAY
FROM Customer
LEFT JOIN Product ON Product.IdCustomer = Customer.IdCustomer
--//LEFT JOIN OTHER 10 TABLES THE SAME WAY
WHERE Customer.IdCustomer = 10
amazingly improved the speed of the whole execution.
The extra complexity of this query compared to the individual selects is nothing next to the fact that it is a single access to the server.
I am talking about going from ~2000-3000 ms down to 80-120 ms.
Here comes the true problem (sorry about the long preface).
How can I save data to a remote SQL Server in an efficient way?
Imagine I have a window/form/dialog/frame/whatever from which I have to save to several tables after a given operation.
For example:
INSERT INTO Customer (Field) VALUES ('hello world') -- "IdCustomer" is an identity column
Fetch the new Id from the database
INSERT INTO SomeLinkedTable (IdCustomer, Field) VALUES (x, 'new value')
Fetch the new "IdSomeLinkedTable", use it to link some other table, etc.
This is a common pattern for multiple saves in our program. Even though we don't leave the user without a message that the operation is going to take a while, waiting ~10 s for a frequent multi-insert operation is way too much.
Maybe it's a simple server configuration problem, but I swear the firewalls on both sides (ours and the server's) are properly configured to allow SQL Server access.
Anyone encountered the same problem?
If you pulled all your business logic into the app layer, then your app layer must live close to the database.
The solution for making a fast and snappy remote application is to use fiefdoms-and-emissaries, so you remove the tight coupling to the database.
As an interim solution you can speed up any operation by removing round-trips to the database. If you have to save a master-detail form, you send the entire operation as one single T-SQL batch (INSERT header, SELECT the new identity, INSERT all details). Of course, in practice this means you have moved your business logic into T-SQL batches, and once the logic lives in the database it is better to have it in stored procedures anyway.
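A minimal sketch of such a single-batch master-detail save, using the questioner's Customer/SomeLinkedTable example (SCOPE_IDENTITY() is used here rather than @@IDENTITY so that identities generated by triggers are not picked up by mistake):

```sql
-- One round-trip: insert the header, capture its identity, insert details.
DECLARE @IdCustomer int;

BEGIN TRANSACTION;

INSERT INTO Customer (Field) VALUES ('hello world');
SET @IdCustomer = SCOPE_IDENTITY();

INSERT INTO SomeLinkedTable (IdCustomer, Field)
VALUES (@IdCustomer, 'new value');

COMMIT TRANSACTION;

SELECT @IdCustomer AS IdCustomer;  -- return the new key to the client
```

The whole sequence costs one network round-trip instead of four, which is where the ~10 s multi-insert time goes in the scenario described above.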
At the end of the day, you can't have your cake and eat it too. You can either make an autonomous, disconnected application that supports a remote, occasionally connected DB and demo that, or make a tightly coupled application that must live close to the database and demo that.