Using Vapor 3, is there an easy way to switch databases, while the server is running?
For example, a user logs in using the 'login' db. I then set
the db for that user in their cookie. Any subsequent requests from that user then use the db identified in the cookie (the 'user' in this scenario would really be a company).
All db's would be from the same db family (eg MySQL).
This would keep each company's data in its own db, and limit
the size of each db (and hopefully, overall, db operations would be faster).
Also, any need to restore a db would only impact one company, and backups would be simpler.
How to achieve this?
Would this be very inefficient?
Are there other better ways to achieve this?
As far as I understand you could create some different database identifiers like:
extension DatabaseIdentifier {
    static var db1: DatabaseIdentifier<MySQLDatabase> {
        return .init("db1")
    }

    static var db2: DatabaseIdentifier<MySQLDatabase> {
        return .init("db2")
    }
}
and then register them in configure.swift like this
let db1 = MySQLDatabase(config: MySQLDatabaseConfig(hostname: "localhost", username: "root", database: "db1"))
let db2 = MySQLDatabase(config: MySQLDatabaseConfig(hostname: "localhost", username: "root", database: "db2"))
var databaseConfig = DatabasesConfig()
databaseConfig.add(database: db1, as: .db1)
databaseConfig.add(database: db2, as: .db2)
services.register(databaseConfig)
after that don't forget to use .db1 and .db2 identifiers everywhere instead of default .mysql (for MySQL), e.g. in migrations
migrations.add(model: User.self, database: .db1)
with pooled connections
return req.requestPooledConnection(to: .db1).flatMap { conn in
    defer { try? req.releasePooledConnection(conn, to: .db1) }
    return User.query(on: conn).all()
}
and in transactions
return req.transaction(on: .db1) { conn in
    return User.query(on: conn).all()
}
Sorry if I haven't answered your questions. I understand that it'd be great if Fluent could support passing a database name for each query, but I haven't found that in it (or it's not obvious how to pass a database name on a query).
But btw, from my point of view, having separate databases for each client may give you a real headache with migrations... maybe it'd be better to store them all in one database but with partitioning? E.g. for PostgreSQL, as described here
Code Migration due to Performance Issues:
SQL Server LIKE condition (BEFORE)
SQL Server Full Text Search --> CONTAINS (BEFORE)
Elastic Search (CURRENTLY)
Achieved So Far:
We have a web page created in ASP.Net Core which has an Auto Complete drop-down of 2.5+ million companies indexed in Elastic Search: https://www.99corporates.com/
Due to performance issues we have successfully shifted our code from SQL Server Full Text Search to Elastic Search, using NEST v7.2.1 and Elasticsearch.Net v7.2.1 in our .Net code.
Still looking for a solution:
If the user does not select a company from the Auto Complete list and simply enters a few characters and clicks Go, then a list should be displayed - which we had done earlier by using SQL Server Full Text Search (CONTAINS).
Can we call the ASP.Net web service which we have created, using SQL CLR and code like SELECT * FROM dbo.Table WHERE Name IN (dbo.SQLWebRequest(''))?
[System.Web.Script.Services.ScriptMethod()]
[System.Web.Services.WebMethod]
public static List<string> SearchCompany(string prefixText, int count)
{
}
Is there any better or alternate option?
While that solution (i.e. the SQL-APIConsumer SQLCLR project) "works", it is not scalable. It also requires setting the database to TRUSTWORTHY ON (a security risk), and it loads a few assemblies as UNSAFE, such as Json.NET. That is risky if any of them use static variables for caching, expecting each caller to be isolated / have their own App Domain, because SQLCLR is a single, shared App Domain: static variables are shared across all callers, and multiple concurrent threads can cause race conditions. This is not to say that this is definitely happening, since I haven't seen the code, but if you haven't either reviewed the code or conducted testing with multiple concurrent threads to ensure it doesn't pose a problem, then it's definitely a gamble with regards to stability and predictable, expected behavior.
To a slight degree I am biased, given that I sell a SQLCLR library, SQL#, whose Full version contains a stored procedure that also does this but a) handles security properly via signatures (it does not enable TRUSTWORTHY), b) allows for handling scalability, c) does not require any UNSAFE assemblies, and d) handles more scenarios (better header handling, etc.). It doesn't handle any JSON; it just returns the web service response, and you can unpack that using OPENJSON or something else if you prefer. (Yes, there is a Free version of SQL#, but it does not contain INET_GetWebPages.)
HOWEVER, I don't think SQLCLR is a good fit for this scenario in the first place. In your first two versions of this project (using LIKE and then CONTAINS) it made sense to send the user input directly into the query. But now that you are using a web service to get a list of matching values from that user input, you are no longer confined to that approach. You can, and should, handle the web service / Elastic Search portion of this separately, in the app layer.
Rather than passing the user input into the query, only to have the query pause to get that list of 0 or more matching values, you should do the following:
Before executing any query, get the list of matching values directly in the app layer.
If no matching values are returned, you can skip the database call entirely as you already have your answer, and respond immediately to the user (much faster response time when no matches return)
If there are matches, then execute the search stored procedure, sending that list of matches as-is via a Table-Valued Parameter (TVP), which becomes a table variable in the stored procedure. Use that table variable to INNER JOIN against the table rather than doing an IN list, since IN lists do not scale well. Also, be sure to send the TVP values to SQL Server using the IEnumerable<SqlDataRecord> method, not the DataTable approach, as that merely wastes CPU / time and memory (a sketch of such a method follows after the pseudo-code below).
For example code on how to accomplish this correctly, please see my answer to Pass Dictionary to Stored Procedure T-SQL
In C#-style pseudo-code, this would be something along the lines of the following:
List<string> companies = SearchCompany(PrefixText, Count);

if (companies.Count == 0)
{
    Response.Write("Nope");
}
else
{
    using (SqlConnection db = new SqlConnection(connectionString))
    {
        using (SqlCommand batch = db.CreateCommand())
        {
            batch.CommandType = CommandType.StoredProcedure;
            batch.CommandText = "ProcName";

            SqlParameter tvp = new SqlParameter("ParamName", SqlDbType.Structured);
            tvp.Value = MethodThatYieldReturnsList(companies);
            batch.Parameters.Add(tvp);

            db.Open();

            using (SqlDataReader results = batch.ExecuteReader())
            {
                if (results.HasRows)
                {
                    // deal with results
                    Response.Write(results....);
                }
            }
        }
    }
}
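For reference, here is a minimal sketch of what a MethodThatYieldReturnsList-style helper could look like using the IEnumerable<SqlDataRecord> streaming approach. The column name, its size, and the class wrapper below are assumptions for illustration; they must match the user-defined table type that the stored procedure's TVP parameter is declared against.

using System.Collections.Generic;
using System.Data;
using Microsoft.SqlServer.Server;

internal static class TvpHelpers
{
    // Hypothetical helper: streams the matched company names as TVP rows.
    // The single "CompanyName" nvarchar(200) column is an assumed layout; it must
    // match the user-defined table type used by the stored procedure's parameter.
    public static IEnumerable<SqlDataRecord> MethodThatYieldReturnsList(List<string> companies)
    {
        SqlMetaData[] layout = { new SqlMetaData("CompanyName", SqlDbType.NVarChar, 200) };

        foreach (string company in companies)
        {
            SqlDataRecord record = new SqlDataRecord(layout);
            record.SetString(0, company);
            yield return record; // streamed row by row, no DataTable buffering
        }
    }
}

Depending on how the parameter is declared, you may also need to set tvp.TypeName to the name of that table type (e.g. "dbo.CompanyList", a hypothetical name here) so SQL Server knows how to map the rows.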
Done. Got the solution.
Used SQL CLR https://github.com/geral2/SQL-APIConsumer
exec [dbo].[APICaller_POST]
     @URL = 'https://www.-----/SearchCompany'
    ,@JsonBody = '{"searchText":"GOOG","count":10}'
Let me know if there are any other / better options to achieve this.
I hope this is the correct site to post this on. I wasn't sure if I should post here or on Server Fault, but seeing as this involves the website perspective, I thought perhaps this community might be a better fit, but I'm not 100% sure on that.
I have been banging my head against the wall for over half a year trying to figure out just what's going on here. I would be ecstatic if I could track down why AJAX calls are slow when going through our Site Server.
I have built a small web-app for the organization I work for and it is pretty much set up like this:
The site itself (WebMatrix IIS Express site) resides on the Site Server, but (with the help of C#) it uses SQL queries to query a (considerably large) database on our Database Server.
The problem is that when my site performs the AJAX requests (simple jQuery $.ajax() calls) that require it to query the database, each response takes over 5 seconds!
(Chrome Network Details):
(You'll see that some of the responses are really quick. Those responses contain either no data or a lot less data than the other responses. Maybe there's a data limit somewhere that's causing the Site Server to analyze them?)
Now here's the kicker:
The development machine - the local machine the site is developed on - cuts out the Site Server, runs the same code, and queries the same database, but the lag doesn't occur in this scenario. The responses there are in the low milliseconds, just what I would expect them to be.
Here's what the Chrome Network Details look like from the development machine:
(None even close to 1 second, let alone 5).
Some More Specifics
When launching this site straight from the Site Server, the lag persists.
WebMatrix uses SQL Server CE, while the SQL Server installed on the Database Server is SQL Server 2005 (I really don't think this makes a difference, as the query itself isn't anything special, plus it's the same code that's used in either scenario).
The Site Server has been tested to see if the RAM, Processor, and Bandwidth are maxing out, but the truth is that running this web-app doesn't even touch the Site Server's resources. The same has been found for the Database Server, as well.
The connection to the database is readonly (doubt this matters, just trying to give as much detail as possible).
We have indexed the database on the Database Server, but it helped virtually none at all.
Even though it is just an Intranet site, I am told that putting the site directly on the Database Server is not an option.
At the moment, the AJAX requests are not asynchronous, but it should still not take this long (especially considering that it only lags from the Site Server and not from the Development Machine, even though the code is 100% identical in both cases).
Probably doesn't make any difference, but I am in an ASP.NET WebPages using WebMatrix with C# environment.
The Operating System on the Site Server is: Windows Server 2008 R2
The Operating System on the Database Server is: Windows Server 2003
What could make this app work well from my local machine but not from the Site Server? I think the problem has to be the Site Server, given this, but none of its resources are maxing out or anything. It seems to only lag by about 5 seconds per request if the data being returned is over a certain amount (an amount that seems pretty low, honestly).
Truth is, I am hopelessly stuck here. We have tried everything over the past several months (we are having a similar problem with another Intranet site where the AJAX calls lag too; we have just lived with it for a while).
I don't know what else to even look into anymore.
In case anybody wants to see some code
jQuery (one of the AJAX requests; they are all just repeats of this with different parameters)
$.ajax({
    url: '/AJAX Pages/Get_Transactions?dep=1004',
    async: false,
    type: 'GET',
    dataType: "json",
    contentType: "application/json",
    success: function (trans) {
        for (var i = 0; i < trans.length; i++) {
            trans[i][0] = getTimeStamp(trans[i][0]);
        }
        jsonObj1004 = trans;
    },
    error: function (jqXHR, textStatus, error) {
        alert("Oops! It appears there has been an AJAX error. The Transaction chart may not work properly. Please try again, by reloading the page.\n\nError Status: " + textStatus + "\nError: " + error);
    }
});
C# Server Side Code (With Razor)
@{
    Layout = "";
    if (IsAjax)
    {
        var db = Database.Open("OkmulgeeCIC");
        Dictionary<string, double> dataList = new Dictionary<string, double>();
        var date = "";
        var previousDate = "";
        double amount = 0;
        string jsonString = "[";
        string queryDep = "SELECT ba_trans_entered AS transDate, (ba_trans_amount * -1) AS transAmount FROM BA_VTRANS WHERE ba_trans_year >= 2011 AND ba_trans_operator = 'E' AND ba_trans_system = 'AP' AND ba_trans_ledger LIKE @0 + '%' ORDER BY ba_trans_entered ASC";
        string dep = Request.QueryString["dep"];

        foreach (var row in db.Query(queryDep, dep))
        {
            date = row.transDate.ToString();
            date = date.Substring(0, date.IndexOf(" "));
            amount = Convert.ToDouble(row.transAmount);
            if (date == previousDate)
            {
                dataList[date] = dataList[date] + amount;
            }
            else
            {
                dataList.Add(date, amount);
            }
            previousDate = date;
        }

        foreach (var item in dataList)
        {
            jsonString += "[";
            jsonString += Json.Encode(item.Key) + ", ";
            jsonString += Json.Encode(item.Value) + "],";
        }
        //jsonString += Json.Encode(date);
        jsonString = jsonString.TrimEnd(',');
        jsonString += "]";
        @Html.Raw(jsonString)
    }
    else
    {
        Context.RedirectLocal("~/");
    }
}
ADDITIONAL INFO FROM SQL SERVER PROFILER
From Development Machine
From User Machine (lag)
Just looking over your code, two things jumped out at me:
1) You're not closing your db connection; this is very bad. Either wrap your connection object in a using block (preferred) or add a call to .Close() at the end of your data work:
using (var db = Database.Open())
{
    //do work
}
2) Doing string concatenation in a loop like that is a terrible thing to do and is very slow. Either use a StringBuilder, or, since you're outputting JSON anyway, just bundle your objects into a list or something and pass that to Json.Encode() (preferred).
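As a rough sketch of that second suggestion, reusing the dataList dictionary from the page above (Json.Encode is the same System.Web.Helpers JSON helper the page already calls), something like:

// Replaces the manual string concatenation: build the [[date, amount], ...]
// structure as objects and let the serializer handle brackets, commas and escaping.
var rows = new List<object[]>();
foreach (var item in dataList)
{
    rows.Add(new object[] { item.Key, item.Value });
}
string jsonString = Json.Encode(rows);

and then emit it with @Html.Raw(jsonString) as before.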
It seems to me this problem comes from your Site Server, but in any case you can try this:
1/ Publish your site to any Internet web server. If it is still slow, it's a problem in your code -> check your code.
If not, go to 2/ Check the configuration of your Site Server and Database Server. It might be a firewall, TCP/IP port, or NETBIOS/domain name issue between the two servers.
I do not know if this has any relevance to this problem, because I cannot see how you are calling your application. But I have experienced, multiple times, a roughly 5 second lag on IIS using C# when I use domain names to call other servers (this can also be localhost); the IP should be used instead.
It could be worth playing around with this, using an IP instead of a domain name.
I had something similar when working with AJAX on a JSF site:
jQuery loading taking excessive time
Since you already have it working from the dev machine, it might not be a problem in your case.
But to rule out any such scenario, can you develop the page without using jQuery?
I had a similar issue where I would call 20 sprocs in a for loop; they were not large sprocs, mind you, but sprocs that would each return 5 values.
It would work fine, but from time to time it would more or less lag out and would be unable to load any of the sprocs (or only a very small number) before timing out completely.
That is when I discovered Parameter Sniffing for SQL Server.
To fix it, I added local parameters inside the sproc, set equal to the incoming parameters from my C# code.
OLD CODE:
CREATE PROC [dbo].[sp_procname_proc]
(
    @param1 int,
    @param2 int,
    @param3 varchar(5),
    --..... etc .....
)
AS
BEGIN
    -- select from db
END
NEW CODE
CREATE PROC [dbo].[sp_procname_proc]
(
    @param1 int,
    @param2 int,
    @param3 varchar(5),
    --..... etc .....
)
AS
BEGIN
    DECLARE @localParam1 INT = @param1
    DECLARE @localParam2 INT = @param2
    DECLARE @localParam3 varchar(5) = @param3

    -- select from db using the new local parameters
END
I've been using OPA for a few days now and I'm really starting to like it. I'm in my first year of computer science, and we'll have a database class next year.
The little I know about databases comes from PHP: I have used MySQL with PHP and SQLite with C++. But this type of database is a bit different from what I've seen.
I have followed the guide about databases in OPA (http://doc.opalang.org/manual/Hello--database) but I have a question.
In the guide we declare a new database:
type user_status = {regular} or {premium} or {admin}
type user_id = int
type user = { user_id id, string name, int age, user_status status }

database users {
    user /all[{id}]
    /all[_]/status = { regular }
}
The guide shows how to read this database and make some queries against it with maps, but how do I add a new element? I was testing a bit:
/users/all[{id:0}]/name<-getusername;
but id should be auto-increment, from the little I know.
Thanks everyone for the help =D
I really want to get into OPA; the little I have made with it so far is really impressive!
mongoDB and auto-increment
With mongoDB (the default Opa database) there is no auto-increment (like in SQL), for scalability reasons.
But if you really need one, you can use a counter to create this feature yourself:
database users {
    user /all[{id}]
    int /fresh_key
    /all[_]/status = { regular }
}
And increment the key each time you use it: /users/fresh_key++
Random fresh key
You can also generate a random id, for example with something like Random.string(6)
Read this thread to learn more about this technique: http://lists.owasp.org/pipermail/opa/2012-April/001052.html
User defined unique key
But if you are dealing with users, maybe you already have a unique key: what about using "login" or "email" as the unique key?
You can also use Date.in_milliseconds(Date.now_gmt()) for a more unique id, maybe concatenated with the user id
Let's say I've got a SQL 2008 database table with lots of records associated with two different customers, Customer A and Customer B.
I would like to build a fat client application that fetches all of the records that are specific to either Customer A or Customer B based on the credentials of the requesting user, then stores the fetched records in a temporary local table.
Thinking I might use the MS Sync Framework to accomplish this, I started reading about row filtering when I came across this little chestnut:
Do not rely on filtering for security. The ability to filter data from the server based on a client or user ID is not a security feature. In other words, this approach cannot be used to prevent one client from reading data that belongs to another client. This type of filtering is useful only for partitioning data and reducing the amount of data that is brought down to the client database.
So, is this telling me that the MS Sync Framework is only a good option when you want to replicate an entire table between point A and point B?
Doesn't that seem to be an extremely limiting characteristic of the framework? Or am I just interpreting this statement incorrectly? Or is there some other way to use the framework to achieve my purposes?
Ideas anyone?
Thanks!
No, it is only a security warning.
We use filtering extensively in our semi-connected app.
Here is some code to get you started:
//helper
void PrepareFilter(string tablename, string filter)
{
    SyncAdapters.Remove(tablename);

    var ab = new SqlSyncAdapterBuilder(this.Connection as SqlConnection);
    ab.TableName = "dbo." + tablename;
    ab.ChangeTrackingType = ChangeTrackingType.SqlServerChangeTracking;
    ab.FilterClause = filter;

    var cpar = new SqlParameter("@filterid", SqlDbType.UniqueIdentifier);
    cpar.IsNullable = true;
    cpar.Value = DBNull.Value;
    ab.FilterParameters.Add(cpar);

    var nsa = ab.ToSyncAdapter();
    nsa.TableName = tablename;
    SyncAdapters.Add(nsa);
}
// usage
void SetupFooBar()
{
    var tablename = "FooBar";
    var filter = "FooId IN (SELECT BarId FROM dbo.GetAllFooBars(@filterid))";
    PrepareFilter(tablename, filter);
}
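One thing to note (this is from memory, so treat it as an assumption rather than gospel): at sync time the client still has to supply the value for that @filterid parameter, which in the offline-scenarios SyncAgent model is done through the agent's configuration, along these lines:

// Hypothetical client-side setup (Microsoft.Synchronization.Data): supply the value
// that dbo.GetAllFooBars(@filterid) will be invoked with during this sync session.
Guid filterId = new Guid("00000000-0000-0000-0000-000000000001"); // this client's id (placeholder)

SyncAgent agent = new SyncAgent();
// ... LocalProvider / RemoteProvider wired up as usual ...
agent.Configuration.SyncParameters.Add(new SyncParameter("@filterid", filterId));
agent.Synchronize();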
We have a couple of mirrored SQL Server databases.
My first problem - the key problem - is to get a notification when the db fails over. I don't strictly need to know because, erm, it's mirrored and so it (almost) all carries on working automagically, but it would be useful to be advised, and I'm currently getting failovers when I don't think I should be, so I want to know when they occur (without too much digging) to see if I can determine why.
I have services running that I could fairly easily use to monitor this - so the alternative question would be "How do I programmatically determine which is the principal and which is the mirror?" - preferably in a more intelligent fashion than just attempting to connect to each in turn (which would mostly work, but...).
Thanks, Murph
Addendum:
One of the answers asks why I don't need to know when it fails over. The answer is that we're developing using ADO.NET, which has automatic failover support: all you have to do is add Failover Partner=MIRRORSERVER (where MIRRORSERVER is the name of your mirror server instance) to your connection string and your code will fail over transparently - you may get some errors depending on what connections are active, but in our case very few.
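For illustration, a minimal sketch of such a connection string built with SqlConnectionStringBuilder (the server and database names below are placeholders):

using System.Data.SqlClient;

SqlConnectionStringBuilder csb = new SqlConnectionStringBuilder();
csb.DataSource = "PRINCIPALSERVER";    // principal instance (placeholder name)
csb.FailoverPartner = "MIRRORSERVER";  // mirror instance (placeholder name)
csb.InitialCatalog = "MyDatabase";     // placeholder database name
csb.IntegratedSecurity = true;

using (SqlConnection conn = new SqlConnection(csb.ConnectionString))
{
    conn.Open(); // ADO.NET redirects to the mirror if the principal is unavailable
}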
Right,
The two answers and a little thought got me to something approaching an answer.
First a little more clarification:
The app is written in C# (2.0+) and uses ADO.NET to talk to SQL Server 2005.
The mirror setup is two W2k3 servers hosting the principal and the mirror, plus a third server hosting an Express instance as a monitor. The nice thing about this is that a failover is all but transparent to the app using the database; it will throw an error for some connections, but fundamentally everything carries on nicely. Yes, we're getting the odd false positive, but the whole point is to have the system carry on working with the least amount of fuss, and mirroring does deliver this very nicely.
Further, the issue is not with serious server failure - that's usually a bit more obvious - but with failovers for other reasons (cf. the false positives above), as we do have a couple of things that can't, for various reasons, fail over, and in any case we want to see if we can identify the circumstances in which we get false positives.
So, given the above, simply checking the status of the boxes is not quite enough and chasing through the event log is probably overly complex - the answer is, as it turns out, fairly simple: sp_helpserver
The first column returned by sp_helpserver is the server name. If you run the request at regular intervals, saving the previous server name and doing a comparison each time, you'll be able to identify when a change has taken place and then take the appropriate action.
The following is a console app that demonstrates the principle - it needs some work (e.g. the connection ought to be non-pooled and created fresh each time), but it's enough for now (so I'd then accept this as "the" answer). Parameters are Principal, Mirror, Database.
using System;
using System.Data.SqlClient;

namespace FailoverMonitorConcept
{
    class Program
    {
        static void Main(string[] args)
        {
            string server = args[0];
            string failover = args[1];
            string database = args[2];

            string connStr = string.Format("Integrated Security=SSPI;Persist Security Info=True;Data Source={0};Failover Partner={1};Packet Size=4096;Initial Catalog={2}", server, failover, database);
            string sql = "EXEC sp_helpserver";

            SqlConnection dc = new SqlConnection(connStr);
            SqlCommand cmd = new SqlCommand(sql, dc);

            Console.WriteLine("Connection string: " + connStr);
            Console.WriteLine("Press any key to test, press q to quit");

            string priorServerName = "";
            char key = ' ';

            while (key.ToString().ToLower() != "q")
            {
                dc.Open();
                try
                {
                    string serverName = cmd.ExecuteScalar() as string;
                    Console.WriteLine(DateTime.Now.ToLongTimeString() + " - Server name: " + serverName);
                    if (priorServerName == "")
                    {
                        priorServerName = serverName;
                    }
                    else if (priorServerName != serverName)
                    {
                        Console.WriteLine("***** SERVER CHANGED *****");
                        Console.WriteLine("New server: " + serverName);
                        priorServerName = serverName;
                    }
                }
                catch (System.Data.SqlClient.SqlException ex)
                {
                    Console.WriteLine("Error: " + ex.ToString());
                }
                finally
                {
                    dc.Close();
                }
                key = Console.ReadKey(true).KeyChar;
            }
            Console.WriteLine("Finis!");
        }
    }
}
I wouldn't have arrived here without a) asking the question and then b) getting the responses, which made me actually think.
Murph
If the failover logic is in your application, you could write a status screen that shows which box you're connected to, by writing to a variable when the first connection attempt fails.
I think your best bet would be a ping daemon/cron job that checks the status of each box periodically and sends an email if one doesn't respond.
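A rough sketch of what such a check could look like in C# (the server names, mail host, and addresses below are placeholders; this is illustrative, not a drop-in monitor), run on a schedule via Task Scheduler or cron:

using System.Net.Mail;
using System.Net.NetworkInformation;

class PingMonitor
{
    static void Main()
    {
        string[] servers = { "PRINCIPALSERVER", "MIRRORSERVER" }; // placeholder names

        foreach (string server in servers)
        {
            PingReply reply = new Ping().Send(server, 2000); // 2 second timeout

            if (reply.Status != IPStatus.Success)
            {
                // placeholder mail host and addresses
                new SmtpClient("mailhost").Send(
                    "monitor@example.com",
                    "dba@example.com",
                    "Server not responding: " + server,
                    "Ping status: " + reply.Status);
            }
        }
    }
}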
Use something like Host Monitor http://www.ks-soft.net/hostmon.eng/ to monitor the Event Log for messages related to the failover event, which can send you an alert via email/SMS.
I'm curious though how you wouldn't need to know that the failover happened, because don't you have to then update the datasources in your applications to point to the new server that you failed over to? Mirroring takes place on different hosts (the primary and the mirror), unlike clustering which has multiple nodes that appear to be a single device from the outside.
Also, are you using a witness server in order to automatically fail over from the primary to the mirror? This is the only way I know of to make it happen automatically, and in my experience, you get a lot of false-positives where network hiccups can fool the mirror and witness into thinking the primary is down when in fact it is not.