Objective
I am creating a proof-of-concept multi-tenant system that will eventually run on RDS, based on the following guide:
aws rds multi tenant data isolation
Summary
According to the AWS article, there are two ways to achieve this:
Creating a database user per tenant
Using a runtime parameter on the connection
Approach
I would like to explore option 2 fully before making any decisions. I can see that it is possible to pass runtime configuration parameters in the connection string, based on this merge request, and I assume this maps to the Options parameter from the connection string parameters doc.
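To make this concrete, here is a minimal sketch, assuming PostgreSQL on RDS, a version of Npgsql that supports the Options keyword, and a hypothetical custom setting named app.current_tenant that row-level security policies would read; all names and values are illustrative:

    // Minimal sketch: passing a per-tenant runtime parameter through the
    // Options connection string keyword. The setting name app.current_tenant
    // and all connection details are assumptions, not from the AWS guide.
    using System;
    using System.Threading.Tasks;
    using Npgsql;

    class TenantConnectionDemo
    {
        static async Task Main()
        {
            // Options is forwarded to PostgreSQL as startup options, so
            // "-c app.current_tenant=42" sets the parameter for this session.
            var connString =
                "Host=my-rds-endpoint;Username=app_user;Password=secret;" +
                "Database=app;Options=-c app.current_tenant=42";

            using var conn = new NpgsqlConnection(connString);
            await conn.OpenAsync();

            // RLS policies can filter on current_setting('app.current_tenant').
            using var cmd = new NpgsqlCommand(
                "SELECT current_setting('app.current_tenant')", conn);
            Console.WriteLine(await cmd.ExecuteScalarAsync()); // prints 42
        }
    }

As far as I know, Npgsql keys its connection pools by the exact connection string, so a distinct Options value per tenant would effectively give each tenant its own pool.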
Questions
This leads me to the following questions:
Will I be able to achieve this using the Options parameter?
Will connection pooling work? And if so, will it have the effect of pooling connections for a single tenant?
Related
We have a web API developed in .NET Core 3.1 that talks to an Azure SQL database and runs as an Azure web app. The database is a single database for a multi-tenant app and is protected by row-level security, which requires the session context to be set before executing any SQL statement. The session context value is the primary key of the tenant table, which is an integer.
I've learned that I can use EF Core interceptors to set the session context (a sketch follows below). However, for security reasons we cannot send or receive the tenant id as a URL parameter, so we are using another identifier, which looks like an encrypted string.
Given that we have a tenant identifier, what is the most efficient way to set the session context to the tenant id? The API is stateless, so I can't use session state, and the controller doesn't require authentication, so I don't have a logged-in user either. The last and probably ugliest option would be to hard-code and maintain a list server-side so that I don't have to make a database trip every time.
Considering the comments from @JeroenMostert, I've decided to do a database trip, as it won't be expensive given that the data is not going back to the client. After gaining some more knowledge and understanding, I'll certainly consider using a memory-optimized table.
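For reference, a minimal sketch of such an interceptor, assuming EF Core 3.1; ITenantProvider is a hypothetical stand-in for whatever resolves the encrypted identifier to the integer tenant id:

    using System.Data.Common;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore.Diagnostics;

    public class SessionContextInterceptor : DbConnectionInterceptor
    {
        private readonly ITenantProvider _tenants; // hypothetical tenant-resolution service

        public SessionContextInterceptor(ITenantProvider tenants) => _tenants = tenants;

        public override async Task ConnectionOpenedAsync(
            DbConnection connection,
            ConnectionEndEventData eventData,
            CancellationToken cancellationToken = default)
        {
            // Set SESSION_CONTEXT so the RLS predicate can read
            // SESSION_CONTEXT(N'TenantId') on every subsequent statement.
            using var cmd = connection.CreateCommand();
            cmd.CommandText =
                "EXEC sp_set_session_context @key = N'TenantId', @value = @tenantId";
            var p = cmd.CreateParameter();
            p.ParameterName = "@tenantId";
            p.Value = _tenants.GetTenantId(); // the integer primary key
            cmd.Parameters.Add(p);
            await cmd.ExecuteNonQueryAsync(cancellationToken);
        }
    }

The interceptor would be registered with optionsBuilder.AddInterceptors(...) when configuring the DbContext. Because ConnectionOpenedAsync fires each time a connection is taken from the pool, the session context is re-set on every logical open, which keeps pooled connections from carrying one tenant's context over to another.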
So I have an Azure SQL Database instance that I need to run a nightly data import on. I was going to schedule a stored procedure to make a basic GET request against an API endpoint, but it seems the OLE Automation objects aren't present in the Azure version of SQL Server. Is there any other way to make an API call from Azure SQL Database, or do I need to put something in place outside of the database to accomplish this?
There are several options. I do not know whether a PowerShell job, as suggested in the first comment on your question, can execute HTTP requests, but I do know of at least a couple of options:
Azure Data Factory allows you to create scheduled pipelines to copy/transform data from a variety of sources (like HTTP endpoints) to a variety of destinations (like Azure SQL databases). This involves little or no scripting.
Azure Logic Apps allows you to do the same:
With Azure Logic Apps, you can integrate (cloud) data into (on-premises) data storage. For instance, a logic app can store HTTP request data in a SQL Server database.
Logic apps can be triggered on a schedule as well and involve little or no scripting.
You could also write an Azure Function that executes on a schedule, calls the HTTP endpoint, and writes the result to the database (see the sketch after this list). Multiple languages are supported for writing functions, such as C# and PowerShell.
All of these options also allow you to force an execution outside the schedule.
In my opinion, Azure Data Factory (no coding) or an Azure Function (code only) is the best option given the need to parse a lot of JSON data. But do mind that Azure Functions on a Consumption Plan have a maximum allowed execution time of 10 minutes per invocation.
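To illustrate the function route, here is a minimal sketch assuming the in-process C# model with a timer trigger; the endpoint URL, staging table, and connection string setting name are placeholders:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Data.SqlClient;
    using Microsoft.Extensions.Logging;

    public static class NightlyImport
    {
        private static readonly HttpClient Http = new HttpClient();

        [FunctionName("NightlyImport")]
        public static async Task Run(
            [TimerTrigger("0 0 2 * * *")] TimerInfo timer, // 02:00 every night
            ILogger log)
        {
            // Call the API endpoint (placeholder URL).
            string json = await Http.GetStringAsync("https://api.example.com/data");

            // Write the raw payload to a staging table for later processing.
            var connString = Environment.GetEnvironmentVariable("SqlConnectionString");
            using var conn = new SqlConnection(connString);
            await conn.OpenAsync();
            using var cmd = new SqlCommand(
                "INSERT INTO dbo.ImportStaging (Payload, ImportedAt) " +
                "VALUES (@payload, SYSUTCDATETIME())", conn);
            cmd.Parameters.AddWithValue("@payload", json);
            await cmd.ExecuteNonQueryAsync();

            log.LogInformation("Imported {Length} characters of JSON", json.Length);
        }
    }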
I am looking for information about databases per instance in JanusGraph and couldn't find proper documentation for it. The main concern is the security boundary between the databases in one instance. Let's say there are 2 databases in one instance of JanusGraph. Is it possible to configure security such that user A only has access to Database1 and user B only has access to Database2? If so, how is this security handled?
In JanusGraph as it exists today, a "database" would be a separate Graph, each of which must be defined and instantiated at server start. JanusGraph adheres to the TinkerPop specifications, so it is run with the Gremlin server, which comes with its own authenticators: http://tinkerpop.apache.org/docs/current/reference/#_security_and_execution.
The out-of-the-box authenticators only authenticate for server-level access. However, with this PR being merged into TinkerPop: https://github.com/apache/tinkerpop/pull/583, you can write a custom authentication scheme that takes graph-level access into account.
Also note that this PR: https://github.com/JanusGraph/janusgraph/pull/392 is currently open in the JanusGraph repo; it will allow for the instantiation/creation of graphs, i.e. "databases", dynamically (after server start). Take a look at the GraphManager class there if you end up implementing a custom authentication scheme that takes graph-level access into account, and if you do, please commit your changes upstream into OSS.
Is it possible to create multiple backplanes with SignalR?
We're working on an ASP.NET Web API SaaS application and are looking to implement SignalR for "real-time" web functionality. Since we'll be hosting the application in a web farm, client-connection state will be managed through a SQL Server backplane.
The application is multi-tenant, but the database is not: the application determines which connection string to use, and all client requests talk to their appropriate database. Currently, the code for configuring the SignalR SQL Server backplane within Application_Start() is:
GlobalHost.DependencyResolver.UseSqlServer(connectionString);
Does anyone know if it's possible to create multiple backplanes with SignalR, basically looping through each connection string and calling the above code?
Thanks for checking this out!
If you need to eliminate the single point of failure, I suggest setting up a failover server in case the primary SQL Server machine goes down. Reference: http://technet.microsoft.com/en-us/library/hh231721.aspx
If you simply need more performance than a single SQL Server instance can provide, I suggest using Redis as the backplane.
In either case, I doubt attempting to use "multiple backplanes" will be helpful, unless you intend to map certain hubs to certain backplanes for load distribution (sketched below).
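If you do want to try that, here is a hedged sketch, assuming ASP.NET SignalR 2.x hosted with OWIN: each mapped endpoint gets its own dependency resolver, and therefore its own SQL Server backplane (paths and connection strings are placeholders):

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Each endpoint uses its own resolver, so hubs reached through
            // /signalr-a and /signalr-b scale out over different databases.
            var resolverA = new DefaultDependencyResolver();
            resolverA.UseSqlServer("Server=.;Database=BackplaneA;Integrated Security=True");
            app.MapSignalR("/signalr-a", new HubConfiguration { Resolver = resolverA });

            var resolverB = new DefaultDependencyResolver();
            resolverB.UseSqlServer("Server=.;Database=BackplaneB;Integrated Security=True");
            app.MapSignalR("/signalr-b", new HubConfiguration { Resolver = resolverB });
        }
    }

Note that both endpoints still expose the same hub classes; clients would have to connect to the endpoint matching their backplane, so this distributes load rather than isolating tenants.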
I'm just starting to spec out a project that will be a fairly advanced DB with a fairly simple MVC front end, accessible over the internet. I'm unsure of how to handle users; I can see two options:
Option 1 - Use SQL Server's built-in logins/users to handle user authentication, and use the built-in user access controls to determine who can access what (almost everything will be selects or stored procs).
Option 2 - Use my own list of users (with hashed, salted passwords) and my own access control list (possibly using the proc name and OBJECT_NAME(@@PROCID) passed to another proc), and do all reads/writes of the DB through an application profile with access to all procedures/views.
I've searched but can't find any reasons I should pick one over the other. Can anyone share a link or provide reasons why one is better than the other (or if there are glaring issues with both)?
If you need any more details please let me know.
If you use SQL authentication (Option 1, a login for each user) via a connection pool, you're going to get comparatively poor performance. You'll get a pool for each user, but that could still mean hundreds or thousands of connections. There are ways around that, but the solutions begin to make Option 1 work like Option 2, so you might as well use Option 2.
I've seen a few solutions that create SQL accounts to do the initial authentication for each user (via a single application-based SQL account), with the rest working like Option 2 (again via a single application-based SQL account). Those cases used lots of SQL accounts so that each user could own a set of views and imitate schemas. As schemas and aliases have been available since SQL Server 2005, it wouldn't be worthwhile to pick Option 2 anymore for that reason.
Finally, don't use SQL Server application roles either - they're bad security practice. Brian Kelly covers a few of the issues (pros & cons) in an article about them (http://www.sqlservercentral.com/articles/Security/sqlserversecurityprosandconsofapplicationroles/1116/).