Is there a good article/document describing how IdentityServer4 data tables and columns work together? - identityserver4

I have successfully set up IdentityServer by following the quickstart samples. But what I am still unclear about is how the IdentityServer data tables and columns work. I have about 30 tables created along the way (some of them are ASP.NET Core Identity related tables) and they work fine. But the IdentityServer online docs don't seem to contain a good, detailed description of how they work together. Can someone point me to a good resource explaining the internal workings?

Most of the columns and tables map directly 1-1 with the configuration objects in IdentityServer, including the Client and Resource types. However, as some of those types are fairly complex, they are normalized across several tables. You can also look at the source of the IdentityServer storage library to see how the database is mapped to those objects.
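To make that a bit more concrete, here is a minimal sketch of how the quickstarts wire up the EF-based stores; the connection string and migrations assembly are placeholders, and exact table names can vary a little between versions. The configuration store covers the Clients, ClientRedirectUris, ClientScopes, ClientSecrets, IdentityResources and ApiResources tables (plus their child tables), while the operational store covers PersistedGrants.

```csharp
// Minimal sketch (assumptions: SQL Server, the IdentityServer4.EntityFramework
// package, and a Startup class living in the migrations assembly).
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        const string connectionString =
            "Server=.;Database=IdentityServer;Trusted_Connection=True;";
        var migrationsAssembly = typeof(Startup).Assembly.GetName().Name;

        services.AddIdentityServer()
            // Configuration data: Client and Resource objects. Complex properties
            // (redirect URIs, scopes, secrets, claims) are normalized into child
            // tables such as ClientRedirectUris, ClientScopes and ClientSecrets,
            // keyed back to the Clients table.
            .AddConfigurationStore(options =>
            {
                options.ConfigureDbContext = builder =>
                    builder.UseSqlServer(connectionString,
                        sql => sql.MigrationsAssembly(migrationsAssembly));
            })
            // Operational data: refresh tokens, authorization codes and consent,
            // serialized into the PersistedGrants table.
            .AddOperationalStore(options =>
            {
                options.ConfigureDbContext = builder =>
                    builder.UseSqlServer(connectionString,
                        sql => sql.MigrationsAssembly(migrationsAssembly));
            });
    }
}
```

The two DbContexts behind those calls, ConfigurationDbContext and PersistedGrantDbContext, are where the object-to-table mapping lives, so reading their entity configurations in the storage library answers most "which column feeds which property" questions.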

Related

Managing multiple datasources in CakePHP

I'm planning to develop a web application in CakePHP that shows information in graphics and cards. I chose CakePHP because the information we need to show is very structured, so the model approach makes it easier to manage the data; also, I have some experience with MVC from ASP.NET and I like how simple the routing is to use.
So, my problem is that the multiple organizations that could use the app would each have their own database, with a different schema than the one we need. I can't just set their connection string in the app.php file because their database won't match my model.
And the organization's datasource might not fit my model for a lot of reasons: the tables don't have the same names, the schema is different, the fields of my entity are spread across separate tables, and they may even keep the information in different databases or a different DBMS!
I want to know if there's a way to build an interface that achieves this, in such a way that the CakePHP Model/Entity can use the data regardless of its source. Do you have any suggestions on how to do that? Does CakePHP have an option that makes this possible? Should I use PHP with some kind of interchange format like JSON or XML? Maybe MySQL has a utility to transform data from different sources into a view, and I can make CakePHP use the view instead of the table?
If you have an answer, please be as detailed as you can.
These other options are possible if the interface approach turns out to be impossible:
- Use another framework that can handle this more easily and has the features I mentioned above.
- Make the organization change their database so it matches my model (I don't like this one, and probably they won't do it).
- Transfer the data into the application's own database.
Additional information:
The data shown in the graphics is about university students. Each university has its own database, with its own structure and its own applications using that db, which is why it isn't easy to change the structure. I just want to make it as easy as possible for any school to configure its own db.
EDIT:
The version is CakePHP 3.2.
An important note is that it doesn't need all the CRUD operations, only "reading". Hope that makes the solution easier.
I don't think your "question" can be answered properly; it doesn't contain enough information or detail. I guess there is something that will stay the same for all organizations, but their data and business logic will be different. Still, I'll give it a try.
And the organization's datasource might not fit my model for a lot of reasons: the tables don't have the same names, the schema is different, the fields of my entity are spread across separate tables, and they may even keep the information in different databases or a different DBMS!
The model is a whole layer, so if you have completely different table schemas, your business logic, which is part of that layer, will be different as well. Simply changing the database connection alone won't help you then. The data needs to be shown in the views as well, so the views will have to differ too.
So what you could try, and what your second image shows, is to implement a layer that contains interfaces and base classes. Then create a Cake plugin for each of the organizations that uses those interfaces and base classes, and write some code that conditionally loads the right plugin depending on whatever criterion (domain or sub-domain, I guess) is checked. You will have to define the intermediate interfaces in a way that lets you access any organization the same way at the API level.
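To make that intermediate layer a little more concrete, here is a rough sketch of its shape. It is illustrative only: every name is hypothetical, and in the real application these would be PHP interfaces and classes living inside each organization's CakePHP plugin rather than the C# shown here.

```csharp
// Illustrative only: the "interface layer + one implementation per organization"
// idea. All names are hypothetical stand-ins.
using System.Collections.Generic;

public record Student(string Id, string Name, string Program);

// The intermediate interface every organization plugin must implement,
// so the dashboards can query any school the same way.
public interface IStudentSource
{
    IEnumerable<Student> FindStudents(string programFilter);
}

// One implementation per organization, each hiding its own schema/DBMS quirks.
public class UniversityAStudentSource : IStudentSource
{
    public IEnumerable<Student> FindStudents(string programFilter)
    {
        // ...query University A's own tables/views here...
        yield break;
    }
}

public static class StudentSourceFactory
{
    // Pick the implementation by whatever criterion is checked (e.g. sub-domain).
    public static IStudentSource ForOrganization(string subDomain) =>
        subDomain switch
        {
            "university-a" => new UniversityAStudentSource(),
            _ => throw new KeyNotFoundException($"No plugin for '{subDomain}'")
        };
}
```

The important property is that the reporting code only ever talks to the interface; each plugin is free to hit its own views, databases, or even a different DBMS underneath.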
And one technical thing: you can define the connection of a table object in the model layer. Any entity knows about its origin, but you should not implement business logic inside an entity, nor change the connection through an entity.
EDIT: The version is CakePHP 3.2. An important note is that it doesn't need all the CRUD operations, only "reading". Hope that makes the solution easier.
If that's true, either use the CRUD plugin (yes, you can use only the R part of it) or write some code, like a class that describes the organization and is used to create your table objects and views on the fly.
Overall it's a pretty interesting problem, but IMHO too broad for a simple answer or solution that can be given here. I think this would require some discussion and analysis to find the best solution. If you're interested in consulting you can contact me; check my profile.
I found a way without coding any interface. In fact, it uses features already included in the DBMS and CakePHP.
In the case where the schema doesn't fit the model, you can create views that match the table names and column names the model expects. By definition, a view works like a table, so CakePHP queries the same table name and columns and the DBMS does the work.
I made a test with views in MySQL and it worked fine. You can also combine data from different tables into one view.
MySQL views
SQL Server views.
If the organization uses another DBMS, you just change the datasource in app.php and create the views if necessary.
If the data is distributed across different DBMSs, CakePHP lets you set a datasource for each table; you just add it to app.php and reference it in the table class when required.
Finally, if you only need the "reading" option, create a database user with access limited to the views and only SELECT privileges.
Using: CakePHP 3.2, SQL Server 2016, MySQL 5.7

Can I use Entity Framework with a non-relational SQL Server database?

I am working on a project with someone who has developed a desktop app for people who run charity voucher companies. They have customers who have accounts, who put money into their accounts, and who write charity vouchers (a bit like cheques) to charities.
He wants me to write a web site where both charities and customers can log in and see details of their accounts, vouchers issued, etc.
As most of the data will be coming from his app to my web site, we agreed to use his primary key IDs in my database, so it will be easy to match up the data.
We're quite well into it, and I've discovered that he is a staunch opponent of relational databases. His database doesn't have any foreign key constraints at all, just IDs in tables. He does individual queries on each table to see if the related data is there.
I want to use Entity Framework, but am not sure if I can, as I can't be sure that the data he sends me will be complete. For example, he might send me details of a voucher, which will have a customer ID and a charity ID, but the customer may not have been sent, so the customer ID on the voucher won't exist in the customers table.
Any ideas what I can do? I can't have foreign key relationships between my tables, as they will throw errors whenever incomplete data comes across, but if I don't have any relationships, then I've lost much of the benefit of using EF.
My only thought so far is to leave the tables unrelated and add partial classes for the entities, with properties that look like navigation properties but that check whether the "foreign" data is there and, if so, return it.
This might work, but it seems like a lot of effort. Does anyone have any better suggestions for how to handle this situation?
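For what it's worth, the partial-class idea from the question can be sketched roughly like this. Everything below (Voucher, Customer, Charity, VoucherContext) is a hypothetical stand-in for the real entities, and it assumes EF Core; the same partial-class/lookup pattern works in EF6.

```csharp
// Rough sketch of the "manual navigation property" idea. No relationships are
// configured, so EF neither enforces nor expects foreign keys, and missing
// related rows simply come back as null.
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Customer { public int Id { get; set; } public string Name { get; set; } }
public class Charity  { public int Id { get; set; } public string Name { get; set; } }

public partial class Voucher
{
    public int Id { get; set; }
    public int CustomerId { get; set; }   // plain column, not a foreign key
    public int CharityId { get; set; }    // plain column, not a foreign key
    public decimal Amount { get; set; }
}

public class VoucherContext : DbContext
{
    public VoucherContext(DbContextOptions<VoucherContext> options) : base(options) { }

    public DbSet<Customer> Customers { get; set; }
    public DbSet<Charity> Charities { get; set; }
    public DbSet<Voucher> Vouchers { get; set; }
}

// "Navigation-like" lookups live in the partial class instead of real
// navigation properties, so incomplete data never throws.
public partial class Voucher
{
    public Customer FindCustomer(VoucherContext db) =>
        db.Customers.SingleOrDefault(c => c.Id == CustomerId);

    public Charity FindCharity(VoucherContext db) =>
        db.Charities.SingleOrDefault(c => c.Id == CharityId);
}
```

The trade-off is the one already identified in the question: you keep change tracking, LINQ and migrations, but you give up Include(), eager loading and referential integrity, and every lookup has to tolerate a null.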
This is a very late answer, but since I stumbled across this question, it might be useful for others.
Microsoft recently announced that EF Core (within ASP.NET Core 2.1) will have a provider for Cosmos DB:
Cosmos DB provider preview: We have been developing an EF Core provider for the DocumentDB API in Cosmos DB. This is the first document database provider we have produced, and the learnings from this exercise are going to inform improvements in the design of the subsequent release after 2.1. The current plan is to publish an early preview of the Cosmos DB provider in the 2.1 timeframe.
NOTE: A short video containing major features to be delivered with ASP.NET Core 2.1 can be seen here.

Alternative to Apatar for scripted data migration

I'm looking for the fastest-to-success alternative solution for related data migration between Salesforce environments with some specific technical requirements. We were using Apatar, which worked fine in testing, but late in the game it has started throwing the dreaded socket "connection reset" errors and we have not been able to resolve it - it has some other issues that are leading me to ditch it.
I need to move a modest amount of data (about 10k rows total) between several sandboxes and ultimately to a production environment. The data is spread across eight custom objects. There is a four-level-deep master-detail relationship, which obviously must be preserved.
The target environment tables are 100% empty.
The trickiest object has a master-detail and two lookup fields.
Ideally, the data from one table near the top of the hierarchy would be filtered by a simple WHERE clause, with children that don't fall under matching rows left out of the migration, but I'll settle for a solution that migrates all the data wholesale.
My fallback in this situation is going to be good old Data Loader, but it's not ideal because our schema is locked down and does not contain external ID fields, so scripting a solution that preserves all the M-D and lookups will take a while and be more error prone than I'd like.
It's been a long time since I've done a survey of the tools available, and I don't have much time to do one now, so I'm appealing to the crowd. I need an app that will be simple (able to configure and test very quickly), repeatable, and rock-solid.
I've always pictured an SFDC data migration app that you can just check off eight checkboxes from a source environment, point it to a destination environment, and it just works, preserving all your relationships. That would be perfect. Free would be nice too. Does such a shiny thing exist?
Sesame Relational Junction seems to best match what you're looking for. I haven't used it, though, so I can't comment on its effectiveness for what you're attempting.
The other route you may want to look into is using the Bulk API or using the Data Loader CLI with Task Scheduling.
You may find this information (below), from an answer to a different question, helpful.
Here is a list of integration services (other than Apatar):
Informatica Cloud
Cast Iron
SnapLogic
Boomi
JitterBit
Sesame Relational Junction
Information on other tools to integrate Salesforce with other databases is available here:
Salesforce Web Services API
Salesforce Bulk API
Relational Junction has a unique feature set that supports cloning, splitting, and merging of Salesforce orgs, and will keep the relationships intact in a one-pass load. It works like this:
1. Download the source org into an empty database schema (any relational DBMS).
2. Download the target org into a second empty database schema.
3. Run some scripts to condition the data; this varies by object. Sesame provides guidance and sample scripts, but essentially you have to set a control field to tell Relational Junction whether to create or update in Salesforce. This is also where you may need to replace source IDs with target IDs if some objects were pre-populated during sandbox creation.
4. Replicate the second database to the target org.
Relational Junction handles the socket disconnects, timeouts, and whatever havoc happens during the unload/reload process gracefully and without creating duplicates.
This process was developed for a proof of concept at a large Silicon Valley network vendor in 2007, which became a customer. The entire down and up of 15 GB of data took 46 hours, plus about 2 days of preparation.

openam with database

I have a requirement that OpenAM should access users and groups from a MySQL database. In the OpenAM GUI, under New Data Store -> Database Repository (Early Access), I could see some configuration related to this. But I am not sure how to map fields from two or three MySQL tables (users and groups) to the corresponding OpenAM attributes. Also, what are the mandatory and optional fields for keeping user and group information? Could somebody please point me to good documentation on this?
Also I have a couple of basic queries.
Is it possible to keep policy information in a database?
Is it possible to create users and groups and assign policy information from a separately deployed web application (through JSP / servlets)? Do the OpenSSO APIs allow this?
Thanks.
The Database data store is very primitive (hence it's Early Access), and it is very likely that it won't fit your very specific needs. In that case you can always implement your own data store (which can be tricky, I know); here's a guide for it:
http://docs.forgerock.org/en/openam/10.0.0/dev-guide/index.html#chap-identity-repo-spi
Per your questions:
1) The policies are stored in the configuration store, and I'm not aware of any way to store them in a database.
2) Yes, there are some APIs that let you make changes to the identities stored in the data store, but OpenSSO/OpenAM is not an identity management tool, so it might not be a perfect fit for doing that.
This is pretty simple but sparse documentation makes it a bit tough to do. Check out: http://rahul-ghose.blogspot.com/2014/05/openam-database-connectivity-with-mysql.html
Basically you need to create a MySQL database and two tables for this. Include all the variables you see in the User Configuration dropdown in the MySQL user table; it is okay to remove a few attributes.

How to design a DB for several projects

I'm wondering what the best way to organize my DB will be. Let me explain:
I'm starting a new "big" project. This big project will be composed of a few little ones. In general the little projects are not related to each other; they are just features of the big one.
One thing that all the projects have in common is the users that are going to use them.
So my questions are:
- Should I create a different DB for each of the little projects (currently each project will contain 4-5 tables)?
- How should I deal with the users? Should I create one DB for all the users, or should I duplicate the users table in every DB? Keep in mind that the information about the users is used a lot in every little project; it's NOT only for identification purposes.
Thanks in advance for your advice.
This greatly depends on the database you choose to use.
If these "sub-projects" are designed to work as one coherent unit, then I strongly recommend you keep it all in the same database. One backup, one restore, one unit.
For organizational purposes, if you are using a database which supports it, select a different Schema per project. PostgreSQL and SQL Server are two databases (among others) which support this effortlessly.
In the case of a database like MySQL, I recommend you pick a short prefix for each subproject and prefix all tables accordingly. "P1_Customer" for example.
Shared data would go in its own schema or prefix, like Global or something like that.
Actually, this was one of the many reasons we switched our main database from MySQL to PostgreSQL. We've been heavy users of both, and I really appreciate the features that PostgreSQL offers. SQL Server, if you are in a windows environment, is a great database IMO as well.
If the little projects are "features of the big one", then I don't see a reason why you wouldn't want just one user table for the main project. The way you set up the question makes this seem true: "If there is a user A in little project 1, then there must be a user A in the 'big' project." If that is the case, you should probably keep the users in the big db instead of duplicating them, unless there are more qualifying details.
I think the proper answer is "it depends".
Starting your organization down the path of a single centralized system is good on many levels. In general I would recommend this.
However: if you are going to have dramatically different development schedules, or dramatically different user experiences across the various sub-projects, then you may be better off keeping them separate.
I'd have a look at OpenID or some other single sign-on protocol depending on the nature of your application. OpenID includes a mechanism called "attribute exchange", which allows applications to retrieve profile information from the OpenID provider.
This allows you to create a central user profile repository, with an authentication scheme, and have your individual apps query that repository for profile information.
The question as to how to design your database is hard to answer without more information. In most architectures, "features" within an application tend to be closely linked - "users" are related to "accounts" are related to "organisations" etc.
I'd recommend looking at the foreign key relationships to answer this question. If you have lots of foreign keys, build a single database for all tables. If you have "clusters" of foreign keys, and you want to have a different life cycle for each application (assuming the clusters map neatly to the applications), consider separate databases.
By "life cycle", I mean mostly the development lifecycle - app 1 might deploy weekly, app 2 monthly, app 3 once only and then be frozen.
