Edit Client in IdentityServer4

The samples and seed data show creating a new client in Startup.
This is fine for creating a client.
Are there any existing methods or provisions for updating a client? An update also involves tracking the existing records in the collection fields within the clients.
How are entities mapped from IdentityServer4.Models to IdentityServer4.EntityFramework.Entities during an update, considering the records are already available in the database?

What do you mean when you say client? If you mean a client for IdentityServer, then you can edit, configure, or add more clients and other resources in your config class. At startup, IdentityServer loads up all the clients by itself because of this code:
// Add identity server.
services.AddIdentityServer()
    .AddTemporarySigningCredential()
    .AddInMemoryIdentityResources(Config.GetInMemoryIdentityResources())
    .AddInMemoryApiResources(Config.GetInMemoryApiResources())
    .AddInMemoryClients(Config.GetInMemoryClients(Configuration))
    .AddAspNetIdentity<ApplicationUser>()
    .AddProfileService<SqlProfileService>();

Are there any existing methods or provisions for updating a client? An update also involves tracking the existing records in the collection fields within the clients.
Yes, you can update a client just as you can update any other data. Check the IdentityServer4 documentation for how to use Entity Framework Core with IdentityServer4.
How are entities mapped from IdentityServer4.Models to IdentityServer4.EntityFramework.Entities during an update, considering the records are already available in the database?
If you check the IdentityServer4 source, you will find that AutoMapper is used to convert between models and entities (namespace IdentityServer4.EntityFramework.Mappers), and extension methods named ToModel and ToEntity are provided.
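As a rough sketch, an update can be done by loading the tracked entity, replacing it with the mapped version of the updated model, and saving. This assumes the IdentityServer4.EntityFramework packages and an injected ConfigurationDbContext; the remove-and-re-add strategy for the child collections is a common workaround, not an official update API:

```csharp
using System.Threading.Tasks;
using IdentityServer4.EntityFramework.DbContexts;
using IdentityServer4.EntityFramework.Mappers; // ToEntity / ToModel extensions
using Microsoft.EntityFrameworkCore;
using Model = IdentityServer4.Models;

public class ClientAdminService
{
    private readonly ConfigurationDbContext _context;

    public ClientAdminService(ConfigurationDbContext context) => _context = context;

    public async Task UpdateClientAsync(Model.Client updated)
    {
        // Load the stored entity together with its child collections so that
        // EF Core tracks (and deletes) the old collection rows.
        var entity = await _context.Clients
            .Include(c => c.AllowedScopes)
            .Include(c => c.RedirectUris)
            .SingleAsync(c => c.ClientId == updated.ClientId);

        // Replace the stored entity wholesale with the mapped updated model.
        _context.Clients.Remove(entity);
        _context.Clients.Add(updated.ToEntity());
        await _context.SaveChangesAsync();
    }

    public async Task<Model.Client> FindClientAsync(string clientId)
    {
        var entity = await _context.Clients
            .SingleAsync(c => c.ClientId == clientId);
        return entity.ToModel(); // entity -> IdentityServer4.Models.Client
    }
}
```

For fine-grained updates you would instead diff the collection fields yourself, but the mapping direction is the same: ToEntity when writing, ToModel when reading.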

Related

NestJs - Single Model work with Different Database

We have databases like the ones shown in the image below.
In the application, once a new company registers, a new database is created for that company. Consider that there might be multiple schemas.
The schemas are the same for every company.
I am new to NestJS. Currently, we manage this scenario with a function call: we pass the database name, it creates a new connection to the database using Mongoose, iterates over all the models defined for that database, and creates an object that is used application-wide.
Now we need to take development to the next level, and we have chosen NestJS.
How can we handle this kind of technique with NestJS?
What design pattern fits this kind of flow?
How should we deal with email for each registered company?

What are the best practices for building a REST API with different subscribers (companies)?

What is the best design approach, in terms of security, performance, and maintenance, for a REST API that has many subscribers (companies)?
Which of these approaches is best?
Build a general API plus a sub-API for each subscriber (company): when a request comes in, we check it and forward it to the sub-API using an API key, then pass the retrieved data back through the general API to the client.
Build a single API with a separate database for each subscriber's (company's) data (each company has a huge number of records, which is why we suggested separate databases to improve performance): when a request comes in, we verify it and switch the database connection string based on the client request.
Build one API and one big database that holds all subscribers' data.
Do you suggest any other approach to solve this problem? We use Web API, MS SQL Server, and Azure Cloud.
In the past I've had one API, secured using OAuth/JWT, with a company id in the token. When a request comes in, we read the company id from the JWT and perform a lookup in a master database; this database holds global information such as the connection string for each company. We then create a unit of work that has the company's connection string associated with it, and any database lookups use that.
This means you can start with one master database and one node database; when the node database starts getting overloaded, you can bring up another one and either add new companies to it or move existing companies over to take the pressure off. Essentially, you're just scaling out when the need arises.
We had no performance issues with this setup.
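A minimal sketch of that lookup flow (the claim name, master schema, and class names are illustrative; it assumes the company id travels as a claim in the validated JWT):

```csharp
using System.Data.SqlClient;
using System.Security.Claims;

public class TenantConnectionResolver
{
    private readonly string _masterConnectionString;

    public TenantConnectionResolver(string masterConnectionString) =>
        _masterConnectionString = masterConnectionString;

    // Read the company id from the validated JWT principal.
    public string GetCompanyId(ClaimsPrincipal user) =>
        user.FindFirst("company_id")?.Value;

    // Look up the tenant's connection string in the master database.
    public string ResolveConnectionString(string companyId)
    {
        using (var conn = new SqlConnection(_masterConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT ConnectionString FROM Companies WHERE CompanyId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", companyId);
            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    }
}

// A unit of work (or DbContext) is then created per request with the
// resolved connection string, so all queries hit that tenant's database.
```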
Depending on the transaction volume and the nature of the data, you can go for a single database or a separate database for each company.
Option 2 would be the best if you have a complex data model.
I don't see any advantage in going for option 1, because the general API will be called for each request anyway.
You can use ClientID verification while issuing access tokens.
What I understood from your question is that you want a REST API for multiple consumers (companies). Logically, the employees of those companies will consume your API; employees may be admins, HR, etc. For such a scenario I suggest you go with a single REST API providing the services to your consumers, and for security you should use OpenID Connect on top of OAuth 2.0. This resolves authentication and authorization for you.

Azure search: use a single index on multiple data sources

I have multiple Azure tables across multiple Azure storage accounts that have the exact same format. Is it possible to configure several data sources in Azure Search to use a single index, so that a search on this index would return results aggregated from all data sources (Azure tables)?
So far, each time I configure a new data source and the corresponding indexer, I must create a new index (with a new index name). Attempting to reuse an existing index name results in the error "Another index with this name already exists".
Thank you for any help or pointer you might provide.
Yes, it's possible, but it is not currently supported in the Azure portal.
When you go through the "Import data" flow in the portal, it creates a data source, an indexer, and an index for you.
If you want more data sources for that index, you need to create new data sources and indexers, with the new indexers pointing at the existing index. Unfortunately, this is not currently supported from the portal. You can do it using the .NET SDK (if you're using .NET), by calling the REST API directly from your app, or with any tool that can make HTTP requests, such as PowerShell, curl, or Fiddler.
The documentation that describes the indexer-related REST APIs is here:
https://msdn.microsoft.com/en-us/library/azure/dn946891.aspx
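For example, with curl you can create a second data source and point a new indexer at the existing index. The service name, admin key, connection string, and table/index names below are placeholders; the request bodies follow the Azure Search REST API shapes for table data sources and indexers:

```shell
# Create an additional data source (a second storage account/table).
curl -X POST "https://myservice.search.windows.net/datasources?api-version=2015-02-28" \
  -H "Content-Type: application/json" \
  -H "api-key: ADMIN_KEY" \
  -d '{
        "name": "table-source-2",
        "type": "azuretable",
        "credentials": { "connectionString": "STORAGE_CONNECTION_STRING" },
        "container": { "name": "mytable" }
      }'

# Create a new indexer that targets the EXISTING index.
curl -X POST "https://myservice.search.windows.net/indexers?api-version=2015-02-28" \
  -H "Content-Type: application/json" \
  -H "api-key: ADMIN_KEY" \
  -d '{
        "name": "table-indexer-2",
        "dataSourceName": "table-source-2",
        "targetIndexName": "my-existing-index"
      }'
```

Repeat the pair of requests for each additional table, always with the same targetIndexName.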

Interfacing SugarCRM with OpenERP

I am currently working on a project in which I have to make OpenERP and SugarCRM talk to each other.
For example, if I add a new account in SugarCRM, the account is also created in OpenERP... and if I create a new customer in OpenERP, a customer with the same values is created in SugarCRM.
Searching the net, I found a connector that allows this interfacing:
http://www.sugarforge.org/projects/sugar2openerp
This connector is not an easy thing to work with... I had to build a module inside SugarCRM to input the connection details (URL, username, password, etc.).
Now, I don't know how to proceed with the connector... the files contained in it mention "accounts_cstm"... should I create that table or not?
Have you looked at the import_sugarcrm module (http://apps.openerp.com/addon/6970)?
I've never used it, but it is a certified OpenERP module, which means that it is officially supported by OpenERP SA, so you should be able to get support and post feature requests if necessary.
All table names ending with "_cstm" are the upgrade-safe portion of the data you create in SugarCRM (_cstm stands for "custom"). For example, if you add a new field to accounts, the original structure doesn't change; SugarCRM creates your new field in this table and relates it to the original accounts table through the id_c field.
I didn't test the module you mention here, but if it is well constructed, there should be no need to create such a table.
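The relationship described above can be seen with a query like this (the custom column name is illustrative; id and id_c are SugarCRM's actual linking columns):

```sql
-- Custom fields live in accounts_cstm; id_c points back to accounts.id.
SELECT a.id, a.name, c.my_custom_field_c
FROM accounts a
JOIN accounts_cstm c ON c.id_c = a.id;
```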

Storing username that last modified database row via EJB3.0 and JPA

I'd like to store the username that last modified a table row as a field in every table.
I have the following setup: a user logs in, and the web layer calls some EJB 3.0 beans. Those EJB beans create and modify some JPA entities.
Now, I'd like the username (from the web layer) to be automatically stored in every JPA entity (and database row) that is created or modified during an EJB method call.
I have this kind of table (that is, every table has a modifier field):
CREATE TABLE my_table (
    some_data INTEGER,
    modifier VARCHAR(20)
);
By automatically I mean that I wouldn't need to manually set the username on each entity inside the EJB methods.
You could achieve this by storing the username in a ThreadLocal variable and fetching it from the ThreadLocal in the JPA entity's EntityListener.
However, this works only if you are using local EJB calls; it will not work with EJB calls that cross JVM boundaries. I'd like to make this work over remote EJB method calls.
Is this possible at all?
I am using WebLogic Server 10.3, EJB 3.0, and EclipseLink JPA. I am also interested in hearing whether this works with other JPA implementations.
You can use an EJB interceptor (@AroundInvoke) on the EJB class to get the current user (using the standard EJB API) and store the name in the ThreadLocal variable. This works transparently for both remote and local calls, because the interceptor runs in the JVM that executes the bean.
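A minimal sketch of the ThreadLocal holder both answers rely on (the class and method names are illustrative; the interceptor sets it on entry, the JPA EntityListener reads it, and it is cleared on exit):

```java
// Holds the "current user" for the duration of one EJB call on this thread.
// An @AroundInvoke interceptor would call set(ctx.getCallerPrincipal().getName())
// before proceeding and clear() in a finally block; a JPA EntityListener's
// @PrePersist/@PreUpdate callback would call get() to stamp the entity's
// modifier field.
public final class CurrentUser {

    private static final ThreadLocal<String> HOLDER = new ThreadLocal<>();

    private CurrentUser() {}

    public static void set(String username) {
        HOLDER.set(username);
    }

    public static String get() {
        return HOLDER.get();
    }

    // Clearing matters: EJB container threads are pooled and reused,
    // so a stale username could leak into an unrelated request.
    public static void clear() {
        HOLDER.remove();
    }
}
```

Because the interceptor runs in the same JVM (and on the same thread) as the entity manager flush, this works for remote calls as well.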
