How to parse an Excel worksheet/table and map Excel columns to a data entity in Dynamics 365 Operations - azure-logic-apps

What actions/connectors could one use to parse an Excel file and map columns from an Excel table to an external store? In this case I wish to create a record in Dynamics 365 Operations using an OData entity.
Thanks

Why exactly are the obvious RapidStart Services not an option for you?
If you definitely need to use OData, I suggest building a publishable OData page via web services. You can then implement a parser in any environment/language you prefer and submit the new record via a RESTful web service call (which nearly every framework should allow) to that same page, in order to submit the record to your production environment.
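For illustration, here is a minimal Java sketch of the "parse, then POST" half of that approach, assuming a row has already been parsed out of the Excel file. The environment URL, entity set name, field names, and token below are all placeholders, not the actual names in your environment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ODataRecordSubmitter {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your environment URL, entity set and OAuth token.
        String entityUrl = "https://yourenv.operations.dynamics.com/data/CustomersV3";
        String accessToken = "<bearer-token-from-azure-ad>";

        // JSON body built from one parsed Excel row (hypothetical fields).
        String body = "{ \"CustomerAccount\": \"C0001\", \"Name\": \"Contoso\" }";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(entityUrl))
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // 201 Created indicates the entity record was inserted.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}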

Related

Inject customer-specific data into Outlook add-in

For our add-in, we need some data that will be different on a per-customer basis, such as a customer ID, base URL (changes based on region/environment), API key, etc. We will need this data in order to contact some of our external microservices.
We've looked into RoamingSettings, and that seems to be a good way to save this customer data for later use, but this would only be possible AFTER our data gets injected and is available to the add-in for saving.
We've looked into this question and its corresponding answers: Read parameters from Outlook 365 Add-In manifest. Michael Zlatkovsky's answer, regarding appending an encoded string with that data to the add-in's start URL in the XML manifest, and later grabbing that URL with JavaScript and decoding/parsing that query string seems to be our best option yet.
Is there a standardized or more common way of injecting data into an add-in?
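To make the start-URL approach from the linked answer concrete: the server that hands out each customer's manifest appends the customer-specific data as an encoded query string, and the add-in later reads it back from its own URL. A minimal sketch of the encoding side, with entirely hypothetical settings values and host names:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AddinUrlBuilder {
    public static void main(String[] args) {
        // Hypothetical per-customer settings to inject into the add-in.
        String settingsJson =
            "{\"customerId\":\"acme\",\"baseUrl\":\"https://eu.example.com\",\"apiKey\":\"...\"}";

        // Base64-encode, then URL-encode, so the payload survives inside a query string.
        String encoded = URLEncoder.encode(
                Base64.getEncoder().encodeToString(settingsJson.getBytes(StandardCharsets.UTF_8)),
                StandardCharsets.UTF_8);

        // This URL goes into the SourceLocation of the customer's manifest.
        String startUrl = "https://addin.example.com/index.html?settings=" + encoded;
        System.out.println(startUrl);
        // On the client, the add-in reads location.search with JavaScript,
        // Base64-decodes the "settings" parameter, parses the JSON, and can
        // then persist it via RoamingSettings as described above.
    }
}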

What are the best practices for building a REST API with different subscribers (companies)?

What is the best design approach, in terms of security, performance, and maintenance, for a REST API that has many subscribers (companies)?
Which of the following approaches is best?
Option 1: Build a general API plus a sub-API for each subscriber (company). When a request comes in, we check it and forward it to the sub-API using an API key, then pass the data back through the general API to the client.
Option 2: Build a single API with a separate database for each subscriber (company); each company has a huge number of records, which is why we suggest separate databases to improve performance. When a request comes in, we verify it and switch the database connection string based on the client.
Option 3: Build one API and one big database that holds all subscribers' data.
Do you suggest any other approach to this problem? We use Web API, MS SQL Server, and Azure Cloud.
In the past I've had one API secured using OAuth/JWT, with a company id in the token. When a request comes in we read the company id from the JWT and perform a lookup in a master database; this database holds global information such as the connection string for each company. We then create a unit of work that has the company's connection string associated with it, and any database lookups use that.
This means you can start with one master and one node database; when the node database starts getting overloaded you can bring up another one and either add new companies to it or move existing companies over to take the pressure off. Essentially you're just scaling out when the need arises.
We had no performance issues with this setup.
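A minimal sketch of that lookup flow in Java, assuming the JWT has already been validated upstream and the company id extracted from it; the master table and column names are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TenantConnectionResolver {
    // Cache so the master database is not hit on every request.
    private final Map<String, String> connectionStrings = new ConcurrentHashMap<>();
    private final String masterJdbcUrl;

    public TenantConnectionResolver(String masterJdbcUrl) {
        this.masterJdbcUrl = masterJdbcUrl;
    }

    /** Opens a connection to the node database that holds this company's data. */
    public Connection connectionFor(String companyId) throws Exception {
        String jdbcUrl = connectionStrings.computeIfAbsent(companyId, this::lookupInMaster);
        return DriverManager.getConnection(jdbcUrl);
    }

    private String lookupInMaster(String companyId) {
        // Hypothetical master table: Companies(CompanyId, ConnectionString).
        String sql = "SELECT ConnectionString FROM Companies WHERE CompanyId = ?";
        try (Connection master = DriverManager.getConnection(masterJdbcUrl);
             PreparedStatement ps = master.prepareStatement(sql)) {
            ps.setString(1, companyId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) throw new IllegalArgumentException("Unknown company: " + companyId);
                return rs.getString(1);
            }
        } catch (java.sql.SQLException e) {
            throw new RuntimeException(e);
        }
    }
}

The unit of work described above would then wrap the connection this resolver returns.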
Depending on the transaction volume and the nature of the data, you can go for a single database or a separate database for each company.
Option 2 would be best if you have a complex data model.
I don't see any advantage in option 1, because the general API will be called for each request anyway.
You can use client ID verification while issuing access tokens.
What I understood from your question is that you want a REST API for multiple consumers (companies). Logically, the employees of each company will consume your API; those employees may be admins, HR, etc. For such a scenario I suggest you go with a single REST API providing the services to your consumers, and for security use OpenID Connect on top of OAuth 2. This resolves both authentication and authorization for you.

Azure search: use a single index on multiple data sources

I have multiple Azure tables, across multiple Azure storage accounts, that have the exact same format. Is it possible to configure several data sources in Azure Search to use a single index, so that a search on this index would return results aggregated from all data sources (Azure tables)?
So far, each time I configure a new data source and the corresponding indexer, I must create a new index (with a new index name); attempting to reuse an existing index name results in an error stating "Another index with this name already exists".
Thank you for any help or pointers you might provide.
Yes, it's possible, but we don't currently support it in the Azure Portal.
When you go through the "import data" flow in the portal, it'll create a data source, indexer and index for you.
If you want more sources for that index, you need to create new data sources and indexers, with the new indexers pointing at the existing index. Unfortunately this is not currently supported from the portal. You can do it using the .NET SDK (if you're using .NET), directly using the REST API from your app, or using any tool that can make HTTP requests such as PowerShell, curl or Fiddler.
The documentation that describes the indexer-related REST APIs is here:
https://msdn.microsoft.com/en-us/library/azure/dn946891.aspx
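As a hedged sketch of those two REST calls in Java: this creates a second data source and a second indexer whose targetIndexName points at the existing index. The service name, admin key, storage connection string, and index name are placeholders, and the api-version shown is one of the preview versions from the era of the linked docs; check the documentation for the current value:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AddIndexerToExistingIndex {
    static final String SERVICE = "https://myservice.search.windows.net"; // placeholder
    static final String API_KEY = "<admin-api-key>";                      // placeholder
    static final String API_VERSION = "2015-02-28-Preview";

    public static void main(String[] args) throws Exception {
        // 1. A second data source pointing at another storage account's table.
        String dataSource = """
            { "name": "tables-ds-2",
              "type": "azuretable",
              "credentials": { "connectionString": "<storage-connection-string-2>" },
              "container": { "name": "mytable" } }""";
        post("/datasources", dataSource);

        // 2. An indexer that targets the EXISTING index rather than a new one.
        String indexer = """
            { "name": "tables-indexer-2",
              "dataSourceName": "tables-ds-2",
              "targetIndexName": "mytables-index" }""";
        post("/indexers", indexer);
    }

    static void post(String path, String json) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(SERVICE + path + "?api-version=" + API_VERSION))
                .header("api-key", API_KEY)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(path + " -> " + response.statusCode());
    }
}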

How to show audit data in the front end as a timeline when a user adds, modifies, deletes, etc.

I have a single-page application developed in AngularJS. I want to show a timeline of user activity in the front end, based on the users' interactions with the database.
The database layer is built with Hibernate and the controller layer with Jersey RESTful web services. I want to audit user operations (add, modify, delete, etc.) in the UI while interacting with Hibernate.
I have gone through some posts; some suggest the JPA API for Hibernate auditing, some suggest Spring Data. I want the audit data to be shown as the user interacts with the system, and also organized on the back end.
Please help me with the best architecture, flow, or road map to achieve this, and point me to some learning tutorials.
Thanks in advance
Based on the assumption that by auditing you mean being able to track the change history made to entity rows at the database level: yes, Hibernate has an extension called Hibernate Envers that does this for you.
You can find documentation on Envers here
The gist is that you simply need to annotate your entities with @Audited, either at the class level or on a per-property level, supply a few configuration parameters when you construct your EntityManagerFactory or SessionFactory, and make sure you have the appropriate tables created in your database.
Once the revision tracking has started, you can then easily query for audit changes by using the org.hibernate.envers.AuditReader interface. See the documentation for how to obtain this.
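A minimal sketch of both halves, using a hypothetical Document entity and an EntityManager obtained elsewhere (javax.persistence here; adjust for jakarta.persistence on newer stacks):

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;
import org.hibernate.envers.Audited;

// Marking the entity @Audited tells Envers to record every insert/update/delete
// in a corresponding Document_AUD table.
@Entity
@Audited
public class Document {
    @Id @GeneratedValue
    private Long id;
    private String title;
    // getters/setters omitted
}

class DocumentHistory {
    /** Returns the revision numbers in which the given document changed. */
    static List<Number> revisionsOf(EntityManager em, Long documentId) {
        AuditReader reader = AuditReaderFactory.get(em);
        return reader.getRevisions(Document.class, documentId);
    }

    /** Returns the document exactly as it looked at one revision. */
    static Document atRevision(EntityManager em, Long documentId, Number revision) {
        return AuditReaderFactory.get(em).find(Document.class, documentId, revision);
    }
}

Each revision also carries a timestamp, so a list like this maps naturally onto the front-end timeline described in the question.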

Accessing the SharePoint database to read all blob data

I have a situation where I am uploading an image to SharePoint and it is being saved as a blob. I need to create an XML file with the blob's data plus other data that helps users identify it. The following is a hint of what I want:
<image>
  <name>mydog</name>
  <extension>.jpg</extension>
  <blobid>0234234</blobid>
  <blobpath>435343445</blobpath>
</image>
I was looking at the tables in WSS_Content and came across AllDocumentStreams, which has a column called RbsId. Unfortunately I cannot link this id to any of my documents. My question is: is there a way I can get all the blob information from the database so I can link it to other details?
Directly accessing the SharePoint database isn't supported by Microsoft:
If a server component requires information from the database, it must get that data by using the appropriate items in the SharePoint object model, and not by trying to get the items from the data structures in the database through some query mechanism.
You might be better off using the SharePoint object model to read these files.
Some links that should help:
http://www.codeproject.com/KB/sharepoint/File_Shunter.aspx
http://www.learningsharepoint.com/2011/04/01/read-a-file-in-sharepoint-document-library/
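If you are working from a remote client rather than in a server-side component, SharePoint's client-facing interfaces expose the same file content. As a hedged sketch, and plainly a different technique from the server object model covered in the links above, this downloads one document's bytes over SharePoint's REST endpoint, assuming SharePoint 2013 or later and a valid auth token; the site URL and file path are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class SharePointFileDownload {
    public static void main(String[] args) throws Exception {
        // Placeholders: site URL, server-relative file path and auth token.
        String site = "https://contoso.sharepoint.com/sites/docs";
        String filePath = "/sites/docs/Shared Documents/mydog.jpg";
        String token = "<access-token>";

        // /$value returns the raw file bytes (the content RBS stores as a blob).
        String url = site + "/_api/web/GetFileByServerRelativeUrl('"
                + filePath.replace(" ", "%20") + "')/$value";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();

        // Stream the response body straight to a local file.
        HttpClient.newHttpClient().send(request,
                HttpResponse.BodyHandlers.ofFile(Path.of("mydog.jpg")));
    }
}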
