Automated DynamoDB Database Checks | ReactJS + AWS Amplify

My team and I are working on a full-stack application using ReactJS on the frontend and AWS Amplify on the backend. We are using AWS AppSync to query data in our DynamoDB tables (through GraphQL queries), Cognito for user authentication, and SES to send emails to users. Basically, the user inputs some info (DynamoDB Table #1), that info is matched against an opportunity database (DynamoDB Table #2), and the top 3 opportunities are shown to the user. If none are found, an email is sent to inform the user that they will receive an email when opportunities are found.

Now for the question: is there a way to automatically query a DynamoDB table (say, once a day, or every time a new opportunity is added to DynamoDB Table #2) and send out emails with matching opportunities to the users who were waiting for them?

I tried using Lambda triggers, but the only way I could make it work was by querying each row of DynamoDB Table #1 against DynamoDB Table #2, which is computationally infeasible because too many resources would be used up.

I'm asking for advice on how to go about making that daily check because I haven't been able to figure it out yet. Any responses are appreciated, and let me know if you need any additional information from my side. Thank you!

You could look into using DynamoDB Streams. When a new opportunity is added to DynamoDB, the stream would trigger a Lambda function. Your Lambda could then execute your business logic to match the opportunity with the appropriate users.
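A minimal sketch of such a stream handler, assuming the opportunities table has its stream enabled with new images, and assuming placeholder table, index, and attribute names (not the poster's actual schema):

    import boto3
    from boto3.dynamodb.conditions import Key

    dynamodb = boto3.resource("dynamodb")
    ses = boto3.client("ses")
    users_table = dynamodb.Table("UserCriteria")  # hypothetical name for Table #1

    def handler(event, context):
        # DynamoDB Streams delivers the new/changed items in event["Records"].
        for record in event["Records"]:
            if record["eventName"] != "INSERT":
                continue
            opportunity = record["dynamodb"]["NewImage"]
            category = opportunity["category"]["S"]  # hypothetical matching attribute

            # Find waiting users whose saved criteria match the new opportunity.
            # A GSI keyed on the matching attribute avoids scanning every user row.
            resp = users_table.query(
                IndexName="byCategory",  # hypothetical GSI
                KeyConditionExpression=Key("category").eq(category),
            )
            for user in resp.get("Items", []):
                ses.send_email(
                    Source="noreply@example.com",
                    Destination={"ToAddresses": [user["email"]]},
                    Message={
                        "Subject": {"Data": "New opportunity available"},
                        "Body": {"Text": {"Data": "A new opportunity matches your criteria."}},
                    },
                )

For the once-a-day variant, roughly the same handler body could instead run from a Lambda on an EventBridge (CloudWatch Events) schedule, checking only the opportunities added since the last run.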

Related

Is there a way to raise a SNOW ticket as a notification for query failures in Snowflake?

I was going through the integration documents available for Snowflake and ServiceNow, but all of them are oddly focused on Snowflake consuming ServiceNow data for analytics. I didn't find anything related to creating tickets for failures at Snowflake. Is it possible?
This isn't about the monitoring and notification aspect of Snowflake, but about connecting to ServiceNow and raising a ticket for query failures (tasks, stored procedures, etc.).
Any ideas?
There's no functionality like that as of now. I recommend you open an Idea for it, and if enough customers want it, our Product Management team will review it.
For Snowpipe, we found a way to do it: we send the error message to SNS, and a Lambda function then calls the ServiceNow REST API to create a ticket.
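A rough sketch of the Lambda side of that Snowpipe path, assuming an SNS-triggered function and the standard ServiceNow Table API (the instance URL, credentials, and incident fields below are placeholders):

    import os
    import requests

    SNOW_INSTANCE = os.environ["SNOW_INSTANCE"]  # e.g. "https://yourinstance.service-now.com"
    SNOW_USER = os.environ["SNOW_USER"]
    SNOW_PASSWORD = os.environ["SNOW_PASSWORD"]

    def handler(event, context):
        # Each SNS record carries the Snowpipe error notification as its message body.
        for record in event["Records"]:
            error_message = record["Sns"]["Message"]
            # Create an incident through the ServiceNow Table API.
            resp = requests.post(
                f"{SNOW_INSTANCE}/api/now/table/incident",
                auth=(SNOW_USER, SNOW_PASSWORD),
                headers={"Content-Type": "application/json", "Accept": "application/json"},
                json={
                    "short_description": "Snowpipe load failure",
                    "description": error_message,
                },
            )
            resp.raise_for_status()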
For Tasks, we found that it is possible to use External Functions to notify AWS whenever a Task fails, but we haven't implemented that yet.
Email is a simple way. You need to determine how your ServiceNow instance is processing emails. We implemented incident creation from Azure App Insights based on emails.
In ServiceNow find the Inbound Action you need to process the email or make one.
ServiceNow provides every instance with an email account
Refer to the ServiceNow documentation on inbound email actions for details.
The instance email is usually xxxx@service-now.com.
If your instance URL is "audi.service-now.com", the email would be "audi@service-now.com".
For a PDI, the domain is servicenowdevelopers.com, e.g. dev12345@servicenowdevelopers.com.

Conditional read access to DynamoDB table with AWS Amplify

I'm building an application with AWS Amplify, where I have three DynamoDB tables: Users, Posts and Subscriptions.
1. users can make posts
2. users subscribe to other users
3. user A can only see posts by user B if user A is subscribed to user B
Points 1 and 2 are easy to implement with standard GraphQL mutations, but I'm stuck on how to implement point 3 in an elegant way. Currently what I do is use a Lambda resolver.
Given inputs "user A wants to see user B", the lambda resolver does the following:
Query Subscriptions table to see if there's a document for "user A subscribed to user B"
if such a row exists, query Posts table and return documents. If not, return nothing.
This logic requires two round trips, but since DynamoDB is fast I'm OK with that trade-off. There are other downsides though, so I'm wondering if there's a more Amplify-native way to do this? Some magic DynamoDB and @auth trickery perhaps?
Thank you!
If you are using multiple tables to store the data, the multiple query approach is your only option.
You can use transactions when mutating items across multiple tables, which is useful when you want to perform an operation conditioned on an item in another table. But when it comes to a read operation, you have no such option.
Aside from re-designing your tables to support this access pattern, I don't think two reads is particularly bad.
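For reference, the two-read check described in the question is small enough to live in a Lambda resolver; here is a rough sketch with boto3, assuming placeholder table and key names rather than the actual Amplify-generated ones:

    import boto3
    from boto3.dynamodb.conditions import Key

    dynamodb = boto3.resource("dynamodb")
    subscriptions = dynamodb.Table("Subscriptions")
    posts = dynamodb.Table("Posts")

    def handler(event, context):
        viewer_id = event["arguments"]["viewerId"]  # "user A"
        author_id = event["arguments"]["authorId"]  # "user B"

        # Round trip 1: is A subscribed to B?
        sub = subscriptions.get_item(
            Key={"subscriberId": viewer_id, "authorId": author_id}
        )
        if "Item" not in sub:
            return []  # not subscribed: return nothing

        # Round trip 2: fetch B's posts.
        resp = posts.query(
            KeyConditionExpression=Key("authorId").eq(author_id)
        )
        return resp.get("Items", [])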
If you want to handle authorization logic outside of DDB, you may want to look into AWS IAM and its documentation on Fine-Grained Access Control. Among other features, IAM can restrict access to specific items in a table based on certain primary key values.

What is the best practice for building a REST API with different subscribers (companies)?

What is the best design approach in terms of security, performance, and maintenance for a REST API that has many subscribers (companies)?
Which approach is best?:
1. Build a general API plus a sub-API for each subscriber (company). When a request comes in, we check it and forward it to the sub-API using an API key, then return the data through the general API to the client.
2. Build a single API with a separate database for each subscriber (company); each company has a huge number of records, which is why we are considering separate databases to improve performance. When a request comes in, we verify it and switch the database connection string based on the client making the request.
3. Build one API and one big database that holds all subscribers' data.
Do you suggest any other approach to solve this problem? We are using Web API, MS SQL Server, and Azure Cloud.
In the past I've had one API secured using OAuth/JWT; in the token we have a company id. When a request comes in, we read the company id from the JWT and perform a lookup in a master database; this database holds global information such as connection strings for each company. We then create a unit of work that has the company's connection string associated with it, and any database lookups use that.
This means you can start with one master database and one node database; when the node database starts getting overloaded you can bring up another one and either add new companies to it or move existing companies over to take pressure off. Essentially you're just scaling out when the need arises.
We had no performance issues with this setup.
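A compressed sketch of that lookup flow, in Python purely for illustration (the original answer is in a Web API/.NET context); the claim name, master table, and connection details are all placeholders:

    import jwt      # PyJWT
    import pyodbc

    MASTER_CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=master-db;DATABASE=master;UID=api;PWD=secret"

    def get_company_connection(token, signing_key):
        # 1. Read the company id from the verified JWT.
        claims = jwt.decode(token, signing_key, algorithms=["HS256"])
        company_id = claims["company_id"]  # hypothetical claim name

        # 2. Look up that company's connection string in the master database.
        master = pyodbc.connect(MASTER_CONN)
        cur = master.cursor()
        cur.execute("SELECT ConnectionString FROM Companies WHERE CompanyId = ?", company_id)
        row = cur.fetchone()
        master.close()

        # 3. All further work for this request uses the company's own database.
        return pyodbc.connect(row[0])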
Depending on the transaction volume and the nature of the data, you can go for a single database or a separate database for each company.
Option 2 would be the best if you have a complex data model.
I don't see any advantage in option 1, because the general API will be called for every request anyway.
You can use client ID verification while issuing access tokens.
What I understood from your question is that you want a REST API for multiple consumers (companies). Logically, employees from each company will consume your API; employees may be admins, HR, etc. For such a scenario I suggest you go with a single REST API to provide the services to your consumers, and for security use OpenID Connect on top of OAuth 2. This resolves authentication and authorization for you.

Is it possible to update/delete User by externalId

We are trying to develop a SCIM-enabled provisioning system for provisioning data from an Enterprise Cloud Subscriber (ECS) to Salesforce (Cloud Service Provider, CSP). We are following the SCIM 1.1 standard.
What are we able to do:
We are able to perform CRUD operations on the User object using the Salesforce auto-generated userId field.
Exact Problem:
We are not able to update/delete the User object using the externalId provided by the ECS.
We tried something like the below, but it is not working; an Unknown_Exception is thrown:
XXX/my.salesforce.com/services/scim/v1/Users/701984?fields=externalId
Please note that it is not possible to store Salesforce userId in ECS's database due to some compliance reasons. So we have to completely depend upon externalId only.
Possible Workaround:
Step1: Read the userId based on externalId from Salesforce
Step2: Update the User object using the salesforce UserId obtained in Step1.
But this two-step process would definitely degrade performance.
Is there any way to update/delete the User by externalId?
Could you please guide us on this?
Thanks so much!
I realize this is an old thread, but I wanted to note that you CAN update Users via REST using an external ID. The endpoint in the question above is incorrect. It should be set as follows and sent as a PATCH request (a sketch of the call is shown after the notes below):
[instance]/services/data/v37.0/sobjects/user/[external_id__c]/[external id value]
Instance = your instance, e.g. https://test.salesforce.com/
external_id__c = the API name of your custom external ID field on User
external id value = the value of the user's external ID
NOTES:
Salesforce responds with an HTTP 204 status code and no content in the body; this isn't usual for PATCH requests, but it is the 'success' response.
The external ID field on User has to be a custom field; make sure it is set as UNIQUE.
Ensure the profile/permission set of the user making the call has the Manage Users permission and has access to the external ID field.
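A rough sketch of that call using Python's requests library, assuming an already-obtained OAuth access token and a hypothetical custom external ID field named ExternalId__c (the instance URL and values are placeholders):

    import requests

    INSTANCE = "https://yourInstance.my.salesforce.com"  # placeholder instance URL
    ACCESS_TOKEN = "<oauth access token>"                # obtained via your OAuth flow

    def update_user_by_external_id(external_id_value, fields):
        # PATCH on an sObject addressed by external ID updates the matching record.
        url = f"{INSTANCE}/services/data/v37.0/sobjects/User/ExternalId__c/{external_id_value}"
        resp = requests.patch(
            url,
            headers={
                "Authorization": f"Bearer {ACCESS_TOKEN}",
                "Content-Type": "application/json",
            },
            json=fields,
        )
        # Salesforce returns 204 No Content on a successful update.
        resp.raise_for_status()
        return resp.status_code

    # Example: update the user whose ExternalId__c is "ecs-12345" (hypothetical value).
    update_user_by_external_id("ecs-12345", {"Department": "Engineering"})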
It is a pretty common pattern for other applications, too, to search first and then perform an update on the returned object. Your workaround seems fine to me. What performance problem are you concerned about? Are you concerned about Salesforce not being able to process more requests, or about the higher response time in your application because you need to make multiple calls? Have you actually measured how much an extra call costs?

Google App Engine large IN clause query

I have an Account entity that has a facebook id.
Sometimes, the client might send all facebook ids (the client's facebook friends) to the server.
We want to select all Accounts IN the facebook ids the client provided.
Looping and calling get on each facebook id seems rather slow, considering people might have 1000+ friends. Furthermore, GAE limits IN-clause queries to 30 values.
Has anyone had a similar situation? How did you handle it?
Thanks!
You can set up a model that uses the facebook ID as its key name, which allows you to use Model.get_by_key_name(key_names=fb_ids) to fetch all the models whose keys are in fb_ids at once.
e.g.
class FBModel(db.Model):
    # Points back to the Account entity for this facebook id.
    account = db.ReferenceProperty(reference_class=Account)

When creating the model:

model = FBModel(key_name=fb_id)
model.put()
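And, to round out the sketch, the batch fetch described above (assuming fb_ids is the list of facebook ids the client sent):

    # Fetch all mapping entities in one batch; missing ids come back as None.
    fb_models = FBModel.get_by_key_name(key_names=fb_ids)
    accounts = [m.account for m in fb_models if m is not None]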
