aws - should I combine S3 upload and storing the S3 URL in DynamoDB into one single request? - reactjs

I have a table called "Banner".
I have a banner upload function in my UI.
AWS API Gateway is used.
2 resources are created in API Gateway, which are /s3 and /banner.
I am using 2 separate requests to do this.
1. POST request, resource: /s3
This request runs the Lambda function below, which uploads the banner image to S3.
UploadBannerToS3.js
...
const s3 = new AWS.S3();
...
const data = await s3.upload(params).promise();
...
This returns an S3 URL where the banner image is stored.
2. POST request, resource: /banner
This request takes the above S3 URL as a parameter and stores the banner information, including the URL, in DynamoDB.
The Lambda function looks like this.
CreateBanner.js
...
const { url } = JSON.parse(event.body);
const params = {
  TableName: "Banner",
  Item: {
    id: id,
    url: url,
    createdAt: date,
  }
};
...
const data = await documentClient.put(params).promise();
...
My frontend code (I am using React) looks like this.
handleUploadBanner = async (banner) => {
  const image = await toBase64(banner);
  const payload = { "banner": image };
  try {
    // request 1
    const uploadResponse_S3 = await APIHandler.uploadBannerToS3(payload);
    const s3Url = uploadResponse_S3.data.Location;
    // request 2
    const response = await APIHandler.createBanners({
      url: s3Url,
    });
    console.log(response);
  } catch (error) {
    console.log(error);
  }
};
If request 1 succeeds but request 2 fails to return a successful status, would it be a mess for development?
Should I combine these 2 requests into one single Lambda function to handle it?
What is the best practice here?

If the end-user (front-end) wants a "synchronized" response from the API, then we need to design the 2 APIs as synchronous ones. But that doesn't mean we need to merge them.
If the end-user only wants the first API's response and doesn't care about the second one, we can design the second API as asynchronous, using a pipeline like the following (a sketch of step (a) follows the list):
a. Lambda 1 -> performs its logic -> sends an SNS message and returns to the end-user
b. SNS -> SQS -> Lambda 2
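A minimal sketch of step (a), assuming the bucket name, key scheme, and topic ARN shown (all placeholders; the ARN is read from the function's environment):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const sns = new AWS.SNS();

exports.handler = async (event) => {
  const { banner } = JSON.parse(event.body);
  // Upload the image first; if this throws, nothing is published downstream
  const data = await s3.upload({
    Bucket: 'my-banner-bucket',        // placeholder bucket name
    Key: `banners/${Date.now()}.png`,  // hypothetical key scheme
    Body: Buffer.from(banner, 'base64'),
  }).promise();
  // Hand the DB write to the async pipeline (SNS -> SQS -> Lambda 2)
  await sns.publish({
    TopicArn: process.env.BANNER_TOPIC_ARN, // assumption: set in the function config
    Message: JSON.stringify({ url: data.Location }),
  }).promise();
  // Return to the end-user without waiting for the DB write
  return { statusCode: 200, body: JSON.stringify({ url: data.Location }) };
};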
The more we design the system around "single responsibility", the better for development and maintenance.
Thanks,

If only request 1 is successfully sent, while request 2 fails to return a successful status, would it be a mess for development?
Not necessarily. But it depends, because "mess" is a very abstract concept. What is the requirement? Is it of vital importance that the requests never fail? What do you want to do if they fail? For simplicity, you could start with a retry function in the front-end.
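A minimal sketch of such a retry helper (the name withRetry and the attempt count are arbitrary):

async function withRetry(fn, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err; // give up after the last attempt
    }
  }
}

// usage: await withRetry(() => APIHandler.createBanners({ url: s3Url }));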
Should I combine these 2 requests into one single Lambda function to handle it?
Either way, it is better to keep Lambdas small and short; that is how you work with AWS Lambdas. But I think you want more control over the outcome, with a better fail-over approach.
SQS is one way of doing it, but it is complex for this case. I would configure a trigger from S3 to Lambda; that way you will only update the database when the image has been successfully uploaded.
So in summary:
Call Lambda 1 -> upload to S3 -> successful?
S3 triggers Lambda 2
Lambda 2 saves to DB
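A minimal sketch of Lambda 2 under that setup, assuming the DynamoDB item id can be derived from the S3 object key:

const AWS = require('aws-sdk');
const documentClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  // An S3 ObjectCreated event can carry several records; handle them all
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    await documentClient.put({
      TableName: 'Banner',
      Item: {
        id: key, // assumption: the object key doubles as the item id
        url: `https://${bucket}.s3.amazonaws.com/${key}`,
        createdAt: new Date().toISOString(),
      },
    }).promise();
  }
};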

I would prefer to process both in one Lambda: the S3 upload and the DB write. It's simpler, and arguably more reliable, since the failure response is abstracted in one place.
I mean, the app client mirrors the file item from DynamoDB, not from S3. So whether the process succeeds or fails, we don't need to worry about the app getting a wrong link. Consider the scenarios:
succeeded upload, succeeded DB: app client gets the correct link
succeeded upload, failed DB: app client will never get the correct link (there is no item)
failed upload, failed DB: same as point #2
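A minimal sketch of that combined Lambda, reusing the names from the question (the bucket name and id scheme are placeholders):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const documentClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const { banner } = JSON.parse(event.body);
  const id = Date.now().toString(); // hypothetical id scheme

  // Step 1: upload the image; if this throws, nothing is written to DynamoDB
  const { Location } = await s3.upload({
    Bucket: 'my-banner-bucket', // placeholder bucket name
    Key: `banners/${id}.png`,
    Body: Buffer.from(banner, 'base64'),
  }).promise();

  // Step 2: store the metadata; if this throws, the client never receives
  // the URL, so it cannot end up with a wrong link (scenario #2 above)
  await documentClient.put({
    TableName: 'Banner',
    Item: { id, url: Location, createdAt: new Date().toISOString() },
  }).promise();

  return { statusCode: 200, body: JSON.stringify({ url: Location }) };
};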


How can we find the total number of tokens available in a wallet (without initialising Moralis)? It is time-consuming.

const Moralis = require('moralis').default
const { EvmChain } = require('@moralisweb3/evm-utils')

const runApp = async () => {
  // Without initialising Moralis - I want to skip these ------ (https://i.stack.imgur.com/u4cGM.jpg)
  await Moralis.start({
    apiKey: 'api_key_secret'
  })
  // ------------------
  const address = '0xbf820316675F3F96beb7a47Cec34c5aEdf07BD0e'
  const chain = EvmChain.GOERLI
  const response = await Moralis.EvmApi.token.getWalletTokenBalances({
    address,
    chain
  })
  console.log(response.toJSON())
}

runApp()
Since every detail of a smart contract is public, I don't want to use the API of a third party like Moralis, as it slows the app.
Yes, you are right that all the smart contract data on the blockchain is public. But it is not always easy to read this data. To read data from the blockchain, you would need to run your own local RPC node, or you may have to rely on a third-party node provider or API provider.
Moralis provides the data to users through the API and it is one of the fastest ways to read real-time blockchain data.
If you don't want to use any third-party providers for reading blockchain data, one option is to run your own full RPC node. This requires setting up a server and syncing the entire blockchain to your machine. It gives you the ability to read the data directly from the blockchain. This can be a good option if you have the technical expertise and the resources to set up and maintain a full node.
But this is neither an easy option nor the fastest one to choose if you are only looking to get ERC-20 token wallet balances.
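To make that concrete, here is a hedged sketch of reading a single, known ERC-20 token balance straight from an RPC node with ethers.js (assuming the v5 API; the node URL and addresses are placeholders). Note this only covers one token at a time: discovering all tokens a wallet holds means scanning Transfer event logs, which is exactly the indexing work API providers like Moralis do for you.

const { ethers } = require('ethers');

// Assumption: you run (or rent access to) a Goerli RPC node at this URL
const provider = new ethers.providers.JsonRpcProvider('http://localhost:8545');

// Minimal ERC-20 ABI: balanceOf and decimals are all we need here
const erc20Abi = [
  'function balanceOf(address owner) view returns (uint256)',
  'function decimals() view returns (uint8)',
];

async function getTokenBalance(tokenAddress, walletAddress) {
  const token = new ethers.Contract(tokenAddress, erc20Abi, provider);
  const [raw, decimals] = await Promise.all([
    token.balanceOf(walletAddress),
    token.decimals(),
  ]);
  return ethers.utils.formatUnits(raw, decimals); // human-readable string
}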

How to share one session between react and django?

I have a frontend on React and a backend on Django. They are running on two different ports.
The goal is to save data from the frontend into the Django session and have access to it on every request.
But the thing is, it creates a new session every time I make a request.
This is how the request looks on the React side:
const data = await axios.post(
  "http://127.0.0.1:8000/api/urls/",
  qs.stringify({
    long: long_url,
    subpart: subpart,
  })
);
And this is how it is processed by the view in Django, where I am trying to create a list of URLs and append to it every time:
@api_view(['POST'])
def users_urls(request):
    if request.method == 'POST':
        long_url = request.POST.get('long')
        subpart = request.POST.get('subpart')
        if 'users_urls' in request.session:
            request.session['users_urls'].append(subpart)
        else:
            request.session['users_urls'] = [subpart]
        return Response(short_url)
It works as it should when I make requests from Postman, but there is some trouble with React.
Please help me figure this out.
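One common cause of this symptom, for context: browsers do not attach cookies to cross-origin requests (localhost:3000 -> 127.0.0.1:8000) unless credentials are explicitly enabled, so Django never receives its session cookie back and opens a fresh session on every call. A hedged sketch of the same request with credentials enabled (the Django side would also need to allow credentials in its CORS configuration):

const data = await axios.post(
  "http://127.0.0.1:8000/api/urls/",
  qs.stringify({
    long: long_url,
    subpart: subpart,
  }),
  { withCredentials: true } // send and accept the session cookie cross-origin
);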

Correct place to audit queries in Hot Chocolate GraphQL

I am wondering whether I should audit user queries in HttpRequestInterceptor or DiagnosticEventListener for Hot Chocolate v11. The problem with the latter is that if the audit fails to write to disk/DB, the user will "get away" with the query.
Ideally, if the audit fails, no operation should proceed. Therefore, in theory, I should use HttpRequestInterceptor.
But how do I get IRequestContext from IRequestExecutor or IQueryRequestBuilder? I tried googling, but documentation is limited.
Neither :)
The HttpRequestInterceptor is meant for enriching the GraphQL request with context data.
The DiagnosticEventListener, on the other hand, is meant for logging or other instrumentations.
If you want to write an audit log, you should instead go for a request middleware. A request middleware can be added like the following.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseRequest(next => async context =>
    {
    })
    .UseDefaultPipeline();
The tricky part here is to inspect the request at the right time. Instead of appending to the default pipeline, you can define your own pipeline like the following.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .UseInstrumentations()
    .UseExceptions()
    .UseTimeout()
    .UseDocumentCache()
    .UseDocumentParser()
    .UseDocumentValidation()
    .UseRequest(next => async context =>
    {
        // write your audit log here and invoke next if the user is allowed to execute
        if (isNotAllowed)
        {
            // if the user is not allowed to proceed, create an error result.
            context.Result = QueryResultBuilder.CreateError(
                ErrorBuilder.New()
                    .SetMessage("Something is broken")
                    .SetCode("Some Error Code")
                    .Build());
        }
        else
        {
            await next(context);
        }
    })
    .UseOperationCache()
    .UseOperationResolver()
    .UseOperationVariableCoercion()
    .UseOperationExecution();
The pipeline is basically the default pipeline but adds your middleware right after the document validation. At this point, your GraphQL request is parsed and validated. This means that we know it is a valid GraphQL request that can be processed at this point. This also means that we can use the context.Document property that contains the parsed GraphQL request.
In order to serialize the document to a formatted string use context.Document.ToString(indented: true).
The good thing is that in the middleware, we are in an async context, meaning you can easily access a database and so on. In contrast to that, the DiagnosticEvents are sync and not meant to have a heavy workload.
The middleware can also be wrapped into a class instead of a delegate.
If you need more help, join us on Slack.
Click on community support to join the Slack channel:
https://github.com/ChilliCream/hotchocolate/issues/new/choose

Django, Djoser social auth : State could not be found in server-side session data. status_code 400

I'm implementing an auth system with Django and React. The two apps run on ports 8000 and 3000 respectively. I have implemented the authentication system using the Djoser package. This package uses the dependencies social_core and social_django. Everything seems to be configured OK. I click on the login with Google button... I'm redirected to the Google login page and then back to my front-end React app on port 3000, with the state and code parameters in the URL.
At this point I'm posting those parameters to the backend. The backend tries to validate the state by checking if the state key is present in the session storage, using the code below (from social_core/backends/oauth.py):
def validate_state(self):
    """Validate state value. Raises exception on error, returns state
    value if valid."""
    if not self.STATE_PARAMETER and not self.REDIRECT_STATE:
        return None
    state = self.get_session_state()
    request_state = self.get_request_state()
    if not request_state:
        raise AuthMissingParameter(self, 'state')
    elif not state:
        raise AuthStateMissing(self, 'state')
    elif not constant_time_compare(request_state, state):
        raise AuthStateForbidden(self)
    else:
        return state
At this point, for some reason, the state session key is not there, and I receive an error saying that the state cannot be found in the session data (error below):
{"error":["State could not be found in server-side session data."],"status_code":400}
To recap the actions I take:
1. Front-end asks the backend to generate, for the provider google-oauth2, a redirect URL. With this action the URL is generated, and the state key is stored in the session under a specific key (google-oauth2_state).
2. Front-end receives the URL and redirects to the Google auth page.
3. Authentication with Google, then redirection back to the front-end with state and code parameters in the URL.
4. Front-end gets the data from the URL and posts it to the back-end to verify that the state received equals the one generated at point (1).
For some reason the state is not persisted... Any ideas and help will be really appreciated.
Thanks to all.
OK, so this is a common problem when working with social auth. I had the same problem many times.
The flow:
1. Make a request to http://127.0.0.1:8000/auth/o/google-oauth2/?redirect_uri=http://localhost:3000/ (example).
2. You will get an authorization_url. If you look closely, there is a state present in this authorization_url. This is the server-side state.
3. Now follow the authorization_url link. You will get the Google auth page, and after that you will be redirected to your redirect URL with a state and a code. Remember, this state should be the same as the server-side state from (2).
4. Make a POST request to http://127.0.0.1:8000/auth/o/google-oauth2/?state=''&code=''.
If your states are not the same, you will get this kind of issue.
Every time you want to log in, you need to make a request to http://127.0.0.1:8000/auth/o/google-oauth2/?redirect_uri=http://localhost:3000/
and then to http://127.0.0.1:8000/auth/o/google-oauth2/?state=''&code='', so that you get the same state.
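A minimal front-end sketch of that flow (the endpoints mirror the URLs above; withCredentials is an assumption, needed so the session that stored the state at step 1 is the same one checked at step 4):

// Step 1: ask the backend for the authorization_url; this also stores the
// server-side state in the session, so keep sending the same session cookie
const { data } = await axios.get(
  'http://127.0.0.1:8000/auth/o/google-oauth2/?redirect_uri=http://localhost:3000/',
  { withCredentials: true }
);
window.location.replace(data.authorization_url);

// Step 4: back on the redirect page, post state and code to the backend
const params = new URLSearchParams(window.location.search);
await axios.post(
  `http://127.0.0.1:8000/auth/o/google-oauth2/?state=${params.get('state')}&code=${params.get('code')}`,
  null,
  { withCredentials: true }
);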
Without more detailed information, I can only suggest 2 possible reasons:
You overrode the backend with improper session operations (or the user was logged out before auth was finished).
The front-end used an incorrect state parameter.
You could test social login without the front-end. Let's say you're trying to sign in with Google:
Enter the social login URL in the browser, like domain.com:8000/login/google-oauth2/
Authorize
See if the page redirects to your default login page correctly
If yes, then you probably need to check your front-end code; if no, then check your backend code.
Finally, if you're not too sensitive to the potential risk, you could also override the GoogleOAuth2 class as follows to disable the state check:
from social_core.backends import google

class GoogleOAuth2(google.GoogleOAuth2):
    STATE_PARAMETER = False
I think you may need some changes in your authorization flow at steps No. 3 and 4:
3. Authentication with Google, then redirection back to the front-end with state and code parameters in the URL.
4. Front-end gets the data from the URL and posts it to the back-end to verify that the state received equals the one generated at point (1).
Maybe you should redirect back to the server side after Google's authorization.
Then, at the server side, do the check: validate the state and code (and maybe do more things).
Then let the server redirect to the front-end site you wanted.
For some reason, redirecting to the front-end directly will lose the param.. :-)
Finally, I reached a point where everything is working 200 percent fine, on local as well as in production.
The issue was totally related to cookies and sessions.
So the right answer is:
make it look to your backend server as if the request is coming from localhost:8000, not localhost:3000,
meaning the backend domain should always stay the same.
To make that possible you have two options:
1: The server serves the build of the frontend; then your frontend will always be on the same domain as the backend.
2: Make a simple view in Django and attach an empty template to it, containing only a script tag with the logic to handle Google auth. Whenever you click on sign in with Google, you go back to that view, which handles the process; at the end, when you get your access token back, you pass it to the frontend through URL params.
I used the 2nd approach, as it was appropriate for me.
What you need to do is just make a simple view and attach a template to it, so that clicking on sign in with Google hits that view. The rest of the process is handled by the view, and the access token is delivered to the URL you specify.
View Code:
class GoogleCodeVerificationView(TemplateView):
    permission_classes = []
    template_name = 'social/google.html'

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context["redirect_uri"] = "{}://{}".format(
            settings.SOCIAL_AUTH_PROTOCOL, settings.SOCIAL_AUTH_DOMAIN)
        context['success_redirect_uri'] = "{}://{}".format(
            settings.PASSWORD_RESET_PROTOCOL, settings.PASSWORD_RESET_DOMAIN)
        return context
Template script code:
<body>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.21.1/axios.min.js"></script>
  <script>
    function redirectToClientSide(success_redirect_uri) {
      window.location.replace(`${success_redirect_uri}/signin/`);
    }
    function getFormBody(details) {
      return Object.keys(details)
        .map(
          (key) =>
            encodeURIComponent(key) + "=" + encodeURIComponent(details[key])
        )
        .join("&");
    }
    try {
      const urlSearchParams = new URLSearchParams(window.location.search);
      const params = Object.fromEntries(urlSearchParams.entries());
      const redirect_uri = "{{redirect_uri|safe}}";
      const success_redirect_uri = "{{success_redirect_uri|safe}}";
      if (params.flag === "google") {
        axios
          .get(
            `/api/accounts/auth/o/google-oauth2/?redirect_uri=${redirect_uri}/api/accounts/google`
          )
          .then((res) => {
            window.location.replace(res.data.authorization_url);
          })
          .catch((errors) => {
            redirectToClientSide(success_redirect_uri);
          });
      } else if (params.state && params.code && !params.flag) {
        const details = {
          state: params.state,
          code: params.code,
        };
        const formBody = getFormBody(details);
        // axios.defaults.withCredentials = true;
        axios
          .post(`/api/accounts/auth/o/google-oauth2/?${formBody}`)
          .then((res) => {
            const formBody = getFormBody(res.data);
            window.location.replace(
              `${success_redirect_uri}/google/?${formBody}`
            );
          })
          .catch((errors) => {
            redirectToClientSide(success_redirect_uri);
          });
      } else {
        redirectToClientSide(success_redirect_uri);
      }
    } catch {
      redirectToClientSide(success_redirect_uri);
    }
  </script>
</body>

Firebase - Best Practice For Server Firestore Reads For Server-Side Rendering

I have a server-side-rendered React app using Firebase Firestore.
An area of my site server-side renders content that needs to be retrieved from Firestore.
Currently, I am using Firestore rules to allow anyone to read data from these particular docs.
What worries me is that some bad actor could set up a script to continuously hit my database with reads and rack up my bill (since we are charged on a per-read basis, it seems it's never wise to allow anyone to perform reads).
Current Rule
// Allow anonymous users to read feeds
match /landingPageFeeds/{pageId}/feeds/newsFeed {
  allow read: if true;
}
Best Way Forward?
How do I allow my server-side script to read from Firestore, but not allow anyone else to do so?
Keep in mind, this is an initial action that runs server-side before hydrating the client side with the pre-loaded state. This function/action is also shared with the client side for page-to-page navigation.
I considered anonymous login, which worked; however, it generated a new anonymous user with every page load, and Firebase does throttle new email/password and anonymous user accounts. It did not seem practical.
Solution
Per Doug's comment, I thought about the Admin SDK more. I ended up creating a separate API in Firebase Functions for anonymous requests that require secure Firestore reads whose responses can be cached.
Goals
Continue to deny public reads of my Firestore database.
Allow anonymous users to trigger Firestore reads for server-side-rendered React pages that require data from the Firestore database (like first-time visitors and search engines).
Prevent "read spam", where a third party could hit my database with millions of reads to drive up my cloud costs, by using a server-side CDN cache for the responses. (By invoking unnecessary reads in a loop, I once racked up a huge bill by accident; I want to make sure strangers can't do this maliciously.)
Admin SDK & Firebase Function Caching
The Admin SDK lets me read securely from Firestore, while my Firestore security rules deny access to non-authenticated users.
Firebase Functions handling GET requests support caching the response on the server. This means that subsequent hits from identical queries will not re-run all of my function logic (Firestore reads, other function invocations); it will just respond instantly with the same data again.
Process
An anonymous client visits a server-side-rendered React page.
Initial rendering on the server triggers a Firebase Function (HTTPS trigger).
The Firebase Function uses the Admin SDK to read from the secured Firestore database.
The function caches the response for 3 hours: res.set('Cache-Control', 'public, max-age=600, s-maxage=10800');
Subsequent requests from any client anywhere for the next 3 hours are served from the cache, avoiding unnecessary reads or additional computation/resource usage.
Note: caching does not work locally; you must deploy to Firebase to test the caching effect.
Example Function
const functions = require("firebase-functions");
const cors = require('cors')({ origin: true });
const { sendResponse } = require("./includes/sendResponse");
const { getFirestoreDataWithAdminSDK } = require("./includes/getFirestoreDataWithAdminSDK");

const cachedApi = functions.https.onRequest((req, res) => {
  cors(req, res, async () => {
    // Set a cache for the response to limit the impact of identical requests on expensive resources
    res.set('Cache-Control', 'public, max-age=600, s-maxage=10800');
    // If POST - respond with a bad request code - POST requests are not cached
    if (req.method === "POST") {
      return sendResponse(res, 400);
    } else {
      // Get GET request action from query
      let action = (req.query.action) ? req.query.action : null;
      console.log("Action: ", action);
      try {
        // Handle actions appropriately
        switch (true) {
          // Get feed data
          case (action === "feed"): {
            console.log("Getting feed...");
            // Get feed id
            let feedId = (req.query.feedId) ? req.query.feedId : null;
            // Get feed data
            let feedData = await getFirestoreDataWithAdminSDK(feedId);
            return sendResponse(res, 200, feedData);
          }
          // No valid action specified
          default: {
            return sendResponse(res, 400);
          }
        }
      } catch (err) {
        console.log("Cached API Error: ", err);
        return sendResponse(res, 500);
      }
    }
  });
});

module.exports = {
  cachedApi
};
