I have a frontend in React and a backend in Django, running on two different ports.
The goal is to save data from the frontend to the Django session and have access to it on every request.
But the thing is, it creates a new session every time I make a request.
This is what the request looks like on the React side:
const data = await axios.post(
  "http://127.0.0.1:8000/api/urls/",
  qs.stringify({
    long: long_url,
    subpart: subpart,
  })
);
And this is how it is processed by the view in Django, where I am trying to build a list of URLs and append to it each time:
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['POST'])
def users_urls(request):
    if request.method == 'POST':
        long_url = request.POST.get('long')
        subpart = request.POST.get('subpart')
        if 'users_urls' in request.session:
            request.session['users_urls'].append(subpart)
            # mutating a list in place does not mark the session as dirty,
            # so tell Django to save it
            request.session.modified = True
        else:
            request.session['users_urls'] = [subpart]
        return Response(short_url)
It works as it should when I make requests from Postman, but there is some trouble with React.
Please help me figure this out.
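A common cause of this symptom, offered here as an assumption rather than a confirmed diagnosis: the browser only sends the Django session cookie on a cross-origin request when the request is made with credentials enabled, and only if the server allows it (with django-cors-headers that means `CORS_ALLOW_CREDENTIALS = True`, an explicit allowed-origin list, and an appropriate `SESSION_COOKIE_SAMESITE` setting). A minimal fetch-based sketch of such a request; the URL and field names mirror the question, the helper name is illustrative:

```javascript
// Build a cross-origin request that carries the Django session cookie.
// credentials: "include" tells the browser to attach cookies for
// http://127.0.0.1:8000 even though the page is served from another port.
function buildSessionRequest(longUrl, subpart) {
  const body = new URLSearchParams({ long: longUrl, subpart: subpart });
  return {
    method: "POST",
    credentials: "include", // without this, each request starts a fresh session
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: body.toString(),
  };
}

// Usage:
// fetch("http://127.0.0.1:8000/api/urls/",
//       buildSessionRequest("https://example.com", "abc"));
```

The axios equivalent is `withCredentials: true` in the request config.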
Related
I am trying to make a music controller room with a Django backend and a React frontend. These are running on two different localhost servers, ports 3000 and 8000. I am using the Django session's session_key attribute to identify who the host (the person who created the room) is. If the user creates a room and comes back to create another one before the session expires, they should be taken to the room they already created instead of having the backend create another one. My problem is that each time I hit the create-room button seconds after creating a room (obviously the session hasn't expired, so I expect to be taken to the previous room), the fetch returns a new room.
Here is a view in my views.py that handles this POST request:
class CreateRoomview(APIView):
    # the class we are going to serialize our data with
    serializer_class = CreateRoomSerializer

    # manually defining the method we will use to handle POST data from our frontend
    def post(self, request, format=None):
        # if the current request has no session yet, create one
        if not self.request.session.exists(self.request.session.session_key):
            self.request.session.create()
        # if not Room.objects.filter(host=self.request.session.session_key).exists():
        #     # if the room does not exist, we create a session like that
        #     self.request.session.create()
        # serialize our request data
        serializer = self.serializer_class(data=request.data)
        # check that the data we have serialized is valid
        if serializer.is_valid():
            # pull the model attributes out of the validated data
            guest_can_pause = serializer.data.get('guest_can_pause')
            votes_to_skip = serializer.data.get('votes_to_skip')
            host = self.request.session.session_key
            # queryset of rooms whose host matches the current session key
            queryset = Room.objects.filter(host=host)
            if queryset.exists():
                # the host already has a room: take the first entry
                room = queryset[0]
                # do not create a new room; update only the values
                # we want to change
                room.guest_can_pause = guest_can_pause
                room.votes_to_skip = votes_to_skip
                room.save(update_fields=['guest_can_pause', 'votes_to_skip'])
            else:
                # the queryset is empty: create a room with the values specified
                room = Room(host=host, guest_can_pause=guest_can_pause,
                            votes_to_skip=votes_to_skip)
                room.save()
            return Response(RoomSerializer(room).data, status=status.HTTP_201_CREATED)
        # if the data is invalid, return the errors instead of falling through
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
Here is the method in one of my React frontend components that sends the POST request:
handleRoomButtonClicked() {
  const requestOptions = {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      votes_to_skip: this.state.votes_to_skip,
      guest_can_pause: this.state.guest_can_pause,
    }),
  };
  fetch("http://127.0.0.1:8000/api/createroom/", requestOptions)
    .then((response) => response.json())
    .then((data) => console.log(data));
}
Background information
I'm using django-cors-headers to allow cross-origin requests, since my backend and frontend are running on two different ports.
I have CORS_ALLOW_ALL_ORIGINS = True in my settings.py.
I have tried to search for similar questions on Stack Overflow, but none were helpful; I forgot to save their links, so I can't attach them to this question.
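Given the background above, one likely culprit (an assumption, not something confirmed by the question): `fetch` does not send cookies cross-origin unless `credentials: "include"` is set, so every POST arrives without the `sessionid` cookie and the view creates a fresh session and a fresh room. `CORS_ALLOW_ALL_ORIGINS = True` alone is not enough for credentialed requests; Django also needs `CORS_ALLOW_CREDENTIALS = True` and an explicit list of allowed origins. A sketch of the adjusted request options:

```javascript
// Request options that let the browser attach (and store) the Django
// session cookie across origins; the field names are taken from the question.
function buildCreateRoomOptions(votesToSkip, guestCanPause) {
  return {
    method: "POST",
    credentials: "include", // send the sessionid cookie with the request
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      votes_to_skip: votesToSkip,
      guest_can_pause: guestCanPause,
    }),
  };
}

// Usage:
// fetch("http://127.0.0.1:8000/api/createroom/", buildCreateRoomOptions(2, true));
```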
I'm implementing an auth system with Django and React. The two apps run on ports 8000 and 3000 respectively. I have implemented the authentication system using the Djoser package, which depends on social_core and social_django. Everything seems to be configured correctly: I click on the Google login button, I'm redirected to the Google login page, and then back to my front-end React app on port 3000 with the state and code parameters in the URL.
At this point I post those parameters to the backend. The backend tries to validate the state by checking whether the state key is present in the session storage, using the code below (from social_core/backends/oauth.py):
def validate_state(self):
    """Validate state value. Raises exception on error, returns state
    value if valid."""
    if not self.STATE_PARAMETER and not self.REDIRECT_STATE:
        return None
    state = self.get_session_state()
    request_state = self.get_request_state()
    if not request_state:
        raise AuthMissingParameter(self, 'state')
    elif not state:
        raise AuthStateMissing(self, 'state')
    elif not constant_time_compare(request_state, state):
        raise AuthStateForbidden(self)
    else:
        return state
At this point, for some reason, the state key is not in the session, and I receive an error saying that the state cannot be found in the session data (error below):
{"error":["State could not be found in server-side session data."],"status_code":400}
To recap the actions I perform:
1. The front-end asks the backend to generate a redirect URL for the google-oauth2 provider. With this action the URL is generated and the state key is stored in the session under a specific key (google-oauth2_state).
2. The front-end receives the URL and redirects to the Google auth page.
3. After authentication with Google, the user is redirected back to the front-end with the state and code parameters in the URL.
4. The front-end reads the data from the URL and posts it to the back-end to verify that the state received matches the one generated in step 1.
For some reason the state is not persisted... Any ideas and help will be really appreciated.
Thanks to all.
OK, so this is a common problem when working with social auth; I have run into it many times.
The flow:
1. Make a request to http://127.0.0.1:8000/auth/o/google-oauth2/?redirect_uri=http://localhost:3000/ (example).
2. You will get an authorization_url. If you look closely at this authorization_url, there is a state in it: this is the server-side state.
3. Now follow the authorization_url link; you will get the Google auth page. After that you will be redirected to your redirect URL with a state and a code. This state should be the same as the server-side state from step 2.
4. Make a POST request to http://127.0.0.1:8000/auth/o/google-oauth2/?state=''&code=''.
If the states are not the same, you will get issues. Every time you want to log in, you need to make a request to http://127.0.0.1:8000/auth/o/google-oauth2/?redirect_uri=http://localhost:3000/ first and then to http://127.0.0.1:8000/auth/o/google-oauth2/?state=''&code='', so that you get the same state.
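The second request in the flow above can be sketched as follows. The endpoint path comes from the answer itself; the helper name and the use of URLSearchParams are illustrative assumptions:

```javascript
// After Google redirects back, extract state/code from the current URL and
// build the verification URL for the second request in the flow above.
function buildVerifyUrl(baseUrl, redirectedUrl) {
  const params = new URL(redirectedUrl).searchParams;
  const query = new URLSearchParams({
    state: params.get("state"),
    code: params.get("code"),
  });
  return `${baseUrl}/auth/o/google-oauth2/?${query.toString()}`;
}

// Usage:
// 1) GET  http://127.0.0.1:8000/auth/o/google-oauth2/?redirect_uri=http://localhost:3000/
// 2) follow authorization_url, land back on localhost:3000 with ?state=...&code=...
// 3) POST buildVerifyUrl("http://127.0.0.1:8000", window.location.href)
```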
Without more detailed information, I can only suggest two possible reasons:
1. You overrode the backend with improper session operations (or the user was logged out before auth finished).
2. The front-end used an incorrect state parameter.
You could test social login without the front-end. Say you're trying to sign in with Google:
1. Enter the social login URL in the browser, like domain.com:8000/login/google-oauth2/
2. Authorize.
3. See whether the page redirects to your default login page correctly.
If yes, then you probably need to check your front-end code; if not, check your backend code.
Finally, if you're not too sensitive to the potential risk, you could also override the GoogleOAuth2 class as follows to disable the state check:
from social_core.backends import google

class GoogleOAuth2(google.GoogleOAuth2):
    STATE_PARAMETER = False
I think you may need some changes in your authorization flow at steps 3 and 4:
3. Authentication with Google and redirection back to the front-end with the state and code parameters in the URL.
4. The front-end gets the data from the URL and posts it to the back-end to verify that the state received equals the one generated in step 1.
Maybe you should redirect back to the server side after Google's authorization. Then, on the server side, do the check: validate the state and code (and maybe more). Then let the server redirect to the front-end page you originally wanted. For some reason, redirecting to the front-end directly loses the parameter. :-)
Finally, I reached a point where everything is working completely fine, locally as well as in production.
The issue was entirely related to cookies and sessions.
So the right answer is: make it look to your backend server as if the request is coming from localhost:8000, not localhost:3000. In other words, the backend domain should always be the same.
To make this possible you have two options:
1. The server serves the build of the frontend, so the frontend is always on the same domain as the backend.
2. Make a simple view in Django and attach an empty template to it, containing only a script tag with the logic to handle Google auth. Whenever the user clicks "sign in with Google", send them to that view and handle the process there; at the end, when you get your access token back, pass it to the frontend through URL params.
I used the second approach, as it was appropriate for me.
What you need to do is make a simple view and attach a template to it, so that clicking "sign in with Google" hits that view. The rest of the process is handled by the view, and the access token is delivered to the URL you provide.
View Code:
class GoogleCodeVerificationView(TemplateView):
permission_classes = []
template_name = 'social/google.html'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context["redirect_uri"] = "{}://{}".format(
settings.SOCIAL_AUTH_PROTOCOL, settings.SOCIAL_AUTH_DOMAIN)
context['success_redirect_uri'] = "{}://{}".format(
settings.PASSWORD_RESET_PROTOCOL, settings.PASSWORD_RESET_DOMAIN)
return context
Backend template script code:
<body>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.21.1/axios.min.js"></script>
  <script>
    function redirectToClientSide(success_redirect_uri) {
      window.location.replace(`${success_redirect_uri}/signin/`);
    }
    // url-encode an object as application/x-www-form-urlencoded
    function getFormBody(details) {
      return Object.keys(details)
        .map(
          (key) =>
            encodeURIComponent(key) + "=" + encodeURIComponent(details[key])
        )
        .join("&");
    }
    try {
      const urlSearchParams = new URLSearchParams(window.location.search);
      const params = Object.fromEntries(urlSearchParams.entries());
      const redirect_uri = "{{redirect_uri|safe}}";
      const success_redirect_uri = "{{success_redirect_uri|safe}}";
      if (params.flag === "google") {
        axios
          .get(
            `/api/accounts/auth/o/google-oauth2/?redirect_uri=${redirect_uri}/api/accounts/google`
          )
          .then((res) => {
            window.location.replace(res.data.authorization_url);
          })
          .catch((errors) => {
            redirectToClientSide(success_redirect_uri);
          });
      } else if (params.state && params.code && !params.flag) {
        const details = {
          state: params.state,
          code: params.code,
        };
        const formBody = getFormBody(details);
        // axios.defaults.withCredentials = true;
        axios
          .post(`/api/accounts/auth/o/google-oauth2/?${formBody}`)
          .then((res) => {
            const formBody = getFormBody(res.data);
            window.location.replace(
              `${success_redirect_uri}/google/?${formBody}`
            );
          })
          .catch((errors) => {
            redirectToClientSide(success_redirect_uri);
          });
      } else {
        redirectToClientSide(success_redirect_uri);
      }
    } catch {
      redirectToClientSide(success_redirect_uri);
    }
  </script>
</body>
I have a table called "Banner" and a banner upload function in my UI. AWS API Gateway is used, with two resources: /s3 and /banner. I am using two separate requests to do this.
1. POST request, resource /s3. This request runs the Lambda function below to upload the banner image to S3.
UploadBannerToS3.js
...
const s3 = new AWS.S3();
...
const data = await s3.upload(params).promise();
...
This returns an S3 URL pointing at the uploaded banner image.
2. POST request, resource /banner. This request takes the above S3 URL as a parameter and stores the banner information, including the URL, in DynamoDB. The Lambda function looks like this:
CreateBanner.js
...
const { url } = JSON.parse(event.body);
const params = {
  TableName: "Banner",
  Item: {
    id: id,
    url: url,
    createdAt: date,
  }
};
...
const data = await documentClient.put(params).promise();
...
My frontend code (I am using React) looks like this:
handleUploadBanner = async (banner) => {
  const image = await toBase64(banner);
  const payload = { "banner": image };
  try {
    // request 1
    const uploadResponse_S3 = await APIHandler.uploadBannerToS3(payload);
    const s3Url = uploadResponse_S3.data.Location;
    // request 2
    const response = await APIHandler.createBanners({
      url: s3Url,
    });
    console.log(response);
  } catch (error) {
    console.log(error);
  }
};
If request 1 is sent successfully while request 2 fails to return a successful status, would that be a mess for development?
Should I combine these two requests into one single Lambda function?
What is the best practice here?
If the end-user (front-end) wants a "synchronized" response from the API, then we need to design the two APIs as synchronized ones. But that doesn't mean we need to merge them.
If the end-user only wants the first API's response and doesn't care about the second, we can make the second API asynchronous and use a pipeline like:
a. Lambda 1 performs its logic, sends an SNS message, and returns to the end-user.
b. SNS -> SQS -> Lambda 2.
The more we design the system around single responsibility, the better it is for development and maintenance.
Thanks,
If only request 1 is successfully sent, while request 2 fails to return a successful status, would it be a mess for development?
Not necessarily. You could add a retry function in the front-end for simplicity. But it depends, because "mess" is a very abstract concept. What is the requirement? Is it of vital importance that the requests never fail? What do you want to do if they fail?
Should I combine these 2 requests in one single Lambda function to handle it?
Either way, it is better to keep them small and short; that is how you work with AWS Lambdas. But I think you want more control over the outcome, with a better fail-over approach. SQS is one way of doing it; however, it is complex for this case. I would configure a trigger from S3 to Lambda; that way you will only write to the database when the image has been successfully uploaded.
So in summary:
1. Call Lambda 1, which uploads to S3.
2. On successful upload, S3 triggers Lambda 2.
3. Lambda 2 saves to the DB.
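A minimal sketch of the Lambda 2 step in the summary above, assuming the standard S3 event shape and a DynamoDB DocumentClient. The client is injected so the core logic stays testable; the table name mirrors the question, while the URL construction and the choice of the object key as the item id are assumptions:

```javascript
// Lambda 2: triggered by S3 object creation, writes the banner record to
// DynamoDB. The documentClient is injected so the core logic is testable.
function makeBannerHandler(documentClient, tableName) {
  return async (event) => {
    // an S3 event carries the bucket and key of the newly uploaded object
    const record = event.Records[0].s3;
    const url = `https://${record.bucket.name}.s3.amazonaws.com/${record.object.key}`;
    const params = {
      TableName: tableName,
      Item: {
        id: record.object.key, // assumption: the object key doubles as the id
        url: url,
        createdAt: new Date().toISOString(),
      },
    };
    await documentClient.put(params).promise();
    return url;
  };
}

// Usage (in the real Lambda):
// const AWS = require("aws-sdk");
// exports.handler = makeBannerHandler(new AWS.DynamoDB.DocumentClient(), "Banner");
```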
I would prefer to process both in one Lambda: the S3 upload and the DB write. It's simpler and more reliable, and it makes sense because it lets you return a single failure response.
I mean, the app client reads the file item from DynamoDB, not from S3, so whichever step fails we don't need to worry about the app getting a wrong link. Some scenarios:
1. Upload succeeds, DB write succeeds: the app client gets the correct link.
2. Upload succeeds, DB write fails: the app client never gets the link (there is no item).
3. Upload fails, DB write fails: same as point 2.
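A sketch of the single-Lambda variant described above, with both clients injected for testability. The upload and put calls mirror the question's snippets; the factory name and parameter shapes are illustrative assumptions:

```javascript
// One Lambda that uploads to S3 and only writes to DynamoDB if the upload
// succeeded, so a DB row can never point at a missing object.
function makeUploadAndSaveHandler(s3, documentClient, tableName) {
  return async (id, uploadParams) => {
    const uploaded = await s3.upload(uploadParams).promise(); // step 1: S3
    await documentClient
      .put({
        TableName: tableName,
        Item: {
          id: id,
          url: uploaded.Location, // step 2: only reached on upload success
          createdAt: new Date().toISOString(),
        },
      })
      .promise();
    return uploaded.Location;
  };
}
```

If the DB write throws, the whole invocation fails and the client sees one error, which matches scenario 2 above: an orphaned S3 object, but never a dangling DB link.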
I am using React for the front-end part of my code. I want to store the bearer token in cookies and then return it in the content field when the API is called successfully. As I haven't used cookies before, I want to know how I can accomplish this. Also, the back-end part is not done by me.
Following is the code in which I call the API:
onSubmitSignup = () => {
  fetch('https://cors-anywhere.herokuapp.com/http://35.154.16.105:8080/signup/checkMobile', {
    method: 'post',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      mobile: this.state.mobile
    })
  })
  .then(response => response.json())
  .then(data => {
    if (data.statusCode === '2000') {
      localStorage.setItem('mobile', this.state.mobile);
      // this.props.loadNewUser(this.state.mobile);
      this.props.onRouteChange('otp', 'nonav');
    }
  })
  // this.props.onRouteChange('otp','nonav');
}
First of all, on this line:
if (data.statusCode === '2000') {
are you sure the status code shouldn't be 200 rather than 2000?
Secondly, there are packages for managing cookies; one that springs to mind is universal-cookie (see its GitHub repo). However, you can also manage cookies with vanilla JS; more information can be found in the MDN documentation on document.cookie.
When you make that initial API call, I assume the Bearer token is returned within the data. Initialise the cookie there, like so:
document.cookie = "Bearer=example-bearer-token;"
When you need access to the cookie at a later date, you can just use the following code:
const cookieValue = document.cookie
  .split('; ')
  .find(row => row.startsWith('Bearer'))
  .split('=')[1];
And then forward the bearer token with the next call.
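Forwarding the token "with the next call" can be sketched like this. The `Authorization: Bearer ...` header shape is the common convention rather than something stated by the back-end team here, and the helper name is illustrative:

```javascript
// Read the Bearer token out of document.cookie-style text and build the
// headers for an authenticated request.
function buildAuthHeaders(cookieText) {
  const row = cookieText
    .split('; ')
    .find((r) => r.startsWith('Bearer'));
  const token = row ? row.split('=')[1] : null;
  return {
    'Content-Type': 'application/json',
    // only attach the header when a token was actually found
    ...(token && { Authorization: `Bearer ${token}` }),
  };
}

// Usage:
// fetch(url, { method: 'post', headers: buildAuthHeaders(document.cookie), body: ... });
```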
Edit
Set the cookie bearer token like this:
document.cookie = "Bearer=example-bearer-token;"
Get the cookie bearer token like this:
const cookieValue = document.cookie
  .split('; ')
  .find(row => row.startsWith('Bearer'))
  .split('=')[1];
A cookie is made up of key/value pairs separated by semi-colons. The code above therefore gets the cookie string, splits it into its key/value pairs, finds the row whose key is "Bearer", and splits that row to obtain the bearer token.
In your comment you say the dev team told you the bearer will be in the "content". In your request you already have access to that content through data; you need to debug the request to find out what comes back. I assume you just need to grab the token from the returned data inside the "if" block where you check the statusCode. It will be something like:
document.cookie = "Bearer=" + data.bearer;
However, I don't have the shape of your data, so you will have to work that final part out yourself.
I have a server-side-rendered React app using Firebase Firestore. An area of my site server-side-renders content that needs to be retrieved from Firestore. Currently, I am using Firestore rules to allow anyone to read data from these particular docs.
What worries me is that some bad actor could set up a script to continuously hit my database with reads and run up my bill (since we are charged on a per-read basis, it seems it's never wise to allow anyone to perform reads).
Current Rule
// Allow anonymous users to read feeds
match /landingPageFeeds/{pageId}/feeds/newsFeed {
  allow read: if true;
}
Best Way Forward?
How do I allow my server-side script to read from Firestore while not allowing anyone else to do so? Keep in mind, this is an initial action that runs server-side before hydrating the client-side with the pre-loaded state. The function is also shared with the client side for page-to-page navigation.
I considered anonymous login, which worked; however, it generated a new anonymous user with every page load, and Firebase throttles new email/password and anonymous accounts. It did not seem practical.
Solution
Per Doug's comment, I thought more about the Admin SDK. I ended up creating a separate API in Firebase Functions for anonymous requests that require secure Firestore reads whose responses can be cached.
Goals
1. Continue to deny public reads of my Firestore database.
2. Allow anonymous users (such as first-time visitors and search engines) to trigger Firestore reads for server-side-rendered React pages that need Firestore data.
3. Prevent "read spam", where a third party hits my database with millions of reads to drive up my cloud costs, by serving responses from a server-side CDN cache. (By invoking unnecessary reads in a loop, I once ran up a huge bill by accident; I want to make sure strangers can't do this maliciously.)
Admin SDK & Firebase Function Caching
The Admin SDK lets me read securely from Firestore while my security rules deny access to non-authenticated users. Firebase Functions handling GET requests support caching the response on the server. This means that subsequent hits from identical queries will not re-run my function (Firestore reads, other function invocations); they will instantly respond with the same data again.
Process
1. An anonymous client visits a server-side-rendered React page.
2. The initial server-side render triggers a Firebase function (HTTPS trigger).
3. The function uses the Admin SDK to read from the secured Firestore database.
4. The function caches the response for 3 hours: res.set('Cache-Control', 'public, max-age=600, s-maxage=10800');
5. Subsequent requests from any client for the next 3 hours are served from the cache, avoiding unnecessary reads or additional computation.
Note: caching does not work locally; you must deploy to Firebase to test the caching behaviour.
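The client side of step 2 can be sketched as below. The action/feedId query parameters come from the example function elsewhere in this answer; the functions base URL and the helper name are assumptions:

```javascript
// Build the GET URL for the cached feed endpoint. Identical URLs produce
// identical cache keys, which is what makes the CDN cache effective.
function buildFeedUrl(functionsBase, feedId) {
  const query = new URLSearchParams({ action: "feed", feedId: feedId });
  return `${functionsBase}/cachedApi?${query.toString()}`;
}

// Usage (during server-side rendering or client navigation):
// const res = await fetch(
//   buildFeedUrl("https://us-central1-myproject.cloudfunctions.net", "newsFeed")
// );
// const feed = await res.json();
```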
Example Function
const functions = require("firebase-functions");
const cors = require('cors')({ origin: true });
const { sendResponse } = require("./includes/sendResponse");
const { getFirestoreDataWithAdminSDK } = require("./includes/getFirestoreDataWithAdminSDK");

const cachedApi = functions.https.onRequest((req, res) => {
  cors(req, res, async () => {
    // Set a cache for the response to limit the impact of identical requests on expensive resources
    res.set('Cache-Control', 'public, max-age=600, s-maxage=10800');
    // If POST, respond with a bad request code - POST requests are not cached
    if (req.method === "POST") {
      return sendResponse(res, 400);
    } else {
      // Get the requested action from the query string
      let action = (req.query.action) ? req.query.action : null;
      console.log("Action: ", action);
      try {
        // Handle actions appropriately
        switch (true) {
          // Get feed data
          case (action === "feed"): {
            console.log("Getting feed...");
            // Get feed id
            let feedId = (req.query.feedId) ? req.query.feedId : null;
            // Get feed data
            let feedData = await getFirestoreDataWithAdminSDK(feedId);
            return sendResponse(res, 200, feedData);
          }
          // No valid action specified
          default: {
            return sendResponse(res, 400);
          }
        }
      } catch (err) {
        console.log("Cached API Error: ", err);
        return sendResponse(res, 500);
      }
    }
  });
});

module.exports = {
  cachedApi
};