Proxy doesn't work with fetch() in React.js

I've made a simple project in React; the client runs on port 3000 and the server on port 3001.
If I open localhost:3001/api/visitator/cars directly it works correctly, but when I make the GET request from the client I get this error in the console: http://localhost:3000/api/visitator/cars 404 (Not Found).
I don't know why, but the request goes to port 3000 instead of 3001, even though package.json contains
"proxy": "http://localhost:3001".
This is the code in client/api:
async function askForCars() {
  const url = '/api/visitator/cars';
  const response = await fetch(url);
  const carJson = await response.json();
  if (response.ok) {
    console.log(carJson);
    return carJson;
  } else {
    // Throw an object with the error coming from the server
    const err = { status: response.status, errObj: carJson };
    throw err;
  }
}

There are two ways to solve this:
You have to give the full path rather than a relative path, since your server lies on a different origin (the ports differ). So your url variable should be the domain name plus the URI, i.e. http://localhost:3001/api/visitator/cars.
The second way to solve this would be to add redirect rules on the server where you are hosting the app, so that every request to a http://localhost:3000/api URI is redirected to http://localhost:3001/api.
I think the quick solution is the first one for now, in case you don't have a requirement to redirect API calls to the actual server. Hope it helps.
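
A minimal sketch of the first fix, assuming the backend stays on port 3001 (API_BASE is an illustrative constant, not from the original post):

const API_BASE = 'http://localhost:3001';

async function askForCars() {
  // An absolute URL bypasses the dev-server proxy entirely
  const response = await fetch(`${API_BASE}/api/visitator/cars`);
  const carJson = await response.json();
  if (!response.ok) {
    throw { status: response.status, errObj: carJson };
  }
  return carJson;
}

Note that with an absolute URL the browser now makes a cross-origin request, so the server on port 3001 must send CORS headers (see the CORS question further down).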

Related

How to deal with cookies, settings persistence

[DEVELOPMENT] - All fine (no issues as cookies are set on same domain 'localhost')
[PROD / LIVE] - Link below
The issue: the cookie is not being set properly, or is not being persisted; I don't know why, so I'd appreciate any idea of what the cause might be. As you can see in the Network tab, the response sets the cookie, but it never appears under Application > Storage > Cookies. You can have a look yourself if I'm not explaining it well:
https://gta-open-q99pjtak6-patricksubang.vercel.app/
username: demo
password: demo
Front-end POST/GET request:
const sendRequest = async (method, endpoint, custom) => {
  // Pick the API base URL depending on the environment
  const baseUrl =
    process.env.NODE_ENV === "development"
      ? "http://localhost:8000/"
      : "https://gta-open.ga/";
  const url = baseUrl + endpoint;
  const response = await fetch(url, {
    method: method,
    mode: "cors",
    credentials: "include",
    ...custom,
  });
  return response;
};
Setting the session cookie using gorilla/sessions:
Cookie.Options.Path = "/"
Cookie.Options.HttpOnly = true
Cookie.Options.SameSite = http.SameSiteNoneMode

// Secure must be true in production, since SameSite=None requires it
state := false
if os.Getenv("ENV") == "PROD" {
    state = true
}
Cookie.Options.Secure = state
func GenerateSession(w http.ResponseWriter, r *http.Request, uid int) (err error) {
    session, _ := Cookie.Get(r, "sessionid")
    session.Values["accountID"] = uid
    // Save it before we write to the response/return from the handler.
    err = session.Save(r, w)
    return
}
If anyone can give me a rough idea of why this happens, or what the cause of the issue might be, that would be helpful.
Many thanks!
I'm not familiar with a Golang backend or gorilla/sessions, but I'm pretty sure it's how you're configuring your cookie settings. Specifically, I believe the cookie domain being set to gta-open.ga doesn't match vercel.app, so the cookie is either being blocked by the browser's third-party cookie settings or not being used because the domain names don't match.
For example, if the user's browser blocks third-party cookies, the cookie is rejected and never set at all. If I allow third-party cookies from cross-domains, the cookie is set, but since it has a different domain from the app it still isn't used.
On that note, I'd highly suggest purchasing your own domain. They're pretty cheap (around US$10 / £7 or less per year); then you can set up your Vercel app to use the custom domain, set your cookie to use that custom domain name, and it should be smooth sailing.
Otherwise, you'd have to set the cookie's domain to .vercel.app in production (which isn't recommended, because Vercel hosts a lot of apps on its subdomains) or use the full gta-open-q99pjtak6-patricksubang.vercel.app domain, but that's kind of silly.
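
For illustration, here is roughly what those cookie attributes look like on a custom domain, sketched in Express rather than the asker's Go/gorilla stack (.example.com and createSession are hypothetical):

// Express sketch: app and API are assumed to share the custom domain .example.com
app.post('/login', (req, res) => {
  res.cookie('sessionid', createSession(req.body), { // createSession is hypothetical
    domain: '.example.com', // both sites must live under this domain
    path: '/',
    httpOnly: true,
    sameSite: 'none', // cross-site requests may carry the cookie...
    secure: true,     // ...but SameSite=None requires Secure
  });
  res.sendStatus(200);
});

The same attributes map one-to-one onto the gorilla/sessions Options shown in the question.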

Django, Djoser social auth: State could not be found in server-side session data. status_code 400

I'm implementing an auth system with Django and React. The two apps run on ports 8000 and 3000 respectively. I have implemented the authentication system using the Djoser package, which depends on social_core and social_django. Everything seems to be configured correctly: I click the Google login button, I'm redirected to the Google login page, and then back to my front-end React app on port 3000 with the state and code parameters in the URL.
At this point I post those parameters to the backend. The backend tries to validate the state by checking whether the state key is present in the session storage, using the code below (from social_core/backends/oauth.py):
def validate_state(self):
    """Validate state value. Raises exception on error, returns state
    value if valid."""
    if not self.STATE_PARAMETER and not self.REDIRECT_STATE:
        return None
    state = self.get_session_state()
    request_state = self.get_request_state()
    if not request_state:
        raise AuthMissingParameter(self, 'state')
    elif not state:
        raise AuthStateMissing(self, 'state')
    elif not constant_time_compare(request_state, state):
        raise AuthStateForbidden(self)
    else:
        return state
At this point, for some reason, the state session key is not there, and I receive an error saying that the state cannot be found in the session data (error below):
{"error":["State could not be found in server-side session data."],"status_code":400}
To recap the actions I perform:
1. The front end asks the backend to generate a redirect URL for the google-oauth2 provider. This generates the URL and also stores the state in the session under a specific key (google-oauth2_state).
2. The front end receives the URL and redirects to the Google auth page.
3. After authentication with Google, the user is redirected back to the front end with state and code parameters in the URL.
4. The front end reads the data from the URL and posts it to the backend, which verifies that the received state equals the one generated in step 1.
For some reason the state is not persisted... Any ideas and help would be really appreciated.
Thanks to all.
OK, so this is a common problem when you're working with social auth; I've hit it many times myself.
The flow:
1. Make a request to http://127.0.0.1:8000/auth/o/google-oauth2/?redirect_uri=http://localhost:3000/ (example).
2. You will get an authorization_url. If you look closely, there is a state parameter in this authorization_url: this is the server-side state.
3. Now follow the authorization_url link. You will get the Google auth page, and afterwards you will be redirected to your redirect URL with a state and a code. This state should be the same as the server-side state from step 2.
4. Make a POST request to http://127.0.0.1:8000/auth/o/google-oauth2/?state=''&code=''.
If the states are not the same, you will get this issue. Every time you want to log in, you need to make a request to http://127.0.0.1:8000/auth/o/google-oauth2/?redirect_uri=http://localhost:3000/ first and then to http://127.0.0.1:8000/auth/o/google-oauth2/?state=''&code='', so that you end up with the same state. A front-end sketch of this flow follows.
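
A rough front-end sketch of that two-step flow (the endpoints are the ones from this answer; the fetch options are assumptions — in particular credentials: 'include', so the Django session that stores the server-side state survives between the two calls):

// Step 1: ask the backend for the Google authorization URL
async function startGoogleLogin() {
  const res = await fetch(
    'http://127.0.0.1:8000/auth/o/google-oauth2/?redirect_uri=http://localhost:3000/',
    { credentials: 'include' } // keep the session cookie that stores the state
  );
  const { authorization_url } = await res.json();
  window.location.href = authorization_url;
}

// Step 2: back on localhost:3000, forward state and code to the backend
async function finishGoogleLogin() {
  const params = new URLSearchParams(window.location.search); // state & code
  const res = await fetch(
    `http://127.0.0.1:8000/auth/o/google-oauth2/?${params.toString()}`,
    { method: 'POST', credentials: 'include' }
  );
  return res.json(); // tokens on success
}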
Without more detailed information, I can only suggest two possible reasons:
You overrode the backend with improper session operations (or the user was logged out before the auth finished).
The front end used an incorrect state parameter.
You can test social login without the front end. Say you're trying to sign in with Google:
Enter the social login URL in the browser, like domain.com:8000/login/google-oauth2/
Authorize
See if the page redirects to your default login page correctly
If yes, then you probably need to check your front-end code; if not, check your backend code.
In the end, if you're not too sensitive to the potential risk, you could also override the GoogleOAuth2 class as follows to disable the state check:
from social_core.backends import google

class GoogleOAuth2(google.GoogleOAuth2):
    STATE_PARAMETER = False
I think you may need some changes in your authorization flow at steps 3 and 4:
3. Authentication with Google and redirection back to the front end with state and code parameters in the URL.
4. The front end gets the data from the URL and posts it to the backend, to verify that the received state equals the one generated in step 1.
Maybe you should redirect back to the server side after Google's authorization. Then, on the server side, do the check: validate the state and code (and maybe do more). Then let the server redirect to the front-end page you originally wanted. For some reason, redirecting straight to the front end loses the parameters. :-)
Finally, I reached a point where everything works fine, locally as well as in production.
The issue was entirely related to cookies and sessions.
So the short version of the right answer is:
make it look to your backend server as if the request is coming from localhost:8000, not localhost:3000;
in other words, the backend domain should always stay the same.
To make that possible you have two options:
1. The server serves the build of the front end, so the front end is always on the same domain as the backend.
2. Make a simple view in Django and attach an empty template to it containing only a script tag with the logic to handle the Google auth. Clicking "sign in with Google" always takes you back to that view, which handles the process; at the end, when you get your access token, you pass it to the front end through URL params.
I used the 2nd approach, as it was the more appropriate one for me.
What you need to do is make a simple view and attach a template to it, so that clicking "sign in with Google" hits that view; the view handles the rest of the process, and the access token is delivered to the URL you specify.
View Code:
class GoogleCodeVerificationView(TemplateView):
    permission_classes = []
    template_name = 'social/google.html'

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context["redirect_uri"] = "{}://{}".format(
            settings.SOCIAL_AUTH_PROTOCOL, settings.SOCIAL_AUTH_DOMAIN)
        context['success_redirect_uri'] = "{}://{}".format(
            settings.PASSWORD_RESET_PROTOCOL, settings.PASSWORD_RESET_DOMAIN)
        return context
Template script code:
<body>
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/axios/0.21.1/axios.min.js"></script>
  <script>
    function redirectToClientSide(success_redirect_uri) {
      window.location.replace(`${success_redirect_uri}/signin/`);
    }
    // Encode an object as application/x-www-form-urlencoded
    function getFormBody(details) {
      return Object.keys(details)
        .map(
          (key) =>
            encodeURIComponent(key) + "=" + encodeURIComponent(details[key])
        )
        .join("&");
    }
    try {
      const urlSearchParams = new URLSearchParams(window.location.search);
      const params = Object.fromEntries(urlSearchParams.entries());
      const redirect_uri = "{{redirect_uri|safe}}";
      const success_redirect_uri = "{{success_redirect_uri|safe}}";
      if (params.flag === "google") {
        // Step 1: fetch the authorization URL and send the user to Google
        axios
          .get(
            `/api/accounts/auth/o/google-oauth2/?redirect_uri=${redirect_uri}/api/accounts/google`
          )
          .then((res) => {
            window.location.replace(res.data.authorization_url);
          })
          .catch((errors) => {
            redirectToClientSide(success_redirect_uri);
          });
      } else if (params.state && params.code && !params.flag) {
        // Step 2: exchange state and code for tokens, then hand them to the front end
        const details = {
          state: params.state,
          code: params.code,
        };
        const formBody = getFormBody(details);
        // axios.defaults.withCredentials = true;
        axios
          .post(`/api/accounts/auth/o/google-oauth2/?${formBody}`)
          .then((res) => {
            const formBody = getFormBody(res.data);
            window.location.replace(
              `${success_redirect_uri}/google/?${formBody}`
            );
          })
          .catch((errors) => {
            redirectToClientSide(success_redirect_uri);
          });
      } else {
        redirectToClientSide(success_redirect_uri);
      }
    } catch {
      redirectToClientSide(success_redirect_uri);
    }
  </script>
</body>

React and Nodemailer

I am running VSCode, Node.js, Nodemailer, and React.js on a Windows machine, but I cannot get Nodemailer to send email, even though according to the instructions on the web it should work. Finally I tried the following: I created two empty folders, ran npm init in both, installed Nodemailer, and copied in the email-sending code. In one of the folders I also ran create-react-app. Then I edited the files just enough to get the sending code running.
In the plain Node folder it works without problems, but in the folder with React it does nothing. Not even the usual if (error) / else (success) callbacks get executed; they are simply skipped. However, the code before and after the transporter.sendMail (or .verify) calls does execute... Does anyone know why this happens or how to fix it?
This is the code I run in both the CRA and non-CRA folders:
const nodemailer = require("nodemailer");

const SendEmail = message => {
  const transporter = nodemailer.createTransport({
    service: "Gmail",
    auth: {
      user: "from@gmail.com",
      pass: "xxxxxxxx"
    }
  });

  transporter.verify(function(error) {
    if (error) {
      console.log(error);
    } else {
      console.log("Server is ready to take our messages");
    }
  });

  const mailOptions = {
    from: "from@gmail.com",
    to: "to@gmail.com",
    subject: "Subject",
    text: message,
    html: "<b>Html</b>"
  };

  transporter.sendMail(mailOptions, (err, info) => {
    if (err) console.log(err);
    else console.log(info.response);
  });
};

module.exports = SendEmail;
Tim
Gmail has a spam filter, so without proper configuration your mail may get through sometimes but not most of the time.
Also, it is not a good idea to send email from your client app, such as React, since everyone can then get at your email address and password and do nasty things with them.
Best practice is to ask your Node server to send the mail.
Other than that, I noticed that you used Gmail. There are free fake SMTP services, such as Mailtrap, that let you test without the mail provider flagging you as a spammer. If you are just interested in testing whether React can trigger an email, try it with Mailtrap. In any case it is better than using your own email provider, as providers have filter rules that could be the reason you're not able to send.
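
Note that Nodemailer relies on Node's networking APIs, which don't exist in the browser, so the React build can't run it directly. A minimal sketch of the recommended split, reusing the SendEmail module from the question (the Express server, the /api/send-email route, and the port are illustrative assumptions):

// server.js — mail is sent here, never in the browser
const express = require("express");
const SendEmail = require("./SendEmail"); // the module from the question

const app = express();
app.use(express.json());

app.post("/api/send-email", (req, res) => {
  SendEmail(req.body.message); // Gmail credentials stay on the server
  res.json({ status: "queued" });
});

app.listen(3001, () => console.log("Mail API listening on 3001"));

The React side then only makes a request:

fetch("/api/send-email", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Hello from React" }),
});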

How do you make a request from client to server locally using fetch without getting an opaque response?

I'm running a react app on localhost:3000, and a go server on localhost:8000.
When I make a request from the client using fetch, the response is opaque, so I'm not able to access the data.
How do I make a valid cross-origin request?
client:
componentWillMount() {
  const url = 'https://localhost:8000/api/items'
  fetch(url, { mode: 'cors' })
    .then(results => {
      return results.json()
    })
    .then(data => {
      let items = data;
      this.setState({ items })
    })
}
server:
func GetItems(w http.ResponseWriter, r *http.Request) {
    items := getItems()
    w.Header().Set("Access-Control-Allow-Origin", "*")
    json.NewEncoder(w).Encode(items)
}
From what I've read, it's expected that requests made across origins can be opaque, but for local development, how do you get access to the JSON?
After looking at the definitions of the response types, I found this:
cors: Response was received from a valid cross-origin request. Certain headers and the body may be accessed.
I think I need to set up a valid cross-origin request.
I got it!
This question helped me work out how to set up CORS in Golang: Enable CORS in Golang
Three key things here:
1. Set the mode in the client request to cors.
2. Set the Access-Control-Allow-Origin header on the server to *.
3. Call .json() on the result in the client; in a following promise you can access the data.
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Add("Access-Control-Allow-Headers", "Content-Type")
w.Header().Set("content-type", "application/json")
You can try adding them in the handleFunc.
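
A quick way to confirm the fix from the client is to check the response.type property, which fetch sets to 'cors' for a readable cross-origin response and 'opaque' for an unreadable one (the URL is the one from the question, with plain http assumed for local development):

fetch('http://localhost:8000/api/items', { mode: 'cors' })
  .then(response => {
    console.log(response.type); // 'cors' = readable; 'opaque' = not readable
    return response.json();
  })
  .then(items => console.log(items));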

Only one auth mechanism allowed; only the X-Amz-Algorithm query parameter..?

I am trying to send a PUT request to an Amazon S3 presigned URL. My request seems to be sent twice even though I only make one PUT request. The first request returns 200 OK; the second one returns 400 Bad Request.
Here is my code:
var req = {
  method: 'PUT',
  url: presignedUrl,
  headers: {
    'Content-Type': 'text/csv'
  },
  data: <some file in base64 format>
};

$http(req).success(function(result) {
  console.log('SUCCESS!');
}).error(function(error) {
  console.log('FAILED!', error);
});
The 400 Bad Request error in more detail:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>InvalidArgument</Code>
  <Message>Only one auth mechanism allowed; only the X-Amz-Algorithm query parameter, Signature query string parameter or the Authorization header should be specified</Message>
  <ArgumentName>Authorization</ArgumentName>
  <ArgumentValue>Bearer someToken</ArgumentValue>
  <RequestId>someRequestId</RequestId>
  <HostId>someHostId</HostId>
</Error>
What I don't understand is: why is it returning 400, and what's the workaround?
Your client is probably sending an initial request that uses an Authorization header, which is answered with a 302. The response includes a Location header that has a Signature parameter. The problem is that the headers from the initial request are copied into the subsequent redirect request, so that it contains both Authorization and Signature. If you remove the Authorization header from the subsequent request, you should be good.
This happened to me, but in a Java / HttpClient environment. I can provide details of the solution in Java, but unfortunately not for AngularJS.
For the Googlers: if you're sending a signed (Signature Version 4) S3 request via CloudFront and "Restrict Bucket Access" is set to "Yes" in your CloudFront origin settings, CloudFront will add the Authorization header to your request and you'll get this error. Since you've already signed your request, though, you should be able to turn this setting off without sacrificing any security.
I know this may be too late to answer, but as @mlohbihler said, the cause of this error for me was the Authorization header being sent by the HTTP interceptor I had set up in Angular.
Essentially, I had not properly filtered out the AWS S3 domain, so requests to it automatically got the JWT Authorization header as well.
Also, the 400 "invalid argument" may surface as a result of wrong config/credentials for the S3::Presigner that presigns the URL to begin with. Once you get past the 400, you may encounter a 501 "not implemented" response like I did; I was able to solve that by specifying a Content-Length header (documented as a required header). Hopefully that helps @arjuncc; it solved my Postman issue when testing S3 image uploads with a presigned URL.
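
A sketch of that kind of filtering with an axios request interceptor (the .amazonaws.com hostname check and the getJwtToken() helper are illustrative assumptions, not from the original answer):

import axios from "axios";

axios.interceptors.request.use((config) => {
  // Presigned S3 URLs already carry their auth in the query string,
  // so don't attach the app's JWT to them
  const host = new URL(config.url, window.location.origin).hostname;
  if (!host.endsWith(".amazonaws.com")) {
    config.headers.Authorization = `Bearer ${getJwtToken()}`; // hypothetical helper
  }
  return config;
});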
The message says that only ONE auth mechanism is allowed. It could be that you are sending one in the URL as query parameters and another in the headers as an Authorization header.
import 'package:dio/adapter.dart';
import 'package:dio/dio.dart';
import 'package:scavenger_inc_flutter/utils/AuthUtils.dart';
import 'package:scavenger_inc_flutter/utils/URLS.dart';

class ApiClient {
  static Dio dio;

  static Dio getClient() {
    if (dio == null) {
      dio = new Dio();
      dio.httpClientAdapter = new CustomHttpAdapter();
    }
    return dio;
  }
}

class CustomHttpAdapter extends HttpClientAdapter {
  DefaultHttpClientAdapter _adapter = DefaultHttpClientAdapter();

  @override
  void close({bool force = false}) {
    _adapter.close(force: force);
  }

  @override
  Future<ResponseBody> fetch(RequestOptions options,
      Stream<List<int>> requestStream, Future<dynamic> cancelFuture) async {
    String url = options.uri.toString();
    // Attach the JWT only on requests to our own API
    if (url.contains(URLS.IP_ADDRESS) && await AuthUtils.isLoggedIn()) {
      options.followRedirects = false;
      options.headers.addAll({"Authorization": await AuthUtils.getJwtToken()});
    }
    final response = await _adapter.fetch(options, requestStream, cancelFuture);
    // If we were redirected away from our API, retry without the auth headers
    if (response.statusCode == 302 || response.statusCode == 307) {
      String redirect = (response.headers["location"][0]);
      if (!redirect.contains(URLS.IP_ADDRESS)) {
        options.path = redirect;
        options.headers.clear();
      }
      return await fetch(options, requestStream, cancelFuture);
    }
    return response;
  }
}
What the adapter above does:
I disallowed following redirects.
I used the response object to check whether the request was redirected.
If it was a 302 or 307 (HTTP redirect codes), I resent the request after clearing the auth headers.
I used an additional check to send the headers only if the path contained my specific domain URL (or IP address, in this example).
All of the above uses a custom HttpClientAdapter in Dio. It can also be used for images, by changing the ResponseType to bytes.
Let me know if this helps you!
I was using Django REST framework with token authentication on the REST API. I used to pass the token in the request header (using the ModHeader browser extension, which automatically puts the token in the Authorization header), and up to that point everything was fine with the Django API.
But when clicking on images/files (which now resolve to S3 URLs), the Authorization header automatically got passed along as well; hence the issue.
The link looks similar to this:
https://.s3.amazonaws.com/media//small_image.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=XXXXXXXXXXXXXXXXXXXXX%2F20210317%2Fap-south-XXXXXXXXFaws4_request&X-Amz-Date=XXXXXXXXXXXXXXX&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
I locked the ModHeader extension so it passes the Authorization token only on requests to the REST API, and not on requests to S3 resources, i.e. no Authorization header is sent when requesting an S3 resource.
It's a silly mistake, but in case it helps.
Flutter: if you experience this with the http Dart package, upgrade to Flutter v2.10!
Related bugs in dart issue tracker:
https://github.com/dart-lang/sdk/issues/47246
https://github.com/dart-lang/sdk/issues/45410
These have been fixed in Dart 2.16, which ships with Flutter v2.10:
https://medium.com/dartlang/dart-2-16-improved-tooling-and-platform-handling-dd87abd6bad1
