I have an Express server that has written a cookie, but I cannot access it from the client side. I can see it in the Chrome dev tools, and it is NOT marked as httpOnly or Secure, yet when I try to access it via my React app, or even just by typing document.cookie in the browser console, I get nothing.
Right now the Express server is running on Heroku, and my client side is on localhost.
I'm stumped.
Here is my server side code that is setting the cookie:
return res
  .status(200)
  .cookie('id_token', token, {
    httpOnly: false,
    path: '/',
    secure: false,
    maxAge: 400000
  })
  .json({
    token: token
  });
The Express server is running on Heroku and the client is served from localhost.
A cookie set by the Express server is scoped to the current host when the Domain attribute for the cookie isn't set. [1]
Say your application is served at express.herokuapp.com; scripts can only read the cookie when they are running on that same host, i.e. express.herokuapp.com.
However, with cookie scoping, a cookie set on a domain can be read by scripts running in its subdomains.
In development, you can set the Domain attribute for the cookie to .herokuapp.com.
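A minimal sketch of what that could look like with the res.cookie call from above (the widened Domain is for development only):

return res
  .status(200)
  .cookie('id_token', token, {
    httpOnly: false,
    path: '/',
    secure: false,
    maxAge: 400000,
    domain: '.herokuapp.com' // development only: any *.herokuapp.com host can read it
  })
  .json({ token: token });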
For production, the right scope depends on where things are deployed. If both client and server run in the same domain, keep the default cookie scope. If they run in different subdomains, you can apply the same widening as in development, but you should only do this if client apps in the other subdomains are actually meant to share the cookie. If client and server run in entirely different domains, I strongly suggest explicitly scoping the cookie to the client domain.
Then add the following entry to your /etc/hosts to alias localhost to a subdomain of herokuapp.com:
127.0.0.1 local.herokuapp.com
Visit the aliased address and the client-side script will be able to read the cookie.
I have an application that runs fine when viewed from the local machine/VM, but if I start the application in a VM and try to access it via {vm-IP:PORT}, Express and React are not able to communicate with each other.
How React communicates:
const api = axios.create({
  baseURL: "http://localhost:5000"
})
How Express communicates:
router.use(cors({
  origin: 'http://localhost:3000'
}))
I would have to hard-code the IP address of the machine into the code to have them communicate correctly (replacing localhost with the IP), but that would mean editing the application every time it is run on a different machine.
Is there a workaround for this? The application is also dockerized, so I wouldn't mind pasting an IP address into the docker-compose file so that React and Express can communicate, but I'm not sure if that's possible.
You can use ngrok for this.
See the docs: https://ngrok.com/docs
Basically, ngrok lets you expose a local port on the internet without actually hosting the app anywhere.
It is very easy to use: install ngrok on your system, then run ngrok http 5000 in your terminal to expose your Express port 5000 on the internet.
ngrok will serve your port for 8 hours, after which the ngrok URL is deactivated.
Running the above command will give you a URL, which you can use in your React application as below:
const api = axios.create({
  baseURL: "<<url_generated_by_ngrok>>"
})
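Requests made through that instance then travel over the ngrok tunnel; a quick usage sketch (the endpoint path is hypothetical):

api.get('/users').then(response => {
  console.log(response.data); // hypothetical endpoint, served by the tunnelled Express app
});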
According to the documentation,
app.use(cors({origin: true}));
will accept every origin and set a corresponding Access-Control-Allow-Origin header. That is better than setting Access-Control-Allow-Origin: *, which browsers do not accept when credentials are involved.
If accepting every origin is too broad, you can also implement rules in a function and use
app.use(cors({origin: myOriginFunction}));
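For illustration, a hypothetical origin function that admits only example.com and its subdomains could look like this (the domain names are assumptions, purely for the sake of the example):

function myOriginFunction(origin, callback) {
  // Allow requests with no Origin header (e.g. curl or same-origin requests)
  if (!origin) return callback(null, true);
  const hostname = new URL(origin).hostname;
  const allowed = hostname === 'example.com' || hostname.endsWith('.example.com');
  callback(null, allowed); // cors reflects the origin only when allowed is true
}

app.use(cors({origin: myOriginFunction}));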
I'm working on a project which uses PassportJS Google OAuth 2.0. When I test on my local machine (with a React client on localhost:3000 and an Express server on localhost:4000), the flow works fine. I am able to send requests to the server and deserialize the user on each request. However, when I host the client on Google Firebase Hosting and the server on Heroku, the user no longer gets deserialized on each request.
Here are some specifics of the things I've done / tried / that worked locally, along with some extra information:
The client and server are hosted on different domains.
I am using axios to send requests to the server. In each request, I make sure to set the "withCredentials" option to true, so that the cookies tied to that domain are sent along.
On the server I have CORS enabled for the domain the client is hosted on (since it is currently on a different domain), and I have "credentials" set to true to allow the credentials to be sent and received, roughly as sketched below.
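For reference, that combination corresponds to something like this sketch (both URLs are placeholders, not the project's real ones):

// Client: include cookies on cross-site requests
const api = axios.create({
  baseURL: "https://my-api.herokuapp.com", // placeholder server URL
  withCredentials: true
});

// Server: allow the hosted client origin and credentialed requests
app.use(cors({
  origin: "https://my-client.web.app", // placeholder Firebase Hosting URL
  credentials: true
}));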
Please let me know if I've forgotten to include something in the post or if any extra information would be helpful. Thank you in advance.
I don't know if you've fixed this, but I had the exact same problem; in my case, adding sameSite: "none" to my express-session cookie settings made it work.
cookie: {
  sameSite: "none", // add this line
  ...
},
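One caveat worth knowing: browsers only accept SameSite=None cookies that are also marked Secure, so a fuller express-session setup might look like this sketch (the secret handling is an assumption):

const session = require('express-session');

app.use(session({
  secret: process.env.SESSION_SECRET, // assumption: secret comes from the environment
  resave: false,
  saveUninitialized: false,
  cookie: {
    sameSite: "none", // allow the cookie on cross-site requests
    secure: true      // browsers require Secure when sameSite is "none"
  }
}));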
So I have an app (React client, Express backend) that is behind an nginx proxy with SSL.
But when I try to authenticate through Okta, I get a successful authentication, and then when I go to redirect:
const auth = await this.props.auth;
auth.redirect({ sessionToken: this.props.sessionToken });
I get kicked back to Okta's login page to authenticate again. It works in Firefox (most of the time) but never works in Chrome or IE.
In Chrome I get warnings in the console about HPKP headers, so I'm thinking that's the cause, but I'm not sure why it would be, since everything should be over SSL.
I'm not sure what code to put here, because I'm not entirely sure where the problem is.
How is the token passed/stored?
HPKP = HTTP Public Key Pinning
It's a method of verifying that the supplied security certificate belongs to the specific web server you are connecting to, to mitigate MITM attacks, forged certificates, etc.
I've no experience setting it up, but it's all about passing the correct headers. There are several nginx proxying directives you should look at, as nginx doesn't pass all headers by default and you may have to rewrite some; proxy_redirect, proxy_set_header and proxy_pass_header are probably good places to start.
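As a rough sketch only (the upstream address is an assumption, and this is a generic proxy block, not a known HPKP fix):

location / {
    proxy_pass         http://127.0.0.1:3000;           # assumed upstream app
    proxy_set_header   Host $host;                      # preserve the original Host header
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto $scheme;       # tell the app it is behind https
}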
Could be tricky as HPKP exists to stop anyone getting between the client and the server, and proxies exist to get between the client and the server. Do let me know what happens.
I am new to WebStorm. I have a sample application that consists of two projects: a client (AngularJS) and a server (Node.js).
When I start the server, app.js (an API that returns JSON), it listens on port 7200.
When I start the client (the AngularJS app) via index.html, it runs on port 63342.
But the API call from the client to the server does not work, because the client requests the URL
$http({method: 'GET', url: 'api/maa'})
which resolves to http://localhost:63342/quickstart/src/client/api/maa, while the server listens on port 7200 (http://localhost:7200/api/maa).
How can I fix this? Is it possible to launch both server and client on the same port?
Sure. You can even start both using the same run configuration. The Node.js run configuration has a 'Browser/LiveEdit' tab that lets you launch the browser and debug the client code. Check the 'After launch' checkbox there, specify the URL your front end is served on (http://localhost:7200), and enable the 'with JavaScript debugger' option.
I am having the following problem. I will describe three use cases: two which work and one which doesn't.
I have an AngularJS client using SockJS with STOMP. On the backend I have a Spring application. The client is on domainA.com, the backend on domainB.com.
var socket = new SockJS("http://domainB.com/myApp/messages");
stompClient = Stomp.over(socket);
stompClient.connect('guest', 'guest', function(frame) {
  ...
});
In the backend there are CORS filters and cross-origin calls are possible. All works fine.
Use Case 1. Client domainA, Server domainB
My application is unsecured on the backend side. I subscribe like below:
stompClient.subscribe('/topic/listen', function(message) {
  showMessage(JSON.parse(message.body).content);
});
All works fine.
Use Case 2. Client domainB, Server domainB
My application is secured on the backend side with Spring security. Authentication is done through a form - username and password. Nothing uncommon.
In this case the client is on domainB.com, same as the backend. All works fine, the difference is that I use a different subscription method:
stompClient.subscribe('/user/queue/listen', function(message) {
  showMessage(JSON.parse(message.body).content);
});
in order to benefit from getting the principal from the security session. All works well. No issues.
The JSESSIONID cookie is added to the connection request made by new SockJS("http://domainB.com/myApp/messages").
Use Case 3. Client domainA, Server domainB
The application is secured the same as in UC2. However, the client is now on a different domain.
The JSESSIONID is NOT added to the connection request. The connection to the websocket in Spring is unauthenticated and gets redirected back to the login page. This repeats and makes it impossible to connect.
Why is the JSESSIONID cookie not populated with the websocket requests in this case?
Cheers
Adam
As part of the SockJS protocol, an HTTP GET is sent to the websocket server to negotiate the supported protocols. It is done using XMLHttpRequest, which, due to the same-origin policy implemented in every modern web browser, won't attach cookies stored for a domain other than the one the web application and its scripts are served from.
You'll have to resort to some way of working around the same-origin policy.
I think you'll find the answers you are looking for here: http://spring.io/blog/2014/09/16/preview-spring-security-websocket-support-sessions
The trick is to implement a HandshakeInterceptor.
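For illustration only, a minimal sketch of such an interceptor in Spring (Java), which copies data from the HTTP session into the websocket session attributes; the attribute key is an assumption, and note that Spring also ships a ready-made HttpSessionHandshakeInterceptor:

import org.springframework.http.server.ServerHttpRequest;
import org.springframework.http.server.ServerHttpResponse;
import org.springframework.http.server.ServletServerHttpRequest;
import org.springframework.web.socket.WebSocketHandler;
import org.springframework.web.socket.server.HandshakeInterceptor;

import javax.servlet.http.HttpSession;
import java.util.Map;

public class SessionHandshakeInterceptor implements HandshakeInterceptor {

    @Override
    public boolean beforeHandshake(ServerHttpRequest request, ServerHttpResponse response,
                                   WebSocketHandler wsHandler, Map<String, Object> attributes) {
        if (request instanceof ServletServerHttpRequest) {
            HttpSession session = ((ServletServerHttpRequest) request).getServletRequest().getSession(false);
            if (session != null) {
                // expose the HTTP session id to the websocket session (key name is an assumption)
                attributes.put("HTTP.SESSION.ID", session.getId());
            }
        }
        return true; // returning false would abort the handshake
    }

    @Override
    public void afterHandshake(ServerHttpRequest request, ServerHttpResponse response,
                               WebSocketHandler wsHandler, Exception exception) {
        // no-op
    }
}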