How to pass Windows authentication (browser) from a React application to SPNEGO Kerberos Spring SSO? - reactjs

We have a React application that gets its data from a Spring Boot web service. Both are deployed on the same server (Tomcat), but we only need Kerberos authentication for the web service calls made from the React application. Anyone can open the React application, but as the user navigates it calls the web service to fetch data. So if we configure Spring to support SPNEGO Kerberos SSO, will the browser automatically pass the logged-in Windows credentials to the Spring Boot web service (since the React app runs in the browser)?
We are calling the service from the React app as follows:
export const client = rest
  .wrap(mime, { registry: registry })
  .wrap(errorCode)
  .wrap(defaultRequest, {
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json'
    },
    method: 'GET'
  })
export const fetchPDSIs = (Id) =>
  APIHelpers.client(APIHelpers.buildPDSIReq(Id))
    .then(
      response => (response.entity || []).sort((a, b) => a.portalinstance.localeCompare(b.portalinstance)),
      response => {
        global.msg.error(<div className='smallTextNotification'>Fetching instances and portal for {Id} error: {response.status.code} -> {response.status.text}</div>)
        return []
      }
    )

export const buildPDSIReq = (Id) => ({ path: `${serverAddr}/msd232/pdsiii/${Id}` })

Using the fetch API, it worked for me by adding credentials: 'include':
fetch(`${authBase}/auth`, { credentials: 'include' })
I know that isn't what you are using, but it may help other readers.

Yes, it's possible. Requirements on the client side:
User logged into a domain account on the OS.
Proper configuration in your browser; see the Spring documentation.
E.g. for Internet Explorer:
E.3 Internet Explorer
Complete the following steps to ensure that your Internet Explorer browser
is enabled to perform SPNEGO authentication.
Open Internet Explorer.
Click Tools > Internet Options > Security tab.
In the Local intranet section, make sure your server is trusted, e.g. by adding it to the list.
Kerberos auth is triggered by an HTTP header returned from the backend service:
WWW-Authenticate: Negotiate
If your OS and browser are correctly configured, this triggers service-ticket generation, and the browser sends the ticket as the Authorization HTTP header value.
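The browser performs the Negotiate exchange itself, but application code can at least detect that the backend issued a SPNEGO challenge. A minimal sketch (the function name is hypothetical; `headers` is anything with a get() method, such as fetch's Headers object):

```javascript
// Detect whether a 401 response carries a Kerberos/SPNEGO challenge.
// The WWW-Authenticate header may list several schemes, e.g. "Negotiate, NTLM".
function wantsNegotiate(headers) {
  const challenge = headers.get('WWW-Authenticate') || '';
  return challenge.split(',').some((s) => s.trim().startsWith('Negotiate'));
}
```

This is only useful for diagnostics (e.g. logging why a request failed); the actual ticket exchange never passes through JavaScript.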
EDIT:
If your application is split into a frontend and a backend hosted on different hosts, then you have to register the SPN (and generate a keytab) for the frontend host that users will enter. Example:
Backend: api.test.com
Frontend: application.test.com
For SSO to work, you have to register the SPN for application.test.com; the backend host name is irrelevant here. Command:
setspn -A HTTP/application.test.com@test.com ad_user_to_register_spn_for

Related

Where do I attach the Execute API policy for React App deployed via ECR -> ECS -> ELB?

I wish to secure my API Gateway using AWS IAM Authorization so that only my React Application and users with the correct policy attached can call the endpoints.
I have managed to test that the policy works when I execute via Postman using AWS Signature as the authorisation method, but I'm having trouble applying this policy to my React Application.
The React Application is deployed using CodePipeline to build a container to ECR, then the application is deployed via ECS. The domain routes to the ELB that is mapped to the reverse proxy of the React Application. I just cannot figure out where to attach the policy so that when the API call is made from the webpage, the authorisation is attached to the request. The endpoints are returning a 403 - Missing Authentication Token error (as expected).
Instead of attaching the IAM role to the application's container, I implemented the headers client-side. As I'm using Axios, I used the https://www.npmjs.com/package/aws4-axios package, which creates a request interceptor for the signing. The credentials are from a new user created with the policy attached to allow execute-api.
import { aws4Interceptor } from "aws4-axios";

export const postLogin = async () => {
  let response;
  const interceptor = aws4Interceptor(
    {
      region: "your region here",
      service: "execute-api",
    },
    {
      accessKeyId: "from .env",
      secretAccessKey: "from .env",
    }
  );
  axios.interceptors.request.use(interceptor);
  try {
    response = await axios.post( etc.
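For readers unfamiliar with how aws4-axios hooks in, the underlying axios request-interceptor pattern can be sketched framework-free. `fakeSigner` below is a stand-in for illustration only; the real SigV4 signature is computed by the library:

```javascript
// Each request interceptor takes a request config and returns a new one;
// axios runs them in order before sending the request.
function applyInterceptors(config, interceptors) {
  return interceptors.reduce((cfg, fn) => fn(cfg), config);
}

// Stand-in "signer": attaches an Authorization header like aws4-axios would.
const fakeSigner = (cfg) => ({
  ...cfg,
  headers: { ...cfg.headers, Authorization: 'AWS4-HMAC-SHA256 Credential=...' },
});
```

The point is that the signing happens per request, on the client, so no IAM role needs to be attached to the container serving the static React app.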

Service to service requests on App Engine with IAP

I'm using Google App Engine to host a couple of services (a NextJS SSR service and a backend API built on Express). I've set up my dispatch.yaml file to route /api/* requests to my API service, and all other requests get routed to the default (NextJS) service.
dispatch:
  - url: '*/api/*'
    service: api
The problem: I've also turned on Identity-Aware Proxy for App Engine. When I try to make a GET request from my NextJS service to my API (server-side, via getServerSideProps) it triggers the IAP sign-in page again instead of hitting my API. I've tried out a few ideas to resolve this:
Forwarding all cookies in the API request
Setting the X-Requested-With header as mentioned here
Giving IAP-secured Web App User permissions to my App Engine default service account
But nothing seems to work. I've confirmed that turning off IAP for App Engine allows everything to function as expected. Any requests to the API from the frontend also work as expected. Is there a solution I'm missing or a workaround for this?
You need to perform a service-to-service call. That's not so simple, and there isn't really an example for it. Anyway, I tested it (in Go) and it worked.
Firstly, base your development on the Cloud Run Service-to-Service documentation page.
You will have this piece of code in NodeJS (sorry, I'm not a NodeJS developer, much less a NextJS developer; you will have to adapt it):
// Make sure to `npm install --save request-promise` or add the dependency to your package.json
const request = require('request-promise');

const receivingServiceURL = ...

// Set up metadata server request
// See https://cloud.google.com/compute/docs/instances/verifying-instance-identity#request_signature
const metadataServerTokenURL = 'http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=';
const tokenRequestOptions = {
  uri: metadataServerTokenURL + receivingServiceURL,
  headers: {
    'Metadata-Flavor': 'Google'
  }
};

// Fetch the token, then provide the token in the request to the receiving service
request(tokenRequestOptions)
  .then((token) => {
    return request(receivingServiceURL).auth(null, null, true, token);
  })
  .then((response) => {
    res.status(200).send(response);
  })
  .catch((error) => {
    res.status(400).send(error);
  });
This example won't work as-is because you need the correct audience. Here, the variable is receivingServiceURL. That's correct for Cloud Run (and Cloud Functions) but not for App Engine behind IAP: there you need to use the Client ID of the OAuth2 credential named IAP-App-Engine-app.
Hard to picture? Go to the console, APIs & Services -> Credentials. There you have an OAuth 2.0 Client IDs section; copy the Client ID column of the IAP-App-Engine-app row.
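Putting those two pieces together, the token URL can be built like this (a sketch; `identityTokenUrl` is a hypothetical helper, and the client ID value must be copied from the IAP-App-Engine-app credential):

```javascript
// Metadata-server endpoint that mints an identity token for a given audience.
const METADATA_TOKEN_URL =
  'http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=';

// For App Engine behind IAP, the audience is the IAP OAuth client ID,
// NOT the backend service's URL.
function identityTokenUrl(iapClientId) {
  return METADATA_TOKEN_URL + encodeURIComponent(iapClientId);
}
```

The request to this URL still needs the `Metadata-Flavor: Google` header, exactly as in the snippet above.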
Final point: be sure that your App Engine default service account is authorized to access IAP, and add it as an IAP-secured Web App User. The service account has the format <PROJECT_ID>@appspot.gserviceaccount.com.
If that's still not clear: go to the IAP page (Security -> Identity-Aware Proxy), click the checkbox in front of App Engine, and use the permissions panel on the right side of the page.
At the same time, I can explain how to deactivate IAP on a specific service (as proposed by NoCommandLine). Just a remark: deactivating security when you have trouble with it is never a good idea!
Technically, you can't deactivate IAP on a single service. But you can grant allUsers the IAP-secured Web App User role on a specific service (instead of clicking the checkbox for App Engine, click the checkbox for that specific service). That way, even with IAP on, you have authorized all users to access your service; it's effectively an activation without checks.

Mixed Content error when requesting ID token through Google Auth Library in App Engine

I am trying to authenticate my app running in App Engine so it can call a Cloud Run service. To do so, I request an OAuth 2 token through the Google Auth library (the getIdTokenClient method), as that looks to be the recommended approach here: https://github.com/googleapis/google-auth-library-nodejs#working-with-id-tokens.
The following error is raised from my app when the library tries to access the Google metadata server:
gaxios.ts:91 Mixed Content: The page at 'https://myapp-dev.nw.r.appspot.com/' was loaded over HTTPS, but requested an insecure resource 'http://169.254.169.254/computeMetadata/v1/instance'. This request has been blocked; the content must be served over HTTPS.
Following my piece of code:
const {GoogleAuth} = require('google-auth-library');

const url = 'https://myapp-dev-fvnpywgyfa-nw.a.run.app';
const auth = new GoogleAuth();

const serviceRequestOptions = {
  method: 'GET',
  headers: {
    'Content-Type': 'text/plain',
  },
  timeout: 3000,
};

try {
  // Create a Google Auth client with the Renderer service url as the target audience.
  if (!client) client = await auth.getIdTokenClient(url);
  // Fetch the client request headers and add them to the service request headers.
  // The client request headers include an ID token that authenticates the request.
  const clientHeaders = await client.getRequestHeaders();
  serviceRequestOptions.headers['Authorization'] = clientHeaders['Authorization'];
} catch (err) {
  throw new Error(`could not create an identity token: ${err}`);
}
gaxios.ts:91 Mixed Content: The page at https://myapp-dev.nw.r.appspot.com/ was loaded over HTTPS, but requested an insecure resource http://169.254.169.254/computeMetadata/v1/instance. This request has been blocked; the content must be served over HTTPS
The subnet 169.254.0.0/16 is an IANA special-use network (RFC 3330) for the "link-local" block (RFC 3927). This subnet is not routed to the public internet, so it is accessible on the local segment only.
The URL http://169.254.169.254/computeMetadata/v1/instance is used as an internal link-local address in cloud services such as AWS, Yandex Cloud, and Google Cloud Platform (GCP also uses the http://metadata.google.internal/computeMetadata/v1/instance URL) to get information about a VM instance.
The IP address 169.254.169.254 is accessible only via http:, because it lives on a private internal network where SSL certificates cannot be verified (see point 1 - no route to the public internet).
Therefore, if your app tries to access http://169.254.169.254/computeMetadata/v1/instance from a browser page, something is definitely wrong: this address cannot be reached that way.
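The classification above can be sketched in code, assuming a hypothetical helper name (this just encodes the RFC 3927 range; it's a diagnostic aid, not part of any library):

```javascript
// An IPv4 address is link-local if it falls in 169.254.0.0/16,
// i.e. its first two octets are 169 and 254. Such addresses are
// reachable only from inside the VM, never from a browser page.
function isLinkLocal(ip) {
  const octets = ip.split('.').map(Number);
  return octets.length === 4 && octets[0] === 169 && octets[1] === 254;
}
```

So the metadata request must be made server-side (in getServerSideProps or an Express handler), not from code bundled into the browser page.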
Maybe the Using OAuth 2.0 to Access Google APIs link can help you.

Laravel 7 Sanctum: Same domain (*.herokuapp.com) but separate React SPA gets CSRF Token Mismatch

I've read a lot on this forum and watched a lot of tutorial videos on how to connect a separate React/Vue SPA to a Laravel API with Sanctum auth, but none of the solutions worked for me. This is for my school project.
So here's what I did so far.
I created 2 folders, one for the api and one for the frontend. I installed Laravel in the api folder and a React app in the frontend folder. Both are Git-initialized and have their own GitHub repositories. Also, both are deployed to Heroku.
API
Repository: https://github.com/luchmewep/jarcalc_api
Website: https://jarcalc-api.herokuapp.com
Front-end
Repository: https://github.com/luchmewep/jarcalc_front
Website: https://jarcalculator.herokuapp.com
On local, everything runs fine. I can set error messages on the email and password fields on the front-end, which means I have received and sent the laravel_session and XSRF-TOKEN cookies. I have also displayed the authenticated user's information on a dummy dashboard, so everything works locally.
On the internet, both my apps run but won't communicate with each other. In the official documentation, they must at least be on the same domain, and in this case they are subdomains of the same domain, .herokuapp.com.
Here are my environment variables for each Heroku apps.
API
SANCTUM_STATEFUL_DOMAINS = jarcalculator.herokuapp.com
(I've tried adding "SESSION_DRIVER=cookie" and "SESSION_DOMAIN=.herokuapp.com" but it's still not working!)
Update
Found out that axios is not carrying the XSRF-TOKEN when making the POST request to /login. It is carried automatically in local testing.
Here is the relevant code:
api.tsx
import axios from "axios";
export default axios.create({
baseURL: `${process.env.REACT_APP_API_URL}`,
withCredentials: true,
});
Login.tsx
...
const handleSubmit = (e: any) => {
  e.preventDefault();
  let login = { email: email.value, password: password.value };
  api.get("/sanctum/csrf-cookie")
    .then((res) => {
      api.post("/login", login)
        .then((res) => {
          /**
           * goes here if login succeeds...
           */
          console.log("Login Success");
          ...
        })
        .catch((e) => {
          console.log("Login failed...");
        });
    })
    .catch((e) => {
      console.log("CSRF failed...");
    });
};
UPDATE
".herokuapp.com is included in the Mozilla Foundation’s Public Suffix List. This list is used in recent versions of several browsers, such as Firefox, Chrome and Opera, to limit how broadly a cookie may be scoped. In other words, in browsers that support the functionality, applications in the herokuapp.com domain are prevented from setting cookies for *.herokuapp.com."
https://devcenter.heroku.com/articles/cookies-and-herokuapp-com
(Screenshots: cookies on local vs. cookies on the deployed app.)
Explanation: Although the API and frontend are both under .herokuapp.com, that does not put them on the same domain, as explained in Heroku's article above. This means that all requests between *.herokuapp.com apps are considered cross-site instead of same-site.
SOLUTION
SOLUTION: Since the laravel_session cookie is being carried by axios, the only remaining problem is the XSRF-TOKEN cookie. To solve it, you need your own domain name with a subdomain for each app. In my case, my React frontend is now at www.jarcalculator.me while my Laravel backend is at api.jarcalculator.me. Since they are now same-site regardless of where they are deployed (React moved to GitHub Pages while Laravel stayed on Heroku), the cookie can be set automatically.
I finally fixed my problem by claiming my free domain name via the GitHub Student Pack. I set my React app's domain to www.jarcalculator.me and my Laravel app's domain to api.jarcalculator.me. Since they are now subdomains of the same domain, jarcalculator.me, passing the cookies containing the CSRF token and laravel_session is automatic. No modification to the axios settings is needed; setting axios' withCredentials to true is all you have to do.
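The "same domain but still cross-site" behavior can be illustrated with a sketch of the registrable-domain (eTLD+1) comparison browsers perform. The suffix set below is a tiny hard-coded excerpt standing in for the real Public Suffix List, which contains herokuapp.com:

```javascript
// Tiny excerpt of the Public Suffix List, for illustration only.
const PUBLIC_SUFFIXES = new Set(['com', 'me', 'herokuapp.com']);

// Returns the registrable domain (longest public suffix plus one label),
// or null if the host IS a public suffix.
function registrableDomain(host) {
  const parts = host.split('.');
  for (let i = 0; i < parts.length; i++) {
    if (PUBLIC_SUFFIXES.has(parts.slice(i).join('.'))) {
      return i === 0 ? null : parts.slice(i - 1).join('.');
    }
  }
  return host;
}
```

Because herokuapp.com is itself a public suffix, the two Heroku apps get different registrable domains and are cross-site; under jarcalculator.me they share one and are same-site.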

Unable to set cookies in Chrome using Flask-JWT-Extended, React, and Axios

Background and Issues
I have a Flask back-end running in localhost:5000 and a React SPA running on localhost:3000.
I was able to make them talk, but when trying to store the token generated by Flask into the browser's cookies: 1) the response headers do not contain any cookies when doing console.log(response) after a successful POST from axios, and 2) the cookies are not being set. However, when inspecting Network > Login.js headers, I can actually see the Set-Cookie key in the response headers. I've tried multiple solutions from Google and StackOverflow, but none seem to work, and I really can't figure out what is going on: the request is made successfully, Chrome allows third-party software to set cookies, and I can even see the tokens under Network > Login.js headers.
Steps
1) Users enters in their username and password and hit login.
2) Axios POST call is made to Flask's back-end.
3) Process the data and generates a couple of tokens and set them into cookies.
4) The browser's cookies are set with a few tokens. <- this part is not working.
Code
Flask back-end token generation using flask-jwt-extended
# app_config related to flask-jwt-extended
CORS_HEADERS = "Content-Type"
JWT_TOKEN_LOCATION = ["cookies"]
JWT_COOKIE_SECURE = False
JWT_COOKIE_CSRF_PROTECT = True

# post method from flask-restful for LoginAPI class
def post(self):
    email = request.json.get("email")
    password = request.json.get("password")
    # some processing here.....
    payload = {
        "email": email
    }
    access_token = create_access_token(identity=payload)
    refresh_token = create_refresh_token(identity=payload)
    response = jsonify({"status": True})
    set_access_cookies(response, access_token)
    set_refresh_cookies(response, refresh_token)
    return response
CORS using flask-cors
# in the code below, I had some issues with putting a wildcard (*) into origins,
# so I've specified the React SPA's host and port.
CORS(authentication_blueprint,
     resources={r"/authentication/*": {"origins": "http://localhost:3000"}},
     supports_credentials=True)
React SPA - making a post call using axios
// also tried `axios.defaults.withCredentials = true;` but same result.
export const login = (email, password, cookies) => {
  return dispatch => {
    const authData = {
      email: email,
      password: password
    };
    let url = 'http://127.0.0.1:5000/authentication/login/';
    axios.post(url, authData, { withCredentials: true })
      .then(response => {
        console.log(response);
      })
      .catch(err => {
        console.log(err);
      });
    dispatch(authSuccess(email, password));
  };
};
The image below shows the response from the successful POST call in axios.
I'm not sure whether it is normal, but the response's headers do not show any of the cookies that I'm setting from the back-end.
And the image below is from Network > headers for login/.
As shown, you can clearly see the token information under the Set-Cookie key. I've also checked that they aren't marked Secure.
And finally, when I check the cookie tab under Application > Cookies, I do not see anything.
So the issue was coming from the localhost.
I have a Flask back-end running in localhost:5000 and a React SPA running on localhost:3000.
From the statement above, to be very specific, I was actually running the back-end on localhost:5000 and the React SPA on 127.0.0.1:3000.
Once I changed 127.0.0.1 to localhost, it worked like a charm.
As a side note, after playing around with CORS, I think it is a lot easier to use Nginx and proxy_pass to route requests from the React SPA to the back-end and avoid CORS completely. If you have to use CORS in other environments such as test, staging, etc., you would have to set it up at the web-server level (e.g. Nginx) anyway, and that requires a slightly different configuration from the local setup.
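The root cause above comes down to cookie scoping, which can be sketched in a couple of lines (the helper name is hypothetical):

```javascript
// Cookies are scoped by host name, and the port is ignored. So
// localhost and 127.0.0.1 are two different cookie jars even though
// they resolve to the same machine.
function sameCookieHost(a, b) {
  return new URL(a).hostname === new URL(b).hostname;
}
```

That is why serving the SPA from 127.0.0.1:3000 while the API sets cookies for localhost:5000 silently fails, while localhost:3000 against localhost:5000 works.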
