Gorilla CSRF with AngularJS

I'm trying to get AngularJS to work with Gorilla CSRF for my web application, but there isn't much documentation around that I can find, so I'm not sure where exactly to start. Should I set an X-CSRF-Token header for every GET request, or should I just do it when the user visits the home page, as I'm doing now? Also, how do I make AngularJS's CSRF protection work with Gorilla CSRF? Do I need to do some sort of comparison? Any example code would be appreciated.
Here is my code:
package main

import (
    "net/http"

    "github.com/gorilla/csrf"
    "github.com/gorilla/mux"
)

func main() {
    r := mux.NewRouter()
    r.HandleFunc("/", Home).Methods("GET")
    // Other routes handling goes here
    http.ListenAndServe(":8000",
        csrf.Protect([]byte("32-byte-long-auth-key"))(r))
}

func Home(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("X-CSRF-Token", csrf.Token(r))
}
// More routes

Your question might be a bit broad, but overall you're misusing the tools, so I'm just going to try and explain the basic ideas. The library you're using implements a 'double submit' pattern for CSRF protection. This requires changes in both the client and server code bases. The server should not be setting the X-CSRF-Token header; that is the role of the client. I've actually implemented a couple of anti-CSRF solutions from scratch recently and they're pretty simple (both double submit pattern). I also used a few packages from vendors like MSFT and Apache (I had to implement CSRF across roughly 20 years of applications on all kinds of stacks).
In the double submit pattern the server should be setting a cookie with a random value (like a GUID), and the cookie must be marked as secure. You can make it httponly as well, but that will require a lot more work in your front-end resources. On the client side, the simplest way to deal with this is to implement some JavaScript that reads the cookie value and adds it as a header before any POST request. You typically don't need to protect GETs. You could, but if your GETs are doing constructive/destructive things server side, then you're misusing the HTTP verb, and I would correct that by making those requests POSTs rather than trying to protect every single request.
On the server side, it's best to do the CSRF check up front, in a common place where all requests come in. When a POST comes in, the server should read the cookie value, read the header value, and compare them. If they're equal, the request should be allowed to pass through; if not, you should boot it out with a 403 or similar. After doing so, the server should rewrite the cookie value (it's best to make each value one-use only).
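For illustration only, here is roughly what that check looks like, sketched as Node/Express middleware (the cookie and header names are hypothetical, and this is not Gorilla's implementation; Gorilla does the equivalent for you inside csrf.Protect):
const crypto = require('crypto');

// Generic double-submit check (illustrative sketch, not Gorilla's implementation).
// Assumes cookie-parser is mounted so req.cookies is populated.
function csrfCheck(req, res, next) {
  if (req.method === 'GET') return next();            // only guard state-changing requests
  const cookieValue = req.cookies['X-CSRF-Cookie'];   // value the server set earlier
  const headerValue = req.get('X-CSRF-Token');        // value the client-side JS attached
  if (!cookieValue || cookieValue !== headerValue) {
    return res.sendStatus(403);                       // mismatch: boot the request out
  }
  // rotate the value so each token is effectively single-use
  res.cookie('X-CSRF-Cookie', crypto.randomUUID(), { secure: true });
  next();
}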
Your client-side script can look something like the code below; just make sure the resource is on every page load. If you don't use form submits, this will cover everything; if you do submit forms, you'll need similar code to handle that. Some approaches prefer to write the value into the DOM server side. For example, the .NET CSRF library makes the value HttpOnly and Secure and expects the devs to put a placeholder token in every single form in every single cshtml file in their project... I personally think that is very stupid and inefficient. No matter how you do this, you're probably going to have to do some custom work. Angular isn't going to implement the front end for Gorilla's CSRF library, and Gorilla probably isn't going to ship JavaScript for your client since it's a server-side library. Anyway, here's a basic JavaScript example:
// Enables CSRF protection in the client: sets the nonce header with the value from the cookie
// prior to firing any HTTP POST.
function addXMLRequestCallback(callback) {
    var oldSend;
    if (!XMLHttpRequest.sendcallback) {
        XMLHttpRequest.sendcallback = callback;
        oldSend = XMLHttpRequest.prototype.send;
        // override the native send()
        XMLHttpRequest.prototype.send = function () {
            XMLHttpRequest.sendcallback(this);
            // legacy polyfill for Function.prototype.apply (only needed on very old browsers)
            if (!Function.prototype.apply) {
                Function.prototype.apply = function (self, oArguments) {
                    if (!oArguments) {
                        oArguments = [];
                    }
                    self.__func = this;
                    self.__func(oArguments[0], oArguments[1], oArguments[2], oArguments[3], oArguments[4]);
                    delete self.__func;
                };
            }
            // call the native send()
            oldSend.apply(this, arguments);
        };
    }
}

addXMLRequestCallback(function (xhr) {
    // attach the CSRF cookie value as a request header before the request fires
    xhr.setRequestHeader('X-CSRF-Token', getCookie('X-CSRF-Cookie'));
});

// read a cookie value by name
function getCookie(cname) {
    var name = cname + "=";
    var ca = document.cookie.split(';');
    for (var i = 0; i < ca.length; i++) {
        var c = ca[i];
        while (c.charAt(0) == ' ') c = c.substring(1);
        if (c.indexOf(name) == 0) return c.substring(name.length, c.length);
    }
    return "";
}
Now, if you can narrow your question a bit I can provide some more specific guidance, but here's my best guess (maybe I'll read their docs when I have a minute). Gorilla will set your cookie automatically and do the server-side check for you if you use csrf.Protect. The code you have setting the header in Go is what you need the JavaScript above for: if you set the header on the server side, you've provided no security at all; that needs to happen in the browser. If you send the value along with all your requests, Gorilla will most likely cover the rest for you.
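For the AngularJS side specifically, one possible approach (a sketch under assumptions: the module name 'app' is hypothetical, and it assumes the server exposes the token in an X-CSRF-Token response header, as your Home handler does) is an $http interceptor that captures the token and replays it on later requests:
// Capture the CSRF token from server responses and attach it to subsequent requests.
angular.module('app').factory('csrfInterceptor', function () {
    var token = null;
    return {
        response: function (resp) {
            var t = resp.headers('X-CSRF-Token'); // remember the token whenever the server sends one
            if (t) { token = t; }
            return resp;
        },
        request: function (config) {
            if (token && config.method !== 'GET') {
                config.headers = config.headers || {};
                config.headers['X-CSRF-Token'] = token; // header name gorilla/csrf checks by default
            }
            return config;
        }
    };
}).config(['$httpProvider', function ($httpProvider) {
    $httpProvider.interceptors.push('csrfInterceptor');
}]);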
Some other random thoughts about the problem space. As a rule of thumb, if an attacker can't replay a request, they probably can't CSRF you; this is why this simple method is so effective. Every incoming request has exactly one random GUID value it requires to pass through. You can store that value in the cookie so you don't have to worry about session state moving across servers, etc. (without the double submit pattern, this cookie/header comparison business, you would need a shared data store on the server side). There's no real chance of this value being brute forced with current hardware limitations. The same-origin policy in browsers prevents attackers from reading the cookie value you set (only scripts from your domain will be able to access it). The only way around that is if the user has already been exploited by XSS, which kind of defeats the purpose of worrying about CSRF, since the attacker would already have more control and ability to do malicious things with XSS.

Related

Cypress: Try/Catch block for External Services?

So what I want to do might be impossible, and may not even be recommended, but I'm curious about the best way to handle this.
Currently one of my applications attempts to use an external service (specifically the Google Maps API). Sometimes I get a bad response from Google's API (or it times out). It's rare (maybe less than 1 in 30 times), but it still happens and introduces flakiness into the automation tests.
I thought about stubbing this out every time, but I feel like that would also sort of "lower" the value of the test (since the Google Maps API is important in this case).
I was curious whether Cypress is able to do something that would TRY to get a response (with a successful status code) and, if it failed, maybe leave a log note but allow the test to continue (either via stubbing or just continuing on).
This may even be a bad idea, since we wouldn't really "know" from just looking at the results, but I wanted to at least pose the question.
Thanks!
You could achieve this by only conditionally sending mocked data, based on the response to that API call.
cy.intercept('/foo', (req) => {      // replace '/foo' with the URL for the Google API
  req.continue((res) => {            // pass the request through to the real API
    if (res.statusCode !== 200) {    // or whatever your "success" statusCode/criteria is
      cy.log('my information I want to log')
      res.send(200, myMockedBody)    // send the mocked response instead
    } else {
      res.send()                     // otherwise, just send the response from the API
    }
  });
});

Should I use POST or GET if I want to send an Array of filters to fetch all articles related with those filters

I haven't found resources online to solve my problem.
I'm creating an app with React Native that fetches and shows news articles from my database.
At the top of the page, there are some buttons with filters inside, for example:
one button "energy",
one button "politics"
one button "people"
one button "china"
etc...
Every time I press one of those buttons, the corresponding filter is stored in an array "selectedFilters", and I want to query my database to only show articles matching those filters.
Multiple filters can be selected at the same time.
I know one way of doing it, with a POST request:
await fetch('187.345.32.33:3000/fetch-articles', {
  method: 'POST',
  headers: {'Content-Type': 'application/x-www-form-urlencoded'},
  body: `filters=${JSON.stringify(selectedFilters)}`
});
But the fact is, I read everywhere, and I was also taught, that POST requests are used when creating or removing, and theoretically, what I should use is a GET request.
But I don't know how to send an array with a GET request.
I read online that I can pass multiple parameters in my URL (for example: arr[0]=selectedFilters[0]&arr[1]=...), but the fact is I never know in advance how many items will be in my array.
And also I'm not sure if I could write it exactly the same way as my POST request above, but with GET:
await fetch('187.345.32.33:3000/fetch-articles', {
  method: 'GET',
  headers: {'Content-Type': 'application/x-www-form-urlencoded'},
  body: `filters=${JSON.stringify(selectedFilters)}`
});
or if I can only pass items in the URL, but does this work?
await fetch(`187.345.32.33:3000/fetch-articles?arr[0]=${selectedFilters[0]}`, {
Or even better, if something like this could work:
await fetch(`187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
Thanks for your help
You should definitely use a GET request if your purpose is to fetch the data.
One way of passing the array through the URL is to use a map function to create a comma-separated string with all the filters. This way you don't need to know in advance how many elements are in the array. The server can then read the string from the URL and split it on the commas.
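As a rough sketch (the filters parameter name and host are just examples):
// Client: turn ['energy', 'china'] into "energy,china" and put it in the query string
const filterParam = selectedFilters.map(encodeURIComponent).join(',');
const response = await fetch(`https://example.com/fetch-articles?filters=${filterParam}`);
const articles = await response.json();

// Server side, the handler reads the filters parameter from the query string
// and splits it on commas to get the array back.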
One more method you can try is to keep a filters array on the server side for the session. You can then use a POST/PUT request to modify that array as the user adds or removes filters. Finally, you can use a plain GET request to fetch the news, since the server will already have the filters for that session.
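That flow could look roughly like this (endpoint names are hypothetical):
// 1) Keep the server-side filter list for this session in sync as buttons are toggled
await fetch('https://example.com/session/filters', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  credentials: 'include', // send the session cookie
  body: JSON.stringify(selectedFilters)
});

// 2) Fetch the articles with a plain GET; the server applies the filters it stored for this session
const articles = await (await fetch('https://example.com/fetch-articles', {
  credentials: 'include'
})).json();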
But the fact is, I read everywhere, and I was also taught, that POST requests are used when creating or removing, and theoretically, what I should use is a GET request.
Yes, you do read that everywhere. It's wrong (or at best incomplete).
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” (Fielding, 2009)
It may help to remember that on the HTML web, POST was the only supported method for requesting changes to resources, and the web was catastrophically successful.
For requests that are effectively read only, we should prefer to use GET, because general purpose HTTP components can leverage the fact that GET is safe (for example, we can automatically retry a safe request if the response is lost on an unreliable network).
I'm not sure if I could write exactly the same way as my POST request above, but with GET
Not quite exactly the same way
A client SHOULD NOT generate content in a GET request unless it is made directly to an origin server that has previously indicated, in or out of band, that such a request has a purpose and will be adequately supported. An origin server SHOULD NOT rely on private agreements to receive content, since participants in HTTP communication are often unaware of intermediaries along the request chain. -- RFC 9110
The right idea is to think about this in the framing of HTML forms; in HTML, the same collection of input controls can be used with both GET and POST. The difference is what the browser does with the information.
Very roughly, a GET form is used when you want to put the key value pairs described by the submitted form into the query part of the request target. So something roughly like
await fetch(`187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
  method: 'GET'
});
Although we would normally want to be using a URI Template to generate the request URI, rather than worrying about escaping everything correctly "by hand".
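In fetch-based code, URLSearchParams is one simple way to avoid that hand-escaping (a sketch, not a full URI Template implementation):
// Let the browser handle escaping of the query string
const params = new URLSearchParams({ filters: JSON.stringify(selectedFilters) });
await fetch(`187.345.32.33:3000/fetch-articles?${params}`, { method: 'GET' });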
However, there's no rule that says general purpose HTTP components need to support infinitely long URIs (for instance, Internet Explorer used to have a limit of just over 2000 characters).
To work around these limits, you might choose to support POST. It's a tradeoff: you lose the benefits of safe semantics and general-purpose cache invalidation, but you gain that it works in extreme cases.

Best way to refresh token every hour?

I am building a website with React and I have to send about 3 requests per page, but first of all I have to get a communication token (which needs to be refreshed every hour) and then use it as a base for all the other requests.
My plan is to get it as soon as App mounts, put it in state (redux, thunk), use it in every component that subscribes to the store, and also put a setInterval call in the componentDidMount method. Another thing that comes to mind is to put it in local storage, but that would be a bit more complicated (I have to parse it every time I get something from local storage).
class App extends React.Component {
  componentDidMount() {
    this.props.getToken()
    setInterval(this.props.getToken, 5000)
  }
  // ... render() and the rest of the component
}
This works pretty well, and switching between pages doesn't break anything. Note that the 5000 milliseconds here is just for trying it out; I will set it to 3500000. Is this OK, or is there another way to do this? Thanks!
Your implementation is fine, although I'd make a few changes:
Use local storage so you don't have to refetch your token if the user refreshes the page (since it'll be lost from memory). You'll also get the same benefit when working with multiple tabs. You can easily create a LocalStorageService that does all the parsing/stringifying for you so you don't have to worry about it.
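For example, a small wrapper along these lines (just a sketch) keeps the JSON handling in one place:
// Minimal localStorage wrapper so callers never deal with parse/stringify themselves
const LocalStorageService = {
  set(key, value) {
    localStorage.setItem(key, JSON.stringify(value));
  },
  get(key) {
    const raw = localStorage.getItem(key);
    return raw ? JSON.parse(raw) : null;
  },
  remove(key) {
    localStorage.removeItem(key);
  }
};

// e.g. LocalStorageService.set('authToken', { value: token, fetchedAt: Date.now() });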
I'd suggest moving that logic into some kind of service where you can control your token flow much more easily. For example, what happens if the user logs out or the token somehow becomes invalid? You'd have to get a new token from somewhere other than your App (since the root componentDidMount will be called only once), and you'd also need to clear the current interval (which you won't have a reference to with the current implementation) to avoid multiple intervals running.
Instead of an interval, maybe you could even use setTimeout to avoid having multiple intervals running in edge cases:
getToken() {
  // do your logic
  clearTimeout(this.tokenExpire);
  this.tokenExpire = setTimeout(() => this.getToken(), 5000);
}
Overall your implementation is fine; it can only be improved for easier maintenance, and you'll need to cover some edge cases (at least the ones mentioned above).
Ideally your server should put tokens in secured sessions so they are not vulnerable to XSS.
If there's no such option, I'd suggest using axios. You can configure it to check the tokens on each request or response and handle them accordingly.
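For example, with interceptors (a sketch; where you store the token and how you refresh it is up to you):
import axios from 'axios';

// Attach the current token to every outgoing request
axios.interceptors.request.use((config) => {
  const token = localStorage.getItem('authToken'); // or read it from your store
  if (token) {
    config.headers.Authorization = `Bearer ${token}`;
  }
  return config;
});

// React to an expired or invalid token on the way back
axios.interceptors.response.use(
  (response) => response,
  (error) => {
    if (error.response && error.response.status === 401) {
      // token expired or invalid: trigger your refresh/re-login logic here
    }
    return Promise.reject(error);
  }
);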

What is the idiomatic way of handling "ephemeral" state in a database?

I know that "best practices" type of questions are frowned upon in the StackOverflow community, but I am not sure how else to word this.
My "big picture" question is this:
What is a good practice when it comes to handling "session" state in a stateless server (like one that provides a REST api)?
Quick details
Using nodeJS on backend, MongoDB for database.
Example 1: Login state
In version 1 of the admin panel, I had a simple login that asks for an email and password. If the credentials are correct, user is returned a token, otherwise an error.
In version 2, I added a two-factor authentication for users who activate it.
Deciding to keep things simple, I now have two endpoints. The flow is this:
/admin/verifyPassword:
    Receive email and password;
    if (credentials are correct) {
        if (admin requires 2fa) {
            return {nextStep: 2fa};
        } else {
            return tokenCode;
        }
    } else {
        return error;
    }

/admin/verifyTotpToken:
    Receive email and TOTP token;
    Get admin with corresponding email;
    if (admin has verified password) {
        return tokenCode;
    } else {
        return error;
    }
At the verifyTotpToken step, it needs to know whether the admin has already verified the password. To do that, I decided to attach a 'temporary' field to the Admin document called hasVerifiedPassword, which gets set to true in the verifyPassword step.
Not only that, but I also set a passwordVerificationExpirationDate temporary field in the verifyPassword endpoint so that they have a short window within which they must complete the whole login process.
The problem with my approach is that:
It bloats the admin document with ephemeral, temporary state that has nothing to do with an admin itself. In my mind, resource and session are two separate things.
It gives way for stale data to stay alive and attached to the admin document, which at best is a slight nuisance when looking through the admin collection in a database explorer, and at worst can lead to hard to detect bugs because the garbage data is not properly cleaned.
Example 2: 2FA activation confirmation by email
When an admin decides to activate 2fa, for security purposes I first send them an email to confirm that it is truly them (and not someone who hijacked their session) who wants to activate 2fa. To do that I need to generate a hash of some sort and store it in the database.
My current approach is this:
1) I generate a hash on the server side and store it in their admin document, along with an expiration date.
2) I generate a url containing the hash as a query parameter and send it in the email.
3) The admin clicks the link in the email
4) The frontend code picks up the hash from the query parameter and asks the server to verify it
5) The server looks up the admin document and checks for a hash match. If it matches, great: return OK and clean up the data. If not, return an error. If it has expired, clean up the data.
Here also, I had to use some temporary state (the two fields hash and expirationDate). It is also fragile for the same reasons mentioned above.
My main point
Through these two examples I tried to illustrate the problem I am facing. Although these solutions are working "fine", I am curious what better programmers think of my approaches and whether there is a better, more idiomatic way of doing this.
Please keep in mind that the purpose of my question is not to get a specific solution to my specific problem. I am looking for advice on the more general problem of storing session data in a clever, maintainable way that does not mix resource state and ephemeral state.

Meteor one time or "static" publish without collection tracking

Suppose that one needs to send the same collection of 10,000 documents down to every client for a Meteor app.
At a high level, I'm aware that the server does some bookkeeping for every client subscription - namely, it tracks the state of the subscription so that it can send the appropriate changes for the client. However, this is horribly inefficient if each client has the same large data set where each document has many fields.
It seems that there used to be a way to send a "static" publish down the wire, where the initial query was published and never changed again. This seems like a much more efficient way to do this.
Is there a correct way to do this in the current version of Meteor (0.6.5.1)?
EDIT: As a clarification, this question isn't about client-side reactivity. It's about reducing the overhead of server-side tracking of client collections.
A related question: Is there a way to tell meteor a collection is static (will never change)?
Update: It turns out that doing this in Meteor 0.7 or earlier will incur some serious performance issues. See https://stackoverflow.com/a/21835534/586086 for how we got around this.
http://docs.meteor.com/#find:
Statics.find({}, {reactive: false})
Edited to reflect comment:
Do you have some information that the reactive: false param is only client side? You may be right; it's a reasonable, maybe even likely, interpretation. I don't have time to check, but I thought this might also be a server-side directive, saying not to poll the mongo result set. Willing to learn...
You say
However, this is horribly inefficient if each client has the same large data set where each document has many fields.
Now we are possibly discussing the efficiency of the server code and its polling of the mongo source for updates that happen outside of the server. Please make that another question, which is far above my ability to answer! I doubt that is happening once per connected client; more likely it is a sync between the app server's info and the mongo server.
The client requests you issue, including sorting, should all be labelled non-reactive. That is separate from whether you can issue them with sorting instructions, or whether they can be retriggered through other reactivity, but they need not include a trip to the server. Once each document reaches the client side, it is cached. You can still do whatever minimongo does, with no loss in ability. There is no client asking the server if there are updates, so you don't need to shut that off; the server pushes only when needed.
I think using a manual publish (this.added) still works to get rid of the overhead created by the server observing data for changes. The observers either need to be added manually or are created by returning a Collection.cursor.
If the data set is big, you might also be concerned about the overhead of the merge box holding a copy of the data for each client. To get rid of that, you could copy the collection locally and stop the subscription.
var staticData = new Meteor.Collection("staticData");

if (Meteor.isServer) {
    var dataToPublish = staticData.find().fetch(); // query mongo when the server starts

    Meteor.publish("publishOnce", function () {
        var self = this;
        dataToPublish.forEach(function (doc) {
            // sends data to the client and will not continue to observe the collection
            self.added("staticData", doc._id, doc);
        });
        self.ready(); // mark the subscription ready so subHandle.ready() becomes true on the client
    });
}

if (Meteor.isClient) {
    // fills the client 'staticData' collection but also leaves a merge box copy of the data on the server
    var subHandle = Meteor.subscribe("publishOnce");
    var staticDataLocal = new Meteor.Collection(null); // to store the data after the subscription stops

    Deps.autorun(function () {
        if (subHandle.ready()) {
            staticData.find({}).forEach(function (doc) {
                staticDataLocal.insert(doc); // move all the data to the local copy
            });
            // removes the 'publishOnce' data from the merge box on the server
            // but leaves the 'staticData' collection empty on the client
            subHandle.stop();
        }
    });
}
Update: I added comments to the code to make my approach clearer. The Meteor docs for stop() on the subscribe handle say "This will typically result in the server directing the client to remove the subscription's data from the client's cache", so maybe there is a way to stop the subscription (remove it from the merge box) that leaves the data on the client. That would be ideal and would avoid the copying overhead on the client.
Anyway, the original approach with set and flush would also have left the data in the merge box, so maybe that is alright.
As you've already pointed out yourself on Google Groups, you should use a Meteor Method for sending static data to the client.
And there is this neat package for working with Methods without async headaches.
Also, you could script the data out to a js file, as either an array or an object, minify it, then link to it as a distinct resource. See http://developer.yahoo.com/performance/rules.html under "Add an Expires or a Cache-Control Header". You probably don't want Meteor to bundle it for you.
This would be the least traffic, and could make subsequent loads of your site much swifter.
As a response to a Meteor call, return an array of documents (use fetch()), with no reactivity or logging. On the client, create a dependency when you do a query, or retrieve the key from the session, and it is reactive on the client.
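A minimal sketch of that (the method name is hypothetical, and it assumes a staticData collection like in the earlier answer):
// Server: a method that returns the documents once, with no publish/observe overhead
if (Meteor.isServer) {
    Meteor.methods({
        getStaticData: function () {
            return staticData.find({}).fetch(); // plain array of documents
        }
    });
}

// Client: call it once and keep the result wherever you like
if (Meteor.isClient) {
    Meteor.call('getStaticData', function (error, docs) {
        if (!error) {
            Session.set('staticDocs', docs); // or insert into a local Meteor.Collection(null)
        }
    });
}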
Minimongo just does js array/object manipulation, with a syntax-interpreting DSL between you and your data.
The new fast-render package makes a one-time publish to a client collection possible.
var staticData = new Meteor.Collection('staticData');

if (Meteor.isServer) {
    FastRender.onAllRoutes(function () {
        this.find(staticData, {});
    });
}
