Cypress: Try/Catch block for External Services? - try-catch

What I want to do might be impossible, and may not even be recommended, but I'm curious about the best way to handle this.
Currently one of my applications attempts to use an external service (specifically the Google Maps API). Sometimes I get a bad response from Google's API (or it times out). It's rare (maybe less than 1 in 30 times) but it still happens and introduces flakiness into the automation tests.
I thought about stubbing this out every time, but I feel that would also "lower" the value of the test (since the Google Maps API is important in this case).
I was curious whether Cypress can do something that would TRY to get a response (with a successful status code) and, if it failed, maybe leave a log note but allow the test to continue (either via stubbing or just carrying on).
This may even be a bad idea, since we can't really "know" what happened just from looking at the results, but I wanted to at least pose the question.
Thanks!

You could achieve this by conditionally sending mocked data, based on the response to that API call.
cy.intercept('/foo', (req) => {        // replace with the URL for the Google API
  req.continue((res) => {              // pass the request through to the real API
    if (res.statusCode !== 200) {      // or whatever your "success" statusCode/criteria is
      cy.log('my information I want to log')
      res.send(200, myMockedBody)      // send the mocked response instead
    } else {
      res.send()                       // otherwise, just send the response from the API
    }
  })
})

Related

Should I use POST or GET if I want to send an Array of filters to fetch all articles related with those filters

I haven't found resources online to solve my problem.
I'm creating an app with React Native that fetches and shows news articles from my database.
At the top of the page, there's some buttons with filters inside, for example:
one button "energy",
one button "politics"
one button "people"
one button "china"
etc...
Every time I press one of those buttons, the corresponding filter is stored in an array "selectedFilters", and I want to query my database to only show articles that match those filters.
Multiple filters can be selected at the same time.
I know one way of doing it, with a POST request:
await fetch('187.345.32.33:3000/fetch-articles', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: `filters=${JSON.stringify(selectedFilters)}`
});
But the fact is, I read everywhere, and I was also taught, that POST requests are used when creating or removing, and theoretically, what I should use is a GET request.
But I don't know how to send an array with a GET request.
I read online that I can pass multiple parameters in my URL (for example: arr[0]=selectedFilters[0]&arr[1]=...), but the fact is I never know in advance how many items will be in my array.
I'm also not sure if I could write it exactly the same way as my POST request above, but with GET:
await fetch('187.345.32.33:3000/fetch-articles', {
  method: 'GET',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: `filters=${JSON.stringify(selectedFilters)}`
});
or if I can only pass items in the URL, but does this work?
await fetch(`187.345.32.33:3000/fetch-articles?arr[0]=${selectedFilters[0]}`, {
Or even better if something like this could work:
await fetch(`187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
Thanks for your help
You should definitely use a GET request if your purpose is to fetch the data.
One way of passing the array through the URL is by using a map function to create a comma separated string with all the filters. This way you would not need to know in advance how many elements are in the array. The server can then fetch the string from the URL and split it on the commas.
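For example, a minimal sketch of that approach, reusing the host and variable names from the question (the exact endpoint and the Express-style server snippet are assumptions):
// Client: join the selected filters into one comma-separated query parameter
const query = encodeURIComponent(selectedFilters.join(','));
const response = await fetch(`http://187.345.32.33:3000/fetch-articles?filters=${query}`, {
  method: 'GET'
});
const articles = await response.json();
// Server (e.g. Express): split the string back into an array
// const filters = (req.query.filters || '').split(',').filter(Boolean);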
One more method you can try is to save a filters array on the server side for the session. You can then use a POST/PUT request to modify that array as the user adds or removes filters. Finally, you can use an empty GET request to fetch the news, as the server will already have the filters for that session.
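A rough sketch of what that second approach could look like from the client; the /session/filters endpoint name here is made up for illustration:
// Update the server-side filter list for this session whenever a button is pressed
await fetch('http://187.345.32.33:3000/session/filters', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(selectedFilters)
});
// Then fetch articles with a plain GET; the server already knows this session's filters
const articles = await (await fetch('http://187.345.32.33:3000/fetch-articles')).json();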
But the fact is, I read everywhere, and I also was teached, that POST request are used when creating or removing, and theoretically, what I should use is a GET request.
Yes, you do read that everywhere. It's wrong (or at best incomplete).
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” (Fielding, 2009)
It may help to remember that on the HTML web, POST was the only supported method for requesting changes to resources, and the web was catastrophically successful.
For requests that are effectively read only, we should prefer to use GET, because general purpose HTTP components can leverage the fact that GET is safe (for example, we can automatically retry a safe request if the response is lost on an unreliable network).
I'm not sure if I could write exactly the same way as my POST request above, but with GET
Not quite exactly the same way
A client SHOULD NOT generate content in a GET request unless it is made directly to an origin server that has previously indicated, in or out of band, that such a request has a purpose and will be adequately supported. An origin server SHOULD NOT rely on private agreements to receive content, since participants in HTTP communication are often unaware of intermediaries along the request chain. -- RFC 9110
The right idea is to think about this in the framing of HTML forms; in HTML, the same collection of input controls can be used with both GET and POST. The difference is what the browser does with the information.
Very roughly, a GET form is used when you want to put the key value pairs described by the submitted form into the query part of the request target. So something roughly like
await fetch(`187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
  method: 'GET'
});
Although we would normally want to be using a URI Template to generate the request URI, rather than worrying about escaping everything correctly "by hand".
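If a URI Template library feels like more than you need, URLSearchParams at least handles the escaping for you; this sketch assumes the server accepts the filters key repeated once per value (any other encoding works the same way):
// Build the query string without hand-escaping anything
const params = new URLSearchParams();
selectedFilters.forEach((filter) => params.append('filters', filter));
await fetch(`http://187.345.32.33:3000/fetch-articles?${params.toString()}`, {
  method: 'GET'
});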
However, there's no rule that says general-purpose HTTP components need to support infinitely long URIs (for instance, Internet Explorer used to have a limit just over 2000 characters).
To work around these limits, you might choose to support POST - it's a tradeoff, you lose the benefits of safe semantics and general purpose cache invalidation, you gain that it works in extreme cases.

Best practices for displaying and managing error messages in React-Redux app

I am building a React app (with react-redux, redux-saga, and axios), and
I need advice on how to arrange my project for displaying user-friendly error messages.
(It is up to me to decide what and how I display to the user)
In particular, I would like to get answers to the following questions:
Should I display a message based on the status code?
Should I break down the errors to client / server / other errors and what are the benefits of that? (based on example from Axios)
Where should I keep the error messages, in the component itself, in a config file (I would like to see an example of such a file)?
How would my redux state tree look?
Should I dispatch an action for every error based on the status code?
I would appreciate any suggestions or real-world examples.
Here are some examples of error responses from our backend:
Request URL: https://example.com/api/call/123
Request Method: POST
Status Code: 400 Bad Request
Request URL: https://example.com/api/call/123
Request Method: PUT
Status Code: 409 Conflict
Request URL: https://example.com/api/user/me/
Request Method: GET
Status Code: 401 Unauthorized
It basically depends on how you are trying to display the message. For instance, in our own projects, we are using a global snackbar component to display errors if any have occurred during the requests.
Most of the time users don't care about the status code; if you don't want to be very specific, you can display a simple alert/snackbar, for example: "Sorry, some error occurred".
If you are sure that you do need to show specific errors to the user, then I definitely recommend a constants file for errors which will store all your error messages. You can keep it in a constants directory in the store folder, or maybe even in /helpers; it depends on your choice.
Yep, you can definitely divide your errors based on whether the error was on the server or the client side.
I don't think the redux tree will change if you're not managing errors in the tree. If you want to, definitely use a snackbar/alert reducer at the top of the tree.
You may not want to show the same error for a status code in each different component. If you do want to, you can, but that would add a lot of unnecessary code to your requests.
For our projects, since we are using i18n for internationalization, we have a snackbar reducer and actions folder. We import the snackbar actions in our sagas and just display a simple error message (you can definitely customize it for your needs). That's all, keep it simple.
yield put(Actions.apiRequest);
try {
  const res = yield axios.put('/todo/', updateData);
  if (res.data.status === 'success') {
    yield put(Actions.fetchTodos(todoID));
    yield put(snackbarSuccess('Todo Saved Successfully !'));
  } else {
    throw new Error();
  }
} catch (error) {
  yield put(Actions.apiError);
  yield put(snackbarError(REQUEST_FAIL)); // an imported constant
}
Some basic code behind my implementation.
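The snackbar actions and the REQUEST_FAIL constant imported above might look roughly like this; the action types and payload shape are my assumptions, not the actual project code:
// snackbar.actions.js (hypothetical): plain action creators consumed by a snackbar reducer
export const snackbarSuccess = (message) => ({
  type: 'SNACKBAR_SUCCESS',
  payload: { message, variant: 'success' }
});
export const snackbarError = (message) => ({
  type: 'SNACKBAR_ERROR',
  payload: { message, variant: 'error' }
});
// constants.js (hypothetical): default user-facing message for failed requests
export const REQUEST_FAIL = 'Sorry, some error occurred. Please try again.';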
1) Assuming you're also doing the BE, or can ask someone to adjust the response: it might be best to return a body with your API error response rather than just an HTTP status code, if possible. That body could then contain an error 'code' that maps to a message on your front end, as well as a field name, which can be really helpful for displaying errors in the right place on forms, etc. Alternatively, the entire message could come from the BE and the FE simply displays it. I work on an enterprise-level codebase that uses both of these methods.
2) Regarding error messages, I'd always store them in a common file, but beyond that it's up to you really. It sort of depends on how you implement #1. Personally I like error 'codes' stored in an enum file, each corresponding to a message, because you can then drive other logic from that (e.g. don't display the rest of a form if error X is triggered, use a custom message for the error code in one situation or fall back to a default); see the sketch after this list.
3) Not sure - I guess you'd do that if you want to log server-side errors but show client-side ones. Where I work we differentiate purely for different logging categories, I think.
4) Again depends on your implementation - somewhat up to you. Some form packages will handle this for you in redux. Others will just use local state and not redux for this.
5) It would make sense to, yes. Again, if you look at a custom error code returned in the body of the API call, that'll give you more flexibility.
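As a rough illustration of point 2, an error-code map could look something like this; the codes and messages are placeholders, not from any real backend:
// errorCodes.js (hypothetical): maps backend error 'codes' to user-facing messages
export const API_ERRORS = {
  VALIDATION_FAILED: 'Please check the highlighted fields and try again.',
  DUPLICATE_RESOURCE: 'This item already exists.',
  SESSION_EXPIRED: 'Your session has expired. Please log in again.',
  DEFAULT: 'Sorry, something went wrong.'
};
// Fall back to a default message when the code is unknown
export const messageForCode = (code) => API_ERRORS[code] || API_ERRORS.DEFAULT;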
I hope that gives you some ideas, based on my experience rather than any set way of thinking.
Have a look at https://reactjs.org/docs/error-boundaries.html as well, and, if you haven't already, at best practices for REST API error codes: https://blog.restcase.com/rest-api-error-codes-101/

Gorilla CSRF with AngularJS

I'm trying to get AngularJS to work with Gorilla CSRF for my web application, but there isn't much documentation around that I can find, so I'm not sure where exactly to start. Should I set an X-CSRF-Token for every GET request or should I just do it when the user visits the home page like I'm doing now? Also, how do I make AngularJS CSRF protection work with Gorilla CSRF? Do I need to do some sort of comparison? Any example code would be appreciated.
Here is my code:
package main

import (
	"net/http"

	"github.com/gorilla/csrf"
	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/", Home).Methods("GET")
	// Other routes handling goes here
	http.ListenAndServe(":8000",
		csrf.Protect([]byte("32-byte-long-auth-key"))(r))
}

func Home(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("X-CSRF-Token", csrf.Token(r))
}

// More routes
Your question might be a bit broad, but overall you're misusing the tools, so I'm just going to try to explain the basic ideas. The application you're using uses a 'double submit' pattern for CSRF protection. This requires changes in both the client and server code bases. The server should not be setting the X-CSRF-Token header; that is the role of the client. I've actually implemented a couple of from-scratch anti-CSRF solutions recently and they're pretty simple (both double submit pattern). I also used a few packages from vendors like Microsoft and Apache (I had to implement CSRF across like 20 years of applications on all kinds of stacks).
In the double submit pattern the server should be setting a cookie with a random value (like a GUID), and the cookie must be marked as secure. You can make it HttpOnly as well; however, that will require you to do a lot more work on your front end resources. On the client side, the simplest way to deal with this is to implement some JavaScript that reads the cookie value and adds it as a header before any POST request. You don't typically need to protect GETs. You could, but if your GETs are doing constructive/destructive things server side, then you're misusing the HTTP verb, and I would correct that by making those requests POSTs rather than trying to protect every single request.
On the server side, it's best to do the CSRF check up front, in a common place where all requests come in. When a POST comes in, the server should read the cookie value, check for the header value and compare them. If they're equal then the request should be allowed to pass through, if they're not then you should boot them out with a 403 or something. After doing so the server should rewrite the cookie value (best to make it one use only).
Your client-side script can have something like the code below; just make sure the resource is on every page load and you don't use form submits, and this will cover everything. If you submit forms you'll need some other code like this to handle that. Some approaches prefer to write the value into the DOM server side. For example, in .NET the CSRF library makes the value HttpOnly and Secure and expects the devs to put a placeholder token in every single form in every single cshtml file in their project... I personally think that is very stupid and inefficient. No matter how you do this, you're probably going to have to do some custom work. Angular isn't going to implement the front end for Gorilla's CSRF library, and Gorilla probably isn't going to come with JavaScript for your client since it's an API library. Anyway, a basic JavaScript example:
// three functions to enable CSRF protection in the client. Sets the nonce header with value from cookie
// prior to firing any HTTP POST.
function addXMLRequestCallback(callback) {
  var oldSend;
  if (!XMLHttpRequest.sendcallback) {
    XMLHttpRequest.sendcallback = callback;
    oldSend = XMLHttpRequest.prototype.send;
    // override the native send()
    XMLHttpRequest.prototype.send = function () {
      XMLHttpRequest.sendcallback(this);
      if (!Function.prototype.apply) {
        Function.prototype.apply = function (self, oArguments) {
          if (!oArguments) {
            oArguments = [];
          }
          self.__func = this;
          self.__func(oArguments[0], oArguments[1], oArguments[2], oArguments[3], oArguments[4]);
          delete self.__func;
        };
      }
      // call the native send()
      oldSend.apply(this, arguments);
    };
  }
}

addXMLRequestCallback(function (xhr) {
  xhr.setRequestHeader('X-CSRF-Token', getCookie('X-CSRF-Cookie'));
});

function getCookie(cname) {
  var name = cname + "=";
  var ca = document.cookie.split(';');
  for (var i = 0; i < ca.length; i++) {
    var c = ca[i];
    while (c.charAt(0) == ' ') c = c.substring(1);
    if (c.indexOf(name) == 0) return c.substring(name.length, c.length);
  }
  return "";
}
Now, if you can narrow your question a bit I can provide some more specific guidance, but this is just a guess (maybe I'll read their docs when I have a minute). Gorilla is automatically going to set your cookie and do your server-side check for you if you use csrf.Protect. The code you have setting the header in Go is what you need the JavaScript above for. If you set the header on the server side, you've provided no security at all; that needs to happen in the browser. If you send the value along with all your requests, Gorilla will most likely cover the rest for you.
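Since the front end here is AngularJS, it's also worth noting that $http already implements this cookie-to-header copy for you, provided the token cookie is readable by script (i.e. not HttpOnly). The cookie and header names below are only chosen to match the hypothetical example above:
// Configure AngularJS's built-in XSRF support to read the token cookie and
// send it back in the header that gorilla/csrf checks by default.
angular.module('app').config(['$httpProvider', function ($httpProvider) {
  $httpProvider.defaults.xsrfCookieName = 'X-CSRF-Cookie';
  $httpProvider.defaults.xsrfHeaderName = 'X-CSRF-Token';
}]);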
Some other random thoughts about the problem space. As a rule of thumb, if an attacker can't replay a request, they probably can't CSRF you. This is why this simple method is so effective. Every incoming request has exactly one random GUID value it requires to pass through. You can store that value in the cookie so you don't have to worry about the session moving across servers etc. (that would require a shared data store server side if you're not using the double submit pattern, i.e. this cookie-header value compare business). There's no real chance of this value being brute forced with current hardware limitations. The same-origin policy in browsers prevents attackers from reading the cookie value you set (only scripts from your domain will be able to access it if it's set as secure). The only way to exploit that is if the user has previously been exploited by XSS, which kind of defeats the purpose of doing CSRF, since the attacker would already have more control and ability to do malicious things with XSS.

What is the difference between .all() and .one() in Restangular?

What is the difference between these two? Both seem to make a GET to /users and retrieve them.
Restangular.one('users').getList().then(function (users) {
  // do something with users
});

Restangular.all('users').getList().then(function (users) {
  // do something with users
});
I understand that you can do one('users', 123) and it will retrieve /users/123 but without the second argument it seems to be the same thing. Why not just have one method in that case?
The one() function has a second argument that accepts an id e.g. .one('users', 1).
one('users', 1).get() translates to /users/1
all('users').getList() translates to /users
Unlike all(), one() is not generally used with .getList() without argument. However, if you were to call .one('users', 1).getList('emails') or .one('users', 1).all('emails').getList(), then you would make a GET request to /users/1/emails.
My guess is that they are there to express the intention of what you are going to do. I would understand them as a way to build the URL, expressing whether you are accessing the whole resource or a specific one.
In the end, they are going to build and perform a GET request, but just because you can do a GET and retrieve some data does not mean it should be used that way.
Example extracted from https://github.com/mgonto/restangular/issues/450
getList can be called both ways. If it's called in an element one,
then it needs a subelement to get to a Collection. Otherwise, it
fetches the collection. So the following is the same:
Restangular.one('places', 123).getList('venues') // GET /places/123/venues
Restangular.one('places', 123).all('venues').getList() // GET /places/123/venues
As you can see, it is more expressive to call one('places', 123).all('venues') to understand that you just want the venues located in the area/place 123.
Maybe the following url will help you:
https://github.com/mgonto/restangular/issues/450
I've recently discovered a difference between these methods. Yes, both of them make the same get requests, but the results you get might surprise you (as they surprised me).
Let's assume we have an API method /users which returns not strictly an array, but something like this:
{
  "result": [{...}]
}
So an array is returned as a value of some prop of the response object. In this case get() and getList() work differently. This code works well:
Restangular.get('users').then(function (response) {...});
Your response handler gets invoked after response has been received. But this code doesn't seem to work:
Restangular.all('users').getList().then(function (response) {...});
The response handler is not invoked, despite the request completing with status code 200 and a non-empty response. The browser console doesn't show any errors and the network monitor shows a successful request.
I've tested this with Restangular 1.5.2, so this is probably already fixed in newer versions.
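For what it's worth, the approach Restangular's own FAQ suggests for wrapped responses like this is a response interceptor that unwraps the array before getList() sees it; the result key below comes from the example response above:
// Unwrap { result: [...] } for list requests so getList() receives a plain array
RestangularProvider.addResponseInterceptor(function (data, operation) {
  if (operation === 'getList' && data && data.result) {
    return data.result;
  }
  return data;
});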

Nancy RequiresClaims failure returns a 403. How do I use this?

If a module requires a claim, and the user does not have that claim, a 403 response is returned.
eg:
this.RequiresClaims(new[] { "SuperSecure" });
or
this.RequiresValidatedClaims(c => c.Contains("SuperSecure"));
but that just returns a blank page to the user.
How do I deal with a user not having the required claim?
Can I 'catch' the 403 and redirect?
The RequiresClaims method returns void or uses the pre-request hook to throw back an HttpStatusCode.Forbidden. What should I do so the user knows what has happened?
Many Thanks,
Neil
You can catch it either by writing your own post-request hook (either at the app level or the module level) or by implementing your own IErrorHandler, probably wrapping the default one.
The error handler stuff is going to change so you will be able to register multiple ones (for different error codes); it's set up to do that (with the "can/do" interface) but for some reason my brain didn't add it as a collection :-)
