If I'm safe from CSRF, am I safe from spambots?

I've read up quite a bit on spam prevention, and this is one apparent solution that keeps being suggested:
"Use a token and put it into a session and also add it to the form. If the token is not submitted with the form or doesn't match then it is automated and can be ignored."
Source: https://webmasters.stackexchange.com/questions/3588/how-do-spambots-work
Which is basically the same advice given for protecting yourself from CSRF.
So my question is: do spambots rely entirely on a method that amounts to CSRF? Do they simply send repeated POST requests without ever requesting the page to discover the hidden token embedded in the form? This seems almost suspiciously easy to stop, and I'm skeptical. Does anyone have any concrete information on this?

Imagine a crawler that visits random URLs and, whenever it sees a form, fills it in and submits it. In that case the token will be accepted automatically, since it was generated when the page was loaded.
So, as an additional defence, add a tough CAPTCHA.
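For concreteness, here is a minimal sketch of the token scheme the quote describes, assuming an Express app with express-session (the /comment route and the csrfToken field name are illustrative, not from the original question):

```typescript
import crypto from "crypto";
import express from "express";
import session from "express-session";

// Let TypeScript know about the field we add to the session.
declare module "express-session" {
  interface SessionData { csrfToken?: string }
}

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: "change-me", resave: false, saveUninitialized: true }));

// Render the form with a fresh token that is also stored in the session.
app.get("/comment", (req, res) => {
  const token = crypto.randomBytes(32).toString("hex");
  req.session.csrfToken = token;
  res.send(`<form method="post" action="/comment">
      <input type="hidden" name="token" value="${token}">
      <textarea name="body"></textarea>
      <button>Post</button>
    </form>`);
});

// Ignore submissions whose token is missing or doesn't match the session.
app.post("/comment", (req, res) => {
  if (!req.body.token || req.body.token !== req.session.csrfToken) {
    return res.status(403).send("ignored");
  }
  res.send("comment accepted");
});
```

Note that a bot which first GETs /comment and then POSTs back the form it received passes this check; that is exactly the crawler scenario above. The token only stops blind, fire-and-forget POSTs.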

Related

Secure an API route without credentials

I have a form that checks whether an email is already in my DB (/api/user?email=user@example.com); if it is, it responds with that user's information.
I can't seem to find a way to protect my API route from someone opening Postman and just brute-forcing GET https://example.com/api/user?email=name@domain.com to collect personal information.
I need this functionality without any login credentials. I know there must be an industry-standard way of doing this. There are insurance providers that do this with their forms (e.g. once you enter your email, it greets you with your name and asks you to finish filling out the form).
In other words, I need my API route to somehow differentiate between a legitimate browser making those requests and someone with different intentions.
There is no standard, but you can protect your route from brute force:
Add rate limiting to block brute force from a small range of IPs (a sketch follows below).
Add a CAPTCHA check to stop non-browser requests and cheap bots.
Generate a session (or hashed URL) for each user and allow only users with a session to complete the form by entering an email.
Use a CSRF token to reject requests that did not come from your own pages.
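A minimal sketch of the rate-limiting item, assuming an Express app and the express-rate-limit package (the route path and the limits are illustrative):

```typescript
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Allow each IP at most 10 email lookups per 15-minute window;
// further requests get a 429 until the window resets.
const lookupLimiter = rateLimit({ windowMs: 15 * 60 * 1000, max: 10 });

app.get("/api/user", lookupLimiter, (req, res) => {
  const email = String(req.query.email ?? "");
  // ...look the user up here; return only shallow, non-sensitive fields.
  res.json({ email });
});
```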
Without credentials there is no 100% bulletproof way of distinguishing an authentic request from a user from one made by someone or something else. The "industry standard" is a certain level of personal data that can be exposed without any verification, as your example with insurance providers shows.
There are some ways to mitigate this, such as checking the request headers, specifically Origin, Referer, User-Agent, etc., but all of these can be spoofed if someone really wants to. If I were in your place and absolutely had to accept requests without credentials, I would return only a shallow amount of information, such as the email and a first name, so that little is exposed even if the route is abused.
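For illustration, here is what that header check might look like as Express middleware (example.com is a placeholder for the real origin); remember it is only a speed bump, not authentication:

```typescript
import type { Request, Response, NextFunction } from "express";

// Best-effort check: reject requests whose Origin/Referer don't match the
// site. Any client can forge these headers, so this only filters out lazy
// tooling, not a determined attacker.
function sameSiteOnly(req: Request, res: Response, next: NextFunction) {
  const origin = req.headers.origin ?? req.headers.referer ?? "";
  if (!origin.startsWith("https://example.com")) {
    return res.status(403).json({ error: "forbidden" });
  }
  next();
}
```

Mount it in front of the lookup route (e.g. app.get("/api/user", sameSiteOnly, ...)) alongside the rate limiter above.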

How do I entirely limit access from a frontend framework (React) to specific (admin) pages using a REST API (is it possible?)

I'm very new to the world of REST APIs and frontend JS frameworks, and I don't really understand how I can limit a frontend's access to specific pages. I don't think I really can, can I? I'll explain:
Usually, if I develop without a REST API, I can use the backend to decide whether a user may access the content of certain pages and block it if needed, so there is no way to download (and view) whatever would be presented on those pages.
On the other hand, if I build a REST API for the same pages, I can only limit the data itself (I can block any request to a protected endpoint), but the user can still download the page's frontend code. Even if I check whether the user may view the page, he can still download it and read it, because the check and all of the logic that presents the data live in the frontend, which the user can inspect as code.
Am I getting this right? If not, please explain it to me.
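For what it's worth, the split the question describes can be made concrete. A minimal sketch assuming an Express backend serving a React build (isAdmin is a hypothetical stand-in for a real session/auth check):

```typescript
import express from "express";

const app = express();

// The SPA bundle (markup, JS, even the admin page's templates) is static;
// anyone can download it. The frontend cannot truly be hidden.
app.use(express.static("build"));

// What the backend *can* fully protect is the data behind the page.
// isAdmin is a hypothetical check against your real auth/session system.
const isAdmin = (req: express.Request): boolean => false; // stub

app.get("/api/admin/stats", (req, res) => {
  if (!isAdmin(req)) {
    return res.status(403).json({ error: "forbidden" });
  }
  res.json({ users: 42 }); // only verified admins receive real data
});
```

So yes: the admin page's shell is downloadable by anyone, but it renders nothing useful for non-admins, because the only thing of value, the data, never leaves the server without an authorized request.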

Authentication for single-page apps

Background
I am looking at the OAuth 2.0 Implicit Grant flow, where a user is redirected to an authentication service and a JWT is sent back to a Single-Page Application (SPA). The token is stored in a cookie or in local storage and, in the examples I have seen, the application hides/shows certain pages based on whether it can find the token in storage.
Issue
The problem is that in all the examples (official ones from service providers), I was able to manually add any random but properly formed token to the browser's local storage and get access to the 'secured' pages.
It was explained to me that you cannot validate the token in the SPA, because that would require exposing the client secret, and that you should validate the token on the API server instead. This means that you can 'hide' the pages, but it is really easy to see them if someone wants to. Having said that, you are unlikely to cause any real damage, because any data retrieval or actions would need to go through the API server, and the token should be validated there.
This is not really a vulnerability, but the documentation and examples I have seen do not explicitly cover this nuance, and I think it could lead naive programmers (like myself) to believe that some pages are completely secure when that is not strictly the case.
Question
It would be really appreciated if someone better informed than I am could confirm that this is indeed how SPA authentication is supposed to work.
I am far from an expert, but I have played a bit in this space. My impression is that you are correct: any showing/hiding of functionality based solely on the presence of a token is easily spoofed. Your SPA could, of course, get into verifying an access token.
But that may just make it a little more challenging to spoof. If someone wants to fake the client into thinking it has a valid token, they can likely manipulate the client-side JS to do that. Unfortunately that's the nature of client-side JS. Much of the code can be manipulated in the browser.
Thus far this speaks to preventing a user from seeing a UI/UX. Most applications are only useful when they have data to populate their UI, and that is where the API access-token strategy is still sound: the server verifies the token and gives the client no data without it.
So while it's unfortunate that JS can be easily spoofed and manipulated to show things the developer would rather not make visible, this isn't typically a deal-breaker. If you have some awesome UI feature that doesn't need data, and you need to secure access to that UI itself, this model may not be the greatest.
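To make the "server verifies the token" part concrete, here is a sketch assuming an Express API and the jsonwebtoken package (the route, the key handling, and the RS256 choice are illustrative):

```typescript
import express from "express";
import jwt, { JwtPayload } from "jsonwebtoken";

const app = express();
// Public key of the token issuer; kept on the server, never in the SPA.
const ISSUER_PUBLIC_KEY = process.env.ISSUER_PUBLIC_KEY ?? "";

app.get("/api/orders", (req, res) => {
  const auth = req.headers.authorization ?? "";
  const token = auth.startsWith("Bearer ") ? auth.slice(7) : "";
  try {
    // The signature check happens here: a random but well-formed token
    // that fooled the client-side show/hide logic is rejected at this point.
    const claims = jwt.verify(token, ISSUER_PUBLIC_KEY, {
      algorithms: ["RS256"],
    }) as JwtPayload;
    res.json({ user: claims.sub, orders: [] });
  } catch {
    res.status(401).json({ error: "invalid token" });
  }
});
```

This is why the spoofed token in the question opened the 'secured' pages but would never have returned real data.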

Verify that a user can only query his own data/information

o/
I'm working on a smaller app, and it's going pretty well so far. I talked with a friend about it and he suddenly made me realize something: how do I make sure a user is only able to query his own data from a database in the cloud?
It's a very simple app where you can create a user and make some personal shopping lists.
I thought about a couple of options, but I'm not sure what is the right direction to take - or even if any of them is the right one.
The username/id & password is stored locally and appended to the request, and checked against the DB every time.
A token is generated, saved both in the DB and stored locally as an "active" session, and every time a request is sent, the token is appended to the request and checked.
...?
I'm sorry if I gave this topic the wrong tags; I was not 100% sure where it should be placed.
Well, from your description it seems that you are working on a "no backend" app. If that is the case, I suggest you take a look at Firebase, since it will solve all of your concerns about authentication and user authorization.
If you would like a more custom approach, consider that appending the username and password to every request is not recommended, and since you are using a token it is also unnecessary.
Now, returning to the question, here is my view of contexts where an authentication token is used and thus a backend is needed (sketched in code after the list):
when you log a user in, you produce a token that is a function of the user id
each user request must contain that token
the backend can extract, from the appended token, the id of the user who submitted the request
a policy or a specific condition checks that the data about to be retrieved belongs to the user whose id was extracted
Hope this helps.
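A minimal sketch of those steps, assuming a Node backend and the jsonwebtoken package; the in-memory Map is a stand-in for the cloud database:

```typescript
import jwt, { JwtPayload } from "jsonwebtoken";

const SECRET = "server-side-secret"; // stays on the backend, never on the device

// Step 1: on login, issue a token that encodes the user's id.
function issueToken(userId: string): string {
  return jwt.sign({ sub: userId }, SECRET, { expiresIn: "1h" });
}

// Steps 2-4: each request carries the token; the backend extracts the id
// from it and scopes the query to that id only.
function getShoppingLists(token: string, db: Map<string, string[]>): string[] {
  const claims = jwt.verify(token, SECRET) as JwtPayload; // throws if forged/expired
  // The lookup key comes from the verified token, not from request input,
  // so a user can only ever read lists belonging to their own id.
  return db.get(String(claims.sub)) ?? [];
}
```

The key point is step 4: the user id used in the query is the one recovered from the token, not anything the client sends alongside it.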

How do I use libcurl to log in to a secure website and get at the HTML behind the login

I was wondering if you could help me work through accessing the HTML behind a login page using C and libcurl.
Specific Example:
The website I'm trying to access is https://onlineservices.ubs.com/olsauth/ex/pbl/ubso/dl
Is it possible to do something like this?
The problem is that we have a lot of clients, each of which has a separate login. We need to get data from each of their accounts every day. It would be really slick if we could write something in C to do this and save all the pertinent data to a file (like the values of the accounts and positions, which I can parse from the HTML).
What do you guys think? Is this possible and could you help point me in the right direction with some examples, etc...?
After a cursory glance at the login page, this looks possible with libcurl: post the username/password combo to their authentication page, assuming they use cookies to represent a login session. The first step is to make sure that you've got the following options set:
CURLOPT_FOLLOWLOCATION - The server may redirect after authenticating, this is quite common.
CURLOPT_POST - This tells libcurl to switch into post mode.
CURLOPT_POSTFIELDS - This tells libcurl the values to set for the post fields. Set this option to "userId=<insert username>&password=<insert password>". That value is derived from the source code for that page.
CURLOPT_USERAGENT - Set a simple user-agent, so that the web server won't throw it out (some strict ones will do this).
Then, once the post is complete, the libcurl instance should hold whatever authorisation cookie the site uses to identify a logged-in user. Note that libcurl only keeps track of cookies once its cookie engine is enabled, for example by setting CURLOPT_COOKIEFILE (even to an empty string); there are plenty of options if you want to tweak how cookies behave.
Make sure that once you are 'logged in', the same libcurl instance is used for each request under that account; otherwise the site will see you as logged out. The same flow is sketched below.
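To make the post-credentials-then-reuse-the-cookie sequence concrete, here is the same flow sketched in TypeScript with Node's built-in fetch rather than libcurl, purely for illustration; the userId/password field names follow the answer above, and everything else is assumed:

```typescript
// Illustrative only: the flow the answer describes, using Node's fetch
// (Node 18.14.1+ for headers.getSetCookie) instead of libcurl.
async function fetchBehindLogin(loginUrl: string, dataUrl: string,
                                userId: string, password: string): Promise<string> {
  // Post the credentials. Redirects are handled manually here because
  // the session cookie is often set on the redirect response itself.
  const loginRes = await fetch(loginUrl, {
    method: "POST",
    body: new URLSearchParams({ userId, password }),
    headers: { "User-Agent": "Mozilla/5.0 (compatible; account-fetcher)" },
    redirect: "manual",
  });
  // Keep the cookie(s) the server set; this is the login session.
  const cookie = loginRes.headers.getSetCookie()
    .map((c) => c.split(";")[0])
    .join("; ");
  // Replay the same cookie on every later request for this account,
  // otherwise the site treats us as logged out.
  const pageRes = await fetch(dataUrl, {
    headers: { Cookie: cookie, "User-Agent": "Mozilla/5.0 (compatible; account-fetcher)" },
  });
  return pageRes.text();
}
```

In libcurl terms, the manual cookie handling above is what the cookie engine does for you once it is enabled on the shared instance.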
As far as parsing the resulting pages goes, there are tonnes of HTML parsers for C; just Google. The only thing I will say is: do not try to write an HTML parser yourself. It is notoriously tricky, because a lot of sites don't produce good (or even valid) HTML.
