How do I use libcurl to log in to a secure website and get at the HTML behind the login? - c

I was wondering if you could help me work through accessing the html behind a login page using C and libcurl.
Specific Example:
The website I'm trying to access is https://onlineservices.ubs.com/olsauth/ex/pbl/ubso/dl
Is it possible to do something like this?
The problem is that we have a lot of clients, each of which has a separate login. We need to get data from each of their accounts every day. It would be really slick if we could write something in C to do this and save all the pertinent data into a file (like the account values and positions, which I can parse from the HTML).
What do you guys think? Is this possible and could you help point me in the right direction with some examples, etc...?

After a cursory glance at the login page, it should be possible to do this with libcurl by posting the username/password combination to their authentication page, assuming they use cookies to represent a login session. The first step is to make sure that you've got the following options set:
CURLOPT_FOLLOWLOCATION - The server may redirect after authenticating; this is quite common.
CURLOPT_POST - This tells libcurl to switch into POST mode.
CURLOPT_POSTFIELDS - This tells libcurl the values to set for the POST fields. Set this option to "userId=<insert username>&password=<insert password>". That value is derived from the source code of that page.
CURLOPT_USERAGENT - Set a simple user agent so that the web server won't throw the request out (some strict ones will).
Then, once the POST is complete, the libcurl handle should contain whatever authorisation cookie the site uses to identify a logged-in user. libcurl keeps track of cookies within a given handle once the cookie engine is enabled (set CURLOPT_COOKIEFILE to an empty string, or CURLOPT_COOKIEJAR if you also want the cookies written to disk); there are plenty of options if you want to tweak how cookies behave.
Make sure that once you are 'logged in', the same libcurl handle is used for every request under that account; otherwise the site will see you as logged out.
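Putting those options together, a minimal sketch might look like the following. The userId/password field names come from the page source as noted above, but the login URL, the credentials and the follow-up account page URL are placeholders you would need to confirm against the real site's form action.

/* Minimal sketch: log in with a POST and reuse the same handle (and its
 * cookies) for later requests. The login endpoint, credentials and the
 * second URL are placeholders, not confirmed against the real site.
 * Build with something like: cc login.c -lcurl */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Enable the cookie engine so the session cookie is kept in memory. */
    curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://onlineservices.ubs.com/olsauth/ex/pbl/ubso/dl");
    curl_easy_setopt(curl, CURLOPT_POST, 1L);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                     "userId=myuser&password=mypassword");
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "Mozilla/5.0 (account-fetcher)");

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK) {
        fprintf(stderr, "login failed: %s\n", curl_easy_strerror(res));
    } else {
        /* Reuse the SAME handle: the session cookie is sent automatically.
         * This second URL is purely illustrative. */
        curl_easy_setopt(curl, CURLOPT_HTTPGET, 1L);
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://onlineservices.ubs.com/some/account/page");
        res = curl_easy_perform(curl); /* body goes to stdout by default */
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}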
As far as parsing the resulting pages goes, there are tonnes of HTML parsers for C - just google. The only thing I will say is: do not try to write an HTML parser yourself. It is notoriously tricky, because a lot of sites don't produce good (or even valid) HTML.
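For example, libxml2 ships an HTML parser that tolerates messy markup. The snippet below is only a sketch of that approach; the inline HTML string and the idea of dumping text nodes are illustrative, not taken from the site.

/* Minimal sketch: parse HTML with libxml2's forgiving HTML parser and print
 * the text nodes. Build with something like:
 *   cc parse.c $(xml2-config --cflags --libs) */
#include <stdio.h>
#include <string.h>
#include <libxml/HTMLparser.h>
#include <libxml/tree.h>

static void print_text(xmlNode *node)
{
    for (xmlNode *cur = node; cur; cur = cur->next) {
        if (cur->type == XML_TEXT_NODE && cur->content)
            printf("%s\n", (const char *)cur->content);
        print_text(cur->children);
    }
}

int main(void)
{
    /* Stand-in for the HTML you fetched with libcurl. */
    const char *html =
        "<html><body><td>Account value: 123.45</td></body></html>";

    htmlDocPtr doc = htmlReadMemory(html, (int)strlen(html), "account.html",
                                    NULL,
                                    HTML_PARSE_NOERROR | HTML_PARSE_NOWARNING);
    if (!doc)
        return 1;

    print_text(xmlDocGetRootElement(doc));

    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}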

Related

User login in Django + React

I have looked through quite a few tutorials (e.g. this, this, and this) on user authentication in a full-stack Django + React web app. All of them simply send username and password received from the user to the backend using a POST request. It seems to me that, if the user leaves the computer unattended for a minute, anyone can grab his password from the request headers in network tools in the browser. Is this a valid concern that must be taken care of? If so, how should these examples be modified? A tutorial / example of the correct approach would be appreciated.
It seems to me that, if the user leaves the computer unattended for a minute, anyone can grab his password from the request headers in network tools in the browser
If the user leaves the computer unattended then what you are describing will probably be the least of his/her worries.
Authentication is a complex topic. If you really do not want to use existing libraries that handle this for you, you will need to spend quite some time to get things right (knowing that even then, zero risk does not exist). The most basic steps are to never store plain-text credentials in your DB and to use HTTPS so they are transmitted over an encrypted connection. You can then start thinking about JWTs, avoiding local storage, CSRF protection, securing cookies, refresh tokens, etc.
You cannot do much, however, about cases like the one you describe, where people give away access to their computers or share their passwords with others, except to remind them they should never do such a thing.
On a side note, if the user didn't have the network monitoring tool open when making the request to your website, opening it afterwards will not show the previously submitted plain-text credentials (there are workarounds to this, however).
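As an aside on the "never store plain-text credentials" point: Django's auth framework already hashes passwords for you, but the sketch below shows the general shape of doing it by hand, using libsodium's crypto_pwhash API. The library choice and the sample password are mine, purely for illustration.

/* Minimal sketch of storing a password hash instead of the password itself.
 * Uses libsodium's Argon2-based crypto_pwhash_str; this is an illustration
 * of the principle, not something a Django app would need to hand-roll.
 * Build with something like: cc hash.c -lsodium */
#include <stdio.h>
#include <string.h>
#include <sodium.h>

int main(void)
{
    if (sodium_init() < 0)
        return 1;

    const char *password = "correct horse battery staple"; /* sample only */

    /* What you store in the DB: a salted, self-describing hash string. */
    char stored[crypto_pwhash_STRBYTES];
    if (crypto_pwhash_str(stored, password, strlen(password),
                          crypto_pwhash_OPSLIMIT_INTERACTIVE,
                          crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0)
        return 1; /* ran out of memory */

    /* At login time: verify the submitted password against the stored hash. */
    if (crypto_pwhash_str_verify(stored, password, strlen(password)) == 0)
        puts("password ok");
    else
        puts("wrong password");

    return 0;
}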

How do I entirely limit access from a frontend framework (React) to specific (admin) pages using a REST API (is it possible?)

I'm very new to the world of REST APIs and frontend JS frameworks, and I don't really understand how I can limit a frontend's access to specific pages. I don't really think I can, can I? I'll explain:
Usually, if I develop without a REST API, I can use the backend to determine whether a user may access content (on some pages) and block it if needed, so there is no possible way to download (and view) whatever might be presented on that page.
On the other hand, if I make a REST API for the same pages, I can only limit the data that is presented (I will basically block any request to certain protected endpoints). The user will still be able to download the page's frontend code and see it: even if I check on the frontend whether the user may view the page, that check and all the logic to present the data live in the frontend, which the user can inspect (even if only by reading the code).
Am I getting this right? If not, please explain it to me.

React - several stored accounts in one client

I'm looking for some information about a problem that I never thought about, and that I can't find much on the internet (or I'm looking wrong).
Here it is: for a dashboard project at my company, I need to set up a system that stores one account per client, so that I only have to click on the account in question to connect, like Twitter, Google or Instagram, for example.
After the person has added their account, it appears in a list they can switch between whenever they want. I just don't really see how to set this up. Store the login information in localStorage? Or the JWT token? I admit I can't find a solution that is as correct and secure as it should be, which is why I'd like to know whether some people have already done this, or whether I can be pointed towards an interesting solution.
Thanks a lot!
(Sorry for my english, i'm french and it's not perfect ^^)
Do not use localStorage. localStorage is not secure at all and can easily be hijacked by any JS code running on the page. If you need to store sensitive information, you should always use a server-side session.
When a user logs into your website, create a session identifier for them and store it in a cryptographically signed cookie. If you're using a web framework, look up “how to create a user session using cookies” and follow that guide.
Make sure that whatever cookie library your web framework uses is setting the httpOnly cookie flag. This flag makes it impossible for JavaScript running in the browser to read the cookie, which is required in order to safely use server-side sessions with cookies. Read Jeff Atwood's article for more information. He's the man.
Make sure that your cookie library also sets the SameSite=strict cookie flag (to prevent CSRF attacks), as well as the secure=true flag (to ensure cookies can only be sent over an encrypted connection).
Each time a user makes a request to your site, use their session ID (extracted from the cookie they send to you) to retrieve their account details from either a database or a cache (depending on how large your website is).
Once you have the user's account info pulled up and verified, feel free to pull any associated sensitive data along with it.
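For reference, the kind of Set-Cookie header those flags produce looks like the sketch below. The cookie name, the placeholder value and the C snippet itself are just an illustration; in practice your framework's cookie library generates (and signs) the real header.

/* Minimal sketch of the Set-Cookie header a framework's cookie library
 * should emit for a server-side session: HttpOnly, Secure, SameSite=Strict.
 * "session_id" and its value are placeholders; real IDs must be random and
 * the value should be cryptographically signed by the framework. */
#include <stdio.h>

int main(void)
{
    const char *session_id = "b1946ac92492d2347c6235b4d2611184"; /* placeholder */
    char header[256];

    snprintf(header, sizeof header,
             "Set-Cookie: session_id=%s; HttpOnly; Secure; SameSite=Strict; Path=/",
             session_id);

    puts(header); /* in a real server this line goes into the HTTP response */
    return 0;
}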

Client-side vs. server-side authorization in AngularJS

I'm working on this application where the user is supposed to answer a bunch of questions.
What I want to do is protect the page that has the JSON of the question Objects, which also contains sensitive information, such as the answer(s) to the questions.
What I'm trying to do is restrict access to that page for non-admin users on the client side (they should not be able to type in the URL and see the JSON on that page; they should get a 403), while still allowing HTTP GET requests from non-admin users on the server side, so I can fetch the questions for them to answer.
This is what I have on the client-side:
$routeProvider
  .when('/questions', {
    resolve: checkRoleForRoute.admin
  })
And this is what I have on the server-side:
application.get('/questions', questions.getQuestions);
Both work well for separate routes, but once the route is the same, the server-side code is always executed, while the client-side code isn't. Therefore, any user that is not an admin is able to see the plain JSON when they access the URL, which is not desirable.
Any ideas on why this is happening?
Thank you.
You should add a security check on the server side before returning the JSON data, i.e. the server should check whether the data can be provided to the requesting user.
Moreover, in my opinion, the route is hackable on the client side too.
In fact, using the browser's dev tools, one can change or skip the 'resolve' field if they're smart enough.

Making HTTPS calls to a website that does not have an API

I am working on a WP7 app and I want to make HTTPS calls (sign in and then POST) to a website which does not have a REST API. So I will have to use it just like a browser: add headers to the HTTPS sign-in call, parse the resulting data to get the cookie data, the unique ID assigned, etc., and pass those on to the subsequent HTTPS calls I make.
Can someone please point me to the best way to do this? Pointers to some samples that come close to this would be helpful.
thanks
If the site in question doesn't prevent CSRF, you could just submit form data to it directly. If it does, you're going to need to screen-scrape the site's forms, populate the necessary fields, and then submit.
If you have control over the site, adding a proper API will probably leave you better off in the long run and less vulnerable to changes to the site.
If it's not your site, be sure to ask the site owner for permission before automating the login. You also need to be very careful about what you do with the user's login credentials.
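On WP7 you would use the platform's HttpWebRequest rather than libcurl, but the "scrape the form, then submit" flow is the same. The sketch below, in C to match the libcurl answer above, shows the idea of pulling a hidden token out of the sign-in page before posting; the field name "__token" and the sample HTML are made up, and a real HTML parser is preferable to string search.

/* Minimal sketch of the "screen scrape, then submit" flow: pull a hidden
 * form field (e.g. an anti-CSRF token) out of the sign-in page's HTML so it
 * can be included in the POST body. The field name "__token" is invented;
 * prefer a real HTML parser over string search for anything serious. */
#include <stdio.h>
#include <string.h>

/* Extract the value="..." that follows the given name="..." attribute.
 * Returns 0 on success, -1 if the field or its value can't be found. */
static int find_hidden_value(const char *html, const char *field,
                             char *out, size_t outlen)
{
    char needle[128];
    snprintf(needle, sizeof needle, "name=\"%s\"", field);

    const char *p = strstr(html, needle);
    if (!p)
        return -1;

    p = strstr(p, "value=\"");
    if (!p)
        return -1;
    p += strlen("value=\"");

    const char *end = strchr(p, '"');
    if (!end || (size_t)(end - p) >= outlen)
        return -1;

    memcpy(out, p, (size_t)(end - p));
    out[end - p] = '\0';
    return 0;
}

int main(void)
{
    /* Stand-in for the HTML fetched from the sign-in page. */
    const char *html =
        "<form action=\"/login\" method=\"post\">"
        "<input type=\"hidden\" name=\"__token\" value=\"abc123\" />"
        "</form>";

    char token[64];
    if (find_hidden_value(html, "__token", token, sizeof token) == 0)
        printf("POST body: user=me&pass=secret&__token=%s\n", token);
    else
        puts("token not found");
    return 0;
}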
