How to write a program to access data from an https URL - python-webbrowser

Recently I've been using Python to access web pages served over http, but I'm struggling to access pages served over https. Can someone help me learn the right procedure to solve this error?

It's a little easier if you use the requests module to access the URL, but you must tell requests.get not to verify the certificate by passing the argument verify=False (i.e. treat the connection as "not secure").
Use the following:
import requests

# verify=False disables TLS certificate verification (insecure; only use it if you accept the risk)
r = requests.get('https://m.facebook.com/josue.carranza.56884761', verify=False)
print(r.status_code)
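Note that with verify=False the connection is open to man-in-the-middle attacks, and urllib3 will print an InsecureRequestWarning on every call. If you accept that risk and want to silence the warning, this optional snippet works (urllib3 ships alongside requests):

import urllib3

# Suppress the InsecureRequestWarning emitted when verify=False is used
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)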

Related

IdentityServer and Session Storage

Apologies if this sounds a bit stupid, but I'm just experimenting with a JavaScript client using oidc-client.js connecting to IdentityServer 4.
The server creates a token and then sends it to the client, where oidc-client stores it in session storage (as I understand it).
Suppose you referenced an external JavaScript library on a CDN somewhere which got compromised somehow. What's to stop that compromised JS from accessing the token in session storage and giving an attacker access to whatever you're trying to secure with the token?
Thanks
Nothing at all. Don't use CDN-hosted libs for anything whose security you actually care about. Even with HTTP-only cookies a targeted attack is possible. While you're at it, don't let NPM anywhere near anything you care about the security of, as the potential exposure to malware is terrifying.
All in my opinion.

AppEngine authentication through Node.js

I'm trying to write a VS Code extension where users can log into Google App Engine with a Google account, and I need to get their SACSID cookie to make App Engine requests.
So I'm opening a browser window at
https://accounts.google.com/ServiceLogin?service=ah&passive=true&continue=https://appengine.google.com/_ah/conflogin%3Fcontinue%3Dhttp://localhost:3000/
(generated by google.appengine.api.users.create_login_url)
The user logs in and is redirected to my local webserver at
localhost:3000/_ah/conflogin/?state={state}
Now I try to forward the request to my App Engine app (since it knows how to decode the state parameter), so I make a request to
https://my-app.appspot.com/_ah/conflogin/?state={state}
basically just replacing localhost with the actual app,
but it doesn't work, presumably because the domain is different. I assume this is on purpose, for security.
Is there any way I can make this work?
Not ideal, but the only solution I've found is to have an endpoint on my GAE instance that does the redirection. Then I can set that as the continue URL when I'm starting the authentication process:
https://accounts.google.com/ServiceLogin?service=ah&passive=true&continue=https://appengine.google.com/_ah/conflogin%3Fcontinue%3Dhttps://my-app.appspot.com/redirect?to=http://localhost:3000
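For what it's worth, the redirect endpoint itself can be tiny. The following is only a sketch of the idea (assuming a Flask handler on App Engine; the /redirect route and the to parameter mirror the continue URL above, and exactly which query parameters Google forwards depends on the conflogin flow):

from urllib.parse import urlencode
from flask import Flask, redirect, request

app = Flask(__name__)

@app.route('/redirect')
def relay():
    # Forward the browser to the address given in "to", carrying along any other
    # query parameters (e.g. the state parameter) that arrived with the request.
    target = request.args.get('to', '')
    passthrough = urlencode({k: v for k, v in request.args.items() if k != 'to'})
    return redirect(f"{target}?{passthrough}" if passthrough else target)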
I think you should focus on the protocols you are using, since the cookie name depends on the protocol (ACSID over HTTP, SACSID over HTTPS); that's the security angle as far as I can see at this point.
Seeing the exact error you are facing would help in understanding the problem better. Also, knowing how you are performing the call to the API, and the code you are using, would be helpful.

How to protect RESTful service?

Contemplating building an Angular 2 front-end to my website. My question is not necessarily related to Angular, but I want to provide full context.
Application logic that displays content to the user would shift to the client. So on the server side I would need to expose data via a RESTful JSON feed. What worries me is that someone could completely bypass my front-end and execute requests against the service with various parameters, effectively scraping my database. I realize some of this is possible by scraping the HTML, but exposing a service with nicely formatted data makes it a no-brainer.
Is there a way to protect the RESTful service from this? In other words, is there a way to ensure such a service would only respond to calls from my Angular 2 application? Authentication certainly isn't a solution here - I don't want to force visitors to authenticate, and a scraper could very well authenticate and get access anyway.
I would recommend JWT authorization. OAuth 2.0 is one common framework that issues such tokens. Basically you get a JSON Web Token (JWT), signed by an authority you trust, that describes the user and which resources they can access on your API.
If the request doesn't include an Authorization token, your API rejects it.
If the token has been tampered with by someone trying to grant themselves privileges after it was signed by the authorization authority, your API rejects it.
It is a pretty cool piece of kit.
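As a rough illustration (my own sketch, not from the linked material), an API-side check using the PyJWT library might look like this; the shared secret and Bearer-header handling are placeholders:

import jwt  # pip install PyJWT

SECRET = "shared-signing-secret"  # in practice, the signing authority's key

def authorize(auth_header):
    if not auth_header or not auth_header.startswith("Bearer "):
        return None  # no Authorization token -> reject
    token = auth_header[len("Bearer "):]
    try:
        # Signature verification fails if the token was tampered with after signing
        return jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return None  # tampered or expired -> reject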
This site has information about OAuth implementations in different languages, hopefully your favorite is listed.
Some light bedtime reading.
There is no obvious way to do it that I know of, but a lot of people seem to be looking at Amazon S3 as a model. If you put credentials in your client code, then anyone who gets the client code can see them.

I might suggest that you could write the server to pass a time-limited token back to the browser along with the client code. The client code would be required to pass it back to the server for access. This would prevent anyone from writing their own client code, as only client code sent by the server would work, though only for some period of time. The user might occasionally get timeouts, but that depends on how strict you want to make the token timeouts. Of course, even this kind of thing could be hacked by someone making a client request to get a copy of the token to use with their own client API, but at that point you should be proud that someone is trying so hard to use your API!

I have not tried to write such a thing, so I don't have any practical experience with the issue. I have wondered about it myself, but don't have enough experience with this architecture to see what, if anything, others have been doing. What do the AngularJS forums suggest?
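A bare-bones sketch of that time-limited-token idea (my own illustration; the signing secret and lifetime are arbitrary):

import hashlib, hmac, time

SECRET = b"server-side-secret"
TTL = 300  # token lifetime in seconds

def issue_token():
    # Embed this in the page served to the browser
    ts = str(int(time.time()))
    sig = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return f"{ts}.{sig}"

def token_ok(token):
    # The API only answers requests whose token verifies and hasn't expired
    try:
        ts, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SECRET, ts.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() - int(ts) < TTL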
Additional References: Best Practices for securing a REST API / web service
I believe the answer is "No".
You could do some security-by-obscurity type stuff. Your REST API could expose garbled data, and you could have some function "hidden" in your code to un-garble it. Obviously this isn't foolproof, but if you expose data on a public site it's out there regardless of server or client rendering.
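For what it's worth, the "garbling" could be as simple as a reversible encoding the client knows how to undo. A toy sketch (purely illustrative, and trivially reversed by anyone who reads the client code):

import base64

KEY = b"not-really-a-secret"

def garble(data: bytes) -> str:
    # XOR with a repeating key, then base64 so it survives JSON transport
    xored = bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))
    return base64.b64encode(xored).decode()

def ungarble(text: str) -> bytes:
    raw = base64.b64decode(text)
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(raw))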

Scrape website for basic data using Angular JS ( facebook like Link sharing module)

I am trying to build a Facebook-like "link sharing" module, i.e. when anyone writes a link while creating a new post, it automatically shows some basic data from the linked website, as Facebook does...
I tried simple scraping using $http.get and it works only if I install a CORS extension in Google Chrome, so the main issue with this approach is making it work without any plugin...
I also tried adding headers in the config file, but still no luck.
// In the module's config block ("app" is the AngularJS module):
app.config(['$httpProvider', function ($httpProvider) {
    // Reset default headers so the requests look like plain cross-site requests
    $httpProvider.defaults.headers.common = {};
    $httpProvider.defaults.headers.post = {};
    $httpProvider.defaults.headers.put = {};
    $httpProvider.defaults.headers.patch = {};
    $httpProvider.defaults.useXDomain = true;
    delete $httpProvider.defaults.headers.common['X-Requested-With'];
}]);
Please share the best approach to implement this feature, or any way I can solve the CORS issue.
Thanks
Zeshan
This is not possible. CORS exists for a reason: to STOP you from accessing HTTP resources from other domains without those other domains explicitly allowing you to.
Again: this is not possible due to security restrictions imposed by browsers.
The only way you can accomplish this, and the way Facebook does it, is to move those cross-domain requests to a server, where there are no cross-domain restrictions.
So $http.post('/some-script-on-my-server'), where that script does the actual HTTP request for the remote page, scrapes the necessary information and returns it to the browser.
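A minimal sketch of such a server-side scraper (my own example, assuming a Flask backend; the /scrape endpoint and the title-only extraction are just illustrative):

import re
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/scrape', methods=['POST'])
def scrape():
    # The browser posts a URL; the server fetches the page (no CORS restrictions here)
    url = request.json.get('url', '')
    page = requests.get(url, timeout=10)
    # Return a bit of scraped metadata, e.g. the page title
    match = re.search(r'<title[^>]*>(.*?)</title>', page.text, re.IGNORECASE | re.DOTALL)
    return jsonify({'title': match.group(1).strip() if match else None})

The AngularJS side then only needs $http.post('/scrape', { url: link }) and can render whatever comes back.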
There is a workaround for this if you want a browser-only JavaScript solution without configuring anything on the server (maybe useful in some particular situations) while "avoiding" CORS.
You could use YQL. You only have to play in their console a little with the URL you need to scrape, and then use the query URL they give you as the URL for your request.
For example (extracted from the YQL website):
select * from html where url='http://finance.yahoo.com/q?s=yhoo' and xpath='//div[@id="yfi_headlines"]/div[2]/ul/li/a'
This gets the headlines from Yahoo Finance, and you also get the query URL that you can use in your request:
https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D'http%3A%2F%2Ffinance.yahoo.com%2Fq%3Fs%3Dyhoo'%20and%20xpath%3D'%2F%2Fdiv%5B%40id%3D%22yfi_headlines%22%5D%2Fdiv%5B2%5D%2Ful%2Fli%2Fa'&format=json&diagnostics=true&callback=
They have other examples, and how to integrate it, in their documentation.
You don't need to configure anything on the server side, but of course every request has to go through Yahoo's servers, which isn't at all optimal, and performance is directly affected...
That said, in some particular situations (dev, tests, etc.) this can be useful, and it is always worth a try.

Using a subdomain to identify a client

I'm working on building a Silverlight application where we want a client to hit a URL like:
http://{client}.domain.com/
and log in, where the {client} part is their business name. So, for example, Google's would be:
http://google.domain.com/
What I was wondering is whether anyone has been able, in Silverlight, to use this subdomain model to make decisions on the call to the web server, so that you can switch to a specific database to run a query. Unfortunately, this is quite necessary for the project, as we are trying to make it easy for a client's employees to get their company-specific information from our software.
Wouldn't it work to put the service on a specific subdomain itself, such as wcf.example.com, and then set up a cross-domain policy file on the service to allow access to it?
As long as that works, you could just load the Silverlight app on the proper subdomain and then pass that subdomain to your service and let it do its thing.
Some examples of this below:
Silverlight Cross Domain Services
Silverlight Cross Domain Policy Helpers
On the server side you can check the HTTP/1.1 Host header to see how the user reached your server and do the necessary customization based on that.
I think you cannot do this with Silverlight alone; I know you cannot do it without problems in JavaScript, Ajax, etc. That is because a subdomain is, for security reasons, treated differently from a sub-page by browsers.
What about the following idea: add a rewrite rule to your web server software, so that if http://google.domain.com is called, the web server itself rewrites the URL to something like http://www.domain.com/google/ (or better: http://www.domain.com/customers/google/). Would that help?
Georgi:
That would help if it were static, but alas, it's all going to be dynamic. My hope was to have a single deployment of the application and to use the http://google.domain.com/ idea to switch to the correct database for the user. I recall doing this once when we built an ASP.NET website, using the domain context to figure out which skin to use, etc.
Ates: Can you explain more about what you're saying... it sounds like you're close to what I'm trying to come up with. Have you seen a tutorial for this?
The only other way I have come up with to make this work is to have a metabase, so that when a user logs in, it switches them to the appropriate database as required... I was also thinking that telling client X to hit
http://ClientX.domain.com/ would have been sweeter than telling them to hit http://www.domain.com/ and log in. Having them hit their own name, and seeing the site personalized for them right from the login screen, would have been much more appealing to the client base.
#Richard B: No, I can't think of any such tutorial that I've seen before. I'll try to be more verbose.
The server-side approach in more detail:
1. Point *.example.com to the same IP in your DNS settings.
2. Have the backend app that handles login check the Host HTTP header (e.g. the "HTTP_HOST" server variable on some platforms). It will contain the exact subdomain.example.com that the client used to reach your server. Extract the subdomain part and continue (see the sketch below)...
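Roughly, in code (a Python sketch purely for illustration; the original discussion is ASP.NET/WCF, and the subdomain-to-database mapping shown here is hypothetical):

# Map the subdomain from the Host header to a per-client database
CLIENT_DATABASES = {
    "google": "Server=db1;Database=google_db",    # hypothetical connection strings
    "clientx": "Server=db1;Database=clientx_db",
}

def connection_string_for(host_header):
    # host_header looks like "google.example.com" (possibly with a port)
    subdomain = host_header.split(":")[0].split(".")[0].lower()
    return CLIENT_DATABASES.get(subdomain)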
There can also be a client-side-only approach. I don't know much about Silverlight, but I'm assuming you can interface Silverlight with JavaScript. You could read document.location with JavaScript and pass it to your Silverlight applet, whereupon further data-fetching logic would rely on the subdomain passed in by JavaScript.
#Ates:
That is what we did when we wrote the ASP.NET system... we pointed a slew of *.example.com hosts at the web server and handled it using the HTTP headers. The hold-up comes with WCF pushing the info between the client and the server... it can only exist on one domain...
So, for example, when you have {client}.example.com and {sandbox}.example.com, the WCF service can't be registered to both. It also cannot be registered to just *.example.com or example.com, so that's where the catch-22 comes in. Everything else I have prior knowledge of how to handle.
I recall a method by which an application can "spoof" another domain name in certain instances. I take it that in this case I would need such a configuration? Much to research yet, I believe.
