I'm new to web development and was curious about something. When posting to an endpoint to then receive a value from a server-side function, is it problematic if multiple users are writing to the same endpoint? Can this corrupt the value returned?
For instance, I'm using Stripe in a project and you're supposed to post to an endpoint to generate a user-specific ephemeral key. There's a 1-2 second delay in the response at times, so would there be a problem if two users posted to the same endpoint within a few milliseconds?
Capable web server software is designed with concurrency in mind, meaning a server can handle multiple user requests at the same time.
If you're curious about the specific techniques of how this is done, or about web server architecture in general, this article is pretty interesting and offers some sample applications:
http://www.linuxjournal.com/content/three-ways-web-server-concurrency
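To make that concrete for the Stripe example: each incoming POST gets its own request and response objects, so two users hitting the endpoint within milliseconds of each other still receive their own, separate keys. Here is a minimal sketch in Node.js/Express (the route and the `createEphemeralKeyFor` helper are placeholders, not Stripe's actual API):

```js
const express = require('express');
const app = express();
app.use(express.json());

// Each POST runs this handler with its own req/res pair; nothing here is
// shared between two concurrent requests, so responses can't get mixed up.
app.post('/ephemeral_keys', async (req, res) => {
  const customerId = req.body.customer_id;             // specific to this request
  const key = await createEphemeralKeyFor(customerId); // placeholder for your Stripe call
  res.json(key);                                       // sent back on this request's own connection
});

app.listen(3000);
```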
I have a front-end angular app using firebase to store user data.
I currently do not have a backend set up, such as a node.js server.
I would like to use the Google Docs API to upload files from my app.
Since the Great Firewall of China blocks (or makes unstable) the use of Google services, is it possible to place those services on a backend server and still use them reliably?
Perhaps after they have uploaded the document to firebase, a backend script retrieves it, uploads it to google docs, and then removes the record from firebase? Just trying to see if Google or similar services are even feasible for this use case.
I suppose the crux of my question is whether or not the calling of the Google API would be taking place on the user's computer, in which case would it become unstable?
**Updates for clarity:**
I am deciding whether my firebase-backed app needs a more traditional backend, like a node server, to do things like upload images and documents, send mail via Mandrill, etc. It would be helpful to know whether, after putting in the time to create a server, some of the services I am after (i.e., the APIs) would be any more resilient to the GFW than they would be if they ran on the client side. So if anyone has had success with such a task, I would like to know.
**Technical update:**
So, for example, if I run the Google Maps API on the client side and the user is in China and not running a VPN, the API calls will either lag, time out, or (rarely) succeed in returning the scripts. If I was somehow able to process the map query "off-site", i.e. on the server, could I then return a static image of the map to a Chinese user without fail?
> If I was somehow able to process the map query "off-site", i.e. on the server, could I then return a static image of the map to a Chinese user without fail?
Yes, of course. What you are going to miss this way is all the front-end interactive functionality Google Maps offers. But if that's ok in your use case, sure.
I have never tried it with the GFW, but what I would do is this:
Google Maps <-> Your Reverse proxy <-> User
So, instead of the user visiting the real Google Maps site, it will be visiting your maps.mydomain.com site, which will sit in between, proxying everything.
Nginx is an excellent choice for a reverse proxy. If you need more control, there are good node.js reverse-proxying packages that you can use to rewrite the content extensively before serving it (perhaps to obfuscate it in case the GFW blacklists content based on pattern matching, or to change the script names/links, again to avoid pattern matching).
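As a rough illustration of that node option, here is a minimal sketch using the `http-proxy` package; the target and port are placeholders, and in practice you would add the content rewriting mentioned above:

```js
const http = require('http');
const httpProxy = require('http-proxy');

// maps.mydomain.com (this server) sits between the user and the upstream,
// forwarding each request and relaying the response back.
const proxy = httpProxy.createProxyServer({
  target: 'https://maps.googleapis.com', // placeholder upstream
  changeOrigin: true                     // rewrite the Host header to match the target
});

http.createServer((req, res) => {
  // Content/URL rewriting to dodge pattern matching would happen around here.
  proxy.web(req, res);
}).listen(8080);
```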
You are misunderstanding the Great Firewall of China. I consulted for a couple of Chinese companies after the dot-com crash, so I can say this from personal experience, not hearsay.
It is mostly high-end Cisco hardware behind gateways behind their government telecom infrastructure. Nowadays they knock off what hardware they can, every chance they can, and spend money on specialized hardware to monitor cell phone systems.
There was a brief mention of the street-level surveillance hardware on 20/20 before the crash if you are interested in looking it up.
Not to discourage you, but I say set up whatever open servers you want with whatever frontends or backends you want, but the reality is the traffic is not going to be there.
That is why they call it an oppressive regime: they do not get to decide for themselves, remember?
I have a small question with possibly a complex answer. I have tried to research around, but I think I may not know the keywords.
I want to build a web service that will send a JSON response, which would be used by another application. My goal is to have the App Engine server crawl a set of webpages and store the relevant values so that the second application (the client) does not need to query everything; it only goes to my server for the already condensed information.
I know, it's pretty common, but how can I defend against attackers who wish to exhaust my App Engine resources/quota?
I have been thinking of limiting the number of requests by IP (say, 200 requests per 5 minutes), but is that feasible? Or is there a better, more clever way of doing it?
First, you need to cache the JSON. Don't hit the datastore for every request. Use memcache or, depending on your requirements, cache the JSON in a static file in Cloud Storage. This alone is the best defense against DDoS, since every request then adds minimal overhead.
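Just to sketch the idea (this is a generic in-memory version, not App Engine's actual memcache API, and `buildJsonFromDatastore` is a placeholder for your own query code):

```js
// Tiny in-memory cache so repeated requests don't rebuild the JSON each time.
// On App Engine you would use the platform's memcache service instead;
// this only shows the pattern.
let cached = null;
let cachedAt = 0;
const TTL_MS = 5 * 60 * 1000; // refresh at most every 5 minutes

async function getCondensedJson() {
  if (cached && Date.now() - cachedAt < TTL_MS) {
    return cached;                            // cheap path: no datastore work
  }
  cached = await buildJsonFromDatastore();    // expensive path (placeholder helper)
  cachedAt = Date.now();
  return cached;
}
```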
Also, take a look at the DoS protection service offered by App Engine:
https://developers.google.com/appengine/docs/java/config/dos
You could require users to log in, then generate and send an auth key to the client app that must accompany any requests to the App Engine service.
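A hedged sketch of what that could look like server-side (Express-style middleware; the header name and the `isValidKey` lookup are placeholders for whatever you issue at login):

```js
const express = require('express');
const app = express();

// Reject any request that doesn't carry a valid key issued at login.
function requireAuthKey(req, res, next) {
  const key = req.get('X-Auth-Key');   // key handed to the client after login
  if (!key || !isValidKey(key)) {      // isValidKey: your own lookup/verification
    return res.status(401).json({ error: 'missing or invalid auth key' });
  }
  next();
}

app.get('/condensed', requireAuthKey, (req, res) => {
  res.json({ /* ...the condensed data... */ });
});

app.listen(8080);
```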
Similar questions have been asked before, but this one is a little different. I created a REST API to send an XML document with `POST`. I send data from my (Windows) application to the server, which includes: open time, operating system, version, etc.
I have one problem though. How can I make sure other people can't use the REST API? How do I know that the information sent to the server is from my application and not from someone who just knows the URL? How do analytics software companies solve this problem?
Thank you.
Update
I would like users to be able to use my application without having to log in. I am pretty sure that companies whose apps do not force you to log in are still able to see what you are doing.
Well, there are several ways to secure your service.
You can always set up authentication & authorization for the service; this way the service will be available only to registered/known users.
Here are a few links for more details:
Best Practices for securing a REST API / web service
http://www.stormpath.com/blog/secure-your-rest-api-right-way
Also, there are less sophisticated ways, such as setting firewall rules to allow connections only from certain places, but I don't think that is a recommended approach.
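On the original question of telling your application's requests apart from someone who merely knows the URL: one common approach is to have the application sign each request with a secret and verify the signature server-side. A minimal sketch (Node.js; the header name, route, and secret handling are illustrative, and keep in mind a secret shipped inside a desktop app can still be extracted, so this raises the bar rather than removing the risk):

```js
const express = require('express');
const crypto = require('crypto');
const app = express();

const SHARED_SECRET = process.env.API_SECRET; // same value baked into the app

// HMAC of the raw request body using the shared secret.
function sign(body) {
  return crypto.createHmac('sha256', SHARED_SECRET).update(body).digest('hex');
}

// The client sends its HMAC in an X-Signature header; the server recomputes and compares.
app.post('/telemetry', express.text({ type: '*/*' }), (req, res) => {
  const given = req.get('X-Signature') || '';
  const expected = sign(req.body);
  const ok = given.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(given), Buffer.from(expected));
  if (!ok) return res.sendStatus(403);
  // ...store the XML document...
  res.sendStatus(200);
});

app.listen(8080);
```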
I've created an AngularJS app which uses PHP for handling the database queries and enforcing an authentication scheme.
When the user logs in to the app, he does so in PHP, and PHP fetches the user data into a session. Then AngularJS issues an HTTP POST request to a PHP page to read the fetched data.
After that, whenever a user asks for data, Angular issues a POST to a PHP page.
I'm considering using a framework for doing the authentication and the database queries in a better way. My security knowledge is primitive and I fear that I have mistakes in my code.
After doing some research I found Laravel, which seems straightforward and easy.
Now my questions are:
Can a php framework such as laravel do these things for me?
Is there something else I could use to have people authenticate and making sure that they are doing the CRUD operations they are authorized to do?
What are the keywords I should be searching for: is it routing, is it PHP RESTful? I'm asking in order to do further research on the matter.
Is there any other way in which a SPA could work with CRUD operations and Authenticating in a "safe" manner using php?
I know that the above questions are not programming questions per se, but I don't know where to ask, because I feel I cannot communicate what I want to learn about (that's why the keywords question above).
Thank you
There are basically two kinds of relevant "routing", both based on URLs: client side and server side. AngularJS has the $routeProvider, which you can configure so that when the location changes (handled by $location) the client-side template and controller being used also change. On the server side you may have redirects or "routes" that map a URL to a particular PHP file (or Java method), where at the destination it parses the incoming URL to get extra information/parameters.
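For the client side, a minimal $routeProvider configuration looks roughly like this (template and controller names are placeholders):

```js
// AngularJS client-side routing: when the URL fragment changes, ngRoute swaps
// in a different template and controller without a full page reload.
angular.module('app', ['ngRoute'])
  .config(function ($routeProvider) {
    $routeProvider
      .when('/users/:userId', {          // :userId is exposed via $routeParams
        templateUrl: 'partials/user.html',
        controller: 'UserCtrl'
      })
      .otherwise({ redirectTo: '/' });
  });
```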
I know nothing about laravel, but googling laravel and authentication came back with this which looks promising:
http://bundles.laravel.com/category/authentication
I also know things like Zend framework provide many similar options for plugging in some authentication code.
Ultimately, if you're writing the CRUD operations, something in your code is going to have to deal with the role-based execution of code or access to data.
RESTful is its own thing. At a very basic level, a RESTful interface gives HTTP "verbs" like PUT, POST, DELETE, and GET (part of the request headers, which is just data that comes before any body data in the request) special meanings such as "update an entry". It's mostly orthogonal to the issue of authentication, though if you do true REST I'm not sure using the session for maintaining authentication would be allowed, since it's not completely stateless in that case (anyhow, that's just an academic argument). The point is you can use the other ideas of REST, or use some implementation that is "RESTful" (it can be written in any language), or choose not to; either way you still have the issue of controlling the resources (functions/methods/data) you want to control, and that issue is not the same as choosing RESTful or not RESTful. If you wanted to keep true to REST for reasons of scalability across a cluster of servers, you could follow the guidance here: How do I authenticate user in REST web service? Also worth noting, the $resource service in AngularJS provides an abstraction above $http specifically for handling RESTful services.
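To show what that $resource abstraction looks like in practice (the URL and field names below are placeholders for your own PHP endpoints):

```js
// AngularJS $resource: a thin RESTful abstraction over $http.
angular.module('app', ['ngResource'])
  .factory('Item', function ($resource) {
    return $resource('/api/items/:id', { id: '@id' });
  });

// Usage inside a controller/service:
//   Item.query()             -> GET    /api/items      (list)
//   Item.get({ id: 5 })      -> GET    /api/items/5
//   new Item({...}).$save()  -> POST   /api/items
//   Item.delete({ id: 5 })   -> DELETE /api/items/5
```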
IMHO you should be searching for two things:
1. php security/authentication
2. php hacking/hacks/vulnerabilities
You can simply write your own authentication mechanism, using a session to keep track of the signed-in user: http://php.net/manual/en/features.sessions.php. There is no difference between a SPA and a traditional web app as far as the server is concerned; these are simply differences in the client-side code.
Any security you intend on putting in place is really only as good as your understanding of that security. I wouldn't trust someone else's plugin from the internet to handle authentication for me unless time was an extremely critical factor and security not so much. One thing that you hadn't mentioned, but that I think is worth looking into and necessary for any of this to really be secure, is SSL. If you don't have your data encrypted there is always a possibility of a man-in-the-middle attack (someone getting the plaintext username and password as they're submitted to the server) or session hijacking (someone getting the session ID of an active session and then using it to act as the original user). Basically, I would suggest you keep doing research on best practices and personally look over any code you plan to use, to be sure you understand how it works and what kind of security it provides you with.
I also wanted to mention, though it's a bit off topic language-wise, that Java Spring has some really nice stuff for dealing with authentication and handling access to services and data. If security is a major concern I would strongly consider running a Java server (not to say Java has never had its issues or that it's automatically more secure, but there's a lot of production code that has withstood the test of time). There's the free Tomcat server, or IBM WebSphere if you need to massively distribute an application. If interested, search for Java, Spring, Hibernate (ORM), MyBatis, Data Access Objects. Those are all the parts (some optional) I can think of that you would need to put together a service layer in Java. There's a good intro in the video on the left of this page:
http://static.springsource.org/spring-security/site/index.html
Also SSL isn't a silver bullet, but every layer of security helps.
Kevin Mitnick said in one of his books that lots of places have "hard-shell candy security" (paraphrasing), where breaking the outer layer means you get to all the mushy goodness inside. Any direct answer here will, I would wager, result in this type of security.
Depending on the scope of the project it might be necessary to have security professionals do penetration testing on the system to determine if there are vulnerabilities so they can be plugged.
I'm currently in the process of pulling in data from other networks by using Gigya to allow users to log in to my site, and then posting the data with PHP to my database.
I don't know if this is the best option available, as their documentation isn't precise about installing it to post the data, etc.; they put everything in subsections on how to do individual things.
I'm curious if there is a custom tutorial on using a different service, or on making it myself. I've read the APIs and developer docs of some of the sites, and Facebook apparently uses JSON, which I'm not familiar with.
You have two elements in your question.
First, authentication. There are several services offering multiple-network authentication, but using several networks for a single user is not as common: you will most likely have to do it yourself. To handle multiple identities in parallel, your server will have to store them and manage the session on its own. Gigya is one authentication solution; there are also two other good ones:
http://www.janrain.com/products/engage/social-login
www.clickpass.com/docs (still under development)
Then, using the APIs. To do that, you will have to decide what you want to do and then call the APIs yourself, using the JavaScript SDKs or server-side ones. Note that the authentication step will need to provide you with OAuth keys (OAuth being the most common authentication method) to post messages or fetch data. More here:
developers.facebook.com/docs/api
developer.twitter.com/doc
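As a rough illustration of the server-side option: once the login flow has handed you an OAuth access token you can call the network's API directly. For example, a minimal Node.js sketch against the Facebook Graph API (the endpoint and token handling are illustrative and vary by provider):

```js
const https = require('https');

// Fetch the logged-in user's profile with a previously obtained access token.
function fetchFacebookProfile(accessToken, cb) {
  https.get(
    'https://graph.facebook.com/me?access_token=' + encodeURIComponent(accessToken),
    (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => cb(null, JSON.parse(body))); // profile returned as JSON
    }
  ).on('error', cb);
}
```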
One thing worth noting about Gigya. They have a function called "showAddConnectionsUI" which basically lets users establish simultaneous connections with multiple social networks. For example, once a user authenticates to your site with Facebook, they can also connect with Twitter and Google if you want to allow this. The nice thing is that Gigya manages these identities for you, so you technically don't have to implement anything on your side... just call their getUserInfo function and they'll return a collection of identities.
Not sure if that helps... we use this functionality on our site and it works well. Here's the link to showAddConnectionsUI:
http://wiki.gigya.com/030_API_reference/010_Client_API/020_Methods/socialize.showAddConnectionsUI
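For reference, the two client-side calls mentioned above look roughly like this (Gigya's socialize API; exact parameters and response fields may vary by SDK version, so treat this as a sketch):

```js
// Fetch the user's profile and the collection of connected identities.
gigya.socialize.getUserInfo({
  callback: function (response) {
    if (response.errorCode === 0) {
      console.log(response); // includes the identities the user has connected
    }
  }
});

// Let the logged-in user attach additional networks (Twitter, Google, ...).
gigya.socialize.showAddConnectionsUI({});
```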