SQL injection via the payload of a message is obviously a very common attack, so it is always key to defend against it (I already have in my code). However, I'm wondering whether SQL injection via the URL is also possible as a practical attack method.
I'll give an example URL to detail my question better. If I have a URL such as this with the SQL statement to be used for Injection included inside a parameter as its value (please note that the 'SELECT' could be any SQL query):
https://testurl.com:1234/webservicename?parameter=SELECT
I would like to know whether this is a valid technique and whether it would actually work for attackers trying to inject into the back-end of the web service, and if so, what is the best way to defend against it?
It's the same as for POST data. If it's in PHP, you get the value in $_GET['parameter'], and you can sanitize it before building your SQL query. An attacker will usually also include quotes to terminate the string value and write their own query. The GET method is no safer than POST: never trust data coming from the client side. It all depends on how well you secure that data. If it's properly secured, an attacker can't break anything (by this route); if it's poorly secured or not secured at all, an attacker can inject their own queries.
I'm writing some code using angularjs, node.js, and mongodb which allows users to add comments which are stored in mongodb by a server running on node.js.
My strategy is to replace < and > with &lt; and &gt;. Where should I do this? If I do it in the client, someone could bypass it by posting to my server with a tool like Postman, although Google's CAPTCHA may help at least a little there.
If I do the escaping at the server, is it too late? I would like to intercept it in the server route and do the manipulation before it is stored in mongo.
Or, in the context of just storing comments, do I even need to do something like this at all?
You do not need to do the escaping yourself. MongoDB handles the special characters just fine, and since it does not use SQL, there is no risk of SQL injection from stored text. You can store exactly what the user types in the comment directly in the DB.
However, you should consider validating or escaping that text, since someone could try to inject JavaScript code into a comment, which would lead to an XSS attack.
As for what you do with the data from the client side, you should, if possible, always process that data in the backend (server-side), because once your app or site is publicly available, it is exposed to hacking tools.
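The store-verbatim, escape-on-output approach can be sketched like this (the helper name is ours; any maintained escaping library does the same job):

```javascript
// Escape the five HTML-significant characters so a stored comment can be
// embedded in a page without its markup executing. '&' must go first.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// The comment is saved in MongoDB exactly as typed...
const storedComment = '<script>alert("xss")</script>';
// ...and escaped only at render time, on the server.
const renderedComment = escapeHtml(storedComment);
```

Escaping at output time rather than input time means the database keeps the user's original text, and every rendering context can apply the escaping appropriate to it.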
Here's the setup.
I'm sending a POST using Ajax, passing the parameters for procedures to a Node.js server; in this particular case, username and code.
This POST calls a request.get to a web service that executes a procedure that uses those two parameters.
For example
app.post('url/:username/:code', function(req, res, next) {
    var procedure = 'EXECUTE procedureName' + req.params.code;
    request.get('myWslink/myService.asmx/service?callback=&userName=' +
        req.params.username + '&procedureName=' + procedure, function() {});
});
The front-end user cannot see my web service URL, my request.get URL, or my procedure name, but he can still see the parameters being sent (username, code), and he can change these parameters, enabling him to execute a procedure he's not supposed to execute.
He can also call a POST request a bunch of times and fill up the database with a bunch of junk if it's an insert procedure.
What would be the best way to protect myself against these exploits?
A few suggestions here:
Don't do meta-programming to this extent. Make separate routes on your application for each procedure, and then inject those 'codes' yourself. This will allow you to do things like validate the user's input to ensure it isn't garbage data being passed in, as well as rate-limit specific routes to ensure the DB isn't filled with garbage.
You could also create a whitelist array of allowed 'codes' and make sure that whitelist.indexOf(code) != -1, but this wouldn't let you do per-route input validation.
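A minimal sketch of the whitelist approach (note the membership test is Array.prototype.indexOf; the codes and helper name are illustrative):

```javascript
// Only codes in this array may be turned into procedure names;
// anything else is rejected before any SQL is built.
const allowedCodes = ['GetUser', 'GetOrders'];

function buildProcedure(code) {
  if (allowedCodes.indexOf(code) === -1) {
    return null; // unknown code: refuse to build a procedure name
  }
  return 'EXECUTE procedureName' + code;
}
```

The route handler can then bail out with a 400 whenever buildProcedure returns null, instead of forwarding attacker-chosen codes to the web service.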
Even if you manually included the procedure, there's still a problem. In your existing code, the call out to the external service places the 'req.params.username' parameter before procedureName. For most HTTP parsing frameworks, parameters are first come, first serve. One of the first attacks I would try after seeing this code would be to inject
'&procedureName=something_i_shouldnt_be_able_to_call'
into my username. This would cause the procedureName attribute you are including to be ignored, while the one I submitted would be used instead. You can prevent this by either placing the user-input based params last and URI-encoding the user input prior to string interpolation, or by including your querystring as an object named 'qs' passed into the options argument to request.
Whether or not this creates a SQL injection vulnerability is entirely dependent on how the web service parses the arguments and executes the procedure. The optimal case would be that the service URI decodes each parameter, and then passes those in as arguments to either a PDO or a prepared statement. My guess is that it's using PDO, given the way it's being called.
So what I would suggest here ultimately is to URI encode each of your user-input supplied parameters, and use the qs object passed into request options as mentioned above, rather than just interpolating strings. Once you've done that, you can take any or all of these steps to help validate:
Attempt to do things like inject single quotes into your user input manually.
Run a tool like sqlmap on that particular route to test for SQL injection. This will give you a fairly robust testing without requiring in-depth knowledge of SQL injection techniques.
Schedule an application security assessment with an experienced node.js security professional. (I'm available at liftsecurity.io.)
To reiterate - don't trust users to give you the procedure code: make separate routes and insert that data yourself, URI-encode all user input before further processing, and use the request options object like {qs: {name: value}} instead of string interpolation.
With those protections, you'll likely be just fine, as it seems to be using stored procedures here. Unless you can find confirmation of that in the web service's documentation, however, the only way to be sure of that is through one of the methods I suggested above.
Hope this helps!
To prevent SQL injection, you can escape the input data in your web service.
To avoid multiple fake entries in the database, you can add a unique token to each POST request and verify that token in your web service: if the token is legitimate, allow the insertion; if not, reject it.
Since you are using a web service, as far as I can tell you will have to keep those tokens in a database for verification.
I am designing .Net WebApi service and an AngularJS client. One particular feature of the client is a fairly complex search engine for a particular type of resource. The search query is ideally represented in an object graph. I'm wrestling with the fact that I am semantically supposed to be sending this request to the service as a GET request with the search query encoded into the url. The problem is that it is way too much data for a query string, and everything I'm reading has firmly led me to believe I should not use the message body in a GET request in a situation like this.
I have seen a solution suggested a couple of times which seems a bit clumsy but at least semantically correct:
Create an api in the service for POSTing search query resources.
Create an api in the service for GETing search query results.
If I do implement this api in the service, there still is no easy way to bookmark or link to the search results in the client (because if the query was reasonably representable in a url, this whole question wouldn't need to be asked).
Are there any better solutions?
The solution you've found is actually a nice and simple way of handling this sort of thing:
you create your query and POST it to the server;
a query resource is created for it and stored on the server;
you return a URL for that created query (you don't execute it yet). I think this is best done with a POST/Redirect/GET so the URL can be bookmarked;
client does a GET to the URL and at this point the query is executed and results are returned;
every time you GET that URL, the same query is executed.
Now, as a refinement, you could allow the user to give the query a friendly name when she creates it (e.g. queryForXYZBlaWhatever) and return it as part of the URL: http://server/api/query/queryForXYZBlaWhatever. This can be bookmarked, shared, emailed, whatever, and it will always point to the same query.
I'm transitioning towards more responsive front-end web apps and I have a question about model validation. Here's the set-up: the server has a standard REST API for inserting, updating, retrieving, etc. This could be written in Node or Java Spring, it doesn't matter. The front-end is written with something like Angular (or similar).
What I need is to figure out where to put the validation code. Here's the requirements:
All validation code should be written in one place only, not both client and server. This implies that it should reside on the server, inside the REST API, when persisting.
The front-end should be capable of understanding validation errors from the server and associating them to the particular field that caused the error. So if the field "username" is mandatory, the client can place an error next to that field saying "Username is mandatory".
It should be possible to validate correct variable types. So if we were expecting a number or a date and got a string instead, the error would be something like "'Yo' is not a correct date."
The error messages should be localized to the user's language.
Can anyone help me out? I need something simple and robust.
Thanks
When input validation fails, you can return a response in an appropriate format (I'm guessing you use JSON) containing the error messages, along with a proper HTTP error code.
I'm currently working on a project with a Symfony backend, using FOSRestBundle to provide a proper REST API. Using Symfony's form component, whenever there's a problem with the input, a well-structured JSON response is generated with error messages mapped to the fields (or to the top level if, for example, there's unexpected input).
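A framework-agnostic sketch of such a field-mapped error payload (the shape and messages are ours, not FOSRestBundle's exact output; for the localization requirement, the message strings would be looked up by the user's locale instead of hard-coded):

```javascript
// Server-side validation producing errors keyed by field name, so the
// Angular client can place each message next to the field that caused it.
function validateUser(input) {
  const errors = {};
  if (!input.username) {
    errors.username = 'Username is mandatory';
  }
  if (input.birthdate && isNaN(Date.parse(input.birthdate))) {
    errors.birthdate = "'" + input.birthdate + "' is not a correct date.";
  }
  return Object.keys(errors).length
    ? { status: 422, body: { errors } } // 422 Unprocessable Entity
    : { status: 200, body: { ok: true } };
}
```

The client only needs one generic handler: on a 422, walk `body.errors` and attach each message to the matching form field.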
After much research I found a solution using the Meteor.js platform. Since it's a pure javascript solution running on both the server and the client, you can define scripts once and have them run on both the client and the server.
From the official Meteor documentation:
Files outside the client, server and tests subdirectories are loaded on both the client and the server! That's the place for model definitions and other functions.
Wow. Defining models and validation scripts only once is pretty darn cool if you ask me. Also, there's no need to map between JSON and whatever server-side technology. Plus, no ORM mapping to get it in the DB. Nice!
Again, from the docs:
In Meteor, the client and server share the same database API. The same exact application code — like validators and computed properties — can often run in both places. But while code running on the server has direct access to the database, code running on the client does not. This distinction is the basis for Meteor's data security model.
Sounds good to me. Here's the last little gem:
Input validation: Meteor allows your methods and publish functions to take arguments of any JSON type. (In fact, Meteor's wire protocol supports EJSON, an extension of JSON which also supports other common types like dates and binary buffers.) JavaScript's dynamic typing means you don't need to declare precise types of every variable in your app, but it's usually helpful to ensure that the arguments that clients are passing to your methods and publish functions are of the type that you expect.
Anyway, it sounds like I've found a solution to the problem. If anyone else knows of a way to define validation once and have it run on both client and server, please post an answer below, I'd love to hear it.
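Outside Meteor, the same "define once, run everywhere" idea works with any plain JavaScript module: a validator with no client- or server-specific dependencies can be loaded in both places (the rules and function name here are illustrative):

```javascript
// A pure validation function: no DOM, no database, no framework,
// so the exact same file can be required on the server and in the browser.
function validateComment(comment) {
  if (typeof comment !== 'string') return 'Comment must be a string';
  if (comment.trim().length === 0) return 'Comment may not be empty';
  if (comment.length > 500) return 'Comment is too long';
  return null; // valid
}
```

The client calls it before submitting for instant feedback; the server calls it again before persisting, so the single definition remains the authoritative gatekeeper.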
Thanks all.
To be strict, your last gatekeeper of validation for any CRUD operation is of course the server side. I'm not sure why you feel you should handle your validation on one end only (either server or client), but doing it on both sides is usually better for both user experience and performance.
Say your username field is a mandatory field. This can easily be handled on the front-end, before the user clicks submit, instead of sending the form to the server only to have the error come back. You can save that round trip with a one-liner on the front-end.
Of course, one may argue that a bad guy can manipulate the data on the client side and thus bypass the front-end validation. That goes back to my first point: your final gatekeeper in validation should be on the server side. Data integrity is still the server's job. Make sure whatever goes into your database is clean and valid.
To answer your question (biased opinion though): AngularJS is still a pretty awesome framework for front-end validation, and it also provides a good way to handle server-side errors.
I have the following tiny dilemma: I have a Backbone app which is almost entirely route-based, i.e. if I go to nameoftheapp/photos/1/edit I should reach the edit page for a given photo. The problem is, since my view logic happens almost 100% on the client side (I use a thin service-based server for storage and validation), how do I avoid an unauthorized user reaching that page? Of course, I can make the router check whether the user is authorized, but this already leads to duplicated validation effort. And of course I cannot leave the server side without validation, because then the API would be exposed to any sort of access.
I don't see any other way for now. Unless someone comes up with a clever idea, I guess I will have to duplicate validation both client and server-side.
The fundamental rule should be "never trust the client". Never deliver to the client what they're not allowed to have.
So, if the user goes to nameoftheapp/photos/1/edit, presumably you try to fetch the image from the server.
The server should respond with an HTTP 401 (Unauthorized).
Your view should have an error handler for this and inform the user they're not authorized for that - in whatever way you're interested in - an error message on the edit view, or a "history.back()" to return to the previous "page".
So, you don't really have to duplicate the validation logic - you simply need your views to be able to respond meaningfully to the validation responses from the server.
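A sketch of such a handler (the names and messages are ours; in Backbone this logic would live in the error callback passed to a model's fetch()):

```javascript
// Map the server's verdict to a client-side reaction, instead of
// re-implementing the authorization check in the router.
function onFetchError(response) {
  if (response.status === 401) {
    return {
      action: 'showError',
      message: 'You are not authorized to edit this photo.',
    };
  }
  return { action: 'showError', message: 'Something went wrong.' };
}
```

The client stays a dumb consumer of the server's decision: it never knows, or needs to know, the authorization rules themselves.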
You might say, "That isn't efficient - you end up making more API calls", but those unauthorized calls are not going to be a normal occurrence of a user using the app in any regular fashion; they're going to be the result of probing, and I can find out all the API calls anyway by watching the network tab and hitting the API directly using whatever tools I want. So, there really will be no more API traffic than if you DID have validation in the client.
I encountered the same issue a while ago, and it seems the best practice is to rely on server-side validation. My suggestion: use a templating engine like Underscore (already a dependency of Backbone) to design your templates. For routes that only authenticated users, or users with the right permissions, can access, ask the server for the missing data (usually small pieces of JSON) based on a CSRF token, a session_id, or both (or any other server-side validation method you choose), and render the template; otherwise render a predefined error with the same template. The logic is simple enough.