I am trying to implement a REST protocol and, while debugging, have realized that my web server is disallowing the PUT request.
I have tested and confirmed this further by running:
curl -X PUT http://www.mywebserver.com/testpage
which, for my web server, gives back a 403 Forbidden error.
The same happens for DELETE, whereas POST and GET work fine.
I am wondering whether this is a common issue for those who use REST, and what the workaround might be.
Could I make a simple change to an .htaccess file? Or do I need to modify the protocol to set a hidden variable "_method" in the POST query string?
Web servers are often configured to block anything except GET and POST, since 99% of the time those are all that are needed, and there have been problems in the past with applications assuming every request was one of those two.
You don't say which server it is, but, for example, you can tell Apache which methods to allow with a <Limit> directive, e.g.:
<Limit POST PUT DELETE>
Require valid-user
</Limit>
It sounds like maybe some helpful sysadmin has used this to block non-GET/POST requests.
You could try a .htaccess file with:
<Limit GET POST PUT DELETE>
Allow from all
</Limit>
(I'm not an expert at Apache, so this may not be exactly correct.)
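If you cannot get the server configuration changed, the "_method" idea from the question is the usual fallback: tunnel PUT/DELETE through POST and let the backend translate it. A rough sketch in JavaScript, assuming your server-side framework is set up to honour a _method field (many, such as Rails and Laravel, support something like this); the URL is just the example from the question:

// Sketch only: tunnel a PUT through POST via a "_method" field.
// Assumes the backend framework translates _method; adjust to your stack.
fetch('http://www.mywebserver.com/testpage', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: '_method=PUT&title=hello'
});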
In the official trigger.io docs there seems to be no provision for custom HTTP headers when it comes to the forge.file module. I need this so I can download files behind an HTTP authentication scheme. This seems like an easy thing to add, if support is not already there.
Any workarounds? Any chance of a quick fix in the next update? I know I could use forge.request instead, but I'd like to keep a local copy (saveURL).
Thanks
Unfortunately the file module just uses simple "download URL" methods rather than a full HTTP request library, which makes it a fairly big task to add support for custom headers.
I've added a task to our backlog for this, but I don't have a timeframe for it being added.
Currently on iOS you can do basic auth by using URLs of the form http://user:password@url.com, in case that helps.
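For example, with the credentials embedded in the URL it would look something like this (a sketch only; I'm assuming saveURL takes the URL plus success and error callbacks, and the host and path are placeholders):

// Sketch: basic auth via the URL itself (iOS only, per the note above)
forge.file.saveURL("http://user:password@example.com/protected/report.pdf",
  function (file) {
    // success: "file" describes the saved local copy
  },
  function (err) {
    // error: authentication or network failure
  }
);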
Maybe to avoid this you can configure your server differently, or put a proxy server in front that allows you to pass authentication details as GET parameters?
I am getting the following error in the browser when I open the secure (HTTPS) Site URL for my sandbox org:
You attempted to reach **.cs9.force.com, but instead you actually reached a server identifying itself as .cs9.force.com. This may be caused by a misconfiguration on the server or by something more serious
The problem arises because I am using Sites to expose a web service (WS), and the HTTPS warning causes an error on the client side while interacting with the WS.
How can I configure my org to resolve the HTTPS warning message?
Because you are using Sites, you are able to use HTTP. There is no way to change the HTTPS Site settings, and there is no way to upload your own SSL certificate.
Don't know if your problem is resolved by now or not, but you need to configure your WS to accept wildcard SSL certificates. I am currently facing this issue, and the following link may be helpful to you.
I can't use this solution because of a shared WebLogic server across many applications. If you have found any solution to the above, can you please comment here?
There are two application servers and a switch. When I access the application using an application server IP, it works fine. However, if I use the switch IP in my URL, a Bad Request error comes up, only in Firefox and Chrome and only for a few links.
Here is a detailed explanation and solution for this problem from IBM.
Problem (Abstract)
Request to HTTP Server fails with Response code 400.
Symptom
The response shown in the browser could look like this:
Bad Request
Your browser sent a request that this server could not understand.
Size of a request header field exceeds server limit.
HTTP Server Error.log shows the following message:
"request failed: error reading the headers"
Cause
This is normally caused by having a very large cookie, so that a request header field exceeds the limit set for the web server.
Diagnosing the problem
To assist with diagnosis of the problem, you can add the following to the LogFormat directive in httpd.conf:
error-note: %{error-notes}n
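For example, a combined-style LogFormat with the error-notes token appended could look roughly like this (the rest of the format string is the stock Apache combined format, and the nickname is arbitrary):

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" error-note: %{error-notes}n" combined_with_errors
CustomLog logs/access_log combined_with_errors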
Resolving the problem
For server side:
Increase the value of the LimitRequestFieldSize directive in httpd.conf:
LimitRequestFieldSize 12288 (or 16384)
For how to set LimitRequestFieldSize, see "Increase the value of LimitRequestFieldSize in Apache".
For client side:
Clearing the cache of your web browser should be fine.
If you use the Apache httpd web server in a version above 2.2.15-60, it could also be caused by an underscore (_) in the hostname.
https://ma.ttias.be/apache-httpd-2-2-15-60-underscores-hostnames-now-blocked/
I just deleted my stored cookies, site data, and cache from my browser...
It worked. I'm using Firefox...
Make sure you URL-encode all of the query params in your URL.
In my case there was a space (' ') in my URL; I was making an API call using curl, and my API server was giving this error.
This means the following URL
http://somedomain.com?key=some value with space
should be
http://somedomain.com/?key=some%20value%20with%20space
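If you are calling the API with curl anyway, you can also let curl do the encoding; with -G the --data-urlencode values are appended to the URL as query-string parameters instead of being sent as a POST body:

curl -G 'http://somedomain.com/' --data-urlencode 'key=some value with space'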
This is caused by too many cookies!
To solve it in Chrome: open the developer tools with Ctrl + Shift + I.
At the top you will see Console, Network, and a little button that looks like arrows (>>); click on that and choose Application.
On the left, under Storage, find Cookies.
There will be a little down arrow indicating a dropdown; click on it.
Now you will see the website, something like www.investing.com.
Right-click it and select Clear.
Reload.
It works!
Alternatively, clear cookies and cache in the traditional way, and that will work too.
In my case it was a cookie-related issue: I had many cookies with extremely big values, and that was causing the problem.
You can replicate this issue here on stackoverflow.com; just open the console and type this:
[ ...Array(5) ].forEach((i, idx) => {
  document.cookie = `stackoverflow_cookie${idx}=${'a'.repeat(4000)}`;
});
What does that do?
It creates 5 cookies, each with a value about 4000 characters long; then reload the page and you will see the same issue.
I tried it on google.com and you get the error there too, but they automatically clear the cookies for you, which is a nice fallback to start fresh.
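To undo the experiment, you can expire the same test cookies again (a minimal sketch; it relies on the path and domain defaults matching how the cookies were set above):

// Expire the five test cookies created above
[ ...Array(5) ].forEach((i, idx) => {
  document.cookie = `stackoverflow_cookie${idx}=; expires=Thu, 01 Jan 1970 00:00:00 GMT`;
});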
I was testing my application with special characters and was observing the same error. After some research, it turned out the % symbol was the cause. I had to modify it to the encoded representation %25. It's all fine now, thanks to the post below:
https://superuser.com/questions/759959/why-does-the-percent-sign-in-a-url-cause-an-http-400-bad-request-error
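In JavaScript, for example, encodeURIComponent takes care of this (the sample string is just for illustration):

encodeURIComponent('50% off')   // "50%25%20off"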
I'm a bit late to the party, but I bumped into this issue whilst working with the openidc auth module.
I ended up noticing that cookies were not being cleared properly, and I had at least 10 mod_auth_openidc_state_... cookies, all of which would be sent by my browser whenever I made a request.
If this sounds familiar to you, double check your cookies!
In my case it was a request header:
Content-Type followed by two spaces
or
Content-Type followed by a tab
i.e. with two spaces or a tab after the header name.
When I removed the extra whitespace, it worked.
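In other words, the difference was something like this (illustrative only; strict servers reject whitespace between the header name and the colon with a 400):

Content-Type : application/json     <- stray space before the colon, rejected
Content-Type: application/json      <- correct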
On my Magento 2 website, exactly the same error shows up when clicking a product.
My solution was to edit the value of Search Engine Optimization - URL Key for that product:
make sure that there are only letters, numbers, and - in the URL Key,
such as 100-washed-cotton-duvet-cover-set,
deleting all other special characters, such as %.
I got "Bad Request: Your browser sent a request that this server could not understand"
when I tried to download a file to the target machine using curl.
I solved it by using scp instead to copy the file from the source machine to the
target machine.
If you are getting this error on a WordPress website, check the solutions below:
Corrupted browser cache and cookies: delete your cookies and clear your cache.
Restart your server.
For a GET request, make sure the parameters you pass are URL-encoded.
If you are using PHP, you can use the urlencode() function.
If you have this same problem and none of the other solutions worked, check the URL again.
In my case it was a space at the end: when the URL was added to the cron job, someone also copied a blank space by accident.
Check that your data types are correct.
For example, if you send a file, you need to make sure you send the full file object.
I'm working on building a Silverlight application where we want to be able to have a client hit a URL like:
http://{client}.domain.com/
and log in, where the {client} part is their business name. So, for example, Google's would be:
http://google.domain.com/
What I was wondering is whether anyone has been able, in Silverlight, to use this subdomain model to make decisions on the call to the web server, so that you can switch to a specific database to run a query. Unfortunately, it's something that is quite necessary for the project, as we are trying to make it easy for a client's employees to get their company-specific information from our software.
Wouldn't it work to put the service on a specific subdomain itself, such as wcf.example.com, and then set up a cross-domain policy file on the service to allow access to it?
As long as that works, you could just load the Silverlight app on the proper subdomain and then pass that subdomain to your service and let it do its thing (a rough example of the policy file is sketched after the links below).
Some examples of this below:
Silverlight Cross Domain Services
Silverlight Cross Domain Policy Helpers
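For reference, the policy file (clientaccesspolicy.xml at the root of the service's domain) looks roughly like this; the wildcard below is only for illustration, and you would normally restrict it to your own *.domain.com hosts:

<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Illustrative only: tighten this to the subdomains you actually serve -->
      <allow-from http-request-headers="SOAPAction">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>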
On the server side you can check the HTTP/1.1 Host header to see how the user reached your server and do the necessary customization based on that.
I think you cannot do this with Silverlight alone; I know you cannot do it without problems with JavaScript, Ajax, etc. That is because a subdomain is, for security reasons, treated differently from a sub-page by browsers.
What about the following idea: insert a rewrite rule into your web server software, so that if http://google.domain.com is called, the web server itself rewrites the URL to something like http://www.domain.com/google/ (or better: http://www.domain.com/customers/google/). Would that help?
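If the web server happens to be Apache, the rewrite could look roughly like this (a sketch only; domain.com stands in for the real domain, and the www host is excluded so it isn't treated as a customer):

# Sketch: map {client}.domain.com/... to /customers/{client}/... internally
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteCond %{HTTP_HOST} ^([^.]+)\.domain\.com$ [NC]
RewriteRule ^/?(.*)$ /customers/%1/$1 [PT,L]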
Georgi:
That would help if it were static, but alas, it's all going to be dynamic. My hope was to have a single deployment of the application and to use the http://google.domain.com/ idea to switch to the correct database for the user. I recall doing this once when we built an ASP.NET website, using the domain context to figure out which skin to use, etc.
Ates: Can you explain more about what you are saying? It sounds like you are close to what I am trying to come up with. Have you seen a tutorial for this?
The only other way I have come up with to make this work is to have a metabase so that when the user logs in, it switches them to the appropriate database as required... I was also just thinking that telling Client X to hit
http://ClientX.domain.com/ would have been sweeter than saying to hit http://www.domain.com/ and log in. It seemed that having them hit their own name, and showing the site personalized for them right from the login screen, would have been much more appealing to the client base.
#Richard B: No, I can't think of any such tutorial that I've seen before. I'll try to be more verbose.
The server-side approach in more detail:
Direct *.example.com to the same IP in your DNS settings.
The backend app that handles login checks the Host HTTP header (e.g. the "HTTP_HOST" server variable in some platforms). That would contain the exact subdomain.example.com that the client used for reaching your server. Extract the subdomain part and continue...
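A tiny sketch of that idea, written in JavaScript/Node purely for illustration (the backend in this thread is ASP.NET, where you would read the Host request header instead); the database names are made up:

// Illustrative only: pick a tenant database from the Host header
const http = require('http');

// Hypothetical mapping from subdomain to database
const databases = { google: 'google_db', clientx: 'clientx_db' };

http.createServer((req, res) => {
  const host = (req.headers.host || '').split(':')[0]; // e.g. "google.example.com"
  const subdomain = host.split('.')[0];                // "google"
  const db = databases[subdomain] || 'default_db';     // fall back if unknown
  res.end('Using database: ' + db);
}).listen(8080);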
There can also be a client-side-only approach. I don't know much about Silverlight, but I'm assuming that you should be able to interface Silverlight with JavaScript. You could read document.location with JavaScript and pass it to your Silverlight applet, after which further data-fetching logic would rely on the subdomain that was passed in by JavaScript.
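The JavaScript part of that could be as small as this (a sketch; how you hand the value to the Silverlight plugin, e.g. via initParams or a scriptable member, depends on your setup):

// Read the subdomain the user actually hit, e.g. "google" from "google.domain.com"
var subdomain = document.location.hostname.split('.')[0];
// Pass "subdomain" into the Silverlight applet from here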
#Ates:
That is what we did when we wrote the ASP.NET system... we pushed a slew of *.example.com hosts against the web server and handled them using the HTTP headers. The hold-up comes when dealing with WCF pushing the info between the client and the server... it can only exist in one domain...
So, for example, when you have {client}.example.com and {sandbox}.example.com, the WCF service can't be registered to both. It also cannot be registered to just *.example.com or example.com, so that's where the catch-22 comes in. Everything else I already have the prior knowledge to handle.
I recall a method by which an application can "spoof" another domain name in certain instances. I take it that in this case I would need such a configuration? Much to research yet, I believe.