I am using libcurl in C to retrieve data from a server. I am caching the initial connection so it can be reused again and again for further requests to the server. However, I also want to be able to choose between GET and POST each time I ask for a response from the server. Is there any way around this?
There's no "workaround" needed, you simply set the correct options for each transfer you want and libcurl will use GET or POST correctly. It will still reuse the connection if you reuse the curl handle and the server doesn't close it...
CURLOPT_POSTFIELDS is a common option to set POST (with data) and then you reset it back to use GET with CURLOPT_HTTPGET.
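For example, here is a minimal sketch of reusing one easy handle for a POST followed by a GET (the URL and payload are placeholders, and error checking is omitted):

#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        /* First transfer: POST with a body */
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/api");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=value");
        curl_easy_perform(curl);

        /* Second transfer on the same handle: switch back to GET.
           The connection is reused if the server kept it open. */
        curl_easy_setopt(curl, CURLOPT_HTTPGET, 1L);
        curl_easy_perform(curl);

        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}

Setting CURLOPT_HTTPGET to 1 is what flips the handle back from POST to GET for the next curl_easy_perform() call.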
I am using Ratchet WebSockets to connect to a WAMP/Autobahn front-end.
Everything works perfectly; however, when I change the topic code, the responses do not change!
I am running this in Docker. When I restart the containers the updated responses get sent, but if I change the code again, nothing happens!
Your question doesn't have many details, but I think what is happening is that you are running the PHP WebSocket server, then changing the PHP code and expecting the server to update automatically. That isn't how it works: the long-running server process keeps using the code it was started with, so you have to restart the WebSocket server for code changes to take effect.
If you want dynamic responses without restarting the socket, I suggest using a database or another data store that you can change at runtime and that PHP can fetch from.
I am using sessions saved in the database, and this works well. A lot of data relating to pagination, browsing history, etc. is stored correctly in the database.
However, I notice that data sent to a controller using Ajax is not being stored successfully.
If I debug the session within the controller called by Ajax, right after I have set the session vars, the values appear to be stored correctly in the session, but on subsequent requests it turns out that the session vars have NOT been saved.
I have done some testing and have found that the problem disappears if I change back to using "php" for the session instead of "database".
I have eliminated pretty much everything else from the mix, and it boils down to Cake not saving session data that is set during an Ajax request. Again, simply switching back to using "php" for sessions makes everything work perfectly.
I wonder if anyone has experienced anything similar?
CakePHP 2.4
Many thanks.
Well, in case anyone is interested, it turns out that the issue I was having was not strictly related to storing sessions in the database.
My application was making two Ajax calls at the same time, both attempting to update the session. This was a bug on my part, and it was causing other session-related issues too, such as requests returning a 403 status.
I removed the offending bug and all is now well.
There are two application servers and a switch. When I access the application using an application server's IP, it works fine. However, if I use the switch's IP in my URL, a Bad Request error is thrown, but only in Firefox and Chrome and only for a few links.
Here is a detailed explanation and solution for this problem from IBM.
Problem (Abstract)
Request to HTTP Server fails with Response code 400.
Symptom
The response in the browser could look like this:
Bad Request
Your browser sent a request that this server could not understand.
Size of a request header field exceeds server limit.
The HTTP Server error log shows the following message:
"request failed: error reading the headers"
Cause
This is normally caused by a very large cookie, which makes a request header field exceed the limit set for the web server.
Diagnosing the problem
To assist with diagnosing the problem, you can add the following to the LogFormat directive in httpd.conf:
error-note: %{error-notes}n
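For instance, a sketch of how this could look in httpd.conf (the format nickname and log path here are just placeholders):

LogFormat "%h %l %u %t \"%r\" %>s %b error-note: %{error-notes}n" common_with_errnote
CustomLog logs/access_log common_with_errnote

The %{error-notes}n field then records the internal error note (for example the "request failed: error reading the headers" reason) alongside each logged request.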
Resolving the problem
On the server side:
Increase the value of the LimitRequestFieldSize directive in httpd.conf, e.g. to 12288 or 16384:
LimitRequestFieldSize 16384
For how to set it, see "Increase the value of LimitRequestFieldSize in Apache".
On the client side:
Clearing your web browser's cache and cookies should be enough.
If you use the Apache httpd web server in a version above 2.2.15-60, it could also be caused by an underscore (_) in the hostname:
https://ma.ttias.be/apache-httpd-2-2-15-60-underscores-hostnames-now-blocked/
I just deleted my stored cookies, site data, and cache from my browser...
It worked. I'm using Firefox...
Make sure you URL-encode all of the query params in your URL.
In my case there was a space (' ') in my URL; I was making an API call using curl, and my API server was giving this error.
This means that the following URL
http://somedomain.com?key=some value with space
should be
http://somedomain.com/?key=some%20value%20with%20space
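If you are building the URL in code with libcurl (just a sketch under that assumption; the key and value are the ones from the example above), you can let curl_easy_escape() do the percent-encoding:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        /* Percent-encode the query value; length 0 means "use strlen()" */
        char *encoded = curl_easy_escape(curl, "some value with space", 0);
        if (encoded) {
            char url[512];
            snprintf(url, sizeof(url), "http://somedomain.com/?key=%s", encoded);
            curl_easy_setopt(curl, CURLOPT_URL, url);
            curl_easy_perform(curl);
            curl_free(encoded);
        }
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}

On the command line, curl's --data-urlencode option combined with -G does the equivalent encoding for a GET request.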
This is caused by too many (or oversized) cookies!
To solve it in Chrome: open Developer Tools with Ctrl + Shift + I.
At the top you will see Console, Network, and a little ">>" button; click it and choose Application.
On the left, under Storage, find Cookies.
There is a little down arrow indicating a drop-down; click it.
You will now see the website, something like www.investing.com.
Right-click it and select Clear.
Reload.
It works!
Alternatively, clear cookies and cache the traditional way, and that will work too.
In my case it was a cookie-related issue: I had many cookies with extremely big values, and that was causing the problem.
You can replicate this issue here on stackoverflow.com; just open the console and type this:
[ ...Array(5) ].forEach((i, idx) => {
document.cookie = `stackoverflow_cookie${idx}=${'a'.repeat(4000)}`;
});
What does that do?
It creates 5 cookies, each with a value 4,000 characters long; reload the page and you will see the same issue. Five cookies of roughly 4 KB each add up to about 20 KB of Cookie header, far more than a typical server's request header field limit.
I tried it on google.com and you get the error there too, but they automatically clear the cookies for you, which is a nice fallback to start fresh.
I was testing my application with special characters and was observing the same error. After some research, it turned out the % symbol was the cause. I had to replace it with its encoded representation, %25. It's all fine now, thanks to the post below:
https://superuser.com/questions/759959/why-does-the-percent-sign-in-a-url-cause-an-http-400-bad-request-error
I'm a bit late to the party, but I bumped into this issue whilst working with the openidc auth module.
I ended up noticing that cookies were not being cleared properly, and I had at least 10 mod_auth_openidc_state_... cookies, all of which would be sent by my browser whenever I made a request.
If this sounds familiar to you, double check your cookies!
In my case it was a request header:
"Content-Type" followed by two spaces,
or
"Content-Type" followed by a tab.
The header name had trailing whitespace (two spaces or a tab);
when I removed it, the request worked.
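If you are setting headers programmatically, for example with libcurl (a sketch under that assumption; the header value is illustrative), make sure there is no stray whitespace around the header name:

#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        struct curl_slist *headers = NULL;
        /* Correct: no extra spaces or tabs in the header name */
        headers = curl_slist_append(headers, "Content-Type: application/json");
        /* Wrong: "Content-Type  : application/json" or "Content-Type\t: ..." */
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_perform(curl);
        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}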
On my Magento 2 website, exactly the same error showed up when clicking a product.
My solution was to edit the product's Search Engine Optimization - URL Key value,
making sure that the URL Key contains only letters, numbers and hyphens (-),
such as 100-washed-cotton-duvet-cover-set,
and deleting all other special characters, such as %.
I got "Bad Request: your browser sent a request that this server could not understand"
when I tried to download a file to the target machine using curl.
I worked around it by using scp instead to copy the file from the source machine to the
target machine.
If you are getting this error on a WordPress website, check the solutions below.
Corrupted browser cache and cookies: delete your cookies and clear your cache.
Restart your server.
For GET requests, make sure the parameters you pass are URL-encoded.
If you are using PHP, you can use the urlencode() function.
If you have this same problem and none of the other solutions worked, check the URL again.
In my case there was a space at the end: when the URL was added to the cron job, someone accidentally copied a trailing blank space along with it.
Check whether your data types are correct.
For example, if you send a file, make sure you send the full file object.
I'm wondering whether anyone else has encountered this problem before: jqGrid works fine when I run my Google App Engine application on my local machine (using Eclipse + the local datastore). When the same application is deployed to the actual domain, although the Ajax data calls (I'm using the XML format) return correct values (as confirmed using Firebug), the returned information does not appear as rows in the grid table. (Both the Firebug console and the Firefox error console show the same messages for local and deployed requests.)
Any helpful pointers?
Solved!
I had not explicitly set the Content-Type to "text/xml" when returning the data, so the XML was still being served with the default "text/html" header. This was somehow tolerated when the data was served from localhost.
Setting this header solved the issue.
Thanks.