I want to set the 'secure' flag on the JSESSIONID cookie. Is there a configuration in Tomcat 6 for this?
I tried setting 'secure="true"' on the 'Connector' (8080) element of server.xml, but that creates problems: the connection gets reset.
Note that in my application the JSESSIONID is created in 'http' mode (index page); when the user logs in, it switches to 'https' mode.
If you are using Tomcat 6 you can do the following workaround:
String sessionid = request.getSession().getId();
response.setHeader("Set-Cookie", "JSESSIONID=" + sessionid + "; Secure; HttpOnly");
See https://www.owasp.org/index.php/HttpOnly for more information.
Use the attribute useHttpOnly="true" on the Context element. In Tomcat 9 the default value is true.
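A minimal context.xml sketch (assuming Tomcat 6.0.19 or later, where this Context attribute is available; the WatchedResource line is just the stock default):
<!-- conf/context.xml: mark session cookies HttpOnly -->
<Context useHttpOnly="true">
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
</Context>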
For an nginx proxy it can be solved easily in the nginx config:
if ($scheme = http) {
    return 301 https://$http_host$request_uri;
}
proxy_cookie_path / "/; secure";
Based on the AWS documentation (https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints-create.html):
# create an endpoint configuration with a serverless production variant
response = client.create_endpoint_config(
    EndpointConfigName="<your-endpoint-configuration>",
    ProductionVariants=[
        {
            "ModelName": "<your-model-name>",
            "VariantName": "AllTraffic",
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,
                "MaxConcurrency": 20,
            },
        }
    ],
)
I created a serverless endpoint (sample code above), but I keep getting an error when the endpoint is invoked. Has anyone run into this issue? 'Error - /.sagemaker/ts/models/model.mar already exists. Please specify --force/-f option to overwrite the model archive output file'. FYI - this worked when the endpoint was configured as provisioned instead of serverless.
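For reference, the endpoint is then created and invoked roughly like this (a sketch using the standard boto3 calls; the endpoint name and request payload are placeholders):
import boto3

client = boto3.client("sagemaker")

# create the endpoint from the serverless endpoint configuration above
client.create_endpoint(
    EndpointName="<your-endpoint>",
    EndpointConfigName="<your-endpoint-configuration>",
)
client.get_waiter("endpoint_in_service").wait(EndpointName="<your-endpoint>")

# invoking it is where the model.mar error shows up
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="<your-endpoint>",
    ContentType="application/json",
    Body=b'{"inputs": "..."}',
)
print(response["Body"].read())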
You can check out a few examples we created here.
The netty4-http component sets an invalid "host" HTTP header when no port is defined in the uri for requests.
netty4-http sets the header in DefaultNettyHttpBinding.toNettyRequest, where URI is used to parse the uri string, but URI returns -1 if no port is defined.
For example, the host header can be set to "hostname:-1", which is not accepted by some proxy servers that check the validity of the host header.
For example, an Apache proxy will return an HTTP 400 (Bad Request) error.
Also see ch. "14.23 Host" in https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html for the definition.
See https://www.rfc-editor.org/rfc/rfc7230#section-5.4
// This is how it's done in DefaultNettyHttpBinding.toNettyRequest
URI u = new URI(uri);
// u.getPort() returns -1 when the uri contains no explicit port,
// so this produces "hostname:-1" whenever the port is absent
String hostHeader = u.getHost() + (u.getPort() == 80 ? "" : ":" + u.getPort());
request.headers().set(HttpHeaderNames.HOST.toString(), hostHeader);
LOG.trace("Host: {}", hostHeader);
As a workaround I'm using a custom NettyHttpBinding class, but it would be nice to get a fix for it.
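A minimal sketch of what a corrected computation could look like (assuming the only change needed is to skip the port when URI.getPort() returns -1):
URI u = new URI(uri);
int port = u.getPort();
// omit the port part when none was given (-1) or when it is the default 80
String hostHeader = u.getHost() + (port == -1 || port == 80 ? "" : ":" + port);
request.headers().set(HttpHeaderNames.HOST.toString(), hostHeader);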
I'm working on getting a connection to Cloudant working.
The following uses the Sag library for PHP:
<?php
header('Content-Type: text/html; charset=utf-8');
require_once('../../src/Sag.php');

// these credentials come from an API key
$uName = "";
$pName = "";

$sag = new Sag('user.cloudant.com');
$sag->login($uName, $pName);
$sag->setDatabase('test');

try {
    $result = $sag->get('/test/_design/wordsP/_view/errores');
    echo ($result);
}
catch(Exception $e) {
    error_log("Something's wrong");
    var_dump($e);
}
?>
However, I'm not getting the expected result. The view does work if used directly in the URL bar.
The response is:
object(SagException)#3 (6) {
  ["message:protected"]=> string(50) "Sag Error: cURL error #7: couldn't connect to host"
  ["string:private"]=> string(0) ""
  ["code:protected"]=> int(0)
  ["file:protected"]=> string(73) "/home2/.../public_html/clant/src/httpAdapters/SagCURLHTTPAdapter.php"
  ["line:protected"]=> int(134)
  ["trace:private"]=> array(3) { .............
Is there something I'm not using correctly in the PHP script? (I removed the current password and username, as well as the account, but they're there.)
Curl error 7 means you are unable to connect to the host or proxy:
CURLE_COULDNT_CONNECT (7)
Failed to connect() to host or proxy.
Source: http://curl.haxx.se/libcurl/c/libcurl-errors.html
If your browser is using a proxy when you connect to the URL, you will also need to configure Sag to use that proxy.
Also, when you say that you replaced the account in your code example, did you mean that you replaced user in $sag = new Sag('user.cloudant.com'); with your actual username? If not, you will need to use your actual username.
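To rule Sag out, you could first test the connection with plain PHP cURL (a diagnostic sketch; replace user with your actual account name, and enable the proxy line only if you use one):
<?php
// minimal connectivity check against the Cloudant host
$ch = curl_init('https://user.cloudant.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// curl_setopt($ch, CURLOPT_PROXY, 'proxy.example.com:8080'); // if behind a proxy
$body = curl_exec($ch);
if ($body === false) {
    echo 'cURL error #' . curl_errno($ch) . ': ' . curl_error($ch);
} else {
    echo $body; // Cloudant normally answers with a small JSON welcome document
}
curl_close($ch);
?>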
An error has occurred: Validation failed for [userAgent] with value []: The property userAgent is required and cannot be NULL, the empty string, or the default [userAgent]
How can I resolve this exception?
Code example:
require_once 'Google/Api/Ads/AdWords/Lib/AdWordsUser.php';
$user = new AdWordsUser();
$user->LogDefaults();
$targetingIdeaService = $user->GetService('TargetingIdeaService', 'v201406');
The Google AdWords SDK version v201406 requires you to set userAgent to a non-empty string that identifies your API requests, so the Google team can tell where a request comes from if any problem arises. Put any valid name in userAgent in the auth.ini file.
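For example, something like this in auth.ini (the name itself is arbitrary; it just has to be non-empty and identify your application):
; auth.ini
userAgent = "MyCompany-KeywordPlanner"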
Using PhantomJS page.evaluate to extract "resultStats" (a div id) from http://www.google.com/search/?q=site:%s works on my local server but not on my production server.
NOTE: I'm using the latest PhantomJS, 1.9.7; however, I experienced the same issue with the previous version, 1.9.6.
NOTE: Phantomjs page.render (on Google home page as well as any other domain name) is working on both servers and creates nice screenshots.
On my production server (Debian stable 7.3, linode.com), the PHP code below, with a top-level domain name as "$url", returns:
TypeError: 'null' is not an object (evaluating 'document.getElementById('resultStats').textContent') phantomjs://webpage.evaluate():2 phantomjs://webpage.evaluate():3 phantomjs://webpage.evaluate():3 null
On my local server (Debian testing), the PHP code below, for the same "$url", returns:
About 43 results
This happens with any domain name/url I use as the argument - I've tested it on dozens.
What might cause this to occur in my remote production server and not my local server?
gsiteindex.js
var page = require('webpage').create();
var site = phantom.args[0]; // first command-line argument: the domain to query
page.open("https://www.google.com/search?q=site:" + site, function (status) {
var result = page.evaluate(function () {
return document.getElementById('resultStats').textContent;
});
console.info(result);
phantom.exit();
});
.php
$phantomjs = "phantomjs";
$script = "gsiteindex.js";
$site = escapeshellarg($url); // quote the argument before handing it to the shell
$command = "$phantomjs $script $site";
$googlestring = shell_exec($command);
echo $googlestring;
die();
Well, nrabinowitz was right. I tested it more on my own server using proxies: most timed out, some returned the above error, and a couple returned correct results (I assume they were correct based on the location of the proxy's IP address, because the figures were a little different than when using my ISP's public IP address (Calif., USA)).
So it's simply a matter of Google blocking certain types of requests from certain IP addresses.
Thanks again for the comment.
Include a header with a user agent, e.g.
header = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0'}
Without a user agent you get Google's default style page without resultStats. I also had this issue and adding the header helped.
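In the PhantomJS script above, the equivalent is setting page.settings.userAgent before page.open (a sketch using the standard PhantomJS API; the UA string is just an example):
var page = require('webpage').create();
var site = phantom.args[0];
// pretend to be a desktop browser so Google serves the full results page
page.settings.userAgent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0';
page.open("https://www.google.com/search?q=site:" + site, function (status) {
    // ... evaluate document.getElementById('resultStats') as before
});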
The default Google search page looks like this: [screenshot of the stripped-down results page, without resultStats]