If statement in vcl_fetch not working

I'm trying to remove a cookie, but only if the URL does not match "demo/secured".
In default.vcl I have:
sub vcl_fetch {
    # error 200 req.url ~ ".*/demo/secured/.*";
    if (req.url ~ ".*/demo/secured/.*") {
        set beresp.http.x-whaaat = "this is demo secured";
    } else {
        unset beresp.http.set-cookie;
        set beresp.http.x-whaaat = "not demo secured";
    }
}
Both URLs, with and without /demo/secured, result in an x-whaaat response header of "not demo secured". But uncommenting the error 200 line gives "error 200 true" for URLs with /demo/secured and "error 200 false" for URLs without it.
I tried a gazillion and one variations of that if statement but can't get it to return anything other than false.
The following is true:
if ( "/app_dev.php/demo/secured/login" ~ ".*/demo/secured/.*" )
Even though I copied and pasted "/app_dev.php/demo/secured/login" from the page output of error 200 req.url, the following isn't true:
# error 200 req.url;
if ( req.url ~ ".*/demo/secured/.*" ) {
    set beresp.http.x-whaaat = "this is demo secured";
} else {
    unset beresp.http.set-cookie;
    set beresp.http.x-whaaat = "not demo secured";
}
This gives me an "x-whaaat: not demo secured" header on http://site/app_dev.php/demo/secured/login. I'm not sure how this is possible, because the same URL gives me "/app_dev.php/demo/secured/login" when I uncomment the error 200 line.
varnishd -V
gives me:
varnishd (varnish-3.0.5 revision 1a89b1f)
Copyright (c) 2006 Verdens Gang AS
Copyright (c) 2006-2011 Varnish Software AS

I was looking at the wrong request in the Firebug/Chrome dev tools. The request I was inspecting was an XHR request made to mysite.com/app_dev.php/_wdt/60c03d, and since that URL doesn't contain /demo/secured, sure enough it has the right headers.
Time to walk away from the computer for a while; I'll continue tomorrow.
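For anyone chasing a similar ghost: filtering varnishlog by request URL makes it obvious which transaction a given set of headers belongs to. A minimal sketch for Varnish 3 (the URL pattern is this question's; adjust as needed):

varnishlog -c -m "RxURL:.*demo/secured.*"

Only client transactions whose request URL matches the pattern are shown, so an unrelated XHR request like the /_wdt/ call above can't sneak into what you're inspecting.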

Related

Identity Server configuration endpoint invalid json?

We have an issue with the Identity Server configuration endpoint generating invalid JSON. I can't show too much, but the error comes from the call to the .well-known/openid-configuration endpoint: the environment whose response starts with 7ee gives us the error below, while another environment that works returns valid JSON.
"#t": "2022-08-24T08:59:41.1177158Z",
"#mt": "{msg} {#dt}",
"#l": "Error",
"msg": "Exception caught while processing request",
"dt": {
"StackTrace": " at Newtonsoft.Json.JsonTextReader.ParseReadNumber(ReadType readType, Char firstChar, Int32 initialPosition)\r\n at Newtonsoft.Json.JsonTextReader.ParseValue()\r\n at Newtonsoft.Json.JsonReader.ReadForType(JsonContract contract, Boolean hasConverter)\r\n at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)\r\n at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)\r\n at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)\r\n at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)\r\n at Microsoft.IdentityModel.Protocols.OpenIdConnect.OpenIdConnectConfigurationRetriever.GetAsync(String address, IDocumentRetriever retriever, CancellationToken cancel)\r\n at Microsoft.IdentityModel.Protocols.ConfigurationManager`1.GetConfigurationAsync(CancellationToken cancel)",
"Details": "Input string '7ee' is not a valid number. Path '', line 1, position 3.",
"CallingMethod": "Invoke",
"$type": "ErrorLogDetails"
},
Has anyone experienced this before and can help point me in the right direction? Many thanks.
This was a hard one, and really outside of our domain: we have an F5 load balancer that didn't have the right CORS policy. I'm unsure how this caused the extra characters, but it did. Adding the following to the load balancer policy for that site corrected the issue.
when HTTP_REQUEST priority 200 {
    unset -nocomplain cors_origin
    if { ( [HTTP::header Origin] contains "example.com" ) } {
        if { ( [HTTP::method] equals "OPTIONS" ) and ( [HTTP::header exists "Access-Control-Request-Method"] ) } {
            # CORS preflight request - return response immediately
            HTTP::respond 200 "Access-Control-Allow-Origin" [HTTP::header "Origin"] \
                "Access-Control-Allow-Methods" [HTTP::header "Access-Control-Request-Method"] \
                "Access-Control-Allow-Headers" [HTTP::header "Access-Control-Request-Headers"] \
                "Access-Control-Max-Age" "86400" \
                "Access-Control-Allow-Credentials" "true"
        } else {
            # CORS GET/POST requests - set cors_origin variable
            set cors_origin [HTTP::header "Origin"]
        }
    }
}
when HTTP_RESPONSE {
    # CORS GET/POST response - check the cors_origin variable set in the request
    # (note: don't re-set cors_origin here, or the info exists check always passes)
    if { [info exists cors_origin] } {
        HTTP::header remove Access-Control-Allow-Origin
        HTTP::header remove Access-Control-Allow-Credentials
        HTTP::header remove Vary
        HTTP::header insert "Access-Control-Allow-Origin" example.com
        HTTP::header insert "Access-Control-Allow-Credentials" "true"
        HTTP::header insert "Vary" "Origin"
    }
}
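If you hit something similar, it can help to look at the first bytes of the raw response before any client-side decoding; the stray 7ee prefix looks suspiciously like a hexadecimal chunk-size line from HTTP chunked transfer encoding leaking into the body, which a middlebox mangling the response could cause. A quick check with curl (hypothetical host; --raw disables curl's transfer decoding):

curl -sk --raw https://idp.example.com/.well-known/openid-configuration | head -c 32

If the output starts with a short hex string instead of {, something between the server and the client is corrupting the transfer encoding.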

Unable to extract/list all event logs on a Watson Assistant workspace

Please help: I was trying to call the Watson Assistant endpoint
https://gateway.watsonplatform.net/assistant/api/v1/workspaces/myworkspace/logs?version=2018-09-20 to get the list of all events,
filtering by date range using these params:
var param = {
    workspace_id: '{myworkspace}',
    page_limit: 100000,
    filter: 'response_timestamp%3C2018-17-12,response_timestamp%3E2019-01-01'
}
Apparently I got the empty response below:
{
"logs": [],
"pagination": {}
}
A couple of things to check:
1. You have 2018-17-12, which is not a valid date: it reads as "the 12th day of the 17th month of 2018". The format is YYYY-MM-DD.
2. Even with a valid date, your search says "documents that are before 17th Dec 2018 and after 1st Jan 2019", which would return no documents; a corrected filter is sketched after this list.
3. Logs are only generated when you call the message() method through the API. So check the logging page in the tooling to see whether you even have logs.
4. If you have a Lite account, logs are only stored for 7 days and then deleted. To keep logs longer you need to upgrade to a Standard account.
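A minimal corrected version of the params, assuming the intent is logs between 17 Dec 2018 and 1 Jan 2019 (note the fixed date and the swapped URL-encoded operators, %3E for > and %3C for <):

var param = {
    workspace_id: '{myworkspace}',
    page_limit: 100000,
    filter: 'response_timestamp%3E2018-12-17,response_timestamp%3C2019-01-01'
}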
Although not directly related to your issue, be aware that page_limit has an upper hard-coded limit (IIRC 200-300?). So you may ask for 100,000 records, but it won't give them to you.
This is sample Python code (unsupported) that uses pagination to read the logs:
from urllib.parse import urlparse, parse_qs  # needed to pull the cursor out of next_url
from watson_developer_cloud import AssistantV1

username = '...'
password = '...'
workspace_id = '....'
url = '...'
version = '2018-09-20'

c = AssistantV1(url=url, version=version, username=username, password=password)

totalpages = 999  # safety cap on the number of pages read
pagelimit = 200   # records per page

logs = []
page_count = 1
cursor = None
count = 0
x = {'pagination': 'DUMMY'}
while x['pagination']:
    if page_count > totalpages:
        break
    print('Reading page {}. '.format(page_count), end='')
    x = c.list_logs(workspace_id=workspace_id, cursor=cursor, page_limit=pagelimit)
    if x is None:
        break
    print('Status: {}'.format(x.get_status_code()))
    x = x.get_result()
    logs.extend(x['logs'])  # collect this page's records into one flat list
    count = count + len(x['logs'])
    page_count = page_count + 1
    # follow the pagination cursor, if the service returned one
    if 'pagination' in x and 'next_url' in x['pagination']:
        p = x['pagination']['next_url']
        u = urlparse(p)
        query = parse_qs(u.query)
        cursor = query['cursor'][0]
Your logs object should contain the logs.
I believe the limit is 500, and then we return a pagination URL so you can get the next 500. I don't think this is the issue, but once you start getting logs back it's good to know.

PHP Download File Script (doesn't work)

I have never written a PHP download-file script, nor do I have any experience with one, and I am really not a pro. I got the snippet of code you can see below from another website and tried to make use of it. I understand what is written in the script, but I don't understand the error messages, or rather, I don't know how to prevent them.
Here is the download.php script; I have put it into the /download/ folder below my main domain:
<?php
ignore_user_abort(true);
set_time_limit(0); // disable the time limit for this script
$path = "/downloads/"; // change the path to fit your website's document structure
$dl_file = preg_replace("([^\w\s\d\-_~,;:\[\]\(\).]|[\.]{2,})", '', $_GET['download_file']); // simple file name validation
$dl_file = filter_var($dl_file, FILTER_SANITIZE_URL); // Remove (more) invalid characters
$fullPath = $path.$dl_file;
if ($fd = fopen($fullPath, "r")) {
    $fsize = filesize($fullPath);
    $path_parts = pathinfo($fullPath);
    $ext = strtolower($path_parts["extension"]);
    switch ($ext) {
        case "pdf":
            header("Content-type: application/pdf");
            header("Content-Disposition: attachment; filename=\"".$path_parts["basename"]."\""); // use 'attachment' to force a file download
            break;
        // add more headers for other content types here
        default:
            header("Content-type: application/octet-stream");
            header("Content-Disposition: filename=\"".$path_parts["basename"]."\"");
            break;
    }
    header("Content-length: $fsize");
    header("Cache-control: private"); // use this to open files directly
    while (!feof($fd)) {
        $buffer = fread($fd, 2048);
        echo $buffer;
    }
}
fclose($fd);
exit;
Now, in the /download/ folder, which contains download.php, I have a folder /downloads, which contains the .pdf that should be downloaded.
The link I use on my webpage passes the file name to the script, i.e. download.php?download_file=test.pdf.
Now I get the following errors when I click on the link:
Warning: Cannot set max_execution_time above master value of 30 (tried to set unlimited) in /var/www/xxx/html/download/download.php on line 4
Warning: fopen(/downloads/test.pdf): failed to open stream: No such file or directory in /var/www/xxx/html/download/download.php on line 12
Warning: fclose() expects parameter 1 to be resource, boolean given in /var/www/xxx/html/download/download.php on line 34
If I use an absolute path (https://www.my-domain.de/downloads/) for the $path variable, I get these errors:
Warning: Cannot set max_execution_time above master value of 30 (tried to set unlimited) in /var/www/xxx/html/download/download.php on line 4
Warning: fopen(): https:// wrapper is disabled in the server configuration by allow_url_fopen=0 in /var/www/xxx/html/download/download.php on line 12
Warning: fopen(https://www.my-domain.de/downloads/test.pdf): failed to open stream: no suitable wrapper could be found in /var/www/xxx/html/download/download.php on line 12
Warning: fclose() expects parameter 1 to be resource, boolean given in /var/www/xxx/html/download/download.php on line 34
I am thankful for any advice!
<?php
ignore_user_abort(true);
//set_time_limit(0); disable the time limit for this script
$path = "downloads/"; // change the path to fit your website's document structure
$dl_file = preg_replace("([^\w\s\d\-_~,;:\[\]\(\).]|[\.]{2,})", '', $_GET['download_file']); // simple file name validation
$dl_file = filter_var($dl_file, FILTER_SANITIZE_URL); // Remove (more) invalid characters
$fullPath = $path.$dl_file;
if ($fd = fopen($fullPath, "r")) {
    $fsize = filesize($fullPath);
    $path_parts = pathinfo($fullPath);
    $ext = strtolower($path_parts["extension"]);
    switch ($ext) {
        case "pdf":
            header("Content-type: application/pdf");
            header("Content-Disposition: attachment; filename=\"".$path_parts["basename"]."\""); // use 'attachment' to force a file download
            break;
        // add more headers for other content types here
        default:
            header("Content-type: application/octet-stream");
            header("Content-Disposition: filename=\"".$path_parts["basename"]."\"");
            break;
    }
    header("Content-length: $fsize");
    header("Cache-control: private"); // use this to open files directly
    while (!feof($fd)) {
        $buffer = fread($fd, 2048);
        echo $buffer;
    }
    fclose($fd); // close inside the if, so we never pass false to fclose()
}
exit;
?>
Try this code.
Your server is probably not allowing you to raise the maximum execution time to unlimited; check the max_execution_time setting in your php.ini file.
Also, the relative path was wrong, and "https://www.my-domain.de/downloads/" is not a path; it's a URL, and your server has allow_url_fopen disabled, so fopen() can't use it.
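One way to sidestep the working-directory confusion entirely (a minimal sketch, assuming the downloads folder sits next to download.php as described above):

$path = __DIR__ . "/downloads/"; // absolute filesystem path, e.g. /var/www/xxx/html/download/downloads/

__DIR__ always expands to the directory of the current script file, so the result is a proper filesystem path no matter where PHP was invoked from, and the leading-slash mixup between URLs and paths goes away.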

Retry loop until condition met

I am trying to move my mouse onto an object, but I want a condition that checks whether "surowiec" is still on the screen; if it isn't, I want to skip this loop and go on to the next one, and after the second one finishes, go back to the first and repeat.
[error] script [ Documents ] stopped with error in line 12 [error] FindFailed ( can not find surowiec.png in R[0,0 1920x1080]#S(0) )
w_lewo = Location(345,400)
w_prawo = Location(1570,400)
w_gore = Location(345,400)
w_dol = Location(345,400)
surowiec = "surowiec.png"
while surowiec:
    if surowiec == surowiec:
        exists("surowiec.png")
        if exists != None:
            click("surowiec.png")
            wait(3)
            exists("surowiec.png")
        elif exists == None:
            surowiec = None
            click(w_prawo)
            wait(8)
            surowiec = surowiec
How about a small example:
while True:
    if exists(surowiec):
        print('A')
        click(surowiec)
    else:
        print('B')
        break
A while True loop will always run until it meets a break to exit the loop. Also have a look at the other functions that are available in Sikuli; it can sometimes be hard to discover what exists. The docs on pushing keys and on regions are nice ones.
The functions I found most useful are exists (and its negation, if not exists(...)) and find, which locates an image on the screen, so you don't have to search for the image over and over again if it stays in the same location: image1 = find(surowiec)
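Applied to the question, a rough sketch of the two alternating loops (using only the surowiec image, the w_prawo location, and the wait times from the question's own code):

while True:
    # first loop: keep clicking surowiec while it is still on screen
    while exists(surowiec):
        click(surowiec)
        wait(3)
    # surowiec has disappeared: do the second step, then repeat from the top
    click(w_prawo)
    wait(8)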

Postgres SQLSTATE: PQresultErrorField returns NULL

I am not able to get error details using the PQresultErrorField API after a query execution fails. Using PQerrorMessage on the connection gives the correct error (constraint violation xxx_pk, etc.), and PQresultStatus shows PGRES_FATAL_ERROR.
However, when I use PQresultErrorField(result, PG_DIAG_SQLSTATE), I get a NULL result. Other field codes also give me NULL results.
Does this API need to be compiled in?
Postgres version is 9.2.1, using the libpq C library.
It's supposed to return NULL only when the field is not applicable.
This simple test works for me:
PGresult* res = PQexec(conn, "SELECT * FROM foobar");
if (res) {
    if (PQresultStatus(res) == PGRES_FATAL_ERROR) {
        char* p = PQresultErrorField(res, PG_DIAG_SQLSTATE);
        if (p) {
            printf("sqlstate=%s\n", p);
        }
    }
    PQclear(res); /* free the result whatever its status */
}
Result:
sqlstate=42P01
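The other diagnostic fields come back the same way; a small extension to drop into the same PGRES_FATAL_ERROR branch, using two more PG_DIAG_* constants from libpq's public header:

char* sev = PQresultErrorField(res, PG_DIAG_SEVERITY);        /* e.g. "ERROR" */
char* msg = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY); /* the main error text */
printf("severity=%s message=%s\n", sev ? sev : "null", msg ? msg : "null");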
