Detect if a document update fails - Vespa

This update statement:
curl -v -X PUT \
  -d '{"fields":{"postings":{"assign":42}}}' \
  http://localhost:8080/document/v1/post/post/docid/ABCDEFG
Question:
The post "ABCDEFG" does not exist, but the server answers "200 OK".
Is there a way to detect that the update statement failed (e.g. by getting a "404 Not Found")?

https://docs.vespa.ai/en/reference/document-v1-api-reference.html
Going by the documentation, that is expected behavior. Use the condition parameter to require that the document exists; a failed condition yields a 412 status code instead.
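A hedged sketch of the same update with a condition attached (the selection expression post simply requires that a document of type post exists at this id; adjust it to your schema):

curl -v -X PUT \
  -d '{"fields":{"postings":{"assign":42}}}' \
  'http://localhost:8080/document/v1/post/post/docid/ABCDEFG?condition=post'

If the document does not exist, the condition cannot be satisfied and the API answers 412 Precondition Failed instead of 200 OK.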

Related

Bash: works at command line but getting 'curl: (1) Protocol ""https" not supported or disabled in libcurl' when using in a script

Got stuck in a situation where a curl request works fine on the command line but fails in a script with weird errors. Here is the relevant part of the script:
#we already got the $token
#vars defining curl parameters
dc="NA EU AU SEA JP"
curl_url='https://example.com/api/config/v1/'
curl_request='alertingProfiles?'
curl_properties='tenant=all&stage=prd&cluster='
curl_auth='"Authorization: Bearer '"$token"'"'
for k in $dc; do
# Download the json for each DC using the curl request
curl_combined=\""$curl_url$curl_request$curl_properties$k"\"; echo "$curl_combined"
curlCMD=( -vv --location --request GET "$curl_combined" -H "$curl_auth" -o \""$k.json"\")
echo "${curlCMD[#]}"
curl "${curlCMD[#]}"
done
When I run this in a script I see the following output:
First, the output from the first echo command
"https://example.com/api/config/v1/alertingProfiles?tenant=all&stage=prd&cluster=NA"
Then, the output from the second echo command, which is what curl should take as its parameters. I've skipped the bearer token, but trust me, it is not a problem here.
-vv --location --request GET "https://example.com/api/config/v1/alertingProfiles?tenant=all&stage=prd&cluster=NA" -H "Authorization: Bearer " -o "NA.json"
And I am getting following output
Note: Unnecessary use of -X or --request, GET is already inferred.
* Protocol ""https" not supported or disabled in libcurl
* Closing connection -1
curl: (1) Protocol ""https" not supported or disabled in libcurl
Please note the two double quotes in the message above in front of https.
If I simply copy/paste the second echo output into the command line right after curl and execute it, it works as expected and downloads the JSON into the file.
If I remove the \" from the curl_combined var, the request is sent without double quotes, and then I get curl: (56) Unexpected EOF from the script.
If I copy/paste the second echo output (without double quotes), supply it to curl, and execute from the command line, I get a 403 until I put double or single quotes around the https URL.
In short, it seems that double quotes around the https URL are necessary, but for some reason curl does not accept them from the script.
So what went wrong here? I don't even mind writing the constructed URL to a file and reading it from there, or using a herestring, if that helps, but I want to know what exactly went wrong.
Thanks.
P.S. Removing --location or --request GET or both does not change the outcome.
The script is already passing the "${curlCMD[@]}" array as parameters to curl. This approach handles the correct expansion of the URL, regardless of any special characters in it. There is no need to wrap curl_combined in additional quotes using the \" construct.
Suggest replacing the assignment of curl_combined with:
curl_combined="$curl_url$curl_request$curl_properties$k"
echo "$curl_combined"
Alternatively, this works if the array-based curl invocation is replaced with the following:
curl -vv \
--location \
--request GET \
"$curl_combined" \
--header "Authorization: Bearer ${token}" \
-o "$k.json"
But it is ugly, to put it politely...
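For completeness, a minimal corrected sketch of the whole loop, with the \" wrapping dropped and the Authorization header built without embedded quote characters (variable names as in the question):

for k in $dc; do
    # Build the URL without embedded quote characters
    curl_combined="$curl_url$curl_request$curl_properties$k"
    echo "$curl_combined"
    # The array expansion handles the quoting; no literal quotes needed
    curlCMD=( -vv --location "$curl_combined" -H "Authorization: Bearer $token" -o "$k.json" )
    curl "${curlCMD[@]}"
done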

InfluxDB returns bad timestamp. I have no idea why

I doubt this is a duplicate issue but it might be the same problem as seen elsewhere.
When I try to insert things into InfluxDB via curl, it keeps giving me the same invalid timestamp error, and I can't really tell why. Below is the data:
metaRead=3 metaWrite=0 dspaceOps=0 keyValOps=0 scheduled=1 requests=66 smallReads=0 smallWrites=0 flowReads=0 flowWrites=0 creates=0 removes=0 mkdirs=0 rmdirs=0 getattrs=0 setattrs=0 3862710532
Any help would be appreciated. I'm using the curl library to send the requests, but sending them manually via the command line yields the same response.
InfluxDB expects/defaults to nanosecond precision. You're using seconds, by the looks of it. You'll need to modify your curl command with the precision query arg:
curl -i -XPOST "http://localhost:8086/write?db=weather&precision=s" --data-binary 'temperature,location=1 value=90 1472666050'
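Applied to the question's data, it might look like the sketch below. Two assumptions here: line protocol needs a measurement name (the hypothetical fsstats) with comma-separated fields, and the database name mydb is also illustrative:

curl -i -XPOST "http://localhost:8086/write?db=mydb&precision=s" \
  --data-binary 'fsstats metaRead=3,metaWrite=0,dspaceOps=0,keyValOps=0 3862710532'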

Catch invalid password on Sudo

Is there a way to trap/catch an invalid password when you use sudo? Basically, I want to return a specific exit code if the sudo password is invalid. I don't want to avoid sudo or get around it; I just want to close/exit a script in a manner of my choosing.
Based on the man page of sudo(8), there is no easy way to evaluate the exact reason for a failure:
Exit Value
Upon successful execution of a program, the exit status from sudo will
simply be the exit status of the program that was executed.
Otherwise, sudo exits with a value of 1 if there is a
configuration/permission problem or if sudo cannot execute the given
command. In the latter case the error string is printed to the
standard error. If sudo cannot stat(2) one or more entries in the
user's PATH, an error is printed on stderr. (If the directory does not
exist or if it is not really a directory, the entry is ignored and no
error is printed.) This should not happen under normal circumstances.
The most common reason for stat(2) to return ''permission denied'' is
if you are running an automounter and one of the directories in your
PATH is on a machine that is currently unreachable.
The only "ugly" approach, which comes to my mind is to parse the result of stderr to determine the error reason:
#!/bin/bash
tmpfile=$(mktemp)
sudo echo "dummy" 2>"$tmpfile"
if [ $? -eq 1 ]; then
    if grep -qx "sudo.*incorrect password attempts" "$tmpfile"; then
        # exit due to failed password attempts
        echo "too many failed password attempts"
    else
        # other reason, for instance a configuration problem
        echo "other reason"
    fi
fi
rm "$tmpfile"
Note, however, that this approach is neither upgrade-safe nor language-independent: if a patch to sudo changes the text shown after a wrong password, or the user is logged on in a different language, this check will no longer match.
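If you only need an exit code of your own rather than the exact failure reason, a simpler sketch is to validate the credentials up front with sudo -v, which prompts for and checks the password without running a command. Note it still does not distinguish a wrong password from other sudo failures:

#!/bin/bash
if ! sudo -v; then
    # sudo authentication failed; exit with a code of our choosing
    exit 42
fi
# credentials are now cached, so this should not prompt again
sudo whoami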

Programmatically get web request initiator

The Chrome Dev Tools network tab has an initiator column that will show you exactly what code initiated the network request.
I'd like to be able to get network request initiator information programmatically, so I could run a script with a url and request search string argument, and it would return details about where every request with a url matching request search string came from on the page at url. So given the arguments www.stackoverflow.com and google the output might look something like this (showing requesting url, line number, and requested url):
/ 19 http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js
/ 4291 http://www.google-analytics.com/analytics.js
I looked into PhantomJS, but its onResourceRequested callback doesn't provide any initiator information, or context from which it can be derived, according to the documentation: http://phantomjs.org/api/webpage/handler/on-resource-requested.html
Is it possible to do this with PhantomJS at all, or with some other tool or service such as Selenium?
UPDATE
From the comments and answers so far, it seems this isn't currently supported by PhantomJS, Selenium or anything else. So here's an alternative approach that might work: load the page and all of its assets, and then find any occurrences of request search string in all of the files. How could I do that?
You should file a feature request in the issue tracker against the DevTools. The initiator information is not exported in the HAR, so getting it out of there isn't going to work. As far as I know, no existing API allows for this either.
I've been able to implement a solution that uses PhantomJS to get all of the URLs loaded by a page, and then use a combination of xargs, curl and grep to find the search string at those URLs.
The first piece is this PhantomJS script, which simply outputs every URL requested by a page:
var system = require('system');
var page = require('webpage').create();
// Log every requested URL to stdout
page.onResourceRequested = function(req) {
    console.log(req.url);
};
page.open(system.args[1], function(status) {
    phantom.exit(status === 'success' ? 0 : 1);
});
Here it is in action:
$ phantomjs urls.js http://www.stackoverflow.com | head -n6
http://www.stackoverflow.com/
http://stackoverflow.com/
http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js
http://cdn.sstatic.net/Js/stub.en.js?v=06bb9dbfaca7
http://cdn.sstatic.net/stackoverflow/all.css?v=af4b547e0e9f
http://cdn.sstatic.net/img/share-sprite-new.svg?v=d09c08f3cb07
For my problem I'm not interested in images, and those can be filtered out by adding the PhantomJS arg --load-images=no.
The second piece is taking all of the URLs and searching them. It's not enough to just output the match, I also need the context around which URL was matched, and ideally which line number too. Here's how to do that:
$ cat urls | xargs -I% sh -c "curl -s % | grep -E -n -o '(.{0,30})SEARCH_TERM(.{0,30})' | sed 's#^#% #'"
We can wrap this all up in a little script, where we'll pipe the output back through grep to get color highlighting on the search string:
#!/bin/bash
phantomjs --load-images=no urls.js "$1" | xargs -I% sh -c "curl -s % | grep -E -n -o '(.{0,30})$2(.{0,30})' | sed 's#^#% #' | grep '$2' --color=always"
We can then use it to search for any term on any site. Here we're looking for adzerk.net on stackoverflow.com.
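Assuming the wrapper above is saved as initiator-search.sh (a name chosen here for illustration), the invocation is:

$ ./initiator-search.sh http://www.stackoverflow.com adzerk.net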
In this case the output shows that the adzerk.net request gets initiated somewhere around line 4158 of the main Stack Overflow page. It's not a perfect solution, because the invocation might be somewhere completely different from where the URL is defined, but it's probably close, and certainly a good place to start tracking down the exact invocation site.
There might be a better way to search the contents of each URL. It doesn't look like PhantomJS's onResourceReceived handler currently exposes the resource content, but there is ongoing work to address that, and once it's available all of this will be much simpler.
You can use Chrome's debugger protocol from a process external to Chrome or use the chrome.debugger API in a Chrome extension (see How to retrieve the Initiator of a request when extending Chrome DevTool?).

Has anyone faced the error "Error: No valid counters" using typeperf?

Has anyone faced this Error: No valid counters error when using the typeperf utility to write to a SQL database? I have tried a variety of different things, but every time I try to write to the SQL database using counters listed in a file, it fails with the No valid counters error.
The command was executed in the following fashion:
C:\>typeperf -cf "E:\DBA\CounterCollector\counters_eg.txt" -si 15 -sc 10 -f SQL -o SQL:SQLServerDS!log5
The counters_eg.txt file contains:
"\\<computername>\PhysicalDisk(* *)\Avg. Disk Queue Length"
I am able to write to the SQL database by specifying the counters individually at the command prompt. For example:
C:\Windows\system32>typeperf -f SQL -o SQL:SQLServerDS!log4 "\\<computername>\PhysicalDisk(* *)\Avg. Disk Queue Length"
Note: I have replaced the server name with <computername>.
Include a doubled percent sign, '%%', i.e.:
typeperf "\\<remote-IP>\Process(*)\%% Processor Time" -sc 1
Figured it out:
After following the example from
https://www.simple-talk.com/sql/performance/collecting-performance-data-into-a-sql-server-table/
I kept getting the same error message, Error: No valid counters. The counters.txt file is exactly the same as the example provided by Feodor, but when I put the counter names on the command line individually, they get processed successfully. The problem occurred when I tried to run the entire syntax.
Instead of using what Feodor used:
"TYPEPERF -f SQL -s ALF -cf “C:\CounterCollect\Counters.txt” -si 15 -o SQL:SQLServerDS!log1 -sc 4",
I tweaked it a little bit (after looking at the second example from http://technet.microsoft.com/en-us/library/cc753182.aspx) and finally it WORKED! It is a matter of switching the order of the parameters.
After following the demo by Feodor, I used the syntax below and it worked for me. I am using SQL Server 2012, and here is the command:
TYPEPERF -cf "C:\PerfMonCollect\Counters.txt" -si 5 -sc 4 -f SQL -o SQL:SQLdatasource!log1
Your counters list may be damaged. Run the perfmon GUI utility and make sure that you are able to see the counters there.
Make sure your file name is correct: counters.txt, NOT counters.txt.txt. Show file extensions, then check the file name. Also, you can paste the path to the text file into the Run dialog and see whether it opens.
I had the same issue and it drove me crazy.
I had this error and solved it by adding the user running typeperf to the local Administrators group on the servers that threw the error.
I was getting this error on a server (Windows Server 2012 R2) that I had admin rights on; I had to manually rebuild the performance counters and it was sorted. Here's the link: https://support.microsoft.com/en-us/help/2554336/how-to-manually-rebuild-performance-counters-for-windows-server-2008-6
The problem is that the file should contain only counter names, without " quote marks.
Removing all " characters from the counter list resolved the issue for me.
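Based on that, the counters file from the question would contain just the bare counter path, with the surrounding quotes removed:

\\<computername>\PhysicalDisk(* *)\Avg. Disk Queue Length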
