How to execute a batch file after a successful server-side SSH connection in Windows 10 - batch-file

I would like to set up an SSH server on my PC and access it from the Internet, but I would like to be notified of each successful SSH connection (and maybe of failures too, to check whether there is a brute-force attack going on). I'm using OpenSSH-Win64, but I can't see any option to execute a batch file that would make a curl call. I saw that on Linux it is possible to use ForceCommand, but that directive does not seem to work on Windows.
I tried adding this at the end of the sshd_config file:
ForceCommand C:/tools/send_sms.bat
send_sms.bat:
curl --header "Access-Token: <myToken>" --header "Content-Type: application/json" --data-binary "{\"data\":{\"addresses\":[\"<myPhoneNumber>\"],\"message\":\"Someone successfully connected with SSH\",\"target_device_iden\":\"<myDeviceId>\"}}" --request POST https://api.pushbullet.com/v2/texts
but nothing happens on a successful connection, although the .bat file works when I run it manually.
I could watch the C:\ProgramData\ssh\logs\sshd.log file, but that's a little tricky.
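Something like this polling loop is what I have in mind (a sketch only; it assumes sshd.log marks successful logins with a line containing "Accepted", as OpenSSH does on Linux, so the search string may need adjusting):
@echo off
rem Sketch: poll sshd.log and call send_sms.bat for each new successful login.
rem Assumption: successful logins produce a line containing "Accepted";
rem check your sshd.log and adjust the search string if needed.
set "LOG=C:\ProgramData\ssh\logs\sshd.log"
rem Start from the current count so old entries don't trigger an alert.
for /f %%c in ('find /c "Accepted" ^< "%LOG%"') do set "LAST=%%c"
:loop
timeout /t 10 /nobreak >nul
for /f %%c in ('find /c "Accepted" ^< "%LOG%"') do set "COUNT=%%c"
if %COUNT% gtr %LAST% (
    set "LAST=%COUNT%"
    call C:\tools\send_sms.bat
)
goto loop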
Does someone have any solution/idea?
Thank you

Related

Sending song metadata to a v2.6 Shoutcast server using CURL in a batch file

I am attempting to send song metadata to a v2.6 Shoutcast server. Obviously, I can’t share the username and passwords here, but if I enter the following format in a web browser, it works:
http://<serverusername>:<serverpassword>#<IPaddress>:<port>/admin.cgi?<streampassword>&mode=updinfo&song=XXXXXXX – XXXXXXX
However, I want to send updated metadata using a Windows Batch file. I have used CURL successfully before to send HTTP commands but I don’t appear to be able to get it to work in this instance. I have tried various CURL configurations such as:
CURL http://<serverusername>:<serverpassword>#<IPaddress>:<port>/admin.cgi -d <streampassword>&mode=updinfo&song=XXXXXXX – XXXXXXX
It does appear to be connecting to the server since it returns quite a bit of log information, but no errors. If I change the username or password, then it fails authentication.
Any suggestions?
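No answer is recorded here, but one common culprit: in cmd.exe an unquoted & ends the command, so everything from &mode=updinfo onward never reaches curl, and -d additionally turns the request into a POST while the browser-tested version is a plain GET. Quoting the whole URL and using @ (the standard separator between credentials and host; browsers tolerate other forms) would be worth a try:
CURL "http://<serverusername>:<serverpassword>@<IPaddress>:<port>/admin.cgi?<streampassword>&mode=updinfo&song=XXXXXXX%%20-%%20XXXXXXX"
Note that inside a .bat file the % of any percent-encoded character must be doubled (e.g. %%20 for a space); at an interactive prompt a plain %20 is fine.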

LibCurl Functions to produce tokens or hash shared-secret

Does LibCurl provide some functionality to produce tokens or to hash/salt a string and a shared secret? My C++ program will upload files to the server, and my server script will authenticate that the HTTP POST is coming from my C++ application and not someone else. So I'll send an auth token or hash in the query string that the server script can compare with its own to authenticate the request.
I've seen that you can authenticate using curl --user name:password http://www.example.com, but can't a user just read the binary executable and see the username and password?
Then again, maybe I am reinventing the wheel with my auth approach. Does LibCurl or another C++ library provide the ability to perform shared-secret authentication?
I haven't tried this, but the command line supports a netrc file, as described here: https://stackoverflow.com/a/27894407/1542667. This is more secure when using the command line, as you don't make your password visible to everyone on the same host via the ps command.
It looks like you could use the same approach for libcurl:
https://curl.haxx.se/libcurl/c/CURLOPT_NETRC_FILE.html
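For the command-line case, that looks roughly like this (host, credentials, and file name are placeholders): a netrc file, say netrc-example, containing
machine www.example.com
login myuser
password mysecret
and then
curl --netrc-file netrc-example https://www.example.com/upload
This keeps the secret out of the command line and out of the binary; the libcurl CURLOPT_NETRC_FILE option linked above is the programmatic equivalent.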

Nagios check_http failing with 404 on certain address

I have built 2 Nagios servers this week. The first was just a proof of concept, and tonight I built the prod one. I followed the exact same instructions on both and migrated my existing configuration over to the new server tonight. Everything works perfectly, except that some check_http checks are getting a 404 error, even though I can curl and wget the address. Example:
./check_http -I 127.0.0.1 -u http://11.210.1.18:8001/alphaweb/login.html
HTTP WARNING: HTTP/1.1 404 Not Found - 528 bytes in 0.000 second response time |time=0.000418s;;;0.000000 size=528B;;;0
I can curl this address with no problem. But the following succeeds:
./check_http -I 127.0.0.1 -u http://11.210.1.16:7001
HTTP OK: HTTP/1.1 200 OK - 288 bytes in 0.001 second response time |time=0.000698s;;;0.000000 size=288B;;;0
Both of these checks work perfectly on an almost identical server, any ideas?
The good thing is that you receive some HTTP status code; even a 404 is a good one, because it means you are at least interacting with a web server.
Check the log files on the target web servers
Assuming you have access to the target web server, I would recommend checking its log files.
Both requests, even the one with the 404 status code, should show up there. And here a surprise may await you: you might find that while your check is getting some response, your log file shows no record of it. In such a case I would suspect some proxy or iptables rule in the way.
Double-check the spelling of your calls
But the cause could be much simpler: a small mistake in your command causing a significant difference.
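One concrete thing to double-check (my reading of check_http, not part of the original answer): -u takes the path to request from the host given with -I or -H, and the port comes from -p. Passing a full http://host:port/... URL to -u means the request still goes to 127.0.0.1 on port 80, with the absolute URL as the request path, which a server (or local proxy) may well answer with a 404. Spelling the failing check out explicitly would look like:
./check_http -I 11.210.1.18 -p 8001 -u /alphaweb/login.html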

How to test Mirror API Subscriptions

The restrictions of an https callbackUrl and the nature of the subscriptions as a whole make it seem like this is something that can only be done with a publicly accessible URL.
So far I have come across two potential solutions to make local development / debugging easier.
The first is the Subscription Proxy service offered by Google. This workaround essentially lets you remove the SSL restriction and proxy subscription callbacks to a custom URL.
The second and most helpful way I have found to do development locally is to capture a subscription callback request (say from a server that is publicly accessible) into a log and then use curl to reproduce that request on your local/dev machine using something like:
curl -H "Content-type: application/json" -X POST \
-d '{"json for":"the notification"}' http://localhost:8080/notify
Since the requests can sometimes be large, or you might want to test multiple callback types, I also found it useful to put the JSON of the subscription request into various files (e.g. timeline-respond.json) and then run
curl -H "Content-Type: application/json" \
--data @timeline-respond.json http://localhost:8080/notify
I'm curious as to what other people are doing to test their application subscriptions locally.
The command line curl technique that you mention is the best I've found to date.
I've experimented with other solutions, such as an App Engine subscription target paired with a local script that polls that App Engine service for new notifications and relays them to localhost, but so far I haven't found one that's worth the added complexity.
Alternatively, there are many localhost proxies available. My favorite is ngrok.com.
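With ngrok, exposing a local server is a one-liner (assuming a current ngrok release and a local server on port 8080; older releases used just ngrok 8080), and the https forwarding URL it prints can be registered as the callbackUrl:
ngrok http 8080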
You might want to give localtunnel a try.

Download Log from AppEngine Including Python Log Statements

I know you can download the raw access logs with appcfg.py, but I'm really interested in all the information around a specific request, like Python logging statements, exceptions, and API statistics (just like the online log viewer). Does anyone know if there is a way to get that information other than having to build it yourself?
In case anyone is wondering: we want to do some continuous statistical analysis of problems and display them on a large screen on a wall in the office.
Sure - just pass the --severity flag to appcfg.py:
$ appcfg.py help request_logs
Usage: appcfg.py [options] request_logs <directory> <output_file>

Write request logs in Apache common log format.

The 'request_logs' command exports the request logs from your application
to a file. It will write Apache common log format records ordered
chronologically. If output file is '-' stdout will be written.

Options:
  -h, --help            Show the help message and exit.
  -q, --quiet           Print errors only.
  -v, --verbose         Print info level logs.
  --noisy               Print all logs.
  -s SERVER, --server=SERVER
                        The server to connect to.
  --insecure            Use HTTP when communicating with the server.
  -e EMAIL, --email=EMAIL
                        The username to use. Will prompt if omitted.
  -H HOST, --host=HOST  Overrides the Host header sent with all RPCs.
  --no_cookies          Do not save authentication cookies to local disk.
  --passin              Read the login password from stdin.
  -A APP_ID, --application=APP_ID
                        Override application from app.yaml file.
  -V VERSION, --version=VERSION
                        Override (major) version from app.yaml file.
  -n NUM_DAYS, --num_days=NUM_DAYS
                        Number of days worth of log data to get. The cut-off
                        point is midnight UTC. Use 0 to get all available
                        logs. Default is 1, unless --append is also given;
                        then the default is 0.
  -a, --append          Append to existing file.
  --severity=SEVERITY   Severity of app-level log messages to get. The range
                        is 0 (DEBUG) through 4 (CRITICAL). If omitted, only
                        request logs are returned.
  --vhost=VHOST         The virtual host of log messages to get. If omitted,
                        all log messages are returned.
  --include_vhost       Include virtual host in log messages.
  --end_date=END_DATE   End date (as YYYY-MM-DD) of period for log data.
                        Defaults to today.
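For example, to pull app-level Python logging output at DEBUG and above alongside the request logs (paths are placeholders):
appcfg.py --severity=0 request_logs /path/to/your/app/ logs.txt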
This is what works really well for us:
appcfg.py --append --num_days=0 --include_all request_logs /path/to/your/app/ /var/log/gae/yourapp.log
Anyway, the line above will fetch all your log records and append them to a log file if you've run it before; if not, it will create a new log file. It actually looks at your existing log (if it's there) and will not fetch any duplicates. You can run it without --append if you want, but use it if you are automating log downloads.
The key here is the --include_all flag, which seems to be undocumented. This flag fetches all the data you see in GAE's web log viewer, so you get fields such as: ms=71 cpu_ms=32 api_cpu_ms=12 cpm_usd=0.000921... etc.
OK, I hope that helps someone.
BTW, we wrote up a blog post on this, check it out here.
I seem to be running into the 100M limit with appcfg, so I ended up using the logservice API to get the logs.
Here's the code - https://github.com/manasg/gae-log-fetcher
Here is a way to access the raw logs so you can do further processing without custom parsing (also, for me request_logs was not downloading all the data for the specified time frame).
Here is an app which runs on App Engine itself:
https://gaelogapp.appspot.com/
You can easily add this functionality to your app by updating app.yaml and copying logs.py:
https://github.com/okigan/gaelogapp
