Issue with a REST API - suggestions for my problem - database

So I started building a REST API as part of an exercise. Anyhow, I wanted to ask the community: what would you do if requests tested through Postman came back as null or an empty array? In the mongo shell I can see there are two collections within my db, so the data appears to be there. I am confused and have not really found an answer.
For example, this is what the command line shows: ::1 - - [16/Nov/2022:11:04:14 +0000] "GET /movies HTTP/1.1" 201 2 or ::1 - - [16/Nov/2022:11:07:13 +0000] "GET /movies/The%20Lord%20of%20Rings:%20The%20Fellowship%20of%20the%20Ring HTTP/1.1" 200 4.
I have gone through the instructions and, as far as I can tell, there is no error in either the index file or the model.

It’s probably an error in your GET function. There is no reason for different behaviour between MongoDB on localhost and MongoDB Atlas unless you are using completely different versions.
Maybe share the code and someone can help debug it.
https://community.postman.com/t/tests-for-empty-response-body/15584
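In the meantime, one quick way to rule out the data layer is to query MongoDB directly from a tiny script and confirm that the database and collection names your API uses match what the mongo shell shows; a mismatch there is a very common cause of empty responses. This is only a sketch, assuming a local MongoDB and pymongo installed; the names myFlixDB and movies are placeholders for whatever your app actually uses.

import pymongo

# Placeholder connection string; swap in your Atlas URI if that is what the app uses.
client = pymongo.MongoClient("mongodb://localhost:27017")

# Placeholder database name -- make sure it matches the one in your app's connection code.
db = client["myFlixDB"]
print(db.list_collection_names())        # should list the two collections you see in the shell
print(db["movies"].count_documents({}))  # should be non-zero if the data is really there
print(db["movies"].find_one())           # sanity-check one document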

Related

App Engine logs showing HTTP 301 for nonexistent URLs

I am running a website on Google App Engine. From time to time I get out-of-control bots or perhaps brute-force hacking attempts that I see in my logs. Recently I've had a bot (I presume) trying to access administrator/index.php several times a second. That file doesn't exist on my site. If I try to access it myself, I get the standard 404 and a corresponding entry in my logs.
But for the bot I am seeing HTTP 301 in the logs and I'm wondering why. Does Google interpret the requests as a denial of service or other attack and automatically intervene? I haven't seen documentation stating as much, but I'm not sure why else I would be seeing a 301 instead of a 404 for the same URL.
Does anyone have an explanation for this?
The log entries shown in the screenshots can be clicked and expanded to view additional information. As mentioned in the comment above, two things could be checked there for further analysis of what's going on:
check the hostname the request came in to and see whether the redirect is the expected behaviour for that hostname.
if the JSON object is shown, navigate to protoPayload -> line -> [0] -> logMessage, where something like redirecting "http://example.com/" to "https://www.example.com/" should appear, which could also clear things up a bit.
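If it is easier to inspect the entry outside the Logs Viewer, the same field can be pulled out of an exported JSON entry with a short script. A minimal sketch, assuming one expanded entry has been saved as entry.json (the filename is just a placeholder):

import json

# Load a single exported log entry (placeholder filename).
with open("entry.json") as f:
    entry = json.load(f)

# Walk protoPayload -> line -> [0] -> logMessage as described above.
lines = entry.get("protoPayload", {}).get("line", [])
if lines:
    print(lines[0].get("logMessage"))  # e.g. redirecting "http://example.com/" to "https://www.example.com/"
else:
    print("no application log lines attached to this entry")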

AppEngine dev_appserver - urllib2.urlopen issue with localhost url

UPDATE
App Engine SDK 1.9.24 was released on July 20, 2015, so if you're still experiencing this, you should be able to fix this simply by updating. See +jpatokal's answer below for an explanation of the exact problem and solution.
Original Question
I have an application I'm working on, and I'm running into trouble when developing locally.
We have some shared code that checks an auth server for our apps using urllib2.urlopen. When I develop locally, the request made from within App Engine is rejected with a 404, but the same request succeeds just fine from a terminal.
I have App Engine running on localhost:8000 and the auth server on localhost:8001.
import urllib2
url = "http://localhost:8001/api/CheckAuthentication/?__client_id=dev&token=c7jl2y3smhzzqabhxnzrlyq5r5sdyjr8&username=amadison&__signature=6IXnj08bAnKoIBvJQUuBG8O1kBuBCWS8655s3DpBQIE="
try:
    r = urllib2.urlopen(url)
    print(r.geturl())
    print(r.read())
except urllib2.HTTPError as e:
    print("got error: {} - {}".format(e.code, e.reason))
which results in got error: 404 - Not Found from within AppEngine
It appears that AppEngine is adding the scheme, host and port to the PATH portion of the URL I'm trying to hit, as this is what I see on the auth server:
[02/Jul/2015 16:54:16] "GET http://localhost:8001/api/CheckAuthentication/?__client_id=dev&token=c7jl2y3smhzzqabhxnzrlyq5r5sdyjr8&username=amadison&__signature=6IXnj08bAnKoIBvJQUuBG8O1kBuBCWS8655s3DpBQIE= HTTP/1.1" 404 10146
and from the request headers we can see the whole scheme, host and port being passed along as part of the path (header pieces below):
'HTTP_HOST': 'localhost:8001',
'PATH_INFO': u'http://localhost:8001/api/CheckAuthentication/',
'SERVER_PORT': '8001',
'SERVER_PROTOCOL': 'HTTP/1.1',
Is there any way to not have the App Engine dev server hijack this request to localhost on a different port? Or am I misunderstanding what is happening? Everything works fine in production where our domains are different.
Thanks in advance for any assistance helping to point me in the right direction.
This is an annoying problem introduced by the urlfetch_stub implementation; I'm not sure which gcloud SDK version introduced it.
I've fixed this by patching the gcloud SDK until Google does, which means this answer will hopefully be irrelevant shortly.
Find and open urlfetch_stub.py, which can often be found at ~/google-cloud-sdk/platform/google_appengine/google/appengine/api/urlfetch_stub.py
Around line 380 (depends on version), find:
full_path = urlparse.urlunsplit((protocol, host, path, query, ''))
and replace it with:
full_path = urlparse.urlunsplit(('', '', path, query, ''))
more info
You were correct in assuming the issue was a broken PATH_INFO header. The full_path here is being passed after the connection is made.
disclaimer
I may very easily have broken proxy requests with this patch. Because I expect Google to fix it, I'm not going to go too crazy about it.
To be very clear, this bug is ONLY related to LOCAL app development - you won't see this in production.
App Engine SDK 1.9.24 was released on July 20, 2015, so if you're still experiencing this, you should be able to fix this simply by updating.
Here's a brief explanation of what happened. Until 1.9.21, the SDK was formatting URL fetch requests with relative paths, like this:
GET /test/ HTTP/1.1
Host: 127.0.0.1:5000
In 1.9.22, to better support proxies, this changed to absolute paths:
GET http://127.0.0.1:5000/test/ HTTP/1.1
Host: 127.0.0.1:5000
Both formats are perfectly legal per the HTTP/1.1 spec, see RFC 2616, section 5.1.2. However, while that spec dates to 1999, there are apparently quite a few HTTP request handlers that do not parse the absolute form correctly, instead just naively concatenating the path and the host together.
So in the interest of compatibility, the previous behavior has been restored. (Unless you're using a proxy, in which case the RFC requires absolute paths.)
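To illustrate why the absolute form trips up naive request handlers, here is a small sketch (not App Engine code, just an illustration of the parsing difference) that contrasts gluing host and request target together with actually parsing the target:

# Illustration only: a handler that treats the request target as a path breaks
# on the absolute form, while one that parses it properly does not.
try:
    from urllib.parse import urlsplit      # Python 3
except ImportError:
    from urlparse import urlsplit          # Python 2

host = "127.0.0.1:5000"

for target in ("/test/", "http://127.0.0.1:5000/test/"):
    # Naive handler: just glue host and request target together.
    naive = host + target
    # Correct handler: if the target is an absolute URI, take only its path.
    parts = urlsplit(target)
    path = parts.path if parts.scheme else target
    print("target=%r naive=%r parsed_path=%r" % (target, naive, path))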

Nagios check_http failing with 404 on certain address

I have built 2 Nagios servers this week. The first was just a proof of concept, and tonight I built the prod one. I followed the exact same instructions on both, and migrated my existing configuration over to the new server tonight. Everything works perfectly, except that some check_http checks are getting a 404 error, even though I can curl and wget the address. Example:
./check_http -I 127.0.0.1 -u http://11.210.1.18:8001/alphaweb/login.html
HTTP WARNING: HTTP/1.1 404 Not Found - 528 bytes in 0.000 second response time |time=0.000418s;;;0.000000 size=528B;;;0
I can curl this address with no problem. The following check, however, succeeds:
./check_http -I 127.0.0.1 -u http://11.210.1.16:7001
HTTP OK: HTTP/1.1 200 OK - 288 bytes in 0.001 second response time |time=0.000698s;;;0.000000 size=288B;;;0
Both of these checks work perfectly on an almost identical server. Any ideas?
The good thing is that you receive some HTTP status code; even a 404 is a good one, because you are at least interacting with the web server.
check log files on the target web servers
Assuming you have access to the target web server, I would recommend checking its log files.
Both requests, even the one with the 404 status code, should show up there. And here a surprise can come: you might find that while your check is getting some response, the log files show no record of it. In such a case I would suspect some proxy or iptables rule in the way.
double-check the spelling of your calls
But the cause could be much simpler: a small mistake in your command causing a significant difference.
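If it helps to compare, you can also reproduce the request by hand and watch what arrives in the target server's log. A rough sketch, assuming Python 3 is available on the Nagios host, using the host, port and path from the failing check:

import http.client

# Values taken from the failing check; adjust as needed.
conn = http.client.HTTPConnection("11.210.1.18", 8001, timeout=10)
conn.request("GET", "/alphaweb/login.html")
resp = conn.getresponse()
print(resp.status, resp.reason)  # compare with the 404 that check_http reports
print(resp.read(200))            # first bytes of the body, to see which page answered
conn.close()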

Google App Engine keeps spinning up "phantom" instances

I am trying to set up an email server, where people can email my app, and my app will process the information given in the body.
I was testing it last night and sent a bunch of emails to the server (22), but the last message went out at about 10pm Eastern. For whatever reason, when looking at the server logs this morning, they still say the app is receiving emails from my email address (which it definitely isn't). I was doing a lot of debugging, and I'm wondering whether GAE somehow recycles instances if there was an error? The behavior seems incredibly odd and I don't know how to stop it.
From my app.yaml, the instance is only loaded up when an email is received, and according to GAE's logs no one else is emailing my server.
Thanks!
Here's a bit from my logs:
2014-03-30 06:58:50.668 /_ah/mail/[emailaddress]@[myappname].appspotmail.com 500 519ms 2kb module=default version=main
0.1.0.20 - - [30/Mar/2014:03:58:50 -0700] "POST /_ah/mail/[emailaddress]@[myappname].appspotmail.com HTTP/1.1" 500 2076 - - " [myappname].appspot.com" ms=520 cpu_ms=21 cpm_usd=0.200348 app_engine_release=1.9.1 instance=00c61b117cc3c88c83fa0e016a492569707566
I 2014-03-30 06:58:50.164
Received a message from: [my personal email address]
Except I definitely didn't send an email at 6:58am..

Error 503 on Varnish setup

I have set up Varnish on my CentOS server which runs my Drupal site.
Browsing to any page returns a blank page due to a 503: Service Unavailable.
I have read many questions and answers about intermittent 503s, but this is occurring constantly. I can still browse to the site using www.example.com:8080.
I am running on CentOS 6 using this VCL:
https://raw.githubusercontent.com/NITEMAN/Varnish_VCL_samps-hacks/master/varnish3/drupal-base.vcl
I have also tried https://fourkitchens.atlassian.net/wiki/display/TECH/Configure+Varnish+3+for+Drupal+7.
Not sure where to even start in debugging this.
ADDITIONAL INFO:
NITEMAN's answer below provides some really helpful debugging suggestions.
In my case it was something very simple: I had left the default 127.0.0.1 in my default.vcl. Changing this to my real external IP got things working. I hope that is the correct thing to do!
As you're running my sample VCL, it should be easy to debug (try each step separately):
Make sure Apache is listening on 127.0.0.1:8080 (as it could be listening on another IP and not on the local loopback). netstat -lpn | grep 8080 should help.
Raise the backend timeouts (in case the server is very slow, although the defined timeouts are already huge). Requires a Varnish reload.
Disable the health probe (as Varnish could be marking the backend as sick). Comment out the probe basic block and the probe line on backend default. Requires a Varnish reload.
Disable Varnish logic by uncommenting the first return(pipe) in sub vcl_recv. Requires a Varnish reload.
When debugging, you should also provide:
varnishadm debug.health output
varnishlog output for a sample request
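One more quick check that can help localize the 503 is to request the same page both through Varnish and straight from the backend port and compare the status codes. A minimal sketch, assuming Varnish on port 80 and the backend on 8080 as in the question (www.example.com is a placeholder for the real hostname):

import urllib.request
import urllib.error

SITE = "www.example.com"  # placeholder for the real hostname

for label, port in (("via Varnish", 80), ("backend directly", 8080)):
    url = "http://%s:%d/" % (SITE, port)
    try:
        resp = urllib.request.urlopen(url, timeout=10)
        print(label, resp.status)
    except urllib.error.HTTPError as e:
        # a 503 that only shows up "via Varnish" points at the backend definition or probe
        print(label, e.code)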
Hope it helps!
