I am trying out nested applications in aiohttp but can't get it to work.
I want my URLs to look like localhost/greet/ and localhost/greet/abc, but the following code gives me 404 Not Found, so my routing must be incorrect.
I have not been able to find many online resources on this either.
Below is my code:
app = web.Application()
greet = web.Application()
app.router.add_get('/', index)
greet.router.add_get('/{name}', handle_name, name='name')
app.add_subapp('/greet/', greet)
web.run_app(app, host='127.0.0.1', port=8080)
async def handle_name(request):
    name = request.match_info.get('name', "Anonymous")
    txt = "Hello {}".format(name)
    return web.Response(text=txt)
Any guidance will be helpful!
Not entirely clear what your problem was, but this works fine:
from aiohttp import web

async def index_view(request):
    return web.Response(text='index\n')

async def subapp_view(request):
    name = request.match_info.get('name', "Anonymous")
    txt = "Hello {}\n".format(name)
    return web.Response(text=txt)

app = web.Application()
app.router.add_get('/', index_view)

greet = web.Application()
greet.router.add_get('/{name}', subapp_view)
app.add_subapp('/greet/', greet)

if __name__ == '__main__':
    web.run_app(app, host='127.0.0.1', port=8080)
and then testing with curl:
$ curl localhost:8080/
index
$ curl localhost:8080/greet/world
Hello world
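Note that /{name} by itself will not match the bare localhost:8080/greet/ URL from your question. One way to cover that as well (a quick sketch; the greet_index handler name is just illustrative) is to register a root route on the sub-application:

async def greet_index(request):
    return web.Response(text='Hello Anonymous\n')

greet.router.add_get('/', greet_index)  # matches localhost:8080/greet/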
Hope that answers your question.
I am building a web crawler app on Google App Engine. I want to use a POST request to pass variables to Flask; the received variables then become the input of my web crawler app. However, Flask only seems to accept a single variable from the POST. If I add another variable to the function, Flask crashes.
I have limited knowledge of Flask and Google App Engine. I have struggled with this problem for several days, and your help will be highly appreciated.
Failed function
# server-side function that does not work, with 2 variables passed
@app.route('/bac', methods=['GET', 'POST'])
def bac():
    request_json = request.get_json()
    filename = request_json["filename"]
    url = request_json["url"]
    # baseconnect.Baseconnect(url=url, filename=filename).run()
    return filename, url
#The function to post on client side
import requests
req = requests.Session()
data = req.post('https://project.appspot.com/bac',json={"filename":"yuan","url":"https:...f5"})
print(data.text)
# output:
Internal Server Error 500
Succeeded function
# server-side function that works, with 1 variable passed
@app.route('/bac', methods=['GET', 'POST'])
def bac():
    request_json = request.get_json()
    filename = request_json["filename"]
    # url = request_json["url"]
    # baseconnect.Baseconnect(url=url, filename=filename).run()
    return filename
#The function to post on client side
import requests
req = requests.Session()
data = req.post('https://project.appspot.com/bac',json={"filename":"yuan"})
print(data.text)
#output:
yuan
Flask seems to only accept a single variable. What is the problem?
The problem here is that a Flask view has to return something Flask can turn into a Response, and Flask treats return filename, url as the tuple shortcut return body, status (or return body, status, headers).
In this case, url is interpreted as the HTTP status code (or headers), which is obviously not right.
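For illustration, the tuple shortcut is intended for things like this (the routes here are made up):

@app.route('/created')
def created():
    return "created", 201                          # (body, status)

@app.route('/with-headers')
def with_headers():
    return "hello", 200, {"X-Custom-Header": "1"}  # (body, status, headers)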
You need flask.jsonify() to return the so-called 'multiple variables' in a proper format.
Something like this: (only the important part)
# In server-side code
from flask import request, jsonify

@app.route('/bac', methods=['GET', 'POST'])
def bac():
    request_json = request.get_json()
    filename = request_json["filename"]
    url = request_json["url"]
    # Do your logic here
    return jsonify({'filename_returned': filename, 'url_returned': url})
# client-side
import requests

req = requests.Session()
json_data = req.post('https://project.appspot.com/bac', json={"filename": "yuan", "url": "http:xxxxxxx"})
real_data = json_data.json()
# real_data should be the result you want
I'm following the docs and yet it appears the requests are still being made synchronously.
https://cloud.google.com/appengine/docs/standard/python/issue-requests
Here is my code:
rpcs = []
for url in urls:
    rpc = urlfetch.create_rpc()
    urlfetch.make_fetch_call(rpc, url)
    rpcs.append(rpc)

result = []
for rpc in rpcs:
    result.append(rpc.get_result().content)

return result
I did some profiling and compared this with requests.get, and both take exactly the same amount of time.
The URLs I'm fetching are from different sites, so I'm sure I don't have concurrency limitations on the server side.
Running on GAE Standard, Python 2.7.
I got it working, but for some reason only with callbacks. Also, it only works in production and not in the local environment. :D Here is the working code:
import functools
import json

from google.appengine.api import urlfetch


class ClassName(object):
    # Note: this is a class-level list, shared by all instances
    responses = []

    def fetch_concurrent_callback(self, rpc):
        response = rpc.get_result()
        json_response = json.loads(response.content)
        self.responses.append(json_response)

    def fetch_concurrent(self, urls):
        rpcs = []
        for url in urls:
            rpc = urlfetch.create_rpc()
            rpc.callback = functools.partial(self.fetch_concurrent_callback, rpc)
            urlfetch.make_fetch_call(rpc, url)
            rpcs.append(rpc)
        for rpc in rpcs:
            rpc.wait()
        return self.responses
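For completeness, usage looks something like this (the class name comes from the snippet above; the URLs are placeholders):

fetcher = ClassName()
results = fetcher.fetch_concurrent([
    'https://api.example.com/a.json',
    'https://api.example.com/b.json',
])
# results is a list of parsed JSON bodies, one per fetched URL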
I have a really simple web app. All the important stuff happens in index.py:
from google.appengine.api import users
import webapp2
import os
import jinja2
JINJA_ENVIRONMENT = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.path.dirname(__file__)),
    extensions=['jinja2.ext.autoescape'],
    autoescape=True)
def get_user():
    user = {}
    user['email'] = str(users.get_current_user())
    user['name'], user['domain'] = user['email'].split('@')
    user['logout_link'] = users.create_logout_url('/')
    return user
class BaseHandler(webapp2.RequestHandler):
    def dispatch(self):
        user = get_user()
        template_values = {'user': user}
        if user['domain'] != 'foo.com':
            template_values['page_title'] = 'Access Denied'
            template = '403'
        else:
            template_values['page_title'] = 'Home'
            template = 'index'
        template_engine = JINJA_ENVIRONMENT.get_template('%s.html' % template)
        self.response.write(template_engine.render(template_values))


app = webapp2.WSGIApplication([
    ('/', BaseHandler),
], debug=True)
I'm trying to be a good person and write some local unit tests but - after looking at the documentation - I am totally out of my depth. All I want is a basic framework where I can do something like:
python test_security.py
and simulate two users hitting the domain: one @foo.com who should get the index template, and one @bar.com who should get the 403 template.
Here's what I've got so far:
import sys
# I don't want to talk about it, let's just ignore this block
sys.path.append('C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2')
sys.path.append('C:\Program Files (x86)\Google\google_appengine\lib\webob-1.2.3')
sys.path.append('C:\Program Files (x86)\Google\google_appengine\lib\jinja2-2.6')
sys.path.append('C:\Program Files (x86)\Google\google_appengine\lib\yaml-3.10')
sys.path.append('C:\Program Files (x86)\Google\google_appengine\lib\jinja2-2.6')
sys.path.append('C:\Program Files (x86)\Google\google_appengine')
sys.path.append('C:\pytest')
# A few proper imports
import unittest
import webapp2
from google.appengine.ext import testbed
# Import the module I'd like to test
import index
class TestHandlers(unittest.TestCase):
    def test_hello(self):
        self.testbed = testbed.Testbed()
        self.testbed.init_user_stub()
        self.testbed.setup_env(USER_EMAIL='test@foo.com', USER_ID='1', USER_IS_ADMIN='0')

        request = webapp2.Request.blank('/')
        response = request.get_response(main.app)

        print "running test"
        self.assertEqual(response.status_int, 200)
        self.assertEqual(response.body, 'Hello, world!')
Predictably, this doesn't work at all. What am I missing? Am I just wildly overestimating how simple this should be?
If you're planning on invoking this with "python test_security.py", the magic words you are looking for are:
if __name__ == '__main__':
    unittest.main()
This will make your unit test run - at the moment all you're doing is defining it.
Note also that you'll need to change your request.get_response from "main.app" to "index.app".
I suspect (primarily based on the function names) that you should call self.testbed.init_user_stub() before calling self.testbed.setup_env(), not after.
Also you seem to be missing an initial self.testbed = testbed.Testbed() and possibly a testbed.activate() call.
You might want to check out this answer: https://stackoverflow.com/a/21139805/4495081
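Putting the suggestions from both answers together, a minimal sketch might look like the following (it assumes index.py exposes app as in the question, and that the 403 template renders the 'Access Denied' title; adjust the assertions to your actual HTML):

import unittest
import webapp2
from google.appengine.ext import testbed
import index

class TestSecurity(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_user_stub()

    def tearDown(self):
        self.testbed.deactivate()

    def _get_index_as(self, email):
        # Simulate a logged-in user, then issue a fake request to '/'
        self.testbed.setup_env(USER_EMAIL=email, USER_ID='1',
                               USER_IS_ADMIN='0', overwrite=True)
        request = webapp2.Request.blank('/')
        return request.get_response(index.app)

    def test_foo_user_gets_index(self):
        response = self._get_index_as('test@foo.com')
        self.assertEqual(response.status_int, 200)

    def test_bar_user_gets_403_template(self):
        response = self._get_index_as('test@bar.com')
        self.assertIn('Access Denied', response.body)

if __name__ == '__main__':
    unittest.main()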
I'm writing an action in a controller to download a PDF from the web-app directory. When I run the app with the run-app command it works perfectly fine, but when I create a war, the browser shows a File Not Found error. My code is below.
def pdfDownlod = {
    def pdfFileName = params.pdfFileName
    def pdfFile = new File('web-app/sales/resources/pdf/' + pdfFileName)
    response.setContentType("application/pdf")
    response.setHeader("Content-disposition", "attachment; filename=${pdfFileName}")
    response.outputStream << pdfFile?.getBytes()
    response.outputStream.flush()
    return
}
Please let me know the root cause of the problem and solution.
Thanks in advance.
You can try this syntax; it will work fine for you.
def pdfFile = new File(ServletContextHolder.servletContext.getRealPath('sales/resources/pdf/'+pdfFileName))
Enjoy.
That's because in your deployed war the web-app folder doesn't exist. To properly get resources in both dev and prod, use the grailsResourceLocator bean.
Example:
import org.springframework.core.io.Resource

class MyController {

    def grailsResourceLocator

    def pdfDownlod = {
        def pdfFileName = params.pdfFileName
        final Resource pdfFile = grailsResourceLocator.findResourceForURI('web-app/sales/resources/pdf/' + pdfFileName)
        response.setContentType("application/pdf")
        response.setHeader("Content-disposition", "attachment; filename=${pdfFileName}")
        response.outputStream << pdfFile?.file.bytes // << already flushes!
        return
    }
}
I use the Amazon Web Services API from within my Google App Engine application. Amazon has said that it will only accept signed requests from Aug 15, 2009. While they have given simple instructions for signing, I am not so knowledgeable about Python libraries for SHA256. The App Engine documentation says it supports pycrypto, but I was just wondering (read: being lazy) if anyone has already done this. Any code snippets you could share? Any issues I might be missing here?
Here is an example of a REST request based on lower-level (than boto) libraries. The solution was taken from http://cloudcarpenters.com/blog/amazon_products_api_request_signing.
All you need are valid values for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
def amazon_test_url():
    import base64, hashlib, hmac, time
    from urllib import urlencode, quote_plus

    AWS_ACCESS_KEY_ID = 'YOUR_KEY'
    AWS_SECRET_ACCESS_KEY = 'YOUR_SECRET_KEY'
    TEST_ISBN = '9780735619678'  # http://stackoverflow.com/questions/1711/what-is-the-single-most-influential-book-every-programmer-should-read
    base_url = "http://ecs.amazonaws.com/onca/xml"

    url_params = dict(
        Service='AWSECommerceService',
        Operation='ItemLookup',
        IdType='ISBN',
        ItemId=TEST_ISBN,
        SearchIndex='Books',
        AWSAccessKeyId=AWS_ACCESS_KEY_ID,
        ResponseGroup='Images,ItemAttributes,EditorialReview,SalesRank')
    # Can add Version='2009-01-06'. What is it BTW? API version?

    # Add an ISO 8601 compliant timestamp (in GMT)
    url_params['Timestamp'] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())

    # Sort the URL parameters by key
    keys = url_params.keys()
    keys.sort()

    # Get the values in the same order as the sorted keys
    values = map(url_params.get, keys)

    # Reconstruct the URL parameters and encode them
    url_string = urlencode(zip(keys, values))

    # Construct the string to sign
    string_to_sign = "GET\necs.amazonaws.com\n/onca/xml\n%s" % url_string

    # Sign the request
    signature = hmac.new(
        key=AWS_SECRET_ACCESS_KEY,
        msg=string_to_sign,
        digestmod=hashlib.sha256).digest()

    # Base64 encode the signature
    signature = base64.encodestring(signature).strip()

    # Make the signature URL safe
    urlencoded_signature = quote_plus(signature)

    url_string += "&Signature=%s" % urlencoded_signature

    print "%s?%s\n\n%s\n\n%s" % (base_url, url_string, urlencoded_signature, signature)
Pycrypto will work fine - it's supported on App Engine, though the public ciphers are implemented in Python rather than C. You also ought to be able to use one of the existing AWS libraries, now that urlfetch/httplib are supported on App Engine.
I have an app that uploads images to S3, and I've implemented the request signing myself, but mostly because I wrote it before urlfetch/httplib were available. It works just fine, however.
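For reference, the legacy S3 header-based signing that those older libraries use boils down to an HMAC-SHA1 over a canonical string; here is a rough sketch (bucket, key, content type and credentials are all placeholders, not code from my app):

import base64, hashlib, hmac, time

def s3_put_auth_headers(access_key, secret_key, bucket, key, content_type='image/png'):
    # Canonical string: VERB \n Content-MD5 \n Content-Type \n Date \n CanonicalizedResource
    date = time.strftime('%a, %d %b %Y %H:%M:%S GMT', time.gmtime())
    string_to_sign = '\n'.join(['PUT', '', content_type, date, '/%s/%s' % (bucket, key)])
    signature = base64.b64encode(hmac.new(secret_key, string_to_sign, hashlib.sha1).digest())
    return {'Date': date,
            'Content-Type': content_type,
            'Authorization': 'AWS %s:%s' % (access_key, signature)}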
Got this to work based on code sample at http://jjinux.blogspot.com/2009/06/python-amazon-product-advertising-api.html
Here is a slightly improved version that lets you merge a dict of call-specific params with the basic params before making the call.
keyFile = open('accesskey.secret', 'r')
# I put my secret key file in .gitignore so that it doesn't show up publicly
AWS_SECRET_ACCESS_KEY = keyFile.read()
keyFile.close()
def amz_call(self, call_params):
    AWS_ACCESS_KEY_ID = '<your-key>'
    AWS_ASSOCIATE_TAG = '<your-tag>'

    import time
    import urllib
    from boto.connection import AWSQueryConnection

    aws_conn = AWSQueryConnection(
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=Amz.AWS_SECRET_ACCESS_KEY, is_secure=False,
        host='ecs.amazonaws.com')
    aws_conn.SignatureVersion = '2'

    base_params = dict(
        Service='AWSECommerceService',
        Version='2008-08-19',
        SignatureVersion=aws_conn.SignatureVersion,
        AWSAccessKeyId=AWS_ACCESS_KEY_ID,
        AssociateTag=AWS_ASSOCIATE_TAG,
        Timestamp=time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime()))
    params = dict(base_params, **call_params)

    verb = 'GET'
    path = '/onca/xml'
    qs, signature = aws_conn.get_signature(params, verb, path)
    qs = path + '?' + qs + '&Signature=' + urllib.quote(signature)

    print "verb:", verb, "qs:", qs
    return aws_conn._mexe(verb, qs, None, headers={})
Sample usage:
result = self.amz_call({'Operation': 'ItemSearch', 'Keywords': searchString, 'SearchIndex': 'Books', 'ResponseGroup': 'Small'})
if result.status == 200:
    responseBodyText = result.read()
    # do whatever ...
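For context, what aws_conn.get_signature does there is roughly Signature Version 2 query signing. A standard-library-only sketch of that scheme (not boto's actual implementation; the host and path default to the Product Advertising endpoint above, and the credentials are placeholders):

import base64, hashlib, hmac, time, urllib

def sign_request_v2(params, secret_key, host='ecs.amazonaws.com', path='/onca/xml', verb='GET'):
    params = dict(params)
    params['SignatureMethod'] = 'HmacSHA256'
    params['SignatureVersion'] = '2'
    params['Timestamp'] = time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime())
    # Canonical query string: keys sorted, values percent-encoded per RFC 3986
    canonical = '&'.join('%s=%s' % (k, urllib.quote(str(params[k]), safe='-_.~'))
                         for k in sorted(params))
    string_to_sign = '\n'.join([verb, host, path, canonical])
    digest = hmac.new(secret_key, string_to_sign, hashlib.sha256).digest()
    params['Signature'] = base64.b64encode(digest)
    return params  # the caller URL-encodes these into the final query string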
See http://sowacs.appspot.com/AWS/Downloads/#python for a GAE Python signing service webapp. Uses native Python libraries.
I wrote another simple example that uses only the core python 3 libraries (not boto) and uses version 2 of the AWS signature protocol:
http://xocoatl.blogspot.com/2011/03/signing-ec2-api-request-in-python.html
I know it won't work in GAE, but might be useful for anyone just looking for AWS authentication examples like I was.
I use this, with pycrypto, to generate a custom policy:
import json
import time
from Crypto.Hash import SHA
from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
from base64 import b64encode
url = "http://*"
expires = int(time.time() + 3600)
pem = """-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----"""
key_pair_id = 'APK.....'
policy = {}
policy['Statement'] = [{}]
policy['Statement'][0]['Resource'] = url
policy['Statement'][0]['Condition'] = {}
policy['Statement'][0]['Condition']['DateLessThan'] = {}
policy['Statement'][0]['Condition']['DateLessThan']['AWS:EpochTime'] = expires
policy = json.dumps(policy)
private_key = RSA.importKey(pem)
policy_hash = SHA.new(policy)
signer = PKCS1_v1_5.new(private_key)
signature = b64encode(signer.sign(policy_hash))
print '?Policy=%s&Signature=%s&Key-Pair-Id=%s' % (b64encode(policy),
                                                  signature,
                                                  key_pair_id)
This allows me to use one key for multiple items, something like:
http://your_domain/image1.png?Policy...
http://your_domain/image2.png?Policy...
http://your_domain/file1.json?Policy...
Don't forget to enable pycrypto by adding these lines to app.yaml:
libraries:
- name: pycrypto
  version: latest