appengine.googleapis: method() takes exactly 1 argument (2 given) - google-app-engine

I'm getting this error when I try to get the domain for my Google App Engine application using the REST API. It says method() takes exactly 1 argument (2 given). Can anyone help me?
import logging

from flask import render_template
from google.oauth2 import service_account
from googleapiclient import discovery

SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
SERVICE_ACCOUNT_FILE = 'secret.json'

credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)
rest_service = discovery.build('appengine', 'v1', credentials=credentials)

@app.route('/audit/create', subdomain='<business>', methods=['GET'])
def audit_create(business):
    logging.debug(rest_service.apps().domainMappings().get('apps/branchify-dashboard/domainMappings/sample.branchify.co'))
    return render_template('audit/letsgetstarted.html')

I think you should add the domainMappingsId parameter name to your argument list:
rest_service.apps().domainMappings().get(domainMappingsId="apps/branchify-dashboard/domainMappings/sample.branchify.co")
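One thing worth adding: with the google-api-python-client discovery client, .get(...) only builds an HttpRequest; you still need to call .execute() on it to actually perform the call. As far as I know the method also takes the IDs as separate keyword arguments rather than one full resource path, so a hedged sketch of the corrected call might look like:
result = rest_service.apps().domainMappings().get(
    appsId='branchify-dashboard',           # the App Engine application ID
    domainMappingsId='sample.branchify.co'  # the mapped domain
).execute()
logging.debug(result)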

I found the issue and wrote a Medium article about it. For future readers who need this to work, you can view it here: https://medium.com/@timothyouano/google-app-engine-custom-domains-getting-and-creating-them-progammatically-using-python-162f9c38ba4

Related

scala - Gatling - I can't seem to use Session Variables stored from a request in a subsequent Request

The code:
package simulations

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class StarWarsBasicExample extends Simulation {

  // 1 Http Conf
  val httpConf = http.baseUrl("https://swapi.dev/api/films/")

  // 2 Scenario Definition
  val scn = scenario("Star Wars API")
    .exec(http("Get Number")
      .get("4")
      .check(jsonPath("$.episode_id")
        .saveAs("episodeId"))
    )
    .exec(session => {
      val movie = session("episodeId").as[String]
      session.set("episode", movie)
    }).pause(4)
    .exec(http("$episode")
      .get("$episode"))

  // 3 Load Scenario
  setUp(
    scn.inject(atOnceUsers(1)))
    .protocols(httpConf)
}
I am trying to grab a variable from the first GET request and inject it into a second request, but I am unable to do so despite following the documentation. There might be something I'm not understanding.
When I use breakpoints and step through the process, it appears the session function runs AFTER both of the other requests have completed (by which time it is too late). I can't seem to make that session step happen between the two requests.
Already answered on Gatling's community mailing list:
"$episode" is not correct Gatling Expression Language syntax. "${episode}" is correct, so the last request should read .exec(http("${episode}").get("${episode}")).

google cloud online glossary creation returning "empty resource name" error

I am following the EXACT steps indicated here:
https://cloud.google.com/translate/docs/glossary#create-glossary
to create an online glossary.
I am getting the following error:
madan@cloudshell:~ (focused-pipe-251317)$ ./rungcglossary
{
  "error": {
    "code": 400,
    "message": "Empty resource name.; Resource type: glossary",
    "status": "INVALID_ARGUMENT"
  }
}
Here is the body of my request.json:
{
  "languageCodesSet": {
    "languageCodes": ["en", "en-GB", "ru", "fr", "pt-BR", "pt-PT", "es"]
  },
  "inputConfig": {
    "gcsSource": {
      "inputUri": "gs://focused-pipe-251317-vcm/testgc.csv"
    }
  }
}
I copied the inputUri path from the file's URI box in the Google Cloud bucket.
I am not able to understand what the issue is. All I know is that something is wrong with the inputUri string.
Please help.
Thanks.
I am a Google Cloud Technical Support Representative, and we know that, for the moment, there is an issue with the REST API that is being tracked. I tried to reproduce your situation, and when I tried to create the glossary directly through the API I got the same issue as you.
After that, I tried to create the glossary programmatically using an HTTP-triggered Python Cloud Function and everything went just fine. This way the API is called with the Cloud Function's service account.
I will attach the code of my Python Cloud function:
from google.cloud import translate_v3beta1 as translate

def create_glossary(request):
    request_json = request.get_json()
    client = translate.TranslationServiceClient()

    ## Set your project name
    project_id = 'your-project-id'
    ## Set your wished glossary-id
    glossary_id = 'your-glossary-id'
    ## Set your location
    location = 'your-location'  # The location of the glossary

    name = client.glossary_path(
        project_id,
        location,
        glossary_id)

    language_codes_set = translate.types.Glossary.LanguageCodesSet(
        language_codes=['en', 'es'])

    ## SET YOUR BUCKET URI
    gcs_source = translate.types.GcsSource(
        input_uri='your-gcs-source-uri')

    input_config = translate.types.GlossaryInputConfig(
        gcs_source=gcs_source)

    glossary = translate.types.Glossary(
        name=name,
        language_codes_set=language_codes_set,
        input_config=input_config)

    parent = client.location_path(project_id, location)
    operation = client.create_glossary(parent=parent, glossary=glossary)

    result = operation.result(timeout=90)
    print('Created: {}'.format(result.name))
    print('Input Uri: {}'.format(result.input_config.gcs_source.input_uri))
    # An HTTP-triggered function must return a response body
    return 'Created: {}'.format(result.name)
The requirements.txt should include the following dependencies:
google-cloud-translate==1.4.0
google-cloud-storage==1.14.0
Do not forget to modify the code with your own parameters.
Basically, I have just followed the same tutorial as you, but for Python, and I used Cloud Functions. My guess is that you could use App Engine Standard as well. This may be an issue with the service account that is used to call this API. In case this doesn't work for you, let me know and I will try to edit my answer.
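For completeness, here is a minimal sketch of how the deployed function could be invoked over HTTP; the URL (region, project, and function name) is an assumption for illustration:
import requests

# Hypothetical trigger URL; substitute your region, project, and function name.
url = 'https://us-central1-your-project-id.cloudfunctions.net/create_glossary'
resp = requests.post(url, json={})  # the sample function reads a JSON body but does not use it
print(resp.status_code, resp.text)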

Upload to Amazon S3 using Boto3 and return public url

I am trying to upload files to S3 using Boto3, make the uploaded file public, and return it as a URL.
class UtilResource(BaseZMPResource):
    class Meta(BaseZMPResource.Meta):
        queryset = Configuration.objects.none()
        resource_name = 'util_resource'
        allowed_methods = ['get']

    def post_list(self, request, **kwargs):
        fileToUpload = request.FILES
        # write code to upload to amazon s3
        # see: https://boto3.readthedocs.org/en/latest/reference/services/s3.html
        self.session = Session(aws_access_key_id=settings.AWS_KEY_ID,
                               aws_secret_access_key=settings.AWS_ACCESS_KEY,
                               region_name=settings.AWS_REGION)
        client = self.session.client('s3')
        client.upload_file('zango-static', 'fileToUpload')
        url = "some/test/url"
        return self.create_response(request, {
            'url': url  # should return the public url of the uploaded file
        })
I searched the whole documentation but couldn't find anything that describes how to do this. Can someone explain, or point me to a resource where I can find the solution?
I'm in the same situation.
Not able to find anything in the Boto3 docs beyond generate_presigned_url, which is not what I need in my case since I have publicly readable S3 objects.
The best I came up with is:
bucket_location = boto3.client('s3').get_bucket_location(Bucket=s3_bucket_name)
object_url = "https://s3-{0}.amazonaws.com/{1}/{2}".format(
    bucket_location['LocationConstraint'],
    s3_bucket_name,
    key_name)
You might try posting on the boto3 github issues list for a better solution.
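One caveat with this approach: for buckets in us-east-1, get_bucket_location returns a LocationConstraint of None, so a small guard helps. A sketch, with s3_bucket_name and key_name as placeholders:
import boto3

s3_bucket_name = 'my-bucket'   # placeholder bucket name
key_name = 'path/to/object'    # placeholder object key

location = boto3.client('s3').get_bucket_location(
    Bucket=s3_bucket_name)['LocationConstraint']
region = location or 'us-east-1'  # buckets in us-east-1 report None
object_url = "https://{0}.s3.{1}.amazonaws.com/{2}".format(
    s3_bucket_name, region, key_name)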
I had the same issue.
Assuming you know the bucket name where you want to store your data, you can then use the following:
import boto3
from boto3.s3.transfer import S3Transfer

credentials = {
    'aws_access_key_id': aws_access_key_id,
    'aws_secret_access_key': aws_secret_access_key
}
client = boto3.client('s3', 'us-west-2', **credentials)
transfer = S3Transfer(client)
transfer.upload_file('/tmp/myfile', bucket, key,
                     extra_args={'ACL': 'public-read'})
file_url = '%s/%s/%s' % (client.meta.endpoint_url, bucket, key)
The best solution I found is still to use generate_presigned_url, just with the client's Config.signature_version set to botocore.UNSIGNED.
The following returns the public link without the signing stuff.
import boto3
import botocore
from botocore.client import Config

config = Config(signature_version=botocore.UNSIGNED)
boto3.client('s3', config=config).generate_presigned_url(
    'get_object', ExpiresIn=0, Params={'Bucket': bucket, 'Key': key})
The relevant discussions on the boto3 repository are:
https://github.com/boto/boto3/issues/110
https://github.com/boto/boto3/issues/169
https://github.com/boto/boto3/issues/1415
For anybody who wants to build a direct URL to a publicly accessible object and avoid generate_presigned_url for some reason: build the URL with urllib.parse.quote_plus to account for whitespace and special characters.
My object key: 2018-11-26 16:34:48.351890+09:00.jpg
(note the whitespace and the ':' characters)
The S3 public link in the AWS console: https://s3.my_region.amazonaws.com/my_bucket_name/2018-11-26+16%3A34%3A48.351890%2B09%3A00.jpg
The code below worked for me:
from urllib.parse import quote_plus

import boto3

s3_client = boto3.client('s3')
bucket_location = s3_client.get_bucket_location(Bucket='my_bucket_name')
url = "https://s3.{0}.amazonaws.com/{1}/{2}".format(
    bucket_location['LocationConstraint'],
    'my_bucket_name',
    quote_plus('2018-11-26 16:34:48.351890+09:00.jpg'))
print(url)
Going through the existing answers and their comments, I did the following, and it works well for special cases of file names, like names containing whitespace or special (ASCII) characters, and corner cases such as file names of the form "key=value.txt":
import boto3
import botocore

config = botocore.client.Config(signature_version=botocore.UNSIGNED)
object_url = boto3.client('s3', config=config).generate_presigned_url(
    'get_object', ExpiresIn=0,
    Params={'Bucket': s3_bucket_name, 'Key': key_name})
print(object_url)
For Django: if you use django-storages with boto3, the code below does exactly what you want:
default_storage.url(name=f.name)
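For context, a minimal settings sketch for django-storages with its boto3 backend; the bucket name is a placeholder:
# settings.py (django-storages, S3/boto3 backend)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'my-bucket'  # placeholder
AWS_DEFAULT_ACL = 'public-read'        # so default_storage.url() points at readable objects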
I used an f-string for the same purpose:
import boto3

# s3_client = boto3.session.Session(profile_name='sssss').client('s3')
s3_client = boto3.client('s3')
s3_bucket_name = 'xxxxx'
s3_website_URL = (
    f"http://{s3_bucket_name}.s3-website."
    f"{s3_client.get_bucket_location(Bucket=s3_bucket_name)['LocationConstraint']}"
    f".amazonaws.com"
)
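Note that the s3-website endpoint only resolves if static website hosting is enabled on the bucket. For a plain object URL without website hosting, the regional REST endpoint from the earlier answers works; the key below is a made-up placeholder:
object_key = 'some/object.txt'  # hypothetical key
region = s3_client.get_bucket_location(Bucket=s3_bucket_name)['LocationConstraint']
object_url = f"https://{s3_bucket_name}.s3.{region}.amazonaws.com/{object_key}"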

Google App Engine logout url

I am having problems getting the logout link to work in GAE (Python).
This is the page I am looking at.
In my template, I create a link
<p>Logout</p>
But when I click on it I get "broken link" message from Chrome. The url for the link looks like this:
http://localhost:8085/users.create_logout_url(
My questions:
Can anybody explain how this works in general?
What is the correct url for the dev server?
What is the correct url for the app server?
What is the ("/") in the logout url?
Thanks.
EDIT
This link works; but I don't know why:
<p>Logout</p>
What sort of templates are you using? It's clear from the output that you're not escaping your code correctly.
Seems to me that you want to do this instead:
self.response.out.write("This is the url: %s", users.create_logout_url("/"))
You could also pass it to your template, using GAE's bundled Django templates.
from google.appengine.ext.webapp import template
...
...
(inside your request handler)
class Empty: pass
data = Empty()
data.logout = users.create_logout_url("/")
self.response.out.write(template.render(my_tmpl, {'data': data}))
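In the template itself the value can then be referenced the usual Django-template way; this markup is illustrative:
<p><a href="{{ data.logout }}">Logout</a></p>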
A useful approach is to add all sorts of info to a BaseRequestHandler and then use it as the base class for all of your other request handler classes.
from google.appengine.ext import webapp
...
class BaseRequestHandler(webapp.RequestHandler):
    def __init__(self):
        webapp.RequestHandler.__init__(self)  # extend the base class
        class Empty: pass
        self.data = Empty()  # attach to self so subclasses can reach it
        self.data.foo = "bar"
Then your new classes will have access to all the data you provided in the base class.
class OtherHandler(BaseRequestHandler):
    def get(self):
        self.response.out.write("This is foo: %s" % self.data.foo)  # writes the str "bar"
Hope it helps.
A.
Hi, I am following more or less what this article shows for the user account stuff. In GWT, I store the logout/login URLs server side and pass them to the client:
http://www.dev-articles.com/article/App-Engine-User-Services-in-JSP-3002

Using Google App Engine's Cron service to extract data from a URL

I need to scrape a simple webpage which has the following text:
Value=29
Time=128769
The values change frequently.
I want to extract the Value (29 in this case) and store it in a database. I want to scrape this page every 6 hours. I am not interested in displaying the value anywhere; I am just interested in the cron. Hope I made sense.
Please advise me if I can accomplish this using Google's App Engine.
Thank you!
"Please advise me if I can accomplish this using Google's App Engine."
Sure! E.g., in Python, use urlfetch (with the URL as the argument) to get the contents, then a simple re.search(r'Value=(\d+)', contents).group(1) (if the contents are as simple as you're showing) to get the value, and a db.put to store it. Do you want the Python details spelled out, or do you prefer Java?
Edit: urllib / urllib2 would also be feasible (GAE does support them now).
So cron.yaml should be something like:
cron:
- description: refresh "value"
  url: /refvalue
  schedule: every 6 hours
and app.yaml something like:
application: valueref
version: 1
runtime: python
api_version: 1

handlers:
- url: /refvalue
  script: refvalue.py
  login: admin
You may have other entries in either or both, of course, but this is the subset needed to "refresh the value". A possible refvalue.py might be:
import re
import wsgiref.handlers

from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.api import urlfetch

class Value(db.Model):
    thevalue = db.IntegerProperty()
    when = db.DateTimeProperty(auto_now_add=True)

class RefValueHandler(webapp.RequestHandler):
    def get(self):
        resp = urlfetch.fetch('http://whatever.example.com')
        mo = re.match(r'Value=(\d+)', resp.content)
        if mo:
            val = int(mo.group(1))
        else:
            val = None
        valobj = Value(thevalue=val)
        valobj.put()

def main():
    application = webapp.WSGIApplication(
        [('/refvalue', RefValueHandler),], debug=True)
    wsgiref.handlers.CGIHandler().run(application)

if __name__ == '__main__':
    main()
Depending on what else your web app is doing, you'll probably want to move the class Value to a separate file (e.g. models.py with other models) which of course you'll then have to import (from this .py file and from others which do something interesting with all of your saved values). Here I've taken some possible anomalies into account (no Value= found on the target page) but not others (the target page's server does not respond or gives an error); it's hard to know exactly what anomalies you need to consider and what you want to do if they occur (what I'm doing here is very simply recording None as the value at the anomaly's time, but you may want to do more... or less -- I'll leave that up to you!-)
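If you do want to cover the target server not responding, a minimal variation of the handler body (same names as above; urlfetch.Error is the base class for fetch failures) might be:
try:
    resp = urlfetch.fetch('http://whatever.example.com')
    mo = re.match(r'Value=(\d+)', resp.content)
    val = int(mo.group(1)) if mo else None
except urlfetch.Error:
    val = None  # record the anomaly's time with no value, as above
Value(thevalue=val).put()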
