I have a Flask app that sends an image to Google Cloud Vision (GCV) via its API endpoint (https://vision.googleapis.com/v1/images:annotate). The app does two things:
Send the image to Cloud Vision for text extraction
Resend the same image to Cloud Vision, rotated by positive 89 degrees (both calls are sketched below)
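For concreteness, here is a minimal sketch of what those two calls could look like; the API-key placeholder, the TEXT_DETECTION feature, and client-side rotation with Pillow are assumptions, not details confirmed by the question:

import base64
import io

import requests
from PIL import Image

# Placeholder endpoint with an API key; the app may authenticate differently.
API_URL = "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY"

def annotate(image_bytes):
    # Vision expects the image content base64-encoded inside a JSON envelope.
    payload = {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "TEXT_DETECTION"}],
        }]
    }
    return requests.post(API_URL, json=payload).json()

with open("photo.jpg", "rb") as f:  # illustrative file name
    original = f.read()

# Step 1: the image as-is.
first = annotate(original)

# Step 2: the same image rotated by +89 degrees (Pillow rotates counter-clockwise).
rotated = Image.open(io.BytesIO(original)).rotate(89, expand=True)
buf = io.BytesIO()
rotated.save(buf, format="JPEG")  # assumes an RGB/JPEG source image
second = annotate(buf.getvalue())

# The detected locale sits on the first text annotation of each response.
print(first["responses"][0]["textAnnotations"][0]["locale"])
print(second["responses"][0]["textAnnotations"][0]["locale"])

Note that the locale is auto-detected unless the request's imageContext supplies languageHints (e.g. ["en"]); pinning a hint is one way to make the output deterministic across environments.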
On my local machine:
Step 1: returns an output with locale being 'zh-Hant'
Step 2: returns an output with locale being 'en'
On my remote machine (Heroku):
Step 1: returns an output with locale being 'en'. Because this differs from the output of step 1 on my local machine, my code fails.
How is it that, for the same image and the same API, Cloud Vision returns a different result?
I'm running a Python 3.7 App Engine application in the Standard Environment. In general, my time-to-first-byte for requests to this application ranges from 70-150 ms when doing simple server-side rendering of text. My app runs in the us-central region; I'm testing from California.
I've been benchmarking requests to Google Drive API v3. I have a request which simply takes the drive file ID in the URL and returns some metadata about the file. The simplified code looks like:
import json

from googleapiclient.discovery import build

def get(file_id: str):
    credentials = oauth2.get_credentials()  # project-specific oauth2 helper; FIELDS defined elsewhere
    service = build("drive", "v3", cache_discovery=False, credentials=credentials)
    # .execute() performs the actual HTTP call to the Drive API
    data = service.files().get(fileId=file_id, fields=",".join(FIELDS), supportsTeamDrives=True).execute()
    return json.dumps(data)
My understanding is that this request should be making a single request to the Drive API. I'm seeing 400-500ms time-to-first-byte coming back from the server. That implies ~300ms to get data from the Drive API. This seems quite high to me.
My questions:
* Is this kind of per-API-call latency common for Google Drive API?
* If this is not common, what is common?
* What steps, if any, could I take to reduce the amount of time talking to the Drive API? (one possible step is sketched below)
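One possible step, sketched below under the assumption that the oauth2 helper and FIELDS are as in the snippet above: build the service object once per process instead of once per request, so repeated calls skip the credential and discovery-document setup.

import json

from googleapiclient.discovery import build

_service = None  # module-level cache, built lazily on first use

def _get_service():
    global _service
    if _service is None:
        credentials = oauth2.get_credentials()  # same project-specific helper as above
        _service = build("drive", "v3", cache_discovery=False, credentials=credentials)
    return _service

def get(file_id: str):
    # Only the files().get() round trip remains on the hot path.
    request = _get_service().files().get(
        fileId=file_id, fields=",".join(FIELDS), supportsTeamDrives=True)
    return json.dumps(request.execute())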
I have a Google Cloud project and I want to see the logs of all API hits (request and response parameters) in GCP. In AWS we have the S3 browser to access the logs folder. What is the equivalent in GCP?
In GCP, logs are not stored on a filesystem and there is no logs folder, so "equivalent" is a bit relative.
Most (if not all) GCP products funnel their logs through Stackdriver Logging, which offers a somewhat consistent interface for viewing and/or further processing/exporting them (see Basic Concepts).
The structure and content/details of a particular log entry depend on the log type and the particular GCP product that produced it (and/or its flavour). For App Engine, for example, the environment being used (1st generation standard, 2nd generation standard, or flexible) matters for the log entry content.
At least for the 1st generation standard environment (which I use), the request response times (and all other parameters logged and available for all requests and their corresponding replies) are captured in the request logs:
Wallclock time (field 11; always present): total clock time in milliseconds spent by App Engine on the request. This duration does not include time spent between the client and the server running the instance of your application. Example: ms=195.
I am running a Java Microengine on GAE.
I have my own HTML pages for data input and output. When there is a data error and the engine cannot complete its execution (crashes), the microengine spits out the response "Server not available, please try later."
In order to debug, I run the dataset in the dev environment as a Java application and identify the error in the console output.
Is there a way to capture the "error" (the equivalent of the console output when run as a Java application) as a string and send it as the content of the servlet response from the deployed application in GAE?
thanks,
Assuming you are using an App Engine Managed VM and a logging framework, you should also forward the log entries to a file, e.g. /var/log/app_engine/custom_logs/app.log
https://cloud.google.com/appengine/docs/managed-vms/custom-runtimes#logging
Subsequently, you'll be able to read the output from Google Cloud Logging.
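For illustration, forwarding log entries to that file could look like the following; Python's standard logging module is used for brevity, though the same idea applies to a Java logging framework pointed at the same path:

import logging

# Route application logs to the file the Managed VM runtime collects
# (path from the docs linked above).
handler = logging.FileHandler("/var/log/app_engine/custom_logs/app.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.error("engine crashed on input: %s", "dataset-42")  # illustrative entry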
I have a script that uploads a photo to Google Compute Engine to be processed, saves it in Google Cloud Storage, and then responds with the path, which I send to an App Engine app to be read. Testing with the gsutil cp command shows that the picture is saved correctly to GCS, as the cp command always finds it.
However, App Engine often has problems finding the photo when I send the path, returning:
NotFoundError: Expect status [200] from Google Storage. But got status 404
Any thoughts?
Thanks to Voscausa, you can solve the issue simply by using appropriate retry parameters:
https://developers.google.com/appengine/docs/python/googlecloudstorageclient/retryparams_class
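Per that page, tuning the retry parameters might look like this with the App Engine GCS client library; the values and the bucket/object path are illustrative, not recommendations:

import cloudstorage

# Retry longer so a freshly written object has time to become readable.
retry_params = cloudstorage.RetryParams(initial_delay=0.2,
                                        backoff_factor=2,
                                        max_retry_period=15)
cloudstorage.set_default_retry_params(retry_params)

# Subsequent reads follow the retry policy set above.
with cloudstorage.open("/my-bucket/photo.jpg") as gcs_file:
    data = gcs_file.read()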
I need some architecture guidance here. If anybody can give me an overview of how it can be done, I will figure out the rest myself.
1. From my WinRT app I am capturing an image and sending it to Azure tables. I am storing the image in a blob and the image path in a table.
2. Whenever the image is inserted into Azure tables, a service (or whatever it is: a bus, a queue...) should call a C# program. This C# program takes the image and other parameters from the table and does shape recognition.
3. This C# program will run asynchronously and will update the table with the image matching %.
4. After that, Azure will send a push notification to the device if the image matching % is greater than 90%. (I know this is a simple script.)
I know how to do Steps 1 and 4. How do I go about Steps 2 and 3? Where should I put my C# program so that Azure tables can call it?
One option would be a worker role hosting a WCF service. The worker role hosts your C# program and takes care of Step 3; the WCF service provides the means for Step 2. Your flow would look something like this:
1. From your WinRT app you capture an image and send it to Azure tables, storing the image in a blob and the image path in a table (your Step 1).
2. The client (WinRT app) makes a call to the WCF service after the image is sent, to initiate Step 3.
3. The web service call executes your shape recognition code asynchronously.
4. Azure sends a push notification to the device if the image matching % is greater than 90% (your Step 4).
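As a variation on this flow, the hand-off in Step 2 can also be modeled as a queue message instead of a WCF call: the client enqueues a reference to the stored image, and the worker role polls the queue and runs the recognition. A sketch of that pattern follows, written in Python with the azure-storage-queue package purely for brevity; the queue name, connection string, and the process_image/update_table helpers are all assumptions, and a C# worker role would typically host the same loop in its Run() method.

from azure.storage.queue import QueueClient

CONN_STR = "..."  # storage account connection string (placeholder)
queue = QueueClient.from_connection_string(CONN_STR, queue_name="shape-jobs")

# Producer side (Step 2): after the image is stored, enqueue its table/blob path.
def enqueue_job(image_path: str) -> None:
    queue.send_message(image_path)

# Worker side (Step 3): poll the queue, run recognition, update the table.
def run_worker() -> None:
    for msg in queue.receive_messages():
        match_percent = process_image(msg.content)  # hypothetical recognition step
        update_table(msg.content, match_percent)    # hypothetical table update
        queue.delete_message(msg)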