Can't authenticate when using download_data on Google App Engine

I'm trying to download all my app data using appcfg.py download_data as follows:
(venv)awp$ ../google_appengine/appcfg.py download_data --application=s~app-name --url=http://app-name.appspot.com/_ah/remote_api --filename=dev-datastore/data.csv
I have included in app.yaml:
builtins:
- remote_api: on
I have also tried using service account credentials, but got the same exception. I have triple-checked that my email, password, and app name are correct, and I have granted myself every possible permission in the IAM pane of the Cloud dashboard, but still no luck:
(venv)awp$ GOOGLE_APPLICATION_CREDENTIALS=../app-name-51362728f4a9.json ../google_appengine/appcfg.py download_data --authenticate_service_account --application=s~app-name --url=http://app-name.appspot.com/_ah/remote_api --filename=dev-datastore/data.csv --noisy
04:46 PM Downloading data records.
[INFO ] Logging to bulkloader-log-20161206.164650
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20161206.164650.sql3
[INFO ] Opening database: bulkloader-results-20161206.164650.sql3
[DEBUG ] [WorkerThread-0] WorkerThread: started
[DEBUG ] [WorkerThread-1] WorkerThread: started
[DEBUG ] [WorkerThread-2] WorkerThread: started
[DEBUG ] [WorkerThread-3] WorkerThread: started
[DEBUG ] [WorkerThread-4] WorkerThread: started
[DEBUG ] [WorkerThread-5] WorkerThread: started
[DEBUG ] [WorkerThread-6] WorkerThread: started
[DEBUG ] [WorkerThread-7] WorkerThread: started
[DEBUG ] [WorkerThread-8] WorkerThread: started
[DEBUG ] [WorkerThread-9] WorkerThread: started
[DEBUG ] Configuring remote_api. url_path = /_ah/remote_api, servername = app-name.appspot.com
[DEBUG ] Bulkloader using app_id: s~app-name
[INFO ] Connecting to app-name.appspot.com/_ah/remote_api
Please enter login credentials for app-name.appspot.com
Email: anthony@app-name.com
Password for anthony@app-name.com:
[ERROR ] Exception during authentication
Traceback (most recent call last):
File "/Users/Ant/Documents/google_appengine/google/appengine/tools/bulkloader.py", line 3466, in Run
self.request_manager.Authenticate()
File "/Users/Ant/Documents/google_appengine/google/appengine/tools/bulkloader.py", line 1329, in Authenticate
remote_api_stub.MaybeInvokeAuthentication()
File "/Users/Ant/Documents/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 889, in MaybeInvokeAuthentication
datastore_stub._server.Send(datastore_stub._path, payload=None)
File "/Users/Ant/Documents/google_appengine/google/appengine/tools/appengine_rpc.py", line 441, in Send
self._Authenticate()
File "/Users/Ant/Documents/google_appengine/google/appengine/tools/appengine_rpc.py", line 582, in _Authenticate
super(HttpRpcServer, self)._Authenticate()
File "/Users/Ant/Documents/google_appengine/google/appengine/tools/appengine_rpc.py", line 313, in _Authenticate
auth_token = self._GetAuthToken(credentials[0], credentials[1])
File "/Users/Ant/Documents/google_appengine/google/appengine/tools/appengine_rpc.py", line 252, in _GetAuthToken
response = self.opener.open(req)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 437, in open
response = meth(req, response)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 475, in error
return self._call_chain(*args)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 409, in _call_chain
result = func(*args)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 404: Not Found
[ERROR ] Authentication Failed: Incorrect credentials or unsupported authentication type (e.g. OpenId).
Any help would be greatly appreciated! Thanks.

Try exporting the path to your credentials file:
export GOOGLE_APPLICATION_CREDENTIALS=./app-name-51362728f4a9.json
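The HTTP 404 from _GetAuthToken suggests the old ClientLogin endpoint, which appcfg.py's email/password flow relies on and which Google has since shut down, is simply gone. If so, a hedged alternative worth trying is the SDK's OAuth2 flow, roughly:
../google_appengine/appcfg.py download_data --oauth2 --application=s~app-name --url=http://app-name.appspot.com/_ah/remote_api --filename=dev-datastore/data.csv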


SageMaker batch transform job fails on loading the model

I am trying to run a batch transform job with the HuggingFace class, a fine-tuned model, and a custom inference file.
The job fails on loading the model, but I can load it locally.
I need the custom inference file because I need to keep the input file as is, so I had to change the input key read from the input JSON file.
Here is the exception:
PredictionException(str(e), 400)
2022-05-08 16:49:45,499 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mms.service.PredictionException: Can't load config for '/.sagemaker/mms/models/model'. Make sure that:
2022-05-08 16:49:45,499 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -
2022-05-08 16:49:45,499 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - - '/.sagemaker/mms/models/model' is a correct model identifier listed on 'https://huggingface.co/models'
2022-05-08 16:49:45,499 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -
2022-05-08 16:49:45,500 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - - or '/.sagemaker/mms/models/model' is the correct path to a directory containing a config.json file
2022-05-08 16:49:45,500 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle -
2022-05-08 16:49:45,500 [INFO ] W-model-1-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - : 400
I am running in script mode:
from sagemaker.huggingface.model import HuggingFaceModel

hub = {
    # 'HF_MODEL_ID': 'cardiffnlp/twitter-roberta-base-sentiment',
    'HF_TASK': 'text-classification',
    'INPUT_TEXTS': 'Description'
}

huggingface_model = HuggingFaceModel(
    model_data='../model/model.tar.gz',
    role=role,
    source_dir="../model/pytorch_model/code",
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    entry_point="inference.py",
    env=hub
)
batch_job = huggingface_model.transformer(
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    output_path=output_s3_path,  # we are using the same S3 path to save the output with the input
    strategy='SingleRecord',
    accept='application/json',
    assemble_with='Line'
)

batch_job.transform(
    data=s3_file_uri,
    content_type='application/json',
    split_type='Line',
    # input_filter='$[1:]',
    join_source='Input'
)
Custom inference.py
import json
import os

import torch
from transformers import pipeline

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'


def model_fn(model_dir):
    # Load the fine-tuned model and its tokenizer from the model directory.
    model = pipeline(task=os.environ.get('HF_TASK', 'text-classification'),
                     model=model_dir, tokenizer=model_dir)
    return model


def transform_fn(model, input_data, content_type, accept):
    input_data = json.loads(input_data)
    input_text = os.environ.get('INPUT_TEXTS', 'inputs')
    inputs = input_data.pop(input_text, None)
    parameters = input_data.pop("parameters", None)
    # pass inputs with all kwargs in data
    if parameters is not None:
        prediction = model(inputs, **parameters)
    else:
        prediction = model(inputs)
    return json.dumps(
        prediction,
        ensure_ascii=False,
        allow_nan=False,
        indent=None,
        separators=(",", ":"),
    )
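Given this handler and INPUT_TEXTS set to 'Description', each line of the batch input file would need to be a JSON object keyed by that name; an illustrative (hypothetical) record:
{"Description": "The product arrived late and damaged."}
An optional "parameters" object in the same record would be forwarded to the pipeline as keyword arguments.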
I think the issue is with the "model_data" parameter: it should point to an S3 object (model.tar.gz).
The transform job will then download the model file from S3 and load it.
The solution is to change the "task" in the pipeline to "sentiment-analysis":
hub = {
    # 'HF_MODEL_ID': 'cardiffnlp/twitter-roberta-base-sentiment',
    'HF_TASK': 'sentiment-analysis',
    'INPUT_TEXTS': 'Description'
}
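Combining the two fixes, a minimal sketch of the corrected setup (the S3 URI below is a hypothetical placeholder; model_data must reference an S3 object, not a local path):

huggingface_model = HuggingFaceModel(
    model_data='s3://your-bucket/model/model.tar.gz',  # hypothetical S3 URI
    role=role,
    source_dir="../model/pytorch_model/code",
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    entry_point="inference.py",
    env=hub
)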

Error Response: [13] An internal error occurred while creating a Google Cloud Storage bucket

I am trying to push a Node.js Hello World sample app, but after pushing I am getting this error:
ERROR: (gcloud.app.deploy) Error Response: [13] An internal error
occurred while creating a Google Cloud Storage bucket.
There is nothing wrong with the code, as it is just a sample app downloaded from Google.
I have seen other SO posts related to this error, but none of them helped.
Can anyone please tell me a fix for this error?
I also ran the command with debug verbosity and got the following output:
gcloud app deploy app.yaml --verbosity debug
DEBUG: No bucket specified, retrieving default bucket.
DEBUG: Using bucket [gs://staging.united-backbone-186810.appspot.com].
DEBUG: Service [appengineflex.googleapis.com] is already enabled for project [united-backbone-186810]
Beginning deployment of service [default]...
INFO: Need Dockerfile to be generated for runtime nodejs
Building and pushing image for service [default]
INFO: Uploading [c:\users\sudha\appdata\local\temp\tmpaj1p9z\src.tgz] to [asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest]
DEBUG: Using runtime builder root [gs://runtime-builders/]
DEBUG: Loading runtimes manifest from [gs://runtime-builders/runtimes.yaml]
INFO: Reading [<googlecloudsdk.api_lib.storage.storage_util.ObjectReference object at 0x0000000006631400>]
DEBUG: Resolved runtime [nodejs] as build configuration [gs://runtime-builders/nodejs-default-builder-20171116155610.yaml]
INFO: Using runtime builder [gs://runtime-builders/nodejs-default-builder-20171116155610.yaml]
INFO: Reading [<googlecloudsdk.api_lib.storage.storage_util.ObjectReference object at 0x0000000006642390>]
Started cloud build [c3702b4b-7bc8-4861-944f-8490bf183078].
DEBUG: GCS logfile url is https://www.googleapis.com/storage/v1/b/staging.united-backbone-186810.appspot.com/o/log-c3702b4b-7bc8-4861-944f-8490bf183078.txt?alt=media
To see logs in the Cloud Console: https://console.cloud.google.com/gcr/builds/c3702b4b-7bc8-4861-944f-8490bf183078?project=united-backbone-186810
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 233 bytes)
------------------------------------------------- REMOTE BUILD OUTPUT --------------------------------------------------
starting build "c3702b4b-7bc8-4861-944f-8490bf183078"
FETCHSOURCE
Fetching storage object: gs://staging.united-backbone-186810.appspot.com/asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest#1511349522754483
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 398 bytes)
Copying gs://staging.united-backbone-186810.appspot.com/asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest#1511349522754483...
- [1 files][ 1.8 KiB/ 1.8 KiB]
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 2198 bytes)
Operation completed over 1 objects/1.8 KiB.
BUILD
Step #0: Pulling image: gcr.io/gcp-runtimes/nodejs/gen-dockerfile@sha256:196bc20ff8d91905dc071100399538814e2c619d0d27576c35a6405674da696c
Step #0: sha256:196bc20ff8d91905dc071100399538814e2c619d0d27576c35a6405674da696c: Pulling from gcp-runtimes/nodejs/gen-dockerfile
Step #0: Digest: sha256:196bc20ff8d91905dc071100399538814e2c619d0d27576c35a6405674da696c
Step #0: Status: Downloaded newer image for gcr.io/gcp-runtimes/nodejs/gen-dockerfile@sha256:196bc20ff8d91905dc071100399538814e2c619d0d27576c35a6405674da696c
Starting Step #0
Step #0: Checking for Node.js.
Finished Step #0
Step #1: Pulling image: gcr.io/cloud_builders/docker@sha256:8f8f572201e2b2ae876d8ca8b05c7d44df994e7ea8352c334ee5bae7ca3dc7f9
Step #1: sha256:8f8f572201e2b2ae876d8ca8b05c7d44df994e7ea8352c334ee5bae7ca3dc7f9: Pulling from cloud_builders/docker
Step #1: Digest: sha256:8f8f572201e2b2ae876d8ca8b05c7d44df994e7ea8352c334ee5bae7ca3dc7f9
Step #1: Status: Downloaded newer image for gcr.io/cloud_builders/docker@sha256:8f8f572201e2b2ae876d8ca8b05c7d44df994e7ea8352c334ee5bae7ca3dc7f9
Starting Step #1
Step #1: Sending build context to Docker daemon 11.26kB
Step #1: Step 1/5 : FROM gcr.io/google-appengine/nodejs@sha256:2c743f7509798cca81aaebaa339c899c4d1924153beb4a94df00ff6af238fcb2
Step #1: sha256:2c743f7509798cca81aaebaa339c899c4d1924153beb4a94df00ff6af238fcb2: Pulling from google-appengine/nodejs
Step #1: Digest: sha256:2c743f7509798cca81aaebaa339c899c4d1924153beb4a94df00ff6af238fcb2
Step #1: Status: Downloaded newer image for gcr.io/google-appengine/nodejs@sha256:2c743f7509798cca81aaebaa339c899c4d1924153beb4a94df00ff6af238fcb2
Step #1: ---> 669f53c480d3
Step #1: Step 2/5 : COPY . /app/
Step #1: ---> 388693438f26
Step #1: Removing intermediate container 45b4eecdfef0
Step #1: Step 3/5 : RUN /usr/local/bin/install_node '>=4.3.2'
Step #1: ---> Running in 67ea703659bc
Step #1: ---> 7b5b6d0283c5
Step #1: Removing intermediate container 67ea703659bc
Step #1: Step 4/5 : RUN npm install --unsafe-perm || ((if [ -f npm-debug.log ]; then cat npm-debug.log; fi) && false)
Step #1: ---> Running in 2d7b75885d6c
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 145 bytes)
Step #1: npm notice created a lockfile as package-lock.json. You should commit this file.
Step #1: added 43 packages in 1.695s
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 1222 bytes)
Step #1: ---> e802843c73cd
Step #1: Removing intermediate container 2d7b75885d6c
Step #1: Step 5/5 : CMD npm start
Step #1: ---> Running in cef15318568d
Step #1: ---> b61594a5ec64
Step #1: Removing intermediate container cef15318568d
Step #1: Successfully built b61594a5ec64
Step #1: Successfully tagged asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest
Finished Step #1
PUSH
Pushing asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest
The push refers to a repository [asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838]
737512f7e42b: Preparing
ec3a6c686d39: Preparing
644f071ca81b: Preparing
226846715c53: Preparing
e0a3cc0c6e70: Preparing
c2003e396592: Preparing
8fc48a7a910e: Preparing
9aa804bf0e6a: Preparing
749e521e9c3d: Preparing
31cb62ec9f95: Preparing
c2003e396592: Waiting
8fc48a7a910e: Waiting
9aa804bf0e6a: Waiting
749e521e9c3d: Waiting
31cb62ec9f95: Waiting
226846715c53: Layer already exists
e0a3cc0c6e70: Layer already exists
644f071ca81b: Layer already exists
8fc48a7a910e: Layer already exists
c2003e396592: Layer already exists
31cb62ec9f95: Layer already exists
749e521e9c3d: Layer already exists
9aa804bf0e6a: Layer already exists
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 21 bytes)
ec3a6c686d39: Pushed
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 21 bytes)
737512f7e42b: Pushed
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] not complete. Waiting 1s.
DEBUG: Reading GCS logfile: 206 (read 104 bytes)
latest: digest: sha256:6659b586325b131087fcbf872abf618fa1ee45503fff47cad7bb9e92d63bcd12 size: 2413
DONE
DEBUG: Operation [operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4] complete. Result: {
"response": {
"finishTime": "2017-11-22T11:19:11.119096Z",
"status": "SUCCESS",
"timeout": "600s",
"startTime": "2017-11-22T11:18:45.353017219Z",
"logsBucket": "staging.united-backbone-186810.appspot.com",
"results": {
"images": [
{
"name": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838",
"digest": "sha256:6659b586325b131087fcbf872abf618fa1ee45503fff47cad7bb9e92d63bcd12"
},
{
"name": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest",
"digest": "sha256:6659b586325b131087fcbf872abf618fa1ee45503fff47cad7bb9e92d63bcd12"
}
],
"buildStepImages": [
"",
""
]
},
"createTime": "2017-11-22T11:18:44.769257383Z",
"#type": "type.googleapis.com/google.devtools.cloudbuild.v1.Build",
"source": {
"storageSource": {
"object": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest",
"bucket": "staging.united-backbone-186810.appspot.com"
}
},
"options": {
"substitutionOption": "ALLOW_LOOSE"
},
"steps": [
{
"args": [
"--runtime-image",
"gcr.io/google-appengine/nodejs#sha256:2c743f7509798cca81aaebaa339c899c4d1924153beb4a94df00ff6af238fcb2"
],
"name": "gcr.io/gcp-runtimes/nodejs/gen-dockerfile#sha256:196bc20ff8d91905dc071100399538814e2c619d0d27576c35a6405674da696c",
"env": [
"GAE_APPLICATION_YAML_PATH=app.yaml"
]
},
{
"args": [
"build",
"-t",
"asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest",
"."
],
"name": "gcr.io/cloud_builders/docker#sha256:8f8f572201e2b2ae876d8ca8b05c7d44df994e7ea8352c334ee5bae7ca3dc7f9",
"env": [
"GAE_APPLICATION_YAML_PATH=app.yaml"
]
}
],
"sourceProvenance": {
"resolvedStorageSource": {
"generation": "1511349522754483",
"object": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest",
"bucket": "staging.united-backbone-186810.appspot.com"
},
"fileHashes": {
"gs://staging.united-backbone-186810.appspot.com/asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest#1511349522754483": {
"fileHash": [
{
"type": "MD5",
"value": "RCwNC0JHUlmvZZz+C5RYsw=="
}
]
}
}
},
"projectId": "united-backbone-186810",
"images": [
"asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest"
],
"substitutions": {
"_GAE_APPLICATION_YAML_PATH": "app.yaml",
"_OUTPUT_IMAGE": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest"
},
"id": "c3702b4b-7bc8-4861-944f-8490bf183078",
"logUrl": "https://console.cloud.google.com/gcr/builds/c3702b4b-7bc8-4861-944f-8490bf183078?project=united-backbone-186810"
},
"done": true,
"name": "operations/build/united-backbone-186810/YzM3MDJiNGItN2JjOC00ODYxLTk0NGYtODQ5MGJmMTgzMDc4",
"metadata": {
"#type": "type.googleapis.com/google.devtools.cloudbuild.v1.BuildOperationMetadata",
"build": {
"finishTime": "2017-11-22T11:19:11.119096Z",
"status": "SUCCESS",
"timeout": "600s",
"startTime": "2017-11-22T11:18:45.353017219Z",
"logsBucket": "staging.united-backbone-186810.appspot.com",
"results": {
"images": [
{
"name": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838",
"digest": "sha256:6659b586325b131087fcbf872abf618fa1ee45503fff47cad7bb9e92d63bcd12"
},
{
"name": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest",
"digest": "sha256:6659b586325b131087fcbf872abf618fa1ee45503fff47cad7bb9e92d63bcd12"
}
],
"buildStepImages": [
"",
""
]
},
"id": "c3702b4b-7bc8-4861-944f-8490bf183078",
"source": {
"storageSource": {
"object": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest",
"bucket": "staging.united-backbone-186810.appspot.com"
}
},
"options": {
"substitutionOption": "ALLOW_LOOSE"
},
"steps": [
{
"args": [
"--runtime-image",
"gcr.io/google-appengine/nodejs#sha256:2c743f7509798cca81aaebaa339c899c4d1924153beb4a94df00ff6af238fcb2"
],
"name": "gcr.io/gcp-runtimes/nodejs/gen-dockerfile#sha256:196bc20ff8d91905dc071100399538814e2c619d0d27576c35a6405674da696c",
"env": [
"GAE_APPLICATION_YAML_PATH=app.yaml"
]
},
{
"args": [
"build",
"-t",
"asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest",
"."
],
"name": "gcr.io/cloud_builders/docker#sha256:8f8f572201e2b2ae876d8ca8b05c7d44df994e7ea8352c334ee5bae7ca3dc7f9",
"env": [
"GAE_APPLICATION_YAML_PATH=app.yaml"
]
}
],
"sourceProvenance": {
"resolvedStorageSource": {
"generation": "1511349522754483",
"object": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest",
"bucket": "staging.united-backbone-186810.appspot.com"
},
"fileHashes": {
"gs://staging.united-backbone-186810.appspot.com/asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest#1511349522754483": {
"fileHash": [
{
"type": "MD5",
"value": "RCwNC0JHUlmvZZz+C5RYsw=="
}
]
}
}
},
"projectId": "united-backbone-186810",
"images": [
"asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest"
],
"substitutions": {
"_GAE_APPLICATION_YAML_PATH": "app.yaml",
"_OUTPUT_IMAGE": "asia.gcr.io/united-backbone-186810/appengine/default.20171122t164838:latest"
},
"createTime": "2017-11-22T11:18:44.769257383Z",
"logUrl": "https://console.cloud.google.com/gcr/builds/c3702b4b-7bc8-4861-944f-8490bf183078?project=united-backbone-186810"
}
}
}
DEBUG: Reading GCS logfile: 416 (no new content; keep polling)
------------------------------------------------------------------------------------------------------------------------
DEBUG: Converted YAML to JSON: "{
"betaSettings": {
"module_yaml_path": "app.yaml",
"vm_runtime": "nodejs"
},
"env": "flex",
"handlers": [
{
"script": {
"scriptPath": "PLACEHOLDER"
},
"urlRegex": ".*"
}
],
"runtime": "vm"
}"
DEBUG: Received operation: [apps/united-backbone-186810/operations/51892505-4cb2-4b11-8431-ee7b62f0f236]
DEBUG: Operation [apps/united-backbone-186810/operations/51892505-4cb2-4b11-8431-ee7b62f0f236] not complete. Waiting to retry.
Updating service [default] (this may take several minutes)...
DEBUG: Operation [apps/united-backbone-186810/operations/51892505-4cb2-4b11-8431-ee7b62f0f236] not complete. Waiting to retry.
DEBUG: Operation [apps/united-backbone-186810/operations/51892505-4cb2-4b11-8431-ee7b62f0f236] complete. Result: {
"metadata": {
"target": "apps/united-backbone-186810/services/default/versions/20171122t164838",
"method": "google.appengine.v1.Versions.CreateVersion",
"user": "prafactor9#gmail.com",
"insertTime": "2017-11-22T11:19:14.120Z",
"ephemeralMessage": "Deployment failed. Attempting to cleanup deployment artifacts.",
"#type": "type.googleapis.com/google.appengine.v1.OperationMetadataV1"
},
"done": true,
"name": "apps/united-backbone-186810/operations/51892505-4cb2-4b11-8431-ee7b62f0f236",
"error": {
"message": "An internal error occurred while creating a Google Cloud Storage bucket.",
"code": 13
}
}
Updating service [default] (this may take several minutes)...failed.
DEBUG: (gcloud.app.deploy) Error Response: [13] An internal error occurred while creating a Google Cloud Storage bucket.
Traceback (most recent call last):
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\cli.py", line 789, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\calliope\backend.py", line 756, in Run
resources = command_instance.Run(args)
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\surface\app\deploy.py", line 65, in Run
parallel_build=False)
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\command_lib\app\deploy_util.py", line 587, in RunDeploy
flex_image_build_option=flex_image_build_option)
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\command_lib\app\deploy_util.py", line 395, in Deploy
extra_config_settings)
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\api_lib\app\appengine_api_client.py", line 188, in DeployService
message=message)
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\api_lib\app\operations_util.py", line 244, in WaitForOperation
sleep_ms=retry_interval)
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\api_lib\util\waiter.py", line 266, in WaitFor
sleep_ms=sleep_ms)
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\core\util\retry.py", line 222, in RetryOnResult
if not should_retry(result, state):
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\api_lib\util\waiter.py", line 260, in _IsNotDone
return not poller.IsDone(operation)
File "C:\Users\sudha\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\api_lib\app\operations_util.py", line 169, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [13] An internal error occurred while creating a Google Cloud Storage bucket.
ERROR: (gcloud.app.deploy) Error Response: [13] An internal error occurred while creating a Google Cloud Storage bucket.
C:\Users\sudha\Desktop\nodejs-docs-samples\appengine\hello-world>
Try setting this configuration parameter first:
gcloud config set app/use_deprecated_preparation True
as proposed in Google's Public Issue Tracker.
Alternatively, try setting this configuration parameter:
gcloud config set app/stop_previous_version true
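Independent of those flags, since the deploy fails while creating the staging bucket, it may help to check manually whether the default staging bucket from the log actually exists and is accessible to you; a hedged diagnostic using the project ID above:
gsutil ls -b gs://staging.united-backbone-186810.appspot.com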

Error enabling AD authentication in GPFS filesystem

I've created a filesystem on a two-node Linux (both running RHEL 7) GPFS cluster. I am trying to enable AD authentication but am receiving an error and have been unable to find a fix. Here is the process I am following on the manager node:
./spectrumscale file auth ad
Yes to edit template
I fill in the template with the following info:
[file_ad]
servers = bdtestdc01 <--- my test AD server
netbios_name = gpfscluster <--- the name I gave the cluster during setup (is this field looking for another name?)
idmap_role = master
bind_username = administrator
bind_password = the domain password of the administrator account
unixmap_domains = bdtest.subdomain.company.com
I save the template and set the password. I then run:
./spectrumscale deploy
It errors at Installing Authentication. The log file says:
Error executing action run on resource 'execute[Configure file authentication]
2015-12-21 10:45:31,440 [ TRACE ] bdgpfs01.subdomain.company.com Chef Client failed. 1 resources updated in 3.641691552 seconds
2015-12-21 10:45:31,456 [ TRACE ] bdgpfs01.subdomain.company.com [2015-12-21T10:45:31-08:00] ERROR: execute[Configure file authentication] (auth::auth_file_configure line 22) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
2015-12-21 10:45:31,456 [ TRACE ] bdgpfs01.subdomain.company.com ---- Begin output of /usr/lpp/mmfs/bin/mmuserauth service create --data-access-method file --type ad --servers 'bdtestbluedc01' --netbios-name 'gpfscluster' --idmap-role 'master' --user-name 'administrator' --password XXXXXX --unixmap-domains 'bdtest.subdomain.company.com' --idmap-range '10000000-299999999' --idmap-range-size '1000000' --enable-nfs-kerberos ----
2015-12-21 10:45:31,456 [ TRACE ] bdgpfs01.subdomain.company.com STDOUT:
2015-12-21 10:45:31,457 [ TRACE ] bdgpfs01.subdomain.company.com STDERR: mmuserauth service create: Syntax error. The correct syntax is:
2015-12-21 10:45:31,457 [ TRACE ] bdgpfs01.subdomain.company.com --unixmap-domains domain(lower value-higher value)
2015-12-21 10:45:31,457 [ TRACE ] bdgpfs01.subdomain.company.com mmuserauth service create: Command failed. Examine previous error messages to determine cause.
2015-12-21 10:45:31,438 [ TRACE ] bdgpfs01.subdomain.company.com mmuserauth service create: Command failed. Examine previous error messages to determine cause.
The domain field was looking for non-whitespace characters and an ID map range.
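Judging from the syntax hint in the error output (--unixmap-domains domain(lower value-higher value)), the option apparently wants the ID range attached to the domain name rather than supplied separately; a hedged guess at a valid form, reusing the range values from the template (whether the short or fully qualified domain name belongs there is an assumption):
--unixmap-domains 'bdtest(10000000-299999999)'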
Instead of using the configuration template, I ran the following command, which enabled authentication successfully:
mmuserauth service create --type ad --data-access-method file --netbios-name bdtestnode --user-name administrator --idmap-role master --servers myADserver --password Passwr0rd --idmap-range-size 1000000 --idmap-range 10000000-299999999
I then ran the following command to test:
id "testdomain\administrator"
It returned the proper groups and IDs.

How to load data from the online GAE datastore into the local development server?

I have previously used the approach described in the GAE docs to download backups of my entities from the live datastore.
Currently, I have a CSV file per entity kind, which I got by writing bulkloader.yaml and using this command:
appcfg.py download_data --config_file=bulkloader.yaml --filename=users.csv --kind=Permission --url=http://your_app_id.appspot.com/_ah/remote_api
I also have a sql3 dump file that I got using the command:
appcfg.py download_data --kind=<kind> --url=http://your_app_id.appspot.com/_ah/remote_api --filename=<data-filename>
Now if I try this command:
appcfg.py upload_data --url=http://your_app_id.appspot.com/_ah/remote_api --kind=<kind> --filename=<data-filename>
Replacing the URL with localhost:8080, it asks me for a username/password. Now even if I provide a mock username (test@example.com) at http://localhost:8080/_ah/remote_api and check the "admin" checkbox, it always gives me an authentication error.
The other alternative mentioned in the docs is using this:
appcfg.py upload_data --config_file=album_loader.py --filename=album_data.csv --kind=Album --url=http://localhost:8080/_ah/remote_api <app-directory>
I wrote a loader and tried it out; it also asks for a username and password, but it accepts anything here. The output is as follows:
/usr/local/google_appengine/google/appengine/api/search/search.py:232: UserWarning: DocumentOperationResult._code is deprecated. Use OperationResult._code instead.
'Use OperationResult.%s instead.' % (name, name))
/usr/local/google_appengine/google/appengine/api/search/search.py:232: UserWarning: DocumentOperationResult._CODES is deprecated. Use OperationResult._CODES instead.
'Use OperationResult.%s instead.' % (name, name))
Application: knowledgetestgame
Uploading data records.
[INFO ] Logging to bulkloader-log-20121113.210613
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20121113.210613.sql3
Please enter login credentials for localhost
Email: test@example.com
Password for test@example.com:
[INFO ] Connecting to localhost:8080/_ah/remote_api
[INFO ] Starting import; maximum 10 entities per post
[ERROR ] [WorkerThread-4] WorkerThread:
Traceback (most recent call last):
File "/usr/local/google_appengine/google/appengine/tools/adaptive_thread_pool.py", line 176, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 764, in PerformWork
transfer_time = self._TransferItem(thread_pool)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 933, in _TransferItem
self.content = self.request_manager.EncodeContent(self.rows)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 1394, in EncodeContent
entity = loader.create_entity(values, key_name=key, parent=parent)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 2728, in create_entity
(len(self.__properties), len(values)))
AssertionError: Expected 17 columns, found 18.
[INFO ] [WorkerThread-5] Backing off due to errors: 1.0 seconds
[INFO ] Unexpected thread death: WorkerThread-4
[INFO ] An error occurred. Shutting down...
[ERROR ] Error in WorkerThread-4: Expected 17 columns, found 18.
[INFO ] 980 entities total, 0 previously transferred
[INFO ] 0 entities (278 bytes) transferred in 5.9 seconds
[INFO ] Some entities not successfully transferred
I have ~4000 entities in total; it says here that 980 are transferred, but when I check the local datastore I find none of them.
Below is the loader I use (I used NDB for the Guess entity):
import datetime
from google.appengine.ext import db
from google.appengine.tools import bulkloader
from google.appengine.ext.ndb import key


class Guess(db.Model):
    pass


class GuessLoader(bulkloader.Loader):
    def __init__(self):
        bulkloader.Loader.__init__(self, 'Guess',
            [('selectedAssociation', lambda x: x.decode('utf-8')),
             ('suggestionsList', lambda x: x.decode('utf-8')),
             ('associationIndexInList', int),
             ('timeEntered',
              lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date()),
             ('rank', int),
             ('topicName', lambda x: x.decode('utf-8')),
             ('topic', int),
             ('player', int),
             ('game', int),
             ('guessString', lambda x: x.decode('utf-8')),
             ('guessTime',
              lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date()),
             ('accountType', lambda x: x.decode('utf-8')),
             ('nthGuess', int),
             ('score', float),
             ('cutByRoundEnd', bool),
             ('suggestionsListDelay', int),
             ('occurrences', float)
            ])

loaders = [GuessLoader]
Edit: I just noticed this part in the error message: [ERROR] Error in WorkerThread-0: Expected 17 columns, found 18. I went through the whole CSV file and made sure that every line has 18 columns. I checked the loader and found that I was missing the key column; I gave it type int, but this doesn't work.
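One hedged way to absorb that extra column, assuming the exported key sits in the first field of each CSV row, is to strip it in the loader and reuse it as the key_name instead of declaring it as an eighteenth property (reusing the imports above; the signature matches the create_entity call shown in the traceback):

class GuessLoader(bulkloader.Loader):
    # __init__ stays exactly as above, declaring the 17 real properties.

    def create_entity(self, values, key_name=None, parent=None):
        # Assumption: column 0 holds the key written by download_data.
        # Pop it so the remaining 17 values match the declared properties.
        key_name, values = values[0], list(values[1:])
        return bulkloader.Loader.create_entity(self, values,
                                               key_name=key_name,
                                               parent=parent)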
If you have problems with the authentication, put the following in your appengine_config.py:
import os

if os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
    remoteapi_CUSTOM_ENVIRONMENT_AUTHENTICATION = (
        'REMOTE_ADDR', ['127.0.0.1'])
then run
appcfg.py download_data --url=http://APPNAME.appspot.com/_ah/remote_api --filename=dump --kind=EntityName
appcfg.py upload_data --url=http://localhost:8080/_ah/remote_api --filename=dump --application=dev~APPNAME
Try just pressing Enter (no username/password). This seemed to do the trick for me. My command (wrapped in a bash script to prevent import errors that I occasionally received) is:
#!/bin/bash
# Modify path
export PYTHONPATH=$PYTHONPATH:.
# Load data
python /path/to/app/config/appcfg.py upload_data \
    --config_file=<my_loader.py> \
    --filename=<output.csv> \
    --kind=<kind> \
    --application=dev~<application_id> \
    --url=http://localhost:8088/_ah/remote_api ./
When prompted for the Email, I hit enter and all is uploaded to the dev server. I am not using NDB in this case, although I do not believe that should make a difference.

BadRequestError while uploading data using bulk loader

Hello, I have created the sample Greeting application in Google App Engine.
Now I am trying to upload data using the bulk loader, but it's giving a BadRequestError. This is the command and its output:
D:\Study\M.Tech\Summer\Research\My Work\Query Transformation\Experiment\Tools\Bulkloader\bulkloader test>appcfg.py create_bulkloader_config --url=http://bulkex.appspot.com/remote_api --application=bulkex --filename=config.yml
Creating bulkloader configuration.
[INFO ] Logging to bulkloader-log-20111008.175810
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20111008.175810.sql3
[INFO ] Opening database: bulkloader-results-20111008.175810.sql3
[INFO ] Connecting to bulkex.appspot.com/remote_api
Please enter login credentials for bulkex.appspot.com
Email: shyam.rk22@gmail.com
Password for shyam.rk22@gmail.com:
[INFO ] Downloading kinds: ['__Stat_PropertyType_PropertyName_Kind__']
[ERROR ] [WorkerThread-3] WorkerThread:
Traceback (most recent call last):
File "C:\Program Files\Google\google_appengine\google\appengine\tools\adaptive
_thread_pool.py", line 176, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "C:\Program Files\Google\google_appengine\google\appengine\tools \bulkloader.py",line 764, in PerformWork transfer_time = self._TransferItem(thread_pool)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkload
er.py", line 1170, in _TransferItem
self, retry_parallel=self.first)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkload
er.py", line 1471, in GetEntities
results = self._QueryForPbs(query)
File "C:\Program Files\Google\google_appengine\google\appengine\tools\bulkload
er.py", line 1442, in _QueryForPbs
raise datastore._ToDatastoreError(e)
BadRequestError: app s~bulkex cannot access app bulkex's data
[INFO ] [WorkerThread-0] Backing off due to errors: 1.0 seconds
[INFO ] An error occurred. Shutting down...
[ERROR ] Error in WorkerThread-3: app s~bulkex cannot access app bulkex's data
[INFO ] Have 0 entities, 0 previously transferred
[INFO ] 0 entities (6466 bytes) transferred in 25.6 seconds
Note the warning under --application in http://code.google.com/appengine/docs/python/tools/uploadingdata.html and use --url instead.
I was having the same issue. I removed the --application=APPID parameter from the statement, and the command executed and built the config.yml file with all my Kinds from the Datastore!
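For reference, the working invocation (with --application dropped, per the doc's warning) looks like:
appcfg.py create_bulkloader_config --url=http://bulkex.appspot.com/remote_api --filename=config.yml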
