Error in Google App Engine - Log Service - SQLite

I am using Google App Engine on Ubuntu within the Windows Subsystem for Linux.
When I start dev_appserver.py, I receive errors ending with the traceback below, which I understand to indicate a corrupted SQLite data file.
File "/../google-cloud-sdk/platform/google_appengine/google/appengine/api/logservice/logservice_stub.py", line 181, in start_request
host, start_time, method, resource, http_version, module))
DatabaseError: database disk image is malformed
Based upon this post, I understand there is a log.db referenced:
GoogleAppEngineLauncher: database disk image is malformed
However, when I run the script referenced there, the resulting path does not contain a log.db, leading me to believe this is a different issue.
Any help in identifying the appropriate database, for the purpose of removing it, would be appreciated.
Per a comment, I added --clear_datastore=1 and did not notice a change:
dev_appserver.py --host 127.0.0.1 --port 8080 --admin_port 8082 --storage_path=temp/storage --skip_sdk_update_check true --clear_datastore=1 main/app.yaml main/sync.yaml
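Not from the original thread, but a possible way forward: the log service stub keeps its request-log database as a SQLite file under the dev server's storage path, and the dev server also accepts a --logs_path flag to relocate it. A sketch, assuming the --storage_path used above (file names vary by SDK version):
# list the SQLite files the dev server created under the storage path
find temp/storage -name '*.db'
# or point the log service at a brand-new database file and restart
dev_appserver.py --storage_path=temp/storage --logs_path=temp/fresh_logs.db main/app.yaml main/sync.yaml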


How to deploy SQL Server Express on Docker Desktop Kubernetes

I've been studying "Kubernetes Up and Running" by Hightower et al. (first edition), Chapter 13, where they discuss creating a reliable MySQL singleton. (Since I just discovered that there is a second edition, I guess I'll be buying it soon.)
Using their reliable MySQL singleton example as a model, I've been looking for sample YAML files to make a similar deployment with Microsoft SQL Server (Express) on Docker Desktop for Kubernetes.
Apparently I need YAML files to deploy:
Persistent Volume
Volume claim (should this be NFS?)
SQL Server (Express edition) replica set (in spite of the fact that this is just a singleton).
I've tried this example, but I'm confused because it does not contain a persistent volume and claim, and it does not work. I get the error:
Error: unable to recognize "sqlserver.yml": no matches for kind "Deployment" in version "apps/v1beta1"
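(Not part of the original question, but for context: this error means the manifest targets an API version the cluster no longer serves; apps/v1beta1 was removed in Kubernetes 1.16. A sketch of the fix, with hypothetical names, is to switch the Deployment to apps/v1, which also makes spec.selector mandatory:)
apiVersion: apps/v1              # apps/v1beta1 no longer exists on newer clusters
kind: Deployment
metadata:
  name: sqlserver                # hypothetical name
spec:
  replicas: 1
  selector:                      # required under apps/v1
    matchLabels:
      app: sqlserver
  template:
    metadata:
      labels:
        app: sqlserver
    spec:
      containers:
        - name: sqlserver
          image: mcr.microsoft.com/mssql/server:2017-latest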
Can someone please point me to some sample YAML files that are not Azure specific that will work on Docker Desktop Kubernetes for Windows 10? After debugging my application, I'll want to deploy this to Azure (AKS).
Wed Jul 15 2020 Update
I left out the "-n namespace" for the helm install command (possibly because I'm using Helm v3 and you are using Helm v2?).
That install command still did not work. Then I did:
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Now this command works:
helm install todo-app-database stable/mssql-linux
Progress!
When I do a "k get pods" I see that my todo-app-mssql-linux database pod is in the Pending state. So I did:
kubectl get events
and I see
Warning FailedScheduling pod/todo-app-database-mssql-linux-8668d9b88c-lsh5l 0/1 nodes are available: 1 Insufficient memory.
I've been googling "Kubernetes insufficient memory" and can find no match.
I suspect this is a problem specific to "Docker Desktop Kubernetes".
When I look at the output for
helm -n ns-todolistdemo template todo-app-database stable/mssql-linux
I see the deployment is asking for 2Gi. (Interesting: when I use the template command, the "-n ns-todolistdemo" does not cause an error like it does with the install command.)
So I do:
kubectl describe deployment todo-app-database-mssql-linux >todo-app-database-mssql-linux.yaml
I edit the yaml file to change 2Gi to 1Gi.
kubectl apply -f todo-app-database-mssql-linux.yaml
I get this error:
error: error parsing todo-app-database-mssql-linux.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
Hmm... that did not work. I try delete and create:
kubectl delete deployment todo-app-database-mssql-linux
kubectl create -f todo-app-database-mssql-linux.yaml
I get this error:
error: error validating "todo-app-database-mssql-linux.yaml": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
So I try apply:
kubectl apply -f todo-app-database-mssql-linux.yaml
Same error!
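(One likely culprit, not addressed in the original thread: kubectl describe prints human-readable text, not YAML, so the redirected file was never a valid manifest. An editable manifest comes from kubectl get; a sketch, assuming the same namespace as above:)
# export the live Deployment as real YAML, edit the memory request, re-apply
kubectl -n ns-todolistdemo get deployment todo-app-database-mssql-linux -o yaml > todo-app-database-mssql-linux.yaml
kubectl apply -f todo-app-database-mssql-linux.yaml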
Shucks.... Is there a way to adjust the memory allocation for Docker Desktop?
Thank you
Siegfried
Short answer
https://github.com/helm/charts/blob/master/stable/mssql-linux/templates/pvc-master.yaml
Detailed Answer
Docker Desktop already comes with a default StorageClass.
This storage class is responsible for auto-provisioning a PV whenever you create a PVC.
If you have a YAML definition of a PVC (persistent volume claim), just leave storageClassName empty, so it will use the default.
k get storageclass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   11d
This is fair enough, as the Docker Desktop cluster is a one-node cluster. So if your DB crashes and the cluster brings it back up, it will not move to another node, because, simply, you have a single node :)
Now, should you write the PVC YAML from scratch?
No, you don't need to, because Helm should be your best friend.
(I explain below why you should use Helm; it does not even have a steep learning curve.)
Fortunately, the community provides a chart called stable/mssql-linux.
Let's run it together:
helm -n <your-namespace> install todo-app-database stable/mssql-linux
# helm -n <namespace> install <release-name> <chart-name-from-community>
If you want to check the YAML (namely the PVC) that Helm computed, you can run template instead of install:
helm -n <your-namespace> template todo-app-database stable/mssql-linux
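The PVC in the rendered output looks roughly like this (a sketch; the name and size are illustrative, and the storage class is left unset so the Docker Desktop default applies):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: todo-app-database-mssql-linux   # illustrative release-derived name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi                      # illustrative size
  # storageClassName omitted: the default (hostpath) class provisions the PV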
Why do I give you the answer with Helm?
Writing YAML from scratch is reinventing a wheel that others have already built.
The most efficient way is to reuse what the community has prepared for you.
However, you may ask: how can I reuse what others have done?
That's where Helm comes in.
Helm is your installer for any application on top of Kubernetes, regardless of how much YAML your app requires.
Install it now and hit the ground running:
choco install kubernetes-helm

Deploying Haskell yesod docker container on google app engine

I am trying to upload a yesod Docker container on Google App Engine. The source code is here and the Docker image is here.
I followed the documentation in the Custom runtime quickstart, and when invoking gcloud app deploy, the app builds fine after increasing the build timeout, but the container either fails the readiness check when trying to start or shows the following timeout message:
ERROR: (gcloud.app.deploy) Operation [apps/meeshkan-github-webhook-router/operations/xxxx-xxxx-xxxx] timed out. This operation may still be underway.
I have tried experimenting with several things, including a manual readiness check, creating an /_ah/health endpoint, and increasing the timeout of the readiness check all the way to 1799 seconds, but none of these actions seem to work.
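For reference, the readiness-check tuning described above would look something like this in app.yaml (a sketch using the documented flex readiness_check fields; the values are illustrative):
runtime: custom
env: flex
readiness_check:
  path: "/_ah/health"           # the manual endpoint mentioned above
  check_interval_sec: 5
  timeout_sec: 4
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 1799   # the maximum value tried in the question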
One issue may be the size of the container (3.2 GB), and I could try to prune it down, but I'd only do that if someone could confirm that container size is a contributing factor to deployment problems. Other than that, I'm not sure what could be causing this failure. The Docker image starts fine on our local machines.
Thanks in advance for your help and suggestions!
The issue turned out to be that I was building on Windows: images built using Docker Desktop on Windows treat all shell scripts as executable automatically, whereas Docker on Linux needs shell scripts to be given the executable permission explicitly. By adding this line to my Dockerfile:
RUN chmod +x /usr/src/app/run.sh
Everything worked fine!

Not able to start SOLR service

I have installed the Solr service in a Linux environment. Now I am trying to start the service using the command below:
service solr start
After executing this command, I get the following error from the server:
Waiting to see Solr listening on port 8080 [-] Still not seeing Solr listening on 8080 after 30 seconds!
tail: cannot open `/var/solr/logs/solr.log' for reading: No such file or directory
I created the solr.log file manually and placed it at the path mentioned above, but as soon as I issue "service solr start", the solr.log file is renamed and no new solr.log file is created, so the service fails to start. Could anyone let me know how to tackle this issue?
Thanks in advance.
I had a similar issue and was able to find a hint in /var/solr/logs/solr-8983-console.log.
Originally I had been using Java 8, and Solr was working just fine for me.
When I switched to Java 11, Solr would have the issue you reported.
The log file contained the following:
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Unrecognized VM option 'UseParNewGC'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I switched back to Java 8 and Solr started just fine.
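(If you need to stay on Java 11 instead, one workaround, assuming your solr.in.sh supports the standard GC_TUNE override, is to replace the removed CMS/ParNew options with a collector Java 11 still ships:)
# in solr.in.sh: drop the deprecated CMS/ParNew flags in favor of G1
GC_TUNE="-XX:+UseG1GC"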
First, did you use the procedure provided to install the Solr service (page 461 of the reference manual)?
Second, did you set the proper overrides to the environment defaults in a solr.in.sh script (p. 462 of the ref manual)? You also need to make sure that LOG4J_PROPS in the solr.in.sh file points to your log4j.properties file, and that SOLR_LOGS_DIR points to the correct place.
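For illustration, the relevant overrides in a solr.in.sh typically look like this (paths are the installer defaults; adjust to your layout):
# excerpt from /etc/default/solr.in.sh (paths illustrative)
SOLR_PID_DIR="/var/solr"
SOLR_HOME="/var/solr/data"
LOG4J_PROPS="/var/solr/log4j.properties"
SOLR_LOGS_DIR="/var/solr/logs"
SOLR_PORT="8080"   # this install expects 8080 (the Solr default is 8983)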
If all that is correct, then check that the values in your log4j.properties file are set correctly (p. 468 of the ref manual).
You can get the reference manual here: https://www.apache.org/dyn/closer.lua/lucene/solr/ref-guide/ if you don't have it already.
I had a tough time getting Solr to run as a service, but in the end I simply wasn't reading carefully enough.

GAE Upload Download Data / Import Data to localhost for testing on my dev server

I needed to test some changes on my local dev server before pushing to production. Doing so required having the full dataset on my local machine.
A colleague directed me to:
https://developers.google.com/appengine/docs/python/tools/uploadingdata?csw=1
I downloaded the data using an administrator's username and password, but unfortunately, I was unable to upload the data to my localhost "dev" app engine server.
I ran this command from the command line:
appcfg.py upload_data --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./
Where:
9080 was my app port on my localhost copy of the app
I was running this command from my app directory
The downloaded data was stored in the relative directory
../data/data1.dat
I received this error:
raise _ToDatastoreError(err)
google.appengine.api.datastore_errors.BadRequestError: app "dev~appname" cannot access app "appname"'s data
UPDATE: It seems that the answer was as simple as adding the following to my upload_data call:
--application="dev~appname"
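Composed into the original call, the working command would be:
appcfg.py upload_data --application="dev~appname" --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./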
Thanks @DavidBennett.
ORIGINAL ANSWER: (which also works)
After a ton of searching on SO and code.google.com, the solution I found that worked was a comment on this question:
devappserver2, remote_api, and --default_partition
I used my original command as described in the question:
appcfg.py upload_data --filename=../data/data1.dat --url=http://localhost:9080/_ah/remote_api ./
The username and password I entered when prompted were my app's username (in my case, my email) and the corresponding password. (If that doesn't work, you might want to try blank or test@example.com based on other comments I've read, but I have not tested that theory.)
I also restarted my App Engine server with the following flag. (Don't forget to remove the flag the next time you restart the server. You might also want to try without this flag, since I can't confirm that it affects anything; I'm including it here because it was a setting that I used.)
--clear_datastore=yes
The commenter recommends deleting "dev~" in your local server code, on line 84 of this file:
google/appengine/tools/devappserver2/application_configuration.py, line 84
Where the base directory "google" is located inside:
/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/
(assuming your GoogleAppEngineLauncher.app directory is in your Applications directory on your Mac).
IMPORTANT: Restart your local app engine server for the changes to take effect.
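(Alternatively, and per the question linked above, the dev server exposes a --default_partition flag, so the same effect may be achievable without editing SDK source; a sketch:)
# start the dev server with an empty partition instead of the default "dev~"
dev_appserver.py --default_partition="" app.yaml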

Google App Engine appcfg.py shows the help message for every command

I have a GWT app deployed on GAE (Java). I'm trying to download data from the App Engine datastore using appcfg.py. I did all the setup according to http://ikaisays.com/2010/06/10/using-the-bulkloader-with-java-app-engine/.
GAE Python SDK version is 1.4.3
Python version is 2.5.4
appcfg.py is on my PATH. When I run appcfg.py on the command line, I get the "help" message. But the problem is that no matter which command I use, it always returns the help message. I have not been able to run any command using appcfg.py.
It doesn't give any specific error message no matter what arguments I give. My app is using Google Accounts authentication, but I don't think it even gets to the point of authentication.
I'm able to use the Java appcfg (for other actions like rollback) without any problem. But the Python version simply refuses to work for all commands.
I've tried different formats like:
appcfg.py create_bulkloader_config --url=http://myappid.appspot.com/remote_api --application=myappid --filename=config.yml
appcfg.py create_bulkloader_config --filename=bulkloader.yaml --url=http://myappid.appspot.com/remote_api
appcfg.py --filename=bulkloader.yaml --url=http://myappid.appspot.com/remote_api create_bulkloader_config
All give me the same help message:
Usage: appcfg.py [options]
Action must be one of:
create_bulkloader_config: Create a bulkloader.yaml from a running application.
cron_info: Display information about cron jobs.
download_app: Download a previously-uploaded app.
download_data: Download entities from datastore.
help: Print help for a specific action.
request_logs: Write request logs in Apache common log format.
rollback: Rollback an in-progress update.
set_default_version: Set the default (serving) version.
update: Create or update an app version.
update_cron: Update application cron definitions.
update_dos: Update application dos definitions.
update_indexes: Update application indexes.
update_queues: Update application task queue definitions.
upload_data: Upload data records to datastore.
vacuum_indexes: Delete unused indexes from application.
Use 'help <action>' for a detailed description.
Options:
-h, --help Show the help message and exit.
-q, --quiet Print errors only.
-v, --verbose Print info level logs.
--noisy Print all logs.
-s SERVER, --server=SERVER
...
...
...
Even when I try "appcfg.py help create_bulkloader_config" for a detailed description, it still shows me the same standard help.
I have also tried on the local development server using the url http://127.0.0.1:8888/remote_api but it still gives the same help message.
I'm totally clueless as to what the problem is. I'm new to GWT and GAE, and any help will be appreciated.
Thanks.
The following fix worked for me. It looks like appcfg.py doesn't like Python 2.7 and always returns the help menu. I fixed it by executing it with Python 2.5 and hard-coding all my file locations:
C:\Python25-archive\python "C:\Program Files (x86)\Google\google_appengine\appcfg.py" rollback C:\scripts\myapp
The right way is to change the environment variables on Windows 7:
Go to System Properties.
Go to Advanced System Settings.
Click on Environment Variables.
Append the value C:\Python27\ to the Path variable.
Click OK and restart your computer. (Yes, it is needed.)
Another way is to:
Open a command prompt.
Locate your python.exe file. For example:
C:\Python27>_
Then run a python command that looks like this:
python <appcfg_directory> download_app -A <your_app_id> -V <your_app_version> <output-dir>
Where <appcfg_directory> is the path to appcfg.py, e.g. C:\Program Files\Google\google_appengine\appcfg.py (depending on your install location).
Don't forget to put quotes around <appcfg_directory>.
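Putting that together, an example invocation (the app ID, version, and output directory are placeholders):
python "C:\Program Files (x86)\Google\google_appengine\appcfg.py" download_app -A your_app_id -V 1 C:\temp\app_download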
