Remote API, Objectify and the DevServer don't like transactions?

I am using Objectify 4 to write to the HRD datastore. Everything works fine in unit tests and when running the application on the devserver or in production.
But when I try to connect to the devserver datastore using the Remote API, an error is thrown as soon as the code starts a cross-group (XG) transaction. When connecting through the Remote API, the datastore seems to think that HRD is not enabled.
This is how I connect ...
public static void main(String[] args) {
    RemoteApiOptions options = new RemoteApiOptions()
            .server("localhost", 8888)
            .credentials("foo", "bar");
    RemoteApiInstaller installer = new RemoteApiInstaller();
    StoredUser storedUser = null;
    try {
        installer.install(options);
        ObjectifyInitializer.register();
        storedUser = new StoredUserDao().loadStoredUser(<KEY>);
        log.info("found user : " + storedUser.getEmail());
        // !!! ERROR !!!
        new SomeOtherDao().doSomeDataManipulationInTransaction();
    } catch (Throwable e) {
        e.printStackTrace();
    } finally {
        ObjectifyFilter.complete();
        installer.uninstall();
    }
}
When new SomeOtherDao().doSomeDataManipulationInTransaction() starts a transaction on multiple entity groups, this error is thrown:
transactions on multiple entity groups only allowed in High Replication applications
How can I tell the Remote API that this is an HRD environment?
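For reference, the DAO internals aren't shown above; a minimal Objectify 4 sketch of a method that would trigger a cross-group transaction might look like this (class and entity details are hypothetical):
import com.googlecode.objectify.VoidWork;
import static com.googlecode.objectify.ObjectifyService.ofy;

public class SomeOtherDao {
    public void doSomeDataManipulationInTransaction() {
        ofy().transact(new VoidWork() {
            @Override
            public void vrun() {
                // Loading and saving entities from two different entity
                // groups here forces an XG transaction, which only works
                // when the datastore runs in HRD mode.
            }
        });
    }
}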

If your application is using the High Replication Datastore, add an
explicit s~ prefix (or e~ prefix if your application is located in the
European Union) to the app id
For the Java version, add this prefix to the <application> tag in appengine-web.xml, then deploy the version where you have activated the remote_api servlet.
Example
<application>myappid</application>
becomes
<application>s~myappid</application>
Source: https://developers.google.com/appengine/docs/python/tools/uploadingdata#Python_Setting_up_remote_api

I had the 'unapplied job percentage' set to 0, and transactions over the Remote API failed as if the devserver were running as Master/Slave rather than HRD. Raising the 'unapplied job percentage' above zero fixed the problem.
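For the Java devserver this setting can be passed as a JVM flag at startup; a sketch, assuming the documented datastore.default_high_rep_job_policy_unapplied_job_pct system property and a standard dev_appserver launch:
dev_appserver.sh --jvm_flag=-Ddatastore.default_high_rep_job_policy_unapplied_job_pct=20 path/to/your/war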

Connect Java Google AppEngine Local Standard Server to Cloud DB | appengine-api-1.0-sdk-1.9.84.jar | IntelliJ & Cloud Code

EDIT2: I have managed to get past the GlobalDatastoreConfig has already been set error. I pinpointed all the locations that were being called before the init function; they were static initializers in some obscure files.
I have now pointed ALL DatastoreServiceFactory.getDatastoreService() calls at a new static method I've created in a file called Const.java.
private static boolean hasInit = false;

public static DatastoreService getDatastoreService() {
    if (!hasInit) {
        try {
            CloudDatastoreRemoteServiceConfig config = CloudDatastoreRemoteServiceConfig
                    .builder()
                    .appId(CloudDatastoreRemoteServiceConfig.AppId.create(
                            CloudDatastoreRemoteServiceConfig.AppId.Location.US_CENTRAL,
                            "gcp-project-id"))
                    .build();
            CloudDatastoreRemoteServiceConfig.setConfig(config);
            hasInit = true;
        } catch (Exception ignore) {
            // swallowed: setConfig throws if the config was already set
        }
    }
    return DatastoreServiceFactory.getDatastoreService();
}
This returns no errors on the first initialisation. However, I am getting a new error now!
Dec 08, 2022 6:49:56 PM com.google.appengine.api.datastore.dev.LocalDatastoreService init
INFO: Local Datastore initialized:
Type: High Replication
Storage: C:\Users\user\dev\repo\Celbux\core\Funksi179_NSFAS_modules\classes\artifacts\Funksi179_NSFAS_modules_war_exploded\WEB-INF\appengine-generated\local_db.bin
Dec 08, 2022 6:49:56 PM com.google.appengine.api.datastore.dev.LocalDatastoreService load
INFO: Time to load datastore: 20 ms
2022-12-08 18:49:56.757:WARN:oejs.HttpChannel:qtp1681595665-26: handleException / java.io.IOException: com.google.apphosting.api.ApiProxy$CallNotFoundException: Can't make API call urlfetch.Fetch in a thread that is neither the original request thread nor a thread created by ThreadManager
2022-12-08 18:49:56.762:WARN:oejsh.ErrorHandler:qtp1681595665-26: Error page too large: 500 org.apache.jasper.JasperException: com.google.apphosting.api.ApiProxy$RPCFailedException: I/O error
Full stacktrace: https://pastebin.com/YQ2WvqzM
Pretty sure the first of the errors is invoked from this line:
DatastoreService ds = Const.getDatastoreService();
Key ConstantKey = KeyFactory.createKey("Constants", 1);
Entity Constants1 = ds.get(ConstantKey); // <-- This line.
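As an aside, the CallNotFoundException about threads usually means an App Engine API call (urlfetch here) is being made from a thread the runtime didn't create; on App Engine, threads that need API access are normally obtained through ThreadManager. A minimal sketch, not taken from the codebase above:
import com.google.appengine.api.ThreadManager;

// Threads that make App Engine API calls must be created through
// ThreadManager rather than with `new Thread(...)`.
Thread apiThread = ThreadManager.createThreadForCurrentRequest(new Runnable() {
    @Override
    public void run() {
        // App Engine API calls are allowed on this thread
    }
});
apiThread.start();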
EDIT1: I am not using Maven. Here are the .jars I have in WEB-INF/lib
appengine-api-1.0-sdk-1.9.84.jar
appengine-api-labs.jar
appengine-api-labs-1.9.76.jar
appengine-api-stubs-1.9.76.jar
appengine-gcs-client.jar
appengine-jsr107cache-1.9.76.jar
appengine-mapper.jar
appengine-testing-1.9.76.jar
appengine-tools-sdk-1.9.76.jar
charts4j-1.2.jar
guava-11.0.2.jar
javax.inject-1.jar
json-20190722.jar
Original Question:
The company that I'm working at has a legacy GCP codebase written in Java. This codebase uses the appengine-api-1.0-sdk.jar library. Upon running this CloudDatastoreRemoteServiceConfig code in the very first place that our DatastoreService gets initialised, it says that the config has already been set.
If someone can shed light on how to get this outdated tech connected to the Cloud via localhost, I'll be most grateful!
web.xml
<filter>
    <filter-name>NamespaceFilter</filter-name>
    <filter-class>com.sintellec.funksi.Filterns</filter-class>
</filter>
<filter-mapping>
    <filter-name>NamespaceFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
Code
public class Filterns implements javax.servlet.Filter {
    private FilterConfig filterConfig; // field declaration was missing in the original snippet

    public void init(FilterConfig filterConfig) {
        try {
            CloudDatastoreRemoteServiceConfig config = CloudDatastoreRemoteServiceConfig
                    .builder()
                    .appId(CloudDatastoreRemoteServiceConfig.AppId.create(
                            CloudDatastoreRemoteServiceConfig.AppId.Location.US_CENTRAL,
                            "gcp-project-id"))
                    .build();
            CloudDatastoreRemoteServiceConfig.setConfig(config);
            DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        } catch (Exception e) {
            System.out.println(e);
            return;
        }
        this.filterConfig = filterConfig;
    }
}
I got this code snippet from here.
I was thinking along a few lines:
Perhaps there's GCP code that runs before our Java code and initialises the local DB.
Perhaps I need to set a global environment variable to point this old emulator at a Cloud configuration instead.
The only problem is I have no idea what to do from here; I'm hoping someone has experience with this legacy Java library.
To clarify: I am trying to get this outdated GCP Java codebase (appengine-api-1.0-sdk.jar) to connect to Cloud Datastore, NOT to use the local Datastore emulator. This is so I can debug multiple applications that all access the same Cloud DB.
It is very difficult to say with this amount of code, and we can only guess, but as you indicated, some code is probably initializing your Datastore configuration, possibly the SDK itself. You could try setting a breakpoint in the setConfig method of CloudDatastoreRemoteServiceConfig and analyzing the call stack.
In any case, one thing you could also try is not performing that initialization in your code at all, delegating authentication of your client libraries to Application Default Credentials instead.
For local development you have two options for configuring Application Default Credentials.
On one hand, you can use user credentials, i.e., you can use the gcloud CLI to authenticate against GCP with an user with the required permissions to interact with the service, issuing the following command:
gcloud auth application-default login
Please, don't forget to revoke those credentials when you no longer need them:
gcloud auth application-default revoke
On the other hand, you can create a service account with the necessary permissions and a corresponding service account key, and download that key, a JSON file, to your local filesystem. See this for instructions specific to Datastore. Then set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the downloaded file with your service account key.
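For example (the path below is a placeholder for wherever you saved the key file):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json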
Again, a word of caution: take care of the downloaded service account key file and never put it under version control because anyone with that file could assume the permissions granted to the service account.
Your code should work without further problems when running in GCP, because you will probably be using a service that supports attaching a service account, which means Application Default Credentials are provided by the GCP service itself.

Not getting any error when Serilog is not able to insert data in SQL Server

I am using Serilog as the logging framework in a .NET Core 2.0 project and I am trying to store the logs in SQL Server, but Serilog is not storing any data in the database and is not even returning an error.
Can anyone help me resolve this issue? And is it possible to fall back to a file for storing logs when the database write fails?
Serilog.Debugging.SelfLog
You can use the SelfLog property to tell Serilog where to log its own errors (all of us have had to debug the logger at some point).
Sample Code
Because I hate providing an answer without sample code that others might find useful ... here is the code we use to "initialize" our logger (including Serilog and Seq -- a great combo for generating centralized logs that the devops team can monitor).
Serilog.Debugging.SelfLog.Enable(Console.Error);

ILoggerFactory factory = new LoggerFactory();
factory.AddConsole();
factory.AddDebug();

var env = "PROD"; // MyEnvironment: PROD, STAGE, DEV, etc.
var seqLogger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .Enrich.FromLogContext()
    .Enrich.WithProperty("Environment", env)
    .WriteTo.Seq(
        "logserveraddress",
        Serilog.Events.LogEventLevel.Verbose,
        1000,
        null,
        "LogServerApiKey");

if (env.ToLower() == "prod") { seqLogger.MinimumLevel.Warning(); }

factory.AddSerilog(seqLogger.CreateLogger());
return factory.CreateLogger("NameThisLogInstanceSomethingUseful");

GORM Cloud SQL Connection on App Engine Using Go

I'm trying to connect to a Cloud SQL database using GORM in golang.
db, err = gorm.Open("mysql", "user:pass@cloudsql(connection:name:example)/")
if err != nil {
    log.Println(err)
    //panic(err)
}
When I attempt to serve the app
goapp serve appengine/
I get a runtime error
ERROR 2017-02-19 20:48:05,436 http_runtime.py:396] bad runtime process port ['\r\n']
I found this was related to the database migration:
db.AutoMigrate(&models.Event{})
If I remove the AutoMigrate, the runtime process port error goes away. However, whenever I access a route (e.g. /events) that does a database query, the connection gets dropped, a 404 page is returned, and the error sql: database is closed is logged.
When I run the app locally by building the package (go build && ./appname) and using a local MySQL server, it works fine.
Can someone please tell me how to connect to a Cloud SQL database using Go's GORM framework and App Engine?
This is due to the call to log.New in this file: https://github.com/jinzhu/gorm/blob/master/logger.go#L15
This answer explains why dev_appserver.py trips over it: https://stackoverflow.com/a/24112953/4266494
To disable this, you can either disable all GORM logging:
db.LogMode(false)
Or use an adapter on the logger output: https://github.com/benguild/GAEBridge/blob/master/log/debugLevel.go
db.SetLogger(NewDebugLogger(nil)) // on application scope
db.SetLogger(NewDebugLogger(appengine.NewContext(req))) // on request scope
On request scope I'm setting a new logger with the real context.
This is the only workaround I found that avoids the crashes while keeping some logs; it would be great if someone had a real fix.

How to upload data to local datastore?

I can update the live datastore using the Remote API, but is there something similar for the local datastore? My data is in CSV format.
When I try to connect locally using the code below,
String username = "test";
String password = "test";
RemoteApiOptions options = new RemoteApiOptions().server("localhost", 8888).credentials(username, password);
RemoteApiInstaller installer = new RemoteApiInstaller();
installer.install(options);
I get an exception:
Exception in thread "main" java.net.UnknownHostException: http
The exception is thrown at this line:
installer.install(options);
The local server is running; am I connecting correctly? Do I need to start the local remote_api server separately?
I finally got this to work after a lot of searching. The dev username/password is XXXX/XXXX.
Taken from here : https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/1cQWn0UEoMc
I haven't been able to find this specified anywhere in the Google App Engine documentation.
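As an aside, newer Java SDKs expose a helper for exactly this case; if your SDK version has it, something like the following sketch should connect to the dev server without real credentials (the CSV-parsing part is left out):
import com.google.appengine.tools.remoteapi.RemoteApiInstaller;
import com.google.appengine.tools.remoteapi.RemoteApiOptions;

public class LocalDatastoreUploader {
    public static void main(String[] args) throws Exception {
        // useDevelopmentServerCredential() fills in the dummy credentials
        // the dev server expects, so no real account is needed.
        RemoteApiOptions options = new RemoteApiOptions()
                .server("localhost", 8888)
                .useDevelopmentServerCredential();
        RemoteApiInstaller installer = new RemoteApiInstaller();
        installer.install(options);
        try {
            // create datastore entities from the CSV rows here
        } finally {
            installer.uninstall();
        }
    }
}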

DB data lost on each CloudFoundry deploy of Grails application

I'm developing a Grails 2.0.3 application using STS.
I develop, and before closing STS I usually deploy my application to CloudFoundry.
I'm using an embedded H2 database, and this is DataSource.groovy:
dataSource {
    pooled = true
    driverClassName = "org.h2.Driver"
    username = "mcg"
    password = "mcg"
}
hibernate {
    cache.use_second_level_cache = true
    cache.use_query_cache = true
    cache.provider_class = 'net.sf.ehcache.hibernate.EhCacheProvider'
}
// environment specific settings
environments {
    development {
        dataSource {
            dbCreate = "update" // one of 'create', 'create-drop', 'update'
            url = "jdbc:h2:file:qhDB"
        }
    }
    test {
        dataSource {
            dbCreate = "update"
            url = "jdbc:h2:file:testDb"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:h2:file:prodDb"
        }
    }
}
My problem is that each time I deploy my application to CloudFoundry, the DB in the cloud becomes empty.
Any suggestions?
@kenota is correct, but there's the additional risk that the entire instance can crash and get rebuilt, so you would lose all filesystem files, even in /tmp. You're much better off using MySQL or PostgreSQL; both are trivial to use in CloudFoundry and will perform much better. In addition, if you have enough traffic to need multiple web server instances, you will share one database instead of multiple file-based databases that all have different data.
By doing this:
url = "jdbc:h2:file:prodDb"
You are asking H2 to use a file to store data. The problem is that you are using a relative path, so the file will be created in the current working directory of the web application, which is usually the unpacked web app root.
If you run it on Tomcat, the file will be located at /opt/tomcat7/webapps/app/prodDb. If you redeploy your application and the previous one is deleted, the database file will be deleted as well.
I think that is exactly what is happening on CloudFoundry.
You should define an absolute path to store your database:
url = "jdbc:h2:file:/tmp/prodDb"
I solved this by using the MySQL service on CloudFoundry.
