IDEA doesn't update sources while in local debug for an App Engine app - google-app-engine

I created a Spring Boot + Google App Engine application. For development purposes I use IntelliJ IDEA and the Google Cloud Tools plugin. I'm currently using only local debug, which means I don't deploy anything to Google Cloud. The configuration for debug is below:
I created a simple endpoint to check whether my code is updated on change or not:
static int i = 10;

@GetMapping(value = "/test")
public String test() {
    // if the new initial value shows up after a restart, the rebuild worked
    return Integer.toString(++i);
}
Unfortunately, when I change my code (e.g. from i = 10 to i = 100) and restart the app (I mean press Rerun (Ctrl+F5) or Stop (Ctrl+F2) + Run), my changes are not applied on the server, which means IDEA doesn't rebuild the sources on server start. As you can see in the screenshot above, I even tried adding a Build Project step to Before launch, which didn't work.
So to apply changes I need to run mvn appengine:run from the command line, press Ctrl+C to stop it, switch to IDEA and start debugging again, which is a real pain.
Another option is to use Hot Reload (Update application, Ctrl+F10). It recompiles only changed classes and reloads resources. This is a nice feature, but unfortunately it doesn't work in a lot of cases, which makes it unreliable as the only way to reload.
Is there anything I can do to force IDEA to compile my sources? Is it a bug I should report to the plugin developers? Or maybe App Engine uses some additional remote sources that require an explicit Maven invocation?

I finally found a solution. As far as I understand, the Google Cloud plugin just compiles the classes into target/classes, but when it starts App Engine, the engine expects the unpacked .war to be present under target/demo-0.0.1-SNAPSHOT.
For example, if I delete both directories I get an error at startup.
To solve the issue I needed to have those outputs built before launch:
In the toolbar: Run -> Edit Configurations
Select Google App Engine Standard Local server
Under Before launch add Build Artifact -> demo:war exploded, where demo is the name of your app.
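If you prefer staying on the command line instead, one alternative (assuming the appengine-maven-plugin already in use, since mvn appengine:run is what worked above) should be to rebuild the packaging before each run so the exploded war under target/ is regenerated:
mvn package appengine:run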

Related

How to glue together Vert.x web and Kotlin react using Gradle in Kotlin MPP

Problem
It is not clear to me how to configure a Kotlin MPP (multiplatform project) using Gradle (Kotlin DSL) to use Vert.x web for the Kotlin/JVM target with Kotlin React on the Kotlin/JS target.
Update
You can check out the updated minimal example for a working solution, inspired by the approach of Alexey Soshin.
What I've tried
Have a look at my minimal example on GitHub of a Kotlin MPP with the Vert.x web server on the JVM target and Kotlin React on the JS target.
You can make it work if you:
first run the Gradle task browserDevelopmentRun (I don't understand the magic behind it), and after the browser opens and renders the React SPA (single-page application) you can
stop that task, and then
start the Vert.x backend with the run task.
After that, without refreshing the SPA that is still open in the browser, you can confirm that it communicates with the backend: pressing the button alerts the received data.
Question
What are the possible ways/approaches to glue these two targets so when I run my application: JS target is assembled and served via JVM backend conveniently?
I am thinking that perhaps Gradle should trigger some of the Kotlin browser tasks and then make them available in some way for the Vert.x backend.
If you'd like to run a single task, though, you need your server task to depend on the JS compilation. In your build.gradle add the following:
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> {
    dependsOn(tasks.getByName<org.jetbrains.kotlin.gradle.targets.js.webpack.KotlinWebpack>("jsBrowserProductionWebpack"))
}
Now invoking run will also invoke WebPack.
Next you want to serve your files. There are different ways of doing that. One is to copy them into the Vert.x resources directory using Gradle (a sketch of that option follows below). Another is to point Vert.x to where webpack puts them by default:
route().handler(StaticHandler.create("../../../distributions"))
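For the copy-into-resources option, a minimal sketch might look like the following. The task name jvmProcessResources, the build/distributions directory and the web target folder are assumptions that depend on your Kotlin plugin version and target names, so adjust them to your build:
// Sketch only: bundle the webpack output into the JVM resources so Vert.x can serve it from the classpath.
// Task and directory names here are assumptions; adjust them to your build.
tasks.named<Copy>("jvmProcessResources") {
    dependsOn("jsBrowserProductionWebpack")
    // copy whatever webpack produced into a "web" folder inside the JVM resources
    from(layout.buildDirectory.dir("distributions")) {
        into("web")
    }
}
With that in place the Vert.x side can use StaticHandler.create("web") and read the files from the classpath instead of a relative path.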
There are a few different things going on here.
First, both Vert.x and the webpack dev server run on the same port. The easiest way to fix that is to start Vert.x on some other port, like 18080:
.listen(18080, "localhost") { result ->
And then change your index.kt file to use that port:
val result: SomeData = get("http://localhost:18080/data")
Because we now run on different ports, we also need to add a CORS handler:
router.apply {
    route().handler(CorsHandler.create("*"))
}
The last thing is that you cannot run two never-ending Gradle tasks from the same process (OK, you can, but that's complicated). So what I suggest is that you open two terminals and run:
./gradlew run
In one, and
./gradlew jsBrowserDevelopmentRun
In another.
Having done all that, you should see the React app in the browser talking to the Vert.x backend.
Now, this is for development mode. For production mode, you probably don't want to run jsBrowserDevelopmentRun, but instead tie jsBrowserProductionWebpack to your run task and serve spa.js from your Vert.x app using a StaticHandler. But this answer is already too long.
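For reference, the production wiring on the Vert.x side could then look roughly like this, assuming the bundle was copied onto the classpath under web/ as in the Gradle sketch above:
// assumption: index.html and the webpack bundle were packaged under "web/" on the classpath
router.route("/*").handler(StaticHandler.create("web"))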

How to pass deployment settings to application?

I am trying to deploy a Qooxdoo web application backed by CherryPy-hosted web services onto a server. However, I need to configure the client-side Qooxdoo application with the hostname of the server on which the application resides, so that the Ajax callbacks resolve to the right host. I have a feeling I can use the capabilities of the generate.py Qooxdoo script to generate client-side code with this set appropriately, but reading through the docs hasn't made it clear how yet. Anyone have any tips?
(FWIW, I know how I'd approach this using something like PHP and a different client-side framework like Echo 3--I'd have the index file be a PHP file that reads a local system configuration file prior to sending back client-side code. In this case, however, the generate.py file is a necessary part of the toolchain, so I can't see how to do it so simply.)
You can use the qx.core.Environment class to add/get configuration for your project. The recommended way is to do this only at compilation time, but there is a hack if you want to configure your application at run time.
Configuration during compilation time
If you want to configure the environment at compilation time, see the qooxdoo documentation.
In both cases, after you add an environment variable to your application, it can be accessed using the qx.core.Environment.get method.
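For example (the key name here is just the hypothetical one used in the run-time example below; reading a value looks the same however it was set):
// read the value configured for this environment key
var hostname = qx.core.Environment.get("myawsomeapp.hostname");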
On run time
WARNING: this method isn't supported/documented by qooxdoo. Basically it's a hack.
If you want to make some environment configuration available at run time, you have to do this before qooxdoo loads. In order to do this you could add some JavaScript to your web page, e.g.:
window.qx = { };
window.qx.$$environment = {
"myawsomeapp.hostname": "example.org",
};
This should be added somewhere in your page before qooxdoo starts loading, otherwise it will not have the desired effect. The advantage of this method is that you can push configuration to the client, e.g. some API keys that may differ between instances of your application.
The easiest way will be to compose your AJAX URL on the fly from window.location; ideally you would use window.location.origin, which for this Stack Overflow website would be "https://stackoverflow.com", but there are issues with that on IE.
A cross platform solution is:
var urlRoot = window.location.protocol + "//" + window.location.hostname +
    (window.location.port ? ':' + window.location.port : '');
This means your URL will always be correct, even if the server name changes (e.g. you're on a test server instead of production).
See here for more details:
https://tosbourn.com/a-fix-for-window-location-origin-in-internet-explorer/

How to specify different api URL in Azure deployment vs running locally?

So my setup is like this.
I have a solution with two projects. The first project is an ASP.NET WebAPI project that represents a REST API. It is completely view-less and returns only JSON responses for the API calls.
The second project is an AngularJS client. I started by creating an empty Web app in Visual Studio. So this project does have a Web.Config and an Azure publish profile but no C# controllers, routes, app_start, etc. It is all JavaScript and HTML.
The two projects are deployed as two independent Web Apps in Azure. Project_API and Project_Web.
My question is: in my Angular app, in the service responsible for communicating with the REST API, how do I gracefully detect or set the URL based on whether I am deployed in Azure or running locally?
// Use this api URL when running locally
var BaseURL = 'http://localhost:15774/api/games/';
// Use this api URL when deployed to Azure
// var BaseURL = 'http://Project_API.azurewebsites.net/api/games/';
It is similar to how, inside the Project_API project, I can set a different connection string for my local vs production database. That part I understand, because the C# code can read the database connection string from Web.Config, and I can override that value in the Azure application settings for the deployed app. I just don't know the right way to do something similar for a JavaScript client web app.
The actual solution I went with was to use the ASP.NET 5 project type. This project type has built-in support for the gulp task runner. It still feels a little unnatural to use Visual Studio to develop an AngularJS client in the first place, but at least this brings it closer to a common front-end development workflow, with task runner support.
The other suggested solution surely works as well. It just seems to me that if you choose:
to separate your REST API and client front-end into independent projects rather than serving both from a single project, and
to write your front-end client as an Angular SPA,
then it is undesirable to have to use C# and Razor in the Angular client. That might be common in traditional ASP.NET development, but it isn't standard in most Angular client development. Using task runners for Angular clients is closer to general practice. However, as the other answer points out, Visual Studio support for this is brand new.
Rest of the details for my solution:
Pull a new module/dependency into gulp: gulp-ng-constant
Create app/config.json:
{
"development": { "ApiEndpoint": "http://localhost:15774/api/games/" },
"production":{ "ApiEndpoint": "http://myapp.azurewebsites.net/api/games/" }
}
Set up a new gulp task: "gulp config"
Set this task to call the ngConstant function that comes with gulp-ng-constant, and have it load the development settings from the config file (a sketch of the full task follows after this list):
var myConfig = require(paths.webroot + 'js/app/config.json');
var envConfig = myConfig["development"];
Follow the gulp-ng-constant documentation to specify any options and the name of the Angular module you want the constants registered in.
Bind this new task to the after-build event in Task Runner Explorer, so it runs after every build.
Set up a second gulp task: gulp production:config
Make it exactly the same as step 3, except using myConfig["production"]
Don't bind it to the after-build event; instead add it to the prepublish tasks in your project.json file:
"prepublish": [ "npm install", "bower install", "gulp clean", "gulp production:config", "gulp min" ]
Now whenever you build and/or publish, the gulp tasks will automatically generate the file /app/ngConstants.js. If you set the tasks up correctly, the file will contain the Angular code to register the constants with the correct module:
angular.module("game", [])
.constant("ApiEndpoint", "http://localhost:15774/api/games/")
The only thing I don't really like about this solution is that there is no obvious way in gulp to tell whether a build is "Debug" or "Release". Reading some forums, it sounds like the VS team is aware of this issue and plans to fix it in the future; some method is needed to expose the build configuration to the task runner. In my solution the "development" constants are written on every build and then overwritten with the "production" values on publish. This works for the API endpoint case, but other constants might have different requirements and need that Release vs Debug distinction, or you would be forced to run the release tasks by hand, which might be acceptable depending on how often you run a release build locally.
In your case, you should have a .cshtml file that provides this information to the page. You will need MVC if you're intending to deploy this with IIS. Otherwise your options would be different with something like Node.
Whether you read that information from a registry value, environment variable, database, web config, or whatever is up to you.
At the end of the day, you will have something that sets that value, which you generate in the .cshtml with Razor:
<script>window.ENDPOINT = '@someEndpoint';</script>
And then you can either just read that off the window in your JavaScript, or you can make a constant in your app and use it that way:
app.constant('myAppGlobal', window.ENDPOINT || {});

Play Siena failing to connect to MySQL on GAE

I am using Play framework 1.2.7, the GAE module 1.6.0 and the Siena module 2.0.7 (also tested 2.0.6). This is a simple project that should run on Play, deployed on App Engine, and connect to a MySQL database in Google Cloud SQL. My project runs fine locally but fails to connect to the database in production. Looking at the logs, it looks like it is using the PostgreSQL driver instead of the MySQL one.
Application.conf
# db=mem
db.url=jdbc:google:mysql://PROJECT_ID:sienatest/sienatest
db.driver=com.mysql.jdbc.GoogleDriver
db.user=root
db.pass=root
This is the crash stack trace
play.Logger niceThrowable: Cannot connected to the database : null
java.lang.NullPointerException
at com.google.appengine.runtime.Request.process-a3b6145d1dbbd04d(Request.java)
at java.util.Hashtable.put(Hashtable.java:432)
at java.util.Properties.setProperty(Properties.java:161)
at org.postgresql.Driver.loadDefaultProperties(Driver.java:121)
at org.postgresql.Driver.access$000(Driver.java:47)
at org.postgresql.Driver$1.run(Driver.java:88)
at java.security.AccessController.doPrivileged(AccessController.java:63)
at org.postgresql.Driver.getDefaultProperties(Driver.java:85)
at org.postgresql.Driver.connect(Driver.java:231)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at play.modules.siena.GoogleSqlDBPlugin.onApplicationStart(GoogleSqlDBPlugin.java:103)
at play.plugins.PluginCollection.onApplicationStart(PluginCollection.java:525)
at play.Play.start(Play.java:533)
at play.Play.init(Play.java:305)
What is going on here? I am specifying the correct driver and URL scheme and it's still using the PostgreSQL driver. Google Cloud SQL API access is enabled, the app is allowed to connect to the MySQL instance, I am not using db=mem, ... I am stuck and can't figure out how to move forward! :-((
UPDATE: I thought I had found the solution, but that was not the case. If I keep the %prod. prefix and create a war normally (or just don't define any DB properties), the application uses the Google Datastore instead of Cloud SQL. If I create the war file adding --%prod at the end (or just delete the %prod. prefix in application.conf), it keeps failing to connect to the database with the same initial error.
Any ideas please?
After being stuck on this for so long, I found the solution in no time after posting the question. Quite stupid, actually.
The production environment properties in the application.conf file must be prefixed with %prod., so the database config should read:
%prod.db.url=jdbc:google:mysql://PROJECT_ID:sienatest/sienatest
%prod.db.driver=com.mysql.jdbc.GoogleDriver
%prod.db.user=root
%prod.db.pass=root
And everything runs fine.
EDIT: This is NOT the solution. The problem went away, but the app was using the Datastore instead of Cloud SQL.
In the end I made a slight modification to the Play Siena module source code and recompiled it.
In case anyone is interested, you will need to remove/comment out/catch the exception around this block, near line 97 of the GoogleSqlDBPlugin class:
// Try the connection
Connection fake = null;
try {
    if (p.getProperty("db.user") == null) {
        fake = DriverManager.getConnection(p.getProperty("db.url"));
    } else {
        fake = DriverManager.getConnection(p.getProperty("db.url"), p.getProperty("db.user"), p.getProperty("db.pass"));
    }
} finally {
    if (fake != null) {
        fake.close();
    }
}
For some reason the connection fails when initiated with DriverManager.getConnection(), but it works when initiated with basicDatasource.getConnection(), which apparently is what the module uses in the rest of its code. So if you delete the block above and recompile the module, everything will work as expected. If you are compiling with JDK 7, you will also need to implement public Logger getParentLogger() throws SQLFeatureNotSupportedException in the ProxyDriver inner class at the end of the GoogleSqlDBPlugin file.
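For reference, a minimal stub for that method (this is the standard JDBC 4.1 signature; the exception message is just illustrative):
// Goes inside the ProxyDriver inner class; required when compiling against JDK 7+.
@Override
public java.util.logging.Logger getParentLogger() throws SQLFeatureNotSupportedException {
    throw new SQLFeatureNotSupportedException("java.util.logging is not used by this driver");
}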
Strangely, I dug into DriverManager.getConnection() and it looks like a PostgreSQL driver gets registered somehow, because otherwise I can't see why DriverManager.getConnection() would call org.postgresql.Driver.connect().

google app engine python uploading application first time

I'm trying to upload my App Engine project for the very first time and I have no clue why it is not working. The error from my terminal is:
[me][~/Desktop]$ appcfg.py update ProjectDir/
Application: tacticalagentz; version: 1
Host: appengine.google.com
Starting update of app: tacticalagentz, version: 1
Scanning files on local disk.
Error 404: --- begin server output ---
This application does not exist (app_id=u'tacticalagentz').
--- end server output ---
I'm using Python 2.6.5 and Ubuntu 10.04.
Not sure if this is relevant, but I just created a Google App Engine account today, and I also created the application today (a couple of hours ago). This is really frustrating because I just want to upload what I have so far (as a demo). In my app.yaml this is my first line:
application: tacticalagentz
Furthermore, I checked my Admin Console, and I CLEARLY see the app ID right there, and it matches letter for letter the app ID in my app.yaml.
Could someone please enlighten me and tell me what I am doing wrong? Or is it something beyond my control (like Google needing time to index my app ID)?
Thank you very much in advance.
Apparently adding the --no_cookies parameter makes it work:
appcfg.py update --no_cookies ProjectDir/
The way I was able to find my answer was by uploading my app from Mac OS X (thank god I have Linux, Mac and Windows). App Engine on Mac OS X comes with a GUI, and uploading from there worked. I then found the command it ran in the console, which included --no_cookies. Perhaps if you run into similar issues in the future, this is one approach to finding the answer.
App Engine for Java has the same problem. The problem is with the account login.
If you are using Eclipse, use the Sign In button.
If you are using the command line, use the -e option, like this:
appcfg.sh -e your#email.com update yoursite/
I had the same problem. When I changed the name of the app I used in the Launcher to match the one registered in App Engine, it worked without any problem. As far as I can tell, it was the name mismatch that caused the problem. You can see the name of your registered app in the App Engine Admin Console (https://appengine.google.com/).
Here's what fixed it for me:
I had an instance of dev_appserver.py myProjDirectory/ running in a different terminal.
I guess the scripts are somehow linked and aren't thread safe.
An alternative option that worked for me was to just "Clear Deployment Credential" from the Control menu of the GUI. When the app was deployed after this, it opened a Google page asking to allow GAE access to the user profile, and then the deployment was successful.
The key bit is
This application does not exist (app_id=u'tacticalagentz').
which is telling you that appspot.com doesn't know of an application by that name. The admin console (https://appengine.google.com/) shows your applications. Check there. You might have made an inadvertent typo when you registered the app.
