How to glue together Vert.x Web and Kotlin React using Gradle in a Kotlin MPP

Problem
It is not clear to me how to configure a Kotlin MPP (multiplatform project) using Gradle (Kotlin DSL) so that the Kotlin/JVM target uses Vert.x Web and the Kotlin/JS target uses Kotlin React.
Update
You can check out the updated minimal example for a working solution,
inspired by Alexey Soshin's approach.
What I've tried
Have a look at my minimal example on GitHub of a Kotlin MPP with the Vert.x web server on the JVM target and Kotlin React on the JS target.
You can make it work if you:
First run the Gradle task browserDevelopmentRun (I don't understand the magic behind it), and after the browser opens and renders the React SPA (single-page application) you can
stop that task and then
start the Vert.x backend with the run task.
After that, without refreshing the still-loaded SPA in the browser, you can confirm that it communicates with the backend by pressing the button: it will alert the received data.
Question
What are the possible ways/approaches to glue these two targets together so that when I run my application, the JS target is assembled and served by the JVM backend conveniently?
I am thinking that perhaps Gradle should trigger some of the Kotlin browser tasks and then make their output available in some way to the Vert.x backend.

If you'd like to run a single task, though, your server task needs to depend on your JS compilation. In your build.gradle.kts add the following:
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> {
    dependsOn(tasks.getByName<org.jetbrains.kotlin.gradle.targets.js.webpack.KotlinWebpack>("jsBrowserProductionWebpack"))
}
Now invoking run will also invoke webpack.
Next you want to serve your files. There are different ways of doing it. One is to copy them to the Vert.x resources directory using Gradle (a sketch follows below). Another is to point Vert.x to where webpack puts them by default:
route().handler(StaticHandler.create("../../../distributions"))
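For the copy approach, a minimal Gradle (Kotlin DSL) sketch might look like the following. This is an assumption, not part of the original answer: the build/distributions output directory and the jvmProcessResources task name match common Kotlin MPP defaults, but verify them against your Kotlin plugin version.
// Hypothetical sketch: bundle the webpack output into the JVM target's
// resources under "static/", so Vert.x can serve it from the classpath.
// jvmProcessResources is the MPP resource-processing task for the "jvm" target.
tasks.named<Copy>("jvmProcessResources") {
    dependsOn("jsBrowserProductionWebpack")
    from("$buildDir/distributions") {   // default webpack output (assumed)
        into("static")
    }
}
With that in place, route().handler(StaticHandler.create("static")) would serve the copied bundle from the classpath.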

There are a bunch of different things going on there.
First, both your Vert.x server and webpack run on the same port. The easiest way to fix that is to start Vert.x on some other port, like 18080:
.listen(18080, "localhost") { result ->
And then change your index.kt file to use that port:
val result: SomeData = get("http://localhost:18080/data")
Because we run on different ports now, we also need to install a CORS handler:
router.apply {
    route().handler(CorsHandler.create("*"))
}
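Putting those pieces together, a minimal development-mode verticle might look roughly like this. It's a sketch, not the original answer's code: the /data route, the JSON payload, and the class name are assumptions chosen to match the question's button-press example.
import io.vertx.core.AbstractVerticle
import io.vertx.ext.web.Router
import io.vertx.ext.web.handler.CorsHandler

class ServerVerticle : AbstractVerticle() {
    override fun start() {
        val router = Router.router(vertx).apply {
            // Allow the webpack dev server origin to call this backend.
            route().handler(CorsHandler.create("*"))
            // The endpoint the SPA's button fetches.
            get("/data").handler { ctx ->
                ctx.response()
                    .putHeader("content-type", "application/json")
                    .end("""{"message":"hello from Vert.x"}""")
            }
        }
        vertx.createHttpServer()
            .requestHandler(router)
            .listen(18080, "localhost") { result ->
                if (result.succeeded()) println("Listening on 18080")
                else result.cause().printStackTrace()
            }
    }
}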
Last is the fact that you cannot run two never-ending Gradle tasks from the same process (OK, you can, but that's complicated). So what I suggest is that you open two terminals and run:
./gradlew run
In one, and
./gradlew jsBrowserDevelopmentRun
In another.
Having done all that, you should see the SPA in your browser, and pressing the button should alert the data received from the backend.
Now, this is for development mode. For production mode, you probably don't want to run jsBrowserDevelopmentRun, but instead tie jsBrowserProductionWebpack to your run task and serve spa.js from your Vert.x app using StaticHandler, roughly as sketched below. But this answer is already too long.
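A rough sketch of that production wiring, assuming your build copies the webpack output into a classpath directory named webroot (the directory name is an assumption):
// Serve the compiled SPA (index.html, spa.js) as static files.
// Register API routes before this catch-all route.
router.route("/*").handler(StaticHandler.create("webroot"))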

Related

Cefpython app with HTML/JS files in the local filesystem

I'm trying to make a hybrid python-js application with cefpython.
I would like to have:
JS and HTML files local to the cef python app (e.g. in './html', './js', etc)
Load one of the HTML files as the initial page
Avoid any CORS issues with files accessing each other (e.g. between directories)
The following seems to work to load the first page:
browser = cef.CreateBrowserSync(url='file:///html/index.html',
                                window_title="Rulr 2.0")
However, I then hit CORS issues.
Do I need to run a webserver also? Or is there an effective pattern for working with local files?
Try passing the "disable-web-security" switch to cef.Initialize or set BrowserSettings.web_security_disabled.
Try also setting BrowserSettings.file_access_from_file_urls_allowed and BrowserSettings.universal_access_from_file_urls_allowed. Both options are sketched below.
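A minimal sketch of both options (assuming the cefpython3 package; the setting keys mirror the names above, but double-check them against your cefpython version):
from cefpython3 import cefpython as cef

# Option 1: a global command-line switch.
cef.Initialize(switches={"disable-web-security": ""})

# Option 2: per-browser settings relaxing file:// restrictions.
browser = cef.CreateBrowserSync(
    url='file:///html/index.html',
    window_title="Rulr 2.0",
    settings={
        "web_security_disabled": True,
        "file_access_from_file_urls_allowed": True,
        "universal_access_from_file_urls_allowed": True,
    },
)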
There are a few options in CEF for loading custom content that can be used to load filesystem content without any security restrictions: a resource handler, a scheme handler and a resource manager. In CEF Python only the resource handler is currently available; see the wxpython-response.py example on the README-Examples.md page.
The resource manager is a very easy API for loading various content; it is to be implemented in Issue #418 (PRs are welcome):
https://github.com/cztomczak/cefpython/issues/418
For scheme handler see Issue #50:
https://github.com/cztomczak/cefpython/issues/50
Additionally there is also GetResourceResponseFilter in upstream CEF, which is an easier option than a resource handler; it is to be implemented via Issue #229:
https://github.com/cztomczak/cefpython/issues/229
You could also run an internal web server inside your app (easy to do with Python, as sketched below) and serve files that way. Upstream CEF also has built-in web server functionality, however I don't think this will be exposed in cefpython, as it's already easy to set up a web server in Python.
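For illustration, a sketch using only the Python standard library (the ./html directory and port are assumptions):
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the local ./html directory in a background thread, then point
# CEF at http:// URLs instead of file:// ones to avoid CORS issues.
handler = partial(SimpleHTTPRequestHandler, directory="html")
server = HTTPServer(("127.0.0.1", 8000), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# browser = cef.CreateBrowserSync(url="http://127.0.0.1:8000/index.html")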

How to pass deployment settings to application?

I am trying to deploy a Qooxdoo web application backed by CherryPy-hosted web services onto a server. However, I need to configure the client-side Qooxdoo application with the hostname of the server on which the application resides so that the Ajax callbacks resolve to the right host. I have a feeling I can use the capabilities of the generate.py Qooxdoo script to generate client-side code with this appropriately set, but reading through the docs hasn't made it clear how yet. Anyone have any tips?
(FWIW, I know how I'd approach this using something like PHP and a different client-side framework like Echo 3--I'd have the index file be a PHP file that reads a local system configuration file prior to sending back client-side code. In this case, however, the generate.py file is a necessary part of the toolchain, so I can't see how to do it so simply.)
You can use the qx.core.Environment class to add/get configuration for your project. The recommended way is to set it only at compilation time, but there is a hack if you want to configure your application at run time.
Configuration during compilation time
If you want to configure the environment during compilation time see this.
In both cases after you add any environmental variable to your application, it can be accessed using the qx.core.Environment.get method.
At run time
WARNING: this method isn't supported/documented by qooxdoo. Basically it's a hack.
If you want to make some environment configuration available at run time, you have to do this before qooxdoo loads. In order to do this you could add some JavaScript to your webpage, e.g.:
window.qx = { };
window.qx.$$environment = {
    "myawsomeapp.hostname": "example.org",
};
This should be added somewhere in your page before qooxdoo starts loading, otherwise it will not have the desired effect. The advantage of this method is that you can push configuration to the client, e.g. some API keys that may be different between instances of your application. The value can then be read back as shown below.
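For example (the key matches the snippet above; qx.core.Environment.get is the accessor already mentioned):
// Reads the value injected via window.qx.$$environment above.
var hostname = qx.core.Environment.get("myawsomeapp.hostname");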
The easiest way will be to compose your AJAX URL on the fly from window.location; ideally, you would be able to use window.location.origin which for this StackOverflow website would be "https://stackoverflow.com" but there are issues with that on IE.
A cross platform solution is:
var urlRoot = window.location.protocol + "//" +
    window.location.hostname +
    (window.location.port ? ':' + window.location.port : '');
This means your URL will always be correct, even if the server name changes (e.g. you're on a test server instead of production).
See here for more details:
https://tosbourn.com/a-fix-for-window-location-origin-in-internet-explorer/

IDEA doesn't update sources while in local debug for an App Engine app

I created a Spring Boot + Google App Engine application. For development purposes I use IntelliJ IDEA and the Google Cloud Tools plugin. I'm currently using only local debug, which means I don't deploy anything to Google Cloud.
I created a simple service to check whether my code is updated on change:
static int i = 10;

@GetMapping(value = "/test")
public String test() {
    return Integer.toString(++i);
}
Unfortunately, when I change my code (e.g. from i = 10 to i = 100) and restart the app (I mean press Rerun (Ctrl+F5) or Stop (Ctrl+F2) + Run), my changes aren't applied on the server, which means IDEA doesn't rebuild the sources on server start. As you can see above, I even tried to add a Build Project step to Before launch, which didn't work.
So to apply changes I need to run mvn appengine:run from the command line, press Ctrl+C to stop it, switch to IDEA and start debugging again, which is a pain in the ass.
Another option is to use Hot Reload (Update application, Ctrl+F10). It recompiles only changed classes and reloads resources. This is a cool feature, but unfortunately it doesn't work in a lot of cases, so I can't rely on it as a reload mechanism.
Is there anything I can do to force IDEA to compile my sources? Is it a bug I should report to the plugin developer? Or maybe App Engine uses some additional sources that require an explicit Maven call?
I finally found a solution. As I understand it, the Google Cloud plugin just compiles the classes into target/classes, but when it starts App Engine, the engine expects the unpacked .war to be present under target/demo-0.0.1-SNAPSHOT. For example, if I delete both directories I get an error on startup.
To solve the issue I needed to build those sources:
In the toolbar: Run -> Edit Configurations
Select Google App Engine Standard Local server
Under Before launch add Build Artifact -> demo:war exploded, where demo is the name of your app.

How to specify a different API URL in an Azure deployment vs. running locally?

So my setup is like this.
I have a solution with two projects. The first project is an ASP.NET WebAPI project that represents a REST API. It is completely view-less and returns only JSON responses for the API calls.
The second project is an AngularJS client. I started by creating an empty Web app in Visual Studio. So this project does have a Web.Config and an Azure publish profile but no C# controllers, routes, app_start, etc. It is all JavaScript and HTML.
The two projects are deployed as two independent Web Apps in Azure. Project_API and Project_Web.
My question is: in my Angular app, how does the service responsible for communicating with the REST API gracefully detect or set the URL, based on whether it is deployed in Azure vs. running locally?
// Use this api URL when running locally
var BaseURL = 'http://localhost:15774/api/games/';
// Use this api URL when deployed to Azure
// var BaseURL = 'http://Project_API.azurewebsites.net/api/games/';
It is similar to how inside of the Project_API project I can set a different connection string for my local vs. production database. That part I understand, because the C# code can read the database connection string from Web.Config, and I can override that value in the Azure application settings for the deployed app. I just don't know the right way to do the same for a JavaScript client web app.
The actual solution I went with was to create an ASP.NET 5 project, a project type which has built-in support for the gulp task runner. It still feels a little unnatural to use Visual Studio to develop an AngularJS client in the first place, but at least this brings it closer to a typical front-end development workflow with task runner support.
I am sure the other suggested solution works also. It just seems to me that if you choose:
To separate your REST API and client front-end into separate independent projects rather than serving up both your client and server from a single project.
Write your front-end client as an Angular SPA.
Then it would be undesirable to have to use C# and Razor in the Angular client. That might be common in traditional ASP.NET development, but not in most Angular client development, where task runners are the general practice. However, as the other answer points out, Visual Studio support for this is brand new.
Rest of the details for my solution:
Pull a new module/dependency into gulp: gulp-ng-constant
Create app/config.json:
{
    "development": { "ApiEndpoint": "http://localhost:15774/api/games/" },
    "production":  { "ApiEndpoint": "http://myapp.azurewebsites.net/api/games/" }
}
Set up a new gulp task: gulp config
Set this task to call the ngConstant function that comes with gulp-ng-constant. Have it load the development settings from the config file:
var myConfig = require(paths.webroot + 'js/app/config.json');
var envConfig = myConfig["development"];
Follow the gulp-ng-constant documentation to specify any options and the name of the Angular module in which you want the constants registered (a sketch of the full task follows this list).
Bind this new task to the after-build event in task-runner explorer, so it will run after every build.
Set up a second gulp task: gulp production:config
Make it exactly the same as step 3, except using myConfig["production"].
Don't bind it to the after-build event; rather, add it to the pre-publish tasks in your project.json file:
"prepublish": [ "npm install", "bower install", "gulp clean", "gulp production:config", "gulp min" ]
Now whenever you build and/or publish, the gulp task will automatically generate a file /app/ngConstants.js. If you set up the task correctly, the file will contain the Angular code to register the constants with the correct module.
angular.module("game", [])
    .constant("ApiEndpoint", "http://localhost:15774/api/games/")
The only thing I don't really like about this solution is that there is no obvious way in gulp to tell whether a build is Debug or Release. Reading some forums, it sounds like the VS team is aware of this issue and planning to fix it in the future; some method is needed to expose the build configuration to the task runner. In my solution the "development" constants are written on every build and then overwritten with the "production" values on publish. This works for this API-endpoint case, but other constants might need the Release vs. Debug distinction, and you would be forced to run the release tasks by hand, which might be acceptable depending on how often you run a release build locally.
In your case, you should have a cshtml file which provides more information to the page. You will need MVC if you're intending to deploy this with IIS. Otherwise, your options would be different with something like Node.
Whether you read that information from a registry value, environment variable, database, web config, or whatever is up to you.
At the end of the day, you will have something that sets that value, which you generate in the cshtml with Razor:
<script>window.ENDPOINT = '@someEndpoint'</script>
And then you can either just read that off the window in your JavaScript, or you can make a constant in your app and use it that way:
app.constant('myAppGlobal', window.ENDPOINT || {});
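For instance, a service could then inject that constant (a sketch; the service and route names are assumptions):
app.factory('gamesApi', ['$http', 'myAppGlobal', function ($http, myAppGlobal) {
    // Compose request URLs from the injected endpoint value.
    return {
        list: function () { return $http.get(myAppGlobal + '/api/games/'); }
    };
}]);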

How to work with authentication in local Google App Engine tests written in Go?

I'm building a webapp in Go that requires authentication. I'd like to run local tests using appengine/aetest that validate the authentication behavior. However, I do not see any way to create an aetest.Context with a dummy user. Am I missing something?
I had a similar issue with the Python SDK. The gist of the solution is to bypass authentication when tests run locally.
You should have access to the [web] app object at test setup time - create a user object and save it into the app (or wherever your get_current_user() method will check).
This will let you unit test all application functions except authentication itself. For the latter part you can deploy your latest changes as an unpublished app version, test authentication there, and if all works, publish the version.
I've discovered some header values that seem to do the trick. appengine/user/user_dev.go has the following:
X-AppEngine-Internal-User-Email
X-AppEngine-Internal-User-Federated-Identity
X-AppEngine-Internal-User-Federated-Provider
X-AppEngine-Internal-User-Id
X-AppEngine-Internal-User-Is-Admin
If I set those headers on the Context's Request when doing in-process tests, things seem to work as expected (see the sketch below). If I set the headers on a request that I create separately, things are less successful, since the user.Current() call consults the Context's Request.
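A sketch of that technique in a test, using the google.golang.org/appengine packages (an assumption; the header names are taken from the user_dev.go listing above and may differ across SDK versions, and some releases of the aetest package ship a Login helper that sets such headers for you):
package myapp

import (
    "net/http"
    "testing"

    "google.golang.org/appengine"
    "google.golang.org/appengine/aetest"
    "google.golang.org/appengine/user"
)

func TestAuthenticatedRequest(t *testing.T) {
    inst, err := aetest.NewInstance(nil)
    if err != nil {
        t.Fatal(err)
    }
    defer inst.Close()

    req, err := inst.NewRequest(http.MethodGet, "/secure", nil)
    if err != nil {
        t.Fatal(err)
    }
    // Simulate a logged-in user via the dev-server headers.
    req.Header.Set("X-AppEngine-Internal-User-Email", "test@example.com")
    req.Header.Set("X-AppEngine-Internal-User-Id", "1")
    req.Header.Set("X-AppEngine-Internal-User-Is-Admin", "0")

    ctx := appengine.NewContext(req)
    if u := user.Current(ctx); u == nil || u.Email != "test@example.com" {
        t.Fatalf("expected a logged-in dummy user, got %v", u)
    }
}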
These headers might work in a Python environment as well.
