I see that there is already an answer to this question, but I feel it's outdated. Many things have changed since then: there are now modules, Cloud Endpoints, and webapp2. What is a good directory structure for my project that allows me to add/modify features easily?
For example, I should be able to manage:
Modules.
Cron jobs.
Task queues.
Cloud endpoints.
I'd first take a look at modules, at least for these reasons:
modules really are, in many respects, (almost) equivalent to entire (single-module) apps in older docs/references, so once a module's place in the app's hierarchy is clear, older posts written in an app context can usually be extrapolated to a module-only context.
nowadays an app can use different languages/sandboxes for different modules (see Run both Java and PHP on google app engine project) or even for different versions of the same module (see Google App Engine upgrading part by part)
Personally I'd stick with the recommended multi-module app structure: each module has its own directory, one level below the app's directory.
The app's top dir would hold the per-app configs (which aren't applicable to any particular module): dispatch.yaml, cron.yaml, index.yaml, and queue.yaml. Note that the cron job and task queue definitions belong here (but nothing stops you from routing/dispatching various cron jobs to various modules based on the requested paths).
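For illustration, a minimal sketch of an app-level cron.yaml that routes a (hypothetical) job to a specific module via the target field might look like this:

```yaml
# app-level cron.yaml -- the job name, path and module below are made up
cron:
- description: nightly cleanup
  url: /tasks/cleanup
  schedule: every 24 hours
  target: admin   # route this job to the 'admin' module instead of the default one
```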
I'd also place in the app's top-level dir any files/directories I'd like to share across multiple modules, the DRY way. These files/dirs are shared by symlinking them inside the respective module directories, so that each module gets its own copy at deployment (see the sketch after the list below). Almost anything that can exist as a separate file or directory can be shared this way:
templates, images, scripts, CSS, macros, datastore model definitions, Python modules - whatever you need
3rd party libs, for example How do I access a vendored library from a module in Python Google App Engine?
even portions of the module's .yaml configuration! for example: Do I need to copy `skip_files` across multiple YAML files?
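As a concrete illustration, here is a minimal Python sketch of how the app-level copies could be symlinked into each module's directory; the module names and shared paths are made up, and equivalent shell commands (or your build tooling) would work just as well:

```python
#!/usr/bin/env python
# Minimal sketch, not a real tool: link shared app-level files/dirs into each
# module sub-directory so every module gets its own copy at deployment time.
import os

SHARED = ["templates", "lib/common", "models.py"]   # hypothetical app-level items to share
MODULES = ["default", "mobile-backend", "admin"]    # hypothetical module sub-directories

for module in MODULES:
    for item in SHARED:
        link = os.path.join(module, item)           # e.g. default/lib/common
        if os.path.lexists(link):
            continue                                # already linked
        parent = os.path.dirname(link)
        if parent and not os.path.isdir(parent):
            os.makedirs(parent)                     # e.g. create default/lib first
        # make the link relative so the whole app tree stays relocatable
        os.symlink(os.path.relpath(item, parent or "."), link)
```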
Finally, the recommended file/dir structure of a particular module may further depend on the module's language/sandbox, the framework(s) used, the developer's style/preferences, etc. I don't think it's possible to provide a one-size-fits-all recommendation that would be effective/acceptable in all cases.
Endpoints are just RPC (strongly typed) versions of basic REST URLs, with the added advantage that they can be used to generate client-side libraries. So the endpoint config and definitions belong in the SAME directory as the module (i.e. mobile-backend), just as their REST counterparts would. In other words, if you have (or would have) a REST endpoint in module1 for "user login", then you should put the "user login" Endpoint in the module1 directory. Further, if you don't like the symlink approach, you can move your module1.yaml file UP one level, and then that whole module can import from a "common" directory.
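For example, a minimal (hypothetical) Python Endpoints service for that "user login" call could sit right next to module1's .yaml; the class, message, and path names below are made up:

```python
# module1/api.py (hypothetical) - the "user login" Endpoint lives in the module's own dir
import endpoints
from protorpc import message_types, messages, remote

class LoginResponse(messages.Message):
    ok = messages.BooleanField(1)

@endpoints.api(name='module1', version='v1')
class Module1Api(remote.Service):
    @endpoints.method(message_types.VoidMessage, LoginResponse,
                      path='user/login', http_method='POST', name='user.login')
    def user_login(self, request):
        # real credential handling omitted; this only shows where the code lives
        return LoginResponse(ok=True)

# wired up from module1's .yaml handlers section
APPLICATION = endpoints.api_server([Module1Api])
```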
I'm struggling with a collision of technical terms (most especially the term "plugin", which has about seven different meanings within the React development stack).
Short question:
Is there a way to pre-compile static webpack modules that can be installed separately from a main static React web application, while still sharing modules contained in the main web application? (That's the question as best I can formulate it with my relatively naïve React developer skills.) I'd like the ability to plug in web user interface components supplied by 3rd-party developers after the fact, i.e. installable runtime React UI components that don't require React compilation at install time.
Details:
I have a static React web app that allows remote control of audio plugins (specifically LV2 audio plugins). It's a single-page static React app (which communicates via web sockets with the running application), hosted by a static C++ web server. Realtime and IoT agility requirements make a Python-hosted dynamic web server and runtime compilation an unattractive prospect (https://github.com/rerdavies/pipedal).
What I want to do is allow extension of the web UI using separate bundles provided by 3rd-party LV2 plugins. The ideal solution would be to allow static webpack bundles, pre-compiled by the LV2 plugins and placed in /usr/lib/lv2/<pluginname.lvw>/resource directories, to be consumed by the web app at runtime. I'm using a custom C++ web server, so redirecting URLs into the /usr/lib/lv2/xx/resource directories is straightforward.
The main app would be distributed as one apt package. LV2 plugins would be compiled (potentially by 3rd-party developers) against an "sdk package" provided by the main app build, after the main app was built, and then distributed in separate packages. Ideally, I'd like to pre-compile the UI code for the plugins to static webpack modules before their installers are built.
I more-or-less understand how I would do this if I were using raw CLI tools and configuration files (tsc, webpack, babel). But I can't help thinking I would be reinventing a wheel. (And I do have concerns that I'm going to incur serious version-dependency problems).
I would like to code-share the base modules (React, @mui controls, and a limited set of app-supplied components and interfaces).
I see the path through the various tools to make this happen, using my own custom build script, I think. I can get the TypeScript compiler to do code-splitting; I can probably figure out how to get the Babel transpiler to do the right thing. I think I understand how to write webpack configuration files that will share modules from the main app, a likely path to build and distribute an npm package to do the setup and build of LV2 plugin projects, and how to write supporting CMake build rules for building and installing such packages, &c. But I'm concerned that I'm going to go down a large rabbit hole trying to reinvent something that surely must exist already. And I can imagine seven thousand ways for this to go horribly wrong. :-P
So far, I have implemented the TypeScript compiler portion of the build procedure. And writing the various bits to dynamically intercept and service resource requests in the web server is trivial. But it has become painfully obvious that I also need the Babel and webpack build steps as well.
I haven't yet looked at the react-scripts package contents to see if I can steal code to build what I want there. Perhaps that's a viable path.
Is there a way to do this with off-the-shelf npm packages and off-the-shelf npm build procedures? I can find all kinds of bits to get me part way; but the integration of all the bits is rather daunting. Should I just do the deed, and start writing my own custom build scripts to make this happen?
I am migrating from Eclipse to Android Studio and have an Android app connected to App Engine.
I have split the server side into two modules: the default module for Endpoints and user-facing requests, and an "admin" module for backend stuff.
Now both of these modules need to use the Entities (the backend module is usually responsible for saving these entities to the DB, while the frontend default module returns data back to Android using them).
What is the best way to share these Entity classes between these two modules in Android Studio (also making sure these classes get enhanced, etc.)? I do not wish to have duplicate classes in both the default module and admin.
Maybe have a common "java" module shared between the two (but not sure class enhancing would work). Or should the admin module NOT use the Entities and instead use other ways of persistence?
Appreciate your thoughts.
While there may be reasons for not sharing the code, personally I prefer DRY.
I solved the issue in a DRY spirit with the Python backend by placing the models definition file in the app dir, app/models.yaml, and symlinking it into each of the module subdirs as app/module_blah/models.yaml, thus ensuring all modules see the same model definitions. At deployment time the symlinks are automatically replaced with the actual content of the file being symlinked. From the appcfg.py update documentation:
The command follows symlinks and recursively uploads all files to the server. Temporary or source control files, such as foo~, .svn/* are skipped.
Care may be needed to deploy all modules at the same time.
I used the same technique to also share entire libraries with common code across modules, by symlinking app/lib/libX subdirs into the desired app/module_blah/lib/libX as needed.
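To make the payoff concrete, a handler inside any module can then import the shared pieces as if they were local; the file and symbol names below are made up for illustration:

```python
# app/module_blah/main.py (hypothetical names) - every module sees the same
# definitions because models.py and lib/libX are symlinked from the app's top dir.
import webapp2
from models import Entry             # the shared, symlinked models definition file
from lib.libX import render_entry    # shared library code, symlinked under lib/

class LatestHandler(webapp2.RequestHandler):
    def get(self):
        # Entry and render_entry come from the single shared copies above
        entry = Entry.query().order(-Entry.created).get()
        self.response.write(render_entry(entry))

app = webapp2.WSGIApplication([('/latest', LatestHandler)])
```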
Not sure if this technique is usable in Java, tho.
I've slightly "abused" the front-end "version" concept in App Engine (Java) to implement modules before they were introduced. I have a configuration consisting of module1-dot-myapp.appspot.com, module2-dot-myapp.appspot.com, module3-dot-myapp.appspot.com, etc., based on the version concept (more commonly used with numbers: 1-dot-myapp, etc.).
Specifically, the code in all versions is identical, but each is used for a different purpose in practice. This separation allows different clients to use different API versions, separate deployment schedules, staging versions, log separation, etc.
My question is: under these conditions, what is the best way to convert my application to "real" modules, such that "module1" is an actual module (still mapped to the same URL, module1-dot-myapp.appspot.com)?
Note: my answer comes from a somewhat similar exercise, but in the Python GAE runtime; there is likely additional Java-specific stuff to look at as well.
The first things to look at (possible show-stoppers) are the app-level configs. Those will need to be merged in from your different old app versions (if they exist) and will be shared by all your modules (or directed to the default module only), so they might not work as before; it's best to revisit the latest documentation on these configs:
dispatch file
queue
cron
DB indexes
Note: in multi-module Python apps these configs might not be updated automatically at app upload; each of them may need to be uploaded explicitly, using the respective app configuration utility options (e.g. appcfg.py update_dispatch, update_queues, update_cron, and update_indexes).
The separate deployment schedule is almost free (each module can be deployed independently), but there may be some impact from the app-level configs (multiple CLI invocations instead of a single one, for example).
The logs separation comes for free.
The staging story might need to be revisited, depending on what exactly you mean by that.
Other than that, you'd bring the different old versions of your app into separate module sub-directories in your new app. Check whether your version control system can make this easier. The old app config file(s) would need to be "translated" into the respective module's config file(s), and some of the info would go into the new app's top-dir config file.
The module URL routing should allow transparent URL mapping, but note that the URLs will actually be <module>-dot-<appname>.appspot.com, and the only way to get exactly the same URLs would be to delete all older app versions before deploying the new one (due to conflicting URLs: <module>-dot-<appname> vs <appversion>-dot-<appname>; I'm not sure if you'd get the old or the new code serving, or if it's even possible to deploy the new code without error). You could use a new appname at first, just to get all your ducks in a row before the switchover (possibly a new staging story you might consider going forward).
You might find it helpful to complement URL routing with a dispatch file if you didn't have one before.
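If you do add one, a minimal dispatch.yaml looks roughly like this; the module names and paths are made up, and the exact set of top-level fields depends on the SDK/tooling version you deploy with:

```yaml
# app-level dispatch.yaml - path-based routing overrides (hypothetical modules)
dispatch:
  - url: "*/module1/*"
    module: module1
  - url: "*/module2/*"
    module: module2
```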
Finally, if you have identical files shared across modules, you may consider a single per-app copy of the file, symlinked into the respective modules, if that's easier or makes sense from your source code management perspective.
TL;DR Is there a way to deploy App Engine modules in parallel?
I've built a Go application using Google's App Engine SDK for Go. This application defines multiple modules. These modules are self-contained and do not require any sort of dependency on other modules.
When I attempt to deploy the modules to the Google Cloud, I can't help but notice that the modules are uploaded sequentially. This would be fine if deployment were relatively quick, but each module requires its own redundant compilation of the Go binary. Hence, on top of the regular upload time, I have to wait for my app to compile [module count] x [compilation time] every time I want to deploy.
The obvious (quick) solution is to deploy in parallel, so I created a simple bash script to deploy each module independently. The problem I immediately encountered with this "solution" was an HTTP 500 response from the App Engine API. The whole umbrella application, spanning all the modules, seems to "lock" whenever any individual module is updated. This scenario creates a race condition, under which only the first module to trigger a deploy succeeds and the others fail.
I fear that this is a holdover from the legacy languages in App Engine. Since every module uses the same Go binary, it doesn't really necessitate multiple compilations of the same code. Repeated compilation is redundant, and there is no way to circumvent the lock.
One hypothetical solution, which I have only a vague understanding of, is to compile in parallel and deploy in series. I imagine that this approach would involve taking apart the configuration tool and reworking it to execute in the aforementioned manner, though I can't say for sure (yet).
Any help here would be much obliged. Thanks!
You can deploy to another "version" of your App Engine app, and then, when all modules are deployed, do a very fast version switch.
Versions also allow for traffic splitting if you need/want that kind of thing.
I am using the Go version of Google App Engine to run my website, mostly for learning.
I am at a point where I want to write multiple Go service endpoints to support the site (mostly on the admin side). I would like to separate these so that not everything is in the same file (for maintainability), but I cannot seem to get my head around this.
Is there a way to separate a Go app into multiple files that serve and handle the incoming requests?
Ideally this would be a single interface-style wrapper file that then calls into the more complex methods that live in their own files. I did think about putting the .go files into separate folders by type so that my YAML file could just route, but that does not seem as nice.
To summarize, in the simplest sense: Go automatically compiles all the .go files in the same folder when you execute go build. They all belong to the same package, so code in one file can call functions defined in another file without any extra imports or wiring, which lets you split your handlers across as many files as you like.
Documentation: http://golang.org/doc/code.html