Rails 5: run script only after server start - ruby-on-rails-5.1

I want to run an FTP listener class only when the server starts, not when the console, generators, dbconsole, test, destroy, runner, or rake commands run.
I've found people doing the same thing in Rails 3 and 4 using checks like defined? Rails::Generators, but I can't get it working in Rails 5; the defined? check returns nothing for me.

The config.ru file is only used by web servers; it is not loaded by the console script, rake tasks, or your test suite. Whatever you put there is executed only when a server instance launches.
Web servers themselves also offer ways to do this. When you use Puma, for instance, there are hooks like on_worker_boot or after_worker_boot which may help (http://www.rubydoc.info/github/puma/puma/Puma/Configuration/DSL).
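For illustration, a minimal config/puma.rb sketch; FtpListener here is a placeholder for your own listener class:
# config/puma.rb
on_worker_boot do
  # Runs in each worker process after it boots, so it never fires for
  # console, rake, generator, or test invocations.
  FtpListener.start  # hypothetical listener class
end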
However, I'd recommend moving this out of the Rails app and integrating it into your deployed server environment instead.

Related

Deploying a python bot script on Google Cloud Run (GCR)

I have been racking my brain on this for a few weeks now, trying different variations of Google Cloud's service offerings, but I can't seem to find the proper one.
I have a Python script with dependencies etc. that I have containerized, pushed, and deployed to GCR.
The script is a bot that connects to an external websocket, perpetually receiving signals, and then does other processing via API calls against another external service.
What would be the best service offering from Google Cloud to run this?
So far, I've tried:
GCR Web Service - requires a listening service (:8080), which I do not provide in this use case, and it scales your service down when there is no traffic, so no go.
GCR Job Service - seems like the next ideal option (no HTTP port requirement). However, since the script (my entry point) doesn't 'return' upon launch unless it quits, the job only lets it run for a minute or so until the Jobs API declares it 'failed'. Basically, it launches my entry point, which executes the script as if it were running locally, and my script isn't meant to return anything.
To try to get around this, I went with Google's recommended way and built a main.py from their standard boilerplate as a wrapper to launch the actual script. I did this via a simple subprocess.Popen, using their sample main.py as shown below.
main.py
import json
import os
import sys
import subprocess

# Retrieve Job-defined env vars
TASK_INDEX = os.getenv("CLOUD_RUN_TASK_INDEX", 0)
TASK_ATTEMPT = os.getenv("CLOUD_RUN_TASK_ATTEMPT", 0)

# Define main script
def main():
    print(f"Starting Task #{TASK_INDEX}, Attempt #{TASK_ATTEMPT}...")
    subprocess.Popen(["python3", "myscript.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(f"Completed Task #{TASK_INDEX}.")

# Start script
if __name__ == "__main__":
    try:
        main()
    except Exception as err:
        message = f"Task #{TASK_INDEX}, " \
                  + f"Attempt #{TASK_ATTEMPT} failed: {str(err)}"
        print(json.dumps({"message": message, "severity": "ERROR"}))
        sys.exit(1)  # Retry Job Task by exiting the process
My thinking was that this would let the job execute my script and be marked as completed while the actual script keeps running. Also, since subprocess.Popen sets its stdout and stderr to PIPE, I assumed the output would get caught by Google's logging and I would see it.
The job runs and is marked as succeeded; however, I see no indication of the actual script executing anywhere.
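A likely explanation: Popen returns immediately, so main() exits, the job is considered done, and Cloud Run tears the container down, killing the child process; the PIPEd output is never read or forwarded anywhere. A minimal blocking variant of the launch step, assuming the same myscript.py entry point, would at least surface the script's output in the job's logs:
# Block until myscript.py exits; inheriting stdout/stderr (no PIPE)
# lets Cloud Run's logging capture the output directly.
subprocess.run(["python3", "myscript.py"], check=True)
Note this still doesn't make a never-ending bot fit the Jobs model: the task will simply run until its configured timeout.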
I had a similar issue with Google Cloud Functions. Jobs seemed like an ideal option since I can run them on a schedule to make sure they launch, say, every hour (my script uses a lock file so it doesn't run again if it's already running).
Am I just missing the point of how these cloud services run?
Are offerings like Google Cloud Run jobs/functions etc. meant to execute only jobs that return and quit until launched again on whatever schedule?
Do I need to consider Google Compute Engine as an option for this use case, that is, a full running VM instead of the stateless/serverless options?
I am trying to do this in a containerized, scale-as-needed fashion, to keep my project portable and to minimize costs as much as possible given the always-running nature of the job.
Lastly, I know services like PythonAnywhere (and I'm sure others) make this kind of thing easier, but I would like to learn how to do it via standard cloud offerings like GCR, AWS, etc.
Thanks for any insight/advice!
Cloud Run's best fit is serving HTTP REST APIs (stateless services). There are also Jobs, currently in beta.
One of Run's top features is that it scales to zero when there are no requests to your service (your service instance gets destroyed entirely).
If your bot needs to stay alive forever, Run is not for you (even though you can configure Run to keep at least one instance alive).
I would consider App Engine or Compute Engine instead.

How to glue together Vert.x web and Kotlin react using Gradle in Kotlin MPP

Problem
It is not clear to me how to configure a Kotlin MPP (multiplatform project) using Gradle (Kotlin DSL) to use Vert.x web for the Kotlin/JVM target with Kotlin React on the Kotlin/JS target.
Update
You can check out the updated minimal example for a working solution,
inspired by an approach by Alexey Soshin.
What I've tried
Have a look at my minimal example on GitHub of a Kotlin MPP with the Vert.x web server on the JVM target and Kotlin React on the JS target.
You can make it work if you:
First run the Gradle task browserDevelopmentRun (I don't understand the magic behind it); after the browser opens and renders the React SPA (single-page application), you can
stop that task and then
start the Vert.x backend with task run.
After that, without refreshing the SPA still open in the browser, you can confirm that it communicates with the backend by pressing the button: it will alert the received data.
Question
What are the possible ways/approaches to glue these two targets together, so that when I run my application the JS target is assembled and served via the JVM backend conveniently?
I am thinking that perhaps Gradle should trigger some of the Kotlin browser tasks and then make their output available in some way to the Vert.x backend.
If you'd like to run a single task, though, you need your server task to depend on your JS compilation. In your build.gradle.kts add the following:
tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> {
    dependsOn(tasks.getByName<org.jetbrains.kotlin.gradle.targets.js.webpack.KotlinWebpack>("jsBrowserProductionWebpack"))
}
Now invoking run will also invoke webpack.
Next you want to serve your files. There are different ways of doing it. One is to copy them into Vert.x's resources directory using Gradle. Another is to point Vert.x at where webpack puts them by default:
route().handler(StaticHandler.create("../../../distributions"))
There are a bunch of different things going on there.
First, both your Vert.x server and webpack run on the same port. The easiest way to fix that is to start Vert.x on some other port, like 18080:
.listen(18080, "localhost") { result ->
And then change your index.kt file to use that port:
val result: SomeData = get("http://localhost:18080/data")
Because we now run on different ports, we also need to install a CORS handler:
router.apply {
    route().handler(CorsHandler.create("*"))
}
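Putting those pieces together, a rough sketch of the server side might look like this (assuming Vert.x 4; the /data endpoint and its payload are placeholders matching the index.kt call above):
import io.vertx.core.Vertx
import io.vertx.ext.web.Router
import io.vertx.ext.web.handler.CorsHandler
import io.vertx.ext.web.handler.StaticHandler

fun main() {
    val vertx = Vertx.vertx()
    val router = Router.router(vertx).apply {
        // Allow requests from the webpack dev server on the other port
        route().handler(CorsHandler.create("*"))
        // Hypothetical endpoint the SPA calls
        get("/data").handler { ctx ->
            ctx.response()
                .putHeader("content-type", "application/json")
                .end("""{"hello":"world"}""")
        }
        // Serve the webpack output
        route().handler(StaticHandler.create("../../../distributions"))
    }
    vertx.createHttpServer()
        .requestHandler(router)
        .listen(18080, "localhost")
}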
Last is the fact that you cannot run two never-ending Gradle tasks from the same process (OK, you can, but that's complicated). So what I suggest is that you open two terminals and run:
./gradlew run
in one, and
./gradlew jsBrowserDevelopmentRun
in the other.
Having done all that, you should see the SPA load and talk to the backend.
Now, this is for development mode. For production mode, you probably don't want to run jsBrowserDevelopmentRun, but instead tie jsBrowserProductionWebpack to your run task and serve spa.js from your Vert.x app using StaticHandler. But this answer is already too long.

IDEA doesn't update sources in local debug for an App Engine app

I created a Spring Boot + Google App Engine application. For development purposes I use IntelliJ IDEA and the Google Cloud Tools plugin. I'm currently using only local debug, which means I don't deploy anything to Google Cloud; the debug run configuration is the plugin's Google App Engine Standard Local server.
I created a simple service to check whether my code is updated on change:
static int i = 10;

@GetMapping(value = "/test")
public String test() {
    return Integer.toString(++i);
}
Unfortunately, when I change my code (e.g. from i = 10 to i = 100) and restart the app (press Rerun (Ctrl+F5), or Stop (Ctrl+F2) + Run), my changes don't apply on the server, which means IDEA doesn't rebuild the sources on server start. I even tried adding a Build Project step to Before launch, which didn't work.
So to apply changes I need to run mvn appengine:run from the command line, press Ctrl+C to stop it, switch to IDEA, and start debugging again, which is a pain in the ass.
Another option is to use Hot Reload (Update application, Ctrl+F10). It recompiles only the changed classes and reloads resources. This is a cool feature, but unfortunately it fails in a lot of cases, which makes it unreliable as a reload mechanism.
Is there anything I can do to force IDEA to compile my sources? Is this a bug I should report to the plugin developers? Or maybe App Engine uses some additional remote sources that require an explicit Maven call?
I finally found a solution. As I understand it, the Google Cloud plugin just compiles the classes into target/classes, but when it starts App Engine, the engine expects an unpacked .war to be present under target/demo-0.0.1-SNAPSHOT. For example, if I delete both directories, the server fails on startup.
To solve the issue I needed to build that exploded artifact:
In the toolbar: Run -> Edit Configurations
Select Google App Engine Standard Local server
Under Before launch, add Build Artifact -> demo:war exploded, where demo is the name of your app.

Coded UI Test with Teamcity

I run MSTest to test a WPF application (Coded UI Test) on a VM using TeamCity. I already installed the test agent as an interactive process, but I keep getting this error in the TeamCity log:
Error calling Initialization method for test class Squarebit.Apms.Terminal.Wpf.Test.CodedUITest1: Microsoft.VisualStudio.TestTools.UITest.Extension.UITestException: To run tests that interact with the desktop, you must set up the test agent to run as an interactive process. For more information, see "How to: Set Up Your Test Agent to Run Tests That Interact with the Desktop" (http://go.microsoft.com/fwlink/?LinkId=255012)
If you are running the tests as part of your team build, you must also set up the build agent to run as an interactive process. For more information, see "How to: Configure and Run Scheduled Tests After Building Your Application" (http://go.microsoft.com/fwlink/?LinkId=254735)
at Microsoft.VisualStudio.TestTools.UITesting.Playback.Initialize()
at Microsoft.VisualStudio.TestTools.UITesting.CodedUITestExtensionExecution.BeforeTestInitialize(Object sender, BeforeTestInitializeEventArgs e)
at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecution.RaiseBeforeTestInitialize(BeforeTestInitializeEventArgs args)
at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.RunInitializeMethod()
Can you help me resolve this problem or recommend some way to run Coded UI Tests using TeamCity?
Coded UI Tests (CUIT) can't run from a service account since they need access to the Desktop Windowing APIs.
Please refer to the "Installing the TeamCity build agent" section of http://jake.ginnivan.net/teamcity-ui-test-agent/ to set up the TeamCity agent as a non-service (interactive) process.
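In practice that means launching the agent from an interactive session rather than as a Windows service, e.g. (the install path here is an assumption):
REM Start the TeamCity build agent from an interactive console session
C:\BuildAgent\bin\agent.bat start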

How can I script WordPress for non-GUI automated Pages/Posts-import?

This is probably a "can't see the forest for the trees" situation,
but how do I create a script that does an automated import of Posts/Pages without hooking into the WP website GUI (e.g. in the theme's functions.php)? It should be standalone, triggerable by calling the script name via the webserver.
Via this API call: wp_insert_post()
You want to use the WordPress XML-RPC API (http://codex.wordpress.org/XML-RPC_wp) to connect. You can do this with almost any scripting language, but since you mention running it on the webserver, and WordPress is written in PHP, we'll go with that language for now.
Check out this tutorial:
http://life.mysiteonline.org/archives/161-Automatic-Post-Creation-with-Wordpress,-PHP,-and-XML-RPC.html
He shows an example of how to create a script that inserts a post into your WordPress blog. The script can be given execute permissions and run via the command line or a cron job.
You will have to code the logic to get the post from wherever your data is stored, though.
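For a rough idea of the wp_insert_post() route mentioned above, here is a minimal standalone sketch; it assumes the script sits in the WordPress root so wp-load.php resolves, and all field values are placeholders:
<?php
// standalone-import.php - hypothetical standalone importer.
// Bootstrap WordPress so its API is available outside the theme/plugin context.
require_once __DIR__ . '/wp-load.php';

$post_id = wp_insert_post(array(
    'post_title'   => 'Imported post',
    'post_content' => 'Body of the imported post.',
    'post_status'  => 'publish',
    'post_type'    => 'post', // or 'page' for Pages
), true); // second argument: return a WP_Error on failure

if (is_wp_error($post_id)) {
    echo 'Import failed: ' . $post_id->get_error_message();
} else {
    echo "Created post $post_id";
}
Keep in mind that a web-reachable script like this would let anyone create posts, so some access check is advisable.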
