How to run sql-client against yarn-cluster - apache-flink

I am reading https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/sqlClient.html,
which illustrates the sql-client functionality with a standalone cluster.
Does the SQL client support running against a YARN cluster? If so, how do I configure it? I didn't find a related how-to on flink.apache.org.

I don't have a yarn cluster available to test this with, but see FLINK-18273 which explains that
The SQL Client YAML has a deployment section. One can use the regular flink run options there and configure e.g. a YARN job session cluster. This is neither tested nor documented but should work due to the architecture.
and also mentions that
The deployment section has also problems with keys that use upper cases. E.g. fromsavepoint != fromSavepoint which requires to use the short option s as a workaround.
Putting those two statements together suggests that putting this entry into sql-env.yaml (where xxx is the YARN application id):

deployment:
  yid: xxx

and then starting the client via sql-client.sh embedded -e sql-env.yaml might just work.
See also https://docs.cloudera.com/csa/1.2.0/sql-client/topics/csa-sql-client-session-config.html.
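For completeness, the piece this leaves out is obtaining the application id in the first place. Flink's own scripts can start a detached YARN session cluster, and the id it reports is what goes into the deployment section. This is an untested sketch, subject to the same caveats as the rest of this answer:

```
# start a detached YARN session cluster; the script reports its application id
./bin/yarn-session.sh -d

# then launch the SQL client against the session configuration
./bin/sql-client.sh embedded -e sql-env.yaml
```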

Related

Do i need to stop my localhost server when pushing, pulling or merging code to/from github repository?

I have two terminals open in VS Code for React development: one for running the server and one for running bash commands. I've been told this is bad practice because it creates problems in the project's build; if you want to commit, pull, push, or merge, you should stop the server, perform these actions, and then restart the server. Stopping and restarting the server takes time and seems like a hassle to me. Is this really necessary? If anyone could answer that, thanks.
Largely speaking, this should be fine. Assuming you are using some kind of development server (which is monitoring files for changes), any changes that are applied because of a git pull/merge/etc will cause your app to reload (usually in some optimised way).
An exception to this can be if files that configure the development server change: it probably read its configuration at startup, so those changes might not take effect until you restart it.
Another exception is adding or changing dependencies. If new or different packages were added to your package.json, you'd need to run npm install again before restarting the development server.
Pushing code to git while the server is running is not problematic either.
So in summary, it's usually fine to do this and is not destructive in any way. If something unpredictable happens, try restarting the dev server (and possibly rerunning npm install).
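For anyone who does prefer the stop-then-restart routine anyway, the sequence described above amounts to a few commands (this assumes an npm-based project whose dev server is started with npm start; ORIG_HEAD is set by git after a pull or merge):

```
# stop the dev server (Ctrl+C), then:
git pull
# re-install only if dependencies changed in the pull
git diff --name-only ORIG_HEAD HEAD | grep -q "package.json" && npm install
npm start
```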

How to build a Libra TestNet with two servers?

I want to build a Libra TestNet with two servers.
I don't know how to use config-builder to configure the program.
This answer might be a bit late but it might help for someone who is looking for a solution.
I was able to set up a local test network with single or multiple nodes based on the following:
For a single node, the libra-swarm package is well documented at https://developers.libra.org/docs/run-local-network and defines easy steps to set up your local test network with a defined number of nodes.
If you are planning to use multiple nodes, you can use the Docker files and shell scripts from Libra's GitHub repo to create Docker images, and use those images with a container-orchestration system like Kubernetes to set up your network. I was able to do this and have it set up using this GitHub repository.

DC/OS Mesosphere install/deploy my own application across a cluster

I am trying to deploy my own cluster using the DC/OS CLI installation. Mesosphere has huge support, as there are many ready-to-install packages provided in the Mesosphere Universe repo (https://github.com/mesosphere/universe).
However, I would like to go one step further: I am trying to install my own applications on my cluster using the DC/OS CLI installation process. To do this, as far as I understand, I need to either (i) make my application recognizable to the system repo (like the other packages provided in Universe) or (ii) build a new image that contains all my applications and modify the DC/OS script to make the installation possible.
Unfortunately, my knowledge here is modest and I could not find a clear answer anywhere.
Therefore, I would like to ask:
1) Is it possible to do what I am trying to do?
2) If the answer is YES, how exactly should I do it? My goal is to install my awesome apps for my own use, not to publish them. But to add my apps as a repo in Universe, it seems I would have to publish them.
It is possible! :)
Please follow these instructions
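If publishing to Universe is the only blocker, note that private apps don't need Universe at all: DC/OS runs Marathon underneath, and the CLI can deploy any app definition directly with dcos marathon app add. A minimal sketch of such a definition (the app id, image name, and resource values here are placeholders, not values from the question):

```json
{
  "id": "/my-awesome-app",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "myregistry/my-awesome-app:latest" }
  }
}
```

Saved as my-app.json, this deploys with: dcos marathon app add my-app.json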

Capistrano get git commit sha1

I am writing a task for Capistrano 3 and I need to get the current commit SHA-1. How can I read it? Is there a variable for that?
I have seen fetch(:sha1) in some files, but this isn't working for me.
I am deploying into a docker container, and I need to tag the image with the current sha1 (and ideally, skip the deployment if there is already an image corresponding to the current sha1)
Capistrano creates a file in the deployed folder containing the git revision. In looking at the task which creates that file, we can see how it obtains the revision: https://github.com/capistrano/capistrano/blob/master/lib/capistrano/tasks/deploy.rake#L224
So, it is obtaining the revision from fetch(:current_revision).
In the git specific tasks, we can see where it is set: https://github.com/capistrano/capistrano/blob/master/lib/capistrano/scm/tasks/git.rake#L62
As a side note, Capistrano is probably not the best tool for what you are trying to do. Capistrano is useful for repeated deployments to the same server; Docker essentially builds a deployable container that already contains the code. See the answers here for more detail: https://stackoverflow.com/a/39459945/3042016
Capistrano 3 uses a plugin system for the version manager (git, svn, etc.).
The current_revision is delegated to the version-manager plugin, and I don't understand how to access it...
In the meantime, a dirty solution would be:

set :current_revision, (lambda do
  `git rev-list --max-count=1 #{fetch(:branch)}`.strip
end)

(the .strip removes the trailing newline from the command output)
But I'm waiting for a better solution that would instead invoke the right task from the SCM plugin.
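Building on the REVISION file mentioned in the other answer: since Capistrano writes the deployed SHA into each release directory, a task or external script can recover it simply by reading that file back. A minimal sketch (the release directory here is simulated for illustration, and the image tag is a hypothetical example):

```ruby
# Capistrano writes the deployed revision to <release_path>/REVISION
# (that is what the deploy.rake task linked above produces). Reading it back:
def revision_from_release(release_dir)
  File.read(File.join(release_dir, "REVISION")).strip
end

# Simulated release directory for illustration:
require "tmpdir"
Dir.mktmpdir do |dir|
  File.write(File.join(dir, "REVISION"), "0db2e4876f3f4f6031cdb18b0ec3c2b300e7af0f\n")
  tag = "myapp:#{revision_from_release(dir)}"  # e.g. a Docker image tag
  puts tag
end
```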

Install Jetty or run embedded for Solr install

I am about to install Solr on a production box. It will be the only Java application running, on the same box as the web server (nginx).
It seems there are two options.
Install Jetty separately and configure to use with Solr
Set Solr's embedded Jetty server to start as a service and just use that
Is there any performance benefit in having them separate?
I am a big fan of KISS, the less setup the better.
Thanks
If you want KISS there is no question: option 2. Stick to the vanilla Solr distribution with the included Jetty.
Doing the work of installing an external servlet engine would make sense if you needed Tomcat, for example, but just to use the same thing (Jetty) that Solr already includes: no way.
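For context on how little setup option 2 involves: in Solr distributions of that era (the ones bundling Jetty), starting the included Jetty was a single command from the example directory of the stock download (directory name per the standard distribution layout):

```
cd example
java -jar start.jar    # serves Solr at http://localhost:8983/solr
```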
Solr still ships with Jetty 6, so there could be some benefit if you can get the Solr application to run in a recent Jetty distribution. For example, you could use Jetty 9 and features like SPDY to improve your application's response times.
However, I have no idea or experience whether it's possible to run the Solr application standalone in a separate servlet engine.
Another option for running Solr while keeping it simple is Solr-Undertow, a high-performance, small-footprint server for Solr. It is easy to use on local machines for development as well as in production. It supports simple config files for running instances with different data directories, ports and more. It can also run by pointing it at a distribution .zip file without needing to unpack it.
(note, I am the author of Solr-Undertow)
Link here: https://github.com/bremeld/solr-undertow with releases under the "Releases" tab.
