I am using Windows 7, Java 8, and Flink 1.5.2. I extracted the tar file and started bin\start-cluster.bat. I can open localhost:8081 in the browser, and it shows all the default options.
I developed a small streaming app with Kafka integration: Flink receives messages from Kafka and successfully forwards them to another Kafka topic.
I could test this from the Eclipse IDE and from the CLI.
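For reference, the job is essentially of the following shape (a minimal sketch only; the class name, topic names, broker address, and consumer group are placeholders, not my real configuration):

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;

public class KafkaForwardJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka consumer settings (broker address and group id are placeholders)
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "flink-test");

        // Read messages from one topic ...
        DataStream<String> messages = env.addSource(
                new FlinkKafkaConsumer011<>("input-topic", new SimpleStringSchema(), props));

        // ... and forward them to another topic
        messages.addSink(
                new FlinkKafkaProducer011<>("localhost:9092", "output-topic", new SimpleStringSchema()));

        env.execute("kafka-forward-job");
    }
}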
Problem: when I run this jar with bin\flink run, the running job's details are not shown on the Dashboard, and I cannot see completed/failed jobs either.
Thanks
Hello Flink Community,
Following the documentation on troubleshooting the unloading of dynamically loaded classes in Flink, I added the database driver library to the opt/flink/lib folder on both the Flink JobManager and TaskManager containers running on K8s (Flink session cluster, version 1.11).
I marked the library as provided in my build.sbt file.
The rest of the user code is part of the fat jar built by sbt assembly.
Now, when I submit a job to the Flink cluster using the Flink REST API (the upload and run endpoints), it won't accept the job due to the following error:
java.lang.ClassNotFoundException: com.vertica.jdbc.Driver
Why is the jar not picked up by the Flink classloader?
I even added the class pattern to the config option without any difference:
classloader.parent-first-patterns-additional: com.vertica.jdbc.;
Link: https://ci.apache.org/projects/flink/flink-docs-release-1.12/ops/debugging/debugging_classloading.html#unloading-of-dynamically-loaded-classes-in-user-code
Any recommendation would be highly appreciated.
Cheers
Please confirm that your JDBC Maven dependency is not marked as provided.
When the dependency is marked as provided, the library is only available at compile and test time; it is not packaged into the fat jar for runtime.
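A quick way to check is to try loading the class and printing which classloader resolves it; here is a minimal sketch (the wrapper class name is illustrative, only the driver class name is taken from the error above):

public class DriverVisibilityCheck {
    public static void main(String[] args) {
        try {
            // Try to resolve the Vertica driver with the current classloader
            Class<?> driver = Class.forName("com.vertica.jdbc.Driver");
            System.out.println("Driver loaded by: " + driver.getClassLoader());
        } catch (ClassNotFoundException e) {
            // Neither the fat jar nor the parent classpath can see the class
            System.out.println("Driver not visible: " + e);
        }
    }
}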
I followed three different tutorials to deploy a slightly modified boilerplate React app to Azure App Service. The primary issue I'm having is that while all deployment pipelines and releases have been successful on Azure DevOps, navigating to the page shows the default landing page for an app service with no deployed code:
Hey, Node developers!
Your app service is up and running.
Time to take the next step and deploy your code.
I'll briefly describe the steps I took to get to this point:
I used create-react-app to generate a basic template, ran all the prerequisite commands, fiddled with the app.js file and its CSS companion, and left index.* untouched.
I pushed all of it, with an untouched .gitignore, to a GitHub repository.
I created an App Service, running Linux and Node 12 LTS, on the Free plan.
I created a DevOps project, and within it created a Pipeline and a Release Pipeline.
In the Pipeline: I retrieved my repository source via linked accounts in the Get Sources step. In Agent job 1, I added an npm install task, an npm run build task, and a Publish Artifact task. I set the path to build and the artifact name to artifact, publishing to Azure Pipelines.
In the Release Pipeline: I added an artifact that grabs its source from the previous pipeline and gives it a source alias of _artifact. The CD trigger is enabled. I added a stage that has a Deploy Azure App Service task, using $(System.DefaultWorkingDirectory)/_artifact/artifact as the package/folder.
When I push a commit or manually trigger the first pipeline, everything succeeds with no obvious errors. The Release pipeline is triggered and also completes without error. Checking the logs, the artifact is stored and accessed accurately. I can see the correct build files being accessed.
In the Azure portal, I can see that deployment has succeeded with the correct timestamp, commit name, and pipelines. However, when I access the actual site, I am shown the generic page.
Am I missing a crucial step somewhere? I've tried navigating to /index.html, /src/index.html, and a bunch of other combinations of known files, but to no avail: the response is Cannot GET /index.html.
Any insight would be appreciated.
For reference, I used these three walkthroughs:
https://medium.com/microsoftazure/deploying-create-react-app-as-a-static-site-on-azure-dd1330b215a5
https://medium.com/@to_pe/deploying-create-react-app-on-microsoft-azure-c0f6686a4321
https://www.pluralsight.com/guides/deploy-a-react-app-to-azure
This question is fairly simple; you can refer to the following posts:
1. Deploy create-react-app with azure pipelines
2. Unable to deploy React JS application on Azure App service
3. Process for React App deployment to Azure Web?
Suggestion:
It is recommended to choose Linux when creating the web app.
Configuration -> Startup Command: pm2 serve /home/site/wwwroot --no-daemon --spa
The --spa flag makes pm2 serve fall back to index.html for routes it does not recognize, which is what a client-side-routed create-react-app build needs.
I have an application running Spring Boot with Camel which consumes messages from an ActiveMQ queue and writes them to a file:
@Override
public void configure() throws Exception {
    from("activemq:queue:MyQueue").to("file:/tmp/somemessages/");
}
Very simple, and it works fine when I run mvn spring-boot:run.
But now I need to generate a bundle jar to install in my Red Hat Fuse OSGi container. Everything was installed and started without error, see:
So my camel-app is Active, but after producing some messages to my ActiveMQ queue nothing works as I expect: the file is not generated.
How can I see if something is wrong? The application console log or something like that?
This is not good practice. Spring Boot is intended to run standalone. In an OSGi-based runtime such as Red Hat Fuse or Apache Karaf/ServiceMix you should deploy OSGi applications, which with Camel means camel-blueprint (you can also use Java routes with Blueprint). So take a look at examples of how to do that; there should be examples shipped with Red Hat Fuse you can look at.
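For example, the same route can remain a plain Java RouteBuilder class that an OSGi Blueprint descriptor then references (a minimal sketch; the package and class names are illustrative, and the Blueprint XML wiring itself is not shown here):

package com.example.routes;

import org.apache.camel.builder.RouteBuilder;

// Plain Java Camel route that a Blueprint descriptor can reference as a bean
public class MyQueueRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("activemq:queue:MyQueue")
            .to("file:/tmp/somemessages/");
    }
}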
How can I see if something is wrong? The application console log or something like that?
The simple answer is that you can run a diagnostic on your bundle with the following command inside your shell console:
bundle:diag {your-bundle-id}
You may replace {your-bundle-id} with your bundle's id, which is 231 in the picture. There is also a complete list of Apache Karaf console commands that may be useful for further requirements.
We are building an application that so far has a simple user management implementation. This question relates to the built-in password reset functionality of LoopBack v3. User management is built on a model derived from the built-in User model, called MyCustomUser.
Each time code changes are pushed to a GitHub repo, we have Jenkins build a Docker container and, inside it, run npm install, then lb-sdk (with suitable parameters), then ng build --env=prod, and finally node . to start the app. After this happens, the application runs normally, BUT:
When performing the same deployment commands locally (on my own Linux laptop), the API endpoints /MyCustomUsers/reset and /MyCustomUsers/reset-password are both created (i.e. they are visible and manipulable via the StrongLoop Explorer).
When the deployment is run by Jenkins in the Docker container, only one of the two API endpoints is created, /MyCustomUsers/reset. God only knows where the other endpoint, /MyCustomUsers/reset-password, ends up.
Obviously, all deployments are run against the same codebase (i.e. the same commit ID of the GitHub repo). It is bewildering how the service behaves perfectly on localhost but not in the cloud-based Docker container.
It sounds like you are running two different versions of the LoopBack Angular 2 SDK. From what I've understood, the SDK for Angular 2 is still in heavy beta and not yet ready for production. That alone doesn't explain the difference, but it really sounds like two different versions.
We are using the same build flow as you; is your package.json identical when it comes to @mean-expert/loopback-sdk-builder?
The people working on the SDK generator are really good at responding in their issues section; I would recommend asking there otherwise.
It turns out that the remote Docker container was running Node 6.9.2 and npm 3.10.9, whereas I was running Node 6.10.3 and npm 3.10.10. After making the Docker instance run the same versions as I had locally and deploying the package.json along with its npm-shrinkwrap.json, the endpoint was correctly generated.
I cannot get my server code to update. I'm running a PHP instance on GAE and no matter what I do, the files won't update. In the source code view, I can see the files have updated, but when I attempt to access the updated file, I'm still viewing the old version. I've also attempted disconnecting my Bitbucket repo and using the appcfg.py update project-name command, but the files aren't refreshing when I attempt to access them. I'm not sure what to do to force the changes to take place.
My app.yaml contains the following handler:
- url: /(.+\.php)$
  script: \1
  secure: always
So the files should be getting read, right?
I was able to figure out what went wrong. I downloaded my code using appcfg.py download_app -A <your_app_id> -V <your_app_version> <output-dir> and noticed that I was downloading the old versions of the files (and wasn't downloading the new files). Turns out using source control within GAE will upload new code, but won't deploy it. I attempted to use appcfg.py update project-name one more time, but it didn't work. Turns out I didn't disconnect my Bitbucket account (could have sworn that I did...). Once disconnected, I was able to update the project using appcfg.py update project-name. While I was figuring this out, I reached out to Google support and received this message:
To use the feature of push-to-deploy you need to spin up the Jenkins instance on GCE (Google Compute Engine) and then it will take the updated code and execute it in the environment. Go through [1] for how to enable the Jenkins instance and its configuration for the different runtimes.

In your issue, you just mirrored the code from Bitbucket to the Cloud Repository, which only does version control for the application, not execution of the application. So basically you have the option of using the Jenkins instance as described above to test the different versions of the code, or using the appcfg.py update command from your local repository.
I haven't attempted to install and use Jenkins (since I fixed it after disconnecting my Bitbucket account), but it may help others who have run into this problem.