Apache Zeppelin Storage

Is there a way to configure the naming scheme of the Zeppelin notes? In my local repo they are all stored here:
/Users/myName/zeppelin-0.7.3-bin-all/notebook/
All of the folders are named with what looks, to me at least, like a mash of random characters and numbers (e.g. 2DEJKXT8). Is there any way to configure Zeppelin to use the name you give the note in the GUI for the folder it saves locally? I would like to push these to a remote repository and allow someone to pull the repo and identify the notes easily. Thanks.

Currently, there's no such mechanism. The note name is stored inside note.json itself, so you have to read note.json to get the note name.
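For example, a small utility along these lines can map the folder ids to the note titles (a sketch, assuming the 0.7.x layout where each folder under notebook/ holds a note.json whose top-level "name" field is the title shown in the GUI; the org.json dependency is just one convenient way to parse it):

import org.json.JSONObject;

import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ListNoteNames {
    public static void main(String[] args) throws Exception {
        // Root of the local Zeppelin notebook storage
        Path notebookDir = Paths.get("/Users/myName/zeppelin-0.7.3-bin-all/notebook");
        try (DirectoryStream<Path> folders = Files.newDirectoryStream(notebookDir)) {
            for (Path folder : folders) {
                Path noteJson = folder.resolve("note.json");
                if (Files.isRegularFile(noteJson)) {
                    String content = new String(Files.readAllBytes(noteJson));
                    // The GUI title is stored in the top-level "name" field
                    String name = new JSONObject(content).getString("name");
                    System.out.println(folder.getFileName() + " -> " + name);
                }
            }
        }
    }
}

You could run something like this before pushing, or use its output to generate a README that maps folder ids to note names for whoever pulls the repo.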

Related

Idempotency in a camel application running in Kubernetes cluster

I am using Apache Camel as the integration framework in my microservice, which I am deploying as multiple pods in a Kubernetes cluster. I have written a route for reading files from a directory and writing them to another, but I am facing an issue: the different pods are picking up the same file. I need to avoid that; I want only one of the pods to pick up and process a file, but currently all of the pods pick it up and process it. Can someone help with this? Please suggest some examples available on GitHub or elsewhere.
Thanks in advance.
Camel recently introduced some interesting clustering capabilities - see here.
In your particular case, you could model a route which takes the leadership when starting the directory polling, thereby preventing other nodes from picking up the (same or other) files.
Setting it up is very easy: all you need to do is prefix singleton
endpoints according to the master component syntax:
master:namespace:delegateUri
This would result in something like this:
from("master:mycluster:file://...")
.routeId("clustered-route")
.log("Clustered file polling !");

How to build a Libra TestNet with two servers?

I want to build a Libra TestNet with two servers.
I don't know how to use config-builder to configure the program.
This answer might be a bit late, but it might help someone who is looking for a solution.
I was able to set up a local test network with single or multiple nodes as follows.
For a single node, the libra-swarm package is well documented here https://developers.libra.org/docs/run-local-network and describes easy steps to set up your local test network with a defined number of nodes.
If you are planning to use multiple nodes, you can use the Dockerfiles and shell scripts from Libra's GitHub repo to build Docker images, and then use those images with a container-orchestration system like Kubernetes to set up your network. I was able to do this and have the setup available in this GitHub repository.

Need some information about Crawl Anywhere and solr

I have gone through the Crawl Anywhere documentation but I am very confused about its installation steps.
What I understood is that Apache is optional. But do I need an independent Tomcat instance for the crawler? Because in the folder structure I saw that there is already a tomcat folder, and the war file is also there.
Also, do we need an independent instance of Apache Solr?
If we want to add a PostgreSQL database to crawl, how can we do that?
Please provide some links as well so that I can go through them and clarify any doubts I have in my mind.
Apache is needed to use admin interface. Tomcat is needed for some interactive features. You can crawl without both of them.
No, you don't need an independent Solr instance.
MySQL and MongoDB are supported. The code is open source, so you can add postgresql support.
Try Google Groups for other questions.

Does it require direct XML modification to prepare an SSIS package for different environments?

I am maintaining some SSIS package I didn't build, and it creates an output file (.txt) and then emails that to a group. However, the package is actually fully configured for PROD. Some of the components I'll need to modify are: connection managers, pickup and drop off locations for the text file, mail servers, etc.
Am I going to have to just modify the XML directly to get this deployed to other environments?
Please note that I don't have access to do the deployments on these other environments - I simply have to hand it off to another team.
I'd almost post this as a comment, but if you're unfamiliar with SSIS, it's worth noting that there are about 3 ways the package can be linked to the config files.
You can certainly modify the config files. However, I'd regard setting up the config files for the environments as something the ops people should take ownership of. If you need to set up connections for your development environment, you have a slightly more complex problem.
The package may get the location from an environment variable, in which case you can just set up the config file for your development environment and configure the environment variable to point to it. You need to ensure that BIDS is running with the environment variable set, though.
If the configuration is supplied via a switch to DTExec, you might be better off just setting the connections directly in the package. In this case the package won't use the config file unless you specify a path with DTExec /Config.
If the path is hard coded into the package (i.e. a specified path rather than an environment variable) then you can adjust that path. However, the ops people will have to edit this as a part of the deployment process. Alternatively you could write a little .Net app that did the update and use that as a part of the deployment. The downside of this is it introduces scope for human error in deploying the packages.
If you need to maintain your config files manually I'd suggest you frig the indentation so it's a bit more readable. By default SSIS puts no whitespace in the files, but it doesn't mind if you do.

Best Practice for Location of Java JSP Application Files in Tomcat Environment

My Java JSP application needs to store permanent files on the Tomcat web server. At the moment I save the files in the system's "/temp" folder, but that folder gets cleared from time to time. Furthermore, the current solution is hard-coded, which makes it less flexible (e.g. when moving to another server).
I would like to know if there is a best practice for defining and accessing a permanent directory in this configuration. In detail, where is the best place to define the application's file directory, and how would I access it from within my Java application? The goal of this setup is to cause the least effort when (a) updating the application (i.e. deploying a new war file), and (b) moving from one server to another and across operating systems (e.g. Unix, Windows, macOS).
The research I have done on this topic so far revealed that the following would be solutions (possibly amongst others):
1.) Use a custom subdirectory in the Tomcat installation directory.
What happens to the files if I deploy a new version to Tomcat via a war file?
Where do I define this directory so it can be accessed from within my Java application?
2.) Use a separate directory elsewhere in the file system.
What are good locations, or ways to determine a location without knowing the system?
Where do I define this directory so it can be accessed from within my Java application?
Thank you for your advice!
Essentially, you are creating 'a database' in the form of some files. In the world of Java EE and servlet containers, the only really general approach to this is to configure such a resource via JNDI. Tomcat and other containers have no concept of 'a place for persistent storage for webapps'. If a webapp needs persistent storage, it needs to be configured via JNDI, or -D, or something you tell it by posting something to it. There's no convention or standard practice you can borrow.
You can pick a file system pathname by convention and document that convention (e.g. /var/something on Linux, something similar on Windows). But you won't necessarily be aligned with what anyone else is doing.
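If you do go the JNDI route, the lookup side could look like this minimal sketch (assuming you declare an environment entry named "app/storageDir" of type java.lang.String in Tomcat's context.xml; the entry name and the path it points to are placeholders chosen per environment):

import java.io.File;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class StorageLocator {
    public static File storageDir() throws NamingException {
        Context initialContext = new InitialContext();
        // Tomcat exposes container-managed environment entries under java:comp/env
        String path = (String) initialContext.lookup("java:comp/env/app/storageDir");
        return new File(path);
    }
}

The webapp then only knows the logical name; where the files actually live is decided per server in the container configuration, which survives redeploying the war and makes moving between operating systems a configuration change rather than a code change.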
