I'm having trouble finding documentation on this. After some googling I found that bin, conf, logs, temp, webapps, and work are directories that should exist in CATALINA_BASE.
temp, logs, webapps, bin, and work I don't have any trouble understanding.
bin, I suppose, is just another bin folder; if for some reason both CATALINA_HOME and CATALINA_BASE are on the PATH, then the scripts in both folders will be available for execution.
But what about conf? Will the contents of CATALINA_HOME/conf be ignored entirely if CATALINA_BASE is set? Suppose I only need to customize a few config files per CATALINA_BASE instance; would I still need to keep a complete set of config files in CATALINA_BASE/conf, or could the standard config files in CATALINA_HOME/conf be shared?
And ditto for CATALINA_BASE/lib... would this work as a "global" lib folder per instance?
You can find the answer in the Tomcat documentation:
http://tomcat.apache.org/tomcat-6.0-doc/RUNNING.txt
Advanced Configuration - Multiple Tomcat Instances
In many circumstances, it is desirable to have a single copy of a
Tomcat binary distribution shared among multiple users on the same
server. To make this possible, you can set the $CATALINA_BASE
environment variable to the directory that contains the files for your
'personal' Tomcat instance.
When you use $CATALINA_BASE, Tomcat will calculate all relative
references for files in the following directories based on the value
of $CATALINA_BASE instead of $CATALINA_HOME:
bin - Only setenv.sh (*nix), setenv.bat (windows) and tomcat-juli.jar
conf - Server configuration files (including server.xml)
logs - Log and output files
webapps - Automatically loaded web applications
work - Temporary working directories for web applications
temp - Directory used by the JVM for temporary files (java.io.tmpdir)
Note that by default Tomcat will first try to load classes and JARs
from $CATALINA_BASE/lib and then $CATALINA_HOME/lib. You can place
instance specific JARs and classes (e.g. JDBC drivers) in
$CATALINA_BASE/lib whilst keeping the standard Tomcat JARs in
$CATALINA_HOME/lib.
If you do not set $CATALINA_BASE, $CATALINA_BASE will default to the
same value as $CATALINA_HOME, which means that the same directory is
used for all relative path resolutions.
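So, to answer the conf question directly: with CATALINA_BASE set, conf is resolved against CATALINA_BASE, and in practice each instance keeps its own complete conf directory, usually a copy of CATALINA_HOME/conf that you then customize. A minimal sketch of setting up an instance (the paths here are made up for illustration):
# shared binary install (CATALINA_HOME) and a per-instance base (CATALINA_BASE)
export CATALINA_HOME=/opt/tomcat
export CATALINA_BASE=/var/tomcat/instance1
mkdir -p "$CATALINA_BASE"/{conf,lib,logs,temp,webapps,work}
cp -r "$CATALINA_HOME"/conf/. "$CATALINA_BASE"/conf/   # give the instance its own conf set
"$CATALINA_HOME"/bin/startup.sh   # picks up CATALINA_BASE from the environment
And per the note above, CATALINA_BASE/lib does act as a per-instance "global" lib folder: it is searched before CATALINA_HOME/lib, so instance-specific JARs (e.g. JDBC drivers) go there while the standard Tomcat JARs stay shared.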
For scripts that run automatically at startup, I use the following property:
spring.datasource.data=classpath:base.scripts/*
It runs all the scripts in src/main/resources/base.scripts.
But imagine my script files are located somewhere other than resources (in another project directory, for example src/test/resources/base/scripts). How can I specify that path?
This is how I fixed it:
spring.datasource.data=file:src/main/resources/base.scripts/*
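If the scripts live under the test tree instead, the same file: prefix should work there too; an untested sketch:
spring.datasource.data=file:src/test/resources/base/scripts/*
Note that a relative file: path like this is resolved against the working directory, so it only works when the app is launched from the project root.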
I'm writing because I have a strange problem with the Dockerfile build process.
The problem concerns the Dockerfile build context. As far as I understood it, the directory context that I can access from a Dockerfile is one directory up and one directory down.
Example directory tree:
A - B - C - D - E
If my Dockerfile is in C,
I can access B and D,
but I can't access A or E.
I have a problem because this is my case:
My Dockerfile is in C, and I need to access files from B, D, and E, and I really don't know how to do it.
I need access because my target JAR is in E, and I need to ADD that file to implement Docker hot deploy with Spring Dev Tools.
Something like this in the Dockerfile:
ADD ./D/E/jar.file jar.file
ENTRYPOINT xxx
EXPOSE xxx
And I still need to access B to get some other files.
Was that clear? Sorry, I know it's strange.
Just because you can do something doesn't mean it is right; if something is not recommended, issues can arise.
If you read the general guidelines and recommendations, they recommend keeping things in the build context, so why do you need to copy things from a different directory? By the way, it isn't possible anyway: Docker can only copy files from the build context, so it's better to keep your JAR file inside the Dockerfile's context.
Understand build context
When you issue a docker build command, the current working directory
is called the build context. By default, the Dockerfile is assumed to
be located here, but you can specify a different location with the
file flag (-f). Regardless of where the Dockerfile actually lives, all
recursive contents of files and directories in the current directory
are sent to the Docker daemon as the build context.
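That said, the build context doesn't have to be the Dockerfile's own directory. A sketch for the A/B/C/D/E tree above (the image name and paths are placeholders): run the build from B, so that B's files, D, and E are all inside the context, and point the -f flag at the Dockerfile.
# run the build from B; everything under B is sent to the daemon as the context
cd /path/to/A/B
docker build -f C/Dockerfile -t myapp .
# inside the Dockerfile, ADD/COPY paths are then relative to the context root (B):
#   ADD C/D/E/jar.file jar.file
The trade-off is that all of B is shipped to the daemon, so a .dockerignore file may be needed to keep the context small.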
I have the following files which I would like to upload to Artifactory as a 9.8.0 versioned artifact.
NOTE: The first two files DO NOT have an extension (they are executable files, i.e. if you open them or cat them, you'll see binary junk characters).
Folder/files of a given version 9.8.0 in CVS is like:
com.company.project/gigaproject/v9.8.0/linux/gigainstall
com.company.project/gigaproject/v9.8.0/solaris/gigainstall
com.company.project/gigaproject/v9.8.0/win32/gigainstall.exe
com.company.project/gigaproject/v9.8.0/gigafile.dtd
com.company.project/gigaproject/v9.8.0/gigaanotherfile.dtd
com.company.project/gigaproject/v9.8.0/giga.jar
com.company.project/gigaproject/v9.8.0/giga.war
Uploading the above files which have an extension is very easy: you log in to Artifactory as an administrator (or a user with permission to deploy artifacts), click on the "Deploy" tab, browse for the artifact file, and once you've selected it, click on the "Upload" button.
Next you'll see a deploy screen. You tweak what you want in the fields on this page, and once you click on "Deploy Artifact", you are done. All you have to make sure is that you select the correct file.extension file while uploading, and that the file extension is shown correctly in the "Target Path" box (with the version -x.x.x, etc.).
My questions:
Question 1: How do I upload an artifact which doesn't have an extension? It seems that Artifactory by default treats an artifact as having a .jar extension. How can I upload the "gigainstall" artifact shown in the folder/file structure above for both Linux and Solaris? I see I can name the artifacts gigainstall-linux and gigainstall-solaris to differentiate them, but I am not sure how to tell Artifactory that this artifact doesn't have any extension.
I don't think the development team will start generating this artifact with a proper extension (this artifact may be hard-coded everywhere in other projects, which currently fetch it from CVS/SVN source control somewhere; storing an artifact in a version control tool is itself a bad practice).
Question 2: How would I tell a build system (for example, Gradle) to consume a non-extensioned artifact during, let's say, the 'compile' task? In build.gradle, under the dependencies { .. } section, I would add something like the snippet below, but I am not sure what to write for the non-extensioned files (the first two in the folder/file structure I mentioned above).
dependencies {
//compile 'com.company.project:gigainstall-linux:9.8.0#'
//compile 'com.company.project:gigainstall-linux:9.8.0#??????'
//compile 'com.company.project:gigainstall-linux:9.8.0#""'
//compile 'com.company.project:gigainstall-linux:9.8.0#"none"'
//compile 'com.company.project:gigainstall-linux:9.8.0#"NULL_or_something"'
// The following will easily get giga.jar version giga-9.8.0.jar from Artifactory repository
compile 'com.company.project:giga:9.8.0'
// The following will easily get giga.war
compile 'com.company.project:giga:9.8.0#war'
// Similarly, other extension based artifacts can be fetched from Artifactory
compile 'com.company.project:gigafile:9.8.0#dtd'
compile 'com.company.project:gigaanotherfile:9.8.0#dtd'
}
Answer 1 (this will cover Question 2 as well, in a different sense): Artifactory's "Artifact Bundle" feature, under the "Deploy" tab, can do the TRICK, at least for uploading the artifacts the way we want, by first creating a zip file (containing the desired structure and artifacts in it). Alternatively, you can upload the artifacts by calling the Artifactory REST API.
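As an aside, the REST API route is just an HTTP PUT to the target path; a hedged sketch, with the repository name and credentials as placeholders:
curl -u admin:password -T gigainstall "http://artifactoryserver:8081/artifactory/libs-release-local/com/company/project/gigainstall/9.8.0/gigainstall-9.8.0"
Note this uploads a single file as-is, extensionless name and all.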
High-level idea:
Create a zip file called gigaproject.zip (or any name, as a .zip/.tar/other compressed file that Artifactory can read). Inside the zip, create the structure in which these artifacts should be loaded into Artifactory, i.e.:
gigaproject.zip will contain the following folders/structure/files.
Case 1:
com/company/project/gigaproject/9.8.0/linux/gigainstall
com/company/project/gigaproject/9.8.0/solaris/gigainstall
com/company/project/gigaproject/9.8.0/win32/gigainstall.exe
com/company/project/gigaproject/9.8.0/gigafile.dtd
com/company/project/gigaproject/9.8.0/gigaanotherfile.dtd
com/company/project/gigaproject/9.8.0/giga.jar
com/company/project/gigaproject/9.8.0/giga.war
NOTE: In the Case 1 example, I didn't embed -x.x.x in the file names (i.e. I'm using plain and simple giga.jar instead of giga-9.8.0.jar).
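For completeness, creating such a bundle is just zipping the tree from its root; a sketch, assuming the files above are staged in a local directory:
cd staging-dir   # hypothetical folder containing the com/company/project/... tree shown above
zip -r gigaproject.zip com/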
The above upload/deploy results in the files being stored with exactly that structure.
So, have we achieved what we wanted? Visibly, yes, but not in the way Artifactory usually stores artifacts (files should have the -x.x.x version embedded in their names, and the artifact ID should match the artifact file name). Now, if you want to consume the following in a Gradle build file, you CANNOT: first, you haven't uploaded the file names with the -x.x.x version in them; second, the artifact ID in our Case 1 tree was "gigaproject" (the folder after com/company/project), so Gradle's way of specifying which artifact ID and which artifact file name you want won't work.
compile 'com.company.project:gigaproject:CANNOTSAY_HOW_TO_GET_GIGA_JARorGIGAINSTALL_with_without_extension'
Conclusion: It's possible to upload any files (with or without an extension) to Artifactory, in any structure, but whether your build system will be able to consume them is another matter.
I deleted the structure created by the Case 1 .zip file from the Artifactory repository, and the .zip file itself, before trying Case 2.
Case 2:
Let's create an individually versioned file name for each artifact, build the structure in the format Artifactory actually uses to store them (an artifact as seen in a repository tree view), and create a .zip file containing that structure. We'll use the same "Artifact Bundle" feature to upload this .zip file, so that the artifact ID (the second value we mention when consuming an artifact) matches the artifact file name in Artifactory.
Folder/file structure for the .zip file:
com/company/project/gigainstall/9.8.0/gigainstall-9.8.0.linux
com/company/project/gigainstall/9.8.0/gigainstall-9.8.0.solaris
com/company/project/gigainstall/9.8.0/gigainstall-9.8.0.exe
com/company/project/gigafile/9.8.0/gigafile-9.8.0.dtd
com/company/project/gigaanotherfile/9.8.0/gigaanotherfile-9.8.0.dtd
com/company/project/giga/9.8.0/giga-9.8.0.jar
com/company/project/giga/9.8.0/giga-9.8.0.war
NOTE: This time we'll use the same "Artifact Bundle" feature, and for the similar files (gigainstall under both the Linux and Solaris folders) I took the approach of creating a single gigainstall folder containing the file names gigainstall-9.8.0.linux and gigainstall-9.8.0.solaris; i.e., when we consume these artifacts in Gradle's dependencies { ... } section, we'll use the x.x.x#extension notation to fetch them from Artifactory.
OK, once the "Artifact Bundle" deploy/upload completed successfully, I got the following message:
Successfully deployed 7 artifacts from archive: gigaproject.zip (1 seconds).
Now, let's look at one of the artifacts in Artifactory's tree view. The files are now in place, named filename-x.x.x.extension, so I can consume them easily in Gradle.
In the Gradle build file (build.gradle), I'll mention:
dependencies {
compile "com.company.project:gigainstall:9.8.0#linux"
compile "com.company.project:gigainstall:9.8.0#solaris"
compile "com.company.project:giga:9.8.0"
compile "com.company.project:giga:9.8.0#war"
compile "com.company.project:gigafile:9.8.0#dtd"
compile "com.company.project:gigaanotherfile:9.8.0#dtd"
}
OH OH!! That didn't work; see below for the Gradle error. Why? The "Artifact Bundle" upload/deploy feature uploads the zip file's contents as they are in the .zip, but it DOES NOT create a .pom file per artifact it deploys, which makes the Gradle build fail. (Maybe in Ant this would have succeeded.) The error occurred for each individual .jar/.war/.dtd/etc. file; I'm just showing one example.
While doing gradle clean build:
Could not resolve all dependencies for configuration ':compile'.
> Could not resolve com.company.project:gigafile:0.0.0.
Required by:
com.company.project:ABCProjectWhichConsumesGIGAProjectArtifacts:1.64.0
> Could not GET 'http://artifactoryserver:8081/artifactory/ext-snapshot-local/com/company/project/gigafile/0.0.0/gigafile-0.0.0.pom'. Received status code 409 from server: Conflict
Case 3: Let's take a simple approach (a workaround, but it will save a lot of pain).
Create a gigaproject.zip file with the following structure. This approach embeds no x.x.x version value in the individual artifact/file names inside the zip. We'll use the "Single Artifact" deploy instead (which automatically creates the .pom for the gigaproject.zip file during the upload/deploy process). You'll still be able to get the gigainstall file without adding any extension to its name. During the upload/deploy step, as you've already seen, you upload gigaproject.zip and Artifactory stores it in the given target repository as gigaproject-x.x.x.zip, where x.x.x is 9.8.0 in our case.
gigaproject/linux/gigainstall
gigaproject/solaris/gigainstall
gigaproject/win32/gigainstall.exe
gigaproject/gigafile.dtd
gigaproject/gigaanotherfile.dtd
gigaproject/gigaproject.zip
gigaproject/giga.jar
gigaproject/giga.war
Now, upload it to Artifactory using the "Single Artifact" feature. Click "Deploy Artifact" once you've tweaked the values for GroupId, ArtifactId, Version, etc.
Once this is uploaded, you'll see the zip artifact in the target repository (I picked a bad example; usually this would be libs-snapshot-local or libs-release-local instead of ext-...), and you'll be able to consume the zip artifact directly in Gradle:
dependencies {
// This is the only line we need now.
compile "com.company.project:gigaproject:9.8.0#zip"
}
Once the .zip is available to the Gradle build system, you can tell Gradle to unpack this .zip file somewhere in your build/workspace area and feed the actual (unpacked) files (gigainstall, .dtd, .jar, .war, etc.) into the build process/steps.
PS: Cases 1 and 2 would have worked for Ant, I guess.
Answer 2:
If you have uploaded a non-extensioned file either way, make sure you have manually created/uploaded its POM file as well. I.e., if I uploaded gigainstall-9.8.0 as an artifact at com/company/project/gigainstall/9.8.0/gigainstall-9.8.0, then at the same level I have to create its POM file (see a simple template .pom file for a custom jar artifact, or see what the POM Editor window shows you when uploading an extensioned file via "Single Artifact" deploy) and upload both, so that Gradle won't error out with a no-POM conflict/error. Ant might not need the POM (I didn't check).
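A rough command-line sketch of that, where the minimal POM below and the repository/credentials are illustrative assumptions (this is not something Artifactory generates for you in this flow):
# write a minimal POM for the extensionless artifact
cat > gigainstall-9.8.0.pom <<'EOF'
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.company.project</groupId>
  <artifactId>gigainstall</artifactId>
  <version>9.8.0</version>
</project>
EOF
# PUT it next to the artifact via the REST API
curl -u admin:password -T gigainstall-9.8.0.pom "http://artifactoryserver:8081/artifactory/libs-release-local/com/company/project/gigainstall/9.8.0/gigainstall-9.8.0.pom"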
Once it's there in Artifactory, the following line should work (or please comment if you find another way).
dependencies {
// Note: nothing is mentioned after x.x.x#
compile "com.company.project:gigainstall:9.8.0#"
}
We have a web app. This web app is installed for each of our clients in a different folder on our VPS. We also have a separate folder with the base files of the web app (all code up to date).
The problem we're having is that we need to automate the update process of the web app for all client installations. If we add files to the base web app, move files, create a directory, or remove a file or directory, these changes should automatically be applied to every client installation of the web app. Currently we're in beta, and each code update means manually updating all files for each client installation over FTP; the more changes there are, the longer this process takes and the more complex it becomes.
Is there a tool available to automate this kind of process? Or if not, how do you suggest it should be approached?
/
/clients
/client1.domain.com
/[web app subfolders and files...]
/client2.domain.com
/[web app subfolders and files...]
/client3.domain.com
/[web app subfolders and files...]
/base_web_app
/[web app subfolders and files...]
So basically, each time we do any changes to the contents of /base_web_app, those changes should be automatically applied (sync) to the web app installations inside /clients (that is, /client1.domain.com, /client2.domain.com, /client3.domain.com).
It is also important to note that we need some files and/or subfolders to be ignored/not overwritten. Mainly configuration files specific to each client's installation.
Check out rsync: http://rsync.samba.org/examples.html It is a tool to synchronize files from one area to another (say your staging area to your production area). You can use patterns to specify what to sync and what to exclude, and it only copies changed files.
On your staging area (where you have the latest changes you want to sync), you could do something like this:
# sync the staging base_web_app directory to the production /base_web_app
# (-R preserves the source path relative to the destination, recreating /base_web_app on the server)
rsync -avRc base_web_app server:/
# sync the staging base_web_app contents into each client directory, excluding each
# client's config directory; the trailing slash on the source copies the directory's
# contents rather than nesting a base_web_app folder inside the client dir
rsync -avc --exclude 'config/*' base_web_app/ server:/clients/client1.domain.com/
rsync -avc --exclude 'config/*' base_web_app/ server:/clients/client2.domain.com/
rsync -avc --exclude 'config/*' base_web_app/ server:/clients/client3.domain.com/
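One caveat worth adding: by default rsync only adds and updates, so files you removed from base_web_app will linger in the client folders unless you pass --delete. Excluded paths such as config/* are protected from deletion unless you also pass --delete-excluded, which is exactly what we want here. A preview run first is cheap:
# -n/--dry-run shows what would change without transferring anything
rsync -avcn --delete --exclude 'config/*' base_web_app/ server:/clients/client1.domain.com/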
While working on my GAE project in my dev environment, whenever I upload data to my dev datastore, the log files are stored in the current directory, for instance:
C:\dev> ls
bulkloader-log-20090912.104643
bulkloader-log-20090912.104648
bulkloader-log-20090912.104731
bulkloader-log-20090912.105526
bulkloader-log-20090912.110428
bulkloader-progress-20090912.104648.sql3
bulkloader-progress-20090912.104731.sql3
bulkloader-progress-20090912.105526.sql3
bulkloader-progress-20090912.110428.sql3
project
project is my GAE app. The files above are generated when I run the command appcfg.py upload_data. Is there a way to tell GAE where to store those log files, for instance in a log folder?
Use the --log_file=... option to appcfg.py, as documented: this command-line option lets you give the complete path to the log file, including folder and name. (You cannot give just the folder and let appcfg.py figure out the name; for that, you'd need to write a tiny script that figures out the name and then calls appcfg.py.)
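Such a wrapper script could be as small as this (a sketch for *nix shells; the logs/ folder name and the upload_data arguments are placeholders):
# compute a timestamped name matching the default format, then hand it to appcfg.py
mkdir -p logs
appcfg.py --log_file="logs/bulkloader-log-$(date +%Y%m%d.%H%M%S)" upload_data project
On Windows you'd compute the timestamp with the shell's own facilities instead of $(date ...).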