We are trying to do a partial export of Alfresco content.
We used the batch commands below on Windows, and they meet our needs, but because of the huge amount of Alfresco content we set up the export to cover only part of the content, not all of it.
The problem is that this partial export does not include previous versions of a content item; it only exports the latest version.
To export all versions of an item we are forced to run a complete export, which produces a huge amount of exported content.
Is there any solution?
c:
cd C:\Alfresco\tomcat\webapps\alfresco\WEB-INF
set CPATH=../../../lib/;../../../endorsed/;lib/*;classes;../../../shared/classes;
java.exe -Xms128m -Xmx512m -Xss96k -XX:MaxPermSize=160m -classpath %CPATH% org.alfresco.tools.Export -user admin -pwd admin -s workspace://SpacesStore -path /app:company_home/cm:test -d d:/export -root -overwrite -verbose -zip export.acp
I guess you can try the Bulk Export tool.
It's a free add-on and should include version support; if not, it's probably quite easy to add.
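For illustration only (this is my recollection of that add-on's web script and its parameters, not something confirmed by the answer, so verify it against the add-on's README), an export including version history is typically triggered via a URL along these lines:
http://localhost:8080/alfresco/service/extensions/bulkexport/export?nodeRef=<folder-nodeRef>&base=/opt/export&exportVersions=true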
I want to create a Python environment with the data science libraries NumPy, Pandas, PyTorch, and Hugging Face Transformers. I use Miniconda to create the environment and to download and install the libraries. conda install has a --download-only flag to download the required packages without installing them, so that they can be installed later from a local directory. However, even when conda only downloads the packages without installing them, it still extracts them.
Is it possible to download the packages without extracting them and extract them later, just before installation?
There is no simple CLI command to prevent the extraction step. Extraction is regarded as part of the FETCH operation, which populates the package cache before the LINK operation transfers the package into the specified environment.
The alternative is to do something manually. Naively, one could search Anaconda Cloud and download packages by hand; however, it is probably better to go through the solver to ensure package compatibility. All the info about the operations to be run can be viewed by including the --json flag. That output can be filtered down to just the tarball URLs, which can then be downloaded directly. Here's a script along these lines (assuming Linux/Unix):
File: conda-download.sh
#!/bin/bash -l
# Dry-run solve of a throwaway environment, emit JSON, pull out the tarball URLs,
# and download them (wget -c resumes partial downloads).
conda create -dn null --json "$@" |\
  grep '"url"' | grep -oE 'https[^"]+' |\
  xargs wget -c
which can be used as
./conda-download.sh -c conda-forge -c pytorch numpy pandas pytorch transformers
that is, it accepts all the arguments that conda create would, and it downloads all the corresponding tarballs locally.
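As a possible follow-up (this part is my own assumption, not from the original answer): conda can install tarballs directly from explicit file paths, so the downloaded files can later be installed offline, for example:
# create a bare environment, then install the previously downloaded tarballs;
# no dependency re-solving happens for explicit tarball paths
conda create -y -n offline-env
conda install -n offline-env --offline ./*.tar.bz2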
Ignoring Cached Packages
If you already have some packages cached then the above will not redownload them. Instead, if you wish to download all tarballs needed for an environment, then you could use this alternate version which overrides the package cache using an empty temporary directory:
File: conda-download-all.sh
#!/bin/bash -l
tmp_dir=$(mktemp -d)
# Point the package cache at an empty temporary directory so that every required
# tarball is reported, then extract the URLs and download them.
CONDA_PKGS_DIRS=$tmp_dir conda create -dn null --json "$@" |\
  grep '"url"' | grep -oE 'https[^"]+' |\
  xargs wget -c
rm -r "$tmp_dir"
Do you really want to use conda-pack? That lets you archive a conda environment for reproducing it without using the internet or re-solving dependencies. To just avoid re-solving you can also use conda env export --explicit, but that still ties you to the source (the internet or a local disk repository).
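For reference, a minimal sketch of the conda-pack workflow (assuming conda-pack is installed and the environment is called myenv; names and paths are placeholders):
conda pack -n myenv -o myenv.tar.gz          # archive the environment
# on the target machine:
mkdir -p /opt/myenv && tar -xzf myenv.tar.gz -C /opt/myenv
source /opt/myenv/bin/activate
conda-unpack                                 # fix up the path prefixes for the new location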
If you have a static environment (read-only) and want to really reduce the Docker image size, you can volume-mount the environment at runtime. You would need to match the file paths (i.e. /opt/anaconda => /opt/anaconda).
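A hedged sketch of that idea (the image name and environment path are placeholders; the environment must sit at the same path inside the container as where it was created):
docker run --rm -v /opt/anaconda:/opt/anaconda:ro my-slim-image \
    /opt/anaconda/bin/python app.py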
I'm trying to export a table schema via pgAdmin 4. I am running Postgres in Docker, though, so I don't have direct access to the file path.
Setting the binary path to /usr/bin worked for me.
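As an alternative sketch (my own suggestion, not part of the original answer), the schema can also be dumped by running pg_dump inside the container; the container name, user, and database below are placeholders:
docker exec -t my_postgres pg_dump -U postgres -d mydb --schema-only > schema.sql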
I installed SQL Server on my Mac in a Docker container, following the instructions from this article.
I run the container with Kitematic and managed to connect to the server using Navicat Essentials for SQL Server.
The server has four databases and I can create new ones, but, ideally, I would like to import an existing database as a .bacpac.
The instructions from this answer have been of use to me in the past. Can I run something similar within the container? Or, more generally, is there a way to import a database into the container?
Hi all! We finally have a preview ready for sqlpackage that is built on dotnet core and is cross-platform! Below are the links to download from. They are evergreen links, i.e. each day a new build is uploaded. This way any checked in bug fix is available the next day. Included in the .zip file is the preview EULA.
Linux: https://go.microsoft.com/fwlink/?linkid=873926
OSX: https://go.microsoft.com/fwlink/?linkid=873927
Windows: https://go.microsoft.com/fwlink/?linkid=873928
Release notes:
1. The /p:CommandTimeout parameter is hardcoded to 120
2. Build and deployment contributors are not supported
   a. Need to move to .NET Core 2.1, where System.ComponentModel.Composition.dll is supported
   b. Need to handle case-sensitive paths
3. SQL CLR UDT types are not supported
   a. This includes the SQL Server types SqlGeography, SqlGeometry, & SqlHierarchyId
4. Older .dacpac and .bacpac files that use JSON serialization are not supported
5. Referenced .dacpacs (e.g. master.dacpac) may not resolve due to issues with case-sensitive file systems
For lack of a better method, please provide any feedback you have here on this GitHub issue.
Thanks for giving it a try and letting us know how it goes!
https://github.com/Microsoft/mssql-docker/issues/135#issuecomment-389245587
EDIT: I've made you a Docker image for this
https://hub.docker.com/r/samuelmarks/mssql-server-fts-sqlpackage-linux/
Example of setting up a container, creating a database, copying a .bacpac file over, and importing it into the aforementioned database:
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 --name sqlfts0 samuelmarks/mssql-server-fts-sqlpackage-linux
docker exec -it sqlfts0 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>' -Q 'CREATE DATABASE MyDb0'
docker cp ~/Downloads/foo.bacpac sqlfts0:/opt/downloads/foo.bacpac
docker exec -it sqlfts0 dotnet /opt/sqlpackage/sqlpackage.dll /tsn:localhost /tu:SA /tp:'<YourStrong!Passw0rd>' /A:Import /tdn:MyDb0 /sf:foo.bacpac
It looks like Microsoft has implemented support for this in sqlpackage, with documentation!
You will have to add sqlpackage to your container.
You can download it here. (Optionally, there is a direct link to the Linux package here; hopefully it doesn't change.)
The following are instructions for running this from a Windows machine -- obviously it's the bare minimum to get it working. Please change the passwords, and consider putting this in a docker-compose.yml for re-use.
I unzip the above package into the folder c:\sqlpackage (my Windows docker run doesn't allow relative paths), and then mount that into the container along with the .bacpac, like so:
docker run -d -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Asdf1234" -v c:\sqlpackage:/opt/sqlpackage -v c:\yourdb.bacpac:/tmp/yourdb.bacpac -p 1433:1433 --name mssql-server-example microsoft/mssql-server-linux:2017-latest
Here is what a *nix user could run instead:
docker run -d -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Asdf1234' -v ./sqlpackage:/opt/sqlpackage -v ./yourdb.bacpac:/tmp/yourdb.bacpac -p 1433:1433 --name mssql-server-example microsoft/mssql-server-linux:2017-latest
and finally, attach to your container and run:
/opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:targetdbname /tu:sa /tp:Asdf1234 /sf:/tmp/yourdb.bacpac
After this, you should be able to connect to localhost with SSMS, using the username and password you provided above, and see 'targetdbname'! These are mostly notes I wrote for myself, but I'm sure others can use them too.
You can use the free Azure Data Studio from Microsoft. Once you have it installed, install the "Admin Pack for SQL Server" extension from Microsoft. Then you can import .bacpac files with ease.
It seems this is not a supported feature of the Linux implementation.
See this link.
I deployed an app with gcloud preview app deploy.
Is there a way to download it to another local machine?
How can I get the files? I tried via SSH with no success (I can't access the Docker directory).
UPDATE:
I found this:
gcloud preview app modules download default --version 1 --output-dir=my_dir
but it doesn't download any files.
Log
Downloading module [default] to [my_dir/default]
Fetching file list from server...
|- Downloading [0] files... -|
I am coming back to Google App Engine after two years, and I see that they have made lots of improvements and added tons of features. But sadly, their documentation sometimes leaves much to be desired.
I used to download the code of an uploaded version with appcfg.py using the following command:
appcfg.py download_app -A <app_id> -V <version> <output-dir>
But of course they have now consolidated everything into the gcloud shell, where appcfg.py is not accessible.
However, the following method helped me to download the deployed code:
Go to the console and into Google App Engine.
Select the project you want to work with.
Once the project's dashboard opens, click the icon at the top right to open the built-in console window.
This should load the Cloud Shell at the bottom; if you check, appcfg.py is available for you to use in this VM.
Hence, use appcfg.py download_app -A <app_id> -V <version> <output-dir> to download the code.
Once you have the code in the desired folder, you can open the docker code editor to download it to your local machine.
Here I assumed that right-clicking and exporting the desired folder would work, but instead it gave me the following error message:
{"Error":"'concurrency' must be a number but it is [object Undefined]","Message":"'concurrency' must be a number but it is [object Undefined]"}
So I thought maybe it would play along nicely if the folder was an archive. Go back to the Cloud Shell and, using whatever utility you fancy, make an archive of the folder:
zip -r mycode.zip mycode
Go back to the docker code editor, then export and download the archive.
Of course there might be many more ways to do it (hopefully), but this is what made sense to me after returning to Google App Engine after two years.
Currently, the best way to do this is to pull the files out of Docker.
Put the instance into self-managed mode so that you can ssh into it:
$ gcloud preview app modules set-managed-by default --version 1 --self
Find the name of the instance:
$ gcloud compute instances list | grep gae-default-1
Copy it out of the Docker container, change the permissions, and copy it back to your local machine:
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 'sudo docker cp gaeapp:/app /tmp'
$ gcloud compute ssh --zone=us-central1-f gae-default-1-1234 "sudo chown -R $USER /tmp/app"
$ gcloud compute copy-files --zone=us-central1-f gae-default-1-1234:/tmp/app /tmp/
$ ls /tmp/app
Dockerfile
[...]
IMHO, the best option today (Aug 2018) is:
Under the main menu, under Products, go to Tools -> Cloud Build -> Build history.
There, click the ID of the build you want.
Then, in the window that opens (Build details), click the source link and the download of your compressed code begins.
As simple as that.
HTH.
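A rough command-line equivalent of the same idea (my own sketch, not from the original answer; the bucket and object values come from the build record, and field names may vary by gcloud version):
gcloud builds list --limit=5
gcloud builds describe <BUILD_ID> --format='value(source.storageSource.bucket, source.storageSource.object)'
gsutil cp gs://<bucket>/<object> ./source.tgz
tar -xzf source.tgz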
As of Feb 2021, you can install appengine-sdk using pip
pip install appengine-sdk
Once installed, appcfg can be used to download the app code.
python -m appcfg download_app -A app_id [ -V version ] out-dir
Nothing else worked for me. I finally found the source code this way: go to Google Cloud Storage, choose the bucket whose name starts with us.artifacts...., then select containers > images and download the latest one (go by the created date). Unzip the downloaded file; it will contain all the deployed App Engine source code.
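A hedged command-line sketch of those steps (the bucket name pattern follows the usual App Engine artifact buckets and is an assumption; it may differ for your project):
gsutil ls -l gs://us.artifacts.<project-id>.appspot.com/containers/images/
gsutil cp gs://us.artifacts.<project-id>.appspot.com/containers/images/<latest-object> ./image.tar.gz
tar -xzf image.tar.gz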
I want to write a batch script into which I enter revision number(s), and it then deploys those revisions in sequence to a specific location on the server.
Any help or a sample Windows batch script would be highly appreciated.
Regards,
Waseem Shahzad Bukhari.
Usually you want to perform a checkout for this
svn checkout -r <REVISION> <URL> <TO-PATH>
You can then later call
svn update -r <REVISION> <PATH>
to update to a later revision. This uses the data stored in the .svn subdir(s) to perform the least amount of work.
If you were to use
svn export -r <REVISION> <URL> <TO-PATH>
you would also get the files from the repository, but without any way to connect them back to where they came from.
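To tie this back to the original question, here is a minimal sketch of a Windows batch script built on those commands (the repository URL and target path are placeholders, and this is an illustration rather than a tested deployment script):
@echo off
rem Usage: deploy.bat 101 105 110
rem Checks out (or updates) the working copy to each given revision, in order.
set URL=https://svn.example.com/repo/trunk
set TARGET=C:\deploy\site

for %%R in (%*) do (
    if exist "%TARGET%\.svn" (
        svn update -r %%R "%TARGET%"
    ) else (
        svn checkout -r %%R "%URL%" "%TARGET%"
    )
)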