How to upload react build folder to my remote server? - reactjs

I'm trying to deploy my React build folder to my server. I'm serving index.html and the static files configured in my settings.py, following the Create React App deployment docs (https://create-react-app.dev/docs/deployment/).
Since my backend is running on Ubuntu, I can't just copy the folder from my Windows side and paste it. For now, I upload my build folder to Google Drive and download it on Ubuntu. Even then, I can't simply copy and paste it in my PyCharm IDE; I can only copy the content of each file, create a new file on my server, and paste the content into it. This is just so time-consuming.
Is there any better way to do this?

You can use scp to upload the folder to the remote server.
This link may help you:
https://linuxhandbook.com/transfer-files-ssh/

Use the scp command, for example:
# copy a local folder to the remote server
scp -r /local/folder username@remote_address:/path/for/deploy
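Applied to the question above, a minimal sketch might look like this, assuming the build output sits in ./build locally and the Django project lives at ~/myproject on the server (the username, address, and paths are placeholders):
# run from the local Windows machine (scp ships with OpenSSH or Git Bash)
scp -r ./build username@remote_address:~/myproject/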

Related

download multiple files from sftp with Jenkins

I have to download all files from an FTP folder using explicit FTP over SSL/TLS. I need this for a Jenkins job running on a Windows machine, and I didn't find any plugins, so I am trying to use a batch script with curl. The following code lists the contents of the folder:
set "$FILEPATH=C:\temp"
set "$REMOTEPATH=/files/"
curl -u user:pass --ftp-ssl ftp://hostname.com:port%$REMOTEPATH% -o %$FILEPATH%
I figured out that with curl I have to download the files one by one, but how can I go through all the files in an FTP directory and get them one by one?
Is there a better way to achieve this? I read about mget, but it doesn't seem to work with explicit FTP over SSL.
Thanks
I couldn't get it to work with batch directly in the script, so I wrote a Python script instead; the job pulls it from git and executes it as a step in the pipeline. Python has some nice libraries for this, so it works like a charm.
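A sketch of what that pipeline step could look like, assuming the script lives in a Git repository and is called download_ftps_files.py (the repository URL, script name, and its arguments are all hypothetical):
REM hypothetical Jenkins batch step: fetch the helper script and run it
git clone https://github.com/example/ftps-helper.git
python ftps-helper\download_ftps_files.py --host hostname.com --remote-dir /files/ --local-dir C:\temp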

Downloading many files at once in SSH from Google Cloud?

I just started working with Google Cloud on a project (using VM instances), and I connect to it over SSH straight from the browser.
I will have thousands of .txt files in a few directories, and the "Download file" option only allows me to download 1 file at a time.
What's the easiest way to download all those files (or the whole directory) straight to my computer? Or, what method should I use/learn?
The easiest way is to install the Cloud SDK on your local machine (see the installation instructions in the Cloud SDK documentation) and use the gcloud compute scp command to download your files or directories. For example:
gcloud compute scp --recurse vm-instance:~/remote-directory ~/local-directory
This will copy a remote directory, ~/remote-directory, from vm-instance to the ~/local-directory directory on your local host. You'll find more details in the gcloud compute scp reference documentation.
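If the instance lives in a particular zone, you can pass it explicitly. A sketch with hypothetical instance, zone, and path names:
# copy everything under ~/data on the VM to the local machine
gcloud compute scp --recurse --zone us-central1-a vm-instance:~/data ~/local-data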

How to use flink-s3-fs-hadoop in Kubernetes

I see the info below in Flink's documentation about copying the respective JAR to the plugins directory in order to use S3.
How can I do that if I deploy Flink using Kubernetes?
"To use flink-s3-fs-hadoop or flink-s3-fs-presto, copy the respective JAR file from the opt directory to the plugins directory of your Flink distribution before starting Flink, e.g.
mkdir ./plugins/s3-fs-presto
cp ./opt/flink-s3-fs-presto-1.9.0.jar ./plugins/s3-fs-presto/"
If you are referring to the Kubernetes setup in the official docs, you can simply re-create your image:
Check out the Dockerfile in the GitHub repository
Download flink-s3-fs-presto-1.9.0.jar into the same folder as your Dockerfile
Add the following right before COPY docker-entrypoint.sh
# install Flink S3 FS Presto plugin
RUN mkdir ./plugins/s3-fs-presto
COPY ./flink-s3-fs-presto-1.9.0.jar ./plugins/s3-fs-presto/
Build the image, tag it, and push it to Docker Hub (see the sketch after this list)
In your Deployment YAML file, change the image name to the one you just pushed
You can then use s3://xxxxx in your config YAML file (e.g. flink-configuration-configmap.yaml)
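A sketch of the build-and-push step; the image name and tag are hypothetical:
# build the customised image and push it to Docker Hub
docker build -t your-dockerhub-user/flink-with-s3:1.9.0 .
docker push your-dockerhub-user/flink-with-s3:1.9.0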
If you are using the build.sh script that is part of Flink to build an application-specific Docker image, it has a parameter (--job-artifacts) that allows you to specify a list of artifacts (JAR files) to include in the image. These JAR files all end up in the lib directory. See https://github.com/apache/flink/blob/master/flink-container/docker/build.sh.
You could extend this to handle the plugins directory correctly, or not worry about it for now (putting them in the lib directory is still supported). A sample invocation is sketched below.
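A sketch of such an invocation, assuming the script's --from-release, --flink-version, --scala-version, and --image-name options alongside the --job-artifacts parameter mentioned above; the job JAR path and image name are hypothetical, so verify the flags against the script's usage message:
# hypothetical build.sh invocation; check the script's usage output for the exact flags
./build.sh --from-release --flink-version 1.9.0 --scala-version 2.11 --job-artifacts /path/to/my-job.jar --image-name my-flink-job:1.9.0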

Django Cookiecutter using environment variables pattern in production

I am trying to understand how to work with production .env files in a cookiecutter-django generated project.
The documentation for this is here:
https://cookiecutter-django.readthedocs.io/en/latest/developing-locally-docker.html#configuring-the-environment
The generated project contains .local and .production folders (under .envs/) for environment variables.
I am attempting to deploy to a docker droplet in digital ocean.
Is my understanding correct:
The .production folder is NEVER checked into source control and is only generated as an example of what to create on a production machine when I am ready to deploy?
So when I do deploy, as part of that process I need to pull/clone the project on the docker droplet and then either
manually create the .production folder with the production environment-variable folder structure,
OR
run merge_production_dotenvs_in_dotenv.py locally to create a .env file that I copy onto production, and then configure my production.yml to use that?
Thanks
Chris
The production env files are NOT checked into source control; only the local ones are. At least, that is the intent: production env files should not be in source control because they contain secrets.
However, they are added to the Docker image by docker-compose when you run it. You can create a Docker machine using the DigitalOcean driver, activate it from your terminal, and start the image you've built by running docker-compose -f production.yml up -d.
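A sketch of that sequence, assuming a machine named production-droplet and a DigitalOcean API token in the DO_TOKEN environment variable (both hypothetical):
# create and activate a Docker machine on DigitalOcean, then start the stack
docker-machine create --driver digitalocean --digitalocean-access-token $DO_TOKEN production-droplet
eval $(docker-machine env production-droplet)
docker-compose -f production.yml up -d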
Django cookiecutter does add .envs/.production, and in fact everything in the .envs/ folder, to source control. You can verify this by checking the .gitignore file: it does not contain .envs, meaning the .envs/ folder is checked into source control.
So when you want to deploy, you clone/pull the repository onto your server and your .production/ folder will be there too.
You can also run merge_production_dotenvs_in_dotenv.py to create a .env file, but the .env is not checked into source control, so you have to copy the file to your server. Then you can configure your docker-compose file to use path/to/your/project/.env as the env_file for any service that needs the environment variables in the file.
You can use scp to copy files from your local machine to your server easily, like this:
scp /path/to/local/file username@domain-or-ipaddress:/path/to/destination
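A sketch of the merge-and-copy workflow described above, assuming the merged file is created in the project root and the project lives at ~/myproject on the server (the address and paths are hypothetical):
# build a single .env from the production env files, then copy it to the server
python merge_production_dotenvs_in_dotenv.py
scp .env username@droplet-ip:~/myproject/.env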

how to create folder in sonatype nexus repository through command line

Is there a way to create folders and copy artifacts into them in a Sonatype Nexus repository through the Windows command line or batch files?
I finally solved my problem using curl:
curl -u admin:admin123 -T C:\upload\Test.txt http://Nexus_Repo_URL/folder_to_be_created/Test.txt
"folder_to_be_created" is the folder which is created in the repository and the file, 'Test.txt', is copied to it
Just upload the artifacts to whatever path is needed using one of the methods described here:
https://support.sonatype.com/hc/en-us/articles/213465818-How-can-I-programatically-upload-an-artifact-into-Nexus-
Any folders needed will be created automatically.
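For example, a sketch using the same credentials and host placeholders as the earlier answer, with a hypothetical nested path:
REM Nexus creates any missing folders in the path automatically
curl -u admin:admin123 -T C:\upload\Test.txt http://Nexus_Repo_URL/new/nested/folder/Test.txt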
