I'm currently writing a CircleCI script for a project. The folder contains multiple projects, each with its own build and deploy scripts.
My question is: how do I manage the multiple projects? Do I need a .circleci folder within each project, or can I use a single YAML script to handle the subdirectories?
My current script cds into the subdirectory in each run step.
You can do it all in one config by having multiple items under jobs. The default job must be called build, but you can call the others whatever you like. You can then cd into the appropriate directory inside each job, or add the directory name to your command arguments as you see fit. From the docs:
A run is comprised of one or more named jobs. Jobs are specified in the jobs map, see Sample 2.0 config.yml for two examples of a job map. The name of the job is the key in the map, and the value is a map describing the job.
...
If you are not using workflows, the jobs map must contain a job named build. This build job is the default entry-point for a run that is triggered by a push to your VCS provider. It is possible to then specify additional jobs and run them using the CircleCI API.
Elsewhere, a repo I contribute to has a working example of this:
jobs:
  build:
    steps:
      # ...
  build-oauth:
    steps:
      # ...
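Putting that together, here is a minimal sketch of a single 2.0 config that builds two subdirectories as separate jobs; the directory names, image, and build commands are assumptions, not taken from the repo above:

version: 2
jobs:
  build:
    docker:
      - image: cimg/base:stable    # hypothetical image; use whatever your projects need
    steps:
      - checkout
      - run:
          name: Build project-a
          command: |
            cd project-a           # hypothetical subdirectory name
            ./build.sh
  build-project-b:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run:
          name: Build project-b
          command: |
            cd project-b
            ./build.sh

Each job checks out the whole repo and only changes into its own subdirectory, so a single .circleci/config.yml at the repo root is enough.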
We have a React app in Azure DevOps. We build it using npm install / npm run build and then upload the zip file. From there we do a release to multiple stages/environments. Due to SOX compliance we're trying to maintain a single build/artifact no matter what the environment.
What I'm trying to do is set the environment variables during the release pipeline; for instance, substitute the value of something like process.env.REACT_APP_CONFIG_VALUE.
I've tried setting that in the pipeline variables during the release, but it does not seem to work. Is this possible, or do I have to use a JSON config of some sort instead of process.env?
Thanks
You can't achieve this by setting pipeline variables during the release: values like process.env.REACT_APP_CONFIG_VALUE are inlined into the JavaScript bundle at build time, so by release time there is no variable left to substitute.
Instead, you could use the RegEx Match & Replace extension task to rewrite the value in the built files. A tool like Regex Generator can help you build the regular expression.
Here is an example:
this._baseUrl = process.env.REACT_APP_CONFIG_VALUE;
This extension task uses regular expressions to match fields in the file. Check the published JS to see what the compiled line looks like before writing your pattern.
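If you'd rather not depend on the extension, the same idea can be sketched with a plain script step that rewrites a placeholder in the published JS. This is an alternative, not the extension's own syntax; the placeholder string, the drop path, and the ConfigValue release variable are all hypothetical:

# Hypothetical alternative to the RegEx Match & Replace task: bake a literal
# placeholder into the build, then swap it for a stage-specific value at release.
- task: Bash@3
  displayName: 'Substitute REACT_APP_CONFIG_VALUE placeholder'
  inputs:
    targetType: 'inline'
    script: |
      # "__CONFIG_VALUE__" was the build-time value of REACT_APP_CONFIG_VALUE,
      # so it now appears as a literal string in the bundled JS.
      find "$(Pipeline.Workspace)/drop" -name '*.js' \
        -exec sed -i "s|__CONFIG_VALUE__|$(ConfigValue)|g" {} +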
This is how I did it -
Step 1: Add all those files (.env, .env.development, .env.production) to the Azure DevOps library as secure files. You can download these secure files on the build machine using a DownloadSecureFile@1 pipeline task (YAML). This way you make sure the correct .env file is present on the build machine before yarn build --mode development runs in the pipeline.
Step 2: Add the following task to your Azure YAML pipeline in the appropriate place. I have created a GitHub repo https://github.com/mail4hafij/react-yarn-azure-pipeline if you want to see a complete example.
# Download secure file from azure library
- task: DownloadSecureFile@1
  inputs:
    secureFile: '.env.development'

# Copy the .env file
- task: CopyFiles@2
  inputs:
    sourceFolder: '$(Agent.TempDirectory)'
    contents: '**/*.env.development'
    targetFolder: '$(YOUR_DEFINED_PROJECT_ROOT_FOLDER_VARIABLE)'
    cleanTargetFolder: false
Keep in mind that secure files can't be edited, but you can always re-upload them.
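For completeness, the build step then runs with the copied .env.development already in place. A sketch using the same folder variable as above; the exact install and build commands are assumptions:

# Build after .env.development has been copied into the project root
- script: |
    yarn install --frozen-lockfile
    yarn build --mode development
  workingDirectory: '$(YOUR_DEFINED_PROJECT_ROOT_FOLDER_VARIABLE)'
  displayName: 'Build React app'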
I'm using Turborepo for my monorepo project; I have two React apps. How can I configure Turborepo and CircleCI (the repos are on GitHub) so that if I make changes to one project, the pipeline does not run for the second project?
I know Turborepo uses a hashing algorithm to check whether a project has changed and only rebuilds it if so.
I have tried looking here https://turborepo.org/docs/ci/circleci but it does not explain this behavior.
Steps would be:
Make code change to Project 1
Commit the monorepo changes to GitHub
GitHub detects the commit and triggers CircleCI to run CI/CD
This last part is what I'm not sure about: if it triggers CI/CD, it will trigger for both projects, right? And if so, how can I prevent it for everything except the project I changed?
I've been working on such a solution for days now. There are two core concepts in Turborepo to achieve this:
Filtering workspaces
Caching build outputs and storing the cache in the cloud (not what you're looking for)
So, you can filter your monorepo for a specific project, e.g.:
pnpm turbo run build --filter='my-project...[HEAD^1]' --dry=json
This checks whether the build task needs to run for the project "my-project", comparing the current source with HEAD^1. The --dry=json option just reports whether "build" would need to run for "my-project", without actually running it.
You can filter a whole lot more; check the docs.
Now, what I have built on top of this:
A new job in the GitHub workflow uses this filter command to check whether a deployment of my GraphQL server is needed, and stores the output of that decision as an artifact so that later jobs can use it (https://github.com/actions/upload-artifact).
My actual docker-build and deploy-to-fly-io jobs that run afterwards download this artifact and set a CONTINUE environment variable depending on whether they should build and deploy or not.
Every job coming after that has an if: ${{ env.CONTINUE == 'true' }} condition to skip it if no build/deploy is needed. A sketch of this check-then-gate pattern follows below.
It could be much simpler if you can run your build/deploy job directly with the turbo CLI, because then you can just combine your filter and the execution of the build; that was not possible in my case.
If you need to "skip" jobs that come later in your workflow, it's harder than it should be, as GitHub does not support aborting a workflow partway through.
For all the other commands like lint, typecheck and test: just add an appropriate filter option and they will only run on the "affected" workspaces/projects in your PR.
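A minimal sketch of that check-then-gate pattern, assuming a hypothetical workspace named my-project. It passes the decision as a job output rather than an artifact, which is a simpler variant of the approach described above:

jobs:
  check-affected:
    runs-on: ubuntu-latest
    outputs:
      continue: ${{ steps.affected.outputs.continue }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2   # makes sure HEAD^1 exists for the turbo filter
      # (node/pnpm setup steps omitted for brevity)
      - id: affected
        run: |
          # If the dry run lists any tasks, the project is affected by this commit.
          TASKS=$(pnpm turbo run build --filter='my-project...[HEAD^1]' --dry=json | jq '.tasks | length')
          if [ "$TASKS" -gt 0 ]; then
            echo "continue=true" >> "$GITHUB_OUTPUT"
          else
            echo "continue=false" >> "$GITHUB_OUTPUT"
          fi

  deploy:
    needs: check-affected
    if: ${{ needs.check-affected.outputs.continue == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "docker build and deploy would go here"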
Resources:
https://dev.to/ngconf/deploying-nx-affected-apps-from-github-actions-54n4
How can I get the previous commit before a push or merge in GitHub Action workflow?
https://github.com/orgs/community/discussions/26313
I have two webapps - "manager" and "viewer" - coded in separate VSCode projects. These are deployed to a common Firebase project where they share a common database. The "manager" webapp is used to maintain the database and the "viewer" provides public read-only access.
To create the "page" structure I have added a robocopy to React's build script for each VSCode project to produce a structured "mybuild" folder with the page subfolder within it. Firebase.json's "public": setting is then used to deploy from "mybuild".
Individually the two pages work fine, but each deployment overrides the functionality of the other. So, following the deployment of "manager", webapp/viewer returns a 404 (not found) error and vice versa.
To cut a long story short, the only way I've found around this is to manually copy the results of a deployment for one project into the "mybuild" folder of the other and then deploy from this. But this is no way to proceed.
I think I've taken a wrong turn somewhere here. Can anyone suggest the correct "firebase solution" for this requirement? In the longer term I'd like the viewer webapp to be available at the root of some user-friendly "appurl" while the manager is accessed via "appurl/manager", but other arrangements would be acceptable. The main issue right now is finding a simple way of maintaining the arrangement.
I needed to fix this fast, so here's my own answer to my question.
It seems that when you deploy a project, Firebase replaces the current public folder for your URL with the content of whatever folder is specified in your firebase.json. So I decided I had to accept that whenever either of my projects is deployed, it must deploy from a "composite" folder containing the build files for the other project as well as its own.
That being the case, it seemed I was on the right lines with my "manual copy" approach and that what I now needed to do was simply to automate the arrangement.
Each of my projects now contains a script file with the following pattern:
npm run build
REM Copy this project's fresh build output into the shared composite folder
ROBOCOPY build ./composite/x /E
REM Copy the partner project's latest build output alongside it
ROBOCOPY ../y/build ./composite/y /E
firebase deploy --only hosting
In each script, x is the owner project and y is the other. Additionally, firebase.json in each project is edited to deploy from composite.
When I run the script for a project it first builds a composite folder combining the latest build state for both that project and its partner, and then deploys the pair.
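For reference, the firebase.json edit is just pointing hosting at the composite folder. A minimal sketch; any rewrites or headers you already have would stay alongside it:

{
  "hosting": {
    "public": "composite"
  }
}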
The final twist is to tell the React build process that the result will be deployed from a relative folder, so the build also needs to use relative references. I do this by adding a
"homepage": "https://mywebapp/x",
to the package.json for each project. Here, x is the name of the project.
I'm not able to convince myself that there's not something wrong with my whole approach to this issue, but at least I have a fix for the immediate problem.
When you run gcloud init, it creates a directory named "default" where it clones the sources.
Maybe a silly question, but why is it named "default"?
Is there a way to change the name or clone sources in the current directory (without creating a new one)?
The 'gcloud init' command currently only clones a single repo, which is named default. In the future you may be able to host multiple repos, each with its own name.
Also, we may add the ability to nicely import other assets into your project as well, which would not necessarily live in your repo.
So, the primary Google-hosted repository is one asset that is part of your local developer workspace, and since we intend to bring in more in the future, it is put in its own directory, 'default' (the name of that repo), so that it does not conflict with future assets.
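As for cloning under a different name: you can also clone the repository yourself instead of relying on 'gcloud init'. A sketch, assuming the repo is still named default and that your gcloud version provides the source repos commands:

# Clone the Cloud Source Repository "default" into a directory name of your choice
gcloud source repos clone default my-sources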
I am using the "VCS Trigger" trigger: "Triggers one build per each VCS check-in".
I have two Build steps:
One that builds the .sln file
Another that copies files to the destination webroot.
Is there a way to configure TeamCity so that it only copies to the destination webroot the files that were part of the commit that triggered the build process?
I would not recommend that: a .NET solution is built as a unit, so files from one build may not work with files from another build. At least that is my experience...