In previous versions of Capistrano, there was a task/command with which I could easily deploy just one file from my local machine to the server; I didn't need to first commit the change, push it to the git repo and then deploy the whole repo.
With the latest version 3 I can't find a similar command.
I was searching for the same feature that existed in Capistrano 2.
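If memory serves, the Capistrano 2 task was deploy:upload, invoked roughly like this (the file path is just an example):
cap deploy:upload FILES="app/views/home/index.html.erb"
As far as I can tell, Capistrano 3 dropped that task.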
Reiknistofa made a gem for this. Have a look at https://github.com/Reiknistofa/capistrano-upload
I just stumbled on the following issue in App Engine Standard with a Python 2.7 environment.
So I deployed to my test environment yesterday, and today I had the idea of updating one of my applications. I run my normal "gcloud deploy ..." and it says it is updating 3 files, while I actually changed a bunch of files. Basically my deploy command says the files have not changed.
After some searching around I found that files are uploaded to a staging area and checked against a hash. Is it safe to clear this staging area, or does the gcloud command have some hidden force option to actually force the files to be renewed?
The gcloud command has not given any errors whatsoever, nor was it aborted at any point of the deployment. So I have no errors, but my files aren't uploaded at all. I also tried modifying a lot of files, and nothing changed.
I never use the promote option, for those rare cases where a deploy might fail.
Has anyone encountered this before, or does anyone have a solution to this issue?
I was also encountering this, and the only solution I could find was to deploy to a new bucket. To do this:
Go to https://console.cloud.google.com/storage/browser and create a new bucket.
Redeploy using gcloud app deploy --bucket gs://your-new-bucket (change your-new-bucket to the actual bucket name).
This uploaded all the files again and created a new version in App Engine.
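If you prefer the command line to the console, creating the bucket and redeploying would look roughly like this (the bucket name is just a placeholder):
gsutil mb gs://your-new-bucket
gcloud app deploy --bucket gs://your-new-bucket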
You can go to https://console.cloud.google.com/storage/browser and delete your application's bucket; on the next deploy it will be recreated. Additionally, you can use the parameter --verbosity=info to check which files are being uploaded.
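A rough command-line equivalent, assuming the default staging bucket name of staging.<project-id>.appspot.com:
gsutil -m rm -r gs://staging.your-project-id.appspot.com
gcloud app deploy --verbosity=info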
I cannot get my server code to update. I'm running a PHP instance on GAE and no matter what I do, the files won't update. In the source code view, I can see the files have updated, but when I attempt to access the updated file, I'm still viewing the old version. I've also attempted disconnecting my Bitbucket repo and using the appcfg.py update project-name command, but the files aren't refreshing when I attempt to access them. I'm not sure what to do to force the changes to take place.
My app.yaml contains the following code:
- url: /(.+\.php)$
script: \1
secure: always
So the files should be getting read, right?
I was able to figure out what went wrong. I downloaded my code using appcfg.py download_app -A <your_app_id> -V <your_app_version> <output-dir> and noticed that I was downloading the old versions of the files (and wasn't downloading the new files). Turns out using source control within GAE will upload new code, but won't deploy it. I attempted to use appcfg.py update project-name one more time, but it didn't work. Turns out I didn't disconnect my Bitbucket account (could have sworn that I did...). Once disconnected, I was able to update the project using appcfg.py update project-name. While I was figuring this out, I reached out to Google support and received this message:
To use the feature of push-to-deploy you need to spin up a Jenkins
instance on GCE (Google Compute Engine), and then it will take the
updated code and execute it in the environment. Go through [1] for how
to enable the Jenkins instance and its configuration for the
different runtimes.
In your issue, you just mirrored the code from Bitbucket to Cloud
Repository, as it is just doing the version control for the
application, not executing the application. So basically you have
the option of using a Jenkins instance as I described above to test the
different versions of the code, or using the appcfg.py update command
from your local repository.
I haven't attempted to install and use Jenkins (since I fixed the issue by disconnecting my Bitbucket account), but it may help others who have run into this problem.
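For anyone hitting the same thing, the rough sequence that worked for me looked like this (the app ID, version and directory names are placeholders):
appcfg.py download_app -A your-app-id -V your-app-version ./deployed-copy
# compare ./deployed-copy with your local code to confirm what is actually live,
# then disconnect the Bitbucket mirror and redeploy from your local repository:
appcfg.py update ./your-project-dir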
I need a way to figure out which files are left over after upgrading Moodle to a newer version.
I have:
Old version + many plugins
New version
Merged version
In the new version some files have been removed but they still exist in the merged version.
I could start with the new version and copy all the plugins across, but many are in different sub-folders, which would take too long.
Is there a quick way to list or delete these left over files?
I would create a clone of Moodle in a separate folder.
git clone git://git.moodle.org/moodle.git moodleclone
Then check the version of Moodle in /version.php in your code - look for $release = '2.x.x'. Then check out the exact version in the Moodle clone:
cd moodleclone
git checkout v2.x.x
Then use Meld to compare the 2 folders. http://meldmerge.org/
meld ../moodleclone ../yourmoodleversion
This will then show any code differences between the two folders. You can see if it's an official Moodle plugin or one that has been added.
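If you prefer the command line to Meld, a recursive diff will also list the files that only exist in your merged version (same folder names as above):
diff -rq ../moodleclone ../yourmoodleversion | grep 'Only in ../yourmoodleversion'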
It would be better if you use the uninstall plugin option via Site administration -> Plugins, because that should remove any related data from your database too. You might also want to do a clean install in a new database using the Moodle clone, then dump and compare the database structure from the clone and from your code to see if there are any database changes.
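A minimal sketch of that schema comparison, assuming MySQL and placeholder database names:
mysqldump --no-data -u root -p clean_moodle > clean_schema.sql
mysqldump --no-data -u root -p your_moodle > your_schema.sql
diff clean_schema.sql your_schema.sql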
I have been trying out Codeship and Heroku for continuous deployment of an AngularJS application I am writing at the moment. The app was created using Yeoman and uses bower and grunt. Initially this seemed like a really good setup: Codeship was free to use, I was quickly able to configure it to build my AngularJS project, and it offered the ability to add a deployment step after the build. There were even many PaaS providers to choose from (Heroku, S3, Google App Engine etc.). However, I seem to have become a bit stuck getting the app to run on Heroku.
The problem started from the fact that all the documentation suggested I remove the /dist path from my .gitignore so that this directory is published to Heroku post-build. This was mainly in documentation about publishing to Heroku from a local machine, but I figure that is all Codeship is doing under the hood anyway. I didn't want to do this, as I don't believe I should be checking build output into source control; the /dist folder was added to .gitignore for a good reason. Furthermore, it rather defeats the point of having a CI server, as I might as well just push the latest build from my machine.
After some more digging around I found out that I could add a postinstall step to my package.json file, such as bower install && grunt build, which would re-run the build on Heroku and hence repopulate all the bower dependencies (something else they wanted me to check in to source control!) and the dist directory.
After giving this a try it became apparent that I would need to add bower and grunt as dependencies in package.json, which meant moving them out of devDependencies, which is where they belong!
So I now seem to be stuck. All I want to do is publish my build artefacts (/dist), the dependencies (/bower_components) and the server.js file that will run the site. Does anyone know how to achieve this with Heroku and Codeship? Alternatively, has anyone had any success with this using different tools? I am looking for something that is free, and I am willing to accept that it will not be production stable (won't scale to multiple servers etc.), but that is fine for now as all I want to do is continuously deploy the app for internal testing and to share the output with non-technical members of my team so we can discuss features we'd like to prioritise.
Any advice would be greatly appreciated.
Thanks
Ahoy, Marko from the Codeship crew here. Did you already send us an in-app message about this? I'm sure we can get your application building on Codeship and deploying to Heroku successfully.
As a very short answer, the easiest way to get this running would be to add both bower and grunt to your dependencies in the package.json. Another possibility would be to look for a custom buildpack with both tools already installed.
And finally you could also run the tools on Codeship, add the newly installed files to the repository, commit the changes and push this new commit to Heroku. If you want to use this, you'd very probably need to force push the changes though.
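A rough sketch of that last approach, assuming the build has already produced dist/ and bower_components/ and that a heroku git remote and a git identity are configured on the build machine (note the -f, since these paths are in .gitignore):
git add -f dist bower_components
git commit -m "Add build output for deployment"
git push --force heroku HEAD:master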
Feel free to reach out to me via the in-app messenger (lower right corner of the site) and I'd be happy to help you get this working!
I found two ways to get this to work.
Heroku Node Custom Buildpack
Use the mbuchetics Heroku buildpack. This works by basically re-building the app once it has been pushed to Heroku.
There were still a few tricks I had to employ to make this work. In Gruntfile.js, two new tasks called heroku:production and heroku:development needed to be configured; this is what the buildpack executes to build the app. I initially just aliased the main build task, but found that either the buildpack or Heroku had a problem running jshint, so in the end I copied the build task and took out the parts that I didn't need.
Also, in package.json I had to add this:
"scripts": {
"postinstall": "bower cache clean && bower install"
}
This made sure the bower_components were available in Heroku.
Pros
This allowed me to keep the .gitignore file intact, so that the 'binaries' in the dist directory and the dependencies in the bower_components directory were not committed into source control.
Cons
This basically re-builds the app once it is on Heroku, and I generally prefer to use the same 'binaries' throughout the entire build and deployment pipeline. That way I know that the code that was built is the same code that was tested and the same code that was deployed.
It also slows down the deployment, as you have to wait for the app to build twice.
CodeShip Custom Script Deployment
Not being satisfied with building my app twice, I tried using a Custom Script pipeline in CodeShip instead of the pre-existing Heroku one. The script basically modifies the .gitignore file to allow the dist folder to be committed, and then pushes to the Heroku remote (which leaves the code on the origin remote unaffected by the change).
I ended up with the following bash script:
#!/bin/bash
gitRemoteName="heroku_$APP_NAME"
gitRemoteUrl="git@heroku.com:$APP_NAME.git"
# Configure git remote
git config --global user.email "your-email@example.com"
git config --global user.name "Build"
git remote add $gitRemoteName $gitRemoteUrl
# Allow dist to be pushed to heroku remote repo
echo '!dist' >> .gitignore
# Also make sure any other exclusions don't apply to that directory
echo '!dist/*' >> .gitignore
# Commit build output
git add -A .
herokuCommitMessage="Build $CI_BUILD_NUMBER for branch $CI_BRANCH. Committed by $CI_COMMITTER_NAME. Commit hash $CI_COMMIT_ID"
echo "$herokuCommitMessage"
git commit -m "$herokuCommitMessage"
# Must merge the last build from the Heroku remote, but always choose our new files in the merge
git fetch $gitRemoteName
git merge "$gitRemoteName/master" -X ours -m "Merge last build and overwrite with new build"
# Branch is in detached mode so must reference the commit hash to push
git push $gitRemoteName $(git rev-parse HEAD):refs/heads/master
Pros
This only requires a single build of the app and deploys the same binaries that were tested during the test phase.
Cons
I've used this script quite a few times now and it seems relatively stable. However, one issue I know of is that when a new pipeline is created there is no code on the master branch yet, so the script fails when it tries to merge from the Heroku remote. At the moment I get around this by doing an initial push of the master branch to Heroku before kicking off a build, but I imagine there is probably a better Git command I could run along the lines of 'only merge this branch if it already exists'.
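One way to express "only merge if the branch already exists", using the same remote name as in the script above, would be to guard the merge with git ls-remote:
if git ls-remote --exit-code --heads "$gitRemoteName" master > /dev/null; then
    git fetch $gitRemoteName
    git merge "$gitRemoteName/master" -X ours -m "Merge last build and overwrite with new build"
fi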
I have a Drupal installation running on OpenShift. I have been installing all modules and themes using git (command line). However, I attempted to install the modules directly and the installation worked.
The problem that I now face is that when I pull, all I get is the modules and themes I had installed using the command line, and not the ones that I installed 'directly'.
Any one with a heads up on this?
OpenShift runs your code from a checkout of the git repository located at ~/app-root/repo within your gear. When you upload files through Drupal (instead of through the git repository), the modules and themes are installed into this checked-out directory and are not tracked in git.
If you are using a scaled application, I would recommend that you copy the modules/themes into your repository and check them into git instead of using the Drupal install method.
For now, to retrieve all your files you can try the rhc export command.
Thanks to @kraman above, I got a hint of what to do.
I ran rhc snapshot save -a appname and got all the files. At least I know where to start from, since I can now access the files.
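In case it helps anyone else, the snapshot arrives as a tarball, so a rough way to get the stray modules back under git looks like this (the paths inside the snapshot and the module name are assumptions; adjust to wherever your modules actually live):
tar -xzf appname.tar.gz
# copy the module that was installed through the Drupal UI into your local clone (hypothetical paths)
cp -r app-root/repo/sites/all/modules/some_module ~/mydrupalclone/sites/all/modules/
cd ~/mydrupalclone
git add sites/all/modules/some_module
git commit -m "Track module installed through the Drupal UI"
git push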
A word of caution, though, for Drupal users on OpenShift: just use git or SFTP for pushing files and save yourself the headache.