Pull request conflicts in Bitbucket

I have created a pull request in Bitbucket and there are conflicts in some of the files, so I cannot merge them into the main repository. How can I resolve these conflicts?

Which code editor are you using?
Most have a special view or mode for managing conflicts, for example VS Code with the GitLens extension.
There you can see both versions side by side, edit your code, and commit it again.
Afterwards you can retry the merge.
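If you prefer the command line, the usual fix is to merge the target branch into your own branch locally, resolve the conflicts there, and push; Bitbucket then updates the pull request on its own. A minimal sketch, assuming a feature branch named my-feature with a pull request against main:
git checkout my-feature
git pull origin main     # the conflicts now show up locally
# edit each conflicted file and keep what you need
git add .
git commit               # concludes the merge
git push origin my-feature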

Related

How to build an Alexa skill with multiple developers?

I'm struggling to handle the pipeline building an Alexa skill across several developers and existing docs just aren't cutting it.
We have four developers, and when we check our code into our git repo, check out new branches, and so forth, we're continually overwriting our .ask/config and skill.json files.
How do we set this up to avoid overwriting? Ideally we're all building towards the same Alexa skill but we'd each like to test in our own instance -- separate skills and separate lambda functions.
As soon as I grab another developer's branch, I lose my necessary config and skill files.
My .gitignore lists these files, but since they're already checked in they continue to be tracked.
How do I handle multiple developers?
I see several problems here.
First of all, clean up your repo: make sure that all developers have a .ask/* entry in their .gitignore files and that the .ask directory is removed from the origin.
To solve the overwriting problem, you can create a template-skill.json with placeholders for the Lambda ARNs and everything else that differs per developer. Then, before ask deploy, create the valid skill.json file by running a script that replaces the placeholders in the template JSON with your data (kept in another gitignored file).
Set up the same thing in your CI instance with configurations for the different environments.
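For example, the replacement script can be as simple as this sketch (the {{LAMBDA_ARN}} placeholder and the gitignored my-env file are illustrative names, not ASK conventions):
# my-env (gitignored) holds each developer's values, e.g.:
# LAMBDA_ARN=arn:aws:lambda:us-east-1:123456789012:function:my-skill-dev
. my-env
sed "s|{{LAMBDA_ARN}}|$LAMBDA_ARN|g" template-skill.json > skill.json
ask deploy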

"Detected resolved migration not applied to database" in Flyway

We are using Flyway to manage database schema versions, and we are facing a problem. Since we work as a team and use git for source control, there are cases where different people update the database schema in their own local repos. When that happens, we get:
Detected resolved migration not applied to database: 2016.03.17.16.46
The version "2016.03.17.16.46" was added by another person, and I have already applied migrations with timestamps later than that. When this happens, we have to drop all the database tables and create them again. We have tried setting validateOnMigrate to false and running flywayClean, but nothing helped. Is there another way to fix this?
The migration option outOfOrder is your friend here. Set it to true to allow inserting those migrations after the fact.
On the command line, run:
flyway -outOfOrder=true migrate
Or if you use the Maven plugin:
mvn -Dflyway.outOfOrder=true flyway:migrate
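If you keep your configuration in a flyway.conf file instead, the equivalent entry is:
flyway.outOfOrder=true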
I faced a similar problem when switching from one git branch to another and then running flyway:migrate.
For example, while on branch 'release_4.6.0' I didn't have the migrations from branch 'release_4.7.0' on my local machine, so I got the following error:
FlywayException: Validate failed: Detected applied migration not resolved locally.
The solution that worked for me was to set the ignoreMissingMigrations Flyway option to true.
With Maven it looks like this:
mvn flyway:migrate -Dflyway.ignoreMissingMigrations=true
This may not be an exact answer to the question, but it can be helpful for those who face the same problem I did.
Here you can find more details:
https://flywaydb.org/documentation/configuration/parameters/ignoreMissingMigrations
You can also put it in your application.properties file if you want to apply the migrations when starting up the app:
spring.flyway.out-of-order=true
If you are using Spring Boot, just add
spring.flyway.ignore-missing-migrations=true
to your properties file. This tells Flyway to ignore migrations that were applied to the database but can no longer be resolved locally.
outOfOrder did not fix the problem for us.
Two migrations had slipped into a deployment before it was rolled back, so we added those migrations back in and reverted their changes in a further migration.
That worked 🤷
In my case, I just renamed my migration file to some other name and then renamed it back, purely to update the file's modification date. And it worked.
In my case, there was a row with version 319 in the database, but the corresponding file for 319 had been renamed to 330, so Flyway's schema history could not find a matching file. Deleting the row from the database solved the problem.
spring.flyway.ignore-missing-migrations=true has since been deprecated.
Following the docs, I advise using spring.flyway.ignore-migration-patterns=*:missing instead. Or in YAML:
spring:
  flyway:
    ignore-migration-patterns:
      - "*:missing"

How to properly merge CEF 2623 into 2454-based project

I have a CEF-2454-based project that I wish to upgrade to 2623. However, I have made some changes to libcef that the project needs, and I want to incorporate all the changes made in 2623 without discarding my own. This raises some questions:
What git merge strategy should I employ?
Can I build in my old 2454 directory, or do I need to merge and then build from scratch?
Should I merge 2526 first and then 2623, or can I merge 2623 directly?
What is the proper way to incorporate a new CEF release into my own project?
The typical way is to replay your changes on top of the new CEF version; unfortunately, all other methods are more difficult.
You can also take a diff between your changes and the original CEF 2454 and try to apply that patch to 2623.
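For example, if your modifications live on a branch created from the 2454 sources, a rebase transplants exactly your own commits onto the new release. A minimal sketch (the remote name upstream and branch names upstream/2454, upstream/2623, and my-changes are assumptions about your setup):
git fetch upstream
git rebase --onto upstream/2623 upstream/2454 my-changes
# resolve conflicts file by file as they come up, then:
git rebase --continue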

Clone, Pull, Commit & Push Using libgit2 C

I am working on Windows 7 with libgit2 version 0.23.0, using it to clone a private repository. I have read lots of questions and answers on Stack Overflow, as well as the samples and GitHub issues/fixes available for the libgit2 library.
I am able to clone the private repository by setting credentials with the git_cred_userpass_plaintext_new() method; this successfully pulls all the files from the remote to my local disk. But if any change is then made on the remote repository, I have trouble pulling it down. I am using fetch.c; it executes git_remote_fetch() without any error, and it
creates a new FETCH_HEAD file containing the new OID under the .git folder, and
downloads new pack files (.idx & .pack) under the .git/objects/pack folder.
After all this, the changed or updated files are not copied into my local working tree, and I am not sure which step I am skipping. I have also tried to commit & push files from my local repo to the remote repo, but I could not find a good example of that. The samples & API are a little confusing for me.
Please suggest, if anybody can help, a simple example of how to:
Pull from the origin or master branch
Commit to the local repository
Push to the origin or master branch
using libgit2.
Thanks in advance.
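The missing piece on the pull side is that git_remote_fetch() only downloads objects and updates FETCH_HEAD; nothing touches the working directory until you merge or fast-forward. A minimal fast-forward sketch in libgit2 C, with error handling and merge analysis omitted, assuming an already-opened git_repository *repo, a local branch refs/heads/master whose history can simply fast-forward, and 0.23-era signatures (some of these calls changed shape in later releases):
git_oid oid;
git_reference *master = NULL, *moved = NULL;
git_object *target = NULL;
git_checkout_options co = GIT_CHECKOUT_OPTIONS_INIT;

/* FETCH_HEAD now points at the commit the fetch brought in */
git_reference_name_to_id(&oid, repo, "FETCH_HEAD");

/* fast-forward the local branch ref to that commit */
git_reference_lookup(&master, repo, "refs/heads/master");
git_reference_set_target(&moved, master, &oid, "pull: fast-forward");

/* make the working directory match the new commit */
git_object_lookup(&target, repo, &oid, GIT_OBJ_COMMIT);
co.checkout_strategy = GIT_CHECKOUT_SAFE;
git_checkout_tree(repo, target, &co);
git_repository_set_head(repo, "refs/heads/master");
/* in real code, check every return value and free the refs/objects */
Commit and push follow a similar pattern with git_commit_create() and git_remote_push().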

How to merge Drupal database changes

We currently use an SVN repository to keep everyone's local environments up to date. However, Drupal website development is somewhat trickier in that custom code you write (for instance, PHP code written for a node body) is stored in the DB, so the changes aren't recognized by the SVN working copy.
A couple of developers are presently working on the same area of a Drupal site, but we're uncertain how best to merge our local Drupal database changes together. Committing patches of database dumps seems clumsy at best and is most likely inefficient and error-prone for this purpose.
Any suggestions about how to approach this issue are appreciated!
Unfortunately, database deployment/update is one of Drupal's weak spots. See this question & answers as well as this one for some suggestions on how to deal with it.
As for CCK, you could find some hints here.
As for PHP code in content, I agree with googletorp that you should avoid doing this. However, if for some reason you absolutely have to, you could try to reduce the code to a simple function call. That way the function itself lives in a module (and is tracked via SVN). But then you are only a small step away from removing the need for the inline code anyway...
If you are putting PHP code into your database, then you are doing it wrong. Some things do live in the database, like Views and CCK fields, plus some settings. But if you put PHP code inside a node body, you are creating a big code-maintenance problem. You should really use the API and hooks instead. Create modules instead of ugly hacks with eval() etc.
All that has been said above is true and good advice. To answer your practical question, there are a number of recent modules that you could use to transport the changes made by the various developers.
The "Features" modules is a cure the the described issue of Drupal often providing nice features, albeit storing lots of configs and structure in the DB. This module enables you to capture a feature and output it as a pseudo-module (qualifies as a module with .info and code-files and all). Here is how it works:
Select functionality/feature to export
The module analyses the modules, files, DB content that is required to rebuild that feature elsewhere
The module creates a pseudo-module that contains the instructions in #3 and outputs everything (even SQL to rebuild the stuff in the DB) into a module package (as well as sets dependencies for other modules required)
Install the pseudo-module on your new site and enable it
The pseudo-module replicates the feature you exported rebuilding DB data and all
And you can tell your boss you did it all manually with razor focus to avoid even 1 error ;)
I hope this helps - http://drupal.org/project/features
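If you use Drush, Features also wires into it; assuming the module's Drush integration is enabled, the round-trip looks roughly like this (the feature name is illustrative):
drush features-export my_feature   # capture the selected components as a module
drush features-revert my_feature   # push the exported config back into the DB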
By committing patches of database dumps, do you mean taking an entire extract of the DB and committing it after each change?
How about a master copy of the database? Extract all tables, views, stored procedures, etc. into individual files, put them into SVN, and do your merge edits on the individual objects?
