In Gerrit, can I somehow get all the commits that have not been reviewed/approved from the repo?

I am working in a distributed team, and it is not possible to have each commit reviewed and approved quickly because the team members who are supposed to review are located overseas.
The team working at my end pushes the code to Gerrit for review. Let's say user1 pushed 5 commits and user2 pushed 10 commits.
My question is: being user3, can I get all these 15 commits without pulling each commit individually using its commit ID?

Yes, use the "Search" field.
For example, to find all open changes from user1 or user2:
(owner:user1 OR owner:user2) AND status:open
Take a look at the Gerrit documentation on search operators to learn more about the search feature.
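If you prefer the command line, here is a minimal sketch using Gerrit's SSH query interface (the host name and the standard port 29418 are assumptions here):

ssh -p 29418 gerrit.example.com gerrit query \
  "status:open (owner:user1 OR owner:user2)" --current-patch-set

Each result includes a ref such as refs/changes/34/1234/1 (an example value); assuming your remote points at Gerrit, you can fetch and check out any of them directly:

git fetch origin refs/changes/34/1234/1 && git checkout FETCH_HEAD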


Can I track permission set assignment with sfdx or other tools like Gearset?

I am investigating whether it is at all possible to track assigned permission sets, profiles, and roles with the sfdx CLI tools. So far my findings are that permission sets and profiles are trackable, as they get converted to source, but it is up to the administrator to assign profiles / permission sets after deployment.
Can anyone confirm this and point me to some documentation on the limits of what the sfdx CLI can pull?
There's https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_unsupported_types.htm, but that's not exactly what you are asking; the question is a bit confusing.
The PermissionSetAssignment table is, well, a table. Normal data like Account, not metadata. The same goes for Users, their Roles, group memberships... You wouldn't typically store it in version control unless you're after some "golden copy" test dataset to load into sandboxes/scratch orgs.
SF tried once to be smart about deploying queue memberships, record folder permissions, and approval process assignees, and the results are... meh. You inspect the XML file and see usernames with a sandbox suffix in them that SF tries to magically match during deployment to the target org. It's "fun" when the developer who created the report folder doesn't exist in production or the username doesn't match (the creator automatically gets added as Manager; it's an extra step to remember and maybe remove that). It's even more "fun" when you keep working on the project in different sandboxes and git keeps reporting changes of the suffix from ".dev" to ".proto" or ".hotfix"...
Try to rethink your question: what are you after? You can implement single sign-on with a piece of Apex code running on login (or look into Identity Connect) so people's permissions are synced based on their role in Active Directory / Google Workspace / what have you. Or, if you really want, you should be able to periodically query & upsert back a CSV of PermissionSetAssignments. Or have a script during deployment run a bunch of permset assign commands, as sketched below.
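A minimal sketch of those last two ideas, assuming the sfdx CLI with an org alias myOrg, a permission set Sales_User, and a username john.doe@example.com (all placeholders):

# Export the current assignments as a CSV "golden copy":
sfdx force:data:soql:query -u myOrg -r csv \
  -q "SELECT AssigneeId, PermissionSetId FROM PermissionSetAssignment" \
  > permset_assignments.csv

# During deployment, assign a permission set to a named user:
sfdx force:user:permset:assign -u myOrg -n Sales_User -o john.doe@example.com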
What will you do when John Doe moves from the Marketing department to Sales? Who's right, SF or the git project? Would it really need a deployment to change that?

How to handle multiple db alter scripts coming from different Git feature branches?

A bit complex to describe, but I'll do my best. Basically, we're using the Git workflow, meaning we have the following branches:
production, which is the live branch. Everything in production is running in the live web environment.
integration, in which all new functionality is integrated. This branch is merged to production every week.
one or more feature branches, in which developers or development teams develop new functionality. Once done, developers merge their feature branch to integration.
So, nothing really complex here. But since our application is a web application running against a MySQL database, new functionality often requires changes to the database schema. To automate this, we're using dbdeploy, which lets us create numbered alter scripts, e.g. 00001.sql, 00002.sql, etc. Upon merging to the integration branch, dbdeploy checks which alter scripts have a higher number than the latest one executed on that specific database, and executes those.
Now assume the following.
- integration has alter scripts up until 00200.sql. All of these have been executed on the integration database.
- developer John has a feature branch featureX, which was created when integration still had 00199.sql as its highest alter script.
- John creates 00200.sql because of some required db schema changes.
Now, at some point John will merge his modifications back to the integration branch. He will get a merge conflict and see that his 00200.sql already exists in integration. This means he needs to open the conflicting file, extract his contents, reset the file back to 'mine' (the original state as in integration), and put his own contents in a new file.
Since we're working with ten developers, we hit this situation daily. And while we understand the reasons behind it, it's sometimes very cumbersome. John renames his script, makes a merge commit to integration, pushes the changes upstream, only to see that somebody else has already created a 00201.sql, requiring John to go through the process again.
Surely there must be more teams using the Git workflow and using a database change management tool for automating database schema changes?
So, in short, my questions are:
How do we automate database schema changes when working on different feature branches that operate on different instances of the same db?
How do we prevent constant merge conflicts while still keeping a fixed order for the executed alter scripts? E.g. 00199.sql must be executed before 00200.sql, because 00200.sql might depend on something done in 00199.sql.
Any other tips are most welcome, of course.
Rails used to do this, with exactly the problems you describe. They changed to the following scheme: the files (Rails calls them migrations) are labelled with the UTC timestamp of when the file was created, e.g.
20140723069701_add_foo_to_bar
(The second part of the name doesn't contribute to the ordering).
Rails records the timestamps of all the migrations that have been run. When you ask it to run pending migrations, it selects all the migration files whose timestamps aren't in the list of already-run migrations and runs them in numerical order.
You'll no longer get merge conflicts unless two people create a migration at exactly the same point in time.
Files still get executed in the order you wrote them, but possibly interleaved with someone else's work. In theory you can still have problems - e.g. developer A decides to rename a table that I had decided to add a column to. But that is much less common than two developers simply both making changes to the db, and you would have problems even without considering the schema changes: presumably I have just written code that queries a no-longer-existent table. At some point, developers working on related stuff will have to talk to each other!
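A minimal sketch of the same scheme outside Rails, assuming a migrations/ directory, a MySQL database mydb, and a plain-text .applied ledger (all placeholders):

# Create a new migration named with the current UTC timestamp:
touch "migrations/$(date -u +%Y%m%d%H%M%S)_add_foo_to_bar.sql"

# Apply anything whose timestamp isn't in the ledger yet, in timestamp order:
touch .applied
for f in migrations/*.sql; do
  ts=$(basename "$f" | cut -d_ -f1)
  grep -qx "$ts" .applied || { mysql mydb < "$f" && echo "$ts" >> .applied; }
done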
A few suggestions:
1 - Have a look at Liquibase. Each version gets a file that references the changes that need to happen, and the change files can be named using a meaningful string rather than by number.
2 - Have a central location for handing out the next available number, so two developers never claim the same one.
I've used Liquibase in the past, pretty successfully, and we didn't have the problem you describe.
As Frederick Cheung suggested, use timestamps rather than a serial number. Applying schema changes in timestamp order should work, because a schema change can only depend on changes of an earlier date.
In addition, include the name of the developer in the name of the alter script. This will prevent merge conflicts entirely.
Your merge hook should then just look for newly added alter scripts (present in the merged branch but not in the upstream branch) and execute them in timestamp order, as in the sketch below.
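One way to implement that lookup, assuming the branch names from the question and a db/alter/ directory (a placeholder), is git's added-files diff filter:

# List alter scripts added on featureX but absent from integration,
# then run them in lexical (i.e. timestamp) order:
git diff --name-only --diff-filter=A integration...featureX -- db/alter/ \
  | sort | while read -r script; do
      mysql mydb < "$script"
    done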
I've used two different approaches to overcome your problem in the past.
The first is to use an ORM which can handle the schema updates for you.
The other approach is to create a script which incrementally builds the database schema. If a developer needs an additional column in a table, he adds the appropriate SQL statement after the table is created. Likewise, if he needs a new table, he adds the SQL statement for that. Merging then becomes a question of making sure things happen in the correct order. This is basically what the database update process in an ORM does. Such a script needs to be coded very defensively, and each statement should check whether its prerequisites exist.
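A defensive sketch of that idea for MySQL, where mydb, customer, and email are all assumed names - each statement checks its prerequisites, so re-running the script is safe:

mysql mydb <<'SQL'
CREATE TABLE IF NOT EXISTS customer (
  id INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(100) NOT NULL
);
-- Older MySQL has no ADD COLUMN IF NOT EXISTS, so consult the catalog:
SET @have_email := (SELECT COUNT(*) FROM information_schema.COLUMNS
  WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'customer'
    AND COLUMN_NAME = 'email');
SET @ddl := IF(@have_email = 0,
  'ALTER TABLE customer ADD COLUMN email VARCHAR(255)', 'SELECT 1');
PREPARE stmt FROM @ddl; EXECUTE stmt; DEALLOCATE PREPARE stmt;
SQL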
For the dbvc command-line tool, I use git log to determine the order of the update scripts:
git log -c --no-merges --pretty="format:" --name-status -p dev/db/updates/ | \
grep '^A' | awk '{print $2}' | tac
In this case, the order of your commits determines the sequence in which the updates are run, which is most likely what you want.
If you run git merge B, the updates from master will be run first and then those from B.
If you run git rebase B, the updates from B will run first and then those from master.

What are the significant database changes in Moodle from 1.9.18 to 1.9.19?

I have coded an external CMS for Moodle 1.9.18 to register users with packages of courses.
It works as follows:
I create three kinds of users - students, non-editing teachers, and editing teachers - depending on the course they're going to be enrolled in.
I create groups in order to keep the student users isolated with their teachers, so the teachers can evaluate them.
I register the group to the courses the users have access to.
My question popped up when we realized Moodle 1.9.18 can't handle the number of courses we have created, so we need to upgrade it.
Before getting into this, I wanted to ask someone who knows Moodle's databases better.
Eventually I'll have to test the upgrade against the external CMS, but any heads-up before I get into it would be great.
There are none.
In Moodle we try to avoid making database schema changes on stable releases, so it would be very unusual to see differences. To confirm this statement, I looked for differences in the install.xml files (where Moodle stores its schema definitions) between the releases, and you'll note there are none:
git diff v1.9.18..v1.9.19 --name-only
enrol/mnet/enrol.php
lib/environmentlib.php
mod/data/lib.php
mod/data/view.php
mod/hotpot/lang/en_utf8/help/hotpot/addquizchain.html
mod/hotpot/lang/en_utf8/help/hotpot/analysistable.html
mod/hotpot/lang/en_utf8/help/hotpot/clickreporting.html
mod/hotpot/lang/en_utf8/help/hotpot/clickreporttable.html
mod/hotpot/lang/en_utf8/help/hotpot/forceplugins.html
mod/hotpot/lang/en_utf8/help/hotpot/index.html
mod/hotpot/lang/en_utf8/help/hotpot/mediaplayers.html
mod/hotpot/lang/en_utf8/help/hotpot/mods.html
mod/hotpot/lang/en_utf8/help/hotpot/navigation.html
mod/hotpot/lang/en_utf8/help/hotpot/outputformat.html
mod/hotpot/lang/en_utf8/help/hotpot/removegradeitem.html
mod/hotpot/lang/en_utf8/help/hotpot/reportcontent.html
mod/hotpot/lang/en_utf8/help/hotpot/reportformat.html
mod/hotpot/lang/en_utf8/help/hotpot/responsestable.html
mod/hotpot/lang/en_utf8/help/hotpot/shownextquiz.html
mod/hotpot/lang/en_utf8/help/hotpot/studentfeedback.html
mod/hotpot/lang/en_utf8/help/hotpot/updatequizchain.html
mod/hotpot/lang/en_utf8/hotpot.php
mod/hotpot/lib.php
mod/hotpot/restorelib.php
mod/hotpot/view.php
question/format/hotpot/format.php
version.php
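To check the schema files specifically, you can restrict the diff to install.xml paths (a sketch, assuming both release tags exist in your clone); an empty output confirms there are no schema changes:

git diff v1.9.18..v1.9.19 --name-only -- '*install.xml'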
I don't think there's any relevant code change either, since this just seems to be a bugfix release. You can find additional details on this page: Moodle 1.9.19 release notes.
The 2.x branch, on the contrary, has lots of changes under the hood. But how many courses are you going to create? Moodle is capable of holding a large number of courses (tens of thousands; have a look here).

Git, robots and diverging branches

I am trying to use git as something it wasn't made for - a database. Please feel free to tell me that this is a stupid idea.
Setup
One branch (let's call it robot) is updated automatically by a script on a daily basis. The data comes from another publicly available database.
Initially the master branch is the same as the robot branch.
Some of the data in the publicly available database is wrong, so I make a commit to the master branch and correct the error.
When the script detects any changes in the public database in the future, it adds those to the robot branch as new commits (there's one commit per file).
Keeping track of differences
Now, I've obviously lost the ability to do a fast-forward merge once I've modified the same file. But I could still cherry-pick the good changes in the robot branch and import them into the master branch. The problem is that this might get rather messy after a while, once almost all the files have diverged.
How can I keep track of the difference between the branches in a systematic way?
It sounds like you're looking for the git rebase command. It allows you to replay the changes you've made on your master branch on top of the new head of the robot branch:
git checkout master
git rebase robot
There may be conflicts if the database has been updated with a change to something you've also changed in master. You must resolve those conflicts manually.
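The daily cycle might then look like this (a sketch; branch names as in the question):

git checkout master
git rebase robot       # replay your corrections onto the new robot data
# If a public-database update collides with one of your corrections,
# edit the conflicted files, then:
git add -u
git rebase --continue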

When using Trac and SVN together, how will I know that a file is committed to solve a certain ticket?

For example, a file is modified for an enhancement ticket, and I want to associate the committed file with that ticket. When using Trac and SVN together, how will I know that a file was committed to solve a certain ticket? Is this possible?
Thank you.
As stated on the TracWiki, the intended workflow is:
- A Trac user begins work on a ticket.
- They obtain code from the version control system.
- After the work is completed, they commit their modifications to the version control repository.
- The user inserts the Trac ticket number into the commit message as a TracLink.
- Trac can now display the changeset for the ticket.
Where the TracLink is something like #1 or ticket:1, or even comment:1:ticket:2 when referring to a specific ticket comment.
If you miss creating the link when the commit is made, you can still create one in the ticket comments themselves using TracLinks such as: r2, r1:3, [1:3], log:#1:3, etc.
You can link to the revision when closing a ticket: e.g. r253.
And you can link to the ticket in the commit message: e.g. #7525.
Other than that, I doubt that much can be done.
Obviously you could parse the log message with an on-commit hook and send a notification of sorts regarding tickets of interest, but you'd need to have access to the server, I guess.
You may find the Trac post-commit hook useful. It allows you to close tickets using your commit log messages. See the script here.
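A bare-bones sketch of the on-commit parsing idea mentioned above (the log file path is a placeholder; svnlook ships with Subversion):

#!/bin/sh
# post-commit hook: $1 = repository path, $2 = committed revision
REPOS="$1"
REV="$2"
LOG=$(svnlook log -r "$REV" "$REPOS")
# Scan the log message for ticket references like #7525:
echo "$LOG" | grep -oE '#[0-9]+' | while read -r ticket; do
  echo "Revision $REV references ticket $ticket" >> /var/log/trac-commits.log
done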
