How might I remove old revisions from the Wagtail database? I see that every time I make a change the previous revision of the page is kept in the database ... very nice, but how do I "take out the trash?"
In other words: "okay, this page is now finished, and I don't wish to keep previous revisions of it anymore." You'd think this would be easy – but, where is it?
Revisions are stored in the wagtail.core.models.PageRevision model, which is the wagtailcore_pagerevision table in the database. To delete all revisions for a given page, you can run the following from ./manage.py shell:
from wagtail.core.models import PageRevision

# delete every stored revision for the page with id 123
PageRevision.objects.filter(page_id=123).delete()
Note that the 'save as draft' and 'submit for moderation' workflows also work by saving PageRevision entries, so you should only do this after the page has been published in the state that you want to keep.
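If you want to keep the page's current state but drop its history, a minimal sketch (assuming Wagtail 2.x, where Page exposes get_latest_revision() and a revisions related manager) would delete everything except the newest revision:

from wagtail.core.models import Page

page = Page.objects.get(id=123)
# the revision backing the page's current draft/live state
latest = page.get_latest_revision()
# drop every older revision, keeping only the latest
page.revisions.exclude(id=latest.id).delete()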
I am using Liquibase scripts with a CorDapp. Previously, the first version of the databaseChangeLog file had all the table creations in one single changeset; at a later point we split it into separate databaseChangeLog files, one per changeset.
Now the problem is that some production/testing environments already contain data created with the older script, but we want to use the new scripts.
The change looks like this:
Old: abc-master.xml included abc-init.xml (the usual way)
New: abc-master.xml includes abc-v1.xml, and
abc-v1.xml includes a table-v1.xml file for each table creation
The solution we were thinking of: create new tables with a slight name change, copy the data over from the old tables, then drop the old tables. That way we can remove the old tables and the old scripts (I assume).
Still, the DATABASECHANGELOG table will probably retain the old scripts' entries; would that be a problem?
Or is there a far better way to do this?
Many thanks.
I answered this also on the Liquibase forums, but I'll copy it here for other people.
The filename is part of the unique key on the databasechangelog table (ID/Author/Filename). So when you change the filename of a changeset that has already executed, it is now, in fact, a new changeset as far as Liquibase is concerned.
I normally recommend that my customers never manually update the databasechangelog table, but in this case I think it might be the best course of action for you. That way your new file structure is properly reflected in the databasechangelog table.
I would run an update-sql command with the new file structure against one of your databases where you have already executed the changesets. This will show you which changesets are pending, and also the filename values you need to update.
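As a rough illustration (the ID value here is a placeholder, not from the actual project), the manual fix would look something like:

-- repoint a changeset that moved from the old init file to a new file,
-- so Liquibase treats it as already executed
UPDATE DATABASECHANGELOG
   SET FILENAME = 'abc-v1.xml'
 WHERE FILENAME = 'abc-init.xml'
   AND ID = 'create-some-table';  -- repeat per changeset reported as pending

Run update-sql again afterwards to confirm nothing is still reported as pending.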
We are planning to go with
<preConditions onFail="MARK_RAN">
<not>
<tableExists tableName="MY_NEW_TABLE"/>
</not>
</preConditions>
for all the table-creation changesets in the new distributed structure. Our assumptions are:
We can keep only this new structure in code and remove the old INIT file.
For environments with existing data, even though these restructured changesets will be considered new changesets to run, the preconditions will prevent them from running.
For fresh DB deployments, it will work as expected, creating all the required tables.
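For illustration, one such guarded table-creation changeset might look like this (the id, author, and table definition are placeholders):

<changeSet id="create-my-new-table-v1" author="team">
    <preConditions onFail="MARK_RAN">
        <not>
            <tableExists tableName="MY_NEW_TABLE"/>
        </not>
    </preConditions>
    <createTable tableName="MY_NEW_TABLE">
        <column name="ID" type="BIGINT">
            <constraints primaryKey="true" nullable="false"/>
        </column>
    </createTable>
</changeSet>

On a database that already has MY_NEW_TABLE, the changeset is marked as ran without executing; on a fresh database it creates the table.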
I want to remove some columns from the "User_" table of the Liferay database (lportal), as I don't want to store users' last login IP addresses. I know about Monitoring in Liferay, but that can be turned off.
How can I stop Liferay from storing unnecessary details about users?
Oh, this is a different level than usual. First of all: You don't write directly to the database.
Now for the next level: You don't change the structure of the database. While there might be less of an argument not to add columns, you definitely never ever ever ever remove columns.
That out of the way: If it's just the last login that you want to get rid of, you edit your portal-ext.properties file and configure it. Liferay's default is:
# Set this to true to record last login information for a user.
#
users.update.last.login=true
Naturally, you'll set it to false. However, beware of LPS-51051; you might need to patch for this issue if you run into the described behavior.
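That is, your portal-ext.properties would contain:

# override the default so Liferay stops recording last login data
users.update.last.login=false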
I've got a large number of Access databases that need the same table design changes (and a few new tables created) in each of them. Is there any way to take my most recent (properly designed) database, export its design properties, and import them into each of the other databases, overwriting changes and creating any new fields, tables, etc. as needed?
My research has only led me to the Database Documenter, which seems helpful only in cases where I'd manually update the properties. I also know I could copy each table over manually, specifying 'Structure Only' in each case, but that would be a rather daunting task and I'm unsure exactly what would be copied using this method.
Let me see if I have the outline...
Open Proper.mdb
For Each OtherMDB In Folder1
    Open OtherMDB
    For Each ProperTable In Proper.mdb
        If ProperTable is absent from OtherMDB Then
            Add ProperTable to OtherMDB
        Else
            For Each ProperField In ProperTable.Fields
                If ProperField is absent from OtherTable.Fields Then
                    Add ProperField to OtherTable
                ElseIf ProperField.Type <> OtherTable.Fields(ProperField.Name).Type Then
                    ' is this a possibility?? wanting to change field type?
                    Change the field's type
                End If
            Next ProperField
        End If
    Next ProperTable
    Close OtherMDB
Next OtherMDB
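For what it's worth, a rough sketch of that outline in VBA with DAO (untested; the target path is a placeholder, and the TableExists/FieldExists helpers are my own), meant to run from inside Proper.mdb so DoCmd.TransferDatabase exports from it:

' Push missing tables/fields from this (master) database into one target mdb.
Sub SyncDesign(targetPath As String)
    Dim master As DAO.Database, target As DAO.Database
    Dim tdf As DAO.TableDef, fld As DAO.Field

    Set master = CurrentDb
    Set target = DBEngine.OpenDatabase(targetPath)

    For Each tdf In master.TableDefs
        If Left(tdf.Name, 4) <> "MSys" Then              ' skip system tables
            If Not TableExists(target, tdf.Name) Then
                ' copy the whole table, structure only (final True)
                DoCmd.TransferDatabase acExport, "Microsoft Access", _
                    targetPath, acTable, tdf.Name, tdf.Name, True
            Else
                For Each fld In tdf.Fields
                    If Not FieldExists(target.TableDefs(tdf.Name), fld.Name) Then
                        target.TableDefs(tdf.Name).Fields.Append _
                            target.TableDefs(tdf.Name).CreateField(fld.Name, fld.Type, fld.Size)
                    End If
                    ' Changing an existing field's type is not possible in DAO:
                    ' you would add a new field, copy the data across, drop the old one.
                Next fld
            End If
        End If
    Next tdf

    target.Close
End Sub

Function TableExists(db As DAO.Database, tblName As String) As Boolean
    Dim t As DAO.TableDef
    On Error Resume Next
    Set t = db.TableDefs(tblName)
    TableExists = (Err.Number = 0)
    Err.Clear
End Function

Function FieldExists(tdf As DAO.TableDef, fldName As String) As Boolean
    Dim f As DAO.Field
    On Error Resume Next
    Set f = tdf.Fields(fldName)
    FieldExists = (Err.Number = 0)
    Err.Clear
End Function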
I found a utility called DBWeigher which is able to analyze and compare two Access databases and automatically generate the VB code necessary to update the changes between the two. From there I quickly ran through the changes manually and was able to see firsthand what would be changed before running them through the DBConsole.
For anyone trying to update older Access databases (especially when they're at different stages and could have some variances), I can't recommend this lightweight utility enough.
I need to store data change histories in a database. For example, at some point a user modifies some property of some data. The expected result is that we can get the change history for a piece of data, like:
Tom changed title to 'Title one'
James changed name to 'New name'
Steve added new_tag 'tag23'
Based on these change histories we can reconstruct all versions of the data.
Any good ideas on how to achieve this? Not limited to traditional relational databases.
These are commonly called audit tables. I generally manage this with triggers on the database: for every insert/update on a source table, the trigger copies the data into another table with the same name plus _AUDIT appended (the naming convention doesn't matter; it's just what I use). Oracle provides something called journal tables: using Oracle Designer (or manually) you can achieve the same thing, and developers often append _JN to the journal/audit table name. This works the same way, with triggers on the source table copying data into the audit table.
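As a minimal sketch (Oracle syntax; the table and column names are invented for illustration), such a trigger might look like:

-- copy every insert/update on MY_TABLE into MY_TABLE_AUDIT,
-- recording who made the change and when
CREATE OR REPLACE TRIGGER my_table_audit_trg
AFTER INSERT OR UPDATE ON my_table
FOR EACH ROW
BEGIN
    INSERT INTO my_table_audit (id, title, changed_by, changed_at)
    VALUES (:NEW.id, :NEW.title, USER, SYSDATE);
END;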
EDIT:
I should also note that you can create a separate schema just to manage your audit tables, or you can keep them in the same schema as the source tables. I do both; it just depends on the situation.
I wrote an article about various options: http://blog.schauderhaft.de/2009/11/29/versioned-data/
If you are not tied to a relational database, there are things called 'append-only' databases (I think), which never change data but only append new versions. For your case this sounds kind of perfect. Unfortunately, I don't know of any implementation.
I haven't been able to find any information regarding the best way to handle record editing with approval in CakePHP.
Specifically, I need to allow users to edit data in a record, but the edited data should not overwrite the original record data until administrators have approved the change. I could put the edited records in a new table and then overwrite the originals when I approve them, but I wonder if there is an easier way, since this idea doesn't seem to play well with the Cake philosophy, so to speak.
You are going to need somewhere to store that data until an administrator can approve it.
I'm not sure how this can be easier than creating another table with the new edits and the original post id. Then when an administrator approves the edit, the script overwrites the old record with the edited version.
I'm working on a similar setup and I'm going with storing the draft record in the same table but with a flag set on the record called "draft". Also, the original record has a "draft_id" field that has the id of the draft record stored in it.
Then in the model when the original record is loaded by the display engine it shows it normally. But when the edit or preview actions try to load the record, it checks the "draft_id" field and then loads the other record if it's set.
The "draft" flag is used to keep list and other group find type actions from grabbing the draft records too. This might also be solved by a more advanced SQL query but I'm not quite that good with SQL.