I’m using Umbraco v4.11.10 CMS and have made bulk changes to member expiry dates via scripts in the MS SQL Server database. However, when I go back to the Umbraco backend, the changes aren’t being shown. The changes are being made to the cmsContentXml table in MS SQL.
Is there any way to force or sync the Umbraco CMS backend to show the values in the database? I understand Umbraco writes the data to XML so I deleted the umbraco.config file but that doesn’t help.
The ContentXml table is a kind of half-way house for Umbraco's content and shouldn't be edited directly, as it gets overwritten as soon as you make any change to the object in question and then save it.
The underlying data for Umbraco content (including media, content, and members) is actually stored across several other tables in a very flexible object-graph structure, and is not easily editable via straightforward SQL scripts.
Under the hood, it goes something like this:
1. A user edits an item (Member, Content, Media, etc.) and saves it.
2. [If it's a Content item] Umbraco creates a new version by copying the current version.
3. For each property of the item, Umbraco creates a new record in the cmsPropertyData table according to the datatype's storage type (int, date, nvarchar, ntext), linked to the item's record via the propertyId and itemId.
4. Once the data has been saved, an XML representation of the object is generated and deposited into the cmsContentXml table, overwriting any previous version for that item.
5. If it is a Content item and the item is published, Umbraco then writes that same XML representation to the umbraco.config cache file.
This is a fairly simplistic representation, but it should suffice to give you an idea of the complexity involved in saving and modifying the data.
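To make that concrete at the SQL level, here is a rough illustration of where a single member property value actually lives (a sketch only; the column names are from memory of the v4 schema and may differ slightly in your version, and the node id is hypothetical):

-- Illustrative only: a member property value sits in cmsPropertyData, keyed by the
-- member's node id and property type, in whichever column matches the storage type.
SELECT pt.Alias,
       pd.dataInt,      -- integer-backed datatypes
       pd.dataDate,     -- date-backed datatypes
       pd.dataNvarchar, -- short text
       pd.dataNtext     -- long text
FROM   cmsPropertyData pd
JOIN   cmsPropertyType pt ON pt.id = pd.propertytypeid
WHERE  pd.contentNodeId = 1234; -- hypothetical member node id

Editing either layer directly leaves the other out of sync, which is why direct SQL edits tend not to show up where you expect.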
I propose that you take advantage of Umbraco's extensive API and write a plugin to make the bulk changes. There may also be existing packages that do something similar; many of them are open source, so you may be able to use them as-is or base your own code on theirs.
I've seen that there is a vote to close this question as too broad, so let me rewrite it:
There is a need to add metadata to different database tables (and thus records), which is then exposed as linked data in the form of JSON-LD on the website.
Database records of the same type could have different metadata, so depending on the content I should be able to store different metadata.
For example, I could add this fixed metadata to every page, because I have this data available:
<script type="application/ld+json">
{"#context":"https://schema.org","#type":"City","name":"#model.MyCityVariable"}
</script>
Let's say I want to link my city to Geonames and Getty TGN. In a normal circumstance I would need to add two database fields: one to store the Geonames id and one to store the Getty TGN id.
For Brussels this would be:
ID 2800866 from https://www.geonames.org/2800866/brussels.html
ID 7007868 from https://www.getty.edu/vow/TGNFullDisplay?find=Brussel&place=&nation=&prev_page=1&english=Y&subjectid=7007868
What if I want to add other sources or other metadata? Adding a database field for each new item doesn't feel right.
The goal is finding out how to store additional metadata, without having to add a specific database field for each item.
While searching I discovered that SQL Server is capable of working with JSON: https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-ver16
It looks like this might be a good way to add one extra database field and store a serialized JSON object in it, containing my different metadata fields (like the IDs).
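A minimal sketch of what I have in mind (the table and column names are invented for illustration; ISJSON and JSON_VALUE require SQL Server 2016 or later):

-- One nvarchar(max) column holds a JSON object with arbitrary metadata keys.
CREATE TABLE dbo.City
(
    CityId   int IDENTITY PRIMARY KEY,
    Name     nvarchar(200) NOT NULL,
    Metadata nvarchar(max) NULL,
    CONSTRAINT CK_City_Metadata_IsJson CHECK (Metadata IS NULL OR ISJSON(Metadata) = 1)
);

INSERT INTO dbo.City (Name, Metadata)
VALUES (N'Brussels', N'{"geonamesId": 2800866, "tgnId": 7007868}');

-- Read individual ids back out without dedicated columns, e.g. to build the JSON-LD.
SELECT Name,
       JSON_VALUE(Metadata, '$.geonamesId') AS GeonamesId,
       JSON_VALUE(Metadata, '$.tgnId')      AS TgnId
FROM   dbo.City;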
Maybe the question now can be:
Is JSON in SQL Server a proper way of storing metadata and linking/providing linked data for an existing website/database? Are there other, better ways?
Tech used: SQL Server, ASP.NET Core MVC, EF Core.
(I might also add an ASP.NET Core Web API.)
I will preface this right off the bat by saying that I am new to database design.
I have been working to rewrite some legacy code that controls an import process into one of our pieces of software. Part of this new process includes the modification of the incoming XML files (that come into our system via FTP) to remove certain elements as well as swap values in special cases.
As a part of the new system, we are implementing versioning inside the database so that we can pull the most recent version of the XML directly from there instead of modifying the file over and over again. In order to prove that this can be done, I have created a very simple table inside SQL Server 2016 that stores the XML, then wrote a simple PowerShell script to pull that XML from the database and store it inside an object. Now that I know this is indeed possible, I need to refine how I design the table.
This is where my expertise starts to take a hit. As of right now, the table contains three columns: xml_Version, xml_FileID, and xml_FileContents.
The general idea is to have a GUID (xml_FileID) that is tied to each version of the XML and another column that indicates which version of the XML it is. I would also assume that you need some way of tying each version of the XML to its original file, too.
I was hoping that someone could point me in the right direction about how I should go about designing the table to accomplish this task. I can provide more information if needed.
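To make the current idea concrete, this is roughly what my prototype table looks like (a sketch only; the audit column and the composite key are just what I am considering, not a settled design):

CREATE TABLE dbo.xml_File
(
    xml_FileID       uniqueidentifier NOT NULL,  -- identifies the original file
    xml_Version      int              NOT NULL,  -- increments per revision of that file
    xml_FileContents xml              NOT NULL,
    CreatedUtc       datetime2        NOT NULL
        CONSTRAINT DF_xml_File_CreatedUtc DEFAULT (sysutcdatetime()),
    CONSTRAINT PK_xml_File PRIMARY KEY (xml_FileID, xml_Version)
);

-- Pull the latest version of a given file.
DECLARE @FileId uniqueidentifier = NEWID(); -- in practice, the id of the file you want
SELECT TOP (1) xml_FileContents
FROM   dbo.xml_File
WHERE  xml_FileID = @FileId
ORDER  BY xml_Version DESC;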
Thanks.
Edit: I think what I'm having the most trouble grasping is what I should be referencing when I'm trying to grab data out of the database. Storing the XML in the table with a unique identifier is the easy part - the unfortunate part is that there's nothing in the XML itself that would uniquely identify the corresponding data within the database. Does that make sense?
Scenario:
We have a bunch of XML files stored in a SQL Server table as a BLOB column. There is no interface to view the XML file details.
These are response files from an external source.
My customer wants to convert them into relational (RDBMS) tables.
Currently part of the implementation is completed. They are reading the XML from SQL Server using XQuery/XPath and SSIS ETL packages and storing the results back into SQL Server (see the sketch at the end of this question for roughly what that looks like).
Instead, I am planning to propose converting the XML into JSON and storing it in MongoDB. This should help manage the growing database size compared to MS SQL (based on my reading, MongoDB handles this better).
We would use the MongoDB collection metadata (nested sub-document elements and attributes) to drive filters in the web application, so that users can choose which details to view on screen.
I am planning to use MongoDB, Node.js and AngularJS. I am very new to these technologies.
1. Does it make sense to use MongoDB, Mongoose, Node.js and AngularJS? If not, is there another platform I should consider?
2. Do I need to define a data model for each XML type? If yes, every time a new element is added to the XML I will have to modify the model/schema.
3. My understanding is that when extra elements are added to or removed from the JSON, MongoDB will cope with it, which is not the case for a relational database. I don't know the details yet.
4. Can I access the metadata of the model dynamically in the web application for filtering records?
In this case I have only read operations. The XML format will change rarely; it is pulled from an external interface.
Any expert opinions on my direction would be appreciated.
Thanks.
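For reference, the existing XQuery/XPath shredding mentioned above looks roughly like this (a sketch only; the element names and structure are invented, not the real response format):

-- Hypothetical response XML shredded into rows with SQL Server's XML methods.
DECLARE @xml xml = N'
<Responses>
  <Response id="1"><Status>OK</Status><Amount>10.50</Amount></Response>
  <Response id="2"><Status>Failed</Status><Amount>0</Amount></Response>
</Responses>';

SELECT r.value('@id', 'int')                   AS ResponseId,
       r.value('(Status)[1]', 'nvarchar(20)')  AS Status,
       r.value('(Amount)[1]', 'decimal(10,2)') AS Amount
FROM   @xml.nodes('/Responses/Response') AS T(r);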
During an upgrade process from NAV 2009 to NAV 2016 I'm trying to remove objects relating to an old discontinued product. The objects are not within the range of our license and consist of Forms, Tables and Reports. When deleting I'm faced with the well-known error:
"You do not have permission to delete the '[object name]' Table."
I've tried with my developer's license and the customer's license with no luck. Since the product no longer exists there is no point keeping these objects around, and I need them gone for the upgrade process.
What is the best approach or technique for deleting objects that are not in the license?
UPDATE: How this issue was resolved
I got in contact with the product owner and explained my problem. They sent me a neat PowerShell script to run. This worked like a charm. Reading through the script I can see that it's using the SQL cmdlets to select and delete relevant data from the following SQL tables:
Objects, Object Metadata, Object Metadata Snapshot, Object Tracking, Object Translation, Permission.
This was the preferred method of the product owner who used to develop this product. It should be applicable to all NAV objects. I have not yet successfully tried one of the answers below (more tries to come). Hopefully this new information will provide someone with enough knowledge to provide a good answer.
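For anyone curious, the cleanup was essentially along these lines (my reconstruction, not the vendor's actual script; the bracketed names are the standard NAV system tables but the exact column names and the id range here are assumptions to verify against your own database):

-- DANGER: direct system-table surgery; take a backup first.
DECLARE @FromId int = 20000000, @ToId int = 20099999; -- hypothetical id range of the obsolete product

DELETE FROM [Object]                   WHERE [ID] BETWEEN @FromId AND @ToId;
DELETE FROM [Object Metadata]          WHERE [Object ID] BETWEEN @FromId AND @ToId;
DELETE FROM [Object Metadata Snapshot] WHERE [Object ID] BETWEEN @FromId AND @ToId;
DELETE FROM [Object Tracking]          WHERE [Object ID] BETWEEN @FromId AND @ToId;
DELETE FROM [Object Translation]       WHERE [Object ID] BETWEEN @FromId AND @ToId;
DELETE FROM [Permission]               WHERE [Object ID] BETWEEN @FromId AND @ToId;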
One way that has been used successfully by several people, but certainly cannot be recommended for a production system, is to simply delete these objects via SQL from the Object table and its supplemental tables. In the case of tables, you would also need to manually drop the SQL table itself as well as its VSIFT views.
A somewhat better (probably) way is to change the number of the object via SQL and then delete the object via NAV.
The best way is to use the functionality of "killer objects", which allows you to delete objects via FOB import:
http://navisionary.com/2011/11/how-to-delete-bsolete-dynamics-nav-objects/
If you find a partner who can provide you with such killer objects (they need a license that can create objects in the required range), it solves your problem in a "clean" way.
If not, you may want to consider creating empty objects in the 50000 range in a test DB, changing their numbers to the obsolete range via SQL, exporting them as a FOB, and then importing them into your target DB with the "Delete" option.
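A minimal sketch of that renumbering step (standard NAV Object table and column names, but verify against your database; the ids and the blank company name are assumptions):

-- Move an empty table object created as 50000 onto the obsolete id you want to kill (here 20000000).
UPDATE [Object]
SET    [ID] = 20000000
WHERE  [Type] = 1            -- 1 = Table in the Object table
  AND  [ID] = 50000
  AND  [Company Name] = '';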
Create a new, empty database, export only the needed objects from the old database, and import them into the new database.
In NAV 2016 the application database can be separated from the database containing the data, so (I assume) you could just unmount it from the database with the old objects and mount it to a new application database. Not sure, to be honest.
It is due to the range of the license. For example, if your development license has a table range of 7.000.000 - 7.000.200 and you want to delete a table with ID 20.000.000, you get that error.
The best solution is not to include these objects at all when you do the upgrade: export all objects except the ones you want to delete.
We have a table in our database that stores the XSLs and XSDs that are applied to XML documents created in our application. This table is versioned in the sense that each time a change is made, a new row is created.
I'm trying to propose that we store the XSLs and XSDs as files in our source control system instead of relying on the database to track the history. Each time a file is updated, we would deploy the new version to the database.
I don't seem to be getting much agreement on the issue. Can anyone help me out with the pros and cons of this approach? Perhaps I'm missing something.
XSL and XSD files are part of the application and so ought to be kept under source control. That's just obvious. Even if somebody wanted to categorise them as data, they would be reference data and so - in my book at least - would need to be kept under source control. This is because reference data is part of the application and so part of its configuration. For instance, applications which use the database to store values for drop-downs or to implement business rules need to be certain that it holds the right version of the data.
The only argument for keeping multiple versions of the files in the database would be if you might need to process older versions of the XML files. This depends on the nature of your application. Certainly I have worked on systems where XML files / messages came from external (third-party) systems, where we really had no control over the format of the messages sent. So for a variety of reasons we needed to be able to handle incoming XML regardless of whether its structure was current or historical. But that is in addition to storing the files in a source control repository, not instead of it.