I've seen that there is a vote to close this question as too broad, so let me rewrite it:
I need to add metadata to different database tables (and thus records) and expose it as linked data, in the form of JSON-LD, on the website.
Records of the same type can carry different metadata: depending on the content, I should be able to store different fields.
For example, I can already add this fixed metadata to every page, because I have this data available:
<script type="application/ld+json">
{"#context":"https://schema.org","#type":"City","name":"#model.MyCityVariable"}
</script>
Let's say I want to link my city to Geonames and Getty TGN. Normally I would need to add two database fields: one to store the Geonames ID and one to store the Getty TGN ID.
For Brussels this would be:
ID 2800866 from https://www.geonames.org/2800866/brussels.html
ID 7007868 from https://www.getty.edu/vow/TGNFullDisplay?find=Brussel&place=&nation=&prev_page=1&english=Y&subjectid=7007868
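In SQL terms that two-field approach would look something like this (the table and column names are just assumptions for illustration):
-- one column per external source
ALTER TABLE dbo.City ADD GeonamesId int NULL, GettyTgnId int NULL;

UPDATE dbo.City SET GeonamesId = 2800866, GettyTgnId = 7007868 WHERE Name = N'Brussels';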
What if I want to add other sources or other metadata? Adding a database field for each new item doesn't feel right.
The goal is to find a way to store additional metadata without having to add a specific database field for each item.
While searching I found out that SQL Server is capable of working with JSON: https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server?view=sql-server-ver16
It looks like this might be a good approach: add one extra database field and store a serialized JSON object in it, containing the different metadata fields (like the IDs above).
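As a sketch of that idea (table and column names are again assumptions based on the city example; the JSON functions need SQL Server 2016 or later):
-- one extra column holding the serialized JSON object with all external IDs
ALTER TABLE dbo.City ADD Metadata nvarchar(max) NULL;

UPDATE dbo.City
SET Metadata = N'{"geonamesId": 2800866, "gettyTgnId": 7007868}'
WHERE Name = N'Brussels';

-- read individual values back out with the built-in JSON functions
SELECT Name,
       JSON_VALUE(Metadata, '$.geonamesId') AS GeonamesId,
       JSON_VALUE(Metadata, '$.gettyTgnId') AS GettyTgnId
FROM dbo.City
WHERE ISJSON(Metadata) = 1;
With this layout, new sources become new keys in the JSON rather than new columns, and values that are queried often can be exposed as computed columns over JSON_VALUE and indexed.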
Maybe the question can now be:
Is storing JSON in SQL Server a proper way of storing metadata and providing linked data for an existing website/database? Are there other, better ways?
Tech used: SQL Server, ASP.NET Core MVC, EF Core.
(I might also add an ASP.NET Core Web API.)
Scenario:
We have a bunch of XML files stored as a BLOB column in a SQL Server table, and there is no interface for viewing the XML file details.
These are response files from an external source.
My customer wants to convert them into relational (RDBMS) tables.
Part of the implementation is already completed: the XML is read from SQL Server using XQuery/XPath and SSIS ETL packages, and the results are stored back into SQL Server.
Instead, I am planning to propose converting the XMLs into JSON and storing them in MongoDB. Based on my reading, this should handle the growing database size better than MS SQL.
The MongoDB collection metadata (nested sub-document elements and attributes) would then drive the filters in the web application, so users can choose which details to view on screen.
I am planning to use MongoDB, Node.js and AngularJS. I am very new to these technologies.
Does it make sense to use MongoDB, Mongoose, Node.js and AngularJS? If not, is there another platform I should consider?
Do I need to define a data model for each XML type? If so, every time a new element is added to the XML I will have to modify the model/schema.
My understanding is that MongoDB copes with elements being added to or removed from the JSON, which is not the case with a relational database. I don't know the details yet.
Can I access the model's metadata dynamically in the web application for filtering records?
In this case there are only read operations, the XML format rarely changes, and it is pulled from an external interface.
Any expert opinions on this direction would be appreciated.
Thanks.
I'm using the Umbraco v4.11.10 CMS and have made bulk changes to member expiry dates via scripts against the MS SQL Server database. However, when I go back to the Umbraco backend, the changes aren't shown. The changes are being made to the cmsContentXml table in MS SQL.
Is there any way to force or sync the Umbraco CMS backend to show the values in the database? I understand Umbraco writes the data to XML so I deleted the umbraco.config file but that doesn’t help.
The ContentXml table is a kind of half-way house for Umbraco's content and shouldn't be edited directly, as it gets overwritten as soon as you make any change to the object in question and then save it.
The underlying data for Umbraco content (including media, content, and members) is actually stored in several other tables using a very flexible object graph structure, and it is not easily editable via straightforward SQL scripts.
Under the hood, it goes something like this:
1. The user edits an item (member, content, media, etc.) and saves it.
2. [If it's a content item] Umbraco creates a new version by copying the current version.
3. For each property of the item, Umbraco creates a new record in the cmsPropertyData table according to the datatype's storage type (int, date, nvarchar, ntext), linked to the item's record via the property id and item id.
4. Once the data has been saved, an XML representation of the object is generated and deposited into the cmsContentXml table, overwriting any previous version for that item.
5. If it is a content item and it is published, Umbraco then writes that same XML representation to the umbraco.config cache file.
This is a fairly simplistic description, but it should give you an idea of the complexity involved in saving and modifying the data.
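Purely as a read-only illustration of how spread out that data is, a query along these lines shows the raw property values stored for one node (the column names here are from memory of the Umbraco 4 schema, so treat them as approximate and verify them against your own database):
-- inspect the raw property values Umbraco stored for a single node
SELECT pt.alias, pd.dataInt, pd.dataDate, pd.dataNvarchar, pd.dataNtext
FROM cmsPropertyData pd
JOIN cmsPropertyType pt ON pt.id = pd.propertytypeid
WHERE pd.contentNodeId = 1234; -- id of the member/content node you changed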
I propose that you take advantage of Umbraco's extensive API to make the bulk changes by writing a plugin. There may also be existing plugins that do something similar; since they are quite commonly open source, you may be able to use one directly or obtain its code and base your own on it.
I'm developing an open-source web application (a helpdesk) that users will download and install. The application will have settings such as title, colors, default e-mail, logging and so on, and these settings will be edited by the user in the admin panel, because most users will not know how to change them in code.
My question is: what is the best way to store this in a (MySQL) database model, bearing in mind that the application will be upgraded over time and more settings will be added to that settings table?
Thank you in advance
There are a lot of different ways to do this, and it depends on what you think the final needs will be.
There's nothing wrong with saving parameters in a dedicated table, each parameter/user/value on a separate row. This is a fast way to set and get the information, and allows you to easily get access to reports by parameter and value and user, for example, what colors are the most popular.
If you are just using this for configuration, you can store the parameters as an XML or JSON string in a text/blob field. This has the benefit of giving you a single load to get all of the parameter values. Even more powerful, if your application already has default values for the parameters, you can easily extend the application without changing the database records. For a large number of parameters, this reduces the number of DB calls to load up all the parameters.
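A rough sketch of both options in MySQL (all names here are made up for illustration):
-- option 1: one row per setting
CREATE TABLE settings (
    name  VARCHAR(64) NOT NULL PRIMARY KEY,
    value TEXT        NOT NULL
);

INSERT INTO settings (name, value)
VALUES ('title', 'My Helpdesk'), ('primary_color', '#336699'), ('default_email', 'support@example.com');

-- option 2: a single serialized blob, loaded with one query
-- (the JSON type needs MySQL 5.7+; on older versions a TEXT column works the same way)
CREATE TABLE app_config (
    id     TINYINT UNSIGNED NOT NULL PRIMARY KEY,
    config JSON             NOT NULL
);

INSERT INTO app_config (id, config)
VALUES (1, '{"title": "My Helpdesk", "primary_color": "#336699", "default_email": "support@example.com"}');
Option 2 pairs well with application-side defaults: settings missing from the blob simply fall back to the defaults, so an upgrade that introduces new settings needs no schema change.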
I'm importing some customer information from Landslide CRM into Salesforce.
Anyone have advice on the best methodology for doing the import?
It seems like the Apex Data Loader is the best way to go, but I don't know if there are any issues with handling the objects in question, or if there might be a specific tool or script to perform this migration.
Any experience with this particular import, or with importing data into Salesforce in general, would be appreciated.
Importing data into Salesforce can be achieved in multiple ways, depending on the type of data and the requirements you have.
The first thing to do is get your data into CSV files, so you'll need to find a way to export the data first. For UTF-8 encoded data, don't use Excel; use something like OpenOffice instead (this only matters if you have UTF-8 characters).
If it's account and contact data, for example, there is an import wizard available under Setup > Administration Setup > Import Business Accounts/Contacts.
The next option is, as you say, to use the Apex Data Loader. This is probably the best approach.
The first thing, and this is critical for big migrations, is to create a field on your Account object that will act as a unique reference. When creating this field, set it as an External ID field and populate it with a unique reference for your accounts; the same goes for anything else that will be a parent. (You'll see why shortly.)
Next, use the Insert option in the Data Loader to load the data, mapping all the fields, especially the External ID.
Now, when you upload child objects, use the Upsert option and map your Account ID via the External ID created earlier. This matches the accounts using your unique ID instead of forcing you to look up the Salesforce ID, which saves a lot of time.
Repeat the same for other objects and you should be good to go.
Apologies for the lack of structure here; I'm writing this at work and don't have a lot of time, but I hope it helps.
The data loader works great for most types of imports. The one suggestion I would give you is to create a new custom field on your target objects (presumably Account and Contact) called "Landslide ID" or similar, identify it as an external ID field, and then import the primary keys from your source system into this field (along with the "real" data).
Doing this achieves a couple things - first, you have an easy unique link back to the source data for troubleshooting or tracing back to the source system. Second, if you find yourself in a situation where you need to import more fields or related data from the original source system, you'll be able to do so in an easy and correct way. It's just a good standard practice to adopt when doing data migrations -- it's almost no additional effort and can save you many hours in the future.
I am building a site that needs to display some product info from a Magento database, but on another page/site outside the Magento installation. I know the information gets displayed twice, but I would like to avoid content duplication and pull the same info from a single source, the Magento product database.
Is this possible? Has anyone done it?
A much easier approach is to pull the entire Magento engine into your external page. This [unlike the rest of Magento] is pretty easy to do.
All you have to do is the following:
// Load Up Magento Core
define('MAGENTO', realpath('/var/www/magento/'));
require_once(MAGENTO . '/app/Mage.php');
$app = Mage::app();
Now you can use any of the Magento objects/classes as if you were inside of Magento and get your attributes
$product = Mage::getModel('catalog/product')->load(1234);
$product->getSku();
$product->getYourCustomAttribute();
etc etc.
Yes, I've done it a few ways. The safest way to do this is using the webservices Magento exposes to query objects programmatically. This will insulate you from database-level changes (such as the flat product catalog, a recent addition).
Failing that (if the performance of the webservices doesn't meet your needs), you can reconstruct the catalog data from the database directly. Use the following tables (assuming you're not using the flat catalog):
eav_entity_type
eav_attribute
catalog_product_entity
catalog_product_entity_int
catalog_product_entity_varchar
catalog_product_entity_text
catalog_product_entity_decimal
catalog_product_entity_datetime
You'll want to read up on EAV models before you attempt this. Bear in mind that this is largely the topic over which people call Magento complicated.
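To make the EAV layout a bit more concrete, the lookup for a single attribute goes roughly like this (column names follow the usual Magento 1 schema, so verify them against your installation):
-- fetch the 'name' attribute for one product from the EAV tables
SELECT e.entity_id, e.sku, v.value AS name
FROM catalog_product_entity e
JOIN eav_attribute a
  ON a.entity_type_id = e.entity_type_id
 AND a.attribute_code = 'name'
JOIN catalog_product_entity_varchar v
  ON v.entity_id = e.entity_id
 AND v.attribute_id = a.attribute_id
 AND v.store_id = 0          -- default store values
WHERE e.sku = 'your-sku-here';
Each attribute lives in the value table matching its backend type (int, varchar, text, decimal, datetime), which is why reading a whole product means a join or query per value table.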
Hope that helps!
Thanks,
Joe