I inherited a project, and I'm trying to understand the variables and IDs that appear in this format:
749E8-DEBBD-4BAB7-6E3D2
Are these hashed database IDs? Given the structure, is there a way to understand how they were generated?
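For what it's worth, the closest thing I can think of that produces this shape is just random hex grouped into blocks. The sketch below is only a guess at the format, not how the project actually generates them:

```typescript
// Purely illustrative: one way IDs with this shape (four groups of five hex
// characters) could be produced -- random bytes formatted as grouped hex.
// This is a guess at the format, not how the project actually generates them.
import { randomBytes } from "node:crypto";

function randomGroupedId(groups = 4, groupLen = 5): string {
  const hex = randomBytes(Math.ceil((groups * groupLen) / 2))
    .toString("hex")
    .toUpperCase()
    .slice(0, groups * groupLen);

  const parts: string[] = [];
  for (let i = 0; i < groups; i++) {
    parts.push(hex.slice(i * groupLen, (i + 1) * groupLen));
  }
  return parts.join("-");
}

console.log(randomGroupedId()); // e.g. "749E8-DEBBD-4BAB7-6E3D2" (shape only)
```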
This is sort of a random question, but I am building a backend in Express and MongoDB and I need to store data for a settings page. This would contain one-off global settings that an admin user would enter. However, they need to be saved in the DB so they are consistent across all users.
Right now I have a single collection/table that has just one record stored, and I just update that specific record whenever the settings are updated.
It just feels a little goofy to do that and create a full schema and collection for one piece of data, but I can't think of any other way to do it. Is this the normal way of doing this?
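For reference, this is roughly the pattern I have now (a minimal sketch, assuming Mongoose; the field names are just placeholders):

```typescript
// Minimal sketch of the "one settings document" approach, assuming Mongoose.
// The fields (siteTitle, maintenanceMode) are placeholders for whatever the admin edits.
// Assumes mongoose.connect(...) has already been called elsewhere in the app.
import mongoose, { Schema } from "mongoose";

const settingsSchema = new Schema({
  siteTitle: { type: String, default: "" },
  maintenanceMode: { type: Boolean, default: false },
});

const Settings = mongoose.model("Settings", settingsSchema);

// Read the single settings document, creating it with defaults on first access.
export async function getSettings() {
  return (await Settings.findOne()) ?? (await Settings.create({}));
}

// Update only the fields the admin changed; upsert guards against the document
// not existing yet.
export async function updateSettings(
  patch: Partial<{ siteTitle: string; maintenanceMode: boolean }>
) {
  return Settings.findOneAndUpdate({}, { $set: patch }, { new: true, upsert: true });
}
```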
Thanks
I am working on a webapp at the moment where I intend to store data that looks something like this:
Organizations
Users within each organization
Documents under each user
My question is whether it would be better to store this data across multiple databases and reference each piece of data where it is needed, or to have one database that stores organizations, where each organization document holds an array of user documents, and each user document holds an array of documents. For the first option, I would have a database for organizations, one for users, and one for user data. In the organization record, for example, I would store/reference users as document IDs rather than their actual documents.
I hope this makes sense, but yeah, my question is: which would be better practice? To spread the data across multiple DBs and reference it, or to have each piece of data directly store the data it needs to reference?
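To make the two options concrete, here is a rough sketch of the shapes I have in mind (collection and field names are just placeholders):

```typescript
// Rough sketch of the two shapes being weighed. Names are placeholders.

// Option 1: separate collections, referencing by ID.
interface Organization {
  _id: string;
  name: string;
  userIds: string[];        // references into the "users" collection
}

interface User {
  _id: string;
  organizationId: string;   // back-reference to the organization
  documentIds: string[];    // references into the "documents" collection
}

interface UserDoc {
  _id: string;
  ownerId: string;          // the user who owns this document
  title: string;
  body: string;
}

// Option 2: one collection where each organization embeds everything.
interface EmbeddedOrganization {
  _id: string;
  name: string;
  users: {
    name: string;
    documents: { title: string; body: string }[];
  }[];
}
```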
Thank you!
I have a nicely designed Access database, complete with laid-out forms and macros behind many buttons that filter through search boxes and perform many other functions.
My problem is that I am sending this database to multiple people who want to add new records. However, when they give me back their edited database (now with new records), I can't import the records, because another person who has also handed me their edited version of the database needs their records added too, and Access doesn't allow me to import both sets, since both people have created the same unique IDs and they clash when I try to import them.
I have tried some websites that claim to import my database and allow cloud editing, but I lose all the functionality and layout of my forms/macros because the websites don't support them.
What would be the best solution so that I can get multiple people adding new records at the same time? Are there any websites that offer this? Or is there a way inside Access to reassign the unique IDs if they are already in the system?
Set the field size of your AutoNumber ID fields to Replication ID instead of Long Integer. It is extremely unlikely that two users will create records with the same ID.
Split the database into frontend and backend parts. The backend sits on a server and the frontend links to the backend. Each user runs their own copy of the frontend. However, if your users do not have access to the same network, you are in a pickle. Have you looked into SharePoint and Azure?
I have designed a db for users that did not have a connection to our network. These were construction site field offices. The main office had the master database. Field offices were given an Access file where they entered records during the life of the project; at the end they sent in the file, and code in the master imported the records. Since all data was new, there was no concern about conflicting updates. The simplest way I found to accomplish this was not to use an AutoNumber primary key.
I do have another db that required merging data from multiple Access files, and those files did use an AutoNumber primary key. The import code was more complicated.
I'm new to CouchDB and want to give it a try. But before I do that, I want to know if I can create a dynamic database structure in CouchDB.
E.g.:
The user starts on a blank thread and chooses whatever structure he/she wants (e.g. title, body, and tags) and fills them in.
When they click "save thread", the database structure for this is created, nested if necessary.
Then the user could get the thread from the database and read it.
Questions:
Is this dynamic creation of database structure possible?
I also read that you have to predefine views that will be used to get the documents. But how can you predefine views for data that doesn't exist yet, when you have no idea what data and structure the user is going to create?
Yes, a CouchDB document appears from the outside like a JSON object in which you can put whatever you want, except for a few reserved field names used for handling document IDs and revisions (such as _id and _rev).
These "predefined" views are themselves just documents, so you can modify them dynamically.
If what you require is more in the direction of searching, there are some ways out there to integrate Solr with CouchDB, which provides a more dynamic approach to queries.
I have a desktop (WinForms) application that uses a Firebird database as a data store (in embedded mode), and I use NHibernate for ORM. One of the functions we need to support is to be able to import/export groups of data to/from an external file. Currently, this external file is also a database with the same schema as the main database.
I've already got NHibernate set up to look at multiple databases and I can work with two databases at the same time. The problem, however, is copying data between the two databases. I have two copy strategies: (1) copy with all the same IDs for objects [aka import/export] and (2) copy with mostly new IDs [aka duplicate / copy]. I say "mostly new" because there are some lookup items that will always be copied with the same ID.
Copying everything with new IDs is fine, because I'll just have a "CopyForExport" method that creates copies of everything without assigning IDs (or wipes out all the IDs in the object tree), so new ones get generated.
What is the "best practices" way to handle this situation and to copy data between databases while keeping the same IDs?
Clarification: I'm not trying to synchronize two databases, just exporting a subset (user-selectable) of data for transfer to someone else (who will then import the subset of data into their own database).
Further Clarification: I think I've isolated the problem down to this:
I want to use the ISession.SaveOrUpdate feature of NHibernate, so I set up my entities with an identity generator that isn't "assigned". However, I have a problem when I want to override the generated identity (for copying data between multiple databases in the same process).
Is there a way to use a Guid.Comb or UUID generator, but still be able to sometimes specify my own identifier (for transferring to a different database connection with the same schema)?
I found the answer to my own question:
The key is the ISession.Replicate method. This allows you to copy object graphs between data stores and keep the same identifier. To create new identifiers, I think I can use ISession.Merge, but I still have to verify this.
There are a few caveats though: my test class has a reference to the parent object (a many-to-one relationship), and I had to make the class non-lazy-loading to get Replicate to work properly. If I didn't have it set to eager load (non-lazy load, I guess), it would only replicate the object and not the parent object, even with cascade="all" in my hbm.xml file.
The Java Hibernate docs have a reference to Replicate() (section 10.9), but the NHibernate documentation doesn't.
This makes sense for the Replicate behavior because we want to have fully hydrated entities before transferring them to another data store. What's weird though is that even with both sessions open (one to each data store), it didn't think to hydrate the object when I wanted to replicate it.
You can use FBCopy for this. Just define which tables and columns you want copied and it will do the job. You can also add an optional WHERE clause for each table, so it only copies the rows you want.
While copying, it makes sure the order in which data is exported is maintained, so that foreign keys do not break. It also supports generators.