Postgres / Golang: Schema vs Database? - database

I am using Golang and Postgres for my application. For each new user I create a new database with its own tables, so every new customer gets a separate database. While processing requests, my application ends up opening too many connections, one set per user database. This is what I am currently doing. My question: should I create a schema per user instead of a database per user in Postgres, to reduce connections? In that case only one database would exist, containing many schemas. Is this the better way or not?
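In Postgres terms, the schema-per-tenant setup I am considering would look roughly like this (identifiers are illustrative):

CREATE SCHEMA customer_42;
CREATE TABLE customer_42.users (
    id   BIGSERIAL PRIMARY KEY,
    name TEXT NOT NULL
);
-- a pooled connection can then be pointed at one tenant per session:
SET search_path TO customer_42;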

If the schema for each customer is different, then you should use event-based data storage (an entity-attribute-value layout): instead of creating a column for every field, create a row per field.
Each row in this case consists of 4 fixed columns:
id (unique for each entry), res_id (points to its parent id field, if present), key (e.g. "user_id"), value (e.g. "1").
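A minimal sketch of such a table in Postgres (names are illustrative):

CREATE TABLE entity_data (
    id     BIGSERIAL PRIMARY KEY,              -- unique for each entry
    res_id BIGINT REFERENCES entity_data(id),  -- parent entry, if present
    key    TEXT NOT NULL,                      -- e.g. 'user_id'
    value  TEXT                                -- e.g. '1'
);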

Related

Is it possible to conditionally set a field value to no duplicates in MS-Access

I am building a project to manage conferences. The database is on an SQL Server on an AWS instance and I am using MS Access as the front end.
I have a table for Events and a table for Exhibitors.
These tables have a relationship from Events.ID to Exhibitors.EventsID.
One of my fields on the Exhibitors table is BoothNumber int Not Null.
I would like to ensure that we cannot assign a booth number twice for the same event but have the ability to reuse the number for other (future) events.
Our booth assignment is generic: 1-75, and this is repeated for every event.
Is something like this possible?
Thank you for your help!
If your data is on SQL Server, put a multi-field unique index on the table using two or more fields. SQL Server then takes care of the rest, preventing duplicate entries.
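For example, assuming the table and column names above:

-- one booth number per event; the same number can be reused in other events
CREATE UNIQUE INDEX UX_Exhibitors_EventsID_BoothNumber
    ON Exhibitors (EventsID, BoothNumber);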

Oracle APEX - Data Modeling & Primary Keys

I'm creating a rather large APEX application which allows managers to go in and record statistics for associates in the company. Currently we have a database in Oracle with data from AD which holds all the associates' information: name, manager, employee ID, etc.
Now I'm responsible for creating and modeling a table that will house all their stats for each employee. The table I have created has 90+ columns in it. Some contain data such as:
Documents Processed
Calls Received
Amount of Doc 1 Processed
Amount of Doc 2 Processed
and the list goes on for well over 90 attributes. So here is my question:
When creating this table in my application with so many different columns, how would I go about choosing a primary key that's appropriate? Should I link it to our employee table using the employee identification, which is unique (each has an associate number)?
Secondly, how can I create these tables (and possibly forms) so that the statistic I am entering is associated with the actual individual?
I have ordered two books from Amazon on data modeling, since I am new to APEX and DBA design. Not a fresh chicken, but new enough to need some guidance. An additional problem I am running into is that each form can have only 60 fields on it. So I had thought about splitting my 90+ columns into separate tables by function.
Thanks
APEX 4.2 allows for 200 items per page. See:
oracle apex component limits
A couple of questions come to mind:
Are you sure that the employee IDs are not recyclable? If those IDs are unique and never recycled, you've found yourself a good primary key.
What do you plan on doing when you decide to add a new metric? It seems like you might have to add a new column to your rather large and likely not normalized table.
I'd recommend a vertical table for your metrics; you can use Oracle's PIVOT function to make your data appear more like a horizontal table.
If you went this route, you would store your employee ID in one column, your metric key in another, and the value in a third.
I'd recommend that you create a metric table consisting of a primary key, a metric label, an active indicator, a creation timestamp, a creation user ID, a modified timestamp, and a modified user ID.
This metric table will allow you to add new metrics, change the name of the metric, deactivate a metric, and determine who changed what and when.
This would be a much more flexible approach in my opinion. You may also want to think about audit logs.
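A rough sketch of the two tables described above, with illustrative names:

-- metric definitions: rename or deactivate metrics without schema changes
CREATE TABLE metric (
    metric_id    NUMBER PRIMARY KEY,
    label        VARCHAR2(100) NOT NULL,
    active_ind   CHAR(1) DEFAULT 'Y' NOT NULL,
    created_ts   TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL,
    created_by   VARCHAR2(30) NOT NULL,
    modified_ts  TIMESTAMP,
    modified_by  VARCHAR2(30)
);

-- vertical stats table: one row per employee per metric
CREATE TABLE employee_metric (
    employee_id   NUMBER NOT NULL,  -- the unique associate number
    metric_id     NUMBER NOT NULL REFERENCES metric (metric_id),
    metric_value  NUMBER,
    PRIMARY KEY (employee_id, metric_id)
);

-- PIVOT can present the vertical data horizontally (metric ids illustrative):
SELECT *
FROM (SELECT employee_id, metric_id, metric_value FROM employee_metric)
PIVOT (MAX(metric_value) FOR metric_id IN (1 AS docs_processed, 2 AS calls_received));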

Change Mapped table dynamically in Entity Framework

I am using Entity Framework. In my DB, I have a table (DOCMASTER) which is mapped in my model and, on occasion, has a backup created from a different date (e.g. DOCMASTER_10_01_2015). The mappings are identical, with the exception of the keys/constraints, which are not brought over with the backup.
I have an application with a dropdown that is filled with all of the tables in the DB that are of type "DOCMASTER". The user selects which table they would like to query from, and they search for a client in that particular version of the table.
What I would ideally like to do is remap my model to use the selected table instead of the mapped table, however when I do that using DbModelBuilder, it seems to want to remap all of the tables in that model, not just the one table. I receive the error "CodeFirstNamespace.CUSTOM1: : EntityType 'CUSTOM1' has no key defined. Define the key for this EntityType" wherein 'CUSTOM1' is my table name, but I receive this for all of my tables in the model.
I am debating just using a parameterized query to query the selected table directly. Does anyone have any thoughts on this?
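If you go the direct-query route, note that a table name cannot itself be a query parameter. One common pattern, assuming SQL Server and a hypothetical ClientName column, is to validate the name against the catalog and build dynamic SQL:

-- validate against sys.tables, quote with QUOTENAME, parameterize the rest
DECLARE @table SYSNAME = N'DOCMASTER_10_01_2015';
IF EXISTS (SELECT 1 FROM sys.tables WHERE name = @table)
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT * FROM dbo.' + QUOTENAME(@table) + N' WHERE ClientName = @client';
    EXEC sp_executesql @sql, N'@client NVARCHAR(100)', @client = N'Smith';
END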

SQL Server Alternative to reseeding identity column

I am currently working on a phone directory application. For this application I get a flat file (CSV) from corporate SAP, updated daily, that I use to update an SQL database twice a day using a Windows service. Additionally, users can add themselves to the database if they do not exist (i.e. are not included in the SAP file). Thus, a contact can be of 2 different types: 'SAP' or 'ECOM'.
So, the Windows service downloads the file from an SAP FTP, deletes all existing contacts in the database of type 'SAP' and then adds all the contacts in the file to the database. To insert the contacts into the database (some 30k), I load them into a DataTable and then make use of SqlBulkCopy. This works particularly well, running in only a few seconds.
The only problem is the fact that the primary key for this table is an auto-incremented identity. This means that my contact ids grow at a rate of 60k per day. I'm still in development and my ids are already around 20 million:
http://localhost/CityPhone/Contact/Details/21026374
I started looking into reseeding the id column, but if I were to reseed the identity to the current highest number in the database, the following scenario would pose issues:
Windows service loads 30,000 contacts
User creates an entry for himself (id = 30,001)
Windows service deletes all SAP contacts and reseeds the column to just past the current highest id: 30,002
Also, I frequently query for users based on this id, so I'm concerned that using something like a GUID instead of an auto-incremented integer will have too high a price in performance. I also tried looking into SqlBulkCopyOptions.KeepIdentity, but this won't work: I don't get any ids from SAP in the file, and if I did they could easily conflict with the values of manually entered contacts. Is there any other solution to reseeding the column that would not cause the id values to grow at such a rate?
I suggest the following workflow.
Import into a brand new table, like tempSAPImport, with your current workflow.
Then add to your main table only the changed rows:
INSERT INTO ContactDetails (Detail1, Detail2)
SELECT Detail1, Detail2
FROM tempSAPImport
EXCEPT
SELECT Detail1, Detail2
FROM ContactDetails;
If your SAP data has a natural primary key, you can also use it to detect rows that only need updating:
UPDATE ContactDetails ... (XXX: your update criteria)
This way you will import your data fast and keep your existing identity values. Depending on your speed requirements, adding indexes after the import will speed up your process.
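For example, nonclustered indexes can be disabled during the bulk load and rebuilt afterwards (the index name is hypothetical):

ALTER INDEX IX_ContactDetails_Surname ON ContactDetails DISABLE;
-- ... run SqlBulkCopy here ...
ALTER INDEX IX_ContactDetails_Surname ON ContactDetails REBUILD;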
If your SQL Server version is 2012 or later, then I think the best solution for the scenario above would be using a sequence for the PK values. This way you have control over the seeding process (you can even cycle values).
More details here: http://msdn.microsoft.com/en-us/library/ff878091(v=sql.110).aspx
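A minimal sketch (names are illustrative):

-- a sequence controlled explicitly, instead of an IDENTITY column
CREATE SEQUENCE dbo.ContactIdSeq
    AS INT
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    CYCLE;  -- wrap around instead of growing without bound

-- use it as the default for the primary key
CREATE TABLE dbo.Contacts (
    Id       INT PRIMARY KEY DEFAULT (NEXT VALUE FOR dbo.ContactIdSeq),
    FullName NVARCHAR(200) NOT NULL
);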

Restricting the content of a table

I'm trying hard to find a way to restrict a user's access to a particular table. I'm working with views now, but I can't create what I want, and I don't know if it's possible.
What I have accomplished so far is to block all access to the table and create a view with the content the user should be able to see, but it's not really what I want.
What I was thinking:
When I log on as the user XXX, I should be able to see the database X_DB and the table X_TABLE,
BUT when this user selects from this table, he will only see the content I defined previously, not the entire content of the table.
I was able to select it into a view, but I cannot make all of it part of one process.
Is that possible?
Thank you
Given that you have 20 databases, one per client, add each client as a user to just the database you want them to access.
If you want to consolidate all of your databases into a single database, then I suggest that you add a Client table containing clientId (primary key) and clientName fields, and then modify the rest of your schema by adding foreign key fields and relationships so that the other data is related to the proper client. Then you can easily provide access to data based on each client's clientId, in conjunction with views and stored procedures.
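A sketch of the consolidated approach, assuming SQL Server and a hypothetical Orders table:

CREATE TABLE Client (
    clientId   INT PRIMARY KEY,
    clientName NVARCHAR(100) NOT NULL,
    loginName  SYSNAME NOT NULL  -- database login mapped to this client
);

-- grant SELECT on the view, not on the underlying table
CREATE VIEW dbo.ClientOrders AS
SELECT o.*
FROM dbo.Orders AS o
JOIN dbo.Client AS c ON c.clientId = o.clientId
WHERE c.loginName = SUSER_SNAME();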
