Store large text array data in PostgreSQL database [closed]

I have a requirement to dump a large array of user IDs into one column of a table in a PostgreSQL DB. Say the maximum number of user IDs is 100,000 and each user ID is at most 50 characters long. I won't be performing any operations on that table; it is just for logging purposes.
I've used a text[] column to store the array of user IDs. I don't know if that's the best approach, and I'm worried that a "max size limit reached" error will be thrown if the array grows in the future.
Please suggest a better way to achieve this :)
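
For reference, a minimal sketch of the text[] approach (table and column names are hypothetical). PostgreSQL imposes no fixed element-count limit on arrays, but any single field value is capped at 1 GB; 100,000 IDs of up to 50 characters each is roughly 5 MB, far below that limit.

    -- Minimal sketch of the text[] approach; names are hypothetical.
    -- Any single PostgreSQL field value is capped at 1 GB, so 100,000
    -- IDs of <= 50 characters (~5 MB) fit comfortably.
    CREATE TABLE user_id_log (
        log_id     bigserial   PRIMARY KEY,
        logged_at  timestamptz NOT NULL DEFAULT now(),
        user_ids   text[]      NOT NULL
    );

    INSERT INTO user_id_log (user_ids)
    VALUES (ARRAY['user-000001', 'user-000002', 'user-000003']);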

Related

SQL Server - Index Seek: count of operations [closed]

In SQL Server we have the Index Seek operator, which works very well for search operations.
How many operations does SQL Server need to perform in order to get a value? I assume it should be the height of the tree.
Nobody can give a single definite answer, because it depends on many parameters:
Index type (clustered or non-clustered)
Unique or not
Null or not null
Which page the expected rows are stored in
There is a well-explained article about index seeks here:
https://sqlserverfast.com/epr/index-seek/
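
If you want the concrete number for a given index, a hedged sketch: sys.dm_db_index_physical_stats exposes an index_depth column, the number of B-tree levels, which is how many pages a single seek traverses from root to leaf (the table name below is a placeholder).

    -- Hedged sketch: index_depth = number of B-tree levels per index.
    -- 'dbo.MyTable' is a placeholder; substitute your own table.
    SELECT i.name AS index_name,
           s.index_depth,
           s.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'DETAILED') AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id
     AND i.index_id  = s.index_id
    WHERE s.index_level = 0;   -- one summary row per index (leaf level)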

How to generate unique ids in C [closed]

I am developing an app which requires generating an ID for each new user. I want to do it with the smallest number of characters that still allows 100 billion different possible IDs. How should I do that, and how do I avoid giving two users the same ID? Should I check whether an ID already exists? Should I use a random ID generator, or give out IDs in order, like 001, 002, and so on?
This depends entirely on what kind of functionality you expect from this ID. Do you intend for these IDs to correlate with persisted data, such as a database? If so, it might be more prudent to let the database handle unique ID generation for you. Otherwise, sequential values such as 1, 2, 3... would probably be ideal. A 32-bit unsigned long covers the first 4 billion or so users, but for 100 billion distinct IDs you need a 64-bit type such as unsigned long long; if you somehow go beyond that, you can rethink your data storage then.
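
If the IDs do live in a database, a minimal sketch of that suggestion (PostgreSQL syntax; the table is hypothetical): a bigint identity column tops out around 9.2 quintillion values, comfortably past the 100 billion target.

    -- Minimal sketch: let the database assign sequential IDs.
    -- bigint covers ~9.2 quintillion values, far beyond 100 billion.
    CREATE TABLE app_user (
        user_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        name    text NOT NULL
    );

    INSERT INTO app_user (name) VALUES ('alice') RETURNING user_id;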
The question is very broad.

SQL Server identity or a self-calculated sequence [closed]

I am designing a database. I want to define an automatic sequence for a table's primary key field. What is the best solution for this?
I know I can enable the identity property for a field, but it has some problems (for example, its seed jumps on restarts and failed inserts).
I could also use a self-calculated sequence: for example, compute the max of the field's values, increment it, and use that as the key for the newly inserted record.
Which one is better? Is there another solution?
To my mind there are three options:
Identity - the simplest, but can leave gaps when the server is restarted, etc.
Sequence - a separate object; you will still have gaps in case of a rollback (see the sketch after this list)
A separate table for the numbers - you won't have gaps, but it can become a hotspot that causes blocking.
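
A hedged sketch of the second option in SQL Server 2012+ (object names are illustrative). Note that a value fetched by NEXT VALUE FOR is consumed even if the insert rolls back, which is where the gaps come from.

    -- Sketch of option 2: a SEQUENCE feeding the primary key (SQL Server 2012+).
    -- Names are illustrative. Rolled-back inserts still consume a value,
    -- so gaps remain possible.
    CREATE SEQUENCE dbo.OrderSeq AS bigint START WITH 1 INCREMENT BY 1;

    CREATE TABLE dbo.Orders (
        OrderId bigint NOT NULL
            CONSTRAINT DF_Orders_OrderId DEFAULT (NEXT VALUE FOR dbo.OrderSeq)
            CONSTRAINT PK_Orders PRIMARY KEY,
        Note    nvarchar(100) NULL
    );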

Database design for similar data across multiple points in time [closed]

I am attempting to come up with a database design that works well for a specific division of my company.
Basically, I have a list of account numbers with a ton of fields associated with them. My division needs to compare how these fields change over time (what was in that field for this account number a year ago?).
I am currently considering a very linear approach where I use one large table per month, with the date stamped into the table name: one table would be named AccountInfo04012013 and the next month would get a new table called AccountInfo05012013. This way we can make comparisons between any two months.
What are the drawbacks of this plan, and what should I be doing instead?
You are going to have to use timestamps; all database managers have this built in.
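
A minimal sketch of what that looks like (table and column names are hypothetical): one history table keyed by account number and snapshot date, instead of a new AccountInfoMMDDYYYY table every month.

    -- Sketch: one history table with a snapshot date; names hypothetical.
    CREATE TABLE AccountInfoHistory (
        AccountNumber varchar(20)   NOT NULL,
        SnapshotDate  date          NOT NULL,
        FieldA        varchar(100)  NULL,
        FieldB        decimal(18,2) NULL,
        PRIMARY KEY (AccountNumber, SnapshotDate)
    );

    -- What was FieldA for this account in each of two months?
    SELECT SnapshotDate, FieldA
    FROM AccountInfoHistory
    WHERE AccountNumber = '12345'
      AND SnapshotDate IN ('2013-04-01', '2013-05-01');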

table to store huge data [closed]

Does anyone have an idea about storing email bodies in SQL Server? Each email body is about 15 lines. What has to be done in order to maintain a table with 40 different email contents?
Example:
a : some content should be sent
b : some other content
You'll probably want an nvarchar(max) column to store the contents of the body. This allows you to store up to 2GB worth of text...which is kind of a lot of text, so you should be good.
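
A minimal sketch of that answer (table and column names are illustrative):

    -- Sketch: nvarchar(max) bodies, one row per email; names illustrative.
    -- nvarchar(max) holds up to 2 GB of text per value.
    CREATE TABLE EmailTemplate (
        TemplateId  int IDENTITY(1,1) PRIMARY KEY,
        TemplateKey varchar(10)   NOT NULL UNIQUE,  -- e.g. 'a', 'b'
        Body        nvarchar(max) NOT NULL
    );

    INSERT INTO EmailTemplate (TemplateKey, Body)
    VALUES ('a', N'some content should be sent'),
           ('b', N'some other content');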
