I need to create one index from two tables that are not related. But when I try to do so, the response says the number of records fetched equals the combined row count of both tables, yet no index is created.
Please help.
Creating one index from two unrelated tables sounds very strange. Why can't you create two indexes?
In case you have no other choice, create a unique id field in the schema (look at http://wiki.apache.org/solr/UniqueKey). Also check in the DB logs which queries are actually run.
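For reference, the unique key is declared in Solr's schema.xml. A minimal sketch, assuming a string field named id (when combining two tables, the values written into this field must not collide, e.g. by prefixing each DB id with its table name):

```xml
<!-- schema.xml (sketch): every document must carry a non-colliding id -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```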
Hello, I am currently trying out different data-automation processes with Python and PostgreSQL. I automated the cleaning and upload of a dataset with 40,000 data entries into my database. Due to some flaws in my process, I had to truncate some tables or delete data entries.
I am using: Python 3.9.7 / PostgreSQL 13.3 / pgAdmin 4 v5.7
Problem
Currently I have tables whose IDs start at 44700 instead of 1 (due to my editing).
For example, a table of train stations begins with ID 41801 and ends with ID 83599.
Question
How can I renumber my IDs so that they start at 1 instead of 41801?
After looking online I found topics like "bloat" and "reindex". I tried VACUUM and REINDEX, but neither made a visible difference in my tables. As of now my tables have no relations to each other. What would be the approach to solve this in PostgreSQL? Is there some hidden function I overlooked? Maybe it's not a problem at all, but it definitely looks weird. At some point I will end up with IDs around 250,000 while only having 40,000 data entries in my table.
Do you use a sequence to generate the ID column of your table? You can check in pgAdmin whether your database has a Sequence object: Schemas -> public -> Sequences.
You can change the current sequence value by right-clicking the sequence and setting it to 1. But only do this if you have deleted all rows in the table, and before you start to import your data again.
As long as no other table references the ID column of your train station table, you can even update the IDs with an UPDATE statement like:
UPDATE trainStations SET ID = ID - 41800;  -- 41801 becomes 1
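The renumbering can be sketched end to end in Python, with SQLite standing in for PostgreSQL; the table and station names below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical train-station table whose IDs start at 41801 after truncation.
cur.execute("CREATE TABLE trainStations (ID INTEGER PRIMARY KEY, name TEXT)")
cur.executemany(
    "INSERT INTO trainStations (ID, name) VALUES (?, ?)",
    [(41801, "Alpha"), (41802, "Beta"), (41803, "Gamma")],
)

# Shift every ID down so the lowest one becomes 1.
# Safe only while no other table references trainStations.ID.
cur.execute("UPDATE trainStations SET ID = ID - 41800")

print(cur.execute("SELECT ID, name FROM trainStations ORDER BY ID").fetchall())
# [(1, 'Alpha'), (2, 'Beta'), (3, 'Gamma')]
```

Remember to also reset the sequence afterwards, otherwise the next insert will continue from the old counter value.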
Hi, I have two types of data entry which need to be stored in a DB so they can be used for calculations later. Each entry has a unique ID. The data entries are:
1.
2.
So I have to save this data in the DB. With my understanding, I thought of the following:
Create 3 tables: Common, Entry1 and Entry2 (multiple tables, with the unique ID as the name).
The Common table will have one row per entry and record which table to refer to for the value (Entry1/Entry2).
The Entry1 data is a single line, so it can be inserted directly. But the Entry2 data will require a complete table because of its structure. So whenever we add a type-2 entry, a new table has to be created, which will create a lot of tables.
Or I could save the type-2 values in another database and fetch the values from there. So please suggest a way that is better than this.
I believe you have two entry types with identical structure, but one containing a single row and one containing many.
In this case, I would suggest a single table containing the data for all entries, with a second table grouping them together. Even if your input contains a single row, it should still get an EntryID. Perhaps something like the below:
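The original schema did not survive, so here is a sketch of the idea, using SQLite from Python; the table and column names (Entries, EntryRows) are illustrative, not from the original answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One grouping row per logical entry, and one shared table for all entry data.
cur.executescript("""
CREATE TABLE Entries (
    EntryID   INTEGER PRIMARY KEY,
    EntryType INTEGER NOT NULL      -- 1 = single-row entry, 2 = multi-row entry
);
CREATE TABLE EntryRows (
    RowID   INTEGER PRIMARY KEY,
    EntryID INTEGER NOT NULL REFERENCES Entries(EntryID),
    Value   TEXT
);
""")

# A type-1 entry still gets an EntryID; it simply owns a single row.
cur.execute("INSERT INTO Entries (EntryID, EntryType) VALUES (1, 1)")
cur.execute("INSERT INTO EntryRows (EntryID, Value) VALUES (1, 'single value')")

# A type-2 entry owns many rows in the same table -- no per-entry tables needed.
cur.execute("INSERT INTO Entries (EntryID, EntryType) VALUES (2, 2)")
cur.executemany("INSERT INTO EntryRows (EntryID, Value) VALUES (2, ?)",
                [("row a",), ("row b",), ("row c",)])

print(cur.execute(
    "SELECT EntryID, COUNT(*) FROM EntryRows GROUP BY EntryID").fetchall())
# [(1, 1), (2, 3)]
```

This keeps the row count per table bounded by the data, not by the number of entries, and avoids creating a new table for every type-2 entry.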
I am wondering what the possibilities are for storing this kind of data in an efficient way.
Let's say I have 100 kinds of different messages that I need to store. All messages have common data like message ID, message name, sender, receiver, insert date, etc. Every kind of message has its own unique columns that we want to store and index (for quick queries).
We don't want to use 100 different tables; that would be impossible to work with.
The best way that I could come up with is to use 2-3 tables:
1. One for the common data.
2. One for the extra unique data, where every column has a generic name like: foreign key, column1, column2 ... column20. There will be an index on every column plus the foreign key (20 indexes for 20 columns).
3. Optionally, a metadata table describing the generic columns for every unique message.
UPDATE:
Let's say that I am a backbone for passing data: there are 100 different kinds of data (messages). I want to store every message that comes through me, but not as bulk data, because later I would like to query the data based on the unique columns of every message type.
Is there a better way out there that I don't know about?
Thanks.
BTW, the database is Oracle.
UPDATE 2:
Can a NoSQL database give me a better solution?
The approach of 2 or 3 tables that you suggest seems reasonable to me.
An alternative would be to store all those unique columns in the same table along with the common data. That should not prevent you from creating indexes on them.
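The single-wide-table alternative can be sketched as follows, with SQLite from Python standing in for Oracle; the message types (tracking, cargo) and type-specific columns are invented for illustration, and rows of other types simply leave them NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Common columns plus each type's unique columns, side by side in one table.
cur.executescript("""
CREATE TABLE messages (
    msg_id       INTEGER PRIMARY KEY,
    msg_type     TEXT NOT NULL,
    sender       TEXT,
    receiver     TEXT,
    tracking_no  TEXT,   -- unique to 'tracking' messages, NULL otherwise
    cargo_weight REAL    -- unique to 'cargo' messages, NULL otherwise
);
CREATE INDEX idx_tracking_no  ON messages (tracking_no);
CREATE INDEX idx_cargo_weight ON messages (cargo_weight);
""")

cur.execute("INSERT INTO messages (msg_type, sender, receiver, tracking_no) "
            "VALUES ('tracking', 'a', 'b', 'TRK-1')")
cur.execute("INSERT INTO messages (msg_type, sender, receiver, cargo_weight) "
            "VALUES ('cargo', 'a', 'b', 12.5)")

# Type-specific indexes still serve type-specific queries.
print(cur.execute(
    "SELECT msg_id FROM messages WHERE tracking_no = 'TRK-1'").fetchall())
# [(1,)]
```

The trade-off is a sparse table (many NULLs), but the columns keep their real names and types, which the generic column1..column20 design gives up.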
Here is your answer:
You can store as many messages as you want against each message type.
UPDATE:
This is exactly what you want. There are different message types, as you stated, e.g. tracking, cargo or whatever. These categories are stored in the msg_type table; you can store as many categories in msg_type as you want.
Then each msg_type has numerous messages, which are stored in the Messages table. You can store as many messages here as you want, without any limit.
Here is your database SQL:
create table msg_type(
type_id number(14) primary key,
type varchar2(50)
);
create sequence msg_type_seq
start with 1 increment by 1;
create or replace trigger msg_type_trig
before insert on msg_type
referencing new as new
for each row
begin
select msg_type_seq.nextval into :new.type_id from dual;
end;
/
create table Messages(
msg_id number(14) primary key,
type_id number(14) constraint Messages_fk references msg_type(type_id),
msg_date timestamp(0) default sysdate,
msg varchar2(3900));
create sequence Messages_seq
start with 1 increment by 1;
create or replace trigger Messages_trig
before insert on Messages
referencing new as new
for each row
begin
select Messages_seq.nextval into :new.msg_id from dual;
end;
/
I need a suggestion about an SQL table schema. I've created a table named Chats. Would it be better to add two columns (like ID and Message), or one column that contains both the IDs and the messages? And which one of them will work faster?
Personally I'd model this as two tables:
Chats
- ID
- Name
Messages
- ID
- ChatID
- Message
- SentDate
There should be a foreign key from Messages.ChatID to Chats.ID.
Otherwise you're going to have to create a duplicate chat row each time someone sends a message.
I would strongly recommend against keeping IDs and values in the same column; it makes it nearly impossible to join on and will create all sorts of problems later on.
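The two-table model above can be sketched concretely, here with SQLite from Python; the sample chat name and messages are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the FK in SQLite
cur = conn.cursor()

cur.executescript("""
CREATE TABLE Chats (
    ID   INTEGER PRIMARY KEY,
    Name TEXT
);
CREATE TABLE Messages (
    ID       INTEGER PRIMARY KEY,
    ChatID   INTEGER NOT NULL REFERENCES Chats(ID),
    Message  TEXT,
    SentDate TEXT
);
""")

# One chat row, many message rows -- no duplicated chat data per message.
cur.execute("INSERT INTO Chats (Name) VALUES ('general')")
cur.executemany(
    "INSERT INTO Messages (ChatID, Message, SentDate) VALUES (1, ?, ?)",
    [("hello", "2024-01-01"), ("hi there", "2024-01-01")],
)

print(cur.execute("""
    SELECT c.Name, COUNT(m.ID)
    FROM Chats c JOIN Messages m ON m.ChatID = c.ID
    GROUP BY c.ID
""").fetchall())
# [('general', 2)]
```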
There is no reason to use a single column. Add as many columns as you need, each with its own data type, because you will then be able to filter and sort the table by each column later. You will also be able to add constraints, indexes, statistics, etc. if needed.
Any query performed on that table will work faster if you use separate columns.
I have a SQL Server database with two tables: Users and Achievements. My users can have multiple achievements, so it is a many-to-many relation.
At school we learned to create an associative table for that sort of relation. That means creating a table with a UserID and an AchievementID. But if I have 500 users and 50 achievements, that could lead to 25,000 rows.
As an alternative, I could add a binary field to my Users table. For example, if that field contained 10010, it would mean that this user unlocked the first and the fourth achievements.
Is there another way? And which one should I use?
Your alternative way isn't a very good approach at all. Not only is it not queryable (how many people unlocked achievement #10?), it also means nothing by itself. Plus, what are you going to do when you add 5 more achievements? Update all the previous users to append "00000" to their "achievements" column?
There is nothing wrong with the associative table as long as you index it properly. With that approach the data is infinitely queryable and, perhaps more importantly, makes sense!
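The associative-table approach, including the "who unlocked achievement #10" query, can be sketched like this with SQLite from Python; the user and achievement names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE Users        (UserID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Achievements (AchievementID INTEGER PRIMARY KEY, Title TEXT);
-- Associative table: one row per unlocked achievement.
-- The composite primary key doubles as the index you need for lookups.
CREATE TABLE UserAchievements (
    UserID        INTEGER NOT NULL REFERENCES Users(UserID),
    AchievementID INTEGER NOT NULL REFERENCES Achievements(AchievementID),
    PRIMARY KEY (UserID, AchievementID)
);
""")

cur.executemany("INSERT INTO Users VALUES (?, ?)", [(1, "ann"), (2, "bob")])
cur.executemany("INSERT INTO Achievements VALUES (?, ?)",
                [(10, "First Steps"), (11, "Explorer")])
cur.executemany("INSERT INTO UserAchievements VALUES (?, ?)",
                [(1, 10), (2, 10), (2, 11)])

# "How many people unlocked achievement #10?" -- trivial with this model.
print(cur.execute(
    "SELECT COUNT(*) FROM UserAchievements WHERE AchievementID = 10"
).fetchone()[0])
# 2
```

Note also that 25,000 rows of two integers is tiny by database standards; the associative table only grows with achievements actually unlocked, not with users x achievements.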