I was curious if it's possible to SELECT a specific index from a list collection in Cassandra. Say I have:
CREATE TABLE users (
user_id text PRIMARY KEY,
ordered_list list<text>
);
UPDATE users
SET ordered_list = [ 'thing1', 'thing2', 'thing3' ] WHERE user_id = 'user1';
Is it possible to then get back an index of ordered_list, such as ordered_list[1] from a CQL query instead of the entire list?
No, you can't. You can UPDATE and DELETE with subscripts, but not SELECT or INSERT.
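For illustration, here is what subscript access looks like on the write path against the table above (a sketch; list indexes are zero-based, so [1] refers to 'thing2'):
UPDATE users SET ordered_list[1] = 'thing2-updated' WHERE user_id = 'user1';
DELETE ordered_list[1] FROM users WHERE user_id = 'user1';
To read a single element, you have to SELECT the whole list and index it client-side.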
Is there any way to check whether a given partition key value exists within a collection in Cosmos DB?
For example, the available partition keys are US states such as WA, NY, MA, etc. Is there a SQL statement for Cosmos DB to check whether NY is among the partition key values in the collection?
If that is impossible, is there a way to return a list of the partition key values in the collection?
You can try something like
SELECT DISTINCT myColl.partitionKey FROM myColl
which will return the unique values of the partition key. Replace partitionKey with whatever property is the partition key in your collection.
To eliminate the need for a cross-partition query (which matters more and more as you manage more data), you could do a SELECT COUNT query over all documents that have the partition key value you're checking, and test whether the count is greater than 0.
Here's a query that fits your example where your partition key is /state:
SELECT VALUE COUNT(1) > 0
FROM c
WHERE c.state = "NY"
I have a posts table in SQL Server and I need to select (say) the first 10 rows ordered by the count of their upvotes. Here is the DB script:
create database someforum
go
use someforum
go
create table users (
user_id int identity primary key,
username varchar(80) unique not null
);
create table posts (
post_id int identity primary key,
post_time datetime,
post_title nvarchar(32),
post_body nvarchar(255),
post_user int foreign key references users(user_id)
);
create table votes (
vote_id int identity primary key,
user_id int foreign key references users(user_id),
vote_type bit, --upvote=true downvote=false
post_id int foreign key references posts(post_id)
);
insert into users values ('foo'),('bar')
insert into posts values
(getdate(),N'a post by foo',N'hey',1),
(getdate(),N'a post by bar',N'hey!',2)
insert into votes values (1,0,1),(2,0,1),(1,1,2),(2,1,2) --first post downvoted by its poster (foo) and bar, second post was upvoted by both users
I need an efficient query to select the top 10 rows (and then the next 10, and so on) from posts based on the count of upvotes. How can I achieve this in SQL Server 2008?
Important edit: I stupidly forgot to mention that I'm using SQL Server 2008 R2, which predates the OFFSET ... FETCH NEXT clause (it was introduced in SQL Server 2012). I also edited out what is currently irrelevant to my needs.
Here's what I wanted (without using the score column):
select top 10 p.post_title, sum(case when vote_type = 1 then 1 else -1 end) as score
from posts p join votes v on p.post_id = v.post_id
group by p.post_title
order by score desc
As for the alternative to OFFSET ... FETCH NEXT, I found a great solution on DBA Stack Exchange.
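The usual 2008-compatible pattern (a sketch along those lines, not necessarily the exact solution I found) wraps the score query in a CTE with ROW_NUMBER() and filters on a row range:
;with scored as (
    select p.post_title,
           sum(case when vote_type = 1 then 1 else -1 end) as score,
           row_number() over (
               order by sum(case when vote_type = 1 then 1 else -1 end) desc
           ) as rn
    from posts p join votes v on p.post_id = v.post_id
    group by p.post_title
)
select post_title, score
from scored
where rn between 11 and 20; -- rows 11-20, i.e. the second page of 10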
There is no "best", but a working query might involve select top 10 ... order by Score desc. I realize that your posts table doesn't have a Score column (one that aggregates and denormalizes the votes), but you can change that; a sketch is below. For paging, see also the OFFSET / FETCH clause (available from SQL Server 2012).
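One possible version of that denormalization (a sketch; keeping score current on new votes, via a trigger or application code, is left out):
alter table posts add score int not null default(0);

update p
set score = s.score
from posts p
join (
    select post_id,
           sum(case when vote_type = 1 then 1 else -1 end) as score
    from votes
    group by post_id
) s on s.post_id = p.post_id;

select top 10 post_title, score
from posts
order by score desc;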
You could use a GridView control to display the results: this will allow users to sort on various columns with a minimum of code on your part, and it also supports pagination, with numbered links at the bottom of the GridView letting users move through the list of results.
Using a GridView with a page size of 10 will display your top 10, and users will still have the option of moving through the rest of the sorted list.
1) You can calculate and filter with this query:
SELECT p.post_id, p.post_title, COUNT(*) AS upvotes
FROM posts AS p
INNER JOIN votes AS v ON p.post_id = v.post_id
WHERE v.vote_type = 1
GROUP BY p.post_id, p.post_title
ORDER BY upvotes DESC
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY
2) You can shift to the next page of 10 by increasing the offset at the end of the query: OFFSET 10 ROWS, OFFSET 20 ROWS, etc. (note that OFFSET / FETCH requires SQL Server 2012 or later).
I have an array of jsonb elements (jsonb[]), with id and text. To remove an element I could use:
UPDATE "Users" SET chats = array_remove(chats, '{"id": 2, "text": "my message"')
But I want to delete the message by its id alone, because fetching the full message would cost me another query.
Assumptions, filling in missing information:
Your table has a PK called user_id.
You want to remove all elements with id = 2 across the whole table.
You don't want to touch other rows.
id is unique within each array of chats.
UPDATE "Users" u
SET chats = array_remove(u.chats, d.chat)
FROM (
SELECT user_id, chat
FROM "Users", unnest(chats) chat
WHERE chat->>'id' = '2'
) d
WHERE d.user_id = u.user_id;
The subquery is needed because array_remove() only matches complete elements: unnest() expands each array so we can find the whole jsonb value whose id is 2, and that value is then removed from the array.
I have two tables in MS Access that I want to append, namely tblMaster and tblNew. The problem is that some of the data in tblNew is already in tblMaster. How can I append tblNew to tblMaster and exclude the data that is already in tblMaster?
Having a specific unique key field would certainly be easiest, but you can still accomplish this by using a combination of fields as the unique key (essentially creating your own).
You can then do the append by joining the new table to the old table on the fields you designate as the unique key.
Suppose you have an "Employees" table and an "Employees New" table. You want the employee's name and badge number to form the unique key. This would be the SQL to add in any records that do not exist in the old employee table.
INSERT INTO [Employees]
( FIELD1,
EMPLOYEENAME,
BADGENUMBER,
FIELD2,
FIELD3,
FIELD4 )
SELECT NEW.FIELD1,
NEW.EMPLOYEENAME,
NEW.BADGENUMBER,
NEW.FIELD2,
NEW.FIELD3,
NEW.FIELD4
FROM [Employees New] AS NEW
LEFT JOIN [Employees] AS OLD
ON (NEW.EMPLOYEENAME = OLD.EMPLOYEENAME) AND
(NEW.BADGENUMBER = OLD.BADGENUMBER)
WHERE (OLD.EMPLOYEENAME Is Null);
This works by linking the "Employees" table with the "Employees New" table on the fields chosen to make up the unique key. It limits the results to only those records where the new employee is not already in the old employee table.
The next decision is whether you want to update existing records in the "Employees" table with the values from the new table. If so, you'd use an approach like this.
UPDATE [Employees] AS OLD
INNER JOIN [Employees New] AS NEW
ON (OLD.BADGENUMBER = NEW.BADGENUMBER) AND
(OLD.EMPLOYEENAME = NEW.EMPLOYEENAME)
SET OLD.Field1 = NEW.Field1,
OLD.Field2 = NEW.Field2,
OLD.Field3 = NEW.Field3,
OLD.Field4 = NEW.Field4;
This works by joining the "Employees" and "Employees New" tables, showing only records where the unique key fields match in both tables. We then update all fields.
Hopefully that points you in the right direction.
Does anyone know how I can rewrite the script below in a new way to delete duplicate rows with better performance?
DELETE lt1
FROM #listingsTemp lt1
JOIN #listingsTemp lt2 ON lt1.code = lt2.code
WHERE lt1.classification_id > lt2.classification_id
  AND lt1.fap <= lt2.fap
Delete duplicate rows in a SQL table:
delete from table_a
where rowid not in
    (select min(rowid) from table_a
     group by column1, column2);
(Note that rowid is an Oracle pseudo-column; SQL Server has no direct equivalent, so there you'd use an identity column or ROW_NUMBER() instead, as in the next answer.)
1 - Create an identity column (ID) for your table (t1).
2 - Do a GROUP BY on your table with the columns that define a duplicate, and collect the IDs of the duplicated records.
3 - Now simply delete the records from t1 whose IDs are in the set of duplicated IDs (see the sketch after this list).
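A minimal T-SQL sketch of those three steps against the question's #listingsTemp table (treating code alone as the duplicate key is an assumption):
ALTER TABLE #listingsTemp ADD id INT IDENTITY(1,1);

-- keep the first-inserted row per code, delete the rest
DELETE FROM #listingsTemp
WHERE id NOT IN (
    SELECT MIN(id)
    FROM #listingsTemp
    GROUP BY code
);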
Look into BINARY_CHECKSUM: you could use it when creating your temp tables to more quickly determine whether the data is the same. For example, create a new field in both temp tables storing the BINARY_CHECKSUM value, then just delete where those fields are equal.
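A sketch of that idea against the question's #listingsTemp table (the column list fed to BINARY_CHECKSUM is an assumption; the real key column stays in the join to guard against hash collisions):
ALTER TABLE #listingsTemp ADD row_hash INT;

UPDATE #listingsTemp
SET row_hash = BINARY_CHECKSUM(code);  -- checksum of the columns that define a duplicate

DELETE lt1
FROM #listingsTemp lt1
JOIN #listingsTemp lt2
  ON lt1.row_hash = lt2.row_hash                 -- cheap integer comparison first
 AND lt1.code = lt2.code                         -- guard against hash collisions
 AND lt1.classification_id > lt2.classification_id;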
odiseh's answer seems valid (+1), but if for some reason you can't alter the structure of the table (because you don't control the code of the applications that use it, for example), you could write a job that runs every night and deletes the duplicates (using Moayad Mardini's code).