MS Access row number, specify an index

Is there a way in MS Access to return a dataset between specific indexes?
So let's say my dataset is:
rank | first_name | age
-----+------------+----
   1 | Max        | 23
   2 | Bob        | 40
   3 | Sid        | 25
   4 | Billy      | 18
   5 | Sally      | 19
But I only want to return the records between 'rank' 2 and 4, so my result set is Bob, Sid and Billy. However, rank is not part of the table; it should be generated when the query is run. Why don't I use an autogenerated number? Because if a record is deleted the numbering becomes inconsistent, and what if I wanted the results in reverse!
This is obviously very simple. The reason I ask is that I am working on a product catalogue and am looking for a more efficient way of paging through the returned dataset: if I only return one page's worth of data from the database, that is obviously going to be quicker than returning the complete set of 3000 records and then sub-selecting from it!
Thanks R.

Original suggestion:
SELECT * from table where rank BETWEEN 2 and 4;
Modified after the comment that rank does not exist in the table structure:
Select top 100 * from table;
And if you want the subsequent results, you can take the ID of the last record from the first query, say it was ID 100, and use a WHERE clause to get the next 100:
Select top 100 * from table where ID > 100;
But these won't give you what you're looking for either, I bet.

How are you calculating rank? I assume you are basing it on some data in another dataset somewhere. If so, create a function, do a table join, or do something that can calculate rank based on values in other table(s), then you can do queries based on the rank() function.
For example:
select *
from table
where rank() between 2 and 4
If you are not calculating rank based on some data somewhere, there really isn't a way to write this query, and you might as well be returning three random rows from the table.

I think you need to use a correlated subquery to calculate the rank on the fly, e.g. I'm guessing the rank is based on name:
SELECT T1.first_name, T1.age,
(
SELECT COUNT(*) + 1
FROM MyTable AS T2
WHERE T1.first_name > T2.first_name
) AS rank
FROM MyTable AS T1;
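To return just ranks 2 to 4 from that, a sketch (untested in Access, but derived tables are supported there) that wraps the correlated subquery and filters on the computed rank:
SELECT ranked.first_name, ranked.age, ranked.rank
FROM (
    SELECT T1.first_name, T1.age,
           (SELECT COUNT(*) + 1
            FROM MyTable AS T2
            WHERE T1.first_name > T2.first_name) AS rank
    FROM MyTable AS T1
) AS ranked
WHERE ranked.rank BETWEEN 2 AND 4
ORDER BY ranked.rank;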
The bad news is that the Access database engine is poorly optimized for this kind of query; in my experience, performance will start to noticeably degrade beyond a few hundred rows.
If it is not possible to maintain the rank on the db side of the house (e.g. high insertion environment) consider doing the paging on the client side. For example, an ADO classic recordset object has properties to support paging (PageCount, PageSize, AbsolutePage, etc), something for which DAO recordsets (being of an older vintage) have no support.
As always, you'll have to perform your own timings, but I suspect that with, say, 10K rows you will find it faster to take on the overhead of fetching all the rows into an ADO recordset and then finding the page (and perhaps fabricating a smaller ADO recordset consisting of just that page's worth of rows) than to perform a correlated subquery that fetches only the rows for the page.

Unfortunately the LIMIT keyword isn't available in MS Access -- that's what is used in MySQL for a multi-page presentation. If you can write an order key into the results table, then you can use it something like this:
SELECT TOP 25 MyOrder, Etc FROM Table1 WHERE MyOrder in
(SELECT TOP 55 MyOrder FROM Table1 ORDER BY MyOrder DESC)
ORDER BY MyOrder ASC
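A related nested-TOP pattern pages from the start of the ordering: take the first N*P rows, reverse-sort and keep the last P, then re-sort. A sketch for page 3 with 25 rows per page (rows 51-75), assuming MyOrder is unique:
SELECT *
FROM (
    SELECT TOP 25 *
    FROM (
        SELECT TOP 75 *
        FROM Table1
        ORDER BY MyOrder ASC
    ) AS FirstRows
    ORDER BY MyOrder DESC
) AS PageRows
ORDER BY MyOrder ASC;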

If I understand you correctly, there are only first_name and age columns in your table. If that is the case, then there is no way to return Bob, Sid, and Billy with a single query, unless you do something like:
SELECT * FROM Table
WHERE FirstName = 'Bob'
OR FirstName = 'Sid'
OR FirstName = 'Billy'
But I think that this is not what you are looking for.
This is because SQL databases make no guarantee as to the order that the data will come out of the database unless you specify an ORDER BY clause. It will usually come out in the same order it was added, but there are no guarantees, and once you get a lot of rows in your table, there's a reasonably high probability that they won't come out in the order you put them in.
As a side note, you should probably add a "rank" column (this column is usually called id) to your table and make it an auto incrementing integer (see the Access documentation), so that you can do the query mentioned by Sev. It's also important to have a primary key so that you can be certain which rows are being updated when you run an update query, or which rows are being deleted when you run a delete query. For example, if you had two people named Max who were both 23, how would you delete one row without deleting the other? If you had an auto incrementing unique column in there, you could specify the unique ID in your query to delete only one.
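If you would rather add that column with DDL than through the table designer, Access SQL accepts a COUNTER (autonumber) type; a sketch, assuming a table named Products, which may need to be executed from DAO/ADO code rather than the query window depending on your Access version:
ALTER TABLE Products ADD COLUMN ID COUNTER;
ALTER TABLE Products ADD CONSTRAINT PK_Products PRIMARY KEY (ID);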
[ADDITION]
Upon reading your comment: if you add an autoincrement field, want to read 3 rows, and know the ID of the first row you want, then you can use TOP to read 3 rows.
Assuming your data looks like this
ID | first_name | age
---+------------+----
 1 | Max        | 23
 2 | Bob        | 40
 6 | Sid        | 25
 8 | Billy      | 18
15 | Sally      | 19
You can query Bob, Sid and Billy with the following query:
SELECT TOP 3 FirstName, Age
From Table
WHERE ID >= 2
ORDER BY ID

Related

TSQL Comparing 2 tables

I have 2 tables in 2 databases. The schema for the tables is identical. There are no timestamps or last-updated information. Table A is a live table, that is, it's updated in "the" program. Updates, inserts and deletes all happen in Table A. Table B is a backup made weekly. Is there a quick way to compare the 2 tables and give me results similar to:
I | 54
D | 55
U | 60
So record 54 is new in the live table, record 55 was deleted from the live table, and record 60 in the live table was updated.
This needs to work in SQL Server 2008 and up.
Fields: id, first_name, last_name, phone, email, address_id, birth_date, last_visit, provider_id, comments
I have no control over the schema. I have read-only access to Table A, and read-write to Table B.
Would it be easier to store a hash of each of Table A's rows rather than a full copy of the table? Generally speaking, I need to know which rows have been updated, inserted and deleted without a built-in timestamp. I have the weekly backup table to look at, but I could create a hash table if needed.
Use two joins: a FULL JOIN to check for id existence and identify inserts and deletes, and a second join on both id and hash to check row equality.
In the example I have used CHECKSUM for simplicity, but I recommend you read up on the cons of using it and consider alternatives like HASHBYTES or checking each column for equality.
SELECT id, CHECKSUM(*) AS hash
INTO #live
FROM live.dbo.tbl;

SELECT id, CHECKSUM(*) AS hash
INTO #archive
FROM archive.dbo.tbl;

SELECT COALESCE(l1.id, a1.id) AS id,
       CASE WHEN l1.id IS NULL THEN 'D'   -- only in the backup: deleted
            WHEN a1.id IS NULL THEN 'I'   -- only in live: inserted
            WHEN a2.id IS NULL THEN 'U'   -- same id, different hash: updated
       END AS change_type
FROM #live l1
FULL JOIN #archive a1 ON a1.id = l1.id
LEFT JOIN #archive a2 ON a2.id = l1.id
                     AND a2.hash = l1.hash
WHERE l1.id IS NULL OR a1.id IS NULL OR a2.id IS NULL;   -- keep only rows that changed
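If CHECKSUM's collision risk is a concern, a sketch of the same staging step using HASHBYTES over the listed columns instead (this assumes SQL Server 2012+ for CONCAT and SHA2_256; on 2008 you would CAST each column, concatenate with +, and use SHA1, and note that HASHBYTES input is capped at 8000 bytes before SQL Server 2016):
SELECT id,
       HASHBYTES('SHA2_256',
                 CONCAT(first_name, '|', last_name, '|', phone, '|', email, '|',
                        address_id, '|', birth_date, '|', last_visit, '|',
                        provider_id, '|', comments)) AS hash
INTO #live
FROM live.dbo.tbl;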
I'm going to recommend a tool, but it's not free, although it has a fully functioning 30 day trial period. If you're going to compare data in SQL Server tables, look at Red Gate's SQL Data Compare. It's not cheap, and it will pay for itself many times over. (If you need to compare schemas, their SQL Compare does that.)
Barring that, you could use a third table: write a compare query that selects rows in one table and not the other (with a field indicating which), rows in the other table and not the first, and then compares field by field to find the rows that differ. That should work too. It will take longer to build, but if it's just the one table, the time it takes to write that code should cost less than the Red Gate tools.
If there is a column or set of columns that can uniquely identify each row, then a series of SQL statements could be written to identify the inserts, updates and deletes. If there isn't a unique row identifier, or the unique identifier changes (for example, one of the columns that makes it unique changes), then no.
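For example, with id as the unique row identifier, those statements could look roughly like this (TableA is the live table, TableB the weekly backup; the EXISTS/EXCEPT form treats NULLs as equal, which a plain <> comparison does not):
-- Inserts: ids in the live table that are not in the backup
SELECT 'I' AS change_type, a.id
FROM TableA AS a
WHERE NOT EXISTS (SELECT 1 FROM TableB AS b WHERE b.id = a.id);

-- Deletes: ids in the backup that are no longer in the live table
SELECT 'D' AS change_type, b.id
FROM TableB AS b
WHERE NOT EXISTS (SELECT 1 FROM TableA AS a WHERE a.id = b.id);

-- Updates: same id, but at least one column differs
SELECT 'U' AS change_type, a.id
FROM TableA AS a
JOIN TableB AS b ON b.id = a.id
WHERE EXISTS (SELECT a.first_name, a.last_name, a.phone, a.email, a.address_id,
                     a.birth_date, a.last_visit, a.provider_id, a.comments
              EXCEPT
              SELECT b.first_name, b.last_name, b.phone, b.email, b.address_id,
                     b.birth_date, b.last_visit, b.provider_id, b.comments);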

SQL Server 2005 SELECT TOP 1 from VIEW returns LAST row

I have a view that may contain more than one row, looking like this:
[rate] | [vendorID]
-------+-----------
  8374 | 1234
  6523 | 4321
  5234 | 9374
In a SPROC, I need to set a param equal to the value of the first column from the first row of the view. Something like this:
DECLARE @rate int;
SET @rate = (select top 1 rate from vendor_view where vendorID = 123)
SELECT @rate
But this ALWAYS returns the LAST row of the view.
In fact, if I simply run the subselect by itself, I only get the last row.
With 3 rows in the view, TOP 2 returns the FIRST and THIRD rows in order. With 4 rows, it's returning the top 3 in order. Yet still top 1 is returning the last.
DERP?!?
This works..
DECLARE @rate int;
CREATE TABLE #temp (vRate int)
INSERT INTO #temp (vRate) select rate from vendor_view where vendorID = 123
SET @rate = (select top 1 vRate from #temp)
SELECT @rate
DROP TABLE #temp
.. but can someone tell me why the first behaves so fudgely and how to do what I want? As explained in the comments, there is no meaningful column by which I can do an order by. Can I force the order in which rows are inserted to be the order in which they are returned?
[EDIT] I've also noticed that: select top 1 rate from ([view definition select]) also returns the correct values time and again.[/EDIT]
That is by design.
If you don't specify how the query should be sorted, the database is free to return the records in any order that is convenient. There is no natural order of a table that serves as a default sort order.
What the order will actually be depends on how the query is planned, so you can't even rely on the same query giving a consistent result over time, as the database will gather statistics about the data and may change how the query is planned based on that.
To get the record that you expect, you simply have to specify how you want them sorted, for example:
select top 1 rate
from vendor_view
where vendorID = 123
order by rate
I ran into this problem on a query that had worked for years. We upgraded SQL Server and all of a sudden, an unordered select top 1 was not returning the final record in a table. We simply added an order by to the select.
My understanding is that, with no ORDER BY, SQL Server will generally return results based on the clustered index, or on whatever index the engine happens to pick. But this is not a guarantee of any particular order.
If you don't have something to order off of, you need to add it. Either add a date inserted column and default it to GETDATE() or add an identity column. It won't help you historically, but it addresses the issue going forward.
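A rough sketch of both options; dbo.vendor_rates is a placeholder for whatever base table the view reads from, and the final query assumes the view is updated to expose the new column:
-- Option 1: a creation timestamp, defaulted at insert time
ALTER TABLE dbo.vendor_rates ADD created_at datetime NOT NULL DEFAULT GETDATE();

-- Option 2: an identity column that captures insert order
ALTER TABLE dbo.vendor_rates ADD row_id int IDENTITY(1,1);

-- Either way, the TOP 1 can then be made deterministic
SELECT TOP 1 rate
FROM vendor_view
WHERE vendorID = 123
ORDER BY created_at DESC;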
While it doesn't necessarily make sense that the results of the query should be consistent, in this particular instance they are, so we decided to leave it 'as is'. Ultimately it would be best to add a column, but this was not an option. The application this belongs to is slated to be discontinued soon, and the database server will not be upgraded from SQL 2005. I don't necessarily like this outcome, but it is what it is: until it breaks, it shall not be fixed. :-x

How can I assign a number to each row in a table representing the record number?

How can I show a number for each row in a table such that when a new record is added the number representing that row goes higher, and when a record is deleted the numbers get updated accordingly?
To be more clear, suppose I have a simple table like this:
ID int (primary key) Name varchar(5)
The ID is set to increment by itself (using the identity specification), so it can't represent the row (record) number, since if I have, for example, 3 records:
ID NAME
1 Alex
2 Scott
3 Sara
and I delete Alex and Scott and add a new record it will be:
3 Sara
4 Mina
So basically I'm looking for a SQL-side solution for doing this, so that I don't have to change anything else in the source code in multiple places.
I tried to write something to get the job done but it fails. Here it is:
SELECT COUNT(*) AS [row number],Name
FROM dbo.Test
GROUP BY ID, Name
HAVING (ID = ID)
This shows as:
row number Name
1 Alex
1 Scott
1 Sara
while I want it to be shown as:
row number Name
1 Alex
2 Scott
3 Sara
If you just want the number against the rows while selecting the data, and not stored in the database, then you can use this:
select row_number() over(order by id) from dbo.Test
This will give the row number n for the nth row.
Try
SELECT id, name, ROW_NUMBER() OVER (ORDER BY id) AS RowNumber
FROM MyTable
What you want is called an auto increment.
For SQL-Server this is achieved by adding the IDENTITY(1,1) attribute to the table definition.
Other RDBMS use a different syntax. Firebird for example has generators, which do the counting. In a BEFORE-INSERT trigger you would assign the ID-field to the current value of the generator (which will be increased automatically).
I had this exact problem a while ago. I was using SQL Server 2000, and although ROW_NUMBER() is the best solution, it isn't available in SQL Server 2000. A workaround is to create a temporary table, insert all the values with an auto increment column, and replace the current table with the new table in T-SQL.
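A sketch of that workaround against the dbo.Test table from the question; SELECT ... INTO with the IDENTITY() function numbers the copied rows (the ORDER BY usually drives the numbering, although SQL Server 2000 does not strictly guarantee it):
SELECT IDENTITY(int, 1, 1) AS RowNumber, ID, Name
INTO #numbered
FROM dbo.Test
ORDER BY ID;

SELECT RowNumber, Name
FROM #numbered
ORDER BY RowNumber;

DROP TABLE #numbered;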

Improving OFFSET performance in PostgreSQL

I have a table I'm doing an ORDER BY on before a LIMIT and OFFSET in order to paginate.
Adding an index on the ORDER BY column makes a massive difference to performance (when used in combination with a small LIMIT). On a 500,000 row table, I saw a 10,000x improvement adding the index, as long as there was a small LIMIT.
However, the index has no impact for high OFFSETs (i.e. later pages in my pagination). This is understandable: a b-tree index makes it easy to iterate in order from the beginning but not to find the nth item.
It seems that what would help is a counted b-tree index, but I'm not aware of support for these in PostgreSQL. Is there another solution? It seems that optimizing for large OFFSETs (especially in pagination use-cases) isn't that unusual.
Unfortunately, the PostgreSQL manual simply says "The rows skipped by an OFFSET clause still have to be computed inside the server; therefore a large OFFSET might be inefficient."
You might want a computed index.
Let's create a table:
create table sales(day date, amount real);
And fill it with some random stuff:
insert into sales
select current_date + s.a as day, random()*100 as amount
from generate_series(1,20) as s(a);
Index it by day, nothing special here:
create index sales_by_day on sales(day);
Create a row position function. There are other approaches, this one is the simplest:
create or replace function sales_pos (date) returns bigint
as 'select count(day) from sales where day <= $1;'
language sql immutable;
Check if it works (don't call it like this on large datasets though):
select sales_pos(day), day, amount from sales;
sales_pos | day | amount
-----------+------------+----------
1 | 2011-07-08 | 41.6135
2 | 2011-07-09 | 19.0663
3 | 2011-07-10 | 12.3715
..................
Now the tricky part: add another index computed on the sales_pos function values:
create index sales_by_pos on sales using btree(sales_pos(day));
Here is how you use it. 5 is your "offset", 10 is the "limit":
select * from sales where sales_pos(day) >= 5 and sales_pos(day) < 5+10;
day | amount
------------+---------
2011-07-12 | 94.3042
2011-07-13 | 12.9532
2011-07-14 | 74.7261
...............
It is fast, because when you call it like this, Postgres uses precalculated values from the index:
explain select * from sales
where sales_pos(day) >= 5 and sales_pos(day) < 5+10;
QUERY PLAN
--------------------------------------------------------------------------
Index Scan using sales_by_pos on sales (cost=0.50..8.77 rows=1 width=8)
Index Cond: ((sales_pos(day) >= 5) AND (sales_pos(day) < 15))
Hope it helps.
I don't know anything about "counted b-tree indexes", but one thing we've done in our application to help with this is break our queries into two, possibly using a sub-query. My apologies for wasting your time if you're already doing this.
SELECT *
FROM massive_table
WHERE id IN (
SELECT id
FROM massive_table
WHERE ...
LIMIT 50
OFFSET 500000
);
The advantage here is that, while it still has to calculate the proper ordering of everything, it doesn't order the entire row--only the id column.
Instead of using an OFFSET, a very efficient trick is to use a temporary table:
CREATE TEMPORARY TABLE just_index AS
SELECT ROW_NUMBER() OVER (ORDER BY myID) AS row_number, myID
FROM mytable;
For 10,000,000 rows this takes about 10 s to create.
Then, when you want to SELECT from or UPDATE your table, you simply:
SELECT *
FROM mytable
INNER JOIN (
    SELECT just_index.myID
    FROM just_index
    WHERE row_number >= *your offset*
    LIMIT 1000000
) indexes ON mytable.myID = indexes.myID
Filtering mytable with only just_index is more efficient (in my case) with an INNER JOIN than with a WHERE myID IN (SELECT ...).
This way you don't have to store the last myID value; you simply replace the offset with a WHERE clause that uses indexes.
"It seems that optimizing for large OFFSETs (especially in pagination use-cases) isn't that unusual."
It seems a little unusual to me. Most people, most of the time, don't seem to skim through very many pages. It's something I'd support, but wouldn't work hard to optimize.
But anyway . . .
Since your application code knows which ordered values it has already seen, it should be able to reduce the result set and the offset by excluding those values in the WHERE clause. Assuming you order by a single column and it's sorted ascending, your app code can store the last value on the page, then add AND your-ordered-column-name > last-value-seen to the WHERE clause in some appropriate way.
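For example, ordering by an indexed unique id column with a page size of 50, the application remembers the largest id it showed on the previous page (say 12345) and asks only for what follows it; a sketch:
SELECT *
FROM massive_table
WHERE id > 12345        -- the last id seen on the previous page
ORDER BY id
LIMIT 50;               -- no OFFSET, so the index satisfies this directly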
Recently I worked on a problem like this and wrote a blog post about how I faced it. It is very similar; I hope it is helpful to anyone.
I use a lazy-list approach with partial acquisition. I replaced the LIMIT and OFFSET pagination of the query with manual pagination.
In my example, the select returns 10 million records; I fetch them and insert them into a "temporary table":
create or replace function load_records ()
returns VOID as $$
BEGIN
drop sequence if exists temp_seq;
create temp sequence temp_seq;
insert into tmp_table
SELECT linea.*
FROM
(
select nextval('temp_seq') as ROWNUM,* from table1 t1
join table2 t2 on (t2.fieldpk = t1.fieldpk)
join table3 t3 on (t3.fieldpk = t2.fieldpk)
) linea;
END;
$$ language plpgsql;
After that, I can paginate without counting each row, using the assigned sequence value instead:
select * from tmp_table where counterrow >= 9000000 and counterrow <= 9025000
From the Java perspective, I implemented this pagination through partial acquisition with a lazy list, that is, a list that extends AbstractList and implements the get() method. The get method can use a data access interface to fetch the next chunk of data and release the previous one from the heap:
@Override
public E get(int index) {
    // refill the partial buffer when the requested index falls outside it
    if (bufferParcial.size() <= (index - lastIndexRoulette))
    {
        lastIndexRoulette = index;
        bufferParcial.clear();
        bufferParcial.addAll(daoInterface.getBufferParcial());
        if (bufferParcial.isEmpty())
        {
            return null;
        }
    }
    return bufferParcial.get(index - lastIndexRoulette);
}
On the other hand, the data access interface uses a query to paginate and implements a method that iterates progressively, 25,000 records at a time, until all of them have been read.
Results for this approach can be seen here:
http://www.arquitecturaysoftware.co/2013/10/laboratorio-1-iterar-millones-de.html

Viable ways to have an AutoNumber per user (or per entity)?

Let's say you have a web application that manages books for book sellers, and it is built on a multi-tenant database with a single books table that contains books from several book sellers.
Now let's say that each book seller really wants each of their books to have a unique number associated with it so they can look books up by that number, but it's important to them that the number is roughly consecutive for them. (It's OK if there are small breaks in the sequence due to deleted books and other events that cause an AutoNumber to get consumed but not used).
Obviously each book already has a unique number (primary key) associated with it that is generated via AutoNumber and is unique across book sellers. That is not what I am discussing here.
Let's just assume SQL-Server from here on, but the discussion applies equally to Oracle (except that Oracle uses Sequences that are independent of tables, and the current version of SQL Server must use a table to accomplish the same thing).
We want a number that increments safely in the context of a book seller. We want to maintain the benefits of using AutoNumber, but we want there to be one sequence per book seller. It seems like there are two options, and neither are very good:
Create one single-column table per book seller. This scares me because I can't think of another example of dynamically changing the schema (adding a new table whenever a new book seller is added to the system via the web application) in a web request. It also seems really heavyweight to have one table per book seller. I know a future version of SQL-server will support Sequences, but even that would still be a schema change at run time.
Roll your own auto-numbering behavior. This seems really risky because databases' built-in AutoNumber features take care of a lot of stuff for you, and giving that up is a big deal. Attempts to re-implement it yourself are probably error-prone and may cause poorer concurrency than the built-in AutoNumber.
Hopefully there are additional options that I'm missing. Has anyone successfully dealt with a similar situation? Thanks.
Is there a reason you couldn't have a 2 field table with:
BookSeller_ID, BookID
You wouldn't need to change schema as you add sellers, and it would be trivial to track per seller:
SELECT MAX(BookID)
FROM ThisTable
WHERE BookSeller_ID = 123
For additional info you could also add a Universal_BookID field that links to the unique ID referenced in your third paragraph.
EDIT:
To clarify, if you have sellers 1, 2 and 3 you could have a table like:
SellerID | BookID | BookUniversalID
---------+--------+----------------
       1 |      1 | 123
       2 |      1 | 456
       3 |      1 | 999
       1 |      2 | 1234
       1 |      3 | 8798
       1 |      4 | 999
       1 |      5 | 10000001
       3 |      2 | 123
       3 |      3 | 456
You keep track of which seller has which IDs assigned and which actual book each links to, and to determine the next ID for a seller just query:
SELECT MAX(bookid) FROM ThisTable WHERE SellerID = 1
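One caveat: two simultaneous inserts for the same seller could read the same MAX and collide. A sketch of serializing that step with lock hints inside a transaction, using the table from the example above:
BEGIN TRANSACTION;

DECLARE @next int;

SELECT @next = COALESCE(MAX(BookID), 0) + 1
FROM ThisTable WITH (UPDLOCK, HOLDLOCK)   -- blocks competing readers of this seller's rows
WHERE SellerID = 1;

INSERT INTO ThisTable (SellerID, BookID, BookUniversalID)
VALUES (1, @next, 123);

COMMIT TRANSACTION;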
DENSE_RANK works in SQL Server and Oracle.
Assuming your table looks vaguely like this:
CREATE TABLE dbo.BOOKS
(
internal_book_id int identity(1,1) primary key
, seller_id int NOT NULL
, title varchar(50) NOT NULL
)
Whenever you present the identity value to the seller, use the dense_rank() function to generate the surrogate values.
CREATE VIEW dbo.BOOK_TO_SELLER_MAP
AS
SELECT
    B.*
    , DENSE_RANK() OVER (PARTITION BY B.seller_id ORDER BY B.internal_book_id ASC) AS unique_book_id_for_seller
FROM
    dbo.BOOKS B
(A view cannot take a parameter, so filter on seller_id, e.g. WHERE seller_id = @sellerId, in the query that selects from the view.)
The combination of seller_id and the generated id should always map back to the same true id (assuming no physical deletes).
Demo code
;
WITH BOOKS (internal_book_id, seller_id, title)
AS
(
SELECT 1, 100, 'Secret of NIMH'
UNION ALL SELECT 2, 400, 'Once and Future King'
UNION ALL SELECT 7, 88, 'Microsoft SQL Server 2008'
UNION ALL SELECT 8, 100, 'Bonfire of the Vanities'
UNION ALL SELECT 9, 100, 'Canary Row'
UNION ALL SELECT 10, 400, '1916'
UNION ALL SELECT 11, 100, 'The Picture of Dorian Gray'
UNION ALL SELECT 12, 88, 'The Disasters of War'
)
, BOOK_TO_SELLER_MAP AS
(
SELECT
B.*
, DENSE_RANK() OVER (PARTITION BY B.seller_id ORDER BY B.internal_book_id ASC) AS unique_book_id_for_seller
FROM
BOOKS B
)
SELECT
*
FROM
BOOK_TO_SELLER_MAP V
ORDER BY
V.seller_id
, V.unique_book_id_for_seller
Results
internal_book_id | seller_id | title                       | unique_book_id_for_seller
-----------------+-----------+-----------------------------+--------------------------
               7 |        88 | Microsoft SQL Server 2008   |                         1
              12 |        88 | The Disasters of War        |                         2
-----------------+-----------+-----------------------------+--------------------------
               1 |       100 | Secret of NIMH              |                         1
               8 |       100 | Bonfire of the Vanities     |                         2
               9 |       100 | Canary Row                  |                         3
              11 |       100 | The Picture of Dorian Gray  |                         4
-----------------+-----------+-----------------------------+--------------------------
               2 |       400 | Once and Future King        |                         1
              10 |       400 | 1916                        |                         2
OMG Ponies is correct that sequences are the only correct way to achieve this. There isn't really another viable option.
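For reference, on SQL Server 2012 and later (or Oracle) that looks roughly like one sequence per seller, created when the seller signs up; a sketch with hypothetical object and column names (seller_book_number is an extra column, not part of the BOOKS definition above):
-- Run once when seller 123 is added
CREATE SEQUENCE dbo.seq_books_seller_123 START WITH 1 INCREMENT BY 1;

-- Use it whenever one of that seller's books is inserted
INSERT INTO dbo.BOOKS (seller_id, seller_book_number, title)
VALUES (123, NEXT VALUE FOR dbo.seq_books_seller_123, 'Some Title');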
