I need to have a way to iterate through a database table without actually storing it in memory anywhere. I want to essentially read through the rows like an input iterator.
I've tried using cursors with a select statement (select * from table_name), but this retrieves the entire table and then feeds it back to me one row at a time. So this solution is no good. Instead, I need it to fetch each row only as I ask for it.
Any suggestions are greatly appreciated.
Thanks!
You'll just want to use a forward-only cursor. Your DB will need to support this. For details, see MSDN's How to: Use Cursors.
If you're using SQL Server, you can use a fast forward-only cursor (FAST_FORWARD), which adds further performance optimizations on top of a plain forward-only cursor.
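A minimal T-SQL sketch of the pattern (table and column names are illustrative):

-- FAST_FORWARD declares a forward-only, read-only cursor with
-- performance optimizations enabled; rows are fetched one at a time
DECLARE @id INT, @name VARCHAR(100)

DECLARE row_cursor CURSOR FAST_FORWARD FOR
    SELECT id, name FROM table_name

OPEN row_cursor
FETCH NEXT FROM row_cursor INTO @id, @name

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Process the current row here
    FETCH NEXT FROM row_cursor INTO @id, @name
END

CLOSE row_cursor
DEALLOCATE row_cursor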
Our company is setting up a new Snowflake instance, and we are attempting to migrate some processing currently being done in MS SQL Server. I need to migrate a table-valued SQL function into Snowflake. The source function has procedural logic in it, which to my knowledge is not allowed in Snowflake UDTFs. I have been searching for a workaround with no success.
To be as specific as I can, I need a function that will take a string for input, decode that string, and return a table with the keys and their corresponding values. I cannot condense all of the logic to split the string and decode the keys into one SQL statement, so Snowflake SQL UDTFs will not work.
I looked into whether a UDTF can call a procedure so I could simply return its result, but it does not look like that will work. Please let me know if there is any way to work around this.
I think a JavaScript UDTF is what you're looking for in Snowflake:
https://docs.snowflake.com/en/sql-reference/udf-js-table-functions.html
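A sketch of what that could look like for your case; the function name and the delimiters ('|' between pairs, '=' between key and value) are assumptions, so substitute your actual decoding logic:

CREATE OR REPLACE FUNCTION decode_string(s VARCHAR)
RETURNS TABLE (key VARCHAR, value VARCHAR)
LANGUAGE JAVASCRIPT
AS $$
{
    processRow: function (row, rowWriter, context) {
        // Procedural logic is allowed here, unlike in SQL UDTFs
        var pairs = row.S.split("|");
        for (var i = 0; i < pairs.length; i++) {
            var kv = pairs[i].split("=");
            rowWriter.writeRow({ KEY: kv[0], VALUE: kv[1] });
        }
    }
}
$$;

You would then call it like any other table function, e.g. SELECT * FROM TABLE(decode_string('a=1|b=2'));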
Funny, I just stumbled onto this as I'm running into the same thing myself. I found there is a SPLIT_TO_TABLE function that may be able to accomplish this. As Greg suggested, nesting this in a CTE combined with a JOIN may allow you to accomplish what you need to do.
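For instance, a minimal sketch (again assuming '|' between pairs and '=' between key and value):

-- SPLIT_TO_TABLE explodes the string into one row per pair;
-- SPLIT_PART then separates each pair into key and value
WITH pairs AS (
    SELECT value
    FROM TABLE(SPLIT_TO_TABLE('a=1|b=2|c=3', '|'))
)
SELECT SPLIT_PART(value, '=', 1) AS key,
       SPLIT_PART(value, '=', 2) AS value
FROM pairs;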
Basically, if I want to see what the data in a table/view looks like, I use
select top 1000 * from ...
But this isn't too efficient for complex views or badly indexed tables.
I really just want to see what the data in a table looks like, e.g. the format etc.
Is there a better way to do this?
I'm using SSMS 2017
Edit for clarification:
Badly written views are endemic throughout our databases, so whilst fixing these is the obvious answer, it's not really a realistic one.
I suppose I was hoping for a trick I wasn't aware of, because I understand using TOP puts some sort of ordering work into it.
If you highlight a table/view object in code and then press ALT + F1 in the SSMS IDE, it will execute the equivalent of sp_help 'object_name', where object_name is the name of the highlighted object. Maybe this can give you some quick information about the object you are interested in.
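You can also run the same thing directly, or query the metadata views when you only care about the shape of the object and don't want to touch its data at all (object names are illustrative):

-- Equivalent of the ALT + F1 shortcut
EXEC sp_help 'dbo.YourView'

-- Or read just the column names and types from the metadata,
-- which never executes the view or scans the table
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'YourView'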
I'm working on a data conversion utility which can push data from one master database out to a number of different databases. The utility itself will have no knowledge of how data is kept in the destination (table structure), but I would like to provide the ability to write a SQL statement that returns data from the destination using a complex query with multiple joins, as long as the data is in a standardized format (field names) that the utility can recognize in an ADO query.
What I would like to do is then modify the live data in this ADO query. However, since there are multiple join statements, I'm not sure if it's possible to do this. I know that BDE, at least (which I've never used), was very strict: you had to return all fields (*) and such. ADO, I know, is more flexible, but I don't know quite how flexible in this case.
Is it supposed to be possible to modify data in a TADOQuery in this manner, when the results include fields from different tables? And even if so, suppose I want to append a new record to the end (TADOQuery.Append). Would it append to two different tables?
The actual primary table I'm selecting from has a complementary table which is joined by the same primary key field; one is a "Small" table (brief info) and the other is a "Detail" table (more info for each record in the Small table). So, a typical statement would include something like this:
select ts.record_uid, ts.SomeField, td.SomeOtherField from table_small ts
join table_detail td on td.record_uid = ts.record_uid
There are also a number of other joins to records in other tables, but I'm not worried about appending to those ones. I'm only worried about appending to the "Small" and "Detail" tables - at the same time.
Is such a thing possible in an ADO Query? I'm willing to tweak and modify the SQL statement in any way necessary to make this possible. I have a bad feeling though that it's not possible.
Compatibility:
SQL Server 2000 through 2008 R2
Delphi XE2
Editing fields which have no influence on the joins is usually no problem.
Appending is trickier; you can limit the Append to one of the tables by setting the 'Unique Table' dynamic property:
procedure TForm.ADSBeforePost(DataSet: TDataSet);
begin
  inherited;
  // Tell ADO which base table of the joined result receives inserts/updates
  TCustomADODataSet(DataSet).Properties['Unique Table'].Value := 'table_small';
end;
but without a Requery you won't get much further.
The better way would be to set the values via a stored procedure, e.g. in BeforePost, then Requery and Abort.
If your view were persistent (defined in the database), you would be able to use INSTEAD OF triggers.
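A sketch based on the small/detail example above (the column lists are assumptions):

-- A persistent view over both tables
CREATE VIEW vw_small_detail AS
SELECT ts.record_uid, ts.SomeField, td.SomeOtherField
FROM table_small ts
JOIN table_detail td ON td.record_uid = ts.record_uid
GO

-- The trigger intercepts inserts against the view and splits each
-- row across the two base tables
CREATE TRIGGER tr_vw_small_detail_insert
ON vw_small_detail
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO table_small (record_uid, SomeField)
    SELECT record_uid, SomeField FROM inserted

    INSERT INTO table_detail (record_uid, SomeOtherField)
    SELECT record_uid, SomeOtherField FROM inserted
END
GO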
Jerry,
I encountered the same problem on Firebird, and from experience I can tell you that it can be done (with a bit of added complexity) by using CachedUpdates. A very good resource is this one: http://podgoretsky.com/ftp/Docs/Delphi/D5/dg/11_cache.html. This article has the answers to all your questions.
I have abandoned the original idea of live ADO query updates, as it has become more complex than I can wrap my head around. The scope of the data push project has changed, and therefore this is no longer an issue for me, however still an interesting subject to know.
The new structure of the application consists of attaching multiple "Field Links" to various fields from the original set of data. Each of these links references the original field name and a SQL statement which is to be executed when that field is being imported. Multiple field links can be on one single field, and can therefore execute multiple statements, placing the value in various tables, etc. The end goal was an app with which I can easily and repeatedly export a common dataset from an original source to any outside source with different data structures, without having to recompile the app.
However, the concept of cached updates was not appealing to me, simply for the fact pointed out in the link in RBA's answer: that data can be changed in the database in the meantime. So I will instead integrate my own method of customizable data pushes.
I have created a Web Service to send a bunch of information in to a PL/SQL procedure; however, one of the inputs is an array. What type do I use for this? I also want to put that array into a cursor after it comes in.
I don't have any experience with PL/SQL, so sorry if this doesn't apply. In MS SQL 2000/2005 there isn't a way to pass arrays into a procedure; I'm assuming PL/SQL has a similar limitation. The workaround I've used in the past is to pass in a delimited string (usually pipe-delimited, because commas were present in the data) and have a function that breaks a delimited string up into a table result with one row per value. Then inside your procedure, you just call your split function, passing it the delimited string, and you have a table result that you can do whatever you want with (cursor over it, join it to other tables, etc.).
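A sketch of such a split function for SQL Server 2000/2005 (dbo.fn_Split is a hypothetical name; SQL Server 2016 and later ship a built-in STRING_SPLIT):

CREATE FUNCTION dbo.fn_Split (@list VARCHAR(8000), @delim CHAR(1))
RETURNS @result TABLE (Value VARCHAR(8000))
AS
BEGIN
    DECLARE @pos INT
    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(@delim, @list)
        IF @pos = 0
        BEGIN
            -- No delimiter left: the remainder is the last value
            INSERT INTO @result (Value) VALUES (@list)
            SET @list = ''
        END
        ELSE
        BEGIN
            -- Take everything before the delimiter, then drop it
            INSERT INTO @result (Value) VALUES (LEFT(@list, @pos - 1))
            SET @list = SUBSTRING(@list, @pos + 1, LEN(@list))
        END
    END
    RETURN
END

Inside the procedure you would then write, e.g., SELECT Value FROM dbo.fn_Split(@input, '|') and cursor over or join against that result.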
EDIT: Just did a Google search for "PL/SQL table parameter" and got a few hits; might be worth investigating to see if any of those results can help you.
This link might be of use. Or you can do what the other poster said and basically serialize your data into a string, pass it, and then unserialize it on the other end.
When using multivalue parameters in SQL Reporting Services, is it more appropriate to implement the list filter with a filter on the dataset itself, with one on the data region control, or by changing the actual query that drives the dataset?
SSRS will support any scenario, so then I ask, is there a reason beyond the obvious of why this should be done at one level over another?
It makes sense to me that modifying the query itself and asking the RDBMS to handle the filtering would be most efficient but maybe I am missing something with respect to how the SSRS Data Processing Extension may handle this scenario?
You are correct. The way to go is to pass the parameters through to the database engine.
Reporting Services should ideally be used only to render content. The less data that you need to pass back to the client web browser, the faster the report will render.
You may find my answer to a similar post regarding using multi-value parameters to be of use.
Passing multiple values for a single parameter in Reporting Services
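For reference, the usual shape of that approach is to concatenate the selected values into one string on the SSRS side and split them apart again in the database. A sketch with illustrative names (dbo.fn_MVParam stands in for whatever split function you use):

-- SSRS dataset parameter expression (parameter name is illustrative):
--   =Join(Parameters!Region.Value, ",")
-- T-SQL side: @sRegions then arrives as e.g. 'North,South'
SELECT o.OrderID, o.Region
FROM dbo.Orders AS o
WHERE o.Region IN (SELECT Param FROM dbo.fn_MVParam(@sRegions, ','))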
Hope this helps but please feel free to pose any further questions you may have.
Cheers,
John
Using a table-valued UDF is a good approach, but there is still one issue: if the function is called in many places in the query, even inside an inner select, there can be a performance problem. You can resolve this by using a table variable (or a temp table):
DECLARE @Param TABLE (Value INT)
INSERT INTO @Param (Value)
SELECT Param FROM dbo.fn_MVParam(@sParameterString, ',')
...
WHERE someColumn IN (SELECT Value FROM @Param)
so the function is called only once.
Another thing: if you don't use a stored procedure but an embedded SQL query instead, you can just put the multi-value parameter straight into the query:
...
where someColumn IN(#Param)
...
Use the RDBMS to do the main filtering
SSRS provides filtering for the purposes of data-driven and/or dynamic display. It is especially useful for subreports, etc.