Question about the lag and over-by concept in Sybase

I have a table like the one below in my Sybase database:
ID,Col1,Col2
1,100,300
2,300,400
3,400,500
4,900,1000
I want a result like the one below, in Sybase only:
1,100,500 --- determined by cross-checking the values across rows
2,900,1000

Since you did not specify which database you're using, I'm assuming you're using Sybase ASE (rather than Sybase IQ or Sybase SQL Anywhere, which do support lag/lead etc.).
Also, it's not quite clear what you want, since you have not defined how the relation between the various rows and columns should be interpreted. But I'm guessing you're essentially hinting at a dependency graph from Col2 to Col1.
In ASE, you'll need to write this as a multi-step, loop-based algorithm in which you determine the dependency graph. Since you don't know how many levels deep the chain will run, you need a loop rather than a self-join, and you need to keep track of the intermediate result in a temporary table.
I can't go much further here, but that's the sort of approach you'll need; a rough sketch follows.
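As a very rough sketch only (the table name MyTable is an assumption, and I'm assuming the chaining rule is "one row's Col2 equals another row's Col1"), the loop could look something like this:

create table #chain (StartVal int, EndVal int)

-- seed with the chain heads: rows whose Col1 never appears as another row's Col2
insert into #chain (StartVal, EndVal)
select Col1, Col2
from MyTable t
where not exists (select 1 from MyTable p where p.Col2 = t.Col1)

declare @rows int
select @rows = 1
while @rows > 0
begin
    -- extend each chain by one link while a continuation row exists
    update #chain
    set EndVal = t.Col2
    from #chain, MyTable t
    where t.Col1 = #chain.EndVal
    select @rows = @@rowcount
end

select StartVal, EndVal from #chain

For the sample data this seeds (100,300) and (900,1000), then walks 300 -> 400 -> 500, ending with (100,500) and (900,1000).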

Related

Query to find list of tables and columns used in a Sybase stored procedure

I can get the table names with sp_depends, but how do I get the column names used in a procedure?
There is no way to get that information through supported SQL functionality. If you start looking into query-processing diagnostics (e.g. dumping out internal query trees), then this information could in principle be found, but that requires an understanding of the internal diagnostic output, which is very complex and not documented. You can certainly try to figure it out, and it may not be impossible, but you're on your own.
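For reference, sp_depends (which the question already mentions) at least lists the referenced objects; the procedure name here is just a placeholder:

-- lists the tables/views the procedure depends on, but not the individual columns
exec sp_depends 'my_proc'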

ADO - Can I edit results of a complex query with multiple join statements?

I'm working on a data conversion utility which can push data from one master database out to a number of different databases. The utility itself will have no knowledge of how data is kept in the destination (table structure), but I would like to allow writing a SQL statement that returns data from the destination using a complex SQL query with multiple join statements, as long as the data comes back in a standardized format (field names) that the utility can recognize in an ADO query.
What I would like to do is then modify the live data in this ADO query. However, since there are multiple join statements, I'm not sure if that's possible. I know that BDE, at least (though I've never used BDE), was very strict and required you to return all fields (*) and so on. I know ADO is more flexible, but I don't know quite how flexible it is in this case.
Is it supposed to be possible to modify data in a TADOQuery in this manner, when the results include fields from different tables? And even if so, suppose I want to append a new record to the end (TADOQuery.Append). Would it append to two different tables?
The actual primary table I'm selecting from has a complementary table joined on the same primary key field; one is a "Small" table (brief info) and the other is a "Detail" table (more info for each record in the Small table). So a typical statement would include something like this:
select ts.record_uid, ts.SomeField, td.SomeOtherField from table_small ts
join table_detail td on td.record_uid = ts.record_uid
There are also a number of other joins to records in other tables, but I'm not worried about appending to those ones. I'm only worried about appending to the "Small" and "Detail" tables - at the same time.
Is such a thing possible in an ADO Query? I'm willing to tweak and modify the SQL statement in any way necessary to make this possible. I have a bad feeling though that it's not possible.
Compatibility:
SQL Server 2000 through 2008 R2
Delphi XE2
Editing fields which have no influence on the joins is usually no problem.
Appending is trickier. You can limit the append to one of the tables with:
procedure TForm.ADSBeforePost(DataSet: TDataSet);
begin
  inherited;
  // tell ADO which of the joined tables should receive the insert/update
  TCustomADODataSet(DataSet).Properties['Unique Table'].Value := 'table_small';
end;
but without a Requery you won't get much further.
The better way would be setting the values via a stored procedure (e.g. in BeforePost), followed by Requery and Abort.
If your view were persistent, you would be able to use INSTEAD OF triggers, roughly as sketched below.
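A minimal sketch of the INSTEAD OF trigger idea, assuming a persistent view over the two tables from the question (the view and trigger names are made up; the table and column names follow the question):

CREATE VIEW v_small_detail AS
SELECT ts.record_uid, ts.SomeField, td.SomeOtherField
FROM table_small ts
JOIN table_detail td ON td.record_uid = ts.record_uid
GO

CREATE TRIGGER tr_v_small_detail_insert
ON v_small_detail
INSTEAD OF INSERT
AS
BEGIN
    -- split each appended row across both underlying tables
    INSERT INTO table_small (record_uid, SomeField)
    SELECT record_uid, SomeField FROM inserted

    INSERT INTO table_detail (record_uid, SomeOtherField)
    SELECT record_uid, SomeOtherField FROM inserted
END
GO

The application could then select from and append to v_small_detail as if it were a single table, which sidesteps the "which table does Append hit" problem on the client side.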
Jerry,
I encountered the same problem on Firebird, and from experience I can tell you that it can be done (up to a certain level of complexity) by using CachedUpdates. A very good resource is this one - http://podgoretsky.com/ftp/Docs/Delphi/D5/dg/11_cache.html. This article has the answers to all your questions.
I have abandoned the original idea of live ADO query updates, as it has become more complex than I can wrap my head around. The scope of the data push project has changed, and therefore this is no longer an issue for me, however still an interesting subject to know.
The new structure of the application consists of attaching multiple "Field Links" to various fields from the original set of data. Each of these links references the original field name and a SQL statement which is to be executed when that field is being imported. Multiple field links can be attached to a single field, and can therefore execute multiple statements, placing the value in various tables, etc. The end goal was an app with which I can easily and repeatedly export a common dataset from an original source to any outside source with a different data structure, without having to recompile the app.
However, the concept of cached updates was not appealing to me, simply because of the fact pointed out in the link in RBA's answer: the data can be changed in the database in the meantime. So I will instead integrate my own method of customizable data pushes.

Merging multiple Access databases into SQL Server

We have a program in which each user is given their own Access database. We'd like to merge these all together into a single SQL Server database.
The problem is that, using the SQL Server import/export wizard, the primary/foreign keys do not get updated. So for instance if one user has this table:
1 Apple
2 Banana
and another user has this:
1 Coconut
2 Cheeseburger
the resulting table looks like this:
1 Apple
2 Banana
1 Coconut
2 Cheeseburger
Similarly, anything that referenced Banana by its primary key (2) is now referencing both Banana and Cheeseburger, which will not make the vegans very happy.
Is there any way to automatically update the primary/foreign key references when importing, other than writing an extremely long and complex import-script?
If you need to keep them fully compartmentalized, you have to assign some kind of partitioning column to each table. Is there a reason you need your SQL Server to have the same referential integrity as Access? Are you just importing to SQL Server for read-only reporting? In that case, I would not bother with RI. The queries will all require a partitionid/siteid/customerid. You could enforce that for single-entity access by wrapping the tables with a table-valued UDF which requires the partitionid (see the sketch after this answer). For cross-site access, that doesn't work.
If you are just loading to SQL Server for reporting, I would also consider altering the data model to support reporting (i.e. a dimensional model is sometimes better than a normalized model) instead of worrying about transaction processing.
I think we need to know more about the underlying goals.
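A minimal sketch of the table-valued UDF idea (the table, column, and function names here are invented for illustration):

-- an inline table-valued function that forces callers to supply the partition key
CREATE FUNCTION dbo.FruitForSite (@SiteId int)
RETURNS TABLE
AS
RETURN
(
    SELECT FruitId, FruitName
    FROM dbo.Fruit
    WHERE SiteId = @SiteId
)
GO

-- single-entity access always goes through the partition key
SELECT FruitId, FruitName FROM dbo.FruitForSite(2)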
More information about the requirements is needed.
My basic question is: do you need to preserve the original record key? E.g. 1:apple in table T of user-database A; 1:coconut in table T of user-database B. Table T is assumed to have the same structure in all database instances. Reasons I can suppose you may want to preserve the original data: (a) you may have a requirement to reference the original data (perhaps for previous reporting), and/or (b) there may be a data dependency in the application itself.
If the answer is 'no,' then you are probably interested only in preserving all of the distinct data values. Let the SQL table build with a new key, and constrain the data column so that it contains unique values. This approach preserves the original table structure (but not the original key value or its 'location') and may suffice to meet your requirement.
If the answer is 'yes,' I do not see a way around creating an index that preserves a pointer to the original database and the key that was created in its table T. This approach would seem to require an application modification.
The best approach in this case is probably to split the incoming data into two tables: one to identify the database and original key, another to hold the distinct data values. For example: (database) table D has records such as 'A:1:a,' 'A:2:b,' 'B:1:c,' 'B:2:d,' 'B:15:a,' 'C:8:a'; (data) table T1 has records such as 'a:apple,' 'b:banana,' 'c:coconut,' 'd:cheeseburger', where 'A' describes the original database 'location,' 1 is the original key in location 'A,' and 'a' is a value that links records in table D to table T1. (Otherwise you have a lot of redundant data in the one table, e.g. A:1:apple, B:15:apple, C:8:apple.) Also, T1 has a structure similar to the original T and seems to be more directly useful in the application. A sketch of this split follows.
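A possible DDL sketch of that two-table split (all names and sizes are illustrative only):

-- distinct data values
CREATE TABLE T1 (
    DataKey   char(1) PRIMARY KEY,          -- 'a', 'b', ...
    DataValue varchar(50) NOT NULL UNIQUE   -- 'apple', 'banana', ...
)

-- original location of each record, pointing at the shared value
CREATE TABLE D (
    SourceDb    char(1) NOT NULL,            -- 'A', 'B', ...
    OriginalKey int NOT NULL,                -- key in the source Access table
    DataKey     char(1) NOT NULL REFERENCES T1 (DataKey),
    PRIMARY KEY (SourceDb, OriginalKey)
)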
Ended up creating an SSIS project for this. SSIS is a visual programming tool made by Microsoft (part of the Business Intelligence Development Studio that comes with SQL Server) designed for solving exactly these sorts of problems.
Why not let Access use its replication manager to merge the databases? This will allow you to identify the conflicts and resolve them before importing to SQL Server. I'm fairly confident it will retain the foreign key relationships. If I understand your situation correctly, and the databases are the same structure with different data, you could load the combined database to the application and verify the data before moving to SQL Server.
What version of Access are you using? Here's a link for Access 2000; adjust the search terms to fit your version.
http://technet.microsoft.com/en-us/library/cc751054.aspx

Is it possible to write a database view that encompasses one-to-many relationships?

So I'm not necessarily saying this is even a good idea if it were possible, since the schema of the view would be extremely volatile, but is there any way to represent a has-many relationship in a single view?
For example, let's say I have a customer that can have any number of addresses in the database. Is there any way to list out each column of each address with perhaps a number as a part of the alias (e.g., columns like Customer Id, Name, Address_Street_1, Address_Street_2, etc)?
Thanks!
Not really - you are essentially describing a dynamic pivot. It's possible to use OPENROWSET to get at a dynamically generated query, but whether that's advisable is hard to say without seeing more of the business case.
First, make a stored proc which does the dynamic pivot, like I did on the StackExchange Data Explorer.
Basically, you generate dynamic SQL which builds the column list. This can really only be done in a stored proc, which is fine for application calls.
But what if you want to re-use that in a lot of different joins or ad hoc queries?
Then have a look at this article: "Using SQL Server's OPENROWSET to break the rules".
You can now call your stored proc by looping back into the server and getting the results as a rowset - and this can be in a view (sketched at the end of this answer)!
The late Ken Henderson has some good examples of this in his excellent book: "The Guru's Guide to SQL Server Stored Procedures, XML, and HTML" (you got to love the little "Covers .NET!" on the cover which captures well the zeitgeist for 2002!).
He only covers the loopback part (with views and user-defined functions); the less verbose PIVOT syntax was not available until 2005, but pivots can also be generated using a CASE expression as a characteristic function.
Obviously, this technique has caveats (I can't even do this on our production server).
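A rough sketch of the loopback, assuming a dynamic-pivot proc named dbo.PivotCustomerAddresses (the proc name and connection string are placeholders, and the server must allow ad hoc distributed queries):

CREATE VIEW customer_addresses_pivoted AS
SELECT *
FROM OPENROWSET(
        'SQLNCLI',
        'Server=(local);Trusted_Connection=yes;',
        'EXEC dbo.PivotCustomerAddresses') AS p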
Yes - with something like the view below. Note that you need some column (assumed here as address_seq) to tell the first and second address apart; joining ADDRESS twice on customer_id alone would pair every address with every other address for the customer.
CREATE VIEW customer_addresses AS
SELECT t.customer_id,
t.customer_name,
a1.street AS address_street_1,
a2.street AS address_street_2
FROM CUSTOMER t
LEFT JOIN ADDRESS a1 ON a1.customer_id = t.customer_id AND a1.address_seq = 1
LEFT JOIN ADDRESS a2 ON a2.customer_id = t.customer_id AND a2.address_seq = 2
If you provided more info, it'd be easier to give you a better answer. It's possible you're looking to pivot data (turn rows into columns).
Simply put, no. Not without dynamically recreating the view every time you want to use it, at least.
But what you can do is predefine, say, 4 address columns in your view, and then populate them with the first four results of your one-to-many relation (see the sketch below). It's not quite the dynamic view you want, but it's also much more stable/usable in my opinion.
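One way to do that fixed-column version, assuming SQL Server 2005 or later and an address_id column to order by (both assumptions, since the question doesn't name the platform or columns):

SELECT c.customer_id,
       c.customer_name,
       MAX(CASE WHEN a.rn = 1 THEN a.street END) AS address_street_1,
       MAX(CASE WHEN a.rn = 2 THEN a.street END) AS address_street_2,
       MAX(CASE WHEN a.rn = 3 THEN a.street END) AS address_street_3,
       MAX(CASE WHEN a.rn = 4 THEN a.street END) AS address_street_4
FROM CUSTOMER c
LEFT JOIN (SELECT customer_id, street,
                  ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY address_id) AS rn
           FROM ADDRESS) a ON a.customer_id = c.customer_id
GROUP BY c.customer_id, c.customer_name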

Mass change datatype and rename of dependent stored procedure variables

I am in the process of optimising my database and I was thinking of changing the datatype for some columns from DATETIME to SMALLDATETIME on my tables.
Is there a system stored procedure that returns both the contents/code of a stored procedure and the dependent table, which would then allow me to do a join against a filtered list of tables?
Cheers!
EDIT 1:
I'm looking to programmatically rename the stored procedures, not track dependencies!
The built-in dependency tracking in SQL Server isn't very good for this type of work. Two tools come to mind though...
Red Gate SQL Dependency Tracker - Good for determining all the dependent code
Visual Studio for Database Developers - Contains TSQL Code Analysis which can identify if a piece of data is being treated as an incorrect type.
Red Gate has a free trial on their stuff, which might get you through this job.
I answered a similar question with a sample of a script I use to find text in stored procedures (and functions and views): "How to find data table column reference in stored procedures". It requires a bit of work, but might help you here.
If your dependencies in SQL Server are accurate, you can use sys.sql_dependencies with appropriate joins, for example:
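A hedged example of those joins (SQL Server 2005+; this relies on the recorded dependency metadata being up to date, and the datetime filter is just one way to narrow it to the columns you plan to change):

-- procedures that reference datetime columns, per recorded dependencies
SELECT DISTINCT
       OBJECT_NAME(d.object_id)           AS proc_name,
       OBJECT_NAME(d.referenced_major_id) AS table_name,
       c.name                             AS column_name
FROM sys.sql_dependencies d
JOIN sys.columns c
     ON c.object_id = d.referenced_major_id
    AND c.column_id = d.referenced_minor_id
JOIN sys.types t
     ON t.user_type_id = c.user_type_id
WHERE OBJECTPROPERTY(d.object_id, 'IsProcedure') = 1
  AND t.name = 'datetime'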
