SQL equivalent of "using" for schemas? - sql-server

I'm working with a SQL Server DB that's got tables spread across multiple schemas (not my idea), so queries end up looking like this:
select col1, col2
from some_ridiculously_long_schema_name.table1 t1
inner join
another_really_long_schema_location.table2 t2
on...
... you get the idea.
This is a small inconvenience when I put queries into stored procs, etc., but when I'm doing ad hoc queries, it gets to be a real pain.
Is there some way I could "include" all the schemas I'm interested in, and have them automatically addressable? (LINQPad does this).
I'd love to be able to indicate something like this:
using some_ridiculously_long_schema_name, another_really_long_schema_location
... and then query away, with those schemas included in my address space.
If nothing like this exists, I'll look into synonyms, but I'd prefer to do this without having to add artifacts into the DB.

Red Gate sells a SQL tool that adds IntelliSense to SQL Server Management Studio. I've never tried it, but it might help cut down on the keystrokes: http://www.red-gate.com/products/SQL_Prompt/index.htm

I know how you feel. If you need to keep the schemas (for example, if you have the same table names in each) and you are consistently writing queries that join across them, the best suggestion I can offer is to shorten your schema names.
Low tech, and not what you wanted to hear, I'm sure.
Synonyms, as suggested above, only work at the object level (you can't have a synonym for a whole schema, as far as I know), so you would have to have a synonym for every table, view, stored proc, function, etc. that you wanted to use from outside your default schema.
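For illustration, a minimal sketch of that per-object workaround in T-SQL, reusing the schema and table names from the question (the join key is hypothetical):
create synonym table1 for some_ridiculously_long_schema_name.table1;
create synonym table2 for another_really_long_schema_location.table2;
-- after that, ad hoc queries can drop the schema prefixes:
select col1, col2
from table1 t1
inner join table2 t2 on t1.id = t2.id;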

No, it doesn't. Synonyms are the only way.

That would not work, because if you have Table1 in both schemas, how would you know which schema you meant?

Parse SQL scripts to find table dependencies and outputs

I currently have a large set of SQL scripts transforming data from one table to another, often in steps, like for example:
select input3.id as cid
, input4.name as cname
into #temp2
from input3
inner join input4
on input3.match = input4.match
where input3.regdate > '2019-01-01';
truncate table output1;
insert into output1 (customerid, customername)
select cid, cname from #temp2;
I would like to "parse" these scripts into their basic inputs and outputs
in: input3, input4
out: output1
(not necessarily this format, just this info)
To have the temporary tables falsely flagged would not be a problem:
in: input3, input4, #temp2
out: #temp2, output1
It is OK to take a little bit of time, but the more automatic, the better.
How would one do this?
Things I have tried include:
regexes (straightforward, but will miss edge cases, mainly falsely flagging tables in comments)
using an online parser to list the DB objects, then postprocessing by hand
solving it programmatically, but writing, for example, a C# program for this would cost too much time
I usually wrap the scripts' content into stored procedures and deploy them into the same database where the tables are located. If you are sufficiently acquainted with (power)shell scripting and regexes, you can even write code that will do this for you.
From this point on, you have some alternatives:
If you need a complete usage/reference report, or it's a one-off task, you can use sys.sql_expression_dependencies or other similar system views (a sketch follows below);
Create an SSDT database project from that database. Among many other things that make database development easier and more consistent, SSDT has "Find all references" functionality (Shift+F12 hotkey) which displays all references to a particular object (or column) across the code.
AFAIK neither of them sees through dynamic SQL, so if you have lots of it, you'll have to look elsewhere.
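For the first alternative, a minimal sketch of the dependency query, assuming the scripts were wrapped into stored procedures as described (note that #temp tables are not tracked by this view):
select
    o.name as referencing_procedure,
    d.referenced_schema_name,
    d.referenced_entity_name
from sys.sql_expression_dependencies d
inner join sys.objects o on o.object_id = d.referencing_id
where o.type = 'P'  -- stored procedures only
order by o.name, d.referenced_entity_name;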

I am struggling with migrating the temp tables (SQL server) to oracle

I am struggling with migrating temp tables from SQL Server to Oracle. Oracle generally discourages creating temporary tables inside stored procedures, whereas in SQL Server it is common to use temp tables to fetch small sets of records and manipulate them.
How can I overcome this issue? I have searched for online articles about migrating temp tables to Oracle, but they do not explain things as clearly as I had hoped.
I have found suggestions to use inline views, the WITH clause, or ref cursors instead of temp tables, and I am totally confused.
Please suggest in which cases I should use an inline view, the WITH clause, or a ref cursor.
This would help improve my knowledge and also help me do my job well.
As always thank you for your valuable time in helping out the newbies.
Thanks
Alsatham hussain
Like many questions, the answer is "it depends". A few things:
Oracle's "temp" table is called a GLOBAL TEMPORARY TABLE (GTT). Unlike most other vendors' temp tables, its definition is global. Scripts or programs in SQL Server (and others) will create a temp table, and that temp table disappears at the end of the session. This means the script or program can be rerun, or run concurrently by more than one user. However, this will not work with a GTT, since the GTT remains in existence at the end of the session, so the next run that attempts to create the GTT will fail because it already exists.
So one approach is to pre-create the GTT, just like the rest of the application tables, and then change the program to INSERT into the GTT rather than creating it.
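A minimal sketch of that approach (table and column names are hypothetical):
-- pre-created once, alongside the rest of the application tables
create global temporary table gtt_staging (
    id   number,
    name varchar2(100)
) on commit preserve rows;  -- or ON COMMIT DELETE ROWS, depending on the workflow

-- each run then inserts instead of creating the table
insert into gtt_staging (id, name)
select id, name
from source_table
where regdate > date '2019-01-01';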
As others have said, using a CTE (Common Table Expression) could potentially work, but it depends on the motivation for using the temp table in the first place. One of the advantages of a temp table is that it provides a "checkpoint" in a series of steps and allows stats to be gathered on intermediate temporary data sets in what is a complex set of processing. The CTE does not provide that benefit.
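For comparison, a sketch of the WITH-clause alternative (names again hypothetical); the intermediate result exists only for the duration of the single statement, so there is no object to manage and no stats to gather on it:
insert into target_table (customerid, customername)
with staged as (
    select id, name
    from source_table
    where regdate > date '2019-01-01'
)
select id, name
from staged;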
Others "intermediate" objects such as collections could also be used, but they have to be "managed" and do not really provide any of the advantages of being able to collect stats on them.
So, as I said at the beginning, your choice of solution will depend somewhat on the motivation for the original temp table in the first place.

How to systematically manage a Big List of Queries and Tables in SQL Server?

Suppose someone has to work on a lot of different SQL Server databases, each containing a lot of tables and queries/views.
After a period of time, it becomes very difficult to remember exactly which columns are present within a given table or view.
Please suggest a method by which one can keep a systematic list of all the tables and views present within a SQL Server database, along with the columns present within them.
Are there any add-on products or services available that help make this type of work systematic?
Currently I add comments to each query inside SQL Server to remind me of what the query is doing, but this method is not great. I am looking for better and more efficient methods.
Please share any ideas that you might have in this direction.
Thanks a lot
You may find the following useful for each database.
select s.name, s.type, c.name, s.refdate
from syscolumns c
inner join sysobjects s on s.id = c.id
where s.xtype in ('U', 'V')  -- user tables and views
order by s.refdate  -- use refdate for manual quick looks,
                    -- s.name for file output and long-term analysis
I output this to text files with the exact same format and check them into source control for each database. I even make comments about fields as things change. This is not part of the formal process, it is just sanity big picture version tracking independent of the formal deployments.
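If you'd rather avoid the legacy syscolumns/sysobjects compatibility views, a roughly equivalent query against the current catalog views might look like this (modify_date stands in for refdate; that substitution is an assumption, not a like-for-like replacement):
select o.name as object_name, o.type, c.name as column_name, o.modify_date
from sys.columns c
inner join sys.objects o on o.object_id = c.object_id
where o.type in ('U', 'V')
order by o.name, c.column_id;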

What's the best way to convert one Oracle table (data) to fill a slightly different Oracle table?

I have two Oracle tables, an old one and a new one.
The old one was poorly designed (more so than mine, mind you) but there is a lot of current data that needs to be migrated into the new table that I created.
The new table has new columns, different columns.
I thought of just writing a PHP script or something with a whole bunch of string replacement... clearly that's a stupid way to do it, though.
I would really like to be able to clean up the data a bit along the way as well. Some of it was stored with markup in it (ex: a "First Name" value wrapped in HTML tags), lots of blank space, etc., so I would really like to fix all that before putting it into the new table.
Does anyone have any experience doing something like this? What should I do?
Thanks :)
I do this quite a bit; you can migrate with a simple SELECT statement:
create table newtable as
select
    field1,
    trim(oldfield2) as field3,
    cast(field3 as number(6)) as field4,
    (select pk from lookuptable where value = o.field5) as field5
    -- ...and so on for the remaining columns
from oldtable o;
There's really very little you could do with an intermediate language like PHP that you can't do in native SQL when it comes to cleaning and transforming data.
For more complex cleanup, you can always create a SQL function that does the heavy lifting, but I have cleaned up some pretty horrible data without resorting to that. Don't forget that in Oracle you have DECODE, CASE statements, etc.
I'd check out an ETL tool like Pentaho Kettle. You'll be able to query the data from the old table, transform and clean it up, and re-insert it into the new table, all with a nice WYSIWYG tool.
Here's a previous question I answered regarding data migration and manipulation with Kettle:
Using Pentaho Kettle, how do I load multiple tables from a single table while keeping referential integrity?
If the data volumes aren't massive and you are only going to do this once, it will be hard to beat a roll-it-yourself program, especially if you have some custom logic that needs implementing.
The time taken to download, learn, and use a tool (such as Pentaho, etc.) will probably not be worth your while.
Coding a SELECT *, updating the columns in memory, and doing an INSERT will be quickly done in PHP or any other programming language.
That being said, if you find yourself doing this often, then an ETL tool might be worth learning.
I'm working on a similar project myself - migrating data from one model containing a couple of dozen tables to a somewhat different model of similar number of tables.
I've taken the approach of creating a MERGE statement for each target table. The source query gets all the data it needs, formats it as required, then the merge works out if the row already exists and updates/inserts as required. This way, I can run the statement multiple times as I develop the solution.
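A hedged sketch of one such per-table MERGE (all table and column names are hypothetical, not from the question):
merge into new_customers tgt
using (
    select old_id,
           trim(old_name) as clean_name
    from old_customers
) src
on (tgt.customer_id = src.old_id)
when matched then
    update set tgt.customer_name = src.clean_name
when not matched then
    insert (customer_id, customer_name)
    values (src.old_id, src.clean_name);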
Depends on how complex the conversion process is. If it is easy enough to express in a single SQL statement, you're all set; just create the SELECT statement and then do the CREATE TABLE / INSERT statement. However, if you need to perform some complex transformation or (shudder) split or merge any of the rows to convert them properly, you should use a pipelined table function. It doesn't sound like that is the case, though; try to stick to the single statement as the other Chris suggested above. You definitely do not want to pull the data out of the database to do the transform as the transfer in and out of Oracle will always be slower than keeping it all in the database.
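In case the complex route is ever needed, a rough sketch of the pipelined table function approach (types and names are made up for illustration):
create type newtable_row as object (id number, name varchar2(100));
/
create type newtable_tab as table of newtable_row;
/
create or replace function transform_rows return newtable_tab pipelined is
begin
    for r in (select id, name from oldtable) loop
        -- emit zero, one, or several output rows per input row as needed
        pipe row (newtable_row(r.id, trim(r.name)));
    end loop;
    return;
end;
/
insert into newtable (id, name)
select id, name from table(transform_rows);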
A couple more tips:
If the table already exists and you are doing an INSERT...SELECT statement, use the /*+ APPEND */ hint on the INSERT so that you are doing a bulk operation. Note that CREATE TABLE does this by default (as long as it's possible; you cannot perform bulk ops under certain conditions, e.g. if the new table is an index-organized table, has triggers, etc.).
If you are on 10.2 or later, you should also consider using the LOG ERRORS INTO clause to log rejected records to an error table. That way, you won't lose the whole operation if one record has an error you didn't expect.
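A minimal sketch of the LOG ERRORS clause (the error table is created once via DBMS_ERRLOG; table and column names are hypothetical):
begin
    dbms_errlog.create_error_log(dml_table_name => 'NEWTABLE');
end;
/
insert /*+ append */ into newtable (field1, field2)
select field1, trim(oldfield2)
from oldtable
log errors into err$_newtable ('migration run 1')
reject limit unlimited;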

How can I find out where a database table is being populated from?

I'm in charge of an Oracle database for which we don't have any documentation. At the moment I need to know how a table is getting populated.
How can I find out which procedure, trigger, or other source, this table is getting its data from?
Or even better, query the DBA_DEPENDENCIES view (or its USER_ equivalent). You should see which objects depend on the table and who owns them.
select owner, name, type, referenced_owner
from dba_dependencies
where referenced_name = 'YOUR_TABLE'
And yeah, you still need to look through those objects to see whether there is an INSERT happening in them.
Also this, from my comment above:
If it is not a production system, I would suggest raising a user-defined exception in a BEFORE INSERT trigger with some custom message, or locking the table against INSERTs and watching the applications that try inserting into it fail. But yeah, you might also get calls from many angry people.
It is quite simple ;-)
SELECT * FROM USER_SOURCE WHERE UPPER(TEXT) LIKE '%NAME_OF_YOUR_TABLE%';
In the output you'll have all procedures, functions, and so on that invoke your table called NAME_OF_YOUR_TABLE in their body.
NAME_OF_YOUR_TABLE has to be written in UPPERCASE because we are using UPPER(TEXT), so results such as Name_Of_Your_Table, NAME_of_YOUR_table, NaMe_Of_YoUr_TaBlE, and so on are all retrieved.
Another thought is to try querying v$sql to find a statement that performs the update. You may get something from the module/action (or, in 10g, program_id and program_line#).
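A hedged example of that v$sql probe (requires access to the V$ views, and only finds statements still cached in the shared pool; the table name is a placeholder):
select sql_id, module, action, program_id, program_line#, sql_text
from v$sql
where upper(sql_text) like '%INSERT%YOUR_TABLE%';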
DML changes are recorded in *_TAB_MODIFICATIONS.
Without creating triggers you can use LOG MINER to find all data changes and from which session.
With a trigger you can record SYS_CONTEXT variables into a table.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions165.htm#SQLRF06117
Sounds like you want to audit.
How about
AUDIT ALL ON ::TABLE::;
Alternatively, apply a DBMS_FGA policy on the table and collect the client, program, and user; maybe the call stack would be available too.
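A rough sketch of that fine-grained auditing setup (schema, table, and policy names are hypothetical):
begin
    dbms_fga.add_policy(
        object_schema   => 'APP_OWNER',
        object_name     => 'YOUR_TABLE',
        policy_name     => 'WHO_INSERTS',
        statement_types => 'INSERT');
end;
/
-- hits then show up in DBA_FGA_AUDIT_TRAIL
select db_user, os_process, sql_text, timestamp
from dba_fga_audit_trail
where policy_name = 'WHO_INSERTS';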
Late to the party!
I second Gary's mention of v$sql also. That may yield the quick answer as long as the query hasn't been flushed.
If you know it's in your current instance, I like a combination of what has been used above: if there is no dynamic SQL, xxx_Dependencies will work and work well.
Join that to xxx_Source to get that pesky dynamic SQL.
We are also bringing data into our dev instance using the SQL*Plus copy command (careful! deprecated!), but data can be introduced by imp or impdp as well. Check xxx_Directories for the directories blessed to bring data in/out.
