How to check the latest value of a standard sequence in the ODI repository

I have a mapping that uses a standard sequence in ODI (Oracle Data Integrator), and I want to reset that sequence's value.
The documentation says this standard sequence is stored in the repository, but I'm not sure which one. Could you please advise in which repository (MASTER, WORK, or RUN) this sequence can be viewed and modified, without changing anything at the mapping level?

Standard sequences are stored in the WORK and RUN repositories; I don't think you can reset them from ODI Studio.
Specific sequences are stored in a database table that you specify.
Both types should be avoided where possible because they perform poorly; it is far better to use the native sequences provided by your database when available.
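As a minimal sketch of the native alternative (the schema and sequence names are invented for illustration), the database-side object in Oracle looks like this, and an ODI mapping can then reference it as a native sequence:

    -- created once in the target database (Oracle syntax; names are hypothetical)
    CREATE SEQUENCE app_schema.order_seq START WITH 1 INCREMENT BY 1;
    -- how a mapping draws the next value
    SELECT app_schema.order_seq.NEXTVAL FROM dual;

Resetting then becomes a plain DDL operation on the database (for example, dropping and recreating the sequence) rather than something buried in the ODI repository.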

Related

Granting access to master.sys.xp_dirtree SQL

There may be an answer to this somewhere else on here, but I can't find it.
My organization uses an EHR called TIER that has a SQL back-end. One of the features of the EHR is that you can "scan" a document to a folder on the network named for the unique ID of a row in a table on the server. Then, from the EHR, you can open a record from the table, and it links to the documents in the folder with the same unique ID.
An example may be helpful - In the EHR I create a document (a row in the ScannedFormTable) with unique ID of 100. I then "scan" (basically attaching or copying) a pdf or other document into a folder on the network (say D:\ScannedDocuments) with the name of 100, so abc.pdf is now in D:\ScannedDocuments\100. Then from the document in the EHR, I can open the pdf. However, without opening the document to check I can't see if there is any file in the ...\100 folder.
Through some googling, I found that using master.sys.xp_dirtree (an "undocumented" procedure, I think) I can have the EHR "see" the names of files "attached" to the documents. The issue is that I can run this stored procedure from SSMS but can't from the EHR itself. I am trying to figure out a way to grant the EHR's user permission to run the procedure, rather than running a script in the background on the server at regular intervals.
Any insights would be greatly appreciated. As you may have noticed from the number of quotation marks used, I am a self-taught SQL user who is better at googling than at understanding the intricacies of the language.
I found that using master.sys.xp_dirtree (an "undocumented" procedure, I think) I can have the EHR "see" the names of files "attached" to the documents.
I think you are confusing different things. Yes, you can (and T-SQL can) use that undocumented and unsupported procedure. However, your EHR is a system designed to provide specific functionality. It isn't clear what you are trying to accomplish, but getting your system to do something depends on that system, its features, and what you are actually trying to achieve.
I'll add that T-SQL is not designed to access the filesystem natively, hence the use of undocumented extended procedures. If you are simply trying to traverse this ScannedFormTable table and verify that there is some sort of file in the appropriate location, you might find the task easier to implement in a general-purpose programming language.
If you want better suggestions, it would help to explain your goal. And consider carefully what you are trying to do, since it is quite easy to open a security hole by altering permissions.
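That said, if you do decide to grant it, a minimal T-SQL sketch of both the permission and the call might look like the following ([EhrAppUser] is a placeholder; the EHR's login needs a user mapped in master, since xp_dirtree lives there):

    USE master;
    GRANT EXECUTE ON xp_dirtree TO [EhrAppUser];

    -- xp_dirtree takes (path, depth, include-files flag); with the flag set,
    -- it returns subdirectory, depth, and a file-indicator column
    CREATE TABLE #dir (subdirectory nvarchar(512), depth int, [file] int);
    INSERT INTO #dir EXEC master.sys.xp_dirtree N'D:\ScannedDocuments\100', 1, 1;
    SELECT subdirectory FROM #dir WHERE [file] = 1;

Whether your EHR can issue a batch like this is, again, a question about the EHR, not about SQL Server.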

DB Comparison tool that I can schedule

I'm after a DB Comparison tool for SQL Server that allows me to do the following:
Schedule a comparison to happen on a recurring schedule
Email me the results (in a nice readable format and not the generated script)
Allow me to exclude/include certain object names (for example exclude table names containing %test%. That's not a real example but there is a good reason why that would come in useful.)
As well as the obvious:
Have the usual options for ignoring things like comments, identity seeds etc
Options for selecting different types of objects
If it were free, or at least didn't cost a fortune, that would be an extra bonus, of course.
I have tried out Red Gate's SQL Compare and also the built-in DB comparison in Visual Studio, but neither seems able to do the first three points above. I also looked at other tools recommended in various threads on here, but again their feature lists don't mention the three points above.
One option I found is RedGate's SQL Comparison SDK with which I think I could write something to do what I want.
I just wanted to investigate tools that might do all of the above out of the box.
Thank you!
SQL Compare Pro comes with a command line, which will be easier to set up than the SDK. If you call this via the Windows Scheduler or in an Agent Job you can achieve what you're looking for.
An example of how to invoke the command line from PowerShell can be found here:
http://www.simple-talk.com/sql/database-administration/auditing-ddl-changes-in-sql-server-databases/
This article also covers how to send an email from PowerShell. SQL Compare can also be passed a filter using the /filter switch to exclude objects based on various rules.
http://www.red-gate.com/supportcenter/Content/SQL_Compare/help/10.0/sc_cl_Switches_in_the_cl
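Putting it together, a rough sketch of a scheduled run (the server, database, filter-file, and report names are all made up here, and the exact switch spellings should be checked against the documentation above for your version):

    sqlcompare /Server1:DEV01 /Database1:Sales /Server2:QA01 /Database2:Sales /filter:ExcludeTest.scpf /Report:SalesDiff.html /ReportType:Html

Wrapped in a PowerShell script run from an Agent Job, the HTML report can then be attached to the notification email.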
Do please email support@red-gate.com should you have trouble getting this working.
I don't think any tool would do all of this out of the box. Have you had a chance to look at sp_CompareDB? I had a similar requirement and ended up writing my own routine based on it.
http://www.sql-server-performance.com/2001/database-comparison-sp/

Best tool to document T-SQL *source* files?

At work, the database is not documented at all. Furthermore, the stored procedures, functions, and views are all encrypted, which rules out a lot of the tools that document these objects for you. All I have are the plain .SQL files that generate the database, schemas, tables, functions, and all.
I'd like to know, is there a tool that can read these files and generate a Doxygen-like documentation? Preferably open-source or freeware.
I found IzzySoft's HyperSQL and SourceForge's PLDoc project, which do something very close to what I'd need, though both seem to be very PL/SQL-specific. I want something that reads SQL source files (and understands T-SQL's idiosyncrasies), parses them, and gets me:
List of SPs, UDFs, etc. defined within each file
List of objects (both tables/views and procs/functions) each object depends on (directly and, if possible, also indirectly)
Calling and dependencies graphs (i.e. what calls what and is called by what)
If possible, when an SP uses a table/view, how's it using it (INSERT/DELETE/UPDATE/SELECT/mix???)
I've already developed a tiny Perl script that minimally parses these files, attempting to cover the first point, but it's just a hack and lacks a lot of polish. I'm sure there must be a tool out there that does the job; I want to believe I won't have to code it myself.
Thanks in advance,
Joe
We use Red Gate SQL Doc to generate ours.
However, it works from a database, not files: it's easier to read everything from the system tables (permissions, dependencies, datatypes, etc.) than to parse scripts. Parsing scripts is what the DB engine does...
Can you not generate an empty DB from the source files (removing WITH ENCRYPTION) and generate the documentation from that?
Or decrypt if you have sa rights?

Web-App : Keeping trace of the version of the application in database?

We are building a web app that is shipped to several clients as a Debian package. Each client runs their own server, but the updates and support are done by us.
We make regular releases of the product, each with a clean version number. Most users get an automatic update (via Puppet); some others don't.
We want to keep track of the application's version (so that the user can check it in an "about" section, and so that our support can help the user more accurately).
We plan to store the version of the code and the version of the database in our database, and to keep the info up to date automatically.
Is that a good idea?
The other alternative we see is a file.
EDIT: The code and database schema are updated together (if we update to version x.y.z, both code and database go to x.y.z).
Using a table to track every change to a schema, as described in this post, is a good practice that I'd definitely suggest following.
For the application, if it is shipped independently of the database (which is not clear to me), I'd embed a file in the package (and thus not use the database to store the version of the web application).
If not, i.e. if the application and database versions are kept in sync, then I'd just use the information stored in the database.
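For illustration, a minimal version table of the kind described above might look like this (generic SQL; the names and columns are invented, so adjust them to your engine and needs):

    CREATE TABLE schema_version (
        version     VARCHAR(20)  NOT NULL,   -- e.g. '2.3.1'
        applied_at  TIMESTAMP    NOT NULL,
        description VARCHAR(200)             -- what the upgrade did
    );

    -- each upgrade script appends one row
    INSERT INTO schema_version (version, applied_at, description)
    VALUES ('2.3.1', CURRENT_TIMESTAMP, 'add invoicing tables');

    -- the "about" page reads the most recent row (first row of this result)
    SELECT version FROM schema_version ORDER BY applied_at DESC;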
As a general rule, I would have both the DB version and the application version. The question here is how "private" the database is. If the database is private to the application and the user never modifies the schema, then your initial solution is fine. In my experience, a database that accumulates several years of data stops being private: users add a table or two and access the data with some reporting tool, and from that point on the database is no longer used exclusively by the application.
UPDATE
One more thing to consider is the case where users (or the application) cannot connect to the DB and call support. For that case it would be better to have the version, etc. stored on the file system.
Assuming there are no compelling reasons to go with one approach or the other, I think I'd go with keeping them in the database.
I'd put them in both places. Then, when running your "about" function, you quickly check that they are the same; if they aren't, you can display extra information about the version mismatch. If they're the same, you only need to display one of them.
I've generally found users can do "clever" things like revert databases back to old versions by manually copying directories around "because they can" so defensively dealing with it is always a good idea.

What are the best practices for database scripts under code control

We are currently reviewing how we store our database scripts (tables, procs, functions, views, data fixes) in Subversion, and I was wondering if there is any consensus as to the best approach?
Some of the factors we'd need to consider include:
Should we check in 'Create' scripts, or check in incremental changes as 'Alter' scripts?
How do we keep track of the state of the database for a given release
It should be easy to build a database from scratch for any given release version
Should a table exist in the database listing the scripts that have run against it, or the version of the database etc.
Obviously it's a pretty open ended question, so I'm keen to hear what people's experience has taught them.
After a few iterations, the approach we took was roughly like this:
One file per table and per stored procedure. Also separate files for other things like setting up database users, populating look-up tables with their data.
The file for a table starts with the CREATE command and a succession of ALTER commands added as the schema evolves. Each of these commands is bracketed in tests for whether the table or column already exists. This means each script can be run in an up-to-date database and won't change anything. It also means that for any old database, the script updates it to the latest schema. And for an empty database the CREATE script creates the table and the ALTER scripts are all skipped.
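A minimal T-SQL sketch of the existence tests just described (the table and column names are invented):

    -- create the table only if it is missing
    IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'Customer')
        CREATE TABLE Customer (Id INT NOT NULL PRIMARY KEY);

    -- later schema evolution: add the column only if it is missing
    IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
                   WHERE TABLE_NAME = 'Customer' AND COLUMN_NAME = 'Email')
        ALTER TABLE Customer ADD Email NVARCHAR(256) NULL;

Run against any database, old or new, the script converges on the same schema without erroring.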
We also have a program (written in Python) that scans the directory full of scripts and assembles them into one big script. It parses the SQL just enough to deduce the dependencies between tables (based on foreign-key references) and orders them appropriately. The result is a monster SQL script that gets the database up to spec in one go. The script-assembling program also calculates the MD5 hash of the input files and uses that to update a version number that is written into a special table by the last script in the list.
Barring accidents, the result is that the database script for a given version of the source code creates the schema that code was designed to interoperate with. It also means that there is a single (somewhat large) SQL script to give to the customer to build new databases or update existing ones. (This was important in this case because there would be many instances of the database, one for each of their customers.)
There is an interesting article at this link:
https://blog.codinghorror.com/get-your-database-under-version-control/
It advocates a baseline 'create' script followed by checking in 'alter' scripts and keeping a version table in the database.
The upgrade script option
Store each change to the database as a separate SQL script. Store each group of changes in a numbered folder. Use a script to apply the changes one folder at a time, and record in the database which folders have been applied (a minimal sketch of that bookkeeping follows the pros and cons below).
Pros:
Fully automated, testable upgrade path
Cons:
Hard to see full history of each individual element
Have to build a new database from scratch, going through all the versions
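A sketch of the bookkeeping table this scheme relies on (names are invented):

    CREATE TABLE applied_change_set (
        folder     VARCHAR(50) NOT NULL PRIMARY KEY,  -- e.g. '0007'
        applied_at DATETIME    NOT NULL
    );
    -- the upgrade runner applies any numbered folder not present here,
    -- then inserts a row for it in the same transaction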
I tend to check in the initial create script. I then have a DbVersion table in my database and my code uses that to upgrade the database on initial connection if necessary. For example, if my database is at version 1 and my code is at version 3, my code will apply the ALTER statements to bring it to version 2, then to version 3. I use a simple fallthrough switch statement for this.
This has the advantage that when you deploy a new version of your application, it will automatically upgrade old databases and you never have to worry about the database being out of sync with the software. It also maintains a very visible change history.
This isn't a good idea for all software, but variations can be applied.
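Expressed directly in T-SQL rather than in application code, the stepping logic looks roughly like this (the version numbers and DDL are invented, and in practice each step would run in its own batch and transaction):

    DECLARE @v INT = (SELECT Version FROM dbo.DbVersion);

    IF @v < 2
    BEGIN
        CREATE TABLE dbo.AuditLog (Id INT IDENTITY PRIMARY KEY, Entry NVARCHAR(400)); -- v2 change
        UPDATE dbo.DbVersion SET Version = 2;
        SET @v = 2;
    END

    IF @v < 3
    BEGIN
        ALTER TABLE dbo.Customer ADD Email NVARCHAR(256) NULL; -- v3 change
        UPDATE dbo.DbVersion SET Version = 3;
        SET @v = 3;
    END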
You could get some hints by reading how this is done with Ruby On Rails' migrations.
The best way to understand this is probably to just try it out yourself, and then inspecting the database manually.
Answers to each of your factors:
Store CREATE scripts. If you want to check out version x.y.z, it'd be nice to simply run your create script to set up the database immediately. You could add ALTER scripts as well to go from the previous version to the next (e.g., you commit version 3, which contains a version 3 CREATE script and a version 2 → 3 ALTER script).
See the Rails migration solution. Basically they keep the table version number in the database, so you always know.
Use CREATE scripts.
Using version numbers would probably be the most generic solution — script names and paths can change over time.
My two cents!
We create a branch in Subversion and all of the database changes for the next release are scripted out and checked in. All scripts are repeatable so you can run them multiple times without error.
We also link the change scripts to issue items or bug IDs, so we can hold back a change set if needed. We then have an automated build process that looks at the issue items we are releasing, pulls the change scripts from Subversion, and creates a single SQL script file with all of the changes sorted appropriately.
This single file is then used to promote the changes to the Test, QA, and Production environments. The automated build process also creates database entries documenting the version (branch plus build ID). We think this is the best approach for enterprise developers. More details on how we do this can be found HERE.
The create script option:
Use create scripts that will build the latest version of the database from scratch, empty except for the default lookup data.
Use standard version-control techniques to store, branch, and tag versions and to view the history of your objects.
When upgrading a live database (where you don't want to lose data), create a blank second copy of the database at the new version and use a schema-comparison tool (such as Red Gate's SQL Compare, mentioned above) to generate the upgrade script.
Pros:
Changes to files are tracked in a standard source-code like manner
Cons:
Reliance on manual use of a 3rd party tool to do actual upgrades (no/little automation)
Our company checks them in simply because someone decided to put it in some SOX document that we do. It makes no sense to me at all, except possibly as a reference document. I can't see a time we'd pull them out and try to use them again, and if we did, we'd have to know which one ran first and which to run after which. Backing up the database is much more important than keeping the ALTER scripts.
For every release we need to deliver one update.sql file containing all the new table scripts, ALTER statements, new or modified packages, roles, etc. This file is used to upgrade the database from version 1 to version 2.
Whatever we include in that update.sql file must also go into the individual per-object files: an added column is merged into the table's script (the table script is modified, rather than the ALTER statement being appended after the CREATE TABLE in the file), and the same goes for new tables, roles, and so on.
So whenever a user wants to upgrade, he uses the update.sql file. If he wants to build from scratch, he uses build.sql, which already contains all of the above statements; this keeps the database in sync either way.
In my case, I built a shell script for this job: https://github.com/reduardo7/db-version-updater
How to do this is an open question. In my case I am trying to create something simple that is easy for developers to use, and I do it under the following scheme.
Things I tested:
File-based script handling in Git using GitLab CI: it does not work; collisions are created, the administration part has to be done by hand in case of disaster, and the development part is too complicated.
Permissions and access via MySQL clients: there is no traceability of changes to the database, and the transition to production is manual.
The programs mentioned here: they require uploading the structures and many adaptations, and you usually end up doing change control by hand just the same.
Repository usage: I could not control the DRP part, I could not properly control the backups, and I don't think it is a good idea to keep the backups on the same server; it also generates high lags in the process.
This is what worked best:
Manage permissions per user and generate traceability of everything that is sent to the database
Multi-platform
Use of development, QA, and production databases
Always back up before each modification
Manage an open repository for change control
Multi-server
Deactivate/activate access to the web page or app through endpoints
The initial project is at:
https://hub.docker.com/r/arelis/gitdb