Print Statements from SQL file in command prompt - sybase

I am executing a SQL file, containing a basic create table and a lot of insert statements, from the command prompt. Since there is a huge number of inserts, I want to track how many have been done. To do that I want to add a few statements marking the start and end of certain sections of the SQL file, and I want these statements printed in the command prompt itself. Is there any way I can do this?

Either print or message will do the trick. You can follow the examples inside each link.
In some specific cases you can also simply do SELECT "Some custom message...";
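A minimal sketch of the idea, assuming Sybase ASE run through isql (the table and insert values are placeholders): interleave print statements between batches, so each marker is echoed to the console as that batch executes.

```sql
-- Hypothetical table; the section markers print as each batch runs.
print "Starting inserts: section 1"
go

insert into my_table (id, name) values (1, 'a')
insert into my_table (id, name) values (2, 'b')
go

print "Section 1 done, starting section 2"
go
```

Note the `go` after each `print`: output is flushed per batch, so without batch separators all the messages would appear together at the end rather than as progress markers.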

Related

SQL Query to read a text file and display only selected contents from that

I am working on something which requires me to run an SQL query to read a text file from a path, but it has to display only a few lines based on my conditions/requirements. I have read about using ROWSET/BULK copy, but it copies the entire file and I need only certain data from the file.
Ex:
Line 1 - Hello, Good Morning.
Line 2 - Have a great day ahead.
Line 3 - Phone Number : 1112223333 and so on.
So, if I read this file and give the condition as "1112223333", it should display only the lines consisting of "1112223333".
NOTE: It should display the entire line of the matched case/condition
Is it possible to achieve this using an sql query? If so then please help me with this.
Thanks in advance.
Unfortunately what you're trying to do doesn't work with ROWSET. There is no way to apply a filter at read time; you're stuck with reading in the entire file. You can, of course, read into a temp table and then delete the rows you don't need. This gives you the desired end result, but you still take the hit of reading in the entire file.
You may be able to generate a script to filter the file server side and trigger that with xp_cmdshell but you'd still need to take the performance hit somewhere. While this would be lower load on the SQL server, you'd just be pushing the processing elsewhere, and you'd still have to wait for the processing to happen before you could read the file. May be worth doing if the file is on a separate server and network traffic is an issue. If the file is on the same server, unless SQL is completely bogged down, I can't see an advantage to this.
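A sketch of the read-then-filter approach described above, assuming SQL Server syntax (the file path is a placeholder, and the file is assumed to contain no tab characters so each line lands in the single column):

```sql
-- Load every line of the file into a temp table, then keep only matches.
CREATE TABLE #lines (line varchar(max));

BULK INSERT #lines
FROM 'C:\data\sample.txt'          -- placeholder path
WITH (ROWTERMINATOR = '\n');

DELETE FROM #lines
WHERE line NOT LIKE '%1112223333%';

SELECT line FROM #lines;           -- only the matching lines remain

DROP TABLE #lines;
```

The whole file is still read into the server, as the answer warns; the filter is applied only after the load.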

SSIS - Stored Procedure Output Messages and Results to Text

We currently have a process that calls SQLCMD in a shell script and outputs the results of a stored procedure to a text log file. The stored procedure does multiple Update, Insert, and Select statements, and we capture all the messages and results to a text file, partly to have a Select statement that shows the table before and after it is updated. Now we are converting to SSIS and would like to capture both the results and messages in a text file.
I have 2 questions: Is there a way to do this without calling SQLCMD in SSIS, possibly using an Execute SQL or Data Flow task? If not, what is the best practice for capturing changes? (I see that I need Enterprise Edition for Change Data Capture, so that doesn't work for us.)
Edit (more explanation):
I have a stored procedure that does 10 updates in a row. Before each update I want to see what the table looks like for that specific update query, by selecting the data out of it with the same parameters as the update query. Each update does something different, but one may do something to a record that I did not expect, and this will allow me to pinpoint the exact problem. The best idea suggested is triggers; although they may be slow, they can be set up to capture the changes that I need.

Pentaho rows to variables

I am trying to create a transformation which takes values from a table input (let's say 10 rows) and in turn creates variables from the values in those rows. For each row in the original set, I then need to run a new transformation using the variables.
How can I loop through a bunch of rows, one by one, reading each into variables to be used later in a transformation of its own?
Looping in PDI is a bit complicated. You can use the following procedure:
1. Create a job in PDI. First execute a transformation which reads or generates the rows you need, and use the "Copy rows to result" step.
2. After that, create an Execute Job step in the first job. Here you have to check the "Execute for every input row" option in the "Advanced" settings of the job entry. In this job you create and execute the final transformation, which transforms your data.
3. In this transformation, use the "Get rows from result" step. Here you can finally read the variables you defined before.
Hope I could help you.
Best regards.

TRANSACT SQL error 216

I have a Transact SQL script that creates a table AND a table-valued function based on that table, which I invoke from another script using SQLCMD mode; for example:
:r .\MyTableScript.sql
All is working well, both at DB generation time and at execution time.
Now I have split the script into two files to separate the function, and I end up with something like this in the 'master script':
:r .\MyTableScript.sql
:r .\MyTableFunctionScript.sql
All works well to generate the database, however when I call the function I get error "216 Parameters were not supplied for the function".
This is weird.
If I call the function with the wrong number of parameters I get the correct message, something like '... invalid number of arguments ...'; if I call the function with arguments of the wrong type, then again I get the appropriate message about the wrong type for the arguments.
But when I call the function with the right arguments I get the aforementioned message.
I know that CREATE FUNCTION must be the first statement in a batch, and it is. I also tried with and without semicolons in the 'right' places.
Now, I put back the function into the same script file as the table itself (just after the DDL for the table) and regenerate the database and all is now working fine again.
I can leave it like this (the table and the function in one big script file), BUT I would prefer to split the table and the function scripts into two SQL script files.
What am I doing wrong?
Maybe you are passing the right number of arguments but one of them is NULL.
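One way to check that theory before the call (the function and parameter names are placeholders): inspect the arguments and only invoke the function when they are non-NULL.

```sql
-- If @param resolves to NULL, surface that explicitly instead of
-- letting the call fail with a confusing error.
DECLARE @param int = NULL;

IF @param IS NULL
    PRINT 'Argument is NULL - supply a value before calling the function';
ELSE
    SELECT * FROM dbo.MyTableFunction(@param);
```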

FileMaker not recognizing AppleScript command during loop

I'm getting an odd result from an AppleScript script being called from within FileMaker that I've not seen before, nor have I found a reference to it online, here or elsewhere.
The following script creates three new records and calls the FileMaker script "paste_into_container". The FM script "paste_into_container" is very simple: it pastes whatever is on the clipboard into a specific container field. When the script is called from within FileMaker, the "paste_into_container" subscript will only paste the contents of the clipboard into the last new record. The container field on the first two records is left blank.
It's almost as if the loop creates the new record, ignores the "paste_into_container" script, and then moves on to the next iteration of the loop.
The script works fine when called from ScriptEditor but fails when called from within Filemaker. The script will also work if I drop the repeat loop and create just one record.
I've tried increasing the delay added at the end of each loop but it does not make any difference whether it is 1 or 5 seconds.
I'm sure it's something simple, but I'm not seeing it, and after two days it's time for help (of one kind or another).
additional info:
Mac OS 10.6.8
FM 11 running through FM11 server
Thanks in advance
Phil
tell application "FileMaker Pro"
	show every record in table "Image_Info" in database myDB
	repeat with i from 1 to 3
		go to layout 1 of database myDB
		set myNewRecord to create new record in database myDB
		go to last record
		do script "paste_into_container"
		delay 1
	end repeat
end tell
There's a very good discussion on another forum specifically about this problem. When FileMaker moved from FM10 to FM11, they changed how the program handles requests from embedded AppleScripts. Essentially, since FM11, FileMaker treats AppleScript commands asynchronously rather than synchronously whenever the AppleScript calls an internal FileMaker script with the do script command. What this means is that FileMaker will allow the AppleScript to continue without waiting for feedback from FileMaker saying the internal FM script is complete. There is a workaround, but it is hardly elegant and does not seem universal. See link below.
As I understand it, this only affects embedded AppleScripts calling other local FileMaker scripts.
link